NASA Astrophysics Data System (ADS)
Dinç, Erdal; Kanbur, Murat; Baleanu, Dumitru
2007-10-01
Comparative simultaneous determination of chlortetracycline and benzocaine in the commercial veterinary powder product was carried out by continuous wavelet transform (CWT) and classical derivative transform (or classical derivative spectrophotometry). In this quantitative spectral analysis, two proposed analytical methods do not require any chemical separation process. In the first step, several wavelet families were tested to find an optimal CWT for the overlapping signal processing of the analyzed compounds. Subsequently, we observed that the coiflets (COIF-CWT) method with dilation parameter, a = 400, gives suitable results for this analytical application. For a comparison, the classical derivative spectrophotometry (CDS) approach was also applied to the simultaneous quantitative resolution of the same analytical problem. Calibration functions were obtained by measuring the transform amplitudes corresponding to zero-crossing points for both CWT and CDS methods. The utility of these two analytical approaches were verified by analyzing various synthetic mixtures consisting of chlortetracycline and benzocaine and they were applied to the real samples consisting of veterinary powder formulation. The experimental results obtained from the COIF-CWT approach were statistically compared with those obtained by classical derivative spectrophotometry and successful results were reported.
Application of Classical and Lie Transform Methods to Zonal Perturbation in the Artificial Satellite
NASA Astrophysics Data System (ADS)
San-Juan, J. F.; San-Martin, M.; Perez, I.; Lopez-Ochoa, L. M.
2013-08-01
A scalable second-order analytical orbit propagator program is being carried out. This analytical orbit propagator combines modern perturbation methods, based on the canonical frame of the Lie transform, and classical perturbation methods in function of orbit types or the requirements needed for a space mission, such as catalog maintenance operations, long period evolution, and so on. As a first step on the validation of part of our orbit propagator, in this work we only consider the perturbation produced by zonal harmonic coefficients in the Earth's gravity potential, so that it is possible to analyze the behaviour of the perturbation methods involved in the corresponding analytical theories.
Methods for Estimating Uncertainty in Factor Analytic Solutions
The EPA PMF (Environmental Protection Agency positive matrix factorization) version 5.0 and the underlying multilinear engine-executable ME-2 contain three methods for estimating uncertainty in factor analytic models: classical bootstrap (BS), displacement of factor elements (DI...
A new frequency approach for light flicker evaluation in electric power systems
NASA Astrophysics Data System (ADS)
Feola, Luigi; Langella, Roberto; Testa, Alfredo
2015-12-01
In this paper, a new analytical estimator for light flicker in frequency domain, which is able to take into account also the frequency components neglected by the classical methods proposed in literature, is proposed. The analytical solutions proposed apply for any generic stationary signal affected by interharmonic distortion. The light flicker analytical estimator proposed is applied to numerous numerical case studies with the goal of showing i) the correctness and the improvements of the analytical approach proposed with respect to the other methods proposed in literature and ii) the accuracy of the results compared to those obtained by means of the classical International Electrotechnical Commission (IEC) flickermeter. The usefulness of the proposed analytical approach is that it can be included in signal processing tools for interharmonic penetration studies for the integration of renewable energy sources in future smart grids.
Applying the method of fundamental solutions to harmonic problems with singular boundary conditions
NASA Astrophysics Data System (ADS)
Valtchev, Svilen S.; Alves, Carlos J. S.
2017-07-01
The method of fundamental solutions (MFS) is known to produce highly accurate numerical results for elliptic boundary value problems (BVP) with smooth boundary conditions, posed in analytic domains. However, due to the analyticity of the shape functions in its approximation basis, the MFS is usually disregarded when the boundary functions possess singularities. In this work we present a modification of the classical MFS which can be applied for the numerical solution of the Laplace BVP with Dirichlet boundary conditions exhibiting jump discontinuities. In particular, a set of harmonic functions with discontinuous boundary traces is added to the MFS basis. The accuracy of the proposed method is compared with the results form the classical MFS.
Numerical Asymptotic Solutions Of Differential Equations
NASA Technical Reports Server (NTRS)
Thurston, Gaylen A.
1992-01-01
Numerical algorithms derived and compared with classical analytical methods. In method, expansions replaced with integrals evaluated numerically. Resulting numerical solutions retain linear independence, main advantage of asymptotic solutions.
Juárez, M; Polvillo, O; Contò, M; Ficco, A; Ballico, S; Failla, S
2008-05-09
Four different extraction-derivatization methods commonly used for fatty acid analysis in meat (in situ or one-step method, saponification method, classic method and a combination of classic extraction and saponification derivatization) were tested. The in situ method had low recovery and variation. The saponification method showed the best balance between recovery, precision, repeatability and reproducibility. The classic method had high recovery and acceptable variation values, except for the polyunsaturated fatty acids, showing higher variation than the former methods. The combination of extraction and methylation steps had great recovery values, but the precision, repeatability and reproducibility were not acceptable. Therefore the saponification method would be more convenient for polyunsaturated fatty acid analysis, whereas the in situ method would be an alternative for fast analysis. However the classic method would be the method of choice for the determination of the different lipid classes.
Wunderli, S; Fortunato, G; Reichmuth, A; Richard, Ph
2003-06-01
A new method to correct for the largest systematic influence in mass determination-air buoyancy-is outlined. A full description of the most relevant influence parameters is given and the combined measurement uncertainty is evaluated according to the ISO-GUM approach [1]. A new correction method for air buoyancy using an artefact is presented. This method has the advantage that only a mass artefact is used to correct for air buoyancy. The classical approach demands the determination of the air density and therefore suitable equipment to measure at least the air temperature, the air pressure and the relative air humidity within the demanded uncertainties (i.e. three independent measurement tasks have to be performed simultaneously). The calculated uncertainty is lower for the classical method. However a field laboratory may not always be in possession of fully traceable measurement systems for these room climatic parameters.A comparison of three approaches applied to the calculation of the combined uncertainty of mass values is presented. Namely the classical determination of air buoyancy, the artefact method, and the neglecting of this systematic effect as proposed in the new EURACHEM/CITAC guide [2]. The artefact method is suitable for high-precision measurement in analytical chemistry and especially for the production of certified reference materials, reference values and analytical chemical reference materials. The method could also be used either for volume determination of solids or for air density measurement by an independent method.
Bioassays as one of the Green Chemistry tools for assessing environmental quality: A review.
Wieczerzak, M; Namieśnik, J; Kudłak, B
2016-09-01
For centuries, mankind has contributed to irreversible environmental changes, but due to the modern science of recent decades, scientists are able to assess the scale of this impact. The introduction of laws and standards to ensure environmental cleanliness requires comprehensive environmental monitoring, which should also meet the requirements of Green Chemistry. The broad spectrum of Green Chemistry principle applications should also include all of the techniques and methods of pollutant analysis and environmental monitoring. The classical methods of chemical analyses do not always match the twelve principles of Green Chemistry, and they are often expensive and employ toxic and environmentally unfriendly solvents in large quantities. These solvents can generate hazardous and toxic waste while consuming large volumes of resources. Therefore, there is a need to develop reliable techniques that would not only meet the requirements of Green Analytical Chemistry, but they could also complement and sometimes provide an alternative to conventional classical analytical methods. These alternatives may be found in bioassays. Commercially available certified bioassays often come in the form of ready-to-use toxkits, and they are easy to use and relatively inexpensive in comparison with certain conventional analytical methods. The aim of this study is to provide evidence that bioassays can be a complementary alternative to classical methods of analysis and can fulfil Green Analytical Chemistry criteria. The test organisms discussed in this work include single-celled organisms, such as cell lines, fungi (yeast), and bacteria, and multicellular organisms, such as invertebrate and vertebrate animals and plants. Copyright © 2016 Elsevier Ltd. All rights reserved.
Classical Civilization (Greece-Hellenistic-Rome). Teacher's Manual. 1968 Edition.
ERIC Educational Resources Information Center
Leppert, Ella C.; Smith, Rozella B.
This secondary teachers guide builds upon a previous sequential course described in SO 003 173, and consists of three sections on the classical civilizations--Greek, Hellenistic, and Rome. Major emphasis is upon students gaining an understanding of cultural development and transmission. Using an analytic method, students learn to examine primary…
ERIC Educational Resources Information Center
Moraes, Edgar P.; da Silva, Nilbert S. A.; de Morais, Camilo de L. M.; das Neves, Luiz S.; de Lima, Kassio M. G.
2014-01-01
The flame test is a classical analytical method that is often used to teach students how to identify specific metals. However, some universities in developing countries have difficulties acquiring the sophisticated instrumentation needed to demonstrate how to identify and quantify metals. In this context, a method was developed based on the flame…
Darwish, Hany W; Bakheit, Ahmed H; Abdelhameed, Ali S
2016-03-01
Simultaneous spectrophotometric analysis of a multi-component dosage form of olmesartan, amlodipine and hydrochlorothiazide used for the treatment of hypertension has been carried out using various chemometric methods. Multivariate calibration methods include classical least squares (CLS) executed by net analyte processing (NAP-CLS), orthogonal signal correction (OSC-CLS) and direct orthogonal signal correction (DOSC-CLS) in addition to multivariate curve resolution-alternating least squares (MCR-ALS). Results demonstrated the efficiency of the proposed methods as quantitative tools of analysis as well as their qualitative capability. The three analytes were determined precisely using the aforementioned methods in an external data set and in a dosage form after optimization of experimental conditions. Finally, the efficiency of the models was validated via comparison with the partial least squares (PLS) method in terms of accuracy and precision.
Complexometric Determination of Mercury Based on a Selective Masking Reaction
ERIC Educational Resources Information Center
Romero, Mercedes; Guidi, Veronica; Ibarrolaza, Agustin; Castells, Cecilia
2009-01-01
In the first analytical chemistry course, students are introduced to the concepts of equilibrium in water solutions and classical (non-instrumental) analytical methods. Our teaching experience shows that "real samples" stimulate students' enthusiasm for the laboratory work. From this diagnostic, we implemented an optional activity at the end of…
Artificial Intelligence Methods in Pursuit Evasion Differential Games
1990-07-30
objectives, sometimes with fuzzy ones. Classical optimization, control or game theoretic methods are insufficient for their resolution. I Solution...OVERALL SATISFACTION WITH SCHOOL 120 FIGURE 5.13 EXAMPLE AHP HIERARCHY FOR CHOOSING MOST APPROPRIATE DIFFERENTIAL GAME AND PARAMETRIZATION 125 FIGURE 5.14...the Analytical Hierarchy Process originated by T.L. Saaty of the Wharton School. The Analytic Hierarchy Process ( AHP ) is a general theory of
Extended Analytic Device Optimization Employing Asymptotic Expansion
NASA Technical Reports Server (NTRS)
Mackey, Jonathan; Sehirlioglu, Alp; Dynsys, Fred
2013-01-01
Analytic optimization of a thermoelectric junction often introduces several simplifying assumptionsincluding constant material properties, fixed known hot and cold shoe temperatures, and thermallyinsulated leg sides. In fact all of these simplifications will have an effect on device performance,ranging from negligible to significant depending on conditions. Numerical methods, such as FiniteElement Analysis or iterative techniques, are often used to perform more detailed analysis andaccount for these simplifications. While numerical methods may stand as a suitable solution scheme,they are weak in gaining physical understanding and only serve to optimize through iterativesearching techniques. Analytic and asymptotic expansion techniques can be used to solve thegoverning system of thermoelectric differential equations with fewer or less severe assumptionsthan the classic case. Analytic methods can provide meaningful closed form solutions and generatebetter physical understanding of the conditions for when simplifying assumptions may be valid.In obtaining the analytic solutions a set of dimensionless parameters, which characterize allthermoelectric couples, is formulated and provide the limiting cases for validating assumptions.Presentation includes optimization of both classic rectangular couples as well as practically andtheoretically interesting cylindrical couples using optimization parameters physically meaningful toa cylindrical couple. Solutions incorporate the physical behavior for i) thermal resistance of hot andcold shoes, ii) variable material properties with temperature, and iii) lateral heat transfer through legsides.
Structural analysis at aircraft conceptual design stage
NASA Astrophysics Data System (ADS)
Mansouri, Reza
In the past 50 years, computers have helped by augmenting human efforts with tremendous pace. The aircraft industry is not an exception. Aircraft industry is more than ever dependent on computing because of a high level of complexity and the increasing need for excellence to survive a highly competitive marketplace. Designers choose computers to perform almost every analysis task. But while doing so, existing effective, accurate and easy to use classical analytical methods are often forgotten, which can be very useful especially in the early phases of the aircraft design where concept generation and evaluation demands physical visibility of design parameters to make decisions [39, 2004]. Structural analysis methods have been used by human beings since the very early civilization. Centuries before computers were invented; the pyramids were designed and constructed by Egyptians around 2000 B.C, the Parthenon was built by the Greeks, around 240 B.C, Dujiangyan was built by the Chinese. Persepolis, Hagia Sophia, Taj Mahal, Eiffel tower are only few more examples of historical buildings, bridges and monuments that were constructed before we had any advancement made in computer aided engineering. Aircraft industry is no exception either. In the first half of the 20th century, engineers used classical method and designed civil transport aircraft such as Ford Tri Motor (1926), Lockheed Vega (1927), Lockheed 9 Orion (1931), Douglas DC-3 (1935), Douglas DC-4/C-54 Skymaster (1938), Boeing 307 (1938) and Boeing 314 Clipper (1939) and managed to become airborne without difficulty. Evidencing, while advanced numerical methods such as the finite element analysis is one of the most effective structural analysis methods; classical structural analysis methods can also be as useful especially during the early phase of a fixed wing aircraft design where major decisions are made and concept generation and evaluation demands physical visibility of design parameters to make decisions. Considering the strength and limitations of both methodologies, the question to be answered in this thesis is: How valuable and compatible are the classical analytical methods in today's conceptual design environment? And can these methods complement each other? To answer these questions, this thesis investigates the pros and cons of classical analytical structural analysis methods during the conceptual design stage through the following objectives: Illustrate structural design methodology of these methods within the framework of Aerospace Vehicle Design (AVD) lab's design lifecycle. Demonstrate the effectiveness of moment distribution method through four case studies. This will be done by considering and evaluating the strength and limitation of these methods. In order to objectively quantify the limitation and capabilities of the analytical method at the conceptual design stage, each case study becomes more complex than the one before.
Random Forest as a Predictive Analytics Alternative to Regression in Institutional Research
ERIC Educational Resources Information Center
He, Lingjun; Levine, Richard A.; Fan, Juanjuan; Beemer, Joshua; Stronach, Jeanne
2018-01-01
In institutional research, modern data mining approaches are seldom considered to address predictive analytics problems. The goal of this paper is to highlight the advantages of tree-based machine learning algorithms over classic (logistic) regression methods for data-informed decision making in higher education problems, and stress the success of…
Introducing Chemometrics to the Analytical Curriculum: Combining Theory and Lab Experience
ERIC Educational Resources Information Center
Gilbert, Michael K.; Luttrell, Robert D.; Stout, David; Vogt, Frank
2008-01-01
Beer's law is an ideal technique that works only in certain situations. A method for dealing with more complex conditions needs to be integrated into the analytical chemistry curriculum. For that reason, the capabilities and limitations of two common chemometric algorithms, classical least squares (CLS) and principal component regression (PCR),…
New analytical exact solutions of time fractional KdV-KZK equation by Kudryashov methods
NASA Astrophysics Data System (ADS)
S Saha, Ray
2016-04-01
In this paper, new exact solutions of the time fractional KdV-Khokhlov-Zabolotskaya-Kuznetsov (KdV-KZK) equation are obtained by the classical Kudryashov method and modified Kudryashov method respectively. For this purpose, the modified Riemann-Liouville derivative is used to convert the nonlinear time fractional KdV-KZK equation into the nonlinear ordinary differential equation. In the present analysis, the classical Kudryashov method and modified Kudryashov method are both used successively to compute the analytical solutions of the time fractional KdV-KZK equation. As a result, new exact solutions involving the symmetrical Fibonacci function, hyperbolic function and exponential function are obtained for the first time. The methods under consideration are reliable and efficient, and can be used as an alternative to establish new exact solutions of different types of fractional differential equations arising from mathematical physics. The obtained results are exhibited graphically in order to demonstrate the efficiencies and applicabilities of these proposed methods of solving the nonlinear time fractional KdV-KZK equation.
Directivity analysis of meander-line-coil EMATs with a wholly analytical method.
Xie, Yuedong; Liu, Zenghua; Yin, Liyuan; Wu, Jiande; Deng, Peng; Yin, Wuliang
2017-01-01
This paper presents the simulation and experimental study of the radiation pattern of a meander-line-coil EMAT. A wholly analytical method, which involves the coupling of two models: an analytical EM model and an analytical UT model, has been developed to build EMAT models and analyse the Rayleigh waves' beam directivity. For a specific sensor configuration, Lorentz forces are calculated using the EM analytical method, which is adapted from the classic Deeds and Dodd solution. The calculated Lorentz force density are imported to an analytical ultrasonic model as driven point sources, which produce the Rayleigh waves within a layered medium. The effect of the length of the meander-line-coil on the Rayleigh waves' beam directivity is analysed quantitatively and verified experimentally. Copyright © 2016 Elsevier B.V. All rights reserved.
New insights into classical solutions of the local instability of the sandwich panels problem
NASA Astrophysics Data System (ADS)
Pozorska, Jolanta; Pozorski, Zbigniew
2016-06-01
The paper concerns the problem of local instability of thin facings of a sandwich panel. The classic analytical solutions are compared and examined. The Airy stress function is applied in the case of the state of plane stress and the state of plane strain. Wrinkling stress values are presented. The differences between the results obtained using the differential equations method and energy method are discussed. The relations between core strain energies are presented.
ERIC Educational Resources Information Center
Helms, LuAnn Sherbeck
This paper discusses the fact that reliability is about scores and not tests and how reliability limits effect sizes. The paper also explores the classical reliability coefficients of stability, equivalence, and internal consistency. Stability is concerned with how stable test scores will be over time, while equivalence addresses the relationship…
Modified harmonic balance method for the solution of nonlinear jerk equations
NASA Astrophysics Data System (ADS)
Rahman, M. Saifur; Hasan, A. S. M. Z.
2018-03-01
In this paper, a second approximate solution of nonlinear jerk equations (third order differential equation) can be obtained by using modified harmonic balance method. The method is simpler and easier to carry out the solution of nonlinear differential equations due to less number of nonlinear equations are required to solve than the classical harmonic balance method. The results obtained from this method are compared with those obtained from the other existing analytical methods that are available in the literature and the numerical method. The solution shows a good agreement with the numerical solution as well as the analytical methods of the available literature.
NASA Astrophysics Data System (ADS)
Drukker, Karen; Hammes-Schiffer, Sharon
1997-07-01
This paper presents an analytical derivation of a multiconfigurational self-consistent-field (MC-SCF) solution of the time-independent Schrödinger equation for nuclear motion (i.e. vibrational modes). This variational MC-SCF method is designed for the mixed quantum/classical molecular dynamics simulation of multiple proton transfer reactions, where the transferring protons are treated quantum mechanically while the remaining degrees of freedom are treated classically. This paper presents a proof that the Hellmann-Feynman forces on the classical degrees of freedom are identical to the exact forces (i.e. the Pulay corrections vanish) when this MC-SCF method is used with an appropriate choice of basis functions. This new MC-SCF method is applied to multiple proton transfer in a protonated chain of three hydrogen-bonded water molecules. The ground state and the first three excited state energies and the ground state forces agree well with full configuration interaction calculations. Sample trajectories are obtained using adiabatic molecular dynamics methods, and nonadiabatic effects are found to be insignificant for these sample trajectories. The accuracy of the excited states will enable this MC-SCF method to be used in conjunction with nonadiabatic molecular dynamics methods. This application differs from previous work in that it is a real-time quantum dynamical nonequilibrium simulation of multiple proton transfer in a chain of water molecules.
Classical Dynamics of Fullerenes
NASA Astrophysics Data System (ADS)
Sławianowski, Jan J.; Kotowski, Romuald K.
2017-06-01
The classical mechanics of large molecules and fullerenes is studied. The approach is based on the model of collective motion of these objects. The mixed Lagrangian (material) and Eulerian (space) description of motion is used. In particular, the Green and Cauchy deformation tensors are geometrically defined. The important issue is the group-theoretical approach to describing the affine deformations of the body. The Hamiltonian description of motion based on the Poisson brackets methodology is used. The Lagrange and Hamilton approaches allow us to formulate the mechanics in the canonical form. The method of discretization in analytical continuum theory and in classical dynamics of large molecules and fullerenes enable us to formulate their dynamics in terms of the polynomial expansions of configurations. Another approach is based on the theory of analytical functions and on their approximations by finite-order polynomials. We concentrate on the extremely simplified model of affine deformations or on their higher-order polynomial perturbations.
COMPARING A NEW ALGORITHM WITH THE CLASSIC METHODS FOR ESTIMATING THE NUMBER OF FACTORS. (R826238)
This paper presents and compares a new algorithm for finding the number of factors in a data analytic model. After we describe the new method, called NUMFACT, we compare it with standard methods for finding the number of factors to use in a model. The standard methods that we ...
Sound Emission of Rotor Induced Deformations of Generator Casings
NASA Technical Reports Server (NTRS)
Polifke, W.; Mueller, B.; Yee, H. C.; Mansour, Nagi (Technical Monitor)
2001-01-01
The casing of large electrical generators can be deformed slightly by the rotor's magnetic field. The sound emission produced by these periodic deformations, which could possibly exceed guaranteed noise emission limits, is analysed analytically and numerically. From the deformation of the casing, the normal velocity of the generator's surface is computed. Taking into account the corresponding symmetry, an analytical solution for the acoustic pressure outside the generator is round in terms of the Hankel function of second order. The normal velocity or the generator surface provides the required boundary condition for the acoustic pressure and determines the magnitude of pressure oscillations. For the numerical simulation, the nonlinear 2D Euler equations are formulated In a perturbation form for low Mach number Computational Aeroacoustics (CAA). The spatial derivatives are discretized by the classical sixth-order central interior scheme and a third-order boundary scheme. Spurious high frequency oscillations are damped by a characteristic-based artificial compression method (ACM) filter. The time derivatives are approximated by the classical 4th-order Runge-Kutta method. The numerical results are In excellent agreement with the analytical solution.
Gore, Christopher J; Sharpe, Ken; Garvican-Lewis, Laura A; Saunders, Philo U; Humberstone, Clare E; Robertson, Eileen Y; Wachsmuth, Nadine B; Clark, Sally A; McLean, Blake D; Friedmann-Bette, Birgit; Neya, Mitsuo; Pottgiesser, Torben; Schumacher, Yorck O; Schmidt, Walter F
2013-01-01
Objective To characterise the time course of changes in haemoglobin mass (Hbmass) in response to altitude exposure. Methods This meta-analysis uses raw data from 17 studies that used carbon monoxide rebreathing to determine Hbmass prealtitude, during altitude and postaltitude. Seven studies were classic altitude training, eight were live high train low (LHTL) and two mixed classic and LHTL. Separate linear-mixed models were fitted to the data from the 17 studies and the resultant estimates of the effects of altitude used in a random effects meta-analysis to obtain an overall estimate of the effect of altitude, with separate analyses during altitude and postaltitude. In addition, within-subject differences from the prealtitude phase for altitude participant and all the data on control participants were used to estimate the analytical SD. The ‘true’ between-subject response to altitude was estimated from the within-subject differences on altitude participants, between the prealtitude and during-altitude phases, together with the estimated analytical SD. Results During-altitude Hbmass was estimated to increase by ∼1.1%/100 h for LHTL and classic altitude. Postaltitude Hbmass was estimated to be 3.3% higher than prealtitude values for up to 20 days. The within-subject SD was constant at ∼2% for up to 7 days between observations, indicative of analytical error. A 95% prediction interval for the ‘true’ response of an athlete exposed to 300 h of altitude was estimated to be 1.1–6%. Conclusions Camps as short as 2 weeks of classic and LHTL altitude will quite likely increase Hbmass and most athletes can expect benefit. PMID:24282204
Zhou, Yan; Cao, Hui
2013-01-01
We propose an augmented classical least squares (ACLS) calibration method for quantitative Raman spectral analysis against component information loss. The Raman spectral signals with low analyte concentration correlations were selected and used as the substitutes for unknown quantitative component information during the CLS calibration procedure. The number of selected signals was determined by using the leave-one-out root-mean-square error of cross-validation (RMSECV) curve. An ACLS model was built based on the augmented concentration matrix and the reference spectral signal matrix. The proposed method was compared with partial least squares (PLS) and principal component regression (PCR) using one example: a data set recorded from an experiment of analyte concentration determination using Raman spectroscopy. A 2-fold cross-validation with Venetian blinds strategy was exploited to evaluate the predictive power of the proposed method. The one-way variance analysis (ANOVA) was used to access the predictive power difference between the proposed method and existing methods. Results indicated that the proposed method is effective at increasing the robust predictive power of traditional CLS model against component information loss and its predictive power is comparable to that of PLS or PCR.
Deriving the exact nonadiabatic quantum propagator in the mapping variable representation.
Hele, Timothy J H; Ananth, Nandini
2016-12-22
We derive an exact quantum propagator for nonadiabatic dynamics in multi-state systems using the mapping variable representation, where classical-like Cartesian variables are used to represent both continuous nuclear degrees of freedom and discrete electronic states. The resulting Liouvillian is a Moyal series that, when suitably approximated, can allow for the use of classical dynamics to efficiently model large systems. We demonstrate that different truncations of the exact Liouvillian lead to existing approximate semiclassical and mixed quantum-classical methods and we derive an associated error term for each method. Furthermore, by combining the imaginary-time path-integral representation of the Boltzmann operator with the exact Liouvillian, we obtain an analytic expression for thermal quantum real-time correlation functions. These results provide a rigorous theoretical foundation for the development of accurate and efficient classical-like dynamics to compute observables such as electron transfer reaction rates in complex quantized systems.
Tests of Measurement Invariance without Subgroups: A Generalization of Classical Methods
ERIC Educational Resources Information Center
Merkle, Edgar C.; Zeileis, Achim
2013-01-01
The issue of measurement invariance commonly arises in factor-analytic contexts, with methods for assessment including likelihood ratio tests, Lagrange multiplier tests, and Wald tests. These tests all require advance definition of the number of groups, group membership, and offending model parameters. In this paper, we study tests of measurement…
The new version of EPA’s positive matrix factorization (EPA PMF) software, 5.0, includes three error estimation (EE) methods for analyzing factor analytic solutions: classical bootstrap (BS), displacement of factor elements (DISP), and bootstrap enhanced by displacement (BS-DISP)...
COMPARING A NEW ALGORITHM WITH THE CLASSIC METHODS FOR ESTIMATING THE NUMBER OF FACTORS. (R825173)
This paper presents and compares a new algorithm for finding the number of factors in a data analytic model. After we describe the new method, called NUMFACT, we compare it with standard methods for finding the number of factors to use in a model. The standard...
Artificial neural network and classical least-squares methods for neurotransmitter mixture analysis.
Schulze, H G; Greek, L S; Gorzalka, B B; Bree, A V; Blades, M W; Turner, R F
1995-02-01
Identification of individual components in biological mixtures can be a difficult problem regardless of the analytical method employed. In this work, Raman spectroscopy was chosen as a prototype analytical method due to its inherent versatility and applicability to aqueous media, making it useful for the study of biological samples. Artificial neural networks (ANNs) and the classical least-squares (CLS) method were used to identify and quantify the Raman spectra of the small-molecule neurotransmitters and mixtures of such molecules. The transfer functions used by a network, as well as the architecture of a network, played an important role in the ability of the network to identify the Raman spectra of individual neurotransmitters and the Raman spectra of neurotransmitter mixtures. Specifically, networks using sigmoid and hyperbolic tangent transfer functions generalized better from the mixtures in the training data set to those in the testing data sets than networks using sine functions. Networks with connections that permit the local processing of inputs generally performed better than other networks on all the testing data sets. and better than the CLS method of curve fitting, on novel spectra of some neurotransmitters. The CLS method was found to perform well on noisy, shifted, and difference spectra.
Equilibrium Solutions of the Logarithmic Hamiltonian Leapfrog for the N-body Problem
NASA Astrophysics Data System (ADS)
Minesaki, Yukitaka
2018-04-01
We prove that a second-order logarithmic Hamiltonian leapfrog for the classical general N-body problem (CGNBP) designed by Mikkola and Tanikawa and some higher-order logarithmic Hamiltonian methods based on symmetric multicompositions of the logarithmic algorithm exactly reproduce the orbits of elliptic relative equilibrium solutions in the original CGNBP. These methods are explicit symplectic methods. Before this proof, only some implicit discrete-time CGNBPs proposed by Minesaki had been analytically shown to trace the orbits of elliptic relative equilibrium solutions. The proof is therefore the first existence proof for explicit symplectic methods. Such logarithmic Hamiltonian methods with a variable time step can also precisely retain periodic orbits in the classical general three-body problem, which generic numerical methods with a constant time step cannot do.
Mirski, Tomasz; Bartoszcze, Michał; Bielawska-Drózd, Agata; Cieślik, Piotr; Michalski, Aleksander J; Niemcewicz, Marcin; Kocik, Janusz; Chomiczewski, Krzysztof
2014-01-01
Modern threats of bioterrorism force the need to develop methods for rapid and accurate identification of dangerous biological agents. Currently, there are many types of methods used in this field of studies that are based on immunological or genetic techniques, or constitute a combination of both methods (immuno-genetic). There are also methods that have been developed on the basis of physical and chemical properties of the analytes. Each group of these analytical assays can be further divided into conventional methods (e.g. simple antigen-antibody reactions, classical PCR, real-time PCR), and modern technologies (e.g. microarray technology, aptamers, phosphors, etc.). Nanodiagnostics constitute another group of methods that utilize the objects at a nanoscale (below 100 nm). There are also integrated and automated diagnostic systems, which combine different methods and allow simultaneous sampling, extraction of genetic material and detection and identification of the analyte using genetic, as well as immunological techniques.
[Blood sampling using "dried blood spot": a clinical biology revolution underway?].
Hirtz, Christophe; Lehmann, Sylvain
2015-01-01
Blood testing using the dried blood spot (DBS) is used since the 1960s in clinical analysis, mainly within the framework of the neonatal screening (Guthrie test). Since then numerous analytes such as nucleic acids, small molecules or lipids, were successfully measured on the DBS. While this pre-analytical method represents an interesting alternative to classic blood sampling, its use in routine is still limited. We review here the different clinical applications of the blood sampling on DBS and estimate its future place, supported by the new methods of analysis as the LC-MS mass spectrometry.
Analytic Methods for Adjusting Subjective Rating Schemes.
ERIC Educational Resources Information Center
Cooper, Richard V. L.; Nelson, Gary R.
Statistical and econometric techniques of correcting for supervisor bias in models of individual performance appraisal were developed, using a variant of the classical linear regression model. Location bias occurs when individual performance is systematically overestimated or underestimated, while scale bias results when raters either exaggerate…
Meta-analysis of diagnostic test data: a bivariate Bayesian modeling approach.
Verde, Pablo E
2010-12-30
In the last decades, the amount of published results on clinical diagnostic tests has expanded very rapidly. The counterpart to this development has been the formal evaluation and synthesis of diagnostic results. However, published results present substantial heterogeneity and they can be regarded as so far removed from the classical domain of meta-analysis, that they can provide a rather severe test of classical statistical methods. Recently, bivariate random effects meta-analytic methods, which model the pairs of sensitivities and specificities, have been presented from the classical point of view. In this work a bivariate Bayesian modeling approach is presented. This approach substantially extends the scope of classical bivariate methods by allowing the structural distribution of the random effects to depend on multiple sources of variability. Meta-analysis is summarized by the predictive posterior distributions for sensitivity and specificity. This new approach allows, also, to perform substantial model checking, model diagnostic and model selection. Statistical computations are implemented in the public domain statistical software (WinBUGS and R) and illustrated with real data examples. Copyright © 2010 John Wiley & Sons, Ltd.
The Green's functions for peridynamic non-local diffusion.
Wang, L J; Xu, J F; Wang, J X
2016-09-01
In this work, we develop the Green's function method for the solution of the peridynamic non-local diffusion model in which the spatial gradient of the generalized potential in the classical theory is replaced by an integral of a generalized response function in a horizon. We first show that the general solutions of the peridynamic non-local diffusion model can be expressed as functionals of the corresponding Green's functions for point sources, along with volume constraints for non-local diffusion. Then, we obtain the Green's functions by the Fourier transform method for unsteady and steady diffusions in infinite domains. We also demonstrate that the peridynamic non-local solutions converge to the classical differential solutions when the non-local length approaches zero. Finally, the peridynamic analytical solutions are applied to an infinite plate heated by a Gauss source, and the predicted variations of temperature are compared with the classical local solutions. The peridynamic non-local diffusion model predicts a lower rate of variation of the field quantities than that of the classical theory, which is consistent with experimental observations. The developed method is applicable to general diffusion-type problems.
The Shock and Vibration Bulletin. Part 2. Invited Papers, Structural Dynamics
1974-08-01
VIKING LANDER DYNAMICS 41 Mr. Joseph C. Pohlen, Martin Marietta Aerospace, Denver, Colorado Structural Dynamics PERFORMANCE OF STATISTICAL ENERGY ANALYSIS 47...aerospace structures. Analytical prediction of these environments is beyond the current scope of classical modal techniques. Statistical energy analysis methods...have been developed that circumvent the difficulties of high-frequency nodal analysis. These statistical energy analysis methods are evaluated
Quantum localization for a kicked rotor with accelerator mode islands.
Iomin, A; Fishman, S; Zaslavsky, G M
2002-03-01
Dynamical localization of classical superdiffusion for the quantum kicked rotor is studied in the semiclassical limit. Both classical and quantum dynamics of the system become more complicated under the conditions of mixed phase space with accelerator mode islands. Recently, long time quantum flights due to the accelerator mode islands have been found. By exploration of their dynamics, it is shown here that the classical-quantum duality of the flights leads to their localization. The classical mechanism of superdiffusion is due to accelerator mode dynamics, while quantum tunneling suppresses the superdiffusion and leads to localization of the wave function. Coupling of the regular type dynamics inside the accelerator mode island structures to dynamics in the chaotic sea proves increasing the localization length. A numerical procedure and an analytical method are developed to obtain an estimate of the localization length which, as it is shown, has exponentially large scaling with the dimensionless Planck's constant (tilde)h<1 in the semiclassical limit. Conditions for the validity of the developed method are specified.
NASA Astrophysics Data System (ADS)
Barsan, Victor
2018-05-01
Several classes of transcendental equations, mainly eigenvalue equations associated to non-relativistic quantum mechanical problems, are analyzed. Siewert's systematic approach of such equations is discussed from the perspective of the new results recently obtained in the theory of generalized Lambert functions and of algebraic approximations of various special or elementary functions. Combining exact and approximate analytical methods, quite precise analytical outputs are obtained for apparently untractable problems. The results can be applied in quantum and classical mechanics, magnetism, elasticity, solar energy conversion, etc.
Classical theory of atomic collisions - The first hundred years
NASA Astrophysics Data System (ADS)
Grujić, Petar V.
2012-05-01
Classical calculations of the atomic processes started in 1911 with famous Rutherford's evaluation of the differential cross section for α particles scattered on foil atoms [1]. The success of these calculations was soon overshadowed by the rise of Quantum Mechanics in 1925 and its triumphal success in describing processes at the atomic and subatomic levels. It was generally recognized that the classical approach should be inadequate and it was neglected until 1953, when the famous paper by Gregory Wannier appeared, in which the threshold law for the single ionization cross section behaviour by electron impact was derived. All later calculations and experimental studies confirmed the law derived by purely classical theory. The next step was taken by Ian Percival and collaborators in 60s, who developed a general classical three-body computer code, which was used by many researchers in evaluating various atomic processes like ionization, excitation, detachment, dissociation, etc. Another approach was pursued by Michal Gryzinski from Warsaw, who started a far reaching programme for treating atomic particles and processes as purely classical objects [2]. Though often criticized for overestimating the domain of the classical theory, results of his group were able to match many experimental data. Belgrade group was pursuing the classical approach using both analytical and numerical calculations, studying a number of atomic collisions, in particular near-threshold processes. Riga group, lead by Modris Gailitis [3], contributed considerably to the field, as it was done by Valentin Ostrovsky and coworkers from Sanct Petersbourg, who developed powerful analytical methods within purely classical mechanics [4]. We shall make an overview of these approaches and show some of the remarkable results, which were subsequently confirmed by semiclassical and quantum mechanical calculations, as well as by the experimental evidence. Finally we discuss the theoretical and epistemological background of the classical calculations and explain why these turned out so successful, despite the essentially quantum nature of the atomic and subatomic systems.
Analytical close-form solutions to the elastic fields of solids with dislocations and surface stress
NASA Astrophysics Data System (ADS)
Ye, Wei; Paliwal, Bhasker; Ougazzaden, Abdallah; Cherkaoui, Mohammed
2013-07-01
The concept of eigenstrain is adopted to derive a general analytical framework to solve the elastic field for 3D anisotropic solids with general defects by considering the surface stress. The formulation shows the elastic constants and geometrical features of the surface play an important role in determining the elastic fields of the solid. As an application, the analytical close-form solutions to the stress fields of an infinite isotropic circular nanowire are obtained. The stress fields are compared with the classical solutions and those of complex variable method. The stress fields from this work demonstrate the impact from the surface stress when the size of the nanowire shrinks but becomes negligible in macroscopic scale. Compared with the power series solutions of complex variable method, the analytical solutions in this work provide a better platform and they are more flexible in various applications. More importantly, the proposed analytical framework profoundly improves the studies of general 3D anisotropic materials with surface effects.
Numerical methods for coupled fracture problems
NASA Astrophysics Data System (ADS)
Viesca, Robert C.; Garagash, Dmitry I.
2018-04-01
We consider numerical solutions in which the linear elastic response to an opening- or sliding-mode fracture couples with one or more processes. Classic examples of such problems include traction-free cracks leading to stress singularities or cracks with cohesive-zone strength requirements leading to non-singular stress distributions. These classical problems have characteristic square-root asymptotic behavior for stress, relative displacement, or their derivatives. Prior work has shown that such asymptotics lead to a natural quadrature of the singular integrals at roots of Chebyhsev polynomials of the first, second, third, or fourth kind. We show that such quadratures lead to convenient techniques for interpolation, differentiation, and integration, with the potential for spectral accuracy. We further show that these techniques, with slight amendment, may continue to be used for non-classical problems which lack the classical asymptotic behavior. We consider solutions to example problems of both the classical and non-classical variety (e.g., fluid-driven opening-mode fracture and fault shear rupture driven by thermal weakening), with comparisons to analytical solutions or asymptotes, where available.
Huang, Yande; Su, Bao-Ning; Ye, Qingmei; Palaniswamy, Venkatapuram A; Bolgar, Mark S; Raglione, Thomas V
2014-01-01
The classical internal standard quantitative NMR (qNMR) method determines the purity of an analyte by the determination of a solution containing the analyte and a standard. Therefore, the standard must meet the requirements of chemical compatibility and lack of resonance interference with the analyte as well as a known purity. The identification of such a standard can be time consuming and must be repeated for each analyte. In contrast, the external standard qNMR method utilizes a standard with a known purity to calibrate the NMR instrument. The external standard and the analyte are measured separately, thereby eliminating the matter of chemical compatibility and resonance interference between the standard and the analyte. However, the instrumental factors, including the quality of NMR tubes, must be kept the same. Any deviations will compromise the accuracy of the results. An innovative qNMR method reported herein utilizes an internal reference substance along with an external standard to assume the role of the standard used in the traditional internal standard qNMR method. In this new method, the internal reference substance must only be chemically compatible and be free of resonance-interference with the analyte or external standard whereas the external standard must only be of a known purity. The exact purity or concentration of the internal reference substance is not required as long as the same quantity is added to the external standard and the analyte. The new method reduces the burden of searching for an appropriate standard for each analyte significantly. Therefore the efficiency of the qNMR purity assay increases while the precision of the internal standard method is retained. Copyright © 2013 Elsevier B.V. All rights reserved.
A finite-element method for large-amplitude, two-dimensional panel flutter at hypersonic speeds
NASA Technical Reports Server (NTRS)
Mei, Chuh; Gray, Carl E.
1989-01-01
The nonlinear flutter behavior of a two-dimensional panel in hypersonic flow is investigated analytically. An FEM formulation based unsteady third-order piston theory (Ashley and Zartarian, 1956; McIntosh, 1970) and taking nonlinear structural and aerodynamic phenomena into account is derived; the solution procedure is outlined; and typical results are presented in extensive tables and graphs. A 12-element finite-element solution obtained using an alternative method for linearizing the assumed limit-cycle time function is shown to give predictions in good agreement with classical analytical results for large-amplitude vibration in a vacuum and large-amplitude panel flutter, using linear aerodynamics.
On Boundaries of the Language of Physics
NASA Astrophysics Data System (ADS)
Kvasz, Ladislav
The aim of the present paper is to outline a method of reconstruction of the historical development of the language of physical theories. We will apply the theory presented in Patterns of Change, Linguistic Innovations in the Development of Classical Mathematics to the analysis of linguistic innovations in physics. Our method is based on a reconstruction of the following potentialities of language: analytical power, expressive power, integrative power, and explanatory power, as well as analytical boundaries and expressive boundaries. One of the results of our reconstruction is a new interpretation of Kant's antinomies of pure reason. If we relate Kant's antinomies to the language, they retain validity.
NASA Astrophysics Data System (ADS)
Chen, Jiahui; Zhou, Hui; Duan, Changkui; Peng, Xinhua
2017-03-01
Entanglement, a unique quantum resource with no classical counterpart, remains at the heart of quantum information. The Greenberger-Horne-Zeilinger (GHZ) and W states are two inequivalent classes of multipartite entangled states which cannot be transformed into each other by means of local operations and classic communication. In this paper, we present the methods to prepare the GHZ and W states via global controls on a long-range Ising spin model. For the GHZ state, general solutions are analytically obtained for an arbitrary-size spin system, while for the W state, we find a standard way to prepare the W state that is analytically illustrated in three- and four-spin systems and numerically demonstrated for larger-size systems. The number of parameters required in the numerical search increases only linearly with the size of the system.
The Green’s functions for peridynamic non-local diffusion
Wang, L. J.; Xu, J. F.
2016-01-01
In this work, we develop the Green’s function method for the solution of the peridynamic non-local diffusion model in which the spatial gradient of the generalized potential in the classical theory is replaced by an integral of a generalized response function in a horizon. We first show that the general solutions of the peridynamic non-local diffusion model can be expressed as functionals of the corresponding Green’s functions for point sources, along with volume constraints for non-local diffusion. Then, we obtain the Green’s functions by the Fourier transform method for unsteady and steady diffusions in infinite domains. We also demonstrate that the peridynamic non-local solutions converge to the classical differential solutions when the non-local length approaches zero. Finally, the peridynamic analytical solutions are applied to an infinite plate heated by a Gauss source, and the predicted variations of temperature are compared with the classical local solutions. The peridynamic non-local diffusion model predicts a lower rate of variation of the field quantities than that of the classical theory, which is consistent with experimental observations. The developed method is applicable to general diffusion-type problems. PMID:27713658
NASA Astrophysics Data System (ADS)
Hadad, Ghada M.; El-Gindy, Alaa; Mahmoud, Waleed M. M.
2008-08-01
High-performance liquid chromatography (HPLC) and multivariate spectrophotometric methods are described for the simultaneous determination of ambroxol hydrochloride (AM) and doxycycline (DX) in combined pharmaceutical capsules. The chromatographic separation was achieved on reversed-phase C 18 analytical column with a mobile phase consisting of a mixture of 20 mM potassium dihydrogen phosphate, pH 6-acetonitrile in ratio of (1:1, v/v) and UV detection at 245 nm. Also, the resolution has been accomplished by using numerical spectrophotometric methods as classical least squares (CLS), principal component regression (PCR) and partial least squares (PLS-1) applied to the UV spectra of the mixture and graphical spectrophotometric method as first derivative of the ratio spectra ( 1DD) method. Analytical figures of merit (FOM), such as sensitivity, selectivity, analytical sensitivity, limit of quantitation and limit of detection were determined for CLS, PLS-1 and PCR methods. The proposed methods were validated and successfully applied for the analysis of pharmaceutical formulation and laboratory-prepared mixtures containing the two component combination.
Modified symplectic schemes with nearly-analytic discrete operators for acoustic wave simulations
NASA Astrophysics Data System (ADS)
Liu, Shaolin; Yang, Dinghui; Lang, Chao; Wang, Wenshuai; Pan, Zhide
2017-04-01
Using a structure-preserving algorithm significantly increases the computational efficiency of solving wave equations. However, only a few explicit symplectic schemes are available in the literature, and the capabilities of these symplectic schemes have not been sufficiently exploited. Here, we propose a modified strategy to construct explicit symplectic schemes for time advance. The acoustic wave equation is transformed into a Hamiltonian system. The classical symplectic partitioned Runge-Kutta (PRK) method is used for the temporal discretization. Additional spatial differential terms are added to the PRK schemes to form the modified symplectic methods and then two modified time-advancing symplectic methods with all of positive symplectic coefficients are then constructed. The spatial differential operators are approximated by nearly-analytic discrete (NAD) operators, and we call the fully discretized scheme modified symplectic nearly analytic discrete (MSNAD) method. Theoretical analyses show that the MSNAD methods exhibit less numerical dispersion and higher stability limits than conventional methods. Three numerical experiments are conducted to verify the advantages of the MSNAD methods, such as their numerical accuracy, computational cost, stability, and long-term calculation capability.
Hadad, Ghada M; El-Gindy, Alaa; Mahmoud, Waleed M M
2008-08-01
High-performance liquid chromatography (HPLC) and multivariate spectrophotometric methods are described for the simultaneous determination of ambroxol hydrochloride (AM) and doxycycline (DX) in combined pharmaceutical capsules. The chromatographic separation was achieved on reversed-phase C(18) analytical column with a mobile phase consisting of a mixture of 20mM potassium dihydrogen phosphate, pH 6-acetonitrile in ratio of (1:1, v/v) and UV detection at 245 nm. Also, the resolution has been accomplished by using numerical spectrophotometric methods as classical least squares (CLS), principal component regression (PCR) and partial least squares (PLS-1) applied to the UV spectra of the mixture and graphical spectrophotometric method as first derivative of the ratio spectra ((1)DD) method. Analytical figures of merit (FOM), such as sensitivity, selectivity, analytical sensitivity, limit of quantitation and limit of detection were determined for CLS, PLS-1 and PCR methods. The proposed methods were validated and successfully applied for the analysis of pharmaceutical formulation and laboratory-prepared mixtures containing the two component combination.
Qualitative methods in quantum theory
DOE Office of Scientific and Technical Information (OSTI.GOV)
Migdal, A.B.
The author feels that the solution of most problems in theoretical physics begins with the application of qualitative methods - dimensional estimates and estimates made from simple models, the investigation of limiting cases, the use of the analytic properties of physical quantities, etc. This book proceeds in this spirit, rather than in a formal, mathematical way with no traces of the sweat involved in the original work left to show. The chapters are entitled Dimensional and model approximations, Various types of perturbation theory, The quasi-classical approximation, Analytic properties of physical quantities, Methods in the many-body problem, and Qualitative methods inmore » quantum field theory. Each chapter begins with a detailed introduction, in which the physical meaning of the results obtained in that chapter is explained in a simple way. 61 figures. (RWR)« less
Conceptual data sampling for breast cancer histology image classification.
Rezk, Eman; Awan, Zainab; Islam, Fahad; Jaoua, Ali; Al Maadeed, Somaya; Zhang, Nan; Das, Gautam; Rajpoot, Nasir
2017-10-01
Data analytics have become increasingly complicated as the amount of data has increased. One technique that is used to enable data analytics in large datasets is data sampling, in which a portion of the data is selected to preserve the data characteristics for use in data analytics. In this paper, we introduce a novel data sampling technique that is rooted in formal concept analysis theory. This technique is used to create samples reliant on the data distribution across a set of binary patterns. The proposed sampling technique is applied in classifying the regions of breast cancer histology images as malignant or benign. The performance of our method is compared to other classical sampling methods. The results indicate that our method is efficient and generates an illustrative sample of small size. It is also competing with other sampling methods in terms of sample size and sample quality represented in classification accuracy and F1 measure. Copyright © 2017 Elsevier Ltd. All rights reserved.
On the superposition principle in interference experiments.
Sinha, Aninda; H Vijay, Aravind; Sinha, Urbasi
2015-05-14
The superposition principle is usually incorrectly applied in interference experiments. This has recently been investigated through numerics based on Finite Difference Time Domain (FDTD) methods as well as the Feynman path integral formalism. In the current work, we have derived an analytic formula for the Sorkin parameter which can be used to determine the deviation from the application of the principle. We have found excellent agreement between the analytic distribution and those that have been earlier estimated by numerical integration as well as resource intensive FDTD simulations. The analytic handle would be useful for comparing theory with future experiments. It is applicable both to physics based on classical wave equations as well as the non-relativistic Schrödinger equation.
Fallback options for airgap sensor fault of an electromagnetic suspension system
NASA Astrophysics Data System (ADS)
Michail, Konstantinos; Zolotas, Argyrios C.; Goodall, Roger M.
2013-06-01
The paper presents a method to recover the performance of an electromagnetic suspension under faulty airgap sensor. The proposed control scheme is a combination of classical control loops, a Kalman Estimator and analytical redundancy (for the airgap signal). In this way redundant airgap sensors are not essential for reliable operation of this system. When the airgap sensor fails the required signal is recovered using a combination of a Kalman estimator and analytical redundancy. The performance of the suspension is optimised using genetic algorithms and some preliminary robustness issues to load and operating airgap variations are discussed. Simulations on a realistic model of such type of suspension illustrate the efficacy of the proposed sensor tolerant control method.
Exact analytical solution of a classical Josephson tunnel junction problem
NASA Astrophysics Data System (ADS)
Kuplevakhsky, S. V.; Glukhov, A. M.
2010-10-01
We give an exact and complete analytical solution of the classical problem of a Josephson tunnel junction of arbitrary length W ɛ(0,∞) in the presence of external magnetic fields and transport currents. Contrary to a wide-spread belief, the exact analytical solution unambiguously proves that there is no qualitative difference between so-called "small" (W≪1) and "large" junctions (W≫1). Another unexpected physical implication of the exact analytical solution is the existence (in the current-carrying state) of unquantized Josephson vortices carrying fractional flux and located near one of the edges of the junction. We also refine the mathematical definition of critical transport current.
NASA Astrophysics Data System (ADS)
Ciancio, P. M.; Rossit, C. A.; Laura, P. A. A.
2007-05-01
This study is concerned with the vibration analysis of a cantilevered rectangular anisotropic plate when a concentrated mass is rigidly attached to its center point. Based on the classical theory of anisotropic plates, the Ritz method is employed to perform the analysis. The deflection of the plate is approximated by a set of beam functions in each principal coordinate direction. The influence of the mass magnitude on the natural frequencies and modal shapes of vibration is studied for a boron-epoxy plate and also in the case of a generic anisotropic material. The classical Ritz method with beam functions as the spatial approximation proved to be a suitable procedure to solve a problem of this analytical complexity.
Methods for analysis of cracks in three-dimensional solids
NASA Technical Reports Server (NTRS)
Raju, I. S.; Newman, J. C., Jr.
1984-01-01
Various analytical and numerical methods used to evaluate the stress intensity factors for cracks in three-dimensional (3-D) solids are reviewed. Classical exact solutions and many of the approximate methods used in 3-D analyses of cracks are reviewed. The exact solutions for embedded elliptic cracks in infinite solids are discussed. The approximate methods reviewed are the finite element methods, the boundary integral equation (BIE) method, the mixed methods (superposition of analytical and finite element method, stress difference method, discretization-error method, alternating method, finite element-alternating method), and the line-spring model. The finite element method with singularity elements is the most widely used method. The BIE method only needs modeling of the surfaces of the solid and so is gaining popularity. The line-spring model appears to be the quickest way to obtain good estimates of the stress intensity factors. The finite element-alternating method appears to yield the most accurate solution at the minimum cost.
Numerical Algorithm for Delta of Asian Option
Zhang, Boxiang; Yu, Yang; Wang, Weiguo
2015-01-01
We study the numerical solution of the Greeks of Asian options. In particular, we derive a close form solution of Δ of Asian geometric option and use this analytical form as a control to numerically calculate Δ of Asian arithmetic option, which is known to have no explicit close form solution. We implement our proposed numerical method and compare the standard error with other classical variance reduction methods. Our method provides an efficient solution to the hedging strategy with Asian options. PMID:26266271
NASA Astrophysics Data System (ADS)
Nguyen, S. T.; Vu, M.-H.; Vu, M. N.; Tang, A. M.
2017-05-01
The present work aims to modeling the thermal conductivity of fractured materials using homogenization-based analytical and pattern-based numerical methods. These materials are considered as a network of cracks distributed inside a solid matrix. Heat flow through such media is perturbed by the crack system. The problem of heat flow across a single crack is firstly investigated. The classical Eshelby's solution, extended to the thermal conduction problem of an ellipsoidal inclusion embedding in an infinite homogeneous matrix, gives an analytical solution of temperature discontinuity across a non-conducting penny-shaped crack. This solution is then validated by the numerical simulation based on the finite elements method. The numerical simulation allows analyzing the effect of crack conductivity. The problem of a single crack is then extended to a medium containing multiple cracks. Analytical estimations for effective thermal conductivity, that take into account the interaction between cracks and their spatial distribution, are developed for the case of non-conducting cracks. Pattern-based numerical method is then employed for both cases non-conducting and conducting cracks. In the case of non-conducting cracks, numerical and analytical methods, both account for the spatial distribution of the cracks, fit perfectly. In the case of conducting cracks, the numerical analyzing of crack conductivity effect shows that highly conducting cracks weakly affect heat flow and the effective thermal conductivity of fractured media.
Analytic theory of orbit contraction
NASA Technical Reports Server (NTRS)
Vinh, N. X.; Longuski, J. M.; Busemann, A.; Culp, R. D.
1977-01-01
The motion of a satellite in orbit, subject to atmospheric force and the motion of a reentry vehicle are governed by gravitational and aerodynamic forces. This suggests the derivation of a uniform set of equations applicable to both cases. For the case of satellite motion, by a proper transformation and by the method of averaging, a technique appropriate for long duration flight, the classical nonlinear differential equation describing the contraction of the major axis is derived. A rigorous analytic solution is used to integrate this equation with a high degree of accuracy, using Poincare's method of small parameters and Lagrange's expansion to explicitly express the major axis as a function of the eccentricity. The solution is uniformly valid for moderate and small eccentricities. For highly eccentric orbits, the asymptotic equation is derived directly from the general equation. Numerical solutions were generated to display the accuracy of the analytic theory.
Simultaneous determination of three herbicides by differential pulse voltammetry and chemometrics.
Ni, Yongnian; Wang, Lin; Kokot, Serge
2011-01-01
A novel differential pulse voltammetry method (DPV) was researched and developed for the simultaneous determination of Pendimethalin, Dinoseb and sodium 5-nitroguaiacolate (5NG) with the aid of chemometrics. The voltammograms of these three compounds overlapped significantly, and to facilitate the simultaneous determination of the three analytes, chemometrics methods were applied. These included classical least squares (CLS), principal component regression (PCR), partial least squares (PLS) and radial basis function-artificial neural networks (RBF-ANN). A separately prepared verification data set was used to confirm the calibrations, which were built from the original and first derivative data matrices of the voltammograms. On the basis relative prediction errors and recoveries of the analytes, the RBF-ANN and the DPLS (D - first derivative spectra) models performed best and are particularly recommended for application. The DPLS calibration model was applied satisfactorily for the prediction of the three analytes from market vegetables and lake water samples.
Coupled discrete element and finite volume solution of two classical soil mechanics problems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Feng; Drumm, Eric; Guiochon, Georges A
One dimensional solutions for the classic critical upward seepage gradient/quick condition and the time rate of consolidation problems are obtained using coupled routines for the finite volume method (FVM) and discrete element method (DEM), and the results compared with the analytical solutions. The two phase flow in a system composed of fluid and solid is simulated with the fluid phase modeled by solving the averaged Navier-Stokes equation using the FVM and the solid phase is modeled using the DEM. A framework is described for the coupling of two open source computer codes: YADE-OpenDEM for the discrete element method and OpenFOAMmore » for the computational fluid dynamics. The particle-fluid interaction is quantified using a semi-empirical relationship proposed by Ergun [12]. The two classical verification problems are used to explore issues encountered when using coupled flow DEM codes, namely, the appropriate time step size for both the fluid and mechanical solution processes, the choice of the viscous damping coefficient, and the number of solid particles per finite fluid volume.« less
Quasi-Static Analysis of Round LaRC THUNDER Actuators
NASA Technical Reports Server (NTRS)
Campbell, Joel F.
2007-01-01
An analytic approach is developed to predict the shape and displacement with voltage in the quasi-static limit of round LaRC Thunder Actuators. The problem is treated with classical lamination theory and Von Karman non-linear analysis. In the case of classical lamination theory exact analytic solutions are found. It is shown that classical lamination theory is insufficient to describe the physical situation for large actuators but is sufficient for very small actuators. Numerical results are presented for the non-linear analysis and compared with experimental measurements. Snap-through behavior, bifurcation, and stability are presented and discussed.
Quasi-Static Analysis of LaRC THUNDER Actuators
NASA Technical Reports Server (NTRS)
Campbell, Joel F.
2007-01-01
An analytic approach is developed to predict the shape and displacement with voltage in the quasi-static limit of LaRC Thunder Actuators. The problem is treated with classical lamination theory and Von Karman non-linear analysis. In the case of classical lamination theory exact analytic solutions are found. It is shown that classical lamination theory is insufficient to describe the physical situation for large actuators but is sufficient for very small actuators. Numerical results are presented for the non-linear analysis and compared with experimental measurements. Snap-through behavior, bifurcation, and stability are presented and discussed.
Evolution of microbiological analytical methods for dairy industry needs
Sohier, Danièle; Pavan, Sonia; Riou, Armelle; Combrisson, Jérôme; Postollec, Florence
2014-01-01
Traditionally, culture-based methods have been used to enumerate microbial populations in dairy products. Recent developments in molecular methods now enable faster and more sensitive analyses than classical microbiology procedures. These molecular tools allow a detailed characterization of cell physiological states and bacterial fitness and thus, offer new perspectives to integration of microbial physiology monitoring to improve industrial processes. This review summarizes the methods described to enumerate and characterize physiological states of technological microbiota in dairy products, and discusses the current deficiencies in relation to the industry’s needs. Recent studies show that Polymerase chain reaction-based methods can successfully be applied to quantify fermenting microbes and probiotics in dairy products. Flow cytometry and omics technologies also show interesting analytical potentialities. However, they still suffer from a lack of validation and standardization for quality control analyses, as reflected by the absence of performance studies and official international standards. PMID:24570675
Evolution of microbiological analytical methods for dairy industry needs.
Sohier, Danièle; Pavan, Sonia; Riou, Armelle; Combrisson, Jérôme; Postollec, Florence
2014-01-01
Traditionally, culture-based methods have been used to enumerate microbial populations in dairy products. Recent developments in molecular methods now enable faster and more sensitive analyses than classical microbiology procedures. These molecular tools allow a detailed characterization of cell physiological states and bacterial fitness and thus, offer new perspectives to integration of microbial physiology monitoring to improve industrial processes. This review summarizes the methods described to enumerate and characterize physiological states of technological microbiota in dairy products, and discusses the current deficiencies in relation to the industry's needs. Recent studies show that Polymerase chain reaction-based methods can successfully be applied to quantify fermenting microbes and probiotics in dairy products. Flow cytometry and omics technologies also show interesting analytical potentialities. However, they still suffer from a lack of validation and standardization for quality control analyses, as reflected by the absence of performance studies and official international standards.
On analytic modeling of lunar perturbations of artificial satellites of the earth
NASA Astrophysics Data System (ADS)
Lane, M. T.
1989-06-01
Two different procedures for analytically modeling the effects of the moon's direct gravitational force on artificial earth satellites are discussed from theoretical and numerical viewpoints. One is developed using classical series expansions of inclination and eccentricity for both the satellite and the moon, and the other employs the method of averaging. Both solutions are seen to have advantages, but it is shown that while the former is more accurate in special situations, the latter is quicker and more practical for the general orbit determination problem where observed data are used to correct the orbit in near real time.
Models of dyadic social interaction.
Griffin, Dale; Gonzalez, Richard
2003-01-01
We discuss the logic of research designs for dyadic interaction and present statistical models with parameters that are tied to psychologically relevant constructs. Building on Karl Pearson's classic nineteenth-century statistical analysis of within-organism similarity, we describe several approaches to indexing dyadic interdependence and provide graphical methods for visualizing dyadic data. We also describe several statistical and conceptual solutions to the 'levels of analytic' problem in analysing dyadic data. These analytic strategies allow the researcher to examine and measure psychological questions of interdependence and social influence. We provide illustrative data from casually interacting and romantic dyads. PMID:12689382
NASA Astrophysics Data System (ADS)
Lin, Ji; Wang, Hou
2013-07-01
We use the classical Lie-group method to study the evolution equation describing a photovoltaic-photorefractive media with the effects of diffusion process and the external electric field. We reduce it to some similarity equations firstly, and then obtain some analytically exact solutions including the soliton solution, the exponential solution and the oscillatory solution. We also obtain the numeric solitons from these similarity equations. Moreover, We show theoretically that these solutions have two types of trajectories. One type is a straight line. The other is a parabolic curve, which indicates these solitons have self-deflection.
Quantum decay model with exact explicit analytical solution
NASA Astrophysics Data System (ADS)
Marchewka, Avi; Granot, Er'El
2009-01-01
A simple decay model is introduced. The model comprises a point potential well, which experiences an abrupt change. Due to the temporal variation, the initial quantum state can either escape from the well or stay localized as a new bound state. The model allows for an exact analytical solution while having the necessary features of a decay process. The results show that the decay is never exponential, as classical dynamics predicts. Moreover, at short times the decay has a fractional power law, which differs from perturbation quantum method predictions. At long times the decay includes oscillations with an envelope that decays algebraically. This is a model where the final state can be either continuous or localized, and that has an exact analytical solution.
Suba, Dávid; Urbányi, Zoltán; Salgó, András
2016-10-01
Capillary electrophoresis techniques are widely used in the analytical biotechnology. Different electrophoretic techniques are very adequate tools to monitor size-and charge heterogenities of protein drugs. Method descriptions and development studies of capillary zone electrophoresis (CZE) have been described in literature. Most of them are performed based on the classical one-factor-at-time (OFAT) approach. In this study a very simple method development approach is described for capillary zone electrophoresis: a "two-phase-four-step" approach is introduced which allows a rapid, iterative method development process and can be a good platform for CZE method. In every step the current analytical target profile and an appropriate control strategy were established to monitor the current stage of development. A very good platform was established to investigate intact and digested protein samples. Commercially available monoclonal antibody was chosen as model protein for the method development study. The CZE method was qualificated after the development process and the results were presented. The analytical system stability was represented by the calculated RSD% value of area percentage and migration time of the selected peaks (<0.8% and <5%) during the intermediate precision investigation. Copyright © 2016 Elsevier B.V. All rights reserved.
Diffusion Dynamics and Creative Destruction in a Simple Classical Model
2015-01-01
ABSTRACT The article explores the impact of the diffusion of new methods of production on output and employment growth and income distribution within a Classical one‐sector framework. Disequilibrium paths are studied analytically and in terms of simulations. Diffusion by differential growth affects aggregate dynamics through several channels. The analysis reveals the non‐steady nature of economic change and shows that the adaptation pattern depends both on the innovation's factor‐saving bias and on the extent of the bias, which determines the strength of the selection pressure on non‐innovators. The typology of different cases developed shows various aspects of Schumpeter's concept of creative destruction. PMID:27642192
Piezoelectrically actuated flextensional micromachined ultrasound transducers--I: theory.
Perçin, Gökhan; Khuri-Yakub, Butrus T
2002-05-01
This series of two papers considers piezoelectrically actuated flextensional micromachined ultrasound transducers (PAFMUTs) and consists of theory, fabrication, and experimental parts. The theory presented in this paper is developed for an ultrasound transducer application presented in the second part. In the absence of analytical expressions for the equivalent circuit parameters of a flextensional transducer, it is difficult to calculate its optimal parameters and dimensions and difficult to choose suitable materials. The influence of coupling between flexural and extensional deformation and that of coupling between the structure and the acoustic volume on the dynamic response of piezoelectrically actuated flextensional transducer are analyzed using two analytical methods: classical thin (Kirchhoff) plate theory and Mindlin plate theory. Classical thin plate theory and Mindlin plate theory are applied to derive two-dimensional plate equations for the transducer and to calculate the coupled electromechanical field variables such as mechanical displacement and electrical input impedance. In these methods, the variations across the thickness direction vanish by using the bending moments per unit length or stress resultants. Thus, two-dimensional plate equations for a step-wise laminated circular plate are obtained as well as two different solutions to the corresponding systems. An equivalent circuit of the transducer is also obtained from these solutions.
Lozano, Valeria A; Ibañez, Gabriela A; Olivieri, Alejandro C
2009-10-05
In the presence of analyte-background interactions and a significant background signal, both second-order multivariate calibration and standard addition are required for successful analyte quantitation achieving the second-order advantage. This report discusses a modified second-order standard addition method, in which the test data matrix is subtracted from the standard addition matrices, and quantitation proceeds via the classical external calibration procedure. It is shown that this novel data processing method allows one to apply not only parallel factor analysis (PARAFAC) and multivariate curve resolution-alternating least-squares (MCR-ALS), but also the recently introduced and more flexible partial least-squares (PLS) models coupled to residual bilinearization (RBL). In particular, the multidimensional variant N-PLS/RBL is shown to produce the best analytical results. The comparison is carried out with the aid of a set of simulated data, as well as two experimental data sets: one aimed at the determination of salicylate in human serum in the presence of naproxen as an additional interferent, and the second one devoted to the analysis of danofloxacin in human serum in the presence of salicylate.
Evaluation of analytical performance based on partial order methodology.
Carlsen, Lars; Bruggemann, Rainer; Kenessova, Olga; Erzhigitov, Erkin
2015-01-01
Classical measurements of performances are typically based on linear scales. However, in analytical chemistry a simple scale may be not sufficient to analyze the analytical performance appropriately. Here partial order methodology can be helpful. Within the context described here, partial order analysis can be seen as an ordinal analysis of data matrices, especially to simplify the relative comparisons of objects due to their data profile (the ordered set of values an object have). Hence, partial order methodology offers a unique possibility to evaluate analytical performance. In the present data as, e.g., provided by the laboratories through interlaboratory comparisons or proficiency testings is used as an illustrative example. However, the presented scheme is likewise applicable for comparison of analytical methods or simply as a tool for optimization of an analytical method. The methodology can be applied without presumptions or pretreatment of the analytical data provided in order to evaluate the analytical performance taking into account all indicators simultaneously and thus elucidating a "distance" from the true value. In the present illustrative example it is assumed that the laboratories analyze a given sample several times and subsequently report the mean value, the standard deviation and the skewness, which simultaneously are used for the evaluation of the analytical performance. The analyses lead to information concerning (1) a partial ordering of the laboratories, subsequently, (2) a "distance" to the Reference laboratory and (3) a classification due to the concept of "peculiar points". Copyright © 2014 Elsevier B.V. All rights reserved.
A new time domain random walk method for solute transport in 1-D heterogeneous media
DOE Office of Scientific and Technical Information (OSTI.GOV)
Banton, O.; Delay, F.; Porel, G.
A new method to simulate solute transport in 1-D heterogeneous media is presented. This time domain random walk method (TDRW), similar in concept to the classical random walk method, calculates the arrival time of a particle cloud at a given location (directly providing the solute breakthrough curve). The main advantage of the method is that the restrictions on the space increments and the time steps which exist with the finite differences and random walk methods are avoided. In a homogeneous zone, the breakthrough curve (BTC) can be calculated directly at a given distance using a few hundred particles or directlymore » at the boundary of the zone. Comparisons with analytical solutions and with the classical random walk method show the reliability of this method. The velocity and dispersivity calculated from the simulated results agree within two percent with the values used as input in the model. For contrasted heterogeneous media, the random walk can generate high numerical dispersion, while the time domain approach does not.« less
Bagheri, Zahra; Massudi, Reza
2017-05-01
An analytical quantum model is used to calculate electrical permittivity of a metal nanoparticle located in an adjacent molecule. Different parameters, such as radiative and non-radiative decay rates, quantum yield, electrical field enhancement factor, and fluorescence enhancement are calculated by such a model and they are compared with those obtained by using the classical Drude model. It is observed that using an analytical quantum model presents a higher enhancement factor, up to 30%, as compared to classical model for nanoparticles smaller than 10 nm. Furthermore, the results are in better agreement with those experimentally realized.
DOE Office of Scientific and Technical Information (OSTI.GOV)
BAILEY, DAVID H.; BORWEIN, JONATHAN M.
A recent paper by the present authors, together with mathematical physicists David Broadhurst and M. Larry Glasser, explored Bessel moment integrals, namely definite integrals of the general form {integral}{sub 0}{sup {infinity}} t{sup m}f{sup n}(t) dt, where the function f(t) is one of the classical Bessel functions. In that paper, numerous previously unknown analytic evaluations were obtained, using a combination of analytic methods together with some fairly high-powered numerical computations, often performed on highly parallel computers. In several instances, while we were able to numerically discover what appears to be a solid analytic identity, based on extremely high-precision numerical computations, wemore » were unable to find a rigorous proof. Thus we present here a brief list of some of these unproven but numerically confirmed identities.« less
Dönmez, Ozlem Aksu; Aşçi, Bürge; Bozdoğan, Abdürrezzak; Sungur, Sidika
2011-02-15
A simple and rapid analytical procedure was proposed for the determination of chromatographic peaks by means of partial least squares multivariate calibration (PLS) of high-performance liquid chromatography with diode array detection (HPLC-DAD). The method is exemplified with analysis of quaternary mixtures of potassium guaiacolsulfonate (PG), guaifenesin (GU), diphenhydramine HCI (DP) and carbetapentane citrate (CP) in syrup preparations. In this method, the area does not need to be directly measured and predictions are more accurate. Though the chromatographic and spectral peaks of the analytes were heavily overlapped and interferents coeluted with the compounds studied, good recoveries of analytes could be obtained with HPLC-DAD coupled with PLS calibration. This method was tested by analyzing the synthetic mixture of PG, GU, DP and CP. As a comparison method, a classsical HPLC method was used. The proposed methods were applied to syrups samples containing four drugs and the obtained results were statistically compared with each other. Finally, the main advantage of HPLC-PLS method over the classical HPLC method tried to emphasized as the using of simple mobile phase, shorter analysis time and no use of internal standard and gradient elution. Copyright © 2010 Elsevier B.V. All rights reserved.
Designing stellarator coils by a modified Newton method using FOCUS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhu, Caoxiang; Hudson, Stuart R.; Song, Yuntao
To find the optimal coils for stellarators, nonlinear optimization algorithms are applied in existing coil design codes. However, none of these codes have used the information from the second-order derivatives. In this paper, we present a modified Newton method in the recently developed code FOCUS. The Hessian matrix is calculated with analytically derived equations. Its inverse is approximated by a modified Cholesky factorization and applied in the iterative scheme of a classical Newton method. Using this method, FOCUS is able to recover the W7-X modular coils starting from a simple initial guess. Results demonstrate significant advantages.
Designing stellarator coils by a modified Newton method using FOCUS
NASA Astrophysics Data System (ADS)
Zhu, Caoxiang; Hudson, Stuart R.; Song, Yuntao; Wan, Yuanxi
2018-06-01
To find the optimal coils for stellarators, nonlinear optimization algorithms are applied in existing coil design codes. However, none of these codes have used the information from the second-order derivatives. In this paper, we present a modified Newton method in the recently developed code FOCUS. The Hessian matrix is calculated with analytically derived equations. Its inverse is approximated by a modified Cholesky factorization and applied in the iterative scheme of a classical Newton method. Using this method, FOCUS is able to recover the W7-X modular coils starting from a simple initial guess. Results demonstrate significant advantages.
Designing stellarator coils by a modified Newton method using FOCUS
Zhu, Caoxiang; Hudson, Stuart R.; Song, Yuntao; ...
2018-03-22
To find the optimal coils for stellarators, nonlinear optimization algorithms are applied in existing coil design codes. However, none of these codes have used the information from the second-order derivatives. In this paper, we present a modified Newton method in the recently developed code FOCUS. The Hessian matrix is calculated with analytically derived equations. Its inverse is approximated by a modified Cholesky factorization and applied in the iterative scheme of a classical Newton method. Using this method, FOCUS is able to recover the W7-X modular coils starting from a simple initial guess. Results demonstrate significant advantages.
Plate and butt-weld stresses beyond elastic limit, material and structural modeling
NASA Technical Reports Server (NTRS)
Verderaime, V.
1991-01-01
Ultimate safety factors of high performance structures depend on stress behavior beyond the elastic limit, a region not too well understood. An analytical modeling approach was developed to gain fundamental insights into inelastic responses of simple structural elements. Nonlinear material properties were expressed in engineering stresses and strains variables and combined with strength of material stress and strain equations similar to numerical piece-wise linear method. Integrations are continuous which allows for more detailed solutions. Included with interesting results are the classical combined axial tension and bending load model and the strain gauge conversion to stress beyond the elastic limit. Material discontinuity stress factors in butt-welds were derived. This is a working-type document with analytical methods and results applicable to all industries of high reliability structures.
Discordance between net analyte signal theory and practical multivariate calibration.
Brown, Christopher D
2004-08-01
Lorber's concept of net analyte signal is reviewed in the context of classical and inverse least-squares approaches to multivariate calibration. It is shown that, in the presence of device measurement error, the classical and inverse calibration procedures have radically different theoretical prediction objectives, and the assertion that the popular inverse least-squares procedures (including partial least squares, principal components regression) approximate Lorber's net analyte signal vector in the limit is disproved. Exact theoretical expressions for the prediction error bias, variance, and mean-squared error are given under general measurement error conditions, which reinforce the very discrepant behavior between these two predictive approaches, and Lorber's net analyte signal theory. Implications for multivariate figures of merit and numerous recently proposed preprocessing treatments involving orthogonal projections are also discussed.
Odoardi, Sara; Fisichella, Marco; Romolo, Francesco Saverio; Strano-Rossi, Sabina
2015-09-01
The increasing number of new psychoactive substances (NPS) present in the illicit market render their identification in biological fluids/tissues of great concern for clinical and forensic toxicology. Analytical methods able to detect the huge number of substances that can be used are sought, considering also that many NPS are not detected by the standard immunoassays generally used for routine drug screening. The aim of this work was to develop a method for the screening of different classes of NPS (a total of 78 analytes including cathinones, synthetic cannabinoids, phenethylamines, piperazines, ketamine and analogues, benzofurans, tryptamines) from blood samples. The simultaneous extraction of analytes was performed by Dispersive Liquid/Liquid Microextraction DLLME, a very rapid, cheap and efficient extraction technique that employs microliters amounts of organic solvents. Analyses were performed by a target Ultrahigh Performance Liquid Chromatography tandem Mass Spectrometry (UHPLC-MS/MS) method in multiple reaction monitoring (MRM). The method allowed the detection of the studied analytes with limits of detection (LODs) ranging from 0.2 to 2ng/mL. The proposed DLLME method can be used as an alternative to classical liquid/liquid or solid-phase extraction techniques due to its rapidity, necessity to use only microliters amounts of organic solvents, cheapness, and to its ability to extract simultaneously a huge number of analytes also from different chemical classes. The method was then applied to 60 authentic real samples from forensic cases, demonstrating its suitability for the screening of a wide number of NPS. Copyright © 2015 Elsevier B.V. All rights reserved.
Semiclassical evaluation of quantum fidelity
NASA Astrophysics Data System (ADS)
Vanicek, Jiri
2004-03-01
We present a numerically feasible semiclassical method to evaluate quantum fidelity (Loschmidt echo) in a classically chaotic system. It was thought that such evaluation would be intractable, but instead we show that a uniform semiclassical expression not only is tractable but it gives remarkably accurate numerical results for the standard map in both the Fermi-golden-rule and Lyapunov regimes. Because it allows a Monte-Carlo evaluation, this uniform expression is accurate at times where there are 10^70 semiclassical contributions. Remarkably, the method also explicitly contains the ``building blocks'' of analytical theories of recent literature, and thus permits a direct test of approximations made by other authors in these regimes, rather than an a posteriori comparison with numerical results. We explain in more detail the extended validity of the classical perturbation approximation and thus provide a ``defense" of the linear response theory from the famous Van Kampen objection. We point out the potential use of our uniform expression in other areas because it gives a most direct link between the quantum Feynman propagator based on the path integral and the semiclassical Van Vleck propagator based on the sum over classical trajectories. Finally, we test the applicability of our method in integrable and mixed systems.
Calculations of Total Classical Cross Sections for a Central Field
NASA Astrophysics Data System (ADS)
Tsyganov, D. L.
2018-07-01
In order to find the total collision cross-section a direct method of the effective potential (EPM) in the framework of classical mechanics was proposed. EPM allows to over come both the direct scattering problem (calculation of the total collision cross-section) and the inverse scattering problem (reconstruction of the scattering potential) quickly and effectively. A general analytical expression was proposed for the generalized Lennard-Jones potentials: (6-3), (9-3), (12-3), (6-4), (8-4), (12-4), (8-6), (12-6), (18-6). The values for the scattering potential of the total cross section for pairs such as electron-N2, N-N, and O-O2 were obtained in a good approximation.
Dietary fibre: challenges in production and use of food composition data.
Westenbrink, Susanne; Brunt, Kommer; van der Kamp, Jan-Willem
2013-10-01
Dietary fibre is a heterogeneous group of components for which several definitions and analytical methods were developed over the past decades, causing confusion among users and producers of dietary fibre data in food composition databases. An overview is given of current definitions and analytical methods. Some of the issues related to maintaining dietary fibre values in food composition databases are discussed. Newly developed AOAC methods (2009.01 or modifications) yield higher dietary fibre values, due to the inclusion of low molecular weight dietary fibre and resistant starch. For food composition databases procedures need to be developed to combine 'classic' and 'new' dietary fibre values since re-analysing all foods on short notice is impossible due to financial restrictions. Standardised value documentation procedures are important to evaluate dietary fibre values from several sources before exchanging and using the data, e.g. for dietary intake research. Copyright © 2012 Elsevier Ltd. All rights reserved.
Wigner phase space distribution via classical adiabatic switching
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bose, Amartya; Makri, Nancy; Department of Physics, University of Illinois, 1110 W. Green Street, Urbana, Illinois 61801
2015-09-21
Evaluation of the Wigner phase space density for systems of many degrees of freedom presents an extremely demanding task because of the oscillatory nature of the Fourier-type integral. We propose a simple and efficient, approximate procedure for generating the Wigner distribution that avoids the computational difficulties associated with the Wigner transform. Starting from a suitable zeroth-order Hamiltonian, for which the Wigner density is available (either analytically or numerically), the phase space distribution is propagated in time via classical trajectories, while the perturbation is gradually switched on. According to the classical adiabatic theorem, each trajectory maintains a constant action if themore » perturbation is switched on infinitely slowly. We show that the adiabatic switching procedure produces the exact Wigner density for harmonic oscillator eigenstates and also for eigenstates of anharmonic Hamiltonians within the Wentzel-Kramers-Brillouin (WKB) approximation. We generalize the approach to finite temperature by introducing a density rescaling factor that depends on the energy of each trajectory. Time-dependent properties are obtained simply by continuing the integration of each trajectory under the full target Hamiltonian. Further, by construction, the generated approximate Wigner distribution is invariant under classical propagation, and thus, thermodynamic properties are strictly preserved. Numerical tests on one-dimensional and dissipative systems indicate that the method produces results in very good agreement with those obtained by full quantum mechanical methods over a wide temperature range. The method is simple and efficient, as it requires no input besides the force fields required for classical trajectory integration, and is ideal for use in quasiclassical trajectory calculations.« less
Analytic Methods for Adjusting Subjective Rating Schemes
1976-06-01
individual performance. The approach developed here is a variant of the classical linear regression model. Specifically, it la proposed that...values of y and X. Moreover, this difference la gener- ally independent of sample size, so that LS estimates are different from ML estimates at...baervationa. H^ever, aa T. -. - ,„ aU . th(. Hit (4.10) la aatlafled, and EKV and ML eatlnatea are equlvalent A practical proble, in applying
Bukhvostov-Lipatov model and quantum-classical duality
NASA Astrophysics Data System (ADS)
Bazhanov, Vladimir V.; Lukyanov, Sergei L.; Runov, Boris A.
2018-02-01
The Bukhvostov-Lipatov model is an exactly soluble model of two interacting Dirac fermions in 1 + 1 dimensions. The model describes weakly interacting instantons and anti-instantons in the O (3) non-linear sigma model. In our previous work [arxiv:arXiv:1607.04839] we have proposed an exact formula for the vacuum energy of the Bukhvostov-Lipatov model in terms of special solutions of the classical sinh-Gordon equation, which can be viewed as an example of a remarkable duality between integrable quantum field theories and integrable classical field theories in two dimensions. Here we present a complete derivation of this duality based on the classical inverse scattering transform method, traditional Bethe ansatz techniques and analytic theory of ordinary differential equations. In particular, we show that the Bethe ansatz equations defining the vacuum state of the quantum theory also define connection coefficients of an auxiliary linear problem for the classical sinh-Gordon equation. Moreover, we also present details of the derivation of the non-linear integral equations determining the vacuum energy and other spectral characteristics of the model in the case when the vacuum state is filled by 2-string solutions of the Bethe ansatz equations.
Chu, Khim Hoong
2017-11-09
Surface diffusion coefficients may be estimated by fitting solutions of a diffusion model to batch kinetic data. For non-linear systems, a numerical solution of the diffusion model's governing equations is generally required. We report here the application of the classic Langmuir kinetics model to extract surface diffusion coefficients from batch kinetic data. The use of the Langmuir kinetics model in lieu of the conventional surface diffusion model allows derivation of an analytical expression. The parameter estimation procedure requires determining the Langmuir rate coefficient from which the pertinent surface diffusion coefficient is calculated. Surface diffusion coefficients within the 10 -9 to 10 -6 cm 2 /s range obtained by fitting the Langmuir kinetics model to experimental kinetic data taken from the literature are found to be consistent with the corresponding values obtained from the traditional surface diffusion model. The virtue of this simplified parameter estimation method is that it reduces the computational complexity as the analytical expression involves only an algebraic equation in closed form which is easily evaluated by spreadsheet computation.
NASA Astrophysics Data System (ADS)
Dutykh, Denys; Hoefer, Mark; Mitsotakis, Dimitrios
2018-04-01
Some effects of surface tension on fully nonlinear, long, surface water waves are studied by numerical means. The differences between various solitary waves and their interactions in subcritical and supercritical surface tension regimes are presented. Analytical expressions for new peaked traveling wave solutions are presented in the dispersionless case of critical surface tension. Numerical experiments are performed using a high-accurate finite element method based on smooth cubic splines and the four-stage, classical, explicit Runge-Kutta method of order 4.
An analytic approach to resolving problems in medical ethics.
Candee, D; Puka, B
1984-01-01
Education in ethics among practising professionals should provide a systematic procedure for resolving moral problems. A method for such decision-making is outlined using the two classical orientations in moral philosophy, teleology and deontology. Teleological views such as utilitarianism resolve moral dilemmas by calculating the excess of good over harm expected to be produced by each feasible alternative for action. The deontological view focuses on rights, duties, and principles of justice. Both methods are used to resolve the 1971 Johns Hopkins case of a baby born with Down's syndrome and duodenal atresia. PMID:6234395
Tachyon field in loop quantum cosmology: Inflation and evolution picture
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xiong Huaui; Zhu Jianyang
2007-04-15
Loop quantum cosmology (LQC) predicts a nonsingular evolution of the universne through a bounce in the high energy region. We show that this is always true in tachyon matter LQC. Differing from the classical Friedman-Robertson-Walker (FRW) cosmology, the super inflation can appear in the tachyon matter LQC; furthermore, the inflation can be extended to the region where classical inflation stops. Using the numerical method, we give an evolution picture of the tachyon field with an exponential potential in the context of LQC. It indicates that the quantum dynamical solutions have the same attractive behavior as the classical solutions do. Themore » whole evolution of the tachyon field is that in the distant past, the tachyon field--being in the contracting cosmology--accelerates to climb up the potential hill with a negative velocity; then at the boundary the tachyon field is bounced into an expanding universe with positive velocity rolling down to the bottom of the potential. In the slow roll limit, we compare the quantum inflation with the classical case in both an analytic and a numerical way.« less
New robust bilinear least squares method for the analysis of spectral-pH matrix data.
Goicoechea, Héctor C; Olivieri, Alejandro C
2005-07-01
A new second-order multivariate method has been developed for the analysis of spectral-pH matrix data, based on a bilinear least-squares (BLLS) model achieving the second-order advantage and handling multiple calibration standards. A simulated Monte Carlo study of synthetic absorbance-pH data allowed comparison of the newly proposed BLLS methodology with constrained parallel factor analysis (PARAFAC) and with the combination multivariate curve resolution-alternating least-squares (MCR-ALS) technique under different conditions of sample-to-sample pH mismatch and analyte-background ratio. The results indicate an improved prediction ability for the new method. Experimental data generated by measuring absorption spectra of several calibration standards of ascorbic acid and samples of orange juice were subjected to second-order calibration analysis with PARAFAC, MCR-ALS, and the new BLLS method. The results indicate that the latter method provides the best analytical results in regard to analyte recovery in samples of complex composition requiring strict adherence to the second-order advantage. Linear dependencies appear when multivariate data are produced by using the pH or a reaction time as one of the data dimensions, posing a challenge to classical multivariate calibration models. The presently discussed algorithm is useful for these latter systems.
On Bayesian Testing of Additive Conjoint Measurement Axioms Using Synthetic Likelihood.
Karabatsos, George
2018-06-01
This article introduces a Bayesian method for testing the axioms of additive conjoint measurement. The method is based on an importance sampling algorithm that performs likelihood-free, approximate Bayesian inference using a synthetic likelihood to overcome the analytical intractability of this testing problem. This new method improves upon previous methods because it provides an omnibus test of the entire hierarchy of cancellation axioms, beyond double cancellation. It does so while accounting for the posterior uncertainty that is inherent in the empirical orderings that are implied by these axioms, together. The new method is illustrated through a test of the cancellation axioms on a classic survey data set, and through the analysis of simulated data.
Garnier, Alain; Gaillet, Bruno
2015-12-01
Not so many fermentation mathematical models allow analytical solutions of batch process dynamics. The most widely used is the combination of the logistic microbial growth kinetics with Luedeking-Piret bioproduct synthesis relation. However, the logistic equation is principally based on formalistic similarities and only fits a limited range of fermentation types. In this article, we have developed an analytical solution for the combination of Monod growth kinetics with Luedeking-Piret relation, which can be identified by linear regression and used to simulate batch fermentation evolution. Two classical examples are used to show the quality of fit and the simplicity of the method proposed. A solution for the combination of Haldane substrate-limited growth model combined with Luedeking-Piret relation is also provided. These models could prove useful for the analysis of fermentation data in industry as well as academia. © 2015 Wiley Periodicals, Inc.
McCain, Stephanie L; Flatland, Bente; Schumacher, Juergen P; Clarke Iii, Elsburgh O; Fry, Michael M
2010-12-01
Advantages of handheld and small bench-top biochemical analyzers include requirements for smaller sample volume and practicality for use in the field or in practices, but little has been published on the performance of these instruments compared with standard reference methods in analysis of reptilian blood. The aim of this study was to compare reptilian blood biochemical values obtained using the Abaxis VetScan Classic bench-top analyzer and a Heska i-STAT handheld analyzer with values obtained using a Roche Hitachi 911 chemical analyzer. Reptiles, including 14 bearded dragons (Pogona vitticeps), 4 blue-tongued skinks (Tiliqua gigas), 8 Burmese star tortoises (Geochelone platynota), 10 Indian star tortoises (Geochelone elegans), 5 red-tailed boas (Boa constrictor), and 5 Northern pine snakes (Pituophis melanoleucus melanoleucus), were manually restrained, and a single blood sample was obtained and divided for analysis. Results for concentrations of albumin, bile acids, calcium, glucose, phosphates, potassium, sodium, total protein, and uric acid and activities of aspartate aminotransferase and creatine kinase obtained from the VetScan Classic and Hitachi 911 were compared. Results for concentrations of chloride, glucose, potassium, and sodium obtained from the i-STAT and Hitachi 911 were compared. Compared with results from the Hitachi 911, those from the VetScan Classic and i-STAT had variable correlations, and constant or proportional bias was found for many analytes. Bile acid data could not be evaluated because results for 44 of 45 samples fell below the lower linearity limit of the VetScan Classic. Although the 2 portable instruments might provide measurements with clinical utility, there were significant differences compared with the reference analyzer, and development of analyzer-specific reference intervals is recommended. ©2010 American Society for Veterinary Clinical Pathology.
NASA Astrophysics Data System (ADS)
Gerstmayr, Johannes; Irschik, Hans
2008-12-01
In finite element methods that are based on position and slope coordinates, a representation of axial and bending deformation by means of an elastic line approach has become popular. Such beam and plate formulations based on the so-called absolute nodal coordinate formulation have not yet been verified sufficiently enough with respect to analytical results or classical nonlinear rod theories. Examining the existing planar absolute nodal coordinate element, which uses a curvature proportional bending strain expression, it turns out that the deformation does not fully agree with the solution of the geometrically exact theory and, even more serious, the normal force is incorrect. A correction based on the classical ideas of the extensible elastica and geometrically exact theories is applied and a consistent strain energy and bending moment relations are derived. The strain energy of the solid finite element formulation of the absolute nodal coordinate beam is based on the St. Venant-Kirchhoff material: therefore, the strain energy is derived for the latter case and compared to classical nonlinear rod theories. The error in the original absolute nodal coordinate formulation is documented by numerical examples. The numerical example of a large deformation cantilever beam shows that the normal force is incorrect when using the previous approach, while a perfect agreement between the absolute nodal coordinate formulation and the extensible elastica can be gained when applying the proposed modifications. The numerical examples show a very good agreement of reference analytical and numerical solutions with the solutions of the proposed beam formulation for the case of large deformation pre-curved static and dynamic problems, including buckling and eigenvalue analysis. The resulting beam formulation does not employ rotational degrees of freedom and therefore has advantages compared to classical beam elements regarding energy-momentum conservation.
NASA Astrophysics Data System (ADS)
Kiani, Keivan
2017-09-01
Large deformation regime of micro-scale slender beam-like structures subjected to axially pointed loads is of high interest to nanotechnologists and applied mechanics community. Herein, size-dependent nonlinear governing equations are derived by employing modified couple stress theory. Under various boundary conditions, analytical relations between axially applied loads and deformations are presented. Additionally, a novel Galerkin-based assumed mode method (AMM) is established to solve the highly nonlinear equations. In some particular cases, the predicted results by the analytical approach are also checked with those of AMM and a reasonably good agreement is reported. Subsequently, the key role of the material length scale on the load-deformation of microbeams is discussed and the deficiencies of the classical elasticity theory in predicting such a crucial mechanical behavior are explained in some detail. The influences of slenderness ratio and thickness of the microbeam on the obtained results are also examined. The present work could be considered as a pivotal step in better realizing the postbuckling behavior of nano-/micro- electro-mechanical systems consist of microbeams.
Principles of Micellar Electrokinetic Capillary Chromatography Applied in Pharmaceutical Analysis
Hancu, Gabriel; Simon, Brigitta; Rusu, Aura; Mircia, Eleonora; Gyéresi, Árpád
2013-01-01
Since its introduction capillary electrophoresis has shown great potential in areas where electrophoretic techniques have rarely been used before, including here the analysis of pharmaceutical substances. The large majority of pharmaceutical substances are neutral from electrophoretic point of view, consequently separations by the classic capillary zone electrophoresis; where separation is based on the differences between the own electrophoretic mobilities of the analytes; are hard to achieve. Micellar electrokinetic capillary chromatography, a hybrid method that combines chromatographic and electrophoretic separation principles, extends the applicability of capillary electrophoretic methods to neutral analytes. In micellar electrokinetic capillary chromatography, surfactants are added to the buffer solution in concentration above their critical micellar concentrations, consequently micelles are formed; micelles that undergo electrophoretic migration like any other charged particle. The separation is based on the differential partitioning of an analyte between the two-phase system: the mobile aqueous phase and micellar pseudostationary phase. The present paper aims to summarize the basic aspects regarding separation principles and practical applications of micellar electrokinetic capillary chromatography, with particular attention to those relevant in pharmaceutical analysis. PMID:24312804
NASA Astrophysics Data System (ADS)
Mojahedi, Mahdi; Shekoohinejad, Hamidreza
2018-02-01
In this paper, temperature distribution in the continuous and pulsed end-pumped Nd:YAG rod crystal is determined using nonclassical and classical heat conduction theories. In order to find the temperature distribution in crystal, heat transfer differential equations of crystal with consideration of boundary conditions are derived based on non-Fourier's model and temperature distribution of the crystal is achieved by an analytical method. Then, by transferring non-Fourier differential equations to matrix equations, using finite element method, temperature and stress of every point of crystal are calculated in the time domain. According to the results, a comparison between classical and nonclassical theories is represented to investigate rupture power values. In continuous end pumping with equal input powers, non-Fourier theory predicts greater temperature and stress compared to Fourier theory. It also shows that with an increase in relaxation time, crystal rupture power decreases. Despite of these results, in single rectangular pulsed end-pumping condition, with an equal input power, Fourier theory indicates higher temperature and stress rather than non-Fourier theory. It is also observed that, when the relaxation time increases, maximum amounts of temperature and stress decrease.
Completed Beltrami-Michell Formulation for Analyzing Radially Symmetrical Bodies
NASA Technical Reports Server (NTRS)
Kaljevic, Igor; Saigal, Sunil; Hopkins, Dale A.; Patnaik, Surya N.
1994-01-01
A force method formulation, the completed Beltrami-Michell formulation (CBMF), has been developed for analyzing boundary value problems in elastic continua. The CBMF is obtained by augmenting the classical Beltrami-Michell formulation with novel boundary compatibility conditions. It can analyze general elastic continua with stress, displacement, or mixed boundary conditions. The CBMF alleviates the limitations of the classical formulation, which can solve stress boundary value problems only. In this report, the CBMF is specialized for plates and shells. All equations of the CBMF, including the boundary compatibility conditions, are derived from the variational formulation of the integrated force method (IFM). These equations are defined only in terms of stresses. Their solution for kinematically stable elastic continua provides stress fields without any reference to displacements. In addition, a stress function formulation for plates and shells is developed by augmenting the classical Airy's formulation with boundary compatibility conditions expressed in terms of the stress function. The versatility of the CBMF and the augmented stress function formulation is demonstrated through analytical solutions of several mixed boundary value problems. The example problems include a composite circular plate and a composite circular cylindrical shell under the simultaneous actions of mechanical and thermal loads.
Non-homogeneous harmonic analysis: 16 years of development
NASA Astrophysics Data System (ADS)
Volberg, A. L.; Èiderman, V. Ya
2013-12-01
This survey contains results and methods in the theory of singular integrals, a theory which has been developing dramatically in the last 15-20 years. The central (although not the only) topic of the paper is the connection between the analytic properties of integrals and operators with Calderón-Zygmund kernels and the geometric properties of the measures. The history is traced of the classical Painlevé problem of describing removable singularities of bounded analytic functions, which has provided a strong incentive for the development of this branch of harmonic analysis. The progress of recent decades has largely been based on the creation of an apparatus for dealing with non-homogeneous measures, and much attention is devoted to this apparatus here. Several open questions are stated, first and foremost in the multidimensional case, where the method of curvature of a measure is not available. Bibliography: 128 titles.
On the Analysis of Multistep-Out-of-Grid Method for Celestial Mechanics Tasks
NASA Astrophysics Data System (ADS)
Olifer, L.; Choliy, V.
2016-09-01
Occasionally, there is a necessity in high-accurate prediction of celestial body trajectory. The most common way to do that is to solve Kepler's equation analytically or to use Runge-Kutta or Adams integrators to solve equation of motion numerically. For low-orbit satellites, there is a critical need in accounting geopotential and another forces which influence motion. As the result, the right side of equation of motion becomes much bigger, and classical integrators will not be quite effective. On the other hand, there is a multistep-out-of-grid (MOG) method which combines Runge-Kutta and Adams methods. The MOG method is based on using m on-grid values of the solution and n × m off-grid derivative estimations. Such method could provide stable integrators of maximum possible order, O (hm+mn+n-1). The main subject of this research was to implement and analyze the MOG method for solving satellite equation of motion with taking into account Earth geopotential model (ex. EGM2008 (Pavlis at al., 2008)) and with possibility to add other perturbations such as atmospheric drag or solar radiation pressure. Simulations were made for satellites on low orbit and with various eccentricities (from 0.1 to 0.9). Results of the MOG integrator were compared with results of Runge-Kutta and Adams integrators. It was shown that the MOG method has better accuracy than classical ones of the same order and less right-hand value estimations when is working on high orders. That gives it some advantage over "classical" methods.
Quantifying non-linear dynamics of mass-springs in series oscillators via asymptotic approach
NASA Astrophysics Data System (ADS)
Starosta, Roman; Sypniewska-Kamińska, Grażyna; Awrejcewicz, Jan
2017-05-01
Dynamical regular response of an oscillator with two serially connected springs with nonlinear characteristics of cubic type and governed by a set of differential-algebraic equations (DAEs) is studied. The classical approach of the multiple scales method (MSM) in time domain has been employed and appropriately modified to solve the governing DAEs of two systems, i.e. with one- and two degrees-of-freedom. The approximate analytical solutions have been verified by numerical simulations.
Thermal stresses and deflections of cross-ply laminated plates using refined plate theories
NASA Technical Reports Server (NTRS)
Khdeir, A. A.; Reddy, J. N.
1991-01-01
Exact analytical solutions of refined plate theories are developed to study the thermal stresses and deflections of cross-ply rectangular plates. The state-space approach in conjunction with the Levy method is used to solve exactly the governing equations of the theories under various boundary conditions. Numerical results of the higher-order theory of Reddy for thermal stresses and deflections are compared with those obtained using the classical and first-order plate theories.
HUMAN EYE OPTICS: Determination of positions of optical elements of the human eye
NASA Astrophysics Data System (ADS)
Galetskii, S. O.; Cherezova, T. Yu
2009-02-01
An original method for noninvasive determining the positions of elements of intraocular optics is proposed. The analytic dependence of the measurement error on the optical-scheme parameters and the restriction in distance from the element being measured are determined within the framework of the method proposed. It is shown that the method can be efficiently used for determining the position of elements in the classical Gullstrand eye model and personalised eye models. The positions of six optical surfaces of the Gullstrand eye model and four optical surfaces of the personalised eye model can be determined with an error of less than 0.25 mm.
Fourier series expansion for nonlinear Hamiltonian oscillators.
Méndez, Vicenç; Sans, Cristina; Campos, Daniel; Llopis, Isaac
2010-06-01
The problem of nonlinear Hamiltonian oscillators is one of the classical questions in physics. When an analytic solution is not possible, one can resort to obtaining a numerical solution or using perturbation theory around the linear problem. We apply the Fourier series expansion to find approximate solutions to the oscillator position as a function of time as well as the period-amplitude relationship. We compare our results with other recent approaches such as variational methods or heuristic approximations, in particular the Ren-He's method. Based on its application to the Duffing oscillator, the nonlinear pendulum and the eardrum equation, it is shown that the Fourier series expansion method is the most accurate.
A method of extracting speed-dependent vector correlations from 2 + 1 REMPI ion images.
Wei, Wei; Wallace, Colin J; Grubb, Michael P; North, Simon W
2017-07-07
We present analytical expressions for extracting Dixon's bipolar moments in the semi-classical limit from experimental anisotropy parameters of sliced or reconstructed non-sliced images. The current method focuses on images generated by 2 + 1 REMPI (Resonance Enhanced Multi-photon Ionization) and is a necessary extension of our previously published 1 + 1 REMPI equations. Two approaches for applying the new equations, direct inversion and forward convolution, are presented. As demonstration of the new method, bipolar moments were extracted from images of carbonyl sulfide (OCS) photodissociation at 230 nm and NO 2 photodissociation at 355 nm, and the results are consistent with previous publications.
NASA Astrophysics Data System (ADS)
Hosseini-Hashemi, Shahrokh; Sepahi-Boroujeni, Amin; Sepahi-Boroujeni, Saeid
2018-04-01
Normal impact performance of a system including a fullerene molecule and a single-layered graphene sheet is studied in the present paper. Firstly, through a mathematical approach, a new contact law is derived to describe the overall non-bonding interaction forces of the "hollow indenter-target" system. Preliminary verifications show that the derived contact law gives a reliable picture of force field of the system which is in good agreements with the results of molecular dynamics (MD) simulations. Afterwards, equation of the transversal motion of graphene sheet is utilized on the basis of both the nonlocal theory of elasticity and the assumptions of classical plate theory. Then, to derive dynamic behavior of the system, a set including the proposed contact law and the equations of motion of both graphene sheet and fullerene molecule is solved numerically. In order to evaluate outcomes of this method, the problem is modeled by MD simulation. Despite intrinsic differences between analytical and MD methods as well as various errors arise due to transient nature of the problem, acceptable agreements are established between analytical and MD outcomes. As a result, the proposed analytical method can be reliably used to address similar impact problems. Furthermore, it is found that a single-layered graphene sheet is capable of trapping fullerenes approaching with low velocities. Otherwise, in case of rebound, the sheet effectively absorbs predominant portion of fullerene energy.
Separation techniques for the clean-up of radioactive mixed waste for ICP-AES/ICP-MS analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Swafford, A.M.; Keller, J.M.
1993-03-17
Two separation techniques were investigated for the clean-up of typical radioactive mixed waste samples requiring elemental analysis by Inductively Coupled Plasma-Atomic Emission Spectroscopy (ICP-AES) or Inductively Coupled Plasma-Mass Spectrometry (ICP-MS). These measurements frequently involve regulatory or compliance criteria which include the determination of elements on the EPA Target Analyte List (TAL). These samples usually consist of both an aqueous phase and a solid phase which is mostly an inorganic sludge. Frequently, samples taken from the waste tanks contain high levels of uranium and thorium which can cause spectral interferences in ICP-AES or ICP-MS analysis. The removal of these interferences ismore » necessary to determine the presence of the EPA TAL elements in the sample. Two clean-up methods were studied on simulated aqueous waste samples containing the EPA TAL elements. The first method studied was a classical procedure based upon liquid-liquid extraction using tri-n- octylphosphine oxide (TOPO) dissolved in cyclohexane. The second method investigated was based on more recently developed techniques using extraction chromatography; specifically the use of a commercially available Eichrom TRU[center dot]Spec[trademark] column. Literature on these two methods indicates the efficient removal of uranium and thorium from properly prepared samples and provides considerable qualitative information on the extraction behavior of many other elements. However, there is a lack of quantitative data on the extraction behavior of elements on the EPA Target Analyte List. Experimental studies on these two methods consisted of determining whether any of the analytes were extracted by these methods and the recoveries obtained. Both methods produced similar results; the EPA target analytes were only slightly or not extracted. Advantages and disadvantages of each method were evaluated and found to be comparable.« less
Guthausen, Gisela; von Garnier, Agnes; Reimert, Rainer
2009-10-01
Low-field nuclear magnetic resonance (NMR) spectroscopy is applied to study the hydrogenation of toluene in a lab-scale reactor. A conventional benchtop NMR system was modified to achieve chemical shift resolution. After an off-line validity check of the approach, the reaction product is analyzed on-line during the process, applying chemometric data processing. The conversion of toluene to methylcyclohexane is compared with off-line gas chromatographic analysis. Both classic analytical and chemometric data processing was applied. As the results, which are obtained within a few tens of seconds, are equivalent within the experimental accuracy of both methods, low-field NMR spectroscopy was shown to provide an analytical tool for reaction characterization and immediate feedback.
Interacting steps with finite-range interactions: Analytical approximation and numerical results
NASA Astrophysics Data System (ADS)
Jaramillo, Diego Felipe; Téllez, Gabriel; González, Diego Luis; Einstein, T. L.
2013-05-01
We calculate an analytical expression for the terrace-width distribution P(s) for an interacting step system with nearest- and next-nearest-neighbor interactions. Our model is derived by mapping the step system onto a statistically equivalent one-dimensional system of classical particles. The validity of the model is tested with several numerical simulations and experimental results. We explore the effect of the range of interactions q on the functional form of the terrace-width distribution and pair correlation functions. For physically plausible interactions, we find modest changes when next-nearest neighbor interactions are included and generally negligible changes when more distant interactions are allowed. We discuss methods for extracting from simulated experimental data the characteristic scale-setting terms in assumed potential forms.
Compilation on the use of the stroboscopic method in orbital dynamics
NASA Astrophysics Data System (ADS)
Lecohier, G.
In this paper, the application of the stroboscopic method to orbital dynamics is described. As opposed to averaging methods, the stroboscopic solutions of the perturbed Lagrangian system are derived explicitly in the osculating elements which eases greatly their utilization in practical cases. Using this semi-analytical method, the first order solutions of the Lagrange equations including the perturbations by central body gravity field, the third-bodies, the radiation pressure and by the air-drag are derived. In a next step, the accuracy of the first order solution derived for the classical and equinoctial elements is assessed for the long-term prediction of highly eccentric, low altitude, polar and geostationary orbits is estimated.
A online credit evaluation method based on AHP and SPA
NASA Astrophysics Data System (ADS)
Xu, Yingtao; Zhang, Ying
2009-07-01
Online credit evaluation is the foundation for the establishment of trust and for the management of risk between buyers and sellers in e-commerce. In this paper, a new credit evaluation method based on the analytic hierarchy process (AHP) and the set pair analysis (SPA) is presented to determine the credibility of the electronic commerce participants. It solves some of the drawbacks found in classical credit evaluation methods and broadens the scope of current approaches. Both qualitative and quantitative indicators are considered in the proposed method, then a overall credit score is achieved from the optimal perspective. In the end, a case analysis of China Garment Network is provided for illustrative purposes.
Revisiting the positive DC corona discharge theory: Beyond Peek's and Townsend's law
NASA Astrophysics Data System (ADS)
Monrolin, Nicolas; Praud, Olivier; Plouraboué, Franck
2018-06-01
The classical positive Corona Discharge theory in a cylindrical axisymmetric configuration is revisited in order to find analytically the influence of gas properties and thermodynamic conditions on the corona current. The matched asymptotic expansion of Durbin and Turyn [J. Phys. D: Appl. Phys. 20, 1490-1495 (1987)] of a simplified but self-consistent problem is performed and explicit analytical solutions are derived. The mathematical derivation enables us to express a new positive DC corona current-voltage characteristic, choosing either a dimensionless or dimensional formulation. In dimensional variables, the current voltage law and the corona inception voltage explicitly depend on the electrode size and physical gas properties such as ionization and photoionization parameters. The analytical predictions are successfully confronted with experiments and Peek's and Townsend's laws. An analytical expression of the corona inception voltage φ o n is proposed, which depends on the known values of physical parameters without adjustable parameters. As a proof of consistency, the classical Townsend current-voltage law I = C φ ( φ - φ o n ) is retrieved by linearizing the non-dimensional analytical solution. A brief parametric study showcases the interest in this analytical current model, especially for exploring small corona wires or considering various thermodynamic conditions.
Characterization of classical static noise via qubit as probe
NASA Astrophysics Data System (ADS)
Javed, Muhammad; Khan, Salman; Ullah, Sayed Arif
2018-03-01
The dynamics of quantum Fisher information (QFI) of a single qubit coupled to classical static noise is investigated. The analytical relation for QFI fixes the optimal initial state of the qubit that maximizes it. An approximate limit for the time of coupling that leads to physically useful results is identified. Moreover, using the approach of quantum estimation theory and the analytical relation for QFI, the qubit is used as a probe to precisely estimate the disordered parameter of the environment. Relation for optimal interaction time with the environment is obtained, and condition for the optimal measurement of the noise parameter of the environment is given. It is shown that all values, in the mentioned range, of the noise parameter are estimable with equal precision. A comparison of our results with the previous studies in different classical environments is made.
A dispersion relationship governing incompressible wall turbulence
NASA Technical Reports Server (NTRS)
Tsuge, S.
1978-01-01
The method of separation of variables is shown to make turbulent correlation equations of Karman-Howarth type tractable for shear turbulence as well under the condition of neglected triple correlation. The separated dependent variable obeys an Orr-Sommerfeld equation. A new analytical method is developed using a scaling law different from the classical one due to Heisenberg and Lin and more appropriate for wall turbulent profiles. A dispersion relationship between the wave number and the separation constant which has the dimension of a frequency is derived in support of experimental observations of wave or coherent structure of wall turbulence.
Discovering Romanticism and Classicism in the English Classroom.
ERIC Educational Resources Information Center
Stark, Sandra A.
1994-01-01
Details the concepts of romanticism and classicism and how they relate to secondary English instruction. Argues that teachers should offer students both the imaginative adventure of the romantic and the analytical power of the classicist. Describes a visual lesson by which these two modes might be illustrated and fostered. (HB)
NASA Technical Reports Server (NTRS)
DeChant, Lawrence Justin
1998-01-01
In spite of rapid advances in both scalar and parallel computational tools, the large number of variables involved in both design and inverse problems make the use of sophisticated fluid flow models impractical, With this restriction, it is concluded that an important family of methods for mathematical/computational development are reduced or approximate fluid flow models. In this study a combined perturbation/numerical modeling methodology is developed which provides a rigorously derived family of solutions. The mathematical model is computationally more efficient than classical boundary layer but provides important two-dimensional information not available using quasi-1-d approaches. An additional strength of the current methodology is its ability to locally predict static pressure fields in a manner analogous to more sophisticated parabolized Navier Stokes (PNS) formulations. To resolve singular behavior, the model utilizes classical analytical solution techniques. Hence, analytical methods have been combined with efficient numerical methods to yield an efficient hybrid fluid flow model. In particular, the main objective of this research has been to develop a system of analytical and numerical ejector/mixer nozzle models, which require minimal empirical input. A computer code, DREA Differential Reduced Ejector/mixer Analysis has been developed with the ability to run sufficiently fast so that it may be used either as a subroutine or called by an design optimization routine. Models are of direct use to the High Speed Civil Transport Program (a joint government/industry project seeking to develop an economically.viable U.S. commercial supersonic transport vehicle) and are currently being adopted by both NASA and industry. Experimental validation of these models is provided by comparison to results obtained from open literature and Limited Exclusive Right Distribution (LERD) sources, as well as dedicated experiments performed at Texas A&M. These experiments have been performed using a hydraulic/gas flow analog. Results of comparisons of DREA computations with experimental data, which include entrainment, thrust, and local profile information, are overall good. Computational time studies indicate that DREA provides considerably more information at a lower computational cost than contemporary ejector nozzle design models. Finally. physical limitations of the method, deviations from experimental data, potential improvements and alternative formulations are described. This report represents closure to the NASA Graduate Researchers Program. Versions of the DREA code and a user's guide may be obtained from the NASA Lewis Research Center.
NASA Technical Reports Server (NTRS)
Chambers, Jeffrey A.
1994-01-01
Finite element analysis is regularly used during the engineering cycle of mechanical systems to predict the response to static, thermal, and dynamic loads. The finite element model (FEM) used to represent the system is often correlated with physical test results to determine the validity of analytical results provided. Results from dynamic testing provide one means for performing this correlation. One of the most common methods of measuring accuracy is by classical modal testing, whereby vibratory mode shapes are compared to mode shapes provided by finite element analysis. The degree of correlation between the test and analytical mode shapes can be shown mathematically using the cross orthogonality check. A great deal of time and effort can be exhausted in generating the set of test acquired mode shapes needed for the cross orthogonality check. In most situations response data from vibration tests are digitally processed to generate the mode shapes from a combination of modal parameters, forcing functions, and recorded response data. An alternate method is proposed in which the same correlation of analytical and test acquired mode shapes can be achieved without conducting the modal survey. Instead a procedure is detailed in which a minimum of test information, specifically the acceleration response data from a random vibration test, is used to generate a set of equivalent local accelerations to be applied to the reduced analytical model at discrete points corresponding to the test measurement locations. The static solution of the analytical model then produces a set of deformations that once normalized can be used to represent the test acquired mode shapes in the cross orthogonality relation. The method proposed has been shown to provide accurate results for both a simple analytical model as well as a complex space flight structure.
Determination of mycotoxins in foods: current state of analytical methods and limitations.
Köppen, Robert; Koch, Matthias; Siegel, David; Merkel, Stefan; Maul, Ronald; Nehls, Irene
2010-05-01
Mycotoxins are natural contaminants produced by a range of fungal species. Their common occurrence in food and feed poses a threat to the health of humans and animals. This threat is caused either by the direct contamination of agricultural commodities or by a "carry-over" of mycotoxins and their metabolites into animal tissues, milk, and eggs after feeding of contaminated hay or corn. As a consequence of their diverse chemical structures and varying physical properties, mycotoxins exhibit a wide range of biological effects. Individual mycotoxins can be genotoxic, mutagenic, carcinogenic, teratogenic, and oestrogenic. To protect consumer health and to reduce economic losses, surveillance and control of mycotoxins in food and feed has become a major objective for producers, regulatory authorities and researchers worldwide. However, the variety of chemical structures makes it impossible to use one single technique for mycotoxin analysis. Hence, a vast number of analytical methods has been developed and validated. The heterogeneity of food matrices combined with the demand for a fast, simultaneous and accurate determination of multiple mycotoxins creates enormous challenges for routine analysis. The most crucial issues will be discussed in this review. These are (1) the collection of representative samples, (2) the performance of classical and emerging analytical methods based on chromatographic or immunochemical techniques, (3) the validation of official methods for enforcement, and (4) the limitations and future prospects of the current methods.
Cho, Il-Hoon; Ku, Seockmo
2017-09-30
The development of novel and high-tech solutions for rapid, accurate, and non-laborious microbial detection methods is imperative to improve the global food supply. Such solutions have begun to address the need for microbial detection that is faster and more sensitive than existing methodologies (e.g., classic culture enrichment methods). Multiple reviews report the technical functions and structures of conventional microbial detection tools. These tools, used to detect pathogens in food and food homogenates, were designed via qualitative analysis methods. The inherent disadvantage of these analytical methods is the necessity for specimen preparation, which is a time-consuming process. While some literature describes the challenges and opportunities to overcome the technical issues related to food industry legal guidelines, there is a lack of reviews of the current trials to overcome technological limitations related to sample preparation and microbial detection via nano and micro technologies. In this review, we primarily explore current analytical technologies, including metallic and magnetic nanomaterials, optics, electrochemistry, and spectroscopy. These techniques rely on the early detection of pathogens via enhanced analytical sensitivity and specificity. In order to introduce the potential combination and comparative analysis of various advanced methods, we also reference a novel sample preparation protocol that uses microbial concentration and recovery technologies. This technology has the potential to expedite the pre-enrichment step that precedes the detection process.
Collective Phase in Resource Competition in a Highly Diverse Ecosystem.
Tikhonov, Mikhail; Monasson, Remi
2017-01-27
Organisms shape their own environment, which in turn affects their survival. This feedback becomes especially important for communities containing a large number of species; however, few existing approaches allow studying this regime, except in simulations. Here, we use methods of statistical physics to analytically solve a classic ecological model of resource competition introduced by MacArthur in 1969. We show that the nonintuitive phenomenology of highly diverse ecosystems includes a phase where the environment constructed by the community becomes fully decoupled from the outside world.
Parametric number covariance in quantum chaotic spectra.
Vinayak; Kumar, Sandeep; Pandey, Akhilesh
2016-03-01
We study spectral parametric correlations in quantum chaotic systems and introduce the number covariance as a measure of such correlations. We derive analytic results for the classical random matrix ensembles using the binary correlation method and obtain compact expressions for the covariance. We illustrate the universality of this measure by presenting the spectral analysis of the quantum kicked rotors for the time-reversal invariant and time-reversal noninvariant cases. A local version of the parametric number variance introduced earlier is also investigated.
Thermal stresses and deflections of cross-ply laminated plates using refined plate theories
DOE Office of Scientific and Technical Information (OSTI.GOV)
Khdeir, A.A.; Reddy, J.N.
1991-12-01
Exact analytical solutions of refined plate theories are developed to study the thermal stresses and deflections of cross-ply rectangular plates. The state-space approach in conjunction with the Levy method is used to solve exactly the governing equations of the theories under various boundary conditions. Numerical results of the higher-order theory of Reddy for thermal stresses and deflections are compared with those obtained using the classical and first-order plate theories. 14 refs.
NASA Astrophysics Data System (ADS)
Clempner, Julio B.
2017-01-01
This paper presents a novel analytical method for soundness verification of workflow nets and reset workflow nets, using the well-known stability results of Lyapunov for Petri nets. We also prove that the soundness property is decidable for workflow nets and reset workflow nets. In addition, we provide evidence of several outcomes related with properties such as boundedness, liveness, reversibility and blocking using stability. Our approach is validated theoretically and by a numerical example related to traffic signal-control synchronisation.
Static Strength Characteristics of Mechanically Fastened Composite Joints
NASA Technical Reports Server (NTRS)
Fox, D. E.; Swaim, K. W.
1999-01-01
The analysis of mechanically fastened composite joints presents a great challenge to structural analysts because of the large number of parameters that influence strength. These parameters include edge distance, width, bolt diameter, laminate thickness, ply orientation, and bolt torque. The research presented in this report investigates the influence of some of these parameters through testing and analysis. A methodology is presented for estimating the strength of the bolt-hole based on classical lamination theory using the Tsai-Hill failure criteria and typical bolthole bearing analytical methods.
El-Awady, Mohamed; Pyell, Ute
2013-07-05
The application of a new method developed for the assessment of sweeping efficiency in MEKC under homogeneous and inhomogeneous electric field conditions is extended to the general case, in which the distribution coefficient and the electric conductivity of the analyte in the sample zone and in the separation compartment are varied. As test analytes p-hydroxybenzoates (parabens), benzamide and some aromatic amines are studied under MEKC conditions with SDS as anionic surfactant. We show that in the general case - in contrast to the classical description - the obtainable enrichment factor is not only dependent on the retention factor of the analyte in the sample zone but also dependent on the retention factor in the background electrolyte (BGE). It is shown that in the general case sweeping is inherently a multistep focusing process. We describe an additional focusing/defocusing step (the retention factor gradient effect, RFGE) quantitatively by extending the classical equation employed for the description of the sweeping process with an additional focusing/defocusing factor. The validity of this equation is demonstrated experimentally (and theoretically) under variation of the organic solvent content (in the sample and/or the BGE), the type of organic solvent (in the sample and/or the BGE), the electric conductivity (in the sample), the pH (in the sample), and the concentration of surfactant (in the BGE). It is shown that very high enrichment factors can be obtained, if the pH in the sample zone makes possible to convert the analyte into a charged species that has a high distribution coefficient with respect to an oppositely charged micellar phase, while the pH in the BGE enables separation of the neutral species under moderate retention factor conditions. Copyright © 2013 Elsevier B.V. All rights reserved.
Computing the Evans function via solving a linear boundary value ODE
NASA Astrophysics Data System (ADS)
Wahl, Colin; Nguyen, Rose; Ventura, Nathaniel; Barker, Blake; Sandstede, Bjorn
2015-11-01
Determining the stability of traveling wave solutions to partial differential equations can oftentimes be computationally intensive but of great importance to understanding the effects of perturbations on the physical systems (chemical reactions, hydrodynamics, etc.) they model. For waves in one spatial dimension, one may linearize around the wave and form an Evans function - an analytic Wronskian-like function which has zeros that correspond in multiplicity to the eigenvalues of the linearized system. If eigenvalues with a positive real part do not exist, the traveling wave will be stable. Two methods exist for calculating the Evans function numerically: the exterior-product method and the method of continuous orthogonalization. The first is numerically expensive, and the second reformulates the originally linear system as a nonlinear system. We develop a new algorithm for computing the Evans function through appropriate linear boundary-value problems. This algorithm is cheaper than the previous methods, and we prove that it preserves analyticity of the Evans function. We also provide error estimates and implement it on some classical one- and two-dimensional systems, one being the Swift-Hohenberg equation in a channel, to show the advantages.
Analytical solution for the advection-dispersion transport equation in layered media
USDA-ARS?s Scientific Manuscript database
The advection-dispersion transport equation with first-order decay was solved analytically for multi-layered media using the classic integral transform technique (CITT). The solution procedure used an associated non-self-adjoint advection-diffusion eigenvalue problem that had the same form and coef...
NASA Astrophysics Data System (ADS)
Gets, A. V.; Krainov, V. P.
2018-01-01
The yield of spontaneous photons at the tunneling ionization of atoms by intense low-frequency laser radiation near the classical cut-off is estimated analytically by using the three-step model. The Bell-shaped dependence in the universal photon spectrum is explained qualitatively.
Lombardi, Giovanni; Barbaro, Mosè; Locatelli, Massimo; Banfi, Giuseppe
2017-06-01
The endocrine function of bone is now a recognized feature of this tissue. Bone-derived hormones that modulate whole-body homeostasis, are being discovered as for the effects on bone of novel and classic hormones produced by other tissues become known. Often, however, the data regarding these last generation bone-derived or bone-targeting hormones do not give about a clear picture of their physiological roles or concentration ranges. A certain degree of uncertainty could stem from differences in the pre-analytical management of biological samples. The pre-analytical phase comprises a series of decisions and actions (i.e., choice of sample matrix, methods of collection, transportation, treatment and storage) preceding analysis. Errors arising in this phase will inevitably be carried over to the analytical phase where they can reduce the measurement accuracy, ultimately, leading discrepant results. While the pre-analytical phase is all important, in routine laboratory medicine, it is often not given due consideration in research and clinical trials. This is particularly true for novel molecules, such as the hormones regulating the endocrine function of bone. In this review we discuss the importance of the pre-analytical variables affecting the measurement of last generation bone-associated hormones and describe their, often debated and rarely clear physiological roles.
Analytical approximations to seawater optical phase functions of scattering
NASA Astrophysics Data System (ADS)
Haltrin, Vladimir I.
2004-11-01
This paper proposes a number of analytical approximations to the classic and recently measured seawater light scattering phase functions. The three types of analytical phase functions are derived: individual representations for 15 Petzold, 41 Mankovsky, and 91 Gulf of Mexico phase functions; collective fits to Petzold phase functions; and analytical representations that take into account dependencies between inherent optical properties of seawater. The proposed phase functions may be used for problems of radiative transfer, remote sensing, visibility and image propagation in natural waters of various turbidity.
Second derivative in the model of classical binary system
NASA Astrophysics Data System (ADS)
Abubekerov, M. K.; Gostev, N. Yu.
2016-06-01
We have obtained an analytical expression for the second derivatives of the light curve with respect to geometric parameters in the model of eclipsing classical binary systems. These expressions are essentially efficient algorithm to calculate the numerical values of these second derivatives for all physical values of geometric parameters. Knowledge of the values of second derivatives of the light curve at some point provides additional information about asymptotical behaviour of the function near this point and can significantly improve the search for the best-fitting light curve through the use of second-order optimization method. We write the expression for the second derivatives in a form which is most compact and uniform for all values of the geometric parameters and so make it easy to write a computer program to calculate the values of these derivatives.
Pythagorean fuzzy analytic hierarchy process to multi-criteria decision making
NASA Astrophysics Data System (ADS)
Mohd, Wan Rosanisah Wan; Abdullah, Lazim
2017-11-01
A numerous approaches have been proposed in the literature to determine the criteria of weight. The weight of criteria is very significant in the process of decision making. One of the outstanding approaches that used to determine weight of criteria is analytic hierarchy process (AHP). This method involves decision makers (DMs) to evaluate the decision to form the pair-wise comparison between criteria and alternatives. In classical AHP, the linguistic variable of pairwise comparison is presented in terms of crisp value. However, this method is not appropriate to present the real situation of the problems because it involved the uncertainty in linguistic judgment. For this reason, AHP has been extended by incorporating the Pythagorean fuzzy sets. In addition, no one has found in the literature proposed how to determine the weight of criteria using AHP under Pythagorean fuzzy sets. In order to solve the MCDM problem, the Pythagorean fuzzy analytic hierarchy process is proposed to determine the criteria weight of the evaluation criteria. Using the linguistic variables, pairwise comparison for evaluation criteria are made to the weights of criteria using Pythagorean fuzzy numbers (PFNs). The proposed method is implemented in the evaluation problem in order to demonstrate its applicability. This study shows that the proposed method provides us with a useful way and a new direction in solving MCDM problems with Pythagorean fuzzy context.
How much a quantum measurement is informative?
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dall'Arno, Michele; ICFO-Institut de Ciencies Fotoniques, E-08860 Castelldefels, Barcelona; Quit Group, Dipartimento di Fisica, via Bassi 6, I-27100 Pavia
2014-12-04
The informational power of a quantum measurement is the maximum amount of classical information that the measurement can extract from any ensemble of quantum states. We discuss its main properties. Informational power is an additive quantity, being equivalent to the classical capacity of a quantum-classical channel. The informational power of a quantum measurement is the maximum of the accessible information of a quantum ensemble that depends on the measurement. We present some examples where the symmetry of the measurement allows to analytically derive its informational power.
Casella, Innocenzo G; Pierri, Marianna; Contursi, Michela
2006-02-24
The electrochemical behaviour of the polycrystalline platinum electrode towards the oxidation/reduction of short-chain unsaturated aliphatic molecules such as acrylamide and acrylic acid was investigated in acidic solutions. Analytes were separated by reverse phase liquid chromatographic and quantified using a pulsed amperometric detection. A new two-step waveform, is introduced for detection of acrylamide and acrylic acid. Detection limits (LOD) of 20 nM (1. 4 microg/kg) and 45 nM (3.2 microg/kg) were determined in water solutions containing acrylamide and acrylic acid, respectively. Compared to the classical three-step waveform, the proposed two-step waveform shows favourable analytical performance in terms of LOD, linear range, precision and improved long-term reproducibility. The proposed analytical method combined with clean-up procedure accomplished by Carrez clearing reagent and subsequent extraction with a strong cation exchanger cartridges (SPE), was successfully used for the quantification of low concentrations of acrylamide in foodstuffs such as coffee and potato fries.
Stopping power of an electron gas with anisotropic temperature
NASA Astrophysics Data System (ADS)
Khelemelia, O. V.; Kholodov, R. I.
2016-04-01
A general theory of motion of a heavy charged particle in the electron gas with an anisotropic velocity distribution is developed within the quantum-field method. The analytical expressions for the dielectric susceptibility and the stopping power of the electron gas differs in no way from well-known classic formulas in the approximation of large and small velocities. Stopping power of the electron gas with anisotropic temperature in the framework of the quantum-field method is numerically calculated for an arbitrary angle between directions of the motion of the projectile particle and the electron beam. The results of the numerical calculations are compared with the dielectric model approach.
Multigrid methods for a semilinear PDE in the theory of pseudoplastic fluids
NASA Technical Reports Server (NTRS)
Henson, Van Emden; Shaker, A. W.
1993-01-01
We show that by certain transformations the boundary layer equations for the class of non-Newtonian fluids named pseudoplastic can be generalized in the form the vector differential operator(u) + p(x)u(exp -lambda) = 0, where x is a member of the set Omega and Omega is a subset of R(exp n), n is greater than or equal to 1 under the classical conditions for steady flow over a semi-infinite flat plate. We provide a survey of the existence, uniqueness, and analyticity of the solutions for this problem. We also establish numerical solutions in one- and two-dimensional regions using multigrid methods.
Should the Bible Be Taught as a Literary Classic in Public Education?
ERIC Educational Resources Information Center
Malikow, Max
2010-01-01
The research question "Should the Bible be taught as a literary classic in public education?" was pursued by a survey of nineteen scholars from three disciplines: education, literature, and law. The collected data served to guide the researcher in the writing of an analytical essay responding to the research question. The research…
The Dispersion Relation for the 1/sinh(exp 2) Potential in the Classical Limit
NASA Technical Reports Server (NTRS)
Campbell, Joel
2009-01-01
The dispersion relation for the inverse hyperbolic potential is calculated in the classical limit. This is shown for both the low amplitude phonon branch and the high amplitude soliton branch. It is shown these results qualitatively follow that previously found for the inverse squared potential where explicit analytic solutions are known.
Classical and numerical approaches to determining V-section band clamp axial stiffness
NASA Astrophysics Data System (ADS)
Barrans, Simon M.; Khodabakhshi, Goodarz; Muller, Matthias
2015-01-01
V-band clamp joints are used in a wide range of applications to connect circular flanges, for ducts, pipes and the turbocharger housing. Previous studies and research on V-bands are either purely empirical or analytical with limited applicability on the variety of V-band design and working conditions. In this paper models of the V-band are developed based on the classical theory of solid mechanics and the finite element method to study the behaviour of theV-bands under axial loading conditions. The good agreement between results from the developed FEA and the classical model support the suitability of the latter to modelV-band joints with diameters greater than 110mm under axial loading. The results from both models suggest that the axial stiffness for thisV-band cross section reaches a peak value for V-bands with radius of approximately 150 mmacross a wide range of coefficients of friction. Also, it is shown that the coefficient of friction and the wedge angle have a significant effect on the axial stiffness of V-bands.
Casarin, Elisabetta; Lucchese, Laura; Grazioli, Santina; Facchin, Sonia; Realdon, Nicola; Brocchi, Emiliana; Morpurgo, Margherita; Nardelli, Stefano
2016-01-01
Diagnostic tests for veterinary surveillance programs should be efficient, easy to use and, possibly, economical. In this context, classic Enzyme linked ImmunoSorbent Assay (ELISA) remains the most common analytical platform employed for serological analyses. The analysis of pooled samples instead of individual ones is a common procedure that permits to certify, with one single test, entire herds as "disease-free". However, diagnostic tests for pooled samples need to be particularly sensitive, especially when the levels of disease markers are low, as in the case of anti-BoHV1 antibodies in milk as markers of Infectious Bovine Rhinotracheitis (IBR) disease. The avidin-nucleic-acid-nanoassembly (ANANAS) is a novel kind of signal amplification platform for immunodiagnostics based on colloidal poly-avidin nanoparticles that, using model analytes, was shown to strongly increase ELISA test performance as compared to monomeric avidin. Here, for the first time, we applied the ANANAS reagent integration in a real diagnostic context. The monoclonal 1G10 anti-bovine IgG1 antibody was biotinylated and integrated with the ANANAS reagents for indirect IBR diagnosis from pooled milk mimicking tank samples from herds with IBR prevalence between 1 to 8%. The sensitivity and specificity of the ANANAS integrated method was compared to that of a classic test based on the same 1G10 antibody directly linked to horseradish peroxidase, and a commercial IDEXX kit recently introduced in the market. ANANAS integration increased by 5-fold the sensitivity of the 1G10 mAb-based conventional ELISA without loosing specificity. When compared to the commercial kit, the 1G10-ANANAS integrated method was capable to detect the presence of anti-BHV1 antibodies from bulk milk of gE antibody positive animals with 2-fold higher sensitivity and similar specificity. The results demonstrate the potentials of this new amplification technology, which permits improving current classic ELISA sensitivity limits without the need for new hardware investments.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sahoo, Satiprasad; Dhar, Anirban, E-mail: anirban.dhar@gmail.com; Kar, Amlanjyoti
Environmental management of an area describes a policy for its systematic and sustainable environmental protection. In the present study, regional environmental vulnerability assessment in Hirakud command area of Odisha, India is envisaged based on Grey Analytic Hierarchy Process method (Grey–AHP) using integrated remote sensing (RS) and geographic information system (GIS) techniques. Grey–AHP combines the advantages of classical analytic hierarchy process (AHP) and grey clustering method for accurate estimation of weight coefficients. It is a new method for environmental vulnerability assessment. Environmental vulnerability index (EVI) uses natural, environmental and human impact related factors, e.g., soil, geology, elevation, slope, rainfall, temperature, windmore » speed, normalized difference vegetation index, drainage density, crop intensity, agricultural DRASTIC value, population density and road density. EVI map has been classified into four environmental vulnerability zones (EVZs) namely: ‘low’, ‘moderate’ ‘high’, and ‘extreme’ encompassing 17.87%, 44.44%, 27.81% and 9.88% of the study area, respectively. EVI map indicates that the northern part of the study area is more vulnerable from an environmental point of view. EVI map shows close correlation with elevation. Effectiveness of the zone classification is evaluated by using grey clustering method. General effectiveness is in between “better” and “common classes”. This analysis demonstrates the potential applicability of the methodology. - Highlights: • Environmental vulnerability zone identification based on Grey Analytic Hierarchy Process (AHP) • The effectiveness evaluation by means of a grey clustering method with support from AHP • Use of grey approach eliminates the excessive dependency on the experience of experts.« less
Ammari, Faten; Jouan-Rimbaud-Bouveresse, Delphine; Boughanmi, Néziha; Rutledge, Douglas N
2012-09-15
The aim of this study was to find objective analytical methods to study the degradation of edible oils during heating and thus to suggest solutions to improve their stability. The efficiency of Nigella seed extract as natural antioxidant was compared with butylated hydroxytoluene (BHT) during accelerated oxidation of edible vegetable oils at 120 and 140 °C. The modifications during heating were monitored by 3D-front-face fluorescence spectroscopy along with Independent Components Analysis (ICA), (1)H NMR spectroscopy and classical physico-chemical methods such as anisidine value and viscosity. The results of the study clearly indicate that the natural seed extract at a level of 800 ppm exhibited antioxidant effects similar to those of the synthetic antioxidant BHT at a level of 200 ppm and thus contributes to an increase in the oxidative stability of the oil. Copyright © 2012 Elsevier B.V. All rights reserved.
Carpinteiro, J; Rodríguez, I; Cela, R
2004-11-01
The performance of solid-phase microextraction (SPME) applied to the determination of butyltin compounds in sediment samples is systematically evaluated. Matrix effects and influence of blank signals on the detection limits of the method are studied in detail. The interval of linear response is also evaluated in order to assess the applicability of the method to sediments polluted with butyltin compounds over a large range of concentrations. Advantages and drawbacks of including an SPME step, instead of the classic liquid-liquid extraction of the derivatized analytes, in the determination of butyltin compounds in sediment samples are considered in terms of achieved detection limits and experimental effort. Analytes were extracted from the samples by sonication using glacial acetic acid. An aliquot of the centrifuged extract was placed on a vial where compounds were ethylated and concentrated on a PDMS fiber using the headspace mode. Determinations were carried out using GC-MIP AED.
Holographic stress-energy tensor near the Cauchy horizon inside a rotating black hole
NASA Astrophysics Data System (ADS)
Ishibashi, Akihiro; Maeda, Kengo; Mefford, Eric
2017-07-01
We investigate a stress-energy tensor for a conformal field theory (CFT) at strong coupling inside a small five-dimensional rotating Myers-Perry black hole with equal angular momenta by using the holographic method. As a gravitational dual, we perturbatively construct a black droplet solution by applying the "derivative expansion" method, generalizing the work of Haddad [Classical Quantum Gravity 29, 245001 (2012), 10.1088/0264-9381/29/24/245001] and analytically compute the holographic stress-energy tensor for our solution. We find that the stress-energy tensor is finite at both the future and past outer (event) horizons and that the energy density is negative just outside the event horizons due to the Hawking effect. Furthermore, we apply the holographic method to the question of quantum instability of the Cauchy horizon since, by construction, our black droplet solution also admits a Cauchy horizon inside. We analytically show that the null-null component of the holographic stress-energy tensor negatively diverges at the Cauchy horizon, suggesting that a singularity appears there, in favor of strong cosmic censorship.
Screening new psychoactive substances in urban wastewater using high resolution mass spectrometry.
González-Mariño, Iria; Gracia-Lor, Emma; Bagnati, Renzo; Martins, Claudia P B; Zuccato, Ettore; Castiglioni, Sara
2016-06-01
Analysis of drug residues in urban wastewater could complement epidemiological studies in detecting the use of new psychoactive substances (NPS), a continuously changing group of drugs hard to monitor by classical methods. We initially selected 52 NPS potentially used in Italy based on seizure data and consumption alerts provided by the Antidrug Police Department and the National Early Warning System. Using a linear ion trap-Orbitrap high resolution mass spectrometer, we designed a suspect screening and a target method approach and compared them for the analysis of 24 h wastewater samples collected at the treatment plant influents of four Italian cities. This highlighted the main limitations of these two approaches, so we could propose requirements for future research. A library of MS/MS spectra of 16 synthetic cathinones and 19 synthetic cannabinoids, for which analytical standards were acquired, was built at different collision energies and is available on request. The stability of synthetic cannabinoids was studied in analytical standards and wastewater, identifying the best analytical conditions for future studies. To the best of our knowledge, these are the first stability data on NPS. Few suspects were identified in Italian wastewater samples, in accordance with recent epidemiological data reporting a very low prevalence of use of NPS in Italy. This study outlines an analytical approach for NPS identification and measurement in urban wastewater and for estimating their use in the population.
Liu, Jian; Miller, William H
2008-09-28
The maximum entropy analytic continuation (MEAC) method is used to extend the range of accuracy of the linearized semiclassical initial value representation (LSC-IVR)/classical Wigner approximation for real time correlation functions. LSC-IVR provides a very effective "prior" for the MEAC procedure since it is very good for short times, exact for all time and temperature for harmonic potentials (even for correlation functions of nonlinear operators), and becomes exact in the classical high temperature limit. This combined MEAC+LSC/IVR approach is applied here to two highly nonlinear dynamical systems, a pure quartic potential in one dimensional and liquid para-hydrogen at two thermal state points (25 and 14 K under nearly zero external pressure). The former example shows the MEAC procedure to be a very significant enhancement of the LSC-IVR for correlation functions of both linear and nonlinear operators, and especially at low temperature where semiclassical approximations are least accurate. For liquid para-hydrogen, the LSC-IVR is seen already to be excellent at T=25 K, but the MEAC procedure produces a significant correction at the lower temperature (T=14 K). Comparisons are also made as to how the MEAC procedure is able to provide corrections for other trajectory-based dynamical approximations when used as priors.
A deterministic model of electron transport for electron probe microanalysis
NASA Astrophysics Data System (ADS)
Bünger, J.; Richter, S.; Torrilhon, M.
2018-01-01
Within the last decades significant improvements in the spatial resolution of electron probe microanalysis (EPMA) were obtained by instrumental enhancements. In contrast, the quantification procedures essentially remained unchanged. As the classical procedures assume either homogeneity or a multi-layered structure of the material, they limit the spatial resolution of EPMA. The possibilities of improving the spatial resolution through more sophisticated quantification procedures are therefore almost untouched. We investigate a new analytical model (M 1-model) for the quantification procedure based on fast and accurate modelling of electron-X-ray-matter interactions in complex materials using a deterministic approach to solve the electron transport equations. We outline the derivation of the model from the Boltzmann equation for electron transport using the method of moments with a minimum entropy closure and present first numerical results for three different test cases (homogeneous, thin film and interface). Taking Monte Carlo as a reference, the results for the three test cases show that the M 1-model is able to reproduce the electron dynamics in EPMA applications very well. Compared to classical analytical models like XPP and PAP, the M 1-model is more accurate and far more flexible, which indicates the potential of deterministic models of electron transport to further increase the spatial resolution of EPMA.
Big genomics and clinical data analytics strategies for precision cancer prognosis.
Ow, Ghim Siong; Kuznetsov, Vladimir A
2016-11-07
The field of personalized and precise medicine in the era of big data analytics is growing rapidly. Previously, we proposed our model of patient classification termed Prognostic Signature Vector Matching (PSVM) and identified a 37 variable signature comprising 36 let-7b associated prognostic significant mRNAs and the age risk factor that stratified large high-grade serous ovarian cancer patient cohorts into three survival-significant risk groups. Here, we investigated the predictive performance of PSVM via optimization of the prognostic variable weights, which represent the relative importance of one prognostic variable over the others. In addition, we compared several multivariate prognostic models based on PSVM with classical machine learning techniques such as K-nearest-neighbor, support vector machine, random forest, neural networks and logistic regression. Our results revealed that negative log-rank p-values provides more robust weight values as opposed to the use of other quantities such as hazard ratios, fold change, or a combination of those factors. PSVM, together with the classical machine learning classifiers were combined in an ensemble (multi-test) voting system, which collectively provides a more precise and reproducible patient stratification. The use of the multi-test system approach, rather than the search for the ideal classification/prediction method, might help to address limitations of the individual classification algorithm in specific situation.
Horbowy, Jan; Tomczak, Maciej T
2017-01-01
Biomass reconstructions to pre-assessment periods for commercially important and exploitable fish species are important tools for understanding long-term processes and fluctuation on stock and ecosystem level. For some stocks only fisheries statistics and fishery dependent data are available, for periods before surveys were conducted. The methods for the backward extension of the analytical assessment of biomass for years for which only total catch volumes are available were developed and tested in this paper. Two of the approaches developed apply the concept of the surplus production rate (SPR), which is shown to be stock density dependent if stock dynamics is governed by classical stock-production models. The other approach used a modified form of the Schaefer production model that allows for backward biomass estimation. The performance of the methods was tested on the Arctic cod and North Sea herring stocks, for which analytical biomass estimates extend back to the late 1940s. Next, the methods were applied to extend biomass estimates of the North-east Atlantic mackerel from the 1970s (analytical biomass estimates available) to the 1950s, for which only total catch volumes were available. For comparison with other methods which employs a constant SPR estimated as an average of the observed values, was also applied. The analyses showed that the performance of the methods is stock and data specific; the methods that work well for one stock may fail for the others. The constant SPR method is not recommended in those cases when the SPR is relatively high and the catch volumes in the reconstructed period are low.
Horbowy, Jan
2017-01-01
Biomass reconstructions to pre-assessment periods for commercially important and exploitable fish species are important tools for understanding long-term processes and fluctuation on stock and ecosystem level. For some stocks only fisheries statistics and fishery dependent data are available, for periods before surveys were conducted. The methods for the backward extension of the analytical assessment of biomass for years for which only total catch volumes are available were developed and tested in this paper. Two of the approaches developed apply the concept of the surplus production rate (SPR), which is shown to be stock density dependent if stock dynamics is governed by classical stock-production models. The other approach used a modified form of the Schaefer production model that allows for backward biomass estimation. The performance of the methods was tested on the Arctic cod and North Sea herring stocks, for which analytical biomass estimates extend back to the late 1940s. Next, the methods were applied to extend biomass estimates of the North-east Atlantic mackerel from the 1970s (analytical biomass estimates available) to the 1950s, for which only total catch volumes were available. For comparison with other methods which employs a constant SPR estimated as an average of the observed values, was also applied. The analyses showed that the performance of the methods is stock and data specific; the methods that work well for one stock may fail for the others. The constant SPR method is not recommended in those cases when the SPR is relatively high and the catch volumes in the reconstructed period are low. PMID:29131850
Proliferation of Observables and Measurement in Quantum-Classical Hybrids
NASA Astrophysics Data System (ADS)
Elze, Hans-Thomas
2012-01-01
Following a review of quantum-classical hybrid dynamics, we discuss the ensuing proliferation of observables and relate it to measurements of (would-be) quantum mechanical degrees of freedom performed by (would-be) classical ones (if they were separable). Hybrids consist in coupled classical (CL) and quantum mechanical (QM) objects. Numerous consistency requirements for their description have been discussed and are fulfilled here. We summarize a representation of quantum mechanics in terms of classical analytical mechanics which is naturally extended to QM-CL hybrids. This framework allows for superposition, separable, and entangled states originating in the QM sector, admits experimenter's "Free Will", and is local and nonsignaling. Presently, we study the set of hybrid observables, which is larger than the Cartesian product of QM and CL observables of its components; yet it is smaller than a corresponding product of all-classical observables. Thus, quantumness and classicality infect each other.
The theory of the gravitational potential applied to orbit prediction
NASA Technical Reports Server (NTRS)
Kirkpatrick, J. C.
1976-01-01
A complete derivation of the geopotential function and its gradient is presented. Also included is the transformation of Laplace's equation from Cartesian to spherical coordinates. The analytic solution to Laplace's equation is obtained from the transformed version, in the classical manner of separating the variables. A cursory introduction to the method devised by Pines, using direction cosines to express the orientation of a point in space, is presented together with sample computer program listings for computing the geopotential function and the components of its gradient. The use of the geopotential function is illustrated.
Extraction of shear viscosity in stationary states of relativistic particle systems
NASA Astrophysics Data System (ADS)
Reining, F.; Bouras, I.; El, A.; Wesp, C.; Xu, Z.; Greiner, C.
2012-02-01
Starting from a classical picture of shear viscosity we construct a stationary velocity gradient in a microscopic parton cascade. Employing the Navier-Stokes ansatz we extract the shear viscosity coefficient η. For elastic isotropic scatterings we find an excellent agreement with the analytic values. This confirms the applicability of this method. Furthermore, for both elastic and inelastic scatterings with pQCD based cross sections we extract the shear viscosity coefficient η for a pure gluonic system and find a good agreement with already published calculations.
ERIC Educational Resources Information Center
Kohli, Nidhi; Koran, Jennifer; Henn, Lisa
2015-01-01
There are well-defined theoretical differences between the classical test theory (CTT) and item response theory (IRT) frameworks. It is understood that in the CTT framework, person and item statistics are test- and sample-dependent. This is not the perception with IRT. For this reason, the IRT framework is considered to be theoretically superior…
ERIC Educational Resources Information Center
Loughmiller-Newman, Jennifer Ann
2012-01-01
This dissertation presents a multidisciplinary means of determining the actual content (foodstuff, non-foodstuff, or lack of contents) of Classic Mayan (A.D. 250-900) vessels. Based on previous studies that have identified the residues of foodstuffs named in hieroglyphic texts (e.g. cacao), this study is designed to further investigate foodstuff…
Duration of classicality in highly degenerate interacting Bosonic systems
Sikivie, Pierre; Todarello, Elisa M.
2017-04-28
We study sets of oscillators that have high quantum occupancy and that interact by exchanging quanta. It is shown by analytical arguments and numerical simulation that such systems obey classical equations of motion only on time scales of order their relaxation time τ and not longer than that. The results are relevant to the cosmology of axions and axion-like particles.
Fate of classical solitons in one-dimensional quantum systems.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pustilnik, M.; Matveev, K. A.
We study one-dimensional quantum systems near the classical limit described by the Korteweg-de Vries (KdV) equation. The excitations near this limit are the well-known solitons and phonons. The classical description breaks down at long wavelengths, where quantum effects become dominant. Focusing on the spectra of the elementary excitations, we describe analytically the entire classical-to-quantum crossover. We show that the ultimate quantum fate of the classical KdV excitations is to become fermionic quasiparticles and quasiholes. We discuss in detail two exactly solvable models exhibiting such crossover, the Lieb-Liniger model of bosons with weak contact repulsion and the quantum Toda model, andmore » argue that the results obtained for these models are universally applicable to all quantum one-dimensional systems with a well-defined classical limit described by the KdV equation.« less
Local convertibility of the ground state of the perturbed toric code
NASA Astrophysics Data System (ADS)
Santra, Siddhartha; Hamma, Alioscia; Cincio, Lukasz; Subasi, Yigit; Zanardi, Paolo; Amico, Luigi
2014-12-01
We present analytical and numerical studies of the behavior of the α -Renyi entropies in the toric code in presence of several types of perturbations aimed at studying the simulability of these perturbations to the parent Hamiltonian using local operations and classical communications (LOCC)—a property called local convertibility. In particular, the derivatives, with respect to the perturbation parameter, present different signs for different values of α within the topological phase. From the information-theoretic point of view, this means that such ground states cannot be continuously deformed within the topological phase by means of catalyst assisted local operations and classical communications (LOCC). Such LOCC differential convertibility is on the other hand always possible in the trivial disordered phase. The non-LOCC convertibility is remarkable because it can be computed on a system whose size is independent of correlation length. This method can therefore constitute an experimentally feasible witness of topological order.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Everett, W.R.; Rechnitz, G.A.
1999-01-01
A mini review of enzyme-based electrochemical biosensors for inhibition analysis of organophosphorus and carbamate pesticides is presented. Discussion includes the most recent literature to present advances in detection limits, selectivity and real sample analysis. Recent reviews on the monitoring of pesticides and their residues suggest that the classical analytical techniques of gas and liquid chromatography are the most widely used methods of detection. These techniques, although very accurate in their determinations, can be quite time consuming and expensive and usually require extensive sample clean up and pro-concentration. For these and many other reasons, the classical techniques are very difficult tomore » adapt for field use. Numerous researchers, in the past decade, have developed and made improvements on biosensors for use in pesticide analysis. This mini review will focus on recent advances made in enzyme-based electrochemical biosensors for the determinations of organophosphorus and carbamate pesticides.« less
Semiclassical evaluation of quantum fidelity
NASA Astrophysics Data System (ADS)
Vaníček, Jiří; Heller, Eric J.
2003-11-01
We present a numerically feasible semiclassical (SC) method to evaluate quantum fidelity decay (Loschmidt echo) in a classically chaotic system. It was thought that such evaluation would be intractable, but instead we show that a uniform SC expression not only is tractable but it also gives remarkably accurate numerical results for the standard map in both the Fermi-golden-rule and Lyapunov regimes. Because it allows Monte Carlo evaluation, the uniform expression is accurate at times when there are 1070 semiclassical contributions. Remarkably, it also explicitly contains the “building blocks” of analytical theories of recent literature, and thus permits a direct test of the approximations made by other authors in these regimes, rather than an a posteriori comparison with numerical results. We explain in more detail the extended validity of the classical perturbation approximation and show that within this approximation, the so-called “diagonal approximation” is automatic and does not require ensemble averaging.
Unitary evolution of the quantum Universe with a Brown-Kuchař dust
NASA Astrophysics Data System (ADS)
Maeda, Hideki
2015-12-01
We study the time evolution of a wave function for the spatially flat Friedmann-Lemaître-Robertson-Walker Universe governed by the Wheeler-DeWitt equation in both analytical and numerical methods. We consider a Brown-Kuchař dust as a matter field in order to introduce a ‘clock’ in quantum cosmology and adopt the Laplace-Beltrami operator-ordering. The Hamiltonian operator admits an infinite number of self-adjoint extensions corresponding to a one-parameter family of boundary conditions at the origin in the minisuperspace. For any value of the extension parameter in the boundary condition, the evolution of a wave function is unitary and the classical initial singularity is avoided and replaced by the big bounce in the quantum system. Exact wave functions show that the expectation value of the spatial volume of the Universe obeys the classical-time evolution in the late time but its variance diverges.
Thermodynamics of ultra-sonic cavitation bubbles in flotation ore processes
NASA Astrophysics Data System (ADS)
Royer, J. J.; Monnin, N.; Pailot-Bonnetat, N.; Filippov, L. O.; Filippova, I. V.; Lyubimova, T.
2017-07-01
Ultra-sonic enhanced flotation ore process is a more efficient technique for ore recovery than classical flotation method. A classical simplified analytical Navier-Stokes model is used to predict the effect of the ultrasonic waves on the cavitations bubble behaviour. Then, a thermodynamics approach estimates the temperature and pressure inside a bubble, and investigates the energy exchanges between flotation liquid and gas bubbles. Several gas models (including ideal gas, Soave-Redlich-Kwong, and Peng-Robinson) assuming polytropic transformations (from isothermal to adiabatic) are used to predict the evolution of the internal pressure and temperature inside the bubble during the ultrasonic treatment, together with the energy and heat exchanges between the gas and the surrounding fluid. Numerical simulation illustrates the suggest theory. If the theory is verified experimentally, it predicts an increase of the temperature and pressure inside the bubbles. Preliminary ultrasonic flotation results performed on a potash ore seem to confirm the theory.
Theoretical and experimental physical methods of neutron-capture therapy
NASA Astrophysics Data System (ADS)
Borisov, G. I.
2011-09-01
This review is based to a substantial degree on our priority developments and research at the IR-8 reactor of the Russian Research Centre Kurchatov Institute. New theoretical and experimental methods of neutron-capture therapy are developed and applied in practice; these are: A general analytical and semi-empiric theory of neutron-capture therapy (NCT) based on classical neutron physics and its main sections (elementary theories of moderation, diffuse, reflection, and absorption of neutrons) rather than on methods of mathematical simulation. The theory is, first of all, intended for practical application by physicists, engineers, biologists, and physicians. This theory can be mastered by anyone with a higher education of almost any kind and minimal experience in operating a personal computer.
Tuning fuzzy PD and PI controllers using reinforcement learning.
Boubertakh, Hamid; Tadjine, Mohamed; Glorennec, Pierre-Yves; Labiod, Salim
2010-10-01
In this paper, we propose a new auto-tuning fuzzy PD and PI controllers using reinforcement Q-learning (QL) algorithm for SISO (single-input single-output) and TITO (two-input two-output) systems. We first, investigate the design parameters and settings of a typical class of Fuzzy PD (FPD) and Fuzzy PI (FPI) controllers: zero-order Takagi-Sugeno controllers with equidistant triangular membership functions for inputs, equidistant singleton membership functions for output, Larsen's implication method, and average sum defuzzification method. Secondly, the analytical structures of these typical fuzzy PD and PI controllers are compared to their classical counterpart PD and PI controllers. Finally, the effectiveness of the proposed method is proven through simulation examples. Copyright © 2010 ISA. Published by Elsevier Ltd. All rights reserved.
Formulation of the relativistic moment implicit particle-in-cell method
DOE Office of Scientific and Technical Information (OSTI.GOV)
Noguchi, Koichi; Tronci, Cesare; Zuccaro, Gianluca
2007-04-15
A new formulation is presented for the implicit moment method applied to the time-dependent relativistic Vlasov-Maxwell system. The new approach is based on a specific formulation of the implicit moment method that allows us to retain the same formalism that is valid in the classical case despite the formidable complication introduced by the nonlinear nature of the relativistic equations of motion. To demonstrate the validity of the new formulation, an implicit finite difference algorithm is developed to solve the Maxwell's equations and equations of motion. A number of benchmark problems are run: two stream instability, ion acoustic wave damping, Weibelmore » instability, and Poynting flux acceleration. The numerical results are all in agreement with analytical solutions.« less
Development of a biosensor telemetry system for monitoring fermentation in craft breweries.
Farina, Donatella; Zinellu, Manuel; Fanari, Mauro; Porcu, Maria Cristina; Scognamillo, Sergio; Puggioni, Giulia Maria Grazia; Rocchitta, Gaia; Serra, Pier Andrea; Pretti, Luca
2017-03-01
The development and applications of biosensors in the food industry has had a rapid grown due to their sensitivity, specificity and simplicity of use with respect to classical analytical methods. In this study, glucose and ethanol amperometric biosensors integrated with a wireless telemetry system were developed and used for the monitoring of top and bottom fermentations in beer wort samples. The collected data were in good agreement with those obtained by reference methods. The simplicity of construction, the low cost and the short time of analysis, combined with easy interpretation of the results, suggest that these devices could be a valuable alternative to conventional methods for monitoring fermentation processes in the food industry. Copyright © 2016 Elsevier Ltd. All rights reserved.
Prediction of Experimental Surface Heat Flux of Thin Film Gauges using ANFIS
NASA Astrophysics Data System (ADS)
Sarma, Shrutidhara; Sahoo, Niranjan; Unal, Aynur
2018-05-01
Precise quantification of surface heat fluxes in highly transient environment is of paramount importance from the design point of view of several engineering equipment like thermal protection or cooling systems. Such environments are simulated in experimental facilities by exposing the surface with transient heat loads typically step/impulsive in nature. The surface heating rates are then determined from highly transient temperature history captured by efficient surface temperature sensors. The classical approach is to use thin film gauges (TFGs) in which temperature variations are acquired within milliseconds, thereby allowing calculation of surface heat flux, based on the theory of one-dimensional heat conduction on a semi-infinite body. With recent developments in the soft computing methods, the present study is an attempt for the application of intelligent system technique, called adaptive neuro fuzzy inference system (ANFIS) to recover surface heat fluxes from a given temperature history recorded by TFGs without having the need to solve lengthy analytical equations. Experiments have been carried out by applying known quantity of `impulse heat load' through laser beam on TFGs. The corresponding voltage signals have been acquired and surface heat fluxes are estimated through classical analytical approach. These signals are then used to `train' the ANFIS model, which later predicts output for `test' values. Results from both methods have been compared and these surface heat fluxes are used to predict the non-linear relationship between thermal and electrical properties of the gauges that are exceedingly pertinent to the design of efficient TFGs. Further, surface plots have been created to give an insight about dimensionality effect of the non-linear dependence of thermal/electrical parameters on each other. Later, it is observed that a properly optimized ANFIS model can predict the impulsive heat profiles with significant accuracy. This paper thus shows the appropriateness of soft computing technique as a practically constructive replacement for tedious analytical formulation and henceforth, effectively quantifies the modeling of TFGs.
From classical to quantum mechanics: ``How to translate physical ideas into mathematical language''
NASA Astrophysics Data System (ADS)
Bergeron, H.
2001-09-01
Following previous works by E. Prugovečki [Physica A 91A, 202 (1978) and Stochastic Quantum Mechanics and Quantum Space-time (Reidel, Dordrecht, 1986)] on common features of classical and quantum mechanics, we develop a unified mathematical framework for classical and quantum mechanics (based on L2-spaces over classical phase space), in order to investigate to what extent quantum mechanics can be obtained as a simple modification of classical mechanics (on both logical and analytical levels). To obtain this unified framework, we split quantum theory in two parts: (i) general quantum axiomatics (a system is described by a state in a Hilbert space, observables are self-adjoints operators, and so on) and (ii) quantum mechanics proper that specifies the Hilbert space as L2(Rn); the Heisenberg rule [pi,qj]=-iℏδij with p=-iℏ∇, the free Hamiltonian H=-ℏ2Δ/2m and so on. We show that general quantum axiomatics (up to a supplementary "axiom of classicity") can be used as a nonstandard mathematical ground to formulate physical ideas and equations of ordinary classical statistical mechanics. So, the question of a "true quantization" with "ℏ" must be seen as an independent physical problem not directly related with quantum formalism. At this stage, we show that this nonstandard formulation of classical mechanics exhibits a new kind of operation that has no classical counterpart: this operation is related to the "quantization process," and we show why quantization physically depends on group theory (the Galilei group). This analytical procedure of quantization replaces the "correspondence principle" (or canonical quantization) and allows us to map classical mechanics into quantum mechanics, giving all operators of quantum dynamics and the Schrödinger equation. The great advantage of this point of view is that quantization is based on concrete physical arguments and not derived from some "pure algebraic rule" (we exhibit also some limit of the correspondence principle). Moreover spins for particles are naturally generated, including an approximation of their interaction with magnetic fields. We also recover by this approach the semi-classical formalism developed by E. Prugovečki [Stochastic Quantum Mechanics and Quantum Space-time (Reidel, Dordrecht, 1986)].
Idder, Salima; Ley, Laurent; Mazellier, Patrick; Budzinski, Hélène
2013-12-17
One of the current environmental issues concerns the presence and fate of pharmaceuticals in water bodies as these compounds may represent a potential environmental problem. The characterization of pharmaceutical contamination requires powerful analytical method able to quantify these pollutants at very low concentration (few ng L(-1)). In this work, a multi-residue analytical methodology (on-line solid phase extraction-liquid chromatography-triple quadrupole mass spectrometry using positive and negative electrospray ionization) has been developed and validated for 40 multi-class pharmaceuticals and metabolites for tap and surface waters. This on-line SPE method was very convenient and efficient compared to classical off-line SPE method because of its shorter total run time including sample preparation and smaller sample volume (1 mL vs up to 1 L). The optimized method included several therapeutic classes as lipid regulators, antibiotics, beta-blockers, non-steroidal anti-inflammatories, antineoplastic, etc., with various physicochemical properties. Quantification has been achieved with the internal standards. The limits of detection are between 0.7 and 15 ng L(-1) for drinking waters and 2-15 ng L(-1) for surface waters. The inter-day precision values are below 20% for each studied level. The improvement and strength of the analytical method has been verified along a monitoring of these 40 pharmaceuticals in Isle River, a French stream located in the South West of France. During this survey, 16 pharmaceutical compounds have been detected. Copyright © 2013 Elsevier B.V. All rights reserved.
Li, Chunquan; Han, Junwei; Yao, Qianlan; Zou, Chendan; Xu, Yanjun; Zhang, Chunlong; Shang, Desi; Zhou, Lingyun; Zou, Chaoxia; Sun, Zeguo; Li, Jing; Zhang, Yunpeng; Yang, Haixiu; Gao, Xu; Li, Xia
2013-05-01
Various 'omics' technologies, including microarrays and gas chromatography mass spectrometry, can be used to identify hundreds of interesting genes, proteins and metabolites, such as differential genes, proteins and metabolites associated with diseases. Identifying metabolic pathways has become an invaluable aid to understanding the genes and metabolites associated with studying conditions. However, the classical methods used to identify pathways fail to accurately consider joint power of interesting gene/metabolite and the key regions impacted by them within metabolic pathways. In this study, we propose a powerful analytical method referred to as Subpathway-GM for the identification of metabolic subpathways. This provides a more accurate level of pathway analysis by integrating information from genes and metabolites, and their positions and cascade regions within the given pathway. We analyzed two colorectal cancer and one metastatic prostate cancer data sets and demonstrated that Subpathway-GM was able to identify disease-relevant subpathways whose corresponding entire pathways might be ignored using classical entire pathway identification methods. Further analysis indicated that the power of a joint genes/metabolites and subpathway strategy based on their topologies may play a key role in reliably recalling disease-relevant subpathways and finding novel subpathways.
Olmo, B; García, A; Marín, A; Barbas, C
2005-03-25
The development of new pharmaceutical forms with classical active compounds generates new analytical problems. That is the case of sugar-free sachets of cough-cold products containing acetaminophen, phenylephrine hydrochloride and chlorpheniramine maleate. Two cyanopropyl stationary phases have been employed to tackle the problem. The Discovery cyanopropyl (SUPELCO) column permitted the separation of the three actives, maleate and excipients (mainly saccharine and orange flavour) with a constant proportion of aqueous/ organic solvent (95:5, v/v) and a pH gradient from 7.5 to 2. The run lasted 14 min. This technique avoids many problems related to baseline shifts with classical organic solvent gradients and opens great possibilities to modify selectivity not generally used in reversed phase HPLC. On the other hand, the Agilent Zorbax SB-CN column with a different retention profile permitted us to separate not only the three actives and the excipients but also the three known related compounds: 4-aminophenol, 4-chloracetanilide and 4-nitrophenol in an isocratic method with a run time under 30 min. This method was validated following ICH guidelines and validation parameters showed that it could be employed as stability-indicating method for this pharmaceutical form.
Della Pelle, Flavio; Compagnone, Dario
2018-02-04
Polyphenolic compounds (PCs) have received exceptional attention at the end of the past millennium and as much at the beginning of the new one. Undoubtedly, these compounds in foodstuffs provide added value for their well-known health benefits, for their technological role and also marketing. Many efforts have been made to provide simple, effective and user friendly analytical methods for the determination and antioxidant capacity (AOC) evaluation of food polyphenols. In a parallel track, over the last twenty years, nanomaterials (NMs) have made their entry in the analytical chemistry domain; NMs have, in fact, opened new paths for the development of analytical methods with the common aim to improve analytical performance and sustainability, becoming new tools in quality assurance of food and beverages. The aim of this review is to provide information on the most recent developments of new NMs-based tools and strategies for total polyphenols (TP) determination and AOC evaluation in food. In this review optical, electrochemical and bioelectrochemical approaches have been reviewed. The use of nanoparticles, quantum dots, carbon nanomaterials and hybrid materials for the detection of polyphenols is the main subject of the works reported. However, particular attention has been paid to the success of the application in real samples, in addition to the NMs. In particular, the discussion has been focused on methods/devices presenting, in the opinion of the authors, clear advancement in the fields, in terms of simplicity, rapidity and usability. This review aims to demonstrate how the NM-based approaches represent valid alternatives to classical methods for polyphenols analysis, and are mature to be integrated for the rapid quality assessment of food quality in lab or directly in the field.
2018-01-01
Polyphenolic compounds (PCs) have received exceptional attention at the end of the past millennium and as much at the beginning of the new one. Undoubtedly, these compounds in foodstuffs provide added value for their well-known health benefits, for their technological role and also marketing. Many efforts have been made to provide simple, effective and user friendly analytical methods for the determination and antioxidant capacity (AOC) evaluation of food polyphenols. In a parallel track, over the last twenty years, nanomaterials (NMs) have made their entry in the analytical chemistry domain; NMs have, in fact, opened new paths for the development of analytical methods with the common aim to improve analytical performance and sustainability, becoming new tools in quality assurance of food and beverages. The aim of this review is to provide information on the most recent developments of new NMs-based tools and strategies for total polyphenols (TP) determination and AOC evaluation in food. In this review optical, electrochemical and bioelectrochemical approaches have been reviewed. The use of nanoparticles, quantum dots, carbon nanomaterials and hybrid materials for the detection of polyphenols is the main subject of the works reported. However, particular attention has been paid to the success of the application in real samples, in addition to the NMs. In particular, the discussion has been focused on methods/devices presenting, in the opinion of the authors, clear advancement in the fields, in terms of simplicity, rapidity and usability. This review aims to demonstrate how the NM-based approaches represent valid alternatives to classical methods for polyphenols analysis, and are mature to be integrated for the rapid quality assessment of food quality in lab or directly in the field. PMID:29401719
Treatment of a Disorder of Self through Functional Analytic Psychotherapy
ERIC Educational Resources Information Center
Ferro-Garcia, Rafael; Lopez-Bermudez, Miguel Angel; Valero-Aguayo, Luis
2012-01-01
This paper presents a clinical case study of a depressed female, treated by means of Functional Analytic Psychotherapy (FAP) based on the theory and techniques for treating an "unstable self" (Kohlenberg & Tsai, 1991), instead of the classic treatment for depression. The client was a 20-year-old college student. The trigger for her problems was a…
Maya, Fernando; Estela, José Manuel; Cerdà, Víctor
2009-07-01
In this work, the hyphenation of the multisyringe flow injection analysis technique with a 100-cm-long pathlength liquid core waveguide has been accomplished. The Cl-/Hg(SCN)2/Fe3+ reaction system for the spectrophotometric determination of chloride (Cl(-)) in waters was used as chemical model. As a result, this classic analytical methodology has been improved, minimizing dramatically the consumption of reagents, in particular, that of the highly biotoxic chemical Hg(SCN)2. The proposed method features a linear dynamic range composed of two steps between (1) 0.2-2 and (2) 2-8 mg Cl- L(-1), thus extended applicability due to on-line sample dilution (up to 400 mg Cl- L(-1)). It also presents improved limits of detection and quantification of 0.06 and 0.20 mg Cl- L(-1), respectively. The coefficient of variation and the injection throughput were 1.3% (n = 10, 2 mg Cl- L(-1)) and 21 h(-1). Furthermore, a very low consumption of reagents per Cl- determination of 0.2 microg Hg(II) and 28 microg Fe3+ has been achieved. The method was successfully applied to the determination of Cl- in different types of water samples. Finally, the proposed system is critically compared from a green analytical chemistry point of view against other flow systems for the same purpose.
Ellingson, David; Zywicki, Richard; Sullivan, Darryl
2014-01-01
Recent studies have shown that there are detectable levels of arsenic (As) in rice, rice food products, and apple juice. This has created significant concern to the public, the food industry, and various regulatory bodies. Classic test methods typically measure total As and are unable to differentiate the various As species. Since different As species have greatly different toxicities, an analytical method was needed to separate and quantify the different inorganic and organic species of As. The inorganic species arsenite [As(+3)] and arsenate [As(+5)] are highly toxic. With this in mind, an ion chromatography-inductively coupled plasma (IC-ICP/MS) method was developed and validated for rice and rice food products that can separate and individually measure multiple inorganic and organic species of As. This allows for the evaluation of the safety or risk associated with any product analyzed. The IC-ICP/MS method was validated on rice and rice food products, and it has been used successfully on apple juice. This paper provides details of the validated method as well as some lessons learned during its development. Precision and accuracy data are presented for rice, rice food products, and apple juice.
NASA Astrophysics Data System (ADS)
Grib, S. A.; Leora, S. N.
2016-03-01
We use analytical methods of magnetohydrodynamics to describe the behavior of cosmic plasma. This approach makes it possible to describe different structural fields of disturbances in solar wind: shock waves, direction discontinuities, magnetic clouds and magnetic holes, and their interaction with each other and with the Earth's magnetosphere. We note that the wave problems of solar-terrestrial physics can be efficiently solved by the methods designed for solving classical problems of mathematical physics. We find that the generalized Riemann solution particularly simplifies the consideration of secondary waves in the magnetosheath and makes it possible to describe in detail the classical solutions of boundary value problems. We consider the appearance of a fast compression wave in the Earth's magnetosheath, which is reflected from the magnetosphere and can nonlinearly overturn to generate a back shock wave. We propose a new mechanism for the formation of a plateau with protons of increased density and a magnetic field trough in the magnetosheath due to slow secondary shock waves. Most of our findings are confirmed by direct observations conducted on spacecrafts (WIND, ACE, Geotail, Voyager-2, SDO and others).
An Artificial Neural Networks Method for Solving Partial Differential Equations
NASA Astrophysics Data System (ADS)
Alharbi, Abir
2010-09-01
While there already exists many analytical and numerical techniques for solving PDEs, this paper introduces an approach using artificial neural networks. The approach consists of a technique developed by combining the standard numerical method, finite-difference, with the Hopfield neural network. The method is denoted Hopfield-finite-difference (HFD). The architecture of the nets, energy function, updating equations, and algorithms are developed for the method. The HFD method has been used successfully to approximate the solution of classical PDEs, such as the Wave, Heat, Poisson and the Diffusion equations, and on a system of PDEs. The software Matlab is used to obtain the results in both tabular and graphical form. The results are similar in terms of accuracy to those obtained by standard numerical methods. In terms of speed, the parallel nature of the Hopfield nets methods makes them easier to implement on fast parallel computers while some numerical methods need extra effort for parallelization.
Schrödinger-Poisson-Vlasov-Poisson correspondence
NASA Astrophysics Data System (ADS)
Mocz, Philip; Lancaster, Lachlan; Fialkov, Anastasia; Becerra, Fernando; Chavanis, Pierre-Henri
2018-04-01
The Schrödinger-Poisson equations describe the behavior of a superfluid Bose-Einstein condensate under self-gravity with a 3D wave function. As ℏ/m →0 , m being the boson mass, the equations have been postulated to approximate the collisionless Vlasov-Poisson equations also known as the collisionless Boltzmann-Poisson equations. The latter describe collisionless matter with a 6D classical distribution function. We investigate the nature of this correspondence with a suite of numerical test problems in 1D, 2D, and 3D along with analytic treatments when possible. We demonstrate that, while the density field of the superfluid always shows order unity oscillations as ℏ/m →0 due to interference and the uncertainty principle, the potential field converges to the classical answer as (ℏ/m )2. Thus, any dynamics coupled to the superfluid potential is expected to recover the classical collisionless limit as ℏ/m →0 . The quantum superfluid is able to capture rich phenomena such as multiple phase-sheets, shell-crossings, and warm distributions. Additionally, the quantum pressure tensor acts as a regularizer of caustics and singularities in classical solutions. This suggests the exciting prospect of using the Schrödinger-Poisson equations as a low-memory method for approximating the high-dimensional evolution of the Vlasov-Poisson equations. As a particular example we consider dark matter composed of ultralight axions, which in the classical limit (ℏ/m →0 ) is expected to manifest itself as collisionless cold dark matter.
Addressing the Analytic Challenges of Cross-Sectional Pediatric Pneumonia Etiology Data.
Hammitt, Laura L; Feikin, Daniel R; Scott, J Anthony G; Zeger, Scott L; Murdoch, David R; O'Brien, Katherine L; Deloria Knoll, Maria
2017-06-15
Despite tremendous advances in diagnostic laboratory technology, identifying the pathogen(s) causing pneumonia remains challenging because the infected lung tissue cannot usually be sampled for testing. Consequently, to obtain information about pneumonia etiology, clinicians and researchers test specimens distant to the site of infection. These tests may lack sensitivity (eg, blood culture, which is only positive in a small proportion of children with pneumonia) and/or specificity (eg, detection of pathogens in upper respiratory tract specimens, which may indicate asymptomatic carriage or a less severe syndrome, such as upper respiratory infection). While highly sensitive nucleic acid detection methods and testing of multiple specimens improve sensitivity, multiple pathogens are often detected and this adds complexity to the interpretation as the etiologic significance of results may be unclear (ie, the pneumonia may be caused by none, one, some, or all of the pathogens detected). Some of these challenges can be addressed by adjusting positivity rates to account for poor sensitivity or incorporating test results from controls without pneumonia to account for poor specificity. However, no classical analytic methods can account for measurement error (ie, sensitivity and specificity) for multiple specimen types and integrate the results of measurements for multiple pathogens to produce an accurate understanding of etiology. We describe the major analytic challenges in determining pneumonia etiology and review how the common analytical approaches (eg, descriptive, case-control, attributable fraction, latent class analysis) address some but not all challenges. We demonstrate how these limitations necessitate a new, integrated analytical approach to pneumonia etiology data. © The Author 2017. Published by Oxford University Press for the Infectious Diseases Society of America.
Addressing the Analytic Challenges of Cross-Sectional Pediatric Pneumonia Etiology Data
Feikin, Daniel R.; Scott, J. Anthony G.; Zeger, Scott L.; Murdoch, David R.; O’Brien, Katherine L.; Deloria Knoll, Maria
2017-01-01
Abstract Despite tremendous advances in diagnostic laboratory technology, identifying the pathogen(s) causing pneumonia remains challenging because the infected lung tissue cannot usually be sampled for testing. Consequently, to obtain information about pneumonia etiology, clinicians and researchers test specimens distant to the site of infection. These tests may lack sensitivity (eg, blood culture, which is only positive in a small proportion of children with pneumonia) and/or specificity (eg, detection of pathogens in upper respiratory tract specimens, which may indicate asymptomatic carriage or a less severe syndrome, such as upper respiratory infection). While highly sensitive nucleic acid detection methods and testing of multiple specimens improve sensitivity, multiple pathogens are often detected and this adds complexity to the interpretation as the etiologic significance of results may be unclear (ie, the pneumonia may be caused by none, one, some, or all of the pathogens detected). Some of these challenges can be addressed by adjusting positivity rates to account for poor sensitivity or incorporating test results from controls without pneumonia to account for poor specificity. However, no classical analytic methods can account for measurement error (ie, sensitivity and specificity) for multiple specimen types and integrate the results of measurements for multiple pathogens to produce an accurate understanding of etiology. We describe the major analytic challenges in determining pneumonia etiology and review how the common analytical approaches (eg, descriptive, case-control, attributable fraction, latent class analysis) address some but not all challenges. We demonstrate how these limitations necessitate a new, integrated analytical approach to pneumonia etiology data. PMID:28575372
Chapman, Benjamin P.; Weiss, Alexander; Duberstein, Paul
2016-01-01
Statistical learning theory (SLT) is the statistical formulation of machine learning theory, a body of analytic methods common in “big data” problems. Regression-based SLT algorithms seek to maximize predictive accuracy for some outcome, given a large pool of potential predictors, without overfitting the sample. Research goals in psychology may sometimes call for high dimensional regression. One example is criterion-keyed scale construction, where a scale with maximal predictive validity must be built from a large item pool. Using this as a working example, we first introduce a core principle of SLT methods: minimization of expected prediction error (EPE). Minimizing EPE is fundamentally different than maximizing the within-sample likelihood, and hinges on building a predictive model of sufficient complexity to predict the outcome well, without undue complexity leading to overfitting. We describe how such models are built and refined via cross-validation. We then illustrate how three common SLT algorithms–Supervised Principal Components, Regularization, and Boosting—can be used to construct a criterion-keyed scale predicting all-cause mortality, using a large personality item pool within a population cohort. Each algorithm illustrates a different approach to minimizing EPE. Finally, we consider broader applications of SLT predictive algorithms, both as supportive analytic tools for conventional methods, and as primary analytic tools in discovery phase research. We conclude that despite their differences from the classic null-hypothesis testing approach—or perhaps because of them–SLT methods may hold value as a statistically rigorous approach to exploratory regression. PMID:27454257
Kinyua, Juliet; Covaci, Adrian; Maho, Walid; McCall, Ann-Kathrin; Neels, Hugo; van Nuijs, Alexander L N
2015-09-01
Sewage-based epidemiology (SBE) employs the analysis of sewage to detect and quantify drug use within a community. While SBE has been applied repeatedly for the estimation of classical illicit drugs, only few studies investigated new psychoactive substances (NPS). These compounds mimic effects of illicit drugs by introducing slight modifications to chemical structures of controlled illicit drugs. We describe the optimization, validation, and application of an analytical method using liquid chromatography coupled to positive electrospray tandem mass spectrometry (LC-ESI-MS/MS) for the determination of seven NPS in sewage: methoxetamine (MXE), butylone, ethylone, methylone, methiopropamine (MPA), 4-methoxymethamphetamine (PMMA), and 4-methoxyamphetamine (PMA). Sample preparation was performed using solid-phase extraction (SPE) with Oasis MCX cartridges. The LC separation was done with a HILIC (150 x 3 mm, 5 µm) column which ensured good resolution of the analytes with a total run time of 19 min. The lower limit of quantification (LLOQ) was between 0.5 and 5 ng/L for all compounds. The method was validated by evaluating the following parameters: sensitivity, selectivity, linearity, accuracy, precision, recoveries and matrix effects. The method was applied on sewage samples collected from sewage treatment plants in Belgium and Switzerland in which all investigated compounds were detected, except MPA and PMA. Furthermore, a consistent presence of MXE has been observed in most of the sewage samples at levels higher than LLOQ. Copyright © 2015 John Wiley & Sons, Ltd.
NASA Astrophysics Data System (ADS)
Jankovic, I.; Barnes, R. J.; Soule, R.
2001-12-01
The analytic element method is used to model local three-dimensional flow in the vicinity of partially penetrating wells. The flow domain is bounded by an impermeable horizontal base, a phreatic surface with recharge and a cylindrical lateral boundary. The analytic element solution for this problem contains (1) a fictitious source technique to satisfy the head and the discharge conditions along the phreatic surface, (2) a fictitious source technique to satisfy specified head conditions along the cylindrical boundary, (3) a method of imaging to satisfy the no-flow condition across the impermeable base, (4) the classical analytic solution for a well and (5) spheroidal harmonics to account for the influence of the inhomogeneities in hydraulic conductivity. Temporal variations of the flow system due to time-dependent recharge and pumping are represented by combining the analytic element method with a finite difference method: analytic element method is used to represent spatial changes in head and discharge, while the finite difference method represents temporal variations. The solution provides a very detailed description of local groundwater flow with an arbitrary number of wells of any orientation and an arbitrary number of ellipsoidal inhomogeneities of any size and conductivity. These inhomogeneities may be used to model local hydrogeologic features (such as gravel packs and clay lenses) that significantly influence the flow in the vicinity of partially penetrating wells. Several options for specifying head values along the lateral domain boundary are available. These options allow for inclusion of the model into steady and transient regional groundwater models. The head values along the lateral domain boundary may be specified directly (as time series). The head values along the lateral boundary may also be assigned by specifying the water-table gradient and a head value at a single point (as time series). A case study is included to demonstrate the application of the model in local modeling of the groundwater flow. Transient three-dimensional capture zones are delineated for a site on Prairie Island, MN. Prairie Island is located on the Mississippi River 40 miles south of the Twin Cities metropolitan area. The case study focuses on a well that has been known to contain viral DNA. The objective of the study was to assess the potential for pathogen migration toward the well.
Multivariable Hermite polynomials and phase-space dynamics
NASA Technical Reports Server (NTRS)
Dattoli, G.; Torre, Amalia; Lorenzutta, S.; Maino, G.; Chiccoli, C.
1994-01-01
The phase-space approach to classical and quantum systems demands for advanced analytical tools. Such an approach characterizes the evolution of a physical system through a set of variables, reducing to the canonically conjugate variables in the classical limit. It often happens that phase-space distributions can be written in terms of quadratic forms involving the above quoted variables. A significant analytical tool to treat these problems may come from the generalized many-variables Hermite polynomials, defined on quadratic forms in R(exp n). They form an orthonormal system in many dimensions and seem the natural tool to treat the harmonic oscillator dynamics in phase-space. In this contribution we discuss the properties of these polynomials and present some applications to physical problems.
Synthesis of active controls for flutter suppression on a flight research wing
NASA Technical Reports Server (NTRS)
Abel, I.; Perry, B., III; Murrow, H. N.
1977-01-01
This paper describes some activities associated with the preliminary design of an active control system for flutter suppression capable of demonstrating a 20% increase in flutter velocity. Results from two control system synthesis techniques are given. One technique uses classical control theory, and the other uses an 'aerodynamic energy method' where control surface rates or displacements are minimized. Analytical methods used to synthesize the control systems and evaluate their performance are described. Some aspects of a program for flight testing the active control system are also given. This program, called DAST (Drones for Aerodynamics and Structural Testing), employs modified drone-type vehicles for flight assessments and validation testing.
NASA Astrophysics Data System (ADS)
Cao, Lu; Verbeek, Fons J.
2012-03-01
In computer graphics and visualization, reconstruction of a 3D surface from a point cloud is an important research area. As the surface contains information that can be measured, i.e. expressed in features, the application of surface reconstruction can be potentially important for application in bio-imaging. Opportunities in this application area are the motivation for this study. In the past decade, a number of algorithms for surface reconstruction have been proposed. Generally speaking, these methods can be separated into two categories: i.e., explicit representation and implicit approximation. Most of the aforementioned methods are firmly based in theory; however, so far, no analytical evaluation between these methods has been presented. The straightforward way of evaluation has been by convincing through visual inspection. Through evaluation we search for a method that can precisely preserve the surface characteristics and that is robust in the presence of noise. The outcome will be used to improve reliability in surface reconstruction of biological models. We, therefore, use an analytical approach by selecting features as surface descriptors and measure these features in varying conditions. We selected surface distance, surface area and surface curvature as three major features to compare quality of the surface created by the different algorithms. Our starting point has been ground truth values obtained from analytical shapes such as the sphere and the ellipsoid. In this paper we present four classical surface reconstruction methods from the two categories mentioned above, i.e. the Power Crust, the Robust Cocone, the Fourier-based method and the Poisson reconstruction method. The results obtained from our experiments indicate that Poisson reconstruction method performs the best in the presence of noise.
ERIC Educational Resources Information Center
Ginway, M. Elizabeth
2013-01-01
This study focuses on some of the classical features of Rubem Fonseca's "A grande arte" (1983) in order to emphasize the puzzle-solving tradition of the detective novel that is embedded within Fonseca's crime thriller, producing a work that does not entirely fit into traditional divisions of detective, hardboiled, or crime…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gutjahr, A.L.; Kincaid, C.T.; Mercer, J.W.
1987-04-01
The objective of this report is to summarize the various modeling approaches that were used to simulate solute transport in a variably saturated emission. In particular, the technical strengths and weaknesses of each approach are discussed, and conclusions and recommendations for future studies are made. Five models are considered: (1) one-dimensional analytical and semianalytical solutions of the classical deterministic convection-dispersion equation (van Genuchten, Parker, and Kool, this report ); (2) one-dimensional simulation using a continuous-time Markov process (Knighton and Wagenet, this report); (3) one-dimensional simulation using the time domain method and the frequency domain method (Duffy and Al-Hassan, this report);more » (4) one-dimensional numerical approach that combines a solution of the classical deterministic convection-dispersion equation with a chemical equilibrium speciation model (Cederberg, this report); and (5) three-dimensional numerical solution of the classical deterministic convection-dispersion equation (Huyakorn, Jones, Parker, Wadsworth, and White, this report). As part of the discussion, the input data and modeling results are summarized. The models were used in a data analysis mode, as opposed to a predictive mode. Thus, the following discussion will concentrate on the data analysis aspects of model use. Also, all the approaches were similar in that they were based on a convection-dispersion model of solute transport. Each discussion addresses the modeling approaches in the order listed above.« less
Hervás, César; Silva, Manuel; Serrano, Juan Manuel; Orejuela, Eva
2004-01-01
The suitability of an approach for extracting heuristic rules from trained artificial neural networks (ANNs) pruned by a regularization method and with architectures designed by evolutionary computation for quantifying highly overlapping chromatographic peaks is demonstrated. The ANN input data are estimated by the Levenberg-Marquardt method in the form of a four-parameter Weibull curve associated with the profile of the chromatographic band. To test this approach, two N-methylcarbamate pesticides, carbofuran and propoxur, were quantified using a classic peroxyoxalate chemiluminescence reaction as a detection system for chromatographic analysis. Straightforward network topologies (one and two outputs models) allow the analytes to be quantified in concentration ratios ranging from 1:7 to 5:1 with an average standard error of prediction for the generalization test of 2.7 and 2.3% for carbofuran and propoxur, respectively. The reduced dimensions of the selected ANN architectures, especially those obtained after using heuristic rules, allowed simple quantification equations to be developed that transform the input variables into output variables. These equations can be easily interpreted from a chemical point of view to attain quantitative analytical information regarding the effect of both analytes on the characteristics of chromatographic bands, namely profile, dispersion, peak height, and residence time. Copyright 2004 American Chemical Society
A Bayesian approach to meta-analysis of plant pathology studies.
Mila, A L; Ngugi, H K
2011-01-01
Bayesian statistical methods are used for meta-analysis in many disciplines, including medicine, molecular biology, and engineering, but have not yet been applied for quantitative synthesis of plant pathology studies. In this paper, we illustrate the key concepts of Bayesian statistics and outline the differences between Bayesian and classical (frequentist) methods in the way parameters describing population attributes are considered. We then describe a Bayesian approach to meta-analysis and present a plant pathological example based on studies evaluating the efficacy of plant protection products that induce systemic acquired resistance for the management of fire blight of apple. In a simple random-effects model assuming a normal distribution of effect sizes and no prior information (i.e., a noninformative prior), the results of the Bayesian meta-analysis are similar to those obtained with classical methods. Implementing the same model with a Student's t distribution and a noninformative prior for the effect sizes, instead of a normal distribution, yields similar results for all but acibenzolar-S-methyl (Actigard) which was evaluated only in seven studies in this example. Whereas both the classical (P = 0.28) and the Bayesian analysis with a noninformative prior (95% credibility interval [CRI] for the log response ratio: -0.63 to 0.08) indicate a nonsignificant effect for Actigard, specifying a t distribution resulted in a significant, albeit variable, effect for this product (CRI: -0.73 to -0.10). These results confirm the sensitivity of the analytical outcome (i.e., the posterior distribution) to the choice of prior in Bayesian meta-analyses involving a limited number of studies. We review some pertinent literature on more advanced topics, including modeling of among-study heterogeneity, publication bias, analyses involving a limited number of studies, and methods for dealing with missing data, and show how these issues can be approached in a Bayesian framework. Bayesian meta-analysis can readily include information not easily incorporated in classical methods, and allow for a full evaluation of competing models. Given the power and flexibility of Bayesian methods, we expect them to become widely adopted for meta-analysis of plant pathology studies.
On-Site Detection as a Countermeasure to Chemical Warfare/Terrorism.
Seto, Y
2014-01-01
On-site monitoring and detection are necessary in the crisis and consequence management of wars and terrorism involving chemical warfare agents (CWAs) such as sarin. The analytical performance required for on-site detection is mainly determined by the fatal vapor concentration and volatility of the CWAs involved. The analytical performance for presently available on-site technologies and commercially available on-site equipment for detecting CWAs interpreted and compared in this review include: classical manual methods, photometric methods, ion mobile spectrometry, vibrational spectrometry, gas chromatography, mass spectrometry, sensors, and other methods. Some of the data evaluated were obtained from our experiments using authentic CWAs. We concluded that (a) no technologies perfectly fulfill all of the on-site detection requirements and (b) adequate on-site detection requires (i) a combination of the monitoring-tape method and ion-mobility spectrometry for point detection and (ii) a combination of the monitoring-tape method, atmospheric pressure chemical ionization mass spectrometry with counterflow introduction, and gas chromatography with a trap and special detectors for continuous monitoring. The basic properties of CWAs, the concept of on-site detection, and the sarin gas attacks in Japan as well as the forensic investigations thereof, are also explicated in this article. Copyright © 2014 Central Police University.
Transformer modeling for low- and mid-frequency electromagnetic transients simulation
NASA Astrophysics Data System (ADS)
Lambert, Mathieu
In this work, new models are developed for single-phase and three-phase shell-type transformers for the simulation of low-frequency transients, with the use of the coupled leakage model. This approach has the advantage that it avoids the use of fictitious windings to connect the leakage model to a topological core model, while giving the same response in short-circuit as the indefinite admittance matrix (BCTRAN) model. To further increase the model sophistication, it is proposed to divide windings into coils in the new models. However, short-circuit measurements between coils are never available. Therefore, a novel analytical method is elaborated for this purpose, which allows the calculation in 2-D of short-circuit inductances between coils of rectangular cross-section. The results of this new method are in agreement with the results obtained from the finite element method in 2-D. Furthermore, the assumption that the leakage field is approximately 2-D in shell-type transformers is validated with a 3-D simulation. The outcome of this method is used to calculate the self and mutual inductances between the coils of the coupled leakage model and the results are showing good correspondence with terminal short-circuit measurements. Typically, leakage inductances in transformers are calculated from short-circuit measurements and the magnetizing branch is calculated from no-load measurements, assuming that leakages are unimportant for the unloaded transformer and that magnetizing current is negligible during a short-circuit. While the core is assumed to have an infinite permeability to calculate short-circuit inductances, and it is a reasonable assumption since the core's magnetomotive force is negligible during a short-circuit, the same reasoning does not necessarily hold true for leakage fluxes in no-load conditions. This is because the core starts to saturate when the transformer is unloaded. To take this into account, a new analytical method is developed in this dissertation, which removes the contributions of leakage fluxes to properly calculate the magnetizing branches of the new models. However, in the new analytical method for calculating short-circuit inductances (as with other analytical methods), eddy-current losses are neglected. Similarly, winding losses are omitted in the coupled leakage model and in the new analytical method to remove leakage fluxes to calculate core parameters from no-load tests. These losses will be taken into account in future work. Both transformer models presented in this dissertation are based on the classical hypothesis that flux can be discretized into flux tubes, which is also the assumption used in a category of models called topological models. Even though these models are physically-based, there exist many topological models for a given transformer geometry. It is shown in this work that these differences can be explained in part through the concepts of divided and integral fluxes, and it is explained that divided approach is the result of mathematical manipulations, while the integral approach is more "physically-accurate". Furthermore, it is demonstrated, for the special case of a two-winding single-phase transformer, that the divided leakage inductances have to be nonlinear for both approaches to be equivalent. Even between models of the divided or integral approach models, there are differences, which arise from the particular choice of so-called flux paths" (tubes). This arbitrariness comes from the fact that with the classical hypothesis that magnetic flux can be confined into predefined flux tubes (leading to classical magnetic circuit theory), it is assumed that flux cannot leak from the sides of flux tubes. Therefore, depending on the transformer's operation conditions (degree of saturation, short-circuit, etc.), this can lead to different choices of flux tubes and different models. In this work, a new theoretical framework is developed to allow flux to leak from the sides of the tube, and generalized to include resistances and capacitances in what is called electromagnetic circuit theory. Also, it is explained that this theory is actually equivalent to what is called finite formulations (such as the finite element method), which bridges the gap between circuit theory and discrete electromagnetism. Therefore, this enables not only to develop topologically-correct transformer models, where electric and magnetic circuits are defined on dual meshes, but also rotating machine and transmission lines models (wave propagation can be taken into account).
Rotolo, Federico; Paoletti, Xavier; Michiels, Stefan
2018-03-01
Surrogate endpoints are attractive for use in clinical trials instead of well-established endpoints because of practical convenience. To validate a surrogate endpoint, two important measures can be estimated in a meta-analytic context when individual patient data are available: the R indiv 2 or the Kendall's τ at the individual level, and the R trial 2 at the trial level. We aimed at providing an R implementation of classical and well-established as well as more recent statistical methods for surrogacy assessment with failure time endpoints. We also intended incorporating utilities for model checking and visualization and data generating methods described in the literature to date. In the case of failure time endpoints, the classical approach is based on two steps. First, a Kendall's τ is estimated as measure of individual level surrogacy using a copula model. Then, the R trial 2 is computed via a linear regression of the estimated treatment effects; at this second step, the estimation uncertainty can be accounted for via measurement-error model or via weights. In addition to the classical approach, we recently developed an approach based on bivariate auxiliary Poisson models with individual random effects to measure the Kendall's τ and treatment-by-trial interactions to measure the R trial 2 . The most common data simulation models described in the literature are based on: copula models, mixed proportional hazard models, and mixture of half-normal and exponential random variables. The R package surrosurv implements the classical two-step method with Clayton, Plackett, and Hougaard copulas. It also allows to optionally adjusting the second-step linear regression for measurement-error. The mixed Poisson approach is implemented with different reduced models in addition to the full model. We present the package functions for estimating the surrogacy models, for checking their convergence, for performing leave-one-trial-out cross-validation, and for plotting the results. We illustrate their use in practice on individual patient data from a meta-analysis of 4069 patients with advanced gastric cancer from 20 trials of chemotherapy. The surrosurv package provides an R implementation of classical and recent statistical methods for surrogacy assessment of failure time endpoints. Flexible simulation functions are available to generate data according to the methods described in the literature. Copyright © 2017 Elsevier B.V. All rights reserved.
Accurate expressions for solar cell fill factors including series and shunt resistances
NASA Astrophysics Data System (ADS)
Green, Martin A.
2016-02-01
Together with open-circuit voltage and short-circuit current, fill factor is a key solar cell parameter. In their classic paper on limiting efficiency, Shockley and Queisser first investigated this factor's analytical properties showing, for ideal cells, it could be expressed implicitly in terms of the maximum power point voltage. Subsequently, fill factors usually have been calculated iteratively from such implicit expressions or from analytical approximations. In the absence of detrimental series and shunt resistances, analytical fill factor expressions have recently been published in terms of the Lambert W function available in most mathematical computing software. Using a recently identified perturbative relationship, exact expressions in terms of this function are derived in technically interesting cases when both series and shunt resistances are present but have limited impact, allowing a better understanding of their effect individually and in combination. Approximate expressions for arbitrary shunt and series resistances are then deduced, which are significantly more accurate than any previously published. A method based on the insights developed is also reported for deducing one-diode fits to experimental data.
Cao, Le; Wei, Bing
2014-08-25
Finite-difference time-domain (FDTD) algorithm with a new method of plane wave excitation is used to investigate the RCS (Radar Cross Section) characteristics of targets over layered half space. Compare with the traditional excitation plane wave method, the calculation memory and time requirement is greatly decreased. The FDTD calculation is performed with a plane wave incidence, and the RCS of far field is obtained by extrapolating the currently calculated data on the output boundary. However, methods available for extrapolating have to evaluate the half space Green function. In this paper, a new method which avoids using the complex and time-consuming half space Green function is proposed. Numerical results show that this method is in good agreement with classic algorithm and it can be used in the fast calculation of scattering and radiation of targets over layered half space.
Application of the variational-asymptotical method to composite plates
NASA Technical Reports Server (NTRS)
Hodges, Dewey H.; Lee, Bok W.; Atilgan, Ali R.
1992-01-01
A method is developed for the 3D analysis of laminated plate deformation which is an extension of a variational-asymptotical method by Atilgan and Hodges (1991). Both methods are based on the treatment of plate deformation by splitting the 3D analysis into linear through-the-thickness analysis and 2D plate analysis. Whereas the first technique tackles transverse shear deformation in the second asymptotical approximation, the present method simplifies its treatment and restricts it to the first approximation. Both analytical techniques are applied to the linear cylindrical bending problem, and the strain and stress distributions are derived and compared with those of the exact solution. The present theory provides more accurate results than those of the classical laminated-plate theory for the transverse displacement of 2-, 3-, and 4-layer cross-ply laminated plates. The method can give reliable estimates of the in-plane strain and displacement distributions.
Numerical analysis of singular solutions of two-dimensional problems of asymmetric elasticity
NASA Astrophysics Data System (ADS)
Korepanov, V. V.; Matveenko, V. P.; Fedorov, A. Yu.; Shardakov, I. N.
2013-07-01
An algorithm for the numerical analysis of singular solutions of two-dimensional problems of asymmetric elasticity is considered. The algorithm is based on separation of a power-law dependence from the finite-element solution in a neighborhood of singular points in the domain under study, where singular solutions are possible. The obtained power-law dependencies allow one to conclude whether the stresses have singularities and what the character of these singularities is. The algorithm was tested for problems of classical elasticity by comparing the stress singularity exponents obtained by the proposed method and from known analytic solutions. Problems with various cases of singular points, namely, body surface points at which either the smoothness of the surface is violated, or the type of boundary conditions is changed, or distinct materials are in contact, are considered as applications. The stress singularity exponents obtained by using the models of classical and asymmetric elasticity are compared. It is shown that, in the case of cracks, the stress singularity exponents are the same for the elasticity models under study, but for other cases of singular points, the stress singularity exponents obtained on the basis of asymmetric elasticity have insignificant quantitative distinctions from the solutions of the classical elasticity.
Extended analytical solutions for effective elastic moduli of cracked porous media
NASA Astrophysics Data System (ADS)
Nguyen, Sy-Tuan; To, Quy Dong; Vu, Minh Ngoc
2017-05-01
Extended solutions are derived, on the basis of the micromechanical methods, for the effective elastic moduli of porous media containing stiff pores and both open and closed cracks. Analytical formulas of the overall bulk and shear moduli are obtained as functions of the elastic moduli of the solid skeleton, porosity and the densities of open and closed cracks families. We show that the obtained results are extensions of the classical widely used Walsh's (JGR, 1965) and Budiansky-O‧Connell's (JGR, 1974) solutions. Parametric sensitivity analysis clarifies the impact of the model parameters on the effective elastic properties. An inverse analysis, using sonic and density data, is considered to quantify the density of both open and closed cracks. It is observed that the density of closed cracks depends strongly on stress condition while the dependence of open cracks on the confining stress is negligible.
Li, Chenxi; Cazzolato, Ben; Zander, Anthony
2016-01-01
The classic analytical model for the sound absorption of micro perforated materials is well developed and is based on a boundary condition where the velocity of the material is assumed to be zero, which is accurate when the material vibration is negligible. This paper develops an analytical model for finite-sized circular micro perforated membranes (MPMs) by applying a boundary condition such that the velocity of air particles on the hole wall boundary is equal to the membrane vibration velocity (a zero-slip condition). The acoustic impedance of the perforation, which varies with its position, is investigated. A prediction method for the overall impedance of the holes and the combined impedance of the MPM is also provided. The experimental results for four different MPM configurations are used to validate the model and good agreement between the experimental and predicted results is achieved.
NASA Astrophysics Data System (ADS)
Yehia, Ali M.; Mohamed, Heba M.
2016-01-01
Three advanced chemmometric-assisted spectrophotometric methods namely; Concentration Residuals Augmented Classical Least Squares (CRACLS), Multivariate Curve Resolution-Alternating Least Squares (MCR-ALS) and Principal Component Analysis-Artificial Neural Networks (PCA-ANN) were developed, validated and benchmarked to PLS calibration; to resolve the severely overlapped spectra and simultaneously determine; Paracetamol (PAR), Guaifenesin (GUA) and Phenylephrine (PHE) in their ternary mixture and in presence of p-aminophenol (AP) the main degradation product and synthesis impurity of Paracetamol. The analytical performance of the proposed methods was described by percentage recoveries, root mean square error of calibration and standard error of prediction. The four multivariate calibration methods could be directly used without any preliminary separation step and successfully applied for pharmaceutical formulation analysis, showing no excipients' interference.
Supercritical fluid extraction of selected pharmaceuticals from water and serum.
Simmons, B R; Stewart, J T
1997-01-24
Selected drugs from benzodiazepine, anabolic agent and non-steroidal anti-inflammatory drug (NSAID) therapeutic classes were extracted from water and serum using a supercritical CO2 mobile phase. The samples were extracted at a pump pressure of 329 MPa, an extraction chamber temperature of 45 degrees C, and a restrictor temperature of 60 degrees C. The static extraction time for all samples was 2.5 min and the dynamic extraction time ranged from 5 to 20 min. The analytes were collected in appropriate solvent traps and assayed by modified literature HPLC procedures. Analyte recoveries were calculated based on peak height measurements of extracted vs. unextracted analyte. The recovery of the benzodiazepines ranged from 80 to 98% in water and from 75 to 94% in serum. Anabolic drug recoveries from water and serum ranged from 67 to 100% and 70 to 100%, respectively. The NSAIDs were recovered from water in the 76 to 97% range and in the 76 to 100% range from serum. Accuracy, precision and endogenous peak interference, if any, were determined for blank and spiked serum extractions and compared with classical sample preparation techniques of liquid-liquid and solid-phase extraction reported in the literature. For the benzodiazepines, accuracy and precision for supercritical fluid extraction (SFE) ranged from 1.95 to 3.31 and 0.57 to 1.25%, respectively (n = 3). The SFE accuracy and precision data for the anabolic agents ranged from 4.03 to 7.84 and 0.66 to 2.78%, respectively (n = 3). The accuracy and precision data reported for the SFE of the NSAIDs ranged from 2.79 to 3.79 and 0.33 to 1.27%, respectively (n = 3). The precision of the SFE method from serum was shown to be comparable to the precision obtained with other classical preparation techniques.
NASA Astrophysics Data System (ADS)
Jang, T. S.
2018-03-01
A dispersion-relation preserving (DRP) method, as a semi-analytic iterative procedure, has been proposed by Jang (2017) for integrating the classical Boussinesq equation. It has been shown to be a powerful numerical procedure for simulating a nonlinear dispersive wave system because it preserves the dispersion-relation, however, there still exists a potential flaw, e.g., a restriction on nonlinear wave amplitude and a small region of convergence (ROC) and so on. To remedy the flaw, a new DRP method is proposed in this paper, aimed at improving convergence performance. The improved method is proved to have convergence properties and dispersion-relation preserving nature for small waves; of course, unique existence of the solutions is also proved. In addition, by a numerical experiment, the method is confirmed to be good at observing nonlinear wave phenomena such as moving solitary waves and their binary collision with different wave amplitudes. Especially, it presents a ROC (much) wider than that of the previous method by Jang (2017). Moreover, it gives the numerical simulation of a high (or large-amplitude) nonlinear dispersive wave. In fact, it is demonstrated to simulate a large-amplitude solitary wave and the collision of two solitary waves with large-amplitudes that we have failed to simulate with the previous method. Conclusively, it is worth noting that better convergence results are achieved compared to Jang (2017); i.e., they represent a major improvement in practice over the previous method.
Failure Assessment of Brazed Structures
NASA Technical Reports Server (NTRS)
Flom, Yuri
2012-01-01
Despite the great advances in analytical methods available to structural engineers, designers of brazed structures have great difficulties in addressing fundamental questions related to the loadcarrying capabilities of brazed assemblies. In this chapter we will review why such common engineering tools as Finite Element Analysis (FEA) as well as many well-established theories (Tresca, von Mises, Highest Principal Stress, etc) don't work well for the brazed joints. This chapter will show how the classic approach of using interaction equations and the less known Coulomb-Mohr failure criterion can be employed to estimate Margins of Safety (MS) in brazed joints.
Spatial distribution of GRBs and large scale structure of the Universe
NASA Astrophysics Data System (ADS)
Bagoly, Zsolt; Rácz, István I.; Balázs, Lajos G.; Tóth, L. Viktor; Horváth, István
We studied the space distribution of the starburst galaxies from Millennium XXL database at z = 0.82. We examined the starburst distribution in the classical Millennium I (De Lucia et al. (2006)) using a semi-analytical model for the genesis of the galaxies. We simulated a starburst galaxies sample with Markov Chain Monte Carlo method. The connection between the large scale structures homogenous and starburst groups distribution (Kofman and Shandarin 1998), Suhhonenko et al. (2011), Liivamägi et al. (2012), Park et al. (2012), Horvath et al. (2014), Horvath et al. (2015)) on a defined scale were checked too.
On the spontaneous collective motion of active matter
Wang, Shenshen; Wolynes, Peter G.
2011-01-01
Spontaneous directed motion, a hallmark of cell biology, is unusual in classical statistical physics. Here we study, using both numerical and analytical methods, organized motion in models of the cytoskeleton in which constituents are driven by energy-consuming motors. Although systems driven by small-step motors are described by an effective temperature and are thus quiescent, at higher order in step size, both homogeneous and inhomogeneous, flowing and oscillating behavior emerges. Motors that respond with a negative susceptibility to imposed forces lead to an apparent negative-temperature system in which beautiful structures form resembling the asters seen in cell division. PMID:21876141
On the spontaneous collective motion of active matter.
Wang, Shenshen; Wolynes, Peter G
2011-09-13
Spontaneous directed motion, a hallmark of cell biology, is unusual in classical statistical physics. Here we study, using both numerical and analytical methods, organized motion in models of the cytoskeleton in which constituents are driven by energy-consuming motors. Although systems driven by small-step motors are described by an effective temperature and are thus quiescent, at higher order in step size, both homogeneous and inhomogeneous, flowing and oscillating behavior emerges. Motors that respond with a negative susceptibility to imposed forces lead to an apparent negative-temperature system in which beautiful structures form resembling the asters seen in cell division.
NASA Astrophysics Data System (ADS)
Shimada, Yutaka; Ikeguchi, Tohru; Shigehara, Takaomi
2012-10-01
In this Letter, we propose a framework to transform a complex network to a time series. The transformation from complex networks to time series is realized by the classical multidimensional scaling. Applying the transformation method to a model proposed by Watts and Strogatz [Nature (London) 393, 440 (1998)], we show that ring lattices are transformed to periodic time series, small-world networks to noisy periodic time series, and random networks to random time series. We also show that these relationships are analytically held by using the circulant-matrix theory and the perturbation theory of linear operators. The results are generalized to several high-dimensional lattices.
A General Model for Performance Evaluation in DS-CDMA Systems with Variable Spreading Factors
NASA Astrophysics Data System (ADS)
Chiaraluce, Franco; Gambi, Ennio; Righi, Giorgia
This paper extends previous analytical approaches for the study of CDMA systems to the relevant case of multipath environments where users can operate at different bit rates. This scenario is of interest for the Wideband CDMA strategy employed in UMTS, and the model permits the performance comparison of classic and more innovative spreading signals. The method is based on the characteristic function approach, that allows to model accurately the various kinds of interferences. Some numerical examples are given with reference to the ITU-R M. 1225 Recommendations, but the analysis could be extended to different channel descriptions.
Fractional dynamics using an ensemble of classical trajectories
NASA Astrophysics Data System (ADS)
Sun, Zhaopeng; Dong, Hao; Zheng, Yujun
2018-01-01
A trajectory-based formulation for fractional dynamics is presented and the trajectories are generated deterministically. In this theoretical framework, we derive a new class of estimators in terms of confluent hypergeometric function (F11) to represent the Riesz fractional derivative. Using this method, the simulation of free and confined Lévy flight are in excellent agreement with the exact numerical and analytical results. In addition, the barrier crossing in a bistable potential driven by Lévy noise of index α is investigated. In phase space, the behavior of trajectories reveal the feature of Lévy flight in a better perspective.
NASA Astrophysics Data System (ADS)
Lorenzetti, G.; Foresta, A.; Palleschi, V.; Legnaioli, S.
2009-09-01
The recent development of mobile instrumentation, specifically devoted to in situ analysis and study of museum objects, allows the acquisition of many LIBS spectra in very short time. However, such large amount of data calls for new analytical approaches which would guarantee a prompt analysis of the results obtained. In this communication, we will present and discuss the advantages of statistical analytical methods, such as Partial Least Squares Multiple Regression algorithms vs. the classical calibration curve approach. PLS algorithms allows to obtain in real time the information on the composition of the objects under study; this feature of the method, compared to the traditional off-line analysis of the data, is extremely useful for the optimization of the measurement times and number of points associated with the analysis. In fact, the real time availability of the compositional information gives the possibility of concentrating the attention on the most `interesting' parts of the object, without over-sampling the zones which would not provide useful information for the scholars or the conservators. Some example on the applications of this method will be presented, including the studies recently performed by the researcher of the Applied Laser Spectroscopy Laboratory on museum bronze objects.
NASA Astrophysics Data System (ADS)
Miranda Guedes, Rui
2018-02-01
Long-term creep of viscoelastic materials is experimentally inferred through accelerating techniques based on the time-temperature superposition principle (TTSP) or on the time-stress superposition principle (TSSP). According to these principles, a given property measured for short times at a higher temperature or higher stress level remains the same as that obtained for longer times at a lower temperature or lower stress level, except that the curves are shifted parallel to the horizontal axis, matching a master curve. These procedures enable the construction of creep master curves with short-term experimental tests. The Stepped Isostress Method (SSM) is an evolution of the classical TSSP method. Higher reduction of the required number of test specimens to obtain the master curve is achieved by the SSM technique, since only one specimen is necessary. The classical approach, using creep tests, demands at least one specimen per each stress level to produce a set of creep curves upon which TSSP is applied to obtain the master curve. This work proposes an analytical method to process the SSM raw data. The method is validated using numerical simulations to reproduce the SSM tests based on two different viscoelastic models. One model represents the viscoelastic behavior of a graphite/epoxy laminate and the other represents an adhesive based on epoxy resin.
Yang, Jianhong; Li, Xiaomeng; Xu, Jinwu; Ma, Xianghong
2018-01-01
The quantitative analysis accuracy of calibration-free laser-induced breakdown spectroscopy (CF-LIBS) is severely affected by the self-absorption effect and estimation of plasma temperature. Herein, a CF-LIBS quantitative analysis method based on the auto-selection of internal reference line and the optimized estimation of plasma temperature is proposed. The internal reference line of each species is automatically selected from analytical lines by a programmable procedure through easily accessible parameters. Furthermore, the self-absorption effect of the internal reference line is considered during the correction procedure. To improve the analysis accuracy of CF-LIBS, the particle swarm optimization (PSO) algorithm is introduced to estimate the plasma temperature based on the calculation results from the Boltzmann plot. Thereafter, the species concentrations of a sample can be calculated according to the classical CF-LIBS method. A total of 15 certified alloy steel standard samples of known compositions and elemental weight percentages were used in the experiment. Using the proposed method, the average relative errors of Cr, Ni, and Fe calculated concentrations were 4.40%, 6.81%, and 2.29%, respectively. The quantitative results demonstrated an improvement compared with the classical CF-LIBS method and the promising potential of in situ and real-time application.
Verification of Ceramic Structures
NASA Astrophysics Data System (ADS)
Behar-Lafenetre, Stephanie; Cornillon, Laurence; Rancurel, Michael; De Graaf, Dennis; Hartmann, Peter; Coe, Graham; Laine, Benoit
2012-07-01
In the framework of the “Mechanical Design and Verification Methodologies for Ceramic Structures” contract [1] awarded by ESA, Thales Alenia Space has investigated literature and practices in affiliated industries to propose a methodological guideline for verification of ceramic spacecraft and instrument structures. It has been written in order to be applicable to most types of ceramic or glass-ceramic materials - typically Cesic®, HBCesic®, Silicon Nitride, Silicon Carbide and ZERODUR®. The proposed guideline describes the activities to be performed at material level in order to cover all the specific aspects of ceramics (Weibull distribution, brittle behaviour, sub-critical crack growth). Elementary tests and their post-processing methods are described, and recommendations for optimization of the test plan are given in order to have a consistent database. The application of this method is shown on an example in a dedicated article [7]. Then the verification activities to be performed at system level are described. This includes classical verification activities based on relevant standard (ECSS Verification [4]), plus specific analytical, testing and inspection features. The analysis methodology takes into account the specific behaviour of ceramic materials, especially the statistical distribution of failures (Weibull) and the method to transfer it from elementary data to a full-scale structure. The demonstration of the efficiency of this method is described in a dedicated article [8]. The verification is completed by classical full-scale testing activities. Indications about proof testing, case of use and implementation are given and specific inspection and protection measures are described. These additional activities are necessary to ensure the required reliability. The aim of the guideline is to describe how to reach the same reliability level as for structures made of more classical materials (metals, composites).
On non-autonomous dynamical systems
NASA Astrophysics Data System (ADS)
Anzaldo-Meneses, A.
2015-04-01
In usual realistic classical dynamical systems, the Hamiltonian depends explicitly on time. In this work, a class of classical systems with time dependent nonlinear Hamiltonians is analyzed. This type of problems allows to find invariants by a family of Veronese maps. The motivation to develop this method results from the observation that the Poisson-Lie algebra of monomials in the coordinates and momenta is clearly defined in terms of its brackets and leads naturally to an infinite linear set of differential equations, under certain circumstances. To perform explicit analytic and numerical calculations, two examples are presented to estimate the trajectories, the first given by a nonlinear problem and the second by a quadratic Hamiltonian with three time dependent parameters. In the nonlinear problem, the Veronese approach using jets is shown to be equivalent to a direct procedure using elliptic functions identities, and linear invariants are constructed. For the second example, linear and quadratic invariants as well as stability conditions are given. Explicit solutions are also obtained for stepwise constant forces. For the quadratic Hamiltonian, an appropriated set of coordinates relates the geometric setting to that of the three dimensional manifold of central conic sections. It is shown further that the quantum mechanical problem of scattering in a superlattice leads to mathematically equivalent equations for the wave function, if the classical time is replaced by the space coordinate along a superlattice. The mathematical method used to compute the trajectories for stepwise constant parameters can be applied to both problems. It is the standard method in quantum scattering calculations, as known for locally periodic systems including a space dependent effective mass.
NASA Astrophysics Data System (ADS)
Brandstetter, Gerd; Govindjee, Sanjay
2012-03-01
Existing analytical and numerical methodologies are discussed and then extended in order to calculate critical contamination-particle sizes, which will result in deleterious effects during EUVL E-chucking in the face of an error budget on the image-placement-error (IPE). The enhanced analytical models include a gap dependant clamping pressure formulation, the consideration of a general material law for realistic particle crushing and the influence of frictional contact. We present a discussion of the defects of the classical de-coupled modeling approach where particle crushing and mask/chuck indentation are separated from the global computation of mask bending. To repair this defect we present a new analytic approach based on an exact Hankel transform method which allows a fully coupled solution. This will capture the contribution of the mask indentation to the image-placement-error (estimated IPE increase of 20%). A fully coupled finite element model is used to validate the analytical models and to further investigate the impact of a mask back-side CrN-layer. The models are applied to existing experimental data with good agreement. For a standard material combination, a given IPE tolerance of 1 nm and a 15 kPa closing pressure, we derive bounds for single particles of cylindrical shape (radius × height < 44 μm) and spherical shape (diameter < 12 μm).
Learning the inverse kinetics of an octopus-like manipulator in three-dimensional space.
Giorelli, M; Renda, F; Calisti, M; Arienti, A; Ferri, G; Laschi, C
2015-05-13
This work addresses the inverse kinematics problem of a bioinspired octopus-like manipulator moving in three-dimensional space. The bioinspired manipulator has a conical soft structure that confers the ability of twirling around objects as a real octopus arm does. Despite the simple design, the soft conical shape manipulator driven by cables is described by nonlinear differential equations, which are difficult to solve analytically. Since exact solutions of the equations are not available, the Jacobian matrix cannot be calculated analytically and the classical iterative methods cannot be used. To overcome the intrinsic problems of methods based on the Jacobian matrix, this paper proposes a neural network learning the inverse kinematics of a soft octopus-like manipulator driven by cables. After the learning phase, a feed-forward neural network is able to represent the relation between manipulator tip positions and forces applied to the cables. Experimental results show that a desired tip position can be achieved in a short time, since heavy computations are avoided, with a degree of accuracy of 8% relative average error with respect to the total arm length.
Statistical correlation analysis for comparing vibration data from test and analysis
NASA Technical Reports Server (NTRS)
Butler, T. G.; Strang, R. F.; Purves, L. R.; Hershfeld, D. J.
1986-01-01
A theory was developed to compare vibration modes obtained by NASTRAN analysis with those obtained experimentally. Because many more analytical modes can be obtained than experimental modes, the analytical set was treated as expansion functions for putting both sources in comparative form. The dimensional symmetry was developed for three general cases: nonsymmetric whole model compared with a nonsymmetric whole structural test, symmetric analytical portion compared with a symmetric experimental portion, and analytical symmetric portion with a whole experimental test. The theory was coded and a statistical correlation program was installed as a utility. The theory is established with small classical structures.
A novel approach to signal normalisation in atmospheric pressure ionisation mass spectrometry.
Vogeser, Michael; Kirchhoff, Fabian; Geyer, Roland
2012-07-01
The aim of our study was to test an alternative principle of signal normalisation in LC-MS/MS. During analyses, post column infusion of the target analyte is done via a T-piece, generating an "area under the analyte peak" (AUP). The ratio of peak area to AUP is assessed as assay response. Acceptable analytical performance of this principle was found for an exemplary analyte. Post-column infusion may allow normalisation of ion suppression not requiring any additional standard compound. This approach can be useful in situations where no appropriate compound is available for classical internal standardisation. Copyright © 2012 Elsevier B.V. All rights reserved.
The evolution of analytical chemistry methods in foodomics.
Gallo, Monica; Ferranti, Pasquale
2016-01-08
The methodologies of food analysis have greatly evolved over the past 100 years, from basic assays based on solution chemistry to those relying on the modern instrumental platforms. Today, the development and optimization of integrated analytical approaches based on different techniques to study at molecular level the chemical composition of a food may allow to define a 'food fingerprint', valuable to assess nutritional value, safety and quality, authenticity and security of foods. This comprehensive strategy, defined foodomics, includes emerging work areas such as food chemistry, phytochemistry, advanced analytical techniques, biosensors and bioinformatics. Integrated approaches can help to elucidate some critical issues in food analysis, but also to face the new challenges of a globalized world: security, sustainability and food productions in response to environmental world-wide changes. They include the development of powerful analytical methods to ensure the origin and quality of food, as well as the discovery of biomarkers to identify potential food safety problems. In the area of nutrition, the future challenge is to identify, through specific biomarkers, individual peculiarities that allow early diagnosis and then a personalized prognosis and diet for patients with food-related disorders. Far from the aim of an exhaustive review of the abundant literature dedicated to the applications of omic sciences in food analysis, we will explore how classical approaches, such as those used in chemistry and biochemistry, have evolved to intersect with the new omics technologies to produce a progress in our understanding of the complexity of foods. Perhaps most importantly, a key objective of the review will be to explore the development of simple and robust methods for a fully applied use of omics data in food science. Copyright © 2015 Elsevier B.V. All rights reserved.
Application of FT-IR Classification Method in Silica-Plant Extracts Composites Quality Testing
NASA Astrophysics Data System (ADS)
Bicu, A.; Drumea, V.; Mihaiescu, D. E.; Purcareanu, B.; Florea, M. A.; Trică, B.; Vasilievici, G.; Draga, S.; Buse, E.; Olariu, L.
2018-06-01
Our present work is concerned with the validation and quality testing efforts of mesoporous silica - plant extracts composites, in order to sustain the standardization process of plant-based pharmaceutical products. The synthesis of the silica support were performed by using a TEOS based synthetic route and CTAB as a template, at room temperature and normal pressure. The silica support was analyzed by advanced characterization methods (SEM, TEM, BET, DLS and FT-IR), and loaded with Calendula officinalis and Salvia officinalis standardized extracts. Further desorption studies were performed in order to prove the sustained release properties of the final materials. Intermediate and final product identification was performed by a FT-IR classification method, using the MID-range of the IR spectra, and statistical representative samples from repetitive synthetic stages. The obtained results recommend this analytical method as a fast and cost effective alternative to the classic identification methods.
NASA Astrophysics Data System (ADS)
Wang, Wenji; Zhao, Yi
2017-07-01
Methane dissociation is a prototypical system for the study of surface reaction dynamics. The dissociation and recombination rates of CH4 through the Ni(111) surface are calculated by using the quantum instanton method with an analytical potential energy surface. The Ni(111) lattice is treated rigidly, classically, and quantum mechanically so as to reveal the effect of lattice motion. The results demonstrate that it is the lateral displacements rather than the upward and downward movements of the surface nickel atoms that affect the rates a lot. Compared with the rigid lattice, the classical relaxation of the lattice can increase the rates by lowering the free energy barriers. For instance, at 300 K, the dissociation and recombination rates with the classical lattice exceed the ones with the rigid lattice by 6 and 10 orders of magnitude, respectively. Compared with the classical lattice, the quantum delocalization rather than the zero-point energy of the Ni atoms further enhances the rates by widening the reaction path. For instance, the dissociation rate with the quantum lattice is about 10 times larger than that with the classical lattice at 300 K. On the rigid lattice, due to the zero-point energy difference between CH4 and CD4, the kinetic isotope effects are larger than 1 for the dissociation process, while they are smaller than 1 for the recombination process. The increasing kinetic isotope effect with decreasing temperature demonstrates that the quantum tunneling effect is remarkable for the dissociation process.
NASA Astrophysics Data System (ADS)
Chien, Chih-Chun; Kouachi, Said; Velizhanin, Kirill A.; Dubi, Yonatan; Zwolak, Michael
2017-01-01
We present a method for calculating analytically the thermal conductance of a classical harmonic lattice with both alternating masses and nearest-neighbor couplings when placed between individual Langevin reservoirs at different temperatures. The method utilizes recent advances in analytic diagonalization techniques for certain classes of tridiagonal matrices. It recovers the results from a previous method that was applicable for alternating on-site parameters only, and extends the applicability to realistic systems in which masses and couplings alternate simultaneously. With this analytic result in hand, we show that the thermal conductance is highly sensitive to the modulation of the couplings. This is due to the existence of topologically induced edge modes at the lattice-reservoir interface and is also a reflection of the symmetries of the lattice. We make a connection to a recent work that demonstrates thermal transport is analogous to chemical reaction rates in solution given by Kramers' theory [Velizhanin et al., Sci. Rep. 5, 17506 (2015)], 10.1038/srep17506. In particular, we show that the turnover behavior in the presence of edge modes prevents calculations based on single-site reservoirs from coming close to the natural—or intrinsic—conductance of the lattice. Obtaining the correct value of the intrinsic conductance through simulation of even a small lattice where ballistic effects are important requires quite large extended reservoir regions. Our results thus offer a route for both the design and proper simulation of thermal conductance of nanoscale devices.
Computing diffusivities from particle models out of equilibrium
NASA Astrophysics Data System (ADS)
Embacher, Peter; Dirr, Nicolas; Zimmer, Johannes; Reina, Celia
2018-04-01
A new method is proposed to numerically extract the diffusivity of a (typically nonlinear) diffusion equation from underlying stochastic particle systems. The proposed strategy requires the system to be in local equilibrium and have Gaussian fluctuations but it is otherwise allowed to undergo arbitrary out-of-equilibrium evolutions. This could be potentially relevant for particle data obtained from experimental applications. The key idea underlying the method is that finite, yet large, particle systems formally obey stochastic partial differential equations of gradient flow type satisfying a fluctuation-dissipation relation. The strategy is here applied to three classic particle models, namely independent random walkers, a zero-range process and a symmetric simple exclusion process in one space dimension, to allow the comparison with analytic solutions.
[Diagnosing imported helminthiasis].
Pardo, Javier; Pérez-Arellano, José Luis; Galindo, Inmaculada; Belhassen, Moncef; Cordero, Miguel; Muro, Antonio
2007-05-01
In recent years, there has been an increase in cases of imported helminthiasis in Spain because of two complementary causes: immigration and international travel. Although the prevalence of helminthiasis is high in the immigrant population, the risk of transmission to the Spanish population is low. In this review, we provide clues to aid in the diagnosis of the helminthiasis, highlighting the geographic characteristics, clinical findings and analytical results of the most frequent types. The low sensitivity of the classic parasitological diagnostic test, mainly in tissue helminthiasis, is described. In addition, the advantages and limitations of the common serological methods for detecting related circulating antigens and antibodies are presented. Certain molecular methods used in the diagnosis of imported helminthiasis and the best strategies for screening of this condition are discussed.
A statistical theory for sound radiation and reflection from a duct
NASA Technical Reports Server (NTRS)
Cho, Y. C.
1979-01-01
A new analytical method is introduced for the study of the sound radiation and reflection from the open end of a duct. The sound is thought of as an aggregation of the quasiparticles-phonons. The motion of the latter is described in terms of the statistical distribution, which is derived from the classical wave theory. The results are in good agreement with the solutions obtained using the Wiener-Hopf technique when the latter is applicable, but the new method is simple and provides straightforward physical interpretation of the problem. Furthermore, it is applicable to a problem involving a duct in which modes are difficult to determine or cannot be defined at all, whereas the Wiener-Hopf technique is not.
Dynamics of a prey-predator system under Poisson white noise excitation
NASA Astrophysics Data System (ADS)
Pan, Shan-Shan; Zhu, Wei-Qiu
2014-10-01
The classical Lotka-Volterra (LV) model is a well-known mathematical model for prey-predator ecosystems. In the present paper, the pulse-type version of stochastic LV model, in which the effect of a random natural environment has been modeled as Poisson white noise, is investigated by using the stochastic averaging method. The averaged generalized Itô stochastic differential equation and Fokker-Planck-Kolmogorov (FPK) equation are derived for prey-predator ecosystem driven by Poisson white noise. Approximate stationary solution for the averaged generalized FPK equation is obtained by using the perturbation method. The effect of prey self-competition parameter ɛ2 s on ecosystem behavior is evaluated. The analytical result is confirmed by corresponding Monte Carlo (MC) simulation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liemert, André, E-mail: andre.liemert@ilm.uni-ulm.de; Kienle, Alwin
Purpose: Explicit solutions of the monoenergetic radiative transport equation in the P{sub 3} approximation have been derived which can be evaluated with nearly the same computational effort as needed for solving the standard diffusion equation (DE). In detail, the authors considered the important case of a semi-infinite medium which is illuminated by a collimated beam of light. Methods: A combination of the classic spherical harmonics method and the recently developed method of rotated reference frames is used for solving the P{sub 3} equations in closed form. Results: The derived solutions are illustrated and compared to exact solutions of the radiativemore » transport equation obtained via the Monte Carlo (MC) method as well as with other approximated analytical solutions. It is shown that for the considered cases which are relevant for biomedical optics applications, the P{sub 3} approximation is close to the exact solution of the radiative transport equation. Conclusions: The authors derived exact analytical solutions of the P{sub 3} equations under consideration of boundary conditions for defining a semi-infinite medium. The good agreement to Monte Carlo simulations in the investigated domains, for example, in the steady-state and time domains, as well as the short evaluation time needed suggests that the derived equations can replace the often applied solutions of the diffusion equation for the homogeneous semi-infinite medium.« less
Exact test-based approach for equivalence test with parameter margin.
Cassie Dong, Xiaoyu; Bian, Yuanyuan; Tsong, Yi; Wang, Tianhua
2017-01-01
The equivalence test has a wide range of applications in pharmaceutical statistics which we need to test for the similarity between two groups. In recent years, the equivalence test has been used in assessing the analytical similarity between a proposed biosimilar product and a reference product. More specifically, the mean values of the two products for a given quality attribute are compared against an equivalence margin in the form of ±f × σ R , where ± f × σ R is a function of the reference variability. In practice, this margin is unknown and is estimated from the sample as ±f × S R . If we use this estimated margin with the classic t-test statistic on the equivalence test for the means, both Type I and Type II error rates may inflate. To resolve this issue, we develop an exact-based test method and compare this method with other proposed methods, such as the Wald test, the constrained Wald test, and the Generalized Pivotal Quantity (GPQ) in terms of Type I error rate and power. Application of those methods on data analysis is also provided in this paper. This work focuses on the development and discussion of the general statistical methodology and is not limited to the application of analytical similarity.
Chapman, Benjamin P; Weiss, Alexander; Duberstein, Paul R
2016-12-01
Statistical learning theory (SLT) is the statistical formulation of machine learning theory, a body of analytic methods common in "big data" problems. Regression-based SLT algorithms seek to maximize predictive accuracy for some outcome, given a large pool of potential predictors, without overfitting the sample. Research goals in psychology may sometimes call for high dimensional regression. One example is criterion-keyed scale construction, where a scale with maximal predictive validity must be built from a large item pool. Using this as a working example, we first introduce a core principle of SLT methods: minimization of expected prediction error (EPE). Minimizing EPE is fundamentally different than maximizing the within-sample likelihood, and hinges on building a predictive model of sufficient complexity to predict the outcome well, without undue complexity leading to overfitting. We describe how such models are built and refined via cross-validation. We then illustrate how 3 common SLT algorithms-supervised principal components, regularization, and boosting-can be used to construct a criterion-keyed scale predicting all-cause mortality, using a large personality item pool within a population cohort. Each algorithm illustrates a different approach to minimizing EPE. Finally, we consider broader applications of SLT predictive algorithms, both as supportive analytic tools for conventional methods, and as primary analytic tools in discovery phase research. We conclude that despite their differences from the classic null-hypothesis testing approach-or perhaps because of them-SLT methods may hold value as a statistically rigorous approach to exploratory regression. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
NASA Astrophysics Data System (ADS)
Malovichko, M.; Khokhlov, N.; Yavich, N.; Zhdanov, M.
2017-10-01
Over the recent decades, a number of fast approximate solutions of Lippmann-Schwinger equation, which are more accurate than classic Born and Rytov approximations, were proposed in the field of electromagnetic modeling. Those developments could be naturally extended to acoustic and elastic fields; however, until recently, they were almost unknown in seismology. This paper presents several solutions of this kind applied to acoustic modeling for both lossy and lossless media. We evaluated the numerical merits of those methods and provide an estimation of their numerical complexity. In our numerical realization we use the matrix-free implementation of the corresponding integral operator. We study the accuracy of those approximate solutions and demonstrate, that the quasi-analytical approximation is more accurate, than the Born approximation. Further, we apply the quasi-analytical approximation to the solution of the inverse problem. It is demonstrated that, this approach improves the estimation of the data gradient, comparing to the Born approximation. The developed inversion algorithm is based on the conjugate-gradient type optimization. Numerical model study demonstrates that the quasi-analytical solution significantly reduces computation time of the seismic full-waveform inversion. We also show how the quasi-analytical approximation can be extended to the case of elastic wavefield.
Anderson metal-insulator transitions with classical magnetic impurities
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jung, Daniel; Kettemann, Stefan
We study the effects of classical magnetic impurities on the Anderson metal-insulator transition (AMIT) numerically. In particular we find that while a finite concentration of Ising impurities lowers the critical value of the site-diagonal disorder amplitude W{sub c}, in the presence of Heisenberg impurities, W{sub c} is first increased with increasing exchange coupling strength J due to time-reversal symmetry breaking. The resulting scaling with J is compared to analytical predictions by Wegner [1]. The results are obtained numerically, based on a finite-size scaling procedure for the typical density of states [2], which is the geometric average of the local densitymore » of states. The latter can efficiently be calculated using the kernel polynomial method [3]. Although still suffering from methodical shortcomings, our method proves to deliver results close to established results for the orthogonal symmetry class [4]. We extend previous approaches [5] by combining the KPM with a finite-size scaling analysis. We also discuss the relevance of our findings for systems like phosphor-doped silicon (Si:P), which are known to exhibit a quantum phase transition from metal to insulator driven by the interplay of both interaction and disorder, accompanied by the presence of a finite concentration of magnetic moments [6].« less
Li, Chunquan; Han, Junwei; Yao, Qianlan; Zou, Chendan; Xu, Yanjun; Zhang, Chunlong; Shang, Desi; Zhou, Lingyun; Zou, Chaoxia; Sun, Zeguo; Li, Jing; Zhang, Yunpeng; Yang, Haixiu; Gao, Xu; Li, Xia
2013-01-01
Various ‘omics’ technologies, including microarrays and gas chromatography mass spectrometry, can be used to identify hundreds of interesting genes, proteins and metabolites, such as differential genes, proteins and metabolites associated with diseases. Identifying metabolic pathways has become an invaluable aid to understanding the genes and metabolites associated with studying conditions. However, the classical methods used to identify pathways fail to accurately consider joint power of interesting gene/metabolite and the key regions impacted by them within metabolic pathways. In this study, we propose a powerful analytical method referred to as Subpathway-GM for the identification of metabolic subpathways. This provides a more accurate level of pathway analysis by integrating information from genes and metabolites, and their positions and cascade regions within the given pathway. We analyzed two colorectal cancer and one metastatic prostate cancer data sets and demonstrated that Subpathway-GM was able to identify disease-relevant subpathways whose corresponding entire pathways might be ignored using classical entire pathway identification methods. Further analysis indicated that the power of a joint genes/metabolites and subpathway strategy based on their topologies may play a key role in reliably recalling disease-relevant subpathways and finding novel subpathways. PMID:23482392
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rana, A.; Ravichandran, R.; Park, J. H.
The second-order non-Navier-Fourier constitutive laws, expressed in a compact algebraic mathematical form, were validated for the force-driven Poiseuille gas flow by the deterministic atomic-level microscopic molecular dynamics (MD). Emphasis is placed on how completely different methods (a second-order continuum macroscopic theory based on the kinetic Boltzmann equation, the probabilistic mesoscopic direct simulation Monte Carlo, and, in particular, the deterministic microscopic MD) describe the non-classical physics, and whether the second-order non-Navier-Fourier constitutive laws derived from the continuum theory can be validated using MD solutions for the viscous stress and heat flux calculated directly from the molecular data using the statistical method.more » Peculiar behaviors (non-uniform tangent pressure profile and exotic instantaneous heat conduction from cold to hot [R. S. Myong, “A full analytical solution for the force-driven compressible Poiseuille gas flow based on a nonlinear coupled constitutive relation,” Phys. Fluids 23(1), 012002 (2011)]) were re-examined using atomic-level MD results. It was shown that all three results were in strong qualitative agreement with each other, implying that the second-order non-Navier-Fourier laws are indeed physically legitimate in the transition regime. Furthermore, it was shown that the non-Navier-Fourier constitutive laws are essential for describing non-zero normal stress and tangential heat flux, while the classical and non-classical laws remain similar for shear stress and normal heat flux.« less
Fonteyne, Margot; Gildemyn, Delphine; Peeters, Elisabeth; Mortier, Séverine Thérèse F C; Vercruysse, Jurgen; Gernaey, Krist V; Vervaet, Chris; Remon, Jean Paul; Nopens, Ingmar; De Beer, Thomas
2014-08-01
Classically, the end point detection during fluid bed drying has been performed using indirect parameters, such as the product temperature or the humidity of the outlet drying air. This paper aims at comparing those classic methods to both in-line moisture and solid-state determination by means of Process Analytical Technology (PAT) tools (Raman and NIR spectroscopy) and a mass balance approach. The six-segmented fluid bed drying system being part of a fully continuous from-powder-to-tablet production line (ConsiGma™-25) was used for this study. A theophylline:lactose:PVP (30:67.5:2.5) blend was chosen as model formulation. For the development of the NIR-based moisture determination model, 15 calibration experiments in the fluid bed dryer were performed. Six test experiments were conducted afterwards, and the product was monitored in-line with NIR and Raman spectroscopy during drying. The results (drying endpoint and residual moisture) obtained via the NIR-based moisture determination model, the classical approach by means of indirect parameters and the mass balance model were then compared. Our conclusion is that the PAT-based method is most suited for use in a production set-up. Secondly, the different size fractions of the dried granules obtained during different experiments (fines, yield and oversized granules) were compared separately, revealing differences in both solid state of theophylline and moisture content between the different granule size fractions. Copyright © 2014 Elsevier B.V. All rights reserved.
Revisiting competition in a classic model system using formal links between theory and data.
Hart, Simon P; Burgin, Jacqueline R; Marshall, Dustin J
2012-09-01
Formal links between theory and data are a critical goal for ecology. However, while our current understanding of competition provides the foundation for solving many derived ecological problems, this understanding is fractured because competition theory and data are rarely unified. Conclusions from seminal studies in space-limited benthic marine systems, in particular, have been very influential for our general understanding of competition, but rely on traditional empirical methods with limited inferential power and compatibility with theory. Here we explicitly link mathematical theory with experimental field data to provide a more sophisticated understanding of competition in this classic model system. In contrast to predictions from conceptual models, our estimates of competition coefficients show that a dominant space competitor can be equally affected by interspecific competition with a poor competitor (traditionally defined) as it is by intraspecific competition. More generally, the often-invoked competitive hierarchies and intransitivities in this system might be usefully revisited using more sophisticated empirical and analytical approaches.
Hasegawa, Hideo
2011-07-01
Responses of small open oscillator systems to applied external forces have been studied with the use of an exactly solvable classical Caldeira-Leggett model in which a harmonic oscillator (system) is coupled to finite N-body oscillators (bath) with an identical frequency (ω(n) = ω(o) for n = 1 to N). We have derived exact expressions for positions, momenta, and energy of the system in nonequilibrium states and for work performed by applied forces. A detailed study has been made on an analytical method for canonical averages of physical quantities over the initial equilibrium state, which is much superior to numerical averages commonly adopted in simulations of small systems. The calculated energy of the system which is strongly coupled to a finite bath is fluctuating but nondissipative. It has been shown that the Jarzynski equality is valid in nondissipative nonergodic open oscillator systems regardless of the rate of applied ramp force.
Theoretical study of mixing in liquid clouds – Part 1: Classical concepts
Korolev, Alexei; Khain, Alex; Pinsky, Mark; ...
2016-07-28
The present study considers final stages of in-cloud mixing in the framework of classical concept of homogeneous and extreme inhomogeneous mixing. Simple analytical relationships between basic microphysical parameters were obtained for homogeneous and extreme inhomogeneous mixing based on the adiabatic consideration. It was demonstrated that during homogeneous mixing the functional relationships between the moments of the droplets size distribution hold only during the primary stage of mixing. Subsequent random mixing between already mixed parcels and undiluted cloud parcels breaks these relationships. However, during extreme inhomogeneous mixing the functional relationships between the microphysical parameters hold both for primary and subsequent mixing.more » The obtained relationships can be used to identify the type of mixing from in situ observations. The effectiveness of the developed method was demonstrated using in situ data collected in convective clouds. It was found that for the specific set of in situ measurements the interaction between cloudy and entrained environments was dominated by extreme inhomogeneous mixing.« less
NASA Astrophysics Data System (ADS)
Wu, Sheng-Jhih; Chu, Moody T.
2017-08-01
An inverse eigenvalue problem usually entails two constraints, one conditioned upon the spectrum and the other on the structure. This paper investigates the problem where triple constraints of eigenvalues, singular values, and diagonal entries are imposed simultaneously. An approach combining an eclectic mix of skills from differential geometry, optimization theory, and analytic gradient flow is employed to prove the solvability of such a problem. The result generalizes the classical Mirsky, Sing-Thompson, and Weyl-Horn theorems concerning the respective majorization relationships between any two of the arrays of main diagonal entries, eigenvalues, and singular values. The existence theory fills a gap in the classical matrix theory. The problem might find applications in wireless communication and quantum information science. The technique employed can be implemented as a first-step numerical method for constructing the matrix. With slight modification, the approach might be used to explore similar types of inverse problems where the prescribed entries are at general locations.
The role of mechanics during brain development
NASA Astrophysics Data System (ADS)
Budday, Silvia; Steinmann, Paul; Kuhl, Ellen
2014-12-01
Convolutions are a classical hallmark of most mammalian brains. Brain surface morphology is often associated with intelligence and closely correlated with neurological dysfunction. Yet, we know surprisingly little about the underlying mechanisms of cortical folding. Here we identify the role of the key anatomic players during the folding process: cortical thickness, stiffness, and growth. To establish estimates for the critical time, pressure, and the wavelength at the onset of folding, we derive an analytical model using the Föppl-von Kármán theory. Analytical modeling provides a quick first insight into the critical conditions at the onset of folding, yet it fails to predict the evolution of complex instability patterns in the post-critical regime. To predict realistic surface morphologies, we establish a computational model using the continuum theory of finite growth. Computational modeling not only confirms our analytical estimates, but is also capable of predicting the formation of complex surface morphologies with asymmetric patterns and secondary folds. Taken together, our analytical and computational models explain why larger mammalian brains tend to be more convoluted than smaller brains. Both models provide mechanistic interpretations of the classical malformations of lissencephaly and polymicrogyria. Understanding the process of cortical folding in the mammalian brain has direct implications on the diagnostics of neurological disorders including severe retardation, epilepsy, schizophrenia, and autism.
The role of mechanics during brain development
Budday, Silvia; Steinmann, Paul; Kuhl, Ellen
2014-01-01
Convolutions are a classical hallmark of most mammalian brains. Brain surface morphology is often associated with intelligence and closely correlated to neurological dysfunction. Yet, we know surprisingly little about the underlying mechanisms of cortical folding. Here we identify the role of the key anatomic players during the folding process: cortical thickness, stiffness, and growth. To establish estimates for the critical time, pressure, and the wavelength at the onset of folding, we derive an analytical model using the Föppl-von-Kármán theory. Analytical modeling provides a quick first insight into the critical conditions at the onset of folding, yet it fails to predict the evolution of complex instability patterns in the post-critical regime. To predict realistic surface morphologies, we establish a computational model using the continuum theory of finite growth. Computational modeling not only confirms our analytical estimates, but is also capable of predicting the formation of complex surface morphologies with asymmetric patterns and secondary folds. Taken together, our analytical and computational models explain why larger mammalian brains tend to be more convoluted than smaller brains. Both models provide mechanistic interpretations of the classical malformations of lissencephaly and polymicrogyria. Understanding the process of cortical folding in the mammalian brain has direct implications on the diagnostics of neurological disorders including severe retardation, epilepsy, schizophrenia, and autism. PMID:25202162
NASA Astrophysics Data System (ADS)
Zaslavsky, M.
1996-06-01
The phenomena of dynamical localization, both classical and quantum, are studied in the Fermi accelerator model. The model consists of two vertical oscillating walls and a ball bouncing between them. The classical localization boundary is calculated in the case of ``sinusoidal velocity transfer'' [A. J. Lichtenberg and M. A. Lieberman, Regular and Stochastic Motion (Springer-Verlag, Berlin, 1983)] on the basis of the analysis of resonances. In the case of the ``sawtooth'' wall velocity we show that the quantum localization is determined by the analytical properties of the canonical transformations to the action and angle coordinates of the unperturbed Hamiltonian, while the existence of the classical localization is determined by the number of continuous derivatives of the distance between the walls with respect to time.
Xu, Zhenli; Ma, Manman; Liu, Pei
2014-07-01
We propose a modified Poisson-Nernst-Planck (PNP) model to investigate charge transport in electrolytes of inhomogeneous dielectric environment. The model includes the ionic polarization due to the dielectric inhomogeneity and the ion-ion correlation. This is achieved by the self energy of test ions through solving a generalized Debye-Hückel (DH) equation. We develop numerical methods for the system composed of the PNP and DH equations. Particularly, toward the numerical challenge of solving the high-dimensional DH equation, we developed an analytical WKB approximation and a numerical approach based on the selective inversion of sparse matrices. The model and numerical methods are validated by simulating the charge diffusion in electrolytes between two electrodes, for which effects of dielectrics and correlation are investigated by comparing the results with the prediction by the classical PNP theory. We find that, at the length scale of the interface separation comparable to the Bjerrum length, the results of the modified equations are significantly different from the classical PNP predictions mostly due to the dielectric effect. It is also shown that when the ion self energy is in weak or mediate strength, the WKB approximation presents a high accuracy, compared to precise finite-difference results.
Saunders, Christina T; Blume, Jeffrey D
2017-10-26
Mediation analysis explores the degree to which an exposure's effect on an outcome is diverted through a mediating variable. We describe a classical regression framework for conducting mediation analyses in which estimates of causal mediation effects and their variance are obtained from the fit of a single regression model. The vector of changes in exposure pathway coefficients, which we named the essential mediation components (EMCs), is used to estimate standard causal mediation effects. Because these effects are often simple functions of the EMCs, an analytical expression for their model-based variance follows directly. Given this formula, it is instructive to revisit the performance of routinely used variance approximations (e.g., delta method and resampling methods). Requiring the fit of only one model reduces the computation time required for complex mediation analyses and permits the use of a rich suite of regression tools that are not easily implemented on a system of three equations, as would be required in the Baron-Kenny framework. Using data from the BRAIN-ICU study, we provide examples to illustrate the advantages of this framework and compare it with the existing approaches. © The Author 2017. Published by Oxford University Press.
Delaby, Constance; Gabelle, Audrey; Meynier, Philippe; Loubiere, Vincent; Vialaret, Jérôme; Tiers, Laurent; Ducos, Jacques; Hirtz, Christophe; Lehmann, Sylvain
2014-05-01
The use of dried blood spots on filter paper is well documented as an affordable and practical alternative to classical venous sampling for various clinical needs. This technique has indeed many advantages in terms of collection, biological safety, storage, and shipment. Amyloid β (Aβ) peptides are useful cerebrospinal fluid (CSF) biomarkers for Alzheimer disease diagnosis. However, Aβ determination is hindered by preanalytical difficulties in terms of sample collection and stability in tubes. We compared the quantification of Aβ peptides (1-40, 1-42, and 1-38) by simplex and multiplex ELISA, following either a standard operator method (liquid direct quantification) or after spotting CSF onto dried matrix paper card. The use of dried matrix spot (DMS) overcame preanalytical problems and allowed the determination of Aβ concentrations that were highly commutable (Bland-Altman) with those obtained using CSF in classical tubes. Moreover, we found a positive and significant correlation (r2=0.83, Pearson coefficient p=0.0329) between the two approaches. This new DMS method for CSF represents an interesting alternative that increases the quality and efficiency in preanalytics. This should enable the better exploitation of Aβ analytes for Alzheimer's diagnosis.
Noninvasive measurement of pharmacokinetics by near-infrared fluorescence imaging in the eye of mice
NASA Astrophysics Data System (ADS)
Dobosz, Michael; Strobel, Steffen; Stubenrauch, Kay-Gunnar; Osl, Franz; Scheuer, Werner
2014-01-01
Purpose: For generating preclinical pharmacokinetics (PKs) of compounds, blood is drawn at different time points and levels are quantified by different analytical methods. In order to receive statistically meaningful data, 3 to 5 animals are used for each time point to get serum peak-level and half-life of the compound. Both characteristics are determined by data interpolation, which may influence the accuracy of these values. We provide a method that allows continuous monitoring of blood levels noninvasively by measuring the fluorescence intensity of labeled compounds in the eye and other body regions of anesthetized mice. Procedures: The method evaluation was performed with four different fluorescent compounds: (i) indocyanine green, a nontargeting dye; (ii) OsteoSense750, a bone targeting agent; (iii) tumor targeting Trastuzumab-Alexa750; and (iv) its F(-alxea750 fragment. The latter was used for a direct comparison between fluorescence imaging and classical blood analysis using enzyme-linked immunosorbent assay (ELISA). Results: We found an excellent correlation between blood levels measured by noninvasive eye imaging with the results generated by classical methods. A strong correlation between eye imaging and ELISA was demonstrated for the F( fragment. Whole body imaging revealed a compound accumulation in the expected regions (e.g., liver, bone). Conclusions: The combination of eye and whole body fluorescence imaging enables the simultaneous measurement of blood PKs and biodistribution of fluorescent-labeled compounds.
Nikitin, E E; Troe, J
2010-09-16
Approximate analytical expressions are derived for the low-energy rate coefficients of capture of two identical dipolar polarizable rigid rotors in their lowest nonresonant (j(1) = 0 and j(2) = 0) and resonant (j(1) = 0,1 and j(2) = 1,0) states. The considered range extends from the quantum, ultralow energy regime, characterized by s-wave capture, to the classical regime described within fly wheel and adiabatic channel approaches, respectively. This is illustrated by the table of contents graphic (available on the Web) that shows the scaled rate coefficients for the mutual capture of rotors in the resonant state versus the reduced wave vector between the Bethe zero-energy (left arrows) and classical high-energy (right arrow) limits for different ratios δ of the dipole-dipole to dispersion interaction.
NASA Astrophysics Data System (ADS)
Alam, Muhammad Ashraful; Khan, M. Ryyan
2016-10-01
Bifacial tandem cells promise to reduce three fundamental losses (i.e., above-bandgap, below bandgap, and the uncollected light between panels) inherent in classical single junction photovoltaic (PV) systems. The successive filtering of light through the bandgap cascade and the requirement of current continuity make optimization of tandem cells difficult and accessible only to numerical solution through computer modeling. The challenge is even more complicated for bifacial design. In this paper, we use an elegantly simple analytical approach to show that the essential physics of optimization is intuitively obvious, and deeply insightful results can be obtained with a few lines of algebra. This powerful approach reproduces, as special cases, all of the known results of conventional and bifacial tandem cells and highlights the asymptotic efficiency gain of these technologies.
Quantum Discord for d⊗2 Systems
Ma, Zhihao; Chen, Zhihua; Fanchini, Felipe Fernandes; Fei, Shao-Ming
2015-01-01
We present an analytical solution for classical correlation, defined in terms of linear entropy, in an arbitrary system when the second subsystem is measured. We show that the optimal measurements used in the maximization of the classical correlation in terms of linear entropy, when used to calculate the quantum discord in terms of von Neumann entropy, result in a tight upper bound for arbitrary systems. This bound agrees with all known analytical results about quantum discord in terms of von Neumann entropy and, when comparing it with the numerical results for 106 two-qubit random density matrices, we obtain an average deviation of order 10−4. Furthermore, our results give a way to calculate the quantum discord for arbitrary n-qubit GHZ and W states evolving under the action of the amplitude damping noisy channel. PMID:26036771
Three dimensional iterative beam propagation method for optical waveguide devices
NASA Astrophysics Data System (ADS)
Ma, Changbao; Van Keuren, Edward
2006-10-01
The finite difference beam propagation method (FD-BPM) is an effective model for simulating a wide range of optical waveguide structures. The classical FD-BPMs are based on the Crank-Nicholson scheme, and in tridiagonal form can be solved using the Thomas method. We present a different type of algorithm for 3-D structures. In this algorithm, the wave equation is formulated into a large sparse matrix equation which can be solved using iterative methods. The simulation window shifting scheme and threshold technique introduced in our earlier work are utilized to overcome the convergence problem of iterative methods for large sparse matrix equation and wide-angle simulations. This method enables us to develop higher-order 3-D wide-angle (WA-) BPMs based on Pade approximant operators and the multistep method, which are commonly used in WA-BPMs for 2-D structures. Simulations using the new methods will be compared to the analytical results to assure its effectiveness and applicability.
A Conserving Discretization for the Free Boundary in a Two-Dimensional Stefan Problem
NASA Astrophysics Data System (ADS)
Segal, Guus; Vuik, Kees; Vermolen, Fred
1998-03-01
The dissolution of a disk-likeAl2Cuparticle is considered. A characteristic property is that initially the particle has a nonsmooth boundary. The mathematical model of this dissolution process contains a description of the particle interface, of which the position varies in time. Such a model is called a Stefan problem. It is impossible to obtain an analytical solution for a general two-dimensional Stefan problem, so we use the finite element method to solve this problem numerically. First, we apply a classical moving mesh method. Computations show that after some time steps the predicted particle interface becomes very unrealistic. Therefore, we derive a new method for the displacement of the free boundary based on the balance of atoms. This method leads to good results, also, for nonsmooth boundaries. Some numerical experiments are given for the dissolution of anAl2Cuparticle in anAl-Cualloy.
NASA Technical Reports Server (NTRS)
Hunter, Craig A.
1995-01-01
An analytical/numerical method has been developed to predict the static thrust performance of non-axisymmetric, two-dimensional convergent-divergent exhaust nozzles. Thermodynamic nozzle performance effects due to over- and underexpansion are modeled using one-dimensional compressible flow theory. Boundary layer development and skin friction losses are calculated using an approximate integral momentum method based on the classic karman-Polhausen solution. Angularity effects are included with these two models in a computational Nozzle Performance Analysis Code, NPAC. In four different case studies, results from NPAC are compared to experimental data obtained from subscale nozzle testing to demonstrate the capabilities and limitations of the NPAC method. In several cases, the NPAC prediction matched experimental gross thrust efficiency data to within 0.1 percent at a design NPR, and to within 0.5 percent at off-design conditions.
First Order Reliability Application and Verification Methods for Semistatic Structures
NASA Technical Reports Server (NTRS)
Verderaime, Vincent
1994-01-01
Escalating risks of aerostructures stimulated by increasing size, complexity, and cost should no longer be ignored by conventional deterministic safety design methods. The deterministic pass-fail concept is incompatible with probability and risk assessments, its stress audits are shown to be arbitrary and incomplete, and it compromises high strength materials performance. A reliability method is proposed which combines first order reliability principles with deterministic design variables and conventional test technique to surmount current deterministic stress design and audit deficiencies. Accumulative and propagation design uncertainty errors are defined and appropriately implemented into the classical safety index expression. The application is reduced to solving for a factor that satisfies the specified reliability and compensates for uncertainty errors, and then using this factor as, and instead of, the conventional safety factor in stress analyses. The resulting method is consistent with current analytical skills and verification practices, the culture of most designers, and with the pace of semistatic structural designs.
Yehia, Ali M; Mohamed, Heba M
2016-01-05
Three advanced chemmometric-assisted spectrophotometric methods namely; Concentration Residuals Augmented Classical Least Squares (CRACLS), Multivariate Curve Resolution-Alternating Least Squares (MCR-ALS) and Principal Component Analysis-Artificial Neural Networks (PCA-ANN) were developed, validated and benchmarked to PLS calibration; to resolve the severely overlapped spectra and simultaneously determine; Paracetamol (PAR), Guaifenesin (GUA) and Phenylephrine (PHE) in their ternary mixture and in presence of p-aminophenol (AP) the main degradation product and synthesis impurity of Paracetamol. The analytical performance of the proposed methods was described by percentage recoveries, root mean square error of calibration and standard error of prediction. The four multivariate calibration methods could be directly used without any preliminary separation step and successfully applied for pharmaceutical formulation analysis, showing no excipients' interference. Copyright © 2015 Elsevier B.V. All rights reserved.
Experimental design and statistical methods for improved hit detection in high-throughput screening.
Malo, Nathalie; Hanley, James A; Carlile, Graeme; Liu, Jing; Pelletier, Jerry; Thomas, David; Nadon, Robert
2010-09-01
Identification of active compounds in high-throughput screening (HTS) contexts can be substantially improved by applying classical experimental design and statistical inference principles to all phases of HTS studies. The authors present both experimental and simulated data to illustrate how true-positive rates can be maximized without increasing false-positive rates by the following analytical process. First, the use of robust data preprocessing methods reduces unwanted variation by removing row, column, and plate biases. Second, replicate measurements allow estimation of the magnitude of the remaining random error and the use of formal statistical models to benchmark putative hits relative to what is expected by chance. Receiver Operating Characteristic (ROC) analyses revealed superior power for data preprocessed by a trimmed-mean polish method combined with the RVM t-test, particularly for small- to moderate-sized biological hits.
A monolithic homotopy continuation algorithm with application to computational fluid dynamics
NASA Astrophysics Data System (ADS)
Brown, David A.; Zingg, David W.
2016-09-01
A new class of homotopy continuation methods is developed suitable for globalizing quasi-Newton methods for large sparse nonlinear systems of equations. The new continuation methods, described as monolithic homotopy continuation, differ from the classical predictor-corrector algorithm in that the predictor and corrector phases are replaced with a single phase which includes both a predictor and corrector component. Conditional convergence and stability are proved analytically. Using a Laplacian-like operator to construct the homotopy, the new algorithm is shown to be more efficient than the predictor-corrector homotopy continuation algorithm as well as an implementation of the widely-used pseudo-transient continuation algorithm for some inviscid and turbulent, subsonic and transonic external aerodynamic flows over the ONERA M6 wing and the NACA 0012 airfoil using a parallel implicit Newton-Krylov finite-difference flow solver.
NASA Astrophysics Data System (ADS)
Ni, Yongnian; Wang, Yong; Kokot, Serge
2008-10-01
A spectrophotometric method for the simultaneous determination of the important pharmaceuticals, pefloxacin and its structurally similar metabolite, norfloxacin, is described for the first time. The analysis is based on the monitoring of a kinetic spectrophotometric reaction of the two analytes with potassium permanganate as the oxidant. The measurement of the reaction process followed the absorbance decrease of potassium permanganate at 526 nm, and the accompanying increase of the product, potassium manganate, at 608 nm. It was essential to use multivariate calibrations to overcome severe spectral overlaps and similarities in reaction kinetics. Calibration curves for the individual analytes showed linear relationships over the concentration ranges of 1.0-11.5 mg L -1 at 526 and 608 nm for pefloxacin, and 0.15-1.8 mg L -1 at 526 and 608 nm for norfloxacin. Various multivariate calibration models were applied, at the two analytical wavelengths, for the simultaneous prediction of the two analytes including classical least squares (CLS), principal component regression (PCR), partial least squares (PLS), radial basis function-artificial neural network (RBF-ANN) and principal component-radial basis function-artificial neural network (PC-RBF-ANN). PLS and PC-RBF-ANN calibrations with the data collected at 526 nm, were the preferred methods—%RPE T ˜ 5, and LODs for pefloxacin and norfloxacin of 0.36 and 0.06 mg L -1, respectively. Then, the proposed method was applied successfully for the simultaneous determination of pefloxacin and norfloxacin present in pharmaceutical and human plasma samples. The results compared well with those from the alternative analysis by HPLC.
Experimental Validation of the Transverse Shear Behavior of a Nomex Core for Sandwich Panels
NASA Astrophysics Data System (ADS)
Farooqi, M. I.; Nasir, M. A.; Ali, H. M.; Ali, Y.
2017-05-01
This work deals with determination of the transverse shear moduli of a Nomex® honeycomb core of sandwich panels. Their out-of-plane shear characteristics depend on the transverse shear moduli of the honeycomb core. These moduli were determined experimentally, numerically, and analytically. Numerical simulations were performed by using a unit cell model and three analytical approaches. Analytical calculations showed that two of the approaches provided reasonable predictions for the transverse shear modulus as compared with experimental results. However, the approach based upon the classical lamination theory showed large deviations from experimental data. Numerical simulations also showed a trend similar to that resulting from the analytical models.
NASA Astrophysics Data System (ADS)
García, Isaac A.; Llibre, Jaume; Maza, Susanna
2018-06-01
In this work we consider real analytic functions , where , Ω is a bounded open subset of , is an interval containing the origin, are parameters, and ε is a small parameter. We study the branching of the zero-set of at multiple points when the parameter ε varies. We apply the obtained results to improve the classical averaging theory for computing T-periodic solutions of λ-families of analytic T-periodic ordinary differential equations defined on , using the displacement functions defined by these equations. We call the coefficients in the Taylor expansion of in powers of ε the averaged functions. The main contribution consists in analyzing the role that have the multiple zeros of the first non-zero averaged function. The outcome is that these multiple zeros can be of two different classes depending on whether the zeros belong or not to the analytic set defined by the real variety associated to the ideal generated by the averaged functions in the Noetheriang ring of all the real analytic functions at . We bound the maximum number of branches of isolated zeros that can bifurcate from each multiple zero z 0. Sometimes these bounds depend on the cardinalities of minimal bases of the former ideal. Several examples illustrate our results and they are compared with the classical theory, branching theory and also under the light of singularity theory of smooth maps. The examples range from polynomial vector fields to Abel differential equations and perturbed linear centers.
Quantum calculus of classical vortex images, integrable models and quantum states
NASA Astrophysics Data System (ADS)
Pashaev, Oktay K.
2016-10-01
From two circle theorem described in terms of q-periodic functions, in the limit q→1 we have derived the strip theorem and the stream function for N vortex problem. For regular N-vortex polygon we find compact expression for the velocity of uniform rotation and show that it represents a nonlinear oscillator. We describe q-dispersive extensions of the linear and nonlinear Schrodinger equations, as well as the q-semiclassical expansions in terms of Bernoulli and Euler polynomials. Different kind of q-analytic functions are introduced, including the pq-analytic and the golden analytic functions.
Simple analytical model of a thermal diode
NASA Astrophysics Data System (ADS)
Kaushik, Saurabh; Kaushik, Sachin; Marathe, Rahul
2018-05-01
Recently there is a lot of attention given to manipulation of heat by constructing thermal devices such as thermal diodes, transistors and logic gates. Many of the models proposed have an asymmetry which leads to the desired effect. Presence of non-linear interactions among the particles is also essential. But, such models lack analytical understanding. Here we propose a simple, analytically solvable model of a thermal diode. Our model consists of classical spins in contact with multiple heat baths and constant external magnetic fields. Interestingly the magnetic field is the only parameter required to get the effect of heat rectification.
Nguyen, Tuan A H; Biggs, Simon R; Nguyen, Anh V
2018-05-30
Current analytical models for sessile droplet evaporation do not consider the nonuniform temperature field within the droplet and can overpredict the evaporation by 20%. This deviation can be attributed to a significant temperature drop due to the release of the latent heat of evaporation along the air-liquid interface. We report, for the first time, an analytical solution of the sessile droplet evaporation coupled with this interfacial cooling effect. The two-way coupling model of the quasi-steady thermal diffusion within the droplet and the quasi-steady diffusion-controlled droplet evaporation is conveniently solved in the toroidal coordinate system by applying the method of separation of variables. Our new analytical model for the coupled vapor concentration and temperature fields is in the closed form and is applicable for a full range of spherical-cap shape droplets of different contact angles and types of fluids. Our analytical results are uniquely quantified by a dimensionless evaporative cooling number E o whose magnitude is determined only by the thermophysical properties of the liquid and the atmosphere. Accordingly, the larger the magnitude of E o , the more significant the effect of the evaporative cooling, which results in stronger suppression on the evaporation rate. The classical isothermal model is recovered if the temperature gradient along the air-liquid interface is negligible ( E o = 0). For substrates with very high thermal conductivities (isothermal substrates), our analytical model predicts a reversal of temperature gradient along the droplet-free surface at a contact angle of 119°. Our findings pose interesting challenges but also guidance for experimental investigations.
3D inelastic analysis methods for hot section components
NASA Technical Reports Server (NTRS)
Dame, L. T.; Chen, P. C.; Hartle, M. S.; Huang, H. T.
1985-01-01
The objective is to develop analytical tools capable of economically evaluating the cyclic time dependent plasticity which occurs in hot section engine components in areas of strain concentration resulting from the combination of both mechanical and thermal stresses. Three models were developed. A simple model performs time dependent inelastic analysis using the power law creep equation. The second model is the classical model of Professors Walter Haisler and David Allen of Texas A and M University. The third model is the unified model of Bodner, Partom, et al. All models were customized for linear variation of loads and temperatures with all material properties and constitutive models being temperature dependent.
A centroid molecular dynamics study of liquid para-hydrogen and ortho-deuterium.
Hone, Tyler D; Voth, Gregory A
2004-10-01
Centroid molecular dynamics (CMD) is applied to the study of collective and single-particle dynamics in liquid para-hydrogen at two state points and liquid ortho-deuterium at one state point. The CMD results are compared with the results of classical molecular dynamics, quantum mode coupling theory, a maximum entropy analytic continuation approach, pair-product forward- backward semiclassical dynamics, and available experimental results. The self-diffusion constants are in excellent agreement with the experimental measurements for all systems studied. Furthermore, it is shown that the method is able to adequately describe both the single-particle and collective dynamics of quantum liquids. (c) 2004 American Institute of Physics
2010-01-01
Background Patients-Reported Outcomes (PRO) are increasingly used in clinical and epidemiological research. Two main types of analytical strategies can be found for these data: classical test theory (CTT) based on the observed scores and models coming from Item Response Theory (IRT). However, whether IRT or CTT would be the most appropriate method to analyse PRO data remains unknown. The statistical properties of CTT and IRT, regarding power and corresponding effect sizes, were compared. Methods Two-group cross-sectional studies were simulated for the comparison of PRO data using IRT or CTT-based analysis. For IRT, different scenarios were investigated according to whether items or person parameters were assumed to be known, to a certain extent for item parameters, from good to poor precision, or unknown and therefore had to be estimated. The powers obtained with IRT or CTT were compared and parameters having the strongest impact on them were identified. Results When person parameters were assumed to be unknown and items parameters to be either known or not, the power achieved using IRT or CTT were similar and always lower than the expected power using the well-known sample size formula for normally distributed endpoints. The number of items had a substantial impact on power for both methods. Conclusion Without any missing data, IRT and CTT seem to provide comparable power. The classical sample size formula for CTT seems to be adequate under some conditions but is not appropriate for IRT. In IRT, it seems important to take account of the number of items to obtain an accurate formula. PMID:20338031
NASA Astrophysics Data System (ADS)
Difilippo, Felix C.
2012-09-01
Within the context of general relativity theory we calculate, analytically, scattering signatures around a gravitational singularity: angular and time distributions of scattered massive objects and photons and the time and space modulation of Doppler effects. Additionally, the scattering and absorption cross sections for the gravitational interactions are calculated. The results of numerical simulations of the trajectories are compared with the analytical results.
Bin Sayeed, Muhammad Shahdaat; Karim, Selim Muhammad Rezaul; Sharmin, Tasnuva; Morshed, Mohammed Monzur
2016-01-01
Beta-sitosterol (BS) is a phytosterol, widely distributed throughout the plant kingdom and known to be involved in the stabilization of cell membranes. To compile the sources, physical and chemical properties, spectral and chromatographic analytical methods, synthesis, systemic effects, pharmacokinetics, therapeutic potentials, toxicity, drug delivery and finally, to suggest future research with BS, classical as well as on-line literature were studied. Classical literature includes classical books on ethnomedicine and phytochemistry, and the electronic search included Pubmed, SciFinder, Scopus, the Web of Science, Google Scholar, and others. BS could be obtained from different plants, but the total biosynthetic pathway, as well as its exact physiological and structural function in plants, have not been fully understood. Different pharmacological effects have been studied, but most of the mechanisms of action have not been studied in detail. Clinical trials with BS have shown beneficial effects in different diseases, but long-term study results are not available. These have contributed to its current status as an “orphan phytosterol”. Therefore, extensive research regarding its effect at cellular and molecular level in humans as well as addressing the claims made by commercial manufacturers such as the cholesterol lowering ability, immunological activity etc. are highly recommended. PMID:28930139
Computing Platforms for Big Biological Data Analytics: Perspectives and Challenges.
Yin, Zekun; Lan, Haidong; Tan, Guangming; Lu, Mian; Vasilakos, Athanasios V; Liu, Weiguo
2017-01-01
The last decade has witnessed an explosion in the amount of available biological sequence data, due to the rapid progress of high-throughput sequencing projects. However, the biological data amount is becoming so great that traditional data analysis platforms and methods can no longer meet the need to rapidly perform data analysis tasks in life sciences. As a result, both biologists and computer scientists are facing the challenge of gaining a profound insight into the deepest biological functions from big biological data. This in turn requires massive computational resources. Therefore, high performance computing (HPC) platforms are highly needed as well as efficient and scalable algorithms that can take advantage of these platforms. In this paper, we survey the state-of-the-art HPC platforms for big biological data analytics. We first list the characteristics of big biological data and popular computing platforms. Then we provide a taxonomy of different biological data analysis applications and a survey of the way they have been mapped onto various computing platforms. After that, we present a case study to compare the efficiency of different computing platforms for handling the classical biological sequence alignment problem. At last we discuss the open issues in big biological data analytics.
Extended Rindler spacetime and a new multiverse structure
NASA Astrophysics Data System (ADS)
Araya, Ignacio J.; Bars, Itzhak
2018-04-01
This is the first of a series of papers in which we use analyticity properties of quantum fields propagating on a spacetime to uncover a new multiverse geometry when the classical geometry has horizons and/or singularities. The nature and origin of the "multiverse" idea presented in this paper, that is shared by the fields in the standard model coupled to gravity, are different from other notions of a multiverse. Via analyticity we are able to establish definite relations among the universes. In this paper we illustrate these properties for the extended Rindler space, while black hole spacetime and the cosmological geometry of mini-superspace (see Appendix B) will appear in later papers. In classical general relativity, extended Rindler space is equivalent to flat Minkowski space; it consists of the union of the four wedges in (u ,v ) light-cone coordinates as in Fig. 1. In quantum mechanics, the wavefunction is an analytic function of (u ,v ) that is sensitive to branch points at the horizons u =0 or v =0 , with branch cuts attached to them. The wave function is uniquely defined by analyticity on an infinite number of sheets in the cut analytic (u ,v ) spacetime. This structure is naturally interpreted as an infinite stack of identical Minkowski geometries, or "universes", connected to each other by analyticity across branch cuts, such that each sheet represents a different Minkowski universe when (u ,v ) are analytically continued to the real axis on any sheet. We show in this paper that, in the absence of interactions, information does not flow from one Rindler sheet to another. By contrast, for an eternal black hole spacetime, which may be viewed as a modification of Rindler that includes gravitational interactions, analyticity shows how information is "lost" due to a flow to other universes, enabled by an additional branch point and cut due to the black hole singularity.
Torsion of a Cosserat elastic bar with square cross section: theory and experiment
NASA Astrophysics Data System (ADS)
Drugan, W. J.; Lakes, R. S.
2018-04-01
An approximate analytical solution for the displacement and microrotation vector fields is derived for pure torsion of a prismatic bar with square cross section comprised of homogeneous, isotropic linear Cosserat elastic material. This is accomplished by analytical simplification coupled with use of the principle of minimum potential energy together with polynomial representations for the desired field components. Explicit approximate expressions are derived for cross section warp and for applied torque versus angle of twist of the bar. These show that torsional rigidity exceeds the classical elasticity value, the difference being larger for slender bars, and that cross section warp is less than the classical amount. Experimental measurements on two sets of 3D printed square cross section polymeric bars, each set having a different microstructure and four different cross section sizes, revealed size effects not captured by classical elasticity but consistent with the present analysis for physically sensible values of the Cosserat moduli. The warp can allow inference of Cosserat elastic constants independently of any sensitivity the material may have to dilatation gradients; warp also facilitates inference of Cosserat constants that are difficult to obtain via size effects.
Quench dynamics of a dissipative Rydberg gas in the classical and quantum regimes
NASA Astrophysics Data System (ADS)
Gribben, Dominic; Lesanovsky, Igor; Gutiérrez, Ricardo
2018-01-01
Understanding the nonequilibrium behavior of quantum systems is a major goal of contemporary physics. Much research is currently focused on the dynamics of many-body systems in low-dimensional lattices following a quench, i.e., a sudden change of parameters. Already such a simple setting poses substantial theoretical challenges for the investigation of the real-time postquench quantum dynamics. In classical many-body systems, the Kolmogorov-Mehl-Johnson-Avrami model describes the phase transformation kinetics of a system that is quenched across a first-order phase transition. Here, we show that a similar approach can be applied for shedding light on the quench dynamics of an interacting gas of Rydberg atoms, which has become an important experimental platform for the investigation of quantum nonequilibrium effects. We are able to gain an analytical understanding of the time evolution following a sudden quench from an initial state devoid of Rydberg atoms and identify strikingly different behaviors of the excitation growth in the classical and quantum regimes. Our approach allows us to describe quenches near a nonequilibrium phase transition and provides an approximate analytical solution deep in the quantum domain.
NASA Technical Reports Server (NTRS)
Berton, Jeffrey J.
1991-01-01
The analytical derivations of the non-axial thrust divergence losses for convergent-divergent nozzles are described as well as how these calculations are embodied in the Navy/NASA engine computer program. The convergent-divergent geometries considered are simple classic axisymmetric nozzles, two dimensional rectangular nozzles, and axisymmetric and two dimensional plug nozzles. A simple, traditional, inviscid mathematical approach is used to deduce the influence of the ineffectual non-axial thrust as a function of the nozzle exit divergence angle.
Cell-model prediction of the melting of a Lennard-Jones solid
DOE Office of Scientific and Technical Information (OSTI.GOV)
Holian, B.L.
The classical free energy of the Lennard-Jones 6-12 solid is computed from a single-particle anharmonic cell model with a correction to the entropy given by the classical correlational entropy of quasiharmonic lattice dynamics. The free energy of the fluid is obtained from the Hansen-Ree analytic fit to Monte Carlo equation-of-state calculations. The resulting predictions of the solid-fluid coexistence curves by this corrected cell model of the solid are in excellent agreement with the computer experiments.
The classical equation of state of fully ionized plasmas
NASA Astrophysics Data System (ADS)
Eisa, Dalia Ahmed
2011-03-01
The aim of this paper is to calculate the analytical form of the equation of state until the third virial coefficient of a classical system interacting via an effective potential of fully Ionized Plasmas. The excess osmotic pressure is represented in the forms of a convergent series expansions in terms of the plasma Parameter μ _{ab} = {{{e_a e_b χ } over {DKT}}}, where χ2 is the square of the inverse Debye radius. We consider only the thermal equilibrium plasma.
Classical topological paramagnetism
NASA Astrophysics Data System (ADS)
Bondesan, R.; Ringel, Z.
2017-05-01
Topological phases of matter are one of the hallmarks of quantum condensed matter physics. One of their striking features is a bulk-boundary correspondence wherein the topological nature of the bulk manifests itself on boundaries via exotic massless phases. In classical wave phenomena, analogous effects may arise; however, these cannot be viewed as equilibrium phases of matter. Here, we identify a set of rules under which robust equilibrium classical topological phenomena exist. We write simple and analytically tractable classical lattice models of spins and rotors in two and three dimensions which, at suitable parameter ranges, are paramagnetic in the bulk but nonetheless exhibit some unusual long-range or critical order on their boundaries. We point out the role of simplicial cohomology as a means of classifying, writing, and analyzing such models. This opens an experimental route for studying strongly interacting topological phases of spins.
Continuum-kinetic approach to sheath simulations
NASA Astrophysics Data System (ADS)
Cagas, Petr; Hakim, Ammar; Srinivasan, Bhuvana
2016-10-01
Simulations of sheaths are performed using a novel continuum-kinetic model with collisions including ionization/recombination. A discontinuous Galerkin method is used to directly solve the Boltzmann-Poisson system to obtain a particle distribution function. Direct discretization of the distribution function has advantages of being noise-free compared to particle-in-cell methods. The distribution function, which is available at each node of the configuration space, can be readily used to calculate the collision integrals in order to get ionization and recombination operators. Analytical models are used to obtain the cross-sections as a function of energy. Results will be presented incorporating surface physics with a classical sheath in Hall thruster-relevant geometry. This work was sponsored by the Air Force Office of Scientific Research under Grant Number FA9550-15-1-0193.
The propagation of Lamb waves in multilayered plates: phase-velocity measurement
NASA Astrophysics Data System (ADS)
Grondel, Sébastien; Assaad, Jamal; Delebarre, Christophe; Blanquet, Pierrick; Moulin, Emmanuel
1999-05-01
Owing to the dispersive nature and complexity of the Lamb waves generated in a composite plate, the measurement of the phase velocities by using classical methods is complicated. This paper describes a measurement method based upon the spectrum-analysis technique, which allows one to overcome these problems. The technique consists of using the fast Fourier transform to compute the spatial power-density spectrum. Additionally, weighted functions are used to increase the probability of detecting the various propagation modes. Experimental Lamb-wave dispersion curves of multilayered plates are successfully compared with the analytical ones. This technique is expected to be a useful way to design composite parts integrating ultrasonic transducers in the field of health monitoring. Indeed, Lamb waves and particularly their velocities are very sensitive to defects.
Quantum dressing orbits on compact groups
NASA Astrophysics Data System (ADS)
Jurčo, Branislav; Šťovíček, Pavel
1993-02-01
The quantum double is shown to imply the dressing transformation on quantum compact groups and the quantum Iwasawa decompositon in the general case. Quantum dressing orbits are described explicitly as *-algebras. The dual coalgebras consisting of differential operators are related to the quantum Weyl elements. Besides, the differential geometry on a quantum leaf allows a remarkably simple construction of irreducible *-representations of the algebras of quantum functions. Representation spaces then consist of analytic functions on classical phase spaces. These representations are also interpreted in the framework of quantization in the spirit of Berezin applied to symplectic leaves on classical compact groups. Convenient “coherent states” are introduced and a correspondence between classical and quantum observables is given.
Krasnoshchekov, Sergey V; Isayeva, Elena V; Stepanov, Nikolay F
2012-04-12
Anharmonic vibrational states of semirigid polyatomic molecules are often studied using the second-order vibrational perturbation theory (VPT2). For efficient higher-order analysis, an approach based on the canonical Van Vleck perturbation theory (CVPT), the Watson Hamiltonian and operators of creation and annihilation of vibrational quanta is employed. This method allows analysis of the convergence of perturbation theory and solves a number of theoretical problems of VPT2, e.g., yields anharmonic constants y(ijk), z(ijkl), and allows the reliable evaluation of vibrational IR and Raman anharmonic intensities in the presence of resonances. Darling-Dennison and higher-order resonance coupling coefficients can be reliably evaluated as well. The method is illustrated on classic molecules: water and formaldehyde. A number of theoretical conclusions results, including the necessity of using sextic force field in the fourth order (CVPT4) and the nearly vanishing CVPT4 contributions for bending and wagging modes. The coefficients of perturbative Dunham-type Hamiltonians in high-orders of CVPT are found to conform to the rules of equality at different orders as earlier proven analytically for diatomic molecules. The method can serve as a good substitution of the more traditional VPT2.
NASA Astrophysics Data System (ADS)
Perrier, C.; Breysacher, J.; Rauw, G.
2009-09-01
Aims: We present a technique to determine the orbital and physical parameters of eclipsing eccentric Wolf-Rayet + O-star binaries, where one eclipse is produced by the absorption of the O-star light by the stellar wind of the W-R star. Methods: Our method is based on the use of the empirical moments of the light curve that are integral transforms evaluated from the observed light curves. The optical depth along the line of sight and the limb darkening of the W-R star are modelled by simple mathematical functions, and we derive analytical expressions for the moments of the light curve as a function of the orbital parameters and the key parameters of the transparency and limb-darkening functions. These analytical expressions are then inverted in order to derive the values of the orbital inclination, the stellar radii, the fractional luminosities, and the parameters of the wind transparency and limb-darkening laws. Results: The method is applied to the SMC W-R eclipsing binary HD 5980, a remarkable object that underwent an LBV-like event in August 1994. The analysis refers to the pre-outburst observational data. A synthetic light curve based on the elements derived for the system allows a quality assessment of the results obtained.
Dam break problem for the focusing nonlinear Schrödinger equation and the generation of rogue waves
NASA Astrophysics Data System (ADS)
El, G. A.; Khamis, E. G.; Tovbis, A.
2016-09-01
We propose a novel, analytically tractable, scenario of the rogue wave formation in the framework of the small-dispersion focusing nonlinear Schrödinger (NLS) equation with the initial condition in the form of a rectangular barrier (a ‘box’). We use the Whitham modulation theory combined with the nonlinear steepest descent for the semi-classical inverse scattering transform, to describe the evolution and interaction of two counter-propagating nonlinear wave trains—the dispersive dam break flows—generated in the NLS box problem. We show that the interaction dynamics results in the emergence of modulated large-amplitude quasi-periodic breather lattices whose amplitude profiles are closely approximated by the Akhmediev and Peregrine breathers within certain space-time domain. Our semi-classical analytical results are shown to be in excellent agreement with the results of direct numerical simulations of the small-dispersion focusing NLS equation.
The effect of damping on a quantum system containing a Kerr-like medium
NASA Astrophysics Data System (ADS)
Mohamed, A.-B. A.; Sebawe Abdalla, M.; Obada, A.-S. F.
2018-05-01
An analytical description is given for a model which represents the interaction between Su(1,1) and Su(2) quantum systems taking into account Su(1,1)-cavity damping and Kerr medium properties. The analytic solution for the master equation of the density matrix is obtained. The examination of the effects of the damping parameter as well as the Kerr-like medium features is performed. The atomic inversion is discussed where the revivals and collapses phenomenon is realized at the considered period of time. Our study is extended to include the degree of entanglement where the system shows partial entanglement in all cases, however, disentanglement is also observed. The death and rebirth is seen in the system provided one selects the suitable values of the parameters. The correlation function of the system shows non-classical as well as classical behavior.
The mean and variance of phylogenetic diversity under rarefaction
Matsen, Frederick A.
2013-01-01
Summary Phylogenetic diversity (PD) depends on sampling depth, which complicates the comparison of PD between samples of different depth. One approach to dealing with differing sample depth for a given diversity statistic is to rarefy, which means to take a random subset of a given size of the original sample. Exact analytical formulae for the mean and variance of species richness under rarefaction have existed for some time but no such solution exists for PD.We have derived exact formulae for the mean and variance of PD under rarefaction. We confirm that these formulae are correct by comparing exact solution mean and variance to that calculated by repeated random (Monte Carlo) subsampling of a dataset of stem counts of woody shrubs of Toohey Forest, Queensland, Australia. We also demonstrate the application of the method using two examples: identifying hotspots of mammalian diversity in Australasian ecoregions, and characterising the human vaginal microbiome.There is a very high degree of correspondence between the analytical and random subsampling methods for calculating mean and variance of PD under rarefaction, although the Monte Carlo method requires a large number of random draws to converge on the exact solution for the variance.Rarefaction of mammalian PD of ecoregions in Australasia to a common standard of 25 species reveals very different rank orderings of ecoregions, indicating quite different hotspots of diversity than those obtained for unrarefied PD. The application of these methods to the vaginal microbiome shows that a classical score used to quantify bacterial vaginosis is correlated with the shape of the rarefaction curve.The analytical formulae for the mean and variance of PD under rarefaction are both exact and more efficient than repeated subsampling. Rarefaction of PD allows for many applications where comparisons of samples of different depth is required. PMID:23833701
The mean and variance of phylogenetic diversity under rarefaction.
Nipperess, David A; Matsen, Frederick A
2013-06-01
Phylogenetic diversity (PD) depends on sampling depth, which complicates the comparison of PD between samples of different depth. One approach to dealing with differing sample depth for a given diversity statistic is to rarefy, which means to take a random subset of a given size of the original sample. Exact analytical formulae for the mean and variance of species richness under rarefaction have existed for some time but no such solution exists for PD.We have derived exact formulae for the mean and variance of PD under rarefaction. We confirm that these formulae are correct by comparing exact solution mean and variance to that calculated by repeated random (Monte Carlo) subsampling of a dataset of stem counts of woody shrubs of Toohey Forest, Queensland, Australia. We also demonstrate the application of the method using two examples: identifying hotspots of mammalian diversity in Australasian ecoregions, and characterising the human vaginal microbiome.There is a very high degree of correspondence between the analytical and random subsampling methods for calculating mean and variance of PD under rarefaction, although the Monte Carlo method requires a large number of random draws to converge on the exact solution for the variance.Rarefaction of mammalian PD of ecoregions in Australasia to a common standard of 25 species reveals very different rank orderings of ecoregions, indicating quite different hotspots of diversity than those obtained for unrarefied PD. The application of these methods to the vaginal microbiome shows that a classical score used to quantify bacterial vaginosis is correlated with the shape of the rarefaction curve.The analytical formulae for the mean and variance of PD under rarefaction are both exact and more efficient than repeated subsampling. Rarefaction of PD allows for many applications where comparisons of samples of different depth is required.
Development of advanced methods for analysis of experimental data in diffusion
NASA Astrophysics Data System (ADS)
Jaques, Alonso V.
There are numerous experimental configurations and data analysis techniques for the characterization of diffusion phenomena. However, the mathematical methods for estimating diffusivities traditionally do not take into account the effects of experimental errors in the data, and often require smooth, noiseless data sets to perform the necessary analysis steps. The current methods used for data smoothing require strong assumptions which can introduce numerical "artifacts" into the data, affecting confidence in the estimated parameters. The Boltzmann-Matano method is used extensively in the determination of concentration - dependent diffusivities, D(C), in alloys. In the course of analyzing experimental data, numerical integrations and differentiations of the concentration profile are performed. These methods require smoothing of the data prior to analysis. We present here an approach to the Boltzmann-Matano method that is based on a regularization method to estimate a differentiation operation on the data, i.e., estimate the concentration gradient term, which is important in the analysis process for determining the diffusivity. This approach, therefore, has the potential to be less subjective, and in numerical simulations shows an increased accuracy in the estimated diffusion coefficients. We present a regression approach to estimate linear multicomponent diffusion coefficients that eliminates the need pre-treat or pre-condition the concentration profile. This approach fits the data to a functional form of the mathematical expression for the concentration profile, and allows us to determine the diffusivity matrix directly from the fitted parameters. Reformulation of the equation for the analytical solution is done in order to reduce the size of the problem and accelerate the convergence. The objective function for the regression can incorporate point estimations for error in the concentration, improving the statistical confidence in the estimated diffusivity matrix. Case studies are presented to demonstrate the reliability and the stability of the method. To the best of our knowledge there is no published analysis of the effects of experimental errors on the reliability of the estimates for the diffusivities. For the case of linear multicomponent diffusion, we analyze the effects of the instrument analytical spot size, positioning uncertainty, and concentration uncertainty on the resulting values of the diffusivities. These effects are studied using Monte Carlo method on simulated experimental data. Several useful scaling relationships were identified which allow more rigorous and quantitative estimates of the errors in the measured data, and are valuable for experimental design. To further analyze anomalous diffusion processes, where traditional diffusional transport equations do not hold, we explore the use of fractional calculus in analytically representing these processes is proposed. We use the fractional calculus approach for anomalous diffusion processes occurring through a finite plane sheet with one face held at a fixed concentration, the other held at zero, and the initial concentration within the sheet equal to zero. This problem is related to cases in nature where diffusion is enhanced relative to the classical process, and the order of differentiation is not necessarily a second--order differential equation. That is, differentiation is of fractional order alpha, where 1 ≤ alpha < 2. For alpha = 2, the presented solutions reduce to the classical second-order diffusion solution for the conditions studied. The solution obtained allows the analysis of permeation experiments. Frequently, hydrogen diffusion is analyzed using electrochemical permeation methods using the traditional, Fickian-based theory. Experimental evidence shows the latter analytical approach is not always appropiate, because reported data shows qualitative (and quantitative) deviation from its theoretical scaling predictions. Preliminary analysis of data shows better agreement with fractional diffusion analysis when compared to traditional square-root scaling. Although there is a large amount of work in the estimation of the diffusivity from experimental data, reported studies typically present only the analytical description for the diffusivity, without scattering. However, because these studies do not consider effects produced by instrument analysis, their direct applicability is limited. We propose alternatives to address these, and to evaluate their influence on the final resulting diffusivity values.
NASA Astrophysics Data System (ADS)
Berrada, K.; Eleuch, H.
2017-09-01
Various schemes have been proposed to improve the parameter-estimation precision. In the present work, we suggest an alternative method to preserve the estimation precision by considering a model that closely describes a realistic experimental scenario. We explore this active way to control and enhance the measurements precision for a two-level quantum system interacting with classical electromagnetic field using ultra-short strong pulses with an exact analytical solution, i.e. beyond the rotating wave approximation. In particular, we investigate the variation of the precision with a few cycles pulse and a smooth phase jump over a finite time interval. We show that by acting on the shape of the phase transient and other parameters of the considered system, the amount of information may be increased and has smaller decay rate in the long time. These features make two-level systems incorporated in ultra-short, of-resonant and gradually changing phase good candidates for implementation of schemes for the quantum computation and the coherent information processing.
Zhu, Chaoyuan; Lin, Sheng Hsien
2006-07-28
Unified semiclasical solution for general nonadiabatic tunneling between two adiabatic potential energy surfaces is established by employing unified semiclassical solution for pure nonadiabatic transition [C. Zhu, J. Chem. Phys. 105, 4159 (1996)] with the certain symmetry transformation. This symmetry comes from a detailed analysis of the reduced scattering matrix for Landau-Zener type of crossing as a special case of nonadiabatic transition and nonadiabatic tunneling. Traditional classification of crossing and noncrossing types of nonadiabatic transition can be quantitatively defined by the rotation angle of adiabatic-to-diabatic transformation, and this rotational angle enters the analytical solution for general nonadiabatic tunneling. The certain two-state exponential potential models are employed for numerical tests, and the calculations from the present general nonadiabatic tunneling formula are demonstrated in very good agreement with the results from exact quantum mechanical calculations. The present general nonadiabatic tunneling formula can be incorporated with various mixed quantum-classical methods for modeling electronically nonadiabatic processes in photochemistry.
Oud, Bart; Maris, Antonius J A; Daran, Jean-Marc; Pronk, Jack T
2012-01-01
Successful reverse engineering of mutants that have been obtained by nontargeted strain improvement has long presented a major challenge in yeast biotechnology. This paper reviews the use of genome-wide approaches for analysis of Saccharomyces cerevisiae strains originating from evolutionary engineering or random mutagenesis. On the basis of an evaluation of the strengths and weaknesses of different methods, we conclude that for the initial identification of relevant genetic changes, whole genome sequencing is superior to other analytical techniques, such as transcriptome, metabolome, proteome, or array-based genome analysis. Key advantages of this technique over gene expression analysis include the independency of genome sequences on experimental context and the possibility to directly and precisely reproduce the identified changes in naive strains. The predictive value of genome-wide analysis of strains with industrially relevant characteristics can be further improved by classical genetics or simultaneous analysis of strains derived from parallel, independent strain improvement lineages. PMID:22152095
Liu, X; Abd El-Aty, A M; Shim, J-H
2011-10-01
Nigella sativa L. (black cumin), commonly known as black seed, is a member of the Ranunculaceae family. This seed is used as a natural remedy in many Middle Eastern and Far Eastern countries. Extracts prepared from N. sativa have, for centuries, been used for medical purposes. Thus far, the organic compounds in N. sativa, including alkaloids, steroids, carbohydrates, flavonoids, fatty acids, etc. have been fairly well characterized. Herein, we summarize some new extraction techniques, including microwave assisted extraction (MAE) and supercritical extraction techniques (SFE), in addition to the classical method of hydrodistillation (HD), which have been employed for isolation and various analytical techniques used for the identification of secondary metabolites in black seed. We believe that some compounds contained in N. sativa remain to be identified, and that high-throughput screening could help to identify new compounds. A study addressing environmentally-friendly techniques that have minimal or no environmental effects is currently underway in our laboratory.
Oud, Bart; van Maris, Antonius J A; Daran, Jean-Marc; Pronk, Jack T
2012-03-01
Successful reverse engineering of mutants that have been obtained by nontargeted strain improvement has long presented a major challenge in yeast biotechnology. This paper reviews the use of genome-wide approaches for analysis of Saccharomyces cerevisiae strains originating from evolutionary engineering or random mutagenesis. On the basis of an evaluation of the strengths and weaknesses of different methods, we conclude that for the initial identification of relevant genetic changes, whole genome sequencing is superior to other analytical techniques, such as transcriptome, metabolome, proteome, or array-based genome analysis. Key advantages of this technique over gene expression analysis include the independency of genome sequences on experimental context and the possibility to directly and precisely reproduce the identified changes in naive strains. The predictive value of genome-wide analysis of strains with industrially relevant characteristics can be further improved by classical genetics or simultaneous analysis of strains derived from parallel, independent strain improvement lineages. © 2011 Federation of European Microbiological Societies. Published by Blackwell Publishing Ltd. All rights reserved.
Merging OLTP and OLAP - Back to the Future
NASA Astrophysics Data System (ADS)
Lehner, Wolfgang
When the terms "Data Warehousing" and "Online Analytical Processing" were coined in the 1990s by Kimball, Codd, and others, there was an obvious need for separating data and workload for operational transactional-style processing and decision-making implying complex analytical queries over large and historic data sets. Large data warehouse infrastructures have been set up to cope with the special requirements of analytical query answering for multiple reasons: For example, analytical thinking heavily relies on predefined navigation paths to guide the user through the data set and to provide different views on different aggregation levels.Multi-dimensional queries exploiting hierarchically structured dimensions lead to complex star queries at a relational backend, which could hardly be handled by classical relational systems.
Continuum description of solvent dielectrics in molecular-dynamics simulations of proteins
NASA Astrophysics Data System (ADS)
Egwolf, Bernhard; Tavan, Paul
2003-02-01
We present a continuum approach for efficient and accurate calculation of reaction field forces and energies in classical molecular-dynamics (MD) simulations of proteins in water. The derivation proceeds in two steps. First, we reformulate the electrostatics of an arbitrarily shaped molecular system, which contains partially charged atoms and is embedded in a dielectric continuum representing the water. A so-called fuzzy partition is used to exactly decompose the system into partial atomic volumes. The reaction field is expressed by means of dipole densities localized at the atoms. Since these densities cannot be calculated analytically for general systems, we introduce and carefully analyze a set of approximations in a second step. These approximations allow us to represent the dipole densities by simple dipoles localized at the atoms. We derive a system of linear equations for these dipoles, which can be solved numerically by iteration. After determining the two free parameters of our approximate method we check its quality by comparisons (i) with an analytical solution, which is available for a perfectly spherical system, (ii) with forces obtained from a MD simulation of a soluble protein in water, and (iii) with reaction field energies of small molecules calculated by a finite difference method.
Moazami, Hamid Reza; Hosseiny Davarani, Saied Saeed; Mohammadi, Jamil; Nojavan, Saeed; Abrari, Masoud
2015-09-03
The distribution of electric field vectors was first calculated for electromembrane extraction (EME) systems in classical and cylindrical electrode geometries. The results showed that supported liquid membrane (SLM) has a general field amplifying effect due to its lower dielectric constant in comparison with aqueous donor/acceptor solutions. The calculated norms of the electric field vector showed that a DC voltage of 50 V can create huge electric field strengths up to 64 kV m(-1) and 111 kV m(-1) in classical and cylindrical geometries respectively. In both cases, the electric field strength reached its peak value on the inner wall of the SLM. In the case of classical geometry, the field strength was a function of the polar position of the SLM whereas the field strength in cylindrical geometry was angularly uniform. In order to investigate the effect of the electrode geometry on the performance of real EME systems, the analysis was carried out in three different geometries including classical, helical and cylindrical arrangements using naproxen and sodium diclofenac as the model analytes. Despite higher field strength and extended cross sectional area, the helical and cylindrical geometries gave lower recoveries with respect to the classical EME. The observed decline of the signal was proved to be against the relations governing migration and diffusion processes, which means that a third driving force is involved in EME. The third driving force is the interaction between the radially inhomogeneous electric field and the analyte in its neutral form. Copyright © 2015 Elsevier B.V. All rights reserved.
NASA Technical Reports Server (NTRS)
Saravanos, D. A.
1993-01-01
The development of novel composite mechanics for the analysis of damping in composite laminates and structures and the more significant results of this effort are summarized. Laminate mechanics based on piecewise continuous in-plane displacement fields are described that can represent both intralaminar stresses and interlaminar shear stresses and the associated effects on the stiffness and damping characteristics of a composite laminate. Among other features, the mechanics can accurately model the static and damped dynamic response of either thin or thick composite laminates, as well as, specialty laminates with embedded compliant damping layers. The discrete laminate damping theory is further incorporated into structural analysis methods. In this context, an exact semi-analytical method for the simulation of the damped dynamic response of composite plates was developed. A finite element based method and a specialty four-node plate element were also developed for the analysis of composite structures of variable shape and boundary conditions. Numerous evaluations and applications demonstrate the quality and superiority of the mechanics in predicting the damped dynamic characteristics of composite structures. Finally, additional development was focused on the development of optimal tailoring methods for the design of thick composite structures based on the developed analytical capability. Applications on composite plates illustrated the influence of composite mechanics in the optimal design of composites and the potential for significant deviations in the resultant designs when more simplified (classical) laminate theories are used.
Pramanik, Brahmananda; Tadepalli, Tezeswi; Mantena, P. Raju
2012-01-01
In this study, the fractal dimensions of failure surfaces of vinyl ester based nanocomposites are estimated using two classical methods, Vertical Section Method (VSM) and Slit Island Method (SIM), based on the processing of 3D digital microscopic images. Self-affine fractal geometry has been observed in the experimentally obtained failure surfaces of graphite platelet reinforced nanocomposites subjected to quasi-static uniaxial tensile and low velocity punch-shear loading. Fracture energy and fracture toughness are estimated analytically from the surface fractal dimensionality. Sensitivity studies show an exponential dependency of fracture energy and fracture toughness on the fractal dimensionality. Contribution of fracture energy to the total energy absorption of these nanoparticle reinforced composites is demonstrated. For the graphite platelet reinforced nanocomposites investigated, surface fractal analysis has depicted the probable ductile or brittle fracture propagation mechanism, depending upon the rate of loading. PMID:28817017
Preliminary design methods for fiber reinforced composite structures employing a personal computer
NASA Technical Reports Server (NTRS)
Eastlake, C. N.
1986-01-01
The objective of this project was to develop a user-friendly interactive computer program to be used as an analytical tool by structural designers. Its intent was to do preliminary, approximate stress analysis to help select or verify sizing choices for composite structural members. The approach to the project was to provide a subroutine which uses classical lamination theory to predict an effective elastic modulus for a laminate of arbitrary material and ply orientation. This effective elastic modulus can then be used in a family of other subroutines which employ the familiar basic structural analysis methods for isotropic materials. This method is simple and convenient to use but only approximate, as is appropriate for a preliminary design tool which will be subsequently verified by more sophisticated analysis. Additional subroutines have been provided to calculate laminate coefficient of thermal expansion and to calculate ply-by-ply strains within a laminate.
Are the classic diagnostic methods in mycology still state of the art?
Wiegand, Cornelia; Bauer, Andrea; Brasch, Jochen; Nenoff, Pietro; Schaller, Martin; Mayser, Peter; Hipler, Uta-Christina; Elsner, Peter
2016-05-01
The diagnostic workup of cutaneous fungal infections is traditionally based on microscopic KOH preparations as well as culturing of the causative organism from sample material. Another possible option is the detection of fungal elements by dermatohistology. If performed correctly, these methods are generally suitable for the diagnosis of mycoses. However, the advent of personalized medicine and the tasks arising therefrom require new procedures marked by simplicity, specificity, and swiftness. The additional use of DNA-based molecular techniques further enhances sensitivity and diagnostic specificity, and reduces the diagnostic interval to 24-48 hours, compared to weeks required for conventional mycological methods. Given the steady evolution in the field of personalized medicine, simple analytical PCR-based systems are conceivable, which allow for instant diagnosis of dermatophytes in the dermatology office (point-of-care tests). © 2016 Deutsche Dermatologische Gesellschaft (DDG). Published by John Wiley & Sons Ltd.
The Parker-Sochacki Method of Solving Differential Equations: Applications and Limitations
NASA Astrophysics Data System (ADS)
Rudmin, Joseph W.
2006-11-01
The Parker-Sochacki method is a powerful but simple technique of solving systems of differential equations, giving either analytical or numerical results. It has been in use for about 10 years now since its discovery by G. Edgar Parker and James Sochacki of the James Madison University Dept. of Mathematics and Statistics. It is being presented here because it is still not widely known and can benefit the listeners. It is a method of rapidly generating the Maclauren series to high order, non-iteratively. It has been successfully applied to more than a hundred systems of equations, including the classical many-body problem. Its advantages include its speed of calculation, its simplicity, and the fact that it uses only addition, subtraction and multiplication. It is not just a polynomial approximation, because it yields the Maclaurin series, and therefore exhibits the advantages and disadvantages of that series. A few applications will be presented.
Apparent Mass Nonlinearity for Paired Oscillating Plates
NASA Astrophysics Data System (ADS)
Granlund, Kenneth; Ol, Michael
2014-11-01
The classical potential-flow problem of a plate oscillating sinusoidally at small amplitude, in a direction normal to its plane, has a well-known analytical solution of a fluid ``mass,'' multiplied by plate acceleration, being equal to the force on the plate. This so-called apparent-mass is analytically equal to that of a cylinder of fluid, with diameter equal to plate chord. The force is directly proportional to frequency squared. Here we consider experimentally a generalization, where two coplanar plates of equal chord are placed at some lateral distance apart. For spacing of ~0.5 chord and larger between the two plates, the analytical solution for a single plate can simply be doubled. Zero spacing means a plate of twice the chord and therefore a heuristic cylinder of fluid of twice the cross-sectional area. This limit is approached for plate spacing <0.5c. For a spacing of 0.1-0.2c, the force due to apparent mass was found to increase with frequency, when normalized by frequency squared; this is a nonlinearity and a departure from the classical theory. Flow visualization in a water-tank suggests that such departure can be imputed to vortex shedding from the plates' edges inside the inter-plate gap.
Thermodynamic aspects of reformulation of automotive fuels
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zudkevitch, D.; Murthy, A.K.S.; Gmehling, J.
1995-09-01
A study of procedures for measuring and predicting the RVP and the initial vapor emissions of reformulated gasoline blends which contain one or more oxygenated compounds, viz., Ethanol, MTBE, ETBE, and TAME is discussed. Two computer simulation methods were programmed and tested. In one method, Method A, the D-86 distillation data on the blend are used for predicting the blend`s RVP from a simulation of the Mini RVPE (RVP Equivalent) experiment. The other method, Method B, relies on analytical information (PIANO analyzes) on the nature of the base gasoline and utilizes classical thermodynamics for simulating the same RVPE, Mini experiment.more » Method B, also, predicts the composition and other properties of the initial vapor emission from the fuel. The results indicate that predictions made with both methods agree very well with experimental values. The predictions with Method B illustrate that the admixture of an oxygenate to a gasoline blend changes the volatility of the blend and, also, the composition of the vapor emission. From the example simulations, a blend with 10 vol % ethanol increases the RVP by about 0.8 psi. The accompanying vapor emission will contain about 15% ethanol. Similarly, the vapor emission of a fuel blend with 11 vol % MTBE was calculated to contain about 11 vol % MTBE. Predictions of the behavior of blends with ETBE and ETBE+Ethanol are also presented and discussed. Recognizing that quite some efforts have been invested in developing empirical correlations for predicting RVP, the writers consider the purpose of this paper to be pointing out that the methods of classical thermodynamics are adequate and that there is a need for additional work in developing certain fundamental data that are still lacking.« less
A Comparison of the Bounded Derivative and the Normal Mode Initialization Methods Using Real Data
NASA Technical Reports Server (NTRS)
Semazzi, F. H. M.; Navon, I. M.
1985-01-01
Browning et al. (1980) proposed an initialization method called the bounded derivative method (BDI). They used analytical data to test the new method. Kasahara (1982) theoretically demonstrated the equivalence between BDI and the well known nonlinear normal mode initialization method (NMI). The purposes of this study are the extension of the application of BDI to real data and comparison with NMI. The unbalanced initial state (UBD) is data of January, 1979 OOZ which were interpolated from the adjacent sigma levels of the GLAS GCM to the 300 mb surface. The global barotropic model described by Takacs and Balgovind (1983) is used. Orographic forcing is explicitly included in the model. Many comparisons are performed between various quantities. However, we only present a comparison of the time evolution at two grid points A(50 S, 90 E) and B(10 S, 20 E) which represent low and middle latitude locations. To facilitate a more complete comparison an initialization experiment based on the classical balance equation (CBE) was also included.
A new exact method for line radiative transfer
NASA Astrophysics Data System (ADS)
Elitzur, Moshe; Asensio Ramos, Andrés
2006-01-01
We present a new method, the coupled escape probability (CEP), for exact calculation of line emission from multi-level systems, solving only algebraic equations for the level populations. The CEP formulation of the classical two-level problem is a set of linear equations, and we uncover an exact analytic expression for the emission from two-level optically thick sources that holds as long as they are in the `effectively thin' regime. In a comparative study of a number of standard problems, the CEP method outperformed the leading line transfer methods by substantial margins. The algebraic equations employed by our new method are already incorporated in numerous codes based on the escape probability approximation. All that is required for an exact solution with these existing codes is to augment the expression for the escape probability with simple zone-coupling terms. As an application, we find that standard escape probability calculations generally produce the correct cooling emission by the CII 158-μm line but not by the 3P lines of OI.
Lashgari, Maryam; Lee, Hian Kee
2014-11-21
In the current study, a simple, fast and efficient combination of protein precipitation and micro-solid phase extraction (μ-SPE) followed by liquid chromatography-triple quadrupole tandem mass spectrometry (LC-MS/MS) was developed for the determination of perfluorinated carboxylic acids (PFCAs) in fish fillet. Ten PFCAs with different hydrocarbon chain lengths (C5-C14) were analysed simultaneously using this method. Protein precipitation by acetonitrile and μ-SPE by surfactant-incorporated ordered mesoporous silica were applied to the extraction and concentration of the PFCAs as well as for removal of interferences. Determination of the PFCAs was carried out by LC-MS/MS in negative electrospray ionization mode. MS/MS parameters were optimized for multiple reaction monitoring of the analytes. (13)C mass labelled PFOA as a stable-isotopic internal standard, was used for calibration. The detection limits of the method ranged from 0.97 ng/g to 2.7 ng/g, with a relative standard deviation of between 5.4 and 13.5. The recoveries were evaluated for each analyte and were ranged from 77% to 120%. The t-test at 95% confidence level showed that for all the analytes, the relative recoveries did not depend on their concentrations in the explored concentration range. The effect of the matrix on MS signals (suppression or enhancement) was also evaluated. Contamination at low levels was detected for some analytes in the fish samples. The protective role of the polypropylene membrane used in μ-SPE in the elimination of matrix effects was evaluated by parallel experiments in classical dispersive solid phase extraction. The results evidently showed that the polypropylene membrane was significantly effective in reducing matrix effects. Copyright © 2014 Elsevier B.V. All rights reserved.
Smalley, James; Marino, Anthony M; Xin, Baomin; Olah, Timothy; Balimane, Praveen V
2007-07-01
Caco-2 cells, the human colon carcinoma cells, are typically used for screening compounds for their permeability characteristics and P-glycoprotein (P-gp) interaction potential during discovery and development. The P-gp inhibition of test compounds is assessed by performing bi-directional permeability studies with digoxin, a well established P-gp substrate probe. Studies performed with digoxin alone as well as digoxin in presence of test compounds as putative inhibitors constitute the P-gp inhibition assay used to assess the potential liability of discovery compounds. Radiolabeled (3)H-digoxin is commonly used in such studies followed by liquid scintillation counting. This manuscript describes the development of a sensitive, accurate, and reproducible LC-MS/MS method for analysis of digoxin and its internal standard digitoxin using an on-line extraction turbulent flow chromatography coupled to tandem mass spectrometric detection that is amendable to high throughput with use of 96-well plates. The standard curve for digoxin was linear between 10 nM and 5000 nM with regression coefficient (R(2)) of 0.99. The applicability and reliability of the analysis method was evaluated by successful demonstration of efflux ratio (permeability B to A over permeability A to B) greater than 10 for digoxin in Caco-2 cells. Additional evaluations were performed on 13 marketed compounds by conducting inhibition studies in Caco-2 cells using classical P-gp inhibitors (ketoconazole, cyclosporin, verapamil, quinidine, saquinavir etc.) and comparing the results to historical data with (3)H-digoxin studies. Similarly, P-gp inhibition studies with LC-MS/MS analytical method for digoxin were also performed for 21 additional test compounds classified as negative, moderate, and potent P-gp inhibitors spanning multiple chemo types and results compared with the historical P-gp inhibition data from the (3)H-digoxin studies. A very good correlation coefficient (R(2)) of 0.89 between the results from the two analytical methods affords an attractive LC-MS/MS analytical option for labs that need to conduct the P-gp inhibition assay without using radiolabeled compounds.
Barker, John R; Martinez, Antonio
2018-04-04
Efficient analytical image charge models are derived for the full spatial variation of the electrostatic self-energy of electrons in semiconductor nanostructures that arises from dielectric mismatch using semi-classical analysis. The methodology provides a fast, compact and physically transparent computation for advanced device modeling. The underlying semi-classical model for the self-energy has been established and validated during recent years and depends on a slight modification of the macroscopic static dielectric constants for individual homogeneous dielectric regions. The model has been validated for point charges as close as one interatomic spacing to a sharp interface. A brief introduction to image charge methodology is followed by a discussion and demonstration of the traditional failure of the methodology to derive the electrostatic potential at arbitrary distances from a source charge. However, the self-energy involves the local limit of the difference between the electrostatic Green functions for the full dielectric heterostructure and the homogeneous equivalent. It is shown that high convergence may be achieved for the image charge method for this local limit. A simple re-normalisation technique is introduced to reduce the number of image terms to a minimum. A number of progressively complex 3D models are evaluated analytically and compared with high precision numerical computations. Accuracies of 1% are demonstrated. Introducing a simple technique for modeling the transition of the self-energy between disparate dielectric structures we generate an analytical model that describes the self-energy as a function of position within the source, drain and gated channel of a silicon wrap round gate field effect transistor on a scale of a few nanometers cross-section. At such scales the self-energies become large (typically up to ~100 meV) close to the interfaces as well as along the channel. The screening of a gated structure is shown to reduce the self-energy relative to un-gated nanowires.
NASA Astrophysics Data System (ADS)
Barker, John R.; Martinez, Antonio
2018-04-01
Efficient analytical image charge models are derived for the full spatial variation of the electrostatic self-energy of electrons in semiconductor nanostructures that arises from dielectric mismatch using semi-classical analysis. The methodology provides a fast, compact and physically transparent computation for advanced device modeling. The underlying semi-classical model for the self-energy has been established and validated during recent years and depends on a slight modification of the macroscopic static dielectric constants for individual homogeneous dielectric regions. The model has been validated for point charges as close as one interatomic spacing to a sharp interface. A brief introduction to image charge methodology is followed by a discussion and demonstration of the traditional failure of the methodology to derive the electrostatic potential at arbitrary distances from a source charge. However, the self-energy involves the local limit of the difference between the electrostatic Green functions for the full dielectric heterostructure and the homogeneous equivalent. It is shown that high convergence may be achieved for the image charge method for this local limit. A simple re-normalisation technique is introduced to reduce the number of image terms to a minimum. A number of progressively complex 3D models are evaluated analytically and compared with high precision numerical computations. Accuracies of 1% are demonstrated. Introducing a simple technique for modeling the transition of the self-energy between disparate dielectric structures we generate an analytical model that describes the self-energy as a function of position within the source, drain and gated channel of a silicon wrap round gate field effect transistor on a scale of a few nanometers cross-section. At such scales the self-energies become large (typically up to ~100 meV) close to the interfaces as well as along the channel. The screening of a gated structure is shown to reduce the self-energy relative to un-gated nanowires.
Effective scheme of photolysis of GFP in live cell as revealed with confocal fluorescence microscopy
NASA Astrophysics Data System (ADS)
Glazachev, Yu I.; Orlova, D. Y.; Řezníčková, P.; Bártová, E.
2018-05-01
We proposed an effective kinetics scheme of photolysis of green fluorescent protein (GFP) observed in live cells with a commercial confocal fluorescence microscope. We investigated the photolysis of GFP-tagged heterochromatin protein, HP1β-GFP, in live nucleus with the pulse position modulation approach, which has several advantages over the classical pump-and-probe method. At the basis of the proposed scheme lies a process of photoswitching from the native fluorescence state to the intermediate fluorescence state, which has a lower fluorescence yield and recovers back to native state in the dark. This kinetics scheme includes four effective parameters (photoswitching, reverse switching, photodegradation rate constants, and relative brightness of the intermediate state) and covers the time scale from dozens of milliseconds to minutes of the experimental fluorescence kinetics. Additionally, the applicability of the scheme was demonstrated in the cases of continuous irradiation and the classical pump-and-probe approach using numerical calculations and analytical solutions. An interesting finding of experimental data analysis was that the overall photodegradation of GFP proceeds dominantly from the intermediate state, and demonstrated approximately the second-order reaction versus irradiation power. As a practical example, the proposed scheme elucidates the artifacts of fluorescence recovery after the photobleaching method, and allows us to propose some suggestions on how to diminish them.
Glazachev, Yu I; Orlova, D Y; Řezníčková, P; Bártová, E
2018-03-23
We proposed an effective kinetics scheme of photolysis of green fluorescent protein (GFP) observed in live cells with a commercial confocal fluorescence microscope. We investigated the photolysis of GFP-tagged heterochromatin protein, HP1β-GFP, in live nucleus with the pulse position modulation approach, which has several advantages over the classical pump-and-probe method. At the basis of the proposed scheme lies a process of photoswitching from the native fluorescence state to the intermediate fluorescence state, which has a lower fluorescence yield and recovers back to native state in the dark. This kinetics scheme includes four effective parameters (photoswitching, reverse switching, photodegradation rate constants, and relative brightness of the intermediate state) and covers the time scale from dozens of milliseconds to minutes of the experimental fluorescence kinetics. Additionally, the applicability of the scheme was demonstrated in the cases of continuous irradiation and the classical pump-and-probe approach using numerical calculations and analytical solutions. An interesting finding of experimental data analysis was that the overall photodegradation of GFP proceeds dominantly from the intermediate state, and demonstrated approximately the second-order reaction versus irradiation power. As a practical example, the proposed scheme elucidates the artifacts of fluorescence recovery after the photobleaching method, and allows us to propose some suggestions on how to diminish them.
[Analysis of triterpenoids in Ganoderma lucidum by microwave-assisted continuous extraction].
Lu, Yan-fang; An, Jing; Jiang, Ye
2015-04-01
For further improving the extraction efficiency of microwave extraction, a microwave-assisted contijuous extraction (MACE) device has been designed and utilized. By contrasting with the traditional methods, the characteristics and extraction efficiency of MACE has also been studied. The method was validated by the analysis of the triterpenoids in Ganoderma lucidum. The extraction conditions of MACE were: using 95% ethanol as solvent, microwave power 200 W and radiation time 14.5 min (5 cycles). The extraction results were subsequently compared with traditional heat reflux extraction ( HRE) , soxhlet extraction (SE), ultrasonic extraction ( UE) as well as the conventional microwave extraction (ME). For triterpenoids, the two methods based on the microwaves (ME and MACE) were in general capable of finishing the extraction in 10, 14.5 min, respectively, while other methods should consume 60 min and even more than 100 min. Additionally, ME can produce comparable extraction results as the classical HRE and higher extraction yield than both SE and UE, however, notably lower extraction yield than MASE. More importantly, the purity of the crud extract by MACE is far better than the other methods. MACE can effectively combine the advantages of microwave extraction and soxhlet extraction, thus enabling a more complete extraction of the analytes of TCMs in comparison with ME. And therefore makes the analytic result more accurate. It provides a novel, high efficient, rapid and reliable pretreatment technique for the analysis of TCMs, and it could potentially be extended to ingredient preparation or extracting techniques of TCMs.
Deport, Coralie; Ratel, Jérémy; Berdagué, Jean-Louis; Engel, Erwan
2006-05-26
The current work describes a new method, the comprehensive combinatory standard correction (CCSC), for the correction of instrumental signal drifts in GC-MS systems. The method consists in analyzing together with the products of interest a mixture of n selected internal standards, and in normalizing the peak area of each analyte by the sum of standard areas and then, select among the summation operator sigma(p = 1)(n)C(n)p possible sums, the sum that enables the best product discrimination. The CCSC method was compared with classical techniques of data pre-processing like internal normalization (IN) or single standard correction (SSC) on their ability to correct raw data from the main drifts occurring in a dynamic headspace-gas chromatography-mass spectrometry system. Three edible oils with closely similar compositions in volatile compounds were analysed using a device which performance was modulated by using new or used dynamic headspace traps and GC-columns, and by modifying the tuning of the mass spectrometer. According to one-way ANOVA, the CCSC method increased the number of analytes discriminating the products (31 after CCSC versus 25 with raw data or after IN and 26 after SSC). Moreover, CCSC enabled a satisfactory discrimination of the products irrespective of the drifts. In a factorial discriminant analysis, 100% of the samples (n = 121) were well-classified after CCSC versus 45% for raw data, 90 and 93%, respectively after IN and SSC.
Quantum Theories of Self-Localization
NASA Astrophysics Data System (ADS)
Bernstein, Lisa Joan
In the classical dynamics of coupled oscillator systems, nonlinearity leads to the existence of stable solutions in which energy remains localized for all time. Here the quantum-mechanical counterpart of classical self-localization is investigated in the context of two model systems. For these quantum models, the terms corresponding to classical nonlinearities modify a subset of the stationary quantum states to be particularly suited to the creation of nonstationary wavepackets that localize energy for long times. The first model considered here is the Quantized Discrete Self-Trapping model (QDST), a system of anharmonic oscillators with linear dispersive coupling used to model local modes of vibration in polyatomic molecules. A simple formula is derived for a particular symmetry class of QDST systems which gives an analytic connection between quantum self-localization and classical local modes. This formula is also shown to be useful in the interpretation of the vibrational spectra of some molecules. The second model studied is the Frohlich/Einstein Dimer (FED), a two-site system of anharmonically coupled oscillators based on the Frohlich Hamiltonian and motivated by the theory of Davydov solitons in biological protein. The Born-Oppenheimer perturbation method is used to obtain approximate stationary state wavefunctions with error estimates for the FED at the first excited level. A second approach is used to reduce the first excited level FED eigenvalue problem to a system of ordinary differential equations. A simple theory of low-energy self-localization in the FED is discussed. The quantum theories of self-localization in the intrinsic QDST model and the extrinsic FED model are compared.
Hughes, Sarah A; Mahaffey, Ashley; Shore, Bryon; Baker, Josh; Kilgour, Bruce; Brown, Christine; Peru, Kerry M; Headley, John V; Bailey, Howard C
2017-11-01
Previous assessments of oil sands process-affected water (OSPW) toxicity were hampered by lack of high-resolution analytical analysis, use of nonstandard toxicity methods, and variability between OSPW samples. We integrated ultrahigh-resolution mass spectrometry with a toxicity identification evaluation (TIE) approach to quantitatively identify the primary cause of acute toxicity of OSPW to rainbow trout (Oncorhynchus mykiss). The initial characterization of OSPW toxicity indicated that toxicity was associated with nonpolar organic compounds, and toxicant(s) were further isolated within a range of discrete methanol fractions that were then subjected to Orbitrap mass spectrometry to evaluate the contribution of naphthenic acid fraction compounds to toxicity. The results showed that toxicity was attributable to classical naphthenic acids, with the potency of individual compounds increasing as a function of carbon number. Notably, the mass of classical naphthenic acids present in OSPW was dominated by carbon numbers ≤16; however, toxicity was largely a function of classical naphthenic acids with ≥17 carbons. Additional experiments found that acute toxicity of the organic fraction was similar when tested at conductivities of 400 and 1800 μmhos/cm and that rainbow trout fry were more sensitive to the organic fraction than larval fathead minnows (Pimephales promelas). Collectively, the results will aid in developing treatment goals and targets for removal of OSPW toxicity in water return scenarios both during operations and on mine closure. Environ Toxicol Chem 2017;36:3148-3157. © 2017 SETAC. © 2017 SETAC.
Quantum-classical correspondence in the vicinity of periodic orbits
NASA Astrophysics Data System (ADS)
Kumari, Meenu; Ghose, Shohini
2018-05-01
Quantum-classical correspondence in chaotic systems is a long-standing problem. We describe a method to quantify Bohr's correspondence principle and calculate the size of quantum numbers for which we can expect to observe quantum-classical correspondence near periodic orbits of Floquet systems. Our method shows how the stability of classical periodic orbits affects quantum dynamics. We demonstrate our method by analyzing quantum-classical correspondence in the quantum kicked top (QKT), which exhibits both regular and chaotic behavior. We use our correspondence conditions to identify signatures of classical bifurcations even in a deep quantum regime. Our method can be used to explain the breakdown of quantum-classical correspondence in chaotic systems.
Ślączka-Wilk, Magdalena M; Włodarczyk, Elżbieta; Kaleniecka, Aleksandra; Zarzycki, Paweł K
2017-07-01
There is increasing interest in the development of simple analytical systems enabling the fast screening of target components in complex samples. A number of newly invented protocols are based on quasi separation techniques involving microfluidic paper-based analytical devices and/or micro total analysis systems. Under such conditions, the quantification of target components can be performed mainly due to selective detection. The main goal of this paper is to demonstrate that miniaturized planar chromatography has the capability to work as an efficient separation and quantification tool for the analysis of multiple targets within complex environmental samples isolated and concentrated using an optimized SPE method. In particular, we analyzed various samples collected from surface water ecosystems (lakes, rivers, and the Baltic Sea of Middle Pomerania in the northern part of Poland) in different seasons, as well as samples collected during key wastewater technological processes (originating from the "Jamno" wastewater treatment plant in Koszalin, Poland). We documented that the multiple detection of chromatographic spots on RP-18W microplates-under visible light, fluorescence, and fluorescence quenching conditions, and using the visualization reagent phosphomolybdic acid-enables fast and robust sample classification. The presented data reveal that the proposed micro-TLC system is useful, inexpensive, and can be considered as a complementary method for the fast control of treated sewage water discharged by a municipal wastewater treatment plant, particularly for the detection of low-molecular mass micropollutants with polarity ranging from estetrol to progesterone, as well as chlorophyll-related dyes. Due to the low consumption of mobile phases composed of water-alcohol binary mixtures (less than 1 mL/run for the simultaneous separation of up to nine samples), this method can be considered an environmentally friendly and green chemistry analytical tool. The described analytical protocol can be complementary to those involving classical column chromatography (HPLC) or various planar microfluidic devices.
Quantum kinetic theory of the filamentation instability
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bret, A.; Haas, F.
2011-07-15
The quantum electromagnetic dielectric tensor for a multi-species plasma is re-derived from the gauge-invariant Wigner-Maxwell system and presented under a form very similar to the classical one. The resulting expression is then applied to a quantum kinetic theory of the electromagnetic filamentation instability. Comparison is made with the quantum fluid theory including a Bohm pressure term and with the cold classical plasma result. A number of analytical expressions are derived for the cutoff wave vector, the largest growth rate, and the most unstable wave vector.
Ultrasonic waves in classical gases
NASA Astrophysics Data System (ADS)
Magner, A. G.; Gorenstein, M. I.; Grygoriev, U. V.
2017-12-01
The velocity and absorption coefficient for the plane sound waves in a classical gas are obtained by solving the Boltzmann kinetic equation, which describes the reaction of the single-particle distribution function to a periodic external field. Within the linear response theory, the nonperturbative dispersion equation valid for all sound frequencies is derived and solved numerically. The results are in agreement with the approximate analytical solutions found for both the frequent- and rare-collision regimes. These results are also in qualitative agreement with the experimental data for ultrasonic waves in dilute gases.
NASA Astrophysics Data System (ADS)
Morozovska, A. N.; Eliseev, E. A.; Balke, N.; Kalinin, S. V.
2010-09-01
Electrochemical insertion-deintercalation reactions are typically associated with significant change in molar volume of the host compound. This strong coupling between ionic currents and strains underpins image formation mechanisms in electrochemical strain microscopy (ESM), and allows exploring the tip-induced electrochemical processes locally. Here we analyze the signal formation mechanism in ESM, and develop the analytical description of operation in frequency and time domains. The ESM spectroscopic modes are compared to classical electrochemical methods including potentiostatic and galvanostatic intermittent titration, and electrochemical impedance spectroscopy. This analysis illustrates the feasibility of spatially resolved studies of Li-ion dynamics on the sub-10-nm level using electromechanical detection.
Entanglement entropy of dispersive media from thermodynamic entropy in one higher dimension.
Maghrebi, M F; Reid, M T H
2015-04-17
A dispersive medium becomes entangled with zero-point fluctuations in the vacuum. We consider an arbitrary array of material bodies weakly interacting with a quantum field and compute the quantum mutual information between them. It is shown that the mutual information in D dimensions can be mapped to classical thermodynamic entropy in D+1 dimensions. As a specific example, we compute the mutual information both analytically and numerically for a range of separation distances between two bodies in D=2 dimensions and find a logarithmic correction to the area law at short separations. A key advantage of our method is that it allows the strong subadditivity property to be easily verified.
Harmonic growth of spherical Rayleigh-Taylor instability in weakly nonlinear regime
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Wanhai; LHD, Institute of Mechanics, Chinese Academy of Sciences, Beijing 100190; Chen, Yulian
Harmonic growth in classical Rayleigh-Taylor instability (RTI) on a spherical interface is analytically investigated using the method of the parameter expansion up to the third order. Our results show that the amplitudes of the first four harmonics will recover those in planar RTI as the interface radius tends to infinity compared against the initial perturbation wavelength. The initial radius dramatically influences the harmonic development. The appearance of the second-order feedback to the initial unperturbed interface (i.e., the zeroth harmonic) makes the interface move towards the spherical center. For these four harmonics, the smaller the initial radius is, the faster theymore » grow.« less
NASA Astrophysics Data System (ADS)
Sethi, M.; Sharma, A.; Vasishth, A.
2017-05-01
The present paper deals with the mathematical modeling of the propagation of torsional surface waves in a non-homogeneous transverse isotropic elastic half-space under a rigid layer. Both rigidities and density of the half-space are assumed to vary inversely linearly with depth. Separation of variable method has been used to get the analytical solutions for the dispersion equation of the torsional surface waves. Also, the effects of nonhomogeneities on the phase velocity of torsional surface waves have been shown graphically. Also, dispersion equations have been derived for some particular cases, which are in complete agreement with some classical results.
NASA Astrophysics Data System (ADS)
Ouyang, Chaojun; He, Siming; Xu, Qiang; Luo, Yu; Zhang, Wencheng
2013-03-01
A two-dimensional mountainous mass flow dynamic procedure solver (Massflow-2D) using the MacCormack-TVD finite difference scheme is proposed. The solver is implemented in Matlab on structured meshes with variable computational domain. To verify the model, a variety of numerical test scenarios, namely, the classical one-dimensional and two-dimensional dam break, the landslide in Hong Kong in 1993 and the Nora debris flow in the Italian Alps in 2000, are executed, and the model outputs are compared with published results. It is established that the model predictions agree well with both the analytical solution as well as the field observations.
Cluster sizes in a classical Lennard-Jones chain
NASA Astrophysics Data System (ADS)
Lee-Dadswell, G. R.; Barrett, Nicholas; Power, Michael
2017-09-01
The definitions of breaks and clusters in a one-dimensional chain in equilibrium are discussed. Analytical expressions are obtained for the expected cluster length, 〈K 〉 , as a function of temperature and pressure in a one-dimensional Lennard-Jones chain. These expressions are compared with results from molecular dynamics simulations. It is found that 〈K 〉 increases exponentially with β =1 /kBT and with pressure, P in agreement with previous results in the literature. A method is illustrated for using 〈K 〉(β ,P ) to generate a "phase diagram" for the Lennard-Jones chain. Some implications for the study of heat transport in Lennard-Jones chains are discussed.
Estimating Tree Height-Diameter Models with the Bayesian Method
Duan, Aiguo; Zhang, Jianguo; Xiang, Congwei
2014-01-01
Six candidate height-diameter models were used to analyze the height-diameter relationships. The common methods for estimating the height-diameter models have taken the classical (frequentist) approach based on the frequency interpretation of probability, for example, the nonlinear least squares method (NLS) and the maximum likelihood method (ML). The Bayesian method has an exclusive advantage compared with classical method that the parameters to be estimated are regarded as random variables. In this study, the classical and Bayesian methods were used to estimate six height-diameter models, respectively. Both the classical method and Bayesian method showed that the Weibull model was the “best” model using data1. In addition, based on the Weibull model, data2 was used for comparing Bayesian method with informative priors with uninformative priors and classical method. The results showed that the improvement in prediction accuracy with Bayesian method led to narrower confidence bands of predicted value in comparison to that for the classical method, and the credible bands of parameters with informative priors were also narrower than uninformative priors and classical method. The estimated posterior distributions for parameters can be set as new priors in estimating the parameters using data2. PMID:24711733
Estimating tree height-diameter models with the Bayesian method.
Zhang, Xiongqing; Duan, Aiguo; Zhang, Jianguo; Xiang, Congwei
2014-01-01
Six candidate height-diameter models were used to analyze the height-diameter relationships. The common methods for estimating the height-diameter models have taken the classical (frequentist) approach based on the frequency interpretation of probability, for example, the nonlinear least squares method (NLS) and the maximum likelihood method (ML). The Bayesian method has an exclusive advantage compared with classical method that the parameters to be estimated are regarded as random variables. In this study, the classical and Bayesian methods were used to estimate six height-diameter models, respectively. Both the classical method and Bayesian method showed that the Weibull model was the "best" model using data1. In addition, based on the Weibull model, data2 was used for comparing Bayesian method with informative priors with uninformative priors and classical method. The results showed that the improvement in prediction accuracy with Bayesian method led to narrower confidence bands of predicted value in comparison to that for the classical method, and the credible bands of parameters with informative priors were also narrower than uninformative priors and classical method. The estimated posterior distributions for parameters can be set as new priors in estimating the parameters using data2.
NASA Astrophysics Data System (ADS)
Ebrahimi-Nejad, Salman; Boreiry, Mahya
2018-03-01
The bending, buckling and vibrational behavior of size-dependent piezoelectric nanobeams under thermo-magneto-mechano-electrical environment are investigated by performing a parametric study, in the presence of surface effects. The Gurtin-Murdoch surface elasticity and Eringen’s nonlocal elasticity theories are applied in the framework of Euler–Bernoulli beam theory to obtain a new non-classical size-dependent beam model for dynamic and static analyses of piezoelectric nanobeams. In order to satisfy the surface equilibrium equations, cubic variation of stress with beam thickness is assumed for the bulk stress component which is neglected in classical beam models. Results are obtained for clamped - simply-supported (C-S) and simply-supported - simply-supported (S-S) boundary conditions using a proposed analytical solution method. Numerical examples are presented to demonstrate the effects of length, surface effects, nonlocal parameter and environmental changes (temperature, magnetic field and external voltage) on deflection, critical buckling load and natural frequency for each boundary condition. Results of this study can serve as benchmarks for the design and analysis of nanostructures of magneto-electro-thermo-elastic materials.
NASA Astrophysics Data System (ADS)
Iriki, Y.; Kikuchi, Y.; Imai, M.; Itoh, A.
2011-11-01
Double-differential ionization cross sections (DDCSs) of vapor-phase adenine molecules (C5H5N5) by 0.5- and 2.0-MeV proton impact have been measured by the electron spectroscopy method. Electrons ejected from adenine were analyzed by a 45∘ parallel-plate electrostatic spectrometer over an energy range of 1.0-1000 eV at emission angles from 15∘ to 165∘. Single-differential cross sections (SDCSs) and total ionization cross sections (TICSs) were also deduced. It was found from the Platzman plot, defined as SDCSs divided by the classical Rutherford knock-on cross sections per target electron, that the SDCSs at higher electron energies are proportional to the total number of valence electrons (50) of adenine, while those at low-energy electrons are highly enhanced due to dipole and higher-order interactions. The present results of TICS are in fairly good agreement with recent classical trajectory Monte Carlo calculations, and moreover, a simple analytical formula gives nearly equivalent cross sections in magnitude at the incident proton energies investigated.
NASA Astrophysics Data System (ADS)
Ih Choi, Woon; Kim, Kwiseon; Narumanchi, Sreekant
2012-09-01
Thermal resistance between layers impedes effective heat dissipation in electronics packaging applications. Thermal conductance for clean and disordered interfaces between silicon (Si) and aluminum (Al) was computed using realistic Si/Al interfaces and classical molecular dynamics with the modified embedded atom method potential. These realistic interfaces, which include atomically clean as well as disordered interfaces, were obtained using density functional theory. At 300 K, the magnitude of interfacial conductance due to phonon-phonon scattering obtained from the classical molecular dynamics simulations was approximately five times higher than the conductance obtained using analytical elastic diffuse mismatch models. Interfacial disorder reduced the thermal conductance due to increased phonon scattering with respect to the atomically clean interface. Also, the interfacial conductance, due to electron-phonon scattering at the interface, was greater than the conductance due to phonon-phonon scattering. This indicates that phonon-phonon scattering is the bottleneck for interfacial transport at the semiconductor/metal interfaces. The molecular dynamics modeling predictions for interfacial thermal conductance for a 5-nm disordered interface between Si/Al were in-line with recent experimental data in the literature.
A new method for multi-bit and qudit transfer based on commensurate waveguide arrays
NASA Astrophysics Data System (ADS)
Petrovic, J.; Veerman, J. J. P.
2018-05-01
The faithful state transfer is an important requirement in the construction of classical and quantum computers. While the high-speed transfer is realized by optical-fibre interconnects, its implementation in integrated optical circuits is affected by cross-talk. The cross-talk between densely packed optical waveguides limits the transfer fidelity and distorts the signal in each channel, thus severely impeding the parallel transfer of states such as classical registers, multiple qubits and qudits. Here, we leverage on the suitably engineered cross-talk between waveguides to achieve the parallel transfer on optical chip. Waveguide coupling coefficients are designed to yield commensurate eigenvalues of the array and hence, periodic revivals of the input state. While, in general, polynomially complex, the inverse eigenvalue problem permits analytic solutions for small number of waveguides. We present exact solutions for arrays of up to nine waveguides and use them to design realistic buses for multi-(qu)bit and qudit transfer. Advantages and limitations of the proposed solution are discussed in the context of available fabrication techniques.
Classical methods and modern analysis for studying fungal diversity
John Paul Schmit
2005-01-01
In this chapter, we examine the use of classical methods to study fungal diversity. Classical methods rely on the direct observation of fungi, rather than sampling fungal DNA. We summarize a wide variety of classical methods, including direct sampling of fungal fruiting bodies, incubation of substrata in moist chambers, culturing of endophytes, and particle plating. We...
Classical Methods and Modern Analysis for Studying Fungal Diversity
J. P. Schmit; D. J. Lodge
2005-01-01
In this chapter, we examine the use of classical methods to study fungal diversity. Classical methods rely on the direct observation of fungi, rather than sampling fungal DNA. We summarize a wide variety of classical methods, including direct sampling of fungal fruiting bodies, incubation of substrata in moist chambers, culturing of endophytes, and particle plating. We...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zou, Ling; Zhao, Haihua; Kim, Seung Jun
In this study, the classical Welander’s oscillatory natural circulation problem is investigated using high-order numerical methods. As originally studied by Welander, the fluid motion in a differentially heated fluid loop can exhibit stable, weakly instable, and strongly instable modes. A theoretical stability map has also been originally derived from the stability analysis. Numerical results obtained in this paper show very good agreement with Welander’s theoretical derivations. For stable cases, numerical results from both the high-order and low-order numerical methods agree well with the non-dimensional flow rate analytically derived. The high-order numerical methods give much less numerical errors compared to themore » low-order methods. For stability analysis, the high-order numerical methods could perfectly predict the stability map, while the low-order numerical methods failed to do so. For all theoretically unstable cases, the low-order methods predicted them to be stable. The result obtained in this paper is a strong evidence to show the benefits of using high-order numerical methods over the low-order ones, when they are applied to simulate natural circulation phenomenon that has already gain increasing interests in many future nuclear reactor designs.« less
Automation of a spectrophotometric method for measuring L -carnitine in human blood serum.
Galan, A; Padros, A; Arambarri, M; Martin, S
1998-01-01
A spectrometric method for the determination of L-carnitine has been developed based on the reaction of the 5,5' dithiobis-(2-nitrobenzoic) acid (DTNB) and adapted to a Technicon RA-2000 automatic analyser Química Farmacéutica Bayer, S.A.). The detection limit of the method is 13.2 mumol/l, with a measurement interval ranging from 30 to 320 mumoll1. Imprecision and accuracy are good even at levels close to the detection limit (coeffcient of variation of 5.4% for within-run imprecision for a concentration of 35 mumol/l). A good correlation was observed between the method studied and the radiometric method. The method evaluated has suffcient analytical sensitivity to diagnose carnitine deficiencies. The short time period required for sample processing (30 samples in 40min), the simple methodology and apparatus, the ease of personnel training and the low cost of the reagents make this method a good alternative to the classical radiometric method for evaluating serum L-carnitine in clinical laboratories without radioactive installations.
Automation of a spectrophotometric method for measuring L -carnitine in human blood serum
Galan, Amparo; Padros, Anna; Arambarri, Marta; Martin, Silvia
1998-01-01
A spectrometric method for the determination of L-carnitine has been developed based on the reaction of the 5, 5 ′ dithiobis-(2-nitrobenzoic) acid (DTNB) and adapted to a Technicon RA-2000 automatic analyser Química Farmacéutica Bayer, S.A.). The detection limit of the method is 13.2 μmol/l, with a measurement interval ranging from 30 to 320 μmoll1. Imprecision and accuracy are good even at levels close to the detection limit (coeffcient of variation of 5.4% for within-run imprecision for a concentration of 35 μmol/l). A good correlation was observed between the method studied and the radiometric method. The method evaluated has suffcient analytical sensitivity to diagnose carnitine deficiencies. The short time period required for sample processing (30 samples in 40min), the simple methodology and apparatus, the ease of personnel training and the low cost of the reagents make this method a good alternative to the classical radiometric method for evaluating serum L-carnitine in clinical laboratories without radioactive installations. PMID:18924818
Two Dimensional Processing Of Speech And Ecg Signals Using The Wigner-Ville Distribution
NASA Astrophysics Data System (ADS)
Boashash, Boualem; Abeysekera, Saman S.
1986-12-01
The Wigner-Ville Distribution (WVD) has been shown to be a valuable tool for the analysis of non-stationary signals such as speech and Electrocardiogram (ECG) data. The one-dimensional real data are first transformed into a complex analytic signal using the Hilbert Transform and then a 2-dimensional image is formed using the Wigner-Ville Transform. For speech signals, a contour plot is determined and used as a basic feature. for a pattern recognition algorithm. This method is compared with the classical Short Time Fourier Transform (STFT) and is shown, to be able to recognize isolated words better in a noisy environment. The same method together with the concept of instantaneous frequency of the signal is applied to the analysis of ECG signals. This technique allows one to classify diseased heart-beat signals. Examples are shown.
An inelastic analysis of a welded aluminum joint
NASA Astrophysics Data System (ADS)
Vaughan, Robert E.; Schonberg, William P.
1995-02-01
Butt weld joints are most commonly designed into pressure vessels by using weld material properties that are determined from a tensile test. These properties are provided to the stress analyst in the form of a stress vs strain diagram. Variations in properties through the thickness of the weld and along the width of the weld have been suspect but not explored because of inaccessibility and cost. The purpose of this study is to investigate analytical and computational methods used for analysis of multiple pass aluminum 2219-T87 butt welds. The weld specimens are analyzed using classical plasticity theory to provide a basis for modeling the inelastic properties in a finite element solution. The results of the analysis are compared to experimental data to determine the weld behavior and the accuracy of currently available numerical prediction methods.
Numerical Modeling of Poroelastic-Fluid Systems Using High-Resolution Finite Volume Methods
NASA Astrophysics Data System (ADS)
Lemoine, Grady
Poroelasticity theory models the mechanics of porous, fluid-saturated, deformable solids. It was originally developed by Maurice Biot to model geophysical problems, such as seismic waves in oil reservoirs, but has also been applied to modeling living bone and other porous media. Poroelastic media often interact with fluids, such as in ocean bottom acoustics or propagation of waves from soft tissue into bone. This thesis describes the development and testing of high-resolution finite volume numerical methods, and simulation codes implementing these methods, for modeling systems of poroelastic media and fluids in two and three dimensions. These methods operate on both rectilinear grids and logically rectangular mapped grids. To allow the use of these methods, Biot's equations of poroelasticity are formulated as a first-order hyperbolic system with a source term; this source term is incorporated using operator splitting. Some modifications are required to the classical high-resolution finite volume method. Obtaining correct solutions at interfaces between poroelastic media and fluids requires a novel transverse propagation scheme and the removal of the classical second-order correction term at the interface, and in three dimensions a new wave limiting algorithm is also needed to correctly limit shear waves. The accuracy and convergence rates of the methods of this thesis are examined for a variety of analytical solutions, including simple plane waves, reflection and transmission of waves at an interface between different media, and scattering of acoustic waves by a poroelastic cylinder. Solutions are also computed for a variety of test problems from the computational poroelasticity literature, as well as some original test problems designed to mimic possible applications for the simulation code.
First-order reliability application and verification methods for semistatic structures
NASA Astrophysics Data System (ADS)
Verderaime, V.
1994-11-01
Escalating risks of aerostructures stimulated by increasing size, complexity, and cost should no longer be ignored in conventional deterministic safety design methods. The deterministic pass-fail concept is incompatible with probability and risk assessments; stress audits are shown to be arbitrary and incomplete, and the concept compromises the performance of high-strength materials. A reliability method is proposed that combines first-order reliability principles with deterministic design variables and conventional test techniques to surmount current deterministic stress design and audit deficiencies. Accumulative and propagation design uncertainty errors are defined and appropriately implemented into the classical safety-index expression. The application is reduced to solving for a design factor that satisfies the specified reliability and compensates for uncertainty errors, and then using this design factor as, and instead of, the conventional safety factor in stress analyses. The resulting method is consistent with current analytical skills and verification practices, the culture of most designers, and the development of semistatic structural designs.
NASA Technical Reports Server (NTRS)
Shollenberger, C. A.; Smyth, D. N.
1978-01-01
A nonlinear, nonplanar three dimensional jet flap analysis, applicable to the ground effect problem, is presented. Lifting surface methodology is developed for a wing with arbitrary planform operating in an inviscid and incompressible fluid. The classical, infintely thin jet flap model is employed to simulate power induced effects. An iterative solution procedure is applied within the analysis to successively approximate the jet shape until a converged solution is obtained which closely satisfies jet and wing boundary conditions. Solution characteristics of the method are discussed and example results are presented for unpowered, basic powered and complex powered configurations. Comparisons between predictions of the present method and experimental measurements indicate that the improvement of the jet with the ground plane is important in the analyses of powered lift systems operating in ground proximity. Further development of the method is suggested in the areas of improved solution convergence, more realistic modeling of jet impingement and calculation efficiency enhancements.
Dabkiewicz, Vanessa Emídio; de Mello Pereira Abrantes, Shirley; Cassella, Ricardo Jorgensen
2018-08-05
Near infrared spectroscopy (NIR) with diffuse reflectance associated to multivariate calibration has as main advantage the replacement of the physical separation of interferents by the mathematical separation of their signals, rapidly with no need for reagent consumption, chemical waste production or sample manipulation. Seeking to optimize quality control analyses, this spectroscopic analytical method was shown to be a viable alternative to the classical Kjeldahl method for the determination of protein nitrogen in yellow fever vaccine. The most suitable multivariate calibration was achieved by the partial least squares method (PLS) with multiplicative signal correction (MSC) treatment and data mean centering (MC), using a minimum number of latent variables (LV) equal to 1, with the lower value of the square root of the mean squared prediction error (0.00330) associated with the highest percentage value (91%) of samples. Accuracy ranged 95 to 105% recovery in the 4000-5184 cm -1 region. Copyright © 2018 Elsevier B.V. All rights reserved.
Zhang, Ling
2017-01-01
The main purpose of this paper is to investigate the strong convergence and exponential stability in mean square of the exponential Euler method to semi-linear stochastic delay differential equations (SLSDDEs). It is proved that the exponential Euler approximation solution converges to the analytic solution with the strong order [Formula: see text] to SLSDDEs. On the one hand, the classical stability theorem to SLSDDEs is given by the Lyapunov functions. However, in this paper we study the exponential stability in mean square of the exact solution to SLSDDEs by using the definition of logarithmic norm. On the other hand, the implicit Euler scheme to SLSDDEs is known to be exponentially stable in mean square for any step size. However, in this article we propose an explicit method to show that the exponential Euler method to SLSDDEs is proved to share the same stability for any step size by the property of logarithmic norm.
Long-term detection of methyltestosterone (ab-) use by a yeast transactivation system.
Wolf, Sylvi; Diel, Patrick; Parr, Maria Kristina; Rataj, Felicitas; Schänzer, Willhelm; Vollmer, Günter; Zierau, Oliver
2011-04-01
The routinely used analytical method for detecting the abuse of anabolic steroids only allows the detection of molecules with known analytical properties. In our supplementary approach to structure-independent detection, substances are identified by their biological activity. In the present study, urines excreted after oral methyltestosterone (MT) administration were analyzed by a yeast androgen screen (YAS). The aim was to trace the excretion of MT or its metabolites in human urine samples and to compare the results with those from the established analytical method. MT and its two major metabolites were tested as pure compounds in the YAS. In a second step, the ability of the YAS to detect MT and its metabolites in urine samples was analyzed. For this purpose, a human volunteer ingested of a single dose of 5 mg methyltestosterone. Urine samples were collected after different time intervals (0-307 h) and were analyzed in the YAS and in parallel by GC/MS. Whereas the YAS was able to trace MT in urine samples at least for 14 days, the detection limits of the GC/MS method allowed follow-up until day six. In conclusion, our results demonstrate that the yeast reporter gene system could detect the activity of anabolic steroids like methyltestosterone with high sensitivity even in urine. Furthermore, the YAS was able to detect MT abuse for a longer period of time than classical GC/MS. Obviously, the system responds to long-lasting metabolites yet unidentified. Therefore, the YAS can be a powerful (pre-) screening tool with the potential that to be used to identify persistent or late screening metabolites of anabolic steroids, which could be used for an enhancement of the sensitivity of GC/MS detection techniques.
Statistical mechanics in the context of special relativity. II.
Kaniadakis, G
2005-09-01
The special relativity laws emerge as one-parameter (light speed) generalizations of the corresponding laws of classical physics. These generalizations, imposed by the Lorentz transformations, affect both the definition of the various physical observables (e.g., momentum, energy, etc.), as well as the mathematical apparatus of the theory. Here, following the general lines of [Phys. Rev. E 66, 056125 (2002)], we show that the Lorentz transformations impose also a proper one-parameter generalization of the classical Boltzmann-Gibbs-Shannon entropy. The obtained relativistic entropy permits us to construct a coherent and self-consistent relativistic statistical theory, preserving the main features of the ordinary statistical theory, which is recovered in the classical limit. The predicted distribution function is a one-parameter continuous deformation of the classical Maxwell-Boltzmann distribution and has a simple analytic form, showing power law tails in accordance with the experimental evidence. Furthermore, this statistical mechanics can be obtained as the stationary case of a generalized kinetic theory governed by an evolution equation obeying the H theorem and reproducing the Boltzmann equation of the ordinary kinetics in the classical limit.
Signatures of bifurcation on quantum correlations: Case of the quantum kicked top
NASA Astrophysics Data System (ADS)
Bhosale, Udaysinh T.; Santhanam, M. S.
2017-01-01
Quantum correlations reflect the quantumness of a system and are useful resources for quantum information and computational processes. Measures of quantum correlations do not have a classical analog and yet are influenced by classical dynamics. In this work, by modeling the quantum kicked top as a multiqubit system, the effect of classical bifurcations on measures of quantum correlations such as the quantum discord, geometric discord, and Meyer and Wallach Q measure is studied. The quantum correlation measures change rapidly in the vicinity of a classical bifurcation point. If the classical system is largely chaotic, time averages of the correlation measures are in good agreement with the values obtained by considering the appropriate random matrix ensembles. The quantum correlations scale with the total spin of the system, representing its semiclassical limit. In the vicinity of trivial fixed points of the kicked top, the scaling function decays as a power law. In the chaotic limit, for large total spin, quantum correlations saturate to a constant, which we obtain analytically, based on random matrix theory, for the Q measure. We also suggest that it can have experimental consequences.
Kovarik, Peter; Grivet, Chantal; Bourgogne, Emmanuel; Hopfgartner, Gérard
2007-01-01
The present work investigates various method development aspects for the quantitative analysis of pharmaceutical compounds in human plasma using matrix-assisted laser desorption/ionization and multiple reaction monitoring (MALDI-MRM). Talinolol was selected as a model analyte. Liquid-liquid extraction (LLE) and protein precipitation were evaluated regarding sensitivity and throughput for the MALDI-MRM technique and its applicability without and with chromatographic separation. Compared to classical electrospray liquid chromatography/mass spectrometry (LC/ESI-MS) method development, with MALDI-MRM the tuning of the analyte in single MS mode is more challenging due to interfering matrix background ions. An approach is proposed using background subtraction. With LLE and using a 200 microL human plasma aliquot acceptable precision and accuracy could be obtained in the range of 1 to 1000 ng/mL without any LC separation. Approximately 3 s were required for one analysis. A full calibration curve and its quality control samples (20 samples) can be analyzed within 1 min. Combining LC with the MALDI analysis allowed improving the linearity down to 50 pg/mL, while reducing the throughput potential only by two-fold. Matrix effects are still a significant issue with MALDI but can be monitored in a similar way to that used for LC/ESI-MS analysis.
Analytic Result for the Two-loop Six-point NMHV Amplitude in N = 4 Super Yang-Mills Theory
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dixon, Lance J.; /SLAC; Drummond, James M.
2012-02-15
We provide a simple analytic formula for the two-loop six-point ratio function of planar N = 4 super Yang-Mills theory. This result extends the analytic knowledge of multi-loop six-point amplitudes beyond those with maximal helicity violation. We make a natural ansatz for the symbols of the relevant functions appearing in the two-loop amplitude, and impose various consistency conditions, including symmetry, the absence of spurious poles, the correct collinear behavior, and agreement with the operator product expansion for light-like (super) Wilson loops. This information reduces the ansatz to a small number of relatively simple functions. In order to fix these parametersmore » uniquely, we utilize an explicit representation of the amplitude in terms of loop integrals that can be evaluated analytically in various kinematic limits. The final compact analytic result is expressed in terms of classical polylogarithms, whose arguments are rational functions of the dual conformal cross-ratios, plus precisely two functions that are not of this type. One of the functions, the loop integral {Omega}{sup (2)}, also plays a key role in a new representation of the remainder function R{sub 6}{sup (2)} in the maximally helicity violating sector. Another interesting feature at two loops is the appearance of a new (parity odd) x (parity odd) sector of the amplitude, which is absent at one loop, and which is uniquely determined in a natural way in terms of the more familiar (parity even) x (parity even) part. The second non-polylogarithmic function, the loop integral {tilde {Omega}}{sup (2)}, characterizes this sector. Both {Omega}{sup (2)} and {tilde {Omega}}{sup (2)} can be expressed as one-dimensional integrals over classical polylogarithms with rational arguments.« less
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ivanov, A. P.
2009-06-15
In the referenced paper an analytical approach was introduced, which allows one to demonstrate the instability in linearly stable systems, specifically, in a classical three-body problem. These considerations are disproved here.
New bis(alkythio) fatty acid methyl esters
USDA-ARS?s Scientific Manuscript database
The addition reaction of dimethyl disulfide (DMDS) to mono-unsaturated fatty acid methyl esters is well-known for analytical purposes to determine the position of double bonds by mass spectrometry. In this work, the classical iodine-catalyzed reaction is expanded to other dialkyl disulfides (RSSR), ...
Soltani, Amin; Gebauer, Denis; Duschek, Lennart; Fischer, Bernd M; Cölfen, Helmut; Koch, Martin
2017-10-12
Crystal formation is a highly debated problem. This report shows that the crystallization of l-(+)-tartaric acid from water follows a non-classical path involving intermediate hydrated states. Analytical ultracentrifugation indicates solution clusters of the initial stages aggregate to form an early intermediate. Terahertz spectroscopy performed during water evaporation highlights a transient increase in the absorption during nucleation; this indicates the recurrence of water molecules that are expelled from the intermediate phase. Besides, a transient resonance at 750 GHz, which can be assigned to a natural vibration of large hydrated aggregates, vanishes after the final crystal has formed. Furthermore, THz data reveal the vibration of nanosized clusters in the dilute solution indicated by analytical ultracentrifugation. Infrared spectroscopy and wide-angle X-ray scattering highlight that the intermediate is not a crystalline hydrate. These results demonstrate that nanoscopic intermediate units assemble to form the first solvent-free crystalline nuclei upon dehydration. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
Modeling the free energy surfaces of electron transfer in condensed phases
NASA Astrophysics Data System (ADS)
Matyushov, Dmitry V.; Voth, Gregory A.
2000-10-01
We develop a three-parameter model of electron transfer (ET) in condensed phases based on the Hamiltonian of a two-state solute linearly coupled to a harmonic, classical solvent mode with different force constants in the initial and final states (a classical limit of the quantum Kubo-Toyozawa model). The exact analytical solution for the ET free energy surfaces demonstrates the following features: (i) the range of ET reaction coordinates is limited by a one-sided fluctuation band, (ii) the ET free energies are infinite outside the band, and (iii) the free energy surfaces are parabolic close to their minima and linear far from the minima positions. The model provides an analytical framework to map physical phenomena conflicting with the Marcus-Hush two-parameter model of ET. Nonlinear solvation, ET in polarizable charge-transfer complexes, and configurational flexibility of donor-acceptor complexes are successfully mapped onto the model. The present theory leads to a significant modification of the energy gap law for ET reactions.
On oscillatory convection with the Cattaneo–Christov hyperbolic heat-flow model
Bissell, J. J.
2015-01-01
Adoption of the hyperbolic Cattaneo–Christov heat-flow model in place of the more usual parabolic Fourier law is shown to raise the possibility of oscillatory convection in the classic Bénard problem of a Boussinesq fluid heated from below. By comparing the critical Rayleigh numbers for stationary and oscillatory convection, Rc and RS respectively, oscillatory convection is found to represent the preferred form of instability whenever the Cattaneo number C exceeds a threshold value CT≥8/27π2≈0.03. In the case of free boundaries, analytical approaches permit direct treatment of the role played by the Prandtl number P1, which—in contrast to the classical stationary scenario—can impact on oscillatory modes significantly owing to the non-zero frequency of convection. Numerical investigation indicates that the behaviour found analytically for free boundaries applies in a qualitatively similar fashion for fixed boundaries, while the threshold Cattaneo number CT is computed as a function of P1∈[10−2,10+2] for both boundary regimes. PMID:25792960
NASA Astrophysics Data System (ADS)
Descartes, R.; Rota, G.-C.; Euler, L.; Bernoulli, J. D.; Siegel, Edward Carl-Ludwig
2011-03-01
Quantum-statistics Dichotomy: Fermi-Dirac(FDQS) Versus Bose-Einstein(BEQS), respectively with contact-repulsion/non-condensation(FDCR) versus attraction/ condensationBEC are manifestly-demonstrated by Taylor-expansion ONLY of their denominator exponential, identified BOTH as Descartes analytic-geometry conic-sections, FDQS as Elllipse (homotopy to rectangle FDQS distribution-function), VIA Maxwell-Boltzmann classical-statistics(MBCS) to Parabola MORPHISM, VS. BEQS to Hyperbola, Archimedes' HYPERBOLICITY INEVITABILITY, and as well generating-functions[Abramowitz-Stegun, Handbook Math.-Functions--p. 804!!!], respectively of Euler-numbers/functions, (via Riemann zeta-function(domination of quantum-statistics: [Pathria, Statistical-Mechanics; Huang, Statistical-Mechanics]) VS. Bernoulli-numbers/ functions. Much can be learned about statistical-physics from Euler-numbers/functions via Riemann zeta-function(s) VS. Bernoulli-numbers/functions [Conway-Guy, Book of Numbers] and about Euler-numbers/functions, via Riemann zeta-function(s) MORPHISM, VS. Bernoulli-numbers/ functions, visa versa!!! Ex.: Riemann-hypothesis PHYSICS proof PARTLY as BEQS BEC/BEA!!!
Chandrasekhar's dynamical friction and non-extensive statistics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Silva, J.M.; Lima, J.A.S.; De Souza, R.E.
2016-05-01
The motion of a point like object of mass M passing through the background potential of massive collisionless particles ( m || M ) suffers a steady deceleration named dynamical friction. In his classical work, Chandrasekhar assumed a Maxwellian velocity distribution in the halo and neglected the self gravity of the wake induced by the gravitational focusing of the mass M . In this paper, by relaxing the validity of the Maxwellian distribution due to the presence of long range forces, we derive an analytical formula for the dynamical friction in the context of the q -nonextensive kinetic theory. Inmore » the extensive limiting case ( q = 1), the classical Gaussian Chandrasekhar result is recovered. As an application, the dynamical friction timescale for Globular Clusters spiraling to the galactic center is explicitly obtained. Our results suggest that the problem concerning the large timescale as derived by numerical N -body simulations or semi-analytical models can be understood as a departure from the standard extensive Maxwellian regime as measured by the Tsallis nonextensive q -parameter.« less
Deflection of cross-ply composite laminates induced by piezoelectric actuators.
Her, Shiuh-Chuan; Lin, Chi-Sheng
2010-01-01
The coupling effects between the mechanical and electric properties of piezoelectric materials have drawn significant attention for their potential applications as sensors and actuators. In this investigation, two piezoelectric actuators are symmetrically surface bonded on a cross-ply composite laminate. Electric voltages with the same amplitude and opposite sign are applied to the two symmetric piezoelectric actuators, resulting in the bending effect on the laminated plate. The bending moment is derived by using the classical laminate theory and piezoelectricity. The analytical solution of the flexural displacement of the simply supported composite plate subjected to the bending moment is solved by using the plate theory. The analytical solution is compared with the finite element solution to show the validation of present approach. The effects of the size and location of the piezoelectric actuators on the response of the composite laminate are presented through a parametric study. A simple model incorporating the classical laminate theory and plate theory is presented to predict the deformed shape of the simply supported laminate plate.
Scaling analysis and instantons for thermally assisted tunneling and quantum Monte Carlo simulations
NASA Astrophysics Data System (ADS)
Jiang, Zhang; Smelyanskiy, Vadim N.; Isakov, Sergei V.; Boixo, Sergio; Mazzola, Guglielmo; Troyer, Matthias; Neven, Hartmut
2017-01-01
We develop an instantonic calculus to derive an analytical expression for the thermally assisted tunneling decay rate of a metastable state in a fully connected quantum spin model. The tunneling decay problem can be mapped onto the Kramers escape problem of a classical random dynamical field. This dynamical field is simulated efficiently by path-integral quantum Monte Carlo (QMC). We show analytically that the exponential scaling with the number of spins of the thermally assisted quantum tunneling rate and the escape rate of the QMC process are identical. We relate this effect to the existence of a dominant instantonic tunneling path. The instanton trajectory is described by nonlinear dynamical mean-field theory equations for a single-site magnetization vector, which we solve exactly. Finally, we derive scaling relations for the "spiky" barrier shape when the spin tunneling and QMC rates scale polynomially with the number of spins N while a purely classical over-the-barrier activation rate scales exponentially with N .
An analytic model for buoyancy resonances in protoplanetary disks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lubow, Stephen H.; Zhu, Zhaohuan, E-mail: lubow@stsci.edu, E-mail: zhzhu@astro.princeton.edu
2014-04-10
Zhu et al. found in three-dimensional shearing box simulations a new form of planet-disk interaction that they attributed to a vertical buoyancy resonance in the disk. We describe an analytic linear model for this interaction. We adopt a simplified model involving azimuthal forcing that produces the resonance and permits an analytic description of its structure. We derive an analytic expression for the buoyancy torque and show that the vertical torque distribution agrees well with the results of the Athena simulations and a Fourier method for linear numerical calculations carried out with the same forcing. The buoyancy resonance differs from themore » classic Lindblad and corotation resonances in that the resonance lies along tilted planes. Its width depends on damping effects and is independent of the gas sound speed. The resonance does not excite propagating waves. At a given large azimuthal wavenumber k{sub y} > h {sup –1} (for disk thickness h), the buoyancy resonance exerts a torque over a region that lies radially closer to the corotation radius than the Lindblad resonance. Because the torque is localized to the region of excitation, it is potentially subject to the effects of nonlinear saturation. In addition, the torque can be reduced by the effects of radiative heat transfer between the resonant region and its surroundings. For each azimuthal wavenumber, the resonance establishes a large scale density wave pattern in a plane within the disk.« less
Baldo, Matías N; Angeli, Emmanuel; Gareis, Natalia C; Hunzicker, Gabriel A; Murguía, Marcelo C; Ortega, Hugo H; Hein, Gustavo J
2018-04-01
A relative bioavailability study (RBA) of two phenytoin (PHT) formulations was conducted in rabbits, in order to compare the results obtained from different matrices (plasma and blood from dried blood spot (DBS) sampling) and different experimental designs (classic and block). The method was developed by liquid chromatography tandem-mass spectrometry (LC-MS/MS) in plasma and blood samples. The different sample preparation techniques, plasma protein precipitation and DBS, were validated according to international requirements. The analytical method was validated with ranges 0.20-50.80 and 0.12-20.32 µg ml -1 , r > 0.999 for plasma and blood, respectively. Accuracy and precision were within acceptance criteria for bioanalytical assay validation (< 15 for bias and CV% and < 20 for limit of quantification (LOQ)). PHT showed long-term stability, both for plasma and blood, and under refrigerated and room temperature conditions. Haematocrit values were measured during the validation process and RBA study. Finally, the pharmacokinetic parameters (C max , T max and AUC 0-t ) obtained from the RBA study were tested. Results were highly comparable for matrices and experimental designs. A matrix correlation higher than 0.975 and a ratio of (PHT blood) = 1.158 (PHT plasma) were obtained. The results obtained herein show that the use of classic experimental design and DBS sampling for animal pharmacokinetic studies should be encouraged as they could help to prevent the use of a large number of animals and also animal euthanasia. Finally, the combination of DBS sampling with LC-MS/MS technology showed to be an excellent tool not only for therapeutic drug monitoring but also for RBA studies.
Aparicio, Irene; Martín, Julia; Santos, Juan Luis; Malvar, José Luis; Alonso, Esteban
2017-06-02
An analytical method based on stir bar sorptive extraction (SBSE) was developed and validated for the determination of environmental concern pollutants in environmental waters by liquid chromatography-tandem mass spectrometry (LC-MS/MS). Target compounds include six water and oil repellents (perfluorinated compounds), four preservatives (butylated hydroxytoluene and three parabens), two plasticizers (bisphenol A and di(2-ethylhexyl)phthalate), seven surfactants (four linear alkylbenzene sulfonates, nonylphenol and two nonylphenol ethoxylates), a flame retardant (hexabromocyclododecane), four hormones, fourteen pharmaceutical compounds, an UV-filter (2-ethylhexyl 4-methoxycinnamate) and nine pesticides. To achieve the simultaneous extraction of polar and non-polar pollutants two stir bar coatings were tested, the classic polydimethylsiloxane (PDMS) coating and the novel ethylene glycol modified silicone (EG-silicone). The best extraction recoveries were obtained using EG-silicone coating. The effects of sample pH, volume and ionic strength and extraction time on extraction recoveries were evaluated. The analytical method was validated for surface water and tap water samples. The method quantification limits ranged from 7.0ngL -1 to 177ngL -1 . The inter-day precision, expressed as relative standard deviation, was lower than 20%. Accuracy, expressed as relative recovery values, was in the range from 61 to 130%. The method was applied for the determination of the 48 target compounds in surface and tap water samples. Copyright © 2017 Elsevier B.V. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lazic, Predrag; Stefancic, Hrvoje; Abraham, Hrvoje
2006-03-20
We introduce a novel numerical method, named the Robin Hood method, of solving electrostatic problems. The approach of the method is closest to the boundary element methods, although significant conceptual differences exist with respect to this class of methods. The method achieves equipotentiality of conducting surfaces by iterative non-local charge transfer. For each of the conducting surfaces, non-local charge transfers are performed between surface elements, which differ the most from the targeted equipotentiality of the surface. The method is tested against analytical solutions and its wide range of application is demonstrated. The method has appealing technical characteristics. For the problemmore » with N surface elements, the computational complexity of the method essentially scales with N {sup {alpha}}, where {alpha} < 2, the required computer memory scales with N, while the error of the potential decreases exponentially with the number of iterations for many orders of magnitude of the error, without the presence of the Critical Slowing Down. The Robin Hood method could prove useful in other classical or even quantum problems. Some future development ideas for possible applications outside electrostatics are addressed.« less
Structure of the classical scrape-off layer of a tokamak
NASA Astrophysics Data System (ADS)
Rozhansky, V.; Kaveeva, E.; Senichenkov, I.; Vekshina, E.
2018-03-01
The structure of the scrape-off layer (SOL) of a tokamak with little or no turbulent transport is analyzed. The analytical estimates of the density and electron temperature fall-off lengths of the SOL are put forward. It is demonstrated that the SOL width could be of the order of the ion poloidal gyroradius, as suggested in Goldston (2012 Nuclear Fusion 52 013009). The analytical results are supported by the results of the 2D simulations of the edge plasma with reduced transport coefficients performed by SOLPS-ITER transport code.
Off-diagonal series expansion for quantum partition functions
NASA Astrophysics Data System (ADS)
Hen, Itay
2018-05-01
We derive an integral-free thermodynamic perturbation series expansion for quantum partition functions which enables an analytical term-by-term calculation of the series. The expansion is carried out around the partition function of the classical component of the Hamiltonian with the expansion parameter being the strength of the off-diagonal, or quantum, portion. To demonstrate the usefulness of the technique we analytically compute to third order the partition functions of the 1D Ising model with longitudinal and transverse fields, and the quantum 1D Heisenberg model.
Electrochemistry in hollow-channel paper analytical devices.
Renault, Christophe; Anderson, Morgan J; Crooks, Richard M
2014-03-26
In the present article we provide a detailed analysis of fundamental electrochemical processes in a new class of paper-based analytical devices (PADs) having hollow channels (HCs). Voltammetry and amperometry were applied under flow and no flow conditions yielding reproducible electrochemical signals that can be described by classical electrochemical theory as well as finite-element simulations. The results shown here provide new and quantitative insights into the flow within HC-PADs. The interesting new result is that despite their remarkable simplicity these HC-PADs exhibit electrochemical and hydrodynamic behavior similar to that of traditional microelectrochemical devices.
Determination of Phosphates by the Gravimetric Quimociac Technique
ERIC Educational Resources Information Center
Shaver, Lee Alan
2008-01-01
The determination of phosphates by the classic quimociac gravimetric technique was used successfully as a laboratory experiment in our undergraduate analytical chemistry course. Phosphate-containing compounds are dissolved in acid and converted to soluble orthophosphate ion (PO[subscript 4][superscript 3-]). The soluble phosphate is easily…
Using Qualitative Inquiry to Promote Organizational Intelligence
ERIC Educational Resources Information Center
Kimball, Ezekiel; Loya, Karla I.
2017-01-01
Framed by Terenzini's revision of his classic "On the nature of institutional research" article, this chapter offers concluding thoughts on the way in which technical/analytical, issues, and contextual types of awarenesses appeared across chapters in this volume. Moreover, it outlines how each chapter demonstrated how qualitative inquiry…
The Initial Flow of Classical Gluon Fields in Heavy Ion Collisions
NASA Astrophysics Data System (ADS)
Fries, Rainer J.; Chen, Guangyao
2015-03-01
Using analytic solutions of the Yang-Mills equations we calculate the initial flow of energy of the classical gluon field created in collisions of large nuclei at high energies. We find radial and elliptic flow which follows gradients in the initial energy density, similar to a simple hydrodynamic behavior. In addition we find a rapidity-odd transverse flow field which implies the presence of angular momentum and should lead to directed flow in final particle spectra. We trace those energy flow terms to transverse fields from the non-abelian generalization of Gauss' Law and Ampere's and Faraday's Laws.
Classical problems in computational aero-acoustics
NASA Technical Reports Server (NTRS)
Hardin, Jay C.
1996-01-01
In relation to the expected problems in the development of computational aeroacoustics (CAA), the preliminary applications were to classical problems where the known analytical solutions could be used to validate the numerical results. Such comparisons were used to overcome the numerical problems inherent in these calculations. Comparisons were made between the various numerical approaches to the problems such as direct simulations, acoustic analogies and acoustic/viscous splitting techniques. The aim was to demonstrate the applicability of CAA as a tool in the same class as computational fluid dynamics. The scattering problems that occur are considered and simple sources are discussed.
Thermodynamics of reformulated automotive fuels
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zudkevitch, D.; Murthy, A.K.S.; Gmehling, J.
1995-06-01
Two methods for predicting Reid vapor pressure (Rvp) and initial vapor emissions of reformulated gasoline blends that contain one or more oxygenated compounds show excellent agreement with experimental data. In the first method, method A, D-86 distillation data for gasoline blends are used for predicting Rvp from a simulation of the mini dry vapor pressure equivalent (Dvpe) experiment. The other method, method B, relies on analytical information (PIANO analyses) of the base gasoline and uses classical thermodynamics for simulating the same Rvp equivalent (Rvpe) mini experiment. Method B also predicts composition and other properties for the fuel`s initial vapor emission.more » Method B, although complex, is more useful in that is can predict properties of blends without a D-86 distillation. An important aspect of method B is its capability to predict composition of initial vapor emissions from gasoline blends. Thus, it offers a powerful tool to planners of gasoline blending. Method B uses theoretically sound formulas, rigorous thermodynamic routines and uses data and correlations of physical properties that are in the public domain. Results indicate that predictions made with both methods agree very well with experimental values of Dvpe. Computer simulation methods were programmed and tested.« less
Pinon, J M; Thoannes, H; Gruson, N
1985-02-28
Enzyme-linked immuno-filtration assay is carried out on a micropore membrane. This doubly analytical technique permits simultaneous study of antibody specificity by immunoprecipitation and characterisation of antibody isotypes by immuno-filtration with enzyme-labelled antibodies. Recognition of the same T. gondii antigenic constituent by IgG, IgA, IgM or IgE antibodies produces couplets (IgG-IgM; IgG-IgA) or triplets (IgG-IgM-IgA; IgG-IgM-IgE) which identify the functional fractions of the toxoplasmosis antigen. In acquired toxoplasmosis, the persistence of IgM antibody long after infestation puts in question the implication of recent infestation normally linked to detection of this isotype. For sera of comparable titres, comparison of immunological profiles by the method described demonstrates disparities in the composition of the specific antibody content as expressed in international units. Use of the same method to detect IgM antibodies or distinguish between transmitted maternal IgG and IgG antibodies synthesised by the foetus or neonate makes a diagnosis of congenital toxoplasmosis possible in 85% of cases during the first few days of life. With the method described the diagnosis may be made on average 5 months earlier than with classical techniques. In the course of surveillance for latent congenital toxoplasmosis, the appearance of IgM or IgE antibodies raises the possibility of complications (hydrocephalus, chorioretinitis). After cessation of treatment, a rise in IgG antibodies indicating persistence of infection is detected earlier by the present than by classical methods.
A graphical approach to electric sail mission design with radial thrust
NASA Astrophysics Data System (ADS)
Mengali, Giovanni; Quarta, Alessandro A.; Aliasi, Generoso
2013-02-01
This paper describes a semi-analytical approach to electric sail mission analysis under the assumption that the spacecraft experiences a purely radial, outward, propulsive acceleration. The problem is tackled by means of the potential well concept, a very effective idea that was originally introduced by Prussing and Coverstone in 1998. Unlike a classical procedure that requires the numerical integration of the equations of motion, the proposed method provides an estimate of the main spacecraft trajectory parameters, as its maximum and minimum attainable distance from the Sun, with the simple use of analytical relationships and elementary graphs. A number of mission scenarios clearly show the effectiveness of the proposed approach. In particular, when the spacecraft parking orbit is either circular or elliptic it is possible to find the optimal performances required to reach an escape condition or a given distance from the Sun. Another example is given by the optimal strategy required to reach a heliocentric Keplerian orbit of prescribed orbital period. Finally the graphical approach is applied to the preliminary design of a nodal mission towards a Near Earth Asteroid.
Cabrera-Barona, Pablo; Ghorbanzadeh, Omid
2018-01-16
Deprivation indices are useful measures to study health inequalities. Different techniques are commonly applied to construct deprivation indices, including multi-criteria decision methods such as the analytical hierarchy process (AHP). The multi-criteria deprivation index for the city of Quito is an index in which indicators are weighted by applying the AHP. In this research, a variation of this index is introduced that is calculated using interval AHP methodology. Both indices are compared by applying logistic generalized linear models and multilevel models, considering self-reported health as the dependent variable and deprivation and self-reported quality of life as the independent variables. The obtained results show that the multi-criteria deprivation index for the city of Quito is a meaningful measure to assess neighborhood effects on self-reported health and that the alternative deprivation index using the interval AHP methodology more thoroughly represents the local knowledge of experts and stakeholders. These differences could support decision makers in improving health planning and in tackling health inequalities in more deprived areas.
Cabrera-Barona, Pablo
2018-01-01
Deprivation indices are useful measures to study health inequalities. Different techniques are commonly applied to construct deprivation indices, including multi-criteria decision methods such as the analytical hierarchy process (AHP). The multi-criteria deprivation index for the city of Quito is an index in which indicators are weighted by applying the AHP. In this research, a variation of this index is introduced that is calculated using interval AHP methodology. Both indices are compared by applying logistic generalized linear models and multilevel models, considering self-reported health as the dependent variable and deprivation and self-reported quality of life as the independent variables. The obtained results show that the multi-criteria deprivation index for the city of Quito is a meaningful measure to assess neighborhood effects on self-reported health and that the alternative deprivation index using the interval AHP methodology more thoroughly represents the local knowledge of experts and stakeholders. These differences could support decision makers in improving health planning and in tackling health inequalities in more deprived areas. PMID:29337915
NASA Astrophysics Data System (ADS)
Silva, Cesar R.; Simoni, Jose A.; Collins, Carol H.; Volpe, Pedro L. O.
1999-10-01
Ascorbic acid is suggested as the weighable compound for the standardization of iodine solutions in an analytical experiment in general chemistry. The experiment involves an iodometric titration in which iodine reacts with ascorbic acid, oxidizing it to dehydroascorbic acid. The redox titration endpoint is determined by the first iodine excess that is complexed with starch, giving a deep blue-violet color. The results of the titration of iodine solution using ascorbic acid as a calibration standard were compared with the results acquired by the classic method using a standardized solution of sodium thiosulfate. The standardization of the iodine solution using ascorbic acid was accurate and precise, with the advantages of saving time and avoiding mistakes due to solution preparation. The colorless ascorbic acid solution gives a very clear and sharp titration end point with starch. It was shown by thermogravimetric analysis that ascorbic acid can be dried at 393 K for 2 h without decomposition. This experiment allows general chemistry students to perform an iodometric titration during a single laboratory period, determining with precision the content of vitamin C in pharmaceutical formulations.
Karayannis, Miltiades I; Efstathiou, Constantinos E
2012-12-15
In this review the history of chemistry and specifically the history and the significant steps of the evolution of analytical chemistry are presented. In chronological time spans, covering the ancient world, the middle ages, the period of the 19th century, and the three evolutional periods, from the verge of the 19th century to contemporary times, it is given information for the progress of chemistry and analytical chemistry. During this period, analytical chemistry moved gradually from its pure empirical nature to more rational scientific activities, transforming itself to an autonomous branch of chemistry and a separate discipline. It is also shown that analytical chemistry moved gradually from the status of exclusive serving the chemical science, towards serving, the environment, health, law, almost all areas of science and technology, and the overall society. Some recommendations are also directed to analytical chemistry educators concerning the indispensable nature of knowledge of classical analytical chemistry and the associated laboratory exercises and to analysts, in general, why it is important to use the chemical knowledge to make measurements on problems of everyday life. Copyright © 2012 Elsevier B.V. All rights reserved.
Peters, Frank T; Schaefer, Simone; Staack, Roland F; Kraemer, Thomas; Maurer, Hans H
2003-06-01
The classical stimulants amphetamine, methamphetamine, ethylamphetamine and the amphetamine-derived designer drugs MDA, MDMA ('ecstasy'), MDEA, BDB and MBDB have been widely abused for a relatively long time. In recent years, a number of newer designer drugs have entered the illicit drug market. 4-Methylthioamphetamine (MTA), p-methoxyamphetamine (PMA) and p-methoxymethamphetamine (PMMA) are also derived from amphetamine. Other designer drugs are derived from piperazine, such as benzylpiperazine (BZP), methylenedioxybenzylpiperazine (MDBP), trifluoromethylphenylpiperazine (TFMPP), m-chlorophenylpiperazine (mCPP) and p-methoxyphenylpiperazine (MeOPP). A number of severe or even fatal intoxications involving these newer substances, especially PMA, have been reported. This paper describes a method for screening for and simultaneous quantification of the above-mentioned compounds and the metabolites p-hydroxyamphetamine and p-hydroxymethamphetamine (pholedrine) in human blood plasma. The analytes were analyzed by gas chromatography/mass spectrometry in the selected-ion monitoring mode after mixed-mode solid-phase extraction (HCX) and derivatization with heptafluorobutyric anhydride. The method was fully validated according to international guidelines. It was linear from 5 to 1000 micro g l(-1) for all analytes. Data for accuracy and precision were within required limits with the exception of those for MDBP. The limit of quantification was 5 micro g l(-1) for all analytes. The applicability of the assay was proven by analysis of authentic plasma samples and of a certified reference sample. This procedure should also be suitable for confirmation of immunoassay results positive for amphetamines and/or designer drugs of the ecstasy type. Copyright 2003 John Wiley & Sons, Ltd.
Modeling of classical swirl injector dynamics
NASA Astrophysics Data System (ADS)
Ismailov, Maksud M.
The knowledge of the dynamics of a swirl injector is crucial in designing a stable liquid rocket engine. Since the swirl injector is a complex fluid flow device in itself, not much work has been conducted to describe its dynamics either analytically or by using computational fluid dynamics techniques. Even the experimental observation is limited up to date. Thus far, there exists an analytical linear theory by Bazarov [1], which is based on long-wave disturbances traveling on the free surface of the injector core. This theory does not account for variation of the nozzle reflection coefficient as a function of disturbance frequency, and yields a response function which is strongly dependent on the so called artificial viscosity factor. This causes an uncertainty in designing an injector for the given operational combustion instability frequencies in the rocket engine. In this work, the author has studied alternative techniques to describe the swirl injector response, both analytically and computationally. In the analytical part, by using the linear small perturbation analysis, the entire phenomenon of unsteady flow in swirl injectors is dissected into fundamental components, which are the phenomena of disturbance wave refraction and reflection, and vortex chamber resonance. This reveals the nature of flow instability and the driving factors leading to maximum injector response. In the computational part, by employing the nonlinear boundary element method (BEM), the author sets the boundary conditions such that they closely simulate those in the analytical part. The simulation results then show distinct peak responses at frequencies that are coincident with those resonant frequencies predicted in the analytical part. Moreover, a cold flow test of the injector related to this study also shows a clear growth of instability with its maximum amplitude at the first fundamental frequency predicted both by analytical methods and BEM. It shall be noted however that Bazarov's theory does not predict the resonant peaks. Overall this methodology provides clearer understanding of the injector dynamics compared to Bazarov's. Even though the exact value of response is not possible to obtain at this stage of theoretical, computational, and experimental investigation, this methodology sets the starting point from where the theoretical description of reflection/refraction, resonance, and their interaction between each other may be refined to higher order to obtain its more precise value.
Models for the rise of the dinosaurs.
Benton, Michael J; Forth, Jonathan; Langer, Max C
2014-01-20
Dinosaurs arose in the early Triassic in the aftermath of the greatest mass extinction ever and became hugely successful in the Mesozoic. Their initial diversification is a classic example of a large-scale macroevolutionary change. Diversifications at such deep-time scales can now be dissected, modelled and tested. New fossils suggest that dinosaurs originated early in the Middle Triassic, during the recovery of life from the devastating Permo-Triassic mass extinction. Improvements in stratigraphic dating and a new suite of morphometric and comparative evolutionary numerical methods now allow a forensic dissection of one of the greatest turnovers in the history of life. Such studies mark a move from the narrative to the analytical in macroevolutionary research, and they allow us to begin to answer the proposal of George Gaylord Simpson, to explore adaptive radiations using numerical methods. Copyright © 2014 The Authors. Published by Elsevier Inc. All rights reserved.
Theoretical model for Sub-Doppler Cooling with EIT System
NASA Astrophysics Data System (ADS)
He, Peiru; Tengdin, Phoebe; Anderson, Dana; Rey, Ana Maria; Holland, Murray
2016-05-01
We propose a of sub-Doppler cooling mechanism that takes advantage of the unique spectral features and extreme dispersion generated by the so-called Electromagnetically Induced Transparency (EIT) effect, a destructive quantum interference phenomenon experienced by atoms with Lambda-shaped energy levels when illuminated by two light fields with appropriate frequencies. By detuning the probe lasers slightly from the ``dark resonance'', we observe that atoms can be significantly cooled down by the strong viscous force within the transparency window, while being just slightly heated by the diffusion caused by the small absorption near resonance. In contrast to polarization gradient cooling or EIT sideband cooling, no external magnetic field or external confining potential are required. Using a semi-classical method, analytical expressions, and numerical simulations, we demonstrate that the proposed EIT cooling method can lead to temperatures well below the Doppler limit. This work is supported by NSF and NIST.
Elements of an algorithm for optimizing a parameter-structural neural network
NASA Astrophysics Data System (ADS)
Mrówczyńska, Maria
2016-06-01
The field of processing information provided by measurement results is one of the most important components of geodetic technologies. The dynamic development of this field improves classic algorithms for numerical calculations in the aspect of analytical solutions that are difficult to achieve. Algorithms based on artificial intelligence in the form of artificial neural networks, including the topology of connections between neurons have become an important instrument connected to the problem of processing and modelling processes. This concept results from the integration of neural networks and parameter optimization methods and makes it possible to avoid the necessity to arbitrarily define the structure of a network. This kind of extension of the training process is exemplified by the algorithm called the Group Method of Data Handling (GMDH), which belongs to the class of evolutionary algorithms. The article presents a GMDH type network, used for modelling deformations of the geometrical axis of a steel chimney during its operation.
Automated measurement and monitoring of bioprocesses: key elements of the M(3)C strategy.
Sonnleitner, Bernhard
2013-01-01
The state-of-routine monitoring items established in the bioprocess industry as well as some important state-of-the-art methods are briefly described and the potential pitfalls discussed. Among those are physical and chemical variables such as temperature, pressure, weight, volume, mass and volumetric flow rates, pH, redox potential, gas partial pressures in the liquid and molar fractions in the gas phase, infrared spectral analysis of the liquid phase, and calorimetry over an entire reactor. Classical as well as new optical versions are addressed. Biomass and bio-activity monitoring (as opposed to "measurement") via turbidity, permittivity, in situ microscopy, and fluorescence are critically analyzed. Some new(er) instrumental analytical tools, interfaced to bioprocesses, are explained. Among those are chromatographic methods, mass spectrometry, flow and sequential injection analyses, field flow fractionation, capillary electrophoresis, and flow cytometry. This chapter surveys the principles of monitoring rather than compiling instruments.
Realization of non-holonomic constraints and singular perturbation theory for plane dumbbells
NASA Astrophysics Data System (ADS)
Koshkin, Sergiy; Jovanovic, Vojin
2017-10-01
We study the dynamics of pairs of connected masses in the plane, when nonholonomic (knife-edge) constraints are realized by forces of viscous friction, in particular its relation to constrained dynamics, and its approximation by the method of matching asymptotics of singular perturbation theory when the mass to friction ratio is taken as the small parameter. It turns out that long term behaviors of the frictional and constrained systems may differ dramatically no matter how small the perturbation is, and when this happens is not determined by any transparent feature of the equations of motion. The choice of effective time scales for matching asymptotics is also subtle and non-obvious, and secular terms appearing in them can not be dealt with by the classical methods. Our analysis is based on comparison to analytic solutions, and we present a reduction procedure for plane dumbbells that leads to them in some cases.
Lechowicz, Wojciech
2009-01-01
Toxicological analyses performed in individuals who died in unclear circumstances constitute a key element of research aiming at providing a complete explanation of cause of death. The entire panel of examinations of the corpse of general Sikorski also included toxicological analyses for drugs and organic poisons of synthetic and natural origin. Attention was focused on fast-acting and potent poisons known and used in the forties of the century. The internal organs (stomach, liver, lung, brain) and hair, as well as other materials collected from the body and found in the coffin were analyzed. The classic method of sample preparation, i.e. homogenization, deproteinization, headspace and liquid-liquid extraction were applied. Hyphenated methods, mainly chromatographic with mass spectrometry were used for identification of the analytes. Organic poisons were not identified in the material as a result of the research.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Suparmi, A., E-mail: soeparmi@staff.uns.ac.id; Cari, C., E-mail: cari@staff.uns.ac.id; Pratiwi, B. N., E-mail: namakubetanurpratiwi@gmail.com
2016-02-08
The analytical solution of D-dimensional Dirac equation for hyperbolic tangent potential is investigated using Nikiforov-Uvarov method. In the case of spin symmetry the D dimensional Dirac equation reduces to the D dimensional Schrodinger equation. The D dimensional relativistic energy spectra are obtained from D dimensional relativistic energy eigen value equation by using Mat Lab software. The corresponding D dimensional radial wave functions are formulated in the form of generalized Jacobi polynomials. The thermodynamically properties of materials are generated from the non-relativistic energy eigen-values in the classical limit. In the non-relativistic limit, the relativistic energy equation reduces to the non-relativistic energy.more » The thermal quantities of the system, partition function and specific heat, are expressed in terms of error function and imaginary error function which are numerically calculated using Mat Lab software.« less
Elastic and transport cross sections for inert gases in a hydrogen plasma
NASA Astrophysics Data System (ADS)
Krstic, Predrag
2005-05-01
Accurate elastic differential and integral scattering and transport cross sections have been computed using a fully quantum-mechanical approach for hydrogen ions (H^+, D^+ and T^+) colliding with Neon, Krypton and Xenon, in the center of mass energy range 0.1 to 200 eV. The momentum transfer and viscosity cross sections have been extended to higher keV collision energies using a classical, three-body scattering method. The results were compared with previously calculated values for Argon and Helium, as well as with simple analytical models. The cross sections, tabulated and available through the world wide web (www-cfadc.phy.ornl.gov) are of significance in fusion plasma modeling, gaseous electronics and other plasma applications.
Quantum Metric of Classic Physics
NASA Astrophysics Data System (ADS)
Machusky, Eugene
2017-09-01
By methods of differential geometry and number theory the following has been established: All fundamental physical constants are the medians of quasi-harmonic functions of relative space and relative time. Basic quantum units are, in fact, the gradients of normal distribution of standing waves between the points of pulsating spherical spiral, which are determined only by functional bonds of transcendental numbers PI and E. Analytically obtained values of rotational speed, translational velocity, vibrational speed, background temperature and molar mass give the possibility to evaluate all basic quantum units with practically unlimited accuracy. Metric of quantum physics really is two-dimensional image of motion of waves in three-dimensional space. Standard physical model is correct, but SI metric system is insufficiently exact at submillimeter distances.
Considerations in the development of circulating tumor cell technology for clinical use
2012-01-01
This manuscript summarizes current thinking on the value and promise of evolving circulating tumor cell (CTC) technologies for cancer patient diagnosis, prognosis, and response to therapy, as well as accelerating oncologic drug development. Moving forward requires the application of the classic steps in biomarker development–analytical and clinical validation and clinical qualification for specific contexts of use. To that end, this review describes methods for interactive comparisons of proprietary new technologies, clinical trial designs, a clinical validation qualification strategy, and an approach for effectively carrying out this work through a public-private partnership that includes test developers, drug developers, clinical trialists, the US Food & Drug Administration (FDA) and the US National Cancer Institute (NCI). PMID:22747748
Renormalization of the unitary evolution equation for coined quantum walks
NASA Astrophysics Data System (ADS)
Boettcher, Stefan; Li, Shanshan; Portugal, Renato
2017-03-01
We consider discrete-time evolution equations in which the stochastic operator of a classical random walk is replaced by a unitary operator. Such a problem has gained much attention as a framework for coined quantum walks that are essential for attaining the Grover limit for quantum search algorithms in physically realizable, low-dimensional geometries. In particular, we analyze the exact real-space renormalization group (RG) procedure recently introduced to study the scaling of quantum walks on fractal networks. While this procedure, when implemented numerically, was able to provide some deep insights into the relation between classical and quantum walks, its analytic basis has remained obscure. Our discussion here is laying the groundwork for a rigorous implementation of the RG for this important class of transport and algorithmic problems, although some instances remain unresolved. Specifically, we find that the RG fixed-point analysis of the classical walk, which typically focuses on the dominant Jacobian eigenvalue {λ1} , with walk dimension dw\\text{RW}={{log}2}{λ1} , needs to be extended to include the subdominant eigenvalue {λ2} , such that the dimension of the quantum walk obtains dw\\text{QW}={{log}2}\\sqrt{{λ1}{λ2}} . With that extension, we obtain analytically previously conjectured results for dw\\text{QW} of Grover walks on all but one of the fractal networks that have been considered.
Classical mutual information in mean-field spin glass models
NASA Astrophysics Data System (ADS)
Alba, Vincenzo; Inglis, Stephen; Pollet, Lode
2016-03-01
We investigate the classical Rényi entropy Sn and the associated mutual information In in the Sherrington-Kirkpatrick (S-K) model, which is the paradigm model of mean-field spin glasses. Using classical Monte Carlo simulations and analytical tools we investigate the S-K model in the n -sheet booklet. This is achieved by gluing together n independent copies of the model, and it is the main ingredient for constructing the Rényi entanglement-related quantities. We find a glassy phase at low temperatures, whereas at high temperatures the model exhibits paramagnetic behavior, consistent with the regular S-K model. The temperature of the paramagnetic-glassy transition depends nontrivially on the geometry of the booklet. At high temperatures we provide the exact solution of the model by exploiting the replica symmetry. This is the permutation symmetry among the fictitious replicas that are used to perform disorder averages (via the replica trick). In the glassy phase the replica symmetry has to be broken. Using a generalization of the Parisi solution, we provide analytical results for Sn and In and for standard thermodynamic quantities. Both Sn and In exhibit a volume law in the whole phase diagram. We characterize the behavior of the corresponding densities, Sn/N and In/N , in the thermodynamic limit. Interestingly, at the critical point the mutual information does not exhibit any crossing for different system sizes, in contrast with local spin models.
Preparation, Characterization, and Selectivity Study of Mixed-Valence Sulfites
ERIC Educational Resources Information Center
Silva, Luciana A.; de Andrade, Jailson B.
2010-01-01
A project involving the synthesis of an isomorphic double sulfite series and characterization by classical inorganic chemical analyses is described. The project is performed by upper-level undergraduate students in the laboratory. This compound series is suitable for examining several chemical concepts and analytical techniques in inorganic…
Hermann-Bernoulli-Laplace-Hamilton-Runge-Lenz Vector.
ERIC Educational Resources Information Center
Subramanian, P. R.; And Others
1991-01-01
A way for students to refresh and use their knowledge in both mathematics and physics is presented. By the study of the properties of the "Runge-Lenz" vector the subjects of algebra, analytical geometry, calculus, classical mechanics, differential equations, matrices, quantum mechanics, trigonometry, and vector analysis can be reviewed. (KR)
On the Construction and Dynamics of Knotted Fields
NASA Astrophysics Data System (ADS)
Kedia, Hridesh
Representing a physical field in terms of its field lines has often enabled a deeper understanding of complex physical phenomena, from Faraday's law of magnetic induction, to the Helmholtz laws of vortex motion, to the free energy density of liquid crystals in terms of the distortions of the lines of the director field. At the same time, the application of ideas from topology--the study of properties that are invariant under continuous deformations--has led to robust insights into the nature of complex physical systems from defects in crystal structures, to the earth's magnetic field, to topological conservation laws. The study of knotted fields, physical fields in which the field lines encode knots, emerges naturally from the application of topological ideas to the investigation of the physical phenomena best understood in terms of the lines of a field. A knot--a closed loop tangled with itself which can not be untangled without cutting the loop--is the simplest topologically non-trivial object constructed from a line. Remarkably, knots in the vortex (magnetic field) lines of a dissipationless fluid (plasma), persist forever as they are transported by the flow, stretching and rotating as they evolve. Moreover, deeply entwined with the topology-preserving dynamics of dissipationless fluids and plasmas, is an additional conserved quantity--helicity, a measure of the average linking of the vortex (magnetic field) lines in a fluid (plasma)--which has had far-reaching consequences for fluids and plasmas. Inspired by the persistence of knots in dissipationless flows, and their far-reaching physical consequences, we seek to understand the interplay between the dynamics of a field and the topology of its field lines in a variety of systems. While it is easy to tie a knot in a shoelace, tying a knot in the the lines of a space-filling field requires contorting the lines everywhere to match the knotted region. The challenge of analytically constructing knotted field configurations has impeded a deeper understanding of the interplay between topology and dynamics in fluids and plasmas. We begin by analytically constructing knotted field configurations which encode a desired knot in the lines of the field, and show that their helicity can be tuned independently of the encoded knot. The nonlinear nature of the physical systems in which these knotted field configurations arise, makes their analytical study challenging. We ask if a linear theory such as electromagnetism can allow knotted field configurations to persist with time. We find analytical expressions for an infinite family of knotted solutions to Maxwell's equations in vacuum and elucidate their connections to dissipationless flows. We present a design rule for constructing such persistently knotted electromagnetic fields, which could possibly be used to transfer knottedness to matter such as quantum fluids and plasmas. An important consequence of the persistence of knots in classical dissipationless flows is the existence of an additional conserved quantity, helicity, which has had far-reaching implications. To understand the existence of analogous conserved quantities, we ask if superfluids, which flow without dissipation just like classical dissipationless flows, have an additional conserved quantity akin to helicity. We address this question using an analytical approach based on defining the particle relabeling symmetry--the symmetry underlying helicity conservation--in superfluids, and find that an analogous conserved quantity exists but vanishes identically owing to the intrinsic geometry of complex scalar fields. Furthermore, to address the question of a ``classical limit'' of superfluid vortices which recovers classical helicity conservation, we perform numerical simulations of \\emph{bundles} of superfluid vortices, and find behavior akin to classical viscous flows.
Analytic proof of the existence of the Lorenz attractor in the extended Lorenz model
NASA Astrophysics Data System (ADS)
Ovsyannikov, I. I.; Turaev, D. V.
2017-01-01
We give an analytic (free of computer assistance) proof of the existence of a classical Lorenz attractor for an open set of parameter values of the Lorenz model in the form of Yudovich-Morioka-Shimizu. The proof is based on detection of a homoclinic butterfly with a zero saddle value and rigorous verification of one of the Shilnikov criteria for the birth of the Lorenz attractor; we also supply a proof for this criterion. The results are applied in order to give an analytic proof for the existence of a robust, pseudohyperbolic strange attractor (the so-called discrete Lorenz attractor) for an open set of parameter values in a 4-parameter family of 3D Henon-like diffeomorphisms.
NASA Astrophysics Data System (ADS)
Jafari, S.; Hojjati, M. H.
2011-12-01
Rotating disks work mostly at high angular velocity and this results a large centrifugal force and consequently induce large stresses and deformations. Minimizing weight of such disks yields to benefits such as low dead weights and lower costs. This paper aims at finding an optimal disk thickness profile for minimum weight design using the simulated annealing (SA) and particle swarm optimization (PSO) as two modern optimization techniques. In using semi-analytical the radial domain of the disk is divided into some virtual sub-domains as rings where the weight of each rings must be minimized. Inequality constrain equation used in optimization is to make sure that maximum von Mises stress is always less than yielding strength of the material of the disk and rotating disk does not fail. The results show that the minimum weight obtained for all two methods is almost identical. The PSO method gives a profile with slightly less weight (6.9% less than SA) while the implementation of both PSO and SA methods are easy and provide more flexibility compared with classical methods.
Inulin determination for food labeling.
Zuleta, A; Sambucetti, M E
2001-10-01
Inulin and oligofructose exhibit valuable nutritional and functional attributes, so they are used as supplements as soluble fiber or as macronutrient substitutes. As classic analytical methods for dietary fiber measurement are not effective, several specific methods have been proposed. These methods measure total fructans and are based on one or more enzymatic sample treatments and determination of released sugars. To determine inulin for labeling purposes, we developed an easy and rapid anion-exchange high-performance liquid chromatography (HPLC) method following water extraction of inulin. HPLC conditions included an Aminex HPX- 87C column (Bio-Rad), deionized water at 85 degrees C as the mobile phase and a refractive index detector. The tested foods included tailor-made food products containing known amounts of inulin and commercial products (cookies, milk, ice creams, cheese, and cereal bars). The average recovery was 97%, and the coefficient of variation ranged from 1.1 to 5% in the food matrixes. The obtained results showed that this method provides an easier, faster and cheaper alternative than previous techniques for determining inulin with enough accuracy and precision for routine labeling purposes by direct determination of inulin by HPLC with refractive index detection.
NASA Astrophysics Data System (ADS)
Deco, Gustavo; Martí, Daniel
2007-03-01
The analysis of transitions in stochastic neurodynamical systems is essential to understand the computational principles that underlie those perceptual and cognitive processes involving multistable phenomena, like decision making and bistable perception. To investigate the role of noise in a multistable neurodynamical system described by coupled differential equations, one usually considers numerical simulations, which are time consuming because of the need for sufficiently many trials to capture the statistics of the influence of the fluctuations on that system. An alternative analytical approach involves the derivation of deterministic differential equations for the moments of the distribution of the activity of the neuronal populations. However, the application of the method of moments is restricted by the assumption that the distribution of the state variables of the system takes on a unimodal Gaussian shape. We extend in this paper the classical moments method to the case of bimodal distribution of the state variables, such that a reduced system of deterministic coupled differential equations can be derived for the desired regime of multistability.
Subtle Monte Carlo Updates in Dense Molecular Systems.
Bottaro, Sandro; Boomsma, Wouter; E Johansson, Kristoffer; Andreetta, Christian; Hamelryck, Thomas; Ferkinghoff-Borg, Jesper
2012-02-14
Although Markov chain Monte Carlo (MC) simulation is a potentially powerful approach for exploring conformational space, it has been unable to compete with molecular dynamics (MD) in the analysis of high density structural states, such as the native state of globular proteins. Here, we introduce a kinetic algorithm, CRISP, that greatly enhances the sampling efficiency in all-atom MC simulations of dense systems. The algorithm is based on an exact analytical solution to the classic chain-closure problem, making it possible to express the interdependencies among degrees of freedom in the molecule as correlations in a multivariate Gaussian distribution. We demonstrate that our method reproduces structural variation in proteins with greater efficiency than current state-of-the-art Monte Carlo methods and has real-time simulation performance on par with molecular dynamics simulations. The presented results suggest our method as a valuable tool in the study of molecules in atomic detail, offering a potential alternative to molecular dynamics for probing long time-scale conformational transitions.
On the dynamics of a generalized predator-prey system with Z-type control.
Lacitignola, Deborah; Diele, Fasma; Marangi, Carmela; Provenzale, Antonello
2016-10-01
We apply the Z-control approach to a generalized predator-prey system and consider the specific case of indirect control of the prey population. We derive the associated Z-controlled model and investigate its properties from the point of view of the dynamical systems theory. The key role of the design parameter λ for the successful application of the method is stressed and related to specific dynamical properties of the Z-controlled model. Critical values of the design parameter are also found, delimiting the λ-range for the effectiveness of the Z-method. Analytical results are then numerically validated by the means of two ecological models: the classical Lotka-Volterra model and a model related to a case study of the wolf-wild boar dynamics in the Alta Murgia National Park. Investigations on these models also highlight how the Z-control method acts in respect to different dynamical regimes of the uncontrolled model. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Diller, Christian; Karic, Sarah; Oberding, Sarah
2017-06-01
The topic of this article ist the question, in which phases oft he political planning process planners apply their methodological set of tools. That for the results of a research-project are presented, which were gained by an examination of planning-cases in learned journals. Firstly it is argued, which model oft he planning-process is most suitable to reflect the regarded cases and how it is positioned to models oft he political process. Thereafter it is analyzed, which types of planning methods are applied in the several stages oft he planning process. The central findings: Although complex, many planning processes can be thouroughly pictured by a linear modell with predominantly simple feedback loops. Even in times of he communicative turn, concerning their set of tools, planners should pay attention to apply not only communicative methods but as well the classical analytical-rational methods. They are helpful especially for the understanding of the political process before and after the actual planning phase.
Capriotti, Anna Laura; Cavaliere, Chiara; Foglia, Patrizia; La Barbera, Giorgia; Samperi, Roberto; Ventura, Salvatore; Laganà, Aldo
2016-12-01
Recently, magnetic solid-phase extraction has gained interest because it presents various operational advantages over classical solid-phase extraction. Furthermore, magnetic nanoparticles are easy to prepare, and various materials can be used in their synthesis. In the literature, there are only few studies on the determination of mycoestrogens in milk, although their carryover in milk has occurred. In this work, we wanted to develop the first (to the best of our knowledge) magnetic solid-phase extraction protocol for six mycoestrogens from milk, followed by liquid chromatography and tandem mass spectrometry analysis. Magnetic graphitized carbon black was chosen as the adsorbent, as this carbonaceous material, which is very different from the most diffuse graphene and carbon nanotubes, had already shown selectivity towards estrogenic compounds in milk. The graphitized carbon black was decorated with Fe 3 O 4 , which was confirmed by the characterization analyses. A milk deproteinization step was avoided, using only a suitable dilution in phosphate buffer as sample pretreatment. The overall process efficiency ranged between 52 and 102%, whereas the matrix effect considered as signal suppression was below 33% for all the analytes even at the lowest spiking level. The obtained method limits of quantification were below those of other published methods that employ classical solid-phase extraction protocols. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
NASA Astrophysics Data System (ADS)
Contreras-Reyes, Eduardo; Garay, Jeremías
2018-01-01
The outer rise is a topographic bulge seaward of the trench at a subduction zone that is caused by bending and flexure of the oceanic lithosphere as subduction commences. The classic model of the flexure of oceanic lithosphere w (x) is a hydrostatic restoring force acting upon an elastic plate at the trench axis. The governing parameters are elastic thickness Te, shear force V0, and bending moment M0. V0 and M0 are unknown variables that are typically replaced by other quantities such as the height of the fore-bulge, wb, and the half-width of the fore-bulge, (xb - xo). However, this method is difficult to implement with the presence of excessive topographic noise around the bulge of the outer rise. Here, we present an alternative method to the classic model, in which lithospheric flexure w (x) is a function of the flexure at the trench axis w0, the initial dip angle of subduction β0, and the elastic thickness Te. In this investigation, we apply a sensitivity analysis to both methods in order to determine the impact of the differing parameters on the solution, w (x). The parametric sensitivity analysis suggests that stable solutions for the alternative approach requires relatively low β0 values (<15°), which are consistent with the initial dip angles observed in seismic velocity-depth models across convergent margins worldwide. The predicted flexure for both methods are compared with observed bathymetric profiles across the Izu-Mariana trench, where the old and cold Pacific plate is characterized by a pronounced outer rise bulge. The alternative method is a more suitable approach, assuming that accurate geometric information at the trench axis (i.e., w0 and β0) is available.
Zou, Ling; Zhao, Haihua; Kim, Seung Jun
2016-11-16
In this study, the classical Welander’s oscillatory natural circulation problem is investigated using high-order numerical methods. As originally studied by Welander, the fluid motion in a differentially heated fluid loop can exhibit stable, weakly instable, and strongly instable modes. A theoretical stability map has also been originally derived from the stability analysis. Numerical results obtained in this paper show very good agreement with Welander’s theoretical derivations. For stable cases, numerical results from both the high-order and low-order numerical methods agree well with the non-dimensional flow rate analytically derived. The high-order numerical methods give much less numerical errors compared to themore » low-order methods. For stability analysis, the high-order numerical methods could perfectly predict the stability map, while the low-order numerical methods failed to do so. For all theoretically unstable cases, the low-order methods predicted them to be stable. The result obtained in this paper is a strong evidence to show the benefits of using high-order numerical methods over the low-order ones, when they are applied to simulate natural circulation phenomenon that has already gain increasing interests in many future nuclear reactor designs.« less
2013-01-01
Background A conventional gravimetry and electro-gravimetry study has been carried out for the precise and accurate purity determination of lead (Pb) in high purity lead stick and for preparation of reference standard. Reference materials are standards containing a known amount of an analyte and provide a reference value to determine unknown concentrations or to calibrate analytical instruments. A stock solution of approximate 2 kg has been prepared after dissolving approximate 2 g of Pb stick in 5% ultra pure nitric acid. From the stock solution five replicates of approximate 50 g have been taken for determination of purity by each method. The Pb has been determined as PbSO4 by conventional gravimetry, as PbO2 by electro gravimetry. The percentage purity of the metallic Pb was calculated accordingly from PbSO4 and PbO2. Results On the basis of experimental observations it has been concluded that by conventional gravimetry and electro-gravimetry the purity of Pb was found to be 99.98 ± 0.24 and 99.97 ± 0.27 g/100 g and on the basis of Pb purity the concentration of reference standard solutions were found to be 1000.88 ± 2.44 and 1000.81 ± 2.68 mg kg-1 respectively with 95% confidence level (k = 2). The uncertainty evaluation has also been carried out in Pb determination following EURACHEM/GUM guidelines. The final analytical results quantifying uncertainty fulfills this requirement and gives a measure of the confidence level of the concerned laboratory. Conclusions Gravimetry is the most reliable technique in comparison to titremetry and instrumental method and the results of gravimetry are directly traceable to SI unit. Gravimetric analysis, if methods are followed carefully, provides for exceedingly precise analysis. In classical gravimetry the major uncertainties are due to repeatability but in electro-gravimetry several other factors also affect the final results. PMID:23800080
Singh, Nahar; Singh, Niranjan; Tripathy, S Swarupa; Soni, Daya; Singh, Khem; Gupta, Prabhat K
2013-06-26
A conventional gravimetry and electro-gravimetry study has been carried out for the precise and accurate purity determination of lead (Pb) in high purity lead stick and for preparation of reference standard. Reference materials are standards containing a known amount of an analyte and provide a reference value to determine unknown concentrations or to calibrate analytical instruments. A stock solution of approximate 2 kg has been prepared after dissolving approximate 2 g of Pb stick in 5% ultra pure nitric acid. From the stock solution five replicates of approximate 50 g have been taken for determination of purity by each method. The Pb has been determined as PbSO4 by conventional gravimetry, as PbO2 by electro gravimetry. The percentage purity of the metallic Pb was calculated accordingly from PbSO4 and PbO2. On the basis of experimental observations it has been concluded that by conventional gravimetry and electro-gravimetry the purity of Pb was found to be 99.98 ± 0.24 and 99.97 ± 0.27 g/100 g and on the basis of Pb purity the concentration of reference standard solutions were found to be 1000.88 ± 2.44 and 1000.81 ± 2.68 mg kg-1 respectively with 95% confidence level (k = 2). The uncertainty evaluation has also been carried out in Pb determination following EURACHEM/GUM guidelines. The final analytical results quantifying uncertainty fulfills this requirement and gives a measure of the confidence level of the concerned laboratory. Gravimetry is the most reliable technique in comparison to titremetry and instrumental method and the results of gravimetry are directly traceable to SI unit. Gravimetric analysis, if methods are followed carefully, provides for exceedingly precise analysis. In classical gravimetry the major uncertainties are due to repeatability but in electro-gravimetry several other factors also affect the final results.
Instantaneous Frequency Attribute Comparison
NASA Astrophysics Data System (ADS)
Yedlin, M. J.; Margrave, G. F.; Ben Horin, Y.
2013-12-01
The instantaneous seismic data attribute provides a different means of seismic interpretation, for all types of seismic data. It first came to the fore in exploration seismology in the classic paper of Taner et al (1979), entitled " Complex seismic trace analysis". Subsequently a vast literature has been accumulated on the subject, which has been given an excellent review by Barnes (1992). In this research we will compare two different methods of computation of the instantaneous frequency. The first method is based on the original idea of Taner et al (1979) and utilizes the derivative of the instantaneous phase of the analytic signal. The second method is based on the computation of the power centroid of the time-frequency spectrum, obtained using either the Gabor Transform as computed by Margrave et al (2011) or the Stockwell Transform as described by Stockwell et al (1996). We will apply both methods to exploration seismic data and the DPRK events recorded in 2006 and 2013. In applying the classical analytic signal technique, which is known to be unstable, due to the division of the square of the envelope, we will incorporate the stabilization and smoothing method proposed in the two paper of Fomel (2007). This method employs linear inverse theory regularization coupled with the application of an appropriate data smoother. The centroid method application is straightforward and is based on the very complete theoretical analysis provided in elegant fashion by Cohen (1995). While the results of the two methods are very similar, noticeable differences are seen at the data edges. This is most likely due to the edge effects of the smoothing operator in the Fomel method, which is more computationally intensive, when an optimal search of the regularization parameter is done. An advantage of the centroid method is the intrinsic smoothing of the data, which is inherent in the sliding window application used in all Short-Time Fourier Transform methods. The Fomel technique has a larger CPU run-time, resulting from the necessary matrix inversion. Barnes, Arthur E. "The calculation of instantaneous frequency and instantaneous bandwidth.", Geophysics, 57.11 (1992): 1520-1524. Fomel, Sergey. "Local seismic attributes.", Geophysics, 72.3 (2007): A29-A33. Fomel, Sergey. "Shaping regularization in geophysical-estimation problems." , Geophysics, 72.2 (2007): R29-R36. Stockwell, Robert Glenn, Lalu Mansinha, and R. P. Lowe. "Localization of the complex spectrum: the S transform."Signal Processing, IEEE Transactions on, 44.4 (1996): 998-1001. Taner, M. Turhan, Fulton Koehler, and R. E. Sheriff. "Complex seismic trace analysis." Geophysics, 44.6 (1979): 1041-1063. Cohen, Leon. "Time frequency analysis theory and applications."USA: Prentice Hall, (1995). Margrave, Gary F., Michael P. Lamoureux, and David C. Henley. "Gabor deconvolution: Estimating reflectivity by nonstationary deconvolution of seismic data." Geophysics, 76.3 (2011): W15-W30.
Universal scaling for the quantum Ising chain with a classical impurity
NASA Astrophysics Data System (ADS)
Apollaro, Tony J. G.; Francica, Gianluca; Giuliano, Domenico; Falcone, Giovanni; Palma, G. Massimo; Plastina, Francesco
2017-10-01
We study finite-size scaling for the magnetic observables of an impurity residing at the end point of an open quantum Ising chain with transverse magnetic field, realized by locally rescaling the field by a factor μ ≠1 . In the homogeneous chain limit at μ =1 , we find the expected finite-size scaling for the longitudinal impurity magnetization, with no specific scaling for the transverse magnetization. At variance, in the classical impurity limit μ =0 , we recover finite scaling for the longitudinal magnetization, while the transverse one basically does not scale. We provide both analytic approximate expressions for the magnetization and the susceptibility as well as numerical evidences for the scaling behavior. At intermediate values of μ , finite-size scaling is violated, and we provide a possible explanation of this result in terms of the appearance of a second, impurity-related length scale. Finally, by going along the standard quantum-to-classical mapping between statistical models, we derive the classical counterpart of the quantum Ising chain with an end-point impurity as a classical Ising model on a square lattice wrapped on a half-infinite cylinder, with the links along the first circle modified as a function of μ .
Information-theoretic metamodel of organizational evolution
NASA Astrophysics Data System (ADS)
Sepulveda, Alfredo
2011-12-01
Social organizations are abstractly modeled by holarchies---self-similar connected networks---and intelligent complex adaptive multiagent systems---large networks of autonomous reasoning agents interacting via scaled processes. However, little is known of how information shapes evolution in such organizations, a gap that can lead to misleading analytics. The research problem addressed in this study was the ineffective manner in which classical model-predict-control methods used in business analytics attempt to define organization evolution. The purpose of the study was to construct an effective metamodel for organization evolution based on a proposed complex adaptive structure---the info-holarchy. Theoretical foundations of this study were holarchies, complex adaptive systems, evolutionary theory, and quantum mechanics, among other recently developed physical and information theories. Research questions addressed how information evolution patterns gleamed from the study's inductive metamodel more aptly explained volatility in organization. In this study, a hybrid grounded theory based on abstract inductive extensions of information theories was utilized as the research methodology. An overarching heuristic metamodel was framed from the theoretical analysis of the properties of these extension theories and applied to business, neural, and computational entities. This metamodel resulted in the synthesis of a metaphor for, and generalization of organization evolution, serving as the recommended and appropriate analytical tool to view business dynamics for future applications. This study may manifest positive social change through a fundamental understanding of complexity in business from general information theories, resulting in more effective management.
Classical Wigner method with an effective quantum force: application to reaction rates.
Poulsen, Jens Aage; Li, Huaqing; Nyman, Gunnar
2009-07-14
We construct an effective "quantum force" to be used in the classical molecular dynamics part of the classical Wigner method when determining correlation functions. The quantum force is obtained by estimating the most important short time separation of the Feynman paths that enter into the expression for the correlation function. The evaluation of the force is then as easy as classical potential energy evaluations. The ideas are tested on three reaction rate problems. The resulting transmission coefficients are in much better agreement with accurate results than transmission coefficients from the ordinary classical Wigner method.
The pragmatic roots of American Quaternary geology and geomorphology
NASA Astrophysics Data System (ADS)
Baker, Victor R.
1996-07-01
H.L. Fairchild's words from the 1904 Geological Society of America Bulletin remain appropriate today: "Geologists have been too generous in allowing other people to make their philosophy for them". Geologists have quietly followed a methodological trinity involving (1) inspiration by analogy, (2) impartial and critical assessment of hypotheses, and (3) skepticism of authority (prevailing theoretical constraints or paradigms). These methods are described in classical papers by Quaternary geologists and geomorphologists, mostly written a century ago. In recent years these papers have all been criticized in modern philosophical terms with little appreciation for the late 19th century American philosophical tradition from which they arose. Recent scholarly research, however, has revealed some important aspects of that tradition, giving it a coherence that has largely been underappreciated as 20th century philosophy of science pursued its successive fads of logical positivism, critical rationalism, relativism, and deconstructivism — for all of which "science" is synonymous with "physics". Nearly all this ideology is geologically irrelevant. As philosophy of science in the late 20th century has come to be identical with philosophy of analytical physics, focused on explanations via ideal truths, much of geology has remained true to its classical doctrines of commonsensism, fallibilism, and realism. In contrast to the conceptualism and the reductionism of the analytical sciences, geology has emphasized synthetic thinking: the continuous activity of comparing, connecting, and putting together thoughts and perceptions. The classical methodological studies of geological reasoning all concern the formulation and testing of hypotheses. Analysis does not serve to provide the ultimate answers for intellectual puzzles predefined by limiting assumptions imposed on the real world. Rather, analysis in geology allows the investigator to consider the consequential effects of hypotheses, the latter having been suggested by experience with nature itself rather than by our theories of nature. These distinctions and methods were described in G.K. Gilbert's papers on "The Inculcation of Scientific Method by Example" (1886) and "the Origin of Hypotheses" (1896). Portions were elaborated in T.C. Chamberlin's "Method of Multiple Working Hypotheses" (1890) and his "method of the Earth Sciences" (1904); in W.M. Davis's "Value of Outrageous Geological Hypotheses" (1926); and in D. Johnson's "Role of Analysis in Scientific Investigation" (1933). American Quaternary geology and geomorphology have their philosophical roots in the pragmatic tradition, enunciated most clearly by C.S. Peirce, now recognized as the greatest American philosopher and considered by Sir Karl Popper to be one of the greatest philosophers of all time. Quaternary geology and geomorphology afford numerous examples of Peirce's "method" of science, which might be termed "the critical philosophy of common sense". The most obvious influence of pragmatism in geology, however, has largely been conveyed by the tradition of its scientific community. The elements of this tradition include a reverence for field work, a humility before the "facts" of nature, a continuing effort "to discriminate the phenomena observed from the observer's inference in regard to them", a propensity to pose hypotheses, and a willingness to abandon them when their consequences are contradicted by reality.
Protein assay structured on paper by using lithography
NASA Astrophysics Data System (ADS)
Wilhelm, E.; Nargang, T. M.; Al Bitar, W.; Waterkotte, B.; Rapp, B. E.
2015-03-01
There are two main challenges in producing a robust, paper-based analytical device. The first one is to create a hydrophobic barrier which unlike the commonly used wax barriers does not break if the paper is bent. The second one is the creation of the (bio-)specific sensing layer. For this proteins have to be immobilized without diminishing their activity. We solve both problems using light-based fabrication methods that enable fast, efficient manufacturing of paper-based analytical devices. The first technique relies on silanization by which we create a flexible hydrophobic barrier made of dimethoxydimethylsilane. The second technique demonstrated within this paper uses photobleaching to immobilize proteins by means of maskless projection lithography. Both techniques have been tested on a classical lithography setup using printed toner masks and on a lithography system for maskless lithography. Using these setups we could demonstrate that the proposed manufacturing techniques can be carried out at low costs. The resolution of the paper-based analytical devices obtained with static masks was lower due to the lower mask resolution. Better results were obtained using advanced lithography equipment. By doing so we demonstrated, that our technique enables fabrication of effective hydrophobic boundary layers with a thickness of only 342 μm. Furthermore we showed that flourescine-5-biotin can be immobilized on the non-structured paper and be employed for the detection of streptavidinalkaline phosphatase. By carrying out this assay on a paper-based analytical device which had been structured using the silanization technique we proofed biological compatibility of the suggested patterning technique.
KvN mechanics approach to the time-dependent frequency harmonic oscillator.
Ramos-Prieto, Irán; Urzúa-Pineda, Alejandro R; Soto-Eguibar, Francisco; Moya-Cessa, Héctor M
2018-05-30
Using the Ermakov-Lewis invariants appearing in KvN mechanics, the time-dependent frequency harmonic oscillator is studied. The analysis builds upon the operational dynamical model, from which it is possible to infer quantum or classical dynamics; thus, the mathematical structure governing the evolution will be the same in both cases. The Liouville operator associated with the time-dependent frequency harmonic oscillator can be transformed using an Ermakov-Lewis invariant, which is also time dependent and commutes with itself at any time. Finally, because the solution of the Ermakov equation is involved in the evolution of the classical state vector, we explore some analytical and numerical solutions.
Inertia effects in thin film flow with a corrugated boundary
NASA Technical Reports Server (NTRS)
Serbetci, Ilter; Tichy, John A.
1991-01-01
An analytical solution is presented for two-dimensional, incompressible film flow between a sinusoidally grooved (or rough) surface and a flat-surface. The upper grooved surface is stationary whereas the lower, smooth surface moves with a constant speed. The Navier-Stokes equations were solved employing both mapping techniques and perturbation expansions. Due to the inclusion of the inertia effects, a different pressure distribution is obtained than predicted by the classical lubrication theory. In particular, the amplitude of the pressure distribution of the classical lubrication theory is found to be in error by over 100 perent (for modified Reynolds number of 3-4).
NASA Astrophysics Data System (ADS)
Chen, Shanzhen; Jiang, Xiaoyun
2012-08-01
In this paper, analytical solutions to time-fractional partial differential equations in a multi-layer annulus are presented. The final solutions are obtained in terms of Mittag-Leffler function by using the finite integral transform technique and Laplace transform technique. In addition, the classical diffusion equation (α=1), the Helmholtz equation (α→0) and the wave equation (α=2) are discussed as special cases. Finally, an illustrative example problem for the three-layer semi-circular annular region is solved and numerical results are presented graphically for various kind of order of fractional derivative.
Approximation methods of European option pricing in multiscale stochastic volatility model
NASA Astrophysics Data System (ADS)
Ni, Ying; Canhanga, Betuel; Malyarenko, Anatoliy; Silvestrov, Sergei
2017-01-01
In the classical Black-Scholes model for financial option pricing, the asset price follows a geometric Brownian motion with constant volatility. Empirical findings such as volatility smile/skew, fat-tailed asset return distributions have suggested that the constant volatility assumption might not be realistic. A general stochastic volatility model, e.g. Heston model, GARCH model and SABR volatility model, in which the variance/volatility itself follows typically a mean-reverting stochastic process, has shown to be superior in terms of capturing the empirical facts. However in order to capture more features of the volatility smile a two-factor, of double Heston type, stochastic volatility model is more useful as shown in Christoffersen, Heston and Jacobs [12]. We consider one modified form of such two-factor volatility models in which the volatility has multiscale mean-reversion rates. Our model contains two mean-reverting volatility processes with a fast and a slow reverting rate respectively. We consider the European option pricing problem under one type of the multiscale stochastic volatility model where the two volatility processes act as independent factors in the asset price process. The novelty in this paper is an approximating analytical solution using asymptotic expansion method which extends the authors earlier research in Canhanga et al. [5, 6]. In addition we propose a numerical approximating solution using Monte-Carlo simulation. For completeness and for comparison we also implement the semi-analytical solution by Chiarella and Ziveyi [11] using method of characteristics, Fourier and bivariate Laplace transforms.
Rational approximations from power series of vector-valued meromorphic functions
NASA Technical Reports Server (NTRS)
Sidi, Avram
1992-01-01
Let F(z) be a vector-valued function, F: C yields C(sup N), which is analytic at z = 0 and meromorphic in a neighborhood of z = 0, and let its Maclaurin series be given. In this work we developed vector-valued rational approximation procedures for F(z) by applying vector extrapolation methods to the sequence of partial sums of its Maclaurin series. We analyzed some of the algebraic and analytic properties of the rational approximations thus obtained, and showed that they were akin to Pade approximations. In particular, we proved a Koenig type theorem concerning their poles and a de Montessus type theorem concerning their uniform convergence. We showed how optical approximations to multiple poles and to Laurent expansions about these poles can be constructed. Extensions of the procedures above and the accompanying theoretical results to functions defined in arbitrary linear spaces was also considered. One of the most interesting and immediate applications of the results of this work is to the matrix eigenvalue problem. In a forthcoming paper we exploited the developments of the present work to devise bona fide generalizations of the classical power method that are especially suitable for very large and sparse matrices. These generalizations can be used to approximate simultaneously several of the largest distinct eigenvalues and corresponding eigenvectors and invariant subspaces of arbitrary matrices which may or may not be diagonalizable, and are very closely related with known Krylov subspace methods.
Most Wired 2006: measuring value.
Solovy, Alden
2006-07-01
As the Most Wired hospitals incorporate information technology into their strategic plans, they combine a"balanced scorecard"approach with classic business analytics to measure how well IT delivers on their goals. To find out which organizations made this year's 100 Most Wired list, as well as those named in other survey categories, go to the foldout section.
Hypervelocity Aerodynamics and Control
1990-06-06
because of our inability to integrate the costate equiations analytically. Nevertheless, if the solution 5. If fX (T!) - X3, < and 1X4(7T) - -4,d1 < to...domain can be ex- it is independent of vo. This can be explained that actly calculated using the classical equations in astro - although the reachable
Singularities in the classical Rayleigh-Taylor flow - Formation and subsequent motion
NASA Technical Reports Server (NTRS)
Tanveer, S.
1993-01-01
The creation and subsequent motion of singularities of solution to classical Rayleigh-Taylor flow (two dimensional inviscid, incompressible fluid over a vacuum) are discussed. For a specific set of initial conditions, we give analytical evidence to suggest the instantaneous formation of one or more singularities at specific points in the unphysical plane, whose locations depend sensitively on small changes in initial conditions in the physical domain. One-half power singularities are created in accordance with an earlier conjecture; however, depending on initial conditions, other forms of singularities are also possible. For a specific initial condition, we follow a numerical procedure in the unphysical plane to compute the motion of a one-half singularity. This computation confirms our previous conjecture that the approach of a one-half singularity towards the physical domain corresponds to the development of a spike at the physical interface. Under some assumptions that appear to be consistent with numerical calculations, we present analytical evidence to suggest that a singularity of the one-half type cannot impinge the physical domain in finite time.
Singularities in the classical Rayleigh-Taylor flow: Formation and subsequent motion
NASA Technical Reports Server (NTRS)
Tanveer, S.
1992-01-01
The creation and subsequent motion of singularities of solution to classical Rayleigh-Taylor flow (two dimensional inviscid, incompressible fluid over a vacuum) are discussed. For a specific set of initial conditions, we give analytical evidence to suggest the instantaneous formation of one or more singularities at specific points in the unphysical plane, whose locations depend sensitively on small changes in initial conditions in the physical domain. One-half power singularities are created in accordance with an earlier conjecture; however, depending on initial conditions, other forms of singularities are also possible. For a specific initial condition, we follow a numerical procedure in the unphysical plane to compute the motion of a one-half singularity. This computation confirms our previous conjecture that the approach of a one-half singularity towards the physical domain corresponds to the development of a spike at the physical interface. Under some assumptions that appear to be consistent with numerical calculations, we present analytical evidence to suggest that a singularity of the one-half type cannot impinge the physical domain in finite time.
Mechanical Properties of Laminate Materials: From Surface Waves to Bloch Oscillations
NASA Astrophysics Data System (ADS)
Liang, Z.; Willatzen, M.; Christensen, J.
2015-10-01
We propose hitherto unexplored and fully analytical insights into laminate elastic materials in a true condensed-matter-physics spirit. Pure mechanical surface waves that decay as evanescent waves from the interface are discussed, and we demonstrate how these designer Scholte waves are controlled by the geometry as opposed to the material alone. The linear surface wave dispersion is modulated by the crystal filling fraction such that the degree of confinement can be engineered without relying on narrow-band resonances but on effective stiffness moduli. In the same context, we provide a theoretical recipe for designing Bloch oscillations in classical plate structures and show how mechanical Bloch oscillations can be generated in arrays of solid plates when the modal wavelength is gradually reduced. The design recipe describes how Bloch oscillations in classical structures of arbitrary dimensions can be generated, and we demonstrate this numerically for structures with millimeter and centimeter dimensions in the kilohertz to megahertz range. Analytical predictions agree entirely with full wave simulations showing how elastodynamics can mimic quantum-mechanical condensed-matter phenomena.
Quantum Hamilton equations of motion for bound states of one-dimensional quantum systems
NASA Astrophysics Data System (ADS)
Köppe, J.; Patzold, M.; Grecksch, W.; Paul, W.
2018-06-01
On the basis of Nelson's stochastic mechanics derivation of the Schrödinger equation, a formal mathematical structure of non-relativistic quantum mechanics equivalent to the one in classical analytical mechanics has been established in the literature. We recently were able to augment this structure by deriving quantum Hamilton equations of motion by finding the Nash equilibrium of a stochastic optimal control problem, which is the generalization of Hamilton's principle of classical mechanics to quantum systems. We showed that these equations allow a description and numerical determination of the ground state of quantum problems without using the Schrödinger equation. We extend this approach here to deliver the complete discrete energy spectrum and related eigenfunctions for bound states of one-dimensional stationary quantum systems. We exemplify this analytically for the one-dimensional harmonic oscillator and numerically by analyzing a quartic double-well potential, a model of broad importance in many areas of physics. We furthermore point out a relation between the tunnel splitting of such models and mean first passage time concepts applied to Nelson's diffusion paths in the ground state.
Dynamics and Novel Mechanisms of SN2 Reactions on ab Initio Analytical Potential Energy Surfaces.
Szabó, István; Czakó, Gábor
2017-11-30
We describe a novel theoretical approach to the bimolecular nucleophilic substitution (S N 2) reactions that is based on analytical potential energy surfaces (PESs) obtained by fitting a few tens of thousands high-level ab initio energy points. These PESs allow computing millions of quasi-classical trajectories thereby providing unprecedented statistical accuracy for S N 2 reactions, as well as performing high-dimensional quantum dynamics computations. We developed full-dimensional ab initio PESs for the F - + CH 3 Y [Y = F, Cl, I] systems, which describe the direct and indirect, complex-forming Walden-inversion, the frontside attack, and the new double-inversion pathways as well as the proton-transfer channels. Reaction dynamics simulations on the new PESs revealed (a) a novel double-inversion S N 2 mechanism, (b) frontside complex formation, (c) the dynamics of proton transfer, (d) vibrational and rotational mode specificity, (e) mode-specific product vibrational distributions, (f) agreement between classical and quantum dynamics, (g) good agreement with measured scattering angle and product internal energy distributions, and (h) significant leaving group effect in accord with experiments.
2016-04-07
Multivariate UV-spectrophotometric methods and Quality by Design (QbD) HPLC are described for concurrent estimation of avanafil (AV) and dapoxetine (DP) in the binary mixture and in the dosage form. Chemometric methods have been developed, including classical least-squares, principal component regression, partial least-squares, and multiway partial least-squares. Analytical figures of merit, such as sensitivity, selectivity, analytical sensitivity, LOD, and LOQ were determined. QbD consists of three steps, starting with the screening approach to determine the critical process parameter and response variables. This is followed by understanding of factors and levels, and lastly the application of a Box-Behnken design containing four critical factors that affect the method. From an Ishikawa diagram and a risk assessment tool, four main factors were selected for optimization. Design optimization, statistical calculation, and final-condition optimization of all the reactions were Carried out. Twenty-five experiments were done, and a quadratic model was used for all response variables. Desirability plot, surface plot, design space, and three-dimensional plots were calculated. In the optimized condition, HPLC separation was achieved on Phenomenex Gemini C18 column (250 × 4.6 mm, 5 μm) using acetonitrile-buffer (ammonium acetate buffer at pH 3.7 with acetic acid) as a mobile phase at flow rate of 0.7 mL/min. Quantification was done at 239 nm, and temperature was set at 20°C. The developed methods were validated and successfully applied for simultaneous determination of AV and DP in the dosage form.
Improving the Method of Roof Fall Susceptibility Assessment based on Fuzzy Approach
NASA Astrophysics Data System (ADS)
Ghasemi, Ebrahim; Ataei, Mohammad; Shahriar, Kourosh
2017-03-01
Retreat mining is always accompanied by a great amount of accidents and most of them are due to roof fall. Therefore, development of methodologies to evaluate the roof fall susceptibility (RFS) seems essential. Ghasemi et al. (2012) proposed a systematic methodology to assess the roof fall risk during retreat mining based on risk assessment classic approach. The main defect of this method is ignorance of subjective uncertainties due to linguistic input value of some factors, low resolution, fixed weighting, sharp class boundaries, etc. To remove this defection and improve the mentioned method, in this paper, a novel methodology is presented to assess the RFS using fuzzy approach. The application of fuzzy approach provides an effective tool to handle the subjective uncertainties. Furthermore, fuzzy analytical hierarchy process (AHP) is used to structure and prioritize various risk factors and sub-factors during development of this method. This methodology is applied to identify the susceptibility of roof fall occurrence in main panel of Tabas Central Mine (TCM), Iran. The results indicate that this methodology is effective and efficient in assessing RFS.
NASA Astrophysics Data System (ADS)
Sun, HongGuang; Liu, Xiaoting; Zhang, Yong; Pang, Guofei; Garrard, Rhiannon
2017-09-01
Fractional-order diffusion equations (FDEs) extend classical diffusion equations by quantifying anomalous diffusion frequently observed in heterogeneous media. Real-world diffusion can be multi-dimensional, requiring efficient numerical solvers that can handle long-term memory embedded in mass transport. To address this challenge, a semi-discrete Kansa method is developed to approximate the two-dimensional spatiotemporal FDE, where the Kansa approach first discretizes the FDE, then the Gauss-Jacobi quadrature rule solves the corresponding matrix, and finally the Mittag-Leffler function provides an analytical solution for the resultant time-fractional ordinary differential equation. Numerical experiments are then conducted to check how the accuracy and convergence rate of the numerical solution are affected by the distribution mode and number of spatial discretization nodes. Applications further show that the numerical method can efficiently solve two-dimensional spatiotemporal FDE models with either a continuous or discrete mixing measure. Hence this study provides an efficient and fast computational method for modeling super-diffusive, sub-diffusive, and mixed diffusive processes in large, two-dimensional domains with irregular shapes.
A Study Comparing the Pedagogical Effectiveness of Virtual Worlds and of Classical Methods
2014-08-01
Approved for public release; distribution is unlimited. A Study Comparing the Pedagogical Effectiveness of Virtual Worlds and of Classical Methods...ABSTRACT A Study Comparing the Pedagogical Effectiveness of Virtual Worlds and of Classical Methods Report Title This experiment tests whether a virtual... PEDAGOGICAL EFFECTIVENESS OF VIRTUAL WORLDS AND OF TRADITIONAL TRAINING METHODS A Thesis by BENJAMIN PETERS
On the anisotropic advection-diffusion equation with time dependent coefficients
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hernandez-Coronado, Hector; Coronado, Manuel; Del-Castillo-Negrete, Diego B.
The advection-diffusion equation with time dependent velocity and anisotropic time dependent diffusion tensor is examined in regard to its non-classical transport features and to the use of a non-orthogonal coordinate system. Although this equation appears in diverse physical problems, particularly in particle transport in stochastic velocity fields and in underground porous media, a detailed analysis of its solutions is lacking. In order to study the effects of the time-dependent coefficients and the anisotropic diffusion on transport, we solve analytically the equation for an initial Dirac delta pulse. Here, we discuss the solutions to three cases: one based on power-law correlationmore » functions where the pulse diffuses faster than the classical rate ~t, a second case specically designed to display slower rate of diffusion than the classical one, and a third case to describe hydrodynamic dispersion in porous media« less
On the anisotropic advection-diffusion equation with time dependent coefficients
Hernandez-Coronado, Hector; Coronado, Manuel; Del-Castillo-Negrete, Diego B.
2017-02-01
The advection-diffusion equation with time dependent velocity and anisotropic time dependent diffusion tensor is examined in regard to its non-classical transport features and to the use of a non-orthogonal coordinate system. Although this equation appears in diverse physical problems, particularly in particle transport in stochastic velocity fields and in underground porous media, a detailed analysis of its solutions is lacking. In order to study the effects of the time-dependent coefficients and the anisotropic diffusion on transport, we solve analytically the equation for an initial Dirac delta pulse. Here, we discuss the solutions to three cases: one based on power-law correlationmore » functions where the pulse diffuses faster than the classical rate ~t, a second case specically designed to display slower rate of diffusion than the classical one, and a third case to describe hydrodynamic dispersion in porous media« less
Generation of steady entanglement via unilateral qubit driving in bad cavities.
Jin, Zhao; Su, Shi-Lei; Zhu, Ai-Dong; Wang, Hong-Fu; Shen, Li-Tuo; Zhang, Shou
2017-12-15
We propose a scheme for generating an entangled state for two atoms trapped in two separate cavities coupled to each other. The scheme is based on the competition between the unitary dynamics induced by the classical fields and the collective decays induced by the dissipation of two non-local bosonic modes. In this scheme, only one qubit is driven by external classical fields, whereas the other need not be manipulated via classical driving. This is meaningful for experimental implementation between separate nodes of a quantum network. The steady entanglement can be obtained regardless of the initial state, and the robustness of the scheme against parameter fluctuations is numerically demonstrated. We also give an analytical derivation of the stationary fidelity to enable a discussion of the validity of this regime. Furthermore, based on the dissipative entanglement preparation scheme, we construct a quantum state transfer setup with multiple nodes as a practical application.
Constraints on Stress Components at the Internal Singular Point of an Elastic Compound Structure
NASA Astrophysics Data System (ADS)
Pestrenin, V. M.; Pestrenina, I. V.
2017-03-01
The classical analytical and numerical methods for investigating the stress-strain state (SSS) in the vicinity of a singular point consider the point as a mathematical one (having no linear dimensions). The reliability of the solution obtained by such methods is valid only outside a small vicinity of the singular point, because the macroscopic equations become incorrect and microscopic ones have to be used to describe the SSS in this vicinity. Also, it is impossible to set constraint or to formulate solutions in stress-strain terms for a mathematical point. These problems do not arise if the singular point is identified with the representative volume of material of the structure studied. In authors' opinion, this approach is consistent with the postulates of continuum mechanics. In this case, the formulation of constraints at a singular point and their investigation becomes an independent problem of mechanics for bodies with singularities. This method was used to explore constraints at an internal singular point (representative volume) of a compound wedge and a compound rib. It is shown that, in addition to the constraints given in the classical approach, there are also constraints depending on the macroscopic parameters of constituent materials. These constraints turn the problems of deformable bodies with an internal singular point into nonclassical ones. Combinations of material parameters determine the number of additional constraints and the critical stress state at the singular point. Results of this research can be used in the mechanics of composite materials and fracture mechanics and in studying stress concentrations in composite structural elements.
Fractional flow in fractured chalk; a flow and tracer test revisited.
Odling, N E; West, L J; Hartmann, S; Kilpatrick, A
2013-04-01
A multi-borehole pumping and tracer test in fractured chalk is revisited and reinterpreted in the light of fractional flow. Pumping test data analyzed using a fractional flow model gives sub-spherical flow dimensions of 2.2-2.4 which are interpreted as due to the partially penetrating nature of the pumped borehole. The fractional flow model offers greater versatility than classical methods for interpreting pumping tests in fractured aquifers but its use has been hampered because the hydraulic parameters derived are hard to interpret. A method is developed to convert apparent transmissivity and storativity (L(4-n)/T and S(2-n)) to conventional transmissivity and storativity (L2/T and dimensionless) for the case where flow dimension, 2
NASA Astrophysics Data System (ADS)
Holman, Benjamin R.
In recent years, revolutionary "hybrid" or "multi-physics" methods of medical imaging have emerged. By combining two or three different types of waves these methods overcome limitations of classical tomography techniques and deliver otherwise unavailable, potentially life-saving diagnostic information. Thermoacoustic (and photoacoustic) tomography is the most developed multi-physics imaging modality. Thermo- and photo- acoustic tomography require reconstructing initial acoustic pressure in a body from time series of pressure measured on a surface surrounding the body. For the classical case of free space wave propagation, various reconstruction techniques are well known. However, some novel measurement schemes place the object of interest between reflecting walls that form a de facto resonant cavity. In this case, known methods cannot be used. In chapter 2 we present a fast iterative reconstruction algorithm for measurements made at the walls of a rectangular reverberant cavity with a constant speed of sound. We prove the convergence of the iterations under a certain sufficient condition, and demonstrate the effectiveness and efficiency of the algorithm in numerical simulations. In chapter 3 we consider the more general problem of an arbitrarily shaped resonant cavity with a non constant speed of sound and present the gradual time reversal method for computing solutions to the inverse source problem. It consists in solving back in time on the interval [0, T] the initial/boundary value problem for the wave equation, with the Dirichlet boundary data multiplied by a smooth cutoff function. If T is sufficiently large one obtains a good approximation to the initial pressure; in the limit of large T such an approximation converges (under certain conditions) to the exact solution.
Torres-Climent, A; Gomis, P; Martín-Mata, J; Bustamante, M A; Marhuenda-Egea, F C; Pérez-Murcia, M D; Pérez-Espinosa, A; Paredes, C; Moral, R
2015-01-01
The objective of this work was to study the co-composting process of wastes from the winery and distillery industry with animal manures, using the classical chemical methods traditionally used in composting studies together with advanced instrumental methods (thermal analysis, FT-IR and CPMAS 13C NMR techniques), to evaluate the development of the process and the quality of the end-products obtained. For this, three piles were elaborated by the turning composting system, using as raw materials winery-distillery wastes (grape marc and exhausted grape marc) and animal manures (cattle manure and poultry manure). The classical analytical methods showed a suitable development of the process in all the piles, but these techniques were ineffective to study the humification process during the composting of this type of materials. However, their combination with the advanced instrumental techniques clearly provided more information regarding the turnover of the organic matter pools during the composting process of these materials. Thermal analysis allowed to estimate the degradability of the remaining material and to assess qualitatively the rate of OM stabilization and recalcitrant C in the compost samples, based on the energy required to achieve the same mass losses. FT-IR spectra mainly showed variations between piles and time of sampling in the bands associated to complex organic compounds (mainly at 1420 and 1540 cm-1) and to nitrate and inorganic components (at 875 and 1384 cm-1, respectively), indicating composted material stability and maturity; while CPMAS 13C NMR provided semi-quantitatively partition of C compounds and structures during the process, being especially interesting their variation to evaluate the biotransformation of each C pool, especially in the comparison of recalcitrant C vs labile C pools, such as Alkyl /O-Alkyl ratio.
Torres-Climent, A.; Gomis, P.; Martín-Mata, J.; Bustamante, M. A.; Marhuenda-Egea, F. C.; Pérez-Murcia, M. D.; Pérez-Espinosa, A.; Paredes, C.; Moral, R.
2015-01-01
The objective of this work was to study the co-composting process of wastes from the winery and distillery industry with animal manures, using the classical chemical methods traditionally used in composting studies together with advanced instrumental methods (thermal analysis, FT-IR and CPMAS 13C NMR techniques), to evaluate the development of the process and the quality of the end-products obtained. For this, three piles were elaborated by the turning composting system, using as raw materials winery-distillery wastes (grape marc and exhausted grape marc) and animal manures (cattle manure and poultry manure). The classical analytical methods showed a suitable development of the process in all the piles, but these techniques were ineffective to study the humification process during the composting of this type of materials. However, their combination with the advanced instrumental techniques clearly provided more information regarding the turnover of the organic matter pools during the composting process of these materials. Thermal analysis allowed to estimate the degradability of the remaining material and to assess qualitatively the rate of OM stabilization and recalcitrant C in the compost samples, based on the energy required to achieve the same mass losses. FT-IR spectra mainly showed variations between piles and time of sampling in the bands associated to complex organic compounds (mainly at 1420 and 1540 cm-1) and to nitrate and inorganic components (at 875 and 1384 cm-1, respectively), indicating composted material stability and maturity; while CPMAS 13C NMR provided semi-quantitatively partition of C compounds and structures during the process, being especially interesting their variation to evaluate the biotransformation of each C pool, especially in the comparison of recalcitrant C vs labile C pools, such as Alkyl /O-Alkyl ratio. PMID:26418458
Cosmic Experiments: Remaking Materialism and Daoist Ethic "Outside of the Establishment".
Zhan, Mei
2016-01-01
In this article, I discuss recent experiments in 'classical' (gudian) Chinese medicine. As the marketization and privatization of health care deepens and enters uncharted territories in China, a cohort of young practitioners and entrepreneurs have begun their quest for the 'primordial spirit' of traditional Chinese medicine by setting up their own businesses where they engage in clinical, pedagogical, and entrepreneurial practices outside of state-run institutions. I argue that these explorations in classical Chinese medicine, which focus on classical texts and Daoist analytics, do not aim to restore spirituality to the scientized and secularized theory of traditional Chinese medicine. Nor are they symptomatic of withdrawals from the modern world. Rather, these 'cosmic experiments' need to be understood in relation to dialectical and historical materialisms as modes of knowledge production and political alliance. In challenging the status of materialist theory and the process of theorization in traditional Chinese medicine and postsocialist life more broadly speaking, advocates of classical Chinese medicine imagine nondialectical materialisms as immanent ways of thinking, doing, and being in the world.
NASA Astrophysics Data System (ADS)
Beheshti, Alireza
2018-03-01
The contribution addresses the finite element analysis of bending of plates given the Kirchhoff-Love model. To analyze the static deformation of plates with different loadings and geometries, the principle of virtual work is used to extract the weak form. Following deriving the strain field, stresses and resultants may be obtained. For constructing four-node quadrilateral plate elements, the Hermite polynomials defined with respect to the variables in the parent space are applied explicitly. Based on the approximated field of displacement, the stiffness matrix and the load vector in the finite element method are obtained. To demonstrate the performance of the subparametric 4-node plate elements, some known, classical examples in structural mechanics are solved and there are comparisons with the analytical solutions available in the literature.
Li, Fumin; Ewles, Matthew; Pelzer, Mary; Brus, Theodore; Ledvina, Aaron; Gray, Nicholas; Koupaei-Abyazani, Mohammad; Blackburn, Michael
2013-10-01
Achieving sufficient selectivity in bioanalysis is critical to ensure accurate quantitation of drugs and metabolites in biological matrices. Matrix effects most classically refer to modification of ionization efficiency of an analyte in the presence of matrix components. However, nonanalyte or matrix components present in samples can adversely impact the performance of a bioanalytical method and are broadly considered as matrix effects. For the current manuscript, we expand the scope to include matrix elements that contribute to isobaric interference and measurement bias. These three categories of matrix effects are illustrated with real examples encountered. The causes, symptoms, and suggested strategies and resolutions for each form of matrix effects are discussed. Each case is presented in the format of situation/action/result to facilitate reading.
Research implications of science-informed, value-based decision making.
Dowie, Jack
2004-01-01
In 'Hard' science, scientists correctly operate as the 'guardians of certainty', using hypothesis testing formulations and value judgements about error rates and time discounting that make classical inferential methods appropriate. But these methods can neither generate most of the inputs needed by decision makers in their time frame, nor generate them in a form that allows them to be integrated into the decision in an analytically coherent and transparent way. The need for transparent accountability in public decision making under uncertainty and value conflict means the analytical coherence provided by the stochastic Bayesian decision analytic approach, drawing on the outputs of Bayesian science, is needed. If scientific researchers are to play the role they should be playing in informing value-based decision making, they need to see themselves also as 'guardians of uncertainty', ensuring that the best possible current posterior distributions on relevant parameters are made available for decision making, irrespective of the state of the certainty-seeking research. The paper distinguishes the actors employing different technologies in terms of the focus of the technology (knowledge, values, choice); the 'home base' mode of their activity on the cognitive continuum of varying analysis-to-intuition ratios; and the underlying value judgements of the activity (especially error loss functions and time discount rates). Those who propose any principle of decision making other than the banal 'Best Principle', including the 'Precautionary Principle', are properly interpreted as advocates seeking to have their own value judgements and preferences regarding mode location apply. The task for accountable decision makers, and their supporting technologists, is to determine the best course of action under the universal conditions of uncertainty and value difference/conflict.
Wickering, Ellis; Gaspard, Nicolas; Zafar, Sahar; Moura, Valdery J; Biswal, Siddharth; Bechek, Sophia; OʼConnor, Kathryn; Rosenthal, Eric S; Westover, M Brandon
2016-06-01
The purpose of this study is to evaluate automated implementations of continuous EEG monitoring-based detection of delayed cerebral ischemia based on methods used in classical retrospective studies. We studied 95 patients with either Fisher 3 or Hunt Hess 4 to 5 aneurysmal subarachnoid hemorrhage who were admitted to the Neurosciences ICU and underwent continuous EEG monitoring. We implemented several variations of two classical algorithms for automated detection of delayed cerebral ischemia based on decreases in alpha-delta ratio and relative alpha variability. Of 95 patients, 43 (45%) developed delayed cerebral ischemia. Our automated implementation of the classical alpha-delta ratio-based trending method resulted in a sensitivity and specificity (Se,Sp) of (80,27)%, compared with the values of (100,76)% reported in the classic study using similar methods in a nonautomated fashion. Our automated implementation of the classical relative alpha variability-based trending method yielded (Se,Sp) values of (65,43)%, compared with (100,46)% reported in the classic study using nonautomated analysis. Our findings suggest that improved methods to detect decreases in alpha-delta ratio and relative alpha variability are needed before an automated EEG-based early delayed cerebral ischemia detection system is ready for clinical use.
Onset of fractional-order thermal convection in porous media
NASA Astrophysics Data System (ADS)
Karani, Hamid; Rashtbehesht, Majid; Huber, Christian; Magin, Richard L.
2017-12-01
The macroscopic description of buoyancy-driven thermal convection in porous media is governed by advection-diffusion processes, which in the presence of thermophysical heterogeneities fail to predict the onset of thermal convection and the average rate of heat transfer. This work extends the classical model of heat transfer in porous media by including a fractional-order advective-dispersive term to account for the role of thermophysical heterogeneities in shifting the thermal instability point. The proposed fractional-order model overcomes limitations of the common closure approaches for the thermal dispersion term by replacing the diffusive assumption with a fractional-order model. Through a linear stability analysis and Galerkin procedure, we derive an analytical formula for the critical Rayleigh number as a function of the fractional model parameters. The resulting critical Rayleigh number reduces to the classical value in the absence of thermophysical heterogeneities when solid and fluid phases have similar thermal conductivities. Numerical simulations of the coupled flow equation with the fractional-order energy model near the primary bifurcation point confirm our analytical results. Moreover, data from pore-scale simulations are used to examine the potential of the proposed fractional-order model in predicting the amount of heat transfer across the porous enclosure. The linear stability and numerical results show that, unlike the classical thermal advection-dispersion models, the fractional-order model captures the advance and delay in the onset of convection in porous media and provides correct scalings for the average heat transfer in a thermophysically heterogeneous medium.
Acoustic imaging of a duct spinning mode by the use of an in-duct circular microphone array.
Wei, Qingkai; Huang, Xun; Peers, Edward
2013-06-01
An imaging method of acoustic spinning modes propagating within a circular duct simply with surface pressure information is introduced in this paper. The proposed method is developed in a theoretical way and is demonstrated by a numerical simulation case. Nowadays, the measurements within a duct have to be conducted using in-duct microphone array, which is unable to provide information of complete acoustic solutions across the test section. The proposed method can estimate immeasurable information by forming a so-called observer. The fundamental idea behind the testing method was originally developed in control theory for ordinary differential equations. Spinning mode propagation, however, is formulated in partial differential equations. A finite difference technique is used to reduce the associated partial differential equations to a classical form in control. The observer method can thereafter be applied straightforwardly. The algorithm is recursive and, thus, could be operated in real-time. A numerical simulation for a straight circular duct is conducted. The acoustic solutions on the test section can be reconstructed with good agreement to analytical solutions. The results suggest the potential and applications of the proposed method.
Garrido-Delgado, Rocío; Arce, Lourdes; Valcárcel, Miguel
2012-01-01
The potential of a headspace device coupled to multi-capillary column-ion mobility spectrometry has been studied as a screening system to differentiate virgin olive oils ("lampante," "virgin," and "extra virgin" olive oil). The last two types are virgin olive oil samples of very similar characteristics, which were very difficult to distinguish with the existing analytical method. The procedure involves the direct introduction of the virgin olive oil sample into a vial, headspace generation, and automatic injection of the volatiles into a gas chromatograph-ion mobility spectrometer. The data obtained after the analysis by duplicate of 98 samples of three different categories of virgin olive oils, were preprocessed and submitted to a detailed chemometric treatment to classify the virgin olive oil samples according to their sensory quality. The same virgin olive oil samples were also analyzed by an expert's panel to establish their category and use these data as reference values to check the potential of this new screening system. This comparison confirms the potential of the results presented here. The model was able to classify 97% of virgin olive oil samples in their corresponding group. Finally, the chemometric method was validated obtaining a percentage of prediction of 87%. These results provide promising perspectives for the use of ion mobility spectrometry to differentiate virgin olive oil samples according to their quality instead of using the classical analytical procedure.
Equilibrium, stability, and orbital evolution of close binary systems
NASA Technical Reports Server (NTRS)
Lai, Dong; Rasio, Frederic A.; Shapiro, Stuart L.
1994-01-01
We present a new analytic study of the equilibrium and stability properties of close binary systems containing polytropic components. Our method is based on the use of ellipsoidal trial functions in an energy variational principle. We consider both synchronized and nonsynchronized systems, constructing the compressible generalizations of the classical Darwin and Darwin-Riemann configurations. Our method can be applied to a wide variety of binary models where the stellar masses, radii, spins, entropies, and polytropic indices are all allowed to vary over wide ranges and independently for each component. We find that both secular and dynamical instabilities can develop before a Roche limit or contact is reached along a sequence of models with decreasing binary separation. High incompressibility always makes a given binary system more susceptible to these instabilities, but the dependence on the mass ratio is more complicated. As simple applications, we construct models of double degenerate systems and of low-mass main-sequence star binaries. We also discuss the orbital evoltuion of close binary systems under the combined influence of fluid viscosity and secular angular momentum losses from processes like gravitational radiation. We show that the existence of global fluid instabilities can have a profound effect on the terminal evolution of coalescing binaries. The validity of our analytic solutions is examined by means of detailed comparisons with the results of recent numerical fluid calculations in three dimensions.
Cooley, Richard L.
1992-01-01
MODFE, a modular finite-element model for simulating steady- or unsteady-state, area1 or axisymmetric flow of ground water in a heterogeneous anisotropic aquifer is documented in a three-part series of reports. In this report, part 2, the finite-element equations are derived by minimizing a functional of the difference between the true and approximate hydraulic head, which produces equations that are equivalent to those obtained by either classical variational or Galerkin techniques. Spatial finite elements are triangular with linear basis functions, and temporal finite elements are one dimensional with linear basis functions. Physical processes that can be represented by the model include (1) confined flow, unconfined flow (using the Dupuit approximation), or a combination of both; (2) leakage through either rigid or elastic confining units; (3) specified recharge or discharge at points, along lines, or areally; (4) flow across specified-flow, specified-head, or head-dependent boundaries; (5) decrease of aquifer thickness to zero under extreme water-table decline and increase of aquifer thickness from zero as the water table rises; and (6) head-dependent fluxes from springs, drainage wells, leakage across riverbeds or confining units combined with aquifer dewatering, and evapotranspiration. The matrix equations produced by the finite-element method are solved by the direct symmetric-Doolittle method or the iterative modified incomplete-Cholesky conjugate-gradient method. The direct method can be efficient for small- to medium-sized problems (less than about 500 nodes), and the iterative method is generally more efficient for larger-sized problems. Comparison of finite-element solutions with analytical solutions for five example problems demonstrates that the finite-element model can yield accurate solutions to ground-water flow problems.
Tishchenko, Oksana; Truhlar, Donald G
2010-02-28
This paper describes and illustrates a way to construct multidimensional representations of reactive potential energy surfaces (PESs) by a multiconfiguration Shepard interpolation (MCSI) method based only on gradient information, that is, without using any Hessian information from electronic structure calculations. MCSI, which is called multiconfiguration molecular mechanics (MCMM) in previous articles, is a semiautomated method designed for constructing full-dimensional PESs for subsequent dynamics calculations (classical trajectories, full quantum dynamics, or variational transition state theory with multidimensional tunneling). The MCSI method is based on Shepard interpolation of Taylor series expansions of the coupling term of a 2 x 2 electronically diabatic Hamiltonian matrix with the diagonal elements representing nonreactive analytical PESs for reactants and products. In contrast to the previously developed method, these expansions are truncated in the present version at the first order, and, therefore, no input of electronic structure Hessians is required. The accuracy of the interpolated energies is evaluated for two test reactions, namely, the reaction OH+H(2)-->H(2)O+H and the hydrogen atom abstraction from a model of alpha-tocopherol by methyl radical. The latter reaction involves 38 atoms and a 108-dimensional PES. The mean unsigned errors averaged over a wide range of representative nuclear configurations (corresponding to an energy range of 19.5 kcal/mol in the former case and 32 kcal/mol in the latter) are found to be within 1 kcal/mol for both reactions, based on 13 gradients in one case and 11 in the other. The gradient-based MCMM method can be applied for efficient representations of multidimensional PESs in cases where analytical electronic structure Hessians are too expensive or unavailable, and it provides new opportunities to employ high-level electronic structure calculations for dynamics at an affordable cost.
A quantum–quantum Metropolis algorithm
Yung, Man-Hong; Aspuru-Guzik, Alán
2012-01-01
The classical Metropolis sampling method is a cornerstone of many statistical modeling applications that range from physics, chemistry, and biology to economics. This method is particularly suitable for sampling the thermal distributions of classical systems. The challenge of extending this method to the simulation of arbitrary quantum systems is that, in general, eigenstates of quantum Hamiltonians cannot be obtained efficiently with a classical computer. However, this challenge can be overcome by quantum computers. Here, we present a quantum algorithm which fully generalizes the classical Metropolis algorithm to the quantum domain. The meaning of quantum generalization is twofold: The proposed algorithm is not only applicable to both classical and quantum systems, but also offers a quantum speedup relative to the classical counterpart. Furthermore, unlike the classical method of quantum Monte Carlo, this quantum algorithm does not suffer from the negative-sign problem associated with fermionic systems. Applications of this algorithm include the study of low-temperature properties of quantum systems, such as the Hubbard model, and preparing the thermal states of sizable molecules to simulate, for example, chemical reactions at an arbitrary temperature. PMID:22215584
A Synthetic Approach to the Transfer Matrix Method in Classical and Quantum Physics
ERIC Educational Resources Information Center
Pujol, O.; Perez, J. P.
2007-01-01
The aim of this paper is to propose a synthetic approach to the transfer matrix method in classical and quantum physics. This method is an efficient tool to deal with complicated physical systems of practical importance in geometrical light or charged particle optics, classical electronics, mechanics, electromagnetics and quantum physics. Teaching…
Škrbić, Biljana; Héberger, Károly; Durišić-Mladenović, Nataša
2013-10-01
Sum of ranking differences (SRD) was applied for comparing multianalyte results obtained by several analytical methods used in one or in different laboratories, i.e., for ranking the overall performances of the methods (or laboratories) in simultaneous determination of the same set of analytes. The data sets for testing of the SRD applicability contained the results reported during one of the proficiency tests (PTs) organized by EU Reference Laboratory for Polycyclic Aromatic Hydrocarbons (EU-RL-PAH). In this way, the SRD was also tested as a discriminant method alternative to existing average performance scores used to compare mutlianalyte PT results. SRD should be used along with the z scores--the most commonly used PT performance statistics. SRD was further developed to handle the same rankings (ties) among laboratories. Two benchmark concentration series were selected as reference: (a) the assigned PAH concentrations (determined precisely beforehand by the EU-RL-PAH) and (b) the averages of all individual PAH concentrations determined by each laboratory. Ranking relative to the assigned values and also to the average (or median) values pointed to the laboratories with the most extreme results, as well as revealed groups of laboratories with similar overall performances. SRD reveals differences between methods or laboratories even if classical test(s) cannot. The ranking was validated using comparison of ranks by random numbers (a randomization test) and using seven folds cross-validation, which highlighted the similarities among the (methods used in) laboratories. Principal component analysis and hierarchical cluster analysis justified the findings based on SRD ranking/grouping. If the PAH-concentrations are row-scaled, (i.e., z scores are analyzed as input for ranking) SRD can still be used for checking the normality of errors. Moreover, cross-validation of SRD on z scores groups the laboratories similarly. The SRD technique is general in nature, i.e., it can be applied to any experimental problem in which multianalyte results obtained either by several analytical procedures, analysts, instruments, or laboratories need to be compared.
Analytical advances in pharmaceutical impurity profiling.
Holm, René; Elder, David P
2016-05-25
Impurities will be present in all drug substances and drug products, i.e. nothing is 100% pure if one looks in enough depth. The current regulatory guidance on impurities accepts this, and for drug products with a dose of less than 2g/day identification of impurities is set at 0.1% levels and above (ICH Q3B(R2), 2006). For some impurities, this is a simple undertaking as generally available analytical techniques can address the prevailing analytical challenges; whereas, for others this may be much more challenging requiring more sophisticated analytical approaches. The present review provides an insight into current development of analytical techniques to investigate and quantify impurities in drug substances and drug products providing discussion of progress particular within the field of chromatography to ensure separation of and quantification of those related impurities. Further, a section is devoted to the identification of classical impurities, but in addition, inorganic (metal residues) and solid state impurities are also discussed. Risk control strategies for pharmaceutical impurities aligned with several of the ICH guidelines, are also discussed. Copyright © 2015 Elsevier B.V. All rights reserved.
Körsgen, Martin; Pelster, Andreas; Dreisewerd, Klaus; Arlinghaus, Heinrich F
2016-02-01
The analytical sensitivity in matrix-assisted laser desorption/ionization mass spectrometry (MALDI-MS) is largely affected by the specific analyte-matrix interaction, in particular by the possible incorporation of the analytes into crystalline MALDI matrices. Here we used time-of-flight secondary ion mass spectrometry (ToF-SIMS) to visualize the incorporation of three peptides with different hydrophobicities, bradykinin, Substance P, and vasopressin, into two classic MALDI matrices, 2,5-dihydroxybenzoic acid (DHB) and α-cyano-4-hydroxycinnamic acid (HCCA). For depth profiling, an Ar cluster ion beam was used to gradually sputter through the matrix crystals without causing significant degradation of matrix or biomolecules. A pulsed Bi3 ion cluster beam was used to image the lateral analyte distribution in the center of the sputter crater. Using this dual beam technique, the 3D distribution of the analytes and spatial segregation effects within the matrix crystals were imaged with sub-μm resolution. The technique could in the future enable matrix-enhanced (ME)-ToF-SIMS imaging of peptides in tissue slices at ultra-high resolution. Graphical Abstract ᅟ.
NASA Astrophysics Data System (ADS)
Körsgen, Martin; Pelster, Andreas; Dreisewerd, Klaus; Arlinghaus, Heinrich F.
2016-02-01
The analytical sensitivity in matrix-assisted laser desorption/ionization mass spectrometry (MALDI-MS) is largely affected by the specific analyte-matrix interaction, in particular by the possible incorporation of the analytes into crystalline MALDI matrices. Here we used time-of-flight secondary ion mass spectrometry (ToF-SIMS) to visualize the incorporation of three peptides with different hydrophobicities, bradykinin, Substance P, and vasopressin, into two classic MALDI matrices, 2,5-dihydroxybenzoic acid (DHB) and α-cyano-4-hydroxycinnamic acid (HCCA). For depth profiling, an Ar cluster ion beam was used to gradually sputter through the matrix crystals without causing significant degradation of matrix or biomolecules. A pulsed Bi3 ion cluster beam was used to image the lateral analyte distribution in the center of the sputter crater. Using this dual beam technique, the 3D distribution of the analytes and spatial segregation effects within the matrix crystals were imaged with sub-μm resolution. The technique could in the future enable matrix-enhanced (ME)-ToF-SIMS imaging of peptides in tissue slices at ultra-high resolution.
Klimkiewicz, Paulina; Klimkiewicz, Robert; Jankowska, Agnieszka; Kubsik, Anna; Widłak, Patrycja; Łukasiak, Adam; Janczewska, Katarzyna; Kociuga, Natalia; Nowakowski, Tomasz; Woldańska-Okońska, Marta
2018-01-01
Introduction: In this article, the authors focused on the symptoms of ischemic stroke and the effect of neurorehabilitation methods on the functional status of patients after ischemic stroke. The aim of the study was to evaluate and compare the functional status of patients after ischemic stroke with improved classic kinesiotherapy, classic kinesiotherapy and NDT-Bobath and classic kinesiotherapy and PNF. Materials and methods: The study involved 120 patients after ischemic stroke. Patients were treated in the Department of Rehabilitation and Physical Medicine USK of Medical University in Lodz. Patients were divided into 3 groups of 40 people. Group 1 was rehabilitated by classical kinesiotherapy. Group 2 was rehabilitated by classic kinesiotherapy and NTD-Bobath. Group 3 was rehabilitated by classical kinesiotherapy and PNF. In all patient groups, magnetostimulation was performed using the Viofor JPS System. The study was conducted twice: before treatment and immediately after 5 weeks after the therapy. The effects of applied neurorehabilitation methods were assessed on the basis of the Rivermead Motor Assessment (RMA). Results: In all three patient groups, functional improvement was achieved. However, a significantly higher improvement was observed in patients in the second group, enhanced with classical kinesitherapy and NDT-Bobath. Conclusions: The use of classical kinesiotherapy combined with the NDT-Bobath method is noticeably more effective in improving functional status than the use only classical kinesiotherapy or combination of classical kinesiotherapy and PNF patients after ischemic stroke.
The problem of self-disclosure in psychoanalysis.
Meissner, W W
2002-01-01
The problem of self-disclosure is explored in relation to currently shifting paradigms of the nature of the analytic relation and analytic interaction. Relational and intersubjective perspectives emphasize the role of self-disclosure as not merely allowable, but as an essential facilitating aspect of the analytic dialogue, in keeping with the role of the analyst as a contributing partner in the process. At the opposite extreme, advocates of classical anonymity stress the importance of neutrality and abstinence. The paper seeks to chart a course between unconstrained self-disclosure and absolute anonymity, both of which foster misalliances. Self-disclosure is seen as at times contributory to the analytic process, and at times deleterious. The decision whether to self-disclose, what to disclose, and when and how, should be guided by the analyst's perspective on neutrality, conceived as a mental stance in which the analyst assesses and decides what, at any given point, seems to contribute to the analytic process and the patient's therapeutic benefit. The major risk in self-disclosure is the tendency to draw the analytic interaction into the real relation between analyst and patient, thus diminishing or distorting the therapeutic alliance, mitigating transference expression, and compromising therapeutic effectiveness.
1980-11-01
to auto ignite in color cinematography of the process. It appears the above interaction reduces classical wall quench(14 ) as the reaction continues...vivid blue hue while the core reaction is white. Continuation of the reaction is seen in the first four frames of Fig. V-3; this figure covers the time
Valuing (and Teaching) the Past
ERIC Educational Resources Information Center
Peart, Sandra J.; Levy, David M.
2005-01-01
There is a difference between the private and social cost of preserving the past. Although it may be privately rational to forget the past, the social cost is significant: We fail to see that classical political economy is analytically egalitarian. The past is a rich source of surprises and debates, and resources on the Web are uniquely suited to…
ERIC Educational Resources Information Center
Ammentorp, William
There is much to be gained by using systems analysis in educational administration. Most administrators, presently relying on classical statistical techniques restricted to problems having few variables, should be trained to use more sophisticated tools such as systems analysis. The systems analyst, interested in the basic processes of a group or…
Developing Students' Ideas about Lens Imaging: Teaching Experiments with an Image-Based Approach
ERIC Educational Resources Information Center
Grusche, Sascha
2017-01-01
Lens imaging is a classic topic in physics education. To guide students from their holistic viewpoint to the scientists' analytic viewpoint, an image-based approach to lens imaging has recently been proposed. To study the effect of the image-based approach on undergraduate students' ideas, teaching experiments are performed and evaluated using…
Peridynamic Modeling of Fracture and Failure of Materials
2013-08-02
is demonstrated through comparisons with classical laminate theory ( CLT ) and FEM analysis by considering laminates with complex layup under in-plane...is a symmetric cross-ply laminate with a layup of [0 / 90 ]S . For symmetric laminates, CLT predicts that there is no coupling between bending and...analytical results from the CLT in Figs. 5 and 6. 16 (a
NASA Astrophysics Data System (ADS)
Huang, Y.; Longo, W. M.; Zheng, Y.; Richter, N.; Dillon, J. T.; Theroux, S.; D'Andrea, W. J.; Toney, J. L.; Wang, L.; Amaral-Zettler, L. A.
2017-12-01
Alkenones are mature, well-established paleo-sea surface temperature proxies that have been widely applied for more than three decades. However, recent advances across a broad range of alkenone-related topics at Brown University are inviting new paleoclimate and paleo-environmental applications for these classic biomarkers. In this presentation, I will summarize our progress in the following areas: (1) Discovery of a freshwater alkenone-producing haptophyte species and structural elucidation of novel alkenone structures unique to the species, performing in-situ temperature calibrations, and classifying alkenone-producing haptophytes into three groups based on molecular ecological approaches (with the new species belonging to Group I Isochrysidales); (2) A global survey of Group I haptophyte distributions and environmental conditions favoring the presence of this alga, as well as examples of using Group I alkenones for paleotemperature reconstructions; (3) New gas chromatographic columns that allow unprecedented resolution of alkenones and alkenoates and associated structural isomers, and development of a new suite of paleotemperature and paleoenvironmental proxies; (4) A new liquid chromatographic separation technique that allows efficient cleanup of alkenones and alkenoates (without the need for saponification) for subsequent coelution-free gas chromatographic analysis; (5) Novel structural features revealed by new analytical methods that now allow a comprehensive re-assessment of taxonomic features of various haptophyte species, with principal component analysis capable of fully resolving species biomarker distributions; (6) Development of UK37 double prime (UK37'') for Group II haptophytes (e.g., those occurring in saline lakes and estuaries), that differs from the traditional unsaturation indices used for SST reconstructions; (7) New assessment of how mixed inputs from different alkenone groups may affect SST reconstructions in marginal ocean environments and possible approaches to solving the problem; and, (8) Optimization of analytical methods for determining the double-bond positions of alkenones and alkenoates, and subsequent discovery of new structural features of short-chain alkenones and the proposal of new biosynthetic pathways.
Stress analysis in curved composites due to thermal loading
NASA Astrophysics Data System (ADS)
Polk, Jared Cornelius
Many structures in aircraft, cars, trucks, ships, machines, tools, bridges, and buildings, consist of curved sections. These sections vary from straight line segments that have curvature at either one or both ends, segments with compound curvatures, segments with two mutually perpendicular curvatures or Gaussian curvatures, and segments with a simple curvature. With the advancements made in multi-purpose composites over the past 60 years, composites slowly but steadily have been appearing in these various vehicles, compound structures, and buildings. These composite sections provide added benefits over isotropic, polymeric, and ceramic materials by generally having a higher specific strength, higher specific stiffnesses, longer fatigue life, lower density, possibilities in reduction of life cycle and/or acquisition cost, and greater adaptability to intended function of structure via material composition and geometry. To be able to design and manufacture a safe composite laminate or structure, it is imperative that the stress distributions, their causes, and effects are thoroughly understood in order to successfully accomplish mission objectives and manufacture a safe and reliable composite. The objective of the thesis work is to expand upon the knowledge of simply curved composite structures by exploring and ascertaining all pertinent parameters, phenomenon, and trends in stress variations in curved laminates due to thermal loading. The simply curved composites consist of composites with one radius of curvature throughout the span of the specimen about only one axis. Analytical beam theory, classical lamination theory, and finite element analysis were used to ascertain stress variations in a flat, isotropic beam. An analytical method was developed to ascertain the stress variations in an isotropic, simply curved beam under thermal loading that is under both free-free and fixed-fixed constraint conditions. This is the first such solution to Author's best knowledge of such a problem. It was ascertained and proven that the general, non-modified (original) version of classical lamination theory cannot be used for an analytical solution for a simply curved beam or any other structure that would require rotations of laminates out their planes in space. Finite element analysis was used to ascertain stress variations in a simply curved beam. It was verified that these solutions reduce to the flat beam solutions as the radius of curvature of the beams tends to infinity. MATLAB was used to conduct the classical lamination theory numerical analysis. A MATLAB program was written to conduct the finite element analysis for the flat and curved beams, isotropic and composite. It does not require incompatibility techniques used in mechanics of isotropic materials for indeterminate structures that are equivalent to fixed-beam problems. Finally, it has the ability to enable the user to define and create unique elements not accessible in commercial software, and modify finite element procedures to take advantage of new paradigms.
Amperometric Enzyme-Based Biosensors for Application in Food and Beverage Industry
NASA Astrophysics Data System (ADS)
Csöoregi, Elisabeth; Gáspñr, Szilveszter; Niculescu, Mihaela; Mattiasson, Bo; Schuhmann, Wolfgang
Continuous, sensitive, selective, and reliable monitoring of a large variety of different compounds in various food and beverage samples is of increasing importance to assure a high-quality and tracing of any possible source of contamination of food and beverages. Most of the presently used classical analytical methods are often requiring expensive instrumentation, long analysis times and well-trained staff. Amperometric enzyme-based biosensors on the other hand have emerged in the last decade from basic science to useful tools with very promising application possibilities in food and beverage industry. Amperometric biosensors are in general highly selective, sensitive, relatively cheap, and easy to integrate into continuous analysis systems. A successful application of such sensors for industrial purposes, however, requires a sensor design, which satisfies the specific needs of monitoring the targeted analyte in the particular application, Since each individual application needs different operational conditions and sensor characteristics, it is obvious that biosensors have to be tailored for the particular case. The characteristics of the biosensors are depending on the used biorecognition element (enzyme), nature of signal transducer (electrode material) and the communication between these two elements (electron-transfer pathway).
NASA Astrophysics Data System (ADS)
Ghaffar, A.; Hussan, M. M.; Illahi, A.; Alkanhal, Majeed A. S.; Ur Rehman, Sajjad; Naz, M. Y.
2018-01-01
Effects on RCS of perfect electromagnetic conductor (PEMC) sphere by coating with anisotropic plasma layer are studied in this paper. The incident, scattered and transmitted electromagnetic fields are expanded in term of spherical vector wave functions using extended classical theory of scattering. Co and cross-polarized scattered field coefficients are obtained at the interface of free space-anisotropic plasma and at anisotropic plasma-PEMC sphere core by scattering matrices method. The presented analytical expressions are general for any perfect conducting sphere (PMC, PEC, or PEMC) with general anisotropic/isotropic material coatings that include plasma and metamaterials. The behavior of the forward and backscattered radar cross section of PEMC sphere with the variation of the magnetic field strength, incident frequency, plasma density, and effective collision frequency for the co-polarized and the cross polarized fields are investigated. It is also observed from the obtained results that anisotropic layer on PEMC sphere shows reciprocal behavior as compared to isotopic plasma layer on PEMC sphere. The comparisons of the numerical results of the presented analytical expressions with available results of some special cases show the correctness of the analysis.
Analysis of latency performance of bluetooth low energy (BLE) networks.
Cho, Keuchul; Park, Woojin; Hong, Moonki; Park, Gisu; Cho, Wooseong; Seo, Jihoon; Han, Kijun
2014-12-23
Bluetooth Low Energy (BLE) is a short-range wireless communication technology aiming at low-cost and low-power communication. The performance evaluation of classical Bluetooth device discovery have been intensively studied using analytical modeling and simulative methods, but these techniques are not applicable to BLE, since BLE has a fundamental change in the design of the discovery mechanism, including the usage of three advertising channels. Recently, there several works have analyzed the topic of BLE device discovery, but these studies are still far from thorough. It is thus necessary to develop a new, accurate model for the BLE discovery process. In particular, the wide range settings of the parameters introduce lots of potential for BLE devices to customize their discovery performance. This motivates our study of modeling the BLE discovery process and performing intensive simulation. This paper is focused on building an analytical model to investigate the discovery probability, as well as the expected discovery latency, which are then validated via extensive experiments. Our analysis considers both continuous and discontinuous scanning modes. We analyze the sensitivity of these performance metrics to parameter settings to quantitatively examine to what extent parameters influence the performance metric of the discovery processes.
Analysis of Latency Performance of Bluetooth Low Energy (BLE) Networks
Cho, Keuchul; Park, Woojin; Hong, Moonki; Park, Gisu; Cho, Wooseong; Seo, Jihoon; Han, Kijun
2015-01-01
Bluetooth Low Energy (BLE) is a short-range wireless communication technology aiming at low-cost and low-power communication. The performance evaluation of classical Bluetooth device discovery have been intensively studied using analytical modeling and simulative methods, but these techniques are not applicable to BLE, since BLE has a fundamental change in the design of the discovery mechanism, including the usage of three advertising channels. Recently, there several works have analyzed the topic of BLE device discovery, but these studies are still far from thorough. It is thus necessary to develop a new, accurate model for the BLE discovery process. In particular, the wide range settings of the parameters introduce lots of potential for BLE devices to customize their discovery performance. This motivates our study of modeling the BLE discovery process and performing intensive simulation. This paper is focused on building an analytical model to investigate the discovery probability, as well as the expected discovery latency, which are then validated via extensive experiments. Our analysis considers both continuous and discontinuous scanning modes. We analyze the sensitivity of these performance metrics to parameter settings to quantitatively examine to what extent parameters influence the performance metric of the discovery processes. PMID:25545266
Electron Stark Broadening Database for Atomic N, O, and C Lines
NASA Technical Reports Server (NTRS)
Liu, Yen; Yao, Winifred M.; Wray, Alan A.; Carbon, Duane F.
2012-01-01
A database for efficiently computing the electron Stark broadening line widths for atomic N, O, and C lines is constructed. The line width is expressed in terms of the electron number density and electronatom scattering cross sections based on the Baranger impact theory. The state-to-state cross sections are computed using the semiclassical approximation, in which the atom is treated quantum mechanically whereas the motion of the free electron follows a classical trajectory. These state-to-state cross sections are calculated based on newly compiled line lists. Each atomic line list consists of a careful merger of NIST, Vanderbilt, and TOPbase line datasets from wavelength 50 nm to 50 micrometers covering the VUV to IR spectral regions. There are over 10,000 lines in each atomic line list. The widths for each line are computed at 13 electron temperatures between 1,000 K 50,000 K. A linear least squares method using a four-term fractional power series is then employed to obtain an analytical fit for each line-width variation as a function of the electron temperature. The maximum L2 error of the analytic fits for all lines in our line lists is about 5%.
Walach, Harald; Loef, Martin
2015-11-01
The hierarchy of evidence presupposes linearity and additivity of effects, as well as commutativity of knowledge structures. It thereby implicitly assumes a classical theoretical model. This is an argumentative article that uses theoretical analysis based on pertinent literature and known facts to examine the standard view of methodology. We show that the assumptions of the hierarchical model are wrong. The knowledge structures gained by various types of studies are not sequentially indifferent, that is, do not commute. External validity and internal validity are at least partially incompatible concepts. Therefore, one needs a different theoretical structure, typical of quantum-type theories, to model this situation. The consequence of this situation is that the implicit assumptions of the hierarchical model are wrong, if generalized to the concept of evidence in total. The problem can be solved by using a matrix-analytical approach to synthesizing evidence. Here, research methods that produce different types of evidence that complement each other are synthesized to yield the full knowledge. We show by an example how this might work. We conclude that the hierarchical model should be complemented by a broader reasoning in methodology. Copyright © 2015 Elsevier Inc. All rights reserved.
Spike solutions in Gierer#x2013;Meinhardt model with a time dependent anomaly exponent
NASA Astrophysics Data System (ADS)
Nec, Yana
2018-01-01
Experimental evidence of complex dispersion regimes in natural systems, where the growth of the mean square displacement in time cannot be characterised by a single power, has been accruing for the past two decades. In such processes the exponent γ(t) in ⟨r2⟩ ∼ tγ(t) at times might be approximated by a piecewise constant function, or it can be a continuous function. Variable order differential equations are an emerging mathematical tool with a strong potential to model these systems. However, variable order differential equations are not tractable by the classic differential equations theory. This contribution illustrates how a classic method can be adapted to gain insight into a system of this type. Herein a variable order Gierer-Meinhardt model is posed, a generic reaction- diffusion system of a chemical origin. With a fixed order this system possesses a solution in the form of a constellation of arbitrarily situated localised pulses, when the components' diffusivity ratio is asymptotically small. The pattern was shown to exist subject to multiple step-like transitions between normal diffusion and sub-diffusion, as well as between distinct sub-diffusive regimes. The analytical approximation obtained permits qualitative analysis of the impact thereof. Numerical solution for typical cross-over scenarios revealed such features as earlier equilibration and non-monotonic excursions before attainment of equilibrium. The method is general and allows for an approximate numerical solution with any reasonably behaved γ(t).
Portfolio Analysis for Vector Calculus
ERIC Educational Resources Information Center
Kaplan, Samuel R.
2015-01-01
Classic stock portfolio analysis provides an applied context for Lagrange multipliers that undergraduate students appreciate. Although modern methods of portfolio analysis are beyond the scope of vector calculus, classic methods reinforce the utility of this material. This paper discusses how to introduce classic stock portfolio analysis in a…
NASA Astrophysics Data System (ADS)
Ghiara, G.; Grande, C.; Ferrando, S.; Piccardo, P.
2018-01-01
In this study, tin-bronze analogues of archaeological objects were investigated in the presence of an aerobic Pseudomonas fluorescens strain in a solution, containing chlorides, sulfates, carbonates and nitrates according to a previous archaeological characterization. Classical fixation protocols were employed in order to verify the attachment capacity of such bacteria. In addition, classical metallurgical analytical techniques were used to detect the effect of bacteria on the formation of uncommon corrosion products in such an environment. Results indicate quite a good attachment capacity of the bacteria to the metallic surface and the formation of the uncommon corrosion products sulfates and sulfides is probably connected to the bacterial metabolism.
Quantum Corrections in Nanoplasmonics: Shape, Scale, and Material
NASA Astrophysics Data System (ADS)
Christensen, Thomas; Yan, Wei; Jauho, Antti-Pekka; Soljačić, Marin; Mortensen, N. Asger
2017-04-01
The classical treatment of plasmonics is insufficient at the nanometer-scale due to quantum mechanical surface phenomena. Here, an extension of the classical paradigm is reported which rigorously remedies this deficiency through the incorporation of first-principles surface response functions—the Feibelman d parameters—in general geometries. Several analytical results for the leading-order plasmonic quantum corrections are obtained in a first-principles setting; particularly, a clear separation of the roles of shape, scale, and material is established. The utility of the formalism is illustrated by the derivation of a modified sum rule for complementary structures, a rigorous reformulation of Kreibig's phenomenological damping prescription, and an account of the small-scale resonance shifting of simple and noble metal nanostructures.
Exact Extremal Statistics in the Classical 1D Coulomb Gas
NASA Astrophysics Data System (ADS)
Dhar, Abhishek; Kundu, Anupam; Majumdar, Satya N.; Sabhapandit, Sanjib; Schehr, Grégory
2017-08-01
We consider a one-dimensional classical Coulomb gas of N -like charges in a harmonic potential—also known as the one-dimensional one-component plasma. We compute, analytically, the probability distribution of the position xmax of the rightmost charge in the limit of large N . We show that the typical fluctuations of xmax around its mean are described by a nontrivial scaling function, with asymmetric tails. This distribution is different from the Tracy-Widom distribution of xmax for Dyson's log gas. We also compute the large deviation functions of xmax explicitly and show that the system exhibits a third-order phase transition, as in the log gas. Our theoretical predictions are verified numerically.
A spatially homogeneous and isotropic Einstein-Dirac cosmology
NASA Astrophysics Data System (ADS)
Finster, Felix; Hainzl, Christian
2011-04-01
We consider a spatially homogeneous and isotropic cosmological model where Dirac spinors are coupled to classical gravity. For the Dirac spinors we choose a Hartree-Fock ansatz where all one-particle wave functions are coherent and have the same momentum. If the scale function is large, the universe behaves like the classical Friedmann dust solution. If however the scale function is small, quantum effects lead to oscillations of the energy-momentum tensor. It is shown numerically and proven analytically that these quantum oscillations can prevent the formation of a big bang or big crunch singularity. The energy conditions are analyzed. We prove the existence of time-periodic solutions which go through an infinite number of expansion and contraction cycles.
Lyapunov dimension formula for the global attractor of the Lorenz system
NASA Astrophysics Data System (ADS)
Leonov, G. A.; Kuznetsov, N. V.; Korzhemanova, N. A.; Kusakin, D. V.
2016-12-01
The exact Lyapunov dimension formula for the Lorenz system for a positive measure set of parameters, including classical values, was analytically obtained first by G.A. Leonov in 2002. Leonov used the construction technique of special Lyapunov-type functions, which was developed by him in 1991 year. Later it was shown that the consideration of larger class of Lyapunov-type functions permits proving the validity of this formula for all parameters, of the system, such that all the equilibria of the system are hyperbolically unstable. In the present work it is proved the validity of the formula for Lyapunov dimension for a wider variety of parameters values including all parameters, which satisfy the classical physical limitations.
Berry phase and Hannay angle of an interacting boson system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, S. C.; Graduate School, China Academy of Engineering Physics, Beijing 100088; Liu, J.
2011-04-15
In the present paper, we investigate the Berry phase and the Hannay angle of an interacting two-mode boson system and obtain their analytic expressions in explicit forms. The relation between the Berry phase and the Hannay angle is discussed. We find that, in the large-particle-number limit, the classical Hannay angle equals the particle number derivative of the quantum Berry phase except for a sign. This relationship is applicable to other many-body boson systems where the coherent-state description is available and the total particle number is conserved. The measurement of the classical Hannay angle in the many-body systems is briefly discussedmore » as well.« less
Kurylyk, Barret L.; McKenzie, Jeffrey M; MacQuarrie, Kerry T. B.; Voss, Clifford I.
2014-01-01
Numerous cold regions water flow and energy transport models have emerged in recent years. Dissimilarities often exist in their mathematical formulations and/or numerical solution techniques, but few analytical solutions exist for benchmarking flow and energy transport models that include pore water phase change. This paper presents a detailed derivation of the Lunardini solution, an approximate analytical solution for predicting soil thawing subject to conduction, advection, and phase change. Fifteen thawing scenarios are examined by considering differences in porosity, surface temperature, Darcy velocity, and initial temperature. The accuracy of the Lunardini solution is shown to be proportional to the Stefan number. The analytical solution results obtained for soil thawing scenarios with water flow and advection are compared to those obtained from the finite element model SUTRA. Three problems, two involving the Lunardini solution and one involving the classic Neumann solution, are recommended as standard benchmarks for future model development and testing.
NASA Astrophysics Data System (ADS)
Austin, Rickey W.
In Einstein's theory of Special Relativity (SR), one method to derive relativistic kinetic energy is via applying the classical work-energy theorem to relativistic momentum. This approach starts with a classical based work-energy theorem and applies SR's momentum to the derivation. One outcome of this derivation is relativistic kinetic energy. From this derivation, it is rather straight forward to form a kinetic energy based time dilation function. In the derivation of General Relativity a common approach is to bypass classical laws as a starting point. Instead a rigorous development of differential geometry and Riemannian space is constructed, from which classical based laws are derived. This is in contrast to SR's approach of starting with classical laws and applying the consequences of the universal speed of light by all observers. A possible method to derive time dilation due to Newtonian gravitational potential energy (NGPE) is to apply SR's approach to deriving relativistic kinetic energy. It will be shown this method gives a first order accuracy compared to Schwarzschild's metric. The SR's kinetic energy and the newly derived NGPE derivation are combined to form a Riemannian metric based on these two energies. A geodesic is derived and calculations compared to Schwarzschild's geodesic for an orbiting test mass about a central, non-rotating, non-charged massive body. The new metric results in high accuracy calculations when compared to Einsteins General Relativity's prediction. The new method provides a candidate approach for starting with classical laws and deriving General Relativity effects. This approach mimics SR's method of starting with classical mechanics when deriving relativistic equations. As a compliment to introducing General Relativity, it provides a plausible scaffolding method from classical physics when teaching introductory General Relativity. A straight forward path from classical laws to General Relativity will be derived. This derivation provides a minimum first order accuracy to Schwarzschild's solution to Einstein's field equations.
NASA Astrophysics Data System (ADS)
Caballero, Marcos D.; Doughty, Leanne; Turnbull, Anna M.; Pepper, Rachel E.; Pollock, Steven J.
2017-06-01
Reliable and validated assessments of introductory physics have been instrumental in driving curricular and pedagogical reforms that lead to improved student learning. As part of an effort to systematically improve our sophomore-level classical mechanics and math methods course (CM 1) at CU Boulder, we have developed a tool to assess student learning of CM 1 concepts in the upper division. The Colorado Classical Mechanics and Math Methods Instrument (CCMI) builds on faculty consensus learning goals and systematic observations of student difficulties. The result is a 9-question open-ended post test that probes student learning in the first half of a two-semester classical mechanics and math methods sequence. In this paper, we describe the design and development of this instrument, its validation, and measurements made in classes at CU Boulder and elsewhere.
NASA Astrophysics Data System (ADS)
Wilson, Robert H.; Vishwanath, Karthik; Mycek, Mary-Ann
2009-02-01
Monte Carlo (MC) simulations are considered the "gold standard" for mathematical description of photon transport in tissue, but they can require large computation times. Therefore, it is important to develop simple and efficient methods for accelerating MC simulations, especially when a large "library" of related simulations is needed. A semi-analytical method involving MC simulations and a path-integral (PI) based scaling technique generated time-resolved reflectance curves from layered tissue models. First, a zero-absorption MC simulation was run for a tissue model with fixed scattering properties in each layer. Then, a closed-form expression for the average classical path of a photon in tissue was used to determine the percentage of time that the photon spent in each layer, to create a weighted Beer-Lambert factor to scale the time-resolved reflectance of the simulated zero-absorption tissue model. This method is a unique alternative to other scaling techniques in that it does not require the path length or number of collisions of each photon to be stored during the initial simulation. Effects of various layer thicknesses and absorption and scattering coefficients on the accuracy of the method will be discussed.
An eigenvalue approach to quantum plasmonics based on a self-consistent hydrodynamics method
NASA Astrophysics Data System (ADS)
Ding, Kun; Chan, C. T.
2018-02-01
Plasmonics has attracted much attention not only because it has useful properties such as strong field enhancement, but also because it reveals the quantum nature of matter. To handle quantum plasmonics effects, ab initio packages or empirical Feibelman d-parameters have been used to explore the quantum correction of plasmonic resonances. However, most of these methods are formulated within the quasi-static framework. The self-consistent hydrodynamics model offers a reliable approach to study quantum plasmonics because it can incorporate the quantum effect of the electron gas into classical electrodynamics in a consistent manner. Instead of the standard scattering method, we formulate the self-consistent hydrodynamics method as an eigenvalue problem to study quantum plasmonics with electrons and photons treated on the same footing. We find that the eigenvalue approach must involve a global operator, which originates from the energy functional of the electron gas. This manifests the intrinsic nonlocality of the response of quantum plasmonic resonances. Our model gives the analytical forms of quantum corrections to plasmonic modes, incorporating quantum electron spill-out effects and electrodynamical retardation. We apply our method to study the quantum surface plasmon polariton for a single flat interface.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ciotti, Luca; Pellegrini, Silvia, E-mail: luca.ciotti@unibo.it
One of the most active fields of research of modern-day astrophysics is that of massive black hole formation and coevolution with the host galaxy. In these investigations, ranging from cosmological simulations, to semi-analytical modeling, to observational studies, the Bondi solution for accretion on a central point-mass is widely adopted. In this work we generalize the classical Bondi accretion theory to take into account the effects of the gravitational potential of the host galaxy, and of radiation pressure in the optically thin limit. Then, we present the fully analytical solution, in terms of the Lambert–Euler W -function, for isothermal accretion inmore » Jaffe and Hernquist galaxies with a central black hole. The flow structure is found to be sensitive to the shape of the mass profile of the host galaxy. These results and the formulae that are provided, most importantly, the one for the critical accretion parameter, allow for a direct evaluation of all flow properties, and are then useful for the abovementioned studies. As an application, we examine the departure from the true mass accretion rate of estimates obtained using the gas properties at various distances from the black hole, under the hypothesis of classical Bondi accretion. An overestimate is obtained from regions close to the black hole, and an underestimate outside a few Bondi radii; the exact position of the transition between the two kinds of departure depends on the galaxy model.« less
Examples of Complete Solvability of 2D Classical Superintegrable Systems
NASA Astrophysics Data System (ADS)
Chen, Yuxuan; Kalnins, Ernie G.; Li, Qiushi; Miller, Willard, Jr.
2015-11-01
Classical (maximal) superintegrable systems in n dimensions are Hamiltonian systems with 2n-1 independent constants of the motion, globally defined, the maximum number possible. They are very special because they can be solved algebraically. In this paper we show explicitly, mostly through examples of 2nd order superintegrable systems in 2 dimensions, how the trajectories can be determined in detail using rather elementary algebraic, geometric and analytic methods applied to the closed quadratic algebra of symmetries of the system, without resorting to separation of variables techniques or trying to integrate Hamilton's equations. We treat a family of 2nd order degenerate systems: oscillator analogies on Darboux, nonzero constant curvature, and flat spaces, related to one another via contractions, and obeying Kepler's laws. Then we treat two 2nd order nondegenerate systems, an analogy of a caged Coulomb problem on the 2-sphere and its contraction to a Euclidean space caged Coulomb problem. In all cases the symmetry algebra structure provides detailed information about the trajectories, some of which are rather complicated. An interesting example is the occurrence of ''metronome orbits'', trajectories confined to an arc rather than a loop, which are indicated clearly from the structure equations but might be overlooked using more traditional methods. We also treat the Post-Winternitz system, an example of a classical 4th order superintegrable system that cannot be solved using separation of variables. Finally we treat a superintegrable system, related to the addition theorem for elliptic functions, whose constants of the motion are only rational in the momenta. It is a system of special interest because its constants of the motion generate a closed polynomial algebra. This paper contains many new results but we have tried to present most of the materials in a fashion that is easily accessible to nonexperts, in order to provide entrée to superintegrablity theory.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fath, L., E-mail: lukas.fath@kit.edu; Hochbruck, M., E-mail: marlis.hochbruck@kit.edu; Singh, C.V., E-mail: chandraveer.singh@utoronto.ca
Classical integration methods for molecular dynamics are inherently limited due to resonance phenomena occurring at certain time-step sizes. The mollified impulse method can partially avoid this problem by using appropriate filters based on averaging or projection techniques. However, existing filters are computationally expensive and tedious in implementation since they require either analytical Hessians or they need to solve nonlinear systems from constraints. In this work we follow a different approach based on corotation for the construction of a new filter for (flexible) biomolecular simulations. The main advantages of the proposed filter are its excellent stability properties and ease of implementationmore » in standard softwares without Hessians or solving constraint systems. By simulating multiple realistic examples such as peptide, protein, ice equilibrium and ice–ice friction, the new filter is shown to speed up the computations of long-range interactions by approximately 20%. The proposed filtered integrators allow step sizes as large as 10 fs while keeping the energy drift less than 1% on a 50 ps simulation.« less
A stabilized element-based finite volume method for poroelastic problems
NASA Astrophysics Data System (ADS)
Honório, Hermínio T.; Maliska, Clovis R.; Ferronato, Massimiliano; Janna, Carlo
2018-07-01
The coupled equations of Biot's poroelasticity, consisting of stress equilibrium and fluid mass balance in deforming porous media, are numerically solved. The governing partial differential equations are discretized by an Element-based Finite Volume Method (EbFVM), which can be used in three dimensional unstructured grids composed of elements of different types. One of the difficulties for solving these equations is the numerical pressure instability that can arise when undrained conditions take place. In this paper, a stabilization technique is developed to overcome this problem by employing an interpolation function for displacements that considers also the pressure gradient effect. The interpolation function is obtained by the so-called Physical Influence Scheme (PIS), typically employed for solving incompressible fluid flows governed by the Navier-Stokes equations. Classical problems with analytical solutions, as well as three-dimensional realistic cases are addressed. The results reveal that the proposed stabilization technique is able to eliminate the spurious pressure instabilities arising under undrained conditions at a low computational cost.
NASA Astrophysics Data System (ADS)
Pezzotti, Giuseppe; Adachi, Tetsuya; Gasparutti, Isabella; Vincini, Giulio; Zhu, Wenliang; Boffelli, Marco; Rondinella, Alfredo; Marin, Elia; Ichioka, Hiroaki; Yamamoto, Toshiro; Marunaka, Yoshinori; Kanamura, Narisato
2017-02-01
The Raman spectroscopic method has been applied to quantitatively assess the in vitro degree of demineralization in healthy human teeth. Based on previous evaluations of Raman selection rules (empowered by an orientation distribution function (ODF) statistical algorithm) and on a newly proposed analysis of phonon density of states (PDOS) for selected vibrational modes of the hexagonal structure of hydroxyapatite, a molecular-scale evaluation of the demineralization process upon in vitro exposure to a highly acidic beverage (i.e., CocaCola™ Classic, pH = 2.5) could be obtained. The Raman method proved quite sensitive and spectroscopic features could be directly related to an increase in off-stoichiometry of the enamel surface structure since the very early stage of the demineralization process (i.e., when yet invisible to other conventional analytical techniques). The proposed Raman spectroscopic algorithm might possess some generality for caries risk assessment, allowing a prompt non-contact diagnostic practice in dentistry.
Sensitivity method for integrated structure/active control law design
NASA Technical Reports Server (NTRS)
Gilbert, Michael G.
1987-01-01
The development is described of an integrated structure/active control law design methodology for aeroelastic aircraft applications. A short motivating introduction to aeroservoelasticity is given along with the need for integrated structures/controls design algorithms. Three alternative approaches to development of an integrated design method are briefly discussed with regards to complexity, coordination and tradeoff strategies, and the nature of the resulting solutions. This leads to the formulation of the proposed approach which is based on the concepts of sensitivity of optimum solutions and multi-level decompositions. The concept of sensitivity of optimum is explained in more detail and compared with traditional sensitivity concepts of classical control theory. The analytical sensitivity expressions for the solution of the linear, quadratic cost, Gaussian (LQG) control problem are summarized in terms of the linear regulator solution and the Kalman Filter solution. Numerical results for a state space aeroelastic model of the DAST ARW-II vehicle are given, showing the changes in aircraft responses to variations of a structural parameter, in this case first wing bending natural frequency.
Hahn, David W; Omenetto, Nicoló
2010-12-01
Laser-induced breakdown spectroscopy (LIBS) has become a very popular analytical method in the last decade in view of some of its unique features such as applicability to any type of sample, practically no sample preparation, remote sensing capability, and speed of analysis. The technique has a remarkably wide applicability in many fields, and the number of applications is still growing. From an analytical point of view, the quantitative aspects of LIBS may be considered its Achilles' heel, first due to the complex nature of the laser-sample interaction processes, which depend upon both the laser characteristics and the sample material properties, and second due to the plasma-particle interaction processes, which are space and time dependent. Together, these may cause undesirable matrix effects. Ways of alleviating these problems rely upon the description of the plasma excitation-ionization processes through the use of classical equilibrium relations and therefore on the assumption that the laser-induced plasma is in local thermodynamic equilibrium (LTE). Even in this case, the transient nature of the plasma and its spatial inhomogeneity need to be considered and overcome in order to justify the theoretical assumptions made. This first article focuses on the basic diagnostics aspects and presents a review of the past and recent LIBS literature pertinent to this topic. Previous research on non-laser-based plasma literature, and the resulting knowledge, is also emphasized. The aim is, on one hand, to make the readers aware of such knowledge and on the other hand to trigger the interest of the LIBS community, as well as the larger analytical plasma community, in attempting some diagnostic approaches that have not yet been fully exploited in LIBS.
Classical simulation of quantum many-body systems
NASA Astrophysics Data System (ADS)
Huang, Yichen
Classical simulation of quantum many-body systems is in general a challenging problem for the simple reason that the dimension of the Hilbert space grows exponentially with the system size. In particular, merely encoding a generic quantum many-body state requires an exponential number of bits. However, condensed matter physicists are mostly interested in local Hamiltonians and especially their ground states, which are highly non-generic. Thus, we might hope that at least some physical systems allow efficient classical simulation. Starting with one-dimensional (1D) quantum systems (i.e., the simplest nontrivial case), the first basic question is: Which classes of states have efficient classical representations? It turns out that this question is quantitatively related to the amount of entanglement in the state, for states with "little entanglement'' are well approximated by matrix product states (a data structure that can be manipulated efficiently on a classical computer). At a technical level, the mathematical notion for "little entanglement'' is area law, which has been proved for unique ground states in 1D gapped systems. We establish an area law for constant-fold degenerate ground states in 1D gapped systems and thus explain the effectiveness of matrix-product-state methods in (e.g.) symmetry breaking phases. This result might not be intuitively trivial as degenerate ground states in gapped systems can be long-range correlated. Suppose an efficient classical representation exists. How can one find it efficiently? The density matrix renormalization group is the leading numerical method for computing ground states in 1D quantum systems. However, it is a heuristic algorithm and the possibility that it may fail in some cases cannot be completely ruled out. Recently, a provably efficient variant of the density matrix renormalization group has been developed for frustration-free 1D gapped systems. We generalize this algorithm to all (i.e., possibly frustrated) 1D gapped systems. Note that the ground-state energy of 1D gapless Hamiltonians is computationally intractable even in the presence of translational invariance. It is tempting to extend methods and tools in 1D to two and higher dimensions (2+D), e.g., matrix product states are generalized to tensor network states. Since an area law for entanglement (if formulated properly) implies efficient matrix product state representations in 1D, an interesting question is whether a similar implication holds in 2+D. Roughly speaking, we show that an area law for entanglement (in any reasonable formulation) does not always imply efficient tensor network representations of the ground states of 2+D local Hamiltonians even in the presence of translational invariance. It should be emphasized that this result does not contradict with the common sense that in practice quantum states with more entanglement usually require more space to be stored classically; rather, it demonstrates that the relationship between entanglement and efficient classical representations is still far from being well understood. Excited eigenstates participate in the dynamics of quantum systems and are particularly relevant to the phenomenon of many-body localization (absence of transport at finite temperature in strongly correlated systems). We study the entanglement of excited eigenstates in random spin chains and expect that its singularities coincide with dynamical quantum phase transitions. This expectation is confirmed in the disordered quantum Ising chain using both analytical and numerical methods. Finally, we study the problem of generating ground states (possibly with topological order) in 1D gapped systems using quantum circuits. This is an interesting problem both in theory and in practice. It not only characterizes the essential difference between the entanglement patterns that give rise to trivial and nontrivial topological order, but also quantifies the difficulty of preparing quantum states with a quantum computer (in experiments).
Ultra-small dye-doped silica nanoparticles via modified sol-gel technique
NASA Astrophysics Data System (ADS)
Riccò, R.; Nizzero, S.; Penna, E.; Meneghello, A.; Cretaio, E.; Enrichi, F.
2018-05-01
In modern biosensing and imaging, fluorescence-based methods constitute the most diffused approach to achieve optimal detection of analytes, both in solution and on the single-particle level. Despite the huge progresses made in recent decades in the development of plasmonic biosensors and label-free sensing techniques, fluorescent molecules remain the most commonly used contrast agents to date for commercial imaging and detection methods. However, they exhibit low stability, can be difficult to functionalise, and often result in a low signal-to-noise ratio. Thus, embedding fluorescent probes into robust and bio-compatible materials, such as silica nanoparticles, can substantially enhance the detection limit and dramatically increase the sensitivity. In this work, ultra-small fluorescent silica nanoparticles (NPs) for optical biosensing applications were doped with a fluorescent dye, using simple water-based sol-gel approaches based on the classical Stöber procedure. By systematically modulating reaction parameters, controllable size tuning of particle diameters as low as 10 nm was achieved. Particles morphology and optical response were evaluated showing a possible single-molecule behaviour, without employing microemulsion methods to achieve similar results. [Figure not available: see fulltext.
Cantilever testing of sintered-silver interconnects
Wereszczak, Andrew A.; Chen, Branndon R.; Jadaan, Osama M.; ...
2017-10-19
Cantilever testing is an underutilized test method from which results and interpretations promote greater understanding of the tensile and shear failure responses of interconnects, metallizations, or bonded joints. The use and analysis of this method were pursued through the mechanical testing of sintered-silver interconnects that joined Ni/Au-plated copper pillars or Ti/Ni/Ag-plated silicon pillars to Ag-plated direct bonded copper substrates. Sintered-silver was chosen as the interconnect test medium because of its high electrical and thermal conductivities and high-temperature capability—attractive characteristics for a candidate interconnect in power electronic components and other devices. Deep beam theory was used to improve upon the estimationsmore » of the tensile and shear stresses calculated from classical beam theory. The failure stresses of the sintered-silver interconnects were observed to be dependent on test-condition and test-material-system. In conclusion, the experimental simplicity of cantilever testing, and the ability to analytically calculate tensile and shear stresses at failure, result in it being an attractive mechanical test method to evaluate the failure response of interconnects.« less
Cantilever testing of sintered-silver interconnects
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wereszczak, Andrew A.; Chen, Branndon R.; Jadaan, Osama M.
Cantilever testing is an underutilized test method from which results and interpretations promote greater understanding of the tensile and shear failure responses of interconnects, metallizations, or bonded joints. The use and analysis of this method were pursued through the mechanical testing of sintered-silver interconnects that joined Ni/Au-plated copper pillars or Ti/Ni/Ag-plated silicon pillars to Ag-plated direct bonded copper substrates. Sintered-silver was chosen as the interconnect test medium because of its high electrical and thermal conductivities and high-temperature capability—attractive characteristics for a candidate interconnect in power electronic components and other devices. Deep beam theory was used to improve upon the estimationsmore » of the tensile and shear stresses calculated from classical beam theory. The failure stresses of the sintered-silver interconnects were observed to be dependent on test-condition and test-material-system. In conclusion, the experimental simplicity of cantilever testing, and the ability to analytically calculate tensile and shear stresses at failure, result in it being an attractive mechanical test method to evaluate the failure response of interconnects.« less
Investigation of laser-tissue interaction in medicine by means of laser spectroscopic measurements
NASA Astrophysics Data System (ADS)
Lademann, Juergen; Weigmann, Hans-Juergen
1995-01-01
Toxic and carcinogenic substances were produced during laser application in medicine for the cutting and evaporation of tissue. The laser smoke presents a danger potential for the medical staff and the patients. The laser tissue interaction process was investigated by means of laser spectroscopic measurements which give the possibility of measuring metastable molecular states directly as a prerequisite to understand and to influence fundamental laser tissue interaction processes in order to reduce the amount of harmful chemicals. Highly excited atomic and molecular states and free radicals (CN, OH, C2, CH, CH2) have been detected applying spontaneous and laser induced fluorescence methods. It was found that the formation of harmful substances in the laser plumes can be reduced significantly by optimization of the surrounding gas atmosphere. A high content of oxygen or water in the interaction zone has been found, in agreement with the results of classical and analytical methods, as a suitable way to decrease pollutant emission. The experimental methods and the principal results are applicable not only in laser medicine but in laser material treatment generally.
X-Ray Diffraction of different samples of Swarna Makshika Bhasma.
Gupta, Ramesh Kumar; Lakshmi, Vijay; Jha, Chandra Bhushan
2015-01-01
Shodhana and Marana are a series of complex procedures that identify the undesirable effects of heavy metals/minerals and convert them into absorbable and assimilable forms. Study on the analytical levels is essential to evaluate the structural and chemical changes that take place during and after following such procedures as described in major classical texts to understand the mystery behind these processes. X-Ray Diffraction (XRD) helps to identify and characterize minerals/metals and fix up the particular characteristics pattern of prepared Bhasma. To evaluate the chemical changes in Swarna Makshika Bhasma prepared by using different media and methods. In this study, raw Swarna Makshika, purified Swarna Makshika and four types of Swarna Makshika Bhasma prepared by using different media and methods were analyzed by XRD study. XRD study of different samples revealed strongest peaks of iron oxide in Bhasma. Other phases of Cu2O, FeS2, Cu2S, FeSO4, etc., were also identified in many of the samples. XRD study revealed that Swarna Makshika Bhasma prepared by Kupipakwa method is better, convenient, and can save time.
Lewczuk, Piotr; Riederer, Peter; O'Bryant, Sid E; Verbeek, Marcel M; Dubois, Bruno; Visser, Pieter Jelle; Jellinger, Kurt A; Engelborghs, Sebastiaan; Ramirez, Alfredo; Parnetti, Lucilla; Jack, Clifford R; Teunissen, Charlotte E; Hampel, Harald; Lleó, Alberto; Jessen, Frank; Glodzik, Lidia; de Leon, Mony J; Fagan, Anne M; Molinuevo, José Luis; Jansen, Willemijn J; Winblad, Bengt; Shaw, Leslie M; Andreasson, Ulf; Otto, Markus; Mollenhauer, Brit; Wiltfang, Jens; Turner, Martin R; Zerr, Inga; Handels, Ron; Thompson, Alexander G; Johansson, Gunilla; Ermann, Natalia; Trojanowski, John Q; Karaca, Ilker; Wagner, Holger; Oeckl, Patrick; van Waalwijk van Doorn, Linda; Bjerke, Maria; Kapogiannis, Dimitrios; Kuiperij, H Bea; Farotti, Lucia; Li, Yi; Gordon, Brian A; Epelbaum, Stéphane; Vos, Stephanie J B; Klijn, Catharina J M; Van Nostrand, William E; Minguillon, Carolina; Schmitz, Matthias; Gallo, Carla; Lopez Mato, Andrea; Thibaut, Florence; Lista, Simone; Alcolea, Daniel; Zetterberg, Henrik; Blennow, Kaj; Kornhuber, Johannes
2018-06-01
In the 12 years since the publication of the first Consensus Paper of the WFSBP on biomarkers of neurodegenerative dementias, enormous advancement has taken place in the field, and the Task Force takes now the opportunity to extend and update the original paper. New concepts of Alzheimer's disease (AD) and the conceptual interactions between AD and dementia due to AD were developed, resulting in two sets for diagnostic/research criteria. Procedures for pre-analytical sample handling, biobanking, analyses and post-analytical interpretation of the results were intensively studied and optimised. A global quality control project was introduced to evaluate and monitor the inter-centre variability in measurements with the goal of harmonisation of results. Contexts of use and how to approach candidate biomarkers in biological specimens other than cerebrospinal fluid (CSF), e.g. blood, were precisely defined. Important development was achieved in neuroimaging techniques, including studies comparing amyloid-β positron emission tomography results to fluid-based modalities. Similarly, development in research laboratory technologies, such as ultra-sensitive methods, raises our hopes to further improve analytical and diagnostic accuracy of classic and novel candidate biomarkers. Synergistically, advancement in clinical trials of anti-dementia therapies energises and motivates the efforts to find and optimise the most reliable early diagnostic modalities. Finally, the first studies were published addressing the potential of cost-effectiveness of the biomarkers-based diagnosis of neurodegenerative disorders.