Analytical solution for the advection-dispersion transport equation in layered media
USDA-ARS's Scientific Manuscript database
The advection-dispersion transport equation with first-order decay was solved analytically for multi-layered media using the classic integral transform technique (CITT). The solution procedure used an associated non-self-adjoint advection-diffusion eigenvalue problem that had the same form and coef...
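As a much simpler companion to the abstract above, the single-layer, steady-state special case of the advection-dispersion equation with first-order decay admits a closed-form solution that is easy to verify numerically. The sketch below uses hypothetical parameter values and is not the paper's multilayer CITT solution; it only checks the closed form against the governing ODE:

```python
import math

def steady_state_concentration(x, v, D, lam, c0):
    """Steady-state solution of D*c'' - v*c' - lam*c = 0 on x >= 0
    with c(0) = c0 and c bounded as x -> infinity (decaying root)."""
    r = (v - math.sqrt(v**2 + 4.0 * D * lam)) / (2.0 * D)
    return c0 * math.exp(r * x)

# Residual check of the ODE via central differences (hypothetical parameters)
v, D, lam, c0 = 1.0, 0.5, 0.2, 10.0
h = 1e-4
x = 2.0
c = lambda s: steady_state_concentration(s, v, D, lam, c0)
cpp = (c(x + h) - 2 * c(x) + c(x - h)) / h**2   # approximate c''
cp = (c(x + h) - c(x - h)) / (2 * h)            # approximate c'
residual = D * cpp - v * cp - lam * c(x)        # should be near zero
```

The decaying root of the characteristic equation is selected so the solution stays bounded downstream, which is what distinguishes it from the general two-parameter family.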
Extended Analytic Device Optimization Employing Asymptotic Expansion
NASA Technical Reports Server (NTRS)
Mackey, Jonathan; Sehirlioglu, Alp; Dynsys, Fred
2013-01-01
Analytic optimization of a thermoelectric junction often introduces several simplifying assumptions, including constant material properties, fixed known hot and cold shoe temperatures, and thermally insulated leg sides. In fact all of these simplifications will have an effect on device performance, ranging from negligible to significant depending on conditions. Numerical methods, such as Finite Element Analysis or iterative techniques, are often used to perform more detailed analysis and account for these simplifications. While numerical methods may stand as a suitable solution scheme, they are weak in gaining physical understanding and only serve to optimize through iterative searching techniques. Analytic and asymptotic expansion techniques can be used to solve the governing system of thermoelectric differential equations with fewer or less severe assumptions than the classic case. Analytic methods can provide meaningful closed form solutions and generate better physical understanding of the conditions for when simplifying assumptions may be valid. In obtaining the analytic solutions a set of dimensionless parameters, which characterize all thermoelectric couples, is formulated and provides the limiting cases for validating assumptions. Presentation includes optimization of both classic rectangular couples as well as practically and theoretically interesting cylindrical couples using optimization parameters physically meaningful to a cylindrical couple. Solutions incorporate the physical behavior for i) thermal resistance of hot and cold shoes, ii) variable material properties with temperature, and iii) lateral heat transfer through leg sides.
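The classic constant-property optimization the abstract takes as its starting point reduces to a well-known closed-form efficiency in terms of the dimensionless figure of merit ZT. The sketch below evaluates that textbook expression with illustrative temperatures; it is not the paper's extended asymptotic analysis:

```python
import math

def max_efficiency(T_h, T_c, ZT):
    """Classic constant-property maximum efficiency of a thermoelectric
    couple: the Carnot factor times a ZT-dependent reduction factor
    (textbook result, not the paper's extended treatment)."""
    carnot = (T_h - T_c) / T_h
    m = math.sqrt(1.0 + ZT)
    return carnot * (m - 1.0) / (m + T_c / T_h)

# Efficiency grows with ZT but is always bounded by the Carnot limit
eta1 = max_efficiency(600.0, 300.0, 1.0)
eta2 = max_efficiency(600.0, 300.0, 2.0)
```

The ZT appearing here is evaluated at a mean temperature under the constant-property assumption, which is exactly one of the simplifications the abstract says the extended analysis relaxes.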
Introducing Chemometrics to the Analytical Curriculum: Combining Theory and Lab Experience
ERIC Educational Resources Information Center
Gilbert, Michael K.; Luttrell, Robert D.; Stout, David; Vogt, Frank
2008-01-01
Beer's law is an ideal technique that works only in certain situations. A method for dealing with more complex conditions needs to be integrated into the analytical chemistry curriculum. For that reason, the capabilities and limitations of two common chemometric algorithms, classical least squares (CLS) and principal component regression (PCR),…
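The contrast between CLS and PCR can be sketched on simulated Beer's-law data. The following is a minimal illustration with synthetic spectra (the matrices, noise level, and mixture are invented for the demo), not the curriculum's actual lab procedure:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated calibration: 20 mixtures of 2 analytes, 50 spectral channels
K = rng.random((2, 50))            # pure-component spectra (hypothetical)
C = rng.random((20, 2))            # known concentrations
A = C @ K + 1e-4 * rng.standard_normal((20, 50))  # Beer's-law mixing + noise

# Classical least squares: estimate K from calibration, then invert for
# the concentrations of a new sample
K_hat = np.linalg.lstsq(C, A, rcond=None)[0]
a_new = np.array([0.3, 0.7]) @ K
c_cls = np.linalg.lstsq(K_hat.T, a_new, rcond=None)[0]

# Principal component regression: project spectra onto leading PCs,
# then regress concentrations on the scores
A_mean = A.mean(axis=0)
U, s, Vt = np.linalg.svd(A - A_mean, full_matrices=False)
T = (A - A_mean) @ Vt[:2].T        # scores on 2 components
B = np.linalg.lstsq(np.column_stack([np.ones(20), T]), C, rcond=None)[0]
t_new = (a_new - A_mean) @ Vt[:2].T
c_pcr = np.concatenate([[1.0], t_new]) @ B
```

Both estimators recover the true mixture here because the data really are bilinear; the pedagogical point is that PCR keeps working when the pure spectra are unknown or collinear.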
Determination of Phosphates by the Gravimetric Quimociac Technique
ERIC Educational Resources Information Center
Shaver, Lee Alan
2008-01-01
The determination of phosphates by the classic quimociac gravimetric technique was used successfully as a laboratory experiment in our undergraduate analytical chemistry course. Phosphate-containing compounds are dissolved in acid and converted to soluble orthophosphate ion (PO₄³⁻). The soluble phosphate is easily…
Treatment of a Disorder of Self through Functional Analytic Psychotherapy
ERIC Educational Resources Information Center
Ferro-Garcia, Rafael; Lopez-Bermudez, Miguel Angel; Valero-Aguayo, Luis
2012-01-01
This paper presents a clinical case study of a depressed female, treated by means of Functional Analytic Psychotherapy (FAP) based on the theory and techniques for treating an "unstable self" (Kohlenberg & Tsai, 1991), instead of the classic treatment for depression. The client was a 20-year-old college student. The trigger for her problems was a…
Numerical methods for coupled fracture problems
NASA Astrophysics Data System (ADS)
Viesca, Robert C.; Garagash, Dmitry I.
2018-04-01
We consider numerical solutions in which the linear elastic response to an opening- or sliding-mode fracture couples with one or more processes. Classic examples of such problems include traction-free cracks leading to stress singularities or cracks with cohesive-zone strength requirements leading to non-singular stress distributions. These classical problems have characteristic square-root asymptotic behavior for stress, relative displacement, or their derivatives. Prior work has shown that such asymptotics lead to a natural quadrature of the singular integrals at roots of Chebyshev polynomials of the first, second, third, or fourth kind. We show that such quadratures lead to convenient techniques for interpolation, differentiation, and integration, with the potential for spectral accuracy. We further show that these techniques, with slight amendment, may continue to be used for non-classical problems which lack the classical asymptotic behavior. We consider solutions to example problems of both the classical and non-classical variety (e.g., fluid-driven opening-mode fracture and fault shear rupture driven by thermal weakening), with comparisons to analytical solutions or asymptotes, where available.
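The quadrature the abstract refers to can be illustrated with the simplest member of the family, Gauss-Chebyshev quadrature at roots of Chebyshev polynomials of the first kind, which integrates f(x)/sqrt(1-x²) over [-1, 1] exactly for polynomial f up to degree 2n-1. A minimal sketch:

```python
import math

def gauss_chebyshev(f, n):
    """Quadrature for the integral of f(x)/sqrt(1-x^2) over [-1, 1],
    evaluated at the n roots of the Chebyshev polynomial T_n."""
    nodes = [math.cos((2 * k - 1) * math.pi / (2 * n)) for k in range(1, n + 1)]
    return (math.pi / n) * sum(f(x) for x in nodes)

I1 = gauss_chebyshev(lambda x: 1.0, 8)     # exact value: pi
I2 = gauss_chebyshev(lambda x: x * x, 8)   # exact value: pi/2
```

The weight function 1/sqrt(1-x²) is precisely the inverse-square-root singularity of the classical crack-tip asymptotics, which is why these nodes arise so naturally in fracture problems.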
Bioassays as one of the Green Chemistry tools for assessing environmental quality: A review.
Wieczerzak, M; Namieśnik, J; Kudłak, B
2016-09-01
For centuries, mankind has contributed to irreversible environmental changes, but due to the modern science of recent decades, scientists are able to assess the scale of this impact. The introduction of laws and standards to ensure environmental cleanliness requires comprehensive environmental monitoring, which should also meet the requirements of Green Chemistry. The broad spectrum of Green Chemistry principle applications should also include all of the techniques and methods of pollutant analysis and environmental monitoring. The classical methods of chemical analyses do not always match the twelve principles of Green Chemistry, and they are often expensive and employ toxic and environmentally unfriendly solvents in large quantities. These solvents can generate hazardous and toxic waste while consuming large volumes of resources. Therefore, there is a need to develop reliable techniques that would not only meet the requirements of Green Analytical Chemistry but could also complement, and sometimes provide an alternative to, conventional classical analytical methods. These alternatives may be found in bioassays. Commercially available certified bioassays often come in the form of ready-to-use toxkits, and they are easy to use and relatively inexpensive in comparison with certain conventional analytical methods. The aim of this study is to provide evidence that bioassays can be a complementary alternative to classical methods of analysis and can fulfil Green Analytical Chemistry criteria. The test organisms discussed in this work include single-celled organisms, such as cell lines, fungi (yeast), and bacteria, and multicellular organisms, such as invertebrate and vertebrate animals and plants. Copyright © 2016 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Chen, Shanzhen; Jiang, Xiaoyun
2012-08-01
In this paper, analytical solutions to time-fractional partial differential equations in a multi-layer annulus are presented. The final solutions are obtained in terms of the Mittag-Leffler function by using the finite integral transform technique and the Laplace transform technique. In addition, the classical diffusion equation (α=1), the Helmholtz equation (α→0) and the wave equation (α=2) are discussed as special cases. Finally, an illustrative example problem for the three-layer semi-circular annular region is solved and numerical results are presented graphically for various orders of the fractional derivative.
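The Mittag-Leffler function appearing in these solutions generalizes the exponential and can be evaluated, for small arguments, by direct series summation. A minimal sketch (not a robust evaluator; the truncation length is an arbitrary choice):

```python
import math

def mittag_leffler(alpha, z, terms=80):
    """One-parameter Mittag-Leffler function E_alpha(z) by direct series
    summation: sum of z**k / Gamma(alpha*k + 1). Adequate for small |z|."""
    return sum(z**k / math.gamma(alpha * k + 1) for k in range(terms))

# alpha = 1 reduces to the exponential (the classical-diffusion limit in
# the abstract); alpha = 2 gives cosh(sqrt(z)) (the wave-equation limit)
e1 = mittag_leffler(1.0, 1.0)
e2 = mittag_leffler(2.0, 1.0)
```

The two special cases checked below mirror the abstract's limits α=1 (classical diffusion) and α=2 (wave equation).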
Multilevel Modeling and Policy Development: Guidelines and Applications to Medical Travel.
Garcia-Garzon, Eduardo; Zhukovsky, Peter; Haller, Elisa; Plakolm, Sara; Fink, David; Petrova, Dafina; Mahalingam, Vaishali; Menezes, Igor G; Ruggeri, Kai
2016-01-01
Medical travel has expanded rapidly in recent years, resulting in new markets and increased access to medical care. Whereas several studies investigated the motives of individuals seeking healthcare abroad, the conventional analytical approach is limited by substantial caveats. Classical techniques as found in the literature cannot provide sufficient insight due to the nested nature of data generated. The application of adequate analytical techniques, specifically multilevel modeling, is scarce to non-existent in the context of medical travel. This study introduces the guidelines for application of multilevel techniques in public health research by presenting an application of multilevel modeling in analyzing the decision-making patterns of potential medical travelers. Benefits and potential limitations are discussed. PMID: 27252672
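The nested-data argument can be made concrete with the intraclass correlation (ICC): the share of outcome variance sitting at the group level, which single-level techniques ignore. A sketch on synthetic clustered data (the group structure, variable names, and effect sizes are invented for illustration; this is the one-way ANOVA estimator, not the study's full multilevel model):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic nested data: travelers clustered in 30 countries of origin
groups, per_group = 30, 20
country_effect = rng.normal(0.0, 1.0, size=groups)   # between-group sd = 1
y = np.repeat(country_effect, per_group) + rng.normal(0.0, 1.0, groups * per_group)

# One-way ANOVA estimator of the intraclass correlation (true value 0.5 here)
ym = y.reshape(groups, per_group)
msb = per_group * np.var(ym.mean(axis=1), ddof=1)    # between mean square
msw = np.mean(np.var(ym, axis=1, ddof=1))            # within mean square
icc = (msb - msw) / (msb + (per_group - 1) * msw)
```

A nontrivial ICC means observations within a country are not independent, so ordinary single-level standard errors are too small; that is the core motivation for multilevel modeling.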
Analytic Methods for Adjusting Subjective Rating Schemes.
ERIC Educational Resources Information Center
Cooper, Richard V. L.; Nelson, Gary R.
Statistical and econometric techniques of correcting for supervisor bias in models of individual performance appraisal were developed, using a variant of the classical linear regression model. Location bias occurs when individual performance is systematically overestimated or underestimated, while scale bias results when raters either exaggerate…
Conceptual data sampling for breast cancer histology image classification.
Rezk, Eman; Awan, Zainab; Islam, Fahad; Jaoua, Ali; Al Maadeed, Somaya; Zhang, Nan; Das, Gautam; Rajpoot, Nasir
2017-10-01
Data analytics have become increasingly complicated as the amount of data has increased. One technique that is used to enable data analytics in large datasets is data sampling, in which a portion of the data is selected to preserve the data characteristics for use in data analytics. In this paper, we introduce a novel data sampling technique that is rooted in formal concept analysis theory. This technique is used to create samples reliant on the data distribution across a set of binary patterns. The proposed sampling technique is applied in classifying the regions of breast cancer histology images as malignant or benign. The performance of our method is compared to other classical sampling methods. The results indicate that our method is efficient and generates an illustrative sample of small size. It is also competitive with other sampling methods in terms of sample size and sample quality, as represented in classification accuracy and F1 measure. Copyright © 2017 Elsevier Ltd. All rights reserved.
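One loose way to read "samples reliant on the data distribution across a set of binary patterns" is proportional sampling within each distinct pattern, so that no pattern present in the data disappears from the sample. The sketch below is that simplified interpretation, not the paper's FCA-based algorithm:

```python
from collections import defaultdict

def pattern_sample(rows, fraction=0.5):
    """Simplified distribution-preserving sampling sketch: group rows by
    their binary feature pattern and keep a proportional share of each
    group, with at least one representative per pattern.
    (A loose interpretation of the paper's idea, not its algorithm.)"""
    buckets = defaultdict(list)
    for row in rows:
        buckets[tuple(row)].append(row)
    sample = []
    for pattern, members in buckets.items():
        keep = max(1, round(fraction * len(members)))
        sample.extend(members[:keep])
    return sample

# Hypothetical binary feature rows: three distinct patterns
data = [(1, 0, 1)] * 6 + [(0, 1, 1)] * 3 + [(1, 1, 0)] * 1
s = pattern_sample(data, fraction=0.5)
```

The guarantee worth noting is the `max(1, ...)`: even a pattern seen once survives, which is the sense in which the sample stays "illustrative" of the full dataset.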
Preparation, Characterization, and Selectivity Study of Mixed-Valence Sulfites
ERIC Educational Resources Information Center
Silva, Luciana A.; de Andrade, Jailson B.
2010-01-01
A project involving the synthesis of an isomorphic double sulfite series and characterization by classical inorganic chemical analyses is described. The project is performed by upper-level undergraduate students in the laboratory. This compound series is suitable for examining several chemical concepts and analytical techniques in inorganic…
Insecticide ADME for support of early-phase discovery: combining classical and modern techniques.
David, Michael D
2017-04-01
The two factors that determine an insecticide's potency are its binding to a target site (intrinsic activity) and the ability of its active form to reach the target site (bioavailability). Bioavailability is dictated by the compound's stability and transport kinetics, which are determined by both physical and biochemical characteristics. At BASF Global Insecticide Research, we characterize bioavailability in early research with an ADME (Absorption, Distribution, Metabolism and Excretion) approach, combining classical and modern techniques. For biochemical assessment of metabolism, we purify native insect enzymes using classical techniques, and recombinantly express individual insect enzymes that are known to be relevant in insecticide metabolism and resistance. For analytical characterization of an experimental insecticide and its metabolites, we conduct classical radiotracer translocation studies when a radiolabel is available. In discovery, where typically no radiolabel has been synthesized, we utilize modern high-resolution mass spectrometry to probe complex systems for the test compound and its metabolites. By using these combined approaches, we can rapidly compare the ADME properties of sets of new experimental insecticides and aid in the design of structures with an improved potential to advance in the research pipeline. © 2016 Society of Chemical Industry.
NASA Technical Reports Server (NTRS)
1983-01-01
The general principles of classical liquid chromatography and high pressure liquid chromatography (HPLC) are reviewed, and their advantages and disadvantages are compared. Several chromatographic techniques are reviewed, and the analytical separation of a C-ether liquid lubricant by each technique is illustrated. A practical application of HPLC is then demonstrated by analyzing a degraded C-ether liquid lubricant from full scale, high temperature bearing tests.
Analytical advances in pharmaceutical impurity profiling.
Holm, René; Elder, David P
2016-05-25
Impurities will be present in all drug substances and drug products, i.e. nothing is 100% pure if one looks in enough depth. The current regulatory guidance on impurities accepts this, and for drug products with a dose of less than 2 g/day, the identification threshold for impurities is set at 0.1% and above (ICH Q3B(R2), 2006). For some impurities, this is a simple undertaking, as generally available analytical techniques can address the prevailing analytical challenges; for others, this may be much more challenging, requiring more sophisticated analytical approaches. The present review provides an insight into current development of analytical techniques to investigate and quantify impurities in drug substances and drug products, discussing progress particularly within the field of chromatography to ensure separation and quantification of related impurities. Further, a section is devoted to the identification of classical impurities; in addition, inorganic (metal residues) and solid-state impurities are also discussed. Risk control strategies for pharmaceutical impurities, aligned with several of the ICH guidelines, are also discussed. Copyright © 2015 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Körsgen, Martin; Pelster, Andreas; Dreisewerd, Klaus; Arlinghaus, Heinrich F.
2016-02-01
The analytical sensitivity in matrix-assisted laser desorption/ionization mass spectrometry (MALDI-MS) is largely affected by the specific analyte-matrix interaction, in particular by the possible incorporation of the analytes into crystalline MALDI matrices. Here we used time-of-flight secondary ion mass spectrometry (ToF-SIMS) to visualize the incorporation of three peptides with different hydrophobicities, bradykinin, Substance P, and vasopressin, into two classic MALDI matrices, 2,5-dihydroxybenzoic acid (DHB) and α-cyano-4-hydroxycinnamic acid (HCCA). For depth profiling, an Ar cluster ion beam was used to gradually sputter through the matrix crystals without causing significant degradation of matrix or biomolecules. A pulsed Bi3 ion cluster beam was used to image the lateral analyte distribution in the center of the sputter crater. Using this dual beam technique, the 3D distribution of the analytes and spatial segregation effects within the matrix crystals were imaged with sub-μm resolution. The technique could in the future enable matrix-enhanced (ME)-ToF-SIMS imaging of peptides in tissue slices at ultra-high resolution.
Off-diagonal series expansion for quantum partition functions
NASA Astrophysics Data System (ADS)
Hen, Itay
2018-05-01
We derive an integral-free thermodynamic perturbation series expansion for quantum partition functions which enables an analytical term-by-term calculation of the series. The expansion is carried out around the partition function of the classical component of the Hamiltonian with the expansion parameter being the strength of the off-diagonal, or quantum, portion. To demonstrate the usefulness of the technique we analytically compute to third order the partition functions of the 1D Ising model with longitudinal and transverse fields, and the quantum 1D Heisenberg model.
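The expansion's zeroth-order term, the classical partition function, can be checked directly against the exact quantum partition function on a chain small enough for dense diagonalization. The sketch below (a tiny system with arbitrary couplings; it does not implement the paper's series) verifies that the two coincide when the off-diagonal strength vanishes:

```python
import numpy as np
from itertools import product

# Exact quantum vs classical partition function for a tiny 1D Ising ring
N, J, h, beta = 4, 1.0, 0.3, 0.7

sx = np.array([[0, 1], [1, 0]], dtype=float)
sz = np.array([[1, 0], [0, -1]], dtype=float)
I2 = np.eye(2)

def site_op(op, i):
    """Embed a single-site operator at site i of the N-spin chain."""
    out = op if i == 0 else I2
    for j in range(1, N):
        out = np.kron(out, op if j == i else I2)
    return out

def hamiltonian(gamma):
    H = np.zeros((2**N, 2**N))
    for i in range(N):
        H -= J * site_op(sz, i) @ site_op(sz, (i + 1) % N)  # ring coupling
        H -= h * site_op(sz, i)                              # longitudinal field
        H -= gamma * site_op(sx, i)                          # off-diagonal part
    return H

def Z_quantum(gamma):
    return np.sum(np.exp(-beta * np.linalg.eigvalsh(hamiltonian(gamma))))

# Classical partition function: direct sum over spin configurations
Z_classical = sum(
    np.exp(-beta * (-J * sum(s[i] * s[(i + 1) % N] for i in range(N))
                    - h * sum(s)))
    for s in product([-1, 1], repeat=N)
)
```

At gamma = 0 the Hamiltonian is diagonal in the z basis and the quantum trace reduces to the classical configuration sum, which is exactly the expansion point of the abstract's series.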
DOE Office of Scientific and Technical Information (OSTI.GOV)
Everett, W.R.; Rechnitz, G.A.
1999-01-01
A mini review of enzyme-based electrochemical biosensors for inhibition analysis of organophosphorus and carbamate pesticides is presented. Discussion includes the most recent literature to present advances in detection limits, selectivity and real sample analysis. Recent reviews on the monitoring of pesticides and their residues suggest that the classical analytical techniques of gas and liquid chromatography are the most widely used methods of detection. These techniques, although very accurate in their determinations, can be quite time consuming and expensive and usually require extensive sample clean up and preconcentration. For these and many other reasons, the classical techniques are very difficult to adapt for field use. Numerous researchers, in the past decade, have developed and made improvements on biosensors for use in pesticide analysis. This mini review will focus on recent advances made in enzyme-based electrochemical biosensors for the determinations of organophosphorus and carbamate pesticides.
ERIC Educational Resources Information Center
Ammentorp, William
There is much to be gained by using systems analysis in educational administration. Most administrators, presently relying on classical statistical techniques restricted to problems having few variables, should be trained to use more sophisticated tools such as systems analysis. The systems analyst, interested in the basic processes of a group or…
Classical problems in computational aero-acoustics
NASA Technical Reports Server (NTRS)
Hardin, Jay C.
1996-01-01
In the early development of computational aeroacoustics (CAA), preliminary applications were to classical problems whose known analytical solutions could be used to validate the numerical results. Such comparisons were used to overcome the numerical problems inherent in these calculations. Comparisons were made between the various numerical approaches to the problems, such as direct simulations, acoustic analogies, and acoustic/viscous splitting techniques. The aim was to demonstrate the applicability of CAA as a tool in the same class as computational fluid dynamics. The scattering problems that occur are considered and simple sources are discussed.
Liu, X; Abd El-Aty, A M; Shim, J-H
2011-10-01
Nigella sativa L. (black cumin), commonly known as black seed, is a member of the Ranunculaceae family. This seed is used as a natural remedy in many Middle Eastern and Far Eastern countries. Extracts prepared from N. sativa have, for centuries, been used for medical purposes. Thus far, the organic compounds in N. sativa, including alkaloids, steroids, carbohydrates, flavonoids, fatty acids, etc. have been fairly well characterized. Herein, we summarize some new extraction techniques, including microwave assisted extraction (MAE) and supercritical fluid extraction (SFE), in addition to the classical method of hydrodistillation (HD), which have been employed for isolation, and various analytical techniques used for the identification of secondary metabolites in black seed. We believe that some compounds contained in N. sativa remain to be identified, and that high-throughput screening could help to identify new compounds. A study addressing environmentally-friendly techniques that have minimal or no environmental effects is currently underway in our laboratory.
Cortez, Juliana; Pasquini, Celio
2013-02-05
The ring-oven technique, originally applied to classical qualitative analysis from the 1950s to the 1970s, is revisited to be used in a simple though highly efficient and green procedure for analyte preconcentration prior to its determination by the microanalytical techniques presently available. The proposed preconcentration technique is based on the dropwise delivery of a small volume of sample to a filter paper substrate, assisted by a flow-injection-like system. The filter paper is maintained in a small circular heated oven (the ring oven). Drops of the sample solution diffuse by capillarity from the center to a circular area of the paper substrate. After the total sample volume has been delivered, a ring with a sharp (ca. 350 μm) circular contour, about 2.0 cm in diameter, is formed on the paper and contains most of the analytes originally present in the sample volume. Preconcentration coefficients of the analyte can reach 250-fold (on a m/m basis) for a sample volume as small as 600 μL. The proposed system and procedure have been evaluated to concentrate Na, Fe, and Cu in fuel ethanol, followed by simultaneous direct determination of these species in the ring contour, employing the microanalytical technique of laser-induced breakdown spectroscopy (LIBS). Detection limits of 0.7, 0.4, and 0.3 μg mL⁻¹ and mean recoveries of (109 ± 13)%, (92 ± 18)%, and (98 ± 12)%, for Na, Fe, and Cu, respectively, were obtained in fuel ethanol. It is possible to anticipate the application of the technique, coupled to modern microanalytical and multianalyte techniques, to several analytical problems requiring analyte preconcentration and/or sample stabilization.
The Shock and Vibration Bulletin. Part 2. Invited Papers, Structural Dynamics
1974-08-01
VIKING LANDER DYNAMICS, Mr. Joseph C. Pohlen, Martin Marietta Aerospace, Denver, Colorado. Structural Dynamics: PERFORMANCE OF STATISTICAL ENERGY ANALYSIS of aerospace structures. Analytical prediction of these environments is beyond the current scope of classical modal techniques. Statistical energy analysis methods have been developed that circumvent the difficulties of high-frequency modal analysis. These statistical energy analysis methods are evaluated.
NASA Technical Reports Server (NTRS)
Mineck, Raymond E.
1992-01-01
A two-dimensional airfoil model was tested in the adaptive wall test section of the NASA Langley 0.3 meter Transonic Cryogenic Tunnel (TCT) and in the ventilated test section of the National Aeronautical Establishment Two-Dimensional High Reynolds Number Facility (HRNF). The primary goal of the tests was to compare different techniques (adaptive test section walls and classical analytical corrections) to account for wall interference. Tests were conducted over a Mach number range from 0.3 to 0.8 at chord Reynolds numbers of 10 x 10(exp 6), 15 x 10(exp 6), and 20 x 10(exp 6). The angle of attack was varied from about 12 degrees up to stall. Movement of the top and bottom test section walls was used to account for the wall interference in the HRNF tests. The test results are in good agreement.
Protein assay structured on paper by using lithography
NASA Astrophysics Data System (ADS)
Wilhelm, E.; Nargang, T. M.; Al Bitar, W.; Waterkotte, B.; Rapp, B. E.
2015-03-01
There are two main challenges in producing a robust, paper-based analytical device. The first is to create a hydrophobic barrier which, unlike the commonly used wax barriers, does not break if the paper is bent. The second is the creation of the (bio-)specific sensing layer, for which proteins have to be immobilized without diminishing their activity. We solve both problems using light-based fabrication methods that enable fast, efficient manufacturing of paper-based analytical devices. The first technique relies on silanization, by which we create a flexible hydrophobic barrier made of dimethoxydimethylsilane. The second technique demonstrated within this paper uses photobleaching to immobilize proteins by means of maskless projection lithography. Both techniques have been tested on a classical lithography setup using printed toner masks and on a system for maskless lithography. Using these setups we could demonstrate that the proposed manufacturing techniques can be carried out at low cost. The resolution of the paper-based analytical devices obtained with static masks was lower due to the lower mask resolution; better results were obtained using advanced lithography equipment. In doing so we demonstrated that our technique enables fabrication of effective hydrophobic boundary layers with a thickness of only 342 μm. Furthermore, we showed that fluorescein-5-biotin can be immobilized on the non-structured paper and be employed for the detection of streptavidin-alkaline phosphatase. By carrying out this assay on a paper-based analytical device that had been structured using the silanization technique, we proved the biological compatibility of the suggested patterning technique.
Inertia effects in thin film flow with a corrugated boundary
NASA Technical Reports Server (NTRS)
Serbetci, Ilter; Tichy, John A.
1991-01-01
An analytical solution is presented for two-dimensional, incompressible film flow between a sinusoidally grooved (or rough) surface and a flat surface. The upper grooved surface is stationary whereas the lower, smooth surface moves with a constant speed. The Navier-Stokes equations were solved employing both mapping techniques and perturbation expansions. Due to the inclusion of the inertia effects, a different pressure distribution is obtained than predicted by the classical lubrication theory. In particular, the amplitude of the pressure distribution of the classical lubrication theory is found to be in error by over 100 percent (for modified Reynolds numbers of 3-4).
Mirski, Tomasz; Bartoszcze, Michał; Bielawska-Drózd, Agata; Cieślik, Piotr; Michalski, Aleksander J; Niemcewicz, Marcin; Kocik, Janusz; Chomiczewski, Krzysztof
2014-01-01
Modern threats of bioterrorism force the need to develop methods for rapid and accurate identification of dangerous biological agents. Currently, there are many types of methods used in this field of studies that are based on immunological or genetic techniques, or constitute a combination of both methods (immuno-genetic). There are also methods that have been developed on the basis of physical and chemical properties of the analytes. Each group of these analytical assays can be further divided into conventional methods (e.g. simple antigen-antibody reactions, classical PCR, real-time PCR), and modern technologies (e.g. microarray technology, aptamers, phosphors, etc.). Nanodiagnostics constitute another group of methods that utilize the objects at a nanoscale (below 100 nm). There are also integrated and automated diagnostic systems, which combine different methods and allow simultaneous sampling, extraction of genetic material and detection and identification of the analyte using genetic, as well as immunological techniques.
A pilot modeling technique for handling-qualities research
NASA Technical Reports Server (NTRS)
Hess, R. A.
1980-01-01
A brief survey of the more dominant analysis techniques used in closed-loop handling-qualities research is presented. These techniques are shown to rely on so-called classical and modern analytical models of the human pilot which have their foundation in the analysis and design principles of feedback control. The optimal control model of the human pilot is discussed in some detail and a novel approach to the a priori selection of pertinent model parameters is discussed. Frequency domain and tracking performance data from 10 pilot-in-the-loop simulation experiments involving 3 different tasks are used to demonstrate the parameter selection technique. Finally, the utility of this modeling approach in handling-qualities research is discussed.
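The classical pilot models surveyed here are typified by McRuer's crossover model, in which the combined pilot-vehicle open loop behaves like an integrator with a time delay near the crossover frequency. A minimal sketch (the crossover frequency and delay values are illustrative, not from the survey):

```python
import numpy as np

def crossover_open_loop(omega, omega_c=4.0, tau=0.2):
    """McRuer crossover model of the combined pilot-vehicle open loop:
    Y_p * Y_c ~ omega_c * exp(-j*omega*tau) / (j*omega)
    (classic handling-qualities form; parameters are illustrative)."""
    return omega_c * np.exp(-1j * omega * tau) / (1j * omega)

# Magnitude rolls off at -20 dB/decade and crosses unity at omega_c
w = np.logspace(-1, 1, 50)
mag = np.abs(crossover_open_loop(w))
```

The model's appeal for handling-qualities work is that only two parameters, the crossover frequency and the effective delay, summarize how the pilot adapts to a given controlled element.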
Alexovič, Michal; Horstkotte, Burkhard; Solich, Petr; Sabo, Ján
2016-02-04
Simplicity, effectiveness, swiftness, and environmental friendliness - these are the typical requirements for the state of the art development of green analytical techniques. Liquid phase microextraction (LPME) stands for a family of elegant sample pretreatment and analyte preconcentration techniques preserving these principles in numerous applications. By using only fractions of solvent and sample compared to classical liquid-liquid extraction, the extraction kinetics, the preconcentration factor, and the cost efficiency can be increased. Moreover, significant improvements can be made by automation, which is still a hot topic in analytical chemistry. This review surveys comprehensively and in two parts the developments of automation of non-dispersive LPME methodologies performed in static and dynamic modes. Their advantages and limitations and the reported analytical performances are discussed and put into perspective with the corresponding manual procedures. The automation strategies, techniques, and their operation advantages as well as their potentials are further described and discussed. In this first part, an introduction to LPME and their static and dynamic operation modes as well as their automation methodologies is given. The LPME techniques are classified according to the different approaches of protection of the extraction solvent using either a tip-like (needle/tube/rod) support (drop-based approaches), a wall support (film-based approaches), or microfluidic devices. In the second part, the LPME techniques based on porous supports for the extraction solvent such as membranes and porous media are overviewed. An outlook on future demands and perspectives in this promising area of analytical chemistry is finally given. Copyright © 2015 Elsevier B.V. All rights reserved.
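The preconcentration gain from using only a fraction of solvent follows from a simple phase-ratio relation: under ideal conditions the enrichment factor is the sample-to-acceptor volume ratio scaled by the extraction recovery. A sketch of that textbook relation (not a model of any specific LPME device):

```python
def enrichment_factor(v_sample_uL, v_acceptor_uL, recovery):
    """Idealized enrichment factor of a liquid-phase microextraction:
    the phase-volume ratio scaled by the fractional extraction recovery
    (textbook relation, shown only to illustrate why microliter acceptor
    volumes boost preconcentration)."""
    return recovery * v_sample_uL / v_acceptor_uL

# Hypothetical case: 1 mL of sample extracted into a 10 uL acceptor drop
ef = enrichment_factor(1000.0, 10.0, 0.8)
```

Shrinking the acceptor volume raises the ceiling on enrichment linearly, which is why drop- and film-based formats can outperform classical liquid-liquid extraction while consuming far less solvent.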
NASA Astrophysics Data System (ADS)
Ghiara, G.; Grande, C.; Ferrando, S.; Piccardo, P.
2018-01-01
In this study, tin-bronze analogues of archaeological objects were investigated in the presence of an aerobic Pseudomonas fluorescens strain in a solution containing chlorides, sulfates, carbonates and nitrates, according to a previous archaeological characterization. Classical fixation protocols were employed to verify the attachment capacity of the bacteria. In addition, classical metallurgical analytical techniques were used to detect the effect of the bacteria on the formation of uncommon corrosion products in such an environment. Results indicate good attachment of the bacteria to the metallic surface; the formation of the uncommon corrosion products (sulfates and sulfides) is probably connected to bacterial metabolism.
Lyapunov dimension formula for the global attractor of the Lorenz system
NASA Astrophysics Data System (ADS)
Leonov, G. A.; Kuznetsov, N. V.; Korzhemanova, N. A.; Kusakin, D. V.
2016-12-01
The exact Lyapunov dimension formula for the Lorenz system for a positive-measure set of parameters, including the classical values, was first obtained analytically by G.A. Leonov in 2002. Leonov used a construction technique based on special Lyapunov-type functions, which he had developed in 1991. It was later shown that considering a larger class of Lyapunov-type functions permits proving the validity of this formula for all parameters of the system such that all equilibria of the system are hyperbolically unstable. In the present work, the validity of the Lyapunov dimension formula is proved for a wider range of parameter values, including all parameters that satisfy the classical physical limitations.
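Leonov's closed-form expression referenced above can be evaluated directly. The following sketch (the parameter values are the classical σ = 10, b = 8/3, r = 28, chosen for illustration) shows the formula in use:

```python
import math

def lorenz_lyapunov_dimension(sigma, b, r):
    """Leonov's exact Lyapunov dimension formula for the Lorenz attractor:
    D = 3 - 2(sigma + b + 1) / (sigma + 1 + sqrt((sigma - 1)^2 + 4*sigma*r))."""
    return 3.0 - 2.0 * (sigma + b + 1.0) / (
        sigma + 1.0 + math.sqrt((sigma - 1.0) ** 2 + 4.0 * sigma * r)
    )

# Classical Lorenz parameters give the well-known value ~2.401
print(lorenz_lyapunov_dimension(10.0, 8.0 / 3.0, 28.0))
```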
Rotation forms and local Hamiltonian monodromy
NASA Astrophysics Data System (ADS)
Efstathiou, K.; Giacobbe, A.; Mardešić, P.; Sugny, D.
2017-02-01
The monodromy of torus bundles associated with completely integrable systems can be computed using geometric techniques (constructing homology cycles) or analytic arguments (computing discontinuities of abelian integrals). In this article, we give a general approach to the computation of monodromy that resembles the analytical one, reducing the problem to the computation of residues of polar 1-forms. We apply our technique to three celebrated examples of systems with monodromy (the champagne bottle, the spherical pendulum, the hydrogen atom) and to the case of non-degenerate focus-focus singularities, re-obtaining the classical results. An advantage of this approach is that the residue-like formula can be shown to be local in a neighborhood of a singularity, hence allowing the definition of monodromy also in the case of non-compact fibers. This idea has been introduced in the literature under the name of scattering monodromy. We prove the coincidence of the two definitions with the monodromy of an appropriately chosen compactification.
Cruise performance and range prediction reconsidered
NASA Astrophysics Data System (ADS)
Torenbeek, Egbert
1997-05-01
A unified analytical treatment of the cruise performance of subsonic transport aircraft is derived, valid for gas-turbine powerplant installations: turboprop, turbojet and turbofan powered aircraft. Unlike the classical treatment, the present article deals with compressibility effects on the aerodynamic characteristics. Analytical criteria are derived for the optimum cruise lift coefficient and Mach number, with and without constraints on altitude and engine rating. A simple alternative to the Bréguet range equation is presented which applies to several practical cruising flight techniques: flight at constant altitude and Mach number, and stepped cruise/climb. A practical non-iterative procedure for computing mission and reserve fuel loads in the preliminary design stage is proposed.
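For context, the classic Bréguet range equation to which the article offers an alternative can be sketched as follows; all numerical values here are illustrative assumptions, not data from the paper:

```python
import math

def breguet_range(V, L_over_D, tsfc, W_initial, W_final):
    """Classic Breguet range for a jet aircraft: R = (V/c) * (L/D) * ln(Wi/Wf).

    V      : cruise speed [m/s]
    tsfc   : thrust-specific fuel consumption [1/s] (fuel weight flow / thrust)
    W_*    : initial/final cruise weights (only their ratio matters)
    """
    return (V / tsfc) * L_over_D * math.log(W_initial / W_final)

# Illustrative (assumed) numbers: 240 m/s cruise, L/D = 17, tsfc = 1.7e-4 1/s
R = breguet_range(240.0, 17.0, 1.7e-4, 700e3, 560e3)
print(f"range = {R / 1000:.0f} km")
```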
Accuracy of specific BIVA for the assessment of body composition in the United States population.
Buffa, Roberto; Saragat, Bruno; Cabras, Stefano; Rinaldi, Andrea C; Marini, Elisabetta
2013-01-01
Bioelectrical impedance vector analysis (BIVA) is a technique for the assessment of hydration and nutritional status, used in clinical practice. Specific BIVA is an analytical variant, recently proposed for the Italian elderly population, that adjusts bioelectrical values for body geometry. The aim was to evaluate the accuracy of specific BIVA in the adult U.S. population, compared with the 'classic' BIVA procedure, using DXA as the reference technique, in order to obtain an interpretative model of body composition. A cross-sectional sample of 1590 adult individuals (836 men and 754 women, 21-49 years old) derived from the NHANES 2003-2004 was considered. Classic and specific BIVA were applied. The sensitivity and specificity in recognizing individuals below the 5th and above the 95th percentiles of percent fat (FMDXA%) and extracellular/intracellular water (ECW/ICW) ratio were evaluated by receiver operating characteristic (ROC) curves. Classic and specific BIVA results were compared by a probit multiple regression. Specific BIVA was significantly more accurate than classic BIVA in evaluating FMDXA% (ROC areas: 0.84-0.92 and 0.49-0.61, respectively; p = 0.002). The evaluation of ECW/ICW was accurate (ROC areas between 0.83 and 0.96) and similarly performed by the two procedures (p = 0.829). The accuracy of specific BIVA was similar in the two sexes (p = 0.144) and in FMDXA% and ECW/ICW (p = 0.869). Specific BIVA proved to be an accurate technique. The tolerance ellipses of specific BIVA can be used for evaluating FM% and ECW/ICW in the U.S. adult population.
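The ROC-area comparison described above can be illustrated with a minimal rank-based AUC computation (the toy scores are assumed for illustration, not taken from the study):

```python
def roc_auc(scores_pos, scores_neg):
    """Area under the ROC curve via the Mann-Whitney statistic: the
    probability that a randomly chosen positive case scores higher than
    a randomly chosen negative case (ties count 1/2)."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# Toy example: an index that separates high-FM% subjects fairly well
auc = roc_auc([0.9, 0.8, 0.7, 0.55], [0.6, 0.4, 0.3, 0.2])
print(auc)  # 0.9375
```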
Bukhvostov-Lipatov model and quantum-classical duality
NASA Astrophysics Data System (ADS)
Bazhanov, Vladimir V.; Lukyanov, Sergei L.; Runov, Boris A.
2018-02-01
The Bukhvostov-Lipatov model is an exactly soluble model of two interacting Dirac fermions in 1 + 1 dimensions. The model describes weakly interacting instantons and anti-instantons in the O (3) non-linear sigma model. In our previous work [arXiv:1607.04839] we have proposed an exact formula for the vacuum energy of the Bukhvostov-Lipatov model in terms of special solutions of the classical sinh-Gordon equation, which can be viewed as an example of a remarkable duality between integrable quantum field theories and integrable classical field theories in two dimensions. Here we present a complete derivation of this duality based on the classical inverse scattering transform method, traditional Bethe ansatz techniques and analytic theory of ordinary differential equations. In particular, we show that the Bethe ansatz equations defining the vacuum state of the quantum theory also define connection coefficients of an auxiliary linear problem for the classical sinh-Gordon equation. Moreover, we also present details of the derivation of the non-linear integral equations determining the vacuum energy and other spectral characteristics of the model in the case when the vacuum state is filled by 2-string solutions of the Bethe ansatz equations.
Comparison between different techniques applied to quartz CPO determination in granitoid mylonites
NASA Astrophysics Data System (ADS)
Fazio, Eugenio; Punturo, Rosalda; Cirrincione, Rosolino; Kern, Hartmut; Wenk, Hans-Rudolph; Pezzino, Antonino; Goswami, Shalini; Mamtani, Manish
2016-04-01
Since the second half of the last century, several techniques have been adopted to resolve the crystallographic preferred orientation (CPO) of the major minerals constituting crustal and mantle rocks. To this aim, many efforts have been made to increase the accuracy of such analytical devices as well as to progressively reduce the time needed to perform microstructural analysis. It is worth noting that many of these microstructural studies deal with quartz CPO because of the wide occurrence of this mineral phase in crustal rocks as well as its quite simple chemical composition. In the present work, four different techniques were applied to define the CPOs of dynamically recrystallized quartz domains from naturally deformed rocks collected from a ductile crustal-scale shear zone, in order to compare their advantages and limitations. The selected Alpine shear zone is located in the Aspromonte Massif (Calabrian Peloritani Orogen, southern Italy) and comprises granitoid lithotypes. The adopted methods range from the "classical" universal stage (US) to the image-analysis technique (CIP), electron back-scattered diffraction (EBSD), and time-of-flight neutron diffraction (TOF). When compared, the bulk-texture pole figures obtained by means of these different techniques show a good correlation. Advances in the analytical techniques used for microstructural investigations are outlined by discussing the quartz CPO results presented in this study.
Kann, Birthe; Windbergs, Maike
2013-04-01
Confocal Raman microscopy is an analytical technique with a steadily increasing impact in the field of pharmaceutics, as the instrumental setup allows for nondestructive visualization of component distribution within drug delivery systems. Here, the attention is mainly focused on classic solid carrier systems like tablets, pellets, or extrudates. Due to the opacity of these systems, Raman analysis is restricted either to exterior surfaces or cross sections. As Raman spectra are only recorded from one focal plane at a time, the sample is usually altered to create a smooth and even surface. However, this manipulation can lead to misinterpretation of the analytical results. Here, we present a trendsetting approach to overcome these analytical pitfalls with a combination of confocal Raman microscopy and optical profilometry. By acquiring a topography profile of the sample area of interest prior to Raman spectroscopy, the profile height information made it possible to level the focal plane to the sample surface for each spectrum acquisition. We first demonstrated the basic principle of this complementary approach in a case study using a tilted silica wafer. In a second step, we successfully adapted the two techniques to investigate an extrudate and a lyophilisate as two exemplary solid drug carrier systems. Component distribution analysis with the novel analytical approach was neither hampered by the curvature of the cylindrical extrudate nor the highly structured surface of the lyophilisate. Therefore, the combined analytical approach holds great potential for implementation in diversified fields of pharmaceutical sciences.
Supercritical fluid extraction of selected pharmaceuticals from water and serum.
Simmons, B R; Stewart, J T
1997-01-24
Selected drugs from benzodiazepine, anabolic agent and non-steroidal anti-inflammatory drug (NSAID) therapeutic classes were extracted from water and serum using a supercritical CO2 mobile phase. The samples were extracted at a pump pressure of 329 MPa, an extraction chamber temperature of 45 degrees C, and a restrictor temperature of 60 degrees C. The static extraction time for all samples was 2.5 min and the dynamic extraction time ranged from 5 to 20 min. The analytes were collected in appropriate solvent traps and assayed by modified literature HPLC procedures. Analyte recoveries were calculated based on peak height measurements of extracted vs. unextracted analyte. The recovery of the benzodiazepines ranged from 80 to 98% in water and from 75 to 94% in serum. Anabolic drug recoveries from water and serum ranged from 67 to 100% and 70 to 100%, respectively. The NSAIDs were recovered from water in the 76 to 97% range and in the 76 to 100% range from serum. Accuracy, precision and endogenous peak interference, if any, were determined for blank and spiked serum extractions and compared with classical sample preparation techniques of liquid-liquid and solid-phase extraction reported in the literature. For the benzodiazepines, accuracy and precision for supercritical fluid extraction (SFE) ranged from 1.95 to 3.31 and 0.57 to 1.25%, respectively (n = 3). The SFE accuracy and precision data for the anabolic agents ranged from 4.03 to 7.84 and 0.66 to 2.78%, respectively (n = 3). The accuracy and precision data reported for the SFE of the NSAIDs ranged from 2.79 to 3.79 and 0.33 to 1.27%, respectively (n = 3). The precision of the SFE method from serum was shown to be comparable to the precision obtained with other classical preparation techniques.
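The recovery and precision figures above follow from peak-height ratios and relative standard deviations; a minimal sketch with hypothetical triplicate peak heights (not the study's data) illustrates the arithmetic:

```python
def percent_recovery(extracted_peak, unextracted_peak):
    """Analyte recovery from peak heights of extracted vs. unextracted sample."""
    return 100.0 * extracted_peak / unextracted_peak

def rsd_percent(values):
    """Relative standard deviation (precision) in percent, sample std dev."""
    n = len(values)
    mean = sum(values) / n
    var = sum((v - mean) ** 2 for v in values) / (n - 1)
    return 100.0 * var ** 0.5 / mean

# Hypothetical triplicate peak heights (unextracted reference = 100.0)
recoveries = [percent_recovery(h, 100.0) for h in (92.0, 93.0, 91.5)]
print(recoveries, rsd_percent(recoveries))
```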
Analytical Chemistry: A retrospective view on some current trends.
Niessner, Reinhard
2018-04-01
In a retrospective view, some current trends in Analytical Chemistry are outlined and connected to work published more than a hundred years ago in the same field. For example, gravimetric microanalysis after specific precipitation, once the sole basis for chemical analysis, has been transformed into a mass-sensitive transducer in combination with compound-specific receptors. Molecular spectroscopy, long practising the classical absorption/emission techniques for detecting elements or molecules, is experiencing a shift toward Raman spectroscopy, which now allows analysis of a multitude of additional features. Chemical sensors are now used to perform a vast number of analytical measurements. Paper-based devices (dipsticks, microfluidic pads) in particular are enjoying a revival, as they can potentially revolutionize medicine in the developing world. Industry 4.0 will lead to a further increase of sensor applications. Prior separation and enrichment of analytes from complicated matrices remains the backbone of a successful analysis, despite increasing attempts to avoid clean-up. Continuous separation techniques will become a key element for 24/7 production of goods with certified quality. Attempts to get instantaneous and specific chemical information by optical or electrical transduction will need highly selective receptors in large quantities. Further understanding of ligand-receptor complex structures is the key to successful generation of artificial bio-inspired receptors. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Kurylyk, Barret L.; McKenzie, Jeffrey M; MacQuarrie, Kerry T. B.; Voss, Clifford I.
2014-01-01
Numerous cold regions water flow and energy transport models have emerged in recent years. Dissimilarities often exist in their mathematical formulations and/or numerical solution techniques, but few analytical solutions exist for benchmarking flow and energy transport models that include pore water phase change. This paper presents a detailed derivation of the Lunardini solution, an approximate analytical solution for predicting soil thawing subject to conduction, advection, and phase change. Fifteen thawing scenarios are examined by considering differences in porosity, surface temperature, Darcy velocity, and initial temperature. The accuracy of the Lunardini solution is shown to be proportional to the Stefan number. The analytical solution results obtained for soil thawing scenarios with water flow and advection are compared to those obtained from the finite element model SUTRA. Three problems, two involving the Lunardini solution and one involving the classic Neumann solution, are recommended as standard benchmarks for future model development and testing.
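A minimal sketch of the conduction-only (one-phase) Stefan problem underlying such benchmarks can be given; this is not the Lunardini solution itself, which additionally includes advection, and the numbers are illustrative. The thaw-front coefficient λ solves a transcendental equation, and the front advances as X(t) = 2λ√(αt):

```python
import math

def stefan_lambda(stefan_number):
    """Solve lambda * exp(lambda^2) * erf(lambda) = St / sqrt(pi) by bisection.
    St = c * (Ts - Tf) / L is the Stefan number; the thaw front then
    advances as X(t) = 2 * lambda * sqrt(alpha * t)."""
    target = stefan_number / math.sqrt(math.pi)
    lo, hi = 1e-9, 5.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if mid * math.exp(mid * mid) * math.erf(mid) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

lam = stefan_lambda(0.1)
print(lam)  # close to the quasi-steady estimate sqrt(St/2) for small St
```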
A procedure to achieve fine control in MW processing of foods
NASA Astrophysics Data System (ADS)
Cuccurullo, G.; Cinquanta, L.; Sorrentino, G.
2007-01-01
A two-dimensional analytical model for predicting the unsteady temperature field in a cylindrical shaped body affected by spatially varying heat generation is presented. The dimensionless problem is solved analytically by using both partial solutions and the variation-of-parameters techniques. Having in mind industrial microwave heating for food pasteurization, the easy-to-handle solution is used to confirm the intrinsic lack of spatial uniformity of such a treatment in comparison to the traditional one. From an experimental point of view, a batch pasteurization treatment was realized to compare the effect of two different control techniques, both based on IR thermography readout: the former assured classical PID control, while the latter was based on a "shadowing" technique, consisting of covering portions of the sample which are hot enough with a mobile metallic screen. A measure of the effectiveness of the two control techniques was obtained by evaluating the thermal death curves of a strain of Lactobacillus plantarum submitted to pasteurization temperatures. Preliminary results showed meaningful increases in the microwave thermal inactivation of L. plantarum and significant decreases in thermal inactivation time with respect to the traditional pasteurization treatment.
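A classical temperature loop of the kind used for the first control technique (here reduced to PI for brevity) can be sketched on an idealized first-order thermal plant; all plant and gain values are assumptions for illustration, not the paper's setup:

```python
def pi_step(integral, setpoint, measured, kp, ki, dt):
    """One update of a textbook PI controller; returns (output, new integral)."""
    error = setpoint - measured
    integral += error * dt
    return kp * error + ki * integral, integral

# Idealized first-order thermal plant (assumed numbers, no actuator limits):
# C * dT/dt = power - G * (T - T_amb)
T, T_amb, setpoint = 20.0, 20.0, 72.0
C, G, dt, integral = 500.0, 10.0, 0.5, 0.0
for _ in range(2000):
    power, integral = pi_step(integral, setpoint, T, kp=40.0, ki=2.0, dt=dt)
    T += dt * (power - G * (T - T_amb)) / C
print(round(T, 1))  # settles at the 72 degC setpoint
```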
Analytic theory of orbit contraction
NASA Technical Reports Server (NTRS)
Vinh, N. X.; Longuski, J. M.; Busemann, A.; Culp, R. D.
1977-01-01
The motion of a satellite in orbit subject to atmospheric force and the motion of a reentry vehicle are both governed by gravitational and aerodynamic forces. This suggests the derivation of a uniform set of equations applicable to both cases. For the case of satellite motion, by a proper transformation and by the method of averaging, a technique appropriate for long-duration flight, the classical nonlinear differential equation describing the contraction of the major axis is derived. A rigorous analytic solution is used to integrate this equation with a high degree of accuracy, using Poincaré's method of small parameters and Lagrange's expansion to explicitly express the major axis as a function of the eccentricity. The solution is uniformly valid for moderate and small eccentricities. For highly eccentric orbits, the asymptotic equation is derived directly from the general equation. Numerical solutions were generated to display the accuracy of the analytic theory.
Solitons and ionospheric modification
NASA Technical Reports Server (NTRS)
Sheerin, J. P.; Nicholson, D. R.; Payne, G. L.; Hansen, P. J.; Weatherall, J. C.; Goldman, M. V.
1982-01-01
The possibility of Langmuir soliton formation and collapse during ionospheric modification is investigated. Parameters characterizing former facilities, existing facilities, and planned facilities are considered, using a combination of analytical and numerical techniques. At a spatial location corresponding to the exact classical reflection point of the modifier wave, the Langmuir wave evolution is found to be dominated by modulational instability followed by soliton formation and three-dimensional collapse. The earth's magnetic field is found to affect the shape of the collapsing soliton. These results provide an alternative explanation for some recent observations.
Suba, Dávid; Urbányi, Zoltán; Salgó, András
2016-10-01
Capillary electrophoresis techniques are widely used in analytical biotechnology. Different electrophoretic techniques are very adequate tools to monitor size- and charge-heterogeneities of protein drugs. Method descriptions and development studies of capillary zone electrophoresis (CZE) have been described in the literature. Most of them are performed based on the classical one-factor-at-a-time (OFAT) approach. In this study a very simple method development approach is described for capillary zone electrophoresis: a "two-phase-four-step" approach is introduced which allows a rapid, iterative method development process and can be a good platform for CZE method development. In every step the current analytical target profile and an appropriate control strategy were established to monitor the current stage of development. A very good platform was established to investigate intact and digested protein samples. A commercially available monoclonal antibody was chosen as the model protein for the method development study. The CZE method was qualified after the development process and the results are presented. The analytical system stability was represented by the calculated RSD% values of area percentage and migration time of the selected peaks (<0.8% and <5%) during the intermediate precision investigation. Copyright © 2016 Elsevier B.V. All rights reserved.
Oud, Bart; Maris, Antonius J A; Daran, Jean-Marc; Pronk, Jack T
2012-01-01
Successful reverse engineering of mutants that have been obtained by nontargeted strain improvement has long presented a major challenge in yeast biotechnology. This paper reviews the use of genome-wide approaches for analysis of Saccharomyces cerevisiae strains originating from evolutionary engineering or random mutagenesis. On the basis of an evaluation of the strengths and weaknesses of different methods, we conclude that for the initial identification of relevant genetic changes, whole genome sequencing is superior to other analytical techniques, such as transcriptome, metabolome, proteome, or array-based genome analysis. Key advantages of this technique over gene expression analysis include the independency of genome sequences on experimental context and the possibility to directly and precisely reproduce the identified changes in naive strains. The predictive value of genome-wide analysis of strains with industrially relevant characteristics can be further improved by classical genetics or simultaneous analysis of strains derived from parallel, independent strain improvement lineages. PMID:22152095
Applications of Flow Cytometry to Clinical Microbiology†
Álvarez-Barrientos, Alberto; Arroyo, Javier; Cantón, Rafael; Nombela, César; Sánchez-Pérez, Miguel
2000-01-01
Classical microbiology techniques are relatively slow in comparison to other analytical techniques, in many cases due to the need to culture the microorganisms. Furthermore, classical approaches are difficult with unculturable microorganisms. More recently, the emergence of molecular biology techniques, particularly those based on antibodies and nucleic acid probes combined with amplification techniques, has brought speed and specificity to microbiological diagnosis. Flow cytometry (FCM) allows single- or multiple-microbe detection in clinical samples in an easy, reliable, and fast way. Microbes can be identified on the basis of their peculiar cytometric parameters or by means of certain fluorochromes that can be used either independently or bound to specific antibodies or oligonucleotides. FCM has permitted the development of quantitative procedures to assess antimicrobial susceptibility and drug cytotoxicity in a rapid, accurate, and highly reproducible way. Furthermore, this technique allows the monitoring of in vitro antimicrobial activity and of antimicrobial treatments ex vivo. The most outstanding contribution of FCM is the possibility of detecting the presence of heterogeneous populations with different responses to antimicrobial treatments. Despite these advantages, the application of FCM in clinical microbiology is not yet widespread, probably due to the lack of access to flow cytometers or the lack of knowledge about the potential of this technique. One of the goals of this review is to attempt to mitigate this latter circumstance. We are convinced that in the near future, the availability of commercial kits should increase the use of this technique in the clinical microbiology laboratory. PMID:10755996
Synthesis of active controls for flutter suppression on a flight research wing
NASA Technical Reports Server (NTRS)
Abel, I.; Perry, B., III; Murrow, H. N.
1977-01-01
This paper describes some activities associated with the preliminary design of an active control system for flutter suppression capable of demonstrating a 20% increase in flutter velocity. Results from two control system synthesis techniques are given. One technique uses classical control theory, and the other uses an 'aerodynamic energy method' where control surface rates or displacements are minimized. Analytical methods used to synthesize the control systems and evaluate their performance are described. Some aspects of a program for flight testing the active control system are also given. This program, called DAST (Drones for Aerodynamics and Structural Testing), employs modified drone-type vehicles for flight assessments and validation testing.
The propagation of Lamb waves in multilayered plates: phase-velocity measurement
NASA Astrophysics Data System (ADS)
Grondel, Sébastien; Assaad, Jamal; Delebarre, Christophe; Blanquet, Pierrick; Moulin, Emmanuel
1999-05-01
Owing to the dispersive nature and complexity of the Lamb waves generated in a composite plate, the measurement of the phase velocities by using classical methods is complicated. This paper describes a measurement method based upon the spectrum-analysis technique, which allows one to overcome these problems. The technique consists of using the fast Fourier transform to compute the spatial power-density spectrum. Additionally, weighted functions are used to increase the probability of detecting the various propagation modes. Experimental Lamb-wave dispersion curves of multilayered plates are successfully compared with the analytical ones. This technique is expected to be a useful way to design composite parts integrating ultrasonic transducers in the field of health monitoring. Indeed, Lamb waves and particularly their velocities are very sensitive to defects.
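The spatial-spectrum idea can be sketched for a synthetic single-mode wave: a DFT over the propagation coordinate peaks at the propagating wavenumber, from which the phase velocity follows. The frequency, wavelength, and sampling values below are illustrative assumptions, not the paper's measurements:

```python
import cmath, math

# Synthetic single-mode wave sampled along the plate at one instant:
# u(x) = cos(2*pi*x / wavelength)
freq = 500e3            # excitation frequency [Hz] (assumed)
wavelength = 0.01       # true wavelength [m]    (assumed)
dx, n = 0.002, 50       # spatial sampling: 50 points at 2 mm pitch

u = [math.cos(2.0 * math.pi * (i * dx) / wavelength) for i in range(n)]

# Magnitude of the spatial DFT (power-density spectrum over x), bins 0..n//2
spectrum = [
    abs(sum(u[i] * cmath.exp(-2j * math.pi * m * i / n) for i in range(n)))
    for m in range(n // 2 + 1)
]
m_peak = max(range(1, n // 2 + 1), key=lambda m: spectrum[m])
c_p = freq * (n * dx) / m_peak   # phase velocity = f * recovered wavelength
print(c_p)  # recovers f * wavelength = 5000.0 m/s
```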
NASA Astrophysics Data System (ADS)
Mitchell, Justin Chadwick
2011-12-01
Using light to probe the structure of matter is as natural as opening our eyes. Modern physics and chemistry have turned this art into a rich science, measuring the delicate interactions possible at the molecular level. Perhaps the most commonly used tool in computational spectroscopy is that of matrix diagonalization. While this is invaluable for calculating everything from molecular structure and energy levels to dipole moments and dynamics, the process of numerical diagonalization is an opaque one. This work applies symmetry and semi-classical techniques to elucidate numerical spectral analysis for high-symmetry molecules. Semi-classical techniques, such as the Potential Energy Surfaces, have long been used to help understand molecular vibronic and rovibronic spectra and dynamics. This investigation focuses on newer semi-classical techniques that apply Rotational Energy Surfaces (RES) to rotational energy level clustering effects in high-symmetry molecules. Such clusters exist in rigid rotor molecules as well as deformable spherical tops. This study begins by using the simplicity of rigid symmetric top molecules to clarify the classical-quantum correspondence of RES semi-classical analysis and then extends it to a more precise and complete theory of modern high-resolution spectra. RES analysis is extended to molecules having more complex and higher rank tensorial rotational and rovibrational Hamiltonians than were possible to understand before. Such molecules are shown to produce an extraordinary range of rotational level clusters, corresponding to a panoply of symmetries ranging from C4v to C2 and C1 (no symmetry) with a corresponding range of new angular momentum localization and J-tunneling effects. Using RES topography analysis and the commutation duality relations between symmetry group operators in the lab-frame to those in the body-frame, it is shown how to better describe and catalog complex splittings found in rotational level clusters. 
Symmetry character analysis is generalized to give analytic eigensolutions. An appendix provides vibrational analogies. For the first time, interactions between molecular vibrations (polyads) are described semi-classically by multiple RES. This is done for the ν3/2ν4 dyad of CF4. The nine-surface RES topology of the U(9) dyad agrees with both computational and experimental work. A connection between this and a simpler U(2) example is detailed in an appendix.
Exact analytical solution of a classical Josephson tunnel junction problem
NASA Astrophysics Data System (ADS)
Kuplevakhsky, S. V.; Glukhov, A. M.
2010-10-01
We give an exact and complete analytical solution of the classical problem of a Josephson tunnel junction of arbitrary length W ∈ (0, ∞) in the presence of external magnetic fields and transport currents. Contrary to a widespread belief, the exact analytical solution unambiguously proves that there is no qualitative difference between so-called "small" (W≪1) and "large" (W≫1) junctions. Another unexpected physical implication of the exact analytical solution is the existence (in the current-carrying state) of unquantized Josephson vortices carrying fractional flux and located near one of the edges of the junction. We also refine the mathematical definition of the critical transport current.
NASA Astrophysics Data System (ADS)
Dinç, Erdal; Kanbur, Murat; Baleanu, Dumitru
2007-10-01
Comparative simultaneous determination of chlortetracycline and benzocaine in a commercial veterinary powder product was carried out by continuous wavelet transform (CWT) and classical derivative transform (classical derivative spectrophotometry). In this quantitative spectral analysis, the two proposed analytical methods do not require any chemical separation process. In the first step, several wavelet families were tested to find an optimal CWT for the overlapping signal processing of the analyzed compounds. Subsequently, we observed that the coiflets (COIF-CWT) method with dilation parameter a = 400 gives suitable results for this analytical application. For comparison, the classical derivative spectrophotometry (CDS) approach was also applied to the simultaneous quantitative resolution of the same analytical problem. Calibration functions were obtained by measuring the transform amplitudes corresponding to zero-crossing points for both the CWT and CDS methods. The utility of these two analytical approaches was verified by analyzing various synthetic mixtures consisting of chlortetracycline and benzocaine, and they were applied to real samples of the veterinary powder formulation. The experimental results obtained from the COIF-CWT approach were statistically compared with those obtained by classical derivative spectrophotometry, and successful results were reported.
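The zero-crossing principle behind the CDS calibration can be sketched with two synthetic Gaussian absorption bands; the band positions, widths, and concentrations below are assumptions for illustration, not the paper's data:

```python
import math

def gauss(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2)

def d_mixture(x, c_a, c_b, h=1e-4):
    """Numerical first derivative of a two-component absorbance spectrum
    (Gaussian bands at assumed positions for the two analytes)."""
    def absorbance(w):
        return c_a * gauss(w, 280.0, 12.0) + c_b * gauss(w, 310.0, 12.0)
    return (absorbance(x + h) - absorbance(x - h)) / (2.0 * h)

# At component B's band maximum (310 nm) the derivative of B's own band
# is zero, so the derivative signal there depends only on component A:
s1 = d_mixture(310.0, c_a=1.0, c_b=0.7)
s2 = d_mixture(310.0, c_a=2.0, c_b=0.4)
print(s2 / s1)  # ~2: the amplitude tracks c_a regardless of c_b
```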
NASA Astrophysics Data System (ADS)
Oudini, N.; Sirse, N.; Taccogna, F.; Ellingboe, A. R.; Bendib, A.
2018-05-01
We propose a new technique for diagnosing negative ion properties using Langmuir probe assisted pulsed laser photo-detachment. While the classical technique uses a laser pulse to convert negative ions into electron-atom pairs and a positively biased Langmuir probe tracking the change of electron saturation current, the proposed method uses a negatively biased Langmuir probe to track the temporal evolution of positive ion current. The negative bias aims to avoid the parasitic electron current inherent to probe tip surface ablation. In this work, we show through analytical and numerical approaches that, by knowing electron temperature and performing photo-detachment at two different laser wavelengths, it is possible to deduce plasma electronegativity (ratio of negative ion to electron densities) α, and anisothermicity (ratio of electron to negative ion temperatures) γ-. We present an analytical model that links the change in the collected positive ion current to plasma electronegativity and anisothermicity. Particle-In-Cell simulation is used as a numerical experiment covering a wide range of α and γ- to test the new analysis technique. The new technique is sensitive to α in the range 0.5 < α < 10 and yields γ- for large α, where negative ion flux affects the probe sheath behavior, typically α > 1.
New Polymorph Form of Dexamethasone Acetate.
Silva, Ronaldo Pedro da; Ambrósio, Mateus Felipe Schuchter; Piovesan, Luciana Almeida; Freitas, Maria Clara Ramalho; Aguiar, Daniel Lima Marques de; Horta, Bruno Araújo Cautiero; Epprecht, Eugenio Kahn; San Gil, Rosane Aguiar da Silva; Visentin, Lorenzo do Canto
2018-02-01
A new monohydrated polymorph of dexamethasone acetate was crystallized and its crystal structure characterized. The analytical techniques used to describe its structural and vibrational properties were single-crystal and polycrystalline X-ray diffraction, solid-state nuclear magnetic resonance, and infrared spectroscopy. A Hirshfeld surface analysis was carried out on the self-arrangement, cemented by H-bonds, observed in this new polymorph. This new polymorph form arises from self-arrangement via classical hydrogen bonds around the water molecule. Copyright © 2018 American Pharmacists Association®. Published by Elsevier Inc. All rights reserved.
Quasi-Static Analysis of Round LaRC THUNDER Actuators
NASA Technical Reports Server (NTRS)
Campbell, Joel F.
2007-01-01
An analytic approach is developed to predict the shape and displacement with voltage, in the quasi-static limit, of round LaRC THUNDER actuators. The problem is treated with classical lamination theory and von Kármán non-linear analysis. In the case of classical lamination theory, exact analytic solutions are found. It is shown that classical lamination theory is insufficient to describe the physical situation for large actuators but is sufficient for very small actuators. Numerical results are presented for the non-linear analysis and compared with experimental measurements. Snap-through behavior, bifurcation, and stability are presented and discussed.
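The classical-lamination-theory step above reduces a ply stack to the familiar A, B, D stiffness matrices. A minimal sketch follows, with hypothetical isotropic ply properties (real THUNDER laminates combine metal, adhesive, and piezoceramic layers); a non-zero B matrix, the bending-extension coupling of an unsymmetric lay-up, is what gives such actuators their curved as-manufactured shape.

```python
import numpy as np

def q_isotropic(E, nu):
    """Reduced plane-stress stiffness matrix Q for an isotropic ply."""
    f = E / (1.0 - nu**2)
    return f * np.array([[1.0, nu, 0.0],
                         [nu, 1.0, 0.0],
                         [0.0, 0.0, (1.0 - nu) / 2.0]])

def abd_matrices(plies):
    """A, B, D laminate stiffness matrices from a bottom-to-top ply stack.

    `plies` is a list of (Q, thickness) pairs; z is measured from the
    laminate mid-plane, as in classical lamination theory.
    """
    h = sum(t for _, t in plies)
    A = np.zeros((3, 3)); B = np.zeros((3, 3)); D = np.zeros((3, 3))
    z = -h / 2.0
    for Q, t in plies:
        z0, z1 = z, z + t
        A += Q * (z1 - z0)             # extensional stiffness
        B += Q * (z1**2 - z0**2) / 2.0  # bending-extension coupling
        D += Q * (z1**3 - z0**3) / 3.0  # bending stiffness
        z = z1
    return A, B, D
```

A symmetric stack gives B = 0 (no coupling); any unsymmetric metal/ceramic stack gives B ≠ 0.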
Quasi-Static Analysis of LaRC THUNDER Actuators
NASA Technical Reports Server (NTRS)
Campbell, Joel F.
2007-01-01
An analytic approach is developed to predict the shape and displacement with voltage, in the quasi-static limit, of LaRC THUNDER actuators. The problem is treated with classical lamination theory and von Kármán non-linear analysis. In the case of classical lamination theory, exact analytic solutions are found. It is shown that classical lamination theory is insufficient to describe the physical situation for large actuators but is sufficient for very small actuators. Numerical results are presented for the non-linear analysis and compared with experimental measurements. Snap-through behavior, bifurcation, and stability are presented and discussed.
Odoardi, Sara; Fisichella, Marco; Romolo, Francesco Saverio; Strano-Rossi, Sabina
2015-09-01
The increasing number of new psychoactive substances (NPS) present on the illicit market makes their identification in biological fluids/tissues a matter of great concern for clinical and forensic toxicology. Analytical methods able to detect the large number of substances that can be used are sought, considering also that many NPS are not detected by the standard immunoassays generally used for routine drug screening. The aim of this work was to develop a method for the screening of different classes of NPS (a total of 78 analytes, including cathinones, synthetic cannabinoids, phenethylamines, piperazines, ketamine and analogues, benzofurans, and tryptamines) in blood samples. The simultaneous extraction of analytes was performed by dispersive liquid/liquid microextraction (DLLME), a very rapid, cheap, and efficient extraction technique that employs microliter volumes of organic solvents. Analyses were performed by a targeted ultrahigh-performance liquid chromatography tandem mass spectrometry (UHPLC-MS/MS) method in multiple reaction monitoring (MRM) mode. The method allowed the detection of the studied analytes with limits of detection (LODs) ranging from 0.2 to 2 ng/mL. The proposed DLLME method can be used as an alternative to classical liquid/liquid or solid-phase extraction techniques owing to its rapidity and low cost, its use of only microliter volumes of organic solvents, and its ability to extract simultaneously a large number of analytes from different chemical classes. The method was then applied to 60 authentic samples from forensic cases, demonstrating its suitability for the screening of a wide range of NPS. Copyright © 2015 Elsevier B.V. All rights reserved.
A statistical theory for sound radiation and reflection from a duct
NASA Technical Reports Server (NTRS)
Cho, Y. C.
1979-01-01
A new analytical method is introduced for the study of sound radiation and reflection from the open end of a duct. The sound is treated as an aggregation of quasiparticles (phonons), whose motion is described in terms of a statistical distribution derived from classical wave theory. The results are in good agreement with the solutions obtained using the Wiener-Hopf technique when the latter is applicable, but the new method is simpler and provides a straightforward physical interpretation of the problem. Furthermore, it is applicable to problems involving ducts in which modes are difficult to determine or cannot be defined at all, whereas the Wiener-Hopf technique is not.
Prediction of Experimental Surface Heat Flux of Thin Film Gauges using ANFIS
NASA Astrophysics Data System (ADS)
Sarma, Shrutidhara; Sahoo, Niranjan; Unal, Aynur
2018-05-01
Precise quantification of surface heat fluxes in a highly transient environment is of paramount importance for the design of engineering equipment such as thermal protection or cooling systems. Such environments are simulated in experimental facilities by exposing the surface to transient heat loads, typically step or impulsive in nature. The surface heating rates are then determined from the highly transient temperature history captured by efficient surface temperature sensors. The classical approach is to use thin film gauges (TFGs), in which temperature variations are acquired within milliseconds, thereby allowing calculation of the surface heat flux based on the theory of one-dimensional heat conduction in a semi-infinite body. With recent developments in soft computing methods, the present study applies an intelligent system technique, the adaptive neuro fuzzy inference system (ANFIS), to recover surface heat fluxes from a given temperature history recorded by TFGs without the need to solve lengthy analytical equations. Experiments were carried out by applying a known impulse heat load through a laser beam on the TFGs. The corresponding voltage signals were acquired and surface heat fluxes estimated through the classical analytical approach. These signals were then used to train the ANFIS model, which later predicts outputs for test values. Results from both methods were compared, and the surface heat fluxes were used to predict the non-linear relationship between the thermal and electrical properties of the gauges, which is highly pertinent to the design of efficient TFGs. Further, surface plots were created to give insight into the dimensionality of the non-linear dependence of the thermal and electrical parameters on each other. It is observed that a properly optimized ANFIS model can predict impulsive heat profiles with significant accuracy. This paper thus shows the appropriateness of soft computing techniques as a practical replacement for tedious analytical formulation and thereby effectively quantifies the modeling of TFGs.
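The classical analytical approach invoked above, one-dimensional conduction into a semi-infinite substrate, is commonly discretized in the Cook-Felderman form, which recovers the surface heat flux directly from the sampled temperature history. A sketch follows; the thermal product sqrt(rho*c*k) of the gauge substrate is an assumed material input.

```python
import numpy as np

def heat_flux_cook_felderman(T, dt, beta):
    """Surface heat flux history from a sampled surface temperature.

    One-dimensional semi-infinite conduction, discretized in the classical
    Cook-Felderman form; `beta` is the thermal product sqrt(rho*c*k) of
    the gauge substrate (an assumed material property).
    """
    T = np.asarray(T, dtype=float)
    n = len(T)
    t = np.arange(n) * dt
    q = np.zeros(n)
    for j in range(1, n):
        dT = T[1:j + 1] - T[:j]                       # temperature steps
        denom = np.sqrt(t[j] - t[1:j + 1]) + np.sqrt(t[j] - t[:j])
        q[j] = 2.0 * beta / np.sqrt(np.pi) * np.sum(dT / denom)
    return q
```

For a constant applied flux, the surface temperature grows as sqrt(t), and the discretization recovers the applied flux to within a few percent after the first few samples.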
Barker, John R; Martinez, Antonio
2018-04-04
Efficient analytical image charge models are derived for the full spatial variation of the electrostatic self-energy of electrons in semiconductor nanostructures that arises from dielectric mismatch, using semi-classical analysis. The methodology provides a fast, compact, and physically transparent computation for advanced device modeling. The underlying semi-classical model for the self-energy has been established and validated in recent years and depends on a slight modification of the macroscopic static dielectric constants for individual homogeneous dielectric regions. The model has been validated for point charges as close as one interatomic spacing to a sharp interface. A brief introduction to image charge methodology is followed by a discussion and demonstration of the traditional failure of the methodology to derive the electrostatic potential at arbitrary distances from a source charge. However, the self-energy involves the local limit of the difference between the electrostatic Green functions for the full dielectric heterostructure and its homogeneous equivalent, and it is shown that high convergence may be achieved for the image charge method in this local limit. A simple re-normalisation technique is introduced to reduce the number of image terms to a minimum. A number of progressively complex 3D models are evaluated analytically and compared with high-precision numerical computations; accuracies of 1% are demonstrated. Introducing a simple technique for modeling the transition of the self-energy between disparate dielectric structures, we generate an analytical model that describes the self-energy as a function of position within the source, drain, and gated channel of a silicon wrap-round-gate field-effect transistor with a cross-section of a few nanometers. At such scales the self-energies become large (typically up to ~100 meV) close to the interfaces as well as along the channel. The screening of the gated structure is shown to reduce the self-energy relative to un-gated nanowires.
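The simplest instance of the image-charge construction discussed above is a point charge near a single planar dielectric interface, for which the self-energy has the classic closed form below; the multi-interface nanostructure models of the paper sum many such image terms. The example numbers (an electron in silicon near SiO2) are illustrative.

```python
import math

EPS0 = 8.8541878128e-12    # vacuum permittivity, F/m
E_CHG = 1.602176634e-19    # elementary charge, C

def image_self_energy(d, eps1, eps2, q=E_CHG):
    """Self-energy (J) of a point charge in medium eps1 at distance d (m)
    from a planar interface with medium eps2: the single-interface
    textbook limit of the image-charge method."""
    k = (eps1 - eps2) / (eps1 + eps2)        # image-charge strength
    return k * q**2 / (16.0 * math.pi * EPS0 * eps1 * d)
```

For an electron in silicon (eps about 11.7) 1 nm from SiO2 (eps about 3.9), this gives roughly +15 meV, consistent in scale with the values quoted above; the sign flips when the neighbouring medium has the higher permittivity, which is the screening effect of a gate.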
Separation techniques for the clean-up of radioactive mixed waste for ICP-AES/ICP-MS analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Swafford, A.M.; Keller, J.M.
1993-03-17
Two separation techniques were investigated for the clean-up of typical radioactive mixed waste samples requiring elemental analysis by Inductively Coupled Plasma-Atomic Emission Spectroscopy (ICP-AES) or Inductively Coupled Plasma-Mass Spectrometry (ICP-MS). These measurements frequently involve regulatory or compliance criteria which include the determination of elements on the EPA Target Analyte List (TAL). The samples usually consist of both an aqueous phase and a solid phase which is mostly an inorganic sludge. Frequently, samples taken from the waste tanks contain high levels of uranium and thorium, which can cause spectral interferences in ICP-AES or ICP-MS analysis. The removal of these interferences is necessary to determine the presence of the EPA TAL elements in the sample. Two clean-up methods were studied on simulated aqueous waste samples containing the EPA TAL elements. The first was a classical procedure based upon liquid-liquid extraction using tri-n-octylphosphine oxide (TOPO) dissolved in cyclohexane. The second was based on more recently developed extraction chromatography techniques, specifically the use of a commercially available Eichrom TRU·Spec™ column. Literature on these two methods indicates the efficient removal of uranium and thorium from properly prepared samples and provides considerable qualitative information on the extraction behavior of many other elements; however, quantitative data on the extraction behavior of the EPA Target Analyte List elements are lacking. Experimental studies on the two methods consisted of determining whether any of the analytes were extracted and what recoveries were obtained. Both methods produced similar results: the EPA target analytes were only slightly extracted or not extracted at all. Advantages and disadvantages of each method were evaluated and found to be comparable.
Van Gosen, Bradley S.
2008-01-01
A study conducted in 2006 by the U.S. Geological Survey collected 57 surface rock samples from nine types of intrusive rock in the Iron Hill carbonatite complex. This intrusive complex, located in Gunnison County of southwestern Colorado, is known for its classic carbonatite-alkaline igneous geology and petrology. The Iron Hill complex is also noteworthy for its diverse mineral resources, including enrichments in titanium, rare earth elements, thorium, niobium (columbium), and vanadium. This study was performed to reexamine the chemistry and metallic content of the major rock units of the Iron Hill complex by using modern analytical techniques, while providing a broader suite of elements than the earlier published studies. The report contains the geochemical analyses of the samples in tabular and digital spreadsheet format, providing the analytical results for 55 major and trace elements.
A new frequency approach for light flicker evaluation in electric power systems
NASA Astrophysics Data System (ADS)
Feola, Luigi; Langella, Roberto; Testa, Alfredo
2015-12-01
In this paper, a new analytical estimator for light flicker in the frequency domain is proposed; unlike the classical methods in the literature, it also takes into account the frequency components those methods neglect. The analytical solutions apply to any stationary signal affected by interharmonic distortion. The proposed estimator is applied to numerous numerical case studies with the goal of showing i) the correctness and the improvements of the analytical approach with respect to the other methods proposed in the literature and ii) the accuracy of the results compared to those obtained by means of the classical International Electrotechnical Commission (IEC) flickermeter. The usefulness of the proposed analytical approach is that it can be included in signal processing tools for interharmonic penetration studies supporting the integration of renewable energy sources in future smart grids.
Thermodynamics of ultra-sonic cavitation bubbles in flotation ore processes
NASA Astrophysics Data System (ADS)
Royer, J. J.; Monnin, N.; Pailot-Bonnetat, N.; Filippov, L. O.; Filippova, I. V.; Lyubimova, T.
2017-07-01
Ultrasonic-enhanced flotation is a more efficient technique for ore recovery than the classical flotation method. A classical simplified analytical Navier-Stokes model is used to predict the effect of ultrasonic waves on cavitation bubble behaviour. A thermodynamic approach then estimates the temperature and pressure inside a bubble and investigates the energy exchanges between the flotation liquid and the gas bubbles. Several gas models (including ideal gas, Soave-Redlich-Kwong, and Peng-Robinson), assuming polytropic transformations (from isothermal to adiabatic), are used to predict the evolution of the internal pressure and temperature inside the bubble during the ultrasonic treatment, together with the energy and heat exchanges between the gas and the surrounding fluid. Numerical simulations illustrate the suggested theory, which, pending experimental verification, predicts an increase of the temperature and pressure inside the bubbles. Preliminary ultrasonic flotation results obtained on a potash ore seem to confirm the theory.
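The polytropic transformations referred to above relate the bubble's internal state to its compression ratio. A minimal ideal-gas sketch follows (illustrative numbers only; the paper also evaluates Soave-Redlich-Kwong and Peng-Robinson equations of state):

```python
def polytropic_state(p0, T0, r0, r, n=1.4):
    """Ideal-gas pressure and temperature inside a spherical bubble
    compressed from radius r0 to r under a polytropic exponent n
    (n = 1 isothermal, n = gamma adiabatic)."""
    ratio = (r0 / r) ** 3            # volume compression factor V0/V
    p = p0 * ratio ** n              # p * V^n = const
    T = T0 * ratio ** (n - 1.0)      # T * V^(n-1) = const
    return p, T
```

An adiabatic collapse from 10 um to 1 um radius heats the gas from 300 K to roughly 4800 K, the kind of internal temperature and pressure rise the theory predicts.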
Big genomics and clinical data analytics strategies for precision cancer prognosis.
Ow, Ghim Siong; Kuznetsov, Vladimir A
2016-11-07
The field of personalized and precise medicine in the era of big data analytics is growing rapidly. Previously, we proposed our model of patient classification termed Prognostic Signature Vector Matching (PSVM) and identified a 37-variable signature comprising 36 let-7b-associated prognostically significant mRNAs and the age risk factor, which stratified large high-grade serous ovarian cancer patient cohorts into three survival-significant risk groups. Here, we investigated the predictive performance of PSVM via optimization of the prognostic variable weights, which represent the relative importance of one prognostic variable over the others. In addition, we compared several multivariate prognostic models based on PSVM with classical machine learning techniques such as K-nearest-neighbor, support vector machine, random forest, neural networks, and logistic regression. Our results revealed that negative log-rank p-values provide more robust weight values than other quantities such as hazard ratios, fold change, or a combination of those factors. PSVM and the classical machine learning classifiers were then combined in an ensemble (multi-test) voting system, which collectively provides a more precise and reproducible patient stratification. The use of the multi-test system approach, rather than the search for the ideal classification/prediction method, might help to address the limitations of individual classification algorithms in specific situations.
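The weighting and multi-test voting ideas above can be sketched as follows; the exact weight normalisation and voting rule here are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def psvm_weights(pvals):
    """Variable weights from negative log-rank p-values (illustrative
    form: w_i = -log10(p_i), normalised to sum to one)."""
    w = -np.log10(np.asarray(pvals, dtype=float))
    return w / w.sum()

def risk_score(x, weights):
    """Weighted prognostic score of one expression profile `x`."""
    return float(np.dot(weights, x))

def majority_vote(predictions):
    """Multi-test ensemble: each row of `predictions` holds one
    classifier's risk-group labels; returns the per-sample modal label."""
    preds = np.asarray(predictions)
    return np.array([np.bincount(col).argmax() for col in preds.T])
```

Smaller p-values thus dominate the score, and disagreement between individual classifiers is resolved by the modal label across tests.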
Bagheri, Zahra; Massudi, Reza
2017-05-01
An analytical quantum model is used to calculate the electrical permittivity of a metal nanoparticle located adjacent to a molecule. Different parameters, such as radiative and non-radiative decay rates, quantum yield, electric field enhancement factor, and fluorescence enhancement, are calculated with this model and compared with those obtained using the classical Drude model. Using the analytical quantum model yields an enhancement factor up to 30% higher than the classical model for nanoparticles smaller than 10 nm. Furthermore, the results are in better agreement with experimental observations.
Identification of Microorganisms by Modern Analytical Techniques.
Buszewski, Bogusław; Rogowska, Agnieszka; Pomastowski, Paweł; Złoch, Michał; Railean-Plugaru, Viorica
2017-11-01
Rapid detection and identification of microorganisms is a challenging and important aspect in a wide range of fields, from medical to industrial, affecting human lives. Unfortunately, classical methods of microorganism identification are based on time-consuming and labor-intensive approaches. Screening techniques require the rapid and cheap grouping of bacterial isolates; however, modern bioanalytics demand comprehensive bacterial studies at a molecular level. Modern approaches for the rapid identification of bacteria use molecular techniques, such as 16S ribosomal RNA gene sequencing based on polymerase chain reaction or electromigration, especially capillary zone electrophoresis and capillary isoelectric focusing. However, there are still several challenges with the analysis of microbial complexes using electromigration technology, such as uncontrolled aggregation and/or adhesion to the capillary surface. Thus, an approach using capillary electrophoresis of microbial aggregates with UV and matrix-assisted laser desorption ionization time-of-flight MS detection is presented.
Discordance between net analyte signal theory and practical multivariate calibration.
Brown, Christopher D
2004-08-01
Lorber's concept of net analyte signal is reviewed in the context of classical and inverse least-squares approaches to multivariate calibration. It is shown that, in the presence of device measurement error, the classical and inverse calibration procedures have radically different theoretical prediction objectives, and the assertion that the popular inverse least-squares procedures (including partial least squares, principal components regression) approximate Lorber's net analyte signal vector in the limit is disproved. Exact theoretical expressions for the prediction error bias, variance, and mean-squared error are given under general measurement error conditions, which reinforce the very discrepant behavior between these two predictive approaches, and Lorber's net analyte signal theory. Implications for multivariate figures of merit and numerous recently proposed preprocessing treatments involving orthogonal projections are also discussed.
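Lorber's net analyte signal, around which the argument above revolves, is the part of an analyte's pure spectrum orthogonal to the space spanned by the interferent spectra. A minimal sketch:

```python
import numpy as np

def net_analyte_signal(s_k, S_interferents):
    """Lorber's net analyte signal: the part of analyte k's pure spectrum
    `s_k` orthogonal to the column space of the interferent spectra
    (one interferent spectrum per column of `S_interferents`)."""
    S = np.asarray(S_interferents, dtype=float)
    # Projector onto the orthogonal complement of the interferent space.
    P = np.eye(len(s_k)) - S @ np.linalg.pinv(S)
    return P @ s_k
```

The projection can only shrink the signal, which is why the norm of the net analyte signal bounds the usable sensitivity in figures of merit.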
Analytic Thermoelectric Couple Modeling: Variable Material Properties and Transient Operation
NASA Technical Reports Server (NTRS)
Mackey, Jonathan A.; Sehirlioglu, Alp; Dynys, Fred
2015-01-01
To gain a deeper understanding of the operation of a thermoelectric couple, a set of analytic solutions has been derived for a variable material property couple and a transient couple. Using an analytic approach, as opposed to commonly used numerical techniques, results in a set of useful design guidelines. These guidelines can serve as starting conditions for further numerical studies, or as design rules for lab-built couples. The analytic modeling considers two cases, accounting for 1) material properties which vary with temperature and 2) transient operation of a couple. The variable material property case was handled by means of an asymptotic expansion, which allows insight into the influence of the temperature dependence of the different material properties. The variable-property work demonstrated the important fact that materials with identical average figures of merit can lead to different conversion efficiencies due to the temperature dependence of the properties. The transient couple was investigated through a Green's function approach; several transient boundary conditions were investigated. The transient work introduces several new design considerations which are not captured by the classic steady-state analysis. The work assists in designing couples for optimal performance and in material selection.
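The classic constant-property baseline that the variable-property analysis above refines is the maximum conversion efficiency of a couple written in terms of a single figure of merit ZT evaluated at the mean temperature; the paper's point is that two materials sharing this average can still perform differently. For reference:

```python
import math

def carnot_eta(t_hot, t_cold):
    """Carnot limit between the hot- and cold-shoe temperatures."""
    return 1.0 - t_cold / t_hot

def te_max_efficiency(t_hot, t_cold, zt_mean):
    """Classic constant-property maximum conversion efficiency of a
    thermoelectric couple, with ZT evaluated at the mean temperature."""
    m = math.sqrt(1.0 + zt_mean)
    return carnot_eta(t_hot, t_cold) * (m - 1.0) / (m + t_cold / t_hot)
```

With shoes at 800 K and 300 K and a mean ZT of 1, the formula gives roughly 14% efficiency, well below the 62.5% Carnot limit.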
Comparative investigation of diagnosis media for induction machine mechanical unbalance fault.
Salah, Mohamed; Bacha, Khmais; Chaari, Abdelkader
2013-11-01
For an induction machine, we suggest a theoretical development of the mechanical unbalance effect on the analytical expressions of radial vibration and stator current. Related spectra are described and characteristic defect frequencies are determined. Moreover, the stray flux expressions are developed for both axial and radial sensor coil positions and a substitute diagnosis technique is proposed. In addition, the load torque effect on the detection efficiency of these diagnosis media is discussed and a comparative investigation is performed. The decisive factor of comparison is the fault sensitivity. Experimental results show that spectral analysis of the axial stray flux can be an alternative solution to cover effectiveness limitation of the traditional stator current technique and to substitute the classical vibration practice. Copyright © 2013 ISA. Published by Elsevier Ltd. All rights reserved.
The fate of the dream in contemporary psychoanalysis.
Loden, Susan
2003-01-01
Freud's metapsychology of dream formation has implicitly been discarded, as indicated in a brief review of trends in psychoanalytic thinking about dreams, with a focus on the relationship of the dream process to ego capacities. The current bias toward exclusive emphasis on the exploration of the analytic relationship and the transference has evolved at the expense of classical, in-depth dream interpretation, and, by extension, at the expense of strengthening the patient's capacity for self-inquiry. This trend is shown to be especially evident in the treatment of borderline patients, who today are believed by many analysts to misuse the dream in the analytic situation. An extended clinical example of a borderline patient with whom an unmodified Freudian associative technique of dream interpretation is used with good outcome illustrates the author's contrary conviction. In clinical practice, we should neglect neither the uniqueness of the dream as a central intrapsychic event nor the Freudian art of total dream analysis.
Development of a computational testbed for numerical simulation of combustion instability
NASA Technical Reports Server (NTRS)
Grenda, Jeffrey; Venkateswaran, Sankaran; Merkle, Charles L.
1993-01-01
A synergistic hierarchy of analytical and computational fluid dynamic techniques is used to analyze three-dimensional combustion instabilities in liquid rocket engines. A mixed finite difference/spectral procedure is employed to study the effects of a distributed vaporization zone on standing and spinning instability modes within the chamber. Droplet atomization and vaporization are treated by a variety of classical models found in the literature. A multi-zone, linearized analytical solution is used to validate the accuracy of the numerical simulations at small amplitudes for a distributed vaporization region. This comparison indicates excellent amplitude and phase agreement under both stable and unstable operating conditions when amplitudes are small and proper grid resolution is used. As amplitudes get larger, expected nonlinearities are observed. The effect of liquid droplet temperature fluctuations was found to be of critical importance in driving the instabilities of the combustion chamber.
Quantitative proteomics in the field of microbiology.
Otto, Andreas; Becher, Dörte; Schmidt, Frank
2014-03-01
Quantitative proteomics has become an indispensable analytical tool for microbial research. Modern microbial proteomics covers a wide range of topics in basic and applied research from in vitro characterization of single organisms to unravel the physiological implications of stress/starvation to description of the proteome content of a cell at a given time. With the techniques available, ranging from classical gel-based procedures to modern MS-based quantitative techniques, including metabolic and chemical labeling, as well as label-free techniques, quantitative proteomics is today highly successful in sophisticated settings of high complexity such as host-pathogen interactions, mixed microbial communities, and microbial metaproteomics. In this review, we will focus on the vast range of techniques practically applied in current research with an introduction of the workflows used for quantitative comparisons, a description of the advantages/disadvantages of the various methods, reference to hallmark publications and presentation of applications in current microbial research. © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Determination of mycotoxins in foods: current state of analytical methods and limitations.
Köppen, Robert; Koch, Matthias; Siegel, David; Merkel, Stefan; Maul, Ronald; Nehls, Irene
2010-05-01
Mycotoxins are natural contaminants produced by a range of fungal species. Their common occurrence in food and feed poses a threat to the health of humans and animals. This threat is caused either by the direct contamination of agricultural commodities or by a "carry-over" of mycotoxins and their metabolites into animal tissues, milk, and eggs after feeding of contaminated hay or corn. As a consequence of their diverse chemical structures and varying physical properties, mycotoxins exhibit a wide range of biological effects. Individual mycotoxins can be genotoxic, mutagenic, carcinogenic, teratogenic, and oestrogenic. To protect consumer health and to reduce economic losses, surveillance and control of mycotoxins in food and feed has become a major objective for producers, regulatory authorities and researchers worldwide. However, the variety of chemical structures makes it impossible to use one single technique for mycotoxin analysis. Hence, a vast number of analytical methods have been developed and validated. The heterogeneity of food matrices combined with the demand for a fast, simultaneous and accurate determination of multiple mycotoxins creates enormous challenges for routine analysis. The most crucial issues will be discussed in this review. These are (1) the collection of representative samples, (2) the performance of classical and emerging analytical methods based on chromatographic or immunochemical techniques, (3) the validation of official methods for enforcement, and (4) the limitations and future prospects of the current methods.
Application of Classical and Lie Transform Methods to Zonal Perturbation in the Artificial Satellite
NASA Astrophysics Data System (ADS)
San-Juan, J. F.; San-Martin, M.; Perez, I.; Lopez-Ochoa, L. M.
2013-08-01
A scalable second-order analytical orbit propagator program is being developed. This analytical orbit propagator combines modern perturbation methods, based on the canonical frame of the Lie transform, with classical perturbation methods, depending on the orbit type or the requirements of a space mission, such as catalog maintenance operations, long-period evolution, and so on. As a first step in the validation of part of our orbit propagator, in this work we consider only the perturbation produced by the zonal harmonic coefficients of the Earth's gravity potential, so that it is possible to analyze the behaviour of the perturbation methods involved in the corresponding analytical theories.
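At first order, the dominant zonal effect such a propagator must reproduce is the J2 secular drift of the node and perigee. A sketch of these classical rates (the constants are standard Earth values):

```python
import math

MU = 398600.4418        # km^3/s^2, Earth gravitational parameter
RE = 6378.137           # km, Earth equatorial radius
J2 = 1.08262668e-3      # dominant zonal harmonic coefficient

def j2_secular_rates(a, e, i):
    """First-order secular drift rates (rad/s) of the ascending node and
    the argument of perigee caused by the J2 zonal harmonic.
    a in km, e dimensionless, i in radians."""
    n = math.sqrt(MU / a**3)          # mean motion
    p = a * (1.0 - e**2)              # semi-latus rectum
    f = n * J2 * (RE / p) ** 2
    raan_dot = -1.5 * f * math.cos(i)
    argp_dot = 0.75 * f * (5.0 * math.cos(i) ** 2 - 1.0)
    return raan_dot, argp_dot
```

As a sanity check, at i of about 98.6 degrees and a of about 7178 km, the node drifts close to the sun-synchronous value of 0.9856 degrees per day.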
Principles of Micellar Electrokinetic Capillary Chromatography Applied in Pharmaceutical Analysis
Hancu, Gabriel; Simon, Brigitta; Rusu, Aura; Mircia, Eleonora; Gyéresi, Árpád
2013-01-01
Since its introduction, capillary electrophoresis has shown great potential in areas where electrophoretic techniques had rarely been used before, including the analysis of pharmaceutical substances. The large majority of pharmaceutical substances are neutral from an electrophoretic point of view; consequently, separations by classic capillary zone electrophoresis, in which separation is based on differences between the analytes' own electrophoretic mobilities, are hard to achieve. Micellar electrokinetic capillary chromatography, a hybrid method that combines chromatographic and electrophoretic separation principles, extends the applicability of capillary electrophoretic methods to neutral analytes. In micellar electrokinetic capillary chromatography, surfactants are added to the buffer solution at concentrations above their critical micellar concentration, so that micelles are formed; the micelles undergo electrophoretic migration like any other charged particle. The separation is based on the differential partitioning of an analyte between a two-phase system: the mobile aqueous phase and the micellar pseudostationary phase. The present paper aims to summarize the basic aspects of the separation principles and practical applications of micellar electrokinetic capillary chromatography, with particular attention to those relevant to pharmaceutical analysis. PMID:24312804
Collisional excitation of molecules in dense interstellar clouds
NASA Technical Reports Server (NTRS)
Green, S.
1985-01-01
State transitions which permit the identification of the molecular species in dense interstellar clouds are reviewed, along with the techniques used to calculate the transition energies, the database on known molecular transitions and the accuracy of the values. The transition energies cannot be measured directly and therefore must be modeled analytically. Scattering theory is used to determine the intermolecular forces on the basis of quantum mechanics. The nuclear motions can also be modeled with classical mechanics. Sample rate constants are provided for molecular systems known to inhabit dense interstellar clouds. The values serve as a database for interpreting microwave and RF astrophysical data on the transitions undergone by interstellar molecules.
Classical And Quantum Rainbow Scattering From Surfaces
DOE Office of Scientific and Technical Information (OSTI.GOV)
Winter, H.; Schueller, A.; Busch, M.
2011-06-01
The structure of clean and adsorbate-covered surfaces, as well as of ultrathin films, can be investigated by grazing scattering of fast atoms. We present two recent experimental techniques which allow one to study the structure of ordered arrangements of surface atoms in detail: (1) rainbow scattering under axial surface channeling conditions, and (2) fast atom diffraction. Our examples demonstrate the attractive features of grazing fast atom scattering as a powerful analytical tool in studies on the structure of surfaces. We will concentrate our discussion on the structure of ultrathin silica films on a Mo(112) surface and of adsorbed oxygen atoms on a Fe(110) surface.
Multi-hole pressure probes to air data system for subsonic small-scale air vehicles
NASA Astrophysics Data System (ADS)
Shevchenko, A. M.; Berezin, D. R.; Puzirev, L. N.; Tarasov, A. Z.; Kharitonov, A. M.; Shmakov, A. S.
2016-10-01
We present a brief review of research performed to develop multi-hole probes for measuring the aerodynamic angles, dynamic head, and static pressure of a flying vehicle. The basis of this work is the application of the well-known classical multi-hole pressure probe technique for measuring a 3D flow to the air data system. Two multi-hole pressure probes, with spherical and hemispherical heads, have been developed for the air data system of subsonic small-scale vehicles. A simple analytical probe model with separation of variables is proposed. The probes were calibrated in the wind tunnel, and one of them was flight tested.
Emergent quantum mechanics without wavefunctions
NASA Astrophysics Data System (ADS)
Mesa Pascasio, J.; Fussy, S.; Schwabl, H.; Grössing, G.
2016-03-01
We present our model of an Emergent Quantum Mechanics which can be characterized by “realism without pre-determination”. This is illustrated by our analytic description and corresponding computer simulations of Bohmian-like “surreal” trajectories, which are obtained classically, i.e. without the use of any quantum mechanical tool such as wavefunctions. However, these trajectories do not necessarily represent ontological paths of particles but rather mappings of the probability density flux in a hydrodynamical sense. Modelling emergent quantum mechanics in a high-low intensity double slit scenario gives rise to the “quantum sweeper effect” with a characteristic intensity pattern. This phenomenon should be experimentally testable via weak measurement techniques.
The gravitational potential of axially symmetric bodies from a regularized Green kernel
NASA Astrophysics Data System (ADS)
Trova, A.; Huré, J.-M.; Hersant, F.
2011-12-01
The determination of the gravitational potential inside celestial bodies (rotating stars, discs, planets, asteroids) is a common challenge in numerical Astrophysics. Under axial symmetry, the potential is classically found from a two-dimensional integral over the body's meridional cross-section. Because it involves an improper integral, high accuracy is generally difficult to reach. We have discovered that, for homogeneous bodies, the singular Green kernel can be converted into a regular kernel by direct analytical integration. This new kernel, easily managed with standard techniques, opens interesting horizons, not only for numerical calculus but also to generate approximations, in particular for geometrically thin discs and rings.
Advances in Biosensors, Chemosensors and Assays for the Determination of Fusarium Mycotoxins.
Lin, Xialu; Guo, Xiong
2016-05-24
Contamination of grains and related products with Fusarium mycotoxins, and the resulting human exposure, are considerable concerns for food safety and human health worldwide. The common Fusarium mycotoxins include fumonisins, T-2 toxin, deoxynivalenol and zearalenone. For this reason, simple, fast and sensitive analytical techniques are particularly important for the screening and determination of Fusarium mycotoxins. In this review, we outline the related advances in biosensors, chemosensors and assays based on classical and novel recognition elements such as antibodies, aptamers and molecularly imprinted polymers. Application to food/feed commodities, limits of detection, and detection times are also discussed.
Rossoe, Ed Wilson Tsuneo; Tebcherani, Antonio José; Sittart, José Alexandre; Pires, Mario Cezar
2011-01-01
Chronic actinic cheilitis is actinic keratosis located on the vermilion border. Treatment is essential because of the potential for malignant transformation. To evaluate the aesthetic and functional results of vermilionectomy using the classic and W-plasty techniques in actinic cheilitis. In the classic technique, the scar is linear and in the W-plasty one, it is a broken line. 32 patients with clinical and histopathological diagnosis of actinic cheilitis were treated. Out of the 32 patients, 15 underwent the W-plasty technique and 17 underwent the classic one. We evaluated parameters such as scar retraction and functional changes. A statistically significant association between the technique used and scar retraction was found, which was positive when using the classic technique (p = 0.01 with Yates' correction). The odds ratio was calculated at 11.25, i.e., there was a greater chance of retraction in patients undergoing the classic technique. Both techniques revealed no functional changes. We evaluated postoperative complications such as the presence of crusts, dry lips, paresthesia, and suture dehiscence. There was no statistically significant association between complications and the technique used (p = 0.69). We concluded that vermilionectomy using the W-plasty technique shows better cosmetic results and similar complication rates.
Matching in an undisturbed natural human environment.
McDowell, J J; Caron, Marcia L
2010-05-01
Data from the Oregon Youth Study, consisting of the verbal behavior of 210 adolescent boys determined to be at risk for delinquency (targets) and 210 of their friends (peers), were analyzed for their conformance to the complete family of matching theory equations in light of recent findings from the basic science, and using recently developed analytic techniques. Equations of the classic and modern theories of matching were fitted as ensembles to rates and time allocations of the boys' rule-break and normative talk obtained from conversations between pairs of boys. The verbal behavior of each boy in a conversation was presumed to be reinforced by positive social responses from the other boy. Consistent with recent findings from the basic science, the boys' verbal behavior was accurately described by the modern but not the classic theory of matching. These findings also add support to the assertion that basic principles and processes that are known to govern behavior in laboratory experiments also govern human social behavior in undisturbed natural environments.
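The classic and modern theories of matching mentioned above differ in whether behavior rates follow a simple hyperbola or a power-function generalization of it. A minimal sketch, using made-up parameter values rather than the Oregon Youth Study data, of how the exponent of the power-function (modern) form can be recovered by a grid-search fit:

```python
import numpy as np

def modern_matching(r, k, re, a):
    """Power-function (modern) matching law; a = 1 recovers the classic hyperbola."""
    return k * r**a / (r**a + re**a)

r = np.linspace(0.5, 20.0, 40)                    # reinforcement rates
data = modern_matching(r, k=10.0, re=2.0, a=0.6)  # synthetic behavior rates

# grid-search fit of the exponent a, with k and re held at their true values
grid = np.linspace(0.2, 1.5, 131)                 # step 0.01
sse = np.array([np.sum((data - modern_matching(r, 10.0, 2.0, a))**2) for a in grid])
best_a = float(grid[int(np.argmin(sse))])
print(round(best_a, 2))  # 0.6; forcing the classic form (a = 1) leaves a large residual
```

In the real analyses, of course, all parameters are fitted jointly and ensembles of matching equations are compared; this sketch only illustrates the structural difference between the two theories.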
NASA Astrophysics Data System (ADS)
Wu, Sheng-Jhih; Chu, Moody T.
2017-08-01
An inverse eigenvalue problem usually entails two constraints, one conditioned upon the spectrum and the other on the structure. This paper investigates the problem where triple constraints of eigenvalues, singular values, and diagonal entries are imposed simultaneously. An approach combining an eclectic mix of skills from differential geometry, optimization theory, and analytic gradient flow is employed to prove the solvability of such a problem. The result generalizes the classical Mirsky, Sing-Thompson, and Weyl-Horn theorems concerning the respective majorization relationships between any two of the arrays of main diagonal entries, eigenvalues, and singular values. The existence theory fills a gap in the classical matrix theory. The problem might find applications in wireless communication and quantum information science. The technique employed can be implemented as a first-step numerical method for constructing the matrix. With slight modification, the approach might be used to explore similar types of inverse problems where the prescribed entries are at general locations.
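As background for the majorization results named above, the Sing-Thompson theorem relates a prescribed main diagonal to prescribed singular values. A hedged sketch of that feasibility check (the inequalities below are our paraphrase of the classical theorem, not code from the paper):

```python
import numpy as np

def sing_thompson_ok(d, s):
    """Feasibility of a real n x n matrix with main diagonal d and singular values s."""
    d = np.sort(np.abs(np.asarray(d, dtype=float)))[::-1]
    s = np.sort(np.asarray(s, dtype=float))[::-1]
    n = len(d)
    # |d| must be weakly majorized by s ...
    if any(d[:k].sum() > s[:k].sum() + 1e-12 for k in range(1, n + 1)):
        return False
    # ... plus one extra inequality with the smallest entries sign-flipped
    return bool(d[:-1].sum() - d[-1] <= s[:-1].sum() - s[-1] + 1e-12)

print(sing_thompson_ok([3, 1], [3, 1]))   # True: diag(3, 1) itself realizes it
print(sing_thompson_ok([3, 3], [4, 1]))   # False: 3 + 3 > 4 + 1
```

The paper's contribution is the harder triple-constraint case, where eigenvalue, singular-value, and diagonal conditions must all hold simultaneously.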
Extending the Dynamic Range of the Ion Trap by Differential Mobility Filtration
Hall, Adam B.; Coy, Stephen L.; Kafle, Amol; Glick, James; Nazarov, Erkinjon
2013-01-01
A miniature, planar, differential ion mobility spectrometer (DMS) was interfaced to an LCQ classic ion trap to conduct selective ion filtration prior to mass analysis in order to extend the dynamic range of the trap. Space charge effects are known to limit the functional ion storage capacity of ion trap mass analyzers and this, in turn, can affect the quality of the mass spectral data generated. This problem is further exacerbated in the analysis of mixtures where the indiscriminate introduction of matrix ions results in premature trap saturation with non-targeted species, thereby reducing the number of parent ions that may be used to conduct MS/MS experiments for quantitation or other diagnostic studies. We show that conducting differential mobility-based separations prior to mass analysis allows the isolation of targeted analytes from electrosprayed mixtures preventing the indiscriminate introduction of matrix ions and premature trap saturation with analytically unrelated species. Coupling these two analytical techniques is shown to enhance the detection of a targeted drug metabolite from a biological matrix. In its capacity as a selective ion filter, the DMS can improve the analytical performance of analyzers such as quadrupole (3-D or linear) and ion cyclotron resonance (FT-ICR) ion traps that depend on ion accumulation. PMID:23797861
Methods for Estimating Uncertainty in Factor Analytic Solutions
The EPA PMF (Environmental Protection Agency positive matrix factorization) version 5.0 and the underlying multilinear engine-executable ME-2 contain three methods for estimating uncertainty in factor analytic models: classical bootstrap (BS), displacement of factor elements (DI...
The evolution of analytical chemistry methods in foodomics.
Gallo, Monica; Ferranti, Pasquale
2016-01-08
The methodologies of food analysis have greatly evolved over the past 100 years, from basic assays based on solution chemistry to those relying on modern instrumental platforms. Today, the development and optimization of integrated analytical approaches based on different techniques to study the chemical composition of a food at the molecular level may allow the definition of a 'food fingerprint', valuable for assessing the nutritional value, safety, quality, authenticity and security of foods. This comprehensive strategy, termed foodomics, includes emerging work areas such as food chemistry, phytochemistry, advanced analytical techniques, biosensors and bioinformatics. Integrated approaches can help to elucidate some critical issues in food analysis, but also to face the new challenges of a globalized world: security, sustainability and food production in response to worldwide environmental changes. They include the development of powerful analytical methods to ensure the origin and quality of food, as well as the discovery of biomarkers to identify potential food safety problems. In the area of nutrition, the future challenge is to identify, through specific biomarkers, individual peculiarities that allow early diagnosis and then a personalized prognosis and diet for patients with food-related disorders. Rather than attempting an exhaustive review of the abundant literature dedicated to the applications of omic sciences in food analysis, we will explore how classical approaches, such as those used in chemistry and biochemistry, have evolved to intersect with the new omics technologies to produce progress in our understanding of the complexity of foods. Perhaps most importantly, a key objective of the review will be to explore the development of simple and robust methods for a fully applied use of omics data in food science. Copyright © 2015 Elsevier B.V. All rights reserved.
Revisiting the positive DC corona discharge theory: Beyond Peek's and Townsend's law
NASA Astrophysics Data System (ADS)
Monrolin, Nicolas; Praud, Olivier; Plouraboué, Franck
2018-06-01
The classical positive corona discharge theory in a cylindrical axisymmetric configuration is revisited in order to find analytically the influence of gas properties and thermodynamic conditions on the corona current. The matched asymptotic expansion of Durbin and Turyn [J. Phys. D: Appl. Phys. 20, 1490-1495 (1987)] of a simplified but self-consistent problem is performed and explicit analytical solutions are derived. The mathematical derivation enables us to express a new positive DC corona current-voltage characteristic, in either a dimensionless or a dimensional formulation. In dimensional variables, the current-voltage law and the corona inception voltage explicitly depend on the electrode size and on physical gas properties such as ionization and photoionization parameters. The analytical predictions are successfully compared with experiments and with Peek's and Townsend's laws. An analytical expression for the corona inception voltage φ_on is proposed, which depends on the known values of physical parameters without adjustable parameters. As a proof of consistency, the classical Townsend current-voltage law I = Cφ(φ − φ_on) is retrieved by linearizing the non-dimensional analytical solution. A brief parametric study showcases the interest of this analytical current model, especially for exploring small corona wires or various thermodynamic conditions.
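The Townsend law quoted above is simple enough to evaluate directly. A small illustration with invented constants C and φ_on (not values fitted or derived in the paper):

```python
# Townsend's quadratic current-voltage law for positive DC corona,
# I = C * phi * (phi - phi_on), with purely illustrative constants.
def townsend_current(phi, C=1e-12, phi_on=5.0e3):
    """Corona current (A) for applied voltage phi (V); zero below onset."""
    return C * phi * (phi - phi_on) if phi > phi_on else 0.0

print(townsend_current(10e3))  # 5e-05 A, i.e. 50 microamps
```

The paper's contribution is precisely that C and φ_on need not be fitted: they follow from the electrode geometry and gas properties.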
Characterization of classical static noise via qubit as probe
NASA Astrophysics Data System (ADS)
Javed, Muhammad; Khan, Salman; Ullah, Sayed Arif
2018-03-01
The dynamics of quantum Fisher information (QFI) of a single qubit coupled to classical static noise is investigated. The analytical relation for QFI fixes the optimal initial state of the qubit that maximizes it. An approximate limit for the time of coupling that leads to physically useful results is identified. Moreover, using the approach of quantum estimation theory and the analytical relation for QFI, the qubit is used as a probe to precisely estimate the disordered parameter of the environment. Relation for optimal interaction time with the environment is obtained, and condition for the optimal measurement of the noise parameter of the environment is given. It is shown that all values, in the mentioned range, of the noise parameter are estimable with equal precision. A comparison of our results with the previous studies in different classical environments is made.
Classical Civilization (Greece-Hellenistic-Rome). Teacher's Manual. 1968 Edition.
ERIC Educational Resources Information Center
Leppert, Ella C.; Smith, Rozella B.
This secondary teachers guide builds upon a previous sequential course described in SO 003 173, and consists of three sections on the classical civilizations--Greek, Hellenistic, and Rome. Major emphasis is upon students gaining an understanding of cultural development and transmission. Using an analytic method, students learn to examine primary…
Discovering Romanticism and Classicism in the English Classroom.
ERIC Educational Resources Information Center
Stark, Sandra A.
1994-01-01
Details the concepts of romanticism and classicism and how they relate to secondary English instruction. Argues that teachers should offer students both the imaginative adventure of the romantic and the analytical power of the classicist. Describes a visual lesson by which these two modes might be illustrated and fostered. (HB)
Ammari, Faten; Jouan-Rimbaud-Bouveresse, Delphine; Boughanmi, Néziha; Rutledge, Douglas N
2012-09-15
The aim of this study was to find objective analytical methods to study the degradation of edible oils during heating and thus to suggest solutions to improve their stability. The efficiency of Nigella seed extract as natural antioxidant was compared with butylated hydroxytoluene (BHT) during accelerated oxidation of edible vegetable oils at 120 and 140 °C. The modifications during heating were monitored by 3D-front-face fluorescence spectroscopy along with Independent Components Analysis (ICA), (1)H NMR spectroscopy and classical physico-chemical methods such as anisidine value and viscosity. The results of the study clearly indicate that the natural seed extract at a level of 800 ppm exhibited antioxidant effects similar to those of the synthetic antioxidant BHT at a level of 200 ppm and thus contributes to an increase in the oxidative stability of the oil. Copyright © 2012 Elsevier B.V. All rights reserved.
Pricing foreign equity option under stochastic volatility tempered stable Lévy processes
NASA Astrophysics Data System (ADS)
Gong, Xiaoli; Zhuang, Xintian
2017-10-01
Considering that financial asset returns exhibit leptokurtosis and asymmetry as well as clustering and heteroskedasticity effects, this paper replaces the log-normal jumps in the Heston stochastic volatility model with the classical tempered stable (CTS) distribution and the normal tempered stable (NTS) distribution to construct stochastic volatility tempered stable Lévy process (TSSV) models. The TSSV model framework permits the infinite-activity jump behavior of return dynamics and the time-varying volatility consistently observed in financial markets, by subordinating a tempered stable process to a stochastic volatility process, capturing the leptokurtosis, fat-tailedness and asymmetry features of returns. By employing the analytical characteristic function and the fast Fourier transform (FFT) technique, the formula for the probability density function (PDF) of TSSV returns is derived, making an analytical formula for foreign equity option (FEO) pricing available. High-frequency financial returns data are employed to verify the effectiveness of the proposed models in reflecting the stylized facts of financial markets. Numerical analysis is performed to investigate the relationship between the corresponding parameters and the implied volatility of the foreign equity option.
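The FFT pricing route above rests on one generic step: recovering a density from its analytical characteristic function by Fourier inversion. A sketch of that step alone, demonstrated on the standard normal CF where the answer is known (the TSSV characteristic function itself is not reproduced here, and a production version would use an FFT instead of this direct transform):

```python
import numpy as np

N, L = 512, 40.0                    # grid size, frequency window [-L/2, L/2)
du = L / N
u = (np.arange(N) - N // 2) * du    # frequency grid
cf = np.exp(-0.5 * u**2)            # CF of the standard normal, exp(-u^2/2)

dx = 2.0 * np.pi / L
x = (np.arange(N) - N // 2) * dx    # spatial grid
# direct discrete inversion of p(x) = (1/2pi) * integral e^{-iux} cf(u) du
pdf = np.real(du / (2.0 * np.pi) * np.exp(-1j * np.outer(x, u)) @ cf)

print(round(float(pdf[N // 2]), 4))  # density at x = 0; exact value 1/sqrt(2*pi) = 0.3989
```

Swapping in a heavier-tailed CF such as a tempered stable one changes only the `cf` line; the inversion machinery is unchanged, which is what makes the approach attractive for models without closed-form densities.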
Fast Quantum Algorithm for Predicting Descriptive Statistics of Stochastic Processes
NASA Technical Reports Server (NTRS)
Williams, Colin P.
1999-01-01
Stochastic processes are used as a modeling tool in several sub-fields of physics, biology, and finance. Analytic understanding of the long-term behavior of such processes is tractable only for very simple types of stochastic processes, such as Markovian processes. However, in real-world applications more complex stochastic processes often arise. In physics, the complicating factor might be nonlinearities; in biology it might be memory effects; and in finance it might be the non-random intentional behavior of participants in a market. In the absence of analytic insight, one is forced to understand these more complex stochastic processes via numerical simulation techniques. In this paper we present a quantum algorithm for performing such simulations. In particular, we show how a quantum algorithm can predict arbitrary descriptive statistics (moments) of N-step stochastic processes in just O(√N) time. That is, the quantum complexity is the square root of the classical complexity for performing such simulations. This is a significant speedup over the current state of the art.
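For context, the classical baseline against which the O(√N) quantum complexity is measured is plain simulation: each sample of an N-step process costs O(N) work. A minimal sketch (an illustrative random walk, not the paper's algorithm), estimating the first two moments by Monte Carlo:

```python
import random
import statistics

def walk_endpoint(n, rng):
    """Endpoint of an n-step +/-1 random walk; costs O(n) work per sample."""
    return sum(rng.choice((-1, 1)) for _ in range(n))

rng = random.Random(0)
samples = [walk_endpoint(100, rng) for _ in range(2000)]
mean = statistics.fmean(samples)
var = statistics.pvariance(samples)
# theory for a simple random walk: mean 0, variance equal to the step count N
print(round(mean, 2), round(var, 1))
```

The quantum algorithm's advantage lies in obtaining such moments with quadratically fewer steps than this per-sample O(N) simulation requires.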
Lauretti, Gabriela R; Corrêa, Selma W R O; Mattos, Anita L
2015-09-01
The aim of the study was to compare the efficacy of the greater occipital nerve (GON) block using the classical technique and different volumes of injectate with the subcompartmental technique for the treatment of cervicogenic headache (CH). Thirty patients each acted as their own control. All patients underwent the GON block by the classical technique with 10 mg dexamethasone plus 40 mg lidocaine (5 mL volume). Patients were randomly allocated to 1 of 3 groups (n = 10) when pain VAS was > 3 cm. Each group underwent a GON subcompartmental technique (10 mg dexamethasone + 40 mg lidocaine + nonionic iodine contrast + saline) under fluoroscopy using either 5, 10, or 15 mL final volume. Analgesia and quality of life were evaluated. The classical GON technique resulted in 2 weeks of analgesia and less rescue analgesic consumption, compared to 24 weeks after the subcompartmental technique (P < 0.01). Quality of life improved at 2 and 24 weeks after the classical and the subcompartmental techniques, respectively (P < 0.05). The data revealed that groups were similar regarding analgesia when compared by volume of injection (P > 0.05). While the classical technique for GON block resulted in only 2 weeks of analgesia, the subcompartmental technique resulted in at least 24 weeks of analgesia, with a 5 mL volume being sufficient for performance of the block under fluoroscopy. © 2014 World Institute of Pain.
An Artificial Neural Networks Method for Solving Partial Differential Equations
NASA Astrophysics Data System (ADS)
Alharbi, Abir
2010-09-01
While many analytical and numerical techniques already exist for solving PDEs, this paper introduces an approach using artificial neural networks. The approach consists of a technique developed by combining a standard numerical method, finite differences, with the Hopfield neural network. The method is denoted Hopfield-finite-difference (HFD). The architecture of the nets, the energy function, the updating equations, and the algorithms are developed for the method. The HFD method has been used successfully to approximate the solution of classical PDEs, such as the wave, heat, Poisson and diffusion equations, and of a system of PDEs. The software Matlab is used to obtain the results in both tabular and graphical form. The results are similar in accuracy to those obtained by standard numerical methods. In terms of speed, the parallel nature of the Hopfield net makes the method easier to implement on fast parallel computers, while some numerical methods need extra effort for parallelization.
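For reference, the standard finite-difference ingredient that HFD builds on can be sketched for the simplest of the equations named above, the 1-D heat equation u_t = u_xx (a plain explicit scheme, not the Hopfield reformulation itself):

```python
import numpy as np

nx, nt = 51, 2000
dx = 1.0 / (nx - 1)
dt = 0.4 * dx**2              # satisfies the stability bound dt <= dx^2 / 2
x = np.linspace(0.0, 1.0, nx)
u = np.sin(np.pi * x)         # initial condition; u = 0 at both boundaries

for _ in range(nt):           # explicit update of interior points only
    u[1:-1] += dt / dx**2 * (u[2:] - 2.0 * u[1:-1] + u[:-2])

# the exact solution decays as exp(-pi^2 * t); compare at t = nt * dt
t = nt * dt
err = np.max(np.abs(u - np.sin(np.pi * x) * np.exp(-np.pi**2 * t)))
print(err < 1e-3)  # True
```

The HFD idea is to recast the resulting system of discrete equations as the minimum of a Hopfield energy function, so the update of every grid point can run in parallel.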
Artificial intelligence in healthcare: past, present and future.
Jiang, Fei; Jiang, Yong; Zhi, Hui; Dong, Yi; Li, Hao; Ma, Sufeng; Wang, Yilong; Dong, Qiang; Shen, Haipeng; Wang, Yongjun
2017-12-01
Artificial intelligence (AI) aims to mimic human cognitive functions. It is bringing a paradigm shift to healthcare, powered by increasing availability of healthcare data and rapid progress of analytics techniques. We survey the current status of AI applications in healthcare and discuss its future. AI can be applied to various types of healthcare data (structured and unstructured). Popular AI techniques include machine learning methods for structured data, such as the classical support vector machine and neural network, and the modern deep learning, as well as natural language processing for unstructured data. Major disease areas that use AI tools include cancer, neurology and cardiology. We then review in more details the AI applications in stroke, in the three major areas of early detection and diagnosis, treatment, as well as outcome prediction and prognosis evaluation. We conclude with discussion about pioneer AI systems, such as IBM Watson, and hurdles for real-life deployment of AI.
Marine environment pollution: The contribution of mass spectrometry to the study of seawater.
Magi, Emanuele; Di Carro, Marina
2016-09-09
The study of marine pollution has traditionally focused on persistent chemicals, generally known as priority pollutants; a current trend in environmental analysis is a shift toward "emerging pollutants," defined as newly identified or previously unrecognized contaminants. The present review is focused on the particular contribution of mass spectrometry (MS) to the study of pollutants in the seawater compartment. The work is organized in five sections where the most relevant groups of pollutants, both "classical" and "emerging," are presented and discussed, highlighting the relevant data obtained by means of different MS techniques. The hyphenation of MS and separative techniques, together with the development of different ion sources, makes MS and tandem MS the analytical tool of choice for the determination of trace organic contaminants in seawater. © 2016 Wiley Periodicals, Inc.
Application of the variational-asymptotical method to composite plates
NASA Technical Reports Server (NTRS)
Hodges, Dewey H.; Lee, Bok W.; Atilgan, Ali R.
1992-01-01
A method is developed for the 3D analysis of laminated plate deformation which is an extension of a variational-asymptotical method by Atilgan and Hodges (1991). Both methods are based on the treatment of plate deformation by splitting the 3D analysis into linear through-the-thickness analysis and 2D plate analysis. Whereas the first technique tackles transverse shear deformation in the second asymptotical approximation, the present method simplifies its treatment and restricts it to the first approximation. Both analytical techniques are applied to the linear cylindrical bending problem, and the strain and stress distributions are derived and compared with those of the exact solution. The present theory provides more accurate results than those of the classical laminated-plate theory for the transverse displacement of 2-, 3-, and 4-layer cross-ply laminated plates. The method can give reliable estimates of the in-plane strain and displacement distributions.
A Meta-Analysis of Hypnotherapeutic Techniques in the Treatment of PTSD Symptoms.
O'Toole, Siobhan K; Solomon, Shelby L; Bergdahl, Stephen A
2016-02-01
The efficacy of hypnotherapeutic techniques as treatment for symptoms of posttraumatic stress disorder (PTSD) was explored through meta-analytic methods. Studies were selected through a search of 29 databases. Altogether, 81 studies discussing hypnotherapy and PTSD were reviewed for inclusion criteria. The outcomes of 6 studies representing 391 participants were analyzed using meta-analysis. Evaluation of effect sizes related to avoidance and intrusion, in addition to overall PTSD symptoms after hypnotherapy treatment, revealed that all studies showed that hypnotherapy had a positive effect on PTSD symptoms. The overall Cohen's d was large (-1.18) and statistically significant (p < .001). Effect sizes varied based on study quality; however, they were large and statistically significant. Using the classic fail-safe N to assess for publication bias, it was determined it would take 290 nonsignificant studies to nullify these findings. Copyright © 2016 International Society for Traumatic Stress Studies.
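Rosenthal's classic fail-safe N used above has a simple closed form: the number of unpublished null-result studies needed to pull a combined result below significance. A sketch with illustrative z-scores (not the values from the six studies in this meta-analysis):

```python
import math

def fail_safe_n(z_scores, z_crit=1.645):
    """Rosenthal's classic fail-safe N for k studies with the given z-scores."""
    k = len(z_scores)
    n = (sum(z_scores) / z_crit) ** 2 - k
    return max(0, math.floor(n))

print(fail_safe_n([3.1, 2.8, 3.5, 2.2, 4.0, 3.3]))  # 126
```

The larger this number relative to the number of included studies, the less plausible it is that publication bias explains the pooled effect.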
Optics-Integrated Microfluidic Platforms for Biomolecular Analyses
Bates, Kathleen E.; Lu, Hang
2016-01-01
Compared with conventional optical methods, optics implemented on microfluidic chips provide smaller and often much cheaper ways to interrogate biological systems from the level of single molecules up to small model organisms. The optical probing of single molecules has been used to investigate the mechanical properties of individual biological molecules; however, multiplexing of these measurements through microfluidics and nanofluidics confers many analytical advantages. Optics-integrated microfluidic systems can significantly simplify sample processing and allow a more user-friendly experience; alignments of on-chip optical components are predetermined during fabrication and many purely optical techniques are passively controlled. Furthermore, sample loss from complicated preparation and fluid transfer steps can be virtually eliminated, a particularly important attribute for biological molecules at very low concentrations. Excellent fluid handling and high surface area/volume ratios also contribute to faster detection times for low abundance molecules in small sample volumes. Although integration of optical systems with classical microfluidic analysis techniques has been limited, microfluidics offers a ready platform for interrogation of biophysical properties. By exploiting the ease with which fluids and particles can be precisely and dynamically controlled in microfluidic devices, optical sensors capable of unique imaging modes, single molecule manipulation, and detection of minute changes in concentration of an analyte are possible. PMID:27119629
Crossing Fibers Detection with an Analytical High Order Tensor Decomposition
Megherbi, T.; Kachouane, M.; Oulebsir-Boumghar, F.; Deriche, R.
2014-01-01
Diffusion magnetic resonance imaging (dMRI) is the only technique to probe in vivo and noninvasively the fiber structure of human brain white matter. Detecting the crossing of neuronal fibers remains an exciting challenge with an important impact in tractography. In this work, we tackle this challenging problem and propose an original and efficient technique to extract all crossing fibers from diffusion signals. To this end, we start by estimating, from the dMRI signal, the so-called Cartesian tensor fiber orientation distribution (CT-FOD) function, whose maxima correspond exactly to the orientations of the fibers. The fourth order symmetric positive definite tensor that represents the CT-FOD is then analytically decomposed via the application of a new theoretical approach and this decomposition is used to accurately extract all the fibers orientations. Our proposed high order tensor decomposition based approach is minimal and allows recovering the whole crossing fibers without any a priori information on the total number of fibers. Various experiments performed on noisy synthetic data, on phantom diffusion data, and on human brain data validate our approach and clearly demonstrate that it is efficient, robust to noise and performs favorably in terms of angular resolution and accuracy when compared to some classical and state-of-the-art approaches. PMID:25246940
Classical Dynamics of Fullerenes
NASA Astrophysics Data System (ADS)
Sławianowski, Jan J.; Kotowski, Romuald K.
2017-06-01
The classical mechanics of large molecules and fullerenes is studied. The approach is based on the model of collective motion of these objects. The mixed Lagrangian (material) and Eulerian (space) description of motion is used. In particular, the Green and Cauchy deformation tensors are geometrically defined. The important issue is the group-theoretical approach to describing the affine deformations of the body. The Hamiltonian description of motion based on the Poisson brackets methodology is used. The Lagrange and Hamilton approaches allow us to formulate the mechanics in the canonical form. The method of discretization in analytical continuum theory and in classical dynamics of large molecules and fullerenes enables us to formulate their dynamics in terms of the polynomial expansions of configurations. Another approach is based on the theory of analytical functions and on their approximations by finite-order polynomials. We concentrate on the extremely simplified model of affine deformations or on their higher-order polynomial perturbations.
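For a purely affine deformation x = F X, the Green deformation tensor mentioned above has a simple coordinate expression, C = F^T F. The deformation gradient below is an assumed example, used only to show how C and the Green-Lagrange strain follow from it.

```python
import numpy as np

# Assumed affine deformation gradient F for x = F X (values are illustrative)
F = np.array([[1.10, 0.20, 0.00],
              [0.00, 0.95, 0.10],
              [0.00, 0.00, 1.05]])

C = F.T @ F                    # Green (right Cauchy-Green) deformation tensor
E = 0.5 * (C - np.eye(3))      # Green-Lagrange strain tensor

# C is symmetric positive definite for any invertible F
print(np.allclose(C, C.T), np.all(np.linalg.eigvalsh(C) > 0))  # → True True
```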
NASA Astrophysics Data System (ADS)
Gets, A. V.; Krainov, V. P.
2018-01-01
The yield of spontaneous photons at the tunneling ionization of atoms by intense low-frequency laser radiation near the classical cut-off is estimated analytically by using the three-step model. The bell-shaped dependence in the universal photon spectrum is explained qualitatively.
Analytical approximations to seawater optical phase functions of scattering
NASA Astrophysics Data System (ADS)
Haltrin, Vladimir I.
2004-11-01
This paper proposes a number of analytical approximations to the classic and recently measured seawater light scattering phase functions. Three types of analytical phase functions are derived: individual representations for 15 Petzold, 41 Mankovsky, and 91 Gulf of Mexico phase functions; collective fits to Petzold phase functions; and analytical representations that take into account dependencies between inherent optical properties of seawater. The proposed phase functions may be used for problems of radiative transfer, remote sensing, visibility and image propagation in natural waters of various turbidity.
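A classic analytic phase function in this vein is the one-parameter Henyey-Greenstein form, a standard seawater approximation, though not one of the specific fits derived in the paper; the asymmetry value below is an assumed example. The sketch checks the two analytic properties any valid phase function must satisfy: unit normalization over the sphere and mean cosine equal to g.

```python
import numpy as np

def henyey_greenstein(theta, g):
    """One-parameter analytic scattering phase function (per steradian)."""
    return (1.0 - g**2) / (4.0 * np.pi * (1.0 + g**2 - 2.0 * g * np.cos(theta))**1.5)

def trapezoid(y, x):
    """Simple trapezoidal quadrature."""
    return np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x))

g = 0.924                                   # assumed asymmetry, forward-peaked like seawater
theta = np.linspace(0.0, np.pi, 20001)
w = 2.0 * np.pi * np.sin(theta)             # solid-angle weight for azimuthal symmetry

norm = trapezoid(henyey_greenstein(theta, g) * w, theta)
mean_cos = trapezoid(henyey_greenstein(theta, g) * np.cos(theta) * w, theta)
print(round(norm, 4), round(mean_cos, 4))   # → 1.0 0.924
```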
Characterization of an Indian sword: classic and noninvasive methods of investigation in comparison
NASA Astrophysics Data System (ADS)
Barzagli, E.; Grazzi, F.; Williams, A.; Edge, D.; Scherillo, A.; Kelleher, J.; Zoppi, M.
2015-04-01
The evolution of metallurgy in history is one of the most interesting topics in Archaeometry. The production of steel and its forging methods to make tools and weapons are topics of great interest in the field of the history of metallurgy. In the production of weapons, we almost always find the highest level of technology. These were generally produced by skilled craftsmen who used the best quality materials available. Indian swords are an outstanding example in this field and one of the most interesting classes of objects for the study of the evolution of metallurgy. This work presents the study of a Shamsheer (a sword with a curved blade with single edge) made available by the Wallace Collection in London. The purpose of this study was to determine the composition, the microstructure, the level and the direction of residual strain and their distribution in the blade. We have used two different approaches: the classical one (metallography) and a nondestructive technique (neutron diffraction). In this way, we can test the differences and complementarities of these two techniques. To obtain a good characterization of artifacts studied by traditional analytical methods, an invasive approach is required. However, the most ancient objects are scarce in number, and the most interesting ones are usually in an excellent state of conservation, so it is unthinkable to apply techniques with a destructive approach. The analysis of the blades performed by metallographic microscopy has demonstrated the specificity of the production of this type of steel. However, metallographic analysis can give only limited information about the structural characteristics of these artifacts of high quality, and it is limited to the sampled areas. The best approach for nondestructive analysis is therefore to use neutron techniques.
Quantum-classical boundary for precision optical phase estimation
NASA Astrophysics Data System (ADS)
Birchall, Patrick M.; O'Brien, Jeremy L.; Matthews, Jonathan C. F.; Cable, Hugo
2017-12-01
Understanding the fundamental limits on the precision to which an optical phase can be estimated is of key interest for many investigative techniques utilized across science and technology. We study the estimation of a fixed optical phase shift due to a sample which has an associated optical loss, and compare phase estimation strategies using classical and nonclassical probe states. These comparisons are based on the attainable (quantum) Fisher information calculated per number of photons absorbed or scattered by the sample throughout the sensing process. We find that for a given number of incident photons upon the unknown phase, nonclassical techniques in principle provide less than a 20% reduction in root-mean-square error (RMSE) in comparison with ideal classical techniques in multipass optical setups. Using classical techniques in a different optical setup that we analyze, which incorporates additional stages of interference during the sensing process, the achievable reduction in RMSE afforded by nonclassical techniques falls to only ≃4%. We explain how these conclusions change when nonclassical techniques are compared to classical probe states in nonideal multipass optical setups, with additional photon losses due to the measurement apparatus.
Crack Development in Cross-Ply Laminates Under Uniaxial Tension
NASA Technical Reports Server (NTRS)
Gyekenyesi, Andrew L.
1996-01-01
This study addresses matrix-dominated failures in carbon fiber/polymer matrix composite laminates in a cross-ply lay-up. The events of interest are intralaminar fracture in the form of transverse cracks in the 90° plies, longitudinal splitting in the 0° plies, and interlaminar fracture in the form of 0/90 delamination. These events were observed using various nondestructive evaluation (NDE) techniques during static tensile tests. Acoustic emission (AE), X-radiography, and edge view microscopy were the principal ones utilized in a real-time environment. A comparison of the NDE results with an analytical model based on the classical linear fracture mechanics concept of strain energy release rate as a criterion for crack growth was performed. The virtual crack closure theory was incorporated with a finite element model to generate strain energy release rate curves for the analytical case. Celion carbon fiber/polyimide matrix (G30-500/PMR-15) was the material tested, with cross-ply lay-ups of (0₂/90₆)s and (0₄/90₄)s. The test specimens contained thermally induced cracks caused by the high-temperature processing. The analytical model was updated to compensate for the initial damage and to study further accumulation by taking into account the crack interactions. By correlating the experimental and analytical data, the critical energy release rates were found for the observable events of interest.
Complexometric Determination of Mercury Based on a Selective Masking Reaction
ERIC Educational Resources Information Center
Romero, Mercedes; Guidi, Veronica; Ibarrolaza, Agustin; Castells, Cecilia
2009-01-01
In the first analytical chemistry course, students are introduced to the concepts of equilibrium in water solutions and classical (non-instrumental) analytical methods. Our teaching experience shows that "real samples" stimulate students' enthusiasm for the laboratory work. From this diagnostic, we implemented an optional activity at the end of…
Levesque, Danielle L; Menzies, Allyson K; Landry-Cuerrier, Manuelle; Larocque, Guillaume; Humphries, Murray M
2017-07-01
Recent research is revealing incredible diversity in the thermoregulatory patterns of wild and captive endotherms. As a result of these findings, classic thermoregulatory categories of 'homeothermy', 'daily heterothermy', and 'hibernation' are becoming harder to delineate, impeding our understanding of the physiological and evolutionary significance of variation within and around these categories. However, we lack a generalized analytical approach for evaluating and comparing the complex and diversified nature of the full breadth of heterothermy expressed by individuals, populations, and species. Here we propose a new approach that decomposes body temperature time series into three inherent properties (waveform, amplitude, and period) using a non-stationary technique that accommodates the temporal variability of body temperature patterns. This approach quantifies circadian and seasonal variation in thermoregulatory patterns, and uses the distribution of observed thermoregulatory patterns as a basis for intra- and inter-specific comparisons. We analyse body temperature time series from multiple species, including classical hibernators, tropical heterotherms, and homeotherms, to highlight the approach's general usefulness and the major axes of thermoregulatory variation that it reveals.
Spin-Ice Thin Films: Large-N Theory and Monte Carlo Simulations
NASA Astrophysics Data System (ADS)
Lantagne-Hurtubise, Étienne; Rau, Jeffrey G.; Gingras, Michel J. P.
2018-04-01
We explore the physics of highly frustrated magnets in confined geometries, focusing on the Coulomb phase of pyrochlore spin ices. As a specific example, we investigate thin films of nearest-neighbor spin ice, using a combination of analytic large-N techniques and Monte Carlo simulations. In the simplest film geometry, with surfaces perpendicular to the [001] crystallographic direction, we observe pinch points in the spin-spin correlations characteristic of a two-dimensional Coulomb phase. We then consider the consequences of crystal symmetry breaking on the surfaces of the film through the inclusion of orphan bonds. We find that when these bonds are ferromagnetic, the Coulomb phase is destroyed by the presence of fluctuating surface magnetic charges, leading to a classical Z2 spin liquid. Building on this understanding, we discuss other film geometries with surfaces perpendicular to the [110] or the [111] direction. We generically predict the appearance of surface magnetic charges and discuss their implications for the physics of such films, including the possibility of an unusual Z3 classical spin liquid. Finally, we comment on open questions and promising avenues for future research.
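The Monte Carlo side of such a study can be sketched with a generic single-spin-flip Metropolis update. The model below is a plain nearest-neighbour ferromagnetic Ising square lattice, assumed purely for illustration; it is not the pyrochlore spin-ice Hamiltonian, large-N theory, or film geometry of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
L, T, sweeps = 16, 1.5, 100               # lattice size, temperature, MC sweeps (assumed)
s = rng.choice([-1, 1], size=(L, L))      # random initial spin configuration

def energy(s):
    """Nearest-neighbour ferromagnetic Ising energy with periodic boundaries."""
    return -np.sum(s * (np.roll(s, 1, 0) + np.roll(s, 1, 1)))

E0 = energy(s)
for _ in range(sweeps):
    for _ in range(L * L):                # one sweep = L*L attempted flips
        i, j = rng.integers(L, size=2)
        nb = s[(i+1) % L, j] + s[(i-1) % L, j] + s[i, (j+1) % L] + s[i, (j-1) % L]
        dE = 2 * s[i, j] * nb             # energy change of flipping spin (i, j)
        if dE <= 0 or rng.random() < np.exp(-dE / T):
            s[i, j] = -s[i, j]            # Metropolis acceptance rule

print(energy(s) < E0)  # → True (the system relaxes toward order below Tc)
```

Spin-ice simulations replace this Hamiltonian with the pyrochlore ice rules and typically need loop (worm) updates, but the acceptance logic is the same.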
A new method for multi-bit and qudit transfer based on commensurate waveguide arrays
NASA Astrophysics Data System (ADS)
Petrovic, J.; Veerman, J. J. P.
2018-05-01
Faithful state transfer is an important requirement in the construction of classical and quantum computers. While high-speed transfer is realized by optical-fibre interconnects, its implementation in integrated optical circuits is affected by cross-talk. The cross-talk between densely packed optical waveguides limits the transfer fidelity and distorts the signal in each channel, thus severely impeding the parallel transfer of states such as classical registers, multiple qubits and qudits. Here, we leverage suitably engineered cross-talk between waveguides to achieve parallel transfer on an optical chip. Waveguide coupling coefficients are designed to yield commensurate eigenvalues of the array and, hence, periodic revivals of the input state. While, in general, polynomially complex, the inverse eigenvalue problem permits analytic solutions for a small number of waveguides. We present exact solutions for arrays of up to nine waveguides and use them to design realistic buses for multi-(qu)bit and qudit transfer. Advantages and limitations of the proposed solution are discussed in the context of available fabrication techniques.
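A standard engineered-coupling profile with commensurate eigenvalues, and hence periodic revivals, is C_n ∝ sqrt(n(N−n)), the well-known perfect-state-transfer chain. It is used here only as an illustration of the commensurability idea, not necessarily as one of the specific arrays designed in the paper; the sketch checks the equally spaced spectrum and the end-to-end revival numerically.

```python
import numpy as np

N, C0 = 5, 1.0                           # array size and coupling scale (assumed)
n = np.arange(1, N)
C = C0 * np.sqrt(n * (N - n))            # couplings giving commensurate eigenvalues

H = np.diag(C, 1) + np.diag(C, -1)       # coupled-mode matrix of the waveguide array
w, V = np.linalg.eigh(H)
print(np.round(np.diff(np.sort(w)), 6))  # equally spaced spectrum → periodic revivals

# Light injected into waveguide 0 appears in waveguide N-1 at z = pi / (2*C0)
z = np.pi / (2.0 * C0)
U = V @ np.diag(np.exp(-1j * w * z)) @ V.conj().T
psi = np.zeros(N); psi[0] = 1.0
print(round(abs(U @ psi)[-1], 6))        # → 1.0
```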
Wall interference tests of a CAST 10-2/DOA 2 airfoil in an adaptive-wall test section
NASA Technical Reports Server (NTRS)
Mineck, Raymond E.
1987-01-01
A wind-tunnel investigation of a CAST 10-2/DOA 2 airfoil model has been conducted in the adaptive-wall test section of the Langley 0.3-Meter Transonic Cryogenic Tunnel (TCT) and in the National Aeronautical Establishment High Reynolds Number Two-Dimensional Test Facility. The primary goal of the tests was to assess two different wall-interference correction techniques: adaptive test-section walls and classical analytical corrections. Tests were conducted over a Mach number range from 0.3 to 0.8 and over a chord Reynolds number range from 6 million to 70 million. The airfoil aerodynamic characteristics from the tests in the 0.3-m TCT have been corrected for wall interference by the movement of the adaptive walls. No additional corrections for any residual interference have been applied to the data, to allow comparison with the classically corrected data from the same model in the conventional National Aeronautical Establishment facility. The data are presented graphically in this report as integrated force-and-moment coefficients and chordwise pressure distributions.
How much a quantum measurement is informative?
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dall'Arno, Michele; ICFO-Institut de Ciencies Fotoniques, E-08860 Castelldefels, Barcelona; Quit Group, Dipartimento di Fisica, via Bassi 6, I-27100 Pavia
2014-12-04
The informational power of a quantum measurement is the maximum amount of classical information that the measurement can extract from any ensemble of quantum states. We discuss its main properties. Informational power is an additive quantity, being equivalent to the classical capacity of a quantum-classical channel. The informational power of a quantum measurement is the maximum of the accessible information of a quantum ensemble that depends on the measurement. We present some examples where the symmetry of the measurement makes it possible to derive its informational power analytically.
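The quantity being maximized, the mutual information between state label and measurement outcome, is straightforward to evaluate for one fixed ensemble. The two-state qubit ensemble and basis measurement below are assumed examples; the informational power itself would require maximizing this value over all ensembles.

```python
import numpy as np

# Ensemble: |0> and |+> with equal priors; measurement: computational basis.
rho = [np.array([[1.0, 0.0], [0.0, 0.0]]),
       np.array([[0.5, 0.5], [0.5, 0.5]])]
P = [np.diag([1.0, 0.0]), np.diag([0.0, 1.0])]
prior = [0.5, 0.5]

# Joint distribution p(x, a) = prior(x) * Tr(rho_x P_a) via the Born rule
p = np.array([[prior[x] * np.trace(rho[x] @ P[a]).real for a in range(2)]
              for x in range(2)])
px, pa = p.sum(1), p.sum(0)

H = lambda q: -np.sum(q[q > 0] * np.log2(q[q > 0]))  # Shannon entropy in bits
I = H(px) + H(pa) - H(p.ravel())                     # mutual information
print(round(I, 4))                                   # → 0.3113
```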
NASA Astrophysics Data System (ADS)
Miranda Guedes, Rui
2018-02-01
Long-term creep of viscoelastic materials is experimentally inferred through accelerating techniques based on the time-temperature superposition principle (TTSP) or on the time-stress superposition principle (TSSP). According to these principles, a given property measured for short times at a higher temperature or higher stress level remains the same as that obtained for longer times at a lower temperature or lower stress level, except that the curves are shifted parallel to the horizontal axis, matching a master curve. These procedures enable the construction of creep master curves with short-term experimental tests. The Stepped Isostress Method (SSM) is an evolution of the classical TSSP method. A greater reduction in the number of test specimens required to obtain the master curve is achieved by the SSM technique, since only one specimen is necessary. The classical approach, using creep tests, demands at least one specimen per stress level to produce a set of creep curves upon which TSSP is applied to obtain the master curve. This work proposes an analytical method to process the SSM raw data. The method is validated using numerical simulations to reproduce the SSM tests based on two different viscoelastic models. One model represents the viscoelastic behavior of a graphite/epoxy laminate and the other represents an adhesive based on epoxy resin.
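The horizontal-shifting idea behind TSSP-type acceleration can be sketched with synthetic data. The power-law compliance and exponential stress shift factor below are assumed forms chosen for illustration, not the paper's SSM data-reduction method.

```python
import numpy as np

# Synthetic creep compliance obeying time-stress superposition:
# D(t, sigma) = D0 * (a(sigma) * t)**n, with stress shift factor a(sigma).
D0, n = 1.0e-9, 0.2

def shift_factor(sigma, sigma_ref=10.0, c=0.08):
    return np.exp(c * (sigma - sigma_ref))       # assumed Eyring-like form

t = np.logspace(0, 3, 50)                        # short-term test window [s]
D_ref  = D0 * (shift_factor(10.0) * t)**n        # reference stress level
D_high = D0 * (shift_factor(30.0) * t)**n        # accelerated (higher stress) level

# Recover the horizontal log-time shift that superposes the two curves
log_shift = np.mean(np.log10(D_high / D_ref)) / n
a_recovered = 10.0**log_shift
print(round(a_recovered, 4))                     # → 4.953, i.e. exp(1.6) = a(30)/a(10)
```

Shifting the high-stress curve left by `log_shift` decades collapses it onto the reference curve, extending the master curve to longer times.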
Should the Bible Be Taught as a Literary Classic in Public Education?
ERIC Educational Resources Information Center
Malikow, Max
2010-01-01
The research question "Should the Bible be taught as a literary classic in public education?" was pursued by a survey of nineteen scholars from three disciplines: education, literature, and law. The collected data served to guide the researcher in the writing of an analytical essay responding to the research question. The research…
The Dispersion Relation for the 1/sinh² Potential in the Classical Limit
NASA Technical Reports Server (NTRS)
Campbell, Joel
2009-01-01
The dispersion relation for the inverse hyperbolic potential is calculated in the classical limit. This is shown for both the low amplitude phonon branch and the high amplitude soliton branch. It is shown that these results qualitatively follow those previously found for the inverse squared potential, where explicit analytic solutions are known.
Classical theory of atomic collisions - The first hundred years
NASA Astrophysics Data System (ADS)
Grujić, Petar V.
2012-05-01
Classical calculations of the atomic processes started in 1911 with Rutherford's famous evaluation of the differential cross section for α particles scattered on foil atoms [1]. The success of these calculations was soon overshadowed by the rise of Quantum Mechanics in 1925 and its triumphal success in describing processes at the atomic and subatomic levels. It was generally recognized that the classical approach should be inadequate, and it was neglected until 1953, when the famous paper by Gregory Wannier appeared, in which the threshold law for the single ionization cross section behaviour by electron impact was derived. All later calculations and experimental studies confirmed the law derived by purely classical theory. The next step was taken by Ian Percival and collaborators in the 1960s, who developed a general classical three-body computer code, which was used by many researchers in evaluating various atomic processes like ionization, excitation, detachment, dissociation, etc. Another approach was pursued by Michal Gryzinski from Warsaw, who started a far-reaching programme for treating atomic particles and processes as purely classical objects [2]. Though often criticized for overestimating the domain of the classical theory, results of his group were able to match many experimental data. The Belgrade group pursued the classical approach using both analytical and numerical calculations, studying a number of atomic collisions, in particular near-threshold processes. The Riga group, led by Modris Gailitis [3], contributed considerably to the field, as did Valentin Ostrovsky and coworkers from Saint Petersburg, who developed powerful analytical methods within purely classical mechanics [4]. We shall make an overview of these approaches and show some of the remarkable results, which were subsequently confirmed by semiclassical and quantum mechanical calculations, as well as by the experimental evidence.
Finally, we discuss the theoretical and epistemological background of the classical calculations and explain why these turned out so successful, despite the essentially quantum nature of the atomic and subatomic systems.
Harrison, Alexandra
2014-01-01
My comments focus on a consideration of three issues central to child psychoanalysis stimulated by rereading the classic paper by Berta Bornstein, "The Analysis of a Phobic Child: Some Problems of Theory and Technique in Child Analysis": (1) the importance of "co-creativity" and its use in analysis to repair disruptions in the mother-child relationship; (2) working analytically with the "inner world of the child"; and (3) the fundamental importance of multiple simultaneous meaning-making processes. I begin with a discussion of current thinking about the importance of interactive processes in developmental and therapeutic change and then lead to the concepts of "co-creativity" and interactive repair, elements that are missing in the "Frankie" paper. The co-creative process that I outline includes multiple contributions that Frankie and his caregivers brought to their relationships: his mother, his father, his nurse, and even his analyst. I then address the question of how child analysts can maintain a central focus on the inner world of the child while still taking into account the complex nature of co-creativity in the change process. Finally, I discuss insights into the multiple simultaneous meaning-making processes in the analytic relationship to effect therapeutic change, including what I call the "sandwich model," an attempt to organize this complexity so that it is more accessible to the practicing clinician. In terms of the specific case of Frankie, my reading of the case suggests that failure to repair disruptions in the mother-child relationship from infancy through the time of the analytic treatment was central to Frankie's problems. My hypothesis is that, rather than the content of his analyst's interpretations, what was helpful to Frankie in the analysis was the series of attempts at interactive repair in the analytic process. Unfortunately, the case report does not offer data to test this hypothesis.
Indeed, one concluding observation from my reading of this classic case is how useful it would be for the contemporary analyst to pay attention to the multifaceted co-creative process in order to explain and foster the therapeutic change that can occur in analysis.
A comparative study of two different uncinectomy techniques: swing-door and classical.
Singhania, Ankit A; Bansal, Chetan; Chauhan, Nirali; Soni, Saurav
2012-01-01
The aim of this study was to determine which uncinectomy technique, classical or swing-door, gives better results. Four hundred eighty cases of sinusitis were selected and operated on with Functional Endoscopic Sinus Surgery (FESS). Of these, 240 uncinectomies were performed with the classical technique and 240 with the swing-door technique. Patients were initially managed medically according to their symptoms and prior treatment. Patients who had received adequate previous medical management were evaluated with CT scans of the sinuses; if disease persisted, they underwent FESS. The authors' experience indicates that functional endoscopic sinus surgery can be performed under local or general anesthesia, as permitted or tolerated. Among the 240 classical uncinectomies, ethmoidal complex injury was noted in 4 cases, missed maxillary ostium syndrome (incomplete removal) was reported in 12 patients, and orbital fat exposure was encountered in 5 patients. Among the 240 swing-door uncinectomies, incomplete removal was evident in 2 cases and lacrimal duct injury was reported in 3 cases. This evidence underscores how the swing-door technique combines the conservation goals of the anterior-to-posterior approach with the anatomic virtues of the posterior-to-anterior approach to ethmoidectomy. The 240 uncinectomies performed with the classical technique served as controls. The incidence of orbital penetration, incomplete removal, ethmoidal complex injury, and ostium non-identification was significantly lower with the new technique; three lacrimal injuries occurred with the swing-door technique compared to none with the classical technique.
The authors recommend swing door technique as it is easy to learn, allows complete removal of the uncinate flush with the lateral nasal wall and allows easy identification of the natural ostium of the maxillary sinus without injuring the ethmoidal complex.
Random Forest as a Predictive Analytics Alternative to Regression in Institutional Research
ERIC Educational Resources Information Center
He, Lingjun; Levine, Richard A.; Fan, Juanjuan; Beemer, Joshua; Stronach, Jeanne
2018-01-01
In institutional research, modern data mining approaches are seldom considered to address predictive analytics problems. The goal of this paper is to highlight the advantages of tree-based machine learning algorithms over classic (logistic) regression methods for data-informed decision making in higher education problems, and stress the success of…
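The comparison can be prototyped in a few lines with scikit-learn. The synthetic data below merely stand in for institutional records, and any accuracy gap on them is illustrative, not the paper's findings.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for institutional data (e.g., predicting student success)
X, y = make_classification(n_samples=2000, n_features=20, n_informative=8,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

results = {}
for model in (LogisticRegression(max_iter=1000),
              RandomForestClassifier(n_estimators=200, random_state=0)):
    model.fit(X_tr, y_tr)
    results[type(model).__name__] = accuracy_score(y_te, model.predict(X_te))
print(results)
```

Tree ensembles also expose variable-importance scores, one of the interpretability advantages such papers typically highlight over a single regression fit.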
Applying the method of fundamental solutions to harmonic problems with singular boundary conditions
NASA Astrophysics Data System (ADS)
Valtchev, Svilen S.; Alves, Carlos J. S.
2017-07-01
The method of fundamental solutions (MFS) is known to produce highly accurate numerical results for elliptic boundary value problems (BVP) with smooth boundary conditions, posed in analytic domains. However, due to the analyticity of the shape functions in its approximation basis, the MFS is usually disregarded when the boundary functions possess singularities. In this work we present a modification of the classical MFS which can be applied for the numerical solution of the Laplace BVP with Dirichlet boundary conditions exhibiting jump discontinuities. In particular, a set of harmonic functions with discontinuous boundary traces is added to the MFS basis. The accuracy of the proposed method is compared with the results from the classical MFS.
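For the smooth-data case, the classical MFS reduces to a small least-squares problem: expand the solution in fundamental solutions centered at source points outside the domain and fit the boundary data by collocation. The disk geometry, source radius, and point counts below are assumed illustration values; the paper's enrichment for singular data is not reproduced here.

```python
import numpy as np

# Smooth Dirichlet problem on the unit disk with exact harmonic solution u = x^2 - y^2
M, Nsrc, R = 80, 40, 2.0                    # collocation points, sources, source radius
tb = 2 * np.pi * np.arange(M) / M
ts = 2 * np.pi * np.arange(Nsrc) / Nsrc
bx, by = np.cos(tb), np.sin(tb)             # boundary collocation points
sx, sy = R * np.cos(ts), R * np.sin(ts)     # singularities placed outside the domain

def phi(px, py, qx, qy):
    """Fundamental solution of the 2-D Laplacian."""
    return -np.log(np.hypot(px[:, None] - qx, py[:, None] - qy)) / (2 * np.pi)

A = phi(bx, by, sx, sy)                     # M x Nsrc collocation matrix
g = bx**2 - by**2                           # Dirichlet boundary data
c, *_ = np.linalg.lstsq(A, g, rcond=None)   # least-squares MFS coefficients

# Evaluate the MFS approximation at an interior point and compare to the exact value
px, py = np.array([0.3]), np.array([0.2])
u_mfs = (phi(px, py, sx, sy) @ c)[0]
print(round(u_mfs, 6))                      # close to the exact value 0.05
```

With discontinuous boundary data this smooth basis converges poorly near the jump, which motivates the discontinuous harmonic enrichment proposed in the paper.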
Higher order approximation to the Hill problem dynamics about the libration points
NASA Astrophysics Data System (ADS)
Lara, Martin; Pérez, Iván L.; López, Rosario
2018-06-01
An analytical solution to the Hill problem Hamiltonian expanded about the libration points has been obtained by means of perturbation techniques. In order to compute the higher orders of the perturbation solution that are needed to capture all the relevant periodic orbits originating from the libration points within a reasonable accuracy, the normalization is approached in complex variables. The validity of the solution extends to energy values considerably far away from that of the libration points and, therefore, it can be used in the computation of Halo orbits as an alternative to the classical Lindstedt-Poincaré approach. Furthermore, the theory correctly predicts the existence of the two-lane bridge of periodic orbits linking the families of planar and vertical Lyapunov orbits.
Evolution of In-Situ Generated Reinforcement Precipitates in Metal Matrix Composites
NASA Technical Reports Server (NTRS)
Sen, S.; Kar, S. K.; Catalina, A. V.; Stefanescu, D. M.; Dhindaw, B. K.
2004-01-01
Due to certain inherent advantages, in-situ production of Metal Matrix Composites (MMCs) has received considerable attention in the recent past. In-situ techniques typically involve a chemical reaction that results in precipitation of a ceramic reinforcement phase. The size and spatial distribution of these precipitates ultimately determine the mechanical properties of these MMCs. In this paper we will investigate the validity of using classical growth laws and analytical expressions to describe the interaction between a precipitate and a solid-liquid interface (SLI) to predict the size and spatial evolution of the in-situ generated precipitates. Measurements made on the size and distribution of TiC precipitates in a Ni-Al matrix will be presented to test the validity of such an approach.
Timms, John F; Hale, Oliver J; Cramer, Rainer
2016-06-01
The last 20 years have seen significant improvements in the analytical capabilities of biological mass spectrometry (MS). Studies using advanced MS have resulted in new insights into cell biology and the etiology of diseases as well as its use in clinical applications. This review discusses recent developments in MS-based technologies and their cancer-related applications with a focus on proteomics. It also discusses the issues around translating the research findings to the clinic and provides an outline of where the field is moving. Expert commentary: Proteomics has been problematic to adapt for the clinical setting. However, MS-based techniques continue to demonstrate potential in novel clinical uses beyond classical cancer proteomics.
Analysis of scattering by a linear chain of spherical inclusions in an optical fiber
NASA Astrophysics Data System (ADS)
Chremmos, Ioannis D.; Uzunoglu, Nikolaos K.
2006-12-01
The scattering by a linear chain of spherical dielectric inclusions, embedded along the axis of an optical fiber, is analyzed using a rigorous integral equation formulation, based on the dyadic Green's function theory. The coupled electric field integral equations are solved by applying the Galerkin technique with Mie-type expansion of the field inside the spheres in terms of spherical waves. The analysis extends the previously studied case of a single spherical inhomogeneity inside a fiber to the multisphere-scattering case, by utilizing the classic translational addition theorems for spherical waves in order to analytically extract the direct-intersphere-coupling coefficients. Results for the transmitted and reflected power, on incidence of the fundamental HE11 mode, are presented for several cases.
Fundamentals and techniques of nonimaging optics research
NASA Astrophysics Data System (ADS)
Winston, R.; Ogallagher, J.
1987-07-01
Nonimaging Optics differs from conventional approaches in its relaxation of unnecessary constraints on energy transport imposed by the traditional methods for optimizing image formation and its use of more broadly based analytical techniques such as phase space representations of energy flow, radiative transfer analysis, thermodynamic arguments, etc. Based on these means, techniques for designing optical elements which approach and in some cases attain the maximum concentration permitted by the Second Law of Thermodynamics were developed. The most widely known of these devices are the family of Compound Parabolic Concentrators (CPCs) and their variants and the so-called Flow-Line or trumpet concentrator derived from the geometric vector flux formalism developed under this program. Applications of these and other such ideal or near-ideal devices permit increases of typically a factor of four (though in some cases as much as an order of magnitude) in the concentration above that possible with conventional means. Present efforts can be classed into two main areas: (1) classical geometrical nonimaging optics, and (2) logical extensions of nonimaging concepts to the physical optics domain.
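The Second Law bound mentioned above has a simple closed form: C_max = 1/sin θa for a 2-D (trough) concentrator and 1/sin² θa in 3-D, where θa is the acceptance half-angle. The 5° half-angle below is an assumed example.

```python
import numpy as np

theta_a = np.radians(5.0)              # acceptance half-angle (assumed example)
C_2d = 1.0 / np.sin(theta_a)           # 2-D (trough) thermodynamic limit
C_3d = 1.0 / np.sin(theta_a)**2        # 3-D (rotationally symmetric) limit
print(round(C_2d, 1), round(C_3d, 1))  # → 11.5 131.6
```

An ideal CPC attains the 2-D limit exactly, which is why near-ideal devices can reach several times the concentration of conventional imaging designs.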
Fundamentals and techniques of nonimaging optics research at the University of Chicago
NASA Astrophysics Data System (ADS)
Winston, R.; Ogallagher, J.
1986-11-01
Nonimaging Optics differs from conventional approaches in its relaxation of unnecessary constraints on energy transport imposed by the traditional methods for optimizing image formation and its use of more broadly based analytical techniques such as phase space representations of energy flow, radiative transfer analysis, thermodynamic arguments, etc. Based on these means, techniques for designing optical elements which approach and in some cases attain the maximum concentration permitted by the Second Law of Thermodynamics were developed. The most widely known of these devices are the family of Compound Parabolic Concentrators (CPCs) and their variants and the so-called Flow-Line concentrator derived from the geometric vector flux formalism developed under this program. Applications of these and other such ideal or near-ideal devices permit increases of typically a factor of four (though in some cases as much as an order of magnitude) in the concentration above that possible with conventional means. In the most recent phase, our efforts can be classed into two main areas: (a) "classical" geometrical nonimaging optics; and (b) logical extensions of nonimaging concepts to the physical optics domain.
Global electromagnetic induction in the moon and planets. [poloidal eddy current transient response
NASA Technical Reports Server (NTRS)
Dyal, P.; Parkin, C. W.
1973-01-01
Experiments and analyses concerning electromagnetic induction in the moon and other extraterrestrial bodies are summarized. The theory of classical electromagnetic induction in a sphere is first considered, and this treatment is extended to the case of the moon, where poloidal eddy-current response has been found experimentally to dominate other induction modes. Analysis of lunar poloidal induction yields lunar internal electrical conductivity and temperature profiles. Two poloidal-induction analytical techniques are discussed: a transient-response method applied to time-series magnetometer data, and a harmonic-analysis method applied to data numerically Fourier-transformed to the frequency domain, with emphasis on the former technique. Attention is given to complicating effects of the solar wind interaction with both induced poloidal fields and remanent steady fields. The static magnetization field induction mode is described, from which are calculated bulk magnetic permeability profiles. Magnetic field measurements obtained from the moon and from fly-bys of Venus and Mars are studied to determine the feasibility of extending theoretical and experimental induction techniques to other bodies in the solar system.
Hughes, Sarah A; Mahaffey, Ashley; Shore, Bryon; Baker, Josh; Kilgour, Bruce; Brown, Christine; Peru, Kerry M; Headley, John V; Bailey, Howard C
2017-11-01
Previous assessments of oil sands process-affected water (OSPW) toxicity were hampered by the lack of high-resolution analytical methods, the use of nonstandard toxicity methods, and variability between OSPW samples. We integrated ultrahigh-resolution mass spectrometry with a toxicity identification evaluation (TIE) approach to quantitatively identify the primary cause of acute toxicity of OSPW to rainbow trout (Oncorhynchus mykiss). The initial characterization of OSPW toxicity indicated that toxicity was associated with nonpolar organic compounds, and toxicant(s) were further isolated within a range of discrete methanol fractions that were then subjected to Orbitrap mass spectrometry to evaluate the contribution of naphthenic acid fraction compounds to toxicity. The results showed that toxicity was attributable to classical naphthenic acids, with the potency of individual compounds increasing as a function of carbon number. Notably, the mass of classical naphthenic acids present in OSPW was dominated by carbon numbers ≤16; however, toxicity was largely a function of classical naphthenic acids with ≥17 carbons. Additional experiments found that acute toxicity of the organic fraction was similar when tested at conductivities of 400 and 1800 μmhos/cm and that rainbow trout fry were more sensitive to the organic fraction than larval fathead minnows (Pimephales promelas). Collectively, the results will aid in developing treatment goals and targets for removal of OSPW toxicity in water return scenarios both during operations and on mine closure. Environ Toxicol Chem 2017;36:3148-3157. © 2017 SETAC.
Proliferation of Observables and Measurement in Quantum-Classical Hybrids
NASA Astrophysics Data System (ADS)
Elze, Hans-Thomas
2012-01-01
Following a review of quantum-classical hybrid dynamics, we discuss the ensuing proliferation of observables and relate it to measurements of (would-be) quantum mechanical degrees of freedom performed by (would-be) classical ones (if they were separable). Hybrids consist of coupled classical (CL) and quantum mechanical (QM) objects. Numerous consistency requirements for their description have been discussed and are fulfilled here. We summarize a representation of quantum mechanics in terms of classical analytical mechanics which is naturally extended to QM-CL hybrids. This framework allows for superposition, separable, and entangled states originating in the QM sector, admits experimenter's "Free Will", and is local and nonsignaling. Presently, we study the set of hybrid observables, which is larger than the Cartesian product of QM and CL observables of its components; yet it is smaller than a corresponding product of all-classical observables. Thus, quantumness and classicality infect each other.
NASA Astrophysics Data System (ADS)
Vitanov, Nikolay V.
2018-05-01
In the experimental determination of the population transfer efficiency between discrete states of a coherently driven quantum system it is often inconvenient to measure the population of the target state. Instead, after the interaction that transfers the population from the initial state to the target state, a second interaction is applied which brings the system back to the initial state, the population of which is easy to measure and normalize. If the transition probability is p in the forward process, then classical intuition suggests that the probability to return to the initial state after the backward process should be p2. However, this classical expectation is generally misleading because it neglects interference effects. This paper presents a rigorous theoretical analysis based on the SU(2) and SU(3) symmetries of the propagators describing the evolution of quantum systems with two and three states, resulting in explicit analytic formulas that link the two-step probabilities to the single-step ones. Explicit examples are given with the popular techniques of rapid adiabatic passage and stimulated Raman adiabatic passage. The present results suggest that quantum-mechanical probabilities degrade faster in repeated processes than classical probabilities. Therefore, the actual single-pass efficiencies in various experiments, calculated from double-pass probabilities, might have been greater than the reported values.
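The gap between the classical p² intuition and the actual quantum double-pass probability can be seen in a few lines. The toy below (my own illustration with invented pulse parameters, not the paper's SU(2)/SU(3) formalism) applies the same resonant two-state pulse twice; for a single-pass transition probability p, the return probability works out to (1 − 2p)² rather than p²:

```python
import numpy as np

def rabi_propagator(theta):
    """Resonant two-state propagator: an SU(2) rotation by pulse area theta."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -1j * s], [-1j * s, c]])

p_target = 0.8                             # desired single-pass transfer probability
theta = 2 * np.arcsin(np.sqrt(p_target))   # pulse area giving that probability
U = rabi_propagator(theta)
p = abs(U[1, 0]) ** 2                      # forward transition probability

psi = U @ U @ np.array([1.0, 0.0])         # forward pass, then a second identical pass
p_return = abs(psi[0]) ** 2                # population back in the initial state

print(p, p_return, p ** 2)                 # quantum return differs from classical p**2
```

With identical pulses the amplitudes interfere: the return probability is cos²θ = (1 − 2p)², here 0.36 versus the classical expectation of 0.64, which is the kind of discrepancy the paper's analysis makes precise.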
Higher-order differential phase shift keyed modulation
NASA Astrophysics Data System (ADS)
Vanalphen, Deborah K.; Lindsey, William C.
1994-02-01
Advanced modulation/demodulation techniques which are robust in the presence of phase and frequency uncertainties continue to be of interest to communication engineers. We are particularly interested in techniques which accommodate slow channel phase and frequency variations with minimal performance degradation and which alleviate the need for phase and frequency tracking loops in the receiver. We investigate the performance sensitivity to frequency offsets of a modulation technique known as binary Double Differential Phase Shift Keying (DDPSK) and compare it to that of classical binary Differential Phase Shift Keying (DPSK). We also generalize our analytical results to include nth-order, M-ary DPSK. The DDPSK (n = 2) technique was first introduced in the Russian literature circa 1972 and was studied more thoroughly in the late 1970s by Pent and Okunev. Here, we present an expression for the symbol error probability that is easy to derive and to evaluate numerically. We also present graphical results that establish when, as a function of signal energy-to-noise ratio and normalized frequency offset, binary DDPSK is preferable to binary DPSK with respect to performance in additive white Gaussian noise. Finally, we provide insight into the optimum receiver from a detection theory viewpoint.
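The offset robustness of double-differential detection is easy to demonstrate: a constant frequency offset contributes a linear phase ramp, which drops out of a second-order phase difference. A minimal noise-free sketch (hypothetical parameters, not the receiver analyzed in the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
bits = rng.integers(0, 2, 200)

# Double-differential encoding: the bit rides on the SECOND phase difference,
# so a constant frequency offset (a linear phase ramp) cancels at the detector.
dpsi = np.cumsum(np.pi * bits)           # first-order phase increments
phase = np.concatenate(([0.0, 0.0], np.cumsum(dpsi)))

offset = 0.3                             # frequency offset, rad/symbol
k = np.arange(phase.size)
s = np.exp(1j * (phase + offset * k))    # noise-free received samples

# Second-order differential detector: s[k] * conj(s[k-1])^2 * s[k-2]
d2 = s[2:] * np.conj(s[1:-1]) ** 2 * s[:-2]
decoded = (d2.real < 0).astype(int)      # phase pi -> bit 1, phase 0 -> bit 0
```

Ordinary single-differential DPSK through the same channel would see its decision statistic rotated by the offset; the second difference removes the ramp entirely, at the cost of the noise penalty the paper quantifies.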
Foodomics: MS-based strategies in modern food science and nutrition.
Herrero, Miguel; Simó, Carolina; García-Cañas, Virginia; Ibáñez, Elena; Cifuentes, Alejandro
2012-01-01
Modern research in food science and nutrition is moving from classical methodologies to advanced analytical strategies in which MS-based techniques play a crucial role. In this context, Foodomics has been recently defined as a new discipline that studies food and nutrition domains through the application of advanced omics technologies in which MS techniques are considered indispensable. Applications of Foodomics include the genomic, transcriptomic, proteomic, and/or metabolomic study of foods for compound profiling, authenticity, and/or biomarker detection related to food quality or safety; the development of new transgenic foods; food contaminant and whole toxicity studies; and new investigations on food bioactivity, food effects on human health, etc. This review does not intend to provide an exhaustive survey of the many works published so far on food analysis using MS techniques. The aim of the present work is to provide an overview of the different MS-based strategies that have been (or can be) applied in the new field of Foodomics, discussing their advantages and drawbacks. In addition, some ideas about the foreseeable development and applications of MS techniques in this new discipline are provided. Copyright © 2011 Wiley Periodicals, Inc.
New robust bilinear least squares method for the analysis of spectral-pH matrix data.
Goicoechea, Héctor C; Olivieri, Alejandro C
2005-07-01
A new second-order multivariate method has been developed for the analysis of spectral-pH matrix data, based on a bilinear least-squares (BLLS) model achieving the second-order advantage and handling multiple calibration standards. A simulated Monte Carlo study of synthetic absorbance-pH data allowed comparison of the newly proposed BLLS methodology with constrained parallel factor analysis (PARAFAC) and with the combination multivariate curve resolution-alternating least-squares (MCR-ALS) technique under different conditions of sample-to-sample pH mismatch and analyte-background ratio. The results indicate an improved prediction ability for the new method. Experimental data generated by measuring absorption spectra of several calibration standards of ascorbic acid and samples of orange juice were subjected to second-order calibration analysis with PARAFAC, MCR-ALS, and the new BLLS method. The results indicate that the latter method provides the best analytical results in regard to analyte recovery in samples of complex composition requiring strict adherence to the second-order advantage. Linear dependencies appear when multivariate data are produced by using the pH or a reaction time as one of the data dimensions, posing a challenge to classical multivariate calibration models. The presently discussed algorithm is useful for these latter systems.
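The bilinear model at the heart of BLLS, PARAFAC, and MCR-ALS treats the data matrix as D = C·Sᵀ (concentration profiles times pure spectra). A generic alternating-least-squares sketch on synthetic absorbance-pH data (an illustration of the bilinear idea with invented profiles, not the authors' BLLS algorithm):

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic bilinear data D = C @ S.T: two acid-base species over a pH range
pH = np.linspace(2, 10, 40)
acid = 1 / (1 + 10 ** (pH - 5))                  # hypothetical pKa = 5
C = np.column_stack([acid, 1 - acid])            # concentration profiles
wl = np.linspace(0, 1, 60)                       # normalized wavelength axis
S = np.column_stack([np.exp(-((wl - 0.3) / 0.10) ** 2),
                     np.exp(-((wl - 0.6) / 0.15) ** 2)])  # pure spectra
D = C @ S.T

# Alternating least squares: refine each factor with the other held fixed
Ce = C + rng.normal(0, 0.05, C.shape)            # perturbed starting guess
for _ in range(50):
    Se = np.linalg.lstsq(Ce, D, rcond=None)[0].T
    Ce = np.linalg.lstsq(Se, D.T, rcond=None)[0].T
err = np.linalg.norm(D - Ce @ Se.T) / np.linalg.norm(D)
print(err)
```

On noise-free rank-2 data the reconstruction error drives to numerical zero, but the recovered factors are unique only up to rotational ambiguity; resolving that ambiguity (and handling linearly dependent profiles from pH or kinetic dimensions) is where constraints and the second-order advantage enter.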
Modeling of classical swirl injector dynamics
NASA Astrophysics Data System (ADS)
Ismailov, Maksud M.
The knowledge of the dynamics of a swirl injector is crucial in designing a stable liquid rocket engine. Since the swirl injector is a complex fluid flow device in itself, not much work has been conducted to describe its dynamics either analytically or by using computational fluid dynamics techniques. Even the experimental observation is limited up to date. Thus far, there exists an analytical linear theory by Bazarov [1], which is based on long-wave disturbances traveling on the free surface of the injector core. This theory does not account for variation of the nozzle reflection coefficient as a function of disturbance frequency, and yields a response function which is strongly dependent on the so called artificial viscosity factor. This causes an uncertainty in designing an injector for the given operational combustion instability frequencies in the rocket engine. In this work, the author has studied alternative techniques to describe the swirl injector response, both analytically and computationally. In the analytical part, by using the linear small perturbation analysis, the entire phenomenon of unsteady flow in swirl injectors is dissected into fundamental components, which are the phenomena of disturbance wave refraction and reflection, and vortex chamber resonance. This reveals the nature of flow instability and the driving factors leading to maximum injector response. In the computational part, by employing the nonlinear boundary element method (BEM), the author sets the boundary conditions such that they closely simulate those in the analytical part. The simulation results then show distinct peak responses at frequencies that are coincident with those resonant frequencies predicted in the analytical part. Moreover, a cold flow test of the injector related to this study also shows a clear growth of instability with its maximum amplitude at the first fundamental frequency predicted both by analytical methods and BEM. 
It should be noted, however, that Bazarov's theory does not predict the resonant peaks. Overall, this methodology provides a clearer understanding of the injector dynamics than Bazarov's. Even though an exact value of the response is not obtainable at this stage of theoretical, computational, and experimental investigation, this methodology sets the starting point from which the theoretical description of reflection/refraction, resonance, and their mutual interaction may be refined to higher order to obtain a more precise value.
Ariyama, Kaoru; Horita, Hiroshi; Yasui, Akemi
2004-09-22
The composition of concentration ratios of 19 inorganic elements to Mg (hereinafter referred to as 19-element/Mg composition) was applied to chemometric techniques to determine the geographic origin (Japan or China) of Welsh onions (Allium fistulosum L.). Using a composition of element ratios has the advantage of simplified sample preparation, and it was possible to determine the geographic origin of a Welsh onion within 2 days. The classical technique based on 20 element concentrations was also used along with the new simpler one based on 19 elements/Mg in order to validate the new technique. Twenty elements, Na, P, K, Ca, Mg, Mn, Fe, Cu, Zn, Sr, Ba, Co, Ni, Rb, Mo, Cd, Cs, La, Ce, and Tl, in 244 Welsh onion samples were analyzed by flame atomic absorption spectroscopy, inductively coupled plasma atomic emission spectrometry, and inductively coupled plasma mass spectrometry. Linear discriminant analysis (LDA) on the 20-element concentrations and the 19-element/Mg composition, and soft independent modeling of class analogy (SIMCA) on the 19-element/Mg composition, were applied to these analytical data. The results showed that techniques based on the 19-element/Mg composition were effective. LDA based on the 19-element/Mg composition, for classification of samples from Japan and from Shandong, Shanghai, and Fujian in China, classified 97% of the 101 modeling samples correctly and predicted 93% of another 119 samples (excluding 24 nonauthentic samples) correctly. In ten runs of SIMCA based on the 19-element/Mg composition, modeled using the 101 samples, 92% of 220 samples from known production areas (including the modeling samples and excluding the 24 nonauthentic samples) were predicted correctly.
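The discrimination step can be sketched with scikit-learn's LDA on synthetic stand-ins for the element-ratio features (the means, spreads, and sample counts below are invented, not the paper's data):

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(1)
# 19 hypothetical element/Mg ratio features per sample, two origins
origin_a = rng.normal(1.0, 0.2, size=(60, 19))   # stand-in for one origin
origin_b = rng.normal(1.3, 0.2, size=(60, 19))   # stand-in for the other
X = np.vstack([origin_a, origin_b])
y = np.array([0] * 60 + [1] * 60)

lda = LinearDiscriminantAnalysis().fit(X[::2], y[::2])  # model on half the samples
acc = lda.score(X[1::2], y[1::2])                       # predict the held-out half
print(acc)
```

As in the paper, the model is built on one subset and validated by prediction on held-out samples; with 19 moderately informative features the pooled discriminant direction separates the classes far better than any single ratio would.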
ERIC Educational Resources Information Center
Kohli, Nidhi; Koran, Jennifer; Henn, Lisa
2015-01-01
There are well-defined theoretical differences between the classical test theory (CTT) and item response theory (IRT) frameworks. It is understood that in the CTT framework, person and item statistics are test- and sample-dependent. This is not the perception with IRT. For this reason, the IRT framework is considered to be theoretically superior…
ERIC Educational Resources Information Center
Loughmiller-Newman, Jennifer Ann
2012-01-01
This dissertation presents a multidisciplinary means of determining the actual content (foodstuff, non-foodstuff, or lack of contents) of Classic Mayan (A.D. 250-900) vessels. Based on previous studies that have identified the residues of foodstuffs named in hieroglyphic texts (e.g. cacao), this study is designed to further investigate foodstuff…
Duration of classicality in highly degenerate interacting Bosonic systems
Sikivie, Pierre; Todarello, Elisa M.
2017-04-28
We study sets of oscillators that have high quantum occupancy and that interact by exchanging quanta. It is shown by analytical arguments and numerical simulation that such systems obey classical equations of motion only on time scales of the order of their relaxation time τ, and not longer. The results are relevant to the cosmology of axions and axion-like particles.
Data Analysis Techniques for Physical Scientists
NASA Astrophysics Data System (ADS)
Pruneau, Claude A.
2017-10-01
Preface; How to read this book; 1. The scientific method; Part I. Foundation in Probability and Statistics: 2. Probability; 3. Probability models; 4. Classical inference I: estimators; 5. Classical inference II: optimization; 6. Classical inference III: confidence intervals and statistical tests; 7. Bayesian inference; Part II. Measurement Techniques: 8. Basic measurements; 9. Event reconstruction; 10. Correlation functions; 11. The multiple facets of correlation functions; 12. Data correction methods; Part III. Simulation Techniques: 13. Monte Carlo methods; 14. Collision and detector modeling; List of references; Index.
Fate of classical solitons in one-dimensional quantum systems.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pustilnik, M.; Matveev, K. A.
We study one-dimensional quantum systems near the classical limit described by the Korteweg-de Vries (KdV) equation. The excitations near this limit are the well-known solitons and phonons. The classical description breaks down at long wavelengths, where quantum effects become dominant. Focusing on the spectra of the elementary excitations, we describe analytically the entire classical-to-quantum crossover. We show that the ultimate quantum fate of the classical KdV excitations is to become fermionic quasiparticles and quasiholes. We discuss in detail two exactly solvable models exhibiting such crossover, the Lieb-Liniger model of bosons with weak contact repulsion and the quantum Toda model, and argue that the results obtained for these models are universally applicable to all quantum one-dimensional systems with a well-defined classical limit described by the KdV equation.
Eigensystem analysis of classical relaxation techniques with applications to multigrid analysis
NASA Technical Reports Server (NTRS)
Lomax, Harvard; Maksymiuk, Catherine
1987-01-01
Classical relaxation techniques are related to numerical methods for solution of ordinary differential equations. Eigensystems for Point-Jacobi, Gauss-Seidel, and SOR methods are presented. Solution techniques such as eigenvector annihilation, eigensystem mixing, and multigrid methods are examined with regard to the eigenstructure.
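The flavor of this eigensystem analysis is easy to reproduce for the 1-D Poisson matrix, where the Point-Jacobi iteration matrix G = I − D⁻¹A has eigenvalues cos(kπ/(n+1)); the smoothest modes have eigenvalues near 1 and converge slowest, which is precisely the behavior multigrid exploits. A small numerical check of this standard textbook result (not code from the report):

```python
import numpy as np

n = 8
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)  # 1-D Poisson matrix
D = np.diag(np.diag(A))
G = np.eye(n) - np.linalg.inv(D) @ A                  # Point-Jacobi iteration matrix

eig = np.sort(np.linalg.eigvals(G).real)
expected = np.sort(np.cos(np.arange(1, n + 1) * np.pi / (n + 1)))
print(np.allclose(eig, expected))
```

The same machinery (form the iteration matrix, inspect its spectrum) extends to Gauss-Seidel and SOR, whose iteration matrices simply use different splittings of A.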
Cabrera-Barona, Pablo; Ghorbanzadeh, Omid
2018-01-16
Deprivation indices are useful measures to study health inequalities. Different techniques are commonly applied to construct deprivation indices, including multi-criteria decision methods such as the analytical hierarchy process (AHP). The multi-criteria deprivation index for the city of Quito is an index in which indicators are weighted by applying the AHP. In this research, a variation of this index is introduced that is calculated using interval AHP methodology. Both indices are compared by applying logistic generalized linear models and multilevel models, considering self-reported health as the dependent variable and deprivation and self-reported quality of life as the independent variables. The obtained results show that the multi-criteria deprivation index for the city of Quito is a meaningful measure to assess neighborhood effects on self-reported health and that the alternative deprivation index using the interval AHP methodology more thoroughly represents the local knowledge of experts and stakeholders. These differences could support decision makers in improving health planning and in tackling health inequalities in more deprived areas.
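The AHP weighting step reduces to a principal-eigenvector computation on a reciprocal pairwise-comparison matrix. A minimal sketch with an invented 3-indicator matrix (the actual index uses the authors' expert judgments, many more indicators, and an interval extension of AHP):

```python
import numpy as np

# Hypothetical reciprocal pairwise-comparison matrix (Saaty 1-9 scale)
P = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

vals, vecs = np.linalg.eig(P)
k = int(np.argmax(vals.real))
w = np.abs(vecs[:, k].real)
w /= w.sum()                                   # AHP priority weights, sum to 1
CI = (vals.real[k] - len(P)) / (len(P) - 1)    # consistency index
print(w, CI)
```

The consistency index (and the derived consistency ratio) flags incoherent judgment matrices; the interval-AHP variant discussed in the paper replaces the point comparisons with intervals to represent disagreement among experts.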
From classical to quantum mechanics: ``How to translate physical ideas into mathematical language''
NASA Astrophysics Data System (ADS)
Bergeron, H.
2001-09-01
Following previous works by E. Prugovečki [Physica A 91A, 202 (1978) and Stochastic Quantum Mechanics and Quantum Space-time (Reidel, Dordrecht, 1986)] on common features of classical and quantum mechanics, we develop a unified mathematical framework for classical and quantum mechanics (based on L2-spaces over classical phase space), in order to investigate to what extent quantum mechanics can be obtained as a simple modification of classical mechanics (on both logical and analytical levels). To obtain this unified framework, we split quantum theory into two parts: (i) general quantum axiomatics (a system is described by a state in a Hilbert space, observables are self-adjoint operators, and so on) and (ii) quantum mechanics proper that specifies the Hilbert space as L2(Rn); the Heisenberg rule [p_i, q_j] = -iℏδ_ij with p = -iℏ∇, the free Hamiltonian H = -ℏ2Δ/2m, and so on. We show that general quantum axiomatics (up to a supplementary "axiom of classicity") can be used as a nonstandard mathematical ground to formulate physical ideas and equations of ordinary classical statistical mechanics. So, the question of a "true quantization" with "ℏ" must be seen as an independent physical problem not directly related with quantum formalism. At this stage, we show that this nonstandard formulation of classical mechanics exhibits a new kind of operation that has no classical counterpart: this operation is related to the "quantization process," and we show why quantization physically depends on group theory (the Galilei group). This analytical procedure of quantization replaces the "correspondence principle" (or canonical quantization) and allows us to map classical mechanics into quantum mechanics, giving all operators of quantum dynamics and the Schrödinger equation. The great advantage of this point of view is that quantization is based on concrete physical arguments and not derived from some "pure algebraic rule" (we also exhibit some limits of the correspondence principle).
Moreover spins for particles are naturally generated, including an approximation of their interaction with magnetic fields. We also recover by this approach the semi-classical formalism developed by E. Prugovečki [Stochastic Quantum Mechanics and Quantum Space-time (Reidel, Dordrecht, 1986)].
Singular Hopf bifurcation in a differential equation with large state-dependent delay
Kozyreff, G.; Erneux, T.
2014-01-01
We study the onset of sustained oscillations in a classical state-dependent delay (SDD) differential equation inspired by control theory. Owing to the large delays considered, the Hopf bifurcation is singular and the oscillations rapidly acquire a sawtooth profile past the instability threshold. Using asymptotic techniques, we explicitly capture the gradual change from nearly sinusoidal to sawtooth oscillations. The dependence of the delay on the solution can be either linear or nonlinear, with at least quadratic dependence. In the former case, an asymptotic connection is made with the Rayleigh oscillator. In the latter, van der Pol’s equation is derived for the small-amplitude oscillations. SDD differential equations are currently the subject of intense research in order to establish or amend general theorems valid for constant-delay differential equation, but explicit analytical construction of solutions are rare. This paper illustrates the use of singular perturbation techniques and the unusual way in which solvability conditions can arise for SDD problems with large delays. PMID:24511255
Stepwise Iterative Fourier Transform: The SIFT
NASA Technical Reports Server (NTRS)
Benignus, V. A.; Benignus, G.
1975-01-01
A program, designed specifically to study the respective effects of some common data problems on results obtained through stepwise iterative Fourier transformation of synthetic data with known waveform composition, was outlined. Included in this group were the problems of gaps in the data, different time-series lengths, periodic but nonsinusoidal waveforms, and noisy (low signal-to-noise) data. Results on sinusoidal data were also compared with results obtained on narrow-band noise with similar characteristics. The findings showed that the analytic procedure under study can reliably reduce data consisting of (1) sinusoids in noise, (2) asymmetric but periodic waves in noise, and (3) sinusoids in noise with substantial gaps in the data. The program was also able to analyze narrow-band noise well, but with increased interpretational problems. The procedure was shown to be a powerful technique for analysis of periodicities, in comparison with classical spectrum analysis techniques. However, informed use of the stepwise procedure nevertheless requires some background knowledge concerning characteristics of the biological processes under study.
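The stepwise-iterative idea (pick the dominant periodogram peak, least-squares fit a sinusoid at that frequency, subtract it, and repeat on the residual) can be sketched as follows, on a synthetic two-tone signal rather than the original program's data:

```python
import numpy as np

def sift_step(t, x, freqs):
    """One stepwise iteration: locate the periodogram peak, least-squares fit
    a sinusoid at that frequency, and return (frequency, residual)."""
    power = [abs(np.sum(x * np.exp(-2j * np.pi * f * t))) ** 2 for f in freqs]
    f0 = freqs[int(np.argmax(power))]
    A = np.column_stack([np.cos(2 * np.pi * f0 * t), np.sin(2 * np.pi * f0 * t)])
    coef, *_ = np.linalg.lstsq(A, x, rcond=None)
    return f0, x - A @ coef

t = np.arange(256) / 256
x = 2 * np.sin(2 * np.pi * 8 * t) + 0.5 * np.cos(2 * np.pi * 21 * t)
freqs = np.arange(1, 60, 0.5)                # candidate frequency grid

f1, r1 = sift_step(t, x, freqs)              # strongest component first
f2, r2 = sift_step(t, r1, freqs)             # then the weaker one
print(f1, f2)
```

Because each fitted component is removed by least squares rather than by a fixed FFT bin, the procedure tolerates gaps and uneven records better than classical spectrum analysis, which is the comparison the study draws.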
A stabilized element-based finite volume method for poroelastic problems
NASA Astrophysics Data System (ADS)
Honório, Hermínio T.; Maliska, Clovis R.; Ferronato, Massimiliano; Janna, Carlo
2018-07-01
The coupled equations of Biot's poroelasticity, consisting of stress equilibrium and fluid mass balance in deforming porous media, are numerically solved. The governing partial differential equations are discretized by an Element-based Finite Volume Method (EbFVM), which can be used in three-dimensional unstructured grids composed of elements of different types. One of the difficulties in solving these equations is the numerical pressure instability that can arise when undrained conditions take place. In this paper, a stabilization technique is developed to overcome this problem by employing an interpolation function for displacements that also considers the pressure gradient effect. The interpolation function is obtained by the so-called Physical Influence Scheme (PIS), typically employed for solving incompressible fluid flows governed by the Navier-Stokes equations. Classical problems with analytical solutions, as well as three-dimensional realistic cases, are addressed. The results reveal that the proposed stabilization technique is able to eliminate the spurious pressure instabilities arising under undrained conditions at a low computational cost.
Estimating and Comparing Dam Deformation Using Classical and GNSS Techniques.
Barzaghi, Riccardo; Cazzaniga, Noemi Emanuela; De Gaetani, Carlo Iapige; Pinto, Livio; Tornatore, Vincenza
2018-03-02
Global Navigation Satellite Systems (GNSS) receivers are nowadays commonly used in monitoring applications, e.g., in estimating crustal and infrastructure displacements. This is basically due to the recent improvements in GNSS instruments and methodologies that allow high-precision positioning, 24 h availability, and semiautomatic data processing. In this paper, GNSS-estimated displacements on a dam structure have been analyzed and compared with pendulum data. This study has been carried out for the Eleonora D'Arborea (Cantoniera) dam in Sardinia. Time series of pendulum and GNSS data over a time span of 2.5 years have been aligned so as to be comparable. Analytical models fitting these time series have been estimated and compared. Those models were able to properly fit both pendulum and GNSS data, with standard deviations of residuals smaller than one millimeter. These encouraging results led to the conclusion that the GNSS technique can be profitably applied to dam monitoring, allowing a denser description of the dam displacements, both in space and time, than one based on pendulum observations.
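The model-fitting step can be sketched as an ordinary least-squares fit of an assumed seasonal-plus-drift model to a synthetic displacement series (the amplitudes, noise level, and sampling below are invented, chosen only to echo the 2.5-year window and sub-millimeter residuals):

```python
import numpy as np

rng = np.random.default_rng(2)
t = np.linspace(0, 2.5, 300)                      # time in years
true = 1.5 * np.sin(2 * np.pi * t) + 0.4 * t      # annual term + slow drift (mm)
gnss = true + rng.normal(0, 0.3, t.size)          # noisy GNSS-like displacements

# Least-squares fit of the assumed model a*sin + b*cos + c*t + d
A = np.column_stack([np.sin(2 * np.pi * t), np.cos(2 * np.pi * t),
                     t, np.ones_like(t)])
coef, *_ = np.linalg.lstsq(A, gnss, rcond=None)
resid = gnss - A @ coef
print(resid.std())
```

Fitting the same functional form to both the pendulum and GNSS series, then comparing coefficients and residual standard deviations, is the essence of the comparison the paper reports.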
Visualizing nD Point Clouds as Topological Landscape Profiles to Guide Local Data Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Oesterling, Patrick; Heine, Christian; Weber, Gunther H.
2012-05-04
Analyzing high-dimensional point clouds is a classical challenge in visual analytics. Traditional techniques, such as projections or axis-based techniques, suffer from projection artifacts, occlusion, and visual complexity. We propose to split data analysis into two parts to address these shortcomings. First, a structural overview phase abstracts data by its density distribution. This phase performs topological analysis to support accurate and non-overlapping presentation of the high-dimensional cluster structure as a topological landscape profile. Utilizing a landscape metaphor, it presents clusters and their nesting as hills whose height, width, and shape reflect cluster coherence, size, and stability, respectively. A second local analysis phase utilizes this global structural knowledge to select individual clusters or point sets for further, localized data analysis. Focusing on structural entities significantly reduces visual clutter in established geometric visualizations and permits a clearer, more thorough data analysis. In conclusion, this analysis complements the global topological perspective and enables the user to study subspaces or geometric properties, such as shape.
Ślączka-Wilk, Magdalena M; Włodarczyk, Elżbieta; Kaleniecka, Aleksandra; Zarzycki, Paweł K
2017-07-01
There is increasing interest in the development of simple analytical systems enabling the fast screening of target components in complex samples. A number of newly invented protocols are based on quasi-separation techniques involving microfluidic paper-based analytical devices and/or micro total analysis systems. Under such conditions, the quantification of target components can be performed mainly due to selective detection. The main goal of this paper is to demonstrate that miniaturized planar chromatography has the capability to work as an efficient separation and quantification tool for the analysis of multiple targets within complex environmental samples isolated and concentrated using an optimized SPE method. In particular, we analyzed various samples collected from surface water ecosystems (lakes, rivers, and the Baltic Sea of Middle Pomerania in the northern part of Poland) in different seasons, as well as samples collected during key wastewater technological processes (originating from the "Jamno" wastewater treatment plant in Koszalin, Poland). We documented that the multiple detection of chromatographic spots on RP-18W microplates (under visible light, fluorescence, and fluorescence-quenching conditions, and using the visualization reagent phosphomolybdic acid) enables fast and robust sample classification. The presented data reveal that the proposed micro-TLC system is useful, inexpensive, and can be considered as a complementary method for the fast control of treated sewage water discharged by a municipal wastewater treatment plant, particularly for the detection of low-molecular-mass micropollutants with polarity ranging from estetrol to progesterone, as well as chlorophyll-related dyes. Due to the low consumption of mobile phases composed of water-alcohol binary mixtures (less than 1 mL/run for the simultaneous separation of up to nine samples), this method can be considered an environmentally friendly and green chemistry analytical tool.
The described analytical protocol can be complementary to those involving classical column chromatography (HPLC) or various planar microfluidic devices.
Multivariable Hermite polynomials and phase-space dynamics
NASA Technical Reports Server (NTRS)
Dattoli, G.; Torre, Amalia; Lorenzutta, S.; Maino, G.; Chiccoli, C.
1994-01-01
The phase-space approach to classical and quantum systems demands advanced analytical tools. Such an approach characterizes the evolution of a physical system through a set of variables, reducing to the canonically conjugate variables in the classical limit. It often happens that phase-space distributions can be written in terms of quadratic forms involving these variables. A significant analytical tool for treating such problems comes from the generalized many-variable Hermite polynomials, defined on quadratic forms in R^n. They form an orthonormal system in many dimensions and seem the natural tool for treating harmonic-oscillator dynamics in phase space. In this contribution we discuss the properties of these polynomials and present some applications to physical problems.
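As a concrete point of reference, the one-variable (physicists') Hermite polynomials that these many-variable polynomials generalize can be evaluated with the standard three-term recurrence; a minimal sketch (the function name is ours):

```python
def hermite(n, x):
    """Physicists' Hermite polynomial H_n(x) via the three-term
    recurrence H_{k+1}(x) = 2x H_k(x) - 2k H_{k-1}(x)."""
    h_prev, h_curr = 1.0, 2.0 * x  # H_0 and H_1
    if n == 0:
        return h_prev
    for k in range(1, n):
        h_prev, h_curr = h_curr, 2.0 * x * h_curr - 2.0 * k * h_prev
    return h_curr
```

For example, `hermite(3, x)` reproduces H_3(x) = 8x^3 - 12x.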
ERIC Educational Resources Information Center
Ginway, M. Elizabeth
2013-01-01
This study focuses on some of the classical features of Rubem Fonseca's "A grande arte" (1983) in order to emphasize the puzzle-solving tradition of the detective novel that is embedded within Fonseca's crime thriller, producing a work that does not entirely fit into traditional divisions of detective, hardboiled, or crime…
ERIC Educational Resources Information Center
Helms, LuAnn Sherbeck
This paper discusses the fact that reliability is about scores and not tests and how reliability limits effect sizes. The paper also explores the classical reliability coefficients of stability, equivalence, and internal consistency. Stability is concerned with how stable test scores will be over time, while equivalence addresses the relationship…
Torres-Climent, A; Gomis, P; Martín-Mata, J; Bustamante, M A; Marhuenda-Egea, F C; Pérez-Murcia, M D; Pérez-Espinosa, A; Paredes, C; Moral, R
2015-01-01
The objective of this work was to study the co-composting of wastes from the winery and distillery industry with animal manures, using the classical chemical methods traditionally employed in composting studies together with advanced instrumental methods (thermal analysis, FT-IR, and CPMAS 13C NMR techniques), to evaluate the development of the process and the quality of the end-products obtained. For this, three piles were prepared by the turning composting system, using winery-distillery wastes (grape marc and exhausted grape marc) and animal manures (cattle manure and poultry manure) as raw materials. The classical analytical methods showed a suitable development of the process in all the piles, but were ineffective for studying the humification process during the composting of this type of material. Their combination with the advanced instrumental techniques, however, clearly provided more information regarding the turnover of the organic matter pools during the composting process. Thermal analysis made it possible to estimate the degradability of the remaining material and to assess qualitatively the rate of OM stabilization and the recalcitrant C in the compost samples, based on the energy required to achieve the same mass losses. FT-IR spectra mainly showed variations between piles and sampling times in the bands associated with complex organic compounds (mainly at 1420 and 1540 cm-1) and with nitrate and inorganic components (at 875 and 1384 cm-1, respectively), indicating the stability and maturity of the composted material; CPMAS 13C NMR provided a semi-quantitative partition of C compounds and structures during the process, their variation being especially useful for evaluating the biotransformation of each C pool, in particular the comparison of recalcitrant versus labile C pools, such as the Alkyl/O-Alkyl ratio.
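The Alkyl/O-Alkyl ratio mentioned above is obtained by integrating the corresponding chemical-shift regions of the 13C NMR spectrum (roughly 0-45 ppm for alkyl C and 45-110 ppm for O-alkyl C). A minimal sketch on a synthetic spectrum (peak positions, widths, and amplitudes are illustrative, not the paper's data):

```python
import numpy as np

ppm = np.linspace(0.0, 200.0, 2001)
dppm = ppm[1] - ppm[0]
# synthetic CPMAS 13C NMR spectrum: alkyl, O-alkyl, and aromatic bands
spectrum = (1.0 * np.exp(-((ppm - 30.0) / 8.0) ** 2)       # alkyl C
            + 1.5 * np.exp(-((ppm - 72.0) / 10.0) ** 2)    # O-alkyl C
            + 0.4 * np.exp(-((ppm - 130.0) / 12.0) ** 2))  # aromatic C

def region_area(lo, hi):
    """Integrate the spectrum over a chemical-shift window (ppm)."""
    mask = (ppm >= lo) & (ppm < hi)
    return float(spectrum[mask].sum() * dppm)

alkyl_oalkyl = region_area(0.0, 45.0) / region_area(45.0, 110.0)
```

A decreasing ratio over composting time would indicate preferential loss of labile O-alkyl C relative to recalcitrant alkyl C.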
Numerical Asymptotic Solutions Of Differential Equations
NASA Technical Reports Server (NTRS)
Thurston, Gaylen A.
1992-01-01
Numerical algorithms were derived and compared with classical analytical methods. In this method, asymptotic expansions are replaced with integrals that are evaluated numerically. The resulting numerical solutions retain linear independence, the main advantage of asymptotic solutions.
The range and valence of a real Smirnov function
NASA Astrophysics Data System (ADS)
Ferguson, Timothy; Ross, William T.
2018-02-01
We give a complete description of the possible ranges of real Smirnov functions (quotients of two bounded analytic functions on the open unit disk where the denominator is outer and such that the radial boundary values are real almost everywhere on the unit circle). Our techniques use the theory of unbounded symmetric Toeplitz operators, some general theory of unbounded symmetric operators, classical Hardy spaces, and an application of the uniformization theorem. In addition, we completely characterize the possible valences for these real Smirnov functions when the valence is finite. To do so we construct Riemann surfaces we call disk trees by welding together copies of the unit disk and its complement in the Riemann sphere. We also make use of certain trees we call valence trees that mirror the structure of disk trees.
Analytical description of the modern steam automobile
NASA Technical Reports Server (NTRS)
Peoples, J. A.
1974-01-01
The sensitivity of the performance of the modern steam automobile to operating conditions is discussed. The word modern is used in the title to indicate that the emphasis is on miles per gallon rather than theoretical thermal efficiency. This is accomplished by combining classical power analysis with the ideal pressure-volume diagram. Several parameters are derived that characterize the performance capability of the modern steam car. The report illustrates that performance is dictated by the characteristics of the working medium and the supply temperature, and is nearly independent of pressures above 800 psia. The analysis techniques were developed specifically for reciprocating steam engines suitable for automotive application. Specific performance charts have been constructed on the basis of water as the working medium; the conclusions and data interpretation are therefore limited to this scope.
Extending the Distributed Lag Model framework to handle chemical mixtures.
Bello, Ghalib A; Arora, Manish; Austin, Christine; Horton, Megan K; Wright, Robert O; Gennings, Chris
2017-07-01
Distributed Lag Models (DLMs) are used in environmental health studies to analyze the time-delayed effect of an exposure on an outcome of interest. Given the increasing need for analytical tools for evaluation of the effects of exposure to multi-pollutant mixtures, this study attempts to extend the classical DLM framework to accommodate and evaluate multiple longitudinally observed exposures. We introduce two techniques for quantifying the time-varying mixture effect of multiple exposures on an outcome of interest. Lagged WQS, the first technique, is based on Weighted Quantile Sum (WQS) regression, a penalized regression method that estimates mixture effects using a weighted index. We also introduce Tree-based DLMs, a nonparametric alternative for assessment of lagged mixture effects. This technique is based on the Random Forest (RF) algorithm, a nonparametric, tree-based estimation technique that has shown excellent performance in a wide variety of domains. In a simulation study, we tested the feasibility of these techniques and evaluated their performance in comparison to standard methodology. Both methods exhibited relatively robust performance, accurately capturing pre-defined non-linear functional relationships in different simulation settings. Further, we applied these techniques to data on perinatal exposure to environmental metal toxicants, with the goal of evaluating the effects of exposure on neurodevelopment. Our methods identified critical neurodevelopmental windows showing significant sensitivity to metal mixtures. Copyright © 2017 Elsevier Inc. All rights reserved.
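The classical DLM that the authors extend regresses the outcome on the same exposure observed at several lags, one coefficient per lag. A minimal single-exposure sketch using ordinary least squares on simulated data (sample size and coefficients are arbitrary, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
n_obs, n_lags = 500, 6
X = rng.normal(size=(n_obs, n_lags))             # exposure at lags 0..5
beta = np.array([0.5, 0.4, 0.2, 0.0, 0.0, 0.0])  # effect decays with lag
y = X @ beta + rng.normal(scale=0.1, size=n_obs)

# distributed lag regression: intercept plus one coefficient per lag
Xd = np.column_stack([np.ones(n_obs), X])
coef, *_ = np.linalg.lstsq(Xd, y, rcond=None)
lag_effects = coef[1:]
```

The fitted `lag_effects` trace out the lag-response curve; the paper's contribution is to replace the single exposure column per lag with a weighted index (Lagged WQS) or a random forest (Tree-based DLM) over a mixture of exposures.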
Self psychology as a shift away from the paranoid strain in classical analytic theory.
Terman, David M
2014-12-01
Classical psychoanalytic theory has a paranoid strain. There is, in effect, an "evil other"--the id--within each individual that must be tamed in development and confronted and worked through as resistance in treatment. Historically, this has engendered an adversarial relationship between patient and analyst. This paranoid strain came from a paranoid element in Freud's personality that affected his worldview, his relationships, and his theory. Self psychology offers a different view of development and conflict. It stresses the child's need for responsiveness from and admiration of caretakers in order to develop a well-functioning self. Though severe behavioral and character problems may result from faults in the process of self-construction, the essential need is not instinctual discharge but connection. Hence the long-assumed opposition between individual needs and social institutions, or between patient and analyst, is no longer inevitable or universal. Rather, an understanding of the primary need for connection creates both a different interpretive stance and a more cooperative ambience. These changes in theory and technique are traced to Kohut's personal struggles to emancipate himself from his paranoid mother. © 2014 by the American Psychoanalytic Association.
Olmo, B; García, A; Marín, A; Barbas, C
2005-03-25
The development of new pharmaceutical forms with classical active compounds generates new analytical problems. This is the case for sugar-free sachets of cough-cold products containing acetaminophen, phenylephrine hydrochloride, and chlorpheniramine maleate. Two cyanopropyl stationary phases have been employed to tackle the problem. The Discovery cyanopropyl (SUPELCO) column permitted the separation of the three actives, maleate, and the excipients (mainly saccharin and orange flavour) with a constant proportion of aqueous/organic solvent (95:5, v/v) and a pH gradient from 7.5 to 2. The run lasted 14 min. This technique avoids many problems related to baseline shifts with classical organic solvent gradients and opens up possibilities for modifying selectivity that are not generally exploited in reversed-phase HPLC. On the other hand, the Agilent Zorbax SB-CN column, with a different retention profile, permitted us to separate not only the three actives and the excipients but also the three known related compounds, 4-aminophenol, 4-chloroacetanilide, and 4-nitrophenol, in an isocratic method with a run time under 30 min. This method was validated following ICH guidelines, and the validation parameters showed that it could be employed as a stability-indicating method for this pharmaceutical form.
NASA Astrophysics Data System (ADS)
Kumar, Dinesh; Singh, Surjan; Rai, K. N.
2016-06-01
In this paper, the temperature distribution in a finite biological tissue in the presence of metabolic and external heat sources is studied when the surface is subjected to different types of boundary conditions. Classical Fourier, single-phase-lag (SPL), and dual-phase-lag (DPL) models were developed for bio-heat transfer in biological tissues. Analytical solutions were obtained for all three models using the Laplace transform technique, and the results are compared. The effects of varying parameters such as the relaxation time, the metabolic heat source, the spatial heat source, and the type of boundary condition on the temperature distribution in different tissues (muscle, tumor, fat, dermis, and subcutaneous tissue) are analyzed and discussed in detail for the three models. The results are compared with the experimental observations of Stolwijk and Hardy (Pflug Arch 291:129-162, 1966). It is observed that the DPL bio-heat transfer model provides better results than the other two models. The values of the metabolic and spatial heat sources under boundary conditions of the first, second, and third kind are evaluated for different types of thermal therapy.
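For orientation, the classical (Fourier) limit of the models compared above is the steady Pennes bio-heat equation, k T'' - w_b c_b (T - T_a) + q_m = 0. Since the paper's Laplace-transform solution is lengthy, here is a hedged finite-difference sketch under first-kind (Dirichlet) boundary conditions, with order-of-magnitude tissue parameters of our own choosing:

```python
import numpy as np

# assumed tissue parameters (W/m/K, W/m^3/K, degC, W/m^3), not the paper's values
k, wb_cb, T_a, q_m = 0.5, 2000.0, 37.0, 420.0
L, N = 0.03, 101                       # 3 cm tissue slab, grid points
x = np.linspace(0.0, L, N)
h = x[1] - x[0]

# assemble the tridiagonal system for k T'' - wb_cb (T - T_a) + q_m = 0
A = np.zeros((N, N))
b = np.zeros(N)
A[0, 0], b[0] = 1.0, 37.0              # core held at body temperature
A[-1, -1], b[-1] = 1.0, 25.0           # cooled surface (first-kind condition)
for i in range(1, N - 1):
    A[i, i - 1] = A[i, i + 1] = k / h**2
    A[i, i] = -2.0 * k / h**2 - wb_cb
    b[i] = -wb_cb * T_a - q_m
T = np.linalg.solve(A, b)
```

Blood perfusion pulls the interior toward the arterial temperature plus q_m/(w_b c_b), while the cooled surface imposes a boundary layer; the SPL and DPL models modify the transient, not this steady profile.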
Delaby, Constance; Gabelle, Audrey; Meynier, Philippe; Loubiere, Vincent; Vialaret, Jérôme; Tiers, Laurent; Ducos, Jacques; Hirtz, Christophe; Lehmann, Sylvain
2014-05-01
The use of dried blood spots on filter paper is well documented as an affordable and practical alternative to classical venous sampling for various clinical needs. This technique has indeed many advantages in terms of collection, biological safety, storage, and shipment. Amyloid β (Aβ) peptides are useful cerebrospinal fluid (CSF) biomarkers for Alzheimer disease diagnosis. However, Aβ determination is hindered by preanalytical difficulties in terms of sample collection and stability in tubes. We compared the quantification of Aβ peptides (1-40, 1-42, and 1-38) by simplex and multiplex ELISA, following either a standard operator method (liquid direct quantification) or after spotting CSF onto dried matrix paper card. The use of dried matrix spot (DMS) overcame preanalytical problems and allowed the determination of Aβ concentrations that were highly commutable (Bland-Altman) with those obtained using CSF in classical tubes. Moreover, we found a positive and significant correlation (r2=0.83, Pearson coefficient p=0.0329) between the two approaches. This new DMS method for CSF represents an interesting alternative that increases the quality and efficiency in preanalytics. This should enable the better exploitation of Aβ analytes for Alzheimer's diagnosis.
Management of Type II Odontoid Fracture for Osteoporotic Bone Structure: Preliminary Report.
Cosar, Murat; Ozer, A Fahir; Alkan, Bahadır; Guven, Mustafa; Akman, Tarık; Aras, Adem Bozkurt; Ceylan, Davut; Tokmak, Mehmet
2015-01-01
Anterior transodontoid screw fixation is generally chosen for the management of type II odontoid fractures. Nonunion of type II odontoid fractures is still a major problem, especially in elderly and osteoporotic patients. Eleven osteoporotic patients with type II odontoid fractures are presented in this article. We divided the 11 patients into two groups: the classical technique and Ozer's technique. We also compared, radiologically and clinically, the classical anterior transodontoid screw fixation (group II: 6 cases) and Ozer's transodontoid screw fixation technique (group I: 5 cases) retrospectively. There was no difference regarding the clinical features of the groups. However, the radiological results showed 100% fusion for Ozer's screw fixation technique and 83% fusion for the classical screw fixation technique. In conclusion, we suggest that Ozer's technique may help to increase the fusion capacity for osteoporotic type II odontoid fractures.
Classical and quantum theories of proton disorder in hexagonal water ice
NASA Astrophysics Data System (ADS)
Benton, Owen; Sikora, Olga; Shannon, Nic
2016-03-01
It has been known since the pioneering work of Bernal, Fowler, and Pauling that common, hexagonal (Ih) water ice is the archetype of a frustrated material: a hydrogen-bonded network in which protons satisfy strong local constraints (the "ice rules") but do not order. While this proton disorder is well established, there is now a growing body of evidence that quantum effects may also have a role to play in the physics of ice at low temperatures. In this paper, we use a combination of numerical and analytic techniques to explore the nature of proton correlations in both classical and quantum models of ice Ih. In the case of classical ice Ih, we find that the ice rules have two distinct consequences for scattering experiments: singular "pinch points," reflecting a zero-divergence condition on the uniform polarization of the crystal, and broad, asymmetric features, coming from its staggered polarization. In the case of the quantum model, we find that the collective quantum tunneling of groups of protons can convert states obeying the ice rules into a quantum liquid, whose excitations are birefringent, emergent photons. We make explicit predictions for scattering experiments on both classical and quantum ice Ih, and show how the quantum theory can explain the "wings" of incoherent inelastic scattering observed in recent neutron scattering experiments [Bove et al., Phys. Rev. Lett. 103, 165901 (2009), 10.1103/PhysRevLett.103.165901]. These results raise the intriguing possibility that the protons in ice Ih could form a quantum liquid at low temperatures, in which protons are not merely disordered, but continually fluctuate between different configurations obeying the ice rules.
Gore, Christopher J; Sharpe, Ken; Garvican-Lewis, Laura A; Saunders, Philo U; Humberstone, Clare E; Robertson, Eileen Y; Wachsmuth, Nadine B; Clark, Sally A; McLean, Blake D; Friedmann-Bette, Birgit; Neya, Mitsuo; Pottgiesser, Torben; Schumacher, Yorck O; Schmidt, Walter F
2013-01-01
Objective To characterise the time course of changes in haemoglobin mass (Hbmass) in response to altitude exposure. Methods This meta-analysis uses raw data from 17 studies that used carbon monoxide rebreathing to determine Hbmass prealtitude, during altitude and postaltitude. Seven studies were classic altitude training, eight were live high train low (LHTL) and two mixed classic and LHTL. Separate linear-mixed models were fitted to the data from the 17 studies and the resultant estimates of the effects of altitude used in a random effects meta-analysis to obtain an overall estimate of the effect of altitude, with separate analyses during altitude and postaltitude. In addition, within-subject differences from the prealtitude phase for altitude participant and all the data on control participants were used to estimate the analytical SD. The ‘true’ between-subject response to altitude was estimated from the within-subject differences on altitude participants, between the prealtitude and during-altitude phases, together with the estimated analytical SD. Results During-altitude Hbmass was estimated to increase by ∼1.1%/100 h for LHTL and classic altitude. Postaltitude Hbmass was estimated to be 3.3% higher than prealtitude values for up to 20 days. The within-subject SD was constant at ∼2% for up to 7 days between observations, indicative of analytical error. A 95% prediction interval for the ‘true’ response of an athlete exposed to 300 h of altitude was estimated to be 1.1–6%. Conclusions Camps as short as 2 weeks of classic and LHTL altitude will quite likely increase Hbmass and most athletes can expect benefit. PMID:24282204
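The dose-response reported above is straightforward to apply: at ~1.1% per 100 h of altitude exposure, a 300 h camp predicts roughly a 3.3% mean increase in Hbmass, matching the postaltitude estimate. A one-line sketch of that arithmetic (the function name is ours):

```python
def expected_hbmass_gain(hours_at_altitude, rate_per_100h=1.1):
    """Mean Hbmass increase (%) implied by the meta-analytic
    dose of ~1.1% per 100 h at altitude."""
    return rate_per_100h * hours_at_altitude / 100.0

gain = expected_hbmass_gain(300)  # ~2-week classic or LHTL camp
```

Note that this is only the mean response; the abstract's 95% prediction interval for an individual athlete at 300 h spans roughly 1.1-6%.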
Analytical techniques for steroid estrogens in water samples - A review.
Fang, Ting Yien; Praveena, Sarva Mangala; deBurbure, Claire; Aris, Ahmad Zaharin; Ismail, Sharifah Norkhadijah Syed; Rasdi, Irniza
2016-12-01
In recent years, environmental concerns over ultra-trace levels of steroid estrogens in water samples have increased because of their adverse effects on human and animal life. Special attention to the analytical techniques used to quantify steroid estrogens in water samples is therefore increasingly important. The objective of this review was to present an overview of both instrumental and non-instrumental analytical techniques available for the determination of steroid estrogens in water samples, evidencing their respective potential advantages and limitations using the Need, Approach, Benefit, and Competition (NABC) approach. The techniques highlighted in this review were gas chromatography mass spectrometry (GC-MS), liquid chromatography mass spectrometry (LC-MS), enzyme-linked immunosorbent assay (ELISA), radioimmunoassay (RIA), the yeast estrogen screen (YES) assay, and the human breast cancer cell line proliferation (E-screen) assay. The complexity of water samples and their low estrogenic concentrations necessitate the use of highly sensitive instrumental analytical techniques (GC-MS and LC-MS) and non-instrumental analytical techniques (ELISA, RIA, YES assay, and E-screen assay) to quantify steroid estrogens. Both instrumental and non-instrumental analytical techniques have their own advantages and limitations. However, the non-instrumental ELISA technique, thanks to its lower detection limit, simplicity, rapidity, and cost-effectiveness, currently appears to be the most reliable for determining steroid estrogens in water samples. Copyright © 2016 Elsevier Ltd. All rights reserved.
Langmuir probe analysis in electronegative plasmas
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bredin, Jerome, E-mail: jerome.bredin@lpp.polytechnique.fr; Chabert, Pascal; Aanesland, Ane
2014-12-15
This paper compares two methods to analyze Langmuir probe data obtained in electronegative plasmas. The techniques are developed to allow investigations in plasmas where the electronegativity α0 = n−/ne (the ratio between the negative ion and electron densities) varies strongly. The first technique uses an analytical model to express the Langmuir probe current-voltage (I-V) characteristic and its second derivative as a function of the electron and ion densities (ne, n+, n−), temperatures (Te, T+, T−), and masses (me, m+, m−). The analytical curves are fitted to the experimental data by adjusting these variables and parameters. To reduce the number of fitted parameters, the ion masses are assumed constant within the source volume, and quasi-neutrality is assumed everywhere. In this theory, Maxwellian distributions are assumed for all charged species. We show that this data analysis can predict the various plasma parameters within 5-10%, including the ion temperatures when α0 > 100. However, the method is tedious, time consuming, and requires a precise measurement of the energy distribution function. A second technique is therefore developed for easier access to the electron and ion densities, but does not give access to the ion temperatures. Here, only the measured I-V characteristic is needed. The electron density, temperature, and ion saturation current for positive ions are determined by classical probe techniques. The electronegativity α0 and the ion densities are deduced via an iterative method since these variables are coupled via the modified Bohm velocity. For both techniques, a Child-law sheath model for cylindrical probes has been developed and is presented to emphasize the importance of this model for small cylindrical Langmuir probes.
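The iterative second technique can be sketched as a fixed-point loop: guess α0, form the modified Bohm velocity for an electronegative plasma, u_B = sqrt(e·Te/M · (1+α0)/(1+γα0)) with γ = Te/T−, recover n+ from the ion saturation current, and update α0 = n+/ne − 1 via quasi-neutrality. All numbers below are illustrative assumptions, not the paper's data:

```python
import math

e = 1.602e-19          # elementary charge, C
M = 6.6e-26            # assumed positive-ion mass, kg (Ar+-like)
Te, gamma = 2.0, 15.0  # electron temperature (eV) and Te/T- ratio (assumed)
ne = 1.0e16            # m^-3, electron density from classical probe analysis
area = 1.0e-6          # m^2, probe collection area (assumed)

def bohm_velocity(alpha):
    """Modified Bohm velocity for an electronegative plasma."""
    return math.sqrt(e * Te / M * (1.0 + alpha) / (1.0 + gamma * alpha))

# synthesize a "measured" ion saturation current for a true alpha0 of 5
alpha_true = 5.0
I_sat = 0.61 * e * ne * (1.0 + alpha_true) * bohm_velocity(alpha_true) * area

# fixed-point iteration: n+ and alpha0 are coupled via the Bohm velocity
alpha = 0.0
for _ in range(50):
    n_pos = I_sat / (0.61 * e * bohm_velocity(alpha) * area)
    alpha = max(n_pos / ne - 1.0, 0.0)
```

The loop converges quickly here because each update moves α0 only a fraction of the remaining error; the paper's version additionally feeds the result through its Child-law sheath model for the probe area.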
NASA Astrophysics Data System (ADS)
Jankovic, I.; Barnes, R. J.; Soule, R.
2001-12-01
The analytic element method is used to model local three-dimensional flow in the vicinity of partially penetrating wells. The flow domain is bounded by an impermeable horizontal base, a phreatic surface with recharge, and a cylindrical lateral boundary. The analytic element solution for this problem contains (1) a fictitious source technique to satisfy the head and discharge conditions along the phreatic surface, (2) a fictitious source technique to satisfy specified head conditions along the cylindrical boundary, (3) a method of imaging to satisfy the no-flow condition across the impermeable base, (4) the classical analytic solution for a well, and (5) spheroidal harmonics to account for the influence of inhomogeneities in hydraulic conductivity. Temporal variations of the flow system due to time-dependent recharge and pumping are represented by combining the analytic element method with a finite difference method: the analytic element method is used to represent spatial changes in head and discharge, while the finite difference method represents temporal variations. The solution provides a very detailed description of local groundwater flow with an arbitrary number of wells of any orientation and an arbitrary number of ellipsoidal inhomogeneities of any size and conductivity. These inhomogeneities may be used to model local hydrogeologic features (such as gravel packs and clay lenses) that significantly influence the flow in the vicinity of partially penetrating wells. Several options for specifying head values along the lateral domain boundary are available. These options allow for inclusion of the model into steady and transient regional groundwater models. The head values along the lateral domain boundary may be specified directly (as time series), or assigned by specifying the water-table gradient and a head value at a single point (as time series).
A case study is included to demonstrate the application of the model in local modeling of the groundwater flow. Transient three-dimensional capture zones are delineated for a site on Prairie Island, MN. Prairie Island is located on the Mississippi River 40 miles south of the Twin Cities metropolitan area. The case study focuses on a well that has been known to contain viral DNA. The objective of the study was to assess the potential for pathogen migration toward the well.
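The classical well solution in item (4) above is, in its simplest two-dimensional steady form, a logarithmic discharge potential; superposing it with a uniform-flow element illustrates the additive structure the analytic element method relies on (the paper's 3-D spheroidal-harmonic elements are beyond a short sketch):

```python
import numpy as np

def well(x, y, xw, yw, Q):
    """Classical 2-D analytic element for a steady well at (xw, yw)
    with discharge Q: potential Phi = Q/(2*pi) * ln(r)."""
    r = np.hypot(x - xw, y - yw)
    return Q / (2.0 * np.pi) * np.log(r)

def uniform_flow(x, y, qx0):
    """Uniform regional flow of strength qx0 in the +x direction."""
    return -qx0 * x

# the analytic element method superposes elements linearly
phi = well(10.0, 5.0, 0.0, 0.0, 100.0) + uniform_flow(10.0, 5.0, 1.0)
```

Each additional well, inhomogeneity, or boundary element simply adds another term to the superposition, which is why the method scales to an arbitrary number of wells and inhomogeneities.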
Artificial Intelligence Methods in Pursuit Evasion Differential Games
1990-07-30
objectives, sometimes with fuzzy ones. Classical optimization, control, or game-theoretic methods are insufficient for their resolution. ... (Figure 5.13: Example AHP hierarchy for choosing most appropriate differential game and parametrization.) ... the Analytic Hierarchy Process originated by T.L. Saaty of the Wharton School. The Analytic Hierarchy Process (AHP) is a general theory of
Maya, Fernando; Estela, José Manuel; Cerdà, Víctor
2009-07-01
In this work, the hyphenation of the multisyringe flow injection analysis technique with a 100-cm-long-pathlength liquid core waveguide has been accomplished. The Cl-/Hg(SCN)2/Fe3+ reaction system for the spectrophotometric determination of chloride (Cl-) in waters was used as the chemical model. As a result, this classic analytical methodology has been improved, dramatically minimizing the consumption of reagents, in particular that of the highly biotoxic chemical Hg(SCN)2. The proposed method features a linear dynamic range composed of two segments, (1) 0.2-2 and (2) 2-8 mg Cl- L(-1), and extended applicability due to on-line sample dilution (up to 400 mg Cl- L(-1)). It also presents improved limits of detection and quantification of 0.06 and 0.20 mg Cl- L(-1), respectively. The coefficient of variation and the injection throughput were 1.3% (n = 10, 2 mg Cl- L(-1)) and 21 h(-1). Furthermore, a very low consumption of reagents per Cl- determination, 0.2 microg Hg(II) and 28 microg Fe3+, has been achieved. The method was successfully applied to the determination of Cl- in different types of water samples. Finally, the proposed system is critically compared, from a green analytical chemistry point of view, against other flow systems for the same purpose.
Cho, Il-Hoon; Ku, Seockmo
2017-09-30
The development of novel and high-tech solutions for rapid, accurate, and non-laborious microbial detection methods is imperative to improve the global food supply. Such solutions have begun to address the need for microbial detection that is faster and more sensitive than existing methodologies (e.g., classic culture enrichment methods). Multiple reviews report the technical functions and structures of conventional microbial detection tools. These tools, used to detect pathogens in food and food homogenates, were designed via qualitative analysis methods. The inherent disadvantage of these analytical methods is the necessity for specimen preparation, which is a time-consuming process. While some literature describes the challenges and opportunities in overcoming the technical issues related to food industry legal guidelines, there is a lack of reviews of current efforts to overcome the technological limitations in sample preparation and microbial detection using nano- and microtechnologies. In this review, we primarily explore current analytical technologies, including metallic and magnetic nanomaterials, optics, electrochemistry, and spectroscopy. These techniques rely on the early detection of pathogens via enhanced analytical sensitivity and specificity. In order to introduce the potential combination and comparative analysis of various advanced methods, we also reference a novel sample preparation protocol that uses microbial concentration and recovery technologies. This technology has the potential to expedite the pre-enrichment step that precedes the detection process.
Statistical correlation analysis for comparing vibration data from test and analysis
NASA Technical Reports Server (NTRS)
Butler, T. G.; Strang, R. F.; Purves, L. R.; Hershfeld, D. J.
1986-01-01
A theory was developed to compare vibration modes obtained by NASTRAN analysis with those obtained experimentally. Because many more analytical modes can be obtained than experimental modes, the analytical set was treated as a set of expansion functions for putting both sources in comparable form. Dimensional symmetry was developed for three general cases: a nonsymmetric whole model compared with a nonsymmetric whole structural test, a symmetric analytical portion compared with a symmetric experimental portion, and a symmetric analytical portion compared with a whole experimental test. The theory was coded and a statistical correlation program was installed as a utility. The theory is established with small classical structures.
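The abstract does not reproduce its correlation formulas, but the standard scalar used today for exactly this analysis/test comparison is the Modal Assurance Criterion (MAC), shown here as a related, simpler measure rather than the paper's own formulation:

```python
import numpy as np

def mac(phi_a, phi_e):
    """Modal Assurance Criterion between an analytical and an
    experimental mode shape: 1 = same direction, 0 = orthogonal."""
    num = np.abs(np.dot(phi_a, phi_e)) ** 2
    return num / (np.dot(phi_a, phi_a) * np.dot(phi_e, phi_e))

# illustrative cantilever-like mode shape and a scaled "measurement"
mode_fea = np.array([0.0, 0.31, 0.59, 0.81, 0.95, 1.0])
mode_test = 1.07 * mode_fea
```

The MAC is scale-invariant, so a measured mode that differs from the analytical one only by an arbitrary normalization still scores 1.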
A novel approach to signal normalisation in atmospheric pressure ionisation mass spectrometry.
Vogeser, Michael; Kirchhoff, Fabian; Geyer, Roland
2012-07-01
The aim of our study was to test an alternative principle of signal normalisation in LC-MS/MS. During analyses, post-column infusion of the target analyte is performed via a T-piece, generating an "area under the analyte peak" (AUP). The ratio of peak area to AUP is used as the assay response. Acceptable analytical performance of this principle was found for an exemplary analyte. Post-column infusion may allow normalisation of ion suppression without requiring any additional standard compound. This approach can be useful in situations where no appropriate compound is available for classical internal standardisation. Copyright © 2012 Elsevier B.V. All rights reserved.
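The normalisation principle can be sketched numerically: a constant post-column infusion plateau experiences the same matrix suppression as the analyte peak, so the ratio of peak area to plateau area (AUP) is suppression-invariant. A toy simulation (all signal shapes and numbers invented), assuming suppression is constant across the short peak window:

```python
import numpy as np

t = np.linspace(0.0, 2.0, 2001)                  # retention time, min
dt = t[1] - t[0]
peak = 800.0 * np.exp(-((t - 1.0) / 0.05) ** 2)  # chromatographic analyte peak
plateau = 50.0                                   # post-column infusion baseline

def normalised_response(suppression):
    """Peak area divided by the area under the infusion plateau (AUP)."""
    signal = (peak + plateau) * suppression      # detector trace with suppression
    baseline = plateau * suppression             # plateau from the infused analyte
    peak_area = np.sum(signal - baseline) * dt
    aup = baseline * (t[-1] - t[0])
    return peak_area / aup
```

Because the suppression factor multiplies both the peak area and the AUP, it cancels in the ratio, which is the point of the approach.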
Ota, Sarada; Singh, Arjun; Srikanth, Narayana; Sreedhar, Bojja; Ruknuddin, Galib; Dhiman, Kartar Singh
2017-01-01
Herbo-mineral formulations of Ayurveda contain specified metals or minerals in their composition, which have beneficial effects on biological systems. These metals or minerals are transformed into non-toxic forms through meticulous procedures described in Ayurveda. Though literature is available on the quality aspects of such herbo-mineral formulations, contemporary science raises concerns about them at regular intervals. Thus, it becomes mandatory to develop quality profiles of all formulations that contain metals or minerals in their composition. Considering this, it was planned to evaluate the analytical profile of Vasantakusumākara Rasa: to prepare Vasantakusumākara Rasa as per the standard operating procedures (SOP) mentioned in the classical text and to characterize it chemically using modern analytical techniques. The drug (Vasantakusumākara Rasa) was prepared in three batches in a GMP-certified pharmacy. Physico-chemical analysis, assay of elements and HPTLC were carried out as per API. XRD was conducted using a Rigaku Ultima-IV X-ray diffractometer. The analysis showed the presence of mercury, tin, gold, silver, iron, zinc, calcium, etc., and HPTLC revealed the presence of organic constituents from the plant material. XRD indicated the presence of cinnabar (mercury sulphide from Rasa Sindhura), cassiterite (tin oxide from Vaṅga Bhasma), massicot (lead oxide from Nāga Bhasma) and magnetite (iron(II,III) oxide from Loha Bhasma). The physico-chemical analysis reveals that VKR prepared by following the classical guidelines is very effective in converting the macro elements into therapeutically effective medicines in micro form. Well-prepared herbo-mineral drugs offer many advantages over plant medicines, such as longer shelf life, lower doses, easier storage and better palatability. The inferences and standards laid down in this study can be utilized as baseline data for standardization and quality control.
Creemers, E; Nijs, M; Vanheusden, E; Ombelet, W
2011-12-01
Preservation of spermatozoa is an important aspect of assisted reproductive medicine. The aim of this study was to investigate the efficacy of a recently developed liquid-nitrogen- and cryogen-free controlled-rate freezer, compared with the classical liquid nitrogen vapour freezing method, for the cryopreservation of human spermatozoa. Ten patients entering the IVF programme donated semen samples for the study. Samples were analysed according to the World Health Organization guidelines. No significant difference in total sperm motility after freeze-thawing was demonstrated between the new and the classical technique. The advantage of the new freezing technique is that it uses no liquid nitrogen during the freezing process, making it safer to use and clean-room compatible. Investment costs are higher for the apparatus, but running costs are only 1% of those of classical liquid nitrogen freezing. In conclusion, post-thaw motility of samples frozen with the classical liquid nitrogen vapour technique was comparable with that of samples frozen with the new nitrogen-free freezing technique. The latter can thus be a very useful asset to the sperm cryopreservation laboratory. © 2011 Blackwell Verlag GmbH.
Carbon Nanotubes Application in the Extraction Techniques of Pesticides: A Review.
Jakubus, Aleksandra; Paszkiewicz, Monika; Stepnowski, Piotr
2017-01-02
Carbon nanotubes (CNTs) are currently one of the most promising groups of materials, combining interesting properties such as lightness, rigidity, high surface area, high tensile strength, good thermal conductivity and resistance to mechanical damage. These unique properties make CNTs a competitive alternative to conventional sorbents used in analytical chemistry, especially in extraction techniques. The number of studies discussing the usefulness of CNTs as sorbents in a variety of extraction techniques has increased significantly in recent years. In this review article, the most important features and applications of solid-phase extraction (SPE), including classical SPE and dispersive SPE using CNTs for pesticide isolation from different matrices, are summarized. Because of the high number of articles concerning the applicability of carbon materials to the extraction of pesticides, the main aim of this review is to provide an updated survey of the latest uses of CNTs, covering the period 2006-2015. Both recent papers and those already covered in previous reviews are addressed, and particular attention is paid to dividing the publications by pesticide class in order to systematize the available literature.
NASA Technical Reports Server (NTRS)
DeChant, Lawrence Justin
1998-01-01
In spite of rapid advances in both scalar and parallel computational tools, the large number of variables involved in both design and inverse problems makes the use of sophisticated fluid flow models impractical. With this restriction, it is concluded that an important family of methods for mathematical/computational development is that of reduced or approximate fluid flow models. In this study a combined perturbation/numerical modeling methodology is developed which provides a rigorously derived family of solutions. The mathematical model is computationally more efficient than classical boundary-layer methods but provides important two-dimensional information not available using quasi-1-d approaches. An additional strength of the current methodology is its ability to locally predict static pressure fields in a manner analogous to more sophisticated parabolized Navier-Stokes (PNS) formulations. To resolve singular behavior, the model utilizes classical analytical solution techniques. Hence, analytical methods have been combined with efficient numerical methods to yield an efficient hybrid fluid flow model. In particular, the main objective of this research has been to develop a system of analytical and numerical ejector/mixer nozzle models which require minimal empirical input. A computer code, DREA (Differential Reduced Ejector/mixer Analysis), has been developed with the ability to run sufficiently fast that it may be used either as a subroutine or called by a design optimization routine. The models are of direct use to the High Speed Civil Transport Program (a joint government/industry project seeking to develop an economically viable U.S. commercial supersonic transport vehicle) and are currently being adopted by both NASA and industry. Experimental validation of these models is provided by comparison to results obtained from the open literature and Limited Exclusive Right Distribution (LERD) sources, as well as dedicated experiments performed at Texas A&M.
These experiments were performed using a hydraulic/gas flow analog. Comparisons of DREA computations with experimental data, including entrainment, thrust, and local profile information, are overall good. Computational time studies indicate that DREA provides considerably more information at a lower computational cost than contemporary ejector nozzle design models. Finally, physical limitations of the method, deviations from experimental data, potential improvements and alternative formulations are described. This report represents closure to the NASA Graduate Researchers Program. Versions of the DREA code and a user's guide may be obtained from the NASA Lewis Research Center.
Connor, Thomas H; Smith, Jerome P
2016-09-01
At the present time, the method of choice to determine surface contamination of the workplace with antineoplastic and other hazardous drugs is surface wipe sampling and subsequent sample analysis with a variety of analytical techniques. The purpose of this article is to review current methodology for determining the level of surface contamination with hazardous drugs in healthcare settings, to discuss recent advances in this area, and to provide some guidance for conducting surface wipe sampling and sample analysis for these drugs in healthcare settings. Published studies on the use of wipe sampling to measure hazardous drugs on surfaces in healthcare settings were reviewed. These studies include the use of well-documented chromatographic techniques for sample analysis, in addition to newly evolving technology that provides rapid analysis of specific antineoplastic drugs. Methodology for the analysis of surface wipe samples for hazardous drugs is reviewed, including the purposes, technical factors, sampling strategy, materials required, and limitations. The use of lateral flow immunoassay (LFIA) and fluorescence covalent microbead immunosorbent assay (FCMIA) for surface wipe sample evaluation is also discussed. Current recommendations are that all healthcare settings where antineoplastic and other hazardous drugs are handled include surface wipe sampling as part of a comprehensive hazardous-drug safe-handling program. Surface wipe sampling may be used to characterize potential occupational dermal exposure risk and to evaluate the effectiveness of implemented controls and the overall safety program. New technology, although currently limited in scope, may make wipe sampling for hazardous drugs more routine and less costly, and provide a shorter response time than the classical analytical techniques now in use.
[Analysis of triterpenoids in Ganoderma lucidum by microwave-assisted continuous extraction].
Lu, Yan-fang; An, Jing; Jiang, Ye
2015-04-01
To further improve the extraction efficiency of microwave extraction, a microwave-assisted continuous extraction (MACE) device was designed and utilized. By contrast with traditional methods, the characteristics and extraction efficiency of MACE were also studied. The method was validated by the analysis of the triterpenoids in Ganoderma lucidum. The extraction conditions of MACE were: 95% ethanol as solvent, microwave power 200 W and radiation time 14.5 min (5 cycles). The extraction results were subsequently compared with those of traditional heat reflux extraction (HRE), Soxhlet extraction (SE), ultrasonic extraction (UE) and conventional microwave extraction (ME). For the triterpenoids, the two microwave-based methods (ME and MACE) were in general capable of finishing the extraction in 10 and 14.5 min, respectively, while the other methods consumed 60 min or even more than 100 min. Additionally, ME produced extraction results comparable to the classical HRE and a higher extraction yield than both SE and UE, but a notably lower yield than MACE. More importantly, the purity of the crude extract obtained by MACE is far better than that from the other methods. MACE effectively combines the advantages of microwave extraction and Soxhlet extraction, enabling a more complete extraction of the analytes of traditional Chinese medicines (TCMs) in comparison with ME, and therefore making the analytical results more accurate. It provides a novel, highly efficient, rapid and reliable pretreatment technique for the analysis of TCMs, and it could potentially be extended to ingredient preparation or extraction techniques for TCMs.
Yu, Daxiong; Ma, Ruijie; Fang, Jianqiao
2015-05-01
There are many eminent acupuncture masters of modern times in Zhejiang province, where acupuncture schools of numerous characteristics have developed and exerted important influence at home and abroad. Through the collection of literature on the acupuncture schools in Zhejiang and interviews with the parties involved, it has been discovered that the acupuncture manipulation techniques of these modern masters are distinctively featured. Those techniques were developed on the basis of Neijing (Internal Classic), Jinzhenfu (Ode to Gold Needle) and Zhenjiu Dacheng (Great Compendium of Acupuncture and Moxibustion). Whether following the old maxims or studying on their own, every master lays emphasis on the research and interpretation of classical theories and integrates the traditional with the modern. In this paper, the acupuncture manipulation techniques of Zhejiang acupuncture masters of modern times are described from four aspects: needling techniques in the Internal Classic, the feijingzouqi needling technique, the penetrating needling technique, and innovations in acupuncture manipulation.
The role of analytical chemistry in Niger Delta petroleum exploration: a review.
Akinlua, Akinsehinwa
2012-06-12
Petroleum and the organic matter from which it is derived are composed of organic compounds together with some trace elements. These compounds give insight into the origin, thermal maturity and paleoenvironmental history of petroleum, which are essential elements of petroleum exploration. The main tools used to acquire these geochemical data are analytical techniques. Owing to progress in the development of new analytical techniques, many hitherto intractable petroleum exploration problems have been resolved. Analytical chemistry has played a significant role in the development of the petroleum resources of the Niger Delta. Various analytical techniques that have aided the success of petroleum exploration in the Niger Delta are discussed, as are the analytical techniques that have helped in understanding the petroleum system of the basin. Recent and emerging analytical methodologies, including green analytical methods as applicable to petroleum exploration, particularly the Niger Delta petroleum province, are discussed in this paper. Analytical chemistry is an invaluable tool in finding the Niger Delta oils. Copyright © 2011 Elsevier B.V. All rights reserved.
Analytical techniques: A compilation
NASA Technical Reports Server (NTRS)
1975-01-01
A compilation, containing articles on a number of analytical techniques for quality control engineers and laboratory workers, is presented. Data cover techniques for testing electronic, mechanical, and optical systems, nondestructive testing techniques, and gas analysis techniques.
Sound Emission of Rotor Induced Deformations of Generator Casings
NASA Technical Reports Server (NTRS)
Polifke, W.; Mueller, B.; Yee, H. C.; Mansour, Nagi (Technical Monitor)
2001-01-01
The casing of large electrical generators can be deformed slightly by the rotor's magnetic field. The sound emission produced by these periodic deformations, which could possibly exceed guaranteed noise emission limits, is analysed analytically and numerically. From the deformation of the casing, the normal velocity of the generator's surface is computed. Taking into account the corresponding symmetry, an analytical solution for the acoustic pressure outside the generator is found in terms of the Hankel function of second order. The normal velocity of the generator surface provides the required boundary condition for the acoustic pressure and determines the magnitude of the pressure oscillations. For the numerical simulation, the nonlinear 2D Euler equations are formulated in a perturbation form for low-Mach-number computational aeroacoustics (CAA). The spatial derivatives are discretized by the classical sixth-order central interior scheme and a third-order boundary scheme. Spurious high-frequency oscillations are damped by a characteristic-based artificial compression method (ACM) filter. The time derivatives are approximated by the classical fourth-order Runge-Kutta method. The numerical results are in excellent agreement with the analytical solution.
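The analytical solution described above, an outgoing cylindrical wave whose amplitude is fixed by the casing's normal surface velocity, can be sketched as follows. This is an illustrative reconstruction under an assumed e^{i*omega*t} time convention and azimuthal order m, not the paper's exact normalisation:

```python
import numpy as np
from scipy.special import hankel2

def casing_pressure(r, a, k, v_n, rho=1.2, c=340.0, m=2):
    """Acoustic pressure of an outgoing cylindrical wave of azimuthal
    order m outside a cylinder of radius a, with the amplitude fixed by
    the normal surface velocity amplitude v_n via the linearised Euler
    equation (e^{i*omega*t} convention, omega = c*k)."""
    omega = c * k
    # radial derivative of H_m^(2)(k r) at r = a (central difference)
    dr = 1e-6 * a
    dH = (hankel2(m, k * (a + dr)) - hankel2(m, k * (a - dr))) / (2 * dr)
    # Euler equation: v_r = -(1/(1j*rho*omega)) dp/dr at r = a fixes A
    A = -1j * rho * omega * v_n / dH
    return A * hankel2(m, k * r)
```

The Hankel function of the second kind represents the outward-radiating wave for this convention, and its far-field decay reproduces the expected cylindrical spreading of the radiated sound.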
ERIC Educational Resources Information Center
Moraes, Edgar P.; da Silva, Nilbert S. A.; de Morais, Camilo de L. M.; das Neves, Luiz S.; de Lima, Kassio M. G.
2014-01-01
The flame test is a classical analytical method that is often used to teach students how to identify specific metals. However, some universities in developing countries have difficulties acquiring the sophisticated instrumentation needed to demonstrate how to identify and quantify metals. In this context, a method was developed based on the flame…
The role of mechanics during brain development
NASA Astrophysics Data System (ADS)
Budday, Silvia; Steinmann, Paul; Kuhl, Ellen
2014-12-01
Convolutions are a classical hallmark of most mammalian brains. Brain surface morphology is often associated with intelligence and closely correlated with neurological dysfunction. Yet, we know surprisingly little about the underlying mechanisms of cortical folding. Here we identify the role of the key anatomic players during the folding process: cortical thickness, stiffness, and growth. To establish estimates for the critical time, pressure, and the wavelength at the onset of folding, we derive an analytical model using the Föppl-von Kármán theory. Analytical modeling provides a quick first insight into the critical conditions at the onset of folding, yet it fails to predict the evolution of complex instability patterns in the post-critical regime. To predict realistic surface morphologies, we establish a computational model using the continuum theory of finite growth. Computational modeling not only confirms our analytical estimates, but is also capable of predicting the formation of complex surface morphologies with asymmetric patterns and secondary folds. Taken together, our analytical and computational models explain why larger mammalian brains tend to be more convoluted than smaller brains. Both models provide mechanistic interpretations of the classical malformations of lissencephaly and polymicrogyria. Understanding the process of cortical folding in the mammalian brain has direct implications on the diagnostics of neurological disorders including severe retardation, epilepsy, schizophrenia, and autism.
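For reference, the classical Föppl-von-Kármán analysis of a stiff layer of thickness t on a compliant substrate yields closed-form estimates for the wrinkling wavelength and critical strain. The paper's growth-driven cortical model need not reduce to exactly these expressions, but they convey the form such analytical estimates take:

```latex
% Classical stiff-film-on-soft-substrate wrinkling estimates, with
% plane-strain moduli \bar{E} = E/(1-\nu^2); subscripts f and s denote
% the film (cortex) and substrate (subcortex).
\lambda_{\mathrm{crit}} = 2\pi t \left( \frac{\bar{E}_f}{3\,\bar{E}_s} \right)^{1/3},
\qquad
\varepsilon_{\mathrm{crit}} = \frac{1}{4} \left( \frac{3\,\bar{E}_s}{\bar{E}_f} \right)^{2/3}.
```

These estimates show directly why thicker or stiffer cortices fold at longer wavelengths, consistent with the scaling arguments in the abstract.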
NASA Astrophysics Data System (ADS)
Zaslavsky, M.
1996-06-01
The phenomena of dynamical localization, both classical and quantum, are studied in the Fermi accelerator model. The model consists of two vertical oscillating walls and a ball bouncing between them. The classical localization boundary is calculated in the case of ``sinusoidal velocity transfer'' [A. J. Lichtenberg and M. A. Lieberman, Regular and Stochastic Motion (Springer-Verlag, Berlin, 1983)] on the basis of the analysis of resonances. In the case of the ``sawtooth'' wall velocity we show that the quantum localization is determined by the analytical properties of the canonical transformations to the action and angle coordinates of the unperturbed Hamiltonian, while the existence of the classical localization is determined by the number of continuous derivatives of the distance between the walls with respect to time.
Metrology in physics, chemistry, and biology: differing perceptions.
Iyengar, Venkatesh
2007-04-01
The association of physics and chemistry with metrology (the science of measurements) is well documented. For practical purposes, basic metrological measurements in physics are governed by two components, namely, the measure (i.e., the unit of measurement) and the measurand (i.e., the entity measured), which fully account for the integrity of a measurement process. In simple words, in the case of measuring the length of a room (the measurand), the SI unit meter (the measure) provides a direct answer sustained by metrological concepts. Metrology in chemistry, as observed through physical chemistry (measures used to express molar relationships, volume, pressure, temperature, surface tension, among others) follows the same principles of metrology as in physics. The same basis percolates to classical analytical chemistry (gravimetry for preparing high-purity standards, related definitive analytical techniques, among others). However, certain transition takes place in extending the metrological principles to chemical measurements in complex chemical matrices (e.g., food samples), as it adds a third component, namely, indirect measurements (e.g., AAS determination of Zn in foods). This is a practice frequently used in field assays, and calls for additional steps to account for traceability of such chemical measurements for safeguarding reliability concerns. Hence, the assessment that chemical metrology is still evolving.
Analysis of latency performance of bluetooth low energy (BLE) networks.
Cho, Keuchul; Park, Woojin; Hong, Moonki; Park, Gisu; Cho, Wooseong; Seo, Jihoon; Han, Kijun
2014-12-23
Bluetooth Low Energy (BLE) is a short-range wireless communication technology aiming at low-cost and low-power communication. The performance of classical Bluetooth device discovery has been intensively studied using analytical modeling and simulative methods, but those techniques are not applicable to BLE, since BLE makes a fundamental change in the design of the discovery mechanism, including the usage of three advertising channels. Several works have recently analyzed the topic of BLE device discovery, but these studies are still far from thorough. It is thus necessary to develop a new, accurate model for the BLE discovery process. In particular, the wide range of parameter settings gives BLE devices much latitude to customize their discovery performance. This motivates our study of modeling the BLE discovery process and performing intensive simulation. This paper focuses on building an analytical model of the discovery probability, as well as the expected discovery latency, which are then validated via extensive experiments. Our analysis considers both continuous and discontinuous scanning modes, and we analyze the sensitivity of these performance metrics to parameter settings to quantitatively examine to what extent the parameters influence the performance of the discovery process.
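The discovery process described above lends itself to a small Monte-Carlo sketch. The model below is a deliberately simplified stand-in, not the paper's analytical model: one packet per advertising channel per event, the standard 0-10 ms random advDelay, and a scanner that dwells on one advertising channel per scan interval, cycling 37, 38, 39:

```python
import random

ADV_CHANNELS = (37, 38, 39)

def discovery_latency(adv_interval=0.1, scan_interval=0.1, scan_window=0.05,
                      packet_time=0.000376, max_time=100.0, rng=random):
    """Simulate BLE neighbour discovery and return the time (s) of the
    first advertising packet that lands entirely inside a scan window on
    a matching channel, or None if none does within max_time."""
    t = 0.0
    while t < max_time:
        t += adv_interval + rng.uniform(0.0, 0.010)  # advInterval + advDelay
        for i, ch in enumerate(ADV_CHANNELS):
            t_pkt = t + i * packet_time              # back-to-back packets
            n = int(t_pkt // scan_interval)          # current scan interval
            scan_ch = ADV_CHANNELS[n % 3]            # scanner's channel now
            offset = t_pkt - n * scan_interval
            if scan_ch == ch and offset + packet_time <= scan_window:
                return t_pkt
    return None  # not discovered within max_time
```

Averaging over many seeded runs gives an empirical latency distribution against which an analytical model of this kind can be checked; setting scan_window equal to scan_interval reproduces the continuous-scanning mode mentioned in the abstract.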
Li, Shuo; Fu, Haiyan; Ju, Baozhao
2015-10-01
Huangdi Neijing (Yellow Emperor's Internal Classic) is the earliest extant medical classic among the Chinese medical treasures and is the foundation of traditional Chinese medicine. It not only contains a rich medical vocabulary, but also supplies new meanings for seven words, i.e., Wang, Xiu, Yuan, Fang, Xu, Jiu and Bian, denoting removing the needle, retaining the needle, the reinforcing technique, the reducing technique, slow needling, moxibustion and stone-needle puncturing, respectively.
Nikitin, E E; Troe, J
2010-09-16
Approximate analytical expressions are derived for the low-energy rate coefficients of capture of two identical dipolar polarizable rigid rotors in their lowest nonresonant (j(1) = 0 and j(2) = 0) and resonant (j(1) = 0,1 and j(2) = 1,0) states. The considered range extends from the quantum, ultralow energy regime, characterized by s-wave capture, to the classical regime described within fly wheel and adiabatic channel approaches, respectively. This is illustrated by the table of contents graphic (available on the Web) that shows the scaled rate coefficients for the mutual capture of rotors in the resonant state versus the reduced wave vector between the Bethe zero-energy (left arrows) and classical high-energy (right arrow) limits for different ratios δ of the dipole-dipole to dispersion interaction.
NASA Astrophysics Data System (ADS)
Alam, Muhammad Ashraful; Khan, M. Ryyan
2016-10-01
Bifacial tandem cells promise to reduce three fundamental losses (i.e., above-bandgap, below bandgap, and the uncollected light between panels) inherent in classical single junction photovoltaic (PV) systems. The successive filtering of light through the bandgap cascade and the requirement of current continuity make optimization of tandem cells difficult and accessible only to numerical solution through computer modeling. The challenge is even more complicated for bifacial design. In this paper, we use an elegantly simple analytical approach to show that the essential physics of optimization is intuitively obvious, and deeply insightful results can be obtained with a few lines of algebra. This powerful approach reproduces, as special cases, all of the known results of conventional and bifacial tandem cells and highlights the asymptotic efficiency gain of these technologies.
Quantum Discord for d⊗2 Systems
Ma, Zhihao; Chen, Zhihua; Fanchini, Felipe Fernandes; Fei, Shao-Ming
2015-01-01
We present an analytical solution for classical correlation, defined in terms of linear entropy, in an arbitrary system when the second subsystem is measured. We show that the optimal measurements used in the maximization of the classical correlation in terms of linear entropy, when used to calculate the quantum discord in terms of von Neumann entropy, result in a tight upper bound for arbitrary systems. This bound agrees with all known analytical results about quantum discord in terms of von Neumann entropy and, when comparing it with the numerical results for 10^6 two-qubit random density matrices, we obtain an average deviation of order 10^-4. Furthermore, our results give a way to calculate the quantum discord for arbitrary n-qubit GHZ and W states evolving under the action of the amplitude damping noisy channel. PMID:26036771
Wunderli, S; Fortunato, G; Reichmuth, A; Richard, Ph
2003-06-01
A new method to correct for the largest systematic influence in mass determination, air buoyancy, is outlined. A full description of the most relevant influence parameters is given and the combined measurement uncertainty is evaluated according to the ISO-GUM approach [1]. A new correction method for air buoyancy using an artefact is presented. This method has the advantage that only a mass artefact is used to correct for air buoyancy. The classical approach demands the determination of the air density and therefore suitable equipment to measure at least the air temperature, the air pressure and the relative air humidity within the demanded uncertainties (i.e. three independent measurement tasks have to be performed simultaneously). The calculated uncertainty is lower for the classical method; however, a field laboratory may not always possess fully traceable measurement systems for these room-climate parameters. A comparison of three approaches to the calculation of the combined uncertainty of mass values is presented: the classical determination of air buoyancy, the artefact method, and the neglect of this systematic effect as proposed in the new EURACHEM/CITAC guide [2]. The artefact method is suitable for high-precision measurement in analytical chemistry and especially for the production of certified reference materials, reference values and analytical chemical reference materials. The method could also be used either for volume determination of solids or for air density measurement by an independent method.
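For context, the classical approach mentioned above corrects a balance reading using the measured air density. A minimal sketch of that textbook correction (assuming steel reference weights of conventional density 8000 kg/m^3; the paper's artefact method avoids the air-density measurement entirely) is:

```python
def buoyancy_corrected_mass(reading, rho_sample, rho_air=1.2, rho_weights=8000.0):
    """Classical air-buoyancy correction for a balance calibrated with
    reference weights of density rho_weights (kg/m^3): the sample and
    the weights displace different volumes of air, so the reading is
    scaled by the ratio of the two buoyancy factors."""
    return reading * (1.0 - rho_air / rho_weights) / (1.0 - rho_air / rho_sample)
```

For a water-like sample (1000 kg/m^3) the correction is about +0.1%, which is large compared with the uncertainties targeted in reference-material production and illustrates why the effect cannot simply be neglected.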
Two Dimensional Processing Of Speech And Ecg Signals Using The Wigner-Ville Distribution
NASA Astrophysics Data System (ADS)
Boashash, Boualem; Abeysekera, Saman S.
1986-12-01
The Wigner-Ville distribution (WVD) has been shown to be a valuable tool for the analysis of non-stationary signals such as speech and electrocardiogram (ECG) data. The one-dimensional real data are first transformed into a complex analytic signal using the Hilbert transform, and then a two-dimensional image is formed using the Wigner-Ville transform. For speech signals, a contour plot is determined and used as the basic feature for a pattern recognition algorithm. This method is compared with the classical short-time Fourier transform (STFT) and is shown to recognize isolated words better in a noisy environment. The same method, together with the concept of the instantaneous frequency of the signal, is applied to the analysis of ECG signals. This technique allows one to classify diseased heart-beat signals. Examples are shown.
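The processing chain described above (Hilbert transform to an analytic signal, then the Wigner-Ville transform) can be sketched in a few lines. This is a minimal discrete version without the smoothing windows a practical implementation would add:

```python
import numpy as np
from scipy.signal import hilbert

def wigner_ville(x):
    """Discrete Wigner-Ville distribution of a real signal x: form the
    analytic signal z via the Hilbert transform, then for each time n
    correlate z[n+k] with conj(z[n-k]) and Fourier-transform over the
    lag k. Returns an (n_freq, n_time) real array."""
    z = hilbert(np.asarray(x, dtype=float))
    n = len(z)
    W = np.zeros((n, n))
    for t in range(n):
        kmax = min(t, n - 1 - t)          # lags that stay inside the signal
        r = np.zeros(n, dtype=complex)
        for k in range(-kmax, kmax + 1):
            r[k % n] = z[t + k] * np.conj(z[t - k])
        W[:, t] = np.real(np.fft.fft(r))  # frequency slice at time t
    return W
```

Note that, because the kernel uses lag 2k, a tone at digital frequency f0 concentrates at FFT bin 2*f0*n, i.e. the frequency axis is scaled by two relative to an ordinary spectrogram.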
NASA Technical Reports Server (NTRS)
Noor, A. K. (Editor); Housner, J. M.
1983-01-01
The mechanics of materials and material characterization are considered, taking into account micromechanics, the behavior of steel structures at elevated temperatures, and an anisotropic plasticity model for inelastic multiaxial cyclic deformation. Other topics explored are related to advances and trends in finite element technology, classical analytical techniques and their computer implementation, interactive computing and computational strategies for nonlinear problems, advances and trends in numerical analysis, database management systems and CAD/CAM, space structures and vehicle crashworthiness, beams, plates and fibrous composite structures, design-oriented analysis, artificial intelligence and optimization, contact problems, random waves, and lifetime prediction. Earthquake-resistant structures and other advanced structural applications are also discussed, giving attention to cumulative damage in steel structures subjected to earthquake ground motions, and a mixed domain analysis of nuclear containment structures using impulse functions.
On the numerical computation of nonlinear force-free magnetic fields. [from solar photosphere
NASA Technical Reports Server (NTRS)
Wu, S. T.; Sun, M. T.; Chang, H. M.; Hagyard, M. J.; Gary, G. A.
1990-01-01
An algorithm has been developed to extrapolate nonlinear force-free magnetic fields from the photosphere, given the proper boundary conditions. This paper presents the results of this work, describing the mathematical formalism that was developed and the numerical techniques employed, and commenting on the stability criteria and accuracy of these numerical schemes. An analytical solution is used as a benchmark test; the results show that the computational accuracy for the case of a nonlinear force-free magnetic field was on the order of a few percent (less than 5 percent). This newly developed scheme was applied to analyze a solar vector magnetogram, and the results were compared with those deduced from the classical potential field method. The comparison shows that additional physical features of the vector magnetogram were revealed in the nonlinear force-free case.
NASA Astrophysics Data System (ADS)
Hart, Brian K.; Griffiths, Peter R.
1998-06-01
Partial least squares (PLS) regression has been evaluated as a robust calibration technique for over 100 hazardous air pollutants (HAPs) measured by open-path Fourier transform infrared (OP/FT-IR) spectrometry. PLS has the advantage over the currently recommended calibration method, classical least squares (CLS), in that it can use the whole usable spectrum (700-1300 cm-1, 2000-2150 cm-1, and 2400-3000 cm-1) and detect several analytes simultaneously. Up to one hundred HAPs synthetically added to OP/FT-IR backgrounds have been simultaneously calibrated and detected using PLS. PLS also has the advantage of requiring less preprocessing of spectra than is required in CLS calibration schemes, allowing PLS to provide user-independent real-time analysis of OP/FT-IR spectra.
Directivity analysis of meander-line-coil EMATs with a wholly analytical method.
Xie, Yuedong; Liu, Zenghua; Yin, Liyuan; Wu, Jiande; Deng, Peng; Yin, Wuliang
2017-01-01
This paper presents a simulation and experimental study of the radiation pattern of a meander-line-coil EMAT. A wholly analytical method, which couples two models, an analytical electromagnetic (EM) model and an analytical ultrasonic (UT) model, has been developed to build EMAT models and analyse the beam directivity of Rayleigh waves. For a specific sensor configuration, Lorentz forces are calculated using the analytical EM method, which is adapted from the classic Deeds and Dodd solution. The calculated Lorentz force densities are imported into an analytical ultrasonic model as driving point sources, which produce the Rayleigh waves within a layered medium. The effect of the length of the meander-line coil on the beam directivity of the Rayleigh waves is analysed quantitatively and verified experimentally. Copyright © 2016 Elsevier B.V. All rights reserved.
New insights into classical solutions of the local instability of the sandwich panels problem
NASA Astrophysics Data System (ADS)
Pozorska, Jolanta; Pozorski, Zbigniew
2016-06-01
The paper concerns the problem of local instability of thin facings of a sandwich panel. The classic analytical solutions are compared and examined. The Airy stress function is applied in the case of the state of plane stress and the state of plane strain. Wrinkling stress values are presented. The differences between the results obtained using the differential equations method and energy method are discussed. The relations between core strain energies are presented.
Suvarapu, Lakshmi Narayana; Baek, Sung-Ok
2015-01-01
This paper reviews the speciation and determination of mercury by various analytical techniques such as atomic absorption spectrometry, voltammetry, inductively coupled plasma techniques, spectrophotometry, spectrofluorometry, high performance liquid chromatography, and gas chromatography. Approximately 126 research papers on the speciation and determination of mercury by various analytical techniques published in international journals since 2013 are reviewed. PMID:26236539
Experimental Validation of the Transverse Shear Behavior of a Nomex Core for Sandwich Panels
NASA Astrophysics Data System (ADS)
Farooqi, M. I.; Nasir, M. A.; Ali, H. M.; Ali, Y.
2017-05-01
This work deals with determination of the transverse shear moduli of a Nomex® honeycomb core of sandwich panels. Their out-of-plane shear characteristics depend on the transverse shear moduli of the honeycomb core. These moduli were determined experimentally, numerically, and analytically. Numerical simulations were performed by using a unit cell model and three analytical approaches. Analytical calculations showed that two of the approaches provided reasonable predictions for the transverse shear modulus as compared with experimental results. However, the approach based upon the classical lamination theory showed large deviations from experimental data. Numerical simulations also showed a trend similar to that resulting from the analytical models.
Pinon, J M; Thoannes, H; Gruson, N
1985-02-28
Enzyme-linked immuno-filtration assay is carried out on a micropore membrane. This doubly analytical technique permits simultaneous study of antibody specificity by immunoprecipitation and characterisation of antibody isotypes by immuno-filtration with enzyme-labelled antibodies. Recognition of the same T. gondii antigenic constituent by IgG, IgA, IgM or IgE antibodies produces couplets (IgG-IgM; IgG-IgA) or triplets (IgG-IgM-IgA; IgG-IgM-IgE) which identify the functional fractions of the toxoplasmosis antigen. In acquired toxoplasmosis, the persistence of IgM antibody long after infestation calls into question the inference of recent infestation normally linked to detection of this isotype. For sera of comparable titres, comparison of immunological profiles by the method described demonstrates disparities in the composition of the specific antibody content as expressed in international units. Use of the same method to detect IgM antibodies or to distinguish between transmitted maternal IgG and IgG antibodies synthesised by the foetus or neonate makes a diagnosis of congenital toxoplasmosis possible in 85% of cases during the first few days of life. With the method described, the diagnosis may be made on average 5 months earlier than with classical techniques. In the course of surveillance for latent congenital toxoplasmosis, the appearance of IgM or IgE antibodies raises the possibility of complications (hydrocephalus, chorioretinitis). After cessation of treatment, a rise in IgG antibodies indicating persistence of infection is detected earlier by the present method than by classical methods.
Causanilles, Ana; Kinyua, Juliet; Ruttkies, Christoph; van Nuijs, Alexander L N; Emke, Erik; Covaci, Adrian; de Voogt, Pim
2017-10-01
The inclusion of new psychoactive substances (NPS) in the wastewater-based epidemiology approach presents challenges, such as the reduced number of users, which translates into low concentrations of residues, and the limited pharmacokinetic information available, which renders the choice of target biomarkers difficult. Sampling during special social settings, analysis with improved analytical techniques, and data processing with a specific workflow to narrow the search are required approaches for successful monitoring. This work presents the application of a qualitative screening technique to wastewater samples collected during a city festival, where likely users of recreational substances gather and consequently higher residual concentrations of used NPS are expected. The analysis was performed using liquid chromatography coupled to high-resolution mass spectrometry. Data were processed using an algorithm that involves the extraction of accurate masses (calculated from molecular formulae) of expected m/z from an in-house database containing about 2,000 entries, including NPS and transformation products. We positively identified eight NPS belonging to the classes of synthetic cathinones, phenethylamines and opioids. In addition, the presence of benzodiazepine analogues, classical drugs and other licit substances with potential for abuse was confirmed. The screening workflow based on a database search was useful in the identification of NPS biomarkers in wastewater. The findings highlight the predominance of classical drugs and the low use of NPS in the Netherlands. Additionally, meta-chlorophenylpiperazine (mCPP), 2,5-dimethoxy-4-bromophenethylamine (2C-B), and 4-fluoroamphetamine (4-FA) were identified in wastewater for the first time. Copyright © 2017 Elsevier Ltd. All rights reserved.
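The accurate-mass extraction step of such a screening workflow can be sketched as below. The three-entry "database", the [M+H]+ adduct assumption, and the 5 ppm tolerance are illustrative; the actual in-house database and workflow of the study are not reproduced here:

```python
import re

# Monoisotopic masses of a few elements (standard values).
MASS = {'C': 12.0, 'H': 1.0078250319, 'N': 14.0030740052,
        'O': 15.9949146221, 'F': 18.9984032, 'Cl': 34.96885271,
        'Br': 78.9183376}
PROTON = 1.00727646688

def monoisotopic(formula):
    """Monoisotopic mass from a simple formula string like 'C10H13ClN2'."""
    mass = 0.0
    for elem, count in re.findall(r'([A-Z][a-z]?)(\d*)', formula):
        mass += MASS[elem] * (int(count) if count else 1)
    return mass

def screen(measured_mz, database, tol_ppm=5.0):
    """Return database entries whose [M+H]+ m/z lies within tol_ppm."""
    hits = []
    for name, formula in database.items():
        mz = monoisotopic(formula) + PROTON
        if abs(measured_mz - mz) / mz * 1e6 <= tol_ppm:
            hits.append(name)
    return hits

# Tiny stand-in database (real formulae of the three NPS named above).
db = {'mCPP': 'C10H13ClN2', '2C-B': 'C10H14BrNO2', '4-FA': 'C9H12FN'}
```

For example, a measured feature at m/z 197.0840 matches the protonated mCPP formula within a fraction of a ppm, while the other entries are rejected.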
ERIC Educational Resources Information Center
Hough, Susan L.; Hall, Bruce W.
The meta-analytic techniques of G. V. Glass (1976) and J. E. Hunter and F. L. Schmidt (1977) were compared through their application to three meta-analytic studies from education literature. The following hypotheses were explored: (1) the overall mean effect size would be larger in a Hunter-Schmidt meta-analysis (HSMA) than in a Glass…
Control system design for flexible structures using data models
NASA Technical Reports Server (NTRS)
Irwin, R. Dennis; Frazier, W. Garth; Mitchell, Jerrel R.; Medina, Enrique A.; Bukley, Angelia P.
1993-01-01
The dynamics and control of flexible aerospace structures exercises many of the engineering disciplines. In recent years there has been considerable research in the development and tailoring of control system design techniques for these structures. This problem involves designing a control system for a multi-input, multi-output (MIMO) system that satisfies various performance criteria, such as vibration suppression, disturbance and noise rejection, attitude control and slewing control. Considerable progress has been made and demonstrated in control system design techniques for these structures. The key to designing control systems for these structures that meet stringent performance requirements is an accurate model. It has become apparent that theoretically derived and finite-element-generated models do not provide the needed accuracy; almost all successful demonstrations of control system design techniques have involved using test results for fine-tuning a model or for extracting a model using system identification techniques. This paper describes past and ongoing efforts at Ohio University and NASA MSFC to design controllers using 'data models.' The basic philosophy of this approach is to start with a stabilizing controller and frequency response data that describe the plant; then, iteratively vary the free parameters of the controller so that performance measures come closer to satisfying the design specifications. The frequency response data can be either experimentally or analytically derived. One 'design-with-data' algorithm presented in this paper is called the Compensator Improvement Program (CIP). The current CIP designs controllers for MIMO systems so that classical gain, phase, and attenuation margins are achieved. The centerpiece of the CIP algorithm is the constraint improvement technique, which is used to calculate a parameter change vector that guarantees an improvement in all unsatisfied, feasible performance metrics from iteration to iteration.
The paper also presents a recently demonstrated CIP-type algorithm, called the Model and Data Oriented Computer-Aided Design System (MADCADS), developed for achieving H(sub infinity) type design specifications using data models. Control system designs for the NASA/MSFC Single Structure Control Facility are demonstrated for both CIP and MADCADS. Advantages of design-with-data algorithms over techniques that require analytical plant models are also presented.
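A minimal sketch of the design-with-data philosophy, iterating a free controller parameter against frequency-response data until a classical margin specification is met, might look like the following. The third-order plant, pure-gain controller, and simple gain-shrinking update are stand-ins, not the actual CIP constraint-improvement technique:

```python
import numpy as np

# Frequency-response "data model" of an assumed plant
# P(s) = 1 / (s + 1)^3.  CIP itself would work from measured
# frequency-response data of the real structure.
w = np.logspace(-2, 2, 2000)
P = 1.0 / (1j * w + 1) ** 3

def gain_margin(L):
    """Absolute gain margin from sampled loop response L(jw)."""
    phase = np.unwrap(np.angle(L))
    idx = np.argmin(np.abs(phase + np.pi))   # phase-crossover sample
    return 1.0 / np.abs(L[idx])

# Start from a stabilizing proportional controller and iteratively
# vary its one free parameter until the gain-margin spec is met.
k, spec = 6.0, 2.0
while gain_margin(k * P) < spec:
    k *= 0.9
```

Each pass measures the unsatisfied margin directly from the frequency-response data and moves the parameter toward satisfying it, which is the essence of iterating "performance measures closer to design specifications" without an analytical plant model.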
Ultra-small dye-doped silica nanoparticles via modified sol-gel technique
NASA Astrophysics Data System (ADS)
Riccò, R.; Nizzero, S.; Penna, E.; Meneghello, A.; Cretaio, E.; Enrichi, F.
2018-05-01
In modern biosensing and imaging, fluorescence-based methods constitute the most widespread approach to achieving optimal detection of analytes, both in solution and at the single-particle level. Despite the huge progress made in recent decades in the development of plasmonic biosensors and label-free sensing techniques, fluorescent molecules remain to date the most commonly used contrast agents for commercial imaging and detection methods. However, they exhibit low stability, can be difficult to functionalise, and often result in a low signal-to-noise ratio. Thus, embedding fluorescent probes into robust and bio-compatible materials, such as silica nanoparticles, can substantially enhance the detection limit and dramatically increase the sensitivity. In this work, ultra-small fluorescent silica nanoparticles (NPs) for optical biosensing applications were doped with a fluorescent dye using simple water-based sol-gel approaches based on the classical Stöber procedure. By systematically modulating the reaction parameters, controllable size tuning of particle diameters down to 10 nm was achieved. Particle morphology and optical response were evaluated, showing possible single-molecule behaviour, without employing microemulsion methods to achieve similar results.
Estimating and Comparing Dam Deformation Using Classical and GNSS Techniques
Barzaghi, Riccardo; De Gaetani, Carlo Iapige
2018-01-01
Global Navigation Satellite Systems (GNSS) receivers are nowadays commonly used in monitoring applications, e.g., in estimating crustal and infrastructure displacements. This is basically due to recent improvements in GNSS instruments and methodologies that allow high-precision positioning, 24 h availability and semiautomatic data processing. In this paper, GNSS-estimated displacements on a dam structure have been analyzed and compared with pendulum data. This study has been carried out for the Eleonora D'Arborea (Cantoniera) dam, which is in Sardinia. Time series of pendulum and GNSS data over a time span of 2.5 years have been aligned so as to be comparable. Analytical models fitting these time series have been estimated and compared. These models were able to properly fit both pendulum and GNSS data, with a standard deviation of the residuals smaller than one millimeter. These encouraging results led to the conclusion that the GNSS technique can be profitably applied to dam monitoring, allowing a denser description, both in space and time, of dam displacements than the one based on pendulum observations. PMID:29498650
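The model-fitting step can be sketched on synthetic data. The assumed analytical model here (a yearly harmonic plus a linear trend) and the displacement/noise values are illustrative, not the paper's actual dam series; the point is checking that the residual standard deviation stays below one millimeter:

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.arange(0, 2.5, 1 / 365)               # 2.5 years, daily samples

# Synthetic displacement [mm]: thermal yearly cycle plus a slow trend,
# with GNSS-like noise (values are purely illustrative).
disp = (4.0 * np.sin(2 * np.pi * t + 0.6) + 0.8 * t
        + 0.5 * rng.standard_normal(t.size))

# Least-squares fit of the analytical model
# d(t) = a sin(2*pi*t) + b cos(2*pi*t) + c*t + d
A = np.column_stack([np.sin(2 * np.pi * t), np.cos(2 * np.pi * t),
                     t, np.ones_like(t)])
coef, *_ = np.linalg.lstsq(A, disp, rcond=None)
residuals = disp - A @ coef
```

The fitted harmonic amplitude, `np.hypot(coef[0], coef[1])`, recovers the seasonal displacement, and the residual scatter plays the role of the sub-millimeter standard deviation quoted in the abstract.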
NASA Astrophysics Data System (ADS)
García, Isaac A.; Llibre, Jaume; Maza, Susanna
2018-06-01
In this work we consider real analytic functions , where , Ω is a bounded open subset of , is an interval containing the origin, are parameters, and ε is a small parameter. We study the branching of the zero-set of at multiple points when the parameter ε varies. We apply the obtained results to improve the classical averaging theory for computing T-periodic solutions of λ-families of analytic T-periodic ordinary differential equations defined on , using the displacement functions defined by these equations. We call the coefficients in the Taylor expansion of in powers of ε the averaged functions. The main contribution consists in analyzing the role played by the multiple zeros of the first non-zero averaged function. The outcome is that these multiple zeros can be of two different classes depending on whether or not the zeros belong to the analytic set defined by the real variety associated with the ideal generated by the averaged functions in the Noetherian ring of all the real analytic functions at . We bound the maximum number of branches of isolated zeros that can bifurcate from each multiple zero z_0. Sometimes these bounds depend on the cardinalities of minimal bases of the former ideal. Several examples illustrate our results; they are compared with the classical theory and branching theory, and considered in the light of singularity theory of smooth maps. The examples range from polynomial vector fields to Abel differential equations and perturbed linear centers.
NASA Technical Reports Server (NTRS)
1974-01-01
Technical information is presented covering the areas of: (1) analytical instrumentation useful in the analysis of physical phenomena; (2) analytical techniques used to determine the performance of materials; and (3) systems and component analyses for design and quality control.
Student Support for Research in Hierarchical Control and Trajectory Planning
NASA Technical Reports Server (NTRS)
Martin, Clyde F.
1999-01-01
Generally, classical polynomial splines tend to exhibit unwanted undulations. In this work, we discuss a technique, based on control principles, for eliminating these undulations and increasing the smoothness properties of the spline interpolants. We give a generalization of the classical polynomial splines and show that this generalization is, in fact, a family of splines that covers the broad spectrum of polynomial, trigonometric and exponential splines. A particular element in this family is determined by the appropriate control data. It is shown that this technique is easy to implement. Several numerical and curve-fitting examples are given to illustrate the advantages of this technique over the classical approach. Finally, we discuss the convergence properties of the interpolant.
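The "unwanted undulations" of classical polynomial splines are easy to demonstrate on monotone data with a sharp rise. In the sketch below, a shape-preserving interpolant (PCHIP) stands in for an undulation-free alternative; it is not the control-theoretic spline family of the report:

```python
import numpy as np
from scipy.interpolate import CubicSpline, PchipInterpolator

# Monotone step-like data: a classical cubic spline overshoots
# (undulates) around the rise, while a shape-preserving
# interpolant stays within the data range.
x = np.array([0, 1, 2, 3, 4, 5], dtype=float)
y = np.array([0, 0, 0, 1, 1, 1], dtype=float)

xx = np.linspace(0, 5, 501)
classical = CubicSpline(x, y)(xx)
shape_preserving = PchipInterpolator(x, y)(xx)
```

The classical spline dips below 0 and rises above 1 near the step, which is exactly the kind of undulation the control-based construction in the report is designed to eliminate while retaining smoothness.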
NASA Technical Reports Server (NTRS)
Scargle, Jeffrey D.
2017-01-01
Questions in data analysis involving the concepts of time and measurement are often pushed into the background or reserved for philosophical discussion. Some examples are: a) Is causality a consequence of the laws of physics, or can the arrow of time be reversed? b) Can we determine the arrow of time of an event? c) Do we need the continuum hypothesis for the underlying function in any measurement process? d) Can we say anything about the analyticity of the underlying process of an event? e) Would it be valid to model a non-analytic process as a function of time? f) What are the implications of all these questions for classical Fourier techniques? However, in the age of big data gathered either from space missions supplying ultra-precise long time series or, e.g., LIGO data from the ground, the moment to bring these questions to the foreground seems to have arrived. The limitations of our understanding of some fundamental processes are emphasized by the lack of a solution for problems that have been open for more than two decades, such as the non-detection of solar g-modes or the modal identification of main-sequence stellar pulsators like delta Scuti stars. Flicker noise or 1/f noise, for example, attributed in solar-like stars to granulation, is analyzed mostly only to apply noise-reduction techniques, neither considering the classical problem of 1/f noise that was introduced 100 years ago, nor taking into account ergodic or non-ergodic solutions that make spectral analysis techniques inapplicable in practice. This topic was discussed by Nicholas W. Watkins during the ITISE meeting held in Granada in 2016, where he presented preliminary results of his research on Mandelbrot's related work.
We reproduce here his quotation of Mandelbrot (1999), "There is a sharp contrast between a highly anomalous ("non-white") noise that proceeds in ordinary clock time and a noise whose principal anomaly is that it is restricted to fractal time", suggesting a connection with the above proposed topics that could be phrased as the following additional questions: a) Is self-organized criticality (SOC) frequent in astrophysical phenomena? b) Could all fractals in nature be considered stochastic? c) Could we establish mathematical/physical relationships between chaotic and fractal behaviors in time series? d) Could the differences between fractals and chaos in terms of analyticity be used to understand the residuals of the fitting of stellar light curves? In this meeting we would like to approach these problems from a holistic and multidisciplinary perspective, taking into account not only technical issues but also their deeper implications. In particular, the concept of connectivity (introduced in Pascual-Granado et al., A&A, 2015) could be used to implement, within the framework of ARMA processes, an "arrow of time" (see attached document), and thus to study the possible implications for the concept of time as envisaged by Watkins.
Quantum calculus of classical vortex images, integrable models and quantum states
NASA Astrophysics Data System (ADS)
Pashaev, Oktay K.
2016-10-01
From the two-circle theorem described in terms of q-periodic functions, in the limit q→1 we derive the strip theorem and the stream function for the N-vortex problem. For a regular N-vortex polygon we find a compact expression for the velocity of uniform rotation and show that it represents a nonlinear oscillator. We describe q-dispersive extensions of the linear and nonlinear Schrödinger equations, as well as q-semiclassical expansions in terms of Bernoulli and Euler polynomials. Different kinds of q-analytic functions are introduced, including the pq-analytic and the golden analytic functions.
Simple analytical model of a thermal diode
NASA Astrophysics Data System (ADS)
Kaushik, Saurabh; Kaushik, Sachin; Marathe, Rahul
2018-05-01
Recently, much attention has been given to the manipulation of heat by constructing thermal devices such as thermal diodes, transistors and logic gates. Many of the proposed models rely on an asymmetry that leads to the desired effect, and the presence of non-linear interactions among the particles is also essential; however, such models lack an analytical understanding. Here we propose a simple, analytically solvable model of a thermal diode. Our model consists of classical spins in contact with multiple heat baths and constant external magnetic fields. Interestingly, the magnetic field is the only parameter required to obtain the effect of heat rectification.
NASA Astrophysics Data System (ADS)
Difilippo, Felix C.
2012-09-01
Within the context of general relativity theory we calculate, analytically, scattering signatures around a gravitational singularity: angular and time distributions of scattered massive objects and photons and the time and space modulation of Doppler effects. Additionally, the scattering and absorption cross sections for the gravitational interactions are calculated. The results of numerical simulations of the trajectories are compared with the analytical results.
Extended Rindler spacetime and a new multiverse structure
NASA Astrophysics Data System (ADS)
Araya, Ignacio J.; Bars, Itzhak
2018-04-01
This is the first of a series of papers in which we use analyticity properties of quantum fields propagating on a spacetime to uncover a new multiverse geometry when the classical geometry has horizons and/or singularities. The nature and origin of the "multiverse" idea presented in this paper, that is shared by the fields in the standard model coupled to gravity, are different from other notions of a multiverse. Via analyticity we are able to establish definite relations among the universes. In this paper we illustrate these properties for the extended Rindler space, while black hole spacetime and the cosmological geometry of mini-superspace (see Appendix B) will appear in later papers. In classical general relativity, extended Rindler space is equivalent to flat Minkowski space; it consists of the union of the four wedges in (u ,v ) light-cone coordinates as in Fig. 1. In quantum mechanics, the wavefunction is an analytic function of (u ,v ) that is sensitive to branch points at the horizons u =0 or v =0 , with branch cuts attached to them. The wave function is uniquely defined by analyticity on an infinite number of sheets in the cut analytic (u ,v ) spacetime. This structure is naturally interpreted as an infinite stack of identical Minkowski geometries, or "universes", connected to each other by analyticity across branch cuts, such that each sheet represents a different Minkowski universe when (u ,v ) are analytically continued to the real axis on any sheet. We show in this paper that, in the absence of interactions, information does not flow from one Rindler sheet to another. By contrast, for an eternal black hole spacetime, which may be viewed as a modification of Rindler that includes gravitational interactions, analyticity shows how information is "lost" due to a flow to other universes, enabled by an additional branch point and cut due to the black hole singularity.
Torsion of a Cosserat elastic bar with square cross section: theory and experiment
NASA Astrophysics Data System (ADS)
Drugan, W. J.; Lakes, R. S.
2018-04-01
An approximate analytical solution for the displacement and microrotation vector fields is derived for pure torsion of a prismatic bar with square cross section comprised of homogeneous, isotropic linear Cosserat elastic material. This is accomplished by analytical simplification coupled with use of the principle of minimum potential energy together with polynomial representations for the desired field components. Explicit approximate expressions are derived for cross section warp and for applied torque versus angle of twist of the bar. These show that torsional rigidity exceeds the classical elasticity value, the difference being larger for slender bars, and that cross section warp is less than the classical amount. Experimental measurements on two sets of 3D printed square cross section polymeric bars, each set having a different microstructure and four different cross section sizes, revealed size effects not captured by classical elasticity but consistent with the present analysis for physically sensible values of the Cosserat moduli. The warp can allow inference of Cosserat elastic constants independently of any sensitivity the material may have to dilatation gradients; warp also facilitates inference of Cosserat constants that are difficult to obtain via size effects.
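For reference, the classical-elasticity baseline that the Cosserat torsional rigidity is compared against can be computed from the standard Saint-Venant series solution for a square cross section; this is the textbook result, not the paper's Cosserat solution:

```python
import math

def torsion_constant_square(a, terms=50):
    """Classical Saint-Venant torsion constant J of a square cross
    section of side a, from the standard series solution
        J = a^4 * (1/3 - (64/pi^5) * sum_{n odd} tanh(n*pi/2) / n^5),
    giving J ≈ 0.1406 a^4.  The torque per unit twist rate of a
    classical bar is then T = G * J * theta'; the Cosserat bars of
    the paper exhibit rigidities exceeding this value."""
    s = sum(math.tanh(n * math.pi / 2) / n ** 5
            for n in range(1, 2 * terms, 2))
    return a ** 4 * (1 / 3 - 64 / math.pi ** 5 * s)
```

Comparing measured torque-twist slopes of the printed bars against `G * torsion_constant_square(a)` is what reveals the size effect: the smaller the cross section, the larger the excess over the classical prediction.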
Quench dynamics of a dissipative Rydberg gas in the classical and quantum regimes
NASA Astrophysics Data System (ADS)
Gribben, Dominic; Lesanovsky, Igor; Gutiérrez, Ricardo
2018-01-01
Understanding the nonequilibrium behavior of quantum systems is a major goal of contemporary physics. Much research is currently focused on the dynamics of many-body systems in low-dimensional lattices following a quench, i.e., a sudden change of parameters. Already such a simple setting poses substantial theoretical challenges for the investigation of the real-time postquench quantum dynamics. In classical many-body systems, the Kolmogorov-Mehl-Johnson-Avrami model describes the phase transformation kinetics of a system that is quenched across a first-order phase transition. Here, we show that a similar approach can be applied for shedding light on the quench dynamics of an interacting gas of Rydberg atoms, which has become an important experimental platform for the investigation of quantum nonequilibrium effects. We are able to gain an analytical understanding of the time evolution following a sudden quench from an initial state devoid of Rydberg atoms and identify strikingly different behaviors of the excitation growth in the classical and quantum regimes. Our approach allows us to describe quenches near a nonequilibrium phase transition and provides an approximate analytical solution deep in the quantum domain.
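The Kolmogorov-Mehl-Johnson-Avrami transformed fraction invoked above has a simple closed form that is easy to evaluate; the rate and exponent below are illustrative, not values fitted to the Rydberg-gas dynamics of the paper:

```python
import numpy as np

def kjma_fraction(t, rate, n):
    """Transformed fraction X(t) = 1 - exp(-(rate * t)**n) of the
    Kolmogorov-Mehl-Johnson-Avrami model.  The Avrami exponent n
    encodes the nucleation/growth dimensionality; the values used
    here are illustrative only."""
    return 1.0 - np.exp(-(rate * t) ** n)

t = np.linspace(0, 5, 200)
X = kjma_fraction(t, rate=1.0, n=3)   # characteristic S-shaped growth
```

The monotone S-shaped growth from an empty initial state toward saturation is the classical analogue of the post-quench excitation growth analyzed in the paper, with the quantum regime departing from this form.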
NASA Technical Reports Server (NTRS)
Berton, Jeffrey J.
1991-01-01
The analytical derivations of the non-axial thrust divergence losses for convergent-divergent nozzles are described, as well as how these calculations are embodied in the Navy/NASA engine computer program. The convergent-divergent geometries considered are simple classic axisymmetric nozzles, two-dimensional rectangular nozzles, and axisymmetric and two-dimensional plug nozzles. A simple, traditional, inviscid mathematical approach is used to deduce the influence of the ineffectual non-axial thrust as a function of the nozzle exit divergence angle.
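The classical inviscid divergence-loss results for these geometries reduce to simple closed forms. The functions below give the standard textbook factors for a conical axisymmetric nozzle and a two-dimensional wedge nozzle as a sketch; they are not the NNEP implementation itself:

```python
import math

def divergence_factor_conical(alpha_deg):
    """Classical axial-thrust divergence factor for a conical
    axisymmetric nozzle with half-angle alpha:
        lambda = (1 + cos(alpha)) / 2."""
    a = math.radians(alpha_deg)
    return (1 + math.cos(a)) / 2

def divergence_factor_2d(alpha_deg):
    """Two-dimensional (wedge) nozzle:
        lambda = sin(alpha) / alpha."""
    a = math.radians(alpha_deg)
    return math.sin(a) / a if a else 1.0
```

Both factors equal 1 at zero divergence angle and fall off with increasing exit half-angle, quantifying the non-axial ("ineffectual") component of the gross thrust.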
Cell-model prediction of the melting of a Lennard-Jones solid
DOE Office of Scientific and Technical Information (OSTI.GOV)
Holian, B.L.
The classical free energy of the Lennard-Jones 6-12 solid is computed from a single-particle anharmonic cell model with a correction to the entropy given by the classical correlational entropy of quasiharmonic lattice dynamics. The free energy of the fluid is obtained from the Hansen-Ree analytic fit to Monte Carlo equation-of-state calculations. The resulting predictions of the solid-fluid coexistence curves by this corrected cell model of the solid are in excellent agreement with the computer experiments.
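The Lennard-Jones 6-12 pair potential underlying the cell model, written in reduced units as a minimal sketch:

```python
def lj(r, epsilon=1.0, sigma=1.0):
    """Lennard-Jones 6-12 pair potential
        u(r) = 4*eps*[(sigma/r)**12 - (sigma/r)**6],
    with well depth eps at r_min = 2**(1/6) * sigma."""
    sr6 = (sigma / r) ** 6
    return 4 * epsilon * (sr6 ** 2 - sr6)

r_min = 2 ** (1 / 6)   # analytic location of the potential minimum
```

The anharmonic cell model of the abstract evaluates the free energy of a particle moving in the cage formed by superposing this pair potential over its fixed neighbors.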
The classical equation of state of fully ionized plasmas
NASA Astrophysics Data System (ADS)
Eisa, Dalia Ahmed
2011-03-01
The aim of this paper is to calculate the analytical form of the equation of state up to the third virial coefficient of a classical system interacting via an effective potential of fully ionized plasmas. The excess osmotic pressure is represented in the form of convergent series expansions in terms of the plasma parameter μ_ab = e_a e_b χ / (D k T), where χ² is the square of the inverse Debye radius. We consider only plasma in thermal equilibrium.
Deriving Earth Science Data Analytics Tools/Techniques Requirements
NASA Astrophysics Data System (ADS)
Kempler, S. J.
2015-12-01
Data analytics applications have made successful strides in the business world, where co-analyzing extremely large sets of independent variables has proven profitable. Today, most data analytics tools and techniques, sometimes applicable to Earth science, have targeted the business industry. In fact, the literature is nearly absent of discussion of Earth science data analytics. Earth science data analytics (ESDA) is the process of examining large amounts of data from a variety of sources to uncover hidden patterns, unknown correlations, and other useful information. ESDA is most often applied to data preparation, data reduction, and data analysis. Co-analysis of the increasing number and volume of Earth science datasets has become more prevalent, ushered in by the plethora of Earth science data sources generated by US programs, international programs, field experiments, ground stations, and citizen scientists. Through work associated with the Earth Science Information Partners (ESIP) Federation, ESDA types have been defined in terms of data analytics end goals, goals which are very different from those in business and require different tools and techniques. A sampling of use cases has been collected and analyzed in terms of data analytics end-goal types, volume, specialized processing, and other attributes. The goal of collecting these use cases is to better understand and specify requirements for data analytics tools and techniques yet to be implemented. This presentation will describe the attributes and preliminary findings of ESDA use cases, as well as provide an early analysis of the data analytics tool/technique requirements that would support specific ESDA type goals. Representative existing data analytics tools/techniques relevant to ESDA will also be addressed.
Improved gel electrophoresis matrix for hydrophobic protein separation and identification.
Tokarski, Caroline; Fillet, Marianne; Rolando, Christian
2011-03-01
We propose an improved acrylamide gel for the separation of hydrophobic proteins. The separation strategy is based on the incorporation of N-alkylated and N,N'-dialkylated acrylamide monomers into the gel composition in order to increase hydrophobic interactions between the gel matrix and the membrane proteins. Focusing on the most efficient monomer, N,N'-dimethylacrylamide, the potential of the new matrix was evaluated on membrane proteins of the human colon HCT-116 cell line. Protein analysis was performed using an adapted analytical strategy based on FT-ICR tandem mass spectrometry. As a result of this comparative study, including advanced reproducibility experiments, more hydrophobic proteins were identified in the new gel (average GRAVY: -0.085) than in the classical gel (average GRAVY: -0.411). Highly hydrophobic peptides were identified, reaching GRAVY values up to 1.450 and therefore indicating their probable location in the membrane. Focusing on predicted transmembrane domains, 27 proteins containing up to 11 transmembrane domains were identified in the hydrophobic gel, whereas in the classical gel only 5 proteins, each containing 1 transmembrane domain, were successfully identified. For example, multiple ionic channels and receptors, such as the sodium/potassium channel and the glutamate and transferrin receptors, were characterized in the hydrophobic gel, whereas these are traditionally detected using specific enrichment techniques such as immunoprecipitation. In total, the membrane proteins identified in the classical gel are well documented in the literature, while most of the membrane proteins identified only on the hydrophobic gel have rarely or never been described using a proteomics-based approach. 2010 Elsevier Inc. All rights reserved.
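The GRAVY values quoted above are grand averages of the standard Kyte-Doolittle hydropathy scale over a sequence; a minimal implementation:

```python
# Kyte-Doolittle hydropathy values per residue; GRAVY (grand average
# of hydropathy) is their mean over the sequence: positive values
# indicate hydrophobic sequences, negative values hydrophilic ones.
KD = {'A': 1.8, 'R': -4.5, 'N': -3.5, 'D': -3.5, 'C': 2.5,
      'Q': -3.5, 'E': -3.5, 'G': -0.4, 'H': -3.2, 'I': 4.5,
      'L': 3.8, 'K': -3.9, 'M': 1.9, 'F': 2.8, 'P': -1.6,
      'S': -0.8, 'T': -0.7, 'W': -0.9, 'Y': -1.3, 'V': 4.2}

def gravy(seq):
    """Grand average of hydropathy of a peptide/protein sequence."""
    return sum(KD[a] for a in seq) / len(seq)
```

For example, a stretch of isoleucine/leucine/valine scores strongly positive (membrane-like, as for the GRAVY 1.450 peptides above), whereas charged residues drive the score negative.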
Green analytical chemistry--theory and practice.
Tobiszewski, Marek; Mechlińska, Agata; Namieśnik, Jacek
2010-08-01
This tutorial review summarises the current state of green analytical chemistry with special emphasis on environmentally friendly sample preparation techniques. Green analytical chemistry is a part of the sustainable development concept; its history and origins are described. Miniaturisation of analytical devices and shortening the time elapsing between performing analysis and obtaining reliable analytical results are important aspects of green analytical chemistry. Solventless extraction techniques, the application of alternative solvents and assisted extractions are considered to be the main approaches complying with green analytical chemistry principles.
Sarkar, Sahotra
2015-10-01
This paper attempts a critical reappraisal of Nagel's (1961, 1970) model of reduction taking into account both traditional criticisms and recent defenses. This model treats reduction as a type of explanation in which a reduced theory is explained by a reducing theory after their relevant representational items have been suitably connected. In accordance with the deductive-nomological model, the explanation is supposed to consist of a logical deduction. Nagel was a pluralist about both the logical form of the connections between the reduced and reducing theories (which could be conditionals or biconditionals) and their epistemological status (as analytic connections, conventions, or synthetic claims). This paper defends Nagel's pluralism on both counts and, in the process, argues that the multiple realizability objection to reductionism is misplaced. It also argues that the Nagel model correctly characterizes reduction as a type of explanation. However, it notes that logical deduction must be replaced by a broader class of inferential techniques that allow for different types of approximation. Whereas Nagel (1970), in contrast to his earlier position (1961), recognized the relevance of approximation, he did not realize its full import for the model. Throughout the paper two case studies are used to illustrate the arguments: the putative reduction of classical thermodynamics to the kinetic theory of matter and that of classical genetics to molecular biology. Copyright © 2015. Published by Elsevier Ltd.
Classical topological paramagnetism
NASA Astrophysics Data System (ADS)
Bondesan, R.; Ringel, Z.
2017-05-01
Topological phases of matter are one of the hallmarks of quantum condensed matter physics. One of their striking features is a bulk-boundary correspondence wherein the topological nature of the bulk manifests itself on boundaries via exotic massless phases. In classical wave phenomena, analogous effects may arise; however, these cannot be viewed as equilibrium phases of matter. Here, we identify a set of rules under which robust equilibrium classical topological phenomena exist. We write simple and analytically tractable classical lattice models of spins and rotors in two and three dimensions which, at suitable parameter ranges, are paramagnetic in the bulk but nonetheless exhibit some unusual long-range or critical order on their boundaries. We point out the role of simplicial cohomology as a means of classifying, writing, and analyzing such models. This opens an experimental route for studying strongly interacting topological phases of spins.
Quantum dressing orbits on compact groups
NASA Astrophysics Data System (ADS)
Jurčo, Branislav; Šťovíček, Pavel
1993-02-01
The quantum double is shown to imply the dressing transformation on quantum compact groups and the quantum Iwasawa decomposition in the general case. Quantum dressing orbits are described explicitly as *-algebras. The dual coalgebras consisting of differential operators are related to the quantum Weyl elements. Moreover, the differential geometry on a quantum leaf allows a remarkably simple construction of irreducible *-representations of the algebras of quantum functions. Representation spaces then consist of analytic functions on classical phase spaces. These representations are also interpreted in the framework of quantization in the spirit of Berezin applied to symplectic leaves on classical compact groups. Convenient “coherent states” are introduced and a correspondence between classical and quantum observables is given.
Dam break problem for the focusing nonlinear Schrödinger equation and the generation of rogue waves
NASA Astrophysics Data System (ADS)
El, G. A.; Khamis, E. G.; Tovbis, A.
2016-09-01
We propose a novel, analytically tractable scenario of rogue wave formation in the framework of the small-dispersion focusing nonlinear Schrödinger (NLS) equation with the initial condition in the form of a rectangular barrier (a ‘box’). We use the Whitham modulation theory, combined with the nonlinear steepest descent for the semi-classical inverse scattering transform, to describe the evolution and interaction of two counter-propagating nonlinear wave trains—the dispersive dam break flows—generated in the NLS box problem. We show that the interaction dynamics results in the emergence of modulated large-amplitude quasi-periodic breather lattices whose amplitude profiles are closely approximated by the Akhmediev and Peregrine breathers within a certain space-time domain. Our semi-classical analytical results are shown to be in excellent agreement with the results of direct numerical simulations of the small-dispersion focusing NLS equation.
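For reference, the small-dispersion focusing NLS box problem described above is conventionally written (standard form, not quoted from the paper) as:

```latex
i\varepsilon\,\psi_t + \frac{\varepsilon^2}{2}\,\psi_{xx} + |\psi|^2\psi = 0,
\qquad 0 < \varepsilon \ll 1, \qquad
\psi(x,0) =
\begin{cases}
q, & |x| \le l,\\
0, & |x| > l,
\end{cases}
```

where the two dispersive dam break flows develop from the edges of the box at $x = \pm l$ and subsequently collide in the interior.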
NASA Astrophysics Data System (ADS)
Chen, Jiahui; Zhou, Hui; Duan, Changkui; Peng, Xinhua
2017-03-01
Entanglement, a unique quantum resource with no classical counterpart, remains at the heart of quantum information. The Greenberger-Horne-Zeilinger (GHZ) and W states are two inequivalent classes of multipartite entangled states which cannot be transformed into each other by means of local operations and classical communication (LOCC). In this paper, we present methods to prepare the GHZ and W states via global controls on a long-range Ising spin model. For the GHZ state, general solutions are obtained analytically for an arbitrary-size spin system, while for the W state we find a standard way to prepare it that is analytically illustrated in three- and four-spin systems and numerically demonstrated for larger systems. The number of parameters required in the numerical search increases only linearly with the size of the system.
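For concreteness, the two target state classes can be written down directly as state vectors; a small numpy sketch (the paper's preparation protocol on the Ising model is not reproduced here):

```python
import numpy as np

def ghz_state(n: int) -> np.ndarray:
    """(|0...0> + |1...1>)/sqrt(2) as a length-2^n state vector."""
    psi = np.zeros(2**n)
    psi[0] = psi[-1] = 1 / np.sqrt(2)
    return psi

def w_state(n: int) -> np.ndarray:
    """Equal superposition of all n single-excitation basis states."""
    psi = np.zeros(2**n)
    for k in range(n):
        psi[1 << k] = 1 / np.sqrt(n)  # basis state with qubit k excited
    return psi

# Both are normalized, but they carry different entanglement structure:
# tracing one qubit out of a GHZ state leaves a separable mixture, while
# a W state retains pairwise entanglement, hence their LOCC inequivalence.
print(round(float(ghz_state(3) @ ghz_state(3)), 6))  # 1.0
```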
Darwish, Hany W; Bakheit, Ahmed H; Abdelhameed, Ali S
2016-03-01
Simultaneous spectrophotometric analysis of a multi-component dosage form of olmesartan, amlodipine and hydrochlorothiazide used for the treatment of hypertension has been carried out using various chemometric methods. Multivariate calibration methods include classical least squares (CLS) executed by net analyte processing (NAP-CLS), orthogonal signal correction (OSC-CLS) and direct orthogonal signal correction (DOSC-CLS) in addition to multivariate curve resolution-alternating least squares (MCR-ALS). Results demonstrated the efficiency of the proposed methods as quantitative tools of analysis as well as their qualitative capability. The three analytes were determined precisely using the aforementioned methods in an external data set and in a dosage form after optimization of experimental conditions. Finally, the efficiency of the models was validated via comparison with the partial least squares (PLS) method in terms of accuracy and precision.
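As a hedged illustration of the classical least squares (CLS) step underlying the NAP, OSC and DOSC variants above, the sketch below uses synthetic spectra (not the paper's data) and assumes additive Beer-Lambert mixing, which is the premise of CLS:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic calibration set: A = C K, spectra as linear mixtures of
# pure-component spectra, plus small measurement noise.
n_wavelengths, n_components = 50, 3
pure_spectra = rng.random((n_components, n_wavelengths))    # K (true)
concentrations = rng.random((20, n_components))             # C (calibration)
spectra = concentrations @ pure_spectra                     # A = C K
spectra += 1e-4 * rng.standard_normal(spectra.shape)

# Calibration step: estimate pure-component spectra K from C and A.
K_hat = np.linalg.pinv(concentrations) @ spectra

# Prediction step: estimate concentrations of an unknown mixture.
c_true = np.array([0.2, 0.5, 0.3])
a_unknown = c_true @ pure_spectra
c_hat = a_unknown @ np.linalg.pinv(K_hat)

print(np.round(c_hat, 3))  # close to [0.2, 0.5, 0.3]
```

The NAP/OSC/DOSC preprocessing steps in the abstract filter the spectra before this regression to remove variation orthogonal to the analytes of interest.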
The effect of damping on a quantum system containing a Kerr-like medium
NASA Astrophysics Data System (ADS)
Mohamed, A.-B. A.; Sebawe Abdalla, M.; Obada, A.-S. F.
2018-05-01
An analytical description is given for a model which represents the interaction between SU(1,1) and SU(2) quantum systems, taking into account SU(1,1)-cavity damping and Kerr-medium properties. The analytic solution for the master equation of the density matrix is obtained. The effects of the damping parameter as well as of the Kerr-like medium features are examined. The atomic inversion is discussed, where the collapse and revival phenomenon is realized over the period of time considered. Our study is extended to include the degree of entanglement: the system shows partial entanglement in all cases; however, disentanglement is also observed. Death and rebirth of entanglement are seen in the system provided one selects suitable values of the parameters. The correlation function of the system shows non-classical as well as classical behavior.
Analytical Electrochemistry: Methodology and Applications of Dynamic Techniques.
ERIC Educational Resources Information Center
Heineman, William R.; Kissinger, Peter T.
1980-01-01
Reports developments involving the experimental aspects of finite-current analytical electrochemistry, including electrode materials (97 cited references), hydrodynamic techniques (56), spectroelectrochemistry (62), stripping voltammetry (70), voltammetric techniques (27), polarographic techniques (59), and miscellany (12). (CS)
NASA Astrophysics Data System (ADS)
Chien, Chih-Chun; Kouachi, Said; Velizhanin, Kirill A.; Dubi, Yonatan; Zwolak, Michael
2017-01-01
We present a method for calculating analytically the thermal conductance of a classical harmonic lattice with both alternating masses and nearest-neighbor couplings when placed between individual Langevin reservoirs at different temperatures. The method utilizes recent advances in analytic diagonalization techniques for certain classes of tridiagonal matrices. It recovers the results from a previous method that was applicable for alternating on-site parameters only, and extends the applicability to realistic systems in which masses and couplings alternate simultaneously. With this analytic result in hand, we show that the thermal conductance is highly sensitive to the modulation of the couplings. This is due to the existence of topologically induced edge modes at the lattice-reservoir interface and is also a reflection of the symmetries of the lattice. We make a connection to a recent work that demonstrates thermal transport is analogous to chemical reaction rates in solution given by Kramers' theory [Velizhanin et al., Sci. Rep. 5, 17506 (2015)], 10.1038/srep17506. In particular, we show that the turnover behavior in the presence of edge modes prevents calculations based on single-site reservoirs from coming close to the natural—or intrinsic—conductance of the lattice. Obtaining the correct value of the intrinsic conductance through simulation of even a small lattice where ballistic effects are important requires quite large extended reservoir regions. Our results thus offer a route for both the design and proper simulation of thermal conductance of nanoscale devices.
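The analytic diagonalization the method builds on can be illustrated in the simplest uniform case (a standard result for tridiagonal Toeplitz matrices; the paper's contribution is the extension to simultaneously alternating masses and couplings):

```python
import numpy as np

def toeplitz_tridiag_eigs(a: float, b: float, n: int) -> np.ndarray:
    """Closed-form eigenvalues of the n x n tridiagonal Toeplitz matrix
    with `a` on the diagonal and `b` on both off-diagonals:
        lambda_k = a + 2 b cos(k pi / (n + 1)),  k = 1..n.
    """
    k = np.arange(1, n + 1)
    return a + 2 * b * np.cos(k * np.pi / (n + 1))

# Verify against dense numerical diagonalization for a small lattice.
n, a, b = 8, 2.0, -1.0
M = (np.diag(np.full(n, a))
     + np.diag(np.full(n - 1, b), 1)
     + np.diag(np.full(n - 1, b), -1))
analytic = np.sort(toeplitz_tridiag_eigs(a, b, n))
numeric = np.sort(np.linalg.eigvalsh(M))
print(np.allclose(analytic, numeric))  # True
```

For a uniform harmonic chain, `a` and `b` play the role of on-site and nearest-neighbor spring constants; the alternating case produces the band structure and edge modes discussed above.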
Lewczuk, Piotr; Riederer, Peter; O'Bryant, Sid E; Verbeek, Marcel M; Dubois, Bruno; Visser, Pieter Jelle; Jellinger, Kurt A; Engelborghs, Sebastiaan; Ramirez, Alfredo; Parnetti, Lucilla; Jack, Clifford R; Teunissen, Charlotte E; Hampel, Harald; Lleó, Alberto; Jessen, Frank; Glodzik, Lidia; de Leon, Mony J; Fagan, Anne M; Molinuevo, José Luis; Jansen, Willemijn J; Winblad, Bengt; Shaw, Leslie M; Andreasson, Ulf; Otto, Markus; Mollenhauer, Brit; Wiltfang, Jens; Turner, Martin R; Zerr, Inga; Handels, Ron; Thompson, Alexander G; Johansson, Gunilla; Ermann, Natalia; Trojanowski, John Q; Karaca, Ilker; Wagner, Holger; Oeckl, Patrick; van Waalwijk van Doorn, Linda; Bjerke, Maria; Kapogiannis, Dimitrios; Kuiperij, H Bea; Farotti, Lucia; Li, Yi; Gordon, Brian A; Epelbaum, Stéphane; Vos, Stephanie J B; Klijn, Catharina J M; Van Nostrand, William E; Minguillon, Carolina; Schmitz, Matthias; Gallo, Carla; Lopez Mato, Andrea; Thibaut, Florence; Lista, Simone; Alcolea, Daniel; Zetterberg, Henrik; Blennow, Kaj; Kornhuber, Johannes
2018-06-01
In the 12 years since the publication of the first Consensus Paper of the WFSBP on biomarkers of neurodegenerative dementias, enormous advancement has taken place in the field, and the Task Force now takes the opportunity to extend and update the original paper. New concepts of Alzheimer's disease (AD) and the conceptual interactions between AD and dementia due to AD were developed, resulting in two sets of diagnostic/research criteria. Procedures for pre-analytical sample handling, biobanking, analyses and post-analytical interpretation of the results were intensively studied and optimised. A global quality control project was introduced to evaluate and monitor the inter-centre variability in measurements, with the goal of harmonisation of results. Contexts of use, and how to approach candidate biomarkers in biological specimens other than cerebrospinal fluid (CSF), e.g. blood, were precisely defined. Important developments were achieved in neuroimaging techniques, including studies comparing amyloid-β positron emission tomography results to fluid-based modalities. Similarly, development in research laboratory technologies, such as ultra-sensitive methods, raises our hopes of further improving the analytical and diagnostic accuracy of classic and novel candidate biomarkers. Synergistically, advancement in clinical trials of anti-dementia therapies energises and motivates the efforts to find and optimise the most reliable early diagnostic modalities. Finally, the first studies were published addressing the potential cost-effectiveness of biomarker-based diagnosis of neurodegenerative disorders.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sahoo, Satiprasad; Dhar, Anirban, E-mail: anirban.dhar@gmail.com; Kar, Amlanjyoti
Environmental management of an area describes a policy for its systematic and sustainable environmental protection. In the present study, regional environmental vulnerability assessment in the Hirakud command area of Odisha, India is envisaged based on the Grey Analytic Hierarchy Process method (Grey–AHP) using integrated remote sensing (RS) and geographic information system (GIS) techniques. Grey–AHP combines the advantages of the classical analytic hierarchy process (AHP) and the grey clustering method for accurate estimation of weight coefficients. It is a new method for environmental vulnerability assessment. The environmental vulnerability index (EVI) uses natural, environmental and human-impact-related factors, e.g., soil, geology, elevation, slope, rainfall, temperature, wind speed, normalized difference vegetation index, drainage density, crop intensity, agricultural DRASTIC value, population density and road density. The EVI map has been classified into four environmental vulnerability zones (EVZs), namely ‘low’, ‘moderate’, ‘high’, and ‘extreme’, encompassing 17.87%, 44.44%, 27.81% and 9.88% of the study area, respectively. The EVI map indicates that the northern part of the study area is more vulnerable from an environmental point of view, and shows close correlation with elevation. Effectiveness of the zone classification is evaluated using the grey clustering method; general effectiveness lies between the “better” and “common” classes. This analysis demonstrates the potential applicability of the methodology. - Highlights: • Environmental vulnerability zone identification based on Grey Analytic Hierarchy Process (AHP) • The effectiveness evaluation by means of a grey clustering method with support from AHP • Use of grey approach eliminates the excessive dependency on the experience of experts.
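The classical AHP weighting step that Grey–AHP refines can be sketched as follows (Saaty's principal-eigenvector method); the pairwise judgments below are hypothetical, chosen purely for illustration and not taken from the paper:

```python
import numpy as np

def ahp_weights(pairwise: np.ndarray):
    """Priority weights from a reciprocal pairwise-comparison matrix:
    the principal eigenvector, normalized to sum to 1. Also returns the
    consistency index CI = (lambda_max - n) / (n - 1); small CI means
    the judgments are nearly consistent."""
    vals, vecs = np.linalg.eig(pairwise)
    i = int(np.argmax(vals.real))
    w = np.abs(vecs[:, i].real)
    w /= w.sum()
    n = len(pairwise)
    ci = (vals[i].real - n) / (n - 1)
    return w, ci

# Hypothetical 3-factor example (e.g. slope vs. rainfall vs. drainage
# density) using Saaty's 1-9 judgment scale.
P = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])
w, ci = ahp_weights(P)
print(np.round(w, 3), round(ci, 4))
```

Grey clustering, per the abstract, is layered on top of this to reduce the dependence of the weights on individual expert judgment.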
USDA-ARS?s Scientific Manuscript database
Integrating classical biological control with other management techniques such as herbicide, fire, mechanical control, grazing, or plant competition, can be the most effective way to manage invasive weeds in natural areas and rangelands. Biological control agents can be protected from potential nega...
Hahn, David W; Omenetto, Nicoló
2010-12-01
Laser-induced breakdown spectroscopy (LIBS) has become a very popular analytical method in the last decade in view of some of its unique features such as applicability to any type of sample, practically no sample preparation, remote sensing capability, and speed of analysis. The technique has a remarkably wide applicability in many fields, and the number of applications is still growing. From an analytical point of view, the quantitative aspects of LIBS may be considered its Achilles' heel, first due to the complex nature of the laser-sample interaction processes, which depend upon both the laser characteristics and the sample material properties, and second due to the plasma-particle interaction processes, which are space and time dependent. Together, these may cause undesirable matrix effects. Ways of alleviating these problems rely upon the description of the plasma excitation-ionization processes through the use of classical equilibrium relations and therefore on the assumption that the laser-induced plasma is in local thermodynamic equilibrium (LTE). Even in this case, the transient nature of the plasma and its spatial inhomogeneity need to be considered and overcome in order to justify the theoretical assumptions made. This first article focuses on the basic diagnostics aspects and presents a review of the past and recent LIBS literature pertinent to this topic. Previous research on non-laser-based plasma literature, and the resulting knowledge, is also emphasized. The aim is, on one hand, to make the readers aware of such knowledge and on the other hand to trigger the interest of the LIBS community, as well as the larger analytical plasma community, in attempting some diagnostic approaches that have not yet been fully exploited in LIBS.
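One of the classical equilibrium relations referred to above is the Boltzmann-plot method: under LTE, the reduced intensity of an emission line from upper level $k$ is linear in the upper-level energy, so the plasma temperature follows from the slope (standard form; $I$ is line intensity, $\lambda$ wavelength, $g_k$ degeneracy, $A_{ki}$ transition probability, $E_k$ upper-level energy, $U(T)$ the partition function and $N$ the species number density):

```latex
\ln\!\left(\frac{I_{ki}\,\lambda_{ki}}{g_k\,A_{ki}}\right)
  = -\frac{E_k}{k_B T}
  + \ln\!\left(\frac{h c\, N}{4\pi\, U(T)}\right)
```

The transient, spatially inhomogeneous character of the laser-induced plasma noted in the abstract is precisely what complicates assigning a single temperature $T$ to such a plot.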
Merging OLTP and OLAP - Back to the Future
NASA Astrophysics Data System (ADS)
Lehner, Wolfgang
When the terms "Data Warehousing" and "Online Analytical Processing" were coined in the 1990s by Kimball, Codd, and others, there was an obvious need to separate the data and workload for operational, transactional-style processing from those for decision making, which implies complex analytical queries over large and historic data sets. Large data warehouse infrastructures have been set up to cope with the special requirements of analytical query answering for multiple reasons: for example, analytical thinking heavily relies on predefined navigation paths to guide the user through the data set and to provide different views on different aggregation levels. Multi-dimensional queries exploiting hierarchically structured dimensions lead to complex star queries at a relational backend, which could hardly be handled by classical relational systems.
Galvão, Elson Silva; Santos, Jane Meri; Lima, Ana Teresa; Reis, Neyval Costa; Orlando, Marcos Tadeu D'Azeredo; Stuetz, Richard Michael
2018-05-01
Epidemiological studies have shown the association of airborne particulate matter (PM) size and chemical composition with health problems affecting the cardiorespiratory and central nervous systems. PM also acts as cloud condensation nuclei (CCN) or ice nuclei (IN), taking part in the cloud formation process, and can therefore impact the climate. Several works use different analytical techniques for PM chemical and physical characterization to supply information to source apportionment models that help environmental agencies assess accountability for damages. Despite the numerous analytical techniques described in the literature for PM characterization, laboratories are normally limited to the techniques available in-house, which raises the question of whether a given technique is suitable for the purpose of a specific experimental work. This work summarizes the main available technologies for PM characterization, serving as a guide for readers to find the most appropriate technique(s) for their investigation. Elemental analysis techniques such as atomic-spectrometry-based and X-ray-based techniques, organic and carbonaceous techniques, and surface analysis techniques are discussed, illustrating their main features as well as their advantages and drawbacks. We also discuss the trends in analytical techniques used over the last two decades. The choice among all techniques is a function of a number of parameters, such as the relevant particle physical properties, sampling and measuring time, access to available facilities and the costs associated with equipment acquisition, among other considerations. An analytical guide map is presented as a guideline for choosing the most appropriate technique for the analytical information required. Copyright © 2018 Elsevier Ltd. All rights reserved.
Moazami, Hamid Reza; Hosseiny Davarani, Saied Saeed; Mohammadi, Jamil; Nojavan, Saeed; Abrari, Masoud
2015-09-03
The distribution of electric field vectors was first calculated for electromembrane extraction (EME) systems in classical and cylindrical electrode geometries. The results showed that the supported liquid membrane (SLM) has a general field-amplifying effect due to its lower dielectric constant in comparison with the aqueous donor/acceptor solutions. The calculated norms of the electric field vector showed that a DC voltage of 50 V can create electric field strengths of up to 64 kV m(-1) and 111 kV m(-1) in the classical and cylindrical geometries, respectively. In both cases, the electric field strength reached its peak value on the inner wall of the SLM. In the classical geometry, the field strength was a function of the polar position on the SLM, whereas the field strength in the cylindrical geometry was angularly uniform. In order to investigate the effect of electrode geometry on the performance of real EME systems, the analysis was carried out in three different geometries, including classical, helical and cylindrical arrangements, using naproxen and sodium diclofenac as the model analytes. Despite higher field strength and extended cross-sectional area, the helical and cylindrical geometries gave lower recoveries than the classical EME. The observed decline in signal contradicts the relations governing migration and diffusion, implying that a third driving force is involved in EME: the interaction between the radially inhomogeneous electric field and the analyte in its neutral form. Copyright © 2015 Elsevier B.V. All rights reserved.
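For intuition about the cylindrical geometry, the field between coaxial electrodes in a single homogeneous dielectric follows the textbook coaxial formula; the sketch below uses assumed, illustrative dimensions (not the paper's) and ignores the multi-layer dielectric effect that further amplifies the field inside the low-permittivity SLM:

```python
import math

def coaxial_field(voltage: float, r: float, r_in: float, r_out: float) -> float:
    """Field magnitude (V/m) at radius r between coaxial electrodes of
    radii r_in < r < r_out, single homogeneous dielectric:
        E(r) = U / (r * ln(r_out / r_in)).
    The field is radially inhomogeneous and maximal at the inner wall,
    mirroring the peak on the inner SLM wall reported above."""
    return voltage / (r * math.log(r_out / r_in))

# Illustrative (assumed) dimensions: 50 V, 0.3 mm inner electrode, 5 mm vial.
U, r_in, r_out = 50.0, 0.3e-3, 5.0e-3
print(f"{coaxial_field(U, r_in, r_in, r_out):.0f} V/m at the inner wall")
```

Even with these guessed dimensions, a 50 V potential yields fields on the order of tens of kV per metre, the same order of magnitude as the values quoted in the abstract.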
Speed and heart-rate profiles in skating and classical cross-country skiing competitions.
Bolger, Conor M; Kocbach, Jan; Hegge, Ann Magdalen; Sandbakk, Øyvind
2015-10-01
To compare the speed and heart-rate profiles during international skating and classical competitions in male and female world-class cross-country skiers. Four male and 5 female skiers performed individual time trials of 15 km (men) and 10 km (women) in the skating and classical techniques on 2 consecutive days. Races were performed on the same 5-km course. The course was mapped with GPS and a barometer to provide a valid course and elevation profile. Time, speed, and heart rate were determined for uphill, flat, and downhill terrains throughout the entire competition by wearing a GPS and a heart-rate monitor. Times in uphill, flat, and downhill terrain were ~55%, 15-20%, and 25-30%, respectively, of the total race time for both techniques and genders. The average speed differences between skating and classical skiing were 9% and 11% for men and women, respectively, and these values were 12% and 15% for uphill, 8% and 13% for flat (all P < .05), and 2% and 1% for downhill terrain. The average speeds for men were 9% and 11% faster than for women in skating and classical, respectively, with corresponding numbers of 11% and 14% for uphill, 6% and 11% for flat, and 4% and 5% for downhill terrain (all P < .05). Heart-rate profiles were relatively independent of technique and gender. The greatest performance differences between the skating and classical techniques and between the 2 genders were found on uphill terrain. Therefore, these speed differences could not be explained by variations in exercise intensity.
NASA Astrophysics Data System (ADS)
Wollocko, Arthur; Danczyk, Jennifer; Farry, Michael; Jenkins, Michael; Voshell, Martin
2015-05-01
The proliferation of sensor technologies continues to impact Intelligence Analysis (IA) work domains. A historical procurement focus on sensor platform development and acquisition has resulted in increasingly advanced collection systems; however, such systems often demonstrate classic data-overload conditions by placing increased burdens on already overtaxed human operators and analysts. Support technologies and improved interfaces have begun to emerge to ease that burden, but these often focus on single modalities or sensor platforms rather than underlying operator and analyst support needs, resulting in systems that do not adequately leverage natural human attentional competencies, unique skills, and training. One particular reason why emerging support tools often fail is the gap between military applications and their functions, and the functions and capabilities afforded by cutting-edge technology employed daily by modern knowledge workers, who are increasingly "digitally native." With the entry of Generation Y into these workplaces, "net generation" analysts, who are familiar with socially driven platforms that excel at giving users insight into large data sets while keeping cognitive burdens at a minimum, are creating opportunities for enhanced workflows. By using these ubiquitous platforms, net generation analysts have trained skills in discovering new information socially, tracking trends among affinity groups, and disseminating information. However, these functions are currently under-supported by existing tools. In this paper, we describe how socially driven techniques can be contextualized to frame complex analytical threads throughout the IA process. This paper focuses specifically on collaborative support technology development efforts for a team of operators and analysts.
Our work focuses on under-supported functions in current working environments, and identifies opportunities to improve a team's ability to discover new information and disseminate insightful analytic findings. We describe our Cognitive Systems Engineering approach to developing a novel collaborative enterprise IA system that combines modern collaboration tools with familiar contemporary social technologies. Our current findings detail specific cognitive and collaborative work support functions that defined the design requirements for a prototype analyst collaborative support environment.
Off-diagonal expansion quantum Monte Carlo
NASA Astrophysics Data System (ADS)
Albash, Tameem; Wagenbreth, Gene; Hen, Itay
2017-12-01
We propose a Monte Carlo algorithm designed to simulate quantum as well as classical systems at equilibrium, bridging the algorithmic gap between quantum and classical thermal simulation algorithms. The method is based on a decomposition of the quantum partition function that can be viewed as a series expansion about its classical part. We argue that the algorithm not only provides a theoretical advancement in the field of quantum Monte Carlo simulations, but is optimally suited to tackle quantum many-body systems that exhibit a range of behaviors from "fully quantum" to "fully classical," in contrast to many existing methods. We demonstrate the advantages, sometimes by orders of magnitude, of the technique by comparing it against existing state-of-the-art schemes such as path integral quantum Monte Carlo and stochastic series expansion. We also illustrate how our method allows for the unification of quantum and classical thermal parallel tempering techniques into a single algorithm and discuss its practical significance.
Automated Predictive Big Data Analytics Using Ontology Based Semantics.
Nural, Mustafa V; Cotterell, Michael E; Peng, Hao; Xie, Rui; Ma, Ping; Miller, John A
2015-10-01
Predictive analytics in the big data era is taking on an ever increasingly important role. Issues related to choice on modeling technique, estimation procedure (or algorithm) and efficient execution can present significant challenges. For example, selection of appropriate and optimal models for big data analytics often requires careful investigation and considerable expertise which might not always be readily available. In this paper, we propose to use semantic technology to assist data analysts and data scientists in selecting appropriate modeling techniques and building specific models as well as the rationale for the techniques and models selected. To formally describe the modeling techniques, models and results, we developed the Analytics Ontology that supports inferencing for semi-automated model selection. The SCALATION framework, which currently supports over thirty modeling techniques for predictive big data analytics is used as a testbed for evaluating the use of semantic technology.
Limb Lengthening and Then Insertion of an Intramedullary Nail: A Case-matched Comparison
Kleinman, Dawn; Fragomen, Austin T.; Ilizarov, Svetlana
2008-01-01
Distraction osteogenesis is an effective method for lengthening, deformity correction, and treatment of nonunions and bone defects. The classic method uses an external fixator for both distraction and consolidation, leading to lengthy times in frames and a risk of refracture after frame removal. We suggest a new technique, lengthening and then nailing (LATN), in which the frame is used for gradual distraction and a reamed intramedullary nail is then inserted to support the bone during the consolidation phase, allowing early removal of the external fixator. We performed a retrospective case-matched comparison of patients lengthened with the LATN technique (39 limbs in 27 patients) versus the classic technique (34 limbs in 27 patients). The LATN group wore the external fixator for less time than the classic group (12 versus 29 weeks). The LATN group had a lower external fixation index (0.5 versus 1.9) and a lower bone healing index (0.8 versus 1.9) than the classic group. LATN confers advantages over the classic method, including shorter times in external fixation, quicker bone healing, and protection against refracture. There are also advantages over the lengthening-over-a-nail and internal lengthening nail techniques. Level of Evidence: Level III, therapeutic study. See the Guidelines for Authors for a complete description of levels of evidence. PMID:18800209
Methods for Multiloop Identification of Visual and Neuromuscular Pilot Responses.
Olivari, Mario; Nieuwenhuizen, Frank M; Venrooij, Joost; Bülthoff, Heinrich H; Pollini, Lorenzo
2015-12-01
In this paper, identification methods are proposed to estimate the neuromuscular and visual responses of a multiloop pilot model. A conventional and widely used technique for simultaneous identification of the neuromuscular and visual systems makes use of cross-spectral density estimates. This paper shows that this technique requires a specific noninterference hypothesis, often implicitly assumed, that may be difficult to meet during actual experimental designs. A mathematical justification of the necessity of the noninterference hypothesis is given. Furthermore, two methods are proposed that do not have the same limitations. The first method is based on autoregressive models with exogenous inputs, whereas the second one combines cross-spectral estimators with interpolation in the frequency domain. The two identification methods are validated by offline simulations and contrasted with the classic method. The results reveal that the classic method fails when the noninterference hypothesis is not fulfilled; on the contrary, the two proposed techniques give reliable estimates. Finally, the three identification methods are applied to experimental data from a closed-loop control task with pilots. The two proposed techniques give comparable estimates, different from those obtained by the classic method. The differences match those found with the simulations. Thus, the two identification methods provide a good alternative to the classic method and make it possible to simultaneously estimate a human's neuromuscular and visual responses in cases where the classic method fails.
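The first proposed method rests on fitting an autoregressive model with exogenous inputs (ARX) by least squares. As a minimal sketch of that idea (a single-loop, noise-free synthetic system, far simpler than the multiloop pilot model in the abstract), the model coefficients can be recovered from a regression on lagged outputs and inputs:

```python
import numpy as np

# Minimal ARX(1,1) identification by least squares on synthetic data:
# y[t] = a*y[t-1] + b*u[t-1]  (noise-free here for clarity).
rng = np.random.default_rng(0)
a_true, b_true = 0.8, 0.5
u = rng.standard_normal(200)          # exogenous input (e.g. a forcing signal)
y = np.zeros(200)
for t in range(1, 200):
    y[t] = a_true * y[t - 1] + b_true * u[t - 1]

# Each regression row [y[t-1], u[t-1]] predicts y[t]
Phi = np.column_stack([y[:-1], u[:-1]])
theta, *_ = np.linalg.lstsq(Phi, y[1:], rcond=None)
a_hat, b_hat = theta
```

With persistently exciting input and no noise, the estimates match the true coefficients exactly, which is what makes ARX fitting attractive when the cross-spectral noninterference hypothesis cannot be guaranteed.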
Apparent Mass Nonlinearity for Paired Oscillating Plates
NASA Astrophysics Data System (ADS)
Granlund, Kenneth; Ol, Michael
2014-11-01
The classical potential-flow problem of a plate oscillating sinusoidally at small amplitude, in a direction normal to its plane, has a well-known analytical solution: a fluid "mass," multiplied by plate acceleration, equals the force on the plate. This so-called apparent mass is analytically equal to that of a cylinder of fluid with diameter equal to the plate chord. The force is directly proportional to frequency squared. Here we consider experimentally a generalization, where two coplanar plates of equal chord are placed at some lateral distance apart. For spacing of ~0.5 chord and larger between the two plates, the analytical solution for a single plate can simply be doubled. Zero spacing means a plate of twice the chord and therefore a heuristic cylinder of fluid of twice the cross-sectional area. This limit is approached for plate spacing <0.5c. For a spacing of 0.1-0.2c, the force due to apparent mass was found to increase with frequency when normalized by frequency squared; this is a nonlinearity and a departure from the classical theory. Flow visualization in a water tank suggests that such departure can be imputed to vortex shedding from the plates' edges inside the inter-plate gap.
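The classical single-plate result quoted above can be sketched directly: the apparent mass per unit span equals that of a fluid cylinder with diameter equal to the chord, and for sinusoidal motion the peak force scales with frequency squared. The fluid and geometry values below are illustrative assumptions, not the experiment's:

```python
import math

def apparent_mass_per_span(rho, chord):
    """Classical apparent mass of a flat plate per unit span:
    a fluid cylinder of diameter equal to the plate chord."""
    return rho * math.pi * chord**2 / 4.0

def force_amplitude_per_span(rho, chord, amp, omega):
    """Peak force for plunge motion y(t) = amp*sin(omega*t):
    F = m_a * amp * omega**2, i.e. proportional to frequency squared."""
    return apparent_mass_per_span(rho, chord) * amp * omega**2

m_a = apparent_mass_per_span(rho=1000.0, chord=0.1)      # water, 10 cm chord
f1 = force_amplitude_per_span(1000.0, 0.1, 0.01, 1.0)
f2 = force_amplitude_per_span(1000.0, 0.1, 0.01, 2.0)    # doubled frequency
```

The ratio f2/f1 = 4 expresses the classical frequency-squared scaling; the measured departure reported in the abstract is precisely a deviation from this ratio at small plate spacing.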
Lane, Darius J. R.; Lawen, Alfons
2014-01-01
Vitamin C (ascorbate) plays numerous important roles in cellular metabolism, many of which have only come to light in recent years. For instance, within the brain, ascorbate acts in a neuroprotective and neuromodulatory manner that involves ascorbate cycling between neurons and vicinal astrocytes - a relationship that appears to be crucial for brain ascorbate homeostasis. Additionally, emerging evidence strongly suggests that ascorbate plays a far greater role in regulating cellular and systemic iron metabolism than is classically recognized. The increasing recognition of the integral role of ascorbate in normal and deregulated cellular and organismal physiology demands a range of medium-throughput and high-sensitivity analytic techniques that can be executed without the need for highly expensive specialist equipment. Here we provide explicit instructions for a medium-throughput, specific and relatively inexpensive microplate assay for the determination of both intra- and extracellular ascorbate in cell culture. PMID:24747535
Late-time structure of the Bunch-Davies de Sitter wavefunction
DOE Office of Scientific and Technical Information (OSTI.GOV)
Anninos, Dionysios; Anous, Tarek; Freedman, Daniel Z.
2015-11-30
We examine the late time behavior of the Bunch-Davies wavefunction for interacting light fields in a de Sitter background. We use perturbative techniques developed in the framework of AdS/CFT, and analytically continue to compute tree and loop level contributions to the Bunch-Davies wavefunction. We consider self-interacting scalars of general mass, but focus especially on the massless and conformally coupled cases. We show that certain contributions grow logarithmically in conformal time both at tree and loop level. We also consider gauge fields and gravitons. The four-dimensional Fefferman-Graham expansion of classical asymptotically de Sitter solutions is used to show that the wavefunction contains no logarithmic growth in the pure graviton sector at tree level. Finally, assuming a holographic relation between the wavefunction and the partition function of a conformal field theory, we interpret the logarithmic growths in the language of conformal field theory.
Drevinskas, Tomas; Mickienė, Rūta; Maruška, Audrius; Stankevičius, Mantas; Tiso, Nicola; Mikašauskaitė, Jurgita; Ragažinskienė, Ona; Levišauskas, Donatas; Bartkuvienė, Violeta; Snieškienė, Vilija; Stankevičienė, Antanina; Polcaro, Chiara; Galli, Emanuela; Donati, Enrica; Tekorius, Tomas; Kornyšova, Olga; Kaškonienė, Vilma
2016-02-01
The miniaturization and optimization of a white rot fungal bioremediation experiment is described in this paper. The optimized procedure allows determination of the degradation kinetics of anthracene. The miniaturized procedure requires only 2.5 ml of culture medium. The experiment is more precise, robust, and better controlled compared to classical tests in flasks. Using this technique, different parts, i.e., the culture medium, the fungi, and the cotton seal, can be analyzed. A simple sample preparation speeds up the analytical process. Experiments performed show degradation of anthracene up to approximately 60% by Irpex lacteus and up to approximately 40% by Pleurotus ostreatus in 25 days. Bioremediation of anthracene by the consortium of I. lacteus and P. ostreatus shows biodegradation of anthracene up to approximately 56% in 23 days. At the end of the experiment, the surface tension of the culture medium had decreased compared to the blank, indicating generation of surfactant compounds.
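The abstract reports endpoint degradation percentages. If one assumes first-order decay kinetics (an assumption for illustration; the abstract does not state the kinetic model), the rate constant and half-life follow directly from an endpoint:

```python
import math

def first_order_rate(fraction_degraded, days):
    """Rate constant k (per day) assuming first-order decay C(t) = C0*exp(-k*t)."""
    return -math.log(1.0 - fraction_degraded) / days

# I. lacteus: ~60% anthracene degradation in 25 days (from the abstract)
k = first_order_rate(0.60, 25.0)        # ~0.037 per day
half_life = math.log(2.0) / k           # ~19 days
```

Such a back-of-the-envelope rate constant is useful for comparing strains (e.g. I. lacteus vs. P. ostreatus) on a common scale, provided the first-order assumption is reasonable over the observation window.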
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harstad, E. N.; Harlow, Francis Harvey; Schreyer, H. L.
Our goal is to develop constitutive relations for the behavior of a solid polymer during high-strain-rate deformations. In contrast to the classic thermodynamic techniques for deriving stress-strain response in static (equilibrium) circumstances, we employ a statistical-mechanics approach, in which we evolve a probability distribution function (PDF) for the velocity fluctuations of the repeating units of the chain. We use a Langevin description for the dynamics of a single repeating unit and a Liouville equation to describe the variations of the PDF. Moments of the PDF give the conservation equations for a single polymer chain embedded in other similar chains. To extract single-chain analytical constitutive relations, these equations have been solved for representative loading paths. By this process we discover that a measure of nonuniform chain link displacement serves this purpose very well. We then derive an evolution equation for the descriptor function, with the result being a history-dependent constitutive relation.
First Order Reliability Application and Verification Methods for Semistatic Structures
NASA Technical Reports Server (NTRS)
Verderaime, Vincent
1994-01-01
Escalating risks of aerostructures stimulated by increasing size, complexity, and cost should no longer be ignored by conventional deterministic safety design methods. The deterministic pass-fail concept is incompatible with probability and risk assessments, its stress audits are shown to be arbitrary and incomplete, and it compromises high strength materials performance. A reliability method is proposed which combines first order reliability principles with deterministic design variables and conventional test techniques to surmount current deterministic stress design and audit deficiencies. Accumulative and propagation design uncertainty errors are defined and appropriately implemented into the classical safety index expression. The application is reduced to solving for a factor that satisfies the specified reliability and compensates for uncertainty errors, and then using this factor as, and instead of, the conventional safety factor in stress analyses. The resulting method is consistent with current analytical skills and verification practices, the culture of most designers, and the pace of semistatic structural designs.
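The classical safety index mentioned above has a simple first-order form when resistance R and stress S are taken as normal variables: beta = (mu_R - mu_S) / sqrt(sd_R^2 + sd_S^2), with failure probability Phi(-beta). A minimal sketch with illustrative numbers (not from the paper, which additionally folds design uncertainty errors into this expression):

```python
import math

def safety_index(mu_r, sd_r, mu_s, sd_s):
    """Classical first-order safety index for normal resistance R and stress S."""
    return (mu_r - mu_s) / math.sqrt(sd_r**2 + sd_s**2)

def failure_probability(beta):
    """P(R - S < 0) = Phi(-beta) for the normal case, via the complementary
    error function."""
    return 0.5 * math.erfc(beta / math.sqrt(2.0))

beta = safety_index(mu_r=50.0, sd_r=5.0, mu_s=30.0, sd_s=4.0)  # illustrative units
pf = failure_probability(beta)
```

A beta of about 3.1 here corresponds to a failure probability near 1e-3; the paper's proposal is effectively to solve such an expression backwards for a factor that delivers a specified reliability.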
NMR relaxation rate in quasi one-dimensional antiferromagnets
NASA Astrophysics Data System (ADS)
Capponi, Sylvain; Dupont, Maxime; Laflorencie, Nicolas; Sengupta, Pinaki; Shao, Hui; Sandvik, Anders W.
We compare results of different numerical approaches to compute the NMR relaxation rate 1 /T1 in quasi one-dimensional (1d) antiferromagnets. In the purely 1d regime, recent numerical simulations using DMRG have provided the full crossover behavior from the classical regime at high temperature to universal Tomonaga-Luttinger liquid behavior at low energy (in the gapless case) or activated behavior (in the gapped case). For quasi 1d models, we can use mean-field approaches to reduce the problem to a 1d one that can be studied using DMRG. But in some cases, we can also simulate the full microscopic model using quantum Monte-Carlo techniques. This allows us to compute dynamical correlations in imaginary time, and we will discuss recent advances in performing stochastic analytic continuation to obtain real-frequency spectra. Finally, we connect our results to experiments on various quasi 1d materials.
Are the classic diagnostic methods in mycology still state of the art?
Wiegand, Cornelia; Bauer, Andrea; Brasch, Jochen; Nenoff, Pietro; Schaller, Martin; Mayser, Peter; Hipler, Uta-Christina; Elsner, Peter
2016-05-01
The diagnostic workup of cutaneous fungal infections is traditionally based on microscopic KOH preparations as well as culturing of the causative organism from sample material. Another possible option is the detection of fungal elements by dermatohistology. If performed correctly, these methods are generally suitable for the diagnosis of mycoses. However, the advent of personalized medicine and the tasks arising therefrom require new procedures marked by simplicity, specificity, and swiftness. The additional use of DNA-based molecular techniques further enhances sensitivity and diagnostic specificity, and reduces the diagnostic interval to 24-48 hours, compared to weeks required for conventional mycological methods. Given the steady evolution in the field of personalized medicine, simple analytical PCR-based systems are conceivable, which allow for instant diagnosis of dermatophytes in the dermatology office (point-of-care tests). © 2016 Deutsche Dermatologische Gesellschaft (DDG). Published by John Wiley & Sons Ltd.
Discovering Hidden Controlling Parameters using Data Analytics and Dimensional Analysis
NASA Astrophysics Data System (ADS)
Del Rosario, Zachary; Lee, Minyong; Iaccarino, Gianluca
2017-11-01
Dimensional Analysis is a powerful tool, one which takes a priori information and produces important simplifications. However, if this a priori information - the list of relevant parameters - is missing a relevant quantity, then the conclusions from Dimensional Analysis will be incorrect. In this work, we present novel conclusions in Dimensional Analysis, which provide a means to detect this failure mode of missing or hidden parameters. These results are based on a restated form of the Buckingham Pi theorem that reveals a ridge function structure underlying all dimensionless physical laws. We leverage this structure by constructing a hypothesis test based on sufficient dimension reduction, allowing for an experimental data-driven detection of hidden parameters. Both theory and examples will be presented, using classical turbulent pipe flow as the working example. Keywords: experimental techniques, dimensional analysis, lurking variables, hidden parameters, buckingham pi, data analysis. First author supported by the NSF GRFP under Grant Number DGE-114747.
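The counting step of the Buckingham Pi theorem underlying this work can be sketched concretely: build the dimensional matrix of the candidate parameter list and compute its rank; the number of dimensionless groups is the parameter count minus the rank. The drag-on-a-sphere example below is illustrative (not the paper's pipe-flow case); if a relevant parameter such as wall roughness were missing from the list, the resulting groups could not collapse the data, which is the failure mode the paper's hypothesis test detects:

```python
import numpy as np

# Dimensional matrix for drag on a sphere. Columns: F, V, D, rho, mu.
# Rows: exponents of the base dimensions M, L, T.
dims = np.array([
    [ 1,  0, 0,  1,  1],   # M
    [ 1,  1, 1, -3, -1],   # L
    [-2, -1, 0,  0, -1],   # T
])
rank = np.linalg.matrix_rank(dims)
n_pi = dims.shape[1] - rank   # Buckingham Pi: number of dimensionless groups
```

Here five parameters and rank three give two dimensionless groups, the familiar drag coefficient and Reynolds number.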
The Parker-Sochacki Method of Solving Differential Equations: Applications and Limitations
NASA Astrophysics Data System (ADS)
Rudmin, Joseph W.
2006-11-01
The Parker-Sochacki method is a powerful but simple technique of solving systems of differential equations, giving either analytical or numerical results. It has been in use for about 10 years now since its discovery by G. Edgar Parker and James Sochacki of the James Madison University Dept. of Mathematics and Statistics. It is being presented here because it is still not widely known and can benefit the listeners. It is a method of rapidly generating the Maclaurin series to high order, non-iteratively. It has been successfully applied to more than a hundred systems of equations, including the classical many-body problem. Its advantages include its speed of calculation, its simplicity, and the fact that it uses only addition, subtraction and multiplication. It is not just a polynomial approximation, because it yields the Maclaurin series, and therefore exhibits the advantages and disadvantages of that series. A few applications will be presented.
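The method's core idea can be sketched on a one-equation example: for a polynomial ODE, each Maclaurin coefficient follows from the previous ones by Cauchy products, using only addition and multiplication. For y' = y^2 with y(0) = 1 (exact solution 1/(1-t), all coefficients equal to 1), the recurrence is a[n+1] = (1/(n+1)) * sum_k a[k]*a[n-k]:

```python
def maclaurin_coeffs(order):
    """Parker-Sochacki-style coefficient generation for y' = y**2, y(0) = 1.
    Exact solution is 1/(1-t) = sum t**n, so every coefficient is 1."""
    a = [1.0]
    for n in range(order):
        # Cauchy product: n-th Maclaurin coefficient of y*y
        cauchy = sum(a[k] * a[n - k] for k in range(n + 1))
        a.append(cauchy / (n + 1))
    return a

coeffs = maclaurin_coeffs(6)
```

Each coefficient is produced in one pass, non-iteratively, which is what gives the method its speed; systems of equations work the same way after being rewritten in polynomial form.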
Ammar, T A; Abid, K Y; El-Bindary, A A; El-Sonbati, A Z
2015-12-01
Most drinking water industries are closely examining options to maintain a certain level of disinfectant residual through the entire distribution system. Chlorine dioxide is one of the promising disinfectants that is usually used as a secondary disinfectant, whereas the selection of the proper monitoring analytical technique to ensure disinfection and regulatory compliance has been debated within the industry. This research endeavored to objectively compare the performance of commercially available analytical techniques used for chlorine dioxide measurements (namely, chronoamperometry, DPD (N,N-diethyl-p-phenylenediamine), Lissamine Green B (LGB WET) and amperometric titration), to determine the superior technique. The commonly available commercial analytical techniques were evaluated over a wide range of chlorine dioxide concentrations. In reference to pre-defined criteria, the superior analytical technique was determined. To discern the effectiveness of such a technique, various factors, such as sample temperature, high ionic strength, and other interferences that might influence the performance were examined. Among the four techniques, the chronoamperometry technique showed a significant level of accuracy and precision. Furthermore, the various influencing factors studied did not diminish the technique's performance, which was adequate in all matrices. This study is a step towards proper disinfection monitoring and it confidently assists engineers with chlorine dioxide disinfection system planning and management.
Bourget, Philippe; Amin, Alexandre; Vidal, Fabrice; Merlette, Christophe; Troude, Pénélope; Baillet-Guffroy, Arlette
2014-08-15
The purpose of the study was to perform a comparative analysis of the technical performance, respective costs and environmental effect of two invasive analytical methods (HPLC and UV/visible-FTIR) as compared to a new non-invasive analytical technique (Raman spectroscopy). Three pharmacotherapeutic models were used to compare the analytical performances of the three analytical techniques. Statistical inter-method correlation analysis was performed using non-parametric correlation rank tests. The study's economic component combined calculations relative to the depreciation of the equipment and the estimated cost of an AQC unit of work. In all cases, analytical validation parameters of the three techniques were satisfactory, and strong correlations between the two spectroscopic techniques vs. HPLC were found. In addition, Raman spectroscopy was found to be superior to the other techniques for numerous key criteria, including complete safety for operators and their occupational environment, a non-invasive procedure, no need for consumables, and a low operating cost. Finally, Raman spectroscopy appears superior for technical, economic and environmental objectives, as compared with the other, invasive analytical methods. Copyright © 2014 Elsevier B.V. All rights reserved.
Sanan, Reshu; Mahajan, Rakesh Kumar
2013-03-15
With an aim to characterize the micellar aggregates of imidazolium based ionic liquids, a new potentiometric PVC sensor based on neutral ion-pair complexes of dodecylmethylimidazolium bromide-sodium dodecylsulfate (C12MeIm⁺DS⁻) has been developed. The electrode exhibited a linear response for the concentration range of 7.9×10⁻⁵-9.8×10⁻³ M with a super-Nernstian slope of 92.94 mV/decade, a response time of 5 s and critical micellar concentration (cmc) of 10.09 mM for C12MeImBr. The performance of the electrode in investigating the cmc of C12MeImBr in the presence of two drugs [promazine hydrochloride (PMZ) and promethazine hydrochloride (PMT)] and three triblock copolymers (P123, L64 and F68) has been found to be satisfactory on comparison with conductivity measurements. Various micellar parameters have been evaluated for the binary mixtures of C12MeImBr with drugs and triblock copolymers using Clint's, Rubingh's, and Motomura's approaches. Thus the electrode offers a simple, straightforward and relatively fast technique for the characterization of micellar aggregates of C12MeImBr, complementing existing conventional techniques. Further, the analytical importance of the proposed C12MeIm⁺-ISE as end point indicator in potentiometric titrations and for direct determination of cationic surfactants [cetylpyridinium chloride (CPC), tetradecyltrimethylammonium bromide (TTAB), benzalkonium chloride (BC)] in some commercial products was judged by comparing statistically with classical two-phase titration methods. Copyright © 2013 Elsevier Inc. All rights reserved.
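For context on the "super-Nernstian" label: the theoretical Nernstian slope for a monovalent ion is 2.303RT/(zF), about 59.2 mV/decade at 25 °C, well below the 92.94 mV/decade reported. A short sketch using standard physical constants (not values from the paper):

```python
R = 8.314462618      # J/(mol*K), molar gas constant
F = 96485.33212      # C/mol, Faraday constant

def nernst_slope_mv_per_decade(z, temp_c=25.0):
    """Theoretical Nernstian slope 2.303*R*T/(z*F), in mV per decade."""
    t_kelvin = temp_c + 273.15
    return 2.303 * R * t_kelvin / (z * F) * 1000.0

slope = nernst_slope_mv_per_decade(z=1)   # ~59.2 mV/decade at 25 C
```

Any electrode slope well above this theoretical value, as here, is termed super-Nernstian.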
Martyna, Agnieszka; Zadora, Grzegorz; Neocleous, Tereza; Michalska, Aleksandra; Dean, Nema
2016-08-10
Many chemometric tools are invaluable and have proven effective in data mining and substantial dimensionality reduction of highly multivariate data. This becomes vital for interpreting various physicochemical data due to rapid development of advanced analytical techniques, delivering much information in a single measurement run. This concerns especially spectra, which are frequently used as the subject of comparative analysis in e.g. forensic sciences. In the presented study the microtraces collected from the scenarios of hit-and-run accidents were analysed. Plastic containers and automotive plastics (e.g. bumpers, headlamp lenses) were subjected to Fourier transform infrared spectrometry and car paints were analysed using Raman spectroscopy. In the forensic context analytical results must be interpreted and reported according to the standards of the interpretation schemes acknowledged in forensic sciences using the likelihood ratio approach. However, for proper construction of LR models for highly multivariate data, such as spectra, chemometric tools must be employed for substantial data compression. Conversion from classical feature representation to distance representation was proposed for revealing hidden data peculiarities and linear discriminant analysis was further applied for minimising the within-sample variability while maximising the between-sample variability. Both techniques enabled substantial reduction of data dimensionality. Univariate and multivariate likelihood ratio models were proposed for such data. It was shown that the combination of chemometric tools and the likelihood ratio approach is capable of solving the comparison problem of highly multivariate and correlated data after proper extraction of the most relevant features and variance information hidden in the data structure. Copyright © 2016 Elsevier B.V. All rights reserved.
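The likelihood ratio idea at the heart of the abstract can be sketched in its simplest univariate, two-level form: compare the density of the evidence under the same-source hypothesis (within-source variation) against the different-source hypothesis (between-source variation in the background population). This is a toy normal-theory sketch with illustrative parameters, far simpler than the paper's multivariate models built after chemometric reduction:

```python
import math

def normal_pdf(x, mu, sd):
    return math.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))

def likelihood_ratio(x, source_mean, within_sd, pop_mean, between_sd):
    """Univariate two-level LR: same-source vs. different-source hypotheses."""
    numerator = normal_pdf(x, source_mean, within_sd)
    denominator = normal_pdf(x, pop_mean,
                             math.sqrt(between_sd**2 + within_sd**2))
    return numerator / denominator

# A recovered-trace feature close to the suspect source, and atypical for
# the background population, supports the same-source hypothesis (LR > 1).
lr = likelihood_ratio(x=10.0, source_mean=10.0, within_sd=1.0,
                      pop_mean=0.0, between_sd=5.0)
```

LR values above 1 support the prosecution (same-source) proposition and below 1 the defence proposition; the paper's contribution is making this computable for highly multivariate, correlated spectra.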
NASA Astrophysics Data System (ADS)
Parvathi, S. P.; Ramanan, R. V.
2018-06-01
An iterative analytical trajectory design technique that includes perturbations in the departure phase of the interplanetary orbiter missions is proposed. The perturbations such as non-spherical gravity of Earth and the third body perturbations due to Sun and Moon are included in the analytical design process. In the design process, first the design is obtained using the iterative patched conic technique without including the perturbations and then modified to include the perturbations. The modification is based on, (i) backward analytical propagation of the state vector obtained from the iterative patched conic technique at the sphere of influence by including the perturbations, and (ii) quantification of deviations in the orbital elements at periapsis of the departure hyperbolic orbit. The orbital elements at the sphere of influence are changed to nullify the deviations at the periapsis. The analytical backward propagation is carried out using the linear approximation technique. The new analytical design technique, named as biased iterative patched conic technique, does not depend upon numerical integration and all computations are carried out using closed form expressions. The improved design is very close to the numerical design. The design analysis using the proposed technique provides a realistic insight into the mission aspects. Also, the proposed design is an excellent initial guess for numerical refinement and helps arrive at the four distinct design options for a given opportunity.
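Patched conic design of the kind extended here splits the trajectory at the sphere of influence, inside which only the departure planet's gravity is considered. A minimal sketch of the Laplace sphere-of-influence radius, r_SOI = a*(m/M)^(2/5), using standard Earth-Sun values (illustrative; not the paper's computation):

```python
AU_KM = 1.495978707e8        # semi-major axis of Earth's orbit, km
MASS_RATIO = 3.003e-6        # Earth mass / Sun mass

def sphere_of_influence_km(a_km, mass_ratio):
    """Laplace sphere-of-influence radius used in patched conic design."""
    return a_km * mass_ratio ** 0.4

r_soi = sphere_of_influence_km(AU_KM, MASS_RATIO)   # ~9.2e5 km for Earth
```

It is at this boundary that the proposed technique changes the orbital elements to cancel the perturbation-induced deviations found by analytical backward propagation.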
Refined genetic algorithm -- Economic dispatch example
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sheble, G.B.; Brittig, K.
1995-02-01
A genetic-based algorithm is used to solve an economic dispatch (ED) problem. The algorithm utilizes payoff information of prospective solutions to evaluate optimality. Thus, the constraints of classical Lagrangian techniques on unit curves are eliminated. Using an economic dispatch problem as a basis for comparison, several different techniques which enhance program efficiency and accuracy, such as mutation prediction, elitism, interval approximation and penalty factors, are explored. Two unique genetic algorithms are also compared. The results are verified for a sample problem using a classical technique.
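The combination of elitism and penalty factors mentioned above can be sketched on a toy economic dispatch problem: quadratic unit cost curves, a demand-balance constraint handled by a penalty term rather than a Lagrangian multiplier. The unit data, demand, and GA settings below are illustrative assumptions, not the paper's test system:

```python
import random

# Toy economic dispatch: three units with quadratic cost a + b*P + c*P^2;
# total output must meet a 150 MW demand (enforced via a penalty factor).
COSTS = [(100, 2.0, 0.010), (120, 1.8, 0.012), (80, 2.2, 0.008)]
P_MIN, P_MAX, DEMAND, PENALTY = 10.0, 100.0, 150.0, 1000.0

def fitness(p):
    fuel = sum(a + b * x + c * x * x for (a, b, c), x in zip(COSTS, p))
    return fuel + PENALTY * abs(sum(p) - DEMAND)   # penalty replaces the balance constraint

def evolve(generations=200, pop_size=30, seed=1):
    rng = random.Random(seed)
    pop = [[rng.uniform(P_MIN, P_MAX) for _ in COSTS] for _ in range(pop_size)]
    best = min(pop, key=fitness)
    f0 = fitness(best)                               # initial best, for comparison
    for _ in range(generations):
        nxt = [best[:]]                              # elitism: keep the best solution
        while len(nxt) < pop_size:
            p1, p2 = (min(rng.sample(pop, 3), key=fitness) for _ in range(2))  # tournament
            child = [(x + y) / 2 for x, y in zip(p1, p2)]      # arithmetic crossover
            if rng.random() < 0.3:                   # mutation, clamped to unit limits
                i = rng.randrange(len(child))
                child[i] = min(P_MAX, max(P_MIN, child[i] + rng.gauss(0, 5)))
            nxt.append(child)
        pop = nxt
        best = min(pop, key=fitness)
    return best, f0

best, f0 = evolve()
```

Because the elite solution is carried forward each generation, the best penalized cost can never worsen, and the penalty factor steers the population toward dispatches that satisfy the demand balance without any derivative information from the cost curves.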
Deriving the exact nonadiabatic quantum propagator in the mapping variable representation.
Hele, Timothy J H; Ananth, Nandini
2016-12-22
We derive an exact quantum propagator for nonadiabatic dynamics in multi-state systems using the mapping variable representation, where classical-like Cartesian variables are used to represent both continuous nuclear degrees of freedom and discrete electronic states. The resulting Liouvillian is a Moyal series that, when suitably approximated, can allow for the use of classical dynamics to efficiently model large systems. We demonstrate that different truncations of the exact Liouvillian lead to existing approximate semiclassical and mixed quantum-classical methods and we derive an associated error term for each method. Furthermore, by combining the imaginary-time path-integral representation of the Boltzmann operator with the exact Liouvillian, we obtain an analytic expression for thermal quantum real-time correlation functions. These results provide a rigorous theoretical foundation for the development of accurate and efficient classical-like dynamics to compute observables such as electron transfer reaction rates in complex quantized systems.
Juárez, M; Polvillo, O; Contò, M; Ficco, A; Ballico, S; Failla, S
2008-05-09
Four different extraction-derivatization methods commonly used for fatty acid analysis in meat (in situ or one-step method, saponification method, classic method and a combination of classic extraction and saponification derivatization) were tested. The in situ method had low recovery and variation. The saponification method showed the best balance between recovery, precision, repeatability and reproducibility. The classic method had high recovery and acceptable variation values, except for the polyunsaturated fatty acids, which showed higher variation than with the former methods. The combination of extraction and methylation steps had high recovery values, but the precision, repeatability and reproducibility were not acceptable. Therefore the saponification method would be more convenient for polyunsaturated fatty acid analysis, whereas the in situ method would be an alternative for fast analysis. However the classic method would be the method of choice for the determination of the different lipid classes.
Stress analysis in curved composites due to thermal loading
NASA Astrophysics Data System (ADS)
Polk, Jared Cornelius
Many structures in aircraft, cars, trucks, ships, machines, tools, bridges, and buildings, consist of curved sections. These sections vary from straight line segments that have curvature at either one or both ends, segments with compound curvatures, segments with two mutually perpendicular curvatures or Gaussian curvatures, and segments with a simple curvature. With the advancements made in multi-purpose composites over the past 60 years, composites slowly but steadily have been appearing in these various vehicles, compound structures, and buildings. These composite sections provide added benefits over isotropic, polymeric, and ceramic materials by generally having a higher specific strength, higher specific stiffnesses, longer fatigue life, lower density, possibilities in reduction of life cycle and/or acquisition cost, and greater adaptability to intended function of structure via material composition and geometry. To be able to design and manufacture a safe composite laminate or structure, it is imperative that the stress distributions, their causes, and effects are thoroughly understood in order to successfully accomplish mission objectives and manufacture a safe and reliable composite. The objective of the thesis work is to expand upon the knowledge of simply curved composite structures by exploring and ascertaining all pertinent parameters, phenomenon, and trends in stress variations in curved laminates due to thermal loading. The simply curved composites consist of composites with one radius of curvature throughout the span of the specimen about only one axis. Analytical beam theory, classical lamination theory, and finite element analysis were used to ascertain stress variations in a flat, isotropic beam. An analytical method was developed to ascertain the stress variations in an isotropic, simply curved beam under thermal loading that is under both free-free and fixed-fixed constraint conditions. 
To the author's best knowledge, this is the first solution to such a problem. It was ascertained and proven that the general, non-modified (original) version of classical lamination theory cannot be used for an analytical solution for a simply curved beam or any other structure that would require rotations of laminates out of their planes in space. Finite element analysis was used to ascertain stress variations in a simply curved beam. It was verified that these solutions reduce to the flat beam solutions as the radius of curvature of the beams tends to infinity. MATLAB was used to conduct the classical lamination theory numerical analysis. A MATLAB program was written to conduct the finite element analysis for the flat and curved beams, both isotropic and composite. It does not require the incompatibility techniques used in the mechanics of isotropic materials for indeterminate structures that are equivalent to fixed-beam problems. Finally, it enables the user to define and create unique elements not accessible in commercial software, and to modify finite element procedures to take advantage of new paradigms.
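The flat-beam limit used above as a verification case has a classical closed form for the isotropic, fully constrained (fixed-fixed) case under uniform temperature change: sigma = -E*alpha*dT, compressive when heated, and zero for the free-free case. A minimal sketch with illustrative aluminum-like properties (assumed values, not from the thesis):

```python
def thermal_stress_fixed_fixed(e_modulus, alpha, delta_t):
    """Axial stress in a fully constrained isotropic flat beam under a
    uniform temperature rise: sigma = -E*alpha*dT (compressive when heated).
    A free-free beam expands freely and carries zero thermal stress."""
    return -e_modulus * alpha * delta_t

# Illustrative aluminum-like values: E = 70 GPa, alpha = 23e-6 /K, dT = 50 K
sigma = thermal_stress_fixed_fixed(70e9, 23e-6, 50.0)   # Pa, ~-80.5 MPa
```

This is the benchmark any curved-beam thermal solution must recover as the radius of curvature tends to infinity.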
NASA Astrophysics Data System (ADS)
Barsan, Victor
2018-05-01
Several classes of transcendental equations, mainly eigenvalue equations associated to non-relativistic quantum mechanical problems, are analyzed. Siewert's systematic approach to such equations is discussed from the perspective of the new results recently obtained in the theory of generalized Lambert functions and of algebraic approximations of various special or elementary functions. Combining exact and approximate analytical methods, quite precise analytical outputs are obtained for apparently intractable problems. The results can be applied in quantum and classical mechanics, magnetism, elasticity, solar energy conversion, etc.
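A typical member of the equation classes discussed is x*tan(x) = c, which arises in square-well eigenvalue problems. As a numerical baseline against which the paper's analytical approximations can be judged, such a root is easily pinned down by bisection (this is the standard numerical route, not the generalized-Lambert-function approach of the paper):

```python
import math

def bisect(f, lo, hi, tol=1e-12):
    """Simple bisection root finder; assumes f(lo) and f(hi) have opposite signs."""
    flo = f(lo)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if flo * f(mid) <= 0:
            hi = mid
        else:
            lo, flo = mid, f(mid)
    return 0.5 * (lo + hi)

# Smallest positive root of x*tan(x) = 1, a form that appears in
# finite square-well eigenvalue equations; the known value is ~0.860334.
root = bisect(lambda x: x * math.tan(x) - 1.0, 0.1, 1.5)
```

Closed-form approximations of the kind the paper develops aim to reproduce such roots to high accuracy without any iteration.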
Structural analysis at aircraft conceptual design stage
NASA Astrophysics Data System (ADS)
Mansouri, Reza
In the past 50 years, computers have augmented human efforts at a tremendous pace. The aircraft industry is not an exception. It is more than ever dependent on computing because of a high level of complexity and the increasing need for excellence to survive a highly competitive marketplace. Designers choose computers to perform almost every analysis task. But while doing so, existing effective, accurate and easy-to-use classical analytical methods are often forgotten, even though they can be very useful, especially in the early phases of aircraft design, where concept generation and evaluation demand physical visibility of design parameters to make decisions [39, 2004]. Structural analysis methods have been used by human beings since the very early civilizations. Centuries before computers were invented, the pyramids were designed and constructed by the Egyptians around 2000 B.C., the Parthenon was built by the Greeks around 440 B.C., and Dujiangyan was built by the Chinese. Persepolis, Hagia Sophia, the Taj Mahal and the Eiffel tower are only a few more examples of historical buildings, bridges and monuments that were constructed before we had any advancement in computer aided engineering. The aircraft industry is no exception either. In the first half of the 20th century, engineers used classical methods and designed civil transport aircraft such as the Ford Tri Motor (1926), Lockheed Vega (1927), Lockheed 9 Orion (1931), Douglas DC-3 (1935), Douglas DC-4/C-54 Skymaster (1938), Boeing 307 (1938) and Boeing 314 Clipper (1939), which became airborne without difficulty. Evidently, while advanced numerical methods such as finite element analysis are among the most effective structural analysis methods, classical structural analysis methods can be just as useful, especially during the early phase of a fixed wing aircraft design, where major decisions are made and concept generation and evaluation demand physical visibility of design parameters to make decisions. 
Considering the strengths and limitations of both methodologies, the questions to be answered in this thesis are: How valuable and compatible are the classical analytical methods in today's conceptual design environment? And can these methods complement each other? To answer these questions, this thesis investigates the pros and cons of classical analytical structural analysis methods during the conceptual design stage through the following objectives: illustrate the structural design methodology of these methods within the framework of the Aerospace Vehicle Design (AVD) lab's design lifecycle, and demonstrate the effectiveness of the moment distribution method through four case studies. This is done by considering and evaluating the strengths and limitations of these methods. In order to objectively quantify the limitations and capabilities of the analytical method at the conceptual design stage, each case study becomes more complex than the one before.
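The moment distribution method named in the objectives can be sketched for the simplest non-trivial case: a two-span continuous beam A-B-C with pinned ends under uniform load. Using the modified stiffness 3EI/L for members whose far end is pinned, there is no carry-over, so a single balance at the interior joint is exact. The span lengths and load below are illustrative assumptions, not the thesis's case studies:

```python
def support_moment_two_span(w, l1, l2):
    """Moment distribution (Hardy Cross) with modified stiffness for a
    two-span continuous beam A-B-C, pinned at A and C, uniform load w.
    With far ends pinned there is no carry-over, so one balance at B is exact."""
    k1, k2 = 3.0 / l1, 3.0 / l2            # relative stiffness 3EI/L (EI cancels)
    df1 = k1 / (k1 + k2)                   # distribution factors at joint B
    df2 = k2 / (k1 + k2)
    fem_ba = w * l1**2 / 8.0               # fixed-end moments, far end pinned
    fem_bc = -w * l2**2 / 8.0
    unbalance = fem_ba + fem_bc
    m_ba = fem_ba - df1 * unbalance        # balance joint B
    m_bc = fem_bc - df2 * unbalance
    assert abs(m_ba + m_bc) < 1e-9         # joint equilibrium after balancing
    return m_ba

m_b = support_moment_two_span(w=10.0, l1=4.0, l2=6.0)
```

The result matches the three-moment-equation closed form M_B = w*(l1^3 + l2^3) / (8*(l1 + l2)), which is exactly the kind of hand-checkable physical visibility the thesis argues classical methods provide at the conceptual stage.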
Depth-resolved monitoring of analytes diffusion in ocular tissues
NASA Astrophysics Data System (ADS)
Larin, Kirill V.; Ghosn, Mohamad G.; Tuchin, Valery V.
2007-02-01
Optical coherence tomography (OCT) is a noninvasive imaging technique with high in-depth resolution. We employed OCT for monitoring and quantifying analyte and drug diffusion in the cornea and sclera of rabbit eyes in vitro. Different analytes and drugs, such as metronidazole, dexamethasone, ciprofloxacin, mannitol, and glucose solution, were studied, and their permeability coefficients were calculated. Drug diffusion was monitored both as a function of time and as a function of depth. The results suggest that OCT might be used for analyte diffusion studies in connective and epithelial tissues.
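OCT diffusion studies of this kind often define the permeability coefficient as the thickness of the monitored tissue region divided by the time the agent takes to diffuse across it. As a minimal sketch of that calculation (the formula, function name and numbers are illustrative assumptions, not taken from the abstract):

```python
# Hedged sketch: permeability coefficient from depth-resolved OCT monitoring,
# assuming the common definition P = z_region / t_region. All names and
# numbers are illustrative.

def permeability_coefficient(region_thickness_cm, diffusion_time_s):
    """P = z_region / t_region, in cm/s."""
    if diffusion_time_s <= 0:
        raise ValueError("diffusion time must be positive")
    return region_thickness_cm / diffusion_time_s

# Example: an agent crossing a 0.02 cm region of sclera in 1000 s.
p = permeability_coefficient(0.02, 1000.0)   # 2e-5 cm/s
```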
Energy expenditure for massage therapists during performing selected classical massage techniques.
Więcek, Magdalena; Szymura, Jadwiga; Maciejczyk, Marcin; Szyguła, Zbigniew; Cempla, Jerzy; Borkowski, Mateusz
2018-04-11
The aim of the study was to evaluate the intensity of effort and the energy expenditure involved in performing selected classical massage techniques, and to assess the workload of a massage therapist during a work shift. Thirteen massage therapists (age: 21.9±1.9 years, body mass index: 24.5±2.8 kg/m², maximal oxygen consumption per body mass (VO2max/BM): 42.3±7 ml/kg/min) took part in the study. The stress test consisted of performing selected classical massage techniques in the following order: stroking, kneading, shaking, beating, rubbing and direct vibration, during which cardio-respiratory responses and the subjective rating of perceived exertion (RPE) were assessed. Exercise intensity during each massage technique was expressed as % VO2max, % maximal heart rate (HRmax) and % heart rate reserve (HRR). For each technique, net energy expenditure (EE) and the energy cost of work in metabolic equivalents of task (MET) were determined. Over the whole procedure, exercise intensity averaged 47.2±6.2% VO2max, 74.7±3.2% HRmax and 47.8±1.7% HRR. While performing the classical massage, the average EE and MET were 5.6±0.9 kcal/min and 5.6±0.2, respectively. The average RPE for the entire procedure was 12.1±1.4. A single classical massage treatment during the study required an average total EE of 176.5±29.6 kcal, corresponding to an energy expenditure of 336.2±56.4 kcal/h. Among the classical massage techniques, rubbing was the highest-intensity exercise for the masseur (%VO2max = 57.4±13.1%, HRmax = 79.6±7.7%, HRR = 58.5±13.1%, MET = 6.7±1.1, EE = 7.1±1.4 kcal/min, RPE = 13.4±1.3). By objective assessment, the physical exercise involved in performing a single classical massage is characterized as hard work.
The technique of classical massage during which the masseur performs the highest exercise intensity is rubbing. According to the classification of work intensity based on energy expenditure, the masseur's work is considered heavy during the whole work shift. This work is available in Open Access model and licensed under a CC BY-NC 3.0 PL license.
NASA Astrophysics Data System (ADS)
Holman, Benjamin R.
In recent years, revolutionary "hybrid" or "multi-physics" methods of medical imaging have emerged. By combining two or three different types of waves, these methods overcome limitations of classical tomography techniques and deliver otherwise unavailable, potentially life-saving diagnostic information. Thermoacoustic (and photoacoustic) tomography is the most developed multi-physics imaging modality. Thermo- and photoacoustic tomography require reconstructing the initial acoustic pressure in a body from time series of pressure measured on a surface surrounding the body. For the classical case of free-space wave propagation, various reconstruction techniques are well known. However, some novel measurement schemes place the object of interest between reflecting walls that form a de facto resonant cavity, and in this case the known methods cannot be used. In chapter 2 we present a fast iterative reconstruction algorithm for measurements made at the walls of a rectangular reverberant cavity with a constant speed of sound. We prove the convergence of the iterations under a certain sufficient condition and demonstrate the effectiveness and efficiency of the algorithm in numerical simulations. In chapter 3 we consider the more general problem of an arbitrarily shaped resonant cavity with a nonconstant speed of sound and present the gradual time reversal method for computing solutions to the inverse source problem. It consists of solving the initial/boundary value problem for the wave equation back in time on the interval [0, T], with the Dirichlet boundary data multiplied by a smooth cutoff function. If T is sufficiently large, one obtains a good approximation to the initial pressure; in the limit of large T, such an approximation converges (under certain conditions) to the exact solution.
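The gradual time reversal idea can be sketched in one spatial dimension: solve the wave equation backward from zero data at t = T, imposing the recorded Dirichlet boundary values multiplied by a smooth cutoff. The toy numpy implementation below is a sketch under those assumptions; the grid sizes, the cosine cutoff, and all names are illustrative, not taken from the thesis.

```python
import numpy as np

# Illustrative 1D sketch of gradual time reversal (not the thesis code):
# solve u_tt = c^2 u_xx backward in time from zero data at t = T, with the
# recorded Dirichlet boundary data multiplied by a smooth cutoff chi(t)
# satisfying chi(0) = 1 and chi(T) = 0.

def smooth_cutoff(t, T):
    """Smooth cutoff: 1 at t = 0, 0 at t = T."""
    s = np.clip(t / T, 0.0, 1.0)
    return (1.0 + np.cos(np.pi * s)) / 2.0

def backward_wave_1d(boundary_left, boundary_right, nx, dt, dx, c=1.0):
    """Leapfrog solve backward in time; returns the recovered initial field."""
    nt = len(boundary_left)
    T = (nt - 1) * dt
    chi = smooth_cutoff(np.arange(nt) * dt, T)
    u_next = np.zeros(nx)                  # u at the later of the two known times
    u_curr = np.zeros(nx)                  # u one step earlier
    r2 = (c * dt / dx) ** 2                # needs c*dt/dx <= 1 for stability
    for n in range(nt - 2, -1, -1):
        u_prev = np.zeros(nx)
        u_prev[1:-1] = (2.0 * u_curr[1:-1] - u_next[1:-1]
                        + r2 * (u_curr[2:] - 2.0 * u_curr[1:-1] + u_curr[:-2]))
        u_prev[0] = chi[n] * boundary_left[n]      # cut-off Dirichlet data
        u_prev[-1] = chi[n] * boundary_right[n]
        u_next, u_curr = u_curr, u_prev
    return u_curr   # approximation to the initial pressure at t = 0
```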
Quantum kinetic theory of the filamentation instability
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bret, A.; Haas, F.
2011-07-15
The quantum electromagnetic dielectric tensor for a multi-species plasma is re-derived from the gauge-invariant Wigner-Maxwell system and presented under a form very similar to the classical one. The resulting expression is then applied to a quantum kinetic theory of the electromagnetic filamentation instability. Comparison is made with the quantum fluid theory including a Bohm pressure term and with the cold classical plasma result. A number of analytical expressions are derived for the cutoff wave vector, the largest growth rate, and the most unstable wave vector.
Ultrasonic waves in classical gases
NASA Astrophysics Data System (ADS)
Magner, A. G.; Gorenstein, M. I.; Grygoriev, U. V.
2017-12-01
The velocity and absorption coefficient for plane sound waves in a classical gas are obtained by solving the Boltzmann kinetic equation, which describes the reaction of the single-particle distribution function to a periodic external field. Within linear response theory, a nonperturbative dispersion equation valid for all sound frequencies is derived and solved numerically. The results agree with the approximate analytical solutions found for both the frequent- and rare-collision regimes, and are also in qualitative agreement with experimental data for ultrasonic waves in dilute gases.
Glycoprotein Enrichment Analytical Techniques: Advantages and Disadvantages.
Zhu, R; Zacharias, L; Wooding, K M; Peng, W; Mechref, Y
2017-01-01
Protein glycosylation is one of the most important posttranslational modifications, and numerous biological functions are related to it. However, analytical challenges remain in glycoprotein analysis. To overcome them, many analytical techniques have been developed in recent years. Enrichment methods are used to improve the sensitivity of detection, while HPLC and mass spectrometry methods facilitate the separation of glycopeptides/proteins and enhance detection, respectively. Fragmentation techniques in modern mass spectrometers allow the structural interpretation of glycopeptides/proteins, while automated software tools have started replacing manual processing to improve the reliability and throughput of the analysis. In this chapter, the current methodologies of glycoprotein analysis are discussed. Multiple analytical techniques are compared, and the advantages and disadvantages of each technique are highlighted. © 2017 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Perrier, C.; Breysacher, J.; Rauw, G.
2009-09-01
Aims: We present a technique to determine the orbital and physical parameters of eclipsing eccentric Wolf-Rayet + O-star binaries, where one eclipse is produced by the absorption of the O-star light by the stellar wind of the W-R star. Methods: Our method is based on the use of the empirical moments of the light curve that are integral transforms evaluated from the observed light curves. The optical depth along the line of sight and the limb darkening of the W-R star are modelled by simple mathematical functions, and we derive analytical expressions for the moments of the light curve as a function of the orbital parameters and the key parameters of the transparency and limb-darkening functions. These analytical expressions are then inverted in order to derive the values of the orbital inclination, the stellar radii, the fractional luminosities, and the parameters of the wind transparency and limb-darkening laws. Results: The method is applied to the SMC W-R eclipsing binary HD 5980, a remarkable object that underwent an LBV-like event in August 1994. The analysis refers to the pre-outburst observational data. A synthetic light curve based on the elements derived for the system allows a quality assessment of the results obtained.
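The empirical moments described above are integral transforms of the eclipse light loss. As a hedged numerical sketch, one common form is A_m = ∫ (1 - l(phi)) phi^m dphi evaluated over the eclipse; the paper's exact definition and normalization may differ, and the function and variable names here are illustrative.

```python
import numpy as np

# Hedged sketch: m-th empirical moment of an eclipse light curve, taken here
# as A_m = integral of (1 - l(phi)) * phi^m over the sampled phases
# (trapezoidal rule). Not the authors' code.

def light_curve_moment(phase, flux, m):
    """m-th empirical moment of the eclipse light loss."""
    phase = np.asarray(phase, float)
    flux = np.asarray(flux, float)
    integrand = (1.0 - flux) * phase ** m
    return float(np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(phase)))

# A totally dark eclipse (flux = 0) over phase [0, 1] has first moment 1/2.
```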
Yang, Heejung; Kim, Hyun Woo; Kwon, Yong Soo; Kim, Ho Kyong; Sung, Sang Hyun
2017-09-01
Anthocyanins are potent antioxidant agents that protect against many degenerative diseases; however, they are unstable because they are vulnerable to external stimuli including temperature, pH and light. This vulnerability hinders the quality control of anthocyanin-containing berries using classical high-performance liquid chromatography (HPLC) analytical methodologies based on UV or MS chromatograms. To develop an alternative approach for the quality assessment and discrimination of anthocyanin-containing berries, we used MS spectral data acquired in a short analytical time rather than UV or MS chromatograms. Mixtures of anthocyanins were separated from other components in a short gradient time (5 min) due to their higher polarity, and the representative MS spectrum was acquired from the MS chromatogram corresponding to the mixture of anthocyanins. The chemometric data from the representative MS spectra contained reliable information for the identification and relative quantification of anthocyanins in berries with good precision and accuracy. This fast and simple methodology, which consists of a simple sample preparation method and short gradient analysis, could be applied to reliably discriminate the species and geographical origins of different anthocyanin-containing berries. These features make the technique useful for the food industry. Copyright © 2017 John Wiley & Sons, Ltd.
Emotion Recognition From Singing Voices Using Contemporary Commercial Music and Classical Styles.
Hakanpää, Tua; Waaramaa, Teija; Laukkanen, Anne-Maria
2018-02-22
This study examines the recognition of emotion in contemporary commercial music (CCM) and classical styles of singing. This information may be useful in improving the training of interpretation in singing. This is an experimental comparative study. Thirteen singers (11 female, 2 male) with a minimum of 3 years' professional-level singing studies (in CCM or classical technique or both) participated. They sang at three pitches (females: a, e1, a1, males: one octave lower) expressing anger, sadness, joy, tenderness, and a neutral state. Twenty-nine listeners listened to 312 short (0.63- to 4.8-second) voice samples, 135 of which were sung using a classical singing technique and 165 of which were sung in a CCM style. The listeners were asked which emotion they heard. Activity and valence were derived from the chosen emotions. The percentage of correct recognitions out of all the answers in the listening test (N = 9048) was 30.2%. The recognition percentage for the CCM-style singing technique was higher (34.5%) than for the classical-style technique (24.5%). Valence and activation were better perceived than the emotions themselves, and activity was better recognized than valence. A higher pitch was more likely to be perceived as joy or anger, and a lower pitch as sorrow. Both valence and activation were better recognized in the female CCM samples than in the other samples. There are statistically significant differences in the recognition of emotions between classical and CCM styles of singing. Furthermore, in the singing voice, pitch affects the perception of emotions, and valence and activity are more easily recognized than emotions. Copyright © 2018 The Voice Foundation. Published by Elsevier Inc. All rights reserved.
Simulation and statistics: Like rhythm and song
NASA Astrophysics Data System (ADS)
Othman, Abdul Rahman
2013-04-01
Simulation has been introduced to solve problems formulated as systems, and it can overcome two kinds of difficulty. First, a problem may have an analytical solution, but running a physical experiment to solve it would be too costly in terms of money or lives. Second, a problem may exist that has no analytical solution at all. In statistical inference, the second kind is often encountered. With the advent of high-speed computing devices, a statistician can now use resampling techniques, such as the bootstrap and permutation tests, to form a pseudo sampling distribution that leads to a solution of a problem that cannot be solved analytically. This paper discusses how Monte Carlo simulation was, and still is, used to verify analytical solutions in inference. It also discusses resampling techniques as simulation techniques, examines common misunderstandings about the two, and explains successful usages of both.
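The resampling idea described above can be illustrated with a minimal bootstrap: resample the data with replacement many times, compute the statistic on each resample, and use the resulting pseudo sampling distribution in place of an analytical one. The data and parameter values below are illustrative.

```python
import numpy as np

# Minimal bootstrap sketch: build a pseudo sampling distribution for the mean
# by resampling with replacement, then read off a percentile confidence
# interval. All data and parameters are illustrative.

def bootstrap_means(data, n_boot=2000, rng=None):
    """Means of n_boot bootstrap resamples of `data`."""
    rng = np.random.default_rng(rng)
    data = np.asarray(data, float)
    idx = rng.integers(0, len(data), size=(n_boot, len(data)))
    return data[idx].mean(axis=1)

data = np.random.default_rng(0).normal(10.0, 2.0, size=50)
dist = bootstrap_means(data, n_boot=5000, rng=1)
ci = np.percentile(dist, [2.5, 97.5])   # percentile bootstrap 95% CI for the mean
```

The same loop structure works for statistics with no convenient analytical sampling distribution (medians, trimmed means, ratios), which is exactly the second kind of problem the abstract describes.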
Analytical Techniques and Pharmacokinetics of Gastrodia elata Blume and Its Constituents.
Wu, Jinyi; Wu, Bingchu; Tang, Chunlan; Zhao, Jinshun
2017-07-08
Gastrodia elata Blume ( G. elata ), commonly called Tianma in Chinese, is an important and notable traditional Chinese medicine (TCM), which has been used in China as an anticonvulsant, analgesic, sedative, anti-asthma, anti-immune drug since ancient times. The aim of this review is to provide an overview of the abundant efforts of scientists in developing analytical techniques and performing pharmacokinetic studies of G. elata and its constituents, including sample pretreatment methods, analytical techniques, absorption, distribution, metabolism, excretion (ADME) and influence factors to its pharmacokinetics. Based on the reported pharmacokinetic property data of G. elata and its constituents, it is hoped that more studies will focus on the development of rapid and sensitive analytical techniques, discovering new therapeutic uses and understanding the specific in vivo mechanisms of action of G. elata and its constituents from the pharmacokinetic viewpoint in the near future. The present review discusses analytical techniques and pharmacokinetics of G. elata and its constituents reported from 1985 onwards.
Using qubits to reveal quantum signatures of an oscillator
NASA Astrophysics Data System (ADS)
Agarwal, Shantanu
In this thesis, we seek to study the qubit-oscillator system with the aim to identify and quantify inherent quantum features of the oscillator. We show that the quantum signatures of the oscillator get imprinted on the dynamics of the joint system. The two key features which we explore are the quantized energy spectrum of the oscillator and the non-classicality of the oscillator's wave function. To investigate the consequences of the oscillator's discrete energy spectrum, we consider the qubit to be coupled to the oscillator through the Rabi Hamiltonian. Recent developments in fabrication technology have opened up the possibility to explore parameter regimes which were conventionally inaccessible. Motivated by these advancements, we investigate in this thesis a parameter space where the qubit frequency is much smaller than the oscillator frequency and the Rabi frequency is allowed to be an appreciable fraction of the bare frequency of the oscillator. We use the adiabatic approximation to understand the dynamics in this quasi-degenerate qubit regime. By deriving a dressed master equation, we systematically investigate the effects of the environment on the system dynamics. We develop a spectroscopic technique, using which one can probe the steady state response of the driven and damped system. The spectroscopic signal clearly reveals the quantized nature of the oscillator's energy spectrum. We extend the adiabatic approximation, earlier developed only for the single qubit case, to a scenario where multiple qubits interact with the oscillator. Using the extended adiabatic approximation, we study the collapse and revival of multi-qubit observables. We develop analytic expressions for the revival signals which are in good agreement with the numerically evaluated results. 
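The model underlying this discussion can be sketched numerically: the Rabi Hamiltonian H = ω a†a + (Δ/2)σz + g σx(a + a†) in a truncated Fock basis, whose discrete spectrum is the quantized-oscillator signature probed spectroscopically. This is a generic numpy sketch, not the thesis code, and all parameter values are assumptions chosen to mimic the quasi-degenerate qubit regime.

```python
import numpy as np

# Illustrative sketch of the Rabi Hamiltonian in a truncated Fock basis.
# Tensor-product order: qubit (x) oscillator. Parameter values are assumptions.

def rabi_hamiltonian(n_fock, omega, delta, g):
    a = np.diag(np.sqrt(np.arange(1, n_fock)), 1)    # truncated annihilation op
    n_op = a.T @ a                                    # number operator a†a
    sz = np.diag([1.0, -1.0])
    sx = np.array([[0.0, 1.0], [1.0, 0.0]])
    H = (omega * np.kron(np.eye(2), n_op)
         + 0.5 * delta * np.kron(sz, np.eye(n_fock))
         + g * np.kron(sx, a + a.T))
    return H

# Quasi-degenerate regime: qubit frequency much smaller than the oscillator
# frequency, coupling an appreciable fraction of the oscillator frequency.
evals = np.linalg.eigvalsh(rabi_hamiltonian(20, 1.0, 0.05, 0.3))
```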
Within the quantum restriction imposed by Heisenberg's uncertainty principle, the uncertainty in the position and momentum of an oscillator is minimal and shared equally when the oscillator is prepared in a coherent state. For this reason, coherent states and states that can be thought of as statistical mixtures of coherent states are categorized as classical, whereas states that are not valid coherent-state mixtures are classified as non-classical. In this thesis, we propose a new non-classicality witness operation that does not require tomography of the oscillator's state. We show that by coupling a qubit longitudinally to the oscillator, one can infer the non-classical nature of the initial state of the oscillator. Using a qubit observable, we derive a non-classicality witness inequality, a violation of which definitively indicates the non-classical nature of an oscillator's state.
Rapid sequence induction has no use in pediatric anesthesia.
Engelhardt, Thomas
2015-01-01
(Classic) rapid sequence induction and intubation (RSII) has been considered fundamental to the provision of safe anesthesia. The technique combines specific drugs and maneuvers and is intended to prevent pulmonary aspiration of gastric content, which can have catastrophic outcomes for the patient. This review investigates aspects of the technique and highlights its dangers and flaws when it is transferred directly into pediatric anesthesia practice. The author recommends controlled anesthesia induction by a trained pediatric anesthesiologist with suitable equipment for children considered at risk of pulmonary aspiration. RSII is a dangerous technique if adopted without modification into pediatric anesthesia and, in its 'classic' form, has no use. © 2014 John Wiley & Sons Ltd.
A simple white noise analysis of neuronal light responses.
Chichilnisky, E J
2001-05-01
A white noise technique is presented for estimating the response properties of spiking visual system neurons. The technique is simple, robust, efficient and well suited to simultaneous recordings from multiple neurons. It provides a complete and easily interpretable model of light responses even for neurons that display a common form of response nonlinearity that precludes classical linear systems analysis. A theoretical justification of the technique is presented that relies only on elementary linear algebra and statistics. Implementation is described with examples. The technique and the underlying model of neural responses are validated using recordings from retinal ganglion cells, and in principle are applicable to other neurons. Advantages and disadvantages of the technique relative to classical approaches are discussed.
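The white noise technique the abstract describes is commonly implemented as a spike-triggered average (STA): stimulate with Gaussian white noise and average the stimulus segments preceding each spike to estimate the neuron's linear filter. The sketch below uses synthetic data and a hypothetical threshold nonlinearity; it is not the paper's code.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

# Minimal spike-triggered-average sketch on synthetic data. A model neuron
# filters Gaussian white noise and spikes when the filtered drive crosses a
# threshold; the STA then recovers the filter shape up to scale (Bussgang's
# theorem), despite the nonlinearity. All parameter values are illustrative.

def spike_triggered_average(stimulus, spikes, window):
    """Average the `window` stimulus samples up to and including each spike."""
    segs = [stimulus[t - window + 1: t + 1]
            for t in np.flatnonzero(spikes) if t + 1 >= window]
    return np.mean(segs, axis=0)

rng = np.random.default_rng(42)
stim = rng.standard_normal(20000)
true_filter = np.array([0.1, 0.4, 1.0, 0.4, 0.1])    # hypothetical linear filter
drive = sliding_window_view(stim, 5) @ true_filter   # drive[j] uses stim[j:j+5]
spikes = np.zeros(len(stim), dtype=int)
spikes[4:] = drive > 2.0                             # threshold nonlinearity
sta = spike_triggered_average(stim, spikes, window=5)
```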
Iontophoresis and Flame Photometry: A Hybrid Interdisciplinary Experiment
ERIC Educational Resources Information Center
Sharp, Duncan; Cottam, Linzi; Bradley, Sarah; Brannigan, Jeanie; Davis, James
2010-01-01
The combination of reverse iontophoresis and flame photometry provides an engaging analytical experiment that gives first-year undergraduate students a flavor of modern drug delivery and analyte extraction techniques while reinforcing core analytical concepts. The experiment provides a highly visual demonstration of the iontophoresis technique and…
Baldo, Matías N; Angeli, Emmanuel; Gareis, Natalia C; Hunzicker, Gabriel A; Murguía, Marcelo C; Ortega, Hugo H; Hein, Gustavo J
2018-04-01
A relative bioavailability (RBA) study of two phenytoin (PHT) formulations was conducted in rabbits, in order to compare results obtained from different matrices (plasma, and blood from dried blood spot (DBS) sampling) and different experimental designs (classic and block). The method was developed by liquid chromatography tandem mass spectrometry (LC-MS/MS) in plasma and blood samples. The different sample preparation techniques, plasma protein precipitation and DBS, were validated according to international requirements. The analytical method was validated over the ranges 0.20-50.80 and 0.12-20.32 µg/ml (r > 0.999) for plasma and blood, respectively. Accuracy and precision were within the acceptance criteria for bioanalytical assay validation (<15% for bias and CV%, and <20% at the limit of quantification (LOQ)). PHT showed long-term stability in both plasma and blood, under both refrigerated and room-temperature conditions. Haematocrit values were measured during the validation process and the RBA study. Finally, the pharmacokinetic parameters (Cmax, Tmax and AUC0-t) obtained from the RBA study were tested. Results were highly comparable across matrices and experimental designs: a matrix correlation higher than 0.975 and a ratio of PHT(blood) = 1.158 × PHT(plasma) were obtained. These results show that the use of classic experimental design and DBS sampling for animal pharmacokinetic studies should be encouraged, as they could help reduce the number of animals used and avoid animal euthanasia. Finally, the combination of DBS sampling with LC-MS/MS technology proved to be an excellent tool not only for therapeutic drug monitoring but also for RBA studies.
New technologies in treatment of atrial fibrillation in cardiosurgical patients
NASA Astrophysics Data System (ADS)
Evtushenko, A. V.; Evtushenko, V. V.; Bykov, A. N.; Sergeev, V. S.; Syryamkin, V. I.; Kistenev, Yu. V.; Anfinogenova, Ya. D.; Smyshlyaev, K. A.; Kurlov, I. O.
2015-11-01
The article evaluates the results of clinical application of penetrating radiofrequency ablation techniques on the atrial myocardium. A total of 241 patients with valvular heart disease or coronary heart disease complicated by atrial fibrillation were operated on. All operations were performed under cardiopulmonary bypass and cardioplegia. The main group consisted of 141 patients operated on using the penetrating radiofrequency technique; the control group consisted of 100 patients who underwent surgery with the "classical" monopolar RF ablation technique. The two groups did not differ significantly in any preoperative measure. During the selection of candidates, patients with previous heart surgery were excluded because pericardial adhesions do not allow visualization of the left atrium sufficient to perform the procedure. The penetrating technique showed significantly higher efficacy than the "classical" technique in both the early and the long-term postoperative periods: 93% and 88%, respectively, versus 86% and 68% for the "classical" monopolar procedure.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jiang, Tongsong, E-mail: jiangtongsong@sina.com; Department of Mathematics, Heze University, Heze, Shandong 274015; Jiang, Ziwu
In the study of the relation between complexified classical and non-Hermitian quantum mechanics, physicists found that there are links to quaternionic and split quaternionic mechanics, and this leads to the possibility of employing algebraic techniques of split quaternions to tackle some problems in complexified classical and quantum mechanics. This paper, by means of real representation of a split quaternion matrix, studies the problem of diagonalization of a split quaternion matrix and gives algebraic techniques for diagonalization of split quaternion matrices in split quaternionic mechanics.
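The algebraic strategy described above can be illustrated at the level of single split quaternions: the algebra of split quaternions a + bi + cj + dk (with i² = -1, j² = k² = +1) is isomorphic to the real 2×2 matrices, so split-quaternion problems reduce to ordinary real linear algebra. This standard 2×2 map only illustrates the "real representation" idea; it is not the paper's 4n×4n construction for split quaternion matrices.

```python
import numpy as np

# Hedged illustration: real 2x2 representation of split quaternions.
# i -> I with I@I = -identity; j -> J with J@J = +identity; k = i*j.

I = np.array([[0.0, 1.0], [-1.0, 0.0]])
J = np.array([[0.0, 1.0], [1.0, 0.0]])
K = I @ J

def real_rep(a, b, c, d):
    """Real 2x2 representation of the split quaternion a + bi + cj + dk."""
    return a * np.eye(2) + b * I + c * J + d * K

# Multiplication of split quaternions becomes matrix multiplication,
# so eigenvalue/diagonalization questions can be handled with numpy.
p = real_rep(1.0, 2.0, 0.5, -1.0)
q = real_rep(0.0, 1.0, 1.0, 0.0)
pq = p @ q
```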
Perspectives on making big data analytics work for oncology.
El Naqa, Issam
2016-12-01
Oncology, with its unique combination of clinical, physical, technological, and biological data, provides an ideal case study for applying big data analytics to improve cancer treatment safety and outcomes. An oncology treatment course such as chemoradiotherapy can generate a large pool of information carrying the 5V hallmarks of big data. These data comprise a heterogeneous mixture of patient demographics, radiation/chemo dosimetry, multimodality imaging features, and biological markers generated over a treatment period that can span a few days to several weeks. Efforts using commercial and in-house tools are underway to facilitate data aggregation, ontology creation, sharing, visualization and varying analytics in a secure environment. However, open questions related to proper data structure representation and effective analytics tools to support oncology decision-making need to be addressed. It is recognized that oncology data constitute a mix of structured (tabulated) and unstructured (electronic documents) content that needs to be processed to facilitate searching and subsequent knowledge discovery from relational or NoSQL databases. In this context, methods based on advanced analytics and image feature extraction for oncology applications are discussed. On the other hand, the classical p (variables) ≫ n (samples) inference problem of statistical learning is challenged in the big data realm, and this is particularly true for oncology applications, where p-omics is witnessing exponential growth while the number of cancer incidences has generally plateaued over the past 5 years, leading to a quasi-linear growth in samples per patient. Within the big data paradigm, this kind of phenomenon may yield undesirable effects such as echo chamber anomalies, Yule-Simpson reversal paradox, or misleading ghost analytics.
In this work, we present these effects as they pertain to oncology and discuss methodologies to counter them, ranging from incorporating prior knowledge and information-theoretic techniques to modern ensemble machine learning approaches, or combinations of these. We particularly discuss the pros and cons of different approaches to improving the mining of big data in oncology. Copyright © 2016 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Wilson, Robert H.; Vishwanath, Karthik; Mycek, Mary-Ann
2009-02-01
Monte Carlo (MC) simulations are considered the "gold standard" for mathematical description of photon transport in tissue, but they can require large computation times. Therefore, it is important to develop simple and efficient methods for accelerating MC simulations, especially when a large "library" of related simulations is needed. A semi-analytical method involving MC simulations and a path-integral (PI) based scaling technique generated time-resolved reflectance curves from layered tissue models. First, a zero-absorption MC simulation was run for a tissue model with fixed scattering properties in each layer. Then, a closed-form expression for the average classical path of a photon in tissue was used to determine the percentage of time that the photon spent in each layer, to create a weighted Beer-Lambert factor to scale the time-resolved reflectance of the simulated zero-absorption tissue model. This method is a unique alternative to other scaling techniques in that it does not require the path length or number of collisions of each photon to be stored during the initial simulation. Effects of various layer thicknesses and absorption and scattering coefficients on the accuracy of the method will be discussed.
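The scaling step described above can be sketched directly: the zero-absorption time-resolved reflectance R0(t) is rescaled by a weighted Beer-Lambert factor built from the fractions of time f_i the average classical photon path spends in each layer with absorption coefficient mua_i. This is an illustrative sketch, not the authors' code, and the default tissue light speed (about 0.0214 cm/ps for refractive index 1.4) is an assumption.

```python
import numpy as np

# Sketch of weighted Beer-Lambert scaling of a zero-absorption Monte Carlo
# reflectance curve: R(t) = R0(t) * exp(-v * t * sum_i mua_i * f_i), where
# f_i is the fraction of travel time spent in layer i. Illustrative only.

def scale_reflectance(t_ps, r0, mua_per_cm, time_fractions, v_cm_per_ps=0.0214):
    """Apply the weighted Beer-Lambert factor to a zero-absorption curve."""
    mua = np.asarray(mua_per_cm, float)
    f = np.asarray(time_fractions, float)
    t = np.asarray(t_ps, float)
    return np.asarray(r0, float) * np.exp(-v_cm_per_ps * t * (mua @ f))
```

Because only the layer absorption coefficients and time fractions enter the factor, one zero-absorption simulation can be reused for a whole "library" of absorbing tissue models, which is the acceleration the abstract describes.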
Fine tuning classical and quantum molecular dynamics using a generalized Langevin equation
NASA Astrophysics Data System (ADS)
Rossi, Mariana; Kapil, Venkat; Ceriotti, Michele
2018-03-01
Generalized Langevin Equation (GLE) thermostats have been used very effectively as a tool to manipulate and optimize the sampling of thermodynamic ensembles and the associated static properties. Here we show that a similar, exquisite level of control can be achieved for the dynamical properties computed from thermostatted trajectories. We develop quantitative measures of the disturbance induced by the GLE to the Hamiltonian dynamics of a harmonic oscillator, and show that these analytical results accurately predict the behavior of strongly anharmonic systems. We also show that it is possible to correct, to a significant extent, the effects of the GLE term onto the corresponding microcanonical dynamics, which puts on more solid grounds the use of non-equilibrium Langevin dynamics to approximate quantum nuclear effects and could help improve the prediction of dynamical quantities from techniques that use a Langevin term to stabilize dynamics. Finally we address the use of thermostats in the context of approximate path-integral-based models of quantum nuclear dynamics. We demonstrate that a custom-tailored GLE can alleviate some of the artifacts associated with these techniques, improving the quality of results for the modeling of vibrational dynamics of molecules, liquids, and solids.
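The GLE generalizes the friction and noise of a Langevin thermostat to a memory kernel; as a baseline, the simplest white-noise limit for a harmonic oscillator can be integrated with the standard BAOAB splitting. The sketch below shows only that limit (not the paper's GLE machinery), with m = 1, kT = 1, and illustrative parameter values.

```python
import numpy as np

# Minimal white-noise Langevin thermostat (BAOAB splitting) for a harmonic
# oscillator, m = 1, k_B T = 1. The GLE discussed above generalizes the O step
# to colored noise with a memory kernel; parameters here are illustrative.

def baoab_harmonic(omega, gamma, dt, n_steps, rng=None):
    rng = np.random.default_rng(rng)
    q, p = 1.0, 0.0
    c1 = np.exp(-gamma * dt)
    c2 = np.sqrt(1.0 - c1 * c1)                # noise amplitude for k_B T/m = 1
    qs = np.empty(n_steps)
    for i in range(n_steps):
        p -= 0.5 * dt * omega ** 2 * q         # B: half kick
        q += 0.5 * dt * p                      # A: half drift
        p = c1 * p + c2 * rng.standard_normal()  # O: exact Ornstein-Uhlenbeck step
        q += 0.5 * dt * p                      # A: half drift
        p -= 0.5 * dt * omega ** 2 * q         # B: half kick
        qs[i] = q
    return qs

qs = baoab_harmonic(omega=1.0, gamma=0.5, dt=0.05, n_steps=100000)
# <q^2> should approach k_B T / (m omega^2) = 1 for a correct thermostat.
```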
NASA Astrophysics Data System (ADS)
Gloster, Jonathan; Diep, Michael; Dredden, David; Mix, Matthew; Olsen, Mark; Price, Brian; Steil, Betty
2014-06-01
Small-to-medium sized businesses lack the resources to deploy and manage high-end advanced solutions to deter sophisticated threats from well-funded adversaries, yet evidence shows that these businesses are becoming key targets. As malicious code and network attacks become more sophisticated, classic signature-based virus and malware detection methods are less effective. To augment current malware detection methods, we developed a proactive approach to detecting emerging malware threats, using open source tools and intelligence to discover patterns and behaviors of malicious attacks and adversaries. Technical and analytical skills were combined to track adversarial behavior, methods and techniques. We established a controlled (separated-domain) network to identify, monitor, and track malware behavior and increase understanding of the methods and techniques used by cyber adversaries. We created a suite of tools that observes network and system performance, looking for anomalies that may be caused by malware. The toolset collects information from open-source tools and provides meaningful indicators that the system is under attack or has been attacked. When malware was discovered, we analyzed and reverse engineered it to determine how it could be detected and prevented. Results have shown that with minimal resources, cost-effective capabilities can be developed to detect abnormal behavior that may indicate malicious software.
Generating constrained randomized sequences: item frequency matters.
French, Robert M; Perruchet, Pierre
2009-11-01
All experimental psychologists understand the importance of randomizing lists of items. However, randomization is generally constrained, and these constraints (in particular, not allowing immediately repeated items), though designed to eliminate particular biases, frequently engender others. We describe a simple Monte Carlo randomization technique that solves a number of these problems. However, in many experimental settings, we are concerned not only with the number and distribution of items but also with the number and distribution of transitions between items. The algorithm mentioned above provides no control over this. We therefore introduce a simple technique that uses transition tables for generating correctly randomized sequences. We present an analytic method of producing item-pair frequency tables and item-pair transitional probability tables when immediate repetitions are not allowed. We illustrate these difficulties, and how to overcome them, with reference to a classic article on word segmentation in infants. Finally, we provide free access to an Excel file that allows users to generate transition tables with up to 10 different item types, as well as to generate appropriately distributed randomized sequences of any length without immediately repeated elements. This file is freely available from http://leadserv.u-bourgogne.fr/IMG/xls/TransitionMatrix.xls.
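A minimal way to realize such a constrained Monte Carlo randomization is rejection sampling: draw random permutations and keep the first one with no immediately repeated items. The Python sketch below illustrates the idea (the function name and retry cap are ours, not from the article); note how skewed item frequencies make valid permutations rarer, which is precisely the frequency effect the title points to.

```python
import random

def randomize_no_repeats(items, max_tries=10_000, seed=None):
    """Return a random permutation of `items` with no element
    immediately repeated, found by Monte Carlo rejection:
    shuffle and retry until the constraint holds."""
    rng = random.Random(seed)
    seq = list(items)
    for _ in range(max_tries):
        rng.shuffle(seq)
        if all(a != b for a, b in zip(seq, seq[1:])):
            return seq
    raise RuntimeError("no valid sequence found; item frequencies may be too skewed")

# Three item types, four tokens each
seq = randomize_no_repeats(list("AAAABBBBCCCC"), seed=1)
```

When one item type dominates the list (e.g. eight A's and two B's), almost every permutation contains an AA pair and the rejection loop stalls, which is why the article's transition-table method is needed in practice.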
Drawing cure: children's drawings as a psychoanalytic instrument.
Wittmann, Barbara
2010-01-01
This essay deals with the special case of drawings as psychoanalytical instruments. It aims at a theoretical understanding of the specific contribution made by children's drawings as a medium of the psychical. In the influential play technique developed by Melanie Klein, drawing continuously interacts with other symptomatic (play) actions. Nonetheless, specific functions of drawing within the play technique can be identified. The essay will discuss four crucial aspects in depth: 1) the strengthening of the analysis's recursivity associated with the graphic artifact; 2) the opening of the analytic process facilitated by drawing; 3) the creation of a genuinely graphic mode of producing meaning that allows the child to develop a "theory" of the workings of his own psychic apparatus; and 4) the new possibilities of symbolization associated with the latter. In contrast to classical definitions of the psychological instrument, the child's drawing is a weakly structured tool that does not serve to reproduce psychic processes in an artificial, controlled setting. The introduction of drawing into the psychoanalytic cure is by no means interested in replaying past events, but in producing events suited to effecting a transformation of the synchronic structures of the unconscious.
On the superposition principle in interference experiments.
Sinha, Aninda; H Vijay, Aravind; Sinha, Urbasi
2015-05-14
The superposition principle is usually incorrectly applied in interference experiments. This has recently been investigated through numerics based on Finite Difference Time Domain (FDTD) methods as well as the Feynman path integral formalism. In the current work, we have derived an analytic formula for the Sorkin parameter which can be used to determine the deviation from the application of the principle. We have found excellent agreement between the analytic distribution and those that have been earlier estimated by numerical integration as well as resource intensive FDTD simulations. The analytic handle would be useful for comparing theory with future experiments. It is applicable both to physics based on classical wave equations as well as the non-relativistic Schrödinger equation.
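The Sorkin parameter itself is a simple algebraic combination of the intensities recorded for the seven possible slit-opening combinations; under the naive application of the superposition principle (total amplitude equal to the sum of the single-slit amplitudes) it vanishes identically, and the point of the work above is that the true value is small but nonzero. A minimal numerical sketch of that identity, in our own notation rather than the paper's:

```python
import numpy as np

def sorkin_epsilon(psi_a, psi_b, psi_c):
    """Sorkin combination of detected intensities for the seven
    slit-opening combinations, computed under the naive superposition
    assumption (total amplitude = sum of single-slit amplitudes)."""
    I = lambda *psis: np.abs(sum(psis)) ** 2  # intensity from summed amplitudes
    return (I(psi_a, psi_b, psi_c)
            - I(psi_a, psi_b) - I(psi_a, psi_c) - I(psi_b, psi_c)
            + I(psi_a) + I(psi_b) + I(psi_c))

# Random complex single-slit amplitude profiles across 100 detector points
rng = np.random.default_rng(0)
psi = rng.normal(size=(3, 100)) + 1j * rng.normal(size=(3, 100))
eps = sorkin_epsilon(*psi)  # identically zero when superposition is exact
```

Expanding the squared moduli shows every cross term cancels, so any measured nonzero value isolates the deviation from naive superposition that the analytic formula above quantifies.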
Garlicki, Miroslaw; Roguski, K; Puchniewicz, M; Ehrlich, Marek P
2006-08-01
In this study we report our results with composite aortic root replacement (CVR) using the classic or modified Cabrol coronary implantation technique. From October 2001 to March 2005, 25 patients underwent aortic root replacement. In all cases, the indication for surgery was a degenerative aneurysm with a diameter of more than 6 cm. Seven patients had undergone a previous aortic operation on the ascending aorta. Mean age was 53+/-13 years and 22 patients were male. Mean Euroscore was 5.2+/-2.4. Aortic insufficiency was present in all patients. Two patients had Marfan syndrome. The 30-day mortality was 0%. Two patients required profound hypothermic circulatory arrest. Mean aortic cross-clamp time was 91+/-24 minutes and the mean circulatory arrest time was 24+/-15 minutes. No patients developed a pseudoaneurysm after the operation. We conclude that composite aortic root replacement with the classic or modified Cabrol technique results in a low operative mortality. However, it should be only used when a "button" technique is not feasible.
Statistical mechanics in the context of special relativity. II.
Kaniadakis, G
2005-09-01
The special relativity laws emerge as one-parameter (light speed) generalizations of the corresponding laws of classical physics. These generalizations, imposed by the Lorentz transformations, affect both the definition of the various physical observables (e.g., momentum, energy, etc.), as well as the mathematical apparatus of the theory. Here, following the general lines of [Phys. Rev. E 66, 056125 (2002)], we show that the Lorentz transformations impose also a proper one-parameter generalization of the classical Boltzmann-Gibbs-Shannon entropy. The obtained relativistic entropy permits us to construct a coherent and self-consistent relativistic statistical theory, preserving the main features of the ordinary statistical theory, which is recovered in the classical limit. The predicted distribution function is a one-parameter continuous deformation of the classical Maxwell-Boltzmann distribution and has a simple analytic form, showing power law tails in accordance with the experimental evidence. Furthermore, this statistical mechanics can be obtained as the stationary case of a generalized kinetic theory governed by an evolution equation obeying the H theorem and reproducing the Boltzmann equation of the ordinary kinetics in the classical limit.
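For reference, the one-parameter deformation underlying this framework is the Kaniadakis κ-exponential, exp_κ(x) = (√(1 + κ²x²) + κx)^(1/κ), which reduces to the ordinary exponential as κ → 0 and produces the power-law tails mentioned above. A quick sketch of the standard definition (our code, not the paper's):

```python
import math

def exp_kappa(x, kappa):
    """Kaniadakis kappa-exponential: a one-parameter deformation of
    exp(x) that reduces to it as kappa -> 0 and decays as a power law
    x**(-1/kappa) for large negative arguments."""
    if kappa == 0:
        return math.exp(x)
    return (math.sqrt(1 + kappa**2 * x**2) + kappa * x) ** (1 / kappa)

# Classical limit: small kappa reproduces the ordinary exponential
val = exp_kappa(1.0, 1e-6)  # ~ e = 2.71828...
```

The heavier-than-exponential tail of exp_κ(−βE) is what lets the resulting distribution match the power-law behavior seen experimentally while still recovering Maxwell-Boltzmann in the classical limit.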
Signatures of bifurcation on quantum correlations: Case of the quantum kicked top
NASA Astrophysics Data System (ADS)
Bhosale, Udaysinh T.; Santhanam, M. S.
2017-01-01
Quantum correlations reflect the quantumness of a system and are useful resources for quantum information and computational processes. Measures of quantum correlations do not have a classical analog and yet are influenced by classical dynamics. In this work, by modeling the quantum kicked top as a multiqubit system, the effect of classical bifurcations on measures of quantum correlations such as the quantum discord, geometric discord, and Meyer and Wallach Q measure is studied. The quantum correlation measures change rapidly in the vicinity of a classical bifurcation point. If the classical system is largely chaotic, time averages of the correlation measures are in good agreement with the values obtained by considering the appropriate random matrix ensembles. The quantum correlations scale with the total spin of the system, representing its semiclassical limit. In the vicinity of trivial fixed points of the kicked top, the scaling function decays as a power law. In the chaotic limit, for large total spin, quantum correlations saturate to a constant, which we obtain analytically, based on random matrix theory, for the Q measure. We also suggest that it can have experimental consequences.
Analytic Result for the Two-loop Six-point NMHV Amplitude in N = 4 Super Yang-Mills Theory
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dixon, Lance J. (SLAC); Drummond, James M.
2012-02-15
We provide a simple analytic formula for the two-loop six-point ratio function of planar N = 4 super Yang-Mills theory. This result extends the analytic knowledge of multi-loop six-point amplitudes beyond those with maximal helicity violation. We make a natural ansatz for the symbols of the relevant functions appearing in the two-loop amplitude, and impose various consistency conditions, including symmetry, the absence of spurious poles, the correct collinear behavior, and agreement with the operator product expansion for light-like (super) Wilson loops. This information reduces the ansatz to a small number of relatively simple functions. In order to fix these parameters uniquely, we utilize an explicit representation of the amplitude in terms of loop integrals that can be evaluated analytically in various kinematic limits. The final compact analytic result is expressed in terms of classical polylogarithms, whose arguments are rational functions of the dual conformal cross-ratios, plus precisely two functions that are not of this type. One of the functions, the loop integral Ω^(2), also plays a key role in a new representation of the remainder function R_6^(2) in the maximally helicity violating sector. Another interesting feature at two loops is the appearance of a new (parity odd) x (parity odd) sector of the amplitude, which is absent at one loop, and which is uniquely determined in a natural way in terms of the more familiar (parity even) x (parity even) part. The second non-polylogarithmic function, the loop integral Ω̃^(2), characterizes this sector. Both Ω^(2) and Ω̃^(2) can be expressed as one-dimensional integrals over classical polylogarithms with rational arguments.
Overview Experimental Diagnostics for Rarefied Flows - Selected Topics
2011-01-01
… flows occurring e.g. in electrical thrusters or plasma wind tunnels. Classical intrusive techniques like Pitot, heat flux, and enthalpy probes, as well as mass …, are developed and applied at the IRS, especially designed for the characterisation of flows produced by electrical thrusters and within the plasma wind tunnels …
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ivanov, A. P.
2009-06-15
In the referenced paper an analytical approach was introduced, which allows one to demonstrate the instability in linearly stable systems, specifically, in a classical three-body problem. These considerations are disproved here.
New bis(alkythio) fatty acid methyl esters
USDA-ARS?s Scientific Manuscript database
The addition reaction of dimethyl disulfide (DMDS) to mono-unsaturated fatty acid methyl esters is well-known for analytical purposes to determine the position of double bonds by mass spectrometry. In this work, the classical iodine-catalyzed reaction is expanded to other dialkyl disulfides (RSSR), ...
Soltani, Amin; Gebauer, Denis; Duschek, Lennart; Fischer, Bernd M; Cölfen, Helmut; Koch, Martin
2017-10-12
Crystal formation is a highly debated problem. This report shows that the crystallization of l-(+)-tartaric acid from water follows a non-classical path involving intermediate hydrated states. Analytical ultracentrifugation indicates that solution clusters of the initial stages aggregate to form an early intermediate. Terahertz spectroscopy performed during water evaporation highlights a transient increase in the absorption during nucleation; this indicates the recurrence of water molecules that are expelled from the intermediate phase. In addition, a transient resonance at 750 GHz, which can be assigned to a natural vibration of large hydrated aggregates, vanishes after the final crystal has formed. Furthermore, THz data reveal the vibration of nanosized clusters in the dilute solution indicated by analytical ultracentrifugation. Infrared spectroscopy and wide-angle X-ray scattering highlight that the intermediate is not a crystalline hydrate. These results demonstrate that nanoscopic intermediate units assemble to form the first solvent-free crystalline nuclei upon dehydration. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
New analytical exact solutions of time fractional KdV-KZK equation by Kudryashov methods
NASA Astrophysics Data System (ADS)
S Saha, Ray
2016-04-01
In this paper, new exact solutions of the time fractional KdV-Khokhlov-Zabolotskaya-Kuznetsov (KdV-KZK) equation are obtained by the classical Kudryashov method and the modified Kudryashov method, respectively. For this purpose, the modified Riemann-Liouville derivative is used to convert the nonlinear time fractional KdV-KZK equation into a nonlinear ordinary differential equation. In the present analysis, the classical Kudryashov method and the modified Kudryashov method are both used successfully to compute the analytical solutions of the time fractional KdV-KZK equation. As a result, new exact solutions involving the symmetrical Fibonacci function, hyperbolic function and exponential function are obtained for the first time. The methods under consideration are reliable and efficient, and can be used as an alternative to establish new exact solutions of different types of fractional differential equations arising from mathematical physics. The obtained results are exhibited graphically in order to demonstrate the efficiency and applicability of these proposed methods of solving the nonlinear time fractional KdV-KZK equation.
Modeling the free energy surfaces of electron transfer in condensed phases
NASA Astrophysics Data System (ADS)
Matyushov, Dmitry V.; Voth, Gregory A.
2000-10-01
We develop a three-parameter model of electron transfer (ET) in condensed phases based on the Hamiltonian of a two-state solute linearly coupled to a harmonic, classical solvent mode with different force constants in the initial and final states (a classical limit of the quantum Kubo-Toyozawa model). The exact analytical solution for the ET free energy surfaces demonstrates the following features: (i) the range of ET reaction coordinates is limited by a one-sided fluctuation band, (ii) the ET free energies are infinite outside the band, and (iii) the free energy surfaces are parabolic close to their minima and linear far from the minima positions. The model provides an analytical framework to map physical phenomena conflicting with the Marcus-Hush two-parameter model of ET. Nonlinear solvation, ET in polarizable charge-transfer complexes, and configurational flexibility of donor-acceptor complexes are successfully mapped onto the model. The present theory leads to a significant modification of the energy gap law for ET reactions.
On oscillatory convection with the Cattaneo–Christov hyperbolic heat-flow model
Bissell, J. J.
2015-01-01
Adoption of the hyperbolic Cattaneo–Christov heat-flow model in place of the more usual parabolic Fourier law is shown to raise the possibility of oscillatory convection in the classic Bénard problem of a Boussinesq fluid heated from below. By comparing the critical Rayleigh numbers for stationary and oscillatory convection, R_c and R_S respectively, oscillatory convection is found to represent the preferred form of instability whenever the Cattaneo number C exceeds a threshold value C_T ≥ 8/(27π²) ≈ 0.03. In the case of free boundaries, analytical approaches permit direct treatment of the role played by the Prandtl number P_1, which—in contrast to the classical stationary scenario—can impact on oscillatory modes significantly owing to the non-zero frequency of convection. Numerical investigation indicates that the behaviour found analytically for free boundaries applies in a qualitatively similar fashion for fixed boundaries, while the threshold Cattaneo number C_T is computed as a function of P_1 ∈ [10⁻², 10²] for both boundary regimes. PMID:25792960
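The quoted threshold is easy to check numerically; the snippet below simply evaluates the abstract's value 8/(27π²) for the free-boundary case:

```python
import math

# Threshold Cattaneo number from the abstract (free boundaries):
# oscillatory convection is preferred when C exceeds roughly 8/(27*pi^2).
C_T = 8 / (27 * math.pi**2)
print(f"C_T = {C_T:.5f}")  # C_T = 0.03002
```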
NASA Astrophysics Data System (ADS)
Descartes, R.; Rota, G.-C.; Euler, L.; Bernoulli, J. D.; Siegel, Edward Carl-Ludwig
2011-03-01
Quantum-statistics Dichotomy: Fermi-Dirac (FDQS) versus Bose-Einstein (BEQS), respectively with contact-repulsion/non-condensation (FDCR) versus attraction/condensation (BEC), are manifestly demonstrated by Taylor-expansion ONLY of their denominator exponential, identified BOTH as Descartes analytic-geometry conic-sections: FDQS as Ellipse (homotopy to rectangle FDQS distribution-function), VIA Maxwell-Boltzmann classical-statistics (MBCS) to Parabola MORPHISM, VS. BEQS to Hyperbola, Archimedes' HYPERBOLICITY INEVITABILITY, and as well generating-functions [Abramowitz-Stegun, Handbook Math.-Functions, p. 804], respectively of Euler-numbers/functions (via Riemann zeta-function domination of quantum-statistics [Pathria, Statistical-Mechanics; Huang, Statistical-Mechanics]) VS. Bernoulli-numbers/functions. Much can be learned about statistical-physics from Euler-numbers/functions via Riemann zeta-function(s) VS. Bernoulli-numbers/functions [Conway-Guy, Book of Numbers], and about Euler-numbers/functions via Riemann zeta-function(s) MORPHISM VS. Bernoulli-numbers/functions, and vice versa. Ex.: Riemann-hypothesis PHYSICS proof PARTLY as BEQS BEC/BEA.
Chandrasekhar's dynamical friction and non-extensive statistics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Silva, J.M.; Lima, J.A.S.; De Souza, R.E.
2016-05-01
The motion of a point-like object of mass M passing through the background potential of massive collisionless particles (m ≪ M) suffers a steady deceleration named dynamical friction. In his classical work, Chandrasekhar assumed a Maxwellian velocity distribution in the halo and neglected the self-gravity of the wake induced by the gravitational focusing of the mass M. In this paper, by relaxing the validity of the Maxwellian distribution due to the presence of long-range forces, we derive an analytical formula for the dynamical friction in the context of the q-nonextensive kinetic theory. In the extensive limiting case (q = 1), the classical Gaussian Chandrasekhar result is recovered. As an application, the dynamical friction timescale for Globular Clusters spiraling to the galactic center is explicitly obtained. Our results suggest that the problem concerning the large timescale as derived by numerical N-body simulations or semi-analytical models can be understood as a departure from the standard extensive Maxwellian regime as measured by the Tsallis nonextensive q-parameter.
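In the extensive (q = 1) limit referred to above, the deceleration reduces to Chandrasekhar's classical Maxwellian result. A sketch of that textbook limit follows; the constants, parameter values, and Coulomb logarithm are illustrative, not taken from the paper:

```python
import math

def chandrasekhar_decel(v, sigma, rho, M, ln_lambda=10.0, G=6.674e-11):
    """Classical (q = 1, Maxwellian) Chandrasekhar dynamical-friction
    deceleration of a mass M moving at speed v through a background of
    mass density rho with velocity dispersion sigma (SI units):
    |dv/dt| = 4*pi*G^2*M*rho*lnLambda/v^2 * [erf(X) - 2X/sqrt(pi)*exp(-X^2)],
    where X = v / (sqrt(2)*sigma)."""
    X = v / (math.sqrt(2.0) * sigma)
    bracket = math.erf(X) - (2.0 * X / math.sqrt(math.pi)) * math.exp(-X * X)
    return 4.0 * math.pi * G**2 * M * rho * ln_lambda * bracket / v**2

# Illustrative numbers only: a ~1e35 kg cluster in a galactic halo
a = chandrasekhar_decel(v=2.0e5, sigma=1.5e5, rho=1.0e-21, M=2.0e35)
```

The inspiral timescale discussed in the abstract is then of order v/|dv/dt|; the nonextensive (q ≠ 1) correction modifies the bracketed velocity-space factor.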
Deflection of cross-ply composite laminates induced by piezoelectric actuators.
Her, Shiuh-Chuan; Lin, Chi-Sheng
2010-01-01
The coupling effects between the mechanical and electric properties of piezoelectric materials have drawn significant attention for their potential applications as sensors and actuators. In this investigation, two piezoelectric actuators are symmetrically surface bonded on a cross-ply composite laminate. Electric voltages with the same amplitude and opposite sign are applied to the two symmetric piezoelectric actuators, resulting in the bending effect on the laminated plate. The bending moment is derived by using the classical laminate theory and piezoelectricity. The analytical solution of the flexural displacement of the simply supported composite plate subjected to the bending moment is solved by using the plate theory. The analytical solution is compared with the finite element solution to show the validation of present approach. The effects of the size and location of the piezoelectric actuators on the response of the composite laminate are presented through a parametric study. A simple model incorporating the classical laminate theory and plate theory is presented to predict the deformed shape of the simply supported laminate plate.
Scaling analysis and instantons for thermally assisted tunneling and quantum Monte Carlo simulations
NASA Astrophysics Data System (ADS)
Jiang, Zhang; Smelyanskiy, Vadim N.; Isakov, Sergei V.; Boixo, Sergio; Mazzola, Guglielmo; Troyer, Matthias; Neven, Hartmut
2017-01-01
We develop an instantonic calculus to derive an analytical expression for the thermally assisted tunneling decay rate of a metastable state in a fully connected quantum spin model. The tunneling decay problem can be mapped onto the Kramers escape problem of a classical random dynamical field. This dynamical field is simulated efficiently by path-integral quantum Monte Carlo (QMC). We show analytically that the exponential scaling with the number of spins of the thermally assisted quantum tunneling rate and the escape rate of the QMC process are identical. We relate this effect to the existence of a dominant instantonic tunneling path. The instanton trajectory is described by nonlinear dynamical mean-field theory equations for a single-site magnetization vector, which we solve exactly. Finally, we derive scaling relations for the "spiky" barrier shape when the spin tunneling and QMC rates scale polynomially with the number of spins N while a purely classical over-the-barrier activation rate scales exponentially with N .
Amarante Andrade, Pedro; Švec, Jan G
2016-07-01
Differences in classical and non-classical singing are due primarily to aesthetic style requirements. The head position can affect the sound quality. This study aimed at comparing the head position for famous classical and non-classical male singers performing high notes. Images of 39 Western classical and 34 non-classical male singers during live performances were obtained from YouTube. Ten raters evaluated the frontal rotational head position (depression versus elevation) and transverse head position (retraction versus protraction) visually using a visual analogue scale. The results showed a significant difference for frontal rotational head position. Most non-classical singers in the sample elevated their heads for high notes while the classical singers were observed to keep it around the neutral position. This difference may be attributed to different singing techniques and phonatory system adjustments utilized by each group.
de Ville de Goyet, J; di Francesco, F; Sottani, V; Grimaldi, C; Tozzi, A E; Monti, L; Muiesan, P
2015-08-01
Controversy remains about the best line of division for liver splitting, through Segment IV or through the umbilical fissure. Both techniques are currently used, with the choice varying between surgical teams in the absence of an evidence-based choice. We conducted a single-center retrospective analysis of 47 left split liver grafts that were procured with two different division techniques: "classical" (N = 28, Group A) or through the umbilical fissure and plate (N = 19, Group B). The allocation of recipients to each group was at random; a single transplant team performed all transplantations. Demographics, characteristics, technical aspects, and outcomes were similar in both groups. The grafts in Group A, prepared with the classical technique, were procured more often with a single bile duct orifice compared with the grafts in Group B; however, this was not associated with a higher incidence of biliary problems in this series of transplants (96% actual graft survival rate [median ± s.d. 26 ± 20 months]). Both techniques provide good quality split grafts and an excellent outcome; surgical expertise with a given technique is more relevant than the technique itself. The classical technique, however, seems to be more flexible in various ways, and surgeons may find it to be preferable. © 2015 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
Spietelun, Agata; Marcinkowski, Łukasz; de la Guardia, Miguel; Namieśnik, Jacek
2013-12-20
Solid-phase microextraction techniques find increasing applications in the sample preparation step before chromatographic determination of analytes in samples with a complex composition. These techniques allow for integrating several operations, such as sample collection, extraction, analyte enrichment above the detection limit of a given measuring instrument and the isolation of analytes from the sample matrix. In this work, information about novel methodological and instrumental solutions in relation to different variants of solid phase extraction techniques is presented, namely solid-phase microextraction (SPME), stir bar sorptive extraction (SBSE) and magnetic solid phase extraction (MSPE), including practical applications of these techniques and a critical discussion about their advantages and disadvantages. The proposed solutions fulfill the requirements resulting from the concept of sustainable development, and specifically from the implementation of green chemistry principles in analytical laboratories. Therefore, particular attention was paid to the description of possible uses of novel, selective stationary phases in extraction techniques, inter alia, polymeric ionic liquids, carbon nanotubes, and silica- and carbon-based sorbents. The methodological solutions, together with properly matched sampling devices for collecting analytes from samples with varying matrix composition, enable us to reduce the number of errors during the sample preparation prior to chromatographic analysis as well as to limit the negative impact of this analytical step on the natural environment and the health of laboratory employees. Copyright © 2013 Elsevier B.V. All rights reserved.
Making Classical Conditioning Understandable through a Demonstration Technique.
ERIC Educational Resources Information Center
Gibb, Gerald D.
1983-01-01
One lemon, an assortment of other fruits and vegetables, a tennis ball, and a Galvanic Skin Response meter are needed to implement this approach to teaching about classical conditioning in introductory psychology courses. (RM)
One-calibrant kinetic calibration for on-site water sampling with solid-phase microextraction.
Ouyang, Gangfeng; Cui, Shufen; Qin, Zhipei; Pawliszyn, Janusz
2009-07-15
The existing solid-phase microextraction (SPME) kinetic calibration technique, using the desorption of the preloaded standards to calibrate the extraction of the analytes, requires that the physicochemical properties of the standard be similar to those of the analyte, which has limited the application of the technique. In this study, a new method, termed the one-calibrant kinetic calibration technique, which can use the desorption of a single standard to calibrate all extracted analytes, was proposed. The theoretical considerations were validated by passive water sampling in the laboratory and rapid water sampling in the field. To mimic the variability of the environment, such as temperature, turbulence, and the concentration of the analytes, the flow-through system for the generation of standard aqueous polycyclic aromatic hydrocarbons (PAHs) solution was modified. The experimental results of the passive samplings in the flow-through system illustrated that the effect of the environmental variables was successfully compensated with the kinetic calibration technique, and all extracted analytes can be calibrated through the desorption of a single calibrant. On-site water sampling with rotated SPME fibers also illustrated the feasibility of the new technique for rapid on-site sampling of hydrophobic organic pollutants in water. This technique will accelerate the application of the kinetic calibration method and also will be useful for other microextraction techniques.
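The relation underlying SPME kinetic calibration is the symmetry between absorption and desorption: the fraction of analyte taken up plus the fraction of preloaded standard remaining sum to one, n/n_e + Q/q_0 = 1. A minimal sketch of that relation (our variable names; the one-calibrant extension additionally relates the single standard's desorption rate to each analyte's uptake rate, which is not modeled here):

```python
def equilibrium_amount(n_extracted, q_remaining, q0):
    """Estimate the equilibrium amount n_e of an analyte from the
    amount n actually extracted and the preloaded standard's
    desorption, using the isotropy relation n/n_e + Q/q0 = 1."""
    desorbed_fraction = 1.0 - q_remaining / q0
    if desorbed_fraction <= 0:
        raise ValueError("standard did not desorb; cannot calibrate")
    return n_extracted / desorbed_fraction

# Half the standard desorbed, so extraction reached half of equilibrium:
n_e = equilibrium_amount(n_extracted=30.0, q_remaining=50.0, q0=100.0)  # 60.0
```

Because temperature and turbulence affect uptake and desorption symmetrically, their influence cancels in this ratio, which is the compensation effect the study verifies.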
Pereira, Jorge; Câmara, José S; Colmsjö, Anders; Abdel-Rehim, Mohamed
2014-06-01
Sample preparation is an important analytical step regarding the isolation and concentration of desired components from complex matrices and greatly influences their reliable and accurate analysis and data quality. It is the most labor-intensive and error-prone process in analytical methodology and, therefore, may influence the analytical performance of the target analytes quantification. Many conventional sample preparation methods are relatively complicated, involving time-consuming procedures and requiring large volumes of organic solvents. Recent trends in sample preparation include miniaturization, automation, high-throughput performance, on-line coupling with analytical instruments and low-cost operation through extremely low volume or no solvent consumption. Micro-extraction techniques, such as micro-extraction by packed sorbent (MEPS), have these advantages over the traditional techniques. This paper gives an overview of MEPS technique, including the role of sample preparation in bioanalysis, the MEPS description namely MEPS formats (on- and off-line), sorbents, experimental and protocols, factors that affect the MEPS performance, and the major advantages and limitations of MEPS compared with other sample preparation techniques. We also summarize MEPS recent applications in bioanalysis. Copyright © 2014 John Wiley & Sons, Ltd.
Ghost imaging of phase objects with classical incoherent light
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shirai, Tomohiro; Setaelae, Tero; Friberg, Ari T.
2011-10-15
We describe an optical setup for performing spatial Fourier filtering in ghost imaging with classical incoherent light. This is achieved by a modification of the conventional geometry for lensless ghost imaging. It is shown on the basis of classical coherence theory that with this technique one can realize what we call phase-contrast ghost imaging to visualize pure phase objects.
Structure of the classical scrape-off layer of a tokamak
NASA Astrophysics Data System (ADS)
Rozhansky, V.; Kaveeva, E.; Senichenkov, I.; Vekshina, E.
2018-03-01
The structure of the scrape-off layer (SOL) of a tokamak with little or no turbulent transport is analyzed. The analytical estimates of the density and electron temperature fall-off lengths of the SOL are put forward. It is demonstrated that the SOL width could be of the order of the ion poloidal gyroradius, as suggested in Goldston (2012 Nuclear Fusion 52 013009). The analytical results are supported by the results of the 2D simulations of the edge plasma with reduced transport coefficients performed by SOLPS-ITER transport code.
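The heuristic width scale quoted above, the ion poloidal gyroradius, is straightforward to estimate. A sketch with illustrative (not paper-specific) edge parameters, assuming the usual definition ρ_θ = m_i v_th / (e B_θ) with v_th = √(T_i/m_i):

```python
import math

def poloidal_gyroradius(T_i_eV, m_i_kg, B_pol_T):
    """Ion poloidal gyroradius rho_theta = m_i*v_th/(e*B_pol), with
    thermal speed v_th = sqrt(T_i/m_i); the heuristic SOL-width scale
    of the Goldston model cited in the abstract."""
    e = 1.602e-19  # elementary charge, C
    v_th = math.sqrt(T_i_eV * e / m_i_kg)
    return m_i_kg * v_th / (e * B_pol_T)

# Illustrative deuterium edge plasma: T_i ~ 100 eV, poloidal field ~ 0.3 T
rho = poloidal_gyroradius(100.0, 3.34e-27, 0.3)  # a few millimetres
```

A millimetre-scale result of this kind is what makes the narrow SOL widths found in the 2D SOLPS-ITER simulations plausible.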
Electrochemistry in hollow-channel paper analytical devices.
Renault, Christophe; Anderson, Morgan J; Crooks, Richard M
2014-03-26
In the present article we provide a detailed analysis of fundamental electrochemical processes in a new class of paper-based analytical devices (PADs) having hollow channels (HCs). Voltammetry and amperometry were applied under flow and no flow conditions yielding reproducible electrochemical signals that can be described by classical electrochemical theory as well as finite-element simulations. The results shown here provide new and quantitative insights into the flow within HC-PADs. The interesting new result is that despite their remarkable simplicity these HC-PADs exhibit electrochemical and hydrodynamic behavior similar to that of traditional microelectrochemical devices.
Using Qualitative Inquiry to Promote Organizational Intelligence
ERIC Educational Resources Information Center
Kimball, Ezekiel; Loya, Karla I.
2017-01-01
Framed by Terenzini's revision of his classic "On the nature of institutional research" article, this chapter offers concluding thoughts on the way in which technical/analytical, issues, and contextual types of awarenesses appeared across chapters in this volume. Moreover, it outlines how each chapter demonstrated how qualitative inquiry…
The Initial Flow of Classical Gluon Fields in Heavy Ion Collisions
NASA Astrophysics Data System (ADS)
Fries, Rainer J.; Chen, Guangyao
2015-03-01
Using analytic solutions of the Yang-Mills equations we calculate the initial flow of energy of the classical gluon field created in collisions of large nuclei at high energies. We find radial and elliptic flow which follows gradients in the initial energy density, similar to a simple hydrodynamic behavior. In addition we find a rapidity-odd transverse flow field which implies the presence of angular momentum and should lead to directed flow in final particle spectra. We trace those energy flow terms to transverse fields from the non-abelian generalization of Gauss' Law and Ampere's and Faraday's Laws.
Evaluating Moving Target Defense with PLADD
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jones, Stephen T.; Outkin, Alexander V.; Gearhart, Jared Lee
This project evaluates the effectiveness of moving target defense (MTD) techniques using a new game we have designed, called PLADD, inspired by the game FlipIt [28]. PLADD extends FlipIt by incorporating what we believe are key MTD concepts. We have analyzed PLADD and proven the existence of a defender strategy that pushes a rational attacker out of the game, demonstrated how limited the strategies available to an attacker are in PLADD, and derived analytic expressions for the expected utility of the game’s players in multiple game variants. We have created an algorithm for finding a defender’s optimal PLADD strategy. We show that in the special case of achieving deterrence in PLADD, MTD is not always cost effective and that its optimal deployment may shift abruptly from not using MTD at all to using it as aggressively as possible. We believe our effort provides basic, fundamental insights into the use of MTD, but conclude that a truly practical analysis requires model selection and calibration based on real scenarios and empirical data. We propose several avenues for further inquiry, including (1) agents with adaptive capabilities more reflective of real world adversaries, (2) the presence of multiple, heterogeneous adversaries, (3) computational game theory-based approaches such as coevolution to allow scaling to the real world beyond the limitations of analytical analysis and classical game theory, (4) mapping the game to real-world scenarios, (5) taking player risk into account when designing a strategy (in addition to expected payoff), (6) improving our understanding of the dynamic nature of MTD-inspired games by using a martingale representation, defensive forecasting, and techniques from signal processing, and (7) using adversarial games to develop inherently resilient cyber systems.
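The deterrence effect described in the abstract can be illustrated with a toy FlipIt-style simulation. This is a sketch of the game family that PLADD extends, not the PLADD model itself; the discrete-time scheduling, the tie-breaking rule, and all parameter values below are illustrative assumptions.

```python
def flipit_utilities(d, a, phase, cost_d, cost_a, horizon=10_000):
    """Toy FlipIt-style game: the defender takes the resource every d steps,
    the attacker every a steps starting at `phase`. Utility = fraction of time
    in control minus per-move cost times move rate. If both move in the same
    step, the attacker moves last and takes control (an arbitrary convention).
    """
    owner = "D"
    control = {"D": 0, "A": 0}
    moves = {"D": 0, "A": 0}
    for t in range(horizon):
        if t % d == 0:
            owner = "D"
            moves["D"] += 1
        if t >= phase and (t - phase) % a == 0:
            owner = "A"
            moves["A"] += 1
        control[owner] += 1
    u_d = control["D"] / horizon - cost_d * moves["D"] / horizon
    u_a = control["A"] / horizon - cost_a * moves["A"] / horizon
    return u_d, u_a
```

As the defender moves more often, the attacker's control fraction shrinks while its per-move cost stays fixed, so a high enough move cost drives the attacker's expected utility negative and a rational attacker exits the game — the qualitative deterrence effect the report analyzes exactly for PLADD.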
The Concept of Command Leadership in the Military Classics: Ardant du Picq and Foch.
1986-04-01
counseling techniques. These lessons are then linked to some form of socio-psychological model designed to provide the officer with a list of leadership...the military has substituted contemporary quasi-psychology and business leadership models for the classical combat models" (12:1). In response to this...leadership, the military seems to concentrate far more heavily on socio-psychological factors and modern managerial techniques than on the traits and
Ferrell, Jack R.; Olarte, Mariefel V.; Christensen, Earl D.; ...
2016-07-05
Here, we discuss the standardization of analytical techniques for pyrolysis bio-oils, including the current status of methods, and our opinions on future directions. First, the history of past standardization efforts is summarized, and both successful and unsuccessful validations of analytical techniques are highlighted. The majority of analytical standardization studies to date have tested only physical characterization techniques. In this paper, we present results from an international round robin on the validation of chemical characterization techniques for bio-oils. Techniques tested included acid number, carbonyl titrations using two different methods (one at room temperature and one at 80 °C), 31P NMR for determination of hydroxyl groups, and a quantitative gas chromatography–mass spectrometry (GC-MS) method. Both carbonyl titration and acid number methods have yielded acceptable inter-laboratory variabilities. 31P NMR produced acceptable results for aliphatic and phenolic hydroxyl groups, but not for carboxylic hydroxyl groups. As shown in previous round robins, GC-MS results were more variable. Reliable chemical characterization of bio-oils will enable upgrading research and allow for detailed comparisons of bio-oils produced at different facilities. Reliable analytics are also needed to enable an emerging bioenergy industry, as processing facilities often have different analytical needs and capabilities than research facilities. We feel that correlations in reliable characterizations of bio-oils will help strike a balance between research and industry, and will ultimately help to determine metrics for bio-oil quality. Lastly, the standardization of additional analytical methods is needed, particularly for upgraded bio-oils.
Experimental and analytical determination of stability parameters for a balloon tethered in a wind
NASA Technical Reports Server (NTRS)
Redd, L. T.; Bennett, R. M.; Bland, S. R.
1973-01-01
Experimental and analytical techniques for determining stability parameters for a balloon tethered in a steady wind are described. These techniques are applied to a particular 7.64-meter-long balloon, and the results are presented. The stability parameters of interest appear as coefficients in linearized stability equations and are derived from the various forces and moments acting on the balloon. In several cases the results from the experimental and analytical techniques are compared and suggestions are given as to which techniques are the most practical means of determining values for the stability parameters.
A theoretical framework for analyzing the effect of external change on tidal dynamics in estuaries
NASA Astrophysics Data System (ADS)
CAI, H.; Savenije, H.; Toffolon, M.
2013-12-01
The most densely populated areas of the world are usually located in coastal areas near estuaries. As a result, estuaries are often subject to intense human interventions, such as dredging for navigation, dam construction, and fresh water withdrawal, which in some areas have led to serious deterioration of invaluable ecosystems. Hence it is important to understand the influence of such interventions on tidal dynamics in these areas. In this study, we present one consistent theoretical framework for tidal hydrodynamics, which can be used as a rapid assessment technique that assists policy makers and managers in making considered decisions for the protection and management of the estuarine environment when evaluating the effect of human interventions in estuaries. Analytical solutions to the one-dimensional St. Venant equations for the tidal hydrodynamics in convergent unbounded estuaries with negligible river discharge can be cast in the form of a set of four implicit dimensionless equations for phase lag, velocity amplitude, damping, and wave celerity, as a function of two localized parameters describing friction and convergence. This method allows for the comparison of the different analytical approaches by rewriting the different solutions in the same format. In this study, classical and more recent formulations are compared, showing the differences and similarities associated with their specific simplifications. The envelope method, which is based on consideration of the dynamics at high water and low water, can be used to derive damping equations that use different friction approximations. This yields a corresponding set of analytical solutions and thereby allows one to build a consistent theoretical framework. Analysis of the asymptotic behaviour of the equations shows that an equilibrium tidal amplitude exists, reflecting the balance between friction and channel convergence. The framework is subsequently extended to take into account the effect of river discharge.
Hence, the analytical solutions are applicable even in the upstream part of an estuary, where the influence of river discharge is considerable. The proposed analytical solutions are transparent and practical, allowing a quantitative and qualitative assessment of human interventions (e.g., dredging, flow reduction) on tidal dynamics. Moreover, they are rapid assessment techniques that enable users to set up a simple model and to understand the functioning of the system with a minimum of required information. The analytical model is illustrated in three large-scale estuaries significantly influenced by human activities: the Scheldt estuary in the Netherlands and the Modaomen and Yangtze estuaries in China. In these estuaries, the correspondence with observations is good, which suggests that the proposed model is a useful, realistic, and reliable instrument for quick detection of the effect of human interventions on tidal dynamics and subsequent environmental issues, such as salt intrusion.
Woods, Katherine M.; Petron, David J.; Shultz, Barry B.; Hicks-Little, Charlie A.
2015-01-01
Context Chronic exertional compartment syndrome (CECS) is a debilitating condition resulting in loss of function and a decrease in athletic performance. Cases of CECS are increasing among Nordic skiers; therefore, analysis of intracompartmental pressures (ICPs) before and after Nordic skiing is warranted. Objective To determine if lower leg anterior and lateral ICPs and subjective lower leg pain levels increased after a 20-minute Nordic rollerskiing time trial and to examine if differences existed between postexercise ICPs for the 2 Nordic rollerskiing techniques, classic and skate. Design Crossover study. Setting Outdoor paved loop. Patients or Other Participants Seven healthy Division I Nordic skiers (3 men, 4 women; age = 22.71 ± 1.38 y, height = 175.36 ± 6.33 cm, mass = 70.71 ± 6.58 kg). Intervention(s) Participants completed two 20-minute rollerskiing time trials using the classic and skate technique in random order. The time trials were completed 7 days apart. Anterior and lateral ICPs and lower leg pain scores were obtained at baseline and at minutes 1 and 5 after rollerskiing. Main Outcome Measure(s) Anterior and lateral ICPs (mm Hg) were measured using a Stryker Quic STIC handheld monitor. Subjective measures of lower leg pain were recorded using the 11-point Numeric Rating Scale. Results Increases in both anterior (P = .000) and lateral compartment (P = .002) ICPs were observed, regardless of rollerskiing technique used. Subjective lower leg pain increased after the classic technique for the men from baseline to 1 minute postexercise and after the skate technique for the women. Significant 3-way interactions (technique × time × sex) were observed for the anterior (P = .002) and lateral (P = .009) compartment ICPs and lower leg pain (P = .005). Conclusions Postexercise anterior and lateral ICPs increased compared with preexercise ICPs after both classic and skate rollerskiing techniques. Lower leg pain is a primary symptom of CECS. 
The subjective lower leg pain 11-point Numeric Rating Scale results indicate that increases in lower leg ICPs sustained during Nordic rollerskiing may increase discomfort during activity. Our results therefore suggest that Nordic rollerskiing contributes to increases in ICPs, which may lead to the development of CECS. PMID:26090709
Woods, Katherine M; Petron, David J; Shultz, Barry B; Hicks-Little, Charlie A
2015-08-01
Chronic exertional compartment syndrome (CECS) is a debilitating condition resulting in loss of function and a decrease in athletic performance. Cases of CECS are increasing among Nordic skiers; therefore, analysis of intracompartmental pressures (ICPs) before and after Nordic skiing is warranted. To determine if lower leg anterior and lateral ICPs and subjective lower leg pain levels increased after a 20-minute Nordic rollerskiing time trial and to examine if differences existed between postexercise ICPs for the 2 Nordic rollerskiing techniques, classic and skate. Crossover study. Outdoor paved loop. Seven healthy Division I Nordic skiers (3 men, 4 women; age = 22.71 ± 1.38 y, height = 175.36 ± 6.33 cm, mass = 70.71 ± 6.58 kg). Participants completed two 20-minute rollerskiing time trials using the classic and skate technique in random order. The time trials were completed 7 days apart. Anterior and lateral ICPs and lower leg pain scores were obtained at baseline and at minutes 1 and 5 after rollerskiing. Anterior and lateral ICPs (mm Hg) were measured using a Stryker Quic STIC handheld monitor. Subjective measures of lower leg pain were recorded using the 11-point Numeric Rating Scale. Increases in both anterior (P = .000) and lateral compartment (P = .002) ICPs were observed, regardless of rollerskiing technique used. Subjective lower leg pain increased after the classic technique for the men from baseline to 1 minute postexercise and after the skate technique for the women. Significant 3-way interactions (technique × time × sex) were observed for the anterior (P = .002) and lateral (P = .009) compartment ICPs and lower leg pain (P = .005). Postexercise anterior and lateral ICPs increased compared with preexercise ICPs after both classic and skate rollerskiing techniques. Lower leg pain is a primary symptom of CECS. 
The subjective lower leg pain 11-point Numeric Rating Scale results indicate that increases in lower leg ICPs sustained during Nordic rollerskiing may increase discomfort during activity. Our results therefore suggest that Nordic rollerskiing contributes to increases in ICPs, which may lead to the development of CECS.
Quantum localization for a kicked rotor with accelerator mode islands.
Iomin, A; Fishman, S; Zaslavsky, G M
2002-03-01
Dynamical localization of classical superdiffusion for the quantum kicked rotor is studied in the semiclassical limit. Both classical and quantum dynamics of the system become more complicated under the conditions of a mixed phase space with accelerator mode islands. Recently, long time quantum flights due to the accelerator mode islands have been found. By exploring their dynamics, it is shown here that the classical-quantum duality of the flights leads to their localization. The classical mechanism of superdiffusion is due to accelerator mode dynamics, while quantum tunneling suppresses the superdiffusion and leads to localization of the wave function. Coupling of the regular dynamics inside the accelerator mode island structures to the dynamics in the chaotic sea is shown to increase the localization length. A numerical procedure and an analytical method are developed to obtain an estimate of the localization length which, as is shown, has exponentially large scaling with the dimensionless Planck constant h̃ < 1 in the semiclassical limit. Conditions for the validity of the developed method are specified.
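The classical superdiffusion mechanism above can be illustrated with the Chirikov standard map, the textbook reduction of the kicked rotor. This is a generic sketch of the classical map, not the paper's semiclassical analysis; the kicking strengths and ensemble sizes are illustrative choices.

```python
import math
import random

def standard_map(theta, p, K, steps):
    """Iterate the Chirikov standard map: p' = p + K sin(theta), theta' = theta + p'."""
    for _ in range(steps):
        p = p + K * math.sin(theta)
        theta = (theta + p) % (2 * math.pi)
    return theta, p

def momentum_spread(K, steps=500, n_traj=200, seed=1):
    """Mean squared momentum over an ensemble of random initial angles (p0 = 0)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_traj):
        theta0 = rng.uniform(0, 2 * math.pi)
        _, p = standard_map(theta0, 0.0, K, steps)
        total += p * p
    return total / n_traj
```

For strongly chaotic kicking (K = 10) the ensemble momentum spread grows roughly diffusively with the number of kicks, while for weak kicking (K = 0.5) KAM tori keep it bounded; the accelerator modes near K ≈ 2πn are what turn this diffusion into the superdiffusive flights discussed above.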
Karayannis, Miltiades I; Efstathiou, Constantinos E
2012-12-15
In this review the history of chemistry, and specifically the history and the significant steps of the evolution of analytical chemistry, is presented. In chronological spans covering the ancient world, the Middle Ages, the 19th century, and the three evolutionary periods from the turn of the 19th century to contemporary times, information is given on the progress of chemistry and analytical chemistry. During this period, analytical chemistry moved gradually from its purely empirical nature to more rational scientific activities, transforming itself into an autonomous branch of chemistry and a separate discipline. It is also shown that analytical chemistry moved gradually from exclusively serving the chemical sciences toward serving the environment, health, law, almost all areas of science and technology, and society as a whole. Some recommendations are also directed to analytical chemistry educators, concerning the indispensable nature of knowledge of classical analytical chemistry and the associated laboratory exercises, and to analysts in general, on why it is important to use chemical knowledge to make measurements on problems of everyday life. Copyright © 2012 Elsevier B.V. All rights reserved.
Analytical Chemistry: A Literary Approach.
ERIC Educational Resources Information Center
Lucy, Charles A.
2000-01-01
Provides an anthology of references to descriptions of analytical chemistry techniques from history, popular fiction, and film which can be used to capture student interest and frame discussions of chemical techniques. (WRM)
Renormalization of the unitary evolution equation for coined quantum walks
NASA Astrophysics Data System (ADS)
Boettcher, Stefan; Li, Shanshan; Portugal, Renato
2017-03-01
We consider discrete-time evolution equations in which the stochastic operator of a classical random walk is replaced by a unitary operator. Such a problem has gained much attention as a framework for coined quantum walks that are essential for attaining the Grover limit for quantum search algorithms in physically realizable, low-dimensional geometries. In particular, we analyze the exact real-space renormalization group (RG) procedure recently introduced to study the scaling of quantum walks on fractal networks. While this procedure, when implemented numerically, was able to provide some deep insights into the relation between classical and quantum walks, its analytic basis has remained obscure. Our discussion here is laying the groundwork for a rigorous implementation of the RG for this important class of transport and algorithmic problems, although some instances remain unresolved. Specifically, we find that the RG fixed-point analysis of the classical walk, which typically focuses on the dominant Jacobian eigenvalue λ1, with walk dimension d_w^RW = log_2 λ1, needs to be extended to include the subdominant eigenvalue λ2, such that the dimension of the quantum walk obtains d_w^QW = log_2 √(λ1 λ2). With that extension, we obtain analytically previously conjectured results for d_w^QW of Grover walks on all but one of the fractal networks that have been considered.
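The eigenvalue relations quoted above are simple to evaluate; the sketch below just encodes them. The example eigenvalues are illustrative assumptions, not values taken from a specific fractal network in the paper.

```python
import math

def walk_dimensions(lam1, lam2):
    """Walk dimensions from the two leading RG Jacobian eigenvalues.

    Classical: d_w^RW = log_2(lam1).
    Quantum (per the extension discussed in the abstract):
    d_w^QW = log_2(sqrt(lam1 * lam2)).
    """
    d_rw = math.log2(lam1)
    d_qw = math.log2(math.sqrt(lam1 * lam2))
    return d_rw, d_qw
```

For instance, λ1 = 4 reproduces the classical diffusive walk dimension d_w^RW = 2, and a subdominant λ2 = 1 would give d_w^QW = 1, the ballistic value familiar from quantum walks on the line.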
NASA Astrophysics Data System (ADS)
Drukker, Karen; Hammes-Schiffer, Sharon
1997-07-01
This paper presents an analytical derivation of a multiconfigurational self-consistent-field (MC-SCF) solution of the time-independent Schrödinger equation for nuclear motion (i.e. vibrational modes). This variational MC-SCF method is designed for the mixed quantum/classical molecular dynamics simulation of multiple proton transfer reactions, where the transferring protons are treated quantum mechanically while the remaining degrees of freedom are treated classically. This paper presents a proof that the Hellmann-Feynman forces on the classical degrees of freedom are identical to the exact forces (i.e. the Pulay corrections vanish) when this MC-SCF method is used with an appropriate choice of basis functions. This new MC-SCF method is applied to multiple proton transfer in a protonated chain of three hydrogen-bonded water molecules. The ground state and the first three excited state energies and the ground state forces agree well with full configuration interaction calculations. Sample trajectories are obtained using adiabatic molecular dynamics methods, and nonadiabatic effects are found to be insignificant for these sample trajectories. The accuracy of the excited states will enable this MC-SCF method to be used in conjunction with nonadiabatic molecular dynamics methods. This application differs from previous work in that it is a real-time quantum dynamical nonequilibrium simulation of multiple proton transfer in a chain of water molecules.
Classical mutual information in mean-field spin glass models
NASA Astrophysics Data System (ADS)
Alba, Vincenzo; Inglis, Stephen; Pollet, Lode
2016-03-01
We investigate the classical Rényi entropy Sn and the associated mutual information In in the Sherrington-Kirkpatrick (S-K) model, which is the paradigm model of mean-field spin glasses. Using classical Monte Carlo simulations and analytical tools we investigate the S-K model in the n-sheet booklet. This is achieved by gluing together n independent copies of the model, and it is the main ingredient for constructing the Rényi entanglement-related quantities. We find a glassy phase at low temperatures, whereas at high temperatures the model exhibits paramagnetic behavior, consistent with the regular S-K model. The temperature of the paramagnetic-glassy transition depends nontrivially on the geometry of the booklet. At high temperatures we provide the exact solution of the model by exploiting the replica symmetry. This is the permutation symmetry among the fictitious replicas that are used to perform disorder averages (via the replica trick). In the glassy phase the replica symmetry has to be broken. Using a generalization of the Parisi solution, we provide analytical results for Sn and In and for standard thermodynamic quantities. Both Sn and In exhibit a volume law in the whole phase diagram. We characterize the behavior of the corresponding densities, Sn/N and In/N, in the thermodynamic limit. Interestingly, at the critical point the mutual information does not exhibit any crossing for different system sizes, in contrast with local spin models.
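The Rényi quantities studied above have standard classical definitions: S_n = (1/(1-n)) log Σ p_i^n, and (in one common convention) I_n = S_n(A) + S_n(B) − S_n(A,B). A minimal sketch for toy two-spin distributions, far removed from the S-K model itself:

```python
import math
from collections import defaultdict

def renyi_entropy(probs, n=2):
    """Classical Rényi entropy S_n = (1/(1-n)) * log(sum_i p_i^n), n != 1."""
    return math.log(sum(p ** n for p in probs)) / (1 - n)

def renyi_mutual_information(joint, n=2):
    """I_n = S_n(A) + S_n(B) - S_n(A,B) for a joint distribution {(a, b): p}."""
    pa, pb = defaultdict(float), defaultdict(float)
    for (a, b), p in joint.items():
        pa[a] += p
        pb[b] += p
    return (renyi_entropy(pa.values(), n) + renyi_entropy(pb.values(), n)
            - renyi_entropy(joint.values(), n))
```

Two perfectly correlated spins give I_2 = log 2, while independent spins give I_2 = 0, matching the intuition that mutual information measures shared randomness between the two subsystems.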
Hermann-Bernoulli-Laplace-Hamilton-Runge-Lenz Vector.
ERIC Educational Resources Information Center
Subramanian, P. R.; And Others
1991-01-01
A way for students to refresh and use their knowledge in both mathematics and physics is presented. By the study of the properties of the "Runge-Lenz" vector the subjects of algebra, analytical geometry, calculus, classical mechanics, differential equations, matrices, quantum mechanics, trigonometry, and vector analysis can be reviewed. (KR)
On the Construction and Dynamics of Knotted Fields
NASA Astrophysics Data System (ADS)
Kedia, Hridesh
Representing a physical field in terms of its field lines has often enabled a deeper understanding of complex physical phenomena, from Faraday's law of magnetic induction, to the Helmholtz laws of vortex motion, to the free energy density of liquid crystals in terms of the distortions of the lines of the director field. At the same time, the application of ideas from topology--the study of properties that are invariant under continuous deformations--has led to robust insights into the nature of complex physical systems from defects in crystal structures, to the earth's magnetic field, to topological conservation laws. The study of knotted fields, physical fields in which the field lines encode knots, emerges naturally from the application of topological ideas to the investigation of the physical phenomena best understood in terms of the lines of a field. A knot--a closed loop tangled with itself which cannot be untangled without cutting the loop--is the simplest topologically non-trivial object constructed from a line. Remarkably, knots in the vortex (magnetic field) lines of a dissipationless fluid (plasma) persist forever as they are transported by the flow, stretching and rotating as they evolve. Moreover, deeply entwined with the topology-preserving dynamics of dissipationless fluids and plasmas is an additional conserved quantity--helicity, a measure of the average linking of the vortex (magnetic field) lines in a fluid (plasma)--which has had far-reaching consequences for fluids and plasmas. Inspired by the persistence of knots in dissipationless flows, and their far-reaching physical consequences, we seek to understand the interplay between the dynamics of a field and the topology of its field lines in a variety of systems. While it is easy to tie a knot in a shoelace, tying a knot in the lines of a space-filling field requires contorting the lines everywhere to match the knotted region.
The challenge of analytically constructing knotted field configurations has impeded a deeper understanding of the interplay between topology and dynamics in fluids and plasmas. We begin by analytically constructing knotted field configurations which encode a desired knot in the lines of the field, and show that their helicity can be tuned independently of the encoded knot. The nonlinear nature of the physical systems in which these knotted field configurations arise makes their analytical study challenging. We ask if a linear theory such as electromagnetism can allow knotted field configurations to persist with time. We find analytical expressions for an infinite family of knotted solutions to Maxwell's equations in vacuum and elucidate their connections to dissipationless flows. We present a design rule for constructing such persistently knotted electromagnetic fields, which could possibly be used to transfer knottedness to matter such as quantum fluids and plasmas. An important consequence of the persistence of knots in classical dissipationless flows is the existence of an additional conserved quantity, helicity, which has had far-reaching implications. To understand the existence of analogous conserved quantities, we ask if superfluids, which flow without dissipation just like classical dissipationless flows, have an additional conserved quantity akin to helicity. We address this question using an analytical approach based on defining the particle relabeling symmetry--the symmetry underlying helicity conservation--in superfluids, and find that an analogous conserved quantity exists but vanishes identically owing to the intrinsic geometry of complex scalar fields. Furthermore, to address the question of a "classical limit" of superfluid vortices which recovers classical helicity conservation, we perform numerical simulations of bundles of superfluid vortices, and find behavior akin to classical viscous flows.
NASA Technical Reports Server (NTRS)
Migneault, Gerard E.
1987-01-01
Emulation techniques can be a solution to a difficulty that arises in the analysis of the reliability of guidance and control computer systems for future commercial aircraft. Described here is the difficulty, the lack of credibility of reliability estimates obtained by analytical modeling techniques. The difficulty is an unavoidable consequence of the following: (1) a reliability requirement so demanding as to make system evaluation by use testing infeasible; (2) a complex system design technique, fault tolerance; (3) system reliability dominated by errors due to flaws in the system definition; and (4) elaborate analytical modeling techniques whose precision outputs are quite sensitive to errors of approximation in their input data. Use of emulation techniques for pseudo-testing systems to evaluate bounds on the parameter values needed for the analytical techniques is then discussed. Finally several examples of the application of emulation techniques are described.
Analytic proof of the existence of the Lorenz attractor in the extended Lorenz model
NASA Astrophysics Data System (ADS)
Ovsyannikov, I. I.; Turaev, D. V.
2017-01-01
We give an analytic (free of computer assistance) proof of the existence of a classical Lorenz attractor for an open set of parameter values of the Lorenz model in the form of Yudovich-Morioka-Shimizu. The proof is based on detection of a homoclinic butterfly with a zero saddle value and rigorous verification of one of the Shilnikov criteria for the birth of the Lorenz attractor; we also supply a proof for this criterion. The results are applied in order to give an analytic proof for the existence of a robust, pseudohyperbolic strange attractor (the so-called discrete Lorenz attractor) for an open set of parameter values in a 4-parameter family of 3D Henon-like diffeomorphisms.
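The abstract concerns a rigorous, computer-free proof; numerically, though, the classical Lorenz flow at the standard parameters (σ = 10, ρ = 28, β = 8/3) is easy to integrate, and a short sketch makes the attractor's two defining traits — boundedness and sensitive dependence on initial conditions — concrete. The step size and trajectory length below are illustrative choices.

```python
def lorenz_step(x, y, z, dt, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One RK4 step of the classical Lorenz system."""
    def f(s):
        x, y, z = s
        return (sigma * (y - x), x * (rho - z) - y, x * y - beta * z)
    s = (x, y, z)
    k1 = f(s)
    k2 = f(tuple(si + 0.5 * dt * ki for si, ki in zip(s, k1)))
    k3 = f(tuple(si + 0.5 * dt * ki for si, ki in zip(s, k2)))
    k4 = f(tuple(si + dt * ki for si, ki in zip(s, k3)))
    return tuple(si + dt * (a + 2 * b + 2 * c + d) / 6
                 for si, a, b, c, d in zip(s, k1, k2, k3, k4))

def trajectory(s0, steps=5000, dt=0.01):
    """Integrate from s0 for `steps` RK4 steps and return the final state."""
    s = s0
    for _ in range(steps):
        s = lorenz_step(*s, dt)
    return s
```

Two initial conditions differing by 10⁻⁶ end up macroscopically far apart after 50 time units, yet both stay on the bounded attractor — the behavior the analytic proof establishes rigorously for an open set of parameters.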
Quantum decay model with exact explicit analytical solution
NASA Astrophysics Data System (ADS)
Marchewka, Avi; Granot, Er'El
2009-01-01
A simple decay model is introduced. The model comprises a point potential well, which experiences an abrupt change. Due to the temporal variation, the initial quantum state can either escape from the well or stay localized as a new bound state. The model allows for an exact analytical solution while having the necessary features of a decay process. The results show that the decay is never exponential, as classical dynamics predicts. Moreover, at short times the decay follows a fractional power law, which differs from the predictions of quantum perturbation methods. At long times the decay includes oscillations with an envelope that decays algebraically. This is a model in which the final state can be either continuous or localized, and which has an exact analytical solution.
Exploration of Antarctic Subglacial environments: a challenge for analytical chemistry
NASA Astrophysics Data System (ADS)
Traversi, R.; Becagli, S.; Castellano, E.; Ghedini, C.; Marino, F.; Rugi, F.; Severi, M.; Udisti, R.
2009-12-01
The large number of subglacial lakes detected in the Dome C area in East Antarctica suggests that this region may be a valuable source of paleo-records essential for understanding the evolution of the Antarctic ice cap and climate changes over the last several million years. In the framework of the Project on “Exploration and characterization of Concordia Lake, Antarctica”, supported by the Italian Program for Antarctic Research (PNRA), a glaciological investigation of the Dome C “Lake District” is planned. Indeed, the glacio-chemical characterisation of the ice column over subglacial lakes will allow evaluation of the fluxes of major and trace chemical species along the ice column and in the accreted ice and, consequently, the availability of nutrients and oligo-elements for possible biological activity in the lake water and sediments. Melting and freezing at the base of the ice sheet should be able to deliver carbon and salts to the lake, as observed for the Vostok subglacial lake, which are thought to be able to support a low concentration of micro-organisms for extended periods of time. Thus, this investigation represents the first step toward exploring the subglacial environments, including sampling and analysis of accreted ice, lake water, and sediments. In order to perform reliable analytical measurements, especially of trace chemical species, clean sub-sampling and analytical techniques are required. For this purpose, the techniques already used by the CHIMPAC laboratory (Florence University) in the framework of international Antarctic drilling Projects (EPICA - European Project for Ice Coring in Antarctica, TALDICE - TALos Dome ICE core, ANDRILL MIS - ANTarctic DRILLing McMurdo Ice Shelf) were optimised and new techniques were developed to ensure safe sample handling.
The CHIMPAC laboratory has been involved for several years in the study of the Antarctic continent, primarily focused on understanding the bio-geo-chemical cycles of chemical markers and the interpretation of their records in sedimentary archives (ice cores, sediment cores). This activity takes advantage of facilities for storage, decontamination, and pre-analysis treatment of ice and sediment strips (cold room equipped with laminar flow hoods and decontamination devices at different automation levels, class 10000 clean room, systems for the complete acid digestion of sediment samples, production of ultra-pure acids, and sediments’ granulometric selection) and for analytical determination of a wide range of chemical tracers. In particular, the operative instrumental set includes several Ion Chromatographs for the measurement of inorganic and selected organic ions (by classical Ion Chromatography and Fast Ion Chromatography), Atomic Absorption and Emission Spectrometers (F-AAS, GF-AAS, ICP-AES) and Inductively Coupled Plasma - Sector Field Mass Spectrometry (ICP-SFMS) for the analysis of the soluble or “available” inorganic fraction, together with Ion Beam Analysis techniques for elemental composition (PIXE-PIGE, in collaboration with INFN and the Physics Institute of Florence University) and geochemical analysis (SEM-EDS).
1988-06-01
Subject terms: Computer Assisted Instruction; Artificial Intelligence. Means-ends analysis, a classic technique for solving search problems in Artificial Intelligence, has been used while he/she tries to perform given tasks.
NASA Astrophysics Data System (ADS)
Feng, L.; Vaulin, R.; Hewitt, J. N.; Remillard, R.; Kaplan, D. L.; Murphy, Tara; Kudryavtseva, N.; Hancock, P.; Bernardi, G.; Bowman, J. D.; Briggs, F.; Cappallo, R. J.; Deshpande, A. A.; Gaensler, B. M.; Greenhill, L. J.; Hazelton, B. J.; Johnston-Hollitt, M.; Lonsdale, C. J.; McWhirter, S. R.; Mitchell, D. A.; Morales, M. F.; Morgan, E.; Oberoi, D.; Ord, S. M.; Prabu, T.; Udaya Shankar, N.; Srivani, K. S.; Subrahmanyan, R.; Tingay, S. J.; Wayth, R. B.; Webster, R. L.; Williams, A.; Williams, C. L.
2017-03-01
Many astronomical sources produce transient phenomena at radio frequencies, but the transient sky at low frequencies (<300 MHz) remains relatively unexplored. Blind surveys with new wide-field radio instruments are setting increasingly stringent limits on the transient surface density on various timescales. Although many of these instruments are limited by classical confusion noise from an ensemble of faint, unresolved sources, one can in principle detect transients below the classical confusion limit to the extent that the classical confusion noise is independent of time. We develop a technique for detecting radio transients that is based on temporal matched filters applied directly to time series of images, rather than relying on source-finding algorithms applied to individual images. This technique has well-defined statistical properties and is applicable to variable and transient searches for both confusion-limited and non-confusion-limited instruments. Using the Murchison Widefield Array as an example, we demonstrate that the technique works well on real data despite the presence of classical confusion noise, sidelobe confusion noise, and other systematic errors. We searched for transients lasting between 2 minutes and 3 months. We found no transients and set improved upper limits on the transient surface density at 182 MHz for flux densities between ˜20 and 200 mJy, providing the best limits to date for hour- and month-long transients.
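The temporal matched-filter idea described above can be sketched numerically: slide a transient template along a mean-subtracted pixel light curve and use the normalized correlation peak as the detection statistic. Because the time-independent mean is removed, a constant (classical confusion) background drops out. This is an illustrative toy, not the MWA pipeline; the template shape, noise level, and injected transient are invented for the example.

```python
import numpy as np

def matched_filter_snr(light_curve, template, noise_sigma):
    """Correlate a time series with a transient template; return an SNR curve.

    Subtracting the means makes the statistic insensitive to any constant
    background, which is why the technique works below the classical
    confusion limit when that confusion noise is time independent.
    """
    lc = light_curve - np.mean(light_curve)
    tpl = template - np.mean(template)
    corr = np.correlate(lc, tpl, mode="valid")     # slide template over series
    norm = noise_sigma * np.sqrt(np.sum(tpl ** 2))  # noise scale of the statistic
    return corr / norm

rng = np.random.default_rng(0)
n = 200
template = np.exp(-0.5 * ((np.arange(20) - 10) / 3.0) ** 2)  # Gaussian transient shape
series = rng.normal(0.0, 1.0, n)    # unit-variance noise (stands in for image noise)
series[90:110] += 5.0 * template    # inject a transient at epoch 90
snr = matched_filter_snr(series, template, noise_sigma=1.0)
print(snr.max())  # peaks near the injected transient, well above the noise floor
```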
Analytical closed-form solutions to the elastic fields of solids with dislocations and surface stress
NASA Astrophysics Data System (ADS)
Ye, Wei; Paliwal, Bhasker; Ougazzaden, Abdallah; Cherkaoui, Mohammed
2013-07-01
The concept of eigenstrain is adopted to derive a general analytical framework for solving the elastic field of 3D anisotropic solids with general defects, taking surface stress into account. The formulation shows that the elastic constants and geometrical features of the surface play an important role in determining the elastic fields of the solid. As an application, analytical closed-form solutions for the stress fields of an infinite isotropic circular nanowire are obtained. The stress fields are compared with the classical solutions and with those of the complex variable method. The stress fields from this work demonstrate the impact of surface stress as the size of the nanowire shrinks, an effect that becomes negligible at the macroscopic scale. Compared with the power series solutions of the complex variable method, the analytical solutions in this work provide a better platform and are more flexible in various applications. More importantly, the proposed analytical framework substantially advances the study of general 3D anisotropic materials with surface effects.
Analytical Applications of Monte Carlo Techniques.
ERIC Educational Resources Information Center
Guell, Oscar A.; Holcombe, James A.
1990-01-01
Described are analytical applications of the theory of random processes, in particular solutions obtained by using statistical procedures known as Monte Carlo techniques. Supercomputer simulations, sampling, integration, ensemble, annealing, and explicit simulation are discussed. (CW)
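As a minimal illustration of the Monte Carlo techniques the abstract surveys, the classic hit-or-miss estimate of an integral (here, the area of a quarter circle, yielding pi) shows the core idea of replacing an analytical integral with random sampling. The sketch is generic and not drawn from the cited article.

```python
import random

def monte_carlo_pi(n_samples, seed=42):
    """Estimate pi by uniform sampling in the unit square.

    The fraction of points landing inside the quarter circle approximates
    its area (pi/4); the error shrinks like 1/sqrt(n_samples).
    """
    rng = random.Random(seed)
    inside = 0
    for _ in range(n_samples):
        x, y = rng.random(), rng.random()
        if x * x + y * y <= 1.0:   # point falls inside the quarter circle
            inside += 1
    return 4.0 * inside / n_samples

print(monte_carlo_pi(100_000))  # converges toward 3.14159... as n grows
```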
Thermoelectrically cooled water trap
Micheels, Ronald H [Concord, MA]
2006-02-21
A water trap system based on a thermoelectric cooling device is employed to remove a major fraction of the water from air samples, prior to analysis of these samples for chemical composition by a variety of analytical techniques in which water vapor interferes with the measurement process. These analytical techniques include infrared spectroscopy, mass spectrometry, ion mobility spectrometry and gas chromatography. The thermoelectric system for trapping water present in air samples can substantially improve detection sensitivity in these techniques when it is necessary to measure trace analytes with concentrations in the ppm (parts per million) or ppb (parts per billion) partial pressure range. The thermoelectric trap design is compact and amenable to use in portable gas monitoring instrumentation.
Enabling Analytics on Sensitive Medical Data with Secure Multi-Party Computation.
Veeningen, Meilof; Chatterjea, Supriyo; Horváth, Anna Zsófia; Spindler, Gerald; Boersma, Eric; van der Spek, Peter; van der Galiën, Onno; Gutteling, Job; Kraaij, Wessel; Veugen, Thijs
2018-01-01
While there is a clear need to apply data analytics in the healthcare sector, this is often difficult because it requires combining sensitive data from multiple data sources. In this paper, we show how the cryptographic technique of secure multi-party computation can enable such data analytics by performing analytics without the need to share the underlying data. We discuss the issue of compliance with European privacy legislation; report on three pilots bringing these techniques closer to practice; and discuss the main challenges ahead to make fully privacy-preserving data analytics in the medical sector commonplace.
McCain, Stephanie L; Flatland, Bente; Schumacher, Juergen P; Clarke III, Elsburgh O; Fry, Michael M
2010-12-01
Advantages of handheld and small bench-top biochemical analyzers include requirements for smaller sample volume and practicality for use in the field or in practices, but little has been published on the performance of these instruments compared with standard reference methods in analysis of reptilian blood. The aim of this study was to compare reptilian blood biochemical values obtained using the Abaxis VetScan Classic bench-top analyzer and a Heska i-STAT handheld analyzer with values obtained using a Roche Hitachi 911 chemical analyzer. Reptiles, including 14 bearded dragons (Pogona vitticeps), 4 blue-tongued skinks (Tiliqua gigas), 8 Burmese star tortoises (Geochelone platynota), 10 Indian star tortoises (Geochelone elegans), 5 red-tailed boas (Boa constrictor), and 5 Northern pine snakes (Pituophis melanoleucus melanoleucus), were manually restrained, and a single blood sample was obtained and divided for analysis. Results for concentrations of albumin, bile acids, calcium, glucose, phosphates, potassium, sodium, total protein, and uric acid and activities of aspartate aminotransferase and creatine kinase obtained from the VetScan Classic and Hitachi 911 were compared. Results for concentrations of chloride, glucose, potassium, and sodium obtained from the i-STAT and Hitachi 911 were compared. Compared with results from the Hitachi 911, those from the VetScan Classic and i-STAT had variable correlations, and constant or proportional bias was found for many analytes. Bile acid data could not be evaluated because results for 44 of 45 samples fell below the lower linearity limit of the VetScan Classic. Although the 2 portable instruments might provide measurements with clinical utility, there were significant differences compared with the reference analyzer, and development of analyzer-specific reference intervals is recommended. ©2010 American Society for Veterinary Clinical Pathology.
NASA Astrophysics Data System (ADS)
Gerstmayr, Johannes; Irschik, Hans
2008-12-01
In finite element methods that are based on position and slope coordinates, a representation of axial and bending deformation by means of an elastic line approach has become popular. Beam and plate formulations based on the so-called absolute nodal coordinate formulation have not yet been sufficiently verified against analytical results or classical nonlinear rod theories. Examining the existing planar absolute nodal coordinate element, which uses a curvature-proportional bending strain expression, it turns out that the deformation does not fully agree with the solution of the geometrically exact theory and, even more seriously, the normal force is incorrect. A correction based on the classical ideas of the extensible elastica and geometrically exact theories is applied, and consistent strain energy and bending moment relations are derived. The strain energy of the solid finite element formulation of the absolute nodal coordinate beam is based on the St. Venant-Kirchhoff material; therefore, the strain energy is derived for the latter case and compared to classical nonlinear rod theories. The error in the original absolute nodal coordinate formulation is documented by numerical examples. The numerical example of a large-deformation cantilever beam shows that the normal force is incorrect when using the previous approach, while perfect agreement between the absolute nodal coordinate formulation and the extensible elastica can be obtained when applying the proposed modifications. The numerical examples show very good agreement of reference analytical and numerical solutions with the solutions of the proposed beam formulation for large-deformation pre-curved static and dynamic problems, including buckling and eigenvalue analysis. The resulting beam formulation does not employ rotational degrees of freedom and therefore has advantages over classical beam elements regarding energy-momentum conservation.
Sandbakk, Øyvind; Losnegard, Thomas; Skattebo, Øyvind; Hegge, Ann M; Tønnessen, Espen; Kocbach, Jan
2016-01-01
The present study investigated the contribution of performance on uphill, flat, and downhill sections to overall performance in an international 10-km classical time-trial in elite female cross-country skiers, as well as the relationships between performance on snow and laboratory-measured physiological variables in the double poling (DP) and diagonal (DIA) techniques. Ten elite female cross-country skiers were continuously measured by a global positioning system device during an international 10-km cross-country skiing time-trial in the classical technique. One month prior to the race, all skiers performed a 5-min submaximal and a 3-min self-paced performance test while roller skiing on a treadmill, in both the DP and DIA techniques. The time spent on uphill (r = 0.98) and flat (r = 0.91) sections of the race correlated most strongly with the overall 10-km performance (both p < 0.05). Approximately 56% of the racing time was spent uphill, and stepwise multiple regression revealed that uphill time explained 95.5% of the variance in overall performance (p < 0.001). Distance covered during the 3-min roller-skiing test and body-mass-normalized peak oxygen uptake (VO2peak) in both techniques showed the strongest correlations with overall time-trial performance (r = 0.66-0.78), with DP capacity tending to have the greatest impact on flat terrain and DIA capacity on uphill terrain (all p < 0.05). Our present findings reveal that the time spent uphill most strongly determines classical time-trial performance, and that the major portion of the performance differences among elite female cross-country skiers can be explained by variations in technique-specific aerobic power.
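The variance-explained result can be illustrated with a small regression sketch: when one section dominates both the race duration and its spread across skiers, a simple linear fit on that section's time recovers most of the variance in total time. The section times below are invented for illustration, not the study's data.

```python
import numpy as np

# Hypothetical section times (minutes) for 10 skiers; uphill has the
# largest mean and the largest skier-to-skier spread, as in the study.
rng = np.random.default_rng(1)
uphill = 16 + rng.normal(0, 0.8, 10)
flat = 7 + rng.normal(0, 0.2, 10)
downhill = 5 + rng.normal(0, 0.1, 10)
total = uphill + flat + downhill

def r_squared(x, y):
    """Share of the variance in y explained by a simple linear fit on x."""
    slope, intercept = np.polyfit(x, y, 1)
    residuals = y - (slope * x + intercept)
    return 1.0 - residuals.var() / y.var()

# Uphill time explains most of the variance in total time; downhill,
# with its small spread, explains very little.
print(r_squared(uphill, total), r_squared(downhill, total))
```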
Accuracy of selected techniques for estimating ice-affected streamflow
Walker, John F.
1991-01-01
This paper compares the accuracy of selected techniques for estimating streamflow during ice-affected periods. The techniques are classified into two categories - subjective and analytical - depending on the degree of judgment required. Discharge measurements were made at three streamflow-gauging sites in Iowa during the 1987-88 winter and used to establish a baseline streamflow record for each site. Using data based on a simulated six-week field-trip schedule, selected techniques were used to estimate discharge during the ice-affected periods. For the subjective techniques, three hydrographers independently compiled each record. Three measures of performance are used to compare the estimated streamflow records with the baseline records: the average discharge for the ice-affected period, and the mean and standard deviation of the daily errors. Based on average ranks for the three performance measures and the three sites, the analytical and subjective techniques are essentially comparable. For two of the three sites, Kruskal-Wallis one-way analysis of variance detects significant differences among the three hydrographers for the subjective methods, indicating that the subjective techniques are less consistent than the analytical techniques. The results suggest analytical techniques may be viable tools for estimating discharge during periods of ice effect, and should be developed further and evaluated for sites across the United States.
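The Kruskal-Wallis one-way analysis of variance mentioned above can be sketched directly: rank all observations together, then compare the rank sums across groups. A minimal implementation (without tie correction) on invented daily-error data might look like:

```python
import numpy as np

def kruskal_wallis_h(*groups):
    """Kruskal-Wallis H statistic (no tie correction).

    H = 12/(N(N+1)) * sum(R_i^2 / n_i) - 3(N+1), where R_i is the rank
    sum of group i over the pooled sample of size N. Compare H against
    a chi-squared critical value with k-1 degrees of freedom.
    """
    data = np.concatenate(groups)
    ranks = data.argsort().argsort() + 1.0   # ranks 1..N (ties broken arbitrarily)
    n_total = len(data)
    h = 0.0
    start = 0
    for g in groups:
        r = ranks[start:start + len(g)]
        h += r.sum() ** 2 / len(g)
        start += len(g)
    return 12.0 / (n_total * (n_total + 1)) * h - 3.0 * (n_total + 1)

# Daily-error estimates from three hypothetical hydrographers
a = np.array([0.1, 0.3, 0.2, 0.4, 0.2])
b = np.array([0.2, 0.4, 0.3, 0.5, 0.3])
c = np.array([0.9, 1.1, 1.0, 1.2, 0.8])   # clearly different from a and b
h = kruskal_wallis_h(a, b, c)
print(h)  # exceeds the 2-df chi-squared critical value 5.99 at alpha = 0.05
```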
Analytical methods in multivariate highway safety exposure data estimation
DOT National Transportation Integrated Search
1984-01-01
Three general analytical techniques which may be of use in extending, enhancing, and combining highway accident exposure data are discussed. The techniques are log-linear modelling, iterative proportional fitting and the expectation maximization...
Classical analogues of two-photon quantum interference.
Kaltenbaek, R; Lavoie, J; Resch, K J
2009-06-19
Chirped-pulse interferometry (CPI) captures the metrological advantages of quantum Hong-Ou-Mandel (HOM) interferometry in a completely classical system. Modified HOM interferometers are the basis for a number of seminal quantum-interference effects. Here, the corresponding modifications to CPI allow for the first observation of classical analogues to the HOM peak and quantum beating. They also allow a new classical technique for generating phase super-resolution exhibiting a coherence length dramatically longer than that of the laser light, analogous to increased two-photon coherence lengths in entangled states.
Techniques for Forecasting Air Passenger Traffic
NASA Technical Reports Server (NTRS)
Taneja, N.
1972-01-01
The basic techniques of forecasting the air passenger traffic are outlined. These techniques can be broadly classified into four categories: judgmental, time-series analysis, market analysis and analytical. The differences between these methods exist, in part, due to the degree of formalization of the forecasting procedure. Emphasis is placed on describing the analytical method.
Universal scaling for the quantum Ising chain with a classical impurity
NASA Astrophysics Data System (ADS)
Apollaro, Tony J. G.; Francica, Gianluca; Giuliano, Domenico; Falcone, Giovanni; Palma, G. Massimo; Plastina, Francesco
2017-10-01
We study finite-size scaling for the magnetic observables of an impurity residing at the end point of an open quantum Ising chain with transverse magnetic field, realized by locally rescaling the field by a factor μ ≠ 1. In the homogeneous chain limit at μ = 1, we find the expected finite-size scaling for the longitudinal impurity magnetization, with no specific scaling for the transverse magnetization. At variance, in the classical impurity limit μ = 0, we recover finite-size scaling for the longitudinal magnetization, while the transverse one basically does not scale. We provide both approximate analytic expressions for the magnetization and the susceptibility as well as numerical evidence for the scaling behavior. At intermediate values of μ, finite-size scaling is violated, and we provide a possible explanation of this result in terms of the appearance of a second, impurity-related length scale. Finally, by going along the standard quantum-to-classical mapping between statistical models, we derive the classical counterpart of the quantum Ising chain with an end-point impurity as a classical Ising model on a square lattice wrapped on a half-infinite cylinder, with the links along the first circle modified as a function of μ.
Kovarik, Peter; Grivet, Chantal; Bourgogne, Emmanuel; Hopfgartner, Gérard
2007-01-01
The present work investigates various method development aspects of the quantitative analysis of pharmaceutical compounds in human plasma using matrix-assisted laser desorption/ionization and multiple reaction monitoring (MALDI-MRM). Talinolol was selected as a model analyte. Liquid-liquid extraction (LLE) and protein precipitation were evaluated for sensitivity and throughput with the MALDI-MRM technique, and for its applicability with and without chromatographic separation. Compared to classical electrospray liquid chromatography/mass spectrometry (LC/ESI-MS) method development, tuning the analyte in single-MS mode is more challenging with MALDI-MRM due to interfering matrix background ions; an approach using background subtraction is proposed. With LLE and a 200 microL human plasma aliquot, acceptable precision and accuracy could be obtained in the range of 1 to 1000 ng/mL without any LC separation. Approximately 3 s were required per analysis; a full calibration curve and its quality control samples (20 samples) can be analyzed within 1 min. Combining LC with the MALDI analysis extended the linear range down to 50 pg/mL, while reducing the throughput potential only two-fold. Matrix effects remain a significant issue with MALDI but can be monitored in a similar way to that used for LC/ESI-MS analysis.
Marino, Ricardo; Majumdar, Satya N; Schehr, Grégory; Vivo, Pierpaolo
2016-09-01
Let P_{β}^{(V)}(N_{I}) be the probability that an N×N β-ensemble of random matrices with confining potential V(x) has N_{I} eigenvalues inside an interval I=[a,b] on the real line. We introduce a general formalism, based on the Coulomb gas technique and the resolvent method, to compute P_{β}^{(V)}(N_{I}) analytically for large N. We show that this probability scales for large N as P_{β}^{(V)}(N_{I})≈exp[-βN^{2}ψ^{(V)}(N_{I}/N)], where β is the Dyson index of the ensemble. The rate function ψ^{(V)}(k_{I}), independent of β, is computed in terms of single integrals that can be easily evaluated numerically. The general formalism is then applied to the classical β-Gaussian (I=[-L,L]), β-Wishart (I=[1,L]), and β-Cauchy (I=[-L,L]) ensembles. Expanding the rate function around its minimum, we find that generically the number variance var(N_{I}) exhibits a nonmonotonic behavior as a function of the size of the interval, with a maximum that can be precisely characterized. These analytical results, corroborated by numerical simulations, provide the full counting statistics of many systems where random matrix models apply. In particular, we present results for the full counting statistics of zero-temperature one-dimensional spinless fermions in a harmonic trap.
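The counting statistics can be checked numerically for small matrices: sample a β = 1 (GOE) ensemble and count eigenvalues in an interval I. This sketch uses the standard semicircle scaling (eigenvalue support roughly [-2, 2]) and invented parameters; it illustrates the quantity P_β^(V)(N_I) describes rather than reproducing the paper's Coulomb-gas computation.

```python
import numpy as np

def goe_counts_in_interval(n, a, b, trials, seed=0):
    """Sample N_I, the number of eigenvalues of an n x n GOE matrix in [a, b].

    The matrix (g + g^T)/sqrt(2n) is a GOE matrix normalized so that the
    semicircle support is approximately [-2, 2] for large n.
    """
    rng = np.random.default_rng(seed)
    counts = []
    for _ in range(trials):
        g = rng.normal(size=(n, n))
        h = (g + g.T) / np.sqrt(2.0 * n)   # symmetrized, semicircle-scaled
        ev = np.linalg.eigvalsh(h)
        counts.append(np.count_nonzero((ev >= a) & (ev <= b)))
    return np.array(counts)

counts = goe_counts_in_interval(50, -2.0, 2.0, trials=20)
# With I covering essentially the whole semicircle support, nearly all
# 50 eigenvalues land inside; shrinking I makes var(N_I) nontrivial.
print(counts.mean())
```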
NASA Astrophysics Data System (ADS)
Yan, David; Bazant, Martin Z.; Biesheuvel, P. M.; Pugh, Mary C.; Dawson, Francis P.
2017-03-01
Linear sweep and cyclic voltammetry techniques are important tools for electrochemists and have a variety of applications in engineering. Voltammetry has classically been treated with the Randles-Sevcik equation, which assumes an electroneutral supported electrolyte. In this paper, we provide a comprehensive mathematical theory of voltammetry in electrochemical cells with unsupported electrolytes and for other situations where diffuse charge effects play a role, and present analytical and simulated solutions of the time-dependent Poisson-Nernst-Planck equations with generalized Frumkin-Butler-Volmer boundary conditions for a 1:1 electrolyte and a simple reaction. Using these solutions, we construct theoretical and simulated current-voltage curves for liquid and solid thin films, membranes with fixed background charge, and cells with blocking electrodes. The full range of dimensionless parameters is considered, including the dimensionless Debye screening length (scaled to the electrode separation), Damkohler number (ratio of characteristic diffusion and reaction times), and dimensionless sweep rate (scaled to the thermal voltage per diffusion time). The analysis focuses on the coupling of Faradaic reactions and diffuse charge dynamics, although capacitive charging of the electrical double layers is also studied, for early time transients at reactive electrodes and for nonreactive blocking electrodes. Our work highlights cases where diffuse charge effects are important in the context of voltammetry, and illustrates which regimes can be approximated using simple analytical expressions and which require more careful consideration.
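For reference, the classical Randles-Sevcik baseline that the paper generalizes can be evaluated directly; the electrode area, concentration, diffusivity, and scan rate below are typical textbook values, not taken from the paper.

```python
import math

def randles_sevcik_peak_current(n, area_cm2, conc_mol_cm3, diff_cm2_s,
                                scan_v_s, temp_k=298.15):
    """Randles-Sevcik peak current (A) for a reversible redox couple in a
    supported (electroneutral) electrolyte:
        i_p = 0.4463 n F A C sqrt(n F v D / (R T))
    Inputs use the usual electrochemical units: cm^2, mol/cm^3, cm^2/s, V/s.
    """
    F = 96485.33   # Faraday constant, C/mol
    R = 8.314462   # gas constant, J/(mol K)
    return 0.4463 * n * F * area_cm2 * conc_mol_cm3 * math.sqrt(
        n * F * scan_v_s * diff_cm2_s / (R * temp_k))

# One-electron couple, 0.1 cm^2 electrode, 1 mM, D = 1e-5 cm^2/s, 100 mV/s
ip = randles_sevcik_peak_current(1, 0.1, 1e-6, 1e-5, 0.1)
print(ip)  # on the order of tens of microamps
```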
Thermostatistical description of gas mixtures from space partitions
NASA Astrophysics Data System (ADS)
Rohrmann, R. D.; Zorec, J.
2006-10-01
The new mathematical framework based on the free energy of pure classical fluids presented by Rohrmann [Physica A 347, 221 (2005)] is extended to multicomponent systems to determine thermodynamic and structural properties of chemically complex fluids. Presently, the theory focuses on D-dimensional mixtures in the low-density limit (packing factor η<0.01). The formalism combines the free-energy minimization technique with space partitions that assign an available volume v to each particle. v is related to the closeness of the nearest neighbor and provides a useful tool to evaluate the perturbations experienced by particles in a fluid. The theory shows a close relationship between statistical geometry and statistical mechanics. New, unconventional thermodynamic variables and mathematical identities are derived as a result of the space division. Thermodynamic potentials μil, conjugate to the populations Nil of particles of class i whose nearest neighbors are of class l, are defined and their relationships with the usual chemical potentials μi are established. Systems of hard spheres are treated as illustrative examples and their thermodynamic functions are derived analytically. The low-density expressions obtained agree nicely with those of scaled-particle theory and the Percus-Yevick approximation. Several pair distribution functions are introduced and evaluated. Analytical expressions are also presented for hard spheres with attractive forces due to Kac tails and square-well potentials. Finally, we derive general chemical equilibrium conditions.
Hertrampf, A; Müller, H; Menezes, J C; Herdling, T
2015-11-10
Pharmaceutical excipients have different functions within a drug formulation, and consequently they can influence the manufacturability and/or performance of medicinal products. Therefore, critical-to-quality attributes should be kept constant. Sometimes it may be necessary to qualify a second supplier, but its product will not be completely identical to the first supplier's product. To minimize the risk of not detecting small dissimilarities between suppliers and to detect lot-to-lot variability for each supplier, multivariate data analysis (MVA) can be used as a more powerful alternative to classical quality control based on one-parameter-at-a-time monitoring. Such an approach is capable of supporting the requirements of a new guideline by the European Parliament and Council (2015/C-95/02) demanding appropriate quality control strategies for excipients based on their criticality and supplier risks in ensuring quality, safety and function. This study compares calcium hydrogen phosphate from two suppliers. It can be assumed that the suppliers use different manufacturing processes. Therefore, possible chemical and physical differences were investigated using Raman spectroscopy, laser diffraction and X-ray powder diffraction. MVA was then used to extract relevant information from each analytical technique. The CaHPO4 samples could be discriminated by supplier. The knowledge gained allowed an enhanced strategy for second-supplier qualification to be specified. Copyright © 2015 Elsevier B.V. All rights reserved.
NASA Technical Reports Server (NTRS)
Pyle, Barry H.; Mcfeters, Gordon A.
1992-01-01
A number of microbiological issues are of critical importance to crew health and system performance in spacecraft water systems. This presentation reviews an array of these concerns, including factors that influence water treatment and disinfection in spaceflight, such as biofilm formation and the physiological responses of bacteria in clean water systems. Factors associated with spaceflight, such as aerosol formation under conditions of microgravity, are also discussed within the context of airborne infections such as Legionellosis. Finally, a spectrum of analytical approaches is reviewed to provide an evaluation of methodological alternatives that have been suggested or used to detect microorganisms of interest in water systems. These range from classical approaches employing colony formation on specific microbiological growth media to direct (i.e. microscopic) and indirect (e.g. electrochemical) methods, as well as the use of molecular approaches and gene probes. These techniques are critically evaluated for their potential utility in determining microbiological water quality through the detection of microorganisms under the influence of the ambient environmental stress inherent in spaceflight water systems.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fath, L., E-mail: lukas.fath@kit.edu; Hochbruck, M., E-mail: marlis.hochbruck@kit.edu; Singh, C.V., E-mail: chandraveer.singh@utoronto.ca
Classical integration methods for molecular dynamics are inherently limited due to resonance phenomena occurring at certain time-step sizes. The mollified impulse method can partially avoid this problem by using appropriate filters based on averaging or projection techniques. However, existing filters are computationally expensive and tedious to implement, since they require either analytical Hessians or the solution of nonlinear systems arising from constraints. In this work we follow a different approach based on corotation for the construction of a new filter for (flexible) biomolecular simulations. The main advantages of the proposed filter are its excellent stability properties and ease of implementation in standard software without Hessians or constraint solves. By simulating multiple realistic examples such as peptide, protein, ice equilibrium and ice-ice friction, the new filter is shown to speed up the computation of long-range interactions by approximately 20%. The proposed filtered integrators allow step sizes as large as 10 fs while keeping the energy drift below 1% over a 50 ps simulation.
Performance constraints and compensation for teleoperation with delay
NASA Technical Reports Server (NTRS)
Mclaughlin, J. S.; Staunton, B. D.
1989-01-01
A classical control perspective is used to characterize performance constraints and evaluate compensation techniques for teleoperation with delay. Use of control concepts such as open- and closed-loop performance, stability, and bandwidth yields insight into the delay problem. Teleoperator performance constraints are viewed as an open-loop time delay lag and as a delay-induced closed-loop bandwidth constraint. These constraints are illustrated with a simple analytical tracking example which is corroborated by a real-time, 'man-in-the-loop' tracking experiment. The experiment also provides insight into those controller characteristics which are unique to a human operator. Predictive displays and feedforward commands are shown to provide open-loop compensation for delay lag. Low-pass filtering of telemetry or feedback signals is interpreted as closed-loop compensation used to maintain a sufficiently low bandwidth for stability. A new closed-loop compensation approach is proposed that uses a reactive (or force feedback) hand controller to restrict system bandwidth by impeding operator inputs.
Perceptual basis of evolving Western musical styles
Rodriguez Zivic, Pablo H.; Shifres, Favio; Cecchi, Guillermo A.
2013-01-01
The brain processes temporal statistics to predict future events and to categorize perceptual objects. These statistics, called expectancies, are found in music perception, and they span a variety of different features and time scales. Specifically, there is evidence that music perception involves strong expectancies regarding the distribution of a melodic interval, namely, the distance between two consecutive notes within the context of another. The recent availability of a large Western music dataset, consisting of the historical record condensed as melodic interval counts, has opened new possibilities for data-driven analysis of musical perception. In this context, we present an analytical approach that, based on cognitive theories of music expectation and machine learning techniques, recovers a set of factors that accurately identifies historical trends and stylistic transitions between the Baroque, Classical, Romantic, and Post-Romantic periods. We also offer a plausible musicological and cognitive interpretation of these factors, allowing us to propose them as data-driven principles of melodic expectation. PMID:23716669
First-order reliability application and verification methods for semistatic structures
NASA Astrophysics Data System (ADS)
Verderaime, V.
1994-11-01
Escalating risks of aerostructures stimulated by increasing size, complexity, and cost should no longer be ignored in conventional deterministic safety design methods. The deterministic pass-fail concept is incompatible with probability and risk assessments; stress audits are shown to be arbitrary and incomplete, and the concept compromises the performance of high-strength materials. A reliability method is proposed that combines first-order reliability principles with deterministic design variables and conventional test techniques to surmount current deterministic stress design and audit deficiencies. Accumulative and propagation design uncertainty errors are defined and appropriately implemented into the classical safety-index expression. The application is reduced to solving for a design factor that satisfies the specified reliability and compensates for uncertainty errors, and then using this design factor as, and instead of, the conventional safety factor in stress analyses. The resulting method is consistent with current analytical skills and verification practices, the culture of most designers, and the development of semistatic structural designs.
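The classical safety-index expression referred to above has a compact closed form when both strength R and stress S are taken as normally distributed; the numbers in the sketch are illustrative only, not from the paper.

```python
import math

def reliability_index(mu_r, sigma_r, mu_s, sigma_s):
    """First-order safety index for normal strength R and stress S.

    beta = (mu_R - mu_S) / sqrt(sigma_R^2 + sigma_S^2)
    The failure probability is the standard normal tail Phi(-beta),
    computed here via the complementary error function.
    """
    beta = (mu_r - mu_s) / math.sqrt(sigma_r ** 2 + sigma_s ** 2)
    p_fail = 0.5 * math.erfc(beta / math.sqrt(2.0))
    return beta, p_fail

# Illustrative: strength 100 +/- 8, stress 60 +/- 6 (same units)
beta, pf = reliability_index(mu_r=100.0, sigma_r=8.0, mu_s=60.0, sigma_s=6.0)
print(round(beta, 2))  # 4.0
```

A design factor chosen to meet a specified reliability would then replace the conventional safety factor in the stress analysis, as the paper proposes.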
NASA Astrophysics Data System (ADS)
Pezzotti, Giuseppe; Adachi, Tetsuya; Gasparutti, Isabella; Vincini, Giulio; Zhu, Wenliang; Boffelli, Marco; Rondinella, Alfredo; Marin, Elia; Ichioka, Hiroaki; Yamamoto, Toshiro; Marunaka, Yoshinori; Kanamura, Narisato
2017-02-01
The Raman spectroscopic method has been applied to quantitatively assess the in vitro degree of demineralization in healthy human teeth. Based on previous evaluations of Raman selection rules (empowered by an orientation distribution function (ODF) statistical algorithm) and on a newly proposed analysis of phonon density of states (PDOS) for selected vibrational modes of the hexagonal structure of hydroxyapatite, a molecular-scale evaluation of the demineralization process upon in vitro exposure to a highly acidic beverage (i.e., Coca-Cola™ Classic, pH = 2.5) could be obtained. The Raman method proved quite sensitive, and spectroscopic features could be directly related to an increase in off-stoichiometry of the enamel surface structure from the very early stages of the demineralization process (i.e., while still invisible to other conventional analytical techniques). The proposed Raman spectroscopic algorithm might possess some generality for caries risk assessment, allowing a prompt non-contact diagnostic practice in dentistry.
Portable X-ray powder diffractometer for the analysis of art and archaeological materials
NASA Astrophysics Data System (ADS)
Nakai, Izumi; Abe, Yoshinari
2012-02-01
Phase identification based on nondestructive analytical techniques using portable equipment is ideal for the analysis of art and archaeological objects. Portable XRF (p-XRF) and portable Raman (p-Raman) instruments are very widely used for this purpose, yet p-XRD remains relatively rare despite its importance for the analysis of crystalline materials. This paper reviews six types of p-XRD systems developed for the analysis of art and archaeological materials. The characteristics of each system are compared. One of the p-XRD systems developed by the authors was brought to many museums as well as many archaeological sites in Egypt and Syria to characterize cultural heritage artifacts, e.g., an amulet made of Egyptian blue, blue painted pottery, and Islamic pottery from Egypt, jade from China, variscite from Syria, a classic Japanese painting by Korin Ogata, and oil paintings by Taro Okamoto. Practical application data are shown to demonstrate the potential ability of the method for analysis of various art and archaeological materials.
Pre-Darcy flow in tight and shale formations
NASA Astrophysics Data System (ADS)
Dejam, Morteza; Hassanzadeh, Hassan; Chen, Zhangxin
2017-11-01
There is evidence that fluid flow in tight and shale formations does not follow Darcy's law; this behavior is identified as pre-Darcy flow. Here, the unsteady linear flow of a slightly compressible fluid under the action of pre-Darcy flow is modeled, and a generalized Boltzmann transformation technique is used to solve the corresponding highly nonlinear diffusivity equation analytically. The effect of pre-Darcy flow on pressure diffusion in a homogeneous formation is studied in terms of the nonlinear exponent, m, and the threshold pressure gradient, G1. In addition, the pressure gradient, flux, and cumulative production per unit area for different m and G1 are compared with the classical solution of the diffusivity equation based on Darcy flow. Funding: Department of Petroleum Engineering in the College of Engineering and Applied Science at the University of Wyoming, and the NSERC/AI-EES(AERI)/Foundation CMG and AITF (iCORE) Chairs in the Department of Chemical and Petroleum Engineering at the University of Calgary.
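The threshold-gradient idea behind pre-Darcy flow can be illustrated with a toy 1-D explicit finite-difference march: flux vanishes below a threshold gradient G and scales with the excess gradient raised to a nonlinear exponent m. All function names, parameter values, and the scheme itself are illustrative placeholders, not the paper's analytical Boltzmann-transformation solution.

```python
# Minimal 1-D sketch of pre-Darcy flow (illustrative only).

def pre_darcy_flux(dpdx, m=1.2, G=0.05, mobility=1.0):
    """Zero flux below the threshold gradient G; above it, the excess
    gradient (|dp/dx| - G) enters with the nonlinear exponent m."""
    mag = abs(dpdx)
    if mag <= G:
        return 0.0
    sign = 1.0 if dpdx > 0 else -1.0
    return -mobility * (mag - G) ** m * sign

def simulate(n=51, length=1.0, dt=2e-5, steps=5000, p_left=1.0, p_init=0.0):
    """March dp/dt = -dq/dx with a fixed-pressure left boundary."""
    dx = length / (n - 1)
    p = [p_init] * n
    p[0] = p_left
    for _ in range(steps):
        # q[i] is the flux between nodes i and i+1
        q = [pre_darcy_flux((p[i + 1] - p[i]) / dx) for i in range(n - 1)]
        p = ([p[0]]
             + [p[i] - dt * (q[i] - q[i - 1]) / dx for i in range(1, n - 1)]
             + [p[-1]])
    return p
```

Because the flux is exactly zero wherever the local gradient stays below G, the pressure front in this sketch stalls at finite depth, in contrast to the infinitely fast (if exponentially small) propagation of the classical Darcy diffusivity equation.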
Improvement of ore recovery efficiency in a flotation column cell using ultra-sonic enhanced bubbles
NASA Astrophysics Data System (ADS)
Filippov, L. O.; Royer, J. J.; Filippova, I. V.
2017-07-01
The ore flotation process is enhanced by using external ultrasonic waves. Compared to the classical flotation method, the application of ultrasound to flotation fluids generates micro-bubbles by hydrodynamic cavitation. The increase in flotation performance was modelled as the result of an increased particle-bubble attachment probability and a reduced detachment probability under sonication. A simplified analytical Navier-Stokes model is used to predict the effect of ultrasonic waves on bubble behavior. The theory, which remains to be verified by experimentation, predicts that the ultrasonic waves create cavitation micro-bubbles smaller than the flotation bubbles added by the gas sparger. This effect increases the number of small bubbles in the liquid, which promotes particle-bubble attachment through coalescence between bubbles and micro-bubbles. The decrease in the radius of the flotation bubbles under external vibration forces has the additional effect of enhancing bubble-particle collision. Preliminary results obtained on a potash ore seem to confirm the theory.
Compliance revisited: pharmaceutical drug trials in the era of the contract research organization.
Jonvallen, Petra
2009-12-01
Over the past decade, the management of clinical trials of pharmaceuticals has become a veritable industry, as evidenced by the emergence and proliferation of contract research organizations (CROs) that co-ordinate and monitor trials. This article focuses on work performed by one CRO involved in the introduction of new software, modelled on industrial production processes, into clinical trial practices. It investigates how this new management technique relates to the work performed in the clinic to ensure that trial participants comply with the protocol. Using an analytical distinction between 'classical' management work and invisible work, the article contextualizes the meaning of compliance in the clinic and suggests that the work involved in producing compliance should be taken into consideration by those concerned with the validity of trials, as clinical trials are put under private industrial management. The article builds on participant observation at a Swedish university hospital and interviews with the nurses, dieticians, doctors and a software engineer who formed a team involved in pharmaceutical drug trials of a potential obesity drug.
Comparison of airfoil results from an adaptive wall test section and a porous wall test section
NASA Technical Reports Server (NTRS)
Mineck, Raymond E.
1989-01-01
Two wind tunnel investigations were conducted to assess two different wall interference alleviation/correction techniques: adaptive test section walls and classical analytical corrections. The same airfoil model was tested in the adaptive wall test section of the NASA-Langley 0.3 m Transonic Cryogenic Tunnel (TCT) and in the National Aeronautical Establishment (NAE) High Reynolds Number 2-D facility. The model has a 9 in. chord and a CAST 10-2/DOA 2 airfoil section. The 0.3 m TCT adaptive wall test section has four solid walls with flexible top and bottom walls. The NAE test section has porous top and bottom walls and solid side walls. The aerodynamic results from the NAE tunnel were corrected for top and bottom wall interference at Mach numbers from 0.3 to 0.8 and a Reynolds number of 10 × 10⁶. Movement of the adaptive walls was used to alleviate the top and bottom wall interference in the test results from the NASA tunnel.
Three dimensional iterative beam propagation method for optical waveguide devices
NASA Astrophysics Data System (ADS)
Ma, Changbao; Van Keuren, Edward
2006-10-01
The finite difference beam propagation method (FD-BPM) is an effective model for simulating a wide range of optical waveguide structures. The classical FD-BPMs are based on the Crank-Nicolson scheme and, in tridiagonal form, can be solved using the Thomas method. We present a different type of algorithm for 3-D structures. In this algorithm, the wave equation is formulated into a large sparse matrix equation which can be solved using iterative methods. The simulation window shifting scheme and threshold technique introduced in our earlier work are utilized to overcome the convergence problem of iterative methods for large sparse matrix equations and wide-angle simulations. This method enables us to develop higher-order 3-D wide-angle (WA-) BPMs based on Padé approximant operators and the multistep method, which are commonly used in WA-BPMs for 2-D structures. Simulations using the new methods are compared with analytical results to confirm their effectiveness and applicability.
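The tridiagonal solve underlying classical Crank-Nicolson FD-BPM schemes is the Thomas method mentioned above. A minimal self-contained sketch (generic linear-algebra routine, not code from the paper):

```python
def thomas_solve(a, b, c, d):
    """Solve a tridiagonal system via the Thomas algorithm.
    a: sub-diagonal, b: diagonal, c: super-diagonal, d: right-hand side,
    all lists of length n (a[0] and c[-1] are unused).
    One O(n) forward elimination sweep, then back substitution."""
    n = len(d)
    cp = [0.0] * n  # modified super-diagonal
    dp = [0.0] * n  # modified right-hand side
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):
        denom = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / denom if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / denom
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x
```

The O(n) cost of this sweep is exactly what is lost when the 3-D formulation leads to a large sparse (no longer tridiagonal) matrix, which is why the paper turns to iterative solvers.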
A reference web architecture and patterns for real-time visual analytics on large streaming data
NASA Astrophysics Data System (ADS)
Kandogan, Eser; Soroker, Danny; Rohall, Steven; Bak, Peter; van Ham, Frank; Lu, Jie; Ship, Harold-Jeffrey; Wang, Chun-Fu; Lai, Jennifer
2013-12-01
Monitoring and analysis of streaming data, such as social media, sensors, and news feeds, has become increasingly important for business and government. The volume and velocity of incoming data are key challenges. To effectively support monitoring and analysis, statistical and visual analytics techniques need to be seamlessly integrated; analytic techniques for a variety of data types (e.g., text, numerical) and scope (e.g., incremental, rolling-window, global) must be properly accommodated; interaction, collaboration, and coordination among several visualizations must be supported in an efficient manner; and the system should support the use of different analytics techniques in a pluggable manner. Especially in web-based environments, these requirements pose restrictions on the basic visual analytics architecture for streaming data. In this paper we report on our experience of building a reference web architecture for real-time visual analytics of streaming data, identify and discuss architectural patterns that address these challenges, and report on applying the reference architecture for real-time Twitter monitoring and analysis.
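One of the analytic scopes listed above, rolling-window statistics over a stream, can be sketched with an O(1)-per-observation update. This is a generic illustration of the idea, not code from the reference architecture:

```python
from collections import deque

class RollingStats:
    """Rolling-window mean and variance over a stream,
    updated in O(1) per observation via running sums."""
    def __init__(self, window):
        self.window = window
        self.buf = deque()
        self.s1 = 0.0  # running sum of values in the window
        self.s2 = 0.0  # running sum of squared values

    def push(self, x):
        self.buf.append(x)
        self.s1 += x
        self.s2 += x * x
        if len(self.buf) > self.window:
            old = self.buf.popleft()  # evict the oldest observation
            self.s1 -= old
            self.s2 -= old * old

    def mean(self):
        return self.s1 / len(self.buf)

    def variance(self):
        n = len(self.buf)
        return max(self.s2 / n - (self.s1 / n) ** 2, 0.0)
```

An incremental (global-scope) analytic would simply never evict; the contrast between the two is one concrete instance of the "scope" dimension the paper's architecture has to accommodate.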
Ischemia may be the primary cause of the neurologic deficits in classic migraine
DOE Office of Scientific and Technical Information (OSTI.GOV)
Skyhoj Olsen, T.; Friberg, L.; Lassen, N.A.
1987-02-01
This study investigates whether the cerebral blood flow reduction occurring in attacks of classic migraine is sufficient to cause neurologic deficits. Regional cerebral blood flow measured with the xenon 133 intracarotid injection technique was analyzed in 11 patients in whom a low-flow area developed during attacks of classic migraine. When measured with this technique, regional cerebral blood flow in focal low-flow areas will be overestimated because of the effect of scattered radiation (Compton scatter) on the recordings. In this study, this effect was particularly taken into account when evaluating the degree of blood flow reduction. During attacks of classic migraine, cerebral blood flow reductions averaging 52% were observed focally in the 11 patients. Cerebral blood flow levels known to be insufficient for normal cortical function (less than 16 to 23 mL/100 g/min) were measured in seven patients during the attacks. This was probably also the case in the remaining four patients, but the effect of scattered radiation made a reliable evaluation of blood flow impossible. It is concluded that the blood flow reduction that occurs during attacks of classic migraine is sufficient to cause ischemia and neurologic deficits. Hence, this study suggests a vascular origin of the prodromal neurologic deficits that may accompany attacks of classic migraine.
Weiss, Emily; Wilson, Sandra
2003-01-01
A variety of nonhuman animals in zoo and research settings have been the subjects of classical and operant conditioning techniques. Much of the published work has focused on mammals, husbandry training, and veterinary issues. Several zoos are also training reptiles and birds for similar procedures, but little of this work has been published. Using positive reinforcement techniques enabled the training of 2 male and 2 female Aldabra tortoises (Geochelone gigantea) to approach a target, hold steady on target, and stretch and hold for venipuncture. This article discusses training techniques, venipuncture site, and future training.
KvN mechanics approach to the time-dependent frequency harmonic oscillator.
Ramos-Prieto, Irán; Urzúa-Pineda, Alejandro R; Soto-Eguibar, Francisco; Moya-Cessa, Héctor M
2018-05-30
Using the Ermakov-Lewis invariants appearing in KvN mechanics, the time-dependent frequency harmonic oscillator is studied. The analysis builds upon the operational dynamical model, from which it is possible to infer quantum or classical dynamics; thus, the mathematical structure governing the evolution will be the same in both cases. The Liouville operator associated with the time-dependent frequency harmonic oscillator can be transformed using an Ermakov-Lewis invariant, which is also time dependent and commutes with itself at any time. Finally, because the solution of the Ermakov equation is involved in the evolution of the classical state vector, we explore some analytical and numerical solutions.
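The Ermakov-Lewis construction mentioned above can be checked numerically: if ρ solves the Ermakov equation ρ'' + ω²(t)ρ = 1/ρ³ and x solves the oscillator equation x'' + ω²(t)x = 0, then I = ½[(x/ρ)² + (ρx' − ρ'x)²] is conserved. The frequency profile below is an arbitrary illustrative choice, not one from the paper:

```python
import math

def omega2(t):
    # Hypothetical slowly varying squared frequency (illustrative choice).
    return 1.0 + 0.5 * math.sin(0.3 * t)

def deriv(t, y):
    """y = [x, x', rho, rho']: oscillator plus Ermakov auxiliary equation."""
    x, vx, rho, vrho = y
    return [vx, -omega2(t) * x, vrho, 1.0 / rho**3 - omega2(t) * rho]

def rk4_step(t, y, h):
    """One classical fourth-order Runge-Kutta step."""
    def shift(u, k, s):
        return [a + s * b for a, b in zip(u, k)]
    k1 = deriv(t, y)
    k2 = deriv(t + h / 2, shift(y, k1, h / 2))
    k3 = deriv(t + h / 2, shift(y, k2, h / 2))
    k4 = deriv(t + h, shift(y, k3, h))
    return [y[i] + h / 6 * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i])
            for i in range(4)]

def invariant(y):
    """Ermakov-Lewis invariant I = 0.5*((x/rho)^2 + (rho*x' - rho'*x)^2)."""
    x, vx, rho, vrho = y
    return 0.5 * ((x / rho) ** 2 + (rho * vx - vrho * x) ** 2)
```

Integrating both equations together and watching I stay flat, while the energy ½(x'² + ω²x²) drifts, is a compact demonstration of why this invariant is the natural object for the time-dependent oscillator.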
Diffusion Dynamics and Creative Destruction in a Simple Classical Model
2015-01-01
ABSTRACT The article explores the impact of the diffusion of new methods of production on output and employment growth and income distribution within a Classical one‐sector framework. Disequilibrium paths are studied analytically and in terms of simulations. Diffusion by differential growth affects aggregate dynamics through several channels. The analysis reveals the non‐steady nature of economic change and shows that the adaptation pattern depends both on the innovation's factor‐saving bias and on the extent of the bias, which determines the strength of the selection pressure on non‐innovators. The typology of different cases developed shows various aspects of Schumpeter's concept of creative destruction. PMID:27642192
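Diffusion by differential growth, as invoked above, can be caricatured by a replicator-style share dynamic in which the innovators' output share grows logistically at the growth-rate differential. This one-equation model is far simpler than the article's one-sector framework, and all numbers are illustrative:

```python
def diffusion_share(s0, g_innovator, g_incumbent, dt=0.01, steps=20000):
    """Euler integration of ds/dt = s*(1-s)*(g_innovator - g_incumbent):
    the innovators' share s diffuses logistically through the economy
    at the growth-rate differential (illustrative caricature only)."""
    s = s0
    path = [s]
    for _ in range(steps):
        s += dt * s * (1.0 - s) * (g_innovator - g_incumbent)
        path.append(s)
    return path
```

The strength of the selection pressure on non-innovators is the gap g_innovator − g_incumbent: a larger gap steepens the S-curve, which is the simplest echo of the article's point that the adaptation pattern depends on the extent of the innovation's bias.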
On analytic modeling of lunar perturbations of artificial satellites of the earth
NASA Astrophysics Data System (ADS)
Lane, M. T.
1989-06-01
Two different procedures for analytically modeling the effects of the moon's direct gravitational force on artificial earth satellites are discussed from theoretical and numerical viewpoints. One is developed using classical series expansions of inclination and eccentricity for both the satellite and the moon, and the other employs the method of averaging. Both solutions are seen to have advantages, but it is shown that while the former is more accurate in special situations, the latter is quicker and more practical for the general orbit determination problem where observed data are used to correct the orbit in near real time.
Models of dyadic social interaction.
Griffin, Dale; Gonzalez, Richard
2003-01-01
We discuss the logic of research designs for dyadic interaction and present statistical models with parameters that are tied to psychologically relevant constructs. Building on Karl Pearson's classic nineteenth-century statistical analysis of within-organism similarity, we describe several approaches to indexing dyadic interdependence and provide graphical methods for visualizing dyadic data. We also describe several statistical and conceptual solutions to the 'levels of analysis' problem in analysing dyadic data. These analytic strategies allow the researcher to examine and measure psychological questions of interdependence and social influence. We provide illustrative data from casually interacting and romantic dyads. PMID:12689382
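Pearson's classic index of within-dyad similarity referenced above, the intraclass correlation, can be computed by the double-entry method, in which each dyad contributes both orderings of its two scores (a generic illustration, not the authors' code):

```python
def intraclass_r(dyads):
    """Pearson intraclass correlation via double entry: each dyad (a, b)
    contributes both (a, b) and (b, a), so the index is symmetric in
    the (exchangeable) dyad members."""
    xs = [a for a, b in dyads] + [b for a, b in dyads]
    ys = [b for a, b in dyads] + [a for a, b in dyads]
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n  # equals mx by construction
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / (sxx * syy) ** 0.5
```

Perfectly similar dyads give r = 1 and perfectly opposed dyads give r = −1, so the index directly quantifies the interdependence that dyadic designs are built to detect.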
[Blood sampling using "dried blood spot": a clinical biology revolution underway?].
Hirtz, Christophe; Lehmann, Sylvain
2015-01-01
Blood testing using the dried blood spot (DBS) has been used in clinical analysis since the 1960s, mainly within the framework of neonatal screening (the Guthrie test). Since then, numerous analytes, such as nucleic acids, small molecules and lipids, have been successfully measured on DBS. While this pre-analytical method represents an interesting alternative to classic blood sampling, its routine use is still limited. We review here the different clinical applications of DBS blood sampling and assess its future role, supported by new analytical methods such as LC-MS mass spectrometry.
NASA Astrophysics Data System (ADS)
Lin, Ji; Wang, Hou
2013-07-01
We use the classical Lie-group method to study the evolution equation describing a photovoltaic-photorefractive medium with the effects of the diffusion process and an external electric field. We first reduce it to similarity equations, and then obtain exact analytical solutions including soliton, exponential, and oscillatory solutions. We also obtain numerical solitons from these similarity equations. Moreover, we show theoretically that these solutions have two types of trajectories. One type is a straight line. The other is a parabolic curve, which indicates that these solitons undergo self-deflection.
Tides and tidal stress: Applications to Europa
NASA Astrophysics Data System (ADS)
Hurford, Terry Anthony, Jr.
A review of analytical techniques and documentation of previously inaccessible mathematical formulations is applied to the study of Jupiter's satellite Europa. Compared with numerical codes that are commonly used to model global tidal effects, analytical models of tidal deformation give deeper insight into the mechanics of tides, and can better reveal the nature of the dependence of observable effects on key parameters. I develop analytical models for tidal deformation of multi-layered bodies. Previous studies of Europa, based on numerical computation, were only able to show isolated examples from parameter space. My results show a systematic dependence of tidal response on the thicknesses and material parameters of Europa's core, rocky mantle, liquid water ocean, and outer layer of ice. As in the earlier work, I restrict these studies to incompressible materials. Any set of Love numbers h2 and k2 that describes a planet's tidal deformation can be fit by a range of ice thickness values by adjusting other parameters such as mantle rigidity or core size, an important result for mission planning. Inclusion of compression into multilayer models has been addressed analytically, uncovering several issues that are not explicit in the literature. Full evaluation with compression is here restricted to a uniform sphere. A set of singularities in the classical solution, which correspond to instabilities due to self-gravity, has been identified and mapped in parameter space. The analytical models of tidal response yield the stresses anywhere within the body, including on its surface. Crack patterns (such as cycloids) on Europa are probably controlled by these stresses. However, in contrast to previous studies, which used a thin-shell approximation of the tidal stress, I consider how other tidal models compare with the observed tectonic features. In this way the relationship between Europa's surface tectonics and the global tidal distortion can be constrained.
While large-scale tidal deformations probe internal structure deep within a body, small-scale deformations can probe internal structure at shallower depths. I have used photoclinometry to obtain topographic profiles across terrain adjacent to Europan ridges to detect the effects of loading on the lithosphere. Lithospheric thicknesses have been determined and correlated with types and ages of terrain.
Long-term detection of methyltestosterone (ab-) use by a yeast transactivation system.
Wolf, Sylvi; Diel, Patrick; Parr, Maria Kristina; Rataj, Felicitas; Schänzer, Wilhelm; Vollmer, Günter; Zierau, Oliver
2011-04-01
The routinely used analytical method for detecting the abuse of anabolic steroids only allows the detection of molecules with known analytical properties. In our supplementary approach to structure-independent detection, substances are identified by their biological activity. In the present study, urines excreted after oral methyltestosterone (MT) administration were analyzed by a yeast androgen screen (YAS). The aim was to trace the excretion of MT or its metabolites in human urine samples and to compare the results with those from the established analytical method. MT and its two major metabolites were tested as pure compounds in the YAS. In a second step, the ability of the YAS to detect MT and its metabolites in urine samples was analyzed. For this purpose, a human volunteer ingested a single dose of 5 mg methyltestosterone. Urine samples were collected at different time intervals (0-307 h) and were analyzed in the YAS and in parallel by GC/MS. Whereas the YAS was able to trace MT in urine samples for at least 14 days, the detection limits of the GC/MS method allowed follow-up only until day six. In conclusion, our results demonstrate that the yeast reporter gene system can detect the activity of anabolic steroids like methyltestosterone with high sensitivity, even in urine. Furthermore, the YAS was able to detect MT abuse for a longer period of time than classical GC/MS; evidently, the system responds to long-lasting metabolites that remain unidentified. Therefore, the YAS can be a powerful (pre-)screening tool with the potential to identify persistent or late-screening metabolites of anabolic steroids, which could be used to enhance the sensitivity of GC/MS detection techniques.
Tests of Measurement Invariance without Subgroups: A Generalization of Classical Methods
ERIC Educational Resources Information Center
Merkle, Edgar C.; Zeileis, Achim
2013-01-01
The issue of measurement invariance commonly arises in factor-analytic contexts, with methods for assessment including likelihood ratio tests, Lagrange multiplier tests, and Wald tests. These tests all require advance definition of the number of groups, group membership, and offending model parameters. In this paper, we study tests of measurement…
The new version of EPA’s positive matrix factorization (EPA PMF) software, 5.0, includes three error estimation (EE) methods for analyzing factor analytic solutions: classical bootstrap (BS), displacement of factor elements (DISP), and bootstrap enhanced by displacement (BS-DISP)...
Most Wired 2006: measuring value.
Solovy, Alden
2006-07-01
As the Most Wired hospitals incorporate information technology into their strategic plans, they combine a "balanced scorecard" approach with classic business analytics to measure how well IT delivers on their goals. To find out which organizations made this year's 100 Most Wired list, as well as those named in other survey categories, go to the foldout section.
Hypervelocity Aerodynamics and Control
1990-06-06
because of our inability to integrate the costate equations analytically. [The remainder of this excerpt is garbled in extraction; it concerns conditions under which the reachable domain can be exactly calculated using classical equations, independently of v0.]
Singularities in the classical Rayleigh-Taylor flow - Formation and subsequent motion
NASA Technical Reports Server (NTRS)
Tanveer, S.
1993-01-01
The creation and subsequent motion of singularities of solution to classical Rayleigh-Taylor flow (two dimensional inviscid, incompressible fluid over a vacuum) are discussed. For a specific set of initial conditions, we give analytical evidence to suggest the instantaneous formation of one or more singularities at specific points in the unphysical plane, whose locations depend sensitively on small changes in initial conditions in the physical domain. One-half power singularities are created in accordance with an earlier conjecture; however, depending on initial conditions, other forms of singularities are also possible. For a specific initial condition, we follow a numerical procedure in the unphysical plane to compute the motion of a one-half singularity. This computation confirms our previous conjecture that the approach of a one-half singularity towards the physical domain corresponds to the development of a spike at the physical interface. Under some assumptions that appear to be consistent with numerical calculations, we present analytical evidence to suggest that a singularity of the one-half type cannot impinge the physical domain in finite time.
Mechanical Properties of Laminate Materials: From Surface Waves to Bloch Oscillations
NASA Astrophysics Data System (ADS)
Liang, Z.; Willatzen, M.; Christensen, J.
2015-10-01
We propose hitherto unexplored and fully analytical insights into laminate elastic materials in a true condensed-matter-physics spirit. Pure mechanical surface waves that decay as evanescent waves from the interface are discussed, and we demonstrate how these designer Scholte waves are controlled by the geometry as opposed to the material alone. The linear surface wave dispersion is modulated by the crystal filling fraction such that the degree of confinement can be engineered without relying on narrow-band resonances but on effective stiffness moduli. In the same context, we provide a theoretical recipe for designing Bloch oscillations in classical plate structures and show how mechanical Bloch oscillations can be generated in arrays of solid plates when the modal wavelength is gradually reduced. The design recipe describes how Bloch oscillations in classical structures of arbitrary dimensions can be generated, and we demonstrate this numerically for structures with millimeter and centimeter dimensions in the kilohertz to megahertz range. Analytical predictions agree entirely with full wave simulations showing how elastodynamics can mimic quantum-mechanical condensed-matter phenomena.
Chu, Khim Hoong
2017-11-09
Surface diffusion coefficients may be estimated by fitting solutions of a diffusion model to batch kinetic data. For non-linear systems, a numerical solution of the diffusion model's governing equations is generally required. We report here the application of the classic Langmuir kinetics model to extract surface diffusion coefficients from batch kinetic data. The use of the Langmuir kinetics model in lieu of the conventional surface diffusion model allows derivation of an analytical expression. The parameter estimation procedure requires determining the Langmuir rate coefficient from which the pertinent surface diffusion coefficient is calculated. Surface diffusion coefficients within the 10⁻⁹ to 10⁻⁶ cm²/s range obtained by fitting the Langmuir kinetics model to experimental kinetic data taken from the literature are found to be consistent with the corresponding values obtained from the traditional surface diffusion model. The virtue of this simplified parameter estimation method is that it reduces the computational complexity as the analytical expression involves only an algebraic equation in closed form which is easily evaluated by spreadsheet computation.
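A minimal sketch of the parameter-estimation idea described above, assuming the simplified case in which the kinetic model reduces to a single-exponential approach to equilibrium, q(t) = qe(1 − e^(−kt)). The conversion from rate constant to diffusivity shown here (D = kR²/15, a linear-driving-force-style relation) is an assumption for illustration and may differ from the paper's actual expression:

```python
import math

def fit_rate_constant(times, q, qe):
    """Least-squares slope (through the origin) of -ln(1 - q/qe) vs t,
    i.e. the rate constant k in q(t) = qe * (1 - exp(-k*t))."""
    xs, ys = [], []
    for t, qt in zip(times, q):
        if 0.0 < qt < qe:  # keep only points where the log is defined
            xs.append(t)
            ys.append(-math.log(1.0 - qt / qe))
    return sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

def surface_diffusivity(k, R, geometric_factor=15.0):
    """Convert the lumped rate constant to a diffusion coefficient via an
    assumed linear-driving-force-style relation D = k*R^2/15 for a
    spherical particle of radius R (illustrative assumption only)."""
    return k * R * R / geometric_factor
```

The appeal of this route, as the abstract notes, is that everything above is closed-form algebra, so the whole fit can live in a spreadsheet rather than a PDE solver.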
Quantum Hamilton equations of motion for bound states of one-dimensional quantum systems
NASA Astrophysics Data System (ADS)
Köppe, J.; Patzold, M.; Grecksch, W.; Paul, W.
2018-06-01
On the basis of Nelson's stochastic mechanics derivation of the Schrödinger equation, a formal mathematical structure of non-relativistic quantum mechanics equivalent to the one in classical analytical mechanics has been established in the literature. We recently were able to augment this structure by deriving quantum Hamilton equations of motion by finding the Nash equilibrium of a stochastic optimal control problem, which is the generalization of Hamilton's principle of classical mechanics to quantum systems. We showed that these equations allow a description and numerical determination of the ground state of quantum problems without using the Schrödinger equation. We extend this approach here to deliver the complete discrete energy spectrum and related eigenfunctions for bound states of one-dimensional stationary quantum systems. We exemplify this analytically for the one-dimensional harmonic oscillator and numerically by analyzing a quartic double-well potential, a model of broad importance in many areas of physics. We furthermore point out a relation between the tunnel splitting of such models and mean first passage time concepts applied to Nelson's diffusion paths in the ground state.
Dynamics and Novel Mechanisms of SN2 Reactions on ab Initio Analytical Potential Energy Surfaces.
Szabó, István; Czakó, Gábor
2017-11-30
We describe a novel theoretical approach to bimolecular nucleophilic substitution (SN2) reactions that is based on analytical potential energy surfaces (PESs) obtained by fitting a few tens of thousands of high-level ab initio energy points. These PESs allow computing millions of quasi-classical trajectories, thereby providing unprecedented statistical accuracy for SN2 reactions, as well as performing high-dimensional quantum dynamics computations. We developed full-dimensional ab initio PESs for the F⁻ + CH₃Y [Y = F, Cl, I] systems, which describe the direct and indirect, complex-forming Walden-inversion, the frontside-attack, and the new double-inversion pathways as well as the proton-transfer channels. Reaction dynamics simulations on the new PESs revealed (a) a novel double-inversion SN2 mechanism, (b) frontside complex formation, (c) the dynamics of proton transfer, (d) vibrational and rotational mode specificity, (e) mode-specific product vibrational distributions, (f) agreement between classical and quantum dynamics, (g) good agreement with measured scattering angle and product internal energy distributions, and (h) a significant leaving-group effect in accord with experiments.
An Example of a Hakomi Technique Adapted for Functional Analytic Psychotherapy
ERIC Educational Resources Information Center
Collis, Peter
2012-01-01
Functional Analytic Psychotherapy (FAP) is a model of therapy that lends itself to integration with other therapy models. This paper aims to provide an example to assist others in assimilating techniques from other forms of therapy into FAP. A technique from the Hakomi Method is outlined and modified for FAP. As, on the whole, psychotherapy…
NASA Technical Reports Server (NTRS)
Bozeman, Robert E.
1987-01-01
An analytic technique for accounting for the joint effects of Earth oblateness and atmospheric drag on close-Earth satellites is investigated. The technique is analytic in the sense that explicit solutions to the Lagrange planetary equations are given; consequently, no numerical integrations are required in the solution process. The atmospheric density in the technique described is represented by a rotating spherical exponential model with superposed effects of the oblate atmosphere and the diurnal variations. A computer program implementing the process is discussed and sample output is compared with output from program NSEP (Numerical Satellite Ephemeris Program). NSEP uses a numerical integration technique to account for atmospheric drag effects.
ERIC Educational Resources Information Center
Karolides, Nicholas J., Ed.
1983-01-01
The articles in this journal issue suggest techniques for classroom use of literature that has "withstood the test of time." The titles of the articles and their authors are as follows: (1) "The Storytelling Connection for the Classics" (Mary Ellen Martin); (2) "Elizabeth Bennet: A Liberated Woman" (Geneva Marking);…
DOE Office of Scientific and Technical Information (OSTI.GOV)
BAILEY, DAVID H.; BORWEIN, JONATHAN M.
A recent paper by the present authors, together with mathematical physicists David Broadhurst and M. Larry Glasser, explored Bessel moment integrals, namely definite integrals of the general form ∫_0^∞ t^m f^n(t) dt, where the function f(t) is one of the classical Bessel functions. In that paper, numerous previously unknown analytic evaluations were obtained, using a combination of analytic methods together with some fairly high-powered numerical computations, often performed on highly parallel computers. In several instances, while we were able to numerically discover what appears to be a solid analytic identity, based on extremely high-precision numerical computations, we were unable to find a rigorous proof. Thus we present here a brief list of some of these unproven but numerically confirmed identities.
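For orientation, one of the simplest moments of this type, taking f = K_0 (the modified Bessel function of the second kind), has a known closed form, ∫_0^∞ K_0(t) dt = π/2; substituting the integral representation K_0(t) = ∫_0^∞ exp(-t cosh u) du and swapping the order of integration reduces the moment to ∫_0^∞ sech(u) du, which a plain quadrature rule evaluates easily. The truncation point and step count below are arbitrary choices:

```python
import math

# sech decays like 2*e^{-u}, so truncating the integral at u = 30
# introduces an error of only ~2e-13.
def sech(u):
    return 1.0 / math.cosh(u)

n, upper = 60_000, 30.0          # n must be even for Simpson's rule
h = upper / n
s = sech(0.0) + sech(upper)
s += 4.0 * sum(sech(i * h) for i in range(1, n, 2))
s += 2.0 * sum(sech(i * h) for i in range(2, n, 2))
moment = s * h / 3.0             # should approximate pi/2
```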
Elsayed, Hany H.; Mostafa, Ahmed M.; Soliman, Saleh; El-Bawab, Hatem Y.; Moharram, Adel A.; El-Nori, Ahmed A.
2016-01-01
OBJECTIVES Airway metal pins are one of the most commonly inhaled foreign bodies in Eastern societies in young females wearing headscarves. We innovated a modified bronchoscopic technique to extract tracheobronchial headscarf pins by the insertion of a magnet to allow an easy and non-traumatic extraction of the pins. The aim of this study was to assess the feasibility and safety of our new technique and compare it with our large previous experience with the classic bronchoscopic method of extraction of tracheobronchial headscarf pins. METHODS We performed a study comparing our retrospective experience of classic bronchoscopic extraction from February 2004 to January 2014 and prospective experience with our modified technique using the magnet from January 2014 to June 2015. An institutional review board and new device approval were obtained. RESULTS Three hundred and twenty-six procedures on 315 patients were performed during our initial 10-year experience. Of them, 304 patients were females. The median age of our group was 13 (0–62). The median time from inhalation to procedure was 1 day (0–1022). After introducing our modified new technique using the magnet, 20 procedures were performed. Nineteen were females. The median time of the procedure and the need to forcefully bend the pin for extraction were in favour of the new technique in comparison with our classic approach (2 vs 6 min; P < 0.001) (2 patients = 20% vs 192 = 58%; P < 0.001). The conversion rate to surgery was also in favour of the modified technique but did not reach statistical significance (0 = 0% vs 15 = 4.8%; P = 0.32). All patients who underwent the modified technique were discharged home on the same day of the procedure. No procedural complications were recorded. All remain well on a follow-up period of up to 14 months. 
CONCLUSIONS Bronchoscopic extraction of tracheobronchial inhaled headscarf pins using a novel technique with homemade magnets was safer and simpler in comparison with our large experience with the classic approach. We advise the use of this device (or concept) in selected patients in centres dealing with this problem. PMID:26850113
On the anisotropic advection-diffusion equation with time dependent coefficients
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hernandez-Coronado, Hector; Coronado, Manuel; Del-Castillo-Negrete, Diego B.
2017-02-01
The advection-diffusion equation with time dependent velocity and anisotropic time dependent diffusion tensor is examined in regard to its non-classical transport features and to the use of a non-orthogonal coordinate system. Although this equation appears in diverse physical problems, particularly in particle transport in stochastic velocity fields and in underground porous media, a detailed analysis of its solutions is lacking. In order to study the effects of the time-dependent coefficients and the anisotropic diffusion on transport, we solve the equation analytically for an initial Dirac delta pulse. Here, we discuss the solutions to three cases: one based on power-law correlation functions, where the pulse diffuses faster than the classical rate ~t; a second case specifically designed to display a slower rate of diffusion than the classical one; and a third case describing hydrodynamic dispersion in porous media.
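In the constant-coefficient limit, the classical solution for a Dirac delta pulse is a drifting Gaussian whose variance grows at the classical rate ~t. A quick check (with illustrative velocity and diffusivity values, not those of the paper) confirms mass conservation and the 2Dt spreading law:

```python
import math

def pulse(x, t, v=1.0, D=0.5):
    """Classical 1-D constant-coefficient solution for a Dirac delta
    released at x = 0, t = 0 (parameter values are illustrative)."""
    return math.exp(-(x - v * t) ** 2 / (4 * D * t)) / math.sqrt(4 * math.pi * D * t)

# Check mass conservation, the drift <x> = v*t, and the classical
# variance growth sigma^2 = 2*D*t by quadrature on a wide grid.
t, v, D = 2.0, 1.0, 0.5
w = 0.01
xs = [-20 + w * i for i in range(4001)]      # covers the pulse amply
mass = sum(pulse(x, t, v, D) for x in xs) * w
mean = sum(x * pulse(x, t, v, D) for x in xs) * w
var = sum((x - mean) ** 2 * pulse(x, t, v, D) for x in xs) * w
```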
Generation of steady entanglement via unilateral qubit driving in bad cavities.
Jin, Zhao; Su, Shi-Lei; Zhu, Ai-Dong; Wang, Hong-Fu; Shen, Li-Tuo; Zhang, Shou
2017-12-15
We propose a scheme for generating an entangled state for two atoms trapped in two separate cavities coupled to each other. The scheme is based on the competition between the unitary dynamics induced by the classical fields and the collective decays induced by the dissipation of two non-local bosonic modes. In this scheme, only one qubit is driven by external classical fields, whereas the other need not be manipulated via classical driving. This is meaningful for experimental implementation between separate nodes of a quantum network. The steady entanglement can be obtained regardless of the initial state, and the robustness of the scheme against parameter fluctuations is numerically demonstrated. We also give an analytical derivation of the stationary fidelity to enable a discussion of the validity of this regime. Furthermore, based on the dissipative entanglement preparation scheme, we construct a quantum state transfer setup with multiple nodes as a practical application.
NASA Astrophysics Data System (ADS)
Daněk, J.; Klaiber, M.; Hatsagortsyan, K. Z.; Keitel, C. H.; Willenberg, B.; Maurer, J.; Mayer, B. W.; Phillips, C. R.; Gallmann, L.; Keller, U.
2018-06-01
We study strong-field ionization and rescattering beyond the long-wavelength limit of the dipole approximation with elliptically polarized mid-IR laser pulses. Full three-dimensional photoelectron momentum distributions (PMDs) measured with velocity map imaging and tomographic reconstruction revealed an unexpected sharp ridge structure in the polarization plane (2018 Phys. Rev. A 97 013404). This thin line-shaped ridge structure for low-energy photoelectrons is correlated with the ellipticity-dependent asymmetry of the PMD along the beam propagation direction. The peak of the projection of the PMD onto the beam propagation axis is shifted from negative to positive values when the sharp ridge fades away with increasing ellipticity. With classical trajectory Monte Carlo simulations and analytical analysis, we study the underlying physics of this feature, which is based on the interplay between the lateral drift of the ionized electron, the laser magnetic field induced drift in the laser propagation direction, and Coulomb focusing. To apply our observations to emerging techniques relying on strong-field ionization processes, including time-resolved holography and molecular imaging, we present a detailed classical trajectory-based analysis of our observations. The analysis leads to the explanation of the fine structure of the ridge and its non-dipole behavior upon rescattering while introducing restrictions on the ellipticity. These restrictions, as well as the ionization and recollision phases, provide additional observables to gain information on the timing of the ionization and recollision process and on the non-dipole properties of the ionization process.
Ultra-small dye-doped silica nanoparticles via modified sol-gel technique.
Riccò, R; Nizzero, S; Penna, E; Meneghello, A; Cretaio, E; Enrichi, F
2018-01-01
In modern biosensing and imaging, fluorescence-based methods constitute the most diffused approach to achieve optimal detection of analytes, both in solution and on the single-particle level. Despite the huge progress made in recent decades in the development of plasmonic biosensors and label-free sensing techniques, fluorescent molecules remain the most commonly used contrast agents to date for commercial imaging and detection methods. However, they exhibit low stability, can be difficult to functionalise, and often result in a low signal-to-noise ratio. Thus, embedding fluorescent probes into robust and bio-compatible materials, such as silica nanoparticles, can substantially enhance the detection limit and dramatically increase the sensitivity. In this work, ultra-small fluorescent silica nanoparticles (NPs) for optical biosensing applications were doped with a fluorescent dye, using simple water-based sol-gel approaches based on the classical Stöber procedure. By systematically modulating reaction parameters, controllable size tuning of particle diameters as low as 10 nm was achieved. Particle morphology and optical response were evaluated, showing a possible single-molecule behaviour, without employing microemulsion methods to achieve similar results. Graphical abstract: We report a simple, cheap, reliable protocol for the synthesis and systematic tuning of ultra-small (< 10 nm) dye-doped luminescent silica nanoparticles.
Machine-learning techniques for geochemical discrimination of 2011 Tohoku tsunami deposits
Kuwatani, Tatsu; Nagata, Kenji; Okada, Masato; Watanabe, Takahiro; Ogawa, Yasumasa; Komai, Takeshi; Tsuchiya, Noriyoshi
2014-01-01
Geochemical discrimination has recently been recognised as a potentially useful proxy for identifying tsunami deposits in addition to classical proxies such as sedimentological and micropalaeontological evidence. However, difficulties remain because it is unclear which elements best discriminate between tsunami and non-tsunami deposits. Herein, we propose a mathematical methodology for the geochemical discrimination of tsunami deposits using machine-learning techniques. The proposed method can determine the appropriate combinations of elements and the precise discrimination plane that best discerns tsunami deposits from non-tsunami deposits in high-dimensional compositional space through the use of data sets of bulk composition that have been categorised as tsunami or non-tsunami sediments. We applied this method to the 2011 Tohoku tsunami and to background marine sedimentary rocks. After an exhaustive search of all 262,144 (= 2^18) combinations of the 18 analysed elements, we observed several tens of combinations with discrimination rates higher than 99.0%. The analytical results show that elements such as Ca and several heavy-metal elements are important for discriminating tsunami deposits from marine sedimentary rocks. These elements are considered to reflect the formation mechanism and origin of the tsunami deposits. The proposed methodology has the potential to aid in the identification of past tsunamis by using other tsunami proxies. PMID:25399750
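The exhaustive subset search can be sketched at toy scale; the element names, synthetic compositions, and nearest-centroid rule below are invented stand-ins for the study's 18 elements and its actual discrimination plane:

```python
import itertools, random

random.seed(0)
ELEMENTS = ["Ca", "Fe", "Zn", "Pb", "Sr"]  # toy stand-ins for the 18 analysed elements

# Synthetic compositions: "tsunami" samples enriched in Ca and a heavy metal.
def sample(tsunami):
    base = [random.gauss(1.0, 0.2) for _ in ELEMENTS]
    if tsunami:
        base[0] += 1.5  # Ca enrichment
        base[3] += 1.0  # Pb enrichment
    return base

data = [(sample(True), 1) for _ in range(40)] + [(sample(False), 0) for _ in range(40)]

def rate(idx):
    """Discrimination rate of a nearest-centroid rule on the chosen elements."""
    def centroid(label):
        pts = [x for x, y in data if y == label]
        return [sum(p[i] for p in pts) / len(pts) for i in idx]
    c1, c0 = centroid(1), centroid(0)
    hits = 0
    for x, y in data:
        d1 = sum((x[i] - a) ** 2 for i, a in zip(idx, c1))
        d0 = sum((x[i] - a) ** 2 for i, a in zip(idx, c0))
        hits += int((d1 < d0) == (y == 1))
    return hits / len(data)

# Exhaustive search over all non-empty element subsets: 2^5 - 1 = 31 here,
# versus 2^18 = 262,144 in the study.
best = max(rate(c) for n in range(1, 6) for c in itertools.combinations(range(5), n))
```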
Yang, Yan-Qin; Yin, Hong-Xu; Yuan, Hai-Bo; Jiang, Yong-Wen; Dong, Chun-Wang; Deng, Yu-Liang
2018-01-01
In the present work, a novel infrared-assisted extraction coupled to headspace solid-phase microextraction (IRAE-HS-SPME) followed by gas chromatography-mass spectrometry (GC-MS) was developed for rapid determination of the volatile components in green tea. The extraction parameters such as fiber type, sample amount, infrared power, extraction time, and infrared lamp distance were optimized by orthogonal experimental design. Under optimum conditions, a total of 82 volatile compounds in 21 green tea samples from different geographical origins were identified. Compared with classical water-bath heating, the proposed technique has remarkable advantages of considerably reducing the analytical time and high efficiency. In addition, an effective classification of green teas based on their volatile profiles was achieved by partial least square-discriminant analysis (PLS-DA) and hierarchical clustering analysis (HCA). Furthermore, the application of a dual criterion based on the variable importance in the projection (VIP) values of the PLS-DA models and on the category from one-way univariate analysis (ANOVA) allowed the identification of 12 potential volatile markers, which were considered to make the most important contribution to the discrimination of the samples. The results suggest that IRAE-HS-SPME/GC-MS technique combined with multivariate analysis offers a valuable tool to assess geographical traceability of different tea varieties.
Marcelo Ardón; Catherine M. Pringle; Susan L. Eggert
2009-01-01
Comparisons of the effects of leaf litter chemistry on leaf breakdown rates in tropical vs temperate streams are hindered by the incompatibility, among studies and across sites, of the analytical methods used to measure leaf chemistry. We used standardized analytical techniques to measure chemistry and breakdown rate of leaves from common riparian tree species at 2 sites, 1...
Kazmierczak, Steven C; Leen, Todd K; Erdogmus, Deniz; Carreira-Perpinan, Miguel A
2007-01-01
The clinical laboratory generates large amounts of patient-specific data. Detection of errors that arise during pre-analytical, analytical, and post-analytical processes is difficult. We performed a pilot study, utilizing a multidimensional data reduction technique, to assess the utility of this method for identifying errors in laboratory data. We evaluated 13,670 individual patient records collected over a 2-month period from hospital inpatients and outpatients. We utilized those patient records that contained a complete set of 14 different biochemical analytes. We used two-dimensional generative topographic mapping to project the 14-dimensional record to a two-dimensional space. The use of a two-dimensional generative topographic mapping technique to plot multi-analyte patient data as a two-dimensional graph allows for the rapid identification of potentially anomalous data. Although we performed a retrospective analysis, this technique has the benefit of being able to assess laboratory-generated data in real time, allowing for the rapid identification and correction of anomalous data before they are released to the physician. In addition, serial laboratory multi-analyte data for an individual patient can also be plotted as a two-dimensional plot. This tool might also be useful for assessing patient wellbeing and prognosis.
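Generative topographic mapping itself is too involved for a short sketch, but the core idea (project high-dimensional multi-analyte records to two dimensions and flag points far from the bulk) can be illustrated with a linear stand-in: power-iteration PCA on synthetic records. All data, dimensions, and the anomaly below are invented:

```python
import random

random.seed(1)
DIM = 5  # stand-in for the 14 biochemical analytes

# Synthetic patient records: correlated normals plus one gross error.
records = []
for _ in range(200):
    base = random.gauss(0, 1)
    records.append([base + random.gauss(0, 0.3) for _ in range(DIM)])
records.append([8.0] * DIM)  # an anomalous record

mean = [sum(r[j] for r in records) / len(records) for j in range(DIM)]
X = [[r[j] - mean[j] for j in range(DIM)] for r in records]

def matvec(v):
    # Covariance-times-vector without forming the covariance matrix.
    out = [0.0] * DIM
    for row in X:
        s = sum(a * b for a, b in zip(row, v))
        for j in range(DIM):
            out[j] += s * row[j]
    return [o / len(X) for o in out]

def power_iter(deflate=None):
    # Power iteration; deflating the first component yields the second.
    v = [1.0] * DIM
    for _ in range(200):
        w = matvec(v)
        if deflate:
            d = sum(a * b for a, b in zip(w, deflate))
            w = [a - d * b for a, b in zip(w, deflate)]
        n = sum(a * a for a in w) ** 0.5
        v = [a / n for a in w]
    return v

p1 = power_iter()
p2 = power_iter(deflate=p1)
proj = [(sum(a * b for a, b in zip(r, p1)),
         sum(a * b for a, b in zip(r, p2))) for r in X]

# Flag the record farthest from the origin of the 2-D map.
d = [(u * u + w * w) ** 0.5 for u, w in proj]
flagged = max(range(len(d)), key=lambda i: d[i])
```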
Analytical Chemistry of Surfaces: Part II. Electron Spectroscopy.
ERIC Educational Resources Information Center
Hercules, David M.; Hercules, Shirley H.
1984-01-01
Discusses two surface techniques: X-ray photoelectron spectroscopy (ESCA) and Auger electron spectroscopy (AES). Focuses on fundamental aspects of each technique, important features of instrumentation, and some examples of how ESCA and AES have been applied to analytical surface problems. (JN)
Diagrammar in classical scalar field theory
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cattaruzza, E., E-mail: Enrico.Cattaruzza@gmail.com; Gozzi, E., E-mail: gozzi@ts.infn.it; INFN, Sezione di Trieste
2011-09-15
In this paper we analyze perturbatively a gφ⁴ classical field theory with and without temperature. In order to do that, we make use of a path-integral approach developed some time ago for classical theories. It turns out that the diagrams appearing at the classical level are many more than at the quantum level due to the presence of extra auxiliary fields in the classical formalism. We shall show that a universal supersymmetry present in the classical path-integral mentioned above is responsible for the cancellation of various diagrams. The same supersymmetry allows the introduction of super-fields and super-diagrams which considerably simplify the calculations and make the classical perturbative calculations almost 'identical' formally to the quantum ones. Using the super-diagrams technique, we develop the classical perturbation theory up to third order. We conclude the paper with a perturbative check of the fluctuation-dissipation theorem. Highlights: (i) we provide the Feynman diagrams of perturbation theory for a classical field theory; (ii) we give a super-formalism which links the quantum diagrams to the classical ones; (iii) we check perturbatively the fluctuation-dissipation theorem.
Tachyon field in loop quantum cosmology: Inflation and evolution picture
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xiong Huaui; Zhu Jianyang
2007-04-15
Loop quantum cosmology (LQC) predicts a nonsingular evolution of the universe through a bounce in the high energy region. We show that this is always true in tachyon matter LQC. Differing from the classical Friedmann-Robertson-Walker (FRW) cosmology, super-inflation can appear in tachyon matter LQC; furthermore, the inflation can be extended to the region where classical inflation stops. Using a numerical method, we give an evolution picture of the tachyon field with an exponential potential in the context of LQC. It indicates that the quantum dynamical solutions have the same attractive behavior as the classical solutions do. The whole evolution of the tachyon field is as follows: in the distant past, the tachyon field, being in the contracting cosmology, accelerates to climb up the potential hill with a negative velocity; then at the boundary the tachyon field is bounced into an expanding universe with positive velocity, rolling down to the bottom of the potential. In the slow-roll limit, we compare the quantum inflation with the classical case in both an analytic and a numerical way.
Cosmic Experiments: Remaking Materialism and Daoist Ethic "Outside of the Establishment".
Zhan, Mei
2016-01-01
In this article, I discuss recent experiments in 'classical' (gudian) Chinese medicine. As the marketization and privatization of health care deepens and enters uncharted territories in China, a cohort of young practitioners and entrepreneurs have begun their quest for the 'primordial spirit' of traditional Chinese medicine by setting up their own businesses where they engage in clinical, pedagogical, and entrepreneurial practices outside of state-run institutions. I argue that these explorations in classical Chinese medicine, which focus on classical texts and Daoist analytics, do not aim to restore spirituality to the scientized and secularized theory of traditional Chinese medicine. Nor are they symptomatic of withdrawals from the modern world. Rather, these 'cosmic experiments' need to be understood in relation to dialectical and historical materialisms as modes of knowledge production and political alliance. In challenging the status of materialist theory and the process of theorization in traditional Chinese medicine and postsocialist life more broadly speaking, advocates of classical Chinese medicine imagine nondialectical materialisms as immanent ways of thinking, doing, and being in the world.
Meta-analysis of diagnostic test data: a bivariate Bayesian modeling approach.
Verde, Pablo E
2010-12-30
In the last decades, the amount of published results on clinical diagnostic tests has expanded very rapidly. The counterpart to this development has been the formal evaluation and synthesis of diagnostic results. However, published results present substantial heterogeneity, and they can be regarded as so far removed from the classical domain of meta-analysis that they provide a rather severe test of classical statistical methods. Recently, bivariate random effects meta-analytic methods, which model the pairs of sensitivities and specificities, have been presented from the classical point of view. In this work a bivariate Bayesian modeling approach is presented. This approach substantially extends the scope of classical bivariate methods by allowing the structural distribution of the random effects to depend on multiple sources of variability. Meta-analysis is summarized by the predictive posterior distributions for sensitivity and specificity. This new approach also allows substantial model checking, model diagnostics, and model selection. Statistical computations are implemented in the public domain statistical software (WinBUGS and R) and illustrated with real data examples. Copyright © 2010 John Wiley & Sons, Ltd.
The Green's functions for peridynamic non-local diffusion.
Wang, L J; Xu, J F; Wang, J X
2016-09-01
In this work, we develop the Green's function method for the solution of the peridynamic non-local diffusion model in which the spatial gradient of the generalized potential in the classical theory is replaced by an integral of a generalized response function in a horizon. We first show that the general solutions of the peridynamic non-local diffusion model can be expressed as functionals of the corresponding Green's functions for point sources, along with volume constraints for non-local diffusion. Then, we obtain the Green's functions by the Fourier transform method for unsteady and steady diffusions in infinite domains. We also demonstrate that the peridynamic non-local solutions converge to the classical differential solutions when the non-local length approaches zero. Finally, the peridynamic analytical solutions are applied to an infinite plate heated by a Gauss source, and the predicted variations of temperature are compared with the classical local solutions. The peridynamic non-local diffusion model predicts a lower rate of variation of the field quantities than that of the classical theory, which is consistent with experimental observations. The developed method is applicable to general diffusion-type problems.
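The horizon-to-zero convergence claim can be checked directly: for a smooth field, a one-dimensional nonlocal operator with a constant kernel (an illustrative choice, not the paper's generalized response function) approaches the classical second derivative as the horizon shrinks:

```python
import math

def nonlocal_lap(u, x, delta, n=2000):
    """1-D nonlocal operator with a constant kernel (illustrative):
        L_d u(x) = (3/delta^3) * integral_{-delta}^{delta} (u(x+s) - u(x)) ds,
    evaluated with a midpoint rule. For smooth u it tends to u''(x)
    as the horizon delta -> 0, with an O(delta^2) error.
    """
    h = 2 * delta / n
    total = 0.0
    for i in range(n):
        s = -delta + (i + 0.5) * h
        total += u(x + s) - u(x)
    return 3.0 / delta ** 3 * total * h

x0 = 0.7
exact = -math.sin(x0)                      # (sin)'' = -sin
approx = nonlocal_lap(math.sin, x0, delta=0.05)
```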
Selected topics from classical bacterial genetics.
Raleigh, Elisabeth A; Elbing, Karen; Brent, Roger
2002-08-01
Current cloning technology exploits many facts learned from classical bacterial genetics. This unit covers those that are critical to understanding the techniques described in this book. Topics include antibiotics, the LAC operon, the F factor, nonsense suppressors, genetic markers, genotype and phenotype, DNA restriction, modification and methylation and recombination.
Classical-to-Quantum Transition with Broadband Four-Wave Mixing
NASA Astrophysics Data System (ADS)
Vered, Rafi Z.; Shaked, Yaakov; Ben-Or, Yelena; Rosenbluh, Michael; Pe'er, Avi
2015-02-01
A key question of quantum optics is how nonclassical biphoton correlations at low power evolve into classical coherence at high power. Direct observation of the crossover from quantum to classical behavior is desirable, but difficult due to the lack of adequate experimental techniques that cover the ultrawide dynamic range in photon flux from the single-photon regime to the classical level. We investigate biphoton correlations within the spectrum of light generated by broadband four-wave mixing over a large dynamic range of ~80 dB in photon flux across the classical-to-quantum transition, using a two-photon interference effect that distinguishes between classical and quantum behavior. We explore the quantum-classical nature of the light by observing the dependence of the interference contrast on internal loss, and demonstrate quantum collapse and revival of the interference when the four-wave mixing gain in the fiber becomes imaginary.
Application Of Iterative Reconstruction Techniques To Conventional Circular Tomography
NASA Astrophysics Data System (ADS)
Ghosh Roy, D. N.; Kruger, R. A.; Yih, B. C.; Del Rio, S. P.; Power, R. L.
1985-06-01
Two "point-by-point" iteration procedures, namely, Iterative Least Square Technique (ILST) and Simultaneous Iterative Reconstructive Technique (SIRT) were applied to classical circular tomographic reconstruction. The technique of tomosynthetic DSA was used in forming the tomographic images. Reconstructions of a dog's renal and neck anatomy are presented.
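SIRT admits a compact sketch. The toy below uses a Landweber-type simultaneous update, one common form of SIRT, to recover a 2x2 "image" from its row and column ray sums; the relaxation factor and geometry are illustrative, not those of the tomosynthetic setup:

```python
# Toy SIRT: recover a 2x2 "image" from its row and column projections.
true_img = [3.0, 1.0, 2.0, 4.0]            # pixels (r0c0, r0c1, r1c0, r1c1)
A = [[1, 1, 0, 0],                          # row-0 sum
     [0, 0, 1, 1],                          # row-1 sum
     [1, 0, 1, 0],                          # col-0 sum
     [0, 1, 0, 1]]                          # col-1 sum
b = [sum(a * t for a, t in zip(row, true_img)) for row in A]

x = [0.0] * 4
lam = 0.2                                   # relaxation factor (illustrative)
for _ in range(500):
    # Simultaneous update: correct all pixels using all residuals at once.
    resid = [bi - sum(a * xi for a, xi in zip(row, x)) for row, bi in zip(A, b)]
    for j in range(4):
        x[j] += lam * sum(A[i][j] * resid[i] for i in range(4))

# The system has a one-dimensional null space, so x need not equal
# true_img pixel-by-pixel, but its projections must match the data.
recon_resid = max(abs(bi - sum(a * xi for a, xi in zip(row, x)))
                  for row, bi in zip(A, b))
```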
Focus-based filtering + clustering technique for power-law networks with small world phenomenon
NASA Astrophysics Data System (ADS)
Boutin, François; Thièvre, Jérôme; Hascoët, Mountaz
2006-01-01
Realistic interaction networks usually present two main properties: a power-law degree distribution and small-world behavior. Few nodes are linked to many nodes, and adjacent nodes are likely to share common neighbors. Moreover, the graph structure usually presents a dense core that is difficult to explore with classical filtering and clustering techniques. In this paper, we propose a new filtering technique that accounts for a user-focus. This technique extracts a tree-like graph that also has a power-law degree distribution and small-world behavior. The resulting structure is easily drawn with classical force-directed drawing algorithms. It is also quickly clustered and displayed as a multi-level silhouette tree (MuSi-Tree) from any user-focus. We built a new graph filtering + clustering + drawing API and report a case study.
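The focus-based extraction of a tree-like view can be sketched with a breadth-first search that keeps, for every reachable node, one shortest path back to the user-focus. The toy adjacency list below is invented, and the actual MuSi-Tree clustering is more involved:

```python
from collections import deque

# Toy power-law-ish network as an adjacency dict; "hub" plays the dense core.
graph = {
    "hub": ["a", "b", "c", "d"],
    "a": ["hub", "b", "e"],
    "b": ["hub", "a"],
    "c": ["hub", "f"],
    "d": ["hub"],
    "e": ["a"],
    "f": ["c"],
}

def focus_tree(graph, focus):
    """BFS from the user-focus: keeps one shortest path per reachable
    node back to the focus, i.e. a tree-like filtered view of the graph."""
    parent = {focus: None}
    q = deque([focus])
    while q:
        u = q.popleft()
        for v in graph.get(u, []):
            if v not in parent:
                parent[v] = u
                q.append(v)
    return parent

tree = focus_tree(graph, "hub")
```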
Li, Yan; Thomas, Manoj; Osei-Bryson, Kweku-Muata; Levy, Jason
2016-01-01
With the growing popularity of data analytics and data science in the field of environmental risk management, a formalized Knowledge Discovery via Data Analytics (KDDA) process that incorporates all applicable analytical techniques for a specific environmental risk management problem is essential. In this emerging field, there is limited research dealing with the use of decision support to elicit environmental risk management (ERM) objectives and identify analytical goals from ERM decision makers. In this paper, we address problem formulation in the ERM understanding phase of the KDDA process. We build a DM³ ontology to capture ERM objectives and to infer analytical goals and associated analytical techniques. A framework to assist decision making in the problem formulation process is developed. It is shown how the ontology-based knowledge system can provide structured guidance to retrieve relevant knowledge during problem formulation. The importance of not only operationalizing the KDDA approach in a real-world environment but also evaluating the effectiveness of the proposed procedure is emphasized. We demonstrate how ontology inferencing may be used to discover analytical goals and techniques by conceptualizing Hazardous Air Pollutants (HAPs) exposure shifts based on a multilevel analysis of the level of urbanization (and related economic activity) and the degree of Socio-Economic Deprivation (SED) at the local neighborhood level. The HAPs case highlights not only the role of complexity in problem formulation but also the need for integrating data from multiple sources and the importance of employing appropriate KDDA modeling techniques. Challenges and opportunities for KDDA are summarized with an emphasis on environmental risk management and HAPs. PMID:27983713
Onset of fractional-order thermal convection in porous media
NASA Astrophysics Data System (ADS)
Karani, Hamid; Rashtbehesht, Majid; Huber, Christian; Magin, Richard L.
2017-12-01
The macroscopic description of buoyancy-driven thermal convection in porous media is governed by advection-diffusion processes, which in the presence of thermophysical heterogeneities fail to predict the onset of thermal convection and the average rate of heat transfer. This work extends the classical model of heat transfer in porous media by including a fractional-order advective-dispersive term to account for the role of thermophysical heterogeneities in shifting the thermal instability point. The proposed fractional-order model overcomes limitations of the common closure approaches for the thermal dispersion term by replacing the diffusive assumption with a fractional-order model. Through a linear stability analysis and Galerkin procedure, we derive an analytical formula for the critical Rayleigh number as a function of the fractional model parameters. The resulting critical Rayleigh number reduces to the classical value in the absence of thermophysical heterogeneities when solid and fluid phases have similar thermal conductivities. Numerical simulations of the coupled flow equation with the fractional-order energy model near the primary bifurcation point confirm our analytical results. Moreover, data from pore-scale simulations are used to examine the potential of the proposed fractional-order model in predicting the amount of heat transfer across the porous enclosure. The linear stability and numerical results show that, unlike the classical thermal advection-dispersion models, the fractional-order model captures the advance and delay in the onset of convection in porous media and provides correct scalings for the average heat transfer in a thermophysically heterogeneous medium.
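In the absence of thermophysical heterogeneity, the fractional model reduces to the classical porous-medium (Horton-Rogers-Lapwood) result, whose critical Rayleigh number follows from minimizing the standard dispersion relation over the horizontal wavenumber. A direct numerical minimization recovers the classical value 4π²:

```python
import math

# Classical linear-stability dispersion relation for the
# Horton-Rogers-Lapwood problem: Ra(k) = (k^2 + pi^2)^2 / k^2 for a
# horizontal wavenumber k. Its minimum over k gives the classical
# critical Rayleigh number 4*pi^2, attained at k = pi.
def Ra(k):
    return (k * k + math.pi ** 2) ** 2 / (k * k)

ks = [0.5 + 0.001 * i for i in range(6000)]   # scan k in (0.5, 6.5)
k_crit = min(ks, key=Ra)
Ra_crit = Ra(k_crit)
```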
El-Awady, Mohamed; Pyell, Ute
2013-07-05
The application of a new method developed for the assessment of sweeping efficiency in MEKC under homogeneous and inhomogeneous electric field conditions is extended to the general case, in which the distribution coefficient and the electric conductivity of the analyte in the sample zone and in the separation compartment are varied. As test analytes p-hydroxybenzoates (parabens), benzamide and some aromatic amines are studied under MEKC conditions with SDS as anionic surfactant. We show that in the general case - in contrast to the classical description - the obtainable enrichment factor is not only dependent on the retention factor of the analyte in the sample zone but also dependent on the retention factor in the background electrolyte (BGE). It is shown that in the general case sweeping is inherently a multistep focusing process. We describe an additional focusing/defocusing step (the retention factor gradient effect, RFGE) quantitatively by extending the classical equation employed for the description of the sweeping process with an additional focusing/defocusing factor. The validity of this equation is demonstrated experimentally (and theoretically) under variation of the organic solvent content (in the sample and/or the BGE), the type of organic solvent (in the sample and/or the BGE), the electric conductivity (in the sample), the pH (in the sample), and the concentration of surfactant (in the BGE). It is shown that very high enrichment factors can be obtained if the pH in the sample zone makes it possible to convert the analyte into a charged species that has a high distribution coefficient with respect to an oppositely charged micellar phase, while the pH in the BGE enables separation of the neutral species under moderate retention factor conditions. Copyright © 2013 Elsevier B.V. All rights reserved.
Pre-concentration technique for reduction in "Analytical instrument requirement and analysis"
NASA Astrophysics Data System (ADS)
Pal, Sangita; Singha, Mousumi; Meena, Sher Singh
2018-04-01
Limited availability of analytical instruments for the methodical detection of known and unknown effluents poses a serious hindrance to qualification and quantification. Instruments such as elemental analyzers, ICP-MS, ICP-AES, EDXRF, ion chromatography, and electro-analytical systems are not only expensive and time consuming but also demand maintenance and replacement of damaged essential parts, all of which are of serious concern. Moreover, for field studies and instant detection, installing these instruments is not practical at every site. A technique such as pre-concentration of metal ions, especially for lean streams, is therefore elaborated and justified. Chelation/sequestration is the key to the demonstrated immobilization technique, which is simple, user friendly, effective, inexpensive, and time efficient, and is easy to carry (10-20 g vial) to the experimental field/site.
Approximate analytical relationships for linear optimal aeroelastic flight control laws
NASA Astrophysics Data System (ADS)
Kassem, Ayman Hamdy
1998-09-01
This dissertation introduces new methods to uncover functional relationships between design parameters of a contemporary control design technique and the resulting closed-loop properties. Three new methods are developed for generating such relationships through analytical expressions: the Direct Eigen-Based Technique, the Order of Magnitude Technique, and the Cost Function Imbedding Technique. Efforts concentrated on the linear-quadratic state-feedback control-design technique applied to an aeroelastic flight control task. For this specific application, simple and accurate analytical expressions for the closed-loop eigenvalues and zeros in terms of basic parameters such as stability and control derivatives, structural vibration damping and natural frequency, and cost function weights are generated. These expressions explicitly indicate how the weights augment the short period and aeroelastic modes, as well as the closed-loop zeros, and by what physical mechanism. The analytical expressions are used to address topics such as damping, nonminimum phase behavior, stability, and performance with robustness considerations, and design modifications. This type of knowledge is invaluable to the flight control designer and would be more difficult to formulate when obtained from numerical-based sensitivity analysis.
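The linear-quadratic state-feedback machinery underlying the dissertation can be sketched numerically. The plant below is a hypothetical lightly damped two-state system with arbitrary weights (an assumption for illustration; it is not the dissertation's aeroelastic model): the algebraic Riccati equation yields the feedback gain, and the closed-loop eigenvalues show directly how the cost-function weights move the mode, the relationship the dissertation expresses analytically.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Hypothetical two-state plant (illustration only): x' = A x + B u.
A = np.array([[0.0, 1.0],
              [-4.0, -0.4]])   # lightly damped oscillatory mode
B = np.array([[0.0],
              [1.0]])
Q = np.diag([10.0, 1.0])       # state weights in the LQ cost
R = np.array([[1.0]])          # control weight

# Solve the continuous-time Riccati equation; form the gain u = -K x.
P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)

# Closed-loop eigenvalues show how the weights augment the mode.
eig_open = np.linalg.eigvals(A)
eig_closed = np.linalg.eigvals(A - B @ K)
print("open-loop poles:  ", eig_open)
print("closed-loop poles:", eig_closed)
```

Re-running with larger entries in Q is the numerical analogue of the dissertation's question: by what mechanism, and how strongly, do the weights shift the closed-loop eigenvalues?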
Isotope-ratio-monitoring gas chromatography-mass spectrometry: methods for isotopic calibration
NASA Technical Reports Server (NTRS)
Merritt, D. A.; Brand, W. A.; Hayes, J. M.
1994-01-01
In trial analyses of a series of n-alkanes, precise determinations of 13C contents were based on isotopic standards introduced by five different techniques and results were compared. Specifically, organic-compound standards were coinjected with the analytes and carried through chromatography and combustion with them; or CO2 was supplied from a conventional inlet and mixed with the analyte in the ion source; or CO2 was supplied from an auxiliary mixing volume and transmitted to the source without interruption of the analyte stream. Additionally, two techniques were investigated in which the analyte stream was diverted and CO2 standards were placed on a near-zero background. All methods provided accurate results. Where applicable, methods not involving interruption of the analyte stream provided the highest performance (σ = 0.00006 at.% 13C, or 0.06%, for 250 pmol C as CO2 reaching the ion source), but great care was required. Techniques involving diversion of the analyte stream were immune to interference from coeluting sample components and still provided high precision (0.0001 ≤ σ ≤ 0.0002 at.% or 0.1 ≤ σ ≤ 0.2%).
Analytical technique characterizes all trace contaminants in water
NASA Technical Reports Server (NTRS)
Foster, J. N.; Lysyj, I.; Nelson, K. H.
1967-01-01
Properly programmed combination of advanced chemical and physical analytical techniques characterize critically all trace contaminants in both the potable and waste water from the Apollo Command Module. This methodology can also be applied to the investigation of the source of water pollution.
Foster, Katherine T; Beltz, Adriene M
2018-08-01
Ambulatory assessment (AA) methodologies have the potential to increase understanding and treatment of addictive behavior in seemingly unprecedented ways, due in part, to their emphasis on intensive repeated assessments of an individual's addictive behavior in context. But, many analytic techniques traditionally applied to AA data - techniques that average across people and time - do not fully leverage this potential. In an effort to take advantage of the individualized, temporal nature of AA data on addictive behavior, the current paper considers three underutilized person-oriented analytic techniques: multilevel modeling, p-technique, and group iterative multiple model estimation. After reviewing prevailing analytic techniques, each person-oriented technique is presented, AA data specifications are mentioned, an example analysis using generated data is provided, and advantages and limitations are discussed; the paper closes with a brief comparison across techniques. Increasing use of person-oriented techniques will substantially enhance inferences that can be drawn from AA data on addictive behavior and has implications for the development of individualized interventions. Copyright © 2017. Published by Elsevier Ltd.
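As a minimal numerical illustration of the person-oriented idea (a hypothetical sketch with generated data, not the paper's worked example), the code below separates between-person from within-person variance in intensive repeated-assessment data via the intraclass correlation ICC(1), the quantity that motivates a multilevel model's random intercept.

```python
import numpy as np

rng = np.random.default_rng(0)

# Generated AA-style data (hypothetical): n_people each assessed
# k times; person-level means differ (between-person variance)
# on top of moment-to-moment noise (within-person variance).
n_people, k = 40, 20
person_mean = rng.normal(loc=5.0, scale=1.5, size=n_people)
y = person_mean[:, None] + rng.normal(scale=1.0, size=(n_people, k))

# One-way ANOVA estimator of ICC(1): the share of total variance
# attributable to stable differences between people.
grand = y.mean()
ms_between = k * np.sum((y.mean(axis=1) - grand) ** 2) / (n_people - 1)
ms_within = np.sum((y - y.mean(axis=1, keepdims=True)) ** 2) / (n_people * (k - 1))
icc = (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

print(f"estimated ICC(1) = {icc:.3f}  (simulated true value ~ {1.5**2 / (1.5**2 + 1.0):.3f})")
```

A technique that averages across people and time would discard exactly the person-level structure this statistic quantifies; the person-oriented methods reviewed in the paper model it explicitly.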
NASA Technical Reports Server (NTRS)
Migneault, G. E.
1979-01-01
Emulation techniques are proposed as a solution to a difficulty arising in the analysis of the reliability of highly reliable computer systems for future commercial aircraft. The difficulty, viz., the lack of credible precision in reliability estimates obtained by analytical modeling techniques, is established. The difficulty is shown to be an unavoidable consequence of: (1) a high reliability requirement so demanding as to make system evaluation by use testing infeasible; (2) a complex system design technique, fault tolerance; (3) system reliability dominated by errors due to flaws in the system definition; and (4) elaborate analytical modeling techniques whose precise outputs are quite sensitive to errors of approximation in their input data. The technique of emulation is described, indicating how its input is a simple description of the logical structure of a system and its output is the consequent behavior. The use of emulation techniques is discussed for pseudo-testing systems to evaluate bounds on the parameter values needed for the analytical techniques.
Byliński, Hubert; Gębicki, Jacek; Dymerski, Tomasz; Namieśnik, Jacek
2017-07-04
One of the major sources of error that occur during chemical analysis with the more conventional and established analytical techniques is the possibility of losing part of the analytes during the sample preparation stage. Unfortunately, this sample preparation stage is required to improve analytical sensitivity and precision. Direct techniques have helped to shorten or even bypass the sample preparation stage; in this review, we comment on some of the new direct techniques that are mass-spectrometry based. The study presents information about measurement techniques using mass spectrometry that allow direct sample analysis, without sample preparation or with only limited pre-concentration steps. MALDI-MS, PTR-MS, SIFT-MS, and DESI-MS techniques are discussed. These solutions have numerous applications in different fields of human activity due to their interesting properties. The advantages and disadvantages of these techniques are presented. Trends in the development of direct analysis using the aforementioned techniques are also presented.
Quantum-optical coherence tomography with classical light.
Lavoie, J; Kaltenbaek, R; Resch, K J
2009-03-02
Quantum-optical coherence tomography (Q-OCT) is an interferometric technique for axial imaging offering several advantages over conventional methods. Chirped-pulse interferometry (CPI) was recently demonstrated to exhibit all of the benefits of the quantum interferometer upon which Q-OCT is based. Here we use CPI to measure axial interferograms to profile a sample accruing the important benefits of Q-OCT, including automatic dispersion cancellation, but with 10 million times higher signal. Our technique solves the artifact problem in Q-OCT and highlights the power of classical correlation in optical imaging.
Woźniak, Krzysztof; Moskała, Artur; Urbanik, Andrzej; Kopacz, Paweł; Kłys, Małgorzata
2009-01-01
The techniques employed in "classic" forensic autopsy have been virtually unchanged for many years. One of the fundamental purposes of forensic documentation is to register as objectively as possible the changes found by forensic pathologists. The authors present the review of techniques of postmortem imaging studies, which aim not only at increased objectivity of observations, but also at extending the scope of the registered data. The paper is illustrated by images originating from research carried out by the authors.
Common aspects influencing the translocation of SERS to Biomedicine.
Gil, Pilar Rivera; Tsouts, Dionysia; Sanles-Sobrido, Marcos; Cabo, Andreu
2018-01-04
In this review, we introduce the reader to the analytical technique of surface-enhanced Raman scattering, motivated by the great potential we believe this technique has in biomedicine. We present the advantages and limitations of this technique relevant for bioanalysis in vitro and in vivo, and how it goes beyond the state of the art of traditional analytical, labelling, and healthcare diagnosis technologies. Copyright© Bentham Science Publishers; For any queries, please email at epub@benthamscience.org.
The problem of self-disclosure in psychoanalysis.
Meissner, W W
2002-01-01
The problem of self-disclosure is explored in relation to currently shifting paradigms of the nature of the analytic relation and analytic interaction. Relational and intersubjective perspectives emphasize the role of self-disclosure as not merely allowable, but as an essential facilitating aspect of the analytic dialogue, in keeping with the role of the analyst as a contributing partner in the process. At the opposite extreme, advocates of classical anonymity stress the importance of neutrality and abstinence. The paper seeks to chart a course between unconstrained self-disclosure and absolute anonymity, both of which foster misalliances. Self-disclosure is seen as at times contributory to the analytic process, and at times deleterious. The decision whether to self-disclose, what to disclose, and when and how, should be guided by the analyst's perspective on neutrality, conceived as a mental stance in which the analyst assesses and decides what, at any given point, seems to contribute to the analytic process and the patient's therapeutic benefit. The major risk in self-disclosure is the tendency to draw the analytic interaction into the real relation between analyst and patient, thus diminishing or distorting the therapeutic alliance, mitigating transference expression, and compromising therapeutic effectiveness.
Light aircraft crash safety program
NASA Technical Reports Server (NTRS)
Thomson, R. G.; Hayduk, R. J.
1974-01-01
NASA is embarked upon research and development tasks aimed at providing the general aviation industry with a reliable crashworthy airframe design technology. The goals of the NASA program are: reliable analytical techniques for predicting the nonlinear behavior of structures; significant design improvements of airframes; and simulated full-scale crash test data. The analytical tools will include both simplified procedures for estimating energy absorption characteristics and more complex computer programs for analysis of general airframe structures under crash loading conditions. The analytical techniques being developed both in-house and under contract are described, and a comparison of some analytical predictions with experimental results is shown.
1980-11-01
...to auto-ignite in color cinematography of the process. It appears the above interaction reduces classical wall quench (14) as the reaction continues... a vivid blue hue while the core reaction is white. Continuation of the reaction is seen in the first four frames of Fig. V-3; this figure covers the time...
Valuing (and Teaching) the Past
ERIC Educational Resources Information Center
Peart, Sandra J.; Levy, David M.
2005-01-01
There is a difference between the private and social cost of preserving the past. Although it may be privately rational to forget the past, the social cost is significant: We fail to see that classical political economy is analytically egalitarian. The past is a rich source of surprises and debates, and resources on the Web are uniquely suited to…
Developing Students' Ideas about Lens Imaging: Teaching Experiments with an Image-Based Approach
ERIC Educational Resources Information Center
Grusche, Sascha
2017-01-01
Lens imaging is a classic topic in physics education. To guide students from their holistic viewpoint to the scientists' analytic viewpoint, an image-based approach to lens imaging has recently been proposed. To study the effect of the image-based approach on undergraduate students' ideas, teaching experiments are performed and evaluated using…
COMPARING A NEW ALGORITHM WITH THE CLASSIC METHODS FOR ESTIMATING THE NUMBER OF FACTORS. (R826238)
This paper presents and compares a new algorithm for finding the number of factors in a data analytic model. After we describe the new method, called NUMFACT, we compare it with standard methods for finding the number of factors to use in a model. The standard methods that we ...
Peridynamic Modeling of Fracture and Failure of Materials
2013-08-02
is demonstrated through comparisons with classical laminate theory (CLT) and FEM analysis by considering laminates with complex layup under in-plane... is a symmetric cross-ply laminate with a layup of [0/90]S. For symmetric laminates, CLT predicts that there is no coupling between bending and... analytical results from the CLT in Figs. 5 and 6.
Surface-Enhanced Raman Spectroscopy.
ERIC Educational Resources Information Center
Garrell, Robin L.
1989-01-01
Reviews the basis for the technique and its experimental requirements. Describes a few examples of the analytical problems to which surface-enhanced Raman spectroscopy (SERS) has been and can be applied. Provides a perspective on the current limitations and frontiers in developing SERS as an analytical technique. (MVL)
Jabłońska-Czapla, Magdalena
2015-01-01
Chemical speciation is a very important subject in environmental protection, toxicology, and chemical analysis, because the toxicity, availability, and reactivity of trace elements depend on the chemical forms in which these elements occur. Research on low analyte levels, particularly in complex matrix samples, requires ever more advanced and sophisticated analytical methods and techniques. The latest trends in this field concern the so-called hyphenated techniques. Arsenic, antimony, chromium, and (underestimated) thallium attract the closest attention of toxicologists and analysts. The properties of those elements depend on the oxidation state in which they occur. The aim of the following paper is to answer the question why speciation analysis is so important. The paper also provides numerous examples of hyphenated technique usage (e.g., the LC-ICP-MS application in the speciation analysis of chromium, antimony, arsenic, or thallium in water and bottom sediment samples). An important issue addressed is the preparation of environmental samples for speciation analysis. PMID:25873962
Pandey, Khushaboo; Dubey, Rama Shankar; Prasad, Bhim Bali
2016-03-01
The most important objectives frequently found in bio-analytical chemistry involve applying tools to relevant medical/biological problems and refining these applications. Developing a reliable sample preparation step for the medical and biological fields is another primary objective in analytical chemistry, in order to extract and isolate the analytes of interest from complex biological matrices. The main inborn errors of metabolism (IEM) diagnosable through uracil analysis, and the therapeutic monitoring of toxic 5-fluorouracil (an important anti-cancer drug) in dihydropyrimidine dehydrogenase deficient patients, require ultra-sensitive, reproducible, selective, and accurate analytical techniques for their measurement. Keeping in view the diagnostic value of uracil and 5-fluorouracil measurements, this article therefore surveys several analytical techniques for the selective recognition and quantification of uracil and 5-fluorouracil in biological and pharmaceutical samples. The study revealed that implementing a molecularly imprinted polymer as a solid-phase material for sample preparation and preconcentration of uracil and 5-fluorouracil is effective, as it obviates problems related to tedious separation techniques, protein binding, and drastic interferences from the complex matrices of real samples such as blood plasma and serum.
Dietze, Klaas; Tucakov, Anna; Engel, Tatjana; Wirtz, Sabine; Depner, Klaus; Globig, Anja; Kammerer, Robert; Mouchantat, Susan
2017-01-05
Non-invasive sampling techniques based on the analysis of oral fluid specimens have gained substantial importance in the field of swine herd management. Methodological advances have focused on endemic viral diseases in commercial pig production. More recently, these approaches have been adapted to non-invasive sampling of wild boar for transboundary animal disease detection, for which such effective population-level sampling methods have not been available. In this study, a rope-in-a-bait based oral fluid sampling technique was tested to detect classical swine fever virus nucleic acid shedding from experimentally infected domestic pigs. The pigs were separated into two identically treated groups, between which the course of infection differed slightly in the onset of clinical signs and the levels of viral ribonucleic acid detected in blood and oral fluid. The technique was capable of detecting classical swine fever virus nucleic acid as of day 7 post infection, coinciding with the first detection in conventional oropharyngeal swab samples from some individual animals. Except for day 7 post infection in the "slower onset group", the chances of classical swine fever virus nucleic acid detection in ropes were equal to or higher than with individual sampling. With the provided evidence, non-invasive oral fluid sampling at group level can be considered an additional cost-effective detection tool in classical swine fever prevention and control strategies. The proposed methodology is of particular use in production systems with reduced access to veterinary services, such as backyard or scavenging pig production, where it can be integrated into feeding or baiting practices.
NASA Astrophysics Data System (ADS)
Chandramouli, Rajarathnam; Li, Grace; Memon, Nasir D.
2002-04-01
Steganalysis techniques attempt to differentiate between stego-objects and cover-objects. In recent work we developed an explicit analytic upper bound for the steganographic capacity of LSB based steganographic techniques for a given false probability of detection. In this paper we look at adaptive steganographic techniques. Adaptive steganographic techniques take explicit steps to escape detection. We explore different techniques that can be used to adapt message embedding to the image content or to a known steganalysis technique. We investigate the advantages of adaptive steganography within an analytical framework. We also give experimental results with a state-of-the-art steganalysis technique demonstrating that adaptive embedding results in a significant number of bits embedded without detection.
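For readers unfamiliar with the embedding operation to which the capacity bound applies, a minimal non-adaptive LSB scheme (a generic sketch, not the paper's adaptive techniques) can be written in a few lines:

```python
import numpy as np

def embed_lsb(pixels, bits):
    """Write message bits into the least significant bit of each pixel."""
    out = pixels.copy()
    out[: len(bits)] = (out[: len(bits)] & 0xFE) | bits
    return out

def extract_lsb(pixels, n_bits):
    """Read the first n_bits least significant bits back out."""
    return pixels[:n_bits] & 1

rng = np.random.default_rng(7)
cover = rng.integers(0, 256, size=64, dtype=np.uint8)   # toy cover signal
message = rng.integers(0, 2, size=16, dtype=np.uint8)   # 16 message bits

stego = embed_lsb(cover, message)
recovered = extract_lsb(stego, len(message))

# Embedding changes each used pixel by at most one gray level.
max_change = int(np.max(np.abs(stego.astype(int) - cover.astype(int))))
print("max pixel change:", max_change)
print("message recovered:", bool(np.array_equal(recovered, message)))
```

Adaptive variants of the kind studied in the paper would choose *which* pixels to modify based on local image content or on a known steganalysis statistic, rather than writing into a fixed prefix as this sketch does.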
Big data in medical science--a biostatistical view.
Binder, Harald; Blettner, Maria
2015-02-27
Inexpensive techniques for measurement and data storage now enable medical researchers to acquire far more data than can conveniently be analyzed by traditional methods. The expression "big data" refers to quantities on the order of magnitude of a terabyte (10^12 bytes); special techniques must be used to evaluate such huge quantities of data in a scientifically meaningful way. Whether data sets of this size are useful and important is an open question that currently confronts medical science. In this article, we give illustrative examples of the use of analytical techniques for big data and discuss them in the light of a selective literature review. We point out some critical aspects that should be considered to avoid errors when large amounts of data are analyzed. Machine learning techniques enable the recognition of potentially relevant patterns. When such techniques are used, certain additional steps should be taken that are unnecessary in more traditional analyses; for example, patient characteristics should be differentially weighted. If this is not done as a preliminary step before similarity detection, which is a component of many data analysis operations, characteristics such as age or sex will be weighted no higher than any one out of 10 000 gene expression values. Experience from the analysis of conventional observational data sets can be called upon to draw conclusions about potential causal effects from big data sets. Big data techniques can be used, for example, to evaluate observational data derived from the routine care of entire populations, with clustering methods used to analyze therapeutically relevant patient subgroups. Such analyses can provide complementary information to clinical trials of the classic type. As big data analyses become more popular, various statistical techniques for causality analysis in observational data are becoming more widely available.
This is likely to be of benefit to medical science, but specific adaptations will have to be made according to the requirements of the applications.
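The weighting caveat can be demonstrated with a toy similarity computation (hypothetical numbers, chosen only for illustration): without differential weighting, a standardized clinical characteristic such as age contributes no more to a Euclidean distance than any single one of 10,000 gene expression values.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two hypothetical patients: one clinical characteristic (age,
# standardized to a z-score like the genes) plus 10,000
# gene-expression values on a comparable numeric scale.
n_genes = 10_000
patient_a = np.concatenate(([-1.0], rng.normal(size=n_genes)))
patient_b = np.concatenate(([1.5], rng.normal(size=n_genes)))

diff_sq = (patient_a - patient_b) ** 2
share_age = diff_sq[0] / diff_sq.sum()
print(f"age's share of squared distance, unweighted: {share_age:.2%}")

# Differential weighting (here: simply up-weighting the clinical
# characteristic; an illustrative choice, not a recommendation)
# restores its influence on the similarity measure.
weights = np.ones_like(patient_a)
weights[0] = n_genes ** 0.5
diff_sq_w = (weights * (patient_a - patient_b)) ** 2
share_age_w = diff_sq_w[0] / diff_sq_w.sum()
print(f"age's share of squared distance, weighted:   {share_age_w:.2%}")
```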
WHAEM: PROGRAM DOCUMENTATION FOR THE WELLHEAD ANALYTIC ELEMENT MODEL
The Wellhead Analytic Element Model (WhAEM) demonstrates a new technique for the definition of time-of-travel capture zones in relatively simple geohydrologic settings. The WhAEM package includes an analytic element model that uses superposition of (many) analytic solutions to gen...
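The superposition idea behind analytic element models can be sketched with the textbook building block of capture-zone analysis: an extraction well in uniform regional flow (a generic illustration with assumed parameters; this is not WhAEM code). Adding the two analytic velocity fields gives a stagnation point at x = Q/(2*pi*U) downgradient of the well, which bounds the capture zone.

```python
import numpy as np

# Assumed parameters (illustration only): uniform regional flow of
# specific discharge U in +x, plus an extraction well at the origin
# pumping at rate Q per unit aquifer thickness.
U, Q = 1.0e-5, 2.0e-4          # m/s and m^2/s

def velocity(x, y):
    """Superposition of two analytic elements:
    uniform flow + radial flow into the well."""
    r2 = x**2 + y**2
    vx = U - Q * x / (2 * np.pi * r2)
    vy = -Q * y / (2 * np.pi * r2)
    return vx, vy

# Analytic stagnation point bounding the capture zone.
x_stag = Q / (2 * np.pi * U)
vx, vy = velocity(x_stag, 0.0)
print(f"stagnation point at x = {x_stag:.3f} m; velocity there = ({vx:.2e}, {vy:.2e})")

# Far upstream, the captured strip has half-width Q / (2 U).
half_width = Q / (2 * U)
print(f"asymptotic capture-zone half-width = {half_width:.2f} m")
```

WhAEM assembles many such elements (wells, line sinks, and so on) and traces particles through the superposed field to delineate time-of-travel capture zones; this sketch shows only the single-element case.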
Examples of Complete Solvability of 2D Classical Superintegrable Systems
NASA Astrophysics Data System (ADS)
Chen, Yuxuan; Kalnins, Ernie G.; Li, Qiushi; Miller, Willard, Jr.
2015-11-01
Classical (maximal) superintegrable systems in n dimensions are Hamiltonian systems with 2n-1 independent constants of the motion, globally defined, the maximum number possible. They are very special because they can be solved algebraically. In this paper we show explicitly, mostly through examples of 2nd order superintegrable systems in 2 dimensions, how the trajectories can be determined in detail using rather elementary algebraic, geometric and analytic methods applied to the closed quadratic algebra of symmetries of the system, without resorting to separation of variables techniques or trying to integrate Hamilton's equations. We treat a family of 2nd order degenerate systems: oscillator analogies on Darboux, nonzero constant curvature, and flat spaces, related to one another via contractions, and obeying Kepler's laws. Then we treat two 2nd order nondegenerate systems, an analogy of a caged Coulomb problem on the 2-sphere and its contraction to a Euclidean space caged Coulomb problem. In all cases the symmetry algebra structure provides detailed information about the trajectories, some of which are rather complicated. An interesting example is the occurrence of "metronome orbits", trajectories confined to an arc rather than a loop, which are indicated clearly from the structure equations but might be overlooked using more traditional methods. We also treat the Post-Winternitz system, an example of a classical 4th order superintegrable system that cannot be solved using separation of variables. Finally we treat a superintegrable system, related to the addition theorem for elliptic functions, whose constants of the motion are only rational in the momenta. It is a system of special interest because its constants of the motion generate a closed polynomial algebra. This paper contains many new results but we have tried to present most of the material in a fashion that is easily accessible to nonexperts, in order to provide entrée to superintegrability theory.
Quantum Corrections in Nanoplasmonics: Shape, Scale, and Material
NASA Astrophysics Data System (ADS)
Christensen, Thomas; Yan, Wei; Jauho, Antti-Pekka; Soljačić, Marin; Mortensen, N. Asger
2017-04-01
The classical treatment of plasmonics is insufficient at the nanometer-scale due to quantum mechanical surface phenomena. Here, an extension of the classical paradigm is reported which rigorously remedies this deficiency through the incorporation of first-principles surface response functions—the Feibelman d parameters—in general geometries. Several analytical results for the leading-order plasmonic quantum corrections are obtained in a first-principles setting; particularly, a clear separation of the roles of shape, scale, and material is established. The utility of the formalism is illustrated by the derivation of a modified sum rule for complementary structures, a rigorous reformulation of Kreibig's phenomenological damping prescription, and an account of the small-scale resonance shifting of simple and noble metal nanostructures.
Exact Extremal Statistics in the Classical 1D Coulomb Gas
NASA Astrophysics Data System (ADS)
Dhar, Abhishek; Kundu, Anupam; Majumdar, Satya N.; Sabhapandit, Sanjib; Schehr, Grégory
2017-08-01
We consider a one-dimensional classical Coulomb gas of N -like charges in a harmonic potential—also known as the one-dimensional one-component plasma. We compute, analytically, the probability distribution of the position xmax of the rightmost charge in the limit of large N . We show that the typical fluctuations of xmax around its mean are described by a nontrivial scaling function, with asymmetric tails. This distribution is different from the Tracy-Widom distribution of xmax for Dyson's log gas. We also compute the large deviation functions of xmax explicitly and show that the system exhibits a third-order phase transition, as in the log gas. Our theoretical predictions are verified numerically.
A spatially homogeneous and isotropic Einstein-Dirac cosmology
NASA Astrophysics Data System (ADS)
Finster, Felix; Hainzl, Christian
2011-04-01
We consider a spatially homogeneous and isotropic cosmological model where Dirac spinors are coupled to classical gravity. For the Dirac spinors we choose a Hartree-Fock ansatz where all one-particle wave functions are coherent and have the same momentum. If the scale function is large, the universe behaves like the classical Friedmann dust solution. If however the scale function is small, quantum effects lead to oscillations of the energy-momentum tensor. It is shown numerically and proven analytically that these quantum oscillations can prevent the formation of a big bang or big crunch singularity. The energy conditions are analyzed. We prove the existence of time-periodic solutions which go through an infinite number of expansion and contraction cycles.
Calculations of Total Classical Cross Sections for a Central Field
NASA Astrophysics Data System (ADS)
Tsyganov, D. L.
2018-07-01
In order to find the total collision cross-section, a direct effective-potential method (EPM) in the framework of classical mechanics is proposed. The EPM makes it possible to overcome both the direct scattering problem (calculation of the total collision cross-section) and the inverse scattering problem (reconstruction of the scattering potential) quickly and effectively. A general analytical expression is proposed for the generalized Lennard-Jones potentials: (6-3), (9-3), (12-3), (6-4), (8-4), (12-4), (8-6), (12-6), (18-6). Total cross-section values were obtained in good approximation for pairs such as electron-N2, N-N, and O-O2.
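The effective-potential idea behind such methods can be sketched for the familiar (12-6) Lennard-Jones case (a generic illustration with assumed dimensionless parameters, not the paper's analytical expression): adding the centrifugal term E*b^2/r^2 to V(r) produces, for suitable impact parameters b, a barrier whose height relative to the collision energy E determines whether the incoming particle can penetrate to small separations.

```python
import numpy as np

# Assumed (12-6) Lennard-Jones parameters and collision energy
# (dimensionless illustration): V(r) = 4*eps*((sigma/r)**12 - (sigma/r)**6).
eps, sigma, E = 1.0, 1.0, 0.5

def v_eff(r, b):
    """Effective radial potential: interaction + centrifugal term E*b^2/r^2."""
    lj = 4 * eps * ((sigma / r) ** 12 - (sigma / r) ** 6)
    return lj + E * b**2 / r**2

# Search outside the repulsive core (r beyond the well minimum)
# for the centrifugal barrier at each impact parameter.
r = np.linspace(1.2, 6.0, 5000)
for b in (1.0, 1.5, 2.0, 2.5):
    barrier = v_eff(r, b).max()
    tag = "penetrates" if barrier < E else "reflected at barrier"
    print(f"b = {b:.1f}: barrier = {barrier: .3f}  ->  {tag}")
```

The critical impact parameter separating the two regimes is what an effective-potential analysis feeds into the total cross-section.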
Berry phase and Hannay angle of an interacting boson system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, S. C.; Graduate School, China Academy of Engineering Physics, Beijing 100088; Liu, J.
2011-04-15
In the present paper, we investigate the Berry phase and the Hannay angle of an interacting two-mode boson system and obtain their analytic expressions in explicit forms. The relation between the Berry phase and the Hannay angle is discussed. We find that, in the large-particle-number limit, the classical Hannay angle equals the particle number derivative of the quantum Berry phase except for a sign. This relationship is applicable to other many-body boson systems where the coherent-state description is available and the total particle number is conserved. The measurement of the classical Hannay angle in the many-body systems is briefly discussed as well.
A finite-element method for large-amplitude, two-dimensional panel flutter at hypersonic speeds
NASA Technical Reports Server (NTRS)
Mei, Chuh; Gray, Carl E.
1989-01-01
The nonlinear flutter behavior of a two-dimensional panel in hypersonic flow is investigated analytically. An FEM formulation based on unsteady third-order piston theory (Ashley and Zartarian, 1956; McIntosh, 1970) and taking nonlinear structural and aerodynamic phenomena into account is derived; the solution procedure is outlined; and typical results are presented in extensive tables and graphs. A 12-element finite-element solution obtained using an alternative method for linearizing the assumed limit-cycle time function is shown to give predictions in good agreement with classical analytical results for large-amplitude vibration in a vacuum and large-amplitude panel flutter, using linear aerodynamics.
On Boundaries of the Language of Physics
NASA Astrophysics Data System (ADS)
Kvasz, Ladislav
The aim of the present paper is to outline a method of reconstruction of the historical development of the language of physical theories. We will apply the theory presented in Patterns of Change, Linguistic Innovations in the Development of Classical Mathematics to the analysis of linguistic innovations in physics. Our method is based on a reconstruction of the following potentialities of language: analytical power, expressive power, integrative power, and explanatory power, as well as analytical boundaries and expressive boundaries. One of the results of our reconstruction is a new interpretation of Kant's antinomies of pure reason. If we relate Kant's antinomies to the language, they retain validity.
Modified harmonic balance method for the solution of nonlinear jerk equations
NASA Astrophysics Data System (ADS)
Rahman, M. Saifur; Hasan, A. S. M. Z.
2018-03-01
In this paper, a second-order approximate solution of nonlinear jerk equations (third-order differential equations) is obtained using a modified harmonic balance method. The method is simpler and easier to carry out than the classical harmonic balance method because fewer nonlinear algebraic equations need to be solved. The results obtained from this method are compared with those obtained from other analytical methods available in the literature and with numerical solutions. The solution shows good agreement with the numerical solution as well as with the analytical methods in the literature.
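Harmonic-balance approximations of this kind are typically validated against direct numerical integration. The sketch below integrates a jerk equation x''' = j(x, x', x'') with classical RK4; the specific linear equation used in the demo line is an illustrative choice, not the paper's test case.

```python
import math

def rk4_jerk(jerk, x0, v0, a0, dt, steps):
    """Integrate x''' = jerk(x, x', x'') by classical fourth-order
    Runge-Kutta on the first-order system (x, v, a)."""
    def f(s):
        x, v, a = s
        return (v, a, jerk(x, v, a))
    s = (x0, v0, a0)
    for _ in range(steps):
        k1 = f(s)
        k2 = f(tuple(si + 0.5 * dt * ki for si, ki in zip(s, k1)))
        k3 = f(tuple(si + 0.5 * dt * ki for si, ki in zip(s, k2)))
        k4 = f(tuple(si + dt * ki for si, ki in zip(s, k3)))
        s = tuple(si + dt / 6.0 * (a1 + 2 * a2 + 2 * a3 + a4)
                  for si, a1, a2, a3, a4 in zip(s, k1, k2, k3, k4))
    return s

# Demo: the linear jerk equation x''' = -x' with x(0)=0, x'(0)=1, x''(0)=0
# has the exact solution x = sin t, so x(pi) = 0 and x'(pi) = -1.
x_pi, v_pi, a_pi = rk4_jerk(lambda x, v, a: -v, 0.0, 1.0, 0.0,
                            math.pi / 1000, 1000)
```

In practice one would compare the amplitude-frequency relation from the harmonic balance solution against such an integration of the nonlinear equation.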
Multi-Intelligence Analytics for Next Generation Analysts (MIAGA)
NASA Astrophysics Data System (ADS)
Blasch, Erik; Waltz, Ed
2016-05-01
Current analysts are inundated with large volumes of data from which extraction, exploitation, and indexing are required. A future need for next-generation analysts is an appropriate balance between machine analytics from raw data and the ability of the user to interact with information through automation. Many quantitative intelligence tools and techniques have been developed, and they are examined here with a view to matching analyst opportunities with recent technical trends such as big data, access to information, and visualization. The concepts and techniques summarized are derived from discussions with real analysts, documented trends of technical developments, and methods to engage future analysts with multi-intelligence services. For example, qualitative techniques should be matched against physical, cognitive, and contextual quantitative analytics for intelligence reporting. Future trends include enabling knowledge search, collaborative situational sharing, and agile support for empirical decision-making and analytical reasoning.
Resonance Ionization, Mass Spectrometry.
ERIC Educational Resources Information Center
Young, J. P.; And Others
1989-01-01
Discussed is an analytical technique that uses photons from lasers to resonantly excite an electron from some initial state of a gaseous atom through various excited states of the atom or molecule. Described are the apparatus, some analytical applications, and the precision and accuracy of the technique. Lists 26 references. (CW)
Meta-Analytic Structural Equation Modeling (MASEM): Comparison of the Multivariate Methods
ERIC Educational Resources Information Center
Zhang, Ying
2011-01-01
Meta-analytic Structural Equation Modeling (MASEM) has drawn interest from many researchers recently. In doing MASEM, researchers usually first synthesize correlation matrices across studies using meta-analysis techniques and then analyze the pooled correlation matrix using structural equation modeling techniques. Several multivariate methods of…
Turbine blade tip durability analysis
NASA Technical Reports Server (NTRS)
Mcknight, R. L.; Laflen, J. H.; Spamer, G. T.
1981-01-01
An air-cooled turbine blade from an aircraft gas turbine engine, chosen for its history of cracking, was subjected to advanced analytical and life-prediction techniques. The utility of advanced structural analysis and life-prediction techniques in the life assessment of hot-section components is verified. Three-dimensional heat transfer and stress analyses were applied to the turbine blade mission cycle, and the results were input into advanced life-prediction theories. Shortcut analytical techniques were developed, and the proposed life-prediction theories are evaluated.
Analytical Challenges in Biotechnology.
ERIC Educational Resources Information Center
Glajch, Joseph L.
1986-01-01
Highlights five major analytical areas (electrophoresis, immunoassay, chromatographic separations, protein and DNA sequencing, and molecular structures determination) and discusses how analytical chemistry could further improve these techniques and thereby have a major impact on biotechnology. (JN)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ciotti, Luca; Pellegrini, Silvia, E-mail: luca.ciotti@unibo.it
One of the most active fields of research of modern-day astrophysics is that of massive black hole formation and coevolution with the host galaxy. In these investigations, ranging from cosmological simulations, to semi-analytical modeling, to observational studies, the Bondi solution for accretion on a central point-mass is widely adopted. In this work we generalize the classical Bondi accretion theory to take into account the effects of the gravitational potential of the host galaxy, and of radiation pressure in the optically thin limit. Then, we present the fully analytical solution, in terms of the Lambert–Euler W-function, for isothermal accretion in Jaffe and Hernquist galaxies with a central black hole. The flow structure is found to be sensitive to the shape of the mass profile of the host galaxy. These results and the formulae that are provided, most importantly the one for the critical accretion parameter, allow for a direct evaluation of all flow properties, and are then useful for the abovementioned studies. As an application, we examine the departure from the true mass accretion rate of estimates obtained using the gas properties at various distances from the black hole, under the hypothesis of classical Bondi accretion. An overestimate is obtained from regions close to the black hole, and an underestimate outside a few Bondi radii; the exact position of the transition between the two kinds of departure depends on the galaxy model.
An analytical and experimental evaluation of a Fresnel lens solar concentrator
NASA Technical Reports Server (NTRS)
Hastings, L. J.; Allums, S. A.; Cosby, R. M.
1976-01-01
Line-focusing Fresnel lenses with application potential in the 200 to 370 C range were evaluated analytically and experimentally. Analytical techniques were formulated to assess the solar transmission and imaging properties of a grooves-down lens. Experimentation was based on a 56 cm wide, f/1.0 lens. A Sun-tracking heliostat provided a nonmoving solar source. Measured data indicated more spreading at the profile base than analytically predicted, resulting in a peak concentration 18 percent lower than the computed peak of 57. The measured and computed transmittances were 85 and 87 percent, respectively. Preliminary testing with a subsequent lens indicated that modified manufacturing techniques corrected the profile-spreading problem and should enable improved analytical-experimental correlation.
Deriving Earth Science Data Analytics Requirements
NASA Technical Reports Server (NTRS)
Kempler, Steven J.
2015-01-01
Data analytics applications have made successful strides in the business world, where co-analyzing extremely large sets of independent variables has proven profitable. Today, most data analytics tools and techniques, sometimes applicable to Earth science, have targeted the business industry. In fact, the literature is nearly absent of discussion about Earth science data analytics. Earth science data analytics (ESDA) is the process of examining large amounts of data from a variety of sources to uncover hidden patterns, unknown correlations, and other useful information. ESDA is most often applied to data preparation, data reduction, and data analysis. Co-analysis of an increasing number and volume of Earth science data has become more prevalent, ushered in by the plethora of Earth science data sources generated by US programs, international programs, field experiments, ground stations, and citizen scientists. Through work associated with the Earth Science Information Partners (ESIP) Federation, ESDA types have been defined in terms of data analytics end goals, which are very different from those in business and require different tools and techniques. A sampling of use cases has been collected and analyzed in terms of data analytics end goal types, volume, specialized processing, and other attributes. The goal of collecting these use cases is to better understand and specify requirements for data analytics tools and techniques yet to be implemented. This presentation will describe the attributes and preliminary findings of ESDA use cases, as well as provide early analysis of data analytics tool/technique requirements that would support specific ESDA type goals. Representative existing data analytics tools/techniques relevant to ESDA will also be addressed.
Dielectrophoretic label-free immunoassay for rare-analyte quantification in biological samples
NASA Astrophysics Data System (ADS)
Velmanickam, Logeeshan; Laudenbach, Darrin; Nawarathna, Dharmakeerthi
2016-10-01
The current gold standard for detecting or quantifying target analytes from blood samples is the ELISA (enzyme-linked immunosorbent assay). The detection limit of ELISA is about 250 pg/ml. However, quantifying analytes related to various stages of tumors, including early detection, requires detecting well below the current limit of the ELISA test. For example, Interleukin 6 (IL-6) levels of early oral cancer patients are <100 pg/ml, and the prostate-specific antigen level in early-stage prostate cancer is about 1 ng/ml. Further, it has been reported that analyte levels in the early stage of tumors can be significantly below 1 pg/ml. Therefore, depending on the tumor type and the stage of the tumor, it is necessary to quantify analyte levels ranging from ng/ml to pg/ml. To accommodate these critical needs in current diagnosis, there is a need for a technique that has a large dynamic range with an ability to detect extremely low levels of target analytes (
Seeberg, Trine M.; Tjønnås, Johannes; Haugnes, Pål; Sandbakk, Øyvind
2017-01-01
The automatic classification of sub-techniques in classical cross-country skiing provides unique possibilities for analyzing the biomechanical aspects of outdoor skiing. This is currently possible due to the miniaturization and flexibility of wearable inertial measurement units (IMUs) that allow researchers to bring the laboratory to the field. In this study, we aimed to optimize the accuracy of the automatic classification of classical cross-country skiing sub-techniques by using two IMUs attached to the skier’s arm and chest together with a machine learning algorithm. The novelty of our approach is the reliable detection of individual cycles using a gyroscope on the skier’s arm, while a neural network machine learning algorithm robustly classifies each cycle to a sub-technique using sensor data from an accelerometer on the chest. In this study, 24 datasets from 10 different participants were separated into the categories training-, validation- and test-data. Overall, we achieved a classification accuracy of 93.9% on the test-data. Furthermore, we illustrate how an accurate classification of sub-techniques can be combined with data from standard sports equipment including position, altitude, speed and heart rate measuring systems. Combining this information has the potential to provide novel insight into physiological and biomechanical aspects valuable to coaches, athletes and researchers. PMID:29283421
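The cycle-segmentation step described above can be sketched simply. The zero-crossing rule, axis choice, and threshold below are assumptions standing in for the paper's gyroscope-based detector, and the neural-network classification of each cycle is omitted:

```python
import math

def detect_cycles(gyro, threshold=0.0):
    """Split a 1-D arm-gyroscope trace into movement cycles at
    negative-to-positive threshold crossings; each cycle spans two
    consecutive crossing indices."""
    starts = [i for i in range(1, len(gyro))
              if gyro[i - 1] < threshold <= gyro[i]]
    return list(zip(starts, starts[1:]))

# Demo: a sine with a 100-sample period over 300 samples has interior
# upward crossings near samples 101 and 201, i.e. one complete cycle.
trace = [math.sin(2 * math.pi * t / 100) for t in range(300)]
cycles = detect_cycles(trace)
```

In a full pipeline each `(start, end)` slice of the chest-accelerometer data would then be featurized and fed to the classifier.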
Delgado-García, José M; Gruart, Agnès
2008-12-01
The availability of transgenic mice mimicking selective human neurodegenerative and psychiatric disorders calls for new electrophysiological and microstimulation techniques capable of being applied in vivo in this species. In this article, we concentrate on experiments and techniques developed in our laboratory during the past few years. We have developed different techniques for the study of the learning and memory capabilities of wild-type and transgenic mice with deficits in cognitive functions, using classical conditioning procedures. These techniques include different trace (tone/shock and shock/shock) conditioning procedures, that is, classical conditioning tasks involving the cerebral cortex, including the hippocampus. We have also developed implantation and recording techniques for evoking long-term potentiation (LTP) in behaving mice and for recording the evolution of field excitatory postsynaptic potentials (fEPSPs) evoked in the hippocampal CA1 area by electrical stimulation of the commissural/Schaffer collateral pathway across conditioning sessions. Computer programs have also been developed to quantify the appearance and evolution of eyelid conditioned responses and the slope of evoked fEPSPs. According to the present results, the in vivo recording of the electrical activity of selected hippocampal sites during classical conditioning of eyelid responses appears to be a suitable experimental procedure for studying learning capabilities in genetically modified mice, and an excellent model for the study of selected neuropsychiatric disorders compromising cerebral cortex function.
Assessing the Value of Structured Analytic Techniques in the U.S. Intelligence Community
2016-01-01
Analytic Techniques, and Why Do Analysts Use Them? SATs are methods of organizing and stimulating thinking about intelligence problems. These methods... thinking; and imaginative thinking techniques encourage new perspectives, insights, and alternative scenarios. Among the many SATs in use today, the... more transparent, so that other analysts and customers can better understand how the judgments were reached. SATs also facilitate group involvement
Three Perspectives on: Children's Classics in a Non-Classical Age
ERIC Educational Resources Information Center
Fadiman, Clifton
1972-01-01
Along with pioneering thrusts into new thematic territory for children's literature has come experimentation in form, style, and technique, even more marked in the field of illustration than in verbal narrative. This article serves as an introduction to contributions by English, French and American experts on children's literature. (Author/SJ)
NASA Astrophysics Data System (ADS)
Trappe, Neil; Murphy, J. Anthony; Withington, Stafford
2003-07-01
Gaussian beam mode analysis (GBMA) offers a more intuitive physical insight into how light beams evolve as they propagate than the conventional Fresnel diffraction integral approach. In this paper we illustrate that GBMA is a computationally efficient, alternative technique for tracing the evolution of a diffracting coherent beam. In previous papers we demonstrated the straightforward application of GBMA to the computation of the classical diffraction patterns associated with a range of standard apertures. In this paper we show how the GBMA technique can be expanded to investigate the effects of aberrations in the presence of diffraction by introducing the appropriate phase error term into the propagating quasi-optical beam. We compare our technique to the standard diffraction integral calculation for coma, astigmatism and spherical aberration, taking—for comparison—examples from the classic text 'Principles of Optics' by Born and Wolf. We show the advantages of GBMA for allowing the defocusing of an aberrated image to be evaluated quickly, which is particularly important and useful for probing the consequences of astigmatism and spherical aberration.
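As a concrete (and deliberately simplified) illustration of the mode-expansion idea behind GBMA, the sketch below computes 1-D Hermite-Gauss mode coefficients of a slit (top-hat) aperture field by numerical overlap integrals. The waist, integration window, and mode count are arbitrary assumptions, and propagation (per-mode Gouy phase slippage) and aberration phase terms are not shown:

```python
import math

def hermite(n, x):
    """Physicists' Hermite polynomial H_n(x) via the three-term recurrence."""
    h_prev, h = 1.0, 2.0 * x
    if n == 0:
        return h_prev
    for k in range(1, n):
        h_prev, h = h, 2.0 * x * h - 2.0 * k * h_prev
    return h

def hg_mode(n, x, w=1.0):
    """1-D Hermite-Gauss mode of waist w, normalized to unit power."""
    norm = (2.0 / math.pi) ** 0.25 / math.sqrt(2.0 ** n * math.factorial(n) * w)
    return norm * hermite(n, math.sqrt(2.0) * x / w) * math.exp(-(x / w) ** 2)

def mode_coefficients(field, n_modes, w=1.0, half_window=8.0, steps=4001):
    """Overlap integrals c_n of field(x) with psi_n(x), by the trapezoid rule."""
    xs = [-half_window + 2.0 * half_window * i / (steps - 1)
          for i in range(steps)]
    dx = xs[1] - xs[0]
    coeffs = []
    for n in range(n_modes):
        vals = [field(x) * hg_mode(n, x, w) for x in xs]
        coeffs.append(dx * (sum(vals) - 0.5 * (vals[0] + vals[-1])))
    return coeffs

# Top-hat (slit) aperture of half-width 1: by symmetry only even modes carry
# power, and the captured power sum(c_n**2) approaches the field power (= 2)
# from below as more modes are kept.
slit = lambda x: 1.0 if abs(x) <= 1.0 else 0.0
c = mode_coefficients(slit, 40)
```

The diffracted far field would then follow by re-summing the modes with their propagated phases, which is the computational shortcut the paper exploits.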
NASA Astrophysics Data System (ADS)
Kreis, Karsten; Kremer, Kurt; Potestio, Raffaello; Tuckerman, Mark E.
2017-12-01
Path integral-based methodologies play a crucial role for the investigation of nuclear quantum effects by means of computer simulations. However, these techniques are significantly more demanding than corresponding classical simulations. To reduce this numerical effort, we recently proposed a method, based on a rigorous Hamiltonian formulation, which restricts the quantum modeling to a small but relevant spatial region within a larger reservoir where particles are treated classically. In this work, we extend this idea and show how it can be implemented along with state-of-the-art path integral simulation techniques, including path-integral molecular dynamics, which allows for the calculation of quantum statistical properties, and ring-polymer and centroid molecular dynamics, which allow the calculation of approximate quantum dynamical properties. To this end, we derive a new integration algorithm that also makes use of multiple time-stepping. The scheme is validated via adaptive classical-path-integral simulations of liquid water. Potential applications of the proposed multiresolution method are diverse and include efficient quantum simulations of interfaces as well as complex biomolecular systems such as membranes and proteins.
40 CFR Table 4 to Subpart Zzzz of... - Requirements for Performance Tests
Code of Federal Regulations, 2012 CFR
2012-07-01
... D6348-03,c provided in ASTM D6348-03 Annex A5 (Analyte Spiking Technique), the percent R must be greater... ASTM D6348-03,c provided in ASTM D6348-03 Annex A5 (Analyte Spiking Technique), the percent R must be...
40 CFR Table 4 to Subpart Zzzz of... - Requirements for Performance Tests
Code of Federal Regulations, 2011 CFR
2011-07-01
... D6348-03,c provided in ASTM D6348-03 Annex A5 (Analyte Spiking Technique), the percent R must be greater... ASTM D6348-03,c provided in ASTM D6348-03 Annex A5 (Analyte Spiking Technique), the percent R must be...
Analytical aids in land management planning
David R. Betters
1978-01-01
Quantitative techniques may be applied to aid in completing various phases of land management planning. Analytical procedures which have been used include a procedure for public involvement, PUBLIC; a matrix information generator, MAGE5; an allocation procedure, linear programming (LP); and an input-output economic analysis (EA). These techniques have proven useful in...
Koczkodaj, Dorota; Popek, Sylwia; Zmorzyński, Szymon; Wąsik-Szczepanek, Ewa; Filip, Agata A
2016-04-01
One of the research methods of prognostic value in chronic lymphocytic leukemia (CLL) is cytogenetic analysis. This method requires the presence of appropriate B-cell mitogens in cultures in order to obtain a high mitotic index. The aim of our research was to determine the most effective methods of in vitro B-cell stimulation to maximize the number of metaphases from peripheral blood cells of patients with CLL for classical cytogenetic examination, and then to correlate the results with those obtained using fluorescence in situ hybridization (FISH). The study group involved 50 consecutive patients with CLL. Cell cultures were maintained with the basic composition of culture medium and the addition of the respective stimulators: Pokeweed Mitogen (PWM), 12-O-tetradecanoylphorbol 13-acetate (TPA), ionophore (I), lipopolysaccharide (LPS), and the CpG-oligonucleotide DSP30. We obtained the highest mitotic index when using the mixture PWM+TPA+I+DSP30. With classical cytogenetic tests using banding techniques, numerical and structural chromosome aberrations were detected in 46 patients; no changes were found in only four patients. The test results clearly confirmed the legitimacy of using cell cultures enriched with the mixture of cell stimulators and of combining classical cytogenetic techniques with the FISH technique in later patient diagnosis. Copyright © 2016 American Federation for Medical Research.
NASA Astrophysics Data System (ADS)
Coughlin, J.; Mital, R.; Nittur, S.; SanNicolas, B.; Wolf, C.; Jusufi, R.
2016-09-01
Operational analytics when combined with Big Data technologies and predictive techniques have been shown to be valuable in detecting mission critical sensor anomalies that might be missed by conventional analytical techniques. Our approach helps analysts and leaders make informed and rapid decisions by analyzing large volumes of complex data in near real-time and presenting it in a manner that facilitates decision making. It provides cost savings by being able to alert and predict when sensor degradations pass a critical threshold and impact mission operations. Operational analytics, which uses Big Data tools and technologies, can process very large data sets containing a variety of data types to uncover hidden patterns, unknown correlations, and other relevant information. When combined with predictive techniques, it provides a mechanism to monitor and visualize these data sets and provide insight into degradations encountered in large sensor systems such as the space surveillance network. In this study, data from a notional sensor is simulated and we use big data technologies, predictive algorithms and operational analytics to process the data and predict sensor degradations. This study uses data products that would commonly be analyzed at a site. This study builds on a big data architecture that has previously been proven valuable in detecting anomalies. This paper outlines our methodology of implementing an operational analytic solution through data discovery, learning and training of data modeling and predictive techniques, and deployment. Through this methodology, we implement a functional architecture focused on exploring available big data sets and determine practical analytic, visualization, and predictive technologies.
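The alert-when-a-degradation-passes-a-threshold step described here can be reduced to a minimal sketch. A trailing rolling baseline with a k-sigma threshold stands in for the study's predictive algorithms; the window size and k are illustrative choices, not values from the paper:

```python
from statistics import mean, stdev

def degradation_alerts(readings, window=20, k=3.0):
    """Flag indices where a sensor reading departs from its trailing
    rolling mean by more than k rolling (sample) standard deviations."""
    alerts = []
    for i in range(window, len(readings)):
        baseline = readings[i - window:i]
        mu, sd = mean(baseline), stdev(baseline)
        if sd > 0 and abs(readings[i] - mu) > k * sd:
            alerts.append(i)
    return alerts
```

In an operational-analytics deployment the same logic would run over streaming data products, with the flagged indices driving visualization and alerting.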
Westenberger, Benjamin J; Ellison, Christopher D; Fussner, Andrew S; Jenney, Susan; Kolinski, Richard E; Lipe, Terra G; Lyon, Robbe C; Moore, Terry W; Revelle, Larry K; Smith, Anjanette P; Spencer, John A; Story, Kimberly D; Toler, Duckhee Y; Wokovich, Anna M; Buhse, Lucinda F
2005-12-08
This work investigated the use of non-traditional analytical methods to evaluate the quality of a variety of pharmaceutical products purchased via internet sites from foreign sources and compared the results with those obtained from conventional quality assurance methods. Traditional analytical techniques employing HPLC for potency, content uniformity, chromatographic purity and drug release profiles were used to evaluate the quality of five selected drug products (fluoxetine hydrochloride, levothyroxine sodium, metformin hydrochloride, phenytoin sodium, and warfarin sodium). Non-traditional techniques, such as near infrared spectroscopy (NIR), NIR imaging and thermogravimetric analysis (TGA), were employed to verify the results and investigate their potential as alternative testing methods. Two of 20 samples failed USP monographs for quality attributes. The additional analytical methods found 11 of 20 samples had different formulations when compared to the U.S. product. Seven of the 20 samples arrived in questionable containers, and 19 of 20 had incomplete labeling. Only 1 of the 20 samples had final packaging similar to the U.S. products. The non-traditional techniques complemented the traditional techniques used and highlighted additional quality issues for the products tested. For example, these methods detected suspect manufacturing issues (such as blending), which were not evident from traditional testing alone.
ERIC Educational Resources Information Center
Toh, Chee-Seng
2007-01-01
A project is described which incorporates nonlaboratory research skills in a graduate level course on analytical chemistry. This project will help students to grasp the basic principles and concepts of modern analytical techniques and also help them develop relevant research skills in analytical chemistry.
Stiffness of frictional contact of dissimilar elastic solids
Lee, Jin Haeng; Gao, Yanfei; Bower, Allan F.; ...
2017-12-22
The classic Sneddon relationship between the normal contact stiffness and the contact size is valid for axisymmetric, frictionless contact, in which the two contacting solids are approximated by elastic half-spaces. Deviation from this result critically affects the accuracy of the load and displacement sensing nanoindentation techniques. This study gives a thorough numerical and analytical investigation of corrections needed to the Sneddon solution when finite Coulomb friction exists between an elastic half-space and a flat-ended rigid punch with circular or noncircular shape. Because of linearity of the Coulomb friction, the correction factor is found to be a function of the friction coefficient, Poisson's ratio, and the contact shape, but independent of the contact size. Two issues are of primary concern in the finite element simulations – adequacy of the mesh near the contact edge and the friction implementation methodology. Although the stick or slip zone sizes are quite different from the penalty or Lagrangian methods, the calculated contact stiffnesses are almost the same and may be considerably larger than those in Sneddon's solution. For circular punch contact, the numerical solutions agree remarkably well with a previous analytical solution. For non-circular punch contact, the results can be represented using the equivalence between the contact problem and bi-material fracture mechanics. Finally, the correction factor is found to be a product of that for the circular contact and a multiplicative factor that depends only on the shape of the punch but not on the friction coefficient or Poisson's ratio.
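For orientation, the frictionless Sneddon result for a flat-ended rigid circular punch, and the size-independent frictional correction described above, can be summarized as follows (notation assumed here: S the contact stiffness, a the contact radius, E and ν the half-space modulus and Poisson's ratio, μ the friction coefficient):

```latex
S_{\mathrm{Sneddon}} = \frac{2aE}{1-\nu^2},
\qquad
S = f(\mu,\nu,\mathrm{shape})\; S_{\mathrm{Sneddon}},
\qquad f \ \text{independent of}\ a
```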
Deport, Coralie; Ratel, Jérémy; Berdagué, Jean-Louis; Engel, Erwan
2006-05-26
The current work describes a new method, the comprehensive combinatory standard correction (CCSC), for the correction of instrumental signal drifts in GC-MS systems. The method consists in analyzing, together with the products of interest, a mixture of n selected internal standards, normalizing the peak area of each analyte by a sum of standard areas, and then selecting, among the Σ_{p=1}^{n} C(n,p) = 2^n - 1 possible sums, the one that enables the best product discrimination. The CCSC method was compared with classical data pre-processing techniques such as internal normalization (IN) and single standard correction (SSC) on their ability to correct raw data for the main drifts occurring in a dynamic headspace-gas chromatography-mass spectrometry system. Three edible oils with closely similar volatile compound compositions were analysed using a device whose performance was modulated by using new or used dynamic headspace traps and GC columns, and by modifying the tuning of the mass spectrometer. According to one-way ANOVA, the CCSC method increased the number of analytes discriminating the products (31 after CCSC versus 25 with raw data or after IN, and 26 after SSC). Moreover, CCSC enabled a satisfactory discrimination of the products irrespective of the drifts: in a factorial discriminant analysis, 100% of the samples (n = 121) were well classified after CCSC, versus 45% for raw data and 90% and 93%, respectively, after IN and SSC.
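The subset search at the heart of CCSC is easy to state in code. The sketch below enumerates the 2**n - 1 non-empty standard subsets, with the discrimination criterion left as a user-supplied function; the paper used the number of product-discriminating analytes from one-way ANOVA, while the toy drift-removal score in the demo is an assumption for illustration:

```python
from itertools import combinations

def ccsc_normalize(analyte_areas, standard_areas, subset):
    """Divide each analyte peak area by the summed areas of the chosen
    subset of internal standards."""
    denom = sum(standard_areas[i] for i in subset)
    return [a / denom for a in analyte_areas]

def ccsc_best_subset(samples, score):
    """Exhaustive search over all non-empty standard subsets.

    samples: list of (analyte_areas, standard_areas) per injection.
    score:   discrimination criterion on the normalized data (higher is better).
    """
    n_standards = len(samples[0][1])
    best_score, best_subset = None, None
    for p in range(1, n_standards + 1):
        for subset in combinations(range(n_standards), p):
            normalized = [ccsc_normalize(a, s, subset) for a, s in samples]
            sc = score(normalized)
            if best_score is None or sc > best_score:
                best_score, best_subset = sc, subset
    return best_subset

# Toy demo: the second injection's raw signal is doubled (a drift); only
# standard 0 tracks that drift, so normalizing by it alone removes it.
samples = [([2.0, 4.0], [1.0, 5.0]), ([4.0, 8.0], [2.0, 3.0])]
drift_score = lambda norm: -abs(norm[0][0] - norm[1][0])  # smaller spread wins
best = ccsc_best_subset(samples, drift_score)
```

The exhaustive search is exponential in n, which is why the method is practical only for a modest number of internal standards.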
NASA Astrophysics Data System (ADS)
Yazdchi, K.; Salehi, M.; Shokrieh, M. M.
2009-03-01
By introducing a new simplified 3D representative volume element for wavy carbon nanotubes, an analytical model is developed to study stress transfer in single-walled carbon nanotube-reinforced polymer composites. Based on the pull-out modeling technique, the effects of waviness, aspect ratio, and Poisson ratio on the axial and interfacial shear stresses are analyzed in detail. The results of the present analytical model are in good agreement with corresponding results for straight nanotubes.
Hyphenated analytical techniques for materials characterisation
NASA Astrophysics Data System (ADS)
Armstrong, Gordon; Kailas, Lekshmi
2017-09-01
This topical review will provide a survey of the current state of the art in ‘hyphenated’ techniques for characterisation of bulk materials, surface, and interfaces, whereby two or more analytical methods investigating different properties are applied simultaneously to the same sample to better characterise the sample than can be achieved by conducting separate analyses in series using different instruments. It is intended for final year undergraduates and recent graduates, who may have some background knowledge of standard analytical techniques, but are not familiar with ‘hyphenated’ techniques or hybrid instrumentation. The review will begin by defining ‘complementary’, ‘hybrid’ and ‘hyphenated’ techniques, as there is not a broad consensus among analytical scientists as to what each term means. The motivating factors driving increased development of hyphenated analytical methods will also be discussed. This introduction will conclude with a brief discussion of gas chromatography-mass spectroscopy and energy dispersive x-ray analysis in electron microscopy as two examples, in the context that combining complementary techniques for chemical analysis were among the earliest examples of hyphenated characterisation methods. The emphasis of the main review will be on techniques which are sufficiently well-established that the instrumentation is commercially available, to examine physical properties including physical, mechanical, electrical and thermal, in addition to variations in composition, rather than methods solely to identify and quantify chemical species. Therefore, the proposed topical review will address three broad categories of techniques that the reader may expect to encounter in a well-equipped materials characterisation laboratory: microscopy based techniques, scanning probe-based techniques, and thermal analysis based techniques. 
Examples drawn from recent literature, and a concluding case study, will be used to explain the practical issues that arise in combining different techniques. We will consider how the complementary and varied information obtained by combining these techniques may be interpreted together to understand the sample in greater detail than was possible before, and also how combining different techniques can simplify sample preparation and ensure reliable comparisons are made between multiple analyses on the same samples—a topic of particular importance as nanoscale technologies become more prevalent in applied and industrial research and development (R&D). The review will conclude with a brief outline of the emerging state of the art in the research laboratory, and a suggested approach to using hyphenated techniques, whether in the teaching, quality control or R&D laboratory.
Elsayed, Hany H; Mostafa, Ahmed M; Soliman, Saleh; El-Bawab, Hatem Y; Moharram, Adel A; El-Nori, Ahmed A
2016-05-01
Airway metal pins are among the most commonly inhaled foreign bodies in Eastern societies, particularly in young females wearing headscarves. We developed a modified bronchoscopic technique that extracts tracheobronchial headscarf pins by inserting a magnet, allowing easy and non-traumatic extraction of the pins. The aim of this study was to assess the feasibility and safety of our new technique and compare it with our large previous experience with the classic bronchoscopic method of extracting tracheobronchial headscarf pins. We performed a study comparing our retrospective experience with classic bronchoscopic extraction from February 2004 to January 2014 and our prospective experience with the modified magnet technique from January 2014 to June 2015. Institutional review board and new device approvals were obtained. Three hundred and twenty-six procedures on 315 patients were performed during our initial 10-year experience. Of them, 304 patients were females. The median age of our group was 13 years (range 0-62). The median time from inhalation to procedure was 1 day (range 0-1022). After introducing our modified new technique using the magnet, 20 procedures were performed; 19 patients were females. The median procedure time and the need to forcefully bend the pin for extraction both favoured the new technique in comparison with our classic approach (2 vs 6 min; P < 0.001) (2 patients = 20% vs 192 = 58%; P < 0.001). The conversion rate to surgery also favoured the modified technique but did not reach statistical significance (0 = 0% vs 15 = 4.8%; P = 0.32). All patients who underwent the modified technique were discharged home on the same day as the procedure. No procedural complications were recorded. All remain well over a follow-up period of up to 14 months. Bronchoscopic extraction of inhaled tracheobronchial headscarf pins using a novel homemade-magnet technique was safer and simpler in comparison with our large experience with the classic approach.
We advise the use of this device (or concept) in selected patients in centres dealing with this problem. © The Author 2016. Published by Oxford University Press on behalf of the European Association for Cardio-Thoracic Surgery. All rights reserved.
Evaluation of analytical performance based on partial order methodology.
Carlsen, Lars; Bruggemann, Rainer; Kenessova, Olga; Erzhigitov, Erkin
2015-01-01
Classical measurements of performance are typically based on linear scales. However, in analytical chemistry a simple scale may not be sufficient to analyze analytical performance appropriately. Here partial order methodology can be helpful. Within the context described here, partial order analysis can be seen as an ordinal analysis of data matrices, especially to simplify the relative comparison of objects by their data profile (the ordered set of values an object has). Hence, partial order methodology offers a unique possibility to evaluate analytical performance. In the present study, data as provided by laboratories through, e.g., interlaboratory comparisons or proficiency testing are used as an illustrative example. However, the presented scheme is likewise applicable for the comparison of analytical methods or simply as a tool for the optimization of an analytical method. The methodology can be applied without presumptions or pretreatment of the analytical data provided, in order to evaluate the analytical performance taking into account all indicators simultaneously and thus elucidating a "distance" from the true value. In the present illustrative example it is assumed that the laboratories analyze a given sample several times and subsequently report the mean value, the standard deviation and the skewness, which are used simultaneously for the evaluation of the analytical performance. The analyses lead to information concerning (1) a partial ordering of the laboratories, subsequently, (2) a "distance" to the Reference laboratory and (3) a classification due to the concept of "peculiar points". Copyright © 2014 Elsevier B.V. All rights reserved.
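The dominance relation underlying such a partial order can be sketched in a few lines. The indicator profiles below (absolute bias, standard deviation, absolute skewness; smaller is better for each) are hypothetical, chosen only to illustrate how some laboratories become comparable while others remain incomparable:

```python
# Partial-order (Hasse-style) comparison of laboratory performance profiles.
# Each profile is (|bias|, standard deviation, |skewness|); smaller is better
# for every indicator. All values are hypothetical, for illustration only.

def dominates(p, q):
    """True if profile p is at least as good as q in every indicator
    and strictly better in at least one."""
    return all(a <= b for a, b in zip(p, q)) and any(a < b for a, b in zip(p, q))

labs = {
    "Ref": (0.0, 0.1, 0.0),   # reference laboratory
    "A":   (0.2, 0.2, 0.1),
    "B":   (0.1, 0.5, 0.0),
    "C":   (0.5, 0.6, 0.4),
}

# Enumerate all order relations; pairs with no relation are incomparable.
relations = {(x, y) for x in labs for y in labs if dominates(labs[x], labs[y])}
incomparable = {(x, y) for x in labs for y in labs
                if x < y and (x, y) not in relations and (y, x) not in relations}
```

In this toy data, `Ref` dominates every other laboratory, `A` and `B` are mutually incomparable (each is better on a different indicator), and both dominate `C`; a Hasse diagram drawn from `relations` would place `C` at the bottom.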
van der Ploeg, Tjeerd; Austin, Peter C; Steyerberg, Ewout W
2014-12-22
Modern modelling techniques may potentially provide more accurate predictions of binary outcomes than classical techniques. We aimed to study the predictive performance of different modelling techniques in relation to the effective sample size ("data hungriness"). We performed simulation studies based on three clinical cohorts: 1282 patients with head and neck cancer (with 46.9% 5 year survival), 1731 patients with traumatic brain injury (22.3% 6 month mortality) and 3181 patients with minor head injury (7.6% with CT scan abnormalities). We compared three relatively modern modelling techniques: support vector machines (SVM), neural nets (NN), and random forests (RF), and two classical techniques: logistic regression (LR) and classification and regression trees (CART). We created three large artificial databases with 20-fold, 10-fold and 6-fold replication of subjects, in which we generated dichotomous outcomes according to different underlying models. We applied each modelling technique to increasingly larger development parts (100 repetitions). The area under the ROC curve (AUC) indicated the performance of each model in the development part and in an independent validation part. Data hungriness was defined by plateauing of the AUC and small optimism (difference between the mean apparent AUC and the mean validated AUC <0.01). We found that a stable AUC was reached by LR at approximately 20 to 50 events per variable, followed by CART, SVM, NN and RF models. Optimism decreased with increasing sample sizes, with the same ranking of techniques. The RF, SVM and NN models showed instability and high optimism even with >200 events per variable. Modern modelling techniques such as SVM, NN and RF may need over 10 times as many events per variable to achieve a stable AUC and a small optimism as classical modelling techniques such as LR. This implies that such modern techniques should only be used in medical prediction problems if very large data sets are available.
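The two key quantities in this comparison, the AUC and the optimism (mean apparent AUC minus mean validated AUC), follow directly from their definitions. The sketch below is a minimal illustration of that bookkeeping, not the authors' simulation code; the labels and scores are invented:

```python
def auc(labels, scores):
    """Rank-based AUC: the probability that a randomly chosen positive case
    receives a higher score than a randomly chosen negative case
    (ties count as one half)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Apparent AUC: the model scored on its own development data (here, perfect).
apparent = auc([0, 0, 1, 1], [0.1, 0.2, 0.8, 0.9])

# Validated AUC: the same model scored on independent data
# (one discordant positive/negative pair out of four).
validated = auc([0, 1, 0, 1], [0.4, 0.3, 0.2, 0.9])

# Optimism as defined in the study: apparent minus validated performance;
# a technique counts as data-hungry until this drops below 0.01.
optimism = apparent - validated
```

With these toy scores the apparent AUC is 1.0, the validated AUC is 0.75, and the optimism is 0.25, far above the 0.01 stability threshold the study uses.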
ERIC Educational Resources Information Center
Yuen, Allan H.; Park, Jae Hyung; Chen, Lu; Cheng, Miaoting
2017-01-01
Our study examines digital equity in a cultural context. Many studies have used classic analytical variables such as socioeconomic status and gender to investigate the problem of unequal access to, and more recently differences in the use of, information and communication technology (ICT). The few studies that have explored cultural variables have…
Approximate Formula for the Vertical Asymptote of Projectile Motion in Midair
ERIC Educational Resources Information Center
Chudinov, Peter Sergey
2010-01-01
The classic problem of the motion of a point mass (projectile) thrown at an angle to the horizon is reviewed. The air drag force is taken into account with the drag factor assumed to be constant. An analytical approach is used for the investigation. An approximate formula is obtained for one of the characteristics of the motion--the vertical…
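The vertical asymptote itself is easy to see numerically: with quadratic drag the horizontal velocity decays to zero, so x(t) converges to a finite value while the projectile keeps falling. The sketch below integrates the standard equations of motion with a constant drag factor k = g/V_term² by explicit Euler stepping; the parameter values are illustrative, and this numerical estimate stands in for, rather than reproduces, the paper's approximate formula:

```python
import math

# Point mass launched at an angle, with quadratic air drag:
#   dvx/dt = -k * v * vx,   dvy/dt = -g - k * v * vy,   v = sqrt(vx^2 + vy^2)
g = 9.81             # gravity, m/s^2
v_term = 40.0        # assumed terminal velocity, m/s
k = g / v_term ** 2  # constant drag factor, 1/m
v0, angle = 50.0, math.radians(45)  # illustrative launch speed and angle

def x_at(t_end, dt=1e-3):
    """Horizontal position after t_end seconds (explicit Euler stepping)."""
    vx, vy, x = v0 * math.cos(angle), v0 * math.sin(angle), 0.0
    t = 0.0
    while t < t_end:
        v = math.hypot(vx, vy)
        vx -= k * v * vx * dt
        vy -= (g + k * v * vy) * dt
        x += vx * dt
        t += dt
    return x

# x(t) plateaus as vx -> 0: the trajectory approaches a vertical asymptote.
x_asymptote = x_at(60.0)
```

By t = 60 s the horizontal velocity is negligible, so `x_asymptote` approximates the x-coordinate of the vertical asymptote; an analytical formula of the kind derived in the paper predicts this value without any integration.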