Robust Accurate Non-Invasive Analyte Monitor
Robinson, Mark R.
1998-11-03
An improved method and apparatus for determining noninvasively and in vivo one or more unknown values of a known characteristic, particularly the concentration of an analyte in human tissue. The method includes: (1) irradiating the tissue with infrared energy (400 nm-2400 nm) having at least several wavelengths in a given range of wavelengths so that there is differential absorption of at least some of the wavelengths by the tissue as a function of the wavelengths and the known characteristic, the differential absorption causing intensity variations of the wavelengths incident from the tissue; (2) providing a first path through the tissue; (3) optimizing the first path for a first sub-region of the range of wavelengths to maximize the differential absorption by at least some of the wavelengths in the first sub-region; (4) providing a second path through the tissue; and (5) optimizing the second path for a second sub-region of the range, to maximize the differential absorption by at least some of the wavelengths in the second sub-region. In the preferred embodiment a third path through the tissue is provided for, which path is optimized for a third sub-region of the range. With this arrangement, spectral variations which are the result of tissue differences (e.g., melanin and temperature) can be reduced. At least one of the paths represents a partial transmission path through the tissue. This partial transmission path may pass through the nail of a finger once and, preferably, twice. Also included are apparatus for: (1) reducing the arterial pulsations within the tissue; and (2) maximizing the blood content in the tissue.
Highly accurate analytical energy of a two-dimensional exciton in a constant magnetic field
NASA Astrophysics Data System (ADS)
Hoang, Ngoc-Tram D.; Nguyen, Duy-Anh P.; Hoang, Van-Hung; Le, Van-Hoang
2016-08-01
Explicit expressions are given for analytically describing the dependence of the energy of a two-dimensional exciton on magnetic field intensity. These expressions are highly accurate, with a precision of up to three decimal places, over the whole range of magnetic field intensity. The results are shown for the ground state and some excited states; moreover, we have all the formulae needed to obtain similar expressions for any excited state. Analysis of numerical results shows that the precision of three decimal places is maintained for excited states with principal quantum number up to n=100.
NASA Technical Reports Server (NTRS)
Schlosser, Herbert; Ferrante, John
1989-01-01
An accurate analytic expression for the nonlinear change of the volume of a solid as a function of applied pressure is of great interest in high-pressure experimentation. It is found that a two-parameter analytic expression fits the experimental volume-change data to within a few percent over the entire experimentally attainable pressure range. Results are presented for 24 different materials including metals, ceramic semiconductors, polymers, and ionic and rare-gas solids.
Pyrosequencing for Accurate Imprinted Allele Expression Analysis
Yang, Bing; Damaschke, Nathan; Yao, Tianyu; McCormick, Johnathon; Wagner, Jennifer; Jarrard, David
2016-01-01
Genomic imprinting is an epigenetic mechanism that restricts gene expression to one inherited allele. Improper maintenance of imprinting has been implicated in a number of human diseases and developmental syndromes. Assays are needed that can quantify the contribution of each parental allele to a gene expression profile. We have developed a rapid, sensitive quantitative assay for the measurement of individual allelic ratios termed Pyrosequencing for Imprinted Expression (PIE). Advantages of PIE over other approaches include shorter experimental time, decreased labor, avoidance of the need for restriction endonuclease enzymes at polymorphic sites, and prevention of the heteroduplex formation that is problematic in quantitative PCR-based methods. We demonstrate the improved sensitivity of PIE, including the ability to detect differences in allelic expression down to 1%. The assay is capable of measuring genomic heterozygosity as well as imprinting in a single run. PIE is applied to determine the status of Insulin-like Growth Factor-2 (IGF2) imprinting in human and mouse tissues. PMID:25581900
An accurate analytic representation of the water pair potential.
Cencek, Wojciech; Szalewicz, Krzysztof; Leforestier, Claude; van Harrevelt, Rob; van der Avoird, Ad
2008-08-28
The ab initio water dimer interaction energies obtained from coupled cluster calculations and used in the CC-pol water pair potential (Bukowski et al., Science, 2007, 315, 1249) have been refitted to a site-site form containing eight symmetry-independent sites in each monomer and denoted as CC-pol-8s. Initially, the site-site functions were assumed in a B-spline form, which allowed a precise optimization of the positions of the sites. Next, these functions were assumed in the standard exponential plus inverse powers form. The root mean square error of the CC-pol-8s fit with respect to the 2510 ab initio points is 0.10 kcal mol(-1), compared to 0.42 kcal mol(-1) of the CC-pol fit (0.010 kcal mol(-1) compared to 0.089 kcal mol(-1) for points with negative interaction energies). The energies of the stationary points in the CC-pol-8s potential are considerably more accurate than in the case of CC-pol. The water dimer vibration-rotation-tunneling spectrum predicted by the CC-pol-8s potential agrees substantially and systematically better with experiment than the already very accurate spectrum predicted by CC-pol, while specific features that could not be accurately predicted previously now agree very well with experiment. This shows that the uncertainties of the fit were the largest source of error in the previous predictions and that the present potential sets a new standard of accuracy in investigations of the water dimer. PMID:18688514
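The "exponential plus inverse powers" site-site form mentioned above can be sketched generically. The function below evaluates a pairwise sum of Born-Mayer repulsion and dispersion-like inverse-power terms between two sets of interaction sites; the site positions and parameters in the example are purely illustrative, not the fitted CC-pol-8s values.

```python
import math

def site_site_energy(sites_a, sites_b, params):
    """Total interaction energy between two rigid monomers in a generic
    site-site form: E_ij = A_ij*exp(-beta_ij*r) - sum_n C_n,ij / r**n.
    params[(i, j)] holds (A, beta, {n: C_n}); all values are illustrative."""
    total = 0.0
    for i, sa in enumerate(sites_a):
        for j, sb in enumerate(sites_b):
            r = math.dist(sa, sb)               # site-site distance
            a, beta, cs = params[(i, j)]
            total += a * math.exp(-beta * r)    # short-range repulsion
            total -= sum(c / r**n for n, c in cs.items())  # attraction
    return total

# Single hypothetical site pair, 3 units apart:
example = site_site_energy([(0.0, 0.0, 0.0)], [(3.0, 0.0, 0.0)],
                           {(0, 0): (1000.0, 3.0, {6: 10.0})})
```

In an actual fit such as the one described above, the site positions themselves are optimization variables, which is why the B-spline intermediate step helps before fixing the analytic form.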
Highly accurate analytic formulae for projectile motion subjected to quadratic drag
NASA Astrophysics Data System (ADS)
Turkyilmazoglu, Mustafa
2016-05-01
The classical problem of the motion of a projectile fired (thrown) at an angle to the horizon through resistive air exerting a quadratic drag on the object is revisited in this paper. No exact solution is known that describes the full physical event under such a resistance force. The principal target is to find elegant analytical approximations for the most interesting engineering features of the projectile's dynamical behavior. To this end, analytical explicit expressions are derived that accurately predict the maximum height, the time to reach it, and the flight range of the projectile at the highest ascent. The most significant property of the proposed formulas is that they place no restriction on the initial speed and firing angle of the object, nor on the drag coefficient of the medium. In combination with the approximations available in the literature, it is possible to characterize the flight and complete the picture of a trajectory with high precision, without having to numerically simulate the full governing equations of motion.
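Since no exact solution exists, analytic approximations of the kind described above are normally validated against a direct numerical integration. The sketch below integrates the quadratic-drag equations dv_x/dt = -k|v|v_x, dv_y/dt = -g - k|v|v_y with a simple semi-implicit Euler scheme; the drag parameter k and launch values are illustrative.

```python
import math

def simulate(v0, angle_deg, k, g=9.81, dt=1e-4):
    """Integrate projectile motion with quadratic drag (drag parameter k,
    units 1/m) and return (max height, time of apex, horizontal range)."""
    th = math.radians(angle_deg)
    vx, vy = v0 * math.cos(th), v0 * math.sin(th)
    x = y = t = 0.0
    h_max = t_apex = 0.0
    while y >= 0.0:
        sp = math.hypot(vx, vy)           # current speed |v|
        vx -= k * sp * vx * dt            # drag opposes velocity
        vy -= (g + k * sp * vy) * dt
        x += vx * dt
        y += vy * dt
        t += dt
        if y > h_max:
            h_max, t_apex = y, t
    return h_max, t_apex, x
```

With k = 0 this reproduces the vacuum formulas h = (v0 sin th)^2 / (2g) and R = v0^2 sin(2 th)/g, which is a convenient sanity check before comparing any drag approximation.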
Simple analytic expressions for correcting the factorizable formula for Compton
NASA Astrophysics Data System (ADS)
Lajohn, L. A.; Pratt, R. H.
2016-05-01
The factorizable form of the relativistic impulse approximation (RIA) expression for Compton scattering doubly differential cross sections (DDCS) becomes progressively less accurate as the binding energy of the ejected electron increases. This expression, which we call the RKJ approximation, makes it possible to obtain the Compton profile (CP) from measured DDCS. We have derived three simple analytic expressions, each of which can be used to correct the RKJ error for the atomic K-shell CP obtained from DDCS for any atomic number Z. The most general expression is valid over a broad range of energy ω and scattering angle θ; a second, somewhat simpler expression is valid at very high ω over most θ; and the third, the simplest, is valid at small θ over a broad range of ω. We demonstrate that such expressions can yield a CP accurate to within a 1% error over 99% of the electron momentum distribution range of the uranium K-shell CP. Since the K-shell contribution dominates the extremes of the whole-atom CP (this is where the error of RKJ can exceed an order of magnitude), this region can be of concern in assessing the bonding properties of molecules as well as semiconducting materials.
Generating Facial Expressions Using an Anatomically Accurate Biomechanical Model.
Wu, Tim; Hung, Alice; Mithraratne, Kumar
2014-11-01
This paper presents a computational framework for modelling the biomechanics of human facial expressions. A detailed high-order (Cubic-Hermite) finite element model of the human head was constructed using anatomical data segmented from magnetic resonance images. The model includes a superficial soft-tissue continuum consisting of skin, the subcutaneous layer and the superficial Musculo-Aponeurotic system. Embedded within this continuum mesh are 20 pairs of facial muscles which drive facial expressions. These muscles were treated as transversely isotropic and their anatomical geometries and fibre orientations were accurately depicted. In order to capture the relative composition of muscles and fat, material heterogeneity was also introduced into the model. Complex contact interactions between the lips, eyelids, and between the superficial soft tissue continuum and deep rigid skeletal bones were also computed. In addition, this paper investigates the impact of incorporating material heterogeneity and contact interactions, which are often neglected in similar studies. Four facial expressions were simulated using the developed model and the results were compared with surface data obtained from a 3D structured-light scanner. Predicted expressions showed good agreement with the experimental data. PMID:26355331
Accurate analytical method for the extraction of solar cell model parameters
NASA Astrophysics Data System (ADS)
Phang, J. C. H.; Chan, D. S. H.; Phillips, J. R.
1984-05-01
Single-diode solar cell model parameters are rapidly extracted from experimental data by means of the presently derived analytical expressions. The parameter values obtained have less than 5 percent error for most solar cells, as verified by extracting the model parameters of two cells of differing quality and comparing them with parameters extracted by means of the iterative method.
Accurate expressions for solar cell fill factors including series and shunt resistances
NASA Astrophysics Data System (ADS)
Green, Martin A.
2016-02-01
Together with open-circuit voltage and short-circuit current, fill factor is a key solar cell parameter. In their classic paper on limiting efficiency, Shockley and Queisser first investigated this factor's analytical properties showing, for ideal cells, it could be expressed implicitly in terms of the maximum power point voltage. Subsequently, fill factors usually have been calculated iteratively from such implicit expressions or from analytical approximations. In the absence of detrimental series and shunt resistances, analytical fill factor expressions have recently been published in terms of the Lambert W function available in most mathematical computing software. Using a recently identified perturbative relationship, exact expressions in terms of this function are derived in technically interesting cases when both series and shunt resistances are present but have limited impact, allowing a better understanding of their effect individually and in combination. Approximate expressions for arbitrary shunt and series resistances are then deduced, which are significantly more accurate than any previously published. A method based on the insights developed is also reported for deducing one-diode fits to experimental data.
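In the resistance-free case the implicit relation mentioned above is easy to make concrete. For an ideal diode with voltage normalized to the thermal voltage (voc_n = qVoc/nkT), the maximum-power voltage satisfies v_mp + ln(1 + v_mp) = voc_n, and the exact fill factor follows; Green's classic empirical approximation FF0 = (voc_n - ln(voc_n + 0.72))/(voc_n + 1) agrees closely. The sketch below uses these textbook relations, not the series/shunt perturbative expressions of the paper, and the voc_n value in the test is illustrative.

```python
import math

def ideal_fill_factor(voc_n):
    """Exact ideal-diode fill factor. The maximum-power voltage v_mp
    solves v_mp + ln(1 + v_mp) = voc_n (found here by fixed-point
    iteration), giving FF = v_mp^2 e^(v_mp) / (voc_n (e^voc_n - 1))."""
    v = voc_n
    for _ in range(100):                  # contraction: converges fast
        v = voc_n - math.log1p(v)
    return v * v * math.exp(v - voc_n) / (voc_n * (1.0 - math.exp(-voc_n)))

def green_fill_factor(voc_n):
    """Green's empirical FF0, accurate to a few parts in 1e4 for
    voc_n > 10; series and shunt resistance effects are excluded."""
    return (voc_n - math.log(voc_n + 0.72)) / (voc_n + 1.0)
```

The same v_mp can equivalently be written via the Lambert W function, which is the closed form the abstract refers to; the fixed-point iteration is just a dependency-free substitute.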
Li, Rui; Ye, Hongfei; Zhang, Weisheng; Ma, Guojun; Su, Yewang
2015-01-01
Spring constant calibration of the atomic force microscope (AFM) cantilever is of fundamental importance for quantifying the force between the AFM cantilever tip and the sample. The calibration within the framework of thin plate theory undoubtedly has a higher accuracy and broader scope than that within the well-established beam theory. However, thin plate theory-based accurate analytic determination of the constant has been perceived as an extremely difficult issue. In this paper, we implement the thin plate theory-based analytic modeling for the static behavior of rectangular AFM cantilevers, which reveals that the three-dimensional effect and Poisson effect play important roles in accurate determination of the spring constants. A quantitative scaling law is found that the normalized spring constant depends only on the Poisson’s ratio, normalized dimension and normalized load coordinate. Both the literature and our refined finite element model validate the present results. The developed model is expected to serve as the benchmark for accurate calibration of rectangular AFM cantilevers. PMID:26510769
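The baseline that the plate-theory model above refines is the Euler-Bernoulli beam-theory spring constant of a rectangular cantilever, k = Ewt^3/(4L^3), which ignores the three-dimensional and Poisson effects the paper highlights. A minimal sketch, with illustrative silicon-like dimensions:

```python
def beam_spring_constant(E, w, t, L):
    """Beam-theory normal spring constant of a rectangular cantilever
    loaded at its free end: k = E*w*t**3 / (4*L**3).  SI units.
    This is the classical estimate that plate-theory calibration
    corrects; it neglects 3D and Poisson effects."""
    return E * w * t**3 / (4.0 * L**3)

# Illustrative values: E = 170 GPa, 30 um wide, 2 um thick, 200 um long
k_example = beam_spring_constant(170e9, 30e-6, 2e-6, 200e-6)  # N/m
```

Because the plate-theory constant depends additionally on Poisson's ratio and the normalized load coordinate, comparing against this one-line estimate is a quick way to gauge how large the 3D correction is for a given cantilever geometry.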
Accurate Analytic Results for the Steady State Distribution of the Eigen Model
NASA Astrophysics Data System (ADS)
Huang, Guan-Rong; Saakian, David B.; Hu, Chin-Kun
2016-04-01
The Eigen model of molecular evolution is popular in studying complex biological and biomedical systems. Using the Hamilton-Jacobi equation method, we have calculated analytic equations for the steady state distribution of the Eigen model with a relative accuracy of O(1/N), where N is the length of the genome. Our results can be applied for the case of small genome length N, as well as cases where direct numerics cannot give accurate results, e.g., the tail of the distribution.
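For a permutation-invariant fitness landscape, the Eigen model's steady state can also be obtained numerically by lumping genotypes into Hamming classes and power-iterating the mutation-selection matrix, which provides a direct check on analytic formulas. The sketch below uses a single-peak ("master sequence") landscape with assumed parameters (N = 20 sites, per-site error rate mu = 0.005, master fitness A = 10); it is an independent numerical illustration, not the Hamilton-Jacobi machinery of the paper.

```python
import math

def eigen_steady_state(N=20, mu=0.005, A=10.0, iters=1000):
    """Steady-state distribution over Hamming classes 0..N for the Eigen
    model with a single-peak landscape (fitness A for class 0, else 1),
    binary alphabet, per-site mutation rate mu.  Parameters illustrative."""
    def p(k, l):
        # probability a class-k sequence mutates into class l:
        # j of its k mutant sites revert, m = l-k+j wild sites mutate
        tot = 0.0
        for j in range(k + 1):
            m = l - k + j
            if 0 <= m <= N - k:
                tot += (math.comb(k, j) * mu**j * (1 - mu)**(k - j)
                        * math.comb(N - k, m) * mu**m * (1 - mu)**(N - k - m))
        return tot
    W = [[(A if k == 0 else 1.0) * p(k, l) for k in range(N + 1)]
         for l in range(N + 1)]
    x = [1.0 / (N + 1)] * (N + 1)
    for _ in range(iters):                 # power iteration
        y = [sum(W[l][k] * x[k] for k in range(N + 1)) for l in range(N + 1)]
        s = sum(y)
        x = [v / s for v in y]
    return x
```

Below the error threshold (A(1-mu)^N > 1 for these values) the master class retains a finite population fraction, consistent with the quasispecies picture.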
Development and application of accurate analytical models for single active electron potentials
NASA Astrophysics Data System (ADS)
Miller, Michelle; Jaron-Becker, Agnieszka; Becker, Andreas
2015-05-01
The single active electron (SAE) approximation is a theoretical model frequently employed to study scenarios in which inner-shell electrons may productively be treated as frozen spectators to a physical process of interest, and accurate analytical approximations for these potentials are sought as a useful simulation tool. Density functional theory is often used to construct a SAE potential, requiring that a further approximation for the exchange-correlation functional be enacted. In this study, we employ the Krieger, Li, and Iafrate (KLI) modification to the optimized-effective-potential (OEP) method to reduce the complexity of the problem to the straightforward solution of a system of linear equations through simple arguments regarding the behavior of the exchange-correlation potential in regions where a single orbital dominates. We employ this method for the solution of atomic and molecular potentials, and use the resultant curve to devise a systematic construction for highly accurate and useful analytical approximations for several systems. Supported by the U.S. Department of Energy (Grant No. DE-FG02-09ER16103), and the U.S. National Science Foundation (Graduate Research Fellowship, Grants No. PHY-1125844 and No. PHY-1068706).
Analytical expressions for fringe fields in multipole magnets
NASA Astrophysics Data System (ADS)
Muratori, B. D.; Jones, J. K.; Wolski, A.
2015-06-01
Fringe fields in multipole magnets can have a variety of effects on the linear and nonlinear dynamics of particles moving along an accelerator beam line. An accurate model of an accelerator must include realistic models of the magnet fringe fields. Fringe fields for dipoles are well understood and can be modeled at an early stage of accelerator design in such codes as mad8, madx, gpt or elegant. Existing techniques for quadrupoles and higher-order multipoles rely either on the use of a numerical field map, or on a description of the field in the form of a series expansion about a chosen axis. Usually, it is not until the later stages of a design project that such descriptions (based on magnet modeling or measurement) become available. Furthermore, series expansions rely on the assumption that the beam travels more or less on axis throughout the beam line; but in some types of machines (for example, Fixed-Field Alternating Gradient accelerators, or FFAGs) this is not a good assumption. In addition, some tracking codes, such as gpt, use methods for including space charge effects that require fields to vary smoothly and continuously along a beam line: in such cases, realistic fringe field models are of significant importance. In this paper, a method for constructing analytical expressions for multipole fringe fields is presented. Such expressions allow fringe field effects to be included in beam dynamics simulations from the start of an accelerator design project, even before detailed magnet design work has been undertaken. The magnetostatic Maxwell equations are solved analytically and a solution that fits all orders of multipoles is derived. Quadrupole fringe fields are considered in detail as these are the ones that give the strongest effects. The analytic expressions for quadrupole fringe fields are compared with data obtained from numerical modeling codes in two cases: a magnet in the high luminosity upgrade of the Large Hadron Collider inner triplet, and a magnet in the
NASA Astrophysics Data System (ADS)
Walter, Johannes; Thajudeen, Thaseem; Süß, Sebastian; Segets, Doris; Peukert, Wolfgang
2015-04-01
Analytical centrifugation (AC) is a powerful technique for the characterisation of nanoparticles in colloidal systems. As a direct and absolute technique it requires no calibration or measurements of standards. Moreover, it offers simple experimental design and handling, high sample throughput as well as moderate investment costs. However, the full potential of AC for nanoparticle size analysis requires the development of powerful data analysis techniques. In this study we show how the application of direct boundary models to AC data opens up new possibilities in particle characterisation. An accurate analysis method, successfully applied to sedimentation data obtained by analytical ultracentrifugation (AUC) in the past, was used for the first time in analysing AC data. Unlike traditional data evaluation routines for AC using a designated number of radial positions or scans, direct boundary models consider the complete sedimentation boundary, which results in significantly better statistics. We demonstrate that meniscus fitting, as well as the correction of radius and time invariant noise significantly improves the signal-to-noise ratio and prevents the occurrence of false positives due to optical artefacts. Moreover, hydrodynamic non-ideality can be assessed by the residuals obtained from the analysis. The sedimentation coefficient distributions obtained by AC are in excellent agreement with the results from AUC. Brownian dynamics simulations were used to generate numerical sedimentation data to study the influence of diffusion on the obtained distributions. Our approach is further validated using polystyrene and silica nanoparticles. In particular, we demonstrate the strength of AC for analysing multimodal distributions by means of gold nanoparticles.
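The quantity at the heart of such sedimentation analyses is the sedimentation coefficient s, which for an ideal, non-diffusing boundary obeys ln(r_b/r_m) = s*omega^2*t, where r_b is the boundary position and r_m the meniscus radius. The sketch below recovers s from boundary positions with a through-origin least-squares fit; it is a deliberately minimal estimator, not the direct boundary modeling described above, and the rotor speed and radii in the test are illustrative.

```python
import math

def sedimentation_coefficient(times, radii, r_m, omega):
    """Estimate the sedimentation coefficient s (in seconds) from boundary
    positions radii (m) at the given times (s), meniscus radius r_m (m)
    and angular velocity omega (rad/s), using the ideal relation
    ln(r_b/r_m) = s * omega**2 * t and a through-origin least squares fit."""
    ys = [math.log(r / r_m) for r in radii]
    slope = sum(t * y for t, y in zip(times, ys)) / sum(t * t for t in times)
    return slope / omega**2
```

Real AC data additionally require meniscus fitting and noise correction, as the abstract stresses; this estimator only illustrates the underlying kinematics.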
Fast and accurate analytical model to solve inverse problem in SHM using Lamb wave propagation
NASA Astrophysics Data System (ADS)
Poddar, Banibrata; Giurgiutiu, Victor
2016-04-01
Lamb wave propagation is at the center of attention of researchers for structural health monitoring of thin-walled structures. This is because Lamb wave modes are natural modes of wave propagation in these structures, with long travel distances and little attenuation, which brings the prospect of monitoring large structures with few sensors/actuators. However, damage detection and identification is an "inverse problem" in which we do not have the luxury of knowing the exact mathematical model of the system. The problem is made more challenging by the confounding factors of statistical variation of the material and geometric properties, and it may also be ill-posed. These complexities make a direct solution of the damage detection and identification problem in SHM impossible, so an indirect method using the solution of the "forward problem" is popular for solving the "inverse problem". This requires a fast forward-problem solver. Because of the complexities involved in the forward problem of scattering of Lamb waves from damage, researchers rely primarily on numerical techniques such as FEM, BEM, etc. But these methods are slow and practically impossible to use in structural health monitoring. We have developed a fast and accurate analytical forward-problem solver for this purpose. This solver, CMEP (complex modes expansion and vector projection), can simulate scattering of Lamb waves from all types of damage in thin-walled structures quickly and accurately to assist the inverse-problem solver.
Analytic expressions for ULF wave radiation belt radial diffusion coefficients
Ozeke, Louis G; Mann, Ian R; Murphy, Kyle R; Jonathan Rae, I; Milling, David K
2014-01-01
We present analytic expressions for ULF wave-derived radiation belt radial diffusion coefficients, as a function of L and Kp, which can easily be incorporated into global radiation belt transport models. The diffusion coefficients are derived from statistical representations of ULF wave power, electric field power mapped from ground magnetometer data, and compressional magnetic field power from in situ measurements. We show that the overall electric and magnetic diffusion coefficients are to a good approximation both independent of energy. We present example 1-D radial diffusion results from simulations driven by CRRES-observed time-dependent energy spectra at the outer boundary, under the action of radial diffusion driven by the new ULF wave radial diffusion coefficients and with empirical chorus wave loss terms (as a function of energy, Kp and L). There is excellent agreement between the differential flux produced by the 1-D, Kp-driven, radial diffusion model and CRRES observations of differential electron flux at 0.976 MeV, even though the model does not include the effects of local internal acceleration sources. Our results highlight not only the importance of correct specification of radial diffusion coefficients for developing accurate models but also show significant promise for belt specification based on relatively simple models driven by solar wind parameters such as solar wind speed or geomagnetic indices such as Kp. Key Points: Analytic expressions for the radial diffusion coefficients are presented. The coefficients do not depend on energy or wave m value. The electric field diffusion coefficient dominates over the magnetic. PMID:26167440
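A 1-D radial diffusion model of the kind described above advances the phase space density f via df/dt = L^2 d/dL (D_LL/L^2 df/dL). The sketch below implements one explicit finite-difference step of this equation; for D_LL it substitutes the widely used Brautigam and Albert (2000) Kp parameterization, D_LL = 10^(0.506*Kp - 9.325) * L^10 per day, as a stand-in for the paper's own ULF-wave coefficients.

```python
import math

def step_radial_diffusion(f, L, dt, kp):
    """One explicit step of df/dt = L^2 d/dL (D_LL / L^2 df/dL) on a
    uniform L grid, with boundaries held fixed.  dt is in days, since
    the stand-in D_LL below is in 1/day (Brautigam & Albert 2000 EM
    coefficient, used here for illustration only)."""
    def d_ll(l):
        return 10 ** (0.506 * kp - 9.325) * l ** 10
    dL = L[1] - L[0]
    new = f[:]
    for i in range(1, len(L) - 1):
        lp, lm = 0.5 * (L[i] + L[i + 1]), 0.5 * (L[i] + L[i - 1])
        flux_p = d_ll(lp) / lp**2 * (f[i + 1] - f[i]) / dL
        flux_m = d_ll(lm) / lm**2 * (f[i] - f[i - 1]) / dL
        new[i] = f[i] + dt * L[i]**2 * (flux_p - flux_m) / dL
    return new
```

The strong L dependence of D_LL is why the explicit time step must be kept small at the outer boundary; production codes typically use implicit schemes instead.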
Fast and Accurate Digital Morphometry of Facial Expressions.
Grewe, Carl Martin; Schreiber, Lisa; Zachow, Stefan
2015-10-01
Facial surgery deals with a part of the human body that is of particular importance in everyday social interactions. The perception of a person's natural, emotional, and social appearance is significantly influenced by one's expression. This is why facial dynamics has been increasingly studied by both artists and scholars since the mid-Renaissance. Currently, facial dynamics and their importance in the perception of a patient's identity play a fundamental role in planning facial surgery. Assistance is needed for patient information and communication, and documentation and evaluation of the treatment as well as during the surgical procedure. Here, the quantitative assessment of morphological features has been facilitated by the emergence of diverse digital imaging modalities in the last decades. Unfortunately, the manual data preparation usually needed for further quantitative analysis of the digitized head models (surface registration, landmark annotation) is time-consuming, and thus inhibits its use for treatment planning and communication. In this article, we refer to historical studies on facial dynamics, briefly present related work from the field of facial surgery, and draw implications for further developments in this context. A prototypical stereophotogrammetric system for high-quality assessment of patient-specific 3D dynamic morphology is described. An individual statistical model of several facial expressions is computed, and possibilities to address a broad range of clinical questions in facial surgery are demonstrated. PMID:26579859
Generalized diffusion equation and analytical expressions to neutron scattering experiments
NASA Astrophysics Data System (ADS)
Fa, Kwok Sau
2014-12-01
An integro-differential diffusion equation with linear force, based on the continuous time random walk model, is considered. The equation generalizes the ordinary and fractional diffusion equations. Analytical expressions related to neutron scattering experiments are presented and analyzed, which can be used to describe, for instance, biological systems.
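Any such generalized equation must recover the ordinary-diffusion limit, for which the self-intermediate scattering function probed in neutron scattering is simply F_s(k,t) = exp(-k^2 D t). The sketch below evaluates that limit together with a stretched-exponential form often used to describe the anomalous relaxation CTRW models produce; both are shown only as illustrative reference curves, not the paper's exact expressions.

```python
import math

def isf_ordinary(k, D, t):
    """Self-intermediate scattering function of ordinary diffusion,
    F_s(k, t) = exp(-k**2 * D * t): the limit a generalized (CTRW-based)
    diffusion equation should reproduce."""
    return math.exp(-k**2 * D * t)

def isf_stretched(k, D, t, beta):
    """Stretched-exponential form exp(-(k**2*D*t)**beta), an illustrative
    anomalous-relaxation shape (beta = 1 recovers ordinary diffusion)."""
    return math.exp(-((k**2) * D * t) ** beta)
```

Comparing measured F_s(k,t) against these two shapes is a quick first test of whether a sample relaxes diffusively or anomalously.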
NASA Astrophysics Data System (ADS)
Colalongo, Luigi; Ghittorelli, Matteo; Torricelli, Fabrizio; Kovács-Vajna, Zsolt Miklos
2015-12-01
Surface-potential-based mathematical models are among the most accurate and physically based compact models of Thin-Film Transistors (TFTs) and, in turn, of Organic Thin-Film Transistors (OTFTs), available today. However, the need for iterative computation of the surface potential limits their computational efficiency and their adoption in CAD applications. The existing closed-form approximations of the surface potential are based on regional approximations and empirical smoothing functions that may not be accurate enough to model OTFTs and, in particular, transconductances and transcapacitances. In this paper we present an accurate and computationally efficient closed-form approximation of the surface potential, based on the Lagrange Reversion Theorem, that can be exploited in advanced surface-potential-based OTFT and TFT device models.
Fixing a rigorous formalism for the accurate analytic derivation of halo properties
NASA Astrophysics Data System (ADS)
Juan, Enric; Salvador-Solé, Eduard; Domènech, Guillem; Manrique, Alberto
2014-03-01
We establish a one-to-one correspondence between virialized haloes and their seeds, namely peaks with a given density contrast at appropriate Gaussian-filtering radii, in the initial Gaussian random density field. This fixes a rigorous formalism for the analytic derivation of halo properties from the linear power spectrum of density perturbations in any hierarchical cosmology. The typical spherically averaged density profile and mass function of haloes so obtained match those found in numerical simulations.
NASA Astrophysics Data System (ADS)
Le Roy, Robert J.; Meshkov, Vladimir V.; Stolyarov, Andrej V.
2009-06-01
We have shown that one- and two-parameter analytical mapping functions such as r(y; r̄, α) = r̄[1 + (1/α) tan(πy/2)] and r(y; r̄) = r̄ (1 + y)/(1 − y) transform the conventional radial Schrödinger equation into equivalent alternate forms d²φ(y)/dy² = [π²/4 + (2μ/ħ²) g²(y) [E − U(r(y))] …
NASA Astrophysics Data System (ADS)
Qiao, Yaojun; Li, Ming; Yang, Qiuhong; Xu, Yanfei; Ji, Yuefeng
2015-01-01
Closed-form expressions for the nonlinear interference of dense wavelength-division-multiplexed (WDM) systems with dispersion-managed transmission (DMT) are derived. We carry out a simulative validation by addressing an ample and significant set of Nyquist-WDM systems based on polarization-multiplexed quadrature phase-shift keying (PM-QPSK) subcarriers at a baud rate of 32 Gbaud per channel. Simulation results show that the simple closed-form analytical expressions provide an effective tool for quick and accurate prediction of system performance in DMT coherent optical systems.
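One reason closed-form nonlinear-interference expressions are useful for quick system prediction is that, once the NLI is written as eta*P^3, the SNR P/(P_ASE + eta*P^3) has a closed-form optimum launch power. The sketch below uses this generic GN-model-style relation with illustrative numbers; it is not the paper's DMT-specific expressions.

```python
def optimal_launch_power(p_ase, eta):
    """SNR-maximizing per-channel launch power for the generic model
    SNR(P) = P / (p_ase + eta * P**3):  P_opt = (p_ase / (2*eta))**(1/3).
    p_ase is the accumulated ASE noise power (W), eta the NLI efficiency
    (1/W^2); both values in the example are illustrative."""
    return (p_ase / (2.0 * eta)) ** (1.0 / 3.0)

def snr(p, p_ase, eta):
    """SNR at launch power p under the same cubic-NLI model."""
    return p / (p_ase + eta * p**3)
```

At the optimum the NLI power equals half the ASE power, which is the familiar rule of thumb this relation encodes.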
Analytical Grid Generation for accurate representation of clearances in CFD for Screw Machines
NASA Astrophysics Data System (ADS)
Rane, S.; Kovačević, A.; Stošić, N.
2015-08-01
One of the major factors affecting the performance prediction of twin screw compressors by use of computational fluid dynamics (CFD) is the accuracy with which the leakage gaps are captured by the discretization methods. The accuracy of mapping leakage flows can be improved by increasing the number of grid points on the profile. However, this method faces limitations when it comes to the complex deforming domains of a twin screw compressor because the computational time increases tremendously. In order to address this problem, an analytical grid distribution procedure is formulated that can independently refine the region of high importance for leakage flows in the interlobe space. This paper describes the procedure of analytical grid generation with the refined mesh in the interlobe area and presents a test case to show the influence of the mesh refinement in that area on the performance prediction. It is shown that by using this method, the flow domains in the vicinity of the interlobe gap and the blowhole area are refined which improves accuracy of leakage flow predictions.
Analytic expression for poloidal flow velocity in the banana regime
Taguchi, M.
2013-01-15
The poloidal flow velocity in the banana regime is calculated by improving the l = 1 approximation for the Fokker-Planck collision operator [M. Taguchi, Plasma Phys. Controlled Fusion 30, 1897 (1988)]. The obtained analytic expression for this flow, which can be used for general axisymmetric toroidal plasmas, agrees quite well with the recently calculated numerical results by Parker and Catto [Plasma Phys. Controlled Fusion 54, 085011 (2012)] in the full range of aspect ratio.
Analytical expression for critical frequency of microwave assisted magnetization switching
NASA Astrophysics Data System (ADS)
Arai, Hiroko; Imamura, Hiroshi
2016-02-01
The microwave-assisted switching (MAS) of magnetization in a perpendicularly magnetized circular disk is studied based on the macrospin model in a rotating frame. The analytical expression for the critical frequency of MAS is derived by analyzing the presence of a quasiperiodic mode. The critical frequency is expressed as a function of the radio-frequency (rf) field H_rf and the effective anisotropy field H_k^eff. For a small rf field such that H_rf << H_k^eff, the critical frequency is approximately equal to (γ/π)(H_k^eff H_rf²)^(1/3).
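As a rough numerical illustration of the small-field limit quoted above, one can evaluate f_c ≈ (γ/π)(H_k^eff · H_rf²)^(1/3) directly. The gyromagnetic ratio below is an assumed free-electron value, not a number taken from the paper:

```python
import math

GAMMA = 1.76e11  # assumed electron gyromagnetic ratio, rad s^-1 T^-1

def mas_critical_frequency(h_k_eff, h_rf):
    """Small-rf-field estimate of the MAS critical frequency,
    f_c ~ (gamma/pi) * (H_k_eff * H_rf^2)^(1/3), valid for H_rf << H_k_eff."""
    return (GAMMA / math.pi) * (h_k_eff * h_rf ** 2) ** (1.0 / 3.0)
```

A characteristic feature of the cube-root scaling is that doubling the rf field raises the critical frequency by a factor 2^(2/3), not 2.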
A novel fast and accurate pseudo-analytical simulation approach for MOAO
NASA Astrophysics Data System (ADS)
Gendron, É.; Charara, A.; Abdelfattah, A.; Gratadour, D.; Keyes, D.; Ltaief, H.; Morel, C.; Vidal, F.; Sevin, A.; Rousset, G.
2014-08-01
Multi-object adaptive optics (MOAO) is a novel adaptive optics (AO) technique for wide-field multi-object spectrographs (MOS). MOAO aims at applying dedicated wavefront corrections to numerous separated tiny patches spread over a large field of view (FOV), limited only by that of the telescope. The control of each deformable mirror (DM) is done individually using a tomographic reconstruction of the phase based on measurements from a number of wavefront sensors (WFS) pointing at natural and artificial guide stars in the field. We have developed a novel hybrid, pseudo-analytical simulation scheme, somewhere in between the end-to-end and purely analytical approaches, that allows us to simulate the tomographic problem in detail, as well as noise and aliasing with high fidelity, including fitting and bandwidth errors thanks to a Fourier-based code. Our tomographic approach is based on the computation of the minimum mean square error (MMSE) reconstructor, from which we derive numerically the covariance matrix of the tomographic error, including aliasing and propagated noise. We are then able to simulate the point-spread function (PSF) associated with this covariance matrix of the residuals, as in PSF-reconstruction algorithms. The advantage of our approach is that we compute the same tomographic reconstructor that would be computed when operating the real instrument, so that our developments open the way for a future on-sky implementation of the tomographic control, plus joint PSF and performance estimation. The main challenge resides in the computation of the tomographic reconstructor, which involves the inversion of a large matrix (typically 40 000 × 40 000 elements). To perform this computation efficiently, we chose an optimized approach based on the use of GPUs as accelerators and an optimized linear algebra library, MORSE, providing a significant speedup over standard CPU-oriented libraries such as Intel MKL. Because the covariance matrix is
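The MMSE reconstructor described above is, in essence, W = C_tm (C_mm + C_noise)⁻¹, where C_tm is the target/measurement covariance and C_mm the measurement covariance. A toy sketch of that linear-algebra step (tiny symmetric matrices, pure-Python Gaussian elimination; nothing like the 40 000 × 40 000 GPU-accelerated problem in the paper):

```python
def solve(a, b):
    """Solve A x = b by Gaussian elimination with partial pivoting (A: n x n list of lists)."""
    n = len(a)
    m = [row[:] + [b[i]] for i, row in enumerate(a)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(col + 1, n):
            f = m[r][col] / m[col][col]
            for c in range(col, n + 1):
                m[r][c] -= f * m[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (m[r][n] - sum(m[r][c] * x[c] for c in range(r + 1, n))) / m[r][r]
    return x

def mmse_reconstructor(c_tm, c_mm, c_noise):
    """Each row w_i of W solves w_i (C_mm + C_n) = (row i of C_tm),
    i.e. W = C_tm (C_mm + C_n)^-1. Covariance matrices are symmetric,
    so the transpose in the row-wise solve can be dropped."""
    n = len(c_mm)
    a = [[c_mm[i][j] + c_noise[i][j] for j in range(n)] for i in range(n)]
    return [solve(a, row) for row in c_tm]
```

In the real instrument the matrix is far too large for dense pure-Python elimination, which is exactly why the authors turn to GPU-accelerated libraries.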
Helping Students Become Accurate, Expressive Readers: Fluency Instruction for Small Groups
ERIC Educational Resources Information Center
Kuhn, Melanie
2004-01-01
Effective approaches to fluency instruction should facilitate automatic and accurate word recognition as well as the ability to read with expression. The study reported in this article focused on instructional approaches that can be used with small groups of learners within a broader literacy curriculum, one that is suitable for flexible grouping.…
Analytical expressions for vibrational matrix elements of Morse oscillators
Zuniga, J.; Hidalgo, A.; Frances, J.M.; Requena, A.; Lopez Pineiro, A.; Olivares del Valle, F.J.
1988-10-15
Several exact recursion relations connecting different Morse oscillator matrix elements associated with the operators q^α e^(−βaq) and q^α e^(−βaq)(d/dr) are derived. Matrix elements of the other useful operators may then be obtained easily. In particular, analytical expressions for ⟨y^k d/dr⟩ and ⟨y^k d/dr + (d/dr) y^k⟩, matrix elements of interest in the study of the internuclear motion in polyatomic molecules, are obtained.
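The recursion relations themselves are not reproduced in the abstract, but a companion fact about Morse oscillators is closed-form and easy to check: the vibrational term values are G(v) = ωe(v + ½) − ωexe(v + ½)², with anharmonicity constant ωexe = ωe²/(4De). A sketch with illustrative, CO-like constants (assumed values, not from this paper):

```python
def morse_levels(we, wexe, vmax):
    """Morse term values G(v) = we*(v+1/2) - wexe*(v+1/2)**2 (cm^-1),
    where wexe = we**2 / (4*De) is the anharmonicity constant."""
    return [we * (v + 0.5) - wexe * (v + 0.5) ** 2 for v in range(vmax + 1)]

# Illustrative CO-like constants (assumed): we ~ 2170 cm^-1, wexe ~ 13.3 cm^-1.
levels = morse_levels(2170.0, 13.3, 5)
```

The level spacing G(v+1) − G(v) = ωe − 2ωexe(v + 1) shrinks linearly with v, the hallmark of Morse anharmonicity.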
NASA Astrophysics Data System (ADS)
RoyChoudhury, Raja; RoyChoudhury, Arundhati
2011-02-01
This paper presents a semi-analytical formulation of the modal properties of a nonlinear optical fiber with Kerr nonlinearity, using a three-parameter approximation of the fundamental modal field. The minimization of the core parameter (U), which involves the Kerr nonlinearity through the non-stationary expression of the propagation constant, is carried out by the Nelder-Mead simplex method of nonlinear unconstrained minimization, suitable for problems with non-smooth functions since the method does not require any derivative information. The use of three parameters in the modal approximation and the implementation of the simplex method make our semi-analytical description an alternative with less computational burden than full numerical methods for calculating modal parameters.
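The Nelder-Mead simplex method mentioned above is derivative-free, which is why it suits noisy or non-smooth objectives like the core-parameter minimization described here. A compact, simplified implementation (reflection, expansion, inside contraction, shrink; an illustration, not the authors' code), exercised on a smooth test function:

```python
def nelder_mead(f, x0, step=0.5, tol=1e-8, max_iter=1000):
    """Simplified derivative-free Nelder-Mead simplex minimization."""
    n = len(x0)
    simplex = [list(x0)]
    for i in range(n):                       # build initial simplex around x0
        p = list(x0)
        p[i] += step
        simplex.append(p)
    for _ in range(max_iter):
        simplex.sort(key=f)                  # best first, worst last
        best, second, worst = simplex[0], simplex[-2], simplex[-1]
        if abs(f(worst) - f(best)) < tol:
            break
        centroid = [sum(p[i] for p in simplex[:-1]) / n for i in range(n)]
        refl = [2.0 * centroid[i] - worst[i] for i in range(n)]
        if f(refl) < f(best):                # try expanding past the reflection
            expa = [3.0 * centroid[i] - 2.0 * worst[i] for i in range(n)]
            simplex[-1] = expa if f(expa) < f(refl) else refl
        elif f(refl) < f(second):            # plain reflection is good enough
            simplex[-1] = refl
        else:                                # inside contraction, else shrink
            contr = [0.5 * (centroid[i] + worst[i]) for i in range(n)]
            if f(contr) < f(worst):
                simplex[-1] = contr
            else:
                simplex = [best] + [[0.5 * (p[i] + best[i]) for i in range(n)]
                                    for p in simplex[1:]]
    simplex.sort(key=f)
    return simplex[0]
```

On a quadratic bowl such as f(x, y) = (x − 1)² + (y − 2)² the simplex contracts onto the minimum without ever evaluating a gradient.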
NASA Astrophysics Data System (ADS)
Lloyd, N. S.; Bouman, C.; Horstwood, M. S.; Parrish, R. R.; Schwieters, J. B.
2010-12-01
This presentation describes progress in mass spectrometry for analysing very small analyte quantities, illustrated by example applications from nuclear forensics. In this challenging application, precise and accurate (‰) uranium isotope ratios are required from 1 - 2 µm diameter uranium oxide particles, which comprise less than 40 pg of uranium. Traditionally these are analysed using thermal ionisation mass spectrometry (TIMS), and more recently using secondary ionisation mass spectrometry (SIMS). Multicollector inductively-coupled plasma mass spectrometry (MC-ICP-MS) can offer higher productivity compared to these techniques, but is traditionally limited by low efficiency of analyte utilisation (sample through to ion detection). Samples can either be introduced as a solution, or sampled directly from solid using laser ablation. Large multi-isotope ratio datasets can help identify provenance and intended use of anthropogenic uranium and other nuclear materials [1]. The Thermo Scientific NEPTUNE Plus (Bremen, Germany) with ‘Jet Interface’ option offers unparalleled MC-ICP-MS sensitivity. An analyte utilisation of c. 4% has previously been reported for uranium [2]. This high-sensitivity configuration utilises a dry high-capacity (100 m³/h) interface pump, special skimmer and sampler cones and a desolvating nebuliser system. Coupled with new acquisition methodologies, this sensitivity enhancement makes possible the analysis of micro-particles and small sample volumes at higher precision levels than previously achieved. New, high-performance, full-size and compact discrete dynode secondary electron multipliers (SEM) exhibit excellent stability and linearity over a large dynamic range and can be configured to simultaneously measure all of the uranium isotopes. Options for high abundance-sensitivity filters on two ion beams are also available, e.g. for ²³⁶U and ²³⁴U. Additionally, amplifiers with high-ohm (10¹² - 10¹³ Ω) feedback resistors have been developed to
Expressions of homosexuality and the perspective of analytical psychology.
Miller, Barry
2010-02-01
Homosexuality, as a description and category of human experience, has a long, complicated and problematic history. It has been utilized as a carrier of theological, political, and psychological ideologies of all sorts, with varying and contradictory influences into the lives of us all. Analytical psychology, emphasizing the purposiveness found in manifestations of the psyche, offers a unique approach to this subject. The focus moves from causation to the meanings embedded in erotic expressions, fantasies, and dreams. Consequently, homosexuality loses its predetermined meaning and finds its definition in the psychology of the individual. Categories of 'sexual orientation' may defend against personal analysis, deflecting the essential fluidity and mystery of Eros. This is illustrated with samples of the variety found in 'homosexual' material. PMID:20433499
Analytic expressions for geometric measure of three-qubit states
Tamaryan, Levon; Park, DaeKil; Tamaryan, Sayatnova
2008-02-15
A method is developed to derive algebraic equations for the geometric measure of entanglement of three-qubit pure states. The equations are derived explicitly and solved in the cases of most interest. These equations allow one to derive analytic expressions of the geometric entanglement measure in a wide range of three-qubit systems, including the general class of W states and states which are symmetric under the permutation of two qubits. The nearest separable states are not necessarily unique, and highly entangled states are surrounded by a one-parametric set of equally distant separable states. A possibility for physical applications of the various three-qubit states to quantum teleportation and superdense coding is suggested from the aspect of entanglement.
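For the W state specifically, the maximal squared overlap with product states is known to be 4/9, so the geometric entanglement measure is 1 − 4/9 = 5/9. Because the nearest product state of a permutation-symmetric state can be taken symmetric, a one-parameter grid search suffices to check this numerically (an illustration, not the paper's algebraic derivation):

```python
import math

def w_overlap_sq(theta):
    """Squared overlap of |W> = (|001>+|010>+|100>)/sqrt(3) with the symmetric
    product state (cos t |0> + sin t |1>)^x3: |<W|sss>|^2 = 3 cos^4(t) sin^2(t)."""
    return 3.0 * math.cos(theta) ** 4 * math.sin(theta) ** 2

# Grid search over t in [0, pi/2) for the maximal squared overlap Lambda_max^2.
best = max(w_overlap_sq(i * 1e-4) for i in range(int(math.pi / 2 / 1e-4)))
geometric_entanglement = 1.0 - best   # expected 1 - 4/9 = 5/9
```

The maximum occurs at sin²θ = 1/3, which is exactly the nearest-separable-state condition for |W⟩.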
Analytic expressions of quantum correlations in qutrit Werner states
NASA Astrophysics Data System (ADS)
Ye, Biaoliang; Liu, Yimin; Chen, Jianlan; Liu, Xiansong; Zhang, Zhanjun
2013-07-01
Quantum correlations in qutrit Werner states are extensively investigated with five popular methods, namely, original quantum discord (OQD) (Ollivier and Zurek in Phys Rev Lett 88:017901, 2001), measurement-induced disturbance (MID) (Luo in Phys Rev A 77:022301, 2008), ameliorated MID (AMID) (Girolami et al. in J Phys A Math Theor 44:352002, 2011), relative entropy (RE) (Modi et al. in Phys Rev Lett 104:080501, 2010) and geometric discord (GD) (Dakić et al. in Phys Rev Lett 105:190502, 2010). Two different analytic expressions of quantum correlations are derived. Quantum correlations captured by the former four methods are same and bigger than those obtained via the GD method. Nonetheless, they all qualitatively characterize quantum correlations in the concerned states. Moreover, as same as the qubit case, there exist quantum correlations in separable qutrit Werner states, too.
NASA Astrophysics Data System (ADS)
Guo, Kongming; Jiang, Jun; Xu, Yalan
2016-09-01
In this paper, a simple but accurate semi-analytical method to approximate the probability density function of stochastic closed-curve attractors is proposed. The resulting expression applies to systems with strong nonlinearities, provided the noise is weak. Based on the observation that additive noise does not change the longitudinal distribution along the attractor, the high-dimensional probability density is decomposed into two low-dimensional distributions: the longitudinal and the transverse probability density distributions. The longitudinal distribution can be calculated from the deterministic system, while the probability density in the transverse direction of the curve can be approximated by the stochastic sensitivity function method. The effectiveness of this approach is verified by comparing the analytical distribution with the results of Monte Carlo numerical simulations in several planar systems.
Onken, Michael D.; Worley, Lori A.; Tuscan, Meghan D.; Harbour, J. William
2010-01-01
Uveal (ocular) melanoma is an aggressive cancer that often forms undetectable micrometastases before diagnosis of the primary tumor. These micrometastases later multiply to generate metastatic tumors that are resistant to therapy and are uniformly fatal. We have previously identified a gene expression profile derived from the primary tumor that is extremely accurate for identifying patients at high risk of metastatic disease. Development of a practical clinically feasible platform for analyzing this expression profile would benefit high-risk patients through intensified metastatic surveillance, earlier intervention for metastasis, and stratification for entry into clinical trials of adjuvant therapy. Here, we migrate the expression profile from a hybridization-based microarray platform to a robust, clinically practical, PCR-based 15-gene assay comprising 12 discriminating genes and three endogenous control genes. We analyze the technical performance of the assay in a prospective study of 609 tumor samples, including 421 samples sent from distant locations. We show that the assay can be performed accurately on fine needle aspirate biopsy samples, even when the quantity of RNA is below detectable limits. Preliminary outcome data from the prospective study affirm the prognostic accuracy of the assay. This prognostic assay provides an important addition to the armamentarium for managing patients with uveal melanoma, and it provides a proof of principle for the development of similar assays for other cancers. PMID:20413675
Transcriptional Bursting in Gene Expression: Analytical Results for General Stochastic Models
Kumar, Niraj; Singh, Abhyudai; Kulkarni, Rahul V.
2015-01-01
Gene expression in individual cells is highly variable and sporadic, often resulting in the synthesis of mRNAs and proteins in bursts. Such bursting has important consequences for cell-fate decisions in diverse processes ranging from HIV-1 viral infections to stem-cell differentiation. It is generally assumed that bursts are geometrically distributed and that they arrive according to a Poisson process. On the other hand, recent single-cell experiments provide evidence for complex burst arrival processes, highlighting the need for analysis of more general stochastic models. To address this issue, we invoke a mapping between general stochastic models of gene expression and systems studied in queueing theory to derive exact analytical expressions for the moments associated with mRNA/protein steady-state distributions. These results are then used to derive noise signatures, i.e. explicit conditions based entirely on experimentally measurable quantities, that determine if the burst distributions deviate from the geometric distribution or if burst arrival deviates from a Poisson process. For non-Poisson arrivals, we develop approaches for accurate estimation of burst parameters. The proposed approaches can lead to new insights into transcriptional bursting based on measurements of steady-state mRNA/protein distributions. PMID:26474290
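A standard special case of the general models discussed above is geometrically distributed bursts arriving as a Poisson process with linear mRNA degradation, for which the stationary Fano factor equals the mean burst size b. A Gillespie-style sketch (assumed parameter values; a textbook bursty birth-death model, not the authors' queueing machinery) that recovers this numerically:

```python
import random

def simulate_bursty_mrna(k_burst, mean_burst, k_deg, t_end, seed=1):
    """Gillespie simulation of mRNA produced in geometric bursts (mean size b)
    arriving as a Poisson process, degraded at rate k_deg per molecule.
    Returns the time-averaged mean and Fano factor; theory predicts Fano = b."""
    rng = random.Random(seed)
    p = 1.0 / mean_burst          # success prob. of geometric burst size on {1,2,...}
    t, m = 0.0, 0
    s1 = s2 = 0.0                  # time-weighted sums of m and m^2
    while t < t_end:
        rate = k_burst + k_deg * m
        dt = rng.expovariate(rate)
        s1 += m * dt
        s2 += m * m * dt
        t += dt
        if rng.random() < k_burst / rate:
            size = 1               # sample a geometric burst size
            while rng.random() > p:
                size += 1
            m += size
        else:
            m -= 1                 # degradation of one transcript
    mean = s1 / t
    var = s2 / t - mean * mean
    return mean, var / mean
```

With burst rate k, mean burst size b and degradation rate γ, the stationary mean is kb/γ; deviations of the measured Fano factor from b are exactly the kind of "noise signature" the paper formalizes for non-geometric or non-Poisson models.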
Fast and accurate approximate inference of transcript expression from RNA-seq data
Hensman, James; Papastamoulis, Panagiotis; Glaus, Peter; Honkela, Antti; Rattray, Magnus
2015-01-01
Motivation: Assigning RNA-seq reads to their transcript of origin is a fundamental task in transcript expression estimation. Where ambiguities in assignments exist due to transcripts sharing sequence, e.g. alternative isoforms or alleles, the problem can be solved through probabilistic inference. Bayesian methods have been shown to provide accurate transcript abundance estimates compared with competing methods. However, exact Bayesian inference is intractable and approximate methods such as Markov chain Monte Carlo and Variational Bayes (VB) are typically used. While providing a high degree of accuracy and modelling flexibility, standard implementations can be prohibitively slow for large datasets and complex transcriptome annotations. Results: We propose a novel approximate inference scheme based on VB and apply it to an existing model of transcript expression inference from RNA-seq data. Recent advances in VB algorithmics are used to improve the convergence of the algorithm beyond the standard Variational Bayes Expectation Maximization algorithm. We apply our algorithm to simulated and biological datasets, demonstrating a significant increase in speed with only very small loss in accuracy of expression level estimation. We carry out a comparative study against seven popular alternative methods and demonstrate that our new algorithm provides excellent accuracy and inter-replicate consistency while remaining competitive in computation time. Availability and implementation: The methods were implemented in R and C++, and are available as part of the BitSeq project at github.com/BitSeq. The method is also available through the BitSeq Bioconductor package. The source code to reproduce all simulation results can be accessed via github.com/BitSeq/BitSeqVB_benchmarking. Contact: james.hensman@sheffield.ac.uk or panagiotis.papastamoulis@manchester.ac.uk or Magnus.Rattray@manchester.ac.uk Supplementary information: Supplementary data are available at Bioinformatics online
Accurate Gene Expression-Based Biodosimetry Using a Minimal Set of Human Gene Transcripts
Tucker, James D.; Joiner, Michael C.; Thomas, Robert A.; Grever, William E.; Bakhmutsky, Marina V.; Chinkhota, Chantelle N.; Smolinski, Joseph M.; Divine, George W.; Auner, Gregory W.
2014-03-15
Purpose: Rapid and reliable methods for conducting biological dosimetry are a necessity in the event of a large-scale nuclear event. Conventional biodosimetry methods lack the speed, portability, ease of use, and low cost required for triaging numerous victims. Here we address this need by showing that polymerase chain reaction (PCR) on a small number of gene transcripts can provide accurate and rapid dosimetry. The low cost and relative ease of PCR compared with existing dosimetry methods suggest that this approach may be useful in mass-casualty triage situations. Methods and Materials: Human peripheral blood from 60 adult donors was acutely exposed to cobalt-60 gamma rays at doses of 0 (control) to 10 Gy. mRNA expression levels of 121 selected genes were obtained 0.5, 1, and 2 days after exposure by reverse-transcriptase real-time PCR. Optimal dosimetry at each time point was obtained by stepwise regression of dose received against individual gene transcript expression levels. Results: Only 3 to 4 different gene transcripts, ASTN2, CDKN1A, GDF15, and ATM, are needed to explain ≥0.87 of the variance (R²). Receiver-operator characteristics, a measure of sensitivity and specificity, of 0.98 for these statistical models were achieved at each time point. Conclusions: The actual and predicted radiation doses agree very closely up to 6 Gy. Dosimetry at 8 and 10 Gy shows some effect of saturation, thereby slightly diminishing the ability to quantify higher exposures. Analyses of these gene transcripts may be advantageous for use in a field-portable device designed to assess exposures in mass casualty situations or in clinical radiation emergencies.
Analytic expressions for α-particle preformation in heavy nuclei
Zhang, H. F.; Wang, Y. J.; Dong, J. M.; Royer, G.
2009-11-15
Experimental α-decay energies and half-lives are investigated systematically to extract α-particle preformation factors in heavy nuclei. Formulas for the preformation factors are proposed that can be used to guide microscopic studies of preformation and to perform accurate calculations of α-decay half-lives. There is little evidence for the existence of an island of long stability of superheavy nuclei.
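Half-life formulas of this type are typically Geiger-Nuttall-like in Z/√Q. The sketch below uses a Royer-type parameterization with even-even coefficients that should be treated as assumed illustrative values, not the formulas proposed in this paper:

```python
import math

# Royer-type even-even coefficients (assumed illustrative values).
A_COEF, B_COEF, C_COEF = -25.31, -1.1629, 1.5864

def log10_half_life(z, a, q_alpha):
    """log10 of the alpha-decay half-life (s) of an even-even parent (Z, A),
    with decay energy q_alpha in MeV: Geiger-Nuttall-like in Z/sqrt(Q)."""
    return (A_COEF
            + B_COEF * a ** (1.0 / 6.0) * math.sqrt(z)
            + C_COEF * z / math.sqrt(q_alpha))
```

For ²¹²Po (Z = 84, A = 212, Qα ≈ 8.95 MeV) this kind of estimate lands within roughly an order of magnitude of the sub-microsecond experimental half-life, illustrating the extreme Q-value sensitivity that preformation-factor corrections refine.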
On the analytical form of the Earth's magnetic attraction expressed as a function of time
NASA Technical Reports Server (NTRS)
Carlheim-Gyllenskold, V.
1983-01-01
An attempt is made to express the Earth's magnetic attraction in simple analytical form using observations during the 16th to 19th centuries. Observations of the magnetic inclination in the 16th and 17th centuries are discussed.
Simple Analytic Expressions for the Magnetic Field of a Circular Current Loop
NASA Technical Reports Server (NTRS)
Simpson, James C.; Lane, John E.; Immer, Christopher D.; Youngquist, Robert C.; Steinrock, Todd (Technical Monitor)
2001-01-01
Analytic expressions for the magnetic induction and its spatial derivatives for a circular loop carrying a static current are presented in Cartesian, spherical and cylindrical coordinates. The solutions are exact throughout all space outside the conductor.
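In cylindrical coordinates the classic closed-form solution for the loop field involves the complete elliptic integrals K and E. A self-contained sketch (standard textbook expressions, evaluated with an AGM iteration; consistency is checked against the elementary on-axis formula, which the paper's general expressions must reduce to):

```python
import math

MU0 = 4e-7 * math.pi  # vacuum permeability, T m / A

def _ellipk_ellipe(k2):
    """Complete elliptic integrals K(k), E(k), argument k^2, via the AGM iteration."""
    a, b, c = 1.0, math.sqrt(1.0 - k2), math.sqrt(k2)
    s, pow2 = 0.5 * c * c, 1.0
    while abs(c) > 1e-15:
        a, b, c = 0.5 * (a + b), math.sqrt(a * b), 0.5 * (a - b)
        pow2 *= 2.0
        s += 0.5 * pow2 * c * c
    K = math.pi / (2.0 * a)
    return K, K * (1.0 - s)

def loop_b_field(current, a, rho, z):
    """(B_rho, B_z) of a circular loop of radius a in the z = 0 plane carrying
    `current`, at the cylindrical point (rho, z), SI units. Not valid on the
    conductor itself (rho = a, z = 0), where the field diverges."""
    d1 = (a + rho) ** 2 + z ** 2
    d2 = (a - rho) ** 2 + z ** 2
    K, E = _ellipk_ellipe(4.0 * a * rho / d1)
    pref = MU0 * current / (2.0 * math.pi * math.sqrt(d1))
    bz = pref * (K + (a * a - rho * rho - z * z) / d2 * E)
    if rho == 0.0:
        return 0.0, bz            # on axis B_rho vanishes by symmetry
    brho = pref * (z / rho) * (-K + (a * a + rho * rho + z * z) / d2 * E)
    return brho, bz
```

On the axis (ρ = 0) the expression collapses to the familiar B_z = μ₀ I a² / [2 (a² + z²)^(3/2)], a useful regression test for any implementation of the full field.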
Analytical expression for the inverted inertia matrix of serial robots
Saha, S.K.
1999-01-01
This paper presents the analytical derivation of the inertia matrix and its inverse for an open-loop, serial-chain robot. The derivation allows one to write a recursive forward-dynamics algorithm for simulation purposes whose computational complexity is of order n, i.e., O(n), where n is the number of degrees of freedom of the robot under study. The proposed methodology is based on Gaussian elimination of the inertia matrix, in contrast to, say, Kalman filtering, which is proposed elsewhere. The derivation is illustrated with a three-degrees-of-freedom planar robot.
Analytical workflow profiling gene expression in murine macrophages
Nixon, Scott E.; González-Peña, Dianelys; Lawson, Marcus A.; McCusker, Robert H.; Hernandez, Alvaro G.; O’Connor, Jason C.; Dantzer, Robert; Kelley, Keith W.
2015-01-01
Comprehensive and simultaneous analysis of all genes in a biological sample is a capability of RNA-Seq technology. Analysis of the entire transcriptome benefits from summarization of genes at the functional level. As a cellular response of interest not previously explored with RNA-Seq, peritoneal macrophages from mice under two conditions (control and immunologically challenged) were analyzed for gene expression differences. Quantification of individual transcripts modeled RNA-Seq read distribution and uncertainty (using a Beta Negative Binomial distribution), then tested for differential transcript expression (False Discovery Rate-adjusted p-value < 0.05). Enrichment of functional categories utilized the list of differentially expressed genes. A total of 2079 differentially expressed transcripts representing 1884 genes were detected. Enrichment of 92 categories from Gene Ontology Biological Processes and Molecular Functions, and KEGG pathways were grouped into 6 clusters. Clusters included defense and inflammatory response (Enrichment Score = 11.24) and ribosomal activity (Enrichment Score = 17.89). Our work provides a context to the fine detail of individual gene expression differences in murine peritoneal macrophages during immunological challenge with high throughput RNA-Seq. PMID:25708305
Analytic Expressions for the BCDMEM Model of Recognition Memory
Myung, Jay I.; Montenegro, Maximiliano; Pitt, Mark A.
2007-01-01
We introduce a Fourier Transformation technique that enables one to derive closed-form expressions of performance measures (e.g., hit and false alarm rates) of simulation-based models of recognition memory. Application of the technique is demonstrated using the bind cue decide model of episodic memory (BCDMEM; Dennis & Humphreys, 2001). In addition to reducing the time required to test the model, which for models like BCDMEM can be excessive, asymptotic expressions of the measures reveal heretofore unknown properties of the model, such as model predictions being dependent on vector length. PMID:18516213
Analytical Expressions for Deformation from an Arbitrarily Oriented Spheroid in a Half-Space
NASA Astrophysics Data System (ADS)
Cervelli, P. F.
2013-12-01
Deformation from magma chambers can be modeled by an elastic half-space with an embedded cavity subject to uniform pressure change along its interior surface. For a small number of cavity shapes, such as a sphere or a prolate spheroid, closed-form, analytical expressions for deformation have been derived, although these only approximate the uniform-pressure-change boundary condition, with the approximation becoming more accurate as the ratio of source depth to source dimension increases. Using the method of Eshelby [1957] and Yang [1988], which consists of a distribution of double forces and centers of dilatation along the vertical axis, I have derived expressions for displacement from a finite spheroid of arbitrary orientation and aspect ratio that are exact in an infinite elastic medium and approximate in a half-space. The approximation, like those for other cavity shapes, becomes increasingly accurate as the depth-to-source ratio grows larger, and is accurate to within a few percent in most real-world cases. I have also derived expressions for the deformation-gradient tensor, i.e., the derivatives of each component of displacement with respect to each coordinate direction. These can be transformed easily into the strain and stress tensors. The expressions give deformation both at the surface and at any point within the half-space, and include conditional statements that account for limiting cases that would otherwise prove singular. I have developed MATLAB code for these expressions (and their derivatives), which I use to demonstrate the accuracy of the approximation by showing how well the uniform-pressure-change boundary condition is satisfied in a variety of cases. I also show that a vertical, oblate spheroid with a zero-length vertical axis is equivalent to the penny-shaped crack of Fialko [2001] in an infinite medium and an excellent approximation in a half-space. Finally, because, in many cases, volume change is more tangible than pressure change, I have
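For intuition, the deep/point-source limit of such cavity sources is the familiar Mogi model, in which surface displacements scale as ΔV/R³. A sketch of that limiting case only (an assumed textbook form, not the finite-spheroid expressions derived in this work):

```python
import math

def mogi_displacement(delta_v, depth, r, nu=0.25):
    """Surface displacement (u_r, u_z) of a point pressure source (Mogi model)
    at horizontal distance r from a source of volume change delta_v at depth d:
    u_r = (1-nu)/pi * dV * r / R^3,  u_z = (1-nu)/pi * dV * d / R^3,
    with R^2 = r^2 + d^2. SI units; valid when depth >> source radius."""
    R3 = (r * r + depth * depth) ** 1.5
    c = (1.0 - nu) / math.pi * delta_v
    return c * r / R3, c * depth / R3
```

Two properties worth checking in any implementation: uplift peaks directly above the source, and the ratio u_r/u_z equals r/d everywhere, independent of ΔV.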
NASA Astrophysics Data System (ADS)
Roy Choudhury, Raja; Roy Choudhury, Arundhati; Kanti Ghose, Mrinal
2013-01-01
A semi-analytical model with three optimizing parameters and a novel non-Gaussian function as the fundamental modal field solution has been proposed to arrive at an accurate solution for predicting various propagation parameters of graded-index fibers with less computational burden than numerical methods. In our semi-analytical formulation, the optimization of the core parameter U, which is usually uncertain, noisy or even discontinuous, is carried out by the Nelder-Mead method of nonlinear unconstrained minimization, as it is an efficient and compact direct-search method that does not need any derivative information. Three optimizing parameters are included in the formulation of the fundamental modal field of an optical fiber to make it more flexible and accurate than other available approximations. Employing a variational technique, Petermann I and II spot sizes have been evaluated for triangular- and trapezoidal-index fibers with the proposed fundamental modal field. It has been demonstrated that the results of the proposed solution identically match the numerical results over a wide range of normalized frequencies. This approximation can also be used in the study of doped and nonlinear fiber amplifiers.
NASA Astrophysics Data System (ADS)
Shi, De-Heng; Liu, Yu-Fang; Sun, Jin-Feng; Zhu, Zun-Lue; Yang, Xiang-Dong
2006-12-01
The reasonable dissociation limit of the second excited singlet state B¹Π of the ⁷LiH molecule is obtained. The accurate dissociation energy and equilibrium geometry of the B¹Π state are calculated using a symmetry-adapted-cluster configuration-interaction method in full active space. The whole potential energy curve for the B¹Π state is obtained over internuclear distances ranging from about 0.10 nm to 0.54 nm and is fitted by least squares to the analytic Murrell-Sorbie function. The vertical excitation energy from the ground state to the B¹Π state is calculated and compared with previous theoretical results. The equilibrium internuclear distance obtained by geometry optimization is found to be quite different from that obtained by single-point energy scanning under the same calculation conditions. Based on the analytic potential energy function, the harmonic frequency of the B¹Π state is estimated. A comparison of the calculated dissociation energies, equilibrium interatomic distances and the analytic potential energy function with previous theoretical results clearly shows that the present work is more comprehensive and in better agreement with experiment than previous theories, and is thus an improvement on them.
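The Murrell-Sorbie form is V(ρ) = −De(1 + a₁ρ + a₂ρ² + a₃ρ³)e^(−a₁ρ) with ρ = r − re; it is stationary at re by construction, and V''(re) = De(a₁² − 2a₂), from which a harmonic frequency ω = √(V''(re)/μ) can be read off, as the abstract describes. A sketch with hypothetical coefficients (placeholders, not the fitted ⁷LiH values):

```python
import math

DE, A1, A2, A3 = 1.0, 2.0, 1.0, 0.3   # hypothetical Murrell-Sorbie parameters

def v_ms(rho):
    """Murrell-Sorbie potential, rho = r - re; V(0) = -De at the minimum."""
    return -DE * (1.0 + A1 * rho + A2 * rho ** 2 + A3 * rho ** 3) * math.exp(-A1 * rho)

def second_derivative(f, x, h=1e-5):
    """Central-difference f''(x), used to read off the harmonic force constant."""
    return (f(x + h) - 2.0 * f(x) + f(x - h)) / (h * h)
```

The finite-difference force constant at ρ = 0 should reproduce the analytic value De(a₁² − 2a₂), which is the quantity that fixes the harmonic frequency estimate.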
Koot, Yvonne E. M.; van Hooff, Sander R.; Boomsma, Carolien M.; van Leenen, Dik; Groot Koerkamp, Marian J. A.; Goddijn, Mariëtte; Eijkemans, Marinus J. C.; Fauser, Bart C. J. M.; Holstege, Frank C. P.; Macklon, Nick S.
2016-01-01
The primary limiting factor for effective IVF treatment is successful embryo implantation. Recurrent implantation failure (RIF) is a condition whereby couples fail to achieve pregnancy despite consecutive embryo transfers. Here we describe the collection of gene expression profiles from mid-luteal phase endometrial biopsies (n = 115) from women experiencing RIF and healthy controls. Using a signature discovery set (n = 81) we identify a signature containing 303 genes predictive of RIF. Independent validation in 34 samples shows that the gene signature predicts RIF with 100% positive predictive value (PPV). The strength of the RIF associated expression signature also stratifies RIF patients into distinct groups with different subsequent implantation success rates. Exploration of the expression changes suggests that RIF is primarily associated with reduced cellular proliferation. The gene signature will be of value in counselling and guiding further treatment of women who fail to conceive upon IVF and suggests new avenues for developing intervention. PMID:26797113
NASA Astrophysics Data System (ADS)
Dubas, F.; Espanet, C.; Miraoui, A.
2007-05-01
An exact two-dimensional (2-D) analytical model (AM) of slotless permanent-magnet (PM) machines in polar coordinates is used to determine the analytical equations of the air-gap flux density at no-load operation. The authors show that, for a radial magnetization, there is an optimal magnet thickness which maximizes the no-load flux density. In order to use this optimal value easily and directly during the design of surface-mounted PM motors (SMPMM), the authors propose an original analytical expression for this maximum magnet thickness, obtained by interpolation of the values given by several analytical computations. This interpolation function can be applied to SMPMM having a parallel or radial magnetization direction.
Simple analytical expression for work function in the “nearest neighbour” approximation
NASA Astrophysics Data System (ADS)
Chrzanowski, J.; Kravtsov, Yu. A.
2011-01-01
A nonlocal operator of potential is suggested, based on the “nearest neighbour” approximation (NNA) for the single-electron wave function in metals. It is shown that the Schrödinger equation with this nonlocal potential leads to a quite simple analytical expression for the work function, which fits experimental data surprisingly well.
High expression of CD26 accurately identifies human bacteria-reactive MR1-restricted MAIT cells
Sharma, Prabhat K; Wong, Emily B; Napier, Ruth J; Bishai, William R; Ndung'u, Thumbi; Kasprowicz, Victoria O; Lewinsohn, Deborah A; Lewinsohn, David M; Gold, Marielle C
2015-01-01
Mucosa-associated invariant T (MAIT) cells express the semi-invariant T-cell receptor TRAV1–2 and detect a range of bacteria and fungi through the MHC-like molecule MR1. However, knowledge of the function and phenotype of bacteria-reactive MR1-restricted TRAV1–2+ MAIT cells from human blood is limited. We broadly characterized the function of MR1-restricted MAIT cells in response to bacteria-infected targets and defined a phenotypic panel to identify these cells in the circulation. We demonstrated that bacteria-reactive MR1-restricted T cells shared the effector functions of cytolytic effector CD8+ T cells. By analysing an extensive panel of phenotypic markers, we determined that CD26 and CD161 were most strongly associated with these T cells. Using FACS to sort phenotypically defined CD8+ subsets, we demonstrated that high expression of CD26 on CD8+ TRAV1–2+ cells identified bacteria-reactive MR1-restricted T cells from human blood with high specificity and sensitivity. CD161hi expression was also specific for these cells but lacked the sensitivity to identify all bacteria-reactive MR1-restricted T cells, some of which were CD161dim. Using cell surface expression of CD8, TRAV1–2, and CD26hi in the absence of stimulation, we confirm that bacteria-reactive T cells are lacking in the blood of individuals with active tuberculosis and are restored in the blood of individuals undergoing treatment for tuberculosis. PMID:25752900
A Stationary Wavelet Entropy-Based Clustering Approach Accurately Predicts Gene Expression
Nguyen, Nha; Vo, An; Choi, Inchan
2015-01-01
Studying epigenetic landscapes is important to understand the conditions for gene regulation. Clustering is a useful approach to study epigenetic landscapes by grouping genes based on their epigenetic conditions. However, classical clustering approaches that often use a representative value of the signals in a fixed-size window do not fully use the information written in the epigenetic landscapes. Clustering approaches that maximize the information of the epigenetic signals are necessary for a better understanding of gene regulatory environments. For effective clustering of multidimensional epigenetic signals, we developed a method called Dewer, which uses the entropy of the stationary wavelet transform of epigenetic signals inside enriched regions for gene clustering. Interestingly, the gene expression levels were highly correlated with the entropy levels of the epigenetic signals. Dewer separates genes better than a window-based approach in an assessment using gene expression, and achieved a correlation coefficient above 0.9 without using any training procedure. Our results show that the changes of the epigenetic signals are useful to study gene regulation. PMID:25383910
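The core idea of scoring a signal by the entropy of its stationary wavelet coefficients can be sketched in a few lines. The following is a minimal illustration, not Dewer's actual implementation: it uses a one-level undecimated Haar detail transform and the Shannon entropy of the normalized coefficient energies, so a signal whose detail energy is concentrated in a few positions scores lower than one whose energy is spread out.

```python
import numpy as np

def stationary_haar_detail(signal):
    """One-level undecimated (stationary) Haar detail coefficients:
    neighbouring-sample differences with no downsampling (circular edges)."""
    return (signal - np.roll(signal, 1)) / np.sqrt(2.0)

def wavelet_entropy(signal):
    """Shannon entropy (bits) of the normalized energies of the
    stationary-wavelet detail coefficients of `signal`."""
    d = stationary_haar_detail(np.asarray(signal, dtype=float))
    energy = d ** 2
    total = energy.sum()
    if total == 0.0:
        return 0.0                      # a constant signal carries no detail
    p = energy / total
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

# Energy concentrated at one jump -> low entropy; spread-out noise -> high.
spike = np.zeros(256)
spike[100] = 1.0
noise = np.random.default_rng(0).standard_normal(256)
low, high = wavelet_entropy(spike), wavelet_entropy(noise)
```

A real pipeline would apply a multi-level transform (e.g. via PyWavelets) per epigenetic mark and feed the resulting entropy vectors to a clustering algorithm.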
NASA Astrophysics Data System (ADS)
Roquet, F.; Madec, G.; McDougall, Trevor J.; Barker, Paul M.
2015-06-01
A new set of approximations to the standard TEOS-10 equation of state is presented. These follow a polynomial form, making them computationally efficient for use in numerical ocean models. Two versions are provided, the first being a fit of density for Boussinesq ocean models, and the second fitting specific volume, which is more suitable for compressible models. Both versions are given as the sum of a vertical reference profile (6th-order polynomial) and an anomaly (52-term polynomial, cubic in pressure), with relative errors of ∼0.1% on the thermal expansion coefficients. A 75-term polynomial expression is also presented for computing specific volume, with better accuracy than the existing TEOS-10 48-term rational approximation, especially regarding the sound speed, and it is suggested that this expression represents a valuable approximation of the TEOS-10 equation of state for hydrographic data analysis. In the last section, practical aspects of the implementation of TEOS-10 in ocean models are discussed.
Wood, David L A; Nones, Katia; Steptoe, Anita; Christ, Angelika; Harliwong, Ivon; Newell, Felicity; Bruxner, Timothy J C; Miller, David; Cloonan, Nicole; Grimmond, Sean M
2015-01-01
Genetic variation modulates gene expression transcriptionally or post-transcriptionally, and can profoundly alter an individual's phenotype. Measuring allelic differential expression at heterozygous loci within an individual, a phenomenon called allele-specific expression (ASE), can assist in identifying such factors. Massively parallel DNA and RNA sequencing and advances in bioinformatic methodologies provide an outstanding opportunity to measure ASE genome-wide. In this study, matched DNA and RNA sequencing, genotyping arrays and computationally phased haplotypes were integrated to comprehensively and conservatively quantify ASE in a single human brain and liver tissue sample. We describe a methodological evaluation and assessment of common bioinformatic steps for ASE quantification, and recommend a robust approach to accurately measure SNP, gene and isoform ASE through the use of personalized haplotype genome alignment, strict alignment quality control and intragenic SNP aggregation. Our results indicate that accurate ASE quantification requires careful bioinformatic analyses and is adversely affected by sample specific alignment confounders and random sampling even at moderate sequence depths. We identified multiple known and several novel ASE genes in liver, including WDR72, DSP and UBD, as well as genes that contained ASE SNPs with imbalance direction discordant with haplotype phase, explainable by annotated transcript structure, suggesting isoform derived ASE. The methods evaluated in this study will be of use to researchers performing highly conservative quantification of ASE, and the genes and isoforms identified as ASE of interest to researchers studying those loci. PMID:25965996
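One common way to flag ASE at a single heterozygous SNP, offered here only as an illustrative sketch rather than the exact filtering used in this study, is an exact binomial test of the reference-allele read count against the 50/50 expectation:

```python
from math import comb

def binomial_two_sided_p(k, n, p=0.5):
    """Exact two-sided binomial p-value: total probability of outcomes
    no more likely than the observed count k out of n."""
    pmf = [comb(n, i) * p ** i * (1 - p) ** (n - i) for i in range(n + 1)]
    obs = pmf[k]
    return min(1.0, sum(q for q in pmf if q <= obs + 1e-12))

def is_ase(ref_reads, alt_reads, alpha=0.05):
    """Flag allele-specific expression at a heterozygous SNP when the
    ref/alt read counts deviate significantly from 50/50."""
    return binomial_two_sided_p(ref_reads, ref_reads + alt_reads) < alpha
```

As the abstract notes, robust calls also require alignment quality control and aggregation of SNPs within a gene, which this per-SNP view does not capture.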
Zhang, Jing; Teixeira da Silva, Jaime A.; Wang, ChunXia; Sun, HongMei
2015-01-01
Lilium is an important commercial flower bulb. qRT-PCR is an extremely important technique to track gene expression levels. The requirement of suitable reference genes for normalization has become increasingly important and pressing. The expression of internal control genes in living organisms varies considerably under different experimental conditions. For economically important Lilium, only a limited number of reference genes applied in qRT-PCR have been reported to date. In this study, the expression stability of 12 candidate genes, including α-TUB, β-TUB, ACT, eIF, GAPDH, UBQ, UBC, 18S, 60S, AP4, FP, and RH2, was evaluated in a diverse set of 29 samples representing different developmental processes, three stress treatments (cold, heat, and salt), and different organs. For different organs, the combination of ACT, GAPDH, and UBQ is appropriate, whereas ACT together with AP4, or ACT along with GAPDH, is suitable for normalization of leaves and scales at different developmental stages, respectively. In leaves, scales, and roots under stress treatments, FP, ACT, and AP4, respectively, showed the most stable expression. This study provides a guide for the selection of a reference gene under different experimental conditions, and will benefit future research on more accurate gene expression studies in a wide variety of Lilium genotypes. PMID:26509446
Ewing, Michael A.; Zucker, Steven M.; Valentine, Stephen J.; Clemmer, David E.
2015-01-01
Mathematical expressions for the analytical duty cycle associated with different overtones in overtone mobility spectrometry are derived from the widths of the transmitted packets of ions under different instrumental operating conditions. Support for these derivations is provided through ion trajectory simulations. The outcome of the theory and simulations indicates that under all operating conditions there exists a limit or maximum observable overtone that will result in ion transmission. Implications of these findings on experimental design are discussed. PMID:23468094
NASA Technical Reports Server (NTRS)
Long, S. A. T.
1974-01-01
Analytical expressions are derived to first order for the rms position error in the triangulation solution of a point object in space for several ideal observation-station configurations. These expressions provide insights into the nature of the dependence of the rms position error on certain of the experimental parameters involved. The station geometries examined are: (1) the configuration of two arbitrarily located stations; (2) the symmetrical circular configuration of two or more stations with equal elevation angles; and (3) the circular configuration of more than two stations with equal elevation angles, when one of the stations is permitted to drift around the circle from its position of symmetry. The expressions for the rms position error are expressed as functions of the rms line-of-sight errors, the total number of stations of interest, and the elevation angles.
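To first order, the rms position error scales linearly with the rms line-of-sight error, and this can be checked numerically. The sketch below is a simplified 2-D Monte Carlo version of the two-station configuration (the study treats the full 3-D geometry); the station positions and target are illustrative:

```python
import numpy as np

def triangulate(s1, th1, s2, th2):
    """Intersect two bearing lines (2-D) from stations s1 and s2."""
    u1 = np.array([np.cos(th1), np.sin(th1)])
    u2 = np.array([np.cos(th2), np.sin(th2)])
    # Solve s1 + t1*u1 = s2 + t2*u2 for (t1, t2).
    t = np.linalg.solve(np.column_stack([u1, -u2]),
                        np.asarray(s2, float) - np.asarray(s1, float))
    return np.asarray(s1, float) + t[0] * u1

def rms_position_error(s1, s2, target, sigma, n=4000, seed=1):
    """Monte Carlo rms position error for rms line-of-sight error sigma."""
    rng = np.random.default_rng(seed)
    target = np.asarray(target, dtype=float)
    th1 = np.arctan2(target[1] - s1[1], target[0] - s1[0])
    th2 = np.arctan2(target[1] - s2[1], target[0] - s2[0])
    sq_errs = []
    for _ in range(n):
        p = triangulate(s1, th1 + sigma * rng.standard_normal(),
                        s2, th2 + sigma * rng.standard_normal())
        sq_errs.append(np.sum((p - target) ** 2))
    return float(np.sqrt(np.mean(sq_errs)))
```

Doubling the angular error roughly doubles the rms position error, as the first-order analysis predicts.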
Analytical Characteristics of a Noninvasive Gene Expression Assay for Pigmented Skin Lesions.
Yao, Zuxu; Allen, Talisha; Oakley, Margaret; Samons, Carol; Garrison, Darryl; Jansen, Burkhard
2016-08-01
We previously reported clinical performance of a novel noninvasive and quantitative PCR (qPCR)-based molecular diagnostic assay (the pigmented lesion assay; PLA) that differentiates primary cutaneous melanoma from benign pigmented skin lesions through two target gene signatures, LINC00518 (LINC) and preferentially expressed antigen in melanoma (PRAME). This study focuses on analytical characterization of this PLA, including qPCR specificity and sensitivity, optimization of RNA input in qPCR to achieve a desired diagnostic sensitivity and specificity, and analytical performance (repeatability and reproducibility) of this two-gene PLA. All target qPCRs demonstrated good specificity (100%) and sensitivity (with a limit of detection of 1-2 copies), which allows reliable detection of gene expression changes of LINC and PRAME between melanomas and nonmelanomas. Through normalizing RNA input in qPCR, we converted the traditional gene expression analyses to a binomial detection of gene transcripts (i.e., detected or not detected). By combining the binomial qPCR results of the two genes, an improved diagnostic sensitivity (raised from 52%–65% to 71% at 1 pg of total RNA input, and to 91% at 3 pg of total RNA input) was achieved. This two-gene PLA demonstrates high repeatability and reproducibility (coefficient of variation <3%) and all required analytical performance characteristics for the commercial processing of clinical samples. PMID:27505074
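The two-gene combination logic can be illustrated with a small sketch. The OR rule and the independence assumption below are simplifications for illustration, not necessarily the assay's actual decision rule:

```python
def pla_call(linc_detected, prame_detected):
    """Binomial two-gene call: positive when either transcript is detected."""
    return bool(linc_detected or prame_detected)

def combined_sensitivity(s_linc, s_prame):
    """Sensitivity of an 'either-gene-detected' call, assuming the two
    detections are independent (an illustrative simplification)."""
    return 1.0 - (1.0 - s_linc) * (1.0 - s_prame)
```

Under independence, two markers that each detect half of the cases would jointly reach 75% sensitivity, illustrating why combining the two binomial results raises sensitivity relative to either gene alone.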
DICOM-compatible format for analytical cytology data that can be expressed in XML
NASA Astrophysics Data System (ADS)
Leif, Robert C.; Leif, Suzanne B.
2001-05-01
Flow cytometry data can be directly mapped to the Digital Imaging and Communications in Medicine (DICOM) standard. A preliminary mapping of list-mode data to the DICOM Waveform Information Object will be presented. This mapping encompasses both flow and image list-mode data. Since list-mode data is also produced by digital slide microscopy, which has already been standardized under DICOM, both branches of Analytical Cytology can be united under the DICOM standard. This will result in the functionality of the present International Society for Analytical Cytology Flow Cytometry Standard, FCS, being significantly extended and the elimination of the previously reported FCS design deficiencies. Thus, the present Flow Cytometry Standard can and should be replaced by a DICOM standard. Expression of Analytical Cytology data in any other format, such as XML, can be made interoperable with DICOM by employing the DICOM data types. A fragment of an XML Schema has been created, which demonstrates the feasibility of expressing DICOM data types in XML syntax. The extension of DICOM to include Flow Cytometry will have the benefits of 1) retiring the present FCS, 2) providing a standard that is ubiquitous, internationally accepted, and backed by the medical profession, and 3) inter-operating with the existing medical informatics infrastructure.
A new expression of Ns versus Ef to an accurate control charge model for AlGaAs/GaAs
NASA Astrophysics Data System (ADS)
Bouneb, I.; Kerrour, F.
2016-03-01
Semiconductor components have become the privileged medium of information and communication, thanks in particular to the development of the internet. Today, MOS transistors on silicon largely dominate the semiconductor market; however, shrinking the transistor gate length is no longer enough to enhance performance and keep pace with Moore's law, particularly for broadband telecommunication systems, where faster components are required. For this reason, alternative structures such as IV-IV or III-V heterostructures [1] have been proposed. The most effective component in this area is the high electron mobility transistor (HEMT) on a III-V substrate. This work investigates an approach contributing to the development of a numerical model based on physical and numerical modelling of the potential at the AlGaAs/GaAs heterostructure interface. We developed a calculation using projective methods that allows integration of the Hamiltonian using Green functions in the Schrödinger equation, for a rigorous self-consistent resolution with the Poisson equation. A simple analytical approach for charge control in the quantum-well region of an AlGaAs/GaAs HEMT structure is presented. A charge-control equation accounting for a variable average distance of the 2-DEG from the interface is introduced. Our approach, which aims to obtain the ns-Vg characteristics, is mainly based on a new linear expression for the variation of the Fermi level with the two-dimensional electron gas density, on the notion of effective doping, and on a new expression for ΔEc.
Simple Analytic Expressions for the Magnetic Field of a Circular Current Loop
NASA Technical Reports Server (NTRS)
Simpson, James C.; Lane, John E.; Immer, Christopher D.; Youngquist, Robert C.
2001-01-01
Analytic expressions for the magnetic induction (magnetic flux density, B) of a simple planar circular current loop have been published in Cartesian and cylindrical coordinates [1,2], and are also known implicitly in spherical coordinates [3]. In this paper, we present explicit analytic expressions for B and its spatial derivatives in Cartesian, cylindrical, and spherical coordinates for a filamentary current loop. These results were obtained with extensive use of Mathematica and are exact throughout all space outside of the conductor. The field expressions reduce to the well-known limiting cases and satisfy ∇ · B = 0 and ∇ × B = 0 outside the conductor. These results are general and applicable to any model using filamentary circular current loops. Solenoids of arbitrary size may be easily modeled by approximating the total magnetic induction as the sum of those for the individual loops. The inclusion of the spatial derivatives expands their utility to magnetohydrodynamics, where the derivatives are required. The equations can be coded into any high-level programming language. It is necessary to numerically evaluate complete elliptic integrals of the first and second kind, but this capability is now available with most programming packages.
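The cylindrical-coordinate form of these expressions, written in terms of the complete elliptic integrals K and E, can be coded directly. A sketch in SI units follows; on the axis it reduces to the textbook result Bz = μ0 I a² / (2 (a² + z²)^(3/2)), which makes a convenient check.

```python
import numpy as np
from scipy.special import ellipk, ellipe  # complete elliptic integrals, parameter m = k^2

MU0 = 4e-7 * np.pi  # vacuum permeability (T·m/A)

def loop_field(a, I, rho, z):
    """B-field (B_rho, B_z) of a filamentary circular loop of radius a
    carrying current I, at cylindrical coordinates (rho, z); exact
    everywhere off the conductor."""
    Q = (a + rho) ** 2 + z ** 2
    m = 4.0 * a * rho / Q
    K, E = ellipk(m), ellipe(m)
    pref = MU0 * I / (2.0 * np.pi * np.sqrt(Q))
    denom = (a - rho) ** 2 + z ** 2
    Bz = pref * (K + E * (a ** 2 - rho ** 2 - z ** 2) / denom)
    if rho == 0.0:
        return 0.0, Bz  # on the axis, B_rho vanishes by symmetry
    Brho = pref * (z / rho) * (E * (a ** 2 + rho ** 2 + z ** 2) / denom - K)
    return Brho, Bz
```

Summing `loop_field` over a stack of loops models a solenoid, as the abstract suggests.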
Ryan, Natalia; Chorley, Brian; Tice, Raymond R; Judson, Richard; Corton, J Christopher
2016-05-01
Microarray profiling of chemical-induced effects is being increasingly used in medium- and high-throughput formats. Computational methods are described here to identify molecular targets from whole-genome microarray data using as an example the estrogen receptor α (ERα), often modulated by potential endocrine disrupting chemicals. ERα biomarker genes were identified by their consistent expression after exposure to 7 structurally diverse ERα agonists and 3 ERα antagonists in ERα-positive MCF-7 cells. Most of the biomarker genes were shown to be directly regulated by ERα as determined by ESR1 gene knockdown using siRNA as well as through chromatin immunoprecipitation coupled with DNA sequencing analysis of ERα-DNA interactions. The biomarker was evaluated as a predictive tool using the fold-change rank-based Running Fisher algorithm by comparison to annotated gene expression datasets from experiments using MCF-7 cells, including those evaluating the transcriptional effects of hormones and chemicals. Using 141 comparisons from chemical- and hormone-treated cells, the biomarker gave a balanced accuracy for prediction of ERα activation or suppression of 94% and 93%, respectively. The biomarker was able to correctly classify 18 out of 21 (86%) ER reference chemicals including "very weak" agonists. Importantly, the biomarker predictions accurately replicated predictions based on 18 in vitro high-throughput screening assays that queried different steps in ERα signaling. For 114 chemicals, the balanced accuracies were 95% and 98% for activation or suppression, respectively. These results demonstrate that the ERα gene expression biomarker can accurately identify ERα modulators in large collections of microarray data derived from MCF-7 cells. PMID:26865669
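Balanced accuracy, the headline metric above, is simply the mean of sensitivity and specificity. A minimal helper:

```python
def balanced_accuracy(tp, fn, tn, fp):
    """Balanced accuracy: mean of sensitivity and specificity, robust to
    class imbalance across a chemical-screening comparison set."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return 0.5 * (sensitivity + specificity)
```

Unlike raw accuracy, this score is not inflated when, say, inactive chemicals greatly outnumber ERα modulators in the comparison set.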
NASA Astrophysics Data System (ADS)
Ahmad, Imad S.; Rao Gudimetla, V. S.
2002-12-01
Using the nonlinear Volterra series representation, analytical expressions for the third-order intermodulation distortion power and intercept point for a MESFET small-signal amplifier are derived when its equivalent circuit is bilateral and includes the gate-to-drain capacitance ( Cgd) explicitly as a nonlinear element. Previously developed analytical expressions treated Cgd as a linear element or incorporated it as a part of gate-to-source and drain-to-source capacitances ( Cgs and Cds). These new analytical expressions are then compared with experimental data and good agreement is obtained. The analytical expressions are also used to study the variation of intermodulation distortion with input power and frequency, and the effect of the individual nonlinear elements in the MESFET's equivalent circuit.
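When applying such expressions to measured two-tone data, the intercept point is commonly extracted from the fundamental and IM3 output powers using the 3-dB-per-dB slope of the third-order product. A small helper reflecting standard RF practice (not code from this work):

```python
def output_ip3_dbm(p_fund_dbm, p_im3_dbm):
    """Two-tone third-order intercept point at the output (dBm):
    OIP3 = P_fund + (P_fund - P_IM3) / 2, from the 3-dB/dB IM3 slope."""
    return p_fund_dbm + 0.5 * (p_fund_dbm - p_im3_dbm)

def input_ip3_dbm(p_fund_dbm, p_im3_dbm, gain_db):
    """Refer the intercept point to the amplifier input."""
    return output_ip3_dbm(p_fund_dbm, p_im3_dbm) - gain_db
```

For example, a 0 dBm fundamental with IM3 products at -40 dBm implies an OIP3 of +20 dBm.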
The first analytical expression to estimate photometric redshifts suggested by a machine
NASA Astrophysics Data System (ADS)
Krone-Martins, A.; Ishida, E. E. O.; de Souza, R. S.
2014-09-01
We report the first analytical expression purely constructed by a machine to determine photometric redshifts (zphot) of galaxies. A simple and reliable functional form is derived using 41 214 galaxies from the Sloan Digital Sky Survey Data Release 10 (SDSS-DR10) spectroscopic sample. The method automatically dropped the u and z bands, relying only on g, r and i for the final solution. Applying this expression to another 1 417 181 SDSS-DR10 galaxies with measured spectroscopic redshifts (zspec), we achieved a mean <(zphot - zspec)/(1 + zspec)> ≲ 0.0086 and a scatter σ((zphot - zspec)/(1 + zspec)) ≲ 0.045 when averaged up to z ≲ 1.0. The method was also applied to the PHAT0 data set, confirming the competitiveness of our results when faced with other methods from the literature. This is the first use of symbolic regression in cosmology, representing a leap forward in the astronomy-data-mining connection.
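The two quality metrics quoted above are straightforward to compute from matched photometric and spectroscopic redshifts. A minimal sketch:

```python
import numpy as np

def photoz_metrics(z_phot, z_spec):
    """Mean normalized residual and scatter used to score photometric
    redshifts: d = (z_phot - z_spec) / (1 + z_spec)."""
    z_phot = np.asarray(z_phot, dtype=float)
    z_spec = np.asarray(z_spec, dtype=float)
    d = (z_phot - z_spec) / (1.0 + z_spec)
    return float(d.mean()), float(d.std())
```

The (1 + z_spec) normalization keeps the residuals comparable across the whole redshift range.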
Ohshima, Hiroyuki
2015-12-29
An approximate analytic expression for the electrophoretic mobility of an infinitely long cylindrical colloidal particle in a symmetrical electrolyte solution in a transverse electric field is obtained. This mobility expression, which is correct to the order of the third power of the zeta potential ζ of the particle, considerably improves Henry's mobility formula correct to the order of the first power of ζ (Proc. R. Soc. London, Ser. A 1931, 133, 106). Comparison with the numerical calculations by Stigter (J. Phys. Chem. 1978, 82, 1417) shows that the obtained mobility formula is an excellent approximation for low-to-moderate zeta potential values at all values of κa (κ = Debye-Hückel parameter and a = cylinder radius). PMID:26639309
Ricci, Davide; Mennander, Ari A; Pham, Linh D; Rao, Vinay P; Miyagi, Naoto; Byrne, Guerard W; Russell, Stephen J; McGregor, Christopher GA
2008-01-01
Objectives We studied the concordance of transgene expression in the transplanted heart using a bicistronic adenoviral vector coding for a transgene of interest (human carcinoembryonic antigen: hCEA; beta human chorionic gonadotropin: βhCG) and for a marker imaging transgene (human sodium iodide symporter: hNIS). Methods Inbred Lewis rats were used for syngeneic heterotopic cardiac transplantation. Donor rat hearts were perfused ex vivo for 30 minutes prior to transplantation with University of Wisconsin (UW) solution (n=3), or with 10⁹ pfu/ml of adenovirus expressing hNIS (Ad-NIS; n=6), hNIS-hCEA (Ad-NIS-CEA; n=6), or hNIS-βhCG (Ad-NIS-CG; n=6). On post-operative day (POD) 5, 10, and 15 all animals underwent micro-SPECT/CT imaging of the donor hearts after tail vein injection of 1000 μCi ¹²³I and blood sample collection for hCEA and βhCG quantification. Results Significantly higher image intensity was noted in the hearts perfused with Ad-NIS (1.1±0.2; 0.9±0.07), Ad-NIS-CEA (1.2±0.3; 0.9±0.1) and Ad-NIS-CG (1.1±0.1; 0.9±0.1) compared to the UW group (0.44±0.03; 0.47±0.06) on POD 5 and 10 (p<0.05). Serum levels of hCEA and βhCG increased in animals showing high cardiac ¹²³I uptake, but not in those with lower uptake. Above this threshold, image intensities correlated well with serum levels of hCEA and βhCG (R² = 0.99 and R² = 0.96, respectively). Conclusions These data demonstrate that hNIS is an excellent reporter gene for the transplanted heart. The expression level of hNIS can be accurately and non-invasively monitored by serial radioisotopic single photon emission computed tomography (SPECT) imaging. High concordance was demonstrated between imaging and soluble marker peptides at the maximum transgene expression on POD 5. PMID:17980613
Analytical expressions to estimate the free product recovery in oil-contaminated aquifers
NASA Astrophysics Data System (ADS)
Corapcioglu, M. Yavuz; Tuncay, Kagan; Lingarn, Rajasekhar; Kambham, Kiran K. R.
1994-12-01
Petroleum products, such as gasoline, leaked from an underground storage tank can be recovered successfully by two-pump operations. The success of the recovery effort depends on the accurate placement of the recovery well at the spill site. An effective recovery operation can minimize the remaining contamination mass in the subsurface. Therefore, a careful evaluation and determination has to be made as to where to locate the recovery well. The location of the well can be decided based on an estimation of the extent and thickness of free product on the water table. Such an estimation should be based on analysis of governing mechanisms. In this study we present analytical solutions to estimate the recovery of oil from an established oil lens. These solutions are obtained by applying the Laplace transformation to averaged linear partial differential equations governing the phenomenon. The governing equation for the free product thickness is derived by averaging the oil phase mass balance equation along the free product thickness and substituting the boundary conditions at the oil/water interface and oil surface. The analytical solutions estimate the temporal and spatial distribution of free product thickness on the water table for a number of recovery scenarios. Results are presented for the temporal and spatial variation of the free product thickness, temporal variation of the free product volume recovered, and recovery efficiency based on the readings at the monitoring wells. Since they can be utilized without a great deal of data, analytical solutions are quite attractive as screening tools in two-pump free product recovery operations.
Kashiwa, B. A.
2010-12-01
A thermodynamically consistent and fully general equation of state (EOS) for multifield applications is described. EOS functions are derived from a Helmholtz free energy expressed as the sum of thermal (fluctuational) and collisional (condensed-phase) contributions; thus the free energy is of the Mie–Grüneisen [1] form. The phase-coexistence region is defined using a parameterized saturation curve by extending the form introduced by Guggenheim [2], which scales the curve relative to conditions at the critical point. We use the zero-temperature condensed-phase contribution developed by Barnes [3], which extends the Thomas–Fermi–Dirac equation to zero pressure. Thus, the functional form of the EOS could be called MGGB (for Mie–Grüneisen–Guggenheim–Barnes). Substance-specific parameters are obtained by fitting the low-density energy to data from the Sesame [4] library; fitting the zero-temperature pressure to the Sesame cold curve; and fitting the saturation curve and latent heat to laboratory data [5], if available. When suitable coexistence data, or Sesame data, are not available, we apply the Principle of Corresponding States [2]. Thus MGGB can be thought of as a numerical recipe for rendering the tabular Sesame EOS data in an analytic form that includes a proper coexistence region, and which permits the accurate calculation of derivatives associated with compressibility, expansivity, the Joule coefficient, and specific heat, all of which are required for multifield applications.
Tolias, P.; Ratynskaia, S.; Angelis, U. de
2015-08-15
The soft mean spherical approximation is employed for the study of the thermodynamics of dusty plasma liquids, the latter treated as Yukawa one-component plasmas. Within this integral theory method, the only input necessary for the calculation of the reduced excess energy stems from the solution of a single non-linear algebraic equation. Consequently, thermodynamic quantities can be routinely computed without the need to determine the pair correlation function or the structure factor. The level of accuracy of the approach is quantified after an extensive comparison with numerical simulation results. The approach is solved over a million times with input spanning the whole parameter space and reliable analytic expressions are obtained for the basic thermodynamic quantities.
An Approximate Analytic Expression for the Flux Density of Scintillation Light at the Photocathode
Braverman, Joshua B; Harrison, Mark J; Ziock, Klaus-Peter
2012-01-01
The flux density of light exiting scintillator crystals is an important factor affecting the performance of radiation detectors, and is of particular importance for position-sensitive instruments. Recent work by T. Woldemichael developed an analytic expression for the shape of the light spot at the bottom of a single crystal [1]. However, the results are of limited utility because there is generally a light pipe and a photomultiplier entrance window between the bottom of the crystal and the photocathode. In this study, we expand Woldemichael's theory to include materials each with different indices of refraction and compare the adjusted light spot shape theory to GEANT4 simulations [2]. Additionally, light reflection losses from index of refraction changes were also taken into account. We found that the simulations closely agree with the adjusted theory.
NASA Astrophysics Data System (ADS)
Loveridge, A. J.; van der Sluys, M. V.; Kalogera, V.
2011-12-01
The common-envelope (CE) phase is an important stage in the evolution of binary stellar populations. The most common way to compute the change in orbital period during a CE is to relate the binding energy of the envelope of the Roche-lobe-filling giant to the change in orbital energy. Especially in population-synthesis codes, where the evolution of millions of stars must be computed and detailed evolutionary models are too expensive computationally, simple approximations are made for the envelope binding energy. In this study, we present accurate analytic prescriptions based on detailed stellar-evolution models that provide the envelope binding energy for giants with metallicities between Z = 10⁻⁴ and Z = 0.03 and masses between 0.8 M⊙ and 100 M⊙, as a function of the metallicity, mass, radius, and evolutionary phase of the star. Our results are also presented in the form of electronic data tables and Fortran routines that use them. We find that the accuracy of our fits is better than 15% for 90% of our model data points in all cases, and better than 10% for 90% of our data points in all cases except the asymptotic giant branches for three of the six metallicities we consider. For very massive stars (M ≳ 50 M⊙), when stars lose more than ∼20% of their initial mass due to stellar winds, our fits do not describe the models as accurately. Our results are more widely applicable, covering wider ranges of metallicity and mass, and are of higher accuracy than those of previous studies.
Gender Differences in Emotion Expression in Children: A Meta-Analytic Review
Chaplin, Tara M.; Aldao, Amelia
2012-01-01
Emotion expression is an important feature of healthy child development that has been found to show gender differences. However, there has been no empirical review of the literature on gender and facial, vocal, and behavioral expressions of different types of emotions in children. The present study constitutes a comprehensive meta-analytic review of gender differences, and moderators of differences, in emotion expression from infancy through adolescence. We analyzed 555 effect sizes from 166 studies with a total of 21,709 participants. Significant, but very small, gender differences were found overall, with girls showing more positive emotions (g = −.08) and internalizing emotions (e.g., sadness, anxiety, sympathy; g = −.10) than boys, and boys showing more externalizing emotions (e.g., anger; g = .09) than girls. Notably, gender differences were moderated by age, interpersonal context, and task valence, underscoring the importance of contextual factors in gender differences. Gender differences in positive emotions were more pronounced with increasing age, with girls showing more positive emotions than boys in middle childhood (g = −.20) and adolescence (g = −.28). Boys showed more externalizing emotions than girls at toddler/preschool age (g = .17) and middle childhood (g = .13) and fewer externalizing emotions than girls in adolescence (g = −.27). Gender differences were less pronounced with parents and were more pronounced with unfamiliar adults (for positive emotions) and with peers/when alone (for externalizing emotions). Our findings of gender differences in emotion expression in specific contexts have important implications for gender differences in children’s healthy and maladaptive development. PMID:23231534
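The g values above are Hedges' g effect sizes. As background on the statistic (this is the standard pooled-SD formula with small-sample correction, not the authors' meta-analytic pipeline), a minimal sketch:

```python
import math

def hedges_g(m1, sd1, n1, m2, sd2, n2):
    """Hedges' g: Cohen's d scaled by the small-sample bias correction J."""
    s_pooled = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2)
                         / (n1 + n2 - 2))
    d = (m1 - m2) / s_pooled
    j = 1 - 3 / (4 * (n1 + n2) - 9)  # bias-correction factor
    return j * d

# e.g. boys vs. girls on an externalizing-emotion scale (toy numbers)
g = hedges_g(5.2, 1.0, 40, 5.1, 1.0, 40)
```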
Accurate momentum transfer cross section for the attractive Yukawa potential
Khrapak, S. A.
2014-04-15
An accurate expression for the momentum transfer cross section for the attractive Yukawa potential is proposed. This simple analytic expression agrees with numerical results to within ±2% in the regime relevant for ion-particle collisions in complex (dusty) plasmas.
Analytical expressions for chatter analysis in milling operations with one dominant mode
NASA Astrophysics Data System (ADS)
Iglesias, A.; Munoa, J.; Ciurana, J.; Dombovari, Z.; Stepan, G.
2016-08-01
In milling, accurate prediction of chatter is still one of the most complex problems in the field. The presence of these self-excited vibrations can spoil the surface of the part and can also cause a large reduction in tool life. Stability diagrams provide a practical selection of the optimum cutting conditions, determined either by time-domain or frequency-domain based methods. Applying these methods, parametric or parameter-traced representations of the linear stability limits can be obtained by solving the corresponding eigenvalue problems. In this work, new analytical formulae are proposed for the parameter domains of both the Hopf and the period-doubling type stability boundaries emerging in the regenerative mechanical model of time-periodic milling processes. These formulae are useful to enrich and speed up the currently used numerical methods. Also, the destabilization mechanism of period-doubling chatter is explained, creating an analogy with chatter related to the Hopf bifurcation, considering one dominant mode and using concepts established by the pioneers of chatter research.
Analytical expression of Kondo temperature in quantum dot embedded in Aharonov-Bohm ring.
Yoshii, Ryosuke; Eto, Mikio
2011-01-01
We theoretically study the Kondo effect in a quantum dot embedded in an Aharonov-Bohm ring, using the "poor man's" scaling method. Analytical expressions for the Kondo temperature TK are given as a function of the magnetic flux Φ penetrating the ring. In this Kondo problem, there are two characteristic lengths, Lc = ħvF/|ε̃0| and LK = ħvF/TK, where vF is the Fermi velocity and ε̃0 is the renormalized energy level in the quantum dot. The former is the screening length of the charge fluctuation and the latter is that of the spin fluctuation, i.e., the size of the Kondo screening cloud. We obtain different expressions of TK(Φ) for (i) Lc ≪ LK ≪ L, (ii) Lc ≪ L ≪ LK, and (iii) L ≪ Lc ≪ LK, where L is the size of the ring. TK is remarkably modulated by Φ in cases (ii) and (iii), whereas it hardly depends on Φ in case (i). PMID:22112300
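The two screening lengths are straightforward to evaluate numerically; a minimal sketch in SI units, assuming ε̃0 is given in joules and TK is converted from a temperature via kB (these unit choices and the toy values are ours, not the paper's):

```python
HBAR = 1.054571817e-34  # reduced Planck constant, J*s
KB = 1.380649e-23       # Boltzmann constant, J/K

def screening_lengths(v_fermi, eps0, t_kondo):
    """Charge and spin screening lengths:
    Lc = hbar*vF/|eps0| (eps0 in joules),
    LK = hbar*vF/(kB*TK) (TK in kelvin)."""
    l_c = HBAR * v_fermi / abs(eps0)
    l_k = HBAR * v_fermi / (KB * t_kondo)
    return l_c, l_k

# toy values: vF = 1e5 m/s, |eps0| = 1 meV, TK = 1 K
l_c, l_k = screening_lengths(1e5, 1.602e-22, 1.0)
assert l_c < l_k  # here kB*TK < |eps0|, so the charge cloud is smaller
```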
Analytical expression for the exit probability of the q -voter model in one dimension
NASA Astrophysics Data System (ADS)
Timpanaro, André M.; Galam, Serge
2015-07-01
We present in this paper an approximation that is able to give an analytical expression for the exit probability of the q-voter model in one dimension. This expression gives a better fit for the more recent data about simulations in large networks [A. M. Timpanaro and C. P. C. do Prado, Phys. Rev. E 89, 052808 (2014), 10.1103/PhysRevE.89.052808] and as such departs from the expression ρ^q/[ρ^q + (1−ρ)^q] found in papers that investigated small networks only [R. Lambiotte and S. Redner, Europhys. Lett. 82, 18007 (2008), 10.1209/0295-5075/82/18007; P. Przybyła et al., Phys. Rev. E 84, 031117 (2011), 10.1103/PhysRevE.84.031117; F. Slanina et al., Europhys. Lett. 82, 18006 (2008), 10.1209/0295-5075/82/18006]. The approximation consists in assuming a large separation between the time scale at which active groups of agents convince inactive ones and the time taken in the competition between active groups. Some interesting findings are that for q = 2 we still have ρ²/[ρ² + (1−ρ)²] as the exit probability, and for q > 2 we can obtain a lower-order approximation of the form ρ^s/[ρ^s + (1−ρ)^s] with s varying from q for low values of q to q − 1/2 for large values of q. As such, this work can also be seen as a deduction of why the exit probability ρ^q/[ρ^q + (1−ρ)^q] gives a good fit, without relying on mean-field arguments or on the assumption that only the first step is nondeterministic, as q and q − 1/2 give very similar results when q → ∞.
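The fitted functional form is easy to evaluate; a minimal sketch of the exit-probability ansatz ρ^s/[ρ^s + (1−ρ)^s], with the exponent s left as a free parameter:

```python
def exit_probability(rho, s):
    """Exit-probability ansatz rho^s / (rho^s + (1 - rho)^s),
    where rho is the initial density of one opinion."""
    return rho**s / (rho**s + (1 - rho)**s)

# for q = 2 the text gives s = q = 2; the symmetric point maps to 1/2
p = exit_probability(0.5, 2)   # -> 0.5
```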
Su, Juan; Feng, Guoying
2012-05-10
We provide a detailed analytical expression of the group-delay dispersion (GDD) and third-order dispersion (TOD) for a reflection grism-pair compressor without the first-order approximation of grating diffraction. The analytical expressions can be used to design a grism-pair compressor for compensating dispersive material without ray tracing. Furthermore, the dispersion performance of the grism-pair compressor, depending on the compressor parameters, is comprehensively analyzed. The results show that several parameters can be adjusted to obtain a given GDD and TOD, such as the incidence angle of the beam, the refractive index of the prism, the grating constant, and the separation of the grism pair. PMID:22614499
Analytical expression for gas-particle equilibration time scale and its numerical evaluation
NASA Astrophysics Data System (ADS)
Anttila, Tatu; Lehtinen, Kari E. J.; Dal Maso, Miikka
2016-05-01
We have derived a time scale τeq that describes the characteristic time for a single compound i with a saturation vapour concentration Ceff,i to reach thermodynamic equilibrium between the gas and particle phases. The equilibration process was assumed to take place via gas-phase diffusion and absorption into a liquid-like phase present in the particles. It was further shown that τeq combines two previously derived and often applied time scales, τa and τs, that account for the changes in the gas and particle phase concentrations of i resulting from the equilibration, respectively. The validity of τeq was tested by comparing its predictions against results from a numerical model that explicitly simulates the transfer of i between the gas and particle phases. By conducting a large number of simulations in which the values of the key input parameters were varied randomly, it was found that τeq yields highly accurate results when i is a semi-volatile compound in the sense that the ratio μ of the total (gas and particle phase) concentration of i to the saturation vapour concentration of i is below unity. On the other hand, the comparison of analytical and numerical time scales revealed that using τa or τs alone to calculate the equilibration time scale may lead to considerable errors. It was further shown that τeq tends to overpredict the equilibration time when i behaves as a non-volatile compound in the sense that μ > 1. Despite its simplicity, the time scale derived here has useful applications. First, it can be used to assess whether semi-volatile compounds reach thermodynamic equilibrium during dynamic experiments that involve changes in compound volatility. Second, the time scale can be used in modeling of secondary organic aerosol (SOA) to check whether SOA-forming compounds equilibrate over a certain time interval.
Lanman, Richard B.; Mortimer, Stefanie A.; Zill, Oliver A.; Sebisanovic, Dragan; Lopez, Rene; Blau, Sibel; Collisson, Eric A.; Divers, Stephen G.; Hoon, Dave S. B.; Kopetz, E. Scott; Lee, Jeeyun; Nikolinakos, Petros G.; Baca, Arthur M.; Kermani, Bahram G.; Eltoukhy, Helmy; Talasaz, AmirAli
2015-01-01
Next-generation sequencing of cell-free circulating solid tumor DNA addresses two challenges in contemporary cancer care. First, this method of massively parallel and deep sequencing enables assessment of a comprehensive panel of genomic targets from a single sample, and second, it obviates the need for repeat invasive tissue biopsies. Digital Sequencing™ is a novel method for high-quality sequencing of circulating tumor DNA simultaneously across a comprehensive panel of over 50 cancer-related genes with a simple blood test. Here we report the analytic and clinical validation of the gene panel. Analytic sensitivity down to 0.1% mutant allele fraction is demonstrated via serial dilution studies of known samples. Near-perfect analytic specificity (> 99.9999%) enables complete coverage of many genes without the false positives typically seen with traditional sequencing assays at mutant allele frequencies or fractions below 5%. We compared digital sequencing of plasma-derived cell-free DNA to tissue-based sequencing on 165 consecutive matched samples from five outside centers in patients with stage III-IV solid tumor cancers. Clinical sensitivity of plasma-derived NGS was 85.0%, comparable to 80.7% sensitivity for tissue. The assay success rate on 1,000 consecutive samples in clinical practice was 99.8%. Digital sequencing of plasma-derived DNA is indicated in advanced cancer patients to prevent repeated invasive biopsies when the initial biopsy is inadequate, unobtainable for genomic testing, or uninformative, or when the patient's cancer has progressed despite treatment. Its clinical utility is derived from reduction in the costs, complications and delays associated with invasive tissue biopsies for genomic testing. PMID:26474073
Zhang, Jin-Feng; Chen, Yao; Lin, Guo-Shi; Zhang, Jian-Dong; Tang, Wen-Long; Huang, Jian-Huang; Chen, Jin-Shou; Wang, Xing-Fu; Lin, Zhi-Xiong
2016-06-01
Interferon-induced protein with tetratricopeptide repeat 1 (IFIT1) plays a key role in growth suppression and apoptosis promotion in cancer cells. Interferon was reported to induce the expression of IFIT1 and inhibit the expression of O-6-methylguanine-DNA methyltransferase (MGMT). This study aimed to investigate the expression of IFIT1, the correlation between IFIT1 and MGMT, and their impact on the clinical outcome in newly diagnosed glioblastoma. The expression of IFIT1 and MGMT and their correlation were investigated in tumor tissues from 70 patients with newly diagnosed glioblastoma. The effects on progression-free survival and overall survival were evaluated. Of 70 cases, 57 (81.4%) tissue samples showed high expression of IFIT1 by immunostaining. The χ² test indicated that the expression of IFIT1 and MGMT was negatively correlated (r = -0.288, P = .016). Univariate and multivariate analyses confirmed high IFIT1 expression as a favorable prognostic indicator for progression-free survival (P = .005 and .017) and overall survival (P = .001 and .001), respectively. Patients with 2 favorable factors (high IFIT1 and low MGMT) had an improved prognosis as compared with others. The results demonstrated significantly increased expression of IFIT1 in newly diagnosed glioblastoma tissue. The negative correlation between IFIT1 and MGMT expression may be triggered by interferon. High IFIT1 can be a predictive biomarker of favorable clinical outcome, and IFIT1 along with MGMT more accurately predicts prognosis in newly diagnosed glioblastoma. PMID:26980050
Loriaux, Paul Michael; Tesler, Glenn; Hoffmann, Alexander
2013-01-01
The steady states of cells affect their response to perturbation. Indeed, diagnostic markers for predicting the response to therapeutic perturbation are often based on steady state measurements. In spite of this, no method exists to systematically characterize the relationship between steady state and response. Mathematical models are established tools for studying cellular responses, but characterizing their relationship to the steady state requires that it have a parametric, or analytical, expression. For some models, this expression can be derived by the King-Altman method. However, King-Altman requires that no substrate act as an enzyme, and is therefore not applicable to most models of signal transduction. For this reason we developed py-substitution, a simple but general method for deriving analytical expressions for the steady states of mass action models. Where the King-Altman method is applicable, we show that py-substitution yields an equivalent expression, and at comparable efficiency. We use py-substitution to study the relationship between steady state and sensitivity to the anti-cancer drug candidate, dulanermin (recombinant human TRAIL). First, we use py-substitution to derive an analytical expression for the steady state of a published model of TRAIL-induced apoptosis. Next, we show that the amount of TRAIL required for cell death is sensitive to the steady state concentrations of procaspase 8 and its negative regulator, Bar, but not the other procaspase molecules. This suggests that activation of caspase 8 is a critical point in the death decision process. Finally, we show that changes in the threshold at which TRAIL results in cell death is not always equivalent to changes in the time of death, as is commonly assumed. Our work demonstrates that an analytical expression is a powerful tool for identifying steady state determinants of the cellular response to perturbation. All code is available at http://signalingsystems.ucsd.edu/models-and-code/ or
ERIC Educational Resources Information Center
Rom, Mark Carl
2011-01-01
Grades matter. College grading systems, however, are often ad hoc and prone to mistakes. This essay focuses on one factor that contributes to high-quality grading systems: grading accuracy (or "efficiency"). I proceed in several steps. First, I discuss the elements of "efficient" (i.e., accurate) grading. Next, I present analytical results…
Keller, Andreas; Leidinger, Petra; Lange, Julia; Borries, Anne; Schroers, Hannah; Scheffler, Matthias; Lenhof, Hans-Peter; Ruprecht, Klemens; Meese, Eckart
2009-01-01
Multiple sclerosis (MS) is a chronic inflammatory demyelinating disease of the central nervous system, which is heterogenous with respect to clinical manifestations and response to therapy. Identification of biomarkers appears desirable for an improved diagnosis of MS as well as for monitoring of disease activity and treatment response. MicroRNAs (miRNAs) are short non-coding RNAs, which have been shown to have the potential to serve as biomarkers for different human diseases, most notably cancer. Here, we analyzed the expression profiles of 866 human miRNAs. In detail, we investigated the miRNA expression in blood cells of 20 patients with relapsing-remitting MS (RRMS) and 19 healthy controls using a human miRNA microarray and the Geniom Real Time Analyzer (GRTA) platform. We identified 165 miRNAs that were significantly up- or downregulated in patients with RRMS as compared to healthy controls. The best single miRNA marker, hsa-miR-145, allowed discriminating MS from controls with a specificity of 89.5%, a sensitivity of 90.0%, and an accuracy of 89.7%. A set of 48 miRNAs that was evaluated by radial basis function kernel support vector machines and 10-fold cross validation yielded a specificity of 95%, a sensitivity of 97.6%, and an accuracy of 96.3%. While 43 of the 165 miRNAs deregulated in patients with MS have previously been related to other human diseases, the remaining 122 miRNAs are so far exclusively associated with MS. The implications of our study are twofold. The miRNA expression profiles in blood cells may serve as a biomarker for MS, and deregulation of miRNA expression may play a role in the pathogenesis of MS. PMID:19823682
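The reported specificity, sensitivity, and accuracy follow directly from confusion-matrix counts; a minimal sketch of the definitions (the counts below are toy values chosen to be consistent with the 20-patient/19-control single-marker figures, not taken from the paper):

```python
def classifier_metrics(tp, fn, tn, fp):
    """Sensitivity, specificity, and accuracy from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / (tp + fn + tn + fp)
    return sensitivity, specificity, accuracy

# e.g. 18/20 RRMS patients and 17/19 controls classified correctly
sens, spec, acc = classifier_metrics(tp=18, fn=2, tn=17, fp=2)
# sens = 0.900, spec ~ 0.895, acc ~ 0.897
```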
Yanguas-Gil, Angel; Elam, Jeffrey W.
2014-05-01
In this work, the authors present analytic models for atomic layer deposition (ALD) in three common experimental configurations: cross-flow, particle coating, and spatial ALD. These models, based on the plug-flow and well-mixed approximations, allow us to determine the minimum dose times and materials utilization for all three configurations. A comparison between the three models shows that throughput and precursor utilization can each be expressed by universal equations, in which the particularity of the experimental system is contained in a single parameter related to the residence time of the precursor in the reactor. For the case of cross-flow reactors, the authors show how simple analytic expressions for the reactor saturation profiles agree well with experimental results. Consequently, the analytic model can be used to extract information about the ALD surface chemistry (e.g., the reaction probability) by comparing the analytic and experimental saturation profiles, providing a useful tool for characterizing new and existing ALD processes. (C) 2014 American Vacuum Society
NASA Astrophysics Data System (ADS)
Wu, Gang
2016-08-01
The nuclear quadrupole transverse relaxation process of half-integer spins in liquid samples is known to exhibit multi-exponential behaviors. Within the framework of Redfield's relaxation theory, exact analytical expressions for describing such a process exist only for spin-3/2 nuclei. As a result, analyses of nuclear quadrupole transverse relaxation data for half-integer quadrupolar nuclei with spin >3/2 must rely on numerical diagonalization of the Redfield relaxation matrix over the entire motional range. In this work we propose an approximate analytical expression that can be used to analyze nuclear quadrupole transverse relaxation data of any half-integer spin in liquids over the entire motional range. The proposed equation yields results that are in excellent agreement with the exact numerical calculations. PMID:27343483
Tabatabaei-Panah, Akram-Sadat; Jeddi-Tehrani, Mahmood; Ghods, Roya; Akhondi, Mohammad-Mehdi; Mojtabavi, Nazanin; Mahmoudi, Ahmad-Reza; Mirzadegan, Ebrahim; Shojaeian, Sorour; Zarnani, Amir-Hassan
2013-03-01
Here we introduce the novel optical properties and accurate sensitivity of a quantum dot (QD)-based detection system for tracking the breast cancer marker HER2. QD525 was used to detect HER2 using home-made HER2-specific monoclonal antibodies in fixed and living HER2(+) SKBR-3 cell line and breast cancer tissues. Additionally, we compared the fluorescence intensity (FI), photostability and staining index (SI) of QD525 signals at different exposure times and two excitation wavelengths with those of the conventional organic dye FITC. Labeling signals of QD525 in both fixed and living breast cancer cells and tissue preparations were found to be significantly higher than those of FITC at 460-495 nm excitation wavelengths. Interestingly, when excited at 330-385 nm, the superiority of QD525 was more highlighted, with at least 4-5 fold higher FI and SI compared to FITC. Moreover, QDs exhibited exceptional photostability during continuous illumination of cancerous cells and tissues, while the FITC signal faded very quickly. QDs can be used as sensitive reporters for in situ detection of tumor markers, which in turn could be viewed as a novel approach for early detection of cancers. To take comprehensive advantage of QDs, it is necessary that their optimal excitation wavelength be employed. PMID:23212129
2013-01-01
Background Flower colour variation is one of the most crucial selection criteria in the breeding of a flowering pot plant, as is also the case for azalea (Rhododendron simsii hybrids). Flavonoid biosynthesis has been studied intensively in several species. In azalea, flower colour can be described by means of a 3-gene model. However, this model does not clarify pink coloration. Over the last decade, gene expression studies have been widely used for studying flower colour. However, the methods used were often only semi-quantitative, or quantification was not done according to the MIQE guidelines. We aimed to develop an accurate protocol for RT-qPCR and to validate the protocol to study flower colour in an azalea mapping population. Results An accurate RT-qPCR protocol had to be established. RNA quality was evaluated in a combined approach by means of different techniques, e.g. the SPUD assay and Experion analysis. We demonstrated the importance of testing no-RT samples for all genes under study to detect contaminating DNA. In spite of the limited sequence information available, we prepared a set of 11 reference genes which was validated in flower petals; a combination of three reference genes was most optimal. Finally, we also used plasmids for the construction of standard curves. This allowed us to calculate gene-specific PCR efficiencies for every gene to assure an accurate quantification. The validity of the protocol was demonstrated by means of the study of six genes of the flavonoid biosynthesis pathway. No correlations were found between flower colour and the individual expression profiles. However, the combination of early pathway genes (CHS, F3H, F3'H and FLS) is clearly related to co-pigmentation with flavonols. The late pathway genes DFR and ANS are to a minor extent involved in differentiating between coloured and white flowers. Concerning pink coloration, we could demonstrate that the lower intensity in this type of flowers is correlated to the expression of F3'H
Zhou, Xiang; Li, Rui; Michal, Jennifer J.; Wu, Xiao-Lin; Liu, Zhongzhen; Zhao, Hui; Xia, Yin; Du, Weiwei; Wildung, Mark R.; Pouchnik, Derek J.; Harland, Richard M.; Jiang, Zhihua
2016-01-01
Construction of next-generation sequencing (NGS) libraries involves RNA manipulation, which often creates noisy, biased, and artifactual data that contribute to errors in transcriptome analysis. In this study, a total of 19 whole transcriptome termini site sequencing (WTTS-seq) and seven RNA sequencing (RNA-seq) libraries were prepared from Xenopus tropicalis adult and embryo samples to determine the most effective library preparation method to maximize transcriptomics investigation. We strongly suggest that appropriate primers/adaptors are designed to inhibit amplification detours and that PCR overamplification is minimized to maximize transcriptome coverage. Furthermore, genome annotation must be improved so that missing data can be recovered. In addition, a complete understanding of sequencing platforms is critical to limit the formation of false-positive results. Technically, the WTTS-seq method enriches both poly(A)+ RNA and complementary DNA, adds 5′- and 3′-adaptors in one step, pursues strand sequencing and mapping, and profiles both gene expression and alternative polyadenylation (APA). Although RNA-seq is cost prohibitive, tends to produce false-positive results, and fails to detect APA diversity and dynamics, its combination with WTTS-seq is necessary to validate transcriptome-wide APA. PMID:27098915
Sauvage, J-F; Mugnier, L M; Rousset, G; Fusco, T
2010-11-01
In this paper we derive an analytical model of a long-exposure star image for an adaptive-optics (AO)-corrected coronagraphic imaging system. This expression accounts for static aberrations upstream and downstream of the coronagraphic mask as well as turbulence residuals. It is based on the perfect coronagraph model. The analytical model is validated by means of simulations using the design and parameters of the SPHERE instrument. The analytical model is also compared to a simulated four-quadrant phase-mask coronagraph. Then, its sensitivity to a miscalibration of the structure function and of the upstream static aberrations is studied, and the impact on exoplanet detectability is quantified. Last, a first inversion method is presented for a simulation case using a single monochromatic image with no reference. The obtained result shows a planet-detectability increase by two orders of magnitude with respect to the raw image. This analytical model has numerous potential applications in coronagraphic imaging, such as exoplanet direct detection and circumstellar disk observation. PMID:21045877
NASA Astrophysics Data System (ADS)
Kántor, Tibor; Bartha, András
2015-11-01
The self-absorption of spectral lines was studied with up-to-date multi-element inductively coupled plasma atomic emission spectrometry (ICP-AES) instrumentation, using both radial and axial viewing of the plasma and performing line peak height and line peak area measurements. Two resonance atomic and ionic lines of Cd and Mg were studied, and the concentration range was extended up to 2000 mg/L. While the analyte concentration was varied, a constant matrix concentration of 10,000 mg/L Ca was maintained in the pneumatically nebulized solutions. The physical and the phenomenological formulation of the emission analytical function is overviewed and, as a continuation of the earlier results, the following equation is offered:
Dawany, Noor; Showe, Louise C.; Kossenkov, Andrew V.; Chang, Celia; Ive, Prudence; Conradie, Francesca; Stevens, Wendy; Sanne, Ian
2014-01-01
Background Co-infection with tuberculosis (TB) is the leading cause of death in HIV-infected individuals. However, diagnosis of TB, especially in the presence of an HIV co-infection, can be difficult due to the high inaccuracy associated with the use of conventional diagnostic methods. Here we report a gene signature that can identify a tuberculosis infection in patients co-infected with HIV as well as in the absence of HIV. Methods We analyzed global gene expression data from peripheral blood mononuclear cell (PBMC) samples of patients that were either mono-infected with HIV or co-infected with HIV/TB and used support vector machines to identify a gene signature that can distinguish between the two classes. We then validated our results using publicly available gene expression data from patients mono-infected with TB. Results Our analysis successfully identified a 251-gene signature that accurately distinguishes patients co-infected with HIV/TB from those infected with HIV only, with an overall accuracy of 81.4% (sensitivity = 76.2%, specificity = 86.4%). Furthermore, we show that our 251-gene signature can also accurately distinguish patients with active TB in the absence of an HIV infection from both patients with a latent TB infection and healthy controls (88.9–94.7% accuracy; 69.2–90% sensitivity and 90.3–100% specificity). We also demonstrate that the expression levels of the 251-gene signature diminish as a correlate of the length of TB treatment. Conclusions A 251-gene signature is described to (a) detect TB in the presence or absence of an HIV co-infection, and (b) assess response to treatment following anti-TB therapy. PMID:24587128
Analytical approximations for spatial stochastic gene expression in single cells and tissues.
Smith, Stephen; Cianci, Claudia; Grima, Ramon
2016-05-01
Gene expression occurs in an environment in which both stochastic and diffusive effects are significant. Spatial stochastic simulations are computationally expensive compared with their deterministic counterparts, and hence little is currently known of the significance of intrinsic noise in a spatial setting. Starting from the reaction-diffusion master equation (RDME) describing stochastic reaction-diffusion processes, we here derive expressions for the approximate steady-state mean concentrations which are explicit functions of the dimensionality of space, rate constants and diffusion coefficients. The expressions have a simple closed form when the system consists of one effective species. These formulae show that, even for spatially homogeneous systems, mean concentrations can depend on diffusion coefficients: this contradicts the predictions of deterministic reaction-diffusion processes, thus highlighting the importance of intrinsic noise. We confirm our theory by comparison with stochastic simulations, using the RDME and Brownian dynamics, of two models of stochastic and spatial gene expression in single cells and tissues. PMID:27146686
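As a deliberately minimal illustration of the kind of spatial stochastic simulation the RDME describes, the sketch below runs a Gillespie simulation of a production/pair-annihilation system (∅ → A, A + A → ∅) with nearest-neighbour diffusion hops on a 1D periodic lattice. The lattice size, rate values, and function name are illustrative assumptions, not taken from the paper:

```python
import math
import random

def rdme_gillespie(L=10, k1=1.0, k2=0.1, d=5.0, t_end=60.0, seed=1):
    """Minimal RDME Gillespie simulation: production (0 -> A), pair
    annihilation (A + A -> 0), and nearest-neighbour diffusion hops
    (rate d per direction) on a 1D periodic lattice of L voxels."""
    rng = random.Random(seed)
    n = [0] * L          # molecule count per voxel
    t, acc = 0.0, 0.0    # elapsed time, time-weighted total count
    while t < t_end:
        # propensities per voxel: production, annihilation, hops (both sides)
        a_prod = [k1] * L
        a_ann = [k2 * m * (m - 1) for m in n]
        a_hop = [2.0 * d * m for m in n]
        a_tot = sum(a_prod) + sum(a_ann) + sum(a_hop)
        tau = -math.log(1.0 - rng.random()) / a_tot
        acc += sum(n) * tau
        t += tau
        # pick one event proportionally to its propensity
        r = rng.random() * a_tot
        done = False
        for i in range(L):
            for a, kind in ((a_prod[i], "prod"), (a_ann[i], "ann"), (a_hop[i], "hop")):
                if r < a:
                    if kind == "prod":
                        n[i] += 1
                    elif kind == "ann":
                        n[i] -= 2
                    else:  # hop to a random neighbour
                        n[i] -= 1
                        n[(i + rng.choice((-1, 1))) % L] += 1
                    done = True
                    break
                r -= a
            if done:
                break
    return acc / t / L   # time-averaged mean molecules per voxel

mean_per_voxel = rdme_gillespie()
print(mean_per_voxel)
```

Because the annihilation step is bimolecular, the time-averaged mean depends on the hop rate d = D/h², consistent with the paper's point that mean concentrations can depend on diffusion coefficients even for spatially homogeneous systems.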
Recent α decay half-lives and analytic expression predictions including superheavy nuclei
NASA Astrophysics Data System (ADS)
Royer, G.; Zhang, H. F.
2008-03-01
New recent experimental α decay half-lives have been compared with the results obtained from previously proposed formulas depending only on the mass and charge numbers of the α emitter and the Qα value. For the heaviest nuclei they are also compared with calculations using the Density-Dependent M3Y (DDM3Y) effective interaction and the Viola-Seaborg-Sobiczewski (VSS) formulas. The correct agreement allows us to make predictions for the α decay half-lives of other still unknown superheavy nuclei from these analytic formulas using the extrapolated Qα of G. Audi, A. H. Wapstra, and C. Thibault [Nucl. Phys. A729, 337 (2003)].
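As an illustration of how such analytic half-life formulas depending only on mass number, charge number and Qα are used, the sketch below implements one widely quoted Royer-type parameterization for even-even α emitters; the coefficient set (a, b, c) = (−25.31, −1.1629, 1.5864) is an assumption quoted from the literature and should be checked against the original papers:

```python
import math

def log10_half_life_ee(A, Z, Q_alpha):
    """Royer-type analytic estimate of log10(T1/2 / s) for even-even
    alpha emitters: log10 T = a + b * A^(1/6) * sqrt(Z) + c * Z / sqrt(Q).
    A and Z are for the parent nucleus; Q_alpha is in MeV.  The
    coefficients below are the even-even set quoted in the literature."""
    return -25.31 - 1.1629 * A ** (1 / 6) * math.sqrt(Z) \
           + 1.5864 * Z / math.sqrt(Q_alpha)

# 212Po (Q_alpha ~ 8.95 MeV): experimental T1/2 is a few 1e-7 s
lg212 = log10_half_life_ee(212, 84, 8.95)
print(lg212)
```

For ²¹²Po this yields log₁₀ T₁/₂ ≈ −6.8, i.e. a few 10⁻⁷ s, in order-of-magnitude agreement with experiment, which is the level at which such global formulas are expected to perform.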
NASA Technical Reports Server (NTRS)
Zuffada, C.; Cwik, T.; Jamnejad, V.
1993-01-01
Recently an approach which combines the finite element technique and an integral equation to determine the fields scattered by inhomogeneous bodies of complicated shape has been proposed. Basically, a mathematical surface which encloses the scatterers is introduced, thus dividing the space into an interior and an exterior volume, in which the finite element technique and an integral equation for EM scattering, respectively, are applied. The integral equation is set up for the tangential components of the fields at the surface, while in the interior volume the unknowns are the total fields. Continuity of the tangential fields at the boundary, as required by Maxwell's equations, is imposed, thus coupling the two methods to obtain a consistent solution. The coupling term is expressed by a surface integral formed by the dot product of a FE basis function and an IE testing function, or vice versa. By choosing the boundary to be a surface of revolution and by making a convenient selection of IE basis (testing) functions, it is possible to evaluate the integrals analytically on surfaces such as curved triangles, curved quadrilaterals and curved pentagons. We will illustrate the salient steps involved in setting up and carrying out these integrals and discuss what class of basis (testing) functions and analytic surfaces of revolution they are applicable to. Analytic calculations offer the advantage of better accuracy than purely numerical ones, and, when combined with them, often shed light on issues of numerical convergence and limiting values. Furthermore, they may reduce computation time and storage requirements.
Lewis, E.R.; Schwartz, S.
2010-03-15
Light scattering by aerosols plays an important role in Earth’s radiative balance, and quantification of this phenomenon is important in understanding and accounting for anthropogenic influences on Earth’s climate. Light scattering by an aerosol particle is determined by its radius and index of refraction, and for aerosol particles that are hygroscopic, both of these quantities vary with relative humidity RH. Here exact expressions are derived for the dependences of the radius ratio (relative to the volume-equivalent dry radius) and index of refraction on RH for aqueous solutions of single solutes. Both of these quantities depend on the apparent molal volume of the solute in solution and on the practical osmotic coefficient of the solution, which in turn depend on concentration and thus implicitly on RH. Simple but accurate approximations are also presented for the RH dependences of both radius ratio and index of refraction for several atmospherically important inorganic solutes over the entire range of RH values for which these substances can exist as solution drops. For all substances considered, the radius ratio is accurate to within a few percent, and the index of refraction to within ~0.02, over this range of RH. Such parameterizations will be useful in radiation transfer models and climate models.
Galli, Vanessa; Borowski, Joyce Moura; Perin, Ellen Cristina; Messias, Rafael da Silva; Labonde, Julia; Pereira, Ivan dos Santos; Silva, Sérgio Delmar Dos Anjos; Rombaldi, Cesar Valmor
2015-01-10
The increasing demand for strawberry (Fragaria×ananassa Duch) fruits is associated mainly with their sensorial characteristics and the content of antioxidant compounds. Nevertheless, strawberry production has been hampered by the crop's sensitivity to abiotic stresses. Therefore, understanding the molecular mechanisms underlying the stress response is of great importance to enable genetic engineering approaches aiming to improve strawberry tolerance. However, the study of gene expression in strawberry requires the use of suitable reference genes. In the present study, seven traditional and novel candidate reference genes were evaluated for transcript normalization in fruits of ten strawberry cultivars and two abiotic stresses, using RefFinder, which integrates the four major currently available software programs: geNorm, NormFinder, BestKeeper and the comparative delta-Ct method. The results indicate that expression stability is dependent on the experimental conditions. The candidate reference gene DBP (DNA binding protein) was considered the most suitable to normalize expression data in samples of strawberry cultivars and under drought stress conditions, and the candidate reference gene HISTH4 (histone H4) was the most stable under osmotic stresses and salt stress. The traditional genes GAPDH (glyceraldehyde-3-phosphate dehydrogenase) and 18S (18S ribosomal RNA) were considered the most unstable genes in all conditions. The expression of the phenylalanine ammonia lyase (PAL) and 9-cis epoxycarotenoid dioxygenase (NCED1) genes was used to further confirm the validated candidate reference genes, showing that the use of an inappropriate reference gene may induce erroneous results. This study is the first survey on the stability of reference genes in strawberry cultivars and osmotic stresses and provides guidelines to obtain more accurate RT-qPCR results for future breeding efforts. PMID:25445290
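For readers unfamiliar with the stability measures behind RefFinder, the sketch below implements a geNorm-style M value: for each candidate gene, the mean, over all other candidates, of the standard deviation across samples of the pairwise log₂ expression ratios (lower M means more stable). The toy expression values are invented for illustration:

```python
import math

def genorm_m(expr):
    """geNorm-style stability measure M for candidate reference genes.
    `expr` maps gene name -> list of linear-scale expression values
    measured across the same samples.  M_j is the mean, over the other
    genes k, of the standard deviation across samples of
    log2(expr_j / expr_k); lower M indicates more stable expression."""
    genes = list(expr)
    n = len(expr[genes[0]])

    def sd(xs):
        mu = sum(xs) / len(xs)
        return math.sqrt(sum((x - mu) ** 2 for x in xs) / (len(xs) - 1))

    m = {}
    for g in genes:
        variations = []
        for k in genes:
            if k == g:
                continue
            ratios = [math.log2(expr[g][i] / expr[k][i]) for i in range(n)]
            variations.append(sd(ratios))
        m[g] = sum(variations) / len(variations)
    return m

# invented toy data: two steady candidates and one unstable one
demo = {
    "stable": [100, 105, 98, 102],
    "also_ok": [50, 52, 49, 51],
    "wobbly": [10, 80, 15, 60],
}
scores = genorm_m(demo)
print(scores)
```

In this toy data the unstable candidate receives the largest M and would be excluded first, mirroring the iterative elimination geNorm performs.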
Padma, S; Hariharan, G
2016-06-01
In this paper, we have developed an efficient wavelet based approximation method for a steady-state biofilm model arising in enzyme kinetics. A Chebyshev wavelet based approximation method is successfully introduced for solving the nonlinear steady-state biofilm reaction model. To the best of our knowledge, no rigorous wavelet based solution has yet been reported for the proposed model. Analytical solutions for the substrate concentration have been derived for all values of the parameters δ and SL. The power of the method is confirmed. Some numerical examples are presented to demonstrate the validity and applicability of the wavelet method. Moreover, the Chebyshev wavelet approach is found to be simple, efficient, flexible and convenient, with small computational cost. PMID:26661721
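The authors' wavelet scheme is not reproduced here, but the Chebyshev-basis approximation underlying it can be illustrated with NumPy's Chebyshev module. The Michaelis-Menten-like test function is an assumption, chosen only because it resembles the saturable kinetics such biofilm models contain:

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# Approximate a nonlinear "reaction" profile on [0, 1] in a Chebyshev
# basis: this is the kind of basis the wavelet method is built from,
# not the authors' algorithm itself.
f = lambda s: s / (1.0 + s)          # Michaelis-Menten-like nonlinearity
x = np.linspace(0.0, 1.0, 200)
coeffs = C.chebfit(x, f(x), deg=8)   # least-squares Chebyshev fit
err = np.max(np.abs(C.chebval(x, coeffs) - f(x)))
print(err)
```

The maximum error decays rapidly with the degree because the function is smooth on the interval, which is the property spectral and wavelet methods exploit.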
Analytical expression of the potential generated by a massive inhomogeneous straight segment
NASA Astrophysics Data System (ADS)
Najid, N.-E.; Elourabi, E.
2011-12-01
Potential calculation is an important task in the study of the dynamical behavior of test particles around celestial bodies. The gravitational potential of irregular bodies has been of great importance since the discovery of binary asteroids, which opened a new field of research. A simple model to describe the motion of a test particle in that case is to consider a finite homogeneous straight segment. In our work, we extend this model by adding an inhomogeneous distribution of mass. To be consistent with the geometrical shape of the asteroid, we explore a parabolic profile of the density. We establish the closed analytical form of the potential generated by this inhomogeneous massive straight segment. The study of the dynamical behavior is carried out using the Lagrangian formulation, which allows us to give some two- and three-dimensional orbits.
Fortunato, Angelo; Gasparoli, Luca; Falsini, Sara; Boni, Luca; Arcangeli, Annarosa
2013-12-01
Cancer molecular investigation revealed a huge molecular heterogeneity between different types of cancers as well as among cancer patients affected by the same cancer type. This implies the necessity of a personalized approach for cancer diagnosis and therapy, on the basis of the development of standardized protocols to facilitate the application of molecular techniques in the clinical decision-making process. Ion channels encoding genes are acquiring increasing relevance in oncological translational studies, representing new candidates for molecular diagnostic and therapeutic purposes. Hence, the development of molecular protocols for the quantification of ion channels encoding genes in tumor specimens may have relevance for diagnostic and prognostic investigation. Two main hindrances must be overcome for these purposes: the use of formalin-fixed and paraffin-embedded samples for gene expression analysis and the physiological expression of ion channels in excitable cells, potentially present in the tumor sample. We here propose a method for hERG1 gene quantification in colorectal cancer samples in both cryopreserved and formalin-fixed and paraffin-embedded samples. An analytical method was developed to estimate hERG1 gene expression exclusively in epithelial cancer cells. Indeed, we found that the hERG1 gene was expressed at significant levels by myofibroblasts present in the tumor stroma. This method was based on the normalization on a smooth muscle-myofibroblast-specific gene, MYH11, with no need of microdissection. By applying this method, hERG1 expression turned out to correlate with VEGF-A expression, confirming previous immunohistochemical data. PMID:24193004
Analytical expressions for maximum wind turbine average power in a Rayleigh wind regime
Carlin, P.W.
1996-12-01
Average or expectation values for annual power of a wind turbine in a Rayleigh wind regime are calculated and plotted as a function of cut-out wind speed. This wind speed is expressed in multiples of the annual average wind speed at the turbine installation site. To provide a common basis for comparison of all real and imagined turbines, the Rayleigh-Betz wind machine is postulated. This machine is an ideal wind machine operating with the ideal Betz power coefficient of 0.593 in a Rayleigh probability wind regime. All other average annual powers are expressed in fractions of that power. Cases considered include: (1) an ideal machine with finite power and finite cutout speed, (2) real machines operating in variable speed mode at their maximum power coefficient, and (3) real machines operating at constant speed.
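A minimal numerical version of the calculation described above: with wind speed expressed in multiples of the annual mean, the Rayleigh density is f(x) = (π x/2) exp(−π x²/4), and the average power of an ideal v³ (Betz) machine truncated at cut-out speed x_c is the partial third moment of f. The cut-out values below are illustrative choices, not the paper's cases:

```python
import math

def rayleigh_pdf(x):
    """Rayleigh wind-speed density in multiples of the annual mean
    speed, normalized so the distribution's mean is exactly 1."""
    return 0.5 * math.pi * x * math.exp(-0.25 * math.pi * x * x)

def third_moment(upper, dx=1e-3):
    """Midpoint-rule integral of x^3 * pdf(x) from 0 to `upper`,
    proportional to the average power captured below that speed."""
    s, x = 0.0, 0.5 * dx
    while x < upper:
        s += x ** 3 * rayleigh_pdf(x) * dx
        x += dx
    return s

full = third_moment(12.0)   # effectively the infinite-cutout value, 6/pi
fractions = {xc: third_moment(xc) / full for xc in (1.5, 2.0, 3.0)}
print(full, fractions)
```

The captured fraction rises monotonically with the cut-out multiple and is already close to unity by three mean speeds, which is why finite cut-out machines sacrifice relatively little annual energy.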
NASA Astrophysics Data System (ADS)
Modak, Viraj P.; Wyslouzil, Barbara E.; Singer, Sherwin J.
2016-08-01
The crystal-vapor surface free energy γ is an important physical parameter governing physical processes, such as wetting and adhesion. We explore exact and approximate routes to calculate γ based on cleaving an intact crystal into non-interacting sub-systems with crystal-vapor interfaces. We do this by turning off the interactions, ΔV, between the sub-systems. Using the soft-core scheme for turning off ΔV, we find that the free energy varies smoothly with the coupling parameter λ, and a single thermodynamic integration yields the exact γ. We generate another exact method, and a cumulant expansion for γ, by expressing the surface free energy in terms of an average of e^(-βΔV) in the intact crystal. The second cumulant, or Gaussian approximation for γ, is surprisingly accurate in most situations, even though we find that the underlying probability distribution for ΔV is clearly not Gaussian. We account for this fact by developing a non-Gaussian theory for γ and find that the difference between the non-Gaussian and Gaussian expressions for γ consists of terms that are negligible in many situations. Exact and approximate methods are applied to the (111) surface of a Lennard-Jones crystal and are also tested for more complex molecular solids, the surfaces of octane and nonadecane. Alkane surfaces were chosen for study because their crystal-vapor surface free energy has been of particular interest for understanding surface freezing in these systems.
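The two routes mentioned above can be sketched numerically: the exact exponential-average route ΔF = −β⁻¹ ln⟨e^(−βΔV)⟩ versus its second-cumulant (Gaussian) approximation ⟨ΔV⟩ − (β/2)Var(ΔV). The Gaussian test distribution below is an assumption; for Gaussian ΔV the two estimates agree, which is precisely the regime in which the paper finds the second-cumulant formula accurate:

```python
import math
import random

def free_energy_exact(dvs, beta=1.0):
    """Exact exponential-average route: dF = -ln< e^{-beta dV} > / beta."""
    m = sum(math.exp(-beta * v) for v in dvs) / len(dvs)
    return -math.log(m) / beta

def free_energy_gaussian(dvs, beta=1.0):
    """Second-cumulant (Gaussian) approximation:
    dF ~ <dV> - (beta/2) * Var(dV)."""
    n = len(dvs)
    mu = sum(dvs) / n
    var = sum((v - mu) ** 2 for v in dvs) / (n - 1)
    return mu - 0.5 * beta * var

rng = random.Random(0)
samples = [rng.gauss(2.0, 0.5) for _ in range(200_000)]  # toy dV samples
dF_exact = free_energy_exact(samples)
dF_gauss = free_energy_gaussian(samples)
print(dF_exact, dF_gauss)
```

For a true Gaussian with mean 2 and width 0.5 the analytic answer is 2 − 0.5²/2 = 1.875; both estimators recover it to within sampling noise. For strongly non-Gaussian ΔV the two routes diverge, which motivates the paper's non-Gaussian theory.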
Sahu, Basudeb
2008-10-15
An analytically solvable composite potential that can closely reproduce the combined potential of an α + nucleus system consisting of attractive nuclear and repulsive electrostatic potentials is developed. The exact s-wave solution of the Schrödinger equation with this potential in the interior region and the outside Coulomb wave function are used to give a heuristic expression for the width or half-life of the quasibound state at the accurately determined resonance energy, called the Q value of the decaying system. By using the fact that for a relatively low resonance energy, the quasibound state wave function is quite similar to the bound state wave function where the amplitude of the wave function in the interaction region is very large as compared to the amplitude outside, the resonance energy could easily be calculated from the variation of relative probability densities of inside and outside waves as a function of energy. By considering recent α-decay systems, the applicability of the model is demonstrated with excellent explanations being found for the experimental data of Q values and half-lives of a vast range of masses including superheavy nuclei and nuclei with very long lifetimes (of order 10^22 s). Throughout the application, by simply varying the value of a single potential parameter describing the flatness of the barrier, we obtain successful results in cases with as many as 70 pairs of α + daughter nucleus systems.
An analytic expression for the sheath criterion in magnetized plasmas with multi-charged ion species
NASA Astrophysics Data System (ADS)
Hatami, M. M.
2015-04-01
The generalized Bohm criterion in magnetized multi-component plasmas consisting of multi-charged positive and negative ion species and electrons is analytically investigated using the hydrodynamic model. It is assumed that the electron and negative ion density distributions are Boltzmann distributions with different temperatures and that the positive ions enter the sheath region obliquely. Our results show that the positive and negative ion temperatures, the orientation of the applied magnetic field and the charge numbers of the positive and negative ions strongly affect the Bohm criterion in these multi-component plasmas. To test the validity of the derived generalized Bohm criterion, it is reduced to some familiar physical conditions, and it is shown that the monotonic reduction of the positive ion density distribution leading to sheath formation occurs only when the entrance velocity of the ions into the sheath satisfies the obtained Bohm criterion. Also, as a practical application of the obtained Bohm criterion, the effects of the ionic temperature and concentration as well as the magnetic field on the behavior of the charged particle density distributions, and thus the sheath thickness, of a magnetized plasma consisting of electrons and singly charged positive and negative ion species are studied numerically.
Analytic expression for epithermal neutron spectra amplitudes as a function of water content
NASA Technical Reports Server (NTRS)
Drake, Darrell
1993-01-01
The epithermal portion of an equilibrium neutron spectrum in a planetary body is a function of the water content of its material. The neutrons are produced at high energies but are moderated by elastic and inelastic scattering until they either are captured by surrounding nuclei or escape. We have derived an expression that explicitly shows the dependence of epithermal neutron spectra on water content. Additionally, we compared its predictions to calculations performed with a Boltzmann transport code for infinite media of silicon, oxygen, and a possible lunar composition, and we have obtained very good agreement.
Croft, Stephen; Evans, Louise G; Schear, Melissa A
2010-01-01
In the realm of nuclear safeguards, passive neutron multiplicity counting using shift register pulse train analysis to nondestructively quantify Pu in product materials is a familiar and widely applied technique. The approach most commonly taken is to construct a neutron detector consisting of ³He filled cylindrical proportional counters embedded in a high density polyethylene moderator. Fast neutrons from the item enter the moderator and are quickly slowed down, on timescales of the order of 1-2 μs, creating a thermal population which then persists typically for several tens of μs and is sampled by the ³He detectors. Because the initial transient is of comparatively short duration, it has been traditional to treat it as instantaneous and furthermore to approximate the subsequent capture time distribution as exponential in shape. With these approximations simple expressions for the various Gate Utilization Factors (GUFs) can be obtained. These factors represent the proportion of time correlated events, i.e. the Doubles and Triples signal present in the pulse train, that is detected by the coincidence gate structure chosen (predelay and gate width settings of the multiplicity shift register). More complicated expressions can be derived by generalizing the capture time distribution to the multiple time components or harmonics typically present in real systems. When it comes to applying passive neutron multiplicity methods to extremely intense (i.e., high emission rate and highly multiplying) neutron sources, there is a drive to use detector types with very fast response characteristics in order to cope with the high rates. In addition to short pulse width, detectors with a short capture time profile are also desirable so that a short coincidence gate width can be set in order to reduce the chance (Accidental) coincidence signal. In extreme cases, such as might be realized using boron loaded scintillators, the dieaway time may be so short that the build
Analytical Expressions for the Hard-Scattering Production of Massive Partons
Wong, Cheuk-Yin
2016-01-01
We obtain explicit expressions for the two-particle differential cross section $E_c E_\kappa \, d\sigma(AB \to c\kappa X)/d\mathbf{c}\, d\mathbf{\kappa}$ and the two-particle angular correlation function $d\sigma(AB \to c\kappa X)/d\Delta\phi\, d\Delta y$ in the hard-scattering production of massive partons, in order to exhibit the ``ridge'' structure on the away side in the hard-scattering process. The single-particle production cross section $d\sigma(AB \to cX)/dy_c\, c_T\, dc_T$ is also obtained and compared with the ALICE experimental data for charm production in $pp$ collisions at 7 TeV at the LHC.
NASA Technical Reports Server (NTRS)
Klunker, E. B.
1971-01-01
The problem of determining the small-disturbance flow about two-dimensional airfoils at transonic speeds has been successfully treated by the process of matching a numerical solution of the near field to analytic expressions for the far field. The three-dimensional problem, it would appear, can be treated in a similar way with the aid of algorithms adapted to high-speed and high-capacity computers. The far-field potential for both lifting and nonlifting three-dimensional wings at transonic speeds is developed herein for a subsonic free stream. This potential could be used for a three-dimensional-wing computation similar to the computation made for the two-dimensional wing.
Analytical expression for the sheath edge around wedge-shaped cathodes
NASA Astrophysics Data System (ADS)
Sheridan, T. E.
2008-03-01
The sheath is the boundary layer separating a quasi-neutral plasma from a material electrode. Understanding the sheath is important for numerous applications, including plasma-based ion implantation, plasma etching of semiconductors, plasma assisted electrostatic cleaning, and Langmuir probes. In a 1D planar geometry, the Child-Langmuir (CL) law describes the sheath when the bias on a negative electrode, i.e., a cathode, is much greater than the electron temperature. In this case, the sheath width s is an eigenvalue of the problem. In 2D, the sheath edge is an unknown line (an ``eigen-boundary") which is determined by a set of coupled, nonlinear, partial differential equations. I have found an expression for the sheath edge around a 2D wedge-shaped cathode with included angle θw. In polar coordinates (r,θ), the sheath edge is a solution of r(aθ)=as where s is the planar sheath width far from the corner and θw=2π- π/a, so that a=1/2 gives a knife edge, while a=2/3 gives a square corner. This result is verified by comparison with the numerical solutions of Watterson [P. A. Watterson, J. Phys. D 22, 1300 (1989)].
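For the 1D planar case referred to above, the high-voltage Child-Langmuir sheath width is commonly written s = (√2/3) λ_D (2V₀/T_e)^{3/4}, with λ_D the electron Debye length at the sheath edge. The sketch below evaluates this standard formula; the plasma parameters (n_e, T_e, V₀) are illustrative assumptions, not values from the abstract:

```python
import math

EPS0 = 8.854e-12   # vacuum permittivity, F/m
E_CH = 1.602e-19   # elementary charge, C

def cl_sheath_width(n_e, T_e_volts, V0):
    """Planar high-voltage (Child-Langmuir) sheath width,
    s = (sqrt(2)/3) * lambda_D * (2*V0/T_e)^(3/4),
    with the electron Debye length evaluated at the sheath edge.
    n_e in m^-3, T_e in volts, V0 the (positive) cathode bias in volts."""
    lambda_d = math.sqrt(EPS0 * T_e_volts / (E_CH * n_e))
    return (math.sqrt(2.0) / 3.0) * lambda_d * (2.0 * V0 / T_e_volts) ** 0.75

# illustrative processing-plasma numbers
s = cl_sheath_width(n_e=1e16, T_e_volts=2.0, V0=500.0)
print(s)   # sheath width in metres, a few mm for these parameters
```

This planar s is exactly the far-from-corner width that appears in the 2D wedge result quoted above, which reduces to the planar sheath as the distance from the corner grows.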
NASA Astrophysics Data System (ADS)
Li, Ping; Li, Xin-zhou; Xi, Ping
2016-06-01
We present a detailed study of the spherically symmetric solutions in Lorentz-breaking massive gravity. There is an undetermined function $F(X, w^1, w^2, w^3)$ in the action of the Stückelberg fields, $S_\phi = \Lambda^4 \int d^4x \sqrt{-g}\, F$, which should be resolved through physical means. In general relativity, the spherically symmetric solution to the Einstein equation is a benchmark, and its massive deformation also plays a crucial role in Lorentz-breaking massive gravity. $F$ will satisfy the constraint equation $T^0_{\ 1} = 0$ from the spherically symmetric Einstein tensor component $G^0_{\ 1} = 0$, if we maintain that any reasonable physical theory should possess spherically symmetric solutions. The Stückelberg field $\phi^i$ is taken as a 'hedgehog' configuration $\phi^i = \phi(r) x^i / r$, whose stability is guaranteed by its topology. Under this ansatz, $T^0_{\ 1} = 0$ is reduced to $dF = 0$. The functions $F$ with $dF = 0$ form a commutative ring $R_F$. We obtain an expression for the solution to the functional differential equation with spherical symmetry if $F \in R_F$. If $F \in R_F$ and $\partial F/\partial X = 0$, the functions $F$ form a subring $S_F \subset R_F$. We show that the metric is Schwarzschild, Schwarzschild-AdS or Schwarzschild-dS if $F \in S_F$. When $F \in R_F$ but $F$ ...
Yanguas-Gil, Angel; Elam, Jeffrey W.
2014-05-15
In this work, the authors present analytic models for atomic layer deposition (ALD) in three common experimental configurations: cross-flow, particle coating, and spatial ALD. These models, based on the plug-flow and well-mixed approximations, allow us to determine the minimum dose times and materials utilization for all three configurations. A comparison between the three models shows that throughput and precursor utilization can each be expressed by universal equations, in which the particularity of the experimental system is contained in a single parameter related to the residence time of the precursor in the reactor. For the case of cross-flow reactors, the authors show how simple analytic expressions for the reactor saturation profiles agree well with experimental results. Consequently, the analytic model can be used to extract information about the ALD surface chemistry (e.g., the reaction probability) by comparing the analytic and experimental saturation profiles, providing a useful tool for characterizing new and existing ALD processes.
Zisi, Ch; Fasoula, S; Pappa-Louisi, A; Nikitas, P
2013-10-15
Expressions for the retention time of ionogenic analytes eluted under multilinear double pH/solvent-gradients in reversed-phase liquid chromatography are developed by dividing each gradient profile into a finite number of subportions, where the solute retention factors or their logarithms vary linearly with time. To test the theory, two series of experimental gradient retention data of amino acid OPA derivatives were analyzed: The first one was a monolinear or bilinear pH-gradient data set obtained in eluents with different but constant organic modifier contents, whereas the second data set comprised retention data of combined pH/organic solvent-gradients, where the organic content was changed linearly with time but the variation of pH exhibited a curved form approximated by five linear subportions. It was found that the derived expressions describe these experimental retention data with high accuracy, since under double pH/solvent-gradients the overall errors in the fitted and predicted retention times were 1.9% and 1.7%, respectively, whereas under simple pH-gradients these errors were 0.9% and 2%, respectively. PMID:24010983
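The piecewise strategy described above rests on the fundamental gradient-elution relation ∫₀^{t_R − t₀} dt/k(t) = t₀, which can be solved numerically by stepping through the gradient profile. The sketch below does this for a hypothetical linear-solvent-strength solute (ln k = ln k_w − Sφ) under a single linear organic gradient; all solute and gradient parameters are invented for illustration, and pH dependence is omitted:

```python
import math

def retention_time(k_of_t, t0, dt=1e-3, t_max=200.0):
    """Solve the fundamental gradient-elution equation
    integral_0^{tR - t0} dt / k(t) = t0 by forward stepping.
    `k_of_t` returns the (possibly piecewise-defined) retention
    factor the solute experiences at time t after injection."""
    area, t = 0.0, 0.0
    while area < t0 and t < t_max:
        area += dt / k_of_t(t)
        t += dt
    return t + t0   # elution time = migration time + dead time

# linear organic gradient 5% -> 95% over 20 min (then held),
# LSS-type solute: ln k = ln k_w - S * phi, with invented k_w and S
phi = lambda t: 0.05 + min(t, 20.0) / 20.0 * 0.90
k = lambda t: math.exp(math.log(500.0) - 8.0 * phi(t))
tr = retention_time(k, t0=1.0)
print(tr)   # retention time in minutes
```

Multilinear pH/solvent gradients of the kind treated in the paper amount to making `k_of_t` piecewise over the gradient subportions; the stepping solver is unchanged.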
Electron back-scattering coefficient below 5 keV: Analytical expressions and surface-barrier effects
NASA Astrophysics Data System (ADS)
Cazaux, J.
2012-10-01
Simple analytical expressions for the electron backscattering coefficient, η, are established from published data obtained in the ~0.4-5 keV range for 21 elements ranging from Be to Au. They take into account the decline in η with decreasing primary energy E0 for high-Z elements and the reverse behavior for low-Z elements. The proposed expressions for η(E0) lead to crossing energies situated in the 0.4-1 keV range, and they may reasonably be extended to any of the other elements (via an interpolation procedure), to metallic alloys, and probably to compounds. The influence of the surface barrier on the escape probability of the backscattered electrons is next evaluated. This evaluation provides a theoretical basis for explaining the observed deviations between various published data as a consequence of surface contamination or oxidation. Various practical applications and strategies are deduced for η-measurements in dedicated instruments, as well as for image interpretation in low-voltage scanning electron microscopy based on backscattered electron detection. In this microscopy, the present investigation allows us to generalize the rare contrast changes and contrast reversals previously observed on multi-elemental samples, and it suggests the possibility of a new type of contrast: the work function contrast.
Leblond, F; Ovanesyan, Z; Davis, S C; Valdés, P A; Kim, A; Hartov, A; Wilson, B C; Pogue, B W; Paulsen, K D; Roberts, D W
2016-01-01
Here we derived analytical solutions to diffuse light transport in biological tissue based on spectral deformation of diffused near-infrared measurements. These solutions provide a closed-form mathematical expression which predicts that the depth of a fluorescent molecule distribution is linearly related to the logarithm of the ratio of fluorescence at two different wavelengths. The slope and intercept values of the equation depend on the intrinsic values of absorption and reduced scattering of tissue. This linear behavior occurs if the following two conditions are satisfied: the depth is beyond a few millimeters, and the tissue is relatively homogeneous. We present experimental measurements acquired with a broad-beam non-contact multi-spectral fluorescence imaging system using a hemoglobin-containing diffusive phantom. Preliminary results confirm that a significant correlation exists between the predicted depth of a distribution of protoporphyrin IX (PpIX) molecules and the measured ratio of fluorescence at two different wavelengths. These results suggest that depth assessment of fluorescence contrast can be achieved in fluorescence-guided surgery to allow improved intra-operative delineation of tumor margins. PMID:21971201
Deridder, Sander; Desmet, Gert
2012-02-01
Using computational fluid dynamics (CFD), the effective B-term diffusion constant γ(eff) has been calculated for four different random sphere packings with different particle size distributions and packing geometries. Both fully porous and porous-shell sphere packings are considered. The obtained γ(eff)-values have subsequently been used to determine the value of the three-point geometrical constant (ζ₂) appearing in the 2nd-order accurate effective medium theory expression for γ(eff). It was found that, whereas the 1st-order accurate effective medium theory expression is accurate to within 5% over most of the retention factor range, the 2nd-order accurate expression is accurate to within 1% when calculated with the best-fit ζ₂-value. Depending on the exact microscopic geometry, the best-fit ζ₂-values typically lie in the range of 0.20-0.30, holding over the entire range of intra-particle diffusion coefficients typically encountered for small molecules (0.1 ≤ D(pz)/D(m) ≤ 0.5). These values are in agreement with the ζ₂-value proposed by Thovert et al. for the random packing they considered. PMID:22236565
NASA Technical Reports Server (NTRS)
Georgevic, R. M.
1973-01-01
Closed-form analytic expressions for the time variations of instantaneous orbital parameters and of the topocentric range and range rate of a spacecraft moving in the gravitational field of an oblate large body are derived using a first-order variation-of-parameters technique. In addition, closed-form analytic expressions are obtained for the partial derivatives of the topocentric range and range rate with respect to the coefficient of the second harmonic of the potential of the central body (J2). The results are applied to the motion of a point-mass spacecraft moving in orbit around the equatorially elliptic, oblate Sun, with J2 approximately equal to 0.000027.
Zhao, Xi; Dellandréa, Emmanuel; Chen, Liming; Kakadiaris, Ioannis A
2011-10-01
Three-dimensional face landmarking aims at automatically localizing facial landmarks and has a wide range of applications (e.g., face recognition, face tracking, and facial expression analysis). Existing methods assume neutral facial expressions and unoccluded faces. In this paper, we propose a general learning-based framework for reliable landmark localization on 3-D facial data under challenging conditions (i.e., facial expressions and occlusions). Our approach relies on a statistical model, called 3-D statistical facial feature model, which learns both the global variations in configurational relationships between landmarks and the local variations of texture and geometry around each landmark. Based on this model, we further propose an occlusion classifier and a fitting algorithm. Results from experiments on three publicly available 3-D face databases (FRGC, BU-3-DFE, and Bosphorus) demonstrate the effectiveness of our approach, in terms of landmarking accuracy and robustness, in the presence of expressions and occlusions. PMID:21622076
Lambret-Frotté, Julia; de Almeida, Leandro C. S.; de Moura, Stéfanie M.; Souza, Flavio L. F.; Linhares, Francisco S.; Alves-Ferreira, Marcio
2015-01-01
Employing reference genes to normalize the data generated with quantitative PCR (qPCR) can increase the accuracy and reliability of this method. Previous results have shown that no single housekeeping gene can be universally applied to all experiments. Thus, the identification of a suitable reference gene represents a critical step of any qPCR analysis. Setaria viridis has recently been proposed as a model system for the study of Panicoid grasses, a crop family of major agronomic importance. Therefore, this paper aims to identify suitable S. viridis reference genes that can enhance the analysis of gene expression in this novel model plant. The first aim of this study was the identification of a suitable RNA extraction method that could retrieve a high quality and yield of RNA. After this, two distinct algorithms were used to assess the gene expression of fifteen different candidate genes in eighteen different samples, which were divided into two major datasets, the developmental and the leaf gradient. The best-ranked pair of reference genes from the developmental dataset included genes that encoded a phosphoglucomutase and a folylpolyglutamate synthase; genes that encoded a cullin and the same phosphoglucomutase as above were the most stable genes in the leaf gradient dataset. Additionally, the expression pattern of two target genes, a SvAP3/PI MADS-box transcription factor and the carbon-fixation enzyme PEPC, were assessed to illustrate the reliability of the chosen reference genes. This study has shown that novel reference genes may perform better than traditional housekeeping genes, a phenomenon which has been previously reported. These results illustrate the importance of carefully validating reference gene candidates for each experimental set before employing them as universal standards. Additionally, the robustness of the expression of the target genes may increase the utility of S. viridis as a model for Panicoid grasses. PMID:26247784
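Reference-gene ranking of the kind described above is often based on a geNorm-style pairwise stability measure: the most stable genes keep a constant expression ratio to each other across samples. A minimal sketch of such a measure, with toy data (the gene names and values are hypothetical, and this is not necessarily the exact algorithm the authors used):

```python
from math import log2
from statistics import mean, pstdev

def gene_stability(expr):
    """geNorm-style stability M: for each candidate gene, the average
    standard deviation of its log2 expression ratio against every other
    candidate across all samples. Lower M = more stable reference gene.
    expr: dict gene -> list of expression values (one per sample)."""
    genes = list(expr)
    M = {}
    for g in genes:
        sds = []
        for h in genes:
            if h == g:
                continue
            ratios = [log2(a / b) for a, b in zip(expr[g], expr[h])]
            sds.append(pstdev(ratios))
        M[g] = mean(sds)
    return M

# Toy data: 'stable' varies in fixed proportion to 'ref' (constant
# ratio), 'wobbly' does not, so 'wobbly' should get the largest M.
data = {
    "ref":    [1.0, 2.0, 4.0],
    "stable": [2.0, 4.0, 8.0],
    "wobbly": [1.0, 8.0, 2.0],
}
M = gene_stability(data)
```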
NASA Astrophysics Data System (ADS)
Ukaegbu, Ikechi Augustine; Sangirov, Jamshid; Cho, Mu Hee; Lee, Tae-Woo; Park, Hyo-Hoon
2011-07-01
In this paper, a crosstalk expression and an equivalent circuit model are proposed, based on an RLC line model and interconnect parameters, for wire-bonded and flip-chip bonded multichannel optoelectronic modules. The analytical expression and model are accurate for computing the crosstalk of interconnects used in chip packaging. In addition, full-wave simulation and experimental results from total crosstalk measurements are discussed.
Quo vadis, analytical chemistry?
Valcárcel, Miguel
2016-01-01
This paper presents an open, personal, fresh approach to the future of Analytical Chemistry in the context of the deep changes Science and Technology are anticipated to experience. Its main aim is to challenge young analytical chemists, because the future of our scientific discipline is in their hands. A description of not-completely-accurate overall conceptions of our discipline, both past and present, to be avoided is followed by a flexible, integral definition of Analytical Chemistry and its cornerstones (viz., aims and objectives, quality trade-offs, the third basic analytical reference, the information hierarchy, social responsibility, independent research, transfer of knowledge and technology, interfaces to other scientific-technical disciplines, and well-oriented education). Obsolete paradigms, as well as more accurate general and specific ones that can be expected to provide the framework for our discipline in the coming years, are described. Finally, the three possible responses of analytical chemists to the proposed changes in our discipline are discussed. PMID:26631024
Zhao, Luyao; Yang, Shuming; Zhang, Yanhua; Zhang, Ying; Hou, Can; Cheng, Yongyou; You, Xinyong; Gu, Xu; Zhao, Zhen; Muhammad Tarique, Tunio
2016-03-01
In this study, quantification of mRNA gene expression was examined as a biomarker to detect ractopamine abuse and ractopamine residues in cashmere goats. The work focused on identifying potential gene expression biomarkers and describing the relationship between gene expression and residue level, using 58 animals over 49 days. The results showed that administration periods and residue levels significantly influenced the mRNA expression of the β2-adrenergic receptor (β2AR), the enzymes PRKACB, ADCY3, ATP1A3, ATP2A3, PTH, and MYLK, and the immune factors IL-1β and TNF-α. Statistical analyses such as principal components analysis (PCA), hierarchical cluster analysis (HCA), and discriminant analysis (DA) showed that these genes can serve as potential biomarkers for ractopamine in skeletal muscle and that they are also suitable for describing different residue levels separately. PMID:26886866
Emfietzoglou, D.; Kyriakou, I.; Garcia-Molina, R.; Abril, I.; Kostarelos, K.
2010-09-15
We have determined "effective" Bethe coefficients and the mean excitation energy of stopping theory (I-value) for multiwalled carbon nanotubes (MWCNTs) and single-walled carbon nanotube (SWCNT) bundles, based on a sum-rule-constrained optical-data model energy loss function with improved asymptotic properties. Noticeable differences between MWCNTs, SWCNT bundles, and the three allotropes of carbon (diamond, graphite, glassy carbon) are found. By means of Bethe's asymptotic approximation, the inelastic scattering cross section, the electronic stopping power, and the average energy transfer to target electrons in a single inelastic collision are calculated analytically for a broad range of electron and proton beam energies using realistic excitation parameters.
Furman, M.A.
2007-05-29
By combining the method of images with calculus of complex variables, we provide a simple expression for the electric field of a two-dimensional (2D) static elliptical charge distribution inside a perfectly conducting cylinder. The charge distribution need not be concentric with the cylinder.
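The method-of-images construction the abstract refers to is easy to verify numerically for a single line charge; the elliptical distribution of the paper can be viewed as a superposition of such charges. A sketch in units where λ/(2πε₀) = 1 (the charge position and cylinder radius are arbitrary test values, not from the paper):

```python
import cmath
import math

def potential(z, z0, R, lam=1.0):
    """Potential (units where lambda/(2*pi*eps0) = 1) of a line charge
    at complex position z0 inside a grounded conducting cylinder
    |z| = R, built from the image charge at R**2 / conj(z0).
    The constant term makes the potential vanish on the cylinder."""
    z_img = R**2 / z0.conjugate()
    return -lam * (math.log(abs(z - z0))
                   - math.log(abs(z - z_img))
                   + math.log(R / abs(z0)))

# Boundary condition check: the potential is zero everywhere on |z| = R,
# because |z - z0| / |z - z_img| = |z0| / R on the cylinder.
z0 = 0.3 + 0.4j
R = 1.0
boundary_vals = [potential(R * cmath.exp(1j * th), z0, R)
                 for th in (0.0, 0.7, 2.0, 4.5)]
```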
Supernova neutrino oscillations: A simple analytical approach
NASA Astrophysics Data System (ADS)
Fogli, G. L.; Lisi, E.; Montanino, D.; Palazzo, A.
2002-04-01
Analyses of observable supernova neutrino oscillation effects require the calculation of the electron (anti)neutrino survival probability Pee along a given supernova matter density profile. We propose a simple analytical prescription for Pee, based on a double-exponential form for the crossing probability and on the concept of maximum violation of adiabaticity. In the case of two-flavor transitions, the prescription is shown to reproduce accurately, in the whole neutrino oscillation parameter space, the results of exact numerical calculations for generic (realistic or power-law) profiles. The analytical approach is then generalized to cover three-flavor transitions with (direct or inverse) mass spectrum hierarchy, and to incorporate Earth matter effects. Compact analytical expressions, explicitly showing the symmetry properties of Pee, are provided for practical calculations.
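For reference, the standard double-exponential form of the level-crossing probability on which prescriptions of this kind are built reads (the precise definition of the adiabaticity parameter γ used by the authors, evaluated at the point of maximum violation of adiabaticity, may differ in detail):

```latex
P_c = \frac{e^{-\gamma \sin^2\theta} - e^{-\gamma}}{1 - e^{-\gamma}},
\qquad
\langle P_{ee} \rangle = \frac{1}{2}
  + \left(\frac{1}{2} - P_c\right)\cos 2\tilde{\theta}\,\cos 2\theta ,
```

where θ is the vacuum mixing angle, θ̃ the matter mixing angle at the neutrino production point, and the second expression gives the averaged two-flavor survival probability.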
NASA Technical Reports Server (NTRS)
Gonzales, David A.; Varghese, Philip L.
1993-01-01
Closed form expressions for inelastic state-to-state and state-specific dissociative rate coefficients for utilization in vibrational master equation studies of shock heated CO, N2, and O2 highly dilute in Ar are considered. The master equation is linearized by neglecting diatom-diatom collisions and recombination. Master equation results indicate that the most significant contribution to dissociation comes from low and mid lying vibrational levels.
NASA Astrophysics Data System (ADS)
Tournus, F.; Bonet, E.
2011-05-01
We study a model system made of non-interacting monodomain ferromagnetic nanoparticles, considered as macrospins, with a randomly oriented uniaxial magnetic anisotropy. We derive a simple differential equation governing the magnetic moment evolution in an experimental magnetic susceptibility measurement, at low field and as a function of temperature, following the well-known zero-field-cooled/field-cooled (ZFC/FC) protocol. Exact and approximate analytical solutions are obtained for both the ZFC and the FC curves. The notion of blocking temperature is discussed and the influence of various parameters on the curves is investigated. A crossover temperature is defined and a comparison is made between our progressive crossover model (PCM) and the crude "two states" or abrupt transition model (ATM), where the particles are assumed to be either fully blocked or purely superparamagnetic. We consider here the case of a single magnetic anisotropy energy (MAE), which is a prerequisite before considering the more realistic and experimentally relevant case of an assembly of particles with a MAE distribution (cf. part II that follows).
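The abrupt transition model mentioned above rests on Néel-Arrhenius relaxation: a particle counts as blocked once its relaxation time exceeds the measurement time scale. A sketch of that crude picture (the anisotropy constant, particle size, τ₀, and measurement time below are assumed typical-order values, not parameters from the paper):

```python
import math

KB = 1.380649e-23  # Boltzmann constant, J/K

def neel_relaxation_time(K, V, T, tau0=1e-10):
    """Neel-Arrhenius relaxation time of a macrospin with uniaxial
    anisotropy energy barrier K*V (K in J/m^3, V in m^3)."""
    return tau0 * math.exp(K * V / (KB * T))

def blocking_temperature(K, V, tau_meas=100.0, tau0=1e-10):
    """Temperature at which the relaxation time equals the measurement
    time scale -- the 'two states' (ATM) definition of blocking."""
    return K * V / (KB * math.log(tau_meas / tau0))

# A 3 nm radius particle with an assumed anisotropy constant.
V = (4.0 / 3.0) * math.pi * (3e-9) ** 3
K = 2e5  # J/m^3, assumed value
TB = blocking_temperature(K, V)
```

By construction, the relaxation time evaluated at TB equals the measurement time, which is the self-consistency the ATM builds on.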
NASA Astrophysics Data System (ADS)
Richeton, T.; Tiba, I.; Berbenni, S.; Bouaziz, O.
2015-01-01
Strong incompatibility stresses may develop at grain or twin boundaries because of elastic and plastic anisotropies. Their knowledge at Σ3 twin boundaries may be of interest for a better understanding of the mechanical behaviour of fcc materials that can display lamellar twin structures, such as twinning-induced plasticity (TWIP) steels or general nanotwinned materials. In this paper, incompatibility stresses arising at general twin boundaries are explicitly derived for a given twin volume fraction. They are deduced from the solutions of the general infinite bicrystal, which is equivalent to a periodic layered structure. In the case of pure elasticity and Σ3 twin boundaries, the result is of remarkable simplicity. The incompatibility stress field reduces to a shear stress acting upon a plane orthogonal to the twin plane. Simple analytical expressions of the resolved shear stresses are also determined according to the twin-boundary orientation, the twin volume fraction and the elastic anisotropy factor. Such expressions allow a comprehensive study of slip initiation. In particular, there exists a large physical domain, depending on the three above parameters, where simultaneous slip parallel to the twin plane in the parent and in the twin is greatly promoted. There is also a restricted domain where simultaneous single slip parallel to the twin plane is promoted. The conditions for these promotions are realistic considering the literature data on TWIP steels. The present results, hence, support the high ductility and strong contribution of kinematic hardening observed in TWIP steels and agree with composite hardening models with single- and multi-slip-deforming grains.
Koutra, Katerina; Simos, Panagiotis; Triliva, Sofia; Lionis, Christos; Vgontzas, Alexandros N
2016-06-30
The present study aimed to evaluate a path analytic model accounting for caregivers' psychological distress that takes into account perceived family cohesion and flexibility, expressed emotion and caregiver's burden associated with the presence of mental illness in the family. 50 first-episode and 50 chronic patients diagnosed with schizophrenia or bipolar disorder (most recent episode manic severe with psychotic features) recruited from the Inpatient Psychiatric Unit of the University Hospital of Heraklion, Crete, Greece, and their family caregivers participated in the study. Family functioning was assessed in terms of cohesion and flexibility (FACES-IV), expressed emotion (FQ), family burden (FBS) and caregivers' psychological distress (GHQ-28). Structural equation modelling was used to evaluate the direct and indirect effects of family dynamics on caregivers' psychological distress. The results showed that neither family cohesion nor family flexibility exerted significant direct effects on caregivers' psychological distress. Instead, the effect of flexibility was mediated by caregivers' criticism and family burden indicating an indirect effect on caregivers' psychological distress. These results apply equally to caregivers of first episode and chronic patients. Family interventions aiming to improve dysfunctional family interactions by promoting awareness of family dynamics could reduce the burden and improve the emotional well-being of family caregivers. PMID:27085666
Accurate monotone cubic interpolation
NASA Technical Reports Server (NTRS)
Huynh, Hung T.
1991-01-01
Monotone piecewise cubic interpolants are simple and effective. They are generally third-order accurate, except near strict local extrema where accuracy degenerates to second-order due to the monotonicity constraint. Algorithms for piecewise cubic interpolants, which preserve monotonicity as well as uniform third and fourth-order accuracy are presented. The gain of accuracy is obtained by relaxing the monotonicity constraint in a geometric framework in which the median function plays a crucial role.
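The monotonicity constraint the abstract refers to is typically enforced with a median/minmod-type slope limiter on the cubic's derivative values. A minimal sketch of the classical (accuracy-limiting) limiter, i.e. the construction whose loss of accuracy near extrema the paper addresses, not the relaxed higher-order scheme itself:

```python
def minmod(a, b):
    """Zero if signs differ, else the argument smaller in magnitude."""
    if a * b <= 0.0:
        return 0.0
    return a if abs(a) < abs(b) else b

def median3(a, b, c):
    """Median of three numbers, written via minmod as in
    limiter-based constructions: median(a,b,c) = a + minmod(b-a, c-a)."""
    return a + minmod(b - a, c - a)

def monotone_slopes(x, y):
    """Interior derivative estimates for a monotonicity-preserving
    piecewise cubic: limit the centered slope against twice the
    adjacent secant slopes; at a local extremum the slope is zero."""
    d = [(y[i + 1] - y[i]) / (x[i + 1] - x[i]) for i in range(len(x) - 1)]
    s = []
    for i in range(1, len(x) - 1):
        centered = 0.5 * (d[i - 1] + d[i])
        s.append(median3(0.0, centered, minmod(2.0 * d[i - 1], 2.0 * d[i])))
    return s
```

Forcing the slope to zero whenever the adjacent secants change sign is exactly what degrades the accuracy to second order near strict local extrema.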
Accurate Finite Difference Algorithms
NASA Technical Reports Server (NTRS)
Goodrich, John W.
1996-01-01
Two families of finite difference algorithms for computational aeroacoustics are presented and compared. All of the algorithms are single step explicit methods, they have the same order of accuracy in both space and time, with examples up to eleventh order, and they have multidimensional extensions. One of the algorithm families has spectral like high resolution. Propagation with high order and high resolution algorithms can produce accurate results after O(10^6) periods of propagation with eight grid points per wavelength.
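A convenient way to check the advertised order of accuracy of a single-step explicit scheme is a grid-refinement test. A sketch using the classical second-order Lax-Wendroff scheme as a stand-in (the paper's schemes are much higher order; the CFL number and test problem here are arbitrary choices):

```python
import math

def lax_wendroff_error(n, cfl=0.5, periods=1.0):
    """Advect u0(x) = sin(2*pi*x) around a periodic domain with the
    single-step explicit Lax-Wendroff scheme and return the max-norm
    error against the exact solution (which equals u0 after a whole
    number of periods at unit advection speed)."""
    h = 1.0 / n
    dt = cfl * h
    steps = round(periods / dt)
    u = [math.sin(2.0 * math.pi * i * h) for i in range(n)]
    for _ in range(steps):
        u = [u[i]
             - 0.5 * cfl * (u[(i + 1) % n] - u[i - 1])
             + 0.5 * cfl**2 * (u[(i + 1) % n] - 2.0 * u[i] + u[i - 1])
             for i in range(n)]
    return max(abs(u[i] - math.sin(2.0 * math.pi * i * h)) for i in range(n))

# Halving h at fixed CFL should cut the error by about 2**2 = 4
# for a second-order scheme.
ratio = lax_wendroff_error(64) / lax_wendroff_error(128)
```

For an eleventh-order scheme the same refinement would shrink the error by roughly 2^11, which is why so few points per wavelength suffice over long propagation times.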
Analytical solutions for radiation-driven winds in massive stars. I. The fast regime
Araya, I.; Curé, M.; Cidale, L. S.
2014-11-01
Accurate mass-loss rate estimates are crucial in the study of wind properties of massive stars and for testing different evolutionary scenarios. From a theoretical point of view, this implies solving a complex set of differential equations in which the radiation field and the hydrodynamics are strongly coupled. The use of an analytical expression to represent the radiation force and the solution of the equation of motion has many advantages over numerical integrations. Therefore, in this work, we present an analytical expression for the solution of the equation of motion for radiation-driven winds in terms of the force multiplier parameters. This analytical expression is obtained by employing the line acceleration expression given by Villata and the methodology proposed by Müller and Vink. In addition, we find useful relationships to determine the parameters of the line acceleration given by Müller and Vink in terms of the force multiplier parameters.
Predict amine solution properties accurately
Cheng, S.; Meisen, A.; Chakma, A.
1996-02-01
Improved process design begins with using accurate physical property data. Especially in the preliminary design stage, physical property data such as density, viscosity, thermal conductivity and specific heat can affect the overall performance of absorbers, heat exchangers, reboilers and pumps. These properties can also influence temperature profiles in heat transfer equipment and thus control or affect the rate of amine breakdown. Aqueous-amine solution physical property data are available in graphical form; however, this form is not convenient for computer-based calculations. The equations developed here allow improved correlations of derived physical property estimates with published data. Expressions are given which can be used to estimate physical properties of methyldiethanolamine (MDEA), monoethanolamine (MEA) and diglycolamine (DGA) solutions.
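Correlations of this kind are usually low-order polynomial fits to tabulated property data. A sketch of such a fit via the normal equations (the "density vs. temperature" numbers are synthetic, chosen so the fit can be checked exactly; they are not the paper's amine data):

```python
def fit_quadratic(T, y):
    """Least-squares fit of y = a + b*T + c*T**2 via the normal
    equations -- the kind of correlation used to replace graphical
    property charts with computer-friendly expressions."""
    # Power sums build the 3x3 normal-equation system A x = r.
    S = [sum(t**k for t in T) for k in range(5)]
    A = [[S[0], S[1], S[2]],
         [S[1], S[2], S[3]],
         [S[2], S[3], S[4]]]
    r = [sum(yi * ti**k for ti, yi in zip(T, y)) for k in range(3)]
    # Gaussian elimination with partial pivoting.
    for col in range(3):
        p = max(range(col, 3), key=lambda i: abs(A[i][col]))
        A[col], A[p] = A[p], A[col]
        r[col], r[p] = r[p], r[col]
        for i in range(col + 1, 3):
            f = A[i][col] / A[col][col]
            for j in range(col, 3):
                A[i][j] -= f * A[col][j]
            r[i] -= f * r[col]
    x = [0.0, 0.0, 0.0]
    for i in (2, 1, 0):
        x[i] = (r[i] - sum(A[i][j] * x[j] for j in range(i + 1, 3))) / A[i][i]
    return x

# Synthetic 'density vs temperature' data from a known quadratic;
# the fit should recover the coefficients.
T = [20.0, 30.0, 40.0, 50.0, 60.0]
rho = [1.05 - 0.0005 * t - 1e-6 * t**2 for t in T]
a, b, c = fit_quadratic(T, rho)
```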
NASA Astrophysics Data System (ADS)
Itano, Wayne M.; Ramsey, Norman F.
1993-07-01
The paper discusses current methods for accurate measurements of time by conventional atomic clocks, with particular attention given to the principles of operation of atomic-beam frequency standards, atomic hydrogen masers, and atomic fountains, and to the potential use of strings of trapped mercury ions as a timekeeping device more stable than conventional atomic clocks. The areas of application of ultraprecise and ultrastable time-measuring devices that tax the capacity of modern atomic clocks include radio astronomy and tests of relativity. The paper also discusses practical applications of ultraprecise clocks, such as navigation of space vehicles and pinpointing the exact position of ships and other objects on Earth using GPS.
NASA Technical Reports Server (NTRS)
Flannelly, W. G.; Fabunmi, J. A.; Nagy, E. J.
1981-01-01
Analytical methods for combining flight acceleration and strain data with shake test mobility data to predict the effects of structural changes on flight vibrations and strains are presented. This integration of structural dynamic analysis with flight performance is referred to as analytical testing. The objective of this methodology is to analytically estimate the results of flight testing contemplated structural changes with minimum flying and change trials. The category of changes to the aircraft includes mass, stiffness, absorbers, isolators, and active suppressors. Examples of applying the analytical testing methodology using flight test and shake test data measured on an AH-1G helicopter are included. The techniques and procedures for vibration testing and modal analysis are also described.
Accurate quantum chemical calculations
NASA Technical Reports Server (NTRS)
Bauschlicher, Charles W., Jr.; Langhoff, Stephen R.; Taylor, Peter R.
1989-01-01
An important goal of quantum chemical calculations is to provide an understanding of chemical bonding and molecular electronic structure. A second goal, the prediction of energy differences to chemical accuracy, has been much harder to attain. First, the computational resources required to achieve such accuracy are very large, and second, it is not straightforward to demonstrate that an apparently accurate result, in terms of agreement with experiment, does not result from a cancellation of errors. Recent advances in electronic structure methodology, coupled with the power of vector supercomputers, have made it possible to solve a number of electronic structure problems exactly using the full configuration interaction (FCI) method within a subspace of the complete Hilbert space. These exact results can be used to benchmark approximate techniques that are applicable to a wider range of chemical and physical problems. The methodology of many-electron quantum chemistry is reviewed. Methods are considered in detail for performing FCI calculations. The application of FCI methods to several three-electron problems in molecular physics are discussed. A number of benchmark applications of FCI wave functions are described. Atomic basis sets and the development of improved methods for handling very large basis sets are discussed: these are then applied to a number of chemical and spectroscopic problems; to transition metals; and to problems involving potential energy surfaces. Although the experiences described give considerable grounds for optimism about the general ability to perform accurate calculations, there are several problems that have proved less tractable, at least with current computer resources, and these and possible solutions are discussed.
Accurate ab Initio Spin Densities
2012-01-01
We present an approach for the calculation of spin density distributions for molecules that require very large active spaces for a qualitatively correct description of their electronic structure. Our approach is based on the density-matrix renormalization group (DMRG) algorithm to calculate the spin density matrix elements as a basic quantity for the spatially resolved spin density distribution. The spin density matrix elements are directly determined from the second-quantized elementary operators optimized by the DMRG algorithm. As an analytic convergence criterion for the spin density distribution, we employ our recently developed sampling-reconstruction scheme [J. Chem. Phys. 2011, 134, 224101] to build an accurate complete-active-space configuration-interaction (CASCI) wave function from the optimized matrix product states. The spin density matrix elements can then also be determined as an expectation value employing the reconstructed wave function expansion. Furthermore, the explicit reconstruction of a CASCI-type wave function provides insight into chemically interesting features of the molecule under study such as the distribution of α and β electrons in terms of Slater determinants, CI coefficients, and natural orbitals. The methodology is applied to an iron nitrosyl complex which we have identified as a challenging system for standard approaches [J. Chem. Theory Comput. 2011, 7, 2740]. PMID:22707921
Analytic orbit plane targeting for orbit transfers about an oblate planet
NASA Technical Reports Server (NTRS)
Mchenry, R. L.
1992-01-01
This paper develops closed-form expressions which accurately model variations in orbital inclination and longitude of the ascending node due to the influence of the J2 oblateness perturbation. These analytic expressions are particularly useful in defining perturbed orbit transfer planes which naturally regress into the target intercept position for Lambert-type transfers and in compensating for differential nodal regression between two orbiting vehicles in rendezvous targeting problems. Results of example problems for each of these scenarios demonstrate that they accurately compensate for these oblateness effects.
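The dominant J2 effect on the ascending node is the well-known secular regression rate. A sketch of that first-order formula, as background to the targeting problem (standard Earth constants; this is not the paper's full closed-form range/range-rate machinery):

```python
import math

MU = 398600.4418  # km^3/s^2, Earth gravitational parameter
RE = 6378.137     # km, Earth equatorial radius
J2 = 1.08263e-3   # second zonal harmonic coefficient

def nodal_regression_rate(a, e, i):
    """First-order secular rate of the ascending node (rad/s) due to
    J2: dOmega/dt = -1.5 * n * J2 * (Re/p)**2 * cos(i), with mean
    motion n and semi-latus rectum p = a*(1 - e**2)."""
    n = math.sqrt(MU / a**3)
    p = a * (1.0 - e**2)
    return -1.5 * n * J2 * (RE / p) ** 2 * math.cos(i)

# A ~700 km sun-synchronous-type orbit (i ~ 98.2 deg) should see its
# node drift eastward by roughly 0.9856 deg/day, tracking the mean sun.
rate = nodal_regression_rate(7078.0, 0.0, math.radians(98.2))
deg_per_day = math.degrees(rate) * 86400.0
```

Differential nodal regression between two vehicles, as in the rendezvous targeting problem above, is just the difference of this rate evaluated for the two orbits.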
Not Available
2006-06-01
In the Analytical Microscopy group, within the National Center for Photovoltaic's Measurements and Characterization Division, we combine two complementary areas of analytical microscopy--electron microscopy and proximal-probe techniques--and use a variety of state-of-the-art imaging and analytical tools. We also design and build custom instrumentation and develop novel techniques that provide unique capabilities for studying materials and devices. In our work, we collaborate with you to solve materials- and device-related R&D problems. This sheet summarizes the uses and features of four major tools: transmission electron microscopy, scanning electron microscopy, the dual-beam focused-ion-beam workstation, and scanning probe microscopy.
ERIC Educational Resources Information Center
Pappas, Marjorie L.
1995-01-01
Discusses analytical searching, a process that enables searchers of electronic resources to develop a planned strategy by combining words or phrases with Boolean operators. Defines simple and complex searching, and describes search strategies developed with Boolean logic and truncation. Provides guidelines for teaching students analytical…
Integrated Risk Information System (IRIS)
Express; CASRN 101200-48-0. Human health assessment information on a chemical substance is included in the IRIS database only after a comprehensive review of toxicity data, as outlined in the IRIS assessment development process. Sections I (Health Hazard Assessments for Noncarcinogenic Effect
Lewis, D.W. (Dept. of Geology); McConchie, D.M. (Centre for Coastal Management)
1994-01-01
Both a self-instruction manual and a "cookbook" guide to field and laboratory analytical procedures, this book provides an essential reference for non-specialists. With a minimum of mathematics and virtually no theory, it introduces practitioners to easy, inexpensive options for sample collection and preparation, data acquisition, analytic protocols, result interpretation and verification techniques. This step-by-step guide considers the advantages and limitations of different procedures, discusses safety and troubleshooting, and explains support skills like mapping, photography and report writing. It also offers managers, off-site engineers and others using sediments data a quick course in commissioning studies and making the most of the reports. This manual will answer the growing needs of practitioners in the field, either alone or accompanied by Practical Sedimentology, which surveys the science of sedimentology and provides a basic overview of the principles behind the applications.
Introducing GAMER: A fast and accurate method for ray-tracing galaxies using procedural noise
Groeneboom, N. E.; Dahle, H.
2014-03-10
We developed a novel approach for fast and accurate ray-tracing of galaxies using procedural noise fields. Our method allows for efficient and realistic rendering of synthetic galaxy morphologies, where individual components such as the bulge, disk, stars, and dust can be synthesized in different wavelengths. These components follow empirically motivated overall intensity profiles but contain an additional procedural noise component that gives rise to complex natural patterns that mimic interstellar dust and star-forming regions. These patterns produce more realistic-looking galaxy images than using analytical expressions alone. The method is fully parallelized and creates accurate high- and low-resolution images that can be used, for example, in codes simulating strong and weak gravitational lensing. In addition to having a user-friendly graphical user interface, the C++ software package GAMER is easy to implement into an existing code.
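Procedural noise of the kind described is typically built from hashed lattice values smoothly interpolated between grid points and then overlaid on an empirical intensity law. A minimal value-noise sketch modulating an exponential disk profile (GAMER itself presumably uses richer multi-octave noise; the hash constants and profile parameters below are arbitrary):

```python
import math

def _hash01(ix, iy, seed=0):
    """Deterministic pseudo-random value in [0, 1) for a lattice point."""
    h = (ix * 374761393 + iy * 668265263 + seed * 974634261) & 0xFFFFFFFF
    h = (h ^ (h >> 13)) * 1274126177 & 0xFFFFFFFF
    return (h ^ (h >> 16)) / 2**32

def value_noise(x, y, seed=0):
    """Smoothly interpolated lattice ('value') noise in [0, 1)."""
    ix, iy = math.floor(x), math.floor(y)
    fx, fy = x - ix, y - iy
    sx = fx * fx * (3.0 - 2.0 * fx)  # smoothstep weights
    sy = fy * fy * (3.0 - 2.0 * fy)
    top = _hash01(ix, iy, seed) * (1 - sx) + _hash01(ix + 1, iy, seed) * sx
    bot = _hash01(ix, iy + 1, seed) * (1 - sx) + _hash01(ix + 1, iy + 1, seed) * sx
    return top * (1 - sy) + bot * sy

def disk_intensity(r, x, y, scale_length=1.0, amplitude=0.5):
    """Exponential disk profile modulated by procedural noise: the
    basic trick of overlaying noise on an empirical intensity law."""
    return math.exp(-r / scale_length) * (1.0 + amplitude * (value_noise(x, y) - 0.5))

samples = [value_noise(0.37 * k, 0.91 * k) for k in range(200)]
```

Because the noise is a pure function of position and seed, renders are reproducible and trivially parallelizable, which matches the design goals stated above.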
NASA Astrophysics Data System (ADS)
Endress, E.; Weigelt, S.; Reents, G.; Bayerl, T. M.
2005-01-01
Measurements of very slow diffusive processes in membranes, like the diffusion of integral membrane proteins, by fluorescence recovery after photobleaching (FRAP) are hampered by bleaching of the probe during the read-out of the fluorescence recovery. In the limit of long observation times (very slow diffusion, as in the case of large membrane proteins), this bleaching may introduce errors into the recovery function and thus yield error-prone diffusion coefficients. In this work we present a new approach to a two-dimensional closed-form analytical solution of the reaction-diffusion equation, based on the addition of a dissipative term to the conventional diffusion equation. The calculation was done assuming (i) a Gaussian laser beam profile for bleaching of the spot and (ii) that the fluorescence intensity profile emerging from the spot can be approximated by a two-dimensional Gaussian. The detection scheme derived from the analytical solution allows for diffusion measurements without the constraint of observation bleaching. Recovery curves of experimental FRAP data obtained under non-negligible read-out bleaching for native membranes (rabbit endoplasmic reticulum) on a planar solid support showed excellent agreement with the analytical solution and allowed the calculation of the lipid diffusion coefficient.
Analytical validation of accelerator mass spectrometry for pharmaceutical development
Keck, Bradly D; Ognibene, Ted; Vogel, John S
2011-01-01
The validation parameters for pharmaceutical analyses were examined for the accelerator mass spectrometry measurement of the 14C/C ratio, independent of chemical separation procedures. The isotope ratio measurement was specific (owing to the 14C label), stable across sample storage conditions for at least 1 year, and linear over four orders of magnitude, with an analytical range from 0.1 Modern to at least 2000 Modern (instrument specific). Furthermore, accuracy was excellent (between 1 and 3%), while precision, expressed as the coefficient of variation, was between 1 and 6%, determined primarily by radiocarbon content and the time spent analyzing a sample. Sensitivity, expressed as LOD and LLOQ, was 1 and 10 attomoles of 14C, respectively (which can be expressed as compound equivalents); for a typical small molecule labeled with 14C at 10% incorporation, this corresponds to 30 fg equivalents. Accelerator mass spectrometry provides a sensitive, accurate and precise method of measuring drug compounds in biological matrices. PMID:21083256
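The quoted sensitivity figure can be checked with a short calculation. The 300 g/mol molar mass is an assumed "typical small molecule" value, not stated in the abstract:

```python
# Check of the quoted sensitivity figure. The 300 g/mol molar mass is an
# assumed "typical small molecule" value, not taken from the abstract.
lloq_14c_mol = 10e-18                        # LLOQ: 10 attomoles of 14C
incorporation = 0.10                         # 10% of molecules carry the label
compound_mol = lloq_14c_mol / incorporation  # 100 attomoles of compound
molar_mass = 300.0                           # g/mol (assumption)
mass_fg = compound_mol * molar_mass * 1e15   # grams -> femtograms
print(mass_fg)                               # ~30 fg compound equivalents
```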
Accurate Mass Measurements in Proteomics
Liu, Tao; Belov, Mikhail E.; Jaitly, Navdeep; Qian, Weijun; Smith, Richard D.
2007-08-01
To understand different aspects of life at the molecular level, one would think that ideally all components of specific processes should be individually isolated and studied in detail. Reductionist approaches, i.e., studying one biological event on a one-gene or one-protein-at-a-time basis, have indeed made significant contributions to our understanding of many basic facts of biology. However, these individual “building blocks” cannot be combined into a comprehensive “model” of the life of cells, tissues, and organisms without using more integrative approaches.1,2 For example, the emerging field of “systems biology” aims to quantify all of the components of a biological system, to assess their interactions, and to integrate the diverse types of information obtainable from this system into models that can explain and predict behaviors.3-6 Recent breakthroughs in genomics, proteomics, and bioinformatics are making this daunting task a reality.7-14 Proteomics, the systematic study of the entire complement of proteins expressed by an organism, tissue, or cell under a specific set of conditions at a specific time (i.e., the proteome), has become an essential enabling component of systems biology. While the genome of an organism may be considered static over short timescales, the expression of that genome as actual gene products (i.e., mRNAs and proteins) is a dynamic process that is constantly changing under the influence of environmental and physiological conditions. Monitoring of the transcriptome alone can be carried out using high-throughput cDNA microarray analysis;15-17 however, the measured mRNA levels do not necessarily correlate strongly with the corresponding abundances of proteins.18-20 The actual amount of functional protein can be altered significantly, and become independent of mRNA levels, as a result of post-translational modifications (PTMs),21 alternative splicing,22,23 and protein turnover.24,25 Moreover, the functions of expressed
An Analytical State Transition Matrix for Orbits Perturbed by an Oblate Spheroid
NASA Technical Reports Server (NTRS)
Mueller, A. C.
1977-01-01
An analytical state transition matrix and its inverse, which include the short-period and secular effects of the second zonal harmonic, were developed from the nonsingular PS satellite theory. The fact that the independent variable in the PS theory is not time is in no respect disadvantageous, since any explicit analytical solution must be expressed in the true or eccentric anomaly. This is shown to be the case for the simple conic matrix. The PS theory allows for a concise, accurate, and algorithmically simple state transition matrix. The improvement over the conic matrix ranges from two to four digits of accuracy.
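For context, a state transition matrix can always be built numerically by finite differencing a propagated trajectory; the sketch below does this for normalized two-body motion (the units, step counts, and perturbation size are all illustrative assumptions). An analytical matrix such as the PS one replaces exactly this kind of repeated numerical propagation.

```python
import numpy as np

MU = 1.0  # gravitational parameter in normalized units (assumption)

def two_body_rhs(s):
    """Keplerian two-body dynamics: s = [x, y, z, vx, vy, vz]."""
    r, v = s[:3], s[3:]
    return np.concatenate([v, -MU * r / np.linalg.norm(r) ** 3])

def rk4_propagate(s0, t, steps=2000):
    """Fixed-step RK4 propagation of the state from time 0 to t."""
    h = t / steps
    s = s0.copy()
    for _ in range(steps):
        k1 = two_body_rhs(s)
        k2 = two_body_rhs(s + 0.5 * h * k1)
        k3 = two_body_rhs(s + 0.5 * h * k2)
        k4 = two_body_rhs(s + h * k3)
        s = s + (h / 6) * (k1 + 2 * k2 + 2 * k3 + k4)
    return s

def numerical_stm(s0, t, eps=1e-7):
    """Finite-difference state transition matrix Phi = d s(t) / d s(0)."""
    phi = np.zeros((6, 6))
    for j in range(6):
        dp = np.zeros(6)
        dp[j] = eps
        phi[:, j] = (rk4_propagate(s0 + dp, t) - rk4_propagate(s0 - dp, t)) / (2 * eps)
    return phi

s0 = np.array([1.0, 0.0, 0.0, 0.0, 1.0, 0.0])  # circular orbit
phi = numerical_stm(s0, t=1.0)
```

Since two-body motion is a Hamiltonian flow, the determinant of the resulting matrix should be 1 to within differencing noise, which is a useful sanity check on any state transition matrix.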
NASA Astrophysics Data System (ADS)
Olivares-Rivas, Wilmer; Colmenares, Pedro J.
2016-09-01
The non-static generalized Langevin equation and its corresponding Fokker-Planck equation for the position of a viscous fluid particle were solved in closed form for a time dependent external force. Its solution for a constant external force was obtained analytically. The non-Markovian stochastic differential equation, associated to the dynamics of the position under a colored noise, was then applied to the description of the dynamics and persistence time of particles constrained within absorbing barriers. Comparisons with molecular dynamics were very satisfactory.
Gravitational lensing from compact bodies: Analytical results for strong and weak deflection limits
Amore, Paolo; Cervantes, Mayra; De Pace, Arturo; Fernandez, Francisco M.
2007-04-15
We develop a nonperturbative method that yields analytical expressions for the deflection angle of light in a general static and spherically symmetric metric. The method works by introducing into the problem an artificial parameter, called δ, and by performing an expansion in this parameter to a given order. The results obtained are analytical and nonperturbative because they do not correspond to a polynomial expression in the physical parameters. Already at first order in δ, the analytical formulas obtained using our method provide, at the same time, accurate approximations both at large distances (weak deflection limit) and at distances close to the photon sphere (strong deflection limit). We have applied our technique to different metrics and verified that the error is at most 0.5% for all regimes. We have also proposed an alternative approach which provides simpler formulas, although with larger errors.
Analytic integrable systems: Analytic normalization and embedding flows
NASA Astrophysics Data System (ADS)
Zhang, Xiang
In this paper we mainly study the existence of analytic normalizations and the normal forms of finite-dimensional complete analytic integrable dynamical systems. In more detail, we prove that any complete analytic integrable diffeomorphism F(x)=Bx+f(x) in (Cn,0), with B having eigenvalues of modulus different from 1 and f(x)=O(|x|2), is locally analytically conjugate to its normal form. Meanwhile, we also prove that any complete analytic integrable differential system x˙=Ax+f(x) in (Cn,0), with A having nonzero eigenvalues and f(x)=O(|x|2), is locally analytically conjugate to its normal form. Furthermore, we prove that any complete analytic integrable diffeomorphism defined on an analytic manifold can be embedded in a complete analytic integrable flow. We note that parts of our results improve those of Moser in J. Moser, The analytic invariants of an area-preserving mapping near a hyperbolic fixed point, Comm. Pure Appl. Math. 9 (1956) 673-692, and of Poincaré in H. Poincaré, Sur l'intégration des équations différentielles du premier ordre et du premier degré, II, Rend. Circ. Mat. Palermo 11 (1897) 193-239. These results also improve the ones in Xiang Zhang, Analytic normalization of analytic integrable systems and the embedding flows, J. Differential Equations 244 (2008) 1080-1092, in the sense that the linear part of the systems can be nonhyperbolic, and the one in N.T. Zung, Convergence versus integrability in Poincaré-Dulac normal form, Math. Res. Lett. 9 (2002) 217-228, in that our paper presents the concrete expression of the normal form in a restricted case.
NNLOPS accurate associated HW production
NASA Astrophysics Data System (ADS)
Astill, William; Bizon, Wojciech; Re, Emanuele; Zanderighi, Giulia
2016-06-01
We present a next-to-next-to-leading order accurate description of associated HW production consistently matched to a parton shower. The method is based on reweighting events obtained with the HW plus one jet NLO accurate calculation implemented in POWHEG, extended with the MiNLO procedure, to reproduce NNLO accurate Born distributions. Since the Born kinematics is more complex than in the cases treated before, we use a parametrization of the Collins-Soper angles to reduce the number of variables required for the reweighting. We present phenomenological results at 13 TeV, with cuts suggested by the Higgs Cross Section Working Group.
How to accurately bypass damage
Broyde, Suse; Patel, Dinshaw J.
2016-01-01
Ultraviolet radiation can cause cancer through DNA damage — specifically, by linking adjacent thymine bases. Crystal structures show how the enzyme DNA polymerase η accurately bypasses such lesions, offering protection. PMID:20577203
Accurate Evaluation of Quantum Integrals
NASA Technical Reports Server (NTRS)
Galant, David C.; Goorvitch, D.
1994-01-01
Combining an appropriate finite difference method with Richardson's extrapolation results in a simple, highly accurate numerical method for solving the Schrödinger equation. Important results are that error estimates are provided and that one can extrapolate expectation values, rather than the wavefunctions, to obtain highly accurate expectation values. We discuss the eigenvalues and the error growth in repeated Richardson's extrapolation, and show that expectation values calculated on a crude mesh can be extrapolated to obtain expectation values of high accuracy.
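The scheme described, finite differences refined by repeated Richardson extrapolation, can be illustrated on a simpler problem. The sketch below extrapolates the O(h²) central-difference second derivative (the operator at the heart of a Schrödinger discretization); the test function, step size, and number of levels are illustrative choices, not the paper's setup.

```python
import math

def second_derivative(f, x, h):
    """Central finite difference for f''(x), error O(h^2)."""
    return (f(x + h) - 2 * f(x) + f(x - h)) / h**2

def richardson(f, x, h, levels=3):
    """Repeated Richardson extrapolation of the O(h^2) difference formula.

    Each level cancels the next even power of h in the error expansion,
    so `levels` levels leave an error of order h^(2*levels).
    """
    T = [[second_derivative(f, x, h / 2**i)] for i in range(levels)]
    for k in range(1, levels):
        for i in range(levels - k):
            T[i].append(T[i + 1][k - 1] + (T[i + 1][k - 1] - T[i][k - 1]) / (4**k - 1))
    return T[0][-1]

approx = richardson(math.sin, 1.0, h=0.1)  # exact value is -sin(1)
```

The same tableau applied to expectation values computed on successively refined meshes is what lets a crude mesh yield high-accuracy results.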
Precise and Accurate Density Determination of Explosives Using Hydrostatic Weighing
B. Olinger
2005-07-01
Precise and accurate density determination requires weight measurements in air and water using sufficiently precise analytical balances, knowledge of the densities of air and water, knowledge of thermal expansions, availability of a density standard, and a method to estimate the time to achieve thermal equilibrium with water. Density distributions in pressed explosives are inferred from the densities of elements from a central slice.
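A minimal sketch of the underlying hydrostatic-weighing arithmetic, assuming nominal densities for air and water near room temperature (the paper's actual procedure also accounts for thermal expansion and the time to reach thermal equilibrium); the example readings are invented:

```python
def hydrostatic_density(w_air, w_water, rho_water=0.99705, rho_air=0.0012):
    """Sample density (g/cm^3) from balance readings in air and in water.

    w_air, w_water: balance readings (g); density defaults are nominal
    ~25 degC values (assumptions). Derivation: the buoyancy difference gives
    V = (w_air - w_water) / (rho_water - rho_air); the true mass is
    w_air + rho_air * V; density = mass / V.
    """
    volume = (w_air - w_water) / (rho_water - rho_air)
    mass = w_air + rho_air * volume
    return mass / volume

# Invented example: a pressed sample reading 10.000 g in air, 4.492 g in water
rho = hydrostatic_density(10.000, 4.492)
```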
Bozkaya, Uğur
2014-09-28
General analytic gradient expressions (with the frozen-core approximation) are presented for density-fitted post-HF methods. An efficient implementation of frozen-core analytic gradients for the second-order Møller–Plesset perturbation theory (MP2) with the density-fitting (DF) approximation (applied to both reference and correlation energies), denoted DF-MP2, is reported. The DF-MP2 method is applied to a set of alkanes, conjugated dienes, and noncovalent interaction complexes to compare the computational cost of single-point analytic gradients with MP2 with the resolution of the identity approach (RI-MP2) [F. Weigend and M. Häser, Theor. Chem. Acc. 97, 331 (1997); R. A. Distasio, R. P. Steele, Y. M. Rhee, Y. Shao, and M. Head-Gordon, J. Comput. Chem. 28, 839 (2007)]. In the RI-MP2 method, the DF approach is used only for the correlation energy. Our results demonstrate that the DF-MP2 method substantially accelerates the RI-MP2 method for analytic gradient computations owing to the reduced input/output (I/O) time. Because in the DF-MP2 method the DF approach is used for both reference and correlation energies, the storage of 4-index electron repulsion integrals (ERIs) is avoided; 3-index ERI tensors are employed instead. Further, as in the case of the integrals, our gradient equation completely avoids construction or storage of the 4-index two-particle density matrix (TPDM); instead we use 2- and 3-index TPDMs. Hence, the I/O bottleneck of a gradient computation is significantly reduced. Therefore, the cost of the generalized-Fock matrix (GFM), the TPDM, the solution of the Z-vector equations, the back transformation of the TPDM, and the integral derivatives are substantially reduced when the DF approach is used for the entire energy expression. Further application results show that the DF approach introduces negligible errors for closed-shell reaction energies and equilibrium bond lengths.
Finding accurate frontiers: A knowledge-intensive approach to relational learning
NASA Technical Reports Server (NTRS)
Pazzani, Michael; Brunk, Clifford
1994-01-01
An approach to analytic learning is described that searches for accurate entailments of a Horn-clause domain theory. A hill-climbing search, guided by an information-based evaluation function, is performed by applying a set of operators that derive frontiers from domain theories. The analytic learning system is one component of a multi-strategy relational learning system. We compare the accuracy of concepts learned with this analytic strategy to that of concepts learned with an analytic strategy that operationalizes the domain theory.
Analytic matrix elements for the two-electron atomic basis with logarithmic terms
Liverts, Evgeny Z.; Barnea, Nir
2014-08-01
The two-electron problem for helium-like atoms in the S-state is considered. The basis containing the integer powers of ln r, where r is a radial variable of the Fock expansion, is studied. In this basis, analytic expressions for the matrix elements of the corresponding Hamiltonian are presented. These expressions include only elementary and special functions, which enables very fast and accurate computation of the matrix elements. The decisive contribution of the correct logarithmic terms to the behavior of the two-electron wave function in the vicinity of the triple-coalescence point is reaffirmed.
NASA Astrophysics Data System (ADS)
Ormel, C. W.; Klahr, H. H.
2010-09-01
Planetary bodies form by accretion of smaller bodies. It has been suggested that a very efficient way to grow protoplanets is by accreting particles of size ≪ km (e.g., chondrules, boulders, or fragments of larger bodies), as they can be kept dynamically cold. We investigate the effects of gas drag on the impact radii and the accretion rates of these particles. As simplifying assumptions we restrict our analysis to 2D settings, a gas drag law linear in velocity, and a laminar disk characterized by a smooth (global) pressure gradient that causes particles to drift radially inward. These approximations, however, enable us to cover an arbitrarily large parameter space. The framework of the circularly restricted three-body problem is used to numerically integrate particle trajectories and to derive their impact parameters. Three accretion modes can be distinguished: hyperbolic encounters, where the 2-body gravitational focusing enhances the impact parameter; three-body encounters, where gas drag enhances the capture probability; and settling encounters, where particles settle towards the protoplanet. An analysis of the observed behavior is presented, and we provide a recipe to analytically calculate the impact radius, which confirms the numerical findings. We apply our results to the sweep-up of fragments by a protoplanet at a distance of 5 AU. Accretion of debris on small protoplanets (≲50 km) is found to be slow, because the fragments are distributed over a rather thick layer. However, the newly found settling mechanism, which is characterized by much larger impact radii, becomes relevant for protoplanets of ~10³ km in size and provides a much faster channel for growth.
Lavergne, J; Trissl, H W
1995-01-01
The theoretical relationships between the fluorescence and photochemical yields of PS II and the fraction of open reaction centers are examined in a general model endowed with the following features: i) a homogeneous, infinite PS II domain; ii) exciton-radical-pair equilibrium; and iii) different rates of exciton transfer between core and peripheral antenna beds. Simple analytical relations are derived for the yields and their time courses in induction experiments. The introduction of the exciton-radical-pair equilibrium, for both the open and closed states of the trap, is shown to be equivalent to an irreversible trapping scheme with modified parameters. Variation of the interunit transfer rate allows continuous modulation from the case of separated units to the pure lake model. Broadly used relations for estimating the relative amount of reaction centers from the complementary area of the fluorescence kinetics, or the photochemical yield from fluorescence levels, are examined in this framework. Their dependence on parameters controlling exciton decay is discussed, allowing assessment of their range of applicability. An experimental induction curve is analyzed, with a discussion of its decomposition into alpha and beta contributions. The sigmoidicity of the induction kinetics is characterized by a single parameter J related to Joliot's p, which is shown to depend on both the connectivity of the photosynthetic units and reaction center parameters. On the other hand, the relation between J and the extreme fluorescence levels (or the deviation from the linear Stern-Volmer dependence of 1/φf on the fraction of open traps) is controlled only by antenna connectivity. Experimental data are consistent with a model of connected units for PS II alpha, intermediate between the pure lake model of unrestricted exciton transfer and the isolated units model. PMID:7647250
NASA Astrophysics Data System (ADS)
Kurylyk, B. L.; MacQuarrie, K. T. B.; Caissie, D.; McKenzie, J. M.
2015-05-01
Climate change is expected to increase stream temperatures and the projected warming may alter the spatial extent of habitat for cold-water fish and other aquatic taxa. Recent studies have proposed that stream thermal sensitivities, derived from short-term air temperature variations, can be employed to infer future stream warming due to long-term climate change. However, this approach does not consider the potential for streambed heat fluxes to increase due to gradual warming of the shallow subsurface. The temperature of shallow groundwater is particularly important for the thermal regimes of groundwater-dominated streams and rivers. Also, recent studies have investigated how land surface perturbations, such as wildfires or timber harvesting, can influence stream temperatures by changing stream surface heat fluxes, but these studies have typically not considered how these surface disturbances can also alter shallow groundwater temperatures and streambed heat fluxes. In this study, several analytical solutions to the one-dimensional unsteady advection-diffusion equation for subsurface heat transport are employed to estimate the timing and magnitude of groundwater temperature changes due to seasonal and long-term variability in land surface temperatures. Groundwater thermal sensitivity formulae are proposed that accommodate different surface warming scenarios. The thermal sensitivity formulae suggest that shallow groundwater will warm in response to climate change and other surface perturbations, but the timing and magnitude of the subsurface warming depends on the rate of surface warming, subsurface thermal properties, bulk aquifer depth, and groundwater velocity. The results also emphasize the difference between the thermal sensitivity of shallow groundwater to short-term (e.g., seasonal) and long-term (e.g., multi-decadal) land surface-temperature variability, and thus demonstrate the limitations of using short-term air and water temperature records to project
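One classical closed-form solution of the type used here is the Ogata-Banks solution of the one-dimensional advection-diffusion equation for a step change in surface temperature. A sketch, with illustrative (assumed) parameter values rather than the paper's calibrated ones:

```python
import math

def step_warming(z, t, delta_T, D=0.05, v=0.01):
    """Ogata-Banks solution: temperature rise at depth z (m) and time t (days)
    after a step increase delta_T at the land surface.

    D is the bulk thermal diffusivity (m^2/day) and v the downward effective
    thermal advection velocity due to groundwater flow (m/day); both values
    here are illustrative assumptions, not the paper's calibrated parameters.
    """
    s = 2.0 * math.sqrt(D * t)
    term1 = math.erfc((z - v * t) / s)
    term2 = math.exp(v * z / D) * math.erfc((z + v * t) / s)
    return 0.5 * delta_T * (term1 + term2)
```

Evaluating this at aquifer depth for a given surface-warming step shows directly how subsurface thermal properties and groundwater velocity delay and damp the groundwater response.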
Analytical Chemistry of Nitric Oxide
Hetrick, Evan M.
2013-01-01
Nitric oxide (NO) is the focus of intense research, owing primarily to its wide-ranging biological and physiological actions. A requirement for understanding its origin, activity, and regulation is the need for accurate and precise measurement techniques. Unfortunately, analytical assays for monitoring NO are challenged by NO’s unique chemical and physical properties, including its reactivity, rapid diffusion, and short half-life. Moreover, NO concentrations may span pM to µM in physiological milieu, requiring techniques with wide dynamic response ranges. Despite such challenges, many analytical techniques have emerged for the detection of NO. Herein, we review the most common spectroscopic and electrochemical methods, with special focus on the fundamentals behind each technique and approaches that have been coupled with modern analytical measurement tools or exploited to create novel NO sensors. PMID:20636069
Fast and accurate propagation of coherent light
Lewis, R. D.; Beylkin, G.; Monzón, L.
2013-01-01
We describe a fast algorithm to propagate, for any user-specified accuracy, a time-harmonic electromagnetic field between two parallel planes separated by a linear, isotropic and homogeneous medium. The analytical formulation of this problem (ca 1897) requires the evaluation of the so-called Rayleigh–Sommerfeld integral. If the distance between the planes is small, this integral can be accurately evaluated in the Fourier domain; if the distance is very large, it can be accurately approximated by asymptotic methods. In the large intermediate region of practical interest, where the oscillatory Rayleigh–Sommerfeld kernel must be applied directly, current numerical methods can be highly inaccurate without indicating this fact to the user. In our approach, for any user-specified accuracy ϵ>0, we approximate the kernel by a short sum of Gaussians with complex-valued exponents, and then efficiently apply the result to the input data using the unequally spaced fast Fourier transform. The resulting algorithm has computational complexity , where we evaluate the solution on an N×N grid of output points given an M×M grid of input samples. Our algorithm maintains its accuracy throughout the computational domain. PMID:24204184
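The small-separation regime mentioned above, where the Rayleigh–Sommerfeld integral can be evaluated accurately in the Fourier domain, corresponds to the standard angular spectrum method. A minimal sketch of that textbook approach (not the authors' Gaussian-sum algorithm; the grid, wavelength, and beam parameters are illustrative):

```python
import numpy as np

def angular_spectrum_propagate(field, wavelength, dx, z):
    """Propagate a sampled scalar field between parallel planes a distance z
    apart via the angular spectrum (Fourier-domain) method."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)          # spatial frequencies (cycles/m)
    FX, FY = np.meshgrid(fx, fx)
    k = 2 * np.pi / wavelength
    arg = k**2 - (2 * np.pi * FX) ** 2 - (2 * np.pi * FY) ** 2
    kz = np.sqrt(np.maximum(arg, 0.0).astype(complex))
    H = np.exp(1j * kz * z)               # free-space transfer function
    H[arg < 0] = 0.0                      # drop evanescent components
    return np.fft.ifft2(np.fft.fft2(field) * H)

# Illustrative example: 0.5 um light, 1 um pixels, Gaussian input beam
n, dx, wl = 256, 1e-6, 0.5e-6
x = (np.arange(n) - n / 2) * dx
X, Y = np.meshgrid(x, x)
u0 = np.exp(-(X**2 + Y**2) / (20e-6) ** 2)
u1 = angular_spectrum_propagate(u0, wl, dx, 50e-6)
```

Because the transfer function has unit modulus on propagating components, total energy is conserved, which is a quick correctness check; the authors' Gaussian-sum approach targets the intermediate-distance regime where this plain Fourier evaluation loses accuracy.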
Palm: Easing the Burden of Analytical Performance Modeling
Tallent, Nathan R.; Hoisie, Adolfy
2014-06-01
Analytical (predictive) application performance models are critical for diagnosing performance-limiting resources, optimizing systems, and designing machines. Creating models, however, is difficult because they must be both accurate and concise. To ease the burden of performance modeling, we developed Palm, a modeling tool that combines top-down (human-provided) semantic insight with bottom-up static and dynamic analysis. To express insight, Palm defines a source code modeling annotation language. By coordinating models and source code, Palm's models are 'first-class' and reproducible. Unlike prior work, Palm formally links models, functions, and measurements. As a result, Palm (a) uses functions to either abstract or express complexity; (b) generates hierarchical models (representing an application's static and dynamic structure); and (c) automatically incorporates measurements to focus attention, represent constant behavior, and validate models. We discuss generating models for three different applications.
Visual Analytics of Brain Networks
Li, Kaiming; Guo, Lei; Faraco, Carlos; Zhu, Dajiang; Chen, Hanbo; Yuan, Yixuan; Lv, Jinglei; Deng, Fan; Jiang, Xi; Zhang, Tuo; Hu, Xintao; Zhang, Degang; Miller, L Stephen; Liu, Tianming
2014-01-01
Identification of regions of interest (ROIs) is a fundamental issue in brain network construction and analysis. Recent studies demonstrate that multimodal neuroimaging approaches and joint analysis strategies are crucial for accurate, reliable and individualized identification of brain ROIs. In this paper, we present a novel approach of visual analytics and its open-source software for ROI definition and brain network construction. By combining neuroscience knowledge and computational intelligence capabilities, visual analytics can generate accurate, reliable and individualized ROIs for brain networks via joint modeling of multimodal neuroimaging data and an intuitive and real-time visual analytics interface. Furthermore, it can be used as a functional ROI optimization and prediction solution when fMRI data is unavailable or inadequate. We have applied this approach to an operation span working memory fMRI/DTI dataset, a schizophrenia DTI/resting state fMRI (R-fMRI) dataset, and a mild cognitive impairment DTI/R-fMRI dataset, in order to demonstrate the effectiveness of visual analytics. Our experimental results are encouraging. PMID:22414991
Approximate analytic solutions to the NPDD: Short exposure approximations
NASA Astrophysics Data System (ADS)
Close, Ciara E.; Sheridan, John T.
2014-04-01
There have been many attempts to accurately describe the photochemical processes that take place in photopolymer materials. As the models have become more accurate, solving them has become more numerically intensive and more 'opaque'. Recent models incorporate the major photochemical reactions taking place, as well as the diffusion effects resulting from the photo-polymerisation process, and have accurately described these processes in a number of different materials. It is our aim to develop accessible mathematical expressions which provide physical insights and simple quantitative predictions of practical value to material designers and users. In this paper, starting with the Non-Local Photo-Polymerisation Driven Diffusion (NPDD) model's coupled integro-differential equations, we first simplify these equations and validate the accuracy of the resulting approximate model. This new set of governing equations is then used to produce accurate analytic solutions (polynomials) describing the evolution of the monomer and polymer concentrations, and the grating refractive index modulation, in the case of short low-intensity sinusoidal exposures. The physical significance of the results and their consequences for holographic data storage (HDS) are then discussed.
Rapid Non-Linear Uncertainty Propagation via Analytical Techniques
NASA Astrophysics Data System (ADS)
Fujimoto, K.; Scheeres, D. J.
2012-09-01
Space situational awareness (SSA) is known to be a data-starved problem compared to traditional estimation problems in that observation gaps per object may span days if not weeks. Therefore, consistent characterization of the uncertainty associated with these objects, including non-linear effects, is crucial in maintaining an accurate catalog of objects in Earth orbit. At the same time, the motion of satellites in Earth orbit is well modeled and particularly amenable to having its solution and its uncertainty described through analytic or semi-analytic techniques. Even when stronger non-gravitational perturbations such as solar radiation pressure and atmospheric drag are encountered, these perturbations generally have deterministic components that are substantially larger than their time-varying stochastic components. Analytic techniques are powerful because time propagation is only a matter of changing the time parameter, allowing for rapid computational turnaround. These two ideas are combined in this paper: a method of analytically propagating non-linear orbit uncertainties is discussed. In particular, the uncertainty is expressed as an analytic probability density function (pdf) for all time. For a deterministic system model, such pdfs may be obtained if the initial pdf and the system states for all time are also given analytically. Even when closed-form solutions are not available, approximate solutions exist in the form of Edgeworth series for pdfs and Taylor series for the states. The coefficients of the latter expansion are referred to as state transition tensors (STTs), which are a generalization of state transition matrices to arbitrary order. Analytically expressed pdfs can be incorporated in many practical tasks in SSA. One can compute the mean and covariance of the uncertainty, for example, with the moments of the initial pdf as inputs. This process does not involve any sampling and its accuracy can be determined a priori. Analytical
An accurate and practical method for inference of weak gravitational lensing from galaxy images
NASA Astrophysics Data System (ADS)
Bernstein, Gary M.; Armstrong, Robert; Krawiec, Christina; March, Marisa C.
2016-07-01
We demonstrate highly accurate recovery of weak gravitational lensing shear using an implementation of the Bayesian Fourier Domain (BFD) method proposed by Bernstein & Armstrong, extended to correct for selection biases. The BFD formalism is rigorously correct for Nyquist-sampled, background-limited, uncrowded images of background galaxies. BFD does not assign shapes to galaxies, instead compressing the pixel data D into a vector of moments M, such that we have an analytic expression for the probability P(M|g) of obtaining the observations with gravitational lensing distortion g along the line of sight. We implement an algorithm for conducting BFD's integrations over the population of unlensed source galaxies which measures ≈10 galaxies s⁻¹ core⁻¹ with good scaling properties. Initial tests of this code on ≈10⁹ simulated lensed galaxy images recover the simulated shear to a fractional accuracy of m = (2.1 ± 0.4) × 10⁻³, substantially more accurate than has been demonstrated previously for any generally applicable method. Deep sky exposures generate a sufficiently accurate approximation to the noiseless, unlensed galaxy population distribution assumed as input to BFD. Potential extensions of the method include simultaneous measurement of magnification and shear; multiple-exposure, multiband observations; and joint inference of photometric redshifts and lensing tomography.
An accurate and practical method for inference of weak gravitational lensing from galaxy images
NASA Astrophysics Data System (ADS)
Bernstein, Gary M.; Armstrong, Robert; Krawiec, Christina; March, Marisa C.
2016-04-01
We demonstrate highly accurate recovery of weak gravitational lensing shear using an implementation of the Bayesian Fourier Domain (BFD) method proposed by Bernstein & Armstrong (2014, BA14), extended to correct for selection biases. The BFD formalism is rigorously correct for Nyquist-sampled, background-limited, uncrowded images of background galaxies. BFD does not assign shapes to galaxies, instead compressing the pixel data D into a vector of moments M, such that we have an analytic expression for the probability P(M|g) of obtaining the observations with gravitational lensing distortion g along the line of sight. We implement an algorithm for conducting BFD's integrations over the population of unlensed source galaxies which measures ≈10 galaxies/second/core with good scaling properties. Initial tests of this code on ≈10⁹ simulated lensed galaxy images recover the simulated shear to a fractional accuracy of m = (2.1 ± 0.4) × 10⁻³, substantially more accurate than has been demonstrated previously for any generally applicable method. Deep sky exposures generate a sufficiently accurate approximation to the noiseless, unlensed galaxy population distribution assumed as input to BFD. Potential extensions of the method include simultaneous measurement of magnification and shear; multiple-exposure, multi-band observations; and joint inference of photometric redshifts and lensing tomography.
Analytical model of diffuse reflectance spectrum of skin tissue
Lisenko, S A; Kugeiko, M M; Firago, V A; Sobchuk, A N
2014-01-31
We have derived simple analytical expressions that enable highly accurate calculation of diffusely reflected light signals of skin in the spectral range from 450 to 800 nm at a distance from the region of delivery of exciting radiation. The expressions, taking into account the dependence of the detected signals on the refractive index, transport scattering coefficient, absorption coefficient and anisotropy factor of the medium, have been obtained in the approximation of a two-layer medium model (epidermis and dermis) for the same parameters of light scattering but different absorption coefficients of layers. Numerical experiments on the retrieval of the skin biophysical parameters from the diffuse reflectance spectra simulated by the Monte Carlo method show that commercially available fibre-optic spectrophotometers with a fixed distance between the radiation source and detector can reliably determine the concentration of bilirubin, oxy- and deoxyhaemoglobin in the dermis tissues and the tissue structure parameter characterising the size of its effective scatterers. We present the examples of quantitative analysis of the experimental data, confirming the correctness of estimates of biophysical parameters of skin using the obtained analytical expressions. (biophotonics)
Analytical model of diffuse reflectance spectrum of skin tissue
NASA Astrophysics Data System (ADS)
Lisenko, S. A.; Kugeiko, M. M.; Firago, V. A.; Sobchuk, A. N.
2014-01-01
We have derived simple analytical expressions that enable highly accurate calculation of diffusely reflected light signals of skin in the spectral range from 450 to 800 nm at a distance from the region of delivery of exciting radiation. The expressions, taking into account the dependence of the detected signals on the refractive index, transport scattering coefficient, absorption coefficient and anisotropy factor of the medium, have been obtained in the approximation of a two-layer medium model (epidermis and dermis) for the same parameters of light scattering but different absorption coefficients of layers. Numerical experiments on the retrieval of the skin biophysical parameters from the diffuse reflectance spectra simulated by the Monte Carlo method show that commercially available fibre-optic spectrophotometers with a fixed distance between the radiation source and detector can reliably determine the concentration of bilirubin, oxy- and deoxyhaemoglobin in the dermis tissues and the tissue structure parameter characterising the size of its effective scatterers. We present the examples of quantitative analysis of the experimental data, confirming the correctness of estimates of biophysical parameters of skin using the obtained analytical expressions.
Analytical calculation of spectral phase of grism pairs by the geometrical ray tracing method
NASA Astrophysics Data System (ADS)
Rahimi, L.; Askari, A. A.; Saghafifar, H.
2016-07-01
Optimal operation of a grism pair is most readily achieved when an analytical expression for its spectral phase is available. In this paper, we have employed the accurate geometrical ray tracing method to calculate the analytical phase shift of a grism pair in transmission and reflection configurations. As shown by the results, for a great variety of complicated configurations, the spectral phase of a grism pair takes the same form as that of a prism pair. The only exception is when the light enters and exits through different facets of a reflection grism. The analytical result has been used to calculate the second-order dispersions of several examples of grism pairs in various possible configurations. All results are in complete agreement with those from the ray tracing method. The result of this work can be very helpful in the optimal design and application of grism pairs in various configurations.
Hadjitheodorou, Amalia; Kalosakas, George
2014-09-01
We investigate, both analytically and numerically, diffusion-controlled drug release from composite spherical formulations consisting of an inner core and an outer shell of different drug diffusion coefficients. Theoretically derived analytical results are based on the exact solution of Fick's second law of diffusion for a composite sphere, while numerical data are obtained using Monte Carlo simulations. In both cases, and for the range of matrix parameter values considered in this work, fractional drug release profiles are described accurately by a stretched exponential function. The release kinetics obtained is quantified through a detailed investigation of the dependence of the two stretched exponential release parameters on the device characteristics, namely the geometrical radii of the inner core and outer shell and the corresponding drug diffusion coefficients. Similar behaviors are revealed by both the theoretical results and the numerical simulations, and approximate analytical expressions are presented for the dependencies. PMID:25063169
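A stretched exponential (Weibull) release profile of the kind described above can be sketched in a few lines; the parameter values below (tau, beta) are illustrative placeholders, not fitted values from the paper:

```python
import math

def fractional_release(t, tau, beta):
    """Stretched-exponential (Weibull) release profile:
    M_t / M_inf = 1 - exp(-(t / tau)**beta)."""
    return 1.0 - math.exp(-((t / tau) ** beta))

# Hypothetical parameter values, chosen only for illustration:
tau, beta = 50.0, 0.65

# Fractional release sampled at a few times
profile = [fractional_release(t, tau, beta) for t in range(0, 201, 50)]
```

In the paper's analysis, tau and beta would be extracted by fitting this form to the exact composite-sphere solution or to Monte Carlo release data, and then studied as functions of the core/shell radii and diffusion coefficients.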
NASA Astrophysics Data System (ADS)
Zhang, Gang; Zhou, Di; Mortari, Daniele
2012-12-01
A new approximate analytical method for the two-body impulsive orbit rendezvous problem with short range is presented. The classical analytical approach derives the initial relative velocity from the state transition matrix of linear relative motion equations. This paper proposes a different analytical approach based on the relative Lambert solutions. An approximate expression for the transfer time is obtained as a function of chaser's and target's semi-major axes difference. This results in first and second order estimates of the chaser's semi-major axis. Singularity points of rendezvous time for the classical and proposed new methods are both analyzed. As compared with the classical method, the new solution is simpler, more accurate, and has fewer singularity points. Moreover, the proposed method can be easily expanded to higher order solutions. A numerical example quantifies the accuracy gain for multiple-revolution cases.
Microemulsification: an approach for analytical determinations.
Lima, Renato S; Shiroma, Leandro Y; Teixeira, Alvaro V N C; de Toledo, José R; do Couto, Bruno C; de Carvalho, Rogério M; Carrilho, Emanuel; Kubota, Lauro T; Gobbi, Angelo L
2014-09-16
We present a novel method for analytical determinations that combines simplicity, rapidity, low consumption of chemicals, and portability with high analytical performance, taking into account parameters such as precision, linearity, robustness, and accuracy. This approach relies on the effect of the analyte content on the Gibbs free energy of dispersions, affecting the thermodynamic stabilization of emulsions or Winsor systems to form microemulsions (MEs). This phenomenon is expressed by the minimum volume fraction of amphiphile required to form a microemulsion (Φ(ME)), which is the analytical signal of the method. Thus, the measurements can be taken by visually monitoring the transition of the dispersions from cloudy to transparent during microemulsification, like a titration, with no electric power required. The studies performed were: phase behavior, droplet dimension by dynamic light scattering, analytical curve, and robustness tests. The reliability of the method was evaluated by determining water in ethanol fuels and monoethylene glycol in complex samples of liquefied natural gas. The dispersions were composed of water-chlorobenzene (water analysis) and water-oleic acid (monoethylene glycol analysis) with ethanol as the hydrotrope phase. The mean hydrodynamic diameter values for the nanostructures in the droplet-based water-chlorobenzene MEs were in the range of 1 to 11 nm. The microemulsification procedures were conducted by adding ethanol to water-oleic acid (W-O) mixtures with a micropipette and shaking. The Φ(ME) measurements were performed in a thermostatic water bath at 23 °C by direct visual observation of the media. The experiments to determine water demonstrated that the analytical performance depends on the composition of the ME, showing the flexibility of the developed method. The linear range was fairly broad, with limits of linearity up to 70.00% water in ethanol. For monoethylene glycol in
Huckans, Marilyn; Fuller, Bret E; Olavarria, Hannah; Sasaki, Anna W; Chang, Michael; Flora, Kenneth D; Kolessar, Michael; Kriz, Daniel; Anderson, Jeanne R; Vandenbark, Arthur A; Loftis, Jennifer M
2014-03-01
Background: The purpose of this study was to characterize hepatitis C virus (HCV)-associated differences in the expression of 47 inflammatory factors and to evaluate the potential role of peripheral immune activation in HCV-associated neuropsychiatric symptoms: depression, anxiety, fatigue, and pain. An additional objective was to evaluate the role of immune factor dysregulation in the expression of specific neuropsychiatric symptoms to identify biomarkers that may be relevant to the treatment of these neuropsychiatric symptoms in adults with or without HCV. Methods: Blood samples and neuropsychiatric symptom severity scales were collected from HCV-infected adults (HCV+, n = 39) and demographically similar noninfected controls (HCV-, n = 40). Multi-analyte profile analysis was used to evaluate plasma biomarkers. Results: Compared with HCV- controls, HCV+ adults reported significantly (P < 0.050) greater depression, anxiety, fatigue, and pain, and they were more likely to present with an increased inflammatory profile as indicated by significantly higher plasma levels of 40% (19/47) of the factors assessed (21%, after correcting for multiple comparisons). Within the HCV+ group, but not within the HCV- group, an increased inflammatory profile (indicated by the number of immune factors > the LDC) significantly correlated with depression, anxiety, and pain. Within the total sample, neuropsychiatric symptom severity was significantly predicted by protein signatures consisting of 4-10 plasma immune factors; protein signatures significantly accounted for 19-40% of the variance in depression, anxiety, fatigue, and pain. Conclusions: Overall, the results demonstrate that altered expression of a network of plasma immune factors contributes to neuropsychiatric symptom severity. These findings offer new biomarkers to potentially facilitate pharmacotherapeutic development and to increase our understanding of the molecular pathways associated with neuropsychiatric symptoms in
Huckans, Marilyn; Fuller, Bret E; Olavarria, Hannah; Sasaki, Anna W; Chang, Michael; Flora, Kenneth D; Kolessar, Michael; Kriz, Daniel; Anderson, Jeanne R; Vandenbark, Arthur A; Loftis, Jennifer M
2014-01-01
Background: The purpose of this study was to characterize hepatitis C virus (HCV)-associated differences in the expression of 47 inflammatory factors and to evaluate the potential role of peripheral immune activation in HCV-associated neuropsychiatric symptoms: depression, anxiety, fatigue, and pain. An additional objective was to evaluate the role of immune factor dysregulation in the expression of specific neuropsychiatric symptoms to identify biomarkers that may be relevant to the treatment of these neuropsychiatric symptoms in adults with or without HCV. Methods: Blood samples and neuropsychiatric symptom severity scales were collected from HCV-infected adults (HCV+, n = 39) and demographically similar noninfected controls (HCV−, n = 40). Multi-analyte profile analysis was used to evaluate plasma biomarkers. Results: Compared with HCV− controls, HCV+ adults reported significantly (P < 0.050) greater depression, anxiety, fatigue, and pain, and they were more likely to present with an increased inflammatory profile as indicated by significantly higher plasma levels of 40% (19/47) of the factors assessed (21%, after correcting for multiple comparisons). Within the HCV+ group, but not within the HCV− group, an increased inflammatory profile (indicated by the number of immune factors > the LDC) significantly correlated with depression, anxiety, and pain. Within the total sample, neuropsychiatric symptom severity was significantly predicted by protein signatures consisting of 4–10 plasma immune factors; protein signatures significantly accounted for 19–40% of the variance in depression, anxiety, fatigue, and pain. Conclusions: Overall, the results demonstrate that altered expression of a network of plasma immune factors contributes to neuropsychiatric symptom severity. These findings offer new biomarkers to potentially facilitate pharmacotherapeutic development and to increase our understanding of the molecular pathways associated with neuropsychiatric
The analytic renormalization group
NASA Astrophysics Data System (ADS)
Ferrari, Frank
2016-08-01
Finite temperature Euclidean two-point functions in quantum mechanics or quantum field theory are characterized by a discrete set of Fourier coefficients Gk, k ∈ Z, associated with the Matsubara frequencies νk = 2πk/β. We show that analyticity implies that the coefficients Gk must satisfy an infinite number of model-independent linear equations that we write down explicitly. In particular, we construct "Analytic Renormalization Group" linear maps Aμ which, for any choice of cut-off μ, allow one to express the low-energy Fourier coefficients for |νk| < μ (with the possible exception of the zero mode G0), together with the real-time correlators and spectral functions, in terms of the high-energy Fourier coefficients for |νk| ≥ μ. Using a simple numerical algorithm, we show that the exact universal linear constraints on Gk can be used to systematically improve any random approximate data set obtained, for example, from Monte-Carlo simulations. Our results are illustrated on several explicit examples.
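For concreteness, the Matsubara frequencies νk = 2πk/β and a set of Fourier coefficients Gk are easy to compute directly; the harmonic-oscillator coefficients Gk = 1/(νk² + ω²) used below are a textbook example chosen here for illustration, not a result of the paper:

```python
import math

def matsubara_freq(k, beta):
    """Bosonic Matsubara frequency nu_k = 2*pi*k / beta."""
    return 2.0 * math.pi * k / beta

def oscillator_coeff(k, beta, omega):
    """Fourier coefficient G_k = 1 / (nu_k^2 + omega^2) of the Euclidean
    two-point function of a harmonic oscillator of frequency omega
    (a standard textbook result, used only to illustrate the G_k above)."""
    nu = matsubara_freq(k, beta)
    return 1.0 / (nu * nu + omega * omega)

# Illustrative inverse temperature and oscillator frequency
beta, omega = 2.0, 1.5
coeffs = {k: oscillator_coeff(k, beta, omega) for k in range(-3, 4)}
```

Note the symmetry Gk = G−k for this real, even two-point function; the model-independent constraints of the paper act on such sequences of coefficients.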
Wideband analytical equivalent circuit for one-dimensional periodic stacked arrays.
Molero, Carlos; Rodríguez-Berral, Raúl; Mesa, Francisco; Medina, Francisco; Yakovlev, Alexander B
2016-01-01
A wideband equivalent circuit is proposed for the accurate analysis of scattering from a set of stacked slit gratings illuminated by a plane wave with transverse magnetic or electric polarization that impinges normally or obliquely along one of the principal planes of the structure. The slit gratings are printed on dielectric slabs of arbitrary thickness, including the case of closely spaced gratings that interact by higher-order modes. A Π-circuit topology is obtained for a pair of coupled arrays, with fully analytical expressions for all the circuit elements. This equivalent Π circuit is employed as the basis to derive the equivalent circuit of finite stacks with any given number of gratings. Analytical expressions for the Brillouin diagram and the Bloch impedance are also obtained for infinite periodic stacks. PMID:26871189
Accurate calculation of diffraction-limited encircled and ensquared energy.
Andersen, Torben B
2015-09-01
Mathematical properties of the encircled and ensquared energy functions for the diffraction-limited point-spread function (PSF) are presented. These include power series and a set of linear differential equations that facilitate the accurate calculation of these functions. Asymptotic expressions are derived that provide very accurate estimates for the relative amount of energy in the diffraction PSF that fall outside a square or rectangular large detector. Tables with accurate values of the encircled and ensquared energy functions are also presented. PMID:26368873
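The classical diffraction-limited encircled-energy result (Rayleigh's formula E(v) = 1 − J0(v)² − J1(v)², with v the normalized radius) can be evaluated with a plain power-series implementation of the Bessel functions; this is a generic sketch of the well-known closed form, not the paper's own series or differential-equation method:

```python
import math

def _bessel_j(n, x, terms=60):
    """Bessel function J_n(x) via its power series (adequate for moderate x)."""
    s = 0.0
    for m in range(terms):
        s += ((-1) ** m) * (x / 2.0) ** (2 * m + n) / (
            math.factorial(m) * math.factorial(m + n))
    return s

def encircled_energy(v):
    """Fraction of the Airy-pattern energy inside normalized radius v:
    E(v) = 1 - J0(v)^2 - J1(v)^2 (Rayleigh's classical result)."""
    return 1.0 - _bessel_j(0, v) ** 2 - _bessel_j(1, v) ** 2
```

At the first dark ring of the Airy pattern (v ≈ 3.8317, the first zero of J1) this gives the familiar ≈83.8% encircled energy; the ensquared-energy function discussed in the abstract has no comparably simple closed form, which motivates the power series and differential equations developed there.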
Shock Emergence in Supernovae: Limiting Cases and Accurate Approximations
NASA Astrophysics Data System (ADS)
Ro, Stephen; Matzner, Christopher D.
2013-08-01
We examine the dynamics of accelerating normal shocks in stratified planar atmospheres, providing accurate fitting formulae for the scaling index relating shock velocity to the initial density and for the post-shock acceleration factor as functions of the polytropic and adiabatic indices which parameterize the problem. In the limit of a uniform initial atmosphere, there are analytical formulae for these quantities. In the opposite limit of a very steep density gradient, the solutions match the outcome of shock acceleration in exponential atmospheres.
SHOCK EMERGENCE IN SUPERNOVAE: LIMITING CASES AND ACCURATE APPROXIMATIONS
Ro, Stephen; Matzner, Christopher D.
2013-08-10
We examine the dynamics of accelerating normal shocks in stratified planar atmospheres, providing accurate fitting formulae for the scaling index relating shock velocity to the initial density and for the post-shock acceleration factor as functions of the polytropic and adiabatic indices which parameterize the problem. In the limit of a uniform initial atmosphere, there are analytical formulae for these quantities. In the opposite limit of a very steep density gradient, the solutions match the outcome of shock acceleration in exponential atmospheres.
ERIC Educational Resources Information Center
Oblinger, Diana G.
2012-01-01
Talk about analytics seems to be everywhere. Yet even with all the talk, many in higher education have questions about--and objections to--using analytics in colleges and universities. In this article, the author explores the use of analytics in, and all around, higher education. (Contains 1 note.)
ERIC Educational Resources Information Center
MacNeill, Sheila; Campbell, Lorna M.; Hawksey, Martin
2014-01-01
This article presents an overview of the development and use of analytics in the context of education. Using Buckingham Shum's three levels of analytics, the authors present a critical analysis of current developments in the domain of learning analytics, and contrast the potential value of analytics research and development with real world…
Technology Transfer Automated Retrieval System (TEKTRAN)
Analytical methods for the determination of mycotoxins in foods are commonly based on chromatographic techniques (GC, HPLC or LC-MS). Although these methods permit a sensitive and accurate determination of the analyte, they require skilled personnel and are time-consuming, expensive, and unsuitable ...
Accurate radiative transfer calculations for layered media.
Selden, Adrian C
2016-07-01
Simple yet accurate results for radiative transfer in layered media with discontinuous refractive index are obtained by the method of K-integrals. These are certain weighted integrals applied to the angular intensity distribution at the refracting boundaries. The radiative intensity is expressed as the sum of the asymptotic angular intensity distribution valid in the depth of the scattering medium and a transient term valid near the boundary. Integrated boundary equations are obtained, yielding simple linear equations for the intensity coefficients, enabling the angular emission intensity and the diffuse reflectance (albedo) and transmittance of the scattering layer to be calculated without solving the radiative transfer equation directly. Examples are given of half-space, slab, interface, and double-layer calculations, and extensions to multilayer systems are indicated. The K-integral method is orders of magnitude more accurate than diffusion theory and can be applied to layered scattering media with a wide range of scattering albedos, with potential applications to biomedical and ocean optics. PMID:27409700
Accurate thickness measurement of graphene
NASA Astrophysics Data System (ADS)
Shearer, Cameron J.; Slattery, Ashley D.; Stapleton, Andrew J.; Shapter, Joseph G.; Gibson, Christopher T.
2016-03-01
Graphene has emerged as a material with a vast variety of applications. The electronic, optical and mechanical properties of graphene are strongly influenced by the number of layers present in a sample. As a result, the dimensional characterization of graphene films is crucial, especially with the continued development of new synthesis methods and applications. A number of techniques exist to determine the thickness of graphene films including optical contrast, Raman scattering and scanning probe microscopy techniques. Atomic force microscopy (AFM), in particular, is used extensively since it provides three-dimensional images that enable the measurement of the lateral dimensions of graphene films as well as the thickness, and by extension the number of layers present. However, in the literature AFM has proven to be inaccurate with a wide range of measured values for single layer graphene thickness reported (between 0.4 and 1.7 nm). This discrepancy has been attributed to tip-surface interactions, image feedback settings and surface chemistry. In this work, we use standard and carbon nanotube modified AFM probes and a relatively new AFM imaging mode known as PeakForce tapping mode to establish a protocol that will allow users to accurately determine the thickness of graphene films. In particular, the error in measuring the first layer is reduced from 0.1-1.3 nm to 0.1-0.3 nm. Furthermore, in the process we establish that the graphene-substrate adsorbate layer and imaging force, in particular the pressure the tip exerts on the surface, are crucial components in the accurate measurement of graphene using AFM. These findings can be applied to other 2D materials.
An Analytic Function of Lunar Surface Temperature for Exospheric Modeling
NASA Technical Reports Server (NTRS)
Hurley, Dana M.; Sarantos, Menelaos; Grava, Cesare; Williams, Jean-Pierre; Retherford, Kurt D.; Siegler, Matthew; Greenhagen, Benjamin; Paige, David
2014-01-01
We present an analytic expression to represent the lunar surface temperature as a function of Sun-state latitude and local time. The approximation represents neither topographical features nor compositional effects and therefore does not change as a function of selenographic latitude and longitude. The function reproduces the surface temperature measured by Diviner to within ±10 K at 72% of grid points for dayside solar zenith angles of less than 80°, and at 98% of grid points for nightside solar zenith angles greater than 100°. The analytic function is least accurate at the terminator, where there is a strong gradient in the temperature, and in the polar regions. Topographic features have a larger effect on the actual temperature near the terminator than at other solar zenith angles. For exospheric modeling the effects of topography on the thermal model can be approximated by using an effective longitude for determining the temperature. This effective longitude is randomly redistributed with 1σ of 4.5°. The resulting ''roughened'' analytical model well represents the statistical dispersion in the Diviner data and is expected to be generally useful for future models of lunar surface temperature, especially those implemented within exospheric simulations that address questions of volatile transport.
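As a rough illustration of such a Sun-state parameterization, a common first approximation is dayside radiative equilibrium, T ∝ cos^(1/4)(SZA), clamped to a constant nightside value. This toy model and its round parameter values are assumptions made here for illustration only, not the fitted function of the paper:

```python
import math

def surface_temperature(sza_deg, t_subsolar=390.0, t_night=100.0):
    """Toy lunar surface temperature vs. solar zenith angle (degrees).
    Dayside: radiative equilibrium, T = T_subsolar * cos(SZA)**(1/4).
    Nightside (SZA >= 90 deg): constant T_night.
    The values 390 K (subsolar) and 100 K (night) are round illustrative
    numbers, NOT the Diviner-fitted parameters of the paper."""
    if sza_deg >= 90.0:
        return t_night
    mu = math.cos(math.radians(sza_deg))
    return max(t_night, t_subsolar * mu ** 0.25)
```

A fitted analytic function of this general shape, with the terminator and polar regions handled carefully, is what an exosphere simulation would call at every particle bounce to obtain the local temperature.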
Detecting Cancer Quickly and Accurately
NASA Astrophysics Data System (ADS)
Gourley, Paul; McDonald, Anthony; Hendricks, Judy; Copeland, Guild; Hunter, John; Akhil, Ohmar; Capps, Heather; Curry, Marc; Skirboll, Steve
2000-03-01
We present a new technique for high throughput screening of tumor cells in a sensitive nanodevice that has the potential to quickly identify a cell population that has begun the rapid protein synthesis and mitosis characteristic of cancer cell proliferation. Currently, pathologists rely on microscopic examination of cell morphology using century-old staining methods that are labor-intensive, time-consuming and frequently in error. New micro-analytical methods for automated, real time screening without chemical modification are critically needed to advance pathology and improve diagnoses. We have teamed scientists with physicians to create a microlaser biochip (based upon our R&D award winning bio-laser concept)1 which evaluates tumor cells by quantifying their growth kinetics. The key new discovery was demonstrating that the lasing spectra are sensitive to the biomolecular mass in the cell, which changes the speed of light in the laser microcavity. Initial results with normal and cancerous human brain cells show that only a few hundred cells -- the equivalent of a billionth of a liter -- are required to detect abnormal growth. The ability to detect cancer in such a minute tissue sample is crucial for resecting a tumor margin or grading highly localized tumor malignancy. 1. P. L. Gourley, NanoLasers, Scientific American, March 1998, pp. 56-61. This work supported under DOE contract DE-AC04-94AL85000 and the Office of Basic Energy Sciences.
Detecting cancer quickly and accurately
NASA Astrophysics Data System (ADS)
Gourley, Paul L.; McDonald, Anthony E.; Hendricks, Judy K.; Copeland, G. C.; Hunter, John A.; Akhil, O.; Cheung, D.; Cox, Jimmy D.; Capps, H.; Curry, Mark S.; Skirboll, Steven K.
2000-03-01
We present a new technique for high throughput screening of tumor cells in a sensitive nanodevice that has the potential to quickly identify a cell population that has begun the rapid protein synthesis and mitosis characteristic of cancer cell proliferation. Currently, pathologists rely on microscopic examination of cell morphology using century-old staining methods that are labor-intensive, time-consuming and frequently in error. New micro-analytical methods for automated, real time screening without chemical modification are critically needed to advance pathology and improve diagnoses. We have teamed scientists with physicians to create a microlaser biochip (based upon our R&D award winning bio- laser concept) which evaluates tumor cells by quantifying their growth kinetics. The key new discovery was demonstrating that the lasing spectra are sensitive to the biomolecular mass in the cell, which changes the speed of light in the laser microcavity. Initial results with normal and cancerous human brain cells show that only a few hundred cells -- the equivalent of a billionth of a liter -- are required to detect abnormal growth. The ability to detect cancer in such a minute tissue sample is crucial for resecting a tumor margin or grading highly localized tumor malignancy.
Simple analytic model for astrophysical S factors
Yakovlev, D. G.; Beard, M.; Gasques, L. R.; Wiescher, M.
2010-10-15
We propose a physically transparent analytic model of astrophysical S factors as a function of a center-of-mass energy E of colliding nuclei (below and above the Coulomb barrier) for nonresonant fusion reactions. For any given reaction, the S(E) model contains four parameters [two of which approximate the barrier potential, U(r)]. They are easily interpolated along many reactions involving isotopes of the same elements; they give accurate practical expressions for S(E) with only several input parameters for many reactions. The model reproduces the suppression of S(E) at low energies (of astrophysical importance) due to the shape of the low-r wing of U(r). The model can be used to reconstruct U(r) from computed or measured S(E). For illustration, we parametrize our recent calculations of S(E) (using the São Paulo potential and the barrier penetration formalism) for 946 reactions involving stable and unstable isotopes of C, O, Ne, and Mg (with nine parameters for all reactions involving many isotopes of the same elements, e.g., C+O). In addition, we analyze the astrophysically important ¹²C+¹²C reaction, compare theoretical models with experimental data, and discuss the problem of interpolating reliably known S(E) values to low energies (E ≲ 2-3 MeV).
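The S factor's defining relation, σ(E) = S(E)/E · exp(−2πη) with η the Sommerfeld parameter, can be sketched directly. The commonly used numerical form 2πη = 31.29 Z₁Z₂√(μ/E) (E in keV, reduced mass μ in amu) is assumed below, and the example parameters are illustrative rather than taken from the paper's fits:

```python
import math

def sommerfeld_2pi_eta(z1, z2, mu_amu, e_kev):
    """2*pi*eta = 31.29 * Z1*Z2 * sqrt(mu/E), the standard numerical form
    of the Sommerfeld parameter with E in keV and reduced mass mu in amu."""
    return 31.29 * z1 * z2 * math.sqrt(mu_amu / e_kev)

def cross_section_barn(s_factor_kev_b, z1, z2, mu_amu, e_kev):
    """sigma(E) = S(E)/E * exp(-2*pi*eta): converts an astrophysical
    S factor (keV·barn) into a fusion cross section (barn).
    The S-factor value passed in is a placeholder, not a model fit."""
    return (s_factor_kev_b / e_kev) * math.exp(
        -sommerfeld_2pi_eta(z1, z2, mu_amu, e_kev))
```

Because the Gamow factor exp(−2πη) varies by many orders of magnitude over stellar energies while S(E) varies slowly, fitting and interpolating S(E) with a few-parameter analytic model, as the paper does, is far more robust than working with σ(E) itself.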
Analytical transmission electron microscopy in materials science
Fraser, H.L.
1980-01-01
Microcharacterization of materials on a scale of less than 10 nm has been afforded by recent advances in analytical transmission electron microscopy. The factors limiting accurate analysis at the limit of spatial resolution for the case of a combination of scanning transmission electron microscopy and energy dispersive x-ray spectroscopy are examined in this paper.
Analytically derived weighting factors for transmission tomography cone beam projections
NASA Astrophysics Data System (ADS)
Yao, Weiguang; Leszczynski, Konrad
2009-02-01
Weighting factors, which define the contributions of individual voxels of a 3D object to individual projection elements (pixels) on the detector, are the basic elements required in iterative tomographic reconstructions from transmission projections. Exact or as accurate as possible values for weighting factors are required in high-resolution reconstructions. Geometric complexity of the problem, however, makes it difficult to obtain exact weighting factor values. In this work, we derive an analytical expression for the weighting factors in cone beam projection geometry. The resulting formula is validated and applied to reconstruction from mega and kilovoltage x-ray cone beam projections. The reconstruction speed and accuracy are significantly improved by using the weighting factor values.
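The abstract does not reproduce the cone-beam formula itself, but the notion of a weighting factor can be illustrated in a simpler 2D parallel-ray setting: the weight of a pixel for a given ray is the exact length of the ray segment inside that pixel. The sketch below, a Siddon-style grid traversal of our own construction rather than the authors' derivation, computes those lengths:

```python
import numpy as np

def ray_pixel_weights(p0, p1, nx, ny, dx=1.0, dy=1.0):
    """Exact intersection lengths (weighting factors) of the segment
    p0 -> p1 with an nx-by-ny uniform pixel grid with origin at (0, 0).
    Returns a dict mapping pixel index (ix, iy) -> intersection length."""
    p0, p1 = np.asarray(p0, float), np.asarray(p1, float)
    # Parametric positions (t in [0, 1]) of every grid-line crossing.
    ts = [0.0, 1.0]
    for axis, n, d in ((0, nx, dx), (1, ny, dy)):
        if p1[axis] != p0[axis]:
            planes = np.arange(n + 1) * d
            t = (planes - p0[axis]) / (p1[axis] - p0[axis])
            ts.extend(t[(t > 0.0) & (t < 1.0)])
    ts = np.unique(ts)
    length = np.linalg.norm(p1 - p0)
    weights = {}
    for ta, tb in zip(ts[:-1], ts[1:]):
        mid = p0 + 0.5 * (ta + tb) * (p1 - p0)  # midpoint locates the pixel
        ix, iy = int(mid[0] // dx), int(mid[1] // dy)
        if 0 <= ix < nx and 0 <= iy < ny:
            weights[(ix, iy)] = (tb - ta) * length
    return weights
```

A horizontal ray through one row of unit pixels yields a weight of exactly 1.0 per pixel, and the weights along any ray sum to the ray's chord length through the grid, which is the sanity check iterative reconstruction relies on.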
Multimedia Analysis plus Visual Analytics = Multimedia Analytics
Chinchor, Nancy; Thomas, James J.; Wong, Pak C.; Christel, Michael; Ribarsky, Martin W.
2010-10-01
Multimedia analysis has focused on images, video, and to some extent audio and has made progress in single channels excluding text. Visual analytics has focused on the user interaction with data during the analytic process plus the fundamental mathematics and has continued to treat text as did its precursor, information visualization. The general problem we address in this tutorial is the combining of multimedia analysis and visual analytics to deal with multimedia information gathered from different sources, with different goals or objectives, and containing all media types and combinations in common usage.
Analytical Challenges in Biotechnology.
ERIC Educational Resources Information Center
Glajch, Joseph L.
1986-01-01
Highlights five major analytical areas (electrophoresis, immunoassay, chromatographic separations, protein and DNA sequencing, and molecular structures determination) and discusses how analytical chemistry could further improve these techniques and thereby have a major impact on biotechnology. (JN)
Analyticity without Differentiability
ERIC Educational Resources Information Center
Kirillova, Evgenia; Spindler, Karlheinz
2008-01-01
In this article we derive all salient properties of analytic functions, including the analytic version of the inverse function theorem, using only the most elementary convergence properties of series. Not even the notion of differentiability is required to do so. Instead, analytical arguments are replaced by combinatorial arguments exhibiting…
Modern analytical chemistry in the contemporary world
NASA Astrophysics Data System (ADS)
Šíma, Jan
2016-02-01
Students not familiar with chemistry tend to misinterpret analytical chemistry as some kind of sorcery in which analytical chemists, working as modern wizards, handle magical black boxes able to provide fascinating results. However, this view is evidently improper and misleading. Therefore, the position of modern analytical chemistry among the sciences and in the contemporary world is discussed. Its interdisciplinary character and the necessity of collaboration between analytical chemists and other experts in order to effectively solve the actual problems of human society and the environment are emphasized. The importance of analytical method validation in order to obtain accurate and precise results is highlighted. Invalid results are not only useless; they can often be even fatal (e.g., in clinical laboratories). The curriculum of analytical chemistry at schools and universities is discussed; it is argued to be much broader than traditional equilibrium chemistry coupled with a simple description of individual analytical methods. Above all, the schooling of analytical chemistry should closely connect theory and practice.
Accurate orbit propagation with planetary close encounters
NASA Astrophysics Data System (ADS)
Baù, Giulio; Milani Comparetti, Andrea; Guerra, Francesca
2015-08-01
We tackle the problem of accurately propagating the motion of those small bodies that undergo close approaches with a planet. The literature is lacking on this topic and the reliability of the numerical results is not sufficiently discussed. The high-frequency components of the perturbation generated by a close encounter make the propagation particularly challenging, both from the point of view of the dynamical stability of the formulation and of the numerical stability of the integrator. In our approach a fixed step-size and order multistep integrator is combined with a regularized formulation of the perturbed two-body problem. When the propagated object enters the region of influence of a celestial body, the latter becomes the new primary body of attraction. Moreover, the formulation and the step-size will also be changed if necessary. We present: 1) the restarter procedure applied to the multistep integrator whenever the primary body is changed; 2) new analytical formulae for setting the step-size (given the order of the multistep, formulation and initial osculating orbit) in order to control the accumulation of the local truncation error and guarantee the numerical stability during the propagation; 3) a new definition of the region of influence in the phase space. We test the propagator with some real asteroids subject to the gravitational attraction of the planets, the Yarkovsky and relativistic perturbations. Our goal is to show that the proposed approach improves the performance of both the propagator implemented in the OrbFit software package (which is currently used by the NEODyS service) and of the propagator represented by a variable step-size and order multistep method combined with Cowell's formulation (i.e. direct integration of position and velocity in either the physical or a fictitious time).
Analytic Modeling of Collector Current and Delay Time in HBTs
NASA Astrophysics Data System (ADS)
Jung, Hee-Bum
1992-01-01
Collector current in abrupt Al0.48In0.52As/In0.53Ga0.47As HBTs is investigated. Because tunneling plays an important role for abrupt heterojunctions, the thermionic field emission (TF) mechanism is included as a part of the model, in addition to thermionic emission (TE) theory. To model the modulation of the effective barrier height correctly, the non-ideal doping profile across the heterojunction is considered. Calculations showed that under nominal operating conditions, TF is dominant over TE in determining the collector current. Furthermore, modulation of the effective barrier height manifests itself in a collector ideality factor that is greater than unity. It is shown that, by calculating the above-mentioned transport mechanisms and including the barrier height modulation, the collector current and its temperature dependence in abrupt AlInAs/InGaAs HBTs can be predicted correctly. The detailed calculation is reduced to an analytical closed-form model by assuming a Gaussian energy spectrum for the TF current. The model is determined to be accurate over a wide range of bias and temperatures. A simple TE/TF Ebers-Moll model for abrupt HBTs is derived. The classical expression for collector small-signal delay time is inadequate for vertically scaled transistors where transient velocity effects can no longer be ignored. Analytical expressions for collector transit time and small-signal delay time are proposed for circuit simulation. These models use a general non-uniform velocity profile described entirely in terms of five physical parameters: momentum and energy relaxation times, and initial, peak, and saturated velocities. A C∞-continuous function approximation for the transit time is used to obtain analytical closed-form expressions for collector small-signal delay time in terms of physically meaningful transport parameters. An accurate empirical two-piece model is also proposed. As the collector thickness is scaled down, the ratio of small signal
Analytical evaluation of atomic form factors: Application to Rayleigh scattering
Safari, L.; Santos, J. P.; Amaro, P.; Jänkälä, K.; Fratini, F.
2015-05-15
Atomic form factors are widely used for the characterization of targets and specimens, from crystallography to biology. By using recent mathematical results, here we derive an analytical expression for the atomic form factor within the independent particle model constructed from nonrelativistic screened hydrogenic wave functions. The range of validity of this analytical expression is checked by comparing the analytically obtained form factors with the ones obtained within the Hartree-Fock method. As an example, we apply our analytical expression for the atomic form factor to evaluate the differential cross section for Rayleigh scattering off neutral atoms.
38 CFR 4.46 - Accurate measurement.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 38 Pensions, Bonuses, and Veterans' Relief 1 2013-07-01 2013-07-01 false Accurate measurement. 4... RATING DISABILITIES Disability Ratings The Musculoskeletal System § 4.46 Accurate measurement. Accurate measurement of the length of stumps, excursion of joints, dimensions and location of scars with respect...
38 CFR 4.46 - Accurate measurement.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 38 Pensions, Bonuses, and Veterans' Relief 1 2010-07-01 2010-07-01 false Accurate measurement. 4... RATING DISABILITIES Disability Ratings The Musculoskeletal System § 4.46 Accurate measurement. Accurate measurement of the length of stumps, excursion of joints, dimensions and location of scars with respect...
38 CFR 4.46 - Accurate measurement.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 38 Pensions, Bonuses, and Veterans' Relief 1 2011-07-01 2011-07-01 false Accurate measurement. 4... RATING DISABILITIES Disability Ratings The Musculoskeletal System § 4.46 Accurate measurement. Accurate measurement of the length of stumps, excursion of joints, dimensions and location of scars with respect...
38 CFR 4.46 - Accurate measurement.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 38 Pensions, Bonuses, and Veterans' Relief 1 2014-07-01 2014-07-01 false Accurate measurement. 4... RATING DISABILITIES Disability Ratings The Musculoskeletal System § 4.46 Accurate measurement. Accurate measurement of the length of stumps, excursion of joints, dimensions and location of scars with respect...
38 CFR 4.46 - Accurate measurement.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 38 Pensions, Bonuses, and Veterans' Relief 1 2012-07-01 2012-07-01 false Accurate measurement. 4... RATING DISABILITIES Disability Ratings The Musculoskeletal System § 4.46 Accurate measurement. Accurate measurement of the length of stumps, excursion of joints, dimensions and location of scars with respect...
Light-Emitting Diodes for Analytical Chemistry
NASA Astrophysics Data System (ADS)
Macka, Mirek; Piasecki, Tomasz; Dasgupta, Purnendu K.
2014-06-01
Light-emitting diodes (LEDs) are playing increasingly important roles in analytical chemistry, from the final analysis stage to photoreactors for analyte conversion to actual fabrication of and incorporation in microdevices for analytical use. The extremely fast turn-on/off rates of LEDs have made possible simple approaches to fluorescence lifetime measurement. Although they are increasingly being used as detectors, their wavelength selectivity as detectors has rarely been exploited. From their first proposed use for absorbance measurement in 1970, LEDs have been used in analytical chemistry in too many ways to make a comprehensive review possible. Hence, we critically review here the more recent literature on their use in optical detection and measurement systems. Cloudy as our crystal ball may be, we express our views on the future applications of LEDs in analytical chemistry: The horizon will certainly become wider as LEDs in the deep UV with sufficient intensity become available.
Analytically solvable processes on networks.
Smilkov, Daniel; Kocarev, Ljupco
2011-07-01
We introduce a broad class of analytically solvable processes on networks. In the special case, they reduce to the random walk and the consensus process, the two most basic processes on networks. Our class differs from previous models of interactions (such as the stochastic Ising model, cellular automata, infinite particle systems, and the voter model) in several ways, the two most important being (i) the model is analytically solvable even when the dynamical equation for each node may be different and the network may have an arbitrary finite graph and influence structure and (ii) when local dynamics is described by the same evolution equation, the model is decomposable, with the equilibrium behavior of the system expressed as an explicit function of network topology and node dynamics. PMID:21867254
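As a concrete instance of the simplest process this class reduces to, the sketch below iterates a discrete-time consensus map on a small undirected graph and checks the numerically reached equilibrium against its closed-form expression. The graph and averaging matrix are our own toy choices, not the paper's general model:

```python
import numpy as np

# Discrete-time consensus x(t+1) = W x(t) on a 4-node undirected graph,
# with W the row-stochastic random-walk matrix W = D^{-1} A.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], float)
W = A / A.sum(axis=1, keepdims=True)

x0 = np.array([1.0, 0.0, 0.0, 0.0])   # initial node states
x = x0.copy()
for _ in range(200):                   # iterate the consensus map
    x = W @ x

# Closed form: the limit is the degree-weighted average of the initial
# state, since the left Perron eigenvector of W has pi_i ~ degree d_i.
deg = A.sum(axis=1)
predicted = deg @ x0 / deg.sum()
```

Because the graph is connected and contains a triangle (hence aperiodic), the iteration converges, and every node ends at the same value `predicted`; this explicit equilibrium-as-a-function-of-topology is the decomposability property the abstract describes.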
Analytic theory of orbit contraction
NASA Technical Reports Server (NTRS)
Vinh, N. X.; Longuski, J. M.; Busemann, A.; Culp, R. D.
1977-01-01
The motion of a satellite in orbit subject to atmospheric force, and the motion of a reentry vehicle, are governed by gravitational and aerodynamic forces. This suggests the derivation of a uniform set of equations applicable to both cases. For the case of satellite motion, by a proper transformation and by the method of averaging, a technique appropriate for long-duration flight, the classical nonlinear differential equation describing the contraction of the major axis is derived. A rigorous analytic solution is used to integrate this equation with a high degree of accuracy, using Poincaré's method of small parameters and Lagrange's expansion to explicitly express the major axis as a function of the eccentricity. The solution is uniformly valid for moderate and small eccentricities. For highly eccentric orbits, the asymptotic equation is derived directly from the general equation. Numerical solutions were generated to display the accuracy of the analytic theory.
Efficient and accurate sound propagation using adaptive rectangular decomposition.
Raghuvanshi, Nikunj; Narain, Rahul; Lin, Ming C
2009-01-01
Accurate sound rendering can add significant realism to complement visual display in interactive applications, as well as facilitate acoustic predictions for many engineering applications, like accurate acoustic analysis for architectural design. Numerical simulation can provide this realism most naturally by modeling the underlying physics of wave propagation. However, wave simulation has traditionally posed a tough computational challenge. In this paper, we present a technique which relies on an adaptive rectangular decomposition of 3D scenes to enable efficient and accurate simulation of sound propagation in complex virtual environments. It exploits the known analytical solution of the Wave Equation in rectangular domains, and utilizes an efficient implementation of the Discrete Cosine Transform on Graphics Processors (GPU) to achieve at least a 100-fold performance gain compared to a standard Finite-Difference Time-Domain (FDTD) implementation with comparable accuracy, while also being 10-fold more memory efficient. Consequently, we are able to perform accurate numerical acoustic simulation on large, complex scenes in the kilohertz range. To the best of our knowledge, it was not previously possible to perform such simulations on a desktop computer. Our work thus enables acoustic analysis on large scenes and auditory display for complex virtual environments on commodity hardware. PMID:19590105
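The core idea, that the wave equation has a known analytical solution in rectangular domains, can be sketched in a few lines: with rigid (Neumann) boundaries the eigenmodes are cosines, so a Discrete Cosine Transform diagonalizes the propagation and each mode is advanced exactly by a cosine rotation. The grid size, box size, and initial Gaussian pulse below are illustrative choices of ours, not the paper's benchmark, and this CPU sketch omits the adaptive decomposition and GPU implementation:

```python
import numpy as np
from scipy.fft import dctn, idctn

nx, L, c = 64, 1.0, 343.0                 # grid points, box size (m), sound speed (m/s)
xs = (np.arange(nx) + 0.5) * L / nx
# Initial pressure: a Gaussian pulse at the center, zero initial velocity.
p0 = np.exp(-((xs[:, None] - 0.5) ** 2 + (xs[None, :] - 0.5) ** 2) / 0.005)

P = dctn(p0, type=2, norm='ortho')        # modal (cosine) coefficients
i, j = np.meshgrid(np.arange(nx), np.arange(nx), indexing='ij')
omega = c * np.pi * np.sqrt((i / L) ** 2 + (j / L) ** 2)  # modal frequencies

def field_at(t):
    """Exact modal solution p(x, t) for zero initial velocity:
    each cosine mode oscillates as cos(omega * t)."""
    return idctn(P * np.cos(omega * t), type=2, norm='ortho')
```

At t = 0 the transform pair reproduces the initial condition exactly, and for any later t the update is a single elementwise multiply plus an inverse DCT, which is why mapping the DCT onto a GPU yields the large speedups the abstract reports.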
Science Update: Analytical Chemistry.
ERIC Educational Resources Information Center
Worthy, Ward
1980-01-01
Briefly discusses new instrumentation in the field of analytical chemistry. Advances in liquid chromatography, photoacoustic spectroscopy, the use of lasers, and mass spectrometry are also discussed. (CS)
Analytical Chemistry in Industry.
ERIC Educational Resources Information Center
Kaiser, Mary A.; Ullman, Alan H.
1988-01-01
Clarifies the roles of a practicing analytical chemist in industry: quality control, methods and technique development, troubleshooting, research, and chemical analysis. Lists criteria for success in industry. (ML)
Road Transportable Analytical Laboratory (RTAL) system
Finger, S.M.
1995-10-01
The goal of the Road Transportable Analytical Laboratory (RTAL) Project is the development and demonstration of a system to meet the unique needs of the DOE for rapid, accurate analysis of a wide variety of hazardous and radioactive contaminants in soil, groundwater, and surface waters. This laboratory system has been designed to provide the field and laboratory analytical equipment necessary to detect and quantify radionuclides, organics, heavy metals and other inorganic compounds. The laboratory system consists of a set of individual laboratory modules deployable independently or as an interconnected group to meet each DOE site's specific needs.
Service line analytics in the new era.
Spence, Jay; Seargeant, Dan
2015-08-01
To succeed under the value-based business model, hospitals and health systems require effective service line analytics that combine inpatient and outpatient data and that incorporate quality metrics for evaluating clinical operations. When developing a framework for collection, analysis, and dissemination of service line data, healthcare organizations should focus on five key aspects of effective service line analytics: Updated service line definitions. Ability to analyze and trend service line net patient revenues by payment source. Access to accurate service line cost information across multiple dimensions with drill-through capabilities. Ability to redesign key reports based on changing requirements. Clear assignment of accountability. PMID:26548137
An analytic model for the Phobos surface
NASA Technical Reports Server (NTRS)
Duxbury, Thomas C.
1991-01-01
Analytic expressions are derived to model the surface topography and the normal to the surface of Phobos. The analytic expressions comprise a spherical harmonic expansion for the global figure of Phobos, augmented by additional terms for the large crater Stickney and other craters. Over 300 craters were measured in more than 100 Viking Orbiter images to produce the model. In general, the largest craters were measured since they have a significant effect on topography. The topographic model derived has a global spatial and topographic accuracy ranging from about 100 m in areas having the highest resolution and convergent stereo coverage, up to 500 m in the poorest areas.
Analytic estimates of coupling in damping rings
Raubenheimer, T.O.; Ruth, R.D.
1989-03-01
In this paper we present analytic formulas to estimate the vertical emittance in weakly coupled electron/positron storage rings. We consider contributions from both the vertical dispersion and linear coupling of the betatron motions. In addition to simple expressions for random misalignments and rotations of the magnets, formulas are presented to calculate the vertical emittance blowup due to orbit distortions. The orbit distortions are assumed to be caused by random misalignments, but because the closed orbit is correlated from point to point, the effects must be treated differently. We consider only corrected orbits. Finally, the analytic expressions are compared with computer simulations of storage rings with random misalignments. 6 refs., 3 figs.
Photovoltaic Degradation Rates -- An Analytical Review
Jordan, D. C.; Kurtz, S. R.
2012-06-01
As photovoltaic penetration of the power grid increases, accurate predictions of return on investment require accurate prediction of decreased power output over time. Degradation rates must be known in order to predict power delivery. This article reviews degradation rates of flat-plate terrestrial modules and systems reported in published literature from field testing throughout the last 40 years. Nearly 2000 degradation rates, measured on individual modules or entire systems, have been assembled from the literature, showing a median value of 0.5%/year. The review consists of three parts: a brief historical outline, an analytical summary of degradation rates, and a detailed bibliography partitioned by technology.
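As a back-of-envelope use of the reported median rate, a compound-decline model (our simplifying assumption; the review reports measured rates, not this formula) predicts the fraction of initial power remaining after a given number of years:

```python
# Median degradation rate from the review: 0.5%/year.
MEDIAN_RATE = 0.005

def remaining_fraction(years, rate=MEDIAN_RATE):
    """Fraction of initial power output remaining after `years`,
    assuming a constant compound annual degradation rate."""
    return (1.0 - rate) ** years

# Over a typical 25-year warranty horizon, a median module would retain
# roughly 88% of its initial output under this assumption.
frac_25 = remaining_fraction(25)
```

The same one-liner makes the return-on-investment point of the abstract concrete: a plant modeled with 0%/year degradation overstates 25-year energy delivery by more than a tenth relative to the median observed rate.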
Simple analytic approximations for the Blasius problem
NASA Astrophysics Data System (ADS)
Iacono, R.; Boyd, John P.
2015-08-01
The classical boundary layer problem formulated by Heinrich Blasius more than a century ago is revisited, with the purpose of deriving simple and accurate analytical approximations to its solution. This is achieved through the combined use of a generalized Padé approach and of an integral iteration scheme devised by Hermann Weyl. The iteration scheme is also used to derive very accurate bounds for the value of the second derivative of the Blasius function at the origin, which plays a crucial role in this problem.
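The quantity the paper bounds analytically, the second derivative of the Blasius function at the origin, is easy to cross-check numerically by shooting on the Blasius equation f''' + (1/2) f f'' = 0 with f(0) = f'(0) = 0 and f'(∞) = 1. This standard numerical solution is our cross-check, not the paper's Padé/Weyl construction:

```python
import numpy as np

def rhs(y):
    """Blasius ODE as a first-order system: y = (f, f', f'')."""
    f, fp, fpp = y
    return np.array([fp, fpp, -0.5 * f * fpp])

def shoot(fpp0, eta_max=10.0, n=4000):
    """Integrate by classical RK4 from a guessed f''(0); return f'(eta_max)."""
    h = eta_max / n
    y = np.array([0.0, 0.0, fpp0])
    for _ in range(n):
        k1 = rhs(y)
        k2 = rhs(y + 0.5 * h * k1)
        k3 = rhs(y + 0.5 * h * k2)
        k4 = rhs(y + h * k3)
        y = y + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    return y[1]

# Bisect on f''(0) so that f'(inf) = 1 (f'(eta_max) is increasing in f''(0)).
lo, hi = 0.2, 0.5
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if shoot(mid) < 1.0:
        lo = mid
    else:
        hi = mid
fpp0 = 0.5 * (lo + hi)
```

In this convention the classical value is f''(0) ≈ 0.332057, which is the benchmark any analytical approximation of the kind the paper derives must reproduce.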
ERIC Educational Resources Information Center
Callis, James B.; And Others
1987-01-01
Discusses process analytical chemistry as a discipline designed to supply quantitative and qualitative information about a chemical process. Encourages academic institutions to examine this field for employment opportunities for students. Describes the five areas of process analytical chemistry, including off-line, at-line, on-line, in-line, and…
Extreme Scale Visual Analytics
Wong, Pak C.; Shen, Han-Wei; Pascucci, Valerio
2012-05-08
Extreme-scale visual analytics (VA) is about applying VA to extreme-scale data. The articles in this special issue examine advances related to extreme-scale VA problems, their analytical and computational challenges, and their real-world applications.
Learning Analytics Considered Harmful
ERIC Educational Resources Information Center
Dringus, Laurie P.
2012-01-01
This essay is written to present a prospective stance on how learning analytics, as a core evaluative approach, must help instructors uncover the important trends and evidence of quality learner data in the online course. A critique is presented of strategic and tactical issues of learning analytics. The approach to the critique is taken through…
Analytical mass spectrometry. Abstracts
Not Available
1990-12-31
This 43rd Annual Summer Symposium on Analytical Chemistry was held July 24--27, 1990 at Oak Ridge, TN and contained sessions on the following topics: Fundamentals of Analytical Mass Spectrometry (MS), MS in the National Laboratories, Lasers and Fourier Transform Methods, Future of MS, New Ionization and LC/MS Methods, and an extra session. (WET)
ERIC Educational Resources Information Center
Ember, Lois R.
1977-01-01
The procedures utilized by the Association of Official Analytical Chemists (AOAC) to develop, evaluate, and validate analytical methods for the analysis of chemical pollutants are detailed. Methods validated by AOAC are used by the EPA and FDA in their enforcement programs and are granted preferential treatment by the courts. (BT)
ERIC Educational Resources Information Center
Jackson, Brian
2010-01-01
Using a survey of 138 writing programs, I argue that we must be more explicit about what we think students should get out of analysis to make it more likely that students will transfer their analytical skills to different settings. To ensure our students take analytical skills with them at the end of the semester, we must simplify the task we…
Signals: Applying Academic Analytics
ERIC Educational Resources Information Center
Arnold, Kimberly E.
2010-01-01
Academic analytics helps address the public's desire for institutional accountability with regard to student success, given the widespread concern over the cost of higher education and the difficult economic and budgetary conditions prevailing worldwide. Purdue University's Signals project applies the principles of analytics widely used in…
How flatbed scanners upset accurate film dosimetry.
van Battum, L J; Huizenga, H; Verdaasdonk, R M; Heukelom, S
2016-01-21
Film is an excellent dosimeter for verification of dose distributions due to its high spatial resolution. Irradiated film can be digitized with low-cost, transmission, flatbed scanners. However, a disadvantage is their lateral scan effect (LSE): a scanner readout change over its lateral scan axis. Although anisotropic light scattering was presented as the origin of the LSE, this paper presents an alternative cause. Hereto, the LSE for two flatbed scanners (Epson 1680 Expression Pro and Epson 10000XL) and Gafchromic film (EBT, EBT2, EBT3) was investigated, focused on three effects: cross talk, optical path length and polarization. Cross talk was examined using triangular sheets of various optical densities. The optical path length effect was studied using absorptive and reflective neutral density filters with well-defined optical characteristics (OD range 0.2-2.0). Linear polarizer sheets were used to investigate light polarization on the CCD signal in the absence and presence of (un)irradiated Gafchromic film. Film dose values ranged between 0.2 and 9 Gy, i.e. an optical density range between 0.25 and 1.1. Measurements were performed in the scanner's transmission mode, with red-green-blue channels. The LSE was found to depend on scanner construction and film type. Its magnitude depends on dose: for 9 Gy it increases up to 14% at the maximum lateral position. Cross talk was only significant in high-contrast regions, up to 2% for very small fields. The optical path length effect introduced by film on the scanner causes a 3% effect for pixels in the extreme lateral position. Light polarization due to film and the scanner's optical mirror system is the main contributor, different in magnitude for the red, green and blue channels. We concluded that any Gafchromic EBT-type film scanned with a flatbed scanner will face these optical effects. Accurate dosimetry therefore requires correction of the LSE, which in turn requires determination of the LSE per color channel and of the dose delivered to the film. PMID:26689962
How flatbed scanners upset accurate film dosimetry
NASA Astrophysics Data System (ADS)
van Battum, L. J.; Huizenga, H.; Verdaasdonk, R. M.; Heukelom, S.
2016-01-01
Film is an excellent dosimeter for verification of dose distributions due to its high spatial resolution. Irradiated film can be digitized with low-cost, transmission, flatbed scanners. However, a disadvantage is their lateral scan effect (LSE): a scanner readout change over its lateral scan axis. Although anisotropic light scattering was presented as the origin of the LSE, this paper presents an alternative cause. Hereto, the LSE for two flatbed scanners (Epson 1680 Expression Pro and Epson 10000XL) and Gafchromic film (EBT, EBT2, EBT3) was investigated, focused on three effects: cross talk, optical path length and polarization. Cross talk was examined using triangular sheets of various optical densities. The optical path length effect was studied using absorptive and reflective neutral density filters with well-defined optical characteristics (OD range 0.2-2.0). Linear polarizer sheets were used to investigate light polarization on the CCD signal in the absence and presence of (un)irradiated Gafchromic film. Film dose values ranged between 0.2 and 9 Gy, i.e. an optical density range between 0.25 and 1.1. Measurements were performed in the scanner's transmission mode, with red-green-blue channels. The LSE was found to depend on scanner construction and film type. Its magnitude depends on dose: for 9 Gy it increases up to 14% at the maximum lateral position. Cross talk was only significant in high-contrast regions, up to 2% for very small fields. The optical path length effect introduced by film on the scanner causes a 3% effect for pixels in the extreme lateral position. Light polarization due to film and the scanner's optical mirror system is the main contributor, different in magnitude for the red, green and blue channels. We concluded that any Gafchromic EBT-type film scanned with a flatbed scanner will face these optical effects. Accurate dosimetry therefore requires correction of the LSE, which in turn requires determination of the LSE per color channel and of the dose delivered to the film.
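The per-channel correction the conclusion calls for can be sketched as dividing each color channel by a fitted lateral response curve. The quadratic shape and coefficient layout below are hypothetical placeholders of ours, not the paper's calibration data:

```python
import numpy as np

def correct_lse(pixel_values, lateral_pos, coeffs):
    """Divide out a fitted lateral-response polynomial per color channel.

    pixel_values: (..., 3) RGB scanner readings.
    lateral_pos:  lateral position(s) normalized to [-1, 1].
    coeffs:       (3, k) polynomial coefficients per channel, highest
                  order first (hypothetical fit, e.g. quadratic in position).
    """
    corr = np.stack([np.polyval(c, lateral_pos) for c in coeffs], axis=-1)
    return pixel_values / corr
```

A flat response polynomial (constant 1 per channel) leaves the image unchanged, while a channel whose readout rises quadratically toward the scanner edges is divided back down to its central value, which is the per-channel, per-dose correction the abstract argues is required.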
Accurate FDTD modelling for dispersive media using rational function and particle swarm optimisation
NASA Astrophysics Data System (ADS)
Chung, Haejun; Ha, Sang-Gyu; Choi, Jaehoon; Jung, Kyung-Young
2015-07-01
This article presents an accurate finite-difference time-domain (FDTD) dispersive model suitable for complex dispersive media. A quadratic complex rational function (QCRF) is used to characterise their dispersive relations. To obtain accurate coefficients of the QCRF, in this work we use an analytical approach and particle swarm optimisation (PSO) simultaneously. Specifically, the analytical approach is used to obtain the QCRF matrix-solving equation, and PSO is applied to adjust a weighting function of this equation. Numerical examples are used to illustrate the validity of the proposed FDTD dispersion model.
Accurate and Precise Zinc Isotope Ratio Measurements in Urban Aerosols
NASA Astrophysics Data System (ADS)
Weiss, D.; Gioia, S. M. C. L.; Coles, B.; Arnold, T.; Babinski, M.
2009-04-01
We developed an analytical method and constrained procedural boundary conditions that enable accurate and precise Zn isotope ratio measurements in urban aerosols. We also demonstrate the potential of this new isotope system for air pollutant source tracing. The procedural blank is around 5 ng, significantly lower than published methods due to a tailored ion chromatographic separation. Accurate mass bias correction using external correction with Cu is limited to a Zn sample content of approximately 50 ng due to the combined effect of the blank contribution of Cu and Zn from the ion exchange procedure and the need to maintain a Cu/Zn ratio of approximately 1. Mass bias is corrected for by applying the common analyte internal standardization approach. Comparison with other mass bias correction methods demonstrates the accuracy of the method. The average precision of δ66Zn determinations in aerosols is around 0.05 per mil per atomic mass unit. The method was tested on aerosols collected in São Paulo City, Brazil. The measurements reveal significant variations in δ66Zn, ranging between -0.96 and -0.37 per mil in coarse and between -1.04 and 0.02 per mil in fine particulate matter. This variability suggests that Zn isotopic compositions distinguish atmospheric sources. The isotopically light signature suggests traffic as the main source.
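For reference, the δ66Zn values quoted are per-mil deviations of the sample's ⁶⁶Zn/⁶⁴Zn ratio from a reference standard. The sketch below encodes that standard delta notation; the example ratios are illustrative numbers, not measured values from the study:

```python
def delta66zn(r_sample, r_standard):
    """Standard delta notation in per mil:
    delta66Zn = (R_sample / R_standard - 1) * 1000,
    where R is the 66Zn/64Zn isotope ratio."""
    return (r_sample / r_standard - 1.0) * 1000.0
```

A sample whose ratio is 0.1% lower than the standard's thus reports as -1 per mil, so the coarse-fraction range of -0.96 to -0.37 per mil corresponds to ratios roughly 0.04% to 0.1% below the standard.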
Advances in multiple analyte profiling.
Salas, Virginia M; Edwards, Bruce S; Sklar, Larry A
2008-01-01
The advent of multiparameter technology has been driven by the need to understand the complexity in biological systems. It has spawned two main branches, one in the arena of high-content measurements, primarily in microscopy and flow cytometry where it has become commonplace to analyze multiple fluorescence signatures arising from multiple excitation sources and multiple emission wavelengths. Microscopy is augmented by topographical content that identifies the source location of the signature. The other branch involves multiplex technology. Here, the intent is to measure multiple analytes simultaneously. A key feature of multiplexing is an address system for the individual analytes. In planar arrays the address system is spatial, in which affinity reactions occur at defined locations. In suspension arrays, the address is encoded as a fluorescent signature in the particle assigned to a specific reaction or analyte. Several hybrid systems have also been developed for multiplexing. In the commercial regime, the most widespread applications of multiplexing are currently in the areas of genome and biomarker analysis. Planar chips with fixed arrays are now available to probe the entire genome at the level of message expression and large segments of the genome at the level of single nucleotide polymorphism (SNP). In contrast, suspension arrays provide the potential for probing segments of the genome in a customized way, using capture tags that locate specific oligonucleotide sequences to specific array elements. PMID:18429493
NASA Astrophysics Data System (ADS)
Martínez, M. J.; Marco, F. J.; López, J. A.
2009-02-01
The Hipparcos catalog provides a reference frame at optical wavelengths for the new International Celestial Reference System (ICRS). This new reference system was adopted following the resolution agreed at the 23rd IAU General Assembly held in Kyoto in 1997. Differences in the Hipparcos system of proper motions and the previous materialization of the reference frame, the FK5, are expected to be caused only by the combined effects of the motion of the equinox of the FK5 and the precession of the equator and the ecliptic. Several authors have pointed out an inconsistency between the differences in proper motion of the Hipparcos-FK5 and the correction of the precessional values derived from VLBI and lunar laser ranging (LLR) observations. Most of them have claimed that these discrepancies are due to slightly biased proper motions in the FK5 catalog. The different mathematical models that have been employed to explain these errors have not fully accounted for the discrepancies in the correction of the precessional parameters. Our goal here is to offer an explanation for this fact. We propose the use of independent parametric and nonparametric models. The introduction of a nonparametric model, combined with the inner product in the square integrable functions over the unitary sphere, would give us values which do not depend on the possible interdependencies existing in the data set. The evidence shows that zonal studies are needed. This would lead us to introduce a local nonparametric model. All these models will provide independent corrections to the precessional values, which could then be compared in order to study the reliability in each case. Finally, we obtain values for the precession corrections that are very consistent with those that are currently adopted.
Accurate analytical modelling of cosmic ray induced failure rates of power semiconductor devices
NASA Astrophysics Data System (ADS)
Bauer, Friedhelm D.
2009-06-01
A new, simple and efficient approach is presented to estimate the cosmic ray induced failure rate of high voltage silicon power devices early in the design phase. This allows common design issues such as device losses and safe operating area to be combined with the constraints imposed by reliability, resulting in a better and overall more efficient design methodology. Starting from the experimental and theoretical background established a few years ago [Kabza H et al. Cosmic radiation as a cause for power device failure and possible countermeasures. In: Proceedings of the sixth international symposium on power semiconductor devices and IC's, Davos, Switzerland; 1994. p. 9-12, Zeller HR. Cosmic ray induced breakdown in high voltage semiconductor devices, microscopic model and possible countermeasures. In: Proceedings of the sixth international symposium on power semiconductor devices and IC's, Davos, Switzerland; 1994. p. 339-40, and Matsuda H et al. Analysis of GTO failure mode during d.c. blocking. In: Proceedings of the sixth international symposium on power semiconductor devices and IC's, Davos, Switzerland; 1994. p. 221-5], an exact solution of the failure rate integral is derived and presented in a form which lends itself to be combined with the results available from commercial semiconductor simulation tools. Hence, failure rate integrals can be obtained with relative ease for realistic two- and even three-dimensional semiconductor geometries. Two case studies relating to IGBT cell design and planar junction termination layout demonstrate the usefulness of the method.
Analytic streamline calculations on linear tetrahedra
Diachin, D.P.; Herzog, J.A.
1997-06-01
Analytic solutions for streamlines within tetrahedra are used to define operators that accurately and efficiently compute streamlines. The method presented here is based on linear interpolation, and therefore produces exact results for linear velocity fields. In addition, the method requires less computation than the forward Euler numerical method. Results are presented that compare accuracy measurements of the method with forward Euler and fourth order Runge-Kutta applied to both a linear and a nonlinear velocity field.
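The comparison described above can be sketched in miniature. The 2-D solid-body rotation field below is our own illustrative example, not the paper's tetrahedral operators: the analytic step integrates a linear velocity field exactly, while forward Euler drifts off the streamline.

```python
import math

# Linear (solid-body rotation) velocity field: v = A x with A = [[0, -1], [1, 0]].
def velocity(x, y):
    return -y, x

def euler_step(x, y, dt):
    # First-order forward Euler: accumulates error even for a linear field.
    vx, vy = velocity(x, y)
    return x + dt * vx, y + dt * vy

def exact_step(x, y, dt):
    # exp(A*dt) is a rotation by angle dt for this field, so the step is exact.
    c, s = math.cos(dt), math.sin(dt)
    return c * x - s * y, s * x + c * y

def trace(step, x, y, dt, n):
    for _ in range(n):
        x, y = step(x, y, dt)
    return x, y

# Trace a quarter turn of the circular streamline through (1, 0).
xe, ye = trace(exact_step, 1.0, 0.0, math.pi / 200, 100)  # stays on the unit circle
xf, yf = trace(euler_step, 1.0, 0.0, math.pi / 200, 100)  # spirals outward
```

The exact trace lands on (0, 1) to machine precision, while the Euler trace has visibly grown in radius, which is the accuracy gap the abstract refers to for linear fields.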
An analytical sensitivity method for use in integrated aeroservoelastic aircraft design
NASA Technical Reports Server (NTRS)
Gilbert, Michael G.
1989-01-01
Interdisciplinary analysis capabilities have been developed for aeroservoelastic aircraft and large flexible spacecraft, but the requisite integrated design methods are only beginning to be developed. One integrated design method which has received attention is based on hierarchal problem decompositions, optimization, and design sensitivity analyses. This paper highlights a design sensitivity analysis method for Linear Quadratic Gaussian (LQG) optimal control laws, enabling the use of LQG techniques in the hierarchal design methodology. The LQG sensitivity analysis method calculates the change in the optimal control law and resulting controlled system responses due to changes in fixed design integration parameters using analytical sensitivity equations. Numerical results of an LQG design sensitivity analysis for a realistic aeroservoelastic aircraft example are presented. In this example, the sensitivity of the optimal control law and aircraft response for various parameters such as wing bending natural frequency is determined. The sensitivity results computed from the analytical expressions are used to estimate changes in response resulting from changes in the parameters. Comparisons of the estimates with exact calculated responses show they are reasonably accurate for ±15 percent changes in the parameters. Evaluation of the analytical expressions is computationally faster than equivalent finite difference calculations.
Effective Permeability of Fractured Rocks by Analytical Methods: A 3D Computational Study
NASA Astrophysics Data System (ADS)
Sævik, P. N.; Berre, I.; Jakobsen, M.; Lien, M.
2013-12-01
Analytical upscaling methods have been proposed in the literature to predict the effective hydraulic permeability of a fractured rock from its micro-scale parameters (fracture aperture, fracture orientation, fracture content, etc.). In this presentation, we put special emphasis on three effective medium methods (the symmetric and asymmetric self-consistent methods, and the differential method), and evaluate their accuracy for a wide range of parameter values. The analytical predictions are computed using our recently developed effective medium formulations, which are specifically adapted for fractured media. Compared to previous formulations, the new expressions have improved numerical stability properties, and require fewer input parameters. To assess their accuracy, the analytical predictions have been compared with 3D finite element simulations. Specifically, we generated realizations of several different fracture geometries, each consisting of 102 fractures within a unit cube. We applied unit potential difference on two opposing sides, and no-flux conditions on the remaining sides. A commercial finite-element solver was used to calculate the mean flux, from which the effective conductivity was found. This process was repeated for fracture densities up to ɛ = 1.0. Also, a wide range of fracture permeabilities was considered, from completely blocking to infinitely permeable fractures. The results were used to determine the range of applicability for each analytical method, each of which excels in a different region of the parameter space. For blocking fractures, the differential method is very accurate throughout the investigated parameter range. The symmetric self-consistent method also agrees well with the numerical results on sealed fractures, while the asymmetric self-consistent method is more unreliable. For permeable fractures, the performance of the methods depends on the dimensionless quantity λ = (Kfrac a)/(r Kmat ), describing the contrast between fracture and
Analytical solutions of moisture flow equations and their numerical evaluation
Gibbs, A.G.
1981-04-01
The role of analytical solutions of idealized moisture flow problems is discussed. Some different formulations of the moisture flow problem are reviewed. A number of different analytical solutions are summarized, including the case of idealized coupled moisture and heat flow. The evaluation of special functions which commonly arise in analytical solutions is discussed, including some pitfalls in the evaluation of expressions involving combinations of special functions. Finally, perturbation theory methods are summarized which can be used to obtain good approximate analytical solutions to problems which are too complicated to solve exactly, but which are close to an analytically solvable problem.
Accurate measurements of dynamics and reproducibility in small genetic networks
Dubuis, Julien O; Samanta, Reba; Gregor, Thomas
2013-01-01
Quantification of gene expression has become a central tool for understanding genetic networks. In many systems, the only viable way to measure protein levels is by immunofluorescence, which is notorious for its limited accuracy. Using the early Drosophila embryo as an example, we show that careful identification and control of experimental error allows for highly accurate gene expression measurements. We generated antibodies in different host species, allowing for simultaneous staining of four Drosophila gap genes in individual embryos. Careful error analysis of hundreds of expression profiles reveals that less than ∼20% of the observed embryo-to-embryo fluctuations stem from experimental error. These measurements make it possible to extract not only very accurate mean gene expression profiles but also their naturally occurring fluctuations of biological origin and corresponding cross-correlations. We use this analysis to extract gap gene profile dynamics with ∼1 min accuracy. The combination of these new measurements and analysis techniques reveals a twofold increase in profile reproducibility owing to a collective network dynamics that relays positional accuracy from the maternal gradients to the pair-rule genes. PMID:23340845
Ke, Quan; Luo, Weijie; Yan, Guozheng; Yang, Kai
2016-04-01
A wireless power transfer system based on weakly inductive coupling makes it possible to provide the endoscope microrobot (EMR) with effectively unlimited power. To facilitate patient inspection with the EMR system, the diameter of the transmitting coil is enlarged to 69 cm. Due to the large transmitting range, a high quality factor of the Litz-wire transmitting coil is a necessity to ensure that the magnetic field is generated efficiently. Thus, this paper builds an analytical model of the transmitting coil and then optimizes the parameters of the coil by maximizing the quality factor. The lumped model of the transmitting coil includes three parameters: ac resistance, self-inductance, and stray capacitance. Based on the exact two-dimensional solution, an accurate analytical expression for the ac resistance is derived. Several transmitting coils of different specifications are used to verify this analytical expression, which is in good agreement with the measured results except for coils with a large number of strands. The quality factor of transmitting coils can then be well predicted with the available analytical expressions for self-inductance and stray capacitance. Owing to the exact estimation of the quality factor, the appropriate number of turns for the transmitting coil is set to 18-40 within the restrictions of the transmitting circuit and human tissue issues. To supply enough energy for the next generation of the EMR equipped with a Ø9.5×10.1 mm receiving coil, the number of turns of the transmitting coil is optimally set to 28, which can transfer a maximum power of 750 mW with a remarkable delivering efficiency of 3.55%. PMID:26292335
Enzymes in Analytical Chemistry.
ERIC Educational Resources Information Center
Fishman, Myer M.
1980-01-01
Presents tabular information concerning recent research in the field of enzymes in analytic chemistry, with methods, substrate or reaction catalyzed, assay, comments and references listed. The table refers to 128 references. Also listed are 13 general citations. (CS)
Analytical techniques: A compilation
NASA Technical Reports Server (NTRS)
1975-01-01
A compilation, containing articles on a number of analytical techniques for quality control engineers and laboratory workers, is presented. Data cover techniques for testing electronic, mechanical, and optical systems, nondestructive testing techniques, and gas analysis techniques.
Analytical Improvements in PV Degradation Rate Determination
Jordan, D. C.; Kurtz, S. R.
2011-02-01
As photovoltaic (PV) penetration of the power grid increases, it becomes vital to know how decreased power output may affect cost over time. In order to predict power delivery, the decline or degradation rates must be determined accurately. For non-spectrally corrected data several complete seasonal cycles (typically 3-5 years) are required to obtain reasonably accurate degradation rates. In a rapidly evolving industry such a time span is often unacceptable and the need exists to determine degradation rates accurately in a shorter period of time. Occurrence of outliers and data shifts are two examples of analytical problems leading to greater uncertainty and therefore to longer observation times. In this paper we compare three methodologies of data analysis for robustness in the presence of outliers, data shifts and shorter measurement time periods.
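The paper's three methodologies are not named in the abstract; as a stand-in illustration of why robustness matters, the sketch below contrasts ordinary least squares with a Theil-Sen (median-of-pairwise-slopes) estimate on synthetic monthly power data containing one gross outlier.

```python
import statistics

def ols_slope(t, y):
    """Ordinary least-squares slope: sensitive to outliers."""
    n = len(t)
    tbar, ybar = sum(t) / n, sum(y) / n
    num = sum((ti - tbar) * (yi - ybar) for ti, yi in zip(t, y))
    den = sum((ti - tbar) ** 2 for ti in t)
    return num / den

def theil_sen_slope(t, y):
    """Median of all pairwise slopes: robust to a minority of outliers."""
    slopes = [(y[j] - y[i]) / (t[j] - t[i])
              for i in range(len(t)) for j in range(i + 1, len(t))]
    return statistics.median(slopes)

# Synthetic data: 0.05 %/month degradation, plus one gross outlier
# (e.g. a sensor fault or data shift) at month 5.
t = list(range(24))
y = [100.0 - 0.05 * ti for ti in t]
y[5] = 80.0
```

On this data the Theil-Sen estimate recovers the true -0.05 slope, while the least-squares slope is dragged far off by the single bad point, illustrating how analytical problems inflate uncertainty and lengthen the required observation time.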
Extreme Scale Visual Analytics
Steed, Chad A; Potok, Thomas E; Pullum, Laura L; Ramanathan, Arvind; Shipman, Galen M; Thornton, Peter E
2013-01-01
Given the scale and complexity of today's data, visual analytics is rapidly becoming a necessity rather than an option for comprehensive exploratory analysis. In this paper, we provide an overview of three applications of visual analytics for addressing the challenges of analyzing climate, text stream, and biosurveillance data. These systems feature varying levels of interaction and high-performance computing technology integration to permit exploratory analysis of large and complex data of global significance.
NASA Astrophysics Data System (ADS)
Diwakar, S. V.; Das, Sarit K.; Sundararajan, T.
2009-12-01
A new Quadratic Spline based Interface (QUASI) reconstruction algorithm is presented which provides an accurate and continuous representation of the interface in a multiphase domain and facilitates the direct estimation of local interfacial curvature. The fluid interface in each of the mixed cells is represented by piecewise parabolic curves, and an initial discontinuous PLIC approximation of the interface is progressively converted into a smooth quadratic spline made of these parabolic curves. The conversion is achieved by a sequence of predictor-corrector operations enforcing function (C0) and derivative (C1) continuity at the cell boundaries using simple analytical expressions for the continuity requirements. The efficacy and accuracy of the current algorithm have been demonstrated using standard test cases involving reconstruction of known static interface shapes and dynamically evolving interfaces in prescribed flow situations. These benchmark studies illustrate that the present algorithm performs excellently compared to the other interface reconstruction methods available in the literature. A quadratic rate of error reduction with respect to grid size has been observed in all the cases with curved interface shapes; only in situations where the interface geometry is primarily flat does the rate of convergence become linear with the mesh size. The flow algorithm implemented in the current work is designed to accurately balance the pressure gradients with the surface tension force at any location. As a consequence, it is able to minimize spurious flow currents arising from imperfect normal stress balance at the interface. This has been demonstrated through the standard test problem of an inviscid droplet placed in a quiescent medium. Finally, the direct curvature estimation ability of the current algorithm is illustrated through the coupled multiphase flow problem of a deformable air bubble rising through a column of water.
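The C0/C1 matching at a cell boundary reduces to simple analytical expressions. A minimal sketch of the idea (our own illustration of joining two parabolic segments, not the paper's predictor-corrector scheme):

```python
def join_parabola(a1, b1, c1, xb, c2):
    """Coefficients (a2, b2, c2) of a second parabola that meets
    p1(x) = a1 + b1*x + c1*x**2 at x = xb with C0 (value) and
    C1 (derivative) continuity, for a chosen curvature c2."""
    p1 = a1 + b1 * xb + c1 * xb ** 2   # value continuity target
    d1 = b1 + 2 * c1 * xb              # derivative continuity target
    b2 = d1 - 2 * c2 * xb
    a2 = p1 - b2 * xb - c2 * xb ** 2
    return a2, b2, c2
```

The curvature c2 is the remaining free parameter; in an interface reconstruction it would be fixed by a corrector pass against the volume-fraction data.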
Digitalized accurate modeling of SPCB with multi-spiral surface based on CPC algorithm
NASA Astrophysics Data System (ADS)
Huang, Yanhua; Gu, Lizhi
2015-09-01
The main methods of existing multi-spiral surface geometry modeling include spatial analytic geometry algorithms, graphical methods, and interpolation and approximation algorithms. However, these modeling methods have shortcomings, such as a large amount of calculation, complex process, and visible errors. The above methods have, to some extent, considerably restricted the design and manufacture of premium, high-precision products with spiral surfaces. This paper introduces the concepts of spatially parallel coupling with a multi-spiral surface and of a spatially parallel coupling body. The typical geometric and topological features of each spiral surface forming the multi-spiral surface body are determined by using the extraction principle of the datum point cluster, the algorithm of the coupling point cluster with singular points removed, and the "spatially parallel coupling" principle based on non-uniform B-splines for each spiral surface. The orientation and quantitative relationships of the datum point cluster and the coupling point cluster in Euclidean space are determined accurately and described digitally, and the surfaces are coupled and coalesced through their multi-coupling point clusters in the Pro/E environment. Digitally accurate modeling of a spatially parallel coupling body with a multi-spiral surface is thus realized. Smoothing and fairing are applied to the end section area of a three-blade end-milling cutter using the principle of spatially parallel coupling with a multi-spiral surface, and the resulting entity model is machined in a four-axis machining center. The algorithm is verified and then applied effectively to the transition area among the multi-spiral surfaces. The proposed model and algorithms may be used in the design and manufacture of multi-spiral surface body products, as well as in solving essentially the problems of considerable modeling errors in computer graphics and
NASA Astrophysics Data System (ADS)
Yovanovich, M. M.; Culham, J. R.; Lemczyk, T. F.
1986-01-01
One and two-dimensional solutions are obtained for annular fins of constant cross-section having uniform base, end and side conductances. The solutions are dependent upon one geometric parameter and three fin parameters which relate the internal conductive resistance to the three boundary resistances. The two and one-dimensional solutions are compared by means of the heat flow rate or fin efficiency ratios. Simple polynomials are developed for fast, accurate numerical computation of the modified Bessel functions which appear in the solutions. For annular fins used in typical microelectronic applications the analytical expressions are also reduced to alternate expressions which are shown to be expressible by means of simple polynomials which converge to unity for large values of the arguments. Numerical computations were performed on an IBM-PC and some typical results are reported in graphical form. These plots give the heat loss ratio as a function of the dimensionless geometric and fin parameters.
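The "simple polynomials" for fast evaluation of modified Bessel functions can be illustrated with the classic Abramowitz and Stegun 9.8.1 approximation for I0; this is the standard handbook polynomial, not necessarily the fin-specific fit developed in the paper.

```python
def i0_poly(x):
    """Modified Bessel function I0(x) for |x| <= 3.75 via the
    Abramowitz & Stegun 9.8.1 polynomial (absolute error < 1.6e-7)."""
    t2 = (x / 3.75) ** 2
    coeffs = (1.0, 3.5156229, 3.0899424, 1.2067492,
              0.2659732, 0.0360768, 0.0045813)
    # Horner evaluation of the polynomial in t^2.
    acc = 0.0
    for c in reversed(coeffs):
        acc = acc * t2 + c
    return acc
```

A seven-term polynomial like this is far cheaper than a series summation, which is the point made in the abstract about fast, accurate numerical computation.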
Cerebral cortical activity associated with non-experts' most accurate motor performance.
Dyke, Ford; Godwin, Maurice M; Goel, Paras; Rehm, Jared; Rietschel, Jeremy C; Hunt, Carly A; Miller, Matthew W
2014-10-01
This study's specific aim was to determine if non-experts' most accurate motor performance is associated with verbal-analytic- and working memory-related cerebral cortical activity during motor preparation. To assess this, EEG was recorded from non-expert golfers executing putts; EEG spectral power and coherence were calculated for the epoch preceding putt execution; and spectral power and coherence for the five most accurate putts were contrasted with that for the five least accurate. Results revealed marked power in the theta frequency bandwidth at all cerebral cortical regions for the most accurate putts relative to the least accurate, and considerable power in the low-beta frequency bandwidth at the left temporal region for the most accurate compared to the least. As theta power is associated with working memory and low-beta power at the left temporal region with verbal analysis, results suggest non-experts' most accurate motor performance is associated with verbal-analytic- and working memory-related cerebral cortical activity during motor preparation. PMID:25058623
Mill profiler machines soft materials accurately
NASA Technical Reports Server (NTRS)
Rauschl, J. A.
1966-01-01
Mill profiler machines bevels, slots, and grooves in soft materials, such as styrofoam phenolic-filled cores, to any desired thickness. A single operator can accurately control cutting depths in contour or straight line work.
Remote balance weighs accurately amid high radiation
NASA Technical Reports Server (NTRS)
Eggenberger, D. N.; Shuck, A. B.
1969-01-01
Commercial beam-type balance, modified and outfitted with electronic controls and digital readout, can be remotely controlled for use in high radiation environments. This allows accurate weighing of breeder-reactor fuel pieces when they are radioactively hot.
NASA Astrophysics Data System (ADS)
Rensonnet, Gaëtan; Jacobs, Damien; Macq, Benoît.; Taquet, Maxime
2016-03-01
Diffusion-weighted magnetic resonance imaging (DW-MRI) is a powerful tool to probe the diffusion of water through tissues. Through the application of magnetic gradients of appropriate direction, intensity and duration constituting the acquisition parameters, information can be retrieved about the underlying microstructural organization of the brain. In this context, an important and open question is to determine an optimal sequence of such acquisition parameters for a specific purpose. The use of simulated DW-MRI data for a given microstructural configuration provides a convenient and efficient way to address this problem. We first present a novel hybrid method for the synthetic simulation of DW-MRI signals that combines analytic expressions in simple geometries such as spheres and cylinders and Monte Carlo (MC) simulations elsewhere. Our hybrid method remains valid for any acquisition parameters and provides identical levels of accuracy with a computational time that is 90% shorter than that required by MC simulations for commonly-encountered microstructural configurations. We apply our novel simulation technique to estimate the radius of axons under various noise levels with different acquisition protocols commonly used in the literature. The results of our comparison suggest that protocols favoring a large number of gradient intensities such as a Cube and Sphere (CUSP) imaging provide more accurate radius estimation than conventional single-shell HARDI acquisitions for an identical acquisition time.
Understanding the Code: keeping accurate records.
Griffith, Richard
2015-10-01
In his continuing series looking at the legal and professional implications of the Nursing and Midwifery Council's revised Code of Conduct, Richard Griffith discusses the elements of accurate record keeping under Standard 10 of the Code. This article considers the importance of accurate record keeping for the safety of patients and protection of district nurses. The legal implications of records are explained along with how district nurses should write records to ensure these legal requirements are met. PMID:26418404
Importance of Accurate Measurements in Nutrition Research: Dietary Flavonoids as a Case Study
Technology Transfer Automated Retrieval System (TEKTRAN)
Accurate measurements of the secondary metabolites in natural products and plant foods are critical to establishing diet/health relationships. There are as many as 50,000 secondary metabolites which may influence human health. Their structural and chemical diversity present a challenge to analytic...
An analytic formula for the extrapolated range of electrons in condensed materials
NASA Astrophysics Data System (ADS)
Tabata, Tatsuo; Andreo, Pedro; Shinoda, Kunihiko
1996-12-01
A single analytic formula for the extrapolated range r_ex of electrons in condensed materials of atomic numbers from 4 to 92 is given. It has the form of the product of the continuous-slowing-down approximation (CSDA) range r_0 and a factor f_d related to multiple scattering detours. The factor f_d is expressed as a function of incident electron energy T_0 and atomic number Z of the medium. Values of adjustable parameters in f_d have been optimized for data on the ratio r_ex/r_0, in which the Monte Carlo evaluated values of Tabata et al. [Nucl. Instr. Meth. B 95 (1995) 289] (from 0.1 to 100 MeV) and experimental data collected from the literature (from 1 keV to 0.1 MeV) for r_ex have been used together with NIST-database values of r_0. For r_0 in the extrapolated-range formula, accurate database values or an approximate analytic expression developed as a function of T_0, Z, atomic weight A and mean excitation energy I of the medium can be used. The maximum deviation of the resultant formula from the Monte Carlo data is about 2% for either option of r_0. The determination of the expression for f_d at energies below 0.1 MeV is tentative. By using an effective atomic number and atomic weight, the formula can also be applied to light compounds and mixtures.
An analytical study on the diffraction quality factor of open cavities
Huang, Y. J.; Chu, K. R.; Yeh, L. H.
2014-10-15
Open cavities are often employed as interaction structures in a new generation of coherent millimeter, sub-millimeter, and terahertz (THz) radiation sources called the gyrotron. One of the open ends of the cavity is intended for rapid extraction of the radiation generated by a powerful electron beam. Up to the sub-THz regime, the diffraction loss from this open end dominates over the Ohmic losses on the walls, which results in a much lower diffraction quality factor (Q_d) than the Ohmic quality factor (Q_ohm). Early analytical studies have led to various expressions for Q_d and shed much light on its properties. In this study, we begin with a review of these studies, and then proceed with the derivation of an analytical expression for Q_d accurate to high order. Its validity is verified with numerical solutions for a step-tunable cavity commonly employed for the development of sub-THz and THz gyrotrons. On the basis of the results, a simplified equation is obtained which explicitly expresses the scaling laws of Q_d with respect to mode indices and cavity dimensions.
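The scaling of the diffraction quality factor can be illustrated with the widely quoted leading-order estimate for the minimum diffractive Q of an open cavity, Q_d ~ 4*pi*(L/lambda)^2; this is the textbook gyrotron scaling, not the high-order expression derived in the paper.

```python
import math

def q_diffraction_min(length_mm, freq_ghz):
    """Leading-order minimum diffractive Q of an open cavity of
    length L at frequency f: Q_d ~ 4*pi*(L/lambda)^2."""
    c = 299.792458            # speed of light in mm*GHz
    lam = c / freq_ghz        # free-space wavelength in mm
    return 4 * math.pi * (length_mm / lam) ** 2
```

For a 10 mm cavity at 140 GHz this gives Q_d of a few hundred, far below typical Ohmic quality factors, consistent with the abstract's statement that diffraction loss dominates up to the sub-THz regime.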
Advances in analytical chemistry
NASA Technical Reports Server (NTRS)
Arendale, W. F.; Congo, Richard T.; Nielsen, Bruce J.
1991-01-01
Implementation of computer programs based on multivariate statistical algorithms makes possible obtaining reliable information from long data vectors that contain large amounts of extraneous information, for example, noise and/or analytes that we do not wish to control. Three examples are described. Each of these applications requires the use of techniques characteristic of modern analytical chemistry. The first example, using a quantitative or analytical model, describes the determination of the acid dissociation constant for 2,2'-pyridyl thiophene using archived data. The second example describes an investigation to determine the active biocidal species of iodine in aqueous solutions. The third example is taken from a research program directed toward advanced fiber-optic chemical sensors. The second and third examples require heuristic or empirical models.
Frontiers in analytical chemistry
Amato, I.
1988-12-15
Doing more with less was the modus operandi of R. Buckminster Fuller, the late science genius, and inventor of such things as the geodesic dome. In late September, chemists described their own version of this maxim--learning more chemistry from less material and in less time--in a symposium titled Frontiers in Analytical Chemistry at the 196th National Meeting of the American Chemical Society in Los Angeles. Symposium organizer Allen J. Bard of the University of Texas at Austin assembled six speakers, himself among them, to survey pretty widely different areas of analytical chemistry.
Accurate 12D dipole moment surfaces of ethylene
NASA Astrophysics Data System (ADS)
Delahaye, Thibault; Nikitin, Andrei V.; Rey, Michael; Szalay, Péter G.; Tyuterev, Vladimir G.
2015-10-01
Accurate ab initio full-dimensional dipole moment surfaces of ethylene are computed using coupled-cluster approach and its explicitly correlated counterpart CCSD(T)-F12 combined respectively with cc-pVQZ and cc-pVTZ-F12 basis sets. Their analytical representations are provided through 4th order normal mode expansions. First-principles prediction of the line intensities using variational method up to J = 30 are in excellent agreement with the experimental data in the range of 0-3200 cm-1. Errors of 0.25-6.75% in integrated intensities for fundamental bands are comparable with experimental uncertainties. Overall calculated C2H4 opacity in 600-3300 cm-1 range agrees with experimental determination better than to 0.5%.
Accurate ab initio energy gradients in chemical compound space.
Anatole von Lilienfeld, O
2009-10-28
Analytical potential energy derivatives, based on the Hellmann-Feynman theorem, are presented for any pair of isoelectronic compounds. Since energies are not necessarily monotonic functions between compounds, these derivatives can fail to predict the right trends of the effect of alchemical mutation. However, quantitative estimates without additional self-consistency calculations can be made when the Hellmann-Feynman derivative is multiplied with a linearization coefficient that is obtained from a reference pair of compounds. These results suggest that accurate predictions can be made regarding any molecule's energetic properties as long as energies and gradients of three other molecules have been provided. The linearization coefficient can be interpreted as a quantitative measure of chemical similarity. Presented numerical evidence includes predictions of electronic eigenvalues of saturated and aromatic molecular hydrocarbons. PMID:19894922
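A toy numerical illustration of the linearization idea described above (the quadratic energy path and every coefficient below are invented for illustration, not taken from the paper):

```python
def energy(lam, curvature):
    # assumed toy energy along an alchemical interpolation path; the slope at
    # lam = 0 plays the role of the Hellmann-Feynman derivative
    return curvature * lam ** 2 - 1.0 * lam

dE0 = -1.0  # Hellmann-Feynman derivative at lam = 0 (exact for this toy path)

# calibrate the linearization coefficient on a reference pair of compounds
ref_change = energy(1.0, 0.4) - energy(0.0, 0.4)
c = ref_change / dE0

# predict the change for a target pair sharing the same derivative
raw_estimate = dE0            # plain first-order Hellmann-Feynman estimate
corrected = c * dE0           # coefficient-corrected estimate
true_change = energy(1.0, 0.5) - energy(0.0, 0.5)
```

Here the corrected estimate (-0.6) lands much closer to the true change (-0.5) than the raw derivative (-1.0), mirroring the claim that a single reference pair suffices for quantitative trends.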
Integrated Array/Metadata Analytics
NASA Astrophysics Data System (ADS)
Misev, Dimitar; Baumann, Peter
2015-04-01
Data comes in various forms and types, and integration usually presents a problem that is often simply ignored and solved with ad hoc solutions. Multidimensional arrays are a ubiquitous data type that we find at the core of virtually all science and engineering domains, as sensor, model, image, and statistics data. Naturally, arrays are richly described by and intertwined with additional metadata (alphanumeric relational data, XML, JSON, etc.). Database systems, however, a fundamental building block of what we call "Big Data", lack adequate support for modelling and expressing these array data/metadata relationships. Array analytics is hence quite primitive or entirely absent in modern relational DBMSs. Recognizing this, we extended SQL with a new SQL/MDA part, seamlessly integrating multidimensional array analytics into the standard database query language. We demonstrate the benefits of SQL/MDA with real-world examples executed in ASQLDB, an open-source mediator system based on HSQLDB and rasdaman that already implements SQL/MDA.
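The flavor of an integrated array/metadata query (select arrays via relational metadata, then apply an array operation) can be approximated in plain Python. This is only an analogy with invented data, not SQL/MDA syntax or the ASQLDB API:

```python
import numpy as np

# the "array" side: a small sensor cube of two 4x4 bands
cube = np.arange(2 * 4 * 4, dtype=float).reshape(2, 4, 4)

# the "metadata" side: relational records describing each band
metadata = [
    {"band": 0, "sensor": "A", "valid": True},
    {"band": 1, "sensor": "B", "valid": False},
]

# an SQL/MDA-style query in spirit: average every band whose metadata marks it
# valid, joining the relational filter with the array aggregate in one step
result = {m["band"]: cube[m["band"]].mean() for m in metadata if m["valid"]}
```

In SQL/MDA the filter would live in a WHERE clause and the aggregate in an array expression; the point is that neither side has to be exported to an external tool.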
NASA Astrophysics Data System (ADS)
Choudhury, Raja Roy; Choudhury, Arundhati Roy; Ghose, Mrinal Kanti
2013-09-01
To characterize nonlinear optical fibers, a semi-analytical formulation using the variational principle and the Nelder-Mead simplex method for nonlinear unconstrained minimization is proposed. The number of optimization parameters used to optimize the core parameter U has been increased to incorporate more flexibility into the formulation of an innovative form of the fundamental modal field. This formulation provides accurate analytical expressions for the modal dispersion parameter (g) of optical fiber with Kerr nonlinearity. The minimization of the core parameter (U), which involves Kerr nonlinearity through the nonstationary expression of the propagation constant, is carried out by the Nelder-Mead simplex method of nonlinear unconstrained minimization, which is suitable for problems with nonsmooth functions because it does not require any derivative information. This formulation carries a smaller computational burden for the calculation of modal parameters than full numerical methods.
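Since the Nelder-Mead simplex method is central to the formulation, a compact self-contained implementation is sketched below (the standard reflect/expand/contract/shrink steps, not the authors' code) and applied to a nonsmooth test function, the class of problem for which the derivative-free method is chosen:

```python
import numpy as np

def nelder_mead(f, x0, step=0.5, tol=1e-10, max_iter=500):
    """Minimize f without derivatives via the Nelder-Mead simplex method."""
    x0 = np.asarray(x0, dtype=float)
    n = x0.size
    # initial simplex: x0 plus one perturbed vertex per coordinate
    simplex = np.vstack([x0] + [x0 + step * np.eye(n)[i] for i in range(n)])
    fvals = np.array([f(v) for v in simplex])
    for _ in range(max_iter):
        order = np.argsort(fvals)
        simplex, fvals = simplex[order], fvals[order]
        if fvals[-1] - fvals[0] < tol:              # vertices agree: converged
            break
        centroid = simplex[:-1].mean(axis=0)
        xr = centroid + (centroid - simplex[-1])    # reflect the worst vertex
        fr = f(xr)
        if fr < fvals[0]:
            xe = centroid + 2.0 * (centroid - simplex[-1])   # try expanding
            fe = f(xe)
            simplex[-1], fvals[-1] = (xe, fe) if fe < fr else (xr, fr)
        elif fr < fvals[-2]:
            simplex[-1], fvals[-1] = xr, fr
        else:
            xc = centroid + 0.5 * (simplex[-1] - centroid)   # contract inward
            fc = f(xc)
            if fc < fvals[-1]:
                simplex[-1], fvals[-1] = xc, fc
            else:                                # shrink toward the best vertex
                simplex = simplex[0] + 0.5 * (simplex - simplex[0])
                fvals = np.array([f(v) for v in simplex])
    best = int(np.argmin(fvals))
    return simplex[best], fvals[best]

# a nonsmooth objective: no derivative information exists at the minimum
xmin, fmin = nelder_mead(lambda x: abs(x[0] - 1.0) + abs(x[1] + 2.0), [0.0, 0.0])
```

The same driver can wrap any scalar merit function, such as a nonstationary expression for the propagation constant evaluated from trial modal-field parameters.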
Accurate strain measurements in highly strained Ge microbridges
NASA Astrophysics Data System (ADS)
Gassenq, A.; Tardif, S.; Guilloy, K.; Osvaldo Dias, G.; Pauc, N.; Duchemin, I.; Rouchon, D.; Hartmann, J.-M.; Widiez, J.; Escalante, J.; Niquet, Y.-M.; Geiger, R.; Zabel, T.; Sigg, H.; Faist, J.; Chelnokov, A.; Rieutord, F.; Reboud, V.; Calvo, V.
2016-06-01
Ge under high strain is predicted to become a direct-bandgap semiconductor. Very large deformations can be introduced using microbridge devices. However, at the microscale, strain values are commonly deduced from Raman spectroscopy using empirical linear models established only up to ɛ100 = 1.2% for uniaxial stress. In this work, we calibrate the Raman-strain relation at higher strain using synchrotron-based microdiffraction. The Ge microbridges show unprecedented high tensile strain up to 4.9%, corresponding to an unexpected Δω = 9.9 cm-1 Raman shift. We demonstrate experimentally and theoretically that the Raman-strain relation is not linear, and we provide a more accurate expression.
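The calibration step can be illustrated with a short fit. The quadratic coefficients and sample points below are hypothetical stand-ins (only the ~4.9% strain / ~9.9 cm-1 end point loosely echoes the abstract); the sketch shows why extrapolating a low-strain linear model misses the high-strain shift:

```python
import numpy as np

# hypothetical calibration pairs generated from an assumed quadratic law
strain = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 4.9])   # uniaxial strain, %
shift = 2.3 * strain - 0.06 * strain ** 2           # Raman shift, cm^-1

quad = np.polyfit(strain, shift, 2)          # calibration over the full range
lin = np.polyfit(strain[:3], shift[:3], 1)   # linear model from low strain only

# extrapolating the low-strain linear model to 4.9% strain misses the measured
# shift, while the quadratic calibration reproduces it
err_lin = abs(np.polyval(lin, 4.9) - shift[-1])
err_quad = abs(np.polyval(quad, 4.9) - shift[-1])
```

The actual calibration in the paper is anchored by synchrotron microdiffraction rather than a generated curve; the structure of the fit is the same.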
Analytical Services Management System
Church, Shane; Nigbor, Mike; Hillman, Daniel
2005-03-30
Analytical Services Management System (ASMS) provides sample management services. Sample management includes sample planning for analytical requests, sample tracking for shipping and receiving by the laboratory, receipt of the analytical data deliverable, processing the deliverable, and payment of the laboratory conducting the analyses. ASMS is a web-based application that provides the ability to manage these activities at multiple locations for different customers. ASMS provides for the assignment of single to multiple samples for standard chemical and radiochemical analyses. ASMS is a flexible system which allows the users to request analyses by line item code. Line item codes are selected based on the Basic Ordering Agreement (BOA) format for contracting with participating laboratories. ASMS also allows contracting with non-BOA laboratories using a similar line item code contracting format for their services. ASMS allows sample and analysis tracking from sample planning and collection in the field through sample shipment, laboratory sample receipt, laboratory analysis and submittal of the requested analyses, electronic data transfer, and payment of the laboratories for the completed analyses. The software when in operation contains business-sensitive material that is used as a principal portion of the Kaiser Analytical Management Services business model. The software version provided is the most recent; however, the copy of the application does not contain business-sensitive data from the associated Oracle tables, such as contract information or price per line item code.
Analytical Chemistry Laboratory
NASA Technical Reports Server (NTRS)
Anderson, Mark
2013-01-01
The Analytical Chemistry and Material Development Group maintains a capability in chemical analysis, materials R&D, failure analysis, and contamination control. The uniquely qualified staff and facility support the needs of flight projects, science instrument development, and various technical tasks, as well as Caltech.
Analytics: Changing the Conversation
ERIC Educational Resources Information Center
Oblinger, Diana G.
2013-01-01
In this third and concluding discussion on analytics, the author notes that we live in an information culture. We are accustomed to having information instantly available and accessible, along with feedback and recommendations. We want to know what people think and like (or dislike). We want to know how we compare with "others like me."…
ERIC Educational Resources Information Center
Buckingham Shum, Simon; Ferguson, Rebecca
2012-01-01
We propose that the design and implementation of effective "Social Learning Analytics (SLA)" present significant challenges and opportunities for both research and enterprise, in three important respects. The first is that the learning landscape is extraordinarily turbulent at present, in no small part due to technological drivers. Online social…
Challenges for Visual Analytics
Thomas, James J.; Kielman, Joseph
2009-09-23
Visual analytics has seen unprecedented growth in its first five years of mainstream existence. Great progress has been made in a short time, yet great challenges must be met in the next decade to provide new technologies that will be widely accepted by societies throughout the world. This paper sets the stage for some of those challenges in an effort to provide the stimulus for the research, both basic and applied, to address and exceed the envisioned potential for visual analytics technologies. We start with a brief summary of the initial challenges, followed by a discussion of the initial driving domains and applications, as well as additional applications and domains that have been a part of recent rapid expansion of visual analytics usage. We look at the common characteristics of several tools illustrating emerging visual analytics technologies, and conclude with the top ten challenges for the field of study. We encourage feedback and collaborative participation by members of the research community, the wide array of user communities, and private industry.
ERIC Educational Resources Information Center
Freeman, Elisabeth
1996-01-01
Presents a brief history of Ada Byron King, Countess of Lovelace, focusing on her primary role in the development of the Analytical Engine--the world's first computer. Describes the Ada Project (TAP), a centralized World Wide Web site that serves as a clearinghouse for information related to women in computing, and provides a Web address for…
Analytical Instrument Obsolescence Examined.
ERIC Educational Resources Information Center
Haggin, Joseph
1982-01-01
The threat of instrument obsolescence and tight federal budgets have conspired to threaten the existence of research analytical laboratories. Despite these and other handicaps most existing laboratories expect to keep operating in support of basic research, though there may be serious penalties in the future unless funds are forthcoming. (Author)
The EPA’s vision for the Endocrine Disruptor Screening Program (EDSP) in the 21st Century (EDSP21) includes utilization of high-throughput screening (HTS) assays coupled with computational modeling to prioritize chemicals with the goal of eventually replacing current Tier 1...
Accurate Biomass Estimation via Bayesian Adaptive Sampling
NASA Technical Reports Server (NTRS)
Wheeler, Kevin R.; Knuth, Kevin H.; Castle, Joseph P.; Lvov, Nikolay
2005-01-01
The following concepts were introduced: a) Bayesian adaptive sampling for solving biomass estimation; b) characterization of MISR Rahman model parameters conditioned upon MODIS landcover; c) a rigorous non-parametric Bayesian approach to analytic mixture model determination; d) a unique U.S. asset for science product validation and verification.
A highly accurate interatomic potential for argon
NASA Astrophysics Data System (ADS)
Aziz, Ronald A.
1993-09-01
A modified potential based on the individually damped model of Douketis, Scoles, Marchetti, Zen, and Thakkar [J. Chem. Phys. 76, 3057 (1982)] is presented which fits, within experimental error, the accurate ultraviolet (UV) vibration-rotation spectrum of argon determined by UV laser absorption spectroscopy by Herman, LaRocque, and Stoicheff [J. Chem. Phys. 89, 4535 (1988)]. Other literature potentials fail to do so. The potential also is shown to predict a large number of other properties and is probably the most accurate characterization of the argon interaction constructed to date.
Accurate interlaminar stress recovery from finite element analysis
NASA Technical Reports Server (NTRS)
Tessler, Alexander; Riggs, H. Ronald
1994-01-01
The accuracy and robustness of a two-dimensional smoothing methodology is examined for the problem of recovering accurate interlaminar shear stress distributions in laminated composite and sandwich plates. The smoothing methodology is based on a variational formulation which combines discrete least-squares and penalty-constraint functionals in a single variational form. The smoothing analysis utilizes optimal strains computed at discrete locations in a finite element analysis. These discrete strain data are smoothed with a smoothing element discretization, producing superior-accuracy strains and their first gradients. The approach enables the resulting smooth strain field to be practically C1-continuous throughout the domain of smoothing, exhibiting superconvergent properties of the smoothed quantity. The continuous strain gradients are also obtained directly from the solution. The recovered strain gradients are subsequently employed in the integration of equilibrium equations to obtain accurate interlaminar shear stresses. The test problem is a simply supported rectangular plate under a doubly sinusoidal load, which has an exact analytic solution that serves as a measure of goodness of the recovered interlaminar shear stresses. The method has the versatility of being applicable to the analysis of rather general and complex structures built of distinct components and materials, such as found in aircraft design. For these types of structures, the smoothing is achieved with 'patches', each patch covering the domain in which the smoothed quantity is physically continuous.
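The recovery pipeline (smooth the discrete strain data by least squares, differentiate the smooth field, then use the gradient) can be sketched in one dimension. The global polynomial basis below is a simplification standing in for the paper's smoothing-element discretization, and the strain field and noise level are invented:

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 41)
strain_exact = np.sin(np.pi * x)                    # reference strain field
strain_fe = strain_exact + 0.01 * rng.standard_normal(x.size)  # noisy FE output

# least-squares smoothing: fit a smooth basis to the discrete strain data
coef = np.polyfit(x, strain_fe, 6)
strain_smooth = np.polyval(coef, x)

# the gradient comes directly from the smooth representation, without
# differencing the noisy discrete values
grad_smooth = np.polyval(np.polyder(coef), x)
grad_err = np.max(np.abs(grad_smooth - np.pi * np.cos(np.pi * x)))
```

In the paper it is these smoothed gradients that feed the integration of the equilibrium equations to recover interlaminar shear stresses.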
A general, accurate procedure for calculating molecular interaction force.
Yang, Pinghai; Qian, Xiaoping
2009-09-15
The determination of molecular interaction forces, e.g., the van der Waals force, between macroscopic bodies is of fundamental importance for understanding sintering, adhesion and fracture processes. In this paper, we develop an accurate, general procedure for van der Waals force calculation. This approach extends a surface formulation that converts a six-dimensional (6D) volume integral into a 4D surface integral for the force calculation. It uses non-uniform rational B-spline (NURBS) surfaces to represent object surfaces. Surface integrals are then evaluated on the parametric domain of the NURBS surfaces. The approach combines the advantages of NURBS surface representation and the surface formulation: (1) molecular interactions between arbitrarily shaped objects can be represented and evaluated with the NURBS model; further, common geometries such as spheres, cones, and planes can be represented exactly, so interaction forces are calculated accurately; (2) calculation efficiency is improved by converting the volume integral to the surface integral. This approach is implemented and validated via comparison with analytical solutions for simple geometries. Calculation of the van der Waals force between complex geometries with surface roughness is also demonstrated. A tutorial on the NURBS approach is given in Appendix A. PMID:19596335
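The analytical solutions used as validation targets for such procedures are typically the standard nonretarded Hamaker expressions for simple geometries. A sketch of two of them, with an assumed order-of-magnitude Hamaker constant, including a consistency check that the sphere-sphere formula recovers the sphere-plane limit:

```python
def vdw_force_sphere_plane(A, R, D):
    """Nonretarded Hamaker van der Waals force: sphere of radius R, gap D."""
    return A * R / (6.0 * D ** 2)

def vdw_force_sphere_sphere(A, R1, R2, D):
    """Hamaker force between two spheres at closest gap D (valid for D << R1, R2)."""
    return A * (R1 * R2 / (R1 + R2)) / (6.0 * D ** 2)

# assumed illustrative values: A ~ 1e-19 J, a 1-micron sphere, a 1-nm gap
A, R, D = 1e-19, 1e-6, 1e-9
F_plane = vdw_force_sphere_plane(A, R, D)
# letting the second radius grow large recovers the sphere-plane result
F_limit = vdw_force_sphere_sphere(A, R, 1.0, D)
```

A numerical surface-integral implementation like the paper's would be validated by reproducing these closed forms before being applied to rough or free-form geometries.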
Metabolomics and Diabetes: Analytical and Computational Approaches
Sas, Kelli M.; Karnovsky, Alla; Michailidis, George
2015-01-01
Diabetes is characterized by altered metabolism of key molecules and regulatory pathways. The phenotypic expression of diabetes and associated complications encompasses complex interactions between genetic, environmental, and tissue-specific factors that require an integrated understanding of perturbations in the network of genes, proteins, and metabolites. Metabolomics attempts to systematically identify and quantitate small molecule metabolites from biological systems. The recent rapid development of a variety of analytical platforms based on mass spectrometry and nuclear magnetic resonance have enabled identification of complex metabolic phenotypes. Continued development of bioinformatics and analytical strategies has facilitated the discovery of causal links in understanding the pathophysiology of diabetes and its complications. Here, we summarize the metabolomics workflow, including analytical, statistical, and computational tools, highlight recent applications of metabolomics in diabetes research, and discuss the challenges in the field. PMID:25713200
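A common first step in the statistical part of the workflow summarized above is unsupervised projection of the metabolite table, e.g. PCA computed through an SVD. The toy table below (two phenotype groups, a handful of shifted metabolites) is entirely synthetic:

```python
import numpy as np

rng = np.random.default_rng(3)
# toy metabolite table: 10 control and 10 case samples, 50 metabolites
controls = rng.standard_normal((10, 50))
cases = rng.standard_normal((10, 50))
cases[:, :5] += 4.0            # a few metabolites altered by the phenotype

X = np.vstack([controls, cases])
Xc = X - X.mean(axis=0)        # mean-center each metabolite before PCA

# PCA via the singular value decomposition; rows of `scores` are samples
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = U * s

# the first principal component separates the two groups
separation = abs(scores[:10, 0].mean() - scores[10:, 0].mean())
```

Real metabolomics pipelines add peak alignment, normalization, and pathway-level interpretation around this core projection step.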
Accurate pointing of tungsten welding electrodes
NASA Technical Reports Server (NTRS)
Ziegelmeier, P.
1971-01-01
Thoriated-tungsten is pointed accurately and quickly by using sodium nitrite. Point produced is smooth and no effort is necessary to hold the tungsten rod concentric. The chemically produced point can be used several times longer than ground points. This method reduces time and cost of preparing tungsten electrodes.
NASA Astrophysics Data System (ADS)
Biswal, Sudhansu Mohan; Baral, Biswajit; De, Debashis; Sarkar, Angsuman
2015-06-01
In this paper, we propose a new two-dimensional (2-D) analytical model of a dual-material junctionless surrounding-gate MOSFET (DMJLSRG MOSFET). Expressions for the potential and electric field of the gate-engineered MOSFET structure have been obtained by solving the 2-D Poisson's equation in the subthreshold regime using a parabolic potential approximation that accounts for the effective conduction path effect (ECPE). The developed potential model accurately predicts the perceivable step in the potential profile, which is responsible for effective screening of drain potential variations and thus reduces DIBL and threshold voltage roll-off. In this work, the effectiveness of the dual-material gate-engineered (DM) design for junctionless MOSFETs was scrutinized by comparing the results with a single-material junctionless surrounding-gate MOSFET (SMJLSRG MOSFET) of the same dimensions. From the developed potential model, a simple and accurate analytical expression for the threshold voltage is also derived. Results reveal that DMJLSRG devices offer superior performance compared to SMJLSRG devices. An improvement in hot-carrier effects (HCEs) and a reduction of short-channel effects (SCEs) have been demonstrated for the gate-engineered device over the corresponding conventional device. The proposed model can be used as a basic design guideline for gate-engineered junctionless surrounding-gate MOSFETs.
Mass inflation in Eddington-inspired Born-Infeld black holes: Analytical scaling solutions
NASA Astrophysics Data System (ADS)
Avelino, P. P.
2016-05-01
We study the inner dynamics of accreting Eddington-inspired Born-Infeld black holes using the homogeneous approximation and taking charge as a surrogate for angular momentum. We show that there is a minimum of the accretion rate below which mass inflation does not occur, and we derive an analytical expression for this threshold as a function of the fundamental scale of the theory, the accretion rate, the mass, and the charge of the black hole. Our result explicitly demonstrates that, no matter how close Eddington-inspired Born-Infeld gravity is to general relativity, there is always a minimum accretion rate below which there is no mass inflation. For larger accretion rates, mass inflation takes place inside the black hole as in general relativity until the extremely rapid density variations bring it to an abrupt end. We derive analytical scaling solutions for the value of the energy density and of the Misner-Sharp mass attained at the end of mass inflation as a function of the fundamental scale of the theory, the accretion rate, the mass, and the charge of the black hole, and compare these with the corresponding numerical solutions. We find that, except for unreasonably high accretion rates, our analytical results appear to provide an accurate description of homogeneous mass inflation inside accreting Eddington-inspired Born-Infeld black holes.
NASA Technical Reports Server (NTRS)
Schmidt, R. F.
1987-01-01
This document discusses the determination of caustic surfaces in terms of rays, reflectors, and wavefronts. Analytical caustics are obtained as a family of lines, a set of points, and several types of equations for geometries encountered in optics and microwave applications. Standard methods of differential geometry are applied under different approaches: directly to reflector surfaces, and alternatively, to wavefronts, to obtain analytical caustics of two sheets or branches. Gauss/Seidel aberrations are introduced into the wavefront approach, forcing the retention of all three coefficients of both the first- and the second-fundamental forms of differential geometry. An existing method for obtaining caustic surfaces through exploitation of the singularities in flux density is examined, and several constant-intensity contour maps are developed using only the intrinsic Gaussian, mean, and normal curvatures of the reflector. Numerous references are provided for extending the material of the present document to the morphologies of caustics and their associated diffraction patterns.
Requirements for Predictive Analytics
Troy Hiltbrand
2012-03-01
It is important to have a clear understanding of how traditional Business Intelligence (BI) and analytics are different and how they fit together in optimizing organizational decision making. With traditional BI, activities are focused primarily on providing context to enhance a known set of information through aggregation, data cleansing and delivery mechanisms. As these organizations mature their BI ecosystems, they achieve a clearer picture of the key performance indicators signaling the relative health of their operations. Organizations that embark on activities surrounding predictive analytics and data mining go beyond simply presenting the data in a manner that will allow decision makers to have a complete context around the information. These organizations generate models based on known information and then apply other organizational data against these models to reveal unknown information.
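The distinction drawn here, summarizing known data versus modeling it to score new data, reduces to a few lines. The sketch below uses ordinary least squares on synthetic "organizational" data; a real predictive-analytics stack substitutes richer models and features:

```python
import numpy as np

rng = np.random.default_rng(2)

# known information: historical records with an observed outcome
X_known = rng.standard_normal((100, 3))
true_w = np.array([1.5, -2.0, 0.5])
y_known = X_known @ true_w + 0.1 * rng.standard_normal(100)

# BI stops at summarizing y_known; predictive analytics fits a model...
X1 = np.column_stack([X_known, np.ones(len(X_known))])
w, *_ = np.linalg.lstsq(X1, y_known, rcond=None)

# ...and applies other organizational data against it to reveal the unknown
X_new = rng.standard_normal((5, 3))
predictions = np.column_stack([X_new, np.ones(len(X_new))]) @ w
```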
Brune, D.; Forkman, B.; Persson, B.
1984-01-01
This book covers the general theories and techniques of nuclear chemical analysis, directed at applications in analytical chemistry, nuclear medicine, radiophysics, agriculture, environmental sciences, geological exploration, industrial process control, etc. The main principles of nuclear physics and nuclear detection on which the analysis is based are briefly outlined. An attempt is made to emphasise the fundamentals of activation analysis, detection and activation methods, as well as their applications. The book provides guidance in analytical chemistry, agriculture, environmental and biomedical sciences, etc. The contents include: the nuclear periodic system; nuclear decay; nuclear reactions; nuclear radiation sources; interaction of radiation with matter; principles of radiation detectors; nuclear electronics; statistical methods and spectral analysis; methods of radiation detection; neutron activation analysis; charged particle activation analysis; photon activation analysis; sample preparation and chemical separation; nuclear chemical analysis in biological and medical research; the use of nuclear chemical analysis in the field of criminology; nuclear chemical analysis in environmental sciences, geology and mineral exploration; and radiation protection.
Analytic holographic superconductor
NASA Astrophysics Data System (ADS)
Herzog, Christopher P.
2010-06-01
We investigate a holographic superconductor that admits an analytic treatment near the phase transition. In the dual 3+1-dimensional field theory, the phase transition occurs when a scalar operator of scaling dimension two gets a vacuum expectation value. We calculate current-current correlation functions along with the speed of second sound near the critical temperature. We also make some remarks about critical exponents. An analytic treatment is possible because an underlying Heun equation describing the zero mode of the phase transition has a polynomial solution. Amusingly, the treatment here may generalize for an order parameter with any integer spin, and we propose a Lagrangian for a spin-two holographic superconductor.
Cowell, Andrew J.; Cowell, Amanda K.
2009-08-29
This paper discusses the design and use of anthropomorphic computer characters as nonplayer characters (NPCs) within analytical games. These new environments allow avatars to play a central role in supporting training and education goals instead of filling the supporting-cast role. This new 'science' of gaming, driven by high-powered but inexpensive computers, dedicated graphics processors and realistic game engines, enables game developers to create learning and training opportunities on par with expensive real-world training scenarios. However, care and attention must be placed on how avatars are represented and thus perceived. A taxonomy of non-verbal behavior is presented and its application to analytical gaming discussed.
Mouse models of human AML accurately predict chemotherapy response
Zuber, Johannes; Radtke, Ina; Pardee, Timothy S.; Zhao, Zhen; Rappaport, Amy R.; Luo, Weijun; McCurrach, Mila E.; Yang, Miao-Miao; Dolan, M. Eileen; Kogan, Scott C.; Downing, James R.; Lowe, Scott W.
2009-01-01
The genetic heterogeneity of cancer influences the trajectory of tumor progression and may underlie clinical variation in therapy response. To model such heterogeneity, we produced genetically and pathologically accurate mouse models of common forms of human acute myeloid leukemia (AML) and developed methods to mimic standard induction chemotherapy and efficiently monitor therapy response. We see that murine AMLs harboring two common human AML genotypes show remarkably diverse responses to conventional therapy that mirror clinical experience. Specifically, murine leukemias expressing the AML1/ETO fusion oncoprotein, associated with a favorable prognosis in patients, show a dramatic response to induction chemotherapy owing to robust activation of the p53 tumor suppressor network. Conversely, murine leukemias expressing MLL fusion proteins, associated with a dismal prognosis in patients, are drug-resistant due to an attenuated p53 response. Our studies highlight the importance of genetic information in guiding the treatment of human AML, functionally establish the p53 network as a central determinant of chemotherapy response in AML, and demonstrate that genetically engineered mouse models of human cancer can accurately predict therapy response in patients. PMID:19339691
Davenport, Thomas H
2006-01-01
We all know the power of the killer app. It's not just a support tool; it's a strategic weapon. Companies questing for killer apps generally focus all their firepower on the one area that promises to create the greatest competitive advantage. But a new breed of organization has upped the stakes: Amazon, Harrah's, Capital One, and the Boston Red Sox have all dominated their fields by deploying industrial-strength analytics across a wide variety of activities. At a time when firms in many industries offer similar products and use comparable technologies, business processes are among the few remaining points of differentiation--and analytics competitors wring every last drop of value from those processes. Employees hired for their expertise with numbers or trained to recognize their importance are armed with the best evidence and the best quantitative tools. As a result, they make the best decisions. In companies that compete on analytics, senior executives make it clear--from the top down--that analytics is central to strategy. Such organizations launch multiple initiatives involving complex data and statistical analysis, and quantitative activity is managed at the enterprise (not departmental) level. In this article, professor Thomas H. Davenport lays out the characteristics and practices of these statistical masters and describes some of the very substantial changes other companies must undergo in order to compete on quantitative turf. As one would expect, the transformation requires a significant investment in technology, the accumulation of massive stores of data, and the formulation of company-wide strategies for managing the data. But, at least as important, it also requires executives' vocal, unswerving commitment and willingness to change the way employees think, work, and are treated. PMID:16447373
Industrial Analytics Corporation
Industrial Analytics Corporation
2004-01-30
The lost foam casting process is sensitive to the properties of the EPS patterns used for the casting operation. In this project Industrial Analytics Corporation (IAC) has developed a new low voltage x-ray instrument for x-ray radiography of very low mass EPS patterns. IAC has also developed a transmitted visible light method for characterizing the properties of EPS patterns. The systems developed are also applicable to other low density materials including graphite foams.
Analytical theories for spacecraft entry into planetary atmospheres and design of planetary probes
NASA Astrophysics Data System (ADS)
Saikia, Sarag J.
This dissertation deals with the development of analytical theories for spacecraft entry into planetary atmospheres and the design of entry spacecraft or probes for planetary science and human exploration missions. Poincare's method of small parameters is used to develop an improved approximate analytical solution for Yaroshevskii's classical planetary entry equation for the ballistic entry of a spacecraft into planetary atmospheres. From this solution, other important expressions are developed including deceleration, stagnation-point heat rate, and stagnation-point integrated heat load. The accuracy of the solution is assessed via numerical integration of the exact equations of motion. The solution is also compared to the classical solutions of Yaroshevskii and Allen and Eggers. The new second-order analytical solution is more accurate than Yaroshevskii's fifth-order solution for a range of shallow (-3 deg) to steep (up to -90 deg) entry flight path angles, thereby extending the range of applicability of the solution as compared to the classical Yaroshevskii solution, which is restricted to an entry flight path of approximately -40 deg. Universal planetary entry equations are used to develop a new analytical theory for ballistic entry of spacecraft for moderate to large initial flight path angles. Chapman's altitude variable is used as the independent variable. Poincare's method of small parameters is used to develop an analytical solution for the velocity and the flight path angle. The new solution is used to formulate key expressions for range, time-of-flight, deceleration, and aerodynamic heating parameters (e.g., stagnation-point heat rate, total stagnation-point heat load, and average heat input). The classical approximate solution of Chapman's entry equation appears as the zero-order term in the new solution. The new solution represents an order of magnitude enhancement in the accuracy compared to existing analytical solutions for moderate to large entry
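For orientation, the classical Allen and Eggers ballistic-entry solution referenced above can be checked against direct numerical integration. All vehicle and atmosphere parameters below are assumed round numbers, under the standard simplifications (exponential atmosphere, constant flight-path angle, gravity neglected):

```python
import numpy as np

rho0, H = 1.225, 7200.0          # sea-level density (kg/m^3), scale height (m)
beta = 300.0                     # ballistic coefficient m/(C_D A), kg/m^2
gamma = np.radians(45.0)         # magnitude of the flight-path angle
vE = 7500.0                      # entry speed, m/s

def v_allen_eggers(h):
    """Closed-form Allen-Eggers speed at altitude h for steep ballistic entry."""
    rho = rho0 * np.exp(-h / H)
    return vE * np.exp(-rho * H / (2.0 * beta * np.sin(gamma)))

# numerical integration of dv/dh = rho(h) v / (2 beta sin gamma), marching down
hs = np.linspace(120e3, 20e3, 20001)
v = vE
for h_hi, h_lo in zip(hs[:-1], hs[1:]):
    rho_mid = rho0 * np.exp(-0.5 * (h_hi + h_lo) / H)
    v += (h_lo - h_hi) * rho_mid * v / (2.0 * beta * np.sin(gamma))

rel_err = abs(v - v_allen_eggers(20e3)) / v_allen_eggers(20e3)
```

The close agreement at steep angles is exactly the regime where the classical solution holds; the dissertation's new solutions extend the validity over flight-path angle and add heating and range expressions.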
Competing on talent analytics.
Davenport, Thomas H; Harris, Jeanne; Shapiro, Jeremy
2010-10-01
Do investments in your employees actually affect workforce performance? Who are your top performers? How can you empower and motivate other employees to excel? Leading-edge companies such as Google, Best Buy, Procter & Gamble, and Sysco use sophisticated data-collection technology and analysis to answer these questions, leveraging a range of analytics to improve the way they attract and retain talent, connect their employee data to business performance, differentiate themselves from competitors, and more. The authors present the six key ways in which companies track, analyze, and use data about their people, ranging from a simple baseline of metrics to monitor the organization's overall health to custom modeling for predicting future head count depending on various "what if" scenarios. They go on to show that companies competing on talent analytics manage data and technology at an enterprise level, support what analytical leaders do, choose realistic targets for analysis, and hire analysts with strong interpersonal skills as well as broad expertise. PMID:20929194
Simple analytic potentials for linear ion traps
NASA Technical Reports Server (NTRS)
Janik, G. R.; Prestage, J. D.; Maleki, L.
1989-01-01
A simple analytical model was developed for the electric and ponderomotive (trapping) potentials in linear ion traps. This model was used to calculate the required voltage drive to a mercury trap, and the result compares well with experiments. The model gives a detailed picture of the geometric shape of the trapping potential and allows an accurate calculation of the well depth. The simplicity of the model allowed an investigation of related, more exotic trap designs which may have advantages in light-collection efficiency.
Time-domain Raman analytical forward solvers.
Martelli, Fabrizio; Binzoni, Tiziano; Sekar, Sanathana Konugolu Venkata; Farina, Andrea; Cavalieri, Stefano; Pifferi, Antonio
2016-09-01
A set of time-domain analytical forward solvers for Raman signals detected from homogeneous diffusive media is presented. The time-domain solvers have been developed for two geometries: the parallelepiped and the finite cylinder. The potential presence of a background fluorescence emission, contaminating the Raman signal, has also been taken into account. All the solvers have been obtained as solutions of the time dependent diffusion equation. The validation of the solvers has been performed by means of comparisons with the results of "gold standard" Monte Carlo simulations. These forward solvers provide an accurate tool to explore the information content encoded in the time-resolved Raman measurements. PMID:27607645
Safouhi, Hassan . E-mail: hassan.safouhi@ualberta.ca; Berlu, Lilian
2006-07-20
Molecular overlap-like quantum similarity measurements imply the evaluation of overlap integrals of two molecular electronic densities related by the Dirac delta function. When the electronic densities are expanded over atomic orbitals using the usual LCAO-MO approach (linear combination of atomic orbitals), overlap-like quantum similarity integrals can be expressed in terms of four-center overlap integrals. It is shown that by introducing the Fourier transform of the Dirac delta function in the integrals and using the Fourier transform approach combined with the so-called B functions, one can obtain analytic expressions for the integrals under consideration. These analytic expressions involve highly oscillatory semi-infinite spherical Bessel functions, which are the principal source of severe numerical and computational difficulties. In this work, we present a highly efficient algorithm for a fast and accurate numerical evaluation of these multicenter overlap-like quantum similarity integrals over Slater type functions. This algorithm is based on the SD-bar approach due to Safouhi. Recurrence formulae are used for a better control of the degree of accuracy and for a better stability of the algorithm. The numerical result section shows the efficiency of our algorithm, compared with the alternatives using the one-center two-range expansion method (which leads to very complicated analytic expressions), the epsilon algorithm, and the nonlinear D-bar transformation.
Fast and Accurate Construction of Confidence Intervals for Heritability.
Schweiger, Regev; Kaufman, Shachar; Laaksonen, Reijo; Kleber, Marcus E; März, Winfried; Eskin, Eleazar; Rosset, Saharon; Halperin, Eran
2016-06-01
Estimation of heritability is fundamental in genetic studies. Recently, heritability estimation using linear mixed models (LMMs) has gained popularity because these estimates can be obtained from unrelated individuals collected in genome-wide association studies. Typically, heritability estimation under LMMs uses the restricted maximum likelihood (REML) approach. Existing methods for the construction of confidence intervals and estimators of SEs for REML rely on asymptotic properties. However, these assumptions are often violated because of the bounded parameter space, statistical dependencies, and limited sample size, leading to biased estimates and inflated or deflated confidence intervals. Here, we show that the estimation of confidence intervals by state-of-the-art methods is inaccurate, especially when the true heritability is relatively low or relatively high. We further show that these inaccuracies occur in datasets including thousands of individuals. Such biases are present, for example, in estimates of heritability of gene expression in the Genotype-Tissue Expression project and of lipid profiles in the Ludwigshafen Risk and Cardiovascular Health study. We also show that often the probability that the genetic component is estimated as 0 is high even when the true heritability is bounded away from 0, emphasizing the need for accurate confidence intervals. We propose a computationally efficient method, ALBI (accurate LMM-based heritability bootstrap confidence intervals), for estimating the distribution of the heritability estimator and for constructing accurate confidence intervals. Our method can be used as an add-on to existing methods for estimating heritability and variance components, such as GCTA, FaST-LMM, GEMMA, or EMMAX. PMID:27259052
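The parametric-bootstrap idea behind accurate confidence intervals for a bounded estimator can be sketched in a few lines. The toy estimator below (a noisy value clipped to [0, 1]) is a hypothetical stand-in, not ALBI's REML estimator; it only illustrates why percentile intervals respect the parameter bounds where normal-approximation intervals need not.

```python
import numpy as np

rng = np.random.default_rng(0)

def h2_hat(h2_true, n, rng):
    # Toy heritability estimator: unbiased before clipping, then forced
    # into the bounded parameter space [0, 1] (hypothetical stand-in).
    return float(np.clip(h2_true + rng.normal(0.0, 1.0 / np.sqrt(n)), 0.0, 1.0))

def bootstrap_ci(est, n, rng, b=2000, alpha=0.05):
    # Parametric bootstrap: simulate the estimator's distribution at the
    # point estimate, then take percentile bounds for the interval.
    boots = np.sort([h2_hat(est, n, rng) for _ in range(b)])
    return float(boots[int(b * alpha / 2)]), float(boots[int(b * (1 - alpha / 2)) - 1])

est = h2_hat(0.05, 100, rng)   # low true heritability: boundary nearby
lo, hi = bootstrap_ci(est, 100, rng)
```

With a low point estimate, a large fraction of bootstrap replicates pile up at exactly 0, which is the boundary mass the asymptotic normal interval cannot represent.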
Analytic treatment of solar neutrino oscillations
Beacom, J.F.; Balantekin, A.B.
1995-10-01
Recently, Bruggen et al. have derived analytic expressions for the electron neutrino survival probability for neutrinos undergoing matter-enhanced oscillations as they escape from the sun. These are derived using Landau-Zener oscillation formulas for nonadiabatic level crossings. For the solar density, they assume either a linear or an exponential form. However, the solar density is only roughly exponential. Using a uniform approximation, we generalize this method to an arbitrary monotonic solar density.
Badami, Rokhsareh; VaezMousavi, Mohammad; Wulf, Gabriele; Namazizadeh, Mahdi
2012-06-01
One purpose of the present study was to examine whether self-confidence or anxiety would be differentially affected by feedback from more accurate rather than less accurate trials. The second purpose was to determine whether arousal variations (activation) would predict performance. On Day 1, participants performed a golf putting task under one of two conditions: one group received feedback on the most accurate trials, whereas another group received feedback on the least accurate trials. On Day 2, participants completed an anxiety questionnaire and performed a retention test. Skin conductance level, as a measure of arousal, was determined. The results indicated that feedback about more accurate trials resulted in more effective learning as well as increased self-confidence. Also, activation was a predictor of performance. PMID:22808705
Analytical and Numerical Investigations into Hemisphere-Shaped Electrostatic Sensors
Lin, Jun; Chen, Zhong-Sheng; Hu, Zheng; Yang, Yong-Min; Tang, Xin
2014-01-01
Electrostatic sensors have been widely used in many applications due to their advantages of low cost and robustness. Their spatial sensitivity and time-frequency characteristics are two important performance parameters. In this paper, an analytical model of the induced charge on a novel hemisphere-shaped electrostatic sensor was presented to investigate its accurate sensing characteristics. First, a Poisson model was built for the electric fields produced by charged particles. Then the spatial sensitivity and time-frequency response functions were derived directly via the Green function. Finally, numerical calculations were done to validate the theoretical results. The results demonstrate that the hemisphere-shaped sensors have a highly symmetric 3D spatial sensitivity, expressible in terms of elementary functions, and that the spatial sensitivity is higher but less homogeneous near the hemispherical surface. Additionally, the whole monitoring system, consisting of an electrostatic probe and a signal conditioner circuit, acts as a band-pass filter. The time-frequency characteristics depend strongly on the spatial position and velocity of the charged particle, the radius of the probe, and the equivalent resistance and capacitance of the circuit. PMID:25090419
A semi-analytical guidance algorithm for autonomous landing
NASA Astrophysics Data System (ADS)
Lunghi, Paolo; Lavagna, Michèle; Armellin, Roberto
2015-06-01
One of the main challenges posed by the next generation of space systems is the high level of autonomy they will require. Hazard Detection and Avoidance is a key technology in this context. An adaptive guidance algorithm for landing, which updates the trajectory to the surface by solving an optimal control problem, is presented here. A semi-analytical approach is proposed. The trajectory is expressed in a polynomial form of minimum order to satisfy a set of boundary constraints derived from initial and final states and attitude requirements. By imposing the boundary conditions, a fully determined guidance profile is obtained as a function of a restricted set of parameters. The guidance computation is thus reduced to the determination of these parameters so as to satisfy path constraints and other additional constraints not implicitly satisfied by the polynomial formulation. The algorithm is applied to two different scenarios, a lunar landing and an asteroid landing, to highlight its general validity. An extensive Monte Carlo test campaign is conducted to verify the versatility of the algorithm in realistic cases, by the introduction of attitude control systems, thrust modulation, and navigation errors. The proposed approach proved to be flexible and accurate, granting a precision of a few meters at touchdown.
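The idea of a guidance profile fully determined by boundary conditions can be illustrated with a single-axis sketch: a minimum-order (cubic) polynomial pinned by position and velocity at the initial and final times. The numbers below are arbitrary; the actual algorithm works per axis with higher-order polynomials and attitude constraints.

```python
import numpy as np

def boundary_cubic(p0, v0, pf, vf, tf):
    # Solve for c0..c3 of p(t) = c0 + c1*t + c2*t^2 + c3*t^3 such that
    # p(0) = p0, p'(0) = v0, p(tf) = pf, p'(tf) = vf.
    a = np.array([
        [1.0, 0.0, 0.0,      0.0],
        [0.0, 1.0, 0.0,      0.0],
        [1.0, tf,  tf**2,    tf**3],
        [0.0, 1.0, 2.0*tf,   3.0*tf**2],
    ])
    return np.linalg.solve(a, np.array([p0, v0, pf, vf]))

c = boundary_cubic(1000.0, -50.0, 0.0, 0.0, 30.0)  # descend 1 km in 30 s
altitude = lambda t: c[0] + c[1]*t + c[2]*t**2 + c[3]*t**3
velocity = lambda t: c[1] + 2.0*c[2]*t + 3.0*c[3]*t**2
```

Once the coefficients are fixed, any remaining free parameters (here, only tf) can be searched to satisfy path constraints, which is the "restricted set of parameters" reduction the abstract describes.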
New model accurately predicts reformate composition
Ancheyta-Juarez, J.; Aguilar-Rodriguez, E. )
1994-01-31
Although naphtha reforming is a well-known process, the evolution of catalyst formulation, as well as new trends in gasoline specifications, have led to rapid evolution of the process, including: reactor design, regeneration mode, and operating conditions. Mathematical modeling of the reforming process is an increasingly important tool. It is fundamental to the proper design of new reactors and revamp of existing ones. Modeling can be used to optimize operating conditions, analyze the effects of process variables, and enhance unit performance. Instituto Mexicano del Petroleo has developed a model of the catalytic reforming process that accurately predicts reformate composition at the higher-severity conditions at which new reformers are being designed. The new AA model is more accurate than previous proposals because it takes into account the effects of temperature and pressure on the rate constants of each chemical reaction.
Accurate colorimetric feedback for RGB LED clusters
NASA Astrophysics Data System (ADS)
Man, Kwong; Ashdown, Ian
2006-08-01
We present an empirical model of LED emission spectra that is applicable to both InGaN and AlInGaP high-flux LEDs, and which accurately predicts their relative spectral power distributions over a wide range of LED junction temperatures. We further demonstrate with laboratory measurements that changes in LED spectral power distribution with temperature can be accurately predicted with first- or second-order equations. This provides the basis for a real-time colorimetric feedback system for RGB LED clusters that can maintain the chromaticity of white light at constant intensity to within ±0.003 Δuv over a range of 45 degrees Celsius, and to within 0.01 Δuv when dimmed over an intensity range of 10:1.
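A first- vs. second-order temperature model of the kind described can be fit with ordinary least squares. The wavelength-shift numbers below are invented for illustration; only the fitting pattern mirrors the paper's approach.

```python
import numpy as np

# Hypothetical calibration data: spectral peak shift of one LED channel
# versus junction temperature (values are illustrative, not measured).
tj = np.array([25.0, 40.0, 55.0, 70.0, 85.0])      # degrees Celsius
shift_nm = np.array([0.0, 1.7, 3.8, 6.2, 9.1])     # nm

lin = np.polyfit(tj, shift_nm, 1)    # first-order model
quad = np.polyfit(tj, shift_nm, 2)   # second-order model

def rms_error(coeffs):
    # RMS residual of a fitted polynomial against the calibration data.
    return float(np.sqrt(np.mean((np.polyval(coeffs, tj) - shift_nm) ** 2)))
```

Because the quadratic model nests the linear one, its least-squares residual can never be larger, and a feedback controller would evaluate the chosen polynomial at the sensed junction temperature each update cycle.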
Accurate mask model for advanced nodes
NASA Astrophysics Data System (ADS)
Zine El Abidine, Nacer; Sundermann, Frank; Yesilada, Emek; Ndiaye, El Hadji Omar; Mishra, Kushlendra; Paninjath, Sankaranarayanan; Bork, Ingo; Buck, Peter; Toublan, Olivier; Schanen, Isabelle
2014-07-01
Standard OPC models consist of a physical optical model and an empirical resist model. The resist model compensates for the imprecision of the optical model on top of modeling resist development. The optical model imprecision may result from mask topography effects and real mask information, including mask e-beam writing and mask process contributions. For advanced technology nodes, significant progress has been made in modeling mask topography to improve optical model accuracy. However, mask information is difficult to decorrelate from the standard OPC model. Our goal is to establish an accurate mask model through a dedicated calibration exercise. In this paper, we present a flow to calibrate an accurate mask model, enabling its implementation. The study covers the different effects that should be embedded in the mask model as well as the experiments required to model them.
Accurate guitar tuning by cochlear implant musicians.
Lu, Thomas; Huang, Juan; Zeng, Fan-Gang
2014-01-01
Modern cochlear implant (CI) users understand speech but find difficulty in music appreciation due to poor pitch perception. Still, some deaf musicians continue to perform with their CI. Here we show unexpected results that CI musicians can reliably tune a guitar by CI alone and, under controlled conditions, match simultaneously presented tones to <0.5 Hz. One subject had normal contralateral hearing and produced more accurate tuning with CI than his normal ear. To understand these counterintuitive findings, we presented tones sequentially and found that tuning error was larger at ∼ 30 Hz for both subjects. A third subject, a non-musician CI user with normal contralateral hearing, showed similar trends in performance between CI and normal hearing ears but with less precision. This difference, along with electric analysis, showed that accurate tuning was achieved by listening to beats rather than discriminating pitch, effectively turning a spectral task into a temporal discrimination task. PMID:24651081
Two highly accurate methods for pitch calibration
NASA Astrophysics Data System (ADS)
Kniel, K.; Härtig, F.; Osawa, S.; Sato, O.
2009-11-01
Among profile, helix, and tooth thickness, pitch is one of the most important parameters in the evaluation of involute gear measurements. In principle, coordinate measuring machines (CMMs) and CNC-controlled gear measuring machines, as a variant of CMMs, are suited for these kinds of gear measurements. The Japan National Institute of Advanced Industrial Science and Technology (NMIJ/AIST) and the German national metrology institute, the Physikalisch-Technische Bundesanstalt (PTB), have each independently developed highly accurate pitch calibration methods applicable to CMMs or gear measuring machines. Both calibration methods are based on the so-called closure technique, which allows the separation of the systematic errors of the measurement device from the errors of the gear. For the verification of both calibration methods, NMIJ/AIST and PTB performed measurements on a specially designed pitch artifact. The comparison of the results shows that both methods can be used for highly accurate calibrations of pitch standards.
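The closure technique can be demonstrated with a toy model: measure a circular artifact in every rotational mounting position, and the rotation-averaged data separates device error from artifact error. The zero-mean, noise-free setup below is an idealization for illustration, not the actual NMIJ/AIST or PTB procedure.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 12  # pitch positions (teeth)

# Zero-mean "true" errors: the gear artifact's pitch deviations and the
# measuring machine's systematic per-position errors (synthetic values).
gear = rng.normal(0.0, 1.0, n);    gear -= gear.mean()
machine = rng.normal(0.0, 0.5, n); machine -= machine.mean()

# Closure measurements: remount the gear rotated by s positions; each
# noise-free observation sums a shifted gear error and the machine error.
obs = np.array([[gear[(i + s) % n] + machine[i] for i in range(n)]
                for s in range(n)])

# Averaging over all rotations cancels the zero-mean gear contribution,
# recovering the machine error; subtracting it back out recovers the
# gear error at each tooth.
machine_est = obs.mean(axis=0)
gear_est = np.array([np.mean([obs[s, (j - s) % n] - machine_est[(j - s) % n]
                              for s in range(n)]) for j in range(n)])
```

In this idealized setting both estimates are exact; in practice random measurement noise remains, which is why the real methods combine closure with repeated measurements and uncertainty budgets.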
Accurate modeling of parallel scientific computations
NASA Technical Reports Server (NTRS)
Nicol, David M.; Townsend, James C.
1988-01-01
Scientific codes are usually parallelized by partitioning a grid among processors. To achieve top performance it is necessary to partition the grid so as to balance workload and minimize communication/synchronization costs. This problem is particularly acute when the grid is irregular, changes over the course of the computation, and is not known until load time. Critical mapping and remapping decisions rest on the ability to accurately predict performance, given a description of a grid and its partition. This paper discusses one approach to this problem, and illustrates its use on a one-dimensional fluids code. The models constructed are shown to be accurate, and are used to find optimal remapping schedules.
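The load-balancing step described above, partitioning a grid so each processor's workload is roughly equal, can be sketched for the one-dimensional case with prefix sums. This is an illustrative contiguous partitioner, not the paper's performance model.

```python
import bisect

def balanced_cuts(weights, p):
    # Split `weights` into p contiguous chunks whose sums are close to
    # total/p, by cutting at the prefix sums nearest the ideal quantiles.
    prefix = [0]
    for w in weights:
        prefix.append(prefix[-1] + w)
    total = prefix[-1]
    cuts = [bisect.bisect_left(prefix, total * k / p) for k in range(1, p)]
    return [0] + cuts + [len(weights)]

# Irregular per-cell workload split across two processors.
bounds = balanced_cuts([1, 1, 4, 1, 1, 4, 1, 1], 2)
```

When the grid changes during the computation, rerunning such a partitioner against predicted (rather than measured) workloads is exactly where an accurate performance model pays off.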
Analytical Model of Nano-Electromechanical (NEM) Nonvolatile Memory Cells
NASA Astrophysics Data System (ADS)
Han, Boram; Choi, Woo Young
The fringe field effects of nano-electromechanical (NEM) nonvolatile memory cells have been investigated analytically for the accurate evaluation of NEM memory cells. As the beam width is scaled down, the fringe field effect becomes more severe: the pull-in, release, and hysteresis voltages are observed to decrease more than predicted. The effect of the fringe field on cell characteristics is also discussed.
An accurate registration technique for distorted images
NASA Technical Reports Server (NTRS)
Delapena, Michele; Shaw, Richard A.; Linde, Peter; Dravins, Dainis
1990-01-01
Accurate registration of International Ultraviolet Explorer (IUE) images is crucial because the variability of the geometrical distortions that are introduced by the SEC-Vidicon cameras ensures that raw science images are never perfectly aligned with the Intensity Transfer Functions (ITFs) (i.e., graded floodlamp exposures that are used to linearize and normalize the camera response). A technique for precisely registering IUE images which uses a cross correlation of the fixed pattern that exists in all raw IUE images is described.
Accurate maser positions for MALT-45
NASA Astrophysics Data System (ADS)
Jordan, Christopher; Bains, Indra; Voronkov, Maxim; Lo, Nadia; Jones, Paul; Muller, Erik; Cunningham, Maria; Burton, Michael; Brooks, Kate; Green, James; Fuller, Gary; Barnes, Peter; Ellingsen, Simon; Urquhart, James; Morgan, Larry; Rowell, Gavin; Walsh, Andrew; Loenen, Edo; Baan, Willem; Hill, Tracey; Purcell, Cormac; Breen, Shari; Peretto, Nicolas; Jackson, James; Lowe, Vicki; Longmore, Steven
2013-10-01
MALT-45 is an untargeted survey, mapping the Galactic plane in CS (1-0), Class I methanol masers, SiO masers and thermal emission, and high frequency continuum emission. After obtaining images from the survey, a number of masers were detected, but without accurate positions. This project seeks to resolve each maser and its environment, with the ultimate goal of placing the Class I methanol maser into a timeline of high mass star formation.
Accurate phase-shift velocimetry in rock.
Shukla, Matsyendra Nath; Vallatos, Antoine; Phoenix, Vernon R; Holmes, William M
2016-06-01
Spatially resolved Pulsed Field Gradient (PFG) velocimetry techniques can provide precious information concerning flow through opaque systems, including rocks. This velocimetry data is used to enhance flow models in a wide range of systems, from oil behaviour in reservoir rocks to contaminant transport in aquifers. Phase-shift velocimetry is the fastest way to produce velocity maps but critical issues have been reported when studying flow through rocks and porous media, leading to inaccurate results. Combining PFG measurements for flow through Bentheimer sandstone with simulations, we demonstrate that asymmetries in the molecular displacement distributions within each voxel are the main source of phase-shift velocimetry errors. We show that when flow-related average molecular displacements are negligible compared to self-diffusion ones, symmetric displacement distributions can be obtained while phase measurement noise is minimised. We elaborate a complete method for the production of accurate phase-shift velocimetry maps in rocks and low porosity media and demonstrate its validity for a range of flow rates. This development of accurate phase-shift velocimetry now enables more rapid and accurate velocity analysis, potentially helping to inform both industrial applications and theoretical models. PMID:27111139
Accurate Molecular Polarizabilities Based on Continuum Electrostatics
Truchon, Jean-François; Nicholls, Anthony; Iftimie, Radu I.; Roux, Benoît; Bayly, Christopher I.
2013-01-01
A novel approach for representing the intramolecular polarizability as a continuum dielectric is introduced to account for molecular electronic polarization. It is shown, using a finite-difference solution to the Poisson equation, that the Electronic Polarization from Internal Continuum (EPIC) model yields accurate gas-phase molecular polarizability tensors for a test set of 98 challenging molecules composed of heteroaromatics, alkanes and diatomics. The electronic polarization originates from a high intramolecular dielectric that produces polarizabilities consistent with B3LYP/aug-cc-pVTZ and experimental values when surrounded by vacuum dielectric. In contrast to other approaches to model electronic polarization, this simple model avoids the polarizability catastrophe and accurately calculates molecular anisotropy with the use of very few fitted parameters and without resorting to auxiliary sites or anisotropic atomic centers. On average, the unsigned errors in the average polarizability and anisotropy compared to B3LYP are 2% and 5%, respectively. The correlation between the polarizability components from B3LYP and this approach leads to an R2 of 0.990 and a slope of 0.999. Even the F2 anisotropy, shown to be a difficult case for existing polarizability models, can be reproduced within 2% error. In addition to providing new parameters for a rapid method directly applicable to the calculation of polarizabilities, this work extends the widely used Poisson equation to areas where accurate molecular polarizabilities matter. PMID:23646034
Analytical Modeling of Squeeze Film Damping in Dual Axis Torsion Microactuators
NASA Astrophysics Data System (ADS)
Moeenfard, Hamid
2015-10-01
In this paper, the problem of squeeze film damping in dual-axis torsion microactuators is modeled, and closed-form expressions are provided for the damping torques around the tilting axes of the actuator. The Reynolds equation, which governs the pressure distribution underneath the actuator, is linearized. The resulting equation is then solved analytically. The obtained pressure distribution is used to calculate the normalized damping torques around the tilting axes of the actuator. The dependence of the damping torques on the design parameters of the dual-axis torsion actuator is studied. It is observed that with proper selection of the actuator's aspect ratio, the damping torque along one of the tilting directions can be eliminated. It is shown that when the tilting angles of the actuator are small, squeeze film damping acts like linear viscous damping. The results of this paper can be used for accurate dynamical modeling and control of dual-axis torsion microactuators.
Bruce, S D; Higinbotham, J; Marshall, I; Beswick, P H
2000-01-01
The approximation of the Voigt line shape by the linear summation of Lorentzian and Gaussian line shapes of equal width is well documented and has proved to be a useful function for modeling in vivo (1)H NMR spectra. We show that the error in determining peak areas is less than 0.72% over a range of simulated Voigt line shapes. Previous work has concentrated on empirical analysis of the Voigt function, yielding accurate expressions for recovering the intrinsic Lorentzian component of simulated line shapes. In this work, an analytical approach to the approximation is presented which is valid for the range of Voigt line shapes in which either the Lorentzian or Gaussian component is dominant. With an empirical analysis of the approximation, the direct recovery of T(2) values from simulated line shapes is also discussed. PMID:10617435
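The equal-width Lorentzian plus Gaussian sum described above (often called a pseudo-Voigt profile) is easy to write down. The mixing fraction below is an arbitrary illustration, not a value fitted in the paper; both components are normalized to unit area so the approximation's total area can be checked numerically.

```python
import numpy as np

def lorentzian(x, x0, w):
    # Unit-area Lorentzian with full width at half maximum w.
    g = w / 2.0
    return (g / np.pi) / ((x - x0) ** 2 + g ** 2)

def gaussian(x, x0, w):
    # Unit-area Gaussian with full width at half maximum w.
    s = w / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    return np.exp(-((x - x0) ** 2) / (2.0 * s ** 2)) / (s * np.sqrt(2.0 * np.pi))

def pseudo_voigt(x, x0, w, eta):
    # Linear approximation to the Voigt profile: mixing fraction eta of
    # Lorentzian and (1 - eta) of Gaussian, both of equal width w.
    return eta * lorentzian(x, x0, w) + (1.0 - eta) * gaussian(x, x0, w)

x = np.linspace(-200.0, 200.0, 80001)
y = pseudo_voigt(x, 0.0, 2.0, 0.3)
dx = x[1] - x[0]
area = float(np.sum((y[:-1] + y[1:]) * 0.5 * dx))   # trapezoidal area
```

Because each component integrates to one, the pseudo-Voigt area equals one by construction; the small numerical deficit here comes from truncating the Lorentzian's slow tails at ±200 widths.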
High Frequency QRS ECG Accurately Detects Cardiomyopathy
NASA Technical Reports Server (NTRS)
Schlegel, Todd T.; Arenare, Brian; Poulin, Gregory; Moser, Daniel R.; Delgado, Reynolds
2005-01-01
High frequency (HF, 150-250 Hz) analysis over the entire QRS interval of the ECG is more sensitive than conventional ECG for detecting myocardial ischemia. However, the accuracy of HF QRS ECG for detecting cardiomyopathy is unknown. We obtained simultaneous resting conventional and HF QRS 12-lead ECGs in 66 patients with cardiomyopathy (EF = 23.2 ± 6.1%, mean ± SD) and in 66 age- and gender-matched healthy controls using PC-based ECG software recently developed at NASA. The single most accurate ECG parameter for detecting cardiomyopathy was an HF QRS morphological score that takes into consideration the total number and severity of reduced amplitude zones (RAZs) present plus the clustering of RAZs together in contiguous leads. This RAZ score had an area under the receiver operator curve (ROC) of 0.91, and was 88% sensitive, 82% specific and 85% accurate for identifying cardiomyopathy at an optimum score cut-off of 140 points. Although conventional ECG parameters such as the QRS and QTc intervals were also significantly longer in patients than controls (P < 0.001, BBBs excluded), these conventional parameters were less accurate (area under the ROC = 0.77 and 0.77, respectively) than HF QRS morphological parameters for identifying underlying cardiomyopathy. The total amplitude of the HF QRS complexes, as measured by summed root mean square voltages (RMSVs), also differed between patients and controls (33.8 ± 11.5 vs. 41.5 ± 13.6 mV, respectively, P < 0.003), but this parameter was even less accurate in distinguishing the two groups (area under ROC = 0.67) than the HF QRS morphologic and conventional ECG parameters. Diagnostic accuracy was optimal (86%) when the RAZ score from the HF QRS ECG and the QTc interval from the conventional ECG were used simultaneously with cut-offs of ≥40 points and ≥445 ms, respectively. In conclusion 12-lead HF QRS ECG employing
NASA Astrophysics Data System (ADS)
Schnase, J. L.; Duffy, D. Q.; McInerney, M. A.; Tamkin, G. S.; Thompson, J. H.; Gill, R.; Grieg, C. M.
2012-12-01
MERRA Analytic Services (MERRA/AS) is a cyberinfrastructure resource for developing and evaluating a new generation of climate data analysis capabilities. MERRA/AS supports OBS4MIP activities by reducing the time spent in the preparation of Modern Era Retrospective-Analysis for Research and Applications (MERRA) data used in data-model intercomparison. It also provides a testbed for experimental development of high-performance analytics. MERRA/AS is a cloud-based service built around the Virtual Climate Data Server (vCDS) technology that is currently used by the NASA Center for Climate Simulation (NCCS) to deliver Intergovernmental Panel on Climate Change (IPCC) data to the Earth System Grid Federation (ESGF). Crucial to its effectiveness, MERRA/AS's servers will use a workflow-generated realizable object capability to perform analyses over the MERRA data using the MapReduce approach to parallel storage-based computation. The results produced by these operations will be stored by the vCDS, which will also be able to host code sets for those who wish to explore the use of MapReduce for more advanced analytics. While the work described here will focus on the MERRA collection, these technologies can be used to publish other reanalysis, observational, and ancillary OBS4MIP data to ESGF and, importantly, offer an architectural approach to climate data services that can be generalized to applications and customers beyond the traditional climate research community. In this presentation, we describe our approach, experiences, lessons learned, and plans for the future. (A) MERRA/AS software stack. (B) Example MERRA/AS interfaces.
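The MapReduce pattern for storage-based analytics reduces, in miniature, to mappers emitting partial aggregates per data chunk and a reducer combining them. The in-process sketch below computes a global mean this way; the names and data are illustrative, not part of MERRA/AS.

```python
from functools import reduce

import numpy as np

# Three "chunks" standing in for distributed blocks of a reanalysis variable.
chunks = [np.arange(0.0, 10.0), np.arange(10.0, 25.0), np.arange(25.0, 40.0)]

# Map: each chunk independently emits a (sum, count) partial aggregate.
mapped = [(float(c.sum()), int(c.size)) for c in chunks]

# Reduce: combine partial aggregates; sums and counts are associative,
# so the combination order (and hence the parallel schedule) is free.
total, count = reduce(lambda a, b: (a[0] + b[0], a[1] + b[1]), mapped)
mean = total / count
```

The associativity of the partial aggregates is what lets the reduction run next to the storage holding each chunk, which is the "parallel storage-based computation" the abstract refers to.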
Khalsa, Siri Sahib; Siegel, Nathan Phillip; Ho, Clifford Kuofei
2010-04-01
This paper introduces a new analytical 'stretch' function that accurately predicts the flux distribution from on-axis point-focus collectors. Different dish sizes and slope errors can be assessed using this analytical function with a ratio of the focal length to collector diameter fixed at 0.6 to yield the maximum concentration ratio. Results are compared to data, and the stretch function is shown to provide more accurate flux distributions than other analytical methods employing cone optics.
Automation of analytical isotachophoresis
NASA Technical Reports Server (NTRS)
Thormann, Wolfgang
1985-01-01
The basic features of automation of analytical isotachophoresis (ITP) are reviewed. Experimental setups consisting of narrow bore tubes which are self-stabilized against thermal convection are considered. Sample detection in free solution is discussed, listing the detector systems presently used or expected to be of potential use in the near future. The combination of a universal detector measuring the evolution of ITP zone structures with detector systems specific to desired components is proposed as a concept of an automated chemical analyzer based on ITP. Possible miniaturization of such an instrument by means of microlithographic techniques is discussed.
NASA Astrophysics Data System (ADS)
Daeppen, W.
1980-11-01
In the free energy method statistical mechanical models are used to construct a free energy function of the plasma. The equilibrium composition for given temperature and density is found where the free energy is a minimum. Until now the free energy could not be expressed analytically, because the contributions from the partially degenerate electrons and from the inner degrees of freedom of the bound particles had to be evaluated numerically. In the present paper further simplifications are made to obtain an analytic expression for the free energy. Thus the minimum is rapidly found using a second order algorithm, whereas until now numerical first order derivatives and a steepest-descent method had to be used. Consequently time-consuming computations are avoided and the analytical version of the free energy method has successfully been incorporated into the stellar evolution programmes at Geneva Observatory. No use of thermodynamical tables is made, either. Although some accuracy is lost by the simplified analytical expression, the main advantages of the free energy method over simple ideal-gas and Saha-equation subprogrammes (as used in the stellar programmes mentioned) are still kept. The relative errors of the simplifications made here are estimated and they are shown not to exceed 10% altogether. Densities up to those encountered in low-mass main-sequence stars can be treated within the region of validity of the method. Higher densities imply less accurate results. Nonetheless they are consistent so that they cannot disturb the numerical integration of the equilibrium equation in the stellar evolution model. The input quantities of the free energy method presented here are either temperature and density or temperature and pressure; the latter requires a rapid numerical Legendre transformation which has been developed here.
Quality Indicators for Learning Analytics
ERIC Educational Resources Information Center
Scheffel, Maren; Drachsler, Hendrik; Stoyanov, Slavi; Specht, Marcus
2014-01-01
This article proposes a framework of quality indicators for learning analytics that aims to standardise the evaluation of learning analytics tools and to provide a means to capture evidence for the impact of learning analytics on educational practices in a standardised manner. The criteria of the framework and its quality indicators are based on…
Learning Analytics: Readiness and Rewards
ERIC Educational Resources Information Center
Friesen, Norm
2013-01-01
This position paper introduces the relatively new field of learning analytics, first by considering the relevant meanings of both "learning" and "analytics," and then by looking at two main levels at which learning analytics can be or has been implemented in educational organizations. Although integrated turnkey systems or…
Accurately Mapping M31's Microlensing Population
NASA Astrophysics Data System (ADS)
Crotts, Arlin
2004-07-01
We propose to augment an existing microlensing survey of M31 with source identifications provided by a modest amount of ACS {and WFPC2 parallel} observations to yield an accurate measurement of the masses responsible for microlensing in M31, and presumably much of its dark matter. The main benefit of these data is the determination of the physical {or "einstein"} timescale of each microlensing event, rather than an effective {"FWHM"} timescale, allowing masses to be determined more than twice as accurately as without HST data. The einstein timescale is the ratio of the lensing cross-sectional radius and relative velocities. Velocities are known from kinematics, and the cross-section is directly proportional to the {unknown} lensing mass. We cannot easily measure these quantities without knowing the amplification, hence the baseline magnitude, which requires the resolution of HST to find the source star. This makes a crucial difference because M31 lens mass determinations can be more accurate than those towards the Magellanic Clouds through our Galaxy's halo {for the same number of microlensing events} due to the better constrained geometry in the M31 microlensing situation. Furthermore, our larger survey, just completed, should yield at least 100 M31 microlensing events, more than any Magellanic survey. A small amount of ACS+WFPC2 imaging will deliver the potential of this large database {about 350 nights}. For the whole survey {and a delta-function mass distribution} the mass error should approach only about 15%, or about 6% error in slope for a power-law distribution. These results will better allow us to pinpoint the lens halo fraction, and the shape of the halo lens spatial distribution, and allow generalization/comparison of the nature of halo dark matter in spiral galaxies. In addition, we will be able to establish the baseline magnitude for about 50,000 variable stars, as well as measure an unprecedentedly detailed color-magnitude diagram and luminosity
Accurate measurement of unsteady state fluid temperature
NASA Astrophysics Data System (ADS)
Jaremkiewicz, Magdalena
2016-07-01
In this paper, two accurate methods for determining the transient fluid temperature were presented. Measurements were conducted for boiling water since its temperature is known. At the beginning the thermometers are at the ambient temperature and next they are immediately immersed into saturated water. The measurements were carried out with two thermometers of different construction but with the same housing outer diameter equal to 15 mm. One of them is a K-type industrial thermometer widely available commercially. The temperature indicated by the thermometer was corrected considering the thermometers as the first or second order inertia devices. The new design of a thermometer was proposed and also used to measure the temperature of boiling water. Its characteristic feature is a cylinder-shaped housing with the sheath thermocouple located in its center. The temperature of the fluid was determined based on measurements taken in the axis of the solid cylindrical element (housing) using the inverse space marching method. Measurements of the transient temperature of the air flowing through the wind tunnel using the same thermometers were also carried out. The proposed measurement technique provides more accurate results compared with measurements using industrial thermometers in conjunction with simple temperature correction using the inertial thermometer model of the first or second order. By comparing the results, it was demonstrated that the new thermometer allows obtaining the fluid temperature much faster and with higher accuracy in comparison to the industrial thermometer. Accurate measurements of the fast changing fluid temperature are possible due to the low inertia thermometer and fast space marching method applied for solving the inverse heat conduction problem.
Accurate upwind methods for the Euler equations
NASA Technical Reports Server (NTRS)
Huynh, Hung T.
1993-01-01
A new class of piecewise linear methods for the numerical solution of the one-dimensional Euler equations of gas dynamics is presented. These methods are uniformly second-order accurate, and can be considered as extensions of Godunov's scheme. With an appropriate definition of monotonicity preservation for the case of linear convection, it can be shown that they preserve monotonicity. Similar to Van Leer's MUSCL scheme, they consist of two key steps: a reconstruction step followed by an upwind step. For the reconstruction step, a monotonicity constraint that preserves uniform second-order accuracy is introduced. Computational efficiency is enhanced by devising a criterion that detects the 'smooth' part of the data where the constraint is redundant. The concept and coding of the constraint are simplified by the use of the median function. A slope steepening technique, which has no effect at smooth regions and can resolve a contact discontinuity in four cells, is described. As for the upwind step, existing and new methods are applied in a manner slightly different from those in the literature. These methods are derived by approximating the Euler equations via linearization and diagonalization. At a 'smooth' interface, Harten, Lax, and Van Leer's one-intermediate-state model is employed. A modification for this model that can resolve contact discontinuities is presented. Near a discontinuity, either this modified model or a more accurate one, namely, Roe's flux-difference splitting, is used. The current presentation of Roe's method, via the conceptually simple flux-vector splitting, not only establishes a connection between the two splittings, but also leads to an admissibility correction with no conditional statement, and an efficient approximation to Osher's approximate Riemann solver. These reconstruction and upwind steps result in schemes that are uniformly second-order accurate and economical at smooth regions, and yield high resolution at discontinuities.
NASA Astrophysics Data System (ADS)
Mirkov, Mirko; Sherr, Evan A.; Sierra, Rafael A.; Lloyd, Jenifer R.; Tanghetti, Emil
2006-06-01
Detailed understanding of the thermal processes in biological targets undergoing laser irradiation continues to be a challenging problem. For example, the contemporary pulsed dye laser (PDL) delivers a complex pulse format which presents specific challenges for theoretical understanding and further development. Numerical methods allow for adequate description of the thermal processes, but are lacking for clarifying the effects of the laser parameters. The purpose of this work is to derive a simplified analytical model that can guide the development of future laser designs. A mathematical model of heating and cooling processes in tissue is developed. Exact analytical solutions of the model are found when applied to specific temporal and spatial profiles of heat sources. Solutions are reduced to simple algebraic expressions. An algorithm is presented for approximating realistic cases of laser heating of skin structures by heat sources of the type found to have exact solutions. The simple algebraic expressions are used to provide insight into realistic laser irradiation cases. The model is compared with experiments on purpura threshold radiant exposure for PDL. These include data from four independent groups over a period of 20 years. Two of the data sets are taken from previously published articles. Two more data sets were collected from two groups of patients that were treated with two PDLs (585 and 595 nm) on normal buttocks skin. Laser pulse durations were varied between 0.5 and 40 ms, and radiant exposures were varied between 3 and 20 J/cm2. Treatment sites were evaluated 0.5, 1, and 24 hours later to determine purpuric threshold. The analytical model is in excellent agreement with a wide range of experimental data for purpura threshold radiant exposure. The data collected by independent research groups over the last 20 years with PDLs with wavelengths ranging from 577 to 595 nm were described accurately by this model. The simple analytical model provides an accurate
The first accurate description of an aurora
NASA Astrophysics Data System (ADS)
Schröder, Wilfried
2006-12-01
As technology has advanced, the scientific study of auroral phenomena has increased by leaps and bounds. A look back at the earliest descriptions of aurorae offers an interesting look into how medieval scholars viewed the subjects that we study. Although there are earlier fragmentary references in the literature, the first accurate description of the aurora borealis appears to be that published by the German Catholic scholar Konrad von Megenberg (1309-1374) in his book Das Buch der Natur (The Book of Nature). The book was written between 1349 and 1350.
Are Kohn-Sham conductances accurate?
Mera, H; Niquet, Y M
2010-11-19
We use Fermi-liquid relations to address the accuracy of conductances calculated from the single-particle states of exact Kohn-Sham (KS) density functional theory. We demonstrate a systematic failure of this procedure for the calculation of the conductance, and show how it originates from the lack of renormalization in the KS spectral function. In certain limits this failure can lead to a large overestimation of the true conductance. We also show, however, that the KS conductances can be accurate for single-channel molecular junctions and systems where direct Coulomb interactions are strongly dominant. PMID:21231333
Accurate density functional thermochemistry for larger molecules.
Raghavachari, K.; Stefanov, B. B.; Curtiss, L. A.; Lucent Tech.
1997-06-20
Density functional methods are combined with isodesmic bond separation reaction energies to yield accurate thermochemistry for larger molecules. Seven different density functionals are assessed for the evaluation of heats of formation, ΔH⁰(298 K), for a test set of 40 molecules composed of H, C, O and N. The use of bond separation energies results in a dramatic improvement in the accuracy of all the density functionals. The B3-LYP functional has the smallest mean absolute deviation from experiment (1.5 kcal mol⁻¹).
New law requires 'medically accurate' lesson plans.
1999-09-17
The California Legislature has passed a bill requiring all textbooks and materials used to teach about AIDS be medically accurate and objective. Statements made within the curriculum must be supported by research conducted in compliance with scientific methods, and published in peer-reviewed journals. Some of the current lesson plans were found to contain scientifically unsupported and biased information. In addition, the bill requires material to be "free of racial, ethnic, or gender biases." The legislation is supported by a wide range of interests, but opposed by the California Right to Life Education Fund, because they believe it discredits abstinence-only material. PMID:11366835
Analytical Chemistry Core Capability Assessment - Preliminary Report
Barr, Mary E.; Farish, Thomas J.
2012-05-16
The concept of 'core capability' can be a nebulous one. Even at a fairly specific level, where core capability equals maintaining essential services, it is highly dependent upon the perspective of the requestor. Samples are submitted to analytical services because the requesters do not have the capability to conduct adequate analyses themselves. Some requests are for general chemical information in support of R and D, process control, or process improvement. Many analyses, however, are part of a product certification package and must comply with higher-level customer quality assurance requirements. So which services are essential to that customer - just those for product certification? Does the customer also (indirectly) need services that support process control and improvement? And what is the timeframe? Capability is often expressed in terms of the currently utilized procedures, and most programmatic customers can only plan a few years out, at best. But should core capability consider the long term, where new technologies, aging facilities, and personnel replacements must be considered? These questions, and a multitude of others, explain why attempts to gain long-term consensus on the definition of core capability have consistently failed. This preliminary report will not try to define core capability for any specific program or set of programs. Instead, it will try to address the underlying concerns that drive the desire to determine core capability. Essentially, programmatic customers want to be able to call upon analytical chemistry services to provide all the assays they need, and they don't want to pay for analytical chemistry services they don't currently use (or use infrequently). This report will focus on explaining how the current analytical capabilities and methods evolved to serve a variety of needs, why some analytes have multiple analytical techniques, and what determines the infrastructure for these analyses. This information will be
Analytical formulation of the quantum electromagnetic cross section
NASA Astrophysics Data System (ADS)
Brandsema, Matthew J.; Narayanan, Ram M.; Lanzagorta, Marco
2016-05-01
It has been found that the quantum radar cross section (QRCS) equation can be written in terms of the Fourier transform of the surface atom distribution of the object. This paper uses this form to provide an analytical formulation of the quantum radar cross section by deriving closed form expressions for various geometries. These expressions are compared to the classical radar cross section (RCS) expressions and the quantum advantages are discerned from the differences in the equations. Multiphoton illumination is also briefly discussed.
NASA Astrophysics Data System (ADS)
Vizireanu, D. N.; Halunga, S. V.
2012-04-01
A simple, fast, and accurate amplitude estimation algorithm for sinusoidal signals in DSP-based instrumentation is proposed. It is shown that eight samples, used in two steps, are sufficient. A practical analytical formula for amplitude estimation is obtained. Numerical results are presented. Simulations have been performed when the sampled signal is affected by white Gaussian noise and when the samples are quantized on a given number of bits.
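The abstract does not reproduce the paper's eight-sample formula, so as a point of reference the sketch below shows the generic closed-form least-squares amplitude estimate for a sinusoid of known frequency, which any such formula approximates. All names are illustrative; this is not the authors' algorithm.

```python
import math

# Hedged sketch: fit y ~ a*sin(w*t) + b*cos(w*t) by least squares for a
# known angular frequency w, then report amplitude sqrt(a^2 + b^2).
# This is a standard textbook estimator, not the paper's two-step formula.

def amplitude_ls(samples, times, omega):
    s = [math.sin(omega * t) for t in times]
    c = [math.cos(omega * t) for t in times]
    # Normal equations for the 2x2 least-squares system.
    ss = sum(si * si for si in s)
    cc = sum(ci * ci for ci in c)
    sc = sum(si * ci for si, ci in zip(s, c))
    sy = sum(si * yi for si, yi in zip(s, samples))
    cy = sum(ci * yi for ci, yi in zip(c, samples))
    det = ss * cc - sc * sc
    a = (sy * cc - cy * sc) / det
    b = (cy * ss - sy * sc) / det
    return math.hypot(a, b)

# Eight noiseless samples of 3.0*sin(2*pi*50*t + 0.7), sampled at 400 Hz,
# recover the amplitude exactly.
f, amp, phase = 50.0, 3.0, 0.7
ts = [k / 400.0 for k in range(8)]
ys = [amp * math.sin(2 * math.pi * f * t + phase) for t in ts]
print(round(amplitude_ls(ys, ts, 2 * math.pi * f), 6))  # 3.0
```

With quantized or noisy samples, as studied in the paper, the same estimator returns an approximation rather than the exact amplitude.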
Microarrays, Integrated Analytical Systems
NASA Astrophysics Data System (ADS)
Combinatorial chemistry is used to find materials that form sensor microarrays. This book discusses the fundamentals, and then proceeds to the many applications of microarrays, from measuring gene expression (DNA microarrays) to protein-protein interactions, peptide chemistry, carbohydrate chemistry, electrochemical detection, and microfluidics.
Accurate basis set truncation for wavefunction embedding
NASA Astrophysics Data System (ADS)
Barnes, Taylor A.; Goodpaster, Jason D.; Manby, Frederick R.; Miller, Thomas F.
2013-07-01
Density functional theory (DFT) provides a formally exact framework for performing embedded subsystem electronic structure calculations, including DFT-in-DFT and wavefunction theory-in-DFT descriptions. In the interest of efficiency, it is desirable to truncate the atomic orbital basis set in which the subsystem calculation is performed, thus avoiding high-order scaling with respect to the size of the MO virtual space. In this study, we extend a recently introduced projection-based embedding method [F. R. Manby, M. Stella, J. D. Goodpaster, and T. F. Miller III, J. Chem. Theory Comput. 8, 2564 (2012)], 10.1021/ct300544e to allow for the systematic and accurate truncation of the embedded subsystem basis set. The approach is applied to both covalently and non-covalently bound test cases, including water clusters and polypeptide chains, and it is demonstrated that errors associated with basis set truncation are controllable to well within chemical accuracy. Furthermore, we show that this approach allows for switching between accurate projection-based embedding and DFT embedding with approximate kinetic energy (KE) functionals; in this sense, the approach provides a means of systematically improving upon the use of approximate KE functionals in DFT embedding.
How Accurately can we Calculate Thermal Systems?
Cullen, D; Blomquist, R N; Dean, C; Heinrichs, D; Kalugin, M A; Lee, M; Lee, Y; MacFarlan, R; Nagaya, Y; Trkov, A
2004-04-20
I would like to determine how accurately a variety of neutron transport code packages (code and cross section libraries) can calculate simple integral parameters, such as k_eff, for systems that are sensitive to thermal neutron scattering. Since we will only consider theoretical systems, we cannot really determine absolute accuracy compared to any real system. Therefore rather than accuracy, it would be more precise to say that I would like to determine the spread in answers that we obtain from a variety of code packages. This spread should serve as an excellent indicator of how accurately we can really model and calculate such systems today. Hopefully, eventually this will lead to improvements in both our codes and the thermal scattering models that they use in the future. In order to accomplish this I propose a number of extremely simple systems that involve thermal neutron scattering that can be easily modeled and calculated by a variety of neutron transport codes. These are theoretical systems designed to emphasize the effects of thermal scattering, since that is what we are interested in studying. I have attempted to keep these systems very simple, and yet at the same time they include most, if not all, of the important thermal scattering effects encountered in a large, water-moderated, uranium-fueled thermal system, i.e., our typical thermal reactors.
Accurate shear measurement with faint sources
Zhang, Jun; Foucaud, Sebastien; Luo, Wentao E-mail: walt@shao.ac.cn
2015-01-01
For cosmic shear to become an accurate cosmological probe, systematic errors in the shear measurement method must be unambiguously identified and corrected for. Previous work of this series has demonstrated that cosmic shears can be measured accurately in Fourier space in the presence of background noise and finite pixel size, without assumptions on the morphologies of galaxy and PSF. The remaining major source of error is source Poisson noise, due to the finiteness of source photon number. This problem is particularly important for faint galaxies in space-based weak lensing measurements, and for ground-based images of short exposure times. In this work, we propose a simple and rigorous way of removing the shear bias from the source Poisson noise. Our noise treatment can be generalized for images made of multiple exposures through MultiDrizzle. This is demonstrated with the SDSS and COSMOS/ACS data. With a large ensemble of mock galaxy images of unrestricted morphologies, we show that our shear measurement method can achieve sub-percent level accuracy even for images of signal-to-noise ratio less than 5 in general, making it the most promising technique for cosmic shear measurement in the ongoing and upcoming large scale galaxy surveys.
Accurate pose estimation for forensic identification
NASA Astrophysics Data System (ADS)
Merckx, Gert; Hermans, Jeroen; Vandermeulen, Dirk
2010-04-01
In forensic authentication, one aims to identify the perpetrator among a series of suspects or distractors. A fundamental problem in any recognition system that aims for identification of subjects in a natural scene is the lack of constraints on viewing and imaging conditions. In forensic applications, identification proves even more challenging, since most surveillance footage is of abysmal quality. In this context, robust methods for pose estimation are paramount. In this paper we will therefore present a new pose estimation strategy for very low quality footage. Our approach uses 3D-2D registration of a textured 3D face model with the surveillance image to obtain accurate far field pose alignment. Starting from an inaccurate initial estimate, the technique uses novel similarity measures based on the monogenic signal to guide a pose optimization process. We will illustrate the descriptive strength of the introduced similarity measures by using them directly as a recognition metric. Through validation, using both real and synthetic surveillance footage, our pose estimation method is shown to be accurate, and robust to lighting changes and image degradation.
Accurate determination of characteristic relative permeability curves
NASA Astrophysics Data System (ADS)
Krause, Michael H.; Benson, Sally M.
2015-09-01
A recently developed technique to accurately characterize sub-core scale heterogeneity is applied to investigate the factors responsible for flowrate-dependent effective relative permeability curves measured on core samples in the laboratory. The dependency of laboratory measured relative permeability on flowrate has long been both supported and challenged by a number of investigators. Studies have shown that this apparent flowrate dependency is a result of both sub-core scale heterogeneity and outlet boundary effects. However this has only been demonstrated numerically for highly simplified models of porous media. In this paper, flowrate dependency of effective relative permeability is demonstrated using two rock cores, a Berea Sandstone and a heterogeneous sandstone from the Otway Basin Pilot Project in Australia. Numerical simulations of steady-state coreflooding experiments are conducted at a number of injection rates using a single set of input characteristic relative permeability curves. Effective relative permeability is then calculated from the simulation data using standard interpretation methods for calculating relative permeability from steady-state tests. Results show that simplified approaches may be used to determine flowrate-independent characteristic relative permeability provided flow rate is sufficiently high, and the core heterogeneity is relatively low. It is also shown that characteristic relative permeability can be determined at any typical flowrate, and even for geologically complex models, when using accurate three-dimensional models.
Analytic Model of Antenna Sheaths
NASA Astrophysics Data System (ADS)
D'Ippolito, D. A.; Myra, J. R.
2008-11-01
RF sheaths are generated on ICRF antennas whenever the launched fast wave also drives a slow wave, e.g. when the magnetic field is tilted (not perpendicular to the current straps). A new approach to sheath modeling was recently proposed in which the RF waves are computed using a modified boundary condition at the sheath surface to describe the plasma-sheath coupling. Here, we illustrate the use of the sheath BC for antenna sheaths by a model electromagnetic perturbation calculation, treating the B field tilt as a small parameter. Analytic expressions are obtained for the sheath voltage and the rf electric field parallel to B in both sheath and plasma regions, including the Child-Langmuir (self-consistency) constraint. It is shown that the plasma corrections to the sheath voltage (which screen the rf field) can be important. The simple vacuum-field sheath-voltage estimate is obtained as a limiting case. Implications for antenna codes such as TOPICA will be discussed. D.A. D'Ippolito and J.R. Myra, Phys. Plasmas 13, 102508 (2006). V. Lancellotti et al., Nucl. Fusion 46, S476 (2006).
Large-scale analytical Fourier transform of photomask layouts using graphics processing units
NASA Astrophysics Data System (ADS)
Sakamoto, Julia A.
2015-10-01
Compensation of lens-heating effects during the exposure scan in an optical lithographic system requires knowledge of the heating profile in the pupil of the projection lens. A necessary component in the accurate estimation of this profile is the total integrated distribution of light, relying on the squared modulus of the Fourier transform (FT) of the photomask layout for individual process layers. Requiring a layout representation in pixelated image format, the most common approach is to compute the FT numerically via the fast Fourier transform (FFT). However, the file size for a standard 26-mm × 33-mm mask with 5-nm pixels is an overwhelming 137 TB in single precision; the data importing process alone, prior to FFT computation, can render this method highly impractical. A more feasible solution is to handle layout data in a highly compact format with vertex locations of mask features (polygons), which correspond to elements in an integrated circuit, as well as pattern symmetries and repetitions (e.g., GDSII format). Provided the polygons can decompose into shapes for which analytical FT expressions are possible, the analytical approach dramatically reduces computation time and alleviates the burden of importing extensive mask data. Algorithms have been developed for importing and interpreting hierarchical layout data and computing the analytical FT on a graphics processing unit (GPU) for rapid parallel processing, not assuming incoherent imaging. Testing was performed on the active layer of a 392-μm × 297-μm virtual chip test structure with 43 substructures distributed over six hierarchical levels. The factor of improvement in the analytical versus numerical approach for importing layout data, performing CPU-GPU memory transfers, and executing the FT on a single NVIDIA Tesla K20X GPU was 1.6×10⁴, 4.9×10³, and 3.8×10³, respectively. Various ideas for algorithm enhancements will be discussed.
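The analytical-FT idea rests on closed-form transforms for simple shapes. As a hedged illustration (not the paper's GPU implementation), a rectangle of width w and height h centered at (x0, y0) has the exact transform F(fx, fy) = w·h·sinc(w·fx)·sinc(h·fy)·exp(-2πi(fx·x0 + fy·y0)), with sinc(u) = sin(πu)/(πu); a polygon decomposed into such rectangles needs no pixelated image or FFT. Function names below are illustrative.

```python
import math
import cmath

# Hedged sketch: closed-form Fourier transform of an axis-aligned
# rectangle, the kind of analytical building block the abstract describes.

def sinc(u: float) -> float:
    """Normalized sinc: sin(pi*u)/(pi*u), with sinc(0) = 1."""
    return 1.0 if u == 0.0 else math.sin(math.pi * u) / (math.pi * u)

def rect_ft(w, h, x0, y0, fx, fy):
    """Analytical FT of a w-by-h rectangle centered at (x0, y0)."""
    phase = cmath.exp(-2j * math.pi * (fx * x0 + fy * y0))
    return w * h * sinc(w * fx) * sinc(h * fy) * phase

# At zero frequency the transform magnitude equals the rectangle's area.
print(abs(rect_ft(2.0, 3.0, 0.5, -0.25, 0.0, 0.0)))  # 6.0
```

Summing such terms over all polygons (exploiting hierarchy and repetition) gives the layout's full transform without ever rasterizing the mask.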
2008-01-15
The Verde Analytic Modules permit the user to ingest openly available data feeds about phenomenology (storm tracks, wind, precipitation, earthquake, wildfires, and similar natural and manmade power grid disruptions) and forecast power outages, restoration times, customers outaged, and key facilities that will lose power. Damage areas are predicted using historic damage criteria of the affected area. The modules use a cellular automata approach to estimating the distribution circuits assigned to geo-located substations. Population estimates served within the service areas are located within 1 km grid cells and converted to customer counts by conversion through demographic estimation of households and commercial firms within the population cells. Restoration times are estimated by agent-based simulation of restoration crews working according to utility published prioritization calibrated by historic performance.
Analytical sensor redundancy assessment
NASA Technical Reports Server (NTRS)
Mulcare, D. B.; Downing, L. E.; Smith, M. K.
1988-01-01
The rationale and mechanization of sensor fault tolerance based on analytical redundancy principles are described. The concept involves the substitution of software procedures, such as an observer algorithm, to supplant additional hardware components. The observer synthesizes values of sensor states in lieu of their direct measurement. Such information can then be used, for example, to determine which of two disagreeing sensors is more correct, thus enhancing sensor fault survivability. Here a stability augmentation system is used as an example application, with required modifications being made to a quadruplex digital flight control system. The impact on software structure and the resultant revalidation effort are illustrated as well. Also, the use of an observer algorithm for wind gust filtering of the angle-of-attack sensor signal is presented.
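The arbitration idea above (use the observer's synthesized state value to decide which of two disagreeing sensors is more correct) can be sketched as a small selection function. The function name, threshold, and averaging rule are illustrative assumptions, not the quadruplex flight control system described in the abstract.

```python
# Hedged sketch of analytical-redundancy sensor arbitration: when two
# redundant sensors disagree beyond a threshold, trust the reading that
# is closer to the observer's synthesized estimate. The observer itself
# (e.g. a Luenberger observer) is not modeled here.

def select_sensor(reading_a: float, reading_b: float,
                  observer_estimate: float,
                  disagreement_threshold: float = 0.5) -> float:
    """Return the sensor value judged more correct by analytical redundancy."""
    if abs(reading_a - reading_b) <= disagreement_threshold:
        # Sensors agree: use their average.
        return 0.5 * (reading_a + reading_b)
    # Sensors disagree: keep the one nearer the observer's estimate.
    if abs(reading_a - observer_estimate) <= abs(reading_b - observer_estimate):
        return reading_a
    return reading_b

print(select_sensor(10.0, 10.2, 10.1))  # agreement: average
print(select_sensor(10.0, 14.0, 10.3))  # disagreement: sensor A wins
```

In the abstract's stability-augmentation example the same synthesized state doubles as a filtered substitute, as with the wind-gust filtering of the angle-of-attack signal.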
Normality in Analytical Psychology
Myers, Steve
2013-01-01
Although C.G. Jung’s interest in normality wavered throughout his career, it was one of the areas he identified in later life as worthy of further research. He began his career using a definition of normality which would have been the target of Foucault’s criticism, had Foucault chosen to review Jung’s work. However, Jung then evolved his thinking to a standpoint that was more aligned to Foucault’s own. Thereafter, the post Jungian concept of normality has remained relatively undeveloped by comparison with psychoanalysis and mainstream psychology. Jung’s disjecta membra on the subject suggest that, in contemporary analytical psychology, too much focus is placed on the process of individuation to the neglect of applications that consider collective processes. Also, there is potential for useful research and development into the nature of conflict between individuals and societies, and how normal people typically develop in relation to the spectrum between individuation and collectivity. PMID:25379262
Analytic TOF PET reconstruction algorithm within DIRECT data partitioning framework
NASA Astrophysics Data System (ADS)
Matej, Samuel; Daube-Witherspoon, Margaret E.; Karp, Joel S.
2016-05-01
Iterative reconstruction algorithms are routinely used in clinical practice; however, analytic algorithms are relevant candidates for quantitative research studies due to their linear behavior. While iterative algorithms also benefit from the inclusion of accurate data and noise models, the widespread use of time-of-flight (TOF) scanners, with their reduced sensitivity to noise and data imperfections, makes analytic algorithms even more promising. In our previous work we developed a novel iterative reconstruction approach (DIRECT: direct image reconstruction for TOF) providing a convenient TOF data partitioning framework and leading to very efficient reconstructions. In this work we have expanded DIRECT to include an analytic TOF algorithm with confidence weighting incorporating models of both TOF and spatial resolution kernels. Feasibility studies using simulated and measured data demonstrate that analytic-DIRECT with appropriate resolution and regularization filters is able to provide matched bias versus variance performance to iterative TOF reconstruction with a matched resolution model.
Accurate molecular classification of cancer using simple rules
Wang, Xiaosheng; Gotoh, Osamu
2009-01-01
Background One intractable problem with using microarray data analysis for cancer classification is how to reduce the extremely high-dimensionality gene feature data to remove the effects of noise. Feature selection is often used to address this problem by selecting informative genes from among thousands or tens of thousands of genes. However, most of the existing methods of microarray-based cancer classification utilize too many genes to achieve accurate classification, which often hampers the interpretability of the models. For a better understanding of the classification results, it is desirable to develop simpler rule-based models with as few marker genes as possible. Methods We screened a small number of informative single genes and gene pairs on the basis of their depended degrees proposed in rough sets. Applying the decision rules induced by the selected genes or gene pairs, we constructed cancer classifiers. We tested the efficacy of the classifiers by leave-one-out cross-validation (LOOCV) of training sets and classification of independent test sets. Results We applied our methods to five cancerous gene expression datasets: leukemia (acute lymphoblastic leukemia [ALL] vs. acute myeloid leukemia [AML]), lung cancer, prostate cancer, breast cancer, and leukemia (ALL vs. mixed-lineage leukemia [MLL] vs. AML). Accurate classification outcomes were obtained by utilizing just one or two genes. Some genes that correlated closely with the pathogenesis of relevant cancers were identified. In terms of both classification performance and algorithm simplicity, our approach outperformed or at least matched existing methods. Conclusion In cancerous gene expression datasets, a small number of genes, even one or two if selected correctly, is capable of achieving an ideal cancer classification effect. This finding also means that very simple rules may perform well for cancerous class prediction. PMID:19874631
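A hedged illustration of a one-gene rule evaluated by leave-one-out cross-validation; the midpoint-threshold rule here is an illustrative stand-in for the rough-set dependency-degree selection the paper actually uses:

```python
# Sketch: classify samples from a single gene's expression by a threshold
# at the midpoint of the two class means, scored by LOOCV. This is a
# generic simple-rule baseline, not the paper's exact decision rules.

def loocv_accuracy(expr, labels):
    """expr: one gene's expression values; labels: 0/1 class per sample."""
    correct, n = 0, len(expr)
    for i in range(n):
        train = [(x, y) for j, (x, y) in enumerate(zip(expr, labels)) if j != i]
        m0 = [x for x, y in train if y == 0]
        m1 = [x for x, y in train if y == 1]
        mean0, mean1 = sum(m0) / len(m0), sum(m1) / len(m1)
        thr = (mean0 + mean1) / 2.0
        # Class 1 is "high expression" only if its training mean is higher.
        pred = int(expr[i] > thr) if mean1 > mean0 else int(expr[i] <= thr)
        correct += (pred == labels[i])
    return correct / n
```

On well-separated data a single gene suffices, mirroring the paper's finding that one or two genes can classify accurately.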
Efficient alignment-free DNA barcode analytics
Kuksa, Pavel; Pavlovic, Vladimir
2009-01-01
Background In this work we consider barcode DNA analysis problems and address them using alternative, alignment-free methods and representations which model sequences as collections of short sequence fragments (features). The methods use fixed-length representations (spectrum) for barcode sequences to measure similarities or dissimilarities between sequences coming from the same or different species. The spectrum-based representation not only allows for accurate and computationally efficient species classification, but also opens the possibility of accurate clustering analysis of putative species barcodes and identification of critical within-barcode loci distinguishing barcodes of different sample groups. Results New alignment-free methods provide highly accurate and fast DNA barcode-based identification and classification of species with substantial improvements in accuracy and speed over state-of-the-art barcode analysis methods. We evaluate our methods on problems of species classification and identification using barcodes, important and relevant analytical tasks in many practical applications (adverse species movement monitoring, sampling surveys for unknown or pathogenic species identification, biodiversity assessment, etc.). On several benchmark barcode datasets, including ACG, Astraptes, Hesperiidae, Fish larvae, and Birds of North America, the proposed alignment-free methods considerably improve prediction accuracy compared to prior results. We also observe significant running time improvements over the state-of-the-art methods. Conclusion Our results show that newly developed alignment-free methods for DNA barcoding can efficiently and with high accuracy identify specimens by examining only a few barcode features, resulting in increased scalability and interpretability of current computational approaches to barcoding. PMID:19900305
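The fixed-length spectrum representation can be sketched as a k-mer count vector compared by cosine similarity (k=4 is an illustrative choice, not necessarily the paper's):

```python
from collections import Counter
import math

# Sketch of alignment-free barcode comparison: each sequence becomes a
# fixed-length spectrum of k-mer counts; species identification is
# nearest-neighbor in spectrum space. Reference barcodes are made up.

def spectrum(seq, k=4):
    """Count all overlapping k-mers of a sequence."""
    return Counter(seq[i:i + k] for i in range(len(seq) - k + 1))

def cosine(s1, s2):
    dot = sum(s1[w] * s2[w] for w in s1 if w in s2)
    n1 = math.sqrt(sum(v * v for v in s1.values()))
    n2 = math.sqrt(sum(v * v for v in s2.values()))
    return dot / (n1 * n2)

def classify(query, references):
    """references: dict mapping species name -> barcode sequence."""
    q = spectrum(query)
    return max(references, key=lambda sp: cosine(q, spectrum(references[sp])))
```

Because no alignment is computed, comparison cost is linear in sequence length, which is the source of the speedups the abstract reports.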
Activated Corrosion Product Analysis. Analytical Approach.
Golubov, Stanislav I; Busby, Jeremy T; Stoller, Roger E
2010-01-01
The presence of activated corrosion products (ACPs) in a water cooling system is a key factor in the licensing of ITER and affects nuclear classification, which governs design and operation. The objective of this study is to develop a method to accurately estimate radionuclide concentrations during ITER operation in support of nuclear classification. A brief overview of the PACTITER numerical code, which is currently used for ACP estimation, is presented. An alternative analytical approach for calculation of ACPs, which can also be used for validation of existing numerical codes, including PACTITER, has been proposed. A continuity equation describing the kinetics of accumulation of radioactive isotopes in a water cooling system in the form of a closed ring has been formulated, taking into account the following processes: production of radioactive elements and their decay, filtration, and ACP accumulation in the filter system. Additional work is needed to more accurately assess the ACP inventory in the cooling water system, including more accurate simulation of the Tokamak cooling water system (TCWS) operating cycle and consideration of material corrosion, release, and deposition rates.
Accurate Thermal Stresses for Beams: Normal Stress
NASA Technical Reports Server (NTRS)
Johnson, Theodore F.; Pilkey, Walter D.
2003-01-01
Formulations for a general theory of thermoelasticity to generate accurate thermal stresses for structural members of aeronautical vehicles were developed in 1954 by Boley. The formulation also provides three normal stresses and a shear stress along the entire length of the beam. The Poisson effect of the lateral and transverse normal stresses on a thermally loaded beam is taken into account in this theory by employing an Airy stress function. The Airy stress function enables the reduction of the three-dimensional thermal stress problem to a two-dimensional one. Numerical results from the general theory of thermoelasticity are compared to those obtained from strength of materials. It is concluded that the theory of thermoelasticity for prismatic beams proposed in this paper can be used instead of strength of materials when precise stress results are desired.
Highly accurate articulated coordinate measuring machine
Bieg, Lothar F.; Jokiel, Jr., Bernhard; Ensz, Mark T.; Watson, Robert D.
2003-12-30
Disclosed is a highly accurate articulated coordinate measuring machine, comprising a revolute joint, comprising a circular encoder wheel, having an axis of rotation; a plurality of marks disposed around at least a portion of the circumference of the encoder wheel; bearing means for supporting the encoder wheel, while permitting free rotation of the encoder wheel about the wheel's axis of rotation; and a sensor, rigidly attached to the bearing means, for detecting the motion of at least some of the marks as the encoder wheel rotates; a probe arm, having a proximal end rigidly attached to the encoder wheel, and having a distal end with a probe tip attached thereto; and coordinate processing means, operatively connected to the sensor, for converting the output of the sensor into a set of cylindrical coordinates representing the position of the probe tip relative to a reference cylindrical coordinate system.
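For illustration, the conversion from an encoder reading and probe-arm geometry to cylindrical coordinates might look like the following (the mark count and arm geometry are made-up example values, not the patented design):

```python
import math

# Sketch of the coordinate processing step: a revolute-joint encoder mark
# index and a fixed probe-arm length yield the probe tip position in
# cylindrical coordinates (r, theta, z) relative to the joint axis.

def probe_tip_cylindrical(mark_index, marks_per_rev, arm_length, z_offset=0.0):
    """r is the arm length, theta comes from the encoder mark index,
    z from any fixed axial offset of the probe tip."""
    theta = 2.0 * math.pi * mark_index / marks_per_rev
    return arm_length, theta, z_offset

def to_cartesian(r, theta, z):
    """Convert cylindrical coordinates to Cartesian for downstream use."""
    return r * math.cos(theta), r * math.sin(theta), z
```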
Practical aspects of spatially high accurate methods
NASA Technical Reports Server (NTRS)
Godfrey, Andrew G.; Mitchell, Curtis R.; Walters, Robert W.
1992-01-01
The computational qualities of high order spatially accurate methods for the finite volume solution of the Euler equations are presented. Two dimensional essentially non-oscillatory (ENO), k-exact, and 'dimension by dimension' ENO reconstruction operators are discussed and compared in terms of reconstruction and solution accuracy, computational cost and oscillatory behavior in supersonic flows with shocks. Inherent steady state convergence difficulties are demonstrated for adaptive stencil algorithms. An exact solution to the heat equation is used to determine reconstruction error, and the computational intensity is reflected in operation counts. Standard MUSCL differencing is included for comparison. Numerical experiments presented include the Ringleb flow for numerical accuracy and a shock reflection problem. A vortex-shock interaction demonstrates the ability of the ENO scheme to excel in simulating unsteady high-frequency flow physics.
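As a point of reference for the standard MUSCL differencing mentioned above, a minimal second-order reconstruction with a minmod limiter can be sketched as follows (a textbook baseline, not the ENO or k-exact operators studied in the paper):

```python
# Sketch of MUSCL reconstruction on a uniform 1-D grid: cell-average data
# are extrapolated to cell faces with slopes limited by minmod so the
# reconstruction stays non-oscillatory near shocks.

def minmod(a, b):
    """Zero across extrema; otherwise the smaller-magnitude slope."""
    if a * b <= 0.0:
        return 0.0
    return a if abs(a) < abs(b) else b

def muscl_faces(u):
    """Return left/right states at the interior cell faces i+1/2."""
    left, right = [], []
    for i in range(1, len(u) - 2):
        s_i = minmod(u[i] - u[i - 1], u[i + 1] - u[i])
        s_ip1 = minmod(u[i + 1] - u[i], u[i + 2] - u[i + 1])
        left.append(u[i] + 0.5 * s_i)         # extrapolated from cell i
        right.append(u[i + 1] - 0.5 * s_ip1)  # extrapolated from cell i+1
    return left, right
```

For smooth linear data the two face states coincide (second-order exactness); across a step the limiter clips the slopes, so no new extrema appear.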
The thermodynamic cost of accurate sensory adaptation
NASA Astrophysics Data System (ADS)
Tu, Yuhai
2015-03-01
Living organisms need to obtain and process environmental information accurately in order to make decisions critical for their survival. Much progress has been made in identifying key components responsible for various biological functions; however, major challenges remain in understanding system-level behaviors from molecular-level knowledge of biology and in unraveling possible physical principles for the underlying biochemical circuits. In this talk, we will present some recent work on understanding the chemical sensory system of E. coli by combining theoretical approaches with quantitative experiments. We focus on addressing how cells process chemical information and adapt to a varying environment, and what the thermodynamic limits of key regulatory functions, such as adaptation, are.
Accurate numerical solutions of conservative nonlinear oscillators
NASA Astrophysics Data System (ADS)
Khan, Najeeb Alam; Khan, Nasir Uddin; Khan, Nadeem Alam
2014-12-01
The objective of this paper is to present an investigation analyzing the vibration of a conservative nonlinear oscillator of the form u'' + lambda u + u^(2n-1) + (1 + epsilon^2 u^(4m))^(1/2) = 0 for any arbitrary powers n and m. The method converts the differential equation to sets of algebraic equations, which are solved numerically. Results are presented for three different cases: a higher-order Duffing equation, an equation with an irrational restoring force, and a plasma physics equation. It is also found that the method is valid for any arbitrary order of n and m. Comparisons with results found in the literature show that the method gives accurate results.
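For a concrete reference point, a simplified special case of the oscillator family above (a Duffing-type equation u'' + u + u^3 = 0, i.e. lambda = 1, n = 2, epsilon = 0) can be integrated with classical RK4; this is a generic numerical check, not the algebraic-equation method of the paper:

```python
# RK4 integration of the conservative Duffing-type oscillator
# u'' + u + u^3 = 0, written as the first-order system (u, v).
# Because the system is conservative, the energy
# E = v^2/2 + u^2/2 + u^4/4 should stay constant along the solution.

def rk4_step(f, y, t, h):
    k1 = f(t, y)
    k2 = f(t + h / 2, [yi + h / 2 * ki for yi, ki in zip(y, k1)])
    k3 = f(t + h / 2, [yi + h / 2 * ki for yi, ki in zip(y, k2)])
    k4 = f(t + h, [yi + h * ki for yi, ki in zip(y, k3)])
    return [yi + h / 6 * (a + 2 * b + 2 * c + d)
            for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]

def duffing(t, y):
    u, v = y
    return [v, -u - u ** 3]

def simulate(u0=1.0, h=0.01, steps=1000):
    y, t = [u0, 0.0], 0.0
    for _ in range(steps):
        y = rk4_step(duffing, y, t, h)
        t += h
    return y
```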
Accurate Telescope Mount Positioning with MEMS Accelerometers
NASA Astrophysics Data System (ADS)
Mészáros, L.; Jaskó, A.; Pál, A.; Csépány, G.
2014-08-01
This paper describes the advantages and challenges of applying microelectromechanical accelerometer systems (MEMS accelerometers) in order to attain precise, accurate, and stateless positioning of telescope mounts. This provides a method completely independent of other forms of electronic, optical, mechanical or magnetic feedback or real-time astrometry. Our goal is to reach the subarcminute range, which is considerably smaller than the field of view of conventional imaging telescope systems. Here we present how this subarcminute accuracy can be achieved with very cheap MEMS sensors, and we also detail how our procedures can be extended in order to attain even finer measurements. In addition, our paper discusses how a complete system design can be implemented to form part of a telescope control system.
Accurate metacognition for visual sensory memory representations.
Vandenbroucke, Annelinde R E; Sligte, Ilja G; Barrett, Adam B; Seth, Anil K; Fahrenfort, Johannes J; Lamme, Victor A F
2014-04-01
The capacity to attend to multiple objects in the visual field is limited. However, introspectively, people feel that they see the whole visual world at once. Some scholars suggest that this introspective feeling is based on short-lived sensory memory representations, whereas others argue that the feeling of seeing more than can be attended to is illusory. Here, we investigated this phenomenon by combining objective memory performance with subjective confidence ratings during a change-detection task. This allowed us to compute a measure of metacognition--the degree of knowledge that subjects have about the correctness of their decisions--for different stages of memory. We show that subjects store more objects in sensory memory than they can attend to but, at the same time, have similar metacognition for sensory memory and working memory representations. This suggests that these subjective impressions are not an illusion but accurate reflections of the richness of visual perception. PMID:24549293
Apparatus for accurately measuring high temperatures
Smith, Douglas D.
1985-01-01
The present invention is a thermometer used for measuring furnace temperatures in the range of about 1800° to 2700°C. The thermometer comprises a broadband multicolor thermal radiation sensor positioned to be in optical alignment with the end of a blackbody sight tube extending into the furnace. A valve-shutter arrangement is positioned between the radiation sensor and the sight tube, and a chamber for containing a charge of high pressure gas is positioned between the valve-shutter arrangement and the radiation sensor. A momentary opening of the valve-shutter arrangement allows a pulse of the high pressure gas to purge the sight tube of air-borne thermal radiation contaminants, which permits the radiation sensor to accurately measure the thermal radiation emanating from the end of the sight tube.
Toward Accurate and Quantitative Comparative Metagenomics.
Nayfach, Stephen; Pollard, Katherine S
2016-08-25
Shotgun metagenomics and computational analysis are used to compare the taxonomic and functional profiles of microbial communities. Leveraging this approach to understand roles of microbes in human biology and other environments requires quantitative data summaries whose values are comparable across samples and studies. Comparability is currently hampered by the use of abundance statistics that do not estimate a meaningful parameter of the microbial community and biases introduced by experimental protocols and data-cleaning approaches. Addressing these challenges, along with improving study design, data access, metadata standardization, and analysis tools, will enable accurate comparative metagenomics. We envision a future in which microbiome studies are replicable and new metagenomes are easily and rapidly integrated with existing data. Only then can the potential of metagenomics for predictive ecological modeling, well-powered association studies, and effective microbiome medicine be fully realized. PMID:27565341
The importance of accurate atmospheric modeling
NASA Astrophysics Data System (ADS)
Payne, Dylan; Schroeder, John; Liang, Pang
2014-11-01
This paper will focus on the effect of atmospheric conditions on EO sensor performance using computer models. We have shown the importance of accurately modeling atmospheric effects for predicting the performance of an EO sensor. A simple example demonstrates how real conditions at several sites in China significantly impact image correction, hyperspectral imaging, and remote sensing. The current state-of-the-art model for computing atmospheric transmission and radiance is MODTRAN® 5, developed by the US Air Force Research Laboratory and Spectral Science, Inc. Research by the US Air Force, Navy and Army resulted in the public release of LOWTRAN 2 in the early 1970's. Subsequent releases of LOWTRAN and MODTRAN® have continued until the present. The paper will demonstrate the importance of using validated models and locally measured meteorological, atmospheric and aerosol conditions to accurately simulate the atmospheric transmission and radiance. Frequently, default conditions are used, which can produce errors of as much as 75% in these values. This can have a significant impact on remote sensing applications.
The high cost of accurate knowledge.
Sutcliffe, Kathleen M; Weber, Klaus
2003-05-01
Many business thinkers believe it's the role of senior managers to scan the external environment to monitor contingencies and constraints, and to use that precise knowledge to modify the company's strategy and design. As these thinkers see it, managers need accurate and abundant information to carry out that role. According to that logic, it makes sense to invest heavily in systems for collecting and organizing competitive information. Another school of pundits contends that, since today's complex information often isn't precise anyway, it's not worth going overboard with such investments. In other words, it's not the accuracy and abundance of information that should matter most to top executives--rather, it's how that information is interpreted. After all, the role of senior managers isn't just to make decisions; it's to set direction and motivate others in the face of ambiguities and conflicting demands. Top executives must interpret information and communicate those interpretations--they must manage meaning more than they must manage information. So which of these competing views is the right one? Research conducted by academics Sutcliffe and Weber found that how accurate senior executives are about their competitive environments is indeed less important for strategy and corresponding organizational changes than the way in which they interpret information about their environments. Investments in shaping those interpretations, therefore, may create a more durable competitive advantage than investments in obtaining and organizing more information. And what kinds of interpretations are most closely linked with high performance? Their research suggests that high performers respond positively to opportunities, yet they aren't overconfident in their abilities to take advantage of those opportunities. PMID:12747164
Accurate Weather Forecasting for Radio Astronomy
NASA Astrophysics Data System (ADS)
Maddalena, Ronald J.
2010-01-01
The NRAO Green Bank Telescope routinely observes at wavelengths from 3 mm to 1 m. As with all mm-wave telescopes, observing conditions depend upon the variable atmospheric water content. The site provides over 100 days/yr when opacities are low enough for good observing at 3 mm, but winds on the open-air structure reduce the time suitable for 3-mm observing where pointing is critical. Thus, to maximize productivity the observing wavelength needs to match weather conditions. For 6 years the telescope has used a dynamic scheduling system (recently upgraded; www.gb.nrao.edu/DSS) that requires accurate multi-day forecasts for winds and opacities. Since opacity forecasts are not provided by the National Weather Service (NWS), I have developed an automated system that takes available forecasts, derives forecasted opacities, and deploys the results on the web in user-friendly graphical overviews (www.gb.nrao.edu/rmaddale/Weather). The system relies on the "North American Mesoscale" models, which are updated by the NWS every 6 hrs, have a 12 km horizontal resolution, 1 hr temporal resolution, run to 84 hrs, and have 60 vertical layers that extend to 20 km. Each forecast consists of a time series of ground conditions, cloud coverage, etc., and, most importantly, temperature, pressure, and humidity as a function of height. I use Liebe's MWP model (Radio Science, 20, 1069, 1985) to determine the absorption in each layer for each hour for 30 observing wavelengths. Radiative transfer provides, for each hour and wavelength, the total opacity and the radio brightness of the atmosphere, which contributes substantially at some wavelengths to Tsys and the observational noise. Comparisons of measured and forecasted Tsys at 22.2 and 44 GHz imply that the forecasted opacities are good to about 0.01 nepers, which is sufficient for forecasting and accurate calibration. Reliability is high out to 2 days and degrades slowly for longer-range forecasts.
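The radiative-transfer step can be illustrated with a single-slab approximation, where a line-of-sight opacity yields both the atmosphere's brightness contribution to Tsys and the attenuation of the source signal (the mean atmospheric temperature and opacities here are illustrative, not GBT calibration values):

```python
import math

# Single-slab sketch: an isothermal atmosphere of mean temperature t_atm
# and opacity tau (nepers) emits T_sky = t_atm * (1 - exp(-tau)) and
# transmits a fraction exp(-tau) of the astronomical signal.

def sky_temperature(tau_zenith, airmass=1.0, t_atm=270.0):
    """Brightness temperature (K) of the atmosphere along the line of
    sight; tau scales with airmass away from the zenith."""
    tau = tau_zenith * airmass
    return t_atm * (1.0 - math.exp(-tau))

def attenuation(tau_zenith, airmass=1.0):
    """Fraction of the source signal transmitted through the atmosphere."""
    return math.exp(-tau_zenith * airmass)
```

At a typical 3-mm-band opacity of 0.1 nepers the slab contributes roughly 26 K to Tsys, which is why opacity forecasts drive the dynamic schedule.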
Extremely accurate sequential verification of RELAP5-3D
Mesina, George L.; Aumiller, David L.; Buschman, Francis X.
2015-11-19
Large computer programs like RELAP5-3D solve complex systems of governing, closure and special process equations to model the underlying physics of nuclear power plants. Further, these programs incorporate many other features for physics, input, output, data management, user-interaction, and post-processing. For software quality assurance, the code must be verified and validated before being released to users. For RELAP5-3D, verification and validation are restricted to nuclear power plant applications. Verification means ensuring that the program is built right by checking that it meets its design specifications, comparing coding to algorithms and equations and comparing calculations against analytical solutions and method of manufactured solutions. Sequential verification performs these comparisons initially, but thereafter only compares code calculations between consecutive code versions to demonstrate that no unintended changes have been introduced. Recently, an automated, highly accurate sequential verification method has been developed for RELAP5-3D. The method also tests that no unintended consequences result from code development in the following code capabilities: repeating a timestep advancement, continuing a run from a restart file, multiple cases in a single code execution, and modes of coupled/uncoupled operation. In conclusion, mathematical analyses of the adequacy of the checks used in the comparisons are provided.
AUTOMATED, HIGHLY ACCURATE VERIFICATION OF RELAP5-3D
George L Mesina; David Aumiller; Francis Buschman
2014-07-01
Computer programs that analyze light water reactor safety solve complex systems of governing, closure and special process equations to model the underlying physics. In addition, these programs incorporate many other features and are quite large. RELAP5-3D[1] has over 300,000 lines of coding for physics, input, output, data management, user-interaction, and post-processing. For software quality assurance, the code must be verified and validated before being released to users. Verification ensures that a program is built right by checking that it meets its design specifications. Recently, there has been increased emphasis on the development of automated verification processes that compare coding against its documented algorithms and equations and compare its calculations against analytical solutions and the method of manufactured solutions[2]. For the first time, the ability exists to ensure that the data transfer operations associated with timestep advancement/repeating and writing/reading a solution to a file have no unintended consequences. To ensure that the code performs as intended over its extensive list of applications, an automated and highly accurate verification method has been modified and applied to RELAP5-3D. Furthermore, mathematical analysis of the adequacy of the checks used in the comparisons is provided.
Comparison of finite-difference and analytic microwave calculation methods
Friedlander, F.I.; Jackson, H.W.; Barmatz, M.; Wagner, P.
1996-12-31
Normal modes and power absorption distributions in microwave cavities containing lossy dielectric samples were calculated for problems of interest in materials processing. The calculations were performed both using a commercially available finite-difference electromagnetic solver and by numerical evaluation of exact analytic expressions. Results obtained by the two methods applied to identical physical situations were compared. The studies validate the accuracy of the finite-difference electromagnetic solver. Relative advantages of the analytic and finite-difference methods are discussed.
Development of analytical orbit propagation technique with drag
NASA Technical Reports Server (NTRS)
1979-01-01
Two orbit computation methods were used: (1) a numerical method, in which the satellite differential equations were solved in a step-by-step manner using a mathematical algorithm taken from numerical analysis; and (2) an analytical method, in which the solution was expressed by explicit functions of the independent variable. Analytical drag modules, a tesseral terms initialization module, a second order and long period terms module, and verification testing of the ASOP program were also considered.
MAGNETARS AS HIGHLY MAGNETIZED QUARK STARS: AN ANALYTICAL TREATMENT
Orsaria, M.; Ranea-Sandoval, Ignacio F.; Vucetich, H.
2011-06-10
We present an analytical model of a magnetar as a high-density magnetized quark bag. The effect of strong magnetic fields (B > 5 × 10^16 G) on the equation of state is considered. An analytic expression for the mass-radius relationship is found from the energy variational principle in general relativity. Our results are compared with observational evidence of possible quark and/or hybrid stars.
Analytical Solutions for Sequentially Reactive Transport with Different Retardation Factors
Sun, Y; Buscheck, T A; Mansoor, K; Lu, X
2001-08-01
Integral transforms have been widely used for deriving analytical solutions for solute transport systems. Often, analytical solutions can only be written in closed form in frequency domains, and numerical inverse transforms have to be employed to obtain semi-analytical solutions in the time domain. For this reason, previously published closed-form solutions are restricted either to a small number of species or to the same retardation assumption. In this paper, we applied the solution scheme proposed by Bauer et al. in the time domain. Using available analytical solutions of single-species transport with first-order decay, without coupling to the parent species concentration, as fundamental solutions, a daughter species concentration can be expressed as a linear function of those fundamental solutions. The implementation of the solution scheme is straightforward, and exact analytical solutions are derived for one- and three-dimensional transport systems.
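With transport removed, the superposition structure reduces to the classical two-member decay chain, where the daughter is an explicit linear combination of the two fundamental solutions exp(-k1 t) and exp(-k2 t); a minimal sketch under that simplification (rate constants are illustrative):

```python
import math

# Batch (no-transport) illustration of the superposition idea: the
# daughter of a first-order chain 1 -> 2, initially absent, is a weighted
# difference of the two single-species decay solutions (Bateman form).
# The cited scheme extends this structure to advective-dispersive
# transport with different retardation factors.

def parent(c0, k1, t):
    """Parent concentration: the single-species fundamental solution."""
    return c0 * math.exp(-k1 * t)

def daughter(c0, k1, k2, t):
    """Daughter concentration as a linear combination of exp(-k1 t)
    and exp(-k2 t); requires k1 != k2."""
    w = k1 * c0 / (k2 - k1)
    return w * (math.exp(-k1 * t) - math.exp(-k2 * t))
```

The combination can be checked against the governing ODE dc2/dt = k1*c1 - k2*c2 term by term.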
Analytic structure of one-loop coefficients
NASA Astrophysics Data System (ADS)
Feng, Bo; Wang, Honghui
2013-05-01
By the unitarity cut method, analytic expressions of one-loop coefficients have been given in spinor forms. In this paper, we present one-loop coefficients of various bases in Lorentz-invariant contraction forms of external momenta. Using these forms, the analytic structure of these coefficients becomes manifest. Firstly, coefficients of bases contain only second-type singularities while the first-type singularities are included inside scalar bases. Secondly, the highest degree of each singularity is correlated with the degree of the inner momentum in the numerator. Thirdly, the same singularities will appear in different coefficients, thus our explicit results could be used to provide a clear physical picture under various limits (such as soft or collinear limits) when combining contributions from all bases.
NASA Astrophysics Data System (ADS)
Zhao, Xiao-mei; Xie, Dong-fan; Li, Qi
2015-02-01
With the development of intelligent transport systems, advanced information feedback strategies have been developed to reduce traffic congestion and enhance capacity. Previous strategies provide accurate information to travelers, yet our simulation results show that accurate information brings negative effects, especially when it is delayed: travelers prefer the route reported to be in the best condition, but delayed information reflects past rather than current traffic conditions. Travelers therefore make wrong routing decisions, decreasing the capacity, increasing oscillations, and driving the system away from equilibrium. To avoid these negative effects, bounded rationality is taken into account by introducing a boundedly rational threshold BR: when the reported difference between two routes is less than BR, the routes are chosen with equal probability. Bounded rationality is found to improve efficiency in terms of capacity, oscillation, and the gap from the system equilibrium.
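The herding effect of delayed information, and how an indifference threshold damps it, can be sketched with a toy two-route model. The load rule (travel time proportional to load), the delay handling, and the threshold semantics below are illustrative assumptions, not the paper's model.

```python
import random

def simulate(n_travelers=1000, steps=200, br=0.0, delay=5, seed=1):
    """Toy two-route system. Each step, every traveler sees route loads
    that are `delay` steps old (travel time ~ load) and herds onto the
    route reported to be faster -- unless the reported difference lies
    within the boundedly rational threshold `br`, in which case each
    traveler picks a route at random. Returns the mean deviation from
    the 50/50 equilibrium split over the second half of the run."""
    random.seed(seed)
    history = [n_travelers // 2] * (delay + 1)  # past loads on route A
    deviations = []
    for step in range(steps):
        reported_a = history[-(delay + 1)]
        reported_b = n_travelers - reported_a
        diff = reported_a - reported_b
        if abs(diff) <= br:
            # indifferent: each traveler chooses a route at random
            load_a = sum(random.random() < 0.5 for _ in range(n_travelers))
        else:
            # everyone herds onto the route reported to be less loaded
            load_a = n_travelers if diff < 0 else 0
        history.append(load_a)
        if step >= steps // 2:
            deviations.append(abs(load_a - n_travelers / 2))
    return sum(deviations) / len(deviations)
```

With `br = 0` the delayed feedback drives persistent all-or-nothing oscillations; a threshold wider than the reported differences restores a near-equilibrium split.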
Analytical orbit predictions with air drag using K-S uniformly regular canonical elements
NASA Astrophysics Data System (ADS)
Xavier James Raj, M.; Sharma, R. K.
The K-S uniformly regular canonical elements are constant for unperturbed motion, and the equations permit the uniform formulation of the basic laws of elliptic, parabolic and hyperbolic motion (Stiefel and Scheifele, 1971, p. 250); they are found to provide accurate short- and long-term orbit predictions numerically, with Earth's zonal harmonic terms J2 to J36 (Sharma and James Raj, 1988). Recently these equations were utilized by the authors to generate analytical solutions for short-term orbit predictions with respect to Earth's zonal harmonic terms J2, J3 and J4 (James Raj and Sharma, 2003). In this paper we have extended the K-S uniform regular canonical equations of motion to include the air drag force and have analytically integrated the resulting equations of motion by a series expansion method, assuming the atmosphere to be spherically symmetric with constant density scale height. A non-singular solution up to third-order terms in eccentricity is obtained. Due to symmetry in the equations of motion, only two of the nine equations need to be solved analytically to compute the state vector and the change in energy at the end of each revolution. For comparison purposes these equations are integrated numerically with a fixed-step fourth-order Runge-Kutta-Gill method with a small step size of half a degree in eccentric anomaly. Numerical experimentation with the analytical solution for a wide range of perigee altitudes, eccentricities and orbital inclinations has been carried out for up to 100 revolutions. The results obtained from the analytical expressions match the numerically integrated values quite well, and show improvement over the results obtained from the third-order theories of King-Hele, Cook and Walker (1960) and Sharma (1992), which were generated with the same atmospheric model.
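The fixed-step reference integration mentioned above can be sketched with a classical fourth-order Runge-Kutta integrator (the Runge-Kutta-Gill variant the paper uses differs only in its internal coefficients). The drag-free planar Kepler problem below is a stand-in to check energy conservation over one revolution with roughly half-degree steps; it is not the paper's full force model.

```python
import math

def rk4_step(f, y, t, h):
    """One classical fourth-order Runge-Kutta step."""
    k1 = f(t, y)
    k2 = f(t + h / 2, [yi + h / 2 * ki for yi, ki in zip(y, k1)])
    k3 = f(t + h / 2, [yi + h / 2 * ki for yi, ki in zip(y, k2)])
    k4 = f(t + h, [yi + h * ki for yi, ki in zip(y, k3)])
    return [yi + h / 6 * (a + 2 * b + 2 * c + d)
            for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]

def kepler(t, s, mu=1.0):
    """Planar drag-free Kepler problem, state s = [x, vx, y, vy]."""
    x, vx, y, vy = s
    r3 = (x * x + y * y) ** 1.5
    return [vx, -mu * x / r3, vy, -mu * y / r3]

def energy(s, mu=1.0):
    x, vx, y, vy = s
    return 0.5 * (vx * vx + vy * vy) - mu / math.hypot(x, y)

# Integrate one circular orbit (r = 1, period 2*pi) with ~half-degree steps.
state0 = [1.0, 0.0, 0.0, 1.0]
state, n = state0[:], 720
h = 2 * math.pi / n
for i in range(n):
    state = rk4_step(kepler, state, i * h, h)
drift = abs(energy(state) - energy(state0))
```

The energy drift after a full revolution serves the same role as the paper's per-revolution change-in-energy check on the analytical solution.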
NASA Technical Reports Server (NTRS)
Graves, R. A., Jr.
1975-01-01
The previously obtained second-order-accurate partial implicitization numerical technique used in the solution of fluid dynamic problems was modified with little complication to achieve fourth-order accuracy. The von Neumann stability analysis demonstrated the unconditional linear stability of the technique. The order of the truncation error was deduced from the Taylor series expansions of the linearized difference equations and was verified by numerical solutions to Burgers' equation. For comparison, results were also obtained for Burgers' equation using a second-order-accurate partial-implicitization scheme, as well as the fourth-order scheme of Kreiss.
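The order-verification idea, deducing a truncation order from Taylor expansions and then confirming it by grid refinement, can be sketched on simple difference stencils. The second- and fourth-order stencils and the test function below are illustrative; they are not the paper's partial-implicitization scheme.

```python
import math

def d1_2nd(f, x, h):
    """Second-order central difference for f'(x); error ~ h^2."""
    return (f(x + h) - f(x - h)) / (2 * h)

def d1_4th(f, x, h):
    """Fourth-order five-point central difference for f'(x); error ~ h^4."""
    return (-f(x + 2 * h) + 8 * f(x + h) - 8 * f(x - h) + f(x - 2 * h)) / (12 * h)

def observed_order(scheme, f, dfdx, x=0.7, h=0.1):
    """Empirical convergence order from the errors at step sizes h and h/2:
    halving h should shrink the error by 2^p for a p-th order scheme."""
    e1 = abs(scheme(f, x, h) - dfdx(x))
    e2 = abs(scheme(f, x, h / 2) - dfdx(x))
    return math.log(e1 / e2, 2)
```

The same refinement test applied to a full solver of Burgers' equation recovers the scheme's global order, which is how the deduced truncation order is verified numerically.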
Analytic vortex dynamics in an annular Bose-Einstein condensate
NASA Astrophysics Data System (ADS)
Toikka, L. A.; Suominen, K.-A.
2016-05-01
We consider analytically the dynamics of an arbitrary number and configuration of vortices in an annular Bose-Einstein condensate obtaining expressions for the free energy and vortex precession rates to logarithmic accuracy. We also obtain lower bounds for the lifetime of a single vortex in the annulus. Our results enable a closed-form analytic treatment of vortex-vortex interactions in the annulus that is exact in the incompressible limit. The incompressible hydrodynamics that is developed here paves the way for more general analytical treatments of vortex dynamics in non-simply-connected geometries.
Hanford transuranic analytical capability
McVey, C.B.
1995-02-24
With the current DOE focus on ER/WM programs, an increase in the quantity of waste samples that require detailed analysis is forecast. One of the prime areas of growth is the demand for DOE environmental protocol analyses of TRU waste samples. Currently there is no laboratory capacity to support analysis of TRU waste samples in excess of 200 nCi/gm. This study recommends that an interim solution be undertaken to provide these services. By adding two glove boxes in room 11A of the 222-S facility, the interim waste analytical needs can be met for a period of four to five years, or until a front-end facility is erected at or near the 222-S facility. The yearly average is projected to be approximately 600 samples; this figure has changed significantly due to budget changes and has been downgraded from 10,000 samples to the 600 level. Until these budget and sample projection changes become firmer, a long-term option is not recommended at this time. A revision to this document is recommended by March 1996 to review the long-term option and sample projections.
Analytics for Metabolic Engineering
Petzold, Christopher J.; Chan, Leanne Jade G.; Nhan, Melissa; Adams, Paul D.
2015-01-01
Realizing the promise of metabolic engineering has been slowed by challenges related to moving beyond proof-of-concept examples to robust and economically viable systems. Key to advancing metabolic engineering beyond trial-and-error research is access to parts with well-defined performance metrics that can be readily applied in vastly different contexts with predictable effects. As the field now stands, research depends greatly on analytical tools that assay target molecules, transcripts, proteins, and metabolites across different hosts and pathways. Screening technologies yield specific information for many thousands of strain variants, while deep omics analysis provides a systems-level view of the cell factory. Efforts focused on a combination of these analyses yield quantitative information of dynamic processes between parts and the host chassis that drive the next engineering steps. Overall, the data generated from these types of assays aid better decision-making at the design and strain construction stages to speed progress in metabolic engineering research. PMID:26442249
The SILAC Fly Allows for Accurate Protein Quantification in Vivo*
Sury, Matthias D.; Chen, Jia-Xuan; Selbach, Matthias
2010-01-01
Stable isotope labeling by amino acids in cell culture (SILAC) is widely used to quantify protein abundance in tissue culture cells. Until now, the only multicellular organism completely labeled at the amino acid level was the laboratory mouse. The fruit fly Drosophila melanogaster is one of the most widely used small animal models in biology. Here, we show that feeding flies with SILAC-labeled yeast leads to almost complete labeling in the first filial generation. We used these “SILAC flies” to investigate sexual dimorphism of protein abundance in D. melanogaster. Quantitative proteome comparison of adult male and female flies revealed distinct biological processes specific for each sex. Using a tudor mutant that is defective for germ cell generation allowed us to differentiate between sex-specific protein expression in the germ line and somatic tissue. We identified many proteins with known sex-specific expression bias. In addition, several new proteins with a potential role in sexual dimorphism were identified. Collectively, our data show that the SILAC fly can be used to accurately quantify protein abundance in vivo. The approach is simple, fast, and cost-effective, making SILAC flies an attractive model system for the emerging field of in vivo quantitative proteomics. PMID:20525996
Cates, Joshua W; Vinke, Ruud; Levin, Craig S
2015-07-01
Excellent timing resolution is required to enhance the signal-to-noise ratio (SNR) gain available from the incorporation of time-of-flight (ToF) information in image reconstruction for positron emission tomography (PET). As the detector's timing resolution improves, so does SNR, reconstructed image quality, and accuracy. This directly impacts the challenging detection and quantification tasks in the clinic. The recognition of these benefits has spurred efforts within the molecular imaging community to determine to what extent the timing resolution of scintillation detectors can be improved and develop near-term solutions for advancing ToF-PET. Presented in this work, is a method for calculating the Cramér-Rao lower bound (CRLB) on timing resolution for scintillation detectors with long crystal elements, where the influence of the variation in optical path length of scintillation light on achievable timing resolution is non-negligible. The presented formalism incorporates an accurate, analytical probability density function (PDF) of optical transit time within the crystal to obtain a purely mathematical expression of the CRLB with high-aspect-ratio (HAR) scintillation detectors. This approach enables the statistical limit on timing resolution performance to be analytically expressed for clinically-relevant PET scintillation detectors without requiring Monte Carlo simulation-generated photon transport time distributions. The analytically calculated optical transport PDF was compared with detailed light transport simulations, and excellent agreement was found between the two. The coincidence timing resolution (CTR) between two 3 × 3 × 20 mm(3) LYSO:Ce crystals coupled to analogue SiPMs was experimentally measured to be 162 ± 1 ps FWHM, approaching the analytically calculated lower bound within 6.5%. PMID:26083559
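The CRLB machinery can be sketched for a location parameter: the bound on the timing variance from N independently detected photons is 1/(N·I), where I is the Fisher information of the photon arrival-time PDF. The Gaussian stand-in below, for which I = 1/σ² exactly, merely validates the numerical integral; the paper instead uses an analytical optical-transport PDF for high-aspect-ratio crystals.

```python
import math

def fisher_information(pdf, t_lo, t_hi, n=20000, eps=1e-6):
    """Location-parameter Fisher information I = integral of f'(t)^2 / f(t),
    computed by the midpoint rule with a central-difference derivative."""
    h = (t_hi - t_lo) / n
    total = 0.0
    for i in range(n):
        t = t_lo + (i + 0.5) * h
        f = pdf(t)
        if f < 1e-12:  # skip the numerically empty tails
            continue
        df = (pdf(t + eps) - pdf(t - eps)) / (2.0 * eps)
        total += df * df / f * h
    return total

def gauss(t, mu=5.0, sigma=0.1):
    """Gaussian stand-in for the single-photon time response (ns);
    the exact answer for this density is I = 1/sigma^2."""
    return math.exp(-0.5 * ((t - mu) / sigma) ** 2) / (sigma * math.sqrt(2.0 * math.pi))

def crlb_fwhm(n_photons, info):
    """Lower bound on the timing resolution (FWHM) for N independent photons."""
    return 2.3548 * math.sqrt(1.0 / (n_photons * info))
```

Substituting a transport-inclusive arrival-time PDF for `gauss` gives the kind of analytically grounded bound the paper compares against measured CTR.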
Downs-Kelly, Erinn; Pettay, James; Hicks, David; Skacel, Marek; Yoder, Brian; Rybicki, Lisa; Myles, Jonathan; Sreenan, Joseph; Roche, Patrick; Powell, Richard; Hainfeld, James; Grogan, Thomas; Tubbs, Raymond
2005-11-01
Fluorescence in situ hybridization (FISH) has both excellent sensitivity and specificity in detecting HER2 gene amplification in invasive breast carcinoma. FISH has not been widely implemented in clinical practice because of reagent costs and the special instrumentation and expertise required to perform and integrate the assay. Immunohistochemistry (IHC) for HER2 protein is widely used, but false-positive and false-negative results are problematic. We developed a bright-field assay to visualize HER2 gene amplification and concomitant HER2 protein expression (EnzMet GenePro). This assay detects HER2 gene amplification via deposition of metallic silver by enzyme metallography (EnzMet™, Nanoprobes, Yaphank, NY) combined with HER2 protein detection by IHC using alkaline phosphatase and fast red K substrate visualization (CB11; Ventana, Tucson, AZ). The assay was performed on 94 invasive breast carcinomas, for which FISH (PathVysion™, Vysis, Downers Grove, IL), conventional IHC (CB11), and enzyme metallography (EnzMet™) results were known. The EnzMet™ component of the assay was scored as either HER2 gene amplified, polysomic, or nonamplified. The IHC component was scored using the conventional FDA scale of 0 to 3+. Concordance of the EnzMet component of the assay versus FISH was assessed and showed an excellent correlation (Pearson coefficient of 0.95; P < 0.001). The combination of gene and protein detection (EnzMet GenePro) displayed a specificity of 100% and an accuracy of 92.6% (95% confidence interval 85.3-97.0), facilitated recognition of gene/protein discordances, and allowed for efficient interpretation of the slide by conventional light microscopy. The interobserver kappa for each component was excellent (IHC, kappa = 0.94; and EnzMet™, kappa = 0.96). EnzMet is the first bright-field ISH assay in our experience that routinely and nonambiguously detects endogenous HER2 signals, essential for a reliable
Analytic boosted boson discrimination
NASA Astrophysics Data System (ADS)
Larkoski, Andrew J.; Moult, Ian; Neill, Duff
2016-05-01
Observables which discriminate boosted topologies from massive QCD jets are of great importance for the success of the jet substructure program at the Large Hadron Collider. Such observables, while both widely and successfully used, have been studied almost exclusively with Monte Carlo simulations. In this paper we present the first all-orders factorization theorem for a two-prong discriminant based on a jet shape variable, D2, valid for both signal and background jets. Our factorization theorem simultaneously describes the production of both collinear and soft subjets, and we introduce a novel zero-bin procedure to correctly describe the transition region between these limits. By proving an all orders factorization theorem, we enable a systematically improvable description, and allow for precision comparisons between data, Monte Carlo, and first principles QCD calculations for jet substructure observables. Using our factorization theorem, we present numerical results for the discrimination of a boosted Z boson from massive QCD background jets. We compare our results with Monte Carlo predictions which allows for a detailed understanding of the extent to which these generators accurately describe the formation of two-prong QCD jets, and informs their usage in substructure analyses. Our calculation also provides considerable insight into the discrimination power and calculability of jet substructure observables in general.
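The discriminant itself, D2 = e3/e2³ built from two- and three-point energy correlation functions, can be computed directly from jet constituents. The (η, φ, pT) toy constituents and the choice of angular exponent below are illustrative assumptions, not values from the paper.

```python
import itertools, math

def pairwise_angle(p1, p2):
    """Rapidity-azimuth distance between constituents given as (eta, phi, pt)."""
    deta = p1[0] - p2[0]
    dphi = abs(p1[1] - p2[1])
    if dphi > math.pi:
        dphi = 2 * math.pi - dphi
    return math.hypot(deta, dphi)

def d2(constituents, beta=2.0):
    """Jet-shape discriminant D2 = e3 / e2^3 from the 2- and 3-point
    energy correlation functions; z_i are pt fractions."""
    pt_tot = sum(c[2] for c in constituents)
    zs = [c[2] / pt_tot for c in constituents]
    idx = range(len(zs))
    e2 = sum(zs[i] * zs[j] * pairwise_angle(constituents[i], constituents[j]) ** beta
             for i, j in itertools.combinations(idx, 2))
    e3 = sum(zs[i] * zs[j] * zs[k]
             * (pairwise_angle(constituents[i], constituents[j])
                * pairwise_angle(constituents[i], constituents[k])
                * pairwise_angle(constituents[j], constituents[k])) ** beta
             for i, j, k in itertools.combinations(idx, 3))
    return e3 / e2 ** 3
```

A clean two-prong configuration (two hard cores plus soft radiation) yields a much smaller D2 than a diffuse three-core jet, which is the basis of the signal/background discrimination discussed above.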
Analytical theory of mesoscopic Bose-Einstein condensation in an ideal gas
NASA Astrophysics Data System (ADS)
Kocharovsky, Vitaly V.; Kocharovsky, Vladimir V.
2010-03-01
We find the universal structure and scaling of the Bose-Einstein condensation (BEC) statistics and thermodynamics (Gibbs free energy, average energy, heat capacity) for a mesoscopic canonical-ensemble ideal gas in a trap with an arbitrary number of atoms, any volume, and any temperature, including the whole critical region. We identify a universal constraint-cutoff mechanism that makes BEC fluctuations strongly non-Gaussian and is responsible for all unusual critical phenomena of the BEC phase transition in the ideal gas. The main result is an analytical solution to the problem of critical phenomena. It is derived by, first, calculating analytically the universal probability distribution of the noncondensate occupation, or a Landau function, and then using it for the analytical calculation of the universal functions for the particular physical quantities via the exact formulas which express the constraint-cutoff mechanism. We find asymptotics of that analytical solution as well as its simple analytical approximations which describe the universal structure of the critical region in terms of the parabolic cylinder or confluent hypergeometric functions. The obtained results for the order parameter, all higher-order moments of BEC fluctuations, and thermodynamic quantities perfectly match the known asymptotics outside the critical region for both low and high temperature limits. We suggest two- and three-level trap models of BEC and find their exact solutions in terms of the cutoff negative binomial distribution (which tends to the cutoff gamma distribution in the continuous limit) and the confluent hypergeometric distribution, respectively. Also, we present an exactly solvable cutoff Gaussian model of BEC in a degenerate interacting gas. All these exact solutions confirm the universality and constraint-cutoff origin of the strongly non-Gaussian BEC statistics. We introduce a regular refinement scheme for the condensate statistics approximations on the basis of the
Accurate Fission Data for Nuclear Safety
NASA Astrophysics Data System (ADS)
Solders, A.; Gorelov, D.; Jokinen, A.; Kolhinen, V. S.; Lantz, M.; Mattera, A.; Penttilä, H.; Pomp, S.; Rakopoulos, V.; Rinta-Antila, S.
2014-05-01
The Accurate fission data for nuclear safety (AlFONS) project aims at high-precision measurements of fission yields, using the renewed IGISOL mass separator facility in combination with a new high-current light-ion cyclotron at the University of Jyväskylä. The 30 MeV proton beam will be used to create fast and thermal neutron spectra for the study of neutron-induced fission yields. Thanks to a series of mass separating elements, culminating with the JYFLTRAP Penning trap, it is possible to achieve a mass resolving power on the order of a few hundred thousand. In this paper we present the experimental setup and the design of a neutron converter target for IGISOL. The goal is to have a flexible design. For studies of exotic nuclei far from stability a high neutron flux (10^12 neutrons/s) at energies of 1-30 MeV is desired, while for reactor applications neutron spectra that resemble those of thermal and fast nuclear reactors are preferred. It is also desirable to be able to produce (semi-)monoenergetic neutrons for benchmarking and to study the energy dependence of fission yields. The scientific program is extensive and is planned to start in 2013 with a measurement of isomeric yield ratios of proton-induced fission in uranium. This will be followed by studies of independent yields of thermal and fast neutron-induced fission of various actinides.
Fast and Provably Accurate Bilateral Filtering
NASA Astrophysics Data System (ADS)
Chaudhury, Kunal N.; Dabhade, Swapnil D.
2016-06-01
The bilateral filter is a non-linear filter that uses a range filter along with a spatial filter to perform edge-preserving smoothing of images. A direct computation of the bilateral filter requires $O(S)$ operations per pixel, where $S$ is the size of the support of the spatial filter. In this paper, we present a fast and provably accurate algorithm for approximating the bilateral filter when the range kernel is Gaussian. In particular, for box and Gaussian spatial filters, the proposed algorithm can cut down the complexity to $O(1)$ per pixel for any arbitrary $S$. The algorithm has a simple implementation involving $N+1$ spatial filterings, where $N$ is the approximation order. We give a detailed analysis of the filtering accuracy that can be achieved by the proposed approximation in relation to the target bilateral filter. This allows us to estimate the order $N$ required to obtain a given accuracy. We also present comprehensive numerical results to demonstrate that the proposed algorithm is competitive with state-of-the-art methods in terms of speed and accuracy.
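A minimal 1-D sketch of the idea: replacing the Gaussian range kernel by a raised cosine, cos^N(γu), makes the kernel shiftable, so the bilateral sum splits into N+1 cosine terms, each computable with ordinary spatial (here box) filterings. This is a simplified rendering of the approach with illustrative parameters, not the authors' implementation.

```python
import math

def bilateral_direct(f, W, sigma_r):
    """Direct O(S)-per-sample bilateral filter of a 1-D signal with a box
    spatial window of half-width W and a Gaussian range kernel."""
    n = len(f)
    out = []
    for i in range(n):
        num = den = 0.0
        for j in range(max(0, i - W), min(n, i + W + 1)):
            w = math.exp(-0.5 * ((f[j] - f[i]) / sigma_r) ** 2)
            num += w * f[j]
            den += w
        out.append(num / den)
    return out

def box_filter(g, W):
    """Plain box spatial filtering (the O(1)-capable ingredient)."""
    n = len(g)
    return [sum(g[max(0, i - W):min(n, i + W + 1)]) for i in range(n)]

def bilateral_fast(f, W, sigma_r, N=30):
    """Shiftable approximation: cos^N(gamma*u) ~ Gaussian range kernel
    (N even so the kernel is nonnegative), expanded via the identity
    cos^N(t) = 2^-N * sum_k C(N,k) cos((N-2k)t) into box filterings."""
    n = len(f)
    gamma = 1.0 / (sigma_r * math.sqrt(N))
    num, den = [0.0] * n, [0.0] * n
    for k in range(N + 1):
        coef = math.comb(N, k) / 2.0 ** N
        c = (N - 2 * k) * gamma
        cos_f = [math.cos(c * v) for v in f]
        sin_f = [math.sin(c * v) for v in f]
        bc = box_filter([cf * v for cf, v in zip(cos_f, f)], W)
        bs = box_filter([sf * v for sf, v in zip(sin_f, f)], W)
        bc0 = box_filter(cos_f, W)
        bs0 = box_filter(sin_f, W)
        for i in range(n):
            num[i] += coef * (cos_f[i] * bc[i] + sin_f[i] * bs[i])
            den[i] += coef * (cos_f[i] * bc0[i] + sin_f[i] * bs0[i])
    return [a / b for a, b in zip(num, den)]
```

Each term uses a fixed number of spatial filterings regardless of the window size, which is where the per-pixel cost becomes independent of $S$ for box and Gaussian spatial kernels.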
Accurate Prediction of Docked Protein Structure Similarity.
Akbal-Delibas, Bahar; Pomplun, Marc; Haspel, Nurit
2015-09-01
One of the major challenges for protein-protein docking methods is to accurately discriminate nativelike structures. The protein docking community agrees on the existence of a relationship between various favorable intermolecular interactions (e.g., van der Waals, electrostatic, and desolvation forces) and the similarity of a conformation to its native structure. Different docking algorithms often formulate this relationship as a weighted sum of selected terms and calibrate their weights against specific training data to evaluate and rank candidate structures. However, the exact form of this relationship is unknown and the accuracy of such methods is impaired by the pervasiveness of false positives. Unlike conventional scoring functions, we propose a novel machine learning approach that not only ranks the candidate structures relative to each other but also indicates how similar each candidate is to the native conformation. We trained the AccuRMSD neural network with an extensive dataset using the back-propagation learning algorithm. Our method predicted the RMSDs of unbound docked complexes within a 0.4 Å error margin. PMID:26335807
Accurate lineshape spectroscopy and the Boltzmann constant
Truong, G.-W.; Anstie, J. D.; May, E. F.; Stace, T. M.; Luiten, A. N.
2015-01-01
Spectroscopy has an illustrious history delivering serendipitous discoveries and providing a stringent testbed for new physical predictions, including applications from trace materials detection, to understanding the atmospheres of stars and planets, and even constraining cosmological models. Reaching fundamental-noise limits permits optimal extraction of spectroscopic information from an absorption measurement. Here, we demonstrate a quantum-limited spectrometer that delivers high-precision measurements of the absorption lineshape. These measurements yield a very accurate value for the excited-state (6P1/2) hyperfine splitting in Cs and reveal a breakdown in the well-known Voigt spectral profile. We develop a theoretical model that accounts for this breakdown, explaining the observations to within the shot-noise limit. Our model enables us to infer the thermal velocity dispersion of the Cs vapour with an uncertainty of 35 p.p.m. within an hour. This allows us to determine a value for Boltzmann's constant with a precision of 6 p.p.m., and an uncertainty of 71 p.p.m. PMID:26465085
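Doppler-broadening thermometry of this kind rests on inverting the Gaussian Doppler width, σ_ν = ν₀·sqrt(k_B·T/(m·c²)), for the Boltzmann constant. A sketch of that inversion; the Cs mass and transition frequency used in the check are approximate assumed inputs, not the paper's measured values.

```python
import math

def boltzmann_from_doppler(sigma_nu, nu0, mass_kg, temp_k):
    """Invert sigma_nu = nu0 * sqrt(kB*T/(m*c^2)) to recover kB from a
    measured Gaussian Doppler width (Doppler-broadening thermometry)."""
    c = 299792458.0  # speed of light, m/s (exact by definition)
    return mass_kg * c ** 2 * (sigma_nu / nu0) ** 2 / temp_k

def doppler_fwhm(nu0, mass_kg, temp_k, kb=1.380649e-23):
    """Forward model: Doppler FWHM of a transition at frequency nu0."""
    c = 299792458.0
    return 2.3548 * nu0 * math.sqrt(kb * temp_k / (mass_kg * c ** 2))
```

In practice the Gaussian width must first be disentangled from collisional and other lineshape contributions, which is exactly where the measured breakdown of the Voigt profile matters.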
How Accurate are SuperCOSMOS Positions?
NASA Astrophysics Data System (ADS)
Schaefer, Adam; Hunstead, Richard; Johnston, Helen
2014-02-01
Optical positions from the SuperCOSMOS Sky Survey have been compared in detail with accurate radio positions that define the second realisation of the International Celestial Reference Frame (ICRF2). The comparison was limited to the IIIaJ plates from the UK/AAO and Oschin (Palomar) Schmidt telescopes. A total of 1373 ICRF2 sources was used, with the sample restricted to stellar objects brighter than BJ = 20 and Galactic latitudes |b| > 10°. Position differences showed an rms scatter of
Accurate adiabatic correction in the hydrogen molecule
Pachucki, Krzysztof; Komasa, Jacek
2014-12-14
A new formalism for the accurate treatment of adiabatic effects in the hydrogen molecule is presented, in which the electronic wave function is expanded in the James-Coolidge basis functions. Systematic increase in the size of the basis set permits estimation of the accuracy. Numerical results for the adiabatic correction to the Born-Oppenheimer interaction energy reveal a relative precision of 10^-12 at an arbitrary internuclear distance. Such calculations have been performed for 88 internuclear distances in the range of 0 < R ⩽ 12 bohrs to construct the adiabatic correction potential and to solve the nuclear Schrödinger equation. Finally, the adiabatic correction to the dissociation energies of all rovibrational levels in H2, HD, HT, D2, DT, and T2 has been determined. For the ground state of H2 the estimated precision is 3 × 10^-7 cm^-1, which is almost three orders of magnitude higher than that of the best previous result. The achieved accuracy removes the adiabatic contribution from the overall error budget of the present-day theoretical predictions for the rovibrational levels.
MEMS accelerometers in accurate mount positioning systems
NASA Astrophysics Data System (ADS)
Mészáros, László; Pál, András.; Jaskó, Attila
2014-07-01
In order to attain precise, accurate and stateless positioning of telescope mounts we apply microelectromechanical accelerometer systems (also known as MEMS accelerometers). In common practice, feedback from the mount position is provided by electronic, optical or magneto-mechanical systems, or via a real-time astrometric solution based on the acquired images. MEMS-based systems are completely independent of these mechanisms. Our goal is to investigate the advantages and challenges of applying such devices and to reach the sub-arcminute range, which is well below the field of view of conventional imaging telescope systems. We present how this sub-arcminute accuracy can be achieved with very cheap MEMS sensors. Basically, these sensors yield raw output within an accuracy of a few degrees. We show what kind of calibration procedures can exploit spherical and cylindrical constraints between accelerometer output channels in order to achieve the previously mentioned accuracy level. We also demonstrate how our implementation can be inserted in a telescope control system. Although this attainable precision is less than both the resolution of telescope mount drive mechanics and the accuracy of astrometric solutions, the independent nature of attitude determination could significantly increase the reliability of autonomous or remotely operated astronomical observations.
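One form of the spherical constraint mentioned above: static accelerometer readings must lie on a sphere of radius g centred on the bias vector. For a bias-only model with unit scale factors the constraint becomes linear in the unknowns and can be fit by ordinary least squares. The bias-only model and the synthetic data are simplifying assumptions; a full calibration would also fit scale factors and axis misalignments.

```python
import math

def solve_linear(A, b):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(A)
    M = [row[:] + [bv] for row, bv in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            fac = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= fac * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def fit_bias(samples):
    """Bias-only sphere fit: static readings m satisfy |m - b| = g, i.e.
    2 m.b - k = |m|^2 with k = |b|^2 - g^2, linear in (b, k)."""
    rows = [[2 * mx, 2 * my, 2 * mz, -1.0] for mx, my, mz in samples]
    rhs = [mx * mx + my * my + mz * mz for mx, my, mz in samples]
    # normal equations A^T A x = A^T rhs
    AtA = [[sum(r[i] * r[j] for r in rows) for j in range(4)] for i in range(4)]
    Atb = [sum(r[i] * v for r, v in zip(rows, rhs)) for i in range(4)]
    bx, by, bz, k = solve_linear(AtA, Atb)
    g = math.sqrt(bx * bx + by * by + bz * bz - k)
    return (bx, by, bz), g
```

Collecting readings in many static orientations and fitting this model recovers the per-axis biases without any external attitude reference, which is the stateless property exploited here.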
Accurate, reliable prototype earth horizon sensor head
NASA Technical Reports Server (NTRS)
Schwarz, F.; Cohen, H.
1973-01-01
The design and performance of an accurate and reliable prototype earth sensor head (ARPESH) are described. The ARPESH employs a detection logic 'locator' concept and horizon sensor mechanization which should lead to high-accuracy horizon sensing that is minimally degraded by spatial or temporal variations in sensing attitude from a satellite in orbit around the earth at altitudes near 500 km. An accuracy of horizon location to within 0.7 km has been predicted, independent of meteorological conditions; this corresponds to an error of 0.015 deg at 500 km altitude. Laboratory evaluation of the sensor indicates that this accuracy is achieved. First, the basic operating principles of ARPESH are described; next, detailed design and construction data are presented; and then the performance of the sensor is reported under laboratory conditions, in which the sensor is installed in a simulator that permits it to scan over a blackbody source against a background representing the earth-space interface for various equivalent planet temperatures.
Fast and Accurate Exhaled Breath Ammonia Measurement
Solga, Steven F.; Mudalel, Matthew L.; Spacek, Lisa A.; Risby, Terence H.
2014-01-01
This exhaled breath ammonia method uses a fast and highly sensitive spectroscopic technique known as quartz-enhanced photoacoustic spectroscopy (QEPAS), built around a quantum cascade laser. The monitor is coupled to a sampler that measures mouth pressure and carbon dioxide. The system is temperature controlled and specifically designed to address the reactivity of this compound. The sampler provides immediate feedback to the subject and the technician on the quality of the breath effort. Together with the quick response time of the monitor, this system is capable of accurately measuring exhaled breath ammonia representative of deep-lung systemic levels. Because the system is easy to use and produces real-time results, it has enabled experiments to identify factors that influence measurements. For example, mouth rinse and oral pH reproducibly and significantly affect results and therefore must be controlled; temperature and mode of breathing are other examples. As our understanding of these factors evolves, error is reduced, and clinical studies become more meaningful. This system is very reliable and individual measurements are inexpensive. The sampler is relatively inexpensive and quite portable, but the monitor is neither. This limits options for some clinical studies and provides a rationale for future innovations. PMID:24962141
Keck, B D; Ognibene, T; Vogel, J S
2010-02-05
Accelerator mass spectrometry (AMS) is an isotope-based measurement technology that utilizes carbon-14 labeled compounds in the pharmaceutical development process to measure compounds at very low concentrations, empowers microdosing as an investigational tool, and extends the utility of {sup 14}C labeled compounds to dramatically lower levels. It is a form of isotope ratio mass spectrometry that can provide either measurements of total compound equivalents or, when coupled to a separation technology such as chromatography, quantitation of specific compounds. The properties of AMS as a measurement technique are investigated here, and the parameters of method validation are shown. AMS, independent of any separation technique to which it may be coupled, is shown to be accurate, linear, precise, and robust. As the sensitivity and universality of AMS is constantly being explored and expanded, this work underpins many areas of pharmaceutical development, including drug metabolism as well as absorption, distribution and excretion of pharmaceutical compounds, as a fundamental step in drug development. The validation parameters for pharmaceutical analyses were examined for the accelerator mass spectrometry measurement of the {sup 14}C/C ratio, independent of chemical separation procedures. The isotope ratio measurement was specific (owing to the {sup 14}C label), stable across sample storage conditions for at least one year, and linear over 4 orders of magnitude, with an analytical range from one tenth Modern to at least 2000 Modern (instrument specific). Further, accuracy was excellent, between 1 and 3 percent, while precision, expressed as coefficient of variation, was between 1 and 6%, determined primarily by radiocarbon content and the time spent analyzing a sample. Sensitivity, expressed as LOD and LLOQ, was 1 and 10 attomoles of carbon-14 (which can be expressed as compound equivalents), and for a typical small molecule labeled at 10% incorporation with {sup 14}C corresponds to 30 fg
Analytical analysis of particle-core dynamics
Batygin, Yuri K
2010-01-01
Particle-core interaction is a well-developed model of halo formation in high-intensity beams. In this paper, we present an analytical solution for the averaged single-particle dynamics around a uniformly charged beam. The problem is analyzed through a sequence of canonical transformations of the Hamiltonian, which describes nonlinear particle oscillations. A closed-form expression for the maximum particle deviation from the axis is obtained. The results of this study are in good agreement with numerical simulations and with previously obtained data.
Analytical laboratory quality audits
Kelley, William D.
2001-06-11
Analytical Laboratory Quality Audits are designed to improve laboratory performance. The success of the audit, as for many activities, is based on adequate preparation, precise performance, well documented and insightful reporting, and productive follow-up. Adequate preparation starts with definition of the purpose, scope, and authority for the audit and the primary standards against which the laboratory quality program will be tested. The scope and technical processes involved lead to determining the needed audit team resources. Contact is made with the auditee and a formal audit plan is developed, approved and sent to the auditee laboratory management. Review of the auditee's quality manual, key procedures and historical information during preparation leads to better checklist development and more efficient and effective use of the limited time for data gathering during the audit itself. The audit begins with the opening meeting that sets the stage for the interactions between the audit team and the laboratory staff. Arrangements are worked out for the necessary interviews and examination of processes and records. The information developed during the audit is recorded on the checklists. Laboratory management is kept informed of issues during the audit so there are no surprises at the closing meeting. The audit report documents whether the management control systems are effective. In addition to findings of nonconformance, positive reinforcement of exemplary practices provides balance and fairness. Audit closure begins with receipt and evaluation of proposed corrective actions from the nonconformances identified in the audit report. After corrective actions are accepted, their implementation is verified. Upon closure of the corrective actions, the audit is officially closed.
NASA Astrophysics Data System (ADS)
Ansari, R.; Mirnezhad, M.; Sahmani, S.
2015-04-01
Molecular mechanics theory has been widely used to investigate the mechanical properties of nanostructures analytically. However, only a limited number of studies have utilized molecular mechanics models to predict the elastic properties of boron nitride nanotubes (BNNTs). In the current study, the mechanical properties of chiral single-walled BNNTs are predicted analytically based on an accurate molecular mechanics model. For this purpose, based upon density functional theory (DFT) within the framework of the generalized gradient approximation (GGA), the Perdew-Burke-Ernzerhof exchange-correlation functional is adopted to evaluate the force constants used in the molecular mechanics model. Afterwards, based on the principles of molecular mechanics, explicit expressions are given to calculate the surface Young's modulus and Poisson's ratio of single-walled BNNTs for different values of tube diameter and types of chirality. Moreover, the values of surface Young's modulus, Poisson's ratio and bending stiffness of boron nitride sheets are obtained via DFT as byproducts. The results predicted by the present model are in reasonable agreement with those reported by other models in the literature.
Accurate paleointensities - the multi-method approach
NASA Astrophysics Data System (ADS)
de Groot, Lennart
2016-04-01
The accuracy of models describing rapid changes in the geomagnetic field over the past millennia critically depends on the availability of reliable paleointensity estimates. Over the past decade, methods to derive paleointensities from lavas (the only recorder of the geomagnetic field that is available all over the globe and through geologic time) have seen significant improvements, and various alternative techniques have been proposed. The 'classical' Thellier-style approach was optimized, and selection criteria were defined in the 'Standard Paleointensity Definitions' (Paterson et al., 2014). The Multispecimen approach was validated, and the importance of additional tests and criteria to assess Multispecimen results was emphasized. Recently, a non-heating, relative paleointensity technique was proposed, the pseudo-Thellier protocol, which shows great potential in both accuracy and efficiency but currently lacks a solid theoretical underpinning. Here I present work using all three of the aforementioned paleointensity methods on suites of young lavas taken from the volcanic islands of Hawaii, La Palma, Gran Canaria, Tenerife, and Terceira. Many of the sampled cooling units are <100 years old, so the actual field strength at the time of cooling is reasonably well known. Rather intuitively, flows that produce coherent results from two or more different paleointensity methods yield the most accurate estimates of the paleofield. Furthermore, the results for some flows pass the selection criteria for one method but fail in other techniques. Scrutinizing and combining all acceptable results yielded reliable paleointensity estimates for 60-70% of all sampled cooling units, an exceptionally high success rate. This 'multi-method paleointensity approach' therefore has high potential to provide the much-needed paleointensities to improve geomagnetic field models for the Holocene.
Important Nearby Galaxies without Accurate Distances
NASA Astrophysics Data System (ADS)
McQuinn, Kristen
2014-10-01
The Spitzer Infrared Nearby Galaxies Survey (SINGS) and its offspring programs (e.g., THINGS, HERACLES, KINGFISH) have resulted in a fundamental change in our view of star formation and the ISM in galaxies, and together they represent the most complete multi-wavelength data set yet assembled for a large sample of nearby galaxies. These great investments of observing time have been dedicated to the goal of understanding the interstellar medium, the star formation process, and, more generally, galactic evolution at the present epoch. Nearby galaxies provide the basis for which we interpret the distant universe, and the SINGS sample represents the best studied nearby galaxies. Accurate distances are fundamental to interpreting observations of galaxies. Surprisingly, many of the SINGS spiral galaxies have numerous discrepant distance estimates, resulting in confusion. We can rectify this situation for 8 of the SINGS spiral galaxies within 10 Mpc at a very low cost through measurements of the tip of the red giant branch. The proposed observations will provide an accuracy of better than 0.1 in distance modulus. Our sample includes such well known galaxies as M51 (the Whirlpool), M63 (the Sunflower), M104 (the Sombrero), and M74 (the archetypal grand design spiral). We are also proposing coordinated parallel WFC3 UV observations of the central regions of the galaxies, rich with high-mass UV-bright stars. As a secondary science goal we will compare the resolved UV stellar populations with integrated UV emission measurements used in calibrating star formation rates. Our observations will complement the growing HST UV atlas of high resolution images of nearby galaxies.
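The quoted accuracy of 0.1 mag in distance modulus maps directly onto a fractional distance error through the standard relation mu = 5 log10(d / 10 pc). A short sketch of that arithmetic (the function names are illustrative, not from the proposal):

```python
import math

def distance_mpc(mu):
    """Distance in Mpc from a distance modulus mu = 5*log10(d / 10 pc)."""
    return 10.0 ** ((mu + 5.0) / 5.0) / 1.0e6

def frac_distance_error(sigma_mu):
    """Fractional distance uncertainty implied by a distance-modulus
    uncertainty: d(ln d)/d(mu) = ln(10)/5 ~ 0.46 per magnitude."""
    return math.log(10.0) / 5.0 * sigma_mu

d = distance_mpc(29.0)          # ~6.3 Mpc, within the 10 Mpc sample limit
err = frac_distance_error(0.1)  # ~4.6% for the quoted 0.1 mag accuracy
```

So 0.1 mag corresponds to roughly a 5% distance, and hence ~10% luminosity, uncertainty, which is why tightening the modulus matters for calibrating star formation rates.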
Towards Accurate Application Characterization for Exascale (APEX)
Hammond, Simon David
2015-09-01
Sandia National Laboratories has been engaged in hardware and software codesign activities for a number of years; indeed, it might be argued that prototyping of clusters as far back as the CPLANT machines, and many large capability resources including ASCI Red and RedStorm, were examples of codesigned solutions. As the research supporting our codesign activities has moved closer to investigating on-node runtime behavior, a natural hunger has grown for detailed analysis of both hardware and algorithm performance from the perspective of low-level operations. The Application Characterization for Exascale (APEX) LDRD was a project conceived to address some of these concerns. Primarily, the research was intended to focus on generating accurate and reproducible low-level performance metrics using tools that could scale to production-class code bases. Alongside this research was an advocacy and analysis role associated with evaluating tools for production use, working with leading industry vendors to develop and refine solutions required by our code teams, and directly engaging with production code developers to form a context for the application analysis and a bridge to the research community within Sandia. On each of these accounts significant progress has been made, particularly, as this report will cover, in the low-level analysis of operations for important classes of algorithms. This report summarizes the development of a collection of tools under the APEX research program and leaves to other SAND and L2 milestone reports the description of codesign progress with Sandia’s production users/developers.
The Case for Assessment Analytics
ERIC Educational Resources Information Center
Ellis, Cath
2013-01-01
Learning analytics is a relatively new field of inquiry and its precise meaning is both contested and fluid (Johnson, Smith, Willis, Levine & Haywood, 2011; LAK, n.d.). Ferguson (2012) suggests that the best working definition is that offered by the first Learning Analytics and Knowledge (LAK) conference: "the measurement, collection,…
Understanding Education Involving Geovisual Analytics
ERIC Educational Resources Information Center
Stenliden, Linnea
2013-01-01
Handling the vast amounts of data and information available in contemporary society is a challenge. Geovisual Analytics provides technology designed to increase the effectiveness of information interpretation and analytical task solving. To date, little attention has been paid to the role such tools can play in education and to the extent to which…
Analytical modeling of the steady radiative shock
NASA Astrophysics Data System (ADS)
Boireau, L.; Bouquet, S.; Michaut, C.; Clique, C.
2006-06-01
In a paper dated 2000 [1], a fully analytical theory of the radiative shock was presented. This early model had been used to design [2] radiative shock experiments at the Laboratory for the Use of Intense Lasers (LULI) [3-5]. It became obvious from numerical simulations [6, 7] that this model had to be improved in order to accurately recover experiments. In this communication, we present a new theory in which the ionization rates in the unshocked ($\bar{Z}_1$) and shocked ($\bar{Z}_2 \neq \bar{Z}_1$) material, respectively, are included. Associated changes in excitation energy are also taken into account. We study the influence of these effects on the compression and temperature in the shocked medium.
Analytic prediction of airplane equilibrium spin characteristics
NASA Technical Reports Server (NTRS)
Adams, W. M., Jr.
1972-01-01
The nonlinear equations of motion are solved algebraically for conditions under which an airplane is in an equilibrium spin. Constrained minimization techniques are employed in obtaining the solution. Linear characteristics of the airplane about the equilibrium points are also presented, and their significance in identifying the stability characteristics of the equilibrium points is discussed. Computer time requirements are small, making the method appear potentially applicable in airplane design. Results are obtained for several configurations and are compared with other analytic-numerical methods employed in spin prediction. Correlation with experimental results is discussed for one configuration for which a rather extensive data base was available. A need is indicated for higher Reynolds number data taken under conditions which more accurately simulate a spin.
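Finding an equilibrium means driving the equations-of-motion residuals to zero. As a stand-in for the constrained-minimization machinery, the sketch below applies a 2x2 Newton iteration to a hypothetical pair of coupled nonlinear balance equations; the coefficients are invented for illustration and are not the airplane model.

```python
def solve_equilibrium(residual, jacobian, x0, tol=1e-12, max_iter=50):
    """Newton iteration driving a 2-equation residual to zero, i.e. finding
    a steady (equilibrium) state of a nonlinear system."""
    x = list(x0)
    for _ in range(max_iter):
        f1, f2 = residual(x)
        if abs(f1) < tol and abs(f2) < tol:
            break
        a, b, c, d = jacobian(x)            # 2x2 Jacobian [[a, b], [c, d]]
        det = a * d - b * c
        x = [x[0] - (d * f1 - b * f2) / det,
             x[1] - (-c * f1 + a * f2) / det]
    return x

# Toy "moment balance": two coupled nonlinear equations with root (1, 2).
res = lambda x: (x[0] ** 2 + x[1] - 3.0, x[0] + x[1] ** 2 - 5.0)
jac = lambda x: (2.0 * x[0], 1.0, 1.0, 2.0 * x[1])
eq = solve_equilibrium(res, jac, [1.0, 1.0])
```

A real spin-equilibrium search would work in the full state space (angle of attack, sideslip, rotation rate) and impose constraints, but the fixed-point structure is the same.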
Comparing numerical and analytic approximate gravitational waveforms
NASA Astrophysics Data System (ADS)
Afshari, Nousha; Lovelace, Geoffrey; SXS Collaboration
2016-03-01
A direct observation of gravitational waves will test Einstein's theory of general relativity under the most extreme conditions. The Laser Interferometer Gravitational-Wave Observatory, or LIGO, began searching for gravitational waves in September 2015 with three times the sensitivity of initial LIGO. To help Advanced LIGO detect as many gravitational waves as possible, a major research effort is underway to accurately predict the expected waves. In this poster, I will explore how the gravitational waveform produced by a long binary-black-hole inspiral, merger, and ringdown is affected by how fast the larger black hole spins. In particular, I will present results from simulations of merging black holes, completed using the Spectral Einstein Code (black-holes.org/SpEC.html), including some new, long simulations designed to mimic black hole-neutron star mergers. I will present comparisons of the numerical waveforms with analytic approximations.
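Agreement between numerical and analytic waveforms is usually quantified by a normalized overlap (match). The sketch below computes the simplest flat-noise version, with no time or phase maximization and invented damped-sinusoid signals standing in for the real waveforms.

```python
import math

def overlap(h1, h2):
    """Normalized inner product <h1,h2>/sqrt(<h1,h1><h2,h2>): 1 for
    identical waveforms, smaller as they dephase (flat-noise approximation)."""
    dot = sum(a * b for a, b in zip(h1, h2))
    n1 = math.sqrt(sum(a * a for a in h1))
    n2 = math.sqrt(sum(b * b for b in h2))
    return dot / (n1 * n2)

# A "numerical" ringdown-like signal vs. a slightly dephased "analytic" one.
t = [0.01 * i for i in range(500)]
numerical = [math.exp(-0.3 * x) * math.sin(10.0 * x) for x in t]
analytic = [math.exp(-0.3 * x) * math.sin(10.0 * x + 0.05) for x in t]
match = overlap(numerical, analytic)   # close to, but below, 1
```

Production comparisons weight the inner product by the detector noise power spectral density and maximize over arrival time and phase, but the normalized-overlap idea is the same.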
Road transportable analytical laboratory (RTAL) system
Finger, S.M.
1996-12-31
Remediation of DOE contaminated areas requires extensive sampling and analysis. Reliable, road transportable, fully independent laboratory systems that could perform on-site a full range of analyses meeting high levels of quality assurance and control would accelerate, and thereby reduce the cost of, cleanup and remediation efforts by (1) providing critical analytical data more rapidly, and (2) eliminating the handling, shipping, and manpower associated with sample shipments. The goals of RTAL are to meet the needs of DOE for rapid, accurate analysis of a wide variety of hazardous and radioactive contaminants in soil, groundwater, and surface waters. The system consists of a set of individual laboratory modules deployable independently or together to meet specific site needs: radioanalytical lab, organic chemical analysis lab, inorganic chemical analysis lab, aquatic biomonitoring lab, field analytical lab, robotics base station, decontamination/sample screening module, and operations control center. The goal of this integrated system is a sample throughput of 20 samples/day, providing a full range of accurate analyses on each sample within 16 h (after sample preparation), compared with the 45-day turnaround time in commercial laboratories. A prototype RTAL consisting of 5 modules was built and demonstrated at Fernald (FEMP)'s OU-1 Waste Pits during the 1st-3rd quarters of FY96 (including the '96 Blizzard). All performance and operational goals were met or exceeded: as many as 50 sample analyses/day were achieved, depending on the procedure; sample turnaround times were 50-67% less than FEMP's best times; and RTAL costs were projected to be 30% less than FEMP costs for large volume analyses in fixed laboratories.
NASA Astrophysics Data System (ADS)
Joshi, A.; Suryanarayan, S.
1989-03-01
The problem of free vibration of beams having different end conditions and subjected to static initial loads has been studied with the aim of arriving at good closed-form analytical solutions. Elementary beam theory is used as a starting point to obtain the transverse vibration frequencies for various cases of classical homogeneous end conditions and for various values of the static axial load and end moment. These results indicate that it is possible to identify simple algebraic expressions which accurately represent the solution for various boundary conditions. It is also found that reasonably accurate estimates of the predominantly flexural frequency of coupled flexural-torsional vibration can be obtained from the uncoupled flexural vibration frequency of beam-columns. This is achieved by defining an effective axial load parameter, which is a combination of the axial load, the end moment and the slenderness parameter. Finally, the study also brings out that the various expressions, corresponding to different end conditions, can be combined together into a single expression for the predominantly flexural frequency. This expression is common for the boundary conditions considered here and use is made of various normalizing factors which depend on the boundary conditions, and are obtainable from the corresponding free vibration and stability analyses of beam-columns.
Accurate theoretical chemistry with coupled pair models.
Neese, Frank; Hansen, Andreas; Wennmohs, Frank; Grimme, Stefan
2009-05-19
Quantum chemistry has found its way into the everyday work of many experimental chemists. Calculations can predict the outcome of chemical reactions, afford insight into reaction mechanisms, and be used to interpret structure and bonding in molecules. Thus, contemporary theory offers tremendous opportunities in experimental chemical research. However, even with present-day computers and algorithms, we cannot solve the many-particle Schrödinger equation exactly; inevitably some error is introduced in approximating the solutions of this equation. Thus, the accuracy of quantum chemical calculations is of critical importance. The affordable accuracy depends on molecular size and particularly on the total number of atoms: for orientation, ethanol has 9 atoms, aspirin 21 atoms, morphine 40 atoms, sildenafil 63 atoms, paclitaxel 113 atoms, insulin nearly 800 atoms, and quaternary hemoglobin almost 12,000 atoms. Currently, molecules with up to approximately 10 atoms can be very accurately studied by coupled cluster (CC) theory, approximately 100 atoms with second-order Møller-Plesset perturbation theory (MP2), approximately 1000 atoms with density functional theory (DFT), and beyond that number with semiempirical quantum chemistry and force-field methods. The overwhelming majority of present-day calculations in the 100-atom range use DFT. Although these methods have been very successful in quantum chemistry, they do not offer a well-defined hierarchy of calculations that allows one to systematically converge to the correct answer. Recently a number of rather spectacular failures of DFT methods have been found, even for seemingly simple systems such as hydrocarbons, fueling renewed interest in wave function-based methods that incorporate the relevant physics of electron correlation in a more systematic way. Thus, it would be highly desirable to fill the gap between 10 and 100 atoms with highly correlated ab initio methods. We have found that one of the earliest (and now
Analytical techniques for direct identification of biosignatures and microorganisms
NASA Astrophysics Data System (ADS)
Cid, C.; Garcia-Descalzo, L.; Garcia-Lopez, E.; Postigo, M.; Alcazar, A.; Baquero, F.
2012-09-01
Rover missions to potentially habitable ecosystems require portable instruments that use minimal power, require no sample preparation, and provide suitably diagnostic information to an Earth-based exploration team. In exploration of terrestrial analogue environments of potentially habitable ecosystems it is important to screen rapidly for the presence of biosignatures and microorganisms and especially to identify them accurately. In this study, several analytical techniques for the direct identification of biosignatures and microorganisms in different Earth analogues of habitable ecosystems are compared.
PyVCI: A flexible open-source code for calculating accurate molecular infrared spectra
NASA Astrophysics Data System (ADS)
Sibaev, Marat; Crittenden, Deborah L.
2016-06-01
The PyVCI program package is a general purpose open-source code for simulating accurate molecular spectra, based upon force field expansions of the potential energy surface in normal mode coordinates. It includes harmonic normal coordinate analysis and vibrational configuration interaction (VCI) algorithms, implemented primarily in Python for accessibility but with time-consuming routines written in C. Coriolis coupling terms may be optionally included in the vibrational Hamiltonian. Non-negligible VCI matrix elements are stored in sparse matrix format to alleviate the diagonalization problem. CPU and memory requirements may be further controlled by algorithmic choices and/or numerical screening procedures, and recommended values are established by benchmarking using a test set of 44 molecules for which accurate analytical potential energy surfaces are available. Force fields in normal mode coordinates are obtained from the PyPES library of high quality analytical potential energy surfaces (to 6th order) or by numerical differentiation of analytic second derivatives generated using the GAMESS quantum chemical program package (to 4th order).
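The harmonic normal coordinate analysis step can be illustrated on the smallest possible case, a one-dimensional diatomic, where the mass-weighted Hessian is 2x2 and its eigenvalues can be found by hand. This is a sketch of the general technique only, not PyVCI's API.

```python
import math

def diatomic_harmonic_frequency(k, m1, m2):
    """Harmonic vibrational angular frequency of a 1-D diatomic from the
    mass-weighted Hessian H_mw[i][j] = H[i][j]/sqrt(m_i*m_j). Its
    eigenvalues are 0 (translation) and k/mu; omega = sqrt(k/mu)."""
    h = [[k / m1, -k / math.sqrt(m1 * m2)],
         [-k / math.sqrt(m1 * m2), k / m2]]
    tr = h[0][0] + h[1][1]
    det = h[0][0] * h[1][1] - h[0][1] * h[1][0]   # ~0: translational mode
    disc = math.sqrt(tr * tr - 4.0 * det)
    lam_max = 0.5 * (tr + disc)                   # vibrational eigenvalue
    return math.sqrt(lam_max)

# k = 4, m1 = 1, m2 = 2 gives mu = 2/3 and omega = sqrt(k/mu) = sqrt(6).
omega = diatomic_harmonic_frequency(4.0, 1.0, 2.0)
```

For a polyatomic the same diagonalization is done on a 3N x 3N mass-weighted Hessian, and the resulting normal coordinates are the ones in which the force-field expansion above is written.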
An analytic performance model of disk arrays and its application
NASA Technical Reports Server (NTRS)
Lee, Edward K.; Katz, Randy H.
1991-01-01
As disk arrays become widely used, tools for understanding and analyzing their performance become increasingly important. In particular, performance models can be invaluable in both configuring and designing disk arrays. Accurate analytic performance models are desirable over other types of models because they can be quickly evaluated, are applicable under a wide range of system and workload parameters, and can be manipulated by a range of mathematical techniques. Unfortunately, analytical performance models of disk arrays are difficult to formulate due to the presence of queuing and fork-join synchronization; a disk array request is broken up into independent disk requests which must all complete to satisfy the original request. We develop, validate, and apply an analytic performance model for disk arrays. We derive simple equations for approximating their utilization, response time, and throughput. We then validate the analytic model via simulation and investigate the accuracy of each approximation used in deriving the analytical model. Finally, we apply the analytical model to derive an equation for the optimal unit of data striping in disk arrays.
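One classic closed-form treatment of the fork-join effect approximates the parent request's completion as the maximum of the per-disk completion times, which for n i.i.d. exponential times introduces the n-th harmonic number. The sketch below combines that with an M/M/1 per-disk response time; it is an illustrative heuristic, not the paper's derivation.

```python
def harmonic(n):
    """n-th harmonic number H_n = 1 + 1/2 + ... + 1/n."""
    return sum(1.0 / i for i in range(1, n + 1))

def fork_join_response(n_disks, service_time, arrival_rate):
    """Crude fork-join approximation: each sub-request sees an M/M/1 disk
    queue; the parent completes at the max of n completion times, which is
    approximated via H_n (exact for n i.i.d. exponentials, no queueing)."""
    rho = arrival_rate * service_time          # per-disk utilization
    assert rho < 1.0, "queue is unstable"
    r_single = service_time / (1.0 - rho)      # M/M/1 mean response time
    return r_single * harmonic(n_disks)

# 8 disks, 10 ms mean service time, 50 requests/s per disk (rho = 0.5).
r = fork_join_response(n_disks=8, service_time=0.01, arrival_rate=50.0)
```

The harmonic growth captures why striping a request across more disks raises parallelism but also raises the synchronization penalty, the trade-off behind the paper's optimal striping unit.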
Statistically qualified neuro-analytic failure detection method and system
Vilim, Richard B.; Garcia, Humberto E.; Chen, Frederick W.
2002-03-02
An apparatus and method for monitoring a process involve the development and application of a statistically qualified neuro-analytic (SQNA) model to accurately and reliably identify process change. The development of the SQNA model is accomplished in two stages: deterministic model adaptation, followed by stochastic modification of the deterministic model. Deterministic model adaptation involves formulating an analytic model of the process representing known process characteristics, augmenting the analytic model with a neural network that captures unknown process characteristics, and training the resulting neuro-analytic model by adjusting the neural network weights according to a unique scaled equation-error minimization technique. Stochastic model modification involves qualifying any remaining uncertainty in the trained neuro-analytic model by formulating a likelihood function, given an error propagation equation, for computing the probability that the neuro-analytic model generates the measured process output. Preferably, the developed SQNA model is validated using known sequential probability ratio tests and applied to the process as an on-line monitoring system. Illustrative of the method and apparatus, the method is applied to a peristaltic pump system.
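The first stage, an analytic model augmented by a neural network that learns only the unmodeled residual, can be sketched on a toy process. Everything below (the process, the network size, plain SGD training) is a hypothetical illustration of the general idea; it is not the patented SQNA procedure or its scaled equation-error technique.

```python
import math, random

# Known physics (analytic part) plus an unknown nonlinearity the net must learn.
analytic = lambda x: 2.0 * x                                 # known characteristics
true_process = lambda x: 2.0 * x + 0.5 * math.sin(1.5 * x)   # hypothetical process

random.seed(0)
H = 6                                        # hidden tanh neurons
w1 = [random.uniform(-1.0, 1.0) for _ in range(H)]
b1 = [random.uniform(-1.0, 1.0) for _ in range(H)]
w2 = [random.uniform(-1.0, 1.0) for _ in range(H)]
b2 = 0.0

def net(x):
    return sum(w2[j] * math.tanh(w1[j] * x + b1[j]) for j in range(H)) + b2

xs = [i / 20.0 - 2.0 for i in range(81)]     # training grid on [-2, 2]
lr = 0.02
for _ in range(1500):                        # plain stochastic gradient descent
    for x in xs:
        err = net(x) - (true_process(x) - analytic(x))   # residual target
        for j in range(H):
            hj = math.tanh(w1[j] * x + b1[j])
            g = err * w2[j] * (1.0 - hj * hj)            # backprop through tanh
            w2[j] -= lr * err * hj
            w1[j] -= lr * g * x
            b1[j] -= lr * g
        b2 -= lr * err

rmse = math.sqrt(sum((analytic(x) + net(x) - true_process(x)) ** 2
                     for x in xs) / len(xs))
```

Keeping the known physics in the analytic term means the network only has to learn a small residual, which is what makes the later statistical qualification of the remaining uncertainty tractable.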
An analytical solution for quantum size effects on Seebeck coefficient
NASA Astrophysics Data System (ADS)
Karabetoglu, S.; Sisman, A.; Ozturk, Z. F.
2016-03-01
There are numerous experimental and numerical studies of quantum size effects on the Seebeck coefficient. In contrast, in this study, we obtain analytical expressions for the Seebeck coefficient under quantum size effects. The Seebeck coefficient of a Fermi gas confined in a rectangular domain is considered. Analytical expressions, which represent the size dependency of the Seebeck coefficient explicitly, are derived in terms of confinement parameters. A fundamental form of the Seebeck coefficient based on infinite summations is used under the relaxation time approximation. To obtain analytical results, the summations are calculated using the first two terms of the Poisson summation formula. It is shown that they are in good agreement with the exact results based on direct calculation of the summations as long as the confinement parameters are less than unity. The analytical results are also in good agreement with experimental and numerical ones in the literature. Maximum relative errors of the analytical expressions are less than 3% and 4% for the 2D and 1D cases, respectively. Dimensional transitions of the Seebeck coefficient are also examined. Furthermore, a detailed physical explanation for the oscillations in the Seebeck coefficient is proposed by considering the relative standard deviation of the total variance of the particle number in the Fermi shell.
Analytical derivation of DC SQUID response
NASA Astrophysics Data System (ADS)
Soloviev, I. I.; Klenov, N. V.; Schegolev, A. E.; Bakurskiy, S. V.; Kupriyanov, M. Yu
2016-09-01
We consider voltage and current response formation in a DC superconducting quantum interference device (SQUID) with overdamped Josephson junctions in the resistive and superconducting states in the context of a resistively shunted junction (RSJ) model. For simplicity we neglect the junction capacitance and the noise effect. Explicit expressions for the responses in the resistive state were obtained for a SQUID which is symmetrical with respect to the bias current injection point. The normalized SQUID inductance $l = 2eI_cL/\hbar$ (where $I_c$ is the critical current of a Josephson junction, L is the SQUID inductance, e is the electron charge and ℏ is the reduced Planck constant) was assumed to be within the range l ≤ 1, subsequently expanded up to l ≈ 7 using two fitting parameters. The SQUID current response in the superconducting state was considered for arbitrary values of the inductance. The impact of the small technological spread of parameters relevant to low-temperature superconductor (LTS) technology was studied, using a generalization of the developed analytical approach, for the case of a small difference of critical currents and shunt resistances of the Josephson junctions, and inequality of the SQUID inductive shoulders, for both the resistive and superconducting states. Comparison with numerical calculation results shows that the developed analytical expressions can be used in the design of practical LTS SQUIDs and SQUID-based circuits, e.g. large serial SQIFs, drastically decreasing the time of simulation.
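In the zero-inductance limit (l -> 0) the symmetric, overdamped DC SQUID reduces to textbook formulas: the critical current is modulated as 2*Ic0*|cos(pi*Phi/Phi0)| and the time-averaged RSJ voltage is (R/2)*sqrt(I^2 - Ic^2) above threshold. The sketch below implements only these limiting expressions, not the paper's finite-l results or fitting parameters.

```python
import math

PHI0 = 2.067833848e-15   # magnetic flux quantum, Wb

def squid_critical_current(phi_ext, ic0):
    """Critical current of a symmetric DC SQUID with negligible loop
    inductance (the l -> 0 limit of the RSJ treatment)."""
    return 2.0 * ic0 * abs(math.cos(math.pi * phi_ext / PHI0))

def squid_voltage(i_bias, phi_ext, ic0, r_shunt):
    """Time-averaged voltage of the overdamped, symmetric SQUID: zero below
    the flux-dependent critical current, (R/2)*sqrt(I^2 - Ic^2) above it
    (r_shunt is the per-junction shunt resistance)."""
    ic = squid_critical_current(phi_ext, ic0)
    if i_bias <= ic:
        return 0.0
    return 0.5 * r_shunt * math.sqrt(i_bias ** 2 - ic ** 2)

# Flux modulation at fixed bias: deepest voltage at integer flux, highest
# at half-integer flux where the critical current is fully suppressed.
v0 = squid_voltage(3e-6, 0.0, 1e-6, 2.0)
v_half = squid_voltage(3e-6, PHI0 / 2.0, 1e-6, 2.0)
```

The flux-to-voltage transfer implied by this modulation is the response whose analytic form, including finite inductance and junction asymmetry, the paper derives.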
Road Transportable Analytical Laboratory (RTAL) system
Finger, S.M.
1995-12-01
U.S. Department of Energy (DOE) facilities around the country have, over the years, become contaminated with radionuclides and a range of organic and inorganic wastes. Many of the DOE sites encompass large land areas and were originally sited in relatively unpopulated regions of the country to minimize risk to surrounding populations. In addition, wastes were sometimes stored underground at the sites in 55-gallon drums, wood boxes or other containers until final disposal methods could be determined. Over the years, these containers have deteriorated, releasing contaminants into the surrounding environment. This contamination has spread, in some cases polluting extensive areas. Remediation of these sites requires extensive sampling to determine the extent of the contamination, to monitor clean-up and remediation progress, and for post-closure monitoring of facilities. The DOE would benefit greatly if it had reliable, road transportable, fully independent laboratory systems that could perform on-site the full range of analyses required. Such systems would accelerate and thereby reduce the cost of clean-up and remediation efforts by (1) providing critical analytical data more rapidly, and (2) eliminating the handling, shipping and manpower associated with sample shipments. The goal of the Road Transportable Analytical Laboratory (RTAL) Project is the development and demonstration of a system to meet the unique needs of the DOE for rapid, accurate analysis of a wide variety of hazardous and radioactive contaminants in soil, groundwater, and surface waters. This laboratory system has been designed to provide the field and laboratory analytical equipment necessary to detect and quantify radionuclides, organics, heavy metals and other inorganic compounds. The laboratory system consists of a set of individual laboratory modules deployable independently or as an interconnected group to meet each DOE site's specific needs.
The Science of Analytic Reporting
Chinchor, Nancy; Pike, William A.
2009-09-23
The challenge of visually communicating analysis results is central to the ability of visual analytics tools to support decision making and knowledge construction. The benefit of emerging visual methods will be improved through more effective exchange of the insights generated through the use of visual analytics. This paper outlines the major requirements for next-generation reporting systems in terms of eight major research needs: the development of best practices, design automation, visual rhetoric, context and audience, connecting analysis to presentation, evidence and argument, collaborative environments, and interactive and dynamic documents. It also describes an emerging technology called Active Products that introduces new techniques for analytic process capture and dissemination.
Analytic barrage attack model. Final report, January 1986-January 1989
St Ledger, J.W.; Naegeli, R.E.; Dowden, N.A.
1989-01-01
An analytic model is developed for a nuclear barrage attack, assuming weapons with no aiming error and a cookie-cutter damage function. The model is then extended with approximations for the effects of aiming error and distance damage sigma. The final result is a fast running model which calculates probability of damage for a barrage attack. The probability of damage is accurate to within seven percent or better, for weapon reliabilities of 50 to 100 percent, distance damage sigmas of 0.5 or less, and zero to very large circular error probabilities. FORTRAN 77 coding is included in the report for the analytic model and for a numerical model used to check the analytic results.
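The report pairs its analytic model with a numerical model used as a cross-check. A minimal Monte Carlo sketch of that kind of check is given below, in Python rather than the report's FORTRAN 77; the function and parameter names are our own, and the setup (circular-normal aiming error specified by CEP, cookie-cutter lethality out to a fixed radius) is an assumed, simplified reading of the abstract.

```python
import math
import random

def p_damage_barrage(aimpoints, lethal_radius, cep, trials=20000, target=(0.0, 0.0)):
    """Monte Carlo estimate of the probability that a barrage damages a
    point target, using a cookie-cutter damage function: the target is
    damaged iff at least one burst lands within lethal_radius of it.
    Aiming error is circular normal, specified by the CEP."""
    sigma = cep / 1.17741  # CEP = sigma * sqrt(2 ln 2) for a circular normal
    damaged = 0
    for _ in range(trials):
        for ax, ay in aimpoints:
            bx = random.gauss(ax, sigma)
            by = random.gauss(ay, sigma)
            if math.hypot(bx - target[0], by - target[1]) <= lethal_radius:
                damaged += 1
                break  # cookie-cutter: one hit suffices
    return damaged / trials
```

A grid of aimpoints passed to `aimpoints` models the barrage pattern; shrinking `cep` toward zero recovers the no-aiming-error case the analytic model starts from.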
Piezoresistive Cantilever Performance—Part I: Analytical Model for Sensitivity
Park, Sung-Jin; Doll, Joseph C.; Pruitt, Beth L.
2010-01-01
An accurate analytical model for the change in resistance of a piezoresistor is necessary for the design of silicon piezoresistive transducers. Ion implantation requires a high-temperature oxidation or annealing process to activate the dopant atoms, and this treatment results in a distorted dopant profile due to diffusion. Existing analytical models do not account for the concentration dependence of piezoresistance and are not accurate for nonuniform dopant profiles. We extend previous analytical work by introducing two nondimensional factors, namely, the efficiency and geometry factors. A practical benefit of this efficiency factor is that it separates the process parameters from the design parameters; thus, designers may address requirements for cantilever geometry and fabrication process independently. To facilitate the design process, we provide a lookup table for the efficiency factor over an extensive range of process conditions. The model was validated by comparing simulation results with the experimentally determined sensitivities of piezoresistive cantilevers. We performed 9200 TSUPREM4 simulations and fabricated 50 devices from six unique process flows; we systematically explored the design space relating process parameters and cantilever sensitivity. Our treatment focuses on piezoresistive cantilevers, but the analytical sensitivity model is extensible to other piezoresistive transducers such as membrane pressure sensors. PMID:20336183
Analytical scatter kernels for portal imaging at 6 MV.
Spies, L; Bortfeld, T
2001-04-01
X-ray photon scatter kernels for 6 MV electronic portal imaging are investigated using an analytical and a semi-analytical model. The models are tested on homogeneous phantoms for a range of uniform circular fields and scatterer-to-detector air gaps relevant for clinical use. It is found that a fully analytical model based on an exact treatment of photons undergoing a single Compton scatter event and an approximate treatment of second and higher order scatter events, assuming a multiple-scatter source at the center of the scatter volume, is accurate within 1% (i.e., the residual scatter signal is less than 1% of the primary signal) for field sizes up to 100 cm2 and air gaps over 30 cm, but shows significant discrepancies for larger field sizes. Monte Carlo results are presented showing that the effective multiple-scatter source is located toward the exit surface of the scatterer, rather than at its center. A second model is therefore investigated where second and higher-order scattering is instead modeled by fitting an analytical function describing a nonstationary isotropic point-scatter source to Monte Carlo generated data. This second model is shown to be accurate to within 1% for air gaps down to 20 cm, for field sizes up to 900 cm2 and phantom thicknesses up to 50 cm. PMID:11339752
Nielsen, Marie Katrine Klose; Johansen, Sys Stybe; Linnet, Kristian
2014-01-01
Assessment of the total uncertainty of analytical methods for the measurement of drugs in human hair has mainly been derived from the analytical variation. However, in hair analysis several other sources of uncertainty contribute to the total uncertainty. Particularly in segmental hair analysis, pre-analytical variations associated with the sampling and segmentation may be significant factors in the assessment of the total uncertainty budget. The aim of this study was to develop and validate a method for the analysis of 31 common drugs in hair using ultra-high-performance liquid chromatography-tandem mass spectrometry (UHPLC-MS/MS), with focus on the assessment of both the analytical and pre-analytical sampling variations. The validated method was specific, accurate (80-120%), and precise (CV≤20%) across a wide linear concentration range from 0.025 to 25 ng/mg for most compounds. The analytical variation was estimated to be less than 15% for almost all compounds. The method was successfully applied to 25 segmented hair specimens from deceased drug addicts showing a broad pattern of poly-drug use. The pre-analytical sampling variation was estimated from genuine duplicate measurements of two bundles of hair collected from each subject, after subtraction of the analytical component. For the most frequently detected analytes, the pre-analytical variation was estimated to be 26-69%. Thus, the pre-analytical variation was 3-7-fold larger than the analytical variation (7-13%) and hence the dominant component in the total variation (29-70%). The present study demonstrated the importance of including the pre-analytical variation in the assessment of the total uncertainty budget and in the setting of the 95%-uncertainty interval (±2CV_T). Excluding the pre-analytical sampling variation could significantly affect the interpretation of results from segmental hair analysis. PMID:24378297
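The "subtraction of the analytical component" described above is, on the usual assumption of independent error sources, a subtraction of variances. A minimal sketch of that decomposition (the function name is ours; the study's exact computation may differ):

```python
import math

def cv_preanalytical(cv_total, cv_analytical):
    """Pre-analytical CV obtained by subtracting the analytical variance
    from the total variance, assuming independent error sources:
        CV_pre = sqrt(CV_total**2 - CV_analytical**2)"""
    return math.sqrt(cv_total**2 - cv_analytical**2)
```

With figures from the quoted ranges, `cv_preanalytical(0.29, 0.13)` is about 0.26, consistent with the lower end of the reported pre-analytical variation.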
Rectangular shape distributed piezoelectric actuator: analytical analysis
NASA Astrophysics Data System (ADS)
Sun, Bohua; Qiu, Yan
2004-04-01
This paper focuses on the development of distributed piezoelectric actuators (DPAs) with rectangular shapes using PZT materials. Analytical models of rectangular DPAs have been constructed in order to analyse and test the performance of DPA products. Firstly, based on the theory of electromagnetics, DPAs have been treated as a type of capacitor; the charge density distributed on the interdigitated electrodes (IDEs) applied in the actuators and the capacitance of the DPAs have been calculated, and the distribution and intensity of the electric field in the DPA element have also been calculated accurately and completely. Secondly, based on the piezoelectric constitutive relations and compound plate theory, models for the mechanical strain and stress fields of DPAs have been developed, and the performance of rectangular DPAs has been discussed. Finally, on the basis of the models developed in this paper, an improved design of a rectangular DPA is discussed and summarized. Because a minimum of hypotheses is used in the calculations, the distinguishing feature of this paper is the accurate determination of the distribution and intensity of the electric fields in DPAs. The proposed accurate calculations have not previously appeared in the literature, and can be used in DPA design and manufacturing processes to improve mechanical performance and reduce the cost of DPA products in further applications. All analyses and calculations in this paper were performed in MATLAB and MathCAD; the FEM results used for comparison were obtained with the ABAQUS program.
Analytical drafting curves provide exact equations for plotted data
NASA Technical Reports Server (NTRS)
Stewart, R. B.
1967-01-01
Analytical drafting curves provide explicit mathematical expressions for any numerical data that appears in the form of graphical plots. The curves each have a reference coordinate axis system indicated on the curve as well as the mathematical equation from which the curve was generated.
A simple analytic approximation for dusty Strömgren spheres.
NASA Technical Reports Server (NTRS)
Petrosian, V.; Silk, J.; Field, G. B.
1972-01-01
We interpret recent far-infrared observations of H II regions in terms of true absorption by internal dust of a significant fraction of the Lyman-continuum photons. We present approximate analytic expressions describing the effects of internal dust on the ionization structure of H II regions, and outline a procedure for deducing the properties of this dust from optical and infrared observations.
Polanski, A; Kimmel, M
2003-01-01
We present new methodology for calculating sampling distributions of single-nucleotide polymorphism (SNP) frequencies in populations with time-varying size. Our approach is based on deriving analytical expressions for frequencies of SNPs. Analytical expressions allow for computations that are faster and more accurate than Monte Carlo simulations. In contrast to other articles showing analytical formulas for frequencies of SNPs, we derive expressions that contain coefficients that do not explode when the genealogy size increases. We also provide analytical formulas to describe the way in which the ascertainment procedure modifies SNP distributions. Using our methods, we study the power to test the hypothesis of exponential population expansion vs. the hypothesis of evolution with constant population size. We also analyze some of the available SNP data, and we compare our estimates of demographic parameters to those obtained in previous population-genetics studies. The analyzed data seem consistent with the hypothesis of past population growth of modern humans. The analysis of the data also shows a very strong sensitivity of the estimated demographic parameters to changes in the model of the ascertainment procedure. PMID:14504247
Trends in Analytical Scale Separations.
ERIC Educational Resources Information Center
Jorgenson, James W.
1984-01-01
Discusses recent developments in the instrumentation and practice of analytical scale operations. Emphasizes detection devices and procedures in gas chromatography, liquid chromatography, electrophoresis, supercritical fluid chromatography, and field-flow fractionation. (JN)
Liposomes: Technologies and Analytical Applications
NASA Astrophysics Data System (ADS)
Jesorka, Aldo; Orwar, Owe
2008-07-01
Liposomes are structurally and functionally some of the most versatile supramolecular assemblies in existence. Since the beginning of active research on lipid vesicles in 1965, the field has progressed enormously and applications are well established in several areas, such as drug and gene delivery. In the analytical sciences, liposomes serve a dual purpose: Either they are analytes, typically in quality-assessment procedures of liposome preparations, or they are functional components in a variety of new analytical systems. Liposome immunoassays, for example, benefit greatly from the amplification provided by encapsulated markers, and nanotube-interconnected liposome networks have emerged as ultrasmall-scale analytical devices. This review provides information about new developments in some of the most actively researched liposome-related topics.
Laboratory Workhorse: The Analytical Balance.
ERIC Educational Resources Information Center
Clark, Douglas W.
1979-01-01
This report explains the importance of various analytical balances in the water or wastewater laboratory. Stressed is the proper procedure for utilizing the equipment as well as the mechanics involved in its operation. (CS)
Analytic Methods in Investigative Geometry.
ERIC Educational Resources Information Center
Dobbs, David E.
2001-01-01
Suggests an alternative proof by analytic methods, which is more accessible than rigorous proof based on Euclid's Elements, in which students need only apply standard methods of trigonometry to the data without introducing new points or lines. (KHR)
Analytical Chemistry: A Literary Approach.
ERIC Educational Resources Information Center
Lucy, Charles A.
2000-01-01
Provides an anthology of references to descriptions of analytical chemistry techniques from history, popular fiction, and film which can be used to capture student interest and frame discussions of chemical techniques. (WRM)
Cautions Concerning Electronic Analytical Balances.
ERIC Educational Resources Information Center
Johnson, Bruce B.; Wells, John D.
1986-01-01
Cautions chemists to be wary of ferromagnetic samples (especially magnetized samples), stray electromagnetic radiation, dusty environments, and changing weather conditions. These and other conditions may alter readings obtained from electronic analytical balances. (JN)
Accurate and efficient linear scaling DFT calculations with universal applicability.
Mohr, Stephan; Ratcliff, Laura E; Genovese, Luigi; Caliste, Damien; Boulanger, Paul; Goedecker, Stefan; Deutsch, Thierry
2015-12-21
Density functional theory calculations are computationally extremely expensive for systems containing many atoms due to their intrinsic cubic scaling. This fact has led to the development of so-called linear scaling algorithms during the last few decades. In this way it becomes possible to perform ab initio calculations for several tens of thousands of atoms within reasonable walltimes. However, even though the use of linear scaling algorithms is physically well justified, their implementation often introduces some small errors. Consequently most implementations offering such a linear complexity either yield only a limited accuracy or, if one wants to go beyond this restriction, require a tedious fine tuning of many parameters. In our linear scaling approach within the BigDFT package, we were able to overcome this restriction. Using an ansatz based on localized support functions expressed in an underlying Daubechies wavelet basis - which offers ideal properties for accurate linear scaling calculations - we obtain an amazingly high accuracy and a universal applicability, while still keeping the possibility of simulating large systems with linear-scaling walltimes and only a moderate demand on computing resources. We prove the effectiveness of our method on a wide variety of systems with different boundary conditions, for single-point calculations as well as for geometry optimizations and molecular dynamics. PMID:25958954
How Accurate Are Transition States from Simulations of Enzymatic Reactions?
2015-01-01
The rate expression of traditional transition state theory (TST) assumes no recrossing of the transition state (TS) and thermal quasi-equilibrium between the ground state and the TS. Currently, it is not well understood to what extent these assumptions influence the nature of the activated complex obtained in traditional TST-based simulations of processes in the condensed phase in general and in enzymes in particular. Here we scrutinize these assumptions by characterizing the TSs for hydride transfer catalyzed by the enzyme Escherichia coli dihydrofolate reductase obtained using various simulation approaches. Specifically, we compare the TSs obtained with common TST-based methods and a dynamics-based method. Using a recently developed accurate hybrid quantum mechanics/molecular mechanics potential, we find that the TST-based and dynamics-based methods give considerably different TS ensembles. This discrepancy, which could be due to equilibrium solvation effects and to the nature of the reaction coordinate employed and its motion, raises major questions about how to interpret the TSs determined by common simulation methods. We conclude that further investigation is needed to characterize the impact of various TST assumptions on the TS phase-space ensemble and on the reaction kinetics. PMID:24860275
AN ACCURATE FLUX DENSITY SCALE FROM 1 TO 50 GHz
Perley, R. A.; Butler, B. J.
2013-02-15
We develop an absolute flux density scale for centimeter-wavelength astronomy by combining accurate flux density ratios determined by the Very Large Array between the planet Mars and a set of potential calibrators with the Rudy thermophysical emission model of Mars, adjusted to the absolute scale established by the Wilkinson Microwave Anisotropy Probe. The radio sources 3C123, 3C196, 3C286, and 3C295 are found to be varying at a level of less than ~5% per century at all frequencies between 1 and 50 GHz, and hence are suitable as flux density standards. We present polynomial expressions for their spectral flux densities, valid from 1 to 50 GHz, with absolute accuracy estimated at 1%-3% depending on frequency. Of the four sources, 3C286 is the most compact and has the flattest spectral index, making it the most suitable object on which to establish the spectral flux density scale. The sources 3C48, 3C138, 3C147, NGC 7027, NGC 6572, and MWC 349 show significant variability on various timescales. Polynomial coefficients for the spectral flux density are developed for 3C48, 3C138, and 3C147 for each of the 17 observation dates, spanning 1983-2012. The planets Venus, Uranus, and Neptune are included in our observations, and we derive their brightness temperatures over the same frequency range.
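Flux-scale polynomials of this kind are conventionally expressed in log-log space. A sketch of evaluating such a scale follows; the functional form is the standard one, but the coefficients used in the test are placeholders, not the paper's fitted values.

```python
import math

def flux_density_jy(coeffs, freq_ghz):
    """Evaluate a log-log polynomial flux density scale,
        log10(S / Jy) = a0 + a1*x + a2*x**2 + ...,  x = log10(f / GHz),
    with coeffs = [a0, a1, a2, ...]."""
    x = math.log10(freq_ghz)
    return 10.0 ** sum(a * x**i for i, a in enumerate(coeffs))
```

For example, a single coefficient `[a0]` describes a flat-spectrum source of constant flux density 10**a0 Jy.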
Accurate Prediction of Binding Thermodynamics for DNA on Surfaces
Vainrub, Arnold; Pettitt, B. Montgomery
2011-01-01
For DNA mounted on surfaces for microarrays, microbeads and nanoparticles, the nature of the random attachment of oligonucleotide probes to an amorphous surface gives rise to a locally inhomogeneous probe density. These fluctuations of the probe surface density are inherent to all common surface or bead platforms, regardless if they exploit either an attachment of pre-synthesized probes or probes synthesized in situ on the surface. Here, we demonstrate for the first time the crucial role of the probe surface density fluctuations in performance of DNA arrays. We account for the density fluctuations with a disordered two-dimensional surface model and derive the corresponding array hybridization isotherm that includes a counter-ion screened electrostatic repulsion between the assayed DNA and probe array. The calculated melting curves are in excellent agreement with published experimental results for arrays with both pre-synthesized and in-situ synthesized oligonucleotide probes. The approach developed allows one to accurately predict the melting curves of DNA arrays using only the known sequence dependent hybridization enthalpy and entropy in solution and the experimental macroscopic surface density of probes. This opens the way to high precision theoretical design and optimization of probes and primers in widely used DNA array-based high-throughput technologies for gene expression, genotyping, next-generation sequencing, and surface polymerase extension. PMID:21972932
Functionalized magnetic nanoparticle analyte sensor
Yantasee, Wassana; Warner, Marvin G; Warner, Cynthia L; Addleman, Raymond S; Fryxell, Glen E; Timchalk, Charles; Toloczko, Mychailo B
2014-03-25
A method and system for simply and efficiently determining quantities of a preselected material in a particular solution by placing at least one superparamagnetic nanoparticle, having a specified functionalized organic material connected thereto, into a sample solution. Preselected analytes attach to the functionalized organic groups, and the superparamagnetic nanoparticles are then collected at a collection site and analyzed for the presence of a particular analyte.
Visual Analytics Technology Transition Progress
Scholtz, Jean; Cook, Kristin A.; Whiting, Mark A.; Lemon, Douglas K.; Greenblatt, Howard
2009-09-23
The authors describe the transition process for visual analytics tools and contrast it with the transition process for more traditional software tools. Building on this comparison, the paper describes a user-oriented approach to technology transition, including a discussion of key factors that should be considered and adapted to each situation. The progress made in transitioning visual analytics tools in the past five years is described, and the challenges that remain are enumerated.
Analytical multikinks in smooth potentials
NASA Astrophysics Data System (ADS)
de Brito, G. P.; Correa, R. A. C.; de Souza Dutra, A.
2014-03-01
In this work we present an approach that can be systematically used to construct nonlinear systems possessing analytical multikink profile configurations. In contrast with previous approaches to the problem, we are able to do it by using field potentials that are considerably smoother than the ones of the doubly quadratic family of potentials. This is done without losing the capacity of writing exact analytical solutions. The resulting field configurations can be applied to the study of problems from condensed matter to braneworld scenarios.
An analytical method for Mathieu oscillator based on method of variation of parameter
NASA Astrophysics Data System (ADS)
Li, Xianghong; Hou, Jingyu; Chen, Jufeng
2016-08-01
A simple but very accurate analytical method for the forced Mathieu oscillator is proposed, based on the method of variation of parameter. If the time-varying parameter in the Mathieu oscillator is assumed constant, its accurate analytical solution is easily obtained. An approximate analytical solution for the Mathieu oscillator is then established by substituting the periodic time-varying parameter for the constant one in the accurate analytical solution. To verify the correctness and precision of the proposed method, the first-order and ninth-order approximate solutions obtained by the harmonic balance method (HBM) are also presented. Comparisons of the results from the proposed method with those from numerical simulation and the HBM show that the proposed analytical method agrees very well with numerical simulation. Moreover, the precision of the proposed method is not only higher than that of the first-order HBM approximation, but also better than that of the ninth-order HBM approximation over large ranges of the system parameters.
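The substitution idea in the abstract can be illustrated on a forced Mathieu equation x'' + (δ + ε cos t)x = F cos Ωt: the frozen, constant-parameter problem x'' + cx = F cos Ωt has the steady-state particular solution F cos(Ωt)/(c − Ω²), and re-inserting c(t) = δ + ε cos t gives an approximate analytical response. The sketch below is our own minimal rendering of that idea, not the paper's exact formulas.

```python
import math

def mathieu_forced_approx(t, delta, eps, F, Omega):
    """Approximate steady-state response of the forced Mathieu oscillator
        x'' + (delta + eps*cos t) * x = F * cos(Omega * t).
    Frozen-parameter step: for constant c, the particular solution of
    x'' + c*x = F*cos(Omega*t) is F*cos(Omega*t) / (c - Omega**2);
    the time-varying parameter is then substituted back for c."""
    c_t = delta + eps * math.cos(t)
    return F * math.cos(Omega * t) / (c_t - Omega**2)
```

For eps = 0 the expression reduces exactly to the constant-coefficient solution, which is the consistency check one would run first.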
Leng, Wei; Ju, Lili; Gunzburger, Max; Price, Stephen; Ringler, Todd
2012-01-01
The numerical modeling of glacier and ice sheet evolution is a subject of growing interest, in part because of the potential for models to inform estimates of global sea level change. This paper focuses on the development of a numerical model that determines the velocity and pressure fields within an ice sheet. Our numerical model features a high-fidelity mathematical model involving the nonlinear Stokes system and combinations of no-sliding and sliding basal boundary conditions, high-order accurate finite element discretizations based on variable resolution grids, and highly scalable parallel solution strategies, all of which contribute to a numerical model that can achieve accurate velocity and pressure approximations in a highly efficient manner. We demonstrate the accuracy and efficiency of our model by analytical solution tests, established ice sheet benchmark experiments, and comparisons with other well-established ice sheet models.
Accurate verification of the conserved-vector-current and standard-model predictions
Sirlin, A.; Zucchini, R.
1986-10-20
An approximate analytic calculation of O(Zα²) corrections to Fermi decays is presented. When the analysis of Koslowsky et al. is modified to take into account the new results, it is found that each of the eight accurately studied ℱt values differs from the average by ≲1σ, thus significantly improving the comparison of experiments with conserved-vector-current predictions. The new ℱt values are lower than before, which also brings experiments into very good agreement with the three-generation standard model, at the level of its quantum corrections.
Statistically Qualified Neuro-Analytic system and Method for Process Monitoring
Vilim, Richard B.; Garcia, Humberto E.; Chen, Frederick W.
1998-11-04
An apparatus and method for monitoring a process involves development and application of a statistically qualified neuro-analytic (SQNA) model to accurately and reliably identify process change. The development of the SQNA model is accomplished in two steps: deterministic model adaptation and stochastic model adaptation. Deterministic model adaptation involves formulating an analytic model of the process representing known process characteristics, augmenting the analytic model with a neural network that captures unknown process characteristics, and training the resulting neuro-analytic model by adjusting the neural network weights according to a unique scaled equation error minimization technique. Stochastic model adaptation involves qualifying any remaining uncertainty in the trained neuro-analytic model by formulating a likelihood function, given an error propagation equation, for computing the probability that the neuro-analytic model generates the measured process output. Preferably, the developed SQNA model is validated using known sequential probability ratio tests and applied to the process as an on-line monitoring system.
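The sequential probability ratio tests mentioned for on-line monitoring follow Wald's classic scheme: accumulate a log-likelihood ratio over model residuals and stop when it crosses either decision threshold. A generic sketch for a Gaussian mean-shift alternative is given below; the hypotheses and thresholds are illustrative, not the patent's specific configuration.

```python
import math

def sprt_mean_shift(samples, mu0, mu1, sigma, alpha=0.01, beta=0.01):
    """Wald sequential probability ratio test for a shift of a Gaussian
    mean from mu0 (normal operation) to mu1 (process change).
    Returns (decision, number_of_samples_used)."""
    lower = math.log(beta / (1.0 - alpha))   # crossing -> accept H0
    upper = math.log((1.0 - beta) / alpha)   # crossing -> accept H1
    llr = 0.0
    for k, x in enumerate(samples, start=1):
        # log f1(x) - log f0(x) for Gaussians with common sigma
        llr += ((x - mu0)**2 - (x - mu1)**2) / (2.0 * sigma**2)
        if llr <= lower:
            return ("H0", k)
        if llr >= upper:
            return ("H1", k)
    return ("undecided", len(samples))
```

In a monitoring setting, `samples` would be the residuals between measured process output and the SQNA model's prediction, fed in as they arrive.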
A spectroscopic transfer standard for accurate atmospheric CO measurements
NASA Astrophysics Data System (ADS)
Nwaboh, Javis A.; Li, Gang; Serdyukov, Anton; Werhahn, Olav; Ebert, Volker
2016-04-01
Atmospheric carbon monoxide (CO) is a precursor of essential climate variables and indirectly enhances global warming. Accurate and reliable measurements of atmospheric CO concentration are becoming indispensable. WMO-GAW reports state a compatibility goal of ±2 ppb for atmospheric CO concentration measurements. Therefore, the EMRP-HIGHGAS (European metrology research program - high-impact greenhouse gases) project aims at developing spectroscopic transfer standards for CO concentration measurements to meet this goal. A spectroscopic transfer standard would provide results that are directly traceable to the SI, can be very useful for calibration of devices operating in the field, and could complement classical gas standards in the field, where calibration gas mixtures in bottles often are not accurate, available, or stable enough [1][2]. Here, we present our new direct tunable diode laser absorption spectroscopy (dTDLAS) sensor capable of performing absolute ("calibration free") CO concentration measurements, and being operated as a spectroscopic transfer standard. To achieve the compatibility goal stated by WMO for CO concentration measurements and ensure the traceability of the final concentration results, traceable spectral line data, especially line intensities with appropriate uncertainties, are needed. Therefore, we utilize our new high-resolution Fourier-transform infrared (FTIR) spectroscopy CO line data for the 2-0 band, with significantly reduced uncertainties, for the dTDLAS data evaluation. Further, we demonstrate the capability of our sensor for atmospheric CO measurements, discuss uncertainty calculation following the guide to the expression of uncertainty in measurement (GUM) principles, and show that CO concentrations derived using the sensor, based on the TILSAM (traceable infrared laser spectroscopic amount fraction measurement) method, are in excellent agreement with gravimetric values.
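At its core, calibration-free dTDLAS retrieves the amount fraction from a Beer-Lambert evaluation of the measured line area together with the traceable line intensity. The sketch below is a simplified, idealized form of such an evaluation (our own function names and unit choices; the full TILSAM method includes line-shape and instrument corrections omitted here).

```python
KB = 1.380649e-23  # Boltzmann constant, J/K (exact, SI)

def amount_fraction(area, s_line, path_cm, p_pa, t_k):
    """Idealized calibration-free (dTDLAS-style) evaluation.
    The integrated absorbance obeys  area = x * n * S * L  with
      area : integrated absorbance [cm^-1]
      S    : line intensity [cm^-1 / (molecule cm^-2)]
      L    : absorption path length [cm]
      n    : total number density p/(kB*T) [molecule cm^-3]
    so the amount fraction is x = area / (n * S * L)."""
    n_total = p_pa / (KB * t_k) * 1e-6  # molecule/m^3 -> molecule/cm^3
    return area / (n_total * s_line * path_cm)
```

Because every input is either measured (p, T, L, area) or taken from traceable line data (S), the result needs no calibration gas, which is what makes the sensor usable as a transfer standard.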
A parsimonious analytical model for simulating multispecies plume migration
NASA Astrophysics Data System (ADS)
Chen, J.-S.; Liang, C.-P.; Liu, C.-W.; Li, L. Y.
2015-09-01
A parsimonious analytical model for rapidly predicting the long-term plume behavior of decaying contaminants such as radionuclides and dissolved chlorinated solvents is presented in this study. Generalized analytical solutions in compact format are derived for the two-dimensional advection-dispersion equations coupled with sequential first-order decay reactions involving an arbitrary number of species in a groundwater system. The solution techniques involve the sequential applications of the Laplace, finite Fourier cosine, and generalized integral transforms to reduce the coupled partial differential equation system to a set of linear algebraic equations. The system of algebraic equations is next solved for each species in the transformed domain, and the solutions in the original domain are then obtained through consecutive integral transform inversions. Explicit form solutions for a special case are derived using the generalized analytical solutions and are verified against the numerical solutions. The analytical results indicate that the parsimonious analytical solutions are robust and accurate. The solutions are useful as simulation or screening tools for assessing the plume behavior of decaying contaminants, including radionuclides and dissolved chlorinated solvents, in groundwater systems.
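The sequential first-order decay coupling at the heart of such models has a classical closed form for the reaction part alone (the Bateman solution); the transport terms are what the integral-transform machinery adds. A sketch of the reaction-only solution, assuming distinct decay constants and all initial mass in the first species (function name ours):

```python
import math

def bateman_chain(c0, ks, t):
    """Concentrations at time t for the sequential first-order decay
    chain S1 -> S2 -> ... -> Sn, with all initial mass c0 in S1 and
    distinct decay constants ks[i]; transport terms are omitted."""
    n = len(ks)
    out = []
    for i in range(1, n + 1):
        coef = c0 * math.prod(ks[:i - 1])   # product of upstream rates
        total = 0.0
        for j in range(i):
            denom = 1.0
            for l in range(i):
                if l != j:
                    denom *= (ks[l] - ks[j])
            total += math.exp(-ks[j] * t) / denom
        out.append(coef * total)
    return out
```

For a two-species chain this reproduces the familiar c0·k1·(e^(-k1·t) − e^(-k2·t))/(k2 − k1) for the daughter product.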
Analytical methods for quantitation of prenylated flavonoids from hops
Nikolić, Dejan; van Breemen, Richard B.
2013-01-01
The female flowers of hops (Humulus lupulus L.) are used as a flavoring agent in the brewing industry. There is growing interest in possible health benefits of hops, particularly as estrogenic and chemopreventive agents. Among the possible active constituents, most of the attention has focused on prenylated flavonoids, which can chemically be classified as prenylated chalcones and prenylated flavanones. Among chalcones, xanthohumol (XN) and desmethylxanthohumol (DMX) have been the most studied, while among flavanones, 8-prenylnaringenin (8-PN) and 6-prenylnaringenin (6-PN) have received the most attention. Because of the interest in medicinal properties of prenylated flavonoids, there is demand for accurate, reproducible and sensitive analytical methods to quantify these compounds in various matrices. Such methods are needed, for example, for quality control and standardization of hop extracts, measurement of the content of prenylated flavonoids in beer, and to determine pharmacokinetic properties of prenylated flavonoids in animals and humans. This review summarizes currently available analytical methods for quantitative analysis of the major prenylated flavonoids, with an emphasis on the LC-MS and LC-MS-MS methods and their recent applications to biomedical research on hops. This review covers all methods in which prenylated flavonoids have been measured, either as the primary analytes or as a part of a larger group of analytes. The review also discusses methodological issues relating to the quantitative analysis of these compounds regardless of the chosen analytical approach. PMID:24077106
Analytical advantages of multivariate data processing. One, two, three, infinity?
Olivieri, Alejandro C
2008-08-01
Multidimensional data are being abundantly produced by modern analytical instrumentation, calling for new and powerful data-processing techniques. Research in the last two decades has resulted in the development of a multitude of different processing algorithms, each equipped with its own sophisticated artillery. Analysts have slowly discovered that this body of knowledge can be appropriately classified, and that common aspects pervade all these seemingly different ways of analyzing data. As a result, going from univariate data (a single datum per sample, employed in the well-known classical univariate calibration) to multivariate data (data arrays per sample of increasingly complex structure and number of dimensions) is known to provide a gain in sensitivity and selectivity, combined with analytical advantages which cannot be overestimated. The first-order advantage, achieved using vector sample data, allows analysts to flag new samples which cannot be adequately modeled with the current calibration set. The second-order advantage, achieved with second- (or higher-) order sample data, allows one not only to mark new samples containing components which do not occur in the calibration phase but also to model their contribution to the overall signal, and most importantly, to accurately quantitate the calibrated analyte(s). No additional analytical advantages appear to be known for third-order data processing. Future research may make it possible, among other interesting issues, to assess whether this "1, 2, 3, infinity" situation of multivariate calibration is really true. PMID:18613646
Climate Analytics as a Service
NASA Technical Reports Server (NTRS)
Schnase, John L.; Duffy, Daniel Q.; McInerney, Mark A.; Webster, W. Phillip; Lee, Tsengdar J.
2014-01-01
Climate science is a big data domain that is experiencing unprecedented growth. In our efforts to address the big data challenges of climate science, we are moving toward a notion of Climate Analytics-as-a-Service (CAaaS). CAaaS combines high-performance computing and data-proximal analytics with scalable data management, cloud computing virtualization, the notion of adaptive analytics, and a domain-harmonized API to improve the accessibility and usability of large collections of climate data. MERRA Analytic Services (MERRA/AS) provides an example of CAaaS. MERRA/AS enables MapReduce analytics over NASA's Modern-Era Retrospective Analysis for Research and Applications (MERRA) data collection. The MERRA reanalysis integrates observational data with numerical models to produce a global temporally and spatially consistent synthesis of key climate variables. The effectiveness of MERRA/AS has been demonstrated in several applications. In our experience, CAaaS is providing the agility required to meet our customers' increasing and changing data management and data analysis needs.
The transfer of analytical procedures.
Ermer, J; Limberger, M; Lis, K; Wätzig, H
2013-11-01
Analytical method transfers are certainly among the most discussed topics in the GMP regulated sector. However, they are surprisingly little regulated in detail. General information is provided by USP, WHO, and ISPE in particular. Most recently, the EU emphasized the importance of analytical transfer by including it in their draft of the revised GMP Guideline. In this article, an overview and comparison of these guidelines is provided. The key to success for method transfers is excellent communication between the sending and receiving unit. To facilitate this communication, procedures, flow charts, and checklists are provided here for responsibilities, success factors, transfer categories, the transfer plan and report, and strategies in case of failed transfers, along with tables of acceptance limits and a comprehensive glossary. Potential pitfalls are described such that they can be avoided. In order to assure an efficient and sustainable transfer of analytical procedures, a practically relevant and scientifically sound evaluation with corresponding acceptance criteria is crucial. Various strategies and statistical tools such as significance tests, absolute acceptance criteria, and equivalence tests are thoroughly described and compared in detail, with examples. Significance tests should be avoided. The success criterion is not statistical significance, but rather analytical relevance. Depending on a risk assessment of the analytical procedure in question, statistical equivalence tests are recommended, because they include both a practically relevant acceptance limit and a direct control of the statistical risks. However, for lower risk procedures, a simple comparison of the transfer performance parameters to absolute limits is also regarded as sufficient. PMID:23978903
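The recommended equivalence-testing strategy can be sketched in its confidence-interval form (the two one-sided tests, TOST): the transfer passes when the 90% confidence interval of the mean difference between laboratories lies entirely within the acceptance limits ±θ. A minimal pooled-variance sketch; the t quantile is supplied by the caller (e.g. from tables), and the acceptance limit θ is the analytically relevant one set by risk assessment:

```python
import math

def equivalence_transfer(sending, receiving, theta, t_crit):
    """TOST in confidence-interval form for an analytical method transfer:
    equivalence is concluded when the 90% CI of the mean difference
    (receiving - sending) lies within +/- theta. t_crit is the one-sided
    95% t quantile for nx + ny - 2 degrees of freedom (caller-supplied).
    Pooled-variance assumption."""
    nx, ny = len(sending), len(receiving)
    mx, my = sum(sending) / nx, sum(receiving) / ny
    vx = sum((v - mx) ** 2 for v in sending) / (nx - 1)
    vy = sum((v - my) ** 2 for v in receiving) / (ny - 1)
    sp2 = ((nx - 1) * vx + (ny - 1) * vy) / (nx + ny - 2)  # pooled variance
    se = math.sqrt(sp2 * (1 / nx + 1 / ny))
    diff = my - mx
    lo, hi = diff - t_crit * se, diff + t_crit * se
    return diff, lo, hi, (-theta < lo and hi < theta)
```

Note the asymmetry with significance testing: a tiny θ makes equivalence hard to show even for identical means, which is exactly the direct control of risk the article recommends.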
Accurate and semi-automated analysis of bacterial association with mammalian cells.
Murphy, C M; Paré, S; Galea, G; Simpson, J C; Smith, S G J
2016-03-01
To efficiently and accurately quantify the interactions of bacteria with mammalian cells, a reliable fluorescence microscopy assay was developed. Bacteria were engineered to become rapidly and stably fluorescent using Green Fluorescent Protein (GFP) expressed from an inducible Tet promoter. Upon application of the fluorescent bacteria onto a monolayer, extracellular bacteria could be discriminated from intracellular bacteria by antibody staining and microscopy. All bacteria could be detected by GFP expression. External bacteria stained orange, whereas internalised bacteria did not. Internalised bacteria could thus be discriminated from external bacteria by virtue of being green but not orange fluorescent. Image acquisition and counting of various fluorophore-stained entities were accomplished with a high-content screening platform. This allowed for semi-automated and accurate counting of intracellular and extracellular bacteria. PMID:26769557
Accurate modelling of flow induced stresses in rigid colloidal aggregates
NASA Astrophysics Data System (ADS)
Vanni, Marco
2015-07-01
A method has been developed to estimate the motion and the internal stresses induced by a fluid flow on a rigid aggregate. The approach couples Stokesian dynamics and structural mechanics in order to accurately account for the effect of the complex geometry of the aggregates on hydrodynamic forces and the internal redistribution of stresses. The intrinsic error of the method, due to the low-order truncation of the multipole expansion of the Stokes solution, has been assessed by comparison with the analytical solution for the case of a doublet in a shear flow. In addition, it has been shown that the error becomes smaller as the number of primary particles in the aggregate increases and hence it is expected to be negligible for realistic reproductions of large aggregates. The evaluation of internal forces is performed by an adaptation of the matrix methods of structural mechanics to the geometric features of the aggregates and to the particular stress-strain relationship that occurs at intermonomer contacts. A preliminary investigation on the stress distribution in rigid aggregates and their mode of breakup has been performed by studying the response to an elongational flow of both realistic reproductions of colloidal aggregates (made of several hundred monomers) and highly simplified structures. A very different behaviour has been evidenced between low-density aggregates with isostatic or weakly hyperstatic structures and compact aggregates with highly hyperstatic configuration. In low-density clusters breakup is caused directly by the failure of the most stressed intermonomer contact, which is typically located in the inner region of the aggregate and hence originates the birth of fragments of similar size. On the contrary, breakup of compact and highly cross-linked clusters is seldom caused by the failure of a single bond. When this happens, it proceeds through the removal of a tiny fragment from the external part of the structure. More commonly, however
Fast and accurate predictions of covalent bonds in chemical space.
Chang, K Y Samuel; Fias, Stijn; Ramakrishnan, Raghunathan; von Lilienfeld, O Anatole
2016-05-01
We assess the predictive accuracy of perturbation theory based estimates of changes in covalent bonding due to linear alchemical interpolations among molecules. We have investigated σ bonding to hydrogen, as well as σ and π bonding between main-group elements, occurring in small sets of iso-valence-electronic molecules with elements drawn from second to fourth rows in the p-block of the periodic table. Numerical evidence suggests that first order Taylor expansions of covalent bonding potentials can achieve high accuracy if (i) the alchemical interpolation is vertical (fixed geometry), (ii) it involves elements from the third and fourth rows of the periodic table, and (iii) an optimal reference geometry is used. This leads to near linear changes in the bonding potential, resulting in analytical predictions with chemical accuracy (∼1 kcal/mol). Second order estimates deteriorate the prediction. If initial and final molecules differ not only in composition but also in geometry, all estimates become substantially worse, with second order being slightly more accurate than first order. The independent particle approximation based second order perturbation theory performs poorly when compared to the coupled perturbed or finite difference approach. Taylor series expansions up to fourth order of the potential energy curve of highly symmetric systems indicate a finite radius of convergence, as illustrated for the alchemical stretching of H2 (+). Results are presented for (i) covalent bonds to hydrogen in 12 molecules with 8 valence electrons (CH4, NH3, H2O, HF, SiH4, PH3, H2S, HCl, GeH4, AsH3, H2Se, HBr); (ii) main-group single bonds in 9 molecules with 14 valence electrons (CH3F, CH3Cl, CH3Br, SiH3F, SiH3Cl, SiH3Br, GeH3F, GeH3Cl, GeH3Br); (iii) main-group double bonds in 9 molecules with 12 valence electrons (CH2O, CH2S, CH2Se, SiH2O, SiH2S, SiH2Se, GeH2O, GeH2S, GeH2Se); (iv) main-group triple bonds in 9 molecules with 10 valence electrons (HCN, HCP, HCAs, HSiN, HSi
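The first-order "vertical" estimate described above can be illustrated with a toy stand-in for the ab initio bonding potentials: interpolate the parameters of a Morse potential linearly in λ between reference A and target B at fixed geometry, and extrapolate from λ = 0 with a first-order Taylor step. The Morse form and all parameter values here are illustrative assumptions, not the paper's quantum-chemical potentials:

```python
import math

def morse(r, De, a, re):
    """Morse bond potential (units are the caller's choice)."""
    return De * (1.0 - math.exp(-a * (r - re))) ** 2 - De

def first_order_estimate(r, pA, pB, h=1e-6):
    """First-order Taylor ('alchemical') estimate of the bond energy of
    target B from reference A, interpolating the potential parameters
    (De, a, re) linearly in lambda at fixed ('vertical') geometry r."""
    def E(lam):
        p = [a0 + lam * (b - a0) for a0, b in zip(pA, pB)]
        return morse(r, *p)
    # E(1) estimated as E(0) + dE/dlambda|0, derivative by finite difference
    return E(0.0) + (E(h) - E(0.0)) / h
```

For nearby parameter sets the first-order estimate lands much closer to the exact target energy than the zeroth-order (reference) energy, mirroring the near-linear behaviour reported for vertical interpolations.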
Panuwet, Parinya; Hunter, Ronald E.; D’Souza, Priya E.; Chen, Xianyu; Radford, Samantha A.; Cohen, Jordan R.; Marder, M. Elizabeth; Kartavenka, Kostya; Ryan, P. Barry; Barr, Dana Boyd
2015-01-01
The ability to quantify levels of target analytes in biological samples accurately and precisely, in biomonitoring, involves the use of highly sensitive and selective instrumentation such as tandem mass spectrometers and a thorough understanding of highly variable matrix effects. Typically, matrix effects are caused by co-eluting matrix components that alter the ionization of target analytes as well as the chromatographic response of target analytes, leading to reduced or increased sensitivity of the analysis. Thus, before the desired accuracy and precision standards of laboratory data are achieved, these effects must be characterized and controlled. Here we present our review and observations of matrix effects encountered during the validation and implementation of tandem mass spectrometry-based analytical methods. We also provide systematic, comprehensive laboratory strategies needed to control challenges posed by matrix effects in order to ensure delivery of the most accurate data for biomonitoring studies assessing exposure to environmental toxicants. PMID:25562585
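One standard way to characterize such matrix effects is to compare calibration slopes in neat solvent versus post-extraction-spiked matrix (a common convention in LC-MS/MS validation, sketched here as an assumption rather than the exact protocol of any particular method in the review):

```python
def ls_slope(x, y):
    """Least-squares slope of y on x (intercept allowed)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    return (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
            / sum((xi - mx) ** 2 for xi in x))

def matrix_effect_percent(conc, resp_solvent, resp_matrix):
    """Matrix effect expressed as 100 * (slope in post-extraction-spiked
    matrix / slope in neat solvent). 100% means no matrix effect;
    < 100% indicates ion suppression, > 100% ionization enhancement."""
    return 100.0 * ls_slope(conc, resp_matrix) / ls_slope(conc, resp_solvent)
```

Characterizing this ratio per matrix lot is one of the controls that lets a laboratory decide whether isotope-labeled internal standards or matrix-matched calibration are needed.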
An Analytical Model for the Influence of Contact Resistance on Thermoelectric Efficiency
NASA Astrophysics Data System (ADS)
Bjørk, Rasmus
2016-03-01
An analytical model is presented that can account for both electrical and hot and cold thermal contact resistances when calculating the efficiency of a thermoelectric generator. The model is compared to a numerical model of a thermoelectric leg for 16 different thermoelectric materials, as well as to the analytical models of Ebling et al. (J Electron Mater 39:1376, 2010) and Min and Rowe (J Power Sour 38:253, 1992). The model presented here is shown to accurately calculate the efficiency for all systems and all contact resistances considered, with an average difference in efficiency between the numerical model and the analytical model of -0.07 ± 0.35 pp. This makes the model more accurate than previously published models. The maximum absolute difference in efficiency between the analytical model and the numerical model is 1.14 pp for all materials and all contact resistances considered.
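For reference, the contact-free limit that any such model must recover is the classical maximum-efficiency expression for a thermoelectric generator with device figure of merit zT. This is the textbook benchmark, not Bjørk's contact-resistance model itself:

```python
import math

def ideal_te_efficiency(Th, Tc, zT):
    """Classical maximum efficiency of a thermoelectric generator with
    (temperature-averaged) figure of merit zT and no electrical or
    thermal contact resistances:
        eta = eta_Carnot * (sqrt(1 + zT) - 1) / (sqrt(1 + zT) + Tc/Th)."""
    carnot = (Th - Tc) / Th
    m = math.sqrt(1.0 + zT)
    return carnot * (m - 1.0) / (m + Tc / Th)
```

Contact resistances reduce the temperature drop and electrical output available to the leg, so a contact-resistance model yields efficiencies below this ideal value, approaching it as the contacts vanish.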
Semi-Analytic Reconstruction of Flux in Finite Volume Formulations
NASA Technical Reports Server (NTRS)
Gnoffo, Peter A.
2006-01-01
Semi-analytic reconstruction uses the analytic solution to a second-order, steady, ordinary differential equation (ODE) to simultaneously evaluate the convective and diffusive flux at all interfaces of a finite volume formulation. The second-order ODE is itself a linearized approximation to the governing first- and second- order partial differential equation conservation laws. Thus, semi-analytic reconstruction defines a family of formulations for finite volume interface fluxes using analytic solutions to approximating equations. Limiters are not applied in a conventional sense; rather, diffusivity is adjusted in the vicinity of changes in sign of eigenvalues in order to achieve a sufficiently small cell Reynolds number in the analytic formulation across critical points. Several approaches for application of semi-analytic reconstruction for the solution of one-dimensional scalar equations are introduced. Results are compared with exact analytic solutions to Burgers' equation as well as a conventional, upwind discretization using Roe's method. One approach, the end-point wave speed (EPWS) approximation, is further developed for more complex applications. One-dimensional vector equations are tested on a quasi one-dimensional nozzle application. The EPWS algorithm has a more compact difference stencil than Roe's algorithm but reconstruction time is approximately a factor of four larger than for Roe. Though both are second-order accurate schemes, Roe's method approaches a grid converged solution with fewer grid points. Reconstruction of flux in the context of multi-dimensional, vector conservation laws including effects of thermochemical nonequilibrium in the Navier-Stokes equations is developed.
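As a point of reference for schemes tested against Burgers' equation, the conventional baseline discretization (not the semi-analytic reconstruction itself) is a first-order upwind finite-volume update; a sketch assuming u > 0 everywhere and a CFL-stable time step:

```python
import numpy as np

def burgers_upwind(u0, dx, dt, steps):
    """First-order upwind finite-volume scheme for the inviscid Burgers
    equation u_t + (u^2/2)_x = 0 on a periodic grid, assuming u > 0 so
    the interface flux is taken from the left (upwind) cell."""
    u = np.asarray(u0, dtype=float).copy()
    for _ in range(steps):
        f = 0.5 * u ** 2                      # cell-centred physical flux
        u -= (dt / dx) * (f - np.roll(f, 1))  # conservative upwind update
    return u
```

Because the update is in conservation form, the discrete integral of u is preserved exactly on a periodic grid, and the monotone flux prevents new extrema; semi-analytic reconstruction replaces this flux evaluation with the analytic ODE solution at each interface.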
Big Data Analytics in Healthcare
Belle, Ashwin; Thiagarajan, Raghuram; Soroushmehr, S. M. Reza; Navidi, Fatemeh; Beard, Daniel A.; Najarian, Kayvan
2015-01-01
The rapidly expanding field of big data analytics has started to play a pivotal role in the evolution of healthcare practices and research. It has provided tools to accumulate, manage, analyze, and assimilate large volumes of disparate, structured, and unstructured data produced by current healthcare systems. Big data analytics has been recently applied towards aiding the process of care delivery and disease exploration. However, the adoption rate and research development in this space is still hindered by some fundamental problems inherent within the big data paradigm. In this paper, we discuss some of these major challenges with a focus on three upcoming and promising areas of medical research: image, signal, and genomics based analytics. Recent research which targets utilization of large volumes of medical data while combining multimodal data from disparate sources is discussed. Potential areas of research within this field which have the ability to provide meaningful impact on healthcare delivery are also examined. PMID:26229957
NASA Astrophysics Data System (ADS)
Picoult, Evan
2003-03-01
Risk Analytical Units within Wall Street firms are responsible for developing the methods used to quantify the different forms of risk inherent in the firms' activities. This talk is an overview of risk analytics. It will cover: the function and validation of valuation models; the measurement of market risk; and the measurement of the different aspects of and forms of credit risk, including the simulation of the potential counterparty credit exposure of derivatives, the estimation of obligor default probability and the simulation of the potential loss distribution of loan portfolios. Risk Analytics is an applied field that integrates finance theory, mathematics and statistical analysis. It is a field that has attracted many physicists and one in which many physicists have flourished. The talk will conclude with an analysis of why this is so.
An Analytic Approach to Projectile Motion in a Linear Resisting Medium
ERIC Educational Resources Information Center
Stewart, Sean M.
2006-01-01
The time of flight, range and the angle which maximizes the range of a projectile in a linear resisting medium are expressed in analytic form in terms of the recently defined Lambert W function. From the closed-form solutions a number of results characteristic to the motion of the projectile in a linear resisting medium are analytically confirmed,…
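The closed-form results referred to can be reproduced numerically with the Lambert W function. A sketch under stated conventions (drag acceleration −k·v, launch and landing at equal heights; the pure-Python Newton iteration for the principal W branch is an assumption added here so the example is self-contained):

```python
import math

def lambertw0(x, tol=1e-12):
    """Principal branch of the Lambert W function by Newton iteration,
    adequate for x in (-1/e, 0) away from the branch point."""
    w = x if x > -0.3 else -0.3       # rough initial guess
    for _ in range(100):
        ew = math.exp(w)
        dw = (w * ew - x) / ((w + 1.0) * ew)
        w -= dw
        if abs(dw) < tol:
            break
    return w

def flight_time_linear_drag(v0y, k, g=9.81):
    """Time of flight with linear drag, closed form via Lambert W:
    with u = 1 + k*v0y/g, the height equation y(T) = 0 reduces to
    k*T = u + W0(-u * exp(-u))."""
    u = 1.0 + k * v0y / g
    return (u + lambertw0(-u * math.exp(-u))) / k

def range_linear_drag(v0x, v0y, k, g=9.81):
    """Horizontal range: x(T) = (v0x / k) * (1 - exp(-k*T))."""
    T = flight_time_linear_drag(v0y, k, g)
    return (v0x / k) * (1.0 - math.exp(-k * T))
```

Substituting the computed T back into the analytic height profile y(t) = ((v0y + g/k)/k)(1 − e^(−kt)) − (g/k)t gives a residual at machine precision, confirming the W-function form solves the transcendental landing condition.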
NASA Astrophysics Data System (ADS)
Salamin, Yousef I.
2015-12-01
Analytic expressions for the electric and magnetic fields of an ultrashort, tightly focused, linearly polarized laser pulse are derived, to lowest order of a truncated power-series expansion, from vector and scalar potentials. Clear steps are described for the analytic and numerical evaluation of higher-order terms in the series, to any desired accuracy.
Analytical Applications of NMR: Summer Symposium on Analytical Chemistry.
ERIC Educational Resources Information Center
Borman, Stuart A.
1982-01-01
Highlights a symposium on analytical applications of nuclear magnetic resonance spectroscopy (NMR), discussing pulse Fourier transformation technique, two-dimensional NMR, solid state NMR, and multinuclear NMR. Includes description of ORACLE, an NMR data processing system at Syracuse University using real-time color graphics, and algorithms for…
Analytic models of plausible gravitational lens potentials
Baltz, Edward A.; Marshall, Phil; Oguri, Masamune E-mail: pjm@physics.ucsb.edu
2009-01-15
Gravitational lenses on galaxy scales are plausibly modelled as having ellipsoidal symmetry and a universal dark matter density profile, with a Sersic profile to describe the distribution of baryonic matter. Predicting all lensing effects requires knowledge of the total lens potential: in this work we give analytic forms for that of the above hybrid model. Emphasising that complex lens potentials can be constructed from simpler components in linear combination, we provide a recipe for attaining elliptical symmetry in either projected mass or lens potential. We also provide analytic formulae for the lens potentials of Sersic profiles for integer and half-integer index. We then present formulae describing the gravitational lensing effects due to smoothly-truncated universal density profiles in the cold dark matter model. For our isolated haloes the density profile falls off as radius to the minus fifth or seventh power beyond the tidal radius, functional forms that allow all orders of lens potential derivatives to be calculated analytically, while ensuring a non-divergent total mass. We show how the observables predicted by this profile differ from that of the original infinite-mass NFW profile. Expressions for the gravitational flexion are highlighted. We show how decreasing the tidal radius allows stripped haloes to be modelled, providing a framework for a fuller investigation of dark matter substructure in galaxies and clusters. Finally we remark on the need for finite mass halo profiles when doing cosmological ray-tracing simulations, and the need for readily-calculable higher order derivatives of the lens potential when studying catastrophes in strong lenses.
On maximal analytical extension of the Vaidya metric
NASA Astrophysics Data System (ADS)
Berezin, V. A.; Dokuchaev, V. I.; Eroshenko, Yu N.
2016-07-01
The classical Vaidya metric is transformed to the special diagonal coordinates in the case of the linear mass function allowing rather easy treatment. We find the exact analytical expressions for metric functions in these diagonal coordinates. Using these coordinates, we elaborate the maximum analytic extension of the Vaidya metric with a linear growth of the black hole mass and construct the corresponding Carter–Penrose diagrams for different specific cases. The derived global geometry is also seemingly valid for a more general behavior of the black hole mass in the Vaidya metric.
IRIS: Towards an Accurate and Fast Stage Weight Prediction Method
NASA Astrophysics Data System (ADS)
Taponier, V.; Balu, A.
2002-01-01
The knowledge of the structural mass fraction (or the mass ratio) of a given stage, which affects the performance of a rocket, is essential for the analysis of new or upgraded launchers or stages, whose need is increased by the quick evolution of space programs and by the necessity of their adaptation to market needs. The availability of this highly scattered variable, ranging between 0.05 and 0.15, is of primary importance in the early steps of preliminary design studies. At the start of the staging and performance studies, the lack of frozen weight data (to be obtained later from propulsion, trajectory, and sizing studies) forces reliance on rough estimates, generally derived from printed sources and adapted. When needed, these estimates can be consolidated through a specific analysis activity involving several techniques and implying additional effort and time. This empirical approach thus yields approximate values (i.e., not necessarily accurate or consistent), introducing some inaccuracy into the results and, consequently, difficulties in ranking performance in a multiple-option analysis, as well as an increase in processing time. This is a classic difficulty of preliminary design system studies, insufficiently discussed to date. It therefore appears highly desirable to have, for all evaluation activities, a reliable, fast, and easy-to-use weight or mass fraction prediction method. Additionally, the latter should allow a preselection of the alternative preliminary configurations, making a global system approach possible. For that purpose, an attempt at modeling has been undertaken, whose objective was the determination of a parametric formulation of the mass fraction, expressed from a limited number of parameters available at the early steps of the project. It is based on the innovative use of a statistical method applicable to a variable as a function of several independent parameters. A specific polynomial generator
Scicinski, J J; Congreve, M S; Jamieson, C; Ley, S V; Newman, E S; Vinader, V M; Carr, R A
2001-01-01
The development of a 1-hydroxybenzotriazole linker for the synthesis of heterocyclic derivatives is described, utilizing analytical construct methodology to facilitate the analysis of resin samples. A UV-chromophore-containing analytical construct enabled the accurate determination of resin loading and the automated monitoring of key reactions using only small quantities of resin. The syntheses of an array of isoxazole derivatives are reported. PMID:11442396
ANALYTICAL STAR FORMATION RATE FROM GRAVOTURBULENT FRAGMENTATION
Hennebelle, Patrick; Chabrier, Gilles
2011-12-20
We present an analytical determination of the star formation rate (SFR) in molecular clouds, based on a time-dependent extension of our analytical theory of the stellar initial mass function. The theory yields SFRs in good agreement with observations, suggesting that turbulence is the dominant, initial process responsible for star formation. In contrast to previous SFR theories, the present one does not invoke an ad hoc density threshold for star formation; instead, the SFR continuously increases with gas density, naturally yielding two different characteristic regimes, thus two different slopes in the SFR versus gas density relationship, in agreement with observational determinations. Besides the complete SFR derivation, we also provide a simplified expression, which reproduces the complete calculations reasonably well and can easily be used for quick determinations of SFRs in cloud environments. A key property at the heart of both our complete and simplified theory is that the SFR involves a density-dependent dynamical time, characteristic of each collapsing (prestellar) overdense region in the cloud, instead of one single mean or critical freefall timescale. Unfortunately, the SFR also depends on some ill-determined parameters, such as the core-to-star mass conversion efficiency and the crossing timescale. Although we provide estimates for these parameters, their uncertainty hampers a precise quantitative determination of the SFR, within less than a factor of a few.
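The role of a density-dependent dynamical time can be illustrated schematically: weight a mass-weighted lognormal density PDF by t_ff(ρ0)/t_ff(ρ) ∝ (ρ/ρ0)^(1/2) instead of using one mean free-fall time. This is only an illustrative kernel under those standard gravoturbulent assumptions, not the paper's full SFR expression (no efficiency or crossing-time factors):

```python
import math

def lognormal_pdf(s, sigma):
    """Mass-weighted lognormal PDF of s = ln(rho/rho0); the mean
    sigma^2 / 2 makes the mass fraction integrate to one."""
    mu = sigma ** 2 / 2.0
    return (math.exp(-(s - mu) ** 2 / (2.0 * sigma ** 2))
            / math.sqrt(2.0 * math.pi * sigma ** 2))

def sfr_multi_ff(sigma, s_min=-15.0, s_max=15.0, n=6000):
    """Dimensionless SFR per mean free-fall time when each overdensity
    collapses on its own t_ff(rho) ~ rho^(-1/2): the weight exp(s/2)
    is t_ff(rho0) / t_ff(rho). Midpoint quadrature; the integral has
    the analytic value exp(3 sigma^2 / 8)."""
    ds = (s_max - s_min) / n
    return sum(math.exp(0.5 * (s_min + (i + 0.5) * ds))
               * lognormal_pdf(s_min + (i + 0.5) * ds, sigma) * ds
               for i in range(n))
```

The closed-form value exp(3σ²/8) shows the qualitative point of the abstract: the rate rises continuously with the turbulent density dispersion, with no ad hoc threshold required.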
Bioaccessibility tests accurately estimate bioavailability of lead to quail
Beyer, W. Nelson; Basta, Nicholas T; Chaney, Rufus L.; Henry, Paula F.; Mosby, David; Rattner, Barnett A.; Scheckel, Kirk G.; Sprague, Dan; Weber, John
2016-01-01
Hazards of soil-borne Pb to wild birds may be more accurately quantified if the bioavailability of that Pb is known. To better understand the bioavailability of Pb to birds, we measured blood Pb concentrations in Japanese quail (Coturnix japonica) fed diets containing Pb-contaminated soils. Relative bioavailabilities were expressed by comparison with blood Pb concentrations in quail fed a Pb acetate reference diet. Diets containing soil from five Pb-contaminated Superfund sites had relative bioavailabilities from 33%-63%, with a mean of about 50%. Treatment of two of the soils with phosphorus significantly reduced the bioavailability of Pb. Bioaccessibility of Pb in the test soils was then measured in six in vitro tests, and the results were regressed on bioavailability. The six tests were: the “Relative Bioavailability Leaching Procedure” (RBALP) at pH 1.5, the same test conducted at pH 2.5, the “Ohio State University In vitro Gastrointestinal” method (OSU IVG), the “Urban Soil Bioaccessible Lead Test”, the modified “Physiologically Based Extraction Test”, and the “Waterfowl Physiologically Based Extraction Test.” All regressions had positive slopes. Based on criteria of slope and coefficient of determination, the RBALP pH 2.5 and OSU IVG tests performed very well. Speciation by X-ray absorption spectroscopy demonstrated that, on average, most of the Pb in the sampled soils was sorbed to minerals (30%), bound to organic matter (24%), or present as Pb sulfate (18%). Additional Pb was associated with P (chloropyromorphite, hydroxypyromorphite and tertiary Pb phosphate), and with Pb carbonates, leadhillite (a lead sulfate carbonate hydroxide), and Pb sulfide. The formation of chloropyromorphite reduced the bioavailability of Pb, and the amendment of Pb-contaminated soils with P may be a thermodynamically favored means to sequester Pb.
Analytic Modeling of Pressurization and Cryogenic Propellant
NASA Technical Reports Server (NTRS)
Corpening, Jeremy H.
2010-01-01
An analytic model for pressurization and cryogenic propellant conditions during all mission phases of any liquid rocket based vehicle has been developed and validated. The model assumes the propellant tanks to be divided into five nodes and also implements an empirical correlation for liquid stratification if desired. The five nodes include a tank wall node exposed to ullage gas, an ullage gas node, a saturated propellant vapor node at the liquid-vapor interface, a liquid node, and a tank wall node exposed to liquid. The conservation equations of mass and energy are then applied across all the node boundaries and, with the use of perfect gas assumptions, explicit solutions for ullage and liquid conditions are derived. All fluid properties are updated in real time using NIST REFPROP. Further, mass transfer at the liquid-vapor interface is included in the form of evaporation, bulk boiling of liquid propellant, and condensation given the appropriate conditions for each. Model validation has proven highly successful against previous analytic models and various Saturn-era test data, and reasonably successful against more recent LH2 tank self-pressurization ground test data. Finally, this model has been applied to numerous design iterations for the Altair Lunar Lander, Ares V Core Stage, and Ares V Earth Departure Stage in order to characterize helium and autogenous pressurant requirements, propellant lost to evaporation and thermodynamic venting to maintain propellant conditions, and non-uniform tank draining in configurations utilizing multiple LH2 or LO2 propellant tanks. In conclusion, this model provides an accurate and efficient means of analyzing multiple design configurations for any cryogenic propellant tank in launch, low-acceleration coast, or in-space maneuvering, and supplies the user with pressurization requirements, unusable propellants from evaporation and liquid stratification, and general ullage gas, liquid, and tank wall conditions as functions of time.
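The perfect-gas treatment of the ullage node described above can be illustrated with a minimal single-node sketch. This is not the author's five-node model; it only shows the ideal-gas pressure relation the explicit solutions rest on, with illustrative (assumed) values for the pressurant mass, temperature, and ullage volume:

```python
# Minimal single-node ullage sketch: ideal-gas pressure given pressurant mass,
# gas temperature, and the current ullage volume (which grows as liquid drains).
R_HE = 2077.1  # J/(kg*K), specific gas constant of helium pressurant

def ullage_pressure(m_gas, T_gas, V_ullage):
    """Ideal-gas ullage pressure: p = m * R * T / V (Pa)."""
    return m_gas * R_HE * T_gas / V_ullage

# Illustrative example: 0.5 kg of helium at 250 K in a 2 m^3 ullage volume
p = ullage_pressure(0.5, 250.0, 2.0)  # ~130 kPa
```

A full model would march this in time, updating gas mass and temperature from the energy balance across the node boundaries and recomputing fluid properties at each step.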
Lu, Amy; Ong, Yi-Hong
2016-01-01
Accurate determination of in-vivo light fluence rate is critical for preclinical and clinical studies involving photodynamic therapy (PDT). This study compares the longitudinal light fluence distribution inside biological tissue along the central axis of a 1 cm diameter circular uniform light field for a range of in-vivo tissue optical properties (absorption coefficients (μa) between 0.01 and 1 cm−1 and reduced scattering coefficients (μs’) between 2 and 40 cm−1). This was done using Monte Carlo simulations for a semi-infinite turbid medium with an air-tissue interface. The end goal is to develop an analytical expression that fits the results from the Monte Carlo simulation for both the 1 cm diameter circular beam and the broad beam. Each of the fit parameters is expressed as a function of tissue optical properties. These results can then be compared against existing expressions in the literature for broad beams, in terms of both accuracy and applicable range. Using the 6-parameter model, the range and accuracy for light transport through biological tissue are improved, and the model may be used in the future as a guide in PDT for light fluence distribution for known tissue optical properties. PMID:27053827
NASA Astrophysics Data System (ADS)
Omori, Takayuki; Sano, Katsuhiro; Yoneda, Minoru
2014-05-01
This paper presents new correction approaches for "early" radiocarbon ages to reconstruct the Paleolithic absolute chronology. To discuss the spatiotemporal distribution of the replacement of archaic humans, including Neanderthals in Europe, by modern humans, a massive dataset covering a wide area is needed. Several radiocarbon databases focused on the Paleolithic have been published and used for chronological studies. From the viewpoint of current analytical technology, however, these databases contain unreliable results that make interpretation of radiocarbon dates difficult. Most of these unreliable ages were published in the early days of radiocarbon analysis. In recent years, new analytical methods to determine highly accurate dates have been developed. Ultrafiltration and ABOx-SC methods, as new sample pretreatments for bone and charcoal respectively, have attracted attention because they can remove imperceptible contaminants and yield reliably accurate ages. To evaluate the reliability of "early" data, we investigated the differences and variability of radiocarbon ages obtained with different pretreatments and attempted to develop correction functions for assessing their reliability. The corrected ages are expected to be more reliable and applicable to chronological research together with recently measured ages. Here, we introduce the methodological frameworks and archaeological applications.
Joint iris boundary detection and fit: a real-time method for accurate pupil tracking.
Barbosa, Marconi; James, Andrew C
2014-08-01
A range of applications in visual science rely on accurate tracking of the human pupil's movement and contraction in response to light. While the literature for independent contour detection and fitting of the iris-pupil boundary is vast, a joint approach, in which it is assumed that the pupil has a given geometric shape, has been largely overlooked. We present here a global method for simultaneously finding and fitting an elliptic or circular contour against a dark interior, which produces consistently accurate results even under non-ideal recording conditions, such as reflections near and over the boundary, droopy eyelids, or the sudden formation of tears. The specific form of the proposed optimization problem allows us to write down closed analytic formulae for the gradient and the Hessian of the objective function. Moreover, both the objective function and its derivatives can be cast into vectorized form, making the proposed algorithm significantly faster than its closest relative in the literature. We compare methods in multiple ways, both analytically and numerically, using real iris images as well as idealizations of the iris for which the ground truth boundary is precisely known. The method proposed here is illustrated under challenging recording conditions and is shown to be robust. PMID:25136477
Simple Expressions for the Design of Linear Tapers in Overmoded Corrugated Waveguides
NASA Astrophysics Data System (ADS)
Schaub, S. C.; Shapiro, M. A.; Temkin, R. J.
2016-01-01
Simple analytical formulae are presented for the design of linear tapers with very low mode conversion loss in overmoded corrugated waveguides. For tapers from waveguide radius a2 to a1, with a1 < a2, the optimal length of the taper is 3.198 a1a2/λ, where λ is the wavelength of radiation. The fractional loss of the HE11 mode in an optimized taper is 0.0293(a2 − a1)⁴/(a1²a2²). These formulae are accurate when a2 ≲ 2a1. Slightly more complex formulae, accurate for a2 ≤ 4a1, are also presented in this paper. The loss in an overmoded corrugated linear taper is less than 1% when a2 ≤ 2.12a1 and less than 0.1% when a2 ≤ 1.53a1. The present analytic results have been benchmarked against a rigorous mode matching code and have been found to be very accurate. The results for linear tapers are compared with the analogous expressions for parabolic tapers. Parabolic tapers may provide lower loss, but linear tapers with moderate values of a2/a1 may be attractive because of their simplicity of fabrication.
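The closed-form taper expressions quoted in the abstract can be evaluated directly; a minimal sketch follows, where the 170 GHz waveguide dimensions in the example are illustrative assumptions, not values from the paper:

```python
def taper_length(a1, a2, wavelength):
    """Optimal length of a linear taper from radius a1 to a2 (a1 < a2),
    per the abstract's formula L = 3.198 * a1 * a2 / lambda (same units throughout)."""
    return 3.198 * a1 * a2 / wavelength

def he11_fractional_loss(a1, a2):
    """Fractional HE11 mode-conversion loss of an optimal-length linear taper,
    0.0293 * (a2 - a1)**4 / (a1**2 * a2**2); accurate for a2 <~ 2 * a1."""
    return 0.0293 * (a2 - a1) ** 4 / (a1 ** 2 * a2 ** 2)

# Illustrative (assumed) values: a 170 GHz transmission line, wavelength ~1.76 mm,
# tapering from 12.7 mm to 22.0 mm radius (a2 < 2*a1, within the formula's range).
L = taper_length(12.7, 22.0, 1.76)       # taper length, mm
loss = he11_fractional_loss(12.7, 22.0)  # dimensionless fractional loss
```

The stated thresholds are consistent with the loss formula: at a2 = 2.12 a1 it evaluates to about 0.01, and at a2 = 1.53 a1 to about 0.001.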
Big data analytics in immunology: a knowledge-based approach.
Zhang, Guang Lan; Sun, Jing; Chitkushev, Lou; Brusic, Vladimir
2014-01-01
With the vast amount of immunological data available, immunology research is entering the big data era. These data vary in granularity, quality, and complexity and are stored in various formats, including publications, technical reports, and databases. The challenge is to make the transition from data to actionable knowledge and wisdom and bridge the knowledge gap and application gap. We report a knowledge-based approach based on a framework called KB-builder that facilitates data mining by enabling fast development and deployment of web-accessible immunological data knowledge warehouses. Immunological knowledge discovery relies heavily on both the availability of accurate, up-to-date, and well-organized data and the proper analytics tools. We propose the use of knowledge-based approaches by developing knowledgebases combining well-annotated data with specialized analytical tools and integrating them into analytical workflow. A set of well-defined workflow types with rich summarization and visualization capacity facilitates the transformation from data to critical information and knowledge. By using KB-builder, we enabled streamlining of normally time-consuming processes of database development. The knowledgebases built using KB-builder will speed up rational vaccine design by providing accurate and well-annotated data coupled with tailored computational analysis tools and workflow. PMID:25045677
NASA Astrophysics Data System (ADS)
Worley, Christopher G.; Havrilla, George J.
2000-07-01
Accurately determining the concentration of certain elements in plutonium is of vital importance in manufacturing nuclear weapons. X-ray fluorescence (XRF) provides a means of obtaining this type of elemental information accurately, quickly, with high precision, and often with little sample preparation. In the present work, a novel method was developed to analyze the gallium concentration in plutonium samples using wavelength-dispersive XRF. The analytical method is described.
Microcomputer Applications in Analytical Chemistry.
ERIC Educational Resources Information Center
Long, Joseph W.
The first part of this paper addresses the following topics: (1) the usefulness of microcomputers; (2) applications for microcomputers in analytical chemistry; (3) costs; (4) major microcomputer systems and subsystems; and (5) which microcomputer to buy. Following these brief comments, the major focus of the paper is devoted to a discussion of…
Exploratory Analysis in Learning Analytics
ERIC Educational Resources Information Center
Gibson, David; de Freitas, Sara
2016-01-01
This article summarizes the methods, observations, challenges and implications for exploratory analysis drawn from two learning analytics research projects. The cases include an analysis of a games-based virtual performance assessment and an analysis of data from 52,000 students over a 5-year period at a large Australian university. The complex…
Analytical Chemistry and the Microchip.
ERIC Educational Resources Information Center
Lowry, Robert K.
1986-01-01
Analytical techniques used at various points in making microchips are described. They include: Fourier transform infrared spectrometry (silicon purity); optical emission spectroscopy (quantitative thin-film composition); X-ray photoelectron spectroscopy (chemical changes in thin films); wet chemistry, instrumental analysis (process chemicals);…
Visual Analytics Science and Technology
Wong, Pak C.
2007-03-01
It is an honor to welcome you to the first theme issue of information visualization (IVS) dedicated entirely to the study of visual analytics. It all started from the establishment of the U.S. Department of Homeland Security (DHS) sponsored National Visualization and Analytics Center™ (NVAC™) at the Pacific Northwest National Laboratory (PNNL) in 2004. In 2005, under the leadership of NVAC, a team of the world’s best and brightest multidisciplinary scholars coauthored its first research and development (R&D) agenda Illuminating the Path, which defines the study as “the science of analytical reasoning facilitated by interactive visual interfaces.” Among the most exciting, challenging, and educational events developed since then was the first IEEE Symposium on Visual Analytics Science and Technology (VAST) held in Baltimore, Maryland in October 2006. This theme issue features seven outstanding articles selected from the IEEE VAST proceedings and a commentary article contributed by Jim Thomas, the director of NVAC, on the status and progress of the center.
Analytical Utility of Campylobacter Methodologies
Technology Transfer Automated Retrieval System (TEKTRAN)
The National Advisory Committee on Microbiological Criteria for Foods (NACMCF, or the Committee) was asked to address the analytical utility of Campylobacter methodologies in preparation for an upcoming United States Food Safety and Inspection Service (FSIS) baseline study to enumerate Campylobacter...