Robust Accurate Non-Invasive Analyte Monitor
Robinson, Mark R.
1998-11-03
An improved method and apparatus for determining noninvasively and in vivo one or more unknown values of a known characteristic, particularly the concentration of an analyte in human tissue. The method includes: (1) irradiating the tissue with infrared energy (400 nm-2400 nm) having at least several wavelengths in a given range of wavelengths so that there is differential absorption of at least some of the wavelengths by the tissue as a function of the wavelengths and the known characteristic, the differential absorption causing intensity variations of the wavelengths incident from the tissue; (2) providing a first path through the tissue; (3) optimizing the first path for a first sub-region of the range of wavelengths to maximize the differential absorption by at least some of the wavelengths in the first sub-region; (4) providing a second path through the tissue; and (5) optimizing the second path for a second sub-region of the range, to maximize the differential absorption by at least some of the wavelengths in the second sub-region. In the preferred embodiment a third path through the tissue is provided, optimized for a third sub-region of the range. With this arrangement, spectral variations which are the result of tissue differences (e.g., melanin and temperature) can be reduced. At least one of the paths represents a partial transmission path through the tissue. This partial transmission path may pass through the nail of a finger once and, preferably, twice. Also included are apparatus for: (1) reducing the arterial pulsations within the tissue; and (2) maximizing the blood content in the tissue.
Highly accurate analytical energy of a two-dimensional exciton in a constant magnetic field
NASA Astrophysics Data System (ADS)
Hoang, Ngoc-Tram D.; Nguyen, Duy-Anh P.; Hoang, Van-Hung; Le, Van-Hoang
2016-08-01
Explicit expressions are given for analytically describing the dependence of the energy of a two-dimensional exciton on magnetic field intensity. These expressions are highly accurate, with a precision of up to three decimal places over the whole range of magnetic field intensity. The results are shown for the ground state and some excited states; moreover, we have all the formulae needed to obtain similar expressions for any excited state. Analysis of numerical results shows that the precision of three decimal places is maintained for excited states with principal quantum number up to n=100.
NASA Technical Reports Server (NTRS)
Schlosser, Herbert; Ferrante, John
1989-01-01
An accurate analytic expression for the nonlinear change of the volume of a solid as a function of applied pressure is of great interest in high-pressure experimentation. It is found that a two-parameter analytic expression fits the experimental volume-change data to within a few percent over the entire experimentally attainable pressure range. Results are presented for 24 different materials including metals, ceramic semiconductors, polymers, and ionic and rare-gas solids.
Generating Facial Expressions Using an Anatomically Accurate Biomechanical Model.
Wu, Tim; Hung, Alice; Mithraratne, Kumar
2014-11-01
This paper presents a computational framework for modelling the biomechanics of human facial expressions. A detailed high-order (cubic-Hermite) finite element model of the human head was constructed using anatomical data segmented from magnetic resonance images. The model includes a superficial soft-tissue continuum consisting of skin, the subcutaneous layer and the superficial musculo-aponeurotic system. Embedded within this continuum mesh are 20 pairs of facial muscles which drive facial expressions. These muscles were treated as transversely isotropic, and their anatomical geometries and fibre orientations were accurately depicted. In order to capture the relative composition of muscle and fat, material heterogeneity was also introduced into the model. Complex contact interactions between the lips, between the eyelids, and between the superficial soft-tissue continuum and the deep rigid skeletal bones were also computed. In addition, this paper investigates the impact of incorporating material heterogeneity and contact interactions, which are often neglected in similar studies. Four facial expressions were simulated using the developed model and the results were compared with surface data obtained from a 3D structured-light scanner. Predicted expressions showed good agreement with the experimental data.
Accurate expressions for solar cell fill factors including series and shunt resistances
NASA Astrophysics Data System (ADS)
Green, Martin A.
2016-02-01
Together with open-circuit voltage and short-circuit current, fill factor is a key solar cell parameter. In their classic paper on limiting efficiency, Shockley and Queisser first investigated this factor's analytical properties showing, for ideal cells, it could be expressed implicitly in terms of the maximum power point voltage. Subsequently, fill factors usually have been calculated iteratively from such implicit expressions or from analytical approximations. In the absence of detrimental series and shunt resistances, analytical fill factor expressions have recently been published in terms of the Lambert W function available in most mathematical computing software. Using a recently identified perturbative relationship, exact expressions in terms of this function are derived in technically interesting cases when both series and shunt resistances are present but have limited impact, allowing a better understanding of their effect individually and in combination. Approximate expressions for arbitrary shunt and series resistances are then deduced, which are significantly more accurate than any previously published. A method based on the insights developed is also reported for deducing one-diode fits to experimental data.
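As background for the resistance-free limit referenced above, the ideal one-diode fill factor can be evaluated exactly with the Lambert W function. The sketch below (plain Python, with a self-contained Newton solver standing in for scipy.special.lambertw) implements the textbook resistance-free case and Green's earlier empirical approximation; it does not reproduce the paper's new series/shunt-corrected expressions:

```python
import math

def lambertw_principal(x, tol=1e-12):
    """Principal-branch Lambert W for x > 0 via Newton iteration
    (scipy.special.lambertw returns the same values)."""
    w = math.log(x) - math.log(math.log(x)) if x > math.e else x / math.e
    for _ in range(100):
        ew = math.exp(w)
        step = (w * ew - x) / (ew * (w + 1.0))
        w -= step
        if abs(step) < tol:
            break
    return w

def fill_factor_ideal(voc):
    """Exact ideal-diode (no series/shunt resistance) fill factor.
    voc is the open-circuit voltage normalized by nkT/q; the maximum-power
    voltage satisfies (1 + v_mp) e^(1 + v_mp) = e^(1 + voc), i.e.
    v_mp = W(e^(1 + voc)) - 1, whence
    FF = v_mp^2 e^(v_mp) / (voc (e^voc - 1))."""
    v_mp = lambertw_principal(math.exp(1.0 + voc)) - 1.0
    return v_mp**2 * math.exp(v_mp) / (voc * (math.exp(voc) - 1.0))

def fill_factor_green_approx(voc):
    """Widely used empirical form FF = (voc - ln(voc + 0.72)) / (voc + 1)."""
    return (voc - math.log(voc + 0.72)) / (voc + 1.0)
```

For a normalized open-circuit voltage of 25 (roughly a silicon cell at room temperature), the exact and empirical forms agree to about three decimal places.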
Li, Rui; Ye, Hongfei; Zhang, Weisheng; Ma, Guojun; Su, Yewang
2015-10-29
Spring constant calibration of the atomic force microscope (AFM) cantilever is of fundamental importance for quantifying the force between the AFM cantilever tip and the sample. The calibration within the framework of thin plate theory undoubtedly has a higher accuracy and broader scope than that within the well-established beam theory. However, thin plate theory-based accurate analytic determination of the constant has been perceived as an extremely difficult issue. In this paper, we implement the thin plate theory-based analytic modeling for the static behavior of rectangular AFM cantilevers, which reveals that the three-dimensional effect and Poisson effect play important roles in accurate determination of the spring constants. A quantitative scaling law is found that the normalized spring constant depends only on the Poisson's ratio, normalized dimension and normalized load coordinate. Both the literature and our refined finite element model validate the present results. The developed model is expected to serve as the benchmark for accurate calibration of rectangular AFM cantilevers.
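For context, the "well-established beam theory" mentioned above gives the familiar end-load spring constant of a rectangular cantilever; the plate-theory model adds three-dimensional and Poisson corrections on top of this baseline. A minimal sketch with illustrative (not paper-specific) silicon dimensions:

```python
def beam_spring_constant(E, w, t, L):
    """Euler-Bernoulli spring constant of a rectangular cantilever with an
    end point load: k = 3 E I / L^3 with I = w t^3 / 12, i.e.
    k = E w t^3 / (4 L^3). This is the beam-theory baseline that the
    plate-theory calibration discussed above refines."""
    return E * w * t**3 / (4.0 * L**3)

# Illustrative silicon lever: E = 169 GPa, 30 um wide, 2 um thick, 200 um long
k = beam_spring_constant(E=169e9, w=30e-6, t=2e-6, L=200e-6)
```

The example values give a stiffness of roughly 1.27 N/m, a typical order of magnitude for contact-mode AFM levers.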
Accurate Analytic Results for the Steady State Distribution of the Eigen Model
NASA Astrophysics Data System (ADS)
Huang, Guan-Rong; Saakian, David B.; Hu, Chin-Kun
2016-04-01
The Eigen model of molecular evolution is popular in studying complex biological and biomedical systems. Using the Hamilton-Jacobi equation method, we have calculated analytic equations for the steady state distribution of the Eigen model with a relative accuracy of O(1/N), where N is the length of the genome. Our results can be applied for the case of small genome length N, as well as cases where direct numerics cannot give accurate results, e.g., the tail of the distribution.
Analytical expressions for electrostatics of graphene structures
NASA Astrophysics Data System (ADS)
Georgantzinos, S. K.; Giannopoulos, G. I.; Fatsis, A.; Vlachakis, N. V.
2016-10-01
This study focuses on the electrostatics of various graphene structures such as graphene monolayers, graphene nanoribbons, and multi-layer graphene or graphene flakes. An atomistic moment method based on classical electrostatics is utilized in order to evaluate the charge distribution in each nanostructure. Assuming a freestanding graphene structure in an infinite or in a semi-infinite space limited by a grounded infinite plane, the effect of the length, width, number of layers and position of the nanostructure on its electrostatic charge distribution, total charge and capacitance is examined through a parametric analysis. The results of the present analysis show good agreement with corresponding data available in the literature, obtained from different theoretical approaches. Performing nonlinear regression analysis on the numerical results, where it is possible, simple analytical expressions are proposed for the prediction of total charge and charge distribution based on structure geometry.
Development and application of accurate analytical models for single active electron potentials
NASA Astrophysics Data System (ADS)
Miller, Michelle; Jaron-Becker, Agnieszka; Becker, Andreas
2015-05-01
The single active electron (SAE) approximation is a theoretical model frequently employed to study scenarios in which inner-shell electrons may productively be treated as frozen spectators to a physical process of interest, and accurate analytical approximations for these potentials are sought as a useful simulation tool. Density functional theory is often used to construct a SAE potential, requiring that a further approximation for the exchange-correlation functional be made. In this study, we employ the Krieger, Li, and Iafrate (KLI) modification to the optimized-effective-potential (OEP) method to reduce the complexity of the problem to the straightforward solution of a system of linear equations, through simple arguments regarding the behavior of the exchange-correlation potential in regions where a single orbital dominates. We employ this method for the solution of atomic and molecular potentials, and use the resultant curves to devise a systematic construction of highly accurate and useful analytical approximations for several systems. Supported by the U.S. Department of Energy (Grant No. DE-FG02-09ER16103), and the U.S. National Science Foundation (Graduate Research Fellowship, Grants No. PHY-1125844 and No. PHY-1068706).
Fast and accurate analytical model to solve inverse problem in SHM using Lamb wave propagation
NASA Astrophysics Data System (ADS)
Poddar, Banibrata; Giurgiutiu, Victor
2016-04-01
Lamb wave propagation is at the center of attention of researchers for structural health monitoring of thin-walled structures. This is due to the fact that Lamb wave modes are natural modes of wave propagation in these structures, with long travel distances and without much attenuation. This brings the prospect of monitoring large structures with few sensors/actuators. However, the problem of damage detection and identification is an "inverse problem" where we do not have the luxury of knowing the exact mathematical model of the system. On top of that, the problem is more challenging due to the confounding factors of statistical variation of the material and geometric properties. Typically this problem may also be ill-posed. Due to all these complexities, the direct solution of the problem of damage detection and identification in SHM is impossible. Therefore an indirect method using the solution of the "forward problem" is popular for solving the "inverse problem". This requires a fast forward-problem solver. Due to the complexities involved in the forward problem of scattering of Lamb waves from damage, researchers rely primarily on numerical techniques such as FEM, BEM, etc. But these methods are slow and practically impossible to use in structural health monitoring. We have developed a fast and accurate analytical forward-problem solver for this purpose. This solver, CMEP (complex modes expansion and vector projection), can simulate scattering of Lamb waves from all types of damage in thin-walled structures quickly and accurately to assist the inverse problem solver.
Analytical expressions for fringe fields in multipole magnets
NASA Astrophysics Data System (ADS)
Muratori, B. D.; Jones, J. K.; Wolski, A.
2015-06-01
Fringe fields in multipole magnets can have a variety of effects on the linear and nonlinear dynamics of particles moving along an accelerator beam line. An accurate model of an accelerator must include realistic models of the magnet fringe fields. Fringe fields for dipoles are well understood and can be modeled at an early stage of accelerator design in such codes as mad8, madx, gpt or elegant. Existing techniques for quadrupole and higher order multipoles rely either on the use of a numerical field map, or on a description of the field in the form of a series expansion about a chosen axis. Usually, it is not until the later stages of a design project that such descriptions (based on magnet modeling or measurement) become available. Furthermore, series expansions rely on the assumption that the beam travels more or less on axis throughout the beam line; but in some types of machines (for example, Fixed Field Alternating Gradients or FFAGs) this is not a good assumption. Furthermore, some tracking codes, such as gpt, use methods for including space charge effects that require fields to vary smoothly and continuously along a beam line: in such cases, realistic fringe field models are of significant importance. In this paper, a method for constructing analytical expressions for multipole fringe fields is presented. Such expressions allow fringe field effects to be included in beam dynamics simulations from the start of an accelerator design project, even before detailed magnet design work has been undertaken. The magnetostatic Maxwell equations are solved analytically and a solution that fits all orders of multipoles is derived. Quadrupole fringe fields are considered in detail as these are the ones that give the strongest effects. The analytic expressions for quadrupole fringe fields are compared with data obtained from numerical modeling codes in two cases: a magnet in the high luminosity upgrade of the Large Hadron Collider inner triplet, and a magnet in the
Analytic expressions for ULF wave radiation belt radial diffusion coefficients
Ozeke, Louis G; Mann, Ian R; Murphy, Kyle R; Jonathan Rae, I; Milling, David K
2014-01-01
We present analytic expressions for ULF wave-derived radiation belt radial diffusion coefficients, as a function of L and Kp, which can easily be incorporated into global radiation belt transport models. The diffusion coefficients are derived from statistical representations of ULF wave power, electric field power mapped from ground magnetometer data, and compressional magnetic field power from in situ measurements. We show that the overall electric and magnetic diffusion coefficients are to a good approximation both independent of energy. We present example 1-D radial diffusion results from simulations driven by CRRES-observed time-dependent energy spectra at the outer boundary, under the action of radial diffusion driven by the new ULF wave radial diffusion coefficients and with empirical chorus wave loss terms (as a function of energy, Kp and L). There is excellent agreement between the differential flux produced by the 1-D, Kp-driven, radial diffusion model and CRRES observations of differential electron flux at 0.976 MeV, even though the model does not include the effects of local internal acceleration sources. Our results highlight not only the importance of correct specification of radial diffusion coefficients for developing accurate models but also show significant promise for belt specification based on relatively simple models driven by solar wind parameters such as solar wind speed or geomagnetic indices such as Kp. Key Points: Analytic expressions for the radial diffusion coefficients are presented. The coefficients do not depend on energy or wave m value. The electric field diffusion coefficient dominates over the magnetic. PMID:26167440
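The fitted coefficients themselves are given in the paper; as an illustration of how such D_LL(L, Kp) expressions plug into a global transport model, here is a minimal explicit finite-volume sketch of 1-D radial diffusion. The power-law D_LL below is a placeholder with made-up constants, not the Ozeke et al. fitted values:

```python
import numpy as np

def dll_placeholder(L, Kp, D0=1e-3, n=6.0):
    """Placeholder radial diffusion coefficient in 1/s: a power law in L
    with an exponential Kp dependence. The constants here are illustrative
    only; in practice the paper's fitted expressions would be used."""
    return D0 * (L / 4.0)**n * 10.0**(0.5 * Kp) / 86400.0

def diffuse(f, L, Kp, dt, nsteps):
    """Explicit finite-volume integration of the radial diffusion equation
    df/dt = L^2 d/dL (D_LL / L^2 * df/dL) with zero-flux boundaries.
    f is phase-space density on the uniform L grid."""
    f = f.copy()
    dL = L[1] - L[0]
    Lh = 0.5 * (L[:-1] + L[1:])             # cell interfaces
    Dh = dll_placeholder(Lh, Kp) / Lh**2    # D_LL / L^2 at interfaces
    for _ in range(nsteps):
        flux = Dh * np.diff(f) / dL         # interface fluxes
        flux = np.concatenate(([0.0], flux, [0.0]))  # zero-flux boundaries
        f += dt * L**2 * np.diff(flux) / dL
    return f
```

With zero-flux boundaries this conservative scheme preserves the integral of f/L^2 to round-off, which makes a convenient sanity check.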
Fast and Accurate Digital Morphometry of Facial Expressions.
Grewe, Carl Martin; Schreiber, Lisa; Zachow, Stefan
2015-10-01
Facial surgery deals with a part of the human body that is of particular importance in everyday social interactions. The perception of a person's natural, emotional, and social appearance is significantly influenced by one's expression. This is why facial dynamics has been increasingly studied by both artists and scholars since the mid-Renaissance. Currently, facial dynamics and their importance in the perception of a patient's identity play a fundamental role in planning facial surgery. Assistance is needed for patient information and communication, and documentation and evaluation of the treatment as well as during the surgical procedure. Here, the quantitative assessment of morphological features has been facilitated by the emergence of diverse digital imaging modalities in the last decades. Unfortunately, the manual data preparation usually needed for further quantitative analysis of the digitized head models (surface registration, landmark annotation) is time-consuming, and thus inhibits its use for treatment planning and communication. In this article, we refer to historical studies on facial dynamics, briefly present related work from the field of facial surgery, and draw implications for further developments in this context. A prototypical stereophotogrammetric system for high-quality assessment of patient-specific 3D dynamic morphology is described. An individual statistical model of several facial expressions is computed, and possibilities to address a broad range of clinical questions in facial surgery are demonstrated.
NASA Astrophysics Data System (ADS)
Colalongo, Luigi; Ghittorelli, Matteo; Torricelli, Fabrizio; Kovács-Vajna, Zsolt Miklos
2015-12-01
Surface-potential-based mathematical models are among the most accurate and physically based compact models of Thin-Film Transistors (TFTs) and, in turn, of Organic Thin-Film Transistors (OTFTs), available today. However, the need for iterative computation of the surface potential limits their computational efficiency and diffusion in CAD applications. The existing closed-form approximations of the surface potential are based on regional approximations and empirical smoothing functions that may not be accurate enough to model OTFTs and, in particular, their transconductances and transcapacitances. In this paper we present an accurate and computationally efficient closed-form approximation of the surface potential, based on the Lagrange Reversion Theorem, that can be exploited in advanced surface-potential-based OTFT and TFT device models.
Interpolation method for accurate affinity ranking of arrayed ligand-analyte interactions.
Schasfoort, Richard B M; Andree, Kiki C; van der Velde, Niels; van der Kooi, Alex; Stojanović, Ivan; Terstappen, Leon W M M
2016-05-01
The values of the affinity constants (kd, ka, and KD) that are determined by label-free interaction analysis methods are affected by the ligand density. This article outlines a surface plasmon resonance (SPR) imaging method that yields high-throughput globally fitted affinity ranking values using a 96-plex array. A kinetic titration experiment without a regeneration step has been applied for various coupled antibodies binding to a single antigen. Globally fitted rate (kd and ka) and dissociation equilibrium (KD) constants for various ligand densities and analyte concentrations are exponentially interpolated to the KD at Rmax = 100 RU response level (KD(R100)).
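The interpolation step can be illustrated with a small sketch. Here the KD values measured at several ligand densities are assumed to follow KD = A * exp(B * Rmax), i.e. log-linear in the Rmax response level, which is one plausible reading of the exponential interpolation described above (the authors' exact functional form may differ):

```python
import numpy as np

def kd_at_r100(rmax, kd):
    """Interpolate ligand-density-dependent KD values to the common
    Rmax = 100 RU response level, assuming KD = A * exp(B * Rmax).
    Fits a straight line to log(KD) vs Rmax and evaluates it at 100 RU."""
    b, a = np.polyfit(np.asarray(rmax, dtype=float), np.log(kd), 1)
    return np.exp(a + b * 100.0)
```

Feeding the fit KD values from three different ligand densities recovers the dissociation constant normalized to a single surface capacity, so antibodies can be ranked on a common scale.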
NASA Astrophysics Data System (ADS)
Qiao, Yaojun; Li, Ming; Yang, Qiuhong; Xu, Yanfei; Ji, Yuefeng
2015-01-01
Closed-form expressions for the nonlinear interference of dense wavelength-division-multiplexed (WDM) systems with dispersion-managed transmission (DMT) are derived. We carry out a simulative validation by addressing an ample and significant set of Nyquist-WDM systems based on polarization-multiplexed quadrature phase-shift keying (PM-QPSK) subcarriers at a baud rate of 32 Gbaud per channel. Simulation results show that the simple closed-form analytical expressions provide an effective tool for quick and accurate prediction of system performance in DMT coherent optical systems.
Analytic expression for poloidal flow velocity in the banana regime
Taguchi, M.
2013-01-15
The poloidal flow velocity in the banana regime is calculated by improving the l = 1 approximation for the Fokker-Planck collision operator [M. Taguchi, Plasma Phys. Controlled Fusion 30, 1897 (1988)]. The obtained analytic expression for this flow, which can be used for general axisymmetric toroidal plasmas, agrees quite well with the recently calculated numerical results by Parker and Catto [Plasma Phys. Controlled Fusion 54, 085011 (2012)] in the full range of aspect ratio.
Dismer, Florian; Hansen, Sigrid; Oelmeier, Stefan Alexander; Hubbuch, Jürgen
2013-03-01
Chromatography is the method of choice for the separation of proteins, at both analytical and preparative scale. Orthogonal purification strategies for industrial use can easily be implemented by combining different modes of adsorption. Nevertheless, with flexibility comes the freedom of choice, and optimal conditions for consecutive steps need to be identified in a robust and reproducible fashion. One way to address this issue is the use of mathematical models that allow for an in silico process optimization. Although this has been shown to work, model parameter estimation for complex feedstocks becomes the bottleneck in process development. An integral part of parameter assessment is the accurate measurement of retention times in a series of isocratic or gradient elution experiments. As high-resolution analytics that can differentiate between proteins are often not readily available, pure protein is mandatory for parameter determination. In this work, we present an approach that has the potential to solve this problem. Based on the uniqueness of UV absorption spectra of proteins, we were able to accurately measure retention times in systems of up to four co-eluting compounds. The presented approach is calibration-free, meaning that prior knowledge of pure component absorption spectra is not required. In fact, pure protein spectra can be determined from co-eluting proteins as part of the methodology. The approach was tested for size-exclusion chromatograms of 38 mixtures of co-eluting proteins. Retention times were determined with an average error of 0.6 s (1.6% of average peak width), and approximated and measured pure component spectra showed an average coefficient of correlation of 0.992.
Analytical expressions for vibrational matrix elements of Morse oscillators
Zuniga, J.; Hidalgo, A.; Frances, J.M.; Requena, A.; Lopez Pineiro, A.; Olivares del Valle, F.J.
1988-10-15
Several exact recursion relations connecting different Morse oscillator matrix elements associated with the operators q^α e^(-βaq) and q^α e^(-βaq)(d/dr) are derived. Matrix elements of the other useful operators may then be obtained easily. In particular, analytical expressions for ⟨y^k (d/dr)⟩ and ⟨y^k (d/dr) + (d/dr) y^k⟩, matrix elements of interest in the study of the internuclear motion in polyatomic molecules, are obtained.
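The recursion relations themselves are lengthy, but the Morse bound-state spectrum that underlies such matrix elements is compact enough to sketch. The code below evaluates the standard closed-form Morse eigenvalues (a textbook result, included for orientation rather than taken from this paper):

```python
import math

def morse_levels(De, a, m):
    """Bound-state energies of the Morse oscillator
    V(r) = De * (1 - exp(-a (r - re)))^2, in units hbar = 1:
        E_n = w (n + 1/2) - (w^2 / (4 De)) (n + 1/2)^2,
        w = a * sqrt(2 De / m).
    Returns the finite list of bound levels, lowest first."""
    w = a * math.sqrt(2.0 * De / m)
    nmax = int(2.0 * De / w - 0.5)   # highest n with positive level spacing
    return [w * (n + 0.5) - (w**2 / (4.0 * De)) * (n + 0.5)**2
            for n in range(nmax + 1)]
```

The anharmonicity shows up as a constant decrease of w^2/(2 De) in successive level spacings, which is the feature the matrix-element recursions exploit.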
Analytical Expressions for the REM Model of Recognition Memory
Montenegro, Maximiliano; Myung, Jay I.; Pitt, Mark A.
2014-01-01
An inordinate amount of computation is required to evaluate predictions of simulation-based models. Following Myung et al. (2007), we derived an analytic expression for the REM model of recognition memory using a Fourier transform technique, which greatly reduces the time required to perform model simulations. The accuracy of the derivation is verified by showing a close correspondence between its predictions and those reported in Shiffrin and Steyvers (1997). The derivation also shows that REM's predictions depend upon the vector length parameter, and that the model parameters are not identifiable unless one of them is fixed. PMID:25089060
Expressions of homosexuality and the perspective of analytical psychology.
Miller, Barry
2010-02-01
Homosexuality, as a description and category of human experience, has a long, complicated and problematic history. It has been utilized as a carrier of theological, political, and psychological ideologies of all sorts, with varying and contradictory influences on the lives of us all. Analytical psychology, emphasizing the purposiveness found in manifestations of the psyche, offers a unique approach to this subject. The focus moves from causation to the meanings embedded in erotic expressions, fantasies, and dreams. Consequently, homosexuality loses its predetermined meaning and finds its definition in the psychology of the individual. Categories of 'sexual orientation' may defend against personal analysis, deflecting the essential fluidity and mystery of Eros. This is illustrated with samples of the variety found in 'homosexual' material.
Analytic expression for in-field scattered light distribution
NASA Astrophysics Data System (ADS)
Peterson, Gary L.
2004-01-01
Light that is scattered from lenses and mirrors in an optical system produces a halo of stray light around bright objects within the field of view. The angular distribution of scattered light from any one component is usually described by the Harvey model. This paper presents analytic expressions for the scattered irradiance at a focal plane from optical components that scatter light in accordance with the Harvey model. It is found that the irradiance is independent of the location of an optical element within the system, provided the element is not located at or near an intermediate image plane. It is also found that the irradiance has little or no dependence on the size of the element.
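The Harvey model referenced above is commonly written as a power law in the deviation of the direction sine from the specular direction. A minimal sketch, with illustrative coefficient values (b100 and the slope s are assumptions, not values from this paper):

```python
import numpy as np

def harvey_bsdf(theta_s, theta_i=0.0, b100=1e-4, s=-1.8):
    """Harvey scatter model:
        BSDF = b100 * (100 * |sin(theta_s) - sin(theta_i)|)^s,
    normalized so that BSDF = b100 at 0.01 rad from specular.
    Angles in radians; the power law diverges at exact specular
    (theta_s = theta_i), so evaluate away from it."""
    beta = np.abs(np.sin(theta_s) - np.sin(theta_i))
    return b100 * (100.0 * beta)**s
```

With a negative slope s the BSDF falls off monotonically away from specular, producing the stray-light halo around bright sources described above.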
Analytic expressions for geometric measure of three-qubit states
Tamaryan, Levon; Park, DaeKil; Tamaryan, Sayatnova
2008-02-15
A method is developed to derive algebraic equations for the geometric measure of entanglement of three-qubit pure states. The equations are derived explicitly and solved in the cases of most interest. These equations allow one to derive analytic expressions for the geometric entanglement measure in a wide range of three-qubit systems, including the general class of W states and states which are symmetric under the permutation of two qubits. The nearest separable states are not necessarily unique, and highly entangled states are surrounded by a one-parametric set of equally distant separable states. Possible physical applications of the various three-qubit states to quantum teleportation and superdense coding are suggested from the standpoint of entanglement.
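For the canonical W state the resulting geometric measure is known in closed form, which makes a compact numerical check possible. The sketch below uses the standard fact that for a permutation-symmetric state the nearest product state can be taken symmetric, reducing the search to a single angle:

```python
import numpy as np

def w_state_max_overlap(n_grid=20001):
    """Maximal squared overlap of the three-qubit W state
    |W> = (|001> + |010> + |100>) / sqrt(3) with a product state.
    Taking the symmetric ansatz |phi> = (cos t |0> + sin t |1>)^(x3)
    gives |<W|phi>|^2 = 3 cos^4(t) sin^2(t), so a 1-D scan suffices;
    the analytic maximum is 4/9 at cos^2(t) = 2/3."""
    t = np.linspace(0.0, np.pi / 2.0, n_grid)
    overlap = 3.0 * np.cos(t)**4 * np.sin(t)**2
    return overlap.max()
```

The maximal squared overlap is 4/9, so the geometric entanglement measure of the W state is 1 - 4/9 = 5/9, matching the closed-form results the paper generalizes.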
NASA Astrophysics Data System (ADS)
Guo, Kongming; Jiang, Jun; Xu, Yalan
2016-09-01
In this paper, a simple but accurate semi-analytical method to approximate the probability density function of stochastic closed-curve attractors is proposed. The resulting expression for the distribution applies to systems with strong nonlinearities, requiring only that the noise be weak. With the understanding that additive noise does not change the longitudinal distribution of the attractors, the high-dimensional probability density distribution is decomposed into two low-dimensional distributions: the longitudinal and the transverse probability density distributions. The longitudinal distribution can be calculated from the deterministic system, while the probability density in the transverse direction of the curve can be approximated by the stochastic sensitivity function method. The effectiveness of this approach is verified by comparing the expression for the distribution with the results of Monte Carlo numerical simulations in several planar systems.
NASA Astrophysics Data System (ADS)
Ronen, Michal; Rosenberg, Revital; Shraiman, Boris I.; Alon, Uri
2002-08-01
A basic challenge in systems biology is to understand the dynamical behavior of gene regulation networks. Current approaches aim at determining the network structure based on genomic-scale data. However, the network connectivity alone is not sufficient to define its dynamics; one needs to also specify the kinetic parameters for the regulation reactions. Here, we ask whether effective kinetic parameters can be assigned to a transcriptional network based on expression data. We present a combined experimental and theoretical approach based on accurate high temporal-resolution measurement of promoter activities from living cells by using green fluorescent protein (GFP) reporter plasmids. We present algorithms that use these data to assign effective kinetic parameters within a mathematical model of the network. To demonstrate this, we employ a well defined network, the SOS DNA repair system of Escherichia coli. We find a strikingly detailed temporal program of expression that correlates with the functional role of the SOS genes and is driven by a hierarchy of effective kinetic parameter strengths for the various promoters. The calculated parameters can be used to determine the kinetics of all SOS genes given the expression profile of just one representative, allowing a significant reduction in complexity. The concentration profile of the master SOS transcriptional repressor can be calculated, demonstrating that relative protein levels may be determined from purely transcriptional data. This finding opens the possibility of assigning kinetic parameters to transcriptional networks on a genomic scale.
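The first step of such an analysis, recovering promoter activity from a GFP reporter time series, can be sketched under a minimal reporter model dG/dt = A(t) - gamma*G, where gamma is an assumed lumped degradation/dilution rate. This is a generic textbook reporter model, not necessarily the authors' exact formulation:

```python
import numpy as np

def promoter_activity(t, gfp, gamma=0.0):
    """Estimate promoter activity A(t) from a GFP time series, assuming
    the minimal reporter model dG/dt = A(t) - gamma * G. gamma lumps
    reporter degradation and dilution and is an assumed parameter here.
    Uses second-order central differences for dG/dt."""
    gfp = np.asarray(gfp, dtype=float)
    return np.gradient(gfp, t) + gamma * gfp
```

Given sufficiently dense sampling, the reconstructed A(t) tracks the true activity to the accuracy of the finite-difference derivative, which is what makes high temporal resolution essential for assigning kinetic parameters.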
NASA Technical Reports Server (NTRS)
Lieske, J. H.
1975-01-01
A method for improving Sampson's (1910, 1912, 1921) original work in developing series expressions for accurate coordinates of the Galilean satellites is discussed. The method, which utilizes computer-based algebraic manipulation software, was developed to reconstruct Sampson's theory, remove existing errors, introduce neglected effects, and provide analytic expressions for the coordinates as well as for the partial derivatives with respect to orbital parameters, Jupiter's mass and oblateness, the satellite masses, and Jupiter's pole and rotation period. The software system, capable of handling Poisson series with up to 73 polynomial variables and 28 trigonometric arguments, is described. The preliminary solution is presented, and procedures are outlined for calculating perturbations and eliminating auxiliary parameters.
Transcriptional Bursting in Gene Expression: Analytical Results for General Stochastic Models.
Kumar, Niraj; Singh, Abhyudai; Kulkarni, Rahul V
2015-10-01
Gene expression in individual cells is highly variable and sporadic, often resulting in the synthesis of mRNAs and proteins in bursts. Such bursting has important consequences for cell-fate decisions in diverse processes ranging from HIV-1 viral infections to stem-cell differentiation. It is generally assumed that bursts are geometrically distributed and that they arrive according to a Poisson process. On the other hand, recent single-cell experiments provide evidence for complex burst arrival processes, highlighting the need for analysis of more general stochastic models. To address this issue, we invoke a mapping between general stochastic models of gene expression and systems studied in queueing theory to derive exact analytical expressions for the moments associated with mRNA/protein steady-state distributions. These results are then used to derive noise signatures, i.e. explicit conditions based entirely on experimentally measurable quantities, that determine if the burst distributions deviate from the geometric distribution or if burst arrival deviates from a Poisson process. For non-Poisson arrivals, we develop approaches for accurate estimation of burst parameters. The proposed approaches can lead to new insights into transcriptional bursting based on measurements of steady-state mRNA/protein distributions. PMID:26474290
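The simplest of these noise signatures can be written down directly. For Poisson burst arrivals with i.i.d. burst sizes B and first-order mRNA decay (the M[X]/M/infinity queue), the steady-state Fano factor is a standard compound-Poisson result; a measured Fano factor deviating from it flags non-geometric bursting. This is a generic special case quoted for illustration, not the paper's full generality:

```python
def burst_fano_factor(mean_B, mean_B2):
    """Steady-state mRNA Fano factor (variance/mean) for Poisson burst
    arrivals with i.i.d. burst sizes B and first-order decay:
        Fano = 1 + (<B^2> - <B>) / (2 <B>).
    A geometric burst distribution on {0, 1, ...} with mean b has
    <B^2> = 2 b^2 + b, recovering the familiar Fano = 1 + b."""
    return 1.0 + (mean_B2 - mean_B) / (2.0 * mean_B)
```

Comparing a measured Fano factor with 1 + b for the inferred mean burst size is exactly the kind of experimentally accessible test the abstract describes.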
Fast and accurate approximate inference of transcript expression from RNA-seq data
Hensman, James; Papastamoulis, Panagiotis; Glaus, Peter; Honkela, Antti; Rattray, Magnus
2015-01-01
Motivation: Assigning RNA-seq reads to their transcript of origin is a fundamental task in transcript expression estimation. Where ambiguities in assignments exist due to transcripts sharing sequence, e.g. alternative isoforms or alleles, the problem can be solved through probabilistic inference. Bayesian methods have been shown to provide accurate transcript abundance estimates compared with competing methods. However, exact Bayesian inference is intractable and approximate methods such as Markov chain Monte Carlo and Variational Bayes (VB) are typically used. While providing a high degree of accuracy and modelling flexibility, standard implementations can be prohibitively slow for large datasets and complex transcriptome annotations. Results: We propose a novel approximate inference scheme based on VB and apply it to an existing model of transcript expression inference from RNA-seq data. Recent advances in VB algorithmics are used to improve the convergence of the algorithm beyond the standard Variational Bayes Expectation Maximization algorithm. We apply our algorithm to simulated and biological datasets, demonstrating a significant increase in speed with only very small loss in accuracy of expression level estimation. We carry out a comparative study against seven popular alternative methods and demonstrate that our new algorithm provides excellent accuracy and inter-replicate consistency while remaining competitive in computation time. Availability and implementation: The methods were implemented in R and C++, and are available as part of the BitSeq project at github.com/BitSeq. The method is also available through the BitSeq Bioconductor package. The source code to reproduce all simulation results can be accessed via github.com/BitSeq/BitSeqVB_benchmarking. Contact: james.hensman@sheffield.ac.uk or panagiotis.papastamoulis@manchester.ac.uk or Magnus.Rattray@manchester.ac.uk Supplementary information: Supplementary data are available at Bioinformatics online
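The probabilistic read-assignment problem described above is classically solved by expectation-maximization; the VB scheme in the paper approximates the same posterior. A toy EM sketch for ambiguous read assignment (equal transcript lengths assumed; names and data are illustrative, and this is not the BitSeq implementation):

```python
def em_abundance(reads, transcripts, n_iter=200):
    """Estimate transcript abundances from ambiguous read assignments by EM.

    `reads` is a list of sets: the transcripts each read is compatible
    with.  Transcript lengths are assumed equal for simplicity.
    """
    theta = {t: 1.0 / len(transcripts) for t in transcripts}
    for _ in range(n_iter):
        counts = {t: 0.0 for t in transcripts}
        for compat in reads:
            z = sum(theta[t] for t in compat)
            for t in compat:                     # E-step: responsibilities
                counts[t] += theta[t] / z
        total = sum(counts.values())
        theta = {t: c / total for t, c in counts.items()}   # M-step
    return theta

# 60 reads unique to transcript A, 20 unique to B, 20 compatible with both:
reads = [{"A"}] * 60 + [{"B"}] * 20 + [{"A", "B"}] * 20
theta = em_abundance(reads, ["A", "B"])
```

The fixed point here is theta_A = 0.75: the 20 shared reads are split 3:1, in proportion to the abundances inferred from the unique reads.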
Accurate Gene Expression-Based Biodosimetry Using a Minimal Set of Human Gene Transcripts
Tucker, James D.; Joiner, Michael C.; Thomas, Robert A.; Grever, William E.; Bakhmutsky, Marina V.; Chinkhota, Chantelle N.; Smolinski, Joseph M.; Divine, George W.; Auner, Gregory W.
2014-03-15
Purpose: Rapid and reliable methods for conducting biological dosimetry are a necessity in the event of a large-scale nuclear event. Conventional biodosimetry methods lack the speed, portability, ease of use, and low cost required for triaging numerous victims. Here we address this need by showing that polymerase chain reaction (PCR) on a small number of gene transcripts can provide accurate and rapid dosimetry. The low cost and relative ease of PCR compared with existing dosimetry methods suggest that this approach may be useful in mass-casualty triage situations. Methods and Materials: Human peripheral blood from 60 adult donors was acutely exposed to cobalt-60 gamma rays at doses of 0 (control) to 10 Gy. mRNA expression levels of 121 selected genes were obtained 0.5, 1, and 2 days after exposure by reverse-transcriptase real-time PCR. Optimal dosimetry at each time point was obtained by stepwise regression of dose received against individual gene transcript expression levels. Results: Only 3 to 4 different gene transcripts, ASTN2, CDKN1A, GDF15, and ATM, are needed to explain ≥0.87 of the variance (R²). Receiver operating characteristic values of 0.98, a measure of sensitivity and specificity, were achieved for these statistical models at each time point. Conclusions: The actual and predicted radiation doses agree very closely up to 6 Gy. Dosimetry at 8 and 10 Gy shows some effect of saturation, thereby slightly diminishing the ability to quantify higher exposures. Analyses of these gene transcripts may be advantageous for use in a field-portable device designed to assess exposures in mass casualty situations or in clinical radiation emergencies.
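The dose-prediction idea rests on ordinary stepwise regression: transcripts are added one at a time according to how much dose variance they explain. A sketch of the first selection step on synthetic data (gene names, slopes, and noise levels are hypothetical, not the study's measurements):

```python
import random

def r_squared(x, y):
    """R^2 of a simple linear regression of y on x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return (sxy * sxy) / (sxx * syy)

rng = random.Random(0)
doses = [0, 1, 2, 4, 6, 8, 10] * 8                  # Gy, one entry per donor
expression = {
    # a dose-responsive transcript (hypothetical stand-in for e.g. CDKN1A)
    "gene_responsive": [2.0 * d + rng.gauss(0, 0.5) for d in doses],
    # a transcript unrelated to dose
    "gene_flat": [rng.gauss(5, 1) for _ in doses],
}

# First step of stepwise selection: pick the transcript whose expression
# explains the most variance in the delivered dose.
best = max(expression, key=lambda g: r_squared(expression[g], doses))
```

Subsequent steps would regress the residual dose variance against the remaining candidates until no transcript adds significant explanatory power.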
NASA Astrophysics Data System (ADS)
Boué, G.; Montalto, M.; Boisse, I.; Oshagh, M.; Santos, N. C.
2013-02-01
The Rossiter-McLaughlin (hereafter RM) effect is a key tool for measuring the projected spin-orbit angle between stellar spin axes and orbits of transiting planets. However, the measured radial velocity (RV) anomalies produced by this effect are not intrinsic and depend on both instrumental resolution and data reduction routines. Using inappropriate formulas to model the RM effect introduces biases, at least in the projected velocity Vsini⋆ compared to the spectroscopic value. Currently, only the iodine cell technique has been modeled, which corresponds to observations done by, e.g., the HIRES spectrograph of the Keck telescope. In this paper, we provide a simple expression of the RM effect specially designed to model observations done by the Gaussian fit of a cross-correlation function (CCF) as in the routines performed by the HARPS team. We also derive a new analytical formulation of the RV anomaly associated with the iodine cell technique. For both formulas, we model the subplanet mean velocity vp and dispersion βp accurately, taking the rotational broadening of the subplanet profile into account. We compare our formulas adapted to the CCF technique with simulated data generated with the numerical software SOAP-T and find good agreement up to Vsini⋆ ≲ 20 km s-1. In contrast, the analytical models simulating the two different observation techniques can disagree by about 10σ in Vsini⋆ for large spin-orbit misalignments. It is thus important to apply the adapted model when fitting data. A public code implementing the expressions derived in this paper is available at http://www.astro.up.pt/resources/arome. A copy of the code is also available at the CDS via anonymous ftp to cdsarc.u-strasbg.fr (130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/550/A53
Lim, Chee Wei; Tai, Siew Hoon; Lee, Lin Min; Chan, Sheot Harn
2012-07-01
The current food crisis demands unambiguous determination of mycotoxin contamination in staple foods to achieve safer food for consumption. This paper describes the first accurate LC-MS/MS method developed to analyze trichothecenes in grains by applying multiple reaction monitoring (MRM) transition and MS(3) quantitation strategies in tandem. The trichothecenes are nivalenol, deoxynivalenol, deoxynivalenol-3-glucoside, fusarenon X, 3-acetyl-deoxynivalenol, 15-acetyldeoxynivalenol, diacetoxyscirpenol, and HT-2 and T-2 toxins. Acetic acid and ammonium acetate were used to convert the analytes into their respective acetate adducts and ammonium adducts under negative and positive MS polarity conditions, respectively. The mycotoxins were separated by reversed-phase LC in a 13.5-min run, ionized using electrospray ionization, and detected by tandem mass spectrometry. Analyte-specific mass-to-charge (m/z) ratios were used to perform quantitation under MRM transition and MS(3) (linear ion trap) modes. Three experiments were performed for each quantitation mode and matrix in batches over 6 days for recovery studies. The matrix effect was investigated at concentration levels of 20, 40, 80, 120, 160, and 200 μg kg(-1) (n = 3) in 5 g corn flour and rice flour. Extraction with acetonitrile provided a good overall recovery range of 90-108% (n = 3) at three levels of spiking concentration of 40, 80, and 120 μg kg(-1). A quantitation limit of 2-6 μg kg(-1) was achieved by applying an MRM transition quantitation strategy. Under MS(3) mode, a quantitation limit of 4-10 μg kg(-1) was achieved. Relative standard deviations of 2-10% and 2-11% were reported for MRM transition and MS(3) quantitation, respectively. The successful utilization of MS(3) enabled accurate analyte fragmentation pattern matching and its quantitation, leading to the development of analytical methods in fields that demand both analyte specificity and fragmentation fingerprint-matching capabilities that are
On the analytical form of the Earth's magnetic attraction expressed as a function of time
NASA Technical Reports Server (NTRS)
Carlheim-Gyllenskold, V.
1983-01-01
An attempt is made to express the Earth's magnetic attraction in simple analytical form using observations during the 16th to 19th centuries. Observations of the magnetic inclination in the 16th and 17th centuries are discussed.
Analytic Expressions for the BCDMEM Model of Recognition Memory
Myung, Jay I.; Montenegro, Maximiliano; Pitt, Mark A.
2007-01-01
We introduce a Fourier transform technique that enables one to derive closed-form expressions for performance measures (e.g., hit and false alarm rates) of simulation-based models of recognition memory. Application of the technique is demonstrated using the Bind Cue Decide Model of Episodic Memory (BCDMEM; Dennis & Humphreys, 2001). In addition to reducing the time required to test the model, which for models like BCDMEM can be excessive, asymptotic expressions of the measures reveal heretofore unknown properties of the model, such as model predictions being dependent on vector length. PMID:18516213
Analytical Expressions for Deformation from an Arbitrarily Oriented Spheroid in a Half-Space
NASA Astrophysics Data System (ADS)
Cervelli, P. F.
2013-12-01
Deformation from magma chambers can be modeled by an elastic half-space with an embedded cavity subject to uniform pressure change along its interior surface. For a small number of cavity shapes, such as a sphere or a prolate spheroid, closed-form, analytical expressions for deformation have been derived, although these only approximate the uniform-pressure-change boundary condition, with the approximation becoming more accurate as the ratio of source depth to source dimension increases. Using the method of Eshelby [1957] and Yang [1988], which consists of a distribution of double forces and centers of dilatation along the vertical axis, I have derived expressions for displacement from a finite spheroid of arbitrary orientation and aspect ratio that are exact in an infinite elastic medium and approximate in a half-space. The approximation, like those for other cavity shapes, becomes increasingly accurate as the depth to source ratio grows larger, and is accurate to within a few percent in most real-world cases. I have also derived expressions for the deformation-gradient tensor, i.e., the derivatives of each component of displacement with respect to each coordinate direction. These can be transformed easily into the strain and stress tensors. The expressions give deformation both at the surface and at any point within the half-space, and include conditional statements that account for limiting cases that would otherwise prove singular. I have developed MATLAB code for these expressions (and their derivatives), which I use to demonstrate the accuracy of the approximation by showing how well the uniform-pressure-change boundary condition is satisfied in a variety of cases. I also show that a vertical, oblate spheroid with a zero-length vertical axis is equivalent to the penny-shaped crack of Fialko [2001] in an infinite medium and an excellent approximation in a half-space. Finally, because, in many cases, volume change is more tangible than pressure change, I have
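For context, the simplest closed-form member of this family of cavity sources is the point ("Mogi") source, which the finite-spheroid expressions above generalize. In the volume-change form for a Poisson solid, the vertical surface displacement is uz = (1-ν)ΔV·d/(π(r²+d²)^(3/2)). A sketch with illustrative numbers:

```python
import math

def mogi_uz(r, depth, dV, nu=0.25):
    """Vertical surface displacement (m) from a point pressure (Mogi)
    source at depth `depth` (m) with volume change `dV` (m^3), Poisson
    ratio `nu`, at radial distance `r` (m) from the source axis."""
    return (1.0 - nu) * dV * depth / (math.pi * (r * r + depth * depth) ** 1.5)

# 10^6 m^3 of inflation at 3 km depth:
uz_center = mogi_uz(0.0, 3000.0, 1.0e6)     # maximum uplift, above the source
uz_far = mogi_uz(5000.0, 3000.0, 1.0e6)     # uplift 5 km off-axis
```

At r = 0 this reduces to uz = (1-ν)ΔV/(πd²), about 2.7 cm for these values, and the uplift decays with distance from the source axis.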
Accurate Analytic Potential Functions for the a ^3Π_1 and X ^1Σ^+ States of IBr
NASA Astrophysics Data System (ADS)
Yukiya, Tokio; Nishimiya, Nobuo; Suzuki, Masao; Le Roy, Robert
2014-06-01
Spectra of IBr in various wavelength regions have been measured by a number of researchers using traditional diffraction grating and microwave methods, as well as using high-resolution laser techniques combined with a Fourier transform spectrometer. In a previous paper at this meeting, we reported a preliminary determination of analytic potential energy functions for the A ^3Π_1 and X ^1Σ^+ states of IBr from a direct-potential-fit (DPF) analysis of all of the data available at that time. That study also confirmed the presence of anomalous fluctuations in the v-dependence of the first differences of the inertial rotational constant, ΔBv = Bv+1 - Bv, in the A ^3Π_1 state for vibrational levels with v'(A) in the mid 20's. However, our previous experience in a recent study of the analogous A ^3Π_1-X ^1Σ_g^+ system of Br_2 suggested that the effect of such fluctuations may be overcome if sufficient data are available. The present work therefore reports new measurements of transitions to levels in the v'(A)=23-26 region, together with a new global DPF analysis that uses "robust" least-squares fits to average properly over the effect of such fluctuations in order to provide an optimum delineation of the underlying potential energy curve(s). References: L.E. Selin, Ark. Fys. 21, 479 (1962); E. Tiemann and Th. Moeller, Z. Naturforsch. A 30, 986 (1975); E.M. Weinstock and A. Preston, J. Mol. Spectrosc. 70, 188 (1978); D.R.T. Appadoo, P.F. Bernath, and R.J. Le Roy, Can. J. Phys. 72, 1265 (1994); N. Nishimiya, T. Yukiya and M. Suzuki, J. Mol. Spectrosc. 173, 8 (1995); T. Yukiya, N. Nishimiya, and R.J. Le Roy, Paper MF12 at the 65th Ohio State University International Symposium on Molecular Spectroscopy, Columbus, Ohio, June 20-24, 2011; T. Yukiya, N. Nishimiya, Y. Samajima, K. Yamaguchi, M. Suzuki, C.D. Boone, I. Ozier and R.J. Le Roy, J. Mol. Spectrosc. 283, 32 (2013); J.K.G. Watson, J. Mol. Spectrosc. 219, 326 (2003).
NASA Astrophysics Data System (ADS)
Roy Choudhury, Raja; Roy Choudhury, Arundhati; Kanti Ghose, Mrinal
2013-01-01
A semi-analytical model with three optimizing parameters and a novel non-Gaussian function as the fundamental modal field solution has been proposed to arrive at an accurate solution that predicts various propagation parameters of graded-index fibers with less computational burden than numerical methods. In our semi-analytical formulation, the core parameter U is optimized by the Nelder-Mead method of nonlinear unconstrained minimization, an efficient and compact direct-search method that requires no derivative information and copes well with objective functions that are uncertain, noisy, or even discontinuous. Three optimizing parameters are included in the formulation of the fundamental modal field of an optical fiber to make it more flexible and accurate than other available approximations. Employing a variational technique, Petermann I and II spot sizes have been evaluated for triangular- and trapezoidal-index fibers with the proposed fundamental modal field. It has been demonstrated that the results of the proposed solution closely match the numerical results over a wide range of normalized frequencies. This approximation can also be used in the study of doped and nonlinear fiber amplifiers.
Analytical expressions for partial wave two-body Coulomb transition matrices at ground-state energy
NASA Astrophysics Data System (ADS)
Kharchenko, V. F.
2016-11-01
Building on the Fock method of stereographic projection of the three-dimensional momentum space onto the four-dimensional unit sphere, we study the analytical solution of the Lippmann-Schwinger integral equation for the partial-wave two-body Coulomb transition matrix at the energy of the ground bound state. In this case, new expressions for the partial p-, d- and f-wave two-body Coulomb transition matrices are obtained in simple analytical form. The developed approach can also be extended to determine analytically the partial-wave Coulomb transition matrices at the energies of excited bound states.
Robel, Laurence; Vaivre-Douret, Laurence; Neveu, Xavier; Piana, Hélène; Perier, Antoine; Falissard, Bruno; Golse, Bernard
2008-12-01
We investigated the recognition of pairs of faces (same or different facial identities and expressions) in two groups of 14 children aged 6-10 years, with either an expressive language disorder (ELD) or a mixed language disorder (MLD), and two groups of 14 matched healthy controls. In terms of global performance, children with either ELD or MLD differed little from controls in either face or emotion recognition. By contrast, we found that children with MLD, but not those with ELD, took identical faces to be different when their expressions changed. Since children with mixed language disorders are socially more impaired than children with ELD, we suggest that these features may partly underpin the social difficulties of these children.
NASA Astrophysics Data System (ADS)
Le Roy, Robert J.; Walji, Sadru; Sentjens, Katherine
2013-06-01
Alkali hydride diatomic molecules have long been the object of spectroscopic studies. However, their small reduced mass makes them species for which the conventional semiclassical-based methods of analysis tend to have the largest errors. To date, the only quantum-mechanically accurate direct-potential-fit (DPF) analysis for one of these molecules was the one for LiH reported by Coxon and Dickinson. The present paper extends this level of analysis to NaH, and reports a DPF analysis of all available spectroscopic data for the A ^1Σ^+-X ^1Σ^+ system of NaH which yields analytic potential energy functions for these two states that account for those data (on average) to within the experimental uncertainties. References: W.C. Stwalley, W.T. Zemke and S.C. Yang, J. Phys. Chem. Ref. Data 20, 153-187 (1991); J.A. Coxon and C.S. Dickinson, J. Chem. Phys. 121, 8378 (2004).
NASA Astrophysics Data System (ADS)
Amador, Davi H. T.; de Oliveira, Heibbe C. B.; Sambrano, Julio R.; Gargano, Ricardo; de Macedo, Luiz Guilherme M.
2016-10-01
A prolapse-free basis set for Eka-Actinium (E121, Z = 121), numerical atomic calculations on E121, spectroscopic constants and an accurate analytical form for the potential energy curve of diatomic E121F, obtained at the 4-component all-electron CCSD(T) level including the Gaunt interaction, are presented. The results show a strong and polarized bond (≈181 kcal/mol in strength) between E121 and F; the outermost frontier molecular orbitals of E121F should be fairly similar to those of AcF, and there is no evidence of a break in periodic trends. Moreover, the Gaunt interaction, although small, is expected to considerably influence the overall rovibrational spectrum.
Koot, Yvonne E. M.; van Hooff, Sander R.; Boomsma, Carolien M.; van Leenen, Dik; Groot Koerkamp, Marian J. A.; Goddijn, Mariëtte; Eijkemans, Marinus J. C.; Fauser, Bart C. J. M.; Holstege, Frank C. P.; Macklon, Nick S.
2016-01-01
The primary limiting factor for effective IVF treatment is successful embryo implantation. Recurrent implantation failure (RIF) is a condition whereby couples fail to achieve pregnancy despite consecutive embryo transfers. Here we describe the collection of gene expression profiles from mid-luteal phase endometrial biopsies (n = 115) from women experiencing RIF and healthy controls. Using a signature discovery set (n = 81) we identify a signature containing 303 genes predictive of RIF. Independent validation in 34 samples shows that the gene signature predicts RIF with 100% positive predictive value (PPV). The strength of the RIF associated expression signature also stratifies RIF patients into distinct groups with different subsequent implantation success rates. Exploration of the expression changes suggests that RIF is primarily associated with reduced cellular proliferation. The gene signature will be of value in counselling and guiding further treatment of women who fail to conceive upon IVF and suggests new avenues for developing intervention. PMID:26797113
Analytical expressions for pH-regulated electroosmotic flow in microchannels.
Hsu, Jyh-Ping; Huang, Chih-Hua
2012-05-01
We derived analytical expressions for the pH-regulated electroosmotic flow in a microchannel for arbitrary levels of surface potential and arbitrary types of electrolyte solution; previous results are almost always based on the assumption of a low, constant surface potential, which is inaccurate and unrealistic. In addition, an analytical expression for the dependence of the surface potential on the electrolyte concentration and solution pH is obtained, which is capable of explaining the behavior of the empirical relation used in the literature. The analytical results derived are readily applicable to further electrokinetic analyses, and to interpreting experimental observations and/or designing devices involving electroosmosis, such as biosensors and lab-on-a-chip systems. PMID:22236502
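As a point of reference for the arbitrary-potential results above, the classical low-potential, thin-double-layer limit is the Helmholtz-Smoluchowski slip velocity u = -εζE/μ. A quick numerical sketch (illustrative values for an aqueous channel):

```python
EPS0 = 8.854e-12      # vacuum permittivity, F/m

def eof_velocity(zeta, E, eps_r=78.5, mu=1.0e-3):
    """Electroosmotic slip velocity (m/s) from the classical
    Helmholtz-Smoluchowski relation u = -eps*zeta*E/mu, valid only in
    the low-potential limit with a thin double layer.  zeta in V,
    E in V/m, mu in Pa*s."""
    return -eps_r * EPS0 * zeta * E / mu

# zeta = -50 mV surface, 10 kV/m axial field, water at room temperature:
u = eof_velocity(zeta=-0.05, E=1.0e4)
```

This gives roughly 0.35 mm/s toward the cathode; the paper's expressions correct this estimate when the surface potential is large or pH-regulated.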
NASA Astrophysics Data System (ADS)
Dewangan, D. P.
2008-01-01
We give an exact quantum formula for the z-component of the dipole matrix element between parabolic states of a hydrogen atom in terms of the Jacobi polynomials. The formula extends the range of numerical computation to values of the parabolic quantum numbers larger than those accessible with the standard textbook formula, which is expressed in terms of hypergeometric functions. We obtain an accurate quantum expression for the z-dipole matrix element in terms of the ordinary Bessel functions for transitions between nearby Rydberg parabolic states. We derive for the first time the formula of the z-dipole matrix element of the correspondence principle method directly from the quantum expression, and in the process of derivation, clarify the nature of the classical-quantum correspondence. The expressions obtained in this work solve the problem of computation of the z-dipole matrix element of hydrogen to a large extent.
High expression of CD26 accurately identifies human bacteria-reactive MR1-restricted MAIT cells
Sharma, Prabhat K; Wong, Emily B; Napier, Ruth J; Bishai, William R; Ndung'u, Thumbi; Kasprowicz, Victoria O; Lewinsohn, Deborah A; Lewinsohn, David M; Gold, Marielle C
2015-01-01
Mucosa-associated invariant T (MAIT) cells express the semi-invariant T-cell receptor TRAV1–2 and detect a range of bacteria and fungi through the MHC-like molecule MR1. However, knowledge of the function and phenotype of bacteria-reactive MR1-restricted TRAV1–2+ MAIT cells from human blood is limited. We broadly characterized the function of MR1-restricted MAIT cells in response to bacteria-infected targets and defined a phenotypic panel to identify these cells in the circulation. We demonstrated that bacteria-reactive MR1-restricted T cells shared effector functions of cytolytic effector CD8+ T cells. By analysing an extensive panel of phenotypic markers, we determined that CD26 and CD161 were most strongly associated with these T cells. Using FACS to sort phenotypically defined CD8+ subsets, we demonstrated that high expression of CD26 on CD8+ TRAV1–2+ cells identified, with high specificity and sensitivity, bacteria-reactive MR1-restricted T cells from human blood. CD161hi was also specific for these cells but lacked the sensitivity to identify all bacteria-reactive MR1-restricted T cells, some of which were CD161dim. Using cell surface expression of CD8, TRAV1–2, and CD26hi in the absence of stimulation, we confirm that bacteria-reactive T cells are lacking in the blood of individuals with active tuberculosis and are restored in the blood of individuals undergoing treatment for tuberculosis. PMID:25752900
Accurate RT-qPCR gene expression analysis on cell culture lysates
Van Peer, Gert; Mestdagh, Pieter; Vandesompele, Jo
2012-01-01
Gene expression quantification on cultured cells using the reverse transcription quantitative polymerase chain reaction (RT-qPCR) typically involves an RNA purification step that limits sample processing throughput and precludes parallel analysis of large numbers of samples. An approach in which cDNA synthesis is carried out on crude cell lysates instead of on purified RNA samples can offer a fast and straightforward alternative. Here, we evaluate such an approach, benchmarking Ambion's Cells-to-CT kit with the classic workflow of RNA purification and cDNA synthesis, and demonstrate its good accuracy and superior sensitivity. PMID:22355736
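Whichever workflow produces the Ct values, relative quantification is typically done with the Livak 2^-ΔΔCt method. A minimal sketch with illustrative Ct values (not data from the benchmark above):

```python
def fold_change(ct_target_s, ct_ref_s, ct_target_c, ct_ref_c):
    """Relative expression by the Livak 2^-ddCt method: Ct values of the
    target and reference gene in the sample (_s) and calibrator (_c),
    assuming ~100% amplification efficiency for both assays."""
    ddct = (ct_target_s - ct_ref_s) - (ct_target_c - ct_ref_c)
    return 2.0 ** (-ddct)

# Illustrative Ct values: the target amplifies 2 cycles earlier (relative
# to the reference gene) in the sample than in the calibrator.
fc = fold_change(ct_target_s=22.0, ct_ref_s=18.0,
                 ct_target_c=24.0, ct_ref_c=18.0)
```

A 2-cycle shift corresponds to a 4-fold up-regulation, since each PCR cycle ideally doubles the product.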
RiboTALE: A modular, inducible system for accurate gene expression control
Rai, Navneet; Ferreiro, Aura; Neckelmann, Alexander; Soon, Amy; Yao, Andrew; Siegel, Justin; Facciotti, Marc T.; Tagkopoulos, Ilias
2015-01-01
A limiting factor in synthetic gene circuit design is the number of independent control elements that can be combined together in a single system. Here, we present RiboTALEs, a new class of inducible repressors that combine the specificity of TALEs with the ability of riboswitches to recognize exogenous signals and differentially control protein abundance. We demonstrate the capacity of RiboTALEs, constructed through different combinations of TALE proteins and riboswitches, to rapidly and reproducibly control the expression of downstream targets with a dynamic range of 243.7 ± 17.6-fold, which is adequate for many biotechnological applications. PMID:26023068
A Stationary Wavelet Entropy-Based Clustering Approach Accurately Predicts Gene Expression
Nguyen, Nha; Vo, An; Choi, Inchan
2015-01-01
Studying epigenetic landscapes is important to understand the condition for gene regulation. Clustering is a useful approach to study epigenetic landscapes by grouping genes based on their epigenetic conditions. However, classical clustering approaches that often use a representative value of the signals in a fixed-sized window do not fully use the information written in the epigenetic landscapes. Clustering approaches to maximize the information of the epigenetic signals are necessary for better understanding gene regulatory environments. For effective clustering of multidimensional epigenetic signals, we developed a method called Dewer, which uses the entropy of stationary wavelet of epigenetic signals inside enriched regions for gene clustering. Interestingly, the gene expression levels were highly correlated with the entropy levels of epigenetic signals. Dewer separates genes better than a window-based approach in the assessment using gene expression and achieved a correlation coefficient above 0.9 without using any training procedure. Our results show that the changes of the epigenetic signals are useful to study gene regulation. PMID:25383910
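The core quantity in this kind of method is the Shannon entropy of wavelet detail energies. A toy sketch of a level-1 stationary (undecimated) Haar wavelet entropy, a simplification of the published method with synthetic signals: an edge-like signal concentrates detail energy (low entropy), while noise spreads it over all shifts (high entropy).

```python
import math
import random

def haar_swt_entropy(x):
    """Shannon entropy of the normalized energy of level-1 stationary
    (undecimated) Haar wavelet detail coefficients, periodic boundary."""
    n = len(x)
    d = [(x[i] - x[(i + 1) % n]) / math.sqrt(2.0) for i in range(n)]
    total = sum(c * c for c in d)
    h = 0.0
    for c in d:
        p = c * c / total
        if p > 0.0:          # skip zero-energy coefficients
            h -= p * math.log(p)
    return h

rng = random.Random(0)
step = [0.0] * 32 + [1.0] * 32                     # one sharp edge
noise = [rng.gauss(0.0, 1.0) for _ in range(64)]   # white noise
```

The step signal has only two nonzero detail coefficients (entropy near ln 2), whereas the noise signal's entropy approaches ln 64.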
Simple analytical expression for the peak-frequency shifts of plasmonic resonances for sensing.
Yang, Jianji; Giessen, Harald; Lalanne, Philippe
2015-05-13
We derive a closed-form expression that accurately predicts the peak frequency shift and broadening induced by tiny perturbations of plasmonic nanoresonators without critically relying on repeated electrodynamic simulations of the spectral response of the nanoresonator for various locations, sizes, or shapes of the perturbing objects. In comparison with other approaches of the same kind, the strength of the present approach is that the derivation is supported by a mathematical formalism based on a rigorous normalization of the resonance modes of nanoresonators consisting of lossy and dispersive materials. Accordingly, accurate predictions are obtained for a large range of nanoparticle shapes and sizes used in various plasmonic nanosensors, even beyond the quasistatic limit. The expression gives quantitative insight and, combined with an open-source code, provides accurate and fast predictions that are ideally suited for preliminary designs or for interpretation of experimental data. It is also valid for photonic resonators with large mode volumes. PMID:25844813
Wood, David L. A.; Nones, Katia; Steptoe, Anita; Christ, Angelika; Harliwong, Ivon; Newell, Felicity; Bruxner, Timothy J. C.; Miller, David; Cloonan, Nicole; Grimmond, Sean M.
2015-01-01
Genetic variation modulates gene expression transcriptionally or post-transcriptionally, and can profoundly alter an individual’s phenotype. Measuring allelic differential expression at heterozygous loci within an individual, a phenomenon called allele-specific expression (ASE), can assist in identifying such factors. Massively parallel DNA and RNA sequencing and advances in bioinformatic methodologies provide an outstanding opportunity to measure ASE genome-wide. In this study, matched DNA and RNA sequencing, genotyping arrays and computationally phased haplotypes were integrated to comprehensively and conservatively quantify ASE in a single human brain and liver tissue sample. We describe a methodological evaluation and assessment of common bioinformatic steps for ASE quantification, and recommend a robust approach to accurately measure SNP, gene and isoform ASE through the use of personalized haplotype genome alignment, strict alignment quality control and intragenic SNP aggregation. Our results indicate that accurate ASE quantification requires careful bioinformatic analyses and is adversely affected by sample specific alignment confounders and random sampling even at moderate sequence depths. We identified multiple known and several novel ASE genes in liver, including WDR72, DSP and UBD, as well as genes that contained ASE SNPs with imbalance direction discordant with haplotype phase, explainable by annotated transcript structure, suggesting isoform derived ASE. The methods evaluated in this study will be of use to researchers performing highly conservative quantification of ASE, and the genes and isoforms identified as ASE of interest to researchers studying those loci. PMID:25965996
Li, XueYan; Cheng, JinYun; Zhang, Jing; Teixeira da Silva, Jaime A.; Wang, ChunXia; Sun, HongMei
2015-01-01
Lilium is an important commercial market flower bulb. qRT-PCR is an extremely important technique to track gene expression levels. The requirement of suitable reference genes for normalization has become increasingly significant and exigent. The expression of internal control genes in living organisms varies considerably under different experimental conditions. For economically important Lilium, only a limited number of reference genes applied in qRT-PCR have been reported to date. In this study, the expression stability of 12 candidate genes including α-TUB, β-TUB, ACT, eIF, GAPDH, UBQ, UBC, 18S, 60S, AP4, FP, and RH2, in a diverse set of 29 samples representing different developmental processes, three stress treatments (cold, heat, and salt) and different organs, has been evaluated. For different organs, the combination of ACT, GAPDH, and UBQ is appropriate whereas ACT together with AP4, or ACT along with GAPDH is suitable for normalization of leaves and scales at different developmental stages, respectively. In leaves, scales and roots under stress treatments, FP, ACT and AP4, respectively showed the most stable expression. This study provides a guide for the selection of a reference gene under different experimental conditions, and will benefit future research on more accurate gene expression studies in a wide variety of Lilium genotypes. PMID:26509446
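Stability measures such as those behind geNorm- or NormFinder-style rankings can be crudely approximated by the coefficient of variation of expression across conditions; candidates with the lowest CV make the better normalizers. A simplified sketch (hypothetical expression values, not the Lilium data):

```python
import statistics

def cv(values):
    """Coefficient of variation: a crude expression-stability measure
    (lower = more stable across conditions)."""
    return statistics.pstdev(values) / statistics.mean(values)

# Hypothetical normalized expression levels across five conditions:
candidates = {
    "ACT":   [10.1, 10.0, 9.9, 10.2, 10.0],   # stable
    "GAPDH": [10.3, 9.8, 10.1, 10.0, 9.9],    # stable
    "18S":   [8.0, 12.0, 9.5, 14.0, 7.0],     # condition-dependent
}
ranking = sorted(candidates, key=lambda g: cv(candidates[g]))
```

The published measures additionally consider pairwise variation between candidates, but the CV ranking captures the same intuition: a gene whose level tracks the treatment is a poor internal control.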
Mooring systems design based on analytical expressions of catastrophes of slow motion dynamics
Bernitsas, M.M.; Garza-Rios, L.O.
1996-12-31
Analytical expressions of the necessary and sufficient conditions for stability of mooring systems representing bifurcation boundaries, and expressions defining the morphogeneses occurring across boundaries are presented. These expressions provide means for evaluating the stability of a mooring system around an equilibrium position and constructing catastrophe sets in any parametric design space. These expressions allow the designer to select appropriate values for the mooring parameters without resorting to trial and error. A number of realistic applications are provided for barge and tanker mooring systems which exhibit qualitatively different nonlinear dynamics. The mathematical model consists of the nonlinear, third order maneuvering equations of the horizontal plane slow motion dynamics of a vessel moored to one or more terminals. Mooring lines are modeled by synthetic nylon ropes, chains, or steel cables. External excitation consists of time independent current, wind, and mean wave drift forces. The analytical expressions presented in this paper apply to nylon ropes and current excitation. Expressions for other combinations of lines and excitation can be derived.
Rong, Zimei; Ye, Zhihui
2016-01-01
We have derived an analytical expression for the time-dependent NO concentration produced by NONOate donors in the presence of oxygen, covering the process of NO release from the donor followed by autoxidation. This analytical solution incorporates the release kinetics together with the autoxidation kinetics and is used to fit the simulated NO concentration profile to the experimental data. This allows one to determine the NO release rate constant, k1, the NO release stoichiometric coefficient, vNO, and the NO autoxidation reaction rate constant, k2. The solution also allows us to predict the actual NO concentration released from NO donors under aerobic conditions: although vNO is reportedly two under anaerobic conditions, it falls to lower values in the presence of oxygen. PMID:27526174
Analytical expression for femtosecond-pulsed Z scans on instantaneous nonlinearity
NASA Astrophysics Data System (ADS)
Gu, Bing; Ji, Wei; Huang, Xiao-Qin
2008-03-01
By employing the Gaussian decomposition method, the analytical formulas of the Gaussian-beam Z-scan traces have been derived for an optically thin material exhibiting both refractive and absorptive parts of third-order nonlinearity, with Gaussian or hyperbolic secant squared laser pulses of femtosecond duration. The formulas have been verified experimentally with femtosecond-pulsed Z scans on a carbon disulfide and acetone solution of a chalcone derivative (0.95C18H17ClO4·0.05C17H14Cl2O3). An efficient yet accurate analytical technique has been demonstrated for extracting both the nonlinear refraction coefficient and the nonlinear absorption coefficient from a single closed-aperture Z-scan trace.
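As background for the approach above, the standard small-aperture, small-phase-distortion closed-aperture Z-scan transmittance of Sheik-Bahae et al. (the CW/temporally averaged result, not the femtosecond-pulse formulas derived in this paper) can be sketched as follows; the phase shift used is an illustrative value:

```python
import numpy as np

def closed_aperture_trace(x, dphi0):
    """Normalized closed-aperture Z-scan transmittance for a thin medium
    with a small on-axis nonlinear phase shift dphi0 (radians).
    x = z / z0 is the sample position in units of the Rayleigh range."""
    return 1.0 + 4.0 * x * dphi0 / ((x**2 + 9.0) * (x**2 + 1.0))

x = np.linspace(-6.0, 6.0, 2001)
dphi0 = 0.2                      # assumed (illustrative) phase distortion
T = closed_aperture_trace(x, dphi0)

# The peak-valley transmittance difference obeys dTpv ~= 0.406*|dphi0|,
# which is the usual route to the nonlinear refraction coefficient.
dTpv = T.max() - T.min()
print(round(dTpv / dphi0, 3))    # prints 0.406
```

The 0.406 factor is how the nonlinear refractive index is typically extracted from a measured peak-valley difference in the small-signal limit.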
NASA Astrophysics Data System (ADS)
Ewing, Michael A.; Zucker, Steven M.; Valentine, Stephen J.; Clemmer, David E.
2013-04-01
Mathematical expressions for the analytical duty cycle associated with different overtones in overtone mobility spectrometry are derived from the widths of the transmitted packets of ions under different instrumental operating conditions. Support for these derivations is provided through ion trajectory simulations. The outcome of the theory and simulations indicates that under all operating conditions there exists a limit or maximum observable overtone that will result in ion transmission. Implications of these findings on experimental design are discussed.
Analytic expressions for the proximity energy, the fusion process and the α emission
NASA Astrophysics Data System (ADS)
Moustabchir, R.; Royer, G.
2001-02-01
The entrance and exit channels through quasimolecular shapes are compatible with the experimental data on fusion, light-nucleus and α emissions when the proximity energy is taken into account. Analytic expressions that allow this proximity energy to be determined rapidly are presented, as well as formulas for the fusion barrier heights and radii and for the α-emission barriers. Predictions for half-lives of exotic α emissions are proposed.
NASA Technical Reports Server (NTRS)
Long, S. A. T.
1974-01-01
Analytical expressions are derived to first order for the rms position error in the triangulation solution of a point object in space for several ideal observation-station configurations. These expressions provide insights into the nature of the dependence of the rms position error on certain of the experimental parameters involved. The station geometries examined are: (1) the configuration of two arbitrarily located stations; (2) the symmetrical circular configuration of two or more stations with equal elevation angles; and (3) the circular configuration of more than two stations with equal elevation angles, when one of the stations is permitted to drift around the circle from its position of symmetry. The expressions for the rms position error are expressed as functions of the rms line-of-sight errors, the total number of stations of interest, and the elevation angles.
Analytical Characteristics of a Noninvasive Gene Expression Assay for Pigmented Skin Lesions.
Yao, Zuxu; Allen, Talisha; Oakley, Margaret; Samons, Carol; Garrison, Darryl; Jansen, Burkhard
2016-08-01
We previously reported clinical performance of a novel noninvasive and quantitative PCR (qPCR)-based molecular diagnostic assay (the pigmented lesion assay; PLA) that differentiates primary cutaneous melanoma from benign pigmented skin lesions through two target gene signatures, LINC00518 (LINC) and preferentially expressed antigen in melanoma (PRAME). This study focuses on analytical characterization of this PLA, including qPCR specificity and sensitivity, optimization of RNA input in qPCR to achieve a desired diagnostic sensitivity and specificity, and analytical performance (repeatability and reproducibility) of this two-gene PLA. All target qPCRs demonstrated a good specificity (100%) and sensitivity (with a limit of detection of 1-2 copies), which allows reliable detection of gene expression changes of LINC and PRAME between melanomas and nonmelanomas. Through normalizing RNA input in qPCR, we converted the traditional gene expression analyses to a binomial detection of gene transcripts (i.e., detected or not detected). By combining the binomial qPCR results of the two genes, an improved diagnostic sensitivity (raised from 52%-65% to 71% at 1 pg of total RNA input, and to 91% at 3 pg of total RNA input) was achieved. This two-gene PLA demonstrates a high repeatability and reproducibility (coefficient of variation <3%) and all required analytical performance characteristics for the commercial processing of clinical samples. PMID:27505074
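The gain from combining two binomial ("detected / not detected") calls can be sketched as below. This is a hedged illustration assuming independent detection of the two targets; the per-gene probabilities are invented for the example and are not the PLA's validated rates.

```python
# Hedged sketch: combining two binomial qPCR calls (LINC, PRAME) into a
# single "either target detected" classifier, assuming independence.
# The per-gene detection probabilities below are illustrative only.

def combined_sensitivity(p_linc, p_prame):
    """Probability that at least one of the two targets is detected."""
    return 1.0 - (1.0 - p_linc) * (1.0 - p_prame)

# Example: two markers each detected in 60% of true melanomas.
print(round(combined_sensitivity(0.60, 0.60), 2))  # prints 0.84
```

The example shows the qualitative point of the abstract: a two-gene rule can beat either single-gene call, at the cost of a (manageable) specificity trade-off not modeled here.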
Meaux, Emilie; Vuilleumier, Patrik
2016-11-01
The ability to decode facial emotions is of primary importance for human social interactions; yet, it is still debated how we analyze faces to determine their expression. Here we compared the processing of emotional face expressions through holistic integration and/or local analysis of visual features, and determined which brain systems mediate these distinct processes. Behavioral, physiological, and brain responses to happy and angry faces were assessed by presenting congruent global configurations of expressions (e.g., happy top+happy bottom), incongruent composite configurations (e.g., angry top+happy bottom), and isolated features (e.g., happy top only). Top and bottom parts were always from the same individual. Twenty-six healthy volunteers were scanned using fMRI while they classified the expression in either the top or the bottom face part but ignored information in the other non-target part. Results indicate that the recognition of happy and anger expressions is neither strictly holistic nor analytic. Both routes were involved, but with a different role for analytic and holistic information depending on the emotion type, and different weights of local features between happy and anger expressions. Dissociable neural pathways were engaged depending on emotional face configurations. In particular, regions within the face processing network differed in their sensitivity to holistic expression information, which predominantly activated fusiform, inferior occipital areas and amygdala when internal features were congruent (i.e., template matching), whereas more local analysis of independent features preferentially engaged STS and prefrontal areas (IFG/OFC) in the context of full face configurations, but early visual areas and pulvinar when seen in isolated parts. Collectively, these findings suggest that facial emotion recognition recruits separate, but interactive dorsal and ventral routes within the face processing networks, whose engagement may be shaped by
A new expression of Ns versus Ef to an accurate control charge model for AlGaAs/GaAs
NASA Astrophysics Data System (ADS)
Bouneb, I.; Kerrour, F.
2016-03-01
Semiconductor components have become the privileged support of information and communication, particularly owing to the development of the internet. Today, MOS transistors on silicon largely dominate the semiconductor market; however, shrinking the transistor gate length is no longer enough to enhance performance and keep pace with Moore's law, particularly for broadband telecommunication systems, where faster components are required. For this reason, alternative structures such as IV-IV or III-V heterostructures [1] have been proposed, the most effective components in this area being High Electron Mobility Transistors (HEMTs) on III-V substrates. This work investigates an approach contributing to the development of a numerical model based on physical and numerical modelling of the potential at the AlGaAs/GaAs heterostructure interface. We developed a calculation using projective methods that allows integration of the Hamiltonian via Green functions in the Schrödinger equation, for a rigorous self-consistent resolution with the Poisson equation. A simple analytical approach to charge control in the quantum-well region of an AlGaAs/GaAs HEMT structure is presented, and a charge-control equation accounting for a variable average distance of the 2-DEG from the interface is introduced. Our approach, which aims to obtain ns-Vg characteristics, is mainly based on a new linear expression for the Fermi-level variation with two-dimensional electron gas density in high-electron-mobility structures, on the notion of effective doping, and on a new expression for ΔEc.
Gudimetla, V S Rao; Holmes, Richard B; Riker, Jim F
2012-12-01
An analytical expression for the log-amplitude correlation function for plane wave propagation through anisotropic non-Kolmogorov turbulent atmosphere is derived. The closed-form analytic results are based on the Rytov approximation. These results agree well with wave optics simulation based on the more general Fresnel approximation as well as with numerical evaluations, for low-to-moderate strengths of turbulence. The new expression reduces correctly to the previously published analytic expressions for the cases of plane wave propagation through both nonisotropic Kolmogorov turbulence and isotropic non-Kolmogorov turbulence cases. These results are useful for understanding the potential impact of deviations from the standard isotropic Kolmogorov spectrum.
Ortega, Alejandra; Tong, Ling; D'hooge, Jan
2014-04-01
Essential to (cardiac) 3D ultrasound are 2D matrix array transducer technology and the associated (two-stage) beamforming. Given the large number of degrees of freedom and the complexity of this problem, simulation tools play an important role. To this end, the impulse response (IR) method is commonly used. Unfortunately, given the large element count of 2D array transducers, simulation times become significant, jeopardizing the efficacy of the design process. The aim of this study was therefore to derive a new analytical expression to calculate the IR more efficiently, in order to speed up the calculation process. To compare accuracy and computation time, the reference and the proposed method were implemented in MATLAB and contrasted. For all points of observation tested, the IR computed with both methods was identical. The mean calculation time, however, was reduced on average by a factor of 3.93±0.03. The proposed IR method therefore speeds up the calculation of the IR of an individual transducer element while remaining perfectly accurate. This new expression will be particularly relevant for 2D matrix transducer design, where computation times currently remain a bottleneck in the design process. PMID:24447860
Secondary Data Analytics of Aquaporin Expression Levels in Glioblastoma Stem-Like Cells
Isokpehi, Raphael D; Wollenberg Valero, Katharina C; Graham, Barbara E; Pacurari, Maricica; Sims, Jennifer N; Udensi, Udensi K; Ndebele, Kenneth
2015-01-01
Glioblastoma is the most common brain tumor in adults in which recurrence has been attributed to the presence of cancer stem cells in a hypoxic microenvironment. On the basis of tumor formation in vivo and growth type in vitro, two published microarray gene expression profiling studies grouped nine glioblastoma stem-like (GS) cell lines into one of two groups: full (GSf) or restricted (GSr) stem-like phenotypes. Aquaporin-1 (AQP1) and aquaporin-4 (AQP4) are water transport proteins that are highly expressed in primary glial-derived tumors. However, the expression levels of AQP1 and AQP4 have not been previously described in a panel of 92 glioma samples. Therefore, we designed secondary data analytics methods to determine the expression levels of AQP1 and AQP4 in GS cell lines and glioblastoma neurospheres. Our investigation also included a total of 2,566 expression levels from 28 Affymetrix microarray probe sets encoding 13 human aquaporins (AQP0-AQP12); CXCR4 (the receptor for stromal cell derived factor-1 [SDF-1], a potential glioma stem cell therapeutic target); and PROM1 (gene encoding CD133, the widely used glioma stem cell marker). Interactive visual representation designs for integrating phenotypic features and expression levels revealed that inverse expression levels of AQP1 and AQP4 correlate with distinct phenotypes in a set of cell lines grouped into full and restricted stem-like phenotypes. Discriminant function analysis further revealed that AQP1 and AQP4 expression are better predictors for tumor formation and growth types in glioblastoma stem-like cells than are CXCR4 and PROM1. Future investigations are needed to characterize the molecular mechanisms for inverse expression levels of AQP1 and AQP4 in the glioblastoma stem-like neurospheres. PMID:26279619
Simple Analytic Expressions for the Magnetic Field of a Circular Current Loop
NASA Technical Reports Server (NTRS)
Simpson, James C.; Lane, John E.; Immer, Christopher D.; Youngquist, Robert C.
2001-01-01
Analytic expressions for the magnetic induction (magnetic flux density, B) of a simple planar circular current loop have been published in Cartesian and cylindrical coordinates [1,2], and are also known implicitly in spherical coordinates [3]. In this paper, we present explicit analytic expressions for B and its spatial derivatives in Cartesian, cylindrical, and spherical coordinates for a filamentary current loop. These results were obtained with extensive use of Mathematica and are exact throughout all space outside of the conductor. The field expressions reduce to the well-known limiting cases and satisfy ∇·B = 0 and ∇×B = 0 outside the conductor. These results are general and applicable to any model using filamentary circular current loops. Solenoids of arbitrary size may be easily modeled by approximating the total magnetic induction as the sum of those for the individual loops. The inclusion of the spatial derivatives expands their utility to magnetohydrodynamics where the derivatives are required. The equations can be coded into any high-level programming language. It is necessary to numerically evaluate complete elliptic integrals of the first and second kind, but this capability is now available with most programming packages.
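The axial component of the elliptic-integral result can be sketched as follows. This is the standard textbook form of the off-axis loop field (consistent with, but not copied from, the expressions in the report above), using SciPy for the complete elliptic integrals:

```python
import numpy as np
from scipy.special import ellipk, ellipe  # complete elliptic integrals K(m), E(m)

MU0 = 4e-7 * np.pi  # vacuum permeability (T*m/A)

def loop_Bz(a, I, rho, z):
    """Axial field B_z of a filamentary circular loop of radius a carrying
    current I, evaluated at cylindrical position (rho, z); loop centered at
    the origin in the z = 0 plane. Standard elliptic-integral result,
    valid outside the conductor."""
    m = 4.0 * a * rho / ((a + rho) ** 2 + z ** 2)   # elliptic parameter m = k^2
    pre = MU0 * I / (2.0 * np.pi * np.sqrt((a + rho) ** 2 + z ** 2))
    return pre * (ellipk(m)
                  + (a**2 - rho**2 - z**2) / ((a - rho) ** 2 + z ** 2) * ellipe(m))

# Sanity check against the well-known on-axis limit:
# B_z(0, z) = mu0 * I * a^2 / (2 * (a^2 + z^2)^(3/2)).
a, I, z = 1.0, 1.0, 0.5
on_axis = MU0 * I * a**2 / (2.0 * (a**2 + z**2) ** 1.5)
near_axis = loop_Bz(a, I, 1e-9, z)  # rho -> 0 recovers the on-axis value
```

Note that SciPy's `ellipk`/`ellipe` take the parameter m = k², not the modulus k; mixing the two conventions is a classic source of silently wrong fields.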
Thuraisingham, Ranjit Arulnayagam
2011-11-01
A procedure based on the multipole expansion of the brain electrical generator is used here to derive analytical expressions for the transfer matrix necessary to obtain potentials referenced to infinity. Its features include: avoidance of computations that involve a large number of discrete dipole sources; faster evaluation compared to the use of the dipole layer; and a transparency showing the parameters that constitute the transfer matrix. The paper also proposes the construction of the standardization matrix without the use of the general inverse of a non-symmetrical matrix.
The first analytical expression to estimate photometric redshifts suggested by a machine
NASA Astrophysics Data System (ADS)
Krone-Martins, A.; Ishida, E. E. O.; de Souza, R. S.
2014-09-01
We report the first analytical expression purely constructed by a machine to determine photometric redshifts (zphot) of galaxies. A simple and reliable functional form is derived using 41 214 galaxies from the Sloan Digital Sky Survey Data Release 10 (SDSS-DR10) spectroscopic sample. The method automatically dropped the u and z bands, relying only on g, r and i for the final solution. Applying this expression to another 1 417 181 SDSS-DR10 galaxies with measured spectroscopic redshifts (zspec), we achieved a mean <(zphot - zspec)/(1 + zspec)> ≲ 0.0086 and a scatter σ((zphot - zspec)/(1 + zspec)) ≲ 0.045 when averaged up to z ≲ 1.0. The method was also applied to the PHAT0 data set, confirming the competitiveness of our results when compared with other methods from the literature. This is the first use of symbolic regression in cosmology, representing a leap forward in the astronomy-data-mining connection.
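The two quality metrics quoted above (mean and scatter of the normalized residual) can be sketched as below, using synthetic redshifts as stand-ins for a real zspec/zphot catalogue:

```python
import numpy as np

# Hedged sketch: the photo-z quality metrics quoted in the abstract,
# computed on synthetic data (NOT real SDSS photometry). We fabricate a
# photo-z with ~3% scatter in (1+z) units and then measure it back.
rng = np.random.default_rng(0)
zspec = rng.uniform(0.0, 1.0, 50_000)
zphot = zspec + 0.03 * (1.0 + zspec) * rng.standard_normal(50_000)

resid = (zphot - zspec) / (1.0 + zspec)   # normalized residual
bias = resid.mean()                        # ~0 for an unbiased estimator
scatter = resid.std()                      # sigma of the normalized residual
```

The `(1+z)` normalization is what makes the 0.0086 bias and 0.045 scatter in the abstract comparable across the whole redshift range.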
NASA Astrophysics Data System (ADS)
Annamalai, Subramanian; Balachandar, S.; Mehta, Yash
2015-11-01
The various inviscid and viscous forces experienced by an isolated spherical particle situated in a compressible fluid have been widely studied in the literature and are well established. Further, these force expressions are used even in the context of particulate (multiphase) flows, with appropriate empirical correction factors that depend on the local particle volume fraction. Such an approach can capture the mean effect of the neighboring particles, but fails to capture the effect of the precise arrangement of the neighborhood of particles. To capture this inherent dependence of the force on the local particle arrangement, a more accurate evaluation of the drag forces proves necessary. Towards this end, we consider an acoustic wave of a given frequency impinging on a sphere. Scattering due to this (reference) particle is computed in terms of "scattering coefficients." The effect of the reference particle on another particle in its vicinity is computed analytically via these scattering coefficients, as a function of the distance between particles. In this study, we consider only the first-order scattering effect. Moreover, the theory is extended to compressible spheres, used to compute the pressure in the interior of the sphere, and applied to shock interaction over an array of spheres. We would like to thank the Center for Compressible Multiphase Turbulence (CCMT) and acknowledge support from the U.S. Department of Energy, National Nuclear Security Administration, Advanced Simulation and Computing Program.
Kashiwa, B. A.
2010-12-01
A thermodynamically consistent and fully general equation of state (EOS) for multifield applications is described. EOS functions are derived from a Helmholtz free energy expressed as the sum of thermal (fluctuational) and collisional (condensed-phase) contributions; thus the free energy is of the Mie-Grüneisen form. The phase-coexistence region is defined using a parameterized saturation curve by extending the form introduced by Guggenheim, which scales the curve relative to conditions at the critical point. We use the zero-temperature condensed-phase contribution developed by Barnes, which extends the Thomas-Fermi-Dirac equation to zero pressure. Thus, the functional form of the EOS could be called MGGB (for Mie-Grüneisen-Guggenheim-Barnes). Substance-specific parameters are obtained by fitting the low-density energy to data from the Sesame library; fitting the zero-temperature pressure to the Sesame cold curve; and fitting the saturation curve and latent heat to laboratory data, if available. When suitable coexistence data, or Sesame data, are not available, we apply the Principle of Corresponding States. Thus MGGB can be thought of as a numerical recipe for rendering the tabular Sesame EOS data in an analytic form that includes a proper coexistence region, and which permits accurate calculation of the derivatives associated with compressibility, expansivity, the Joule coefficient, and specific heat, all of which are required for multifield applications.
Gudimetla, V S Rao; Holmes, Richard B; Riker, Jim F
2014-01-01
An analytical expression for the log-amplitude correlation function based on the Rytov approximation is derived for spherical wave propagation through an anisotropic non-Kolmogorov refractive turbulent atmosphere. The expression reduces correctly to the previously published analytic expressions for the case of spherical wave propagation through isotropic Kolmogorov turbulence. These results agree well with a wave-optics simulation based on the more general Fresnel approximation, as well as with numerical evaluations, for low-to-moderate strengths of turbulence. These results are useful for understanding the potential impact of deviations from the standard isotropic Kolmogorov spectrum.
Brain lateralization of holistic versus analytic processing of emotional facial expressions.
Calvo, Manuel G; Beltrán, David
2014-05-15
This study investigated the neurocognitive mechanisms underlying the role of the eye and the mouth regions in the recognition of facial happiness, anger, and surprise. To this end, face stimuli were shown in three formats (whole face, upper half visible, and lower half visible) and behavioral categorization, computational modeling, and ERP (event-related potentials) measures were combined. N170 (150-180 ms post-stimulus; right hemisphere) and EPN (early posterior negativity; 200-300 ms; mainly, right hemisphere) were modulated by expression of whole faces, but not by separate halves. This suggests that expression encoding (N170) and emotional assessment (EPN) require holistic processing, mainly in the right hemisphere. In contrast, the mouth region of happy faces enhanced left temporo-occipital activity (150-180 ms), and also the LPC (late positive complex; centro-parietal) activity (350-450 ms) earlier than the angry eyes (450-600 ms) or other face regions. Relatedly, computational modeling revealed that the mouth region of happy faces was also visually salient by 150 ms following stimulus onset. This suggests that analytical or part-based processing of the salient smile occurs early (150-180 ms) and lateralized (left), and is subsequently used as a shortcut to identify the expression of happiness (350-450 ms). This would account for the happy face advantage in behavioral recognition tasks when the smile is visible.
An Approximate Analytic Expression for the Flux Density of Scintillation Light at the Photocathode
Braverman, Joshua B; Harrison, Mark J; Ziock, Klaus-Peter
2012-01-01
The flux density of light exiting scintillator crystals is an important factor affecting the performance of radiation detectors, and is of particular importance for position-sensitive instruments. Recent work by T. Woldemichael developed an analytic expression for the shape of the light spot at the bottom of a single crystal [1]. However, the results are of limited utility because there is generally a light pipe and a photomultiplier entrance window between the bottom of the crystal and the photocathode. In this study, we expand Woldemichael's theory to include materials each with different indices of refraction and compare the adjusted light-spot-shape theory to GEANT4 simulations [2]. Light reflection losses at index-of-refraction changes were also taken into account. We found that the simulations closely agree with the adjusted theory.
Analytic expressions for mode conversion in a plasma at the peak of a parabolic density profile
Hinkel-Lipsker, D.E.; Fried, B.D.; Morales, G.J. )
1992-07-01
For mode conversion in an unmagnetized plasma with a parabolic density profile of scale length L, analytic expressions, in terms of parabolic cylinder functions, for the energy flux coefficients (reflection, transmission, and mode conversion) and the fields are derived for both the "direct" problem (an incident electromagnetic wave converting to a Langmuir wave) and the "inverse" problem (an incident Langmuir wave converting to an electromagnetic wave), for the case where the incident wave frequency ω matches the electron plasma frequency ω_p at the peak of the density profile. The mode conversion coefficient for the direct problem is equal in magnitude to that of the inverse problem, and the corresponding reflection and transmission coefficients satisfy energy conservation. In contrast to the linear-profile problem, the conversion efficiency depends explicitly on the value of the collision frequency (in the cold, collisional limit) or the electron temperature (in the warm, collisionless limit), but a transformation of parameters relates the results for these two limits.
Tolias, P.; Ratynskaia, S.; Angelis, U. de
2015-08-15
The soft mean spherical approximation is employed for the study of the thermodynamics of dusty plasma liquids, the latter treated as Yukawa one-component plasmas. Within this integral theory method, the only input necessary for the calculation of the reduced excess energy stems from the solution of a single non-linear algebraic equation. Consequently, thermodynamic quantities can be routinely computed without the need to determine the pair correlation function or the structure factor. The level of accuracy of the approach is quantified after an extensive comparison with numerical simulation results. The approach is solved over a million times with input spanning the whole parameter space and reliable analytic expressions are obtained for the basic thermodynamic quantities.
Accurate momentum transfer cross section for the attractive Yukawa potential
Khrapak, S. A.
2014-04-15
An accurate expression for the momentum transfer cross section for the attractive Yukawa potential is proposed. This simple analytic expression agrees with the numerical results to better than ±2% in the regime relevant to ion-particle collisions in complex (dusty) plasmas.
Gender Differences in Emotion Expression in Children: A Meta-Analytic Review
Chaplin, Tara M.; Aldao, Amelia
2012-01-01
Emotion expression is an important feature of healthy child development that has been found to show gender differences. However, there has been no empirical review of the literature on gender and facial, vocal, and behavioral expressions of different types of emotions in children. The present study constitutes a comprehensive meta-analytic review of gender differences, and moderators of differences, in emotion expression from infancy through adolescence. We analyzed 555 effect sizes from 166 studies with a total of 21,709 participants. Significant, but very small, gender differences were found overall, with girls showing more positive emotions (g = −.08) and internalizing emotions (e.g., sadness, anxiety, sympathy; g = −.10) than boys, and boys showing more externalizing emotions (e.g., anger; g = .09) than girls. Notably, gender differences were moderated by age, interpersonal context, and task valence, underscoring the importance of contextual factors in gender differences. Gender differences in positive emotions were more pronounced with increasing age, with girls showing more positive emotions than boys in middle childhood (g = −.20) and adolescence (g = −.28). Boys showed more externalizing emotions than girls at toddler/preschool age (g = .17) and middle childhood (g = .13) and fewer externalizing emotions than girls in adolescence (g = −.27). Gender differences were less pronounced with parents and were more pronounced with unfamiliar adults (for positive emotions) and with peers/when alone (for externalizing emotions). Our findings of gender differences in emotion expression in specific contexts have important implications for gender differences in children’s healthy and maladaptive development. PMID:23231534
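The g values reported above are Hedges' g, the bias-corrected standardized mean difference that meta-analyses of this kind aggregate. A minimal sketch (the group means, SDs, and sample sizes below are invented for illustration, not taken from the review):

```python
import math

def hedges_g(m1, s1, n1, m2, s2, n2):
    """Hedges' g: standardized mean difference (m1 - m2) / s_pooled with the
    small-sample correction J = 1 - 3 / (4*df - 1), df = n1 + n2 - 2."""
    df = n1 + n2 - 2
    s_pooled = math.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / df)
    d = (m1 - m2) / s_pooled            # Cohen's d
    j = 1.0 - 3.0 / (4.0 * df - 1.0)    # bias correction for small samples
    return j * d

# Illustrative: boys minus girls on a positive-emotion score, matching the
# review's sign convention (negative g = girls higher).
print(round(hedges_g(5.0, 2.0, 40, 5.4, 2.0, 40), 2))  # prints -0.2
```

A g near -0.1, as found overall for positive emotions, is conventionally a very small effect, which is the abstract's point about the size of the gender differences.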
Analytical expressions for chatter analysis in milling operations with one dominant mode
NASA Astrophysics Data System (ADS)
Iglesias, A.; Munoa, J.; Ciurana, J.; Dombovari, Z.; Stepan, G.
2016-08-01
In milling, accurate chatter prediction remains one of the most complex problems in the field. These self-excited vibrations can spoil the surface of the part and cause a large reduction in tool life. Stability diagrams provide a practical means of selecting optimum cutting conditions, determined by either time-domain or frequency-domain methods. Applying these methods, parametric or parameter-traced representations of the linear stability limits can be obtained by solving the corresponding eigenvalue problems. In this work, new analytical formulae are proposed for the parameter domains of both the Hopf and the period-doubling stability boundaries emerging in the regenerative mechanical model of time-periodic milling processes. These formulae are useful for enriching and speeding up the currently used numerical methods. Also, the destabilization mechanism of period-doubling chatter is explained by analogy with the chatter related to the Hopf bifurcation, considering one dominant mode and using concepts established by the pioneers of chatter research.
Lanman, Richard B; Mortimer, Stefanie A; Zill, Oliver A; Sebisanovic, Dragan; Lopez, Rene; Blau, Sibel; Collisson, Eric A; Divers, Stephen G; Hoon, Dave S B; Kopetz, E Scott; Lee, Jeeyun; Nikolinakos, Petros G; Baca, Arthur M; Kermani, Bahram G; Eltoukhy, Helmy; Talasaz, AmirAli
2015-01-01
Next-generation sequencing of cell-free circulating solid tumor DNA addresses two challenges in contemporary cancer care. First, this method of massively parallel and deep sequencing enables assessment of a comprehensive panel of genomic targets from a single sample, and second, it obviates the need for repeat invasive tissue biopsies. Digital Sequencing™ is a novel method for high-quality sequencing of circulating tumor DNA simultaneously across a comprehensive panel of over 50 cancer-related genes with a simple blood test. Here we report the analytic and clinical validation of the gene panel. Analytic sensitivity down to 0.1% mutant allele fraction is demonstrated via serial dilution studies of known samples. Near-perfect analytic specificity (> 99.9999%) enables complete coverage of many genes without the false positives typically seen with traditional sequencing assays at mutant allele frequencies or fractions below 5%. We compared digital sequencing of plasma-derived cell-free DNA to tissue-based sequencing on 165 consecutive matched samples from five outside centers in patients with stage III-IV solid tumor cancers. Clinical sensitivity of plasma-derived NGS was 85.0%, comparable to 80.7% sensitivity for tissue. The assay success rate on 1,000 consecutive samples in clinical practice was 99.8%. Digital sequencing of plasma-derived DNA is indicated in advanced cancer patients to prevent repeated invasive biopsies when the initial biopsy is inadequate, unobtainable for genomic testing, or uninformative, or when the patient's cancer has progressed despite treatment. Its clinical utility is derived from reduction in the costs, complications and delays associated with invasive tissue biopsies for genomic testing.
Analytical expression for gas-particle equilibration time scale and its numerical evaluation
NASA Astrophysics Data System (ADS)
Anttila, Tatu; Lehtinen, Kari E. J.; Dal Maso, Miikka
2016-05-01
We have derived a time scale τeq that describes the characteristic time for a single compound i with a saturation vapour concentration Ceff,i to reach thermodynamic equilibrium between the gas and particle phases. The equilibration process was assumed to take place via gas-phase diffusion and absorption into a liquid-like phase present in the particles. It was further shown that τeq combines two previously derived and often applied time scales τa and τs that account for the changes in the gas and particle phase concentrations of i resulting from the equilibration, respectively. The validity of τeq was tested by comparing its predictions against results from a numerical model that explicitly simulates the transfer of i between the gas and particle phases. By conducting a large number of simulations where the values of the key input parameters were varied randomly, it was found that τeq yields highly accurate results when i is a semi-volatile compound in the sense that the ratio of the total (gas and particle phase) concentration of i to the saturation vapour concentration of i, μ, is below unity. On the other hand, the comparison of analytical and numerical time scales revealed that using τa or τs alone to calculate the equilibration time scale may lead to considerable errors. It was further shown that τeq tends to overpredict the equilibration time when i behaves as a non-volatile compound in the sense that μ > 1. Despite its simplicity, the time scale derived here has useful applications. First, it can be used to assess whether semi-volatile compounds reach thermodynamic equilibrium during dynamic experiments that involve changes in the compound volatility. Second, the time scale can be used in modeling of secondary organic aerosol (SOA) to check whether SOA-forming compounds equilibrate over a certain time interval.
Bicanic, Dane; Swarts, Jan; Luterotti, Svjetlana; Pietraperzia, Giangaetano; Dóka, Otto; de Rooij, Hans
2004-09-01
The concept of the optothermal window (OW) is proposed as a reliable analytical tool to rapidly determine the concentration of lycopene in a large variety of commercial tomato products in an extremely simple way (the determination is achieved without the need for pretreatment of the sample). The OW is a relative technique, as the information is deduced from the calibration curve that relates the OW data (i.e., the product of the absorption coefficient β and the thermal diffusion length μ) with the lycopene concentration obtained from spectrophotometric measurements. The accuracy of the method has been ascertained with a high correlation coefficient (R = 0.98) between the OW data and results acquired from the same samples by means of the conventional extraction spectrophotometric method. The intrinsic precision of the OW method is quite high (better than 1%), whereas the repeatability of the determination (RSD = 0.4-9.5%, n = 3-10) is comparable to that of spectrophotometry.
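The calibration step described above amounts to an ordinary least-squares line relating OW readings to spectrophotometric lycopene values; a minimal sketch, with invented numbers rather than the paper's data:

```python
def linear_calibration(x, y):
    """Least-squares line y = a*x + b and Pearson correlation r."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    syy = sum((yi - my) ** 2 for yi in y)
    a = sxy / sxx                  # slope
    b = my - a * mx                # intercept
    r = sxy / (sxx * syy) ** 0.5   # Pearson correlation coefficient
    return a, b, r

# Hypothetical OW readings (the beta*mu product) vs. lycopene concentrations
ow = [0.10, 0.21, 0.29, 0.41, 0.52]     # arbitrary units, illustrative only
lyc = [2.0, 4.1, 5.9, 8.2, 10.1]        # mg/100 g, illustrative only
a, b, r = linear_calibration(ow, lyc)
unknown = a * 0.35 + b                  # predict concentration for a new OW reading
```

Once the line is fitted, an unknown sample's lycopene content follows directly from its OW reading, which is the sense in which the OW is "a relative technique".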
NASA Astrophysics Data System (ADS)
Song, Yang; Liu, Zhigang; Wang, Hongrui; Lu, Xiaobing; Zhang, Jing
2015-10-01
Due to the intrinsic nonlinear characteristics and complex structure of the high-speed catenary system, a modelling method is proposed based on the analytical expressions of nonlinear cable and truss elements. The calculation procedure for solving the initial equilibrium state is proposed based on the Newton-Raphson iteration method. The deformed configuration of the catenary system as well as the initial length of each wire can be calculated. The accuracy and validity of the computed initial equilibrium state are verified by comparison with the separate model method, the absolute nodal coordinate formulation and other methods in the previous literature. Then, the proposed model is combined with a lumped pantograph model and a dynamic simulation procedure is proposed. The accuracy is guaranteed by the multiple iterative calculations in each time step. The dynamic performance of the proposed model is validated by comparison with EN 50318, the results of finite element method software and the SIEMENS simulation report, respectively. Finally, the influence of the catenary design parameters (such as the reserved sag and pre-tension) on the dynamic performance is preliminarily analysed by using the proposed model.
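The Newton-Raphson iteration used for the initial equilibrium state can be illustrated on a drastically reduced problem. The sketch below solves a single catenary-sag equation for the catenary parameter; the full model solves a large nonlinear system, and the span and sag numbers here are assumptions for illustration only:

```python
import math

def newton(f, x0, tol=1e-10, max_iter=50):
    """Scalar Newton-Raphson with a central-difference derivative."""
    x = x0
    for _ in range(max_iter):
        h = 1e-6 * max(abs(x), 1.0)
        df = (f(x + h) - f(x - h)) / (2 * h)   # numerical derivative
        step = f(x) / df
        x -= step
        if abs(step) < tol:
            break
    return x

# Illustrative equilibrium problem (not the paper's full model): find the
# catenary parameter a so that a wire spanning 60 m sags 0.55 m at midspan:
#   sag(a) = a * (cosh(span / (2a)) - 1)
span, target_sag = 60.0, 0.55
a = newton(lambda a: a * (math.cosh(span / (2 * a)) - 1) - target_sag, x0=500.0)
```

In the paper's setting the same iteration runs on the assembled residual vector of all cable and truss elements, with a Jacobian in place of the scalar derivative.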
Gudimetla, V S Rao; Holmes, Richard B; Smith, Carey; Needham, Gregory
2012-05-01
The effect of anisotropic Kolmogorov turbulence on the log-amplitude correlation function for plane-wave fields is investigated using analysis, numerical integration, and simulation. A new analytical expression for the log-amplitude correlation function is derived for anisotropic Kolmogorov turbulence. The analytic results, based on the Rytov approximation, agree well with a more general wave-optics simulation based on the Fresnel approximation as well as with numerical evaluations, for low and moderate strengths of turbulence. The new expression reduces correctly to previously published analytic expressions for isotropic turbulence. The final results indicate that, as asymmetry becomes greater, the Rytov variance deviates from that given by the standard formula. This deviation becomes greater with stronger turbulence, up to moderate turbulence strengths. The anisotropic effects on the log-amplitude correlation function are dominant when the separation of the points is within the Fresnel length. In the direction of stronger turbulence, there is an enhanced dip in the correlation function at a separation close to the Fresnel length. The dip is diminished in the weak-turbulence axis, suggesting that energy redistribution via focusing and defocusing is dominated by the strong-turbulence axis. The new analytical expression is useful when anisotropy is observed in relevant experiments.
NASA Astrophysics Data System (ADS)
Bozkaya, Uǧur; Sherrill, C. David
2013-08-01
Orbital-optimized coupled-electron pair theory [or simply "optimized CEPA(0)," OCEPA(0), for short] and its analytic energy gradients are presented. For variational optimization of the molecular orbitals for the OCEPA(0) method, a Lagrangian-based approach is used along with an orbital direct inversion of the iterative subspace algorithm. The cost of the method is comparable to that of CCSD [O(N⁶) scaling] for energy computations. However, for analytic gradient computations the OCEPA(0) method is only half as expensive as CCSD since there is no need to solve the λ₂-amplitude equation for OCEPA(0). The performance of the OCEPA(0) method is compared with that of the canonical MP2, CEPA(0), CCSD, and CCSD(T) methods, for equilibrium geometries, harmonic vibrational frequencies, and hydrogen transfer reactions between radicals. For bond lengths of both closed and open-shell molecules, the OCEPA(0) method improves upon CEPA(0) and CCSD by 25%-43% and 38%-53%, respectively, with Dunning's cc-pCVQZ basis set. Especially for the open-shell test set, the performance of OCEPA(0) is comparable with that of CCSD(T) (ΔR is 0.0003 Å on average). For harmonic vibrational frequencies of closed-shell molecules, the OCEPA(0) method again outperforms CEPA(0) and CCSD by 33%-79% and 53%-79%, respectively. For harmonic vibrational frequencies of open-shell molecules, the mean absolute error (MAE) of the OCEPA(0) method (39 cm⁻¹) is fortuitously even better than that of CCSD(T) (50 cm⁻¹), while the MAEs of CEPA(0) (184 cm⁻¹) and CCSD (84 cm⁻¹) are considerably higher. For complete basis set estimates of hydrogen transfer reaction energies, the OCEPA(0) method again exhibits a substantially better performance than CEPA(0), providing a mean absolute error of 0.7 kcal mol⁻¹, which is more than 6 times lower than that of CEPA(0) (4.6 kcal mol⁻¹), and comparing to MP2 (7.7 kcal mol⁻¹) there is a more than 10-fold reduction in errors. Whereas the MAE for the CCSD method is only 0.1 kcal
Li, Meng-Yao; Song, Xiong; Wang, Feng; Xiong, Ai-Sheng
2016-01-01
Parsley, one of the most important vegetables in the Apiaceae family, is widely used in the food, medicinal, and cosmetic industries. Recent studies on parsley mainly focus on its chemical composition, and further research involving the analysis of the plant's gene functions and expressions is required. qPCR is a powerful method for detecting very low quantities of target transcript levels and is widely used to study gene expression. To ensure the accuracy of results, a suitable reference gene is necessary for expression normalization. In this study, four software packages, namely geNorm, NormFinder, BestKeeper, and RefFinder, were used to evaluate the expression stabilities of eight candidate reference genes of parsley (GAPDH, ACTIN, eIF-4α, SAND, UBC, TIP41, EF-1α, and TUB) under various conditions, including abiotic stresses (heat, cold, salt, and drought) and hormone stimuli treatments (GA, SA, MeJA, and ABA). Results showed that EF-1α and TUB were the most stable genes for abiotic stresses, whereas EF-1α, GAPDH, and TUB were the top three choices for hormone stimuli treatments. Moreover, EF-1α and TUB were the most stable reference genes among all tested samples, and UBC was the least stable one. Expression analysis of PcDREB1 and PcDREB2 further verified that the selected stable reference genes were suitable for gene expression normalization. This study can guide the selection of suitable reference genes for gene expression studies in parsley. PMID:27746803
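The stability ranking produced by geNorm rests on a pairwise-variation measure M: for each candidate gene, the average standard deviation of its log2 expression ratios against every other candidate. A rough sketch of that idea, with toy expression values that are not data from the study:

```python
import math
import statistics

def genorm_m(expr):
    """geNorm-style stability M for each gene: mean, over all other candidate
    genes, of the standard deviation of the per-sample log2 expression ratios.
    expr maps gene name -> list of relative expression values (one per sample)."""
    genes = list(expr)
    m = {}
    for g in genes:
        sds = []
        for k in genes:
            if k == g:
                continue
            ratios = [math.log2(a / b) for a, b in zip(expr[g], expr[k])]
            sds.append(statistics.stdev(ratios))
        m[g] = sum(sds) / len(sds)
    return m  # lower M = more stable reference gene

# Toy relative-quantity data (hypothetical; 4 samples per gene)
data = {
    "EF1a": [1.00, 1.05, 0.98, 1.02],   # nearly constant
    "TUB":  [0.95, 1.00, 1.03, 0.97],   # nearly constant
    "UBC":  [1.00, 1.80, 0.55, 1.40],   # fluctuating
}
m_values = genorm_m(data)
```

With these toy numbers the fluctuating gene receives the highest M and would be excluded first, mirroring the study's finding that UBC was the least stable candidate.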
Zhang, Jin-Feng; Chen, Yao; Lin, Guo-Shi; Zhang, Jian-Dong; Tang, Wen-Long; Huang, Jian-Huang; Chen, Jin-Shou; Wang, Xing-Fu; Lin, Zhi-Xiong
2016-06-01
Interferon-induced protein with tetratricopeptide repeat 1 (IFIT1) plays a key role in growth suppression and apoptosis promotion in cancer cells. Interferon was reported to induce the expression of IFIT1 and inhibit the expression of O-6-methylguanine-DNA methyltransferase (MGMT). This study aimed to investigate the expression of IFIT1, the correlation between IFIT1 and MGMT, and their impact on the clinical outcome in newly diagnosed glioblastoma. The expression of IFIT1 and MGMT and their correlation were investigated in the tumor tissues from 70 patients with newly diagnosed glioblastoma. The effects on progression-free survival and overall survival were evaluated. Of 70 cases, 57 (81.4%) tissue samples showed high expression of IFIT1 by immunostaining. The χ² test indicated that the expression of IFIT1 and MGMT was negatively correlated (r = -0.288, P = .016). Univariate and multivariate analyses confirmed high IFIT1 expression as a favorable prognostic indicator for progression-free survival (P = .005 and .017) and overall survival (P = .001 and .001), respectively. Patients with 2 favorable factors (high IFIT1 and low MGMT) had an improved prognosis as compared with others. The results demonstrated significantly increased expression of IFIT1 in newly diagnosed glioblastoma tissue. The negative correlation between IFIT1 and MGMT expression may be triggered by interferon. High IFIT1 can be a predictive biomarker of favorable clinical outcome, and IFIT1 along with MGMT more accurately predicts prognosis in newly diagnosed glioblastoma. PMID:26980050
Silvia, Saviozzi; Francesca, Cordero; Marco, Lo Iacono; Silvia, Novello; Giorgio V, Scagliotti; Raffaele, Calogero A
2006-01-01
Background In real-time RT quantitative PCR (qPCR) the accuracy of normalized data is highly dependent on the reliability of the reference genes (RGs). Failure to use an appropriate control gene for normalization of qPCR data may result in biased gene expression profiles, as well as low precision, so that only gross changes in expression level are declared statistically significant or patterns of expression are erroneously characterized. Therefore, it is essential to determine whether potential RGs are appropriate for specific experimental purposes. The aim of this study was to identify and validate RGs for use in the differentiation of normal and tumor lung expression profiles. Methods A meta-analysis of lung cancer transcription profiles generated with the GeneChip technology was used to identify five putative RGs. Their consistency and that of seven commonly used RGs was tested by using Taqman probes on 18 paired normal-tumor lung snap-frozen specimens obtained from non-small-cell lung cancer (NSCLC) patients during primary curative resection. Results The 12 RGs showed a wide range of Ct values: except for rRNA18S (mean 9.8), the mean values of all the commercial RGs and ESD ranged from 19 to 26, whereas those of the microarray-selected RGs (BTF-3, YAP1, HIST1H2BC, RPL30) exceeded 26. RG expression stability within sample populations and under the experimental conditions (tumour versus normal lung specimens) was evaluated by: (1) descriptive statistics; (2) an equivalence test; (3) the geNorm applet. All these approaches indicated that the most stable RGs were POLR2A, rRNA18S, YAP1 and ESD. Conclusion These data suggest that POLR2A, rRNA18S, YAP1 and ESD are the most suitable RGs for gene expression profile studies in NSCLC. Furthermore, they highlight the limitations of commercial RGs and indicate that meta-data analysis of genome-wide transcription profiling studies may identify new RGs. PMID:16872493
Yanguas-Gil, Angel; Elam, Jeffrey W.
2014-05-01
In this work, the authors present analytic models for atomic layer deposition (ALD) in three common experimental configurations: cross-flow, particle coating, and spatial ALD. These models, based on the plug-flow and well-mixed approximations, allow us to determine the minimum dose times and materials utilization for all three configurations. A comparison between the three models shows that throughput and precursor utilization can each be expressed by universal equations, in which the particularity of the experimental system is contained in a single parameter related to the residence time of the precursor in the reactor. For the case of cross-flow reactors, the authors show how simple analytic expressions for the reactor saturation profiles agree well with experimental results. Consequently, the analytic model can be used to extract information about the ALD surface chemistry (e.g., the reaction probability) by comparing the analytic and experimental saturation profiles, providing a useful tool for characterizing new and existing ALD processes. (C) 2014 American Vacuum Society
Wu, Gang
2016-08-01
The nuclear quadrupole transverse relaxation process of half-integer spins in liquid samples is known to exhibit multi-exponential behavior. Within the framework of Redfield's relaxation theory, exact analytical expressions for describing such a process exist only for spin-3/2 nuclei. As a result, analyses of nuclear quadrupole transverse relaxation data for half-integer quadrupolar nuclei with spin >3/2 must rely on numerical diagonalization of the Redfield relaxation matrix over the entire motional range. In this work we propose an approximate analytical expression that can be used to analyze nuclear quadrupole transverse relaxation data of any half-integer spin in liquids over the entire motional range. The proposed equation yields results that are in excellent agreement with the exact numerical calculations. PMID:27343483
Mayrhofer, Patrick; Kratzer, Bernhard; Sommeregger, Wolfgang; Steinfellner, Willibald; Reinhart, David; Mader, Alexander; Turan, Soeren; Qiao, Junhua; Bode, Juergen; Kunert, Renate
2014-12-01
Over the years, Chinese hamster ovary (CHO) cells have emerged as the major host for expressing biotherapeutic proteins. Traditional methods to generate high-producer cell lines rely on random integration(s) of the gene of interest but have thereby left the identification of bottlenecks as a challenging task. For comparison of different producer cell lines derived from various transfections, a system that provides control over transgene expression behavior is highly needed. This motivated us to develop a novel "DUKX-B11 F3/F" cell line to target different single-chain antibody fragments into the same chromosomal target site by recombinase-mediated cassette exchange (RMCE) using the flippase (FLP)/FLP recognition target (FRT) system. The RMCE-competent cell line contains a gfp reporter fused to a positive/negative selection system flanked by heterospecific FRT (F) variants under control of an external CMV promoter, constructed as a "promoter trap". The expression stability and FLP accessibility of the tagged locus was demonstrated by successive rounds of RMCE. As a proof of concept, we performed RMCE using cassettes encoding two different anti-HIV single-chain Fc fragments, 3D6scFv-Fc and 2F5scFv-Fc. Both targeted integrations yielded homogenous cell populations with comparable intracellular product contents and messenger RNA (mRNA) levels but product-related differences in specific productivities. These studies confirm the potential of the newly available "DUKX-B11 F3/F" cell line to guide different transgenes into identical transcriptional control regions by RMCE and thereby generate clones with comparable amounts of transgene mRNA. This new host is a prerequisite for cell biology studies of independent transfections and transgenes.
Zhou, Xiang; Li, Rui; Michal, Jennifer J.; Wu, Xiao-Lin; Liu, Zhongzhen; Zhao, Hui; Xia, Yin; Du, Weiwei; Wildung, Mark R.; Pouchnik, Derek J.; Harland, Richard M.; Jiang, Zhihua
2016-01-01
Construction of next-generation sequencing (NGS) libraries involves RNA manipulation, which often creates noisy, biased, and artifactual data that contribute to errors in transcriptome analysis. In this study, a total of 19 whole transcriptome termini site sequencing (WTTS-seq) and seven RNA sequencing (RNA-seq) libraries were prepared from Xenopus tropicalis adult and embryo samples to determine the most effective library preparation method to maximize transcriptomics investigation. We strongly suggest that appropriate primers/adaptors are designed to inhibit amplification detours and that PCR overamplification is minimized to maximize transcriptome coverage. Furthermore, genome annotation must be improved so that missing data can be recovered. In addition, a complete understanding of sequencing platforms is critical to limit the formation of false-positive results. Technically, the WTTS-seq method enriches both poly(A)+ RNA and complementary DNA, adds 5′- and 3′-adaptors in one step, pursues strand sequencing and mapping, and profiles both gene expression and alternative polyadenylation (APA). Although RNA-seq is cost prohibitive, tends to produce false-positive results, and fails to detect APA diversity and dynamics, its combination with WTTS-seq is necessary to validate transcriptome-wide APA. PMID:27098915
Sauvage, J-F; Mugnier, L M; Rousset, G; Fusco, T
2010-11-01
In this paper we derive an analytical model of a long-exposure star image for an adaptive-optics(AO)-corrected coronagraphic imaging system. This expression accounts for static aberrations upstream and downstream of the coronagraphic mask as well as turbulence residuals. It is based on the perfect coronagraph model. The analytical model is validated by means of simulations using the design and parameters of the SPHERE instrument. The analytical model is also compared to a simulated four-quadrant phase-mask coronagraph. Then, its sensitivity to a miscalibration of structure function and upstream static aberrations is studied, and the impact on exoplanet detectability is quantified. Last, a first inversion method is presented for a simulation case using a single monochromatic image with no reference. The obtained result shows a planet detectability increase by two orders of magnitude with respect to the raw image. This analytical model presents numerous potential applications in coronagraphic imaging, such as exoplanet direct detection and circumstellar disk observation.
NASA Astrophysics Data System (ADS)
Kántor, Tibor; Bartha, András
2015-11-01
The self-absorption of spectral lines was studied with up-to-date multi-element inductively coupled plasma atomic emission spectrometry (ICP-AES) instrumentation using both radial and axial viewing of the plasma and performing line peak height and line peak area measurements. Two resonance atomic and ionic lines of Cd and Mg were studied, and the concentration range was extended up to 2000 mg/L. While the analyte concentration varied, a constant matrix concentration of 10,000 mg/L Ca was maintained in the pneumatically nebulized solutions. The physical and the phenomenological formulation of the emission analytical function is overviewed and, in continuity with the earlier results, the following equation is offered:
Dawany, Noor; Showe, Louise C.; Kossenkov, Andrew V.; Chang, Celia; Ive, Prudence; Conradie, Francesca; Stevens, Wendy; Sanne, Ian
2014-01-01
Background Co-infection with tuberculosis (TB) is the leading cause of death in HIV-infected individuals. However, diagnosis of TB, especially in the presence of an HIV co-infection, can be limited by the high inaccuracy associated with the use of conventional diagnostic methods. Here we report a gene signature that can identify a tuberculosis infection in patients co-infected with HIV as well as in the absence of HIV. Methods We analyzed global gene expression data from peripheral blood mononuclear cell (PBMC) samples of patients that were either mono-infected with HIV or co-infected with HIV/TB and used support vector machines to identify a gene signature that can distinguish between the two classes. We then validated our results using publicly available gene expression data from patients mono-infected with TB. Results Our analysis successfully identified a 251-gene signature that accurately distinguishes patients co-infected with HIV/TB from those infected with HIV only, with an overall accuracy of 81.4% (sensitivity = 76.2%, specificity = 86.4%). Furthermore, we show that our 251-gene signature can also accurately distinguish patients with active TB in the absence of an HIV infection from both patients with a latent TB infection and healthy controls (88.9–94.7% accuracy; 69.2–90% sensitivity and 90.3–100% specificity). We also demonstrate that the expression levels of the 251-gene signature diminish as a correlate of the length of TB treatment. Conclusions A 251-gene signature is described to (a) detect TB in the presence or absence of an HIV co-infection, and (b) assess response to treatment following anti-TB therapy. PMID:24587128
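The sensitivity, specificity, and accuracy figures above follow from the standard confusion-matrix definitions. A minimal sketch with toy labels, not the study's data:

```python
def diagnostic_metrics(y_true, y_pred, positive=1):
    """Sensitivity, specificity, and accuracy from binary labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p != positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    return {
        "sensitivity": tp / (tp + fn),   # true-positive rate
        "specificity": tn / (tn + fp),   # true-negative rate
        "accuracy": (tp + tn) / len(y_true),
    }

# Toy labels (1 = HIV/TB co-infected, 0 = HIV only) -- illustrative only
truth = [1, 1, 1, 1, 0, 0, 0, 0, 0, 1]
pred  = [1, 1, 0, 1, 0, 0, 1, 0, 0, 1]
metrics = diagnostic_metrics(truth, pred)
```

In the study these quantities are computed on the held-out predictions of the support-vector-machine classifier trained on the 251-gene signature.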
Analytical approximations for spatial stochastic gene expression in single cells and tissues
Smith, Stephen; Cianci, Claudia; Grima, Ramon
2016-01-01
Gene expression occurs in an environment in which both stochastic and diffusive effects are significant. Spatial stochastic simulations are computationally expensive compared with their deterministic counterparts, and hence little is currently known of the significance of intrinsic noise in a spatial setting. Starting from the reaction–diffusion master equation (RDME) describing stochastic reaction–diffusion processes, we here derive expressions for the approximate steady-state mean concentrations which are explicit functions of the dimensionality of space, rate constants and diffusion coefficients. The expressions have a simple closed form when the system consists of one effective species. These formulae show that, even for spatially homogeneous systems, mean concentrations can depend on diffusion coefficients: this contradicts the predictions of deterministic reaction–diffusion processes, thus highlighting the importance of intrinsic noise. We confirm our theory by comparison with stochastic simulations, using the RDME and Brownian dynamics, of two models of stochastic and spatial gene expression in single cells and tissues. PMID:27146686
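The kind of stochastic simulation used above to confirm the theory can be illustrated, in a non-spatial setting, by the exact Gillespie algorithm applied to the simplest gene expression model, a birth-death process for mRNA. The rate constants below are arbitrary assumptions for illustration:

```python
import random

def gillespie_birth_death(k_on, k_off, t_end, seed=1):
    """Exact stochastic simulation (Gillespie SSA) of a birth-death gene
    expression model: 0 --k_on--> mRNA, mRNA --k_off--> 0.
    Returns the copy number at time t_end."""
    rng = random.Random(seed)
    t, n = 0.0, 0
    while True:
        a0 = k_on + k_off * n             # total propensity
        t += rng.expovariate(a0)          # time to next reaction
        if t > t_end:
            return n
        if rng.random() * a0 < k_on:
            n += 1                        # transcription event
        else:
            n -= 1                        # degradation event

# The steady-state mean copy number is k_on/k_off (= 10 here);
# averaging many independent runs should approach it.
samples = [gillespie_birth_death(5.0, 0.5, t_end=50.0, seed=s) for s in range(200)]
mean_n = sum(samples) / len(samples)
```

For this linear model the stochastic mean coincides with the deterministic prediction; the paper's point is that with nonlinear kinetics and diffusion this agreement breaks down, which is what the RDME-based formulae capture.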
Recent α decay half-lives and analytic expression predictions including superheavy nuclei
NASA Astrophysics Data System (ADS)
Royer, G.; Zhang, H. F.
2008-03-01
Recent experimental α decay half-lives have been compared with the results obtained from previously proposed formulas depending only on the mass and charge numbers of the α emitter and the Qα value. For the heaviest nuclei they are also compared with calculations using the Density-Dependent M3Y (DDM3Y) effective interaction and the Viola-Seaborg-Sobiczewski (VSS) formulas. The good agreement allows us to make predictions for the α decay half-lives of other still-unknown superheavy nuclei from these analytic formulas using the extrapolated Qα of G. Audi, A. H. Wapstra, and C. Thibault [Nucl. Phys. A729, 337 (2003)].
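Analytic half-life formulas of this kind express log₁₀T as a simple function of A, Z and Qα alone. A minimal sketch of such an estimate is below; the coefficients are the even-even fit commonly quoted for the Royer-type formula and should be treated as assumed illustrative values, not the fit reported in this paper.

```python
import math

def log10_half_life_s(A, Z, Q_alpha_MeV,
                      a=-25.31, b=-1.1629, c=1.5864):
    """Royer-type analytic estimate of an alpha-decay half-life (seconds).

    log10 T = a + b * A**(1/6) * sqrt(Z) + c * Z / sqrt(Q_alpha).
    A, Z refer to the parent nucleus; Q_alpha is in MeV.
    Coefficients are assumed (commonly quoted even-even fit).
    """
    return a + b * A ** (1.0 / 6.0) * math.sqrt(Z) + c * Z / math.sqrt(Q_alpha_MeV)

# 212Po (even-even, Q_alpha ~ 8.95 MeV): measured half-life ~ 0.3 microseconds,
# i.e. log10 T ~ -6.5; the formula lands in the same decade.
print(log10_half_life_s(212, 84, 8.95))
```

The strong Z/√Q dependence is the Geiger-Nuttall trend: small changes in Qα shift the predicted half-life by orders of magnitude, which is why extrapolated Qα values dominate the uncertainty for superheavy nuclei.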
NASA Technical Reports Server (NTRS)
Zuffada, C.; Cwik, T.; Jamnejad, V.
1993-01-01
Recently, an approach that combines the finite element (FE) technique and an integral equation (IE) to determine the fields scattered by inhomogeneous bodies of complicated shape has been proposed. Basically, a mathematical surface which encloses the scatterers is introduced, thus dividing the space into an interior and an exterior volume, in which the finite element technique and an integral equation for EM scattering, respectively, are applied. The integral equation is set up for the tangential components of the fields at the surface, while in the interior volume the unknowns are the total fields. Continuity of the tangential fields at the boundary, as required by Maxwell's equations, is imposed, thus coupling the two methods to obtain a consistent solution. The coupling term is expressed by a surface integral formed by the dot product of an FE basis function and an IE testing function, or vice versa. By choosing the boundary to be a surface of revolution and by making a convenient selection of IE basis (testing) functions, it is possible to evaluate the integrals analytically on surfaces such as curved triangles, curved quadrilaterals and curved pentagons. We will illustrate the salient steps involved in setting up and carrying out these integrals and discuss which classes of basis (testing) functions and analytic surfaces of revolution they are applicable to. Analytic calculations offer the advantage of better accuracy than purely numerical ones, and, when combined with them, often shed light on issues of numerical convergence and limiting values. Furthermore, they may reduce computation time and storage requirements.
How effective are expressive writing interventions for adolescents? A meta-analytic review.
Travagin, Gabriele; Margola, Davide; Revenson, Tracey A
2015-03-01
This meta-analysis evaluated the effects of the expressive writing intervention (EW; Pennebaker & Beall, 1986) among adolescents. Twenty-one independent studies that assessed the efficacy of expressive writing on youth samples aged 10-18 years were collected and analyzed. Results indicated an overall mean g-effect size that was positive in direction but relatively small (0.127), as well as significant g-effect sizes ranging from 0.107 to 0.246 for the outcome domains of Emotional Distress, Problem Behavior, Social Adjustment, and School Participation. Few significant effects were found within specific outcome domains for putative moderator variables that included characteristics of the participants, intervention instructions, or research design. Studies involving adolescents with high levels of emotional problems at baseline reported larger effects on school performance. Studies that implemented a higher dosage intervention (i.e., greater number and, to some extent, greater spacing of sessions) reported larger effects on somatic complaints. Overall, the findings suggest that expressive writing tends to produce small yet significant improvements in adolescents' well-being. The findings highlight the importance of modifying the traditional expressive writing protocol to enhance its efficacy and reduce potential detrimental effects. At this stage of research the evidence on expressive writing as a viable intervention for adolescents is promising but not decisive.
Lewis, E.R.; Schwartz, S.
2010-03-15
Light scattering by aerosols plays an important role in Earth’s radiative balance, and quantification of this phenomenon is important in understanding and accounting for anthropogenic influences on Earth’s climate. Light scattering by an aerosol particle is determined by its radius and index of refraction, and for aerosol particles that are hygroscopic, both of these quantities vary with relative humidity RH. Here exact expressions are derived for the dependences of the radius ratio (relative to the volume-equivalent dry radius) and index of refraction on RH for aqueous solutions of single solutes. Both of these quantities depend on the apparent molal volume of the solute in solution and on the practical osmotic coefficient of the solution, which in turn depend on concentration and thus implicitly on RH. Simple but accurate approximations are also presented for the RH dependences of both radius ratio and index of refraction for several atmospherically important inorganic solutes over the entire range of RH values for which these substances can exist as solution drops. For all substances considered, the radius ratio is accurate to within a few percent, and the index of refraction to within ~0.02, over this range of RH. Such parameterizations will be useful in radiation transfer models and climate models.
Galli, Vanessa; Borowski, Joyce Moura; Perin, Ellen Cristina; Messias, Rafael da Silva; Labonde, Julia; Pereira, Ivan dos Santos; Silva, Sérgio Delmar Dos Anjos; Rombaldi, Cesar Valmor
2015-01-10
The increasing demand of strawberry (Fragaria×ananassa Duch) fruits is associated mainly with their sensorial characteristics and the content of antioxidant compounds. Nevertheless, the strawberry production has been hampered due to its sensitivity to abiotic stresses. Therefore, to understand the molecular mechanisms highlighting stress response is of great importance to enable genetic engineering approaches aiming to improve strawberry tolerance. However, the study of expression of genes in strawberry requires the use of suitable reference genes. In the present study, seven traditional and novel candidate reference genes were evaluated for transcript normalization in fruits of ten strawberry cultivars and two abiotic stresses, using RefFinder, which integrates the four major currently available software programs: geNorm, NormFinder, BestKeeper and the comparative delta-Ct method. The results indicate that the expression stability is dependent on the experimental conditions. The candidate reference gene DBP (DNA binding protein) was considered the most suitable to normalize expression data in samples of strawberry cultivars and under drought stress condition, and the candidate reference gene HISTH4 (histone H4) was the most stable under osmotic stresses and salt stress. The traditional genes GAPDH (glyceraldehyde-3-phosphate dehydrogenase) and 18S (18S ribosomal RNA) were considered the most unstable genes in all conditions. The expression of phenylalanine ammonia lyase (PAL) and 9-cis epoxycarotenoid dioxygenase (NCED1) genes were used to further confirm the validated candidate reference genes, showing that the use of an inappropriate reference gene may induce erroneous results. This study is the first survey on the stability of reference genes in strawberry cultivars and osmotic stresses and provides guidelines to obtain more accurate RT-qPCR results for future breeding efforts.
Analytical expression of the potential generated by a massive inhomogeneous straight segment
NASA Astrophysics Data System (ADS)
Najid, N.-E.; Elourabi, E.
2011-12-01
Potential calculation is an important task in the study of the dynamical behavior of test particles around celestial bodies. The gravitational potential of irregular bodies has been of great importance since the discovery of binary asteroids opened a new field of research. A simple model to describe the motion of a test particle in that case is to consider a finite homogeneous straight segment. In our work, we extend this model by adding an inhomogeneous distribution of mass. To be consistent with the geometrical shape of the asteroid, we explore a parabolic profile of the density. We establish the closed analytical form of the potential generated by this inhomogeneous massive straight segment. The study of the dynamical behavior is carried out using the Lagrangian formulation, which allowed us to obtain some two- and three-dimensional orbits.
Analytical expressions for maximum wind turbine average power in a Rayleigh wind regime
Carlin, P.W.
1996-12-01
Average or expectation values for annual power of a wind turbine in a Rayleigh wind regime are calculated and plotted as a function of cut-out wind speed. This wind speed is expressed in multiples of the annual average wind speed at the turbine installation site. To provide a common basis for comparison of all real and imagined turbines, the Rayleigh-Betz wind machine is postulated. This machine is an ideal wind machine operating with the ideal Betz power coefficient of 0.593 in a Rayleigh probability wind regime. All other average annual powers are expressed in fractions of that power. Cases considered include: (1) an ideal machine with finite power and finite cutout speed, (2) real machines operating in variable speed mode at their maximum power coefficient, and (3) real machines operating at constant speed.
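The expectation described above can be reproduced numerically. The sketch below (an illustration under stated assumptions, not the paper's calculation) takes the Rayleigh pdf parameterized by the annual mean speed, an ideal Betz machine that follows the v³ power curve from zero up to a cut-out speed, and reports the captured fraction of the full Rayleigh-Betz average power as a function of the cut-out multiple.

```python
import math

def captured_fraction(cutout_multiple, n=20000):
    """Fraction of the Rayleigh-Betz average power captured by an ideal
    Betz machine operating from 0 up to a cut-out speed.

    Speeds are in multiples of the annual mean wind speed v_bar = 1, so the
    Rayleigh pdf is f(v) = (pi*v/2) * exp(-pi*v**2/4) and the full
    (no cut-out) mean cubed speed is E[v^3] = 6/pi.
    """
    # Trapezoidal integration of v^3 * f(v) from 0 to the cut-out speed.
    h = cutout_multiple / n
    total = 0.0
    for i in range(n + 1):
        v = i * h
        y = v ** 3 * (math.pi * v / 2.0) * math.exp(-math.pi * v ** 2 / 4.0)
        total += y if 0 < i < n else y / 2.0
    return (total * h) / (6.0 / math.pi)

for c in (1.0, 2.0, 3.0, 4.0):
    print(c, round(captured_fraction(c), 4))
```

Because the v³ weighting pushes the productive part of the distribution well above the mean speed, the captured fraction rises steeply between one and two mean-speed multiples and is essentially complete by four, consistent with plotting average power against cut-out speed as the abstract describes.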
NASA Astrophysics Data System (ADS)
Modak, Viraj P.; Wyslouzil, Barbara E.; Singer, Sherwin J.
2016-08-01
The crystal-vapor surface free energy γ is an important physical parameter governing physical processes, such as wetting and adhesion. We explore exact and approximate routes to calculate γ based on cleaving an intact crystal into non-interacting sub-systems with crystal-vapor interfaces. We do this by turning off the interactions, ΔV, between the sub-systems. Using the soft-core scheme for turning off ΔV, we find that the free energy varies smoothly with the coupling parameter λ, and a single thermodynamic integration yields the exact γ. We generate another exact method, and a cumulant expansion for γ, by expressing the surface free energy in terms of an average of e^(-βΔV) in the intact crystal. The second cumulant, or Gaussian approximation for γ is surprisingly accurate in most situations, even though we find that the underlying probability distribution for ΔV is clearly not Gaussian. We account for this fact by developing a non-Gaussian theory for γ and find that the difference between the non-Gaussian and Gaussian expressions for γ consists of terms that are negligible in many situations. Exact and approximate methods are applied to the (111) surface of a Lennard-Jones crystal and are also tested for more complex molecular solids, the surfaces of octane and nonadecane. Alkane surfaces were chosen for study because their crystal-vapor surface free energy has been of particular interest for understanding surface freezing in these systems.
Zhang, Runxuan; Calixto, Cristiane P G; Tzioutziou, Nikoleta A; James, Allan B; Simpson, Craig G; Guo, Wenbin; Marquez, Yamile; Kalyna, Maria; Patro, Rob; Eyras, Eduardo; Barta, Andrea; Nimmo, Hugh G; Brown, John W S
2015-10-01
RNA-sequencing (RNA-seq) allows global gene expression analysis at the individual transcript level. Accurate quantification of transcript variants generated by alternative splicing (AS) remains a challenge. We have developed a comprehensive, nonredundant Arabidopsis reference transcript dataset (AtRTD) containing over 74 000 transcripts for use with algorithms to quantify AS transcript isoforms in RNA-seq. The AtRTD was formed by merging transcripts from TAIR10 and novel transcripts identified in an AS discovery project. We have estimated transcript abundance in RNA-seq data using the transcriptome-based alignment-free programmes Sailfish and Salmon and have validated quantification of splicing ratios from RNA-seq by high resolution reverse transcription polymerase chain reaction (HR RT-PCR). Good correlations between splicing ratios from RNA-seq and HR RT-PCR were obtained demonstrating the accuracy of abundances calculated for individual transcripts in RNA-seq. The AtRTD is a resource that will have immediate utility in analysing Arabidopsis RNA-seq data to quantify differential transcript abundance and expression.
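Splicing ratios of the kind validated here against HR RT-PCR follow directly from per-transcript abundances: each isoform's abundance is divided by the summed abundance of all isoforms of its gene. A minimal sketch is below; the transcript IDs and TPM values are made up for illustration, not taken from the AtRTD or any Salmon/Sailfish run.

```python
from collections import defaultdict

def splicing_ratios(tpm_by_transcript, gene_of):
    """Return isoform ratio = transcript TPM / total TPM of its gene.

    tpm_by_transcript: {transcript_id: TPM}
    gene_of:           {transcript_id: gene_id}
    """
    gene_total = defaultdict(float)
    for tx, tpm in tpm_by_transcript.items():
        gene_total[gene_of[tx]] += tpm
    return {tx: (tpm / gene_total[gene_of[tx]] if gene_total[gene_of[tx]] else 0.0)
            for tx, tpm in tpm_by_transcript.items()}

# Hypothetical quantification for one two-isoform gene.
tpm = {"AT1G01060.1": 30.0, "AT1G01060.2": 10.0}
gene = {"AT1G01060.1": "AT1G01060", "AT1G01060.2": "AT1G01060"}
print(splicing_ratios(tpm, gene))  # {'AT1G01060.1': 0.75, 'AT1G01060.2': 0.25}
```

Comparing such ratios between RNA-seq and HR RT-PCR, rather than raw abundances, removes gene-level expression differences and isolates the alternative splicing signal.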
An analytic expression for the sheath criterion in magnetized plasmas with multi-charged ion species
Hatami, M. M.
2015-04-15
The generalized Bohm criterion in magnetized multi-component plasmas consisting of multi-charged positive and negative ion species and electrons is analytically investigated using the hydrodynamic model. It is assumed that the electron and negative ion density distributions are Boltzmann distributions with different temperatures and that the positive ions enter the sheath region obliquely. Our results show that the positive and negative ion temperatures, the orientation of the applied magnetic field and the charge numbers of the positive and negative ions strongly affect the Bohm criterion in these multi-component plasmas. To check the validity of the derived generalized Bohm criterion, it is reduced to some familiar physical conditions, and it is shown that the monotonic reduction of the positive ion density distribution leading to sheath formation occurs only when the entrance velocity of the ions into the sheath satisfies the obtained Bohm criterion. Also, as a practical application of the obtained Bohm criterion, the effects of ionic temperature and concentration as well as magnetic field on the behavior of the charged particle density distributions, and hence on the sheath thickness, of a magnetized plasma consisting of electrons and singly charged positive and negative ion species are studied numerically.
NASA Technical Reports Server (NTRS)
Deprit, A.
1975-01-01
A theory for generating segmented ephemerides is discussed as a means for fast generation and simple retrieval of nominal orbit data. Over a succession of finite intervals of time, the orbit is represented by a best approximation expressed by Chebyshev polynomials. Storage of coefficients tables for Chebyshev polynomials is seen as a method to reduce data and decrease transmission costs. A general algorithm was constructed and computer programs were designed. The possibility of storing an ephemeris for a few days in the on-board computer, or in microprocessors attached to the data collectors is suggested.
Croft, Stephen; Evans, Louise G; Schear, Melissa A
2010-01-01
In the realm of nuclear safeguards, passive neutron multiplicity counting using shift register pulse train analysis to nondestructively quantify Pu in product materials is a familiar and widely applied technique. The approach most commonly taken is to construct a neutron detector consisting of ³He-filled cylindrical proportional counters embedded in a high density polyethylene moderator. Fast neutrons from the item enter the moderator and are quickly slowed down, on timescales of the order of 1-2 µs, creating a thermal population which then persists typically for several tens of µs and is sampled by the ³He detectors. Because the initial transient is of comparatively short duration, it has been traditional to treat it as instantaneous and furthermore to approximate the subsequent capture time distribution as exponential in shape. With these approximations, simple expressions for the various Gate Utilization Factors (GUFs) can be obtained. These factors represent the proportion of the time-correlated events, i.e. the Doubles and Triples signal present in the pulse train, that is detected by the coincidence gate structure chosen (predelay and gate width settings of the multiplicity shift register). More complicated expressions can be derived by generalizing the capture time distribution to the multiple time components, or harmonics, typically present in real systems. When it comes to applying passive neutron multiplicity methods to extremely intense (i.e. high emission rate and highly multiplying) neutron sources, there is a drive to use detector types with very fast response characteristics in order to cope with the high rates. In addition to short pulse width, detectors with a short capture time profile are also desirable so that a short coincidence gate width can be set in order to reduce the chance, or Accidental, coincidence signal. In extreme cases, such as might be realized using boron loaded scintillators, the dieaway time may be so short that the build
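Under the instantaneous-injection, single-exponential dieaway approximation described above, the doubles gate utilization factor takes the familiar closed form f_d = e^(-P/τ)(1 - e^(-G/τ)). A sketch is below; the predelay, gate and dieaway settings are illustrative values, not parameters of any specific counter.

```python
import math

def doubles_gate_fraction(predelay_us, gate_us, dieaway_us):
    """Doubles gate utilization factor for a single-exponential capture-time
    distribution:

        f_d = exp(-P/tau) * (1 - exp(-G/tau))

    P = predelay, G = coincidence gate width, tau = detector dieaway time
    (all in the same units, here microseconds).
    """
    return math.exp(-predelay_us / dieaway_us) * (1.0 - math.exp(-gate_us / dieaway_us))

# Illustrative settings for a conventional 3He well counter.
print(doubles_gate_fraction(4.5, 64.0, 50.0))  # ~0.66
# A fast detector (short dieaway) reaches a similar or better fraction with
# a much shorter gate, which suppresses the Accidentals rate.
print(doubles_gate_fraction(0.5, 8.0, 5.0))
```

This makes the trade-off in the abstract explicit: shortening τ lets G shrink proportionally, cutting the Accidental coincidence window without sacrificing the correlated-signal fraction.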
Analytical Expressions for the Hard-Scattering Production of Massive Partons
Wong, Cheuk-Yin
2016-01-01
We obtain explicit expressions for the two-particle differential cross section $E_c E_\kappa \, d\sigma(AB \to c\kappa X)/d\mathbf{c}\, d\mathbf{\kappa}$ and the two-particle angular correlation function $d\sigma(AB \to c\kappa X)/d\Delta\phi \, d\Delta y$ in the hard-scattering production of massive partons, in order to exhibit the "ridge" structure on the away side in the hard-scattering process. The single-particle production cross section $d\sigma(AB \to cX)/dy_c \, c_T \, dc_T$ is also obtained and compared with the ALICE experimental data for charm production in $pp$ collisions at 7 TeV at the LHC.
Yanguas-Gil, Angel; Elam, Jeffrey W.
2014-05-15
In this work, the authors present analytic models for atomic layer deposition (ALD) in three common experimental configurations: cross-flow, particle coating, and spatial ALD. These models, based on the plug-flow and well-mixed approximations, allow us to determine the minimum dose times and materials utilization for all three configurations. A comparison between the three models shows that throughput and precursor utilization can each be expressed by universal equations, in which the particularity of the experimental system is contained in a single parameter related to the residence time of the precursor in the reactor. For the case of cross-flow reactors, the authors show how simple analytic expressions for the reactor saturation profiles agree well with experimental results. Consequently, the analytic model can be used to extract information about the ALD surface chemistry (e.g., the reaction probability) by comparing the analytic and experimental saturation profiles, providing a useful tool for characterizing new and existing ALD processes.
NASA Astrophysics Data System (ADS)
Li, Ping; Li, Xin-zhou; Xi, Ping
2016-06-01
We present a detailed study of the spherically symmetric solutions in Lorentz-breaking massive gravity. There is an undetermined function $\mathcal{F}(X, w_1, w_2, w_3)$ in the action of Stückelberg fields $S_\varphi = \Lambda^4 \int d^4x \sqrt{-g}\, \mathcal{F}$, which should be resolved through physical means. In general relativity, the spherically symmetric solution to the Einstein equation is a benchmark, and its massive deformation also plays a crucial role in Lorentz-breaking massive gravity. $\mathcal{F}$ will satisfy the constraint equation $T_{01}=0$ from the spherically symmetric Einstein tensor $G_{01}=0$, if we maintain that any reasonable physical theory should possess spherically symmetric solutions. The Stückelberg field $\varphi^i$ is taken as a 'hedgehog' configuration $\varphi^i = \varphi(r)\, x^i/r$, whose stability is guaranteed by the topological one. Under this ansatz, $T_{01}=0$ is reduced to $d\mathcal{F}=0$. The functions $\mathcal{F}$ with $d\mathcal{F}=0$ form a commutative ring $R_{\mathcal{F}}$. We obtain an expression of the solution to the functional differential equation with spherical symmetry if $\mathcal{F} \in R_{\mathcal{F}}$. If $\mathcal{F} \in R_{\mathcal{F}}$ and $\partial\mathcal{F}/\partial X = 0$, the functions $\mathcal{F}$ form a subring $S_{\mathcal{F}} \subset R_{\mathcal{F}}$. We show that the metric is Schwarzschild, Schwarzschild-AdS or Schwarzschild-dS if $\mathcal{F} \in S_{\mathcal{F}}$. When $\mathcal{F} \in R_{\mathcal{F}}$ but
Goheen, Steven C.
2001-07-01
Characterizing environmental samples has been exhaustively addressed in the literature for most analytes of environmental concern. One of the weak areas of environmental analytical chemistry is that of radionuclides and samples contaminated with radionuclides. The analysis of samples containing high levels of radionuclides can be far more complex than that of non-radioactive samples. This chapter addresses the analysis of samples with a wide range of radioactivity. The other areas of characterization examined in this chapter are the hazardous components of mixed waste, and special analytes often associated with radioactive materials. Characterizing mixed waste is often similar to characterizing waste components in non-radioactive materials. The largest differences are in associated safety precautions to minimize exposure to dangerous levels of radioactivity. One must attempt to keep radiological dose as low as reasonably achievable (ALARA). This chapter outlines recommended procedures to safely and accurately characterize regulated components of radioactive samples.
Cui, Linyan; Xue, Bindang; Zhou, Fugen
2013-11-01
The effects of moderate-to-strong non-Kolmogorov turbulence on the angle of arrival (AOA) fluctuations for plane and spherical waves are investigated in detail both analytically and numerically. New analytical expressions for the variance of AOA fluctuations are derived for moderate-to-strong non-Kolmogorov turbulence. The new expressions cover a wider range of non-Kolmogorov turbulence strengths and reduce correctly to previously published analytic expressions for plane and spherical wave propagation through both weak non-Kolmogorov turbulence and moderate-to-strong Kolmogorov turbulence. The final results indicate that, as turbulence strength becomes greater, the expressions developed with the Rytov theory deviate from those given in this work. This deviation becomes greater with stronger turbulence, up to moderate-to-strong turbulence strengths. Furthermore, the general spectral power law has a significant influence on the variance of AOA fluctuations in non-Kolmogorov turbulence. These results are useful for understanding the potential impact of deviations from the standard Kolmogorov spectrum.
NASA Technical Reports Server (NTRS)
Georgevic, R. M.
1973-01-01
Closed-form analytic expressions for the time variations of instantaneous orbital parameters and of the topocentric range and range rate of a spacecraft moving in the gravitational field of an oblate large body are derived using a first-order variation of parameters technique. In addition, closed-form analytic expressions are obtained for the partial derivatives of the topocentric range and range rate with respect to the coefficient of the second harmonic of the potential of the central body (J₂). The results are applied to the motion of a point-mass spacecraft in orbit around the equatorially elliptic, oblate Sun, with J₂ ≈ 0.000027.
Martinez-Garcia, Jorge; Leoni, Matteo; Scardi, Paolo
2007-11-01
An analytical solution was obtained for the average contrast factor C_hkl of dislocations with <001>(100) slip system in anisotropic cubic crystals. The expression provides the dislocation contrast factor as an explicit function of the Miller indices (hkl), the elastic anisotropy factor A_z, and the Poisson ratio ν, thus avoiding lengthy numerical calculations or approximate parametrizations. The expression was incorporated in the whole powder pattern modelling algorithm and used to study dislocations in ball-milled nanocrystalline Cu₂O.
Quo vadis, analytical chemistry?
Valcárcel, Miguel
2016-01-01
This paper presents an open, personal, fresh approach to the future of Analytical Chemistry in the context of the deep changes Science and Technology are anticipated to experience. Its main aim is to challenge young analytical chemists because the future of our scientific discipline is in their hands. A description of the not completely accurate overall conceptions of our discipline, both past and present, that should be avoided is followed by a flexible, integral definition of Analytical Chemistry and its cornerstones (viz., aims and objectives, quality trade-offs, the third basic analytical reference, the information hierarchy, social responsibility, independent research, transfer of knowledge and technology, interfaces to other scientific-technical disciplines, and well-oriented education). Obsolete paradigms, and the more accurate general and specific ones that can be expected to provide the framework for our discipline in the coming years, are then described. Finally, the three possible responses of analytical chemists to the proposed changes in our discipline are discussed. PMID:26631024
Chiba-Mizutani, Tomoko; Miura, Hideka; Matsuda, Masakazu; Matsuda, Zene; Yokomaku, Yoshiyuki; Miyauchi, Kosuke; Nishizawa, Masako; Yamamoto, Naoki; Sugiura, Wataru
2007-02-01
Two new T-cell-based reporter cell lines were established to measure human immunodeficiency virus type 1 (HIV-1) infectivity. One cell line naturally expresses CD4 and CXCR4, making it susceptible to X4-tropic viruses, and the other cell line, in which a CCR5 expression vector was introduced, is susceptible to both X4- and R5-tropic viruses. Reporter cells were constructed by transfecting the human T-cell line HPB-Ma, which demonstrates high susceptibility to HIV-1, with genomes expressing two different luciferase reporters, HIV-1 long terminal repeat-driven firefly luciferase and cytomegalovirus promoter-driven Renilla luciferase. Upon HIV infection, the cells expressed firefly luciferase at levels that were highly correlated (r² = 0.91 to 0.98) with the production of the capsid antigen p24. The cells also constitutively expressed Renilla luciferase, which was used to monitor cell numbers and viability. The reliability of the cell lines for two in vitro applications, drug resistance phenotyping and drug screening, was confirmed. As HIV-1 efficiently replicated in these cells, they could be used for multiple-round replication assays as an alternative method to a single-cycle replication protocol. Coefficients of variation for drug susceptibility evaluated with the cell lines ranged from 17 to 41%. The new cell lines were beneficial for evaluating antiretroviral drug resistance. Firefly luciferase gave a wider dynamic range for evaluating virus infectivity, and the introduction of Renilla luciferase improved assay reproducibility. The cell lines were also beneficial for screening new antiretroviral agents, as false inhibition caused by the cytotoxicity of test compounds was easily detected by monitoring Renilla luciferase activity.
Guérit, G; Drobinski, P; Flamant, P H; Augère, B
2001-08-20
There have been many analyses of the reduction of lidar system efficiency in bistatic geometry caused by beam spreading and by fluctuations along the two paths generated by refractive-index turbulence. Although these studies have led to simple, approximate results that provide a reliable basis for preliminary assessment of lidar performance, they do not apply to monostatic lidars. For such systems, calculations and numerical simulations predict an enhanced coherence for the backscattered field. However, to the authors' knowledge, a simple analytical mathematical framework for diagnosing the effects of refractive-index turbulence on the performance of both bistatic and monostatic coherent lidars does not exist. Here analytical empirical expressions for the transverse coherence variables and the heterodyne intensity are derived for bistatic and monostatic lidars as a function of moderate atmospheric refractive-index turbulence within the framework of the Gaussian-beam approximation.
Imai, Tsuyoshi; Ubi, Benjamin E; Saito, Takanori; Moriguchi, Takaya
2014-01-01
We have evaluated suitable reference genes for real time (RT)-quantitative PCR (qPCR) analysis in Japanese pear (Pyrus pyrifolia). We tested most frequently used genes in the literature such as β-Tubulin, Histone H3, Actin, Elongation factor-1α, Glyceraldehyde-3-phosphate dehydrogenase, together with newly added genes Annexin, SAND and TIP41. A total of 17 primer combinations for these eight genes were evaluated using cDNAs synthesized from 16 tissue samples from four groups, namely: flower bud, flower organ, fruit flesh and fruit skin. Gene expression stabilities were analyzed using geNorm and NormFinder software packages or by ΔCt method. geNorm analysis indicated three best performing genes as being sufficient for reliable normalization of RT-qPCR data. Suitable reference genes were different among sample groups, suggesting the importance of validation of gene expression stability of reference genes in the samples of interest. Ranking of stability was basically similar between geNorm and NormFinder, suggesting usefulness of these programs based on different algorithms. ΔCt method suggested somewhat different results in some groups such as flower organ or fruit skin; though the overall results were in good correlation with geNorm or NormFinder. Gene expression of two cold-inducible genes PpCBF2 and PpCBF4 were quantified using the three most and the three least stable reference genes suggested by geNorm. Although normalized quantities were different between them, the relative quantities within a group of samples were similar even when the least stable reference genes were used. Our data suggested that using the geometric mean value of three reference genes for normalization is quite a reliable approach to evaluating gene expression by RT-qPCR. We propose that the initial evaluation of gene expression stability by ΔCt method, and subsequent evaluation by geNorm or NormFinder for limited number of superior gene candidates will be a practical way of finding out
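The ΔCt screening step described above can be reproduced in a few lines: for every pair of candidate genes, take the per-sample Ct difference, compute its standard deviation across samples, and score each gene by the mean SD over all its pairs (lower = more stable). The sketch below uses synthetic Ct values, not the pear dataset.

```python
from itertools import combinations
from statistics import stdev

def delta_ct_scores(ct):
    """ct: {gene: [Ct value per sample]}.
    Returns {gene: mean pairwise delta-Ct SD}; lower scores mean the gene
    tracks the other candidates more consistently (more stable)."""
    pair_sd = {g: [] for g in ct}
    for g1, g2 in combinations(ct, 2):
        sd = stdev(a - b for a, b in zip(ct[g1], ct[g2]))
        pair_sd[g1].append(sd)
        pair_sd[g2].append(sd)
    return {g: sum(v) / len(v) for g, v in pair_sd.items()}

# Synthetic example: geneA and geneB track each other across samples;
# geneC does not, so it scores worst (least stable).
ct = {"geneA": [20.0, 21.0, 22.0, 23.0],
      "geneB": [20.1, 21.0, 22.2, 22.9],
      "geneC": [20.0, 20.0, 20.0, 20.0]}
scores = delta_ct_scores(ct)
print(min(scores, key=scores.get))  # most stable candidate
```

As the abstract suggests, this cheap ranking is a sensible first pass, with geNorm or NormFinder then applied to the shortlisted candidates.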
Zhao, Luyao; Yang, Shuming; Zhang, Yanhua; Zhang, Ying; Hou, Can; Cheng, Yongyou; You, Xinyong; Gu, Xu; Zhao, Zhen; Muhammad Tarique, Tunio
2016-03-01
In this study, quantification of mRNA gene expression was examined as a biomarker to detect ractopamine abuse and ractopamine residues in cashmere goats. The work focused on identifying potential gene expression biomarkers and describing the correlation between gene expression and residue level, using 58 animals over 49 days. The results showed that administration period and residue level significantly influenced mRNA expression of the β2-adrenergic receptor (β2AR); the enzymes PRKACB, ADCY3, ATP1A3, ATP2A3, PTH, and MYLK; and the immune factors IL-1β and TNF-α. Statistical analyses, including principal components analysis (PCA), hierarchical cluster analysis (HCA), and discriminant analysis (DA), showed that these genes can serve as potential biomarkers for ractopamine in skeletal muscle and are also suitable for describing different residue levels separately.
NASA Astrophysics Data System (ADS)
Dattani, Nikesh S.; Welsh, Staszek
2014-06-01
Being the simplest neutral open shell molecule, BeH is a very important benchmark system for ab initio calculations. However, the most accurate empirical potentials and Born-Oppenheimer breakdown (BOB) functions for this system are nearly a decade old and are not reliable in the long-range region. In particular, the uncertainties in their dissociation energies were about ±200 cm-1, and even the number of vibrational levels predicted was at the time very questionable, meaning that no good benchmark exists for ab initio calculations on neutral open shell molecules. We build new empirical potentials for BeH, BeD, and BeT that are much more reliable in the long-range region. Being the second lightest heteronuclear molecule with a stable ground electronic state, BeH is also very important for the study of isotope effects, such as BOB. We extensively study isotope effects in this system, and we show that the empirical BOB functions fitted to the data of any two isotopologues are sufficient to predict crucial properties of the third isotopologue.
Emfietzoglou, D.; Kyriakou, I.; Garcia-Molina, R.; Abril, I.; Kostarelos, K.
2010-09-15
We have determined "effective" Bethe coefficients and the mean excitation energy of stopping theory (I-value) for multiwalled carbon nanotubes (MWCNTs) and single-walled carbon nanotube (SWCNT) bundles based on a sum-rule constrained optical-data model energy loss function with improved asymptotic properties. Noticeable differences between MWCNTs, SWCNT bundles, and the three allotropes of carbon (diamond, graphite, glassy carbon) are found. By means of Bethe's asymptotic approximation, the inelastic scattering cross section, the electronic stopping power, and the average energy transfer to target electrons in a single inelastic collision are calculated analytically for a broad range of electron and proton beam energies using realistic excitation parameters.
Hasan, Atif Noorul; Ahmad, Mohammad Wakil; Madar, Inamul Hasan; Grace, B Leena; Hasan, Tarique Noorul
2015-01-01
Smoking is the leading cause of lung cancer development, and several genes have been identified as potential biomarkers for lung cancer. To contribute to the present scientific knowledge of biomarkers for lung cancer, two data sets, GDS3257 and GDS3054, were downloaded from NCBI's GEO database and normalized with the RMA and GRMA packages (Bioconductor). Differentially expressed genes were extracted using R (3.1.2); the DAVID online tool was used for gene annotation, and the GeneMANIA tool was used for constructing the gene regulatory network. Nine smoking-independent genes were found, whose average expression was almost identical in both datasets. Five of these genes were found to be associated with cancer subtypes. Thirty smoking-specific genes were identified; among them, eight were associated with cancer subtypes. GPR110, IL1RN and HSP90AA1 were found to be directly associated with lung cancer. SEMA6A is differentially expressed only in non-smoking lung cancer samples. FLG is a differentially expressed smoking-specific gene related to the onset of various cancer subtypes. Functional annotation and network analysis revealed that FLG participates in various epidermal tissue developmental processes and is co-expressed with other genes. Lung tissues are epidermal tissues, which suggests that alteration of FLG may cause lung cancer. We conclude that smoking alters the expression of several genes and associated biological pathways during the development of lung cancer.
NASA Technical Reports Server (NTRS)
Gonzales, David A.; Varghese, Philip L.
1993-01-01
Closed form expressions for inelastic state-to-state and state-specific dissociative rate coefficients for utilization in vibrational master equation studies of shock heated CO, N2, and O2 highly dilute in Ar are considered. The master equation is linearized by neglecting diatom-diatom collisions and recombination. Master equation results indicate that the most significant contribution to dissociation comes from low and mid lying vibrational levels.
THORNLEY, J. H. M.
2002-01-01
Analytical expressions for the contributions of sun and shade leaves to instantaneous canopy photosynthesis are derived. The analysis is based on four assumptions. First, that the canopy is closed in the sense that it is horizontally uniform. Secondly, that there is an exponential profile of light down the canopy with the same decay constant for light from different parts of the sky. Thirdly, that the leaf photosynthetic response to incident irradiance can be described by a three‐parameter non‐rectangular hyperbola (NRH). And lastly, that light acclimation at the leaf level occurs in only one parameter of the NRH, that describing the light‐saturated photosynthetic rate, which is assumed to be proportional to the local averaged leaf irradiance. These assumptions have been extensively researched empirically and theoretically and their limitations are quite well understood. They have been widely used when appropriate. Combining these four assumptions permits the derivation of algebraic expressions for instantaneous canopy photosynthesis which are computationally efficient because they avoid the necessity for numerical integration down the canopy. These are valuable for modelling plant and crop ecosystems, for which canopy photosynthesis is the primary driver. Ignoring the sun/shade dichotomy can result in overestimates of canopy photosynthesis of up to 20 %, but using a rectangular hyperbola instead of a non‐rectangular hyperbola to estimate canopy photosynthesis taking account of sun and shade leaves can lead to a similarly sized underestimate. PMID:12096806
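The three-parameter non-rectangular hyperbola underlying the analysis can be written down directly. The sketch below solves the NRH quadratic for the leaf photosynthetic rate; parameter names are generic and not tied to the paper's notation.

```python
import math

def nrh_photosynthesis(irradiance, alpha, pmax, theta):
    """Leaf gross photosynthesis P from the non-rectangular hyperbola
        theta*P**2 - (alpha*I + pmax)*P + alpha*I*pmax = 0,
    taking the physically meaningful lower root.
    alpha: initial quantum efficiency, pmax: light-saturated rate,
    theta: curvature parameter in (0, 1]."""
    b = alpha * irradiance + pmax
    disc = b * b - 4.0 * theta * alpha * irradiance * pmax
    return (b - math.sqrt(disc)) / (2.0 * theta)
```

With theta near 1 the curve approaches the Blackman (sharp-transition) limit, and as theta decreases toward 0 it relaxes toward the rectangular hyperbola the paper compares against.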
Analytical illuminance and caustic surface calculations in geometrical optics.
Shealy, D L
1976-10-01
The analytical illuminance monitoring technique provides an exact expression within the geometrical optics limit for the illuminance over an image surface for light that has passed through a multi-interface optical system. The light source may be collimated rays, a point source, or an extended source. The geometrical energy distributions can be graphically displayed as a line or point spread function over selected image planes. The analytical illuminance technique gives a more accurate and efficient computer technique for evaluating the energy distribution over an image surface than the traditional scanning of the spot diagram mathematically with a narrow slit. The analytical illuminance monitoring technique also provides a closed form expression for the caustic surface of the optical system. It is shown by examining the caustic surface for a number of lens systems from the literature that the caustic is a valuable merit function for evaluating the aberrations and the size of the focal region.
Accurate monotone cubic interpolation
NASA Technical Reports Server (NTRS)
Huynh, Hung T.
1991-01-01
Monotone piecewise cubic interpolants are simple and effective. They are generally third-order accurate, except near strict local extrema, where accuracy degenerates to second order due to the monotonicity constraint. Algorithms for piecewise cubic interpolants that preserve monotonicity as well as uniform third- and fourth-order accuracy are presented. The gain of accuracy is obtained by relaxing the monotonicity constraint in a geometric framework in which the median function plays a crucial role.
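The median function's role as a monotonicity limiter can be illustrated with a simplified node-slope limiter. This is a sketch of the general median-limiting idea only, not Huynh's exact third/fourth-order algorithm.

```python
def median(a, b, c):
    """Middle value of three numbers; the key geometric building block."""
    return max(min(a, b), min(max(a, b), c))

def limited_slope(d_left, d_right):
    """Node slope from the two adjacent one-sided differences: keep the
    centered average when it lies within the monotonicity bounds,
    otherwise clip it to those bounds via the median."""
    if d_left * d_right <= 0.0:
        return 0.0  # local extremum: flatten to preserve monotonicity
    centered = 0.5 * (d_left + d_right)
    return median(centered, 2.0 * d_left, 2.0 * d_right)
```

Flattening at extrema is precisely where the classical schemes lose an order of accuracy; the paper's contribution is a more careful relaxation of that clipping.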
Accurate Finite Difference Algorithms
NASA Technical Reports Server (NTRS)
Goodrich, John W.
1996-01-01
Two families of finite difference algorithms for computational aeroacoustics are presented and compared. All of the algorithms are single-step explicit methods, they have the same order of accuracy in both space and time, with examples up to eleventh order, and they have multidimensional extensions. One of the algorithm families has spectral-like high resolution. Propagation with high-order and high-resolution algorithms can produce accurate results after O(10^6) periods of propagation with eight grid points per wavelength.
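A single-step high-order explicit stencil of the kind discussed can be illustrated with a standard sixth-order central difference. This is an illustrative stencil with textbook coefficients, not one of the paper's aeroacoustics algorithms.

```python
import math

def d1_sixth_order(f, x, h):
    """Sixth-order accurate central difference for f'(x), with stencil
    coefficients (-1, 9, -45, 0, 45, -9, 1) / (60 h)."""
    return (-f(x - 3 * h) + 9 * f(x - 2 * h) - 45 * f(x - h)
            + 45 * f(x + h) - 9 * f(x + 2 * h) + f(x + 3 * h)) / (60.0 * h)

# Even with a fairly coarse step, the truncation error is tiny:
# d/dx sin(x) at x = 0 should be 1.
err = abs(d1_sixth_order(math.sin, 0.0, 0.1) - 1.0)
```

The leading error term scales as h^6, which is what lets such schemes stay accurate over very long propagation distances with few points per wavelength.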
NASA Astrophysics Data System (ADS)
Garai, Sisir Kumar; Mukhopadhyay, Sourangshu
2010-08-01
To implement different all-optical logic operations, encoding and decoding of the optical signal is a very important issue. Many types of optical signal encoding and decoding techniques have been adopted, such as intensity encoding, polarization encoding, phase encoding, and the symbolic substitution technique, and all of these existing techniques have their own limitations. In this context one may mention the frequency encoding/decoding technique. The basic inherent advantage of frequency encoding over the other existing techniques is that, because frequency is a fundamental character of a signal, it preserves its identity throughout the communication of the signal, irrespective of reflection, refraction, attenuation, etc. Moreover, different optical signals have different distinct frequencies, each of which may be encoded as a distinct state of a logic system to represent information. With this technique it is possible to implement binary logic systems as well as higher-order logic systems such as tristate and quaternary logic. The major advantages of a multivalued logic system over a Boolean logic system are that it offers more information states, and hence higher information storage capacity, and that it permits carry-free and borrow-free operations, which are less time consuming, so the speed of operation is higher. We have already developed methods for implementing different all-optical frequency-encoded logic gates as well as different optical processors. In this communication we propose an analytical approach to deriving the expressions for the outputs of frequency-encoded binary logic operations in terms of the input frequencies, starting from the basic laws of reflection and transmission and the frequency conversion property of optical devices, and we indicate a way to implement these logic operations.
Analytical solutions for radiation-driven winds in massive stars. I. The fast regime
Araya, I.; Curé, M.; Cidale, L. S.
2014-11-01
Accurate mass-loss rate estimates are crucial in the study of wind properties of massive stars and for testing different evolutionary scenarios. From a theoretical point of view, this implies solving a complex set of differential equations in which the radiation field and the hydrodynamics are strongly coupled. The use of an analytical expression to represent the radiation force and the solution of the equation of motion has many advantages over numerical integrations. Therefore, in this work, we present an analytical expression as a solution of the equation of motion for radiation-driven winds in terms of the force multiplier parameters. This analytical expression is obtained by employing the line acceleration expression given by Villata and the methodology proposed by Müller and Vink. In addition, we find useful relationships to determine the parameters for the line acceleration given by Müller and Vink in terms of the force multiplier parameters.
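For orientation, analytical wind solutions of this kind are usually compared against the standard beta-law velocity parametrization, sketched here together with the continuity equation that ties velocity and density to the mass-loss rate. These are generic textbook formulas, not the paper's specific solution.

```python
import math

def beta_law_velocity(r, r_star, v_inf, beta):
    """Standard beta-law for a line-driven wind:
    v(r) = v_inf * (1 - R_star/r)**beta, valid for r > R_star."""
    return v_inf * (1.0 - r_star / r) ** beta

def mass_loss_rate(r, rho, v):
    """Continuity equation for a stationary spherical wind:
    Mdot = 4 * pi * r**2 * rho(r) * v(r)."""
    return 4.0 * math.pi * r * r * rho * v
```

A beta-law wind with v_inf = 2000 km/s and beta = 1 reaches half its terminal speed at two stellar radii, which is the kind of behavior an analytical solution in terms of force multiplier parameters must reproduce.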
NASA Technical Reports Server (NTRS)
Flannelly, W. G.; Fabunmi, J. A.; Nagy, E. J.
1981-01-01
Analytical methods for combining flight acceleration and strain data with shake test mobility data to predict the effects of structural changes on flight vibrations and strains are presented. This integration of structural dynamic analysis with flight performance is referred to as analytical testing. The objective of this methodology is to analytically estimate the results of flight testing contemplated structural changes with minimum flying and change trials. The category of changes to the aircraft includes mass, stiffness, absorbers, isolators, and active suppressors. Examples of applying the analytical testing methodology using flight test and shake test data measured on an AH-1G helicopter are included. The techniques and procedures for vibration testing and modal analysis are also described.
Accurate quantum chemical calculations
NASA Technical Reports Server (NTRS)
Bauschlicher, Charles W., Jr.; Langhoff, Stephen R.; Taylor, Peter R.
1989-01-01
An important goal of quantum chemical calculations is to provide an understanding of chemical bonding and molecular electronic structure. A second goal, the prediction of energy differences to chemical accuracy, has been much harder to attain. First, the computational resources required to achieve such accuracy are very large, and second, it is not straightforward to demonstrate that an apparently accurate result, in terms of agreement with experiment, does not result from a cancellation of errors. Recent advances in electronic structure methodology, coupled with the power of vector supercomputers, have made it possible to solve a number of electronic structure problems exactly using the full configuration interaction (FCI) method within a subspace of the complete Hilbert space. These exact results can be used to benchmark approximate techniques that are applicable to a wider range of chemical and physical problems. The methodology of many-electron quantum chemistry is reviewed. Methods are considered in detail for performing FCI calculations. The application of FCI methods to several three-electron problems in molecular physics is discussed. A number of benchmark applications of FCI wave functions are described. Atomic basis sets and the development of improved methods for handling very large basis sets are discussed; these are then applied to a number of chemical and spectroscopic problems; to transition metals; and to problems involving potential energy surfaces. Although the experiences described give considerable grounds for optimism about the general ability to perform accurate calculations, there are several problems that have proved less tractable, at least with current computer resources, and these and possible solutions are discussed.
Not Available
2006-06-01
In the Analytical Microscopy group, within the National Center for Photovoltaics' Measurements and Characterization Division, we combine two complementary areas of analytical microscopy--electron microscopy and proximal-probe techniques--and use a variety of state-of-the-art imaging and analytical tools. We also design and build custom instrumentation and develop novel techniques that provide unique capabilities for studying materials and devices. In our work, we collaborate with you to solve materials- and device-related R&D problems. This sheet summarizes the uses and features of four major tools: transmission electron microscopy, scanning electron microscopy, the dual-beam focused-ion-beam workstation, and scanning probe microscopy.
Accurate Optical Reference Catalogs
NASA Astrophysics Data System (ADS)
Zacharias, N.
2006-08-01
Current and near future all-sky astrometric catalogs on the ICRF are reviewed with the emphasis on reference star data at optical wavelengths for user applications. The standard error of a Hipparcos Catalogue star position is now about 15 mas per coordinate. For the Tycho-2 data it is typically 20 to 100 mas, depending on magnitude. The USNO CCD Astrograph Catalog (UCAC) observing program was completed in 2004 and reductions toward the final UCAC3 release are in progress. This all-sky reference catalogue will have positional errors of 15 to 70 mas for stars in the 10 to 16 mag range, with a high degree of completeness. Proper motions for the approximately 60 million UCAC stars will be derived by combining UCAC astrometry with available early epoch data, including yet unpublished scans of the complete set of AGK2, Hamburg Zone astrograph and USNO Black Birch programs. Accurate positional and proper motion data are combined in the Naval Observatory Merged Astrometric Dataset (NOMAD), which includes Hipparcos, Tycho-2, UCAC2, USNO-B1, and NPM+SPM plate scan data for astrometry, and is supplemented by multi-band optical photometry as well as 2MASS near infrared photometry. The Milli-Arcsecond Pathfinder Survey (MAPS) mission is currently being planned at USNO. This is a micro-satellite to obtain 1 mas positions, parallaxes, and 1 mas/yr proper motions for all bright stars down to about 15th magnitude. This program will be supplemented by a ground-based program to reach 18th magnitude on the 5 mas level.
Lewis, D.W. (Dept. of Geology); McConchie, D.M. (Centre for Coastal Management)
1994-01-01
Both a self-instruction manual and a "cookbook" guide to field and laboratory analytical procedures, this book provides an essential reference for non-specialists. With a minimum of mathematics and virtually no theory, it introduces practitioners to easy, inexpensive options for sample collection and preparation, data acquisition, analytic protocols, result interpretation and verification techniques. This step-by-step guide considers the advantages and limitations of different procedures, discusses safety and troubleshooting, and explains support skills like mapping, photography and report writing. It also offers managers, off-site engineers and others using sediments data a quick course in commissioning studies and making the most of the reports. This manual will answer the growing needs of practitioners in the field, either alone or accompanied by Practical Sedimentology, which surveys the science of sedimentology and provides a basic overview of the principles behind the applications.
Introducing GAMER: A fast and accurate method for ray-tracing galaxies using procedural noise
Groeneboom, N. E.; Dahle, H.
2014-03-10
We developed a novel approach for fast and accurate ray-tracing of galaxies using procedural noise fields. Our method allows for efficient and realistic rendering of synthetic galaxy morphologies, where individual components such as the bulge, disk, stars, and dust can be synthesized in different wavelengths. These components follow empirically motivated overall intensity profiles but contain an additional procedural noise component that gives rise to complex natural patterns that mimic interstellar dust and star-forming regions. These patterns produce more realistic-looking galaxy images than using analytical expressions alone. The method is fully parallelized and creates accurate high- and low- resolution images that can be used, for example, in codes simulating strong and weak gravitational lensing. In addition to having a user-friendly graphical user interface, the C++ software package GAMER is easy to implement into an existing code.
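The core idea, an empirically motivated intensity profile modulated by a procedural noise field, can be sketched with simple value noise. This is a toy stand-in: GAMER's actual noise model, profiles and parameters are not reproduced here.

```python
import math

def hash_noise(ix, iy, seed=1234):
    """Deterministic pseudo-random value in [0, 1) for a lattice point."""
    n = ix * 374761393 + iy * 668265263 + seed * 2147483647
    n = (n ^ (n >> 13)) * 1274126177
    return ((n ^ (n >> 16)) & 0xFFFFFFFF) / 4294967296.0

def value_noise(x, y):
    """Bilinearly interpolated lattice noise with smoothstep weights --
    a minimal stand-in for the procedural fields that mimic dust and
    star-forming regions."""
    ix, iy = math.floor(x), math.floor(y)
    fx, fy = x - ix, y - iy
    sx, sy = fx * fx * (3 - 2 * fx), fy * fy * (3 - 2 * fy)
    n00, n10 = hash_noise(ix, iy), hash_noise(ix + 1, iy)
    n01, n11 = hash_noise(ix, iy + 1), hash_noise(ix + 1, iy + 1)
    top = n00 + sx * (n10 - n00)
    bot = n01 + sx * (n11 - n01)
    return top + sy * (bot - top)

def disk_intensity(x, y, scale_length=1.0, noise_amp=0.5):
    """Smooth exponential disk profile modulated by procedural noise."""
    r = math.hypot(x, y)
    smooth = math.exp(-r / scale_length)
    return smooth * (1.0 + noise_amp * (value_noise(4 * x, 4 * y) - 0.5))
```

Because the noise is a pure function of position, the same structure appears at any sampling resolution, which is what makes consistent high- and low-resolution renderings possible.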
Flanagan, R J; Widdop, B; Ramsey, J D; Loveland, M
1988-09-01
1. Major advances in analytical toxicology followed the introduction of spectroscopic and chromatographic techniques in the 1940s and early 1950s and thin layer chromatography remains important together with some spectrophotometric and other tests. However, gas- and high performance-liquid chromatography together with a variety of immunoassay techniques are now widely used. 2. The scope and complexity of forensic and clinical toxicology continues to increase, although the compounds for which emergency analyses are needed to guide therapy are few. Exclusion of the presence of hypnotic drugs can be important in suspected 'brain death' cases. 3. Screening for drugs of abuse has assumed greater importance not only for the management of the habituated patient, but also in 'pre-employment' and 'employment' screening. The detection of illicit drug administration in sport is also an area of increasing importance. 4. In industrial toxicology, the range of compounds for which blood or urine measurements (so called 'biological monitoring') can indicate the degree of exposure is increasing. The monitoring of environmental contaminants (lead, chlorinated pesticides) in biological samples has also proved valuable. 5. In the near future a consensus as to the units of measurement to be used is urgently required and more emphasis will be placed on interpretation, especially as regards possible behavioural effects of drugs or other poisons. Despite many advances in analytical techniques there remains a need for reliable, simple tests to detect poisons for use in smaller hospital and other laboratories.
Partially Coherent Scattering in Stellar Chromospheres. Part 4; Analytic Wing Approximations
NASA Technical Reports Server (NTRS)
Gayley, K. G.
1993-01-01
Simple analytic expressions are derived to understand resonance-line wings in stellar chromospheres and similar astrophysical plasmas. The results are approximate, but compare well with accurate numerical simulations. The redistribution is modeled using an extension of the partially coherent scattering approximation (PCS) which we term the comoving-frame partially coherent scattering approximation (CPCS). The distinction is made here because Doppler diffusion is included in the coherent/noncoherent decomposition, in a form slightly improved from the earlier papers in this series.
Accurate Mass Measurements in Proteomics
Liu, Tao; Belov, Mikhail E.; Jaitly, Navdeep; Qian, Weijun; Smith, Richard D.
2007-08-01
To understand different aspects of life at the molecular level, one would think that ideally all components of specific processes should be individually isolated and studied in detail. Reductionist approaches, i.e., studying one biological event on a one-gene or one-protein-at-a-time basis, have indeed made significant contributions to our understanding of many basic facts of biology. However, these individual “building blocks” cannot be visualized as a comprehensive “model” of the life of cells, tissues, and organisms without using more integrative approaches.1,2 For example, the emerging field of “systems biology” aims to quantify all of the components of a biological system to assess their interactions and to integrate diverse types of information obtainable from this system into models that could explain and predict behaviors.3-6 Recent breakthroughs in genomics, proteomics, and bioinformatics are making this daunting task a reality.7-14 Proteomics, the systematic study of the entire complement of proteins expressed by an organism, tissue, or cell under a specific set of conditions at a specific time (i.e., the proteome), has become an essential enabling component of systems biology. While the genome of an organism may be considered static over short timescales, the expression of that genome as the actual gene products (i.e., mRNAs and proteins) is a dynamic event that is constantly changing due to the influence of environmental and physiological conditions. Exclusive monitoring of the transcriptomes can be carried out using high-throughput cDNA microarray analysis,15-17 however the measured mRNA levels do not necessarily correlate strongly with the corresponding abundances of proteins.18-20 The actual amount of functional proteins can be altered significantly and become independent of mRNA levels as a result of post-translational modifications (PTMs),21 alternative splicing,22,23 and protein turnover.24,25 Moreover, the functions of expressed
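Mass measurement accuracy in proteomics is conventionally quoted in parts per million (ppm), since low-ppm accuracy is what narrows the list of candidate peptide identifications. The helper below is a generic one-liner for that conversion, not code from the review.

```python
def mass_error_ppm(measured_mz, theoretical_mz):
    """Mass measurement error in parts per million (ppm):
    a 1 mDa error at m/z 1000 corresponds to 1 ppm."""
    return (measured_mz - theoretical_mz) / theoretical_mz * 1e6
```

For example, distinguishing two peptides whose monoisotopic masses differ by 5 mDa at m/z 1000 requires an instrument stable to well under 5 ppm.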
Analytic integrable systems: Analytic normalization and embedding flows
NASA Astrophysics Data System (ADS)
Zhang, Xiang
In this paper we mainly study the existence of analytic normalizations and the normal forms of finite-dimensional complete analytically integrable dynamical systems. More precisely, we prove that any complete analytically integrable diffeomorphism F(x)=Bx+f(x) in (C^n,0), with B having no eigenvalue of modulus 1 and f(x)=O(|x|^2), is locally analytically conjugate to its normal form. We also prove that any complete analytically integrable differential system x˙=Ax+f(x) in (C^n,0), with A having nonzero eigenvalues and f(x)=O(|x|^2), is locally analytically conjugate to its normal form. Furthermore, we prove that any complete analytically integrable diffeomorphism defined on an analytic manifold can be embedded in a complete analytically integrable flow. We note that parts of our results improve those of Moser in [J. Moser, The analytic invariants of an area-preserving mapping near a hyperbolic fixed point, Comm. Pure Appl. Math. 9 (1956) 673-692] and of Poincaré in [H. Poincaré, Sur l'intégration des équations différentielles du premier ordre et du premier degré, II, Rend. Circ. Mat. Palermo 11 (1897) 193-239]. These results also improve those in [Xiang Zhang, Analytic normalization of analytic integrable systems and the embedding flows, J. Differential Equations 244 (2008) 1080-1092] in the sense that the linear part of the systems can be nonhyperbolic, and the one in [N.T. Zung, Convergence versus integrability in Poincaré-Dulac normal form, Math. Res. Lett. 9 (2002) 217-228] in that our paper presents the concrete expression of the normal form in a restricted case.
Integrated Risk Information System (IRIS)
Express; CASRN 101200-48-0. Human health assessment information on a chemical substance is included in the IRIS database only after a comprehensive review of toxicity data, as outlined in the IRIS assessment development process. Sections I (Health Hazard Assessments for Noncarcinogenic Effect
An Analytical State Transition Matrix for Orbits Perturbed by an Oblate Spheroid
NASA Technical Reports Server (NTRS)
Mueller, A. C.
1977-01-01
An analytical state transition matrix and its inverse, which include the short period and secular effects of the second zonal harmonic, were developed from the nonsingular PS satellite theory. The fact that the independent variable in the PS theory is not time is in no respect disadvantageous, since any explicit analytical solution must be expressed in the true or eccentric anomaly. This is shown to be the case for the simple conic matrix. The PS theory allows for a concise, accurate, and algorithmically simple state transition matrix. The improvement over the conic matrix ranges from 2 to 4 digits accuracy.
Gravitational lensing from compact bodies: Analytical results for strong and weak deflection limits
Amore, Paolo; Cervantes, Mayra; De Pace, Arturo; Fernandez, Francisco M.
2007-04-15
We develop a nonperturbative method that yields analytical expressions for the deflection angle of light in a general static and spherically symmetric metric. The method works by introducing into the problem an artificial parameter, called δ, and by performing an expansion in this parameter to a given order. The results obtained are analytical and nonperturbative because they do not correspond to a polynomial expression in the physical parameters. Already to first order in δ the analytical formulas obtained using our method provide at the same time accurate approximations both at large distances (weak deflection limit) and at distances close to the photon sphere (strong deflection limit). We have applied our technique to different metrics and verified that the error is at most 0.5% for all regimes. We have also proposed an alternative approach which provides simpler formulas, although with larger errors.
NNLOPS accurate associated HW production
NASA Astrophysics Data System (ADS)
Astill, William; Bizon, Wojciech; Re, Emanuele; Zanderighi, Giulia
2016-06-01
We present a next-to-next-to-leading order accurate description of associated HW production consistently matched to a parton shower. The method is based on reweighting events obtained with the HW plus one jet NLO accurate calculation implemented in POWHEG, extended with the MiNLO procedure, to reproduce NNLO accurate Born distributions. Since the Born kinematics is more complex than the cases treated before, we use a parametrization of the Collins-Soper angles to reduce the number of variables required for the reweighting. We present phenomenological results at 13 TeV, with cuts suggested by the Higgs Cross section Working Group.
NASA Astrophysics Data System (ADS)
Olivares-Rivas, Wilmer; Colmenares, Pedro J.
2016-09-01
The non-static generalized Langevin equation and its corresponding Fokker-Planck equation for the position of a viscous fluid particle were solved in closed form for a time dependent external force. Its solution for a constant external force was obtained analytically. The non-Markovian stochastic differential equation, associated to the dynamics of the position under a colored noise, was then applied to the description of the dynamics and persistence time of particles constrained within absorbing barriers. Comparisons with molecular dynamics were very satisfactory.
Precise and Accurate Density Determination of Explosives Using Hydrostatic Weighing
B. Olinger
2005-07-01
Precise and accurate density determination requires weight measurements in air and water using sufficiently precise analytical balances, knowledge of the densities of air and water, knowledge of thermal expansions, availability of a density standard, and a method to estimate the time to achieve thermal equilibrium with water. Density distributions in pressed explosives are inferred from the densities of elements from a central slice.
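The buoyancy-corrected hydrostatic weighing formula implied by the procedure can be sketched as follows. The default fluid densities are nominal room-temperature values we assume for illustration, not figures from the report.

```python
def hydrostatic_density(weight_air, weight_water,
                        rho_water=0.99820, rho_air=0.0012):
    """Sample density (g/cm^3) from weighings in air and in water,
    with a first-order air-buoyancy correction:
        rho = W_air * (rho_water - rho_air) / (W_air - W_water) + rho_air
    Defaults are nominal ~20 C densities of water and air (assumed)."""
    return (weight_air * (rho_water - rho_air)
            / (weight_air - weight_water) + rho_air)
```

Because the result depends on the difference of two weighings, balance precision and thermal equilibrium with the water bath (hence the time-estimation step in the abstract) dominate the final uncertainty.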
Finding accurate frontiers: A knowledge-intensive approach to relational learning
NASA Technical Reports Server (NTRS)
Pazzani, Michael; Brunk, Clifford
1994-01-01
An approach to analytic learning is described that searches for accurate entailments of a Horn Clause domain theory. A hill-climbing search, guided by an information based evaluation function, is performed by applying a set of operators that derive frontiers from domain theories. The analytic learning system is one component of a multi-strategy relational learning system. We compare the accuracy of concepts learned with this analytic strategy to concepts learned with an analytic strategy that operationalizes the domain theory.
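The hill-climbing search guided by an evaluation function can be sketched generically. This is a plain greedy search over candidate states; the paper's frontier-deriving operators and information-based metric are not reproduced, so `neighbors` and `score` here are hypothetical stand-ins.

```python
def hill_climb(initial, neighbors, score):
    """Greedy hill-climbing: repeatedly move to the best-scoring
    neighbor until no neighbor improves the current candidate.
    'neighbors' plays the role of the operators deriving new frontiers;
    'score' plays the role of the information-based evaluation function."""
    current = initial
    while True:
        candidates = neighbors(current)
        if not candidates:
            return current
        best = max(candidates, key=score)
        if score(best) <= score(current):
            return current  # local optimum reached
        current = best

# Toy run: climb toward the maximum of -(x - 3)^2 over the integers.
best_x = hill_climb(0, lambda x: [x - 1, x + 1], lambda x: -(x - 3) ** 2)
```

As with any hill-climbing scheme, the search stops at local optima, which is why the quality of the evaluation function matters so much for finding accurate frontiers.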
Analytical Chemistry of Nitric Oxide
Hetrick, Evan M.
2013-01-01
Nitric oxide (NO) is the focus of intense research, owing primarily to its wide-ranging biological and physiological actions. A requirement for understanding its origin, activity, and regulation is the need for accurate and precise measurement techniques. Unfortunately, analytical assays for monitoring NO are challenged by NO’s unique chemical and physical properties, including its reactivity, rapid diffusion, and short half-life. Moreover, NO concentrations may span pM to µM in physiological milieu, requiring techniques with wide dynamic response ranges. Despite such challenges, many analytical techniques have emerged for the detection of NO. Herein, we review the most common spectroscopic and electrochemical methods, with special focus on the fundamentals behind each technique and approaches that have been coupled with modern analytical measurement tools or exploited to create novel NO sensors. PMID:20636069
Visual analytics of brain networks.
Li, Kaiming; Guo, Lei; Faraco, Carlos; Zhu, Dajiang; Chen, Hanbo; Yuan, Yixuan; Lv, Jinglei; Deng, Fan; Jiang, Xi; Zhang, Tuo; Hu, Xintao; Zhang, Degang; Miller, L Stephen; Liu, Tianming
2012-05-15
Identification of regions of interest (ROIs) is a fundamental issue in brain network construction and analysis. Recent studies demonstrate that multimodal neuroimaging approaches and joint analysis strategies are crucial for accurate, reliable and individualized identification of brain ROIs. In this paper, we present a novel approach of visual analytics and its open-source software for ROI definition and brain network construction. By combining neuroscience knowledge and computational intelligence capabilities, visual analytics can generate accurate, reliable and individualized ROIs for brain networks via joint modeling of multimodal neuroimaging data and an intuitive and real-time visual analytics interface. Furthermore, it can be used as a functional ROI optimization and prediction solution when fMRI data is unavailable or inadequate. We have applied this approach to an operation span working memory fMRI/DTI dataset, a schizophrenia DTI/resting state fMRI (R-fMRI) dataset, and a mild cognitive impairment DTI/R-fMRI dataset, in order to demonstrate the effectiveness of visual analytics. Our experimental results are encouraging.
NASA Astrophysics Data System (ADS)
Bozkaya, Uǧur
2014-09-01
General analytic gradient expressions (with the frozen-core approximation) are presented for density-fitted post-HF methods. An efficient implementation of frozen-core analytic gradients for second-order Møller-Plesset perturbation theory (MP2) with the density-fitting (DF) approximation (applied to both the reference and correlation energies), denoted DF-MP2, is reported. The DF-MP2 method is applied to a set of alkanes, conjugated dienes, and noncovalent interaction complexes to compare the computational cost of single-point analytic gradients with that of MP2 with the resolution-of-the-identity approach (RI-MP2) [F. Weigend and M. Häser, Theor. Chem. Acc. 97, 331 (1997); R. A. Distasio, R. P. Steele, Y. M. Rhee, Y. Shao, and M. Head-Gordon, J. Comput. Chem. 28, 839 (2007)]. In the RI-MP2 method, the DF approach is used only for the correlation energy. Our results demonstrate that the DF-MP2 method substantially accelerates RI-MP2 analytic gradient computations owing to the reduced input/output (I/O) time. Because in the DF-MP2 method the DF approach is used for both the reference and correlation energies, storage of 4-index electron repulsion integrals (ERIs) is avoided; 3-index ERI tensors are employed instead. Further, as in the case of the integrals, our gradient equation completely avoids construction or storage of the 4-index two-particle density matrix (TPDM); instead we use 2- and 3-index TPDMs. Hence, the I/O bottleneck of a gradient computation is largely overcome. The cost of the generalized Fock matrix (GFM), the TPDM, the solution of the Z-vector equations, the back-transformation of the TPDM, and the integral derivatives is therefore substantially reduced when the DF approach is used for the entire energy expression. Further application results show that the DF approach introduces negligible errors for closed-shell reaction energies and equilibrium bond lengths.
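The storage saving from replacing 4-index ERIs with 3-index DF tensors is easy to estimate. A back-of-the-envelope sketch; the basis and auxiliary-basis sizes below are invented for illustration, not taken from the paper:

```python
def eri_storage_bytes(n_basis: int) -> int:
    """Memory for a dense 4-index ERI tensor (ij|kl) in float64."""
    return n_basis ** 4 * 8

def df_storage_bytes(n_basis: int, n_aux: int) -> int:
    """Memory for a dense 3-index DF tensor (Q|ij) in float64."""
    return n_aux * n_basis ** 2 * 8

# Hypothetical sizes: 500 basis functions, auxiliary basis ~3x larger.
n, naux = 500, 1500
full = eri_storage_bytes(n)        # 500 GB: far beyond RAM, forces disk I/O
df = df_storage_bytes(n, naux)     # 3 GB: fits in memory
print(f"4-index: {full / 1e9:.0f} GB, 3-index: {df / 1e9:.0f} GB")
```

This N^4 vs. N_aux·N^2 gap is what removes the I/O bottleneck the abstract refers to.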
Toward more accurate loss tangent measurements in reentrant cavities
Moyer, R. D.
1980-05-01
Karpova described an absolute method for measuring the dielectric properties of a solid in a coaxial reentrant cavity. The cavity resonance equation yields very accurate results for dielectric constants; however, only approximate expressions were presented for the loss tangent. This report presents more exact expressions for that quantity and summarizes some experimental results.
Liang, Linlin; Lan, Feifei; Li, Li; Ge, Shenguang; Yu, Jinghua; Ren, Na; Liu, Haiyun; Yan, Mei
2016-12-15
A novel colorimetric/fluorescence bimodal lab-on-paper cyto-device was fabricated based on concanavalin A (Con A)-integrated multibranched hybridization chain reaction (mHCR). The mHCR product was modified with PtCu nanochains (colorimetric signal label) and graphene quantum dots (fluorescence signal label) for in situ, dynamic evaluation of cell-surface N-glycan expression. In this strategy, preliminary detection is carried out colorimetrically; if needed, the fluorescence method is then applied for a precise determination. Au-Ag-paper devices increased the surface area and active sites for immobilizing a larger amount of aptamers, and thus specifically and efficiently captured more cancer cells. Moreover, they effectively reduced the paper's background fluorescence. Owing to the specific recognition of mannose by Con A and the effective signal amplification of mHCR, the proposed strategy exhibited high sensitivity for the cytosensing of MCF-7 cells, with linear ranges of 100 to 1.0×10⁷ and 80 to 5.0×10⁷ cells mL⁻¹ and detection limits of 33 and 26 cells mL⁻¹ for the colorimetric and fluorescence modes, respectively. More importantly, this strategy was successfully applied to dynamically monitor multi-glycan expression on the surface of living cells under external stimuli of inhibitors, as well as for N-glycan expression inhibitor screening. These results imply that this biosensor has potential for studying complex native glycan-related biological processes and elucidating N-glycan-related diseases.
Palm: Easing the Burden of Analytical Performance Modeling
Tallent, Nathan R.; Hoisie, Adolfy
2014-06-01
Analytical (predictive) application performance models are critical for diagnosing performance-limiting resources, optimizing systems, and designing machines. Creating models, however, is difficult because they must be both accurate and concise. To ease the burden of performance modeling, we developed Palm, a modeling tool that combines top-down (human-provided) semantic insight with bottom-up static and dynamic analysis. To express insight, Palm defines a source code modeling annotation language. By coordinating models and source code, Palm's models are 'first-class' and reproducible. Unlike prior work, Palm formally links models, functions, and measurements. As a result, Palm (a) uses functions to either abstract or express complexity; (b) generates hierarchical models (representing an application's static and dynamic structure); and (c) automatically incorporates measurements to focus attention, represent constant behavior, and validate models. We discuss generating models for three different applications.
Two-level laser: Analytical results and the laser transition
Gartner, Paul
2011-11-15
The problem of the two-level laser is studied analytically. The steady-state solution is expressed as a continued fraction and allows for accurate approximation by rational functions. Moreover, we show that the abrupt change observed in the pump dependence of the steady-state population is directly connected to the transition to the lasing regime. The condition for a sharp transition to Poissonian statistics is expressed as a scaling limit of vanishing cavity loss and light-matter coupling, κ → 0, g → 0, such that g²/κ stays finite and g²/κ > 2γ, where γ is the rate of nonradiative losses. The same scaling procedure is also shown to describe a similar change to the Poisson distribution in the Scully-Lamb laser model, suggesting that the low-κ, low-g asymptotics is of more general significance for the laser transition.
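The scaling limit can be illustrated numerically. A minimal sketch, assuming only the threshold condition quoted above (g²/κ > 2γ); all numerical values are invented:

```python
def lasing(g: float, kappa: float, gamma: float) -> bool:
    """Sharp-transition criterion from the abstract: g^2/kappa > 2*gamma."""
    return g ** 2 / kappa > 2 * gamma

gamma = 1.0
ratio = 3.0  # g^2/kappa held fixed at 3 > 2*gamma along the limit

# Shrink kappa and g together so that g^2/kappa stays constant:
# every point on the scaling trajectory remains above threshold.
for kappa in (1.0, 0.1, 0.01, 0.001):
    g = (ratio * kappa) ** 0.5
    assert abs(g ** 2 / kappa - ratio) < 1e-12
    assert lasing(g, kappa, gamma)
```

At fixed κ, lowering g below sqrt(2γκ) drops the system below threshold, which is the non-lasing side of the transition.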
Grzegorczyk, Tomasz M.; Kong, Jin Au
2007-03-15
A closed-form expression of the force on an infinite lossless dielectric cylinder illuminated by a TM incidence (electric field parallel to the cylinder's axis) is derived. The formula, expressed as a simple sum, is straightforward to compute and is shown to be faster converging than the direct application of the Maxwell stress tensor and the expansion of the fields in the cylindrical coordinate system. A generalization of the formula to multiple incidences is provided and is illustrated by studying the force due to a Gaussian beam on cylinders of various parameters. We show in this way that the effects of the gradient of the intensity profile on the transverse and longitudinal confinements are decoupled, due to the permittivity contrast and to the size of the particle. Since the formula we derive is exact and is therefore not limited to the Rayleigh or ray optics regime, we expect it to be important for the modeling of optical forces on elongated particles of arbitrary sizes.
NASA Astrophysics Data System (ADS)
Kurylyk, B. L.; MacQuarrie, K. T. B.; Caissie, D.; McKenzie, J. M.
2015-05-01
Climate change is expected to increase stream temperatures and the projected warming may alter the spatial extent of habitat for cold-water fish and other aquatic taxa. Recent studies have proposed that stream thermal sensitivities, derived from short-term air temperature variations, can be employed to infer future stream warming due to long-term climate change. However, this approach does not consider the potential for streambed heat fluxes to increase due to gradual warming of the shallow subsurface. The temperature of shallow groundwater is particularly important for the thermal regimes of groundwater-dominated streams and rivers. Also, recent studies have investigated how land surface perturbations, such as wildfires or timber harvesting, can influence stream temperatures by changing stream surface heat fluxes, but these studies have typically not considered how these surface disturbances can also alter shallow groundwater temperatures and streambed heat fluxes. In this study, several analytical solutions to the one-dimensional unsteady advection-diffusion equation for subsurface heat transport are employed to estimate the timing and magnitude of groundwater temperature changes due to seasonal and long-term variability in land surface temperatures. Groundwater thermal sensitivity formulae are proposed that accommodate different surface warming scenarios. The thermal sensitivity formulae suggest that shallow groundwater will warm in response to climate change and other surface perturbations, but the timing and magnitude of the subsurface warming depends on the rate of surface warming, subsurface thermal properties, bulk aquifer depth, and groundwater velocity. The results also emphasize the difference between the thermal sensitivity of shallow groundwater to short-term (e.g., seasonal) and long-term (e.g., multi-decadal) land surface-temperature variability, and thus demonstrate the limitations of using short-term air and water temperature records to project
Profitable capitation requires accurate costing.
West, D A; Hicks, L L; Balas, E A; West, T D
1996-01-01
In the name of costing accuracy, nurses are asked to track inventory use on a per-treatment basis, while more significant costs, such as general overhead and nursing salaries, are usually allocated to patients or treatments on an average-cost basis. Accurate treatment costing and financial viability require analysis of all resources actually consumed in treatment delivery, including nursing services and inventory. More precise costing information enables more profitable decisions, as is demonstrated by comparing the ratio-of-cost-to-treatment method (aggregate costing) with alternative activity-based costing (ABC) methods. Nurses must participate in this costing process to ensure that capitation bids are based upon accurate costs rather than simple averages. PMID:8788799
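The difference between aggregate and activity-based costing can be shown with a toy calculation; all figures below are invented for illustration:

```python
# Two treatments sharing $1000 of nursing/overhead cost (invented data).
treatments = {
    "A": {"nurse_hours": 1.0, "supplies": 20.0},
    "B": {"nurse_hours": 4.0, "supplies": 80.0},
}
shared_cost = 1000.0

# Aggregate costing: split the shared cost evenly per treatment.
aggregate = {t: shared_cost / len(treatments) + d["supplies"]
             for t, d in treatments.items()}

# Activity-based costing: allocate the shared cost by nurse hours consumed.
total_hours = sum(d["nurse_hours"] for d in treatments.values())
abc = {t: shared_cost * d["nurse_hours"] / total_hours + d["supplies"]
       for t, d in treatments.items()}

print(aggregate)  # {'A': 520.0, 'B': 580.0}
print(abc)        # {'A': 220.0, 'B': 880.0}
```

Both methods recover the same total, but averaging makes treatment B look far cheaper than the resources it actually consumes, which is exactly the capitation-bidding risk the article warns about.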
NASA Astrophysics Data System (ADS)
Guseinov, I. I.; Mamedov, B. A.; Ekenoğlu, A. S.
2006-08-01
A detailed study is undertaken, using various techniques, of the derivation of analytical formulae for Franck-Condon overlap integrals and matrix elements of various functions of power (x), exponential (exp(-2cx)) and Gaussian (exp(-cx²)) form over displaced harmonic oscillator wave functions with arbitrary frequencies. The results, suggested by previous experience with various algorithms, are presented in mathematically compact form and constitute a generalization. The relationships obtained are valid for arbitrary values of the parameters, and the computed results are in good agreement with the literature. The numerical results clearly illustrate a further reduction in calculation times.
Program summary
Program name: FRANCK
Catalogue identifier: ADXX_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADXX_v1_0
Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland
Programming language: Mathematica 5.0
Computer: Pentium M 1.4 GHz
Operating system: Mathematica 5.0
RAM: 512 MB
No. of lines in distributed program, including test data, etc.: 825
No. of bytes in distributed program, including test data, etc.: 16 344
Distribution format: tar.gz
Nature of problem: The programs calculate the Franck-Condon factors and matrix elements over displaced harmonic oscillator wave functions with arbitrary quantum numbers (n, n′), frequencies (a, a′) and displacement (d) for the various functions of power (x), exponential (exp(-2cx)) and Gaussian (exp(-cx²)) form.
Solution method: The Franck-Condon factors and matrix elements are evaluated using binomial coefficients and basic integrals.
Restrictions: The results obtained by the present programs show great numerical stability for arbitrary quantum numbers (n, n′), frequencies (a, a′) and displacement (d).
Unusual features: None
Running time: As an example, for the value of the Franck-Condon overlap integral I(d; α, α′) = 0.004405001887372332 with n = 3, n′ = 2, a = 4, a′ = 3, d = 2, the compilation time on a Pentium M 1.4 GHz computer is 0.18 s. Execution
An accurate and practical method for inference of weak gravitational lensing from galaxy images
NASA Astrophysics Data System (ADS)
Bernstein, Gary M.; Armstrong, Robert; Krawiec, Christina; March, Marisa C.
2016-07-01
We demonstrate highly accurate recovery of weak gravitational lensing shear using an implementation of the Bayesian Fourier Domain (BFD) method proposed by Bernstein & Armstrong, extended to correct for selection biases. The BFD formalism is rigorously correct for Nyquist-sampled, background-limited, uncrowded images of background galaxies. BFD does not assign shapes to galaxies, instead compressing the pixel data D into a vector of moments M, such that we have an analytic expression for the probability P(M|g) of obtaining the observations with gravitational lensing distortion g along the line of sight. We implement an algorithm for conducting BFD's integrations over the population of unlensed source galaxies which measures ≈10 galaxies s⁻¹ core⁻¹ with good scaling properties. Initial tests of this code on ≈10⁹ simulated lensed galaxy images recover the simulated shear to a fractional accuracy of m = (2.1 ± 0.4) × 10⁻³, substantially more accurate than has been demonstrated previously for any generally applicable method. Deep sky exposures generate a sufficiently accurate approximation to the noiseless, unlensed galaxy population distribution assumed as input to BFD. Potential extensions of the method include simultaneous measurement of magnification and shear; multiple-exposure, multiband observations; and joint inference of photometric redshifts and lensing tomography.
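The multiplicative-bias convention behind the quoted accuracy, g_est = (1 + m)·g_true + c, can be illustrated with a toy least-squares fit. The data are invented and noiseless; this is a sketch of the bias-measurement convention, not of the BFD pipeline itself:

```python
# Simulate shear estimates with a known multiplicative bias m_true.
m_true, c_true = 2.1e-3, 0.0
g_true = [-0.02 + 0.001 * i for i in range(41)]
g_est = [(1 + m_true) * g + c_true for g in g_true]

# Ordinary least-squares slope through (g_true, g_est) recovers 1 + m.
n = len(g_true)
mean_x = sum(g_true) / n
mean_y = sum(g_est) / n
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(g_true, g_est))
         / sum((x - mean_x) ** 2 for x in g_true))
m_hat = slope - 1.0
assert abs(m_hat - m_true) < 1e-9
```

In practice m and c are fitted over large simulation suites with noise, which is where the quoted uncertainty of ±0.4 × 10⁻³ comes from.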
The analytical validation of the Oncotype DX Recurrence Score assay
Baehner, Frederick L
2016-01-01
In vitro diagnostic multivariate index assays are highly complex molecular assays that can provide clinically actionable information regarding the underlying tumour biology and facilitate personalised treatment. These assays are only useful in clinical practice if all of the following are established: analytical validation (i.e., how accurately/reliably the assay measures the molecular characteristics), clinical validation (i.e., how consistently/accurately the test detects/predicts the outcomes of interest), and clinical utility (i.e., how likely the test is to significantly improve patient outcomes). In considering the use of these assays, clinicians often focus primarily on the clinical validity/utility; however, the analytical validity of an assay (e.g., its accuracy, reproducibility, and standardisation) should also be evaluated and carefully considered. This review focuses on the rigorous analytical validation and performance of the Oncotype DX® Breast Cancer Assay, which is performed at the Central Clinical Reference Laboratory of Genomic Health, Inc. The assay process includes tumour tissue enrichment (if needed), RNA extraction, gene expression quantitation (using a gene panel consisting of 16 cancer genes plus 5 reference genes and quantitative real-time RT-PCR), and an automated computer algorithm to produce a Recurrence Score® result (scale: 0–100). This review presents evidence showing that the Recurrence Score result reported for each patient falls within a tight clinically relevant confidence interval. Specifically, the review discusses how the development of the assay was designed to optimise assay performance, presents data supporting its analytical validity, and describes the quality control and assurance programmes that ensure optimal test performance over time. PMID:27729940
Analytical calculation of spectral phase of grism pairs by the geometrical ray tracing method
NASA Astrophysics Data System (ADS)
Rahimi, L.; Askari, A. A.; Saghafifar, H.
2016-07-01
The optimal operation of a grism pair is practically achievable only when an analytical expression for its spectral phase is in hand. In this paper, we employ the accurate geometrical ray tracing method to calculate the analytical phase shift of a grism pair in transmission and reflection configurations. As the results show, for a great variety of complicated configurations, the spectral phase of a grism pair takes the same form as that of a prism pair. The only exception is when the light enters and exits through different facets of a reflection grism. The analytical result has been used to calculate the second-order dispersion of several examples of grism pairs in various possible configurations. All results are in complete agreement with those from the ray tracing method. The result of this work can be very helpful in the optimal design and application of grism pairs in various configurations.
Hadjitheodorou, Amalia; Kalosakas, George
2014-09-01
We investigate, both analytically and numerically, diffusion-controlled drug release from composite spherical formulations consisting of an inner core and an outer shell of different drug diffusion coefficients. Theoretically derived analytical results are based on the exact solution of Fick's second law of diffusion for a composite sphere, while numerical data are obtained using Monte Carlo simulations. In both cases, and for the range of matrix parameter values considered in this work, fractional drug release profiles are described accurately by a stretched exponential function. The release kinetics obtained is quantified through a detailed investigation of the dependence of the two stretched exponential release parameters on the device characteristics, namely the geometrical radii of the inner core and outer shell and the corresponding drug diffusion coefficients. Similar behaviors are revealed by both the theoretical results and the numerical simulations, and approximate analytical expressions are presented for the dependencies. PMID:25063169
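A stretched-exponential release profile of the kind described can be sketched directly. The parameter values below are invented, since the real ones depend on the core/shell radii and the two diffusion coefficients:

```python
import math

def fractional_release(t: float, tau: float, beta: float) -> float:
    """Stretched-exponential release: M_t / M_inf = 1 - exp(-(t/tau)^beta)."""
    return 1.0 - math.exp(-((t / tau) ** beta))

# Hypothetical release parameters for illustration.
tau, beta = 10.0, 0.7
profile = [fractional_release(t, tau, beta) for t in range(0, 101, 10)]

assert profile == sorted(profile)                 # release is monotone
assert profile[0] == 0.0 and profile[-1] < 1.0    # bounded by total load
```

The fitted (tau, beta) pair is what the authors relate to the device geometry and diffusion coefficients.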
Analytical model of diffuse reflectance spectrum of skin tissue
Lisenko, S A; Kugeiko, M M; Firago, V A; Sobchuk, A N
2014-01-31
We have derived simple analytical expressions that enable highly accurate calculation of diffusely reflected light signals of skin in the spectral range from 450 to 800 nm at a distance from the region of delivery of exciting radiation. The expressions, taking into account the dependence of the detected signals on the refractive index, transport scattering coefficient, absorption coefficient and anisotropy factor of the medium, have been obtained in the approximation of a two-layer medium model (epidermis and dermis) for the same parameters of light scattering but different absorption coefficients of layers. Numerical experiments on the retrieval of the skin biophysical parameters from the diffuse reflectance spectra simulated by the Monte Carlo method show that commercially available fibre-optic spectrophotometers with a fixed distance between the radiation source and detector can reliably determine the concentration of bilirubin, oxy- and deoxyhaemoglobin in the dermis tissues and the tissue structure parameter characterising the size of its effective scatterers. We present the examples of quantitative analysis of the experimental data, confirming the correctness of estimates of biophysical parameters of skin using the obtained analytical expressions. (biophotonics)
Microemulsification: an approach for analytical determinations.
Lima, Renato S; Shiroma, Leandro Y; Teixeira, Alvaro V N C; de Toledo, José R; do Couto, Bruno C; de Carvalho, Rogério M; Carrilho, Emanuel; Kubota, Lauro T; Gobbi, Angelo L
2014-09-16
We address a novel method for analytical determinations that combines simplicity, rapidity, low consumption of chemicals, and portability with high analytical performance taking into account parameters such as precision, linearity, robustness, and accuracy. This approach relies on the effect of the analyte content over the Gibbs free energy of dispersions, affecting the thermodynamic stabilization of emulsions or Winsor systems to form microemulsions (MEs). Such phenomenon was expressed by the minimum volume fraction of amphiphile required to form microemulsion (Φ(ME)), which was the analytical signal of the method. Thus, the measurements can be taken by visually monitoring the transition of the dispersions from cloudy to transparent during the microemulsification, like a titration. It bypasses the employment of electric energy. The performed studies were: phase behavior, droplet dimension by dynamic light scattering, analytical curve, and robustness tests. The reliability of the method was evaluated by determining water in ethanol fuels and monoethylene glycol in complex samples of liquefied natural gas. The dispersions were composed of water-chlorobenzene (water analysis) and water-oleic acid (monoethylene glycol analysis) with ethanol as the hydrotrope phase. The mean hydrodynamic diameter values for the nanostructures in the droplet-based water-chlorobenzene MEs were in the range of 1 to 11 nm. The procedures of microemulsification were conducted by adding ethanol to water-oleic acid (W-O) mixtures with the aid of micropipette and shaking. The Φ(ME) measurements were performed in a thermostatic water bath at 23 °C by direct observation that is based on the visual analyses of the media. The experiments to determine water demonstrated that the analytical performance depends on the composition of ME. It shows flexibility in the developed method. The linear range was fairly broad with limits of linearity up to 70.00% water in ethanol. For monoethylene glycol in
The analytic renormalization group
NASA Astrophysics Data System (ADS)
Ferrari, Frank
2016-08-01
Finite-temperature Euclidean two-point functions in quantum mechanics or quantum field theory are characterized by a discrete set of Fourier coefficients G_k, k ∈ Z, associated with the Matsubara frequencies ν_k = 2πk/β. We show that analyticity implies that the coefficients G_k must satisfy an infinite number of model-independent linear equations that we write down explicitly. In particular, we construct "Analytic Renormalization Group" linear maps A_μ which, for any choice of cut-off μ, allow one to express the low-energy Fourier coefficients for |ν_k| < μ (with the possible exception of the zero mode G_0), together with the real-time correlators and spectral functions, in terms of the high-energy Fourier coefficients for |ν_k| ≥ μ. Using a simple numerical algorithm, we show that the exact universal linear constraints on G_k can be used to systematically improve any noisy approximate data set obtained, for example, from Monte Carlo simulations. Our results are illustrated on several explicit examples.
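The frequency bookkeeping behind the A_μ maps is simple to reproduce. A minimal sketch computing Matsubara frequencies and selecting the low-energy modes |ν_k| < μ; the numerical values are invented:

```python
import math

def matsubara(k: int, beta: float) -> float:
    """Bosonic Matsubara frequency nu_k = 2*pi*k / beta."""
    return 2.0 * math.pi * k / beta

def low_energy_modes(beta: float, mu: float, kmax: int):
    """Indices k with |nu_k| < mu -- the modes an A_mu map reconstructs
    from the high-energy coefficients (truncated at |k| <= kmax)."""
    return [k for k in range(-kmax, kmax + 1)
            if abs(matsubara(k, beta)) < mu]

beta = 2.0                         # so nu_k = pi * k
assert matsubara(1, beta) == math.pi
assert low_energy_modes(beta, mu=4.0, kmax=10) == [-1, 0, 1]
```

The paper's point is that the coefficients on this low-energy set are not independent data: analyticity ties them linearly to the |ν_k| ≥ μ coefficients.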
Accurate documentation and wound measurement.
Hampton, Sylvie
This article, part 4 in a series on wound management, addresses the sometimes routine yet crucial task of documentation. Clear and accurate records of a wound enable its progress to be determined so the appropriate treatment can be applied. Thorough records mean any practitioner picking up a patient's notes will know when the wound was last checked, how it looked and what dressing and/or treatment was applied, ensuring continuity of care. Documenting every assessment also has legal implications, demonstrating due consideration and care of the patient and the rationale for any treatment carried out. Part 5 in the series discusses wound dressing characteristics and selection.
ERIC Educational Resources Information Center
MacNeill, Sheila; Campbell, Lorna M.; Hawksey, Martin
2014-01-01
This article presents an overview of the development and use of analytics in the context of education. Using Buckingham Shum's three levels of analytics, the authors present a critical analysis of current developments in the domain of learning analytics, and contrast the potential value of analytics research and development with real world…
ERIC Educational Resources Information Center
Oblinger, Diana G.
2012-01-01
Talk about analytics seems to be everywhere. Everyone is talking about analytics. Yet even with all the talk, many in higher education have questions about--and objections to--using analytics in colleges and universities. In this article, the author explores the use of analytics in, and all around, higher education. (Contains 1 note.)
Wideband analytical equivalent circuit for one-dimensional periodic stacked arrays.
Molero, Carlos; Rodríguez-Berral, Raúl; Mesa, Francisco; Medina, Francisco; Yakovlev, Alexander B
2016-01-01
A wideband equivalent circuit is proposed for the accurate analysis of scattering from a set of stacked slit gratings illuminated by a plane wave with transverse magnetic or electric polarization that impinges normally or obliquely along one of the principal planes of the structure. The slit gratings are printed on dielectric slabs of arbitrary thickness, including the case of closely spaced gratings that interact by higher-order modes. A Π-circuit topology is obtained for a pair of coupled arrays, with fully analytical expressions for all the circuit elements. This equivalent Π circuit is employed as the basis to derive the equivalent circuit of finite stacks with any given number of gratings. Analytical expressions for the Brillouin diagram and the Bloch impedance are also obtained for infinite periodic stacks. PMID:26871189
Accurate calculation of diffraction-limited encircled and ensquared energy.
Andersen, Torben B
2015-09-01
Mathematical properties of the encircled and ensquared energy functions for the diffraction-limited point-spread function (PSF) are presented. These include power series and a set of linear differential equations that facilitate the accurate calculation of these functions. Asymptotic expressions are derived that provide very accurate estimates for the relative amount of energy in the diffraction PSF that falls outside a square or rectangular large detector. Tables with accurate values of the encircled and ensquared energy functions are also presented. PMID:26368873
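For the circular-aperture case, the classical closed form for encircled energy (Rayleigh's result, not the paper's specific series or differential equations) can be checked numerically. The sketch below evaluates E(x) = 1 − J0(x)² − J1(x)² using a stdlib-only Bessel evaluation, with x the dimensionless radial coordinate whose first dark ring sits near x ≈ 3.8317.

```python
import math

def bessel_j(n, x, steps=2000):
    """Bessel J_n(x) via the integral representation
    J_n(x) = (1/pi) * integral_0^pi cos(n*t - x*sin(t)) dt,
    evaluated with the trapezoidal rule."""
    h = math.pi / steps
    total = 0.0
    for i in range(steps + 1):
        t = i * h
        w = 0.5 if i in (0, steps) else 1.0
        total += w * math.cos(n * t - x * math.sin(t))
    return total * h / math.pi

def encircled_energy(x):
    """Fraction of the Airy-pattern energy inside dimensionless
    radius x: E(x) = 1 - J0(x)^2 - J1(x)^2."""
    return 1.0 - bessel_j(0, x)**2 - bessel_j(1, x)**2

# Roughly 83.8% of the energy lies within the first dark ring.
e_first_ring = encircled_energy(3.8317)
```

The physical radius corresponding to x depends on wavelength and f-number; that mapping is left out here.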
Technology Transfer Automated Retrieval System (TEKTRAN)
Analytical methods for the determination of mycotoxins in foods are commonly based on chromatographic techniques (GC, HPLC or LC-MS). Although these methods permit a sensitive and accurate determination of the analyte, they require skilled personnel and are time-consuming, expensive, and unsuitable ...
Huckans, Marilyn; Fuller, Bret E; Olavarria, Hannah; Sasaki, Anna W; Chang, Michael; Flora, Kenneth D; Kolessar, Michael; Kriz, Daniel; Anderson, Jeanne R; Vandenbark, Arthur A; Loftis, Jennifer M
2014-03-01
BackgroundThe purpose of this study was to characterize hepatitis C virus (HCV)-associated differences in the expression of 47 inflammatory factors and to evaluate the potential role of peripheral immune activation in HCV-associated neuropsychiatric symptoms-depression, anxiety, fatigue, and pain. An additional objective was to evaluate the role of immune factor dysregulation in the expression of specific neuropsychiatric symptoms to identify biomarkers that may be relevant to the treatment of these neuropsychiatric symptoms in adults with or without HCV. MethodsBlood samples and neuropsychiatric symptom severity scales were collected from HCV-infected adults (HCV+, n = 39) and demographically similar noninfected controls (HCV-, n = 40). Multi-analyte profile analysis was used to evaluate plasma biomarkers. ResultsCompared with HCV- controls, HCV+ adults reported significantly (P < 0.050) greater depression, anxiety, fatigue, and pain, and they were more likely to present with an increased inflammatory profile as indicated by significantly higher plasma levels of 40% (19/47) of the factors assessed (21%, after correcting for multiple comparisons). Within the HCV+ group, but not within the HCV- group, an increased inflammatory profile (indicated by the number of immune factors > the LDC) significantly correlated with depression, anxiety, and pain. Within the total sample, neuropsychiatric symptom severity was significantly predicted by protein signatures consisting of 4-10 plasma immune factors; protein signatures significantly accounted for 19-40% of the variance in depression, anxiety, fatigue, and pain. ConclusionsOverall, the results demonstrate that altered expression of a network of plasma immune factors contributes to neuropsychiatric symptom severity. These findings offer new biomarkers to potentially facilitate pharmacotherapeutic development and to increase our understanding of the molecular pathways associated with neuropsychiatric symptoms in
SPLASH: Accurate OH maser positions
NASA Astrophysics Data System (ADS)
Walsh, Andrew; Gomez, Jose F.; Jones, Paul; Cunningham, Maria; Green, James; Dawson, Joanne; Ellingsen, Simon; Breen, Shari; Imai, Hiroshi; Lowe, Vicki; Jones, Courtney
2013-10-01
The hydroxyl (OH) 18 cm lines are powerful and versatile probes of diffuse molecular gas that may trace a largely unstudied component of the Galactic ISM. SPLASH (the Southern Parkes Large Area Survey in Hydroxyl) is a large, unbiased and fully-sampled survey of OH emission, absorption and masers in the Galactic Plane that will achieve sensitivities an order of magnitude better than previous work. In this proposal, we request ATCA time to follow up OH maser candidates. This will give us accurate (~10") positions of the masers, which can be compared to other maser positions from HOPS, MMB and MALT-45, and will provide full polarisation measurements towards a sample of OH masers that have not been observed in MAGMO.
Accurate thickness measurement of graphene
NASA Astrophysics Data System (ADS)
Shearer, Cameron J.; Slattery, Ashley D.; Stapleton, Andrew J.; Shapter, Joseph G.; Gibson, Christopher T.
2016-03-01
Graphene has emerged as a material with a vast variety of applications. The electronic, optical and mechanical properties of graphene are strongly influenced by the number of layers present in a sample. As a result, the dimensional characterization of graphene films is crucial, especially with the continued development of new synthesis methods and applications. A number of techniques exist to determine the thickness of graphene films including optical contrast, Raman scattering and scanning probe microscopy techniques. Atomic force microscopy (AFM), in particular, is used extensively since it provides three-dimensional images that enable the measurement of the lateral dimensions of graphene films as well as the thickness, and by extension the number of layers present. However, in the literature AFM has proven to be inaccurate with a wide range of measured values for single layer graphene thickness reported (between 0.4 and 1.7 nm). This discrepancy has been attributed to tip-surface interactions, image feedback settings and surface chemistry. In this work, we use standard and carbon nanotube modified AFM probes and a relatively new AFM imaging mode known as PeakForce tapping mode to establish a protocol that will allow users to accurately determine the thickness of graphene films. In particular, the error in measuring the first layer is reduced from 0.1-1.3 nm to 0.1-0.3 nm. Furthermore, in the process we establish that the graphene-substrate adsorbate layer and imaging force, in particular the pressure the tip exerts on the surface, are crucial components in the accurate measurement of graphene using AFM. These findings can be applied to other 2D materials.
An Analytic Function of Lunar Surface Temperature for Exospheric Modeling
NASA Technical Reports Server (NTRS)
Hurley, Dana M.; Sarantos, Menelaos; Grava, Cesare; Williams, Jean-Pierre; Retherford, Kurt D.; Siegler, Matthew; Greenhagen, Benjamin; Paige, David
2014-01-01
We present an analytic expression to represent the lunar surface temperature as a function of Sun-state latitude and local time. The approximation represents neither topographical features nor compositional effects and therefore does not change as a function of selenographic latitude and longitude. The function reproduces the surface temperature measured by Diviner to within ±10 K at 72% of grid points for dayside solar zenith angles of less than 80°, and at 98% of grid points for nightside solar zenith angles greater than 100°. The analytic function is least accurate at the terminator, where there is a strong gradient in the temperature, and in the polar regions. Topographic features have a larger effect on the actual temperature near the terminator than at other solar zenith angles. For exospheric modeling, the effects of topography on the thermal model can be approximated by using an effective longitude for determining the temperature. This effective longitude is randomly redistributed with a 1σ spread of 4.5°. The resulting "roughened" analytical model represents the statistical dispersion in the Diviner data well and is expected to be generally useful for future models of lunar surface temperature, especially those implemented within exospheric simulations that address questions of volatile transport.
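The paper's fitted function is not reproduced here; as an illustrative stand-in, a radiative-equilibrium dayside profile T ∝ cos¹ᐟ⁴(SZA) with a constant nightside floor captures the qualitative shape such analytic models take. The values T_max = 390 K and T_night = 100 K are assumed round numbers, not Diviner fits.

```python
import math

T_MAX = 390.0    # assumed subsolar temperature, K (illustrative)
T_NIGHT = 100.0  # assumed nightside floor, K (illustrative)

def surface_temperature(sza_deg):
    """Toy lunar surface temperature vs. solar zenith angle (degrees):
    radiative-equilibrium cos^(1/4) dayside, constant nightside floor."""
    if sza_deg >= 90.0:
        return T_NIGHT
    return max(T_NIGHT, T_MAX * math.cos(math.radians(sza_deg)) ** 0.25)

temps = [surface_temperature(s) for s in (0, 45, 80, 120)]
```

An exospheric model would sample this function at particle landing sites, optionally perturbing the effective longitude as the abstract describes.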
Proteomics: analytical tools and techniques.
MacCoss, M J; Yates, J R
2001-09-01
Scientists have long been interested in measuring the effects of different stimuli on protein expression and metabolism. Analytical methods are being developed for the automated separation, identification, and quantitation of all of the proteins within the cell. Soon, investigators will be able to observe the effects of an experiment on every protein (as opposed to a selected few). This review presents a discussion of recent technological advances in proteomics in addition to exploring current methodological limitations.
Analytical transmission electron microscopy in materials science
Fraser, H.L.
1980-01-01
Microcharacterization of materials on a scale of less than 10 nm has been afforded by recent advances in analytical transmission electron microscopy. The factors limiting accurate analysis at the limit of spatial resolution for the case of a combination of scanning transmission electron microscopy and energy dispersive x-ray spectroscopy are examined in this paper.
Dewey, Steven Clifford; Whetstone, Zachary David; Kearfott, Kimberlee Jane
2011-06-01
When characterizing environmental radioactivity, whether in the soil or within concrete building structures undergoing remediation or decommissioning, it is highly desirable to know the radionuclide depth distribution. This is typically modeled using continuous analytical expressions, whose forms are believed to best represent the true source distributions. In situ gamma ray spectroscopic measurements are combined with these models to fully describe the source. Currently, the choice of analytical expressions is based upon prior experimental core sampling results at similar locations, any known site history, or radionuclide transport models. This paper presents a method, employing multiple in situ measurements at a single site, for determining the analytical form that best represents the true depth distribution present. The measurements can be made using a variety of geometries, each of which has a different sensitivity variation with source spatial distribution. Using non-linear least squares numerical optimization methods, the results can be fit to a collection of analytical models and the parameters of each model determined. The analytical expression that results in the fit with the lowest residual is selected as the most accurate representation. A cursory examination is made of the effects of measurement errors on the method. PMID:21482447
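A minimal sketch of the model-selection step described above, assuming two hypothetical candidate depth-distribution forms (exponential and linear) fit to synthetic noise-free data. A real application would fit detector responses from multiple in situ measurement geometries rather than the profile values themselves.

```python
import math

# Synthetic "true" depth profile: exponential with 2.0 cm relaxation length.
depths = [0.5 * i for i in range(12)]                 # cm
activity = [10.0 * math.exp(-z / 2.0) for z in depths]

def fit_exponential(z, y):
    """Fit y = a*exp(-z/L) by linear least squares on log(y)."""
    logs = [math.log(v) for v in y]
    n = len(z)
    zm, lm = sum(z) / n, sum(logs) / n
    slope = sum((zi - zm) * (li - lm) for zi, li in zip(z, logs)) / \
            sum((zi - zm) ** 2 for zi in z)
    a, L = math.exp(lm - slope * zm), -1.0 / slope
    return lambda zi: a * math.exp(-zi / L)

def fit_linear(z, y):
    """Fit y = a + b*z by ordinary least squares."""
    n = len(z)
    zm, ym = sum(z) / n, sum(y) / n
    b = sum((zi - zm) * (yi - ym) for zi, yi in zip(z, y)) / \
        sum((zi - zm) ** 2 for zi in z)
    a = ym - b * zm
    return lambda zi: a + b * zi

def residual(model, z, y):
    """Sum of squared residuals of a fitted model."""
    return sum((model(zi) - yi) ** 2 for zi, yi in zip(z, y))

fits = {"exponential": fit_exponential(depths, activity),
        "linear": fit_linear(depths, activity)}
# Select the analytical form with the lowest residual, as in the method.
best = min(fits, key=lambda name: residual(fits[name], depths, activity))
```

With measurement noise present, the residual comparison would be supplemented by uncertainty estimates, as the abstract's cursory error analysis suggests.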
Multimedia Analysis plus Visual Analytics = Multimedia Analytics
Chinchor, Nancy; Thomas, James J.; Wong, Pak C.; Christel, Michael; Ribarsky, Martin W.
2010-10-01
Multimedia analysis has focused on images, video, and to some extent audio and has made progress in single channels excluding text. Visual analytics has focused on the user interaction with data during the analytic process plus the fundamental mathematics and has continued to treat text as did its precursor, information visualization. The general problem we address in this tutorial is the combining of multimedia analysis and visual analytics to deal with multimedia information gathered from different sources, with different goals or objectives, and containing all media types and combinations in common usage.
Analytical Challenges in Biotechnology.
ERIC Educational Resources Information Center
Glajch, Joseph L.
1986-01-01
Highlights five major analytical areas (electrophoresis, immunoassay, chromatographic separations, protein and DNA sequencing, and molecular structures determination) and discusses how analytical chemistry could further improve these techniques and thereby have a major impact on biotechnology. (JN)
Analyticity without Differentiability
ERIC Educational Resources Information Center
Kirillova, Evgenia; Spindler, Karlheinz
2008-01-01
In this article we derive all salient properties of analytic functions, including the analytic version of the inverse function theorem, using only the most elementary convergence properties of series. Not even the notion of differentiability is required to do so. Instead, analytical arguments are replaced by combinatorial arguments exhibiting…
Modern analytical chemistry in the contemporary world
NASA Astrophysics Data System (ADS)
Šíma, Jan
2016-02-01
Students not familiar with chemistry tend to misinterpret analytical chemistry as some kind of sorcery in which analytical chemists, working as modern wizards, handle magical black boxes able to provide fascinating results. However, this view is evidently improper and misleading. Therefore, the position of modern analytical chemistry among the sciences and in the contemporary world is discussed. Its interdisciplinary character and the necessity of collaboration between analytical chemists and other experts in order to effectively solve the actual problems of human society and the environment are emphasized. The importance of analytical method validation in order to obtain accurate and precise results is highlighted. Invalid results are not only useless; they can often be even fatal (e.g., in clinical laboratories). The curriculum of analytical chemistry at schools and universities is discussed; it should be much broader than traditional equilibrium chemistry coupled with a simple description of individual analytical methods. Ultimately, the schooling of analytical chemistry should closely connect theory and practice.
38 CFR 4.46 - Accurate measurement.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 38 Pensions, Bonuses, and Veterans' Relief 1 2010-07-01 2010-07-01 false Accurate measurement. 4... RATING DISABILITIES Disability Ratings The Musculoskeletal System § 4.46 Accurate measurement. Accurate measurement of the length of stumps, excursion of joints, dimensions and location of scars with respect...
Analytical evaluation of atomic form factors: Application to Rayleigh scattering
Safari, L.; Santos, J. P.; Amaro, P.; Jänkälä, K.; Fratini, F.
2015-05-15
Atomic form factors are widely used for the characterization of targets and specimens, from crystallography to biology. By using recent mathematical results, here we derive an analytical expression for the atomic form factor within the independent particle model constructed from nonrelativistic screened hydrogenic wave functions. The range of validity of this analytical expression is checked by comparing the analytically obtained form factors with the ones obtained within the Hartee-Fock method. As an example, we apply our analytical expression for the atomic form factor to evaluate the differential cross section for Rayleigh scattering off neutral atoms.
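For the simplest case in this framework, a nonrelativistic hydrogenic 1s electron, the form factor has a well-known closed form; the paper's general screened multi-electron expressions are not reproduced here. In the sketch below, q is in inverse Bohr radii and the effective charge Z is a free parameter.

```python
def form_factor_1s(q, Z=1.0, a0=1.0):
    """Atomic form factor of a hydrogenic 1s electron:
    F(q) = [1 + (q*a0/(2Z))^2]^(-2), normalized so F(0) = 1.
    This is the Fourier transform of the 1s charge density."""
    return (1.0 + (q * a0 / (2.0 * Z)) ** 2) ** -2

# Form factor decays monotonically with momentum transfer.
samples = [form_factor_1s(q) for q in (0.0, 1.0, 2.0, 4.0)]
```

A larger effective charge contracts the orbital, so the form factor falls off more slowly in q, which is the qualitative behavior screening corrections modify.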
Light-emitting diodes for analytical chemistry.
Macka, Mirek; Piasecki, Tomasz; Dasgupta, Purnendu K
2014-01-01
Light-emitting diodes (LEDs) are playing increasingly important roles in analytical chemistry, from the final analysis stage to photoreactors for analyte conversion to actual fabrication of and incorporation in microdevices for analytical use. The extremely fast turn-on/off rates of LEDs have made possible simple approaches to fluorescence lifetime measurement. Although they are increasingly being used as detectors, their wavelength selectivity as detectors has rarely been exploited. From their first proposed use for absorbance measurement in 1970, LEDs have been used in analytical chemistry in too many ways to make a comprehensive review possible. Hence, we critically review here the more recent literature on their use in optical detection and measurement systems. Cloudy as our crystal ball may be, we express our views on the future applications of LEDs in analytical chemistry: The horizon will certainly become wider as LEDs in the deep UV with sufficient intensity become available.
Analytical solution of a model for complex food webs
NASA Astrophysics Data System (ADS)
Camacho, Juan; Guimerà, Roger; Amaral, Luís A.
2002-03-01
We investigate numerically and analytically a recently proposed model for food webs [Nature 404, 180 (2000)] in the limit of large web sizes and sparse interaction matrices. We obtain analytical expressions for several quantities with ecological interest, in particular, the probability distributions for the number of prey and the number of predators. We find that these distributions have fast-decaying exponential and Gaussian tails, respectively. We also find that our analytical expressions are robust to changes in the details of the model.
Accurate SHAPE-directed RNA structure determination
Deigan, Katherine E.; Li, Tian W.; Mathews, David H.; Weeks, Kevin M.
2009-01-01
Almost all RNAs can fold to form extensive base-paired secondary structures. Many of these structures then modulate numerous fundamental elements of gene expression. Deducing these structure–function relationships requires that it be possible to predict RNA secondary structures accurately. However, RNA secondary structure prediction for large RNAs, such that a single predicted structure for a single sequence reliably represents the correct structure, has remained an unsolved problem. Here, we demonstrate that quantitative, nucleotide-resolution information from a SHAPE experiment can be interpreted as a pseudo-free energy change term and used to determine RNA secondary structure with high accuracy. Free energy minimization, by using SHAPE pseudo-free energies, in conjunction with nearest neighbor parameters, predicts the secondary structure of deproteinized Escherichia coli 16S rRNA (>1,300 nt) and a set of smaller RNAs (75–155 nt) with accuracies of up to 96–100%, which are comparable to the best accuracies achievable by comparative sequence analysis. PMID:19109441
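The pseudo-free energy term described above has a simple logarithmic form; the slope and intercept used below (m ≈ 2.6 kcal/mol, b ≈ −0.8 kcal/mol) are taken as the published fitted values, but treat them as assumptions of this sketch rather than universal constants.

```python
import math

M_SLOPE = 2.6       # kcal/mol, fitted slope (assumed from the paper)
B_INTERCEPT = -0.8  # kcal/mol, fitted intercept (assumed from the paper)

def shape_pseudo_energy(reactivity):
    """Per-nucleotide pseudo-free energy change added to the
    nearest-neighbor folding model:
    dG_SHAPE = m * ln(reactivity + 1) + b."""
    return M_SLOPE * math.log(reactivity + 1.0) + B_INTERCEPT

# Highly reactive (likely unpaired) nucleotides are penalized for pairing;
# unreactive nucleotides receive a small pairing bonus.
penalties = [shape_pseudo_energy(s) for s in (0.0, 0.5, 2.0)]
```

In practice these per-nucleotide terms are fed into a free energy minimization engine alongside the nearest-neighbor parameters, which is the step that produces the reported accuracies.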
Analytical Chemistry in Russia.
Zolotov, Yuri
2016-09-01
Research in Russian analytical chemistry (AC) is carried out on a significant scale, and the analytical service solves practical tasks in geological survey, environmental protection, medicine, industry, agriculture, etc. The education system trains highly skilled professionals in AC. The development, and especially the manufacturing, of analytical instruments should be improved; in spite of this, there are several good domestic instruments, and others satisfy some requirements. Russian AC has rather good historical roots.
NASA Astrophysics Data System (ADS)
Cai, Huayang; Savenije, Hubert H. G.; Toffolon, Marco
2012-09-01
This paper explores different analytical solutions of the tidal hydraulic equations in convergent estuaries. Linear and quasi-nonlinear models are compared for given geometry, friction, and tidal amplitude at the seaward boundary, proposing a common theoretical framework and showing that the main difference between the examined models lies in the treatment of the friction term. A general solution procedure is proposed for the set of governing analytical equations expressed in dimensionless form, and a new analytical expression for the tidal damping is derived as a weighted average of two solutions, characterized by the usual linearized formulation and the quasi-nonlinear Lagrangean treatment of the friction term. The different analytical solutions are tested against fully nonlinear numerical results for a wide range of parameters, and compared with observations in the Scheldt estuary. Overall, the new method compares best with the numerical solution and field data. The new accurate relationship for the tidal damping is then exploited for a classification of estuaries based on the distance of the tidally averaged depth from the ideal depth (relative to vanishing amplification) and the critical depth (condition for maximum amplification). Finally, the new model is used to investigate the effect of depth variations on the tidal dynamics in 23 real estuaries, highlighting the usefulness of the analytical method to assess the influence of human interventions (e.g. by dredging) and global sea-level rise on the estuarine environment.
Analytical Formalism for the Interaction of Two-Level Quantum Systems with Metal Nanoresonators
NASA Astrophysics Data System (ADS)
Yang, Jianji; Perrin, Mathias; Lalanne, Philippe
2015-04-01
Hybrid systems made of quantum emitters and plasmonic nanoresonators offer a unique platform for implementing artificial atoms with completely novel optical responses that are not available otherwise. However, their theoretical analysis is difficult, and since many degrees of freedom have to be explored, engineering their optical properties remains challenging. Here, we propose a new formalism that removes most limitations encountered in previous analytical treatments and allows a flexible and efficient study of complex nanoresonators with arbitrary shapes in an almost fully analytical way. The formalism yields accurate closed-form expressions for the hybrid-system optical response and provides an intuitive description based on the coupling between the quantum emitters and the resonance modes of the nanoresonator. The ability to quickly predict the light-scattering properties of hybrid systems paves the way to a deep exploration of their fascinating properties and may enable rapid optimization of quantum plasmonic metamaterials or quantum information devices.
NASA Astrophysics Data System (ADS)
Sedighi, H. M.; Shirazi, K. H.
2014-11-01
This article presents a new formulation of beam vibrations on an elastic foundation with quintic nonlinearity, including exact expressions for the beam curvature. To achieve a proper design of the beam structures, it is essential to realize how the beam vibrates in its transverse mode, which, in turn, yields the natural frequency of the system. In this direction, a powerful analytical method called the parameter expansion method is employed to obtain the exact solution of the frequency-amplitude relationship. It is clearly shown that the first term of the series expansion is sufficient to produce a highly accurate approximation of the above-mentioned system. Finally, the accuracy of the present analytic procedure is evaluated through comparisons with numerical calculations.
Science Update: Analytical Chemistry.
ERIC Educational Resources Information Center
Worthy, Ward
1980-01-01
Briefly discusses new instrumentation in the field of analytical chemistry. Advances in liquid chromatography, photoacoustic spectroscopy, the use of lasers, and mass spectrometry are also discussed. (CS)
Road Transportable Analytical Laboratory (RTAL) system
Finger, S.M.
1995-10-01
The goal of the Road Transportable Analytical Laboratory (RTAL) Project is the development and demonstration of a system to meet the unique needs of the DOE for rapid, accurate analysis of a wide variety of hazardous and radioactive contaminants in soil, groundwater, and surface waters. This laboratory system has been designed to provide the field and laboratory analytical equipment necessary to detect and quantify radionuclides, organics, heavy metals and other inorganic compounds. The laboratory system consists of a set of individual laboratory modules deployable independently or as an interconnected group to meet each DOE site's specific needs.
Service line analytics in the new era.
Spence, Jay; Seargeant, Dan
2015-08-01
To succeed under the value-based business model, hospitals and health systems require effective service line analytics that combine inpatient and outpatient data and that incorporate quality metrics for evaluating clinical operations. When developing a framework for collection, analysis, and dissemination of service line data, healthcare organizations should focus on five key aspects of effective service line analytics: Updated service line definitions. Ability to analyze and trend service line net patient revenues by payment source. Access to accurate service line cost information across multiple dimensions with drill-through capabilities. Ability to redesign key reports based on changing requirements. Clear assignment of accountability.
Photovoltaic Degradation Rates -- An Analytical Review
Jordan, D. C.; Kurtz, S. R.
2012-06-01
As photovoltaic penetration of the power grid increases, accurate predictions of return on investment require accurate prediction of decreased power output over time. Degradation rates must be known in order to predict power delivery. This article reviews degradation rates of flat-plate terrestrial modules and systems reported in published literature from field testing throughout the last 40 years. Nearly 2000 degradation rates, measured on individual modules or entire systems, have been assembled from the literature, showing a median value of 0.5%/year. The review consists of three parts: a brief historical outline, an analytical summary of degradation rates, and a detailed bibliography partitioned by technology.
An analytic model for the Phobos surface
NASA Technical Reports Server (NTRS)
Duxbury, Thomas C.
1991-01-01
Analytic expressions are derived to model the surface topography and the normal to the surface of Phobos. The analytic expressions consist of a spherical harmonic expansion for the global figure of Phobos, augmented by additional terms for the large crater Stickney and other craters. Over 300 craters were measured in more than 100 Viking Orbiter images to produce the model. In general, the largest craters were measured since they have a significant effect on topography. The topographic model derived has a global spatial and topographic accuracy ranging from about 100 m in areas having the highest resolution and convergent, stereo coverage, up to 500 m in the poorest areas.
Learning Analytics Considered Harmful
ERIC Educational Resources Information Center
Dringus, Laurie P.
2012-01-01
This essay is written to present a prospective stance on how learning analytics, as a core evaluative approach, must help instructors uncover the important trends and evidence of quality learner data in the online course. A critique is presented of strategic and tactical issues of learning analytics. The approach to the critique is taken through…
ERIC Educational Resources Information Center
Ember, Lois R.
1977-01-01
The procedures utilized by the Association of Official Analytical Chemists (AOAC) to develop, evaluate, and validate analytical methods for the analysis of chemical pollutants are detailed. Methods validated by AOAC are used by the EPA and FDA in their enforcement programs and are granted preferential treatment by the courts. (BT)
Analytical mass spectrometry. Abstracts
Not Available
1990-12-31
This 43rd Annual Summer Symposium on Analytical Chemistry was held July 24--27, 1990 at Oak Ridge, TN and contained sessions on the following topics: Fundamentals of Analytical Mass Spectrometry (MS), MS in the National Laboratories, Lasers and Fourier Transform Methods, Future of MS, New Ionization and LC/MS Methods, and an extra session. (WET)
Extreme Scale Visual Analytics
Wong, Pak C.; Shen, Han-Wei; Pascucci, Valerio
2012-05-08
Extreme-scale visual analytics (VA) is about applying VA to extreme-scale data. The articles in this special issue examine advances related to extreme-scale VA problems, their analytical and computational challenges, and their real-world applications.
Signals: Applying Academic Analytics
ERIC Educational Resources Information Center
Arnold, Kimberly E.
2010-01-01
Academic analytics helps address the public's desire for institutional accountability with regard to student success, given the widespread concern over the cost of higher education and the difficult economic and budgetary conditions prevailing worldwide. Purdue University's Signals project applies the principles of analytics widely used in…
ERIC Educational Resources Information Center
Jackson, Brian
2010-01-01
Using a survey of 138 writing programs, I argue that we must be more explicit about what we think students should get out of analysis to make it more likely that students will transfer their analytical skills to different settings. To ensure our students take analytical skills with them at the end of the semester, we must simplify the task we…
Accurate FDTD modelling for dispersive media using rational function and particle swarm optimisation
NASA Astrophysics Data System (ADS)
Chung, Haejun; Ha, Sang-Gyu; Choi, Jaehoon; Jung, Kyung-Young
2015-07-01
This article presents an accurate finite-difference time-domain (FDTD) dispersive model suitable for complex dispersive media. A quadratic complex rational function (QCRF) is used to characterise their dispersion relations. To obtain accurate QCRF coefficients, we use an analytical approach and particle swarm optimisation (PSO) simultaneously. Specifically, the analytical approach is used to obtain the QCRF matrix-solving equation, and PSO is applied to adjust a weighting function of this equation. Numerical examples illustrate the validity of the proposed FDTD dispersion model.
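As an illustration of fitting a QCRF to sampled permittivity data, the sketch below uses plain complex least squares in place of the paper's PSO-weighted matrix equation; the Debye test medium, the normalization frequency `w0`, and all numbers are assumptions for demonstration only.

```python
import numpy as np

def fit_qcrf(omega, eps, w0):
    """Least-squares fit of eps(w) ~ (A0 + A1*s + A2*s^2)/(1 + B1*s + B2*s^2),
    with s = j*w/w0 normalized for numerical conditioning."""
    s = 1j * omega / w0
    # Rearranged to a linear system: A0 + A1*s + A2*s^2 - eps*B1*s - eps*B2*s^2 = eps
    M = np.column_stack([np.ones_like(s), s, s**2, -eps * s, -eps * s**2])
    coef, *_ = np.linalg.lstsq(M, eps, rcond=None)
    return coef  # A0, A1, A2, B1, B2

def eval_qcrf(omega, coef, w0):
    s = 1j * omega / w0
    A0, A1, A2, B1, B2 = coef
    return (A0 + A1 * s + A2 * s**2) / (1 + B1 * s + B2 * s**2)

# Synthetic single-pole Debye medium (exactly representable by a QCRF)
omega = np.linspace(1e8, 1e11, 400)
eps_true = 2.0 + 8.0 / (1 + 1j * omega * 1e-10)
coef = fit_qcrf(omega, eps_true, w0=1e10)
max_err = np.max(np.abs(eval_qcrf(omega, coef, 1e10) - eps_true))
print(f"max fit error: {max_err:.2e}")
```

Because the Debye relation is itself a first-order rational function, the recovered fit here is essentially exact; for real measured data the PSO-tuned weighting described in the abstract would matter.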
How flatbed scanners upset accurate film dosimetry
NASA Astrophysics Data System (ADS)
van Battum, L. J.; Huizenga, H.; Verdaasdonk, R. M.; Heukelom, S.
2016-01-01
Film is an excellent dosimeter for verification of dose distributions due to its high spatial resolution. Irradiated film can be digitized with low-cost, transmission, flatbed scanners. However, a disadvantage is their lateral scan effect (LSE): a scanner readout change over its lateral scan axis. Although anisotropic light scattering was presented as the origin of the LSE, this paper presents an alternative cause. To this end, the LSE for two flatbed scanners (Epson 1680 Expression Pro and Epson 10000XL) and Gafchromic film (EBT, EBT2, EBT3) was investigated, focused on three effects: cross talk, optical path length and polarization. Cross talk was examined using triangular sheets of various optical densities. The optical path length effect was studied using absorptive and reflective neutral density filters with well-defined optical characteristics (OD range 0.2-2.0). Linear polarizer sheets were used to investigate light polarization on the CCD signal in absence and presence of (un)irradiated Gafchromic film. Film dose values ranged from 0.2 to 9 Gy, i.e. an optical density range of 0.25 to 1.1. Measurements were performed in the scanner's transmission mode, with red-green-blue channels. The LSE was found to depend on scanner construction and film type. Its magnitude depends on dose: for 9 Gy it increases up to 14% at the maximum lateral position. Cross talk was only significant in high-contrast regions, up to 2% for very small fields. The optical path length effect introduced by film on the scanner causes 3% for pixels in the extreme lateral position. Light polarization due to film and the scanner's optical mirror system is the main contributor, different in magnitude for the red, green and blue channels. We conclude that any Gafchromic EBT-type film scanned with a flatbed scanner will face these optical effects. Accurate dosimetry therefore requires correction of the LSE, i.e. determination of the LSE per color channel and per dose delivered to the film.
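A correction of this kind can be sketched as a per-channel division by a calibrated lateral response. The quadratic response model below, its 14% edge deviation, and the function names are illustrative assumptions, not the paper's calibration procedure.

```python
import numpy as np

def lateral_response(x_norm, dose):
    """Relative scanner readout vs. normalized lateral position in [-1, 1].
    Invented model mimicking the reported up-to-14% edge deviation at 9 Gy."""
    return 1.0 + 0.14 * (dose / 9.0) * x_norm**2

def correct_lse(channel, dose):
    """channel: (rows, cols) array for one color channel; dose in Gy.
    Divide out the lateral response along the scanner's lateral axis."""
    x = np.linspace(-1.0, 1.0, channel.shape[1])
    return channel / lateral_response(x[None, :], dose)

# A uniformly irradiated 9 Gy film whose readout is distorted by the LSE
x = np.linspace(-1.0, 1.0, 101)
raw = 100.0 * lateral_response(x[None, :], 9.0)
flat = correct_lse(raw, 9.0)
print(float(flat.std()))   # readout is flat again after correction
```

In practice the response would be measured per color channel and per dose level from calibration strips, as the abstract's conclusion requires.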
Accurate analytical modelling of cosmic ray induced failure rates of power semiconductor devices
NASA Astrophysics Data System (ADS)
Bauer, Friedhelm D.
2009-06-01
A new, simple and efficient approach is presented to conduct estimations of the cosmic ray induced failure rate for high voltage silicon power devices early in the design phase. This allows combining common design issues such as device losses and safe operating area with the constraints imposed by the reliability to result in a better and overall more efficient design methodology. Starting from an experimental and theoretical background brought forth a few years ago [Kabza H et al. Cosmic radiation as a cause for power device failure and possible countermeasures. In: Proceedings of the sixth international symposium on power semiconductor devices and IC's, Davos, Switzerland; 1994. p. 9-12, Zeller HR. Cosmic ray induced breakdown in high voltage semiconductor devices, microscopic model and possible countermeasures. In: Proceedings of the sixth international symposium on power semiconductor devices and IC's, Davos, Switzerland; 1994. p. 339-40, and Matsuda H et al. Analysis of GTO failure mode during d.c. blocking. In: Proceedings of the sixth international symposium on power semiconductor devices and IC's, Davos, Switzerland; 1994. p. 221-5], an exact solution of the failure rate integral is derived and presented in a form which lends itself to being combined with the results available from commercial semiconductor simulation tools. Hence, failure rate integrals can be obtained with relative ease for realistic two- and even three-dimensional semiconductor geometries. Two case studies relating to IGBT cell design and planar junction termination layout demonstrate the purpose of the method.
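The coupling to simulator output can be sketched as a numerical rate integral over a simulated electric-field profile. The kernel exp(-E_crit/E) and every constant below are invented stand-ins for the paper's derived closed-form solution; only the structure (field profile in, relative failure rate out) reflects the idea.

```python
import numpy as np

def failure_rate(x, E, K=1.0e9, E_crit=1.5e7):
    """Illustrative cosmic-ray failure-rate integral over a device depth grid.
    x: depth grid (cm); E: local field magnitude (V/cm). Arbitrary units.
    Trapezoidal rule written out to avoid version-specific numpy helpers."""
    kernel = np.exp(-E_crit / np.maximum(E, 1.0))
    return K * np.sum(0.5 * (kernel[1:] + kernel[:-1]) * np.diff(x))

# Compare two blocking conditions for the same (uniform-field) drift region
x = np.linspace(0.0, 0.05, 501)              # 500 um drift region
r_lo = failure_rate(x, 1.0e5 * np.ones_like(x))
r_hi = failure_rate(x, 2.0e5 * np.ones_like(x))
print(r_hi > r_lo)   # rate rises very steeply with blocking field
```

The steep exponential dependence on the local field is the reason such estimates are useful early in the design phase, when field profiles are already available from device simulation.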
NASA Astrophysics Data System (ADS)
Martínez, M. J.; Marco, F. J.; López, J. A.
2009-02-01
The Hipparcos catalog provides a reference frame at optical wavelengths for the new International Celestial Reference System (ICRS). This new reference system was adopted following the resolution agreed at the 23rd IAU General Assembly held in Kyoto in 1997. Differences in the Hipparcos system of proper motions and the previous materialization of the reference frame, the FK5, are expected to be caused only by the combined effects of the motion of the equinox of the FK5 and the precession of the equator and the ecliptic. Several authors have pointed out an inconsistency between the differences in proper motion of the Hipparcos-FK5 and the correction of the precessional values derived from VLBI and lunar laser ranging (LLR) observations. Most of them have claimed that these discrepancies are due to slightly biased proper motions in the FK5 catalog. The different mathematical models that have been employed to explain these errors have not fully accounted for the discrepancies in the correction of the precessional parameters. Our goal here is to offer an explanation for this fact. We propose the use of independent parametric and nonparametric models. The introduction of a nonparametric model, combined with the inner product in the square integrable functions over the unitary sphere, would give us values which do not depend on the possible interdependencies existing in the data set. The evidence shows that zonal studies are needed. This would lead us to introduce a local nonparametric model. All these models will provide independent corrections to the precessional values, which could then be compared in order to study the reliability in each case. Finally, we obtain values for the precession corrections that are very consistent with those that are currently adopted.
Extrapolatable analytical functions for tendon excursions and moment arms from sparse datasets.
Kurse, Manish U; Lipson, Hod; Valero-Cuevas, Francisco J
2012-06-01
Computationally efficient modeling of complex neuromuscular systems for dynamics and control simulations often requires accurate analytical expressions for moment arms over the entire range of motion. Conventionally, polynomial expressions are regressed from experimental data. But these polynomial regressions can fail to extrapolate, may require large datasets to train, are not robust to noise, and often have numerous free parameters. We present a novel method that simultaneously estimates both the form and parameter values of arbitrary analytical expressions for tendon excursions and moment arms over the entire range of motion from sparse datasets. This symbolic regression method based on genetic programming has been shown to find the appropriate form of mathematical expressions that capture the physics of mechanical systems. We demonstrate this method by applying it to 1) experimental data from a physical tendon-driven robotic system with arbitrarily routed multiarticular tendons and 2) synthetic data from musculoskeletal models. We show it outperforms polynomial regressions in the amount of training data, ability to extrapolate, robustness to noise, and representation containing fewer parameters--all critical to realistic and efficient computational modeling of complex musculoskeletal systems.
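The extrapolation failure of polynomial regressions that motivates the method can be sketched as follows. The toy moment-arm form a + b*sin(theta), the noise level, and the degree-7 polynomial are all invented for illustration; the correct-form fit stands in for an expression that symbolic regression might discover.

```python
import numpy as np

rng = np.random.default_rng(0)
theta_train = np.linspace(-0.5, 0.5, 15)        # sparse training range (rad)

def true_arm(t):
    return 2.0 + 1.5 * np.sin(t)                # assumed "true" moment arm (cm)

r_train = true_arm(theta_train) + rng.normal(0, 0.02, theta_train.size)

# Degree-7 polynomial regression vs. a fit of the correct functional form
poly = np.polynomial.Polynomial.fit(theta_train, r_train, deg=7)
A = np.column_stack([np.ones_like(theta_train), np.sin(theta_train)])
(a, b), *_ = np.linalg.lstsq(A, r_train, rcond=None)

theta_test = np.linspace(1.0, 1.5, 50)          # well outside the training range
err_poly = np.max(np.abs(poly(theta_test) - true_arm(theta_test)))
err_form = np.max(np.abs(a + b * np.sin(theta_test) - true_arm(theta_test)))
print(err_poly, err_form)
```

The noise-inflated high-order polynomial coefficients blow up outside the training range, while the correct-form fit stays at the noise level, which is exactly the contrast the abstract reports.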
An analytical sensitivity method for use in integrated aeroservoelastic aircraft design
NASA Technical Reports Server (NTRS)
Gilbert, Michael G.
1989-01-01
Interdisciplinary analysis capabilities have been developed for aeroservoelastic aircraft and large flexible spacecraft, but the requisite integrated design methods are only beginning to be developed. One integrated design method which has received attention is based on hierarchical problem decompositions, optimization, and design sensitivity analyses. This paper highlights a design sensitivity analysis method for Linear Quadratic Gaussian (LQG) optimal control laws, enabling the use of LQG techniques in the hierarchical design methodology. The LQG sensitivity analysis method calculates the change in the optimal control law and resulting controlled system responses due to changes in fixed design integration parameters using analytical sensitivity equations. Numerical results of a LQG design sensitivity analysis for a realistic aeroservoelastic aircraft example are presented. In this example, the sensitivity of the optimal control law and aircraft response for various parameters such as wing bending natural frequency is determined. The sensitivity results computed from the analytical expressions are used to estimate changes in response resulting from changes in the parameters. Comparisons of the estimates with exact calculated responses show they are reasonably accurate for ±15 percent changes in the parameters. Evaluation of the analytical expressions is computationally faster than equivalent finite difference calculations.
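The estimate-versus-exact comparison can be sketched for a toy LQR problem. Here the sensitivity is formed by central finite differences rather than the paper's analytical equations, and the second-order plant, the cost definition, and the parameter choice are assumptions for illustration.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

def lqr_cost(wn):
    """Optimal LQR cost J = x0' P x0 for a lightly damped oscillator whose
    natural frequency wn is the design parameter being varied."""
    A = np.array([[0.0, 1.0], [-wn**2, -0.1]])
    B = np.array([[0.0], [1.0]])
    Q, R = np.eye(2), np.array([[1.0]])
    P = solve_continuous_are(A, B, Q, R)
    x0 = np.array([1.0, 0.0])
    return float(x0 @ P @ x0)

p0, dp = 1.0, 1e-4
dJdp = (lqr_cost(p0 + dp) - lqr_cost(p0 - dp)) / (2 * dp)   # sensitivity dJ/dp

p1 = 1.15 * p0                       # a +15% change in natural frequency
estimate = lqr_cost(p0) + dJdp * (p1 - p0)   # first-order prediction
exact = lqr_cost(p1)
print(abs(estimate - exact) / exact)         # relative error of the estimate
```

A smooth cost surface is what makes such first-order estimates usable for ±15 percent parameter changes; the paper's contribution is obtaining dJ/dp analytically instead of by the finite differences used here.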
Enzymes in Analytical Chemistry.
ERIC Educational Resources Information Center
Fishman, Myer M.
1980-01-01
Presents tabular information concerning recent research in the field of enzymes in analytic chemistry, with methods, substrate or reaction catalyzed, assay, comments and references listed. The table refers to 128 references. Also listed are 13 general citations. (CS)
Extreme Scale Visual Analytics
Steed, Chad A; Potok, Thomas E; Pullum, Laura L; Ramanathan, Arvind; Shipman, Galen M; Thornton, Peter E; Potok, Thomas E
2013-01-01
Given the scale and complexity of today's data, visual analytics is rapidly becoming a necessity rather than an option for comprehensive exploratory analysis. In this paper, we provide an overview of three applications of visual analytics for addressing the challenges of analyzing climate, text streams, and biosurveillance data. These systems feature varying levels of interaction and high performance computing technology integration to permit exploratory analysis of large and complex data of global significance.
Analytical Improvements in PV Degradation Rate Determination
Jordan, D. C.; Kurtz, S. R.
2011-02-01
As photovoltaic (PV) penetration of the power grid increases, it becomes vital to know how decreased power output may affect cost over time. In order to predict power delivery, the decline or degradation rates must be determined accurately. For non-spectrally corrected data several complete seasonal cycles (typically 3-5 years) are required to obtain reasonably accurate degradation rates. In a rapidly evolving industry such a time span is often unacceptable and the need exists to determine degradation rates accurately in a shorter period of time. Occurrence of outliers and data shifts are two examples of analytical problems leading to greater uncertainty and therefore to longer observation times. In this paper we compare three methodologies of data analysis for robustness in the presence of outliers, data shifts and shorter measurement time periods.
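One robust alternative in this spirit can be sketched with a Theil-Sen slope estimate, which resists outliers far better than ordinary least squares; the synthetic performance series, the outlier episode, and the 0.5%/year rate below are invented for illustration and are not the paper's data or chosen methodology.

```python
import numpy as np
from scipy.stats import theilslopes

rng = np.random.default_rng(1)
t = np.arange(0, 60) / 12.0                      # 5 years of monthly data (years)
perf = 100.0 * (1 - 0.005 * t) + rng.normal(0, 0.2, t.size)  # -0.5%/yr trend
perf[10:13] -= 8.0                               # outlier episode (e.g. soiling)

ols_slope = np.polyfit(t, perf, 1)[0]            # ordinary least squares
ts_slope = theilslopes(perf, t)[0]               # robust Theil-Sen median slope

true_rate = -0.5                                 # %/year, by construction
print(ols_slope, ts_slope)
```

The few outliers pull the least-squares slope visibly away from the true rate, while the median-of-pairwise-slopes estimator stays close, which is why robust methods can shorten the observation time needed for a trustworthy rate.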
Ke, Quan; Luo, Weijie; Yan, Guozheng; Yang, Kai
2016-04-01
A wireless power transfer system based on weakly inductive coupling makes it possible to provide the endoscope microrobot (EMR) with infinite power. To facilitate patients' inspection with the EMR system, the diameter of the transmitting coil is enlarged to 69 cm. Due to the large transmitting range, a high quality factor of the Litz-wire transmitting coil is a necessity to ensure that the magnetic field is generated efficiently. Thus, this paper builds an analytical model of the transmitting coil and then optimizes the parameters of the coil by enlarging the quality factor. The lumped model of the transmitting coil includes three parameters: ac resistance, self-inductance, and stray capacitance. Based on the exact two-dimensional solution, an accurate analytical expression for the ac resistance is derived. Several transmitting coils of different specifications are utilized to verify this analytical expression, which is in good agreement with the measured results except for coils with a large number of strands. The quality factor of transmitting coils can then be well predicted with the available analytical expressions for self-inductance and stray capacitance. Owing to the exact estimation of the quality factor, the appropriate number of turns for the transmitting coil is set to 18-40 within the restrictions of the transmitting circuit and human tissue issues. To supply enough energy for the next generation of the EMR equipped with a Ø9.5×10.1 mm receiving coil, the number of turns of the transmitting coil is optimally set to 28, which can transfer a maximum power of 750 mW with a remarkable delivering efficiency of 3.55%. PMID:26292335
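The turn-count optimization can be sketched as maximizing Q = ωL/R_ac over candidate turn counts. The scaling laws below (L ~ N², an ac resistance with a wire-length term ~N and a proximity-loss term ~N³), the operating frequency, and every constant are invented stand-ins for the paper's analytical expressions.

```python
import numpy as np

f = 218e3                        # assumed operating frequency (Hz)
omega = 2 * np.pi * f
N = np.arange(10, 61)            # candidate turn counts

L = 2.0e-7 * N**2                # self-inductance (H): invented N^2 scaling
R = 4.0e-3 * N + 6.0e-6 * N**3   # ac resistance (ohm): length + proximity losses

Q = omega * L / R                # quality factor of the transmitting coil
best = int(N[np.argmax(Q)])
print(f"turn count maximizing Q: {best}")
```

With these scalings Q(N) is proportional to N/(c1 + c2 N²), so an interior optimum exists; the same structure explains why the paper lands on a specific turn count inside its 18-40 feasibility window rather than at an endpoint.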
Accurate measurements of dynamics and reproducibility in small genetic networks
Dubuis, Julien O; Samanta, Reba; Gregor, Thomas
2013-01-01
Quantification of gene expression has become a central tool for understanding genetic networks. In many systems, the only viable way to measure protein levels is by immunofluorescence, which is notorious for its limited accuracy. Using the early Drosophila embryo as an example, we show that careful identification and control of experimental error allows for highly accurate gene expression measurements. We generated antibodies in different host species, allowing for simultaneous staining of four Drosophila gap genes in individual embryos. Careful error analysis of hundreds of expression profiles reveals that less than ∼20% of the observed embryo-to-embryo fluctuations stem from experimental error. These measurements make it possible to extract not only very accurate mean gene expression profiles but also their naturally occurring fluctuations of biological origin and corresponding cross-correlations. We use this analysis to extract gap gene profile dynamics with ∼1 min accuracy. The combination of these new measurements and analysis techniques reveals a twofold increase in profile reproducibility owing to a collective network dynamics that relays positional accuracy from the maternal gradients to the pair-rule genes. PMID:23340845
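The error-decomposition idea can be sketched with duplicate measurements: when each embryo is measured twice (e.g. two independent stainings of the same gene), the within-embryo disagreement estimates experimental variance, and its share of the total embryo-to-embryo variance follows directly. All numbers below are invented; the paper reports an experimental share below roughly 20%.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 300
biology = rng.normal(1.0, 0.10, n)       # true per-embryo expression level
m1 = biology + rng.normal(0, 0.04, n)    # measurement 1 (adds experimental noise)
m2 = biology + rng.normal(0, 0.04, n)    # measurement 2 (independent noise)

# Half the mean squared difference of duplicates estimates the noise variance
exp_var = np.mean((m1 - m2) ** 2) / 2.0
tot_var = np.var(m1)                     # total observed embryo-to-embryo variance

frac = exp_var / tot_var                 # experimental share of the fluctuations
print(f"experimental share of variance: {frac:.2f}")
```

Subtracting the experimental share is what lets the biological fluctuations and their cross-correlations be quantified on their own, as the abstract describes.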
Cerebral cortical activity associated with non-experts' most accurate motor performance.
Dyke, Ford; Godwin, Maurice M; Goel, Paras; Rehm, Jared; Rietschel, Jeremy C; Hunt, Carly A; Miller, Matthew W
2014-10-01
This study's specific aim was to determine if non-experts' most accurate motor performance is associated with verbal-analytic- and working memory-related cerebral cortical activity during motor preparation. To assess this, EEG was recorded from non-expert golfers executing putts; EEG spectral power and coherence were calculated for the epoch preceding putt execution; and spectral power and coherence for the five most accurate putts were contrasted with that for the five least accurate. Results revealed marked power in the theta frequency bandwidth at all cerebral cortical regions for the most accurate putts relative to the least accurate, and considerable power in the low-beta frequency bandwidth at the left temporal region for the most accurate compared to the least. As theta power is associated with working memory and low-beta power at the left temporal region with verbal analysis, results suggest non-experts' most accurate motor performance is associated with verbal-analytic- and working memory-related cerebral cortical activity during motor preparation. PMID:25058623
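The spectral-power analysis behind such EEG findings can be sketched with Welch's method: estimate power in the theta (4-7 Hz) and low-beta (13-20 Hz) bands from a signal. The synthetic signal and exact band edges are illustrative assumptions, not the study's recordings.

```python
import numpy as np
from scipy.signal import welch

fs = 250.0                                    # sampling rate (Hz)
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(3)
eeg = (2.0 * np.sin(2 * np.pi * 6 * t)        # strong theta component (6 Hz)
       + 0.5 * np.sin(2 * np.pi * 16 * t)     # weaker low-beta component (16 Hz)
       + rng.normal(0, 0.5, t.size))          # broadband background noise

f, psd = welch(eeg, fs=fs, nperseg=512)       # Welch power spectral density

def band_power(f, psd, lo, hi):
    """Integrate the PSD over [lo, hi] Hz with a simple rectangle rule."""
    sel = (f >= lo) & (f <= hi)
    return np.sum(psd[sel]) * (f[1] - f[0])

theta = band_power(f, psd, 4, 7)
low_beta = band_power(f, psd, 13, 20)
print(theta > low_beta)
```

Contrasting such band powers between most-accurate and least-accurate trials, channel by channel, is the kind of comparison the abstract reports.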
Mill profiler machines soft materials accurately
NASA Technical Reports Server (NTRS)
Rauschl, J. A.
1966-01-01
Mill profiler machines bevels, slots, and grooves in soft materials, such as styrofoam phenolic-filled cores, to any desired thickness. A single operator can accurately control cutting depths in contour or straight line work.
Digitalized accurate modeling of SPCB with multi-spiral surface based on CPC algorithm
NASA Astrophysics Data System (ADS)
Huang, Yanhua; Gu, Lizhi
2015-09-01
The main methods of existing multi-spiral surface geometry modeling include spatial analytic geometry algorithms, the graphical method, and interpolation and approximation algorithms. However, these modeling methods have shortcomings such as a large amount of calculation, complex processes, and visible errors, which have, to some extent, considerably restricted the design and manufacture of premium, high-precision products with spiral surfaces. This paper introduces the concepts of spatially parallel coupling with a multi-spiral surface and of a spatially parallel coupling body. The typical geometric and topological features of each spiral surface forming the multi-spiral surface body are determined by using the extraction principle of the datum point cluster, the algorithm of the coupling point cluster with singular-point removal, and the "spatially parallel coupling" principle based on the non-uniform B-spline for each spiral surface. The orientation and quantitative relationships of the datum point cluster and the coupling point cluster in Euclidean space are determined accurately and described digitally, and the surfaces with multiple coupling point clusters are coalesced under the Pro/E environment. Digitally accurate modeling of a spatially parallel coupling body with a multi-spiral surface is thereby realized. Smoothing and fairing are applied to the end-section area of a three-blade end-milling cutter by the principle of spatially parallel coupling with a multi-spiral surface, and the resulting entity model is machined in a four-axis machining center. The algorithm is verified and then applied effectively to the transition area among the multi-spiral surfaces. The proposed model and algorithms may be used in the design and manufacture of multi-spiral surface products, as well as in solving essentially the problems of considerable modeling errors in computer graphics and
NASA Astrophysics Data System (ADS)
Diwakar, S. V.; Das, Sarit K.; Sundararajan, T.
2009-12-01
A new Quadratic Spline based Interface (QUASI) reconstruction algorithm is presented which provides an accurate and continuous representation of the interface in a multiphase domain and facilitates the direct estimation of local interfacial curvature. The fluid interface in each of the mixed cells is represented by piecewise parabolic curves, and an initial discontinuous PLIC approximation of the interface is progressively converted into a smooth quadratic spline made of these parabolic curves. The conversion is achieved by a sequence of predictor-corrector operations enforcing function (C0) and derivative (C1) continuity at the cell boundaries using simple analytical expressions for the continuity requirements. The efficacy and accuracy of the current algorithm have been demonstrated using standard test cases involving reconstruction of known static interface shapes and dynamically evolving interfaces in prescribed flow situations. These benchmark studies illustrate that the present algorithm performs excellently compared to the other interface reconstruction methods available in the literature. A quadratic rate of error reduction with respect to grid size has been observed in all the cases with curved interface shapes; only in situations where the interface geometry is primarily flat does the rate of convergence become linear with the mesh size. The flow algorithm implemented in the current work is designed to accurately balance the pressure gradients with the surface tension force at any location. As a consequence, it is able to minimize spurious flow currents arising from imperfect normal stress balance at the interface. This has been demonstrated through the standard test problem of an inviscid droplet placed in a quiescent medium. Finally, the direct curvature estimation ability of the current algorithm is illustrated through the coupled multiphase flow problem of a deformable air bubble rising through a column of water.
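The continuity conditions used when stitching piecewise parabolas into a quadratic spline can be sketched directly: given one segment y = a + b*x + c*x², choose the next segment to match value (C0) and slope (C1) at the joint, and evaluate the curvature analytically as y''/(1 + y'²)^(3/2). The coefficients below are invented; this is only the algebra of the continuity requirements, not the QUASI predictor-corrector sequence.

```python
import numpy as np

def continue_segment(a, b, c, x1, c_next):
    """Return (a2, b2) so that a2 + b2*x + c_next*x^2 matches the value and
    slope of the previous parabola a + b*x + c*x^2 at x = x1."""
    y1 = a + b * x1 + c * x1**2          # value at the joint
    s1 = b + 2 * c * x1                  # slope at the joint
    b2 = s1 - 2 * c_next * x1            # enforce C1
    a2 = y1 - b2 * x1 - c_next * x1**2   # enforce C0
    return a2, b2

def curvature(b, c, x):
    """Analytic curvature of y = a + b*x + c*x^2 at x."""
    yp, ypp = b + 2 * c * x, 2 * c
    return ypp / (1 + yp**2) ** 1.5

a, b, c, x1 = 0.0, 0.2, 0.5, 1.0         # invented left segment and joint
a2, b2 = continue_segment(a, b, c, x1, c_next=-0.3)

# Verify C0 and C1 at the joint
y_left = a + b * x1 + c * x1**2
y_right = a2 + b2 * x1 + (-0.3) * x1**2
s_left = b + 2 * c * x1
s_right = b2 + 2 * (-0.3) * x1
print(abs(y_left - y_right), abs(s_left - s_right), curvature(b, c, x1))
```

Having an explicit quadratic per cell is what makes the curvature directly and analytically available, which is the property the abstract highlights.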
Importance of Accurate Measurements in Nutrition Research: Dietary Flavonoids as a Case Study
Technology Transfer Automated Retrieval System (TEKTRAN)
Accurate measurements of the secondary metabolites in natural products and plant foods are critical to establishing diet/health relationships. There are as many as 50,000 secondary metabolites which may influence human health. Their structural and chemical diversity present a challenge to analytic...
NASA Astrophysics Data System (ADS)
Rensonnet, Gaëtan; Jacobs, Damien; Macq, Benoît; Taquet, Maxime
2016-03-01
Diffusion-weighted magnetic resonance imaging (DW-MRI) is a powerful tool to probe the diffusion of water through tissues. Through the application of magnetic gradients of appropriate direction, intensity and duration constituting the acquisition parameters, information can be retrieved about the underlying microstructural organization of the brain. In this context, an important and open question is to determine an optimal sequence of such acquisition parameters for a specific purpose. The use of simulated DW-MRI data for a given microstructural configuration provides a convenient and efficient way to address this problem. We first present a novel hybrid method for the synthetic simulation of DW-MRI signals that combines analytic expressions in simple geometries such as spheres and cylinders and Monte Carlo (MC) simulations elsewhere. Our hybrid method remains valid for any acquisition parameters and provides identical levels of accuracy with a computational time that is 90% shorter than that required by MC simulations for commonly-encountered microstructural configurations. We apply our novel simulation technique to estimate the radius of axons under various noise levels with different acquisition protocols commonly used in the literature. The results of our comparison suggest that protocols favoring a large number of gradient intensities such as Cube and Sphere (CUSP) imaging provide more accurate radius estimation than conventional single-shell HARDI acquisitions for an identical acquisition time.
Insight solutions are correct more often than analytic solutions
Salvi, Carola; Bricolo, Emanuela; Kounios, John; Bowden, Edward; Beeman, Mark
2016-01-01
How accurate are insights compared to analytical solutions? In four experiments, we investigated how participants’ solving strategies influenced their solution accuracies across different types of problems, including one that was linguistic, one that was visual and two that were mixed visual-linguistic. In each experiment, participants’ self-judged insight solutions were, on average, more accurate than their analytic ones. We hypothesised that insight solutions have superior accuracy because they emerge into consciousness in an all-or-nothing fashion when the unconscious solving process is complete, whereas analytic solutions can be guesses based on conscious, prematurely terminated, processing. This hypothesis is supported by the finding that participants’ analytic solutions included relatively more incorrect responses (i.e., errors of commission) than timeouts (i.e., errors of omission) compared to their insight responses. PMID:27667960
An analytical study on the diffraction quality factor of open cavities
Huang, Y. J.; Chu, K. R.; Yeh, L. H.
2014-10-15
Open cavities are often employed as interaction structures in a new generation of coherent millimeter, sub-millimeter, and terahertz (THz) radiation sources called the gyrotron. One of the open ends of the cavity is intended for rapid extraction of the radiation generated by a powerful electron beam. Up to the sub-THz regime, the diffraction loss from this open end dominates over the Ohmic losses on the walls, which results in a much lower diffraction quality factor (Q_d) than the Ohmic quality factor (Q_ohm). Early analytical studies have led to various expressions for Q_d and shed much light on its properties. In this study, we begin with a review of these studies, and then proceed with the derivation of an analytical expression for Q_d accurate to high order. Its validity is verified with numerical solutions for a step-tunable cavity commonly employed for the development of sub-THz and THz gyrotrons. On the basis of the results, a simplified equation is obtained which explicitly expresses the scaling laws of Q_d with respect to mode indices and cavity dimensions.
Advances in analytical chemistry
NASA Technical Reports Server (NTRS)
Arendale, W. F.; Congo, Richard T.; Nielsen, Bruce J.
1991-01-01
Implementation of computer programs based on multivariate statistical algorithms makes possible obtaining reliable information from long data vectors that contain large amounts of extraneous information, for example, noise and/or analytes that we do not wish to control. Three examples are described. Each of these applications requires the use of techniques characteristic of modern analytical chemistry. The first example, using a quantitative or analytical model, describes the determination of the acid dissociation constant for 2,2'-pyridyl thiophene using archived data. The second example describes an investigation to determine the active biocidal species of iodine in aqueous solutions. The third example is taken from a research program directed toward advanced fiber-optic chemical sensors. The second and third examples require heuristic or empirical models.
Competing on talent analytics.
Davenport, Thomas H; Harris, Jeanne; Shapiro, Jeremy
2010-10-01
Do investments in your employees actually affect workforce performance? Who are your top performers? How can you empower and motivate other employees to excel? Leading-edge companies such as Google, Best Buy, Procter & Gamble, and Sysco use sophisticated data-collection technology and analysis to answer these questions, leveraging a range of analytics to improve the way they attract and retain talent, connect their employee data to business performance, differentiate themselves from competitors, and more. The authors present the six key ways in which companies track, analyze, and use data about their people, ranging from a simple baseline of metrics to monitor the organization's overall health to custom modeling for predicting future head count depending on various "what if" scenarios. They go on to show that companies competing on talent analytics manage data and technology at an enterprise level, support what analytical leaders do, choose realistic targets for analysis, and hire analysts with strong interpersonal skills as well as broad expertise.
Frontiers in analytical chemistry
Amato, I.
1988-12-15
Doing more with less was the modus operandi of R. Buckminster Fuller, the late science genius, and inventor of such things as the geodesic dome. In late September, chemists described their own version of this maxim--learning more chemistry from less material and in less time--in a symposium titled Frontiers in Analytical Chemistry at the 196th National Meeting of the American Chemical Society in Los Angeles. Symposium organizer Allen J. Bard of the University of Texas at Austin assembled six speakers, himself among them, to survey pretty widely different areas of analytical chemistry.
Monitoring the analytic surface.
Spence, D P; Mayes, L C; Dahl, H
1994-01-01
How do we listen during an analytic hour? Systematic analysis of the speech patterns of one patient (Mrs. C.) strongly suggests that the clustering of shared pronouns (e.g., you/me) represents an important aspect of the analytic surface, preconsciously sensed by the analyst and used by him to determine when to intervene. Sensitivity to these patterns increases over the course of treatment, and in a final block of 10 hours shows a striking degree of contingent responsivity: specific utterances by the patient are consistently echoed by the analyst's interventions. PMID:8182248
Accurate 12D dipole moment surfaces of ethylene
NASA Astrophysics Data System (ADS)
Delahaye, Thibault; Nikitin, Andrei V.; Rey, Michael; Szalay, Péter G.; Tyuterev, Vladimir G.
2015-10-01
Accurate ab initio full-dimensional dipole moment surfaces of ethylene are computed using coupled-cluster approach and its explicitly correlated counterpart CCSD(T)-F12 combined respectively with cc-pVQZ and cc-pVTZ-F12 basis sets. Their analytical representations are provided through 4th order normal mode expansions. First-principles prediction of the line intensities using variational method up to J = 30 are in excellent agreement with the experimental data in the range of 0-3200 cm-1. Errors of 0.25-6.75% in integrated intensities for fundamental bands are comparable with experimental uncertainties. Overall calculated C2H4 opacity in 600-3300 cm-1 range agrees with experimental determination better than to 0.5%.
Analytic solutions for optimal statistical arbitrage trading
NASA Astrophysics Data System (ADS)
Bertram, William K.
2010-06-01
In this paper we derive analytic formulae for statistical arbitrage trading where the security price follows an Ornstein-Uhlenbeck process. By framing the problem in terms of the first-passage time of the process, we derive expressions for the mean and variance of the trade length and the return. We examine the problem of choosing an optimal strategy under two different objective functions: the expected return, and the Sharpe ratio. An exact analytic solution is obtained for the case of maximising the expected return.
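The setup in this abstract, a spread following an Ornstein-Uhlenbeck process with trade length defined by a first-passage time, can be sketched numerically. The snippet below is a hedged illustration under assumed parameter values (not taken from the paper): it simulates the process with an Euler scheme, checks the stationary standard deviation against the analytic value sigma/sqrt(2*theta), and estimates the mean trade length from an entry level back to the mean.

```python
import numpy as np

# Illustrative parameters (assumed, not from the paper):
theta, sigma, dt, n = 2.0, 1.0, 0.01, 200_000
rng = np.random.default_rng(0)
z = rng.standard_normal(n - 1)

# Euler scheme for dX = -theta * X dt + sigma dW (mean level 0).
x = np.empty(n)
x[0] = 0.0
for i in range(n - 1):
    x[i + 1] = x[i] - theta * x[i] * dt + sigma * np.sqrt(dt) * z[i]

# Stationary std of an OU process is sigma / sqrt(2 * theta) = 0.5 here.
print(x[20_000:].std())  # sample estimate, after discarding burn-in

# Mean trade length: enter when the spread falls to -0.5, exit at 0.
durations, start = [], None
for i in range(20_000, n):
    if start is None and x[i] <= -0.5:
        start = i
    elif start is not None and x[i] >= 0.0:
        durations.append((i - start) * dt)
        start = None
print(np.mean(durations))  # Monte Carlo estimate of the mean trade length
```

The paper's contribution is to replace such Monte Carlo estimates with closed-form expressions for the mean and variance of the trade length and return, which is what makes the optimization over entry/exit bands tractable.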
Integrated Array/Metadata Analytics
NASA Astrophysics Data System (ADS)
Misev, Dimitar; Baumann, Peter
2015-04-01
Data comes in various forms and types, and integration usually presents a problem that is often simply ignored and solved with ad-hoc solutions. Multidimensional arrays are an ubiquitous data type, that we find at the core of virtually all science and engineering domains, as sensor, model, image, statistics data. Naturally, arrays are richly described by and intertwined with additional metadata (alphanumeric relational data, XML, JSON, etc). Database systems, however, a fundamental building block of what we call "Big Data", lack adequate support for modelling and expressing these array data/metadata relationships. Array analytics is hence quite primitive or non-existent at all in modern relational DBMS. Recognizing this, we extended SQL with a new SQL/MDA part seamlessly integrating multidimensional array analytics into the standard database query language. We demonstrate the benefits of SQL/MDA with real-world examples executed in ASQLDB, an open-source mediator system based on HSQLDB and rasdaman, that already implements SQL/MDA.
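The kind of combined query that SQL/MDA targets, a metadata predicate joined with an array aggregate, can be mimicked in a few lines of ordinary code. This is a conceptual sketch only: the dataset names and metadata fields are invented, and nothing here reflects the actual SQL/MDA syntax or the ASQLDB implementation.

```python
import numpy as np

# Invented example data: arrays with attached relational-style metadata.
datasets = {
    "temp_eu_2020": {"meta": {"region": "EU", "year": 2020},
                     "data": np.array([[1.0, 2.0], [3.0, 4.0]])},
    "temp_us_2020": {"meta": {"region": "US", "year": 2020},
                     "data": np.array([[5.0, 6.0], [7.0, 8.0]])},
    "temp_eu_2021": {"meta": {"region": "EU", "year": 2021},
                     "data": np.array([[2.0, 2.0], [2.0, 2.0]])},
}

# Roughly "SELECT name, max over the array WHERE region = 'EU'": the
# join of a metadata predicate with an array aggregate that SQL/MDA
# expresses inside the query language itself.
result = {name: float(ds["data"].max())
          for name, ds in datasets.items()
          if ds["meta"]["region"] == "EU"}
print(result)  # {'temp_eu_2020': 4.0, 'temp_eu_2021': 2.0}
```

Doing this ad hoc in application code is exactly the integration burden the abstract argues a database should absorb.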
Analytical Services Management System
Church, Shane; Nigbor, Mike; Hillman, Daniel
2005-03-30
Analytical Services Management System (ASMS) provides sample management services. Sample management includes sample planning for analytical requests, sample tracking for shipping and receiving by the laboratory, receipt of the analytical data deliverable, processing the deliverable and payment of the laboratory conducting the analyses. ASMS is a web based application that provides the ability to manage these activities at multiple locations for different customers. ASMS provides for the assignment of single to multiple samples for standard chemical and radiochemical analyses. ASMS is a flexible system which allows the users to request analyses by line item code. Line item codes are selected based on the Basic Ordering Agreement (BOA) format for contracting with participating laboratories. ASMS also allows contracting with non-BOA laboratories using a similar line item code contracting format for their services. ASMS allows sample and analysis tracking from sample planning and collection in the field through sample shipment, laboratory sample receipt, laboratory analysis and submittal of the requested analyses, electronic data transfer, and payment of the laboratories for the completed analyses. The software when in operation contains business sensitive material that is used as a principal portion of the Kaiser Analytical Management Services business model. The software version provided is the most recent version, however the copy of the application does not contain business sensitive data from the associated Oracle tables such as contract information or price per line item code.
Challenges for Visual Analytics
Thomas, James J.; Kielman, Joseph
2009-09-23
Visual analytics has seen unprecedented growth in its first five years of mainstream existence. Great progress has been made in a short time, yet great challenges must be met in the next decade to provide new technologies that will be widely accepted by societies throughout the world. This paper sets the stage for some of those challenges in an effort to provide the stimulus for the research, both basic and applied, to address and exceed the envisioned potential for visual analytics technologies. We start with a brief summary of the initial challenges, followed by a discussion of the initial driving domains and applications, as well as additional applications and domains that have been a part of recent rapid expansion of visual analytics usage. We look at the common characteristics of several tools illustrating emerging visual analytics technologies, and conclude with the top ten challenges for the field of study. We encourage feedback and collaborative participation by members of the research community, the wide array of user communities, and private industry.
Analytical Chemistry Laboratory
NASA Technical Reports Server (NTRS)
Anderson, Mark
2013-01-01
The Analytical Chemistry and Material Development Group maintains a capability in chemical analysis, materials R&D, failure analysis and contamination control. The uniquely qualified staff and facility support the needs of flight projects, science instrument development and various technical tasks, as well as Cal Tech.
Analytics: Changing the Conversation
ERIC Educational Resources Information Center
Oblinger, Diana G.
2013-01-01
In this third and concluding discussion on analytics, the author notes that we live in an information culture. We are accustomed to having information instantly available and accessible, along with feedback and recommendations. We want to know what people think and like (or dislike). We want to know how we compare with "others like me."…
Accurate Biomass Estimation via Bayesian Adaptive Sampling
NASA Technical Reports Server (NTRS)
Wheeler, Kevin R.; Knuth, Kevin H.; Castle, Joseph P.; Lvov, Nikolay
2005-01-01
The following concepts were introduced: a) Bayesian adaptive sampling for solving biomass estimation; b) characterization of MISR Rahman model parameters conditioned upon MODIS landcover; c) a rigorous non-parametric Bayesian approach to analytic mixture model determination; d) a unique U.S. asset for science product validation and verification.
Modified chemiluminescent NO analyzer accurately measures NOX
NASA Technical Reports Server (NTRS)
Summers, R. L.
1978-01-01
Installation of molybdenum nitric oxide (NO)-to-higher oxides of nitrogen (NOx) converter in chemiluminescent gas analyzer and use of air purge allow accurate measurements of NOx in exhaust gases containing as much as thirty percent carbon monoxide (CO). Measurements using conventional analyzer are highly inaccurate for NOx if as little as five percent CO is present. In modified analyzer, molybdenum has high tolerance to CO, and air purge substantially quenches NOx destruction. In test, modified chemiluminescent analyzer accurately measured NO and NOx concentrations for over 4 months with no degradation in performance.
Accurate interlaminar stress recovery from finite element analysis
NASA Technical Reports Server (NTRS)
Tessler, Alexander; Riggs, H. Ronald
1994-01-01
The accuracy and robustness of a two-dimensional smoothing methodology is examined for the problem of recovering accurate interlaminar shear stress distributions in laminated composite and sandwich plates. The smoothing methodology is based on a variational formulation which combines discrete least-squares and penalty-constraint functionals in a single variational form. The smoothing analysis utilizes optimal strains computed at discrete locations in a finite element analysis. These discrete strain data are smoothed with a smoothing element discretization, producing superior-accuracy strains and their first gradients. The approach enables the resulting smooth strain field to be practically C1-continuous throughout the domain of smoothing, exhibiting superconvergent properties of the smoothed quantity. The continuous strain gradients are also obtained directly from the solution. The recovered strain gradients are subsequently employed in the integration of the equilibrium equations to obtain accurate interlaminar shear stresses. The problem is a simply-supported rectangular plate under a doubly sinusoidal load. The problem has an exact analytic solution which serves as a measure of goodness of the recovered interlaminar shear stresses. The method has the versatility of being applicable to the analysis of rather general and complex structures built of distinct components and materials, such as found in aircraft design. For these types of structures, the smoothing is achieved with 'patches', each patch covering the domain in which the smoothed quantity is physically continuous.
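The recovery step described above, fit a smooth field to discrete strain data and then differentiate the fit, can be sketched in one dimension. This is a hedged illustration, not the smoothing-element method of the paper: an ordinary least-squares polynomial stands in for the variational C1 smoothing, and the sinusoidal "strain" merely echoes the doubly sinusoidal test problem.

```python
import numpy as np

rng = np.random.default_rng(1)

# Discrete "strain" samples with small measurement noise, on x in [0, 1].
x = np.linspace(0.0, 1.0, 41)
strain = np.sin(np.pi * x) + 0.01 * rng.standard_normal(x.size)

# A least-squares polynomial fit smooths the discrete data; in the
# paper, the variational smoothing-element discretization plays this
# role with C1-continuity guarantees.
coeffs = np.polyfit(x, strain, deg=7)

# Differentiating the smooth fit gives a continuous strain gradient,
# which is what gets fed into the equilibrium-equation integration.
grad = np.polyval(np.polyder(coeffs), 0.25)
print(grad)  # close to the exact gradient pi * cos(pi / 4)
```

The point of the exercise is the same as in the paper: differentiating the raw noisy samples directly would amplify the noise, while differentiating the smoothed field yields gradients accurate enough to integrate for interlaminar stresses.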
Metabolomics and diabetes: analytical and computational approaches.
Sas, Kelli M; Karnovsky, Alla; Michailidis, George; Pennathur, Subramaniam
2015-03-01
Diabetes is characterized by altered metabolism of key molecules and regulatory pathways. The phenotypic expression of diabetes and associated complications encompasses complex interactions between genetic, environmental, and tissue-specific factors that require an integrated understanding of perturbations in the network of genes, proteins, and metabolites. Metabolomics attempts to systematically identify and quantitate small molecule metabolites from biological systems. The recent rapid development of a variety of analytical platforms based on mass spectrometry and nuclear magnetic resonance have enabled identification of complex metabolic phenotypes. Continued development of bioinformatics and analytical strategies has facilitated the discovery of causal links in understanding the pathophysiology of diabetes and its complications. Here, we summarize the metabolomics workflow, including analytical, statistical, and computational tools, highlight recent applications of metabolomics in diabetes research, and discuss the challenges in the field. PMID:25713200
Can Appraisers Rate Work Performance Accurately?
ERIC Educational Resources Information Center
Hedge, Jerry W.; Laue, Frances J.
The ability of individuals to make accurate judgments about others is examined and literature on this subject is reviewed. A wide variety of situational factors affects the appraisal of performance. It is generally accepted that the purpose of the appraisal influences the accuracy of the appraiser. The instrumentation, or tools, available to the…
Accurate pointing of tungsten welding electrodes
NASA Technical Reports Server (NTRS)
Ziegelmeier, P.
1971-01-01
Thoriated-tungsten is pointed accurately and quickly by using sodium nitrite. Point produced is smooth and no effort is necessary to hold the tungsten rod concentric. The chemically produced point can be used several times longer than ground points. This method reduces time and cost of preparing tungsten electrodes.
NASA Technical Reports Server (NTRS)
Schmidt, R. F.
1987-01-01
This document discusses the determination of caustic surfaces in terms of rays, reflectors, and wavefronts. Analytical caustics are obtained as a family of lines, a set of points, and several types of equations for geometries encountered in optics and microwave applications. Standard methods of differential geometry are applied under different approaches: directly to reflector surfaces, and alternatively, to wavefronts, to obtain analytical caustics of two sheets or branches. Gauss/Seidel aberrations are introduced into the wavefront approach, forcing the retention of all three coefficients of both the first- and the second-fundamental forms of differential geometry. An existing method for obtaining caustic surfaces through exploitation of the singularities in flux density is examined, and several constant-intensity contour maps are developed using only the intrinsic Gaussian, mean, and normal curvatures of the reflector. Numerous references are provided for extending the material of the present document to the morphologies of caustics and their associated diffraction patterns.
Requirements for Predictive Analytics
Troy Hiltbrand
2012-03-01
It is important to have a clear understanding of how traditional Business Intelligence (BI) and analytics are different and how they fit together in optimizing organizational decision making. With traditional BI, activities are focused primarily on providing context to enhance a known set of information through aggregation, data cleansing and delivery mechanisms. As these organizations mature their BI ecosystems, they achieve a clearer picture of the key performance indicators signaling the relative health of their operations. Organizations that embark on activities surrounding predictive analytics and data mining go beyond simply presenting the data in a manner that will allow decision makers to have a complete context around the information. These organizations generate models based on known information and then apply other organizational data against these models to reveal unknown information.
Analytic ICF Hohlraum Energetics
Rosen, M D; Hammer, J
2003-08-27
We apply recent analytic solutions to the radiation diffusion equation to problems of interest for ICF hohlraums. The solutions provide quantitative values for absorbed energy which are of use for generating a desired radiation temperature vs. time within the hohlraum. Comparison of supersonic and subsonic solutions (heat front velocity faster or slower, respectively, than the speed of sound in the x-ray heated material) suggests that there may be some advantage in using high Z metallic foams as hohlraum wall material to reduce hydrodynamic losses, and hence, net absorbed energy by the walls. Analytic and numerical calculations suggest that the loss per unit area might be reduced approximately 20% through use of foam hohlraum walls. Reduced hydrodynamic motion of the wall material may also reduce symmetry swings, as found for heavy ion targets.
Brune, D.; Forkman, B.; Persson, B.
1984-01-01
This book covers the general theories and techniques of nuclear chemical analysis, directed at applications in analytical chemistry, nuclear medicine, radiophysics, agriculture, environmental sciences, geological exploration, industrial process control, etc. The main principles of nuclear physics and nuclear detection on which the analysis is based are briefly outlined. An attempt is made to emphasise the fundamentals of activation analysis, detection and activation methods, as well as their applications. The book provides guidance in analytical chemistry, agriculture, environmental and biomedical sciences, etc. The contents include: the nuclear periodic system; nuclear decay; nuclear reactions; nuclear radiation sources; interaction of radiation with matter; principles of radiation detectors; nuclear electronics; statistical methods and spectral analysis; methods of radiation detection; neutron activation analysis; charged particle activation analysis; photon activation analysis; sample preparation and chemical separation; nuclear chemical analysis in biological and medical research; the use of nuclear chemical analysis in the field of criminology; nuclear chemical analysis in environmental sciences, geology and mineral exploration; and radiation protection.
Analytical applications of aptamers
NASA Astrophysics Data System (ADS)
Tombelli, S.; Minunni, M.; Mascini, M.
2007-05-01
Aptamers are single stranded DNA or RNA ligands which can be selected for different targets starting from a library of molecules containing randomly created sequences. Aptamers have been selected to bind very different targets, from proteins to small organic dyes. Aptamers are proposed as alternatives to antibodies as biorecognition elements in analytical devices with ever increasing frequency, in order to satisfy the demand for quick, cheap, simple and highly reproducible analytical devices, especially for protein detection in the medical field or for the detection of smaller molecules in environmental and food analysis. In our recent experience, DNA and RNA aptamers, specific for three different proteins (Tat, IgE and thrombin), have been exploited as bio-recognition elements to develop specific biosensors (aptasensors). These recognition elements have been coupled to piezoelectric quartz crystals and surface plasmon resonance (SPR) devices as transducers, where the aptamers have been immobilized on the gold surface of the crystal electrodes or on SPR chips, respectively.
Analytic holographic superconductor
NASA Astrophysics Data System (ADS)
Herzog, Christopher P.
2010-06-01
We investigate a holographic superconductor that admits an analytic treatment near the phase transition. In the dual 3+1-dimensional field theory, the phase transition occurs when a scalar operator of scaling dimension two gets a vacuum expectation value. We calculate current-current correlation functions along with the speed of second sound near the critical temperature. We also make some remarks about critical exponents. An analytic treatment is possible because an underlying Heun equation describing the zero mode of the phase transition has a polynomial solution. Amusingly, the treatment here may generalize for an order parameter with any integer spin, and we propose a Lagrangian for a spin-two holographic superconductor.
Cowell, Andrew J.; Cowell, Amanda K.
2009-08-29
This paper discusses the design and use of anthropomorphic computer characters as nonplayer characters (NPCs) within analytical games. These new environments allow avatars to play a central role in supporting training and education goals instead of playing the supporting-cast role. This new 'science' of gaming, driven by high-powered but inexpensive computers, dedicated graphics processors and realistic game engines, enables game developers to create learning and training opportunities on par with expensive real-world training scenarios. However, there needs to be care and attention placed on how avatars are represented and thus perceived. A taxonomy of non-verbal behavior is presented and its application to analytical gaming discussed.
Industrial Analytics Corporation
Industrial Analytics Corporation
2004-01-30
The lost foam casting process is sensitive to the properties of the EPS patterns used for the casting operation. In this project Industrial Analytics Corporation (IAC) has developed a new low voltage x-ray instrument for x-ray radiography of very low mass EPS patterns. IAC has also developed a transmitted visible light method for characterizing the properties of EPS patterns. The systems developed are also applicable to other low density materials including graphite foams.
Davenport, Thomas H
2006-01-01
We all know the power of the killer app. It's not just a support tool; it's a strategic weapon. Companies questing for killer apps generally focus all their firepower on the one area that promises to create the greatest competitive advantage. But a new breed of organization has upped the stakes: Amazon, Harrah's, Capital One, and the Boston Red Sox have all dominated their fields by deploying industrial-strength analytics across a wide variety of activities. At a time when firms in many industries offer similar products and use comparable technologies, business processes are among the few remaining points of differentiation, and analytics competitors wring every last drop of value from those processes. Employees hired for their expertise with numbers or trained to recognize their importance are armed with the best evidence and the best quantitative tools. As a result, they make the best decisions. In companies that compete on analytics, senior executives make it clear, from the top down, that analytics is central to strategy. Such organizations launch multiple initiatives involving complex data and statistical analysis, and quantitative activity is managed at the enterprise (not departmental) level. In this article, professor Thomas H. Davenport lays out the characteristics and practices of these statistical masters and describes some of the very substantial changes other companies must undergo in order to compete on quantitative turf. As one would expect, the transformation requires a significant investment in technology, the accumulation of massive stores of data, and the formulation of company-wide strategies for managing the data. But, at least as important, it also requires executives' vocal, unswerving commitment and willingness to change the way employees think, work, and are treated.
Mass inflation in Eddington-inspired Born-Infeld black holes: Analytical scaling solutions
NASA Astrophysics Data System (ADS)
Avelino, P. P.
2016-05-01
We study the inner dynamics of accreting Eddington-inspired Born-Infeld black holes using the homogeneous approximation and taking charge as a surrogate for angular momentum. We show that there is a minimum of the accretion rate below which mass inflation does not occur, and we derive an analytical expression for this threshold as a function of the fundamental scale of the theory, the accretion rate, the mass, and the charge of the black hole. Our result explicitly demonstrates that, no matter how close Eddington-inspired Born-Infeld gravity is to general relativity, there is always a minimum accretion rate below which there is no mass inflation. For larger accretion rates, mass inflation takes place inside the black hole as in general relativity until the extremely rapid density variations bring it to an abrupt end. We derive analytical scaling solutions for the value of the energy density and of the Misner-Sharp mass attained at the end of mass inflation as a function of the fundamental scale of the theory, the accretion rate, the mass, and the charge of the black hole, and compare these with the corresponding numerical solutions. We find that, except for unreasonably high accretion rates, our analytical results appear to provide an accurate description of homogeneous mass inflation inside accreting Eddington-inspired Born-Infeld black holes.
The EPA’s vision for the Endocrine Disruptor Screening Program (EDSP) in the 21st Century (EDSP21) includes utilization of high-throughput screening (HTS) assays coupled with computational modeling to prioritize chemicals with the goal of eventually replacing current Tier 1...
New analytic formula for edge bootstrap current
NASA Astrophysics Data System (ADS)
Chang, C. S.; Koh, S.; Menard, J.; Weitzner, H.; Choe, W.
2012-10-01
The edge bootstrap current plays a critical role in the equilibrium and stability of the steep edge pedestal plasma. The pedestal plasma has an unconventional and difficult neoclassical property, as compared with the core plasma. A drift-kinetic particle code XGC0, equipped with a mass-momentum-energy conserving collision operator, is used to study the edge bootstrap current in a realistic diverted magnetic field geometry with a self-consistent radial electric field. When the edge electrons are in the low collisionality banana regime, surprisingly, the present kinetic simulation confirms that the existing analytic expressions (represented by O. Sauter et al., Phys. Plasmas 6, 1999) are still valid in this unconventional region, except in a thin radial layer in contact with the magnetic separatrix. However, when the pedestal electrons are in the plateau-collisional regime, there is a significant deviation of numerical results from the existing analytic formulas. The deviation occurs in different ways between a conventional aspect ratio tokamak and a tight aspect ratio tokamak. A new analytic fitting formula, as a simple modification to the Sauter formula, is obtained to bring the analytic expression into better agreement with the edge kinetic simulation results.
Simple analytic potentials for linear ion traps
NASA Technical Reports Server (NTRS)
Janik, G. R.; Prestage, J. D.; Maleki, L.
1990-01-01
A simple analytical model was developed for the electric and ponderomotive (trapping) potentials in linear ion traps. This model was used to calculate the required voltage drive to a mercury trap, and the result compares well with experiments. The model gives a detailed picture of the geometric shape of the trapping potential and allows an accurate calculation of the well depth. The simplicity of the model allowed an investigation of related, more exotic trap designs which may have advantages in light-collection efficiency.
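The closed-form pseudopotential of an ideal linear RF quadrupole trap makes the "well depth" calculation concrete. Below is a minimal sketch in the harmonic (secular) approximation; the drive voltage, frequency, and trap radius are illustrative values, not the paper's, and the detailed geometric shape the paper's model captures is ignored here.

```python
import math

# Pseudopotential well depth of an ideal linear RF quadrupole trap in the
# harmonic approximation. Parameter values below are illustrative.
def well_depth_eV(V0, omega_rf, r0, mass_amu, charge=1):
    e = 1.602176634e-19
    amu = 1.66053906660e-27
    m, q = mass_amu * amu, charge * e
    # pseudopotential U(r) = q^2 V0^2 r^2 / (4 m Omega^2 r0^4); depth = U(r0)
    return q**2 * V0**2 / (4.0 * m * omega_rf**2 * r0**2) / e

# e.g. a Hg-199 ion, 200 V RF amplitude at 500 kHz, r0 = 5 mm
depth = well_depth_eV(200.0, 2 * math.pi * 500e3, 5e-3, 199)
```

For these assumed parameters the well is of order tens of eV, deep compared with typical ion temperatures, which is consistent with the abstract's point that a simple model suffices for well-depth estimates.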
Simple analytic potentials for linear ion traps
NASA Technical Reports Server (NTRS)
Janik, G. R.; Prestage, J. D.; Maleki, L.
1989-01-01
A simple analytical model was developed for the electric and ponderomotive (trapping) potentials in linear ion traps. This model was used to calculate the required voltage drive to a mercury trap, and the result compares well with experiments. The model gives a detailed picture of the geometric shape of the trapping potential and allows an accurate calculation of the well depth. The simplicity of the model allowed an investigation of related, more exotic trap designs which may have advantages in light-collection efficiency.
Time-domain Raman analytical forward solvers.
Martelli, Fabrizio; Binzoni, Tiziano; Sekar, Sanathana Konugolu Venkata; Farina, Andrea; Cavalieri, Stefano; Pifferi, Antonio
2016-09-01
A set of time-domain analytical forward solvers for Raman signals detected from homogeneous diffusive media is presented. The time-domain solvers have been developed for two geometries: the parallelepiped and the finite cylinder. The potential presence of a background fluorescence emission, contaminating the Raman signal, has also been taken into account. All the solvers have been obtained as solutions of the time-dependent diffusion equation. The validation of the solvers has been performed by means of comparisons with the results of "gold standard" Monte Carlo simulations. These forward solvers provide an accurate tool to explore the information content encoded in the time-resolved Raman measurements. PMID:27607645
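The building block of such solvers is the time-domain Green's function of the diffusion equation. The sketch below is the infinite-medium kernel only (the paper's parallelepiped and cylinder solvers add image sources for the boundaries), with illustrative optical properties.

```python
import math

# Time-domain Green's function of the diffusion equation in an infinite
# homogeneous medium: photon fluence at distance r_mm after a delta pulse.
# Optical properties are illustrative, not the paper's phantom values.
def fluence(r_mm, t_ps, mus_prime=1.0, mua=0.01, n_refr=1.4):
    v = 0.299792458 / n_refr        # speed of light in the medium, mm/ps
    D = 1.0 / (3.0 * mus_prime)     # diffusion coefficient, mm
    s = 4.0 * D * v * t_ps
    return v * (math.pi * s) ** -1.5 * math.exp(-r_mm**2 / s - mua * v * t_ps)
```

The characteristic rise and absorption-dominated decay of this kernel are what carry the "information content encoded in the time-resolved" signal the abstract refers to.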
Mouse models of human AML accurately predict chemotherapy response
Zuber, Johannes; Radtke, Ina; Pardee, Timothy S.; Zhao, Zhen; Rappaport, Amy R.; Luo, Weijun; McCurrach, Mila E.; Yang, Miao-Miao; Dolan, M. Eileen; Kogan, Scott C.; Downing, James R.; Lowe, Scott W.
2009-01-01
The genetic heterogeneity of cancer influences the trajectory of tumor progression and may underlie clinical variation in therapy response. To model such heterogeneity, we produced genetically and pathologically accurate mouse models of common forms of human acute myeloid leukemia (AML) and developed methods to mimic standard induction chemotherapy and efficiently monitor therapy response. We see that murine AMLs harboring two common human AML genotypes show remarkably diverse responses to conventional therapy that mirror clinical experience. Specifically, murine leukemias expressing the AML1/ETO fusion oncoprotein, associated with a favorable prognosis in patients, show a dramatic response to induction chemotherapy owing to robust activation of the p53 tumor suppressor network. Conversely, murine leukemias expressing MLL fusion proteins, associated with a dismal prognosis in patients, are drug-resistant due to an attenuated p53 response. Our studies highlight the importance of genetic information in guiding the treatment of human AML, functionally establish the p53 network as a central determinant of chemotherapy response in AML, and demonstrate that genetically engineered mouse models of human cancer can accurately predict therapy response in patients. PMID:19339691
ERIC Educational Resources Information Center
Badami, Rokhsareh; VaezMousavi, Mohammad; Wulf, Gabriele; Namazizadeh, Mahdi
2012-01-01
One purpose of the present study was to examine whether self-confidence or anxiety would be differentially affected by feedback from more accurate rather than less accurate trials. The second purpose was to determine whether arousal variations (activation) would predict performance. On Day 1, participants performed a golf putting task under one of…
Visual Analytics: How Much Visualization and How Much Analytics?
Keim, Daniel; Mansmann, Florian; Thomas, James J.
2009-12-16
The term Visual Analytics has been around for almost five years by now, but there are still ongoing discussions about what it actually is and, in particular, what is new about it. The core of our view on Visual Analytics is the new enabling and accessible analytic reasoning interactions supported by the combination of automated and visual analytics. In this paper, we outline the scope of Visual Analytics using two problem classes and three methodological classes in order to work out the need for and purpose of Visual Analytics. The respective methods are explained, along with examples of analytic reasoning interaction, offering a glimpse into the future of how Visual Analytics methods will enable us to go beyond what is possible when the two methods are used separately.
Analytical solution for the Feynman ratchet.
Pesz, Karol; Gabryś, Barbara J; Bartkiewicz, Stanisław J
2002-12-01
A search for an analytical, closed form solution of the Fokker-Planck equation with periodic, asymmetric potentials (ratchets) is presented. It is found that logarithmic-type potential functions (related to "entropic" ratchets) allow for an approximate solution within a certain range of parameters. An expression for the net current is calculated and it is shown that the efficiency of the rocked entropic ratchet is always low.
Fast and Accurate Construction of Confidence Intervals for Heritability.
Schweiger, Regev; Kaufman, Shachar; Laaksonen, Reijo; Kleber, Marcus E; März, Winfried; Eskin, Eleazar; Rosset, Saharon; Halperin, Eran
2016-06-01
Estimation of heritability is fundamental in genetic studies. Recently, heritability estimation using linear mixed models (LMMs) has gained popularity because these estimates can be obtained from unrelated individuals collected in genome-wide association studies. Typically, heritability estimation under LMMs uses the restricted maximum likelihood (REML) approach. Existing methods for the construction of confidence intervals and estimators of SEs for REML rely on asymptotic properties. However, these assumptions are often violated because of the bounded parameter space, statistical dependencies, and limited sample size, leading to biased estimates and inflated or deflated confidence intervals. Here, we show that the estimation of confidence intervals by state-of-the-art methods is inaccurate, especially when the true heritability is relatively low or relatively high. We further show that these inaccuracies occur in datasets including thousands of individuals. Such biases are present, for example, in estimates of heritability of gene expression in the Genotype-Tissue Expression project and of lipid profiles in the Ludwigshafen Risk and Cardiovascular Health study. We also show that often the probability that the genetic component is estimated as 0 is high even when the true heritability is bounded away from 0, emphasizing the need for accurate confidence intervals. We propose a computationally efficient method, ALBI (accurate LMM-based heritability bootstrap confidence intervals), for estimating the distribution of the heritability estimator and for constructing accurate confidence intervals. Our method can be used as an add-on to existing methods for estimating heritability and variance components, such as GCTA, FaST-LMM, GEMMA, or EMMAX. PMID:27259052
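The core idea behind such bootstrap intervals can be shown on a toy problem: simulate the estimator's sampling distribution at the point estimate and take percentile bounds, instead of trusting asymptotic normality near the boundary. The estimator below is a deliberately simple moment estimator (regression slope on known standardized genetic scores, truncated at zero), not REML, and all values are illustrative.

```python
import math, random

# Toy parametric-bootstrap confidence interval for a bounded variance-ratio
# parameter h2 in [0, 1]. Not ALBI itself: a simple moment estimator stands
# in for REML to illustrate why simulating the estimator's distribution
# handles truncation at 0 better than an asymptotic normal interval.
random.seed(1)

def h2_hat(h2_true, n):
    a, b = math.sqrt(h2_true), math.sqrt(1.0 - h2_true)
    sxy = sxx = 0.0
    for _ in range(n):
        g = random.gauss(0.0, 1.0)               # known genetic score
        y = a * g + b * random.gauss(0.0, 1.0)   # phenotype
        sxy, sxx = sxy + g * y, sxx + g * g
    return max(0.0, sxy / sxx) ** 2              # bounded parameter space

def bootstrap_ci(estimate, n, B=2000, alpha=0.05):
    draws = sorted(h2_hat(estimate, n) for _ in range(B))
    return draws[int(B * alpha / 2)], draws[int(B * (1 - alpha / 2))]

lo, hi = bootstrap_ci(0.25, n=500)
```

Because the bootstrap draws respect the truncation at zero, the interval stays inside [0, 1] by construction, which is the failure mode of the asymptotic intervals the abstract describes.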
Badami, Rokhsareh; VaezMousavi, Mohammad; Wulf, Gabriele; Namazizadeh, Mahdi
2012-06-01
One purpose of the present study was to examine whether self-confidence or anxiety would be differentially affected by feedback from more accurate rather than less accurate trials. The second purpose was to determine whether arousal variations (activation) would predict performance. On day 1, participants performed a golf putting task under one of two conditions: one group received feedback on the most accurate trials, whereas another group received feedback on the least accurate trials. On day 2, participants completed an anxiety questionnaire and performed a retention test. Skin conductance level, as a measure of arousal, was determined. The results indicated that feedback about more accurate trials resulted in more effective learning as well as increased self-confidence. Also, activation was a predictor of performance. PMID:22808705
Analytic electrical-conductivity tensor of a nondegenerate Lorentz plasma
NASA Astrophysics Data System (ADS)
Stygar, W. A.; Gerdin, G. A.; Fehl, D. L.
2002-10-01
We have developed explicit quantum-mechanical expressions for the conductivity and resistivity tensors of a Lorentz plasma in a magnetic field. The expressions are based on a solution to the Boltzmann equation that is exact when the electric field is weak, the electron-Fermi-degeneracy parameter Θ >> 1, and the electron-ion Coulomb-coupling parameter Γ/Z << 1. (Γ is the ion-ion coupling parameter and Z is the ion charge state.) Assuming a screened 1/r electron-ion scattering potential, we calculate the Coulomb logarithm in the second Born approximation. The ratio of the term obtained in the second approximation to that obtained in the first is used to define the parameter regime over which the calculation is valid. We find that the accuracy of the approximation is determined by Γ/Z and not simply the temperature, and that a quantum-mechanical description can be required at temperatures orders of magnitude less than assumed by Spitzer [Physics of Fully Ionized Gases (Wiley, New York, 1962)]. When the magnetic field B = 0, the conductivity is identical to the Spitzer result except the Coulomb logarithm ln Λ_1 = (ln χ_1 − 1/2) + [2Ze^2/(λ m_e v_e1^2)](ln χ_1 − ln 2^(4/3)), where χ_1 ≡ 2 m_e v_e1 λ/ħ, m_e is the electron mass, v_e1 ≡ (7 k_B T/m_e)^(1/2), k_B is the Boltzmann constant, T is the temperature, λ is the screening length, ħ is Planck's constant divided by 2π, and e is the absolute value of the electron charge. When the plasma Debye length λ_D is greater than the ion-sphere radius a, we assume λ = λ_D; otherwise we set λ = a. The B = 0 conductivity is consistent with measurements when Z >~ 1, Θ >~ 2, and Γ/Z <~ 1, and in this parameter regime appears to be more accurate than previous analytic models. The minimum value of ln Λ_1 when Z >= 1, Θ >= 2, and Γ/Z <= 1 is 1.9. The expression obtained for the resistivity tensor (B ≠ 0) predicts that η_⊥/η_∥ (where η_⊥ and η_∥ are the resistivities perpendicular and parallel to the magnetic field) can be as much as 40% less than previous analytic
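The Coulomb logarithm in the abstract can be evaluated directly. In the sketch below, e^2 denotes the Gaussian-units combination, written in SI as e^2/(4πε0); the temperature and screening length at the bottom are illustrative inputs, not values from the paper.

```python
import math

# Numerical evaluation of ln(Lambda_1) as given in the abstract.
# Constants in SI; e2 is the Gaussian-units e^2, i.e. e^2/(4 pi eps0).
e = 1.602176634e-19
m_e = 9.1093837015e-31
k_B = 1.380649e-23
hbar = 1.054571817e-34
eps0 = 8.8541878128e-12
e2 = e * e / (4.0 * math.pi * eps0)

def ln_lambda1(T, lam, Z=1):
    v_e1 = math.sqrt(7.0 * k_B * T / m_e)          # thermal speed scale
    chi1 = 2.0 * m_e * v_e1 * lam / hbar
    # second-Born correction term from the abstract's expression
    born2 = (2.0 * Z * e2 / (lam * m_e * v_e1**2)) * (
        math.log(chi1) - (4.0 / 3.0) * math.log(2.0))
    return (math.log(chi1) - 0.5) + born2

value = ln_lambda1(1.0e6, 1.0e-9)   # illustrative: T = 1e6 K, lambda = 1 nm
```

For these assumed inputs the second-Born correction is a small additive shift on the leading (ln χ_1 − 1/2) term, consistent with the regime of validity the abstract describes.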
Two highly accurate methods for pitch calibration
NASA Astrophysics Data System (ADS)
Kniel, K.; Härtig, F.; Osawa, S.; Sato, O.
2009-11-01
Among profiles, helix and tooth thickness pitch is one of the most important parameters of an involute gear measurement evaluation. In principle, coordinate measuring machines (CMM) and CNC-controlled gear measuring machines as a variant of a CMM are suited for these kinds of gear measurements. Now the Japan National Institute of Advanced Industrial Science and Technology (NMIJ/AIST) and the German national metrology institute the Physikalisch-Technische Bundesanstalt (PTB) have each developed independently highly accurate pitch calibration methods applicable to CMM or gear measuring machines. Both calibration methods are based on the so-called closure technique which allows the separation of the systematic errors of the measurement device and the errors of the gear. For the verification of both calibration methods, NMIJ/AIST and PTB performed measurements on a specially designed pitch artifact. The comparison of the results shows that both methods can be used for highly accurate calibrations of pitch standards.
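The closure principle both institutes rely on can be shown in a few lines: measuring the gear at every rotational position against the instrument makes the gear's pitch deviations and the instrument's systematic error separable by plain averaging. This is an illustrative simulation of the principle, not either institute's actual procedure.

```python
import math, random

# Closure (multi-position) technique sketch: measurement at rotation k sees
# the gear deviation shifted by k teeth plus a fixed instrument error.
random.seed(0)
N = 36
gear = [random.gauss(0.0, 1.0) for _ in range(N)]                  # gear pitch deviations
instr = [0.3 * math.sin(2.0 * math.pi * i / N) for i in range(N)]  # instrument error

# measurement with the gear rotated by k tooth positions
meas = [[gear[(i + k) % N] + instr[i] for i in range(N)] for k in range(N)]

# averaging over all N rotations reduces the gear term to its mean,
# leaving the instrument signature (up to a constant offset)
instr_est = [sum(meas[k][i] for k in range(N)) / N for i in range(N)]
gbar = sum(gear) / N
residual = max(abs(instr_est[i] - instr[i] - gbar) for i in range(N))
```

Subtracting the recovered instrument signature from any single measurement then yields the gear's own deviations, which is why closure separates device errors from artifact errors.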
Accurate guitar tuning by cochlear implant musicians.
Lu, Thomas; Huang, Juan; Zeng, Fan-Gang
2014-01-01
Modern cochlear implant (CI) users understand speech but find difficulty in music appreciation due to poor pitch perception. Still, some deaf musicians continue to perform with their CI. Here we show unexpected results that CI musicians can reliably tune a guitar by CI alone and, under controlled conditions, match simultaneously presented tones to <0.5 Hz. One subject had normal contralateral hearing and produced more accurate tuning with CI than his normal ear. To understand these counterintuitive findings, we presented tones sequentially and found that tuning error was larger at ∼ 30 Hz for both subjects. A third subject, a non-musician CI user with normal contralateral hearing, showed similar trends in performance between CI and normal hearing ears but with less precision. This difference, along with electric analysis, showed that accurate tuning was achieved by listening to beats rather than discriminating pitch, effectively turning a spectral task into a temporal discrimination task. PMID:24651081
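Why beats make this possible: two nearly equal tones produce an amplitude envelope at |f1 − f2| Hz, a purely temporal cue that survives the coarse spectral coding of an implant. The toy estimator below rectifies and smooths the waveform, then counts envelope nulls with a hysteresis threshold; all parameter choices are illustrative.

```python
import math

# Estimate the beat rate of two summed tones from the envelope nulls.
def beat_frequency(f1, f2, fs=8000.0, dur=2.0):
    n = int(fs * dur)
    x = [math.sin(2*math.pi*f1*t/fs) + math.sin(2*math.pi*f2*t/fs)
         for t in range(n)]
    w = int(fs // 200)                                   # 5 ms moving average
    env = [sum(abs(v) for v in x[i:i + w]) / w for i in range(n - w)]
    top = max(env)
    nulls, inside = 0, False
    for v in env:                  # hysteresis: enter below 0.1, exit above 0.3
        if not inside and v < 0.1 * top:
            nulls, inside = nulls + 1, True
        elif inside and v > 0.3 * top:
            inside = False
    return nulls / dur

bf = beat_frequency(440.0, 443.0)   # detuning of 3 Hz
```

Tuning by minimizing the beat rate turns a pitch-discrimination task into the temporal-discrimination task described in the abstract.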
Preparation and accurate measurement of pure ozone.
Janssen, Christof; Simone, Daniela; Guinet, Mickaël
2011-03-01
Preparation of high purity ozone as well as precise and accurate measurement of its pressure are metrological requirements that are difficult to meet due to ozone decomposition occurring in pressure sensors. The most stable and precise transducer heads are heated and, therefore, prone to accelerated ozone decomposition, limiting measurement accuracy and compromising purity. Here, we describe a vacuum system and a method for ozone production, suitable to accurately determine the pressure of pure ozone by avoiding the problem of decomposition. We use an inert gas in a particularly designed buffer volume and can thus achieve high measurement accuracy and negligible degradation of ozone with purities of 99.8% or better. The high degree of purity is ensured by comprehensive compositional analyses of ozone samples. The method may also be applied to other reactive gases. PMID:21456766
Accurate modeling of parallel scientific computations
NASA Technical Reports Server (NTRS)
Nicol, David M.; Townsend, James C.
1988-01-01
Scientific codes are usually parallelized by partitioning a grid among processors. To achieve top performance it is necessary to partition the grid so as to balance workload and minimize communication/synchronization costs. This problem is particularly acute when the grid is irregular, changes over the course of the computation, and is not known until load time. Critical mapping and remapping decisions rest on the ability to accurately predict performance, given a description of a grid and its partition. This paper discusses one approach to this problem, and illustrates its use on a one-dimensional fluids code. The models constructed are shown to be accurate, and are used to find optimal remapping schedules.
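A deliberately tiny instance of the modeling idea: predict a partition's per-step time as the maximum over processors of a work term plus a communication term, then use the model to compare candidate partitions of an irregular 1-D grid. The constants alpha and beta and the cell weights are illustrative, not measured values from the paper.

```python
# T = max over processors of (alpha * work_p + beta * neighbors_p).
def predicted_time(weights, cuts, alpha=1.0, beta=3.0):
    bounds = [0] + list(cuts) + [len(weights)]
    times = []
    for p in range(len(bounds) - 1):
        work = sum(weights[bounds[p]:bounds[p + 1]])
        nbrs = (p > 0) + (p < len(bounds) - 2)   # exchange partners in 1-D
        times.append(alpha * work + beta * nbrs)
    return max(times)

# irregular grid: cells in the right half cost 3x more per step
w = [1.0] * 50 + [3.0] * 50
naive = predicted_time(w, [50])      # equal cell counts -> load imbalance
balanced = predicted_time(w, [75])   # equal work per processor
```

A remapping decision then reduces to evaluating this model for each candidate cut, which is exactly the role the paper assigns to accurate performance prediction.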
Line gas sampling system ensures accurate analysis
Not Available
1992-06-01
Tremendous changes in the natural gas business have resulted in new approaches to the way natural gas is measured. Electronic flow measurement has altered the business forever, with developments in instrumentation and a new sensitivity to the importance of proper natural gas sampling techniques. This paper reports that YZ Industries Inc., Snyder, Texas, combined its 40 years of sampling experience with the latest in microprocessor-based technology to develop the KynaPak 2000 series, the first on-line natural gas sampling system that is both compact and extremely accurate. This means the composition of the sampled gas must be representative of the whole and related to flow. If so, relative measurement and sampling techniques are married, gas volumes are accurately accounted for and adjustments to composition can be made.
Accurate mask model for advanced nodes
NASA Astrophysics Data System (ADS)
Zine El Abidine, Nacer; Sundermann, Frank; Yesilada, Emek; Ndiaye, El Hadji Omar; Mishra, Kushlendra; Paninjath, Sankaranarayanan; Bork, Ingo; Buck, Peter; Toublan, Olivier; Schanen, Isabelle
2014-07-01
Standard OPC models consist of a physical optical model and an empirical resist model. The resist model compensates for the optical model's imprecision on top of modeling resist development. The optical model imprecision may result from mask topography effects and real mask information, including mask e-beam writing and mask process contributions. For advanced technology nodes, significant progress has been made in modeling mask topography to improve optical model accuracy. However, mask information is difficult to decorrelate from the standard OPC model. Our goal is to establish an accurate mask model through a dedicated calibration exercise. In this paper, we present a flow to calibrate an accurate mask model, enabling its implementation. The study covers the different effects that should be embedded in the mask model as well as the experiments required to model them.
Safouhi, Hassan . E-mail: hassan.safouhi@ualberta.ca; Berlu, Lilian
2006-07-20
Molecular overlap-like quantum similarity measurements imply the evaluation of overlap integrals of two molecular electronic densities related by Dirac delta function. When the electronic densities are expanded over atomic orbitals using the usual LCAO-MO approach (linear combination of atomic orbitals), overlap-like quantum similarity integrals could be expressed in terms of four-center overlap integrals. It is shown that by introducing the Fourier transform of delta Dirac function in the integrals and using the Fourier transform approach combined with the so-called B functions, one can obtain analytic expressions of the integrals under consideration. These analytic expressions involve highly oscillatory semi-infinite spherical Bessel functions, which are the principal source of severe numerical and computational difficulties. In this work, we present a highly efficient algorithm for a fast and accurate numerical evaluation of these multicenter overlap-like quantum similarity integrals over Slater type functions. This algorithm is based on the SD-bar approach due to Safouhi. Recurrence formulae are used for a better control of the degree of accuracy and for a better stability of the algorithm. The numerical result section shows the efficiency of our algorithm, compared with the alternatives using the one-center two-range expansion method, which led to very complicated analytic expressions, the epsilon algorithm and the nonlinear D-bar transformation.
Accurate maser positions for MALT-45
NASA Astrophysics Data System (ADS)
Jordan, Christopher; Bains, Indra; Voronkov, Maxim; Lo, Nadia; Jones, Paul; Muller, Erik; Cunningham, Maria; Burton, Michael; Brooks, Kate; Green, James; Fuller, Gary; Barnes, Peter; Ellingsen, Simon; Urquhart, James; Morgan, Larry; Rowell, Gavin; Walsh, Andrew; Loenen, Edo; Baan, Willem; Hill, Tracey; Purcell, Cormac; Breen, Shari; Peretto, Nicolas; Jackson, James; Lowe, Vicki; Longmore, Steven
2013-10-01
MALT-45 is an untargeted survey, mapping the Galactic plane in CS (1-0), Class I methanol masers, SiO masers and thermal emission, and high frequency continuum emission. After obtaining images from the survey, a number of masers were detected, but without accurate positions. This project seeks to resolve each maser and its environment, with the ultimate goal of placing the Class I methanol maser into a timeline of high mass star formation.
Accurate maser positions for MALT-45
NASA Astrophysics Data System (ADS)
Jordan, Christopher; Bains, Indra; Voronkov, Maxim; Lo, Nadia; Jones, Paul; Muller, Erik; Cunningham, Maria; Burton, Michael; Brooks, Kate; Green, James; Fuller, Gary; Barnes, Peter; Ellingsen, Simon; Urquhart, James; Morgan, Larry; Rowell, Gavin; Walsh, Andrew; Loenen, Edo; Baan, Willem; Hill, Tracey; Purcell, Cormac; Breen, Shari; Peretto, Nicolas; Jackson, James; Lowe, Vicki; Longmore, Steven
2013-04-01
MALT-45 is an untargeted survey, mapping the Galactic plane in CS (1-0), Class I methanol masers, SiO masers and thermal emission, and high frequency continuum emission. After obtaining images from the survey, a number of masers were detected, but without accurate positions. This project seeks to resolve each maser and its environment, with the ultimate goal of placing the Class I methanol maser into a timeline of high mass star formation.
Analytical and numerical investigations into hemisphere-shaped electrostatic sensors.
Lin, Jun; Chen, Zhong-Sheng; Hu, Zheng; Yang, Yong-Min; Tang, Xin
2014-01-01
Electrostatic sensors have been widely used in many applications due to their advantages of low cost and robustness. Their spatial sensitivity and time-frequency characteristics are two important performance parameters. In this paper, an analytical model of the induced charge on a novel hemisphere-shaped electrostatic sensor was presented to investigate its accurate sensing characteristics. Firstly, a Poisson model was built for electric fields produced by charged particles. Then the spatial sensitivity and time-frequency response functions were directly derived by the Green function. Finally, numerical calculations were done to validate the theoretical results. The results demonstrate that the hemisphere-shaped sensors have highly 3D-symmetrical spatial sensitivity expressed in terms of elementary functions, and the spatial sensitivity is higher and less homogeneous near the hemispherical surface and vice versa. Additionally, the whole monitoring system, consisting of an electrostatic probe and a signal conditioner circuit, acts as a band-pass filter. The time-frequency characteristics depend strongly on the spatial position and velocity of the charged particle, the radius of the probe as well as the equivalent resistance and capacitance of the circuit.
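A first-order sketch of the band-pass behavior noted above: the probe senses the time derivative of the induced charge (a rising +20 dB/decade branch), which is then rolled off by the equivalent RC of probe plus conditioner and by the amplifier bandwidth. R, C, and f_amp below are assumed illustrative values, not the paper's circuit constants.

```python
import math

# Charge-to-voltage gain of a differentiating probe followed by an RC
# roll-off and a single-pole amplifier; illustrative constants.
R, C, f_amp = 1.0e7, 1.0e-10, 5.0e4

def charge_to_voltage_gain(f):
    w = 2.0 * math.pi * f
    differentiated = w * R / math.sqrt(1.0 + (w * R * C) ** 2)
    return differentiated / math.sqrt(1.0 + (f / f_amp) ** 2)

g_low, g_mid, g_high = map(charge_to_voltage_gain, (1.0, 1.0e3, 1.0e6))
```

The gain is suppressed at both frequency extremes and peaks in between, i.e. the probe-plus-conditioner chain acts as a band-pass filter, as the abstract states.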
Analytical and Numerical Investigations into Hemisphere-Shaped Electrostatic Sensors
Lin, Jun; Chen, Zhong-Sheng; Hu, Zheng; Yang, Yong-Min; Tang, Xin
2014-01-01
Electrostatic sensors have been widely used in many applications due to their advantages of low cost and robustness. Their spatial sensitivity and time-frequency characteristics are two important performance parameters. In this paper, an analytical model of the induced charge on a novel hemisphere-shaped electrostatic sensor was presented to investigate its accurate sensing characteristics. Firstly, a Poisson model was built for electric fields produced by charged particles. Then the spatial sensitivity and time-frequency response functions were directly derived by the Green function. Finally, numerical calculations were done to validate the theoretical results. The results demonstrate that the hemisphere-shaped sensors have highly 3D-symmetrical spatial sensitivity expressed in terms of elementary functions, and the spatial sensitivity is higher and less homogeneous near the hemispherical surface and vice versa. Additionally, the whole monitoring system, consisting of an electrostatic probe and a signal conditioner circuit, acts as a band-pass filter. The time-frequency characteristics depend strongly on the spatial position and velocity of the charged particle, the radius of the probe as well as the equivalent resistance and capacitance of the circuit. PMID:25090419
Accurate Molecular Polarizabilities Based on Continuum Electrostatics
Truchon, Jean-François; Nicholls, Anthony; Iftimie, Radu I.; Roux, Benoît; Bayly, Christopher I.
2013-01-01
A novel approach for representing the intramolecular polarizability as a continuum dielectric is introduced to account for molecular electronic polarization. It is shown, using a finite-difference solution to the Poisson equation, that the Electronic Polarization from Internal Continuum (EPIC) model yields accurate gas-phase molecular polarizability tensors for a test set of 98 challenging molecules composed of heteroaromatics, alkanes and diatomics. The electronic polarization originates from a high intramolecular dielectric that produces polarizabilities consistent with B3LYP/aug-cc-pVTZ and experimental values when surrounded by vacuum dielectric. In contrast to other approaches to model electronic polarization, this simple model avoids the polarizability catastrophe and accurately calculates molecular anisotropy with the use of very few fitted parameters and without resorting to auxiliary sites or anisotropic atomic centers. On average, the unsigned error in the average polarizability and anisotropy compared to B3LYP are 2% and 5%, respectively. The correlation between the polarizability components from B3LYP and this approach lead to a R2 of 0.990 and a slope of 0.999. Even the F2 anisotropy, shown to be a difficult case for existing polarizability models, can be reproduced within 2% error. In addition to providing new parameters for a rapid method directly applicable to the calculation of polarizabilities, this work extends the widely used Poisson equation to areas where accurate molecular polarizabilities matter. PMID:23646034
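The simplest analytic instance of this picture is a uniform dielectric sphere: a sphere of radius a with internal permittivity eps in vacuum has polarizability α = a³(eps − 1)/(eps + 2) in Gaussian units (so α has units of volume). Raising the internal dielectric drives α toward the conductor limit a³, which is why a high intramolecular dielectric can stand in for electronic polarization. The paper solves the Poisson equation on real molecular shapes; this closed form covers only the sphere, and the numbers are illustrative.

```python
# Polarizability of a uniform dielectric sphere (Gaussian units, volume units).
def sphere_polarizability(a, eps):
    return a**3 * (eps - 1.0) / (eps + 2.0)

alpha_low = sphere_polarizability(2.0, 4.0)     # a = 2 A, eps = 4
alpha_high = sphere_polarizability(2.0, 100.0)  # approaches a^3 = 8 A^3
```

The monotone approach to the conductor limit mirrors how the EPIC model's high internal dielectric produces realistic molecular polarizabilities without per-atom inducible dipoles.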
Accurate phase-shift velocimetry in rock.
Shukla, Matsyendra Nath; Vallatos, Antoine; Phoenix, Vernon R; Holmes, William M
2016-06-01
Spatially resolved Pulsed Field Gradient (PFG) velocimetry techniques can provide precious information concerning flow through opaque systems, including rocks. This velocimetry data is used to enhance flow models in a wide range of systems, from oil behaviour in reservoir rocks to contaminant transport in aquifers. Phase-shift velocimetry is the fastest way to produce velocity maps but critical issues have been reported when studying flow through rocks and porous media, leading to inaccurate results. Combining PFG measurements for flow through Bentheimer sandstone with simulations, we demonstrate that asymmetries in the molecular displacement distributions within each voxel are the main source of phase-shift velocimetry errors. We show that when flow-related average molecular displacements are negligible compared to self-diffusion ones, symmetric displacement distributions can be obtained while phase measurement noise is minimised. We elaborate a complete method for the production of accurate phase-shift velocimetry maps in rocks and low porosity media and demonstrate its validity for a range of flow rates. This development of accurate phase-shift velocimetry now enables more rapid and accurate velocity analysis, potentially helping to inform both industrial applications and theoretical models. PMID:27111139
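The core of the phase-shift method can be sketched in a few lines: a pulsed-gradient pair imprints a phase γ·G·δ·Δ·v on spins moving at velocity v, so the voxel phase maps directly to velocity; when the intra-voxel displacement distribution is symmetric, the phase of the summed signal is unbiased, which is the condition the paper establishes. The sequence parameters below are illustrative.

```python
import cmath, random

# Velocity from voxel phase in pulsed-gradient phase-shift velocimetry.
random.seed(2)
gamma = 2.675e8                        # 1H gyromagnetic ratio, rad s^-1 T^-1
G, delta, Delta = 0.05, 2.0e-3, 20e-3  # gradient (T/m), pulse and spacing (s)
k = gamma * G * delta * Delta          # phase per unit velocity, rad/(m/s)

true_v = 1.0e-3                                              # 1 mm/s mean flow
spins = [random.gauss(true_v, 2.0e-4) for _ in range(5000)]  # symmetric spread
voxel = sum(cmath.exp(1j * k * v) for v in spins)            # complex voxel signal
v_est = cmath.phase(voxel) / k
```

With the symmetric spread assumed here the recovered velocity matches the mean flow; skewing that distribution is, per the paper, the main source of phase-shift errors in rock.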
Precise and accurate isotopic measurements using multiple-collector ICPMS
NASA Astrophysics Data System (ADS)
Albarède, F.; Telouk, Philippe; Blichert-Toft, Janne; Boyet, Maud; Agranier, Arnaud; Nelson, Bruce
2004-06-01
New techniques of isotopic measurements by a new generation of mass spectrometers equipped with an inductively-coupled-plasma source, a magnetic mass filter, and multiple collection (MC-ICPMS) are quickly developing. These techniques are valuable because of (1) the ability of ICP sources to ionize virtually every element in the periodic table, and (2) the large sample throughput. However, because of the complex trajectories of multiple ion beams produced in the plasma source whether from the same or different elements, the acquisition of precise and accurate isotopic data with this type of instrument still requires a good understanding of instrumental fractionation processes, both mass-dependent and mass-independent. Although physical processes responsible for the instrumental mass bias are still to be understood more fully, we here present a theoretical framework that allows for most of the analytical limitations to high precision and accuracy to be overcome. After a presentation of a unifying phenomenological theory for mass-dependent fractionation in mass spectrometers, we show how this theory accounts for the techniques of standard bracketing and of isotopic normalization by a ratio of either the same or a different element, such as the use of Tl to correct mass bias on Pb. Accuracy is discussed with reference to the concept of cup efficiencies. Although these can be simply calibrated by analyzing standards, we derive a straightforward, very general method to calculate accurate isotopic ratios from dynamic measurements. In this study, we successfully applied the dynamic method to Nd and Pb as examples. We confirm that the assumption of identical mass bias for neighboring elements (notably Pb and Tl, and Yb and Lu) is both unnecessary and incorrect. We further discuss the dangers of straightforward standard-sample bracketing when chemical purification of the element to be analyzed is imperfect. Pooling runs to improve precision is acceptable provided the pooled
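The mass-bias correction discussed above is commonly parameterized with the exponential fractionation law. A minimal sketch of that scheme follows; the function names are illustrative, and the sample ratios in the usage check are made-up numbers, not data from the paper (which, moreover, cautions that assuming identical bias for neighboring elements such as Tl and Pb is only approximate):

```python
import math


def mass_bias_beta(r_measured, r_true, m_num, m_den):
    """Exponential-law fractionation factor beta, obtained from a ratio of
    known composition: r_measured / r_true = (m_num / m_den) ** beta."""
    return math.log(r_measured / r_true) / math.log(m_num / m_den)


def correct_ratio(r_measured, m_num, m_den, beta):
    """Correct a measured isotope ratio for instrumental mass bias using a
    beta derived from a reference ratio (e.g. 205Tl/203Tl for Pb runs)."""
    return r_measured / (m_num / m_den) ** beta
```

Deriving beta from a known ratio and applying it back to the same ratio must round-trip exactly; in practice beta from one element is transferred to another, which is precisely the assumption the abstract scrutinizes.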
High Frequency QRS ECG Accurately Detects Cardiomyopathy
NASA Technical Reports Server (NTRS)
Schlegel, Todd T.; Arenare, Brian; Poulin, Gregory; Moser, Daniel R.; Delgado, Reynolds
2005-01-01
High frequency (HF, 150-250 Hz) analysis over the entire QRS interval of the ECG is more sensitive than conventional ECG for detecting myocardial ischemia. However, the accuracy of HF QRS ECG for detecting cardiomyopathy is unknown. We obtained simultaneous resting conventional and HF QRS 12-lead ECGs in 66 patients with cardiomyopathy (EF = 23.2 plus or minus 6.1%, mean plus or minus SD) and in 66 age- and gender-matched healthy controls using PC-based ECG software recently developed at NASA. The single most accurate ECG parameter for detecting cardiomyopathy was an HF QRS morphological score that takes into consideration the total number and severity of reduced amplitude zones (RAZs) present plus the clustering of RAZs together in contiguous leads. This RAZ score had an area under the receiver operator curve (ROC) of 0.91, and was 88% sensitive, 82% specific and 85% accurate for identifying cardiomyopathy at optimum score cut-off of 140 points. Although conventional ECG parameters such as the QRS and QTc intervals were also significantly longer in patients than controls (P less than 0.001, BBBs excluded), these conventional parameters were less accurate (area under the ROC = 0.77 and 0.77, respectively) than HF QRS morphological parameters for identifying underlying cardiomyopathy. The total amplitude of the HF QRS complexes, as measured by summed root mean square voltages (RMSVs), also differed between patients and controls (33.8 plus or minus 11.5 vs. 41.5 plus or minus 13.6 mV, respectively, P less than 0.003), but this parameter was even less accurate in distinguishing the two groups (area under ROC = 0.67) than the HF QRS morphologic and conventional ECG parameters. Diagnostic accuracy was optimal (86%) when the RAZ score from the HF QRS ECG and the QTc interval from the conventional ECG were used simultaneously with cut-offs of greater than or equal to 40 points and greater than or equal to 445 ms, respectively. In conclusion, 12-lead HF QRS ECG employing
NASA Astrophysics Data System (ADS)
Schnase, J. L.; Duffy, D. Q.; McInerney, M. A.; Tamkin, G. S.; Thompson, J. H.; Gill, R.; Grieg, C. M.
2012-12-01
MERRA Analytic Services (MERRA/AS) is a cyberinfrastructure resource for developing and evaluating a new generation of climate data analysis capabilities. MERRA/AS supports OBS4MIP activities by reducing the time spent in the preparation of Modern Era Retrospective-Analysis for Research and Applications (MERRA) data used in data-model intercomparison. It also provides a testbed for experimental development of high-performance analytics. MERRA/AS is a cloud-based service built around the Virtual Climate Data Server (vCDS) technology that is currently used by the NASA Center for Climate Simulation (NCCS) to deliver Intergovernmental Panel on Climate Change (IPCC) data to the Earth System Grid Federation (ESGF). Crucial to its effectiveness, MERRA/AS's servers will use a workflow-generated realizable object capability to perform analyses over the MERRA data using the MapReduce approach to parallel storage-based computation. The results produced by these operations will be stored by the vCDS, which will also be able to host code sets for those who wish to explore the use of MapReduce for more advanced analytics. While the work described here will focus on the MERRA collection, these technologies can be used to publish other reanalysis, observational, and ancillary OBS4MIP data to ESGF and, importantly, offer an architectural approach to climate data services that can be generalized to applications and customers beyond the traditional climate research community. In this presentation, we describe our approach, experiences, lessons learned, and plans for the future. (A) MERRA/AS software stack. (B) Example MERRA/AS interfaces.
Proficiency analytical testing program
Groff, J.H.; Schlecht, P.C.
1994-03-01
The Proficiency Analytical Testing (PAT) Program is a collaborative effort of the American Industrial Hygiene Association (AIHA) and researchers at the Centers for Disease Control and Prevention (CDC), National Institute for Occupational Safety and Health (NIOSH). The PAT Program provides quality control reference samples to over 1400 occupational health and environmental laboratories in over 15 countries. Although one objective of the PAT Program is to evaluate the analytical ability of participating laboratories, the primary objective is to assist these laboratories in improving their laboratory performance. Each calendar quarter (designated a round), samples are mailed to participating laboratories and the data are analyzed to evaluate laboratory performance on a series of analyses. Each mailing and subsequent data analysis are completed in time for participants to obtain repeat samples and to correct analytical problems before the next calendar quarter starts. The PAT Program currently includes four sets of samples. A mixture of 3 of the 4 possible metals, and 3 of the 15 possible organic solvents are rotated for each round. Laboratories are evaluated for each analysis by comparing their reported results against an acceptable performance limit for each PAT Program sample the laboratory analyzes. Reference laboratories are preselected to provide the performance limits for each sample. These reference laboratories must meet the following criteria: (1) the laboratory was rated proficient in the last PAT evaluation of all the contaminants in the Program; and (2) the laboratory, if located in the United States, is AIHA accredited. Data are acceptable if they fall within the performance limits. Laboratories are rated based upon performance in the PAT Program over the last year (i.e., four calendar quarters), as well as on individual contaminant performance and overall performance. 1 ref., 3 tabs.
Proficiency analytical testing program
Schlecht, P.C.; Groff, J.H.
1994-06-01
The Proficiency Analytical Testing (PAT) Program is a collaborative effort of the American Industrial Hygiene Association (AIHA) and researchers at the Centers for Disease Control and Prevention, National Institute for Occupational Safety and Health (NIOSH). The PAT Program provides quality control reference samples to over 1400 occupational health and environmental laboratories in over 15 countries. Although one objective of the PAT Program is to evaluate the analytical ability of participating laboratories, the primary objective is to assist these laboratories in improving their laboratory performance. Each calendar quarter (designated a round), samples are mailed to participating laboratories and the data are analyzed to evaluate laboratory performance on a series of analyses. Each mailing and subsequent data analysis is completed in time for participants to obtain repeat samples and to correct analytical problems before the next calendar quarter starts. The PAT Program currently includes four sets of samples. A mixture of 3 of the 4 possible metals, and 3 of the 15 possible organic solvents are rotated for each round. Laboratories are evaluated for each analysis by comparing their reported results against an acceptable performance limit for each PAT Program sample the laboratory analyzes. Reference laboratories are preselected to provide the performance limits for each sample. These reference laboratories must meet the following criteria: (1) the laboratory was rated proficient in the last PAT evaluation of all the contaminants in the Program; and (2) the laboratory, if located in the United States, is AIHA accredited. Data are acceptable if they fall within the performance limits. Laboratories are rated based upon performance in the PAT Program over the last year (i.e., four calendar quarters), as well as on individual contaminant performance and overall performance. 1 ref., 3 tabs.
Analytical chemistry of nickel.
Stoeppler, M
1984-01-01
Analytical chemists are faced with nickel contents in environmental and biological materials ranging from the mg/kg down to the ng/kg level. Sampling and sample treatment have to be performed with great care at lower levels, and this also applies to enrichment and separation procedures. The classical determination methods formerly used have been replaced almost entirely by different forms of atomic absorption spectrometry. Electroanalytical methods are also of increasing importance and at present provide the most sensitive approach. Despite the powerful methods available, achieving reliable results is still a challenge for the analyst requiring proper quality control measures.
Automation of analytical isotachophoresis
NASA Technical Reports Server (NTRS)
Thormann, Wolfgang
1985-01-01
The basic features of automation of analytical isotachophoresis (ITP) are reviewed. Experimental setups consisting of narrow bore tubes which are self-stabilized against thermal convection are considered. Sample detection in free solution is discussed, listing the detector systems presently used or expected to be of potential use in the near future. The combination of a universal detector measuring the evolution of ITP zone structures with detector systems specific to desired components is proposed as a concept of an automated chemical analyzer based on ITP. Possible miniaturization of such an instrument by means of microlithographic techniques is discussed.
Analytical estimation of the parameters of autodyne lidar.
Koganov, Gennady A; Shuker, Reuben; Gordov, Evgueni P
2002-11-20
An analytical approach for a calculation of the parameters of autodyne lidar is presented. Approximate expressions connecting the absorption coefficient and the distance to the remote target with both the lidar parameters and the measured quantities are obtained. These expressions allow one to retrieve easily the information about the atmosphere from the experimental data. PMID:12463256
Continuous-wave (F + H(2)) chemical lasers: a temperature-dependent analytical diffusion model.
Herbelin, J M
1976-01-01
The development of an analytical model for predicting the performance of HF lasers that result from the mixing of atomic fluorine with molecular hydrogen in continuously flowing systems is described. The model combines a temperature-dependent solution for a premixed laser system with laminar or turbulent flame-sheet mixing schemes to generate closed-form expressions for the two conditions of constant pressure (simulating a free jet) and constant density (simulating a partially confined flow). The various approximations, including a fully communicating cavity and characteristic reaction and deactivation lifetimes, are discussed. Scaling laws that relate power to the total pressure and nozzle parameters are developed. Comparison with exact numerical treatments for a wide range of conditions reveals that the model is consistently accurate to ~10%. Finally, the sensitivity of the predictions to the kinetic rate package and the utility of the model for performing parameter studies are indicated.
Analytical and numerical analyses of the micromechanics of soft fibrous connective tissues.
deBotton, Gal; Oren, Tal
2013-01-01
State of the art research and treatment of biological tissues require accurate and efficient methods for describing their mechanical properties. Indeed, micromechanics-motivated approaches provide a systematic method for elevating relevant data from the microscopic level to the macroscopic one. In this work, the mechanical responses of hyperelastic tissues with one and two families of collagen fibers are analyzed by application of a new variational estimate accounting for their histology and the behaviors of their constituents. The resulting closed-form expressions are used to determine the overall response of the wall of a healthy human coronary artery. To demonstrate the accuracy of the proposed method, these predictions are compared with corresponding 3D finite element simulations of a periodic unit cell of the tissue with two families of fibers. Throughout, the analytical predictions for the highly nonlinear and anisotropic tissue are in agreement with the numerical simulations.
Analytical Modeling of Squeeze Film Damping in Dual Axis Torsion Microactuators
NASA Astrophysics Data System (ADS)
Moeenfard, Hamid
2015-10-01
In this paper, the problem of squeeze film damping in dual axis torsion microactuators is modeled and closed form expressions are provided for damping torques around tilting axes of the actuator. The Reynolds equation, which governs the pressure distribution underneath the actuator, is linearized. The resulting equation is then solved analytically. The obtained pressure distribution is used to calculate the normalized damping torques around tilting axes of the actuator. Dependence of the damping torques on the design parameters of the dual axis torsion actuator is studied. It is observed that with proper selection of the actuator's aspect ratio, damping torque along one of the tilting directions can be eliminated. It is shown that when the tilting angles of the actuator are small, squeeze film damping would act like a linear viscous damping. The results of this paper can be used for accurate dynamical modeling and control of torsion dual axis microactuators.
Bruce, S D; Higinbotham, J; Marshall, I; Beswick, P H
2000-01-01
The approximation of the Voigt line shape by the linear summation of Lorentzian and Gaussian line shapes of equal width is well documented and has proved to be a useful function for modeling in vivo (1)H NMR spectra. We show that the error in determining peak areas is less than 0.72% over a range of simulated Voigt line shapes. Previous work has concentrated on empirical analysis of the Voigt function, yielding accurate expressions for recovering the intrinsic Lorentzian component of simulated line shapes. In this work, an analytical approach to the approximation is presented which is valid for the range of Voigt line shapes in which either the Lorentzian or Gaussian component is dominant. With an empirical analysis of the approximation, the direct recovery of T(2) values from simulated line shapes is also discussed. PMID:10617435
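The approximation described above, a linear sum of Lorentzian and Gaussian line shapes of equal width, is a standard pseudo-Voigt construction and can be sketched directly; the function names and mixing parameter eta are illustrative, and the paper's empirically fitted coefficients are not reproduced here:

```python
import math


def lorentzian(x, fwhm):
    """Unit-area Lorentzian centred at zero with the given FWHM."""
    g = fwhm / 2.0
    return (g / math.pi) / (x * x + g * g)


def gaussian(x, fwhm):
    """Unit-area Gaussian centred at zero with the given FWHM."""
    s = fwhm / (2.0 * math.sqrt(2.0 * math.log(2.0)))
    return math.exp(-x * x / (2.0 * s * s)) / (s * math.sqrt(2.0 * math.pi))


def pseudo_voigt(x, fwhm, eta):
    """Linear combination of a Lorentzian and a Gaussian of equal width;
    eta in [0, 1] weights the Lorentzian component."""
    return eta * lorentzian(x, fwhm) + (1.0 - eta) * gaussian(x, fwhm)
```

Because both components have unit area, the mixture also integrates to one, which is why peak-area estimates from this approximation stay within the sub-percent errors quoted in the abstract.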
Quality Indicators for Learning Analytics
ERIC Educational Resources Information Center
Scheffel, Maren; Drachsler, Hendrik; Stoyanov, Slavi; Specht, Marcus
2014-01-01
This article proposes a framework of quality indicators for learning analytics that aims to standardise the evaluation of learning analytics tools and to provide a means to capture evidence for the impact of learning analytics on educational practices in a standardised manner. The criteria of the framework and its quality indicators are based on…
NASA Astrophysics Data System (ADS)
Daeppen, W.
1980-11-01
In the free energy method statistical mechanical models are used to construct a free energy function of the plasma. The equilibrium composition for given temperature and density is found where the free energy is a minimum. Until now the free energy could not be expressed analytically, because the contributions from the partially degenerate electrons and from the inner degrees of freedom of the bound particles had to be evaluated numerically. In the present paper further simplifications are made to obtain an analytic expression for the free energy. Thus the minimum is rapidly found using a second order algorithm, whereas until now numerical first order derivatives and a steepest-descent method had to be used. Consequently time-consuming computations are avoided and the analytical version of the free energy method has successfully been incorporated into the stellar evolution programmes at Geneva Observatory. No use of thermodynamical tables is made, either. Although some accuracy is lost by the simplified analytical expression, the main advantages of the free energy method over simple ideal-gas and Saha-equation subprogrammes (as used in the stellar programmes mentioned) are still kept. The relative errors of the simplifications made here are estimated and they are shown not to exceed 10% altogether. Densities up to those encountered in low-mass main-sequence stars can be treated within the region of validity of the method. Higher densities imply less accurate results. Nonetheless they are consistent so that they cannot disturb the numerical integration of the equilibrium equation in the stellar evolution model. The input quantities of the free energy method presented here are either temperature and density or temperature and pressure, the latter require a rapid numerical Legendre transformation which has been developed here.
Accurately Mapping M31's Microlensing Population
NASA Astrophysics Data System (ADS)
Crotts, Arlin
2004-07-01
We propose to augment an existing microlensing survey of M31 with source identifications provided by a modest amount of ACS {and WFPC2 parallel} observations to yield an accurate measurement of the masses responsible for microlensing in M31, and presumably much of its dark matter. The main benefit of these data is the determination of the physical {or "einstein"} timescale of each microlensing event, rather than an effective {"FWHM"} timescale, allowing masses to be determined more than twice as accurately as without HST data. The einstein timescale is the ratio of the lensing cross-sectional radius and relative velocities. Velocities are known from kinematics, and the cross-section is directly proportional to the {unknown} lensing mass. We cannot easily measure these quantities without knowing the amplification, hence the baseline magnitude, which requires the resolution of HST to find the source star. This makes a crucial difference because M31 lens mass determinations can be more accurate than those towards the Magellanic Clouds through our Galaxy's halo {for the same number of microlensing events} due to the better constrained geometry in the M31 microlensing situation. Furthermore, our larger survey, just completed, should yield at least 100 M31 microlensing events, more than any Magellanic survey. A small amount of ACS+WFPC2 imaging will deliver the potential of this large database {about 350 nights}. For the whole survey {and a delta-function mass distribution} the mass error should approach only about 15%, or about 6% error in slope for a power-law distribution. These results will better allow us to pinpoint the lens halo fraction, and the shape of the halo lens spatial distribution, and allow generalization/comparison of the nature of halo dark matter in spiral galaxies. In addition, we will be able to establish the baseline magnitude for about 50,000 variable stars, as well as measure an unprecedentedly detailed color-magnitude diagram and luminosity
Accurate upwind methods for the Euler equations
NASA Technical Reports Server (NTRS)
Huynh, Hung T.
1993-01-01
A new class of piecewise linear methods for the numerical solution of the one-dimensional Euler equations of gas dynamics is presented. These methods are uniformly second-order accurate, and can be considered as extensions of Godunov's scheme. With an appropriate definition of monotonicity preservation for the case of linear convection, it can be shown that they preserve monotonicity. Similar to Van Leer's MUSCL scheme, they consist of two key steps: a reconstruction step followed by an upwind step. For the reconstruction step, a monotonicity constraint that preserves uniform second-order accuracy is introduced. Computational efficiency is enhanced by devising a criterion that detects the 'smooth' part of the data where the constraint is redundant. The concept and coding of the constraint are simplified by the use of the median function. A slope steepening technique, which has no effect at smooth regions and can resolve a contact discontinuity in four cells, is described. As for the upwind step, existing and new methods are applied in a manner slightly different from those in the literature. These methods are derived by approximating the Euler equations via linearization and diagonalization. At a 'smooth' interface, Harten, Lax, and Van Leer's one intermediate state model is employed. A modification for this model that can resolve contact discontinuities is presented. Near a discontinuity, either this modified model or a more accurate one, namely, Roe's flux-difference splitting, is used. The current presentation of Roe's method, via the conceptually simple flux-vector splitting, not only establishes a connection between the two splittings, but also leads to an admissibility correction with no conditional statement, and an efficient approximation to Osher's approximate Riemann solver. These reconstruction and upwind steps result in schemes that are uniformly second-order accurate and economical at smooth regions, and yield high resolution at discontinuities.
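The monotonicity-constrained reconstruction step can be illustrated with a standard minmod-of-three (monotonized central) slope limiter; this is a sketch in the spirit of the constraint described, not the paper's exact median-based formulation, and the function names are illustrative:

```python
def minmod3(a, b, c):
    """Return the smallest-magnitude argument when all three share a sign,
    and zero otherwise (i.e. at a local extremum of the data)."""
    if a > 0 and b > 0 and c > 0:
        return min(a, b, c)
    if a < 0 and b < 0 and c < 0:
        return max(a, b, c)
    return 0.0


def limited_slope(u_left, u_mid, u_right):
    """Monotonicity-constrained cell slope for piecewise-linear reconstruction:
    the central difference, limited against twice the one-sided differences
    so the reconstructed profile introduces no new extrema."""
    central = 0.5 * (u_right - u_left)
    return minmod3(central, 2.0 * (u_mid - u_left), 2.0 * (u_right - u_mid))
```

On smooth monotone data the limiter returns the second-order central slope (the regime where the abstract notes the constraint is redundant), while at extrema it drops to zero, which is what preserves monotonicity.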
Accurate measurement of unsteady state fluid temperature
NASA Astrophysics Data System (ADS)
Jaremkiewicz, Magdalena
2016-07-01
In this paper, two accurate methods for determining the transient fluid temperature were presented. Measurements were conducted for boiling water since its temperature is known. At the beginning the thermometers are at the ambient temperature and next they are immediately immersed into saturated water. The measurements were carried out with two thermometers of different construction but with the same housing outer diameter equal to 15 mm. One of them is a K-type industrial thermometer widely available commercially. The temperature indicated by the thermometer was corrected considering the thermometers as the first or second order inertia devices. The new design of a thermometer was proposed and also used to measure the temperature of boiling water. Its characteristic feature is a cylinder-shaped housing with the sheath thermocouple located in its center. The temperature of the fluid was determined based on measurements taken in the axis of the solid cylindrical element (housing) using the inverse space marching method. Measurements of the transient temperature of the air flowing through the wind tunnel using the same thermometers were also carried out. The proposed measurement technique provides more accurate results compared with measurements using industrial thermometers in conjunction with simple temperature correction using the inertial thermometer model of the first or second order. By comparing the results, it was demonstrated that the new thermometer allows obtaining the fluid temperature much faster and with higher accuracy in comparison to the industrial thermometer. Accurate measurements of the fast changing fluid temperature are possible due to the low inertia thermometer and fast space marching method applied for solving the inverse heat conduction problem.
The first accurate description of an aurora
NASA Astrophysics Data System (ADS)
Schröder, Wilfried
2006-12-01
As technology has advanced, the scientific study of auroral phenomena has increased by leaps and bounds. A look back at the earliest descriptions of aurorae offers an interesting look into how medieval scholars viewed the subjects that we study. Although there are earlier fragmentary references in the literature, the first accurate description of the aurora borealis appears to be that published by the German Catholic scholar Konrad von Megenberg (1309-1374) in his book Das Buch der Natur (The Book of Nature). The book was written between 1349 and 1350.
New law requires 'medically accurate' lesson plans.
1999-09-17
The California Legislature has passed a bill requiring all textbooks and materials used to teach about AIDS be medically accurate and objective. Statements made within the curriculum must be supported by research conducted in compliance with scientific methods, and published in peer-reviewed journals. Some of the current lesson plans were found to contain scientifically unsupported and biased information. In addition, the bill requires material to be "free of racial, ethnic, or gender biases." The legislation is supported by a wide range of interests, but opposed by the California Right to Life Education Fund, because they believe it discredits abstinence-only material.
Accurate density functional thermochemistry for larger molecules.
Raghavachari, K.; Stefanov, B. B.; Curtiss, L. A.; Lucent Tech.
1997-06-20
Density functional methods are combined with isodesmic bond separation reaction energies to yield accurate thermochemistry for larger molecules. Seven different density functionals are assessed for the evaluation of heats of formation, Delta H (298 K), for a test set of 40 molecules composed of H, C, O and N. The use of bond separation energies results in a dramatic improvement in the accuracy of all the density functionals. The B3-LYP functional has the smallest mean absolute deviation from experiment (1.5 kcal/mol).
Universality: Accurate Checks in Dyson's Hierarchical Model
NASA Astrophysics Data System (ADS)
Godina, J. J.; Meurice, Y.; Oktay, M. B.
2003-06-01
In this talk we present high-accuracy calculations of the susceptibility near βc for Dyson's hierarchical model in D = 3. Using linear fitting, we estimate the leading (γ) and subleading (Δ) exponents. Independent estimates are obtained by calculating the first two eigenvalues of the linearized renormalization group transformation. We found γ = 1.29914073 ± 10^-8 and Δ = 0.4259469 ± 10^-7 independently of the choice of local integration measure (Ising or Landau-Ginzburg). After a suitable rescaling, the approximate fixed points for a large class of local measures coincide accurately with a fixed point constructed by Koch and Wittwer.
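The linear-fitting step for the leading exponent can be sketched as a least-squares slope in log-log coordinates, since χ ~ (βc - β)^-γ near criticality. This is an illustrative simplification: the function name and synthetic data are assumptions, and the paper's actual fit also handles the subleading correction governed by Δ:

```python
import math


def fit_exponent(betas, chis, beta_c):
    """Least-squares slope of log(chi) versus log(beta_c - beta); the
    negative of the slope estimates the leading susceptibility exponent
    gamma for data obeying chi ~ (beta_c - beta) ** -gamma."""
    xs = [math.log(beta_c - b) for b in betas]
    ys = [math.log(c) for c in chis]
    n = float(len(xs))
    mx = sum(xs) / n
    my = sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return -num / den
```

On synthetic data generated from an exact power law the fit recovers the input exponent, which is the baseline any subleading-correction analysis builds on.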
NASA Astrophysics Data System (ADS)
Mirkov, Mirko; Sherr, Evan A.; Sierra, Rafael A.; Lloyd, Jenifer R.; Tanghetti, Emil
2006-06-01
Detailed understanding of the thermal processes in biological targets undergoing laser irradiation continues to be a challenging problem. For example, the contemporary pulsed dye laser (PDL) delivers a complex pulse format which presents specific challenges for theoretical understanding and further development. Numerical methods allow for adequate description of the thermal processes, but are lacking for clarifying the effects of the laser parameters. The purpose of this work is to derive a simplified analytical model that can guide the development of future laser designs. A mathematical model of heating and cooling processes in tissue is developed. Exact analytical solutions of the model are found when applied to specific temporal and spatial profiles of heat sources. Solutions are reduced to simple algebraic expressions. An algorithm is presented for approximating realistic cases of laser heating of skin structures by heat sources of the type found to have exact solutions. The simple algebraic expressions are used to provide insight into realistic laser irradiation cases. The model is compared with experiments on purpura threshold radiant exposure for PDL. These include data from four independent groups over a period of 20 years. Two of the data sets are taken from previously published articles. Two more data sets were collected from two groups of patients that were treated with two PDLs (585 and 595 nm) on normal buttocks skin. Laser pulse durations were varied between 0.5 and 40 ms and radiant exposures were varied between 3 and 20 J/cm2. Treatment sites were evaluated 0.5, 1, and 24 hours later to determine purpuric threshold. The analytical model is in excellent agreement with a wide range of experimental data for purpura threshold radiant exposure. The data collected by independent research groups over the last 20 years with PDLs with wavelengths ranged from 577 to 595 nm were described accurately by this model. The simple analytical model provides an accurate
Analytical Chemistry Core Capability Assessment - Preliminary Report
Barr, Mary E.; Farish, Thomas J.
2012-05-16
The concept of 'core capability' can be a nebulous one. Even at a fairly specific level, where core capability equals maintaining essential services, it is highly dependent upon the perspective of the requestor. Samples are submitted to analytical services because the requesters do not have the capability to conduct adequate analyses themselves. Some requests are for general chemical information in support of R and D, process control, or process improvement. Many analyses, however, are part of a product certification package and must comply with higher-level customer quality assurance requirements. So which services are essential to that customer - just those for product certification? Does the customer also (indirectly) need services that support process control and improvement? And what is the timeframe? Capability is often expressed in terms of the currently utilized procedures, and most programmatic customers can only plan a few years out, at best. But should core capability consider the long term, where new technologies, aging facilities, and personnel replacements must be considered? These questions, and a multitude of others, explain why attempts to gain long-term consensus on the definition of core capability have consistently failed. This preliminary report will not try to define core capability for any specific program or set of programs. Instead, it will try to address the underlying concerns that drive the desire to determine core capability. Essentially, programmatic customers want to be able to call upon analytical chemistry services to provide all the assays they need, and they don't want to pay for analytical chemistry services they don't currently use (or use infrequently). This report will focus on explaining how the current analytical capabilities and methods evolved to serve a variety of needs, with a focus on why some analytes have multiple analytical techniques, and what determines the infrastructure for these analyses. This information will be
Electron Microprobe Analysis Techniques for Accurate Measurements of Apatite
NASA Astrophysics Data System (ADS)
Goldoff, B. A.; Webster, J. D.; Harlov, D. E.
2010-12-01
Apatite [Ca5(PO4)3(F, Cl, OH)] is a ubiquitous accessory mineral in igneous, metamorphic, and sedimentary rocks. The mineral contains halogens and hydroxyl ions, which can provide important constraints on fugacities of volatile components in fluids and other phases in igneous and metamorphic environments in which apatite has equilibrated. Accurate measurements of these components in apatite are therefore necessary. Analyzing apatite by electron microprobe (EMPA), which is a commonly used geochemical analytical technique, has often been found to be problematic and previous studies have identified sources of error. For example, Stormer et al. (1993) demonstrated that the orientation of an apatite grain relative to the incident electron beam could significantly affect the concentration results. In this study, a variety of alternative EMPA operating conditions for apatite analysis were investigated: a range of electron beam settings, count times, crystal grain orientations, and calibration standards were tested. Twenty synthetic anhydrous apatite samples that span the fluorapatite-chlorapatite solid solution series, and whose halogen concentrations were determined by wet chemistry, were analyzed. Accurate measurements of these samples were obtained with many EMPA techniques. One effective method includes setting a static electron beam to 10-15nA, 15kV, and 10 microns in diameter. Additionally, the apatite sample is oriented with the crystal’s c-axis parallel to the slide surface and the count times are moderate. Importantly, the F and Cl EMPA concentrations are in extremely good agreement with the wet-chemical data. We also present EMPA operating conditions and techniques that are problematic and should be avoided. J.C. Stormer, Jr. et al., Am. Mineral. 78 (1993) 641-648.
Analytical phase diagrams for colloids and non-adsorbing polymer.
Fleer, Gerard J; Tuinier, Remco
2008-11-01
We review the free-volume theory (FVT) of Lekkerkerker et al. [Europhys. Lett. 20 (1992) 559] for the phase behavior of colloids in the presence of non-adsorbing polymer and we extend this theory in several aspects: (i) We take the solvent into account as a separate component and show that the natural thermodynamic parameter for the polymer properties is the insertion work Pi(v), where Pi is the osmotic pressure of the (external) polymer solution and v the volume of a colloid particle. (ii) Curvature effects are included along the lines of Aarts et al. [J. Phys.: Condens. Matt. 14 (2002) 7551] but we find accurate simple power laws which simplify the mathematical procedure considerably. (iii) We find analytical forms for the first, second, and third derivatives of the grand potential, needed for the calculation of the colloid chemical potential, the pressure, gas-liquid critical points and the critical endpoint (cep), where the (stable) critical line ends and then coincides with the triple point. This cep determines the boundary condition for a stable liquid. We first apply these modifications to the so-called colloid limit, where the size ratio q(R)=R/a between the radius of gyration R of the polymer and the particle radius a is small. In this limit the binodal polymer concentrations are below overlap: the depletion thickness delta is nearly equal to R, and Pi can be approximated by the ideal (van't Hoff) law Pi=Pi(0)=phi/N, where phi is the polymer volume fraction and N the number of segments per chain. The results are close to those of the original Lekkerkerker theory. However, our analysis enables very simple analytical expressions for the polymer and colloid concentrations in the critical and triple points and along the binodals as a function of q(R). Also the position of the cep is found analytically. In order to make the model applicable to higher size ratios q(R) (including the so-called protein limit where q(R)>1) further extensions are needed. We
Accurate shear measurement with faint sources
Zhang, Jun; Foucaud, Sebastien; Luo, Wentao E-mail: walt@shao.ac.cn
2015-01-01
For cosmic shear to become an accurate cosmological probe, systematic errors in the shear measurement method must be unambiguously identified and corrected for. Previous work of this series has demonstrated that cosmic shears can be measured accurately in Fourier space in the presence of background noise and finite pixel size, without assumptions on the morphologies of galaxy and PSF. The remaining major source of error is source Poisson noise, due to the finiteness of source photon number. This problem is particularly important for faint galaxies in space-based weak lensing measurements, and for ground-based images of short exposure times. In this work, we propose a simple and rigorous way of removing the shear bias from the source Poisson noise. Our noise treatment can be generalized for images made of multiple exposures through MultiDrizzle. This is demonstrated with the SDSS and COSMOS/ACS data. With a large ensemble of mock galaxy images of unrestricted morphologies, we show that our shear measurement method can achieve sub-percent level accuracy even for images of signal-to-noise ratio less than 5 in general, making it the most promising technique for cosmic shear measurement in the ongoing and upcoming large scale galaxy surveys.
Accurate basis set truncation for wavefunction embedding
NASA Astrophysics Data System (ADS)
Barnes, Taylor A.; Goodpaster, Jason D.; Manby, Frederick R.; Miller, Thomas F.
2013-07-01
Density functional theory (DFT) provides a formally exact framework for performing embedded subsystem electronic structure calculations, including DFT-in-DFT and wavefunction theory-in-DFT descriptions. In the interest of efficiency, it is desirable to truncate the atomic orbital basis set in which the subsystem calculation is performed, thus avoiding high-order scaling with respect to the size of the MO virtual space. In this study, we extend a recently introduced projection-based embedding method [F. R. Manby, M. Stella, J. D. Goodpaster, and T. F. Miller III, J. Chem. Theory Comput. 8, 2564 (2012)], 10.1021/ct300544e to allow for the systematic and accurate truncation of the embedded subsystem basis set. The approach is applied to both covalently and non-covalently bound test cases, including water clusters and polypeptide chains, and it is demonstrated that errors associated with basis set truncation are controllable to well within chemical accuracy. Furthermore, we show that this approach allows for switching between accurate projection-based embedding and DFT embedding with approximate kinetic energy (KE) functionals; in this sense, the approach provides a means of systematically improving upon the use of approximate KE functionals in DFT embedding.
Accurate determination of characteristic relative permeability curves
NASA Astrophysics Data System (ADS)
Krause, Michael H.; Benson, Sally M.
2015-09-01
A recently developed technique to accurately characterize sub-core scale heterogeneity is applied to investigate the factors responsible for flowrate-dependent effective relative permeability curves measured on core samples in the laboratory. The dependency of laboratory measured relative permeability on flowrate has long been both supported and challenged by a number of investigators. Studies have shown that this apparent flowrate dependency is a result of both sub-core scale heterogeneity and outlet boundary effects. However, this has only been demonstrated numerically for highly simplified models of porous media. In this paper, flowrate dependency of effective relative permeability is demonstrated using two rock cores, a Berea Sandstone and a heterogeneous sandstone from the Otway Basin Pilot Project in Australia. Numerical simulations of steady-state coreflooding experiments are conducted at a number of injection rates using a single set of input characteristic relative permeability curves. Effective relative permeability is then calculated from the simulation data using standard interpretation methods for calculating relative permeability from steady-state tests. Results show that simplified approaches may be used to determine flowrate-independent characteristic relative permeability provided flow rate is sufficiently high, and the core heterogeneity is relatively low. It is also shown that characteristic relative permeability can be determined at any typical flowrate, and even for geologically complex models, when using accurate three-dimensional models.
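The abstract does not state which parametric form the input characteristic curves took; as an illustrative sketch only, the Brooks-Corey model below is a common choice for such input curves, with all endpoint and exponent values assumed.

```python
# Illustrative Brooks-Corey relative permeability curves.  The parameter
# values (residual saturations, endpoints, exponents) are generic
# assumptions, not the ones used in the study above.

def corey_krw(sw, swr=0.2, sor=0.2, krw_max=0.4, nw=3.0):
    """Wetting-phase (water) relative permeability, Brooks-Corey form."""
    se = (sw - swr) / (1.0 - swr - sor)   # normalized (effective) saturation
    se = min(max(se, 0.0), 1.0)
    return krw_max * se ** nw

def corey_kro(sw, swr=0.2, sor=0.2, kro_max=1.0, no=2.0):
    """Non-wetting-phase relative permeability, Brooks-Corey form."""
    se = (sw - swr) / (1.0 - swr - sor)
    se = min(max(se, 0.0), 1.0)
    return kro_max * (1.0 - se) ** no

if __name__ == "__main__":
    for sw in (0.2, 0.4, 0.6, 0.8):
        print(f"Sw={sw:.1f}  krw={corey_krw(sw):.4f}  kro={corey_kro(sw):.4f}")
```

A single parametric curve pair like this is what "a single set of input characteristic relative permeability curves" refers to: the same functions are fed to the simulator at every injection rate.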
How Accurately can we Calculate Thermal Systems?
Cullen, D; Blomquist, R N; Dean, C; Heinrichs, D; Kalugin, M A; Lee, M; Lee, Y; MacFarlan, R; Nagaya, Y; Trkov, A
2004-04-20
I would like to determine how accurately a variety of neutron transport code packages (code and cross section libraries) can calculate simple integral parameters, such as k_eff, for systems that are sensitive to thermal neutron scattering. Since we will only consider theoretical systems, we cannot really determine absolute accuracy compared to any real system. Therefore rather than accuracy, it would be more precise to say that I would like to determine the spread in answers that we obtain from a variety of code packages. This spread should serve as an excellent indicator of how accurately we can really model and calculate such systems today. Hopefully this will eventually lead to improvements in both our codes and the thermal scattering models that they use. In order to accomplish this I propose a number of extremely simple systems that involve thermal neutron scattering that can be easily modeled and calculated by a variety of neutron transport codes. These are theoretical systems designed to emphasize the effects of thermal scattering, since that is what we are interested in studying. I have attempted to keep these systems very simple, and yet at the same time they include most, if not all, of the important thermal scattering effects encountered in a large, water-moderated, uranium fueled thermal system, i.e., our typical thermal reactors.
Accurate Stellar Parameters for Exoplanet Host Stars
NASA Astrophysics Data System (ADS)
Brewer, John Michael; Fischer, Debra; Basu, Sarbani; Valenti, Jeff A.
2015-01-01
A large impediment to our understanding of planet formation is obtaining a clear picture of planet radii and densities. Although determining precise ratios between planet and stellar host are relatively easy, determining accurate stellar parameters is still a difficult and costly undertaking. High resolution spectral analysis has traditionally yielded precise values for some stellar parameters but stars in common between catalogs from different authors or analyzed using different techniques often show offsets far in excess of their uncertainties. Most analyses now use some external constraint, when available, to break observed degeneracies between surface gravity, effective temperature, and metallicity which can otherwise lead to correlated errors in results. However, these external constraints are impossible to obtain for all stars and can require more costly observations than the initial high resolution spectra. We demonstrate that these discrepancies can be mitigated by use of a larger line list that has carefully tuned atomic line data. We use an iterative modeling technique that does not require external constraints. We compare the surface gravity obtained with our spectral synthesis modeling to asteroseismically determined values for 42 Kepler stars. Our analysis agrees well with only a 0.048 dex offset and an rms scatter of 0.05 dex. Such accurate stellar gravities can reduce the primary source of uncertainty in radii by almost an order of magnitude over unconstrained spectral analysis.
Microarrays, Integrated Analytical Systems
NASA Astrophysics Data System (ADS)
Combinatorial chemistry is used to find materials that form sensor microarrays. This book discusses the fundamentals, and then proceeds to the many applications of microarrays, from measuring gene expression (DNA microarrays) to protein-protein interactions, peptide chemistry, carbohydrate chemistry, electrochemical detection, and microfluidics.
Normality in analytical psychology.
Myers, Steve
2013-12-01
Although C.G. Jung's interest in normality wavered throughout his career, it was one of the areas he identified in later life as worthy of further research. He began his career using a definition of normality which would have been the target of Foucault's criticism, had Foucault chosen to review Jung's work. However, Jung then evolved his thinking to a standpoint that was more aligned to Foucault's own. Thereafter, the post Jungian concept of normality has remained relatively undeveloped by comparison with psychoanalysis and mainstream psychology. Jung's disjecta membra on the subject suggest that, in contemporary analytical psychology, too much focus is placed on the process of individuation to the neglect of applications that consider collective processes. Also, there is potential for useful research and development into the nature of conflict between individuals and societies, and how normal people typically develop in relation to the spectrum between individuation and collectivity.
NASA Astrophysics Data System (ADS)
Lomon, Earle L.; Pacetti, Simone
2016-09-01
The pion electromagnetic form factor and two-pion production in electron-positron collisions are simultaneously fitted by a vector dominance model evolving to perturbative QCD at large momentum transfer. This model was previously successful in simultaneously fitting the nucleon electromagnetic form factors (spacelike region) and the electromagnetic production of nucleon-antinucleon pairs (timelike region). For this pion case dispersion relations are used to produce the analytic connection of the spacelike and timelike regions. The fit to all the data is good, especially for the newer sets of timelike data. The description of high-q² data, in the timelike region, requires one more meson with ρ quantum numbers than listed in the 2014 Particle Data Group review.
ANALYTIC MODELING OF STARSHADES
Cash, Webster
2011-09-01
External occulters, otherwise known as starshades, have been proposed as a solution to one of the highest priority yet technically vexing problems facing astrophysics-the direct imaging and characterization of terrestrial planets around other stars. New apodization functions, developed over the past few years, now enable starshades of just a few tens of meters diameter to occult central stars so efficiently that the orbiting exoplanets can be revealed and other high-contrast imaging challenges addressed. In this paper, an analytic approach to the analysis of these apodization functions is presented. It is used to develop a tolerance analysis suitable for use in designing practical starshades. The results provide a mathematical basis for understanding starshades and a quantitative approach to setting tolerances.
2008-01-15
The Verde Analytic Modules permit the user to ingest openly available data feeds about phenomenology (storm tracks, wind, precipitation, earthquakes, wildfires, and similar natural and manmade power grid disruptions) and forecast power outages, restoration times, customers outaged, and key facilities that will lose power. Damage areas are predicted using historic damage criteria of the affected area. The modules use a cellular automata approach to estimating the distribution circuits assigned to geo-located substations. Population estimates served within the service areas are located within 1 km grid cells and converted to customer counts by conversion through demographic estimation of households and commercial firms within the population cells. Restoration times are estimated by agent-based simulation of restoration crews working according to utility published prioritization calibrated by historic performance.
Analytics for Metabolic Engineering.
Petzold, Christopher J; Chan, Leanne Jade G; Nhan, Melissa; Adams, Paul D
2015-01-01
Realizing the promise of metabolic engineering has been slowed by challenges related to moving beyond proof-of-concept examples to robust and economically viable systems. Key to advancing metabolic engineering beyond trial-and-error research is access to parts with well-defined performance metrics that can be readily applied in vastly different contexts with predictable effects. As the field now stands, research depends greatly on analytical tools that assay target molecules, transcripts, proteins, and metabolites across different hosts and pathways. Screening technologies yield specific information for many thousands of strain variants, while deep omics analysis provides a systems-level view of the cell factory. Efforts focused on a combination of these analyses yield quantitative information of dynamic processes between parts and the host chassis that drive the next engineering steps. Overall, the data generated from these types of assays aid better decision-making at the design and strain construction stages to speed progress in metabolic engineering research.
Analytical formulation of the quantum electromagnetic cross section
NASA Astrophysics Data System (ADS)
Brandsema, Matthew J.; Narayanan, Ram M.; Lanzagorta, Marco
2016-05-01
It has been found that the quantum radar cross section (QRCS) equation can be written in terms of the Fourier transform of the surface atom distribution of the object. This paper uses this form to provide an analytical formulation of the quantum radar cross section by deriving closed form expressions for various geometries. These expressions are compared to the classical radar cross section (RCS) expressions and the quantum advantages are discerned from the differences in the equations. Multiphoton illumination is also briefly discussed.
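The core statement, that the QRCS is proportional to the squared modulus of the Fourier transform of the surface-atom distribution, can be sketched directly for a discrete set of scatterers. This is an illustrative toy evaluation only; the function name, the point-atom model, and the omission of physical prefactors are assumptions, not the paper's closed-form expressions.

```python
import cmath

def qrcs_response(atoms, k_in, k_out):
    """Coherent single-photon response: squared modulus of the Fourier
    transform of a discrete surface-atom distribution, evaluated at the
    momentum transfer q = k_out - k_in (physical prefactors omitted).

    atoms: list of (x, y, z) atom positions; k_in, k_out: 3-vectors.
    """
    q = tuple(o - i for o, i in zip(k_out, k_in))
    amp = sum(cmath.exp(1j * (q[0] * x + q[1] * y + q[2] * z))
              for x, y, z in atoms)
    return abs(amp) ** 2
```

Because the atoms add coherently, N co-located atoms (or any geometry viewed at zero momentum transfer) give a response of N², which is the N² enhancement usually quoted as the quantum advantage over the classical RCS.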
Building pit dewatering: application of transient analytic elements.
Zaadnoordijk, Willem J
2006-01-01
Analytic elements are well suited for the design of building pit dewatering. Wells and drains can be modeled accurately by analytic elements, both nearby to determine the pumping level and at some distance to verify the targeted drawdown at the building site and to estimate the consequences in the vicinity. The ability to shift locations of wells or drains easily makes the design process very flexible. The temporary pumping has transient effects, for which transient analytic elements may be used. This is illustrated using the free, open-source, object-oriented analytic element simulator Tim(SL) for the design of a building pit dewatering near a canal. Steady calculations are complemented with transient calculations. Finally, the bandwidths of the results are estimated using linear variance analysis.
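The transient analytic element for a single pumping well reduces to the classical Theis solution. The sketch below is a minimal stand-in for what a simulator like the one named above evaluates per well; the series truncation and the example parameter values are assumptions for illustration, not taken from the article.

```python
import math

def well_function(u, terms=50):
    """Theis well function W(u) = E1(u) via the standard series
        W(u) = -gamma - ln(u) + sum_{k>=1} (-1)^(k+1) * u^k / (k * k!),
    accurate for the small u typical of dewatering time scales (u < ~1)."""
    gamma = 0.5772156649015329  # Euler-Mascheroni constant
    s = -gamma - math.log(u)
    sign = 1.0
    for k in range(1, terms + 1):
        s += sign * u ** k / (k * math.factorial(k))
        sign = -sign
    return s

def theis_drawdown(Q, T, S, r, t):
    """Drawdown s = Q/(4*pi*T) * W(u), u = r^2 * S / (4*T*t).

    Q: pumping rate (m^3/s), T: transmissivity (m^2/s),
    S: storativity (-), r: distance to well (m), t: time (s)."""
    u = r * r * S / (4.0 * T * t)
    return Q / (4.0 * math.pi * T) * well_function(u)
```

Superposing `theis_drawdown` contributions from several wells or drain segments (and image wells for the canal boundary) is the essence of the transient analytic element design calculation described above.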
Analytic TOF PET reconstruction algorithm within DIRECT data partitioning framework.
Matej, Samuel; Daube-Witherspoon, Margaret E; Karp, Joel S
2016-05-01
Iterative reconstruction algorithms are routinely used in clinical practice; however, analytic algorithms are relevant candidates for quantitative research studies due to their linear behavior. While iterative algorithms also benefit from the inclusion of accurate data and noise models, the widespread use of time-of-flight (TOF) scanners, with their reduced sensitivity to noise and data imperfections, makes analytic algorithms even more promising. In our previous work we developed a novel iterative reconstruction approach (DIRECT: direct image reconstruction for TOF) providing a convenient TOF data partitioning framework and leading to very efficient reconstructions. In this work we have expanded DIRECT to include an analytic TOF algorithm with confidence weighting incorporating models of both TOF and spatial resolution kernels. Feasibility studies using simulated and measured data demonstrate that analytic-DIRECT with appropriate resolution and regularization filters is able to provide matched bias versus variance performance to iterative TOF reconstruction with a matched resolution model.
Large-scale analytical Fourier transform of photomask layouts using graphics processing units
NASA Astrophysics Data System (ADS)
Sakamoto, Julia A.
2015-10-01
Compensation of lens-heating effects during the exposure scan in an optical lithographic system requires knowledge of the heating profile in the pupil of the projection lens. A necessary component in the accurate estimation of this profile is the total integrated distribution of light, relying on the squared modulus of the Fourier transform (FT) of the photomask layout for individual process layers. Requiring a layout representation in pixelated image format, the most common approach is to compute the FT numerically via the fast Fourier transform (FFT). However, the file size for a standard 26-mm×33-mm mask with 5-nm pixels is an overwhelming 137 TB in single precision; the data importing process alone, prior to FFT computation, can render this method highly impractical. A more feasible solution is to handle layout data in a highly compact format with vertex locations of mask features (polygons), which correspond to elements in an integrated circuit, as well as pattern symmetries and repetitions (e.g., GDSII format). Provided the polygons can decompose into shapes for which analytical FT expressions are possible, the analytical approach dramatically reduces computation time and alleviates the burden of importing extensive mask data. Algorithms have been developed for importing and interpreting hierarchical layout data and computing the analytical FT on a graphics processing unit (GPU) for rapid parallel processing, not assuming incoherent imaging. Testing was performed on the active layer of a 392-μm×297-μm virtual chip test structure with 43 substructures distributed over six hierarchical levels. The factor of improvement in the analytical versus numerical approach for importing layout data, performing CPU-GPU memory transfers, and executing the FT on a single NVIDIA Tesla K20X GPU was 1.6×10⁴, 4.9×10³, and 3.8×10³, respectively. Various ideas for algorithm enhancements will be discussed.
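For the simplest decomposition shape, an axis-aligned rectangle, the analytical FT is a product of sinc functions, and the FT of a whole layout follows by linearity. The sketch below illustrates that closed form on the CPU; the function names are hypothetical and the GPU parallelization, hierarchy handling, and GDSII parsing of the actual work are not reproduced.

```python
import cmath
import math

def rect_ft(fx, fy, w, h, x0=0.0, y0=0.0):
    """Closed-form 2-D Fourier transform of an axis-aligned rectangle of
    width w and height h centered at (x0, y0):
        F(fx, fy) = w*h * sinc(w*fx) * sinc(h*fy) * exp(-2j*pi*(fx*x0 + fy*y0))
    with sinc(x) = sin(pi*x)/(pi*x)."""
    def sinc(x):
        return 1.0 if x == 0.0 else math.sin(math.pi * x) / (math.pi * x)
    shift = cmath.exp(-2j * math.pi * (fx * x0 + fy * y0))
    return w * h * sinc(w * fx) * sinc(h * fy) * shift

def layout_ft(fx, fy, rects):
    """FT of a layout as the sum of rectangle FTs (linearity of the FT);
    each entry of rects is (w, h, x0, y0)."""
    return sum(rect_ft(fx, fy, *r) for r in rects)
```

At zero frequency the transform reduces to the total patterned area, a convenient sanity check, and the squared modulus |layout_ft|² is the quantity that feeds the pupil heating estimate.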
Highly accurate articulated coordinate measuring machine
Bieg, Lothar F.; Jokiel, Jr., Bernhard; Ensz, Mark T.; Watson, Robert D.
2003-12-30
Disclosed is a highly accurate articulated coordinate measuring machine, comprising a revolute joint, comprising a circular encoder wheel, having an axis of rotation; a plurality of marks disposed around at least a portion of the circumference of the encoder wheel; bearing means for supporting the encoder wheel, while permitting free rotation of the encoder wheel about the wheel's axis of rotation; and a sensor, rigidly attached to the bearing means, for detecting the motion of at least some of the marks as the encoder wheel rotates; a probe arm, having a proximal end rigidly attached to the encoder wheel, and having a distal end with a probe tip attached thereto; and coordinate processing means, operatively connected to the sensor, for converting the output of the sensor into a set of cylindrical coordinates representing the position of the probe tip relative to a reference cylindrical coordinate system.
Practical aspects of spatially high accurate methods
NASA Technical Reports Server (NTRS)
Godfrey, Andrew G.; Mitchell, Curtis R.; Walters, Robert W.
1992-01-01
The computational qualities of high order spatially accurate methods for the finite volume solution of the Euler equations are presented. Two dimensional essentially non-oscillatory (ENO), k-exact, and 'dimension by dimension' ENO reconstruction operators are discussed and compared in terms of reconstruction and solution accuracy, computational cost and oscillatory behavior in supersonic flows with shocks. Inherent steady state convergence difficulties are demonstrated for adaptive stencil algorithms. An exact solution to the heat equation is used to determine reconstruction error, and the computational intensity is reflected in operation counts. Standard MUSCL differencing is included for comparison. Numerical experiments presented include the Ringleb flow for numerical accuracy and a shock reflection problem. A vortex-shock interaction demonstrates the ability of the ENO scheme to excel in simulating unsteady high-frequency flow physics.
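As a point of reference for the reconstruction operators compared above, the sketch below shows the simplest limited second-order reconstruction, MUSCL with a minmod limiter, which is the standard differencing the abstract includes for comparison. It is a generic illustration, not the paper's ENO or k-exact operators.

```python
def minmod(a, b):
    """Minmod slope limiter: the smaller-magnitude slope, zero at extrema."""
    if a * b <= 0.0:
        return 0.0
    return a if abs(a) < abs(b) else b

def muscl_interface_states(u):
    """Second-order MUSCL reconstruction of left/right states at each
    interior cell interface i+1/2 from cell averages u[0..n-1]."""
    n = len(u)
    left, right = [], []
    for i in range(1, n - 2):
        s_l = minmod(u[i] - u[i - 1], u[i + 1] - u[i])
        s_r = minmod(u[i + 1] - u[i], u[i + 2] - u[i + 1])
        left.append(u[i] + 0.5 * s_l)        # state just left of i+1/2
        right.append(u[i + 1] - 0.5 * s_r)   # state just right of i+1/2
    return left, right
```

Smooth (linear) data is reconstructed exactly, while at a discontinuity the limiter clips the slopes so no new extrema are created; ENO schemes achieve the same non-oscillatory property by adaptively selecting the smoothest stencil instead of clipping a fixed one, which is the source of the steady-state convergence difficulties noted above.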
Toward Accurate and Quantitative Comparative Metagenomics.
Nayfach, Stephen; Pollard, Katherine S
2016-08-25
Shotgun metagenomics and computational analysis are used to compare the taxonomic and functional profiles of microbial communities. Leveraging this approach to understand roles of microbes in human biology and other environments requires quantitative data summaries whose values are comparable across samples and studies. Comparability is currently hampered by the use of abundance statistics that do not estimate a meaningful parameter of the microbial community and biases introduced by experimental protocols and data-cleaning approaches. Addressing these challenges, along with improving study design, data access, metadata standardization, and analysis tools, will enable accurate comparative metagenomics. We envision a future in which microbiome studies are replicable and new metagenomes are easily and rapidly integrated with existing data. Only then can the potential of metagenomics for predictive ecological modeling, well-powered association studies, and effective microbiome medicine be fully realized. PMID:27565341
Apparatus for accurately measuring high temperatures
Smith, Douglas D.
1985-01-01
The present invention is a thermometer used for measuring furnace temperatures in the range of about 1800° to 2700 °C. The thermometer comprises a broadband multicolor thermal radiation sensor positioned to be in optical alignment with the end of a blackbody sight tube extending into the furnace. A valve-shutter arrangement is positioned between the radiation sensor and the sight tube, and a chamber for containing a charge of high pressure gas is positioned between the valve-shutter arrangement and the radiation sensor. A momentary opening of the valve-shutter arrangement allows a pulse of the high pressure gas to purge the sight tube of air-borne thermal radiation contaminants, which permits the radiation sensor to accurately measure the thermal radiation emanating from the end of the sight tube.
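The patent does not give the sensor's temperature-inversion math; as an illustrative sketch of how a multicolor radiation sensor can recover temperature, the code below uses ratio (two-color) pyrometry in the Wien approximation. The wavelengths and target temperature in the test are hypothetical.

```python
import math

# Physical constants (SI)
H = 6.62607015e-34   # Planck constant, J*s
C = 2.99792458e8     # speed of light, m/s
KB = 1.380649e-23    # Boltzmann constant, J/K
C2 = H * C / KB      # second radiation constant, m*K (~0.014388)

def wien_radiance(lam, temp):
    """Spectral radiance in the Wien approximation (good when lam*T << c2)."""
    return (2.0 * H * C ** 2 / lam ** 5) * math.exp(-C2 / (lam * temp))

def ratio_temperature(l1, l2, lam1, lam2):
    """Invert the ratio of Wien radiances at two wavelengths:
        ln(L1/L2) = 5*ln(lam2/lam1) + (c2/lam2 - c2/lam1)/T
    so T = (c2/lam2 - c2/lam1) / (ln(L1/L2) - 5*ln(lam2/lam1))."""
    lhs = math.log(l1 / l2) - 5.0 * math.log(lam2 / lam1)
    return (C2 / lam2 - C2 / lam1) / lhs
```

Ratio pyrometry is attractive in a purged blackbody sight tube because an overall attenuation common to both wavelengths cancels in the radiance ratio.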
Micron Accurate Absolute Ranging System: Range Extension
NASA Technical Reports Server (NTRS)
Smalley, Larry L.; Smith, Kely L.
1999-01-01
The purpose of this research is to investigate Fresnel diffraction as a means of obtaining absolute distance measurements with micron or greater accuracy. It is believed that such a system would prove useful to the Next Generation Space Telescope (NGST) as a non-intrusive, non-contact measuring system for use with secondary concentrator station-keeping systems. The present research attempts to validate past experiments and develop ways to apply the phenomena of Fresnel diffraction to micron accurate measurement. This report discusses past research on the phenomena and the basis of the use of Fresnel diffraction for distance metrology. The apparatus used in the recent investigations, the experimental procedures used, and the preliminary results are discussed in detail. Continued research and equipment requirements for extending the effective range of Fresnel diffraction systems are also described.
Accurate Thermal Stresses for Beams: Normal Stress
NASA Technical Reports Server (NTRS)
Johnson, Theodore F.; Pilkey, Walter D.
2003-01-01
Formulations for a general theory of thermoelasticity to generate accurate thermal stresses for structural members of aeronautical vehicles were developed in 1954 by Boley. The formulation also provides three normal stresses and a shear stress along the entire length of the beam. The Poisson effect of the lateral and transverse normal stresses on a thermally loaded beam is taken into account in this theory by employing an Airy stress function. The Airy stress function enables the reduction of the three-dimensional thermal stress problem to a two-dimensional one. Numerical results from the general theory of thermoelasticity are compared to those obtained from strength of materials. It is concluded that the theory of thermoelasticity for prismatic beams proposed in this paper can be used instead of strength of materials when precise stress results are desired.
Accurate metacognition for visual sensory memory representations.
Vandenbroucke, Annelinde R E; Sligte, Ilja G; Barrett, Adam B; Seth, Anil K; Fahrenfort, Johannes J; Lamme, Victor A F
2014-04-01
The capacity to attend to multiple objects in the visual field is limited. However, introspectively, people feel that they see the whole visual world at once. Some scholars suggest that this introspective feeling is based on short-lived sensory memory representations, whereas others argue that the feeling of seeing more than can be attended to is illusory. Here, we investigated this phenomenon by combining objective memory performance with subjective confidence ratings during a change-detection task. This allowed us to compute a measure of metacognition--the degree of knowledge that subjects have about the correctness of their decisions--for different stages of memory. We show that subjects store more objects in sensory memory than they can attend to but, at the same time, have similar metacognition for sensory memory and working memory representations. This suggests that these subjective impressions are not an illusion but accurate reflections of the richness of visual perception. PMID:24549293
Accurate Telescope Mount Positioning with MEMS Accelerometers
NASA Astrophysics Data System (ADS)
Mészáros, L.; Jaskó, A.; Pál, A.; Csépány, G.
2014-08-01
This paper describes the advantages and challenges of applying microelectromechanical accelerometer systems (MEMS accelerometers) in order to attain precise, accurate, and stateless positioning of telescope mounts. This provides a method completely independent of other forms of electronic, optical, mechanical or magnetic feedback or real-time astrometry. Our goal is to reach the subarcminute range, which is considerably smaller than the field of view of conventional imaging telescope systems. Here we present how this subarcminute accuracy can be achieved with very cheap MEMS sensors, and we also detail how our procedures can be extended in order to attain even finer measurements. In addition, our paper discusses how a complete system design can be implemented as part of a telescope control system.
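The stateless tilt measurement underlying such a system is the standard accelerometer gravity-vector formula; the sketch below is a generic illustration (reaching subarcminute accuracy requires the calibration and averaging procedures the paper describes).

```python
import math

def tilt_from_accel(ax, ay, az):
    """Pitch and roll (radians) of a static platform from a 3-axis
    accelerometer's reading of the gravity vector (components in g)."""
    pitch = math.atan2(-ax, math.hypot(ay, az))
    roll = math.atan2(ay, az)
    return pitch, roll

# A mount pitched up by exactly 10 degrees, no roll (synthetic reading):
theta = math.radians(10.0)
pitch, roll = tilt_from_accel(-math.sin(theta), 0.0, math.cos(theta))
print(round(math.degrees(pitch), 6), round(math.degrees(roll), 6))
```

Because the readout depends only on the instantaneous gravity vector, no encoder history is needed, which is what makes the approach "stateless".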
Toward Accurate and Quantitative Comparative Metagenomics
Nayfach, Stephen; Pollard, Katherine S.
2016-01-01
Shotgun metagenomics and computational analysis are used to compare the taxonomic and functional profiles of microbial communities. Leveraging this approach to understand roles of microbes in human biology and other environments requires quantitative data summaries whose values are comparable across samples and studies. Comparability is currently hampered by the use of abundance statistics that do not estimate a meaningful parameter of the microbial community and biases introduced by experimental protocols and data-cleaning approaches. Addressing these challenges, along with improving study design, data access, metadata standardization, and analysis tools, will enable accurate comparative metagenomics. We envision a future in which microbiome studies are replicable and new metagenomes are easily and rapidly integrated with existing data. Only then can the potential of metagenomics for predictive ecological modeling, well-powered association studies, and effective microbiome medicine be fully realized. PMID:27565341
Accurate simulation dynamics of microscopic filaments using ``caterpillar'' Oseen hydrodynamics
NASA Astrophysics Data System (ADS)
Bailey, A. G.; Lowe, C. P.; Pagonabarraga, I.; Lagomarsino, M. Cosentino
2009-10-01
Microscopic semiflexible filaments suspended in a viscous fluid are widely encountered in biophysical problems. The classic example is the flagella used by microorganisms to generate propulsion. Simulating the dynamics of these filaments numerically is complicated because of the coupling between the motion of the filament and that of the surrounding fluid. An attractive idea is to simplify this coupling by modeling the fluid motion by using Stokeslets distributed at equal intervals along the model filament. We show that, with an appropriate choice of the hydrodynamic radii, one can recover accurate hydrodynamic behavior of a filament with a finite cross section without requiring an explicit surface. This is true, however, only if the hydrodynamic radii take specific values and that they differ in the parallel and perpendicular directions leading to a caterpillarlike hydrodynamic shape. Having demonstrated this, we use the model to compare with analytic theory of filament deformation and rotation in the small deformation limit. Generalization of the methodology, including application to simulations using the Rotne-Prager tensor, is discussed.
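The anisotropy the authors exploit can be seen directly in the Oseen (Stokeslet) tensor itself. This small sketch, written for illustration rather than taken from their simulation code, evaluates the tensor and its factor-of-two parallel/perpendicular coupling.

```python
import math

def oseen_tensor(rx, ry, rz, viscosity=1.0):
    """Oseen (Stokeslet) tensor J_ij = (delta_ij + r_i r_j / r^2) / (8 pi mu r):
    the fluid velocity field produced by a unit point force at separation r."""
    r2 = rx * rx + ry * ry + rz * rz
    r = math.sqrt(r2)
    pre = 1.0 / (8.0 * math.pi * viscosity * r)
    comp = (rx, ry, rz)
    return [[pre * ((1.0 if i == j else 0.0) + comp[i] * comp[j] / r2)
             for j in range(3)] for i in range(3)]

J = oseen_tensor(2.0, 0.0, 0.0)
# Coupling along the separation vector is twice the perpendicular coupling --
# the anisotropy that motivates distinct parallel and perpendicular
# hydrodynamic radii (the "caterpillar" shape).
print(J[0][0] / J[1][1])
```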
Extremely accurate sequential verification of RELAP5-3D
Mesina, George L.; Aumiller, David L.; Buschman, Francis X.
2015-11-19
Large computer programs like RELAP5-3D solve complex systems of governing, closure and special process equations to model the underlying physics of nuclear power plants. Further, these programs incorporate many other features for physics, input, output, data management, user-interaction, and post-processing. For software quality assurance, the code must be verified and validated before being released to users. For RELAP5-3D, verification and validation are restricted to nuclear power plant applications. Verification means ensuring that the program is built right by checking that it meets its design specifications, comparing coding to algorithms and equations and comparing calculations against analytical solutions and the method of manufactured solutions. Sequential verification performs these comparisons initially, but thereafter only compares code calculations between consecutive code versions to demonstrate that no unintended changes have been introduced. Recently, an automated, highly accurate sequential verification method has been developed for RELAP5-3D. The method also tests that no unintended consequences result from code development in the following code capabilities: repeating a timestep advancement, continuing a run from a restart file, multiple cases in a single code execution, and modes of coupled/uncoupled operation. In conclusion, mathematical analyses of the adequacy of the checks used in the comparisons are provided.
AUTOMATED, HIGHLY ACCURATE VERIFICATION OF RELAP5-3D
George L Mesina; David Aumiller; Francis Buschman
2014-07-01
Computer programs that analyze light water reactor safety solve complex systems of governing, closure and special process equations to model the underlying physics. In addition, these programs incorporate many other features and are quite large. RELAP5-3D[1] has over 300,000 lines of coding for physics, input, output, data management, user-interaction, and post-processing. For software quality assurance, the code must be verified and validated before being released to users. Verification ensures that a program is built right by checking that it meets its design specifications. Recently, increased emphasis has been placed on developing automated verification processes that compare coding against its documented algorithms and equations and compare its calculations against analytical solutions and the method of manufactured solutions[2]. For the first time, the ability exists to ensure that the data transfer operations associated with timestep advancement/repeating and writing/reading a solution to a file have no unintended consequences. To ensure that the code performs as intended over its extensive list of applications, an automated and highly accurate verification method has been modified and applied to RELAP5-3D. Furthermore, mathematical analysis of the adequacy of the checks used in the comparisons is provided.
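The core of sequential verification, comparing calculations between consecutive code versions within a tolerance, can be sketched in a few lines. This is an illustrative miniature, not RELAP5-3D's actual tooling.

```python
def sequential_verify(old_values, new_values, rel_tol=1e-12):
    """Sequential verification in miniature: compare the calculations of two
    consecutive code versions and return the indices that drifted beyond a
    relative tolerance (an empty list means no unintended changes)."""
    drifted = []
    for i, (a, b) in enumerate(zip(old_values, new_values)):
        scale = max(abs(a), abs(b), 1e-300)
        if abs(a - b) / scale > rel_tol:
            drifted.append(i)
    return drifted

base = [300.15, 7.0e6, 0.421]  # e.g. a temperature, a pressure, a void fraction
print(sequential_verify(base, [300.15, 7.0e6, 0.421]))        # no drift: []
print(sequential_verify(base, [300.15, 7.0000001e6, 0.421]))  # drift at index 1
```

The same comparison can be wrapped around restart-file continuation and repeated-timestep runs, which is the spirit of the capability checks listed in the abstract.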
The importance of accurate atmospheric modeling
NASA Astrophysics Data System (ADS)
Payne, Dylan; Schroeder, John; Liang, Pang
2014-11-01
This paper will focus on the effect of atmospheric conditions on EO sensor performance using computer models. We have shown the importance of accurately modeling atmospheric effects for predicting the performance of an EO sensor. A simple example demonstrates how real conditions for several sites in China significantly impact image correction, hyperspectral imaging, and remote sensing. The current state-of-the-art model for computing atmospheric transmission and radiance is MODTRAN® 5, developed by the US Air Force Research Laboratory and Spectral Science, Inc. Research by the US Air Force, Navy and Army resulted in the public release of LOWTRAN 2 in the early 1970s, and subsequent releases of LOWTRAN and MODTRAN® have continued until the present. The paper demonstrates the importance of using validated models and locally measured meteorological, atmospheric and aerosol conditions to accurately simulate the atmospheric transmission and radiance. Frequently, default conditions are used, which can produce errors of as much as 75% in these values. This can have a significant impact on remote sensing applications.
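In its simplest form, the transmission part of such a calculation reduces to Beer-Lambert attenuation summed over atmospheric layers. The sketch below uses made-up extinction coefficients and is far simpler than MODTRAN's band-model treatment.

```python
import math

def path_transmission(extinction_per_km, thickness_km):
    """Beer-Lambert transmission through stacked homogeneous layers:
    T = exp(-sum_i k_i * dz_i)."""
    tau = sum(k * dz for k, dz in zip(extinction_per_km, thickness_km))
    return math.exp(-tau)

# Hypothetical slant path: boundary-layer aerosol, a haze layer, clean air
ks = [0.45, 0.20, 0.05]  # extinction coefficients, km^-1 (illustrative)
dz = [1.0, 2.0, 5.0]     # layer path lengths, km
print(round(path_transmission(ks, dz), 4))
```

Swapping default extinction profiles for locally measured ones changes the k values directly, which is why default conditions can be off by large factors.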
Accurate Weather Forecasting for Radio Astronomy
NASA Astrophysics Data System (ADS)
Maddalena, Ronald J.
2010-01-01
The NRAO Green Bank Telescope routinely observes at wavelengths from 3 mm to 1 m. As with all mm-wave telescopes, observing conditions depend upon the variable atmospheric water content. The site provides over 100 days/yr when opacities are low enough for good observing at 3 mm, but winds on the open-air structure reduce the time suitable for 3-mm observing, where pointing is critical. Thus, to maximize productivity, the observing wavelength needs to match weather conditions. For 6 years the telescope has used a dynamic scheduling system (recently upgraded; www.gb.nrao.edu/DSS) that requires accurate multi-day forecasts for winds and opacities. Since opacity forecasts are not provided by the National Weather Service (NWS), I have developed an automated system that takes available forecasts, derives forecasted opacities, and deploys the results on the web in user-friendly graphical overviews (www.gb.nrao.edu/ rmaddale/Weather). The system relies on the "North American Mesoscale" models, which are updated by the NWS every 6 hrs, have a 12 km horizontal resolution, 1 hr temporal resolution, run to 84 hrs, and have 60 vertical layers that extend to 20 km. Each forecast consists of a time series of ground conditions, cloud coverage, etc, and, most importantly, temperature, pressure, and humidity as a function of height. I use Liebe's MWP model (Radio Science, 20, 1069, 1985) to determine the absorption in each layer for each hour for 30 observing wavelengths. Radiative transfer provides, for each hour and wavelength, the total opacity and the radio brightness of the atmosphere, which contributes substantially at some wavelengths to Tsys and the observational noise. Comparisons of measured and forecasted Tsys at 22.2 and 44 GHz imply that the forecasted opacities are good to about 0.01 nepers, which is sufficient for forecasting and accurate calibration. Reliability is high out to 2 days and degrades slowly for longer-range forecasts.
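The layer-by-layer radiative transfer step described above can be sketched as follows; the temperatures and per-layer opacities are illustrative, and the Rayleigh-Jeans form is a simplification of the full calculation.

```python
import math

def sky_brightness(layer_temps_k, layer_opacities):
    """Plane-parallel, Rayleigh-Jeans radiative transfer: each layer emits
    T*(1 - exp(-tau)) and is attenuated by the opacity between it and the
    observer. Layers are ordered from the observer upward. Returns the
    total zenith opacity (nepers) and the sky brightness temperature (K)."""
    total_tau = 0.0
    t_sky = 0.0
    for temp, tau in zip(layer_temps_k, layer_opacities):
        t_sky += temp * (1.0 - math.exp(-tau)) * math.exp(-total_tau)
        total_tau += tau
    return total_tau, t_sky

# Illustrative 3-layer atmosphere (temperatures and opacities assumed):
tau, tsky = sky_brightness([280.0, 250.0, 220.0], [0.02, 0.01, 0.005])
print(round(tau, 3), round(tsky, 2))
```

Running this per forecast hour and per wavelength yields exactly the two quantities the scheduler needs: total opacity and the atmosphere's contribution to Tsys.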
The high cost of accurate knowledge.
Sutcliffe, Kathleen M; Weber, Klaus
2003-05-01
Many business thinkers believe it's the role of senior managers to scan the external environment to monitor contingencies and constraints, and to use that precise knowledge to modify the company's strategy and design. As these thinkers see it, managers need accurate and abundant information to carry out that role. According to that logic, it makes sense to invest heavily in systems for collecting and organizing competitive information. Another school of pundits contends that, since today's complex information often isn't precise anyway, it's not worth going overboard with such investments. In other words, it's not the accuracy and abundance of information that should matter most to top executives--rather, it's how that information is interpreted. After all, the role of senior managers isn't just to make decisions; it's to set direction and motivate others in the face of ambiguities and conflicting demands. Top executives must interpret information and communicate those interpretations--they must manage meaning more than they must manage information. So which of these competing views is the right one? Research conducted by academics Sutcliffe and Weber found that how accurate senior executives are about their competitive environments is indeed less important for strategy and corresponding organizational changes than the way in which they interpret information about their environments. Investments in shaping those interpretations, therefore, may create a more durable competitive advantage than investments in obtaining and organizing more information. And what kinds of interpretations are most closely linked with high performance? Their research suggests that high performers respond positively to opportunities, yet they aren't overconfident in their abilities to take advantage of those opportunities.
MAGNETARS AS HIGHLY MAGNETIZED QUARK STARS: AN ANALYTICAL TREATMENT
Orsaria, M.; Ranea-Sandoval, Ignacio F.; Vucetich, H.
2011-06-10
We present an analytical model of a magnetar as a high-density magnetized quark bag. The effect of strong magnetic fields (B > 5 × 10^16 G) on the equation of state is considered. An analytic expression for the mass-radius relationship is found from the energy variational principle in general relativity. Our results are compared with observational evidence of possible quark and/or hybrid stars.
NASA Astrophysics Data System (ADS)
Zhao, Xiao-mei; Xie, Dong-fan; Li, Qi
2015-02-01
With the development of intelligent transport systems, advanced information feedback strategies have been developed to reduce traffic congestion and enhance capacity. Previous strategies provide accurate information to travelers, yet our simulation results show that accurate information brings negative effects, especially when the information is delayed: travelers prefer the route reported to be in the best condition, but delayed information reflects past rather than current traffic conditions. Travelers therefore make wrong routing decisions, decreasing the capacity, increasing oscillations, and driving the system away from equilibrium. To avoid this negative effect, bounded rationality is taken into account by introducing a boundedly rational threshold BR. When the difference between two routes is less than BR, the routes are chosen with equal probability. The bounded rationality is helpful to improve efficiency in terms of capacity, oscillation, and the gap from the system equilibrium.
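The boundedly rational routing rule can be stated compactly. This sketch assumes a two-route network and invented travel times; it is an illustration of the threshold rule, not the paper's simulation model.

```python
import random

def choose_route(reported_times, br_threshold):
    """Boundedly rational route choice: when the reported travel times differ
    by less than the threshold BR, the traveler is indifferent and picks a
    route uniformly at random; otherwise the fastest reported route is taken."""
    best = min(reported_times)
    if max(reported_times) - best < br_threshold:
        return random.randrange(len(reported_times))
    return reported_times.index(best)

random.seed(0)
print(choose_route([12.0, 15.0], br_threshold=2.0))  # clear winner: route 0
print(choose_route([12.0, 12.5], br_threshold=2.0))  # indifferent: 0 or 1 at random
```

The random tie-breaking inside the BR band is what damps the herding onto a single "best" route when the feedback is stale.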
Analytical orbit predictions with air drag using K-S uniformly regular canonical elements
NASA Astrophysics Data System (ADS)
Xavier James Raj, M.; Sharma, R. K.
elements are constant for unperturbed motion, and the equations permit a uniform formulation of the basic laws of elliptic, parabolic and hyperbolic motion (Stiefel and Scheifele, 1971, p. 250). They have been found to provide accurate short- and long-term orbit predictions numerically, with Earth's zonal harmonic terms J2 to J36 (Sharma and James Raj, 1988). Recently these equations were utilized by the authors to generate analytical solutions for short-term orbit predictions with respect to Earth's zonal harmonic terms J2, J3, J4 (James Raj and Sharma, 2003). In this paper we have extended the K-S uniformly regular canonical equations of motion to include the canonical forces and analytically integrated the resulting equations of motion by a series expansion method with air drag force, assuming the atmosphere to be spherically symmetric with constant density scale height. A non-singular solution up to third-order terms in eccentricity is obtained. Due to symmetry in the equations of motion, only two of the nine equations need be solved analytically to compute the state vector and the change in energy at the end of each revolution. For comparison purposes these equations are integrated numerically with a fixed-step-size 4th-order Runge-Kutta-Gill method with a small step size of half a degree in eccentric anomaly. Numerical experimentation with the analytical solution for a wide range of perigee altitude, eccentricity and orbital inclination has been carried out up to 100 revolutions. The results obtained from the analytical expressions match quite well with the numerically integrated values and show improvement over the results obtained from the third-order theories of King-Hele, Cook & Walker (1960) and Sharma (1992), which were generated with the same atmospheric model.
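The role of the fixed-step 4th-order Runge-Kutta benchmark can be sketched for plain two-body motion. This drag-free planar toy is for illustration only; it omits the K-S regularization, drag terms, and the Gill variant's round-off control.

```python
import math

MU = 398600.4418  # Earth's gravitational parameter, km^3 s^-2

def deriv(state):
    """Planar point-mass two-body dynamics (no drag, no zonal harmonics)."""
    x, y, vx, vy = state
    r3 = (x * x + y * y) ** 1.5
    return [vx, vy, -MU * x / r3, -MU * y / r3]

def rk4_step(state, dt):
    """One fixed-step 4th-order Runge-Kutta advance."""
    k1 = deriv(state)
    k2 = deriv([s + 0.5 * dt * k for s, k in zip(state, k1)])
    k3 = deriv([s + 0.5 * dt * k for s, k in zip(state, k2)])
    k4 = deriv([s + dt * k for s, k in zip(state, k3)])
    return [s + dt / 6.0 * (a + 2.0 * b + 2.0 * c + d)
            for s, a, b, c, d in zip(state, k1, k2, k3, k4)]

# Circular orbit at ~300 km altitude; propagate one period, radius should close.
r0 = 6678.137                                   # km
state = [r0, 0.0, 0.0, math.sqrt(MU / r0)]
period = 2.0 * math.pi * math.sqrt(r0 ** 3 / MU)
n = 5000
for _ in range(n):
    state = rk4_step(state, period / n)
print(abs(math.hypot(state[0], state[1]) - r0) < 1e-3)  # closes to under 1 m
```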
Hanford transuranic analytical capability
McVey, C.B.
1995-02-24
With the current DOE focus on ER/WM programs, an increase in the quantity of waste samples that requires detailed analysis is forecasted. One of the prime areas of growth is the demand for DOE environmental protocol analyses of TRU waste samples. Currently there is no laboratory capacity to support analysis of TRU waste samples in excess of 200 nCi/gm. This study recommends that an interim solution be undertaken to provide these services. By adding two glove boxes in room 11A of 222S the interim waste analytical needs can be met for a period of four to five years or until a front end facility is erected at or near the 222-S facility. The yearly average of samples is projected to be approximately 600 samples. The figure has changed significantly due to budget changes and has been downgraded from 10,000 samples to the 600 level. Until these budget and sample projection changes become firmer, a long term option is not recommended at this time. A revision to this document is recommended by March 1996 to review the long term option and sample projections.
Analytics for Metabolic Engineering
Petzold, Christopher J.; Chan, Leanne Jade G.; Nhan, Melissa; Adams, Paul D.
2015-01-01
Realizing the promise of metabolic engineering has been slowed by challenges related to moving beyond proof-of-concept examples to robust and economically viable systems. Key to advancing metabolic engineering beyond trial-and-error research is access to parts with well-defined performance metrics that can be readily applied in vastly different contexts with predictable effects. As the field now stands, research depends greatly on analytical tools that assay target molecules, transcripts, proteins, and metabolites across different hosts and pathways. Screening technologies yield specific information for many thousands of strain variants, while deep omics analysis provides a systems-level view of the cell factory. Efforts focused on a combination of these analyses yield quantitative information of dynamic processes between parts and the host chassis that drive the next engineering steps. Overall, the data generated from these types of assays aid better decision-making at the design and strain construction stages to speed progress in metabolic engineering research. PMID:26442249
Accurate masses for dispersion-supported galaxies
NASA Astrophysics Data System (ADS)
Wolf, Joe; Martinez, Gregory D.; Bullock, James S.; Kaplinghat, Manoj; Geha, Marla; Muñoz, Ricardo R.; Simon, Joshua D.; Avedo, Frank F.
2010-08-01
We derive an accurate mass estimator for dispersion-supported stellar systems and demonstrate its validity by analysing resolved line-of-sight velocity data for globular clusters, dwarf galaxies and elliptical galaxies. Specifically, by manipulating the spherical Jeans equation we show that the mass enclosed within the 3D deprojected half-light radius r_1/2 can be determined with only mild assumptions about the spatial variation of the stellar velocity dispersion anisotropy as long as the projected velocity dispersion profile is fairly flat near the half-light radius, as is typically observed. We find M_1/2 = 3 G^-1 <σ²_los> r_1/2 ~= 4 G^-1 <σ²_los> R_e, where <σ²_los> is the luminosity-weighted square of the line-of-sight velocity dispersion and R_e is the 2D projected half-light radius. While deceptively familiar in form, this formula is not the virial theorem, which cannot be used to determine accurate masses unless the radial profile of the total mass is known a priori. We utilize this finding to show that all of the Milky Way dwarf spheroidal galaxies (MW dSphs) are consistent with having formed within a halo of a mass of approximately 3 × 10^9 M_solar, assuming a Λ cold dark matter cosmology. The faintest MW dSphs seem to have formed in dark matter haloes that are at least as massive as those of the brightest MW dSphs, despite the almost five orders of magnitude spread in luminosity between them. We expand our analysis to the full range of observed dispersion-supported stellar systems and examine their dynamical I-band mass-to-light ratios Υ^I_1/2. The Υ^I_1/2 versus M_1/2 relation for dispersion-supported galaxies follows a U shape, with a broad minimum near Υ^I_1/2 ~= 3 that spans dwarf elliptical galaxies to normal ellipticals, a steep rise to Υ^I_1/2 ~= 3200 for ultra-faint dSphs and a more shallow rise to Υ^I_1/2 ~= 800 for galaxy cluster spheroids.
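The estimator M_1/2 = 3 G^-1 <σ²_los> r_1/2 is simple enough to evaluate directly; the dispersion and radius below are illustrative dSph-scale numbers, not values from the paper's sample.

```python
G = 4.30091e-6  # Newton's constant in kpc (km/s)^2 / M_sun

def wolf_mass(sigma_los_kms, r_half_kpc):
    """M_1/2 = 3 G^-1 <sigma_los^2> r_1/2: mass (in M_sun) within the
    deprojected half-light radius, from the luminosity-weighted
    line-of-sight velocity dispersion (km/s) and r_1/2 (kpc)."""
    return 3.0 * sigma_los_kms ** 2 * r_half_kpc / G

# Illustrative dSph-scale inputs (assumed): sigma = 9 km/s, r_1/2 = 0.4 kpc
print(f"{wolf_mass(9.0, 0.4):.2e} Msun")
```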
Maximum likelihood molecular clock comb: analytic solutions.
Chor, Benny; Khetan, Amit; Snir, Sagi
2006-04-01
Maximum likelihood (ML) is increasingly used as an optimality criterion for selecting evolutionary trees, but finding the global optimum is a hard computational task. Because no general analytic solution is known, numeric techniques such as hill climbing or expectation maximization (EM) are used in order to find optimal parameters for a given tree. So far, analytic solutions were derived only for the simplest model--three taxa, two state characters, under a molecular clock. Four taxa rooted trees have two topologies--the fork (two subtrees with two leaves each) and the comb (one subtree with three leaves, the other with a single leaf). In a previous work, we devised a closed form analytic solution for the ML molecular clock fork. In this work, we extend the state of the art in the area of analytic solutions for ML trees to the family of all four taxa trees under the molecular clock assumption. The change from the fork topology to the comb incurs a major increase in the complexity of the underlying algebraic system and requires novel techniques and approaches. We combine the ultrametric properties of molecular clock trees with the Hadamard conjugation to derive a number of topology dependent identities. Employing these identities, we substantially simplify the system of polynomial equations. We finally use tools from algebraic geometry (e.g., Gröbner bases, ideal saturation, resultants) and employ symbolic algebra software to obtain analytic solutions for the comb. We show that in contrast to the fork, the comb has no closed form solutions (expressed by radicals in the input data). In general, four taxa trees can have multiple ML points. In contrast, we can now prove that under the molecular clock assumption, the comb has a unique (local and global) ML point. (Such uniqueness was previously shown for the fork.)
Analytic boosted boson discrimination
NASA Astrophysics Data System (ADS)
Larkoski, Andrew J.; Moult, Ian; Neill, Duff
2016-05-01
Observables which discriminate boosted topologies from massive QCD jets are of great importance for the success of the jet substructure program at the Large Hadron Collider. Such observables, while both widely and successfully used, have been studied almost exclusively with Monte Carlo simulations. In this paper we present the first all-orders factorization theorem for a two-prong discriminant based on a jet shape variable, D 2, valid for both signal and background jets. Our factorization theorem simultaneously describes the production of both collinear and soft subjets, and we introduce a novel zero-bin procedure to correctly describe the transition region between these limits. By proving an all orders factorization theorem, we enable a systematically improvable description, and allow for precision comparisons between data, Monte Carlo, and first principles QCD calculations for jet substructure observables. Using our factorization theorem, we present numerical results for the discrimination of a boosted Z boson from massive QCD background jets. We compare our results with Monte Carlo predictions which allows for a detailed understanding of the extent to which these generators accurately describe the formation of two-prong QCD jets, and informs their usage in substructure analyses. Our calculation also provides considerable insight into the discrimination power and calculability of jet substructure observables in general.
The SILAC Fly Allows for Accurate Protein Quantification in Vivo*
Sury, Matthias D.; Chen, Jia-Xuan; Selbach, Matthias
2010-01-01
Stable isotope labeling by amino acids in cell culture (SILAC) is widely used to quantify protein abundance in tissue culture cells. Until now, the only multicellular organism completely labeled at the amino acid level was the laboratory mouse. The fruit fly Drosophila melanogaster is one of the most widely used small animal models in biology. Here, we show that feeding flies with SILAC-labeled yeast leads to almost complete labeling in the first filial generation. We used these “SILAC flies” to investigate sexual dimorphism of protein abundance in D. melanogaster. Quantitative proteome comparison of adult male and female flies revealed distinct biological processes specific for each sex. Using a tudor mutant that is defective for germ cell generation allowed us to differentiate between sex-specific protein expression in the germ line and somatic tissue. We identified many proteins with known sex-specific expression bias. In addition, several new proteins with a potential role in sexual dimorphism were identified. Collectively, our data show that the SILAC fly can be used to accurately quantify protein abundance in vivo. The approach is simple, fast, and cost-effective, making SILAC flies an attractive model system for the emerging field of in vivo quantitative proteomics. PMID:20525996
Accurate lineshape spectroscopy and the Boltzmann constant
Truong, G.-W.; Anstie, J. D.; May, E. F.; Stace, T. M.; Luiten, A. N.
2015-01-01
Spectroscopy has an illustrious history delivering serendipitous discoveries and providing a stringent testbed for new physical predictions, including applications from trace materials detection, to understanding the atmospheres of stars and planets, and even constraining cosmological models. Reaching fundamental-noise limits permits optimal extraction of spectroscopic information from an absorption measurement. Here, we demonstrate a quantum-limited spectrometer that delivers high-precision measurements of the absorption lineshape. These measurements yield a very accurate measurement of the excited-state (6P1/2) hyperfine splitting in Cs and reveal a breakdown in the well-known Voigt spectral profile. We develop a theoretical model that accounts for this breakdown, explaining the observations to within the shot-noise limit. Our model enables us to infer the thermal velocity dispersion of the Cs vapour with an uncertainty of 35 p.p.m. within an hour. This allows us to determine a value for Boltzmann's constant with a precision of 6 p.p.m., and an uncertainty of 71 p.p.m. PMID:26465085
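The Voigt profile whose breakdown the abstract reports is the convolution of a Gaussian (Doppler) and a Lorentzian (pressure-broadening) component. A minimal numpy sketch of that convolution, evaluated by brute force rather than the quantum-limited fitting machinery of the paper (grid extents and the test widths are arbitrary illustrations):

```python
import numpy as np

def gaussian(x, sigma):
    """Doppler (Gaussian) component, unit area, standard deviation sigma."""
    return np.exp(-x**2 / (2.0 * sigma**2)) / (sigma * np.sqrt(2.0 * np.pi))

def lorentzian(x, gamma):
    """Pressure-broadening (Lorentzian) component, HWHM gamma, unit area."""
    return gamma / (np.pi * (x**2 + gamma**2))

def voigt_numeric(x, sigma, gamma, half_width=200.0, n=40001):
    """Voigt profile by direct numerical convolution on a wide grid."""
    t = np.linspace(-half_width, half_width, n)
    dt = t[1] - t[0]
    g = gaussian(t, sigma)
    return np.array([np.sum(g * lorentzian(xi - t, gamma)) * dt
                     for xi in np.atleast_1d(x)])
```

In practice the Voigt function is evaluated via the Faddeeva function rather than explicit convolution; the brute-force form above just makes the definition concrete.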
Accurate free energy calculation along optimized paths.
Chen, Changjun; Xiao, Yi
2010-05-01
The path-based methods of free energy calculation, such as thermodynamic integration and free energy perturbation, are simple in theory, but difficult in practice because in most cases smooth paths do not exist, especially for large molecules. In this article, we present a novel method to build the transition path of a peptide. We use harmonic potentials to restrain its nonhydrogen atom dihedrals in the initial state and set the equilibrium angles of the potentials as those in the final state. Through a series of steps of geometrical optimization, we can construct a smooth and short path from the initial state to the final state. This path can be used to calculate the free energy difference. To validate this method, we apply it to a small 10-ALA peptide and find that the calculated free energy changes in helix-helix and helix-hairpin transitions are both self-convergent and cross-convergent. We also calculate the free energy differences between different stable states of the beta-hairpin trpzip2, and the results show that this method is more efficient than the conventional molecular dynamics method for accurate free energy calculation.
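Thermodynamic integration, the first path-based method mentioned above, evaluates ΔF = ∫₀¹ ⟨∂U/∂λ⟩ dλ along a path connecting the two states. A toy sketch where the ensemble average is known analytically (a harmonic oscillator whose spring constant is switched from k1 to k2; all numerical values are illustrative), so the integrated result can be checked against ΔF = (kT/2) ln(k2/k1):

```python
import numpy as np

def trapezoid(y, x):
    # simple trapezoidal rule (avoids NumPy version differences)
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

kT = 1.0
k1, k2 = 1.0, 4.0                        # illustrative spring constants
lam = np.linspace(0.0, 1.0, 1001)
k_lam = k1 + lam * (k2 - k1)             # linear mixing: U(lam) = 0.5 k(lam) x^2
dU_dlam = 0.5 * (k2 - k1) * kT / k_lam   # <dU/dlam> = 0.5 (k2 - k1) <x^2>, <x^2> = kT/k
dF_ti = trapezoid(dU_dlam, lam)          # thermodynamic integration
dF_exact = 0.5 * kT * np.log(k2 / k1)    # analytic check
```

In a real simulation ⟨∂U/∂λ⟩ would be sampled by molecular dynamics at each λ window; the paper's contribution is constructing a path smooth enough for those averages to converge.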
Accurate adiabatic correction in the hydrogen molecule
NASA Astrophysics Data System (ADS)
Pachucki, Krzysztof; Komasa, Jacek
2014-12-01
A new formalism for the accurate treatment of adiabatic effects in the hydrogen molecule is presented, in which the electronic wave function is expanded in the James-Coolidge basis functions. Systematic increase in the size of the basis set permits estimation of the accuracy. Numerical results for the adiabatic correction to the Born-Oppenheimer interaction energy reveal a relative precision of 10-12 at an arbitrary internuclear distance. Such calculations have been performed for 88 internuclear distances in the range of 0 < R ⩽ 12 bohrs to construct the adiabatic correction potential and to solve the nuclear Schrödinger equation. Finally, the adiabatic correction to the dissociation energies of all rovibrational levels in H2, HD, HT, D2, DT, and T2 has been determined. For the ground state of H2 the estimated precision is 3 × 10-7 cm-1, which is almost three orders of magnitude higher than that of the best previous result. The achieved accuracy removes the adiabatic contribution from the overall error budget of the present day theoretical predictions for the rovibrational levels.
Fast and Provably Accurate Bilateral Filtering.
Chaudhury, Kunal N; Dabhade, Swapnil D
2016-06-01
The bilateral filter is a non-linear filter that uses a range filter along with a spatial filter to perform edge-preserving smoothing of images. A direct computation of the bilateral filter requires O(S) operations per pixel, where S is the size of the support of the spatial filter. In this paper, we present a fast and provably accurate algorithm for approximating the bilateral filter when the range kernel is Gaussian. In particular, for box and Gaussian spatial filters, the proposed algorithm can cut down the complexity to O(1) per pixel for any arbitrary S. The algorithm has a simple implementation involving N+1 spatial filterings, where N is the approximation order. We give a detailed analysis of the filtering accuracy that can be achieved by the proposed approximation in relation to the target bilateral filter. This allows us to estimate the order N required to obtain a given accuracy. We also present comprehensive numerical results to demonstrate that the proposed algorithm is competitive with the state-of-the-art methods in terms of speed and accuracy. PMID:27093722
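For reference, the direct O(S)-per-pixel computation that the paper's algorithm accelerates can be sketched in a few lines (shown in 1-D for brevity; the parameters below are illustrative, and this is the baseline filter, not the authors' fast approximation):

```python
import numpy as np

def bilateral_1d(signal, sigma_s, sigma_r, radius):
    """Direct bilateral filter on a 1-D signal: Gaussian spatial weights
    multiplied by Gaussian range weights, O(S) work per sample."""
    signal = np.asarray(signal, dtype=float)
    offs = np.arange(-radius, radius + 1)
    w_s = np.exp(-offs**2 / (2.0 * sigma_s**2))       # spatial kernel
    padded = np.pad(signal, radius, mode='edge')
    out = np.empty_like(signal)
    for i in range(len(signal)):
        window = padded[i:i + 2 * radius + 1]
        # range kernel: penalize intensity difference from the center sample
        w_r = np.exp(-(window - signal[i])**2 / (2.0 * sigma_r**2))
        w = w_s * w_r
        out[i] = np.sum(w * window) / np.sum(w)
    return out
```

With a small `sigma_r` the range kernel suppresses contributions from across an intensity edge, which is why the filter smooths while preserving edges; the paper's O(1) method reproduces exactly this output using N+1 ordinary spatial filterings.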
Accurate, reliable prototype earth horizon sensor head
NASA Technical Reports Server (NTRS)
Schwarz, F.; Cohen, H.
1973-01-01
The design and performance are described of an accurate and reliable prototype earth sensor head (ARPESH). The ARPESH employs a detection logic 'locator' concept and horizon sensor mechanization which should lead to high-accuracy horizon sensing that is minimally degraded by spatial or temporal variations in sensing attitude from a satellite in orbit around the earth at altitudes near 500 km. An accuracy of horizon location to within 0.7 km has been predicted, independent of meteorological conditions; this corresponds to an error of 0.015 deg at 500 km altitude. Laboratory evaluation of the sensor indicates that this accuracy is achieved. First, the basic operating principles of ARPESH are described; next, detailed design and construction data are presented; and then the performance of the sensor is reported under laboratory conditions, in which the sensor is installed in a simulator that permits it to scan over a blackbody source against a background representing the earth-space interface for various equivalent planet temperatures.
Fast and Accurate Exhaled Breath Ammonia Measurement
Solga, Steven F.; Mudalel, Matthew L.; Spacek, Lisa A.; Risby, Terence H.
2014-01-01
This exhaled breath ammonia method uses a fast and highly sensitive spectroscopic method known as quartz enhanced photoacoustic spectroscopy (QEPAS) that uses a quantum cascade based laser. The monitor is coupled to a sampler that measures mouth pressure and carbon dioxide. The system is temperature controlled and specifically designed to address the reactivity of this compound. The sampler provides immediate feedback to the subject and the technician on the quality of the breath effort. Together with the quick response time of the monitor, this system is capable of accurately measuring exhaled breath ammonia representative of deep lung systemic levels. Because the system is easy to use and produces real time results, it has enabled experiments to identify factors that influence measurements. For example, mouth rinse and oral pH reproducibly and significantly affect results and therefore must be controlled. Temperature and mode of breathing are other examples. As our understanding of these factors evolves, error is reduced, and clinical studies become more meaningful. This system is very reliable and individual measurements are inexpensive. The sampler is relatively inexpensive and quite portable, but the monitor is neither. This limits options for some clinical studies and provides rationale for future innovations. PMID:24962141
MEMS accelerometers in accurate mount positioning systems
NASA Astrophysics Data System (ADS)
Mészáros, László; Pál, András; Jaskó, Attila
2014-07-01
In order to attain precise, accurate and stateless positioning of telescope mounts we apply microelectromechanical accelerometer systems (also known as MEMS accelerometers). In common practice, feedback from the mount position is provided by electronic, optical or magneto-mechanical systems or via real-time astrometric solution based on the acquired images. Hence, MEMS-based systems are completely independent from these mechanisms. Our goal is to investigate the advantages and challenges of applying such devices and to reach the sub-arcminute range, which is well below the field-of-view of conventional imaging telescope systems. We present how this sub-arcminute accuracy can be achieved with very cheap MEMS sensors. Basically, these sensors yield raw output within an accuracy of a few degrees. We show what kind of calibration procedures could exploit spherical and cylindrical constraints between accelerometer output channels in order to achieve the previously mentioned accuracy level. We also demonstrate how our implementation can be inserted into a telescope control system. Although this attainable precision is less than both the resolution of telescope mount drive mechanics and the accuracy of astrometric solutions, the independent nature of attitude determination could significantly increase the reliability of autonomous or remotely operated astronomical observations.
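One common way to exploit the spherical constraint mentioned above: under static conditions every accelerometer sample must lie on a sphere of radius g centered on the per-axis bias vector, which reduces bias estimation to a linear least-squares sphere fit. A minimal sketch on synthetic data (this is a generic calibration idiom, not necessarily the authors' actual procedure; the bias values are made up):

```python
import numpy as np

def fit_sphere(samples):
    """Linear least-squares sphere fit: |x - c|^2 = r^2 rearranges to
    2 c.x + (r^2 - |c|^2) = |x|^2, linear in c and (r^2 - |c|^2)."""
    A = np.hstack([2.0 * samples, np.ones((len(samples), 1))])
    b = np.sum(samples**2, axis=1)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    center = sol[:3]
    radius = np.sqrt(sol[3] + center @ center)
    return center, radius

# synthetic static readings: unit gravity directions scaled by g, shifted by a bias
rng = np.random.default_rng(0)
dirs = rng.normal(size=(100, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
bias = np.array([0.12, -0.30, 0.05])
est_center, est_radius = fit_sphere(bias + 9.81 * dirs)
```

The recovered center is the per-axis offset to subtract from raw readings; per-axis gain errors would generalize the sphere to an ellipsoid fit.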
Analytical theory of mesoscopic Bose-Einstein condensation in an ideal gas
Kocharovsky, Vitaly V.; Kocharovsky, Vladimir V.
2010-03-15
We find the universal structure and scaling of the Bose-Einstein condensation (BEC) statistics and thermodynamics (Gibbs free energy, average energy, heat capacity) for a mesoscopic canonical-ensemble ideal gas in a trap with an arbitrary number of atoms, any volume, and any temperature, including the whole critical region. We identify a universal constraint-cutoff mechanism that makes BEC fluctuations strongly non-Gaussian and is responsible for all unusual critical phenomena of the BEC phase transition in the ideal gas. The main result is an analytical solution to the problem of critical phenomena. It is derived by, first, calculating analytically the universal probability distribution of the noncondensate occupation, or a Landau function, and then using it for the analytical calculation of the universal functions for the particular physical quantities via the exact formulas which express the constraint-cutoff mechanism. We find asymptotics of that analytical solution as well as its simple analytical approximations which describe the universal structure of the critical region in terms of the parabolic cylinder or confluent hypergeometric functions. The obtained results for the order parameter, all higher-order moments of BEC fluctuations, and thermodynamic quantities perfectly match the known asymptotics outside the critical region for both low and high temperature limits. We suggest two- and three-level trap models of BEC and find their exact solutions in terms of the cutoff negative binomial distribution (which tends to the cutoff gamma distribution in the continuous limit) and the confluent hypergeometric distribution, respectively. Also, we present an exactly solvable cutoff Gaussian model of BEC in a degenerate interacting gas. All these exact solutions confirm the universality and constraint-cutoff origin of the strongly non-Gaussian BEC statistics. We introduce a regular refinement scheme for the condensate statistics approximations on the basis of the
Cates, Joshua W.; Vinke, Ruud; Levin, Craig S.
2015-01-01
Excellent timing resolution is required to enhance the signal-to-noise ratio (SNR) gain available from the incorporation of time-of-flight (ToF) information in image reconstruction for positron emission tomography (PET). As the detector's timing resolution improves, so does SNR, reconstructed image quality, and accuracy. This directly impacts the challenging detection and quantification tasks in the clinic. The recognition of these benefits has spurred efforts within the molecular imaging community to determine to what extent the timing resolution of scintillation detectors can be improved and develop near-term solutions for advancing ToF-PET. Presented in this work is a method for calculating the Cramér-Rao lower bound (CRLB) on timing resolution for scintillation detectors with long crystal elements, where the influence of the variation in optical path length of scintillation light on achievable timing resolution is non-negligible. The presented formalism incorporates an accurate, analytical probability density function (PDF) of optical transit time within the crystal to obtain a purely mathematical expression of the CRLB with high-aspect-ratio (HAR) scintillation detectors. This approach enables the statistical limit on timing resolution performance to be analytically expressed for clinically-relevant PET scintillation detectors without requiring Monte Carlo simulation-generated photon transport time distributions. The analytically calculated optical transport PDF was compared with detailed light transport simulations, and excellent agreement was found between the two. The coincidence timing resolution (CTR) between two 3×3×20 mm3 LYSO:Ce crystals coupled to analogue SiPMs was experimentally measured to be 162±1 ps FWHM, approaching the analytically calculated lower bound within 6.5%. PMID:26083559
Analytical laboratory quality audits
Kelley, William D.
2001-06-11
Analytical Laboratory Quality Audits are designed to improve laboratory performance. The success of the audit, as for many activities, is based on adequate preparation, precise performance, well documented and insightful reporting, and productive follow-up. Adequate preparation starts with definition of the purpose, scope, and authority for the audit and the primary standards against which the laboratory quality program will be tested. The scope and technical processes involved lead to determining the needed audit team resources. Contact is made with the auditee and a formal audit plan is developed, approved and sent to the auditee laboratory management. Review of the auditee's quality manual, key procedures and historical information during preparation leads to better checklist development and more efficient and effective use of the limited time for data gathering during the audit itself. The audit begins with the opening meeting that sets the stage for the interactions between the audit team and the laboratory staff. Arrangements are worked out for the necessary interviews and examination of processes and records. The information developed during the audit is recorded on the checklists. Laboratory management is kept informed of issues during the audit so there are no surprises at the closing meeting. The audit report documents whether the management control systems are effective. In addition to findings of nonconformance, positive reinforcement of exemplary practices provides balance and fairness. Audit closure begins with receipt and evaluation of proposed corrective actions from the nonconformances identified in the audit report. After corrective actions are accepted, their implementation is verified. Upon closure of the corrective actions, the audit is officially closed.
Analytical analysis of particle-core dynamics
Batygin, Yuri K
2010-01-01
Particle-core interaction is a well-developed model of halo formation in high-intensity beams. In this paper, we present an analytical solution for averaged, single particle dynamics, around a uniformly charged beam. The problem is analyzed through a sequence of canonical transformations of the Hamiltonian, which describes nonlinear particle oscillations. A closed form expression for maximum particle deviation from the axis is obtained. The results of this study are in good agreement with numerical simulations and with previously obtained data.
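The underlying model can be illustrated numerically: a test particle oscillates in a linear external focusing force plus the space-charge field of a uniform-density core, which is linear inside the core radius and falls off as 1/r outside. A sketch in normalized units (the space-charge scale factor 0.5 and all parameters are arbitrary illustrations, not the paper's values), integrated with a symplectic semi-implicit Euler step rather than the paper's analytical canonical transformations:

```python
import numpy as np

def beam_field(r, a):
    """Radial space-charge field of a uniform-density beam core
    (normalized): linear inside radius a, falling off as 1/r outside."""
    return r / a**2 if abs(r) < a else np.sign(r) / abs(r)

def max_excursion(r0, a=1.0, sc=0.5, dt=1e-3, steps=20000):
    """Track a test particle in a linear focusing force -r plus the
    (defocusing) beam field scaled by sc; return its maximum radius."""
    r, v = r0, 0.0
    rmax = abs(r0)
    for _ in range(steps):
        v += (-r + sc * beam_field(r, a)) * dt   # kick
        r += v * dt                              # drift
        rmax = max(rmax, abs(r))
    return rmax
```

For a particle launched outside the core the motion stays bounded, consistent with the closed-form maximum-deviation expression the paper derives.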
Analytical approach to quasiperiodic beam Coulomb field modeling
NASA Astrophysics Data System (ADS)
Rubtsova, I. D.
2016-09-01
The paper is devoted to modeling of space charge field of quasiperiodic axial- symmetric beam. Particle beam is simulated by charged disks. Two analytical Coulomb field expressions are presented, namely, Fourier-Bessel series and trigonometric polynomial. Both expressions permit the integral representation. It provides the possibility of integro-differential beam dynamics description. Consequently, when beam dynamics optimization problem is considered, it is possible to derive the analytical formula for quality functional gradient and to apply directed optimization methods. In addition, the paper presents the method of testing of space charge simulation code.
Keck, B D; Ognibene, T; Vogel, J S
2010-02-05
Accelerator mass spectrometry (AMS) is an isotope-based measurement technology that utilizes carbon-14 labeled compounds in the pharmaceutical development process to measure compounds at very low concentrations, empowers microdosing as an investigational tool, and extends the utility of ¹⁴C-labeled compounds to dramatically lower levels. It is a form of isotope ratio mass spectrometry that can provide either measurements of total compound equivalents or, when coupled to separation technology such as chromatography, quantitation of specific compounds. The properties of AMS as a measurement technique are investigated here, and the parameters of method validation are shown. AMS, independent of any separation technique to which it may be coupled, is shown to be accurate, linear, precise, and robust. As the sensitivity and universality of AMS are constantly being explored and expanded, this work underpins many areas of pharmaceutical development, including drug metabolism as well as absorption, distribution, and excretion of pharmaceutical compounds, as a fundamental step in drug development. The validation parameters for pharmaceutical analyses were examined for the accelerator mass spectrometry measurement of the ¹⁴C/C ratio, independent of chemical separation procedures. The isotope ratio measurement was specific (owing to the ¹⁴C label), stable across sample storage conditions for at least one year, and linear over 4 orders of magnitude, with an analytical range from one tenth Modern to at least 2000 Modern (instrument specific). Further, accuracy was excellent, between 1 and 3 percent, while precision, expressed as coefficient of variation, was between 1 and 6%, determined primarily by radiocarbon content and the time spent analyzing a sample. Sensitivity, expressed as LOD and LLOQ, was 1 and 10 attomoles of carbon-14 (which can be expressed as compound equivalents) and for a typical small molecule labeled at 10% incorporation with ¹⁴C corresponds to 30 fg
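The validation figures quoted above (accuracy as percent bias against a nominal value, precision as coefficient of variation) are straightforward to compute from replicate measurements; a generic sketch with synthetic numbers, not the paper's data:

```python
import numpy as np

def accuracy_percent(measured, nominal):
    """Accuracy as the mean relative bias of replicates, in percent."""
    return 100.0 * abs(np.mean(measured) - nominal) / nominal

def cv_percent(measured):
    """Precision as the coefficient of variation (sample std / mean), in percent."""
    return 100.0 * np.std(measured, ddof=1) / np.mean(measured)

replicates = np.array([10.1, 9.9, 10.2, 9.8])   # synthetic isotope-ratio replicates
acc = accuracy_percent(replicates, 10.0)
cv = cv_percent(replicates)
```

Note the sample (ddof=1) standard deviation, which is the usual choice for small replicate counts in method validation.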
The Case for Assessment Analytics
ERIC Educational Resources Information Center
Ellis, Cath
2013-01-01
Learning analytics is a relatively new field of inquiry and its precise meaning is both contested and fluid (Johnson, Smith, Willis, Levine & Haywood, 2011; LAK, n.d.). Ferguson (2012) suggests that the best working definition is that offered by the first Learning Analytics and Knowledge (LAK) conference: "the measurement, collection,…
Understanding Education Involving Geovisual Analytics
ERIC Educational Resources Information Center
Stenliden, Linnea
2013-01-01
Handling the vast amounts of data and information available in contemporary society is a challenge. Geovisual Analytics provides technology designed to increase the effectiveness of information interpretation and analytical task solving. To date, little attention has been paid to the role such tools can play in education and to the extent to which…
[Photonic crystals for analytical chemistry].
Chen, Yi; Li, Jincheng
2009-09-01
Photonic crystals, originally created to control the transmission of light, have found their increasing value in the field of analytical chemistry and are probable to become a hot research area soon. This review is hence composed, focusing on their analytical chemistry-oriented applications, including especially their use in chromatography, capillary- and chip-based electrophoresis.
Information Theory in Analytical Chemistry.
ERIC Educational Resources Information Center
Eckschlager, Karel; Stepanek, Vladimir
1982-01-01
Discusses information theory in analytical practice. Topics include information quantities; ways of obtaining formulas for the amount of information in structural, qualitative, and trace analyses; and information measures in comparing and optimizing analytical methods and procedures. Includes tables outlining applications of information theory to…
Towards Accurate Application Characterization for Exascale (APEX)
Hammond, Simon David
2015-09-01
Sandia National Laboratories has been engaged in hardware and software codesign activities for a number of years; indeed, it might be argued that prototyping of clusters as far back as the CPLANT machines, and many large capability resources including ASCI Red and RedStorm, were examples of codesigned solutions. As the research supporting our codesign activities has moved closer to investigating on-node runtime behavior, a natural hunger has grown for detailed analysis of both hardware and algorithm performance from the perspective of low-level operations. The Application Characterization for Exascale (APEX) LDRD was a project conceived to address some of these concerns. Primarily, the research was intended to focus on generating accurate and reproducible low-level performance metrics using tools that could scale to production-class code bases. Alongside this research was an advocacy and analysis role associated with evaluating tools for production use, working with leading industry vendors to develop and refine solutions required by our code teams, and directly engaging with production code developers to form a context for the application analysis and a bridge to the research community within Sandia. On each of these accounts significant progress has been made, particularly, as this report will cover, in the low-level analysis of operations for important classes of algorithms. This report summarizes the development of a collection of tools under the APEX research program and leaves to other SAND and L2 milestone reports the description of codesign progress with Sandia's production users/developers.
Accurate paleointensities - the multi-method approach
NASA Astrophysics Data System (ADS)
de Groot, Lennart
2016-04-01
The accuracy of models describing rapid changes in the geomagnetic field over the past millennia critically depends on the availability of reliable paleointensity estimates. Over the past decade methods to derive paleointensities from lavas (the only recorder of the geomagnetic field that is available all over the globe and through geologic times) have seen significant improvements and various alternative techniques were proposed. The 'classical' Thellier-style approach was optimized and selection criteria were defined in the 'Standard Paleointensity Definitions' (Paterson et al, 2014). The Multispecimen approach was validated, and the importance of additional tests and criteria to assess Multispecimen results was emphasized. Recently, a non-heating, relative paleointensity technique was proposed, the pseudo-Thellier protocol, which shows great potential in both accuracy and efficiency, but currently lacks a solid theoretical underpinning. Here I present work using all three of the aforementioned paleointensity methods on suites of young lavas taken from the volcanic islands of Hawaii, La Palma, Gran Canaria, Tenerife, and Terceira. Many of the sampled cooling units are <100 years old, so the actual field strength at the time of cooling is reasonably well known. Rather intuitively, flows that produce coherent results from two or more different paleointensity methods yield the most accurate estimates of the paleofield. Furthermore, the results for some flows pass the selection criteria for one method, but fail in other techniques. Scrutinizing and combining all acceptable results yielded reliable paleointensity estimates for 60-70% of all sampled cooling units - an exceptionally high success rate. This 'multi-method paleointensity approach' therefore has high potential to provide the much-needed paleointensities to improve geomagnetic field models for the Holocene.
Important Nearby Galaxies without Accurate Distances
NASA Astrophysics Data System (ADS)
McQuinn, Kristen
2014-10-01
The Spitzer Infrared Nearby Galaxies Survey (SINGS) and its offspring programs (e.g., THINGS, HERACLES, KINGFISH) have resulted in a fundamental change in our view of star formation and the ISM in galaxies, and together they represent the most complete multi-wavelength data set yet assembled for a large sample of nearby galaxies. These great investments of observing time have been dedicated to the goal of understanding the interstellar medium, the star formation process, and, more generally, galactic evolution at the present epoch. Nearby galaxies provide the basis for which we interpret the distant universe, and the SINGS sample represents the best studied nearby galaxies. Accurate distances are fundamental to interpreting observations of galaxies. Surprisingly, many of the SINGS spiral galaxies have numerous distance estimates resulting in confusion. We can rectify this situation for 8 of the SINGS spiral galaxies within 10 Mpc at a very low cost through measurements of the tip of the red giant branch. The proposed observations will provide an accuracy of better than 0.1 in distance modulus. Our sample includes such well known galaxies as M51 (the Whirlpool), M63 (the Sunflower), M104 (the Sombrero), and M74 (the archetypal grand design spiral). We are also proposing coordinated parallel WFC3 UV observations of the central regions of the galaxies, rich with high-mass UV-bright stars. As a secondary science goal we will compare the resolved UV stellar populations with integrated UV emission measurements used in calibrating star formation rates. Our observations will complement the growing HST UV atlas of high resolution images of nearby galaxies.
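The quoted accuracy of 0.1 mag in distance modulus translates directly into a fractional distance error; a quick numpy check of that conversion (the 10 Mpc example value is illustrative):

```python
import numpy as np

def distance_mpc(mu):
    """Distance in Mpc from a distance modulus mu = m - M = 5 log10(d / 10 pc)."""
    return 10.0 ** ((mu + 5.0) / 5.0) / 1.0e6

# a 0.1 mag uncertainty in mu corresponds to a ~4.7% relative distance error
frac_err = distance_mpc(30.0 + 0.1) / distance_mpc(30.0) - 1.0
```

The fractional error 10^(0.1/5) - 1 ≈ 4.7% is independent of the distance itself, which is why a 0.1 mag modulus accuracy is a meaningful target across the whole sample.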
Accurate Thermal Conductivities from First Principles
NASA Astrophysics Data System (ADS)
Carbogno, Christian
2015-03-01
In spite of significant research efforts, a first-principles determination of the thermal conductivity at high temperatures has remained elusive. On the one hand, Boltzmann transport techniques that include anharmonic effects in the nuclear dynamics only perturbatively become inaccurate or inapplicable under such conditions. On the other hand, non-equilibrium molecular dynamics (MD) methods suffer from enormous finite-size artifacts in the computationally feasible supercells, which prevent an accurate extrapolation to the bulk limit of the thermal conductivity. In this work, we overcome this limitation by performing ab initio MD simulations in thermodynamic equilibrium that account for all orders of anharmonicity. The thermal conductivity is then assessed from the auto-correlation function of the heat flux using the Green-Kubo formalism. Foremost, we discuss the fundamental theory underlying a first-principles definition of the heat flux using the virial theorem. We validate our approach and in particular the techniques developed to overcome finite time and size effects, e.g., by inspecting silicon, the thermal conductivity of which is particularly challenging to converge. Furthermore, we use this framework to investigate the thermal conductivity of ZrO2, which is known for its high degree of anharmonicity. Our calculations shed light on the heat resistance mechanism active in this material, which eventually allows us to discuss how the thermal conductivity can be controlled by doping and co-doping. This work has been performed in collaboration with R. Ramprasad (University of Connecticut), C. G. Levi and C. G. Van de Walle (University of California Santa Barbara).
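The Green-Kubo relation used above obtains the thermal conductivity from the time integral of the heat-flux autocorrelation function, κ = V/(k_B T²) ∫₀^∞ ⟨J(0)J(t)⟩ dt. A minimal sketch with a model exponentially decaying autocorrelation, whose integral is known to be ⟨J²⟩τ (the decay time, variance, and omitted prefactor are all illustrative, not ab initio values):

```python
import numpy as np

def trapezoid(y, x):
    # simple trapezoidal rule (avoids NumPy version differences)
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

# model heat-flux autocorrelation <J(0)J(t)> = var_J * exp(-t / tau)
tau, var_J = 2.0, 3.0
t = np.linspace(0.0, 50.0, 5001)
acf = var_J * np.exp(-t / tau)
gk_integral = trapezoid(acf, t)   # converges to var_J * tau for t >> tau
```

In an actual ab initio MD calculation the autocorrelation is estimated from the sampled heat-flux time series, and controlling its slowly converging tail is exactly the finite-time problem the abstract discusses.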
Shells on nanowires detected by analytical TEM
NASA Astrophysics Data System (ADS)
Thomas, Jürgen; Gemming, Thomas
2005-09-01
Nanostructures in the form of nanowires or filled nanotubes and nanoparticles covered by shells are of great interest in materials science. They allow the creation of new materials with tailored new properties. For the characterisation of these structures and their shells by means of analytical transmission electron microscopy (TEM), especially by energy dispersive X-ray spectroscopy (EDXS) and electron energy loss spectroscopy (EELS), the accurate analysis of linescan intensity profiles is necessary. A mathematical model is described which is suitable for this analysis. It considers the finite electron beam size, the beam convergence, and the beam broadening within the specimen. It is shown that the beam size influences the measured result of core radius and shell thickness. On the other hand, the influence of the beam broadening within the specimen is negligible. For EELS, the specimen thickness must be smaller than the mean free path for inelastic scattering. Otherwise, artifacts in the signal profile of a nanowire can mimic a nanotube.
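The linescan analysis described above can be illustrated geometrically: the ideal shell signal at probe offset x is the projected chord length through the outer cylinder minus that through the core, blurred by the finite Gaussian probe. A simplified numpy sketch (radii and probe size are arbitrary; beam convergence and in-specimen broadening, which the paper's model includes, are ignored here, and a symmetric scan range is assumed so the probe kernel is centred):

```python
import numpy as np

def chord(x, R):
    """Projected path length through a cylinder of radius R at offset x."""
    return 2.0 * np.sqrt(np.clip(R**2 - np.asarray(x)**2, 0.0, None))

def shell_profile(x, r_core, r_out, beam_sigma):
    """Idealized EDXS linescan of the shell material of a core-shell
    nanowire, blurred by a Gaussian probe of standard deviation beam_sigma."""
    xs = np.linspace(x.min() - 5 * beam_sigma, x.max() + 5 * beam_sigma, 4001)
    ideal = chord(xs, r_out) - chord(xs, r_core)   # shell-only projected thickness
    beam = np.exp(-xs**2 / (2.0 * beam_sigma**2))  # probe kernel, centred at 0
    beam /= beam.sum()
    return np.interp(x, xs, np.convolve(ideal, beam, mode='same'))
```

The shell signal peaks off-axis, near the projected shell walls, and a larger probe washes these peaks out, which is the beam-size effect on apparent core radius and shell thickness noted in the abstract.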
Analytic prediction of airplane equilibrium spin characteristics
NASA Technical Reports Server (NTRS)
Adams, W. M., Jr.
1972-01-01
The nonlinear equations of motion are solved algebraically for conditions for which an airplane is in an equilibrium spin. Constrained minimization techniques are employed in obtaining the solution. Linear characteristics of the airplane about the equilibrium points are also presented and their significance in identifying the stability characteristics of the equilibrium points is discussed. Computer time requirements are small making the method appear potentially applicable in airplane design. Results are obtained for several configurations and are compared with other analytic-numerical methods employed in spin prediction. Correlation with experimental results is discussed for one configuration for which a rather extensive data base was available. A need is indicated for higher Reynolds number data taken under conditions which more accurately simulate a spin.
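The paper's approach of recasting the algebraic equilibrium equations as a minimization problem can be sketched on a toy nonlinear system: find a state x where the residuals f(x) vanish by minimizing |f(x)|². The two residual equations below are hypothetical stand-ins, not the aircraft equations of motion:

```python
import numpy as np
from scipy.optimize import minimize

def residuals(x):
    """Hypothetical stand-in for nonlinear equilibrium equations f(x) = 0
    (illustrative only; not the spin trim equations of the paper)."""
    return np.array([x[0]**2 + x[1] - 2.0,
                     x[0] - x[1]**2 + 0.25])

# recast root finding as minimization of the squared residual norm
res = minimize(lambda x: float(np.sum(residuals(x)**2)), x0=np.array([1.0, 1.0]))
```

A minimized objective near zero certifies an equilibrium point; a strictly positive minimum would instead indicate that no equilibrium exists near the starting guess, which is itself useful information in spin prediction.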
Analytical modeling of the steady radiative shock
NASA Astrophysics Data System (ADS)
Boireau, L.; Bouquet, S.; Michaut, C.; Clique, C.
2006-06-01
In a paper dated 2000 [1], a fully analytical theory of the radiative shock was presented. This early model was used to design [2] radiative shock experiments at the Laboratory for the Use of Intense Lasers (LULI) [3-5]. It became obvious from numerical simulations [6, 7] that this model had to be improved in order to accurately reproduce experiments. In this communication, we present a new theory in which the ionization rates in the unshocked (Z̄₁) and shocked (Z̄₂ ≠ Z̄₁) material, respectively, are included. Associated changes in excitation energy are also taken into account. We study the influence of these effects on the compression and temperature in the shocked medium.
Comparing numerical and analytic approximate gravitational waveforms
NASA Astrophysics Data System (ADS)
Afshari, Nousha; Lovelace, Geoffrey; SXS Collaboration
2016-03-01
A direct observation of gravitational waves will test Einstein's theory of general relativity under the most extreme conditions. The Laser Interferometer Gravitational-Wave Observatory, or LIGO, began searching for gravitational waves in September 2015 with three times the sensitivity of initial LIGO. To help Advanced LIGO detect as many gravitational waves as possible, a major research effort is underway to accurately predict the expected waves. In this poster, I will explore how the gravitational waveform produced by a long binary-black-hole inspiral, merger, and ringdown is affected by how fast the larger black hole spins. In particular, I will present results from simulations of merging black holes, completed using the Spectral Einstein Code (black-holes.org/SpEC.html), including some new, long simulations designed to mimic black hole-neutron star mergers. I will present comparisons of the numerical waveforms with analytic approximations.
Road transportable analytical laboratory (RTAL) system
Finger, S.M.
1996-12-31
Remediation of DOE contaminated areas requires extensive sampling and analysis. Reliable, road transportable, fully independent laboratory systems that could perform on-site a full range of analyses meeting high levels of quality assurance and control would accelerate and thereby reduce the cost of cleanup and remediation efforts by (1) providing critical analytical data more rapidly, and (2) eliminating the handling, shipping, and manpower associated with sample shipments. Goals of RTAL are to meet the needs of DOE for rapid, accurate analysis of a wide variety of hazardous and radioactive contaminants in soil, groundwater, and surface waters. The system consists of a set of individual laboratory modules deployable independently or together to meet specific site needs: radioanalytical lab, organic chemical analysis lab, inorganic chemical analysis lab, aquatic biomonitoring lab, field analytical lab, robotics base station, decontamination/sample screening module, and operations control center. The goal of this integrated system is a sample throughput of 20 samples/day, providing a full range of accurate analyses on each sample within 16 h (after sample preparation), compared with the 45-day turnaround time in commercial laboratories. A prototype RTAL consisting of 5 modules was built and demonstrated at Fernald (FEMP)'s OU-1 Waste Pits during the 1st-3rd quarters of FY96 (including the '96 Blizzard). All performance and operational goals were met or exceeded: as many as 50 sample analyses/day were achieved, depending on the procedure; sample turnaround times were 50-67% less than FEMP's best times; and RTAL costs were projected to be 30% less than FEMP costs for large-volume analyses in fixed laboratories.
Accurate theoretical chemistry with coupled pair models.
Neese, Frank; Hansen, Andreas; Wennmohs, Frank; Grimme, Stefan
2009-05-19
Quantum chemistry has found its way into the everyday work of many experimental chemists. Calculations can predict the outcome of chemical reactions, afford insight into reaction mechanisms, and be used to interpret structure and bonding in molecules. Thus, contemporary theory offers tremendous opportunities in experimental chemical research. However, even with present-day computers and algorithms, we cannot solve the many-particle Schrödinger equation exactly; inevitably some error is introduced in approximating the solutions of this equation. Thus, the accuracy of quantum chemical calculations is of critical importance. The affordable accuracy depends on molecular size and particularly on the total number of atoms: for orientation, ethanol has 9 atoms, aspirin 21 atoms, morphine 40 atoms, sildenafil 63 atoms, paclitaxel 113 atoms, insulin nearly 800 atoms, and quaternary hemoglobin almost 12,000 atoms. Currently, molecules with up to approximately 10 atoms can be very accurately studied by coupled cluster (CC) theory, approximately 100 atoms with second-order Møller-Plesset perturbation theory (MP2), approximately 1000 atoms with density functional theory (DFT), and beyond that number with semiempirical quantum chemistry and force-field methods. The overwhelming majority of present-day calculations in the 100-atom range use DFT. Although these methods have been very successful in quantum chemistry, they do not offer a well-defined hierarchy of calculations that allows one to systematically converge to the correct answer. Recently a number of rather spectacular failures of DFT methods have been found, even for seemingly simple systems such as hydrocarbons, fueling renewed interest in wave function-based methods that incorporate the relevant physics of electron correlation in a more systematic way. Thus, it would be highly desirable to fill the gap between 10 and 100 atoms with highly correlated ab initio methods. We have found that one of the earliest (and now
Statistically qualified neuro-analytic failure detection method and system
Vilim, Richard B.; Garcia, Humberto E.; Chen, Frederick W.
2002-03-02
An apparatus and method for monitoring a process involve the development and application of a statistically qualified neuro-analytic (SQNA) model to accurately and reliably identify process change. The development of the SQNA model is accomplished in two stages: deterministic model adaptation and stochastic modification of the deterministic model. Deterministic model adaptation involves formulating an analytic model of the process representing known process characteristics, augmenting the analytic model with a neural network that captures unknown process characteristics, and training the resulting neuro-analytic model by adjusting the neural network weights according to a unique scaled equation-error minimization technique. Stochastic modification involves qualifying any remaining uncertainty in the trained neuro-analytic model by formulating a likelihood function, given an error propagation equation, for computing the probability that the neuro-analytic model generates the measured process output. Preferably, the developed SQNA model is validated using known sequential probability ratio tests and applied to the process as an on-line monitoring system. Illustrative of the method and apparatus, the method is applied to a peristaltic pump system.
Visual analytics for immunologists: Data compression and fractal distributions.
Naumova, Elena N
2010-07-01
Visual analytics is the science of analytical reasoning facilitated by interactive visual interfaces. New techniques of visual analytics are designed to aid the understanding of complex systems, in contrast to traditional context-blind rules for exploring massive volumes of interrelated data. Nowhere is visualization more important to analysis than in the emerging fields of the life sciences, where the amount of collected data grows at exponential rates. The complexity of the immune system makes visual analytics especially important for understanding how this system works. In this context, our effort should be focused on avoiding accurate but potentially misleading uses of visual interfaces. The proposed approach of data compression and visualization, which reveals structural and functional features of immune responses, enhances systemic and comprehensive description and provides a platform for hypothesis generation. Further, this approach can evolve into a powerful visual-analytical tool for prospective and real-time monitoring and can provide an intuitive and interpretable illustration of the vital dynamics that govern immune responses in individuals and populations. The undertaken explorations demonstrate the critical role of novel techniques of visual analytics in stimulating research in immunology and other life sciences and in leading us to an understanding of complex biological systems and processes.
An analytic performance model of disk arrays and its application
NASA Technical Reports Server (NTRS)
Lee, Edward K.; Katz, Randy H.
1991-01-01
As disk arrays become widely used, tools for understanding and analyzing their performance become increasingly important. In particular, performance models can be invaluable in both configuring and designing disk arrays. Accurate analytic performance models are desirable over other types of models because they can be quickly evaluated, are applicable under a wide range of system and workload parameters, and can be manipulated by a range of mathematical techniques. Unfortunately, analytical performance models of disk arrays are difficult to formulate due to the presence of queuing and fork-join synchronization; a disk array request is broken up into independent disk requests which must all complete to satisfy the original request. We develop, validate, and apply an analytic performance model for disk arrays. We derive simple equations for approximating their utilization, response time, and throughput. We then validate the analytic model via simulation and investigate the accuracy of each approximation used in deriving the analytical model. Finally, we apply the analytical model to derive an equation for the optimal unit of data striping in disk arrays.
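The fork-join difficulty described above can be made concrete with a toy sketch: model each disk as an M/M/1 queue and bound the fork-join response time by the mean of the maximum of n independent exponential stages (the classic harmonic-number heuristic). This is an illustration of the modeling problem under simplifying assumptions, not the paper's actual equations.

```python
from math import inf

def harmonic(n):
    # H_n = 1 + 1/2 + ... + 1/n; E[max of n i.i.d. Exp(1)] = H_n
    return sum(1.0 / k for k in range(1, n + 1))

def mm1_response(arrival_rate, service_time):
    # mean response time of an M/M/1 queue; unstable when utilization >= 1
    rho = arrival_rate * service_time          # utilization
    if rho >= 1.0:
        return inf
    return service_time / (1.0 - rho)

def fork_join_response(arrival_rate, service_time, n_disks):
    # crude fork-join estimate: a request completes only when all n
    # per-disk subrequests complete, so scale by the harmonic number
    return harmonic(n_disks) * mm1_response(arrival_rate, service_time)
```

For example, at 50% utilization a 4-disk stripe pays roughly a 2x penalty over a single queue (H_4 ≈ 2.08); the paper's validated approximations are of course more refined than this.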
Analytical Model For Fluid Dynamics In A Microgravity Environment
NASA Technical Reports Server (NTRS)
Naumann, Robert J.
1995-01-01
Report presents analytical approximation methodology for providing coupled fluid-flow, heat, and mass-transfer equations in microgravity environment. Experimental engineering estimates accurate to within factor of 2 made quickly and easily, eliminating need for time-consuming and costly numerical modeling. Any proposed experiment reviewed to see how it would perform in microgravity environment. Model applied in commercial setting for preliminary design of low-Grashoff/Rayleigh-number experiments.
Analytical techniques for direct identification of biosignatures and microorganisms
NASA Astrophysics Data System (ADS)
Cid, C.; Garcia-Descalzo, L.; Garcia-Lopez, E.; Postigo, M.; Alcazar, A.; Baquero, F.
2012-09-01
Rover missions to potentially habitable ecosystems require portable instruments that use minimal power, require no sample preparation, and provide suitably diagnostic information to an Earth-based exploration team. In exploration of terrestrial analogue environments of potentially habitable ecosystems it is important to screen rapidly for the presence of biosignatures and microorganisms and especially to identify them accurately. In this study, several analytical techniques for the direct identification of biosignatures and microorganisms in different Earth analogues of habitable ecosystems are compared.
PyVCI: A flexible open-source code for calculating accurate molecular infrared spectra
NASA Astrophysics Data System (ADS)
Sibaev, Marat; Crittenden, Deborah L.
2016-06-01
The PyVCI program package is a general purpose open-source code for simulating accurate molecular spectra, based upon force field expansions of the potential energy surface in normal mode coordinates. It includes harmonic normal coordinate analysis and vibrational configuration interaction (VCI) algorithms, implemented primarily in Python for accessibility but with time-consuming routines written in C. Coriolis coupling terms may be optionally included in the vibrational Hamiltonian. Non-negligible VCI matrix elements are stored in sparse matrix format to alleviate the diagonalization problem. CPU and memory requirements may be further controlled by algorithmic choices and/or numerical screening procedures, and recommended values are established by benchmarking using a test set of 44 molecules for which accurate analytical potential energy surfaces are available. Force fields in normal mode coordinates are obtained from the PyPES library of high quality analytical potential energy surfaces (to 6th order) or by numerical differentiation of analytic second derivatives generated using the GAMESS quantum chemical program package (to 4th order).
Chromatin States Accurately Classify Cell Differentiation Stages
Larson, Jessica L.; Yuan, Guo-Cheng
2012-01-01
Gene expression is controlled by the concerted interactions between transcription factors and chromatin regulators. While recent studies have identified global chromatin state changes across cell-types, it remains unclear to what extent these changes are co-regulated during cell-differentiation. Here we present a comprehensive computational analysis by assembling a large dataset containing genome-wide occupancy information of 5 histone modifications in 27 human cell lines (including 24 normal and 3 cancer cell lines) obtained from the public domain, followed by independent analysis at three different representations. We classified the differentiation stage of a cell-type based on its genome-wide pattern of chromatin states, and found that our method was able to identify normal cell lines with nearly 100% accuracy. We then applied our model to classify the cancer cell lines and found that each can be unequivocally classified as differentiated cells. The differences can be in part explained by the differential activities of three regulatory modules associated with embryonic stem cells. We also found that the “hotspot” genes, whose chromatin states change dynamically in accordance to the differentiation stage, are not randomly distributed across the genome but tend to be embedded in multi-gene chromatin domains, and that specialized gene clusters tend to be embedded in stably occupied domains. PMID:22363642
A new analytical framework for understanding the tidal damping in estuaries
NASA Astrophysics Data System (ADS)
Cai, Huayang; Savenije, Hubert H. G.; Toffolon, Marco
2013-04-01
Tidal dynamics in estuaries have long been the subject of intensive scientific interest, particularly for analysing the environmental impact of human interventions. In many estuaries, there are increasing concerns about the impacts on the estuarine environment of, e.g., sea-level rise, water diversion, and dredging. However, before predictions about hydraulic responses to future changes can be made with any confidence, there is a need to achieve an adequate understanding of tidal wave propagation in estuaries. In this study, we explore different analytical solutions of the tidal hydraulic equations in convergent estuaries. Linear and quasi-nonlinear models are compared for given geometry, friction, and tidal amplitude at the seaward boundary, proposing a common theoretical framework and showing that the main difference between the examined models lies in the treatment of the friction term. A general solution procedure is proposed for the governing equations expressed in dimensionless form, and a new analytical expression for the tidal damping is derived as a weighted average of two solutions, characterized by the usual linearized formulation and the quasi-nonlinear Lagrangian treatment of the friction term (Savenije et al., 2008). The different analytical solutions are tested against fully nonlinear numerical results for a wide range of parameters, and compared with observations in the Scheldt estuary. Overall, the new method compares best with the numerical solution and field data (Cai et al., 2012a). The new accurate relationship for the tidal damping is then exploited for a classification of estuaries based on the distance of the tidally averaged depth from the ideal depth (relative to vanishing amplification) and the critical depth (condition for maximum amplification). Finally, the new model is used to investigate the effect of depth variations on the tidal dynamics in 23 real estuaries, highlighting the usefulness of the analytical method to assess the influence of
An analytical solution for quantum size effects on Seebeck coefficient
NASA Astrophysics Data System (ADS)
Karabetoglu, S.; Sisman, A.; Ozturk, Z. F.
2016-03-01
There are numerous experimental and numerical studies about quantum size effects on Seebeck coefficient. In contrast, in this study, we obtain analytical expressions for Seebeck coefficient under quantum size effects. Seebeck coefficient of a Fermi gas confined in a rectangular domain is considered. Analytical expressions, which represent the size dependency of Seebeck coefficient explicitly, are derived in terms of confinement parameters. A fundamental form of Seebeck coefficient based on infinite summations is used under relaxation time approximation. To obtain analytical results, summations are calculated using the first two terms of Poisson summation formula. It is shown that they are in good agreement with the exact results based on direct calculation of summations as long as confinement parameters are less than unity. The analytical results are also in good agreement with experimental and numerical ones in literature. Maximum relative errors of analytical expressions are less than 3% and 4% for 2D and 1D cases, respectively. Dimensional transitions of Seebeck coefficient are also examined. Furthermore, a detailed physical explanation for the oscillations in Seebeck coefficient is proposed by considering the relative standard deviation of total variance of particle number in Fermi shell.
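The key trick used in the abstract, replacing a slowly converging sum by the first few terms of its Poisson-summation dual, can be demonstrated on the classical theta-function identity Σₙ exp(-π n² t) = t^(-1/2) Σₖ exp(-π k²/t). This is a generic illustration of the technique, not the paper's Seebeck-coefficient formulas.

```python
import math

def theta_direct(t, terms=2000):
    # brute-force evaluation of sum_{n=-inf}^{inf} exp(-pi n^2 t)
    return 1.0 + 2.0 * sum(math.exp(-math.pi * n * n * t)
                           for n in range(1, terms))

def theta_poisson(t, k_max=1):
    # Poisson summation maps the sum onto its Fourier dual,
    # t^{-1/2} * sum_k exp(-pi k^2 / t); keep only k = 0 .. k_max
    dual = 1.0 + 2.0 * sum(math.exp(-math.pi * k * k / t)
                           for k in range(1, k_max + 1))
    return dual / math.sqrt(t)
```

For small t (strong "confinement"), the direct sum needs many terms while the first one or two dual terms already reproduce it to machine precision, which is exactly why truncating the Poisson formula yields accurate closed-form expressions.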
Analytic barrage attack model. Final report, January 1986-January 1989
St Ledger, J.W.; Naegeli, R.E.; Dowden, N.A.
1989-01-01
An analytic model is developed for a nuclear barrage attack, assuming weapons with no aiming error and a cookie-cutter damage function. The model is then extended with approximations for the effects of aiming error and distance damage sigma. The final result is a fast running model which calculates probability of damage for a barrage attack. The probability of damage is accurate to within seven percent or better, for weapon reliabilities of 50 to 100 percent, distance damage sigmas of 0.5 or less, and zero to very large circular error probabilities. FORTRAN 77 coding is included in the report for the analytic model and for a numerical model used to check the analytic results.
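The single-weapon building block of such a model can be sketched as follows: with a cookie-cutter damage function of radius R and circular-normal aiming error of a given CEP, the kill probability is P = 1 - exp(-R²/2σ²) with σ = CEP/√(2 ln 2). This is a standard textbook relation shown here with a Monte Carlo cross-check; it is not the report's full barrage model (which is coded in FORTRAN 77), and the zero-bias assumption is mine.

```python
import math
import random

def sigma_from_cep(cep):
    # for a circular normal distribution, CEP = sigma * sqrt(2 ln 2)
    return cep / math.sqrt(2.0 * math.log(2.0))

def pd_cookie_cutter(radius, cep):
    # probability the impact falls inside the damage radius:
    # P = 1 - exp(-R^2 / (2 sigma^2))
    s = sigma_from_cep(cep)
    return 1.0 - math.exp(-radius * radius / (2.0 * s * s))

def pd_monte_carlo(radius, cep, trials=200_000, seed=1):
    # numerical check against the closed form, zero aiming bias assumed
    rng = random.Random(seed)
    s = sigma_from_cep(cep)
    hits = sum(1 for _ in range(trials)
               if math.hypot(rng.gauss(0, s), rng.gauss(0, s)) <= radius)
    return hits / trials
```

By definition of the CEP, setting R equal to the CEP gives exactly P = 0.5, a handy sanity check for both the analytic and the numerical path.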
Numerical and Analytical Design of Functionally Graded Piezoelectric Transducers
NASA Astrophysics Data System (ADS)
Rubio, Wilfredo Montealegre; Buiochi, Flavio; Adamowski, Julio C.; Silva, Emílio Carlos Nelli
2008-02-01
This paper presents analytical and finite element methods to model broadband transducers with a graded piezoelectric parameter. Applying the FGM (Functionally Graded Materials) concept to piezoelectric transducer design allows composite transducers to be designed without interfaces between materials (e.g., piezoelectric ceramic and backing material), owing to the continuous change of property values. Thus, large improvements can be achieved in their performance characteristics, mainly in generating short-duration ultrasonic pulses. Nevertheless, recent research on functionally graded piezoelectric transducers shows a lack of studies comparing the numerical and analytical approaches used in their design. In this work, analytical and numerical models of FGM piezoelectric transducers are developed to analyze the effects of piezoelectric material gradation, specifically in ultrasonic applications. In addition, results using FGM piezoelectric transducers are compared with non-FGM piezoelectric transducers. We conclude that the developed modeling techniques are accurate, providing a useful tool for designing FGM piezoelectric transducers.
Analytical derivation of DC SQUID response
NASA Astrophysics Data System (ADS)
Soloviev, I. I.; Klenov, N. V.; Schegolev, A. E.; Bakurskiy, S. V.; Kupriyanov, M. Yu
2016-09-01
We consider voltage and current response formation in a DC superconducting quantum interference device (SQUID) with overdamped Josephson junctions in the resistive and superconducting states in the context of a resistively shunted junction (RSJ) model. For simplicity we neglect the junction capacitance and the noise effect. Explicit expressions for the responses in the resistive state were obtained for a SQUID that is symmetrical with respect to the bias current injection point. The normalized SQUID inductance l = 2eI_cL/ℏ (where I_c is the critical current of a Josephson junction, L is the SQUID inductance, e is the electron charge, and ℏ is the reduced Planck constant) was assumed to be within the range l ≤ 1, subsequently expanded up to l ≈ 7 using two fitting parameters. The SQUID current response in the superconducting state was considered for an arbitrary value of the inductance. The impact of the small technological spread of parameters typical of low-temperature superconductor (LTS) technology was studied, using a generalization of the developed analytical approach, for the case of a small difference in the critical currents and shunt resistances of the Josephson junctions, and inequality of the SQUID inductive shoulders, for both the resistive and superconducting states. Comparison with numerical calculation results shows that the developed analytical expressions can be used in the design of practical LTS SQUIDs and SQUID-based circuits, e.g. large serial SQIFs, drastically decreasing the simulation time.
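The dimensionless inductance defined in the abstract is easy to evaluate numerically; the sketch below uses illustrative device values (I_c = 100 µA, L = 3.3 pH are my assumptions, not from the paper) to show a design sitting near the l ≤ 1 boundary where the explicit expressions apply.

```python
E_CHARGE = 1.602176634e-19   # elementary charge, C (exact SI value)
HBAR = 1.054571817e-34       # reduced Planck constant, J*s

def normalized_inductance(i_c, inductance):
    # l = 2 e I_c L / hbar, as defined in the abstract
    return 2.0 * E_CHARGE * i_c * inductance / HBAR

# illustrative values only: I_c = 100 uA, L = 3.3 pH gives l close to 1
l = normalized_inductance(100e-6, 3.3e-12)
```

With these numbers l comes out just above 1, i.e. at the edge of the directly derived range and well inside the fitted extension up to l ≈ 7.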
Analytical modeling of a single channel nonlinear fiber optic system based on QPSK.
Kumar, Shiva; Shahi, Sina Naderi; Yang, Dong
2012-12-01
A first-order perturbation theory is used to develop analytical expressions for the power spectral density (PSD) of the nonlinear distortions due to intra-channel four-wave mixing (IFWM). For non-Gaussian pulses, the PSD cannot be calculated analytically. However, using the stationary phase approximation, we found that the convolutions become simple multiplications, and a simple analytical expression for the PSD of the nonlinear distortion is found. The PSD of the nonlinear distortion is combined with the amplified spontaneous emission (ASE) PSD to obtain the total variance and bit error ratio (BER). The analytically estimated BER is found to be in good agreement with numerical simulations.
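The final step described above, combining the two noise variances into a BER, can be sketched by treating the IFWM distortion as additive Gaussian noise and applying the standard QPSK error formula for an AWGN channel. This is a minimal sketch of that step only; the paper's PSD expressions themselves are not reproduced here.

```python
import math

def qpsk_ber(signal_power, var_ase, var_nl):
    # variances of independent noise sources add; treat the nonlinear
    # distortion as additive Gaussian noise alongside the ASE
    snr = signal_power / (var_ase + var_nl)
    # standard Gray-coded QPSK bit error ratio over an AWGN channel:
    # BER = 0.5 * erfc(sqrt(SNR / 2))
    return 0.5 * math.erfc(math.sqrt(snr / 2.0))
```

The Gaussian-noise assumption for IFWM is itself an approximation, which is why the abstract validates the analytic BER against full numerical simulations.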
Nielsen, Marie Katrine Klose; Johansen, Sys Stybe; Linnet, Kristian
2014-01-01
Assessment of the total uncertainty of analytical methods for the measurement of drugs in human hair has mainly been derived from the analytical variation. However, in hair analysis several other sources of uncertainty contribute to the total uncertainty. Particularly in segmental hair analysis, pre-analytical variations associated with the sampling and segmentation may be significant factors in the assessment of the total uncertainty budget. The aim of this study was to develop and validate a method for the analysis of 31 common drugs in hair using ultra-high-performance liquid chromatography-tandem mass spectrometry (UHPLC-MS/MS) with a focus on the assessment of both the analytical and pre-analytical sampling variations. The validated method was specific, accurate (80-120%), and precise (CV ≤ 20%) across a wide linear concentration range from 0.025 to 25 ng/mg for most compounds. The analytical variation was estimated to be less than 15% for almost all compounds. The method was successfully applied to 25 segmented hair specimens from deceased drug addicts showing a broad pattern of poly-drug use. The pre-analytical sampling variation was estimated from genuine duplicate measurements of two bundles of hair collected from each subject, after subtraction of the analytical component. For the most frequently detected analytes, the pre-analytical variation was estimated to be 26-69%. Thus, the pre-analytical variation was three- to sevenfold larger than the analytical variation (7-13%) and hence the dominant component in the total variation (29-70%). The present study demonstrates the importance of including the pre-analytical variation in the assessment of the total uncertainty budget and in the setting of the 95%-uncertainty interval (±2CV_T). Excluding the pre-analytical sampling variation could significantly affect the interpretation of results from segmental hair analysis. PMID:24378297
Analytical scatter kernels for portal imaging at 6 MV.
Spies, L; Bortfeld, T
2001-04-01
X-ray photon scatter kernels for 6 MV electronic portal imaging are investigated using an analytical and a semi-analytical model. The models are tested on homogeneous phantoms for a range of uniform circular fields and scatterer-to-detector air gaps relevant for clinical use. It is found that a fully analytical model based on an exact treatment of photons undergoing a single Compton scatter event and an approximate treatment of second- and higher-order scatter events, assuming a multiple-scatter source at the center of the scatter volume, is accurate within 1% (i.e., the residual scatter signal is less than 1% of the primary signal) for field sizes up to 100 cm² and air gaps over 30 cm, but shows significant discrepancies for larger field sizes. Monte Carlo results are presented showing that the effective multiple-scatter source is located toward the exit surface of the scatterer, rather than at its center. A second model is therefore investigated where second- and higher-order scattering is instead modeled by fitting an analytical function describing a nonstationary isotropic point-scatter source to Monte Carlo generated data. This second model is shown to be accurate to within 1% for air gaps down to 20 cm, for field sizes up to 900 cm² and phantom thicknesses up to 50 cm. PMID:11339752
77 FR 3800 - Accurate NDE & Inspection, LLC; Confirmatory Order
Federal Register 2010, 2011, 2012, 2013, 2014
2012-01-25
... COMMISSION Accurate NDE & Inspection, LLC; Confirmatory Order In the Matter of Accurate NDE & Docket: 150... request ADR with the NRC in an attempt to resolve issues associated with this matter. In response, on August 9, 2011, Accurate NDE requested ADR to resolve this matter with the NRC. On September 28,...
Cautions Concerning Electronic Analytical Balances.
ERIC Educational Resources Information Center
Johnson, Bruce B.; Wells, John D.
1986-01-01
Cautions chemists to be wary of ferromagnetic samples (especially magnetized samples), stray electromagnetic radiation, dusty environments, and changing weather conditions. These and other conditions may alter readings obtained from electronic analytical balances. (JN)
Numerical integration of analytic functions
NASA Astrophysics Data System (ADS)
Milovanović, Gradimir V.; Tošić, Dobrilo Đ.; Albijanić, Miloljub
2012-09-01
A weighted generalized N-point Birkhoff-Young quadrature of interpolatory type for numerical integration of analytic functions is considered. Special cases of such quadratures with respect to the generalized Gegenbauer weight function are derived.
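As a concrete instance of interpolatory quadrature with a Gegenbauer weight: for λ = 1 the weight is (1-x²)^(1/2) and the nodes and weights (Gauss quadrature of Chebyshev's second kind) are known in closed form. This is a standard special case shown for illustration; it is not the Birkhoff-Young construction studied in the paper.

```python
import math

def gauss_chebyshev2(n):
    # Gauss quadrature for the weight (1 - x^2)^(1/2) on [-1, 1],
    # i.e. the Gegenbauer weight with lambda = 1; closed-form rule:
    # x_k = cos(k pi / (n+1)), w_k = pi/(n+1) * sin^2(k pi / (n+1))
    nodes = [math.cos(k * math.pi / (n + 1)) for k in range(1, n + 1)]
    weights = [math.pi / (n + 1) * math.sin(k * math.pi / (n + 1)) ** 2
               for k in range(1, n + 1)]
    return nodes, weights

def integrate(f, n=8):
    # approximate integral of f(x) * sqrt(1 - x^2) over [-1, 1]
    xs, ws = gauss_chebyshev2(n)
    return sum(w * f(x) for x, w in zip(xs, ws))
```

An n-point rule of this type is exact for polynomial integrands up to degree 2n-1, and for analytic integrands the error decays geometrically, which is the regime such quadratures are designed for.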
Analytic Methods in Investigative Geometry.
ERIC Educational Resources Information Center
Dobbs, David E.
2001-01-01
Suggests an alternative proof by analytic methods, which is more accessible than rigorous proof based on Euclid's Elements, in which students need only apply standard methods of trigonometry to the data without introducing new points or lines. (KHR)
Trends in Analytical Scale Separations.
ERIC Educational Resources Information Center
Jorgenson, James W.
1984-01-01
Discusses recent developments in the instrumentation and practice of analytical scale operations. Emphasizes detection devices and procedures in gas chromatography, liquid chromatography, electrophoresis, supercritical fluid chromatography, and field-flow fractionation. (JN)
Laboratory Workhorse: The Analytical Balance.
ERIC Educational Resources Information Center
Clark, Douglas W.
1979-01-01
This report explains the importance of various analytical balances in the water or wastewater laboratory. Stressed is the proper procedure for utilizing the equipment as well as the mechanics involved in its operation. (CS)
Liposomes: Technologies and Analytical Applications
NASA Astrophysics Data System (ADS)
Jesorka, Aldo; Orwar, Owe
2008-07-01
Liposomes are structurally and functionally some of the most versatile supramolecular assemblies in existence. Since the beginning of active research on lipid vesicles in 1965, the field has progressed enormously and applications are well established in several areas, such as drug and gene delivery. In the analytical sciences, liposomes serve a dual purpose: Either they are analytes, typically in quality-assessment procedures of liposome preparations, or they are functional components in a variety of new analytical systems. Liposome immunoassays, for example, benefit greatly from the amplification provided by encapsulated markers, and nanotube-interconnected liposome networks have emerged as ultrasmall-scale analytical devices. This review provides information about new developments in some of the most actively researched liposome-related topics.
Analytical multikinks in smooth potentials
NASA Astrophysics Data System (ADS)
de Brito, G. P.; Correa, R. A. C.; de Souza Dutra, A.
2014-03-01
In this work we present an approach that can be systematically used to construct nonlinear systems possessing analytical multikink profile configurations. In contrast with previous approaches to the problem, we are able to do it by using field potentials that are considerably smoother than the ones of the doubly quadratic family of potentials. This is done without losing the capacity of writing exact analytical solutions. The resulting field configurations can be applied to the study of problems from condensed matter to braneworld scenarios.
Functionalized magnetic nanoparticle analyte sensor
Yantasee, Wassana; Warner, Marvin G; Warner, Cynthia L; Addleman, Raymond S; Fryxell, Glen E; Timchalk, Charles; Toloczko, Mychailo B
2014-03-25
A method and system for simply and efficiently determining the quantity of a preselected material in a solution. At least one superparamagnetic nanoparticle bearing a specified functionalized organic material is placed in a sample solution, where preselected analytes attach to the functionalized organic groups; the superparamagnetic nanoparticles are then collected at a collection site and analyzed for the presence of a particular analyte.
Visual Analytics Technology Transition Progress
Scholtz, Jean; Cook, Kristin A.; Whiting, Mark A.; Lemon, Douglas K.; Greenblatt, Howard
2009-09-23
The authors describe the transition process for visual analytic tools and contrast it with the transition process for more traditional software tools. The paper describes a user-oriented approach to technology transition, including a discussion of key factors that should be considered and adapted to each situation. The progress made in transitioning visual analytic tools over the past five years is described, and the remaining challenges are enumerated.
Analytical drafting curves provide exact equations for plotted data
NASA Technical Reports Server (NTRS)
Stewart, R. B.
1967-01-01
Analytical drafting curves provide explicit mathematical expressions for any numerical data that appears in the form of graphical plots. The curves each have a reference coordinate axis system indicated on the curve as well as the mathematical equation from which the curve was generated.
Gallien, Sebastien; Domon, Bruno
2014-08-01
High resolution/accurate mass hybrid mass spectrometers have considerably advanced shotgun proteomics and the recent introduction of fast sequencing capabilities has expanded its use to targeted approaches. More specifically, the quadrupole-orbitrap instrument has a unique configuration and its new features enable a wide range of experiments. An overview of the analytical capabilities of this instrument is presented, with a focus on its application to quantitative analyses. The high resolution, the trapping capability and the versatility of the instrument have allowed quantitative proteomic workflows to be redefined and new data acquisition schemes to be developed. The initial proteomic applications have shown an improvement of the analytical performance. However, as quantification relies on ion trapping instead of an ion beam, further refinement of the technique can be expected.
AN ACCURATE FLUX DENSITY SCALE FROM 1 TO 50 GHz
Perley, R. A.; Butler, B. J. E-mail: BButler@nrao.edu
2013-02-15
We develop an absolute flux density scale for centimeter-wavelength astronomy by combining accurate flux density ratios determined by the Very Large Array between the planet Mars and a set of potential calibrators with the Rudy thermophysical emission model of Mars, adjusted to the absolute scale established by the Wilkinson Microwave Anisotropy Probe. The radio sources 3C123, 3C196, 3C286, and 3C295 are found to be varying at a level of less than ~5% per century at all frequencies between 1 and 50 GHz, and hence are suitable as flux density standards. We present polynomial expressions for their spectral flux densities, valid from 1 to 50 GHz, with absolute accuracy estimated at 1%-3% depending on frequency. Of the four sources, 3C286 is the most compact and has the flattest spectral index, making it the most suitable object on which to establish the spectral flux density scale. The sources 3C48, 3C138, 3C147, NGC 7027, NGC 6542, and MWC 349 show significant variability on various timescales. Polynomial coefficients for the spectral flux density are developed for 3C48, 3C138, and 3C147 for each of the 17 observation dates, spanning 1983-2012. The planets Venus, Uranus, and Neptune are included in our observations, and we derive their brightness temperatures over the same frequency range.
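Polynomial flux density scales of this kind are conventionally expressed in log-log form, log10(S/Jy) as a polynomial in log10(ν/GHz). A minimal evaluator is sketched below; the coefficients are hypothetical placeholders for illustration, not the published values for any source.

```python
import math

def flux_density_jy(freq_ghz, coeffs):
    """Evaluate a log-polynomial flux density scale:
    log10(S / Jy) = sum_i a_i * (log10(nu / GHz))**i."""
    x = math.log10(freq_ghz)
    log_s = sum(a * x**i for i, a in enumerate(coeffs))
    return 10.0 ** log_s

# Hypothetical coefficients (illustration only, NOT published values).
coeffs_example = [1.25, -0.46, -0.18, 0.03]
s_1ghz = flux_density_jy(1.0, coeffs_example)  # at 1 GHz, log10(nu) = 0, so S = 10**a_0
s_5ghz = flux_density_jy(5.0, coeffs_example)
```

For a steep-spectrum calibrator the evaluated flux density falls with frequency, as the negative first-order coefficient encodes.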
How Accurate Are Transition States from Simulations of Enzymatic Reactions?
2015-01-01
The rate expression of traditional transition state theory (TST) assumes no recrossing of the transition state (TS) and thermal quasi-equilibrium between the ground state and the TS. Currently, it is not well understood to what extent these assumptions influence the nature of the activated complex obtained in traditional TST-based simulations of processes in the condensed phase in general and in enzymes in particular. Here we scrutinize these assumptions by characterizing the TSs for hydride transfer catalyzed by the enzyme Escherichia coli dihydrofolate reductase obtained using various simulation approaches. Specifically, we compare the TSs obtained with common TST-based methods and a dynamics-based method. Using a recently developed accurate hybrid quantum mechanics/molecular mechanics potential, we find that the TST-based and dynamics-based methods give considerably different TS ensembles. This discrepancy, which could be due to equilibrium solvation effects and the nature of the reaction coordinate employed and its motion, raises major questions about how to interpret the TSs determined by common simulation methods. We conclude that further investigation is needed to characterize the impact of various TST assumptions on the TS phase-space ensemble and on the reaction kinetics. PMID:24860275
NASA Astrophysics Data System (ADS)
Sauvé, Alexandre; Montier, Ludovic
2016-10-01
Context: Bolometers are high-sensitivity detectors commonly used in infrared astronomy. The HFI instrument of the Planck satellite makes extensive use of them, but after the satellite launch two electronics-related problems proved critical: first, an unexpected excess response of the detectors at low optical excitation frequencies (ν < 1 Hz), and second, the analog-to-digital converter (ADC) component had been insufficiently characterized on the ground. These two problems require an exquisite knowledge of the detector response. However, bolometers have highly nonlinear characteristics, arising from their electrical and thermal coupling, which makes them very difficult to model. Goal: We present a method to build the analytical transfer function in the frequency domain that describes the voltage response of an alternating-current (AC) biased bolometer to optical excitation, based on the standard bolometer model. The model is built for the setup of the Planck/HFI instrument and offers the major improvement of being based on a physical model rather than the ad-hoc model currently in use, which is based on direct-current (DC) bolometer theory. Method: The analytical transfer function is presented in matrix form. For this purpose, we build linearized versions of the bolometer electrothermal equilibrium. A custom description of signals in frequency space is used to solve the problem with linear algebra. The model performance is validated using time-domain simulations. Results: The provided expression is suitable for calibration and data processing. It can also be used to provide constraints for fitting optical transfer functions using real data from the steady-state electronic response and the optical response. The accurate description of the electronic response can also be used to improve the ADC nonlinearity correction for quickly varying optical signals.
Roda, Aldo; Roda, Barbara; Cevenini, Luca; Michelini, Elisa; Mezzanotte, Laura; Reschiglian, Pierluigi; Hakkila, Kaisa; Virta, Marko
2011-07-01
Whole-cell bioluminescent (BL) bioreporter technology is a useful analytical tool for developing biosensors for environmental toxicology and preclinical studies. However, when applied to real samples, several methodological problems prevent it from being widely used. Here, we propose a methodological approach for improving its analytical performance with complex matrices. We developed bioluminescent Escherichia coli and Saccharomyces cerevisiae bioreporters for copper ion detection. In the same cell, we introduced two firefly luciferases requiring the same luciferin substrate and emitting at different wavelengths. The expression of one was copper ion specific. The other, constitutively expressed, was used as a cell viability internal control. Engineered BL cells were characterized using the noninvasive gravitational field-flow fractionation (GrFFF) technique. A homogeneous cell population was isolated. Cells were then immobilized in a polymeric matrix, improving cell responsiveness. The bioassay was performed in 384-well black polystyrene microtiter plates directly on the sample. After 2 h of incubation at 37 °C and the addition of the luciferin, we measured the emitted light. These dual-color bioreporters showed more robustness and a wider dynamic range than bioassays based on the same strains with a single reporter gene that use a separate cell strain as a BL control. The internal correction allowed the copper content to be accurately evaluated even in simulated toxic samples, where reduced cell viability was observed. Homogeneous cells isolated by GrFFF showed improvement in method reproducibility, particularly for yeast cells. The applicability of these bioreporters to real samples was demonstrated in tap water and wastewater treatment plant effluent samples spiked with copper and other metal ions. PMID:21603915
Guggenheim, James A.; Bargigia, Ilaria; Farina, Andrea; Pifferi, Antonio; Dehghani, Hamid
2016-01-01
A novel straightforward, accessible and efficient approach is presented for performing hyperspectral time-domain diffuse optical spectroscopy to determine the optical properties of samples accurately using geometry specific models. To allow bulk parameter recovery from measured spectra, a set of libraries based on a numerical model of the domain being investigated is developed as opposed to the conventional approach of using an analytical semi-infinite slab approximation, which is known and shown to introduce boundary effects. Results demonstrate that the method improves the accuracy of derived spectrally varying optical properties over the use of the semi-infinite approximation. PMID:27699137
Accurate verification of the conserved-vector-current and standard-model predictions
Sirlin, A.; Zucchini, R.
1986-10-20
An approximate analytic calculation of O(Zα²) corrections to Fermi decays is presented. When the analysis of Koslowsky et al. is modified to take into account the new results, it is found that each of the eight accurately studied ℱt values differs from the average by ≲1σ, thus significantly improving the comparison of experiments with conserved-vector-current predictions. The new ℱt values are lower than before, which also brings experiments into very good agreement with the three-generation standard model, at the level of its quantum corrections.
Towards numerically accurate many-body perturbation theory: Short-range correlation effects
Gulans, Andris
2014-10-28
The example of the uniform electron gas is used for showing that the short-range electron correlation is difficult to handle numerically, while it noticeably contributes to the self-energy. Nonetheless, in condensed-matter applications studied with advanced methods, such as the GW and random-phase approximations, it is common to neglect contributions due to high-momentum (large q) transfers. Then, the short-range correlation is poorly described, which leads to inaccurate correlation energies and quasiparticle spectra. To circumvent this problem, an accurate extrapolation scheme is proposed. It is based on an analytical derivation for the uniform electron gas presented in this paper, and it provides an explanation why accurate GW quasiparticle spectra are easy to obtain for some compounds and very difficult for others.
Leng, Wei; Ju, Lili; Gunzburger, Max; Price, Stephen; Ringler, Todd
2012-01-01
The numerical modeling of glacier and ice sheet evolution is a subject of growing interest, in part because of the potential for models to inform estimates of global sea level change. This paper focuses on the development of a numerical model that determines the velocity and pressure fields within an ice sheet. Our numerical model features a high-fidelity mathematical model involving the nonlinear Stokes system and combinations of no-sliding and sliding basal boundary conditions, high-order accurate finite element discretizations based on variable resolution grids, and highly scalable parallel solution strategies, all of which contribute to a numerical model that can achieve accurate velocity and pressure approximations in a highly efficient manner. We demonstrate the accuracy and efficiency of our model by analytical solution tests, established ice sheet benchmark experiments, and comparisons with other well-established ice sheet models.
Anderson, Oscar A.
2006-08-06
The well-known Kapchinskij-Vladimirskij (KV) equations are difficult to solve in general, but the problem is simplified for the matched-beam case with sufficient symmetry. They show that the interdependence of the two KV equations is eliminated, so that only one needs to be solved--a great simplification. They present an iterative method of solution which can potentially yield any desired level of accuracy. The lowest level, the well-known smooth approximation, yields simple, explicit results with good accuracy for weak or moderate focusing fields. The next level improves the accuracy for high fields; they previously showed how to maintain a simple explicit format for the results. That paper used expansion in a small parameter to obtain the second level. The present paper, using straightforward iteration, obtains equations of first, second, and third levels of accuracy. For a periodic lattice with beam matched to lattice, they use the lattice and beam parameters as input and solve for phase advances and envelope waveforms. They find excellent agreement with numerical solutions over a wide range of beam emittances and intensities.
Statistically Qualified Neuro-Analytic system and Method for Process Monitoring
Vilim, Richard B.; Garcia, Humberto E.; Chen, Frederick W.
1998-11-04
An apparatus and method for monitoring a process involves development and application of a statistically qualified neuro-analytic (SQNA) model to accurately and reliably identify process change. The development of the SQNA model is accomplished in two steps: deterministic model adaptation and stochastic model adaptation. Deterministic model adaptation involves formulating an analytic model of the process representing known process characteristics, augmenting the analytic model with a neural network that captures unknown process characteristics, and training the resulting neuro-analytic model by adjusting the neural network weights according to a unique scaled equation error minimization technique. Stochastic model adaptation involves qualifying any remaining uncertainty in the trained neuro-analytic model by formulating a likelihood function, given an error propagation equation, for computing the probability that the neuro-analytic model generates measured process output. Preferably, the developed SQNA model is validated using known sequential probability ratio tests and applied to the process as an on-line monitoring system.
White, J A; Dutton, A W; Schmidt, J A; Roemer, R B
2000-01-01
An automated three-element meshing method for generating finite element based models for the accurate thermal analysis of blood vessels embedded in tissue has been developed and evaluated. The meshing method places eight-noded hexahedral elements inside the vessels, where advective flows exist, and four-noded tetrahedral elements in the surrounding tissue. The higher order hexahedrals are used where advective flow fields occur, since high accuracy is required and effective upwinding algorithms exist. Tetrahedral elements are placed in the remaining tissue region, since they are computationally more efficient and existing automatic tetrahedral mesh generators can be used. Five-noded pyramid elements connect the hexahedrals and tetrahedrals. A convective energy equation (CEE) based finite element algorithm solves for the temperature distributions in the flowing blood, while a finite element formulation of a generalized conduction equation is used in the surrounding tissue. Use of the CEE allows accurate solutions to be obtained without the necessity of assuming ad hoc values for heat transfer coefficients. Comparisons of the predictions of the three-element model to analytical solutions show that the three-element model accurately simulates temperature fields. Energy balance checks show that the three-element model has small, acceptable errors. In summary, this method provides an accurate, automatic finite element gridding procedure for thermal analysis of irregularly shaped tissue regions that contain important blood vessels. At present, the models so generated are relatively large (in order to obtain accurate results) and are, thus, best used for providing accurate reference values for checking other approximate formulations to complicated, conjugated blood heat transfer problems.
On the analyticity of Laguerre series
NASA Astrophysics Data System (ADS)
Weniger, Ernst Joachim
2008-10-01
The transformation of a Laguerre series f(z) = Σ_{n=0}^∞ λ_n^(α) L_n^(α)(z) to a power series f(z) = Σ_{n=0}^∞ γ_n z^n is discussed. Since many nonanalytic functions can be expanded in terms of generalized Laguerre polynomials, success is not guaranteed and such a transformation can easily lead to a mathematically meaningless expansion containing power series coefficients that are infinite in magnitude. Simple sufficient conditions based on the decay rates and sign patterns of the Laguerre series coefficients λ_n^(α) as n → ∞ can be formulated which guarantee that the resulting power series represents an analytic function. The transformation produces a mathematically meaningful result if the coefficients λ_n^(α) either decay exponentially or factorially as n → ∞. The situation is much more complicated, but also much more interesting, if the λ_n^(α) decay only algebraically as n → ∞. If the λ_n^(α) ultimately have the same sign, the series expansions for the power series coefficients diverge, and the corresponding function is not analytic at the origin. If the λ_n^(α) ultimately have strictly alternating signs, the series expansions for the power series coefficients still diverge, but are summable to something finite, and the resulting power series represents an analytic function. If algebraically decaying and ultimately alternating Laguerre series coefficients λ_n^(α) possess sufficiently simple explicit analytical expressions, the summation of the divergent series for the power series coefficients can often be accomplished with the help of analytic continuation formulae for hypergeometric series p+1F_p, but if the λ_n^(α) have a complicated structure or if only their numerical values are available, numerical summation techniques have to be employed. It is shown that certain nonlinear sequence transformations, in particular the so-called delta transformation (Weniger 1989 Comput. Phys. Rep. 10 189-371 (equation (8.4-4))), are able to sum the divergent
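For a finite truncation the transformation itself is elementary and can be checked symbolically: expand the Laguerre sum and compare with the closed-form coefficient formula γ_k = Σ_n λ_n (-1)^k C(n+α, n-k)/k!. The λ_n values below are arbitrary illustrative choices, not taken from the paper.

```python
import sympy as sp

# Truncated Laguerre series f(x) = sum_n lambda_n * L_n^(alpha)(x),
# expanded into a power series; lambda_n chosen arbitrarily.
x = sp.symbols('x')
alpha = sp.Rational(1, 2)
lam = [sp.Integer(1), sp.Rational(1, 2), sp.Rational(1, 4)]

f = sum(l * sp.assoc_laguerre(n, alpha, x) for n, l in enumerate(lam))
gammas = sp.Poly(sp.expand(f), x).all_coeffs()[::-1]  # gamma_0, gamma_1, ...

# Closed form: gamma_k = sum_n lambda_n * (-1)^k * C(n+alpha, n-k) / k!
gammas_closed = [
    sum(lam[n] * (-1)**k * sp.binomial(n + alpha, n - k) / sp.factorial(k)
        for n in range(k, len(lam)))
    for k in range(len(lam))
]
```

The subtle questions the paper addresses arise only in the limit of infinitely many terms, where the inner sums for the γ_k may diverge; the finite check above merely fixes the notation.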
Second-order analytic solutions for re-entry trajectories
NASA Astrophysics Data System (ADS)
Kim, Eun-Kyou
1993-01-01
With the development of aeroassist technology, either for near-earth orbital transfer with or without a plane change or for planetary aerocapture, it is of interest to have accurate analytic solutions for reentry trajectories in an explicit form. Starting with the equations of motion of a non-thrusting aerodynamic vehicle entering a non-rotating spherical planetary atmosphere, a normalization technique is used to transform the equations into a form suitable for an analytic integration. Then, depending on the type of planar entry modes with a constant angle-of-attack, namely, ballistic fly-through, lifting skip, and equilibrium glide trajectories, the first-order solutions are obtained with the appropriate simplification. By analytic continuation, the second-order solutions for the altitude, speed, and flight path angle are derived. The closed form solutions lead to explicit forms for the physical quantities of interest, such as the deceleration and aerodynamic heating rates. The analytic solutions for the planar case are extended to three-dimensional skip trajectories with a constant bank angle. The approximate solutions for the heading and latitude are developed to the second order. In each type of trajectory examined, explicit relations among the principal variables are in a form suitable for guidance and navigation purposes. The analytic solutions have excellent agreement with the numerical integrations. They also provide some new results which were not reported in the existing classical theory.
Analytical methods for quantitation of prenylated flavonoids from hops.
Nikolić, Dejan; van Breemen, Richard B
2013-01-01
The female flowers of hops (Humulus lupulus L.) are used as a flavoring agent in the brewing industry. There is growing interest in possible health benefits of hops, particularly as estrogenic and chemopreventive agents. Among the possible active constituents, most of the attention has focused on prenylated flavonoids, which can chemically be classified as prenylated chalcones and prenylated flavanones. Among chalcones, xanthohumol (XN) and desmethylxanthohumol (DMX) have been the most studied, while among flavanones, 8-prenylnaringenin (8-PN) and 6-prenylnaringenin (6-PN) have received the most attention. Because of the interest in medicinal properties of prenylated flavonoids, there is demand for accurate, reproducible and sensitive analytical methods to quantify these compounds in various matrices. Such methods are needed, for example, for quality control and standardization of hop extracts, measurement of the content of prenylated flavonoids in beer, and to determine pharmacokinetic properties of prenylated flavonoids in animals and humans. This review summarizes currently available analytical methods for quantitative analysis of the major prenylated flavonoids, with an emphasis on the LC-MS and LC-MS-MS methods and their recent applications to biomedical research on hops. This review covers all methods in which prenylated flavonoids have been measured, either as the primary analytes or as a part of a larger group of analytes. The review also discusses methodological issues relating to the quantitative analysis of these compounds regardless of the chosen analytical approach. PMID:24077106
Analytical advantages of multivariate data processing. One, two, three, infinity?
Olivieri, Alejandro C
2008-08-01
Multidimensional data are being abundantly produced by modern analytical instrumentation, calling for new and powerful data-processing techniques. Research in the last two decades has resulted in the development of a multitude of different processing algorithms, each equipped with its own sophisticated artillery. Analysts have slowly discovered that this body of knowledge can be appropriately classified, and that common aspects pervade all these seemingly different ways of analyzing data. As a result, going from univariate data (a single datum per sample, employed in the well-known classical univariate calibration) to multivariate data (data arrays per sample of increasingly complex structure and number of dimensions) is known to provide a gain in sensitivity and selectivity, combined with analytical advantages which cannot be overestimated. The first-order advantage, achieved using vector sample data, allows analysts to flag new samples which cannot be adequately modeled with the current calibration set. The second-order advantage, achieved with second- (or higher-) order sample data, allows one not only to mark new samples containing components which do not occur in the calibration phase but also to model their contribution to the overall signal, and most importantly, to accurately quantitate the calibrated analyte(s). No additional analytical advantages appear to be known for third-order data processing. Future research may permit, among other interesting issues, to assess if this "1, 2, 3, infinity" situation of multivariate calibration is really true. PMID:18613646
Analytical Sociology: A Bungean Appreciation
NASA Astrophysics Data System (ADS)
Wan, Poe Yu-ze
2012-10-01
Analytical sociology, an intellectual project that has garnered considerable attention across a variety of disciplines in recent years, aims to explain complex social processes by dissecting them, accentuating their most important constituent parts, and constructing appropriate models to understand the emergence of what is observed. To achieve this goal, analytical sociologists demonstrate an unequivocal focus on the mechanism-based explanation grounded in action theory. In this article I attempt a critical appreciation of analytical sociology from the perspective of Mario Bunge's philosophical system, which I characterize as emergentist systemism. I submit that while the principles of analytical sociology and those of Bunge's approach share a lot in common, the latter brings to the fore the ontological status and explanatory importance of supra-individual actors (as concrete systems endowed with emergent causal powers) and macro-social mechanisms (as processes unfolding in and among social systems), and therefore it does not stipulate that every causal explanation of social facts has to include explicit references to individual-level actors and mechanisms. In this sense, Bunge's approach provides a reasonable middle course between the Scylla of sociological reification and the Charybdis of ontological individualism, and thus serves as an antidote to the untenable "strong program of microfoundations" to which some analytical sociologists are committed.
Climate Analytics as a Service
NASA Technical Reports Server (NTRS)
Schnase, John L.; Duffy, Daniel Q.; McInerney, Mark A.; Webster, W. Phillip; Lee, Tsengdar J.
2014-01-01
Climate science is a big data domain that is experiencing unprecedented growth. In our efforts to address the big data challenges of climate science, we are moving toward a notion of Climate Analytics-as-a-Service (CAaaS). CAaaS combines high-performance computing and data-proximal analytics with scalable data management, cloud computing virtualization, the notion of adaptive analytics, and a domain-harmonized API to improve the accessibility and usability of large collections of climate data. MERRA Analytic Services (MERRA/AS) provides an example of CAaaS. MERRA/AS enables MapReduce analytics over NASA's Modern-Era Retrospective Analysis for Research and Applications (MERRA) data collection. The MERRA reanalysis integrates observational data with numerical models to produce a global temporally and spatially consistent synthesis of key climate variables. The effectiveness of MERRA/AS has been demonstrated in several applications. In our experience, CAaaS is providing the agility required to meet our customers' increasing and changing data management and data analysis needs.
A spectroscopic transfer standard for accurate atmospheric CO measurements
NASA Astrophysics Data System (ADS)
Nwaboh, Javis A.; Li, Gang; Serdyukov, Anton; Werhahn, Olav; Ebert, Volker
2016-04-01
Atmospheric carbon monoxide (CO) is a precursor of essential climate variables and has an indirect effect of enhancing global warming. Accurate and reliable measurements of atmospheric CO concentration are becoming indispensable. WMO-GAW reports state a compatibility goal of ±2 ppb for atmospheric CO concentration measurements. Therefore, the EMRP-HIGHGAS (European metrology research program - high-impact greenhouse gases) project aims at developing spectroscopic transfer standards for CO concentration measurements to meet this goal. A spectroscopic transfer standard provides results that are directly traceable to the SI, can be very useful for calibration of devices operating in the field, and could complement classical gas standards in the field, where calibration gas mixtures in bottles often are not accurate, available or stable enough [1][2]. Here, we present our new direct tunable diode laser absorption spectroscopy (dTDLAS) sensor, capable of performing absolute ("calibration-free") CO concentration measurements and of being operated as a spectroscopic transfer standard. To achieve the compatibility goal stated by the WMO for CO concentration measurements and to ensure the traceability of the final concentration results, traceable spectral line data, especially line intensities with appropriate uncertainties, are needed. Therefore, we utilize our new high-resolution Fourier-transform infrared (FTIR) spectroscopy CO line data for the 2-0 band, with significantly reduced uncertainties, for the dTDLAS data evaluation. Further, we demonstrate the capability of our sensor for atmospheric CO measurements, discuss the uncertainty calculation following the guide to the expression of uncertainty in measurement (GUM) principles, and show that CO concentrations derived using the sensor, based on the TILSAM (traceable infrared laser spectroscopic amount fraction measurement) method, are in excellent agreement with gravimetric values. Acknowledgement Parts of this work have been
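In its simplest isolated-line form, the calibration-free evaluation behind TILSAM-style dTDLAS reduces to converting an integrated absorbance into an amount fraction via the ideal gas law: A = S(T)·n·x·L, with n = p/(k_B·T). The sketch below is a generic illustration with that single-line model; the function name and any example numbers are assumptions, not CO-specific published values.

```python
import math

K_B = 1.380649e-23  # Boltzmann constant [J/K], exact SI value

def amount_fraction(area, line_strength, path_len, pressure, temperature):
    """Calibration-free amount fraction from one isolated absorption line:
    x = A / (S(T) * n * L), with n = p / (k_B * T) (ideal gas).
    area          : integrated absorbance A [cm^-1]
    line_strength : line intensity S(T) [cm^-1 / (molecule cm^-2)]
    path_len      : optical path length L [cm]
    pressure      : total pressure p [Pa]
    temperature   : gas temperature T [K]"""
    n_total = pressure / (K_B * temperature) * 1e-6  # molecules per cm^3
    return area / (line_strength * n_total * path_len)
```

A quick round-trip (synthesize A from a known mole fraction, then invert) confirms the unit bookkeeping.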
Semi-Analytic Reconstruction of Flux in Finite Volume Formulations
NASA Technical Reports Server (NTRS)
Gnoffo, Peter A.
2006-01-01
Semi-analytic reconstruction uses the analytic solution to a second-order, steady, ordinary differential equation (ODE) to simultaneously evaluate the convective and diffusive flux at all interfaces of a finite volume formulation. The second-order ODE is itself a linearized approximation to the governing first- and second-order partial differential equation conservation laws. Thus, semi-analytic reconstruction defines a family of formulations for finite volume interface fluxes using analytic solutions to approximating equations. Limiters are not applied in a conventional sense; rather, diffusivity is adjusted in the vicinity of changes in sign of eigenvalues in order to achieve a sufficiently small cell Reynolds number in the analytic formulation across critical points. Several approaches for application of semi-analytic reconstruction for the solution of one-dimensional scalar equations are introduced. Results are compared with exact analytic solutions to Burgers' equation as well as a conventional, upwind discretization using Roe's method. One approach, the end-point wave speed (EPWS) approximation, is further developed for more complex applications. One-dimensional vector equations are tested on a quasi-one-dimensional nozzle application. The EPWS algorithm has a more compact difference stencil than Roe's algorithm, but reconstruction time is approximately a factor of four larger than for Roe's method. Though both are second-order accurate schemes, Roe's method approaches a grid-converged solution with fewer grid points. Reconstruction of flux in the context of multi-dimensional, vector conservation laws, including effects of thermochemical nonequilibrium in the Navier-Stokes equations, is developed.
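For readers unfamiliar with the baseline being compared against, a minimal conservative upwind finite-volume scheme for the inviscid Burgers' equation can be sketched in a few lines. This is a generic first-order reference discretization under the assumption u > 0 everywhere, not the EPWS or Roe implementation from the paper; it is run on a Riemann problem whose exact shock speed is (uL + uR)/2.

```python
import numpy as np

def burgers_upwind(u_left=2.0, u_right=1.0, nx=400, t_end=0.2, cfl=0.5):
    """First-order upwind finite-volume scheme for u_t + (u^2/2)_x = 0
    on [-1, 1], with a step initial condition at x = 0.
    Valid as written only for u > 0 (upwind direction fixed)."""
    x = np.linspace(-1.0, 1.0, nx)
    dx = x[1] - x[0]
    u = np.where(x < 0.0, u_left, u_right)
    t = 0.0
    while t < t_end:
        dt = min(cfl * dx / np.max(np.abs(u)), t_end - t)
        flux = 0.5 * u**2  # physical flux f(u) = u^2 / 2
        # conservative update: u_i -= dt/dx * (f(u_i) - f(u_{i-1}))
        u[1:] = u[1:] - dt / dx * (flux[1:] - flux[:-1])
        t += dt
    return x, u
```

Because the scheme is conservative, the captured shock travels at the exact Rankine-Hugoniot speed, here (2 + 1)/2 = 1.5, reaching x ≈ 0.3 at t = 0.2 even though the front is smeared over a few cells.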
Evaluation of the WIND System atmospheric models: An analytic approach
Fast, J.D.
1991-11-25
An analytic approach was used in this study to test the logic, coding, and the theoretical limits of the WIND System atmospheric models for the Savannah River Plant. In this method, dose or concentration estimates predicted by the models were compared to the analytic solutions to evaluate their performance. The results from AREA EVACUATION and PUFF/PLUME were very nearly identical to the analytic solutions they are based on, and the evaluation procedure demonstrated that these models were able to reproduce the theoretical characteristics of a puff or a plume. The dose or concentration predicted by PUFF/PLUME was always within 1% of the analytic solution. Differences between the dose predicted by 2DPUF and its analytic solution were substantially greater than those associated with PUFF/PLUME, but were usually smaller than 6%. This behavior was expected because PUFF/PLUME solves a form of the analytic solution for a single puff, and 2DPUF performs an integration over a period of time for several puffs to obtain the dose. Relatively large differences between the dose predicted by 2DPUF and its analytic solution were found to occur close to the source under stable atmospheric conditions. WIND System users should be aware of these situations in which the assumptions of the System atmospheric models may be violated so that dose predictions can be interpreted correctly. The WIND System atmospheric models are similar to many other dispersion codes used by the EPA, NRC, and DOE. If the quality of the source term and meteorological data is high, relatively accurate and timely forecasts for emergency response situations can be made by the WIND System atmospheric models.
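Puff/plume codes of this family are built on the standard Gaussian solution of the advection-diffusion equation, which is what makes the analytic verification above possible. A sketch of the ground-reflected Gaussian plume concentration (illustrative parameter values; this is the textbook formula, not the WIND System code):

```python
# Gaussian plume concentration from a continuous point source, with ground
# reflection handled by an image source at height -h.
import math

def gaussian_plume(q, u, y, z, h, sig_y, sig_z):
    """Concentration (g/m^3) for source strength q (g/s), wind speed u (m/s),
    crosswind offset y, height z, release height h, dispersion sig_y, sig_z (m)."""
    lateral = math.exp(-y * y / (2.0 * sig_y ** 2))
    vertical = (math.exp(-(z - h) ** 2 / (2.0 * sig_z ** 2))
                + math.exp(-(z + h) ** 2 / (2.0 * sig_z ** 2)))  # image term
    return q / (2.0 * math.pi * u * sig_y * sig_z) * lateral * vertical

# On the centreline at ground level, a ground-level release (h = 0) gives
# exactly twice the unreflected value: the kind of closed-form property an
# analytic evaluation can check a dispersion code against.
c = gaussian_plume(q=1.0, u=5.0, y=0.0, z=0.0, h=0.0, sig_y=20.0, sig_z=10.0)
print(c)
```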
Fast and accurate predictions of covalent bonds in chemical space.
Chang, K Y Samuel; Fias, Stijn; Ramakrishnan, Raghunathan; von Lilienfeld, O Anatole
2016-05-01
We assess the predictive accuracy of perturbation theory based estimates of changes in covalent bonding due to linear alchemical interpolations among molecules. We have investigated σ bonding to hydrogen, as well as σ and π bonding between main-group elements, occurring in small sets of iso-valence-electronic molecules with elements drawn from second to fourth rows in the p-block of the periodic table. Numerical evidence suggests that first order Taylor expansions of covalent bonding potentials can achieve high accuracy if (i) the alchemical interpolation is vertical (fixed geometry), (ii) it involves elements from the third and fourth rows of the periodic table, and (iii) an optimal reference geometry is used. This leads to near linear changes in the bonding potential, resulting in analytical predictions with chemical accuracy (∼1 kcal/mol). Second order estimates deteriorate the prediction. If initial and final molecules differ not only in composition but also in geometry, all estimates become substantially worse, with second order being slightly more accurate than first order. The independent particle approximation based second order perturbation theory performs poorly when compared to the coupled perturbed or finite difference approach. Taylor series expansions up to fourth order of the potential energy curve of highly symmetric systems indicate a finite radius of convergence, as illustrated for the alchemical stretching of H2 (+). Results are presented for (i) covalent bonds to hydrogen in 12 molecules with 8 valence electrons (CH4, NH3, H2O, HF, SiH4, PH3, H2S, HCl, GeH4, AsH3, H2Se, HBr); (ii) main-group single bonds in 9 molecules with 14 valence electrons (CH3F, CH3Cl, CH3Br, SiH3F, SiH3Cl, SiH3Br, GeH3F, GeH3Cl, GeH3Br); (iii) main-group double bonds in 9 molecules with 12 valence electrons (CH2O, CH2S, CH2Se, SiH2O, SiH2S, SiH2Se, GeH2O, GeH2S, GeH2Se); (iv) main-group triple bonds in 9 molecules with 10 valence electrons (HCN, HCP, HCAs, HSiN, HSi
An Analytical Model for the Influence of Contact Resistance on Thermoelectric Efficiency
NASA Astrophysics Data System (ADS)
Bjørk, Rasmus
2016-03-01
An analytical model is presented that can account for both electrical and hot and cold thermal contact resistances when calculating the efficiency of a thermoelectric generator. The model is compared to a numerical model of a thermoelectric leg for 16 different thermoelectric materials, as well as to the analytical models of Ebling et al. (J Electron Mater 39:1376, 2010) and Min and Rowe (J Power Sour 38:253, 1992). The model presented here is shown to accurately calculate the efficiency for all systems and all contact resistances considered, with an average difference in efficiency between the numerical model and the analytical model of -0.07 ± 0.35pp. This makes the model more accurate than previously published models. The maximum absolute difference in efficiency between the analytical model and the numerical model is 1.14pp for all materials and all contact resistances considered.
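The effect the model captures can be illustrated with the standard constant-property single-leg formulas, where an electrical contact resistance simply adds in series with the leg (the paper's model additionally treats the hot- and cold-side thermal contact resistances). All material values below are illustrative:

```python
# Sketch: efficiency of one thermoelectric leg versus load resistance, with an
# electrical contact resistance at each end. Constant-property model;
# illustrative numbers, not the paper's 16 materials.
def leg_efficiency(r_contact, n_load=2000):
    alpha = 200e-6           # Seebeck coefficient, V/K
    rho, kappa = 1e-5, 1.5   # resistivity (ohm m), thermal conductivity (W/m K)
    length, area = 2e-3, 1e-6
    t_h, t_c = 500.0, 300.0
    dT = t_h - t_c
    r_leg = rho * length / area          # internal electrical resistance
    k_th = kappa * area / length         # thermal conductance, W/K
    r_int = r_leg + 2.0 * r_contact      # contacts at both interfaces
    best = 0.0
    for i in range(1, n_load):           # sweep the load resistance
        r_load = r_int * i * 0.005
        current = alpha * dT / (r_int + r_load)
        p_out = current ** 2 * r_load
        q_hot = (alpha * t_h * current + k_th * dT
                 - 0.5 * current ** 2 * r_int)  # Peltier + conduction - Joule return
        best = max(best, p_out / q_hot)
    return best

eta_ideal = leg_efficiency(0.0)
eta_contact = leg_efficiency(0.002)      # 2 mOhm per interface
print(eta_ideal, eta_contact)            # contacts lower the peak efficiency
```

Even a few milliohms per interface visibly degrades the peak efficiency, which is why analytical corrections of the kind compared in this paper matter for short legs.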
Panuwet, Parinya; Hunter, Ronald E.; D’Souza, Priya E.; Chen, Xianyu; Radford, Samantha A.; Cohen, Jordan R.; Marder, M. Elizabeth; Kartavenka, Kostya; Ryan, P. Barry; Barr, Dana Boyd
2015-01-01
In biomonitoring, the ability to quantify levels of target analytes in biological samples accurately and precisely requires the use of highly sensitive and selective instrumentation, such as tandem mass spectrometers, and a thorough understanding of highly variable matrix effects. Typically, matrix effects are caused by co-eluting matrix components that alter the ionization of target analytes as well as their chromatographic response, leading to reduced or increased sensitivity of the analysis. Thus, before the desired accuracy and precision standards of laboratory data are achieved, these effects must be characterized and controlled. Here we present our review and observations of matrix effects encountered during the validation and implementation of tandem mass spectrometry-based analytical methods. We also provide systematic, comprehensive laboratory strategies needed to control challenges posed by matrix effects in order to ensure delivery of the most accurate data for biomonitoring studies assessing exposure to environmental toxicants. PMID:25562585
Big Data Analytics in Healthcare
Belle, Ashwin; Thiagarajan, Raghuram; Soroushmehr, S. M. Reza; Navidi, Fatemeh; Beard, Daniel A.; Najarian, Kayvan
2015-01-01
The rapidly expanding field of big data analytics has started to play a pivotal role in the evolution of healthcare practices and research. It has provided tools to accumulate, manage, analyze, and assimilate large volumes of disparate, structured, and unstructured data produced by current healthcare systems. Big data analytics has been recently applied towards aiding the process of care delivery and disease exploration. However, the adoption rate and research development in this space is still hindered by some fundamental problems inherent within the big data paradigm. In this paper, we discuss some of these major challenges with a focus on three upcoming and promising areas of medical research: image, signal, and genomics based analytics. Recent research which targets utilization of large volumes of medical data while combining multimodal data from disparate sources is discussed. Potential areas of research within this field which have the ability to provide meaningful impact on healthcare delivery are also examined. PMID:26229957
Authenticity and the analytic process.
Boccara, Paolo; Gaddini, Andrea; Riefolo, Giuseppe
2009-12-01
In this paper we first differentiate between phenomena that can be defined as spontaneous and others that can be defined as authentic. We then attempt to present authenticity as a process rather than an outcome. Finally, we try to understand the location of authentic phenomena, which we situate in the register of sensorial and pre-symbolic communication. The authentic process becomes manifest, step by step, in the analytic process (Borgogno, 1999), through the vivid iconic and sensorial elements that happen to cross the analytic field. Through two brief clinical vignettes, we seek to document the progression of the analytic process, in one case through the analyst's capacity for rêverie (Bion, 1962; Ogden, 1994, 1997; Ferro, 2002, 2007), and in the other through the sensorial elements with which analyst and patient are able to tune in to each other.
Analytical Applications of NMR: Summer Symposium on Analytical Chemistry.
ERIC Educational Resources Information Center
Borman, Stuart A.
1982-01-01
Highlights a symposium on analytical applications of nuclear magnetic resonance spectroscopy (NMR), discussing pulse Fourier transformation technique, two-dimensional NMR, solid state NMR, and multinuclear NMR. Includes description of ORACLE, an NMR data processing system at Syracuse University using real-time color graphics, and algorithms for…
Current projects in Pre-analytics: where to go?
Sapino, Anna; Annaratone, Laura; Marchiò, Caterina
2015-01-01
The current clinical practice of tissue handling and sample preparation is multifaceted and lacks strict standardisation: this scenario leads to significant variability in the quality of clinical samples. Poor tissue preservation has a detrimental effect, leading to morphological artefacts, hampering the reproducibility of immunocytochemical and molecular diagnostic results (protein expression, DNA gene mutations, RNA gene expression) and affecting research outcomes with irreproducible gene expression and post-transcriptional data. Altogether, this limits the opportunity to share and pool national databases into common European databases. At the European level, standardisation of pre-analytical steps is just beginning, and issues regarding bio-specimen collection and management are still debated. A joint (public-private) project on the standardisation of tissue handling in pre-analytical procedures has recently been funded in Italy with the aim of proposing novel approaches to the neglected issue of pre-analytical procedures. In this chapter, we show how investing in pre-analytics may impact both public health problems and practical innovation in solid tumour processing.
Analytic Models of Plausible Gravitational Lens Potentials
Baltz, Edward A.; Marshall, Phil; Oguri, Masamune
2007-05-04
Gravitational lenses on galaxy scales are plausibly modeled as having ellipsoidal symmetry and a universal dark matter density profile, with a Sersic profile to describe the distribution of baryonic matter. Predicting all lensing effects requires knowledge of the total lens potential: in this work we give analytic forms for that of the above hybrid model. Emphasizing that complex lens potentials can be constructed from simpler components in linear combination, we provide a recipe for attaining elliptical symmetry in either projected mass or lens potential. We also provide analytic formulae for the lens potentials of Sersic profiles for integer and half-integer index. We then present formulae describing the gravitational lensing effects due to smoothly truncated universal density profiles in the cold dark matter model. For our isolated haloes the density profile falls off as radius to the minus fifth or seventh power beyond the tidal radius, functional forms that allow all orders of lens potential derivatives to be calculated analytically, while ensuring a non-divergent total mass. We show how the observables predicted by this profile differ from those of the original infinite-mass NFW profile. Expressions for the gravitational flexion are highlighted. We show how decreasing the tidal radius allows stripped haloes to be modeled, providing a framework for a fuller investigation of dark matter substructure in galaxies and clusters. Finally we remark on the need for finite-mass halo profiles when doing cosmological ray-tracing simulations, and the need for readily calculable higher-order derivatives of the lens potential when studying catastrophes in strong lenses.
A direct analytical approach for solving linear inverse heat conduction problems
NASA Astrophysics Data System (ADS)
Ainajem, N. M.; Ozisik, M. N.
1985-08-01
The analytical approach presented for the solution of linear inverse heat conduction problems demonstrates that applied surface conditions involving abrupt changes with time can be effectively accommodated with polynomial representations in time over the entire time domain; the resulting inverse analysis predicts surface conditions accurately. All previous attempts have experienced difficulties in the development of analytic solutions that are applicable over the entire time domain when a polynomial representation is used.
Real-Time Analytics for the Healthcare Industry: Arrhythmia Detection.
Agneeswaran, Vijay Srinivas; Mukherjee, Joydeb; Gupta, Ashutosh; Tonpay, Pranay; Tiwari, Jayati; Agarwal, Nitin
2013-09-01
It is time for the healthcare industry to move from the era of "analyzing our health history" to the age of "managing the future of our health." In this article, we illustrate the importance of real-time analytics across the healthcare industry by providing a generic mechanism to reengineer traditional analytics expressed in the R programming language into Storm-based real-time analytics code. This is a powerful abstraction, since most data scientists use R to write the analytics and are not clear on how to make them work in real time on high-velocity data. Our paper focuses on the applications necessary to a healthcare analytics scenario, specifically focusing on the importance of electrocardiogram (ECG) monitoring. A physician can use our framework to compare ECG reports by categorization and consequently detect arrhythmia. The framework can read the ECG signals and uses a machine learning-based categorizer that runs within a Storm environment to compare different ECG signals. The paper also presents some performance studies of the framework to illustrate the throughput and accuracy trade-off in real-time analytics.
Eides, M.I.; Karshenboim, S.G.; Shelyuto, V.A.
1991-02-01
An analytic expression for radiative-recoil corrections to the muonium ground-state hyperfine splitting induced by muon-line radiative insertions is obtained. This result completes the program of analytic calculation of all radiative-recoil corrections. Prospects for further investigations of muonium hyperfine splitting are also discussed.
An Analytic Approach to Projectile Motion in a Linear Resisting Medium
ERIC Educational Resources Information Center
Stewart, Sean M.
2006-01-01
The time of flight, range and the angle which maximizes the range of a projectile in a linear resisting medium are expressed in analytic form in terms of the recently defined Lambert W function. From the closed-form solutions a number of results characteristic to the motion of the projectile in a linear resisting medium are analytically confirmed,…
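The closed-form flight time under linear drag can be sketched explicitly. With equation of motion v' = -g - kv, setting the height to zero gives T = A + W0(-A e^(-A)) with A = 1 + k*v0y/g and t_flight = T/k, where W0 is the principal branch of the Lambert W function. The small Newton iteration below stands in for a library implementation of W; all numerical values are illustrative:

```python
# Time of flight of a projectile in a linear resisting medium via Lambert W.
import math

def lambert_w0(x, iters=80):
    """Principal branch of w*exp(w) = x for x >= -1/e, via Newton iteration."""
    w = 0.0
    for _ in range(iters):
        ew = math.exp(w)
        w -= (w * ew - x) / ((1.0 + w) * ew)
    return w

def time_of_flight(v0y, g=9.81, k=0.1):
    # Closed form: T = A + W0(-A e^{-A}), A = 1 + k*v0y/g, t = T/k.
    a = 1.0 + k * v0y / g
    t_big = a + lambert_w0(-a * math.exp(-a))
    return t_big / k

def height(t, v0y=10.0, g=9.81, k=0.1):
    # y(t) under linear drag; vanishes at the flight time.
    return (v0y + g / k) * (1.0 - math.exp(-k * t)) / k - g * t / k

t = time_of_flight(10.0)
print(t, height(t))   # flight time slightly below the vacuum value 2*v0y/g
```

The drag-free limit 2*v0y/g is an upper bound, so the computed value provides an easy sanity check on the closed form.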
Accurate description of calcium solvation in concentrated aqueous solutions.
Kohagen, Miriam; Mason, Philip E; Jungwirth, Pavel
2014-07-17
Calcium is one of the biologically most important ions; however, its accurate description by classical molecular dynamics simulations is complicated by strong electrostatic and polarization interactions with surroundings due to its divalent nature. Here, we explore the recently suggested approach for effectively accounting for polarization effects via ionic charge rescaling and develop a new and accurate parametrization of the calcium dication. Comparison to neutron scattering and viscosity measurements demonstrates that our model allows for an accurate description of concentrated aqueous calcium chloride solutions. The present model should find broad use in efficient and accurate modeling of calcium in aqueous environments, such as those encountered in biological and technological applications.
Analytical model of plasma-chemical etching in planar reactor
NASA Astrophysics Data System (ADS)
Veselov, D. S.; Bakun, A. D.; Voronov, Yu A.; Kireev, V. Yu; Vasileva, O. V.
2016-09-01
The paper discusses an analytical model of plasma-chemical etching in a planar diode-type reactor. Analytical expressions for the etch rate and etch anisotropy were obtained. It is shown that etch anisotropy increases with increasing ion current and ion energy. At the same time, the etch selectivity of the processed material with respect to the mask decreases. The etch rate decreases with distance from the centre axis of the reactor. To decrease the loading effect, it is necessary to reduce the wafer temperature and the pressure in the reactor, as well as to increase the gas flow rate through the reactor.
Analytical solutions for static elastic deformations of wire ropes
NASA Technical Reports Server (NTRS)
Kumar, K.; Cochran, J. E., Jr.
1987-01-01
This paper develops closed-form solutions for extension of twisted wire ropes subjected to axial forces for two different end conditions. The analytical results are compared with the corresponding numerical results obtained by Costello and Phillips. A close agreement between the two establishes validity of the analytical solutions. Finally, an expression for the effective rigidity modulus of the wire ropes is obtained in terms of the helix angle and the number of helical wires in the rope for each of the two end conditions.
On maximal analytical extension of the Vaidya metric
NASA Astrophysics Data System (ADS)
Berezin, V. A.; Dokuchaev, V. I.; Eroshenko, Yu N.
2016-07-01
The classical Vaidya metric is transformed to special diagonal coordinates in the case of a linear mass function, which allows a rather simple treatment. We find exact analytical expressions for the metric functions in these diagonal coordinates. Using these coordinates, we elaborate the maximal analytic extension of the Vaidya metric with a linear growth of the black hole mass and construct the corresponding Carter–Penrose diagrams for different specific cases. The derived global geometry seemingly remains valid for more general behavior of the black hole mass in the Vaidya metric.
Analytical collisionless damping rate of geodesic acoustic mode
NASA Astrophysics Data System (ADS)
Ren, H.; Xu, X. Q.
2016-10-01
Collisionless damping of the geodesic acoustic mode (GAM) is analytically investigated by considering the finite-orbit-width (FOW) resonance effect to 3rd order in the gyro-kinetic equations. A concise and transparent expression for the damping rate is presented for the first time. Good agreement is found between the analytical damping rate and the previous TEMPEST simulation result (Xu et al 2008 Phys. Rev. Lett. 100 215001) for systematic q scans. Our result also shows that the FOW effect must be taken into account to 3rd order to achieve sufficient accuracy.
Analytic Modeling of Pressurization and Cryogenic Propellant
NASA Technical Reports Server (NTRS)
Corpening, Jeremy H.
2010-01-01
An analytic model for pressurization and cryogenic propellant conditions during all mission phases of any liquid rocket based vehicle has been developed and validated. The model assumes the propellant tanks to be divided into five nodes and also implements an empirical correlation for liquid stratification if desired. The five nodes include a tank wall node exposed to ullage gas, an ullage gas node, a saturated propellant vapor node at the liquid-vapor interface, a liquid node, and a tank wall node exposed to liquid. The conservation equations of mass and energy are then applied across all the node boundaries and, with the use of perfect gas assumptions, explicit solutions for ullage and liquid conditions are derived. All fluid properties are updated in real time using NIST REFPROP. Further, mass transfer at the liquid-vapor interface is included in the form of evaporation, bulk boiling of liquid propellant, and condensation, given the appropriate conditions for each. Model validation has proven highly successful against previous analytic models and various Saturn-era test data, and reasonably successful against more recent LH2 tank self-pressurization ground test data. Finally, this model has been applied to numerous design iterations for the Altair Lunar Lander, Ares V Core Stage, and Ares V Earth Departure Stage in order to characterize helium and autogenous pressurant requirements, propellant lost to evaporation and thermodynamic venting to maintain propellant conditions, and non-uniform tank draining in configurations utilizing multiple LH2 or LO2 propellant tanks. In conclusion, this model provides an accurate and efficient means of analyzing multiple design configurations for any cryogenic propellant tank in launch, low-acceleration coast, or in-space maneuvering, and supplies the user with pressurization requirements, unusable propellants from evaporation and liquid stratification, and general ullage gas, liquid, and tank wall conditions as functions of time.
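The core bookkeeping of such a model can be reduced to a one-node sketch: an ideal-gas ullage whose volume grows as liquid drains and whose mass grows with helium pressurant inflow. The flight model adds energy equations, wall and interface nodes, and real-fluid properties; here the gas is held isothermal and every number is illustrative:

```python
# Much-reduced sketch of ullage-gas pressurization: one isothermal ideal-gas
# node with pressurant inflow and liquid outflow. Illustrative values only.
R_HE = 2077.0            # J/(kg K), helium specific gas constant

def ullage_pressure(t, v0=1.0, t_gas=250.0, m0=None, mdot=0.05, q_drain=0.01):
    """Ullage pressure (Pa) after t seconds of draining with pressurant inflow:
    v0 = initial ullage volume (m^3), t_gas = gas temperature (K),
    mdot = helium inflow (kg/s), q_drain = liquid outflow (m^3/s)."""
    if m0 is None:
        m0 = 300e3 * v0 / (R_HE * t_gas)   # start at 300 kPa
    volume = v0 + q_drain * t              # liquid leaves, ullage grows
    mass = m0 + mdot * t                   # pressurant enters
    return mass * R_HE * t_gas / volume    # ideal gas law, P = m R T / V

p0 = ullage_pressure(0.0)
p60 = ullage_pressure(60.0)
print(p0, p60)   # here the inflow more than offsets the volume growth
```

Sizing mdot so that pressure stays within tank limits while liquid drains is exactly the pressurant-requirement question the full five-node model answers.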
ANALYTICAL STAR FORMATION RATE FROM GRAVOTURBULENT FRAGMENTATION
Hennebelle, Patrick; Chabrier, Gilles
2011-12-20
We present an analytical determination of the star formation rate (SFR) in molecular clouds, based on a time-dependent extension of our analytical theory of the stellar initial mass function. The theory yields SFRs in good agreement with observations, suggesting that turbulence is the dominant, initial process responsible for star formation. In contrast to previous SFR theories, the present one does not invoke an ad hoc density threshold for star formation; instead, the SFR continuously increases with gas density, naturally yielding two different characteristic regimes, thus two different slopes in the SFR versus gas density relationship, in agreement with observational determinations. Besides the complete SFR derivation, we also provide a simplified expression, which reproduces the complete calculations reasonably well and can easily be used for quick determinations of SFRs in cloud environments. A key property at the heart of both our complete and simplified theory is that the SFR involves a density-dependent dynamical time, characteristic of each collapsing (prestellar) overdense region in the cloud, instead of one single mean or critical freefall timescale. Unfortunately, the SFR also depends on some ill-determined parameters, such as the core-to-star mass conversion efficiency and the crossing timescale. Although we provide estimates for these parameters, their uncertainty hampers a precise quantitative determination of the SFR, within less than a factor of a few.
NASA Astrophysics Data System (ADS)
Omori, Takayuki; Sano, Katsuhiro; Yoneda, Minoru
2014-05-01
This paper presents new correction approaches for "early" radiocarbon ages to reconstruct the Paleolithic absolute chronology. In order to discuss the time-space distribution of the replacement of archaic humans, including Neanderthals in Europe, by modern humans, a massive dataset covering a wide area is needed. Today, several radiocarbon databases focused on the Paleolithic have been published and used for chronological studies. From the viewpoint of current analytical technology, however, these databases contain unreliable results that make interpretation of radiocarbon dates difficult. Most of these unreliable ages were published in the early days of radiocarbon analysis. In recent years, new analytical methods to determine highly accurate dates have been developed. Ultrafiltration and ABOx-SC methods, as new sample pretreatments for bone and charcoal respectively, have attracted attention because they can remove imperceptible contaminants and derive reliably accurate ages. In order to evaluate the reliability of "early" data, we investigated the differences and variability of radiocarbon ages under different pretreatments, and attempted to develop correction functions for assessing their reliability. The corrected ages are expected to be more reliable and applicable to chronological research together with recently determined ages. Here, we introduce the methodological framework and archaeological applications.
Joint iris boundary detection and fit: a real-time method for accurate pupil tracking.
Barbosa, Marconi; James, Andrew C
2014-08-01
A range of applications in visual science rely on accurate tracking of the human pupil's movement and contraction in response to light. While the literature on independent contour detection and fitting of the iris-pupil boundary is vast, a joint approach, in which it is assumed that the pupil has a given geometric shape, has been largely overlooked. We present here a global method for simultaneously finding and fitting an elliptic or circular contour against a dark interior, which produces consistently accurate results even under non-ideal recording conditions, such as reflections near and over the boundary, droopy eyelids, or the sudden formation of tears. The specific form of the proposed optimization problem allows us to write down closed analytic formulae for the gradient and the Hessian of the objective function. Moreover, both the objective function and its derivatives can be cast into vectorized form, making the proposed algorithm significantly faster than its closest relative in the literature. We compare methods in multiple ways, both analytically and numerically, using real iris images as well as idealizations of the iris for which the ground-truth boundary is precisely known. The method proposed here is illustrated under challenging recording conditions and is shown to be robust. PMID:25136477
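The benefit of a closed-form objective can be illustrated with the classic algebraic (Kasa) circle fit: writing the boundary as x^2 + y^2 + Dx + Ey + F = 0 makes the least-squares problem linear, so the fit reduces to a single 3x3 solve rather than an iterative search. This is only an analogy to the authors' joint detect-and-fit method, not their algorithm:

```python
# Algebraic circle fit: linear least squares for (D, E, F), then recover
# centre and radius. Pure-Python 3x3 Gaussian elimination keeps it standalone.
import math

def fit_circle(pts):
    a = [[0.0] * 3 for _ in range(3)]   # normal equations A x = b
    b = [0.0] * 3
    for x, y in pts:
        row = [x, y, 1.0]
        z = -(x * x + y * y)
        for i in range(3):
            b[i] += row[i] * z
            for j in range(3):
                a[i][j] += row[i] * row[j]
    for col in range(3):                # elimination with partial pivoting
        piv = max(range(col, 3), key=lambda r: abs(a[r][col]))
        a[col], a[piv] = a[piv], a[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, 3):
            f = a[r][col] / a[col][col]
            for c in range(col, 3):
                a[r][c] -= f * a[col][c]
            b[r] -= f * b[col]
    sol = [0.0] * 3
    for r in (2, 1, 0):                 # back substitution
        sol[r] = (b[r] - sum(a[r][c] * sol[c] for c in range(r + 1, 3))) / a[r][r]
    d, e, f = sol
    cx, cy = -d / 2.0, -e / 2.0
    return cx, cy, math.sqrt(cx * cx + cy * cy - f)

# Points on a circle of radius 2 centred at (1, -3).
pts = [(1 + 2 * math.cos(t / 10), -3 + 2 * math.sin(t / 10)) for t in range(60)]
cx, cy, r = fit_circle(pts)
print(cx, cy, r)   # recovers (1, -3, 2)
```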
Big Data Analytics in Immunology: A Knowledge-Based Approach
Zhang, Guang Lan
2014-01-01
With the vast amount of immunological data available, immunology research is entering the big data era. These data vary in granularity, quality, and complexity and are stored in various formats, including publications, technical reports, and databases. The challenge is to make the transition from data to actionable knowledge and wisdom and bridge the knowledge gap and application gap. We report a knowledge-based approach based on a framework called KB-builder that facilitates data mining by enabling fast development and deployment of web-accessible immunological data knowledge warehouses. Immunological knowledge discovery relies heavily on both the availability of accurate, up-to-date, and well-organized data and the proper analytics tools. We propose the use of knowledge-based approaches by developing knowledgebases combining well-annotated data with specialized analytical tools and integrating them into analytical workflow. A set of well-defined workflow types with rich summarization and visualization capacity facilitates the transformation from data to critical information and knowledge. By using KB-builder, we enabled streamlining of normally time-consuming processes of database development. The knowledgebases built using KB-builder will speed up rational vaccine design by providing accurate and well-annotated data coupled with tailored computational analysis tools and workflow. PMID:25045677
Monsanto analytical testing program for NPDES discharge self-monitoring
Hoogheem, T.J.; Woods, L.A.
1985-06-01
The Monsanto Analytical Testing (MAT) program was devised and implemented in order to provide analytical standards to Monsanto manufacturing plants involved in the self-monitoring of plant discharges as required by National Pollutant Discharge Elimination System (NPDES) permit conditions. Standards are prepared and supplied at concentration levels normally observed at each individual plant. These levels were established by canvassing all Monsanto plants having NPDES permits and by determining which analyses and concentrations were most appropriate. Standards are prepared by Monsanto's Environmental Sciences Center (ESC) using Environmental Protection Agency (EPA) methods. Eleven standards are currently available, each in three concentrations. Standards are issued quarterly in a company internal round-robin program, on a per-request basis, or both. Since initiation of the MAT program in 1981, the internal round-robin program has become an integral part of Monsanto's overall Good Laboratory Practices (GLP) program. Overall, results have shown that the company's plant analytical personnel can accurately analyze and report standard test samples. More importantly, such personnel have gained increased confidence in their ability to report accurate values for compounds regulated in their respective plant NPDES permits. 3 references, 3 tables.
Bioaccessibility tests accurately estimate bioavailability of lead to quail
Beyer, W. Nelson; Basta, Nicholas T; Chaney, Rufus L.; Henry, Paula F.; Mosby, David; Rattner, Barnett A.; Scheckel, Kirk G.; Sprague, Dan; Weber, John
2016-01-01
Hazards of soil-borne Pb to wild birds may be more accurately quantified if the bioavailability of that Pb is known. To better understand the bioavailability of Pb to birds, we measured blood Pb concentrations in Japanese quail (Coturnix japonica) fed diets containing Pb-contaminated soils. Relative bioavailabilities were expressed by comparison with blood Pb concentrations in quail fed a Pb acetate reference diet. Diets containing soil from five Pb-contaminated Superfund sites had relative bioavailabilities from 33% to 63%, with a mean of about 50%. Treatment of two of the soils with phosphorus significantly reduced the bioavailability of Pb. Bioaccessibility of Pb in the test soils was then measured in six in vitro tests and regressed on bioavailability. The six tests were: the “Relative Bioavailability Leaching Procedure” (RBALP) at pH 1.5, the same test conducted at pH 2.5, the “Ohio State University In vitro Gastrointestinal” method (OSU IVG), the “Urban Soil Bioaccessible Lead Test”, the modified “Physiologically Based Extraction Test”, and the “Waterfowl Physiologically Based Extraction Test.” All regressions had positive slopes. Based on criteria of slope and coefficient of determination, the RBALP pH 2.5 and OSU IVG tests performed very well. Speciation by X-ray absorption spectroscopy demonstrated that, on average, most of the Pb in the sampled soils was sorbed to minerals (30%), bound to organic matter (24%), or present as Pb sulfate (18%). Additional Pb was associated with P (chloropyromorphite, hydroxypyromorphite and tertiary Pb phosphate), and with Pb carbonates, leadhillite (a lead sulfate carbonate hydroxide), and Pb sulfide. The formation of chloropyromorphite reduced the bioavailability of Pb, and the amendment of Pb-contaminated soils with P may be a thermodynamically favored means to sequester Pb.
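The abstract above judges each in vitro test by the slope and coefficient of determination of bioavailability regressed on bioaccessibility. A minimal sketch of that style of analysis is below; the paired values are invented placeholders, not the study's data.

```python
import numpy as np

# Hypothetical (not the study's) paired measurements for five soils:
# in vitro bioaccessibility (%) and in vivo relative bioavailability (%)
bioaccessibility = np.array([35.0, 48.0, 52.0, 60.0, 71.0])
bioavailability = np.array([33.0, 45.0, 50.0, 55.0, 63.0])

# Ordinary least-squares line: bioavailability ~ slope * bioaccessibility + intercept
slope, intercept = np.polyfit(bioaccessibility, bioavailability, 1)

# Coefficient of determination (r^2), the second criterion named in the abstract
pred = slope * bioaccessibility + intercept
ss_res = np.sum((bioavailability - pred) ** 2)
ss_tot = np.sum((bioavailability - np.mean(bioavailability)) ** 2)
r2 = 1.0 - ss_res / ss_tot
print(round(slope, 3), round(r2, 3))
```

A test that tracks bioavailability well would show a positive slope and an r² near 1, which is how the RBALP pH 2.5 and OSU IVG methods were ranked.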
Analytical SAR-GMTI principles
NASA Astrophysics Data System (ADS)
Soumekh, Mehrdad; Majumder, Uttam K.; Barnes, Christopher; Sobota, David; Minardi, Michael
2016-05-01
This paper provides analytical principles to relate the signature of a moving target to parameters in a SAR system. Our objective is to establish analytical tools that could predict the shift and smearing of a moving target in a subaperture SAR image. Hence, a user could identify system parameters, such as the coherent processing interval for a subaperture, that are suitable for localizing the signature of a moving target for detection, tracking, and geolocation. The paper begins by outlining two well-known SAR data collection methods for detecting moving targets. One uses a scanning beam in the azimuth domain with a relatively high PRF to separate the moving targets from the stationary background (clutter); this is also known as Doppler Beam Sharpening. The other scheme uses two receivers along the track to null the clutter and, thus, provide GMTI. We also present results from implementing our SAR-GMTI analytical principles for the anticipated shift and smearing of a moving target in simulation code. The code provides a tool for the user to change the SAR system and moving-target parameters and predict the properties of a moving-target signature in a subaperture SAR image for a scene composed of both stationary and moving targets. Hence, the SAR simulation and imaging code can be used to demonstrate the validity and accuracy of the above analytical principles.
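For context on the "shift" the abstract above predicts, a standard first-order textbook relation (not the paper's own simulation code) says a target's radial velocity Doppler-shifts its return and displaces it in azimuth by roughly (v_r / v_p) · R; the numbers below are illustrative assumptions.

```python
def azimuth_shift(v_r, v_p, slant_range):
    """Approximate azimuth displacement (m) of a moving target in a SAR image.

    First-order relation: delta_x = (v_r / v_p) * R, where v_r is the target's
    radial velocity, v_p the platform speed, and R the slant range.
    """
    return (v_r / v_p) * slant_range

# Illustrative numbers (assumed, not from the paper): a 5 m/s radial mover
# seen from a 200 m/s platform at 10 km slant range lands about 250 m away
# from its true azimuth position.
print(azimuth_shift(v_r=5.0, v_p=200.0, slant_range=10_000.0))
```

This is why even slow movers appear well displaced from the roads they travel on in subaperture SAR imagery.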
An Overview of Learning Analytics
ERIC Educational Resources Information Center
Clow, Doug
2013-01-01
Learning analytics, the analysis and representation of data about learners in order to improve learning, is a new lens through which teachers can understand education. It is rooted in the dramatic increase in the quantity of data about learners and linked to management approaches that focus on quantitative metrics, which are sometimes antithetical…
FPI: FM Success through Analytics
ERIC Educational Resources Information Center
Hickling, Duane
2013-01-01
The APPA Facilities Performance Indicators (FPI) is perhaps one of the most powerful analytical tools that institutional facilities professionals have at their disposal. It is a diagnostic facilities performance management tool that addresses the essential questions that facilities executives must answer to effectively perform their roles. It…
Exploratory Analysis in Learning Analytics
ERIC Educational Resources Information Center
Gibson, David; de Freitas, Sara
2016-01-01
This article summarizes the methods, observations, challenges and implications for exploratory analysis drawn from two learning analytics research projects. The cases include an analysis of a games-based virtual performance assessment and an analysis of data from 52,000 students over a 5-year period at a large Australian university. The complex…
Microcomputer Applications in Analytical Chemistry.
ERIC Educational Resources Information Center
Long, Joseph W.
The first part of this paper addresses the following topics: (1) the usefulness of microcomputers; (2) applications for microcomputers in analytical chemistry; (3) costs; (4) major microcomputer systems and subsystems; and (5) which microcomputer to buy. Following these brief comments, the major focus of the paper is devoted to a discussion of…
Analytical Chemistry and the Microchip.
ERIC Educational Resources Information Center
Lowry, Robert K.
1986-01-01
Analytical techniques used at various points in making microchips are described. They include: Fourier transform infrared spectrometry (silicon purity); optical emission spectroscopy (quantitative thin-film composition); X-ray photoelectron spectroscopy (chemical changes in thin films); wet chemistry, instrumental analysis (process chemicals);…
Analytical Methods for Online Searching.
ERIC Educational Resources Information Center
Vigil, Peter J.
1983-01-01
Analytical methods for facilitating comparison of multiple sets during online searching are illustrated by description of specific searching methods that eliminate duplicate citations and a factoring procedure based on syntactic relationships that establishes ranked sets. Searches executed in National Center for Mental Health database on…
Faculty Workload: An Analytical Approach
ERIC Educational Resources Information Center
Dennison, George M.
2012-01-01
Recent discussions of practices in higher education have tended toward muck-raking and self-styled exposure of cynical self-indulgence by faculty and administrators at the expense of students and their families, as usually occurs during periods of economic duress, rather than toward analytical studies designed to foster understanding This article…
Analytical Sociology: A Bungean Appreciation
ERIC Educational Resources Information Center
Wan, Poe Yu-ze
2012-01-01
Analytical sociology, an intellectual project that has garnered considerable attention across a variety of disciplines in recent years, aims to explain complex social processes by dissecting them, accentuating their most important constituent parts, and constructing appropriate models to understand the emergence of what is observed. To achieve…
40 CFR 1065.750 - Analytical gases.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 32 2010-07-01 2010-07-01 false Analytical gases. 1065.750 Section... ENGINE-TESTING PROCEDURES Engine Fluids, Test Fuels, Analytical Gases and Other Calibration Standards § 1065.750 Analytical gases. Analytical gases must meet the accuracy and purity specifications of...
40 CFR 1065.750 - Analytical gases.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 40 Protection of Environment 34 2013-07-01 2013-07-01 false Analytical gases. 1065.750 Section... ENGINE-TESTING PROCEDURES Engine Fluids, Test Fuels, Analytical Gases and Other Calibration Standards § 1065.750 Analytical gases. Analytical gases must meet the accuracy and purity specifications of...
40 CFR 1065.750 - Analytical gases.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 40 Protection of Environment 33 2014-07-01 2014-07-01 false Analytical gases. 1065.750 Section... ENGINE-TESTING PROCEDURES Engine Fluids, Test Fuels, Analytical Gases and Other Calibration Standards § 1065.750 Analytical gases. Analytical gases must meet the accuracy and purity specifications of...
40 CFR 1065.750 - Analytical gases.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 40 Protection of Environment 34 2012-07-01 2012-07-01 false Analytical gases. 1065.750 Section... ENGINE-TESTING PROCEDURES Engine Fluids, Test Fuels, Analytical Gases and Other Calibration Standards § 1065.750 Analytical gases. Analytical gases must meet the accuracy and purity specifications of...
Analytical Plan for Roman Glasses
Strachan, Denis M.; Buck, Edgar C.; Mueller, Karl T.; Schwantes, Jon M.; Olszta, Matthew J.; Thevuthasan, Suntharampillai; Heeren, Ronald M.
2011-01-01
Roman glasses that have been in the sea or underground for about 1800 years can serve as the independent “experiment” that is needed for validation of codes and models that are used in performance assessment. Two sets of Roman-era glasses have been obtained for this purpose. One set comes from the sunken vessel the Iulia Felix; the second from recently excavated glasses from a Roman villa in Aquileia, Italy. The specimens contain glass artifacts and attached sediment or soil. In the case of the Iulia Felix glasses quite a lot of analytical work has been completed at the University of Padova, but from an archaeological perspective. The glasses from Aquileia have not been so carefully analyzed, but they are similar to other Roman glasses. Both glass and sediment or soil need to be analyzed and are the subject of this analytical plan. The glasses need to be analyzed with the goal of validating the model used to describe glass dissolution. The sediment and soil need to be analyzed to determine the profile of elements released from the glass. This latter need represents a significant analytical challenge because of the trace quantities that need to be analyzed. Both pieces of information will yield important information useful in the validation of the glass dissolution model and the chemical transport code(s) used to determine the migration of elements once released from the glass. In this plan, we outline the analytical techniques that should be useful in obtaining the needed information and suggest a useful starting point for this analytical effort.
Waste minimization in analytical methods
Green, D.W.; Smith, L.L.; Crain, J.S.; Boparai, A.S.; Kiely, J.T.; Yaeger, J.S.; Schilling, J.B.
1995-05-01
The US Department of Energy (DOE) will require a large number of waste characterizations over a multi-year period to accomplish the Department's goals in environmental restoration and waste management. Estimates vary, but two million analyses annually are expected. The waste generated by the analytical procedures used for characterizations is a significant source of new DOE waste. Success in reducing the volume of secondary waste and the costs of handling this waste would significantly decrease the overall cost of this DOE program. Selection of appropriate analytical methods depends on the intended use of the resultant data. It is not always necessary to use a high-powered analytical method, typically at higher cost, to obtain data needed to make decisions about waste management. Indeed, for samples taken from some heterogeneous systems, the meaning of high accuracy becomes clouded if the data generated are intended to measure a property of this system. Among the factors to be considered in selecting the analytical method are the lower limit of detection, accuracy, turnaround time, cost, reproducibility (precision), interferences, and simplicity. Occasionally, there must be tradeoffs among these factors to achieve the multiple goals of a characterization program. The purpose of the work described here is to add waste minimization to the list of characteristics to be considered. In this paper the authors present results of modifying analytical methods for waste characterization to reduce both the cost of analysis and the volume of secondary wastes. Although tradeoffs may be required to minimize waste while still generating data of acceptable quality for the decision-making process, they have data demonstrating that wastes can be reduced in some cases without sacrificing accuracy or precision.
NASA Astrophysics Data System (ADS)
Zhu, Timothy C.; Lu, Amy; Ong, Yi-Hong
2016-03-01
Accurate determination of in-vivo light fluence rate is critical for preclinical and clinical studies involving photodynamic therapy (PDT). This study compares the longitudinal light fluence distribution inside biological tissue in the central axis of a 1 cm diameter circular uniform light field for a range of in-vivo tissue optical properties (absorption coefficients (μa) between 0.01 and 1 cm-1 and reduced scattering coefficients (μs') between 2 and 40 cm-1). This was done using Monte-Carlo simulations for a semi-infinite turbid medium with an air-tissue interface. The end goal is to develop an analytical expression that would fit the results from the Monte Carlo simulation for both the 1 cm diameter circular beam and the broad beam. Each of these parameters is expressed as a function of tissue optical properties. These results can then be compared against the existing expressions in the literature for broad beam for analysis in both accuracy and applicable range. Using the 6-parameter model, the range and accuracy for light transport through biological tissue is improved and may be used in the future as a guide in PDT for light fluence distribution for known tissue optical properties.
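For a rough sense of the depth dependence the abstract's 6-parameter model refines, standard diffusion theory (not the paper's fit) predicts that deep in tissue the fluence falls roughly as exp(-μ_eff z), with μ_eff = sqrt(3 μa (μa + μs')); the sketch below evaluates this over the abstract's stated range of optical properties.

```python
import math

def mu_eff(mu_a, mu_s_prime):
    """Effective attenuation coefficient (1/cm) from diffusion theory:
    mu_eff = sqrt(3 * mu_a * (mu_a + mu_s'))."""
    return math.sqrt(3.0 * mu_a * (mu_a + mu_s_prime))

# Optical properties spanning the abstract's in-vivo range:
# mu_a from 0.01 to 1 cm^-1, mu_s' from 2 to 40 cm^-1
for mu_a, mu_sp in [(0.01, 2.0), (0.1, 10.0), (1.0, 40.0)]:
    print(f"mu_a={mu_a}, mu_s'={mu_sp}: mu_eff={mu_eff(mu_a, mu_sp):.3f} 1/cm")
```

Across this range μ_eff varies by more than an order of magnitude, which is why fitted multi-parameter expressions are needed to cover the whole span accurately.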
Electron energy distribution in a dusty plasma: analytical approach.
Denysenko, I B; Kersten, H; Azarenkov, N A
2015-09-01
Analytical expressions describing the electron energy distribution function (EEDF) in a dusty plasma are obtained from the homogeneous Boltzmann equation for electrons. The expressions are derived neglecting electron-electron collisions, as well as transformation of high-energy electrons into low-energy electrons at inelastic electron-atom collisions. At large electron energies, the quasiclassical approach for calculation of the EEDF is applied. For the moderate energies, we account for inelastic electron-atom collisions in the dust-free case and both inelastic electron-atom and electron-dust collisions in the dusty plasma case. Using these analytical expressions and the balance equation for dust charging, the electron energy distribution function, the effective electron temperature, the dust charge, and the dust surface potential are obtained for different dust radii and densities, as well as for different electron densities and radio-frequency (rf) field amplitudes and frequencies. The dusty plasma parameters are compared with those calculated numerically by a finite-difference method taking into account electron-electron collisions and the transformation of high-energy electrons at inelastic electron-neutral collisions. It is shown that the analytical expressions can be used for calculation of the EEDF and dusty plasma parameters at typical experimental conditions, in particular, in the positive column of a direct-current glow discharge and in the case of an rf plasma maintained by an electric field with frequency f=13.56MHz.
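As a reference point for the distributions discussed in the abstract above, the sketch below evaluates a plain Maxwellian EEDF, the usual baseline against which dusty-plasma distributions are compared; this is illustrative only, and the paper's analytical expressions are more involved.

```python
import math

def maxwell_eedf(energy_ev, te_ev):
    """Maxwellian electron energy probability density (1/eV):
    f(E) = (2/sqrt(pi)) * Te^(-3/2) * sqrt(E) * exp(-E/Te),
    normalized so that the integral over E in [0, inf) equals 1."""
    return (2.0 / math.sqrt(math.pi)) * te_ev**-1.5 \
        * math.sqrt(energy_ev) * math.exp(-energy_ev / te_ev)

# Crude normalization check by rectangle-rule integration over 0.01..50 eV
te = 2.0  # assumed electron temperature in eV
es = [i * 0.01 for i in range(1, 5001)]
total = sum(maxwell_eedf(e, te) for e in es) * 0.01
print(round(total, 2))  # close to 1
```

Deviations from this Maxwellian shape, e.g. depletion of the high-energy tail by electron-dust collisions, are exactly what the analytical expressions in the paper are built to capture.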