Robust Accurate Non-Invasive Analyte Monitor
Robinson, Mark R.
1998-11-03
An improved method and apparatus for determining noninvasively and in vivo one or more unknown values of a known characteristic, particularly the concentration of an analyte in human tissue. The method includes: (1) irradiating the tissue with infrared energy (400 nm-2400 nm) having at least several wavelengths in a given range of wavelengths so that there is differential absorption of at least some of the wavelengths by the tissue as a function of the wavelengths and the known characteristic, the differential absorption causing intensity variations of the wavelengths incident from the tissue; (2) providing a first path through the tissue; (3) optimizing the first path for a first sub-region of the range of wavelengths to maximize the differential absorption by at least some of the wavelengths in the first sub-region; (4) providing a second path through the tissue; and (5) optimizing the second path for a second sub-region of the range, to maximize the differential absorption by at least some of the wavelengths in the second sub-region. In the preferred embodiment a third path through the tissue is provided, which path is optimized for a third sub-region of the range. With this arrangement, spectral variations which are the result of tissue differences (e.g., melanin and temperature) can be reduced. At least one of the paths represents a partial transmission path through the tissue. This partial transmission path may pass through the nail of a finger once and, preferably, twice. Also included are apparatus for: (1) reducing the arterial pulsations within the tissue; and (2) maximizing the blood content in the tissue.
Accurate analytical approximation of asteroid deflection with constant tangential thrust
NASA Astrophysics Data System (ADS)
Bombardelli, Claudio; Baù, Giulio
2012-11-01
We present analytical formulas to estimate the variation of achieved deflection for an Earth-impacting asteroid following a continuous tangential low-thrust deflection strategy. Relatively simple analytical expressions are obtained with the aid of asymptotic theory and the use of Peláez orbital elements set, an approach that is particularly suitable to the asteroid deflection problem and is not limited to small eccentricities. The accuracy of the proposed formulas is evaluated numerically showing negligible error for both early and late deflection campaigns. The results will be of aid in planning future low-thrust asteroid deflection missions.
$W^+ W^-$ + Jet: Compact Analytic Results
Campbell, John; Miller, David; Robens, Tania
2016-01-14
In the second run of the LHC, which started in April 2015, an accurate understanding of Standard Model processes is more crucial than ever. Processes including electroweak gauge bosons serve as standard candles for SM measurements and equally constitute an important background for BSM searches. Here we present the NLO QCD virtual contributions to W+W- + jet in an analytic format obtained through unitarity methods and show results for the full process using an implementation in the Monte Carlo event generator MCFM. Phenomenologically, we investigate total as well as differential cross sections for the LHC with 14 TeV center-of-mass energy, as well as for a future 100 TeV proton-proton machine. In the format presented here, the one-loop virtual contributions also serve as important ingredients in the calculation of W+W- pair production at NNLO.
Li, Rui; Ye, Hongfei; Zhang, Weisheng; Ma, Guojun; Su, Yewang
2015-10-29
Spring constant calibration of the atomic force microscope (AFM) cantilever is of fundamental importance for quantifying the force between the AFM cantilever tip and the sample. Calibration within the framework of thin plate theory undoubtedly has higher accuracy and broader scope than calibration within the well-established beam theory. However, accurate analytic determination of the constant based on thin plate theory has been perceived as an extremely difficult issue. In this paper, we implement thin plate theory-based analytic modeling of the static behavior of rectangular AFM cantilevers, which reveals that the three-dimensional effect and the Poisson effect play important roles in accurate determination of the spring constants. A quantitative scaling law is found: the normalized spring constant depends only on the Poisson's ratio, the normalized dimensions, and the normalized load coordinate. Both the literature and our refined finite element model validate the present results. The developed model is expected to serve as the benchmark for accurate calibration of rectangular AFM cantilevers.
Development and application of accurate analytical models for single active electron potentials
NASA Astrophysics Data System (ADS)
Miller, Michelle; Jaron-Becker, Agnieszka; Becker, Andreas
2015-05-01
The single active electron (SAE) approximation is a theoretical model frequently employed to study scenarios in which inner-shell electrons may productively be treated as frozen spectators to a physical process of interest, and accurate analytical approximations for these potentials are sought as a useful simulation tool. Density functional theory is often used to construct an SAE potential, requiring that a further approximation for the exchange-correlation functional be enacted. In this study, we employ the Krieger, Li, and Iafrate (KLI) modification to the optimized-effective-potential (OEP) method to reduce the complexity of the problem to the straightforward solution of a system of linear equations through simple arguments regarding the behavior of the exchange-correlation potential in regions where a single orbital dominates. We employ this method for the solution of atomic and molecular potentials, and use the resultant curve to devise a systematic construction for highly accurate and useful analytical approximations for several systems. Supported by the U.S. Department of Energy (Grant No. DE-FG02-09ER16103), and the U.S. National Science Foundation (Graduate Research Fellowship, Grants No. PHY-1125844 and No. PHY-1068706).
Accurate stress resultants equations for laminated composite deep thick shells
Qatu, M.S.
1995-11-01
This paper derives accurate equations for the normal and shear force as well as bending and twisting moment resultants for laminated composite deep, thick shells. The stress resultant equations for laminated composite thick shells are shown to be different from those of plates. This is due to the fact that the stresses over the thickness of the shell have to be integrated on a trapezoidal-like shell element to obtain the stress resultants. Numerical results are obtained and show that accurate stress resultants are needed for laminated composite deep thick shells, especially if the curvature is not spherical.
Walter, Johannes; Thajudeen, Thaseem; Süss, Sebastian; Segets, Doris; Peukert, Wolfgang
2015-04-21
Analytical centrifugation (AC) is a powerful technique for the characterisation of nanoparticles in colloidal systems. As a direct and absolute technique it requires no calibration or measurements of standards. Moreover, it offers simple experimental design and handling, high sample throughput as well as moderate investment costs. However, the full potential of AC for nanoparticle size analysis requires the development of powerful data analysis techniques. In this study we show how the application of direct boundary models to AC data opens up new possibilities in particle characterisation. An accurate analysis method, successfully applied to sedimentation data obtained by analytical ultracentrifugation (AUC) in the past, was used for the first time in analysing AC data. Unlike traditional data evaluation routines for AC using a designated number of radial positions or scans, direct boundary models consider the complete sedimentation boundary, which results in significantly better statistics. We demonstrate that meniscus fitting, as well as the correction of radius and time invariant noise significantly improves the signal-to-noise ratio and prevents the occurrence of false positives due to optical artefacts. Moreover, hydrodynamic non-ideality can be assessed by the residuals obtained from the analysis. The sedimentation coefficient distributions obtained by AC are in excellent agreement with the results from AUC. Brownian dynamics simulations were used to generate numerical sedimentation data to study the influence of diffusion on the obtained distributions. Our approach is further validated using polystyrene and silica nanoparticles. In particular, we demonstrate the strength of AC for analysing multimodal distributions by means of gold nanoparticles.
Houairi, Kamel; Cassaing, Frédéric
2009-12-01
Two-wavelength interferometry combines measurements at two wavelengths λ1 and λ2 in order to increase the unambiguous range (UR) for the measurement of an optical path difference. With the usual algorithm, the UR is equal to the synthetic wavelength Λ = λ1λ2/|λ1 - λ2|, and the accuracy is a fraction of Λ. We propose here a new analytical algorithm based on arithmetic properties, allowing estimation of the absolute fringe order of interference in a noniterative way. This algorithm has attractive properties compared with the usual algorithm: it is at least as accurate as the most accurate measurement at one wavelength, whereas the UR is extended to several times the synthetic wavelength. The analysis presented shows how the actual UR depends on the wavelengths and on different sources of error. The simulations presented are confirmed by experimental results, showing that the new algorithm has enabled us to reach a UR of 17.3 µm, much larger than the synthetic wavelength, which is only Λ = 2.2 µm. Applications to metrology and fringe tracking are discussed.
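The synthetic-wavelength relation Λ = λ1λ2/|λ1 - λ2| is easy to evaluate. A minimal sketch (the wavelength pair below is an assumption for illustration, not the pair used in the experiment):

```python
def synthetic_wavelength(lam1_um, lam2_um):
    """Synthetic wavelength Lambda = lam1*lam2/|lam1 - lam2|, in micrometres."""
    return lam1_um * lam2_um / abs(lam1_um - lam2_um)

# Hypothetical wavelength pair (illustrative values only):
lam1, lam2 = 1.55, 0.85  # micrometres
Lambda = synthetic_wavelength(lam1, lam2)
# With the usual algorithm the unambiguous range equals Lambda;
# the algorithm of the paper extends it to several times Lambda.
```

Note that Λ grows without bound as the two wavelengths approach each other, which is why closely spaced wavelengths give a long unambiguous range at the cost of error amplification.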
Quasi-normal frequencies: key analytic results
NASA Astrophysics Data System (ADS)
Boonserm, Petarpa; Visser, Matt
2011-03-01
The study of exact quasi-normal modes [QNMs], and their associated quasi-normal frequencies [QNFs], has had a long and convoluted history — replete with many rediscoveries of previously known results. In this article we shall collect and survey a number of known analytic results, and develop several new analytic results — specifically we shall provide several new QNF results and estimates, in a form amenable for comparison with the extant literature. Apart from their intrinsic interest, these exact and approximate results serve as a backdrop and a consistency check on ongoing efforts to find general model-independent estimates for QNFs, and general model-independent bounds on transmission probabilities. Our calculations also provide yet another physics application of the Lambert W function. These ideas have relevance to fields as diverse as black hole physics (where they are related to the damped oscillations of astrophysical black holes, to greybody factors for the Hawking radiation, and to more speculative state-counting models for the Bekenstein entropy), to quantum field theory (where they are related to Casimir energies in unbounded systems), through to condensed matter physics (where one may literally be interested in an electron tunnelling through a physical barrier).
Medendorp, Joseph; Lodder, Robert A
2006-03-01
This research was performed to test the hypothesis that acoustic-resonance spectrometry (ARS) is able to rapidly and accurately differentiate tablets of similar size and shape. The US Food and Drug Administration frequently orders recalls of tablets because of labeling problems (eg, the wrong tablet appears in a bottle). A high-throughput, nondestructive method of online analysis and label comparison before shipping could obviate the need for recall or disposal of a batch of mislabeled drugs, thus saving a company considerable expense and preventing a major safety risk. ARS is accurate and precise as well as inexpensive and nondestructive, and the sensor is constructed from readily available parts, suggesting utility as a process analytical technology (PAT). To test the classification ability of ARS, 5 common household tablets of similar size and shape were chosen for analysis (aspirin, ibuprofen, acetaminophen, vitamin C, and vitamin B12). The measures of successful tablet identification were intertablet distances in nonparametric multidimensional standard deviations (MSDs) greater than 3 and intratablet MSDs less than 3, as calculated from an extended bootstrap error-adjusted single sample technique. The average intertablet MSD was 65.64, while the average intratablet MSD from cross-validation was 1.91. Tablet mass (r(2)=0.977), thickness (r(2)=0.977), and density (r(2)=0.900) were measured very accurately from the AR spectra, each with less than 10% error. Tablets were identified correctly with only 250 ms data collection time. These results demonstrate that ARS effectively identified and characterized the 5 types of tablets and could potentially serve as a rapid high-throughput online pharmaceutical sensor.
NASA Astrophysics Data System (ADS)
Shizgal, Bernie D.
2016-12-01
There has been intense interest for several decades by different research groups to accurately model the temperature dependence of a large number of nuclear reaction rate coefficients for both light and heavy nuclides. The rate coefficient, k(T), is given by the Maxwellian average of the reactive cross section expressed in terms of the astrophysical factor, S(E), which for nonresonant reactions is generally written as a power series in the relative energy E. A computationally efficient algorithm for the temperature dependence of nuclear reaction rate coefficients is required for fusion reactor research and for models of nucleosynthesis and stellar evolution. In this paper, an accurate analytical expression for the temperature dependence of nuclear reaction rate coefficients is provided in terms of τ = 3(b/2)^(2/3) or, equivalently, T^(-1/3), where b = B/√(k_B T), B is the Gamow factor and k_B is the Boltzmann constant. The methodology is appropriate for all nonresonant nuclear reactions for which S(E) can be represented as a power series in E. The explicit expression for the rate coefficient versus temperature is derived with the asymptotic expansions of the moments of w(E) = exp(-E/k_B T - B/√E) in terms of τ. The zeroth-order moment is the familiar Gaussian approximation to the rate coefficient. Results are reported for the representative reactions D(d,p)T, D(d,n)³He and ⁷Li(p,α)α and compared with several different fitting procedures reported in the literature.
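The variable τ above can be checked numerically. A minimal sketch (arbitrary units; the values of B and k_B T are assumptions for illustration) confirming the stated equivalence between τ = 3(b/2)^(2/3) and the T^(-1/3) scaling:

```python
def tau(B, kT):
    """Gamow-peak variable tau = 3*(b/2)**(2/3), with b = B/sqrt(kT)."""
    b = B / kT ** 0.5
    return 3.0 * (b / 2.0) ** (2.0 / 3.0)

# Since tau ~ T**(-1/3), raising the temperature eight-fold halves tau:
t1 = tau(B=1.0, kT=1.0)
t2 = tau(B=1.0, kT=8.0)
```

This is only a consistency check of the variable's scaling, not of the asymptotic expansion itself, which requires the full moment analysis described in the abstract.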
Reusing Property Resulting from Analytical Laboratory Closure
Elmer, J.; DePinho, D.; Wetherstein, P.
2006-07-01
The U.S. Department of Energy Office of Legacy Management (DOE-LM) site in Grand Junction, Colorado, faced the problem of reusing an extensive assortment of laboratory equipment and supplies when its on-site analytical chemistry laboratory closed. This challenge, undertaken as part of the Grand Junction site's pollution prevention program, prioritized reuse of as much of the laboratory equipment and supplies as possible during a 9-month period in fiscal year 2004. Reuse remedies were found for approximately $3 million worth of instrumentation, equipment, chemicals, precious metals, and other laboratory items through other Grand Junction site projects, Federal Government databases, and extensive contact with other DOE facilities, universities, and colleges. In 2005, the DOE-LM Grand Junction site received two prestigious DOE pollution prevention awards for reuse of the laboratory's equipment and supplies. (authors)
Highly accurate analytic formulae for projectile motion subjected to quadratic drag
NASA Astrophysics Data System (ADS)
Turkyilmazoglu, Mustafa
2016-05-01
The classical problem of a projectile fired (thrown) toward the horizon through resistive air that exerts a quadratic drag on the object is revisited in this paper. No exact solution is known that describes the full physical event under such a resistance force. The principal aim is to find elegant analytical approximations for the most interesting engineering features of the projectile's dynamical behavior. To this end, analytical explicit expressions are derived that accurately predict the maximum height, the time to reach it, and the flight range of the projectile at the highest ascent. The most significant property of the proposed formulas is that they are not restricted to particular values of the initial speed and firing angle of the object, nor of the drag coefficient of the medium. In combination with the approximations available in the literature, it is possible to gain information about the flight and complete the picture of a trajectory with high precision, without having to numerically simulate the full governing equations of motion.
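The quantities targeted by such formulas (maximum height and its arrival time) can be cross-checked by direct numerical integration of the equations of motion. A minimal sketch, using assumed parameter values and a simple semi-implicit Euler scheme rather than anything from the paper:

```python
import math

def apex(v0, angle_deg, k, g=9.81, dt=1e-4):
    """Integrate 2D motion with quadratic drag a = -k*|v|*v plus gravity;
    return (maximum height, time of apex). Semi-implicit Euler stepping."""
    vx = v0 * math.cos(math.radians(angle_deg))
    vy = v0 * math.sin(math.radians(angle_deg))
    y = t = 0.0
    while vy > 0.0:  # ascend until the vertical velocity crosses zero
        speed = math.hypot(vx, vy)
        vx -= k * speed * vx * dt
        vy -= (g + k * speed * vy) * dt
        y += vy * dt
        t += dt
    return y, t

# Illustrative parameters (assumed, not from the paper):
h, t_apex = apex(v0=50.0, angle_deg=45.0, k=0.01)
# Drag must lower the apex below the vacuum value v0^2*sin^2(theta)/(2g).
```

A trusted numerical apex like this is the natural benchmark against which closed-form approximations of the kind derived in the paper would be validated.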
Fast and accurate analytical model to solve inverse problem in SHM using Lamb wave propagation
NASA Astrophysics Data System (ADS)
Poddar, Banibrata; Giurgiutiu, Victor
2016-04-01
Lamb wave propagation is at the center of attention of researchers for structural health monitoring of thin-walled structures. This is due to the fact that Lamb wave modes are natural modes of wave propagation in these structures, with long travel distances and without much attenuation. This brings the prospect of monitoring large structures with few sensors/actuators. However, the problem of damage detection and identification is an "inverse problem" in which we do not have the luxury of knowing the exact mathematical model of the system. On top of that, the problem is made more challenging by the confounding factors of statistical variation of the material and geometric properties. Typically, this problem may also be ill-posed. Due to all these complexities, the direct solution of the problem of damage detection and identification in SHM is impossible. Therefore, an indirect method using the solution of the "forward problem" is popular for solving the "inverse problem". This requires a fast forward-problem solver. Due to the complexities involved in the forward problem of scattering of Lamb waves from damage, researchers rely primarily on numerical techniques such as FEM, BEM, etc. But these methods are too slow to be practical for structural health monitoring. We have developed a fast and accurate analytical forward-problem solver for this purpose. This solver, CMEP (complex modes expansion and vector projection), can simulate scattering of Lamb waves from all types of damage in thin-walled structures quickly and accurately to assist the inverse-problem solver.
Optimization of sample preparation for accurate results in quantitative NMR spectroscopy
NASA Astrophysics Data System (ADS)
Yamazaki, Taichi; Nakamura, Satoe; Saito, Takeshi
2017-04-01
Quantitative nuclear magnetic resonance (qNMR) spectroscopy has received high marks as an excellent measurement tool because it does not require a reference standard of the same compound as the analyte. Measurement parameters have been discussed in detail, and high-resolution balances have been used for sample preparation. However, high-resolution balances, such as an ultra-microbalance, are not general-purpose analytical tools, and many analysts may find them difficult to use, thereby hindering accurate sample preparation for qNMR measurement. In this study, we examined the relationship between the resolution of the balance and the amount of sample weighed during sample preparation. We were able to confirm the accuracy of the assay results for samples weighed on a high-resolution balance, such as the ultra-microbalance. Furthermore, when an appropriate tare and amount of sample were weighed on a given balance, accurate assay results were obtained with another high-resolution balance. Although this is a fundamental result, it offers important evidence that would enhance the versatility of the qNMR method.
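The balance-resolution point above can be illustrated with simple arithmetic. A minimal sketch (the masses and resolutions below are illustrative assumptions, not figures from the study): the worst-case relative error a balance contributes is roughly its resolution divided by the weighed mass.

```python
def relative_weighing_error(resolution_mg, sample_mg):
    """Worst-case relative error (as a fraction) contributed by the
    balance's readability when weighing a sample of the given mass."""
    return resolution_mg / sample_mg

# A hypothetical 10 mg sample on a 0.01 mg-readability microbalance
# versus a 0.0001 mg-readability ultra-microbalance:
micro = relative_weighing_error(0.01, 10.0)     # fraction, microbalance
ultra = relative_weighing_error(0.0001, 10.0)   # fraction, ultra-microbalance
```

The same arithmetic shows the alternative route the study points to: increasing the weighed amount on a coarser balance can reach the same relative uncertainty as a finer balance with a smaller sample.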
Milestone M4900: Simulant Mixing Analytical Results
Kaplan, D.I.
2001-07-26
This report addresses Milestone M4900, ''Simulant Mixing Sample Analysis Results,'' and contains the data generated during the ''Mixing of Process Heels, Process Solutions, and Recycle Streams: Small-Scale Simulant'' task. The Task Technical and Quality Assurance Plan for this task is BNF-003-98-0079A. A report with a narrative description and discussion of the data will be issued separately.
Rotational modes of relativistic stars: Analytic results
NASA Astrophysics Data System (ADS)
Lockitch, Keith H.; Andersson, Nils; Friedman, John L.
2001-01-01
We study the r modes and rotational "hybrid" modes (inertial modes) of relativistic stars. As in Newtonian gravity, the spectrum of low-frequency rotational modes is highly sensitive to the stellar equation of state. If the star and its perturbations obey the same one-parameter equation of state (as with barotropic stars), there exist no pure r modes at all: no modes whose limit, for a star with zero angular velocity, is an axial-parity perturbation. Rotating stars of this kind similarly have no pure g modes, no modes whose spherical limit is a perturbation with polar parity and vanishing perturbed pressure and density. In spherical stars of this kind, the r modes and g modes form a degenerate zero-frequency subspace. We find that rotation splits the degeneracy to zeroth order in the star's angular velocity Ω, and the resulting modes are generically hybrids, whose limit as Ω → 0 is a stationary current with both axial and polar parts. Because each mode has definite parity, its axial and polar parts have alternating values of l. We show that each mode belongs to one of two classes, axial-led or polar-led, depending on whether the spherical harmonic with the lowest value of l that contributes to its velocity field is axial or polar. Newtonian barotropic stars retain a vestigial set of purely axial modes (those with l=m); however, for relativistic barotropic stars, we show that these modes must also be replaced by axial-led hybrids. We compute the post-Newtonian corrections to the l=m modes for uniform density stars. On the other hand, if the star is nonbarotropic (that is, if the perturbed star obeys an equation of state that differs from that of the unperturbed star), the r modes alone span the degenerate zero-frequency subspace of the spherical star. In Newtonian stars, this degeneracy is split only by the order-Ω² rotational corrections. However, when relativistic effects are included, the degeneracy is again broken at zeroth order. We compute the r modes of a
Analytical results for a three-phase traffic model.
Huang, Ding-wei
2003-10-01
We study analytically a cellular automaton model, which is able to present three different traffic phases on a homogeneous highway. The characteristics displayed in the fundamental diagram can be well discerned by analyzing the evolution of density configurations. Analytical expressions for the traffic flow and shock speed are obtained. The synchronized flow in the intermediate-density region is the result of an aggressive driving scheme and is determined mainly by the stochastic noise.
NASA Astrophysics Data System (ADS)
Dutta, Ivy; Chowdhury, Anirban Roy; Kumbhakar, Dharmadas
2013-03-01
Using a Chebyshev power series approach, accurate descriptions of the first higher-order (LP11) mode of graded-index fibers having three different profile shape functions are presented in this paper and applied to predict their propagation characteristics. These characteristics include the fractional power guided through the core, the excitation efficiency, and the Petermann I and II spot sizes, with their approximate analytic formulations. We show that, whereas two and three Chebyshev points in the LP11 mode approximation give fairly accurate results, the values based on our calculations involving four Chebyshev points match excellently with available exact numerical results.
Analytical results for the wrinkling of graphene on nanoparticles
NASA Astrophysics Data System (ADS)
Guedda, M.; Alaa, N.; Benlahsen, M.
2016-10-01
A continuum elastic model, describing the wrinkling instability of graphene on substrate-supported silica nanoparticles [M. Yamamoto et al., Phys. Rev. X 2, 041018 (2012), 10.1103/PhysRevX.2.041018], is analytically studied, and an exact analytical expression of the critical nanoparticle separation or the maximum wrinkle length is derived. Our findings agree with the scaling property of Yamamoto et al. but improve their results. Moreover, from the elastic model we find a pseudomagnetic field as a function of the wrinkling deflection, leading to the conclusion that the middle of the wrinkled graphene may have a zero pseudomagnetic field, in marked contrast with previous results.
Analytic results for massless three-loop form factors
NASA Astrophysics Data System (ADS)
Lee, R. N.; Smirnov, A. V.; Smirnov, V. A.
2010-04-01
We evaluate, exactly in d, the master integrals contributing to massless threeloop QCD form factors. The calculation is based on a combination of a method recently suggested by one of the authors (R.L.) with other techniques: sector decomposition implemented in
Two-level laser: Analytical results and the laser transition
Gartner, Paul
2011-11-15
The problem of the two-level laser is studied analytically. The steady-state solution is expressed as a continued fraction and allows for accurate approximation by rational functions. Moreover, we show that the abrupt change observed in the pump dependence of the steady-state population is directly connected to the transition to the lasing regime. The condition for a sharp transition to Poissonian statistics is expressed as a scaling limit of vanishing cavity loss and light-matter coupling, κ → 0, g → 0, such that g²/κ stays finite and g²/κ > 2γ, where γ is the rate of nonradiative losses. The same scaling procedure is also shown to describe a similar change to the Poisson distribution in the Scully-Lamb laser model, suggesting that the low-κ, low-g asymptotics is of more general significance for the laser transition.
Accurate Sloshing Modes Modeling: A New Analytical Solution and its Consequences on Control
NASA Astrophysics Data System (ADS)
Gonidou, Luc-Olivier; Desmariaux, Jean
2014-06-01
This study addresses the issue of sloshing modes modeling for GNC analyses purposes. On European launchers, equivalent mechanical systems are commonly used for modeling sloshing effects on launcher dynamics. The representativeness of such a methodology is discussed here. First an exact analytical formulation of the launcher dynamics fitted with sloshing modes is proposed and discrepancies with equivalent mechanical system approach are emphasized. Then preliminary comparative GNC analyses are performed using the different models of dynamics in order to evaluate the impact of the aforementioned discrepancies from GNC standpoint. Special attention is paid to system stability.
Analytical Grid Generation for accurate representation of clearances in CFD for Screw Machines
NASA Astrophysics Data System (ADS)
Rane, S.; Kovačević, A.; Stošić, N.
2015-08-01
One of the major factors affecting the performance prediction of twin screw compressors by use of computational fluid dynamics (CFD) is the accuracy with which the leakage gaps are captured by the discretization methods. The accuracy of mapping leakage flows can be improved by increasing the number of grid points on the profile. However, this method faces limitations when it comes to the complex deforming domains of a twin screw compressor because the computational time increases tremendously. In order to address this problem, an analytical grid distribution procedure is formulated that can independently refine the region of high importance for leakage flows in the interlobe space. This paper describes the procedure of analytical grid generation with the refined mesh in the interlobe area and presents a test case to show the influence of the mesh refinement in that area on the performance prediction. It is shown that by using this method, the flow domains in the vicinity of the interlobe gap and the blowhole area are refined which improves accuracy of leakage flow predictions.
Boothroyd, A.I. ); Dove, J.E.; Keogh, W.J. ); Martin, P.G. ); Peterson, M.R. )
1991-09-15
The interaction potential energy surface (PES) of H₄ is of great importance for quantum chemistry, as a test case for molecule-molecule interactions. It is also required for a detailed understanding of certain astrophysical processes, namely, collisional excitation and dissociation of H₂ in molecular clouds, at densities too low to be accessible experimentally. Accurate ab initio energies were computed for 6046 conformations of H₄, using a multiple reference (single and) double excitation configuration interaction (MRD-CI) program. Both "systematic" and "random" errors were estimated to have an rms size of 0.6 mhartree, for a total rms error of about 0.9 mhartree (or 0.55 kcal/mol) in the final ab initio energy values. It proved possible to include in a self-consistent way ab initio energies calculated by Schwenke, bringing the number of H₄ conformations to 6101. Ab initio energies were also computed for 404 conformations of H₃; adding ab initio energies calculated by other authors yielded a total of 772 conformations of H₃. (The H₃ results, and an improved analytic PES for H₃, are reported elsewhere.) Ab initio energies are tabulated in this paper only for a sample of H₄ conformations; a full list of all 6101 conformations of H₄ (and 772 conformations of H₃) is available from the Physics Auxiliary Publication Service (PAPS), or from the authors.
Preliminary Results on Uncertainty Quantification for Pattern Analytics
Stracuzzi, David John; Brost, Randolph; Chen, Maximillian Gene; Malinas, Rebecca; Peterson, Matthew Gregor; Phillips, Cynthia A.; Robinson, David G.; Woodbridge, Diane
2015-09-01
This report summarizes preliminary research into uncertainty quantification for pattern analytics within the context of the Pattern Analytics to Support High-Performance Exploitation and Reasoning (PANTHER) project. The primary focus of PANTHER was to make large quantities of remote sensing data searchable by analysts. The work described in this report adds nuance to both the initial data preparation steps and the search process. Search queries are transformed from "does the specified pattern exist in the data?" to "how certain is the system that the returned results match the query?" We show example results for both data processing and search, and discuss a number of possible improvements for each.
Analytical results of asymmetric exclusion processes with ramps
NASA Astrophysics Data System (ADS)
Huang, Ding-Wei
2005-07-01
We present the analytical results in a simple traffic model describing a single-lane highway with ramps. Both on-ramps and off-ramps are considered. Complete classification of distinct phases is achieved. Exact phase diagrams are derived. In the case of a single ramp (either on-ramp or off-ramp), the bottleneck effect is absent. The traffic conditions of congestion before the ramp and free-flowing after the ramp cannot be realized. In the case of two consecutive ramps, the bottleneck emerges when the on-ramp is placed before the off-ramp and the flow in between the ramps saturates.
NASA Astrophysics Data System (ADS)
Lloyd, N. S.; Bouman, C.; Horstwood, M. S.; Parrish, R. R.; Schwieters, J. B.
2010-12-01
This presentation describes progress in mass spectrometry for analysing very small analyte quantities, illustrated by example applications from nuclear forensics. In this challenging application, precise and accurate (‰-level) uranium isotope ratios are required from 1-2 µm diameter uranium oxide particles, which comprise less than 40 pg of uranium. Traditionally these are analysed using thermal ionisation mass spectrometry (TIMS), and more recently using secondary ionisation mass spectrometry (SIMS). Multicollector inductively-coupled plasma mass spectrometry (MC-ICP-MS) can offer higher productivity compared to these techniques, but is traditionally limited by low efficiency of analyte utilisation (sample through to ion detection). Samples can either be introduced as a solution, or sampled directly from the solid using laser ablation. Large multi-isotope ratio datasets can help identify the provenance and intended use of anthropogenic uranium and other nuclear materials [1]. The Thermo Scientific NEPTUNE Plus (Bremen, Germany) with 'Jet Interface' option offers unparalleled MC-ICP-MS sensitivity. An analyte utilisation of c. 4% has previously been reported for uranium [2]. This high-sensitivity configuration utilises a dry high-capacity (100 m³/h) interface pump, special skimmer and sampler cones, and a desolvating nebuliser system. Coupled with new acquisition methodologies, this sensitivity enhancement makes possible the analysis of micro-particles and small sample volumes at higher precision levels than previously achieved. New, high-performance, full-size and compact discrete dynode secondary electron multipliers (SEM) exhibit excellent stability and linearity over a large dynamic range and can be configured to simultaneously measure all of the uranium isotopes. Options for high abundance-sensitivity filters on two ion beams are also available, e.g. for ²³⁶U and ²³⁴U. Additionally, amplifiers with high-ohm (10¹²-10¹³ Ω) feedback resistors have been developed to
Mechanical properties of triaxially braided composites: Experimental and analytical results
NASA Technical Reports Server (NTRS)
Masters, John E.; Foye, Raymond L.; Pastore, Christopher M.; Gowayed, Yasser A.
1992-01-01
This paper investigates the unnotched tensile properties of two-dimensional triaxial braid reinforced composites from both an experimental and analytical viewpoint. The materials are graphite fibers in an epoxy matrix. Three different reinforcing fiber architectures were considered. Specimens were cut from resin transfer molded (RTM) composite panels made from each braid. There were considerable differences in the observed elastic constants from different size strain gage and extensometer readings. Larger strain gages gave more consistent results and correlated better with the extensometer readings. Experimental strains correlated reasonably well with analytical predictions in the longitudinal, zero-degree, fiber direction but not in the transverse direction. Tensile strength results were not always predictable even in reinforcing directions. Minor changes in braid geometry led to disproportionate strength variations. The unit cell structure of the triaxial braid was discussed with the assistance of computer analysis of the microgeometry. Photomicrographs of the braid geometry were used to improve upon the computer graphics representations of unit cells. These unit cells were used to predict the elastic moduli with various degrees of sophistication. The simple and the complex analyses were generally in agreement but none adequately matched the experimental results for all the braids.
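Although the paper's own predictions come from detailed unit-cell analyses, the simplest baseline such models are compared against for the longitudinal (zero-degree) modulus is a rule-of-mixtures estimate. The sketch below is illustrative only; the fiber and matrix moduli, fiber volume fraction, and braid-angle knockdown are assumed values, not data from the paper.

```python
# Rule-of-mixtures (Voigt) estimate of the longitudinal modulus of a
# 0-degree-dominated graphite/epoxy braid. All numbers are assumed for
# illustration; the paper's unit-cell models are far more detailed.
def longitudinal_modulus(E_fiber, E_matrix, V_fiber, angle_knockdown=1.0):
    """Voigt average, optionally reduced by a cos^4(theta) braid-angle factor."""
    return angle_knockdown * (V_fiber * E_fiber + (1.0 - V_fiber) * E_matrix)

E1 = longitudinal_modulus(E_fiber=230e9, E_matrix=3.5e9, V_fiber=0.55)
print(f"E1 ~ {E1 / 1e9:.1f} GPa")
```

Transverse properties, where the abstract notes the predictions were poor, are not captured by this simple average.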
Dismer, Florian; Hansen, Sigrid; Oelmeier, Stefan Alexander; Hubbuch, Jürgen
2013-03-01
Chromatography is the method of choice for the separation of proteins, at both analytical and preparative scale. Orthogonal purification strategies for industrial use can easily be implemented by combining different modes of adsorption. Nevertheless, with flexibility comes the freedom of choice and optimal conditions for consecutive steps need to be identified in a robust and reproducible fashion. One way to address this issue is the use of mathematical models that allow for an in silico process optimization. Although this has been shown to work, model parameter estimation for complex feedstocks becomes the bottleneck in process development. An integral part of parameter assessment is the accurate measurement of retention times in a series of isocratic or gradient elution experiments. As high-resolution analytics that can differentiate between proteins are often not readily available, pure protein is mandatory for parameter determination. In this work, we present an approach that has the potential to solve this problem. Based on the uniqueness of UV absorption spectra of proteins, we were able to accurately measure retention times in systems of up to four co-eluting compounds. The presented approach is calibration-free, meaning that prior knowledge of pure component absorption spectra is not required. Actually, pure protein spectra can be determined from co-eluting proteins as part of the methodology. The approach was tested for size-exclusion chromatograms of 38 mixtures of co-eluting proteins. Retention times were determined with an average error of 0.6 s (1.6% of average peak width), approximated and measured pure component spectra showed an average coefficient of correlation of 0.992.
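The unmixing step can be illustrated in simplified form. The sketch below assumes the pure-component spectra are already known and solves for the component contributions at one elution time point by linear least squares; the method in the paper is stronger, recovering the pure spectra themselves calibration-free. All matrices here are synthetic.

```python
import numpy as np

# Synthetic example: unmix one measured spectrum of three co-eluting proteins,
# assuming the pure-component spectra (columns of S) are known. The paper's
# method additionally recovers S itself, calibration-free.
rng = np.random.default_rng(0)
n_wavelengths = 60
S = np.abs(rng.normal(size=(n_wavelengths, 3)))   # assumed pure spectra
c_true = np.array([0.7, 0.2, 0.1])                # contributions at one time point
measured = S @ c_true + rng.normal(scale=1e-3, size=n_wavelengths)

# Ordinary least squares; for real data a non-negativity constraint is typical.
c_est, *_ = np.linalg.lstsq(S, measured, rcond=None)
print(np.round(c_est, 3))
```

Repeating this at each time point yields per-component elution profiles, from which individual retention times can be read off even when the total-absorbance peaks overlap.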
Analytical results on the Beauchemin model of lymphocyte migration
2013-01-01
The Beauchemin model is a simple particle-based description of stochastic lymphocyte migration in tissue, which has been successfully applied to studying immunological questions. In addition to being easy to implement, the model is also to a large extent mathematically tractable. This article provides a comprehensive overview of both existing and new analytical results on the Beauchemin model within a common mathematical framework. Specifically, we derive the motility coefficient, the mean square displacement, and the confinement ratio, and discuss four different methods for simulating biased migration of pre-defined speed. The results provide new insight into published studies and a reference point for future research based on this simple and popular lymphocyte migration model. PMID:23734948
An analytical fit to an accurate ab initio (¹A₁) potential surface of H₂O
NASA Astrophysics Data System (ADS)
Redmon, Michael J.; Schatz, George C.
1981-01-01
The accurate ab initio MBPT quartic force field of Bartlett, Shavitt and Purvis has been fit to an analytical function using a method developed by Sorbie and Murrell (SM). An analysis of this surface indicates that it describes most properties of the H₂O molecule very accurately, including an exact fit to the MBPT force field and a value very close to the correct energy difference between linear and equilibrium H₂O. The surface also reproduces the correct diatomic potentials in all dissociative regions, but some aspects of it in the "near asymptotic" O(¹D) + H₂ region are not quantitatively described. For example, the potential seems to be too attractive at long range for O + H₂ encounters, although it does have the correct minimum-energy-path geometry and correctly exhibits no barrier to O atom insertion. Comparison of this surface with one previously developed by SM indicates generally good agreement between the two, especially after some of the SM parameters were corrected using a numerical differentiation algorithm to evaluate them. A surface developed by Schinke and Lester (SL) is more realistic than ours in the O(¹D) + H₂ regions, but less quantitative in its description of the H₂O molecule. Overall, the present fit appears to be both realistic and quantitative for energy displacements up to 3-4 eV from H₂O equilibrium, and should therefore be useful for spectroscopic and collision dynamics studies involving H₂O.
Analytical results from routine DSSHT and SEHT monthly samples
Peters, T. B.
2016-12-01
Strip Effluent Hold Tank (SEHT) and Decontaminated Salt Solution Hold Tank (DSSHT) samples from several of the “microbatches” of Integrated Salt Disposition Project (ISDP) Salt Batch (“Macrobatch”) 8B have been analyzed for ^{238}Pu, ^{90}Sr, ^{137}Cs, cations (Inductively Coupled Plasma Emission Spectroscopy - ICPES), and anions (Ion Chromatography Anions - IC-A). The analytical results from the current microbatch samples are similar to those from previous macrobatch samples. The Cs removal continues to be excellent, with decontamination factors (DF) averaging 22,100 (114% RSD). The bulk chemistry of the DSSHT and SEHT samples does not show any signs of unusual behavior, other than lacking the anticipated degree of dilution that is calculated to occur during Modular Caustic-Side Solvent Extraction Unit (MCU) processing.
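As a reminder of how the quoted figures are formed (with made-up numbers, not the report's data): a decontamination factor (DF) is the feed-to-product ratio of Cs-137 activity, and the quoted %RSD is the spread of the DFs about their mean.

```python
import statistics

# Illustrative arithmetic only; these activities are hypothetical.
feed = [4.1e5, 3.9e5, 4.3e5]     # hypothetical Cs-137 activity in feed
product = [22.0, 15.0, 31.0]     # hypothetical Cs-137 activity after MCU

dfs = [f / p for f, p in zip(feed, product)]
mean_df = statistics.mean(dfs)
rsd_percent = 100.0 * statistics.stdev(dfs) / mean_df
print(f"average DF = {mean_df:.0f} ({rsd_percent:.0f}% RSD)")
```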
NASA Astrophysics Data System (ADS)
Roy Choudhury, Raja; Roy Choudhury, Arundhati; Kanti Ghose, Mrinal
2013-01-01
A semi-analytical model with three optimizing parameters and a novel non-Gaussian function as the fundamental modal field solution is proposed to predict various propagation parameters of graded-index fibers accurately, with less computational burden than numerical methods. In this semi-analytical formulation the core parameter U, whose merit function is often uncertain, noisy, or even discontinuous, is optimized by the Nelder-Mead method of nonlinear unconstrained minimization, an efficient and compact direct-search method that requires no derivative information. Three optimizing parameters are included in the formulation of the fundamental modal field to make it more flexible and accurate than other available approximations. Employing a variational technique, Petermann I and II spot sizes have been evaluated for triangular- and trapezoidal-index fibers with the proposed fundamental modal field. It has been demonstrated that the results of the proposed solution closely match the numerical results over a wide range of normalized frequencies. The approximation can also be used in the study of doped and nonlinear fiber amplifiers.
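A minimal sketch of the derivative-free optimization step, using SciPy's Nelder-Mead implementation on a stand-in merit function (the real objective, derived from the variational expression for the modal field, is not reproduced here):

```python
from scipy.optimize import minimize

# Derivative-free optimization of the core parameter U by Nelder-Mead.
# The merit function below is a stand-in, slightly non-smooth near its
# optimum to mimic the "noisy or even discontinuous" behavior described.
def merit(u):
    U = u[0]
    return (U - 1.7) ** 2 + 0.01 * abs(U - 1.7)

res = minimize(merit, x0=[1.0], method="Nelder-Mead",
               options={"xatol": 1e-6, "fatol": 1e-10})
print(round(float(res.x[0]), 4))
```

Because the simplex method only compares function values, it tolerates the kink at the optimum that would stall a gradient-based routine.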
The Photon Impact Factor for DIS at NLO: analytic result
Chirilli, Giovanni A.
2011-07-15
Using the Operator Product Expansion for high-energy scattering processes, we compute the photon impact factor at next-to-leading order accuracy. We obtain an analytic expression as a linear combination of five independent conformal tensor structures.
Analytical Results from Routine DSSHT and SEHT Monthly Samples
Peters, T. B.
2016-08-17
Strip Effluent Hold Tank (SEHT) and Decontaminated Salt Solution Hold Tank (DSSHT) samples from several of the “microbatches” of Integrated Salt Disposition Project (ISDP) Salt Batch (“Macrobatch”) 8B have been analyzed for ^{238}Pu, ^{90}Sr, ^{137}Cs, cations (Inductively Coupled Plasma Emission Spectroscopy - ICPES), and anions (Ion Chromatography Anions - IC-A). The analytical results from the current microbatch samples are similar to those from previous macrobatch samples. The Actinide Removal Process (ARP) and the Modular Caustic-Side Solvent Extraction Unit (MCU) continue to show more than adequate Pu and Sr removal for times when monosodium titanate (MST) is used. Even with no MST strike being performed there exists some small Pu and Sr removal, likely from filtration of fines containing these elements. The Cs removal continues to be excellent, with decontamination factors (DF) averaging 16,400. The bulk chemistry of the DSSHT and SEHT samples does not show any signs of unusual behavior. SRNL recommends that a sample of the strip feed be analyzed for cation and anion content if a further decline in boron concentration is noted in future SEHT samples.
NASA Astrophysics Data System (ADS)
El-Diasty, M.
2014-11-01
An accurate heading solution is required for many applications, and it can be achieved with high-grade (high-cost) gyroscopes that may not be suitable for such applications. Micro-Electro-Mechanical Systems (MEMS) technology has the potential to provide a heading solution using a low-cost MEMS-based gyro. However, a MEMS-gyro-based heading solution drifts significantly over time. A heading solution can also be estimated with a MEMS-based magnetometer by measuring the horizontal components of the Earth's magnetic field. The magnetometer-based heading solution does not drift over time, but it is contaminated by a high level of noise and may be disturbed by magnetic field sources such as metal objects. This paper proposes an accurate heading estimation procedure based on the integration of MEMS gyro and magnetometer measurements: gyro angular rates are estimated from the magnetometer measurements and then combined with the measured gyro angular rates in a robust filter to estimate the heading. The proposed integrated solution was implemented using two data sets, one collected in static mode without magnetic disturbances and the second in kinematic mode with magnetic disturbances. The results show that the proposed integrated heading solution is accurate, smooth, and undisturbed when compared with the magnetometer-based and gyro-based heading solutions.
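The fusion idea can be sketched with a basic complementary filter: integrate the gyro rate (smooth but drifting) and continuously pull the estimate toward the magnetometer heading (noisy but drift-free). The paper's robust filter is more sophisticated; the gains, noise levels, and bias below are assumed for illustration.

```python
import random

# Complementary-filter sketch of gyro/magnetometer heading fusion.
random.seed(1)
dt, alpha = 0.01, 0.02        # time step [s], magnetometer blending gain
true_rate = 5.0               # deg/s, constant turn
gyro_bias = 0.5               # deg/s, drift a gyro-only solution accumulates

heading_est = heading_true = 0.0
for _ in range(10000):        # 100 s of data
    heading_true += true_rate * dt
    gyro_rate = true_rate + gyro_bias + random.gauss(0.0, 0.1)
    mag_heading = heading_true + random.gauss(0.0, 2.0)   # noisy, unbiased
    heading_est += gyro_rate * dt                         # integrate gyro
    heading_est += alpha * (mag_heading - heading_est)    # pull toward magnetometer

gyro_only_drift = gyro_bias * 100.0                       # ~50 deg after 100 s
print(round(abs(heading_est - heading_true), 2))
```

The fused error stays bounded (roughly bias·dt/alpha plus filtered magnetometer noise), while pure gyro integration would have drifted by tens of degrees over the same interval.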
NASA Astrophysics Data System (ADS)
Amador, Davi H. T.; de Oliveira, Heibbe C. B.; Sambrano, Julio R.; Gargano, Ricardo; de Macedo, Luiz Guilherme M.
2016-10-01
A prolapse-free basis set for Eka-Actinium (E121, Z = 121), numerical atomic calculations on E121, spectroscopic constants, and an accurate analytical form for the potential energy curve of the diatomic E121F obtained at the 4-component all-electron CCSD(T) level including the Gaunt interaction are presented. The results show a strong and polarized bond (≈181 kcal/mol in strength) between E121 and F; the outermost frontier molecular orbitals of E121F should be fairly similar to those of AcF, and there is no evidence of a break in periodic trends. Moreover, the Gaunt interaction, although small, is expected to considerably influence the overall rovibrational spectrum.
Analytical results on back propagation nonlinear compensator with coherent detection.
Tanimura, Takahito; Nölle, Markus; Fischer, Johannes Karl; Schubert, Colja
2012-12-17
We derive analytic formulas for the improvement in effective optical signal-to-noise ratio brought by a digital nonlinear compensator for dispersion uncompensated links. By assuming Gaussian distributed nonlinear noise, we are able to take both nonlinear signal-to-signal and nonlinear signal-to-noise interactions into account. In the limit of weak nonlinear signal-to-noise interactions, we derive an upper boundary of the OSNR improvement. This upper boundary only depends on fiber parameters as well as on the total bandwidth of the considered wavelength-division multiplexing (WDM) signal and the bandwidth available for back propagation. We discuss the dependency of the upper boundary on different fiber types and also the OSNR improvement in practical system conditions. Furthermore, the analytical formulas are validated by numerical simulations.
Path integral analysis of Jarzynski's equality: analytical results.
Minh, David D L; Adib, Artur B
2009-02-01
We apply path integrals to study nonequilibrium work theorems in the context of Brownian dynamics, deriving in particular the equations of motion governing the most typical and most dominant trajectories. For the analytically soluble cases of a moving harmonic potential and a harmonic oscillator with a time-dependent natural frequency, we find such trajectories, evaluate the work-weighted propagators, and validate Jarzynski's equality.
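For the moving harmonic trap, one of the analytically soluble cases above, Jarzynski's equality can be checked numerically: the trap is dragged at constant speed, so ΔF = 0 and ⟨e^{-βW}⟩ should equal 1 even though the average work is positive (dissipation). This overdamped Brownian-dynamics sketch uses illustrative parameters, not those of the paper.

```python
import math, random

# Overdamped Langevin check of Jarzynski's equality for a dragged harmonic
# trap: H = (k/2)(x - lam)^2 with lam = v*t, so Delta F = 0.
random.seed(0)
beta, k, gamma, v = 1.0, 1.0, 1.0, 1.0
dt, steps, n_traj = 2e-3, 1000, 2000
D = 1.0 / (beta * gamma)                 # Einstein relation

works = []
for _ in range(n_traj):
    x = random.gauss(0.0, math.sqrt(1.0 / (beta * k)))   # equilibrium start
    lam, W = 0.0, 0.0
    for _ in range(steps):
        W += -k * (x - lam) * v * dt                     # dW = (dH/dlam) dlam
        lam += v * dt
        x += -(k / gamma) * (x - lam) * dt \
             + math.sqrt(2.0 * D * dt) * random.gauss(0.0, 1.0)
    works.append(W)

jarzynski_avg = sum(math.exp(-beta * w) for w in works) / n_traj
mean_work = sum(works) / n_traj
print(round(jarzynski_avg, 2), round(mean_work, 2))
```

The exponential average sits near 1 while the mean work is strictly positive, the signature of the equality holding out of equilibrium.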
A results-based process for evaluation of diverse visual analytics tools
NASA Astrophysics Data System (ADS)
Rubin, Gary; Berger, David H.
2013-05-01
With the pervasiveness of still and full-motion imagery in commercial and military applications, the need to ingest and analyze these media has grown rapidly in recent years. Additionally, video hosting and live camera websites provide a near real-time view of our changing world with unprecedented spatial coverage. To take advantage of these controlled and crowd-sourced opportunities, sophisticated visual analytics (VA) tools are required to accurately and efficiently convert raw imagery into usable information. Whether investing in VA products or evaluating algorithms for potential development, it is important for stakeholders to understand the capabilities and limitations of visual analytics tools. Visual analytics algorithms are being applied to problems related to Intelligence, Surveillance, and Reconnaissance (ISR), facility security, and public safety monitoring, to name a few. The diversity of requirements means that a one-size-fits-all approach to performance assessment will not work. We present a process for evaluating the efficacy of algorithms in real-world conditions, thereby allowing users and developers of video analytics software to understand software capabilities and identify potential shortcomings. The results-based approach described in this paper uses an analysis of end-user requirements and Concept of Operations (CONOPS) to define Measures of Effectiveness (MOEs), test data requirements, and evaluation strategies. We define metrics that individually do not fully characterize a system, but when used together, are a powerful way to reveal both strengths and weaknesses. We provide examples of data products, such as heatmaps, performance maps, detection timelines, and rank-based probability-of-detection curves.
Exact analytical results for ADC with oscillating diffusion sensitizing gradients
Sukstanskii, A.L.
2013-01-01
The apparent diffusion coefficient (ADC) is analyzed for the case of oscillating diffusion sensitizing gradients. Exact analytical expressions are obtained in the high-frequency expansion of the ADC for an arbitrary number of oscillations N. These expressions are universal and valid for arbitrary system geometry. The validity conditions of the high-frequency expansion of ADC are obtained in the framework of a simple 1D model of restricted diffusion. These conditions are shown to be substantially different for cos- and sin-type gradients: for the cos-type gradients, the high-frequency expansion is valid when the period of a single oscillation is smaller than the characteristic diffusion time, the frequency dependence of ADC being practically the same for any N. In contrast, for the sin-type gradients, the high-frequency regime can be achieved only when the total diffusion time is smaller than the characteristic diffusion time, the frequency dependence of ADC being different for different N. PMID:23876779
Communicating Qualitative Analytical Results Following Grice's Conversational Maxims
ERIC Educational Resources Information Center
Chenail, Jan S.; Chenail, Ronald J.
2011-01-01
Conducting qualitative research can be seen as a developing communication act through which researchers engage in a variety of conversations. Articulating the results of qualitative data analysis results can be an especially challenging part of this scholarly discussion for qualitative researchers. To help guide investigators through this…
CSI sensing and control: Analytical and experimental results
NASA Technical Reports Server (NTRS)
Junkins, J. L.; Pollock, T. C.; Rahman, Z. H.
1989-01-01
Recent work on structural identification and large-angle maneuvers with vibration suppression was presented. The recent work has sought to balance structural and controls analysis activities by involving the analysts directly in the validation and experimental aspects of the research. Some new sensing, actuation, system identification, and control concepts were successfully implemented. An overview of these results is given.
Microgravity Fluid Separation Physics: Experimental and Analytical Results
NASA Technical Reports Server (NTRS)
Shoemaker, J. Michael; Schrage, Dean S.
1997-01-01
Effective, low-power two-phase separation systems are vital for the cost-effective study and utilization of two-phase flow systems. The study of microgravity flows has the potential to reveal significant insight into the mechanisms controlling flow behavior in both normal and reduced gravity environments. The microgravity environment reduces the gravity-induced buoyancy forces acting on the discrete phases; thus, surface tension, viscous, and inertial forces exert an increased influence on the behavior of the flow, as demonstrated by axisymmetric flow patterns. Several space technology and operations groups have studied flow behavior in reduced gravity, since gas-liquid flows are encountered in systems such as cabin humidity control, wastewater treatment, thermal management, and Rankine power systems.
Transcriptional Bursting in Gene Expression: Analytical Results for General Stochastic Models
Kumar, Niraj; Singh, Abhyudai; Kulkarni, Rahul V.
2015-01-01
Gene expression in individual cells is highly variable and sporadic, often resulting in the synthesis of mRNAs and proteins in bursts. Such bursting has important consequences for cell-fate decisions in diverse processes ranging from HIV-1 viral infections to stem-cell differentiation. It is generally assumed that bursts are geometrically distributed and that they arrive according to a Poisson process. On the other hand, recent single-cell experiments provide evidence for complex burst arrival processes, highlighting the need for analysis of more general stochastic models. To address this issue, we invoke a mapping between general stochastic models of gene expression and systems studied in queueing theory to derive exact analytical expressions for the moments associated with mRNA/protein steady-state distributions. These results are then used to derive noise signatures, i.e. explicit conditions based entirely on experimentally measurable quantities, that determine if the burst distributions deviate from the geometric distribution or if burst arrival deviates from a Poisson process. For non-Poisson arrivals, we develop approaches for accurate estimation of burst parameters. The proposed approaches can lead to new insights into transcriptional bursting based on measurements of steady-state mRNA/protein distributions. PMID:26474290
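The baseline noise signature mentioned above can be verified in a small simulation: for geometric bursts arriving as a Poisson process, the steady-state mRNA distribution is negative binomial and the Fano factor equals 1 + b, where b is the mean burst size. The rates below are illustrative; a measured Fano factor away from 1 + b would flag a departure from this baseline model.

```python
import random

# Gillespie simulation of bursty mRNA production: bursts arrive at rate
# k_burst, each adds a geometric number of molecules (support 0,1,2,...),
# and each molecule decays at rate gamma. Expected Fano factor: 1 + b.
random.seed(2)
k_burst, gamma, b = 5.0, 1.0, 4.0
p = 1.0 / (1.0 + b)                    # geometric parameter for mean b

def geometric0(p):
    n = 0
    while random.random() > p:
        n += 1
    return n

t, T, n = 0.0, 5000.0, 0
s1 = s2 = 0.0                          # time-weighted first and second moments
while t < T:
    rate = k_burst + gamma * n
    dt = random.expovariate(rate)
    s1 += n * dt; s2 += n * n * dt; t += dt
    if random.random() < k_burst / rate:
        n += geometric0(p)             # burst arrival
    else:
        n -= 1                         # single-molecule decay

mean = s1 / t
fano = (s2 / t - mean * mean) / mean
print(round(mean, 1), round(fano, 1))
```

Here the expected steady-state mean is k_burst·b/gamma = 20 and the expected Fano factor is 1 + b = 5; the time-averaged estimates land close to both.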
The structural properties of a two-Yukawa fluid: Simulation and analytical results.
Broccio, Matteo; Costa, Dino; Liu, Yun; Chen, Sow-Hsin
2006-02-28
Standard Monte Carlo simulations are carried out to assess the accuracy of theoretical predictions for the structural properties of a model fluid interacting through a hard-core two-Yukawa potential composed of a short-range attractive well next to a hard repulsive core, followed by a smooth, long-range repulsive tail. Theoretical calculations are performed in the framework provided by the Ornstein-Zernike equation, solved either analytically with the mean spherical approximation (MSA) or iteratively with the hypernetted-chain (HNC) closure. Our analysis shows that both theories are generally accurate in a thermodynamic region corresponding to a dense vapor phase around the critical point. For a suitable choice of potential parameters, namely, when the attractive well is deep and/or large enough, the static structure factor displays a secondary low-Q peak. In this case HNC predictions closely follow the simulation results, whereas MSA results progressively worsen the more pronounced this low-Q peak is. We discuss the appearance of such a peak, also experimentally observed in colloidal suspensions and protein solutions, in terms of the formation of equilibrium clusters in the homogeneous fluid.
Tank 48H Waste Composition and Results of Investigation of Analytical Methods
Walker , D.D.
1997-04-02
This report serves two purposes. First, it documents the analytical results of Tank 48H samples taken between April and August 1996. Second, it describes investigations of the precision of the sampling and analytical methods used on the Tank 48H samples.
NASA Astrophysics Data System (ADS)
Cumpson, Peter J.; Hedley, John
2003-12-01
Calibration of atomic force microscope (AFM) cantilevers is necessary for the measurement of nanonewton and piconewton forces, which are critical to analytical applications of AFM in the analysis of polymer surfaces, biological structures and organic molecules at nanoscale lateral resolution. We have developed a compact and easy-to-use reference artefact for this calibration, using a method that allows traceability to the SI (Système International). Traceability is crucial to ensure that force measurements by AFM are comparable to those made by optical tweezers and other methods. The new non-contact calibration method measures the spring constant of these artefacts, by a combination of electrical measurements and Doppler velocimetry. The device was fabricated by silicon surface micromachining. The device allows AFM cantilevers to be calibrated quite easily by the 'cantilever-on-reference' method, with our reference device having a spring constant uncertainty of around ± 5% at one standard deviation. A simple substitution of the analogue velocimeter used in this work with a digital model should reduce this uncertainty to around ± 2%. Both are significant improvements on current practice, and allow traceability to the SI for the first time at these nanonewton levels.
Accurate Analytic Potential Functions for the A ^3Π_1 and X ^1Σ^+ States of IBr
NASA Astrophysics Data System (ADS)
Yukiya, Tokio; Nishimiya, Nobuo; Suzuki, Masao; Le Roy, Robert
2014-06-01
Spectra of IBr in various wavelength regions have been measured by a number of researchers using traditional diffraction grating and microwave methods, as well as using high-resolution laser techniques combined with a Fourier transform spectrometer. In a previous paper at this meeting, we reported a preliminary determination of analytic potential energy functions for the A ^3Π_1 and X ^1Σ^+ states of IBr from a direct-potential-fit (DPF) analysis of all of the data available at that time. That study also confirmed the presence of anomalous fluctuations in the v-dependence of the first differences of the inertial rotational constant, ΔB_v = B_{v+1} - B_v, in the A ^3Π_1 state for vibrational levels with v'(A) in the mid 20's. However, our previous experience in a recent study of the analogous A ^3Π_1-X ^1Σ_g^+ system of Br_2 suggested that the effect of such fluctuations may be overcome if sufficient data are available. The present work therefore reports new measurements of transitions to levels in the v'(A) = 23-26 region, together with a new global DPF analysis that uses "robust" least-squares fits to average properly over the effect of such fluctuations in order to provide an optimum delineation of the underlying potential energy curve(s). L.E. Selin, Ark. Fys. 21, 479 (1962); E. Tiemann and Th. Moeller, Z. Naturforsch. A 30, 986 (1975); E.M. Weinstock and A. Preston, J. Mol. Spectrosc. 70, 188 (1978); D.R.T. Appadoo, P.F. Bernath, and R.J. Le Roy, Can. J. Phys. 72, 1265 (1994); N. Nishimiya, T. Yukiya and M. Suzuki, J. Mol. Spectrosc. 173, 8 (1995); T. Yukiya, N. Nishimiya, and R.J. Le Roy, Paper MF12 at the 65th Ohio State University International Symposium on Molecular Spectroscopy, Columbus, Ohio, June 20-24, 2011; T. Yukiya, N. Nishimiya, Y. Samajima, K. Yamaguchi, M. Suzuki, C.D. Boone, I. Ozier and R.J. Le Roy, J. Mol. Spectrosc. 283, 32 (2013); J.K.G. Watson, J. Mol. Spectrosc. 219, 326 (2003).
Tank 241-A-101 cores 154 and 156 analytical results for the final report
Steen, F.H.
1997-05-02
This report contains tables of the analytical results from sampling Tank 241-A-101 for the following: fluorides, chlorides, nitrites, bromides, nitrates, phosphates, sulfates, and oxalates. This tank is listed on the Hydrogen Watch List.
Ballestra, S.; Vas, D.; Holm, E.; Lopez, J.J.; Parsi, P.
1988-01-01
The Analytical Quality Control Services Program of the IAEA-ILMR covers a wide variety of intercalibration and reference materials. The purpose of the program is to ensure the comparability of the results obtained by the different participants and to enable laboratories engaged in low-level analyses of marine environmental materials to control their analytical performance. Within the past five years, the International Laboratory of Marine Radioactivity in Monaco has organized eight intercomparison exercises, on a world-wide basis, on natural materials of marine origin comprising sea water, sediment, seaweed and fish flesh. Results on artificial (fission and activation products, transuranium elements) and natural radionuclides were compiled and evaluated. Reference concentration values were established for a number of the intercalibration samples allowing them to become certified as reference materials available for general distribution. The results of the fish flesh sample and those of the deep-sea sediment are reviewed. The present status of three on-going intercomparison exercises on post-Chernobyl samples IAEA-306 (Baltic Sea sediment), IAEA-307 (Mediterranean sea-plant Posidonia oceanica) and IAEA-308 (Mediterranean mixed seaweed) is also described. 1 refs., 4 tabs.
NASA Technical Reports Server (NTRS)
Lueck, Dale E.; Captain, Janine E.; Gibson, Tracy L.; Peterson, Barbara V.; Berger, Cristina M.; Levine, Lanfang
2008-01-01
The RESOLVE project requires an analytical system to identify and quantitate the volatiles released from a lunar drill core sample as it is crushed and heated to 150 °C. The expected gases and their range of concentrations were used to assess Gas Chromatography (GC) and Mass Spectrometry (MS), along with specific analyzers for use on this potential lunar lander. The ability of these systems to accurately quantitate water and hydrogen in an unknown matrix led to the selection of a small MEMS commercial process GC for use in this project. The modification, development and testing of this instrument for the specific needs of the project is covered.
Anderson, Oscar A.
2006-08-06
The well-known Kapchinskij-Vladimirskij (KV) equations are difficult to solve in general, but the problem is simplified for the matched-beam case with sufficient symmetry. The authors show that the interdependence of the two KV equations is then eliminated, so that only one needs to be solved, a great simplification. They present an iterative method of solution which can potentially yield any desired level of accuracy. The lowest level, the well-known smooth approximation, yields simple, explicit results with good accuracy for weak or moderate focusing fields. The next level improves the accuracy for high fields; the authors previously showed how to maintain a simple, explicit format for the results. That paper used expansion in a small parameter to obtain the second level. The present paper, using straightforward iteration, obtains equations of first, second, and third levels of accuracy. For a periodic lattice with the beam matched to the lattice, the lattice and beam parameters are used as input to solve for phase advances and envelope waveforms. The results show excellent agreement with numerical solutions over a wide range of beam emittances and intensities.
Bearup, Daniel; Petrovskaya, Natalia; Petrovskii, Sergei
2015-05-01
Monitoring of pest insects is an important part of integrated pest management. It aims to provide information about pest insect abundance at a given location. This includes data collection, usually using traps, and their subsequent analysis and/or interpretation. However, interpretation of a trap count (the number of insects caught over a fixed time) remains a challenging problem. First, an increase in either the population density or insect activity can result in a similar increase in the number of insects trapped (the so-called "activity-density" problem). Second, a genuine increase of the local population density can be attributed to qualitatively different ecological mechanisms, such as multiplication or immigration. Identification of the true factor causing an increase in trap count is important, as different mechanisms require different control strategies. In this paper, we consider a mean-field mathematical model of insect trapping based on the diffusion equation. Although the diffusion equation is a well-studied model, its analytical solution in closed form is available only for a few special cases, whilst in the more general case the problem has to be solved numerically. We choose finite differences as the baseline numerical method and show that numerical solution of the problem, especially in the realistic 2D case, is not at all straightforward, as it requires a sufficiently accurate approximation of the diffusion fluxes. Once the numerical method is justified and tested, we apply it to the corresponding boundary problem, where different types of boundary forcing describe different scenarios of pest insect immigration, and reveal the corresponding patterns in the trap count growth.
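The finite-difference approach described above can be sketched for the simplest 1D case. This is an illustrative toy, not the authors' code: the diffusivity, domain size, and uniform initial density are assumed, and the trap is idealized as an absorbing boundary at x = 0 whose time-integrated flux gives the trap count.

```python
import numpy as np

def trap_count_1d(D=1.0, L=10.0, nx=201, dt=1e-3, t_end=1.0, u0=1.0):
    """Explicit finite-difference solution of u_t = D * u_xx on [0, L].

    x = 0 is an absorbing boundary representing the trap; x = L is a
    reflecting far boundary. The trap count is the time-integrated
    diffusive flux into the trap.
    """
    dx = L / (nx - 1)
    r = D * dt / dx**2
    assert r <= 0.5, "explicit scheme unstable: reduce dt or increase dx"
    u = np.full(nx, u0)   # uniform initial population density
    u[0] = 0.0            # trap removes insects on contact
    count = 0.0
    for _ in range(int(t_end / dt)):
        # one-sided approximation of the diffusive flux D * u_x at the trap
        count += D * (u[1] - u[0]) / dx * dt
        unew = u.copy()
        unew[1:-1] = u[1:-1] + r * (u[2:] - 2.0 * u[1:-1] + u[:-2])
        unew[0] = 0.0        # absorbing trap
        unew[-1] = unew[-2]  # zero-flux far boundary
        u = unew
    return count

# For a semi-infinite domain the exact count is 2 * u0 * sqrt(D * t / pi)
print(trap_count_1d())
```

The accuracy of the accumulated count hinges on the flux approximation at the boundary, which is the point the abstract makes about approximating diffusion fluxes carefully.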
NASA Astrophysics Data System (ADS)
Le Roy, Robert J.; Walji, Sadru; Sentjens, Katherine
2013-06-01
Alkali hydride diatomic molecules have long been the object of spectroscopic studies. However, their small reduced mass makes them species for which the conventional semiclassical-based methods of analysis tend to have the largest errors. To date, the only quantum-mechanically accurate direct-potential-fit (DPF) analysis for one of these molecules was the one for LiH reported by Coxon and Dickinson. The present paper extends this level of analysis to NaH, and reports a DPF analysis of all available spectroscopic data for the A¹Σ⁺-X¹Σ⁺ system of NaH which yields analytic potential energy functions for these two states that account for those data (on average) to within the experimental uncertainties. W.C. Stwalley, W.T. Zemke and S.C. Yang, J. Phys. Chem. Ref. Data 20, 153-187 (1991). J.A. Coxon and C.S. Dickinson, J. Chem. Phys. 121, 8378 (2004).
Exploring vortex dynamics in the presence of dissipation: Analytical and numerical results
NASA Astrophysics Data System (ADS)
Yan, D.; Carretero-González, R.; Frantzeskakis, D. J.; Kevrekidis, P. G.; Proukakis, N. P.; Spirn, D.
2014-04-01
In this paper, we examine the dynamical properties of vortices in atomic Bose-Einstein condensates in the presence of phenomenological dissipation, used as a basic model for the effect of finite temperatures. In the context of this so-called dissipative Gross-Pitaevskii model, we derive analytical results for the motion of single vortices and, importantly, for vortex dipoles, which have become very relevant experimentally. Our analytical results are shown to compare favorably to the full numerical solution of the dissipative Gross-Pitaevskii equation where appropriate. We also present results on the stability of vortices and vortex dipoles, revealing good agreement between numerical and analytical results for the internal excitation eigenfrequencies, which extends even beyond the regime of validity of this equation for cold atoms.
Review of analytical results from the proposed agent disposal facility site, Aberdeen Proving Ground
Brubaker, K.L.; Reed, L.L.; Myers, S.W.; Shepard, L.T.; Sydelko, T.G.
1997-09-01
Argonne National Laboratory reviewed the analytical results from 57 composite soil samples collected in the Bush River area of Aberdeen Proving Ground, Maryland. A suite of 16 analytical tests involving 11 different SW-846 methods was used to detect a wide range of organic and inorganic contaminants. One method (BTEX) was considered redundant, and two "single-number" methods (TPH and TOX) were found to lack the required specificity to yield unambiguous results, especially in a preliminary investigation. Volatile analytes detected at the site include 1,1,2,2-tetrachloroethane, trichloroethylene, and tetrachloroethylene, all of which probably represent residual site contamination from past activities. Other volatile analytes detected include toluene, tridecane, methylene chloride, and trichlorofluoromethane. These compounds are probably not associated with site contamination but likely represent cross-contamination or, in the case of tridecane, a naturally occurring material. Semivolatile analytes detected include three different phthalates and low part-per-billion amounts of the pesticide DDT and its degradation product DDE. The pesticide could represent residual site contamination from past activities, and the phthalates are likely due, in part, to cross-contamination during sample handling. A number of high-molecular-weight hydrocarbons and hydrocarbon derivatives were detected and were probably naturally occurring compounds. 4 refs., 1 fig., 8 tabs.
Tank 241-S-102, Core 232 analytical results for the final report
STEEN, F.H.
1998-11-04
This document is the analytical laboratory report for tank 241-S-102 push mode core segments collected between March 5, 1998 and April 2, 1998. The segments were subsampled and analyzed in accordance with the Tank 241-S-102 Retained Gas Sampler System Sampling and Analysis Plan (TSAP) (McCain, 1998), the Letter of Instruction for Compatibility Analysis of Samples from Tank 241-S-102 (LOI) (Thompson, 1998), and the Data Quality Objectives for Tank Farms Waste Compatibility Program (DQO) (Mulkey and Miller, 1998). The analytical results are included in the data summary table (Table 1).
B Plant canyon sample TK-21-1 analytical results for the final report
Steen, F.H.
1998-04-10
This document is the analytical laboratory report for the TK-21-1 sample collected from the B Plant Canyon on February 18, 1998. The sample was analyzed in accordance with the Sampling and Analysis Plan for B Plant Solutions (SAP) (Simmons, 1997) in support of the B Plant decommissioning project. Samples were analyzed to provide data both to describe the material which would remain in the tanks after the B Plant transition is complete and to determine Tank Farm compatibility. The analytical results are included in the data summary table (Table 1).
An Analytical Approach For Earing In Cylindrical Deep Drawing Based On Uniaxial Tensile Test Results
NASA Astrophysics Data System (ADS)
Mulder, J.; Vegter, H.
2011-05-01
The prediction of earing and wall thickness distribution in cylindrical deep drawing is a challenging task even for today's FEA programs with advanced yield loci like Yld2004-18p, BBC2003 or Vegter. The current work involves an analytical description of cylindrical deep drawing that is comparable in accuracy to advanced numerical models. The analytical approach shows the importance, for this type of simulation, of fitting the yield locus description to uniaxial tensile test results in different directions, considering the full hardening curves.
Peters, T.; Fink, S.
2011-09-29
As part of the implementation process for the Next Generation Cesium Extraction Solvent (NGCS), SRNL and F/H Lab performed a series of analytical cross-checks to ensure that the components in the NGCS solvent system do not constitute an undue analytical challenge. For measurement of entrained Isopar® L in aqueous solutions, both labs performed similarly, with results more reliable at higher concentrations (near 50 mg/L). Low bias occurred in both labs, as seen previously in comparable blind studies for the baseline solvent system. SRNL recommends considering the use of Teflon™ caps on all sample containers used for this purpose. For pH measurements, the labs showed reasonable agreement but considerable positive bias for dilute boric acid solutions. SRNL recommends considering an alternate analytical method for qualification of boric acid concentrations.
Tank 241-BY-109, cores 201 and 203, analytical results for the final report
Esch, R.A.
1997-11-20
This document is the final laboratory report for tank 241-BY-109 push mode core segments collected between June 6, 1997 and June 17, 1997. The segments were subsampled and analyzed in accordance with the Tank Push Mode Core Sampling and Analysis Plan (Bell, 1997) and the Tank Safety Screening Data Quality Objective (Dukelow, et al., 1995). The analytical results are included.
Feynman Path Integral Approach to Electron Diffraction for One and Two Slits: Analytical Results
ERIC Educational Resources Information Center
Beau, Mathieu
2012-01-01
In this paper we present an analytic solution of the famous problem of diffraction and interference of electrons through one and two slits (for simplicity, only the one-dimensional case is considered). In addition to exact formulae, various approximations of the electron distribution are shown which facilitate the interpretation of the results.…
Bounding the Higgs width at the LHC using full analytic results for gg → e⁻e⁺μ⁻μ⁺
Campbell, John M.; Ellis, R. Keith; Williams, Ciaran
2014-04-09
We revisit the hadronic production of the four-lepton final state, e⁻e⁺μ⁻μ⁺, through the fusion of initial-state gluons. This process is mediated by loops of quarks, and we provide the first full analytic results for helicity amplitudes that account for both the effects of the quark mass in the loop and off-shell vector bosons. The analytic results have been implemented in the Monte Carlo program MCFM and are both fast and numerically stable in the region of low Z transverse momentum. We use our results to study the interference between Higgs-mediated and continuum production of four-lepton final states, which is necessary in order to obtain accurate theoretical predictions outside the Higgs resonance region. We have confirmed and extended a recent analysis of Caola and Melnikov that proposes to use a measurement of the off-shell region to constrain the total width of the Higgs boson. Using a simple cut-and-count method, existing LHC data should bound the width at the level of 25-45 times the Standard Model expectation. We investigate the power of using a matrix element method to construct a kinematic discriminant to sharpen the constraint. Furthermore, in our analysis the bound on the Higgs width is improved by a factor of about 1.6 using a simple cut on the MEM discriminant, compared to an invariant mass cut m4l > 300 GeV.
Recent Results on the Accurate Measurements of the Dielectric Constant of Seawater at 1.413 GHz
NASA Technical Reports Server (NTRS)
Lang, R.H.; Tarkocin, Y.; Utku, C.; Le Vine, D.M.
2008-01-01
Measurements of the complex dielectric constant of seawater at 30.00 psu, 35.00 psu and 38.27 psu over the temperature range from 5 °C to 35 °C at 1.413 GHz are given and compared with the Klein-Swift results. A resonant cavity technique is used. The calibration constant used in the cavity perturbation formulas is determined experimentally using methanol and ethanediol (ethylene glycol) as reference liquids. Analysis of the data shows that the measurements are accurate to better than 1.0% in almost all cases studied.
Analytical and Numerical Results for an Adhesively Bonded Joint Subjected to Pure Bending
NASA Technical Reports Server (NTRS)
Smeltzer, Stanley S., III; Lundgren, Eric
2006-01-01
A one-dimensional, semi-analytical methodology that was previously developed for evaluating adhesively bonded joints composed of anisotropic adherends and adhesives that exhibit inelastic material behavior is further verified in the present paper. A summary of the first-order differential equations and applied joint loading used to determine the adhesive response from the methodology is also presented. The method was previously verified against a variety of single-lap joint configurations from the literature that subjected the joints to cases of axial tension and pure bending. Using the same joint configuration and applied bending load presented in a study by Yang, the finite element analysis software ABAQUS was used to further verify the semi-analytical method. Linear static ABAQUS results are presented for two models, one with a coarse and one with a fine element mesh, that were used to verify convergence of the finite element analyses. Close agreement between the finite element results and the semi-analytical methodology was found for both the shear and normal stress responses of the adhesive bondline. Thus, the semi-analytical methodology was successfully verified using the ABAQUS finite element software and a single-lap joint configuration subjected to pure bending.
NASA Astrophysics Data System (ADS)
Sarkar, Vaskar; Dutta, Aloke K.
2006-11-01
A novel approach to defining the threshold voltage for long-channel MOSFETs is presented in this paper, where it is proposed that the threshold corresponds to the gate-to-source voltage for which the drift and diffusion components of the total drain current become equal to each other. In order to avoid the greater computation time associated with the numerical solution of these two components, an analytical expression for the surface potential corresponding to the threshold condition is given here, which has the same functional form as the one proposed by Tsividis. The fuzzy parameter n, appearing in this expression for the surface potential, is expressed as a function of the substrate doping density (NA) and the oxide thickness (tox). The threshold voltage values, obtained analytically from the relation between the surface potential at the threshold condition and the closed-form technology-mapped expression of the fuzzy parameter n, show an excellent match with those obtained from SILVACO simulations for a wide range of NA and tox, with the maximum error being only about 4%. The comparison of the percent error values of the threshold voltage obtained from this proposed model with those obtained from two other recently proposed methods, all with respect to SILVACO simulation results, further verifies the validity of our completely analytical, mathematically simple, and straightforward approach.
Analytic results for planar three-loop integrals for massive form factors
NASA Astrophysics Data System (ADS)
Henn, Johannes M.; Smirnov, Alexander V.; Smirnov, Vladimir A.
2016-12-01
We use the method of differential equations to analytically evaluate all planar three-loop Feynman integrals relevant for form factor calculations involving massive particles. Our results for ninety master integrals at general q² are expressed in terms of multiple polylogarithms, and results for fifty-one master integrals at the threshold q² = 4m² are expressed in terms of multiple polylogarithms of argument one, with indices equal to zero or to a sixth root of unity.
Improving the trust in results of numerical simulations and scientific data analytics
Cappello, Franck; Constantinescu, Emil; Hovland, Paul; Peterka, Tom; Phillips, Carolyn; Snir, Marc; Wild, Stefan
2015-04-30
This white paper investigates several key aspects of the trust that a user can give to the results of numerical simulations and scientific data analytics. In this document, the notion of trust is related to the integrity of numerical simulations and data analytics applications. This white paper complements the DOE ASCR report on Cybersecurity for Scientific Computing Integrity by (1) exploring the sources of trust loss; (2) reviewing the definitions of trust in several areas; (3) providing numerous cases of result alteration, some of them leading to catastrophic failures; (4) examining the current notion of trust in numerical simulation and scientific data analytics; (5) providing a gap analysis; and (6) suggesting two important research directions and their respective research topics. To simplify the presentation without loss of generality, we consider that trust in results can be lost (or the results' integrity impaired) because of any form of corruption happening during the execution of the numerical simulation or the data analytics application. In general, the sources of such corruption are threefold: errors, bugs, and attacks. Current applications are already using techniques to deal with different types of corruption. However, not all potential corruptions are covered by these techniques. We firmly believe that the current level of trust that a user has in the results is at least partially founded on ignorance of this issue or the hope that no undetected corruptions will occur during the execution. This white paper explores the notion of trust and suggests recommendations for developing a more scientifically grounded notion of trust in numerical simulation and scientific data analytics. We first formulate the problem and show that it goes beyond previous questions regarding the quality of results such as V&V, uncertainty quantification, and data assimilation. We then explore the complexity of this difficult problem, and we sketch complementary general
Forest, Valérie; Figarol, Agathe; Boudard, Delphine; Cottier, Michèle; Grosseau, Philippe; Pourchez, Jérémie
2015-03-31
Carbon nanotube (CNT) cytotoxicity is frequently investigated using in vitro classical toxicology assays. However, these cellular tests, usually based on the use of colorimetric or fluorimetric dyes, were designed for chemicals and may not be suitable for nanosized materials. Indeed, because of their unique physicochemical properties CNT can interfere with the assays and bias the results. To get accurate data and draw reliable conclusions, these artifacts should be carefully taken into account. The aim of this study was to evaluate qualitatively and quantitatively the interferences occurring between CNT and the commonly used lactate dehydrogenase (LDH) assay. Experiments under cell-free conditions were performed, and it was clearly demonstrated that artifacts occurred. They were due to the intrinsic absorbance of CNT on one hand and the adsorption of LDH at the CNT surface on the other hand. The adsorption of LDH on CNT was modeled and was found to fit the Langmuir model. The K(ads) and n(eq) constants were defined, allowing the correction of results obtained from cellular experiments to get more accurate data and lead to proper conclusions on the cytotoxicity of CNT.
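The Langmuir-type correction described above can be sketched as follows. All constants here are illustrative placeholders, not the K(ads) and n(eq) values fitted in the study, and treating the measured concentration as the equilibrium free concentration is a simplifying assumption:

```python
def langmuir_adsorbed(c_free, k_ads, n_eq):
    """LDH adsorbed per unit mass of CNT under a Langmuir isotherm."""
    return n_eq * k_ads * c_free / (1.0 + k_ads * c_free)

def corrected_ldh(measured, cnt_mass, k_ads, n_eq):
    """Add back the LDH sequestered on the CNT surface.

    Approximation: the measured (free) LDH concentration stands in
    for the equilibrium concentration in the isotherm.
    """
    return measured + cnt_mass * langmuir_adsorbed(measured, k_ads, n_eq)

# Illustrative numbers only, not the study's fitted constants:
print(corrected_ldh(measured=2.0, cnt_mass=0.5, k_ads=1.5, n_eq=4.0))  # → 3.5
```

The same isotherm form could be fitted to cell-free adsorption data to obtain the two constants before applying the correction to cellular results.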
Thermodynamics of the O(3) model in 1+1 dimensions: lattice vs. analytical results
NASA Astrophysics Data System (ADS)
Seel, Elina; Smith, Dominik; Lottini, Stefano; Giacosa, Francesco
2013-07-01
A detailed study of the thermodynamics of the O(N = 3) model in 1+1 dimensions is presented, employing a two-particle-irreducible resummation prescription as well as fully nonperturbative finite-temperature lattice simulations. The analytical results are computed using the Cornwall-Jackiw-Tomboulis (CJT) formalism and the auxiliary field method to one- and to two-loop order. The lattice results are obtained through Monte Carlo simulation for various lattice spacings. The analytical and lattice results for pressure, trace anomaly, and energy density, resembling closely those of four-dimensional Yang-Mills theories, are compared with each other. We find that to one-loop order there is a good correspondence between the CJT formalism and the lattice study for low temperatures. However, at high T the two-loop calculation fares better, correcting for the overestimation from the former approximation.
Variation of analytical results for peanuts in energy bars and milk chocolate.
Trucksess, Mary W; Whitaker, Thomas B; Slate, Andrew B; Williams, Kristina M; Brewer, Vickery A; Whittaker, Paul; Heeres, James T
2004-01-01
Peanuts contain proteins that can cause severe allergic reactions in some sensitized individuals. Studies were conducted to determine the percentage of recovery by an enzyme-linked immunosorbent assay (ELISA) method in the analysis for peanuts in energy bars and milk chocolate and to determine the sampling, subsampling, and analytical variances associated with testing energy bars and milk chocolate for peanuts. Food products containing chocolate were selected because their composition makes sample preparation for subsampling difficult. Peanut-contaminated energy bars, noncontaminated energy bars, incurred milk chocolate containing known levels of peanuts, and peanut-free milk chocolate were used. A commercially available ELISA kit was used for analysis. The sampling, sample preparation, and analytical variances associated with each step of the test procedure to measure peanut protein were determined for energy bars. The sample preparation and analytical variances were determined for milk chocolate. Variances were found to be functions of peanut concentration. Sampling and subsampling variability associated with energy bars accounted for 96.6% of the total testing variability. Subsampling variability associated with powdered milk chocolate accounted for >60% of the total testing variability. The variability among peanut test results can be reduced by increasing sample size, subsample size, and number of analyses. For energy bars the effect of increasing sample size from 1 to 4 bars, subsample size from 5 to 20 g, and number of aliquots quantified from 1 to 2 on reducing the sampling, sample preparation, and analytical variance was demonstrated. For powdered milk chocolate, the effects of increasing subsample size from 5 to 20 g and number of aliquots quantified from 1 to 2 on reducing sample preparation and analytical variances were demonstrated. This study serves as a template for application to other foods, and for extrapolation to different sizes of samples and
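The way independent sampling, sample preparation, and analytical stages combine, and how averaging more units shrinks each component, can be sketched as below. The variance values are illustrative only, not the study's fitted functions of peanut concentration:

```python
def total_testing_variance(v_sampling, v_prep, v_analytical,
                           n_samples=1, n_subsamples=1, n_aliquots=1):
    """Variance of a mean test result built from three independent stages.

    Each stage's variance is divided by the number of independent units
    averaged at (and below) that stage, so nested replication reduces
    the downstream components as well.
    """
    return (v_sampling / n_samples
            + v_prep / (n_samples * n_subsamples)
            + v_analytical / (n_samples * n_subsamples * n_aliquots))

# Illustrative variances only; sampling dominates, as the study reports.
base = total_testing_variance(9.0, 0.6, 0.4)
improved = total_testing_variance(9.0, 0.6, 0.4,
                                  n_samples=4, n_subsamples=4, n_aliquots=2)
print(base, improved)
```

With these placeholder numbers, quadrupling samples and subsamples and doubling aliquots cuts the total variance by roughly a factor of four, mirroring the paper's point that sampling replication gives the largest payoff.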
Tank 241-T-112, cores 185 and 186 analytical results for the final report
Steen, F.H.
1997-06-03
This document is the analytical laboratory report for tank 241-T-112 push mode core segments collected between February 26, 1997 and March 19, 1997. The segments were subsampled and analyzed in accordance with the Tank 241-T-112 Push Mode Core Sampling and Analysis Plan (TSAP) and the Safety Screening Data Quality Objective (DQO). The analytical results are included in the data summary table. None of the samples submitted for Differential Scanning Calorimetry and Total Alpha Activity (AT) exceeded notification limits as stated in the TSAP. The statistical results of the 95% confidence interval on the mean calculations are provided by the Tank Waste Remediation Systems Technical Basis Group in accordance with the Memorandum of Understanding and are not considered in this report.
Tank 241-AX-103, cores 212 and 214 analytical results for the final report
Steen, F.H.
1998-02-05
This document is the analytical laboratory report for tank 241-AX-103 push mode core segments collected between July 30, 1997 and August 11, 1997. The segments were subsampled and analyzed in accordance with the Tank 241-AX-103 Push Mode Core Sampling and Analysis Plan (TSAP) (Conner, 1997), the Safety Screening Data Quality Objective (DQO) (Dukelow, et al., 1995) and the Data Quality Objective to Support Resolution of the Organic Complexant Safety Issue (Organic DQO) (Turner, et al., 1995). The analytical results are included in the data summary table (Table 1). None of the samples submitted for Differential Scanning Calorimetry (DSC), Total Alpha Activity (AT), plutonium-239 (Pu-239), and Total Organic Carbon (TOC) exceeded notification limits as stated in the TSAP (Conner, 1997). The statistical results of the 95% confidence interval on the mean calculations are provided by the Tank Waste Remediation Systems Technical Basis Group in accordance with the Memorandum of Understanding (Schreiber, 1997) and are not considered in this report.
A multigroup radiation diffusion test problem: Comparison of code results with analytic solution
Shestakov, A I; Harte, J A; Bolstad, J H; Offner, S R
2006-12-21
We consider a 1D, slab-symmetric test problem for the multigroup radiation diffusion and matter energy balance equations. The test simulates diffusion of energy from a hot central region. Opacities vary with the cube of the frequency and radiation emission is given by a Wien spectrum. We compare results from two LLNL codes, Raptor and Lasnex, with tabular data that define the analytic solution.
Analytical Results from the Area G Nitrate Salt Samples Submitted to C-AAC
Drake, Lawrence Randall
2014-09-03
Table 1 is a summary of the analysis results for samples 4174-1-6/7 and 4174-2-6/7. The results in Table 2 are the major and trace metals analysis values. Samples 4174-1-6 and 4174-2-7 were introduced into a radiological glovebox in CMR for partitioning and analysis. Samples 4174-1-7 and 4174-2-6 were assayed by gamma spectrometry and then sent to TA-48 (C-NR) for further analysis. The validated analytical procedures used by C-AAC are cited at the end of this document. The results have not been approved by formal QA release.
NASA Astrophysics Data System (ADS)
Nestler, B.; Danilov, D.; Galenko, P.
2005-07-01
A phase-field model for non-isothermal solidification in multicomponent systems [SIAM J. Appl. Math. 64 (3) (2004) 775-799] consistent with the formalism of classic irreversible thermodynamics is used for numerical simulations of crystal growth in a pure material. The relation of this approach to the phase-field model by Bragard et al. [Interface Science 10 (2-3) (2002) 121-136] is discussed. 2D and 3D simulations of dendritic structures are compared with the analytical predictions of the Brener theory [Journal of Crystal Growth 99 (1990) 165-170] and with recent experimental measurements of solidification in pure nickel [Proceedings of the TMS Annual Meeting, March 14-18, 2004, pp. 277-288; European Physical Journal B, submitted for publication]. 3D morphology transitions are obtained for variations in surface energy and kinetic anisotropies at different undercoolings. In computations, we investigate the convergence behaviour of a standard phase-field model and of its thin interface extension at different undercoolings and at different ratios between the diffuse interface thickness and the atomistic capillary length. The influence of the grid anisotropy is accurately analyzed for a finite difference method and for an adaptive finite element method in comparison.
Oxygen-enriched diesel engine performance: A comparison of analytical and experimental results
Sekar, R.R.; Marr, W.W.; Cole, R.L.; Marciniak, T.J.; Assanis, D.N.; Schaus, J.E.
1990-01-01
Use of oxygen-enriched combustion air in diesel engines can lead to significant improvements in power density, as well as reductions in particulate emissions, but at the expense of higher NOx emissions. Oxygen enrichment would also lead to lower ignition delays and the opportunity to burn lower grade fuels. Analytical and experimental studies are being conducted in parallel to establish the optimal combination of oxygen level and diesel fuel properties. In this paper, cylinder pressure data acquired on a single-cylinder engine are used to generate heat release rates for operation under various oxygen contents. These derived heat release rates are in turn used to improve the combustion correlation -- and thus the prediction capability -- of the simulation code. It is shown that simulated and measured cylinder pressures and other performance parameters are in good agreement. The improved simulation can provide sufficiently accurate predictions of trends and magnitudes to be useful in parametric studies assessing the effects of oxygen enrichment and water injection on diesel engine performance. Measured ignition delays, NOx emissions, and particulate emissions are also compared with previously published data. The measured ignition delays are slightly lower than previously reported. Particulate emissions measured in this series of tests are significantly lower than previously reported. 14 refs., 10 figs., 1 tab.
Bicanic, Dane; Swarts, Jan; Luterotti, Svjetlana; Pietraperzia, Giangaetano; Dóka, Otto; de Rooij, Hans
2004-09-01
The concept of the optothermal window (OW) is proposed as a reliable analytical tool to rapidly determine the concentration of lycopene in a large variety of commercial tomato products in an extremely simple way (the determination is achieved without the need for pretreatment of the sample). The OW is a relative technique, as the information is deduced from a calibration curve that relates the OW data (i.e., the product of the absorption coefficient β and the thermal diffusion length μ) to the lycopene concentration obtained from spectrophotometric measurements. The accuracy of the method has been ascertained with a high correlation coefficient (R = 0.98) between the OW data and results acquired from the same samples by means of the conventional extraction spectrophotometric method. The intrinsic precision of the OW method is quite high (better than 1%), whereas the repeatability of the determination (RSD = 0.4-9.5%, n = 3-10) is comparable to that of spectrophotometry.
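A calibration curve of the kind described, relating the OW product βμ to spectrophotometric lycopene values, might be fitted as below. The data points are hypothetical, not the paper's measurements:

```python
import numpy as np

# Hypothetical calibration points (not the paper's data): OW readings
# (the product beta * mu, arbitrary units) vs. lycopene concentration
# from extraction spectrophotometry (mg/kg).
ow_signal = np.array([0.12, 0.25, 0.41, 0.55, 0.70])
lycopene = np.array([20.0, 45.0, 78.0, 101.0, 130.0])

# Least-squares calibration line: lycopene = a * signal + b
a, b = np.polyfit(ow_signal, lycopene, 1)

def lycopene_from_ow(signal):
    """Predict lycopene concentration (mg/kg) from an OW reading."""
    return a * signal + b

# Correlation of the calibration set (the paper reports R = 0.98)
r = np.corrcoef(ow_signal, lycopene)[0, 1]
print(round(float(r), 4))
```

Once the line is established, a new product's lycopene content follows from a single OW reading, which is what makes the method fast and pretreatment-free.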
NASA Technical Reports Server (NTRS)
Holman, Gordon
2010-01-01
Accelerated electrons play an important role in the energetics of solar flares. Understanding the process or processes that accelerate these electrons to high, nonthermal energies also depends on understanding the evolution of these electrons between the acceleration region and the region where they are observed through their hard X-ray or radio emission. Energy losses in the co-spatial electric field that drives the current-neutralizing return current can flatten the electron distribution toward low energies. This in turn flattens the corresponding bremsstrahlung hard X-ray spectrum toward low energies. The lost electron beam energy also enhances heating in the coronal part of the flare loop. Extending earlier work by Knight & Sturrock (1977), Emslie (1980), Diakonov & Somov (1988), and Litvinenko & Somov (1991), I have derived analytical and semi-analytical results for the nonthermal electron distribution function and the self-consistent electric field strength in the presence of a steady-state return current. I review these results, presented previously at the 2009 SPD Meeting in Boulder, CO, and compare them and computed X-ray spectra with numerical results obtained by Zharkova & Gordovskii (2005, 2006). The physical significance of similarities and differences in the results will be emphasized. This work is supported by NASA's Heliophysics Guest Investigator Program and the RHESSI Project.
Analytical Round Robin for Elastic-Plastic Analysis of Surface Cracked Plates: Phase I Results
NASA Technical Reports Server (NTRS)
Wells, D. N.; Allen, P. A.
2012-01-01
An analytical round robin for the elastic-plastic analysis of surface cracks in flat plates was conducted with 15 participants. Experimental results from a surface crack tension test in 2219-T8 aluminum plate provided the basis for the inter-laboratory study (ILS). The study proceeded in a blind fashion given that the analysis methodology was not specified to the participants, and key experimental results were withheld. This approach allowed the ILS to serve as a current measure of the state of the art for elastic-plastic fracture mechanics analysis. The analytical results and the associated methodologies were collected for comparison, and sources of variability were studied and isolated. The results of the study revealed that the J-integral analysis methodology using the domain integral method is robust, providing reliable J-integral values without being overly sensitive to modeling details. General modeling choices such as analysis code, model size (mesh density), crack tip meshing, or boundary conditions were not found to be sources of significant variability. For analyses controlled only by far-field boundary conditions, the greatest source of variability in the J-integral assessment is introduced through the constitutive model. This variability can be substantially reduced by using crack mouth opening displacements to anchor the assessment. Conclusions provide recommendations for analysis standardization.
Warwick, Peter D.; Breland, F. Clayton; Hackley, Paul C.; Dulong, Frank T.; Nichols, Douglas J.; Karlsen, Alexander W.; Bustin, R. Marc; Barker, Charles E.; Willett, Jason C.; Trippi, Michael H.
2006-01-01
In 2001 and 2002, the U.S. Geological Survey (USGS) and the Louisiana Geological Survey (LGS), through a Cooperative Research and Development Agreement (CRADA) with Devon SFS Operating, Inc. (Devon), participated in an exploratory drilling and coring program for coal-bed methane in north-central Louisiana. The USGS and LGS collected 25 coal core and cuttings samples from two coal-bed methane test wells that were drilled in west-central Caldwell Parish, Louisiana. The purpose of this report is to provide the results of the analytical program conducted on the USGS/LGS samples. The data generated from this project are summarized in various topical sections that include: 1. molecular and isotopic data from coal gas samples; 2. results of low-temperature ashing and X-ray analysis; 3. palynological data; 4. down-hole temperature data; 5. detailed core descriptions and selected core photographs; 6. coal physical and chemical analytical data; 7. coal gas desorption results; 8. methane and carbon dioxide coal sorption data; 9. coal petrographic results; and 10. geophysical logs.
NASA Astrophysics Data System (ADS)
Pasternack, G. B.; Wyrick, J. R.; Jackson, J. R.
2014-12-01
Long practiced in fisheries, visual substrate mapping of coarse-bedded rivers is eschewed by geomorphologists for its inaccuracy and limited sizing data. Geomorphologists instead perform time-consuming measurements of surficial grains, with the few sampled locations precluding spatially explicit mapping and analysis of sediment facies. Remote sensing works for bare land, but not for vegetated or subaqueous sediments. Because visual systems apply the log2 Wentworth scale made for sieving, they suffer from human inability to readily discern those classes. We hypothesized that size classes centered on the PDF of the anticipated sediment size distribution would enable field crews to accurately (i) identify presence/absence of each class in a facies patch and (ii) estimate the relative amount of each class to within 10%. We first tested 6 people using 14 measured samples with different mixtures. Next, we carried out facies mapping for ~ 37 km of the lower Yuba River in California. Finally, we tested the resulting data to see if it produced statistically significant hydraulic-sedimentary-geomorphic results. Presence/absence performance error was 0-4% for four people, 13% for one person, and 33% for one person. The last person was excluded from further effort. For abundance estimation, performance error was 1% for one person, 7-12% for three people, and 33% for one person. This last person was further trained and re-tested. We found that the samples easiest to visually quantify were unimodal and bimodal, while the most difficult had nearly equal amounts of each size. This confirms psychological studies showing that humans have more difficulty quantifying abundances of subgroups when confronted with well-mixed groups. In the Yuba, mean grain size decreased downstream, as is typical of an alluvial river. When averaged by reach, mean grain size and bed slope were correlated with an r2 of 0.95. At the morphological unit (MU) scale, eight in-channel bed MU types had an r2 of 0.90 between mean
Bounding the Higgs width at the LHC using full analytic results for $gg \to e^- e^+ \mu^- \mu^+$
Campbell, John M.; Ellis, R. Keith; Williams, Ciaran
2014-04-09
We revisit the hadronic production of the four-lepton final state, e^{-}e^{+}μ^{-}μ^{+}, through the fusion of initial state gluons. This process is mediated by loops of quarks, and we provide the first full analytic results for helicity amplitudes that account for both the effects of the quark mass in the loop and off-shell vector bosons. The analytic results have been implemented in the Monte Carlo program MCFM and are both fast and numerically stable in the region of low Z transverse momentum. We use our results to study the interference between Higgs-mediated and continuum production of four-lepton final states, which is necessary in order to obtain accurate theoretical predictions outside the Higgs resonance region. We have confirmed and extended a recent analysis of Caola and Melnikov that proposes to use a measurement of the off-shell region to constrain the total width of the Higgs boson. Using a simple cut-and-count method, existing LHC data should bound the width at the level of 25-45 times the Standard Model expectation. We investigate the power of using a matrix element method (MEM) to construct a kinematic discriminant to sharpen the constraint. In our analysis, a simple cut on the MEM discriminant improves the bound on the Higgs width by a factor of about 1.6 compared to an invariant mass cut m_{4l} > 300 GeV.
A comparison of analytical results for 20 K LOX/hydrogen instabilities
NASA Technical Reports Server (NTRS)
Klem, Mark D.; Breisacher, Kevin J.
1990-01-01
Test data from NASA Lewis' Effect of Thrust Per Element on Combustion Stability Characteristics of Hydrogen-Oxygen Rocket Engines test program are used to validate two recently released stability analysis tools. The first tool is a design methodology called ROCCID (ROCket Combustor Interactive Design). ROCCID is an interactive design and analysis methodology that uses existing performance and combustion stability analysis codes. The second tool is HICCIP (High frequency Injection Coupled Combustion Instability Program). HICCIP is a recently developed combustion stability analysis model. Using a matrix of models, results from analytic comparisons with 20 K LOX/H2 experimental data are presented.
Effect of a parametric driving force on noise-induced transitions: Analytical results
NASA Astrophysics Data System (ADS)
Plata, J.
1999-02-01
The suppression, by a parametric harmonic action, of noise-induced oscillations in an underdamped pendulum with nonlinear friction, recently reported by Landa et al. [Phys. Rev. E 56, 1465 (1997)], is studied in an approximately soluble model system. In the high-frequency limit, a process of consecutive averaging over two widely separated relevant time scales reveals the analogy of the problem with a noise-induced transition whose critical point is shifted by the driving term. The analytical results obtained for the probability distribution function and the spectrum allow us to understand and control the effect.
Tank 241-AW-105, grab samples, analytical results for the final report
Esch, R.A.
1997-02-20
This document is the final report for tank 241-AW-105 grab samples. Twenty grab samples were collected from risers 10A and 15A on August 20 and 21, 1996, of which eight were designated for the K Basin sludge compatibility and mixing studies. This document presents the analytical results for the remaining twelve samples. Analyses were performed in accordance with the Compatibility Grab Sampling and Analysis Plan (TSAP) and the Data Quality Objectives for Tank Farms Waste Compatibility Program (DQO). The results for the previous sampling of this tank were reported in WHC-SD-WM-DP-149, Rev. 0, 60-Day Waste Compatibility Safety Issue and Final Results for Tank 241-AW-105, Grab Samples 5AW-95-1, 5AW-95-2 and 5AW-95-3. Three supernate samples exceeded the TOC notification limit (30,000 µg C/g dry weight). Appropriate notifications were made. No immediate notifications were required for any other analyte. The TSAP requested analyses for polychlorinated biphenyls (PCB) for all liquid and centrifuged solid subsamples. The PCB analysis of the liquid samples has been delayed and will be presented in a revision to this document.
Complex dynamics of memristive circuits: Analytical results and universal slow relaxation
NASA Astrophysics Data System (ADS)
Caravelli, F.; Traversa, F. L.; Di Ventra, M.
2017-02-01
Networks with memristive elements (resistors with memory) are being explored for a variety of applications ranging from unconventional computing to models of the brain. However, analytical results that highlight the role of the graph connectivity on the memory dynamics are still few, thus limiting our understanding of these important dynamical systems. In this paper, we derive an exact matrix equation of motion that takes into account all the network constraints of a purely memristive circuit, and we employ it to derive analytical results regarding its relaxation properties. We are able to describe the memory evolution in terms of orthogonal projection operators onto the subspace of fundamental loop space of the underlying circuit. This orthogonal projection explicitly reveals the coupling between the spatial and temporal sectors of the memristive circuits and compactly describes the circuit topology. For the case of disordered graphs, we are able to explain the emergence of a power-law relaxation as a superposition of exponential relaxation times with a broad range of scales using random matrices. This power law is also universal, namely independent of the topology of the underlying graph but dependent only on the density of loops. In the case of circuits subject to alternating voltage instead, we are able to obtain an approximate solution of the dynamics, which is tested against a specific network topology. These results suggest a much richer dynamics of memristive networks than previously considered.
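The power-law-from-exponentials mechanism described above can be checked numerically. The sketch below is illustrative only (the rate distribution and all parameters are assumptions, not taken from the paper): superposing exponential relaxations exp(-k t) with rates k drawn from a broad power-law density p(k) ~ k^(-1/2) yields an approximate power-law decay t^(-1/2) over the intermediate time window.

```python
import numpy as np

# Assumed rate distribution: p(k) ~ k^(-1/2) on [k_min, k_max], sampled by
# inverting the CDF. The superposed relaxation then decays roughly as t^(-1/2)
# for 1/k_max << t << 1/k_min.
rng = np.random.default_rng(0)

k_min, k_max, n_modes = 1e-3, 1e3, 50_000
u = rng.random(n_modes)
k = (np.sqrt(k_min) + u * (np.sqrt(k_max) - np.sqrt(k_min))) ** 2

t = np.logspace(-0.5, 1.5, 30)                      # intermediate window
relax = np.array([np.exp(-k * ti).mean() for ti in t])

# Fit log C(t) = alpha * log t + const; alpha should be close to -1/2
alpha, _ = np.polyfit(np.log(t), np.log(relax), 1)
print(f"effective relaxation exponent: {alpha:.2f}")
```

A log-uniform rate density (p(k) ~ 1/k) would instead give a logarithmic decay, so the exponent of the rate density, which the paper ties to the density of loops, directly sets the power law.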
Van Overwalle, Frank; Baetens, Kris; Mariën, Peter; Vandekerckhove, Marie
2015-08-01
A recent meta-analysis explored the role of the cerebellum in social cognition and documented that this part of the brain is critically implicated in social cognition, especially in more abstract and complex forms of mentalizing. The authors found an overlap with clusters involved in sensorimotor (during mirror and self-judgment tasks) as well as in executive processes (across all tasks) documented in earlier nonsocial cerebellar meta-analyses, and hence interpreted their results in terms of a domain-general function of the cerebellum. However, these meta-analytic results might be interpreted in a different, complementary way. Indeed, the results reveal a striking overlap with the parcellation of cerebellar topography offered by a recent functional connectivity analysis. In particular, the majority of social cognitive activity in the cerebellum can also be explained as located within the boundaries of a default/mentalizing network of the cerebellum, with the exception of the involvement of primary and integrative somatomotor networks for self-related and mirror tasks, respectively. Given the substantial overlap, a novel interpretation of the meta-analytic findings is put forward suggesting that cerebellar activity during social judgments might reflect a more domain-specific mentalizing functionality in some areas of the cerebellum than assumed before.
Comparison of experimental and analytical results for free vibration of laminated composite plates
Maryuama, Koichi; Narita, Yoshihiro; Ichinomiya, Osamu
1995-11-01
Fibrous composite materials are increasingly employed in high performance structures, including pressure vessel and piping applications. These materials are usually used in the form of laminated flat or curved plates, and an understanding of the natural frequencies and corresponding mode shapes is essential to a reliable structural design. Although many references have been published on the analytical study of laminated composite plates, only a limited number of experimental studies have dealt with the vibration characteristics of the plates. This paper presents both experimental and analytical results for these problems. In the experiment, holographic interferometry is used to measure the resonant frequencies and corresponding mode shapes of six-layered CFRP (carbon fiber reinforced plastic) composite plates. The material constants of a lamina are calculated from fiber and matrix material constants by using several different composite rules. With the calculated constants, the natural frequencies of the laminated CFRP plates are theoretically determined by the Ritz method. From the comparison of the two sets of results, the effect of choosing different composite rules in the vibration study of laminated composite plates is discussed.
Tank 241-TX-118, core 236 analytical results for the final report
ESCH, R.A.
1998-11-19
This document is the analytical laboratory report for tank 241-TX-118 push mode core segments collected between April 1, 1998 and April 13, 1998. The segments were subsampled and analyzed in accordance with the Tank 241-TX-118 Push Mode Core Sampling and Analysis Plan (TSAP) (Benar, 1997), the Safety Screening Data Quality Objective (DQO) (Dukelow, et al., 1995), the Data Quality Objective to Support Resolution of the Organic Complexant Safety Issue (Organic DQO) (Turner, et al., 1995) and the Historical Model Evaluation Data Requirements (Historical DQO) (Simpson, et al., 1995). The analytical results are included in the data summary table (Table 1). None of the samples submitted for Differential Scanning Calorimetry (DSC) and Total Organic Carbon (TOC) exceeded notification limits as stated in the TSAP (Benar, 1997). One sample, core 236 segment 1 lower half solids (S98T001524), exceeded the Total Alpha Activity (AT) analysis notification limit of 38.4 µCi/g (based on a bulk density of 1.6). Appropriate notifications were made. Plutonium 239/240 analysis was requested as a secondary analysis. The statistical results of the 95% confidence interval on the mean calculations are provided by the Tank Waste Remediation Systems Technical Basis Group in accordance with the Memorandum of Understanding (Schreiber, 1997) and are not considered in this report.
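The mass-based notification limit quoted above follows from dividing a volume-based screening limit by the bulk density. The sketch below illustrates that arithmetic; the 61.5 µCi/mL volume limit is an assumed value typical of such safety screening criteria, not taken from this report, and only the division by bulk density is stated in the text.

```python
# Sketch of the notification-limit arithmetic (assumed volume limit).
VOLUME_LIMIT_UCI_PER_ML = 61.5   # uCi/mL, assumed volume-based screening limit

def alpha_notification_limit(bulk_density_g_per_ml):
    """Mass-based total alpha limit (uCi/g) for a given bulk density (g/mL)."""
    return VOLUME_LIMIT_UCI_PER_ML / bulk_density_g_per_ml

limit = alpha_notification_limit(1.6)
print(f"notification limit at density 1.6 g/mL: {limit:.1f} uCi/g")

def exceeds(result_uci_per_g, bulk_density_g_per_ml=1.6):
    """True if a measured total alpha activity triggers notification."""
    return result_uci_per_g > alpha_notification_limit(bulk_density_g_per_ml)
```

With a bulk density of 1.6 g/mL this reproduces the 38.4 µCi/g figure in the text, and a denser sample would have a correspondingly lower mass-based limit.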
Tank 241-T-203, core 190 analytical results for the final report
Steen, F.H.
1997-08-05
This document is the analytical laboratory report for tank 241-T-203 push mode core segments collected on April 17, 1997 and April 18, 1997. The segments were subsampled and analyzed in accordance with the Tank 241-T-203 Push Mode Core Sampling and Analysis Plan (TSAP) (Schreiber, 1997a), the Safety Screening Data Quality Objective (DQO) (Dukelow, et al., 1995) and the Letter of Instruction for Core Sample Analysis of Tanks 241-T-201, 241-T-202, 241-T-203, and 241-T-204 (LOI) (Hall, 1997). The analytical results are included in the data summary report (Table 1). None of the samples submitted for Differential Scanning Calorimetry (DSC), Total Alpha Activity (AT) and Total Organic Carbon (TOC) exceeded notification limits as stated in the TSAP (Schreiber, 1997a). The statistical results of the 95% confidence interval on the mean calculations are provided by the Tank Waste Remediation Systems (TWRS) Technical Basis Group in accordance with the Memorandum of Understanding (Schreiber, 1997b) and are not considered in this report.
Analytical 3D views and virtual globes — scientific results in a familiar spatial context
NASA Astrophysics Data System (ADS)
Tiede, Dirk; Lang, Stefan
In this paper we introduce analytical three-dimensional (3D) views as a means for effective and comprehensible information delivery, using virtual globes and the third dimension as an additional information carrier. Four case studies are presented, in which information extraction results from very high spatial resolution (VHSR) satellite images were conditioned and aggregated or disaggregated to regular spatial units. The case studies were embedded in the context of: (1) urban life quality assessment (Salzburg/Austria); (2) post-disaster assessment (Harare/Zimbabwe); (3) emergency response (Lukole/Tanzania); and (4) contingency planning (simulated crisis scenario/Germany). The results are made available in different virtual globe environments, using the implemented contextual data (such as satellite imagery, aerial photographs, and auxiliary geodata) as valuable additional context information. Both day-to-day users and high-level decision makers are the intended audience for this tailored information product. The degree of abstraction required for understanding complex analytical content is balanced with the ease and appeal by which the context is conveyed.
Statistical mechanics of two hard spheres in a spherical pore, exact analytic results in D dimension
NASA Astrophysics Data System (ADS)
Urrutia, Ignacio; Szybisz, Leszek
2010-03-01
This work is devoted to the exact statistical mechanics treatment of simple inhomogeneous few-body systems. The system of two hard spheres (HSs) confined in a hard spherical pore is systematically analyzed in terms of its dimensionality D. The canonical partition function and the one- and two-body distribution functions are analytically evaluated, and a scheme of iterative construction of the D+1 system properties is presented. We analyze in detail both the effect of high confinement, when particles become caged, and the low density limit. Other confinement situations are also studied analytically, and several relations are traced among the partition functions of two HSs in a spherical pore, two bonded HSs in a spherical pore, and two HSs on a spherical surface. These relations give meaning to the limiting caging and low-density behavior. Turning to the system of two HSs in a spherical pore, we also analytically evaluate the pressure tensor. The thermodynamic properties of the system are discussed. To this end, we purposely focus on the overall characteristics of the inhomogeneous fluid system rather than on the peculiarities of a few-body system. Hence, we analyze the equation of state, the pressure at the wall, and the fluid-substrate surface tension. The consequences of the new results for the spherically confined system of two HSs in D dimensions on the confined many-HS system are investigated. New constant coefficients involved in the low density limit properties of the open and closed systems of many HSs in a spherical pore are obtained for arbitrary D. The complementary system of many HSs surrounding a HS (a cavity inside a bulk HS system) is also discussed.
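The central object above, the configurational partition function of two hard spheres in a hard spherical pore, can be cross-checked by Monte Carlo. The sketch below (assumed radii, not the paper's analytic method) estimates, for D = 3, the accessible fraction of configuration space: the probability that two centers placed uniformly in the accessible ball of radius Rc = R - a are farther apart than sigma = 2a.

```python
import numpy as np

rng = np.random.default_rng(1)

def uniform_in_ball(n, radius, dim=3):
    """Sample n points uniformly inside a dim-ball of the given radius."""
    v = rng.normal(size=(n, dim))
    v /= np.linalg.norm(v, axis=1, keepdims=True)   # random directions
    r = radius * rng.random(n) ** (1.0 / dim)       # radial CDF ~ r^dim
    return v * r[:, None]

R, a = 2.5, 0.5                   # pore radius and sphere radius (assumed)
Rc, sigma = R - a, 2 * a          # accessible radius for centers, contact distance
n = 200_000
d = np.linalg.norm(uniform_in_ball(n, Rc) - uniform_in_ball(n, Rc), axis=1)
f = np.mean(d > sigma)            # Z / V_c^2, the accessible fraction
print(f"accessible fraction of configuration space: {f:.4f}")
```

For these values the known distance distribution between two uniform points in a ball gives an exact accessible fraction of about 0.9097, so a result near that value validates the sampling.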
Tank 241-A-101, cores 154 and 156 analytical results for the 45 day report
Steen, F.H.
1996-10-18
This document is the 45-day laboratory report for tank 241-A-101 push mode core segments collected between July 11, 1996 and July 25, 1996. The segments were subsampled and analyzed in accordance with the Tank 241-A-101 Push Mode Core Sampling and Analysis Plan (TSAP) (Field, 1996) and the Safety Screening Data Quality Objective (DQO) (Dukelow, et al., 1995). The analytical results are included in the data summary table (Table 1). None of the samples submitted for Total Alpha Activity (AT) or Differential Scanning Calorimetry (DSC) analyses exceeded notification limits as stated in the Safety Screening DQO (Dukelow, et al., 1995). Statistical evaluation of results by calculating the 95% upper confidence limit is not performed by the 222-S Laboratory and is not considered in this report. Primary safety screening results and the raw data from thermogravimetric analysis (TGA) and DSC analyses are included in this report.
Silva, Romesh; Amouzou, Agbessi; Munos, Melinda; Marsh, Andrew; Hazel, Elizabeth; Victora, Cesar; Black, Robert; Bryce, Jennifer
2016-01-01
Introduction Most low-income countries lack complete and accurate vital registration systems. As a result, measures of under-five mortality rates rely mostly on household surveys. In collaboration with partners in Ethiopia, Ghana, Malawi, and Mali, we assessed the completeness and accuracy of reporting of births and deaths by community-based health workers, and the accuracy of annualized under-five mortality rate estimates derived from these data. Here we report on results from Ethiopia, Malawi and Mali. Method In all three countries, community health workers (CHWs) were trained, equipped and supported to report pregnancies, births and deaths within defined geographic areas over a period of at least fifteen months. In-country institutions collected these data every month. At each study site, we administered a full birth history (FBH) or full pregnancy history (FPH) to women of reproductive age via a census of households in Mali and via household surveys in Ethiopia and Malawi. Using these FBHs/FPHs as a validation data source, we assessed the completeness of the counts of births and deaths and the accuracy of under-five, infant, and neonatal mortality rates from the community-based method against the retrospective FBH/FPH for rolling twelve-month periods. For each method we calculated total cost, average annual cost per 1,000 population, and average cost per vital event reported. Results On average, CHWs submitted monthly vital event reports for over 95 percent of catchment areas in Ethiopia and Malawi, and for 100 percent of catchment areas in Mali. The completeness of vital events reporting by CHWs varied: we estimated that 30%-90% of annualized expected births (i.e. the number of births estimated using a FPH) were documented by CHWs and 22%-91% of annualized expected under-five deaths were documented by CHWs. Resulting annualized under-five mortality rates based on the CHW vital events reporting were, on average, under-estimated by 28% in Ethiopia, 32% in
Results and limits in the 1-D analytical modeling for the asymmetric DG SOI MOSFET
NASA Astrophysics Data System (ADS)
Cobianu, O.; Glesner, M.
2008-05-01
This paper presents the results and the limits of 1-D analytical modeling of the electrostatic potential in the low-doped p-type silicon body of the asymmetric n-channel DG SOI MOSFET, where the contribution to the asymmetry comes only from the p- and n-type doping of the polysilicon used as the gate electrodes. Solving Poisson's equation in Matlab, with boundary conditions based on the continuity of the normal electric displacement at the interfaces and on the existence of a minimum electrostatic potential, we obtain a minimum potential that varies slowly in the central zone of the silicon and remains pinned around 0.46 V as the applied VGS voltage varies from 0.45 V to 0.95 V. The paper clearly states the validity domain of the analytical solution and the important effect of the location of the minimum electrostatic potential on the potential variation at the interfaces as a function of the applied VGS voltage.
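The kind of boundary-value problem described above can be sketched with a generic 1-D finite-difference solve. The example below is purely illustrative (thickness, doping, and surface potentials are assumed values, not the paper's calibrated model, so it does not reproduce the 0.46 V pinning): it solves eps * phi'' = q * Na across the body with Dirichlet surface potentials and locates the minimum potential.

```python
import numpy as np

q = 1.602e-19            # C, elementary charge
eps_si = 1.04e-10        # F/m, silicon permittivity
Na = 1e21                # m^-3, low p-type doping (assumed)
t_si = 20e-9             # m, silicon body thickness (assumed)

n = 201
x = np.linspace(0.0, t_si, n)
h = x[1] - x[0]

phi_front, phi_back = 0.60, 0.40   # assumed surface potentials (V)
rhs = np.full(n, q * Na / eps_si)  # phi'' = q*Na/eps (full-depletion approximation)

# Tridiagonal system for the interior nodes: phi[i-1] - 2 phi[i] + phi[i+1] = rhs*h^2
A = (np.diag(-2.0 * np.ones(n - 2))
     + np.diag(np.ones(n - 3), 1)
     + np.diag(np.ones(n - 3), -1))
b = rhs[1:-1] * h**2
b[0] -= phi_front                  # move known boundary values to the RHS
b[-1] -= phi_back
phi = np.empty(n)
phi[0], phi[-1] = phi_front, phi_back
phi[1:-1] = np.linalg.solve(A, b)

i_min = np.argmin(phi)
print(f"minimum potential {phi[i_min]:.3f} V at x = {x[i_min] * 1e9:.1f} nm")
```

At this low doping the space-charge term barely bends the profile, so the minimum sits at the lower-potential surface; raising Na moves the minimum into the body interior, which is the regime the paper analyzes.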
NASA Astrophysics Data System (ADS)
Gralla, Samuel; Lupsasca, Alexandru; Philippov, Alexander
2017-01-01
Most previous studies of the pulsar magnetosphere have made three unrealistic assumptions: rapid rotation, pure magnetic dipole, and low stellar compaction (i.e. flat spacetime). We relax all three assumptions with a combined numerical-analytical technique that leverages the rotation rate as a small parameter. We consider a perfectly conducting, nearly spherical star with a force-free magnetosphere. We derive a general approach and then provide definite results for magnetic fields that are symmetric about an axis inclined relative to the rotation axis. We discuss polar cap shapes and pair production regions for a variety of magnetic field configurations. These results are relevant for X-ray pulsations as well as coherent radio emission.
Tank 241-T-105, cores 205 and 207 analytical results for the final report
Esch, R. A.
1997-10-21
This document is the final laboratory report for tank 241-T-105 push mode core segments collected between June 24, 1997 and June 30, 1997. The segments were subsampled and analyzed in accordance with the Tank Push Mode Core Sampling and Analysis Plan (TSAP) (Field, 1997), the Tank Safety Screening Data Quality Objective (Safety DQO) (Dukelow, et al., 1995) and the Tank 241-T-105 Sample Analysis memo (Field, 1997a). The analytical results are included in Table 1. None of the subsamples submitted for differential scanning calorimetry (DSC) analysis or total alpha activity (AT) exceeded the notification limits as stated in the TSAP (Field, 1997). The statistical results of the 95% confidence interval on the mean calculations are provided by the Tank Waste Remediation Systems (TWRS) Technical Basis Group in accordance with the Memorandum of Understanding (Schreiber, 1997) and are not considered in this report.
Tank 241-T-204, core 188 analytical results for the final report
Nuzum, J.L.
1997-07-24
This document is the final laboratory report for tank 241-T-204. Push mode core segments were removed from riser 3 between March 27, 1997, and April 11, 1997. Segments were received and extruded at the 222-S Laboratory. Analyses were performed in accordance with the Tank 241-T-204 Push Mode Core Sampling and Analysis Plan (TSAP) (Winkleman, 1997), the Letter of Instruction for Core Sample Analysis of Tanks 241-T-201, 241-T-202, 241-T-203, and 241-T-204 (LOI) (Bell, 1997), and the Safety Screening Data Quality Objective (DQO) (Dukelow, et al., 1995). None of the subsamples submitted for total alpha activity (AT) or differential scanning calorimetry (DSC) analyses exceeded the notification limits stated in the DQO. The statistical results of the 95% confidence interval on the mean calculations are provided by the Tank Waste Remediation Systems Technical Basis Group and are not considered in this report.
Tank 241-BY-107, Cores 151 and 161, Analytical Results for the 45 day report
Fritts, L.L.
1996-09-09
This document is the 45-day laboratory report for tank 241-BY-107. Push mode core segments were removed from risers 8 and 9B between June 5, 1996, and July 26, 1996. Segments were received and extruded at the 222-S Analytical Laboratory. Analyses were performed in accordance with the Tank 241-BY-107 Push Mode Core Sampling and Analysis Plan (TSAP) and the Safety Screening Data Quality Objective (DQO). None of the subsamples submitted for Total Alpha Activity (AT) analysis or Differential Scanning Calorimetry (DSC) exceeded the notification limits as stated in the DQO. Statistical evaluation of results by calculating the 95% upper confidence limit is not performed by the 222-S Laboratory and is not considered in this report. Primary safety screening results are included in the data summary table. The raw data from DSC and TGA analyses are included in this report.
A visual analytics approach for understanding biclustering results from microarray data
Santamaría, Rodrigo; Therón, Roberto; Quintales, Luis
2008-01-01
Background Microarray analysis is an important area of bioinformatics. In the last few years, biclustering has become one of the most popular methods for classifying data from microarrays. Although biclustering can be used in any kind of classification problem, nowadays it is mostly used for microarray data classification. A large number of biclustering algorithms have been developed over the years; however, little effort has been devoted to the representation of the results. Results We present an interactive framework that helps to infer differences or similarities between biclustering results, to unravel trends and to highlight robust groupings of genes and conditions. These linked representations of biclusters can complement biological analysis and reduce the time spent by specialists on interpreting the results. Within the framework, besides other standard representations, a visualization technique is presented which is based on a force-directed graph where biclusters are represented as flexible overlapped groups of genes and conditions. This microarray analysis framework (BicOverlapper) is available online. Conclusion The main visualization technique, tested with different biclustering results on a real dataset, allows researchers to extract interesting features of the biclustering results, especially the highlighting of overlapping zones that usually represent robust groups of genes and/or conditions. The visual analytics methodology will permit biology experts to study biclustering results without inspecting an overwhelming number of biclusters individually. PMID:18505552
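The overlap highlighted by the visualization above can be quantified simply. The sketch below is a minimal illustration with hypothetical data, not BicOverlapper's actual API: it treats a bicluster as a set of (gene, condition) cells and measures the overlap between two biclusters as the Jaccard index of their cell sets.

```python
# A bicluster is modeled as (set of genes, set of conditions); its cells are
# the Cartesian product of the two sets.
def cells(bicluster):
    genes, conditions = bicluster
    return {(g, c) for g in genes for c in conditions}

def jaccard(b1, b2):
    """Jaccard overlap of two biclusters' (gene, condition) cells."""
    a, b = cells(b1), cells(b2)
    return len(a & b) / len(a | b)

# Two hypothetical biclusters sharing two genes and one condition
b1 = ({"g1", "g2", "g3"}, {"c1", "c2"})
b2 = ({"g2", "g3", "g4"}, {"c2", "c3"})
print(f"overlap (Jaccard): {jaccard(b1, b2):.2f}")
```

Biclusters with a high pairwise Jaccard index would be drawn as strongly overlapping groups in a force-directed layout, which is the "robust grouping" signal the abstract describes.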
Szafraniec, L.L.; Beaudry, W.T.; Bossle, P.C.; Durst, H.D.; Ellzy, M.W.
1994-07-01
Nineteen samples from the United Nations Special Commission 65 on Iraq (UNSCOM 65) were analyzed for chemical warfare (CW) related compounds using a variety of highly sophisticated spectroscopic and chromatographic techniques. The samples consisted of six water, six soil, two vegetation, one cloth, one wood, and two mortar shell crosscut sections. No sulfur or nitrogen mustards, Lewisite, or any of their degradation products were detected. No nerve agents were observed, and no tin was detected, precluding the presence of stannic chloride, a component of NC, a World War I choking agent. Diethyl phosphoric acid was unambiguously identified in three water samples, and ethyl phosphoric acid was tentatively identified, at very low levels, in one water sample. These phosphoric acids are degradation products of Amiton, many commercially available pesticides, as well as Tabun, and are impurities in munitions-grade Tabun. No definitive conclusions concerning the source of these two chemicals could be drawn from the analytical results.
NASA Astrophysics Data System (ADS)
Milošević, M.; Dimitrijević, D. D.; Djordjević, G. S.; Stojanović, M. D.
2016-06-01
The role that tachyon fields may play in the evolution of the early universe is discussed in this paper. We consider the evolution of a flat and homogeneous universe governed by a tachyon scalar field with the DBI-type action and calculate the slow-roll parameters of inflation, the scalar spectral index (n), and the tensor-to-scalar ratio (r) for the given potentials. We pay special attention to the inverse power potential, first of all to V(x) ~ x^{-4}, and compare the available results obtained by analytical and numerical methods with those obtained by observation. It is shown that the computed values of the observational parameters and the observed ones are in good agreement for high values of the constant X_0. The possibility that the influence of the radion field can extend the range of acceptable values of the constant X_0 to the string-theory-motivated sector of its values is briefly considered.
NASA Technical Reports Server (NTRS)
Kato, Shoji; Honma, Fumio; Matsumoto, Ryoji
1988-01-01
Viscous instability of the transonic region of conventional geometrically thin alpha-type accretion disks is examined analytically. For simplicity, isothermal disks and isothermal perturbations are assumed. It is found that when the value of alpha is larger than a critical value, the disk is unstable against two types of perturbations. The first is locally propagating perturbations of inertial-acoustic waves. The results suggest the possibility that these unstable perturbations develop into overstable global oscillations which are restricted to the innermost region of the disk. The second is standing growing perturbations localized just at the transonic point. The cause of these instabilities is that the azimuthal component of the Lagrangian velocity variation associated with the perturbations becomes in phase with the variation of the viscous stress force. Because of this phase matching, work is done on the perturbations and they are amplified.
Tank 241-B-109, cores 169 and 170 analytical results for the final report
Nuzum, J.L.
1997-01-20
This document is the final laboratory report for tank 241-B-109. Push mode core segments were removed from risers 4 and 7 between August 22, 1996, and August 27, 1996. Segments were received and extruded at the 222-S Analytical Laboratory. Analyses were performed in accordance with the Tank 241-B-109 Push Mode Core Sampling and Analysis Plan (TSAP) and the Tank Safety Screening Data Quality Objective (DQO). The results for primary safety screening data, including differential scanning calorimetry (DSC) analyses, thermogravimetric analyses (TGA), bulk density determinations, and total alpha activity analyses for each subsegment, were presented in the 45-Day report (Rev. 0 of this document). The 45-Day report is included as Part II of this revision. The raw data for DSC and TGA are found in Part II of this report. The raw data for all other analyses are included in this revision.
Interacting steps with finite-range interactions: Analytical approximation and numerical results
NASA Astrophysics Data System (ADS)
Jaramillo, Diego Felipe; Téllez, Gabriel; González, Diego Luis; Einstein, T. L.
2013-05-01
We calculate an analytical expression for the terrace-width distribution P(s) for an interacting step system with nearest- and next-nearest-neighbor interactions. Our model is derived by mapping the step system onto a statistically equivalent one-dimensional system of classical particles. The validity of the model is tested with several numerical simulations and experimental results. We explore the effect of the range of interactions q on the functional form of the terrace-width distribution and pair correlation functions. For physically plausible interactions, we find modest changes when next-nearest neighbor interactions are included and generally negligible changes when more distant interactions are allowed. We discuss methods for extracting from simulated experimental data the characteristic scale-setting terms in assumed potential forms.
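For terrace-width distributions, this literature frequently compares against the generalized Wigner surmise, a closed form parameterized by the repulsion exponent ρ. The sketch below uses that standard textbook form (not the authors' finite-range model; ρ = 2 is an assumed example value), with constants fixed so P(s) has unit area and unit mean:

```python
import numpy as np
from math import gamma

def generalized_wigner(s, rho):
    """Generalized Wigner surmise P(s) = a * s**rho * exp(-b * s**2),
    with a and b chosen so P integrates to 1 and has unit mean <s> = 1."""
    b = (gamma((rho + 2) / 2) / gamma((rho + 1) / 2)) ** 2
    a = 2 * b ** ((rho + 1) / 2) / gamma((rho + 1) / 2)
    return a * s ** rho * np.exp(-b * s ** 2)

s = np.linspace(1e-6, 5.0, 20000)
P = generalized_wigner(s, rho=2.0)  # rho = 2: illustrative repulsion exponent
ds = s[1] - s[0]
norm = np.sum(P) * ds       # numerical check: should be ~1
mean = np.sum(s * P) * ds   # numerical check: should be ~1
```

Fitting such a form to simulated P(s) is one way to extract the scale-setting interaction terms the abstract mentions.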
Propagation of CMEs in the interplanetary medium: Numerical and analytical results
NASA Astrophysics Data System (ADS)
González-Esparza, J. A.; Cantó, J.; González, R. F.; Lara, A.; Raga, A. C.
2003-08-01
We study the propagation of coronal mass ejections (CMEs) from near the Sun to 1 AU by comparing results from two different models: a 1-D, hydrodynamic, single-fluid, numerical model (González-Esparza et al., 2003a) and an analytical model of the dynamical evolution of supersonic velocity fluctuations at the base of the solar wind, applied to the propagation of CMEs (Cantó et al., 2002). Both models predict that a fast CME initially moves through the inner heliosphere with a quasi-constant velocity (intermediate between the initial CME velocity and the velocity of the ambient solar wind ahead) until a 'critical distance' at which the CME begins to decelerate, approaching the ambient solar wind velocity. This critical distance depends on the characteristics of the CME (initial velocity, density, and temperature) as well as on those of the ambient solar wind. Given typical parameters based on observations, this critical distance can vary from 0.3 AU to beyond 1 AU from the Sun. These results explain the radial evolution of the velocity of fast CMEs in the inner heliosphere inferred from interplanetary scintillation (IPS) observations (Manoharan et al., 2001, 2003; Tokumaru et al., 2003). On the other hand, the numerical results show that a fast CME and its associated interplanetary (IP) shock follow different heliocentric evolutions: the IP shock always propagates faster than its CME driver, and the latter begins to decelerate well before the shock.
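The two-phase kinematics described above (quasi-constant speed out to a critical distance, then deceleration toward the ambient wind) can be caricatured with a toy profile. All numbers below (900 and 400 km/s, a 0.4 AU critical distance, a 0.5 AU decay scale) are illustrative assumptions, not values from either model:

```python
import numpy as np

def cme_velocity(r, v_cme=900.0, v_sw=400.0, r_crit=0.4, scale=0.5):
    """Toy CME speed profile (km/s) vs heliocentric distance r (AU):
    constant at v_cme inside r_crit, then exponential relaxation toward
    the ambient solar-wind speed v_sw over the distance `scale`."""
    r = np.asarray(r, dtype=float)
    relaxed = v_sw + (v_cme - v_sw) * np.exp(-(r - r_crit) / scale)
    return np.where(r < r_crit, v_cme, relaxed)

r = np.linspace(0.05, 1.0, 200)
v = cme_velocity(r)  # flat out to r_crit, then decaying toward 400 km/s
```

The profile is continuous at the critical distance, mirroring the papers' qualitative picture of a fast CME asymptotically matching the solar-wind speed.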
Tank 241-S-106, cores 183, 184 and 187 analytical results for the final report
Esch, R.A.
1997-06-30
This document is the final laboratory report for tank 241-S-106 push mode core segments collected between February 12, 1997 and March 21, 1997. The segments were subsampled and analyzed in accordance with the Tank Push Mode Core Sampling and Analysis Plan (TSAP), the Tank Safety Screening Data Quality Objective (Safety DQO), the Historical Model Evaluation Data Requirements (Historical DQO) and the Data Quality Objective to Support Resolution of the Organic Complexant Safety Issue (Organic DQO). The analytical results are included in Table 1. Six of the twenty-four subsamples submitted for the differential scanning calorimetry (DSC) analysis exceeded the notification limit of 480 Joules/g stated in the DQO. Appropriate notifications were made. Total Organic Carbon (TOC) analyses were performed on all samples that produced exotherms during the DSC analysis. All results were less than the notification limit of three weight percent TOC. No cyanide analysis was performed, per agreement with the Tank Safety Program. None of the samples submitted for Total Alpha Activity exceeded notification limits as stated in the TSAP. Statistical evaluation of results by calculating the 95% upper confidence limit is not performed by the 222-S Laboratory and is not considered in this report. No core composites were created because there was insufficient solid material from any of the three core sampling events to generate a composite that would be representative of the tank contents.
NASA Technical Reports Server (NTRS)
Deckert, J. C.
1981-01-01
This paper reviews the formulation and flight test results of an algorithm to detect and isolate the first failure of any one of twelve duplex control sensor signals being monitored. The technique uses like-signal differences for fault detection while relying upon analytic redundancy relationships among unlike quantities to isolate the faulty sensor. The fault isolation logic utilizes the modified sequential probability ratio test, which explicitly accommodates the inevitable irreducible low frequency errors present in the analytic redundancy residuals. In addition, the algorithm uses sensor output selftest, which takes advantage of the duplex sensor structure by immediately removing a highly erratic sensor from control calculations and analytic redundancy relationships while awaiting a definitive fault isolation decision via analytic redundancy. This study represents a proof of concept demonstration of a methodology that can be applied to duplex or higher flight control sensor configurations and, in addition, can monitor the health of one simplex signal per analytic redundancy relationship.
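The fault-isolation logic rests on the sequential probability ratio test. Below is a sketch of the plain Wald SPRT deciding between "no bias" and "bias mu1" in a Gaussian residual, with made-up noise parameters; the flight algorithm's modified SPRT additionally bounds the test statistic to tolerate low-frequency residual errors, which this sketch omits:

```python
import numpy as np

def sprt(residuals, mu1, sigma, alpha=1e-6, beta=1e-6):
    """Plain Wald SPRT: decide between H0 (zero-mean residual) and H1
    (mean mu1), Gaussian noise with std sigma. alpha/beta are target
    false-alarm / missed-detection rates (illustrative small values)."""
    upper = np.log((1 - beta) / alpha)   # crossing -> declare fault (H1)
    lower = np.log(beta / (1 - alpha))   # crossing -> declare healthy (H0)
    llr = 0.0
    for k, x in enumerate(residuals):
        llr += (mu1 / sigma**2) * (x - mu1 / 2.0)  # Gaussian LLR increment
        if llr >= upper:
            return "fault", k
        if llr <= lower:
            return "no fault", k
    return "undecided", len(residuals)

rng = np.random.default_rng(0)
decision, step = sprt(rng.normal(1.0, 0.5, 500), mu1=1.0, sigma=0.5)
```

With a genuinely biased residual stream, the statistic drifts upward and crosses the fault threshold after a handful of samples.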
Jang, Neo W.; Zakrzewski, Aaron; Rossi, Christina; Dalecki, Diane; Gracewski, Sheryl
2011-01-01
Motivated by various clinical applications of ultrasound contrast agents within blood vessels, the natural frequencies of two bubbles in a compliant tube are studied analytically, numerically, and experimentally. A lumped parameter model for a five degree of freedom system was developed, accounting for the compliance of the tube and coupled response of the two bubbles. The results were compared to those produced by two different simulation methods: (1) an axisymmetric coupled boundary element and finite element code previously used to investigate the response of a single bubble in a compliant tube and (2) finite element models developed in COMSOL Multiphysics. For the simplified case of two bubbles in a rigid tube, the lumped parameter model predicts two frequencies for in- and out-of-phase oscillations, in good agreement with both numerical simulation and experimental results. For two bubbles in a compliant tube, the lumped parameter model predicts four nonzero frequencies, each asymptotically converging to expected values in the rigid and compliant limits of the tube material. PMID:22088008
Jang, Neo W; Zakrzewski, Aaron; Rossi, Christina; Dalecki, Diane; Gracewski, Sheryl
2011-11-01
Motivated by various clinical applications of ultrasound contrast agents within blood vessels, the natural frequencies of two bubbles in a compliant tube are studied analytically, numerically, and experimentally. A lumped parameter model for a five degree of freedom system was developed, accounting for the compliance of the tube and coupled response of the two bubbles. The results were compared to those produced by two different simulation methods: (1) an axisymmetric coupled boundary element and finite element code previously used to investigate the response of a single bubble in a compliant tube and (2) finite element models developed in COMSOL Multiphysics. For the simplified case of two bubbles in a rigid tube, the lumped parameter model predicts two frequencies for in- and out-of-phase oscillations, in good agreement with both numerical simulation and experimental results. For two bubbles in a compliant tube, the lumped parameter model predicts four nonzero frequencies, each asymptotically converging to expected values in the rigid and compliant limits of the tube material.
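The in-phase / out-of-phase frequency pair of the rigid-tube case has a textbook analogue: two identical oscillators coupled by a spring. This sketch (arbitrary unit parameters, not the paper's five-degree-of-freedom model) obtains both modes from an eigenvalue problem:

```python
import numpy as np

# Two identical "bubble" oscillators (mass m, stiffness k to ground),
# coupled through the liquid by a spring kc -- a rigid-tube caricature.
# All parameter values are invented for illustration.
m, k, kc = 1.0, 4.0, 1.0
M = np.diag([m, m])
K = np.array([[k + kc, -kc],
              [-kc, k + kc]])
omega2 = np.linalg.eigvalsh(np.linalg.solve(M, K))  # squared natural freqs
freqs = np.sqrt(np.sort(omega2))
# in-phase mode: sqrt(k/m); out-of-phase mode: sqrt((k + 2*kc)/m)
```

The lower frequency is the in-phase mode (coupling spring unstretched); the higher is the out-of-phase mode, exactly as the lumped parameter model predicts for the rigid tube.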
Tank 241-B-108, cores 172 and 173 analytical results for the final report
Nuzum, J.L., Fluoro Daniel Hanford
1997-03-04
The Data Summary Table (Table 3) included in this report compiles analytical results in compliance with all applicable DQOs. Liquid subsamples that were prepared for analysis by an acid adjustment of the direct subsample are indicated by a `D` in the A column of Table 3. Solid subsamples that were prepared for analysis by performing a fusion digest are indicated by an `F` in the A column of Table 3. Solid subsamples that were prepared for analysis by performing a water digest are indicated by a `W` or an `I` in the A column of Table 3. Due to poor precision and accuracy in the original analysis of both Lower Half Segment 2 of Core 173 and the core composite of Core 173, fusion and water digests were performed a second time. Precision and accuracy improved with the repreparation of the Core 173 composite. Analyses of the reprepared Lower Half Segment 2 of Core 173 did not show improvement, suggesting sample heterogeneity. Results from both preparations are included in Table 3.
NASA Astrophysics Data System (ADS)
Sotnikov, V.; Kim, T.; Lundberg, J.; Paraschiv, I.; Mehlhorn, T.
2014-09-01
The presence of plasma turbulence can strongly influence the propagation properties of electromagnetic signals used for surveillance and communication. In particular, we are interested in the generation of low-frequency plasma density irregularities in the form of coherent vortex structures. Interchange or flute-type density irregularities in magnetized plasma are associated with the Rayleigh-Taylor type instability. These types of density irregularities play an important role in the refraction and scattering of high-frequency electromagnetic signals propagating in the Earth's ionosphere, in high energy density physics (HEDP), and in many other applications. We will discuss the scattering of high-frequency electromagnetic waves on low-frequency density irregularities due to the presence of vortex density structures associated with the interchange instability. We will also present PIC simulation results on EM scattering from vortex-type density structures using the LSP code and compare them with analytical results. Acknowledgement: This work was supported by the Air Force Research Laboratory, the Air Force Office of Scientific Research, the Naval Research Laboratory, and NNSA/DOE grant no. DE-FC52-06NA27616 at the University of Nevada at Reno.
Tank 241-AN-103, cores 166 and 167 analytical results for the final report
Steen, F.H.
1997-05-15
This document is the analytical laboratory report for tank 241-AN-103 [Hydrogen Watch Listed] push mode core segments collected between September 13, 1996 and September 23, 1996. The segments were subsampled and analyzed in accordance with the Tank 241-AN-103 Push Mode Core Sampling and Analysis Plan (TSAP), the Safety Screening Data Quality Objective (DQO) and the Flammable Gas Data Quality Objective (DQO). The analytical results are included in the data summary table. The raw data are included in this document. None of the samples submitted for Total Alpha Activity (AT), Total Organic Carbon (TOC) and Plutonium analyses exceeded notification limits as stated in the TSAP. One sample submitted for Differential Scanning Calorimetry (DSC) analysis exceeded the notification limit of 480 Joules/g (dry weight basis) as stated in the Safety Screening DQO. Appropriate notifications were made. Statistical evaluation of results by calculating the 95% upper confidence limit is not performed by the 222-S Laboratory and is not considered in this report. Appearance and Sample Handling: Attachment 1 is a cross-reference to relate the tank farm identification numbers to the 222-S Laboratory LabCore/LIMS sample numbers. The subsamples generated in the laboratory for analyses are identified in these diagrams with their sources shown. The diagrams identifying the core composites are also included. Core 166: Nineteen push mode core segments were removed from tank 241-AN-103 riser 12A between September 13, 1996 and September 17, 1996. Segments were received by the 222-S Laboratory between September 20, 1996 and September 30, 1996. Table 2 summarizes the extrusion information. Selected segments (2, 5 and 14) were sampled using the Retained Gas Sampler (RGS) and extruded by the Process Chemistry and Statistical Analysis Group. Core 167: Eighteen push mode core segments were removed from tank 241-AN-103 riser 21A between September 18, 1996 and September 23, 1996. Tank Farm Operations were
Tank 241-TX-104, cores 230 and 231 analytical results for the final report
Diaz, L.A.
1998-07-07
This document is the analytical laboratory report for tank 241-TX-104 push mode core segments collected between February 18, 1998 and February 23, 1998. The segments were subsampled and analyzed in accordance with the Tank 241-TX-104 Push Mode Core Sampling and Analysis Plan (TSAP) (McCain, 1997), the Data Quality Objective to Support Resolution of the Organic Complexant Safety Issue (Organic DQO) (Turner et al., 1995) and the Safety Screening Data Quality Objective (DQO) (Dukelow et al., 1995). The analytical results are included in the data summary table. None of the samples submitted for Differential Scanning Calorimetry (DSC) and Total Alpha Activity (AT) exceeded notification limits as stated in the TSAP. The statistical results of the 95% confidence interval on the mean calculations are provided by the Tank Waste Remediation Systems Technical Basis Group in accordance with the Memorandum of Understanding (Schreiber, 1997) and are not considered in this report. Appearance and Sample Handling: Attachment 1 is a cross-reference to relate the tank farm identification numbers to the 222-S Laboratory LabCore/LIMS sample numbers. The subsamples generated in the laboratory for analyses are identified in these diagrams with their sources shown. Core 230: Three push mode core segments were removed from tank 241-TX-104 riser 9A on February 18, 1998. Segments were received by the 222-S Laboratory on February 19, 1998. Two segments were expected for this core. However, due to poor sample recovery, an additional segment was taken and identified as 2A. Core 231: Four push mode core segments were removed from tank 241-TX-104 riser 13A between February 19, 1998 and February 23, 1998. Segments were received by the 222-S Laboratory on February 24, 1998. Two segments were expected for this core. However, due to poor sample recovery, additional segments were taken and identified as 2A and 2B. The TSAP states the core samples should be transported to the laboratory within three
Navarro-Alarcon, Miguel; Zambrano, Esmeralda; Moreno-Montoro, Miriam; Agil, Ahmad; Olalla, Manuel
2012-08-01
The assessment of daily dietary phosphorus (P) intake is a major concern in human nutrition because of its relationship with Ca and Mg metabolism and osteoporosis. Within this context, we hypothesized that several of the methods available for the assessment of daily dietary intake of P are equally accurate and reliable, although few studies have been conducted to confirm this. The aim of this study then was to evaluate daily dietary P intake, which we did by 3 methods: duplicate portion sampling of 108 hospital meals, combined either with spectrophotometric analysis or the use of food composition tables, and 24-hour dietary recall for 3 consecutive days plus the use of food composition tables. The mean P daily dietary intakes found were 1106 ± 221, 1480 ± 221, and 1515 ± 223 mg/d, respectively. Daily dietary intake of P determined by spectrophotometric analysis was significantly lower (P < .001) and closer to dietary reference intakes for adolescents aged from 14 to 18 years (88.5%) and adult subjects (158.1%) compared with the other 2 methods. Duplicate portion sampling with P analysis takes into account the influence of technological and cooking processes on the P content of foods and meals and therefore afforded the most accurate and reliable P daily dietary intakes. The use of referred food composition tables overestimated daily dietary P intake. No adverse effects in relation to P nutrition (deficiencies or toxic effects) were encountered.
Analytical Results for Scaling Properties of the Spectrum of the Fibonacci Chain
NASA Astrophysics Data System (ADS)
Piéchon, Frédéric; Benakli, Mourad; Jagannathan, Anuradha
1995-06-01
We solve the approximate renormalization group found by Niu and Nori for a quasiperiodic tight-binding Hamiltonian on the Fibonacci chain. This enables us to characterize analytically the spectral properties of this model.
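The model in question is a quasiperiodic tight-binding chain whose hoppings follow the Fibonacci sequence. A minimal numerical sketch (the hopping values tA, tB are arbitrary choices, and exact diagonalization stands in for the paper's renormalization-group treatment):

```python
import numpy as np

def fib_word(n):
    """n-th Fibonacci word over {A, B}: S1 = 'A', S2 = 'AB',
    Sn = S(n-1) + S(n-2)."""
    s_prev, s = "A", "AB"
    for _ in range(n - 2):
        s_prev, s = s, s + s_prev
    return s

# Off-diagonal (hopping) tight-binding chain with two hopping strengths
# arranged quasiperiodically along the Fibonacci word.
tA, tB = 1.0, 2.0
t = np.array([tA if c == "A" else tB for c in fib_word(10)])  # 89 bonds
N = len(t) + 1                                                # 90 sites
H = np.diag(t, 1) + np.diag(t, -1)   # zero on-site energies
E = np.linalg.eigvalsh(H)            # Cantor-like multifractal spectrum
```

Because the chain is bipartite with zero on-site energies, the spectrum is symmetric about E = 0; plotting the sorted eigenvalues exhibits the hierarchical gap structure the paper characterizes analytically.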
Miller, G.L.
1997-06-02
Turnaround time for this project was 60 days, as required in Reference 2. The analyses were to be performed using SW-846 procedures whenever possible to meet analytical requirements as a Resource Conservation and Recovery Act (RCRA) protocol project. Except for the preparation and analyses of polychlorinated biphenyls (PCB) and Nickel-63, which the program deleted as a required analyte for the 222-S Laboratory, all preparative and analytical work was performed at the 222-S Laboratory. Quanterra Environmental Services of Earth City, Missouri, performed the PCB analyses. During work on this project, two events occurred nearly simultaneously, which negatively impacted the 60-day deliverable schedule: an analytical hold due to waste handling issues at the 222-S Laboratory, and the discovery of PCBs at concentrations of regulatory significance in the 105-N Basin samples. Due to findings of regulatory non-compliance by the Washington State Department of Ecology, the 222-S Laboratory placed a temporary administrative hold on its analytical work until all waste handling, designation and segregation issues were resolved. During the hold of approximately three weeks, all analytical and waste-handling procedures were rewritten to comply with the legal regulations, and all staff were retrained in the designation, segregation and disposal of RCRA liquid and solid wastes.
Pool, K.H.
1994-03-01
The potential for a ferrocyanide explosion in Hanford site single-shell waste storage tanks (SSTs) poses a serious safety concern. This potential danger developed in the 1950s when ¹³⁷Cs was scavenged during the reprocessing of uranium recovery process waste by co-precipitating it along with sodium in nickel ferrocyanide salt. Sodium or potassium ferrocyanide and nickel sulfate were added to the liquid waste stored in SSTs. The tank storage space resulting from the scavenging process was subsequently used to store other waste types. Ferrocyanide salts in combination with oxidizing agents, such as nitrate and nitrite, are known to explode when key parameters (temperature, water content, oxidant concentration, and fuel [cyanide]) are in place. Therefore, reliable total cyanide analysis data for actual SST materials are required to address the safety issue. Accepted cyanide analysis procedures do not yield reliable results for samples containing nickel ferrocyanide materials because the compounds are insoluble in acidic media. Analytical chemists at Pacific Northwest Laboratory (PNL) have developed a modified microdistillation procedure (see below) for analyzing total cyanide in waste tank matrices containing nickel ferrocyanide materials. Pacific Northwest Laboratory analyzed samples from Hanford Waste Tank 241-C-112 cores 34, 35, and 36 for total cyanide content using technical procedure PNL-ALO-285, "Total Cyanide by Remote Microdistillation and Argentometric Titration," Rev. 0. This report summarizes the results of these analyses along with supporting quality control data and, in addition, summarizes the results of the test to check the efficacy of sodium nickel ferrocyanide solubilization from an actual core sample by aqueous EDTA/en to verify that nickel ferrocyanide compounds were quantitatively solubilized before actual distillation.
Network Traffic Analysis With Query Driven Visualization - SC 2005 HPC Analytics Results
Stockinger, Kurt; Wu, Kesheng; Campbell, Scott; Lau, Stephen; Fisk, Mike; Gavrilov, Eugene; Kent, Alex; Davis, Christopher E.; Olinger,Rick; Young, Rob; Prewett, Jim; Weber, Paul; Caudell, Thomas P.; Bethel,E. Wes; Smith, Steve
2005-09-01
Our analytics challenge is to identify, characterize, and visualize anomalous subsets of large collections of network connection data. We use a combination of HPC resources, advanced algorithms, and visualization techniques. To effectively and efficiently identify the salient portions of the data, we rely on a multi-stage workflow that includes data acquisition, summarization (feature extraction), novelty detection, and classification. Once these subsets of interest have been identified and automatically characterized, we use a state-of-the-art high-dimensional query system to extract data subsets for interactive visualization. Our approach is equally useful for other large-data analysis problems where it is more practical to identify interesting subsets of the data for visualization than to render all data elements. By reducing the size of the rendering workload, we enable highly interactive and useful visualizations. As a result of this work we were able to analyze six months' worth of data interactively with response times two orders of magnitude shorter than with conventional methods.
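The summarize / detect-novelty / select pipeline can be miniaturized on synthetic flow data. Everything below is invented for illustration: log-byte features, a robust z-score detector, and a plain index query standing in for the paper's high-dimensional (bitmap-index) query system:

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic per-connection feature: bytes transferred, with 20 anomalous flows
# appended after 10,000 ordinary ones.
bytes_tx = np.concatenate([rng.lognormal(8, 1, 10000),    # bulk traffic
                           rng.lognormal(16, 0.5, 20)])   # exfiltration-like

# Stage 1 -- summarization / feature extraction: work in log space.
feat = np.log(bytes_tx)

# Stage 2 -- novelty detection: robust z-score against the bulk
# (median / MAD, so the outliers do not distort the baseline).
med = np.median(feat)
mad = np.median(np.abs(feat - med))
z = 0.6745 * (feat - med) / mad

# Stage 3 -- query/selection: only the flagged subset goes on to
# interactive visualization, not all 10,020 records.
subset = np.flatnonzero(np.abs(z) > 5)
```

The point mirrors the abstract's argument: rendering only the selected subset keeps the visualization workload small and interactive.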
SEMI-ANALYTIC GALAXY EVOLUTION (SAGE): MODEL CALIBRATION AND BASIC RESULTS
Croton, Darren J.; Stevens, Adam R. H.; Tonini, Chiara; Garel, Thibault; Bernyk, Maksym; Bibiano, Antonio; Hodkinson, Luke; Mutch, Simon J.; Poole, Gregory B.; Shattow, Genevieve M.
2016-02-15
This paper describes a new publicly available codebase for modeling galaxy formation in a cosmological context, the “Semi-Analytic Galaxy Evolution” model, or sage for short. sage is a significant update to the 2006 model of Croton et al. and has been rebuilt to be modular and customizable. The model will run on any N-body simulation whose trees are organized in a supported format and contain a minimum set of basic halo properties. In this work, we present the baryonic prescriptions implemented in sage to describe the formation and evolution of galaxies, and their calibration for three N-body simulations: Millennium, Bolshoi, and GiggleZ. Updated physics include the following: gas accretion, ejection due to feedback, and reincorporation via the galactic fountain; a new gas cooling–radio mode active galactic nucleus (AGN) heating cycle; AGN feedback in the quasar mode; a new treatment of gas in satellite galaxies; and galaxy mergers, disruption, and the build-up of intra-cluster stars. Throughout, we show the results of a common default parameterization on each simulation, with a focus on the local galaxy population.
Semi-Analytic Galaxy Evolution (SAGE): Model Calibration and Basic Results
NASA Astrophysics Data System (ADS)
Croton, Darren J.; Stevens, Adam R. H.; Tonini, Chiara; Garel, Thibault; Bernyk, Maksym; Bibiano, Antonio; Hodkinson, Luke; Mutch, Simon J.; Poole, Gregory B.; Shattow, Genevieve M.
2016-02-01
This paper describes a new publicly available codebase for modeling galaxy formation in a cosmological context, the “Semi-Analytic Galaxy Evolution” model, or sage for short. sage is a significant update to the 2006 model of Croton et al. and has been rebuilt to be modular and customizable. The model will run on any N-body simulation whose trees are organized in a supported format and contain a minimum set of basic halo properties. In this work, we present the baryonic prescriptions implemented in sage to describe the formation and evolution of galaxies, and their calibration for three N-body simulations: Millennium, Bolshoi, and GiggleZ. Updated physics include the following: gas accretion, ejection due to feedback, and reincorporation via the galactic fountain; a new gas cooling-radio mode active galactic nucleus (AGN) heating cycle; AGN feedback in the quasar mode; a new treatment of gas in satellite galaxies; and galaxy mergers, disruption, and the build-up of intra-cluster stars. Throughout, we show the results of a common default parameterization on each simulation, with a focus on the local galaxy population.
Larson, Jeffrey S.; Goodman, Laurie J.; Tan, Yuping; Defazio-Eli, Lisa; Paquet, Agnes C.; Cook, Jennifer W.; Rivera, Amber; Frankson, Kristi; Bose, Jolly; Chen, Lili; Cheung, Judy; Shi, Yining; Irwin, Sarah; Kiss, Linda D. B.; Huang, Weidong; Utter, Shannon; Sherwood, Thomas; Bates, Michael; Weidler, Jodi; Parry, Gordon; Winslow, John; Petropoulos, Christos J.; Whitcomb, Jeannette M.
2010-01-01
We report here the results of the analytical validation of assays that measure HER2 total protein (H2T) and HER2 homodimer (H2D) expression in Formalin Fixed Paraffin Embedded (FFPE) breast cancer tumors as well as cell line controls. The assays are based on the VeraTag technology platform and are commercially available through a central CAP-accredited clinical reference laboratory. The accuracy of H2T measurements spans a broad dynamic range (2-3 logs) as evaluated by comparison with cross-validating technologies. The measurement of H2T expression demonstrates a sensitivity that is approximately 7–10 times greater than conventional immunohistochemistry (IHC) (HercepTest). The HERmark assay is a quantitative assay that sensitively and reproducibly measures continuous H2T and H2D protein expression levels and therefore may have the potential to stratify patients more accurately with respect to response to HER2-targeted therapies than current methods which rely on semiquantitative protein measurements (IHC) or on indirect assessments of gene amplification (FISH). PMID:21151530
Alastuey, A; Ballenegger, V
2012-12-01
We compute thermodynamical properties of a low-density hydrogen gas within the physical picture, in which the system is described as a quantum electron-proton plasma interacting via the Coulomb potential. Our calculations are done using the exact scaled low-temperature (SLT) expansion, which provides a rigorous extension of the well-known virial expansion (valid in the fully ionized phase) into the Saha regime where the system is partially or fully recombined into hydrogen atoms. After recalling the SLT expansion of the pressure [A. Alastuey et al., J. Stat. Phys. 130, 1119 (2008)], we obtain the SLT expansions of the chemical potential and of the internal energy, up to order exp(-|E_{H}|/kT) included (E_{H}≃-13.6 eV). Those truncated expansions describe the first five nonideal corrections to the ideal Saha law. They account exactly, up to the considered order, for all effects of interactions and thermal excitations, including the formation of bound states (atom H, ions H^{-} and H_{2}^{+}, molecule H_{2},⋯) and atom-charge and atom-atom interactions. Among the five leading corrections, three are easy to evaluate, while the remaining ones involve well-defined internal partition functions for the molecule H_{2} and ions H^{-} and H_{2}^{+}, for which no closed-form analytical formulas currently exist. We provide accurate low-temperature approximations for those partition functions by using known values of rotational and vibrational energies. We then compare the predictions of the SLT expansion, for the pressure and the internal energy, with, on the one hand, the equation-of-state tables obtained within the opacity program at Livermore (OPAL) and, on the other hand, data from path integral quantum Monte Carlo (PIMC) simulations. In general, a good agreement is found. At low densities, the simple analytical SLT formulas reproduce the values of the OPAL tables up to the last digit in a large range of temperatures, while at higher densities (ρ∼10^{-2} g/cm^{3}), some
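The internal partition functions mentioned for H2 can be approximated at the crudest level by a rigid-rotor / harmonic-oscillator sum over known level spacings. The characteristic temperatures below are standard textbook values for H2; ortho/para nuclear-spin weights and rovibrational coupling, which the paper's functions treat properly, are ignored in this sketch:

```python
import numpy as np

def z_internal(T, theta_rot=87.6, theta_vib=6332.0, j_max=60, v_max=20):
    """Internal partition function of H2 in the rigid-rotor /
    harmonic-oscillator approximation, energies measured from the
    (v=0, J=0) ground state. theta_rot and theta_vib are characteristic
    temperatures in kelvin (textbook H2 values)."""
    J = np.arange(j_max + 1)
    v = np.arange(v_max + 1)
    z_rot = np.sum((2 * J + 1) * np.exp(-theta_rot * J * (J + 1) / T))
    z_vib = np.sum(np.exp(-theta_vib * v / T))
    return z_rot * z_vib

Z300, Z1000 = z_internal(300.0), z_internal(1000.0)
```

At room temperature the vibrational factor is essentially frozen out and only a few rotational levels contribute; at higher T the rotational sum approaches its classical limit T/theta_rot.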
NASA Astrophysics Data System (ADS)
Alastuey, A.; Ballenegger, V.
2012-12-01
We compute thermodynamical properties of a low-density hydrogen gas within the physical picture, in which the system is described as a quantum electron-proton plasma interacting via the Coulomb potential. Our calculations are done using the exact scaled low-temperature (SLT) expansion, which provides a rigorous extension of the well-known virial expansion (valid in the fully ionized phase) into the Saha regime where the system is partially or fully recombined into hydrogen atoms. After recalling the SLT expansion of the pressure [A. Alastuey et al., J. Stat. Phys. 130, 1119 (2008)], we obtain the SLT expansions of the chemical potential and of the internal energy, up to order exp(-|EH|/kT) included (EH≃-13.6 eV). Those truncated expansions describe the first five nonideal corrections to the ideal Saha law. They account exactly, up to the considered order, for all effects of interactions and thermal excitations, including the formation of bound states (atom H, ions H- and H2+, molecule H2,⋯) and atom-charge and atom-atom interactions. Among the five leading corrections, three are easy to evaluate, while the remaining ones involve well-defined internal partition functions for the molecule H2 and ions H- and H2+, for which no closed-form analytical formulas currently exist. We provide accurate low-temperature approximations for those partition functions by using known values of rotational and vibrational energies. We then compare the predictions of the SLT expansion, for the pressure and the internal energy, with, on the one hand, the equation-of-state tables obtained within the opacity program at Livermore (OPAL) and, on the other hand, data from path integral quantum Monte Carlo (PIMC) simulations. In general, a good agreement is found. At low densities, the simple analytical SLT formulas reproduce the values of the OPAL tables up to the last digit in a large range of temperatures, while at higher densities (ρ∼10-2 g/cm3
NASA Technical Reports Server (NTRS)
Oglebay, J. C.
1977-01-01
A thermal analytic model for a 30-cm engineering model mercury-ion thruster was developed and calibrated using results from tests of a pre-engineering model 30-cm thruster. A series of tests, performed later, simulated a wide range of thermal environments on an operating 30-cm engineering model thruster, which was instrumented to measure the temperature distribution within it. The modified analytic model is described, and analytic and experimental results are compared for various operating conditions. Based on the comparisons, it is concluded that the analytic model can be used as a preliminary design tool to predict thruster steady-state temperature distributions for stage and mission studies and to define the thermal interface between the thruster and other elements of a spacecraft.
Donoghue, J. K.; Dyson, E. D.; Hislop, J. S.; Leach, A. M.; Spoor, N. L.
1972-01-01
Donoghue, J. K., Dyson, E. D., Hislop, J. S., Leach, A. M., and Spoor, N. L. (1972). Brit. J. industr. Med., 29, 81-89. Human exposure to natural uranium: a case history and analytical results from some postmortem tissues. After the collapse and sudden death of an employee who had worked for 10 years in a natural uranium workshop, in which the airborne uranium was largely U3O8 with an Activity Median Aerodynamic Diameter in the range 3·5-6·0 μm and an average concentration of 300 μg/m3, his internal organs were analysed for uranium. The tissues examined included lungs (1041 g), pulmonary lymph nodes (12 g), sternum (114 g), and kidneys (217 g). Uranium was estimated by neutron activation analysis, using irradiated tissue ash, and counting the delayed neutrons from uranium-235. The concentrations of uranium (μg U/g wet tissue) in the lungs, lymph nodes, sternum, and kidneys were 1·2, 1·8, 0·09, and 0·14 respectively. The weights deposited in the lungs and lymph nodes are less than 1% of the amounts calculated from the environmental data using the parameters currently applied in radiological protection. The figures are compatible with those reported by Quigley, Heatherton, and Ziegler in 1958 and by Meichen in 1962. The relation between these results, the environmental exposure data, and biological monitoring data is discussed in the context of current views on the metabolism of inhaled insoluble uranium. PMID:5060250
A compressed sensing method with analytical results for lidar feature classification
NASA Astrophysics Data System (ADS)
Allen, Josef D.; Yuan, Jiangbo; Liu, Xiuwen; Rahmes, Mark
2011-04-01
We present an innovative way to autonomously classify LiDAR points into bare earth, building, vegetation, and other categories. One desirable product of LiDAR data is the automatic classification of the points in the scene. Our algorithm automatically classifies scene points using compressed sensing methods, via Orthogonal Matching Pursuit algorithms combined with a generalized K-means clustering algorithm, to extract buildings and foliage from a Digital Surface Model (DSM). This technology reduces manual editing while being cost effective for large-scale automated global scene modeling. Quantitative analyses are provided using Receiver Operating Characteristic (ROC) curves to show the probability of detection and false alarm for building vs. vegetation classification. Histograms are shown with sample-size metrics. Our inpainting algorithms then fill the voids where buildings and vegetation were removed, utilizing Computational Fluid Dynamics (CFD) techniques and Partial Differential Equations (PDEs) to create an accurate Digital Terrain Model (DTM) [6]. Inpainting preserves building height contour consistency and the edge sharpness of identified inpainted regions. Qualitative results illustrate other benefits, such as terrain inpainting's unique ability to minimize or eliminate undesirable terrain data artifacts.
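The sparse-coding step named above (Orthogonal Matching Pursuit over a learned dictionary) can be sketched with a minimal greedy routine. This is a generic numpy illustration of the OMP idea on a synthetic dictionary and signal, not the authors' implementation; the sizes and seed are arbitrary.

```python
import numpy as np

def omp(D, y, n_nonzero):
    """Greedy Orthogonal Matching Pursuit: sparse-code y over dictionary D."""
    residual = y.copy()
    idx = []
    for _ in range(n_nonzero):
        # pick the atom most correlated with the current residual
        j = int(np.argmax(np.abs(D.T @ residual)))
        if j not in idx:
            idx.append(j)
        # least-squares refit on the atoms selected so far, then update residual
        coef, *_ = np.linalg.lstsq(D[:, idx], y, rcond=None)
        residual = y - D[:, idx] @ coef
    x = np.zeros(D.shape[1])
    x[idx] = coef
    return x

# toy dictionary of 50 unit-norm atoms in 20 dimensions
rng = np.random.default_rng(0)
D = rng.normal(size=(20, 50))
D /= np.linalg.norm(D, axis=0)
# synthetic signal built from 3 atoms
y = D[:, [5, 17, 40]] @ np.array([1.5, -2.0, 0.7])
x = omp(D, y, n_nonzero=3)  # 3-sparse code for y
```

In the classification setting described in the abstract, each LiDAR feature vector would be coded this way against class dictionaries (e.g., built with K-means or K-SVD) and assigned to the class giving the smallest residual.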
Working with Real Data: Getting Analytic Element Groundwater Model Results to Honor Field Data
NASA Astrophysics Data System (ADS)
Congdon, R. D.
2014-12-01
Models of groundwater flow often work best when very little field data exist. In such cases, some knowledge of the depth to the water table, annual precipitation totals, and basic geological makeup is sufficient to produce a reasonable-looking and potentially useful model. In this case, however, where a good deal of information is available regarding the depth to the bottom of a dune field aquifer, attempting to incorporate the data set into the model has variously resulted in convergence, failure to achieve target water level criteria, or complete failure to converge. The first model did not take the data set into consideration, but used general information that the aquifer was thinner in the north and thicker in the south. This model would run and produce apparently useful results. The first attempt at satisfying the data set (in this case, 51 wells showing the bottom elevation of a Pacific coast sand dune aquifer) was to use the isopach interpretation of Robinson (OFR 73-241). Using inhomogeneities (areas of equal characteristics) delineated by Robinson's isopach diagram did not enable an adequate fit to the water table lakes, and caused convergence problems when adding pumping wells. The second attempt was to use a Thiessen polygon approach, creating an aquifer thickness zone for each data point. The results for the non-pumping scenario were better, but run times were considerably greater. Also, there were frequent runs with non-convergence, especially when water supply wells were added. Non-convergence may be the result of the lake line-sinks crossing the polygon boundaries or the proximity of pumping wells to inhomogeneity boundaries. The third approach was to merge adjacent polygons of similar depths (in this case, within 5% of each other). The results and run times were better, but the match to lake levels was not satisfactory. The fourth approach was to reduce the number of inhomogeneities to four and to average the depth data over each inhomogeneity. The thicknesses were
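The Thiessen-polygon approach described in the abstract amounts to giving every model location the aquifer thickness of its nearest data point. A minimal numpy sketch, with made-up well coordinates and depths standing in for the real 51-well data set:

```python
import numpy as np

# hypothetical well data: (x, y) locations and aquifer-bottom depths
rng = np.random.default_rng(1)
wells = rng.uniform(0.0, 10.0, size=(51, 2))
depths = rng.uniform(20.0, 80.0, size=51)

def thiessen_zones(points, values, grid_x, grid_y):
    """Assign each grid node the value of its nearest data point --
    the Thiessen (Voronoi) polygon interpretation of scattered data."""
    gx, gy = np.meshgrid(grid_x, grid_y)
    nodes = np.column_stack([gx.ravel(), gy.ravel()])
    # squared distance from every grid node to every well
    d2 = ((nodes[:, None, :] - points[None, :, :]) ** 2).sum(axis=2)
    nearest = d2.argmin(axis=1)
    return values[nearest].reshape(gy.shape)

zones = thiessen_zones(wells, depths,
                       np.linspace(0.0, 10.0, 50), np.linspace(0.0, 10.0, 50))
```

The third approach in the abstract (merging polygons whose depths lie within 5% of each other) would correspond to clustering the `depths` values before the nearest-point assignment, reducing the number of distinct zones.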
NASA Astrophysics Data System (ADS)
Mickaelian, A. M.
2004-10-01
Accurate measurements of the positions of 1101 First Byurakan Survey (FBS) blue stellar objects (the Second part of the FBS) have been carried out on the DSS1 and DSS2 (red and blue images). To establish the accuracy of the DSS1 and DSS2, measurements have been made for 153 AGN for which absolute VLBI coordinates have been published. The rms errors are: 0.45 arcsec for DSS1, 0.33 arcsec for DSS2 red, and 0.59 arcsec for DSS2 blue in each coordinate, the corresponding total positional errors being 0.64 arcsec, 0.46 arcsec, and 0.83 arcsec, respectively. The highest accuracy (0.42 arcsec) is obtained by weighted averaging of the DSS1 and DSS2 red positions. It is shown that by using all three DSS images accidental errors can be significantly reduced. The comparison of DSS2 and DSS1 images made it possible to reveal positional differences and proper motions for 78 objects (for 62 of these for the first time), including new high-probability candidate white dwarfs, and to find objects showing strong variability, i.e. high-probability candidate cataclysmic variables. Table 1 is only available in electronic form at the CDS via anonymous ftp to cdsarc.u-strasbg.fr (130.79.128.5) or via http://cdsweb.u-strasbg.fr/cgi-bin/qcat?J/A+A/426/367
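The weighted averaging of DSS1 and DSS2 positions mentioned above is, in the simplest reading, an inverse-variance weighted mean. The sketch below uses the quoted total positional errors (0.64 and 0.46 arcsec) with placeholder positions; it only illustrates why the combined formal error drops below either input, and does not reproduce the paper's 0.42 arcsec figure, which may include terms not modeled here.

```python
import numpy as np

def weighted_position(values, sigmas):
    """Inverse-variance weighted mean of independent measurements,
    plus the formal 1-sigma error of the combination."""
    values = np.asarray(values, dtype=float)
    w = 1.0 / np.asarray(sigmas, dtype=float) ** 2
    mean = float(np.sum(w * values) / np.sum(w))
    sigma = float(1.0 / np.sqrt(np.sum(w)))
    return mean, sigma

# total positional errors quoted for DSS1 (0.64") and DSS2 red (0.46");
# the positions themselves are placeholders here
_, combined = weighted_position([0.0, 0.0], [0.64, 0.46])
```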
Crock, J.G.; Smith, D.B.; Yager, T.J.B.
2009-01-01
Since late 1993, Metro Wastewater Reclamation District of Denver (Metro District, MWRD), a large wastewater treatment plant in Denver, Colorado, has applied Grade I, Class B biosolids to about 52,000 acres of nonirrigated farmland and rangeland near Deer Trail, Colorado, USA. In cooperation with the Metro District in 1993, the U.S. Geological Survey (USGS) began monitoring groundwater at part of this site. In 1999, the USGS began a more comprehensive monitoring study of the entire site to address stakeholder concerns about the potential chemical effects of biosolids applications to water, soil, and vegetation. This more comprehensive monitoring program has recently been extended through 2010. Monitoring components of the more comprehensive study include biosolids collected at the wastewater treatment plant, soil, crops, dust, alluvial and bedrock groundwater, and stream bed sediment. Soils for this study were defined as the plow zone of the dry land agricultural fields - the top twelve inches of the soil column. This report presents analytical results for the soil samples collected at the Metro District farm land near Deer Trail, Colorado, during three separate sampling events during 1999, 2000, and 2002. Soil samples taken in 1999 were to be a representation of the original baseline of the agricultural soils prior to any biosolids application. The soil samples taken in 2000 represent the soils after one application of biosolids to the middle field at each site and those taken in 2002 represent the soils after two applications. There have been no biosolids applied to any of the four control fields. The next soil sampling is scheduled for the spring of 2010. Priority parameters for biosolids identified by the stakeholders and also regulated by Colorado when used as an agricultural soil amendment include the total concentrations of nine trace elements (arsenic, cadmium, copper, lead, mercury, molybdenum, nickel, selenium, and zinc), plutonium isotopes, and gross
Kim, Ellen S; Satter, Martin; Reed, Marilyn; Fadell, Ronald; Kardan, Arash
2016-06-01
Glioblastoma multiforme (GBM) is the most common and lethal malignant glioma in adults. Currently, the modality of choice for diagnosing brain tumor is high-resolution magnetic resonance imaging (MRI) with contrast, which provides anatomic detail and localization. Studies have demonstrated, however, that MRI may have limited utility in delineating the full tumor extent precisely. Studies suggest that MR spectroscopy (MRS) can also be used to distinguish high-grade from low-grade gliomas. However, due to operator dependent variables and the heterogeneous nature of gliomas, the potential for error in diagnostic accuracy with MRS is a concern. Positron emission tomography (PET) imaging with (11)C-methionine (MET) and (18)F-fluorodeoxyglucose (FDG) has been shown to add additional information with respect to tumor grade, extent, and prognosis based on the premise of biochemical changes preceding anatomic changes. Combined PET/MRS is a technique that integrates information from PET in guiding the location for the most accurate metabolic characterization of a lesion via MRS. We describe a case of glioblastoma multiforme in which MRS was initially non-diagnostic for malignancy, but when MRS was repeated with PET guidance, demonstrated elevated choline/N-acetylaspartate (Cho/NAA) ratio in the right parietal mass consistent with a high-grade malignancy. Stereotactic biopsy, followed by PET image-guided resection, confirmed the diagnosis of grade IV GBM. To our knowledge, this is the first reported case of an integrated PET/MRS technique for the voxel placement of MRS. Our findings suggest that integrated PET/MRS may potentially improve diagnostic accuracy in high-grade gliomas.
NASA Technical Reports Server (NTRS)
Redd, L. T.; Hanson, P. W.; Wynne, E. C.
1979-01-01
A wind tunnel technique for obtaining gust frequency response functions for use in predicting the response of flexible aircraft to atmospheric turbulence is evaluated. The tunnel test results for a dynamically scaled cable supported aeroelastic model are compared with analytical and flight data. The wind tunnel technique, which employs oscillating vanes in the tunnel throat section to generate a sinusoidally varying flow field around the model, was evaluated by use of a 1/30 scale model of the B-52E airplane. Correlation between the wind tunnel results, flight test results, and analytical predictions for response in the short period and wing first elastic modes of motion are presented.
Parametric instabilities of parallel-propagating Alfven waves: Some analytical results
NASA Technical Reports Server (NTRS)
Jayanti, V.; Hollweg, Joseph V.
1993-01-01
We consider the stability of a circularly polarized Alfven wave (the pump wave) which propagates parallel to the ambient magnetic field. Only parallel-propagating perturbations are considered, and we ignore dispersive effects due to the ion cyclotron frequency. The dissipationless MHD equations are used throughout; thus possibly important effects arising from Landau and transit-time damping are omitted. We derive a series of analytical approximations to the dispersion relation using A = (ΔB/B_0)^2 as a small expansion parameter; ΔB is the pump amplitude, and B_0 is the ambient magnetic field strength. We find that the plasma beta (the square of the ratio of the sound speed to the Alfven speed) plays a crucial role in determining the behavior of the parametric instabilities of the pump. If 0 < beta < 1 we find the familiar result that the pump decays into a forward-propagating sound wave and a backward-propagating Alfven wave with maximum growth rate γ_max ∝ A^(1/2), provided beta is not too close to 0 or to 1. If beta ≈ 1, we find γ_max ∝ A^(3/4); if beta > 1, we find γ_max ∝ A^(3/2); while if beta ≈ 0, we obtain γ_max ∝ A^(1/3); moreover, if beta ≈ 0 there is a nearly purely growing instability. In contrast to the familiar decay instability, for which the backward-propagating Alfven wave has lower frequency and wavenumber than the pump, we find that if beta is greater than or approximately equal to 1 the instability is really a beat instability dominated by a transverse wave which is forward propagating and has frequency and wavenumber nearly twice the pump values. Only the decay instability for 0 < beta < 1 can be regarded as producing two recognizable normal modes, namely, a sound wave and an Alfven wave. We discuss how the different characteristics of the instabilities may affect the evolution of
NASA Astrophysics Data System (ADS)
Saro, A.; De Lucia, G.; Borgani, S.; Dolag, K.
2010-08-01
We present a detailed comparison between the galaxy populations within a massive cluster, as predicted by hydrodynamical smoothed particle hydrodynamics (SPH) simulations and by a semi-analytic model (SAM) of galaxy formation. Both models include gas cooling and a simple prescription of star formation, which consists in transforming instantaneously any cold gas available into stars, while neglecting any source of energy feedback. This simplified comparison is thus not meant to be compared with observational data, but is aimed at understanding the level of agreement, at the stripped-down level considered, between two techniques that are widely used to model galaxy formation in a cosmological framework and which present complementary advantages and disadvantages. We find that, in general, galaxy populations from SAMs and SPH have similar statistical properties, in agreement with previous studies. However, when comparing galaxies on an object-by-object basis, we find a number of interesting differences: (i) the star formation histories of the brightest cluster galaxies (BCGs) from SAM and SPH models differ significantly, with the SPH BCG exhibiting a lower level of star formation activity at low redshift, and a more intense and shorter initial burst of star formation with respect to its SAM counterpart; (ii) while all stars associated with the BCG were formed in its progenitors in the SAM used here, this holds true only for half of the final BCG stellar mass in the SPH simulation, the remaining half being contributed by tidal stripping of stars from the diffuse stellar component associated with galaxies accreted on the cluster halo; (iii) SPH satellites can lose up to 90 per cent of their stellar mass at the time of accretion, due to tidal stripping, a process not included in the SAM used in this paper; (iv) in the SPH simulation, significant cooling occurs on the most massive satellite galaxies and this lasts for up to 1 Gyr after accretion. This physical process is
STABLE CONIC-HELICAL ORBITS OF PLANETS AROUND BINARY STARS: ANALYTICAL RESULTS
Oks, E.
2015-05-10
Studies of planets in binary star systems are especially important because it was estimated that about half of binary stars are capable of supporting habitable terrestrial planets within stable orbital ranges. One-planet binary star systems (OBSS) have a limited analogy to objects studied in atomic/molecular physics: one-electron Rydberg quasimolecules (ORQ). Specifically, ORQ, consisting of two fully stripped ions of the nuclear charges Z and Z′ plus one highly excited electron, are encountered in various plasmas containing more than one kind of ion. Classical analytical studies of ORQ resulted in the discovery of classical stable electronic orbits with the shape of a helix on the surface of a cone. In the present paper we show that despite several important distinctions between OBSS and ORQ, it is possible for OBSS to have stable planetary orbits in the shape of a helix on a conical surface, whose axis of symmetry coincides with the interstellar axis; the stability is not affected by the rotation of the stars. Further, we demonstrate that the eccentricity of the stars' orbits does not affect the stability of the helical planetary motion if the center of symmetry of the helix is relatively close to the star of the larger mass. We also show that if the center of symmetry of the conic-helical planetary orbit is relatively close to the star of the smaller mass, a sufficiently large eccentricity of the stars' orbits can switch the planetary motion to the unstable mode and the planet would escape the system. We demonstrate that such planets are transitable for the overwhelming majority of inclinations of the plane of the stars' orbits (i.e., the projections of the planet and the adjacent star on the plane of the sky coincide once in a while). This means that conic-helical planetary orbits at binary stars can be detected photometrically. We consider, as an example, Kepler-16 binary stars to provide illustrative numerical data on the possible parameters and the
A stereo triangulation system for structural identification: Analytical and experimental results
NASA Technical Reports Server (NTRS)
Junkins, J. L.; James, G. H., III; Pollock, T. C.; Rahman, Z. H.
1988-01-01
Identification of large space structures' distributed mass, stiffness, and energy dissipation characteristics poses formidable analytical, numerical, and implementation difficulties. Development of reliable on-orbit structural identification methods is important for implementing active vibration suppression concepts which are under widespread study in the large space structures community. Near the heart of the identification problem lies the necessity of making a large number of spatially distributed measurements of the structure's vibratory response and the associated force/moment inputs with sufficient spatial and frequency resolution. In the present paper, we discuss a method whereby tens of active or passive (retro-reflecting) targets on the structure are tracked simultaneously by the focal planes of two or more video cameras mounted on an adjacent platform. Triangulation (optical ray intersection) of the conjugate image centroids yields inertial trajectories of each target on the structure. Given the triangulated motion of the targets, we apply and extend methodology developed by Creamer, Junkins, and Juang to identify the frequencies, mode shapes, and updated estimates for the mass/stiffness/damping parameterization of the structure. The methodology is semi-automated; for example, the post-experiment analysis of the video imagery to determine the inertial trajectories of the targets typically requires less than thirty minutes of real time. Using the methodology discussed herein, the frequency response of a large number of points on the structure (where reflective targets are mounted) can be determined from optical measurements alone. For comparison purposes, we also utilize measurements from accelerometers and a calibrated impulse hammer. While our experimental work remains in a research stage of development, we have successfully tracked and stereo-triangulated 20 targets (on a vibrating cantilevered grid structure) at a sample frequency of 200 Hz.
Externally induced metastability of an electron in a Penning trap: Analytical results
NASA Astrophysics Data System (ADS)
Brouard, S.; Plata, J.
2001-12-01
The effect of a driving field on the cyclotron mode of a relativistic electron in a Penning trap is studied analytically. The Hamiltonian dynamics of this driven nonlinear oscillator is analyzed by using linearization techniques and displaced squeezed-state formalism. With the approximate analytical expressions obtained for the eigenstates in this approach, a simplified treatment of the dissipative dynamics is carried out and some of the nontrivial features found in a recent numerical study [D. Enzer and G. Gabrielse, Phys. Rev. Lett. 78, 1211 (1997)] are unraveled. The emergence of different time scales and the generation of a metastable statistical mixture are understood in terms of the changes induced in the structure of the master equation by the nonuniform characteristics of the eigenstates; the partial revivals of specific coherent states are accounted for by the evolution of particular coherences. The control of these effects by a proper choice of the driving parameters is discussed.
Distribution of Steps with Finite-Range Interactions: Analytic Approximations and Numerical Results
NASA Astrophysics Data System (ADS)
GonzáLez, Diego Luis; Jaramillo, Diego Felipe; TéLlez, Gabriel; Einstein, T. L.
2013-03-01
While most Monte Carlo simulations assume that only nearest-neighbor steps interact elastically, most analytic frameworks (especially the generalized Wigner distribution) posit that each step elastically repels all others. In addition to the elastic repulsions, we allow for possible surface-state-mediated interactions. We investigate analytically and numerically how next-nearest-neighbor (NNN) interactions and, more generally, interactions out to the qth nearest neighbor alter the form of the terrace-width distribution and of the pair correlation functions (i.e., the sum over nth-neighbor distribution functions), which we investigated recently [2]. For physically plausible interactions, we find modest changes when NNN interactions are included and generally negligible changes when more distant interactions are allowed. We discuss methods for extracting, from simulated experimental data, the characteristic scale-setting terms in assumed potential forms.
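For reference, the generalized Wigner distribution mentioned above has the closed form P(s) = a s^ρ exp(-b s²), where s is the terrace width scaled by its mean, and the constants a and b are fixed by requiring unit normalization and unit mean. A short numpy check of those constants (ρ = 2 is just an illustrative choice of the repulsion exponent):

```python
import numpy as np
from math import gamma

def wigner_twd(s, rho):
    """Generalized Wigner surmise P(s) = a * s**rho * exp(-b * s**2),
    with b and a fixed so that P integrates to 1 and has unit mean."""
    b = (gamma((rho + 2) / 2) / gamma((rho + 1) / 2)) ** 2
    a = 2 * b ** ((rho + 1) / 2) / gamma((rho + 1) / 2)
    return a * s ** rho * np.exp(-b * s ** 2)

# trapezoid-rule check of the normalization and unit-mean constraints
s = np.linspace(1e-6, 5.0, 20001)
p = wigner_twd(s, rho=2.0)
ds = np.diff(s)
norm = float(np.sum((p[1:] + p[:-1]) / 2 * ds))
mean = float(np.sum(((s * p)[1:] + (s * p)[:-1]) / 2 * ds))
```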
Kokhanovsky, Alexander A
2007-04-01
Analytical equations for the diffused scattered light correction factor of Sun photometers are derived and analyzed. It is shown that corrections are weakly dependent on the atmospheric optical thickness. They are influenced mostly by the size of aerosol particles encountered by sunlight on its way to a Sun photometer. In addition, the accuracy of the small-angle approximation used in the work is studied with numerical calculations based on the exact radiative transfer equation.
Report #15-P-0276, September 4, 2015. Inaccurate reporting of results misrepresents the impacts of pollution prevention activities provided to the public, and misinforms EPA management on the effectiveness of its investment in the program.
González, Lorenzo; Thorne, Leigh; Jeffrey, Martin; Martin, Stuart; Spiropoulos, John; Beck, Katy E; Lockey, Richard W; Vickery, Christopher M; Holder, Thomas; Terry, Linda
2012-11-01
It is widely accepted that abnormal forms of the prion protein (PrP) are the best surrogate marker for the infectious agent of prion diseases and, in practice, the detection of such disease-associated (PrP(d)) and/or protease-resistant (PrP(res)) forms of PrP is the cornerstone of diagnosis and surveillance of the transmissible spongiform encephalopathies (TSEs). Nevertheless, some studies question the consistent association between infectivity and abnormal PrP detection. To address this discrepancy, 11 brain samples of sheep affected with natural scrapie or experimental bovine spongiform encephalopathy were selected on the basis of the magnitude and predominant types of PrP(d) accumulation, as shown by immunohistochemical (IHC) examination; contra-lateral hemi-brain samples were inoculated at three different dilutions into transgenic mice overexpressing ovine PrP and were also subjected to quantitative analysis by three biochemical tests (BCTs). Six samples gave 'low' infectious titres (10⁶·⁵ to 10⁶·⁷ LD₅₀ g⁻¹) and five gave 'high titres' (10⁸·¹ to ≥ 10⁸·⁷ LD₅₀ g⁻¹) and, with the exception of the Western blot analysis, those two groups tended to correspond with samples with lower PrP(d)/PrP(res) results by IHC/BCTs. However, no statistical association could be confirmed due to high individual sample variability. It is concluded that although detection of abnormal forms of PrP by laboratory methods remains useful to confirm TSE infection, infectivity titres cannot be predicted from quantitative test results, at least for the TSE sources and host PRNP genotypes used in this study. Furthermore, the near inverse correlation between infectious titres and Western blot results (high protease pre-treatment) argues for a dissociation between infectivity and PrP(res).
NASA Astrophysics Data System (ADS)
West, J. B.; Ehleringer, J. R.; Cerling, T.
2006-12-01
Understanding how the biosphere responds to change is at the heart of biogeochemistry, ecology, and other Earth sciences. The dramatic increase in human population and technological capacity over the past 200 years or so has resulted in numerous, simultaneous changes to biosphere structure and function. This, then, has led to increased urgency in the scientific community to try to understand how systems have already responded to these changes, and how they might do so in the future. Since all biospheric processes exhibit some patchiness or patterns over space, as well as time, we believe that understanding the dynamic interactions between natural systems and human technological manipulations can be improved if these systems are studied in an explicitly spatial context. We present here results of some of our efforts to model the spatial variation in the stable isotope ratios (δ2H and δ18O) of plants over large spatial extents, and how these spatial model predictions compare to spatially explicit data. Stable isotopes trace and record ecological processes and as such, if modeled correctly over Earth's surface, allow us insights into changes in biosphere states and processes across spatial scales. The data-model comparisons show good agreement, in spite of the remaining uncertainties (e.g., plant source water isotopic composition). For example, inter-annual changes in climate are recorded in wine stable isotope ratios. Also, a much simpler model of leaf water enrichment driven with spatially continuous global rasters of precipitation and climate normals largely agrees with complex GCM modeling that includes leaf water δ18O. Our results suggest that modeling plant stable isotope ratios across large spatial extents may be done with reasonable accuracy, including over time. These spatial maps, or isoscapes, can now be utilized to help understand spatially distributed data, as well as to help guide future studies designed to understand ecological change across
B Plant, TK-21-1, analytical results for the final report
Fritts, L.L., Westinghouse Hanford
1996-12-09
This document is the final laboratory report for B Plant TK-21-1. A Resource Conservation and Recovery Act (RCRA) sample was taken from TK-21-1 on September 26, 1996. This sample was received at the 222-S Analytical Laboratory on September 27, 1996. Analyses were performed in accordance with the accompanying Request for Sample Analysis (RSA) and the Letter of Instruction B PLANT RCRA SAMPLES TO 222S LABORATORY, LETTER OF INSTRUCTION (LOI) 2B-96-LOI-012-01 (Westra, 1996). The LOI was issued subsequent to the RSA and replaces Letter of Instruction 2C-96-LOI-004-01 referenced in the RSA.
Plumlee, Geoffrey S.; Martin, Deborah A.; Hoefen, Todd; Kokaly, Raymond F.; Hageman, Philip; Eckberg, Alison; Meeker, Gregory P.; Adams, Monique; Anthony, Michael; Lamothe, Paul J.
2007-01-01
Overview The U.S. Geological Survey (USGS) collected ash and burned soils from about 28 sites in southern California wildfire areas (Harris, Witch, Ammo, Santiago, Canyon and Grass Valley) from Nov. 2 through 9, 2007 (table 1). USGS researchers are applying a wide variety of analytical methods to these samples, with the goal of helping identify characteristics of the ash and soils from wildland and suburban burned areas that may be of concern for their potential to adversely affect water quality, human health, endangered species, and debris-flow or flooding hazards. These studies are part of the Southern California Multi-Hazards Demonstration Project, and preliminary findings are presented here.
Analytical results for the reactivity of a single-file system
NASA Astrophysics Data System (ADS)
Jansen, A. P.; Nedea, S. V.; Lukkien, J. J.
2003-03-01
We derive analytical expressions for the reactivity of a single-file system with fast diffusion and particles entering and leaving the system at one end. If the conversion reaction is fast, then the reactivity depends only very weakly on the system size, and the conversion is about 100%. If the reaction is slow, then the reactivity becomes proportional to the system size, the loading, and the reaction rate constant. As the system size increases, the reactivity approaches the geometric mean of the reaction rate constant and the rate of particles entering and leaving the system. For large systems, the number of unconverted particles decreases exponentially with distance from the open end.
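The two limiting behaviors quoted above can be combined into a crude crossover formula. This is only an illustrative sketch of the scalings stated in the abstract; the `min()` interpolation and the parameter names are our own simplification, not the authors' derivation.

```python
import math

def single_file_reactivity(k_rxn, w_exchange, n_sites, loading):
    """Crude crossover between the two limits quoted in the abstract:
    slow reaction:  R ~ k * n * theta   (grows with system size and loading)
    large system:   R ~ sqrt(k * w)     (geometric mean of reaction rate
                                         and particle exchange rate)"""
    slow_limit = k_rxn * n_sites * loading
    large_system_limit = math.sqrt(k_rxn * w_exchange)
    # the smaller of the two limits dominates the overall reactivity
    return min(slow_limit, large_system_limit)

# small system, slow reaction: reactivity grows linearly with size...
r_small = single_file_reactivity(k_rxn=1e-4, w_exchange=1.0, n_sites=10, loading=0.5)
# ...until it saturates at the geometric-mean value for large systems
r_large = single_file_reactivity(k_rxn=1e-4, w_exchange=1.0, n_sites=10**6, loading=0.5)
```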
Brownian parametric oscillator: analytical results for a high-frequency driving field
NASA Astrophysics Data System (ADS)
Brouard, S.; Plata, J.
2001-12-01
The dissipative dynamics of a classical parametric oscillator is studied analytically. For a generic functional form of the parametric driving, a simplified description of the system is obtained by performing a sequence of transformations set up from the deterministic Floquet solutions. In the high-frequency regime, the application of an averaging method leads to the description of the secular dynamics as an effective bidimensional Ornstein-Uhlenbeck process. The expressions obtained for the probability density and the correlation functions allow us to unravel the mechanisms responsible for the nontrivial dependence of the variances on the driving amplitude.
Analytical results for the pulsed operation of high field constant stress coils
NASA Astrophysics Data System (ADS)
Vanbockstal, Luc; Askenazy, Salomon; Herlach, Fritz; Schneider-Muntau, Hans-Jorg
1994-07-01
Based on the analytical expressions for the radial current density in coils optimized for constant stress, the implications for pulsed operation are discussed; the pulse duration, peak power and energy are determined. A cut-off on the current density, which peaks at the inside of the coil, limits the localized heating and increases the pulse duration at the expense of center field or materials requirements. From the relation between strength, conductivity and cut-off level, optimal properties of construction materials are determined.
NASA Astrophysics Data System (ADS)
Klus, Jakub; Pořízka, Pavel; Prochazka, David; Novotný, Jan; Novotný, Karel; Kaiser, Jozef
2016-12-01
The purpose of this work is to provide a detailed study of the statistical behavior of different types of analytical signals typical of Laser-Induced Breakdown Spectroscopy (LIBS) measurements. The main goal of this work is to justify the usage of the arithmetic mean and standard deviation as statistical estimates of the expected value of a selected analytical signal. Contrary to the general assumption that LIBS data show a Gaussian distribution, this paper deals with the hypothesis that the data rather follow a Generalized Extreme Value distribution. The study is realized on 10 selected lines measured on a NIST glass standard. In order to cover a wide range of possible applications, three different spectral internal-standardization techniques and their influence on the distribution were studied. Finally, assuming that the data come from a single distribution and the central limit theorem is valid, the influence of accumulations on the line distribution is examined and discussed. The statistical tools used and described in this paper can be utilized by other researchers to confirm their hypotheses and verify the utilization of the Gaussian distribution or even novel data-processing methods.
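The Gaussian-vs-GEV contrast tested above can be illustrated numerically: GEV samples are easy to draw by inverting the CDF, and their skewness is visibly nonzero, unlike a Gaussian's. The parameter values below are arbitrary illustrations, not fitted LIBS data.

```python
import numpy as np

def gev_sample(mu, sigma, xi, size, rng):
    """Draw Generalized Extreme Value samples by inverse-CDF transform;
    xi = 0 is the Gumbel limit of the GEV family."""
    u = rng.uniform(size=size)
    if xi == 0.0:
        return mu - sigma * np.log(-np.log(u))
    return mu + sigma * ((-np.log(u)) ** (-xi) - 1.0) / xi

rng = np.random.default_rng(42)
x = gev_sample(mu=100.0, sigma=10.0, xi=0.0, size=200_000, rng=rng)

# sample skewness: about 1.14 for a Gumbel law, versus 0 for a Gaussian,
# so mean +/- standard deviation summaries can be misleading for such data
skew = float(np.mean(((x - x.mean()) / x.std()) ** 3))
```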
The route to MBxNyCz molecular wheels: II. Results using accurate functionals and basis sets
NASA Astrophysics Data System (ADS)
Güthler, A.; Mukhopadhyay, S.; Pandey, R.; Boustani, I.
2014-04-01
Applying ab initio quantum chemical methods, molecular wheels composed of metal and light atoms were investigated. High-quality basis sets (6-31G*, TZVP, and cc-pVTZ) as well as the exchange and non-local correlation functionals B3LYP, BP86, and B3P86 were used. The ground-state energies and structures of cyclic planar and pyramidal TiBn clusters (n = 3-10) were computed. In addition, the relative stabilities and electronic structures of the molecular wheels TiBxNyCz (x, y, z = 0-10) and MBnC10-n (n = 2-5; M = Sc to Zn) were determined. This paper is a follow-up to the previous study of Boustani and Pandey [Solid State Sci. 14 (2012) 1591], in which calculations were carried out at the HF-SCF/STO-3G/6-31G level of theory to determine initial stabilities and properties. The results show a competition between the 2D planar and the 3D pyramidal TiBn clusters (n = 3-8). Different isomers of the TiB10 cluster were also studied, and a structural transition from the 3D isomer to the 2D wheel is presented. Substituting boron in TiB10 with carbon and/or nitrogen atoms enhances the stability and leads toward the most stable wheel, TiB3C7. Furthermore, the computations show that Sc, Ti, and V at the center of the molecular wheels are energetically favored over the other first-row transition metal atoms.
Carmical, R.; Nadella, V.; Herbert, Z.; Beckloff, N.; Chittur, S.; Rosato, C.; Perera, A.; Auer, H.; Robinson, M.; Tighe, S.; Holbrook, Jennifer
2013-01-01
It is well recognized that metagenomics is becoming a critical tool for studying previously unobtainable population dynamics, both at the level of species identification and at a functional or transcriptional level. Because the power to resolve microbial information is so important for identifying the components of a mixed sample, metagenomics can be used to study nearly any environment or system, including clinical, environmental, and industrial settings, to name a few. Clinically, it may be used to determine the sub-populations colonizing regions of the body or to identify a rare infection to assist in treatment strategies. Environmentally, it may be used to identify microbial populations within a soil, water, or air sample, or within a bioreactor to characterize a population-based functional process. The possibilities are endless. However, the accuracy of a metagenomics dataset relies on three important "gatekeepers": 1) the ability to effectively extract all DNA or RNA from every cell within a sample, 2) the reliability of the methods used for deep or high-throughput sequencing, and 3) the software used to analyze the data. Since DNA extraction is the first step in the technical process of metagenomics, the Nucleic Acid Research Group (NARG) conducted a study to evaluate extraction methods using a synthetic microbial sample. The synthetic sample was prepared from 10 known bacteria at specific concentrations and ranging in diversity. Samples were extracted in duplicate using various popular kit-based methods as well as several homebrew protocols, then analyzed by next-generation sequencing on an Illumina HiSeq. Results of the study include the percent recovery of each organism, determined by comparison to the known quantity in the original synthetic mix.
Lorentz resonances and the vertical structure of dusty rings - Analytical and numerical results
NASA Astrophysics Data System (ADS)
Schaffer, Les; Burns, Joseph A.
1992-03-01
The Schaffer and Burns (1987) linear theory of Lorentz resonances (LRs) in planetary rings is extended in order to accurately compute LR locations and to elucidate the nature of grain trajectories within the LR zones. Using perturbation theory and energy arguments, it is shown that an increase in the inclination or eccentricity of a grain must be accompanied by a shift in the mean orbital radius of the particle. This shift alters the epicyclic frequencies in such a way that the infinite response of the linear resonance theory is suppressed. Chaotic motion is found for the range of charge-to-mass ratios that cause the vertical and horizontal LRs to overlap.
NASA Astrophysics Data System (ADS)
Roncoroni, Alan; Medo, Matus
2016-12-01
Models of spatial firm competition assume that customers are distributed in space and that transportation costs are associated with their purchases of products from a small number of firms, which are also placed at definite locations. It has long been known that the competition equilibrium is not guaranteed to exist if the most straightforward linear transportation costs are assumed. We show, by simulations and also analytically, that if periodic boundary conditions in a plane are assumed, the equilibrium exists for a pair of firms at any distance. When a larger number of firms is considered, we find that their total equilibrium profit is inversely proportional to the square root of the number of firms. We end with a numerical investigation of the system's behavior for a general transportation-cost exponent.
Box, Stephen E.; Bookstrom, Arthur A.; Ikramuddin, Mohammed; Lindsay, James
2001-01-01
(Fe), manganese (Mn), arsenic (As), and cadmium (Cd). In general, inter-laboratory correlations are better for samples within the compositional range of the Standard Reference Materials (SRMs) from the National Institute of Standards and Technology (NIST). Analyses by EWU are the most accurate relative to the NIST standards (mean recoveries within 1% for Pb, Fe, Mn, and As; 3% for Zn; and 5% for Cd) and are the most precise (within 7% of the mean at the 95% confidence interval). USGS-EDXRF is similarly accurate for Pb and Zn. XRAL and ACZ are relatively accurate for Pb (within 5-8% of certified NIST values) but were considerably less accurate for the other 5 elements of concern (10-25% of NIST values). However, analyses of sample splits by more than one laboratory reveal that, for some elements, XRAL (Pb, Mn, Cd) and ACZ (Pb, Mn, Zn, Fe) analyses were comparable to EWU analyses of the same samples (when values are within the range of the NIST SRMs). These results suggest that, for some elements, XRAL and ACZ dissolutions are more effective on the matrix of the CdA samples than on the matrix of the NIST samples (obtained from soils around Butte, Montana). Splits of CdA samples analyzed by CHEMEX were the least accurate, yielding values 10-25% less than those of EWU.
Timme, Marc; Geisel, Theo; Wolf, Fred
2006-03-01
We analyze the dynamics of networks of spiking neural oscillators. First, we present an exact linear stability theory of the synchronous state for networks of arbitrary connectivity. For general neuron rise functions, stability is determined by multiple operators, for which standard analysis is not suitable. We describe a general nonstandard solution to the multioperator problem. Subsequently, we derive a class of neuronal rise functions for which all stability operators become degenerate and standard eigenvalue analysis becomes a suitable tool. Interestingly, this class is found to consist of networks of leaky integrate-and-fire neurons. For random networks of inhibitory integrate-and-fire neurons, we then develop an analytical approach, based on the theory of random matrices, to precisely determine the eigenvalue distributions of the stability operators. This yields the asymptotic relaxation time for perturbations to the synchronous state which provides the characteristic time scale on which neurons can coordinate their activity in such networks. For networks with finite in-degree, i.e., finite number of presynaptic inputs per neuron, we find a speed limit to coordinating spiking activity. Even with arbitrarily strong interaction strengths neurons cannot synchronize faster than at a certain maximal speed determined by the typical in-degree.
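The random-matrix step described above can be illustrated with a toy computation. The network size, in-degree, and coupling below are hypothetical choices, and the matrix is only a schematic stand-in for the paper's stability operators, not a derivation of them:

```python
# Toy spectrum of a random "stability operator" for a network in which
# each unit receives k equal-weight inputs from randomly chosen partners.
# All parameter values are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n, k = 400, 20      # number of units and in-degree (hypothetical)
eps = 0.2           # total coupling strength per unit (hypothetical)

A = np.zeros((n, n))
for i in range(n):
    others = np.delete(np.arange(n), i)
    pre = rng.choice(others, size=k, replace=False)
    A[i, pre] = eps / k          # k presynaptic inputs of equal weight
np.fill_diagonal(A, 1.0 - eps)   # remaining weight stays on the diagonal

# Rows sum to one, so the largest eigenvalue modulus is exactly 1
# (the synchronous mode); the second-largest modulus sets the
# asymptotic relaxation rate of perturbations.
moduli = np.sort(np.abs(np.linalg.eigvals(A)))[::-1]
lam2 = moduli[1]
tau = -1.0 / np.log(lam2)        # relaxation time in units of the period
print(f"second-largest modulus: {lam2:.3f}, relaxation time: {tau:.1f}")
```

The finite in-degree k bounds how far `lam2` can fall below 1, which is the spirit of the speed limit the abstract describes: even strong coupling cannot make the relaxation time arbitrarily short.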
Fabro, M A; Milanesio, H V; Robert, L M; Speranza, J L; Murphy, M; Rodríguez, G; Castañeda, R
2006-03-01
In Argentina, a single analytical method is usually used to determine acidity in whole raw milk: the Instituto Nacional de Racionalización de Materiales standard (no. 14005), based on the Dornic method of French origin. In national and international regulations, the Association of Official Analytical Chemists International method (no. 947.05) is proposed as the standard method of analysis. Although these methods share the same underlying principle, there is no evidence that the results obtained using the two methods are equivalent. The presence of some trends and discordant data led us to perform a statistical study to verify the equivalency of the obtained results. We analyzed 266 samples, and the existence of significant differences between the results obtained by the two methods was determined.
Reigel, M.; Cozzi, A.
2010-08-17
This report details the chemical analysis results for the characterization of the May 19, 2010 inadvertent transfer from the Saltstone Production Facility (SPF) to the Saltstone Disposal Facility (SDF). On May 19, 2010, the SPF inadvertently transferred approximately 1800 gallons of untreated low-level salt solution from the salt feed tank (SFT) to Cell F of Vault 4. The transfer was identified, and during shutdown to a safe configuration, approximately 70 gallons of SFT material was left in the Saltstone hopper. After the shutdown, the material in the hopper remained undisturbed, while the SFT received approximately 1400 gallons of drain water from the Vault 4 bleed system. The drain-water path from Vault 4 to the SFT does not include the hopper (Figure 1); therefore, it was determined that the material remaining in the hopper was the most representative sample of the salt solution transferred to the vault. To complete item No. 5 of Reference 1, the Savannah River National Laboratory (SRNL) was asked to analyze the liquid sample retrieved from the hopper for pH and for the metals identified by the Resource Conservation and Recovery Act (RCRA). SRNL prepared a report to complete item No. 5 and determine the hazardous nature of the transfer. Waste Solidification Engineering then instructed SRNL to provide a more detailed analysis of the slurried sample to assist in determining the portion of Tank 50 waste in the hopper sample.
NASA Astrophysics Data System (ADS)
Hadden, Sam; Lithwick, Yoram
2015-12-01
Several Kepler planets reside in multi-planet systems where gravitational interactions result in transit timing variations (TTVs) that provide exquisitely sensitive probes of their masses and orbits. Measuring these planets' masses and orbits constrains their bulk compositions and can provide clues about their formation. However, inverting TTV measurements to infer planet properties can be challenging: it involves fitting a nonlinear model with a large number of parameters to noisy data, often with significant degeneracies between parameters. We present results from two complementary approaches to TTV inversion: Markov chain Monte Carlo (MCMC) simulations that use N-body integrations to compute transit times, and a simplified analytic model for computing the TTVs of planets near mean motion resonances. The analytic model allows for straightforward interpretation of the N-body results and provides an independent estimate of parameter uncertainties that can be compared to MCMC results, which may be sensitive to factors such as priors. We have conducted extensive MCMC simulations along with analytic fits to model the TTVs of dozens of Kepler multi-planet systems. We find that the bulk of these sub-Jovian planets have low densities that necessitate significant gaseous envelopes. We also find that the planets' eccentricities are generally small but often definitively non-zero.
Gimeno, Pascal; Maggio, Annie-Françoise; Bousquet, Claudine; Quoirez, Audrey; Civade, Corinne; Bonnet, Pierre-Antoine
2012-08-31
Esters of phthalic acid, more commonly named phthalates, may be present in cosmetic products as ingredients or contaminants. Their presence as contaminants can be due to the manufacturing process, to the raw materials used, or to the migration of phthalates from packaging when plastic (polyvinyl chloride, PVC) is used. Eight phthalates (DBP, DEHP, BBP, DMEP, DnPP, DiPP, DPP, and DiBP), classified H360 or H361, are forbidden in cosmetics according to European regulation 1223/2009 on cosmetics. A GC/MS method was developed for the assay of 12 phthalates in cosmetics, including the 8 regulated phthalates. Analyses are carried out on a GC/MS system in electron impact ionization mode (EI). The separation of phthalates is obtained on a cross-linked 5%-phenyl/95%-dimethylpolysiloxane capillary column, 30 m × 0.25 mm (i.d.) × 0.25 μm film thickness, using a temperature gradient. Phthalate quantification is performed by external calibration using an internal standard. Validation elements obtained on standard solutions highlight a satisfactory system conformity (resolution > 1.5), a common quantification limit of 0.25 ng injected, an acceptable linearity between 0.5 μg mL⁻¹ and 5.0 μg mL⁻¹, as well as a precision and an accuracy in agreement with in-house specifications. Cosmetic samples ready for analytical injection are analyzed after dilution in ethanol, whereas more complex cosmetic matrices, like milks and creams, are assayed after a liquid/liquid extraction using tert-butyl methyl ether (TBME). Depending on the type of cosmetic analyzed, the common limits of quantification for the 12 phthalates were set at 0.5 or 2.5 μg g⁻¹. All samples were assayed using the analytical approach described in the ISO 12787 international standard "Cosmetics - Analytical methods - Validation criteria for analytical results using chromatographic techniques". This analytical protocol is particularly well adapted when it is not possible to make reconstituted sample matrices.
Crock, J.G.; Smith, D.B.; Yager, T.J.B.; Berry, C.J.; Adams, M.G.
2010-01-01
Since late 1993, Metro Wastewater Reclamation District of Denver, a large wastewater treatment plant in Denver, Colo., has applied Grade I, Class B biosolids to about 52,000 acres of nonirrigated farmland and rangeland near Deer Trail, Colo., U.S.A. In cooperation with the Metro District in 1993, the U.S. Geological Survey began monitoring groundwater at part of this site. In 1999, the Survey began a more comprehensive monitoring study of the entire site to address stakeholder concerns about the potential chemical effects of biosolids applications to water, soil, and vegetation. This more comprehensive monitoring program has recently been extended through the end of 2010. Monitoring components of the more comprehensive study include biosolids collected at the wastewater treatment plant, soil, crops, dust, alluvial and bedrock groundwater, and stream-bed sediment. Streams at the site are dry most of the year, so samples of stream-bed sediment deposited after rain were used to indicate surface-water effects. This report presents analytical results for the biosolids samples collected at the Metro District wastewater treatment plant in Denver and analyzed for 2009. In general, the objective of each component of the study was to determine whether concentrations of nine trace elements ('priority analytes') (1) were higher than regulatory limits, (2) were increasing with time, or (3) were significantly higher in biosolids-applied areas than in a similar farmed area where biosolids were not applied. Previous analytical results indicate that the elemental composition of biosolids from the Denver plant was consistent during 1999-2008, and this consistency continues with the samples for 2009. Total concentrations of regulated trace elements remain consistently lower than the regulatory limits for the entire monitoring period. Concentrations of none of the priority analytes appear to have increased during the 11 years of this study.
Crock, J.G.; Smith, D.B.; Yager, T.J.B.; Berry, C.J.; Adams, M.G.
2011-01-01
Since late 1993, Metro Wastewater Reclamation District of Denver (Metro District), a large wastewater treatment plant in Denver, Colo., has applied Grade I, Class B biosolids to about 52,000 acres of nonirrigated farmland and rangeland near Deer Trail, Colo., U.S.A. In cooperation with the Metro District in 1993, the U.S. Geological Survey (USGS) began monitoring groundwater at part of this site. In 1999, the USGS began a more comprehensive monitoring study of the entire site to address stakeholder concerns about the potential chemical effects of biosolids applications to water, soil, and vegetation. This more comprehensive monitoring program was recently extended through the end of 2010 and is now complete. Monitoring components of the more comprehensive study include biosolids collected at the wastewater treatment plant, soil, crops, dust, alluvial and bedrock groundwater, and stream-bed sediment. Streams at the site are dry most of the year, so samples of stream-bed sediment deposited after rain were used to indicate surface-water runoff effects. This report summarizes analytical results for the biosolids samples collected at the Metro District wastewater treatment plant in Denver and analyzed for 2010. In general, the objective of each component of the study was to determine whether concentrations of nine trace elements ("priority analytes") (1) were higher than regulatory limits, (2) were increasing with time, or (3) were significantly higher in biosolids-applied areas than in a similar farmed area where biosolids were not applied (background). Previous analytical results indicate that the elemental composition of biosolids from the Denver plant was consistent during 1999-2009, and this consistency continues with the samples for 2010. Total concentrations of regulated trace elements remain consistently lower than the regulatory limits for the entire monitoring period. Concentrations of none of the priority analytes appear to have increased during the 12 years of this study.
Borsari, Lucia; Aggazzotti, Gabriella; Busani, Stefano; Mussini, Cristina; Rumpianesi, Fabio; Rossolini, Gian Maria; Girardis, Massimo
2017-01-01
Background Prompt identification of bloodstream pathogens is essential for optimal management of patients. Significant changes in analytical methods have improved the turnaround time for laboratory diagnosis. Less attention has been paid to the time elapsing from blood collection to incubation and to its potential effect on the recovery of pathogens. We evaluated the performance of blood cultures collected under typical hospital conditions in relation to the length of their pre-analytical time. Methods We carried out a large retrospective study including 50,955 blood cultures collected, over a 30-month period, from 7,035 adult septic patients. Cultures were accepted by the laboratory only during opening hours (Mon-Fri: 8am–4pm; Sat: 8am–2pm). Samples collected outside laboratory hours were stored at room temperature in the clinical wards. All cultures were processed by automated culture systems. The day and time of blood collection and of culture incubation were known for all samples. Results A maximum pre-analytical interval of 2 hours is recommended by guidelines. When the laboratory was open, 57% of cultures were processed within 2 h; when it was closed, 4.9% of cultures were processed within 2 h (P<0.001). Samples collected when the laboratory was closed showed pre-analytical times significantly longer than those collected when it was open (median time: 13 h and 1 h, respectively, P<0.001). The prevalence of positive cultures was significantly lower for samples collected when the laboratory was closed than when it was open (11% vs 13%, P<0.001). The probability of a positive result decreased by 16% when the laboratory was closed (OR: 0.84; 95% CI: 0.80–0.89, P<0.001). Further, each hour elapsed from blood collection to incubation was associated with a 0.3% decrease (OR: 0.997; 95% CI: 0.994–0.999, P<0.001) in the probability of a positive result. Discussion Delayed insertion of cultures into automated systems was associated with lower detection
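The per-hour odds ratio reported above compounds multiplicatively with delay. The short sketch below just makes that arithmetic explicit, using the abstract's OR of 0.997 per hour and its median delays (1 h open, 13 h closed):

```python
# Compounding a per-hour odds ratio over a pre-analytical delay.
# The OR value and median delays are taken from the abstract; the
# rest is illustrative arithmetic.
def cumulative_odds_ratio(or_per_hour: float, hours: float) -> float:
    """Multiplicative effect on the odds of a positive culture
    after `hours` of pre-analytical delay."""
    return or_per_hour ** hours

or_open = cumulative_odds_ratio(0.997, 1)     # median delay, lab open
or_closed = cumulative_odds_ratio(0.997, 13)  # median delay, lab closed
print(f"odds multiplier after  1 h: {or_open:.3f}")
print(f"odds multiplier after 13 h: {or_closed:.3f}")
```

A 13-hour delay alone corresponds to roughly a 4% reduction in the odds of a positive result, smaller than the overall 16% closed-laboratory effect, consistent with delay being one contributing factor among others.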
Tank 241-BY-112, cores 174 and 177 analytical results for the final report
Nuzum, J.L.
1997-05-06
Results from bulk density tests ranged from 1.03 g/mL to 1.86 g/mL. The highest bulk density result of 1.86 g/mL was used to calculate the solid total alpha activity notification limit for this tank (33.1 μCi/g) for the Total Alpha (AT) analysis. Attachment 2 contains the Data Verification and Deliverable (DVD) Summary Report for the AT analyses. This report summarizes results from the AT analyses and provides data qualifiers and total propagated uncertainty (TPU) values for the results. The TPU values are based on the uncertainties inherent in each step of the analysis process. They may be used as an additional reference to determine reasonable RPD values, which may be used to accept valid data that do not meet the TSAP acceptance criteria. A report guide is provided to assist in understanding this summary report.
Analytical Results of DWPF Glass Sample Taken During Pouring of Canister S01913
Bannochie, C
2005-10-01
The Defense Waste Processing Facility (DWPF) began processing Sludge Batch 2 (SB2) (Macrobatch 3) in December 2001 as part of Sludge Receipt and Adjustment Tank (SRAT) Batch 208. Macrobatch 3 consists of the contents of Tank 40 and Tank 8 in approximately equal proportions. A glass sample was obtained while pouring Canister S01913 and was sent to the Savannah River National Laboratory (SRNL) Shielded Cells for characterization. This report contains observations of the glass sample, results for the density, the chemical composition, the Product Consistency Test (PCT), and the radionuclide results needed for the Production Record for Canister S01913. The following conclusions are drawn from this work: (1) The glass sample taken during the filling of Canister S01913 received at SRNL weighed 33.04 grams and was dark and reflective with no obvious inclusions, indicating the glass was homogeneous. (2) The results of the composition for glass sample S01913 are in good agreement (±15%) with the DWPF SME results for Batch Number 254, the SME batch that was being fed to the melter when the sample was collected. (3) The calculated WDF was 2.58. (4) Acid dissolution of the glass samples may not have completely dissolved the noble metals rhodium and ruthenium. (5) The PCT results for the glass (normalized boron release of 1.18 g/L) indicate that it is more than seven standard deviations more durable than the EA glass; thus, the glass meets the waste acceptance criterion for durability. (6) The measured density of the glass was 2.56 ± 0.03 g/cm³.
Genome Wide Association for Addiction: Replicated Results and Comparisons of Two Analytic Approaches
Drgon, Tomas; Zhang, Ping-Wu; Johnson, Catherine; Walther, Donna; Hess, Judith; Nino, Michelle; Uhl, George R.
2010-01-01
Background Vulnerabilities to dependence on addictive substances are substantially heritable complex disorders whose underlying genetic architecture is likely to be polygenic, with modest contributions from variants in many individual genes. "Nontemplate" genome-wide association (GWA) approaches can identify groups of chromosomal regions and genes that, taken together, are much more likely to contain allelic variants that alter vulnerability to substance dependence than expected by chance. Methodology/Principal Findings We report pooled "nontemplate" genome-wide association studies of two independent samples of substance-dependent vs control research volunteers (n = 1620), one European-American and the other African-American, using 1 million SNP (single nucleotide polymorphism) Affymetrix genotyping arrays. We assess convergence between the results from these two samples using two related methods that seek clustering of nominally-positive results and assess significance levels with Monte Carlo and permutation approaches. Both "converge then cluster" and "cluster then converge" analyses document convergence between the results obtained from these two independent datasets in ways that are virtually never found by chance. The genes identified in this fashion are also identified by individually-genotyped dbGaP data that compare allele frequencies in cocaine-dependent vs control individuals. Conclusions/Significance These overlapping results identify small chromosomal regions that are also identified by genome-wide data from studies of other relevant samples to extents much greater than chance. These chromosomal regions contain more genes related to "cell adhesion" processes than expected by chance. They also contain a number of genes that encode potential targets for anti-addiction pharmacotherapeutics. "Nontemplate" GWA approaches that seek chromosomal regions in which nominally-positive associations are found in multiple independent samples are
Heat transfer from earth-coupled heat exchangers-Experimental and analytical results
Edwards, J.A.; Vitta, P.K.
1985-01-01
Experimental heat transfer results obtained with tubular heat transfer coils buried in soil are presented along with a finite difference simulation model that predicts the heat transfer to or from the buried pipes and the temperature distribution in the soil surrounding them. The results were obtained with two different earth-coupled coils. Each coil was fabricated from 2.2 in. (5.6 cm) ID nominal 2 in. diameter cast iron pipe. The length of the heat exchanger for each earth-coupled system was 90 ft (27.4 m). The earth-coupled coils were buried at a depth of 2.75 ft (0.84 m) below the surface of the earth. The experimental data cover a time span of seven months and represent operation of the earth-coupled coils at various heat rates. Some of the prime quantities measured on a continuous basis are the earth's temperature at several locations in the vicinity of the buried coils, the far-earth temperature, the solar insolation, the moisture content of the soil, and the heat transferred between the buried coils and the surrounding soil. The finite difference model tracks the temperature distribution in the earth surrounding the coils on a continuous basis and predicts the earth's temperature at many locations adjacent to the earth-coupled coil with a maximum error of 4°F (2.2°C) during the seven-month test. As parameters, the finite difference model includes the moisture content of the soil, convection at the surface of the earth, the emissivity of the soil, radiation exchange at the air-soil interface, and all of the pertinent parameters related to the flow of the heat transfer fluid through the buried pipes. The results presented, both experimental and simulated, have direct application in the design of earth-coupled water-source heat pump systems.
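The finite-difference approach described above can be sketched in one dimension. The sketch below is not the paper's model (which is multidimensional and includes moisture, radiation, and surface convection); it only shows the explicit time-marching idea, with all soil properties and boundary temperatures assumed:

```python
# Minimal 1D explicit finite-difference march for transient heat
# conduction in soil near a buried pipe. All values are assumptions
# for illustration; the paper's model is far more complete.
import numpy as np

alpha = 5e-7            # soil thermal diffusivity, m^2/s (assumed)
dx, dt = 0.05, 600.0    # grid spacing (m) and time step (s)
r = alpha * dt / dx**2
assert r < 0.5          # stability criterion for the explicit scheme

T = np.full(40, 12.0)   # initial soil temperature, deg C (assumed)
T[0] = 30.0             # node at the pipe wall, held at fluid temperature

for _ in range(1000):   # march about one simulated week
    T[1:-1] += r * (T[2:] - 2.0 * T[1:-1] + T[:-2])
    T[0], T[-1] = 30.0, 12.0   # pipe-wall and far-earth boundaries

print(f"temperature 0.5 m from the pipe: {T[10]:.2f} C")
```

The temperature profile decays monotonically from the pipe wall toward the undisturbed far-earth value, the same qualitative behavior the buried-coil measurements track.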
Nonlinear effects in a plain journal bearing. I - Analytical study. II - Results
NASA Technical Reports Server (NTRS)
Choy, F. K.; Braun, M. J.; Hu, Y.
1991-01-01
In the first part of this work, a numerical model is presented which couples the variable-property Reynolds equation with a rotor-dynamics model for the calculation of a plain journal bearing's nonlinear characteristics when working with a cryogenic fluid, LOX. The effects of load on the linear/nonlinear plain journal bearing characteristics are analyzed and presented in a parametric form. The second part of this work presents numerical results obtained for specific parametric-study input variables (lubricant inlet temperature, external load, angular rotational speed, and axial misalignment). Attention is given to the interrelations between pressure profiles and bearing linear and nonlinear characteristics.
Comparison of Analytical Predictions and Experimental Results for a Dual Brayton Power System
NASA Technical Reports Server (NTRS)
Johnson, Paul
2007-01-01
NASA Glenn Research Center (GRC) contracted Barber-Nichols, Arvada, CO, to construct a dual Brayton power conversion system for use as a hardware proof of concept and to validate results from a computational code known as the Closed Cycle System Simulation (CCSS). Initial checkout tests were performed at Barber-Nichols to ready the system for delivery to GRC. This presentation describes the system hardware components and lists the types of checkout tests performed, along with a couple of issues encountered while conducting the tests. A description of the CCSS model is also presented. The checkout tests did not focus on generating data; therefore, no test data or model analyses are presented.
Semiconductor quantum wells with BenDaniel-Duke boundary conditions: approximate analytical results
NASA Astrophysics Data System (ADS)
Barsan, Victor; Ciornei, Mihaela-Cristina
2017-01-01
The Schrödinger equation for a particle moving in a square well potential with BenDaniel-Duke boundary conditions is solved. Using algebraic approximations for trigonometric functions, the transcendental equations of the bound states energy are transformed into tractable, algebraic equations. For the ground state and the first excited state, they are cubic equations; we obtain simple formulas for their physically interesting roots. The case of higher excited states is also analysed. Our results have direct applications in the physics of type I and type II semiconductor heterostructures.
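The transcendental bound-state condition discussed above can be checked numerically. The reduced form below (for the even ground state of a finite square well with a mass mismatch across the interface) is a standard textbook reduction, and the parameter values are illustrative assumptions rather than the paper's:

```python
# Numerical root of the even-state condition for a finite square well
# with BenDaniel-Duke matching (continuity of psi and psi'/m):
#   tan(u) = sqrt(p**2 - u**2) / (sqrt(beta) * u),
# where u = k*a, beta = m_outside/m_inside, and
# p**2 = 2*m_inside*V0*a**2/hbar**2 is the well-strength parameter.
# All parameter values are illustrative.
import numpy as np
from scipy.optimize import brentq

def ground_state_u(p: float, beta: float) -> float:
    f = lambda u: np.tan(u) - np.sqrt(p**2 - u**2) / (np.sqrt(beta) * u)
    upper = min(np.pi / 2, p) - 1e-9   # ground-state root lies below pi/2
    return brentq(f, 1e-9, upper)

u = ground_state_u(p=3.0, beta=0.5)   # hypothetical strength / mass ratio
energy = u**2                          # in units of hbar^2 / (2 m_in a^2)
print(f"k*a = {u:.4f}, ground-state energy = {energy:.4f} (reduced units)")
```

Roots like this are what algebraic approximations of the trigonometric functions replace with closed-form cubic solutions, so a numerical solver of this kind gives a direct accuracy check on such formulas.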
Hartwell, William T.; Daniels, Jeffrey; Nikolich, George; Shadel, Craig; Giles, Ken; Karr, Lynn; Kluesner, Tammy
2012-01-01
During the period April to June 2008, at the behest of the Department of Energy (DOE), National Nuclear Security Administration, Nevada Site Office (NNSA/NSO), the Desert Research Institute (DRI) constructed and deployed two portable environmental monitoring stations at the Tonopah Test Range (TTR) as part of the Environmental Restoration Project Soils Activity. DRI has operated these stations since that time. A third station was deployed in the period May to September 2011. The TTR is located within the northwest corner of the Nevada Test and Training Range (NTTR) and covers an area of approximately 725.20 km² (280 mi²). The primary objective of the monitoring stations is to evaluate whether, and under what conditions, there is wind transport of radiological contaminants from the Soils Corrective Action Units (CAUs) associated with Operation Roller Coaster on the TTR. Operation Roller Coaster was a series of tests, conducted in 1963, designed to examine the stability and dispersal of plutonium in storage and transportation accidents. These tests did not result in any nuclear explosive yield. However, they did result in the dispersal of plutonium and contamination of surface soils in the surrounding area.
Energy performance of an architectural fabric roof: Experimental and analytical results
Gridley, R.B.; Hart, G.H.; Goss, W.P.
1985-01-01
As part of a research program on the thermal performance of translucent fabric-covered buildings, a comparison between measured and predicted fabric roof heat transfer was made. Predictions, based on a steady-state ASHRAE calculation technique, were compared against measured heat transfer through three different roof systems operating under outside weather conditions. The goals of the study were to evaluate the ability to predict the net energy transfer through the fabric roof systems tested, to identify parameters that would contribute to major differences between the measured and predicted results, and to recommend improvements to those parameters. It is expected that those improvements could be made in the computer program DOE-2. The heat transfer through a single-layer, a double-layer, and a translucent insulated fabric roof system was measured in a vertical-heat-flow guarded hot box located outdoors in Granville, Ohio. Comparing the measured and predicted net heat transfer through the three roof systems indicated that the ASHRAE calculation technique predicted heat loss to within ±25%, but it consistently overpredicted the heat gain during cooling-load situations.
Meta-analytic results of ethnic group differences in peer victimization.
Vitoroulis, Irene; Vaillancourt, Tracy
2014-11-12
Research on the prevalence of peer victimization across ethnicities indicates that no one group is consistently at higher risk. In the present two meta-analyses representing 692,548 children and adolescents (age 6-18 years), we examined ethnic group differences in peer victimization at school by including studies with (a) ethnic majority-minority group comparisons (k = 24), and (b) White and Black, Hispanic, Asian, and Aboriginal comparisons (k = 81). Methodological moderating effects (measure type, definition of bullying, publication type and year, age, and country) were examined in both analyses. Using Cohen's d, results indicated a null effect size for the ethnic majority-minority group comparison. Moderator analyses indicated that ethnic majority youth experienced more peer victimization than ethnic minorities in the US (d = .23). The analysis on multiple group comparisons between White and Black (d = .02), Hispanic (d = .08), Asian (d = .05), Aboriginal (d = -.02) and Biracial (d = -.05) groups indicated small effect sizes. Overall, results from the main and moderator analyses yielded small effects of ethnicity, suggesting that ethnicity assessed as a demographic variable is not an adequate indicator for addressing ethnic group differences in peer victimization. Although few notable differences were found between White and non-White groups regarding rates of peer victimization, certain societal and methodological limitations in the assessment of peer victimization may underestimate differences between ethnicities. Aggr. Behav. 9999:XX-XX, 2014. © 2014 Wiley Periodicals, Inc.
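The effect sizes quoted above (d = .02 to .23) are Cohen's d values, i.e., standardized mean differences using a pooled standard deviation. A minimal sketch with made-up group statistics (illustrative only, not the meta-analytic data):

```python
import math

def cohens_d(mean1, sd1, n1, mean2, sd2, n2):
    """Cohen's d: standardized mean difference with a pooled SD."""
    pooled_var = ((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2)
    return (mean1 - mean2) / math.sqrt(pooled_var)

# Hypothetical victimization scores for a majority vs. minority group:
d = cohens_d(mean1=2.30, sd1=1.0, n1=400, mean2=2.07, sd2=1.0, n2=400)
print(round(d, 2))  # 0.23 -- a "small" effect by Cohen's benchmarks
```

Values near .2 are conventionally read as small, which is why the abstract interprets even its largest moderator result cautiously.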
NiO Test Specimens for Analytical Electron Microscopy: Round-Robin Results
NASA Astrophysics Data System (ADS)
Bennett, J. C.; Egerton, R. F.
1995-08-01
Improvements in instrumentation for energy-dispersive X-ray microanalysis (EDX) and electron energy-loss spectroscopy (EELS) have underlined the need for suitable standards for measuring performance. We report the results from several laboratories that were supplied with a test specimen consisting of a thin film of nickel oxide supported on a molybdenum grid. The Ni-Kα/Mo-Kα count ratio was used as an indication of the number of stray electrons and/or X-rays in the TEM column; the Ni-Kα peak/background ratio provided a measure of the total background in the EDX spectrum, including bremsstrahlung contributions and the effect of detector electronics. By providing values typical of current instrumentation, the results illustrate how the test specimen can be used to evaluate TEM/EDX systems prior to purchase, during installation, and (periodically) during operation. The NiO films were also used to test EELS acquisition and quantification procedures: measured Ni/O elemental ratios were all within 10% of stoichiometry.
NASA Technical Reports Server (NTRS)
Farrell, C. E.; Strange, D. A.
1982-01-01
An overview of the fast integral RF evaluation (FIRE) program is presented. This program uses surface current integration to evaluate RF performance of antenna systems. It requires modeling of surfaces in X, Y, Z coordinates along equally spaced X and Y grids with Z in the focal direction. The far field contribution of each surface point includes the effects of the Z-component of surface current, which is not included in the aperture integration technique. Because of this, surface current integration is the most effective and inclusive technique for predicting RF performance on non-ideal reflectors. Results obtained from use of the FIRE program and an aperture integration program to predict RF performance of a LSS antenna concept are presented.
Hybrid electric vehicle technology assessment : methodology, analytical issues, and interim results.
Plotkin, S.; Santini, D.; Vyas, A.; Anderson, J.; Wang, M.; Bharathan, D.; He, J.
2002-03-13
This report presents the results of the first phase of Argonne National Laboratory's (ANL's) examination of the costs and energy impacts of light-duty hybrid electric vehicles (HEVs). We call this research an HEV Technology Assessment, or HEVTA. HEVs are vehicles with drivetrains that combine electric drive components (electric motor, electricity storage) with a refuelable power plant (e.g., an internal combustion engine). The use of hybrid drivetrains is widely considered a key technology strategy in improving automotive fuel efficiency. Two hybrid vehicles--Toyota's Prius and Honda's Insight--have been introduced into the U.S. market, and all three auto industry participants in the Partnership for a New Generation of Vehicles (PNGV) have selected hybrid drivetrains for their prototype vehicles.
Lead fragments in tissues from wild birds: a cause of misleading analytical results.
Frank, A
1986-10-01
Seriously damaged eider ducks (Somateria mollissima) and long-tailed ducks (Clangula hyemalis) were shot in connection with an oil spill in 1974. Liver and kidney tissues were analyzed for environmental pollutants and lead analysis gave irreproducible results. By means of X-ray photographs, X-ray-dense particles could be observed in the tissues. The foreign particles were extracted by dissolution of the organ tissues in Soluene-350 (Packard Instruments Co. Inc) and then washed with toluene. The insoluble particles consisted of lead and bone splinters of varying size. The form of the former ranged from irregular fragments to dust, and arose by disruption of lead pellets upon collision with bone tissue. Birds shot with lead pellets should not be used for lead determination unless careful X-ray investigations are made prior to the chemical analysis. Determinations should be made on at least two different samples of the tissue examined.
Downstream evolution of turbulence from heated screens: Experimental and analytical results
O'Hern, T.J.; Shagam, R.N.; Neal, D.R.; Suo-Anttila, A.J.; Torczynski, J.R.
1993-02-01
This report discusses recent efforts to characterize the flow and density nonuniformities downstream of heated screens placed in a uniform flow. The Heated Screen Test Facility (HSTF) at Sandia National Laboratories and the Lockheed Palo Alto Flow Channel (LPAFC) were used to perform experiments over wide ranges of upstream velocities and heating rates. Screens of various mesh configurations were examined, including multiple screens sequentially positioned in the flow direction. Diagnostics in these experiments included pressure manometry, hot-wire anemometry, interferometry, Hartmann wavefront slope sensing, and photorefractive schlieren photography. A model was developed to describe the downstream evolution of the flow and density nonuniformities. Equations for the spatial variation of the mean flow quantities and the fluctuation magnitudes were derived by incorporating empirical correlations into the equations of motion. Numerical solutions of these equations are in fair agreement with previous and current experimental results.
Tank 241-T-201, core 192 analytical results for the final report
Nuzum, J.L.
1997-08-07
This document is the final laboratory report for Tank 241-T-201. Push mode core segments were removed from Riser 3 between April 24, 1997, and April 25, 1997. Segments were received and extruded at 222-S Laboratory. Analyses were performed in accordance with Tank 241-T-201 Push Mode Core Sampling and Analysis Plan (TSAP) (Hu, 1997), Letter of Instruction for Core Sample Analysis of Tanks 241-T-201, 241-T-202, 241-T-203, and 241-T-204 (LOI) (Bell, 1997), Additional Core Composite Sample from Drainable Liquid Samples for Tank 241-T-201 (ACC) (Hall, 1997), and Safety Screening Data Quality Objective (DQO) (Dukelow, et al., 1995). None of the subsamples submitted for total alpha activity (AT) or differential scanning calorimetry (DSC) analyses exceeded the notification limits stated in the DQO. The statistical results of the 95% confidence interval on the mean calculations are provided by the Tank Waste Remediation Systems Technical Basis Group and are not considered in this report.
Tank 241-AP-105, cores 208, 209 and 210, analytical results for the final report
Nuzum, J.L.
1997-10-24
This document is the final laboratory report for Tank 241-AP-105. Push mode core segments were removed from Risers 24 and 28 between July 2, 1997, and July 14, 1997. Segments were received and extruded at 222-S Laboratory. Analyses were performed in accordance with Tank 241-AP-105 Push Mode Core Sampling and Analysis Plan (TSAP) (Hu, 1997) and Tank Safety Screening Data Quality Objective (DQO) (Dukelow, et al., 1995). None of the subsamples submitted for total alpha activity (AT), differential scanning calorimetry (DSC) analysis, or total organic carbon (TOC) analysis exceeded the notification limits as stated in TSAP and DQO. The statistical results of the 95% confidence interval on the mean calculations are provided by the Tank Waste Remediation Systems Technical Basis Group, and are not considered in this report. Appearance and Sample Handling Two cores, each consisting of four segments, were expected from Tank 241-AP-105. Three cores were sampled, and complete cores were not obtained. TSAP states core samples should be transported to the laboratory within three calendar days from the time each segment is removed from the tank. This requirement was not met for all cores. Attachment 1 illustrates subsamples generated in the laboratory for analysis and identifies their sources. This reference also relates tank farm identification numbers to their corresponding 222-S Laboratory sample numbers.
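Both tank reports defer the "95% confidence interval on the mean" calculations to the Tank Waste Remediation Systems Technical Basis Group. As a generic illustration only (not the Basis Group's procedure), a two-sided Student-t interval over replicate analytical results looks like this:

```python
import math
import statistics

def mean_ci95(samples):
    """Two-sided 95% confidence interval on the mean of replicate results.
    Uses a small Student-t lookup table for df = n - 1 (n up to 6 here)."""
    n = len(samples)
    m = statistics.mean(samples)
    s = statistics.stdev(samples)  # sample standard deviation (n - 1 divisor)
    t_table = {1: 12.706, 2: 4.303, 3: 3.182, 4: 2.776, 5: 2.571}
    half = t_table[n - 1] * s / math.sqrt(n)
    return m - half, m + half

# Hypothetical triplicate results (e.g., total alpha activity replicates):
lo, hi = mean_ci95([10.1, 9.8, 10.3])
```

With few replicates, the t critical value is large, so such intervals are deliberately wide; that is why notification limits are compared against the interval bound rather than the raw mean.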
The luminosity functions of embedded stellar clusters. 1: Method of solution and analytic results
NASA Technical Reports Server (NTRS)
Fletcher, Andre B.; Stahler, Steven W.
1994-01-01
We describe a method for computing the history of the luminosity function in a young cluster still forming within a molecular cloud complex. Our method, which utilizes detailed results from stellar evolution theory, assumes that clusters arise from the continuous collapse of dense cloud cores over a protracted period of time. It is also assumed that stars reaching the main sequence are distributed in mass according to a prescribed initial mass function (IMF). We keep track separately of the contributions to the luminosity function from the populations of protostars, pre-main-sequence stars, and main-sequence stars. We derive expressions for the fractional contribution of these populations to both the total number of stars produced and the total cluster luminosity. In our model, the number of protostars rises quickly at first, but then levels off to a nearly constant value, which it maintains until the dispersal of the cloud complex. The number fraction of protostars always decreases with time. Averaged over the life of the parent cloud, this fraction is typically a few percent. The protostar mass distribution can be expressed as an integral over the IMF.
Environmental influences on fruit and vegetable intake: Results from a path analytic model
Liese, Angela D.; Bell, Bethany A.; Barnes, Timothy L.; Colabianchi, Natalie; Hibbert, James D.; Blake, Christine E.; Freedman, Darcy A.
2014-01-01
Objective: Fruit and vegetable intake (F&V) is influenced by behavioral and environmental factors, but these have rarely been assessed simultaneously. We aimed to quantify the relative influence of supermarket availability, perceptions of the food environment, and shopping behavior on F&V intake. Design: A cross-sectional study. Setting: Eight counties in South Carolina, USA, with verified locations of all supermarkets. Subjects: A telephone survey of 831 household food shoppers ascertained F&V intake with a 17-item screener, primary food store location, shopping frequency, and perceptions of healthy food availability; GIS-based supermarket availability was calculated. Path analysis was conducted. We report standardized beta coefficients on paths significant at the 0.05 level. Results: Frequency of grocery shopping at the primary food store (β=0.11) was the only factor exerting an independent, statistically significant direct effect on F&V intake. Supermarket availability was significantly associated with distance to food store (β=-0.24) and shopping frequency (β=0.10). Increased supermarket availability was significantly and positively related to perceived healthy food availability in the neighborhood (β=0.18) and ease of shopping access (β=0.09). Collectively considering all model paths linked to perceived availability of healthy foods, this measure was the only other factor to have a significant total effect on F&V intake. Conclusions: While the majority of literature to date has suggested an independent and important role of supermarket availability for F&V intake, our study found only indirect effects of supermarket availability and suggests that food shopping frequency and perceptions of healthy food availability are two integral components of a network of influences on F&V intake. PMID:24192274
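In a path model like this one, indirect effects are products of the standardized coefficients along each path, and total effects sum the direct effect with all indirect paths. Using two coefficients quoted in the abstract (availability → shopping frequency, β=0.10; frequency → F&V intake, β=0.11):

```python
# Standardized path coefficients taken from the abstract; the full model
# contains more paths than are shown in this two-link sketch.
paths = {
    ("availability", "shop_freq"): 0.10,
    ("shop_freq", "fv_intake"): 0.11,
}

# Indirect effect of supermarket availability on intake via shopping frequency:
indirect = paths[("availability", "shop_freq")] * paths[("shop_freq", "fv_intake")]
print(round(indirect, 4))  # 0.011 -- availability -> frequency -> intake
```

The small product (0.011) illustrates the abstract's conclusion: availability matters for intake only indirectly, through behaviors and perceptions.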
Audebert, P.; Temmar, A.
1997-05-01
In continuation of a series of tests, original results of oak drying in an evacuated kiln are presented here for different plate temperatures and various pressures in the kiln. These results include, in particular, the drying curves and the evolution of temperature, moisture, and pressure in and on the wood. They show the pressure and the temperature levels occurring in the wood during the drying period. These results also allow the development of two types of drying models: a simple one-dimensional model of drying curves based on analytical solutions of the equations of water diffusion in the wood, and a two-dimensional finite element model of the temperature, moisture, and pressure fields in the wood. The boundary conditions of the second model can be fixed with precision thanks to the results of the first model. In both cases, the proposed solutions are supported by experimental results.
Ficklin, W.H.; Nowlan, G.A.; Preston, D.J.
1983-01-01
Water samples were collected in the vicinity of Jackman, Maine as a part of the study of the relationship of dissolved constituents in water to the sediments subjacent to the water. Each sample was analyzed for specific conductance, alkalinity, acidity, pH, fluoride, chloride, sulfate, phosphate, nitrate, sodium, potassium, calcium, magnesium, and silica. Trace elements determined were copper, zinc, molybdenum, lead, iron, manganese, arsenic, cobalt, nickel, and strontium. The longitude and latitude of each sample location and a sample site map are included in the report as well as a table of the analytical results.
Magnuson, Matthew; Campisano, Romy; Griggs, John; Fitz-James, Schatzi; Hall, Kathy; Mapp, Latisha; Mullins, Marissa; Nichols, Tonya; Shah, Sanjiv; Silvestri, Erin; Smith, Terry; Willison, Stuart; Ernst, Hiba
2014-11-01
Catastrophic incidents can generate a large number of samples of analytically diverse types, including forensic, clinical, environmental, food, and others. Environmental samples include water, wastewater, soil, air, urban building and infrastructure materials, and surface residue. Such samples may arise not only from contamination from the incident but also from the multitude of activities surrounding the response to the incident, including decontamination. This document summarizes a range of activities to help build laboratory capability in preparation for sample analysis following a catastrophic incident, including selection and development of fit-for-purpose analytical methods for chemical, biological, and radiological contaminants. Fit-for-purpose methods are those which have been selected to meet project specific data quality objectives. For example, methods could be fit for screening contamination in the early phases of investigation of contamination incidents because they are rapid and easily implemented, but those same methods may not be fit for the purpose of remediating the environment to acceptable levels when a more sensitive method is required. While the exact data quality objectives defining fitness-for-purpose can vary with each incident, a governing principle of the method selection and development process for environmental remediation and recovery is based on achieving high throughput while maintaining high quality analytical results. This paper illustrates the result of applying this principle, in the form of a compendium of analytical methods for contaminants of interest. The compendium is based on experience with actual incidents, where appropriate and available. This paper also discusses efforts aimed at adaptation of existing methods to increase fitness-for-purpose and development of innovative methods when necessary. The contaminants of interest are primarily those potentially released through catastrophes resulting from malicious activity
NASA Technical Reports Server (NTRS)
Farassat, F.; Casper, J.
2003-01-01
Alan Powell has made significant contributions to the understanding of many aeroacoustic problems, in particular, the problems of broadband noise from jets and boundary layers. In this paper, some analytic results are presented for the calculation of the correlation function of the broadband noise radiated from a wing, a propeller, and a jet in uniform forward motion. It is shown that, when the observer (or microphone) motion is suitably chosen, the geometric terms of the radiation formula become time independent. The time independence of these terms leads to a significant simplification of the statistical analysis of the radiated noise, even when the near field terms are included. For a wing in forward motion, if the observer is in the moving reference frame, then the correlation function of the near and far field noise can be related to a space-time cross-correlation function of the pressure on the wing surface. A similar result holds for a propeller in forward flight if the observer is in a reference frame that is attached to the propeller and rotates at the shaft speed. For a jet in motion, it is shown that the correlation function of the radiated noise can be related to the space-time cross-correlation of the Lighthill stress tensor in the jet. Exact analytical results are derived for all three cases. For the cases under present consideration, the inclusion of the near field terms does not introduce additional complexity, as compared to existing formulations that are limited to the far field.
Pele, Maria; Brohée, Marcel; Anklam, Elke; Van Hengel, Arjon J
2007-12-01
Accidental exposure to hazelnut or peanut constitutes a real threat to the health of allergic consumers. Correct information regarding food product ingredients is of paramount importance for the consumer, thereby reducing exposure to food allergens. In this study, 569 cookies and chocolates on the European market were purchased. All products were analysed to determine peanut and hazelnut content, allowing a comparison of the analytical results with information provided on the product label. Compared to cookies, chocolates are more likely to contain undeclared allergens, while, in both food categories, hazelnut traces were detected at higher frequencies than peanut. The presence of a precautionary label was found to be related to a higher frequency of positive test results. The majority of chocolates carrying a precautionary label tested positive for hazelnut, whereas peanut traces were not detected in 75% of the cookies carrying a precautionary label.
Sotnikov, V.; Kim, T.; Lundberg, J.; Paraschiv, I.; Mehlhorn, T. A.
2014-05-15
The presence of plasma turbulence can strongly influence propagation properties of electromagnetic signals used for surveillance and communication. In particular, we are interested in the generation of low frequency plasma density irregularities in the form of coherent vortex structures. Interchange or flute type density irregularities in magnetized plasma are associated with Rayleigh-Taylor type instability. These types of density irregularities play an important role in refraction and scattering of high frequency electromagnetic signals propagating in the earth ionosphere, in high energy density physics, and in many other applications. We will discuss scattering of high frequency electromagnetic waves on low frequency density irregularities due to the presence of vortex density structures associated with interchange instability. We will also present particle-in-cell simulation results of electromagnetic scattering on vortex type density structures using the large scale plasma code LSP and compare them with analytical results.
NASA Astrophysics Data System (ADS)
Jurjiu, Aurel; Galiceanu, Mircea; Farcasanu, Alexandru; Chiriac, Liviu; Turcu, Flaviu
2016-12-01
In this paper, we focus on the relaxation dynamics of Sierpinski hexagon fractal polymer. The relaxation dynamics of this fractal polymer is investigated in the framework of the generalized Gaussian structure model using both Rouse and Zimm approaches. In the Rouse-type approach, by performing real-space renormalization transformations, we determine analytically the complete eigenvalue spectrum of the connectivity matrix. Based on the eigenvalues obtained through iterative algebraic relations we calculate the averaged monomer displacement and the mechanical relaxation moduli (storage modulus and loss modulus). The evaluation of the dynamical properties in the Rouse-type approach reveals that they obey scaling in the intermediate time/frequency domain. In the Zimm-type approach, which includes the hydrodynamic interactions, the relaxation quantities do not show scaling. The theoretical findings with respect to scaling in the intermediate domain of the relaxation quantities are well supported by experimental results.
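In the Rouse-type GGS picture referred to above, the reduced storage and loss moduli are sums over the nonzero eigenvalues of the connectivity matrix. A minimal sketch of that step, using an illustrative spectrum (not the Sierpinski-hexagon eigenvalues, which the paper obtains by renormalization):

```python
import numpy as np

def reduced_moduli(omega, eigvals):
    """Reduced storage (G') and loss (G'') moduli in the generalized
    Gaussian structure / Rouse picture, in dimensionless units.
    eigvals: nonzero eigenvalues of the connectivity matrix
    (the zero translational mode is excluded)."""
    x = omega / (2.0 * np.asarray(eigvals))   # omega / (2 * lambda_i)
    g_storage = np.mean(x**2 / (1.0 + x**2))  # G'(omega), averaged per mode
    g_loss = np.mean(x / (1.0 + x**2))        # G''(omega)
    return g_storage, g_loss

# Illustrative eigenvalue spectrum, assumed for demonstration only:
eigvals = np.linspace(0.1, 4.0, 50)
gp, gpp = reduced_moduli(omega=1.0, eigvals=eigvals)
```

Sweeping omega over several decades and plotting G'(ω) on log-log axes is how the intermediate-frequency scaling mentioned in the abstract would be exposed; the slope in that window reflects the spectral dimension of the fractal.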
Hackner, Klaus; Pleil, Joachim
2017-01-09
Recent literature has touted the use of canine olfaction as a diagnostic tool for identifying pre-clinical disease status, especially cancer and infection from biological media samples. Studies have shown a wide range of outcomes, ranging from almost perfect discrimination, all the way to essentially random results. This disparity is not likely to be a detection issue; dogs have been shown to have extremely sensitive noses as proven by their use for tracking, bomb detection and search and rescue. However, in contrast to analytical instruments, dogs are subject to boredom, fatigue, hunger and external distractions. These challenges are of particular importance in a clinical environment where task repetition is prized, but not as entertaining for a dog as chasing odours outdoors. The question addressed here is how to exploit the intrinsic sensitivity and simplicity of having a dog simply sniff out disease, in the face of variability in behavior and response.
NASA Technical Reports Server (NTRS)
Bittker, D. A.
1979-01-01
The effect of combustor operating conditions on the conversion of fuel-bound nitrogen (FBN) to nitrogen oxides (NOx) was analytically determined. The effect of FBN and of operating conditions on carbon monoxide (CO) formation was also studied. For these computations, the combustor was assumed to be a two-stage, adiabatic, perfectly-stirred reactor. Propane-air was used as the combustible mixture and fuel-bound nitrogen was simulated by adding nitrogen atoms to the mixture. The oxidation of propane and formation of NOx and CO were modeled by a fifty-seven-reaction chemical mechanism. The results for NOx and CO formation are given as functions of primary and secondary stage equivalence ratios and residence times.
Crock, J.G.; Smith, D.B.; Yager, T.J.B.; Berry, C.J.; Adams, M.G.
2009-01-01
Since late 1993, Metro Wastewater Reclamation District of Denver (Metro District), a large wastewater treatment plant in Denver, Colo., has applied Grade I, Class B biosolids to about 52,000 acres of nonirrigated farmland and rangeland near Deer Trail, Colo. (U.S.A.). In cooperation with the Metro District in 1993, the U.S. Geological Survey (USGS) began monitoring groundwater at part of this site. In 1999, the USGS began a more comprehensive monitoring study of the entire site to address stakeholder concerns about the potential chemical effects of biosolids applications to water, soil, and vegetation. This more comprehensive monitoring program has recently been extended through 2010. Monitoring components of the more comprehensive study include biosolids collected at the wastewater treatment plant, soil, crops, dust, alluvial and bedrock groundwater, and stream-bed sediment. Streams at the site are dry most of the year, so samples of stream-bed sediment deposited after rain were used to indicate surface-water effects. This report will present only analytical results for the biosolids samples collected at the Metro District wastewater treatment plant in Denver and analyzed during 2008. Crock and others have earlier presented a compilation of analytical results for the biosolids samples collected and analyzed from 1999 through 2006, and in a separate report, data for the 2007 biosolids are reported. More information about the other monitoring components is presented elsewhere in the literature. Priority parameters for biosolids identified by the stakeholders and also regulated by Colorado when used as an agricultural soil amendment include the total concentrations of nine trace elements (arsenic, cadmium, copper, lead, mercury, molybdenum, nickel, selenium, and zinc), plutonium isotopes, and gross alpha and beta activity. Nitrogen and chromium also were priority parameters for groundwater and sediment components.
SU-E-T-631: Preliminary Results for Analytical Investigation Into Effects of ArcCHECK Setup Errors
Kar, S; Tien, C
2015-06-15
Purpose: As three-dimensional diode arrays increase in popularity for patient-specific quality assurance for intensity-modulated radiation therapy (IMRT), it is important to evaluate an array's susceptibility to setup errors. The ArcCHECK phantom is set up by manually aligning its outside marks with the linear accelerator's lasers and light-field. If done correctly, this aligns the ArcCHECK cylinder's central axis (CAX) with the linear accelerator's axis of rotation. However, this process is prone to error. This project has developed an analytical expression including a perturbation factor to quantify the effect of shifts. Methods: The ArcCHECK is set up by aligning its machine marks with either the sagittal room lasers or the light-field of the linear accelerator at gantry zero (IEC). ArcCHECK has sixty-six evenly-spaced SunPoint diodes aligned radially in a ring 14.4 cm from CAX. The detector response function (DRF) was measured and combined with inverse-square correction to develop an analytical expression for output. The output was calculated using shifts of 0 (perfect alignment), ±1, ±2, and ±5 mm. The effect on a series of simple inputs was determined: unity, 1-D ramp, steps, and hat-function to represent uniform field, wedge, evenly-spaced modulation, and single sharp modulation, respectively. Results: Geometric expressions were developed with perturbation factor included to represent shifts. DRF was modeled using sixth-degree polynomials with correlation coefficient 0.9997. The output was calculated using simple inputs such as unity, 1-D ramp, steps, and hat-function, with perturbation factors of 0, ±1, ±2, and ±5 mm. Discrepancies have been observed, but large fluctuations have been somewhat mitigated by aliasing arising from discrete diode placement. Conclusion: An analytical expression with perturbation factors was developed to estimate the impact of setup errors on an ArcCHECK phantom. Presently, this has been applied to
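The inverse-square component of a setup-error model like the one above can be sketched as follows. The geometry here is a deliberate simplification assumed for illustration (gantry at zero, shift purely along the beam axis, detector response function and attenuation ignored); it is not the authors' full perturbation expression:

```python
import math

SAD = 100.0   # source-axis distance (cm), typical linac geometry
RING = 14.4   # diode ring radius from CAX (cm), per the abstract

def inverse_square_factor(theta_deg, shift_cm):
    """Relative reading of a ring diode at angle theta (0 = beam entrance)
    when the phantom CAX is shifted by shift_cm along the beam axis.
    Simplified sketch: only the inverse-square dependence is modeled."""
    d_nominal = SAD - RING * math.cos(math.radians(theta_deg))
    d_shifted = d_nominal + shift_cm
    return (d_nominal / d_shifted) ** 2

# A 5 mm shift changes the entrance-diode reading by roughly 1%:
factor = inverse_square_factor(theta_deg=0.0, shift_cm=0.5)
```

Even this crude estimate shows why millimeter-level alignment matters: percent-level dose-difference criteria in IMRT QA can be consumed by setup error alone.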
NASA Technical Reports Server (NTRS)
Bieber, J. W.; Gray, P. C.; Matthaeus, W. H.
1995-01-01
Parallel and perpendicular diffusion coefficients were computed numerically by following particle orbits in a simulated magnetic field. The simulated field was chosen to have δB/B₀ small, so as to provide a test of quasilinear theory in a regime where the theory should be most accurate. The simulation space is large enough to contain many magnetic field correlation lengths, so that effects of field line random walk can be studied. After presenting results for parallel diffusion, we will focus on two controversial issues relating to perpendicular diffusion: (1) Do quasilinear descriptions of perpendicular diffusion retain any validity for particles whose Larmor radius is smaller than a correlation length? (2) Does field line random walk lead to particle diffusion in the usual sense, or does it produce 'compound' diffusion, for which particles spread out proportionally to t^(1/4) instead of t^(1/2)?
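The distinction between ordinary diffusion (spread ~ t^(1/2)) and compound diffusion (spread ~ t^(1/4)) can be checked numerically by fitting the spreading exponent on log-log axes; a minimal sketch:

```python
import numpy as np

def spread_exponent(t, rms_displacement):
    """Least-squares slope of log(spread) vs log(t): alpha near 1/2
    indicates ordinary diffusion; alpha near 1/4 indicates 'compound'
    (subdiffusive) transport driven by field line random walk."""
    alpha, _intercept = np.polyfit(np.log(t), np.log(rms_displacement), 1)
    return alpha

t = np.linspace(1.0, 100.0, 200)
alpha_ordinary = spread_exponent(t, t**0.5)   # recovers ~0.5
alpha_compound = spread_exponent(t, t**0.25)  # recovers ~0.25
```

In practice the same fit would be applied to the simulated particles' mean-square displacement, restricted to late times so that early ballistic motion (spread ~ t) does not bias the slope.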
NASA Astrophysics Data System (ADS)
Milton, Graeme W.
2016-11-01
The theory of inhomogeneous analytic materials is developed. These are materials where the coefficients entering the equations involve analytic functions. Three types of analytic materials are identified. The first two types involve an integer p. If p takes its maximum value, then we have a complete analytic material. Otherwise, it is incomplete analytic material of rank p. For two-dimensional materials, further progress can be made in the identification of analytic materials by using the well-known fact that a 90° rotation applied to a divergence-free field in a simply connected domain yields a curl-free field, and this can then be expressed as the gradient of a potential. Other exact results for the fields in inhomogeneous media are reviewed. Also reviewed is the subject of metamaterials, as these materials provide a way of realizing desirable coefficients in the equations.
Green, Timothy R.; Freyberg, David L.
1995-01-01
Anisotropy in large-scale unsaturated hydraulic conductivity of layered soils changes with the moisture state. Here, state-dependent anisotropy is computed under conditions of large-scale gravity drainage. Soils represented by Gardner's exponential function are perfectly stratified, periodic, and inclined. Analytical integration of Darcy’s law across each layer results in a system of nonlinear equations that is solved iteratively for capillary suction at layer interfaces and for the Darcy flux normal to layering. Computed fluxes and suction profiles are used to determine both upscaled hydraulic conductivity in the principal directions and the corresponding “state-dependent” anisotropy ratio as functions of the mean suction. Three groups of layered soils are analyzed and compared with independent predictions from the stochastic results of Yeh et al. (1985b). The small-perturbation approach predicts appropriate behaviors for anisotropy under nonarid conditions. However, the stochastic results are limited to moderate values of mean suction; this limitation is linked to a Taylor series approximation in terms of a group of statistical and geometric parameters. Two alternative forms of the Taylor series provide upper and lower bounds for the state-dependent anisotropy of relatively dry soils.
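The state dependence of anisotropy described above can be sketched with the classical limiting result for perfectly stratified media at a uniform suction: flow parallel to layering sees the arithmetic mean of the layer conductivities, flow normal to layering the harmonic mean, and Gardner's exponential function K(ψ) = Ks·exp(−αψ) makes their ratio vary with suction. The layer parameters below are hypothetical, and this uniform-suction shortcut is only a cartoon of the paper's iterative interface solution.

```python
import math

# Hypothetical (Ks, alpha) pairs for two alternating layers -- not from the paper
layers = [(10.0, 0.10), (1.0, 0.01)]

def gardner(ks, alpha, psi):
    """Gardner's exponential unsaturated conductivity K(psi) = Ks*exp(-alpha*psi)."""
    return ks * math.exp(-alpha * psi)

def anisotropy(psi):
    """Anisotropy ratio = (arithmetic mean) / (harmonic mean) of layer K's."""
    k = [gardner(ks, a, psi) for ks, a in layers]
    parallel = sum(k) / len(k)                   # flow parallel to layering
    normal = len(k) / sum(1.0 / ki for ki in k)  # flow normal to layering
    return parallel / normal

# the anisotropy ratio grows strongly as the soil dries: state dependence
assert anisotropy(100.0) > anisotropy(0.0)
```

Near saturation the two layers have comparable conductivities and the ratio is modest; at high suction the layer with the larger α shuts off and the ratio grows by orders of magnitude, which is the qualitative behaviour the paper quantifies.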
Crock, J.G.; Smith, D.B.; Yager, T.J.B.; Brown, Z.A.; Adams, M.G.
2008-01-01
Since late 1993, Metro Wastewater Reclamation District of Denver (Metro District), a large wastewater treatment plant in Denver, Colorado, has applied Grade I, Class B biosolids to about 52,000 acres of non-irrigated farmland and rangeland near Deer Trail, Colorado. In cooperation with the Metro District in 1993, the U.S. Geological Survey (USGS) began monitoring ground water at part of this site (Yager and Arnold, 2003). In 1999, the USGS began a more comprehensive monitoring study of the entire site to address stakeholder concerns about the potential chemical effects of biosolids applications. This more comprehensive monitoring program has recently been extended through 2010. Monitoring components of the more comprehensive study include biosolids collected at the wastewater treatment plant, soil, crops, dust, alluvial and bedrock ground water, and stream bed sediment. Streams at the site are dry most of the year, so samples of stream bed sediment deposited after rain were used to indicate surface-water effects. This report will present only analytical results for the biosolids samples collected at the Metro District wastewater treatment plant in Denver and analyzed during 1999 through 2006. More information about the other monitoring components is presented elsewhere in the literature (e.g., Yager and others, 2004a, 2004b, 2004c, 2004d). Priority parameters for biosolids identified by the stakeholders and also regulated by Colorado when used as an agricultural soil amendment include the total concentrations of nine trace elements (arsenic, cadmium, copper, lead, mercury, molybdenum, nickel, selenium, and zinc), plutonium isotopes, and gross alpha and beta activity. Nitrogen and chromium also were priority parameters for ground water and sediment components. In general, the objective of each component of the study was to determine whether concentrations of priority parameters (1) were higher than regulatory limits, (2) were increasing with time, or (3) were
Crock, J.G.; Smith, D.B.; Yager, T.J.B.; Berry, C.J.; Adams, M.G.
2008-01-01
Since late 1993, the Metro Wastewater Reclamation District of Denver (Metro District), a large wastewater treatment plant in Denver, Colorado, has applied Grade I, Class B biosolids to about 52,000 acres of nonirrigated farmland and rangeland near Deer Trail, Colorado (U.S.A.). In cooperation with the Metro District in 1993, the U.S. Geological Survey (USGS) began monitoring ground water at part of this site. In 1999, the USGS began a more comprehensive monitoring study of the entire site to address stakeholder concerns about the potential chemical effects of biosolids applications to water, soil, and vegetation. This more comprehensive monitoring program recently has been extended through 2010. Monitoring components of the more comprehensive study include biosolids collected at the wastewater treatment plant, soil, crops, dust, alluvial and bedrock ground water, and streambed sediment. Streams at the site are dry most of the year, so samples of streambed sediment deposited after rain were used to indicate surface-water effects. This report will present only analytical results for the biosolids samples collected at the Metro District wastewater treatment plant in Denver and analyzed during 2007. We have presented earlier a compilation of analytical results for the biosolids samples collected and analyzed for 1999 through 2006. More information about the other monitoring components is presented elsewhere in the literature. Priority parameters for biosolids identified by the stakeholders and also regulated by Colorado when used as an agricultural soil amendment include the total concentrations of nine trace elements (arsenic, cadmium, copper, lead, mercury, molybdenum, nickel, selenium, and zinc), plutonium isotopes, and gross alpha and beta activity. Nitrogen and chromium also were priority parameters for ground water and sediment components. In general, the objective of each component of the study was to determine whether concentrations of priority parameters (1
Plumlee, Geoffrey S.; Casadevall, Thomas J.; Wibowo, Handoko T.; Rosenbauer, Robert J.; Johnson, Craig A.; Breit, George N.; Lowers, Heather; Wolf, Ruth E.; Hageman, Philip L.; Goldstein, Harland L.; Anthony, Michael W.; Berry, Cyrus J.; Fey, David L.; Meeker, Gregory P.; Morman, Suzette A.
2008-01-01
On May 29, 2006, mud and gases began erupting unexpectedly from a vent 150 meters away from a hydrocarbon exploration well near Sidoarjo, East Java, Indonesia. The eruption, called the LUSI (Lumpur 'mud'-Sidoarjo) mud volcano, has continued since then at rates as high as 160,000 m³ per day. At the request of the United States Department of State, the U.S. Geological Survey (USGS) has been providing technical assistance to the Indonesian Government on the geological and geochemical aspects of the mud eruption. This report presents initial characterization results of a sample of the mud collected on September 22, 2007, as well as interpretive findings based on the analytical results. The focus is on characteristics of the mud sample (including the solid and water components of the mud) that may be of potential environmental or human health concern. Characteristics that provide insights into the possible origins of the mud and its contained solids and waters have also been evaluated.
NASA Astrophysics Data System (ADS)
Sotnikov, V.; Kim, T.; Lundberg, J.; Paraschiv, I.; Mehlhorn, T. A.
2014-10-01
Interchange or flute type density irregularities in magnetized plasma are associated with Rayleigh-Taylor type instability. In particular, we are interested in the generation of low frequency plasma density irregularities in the form of flute type vortex density structures and the interaction of high frequency electromagnetic waves used for surveillance and communication with such structures. These types of density irregularities play an important role in refraction and scattering of high frequency electromagnetic signals propagating in the Earth's ionosphere, in high energy density physics (HEDP), and in many other applications. We will present PIC simulation results of EM scattering on vortex type density structures using the LSP code and compare them with analytical results. Two cases will be analyzed. In the first case, electromagnetic wave scattering will take place in the ionospheric plasma. In the second case, laser probing in a high-beta Z-pinch plasma will be presented. This work was supported by the Air Force Research Laboratory, the Air Force Office of Scientific Research, the Naval Research Laboratory and NNSA/DOE Grant No. DE-FC52-06NA27616 at the University of Nevada at Reno.
McHugh, J.B.; Bullock, J.H. Jr.; Roemer, T.A.; Nowlan, G.A.
1989-01-01
A U.S. Geological Survey report is presented giving analytical results and sample locality map of stream-sediment and panned-concentrate samples from the El Dorado and Ireteba Peaks Wilderness Study Areas, Clark County, Nevada.
Madsen, Berit L. (E-mail: ronblm@vmmc.org); Hsi, R. Alex; Pham, Huong T.; Fowler, Jack F.; Esagui, Laura C.; Corman, John
2007-03-15
Purpose: To evaluate the feasibility and toxicity of stereotactic hypofractionated accurate radiotherapy (SHARP) for localized prostate cancer. Methods and Materials: A Phase I/II trial of SHARP was performed for localized prostate cancer using 33.5 Gy in 5 fractions, calculated to be biologically equivalent to 78 Gy in 2-Gy fractions (α/β ratio of 1.5 Gy). Noncoplanar conformal fields and daily stereotactic localization of implanted fiducials were used for treatment. Genitourinary (GU) and gastrointestinal (GI) toxicity were evaluated by American Urologic Association (AUA) score and Common Toxicity Criteria (CTC). Prostate-specific antigen (PSA) values and self-reported sexual function were recorded at specified follow-up intervals. Results: The study includes 40 patients. The median follow-up is 41 months (range, 21-60 months). Acute Grade 1-2 toxicity was 48.5% (GU) and 39% (GI); there was one acute Grade 3 GU toxicity. Late Grade 1-2 toxicity was 45% (GU) and 37% (GI). No late Grade 3 or higher toxicity was reported. Twenty-six patients reported potency before therapy; 6 (23%) have developed impotence. Median time to PSA nadir was 18 months, with the majority of nadirs less than 1.0 ng/mL. The actuarial 48-month biochemical freedom from relapse is 70% by the American Society for Therapeutic Radiology and Oncology definition and 90% by the alternative nadir + 2 ng/mL failure definition. Conclusions: SHARP for localized prostate cancer is feasible with minimal acute or late toxicity. Dose escalation should be possible.
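The equivalence claim above (33.5 Gy in 5 fractions ≈ 78 Gy in 2-Gy fractions at α/β = 1.5 Gy) can be checked with the textbook linear-quadratic EQD2 formula; this is a standard formula, not code from the trial.

```python
# EQD2 = D * (d + alpha/beta) / (2 + alpha/beta), the standard linear-quadratic
# conversion to equivalent dose in 2-Gy fractions.
def eqd2(total_dose, dose_per_fraction, alpha_beta):
    """Equivalent total dose in 2-Gy fractions under the linear-quadratic model."""
    return total_dose * (dose_per_fraction + alpha_beta) / (2.0 + alpha_beta)

# 33.5 Gy in 5 fractions (d = 6.7 Gy), alpha/beta = 1.5 Gy, as in the abstract
print(round(eqd2(33.5, 33.5 / 5, 1.5), 1))  # -> 78.5, matching the quoted ~78 Gy
```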
Catastrophic incidents can generate a large number of samples with analytically diverse types including forensic, clinical, environmental, food, and others. Environmental samples include water, wastewater, soil, air, urban building and infrastructure materials, and surface resid...
NASA Technical Reports Server (NTRS)
Bittker, D. A.
1980-01-01
The influence of ground-based gas turbine combustor operating conditions and fuel-bound nitrogen (FBN) found in coal-derived liquid fuels on the formation of nitrogen oxides and carbon monoxide is investigated. Analytical predictions of NOx and CO concentrations are obtained for a two-stage, adiabatic, perfectly-stirred reactor operating on a propane-air mixture, with primary equivalence ratios from 0.5 to 1.7, secondary equivalence ratios of 0.5 or 0.7, primary stage residence times from 12 to 20 msec, secondary stage residence times of 1, 2 and 3 msec and fuel nitrogen contents of 0.5, 1.0 and 2.0 wt %. Minimum nitrogen oxide but maximum carbon monoxide formation is obtained at primary zone equivalence ratios between 1.4 and 1.5, with percentage conversion of FBN to NOx decreasing with increased fuel nitrogen content. Additional secondary dilution is observed to reduce final pollutant concentrations, with NOx concentration independent of secondary residence time and CO decreasing with secondary residence time; primary zone residence time is not observed to affect final NOx and CO concentrations significantly. Finally, comparison of computed results with experimental values shows a good semiquantitative agreement.
NASA Astrophysics Data System (ADS)
Sanz-Enguita, G.; Ortega, J.; Folcia, C. L.; Aramburu, I.; Etxebarria, J.
2016-02-01
We have studied the performance characteristics of a dye-doped cholesteric liquid crystal (CLC) laser as a function of the sample thickness. The study has been carried out both from the experimental and theoretical points of view. The theoretical model is based on the kinetic equations for the population of the excited states of the dye and for the power of light generated within the laser cavity. From the equations, the threshold pump radiation energy Eth and the slope efficiency η are numerically calculated. Eth is rather insensitive to thickness changes, except for small thicknesses. In comparison, η shows a much more pronounced variation, exhibiting a maximum that determines the sample thickness for optimum laser performance. The predictions are in good accordance with the experimental results. Approximate analytical expressions for Eth and η as a function of the physical characteristics of the CLC laser are also proposed. These expressions present an excellent agreement with the numerical calculations. Finally, we comment on the general features of CLC layer and dye that lead to the best laser performance.
NASA Astrophysics Data System (ADS)
Timpanaro, André M.; Prado, Carmen P. C.
2014-05-01
We discuss the exit probability of the one-dimensional q-voter model and present tools to obtain estimates about this probability, both through simulations in large networks (around 10^7 sites) and analytically in the limit where the network is infinitely large. We argue that the result E(ρ) = ρ^q/[ρ^q + (1-ρ)^q], that was found in three previous works [F. Slanina, K. Sznajd-Weron, and P. Przybyła, Europhys. Lett. 82, 18006 (2008), 10.1209/0295-5075/82/18006; R. Lambiotte and S. Redner, Europhys. Lett. 82, 18007 (2008), 10.1209/0295-5075/82/18007, for the case q = 2; and P. Przybyła, K. Sznajd-Weron, and M. Tabiszewski, Phys. Rev. E 84, 031117 (2011), 10.1103/PhysRevE.84.031117, for q > 2] using small networks (around 10^3 sites), is a good approximation, but there are noticeable deviations that appear even for small systems and that do not disappear when the system size is increased (with the notable exception of the case q = 2). We also show that, under some simple and intuitive hypotheses, the exit probability must obey the inequality ρ^q/[ρ^q + (1-ρ)] ≤ E(ρ) ≤ ρ/[ρ + (1-ρ)^q] in the infinite size limit. We believe this settles in the negative the suggestion made [S. Galam and A. C. R. Martins, Europhys. Lett. 95, 48005 (2011), 10.1209/0295-5075/95/48005] that this result would be a finite size effect, with the exit probability actually being a step function. We also show how the result that the exit probability cannot be a step function can be reconciled with the Galam unified frame, which was also a source of controversy.
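The exit-probability approximation and the bounding inequality quoted in the abstract are easy to state in code; the short sketch below just evaluates the formulas (it does not simulate the q-voter dynamics).

```python
def exit_probability(rho, q):
    """E(rho) = rho^q / (rho^q + (1 - rho)^q), the approximation quoted above."""
    num = rho ** q
    return num / (num + (1 - rho) ** q)

def lower_bound(rho, q):
    """rho^q / (rho^q + (1 - rho)), the stated lower bound."""
    return rho ** q / (rho ** q + (1 - rho))

def upper_bound(rho, q):
    """rho / (rho + (1 - rho)^q), the stated upper bound."""
    return rho / (rho + (1 - rho) ** q)

# check the stated inequality lower <= E <= upper on a grid, here for q = 3
for i in range(1, 10):
    rho = i / 10
    assert lower_bound(rho, 3) <= exit_probability(rho, 3) <= upper_bound(rho, 3)
```

Note that E(ρ) is smooth and passes through 1/2 at ρ = 1/2 for every q, which is the shape the abstract defends against the step-function suggestion.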
Thorn, Conde R.; Heywood, Charles E.
2001-01-01
The City of Albuquerque, New Mexico, is interested in gaining a better understanding, both quantitative and qualitative, of the aquifer system in and around Albuquerque. Currently (2000), the City of Albuquerque and surrounding municipalities are completely dependent on ground-water reserves for their municipal water supply. This report presents the results of a long-term aquifer test conducted near the Rio Grande in Albuquerque. The long-term aquifer test was conducted during the winter of 1994-95. The City of Albuquerque Griegos 1 water production well was pumped continuously for 54 days at an average pumping rate of 2,331 gallons per minute. During the 54-day pumping and a 30-day recovery period, water levels were recorded in a monitoring network that consisted of 3 production wells and 19 piezometers located at nine sites. These wells and piezometers were screened in river alluvium and (or) the upper and middle parts of the Santa Fe Group aquifer system. In addition to the measurement of water levels, aquifer-system compaction was monitored during the aquifer test by an extensometer. Well-bore video and flowmeter surveys were conducted in the Griegos 1 water production well at the end of the recovery period to identify the location of primary water-producing zones along the screened interval. Analytical results from the aquifer test presented in this report are based on the methods used to analyze a leaky confined aquifer system and were performed using the computer software package AQTESOLV. Estimated transmissivities for the Griegos 1 and 4 water production wells ranged from 10,570 to 24,810 feet squared per day; the storage coefficient for the Griegos 4 well was 0.0025. A transmissivity of 13,540 feet squared per day and a storage coefficient of 0.0011 were estimated from the data collected from a piezometer completed in the production interval of the Griegos 1 well.
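The report's analysis used AQTESOLV and a leaky-confined model; as a rough cross-check, the classical Theis solution can be evaluated with the transmissivity and storage coefficient quoted above. The radius (100 ft) and time (54 days) below are illustrative choices, not values from the report.

```python
import math

EULER_GAMMA = 0.5772156649015329

def well_function(u, terms=30):
    """Theis well function W(u) = -gamma - ln(u) + sum_{n>=1} (-1)^(n+1) u^n/(n*n!)."""
    total = -EULER_GAMMA - math.log(u)
    term = 1.0  # holds u^n / n!
    for n in range(1, terms + 1):
        term *= u / n
        total += (-1) ** (n + 1) * term / n
    return total

def theis_drawdown(Q, T, S, r, t):
    """Drawdown s = Q/(4*pi*T) * W(u) with u = r^2*S/(4*T*t); consistent units."""
    u = r * r * S / (4.0 * T * t)
    return Q / (4.0 * math.pi * T) * well_function(u)

# T = 13,540 ft^2/day and S = 0.0011 from the abstract; pumping rate converted
# from gal/min to ft^3/day (1 gal = 0.133681 ft^3); r and t are illustrative.
Q = 2331 * 1440 * 0.133681
print(round(theis_drawdown(Q, 13540.0, 0.0011, 100.0, 54.0), 1))
```

With these illustrative inputs the predicted drawdown is on the order of tens of feet, which is the scale one would expect for a 54-day test at this pumping rate.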
NASA Astrophysics Data System (ADS)
Mazoyer, Johan; Pueyo, Laurent; Norman, Colin; N'Diaye, Mamadou; van der Marel, Roeland P.; Soummer, Rémi
2016-03-01
The new frontier in the quest for the highest contrast levels in the focal plane of a coronagraph is now the correction of the large diffraction artifacts introduced at the science camera by apertures of increasing complexity. Indeed, the future generation of space- and ground-based coronagraphic instruments will be mounted on on-axis and/or segmented telescopes; the design of coronagraphic instruments for such observatories is currently a domain undergoing rapid progress. One approach consists of using two sequential deformable mirrors (DMs) to correct for aberrations introduced by secondary mirror structures and segmentation of the primary mirror. The coronagraph for the WFIRST-AFTA mission will be the first of such instruments in space with a two-DM wavefront control system. Regardless of the control algorithm for these multiple DMs, they will have to rely on quick and accurate simulation of the propagation effects introduced by the out-of-pupil surface. In the first part of this paper, we present the analytical description of the different approximations to simulate these propagation effects. In Appendix A, we prove analytically that in the special case of surfaces inducing a converging beam, the Fresnel method yields high fidelity for simulations of these effects. We provide numerical simulations showing this effect. In the second part, we use these tools in the framework of the active compensation of aperture discontinuities (ACAD) technique applied to pupil geometries similar to WFIRST-AFTA. We present these simulations in the context of the optical layout of the high-contrast imager for complex aperture telescopes, which will test ACAD on an optical bench. The results of this analysis show that using the ACAD method, an apodized pupil Lyot coronagraph, and the performance of our current DMs, we are able to obtain, in numerical simulations, a dark hole with a WFIRST-AFTA-like. Our numerical simulation shows that we can obtain contrast better than 2×10^-9 in
Grassa, Fausto; Capasso, Giorgio; Oliveri, Ygor; Sollami, Aldo; Carreira, Paula; Rosario Carvalho, M; Marques, Jose M; Nunes, Joao C
2010-06-01
A continuous-flow GC/IRMS technique has been developed to analyse δ15N values for molecular nitrogen in gas samples. This method provides reliable results with accuracy better than 0.15‰ and reproducibility (1σ) within ±0.1‰ for volumes of N2 between 1.35 μL (about 56 nmol) and 48.9 μL (about 2 μmol). The method was tested on magmatic and hydrothermal gases as well as on natural gas samples collected from various sites. Since the analysis of nitrogen isotope composition may be prone to atmospheric contamination, mainly in samples with low N2 concentration, we set the instrument to determine also N2 and 36Ar contents in a single run. In fact, based on the simultaneously determined N2/36Ar ratios and assuming that the 36Ar content in crustal and mantle-derived fluids is negligible with respect to the 36Ar concentration in the atmosphere, for each sample the degree of atmospheric contamination can be accurately evaluated. Therefore, the measured δ15N values can be properly corrected for air contamination.
Hibbard, Judith H; Greaves, Felix; Dudley, R Adams
2015-01-01
Background In the context of the Affordable Care Act, there is extensive emphasis on making provider quality transparent and publicly available. Online public reports of quality exist, but little is known about how visitors find reports or about their purpose in visiting. Objective To address this gap, we gathered website analytics data from a national group of online public reports of hospital or physician quality and surveyed real-time visitors to those websites. Methods Websites were recruited from a national group of online public reports of hospital or physician quality. Analytics data were gathered from each website: number of unique visitors, method of arrival for each unique visitor, and search terms resulting in visits. Depending on the website, a survey invitation was launched for unique visitors on landing pages or on pages with quality information. Survey topics included type of respondent (eg, consumer, health care professional), purpose of visit, areas of interest, website experience, and demographics. Results There were 116,657 unique visitors to the 18 participating websites (1440 unique visitors/month per website), with most unique visitors arriving through search (63.95%, 74,606/116,657). Websites with a higher percent of traffic from search engines garnered more unique visitors (P=.001). The most common search terms were for individual hospitals (23.25%, 27,122/74,606) and website names (19.43%, 22,672/74,606); medical condition terms were uncommon (0.81%, 605/74,606). Survey view rate was 42.48% (49,560/116,657 invited) resulting in 1755 respondents (participation rate=3.6%). There were substantial proportions of consumer (48.43%, 850/1755) and health care professional respondents (31.39%, 551/1755). Across websites, proportions of consumer (21%-71%) and health care professional respondents (16%-48%) varied. Consumers were frequently interested in using the information to choose providers or assess the quality of their provider (52.7%, 225
NASA Astrophysics Data System (ADS)
Wöhling, Thomas; Barkle, Greg; Stenger, Roland; Moorhead, Brian; Wall, Aaron; Clague, Juliet
2014-05-01
Automated equilibrium tension plate lysimeters (AETLs) are arguably the most accurate method to measure unsaturated water and contaminant fluxes below the root zone at scales of up to 1 m². The AETL technique utilizes a porous sintered stainless-steel plate to provide a comparatively large sampling area with a continuously controlled vacuum that is in "equilibrium" with the surrounding vadose zone matric pressure, to ensure measured fluxes represent those under undisturbed conditions. This novel lysimeter technique was used at an intensive research site for investigations of contaminant pathways from the land surface to the groundwater on a sheep and beef farm under pastoral land use in the Tutaeuaua subcatchment, New Zealand. The Spydia research facility was constructed in 2005 and was fully operational between 2006 and 2011. Extending from a central access caisson, 15 separately controlled AETLs with 0.2 m² surface area were installed at five depths between 0.4 m and 5.1 m into the undisturbed volcanic vadose zone materials. The unique setup of the facility ensured minimum interference of the experimental equipment and external factors with the measurements. Over a period of more than five years, a comprehensive data set was collected at each of the 15 AETL locations, comprising time series of soil water flux, pressure head, volumetric water contents, and soil temperature. The soil water was regularly analysed for EC, pH, dissolved carbon, various nitrogen compounds (including nitrate, ammonia, and organic N), phosphorus, bromide, chloride, sulphate, silica, and a range of other major ions, as well as for various metals. Climate data were measured directly at the site (rainfall) and at a climate station 500 m away. The shallow groundwater was sampled at three different depths directly from the Spydia caisson and at various observation wells surrounding the facility. Two tracer experiments were conducted at the site in 2009 and 2010. In the 2009
NASA Astrophysics Data System (ADS)
Politi, M.; Scalas, E.; Fulger, D.; Germano, G.
2010-01-01
Random matrix theory is used to assess the significance of weak correlations and is well established for Gaussian statistics. However, many complex systems, with stock markets as a prominent example, exhibit statistics with power-law tails, that can be modelled with Lévy stable distributions. Here the derivation of an analytical expression for the spectra of covariance matrices approximated by free Lévy stable random variables is reviewed comprehensively and validated by Monte Carlo simulation.
Volkov, M. V.; Ostrovsky, V. N.
2007-02-15
Multistate generalizations of the Landau-Zener model are studied by summing the entire perturbation-theory series. A technique for the analysis of the series is developed. Analytical expressions for the probabilities of survival on the diabatic potential curves with extreme slope are proved. Degenerate situations are considered, in which there are several potential curves with extreme slope. Expressions for some state-to-state transition probabilities are derived in the degenerate cases.
NASA Technical Reports Server (NTRS)
Flannelly, W. G.; Fabunmi, J. A.; Nagy, E. J.
1981-01-01
Analytical methods for combining flight acceleration and strain data with shake test mobility data to predict the effects of structural changes on flight vibrations and strains are presented. This integration of structural dynamic analysis with flight performance is referred to as analytical testing. The objective of this methodology is to analytically estimate the results of flight testing contemplated structural changes with minimum flying and change trials. The category of changes to the aircraft includes mass, stiffness, absorbers, isolators, and active suppressors. Examples of applying the analytical testing methodology using flight test and shake test data measured on an AH-1G helicopter are included. The techniques and procedures for vibration testing and modal analysis are also described.
Data-infilling in daily mean river flow records: first results using a visual analytics tool (gapIT)
NASA Astrophysics Data System (ADS)
Giustarini, Laura; Parisot, Olivier; Ghoniem, Mohammad; Trebs, Ivonne; Médoc, Nicolas; Faber, Olivier; Hostache, Renaud; Matgen, Patrick; Otjacques, Benoît
2015-04-01
Missing data in river flow records represent a loss of information and a serious drawback in water management. An incomplete time series prevents the computation of hydrological statistics and indicators. Also, records with data gaps are not suitable as input or validation data for hydrological or hydrodynamic modelling. In this work we present a visual analytics tool (gapIT), which supports experts in finding the most adequate data-infilling technique for daily mean river flow records. The tool performs an automated calculation of river flow estimates using different data-infilling techniques. Donor station(s) are automatically selected based on Dynamic Time Warping, geographical proximity and upstream/downstream relationships. For each gap the tool computes several flow estimates through various data-infilling techniques, including interpolation, multiple regression, regression trees and neural networks. The visual application also allows the user to select donor station(s) other than those selected automatically. The gapIT software was applied to 24 daily time series of river discharge recorded in Luxembourg over the period 01/01/2007 - 31/12/2013. The method was validated by randomly creating artificial gaps of different lengths and positions along the entire records. Using the RMSE and the Nash-Sutcliffe (NS) coefficient as performance measures, the method was evaluated by comparison with the actual measured discharge values. The application of the gapIT software to artificial gaps led to satisfactory results in terms of performance indicators (NS>0.8 for more than half of the artificial gaps). A case-by-case analysis revealed that the few reconstructed record gaps characterized by high RMSE values (NS<0.8) were caused by the temporary unavailability of the most appropriate donor station. On the other hand, some of the gaps characterized by a high accuracy of the reconstructed record were filled by using the data from
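The Nash-Sutcliffe coefficient used above as a performance measure has a simple closed form; the sketch below implements the textbook definition (variable names are illustrative, not from the gapIT software).

```python
# NS = 1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2); NS = 1 is a perfect
# fit, NS = 0 means the model is no better than the mean of the observations.
def nash_sutcliffe(observed, simulated):
    mean_obs = sum(observed) / len(observed)
    num = sum((o - s) ** 2 for o, s in zip(observed, simulated))
    den = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - num / den

obs = [1.0, 2.0, 3.0, 4.0, 5.0]
assert nash_sutcliffe(obs, obs) == 1.0  # a perfectly reconstructed gap
```

A reconstructed gap with NS > 0.8, the threshold quoted in the abstract, thus explains more than 80% of the observed variance around the mean.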
Accurate Finite Difference Algorithms
NASA Technical Reports Server (NTRS)
Goodrich, John W.
1996-01-01
Two families of finite difference algorithms for computational aeroacoustics are presented and compared. All of the algorithms are single step explicit methods, they have the same order of accuracy in both space and time, with examples up to eleventh order, and they have multidimensional extensions. One of the algorithm families has spectral-like high resolution. Propagation with high order and high resolution algorithms can produce accurate results after O(10^6) periods of propagation with eight grid points per wavelength.
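The payoff of higher-order stencils described above can be illustrated with the classical second- and fourth-order central differences; this is a generic sketch, not the paper's algorithms.

```python
import math

# 2nd-order central difference: error O(h^2)
def d1_2nd(f, x, h):
    return (f(x + h) - f(x - h)) / (2 * h)

# 4th-order central difference: error O(h^4)
def d1_4th(f, x, h):
    return (-f(x + 2 * h) + 8 * f(x + h) - 8 * f(x - h) + f(x - 2 * h)) / (12 * h)

h = 0.1
exact = math.cos(1.0)                      # d/dx sin(x) at x = 1
err2 = abs(d1_2nd(math.sin, 1.0, h) - exact)
err4 = abs(d1_4th(math.sin, 1.0, h) - exact)
assert err4 < err2  # the higher-order stencil is markedly more accurate
```

At the same grid spacing the fourth-order stencil is several orders of magnitude more accurate, which is why high-order schemes can afford as few as eight points per wavelength over very long propagation times.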
NASA Astrophysics Data System (ADS)
Khan, Sheema; Morton, Thomas L.; Ronis, David
1987-05-01
The static correlations in highly charged colloidal and micellar suspensions, with and without added electrolyte, are examined using the hypernetted-chain approximation (HNC) for the macro-ion-macro-ion correlations and the mean-spherical approximation for the other correlations. By taking the point-ion limit for the counter-ions, an analytic solution for the counter-ion part of the problem can be obtained; this maps the macro-ion part of the problem onto a one-component problem where the macro-ions interact via a screened Coulomb potential with the Gouy-Chapman form for the screening length and an effective charge that depends on the macro-ion-macro-ion pair correlations. Numerical solutions of the effective one-component equation in the HNC approximation are presented, and in particular, the effects of macro-ion charge, nonadditive core diameters, and added electrolyte are examined. As we show, there can be a strong renormalization of the effective macro-ion charge and reentrant melting in colloidal crystals.
Quo vadis, analytical chemistry?
Valcárcel, Miguel
2016-01-01
This paper presents an open, personal, fresh approach to the future of Analytical Chemistry in the context of the deep changes Science and Technology are anticipated to experience. Its main aim is to challenge young analytical chemists, because the future of our scientific discipline is in their hands. A description of not completely accurate overall conceptions of our discipline, past and present, that should be avoided is followed by a flexible, integral definition of Analytical Chemistry and its cornerstones (viz., aims and objectives, quality trade-offs, the third basic analytical reference, the information hierarchy, social responsibility, independent research, transfer of knowledge and technology, interfaces to other scientific-technical disciplines, and well-oriented education). Obsolete paradigms, as well as more accurate general and specific ones that can be expected to provide the framework for our discipline in the coming years, are described. Finally, the three possible responses of analytical chemists to the proposed changes in our discipline are discussed.
Heinrich, Verena; Stange, Jens; Dickhaus, Thorsten; Imkeller, Peter; Krüger, Ulrike; Bauer, Sebastian; Mundlos, Stefan; Robinson, Peter N; Hecht, Jochen; Krawitz, Peter M
2012-03-01
With the availability of next-generation sequencing (NGS) technology, it is expected that sequence variants may be called on a genomic scale. Here, we demonstrate that a deeper understanding of the distribution of the variant call frequencies at heterozygous loci in NGS data sets is a prerequisite for sensitive variant detection. We model the crucial steps in an NGS protocol as a stochastic branching process and derive a mathematical framework for the expected distribution of alleles at heterozygous loci before measurement, that is, before sequencing. We confirm our theoretical results by analyzing technical replicates of human exome data and demonstrate that the variance of allele frequencies at heterozygous loci is higher than expected by a simple binomial distribution. Due to this high variance, mutation callers relying on binomial distributed priors are less sensitive for heterozygous variants that deviate strongly from the expected mean frequency. Our results also indicate that error rates can be reduced to a greater degree by technical replicates than by increasing sequencing depth.
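The overdispersion argument above can be illustrated with a minimal branching-process simulation: amplifying the two alleles of a heterozygous locus independently before sampling reads inflates the allele-frequency variance beyond the binomial baseline. This is a toy stand-in for the paper's model, and all parameters (initial copy number, duplication probability, depth) are illustrative.

```python
import random

random.seed(1)

def amplify(n0, cycles=10, p=0.6):
    """Toy branching process: every molecule is duplicated with probability p
    in each cycle (a cartoon of pre-sequencing amplification)."""
    n = n0
    for _ in range(cycles):
        n += sum(1 for _ in range(n) if random.random() < p)
    return n

def amplified_allele_fraction(n0=20, depth=2000):
    """Heterozygous locus: amplify ref and alt independently, then sample reads."""
    ref, alt = amplify(n0), amplify(n0)
    pool = ref / (ref + alt)
    return sum(1 for _ in range(depth) if random.random() < pool) / depth

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

reps, depth = 200, 2000
branch = [amplified_allele_fraction(depth=depth) for _ in range(reps)]
binom = [sum(1 for _ in range(depth) if random.random() < 0.5) / depth
         for _ in range(reps)]

# early-cycle amplification noise inflates the allele-frequency variance well
# beyond the plain binomial-sampling baseline at the same depth
assert variance(branch) > 2 * variance(binom)
```

The excess variance comes from the stochastic early cycles, which is also why deeper sequencing alone cannot remove it, matching the abstract's conclusion about technical replicates.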
Saraswathi, Saras; Sundaram, Suresh; Sundararajan, Narasimhan; Zimmermann, Michael; Nilsen-Hamilton, Marit
2011-01-01
A combination of Integer-Coded Genetic Algorithm (ICGA) and Particle Swarm Optimization (PSO), coupled with the neural-network-based Extreme Learning Machine (ELM), is used for gene selection and cancer classification. ICGA is used with PSO-ELM to select an optimal set of genes, which is then used to build a classifier to develop an algorithm (ICGA_PSO_ELM) that can handle sparse data and sample imbalance. We evaluate the performance of ICGA-PSO-ELM and compare our results with existing methods in the literature. An investigation into the functions of the selected genes, using a systems biology approach, revealed that many of the identified genes are involved in cell signaling and proliferation. An analysis of these gene sets shows a larger representation of genes that encode secreted proteins than found in randomly selected gene sets. Secreted proteins constitute a major means by which cells interact with their surroundings. Mounting biological evidence has identified the tumor microenvironment as a critical factor that determines tumor survival and growth. Thus, the genes identified by this study that encode secreted proteins might provide important insights into the nature of the critical biological features in the microenvironment of each tumor type that allow these cells to thrive and proliferate.
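For readers unfamiliar with the ELM component, here is a minimal sketch of an Extreme Learning Machine classifier on synthetic data. The hidden-layer size, the toy two-blob data set, and the use of NumPy are assumptions for illustration; the ICGA and PSO gene-selection stages of the authors' pipeline are omitted entirely:

```python
import numpy as np
rng = np.random.default_rng(0)

def elm_train(X, y, n_hidden=40):
    """Extreme Learning Machine: input weights are random and fixed;
    only the output weights are fitted, in closed form by least squares,
    so there is no iterative training at all."""
    W = rng.normal(size=(X.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)         # random nonlinear feature map
    beta = np.linalg.pinv(H) @ y   # output weights via pseudoinverse
    return W, b, beta

def elm_predict(X, model):
    W, b, beta = model
    return np.tanh(X @ W + b) @ beta

# toy two-class problem: two Gaussian blobs in 5 dimensions
X = np.vstack([rng.normal(-1, 1, (100, 5)), rng.normal(1, 1, (100, 5))])
y = np.array([-1.0] * 100 + [1.0] * 100)
model = elm_train(X, y)
acc = np.mean(np.sign(elm_predict(X, model)) == np.sign(y))
```

The closed-form fit is what makes ELM cheap enough to sit inside an ICGA/PSO search loop, where thousands of candidate gene subsets must each be scored by training a classifier.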
Dorofeev, S.B.; Efimenko, A.A.; Kochurko, A.S.; Sidorov, V.P.
1996-03-01
A review of hydrogen combustion research at the Kurchatov Institute is presented. A criterion for the possibility of spontaneous detonation onset and its application to severe accidents in a nuclear power plant is discussed. Theoretical and experimental results on spontaneous detonation onset conditions are summarized. Three series of large-scale turbulent jet initiation experiments have been carried out in the KOPER facility (50 m{sup 3} and 150 m{sup 3}). A series of jet initiation experiments in initially confined H{sub 2}-air mixtures has been carried out in the KOPER facility (20-46 m{sup 3}). Turbulent deflagration/DDT experiments were carried out in a large-scale confined volume of 480 m{sup 3} in the RUT facility. Results showed that the characteristic volume size should be used for conservative estimates in accident analysis. A series of experiments on detonation transition from one mixture to another of lower sensitivity has been carried out in the DRIVER facility. The experiments were aimed at estimating the minimum size of a detonation kernel. The results obtained are in good agreement with the 7-cell-width criterion. Results of combined hydrogen injection/ignition experiments are presented. These experiments were aimed at investigating the possible consequences of deliberate ignition under dynamic conditions. Analysis of the experimental data showed the applicability of the 7-cell-width criterion to dynamic conditions. The sum of the results on the scaling of spontaneous detonations is discussed in connection with the strategy of hydrogen mitigation in severe accidents.
NASA Technical Reports Server (NTRS)
Westphalen, H.; Spjeldvik, W. N.
1982-01-01
A theoretical method by which the energy dependence of the radial diffusion coefficient may be deduced from spectral observations of the particle population at the inner edge of the earth's radiation belts is presented. This region has previously been analyzed with numerical techniques; in this report an analytical treatment is given that illustrates characteristic limiting cases in the L shell range where the time scale of Coulomb losses is substantially shorter than that of radial diffusion (L approximately 1-2). It is demonstrated both analytically and numerically that the particle spectra there are shaped by the energy dependence of the radial diffusion coefficient regardless of the spectral shapes of the particle populations diffusing inward from the outer radiation zone, so that from observed spectra the energy dependence of the diffusion coefficient can be determined. To ensure realistic simulations, inner zone data obtained from experiments on the DIAL, AZUR, and ESRO 2 spacecraft have been used as boundary conditions. Excellent agreement between analytic and numerical results is reported.
Analytical Results from Salt Solution Feed Tank (SSFT) Samples HTF-16-6 and HTF-16-40
Peters, T.
2016-09-23
Two samples from the Salt Solution Feed Tank (SSFT), HTF-16-6 and HTF-16-40, were analyzed by SRNL. Multiple analyses of these samples indicate a general composition almost identical to that of the Salt Batch 8-B feed and the Tank 21H sample results.
Novacco, Marilisa; Martini, Valeria; Grande, Carmen; Comazzi, Stefano
2015-09-01
A blood sample from a 14-year-old dog was submitted to the veterinary diagnostic laboratory of the University of Milan for marked leukocytosis with atypical cells. A diagnosis of chronic T-cell lymphocytic leukemia (CLL) was made based on blood smear evaluation and flow cytometric phenotyping. A CBC by Sysmex XT-2000iV revealed a moderate normocytic normochromic anemia. The red blood cell count by optical flow cytometry (RBC-O) was higher than that by electrical impedance (RBC-I). The relative reticulocyte count based on RNA content and size was 35.3%, while the manual reticulocyte count was < 1%. The WBC count of 1,562,680 cells/μL was accompanied by a flag. Manual counts for RBC and WBC using the Bürker chamber confirmed the Sysmex impedance results. Finally, the manual PCV was lower than the HCT by Sysmex. While the Sysmex XT can differentiate between RBC and WBC by impedance, even in the face of extreme lymphocytosis due to CLL, RBC-O can be affected by bias, resulting in falsely increased RBC and reticulocyte numbers. Overestimation of RBC-O may be due to incorrect Sysmex classification of leukemic cells or their fragments as reticulocytes. This phenomenon is known as pseudoreticulocytosis and can lead to misinterpretation of regenerative anemia. On the other hand, the PCV can be affected by bias in CLL due to the trapping of RBC in the buffy coat, resulting in a pink hue in the separation area. As the HGB concentration is not affected by flow cytometric or other cell-related artifacts, it may represent the most reliable variable to assess the degree of anemia in cases of CLL.
Modern analytical chemistry in the contemporary world
NASA Astrophysics Data System (ADS)
Šíma, Jan
2016-12-01
Students not familiar with chemistry tend to misinterpret analytical chemistry as some kind of sorcery in which analytical chemists, working as modern wizards, handle magical black boxes able to provide fascinating results. However, this view is evidently improper and misleading. Therefore, the position of modern analytical chemistry among the sciences and in the contemporary world is discussed. Its interdisciplinary character and the necessity of collaboration between analytical chemists and other experts in order to effectively solve the actual problems of human society and the environment are emphasized. The importance of analytical method validation in order to obtain accurate and precise results is highlighted. Invalid results are not only useless; they can often be even fatal (e.g., in clinical laboratories). The curriculum of analytical chemistry at schools and universities is discussed; it should be much broader than traditional equilibrium chemistry coupled with a simple description of individual analytical methods. Above all, the schooling of analytical chemistry should closely connect theory and practice.
Xiao, Lihua; Alderisio, Kerri A.; Jiang, Jianlin
2006-01-01
Due to the small number of Cryptosporidium oocysts in water, the number of samples taken and the analyses performed can affect the results of detection. In this study, 42 water samples were collected from one watershed during 20 storm events over 1 year, including duplicate or quadruplicate samples from 16 storm events. Ten samples from four events had three to eight subsamples. They were processed by EPA method 1623, and Cryptosporidium oocysts present were detected by immunofluorescent microscopy or PCR. Altogether, 24 of 39 samples (47 of 67 samples and subsamples) analyzed by microscopy were positive for Cryptosporidium. In contrast, 36 of 42 samples (62 of 76 samples and subsamples) were positive by PCR, including 10 microscopy-negative samples (13 microscopy-negative samples and subsamples). Six of the 24 microscopy-positive samples were negative by PCR, and all such samples had one oocyst or fewer per calculated 0.5-ml packed pellet volume. Discordant results were obtained by microscopy and PCR from six and three of the storm events, respectively, with multiple samples. Discordant microscopy or PCR results were also obtained among subsamples. Most of the 14 Cryptosporidium genotypes were found over a brief period. Cryptosporidium-positive samples had a mean of 1.9 genotypes per sample, with 39 of the 62 positive samples/subsamples having more than one genotype. Samples/subsamples with more than one genotype had an overall PCR-positive rate of 73%, compared to 34% for those with one genotype. The PCR amplification rate of samples was affected by the volume of DNA used in PCR. PMID:16957214
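The value of the duplicate and quadruplicate sampling described above follows from simple probability: independent samples multiply the chance of at least one positive. A minimal sketch, assuming independence between samples and a hypothetical per-sample detection probability (the 0.60 figure below is illustrative, not from the study):

```python
def detection_probability(p_single, n_samples):
    """Chance of at least one positive result from n independent samples,
    each detected with per-sample probability p_single."""
    return 1 - (1 - p_single) ** n_samples

p_one = detection_probability(0.60, 1)    # single sample per storm event
p_four = detection_probability(0.60, 4)   # quadruplicate sampling
```

Even a modest per-sample rate climbs quickly with replication, which is consistent with the discordant results the study observed among subsamples from the same event.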
Wilcox, Ralph
1995-01-01
The six sites investigated include silver recovery units; a buried caustic drain line; a neutralization pit; an evaporation/infiltration pond; the Manzano fire training area; and a waste oil underground storage tank. Environmental samples of soil, pond sediment, soil gas, and water and gas in floor drains were collected and analyzed. Field quality-control samples were also collected and analyzed in association with the environmental samples. The six sites were investigated because past or current activities could have resulted in contamination of soil, pond sediment, or water and sediment in drains.
BELL, K.E.
2000-05-11
This document is the format IV, final report for the tank 241-SY-102 (SY-102) grab samples taken in January 2000 to address waste compatibility concerns. Chemical, radiochemical, and physical analyses on the tank SY-102 samples were performed as directed in the Compatibility Grab Sampling and Analysis Plan for Fiscal Year 2000 (Sasaki 1999). No notification limits were exceeded. Preliminary data on samples 2SY-99-5, -6, and -7 were reported in ''Format II Report on Tank 241-SY-102 Waste Compatibility Grab Samples Taken in January 2000'' (Lockrem 2000). The data presented here represent the final results.
Ivey, Wade
2013-12-17
Oak Ridge Associated Universities (ORAU), under the Oak Ridge Institute for Science and Education (ORISE) contract, received five swipe samples on December 10, 2013 from the Northern Biomedical Research Facility in Norton Shores, Michigan. The samples were analyzed for tritium and carbon-14 according to the NRC Form 303 supplied with the samples. The sample identification numbers are presented in Table 1 and the tritium and carbon-14 results are provided in Table 2. The pertinent procedure references are included with the data tables.
Accurate quantum chemical calculations
NASA Technical Reports Server (NTRS)
Bauschlicher, Charles W., Jr.; Langhoff, Stephen R.; Taylor, Peter R.
1989-01-01
An important goal of quantum chemical calculations is to provide an understanding of chemical bonding and molecular electronic structure. A second goal, the prediction of energy differences to chemical accuracy, has been much harder to attain. First, the computational resources required to achieve such accuracy are very large, and second, it is not straightforward to demonstrate that an apparently accurate result, in terms of agreement with experiment, does not result from a cancellation of errors. Recent advances in electronic structure methodology, coupled with the power of vector supercomputers, have made it possible to solve a number of electronic structure problems exactly using the full configuration interaction (FCI) method within a subspace of the complete Hilbert space. These exact results can be used to benchmark approximate techniques that are applicable to a wider range of chemical and physical problems. The methodology of many-electron quantum chemistry is reviewed. Methods are considered in detail for performing FCI calculations. The application of FCI methods to several three-electron problems in molecular physics is discussed. A number of benchmark applications of FCI wave functions are described. Atomic basis sets and the development of improved methods for handling very large basis sets are discussed; these are then applied to a number of chemical and spectroscopic problems; to transition metals; and to problems involving potential energy surfaces. Although the experiences described give considerable grounds for optimism about the general ability to perform accurate calculations, there are several problems that have proved less tractable, at least with current computer resources, and these and possible solutions are discussed.
Accurate Evaluation of Quantum Integrals
NASA Technical Reports Server (NTRS)
Galant, D. C.; Goorvitch, D.; Witteborn, Fred C. (Technical Monitor)
1995-01-01
Combining an appropriate finite difference method with Richardson's extrapolation results in a simple, highly accurate numerical method for solving the Schrodinger equation. Important results are that error estimates are provided and that one can extrapolate expectation values rather than the wavefunctions to obtain highly accurate expectation values. We discuss the eigenvalues and the error growth in repeated Richardson's extrapolation, and show that expectation values calculated on a crude mesh can be extrapolated to obtain expectation values of high accuracy.
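The idea can be sketched in a few lines. The example below is an assumption-laden illustration, not the authors' implementation: it uses a second-order central-difference discretization of the harmonic oscillator H = -u''/2 + x²u/2 (exact ground-state energy 1/2), NumPy's dense eigensolver, and one step of Richardson's extrapolation to cancel the leading O(h²) error:

```python
import numpy as np

def ground_energy(n, L=8.0):
    """Ground-state energy of H = -u''/2 + (x^2/2) u from a second-order
    central-difference discretization on [-L, L] with n grid points."""
    x = np.linspace(-L, L, n)
    h = x[1] - x[0]
    main = 1.0 / h**2 + 0.5 * x**2        # diagonal of the FD Hamiltonian
    off = -0.5 / h**2 * np.ones(n - 1)    # sub/super-diagonal
    H = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
    return np.linalg.eigvalsh(H)[0]

e1 = ground_energy(200)         # coarse mesh, step h
e2 = ground_energy(399)         # same endpoints, step h/2
richardson = (4 * e2 - e1) / 3  # cancels the O(h^2) error term
```

The extrapolated value is several orders of magnitude closer to 1/2 than either mesh alone, and the difference e2 - e1 doubles as the built-in error estimate the abstract mentions.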
Kurtz, R.J.; Heasler, P.G.; Baird, D.B.
1994-02-01
This report summarizes the results of three previous studies to evaluate and compare the effectiveness of sampling plans for steam generator tube inspections. An analytical evaluation and Monte Carlo simulation techniques were the methods used to evaluate sampling plan performance. To test the performance of candidate sampling plans under a variety of conditions, ranges of inspection system reliability were considered along with different distributions of tube degradation. Results from the eddy current reliability studies performed with the retired-from-service Surry 2A steam generator were utilized to guide the selection of appropriate probability of detection and flaw sizing models for use in the analysis. Different distributions of tube degradation were selected to span the range of conditions that might exist in operating steam generators. The principal means of evaluating sampling performance was to determine the effectiveness of the sampling plan for detecting and plugging defective tubes. A summary of key results from the eddy current reliability studies is presented. The analytical and Monte Carlo simulation analyses are discussed along with a synopsis of key results and conclusions.
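A Monte Carlo evaluation of the kind described can be sketched as follows. All numbers here (tube count, defect count, sample fraction, probability of detection, and the expand-to-100%-on-detection rule) are illustrative assumptions, not values from the Surry 2A studies:

```python
import random
random.seed(42)

def plan_effectiveness(n_tubes=3000, n_defective=30, sample_frac=0.2,
                       pod=0.8, trials=2000):
    """Monte Carlo estimate of the fraction of defective tubes found by a
    random sampling plan that expands to 100% inspection on any detection.
    pod is the eddy-current probability of detection per defective tube."""
    found_total = 0
    defective = set(range(n_defective))  # arbitrary tube labels
    for _ in range(trials):
        sample = random.sample(range(n_tubes), int(sample_frac * n_tubes))
        hits = sum(1 for t in sample
                   if t in defective and random.random() < pod)
        if hits:
            # expansion: every tube inspected, each detected with prob pod
            # (sampled tubes are simply re-inspected in this toy model)
            found = sum(random.random() < pod for _ in range(n_defective))
        else:
            found = 0
        found_total += found
    return found_total / (trials * n_defective)

eff = plan_effectiveness()
```

Sweeping sample_frac, pod, and the assumed defect distribution over plausible ranges is exactly the kind of comparison between candidate plans the report describes.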
Walker, T I
1977-01-01
Statistical tests were carried out on the results of chemical analysis for total mercury concentrations of replicate samples of muscle tissue of school shark Galeorhinus australis (Macleay) and gummy shark Mustelus antarcticus Guenther from six independent analytical laboratories. These tests showed that one laboratory produced results 9% below the overall average of all results and another 1% below average, while the other four were all 5% above average. Moreover, one laboratory had significantly lower scatter of results than the others, and the percentage scatter (standard error expressed as a percentage of the mean) in two of the laboratories tended to diminish as the magnitude of the results increased. Correction for what were concluded to be wild points indicated that the scatter for all laboratories was below 14%.
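The two quantities compared across laboratories, bias relative to the overall average and percentage scatter (standard error as a percentage of the mean), can be computed as sketched below. The replicate mercury values are hypothetical, chosen only to exercise the calculation:

```python
def lab_statistics(results_by_lab):
    """Per-laboratory bias (% deviation from the grand mean of all
    results) and percentage scatter (standard error as % of lab mean)."""
    all_results = [x for lab in results_by_lab.values() for x in lab]
    grand_mean = sum(all_results) / len(all_results)
    stats = {}
    for lab, xs in results_by_lab.items():
        n = len(xs)
        mean = sum(xs) / n
        # standard error of the mean from the sample variance
        se = (sum((x - mean) ** 2 for x in xs) / (n - 1) / n) ** 0.5
        stats[lab] = {"bias_pct": 100 * (mean - grand_mean) / grand_mean,
                      "scatter_pct": 100 * se / mean}
    return stats

# hypothetical replicate mercury results (mg/kg) from three laboratories
demo = {"A": [1.10, 1.14, 1.12, 1.08],
        "B": [1.00, 1.04, 0.98, 1.02],
        "C": [1.22, 1.30, 1.18, 1.26]}
stats = lab_statistics(demo)
```

With real interlaboratory data, significance of the biases would then be assessed with a formal test (e.g. ANOVA) rather than read off directly.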
FULLER, R.K.
1999-02-24
This document is the final report for catch tank 241-ER-311 grab samples. Three grab samples, ER311-98-1, ER311-98-2 and ER311-98-3, were taken from the east riser of tank 241-ER-311 on August 4, 1998 and received by the 222-S Laboratory on August 4, 1998. Analyses were performed in accordance with the Compatibility Grab Sampling and Analysis Plan (TSAP) (Sasaki, 1998) and the Data Quality Objectives for Tank Farms Waste Compatibility Program (DQO) (Mulkey and Miller, 1997). The analytical results are presented in the data summary report (Table 1). No notification limits were exceeded.
STEEN, F.H.
1999-02-23
This document is the final report for tank 241-S-304 grab samples. Four grab samples were collected from riser 4 on July 30, 1998. Analyses were performed in accordance with the Compatibility Grab Sampling and Analysis Plan (TSAP) (Sasaki, 1998) and the Data Quality Objectives for Tank Farms Waste Compatibility Program (DQO). The analytical results are presented in the data summary report (Table 1). None of the subsamples submitted for differential scanning calorimetry (DSC), total organic carbon (TOC) and plutonium-239 (Pu-239) analyses exceeded the notification limits as stated in the TSAP (Sasaki, 1998).
FULLER, R.K.
1999-02-24
This document is the final report for tank 241-AN-101 grab samples. Three grab samples, 1AN-98-1, 1AN-98-2 and 1AN-98-3, were taken from riser 16 of tank 241-AN-101 on April 8, 1998 and received by the 222-S Laboratory on April 9, 1998. Analyses were performed in accordance with the ''Compatibility Grab Sampling and Analysis Plan'' (TSAP) and the ''Data Quality Objectives for Tank Farms Waste Compatibility Program'' (DQO). The analytical results are presented in the data summary report. No notification limits were exceeded.
Obermeyer, Ziad; Rajaratnam, Julie Knoll; Park, Chang H.; Gakidou, Emmanuela; Hogan, Margaret C.; Lopez, Alan D.; Murray, Christopher J. L.
2010-01-01
15—the probability of a 15-year-old dying before his or her 60th birthday—for 44 countries with DHS sibling survival data. Our findings suggest that levels of adult mortality prevailing in many developing countries are substantially higher than previously suggested by other analyses of sibling history data. Generally, our estimates show the risk of adult death between ages 15 and 60 y to be about 20%-35% for females and 25%-45% for males in sub-Saharan African populations largely unaffected by HIV. In countries of Southern Africa, where the HIV epidemic has been most pronounced, as many as eight out of ten men alive at age 15 y will be dead by age 60, as will six out of ten women. Adult mortality levels in populations of Asia and Latin America are generally lower than in Africa, particularly for women. The exceptions are Haiti and Cambodia, where mortality risks are comparable to many countries in Africa. In all other countries with data, the probability of dying between ages 15 and 60 y was typically around 10% for women and 20% for men, not much higher than the levels prevailing in several more developed countries. Conclusions: Our results represent an expansion of direct knowledge of levels and trends in adult mortality in the developing world. The CSS method provides grounds for renewed optimism in collecting sibling survival data. We suggest that all nationally representative survey programs with adequate sample size ought to implement this critical module for tracking adult mortality in order to more reliably understand the levels and patterns of adult mortality, and how they are changing. PMID:20405004
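The mortality measure discussed above, the probability that a 15-year-old dies before age 60, is built from age-specific annual death probabilities by chaining survival across ages. A minimal life-table sketch, with a hypothetical flat hazard for illustration (real estimates use age-varying rates from the sibling histories):

```python
def prob_death_15_to_60(annual_q):
    """Probability that a 15-year-old dies before age 60, given a list
    of 45 annual death probabilities q_x for ages 15..59."""
    survive = 1.0
    for q in annual_q:
        survive *= 1.0 - q   # chain survival through each year of age
    return 1.0 - survive

# hypothetical flat hazard of 5 deaths per 1000 person-years
flat = prob_death_15_to_60([0.005] * 45)   # ~0.20, i.e. about 20%
```

Note how an annual risk that sounds small (0.5%) compounds to roughly a one-in-five chance of dying over the 45-year span, which is why the 20%-45% figures in the abstract correspond to quite moderate annual rates.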
Bonanno, Lisa M; Kwong, Tai C; DeLouise, Lisa A
2010-12-01
In this work, we evaluate for the first time the performance of a label-free porous silicon (PSi) immunosensor assay in a blind clinical study designed to screen authentic patient urine specimens for a broad range of opiates. The PSi opiate immunosensor achieved 96% concordance with liquid chromatography-mass spectrometry/tandem mass spectrometry (LC-MS/MS) results on samples that underwent standard opiate testing (n = 50). In addition, successful detection of a commonly abused opiate, oxycodone, resulted in 100% qualitative agreement between the PSi opiate sensor and LC-MS/MS. In contrast, a commercial broad opiate immunoassay technique (CEDIA) achieved 65% qualitative concordance with LC-MS/MS. Evaluation of important performance attributes including precision, accuracy, and recovery was completed on blank urine specimens spiked with test analytes. Variability of morphine detection as a model opiate target was <9% both within-run and between-day at and above the cutoff limit of 300 ng mL(-1). This study validates the analytical screening capability of label-free PSi opiate immunosensors in authentic patient samples and is the first semiquantitative demonstration of the technology's successful clinical use. These results motivate future development of label-free PSi technology to reduce complexity and cost of diagnostic testing particularly in a point-of-care setting.
FULLER, R.K.
1999-02-23
This document is the final report for tank 241-AP-106 grab samples. Three grab samples, 6AP-98-1, 6AP-98-2 and 6AP-98-3, were taken from riser 1 of tank 241-AP-106 on May 28, 1998 and received by the 222-S Laboratory on May 28, 1998. Analyses were performed in accordance with the ''Compatibility Grab Sampling and Analysis Plan'' (TSAP) (Sasaki, 1998) and the ''Data Quality Objectives for Tank Farms Waste Compatibility Program'' (DQO). The analytical results are presented in the data summary report. No notification limits were exceeded. The request for sample analysis received for AP-106 indicated that the samples were polychlorinated biphenyl (PCB) suspects. The results of this analysis indicated that no PCBs were present at the Toxic Substance Control Act (TSCA) regulated limit of 50 ppm. The results and raw data for the PCB analysis are included in this document.
Diaz, L.A.
1998-03-20
This document is the final report for tank 241-S-302 grab samples. Three grab samples were collected on January 30, 1998. Analyses were performed on samples 302-S-97-1, 302-S-97-2 and 302-S-97-3 in accordance with the Compatibility Grab Sampling and Analysis Plan (TSAP) (Sasaki, 1997) and the Data Quality Objectives (DQO) for Tank Farms Waste Compatibility Program (Mulkey, 1997). The analytical results are presented in Table 1. No notification limits were exceeded. The sample breakdown diagrams (Attachment 1) are provided as a cross-reference for relating the tank farm customer identification numbers with the 222-S Laboratory sample numbers and the portion of sample analyzed. Table 2 provides the appearance information. Visual observation indicated that the sample was a clear, light-yellow liquid with less than one percent solids. No organic layer was observed. The 125 mL sample was submitted to the laboratory for analysis of inorganic analytes and radionuclides.
Analytical approximations to the spectra of quark antiquark potentials
NASA Astrophysics Data System (ADS)
Amore, Paolo; DePace, Arturo; Lopez, Jorge
2006-07-01
A method recently devised to obtain analytical approximations to certain classes of integrals is used in combination with the WKB expansion to derive accurate analytical expressions for the spectrum of quantum potentials. The accuracy of our results is verified by comparing them both with the literature on the subject and with the numerical results obtained with a Fortran code. As an application of the method that we propose, we consider meson spectroscopy with various phenomenological potentials.
Georgakopoulos, Panagiotis; Zachari, Rodanthi; Mataragas, Marios; Athanasopoulos, Panagiotis; Drosinos, Eleftherios H; Skandamis, Panagiotis N
2011-09-15
Three low-fat baby food matrices were fortified with 0.01-0.2 mg/kg of phorate, diazinon, chlorpyrifos and methidathion. A "quick, easy, cheap, effective, rugged and safe"-like method (QuEChERS) was used. Quantities of octadecyl (C18) sorbent differed with fortification level and matrix fat, based on a central composite experimental design. Quantification was performed by Nitrogen-Phosphorus Detector gas chromatography, using matrix-matched standards. The highest (p<0.05) recoveries were observed for methidathion, the lowest fortification levels for a specific C18 amount, and the lowest C18 amounts. In meals containing vegetables (1.9% fat) and lamb (3.0% fat), 180-210 mg C18 gave recoveries from 67.0% to 105.0% and absence of co-extracts. Yogurt dessert (4.5% fat) required 200-230 mg C18 for similar results. Recoveries could also be predicted with <20% error by a polynomial model. The results suggest that modified QuEChERS could be effectively used in residue analysis of low-fat baby meals.
NASA Technical Reports Server (NTRS)
Uslenghi, Piergiorgio L. E.; Laxpati, Sharad R.; Kawalko, Stephen F.
1993-01-01
The third phase of the development of the computer codes for scattering by coated bodies that has been part of an ongoing effort in the Electromagnetics Laboratory of the Electrical Engineering and Computer Science Department at the University of Illinois at Chicago is described. The work reported discusses the analytical and numerical results for the scattering of an obliquely incident plane wave by impedance bodies of revolution with phi variation of the surface impedance. Integral equation formulation of the problem is considered. All three types of integral equations, electric field, magnetic field, and combined field, are considered. These equations are solved numerically via the method of moments with parametric elements. Both TE and TM polarization of the incident plane wave are considered. The surface impedance is allowed to vary along both the profile of the scatterer and in the phi direction. The computer code developed for this purpose determines the electric surface current as well as the bistatic radar cross section. The results obtained with this code were validated by comparison with available results for specific scatterers such as the perfectly conducting sphere. Results for the cone-sphere and cone-cylinder-sphere for the case of an axially incident plane wave were validated by comparison with those obtained in the first phase of this project. Results for body of revolution scatterers with an abrupt change in the surface impedance along both the profile of the scatterer and the phi direction are presented.
NASA Astrophysics Data System (ADS)
Sahin, O. K.; Asci, M.
2014-12-01
In this study, the determination of theoretical parameters for the inversion of the Trabzon-Sürmene-Kutlular ore bed anomalies was examined. Deciding which model equation to use for the inversion is the most important first step, as it is expected to yield more accurate results. Sections were therefore evaluated with a sphere-cylinder nomogram. After that, the same sections were analyzed with a cylinder-dike nomogram to determine the theoretical parameters of the inversion process for each model equation. Comparison of the results showed that only one of them was close to the parameters of the nomogram evaluations, while the other inversion result parameters differed from their nomogram parameters.
Accurate spectral color measurements
NASA Astrophysics Data System (ADS)
Hiltunen, Jouni; Jaeaeskelaeinen, Timo; Parkkinen, Jussi P. S.
1999-08-01
Surface color measurement is of importance in a very wide range of industrial applications including paint, paper, printing, photography, textiles, plastics and so on. For demanding color measurements a spectral approach is often needed. One can measure a color spectrum with a spectrophotometer using calibrated standard samples as a reference. Because it is impossible to define absolute color values of a sample, we always work with approximations. The human eye can perceive a color difference as small as 0.5 CIELAB units and thus distinguish millions of colors. This 0.5 unit difference should be the goal for precise color measurements. This limit is not a problem if we only want to measure the color difference between two samples, but if we also want to know exact color coordinate values, accuracy problems arise. The values from two instruments can be astonishingly different. The accuracy of the instrument used in color measurement may depend on various errors such as photometric non-linearity, wavelength error, integrating sphere dark level error, and integrating sphere error in both specular included and specular excluded modes. Thus correction formulas should be used to get more accurate results. Another question is how many channels, i.e. wavelengths, we use to measure a spectrum. It is obvious that the sampling interval should be short to get more precise results. Furthermore, the result we get is always a compromise of measuring time, conditions and cost. Sometimes we have to use a portable system, or the shape and size of the samples make it impossible to use sensitive equipment. In this study a small set of calibrated color tiles measured with the Perkin Elmer Lambda 18 and the Minolta CM-2002 spectrophotometers are compared. In the paper we explain the typical error sources of spectral color measurements and show the accuracy demands a good colorimeter should meet.
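The 0.5-unit perceptibility threshold quoted above refers to the CIE76 color difference, the Euclidean distance between two points in CIELAB space. A minimal sketch with hypothetical instrument readings (the coordinate values below are invented for illustration):

```python
def delta_e_ab(lab1, lab2):
    """CIE76 color difference: Euclidean distance in CIELAB space."""
    return sum((p - q) ** 2 for p, q in zip(lab1, lab2)) ** 0.5

# hypothetical (L*, a*, b*) readings of one tile from two instruments
reading_a = (52.10, 4.80, -3.20)
reading_b = (52.40, 4.95, -3.05)

diff = delta_e_ab(reading_a, reading_b)
perceptible = diff >= 0.5   # ~0.5 CIELAB units: just-noticeable difference
```

Later CIE formulas (CIE94, CIEDE2000) weight the axes non-uniformly to better match perception, but the 0.5-unit rule of thumb in the abstract is stated in these CIE76 terms.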
Analytic streamline calculations on linear tetrahedra
Diachin, D.P.; Herzog, J.A.
1997-06-01
Analytic solutions for streamlines within tetrahedra are used to define operators that accurately and efficiently compute streamlines. The method presented here is based on linear interpolation, and therefore produces exact results for linear velocity fields. In addition, the method requires less computation than the forward Euler numerical method. Results are presented that compare accuracy measurements of the method with forward Euler and fourth order Runge-Kutta applied to both a linear and a nonlinear velocity field.
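The accuracy gap between analytic streamlines and forward Euler in a linear velocity field can be seen in a small example. The rotational field below, v = (-y, x), is an illustrative special case of a linear field with a closed-form solution (circular orbits); it is not the tetrahedral interpolation operator of the paper, only the underlying principle:

```python
import math

def exact_streamline(p0, t):
    """Closed-form streamline of the linear field v = (-y, x):
    the solution is simply a rotation of the seed point by angle t."""
    x0, y0 = p0
    return (x0 * math.cos(t) - y0 * math.sin(t),
            x0 * math.sin(t) + y0 * math.cos(t))

def euler_streamline(p0, t, steps):
    """Forward Euler integration of the same field, for comparison."""
    x, y = p0
    h = t / steps
    for _ in range(steps):
        x, y = x + h * (-y), y + h * x
    return (x, y)

p0 = (1.0, 0.0)
t = 2 * math.pi                       # one full revolution
exact = exact_streamline(p0, t)       # returns to (1, 0) exactly
euler = euler_streamline(p0, t, 200)  # spirals outward: O(h) error
err = math.hypot(euler[0] - exact[0], euler[1] - exact[1])
```

For a general linear field v = Ax the exact solution is x(t) = exp(At) x0, which is what makes per-tetrahedron analytic integration both exact for linear interpolants and cheaper than stepping with Euler or Runge-Kutta.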
Accurate Modeling of Stability and Control Properties for Fighter Aircraft from CFD
2012-03-01
accurately placed and calibrated, etc. The results of the wind tunnel test must then be properly filtered and scaled to the proper size. Approaches for obtaining stability and control properties include analytical analysis, wind tunnel testing, flight testing, and Computational Fluid Dynamics (CFD); analytical analysis includes linear aerodynamic techniques.
Phatharacorn, Prateep; Chiangga, Surasak; Yupapin, Preecha
2016-11-20
The whispering gallery mode (WGM) is generated by light propagating within a nonlinear micro-ring resonator, which is modeled and made from an InGaAsP/InP material and called a Panda ring resonator. An imaging probe can also be formed by the micro-conjugate mirror function with appropriate control of the Panda ring parameters. The 3D WGM probe can be generated and used as a 3D sensor head and imaging probe. The analytical details and simulation results are given, where the simulation results are obtained using the MATLAB and Optiwave programs. From the obtained results, such a design can be configured as a thin-film sensor system that contacts the sample surface for the required measurements. The outputs of the system are in the form of a WGM beam, in which the 3D WGM probe is also available with the micro-conjugate mirror function. Such a 3D probe can penetrate into the blood vessel and its contents, from which the time delay among the probes can be detected and measured, and finally the blood flow rate can be calculated and a 3D image of the blood content can be seen and used for medical diagnosis. The test results showed that a blood flow rate of 0.72-1.11 μs^{-1}, with a blood density of 1060 kgm^{-3}, can be obtained.
Meganck, Jeffrey A; Kozloff, Kenneth M; Thornton, Michael M; Broski, Stephen M; Goldstein, Steven A
2009-12-01
Bone mineral density (BMD) measurements are critical in many research studies investigating skeletal integrity. For pre-clinical research, micro-computed tomography (microCT) has become an essential tool in these studies. However, the ability to measure the BMD directly from microCT images can be biased by artifacts, such as beam hardening, in the image. This three-part study was designed to understand how the image acquisition process can affect the resulting BMD measurements and to verify that the BMD measurements are accurate. In the first part of this study, the effect of beam hardening-induced cupping artifacts on BMD measurements was examined. In the second part of this study, the number of bones in the X-ray path and the sampling process during scanning was examined. In the third part of this study, microCT-based BMD measurements were compared with ash weights to verify the accuracy of the measurements. The results indicate that beam hardening artifacts of up to 32.6% can occur in sample sizes of interest in studies investigating mineralized tissue and affect mineral density measurements. Beam filtration can be used to minimize these artifacts. The results also indicate that, for murine femora, the scan setup can impact densitometry measurements for both cortical and trabecular bone and morphologic measurements of trabecular bone. Last, when a scan setup that minimized all of these artifacts was used, the microCT-based measurements correlated well with ash weight measurements (R(2)=0.983 when air was excluded), indicating that microCT can be an accurate tool for murine bone densitometry.
NASA Astrophysics Data System (ADS)
Moldabekov, Zh A.; Ramazanov, T. S.; Gabdullin, M. T.
2016-11-01
In this work, using the recently obtained expansion of the dielectric function in the long-wavelength limit by Moldabekov et al (2015 Phys. Plasmas 22 102104), we extend the formulas for the equation of state of semiclassical dense plasma previously obtained by Ramazanov et al (2015 Phys. Rev. E 92 023104) to the quantum case. The internal energy and the contribution to the pressure due to plasma non-ideality are derived for both the Coulomb pair interaction and quantum pair interaction potentials. The obtained analytical result for the equation of state reproduces the Montroll-Ward contribution, which corresponds to the quantum ring sum. It is shown that the obtained results are consistent with the Thomas-Fermi approximation with the first-order gradient correction. Additionally, the generalization of the quantum Deutsch potential to the case of degenerate electrons is discussed. The obtained results will be useful for understanding the physics of dense plasmas as well as for further development of dense plasma simulation on the basis of quantum potentials.
Steen, F.H.
1997-12-22
This document is the final report for tank 241-AP-107 grab samples. Three grab samples were collected from riser 1 on September 11, 1997. Analyses were performed on samples 7AP-97-1, 7AP-97-2 and 7AP-97-3 in accordance with the Compatibility Grab Sampling and Analysis Plan (TSAP) (Sasaki, 1997) and the Data Quality Objectives for Tank Farms Waste Compatibility Program (DQO) (Rev. 1: Fowler, 1995; Rev. 2: Mulkey and Miller, 1997). The analytical results are presented in the data summary report (Table 1). A notification was made to East Tank Farms Operations concerning low hydroxide in the tank, and a hydroxide (caustic) demand analysis was requested. The request for sample analysis (RSA) (Attachment 2) received for AP-107 indicated that the samples were polychlorinated biphenyl (PCB) suspects. Therefore, prior to performing the requested analyses, aliquots were made to perform PCB analysis in accordance with the 222-S Laboratory administrative procedure, LAP-101-100. The results of this analysis indicated that no PCBs were present at the 50 ppm level, and analysis proceeded as non-PCB samples. The results and raw data for the PCB analysis will be included in a revision to this document. The sample breakdown diagrams (Attachment 1) are provided as a cross-reference for relating the tank farm customer identification numbers with the 222-S Laboratory sample numbers and the portion of sample analyzed.
Williams, M.; Jantzen, C.; Burket, P.; Crawford, C.; Daniel, G.; Aponte, C.; Johnson, C.
2009-12-28
TTT steam reforming process ability to destroy organics in the Tank 48 simulant and produce a soluble carbonate waste form. The ESTD was operated at varying feed rates and Denitration and Mineralization Reformer (DMR) temperatures, and at a constant Carbon Reduction Reformer (CRR) temperature of 950 C. The process produced a dissolvable carbonate product suitable for processing downstream. ESTD testing was performed in 2009 at the Hazen facility to demonstrate the long-term operability of an integrated FBSR processing system with carbonate product and carbonate slurry handling capability. The final testing demonstrated the integrated TTT FBSR capability to process the Tank 48 simulant from a slurry feed into a greater than 99.9% organic-free and primarily dissolved carbonate FBSR product slurry. This paper will discuss the SRNL analytical results of samples analyzed from the 2008 and 2009 THOR® steam reforming ESTD performed with Tank 48H simulant at HRI in Golden, Colorado. The final analytical results will be compared to prior analytical results from samples in terms of organic, nitrite, and nitrate destruction.
ERIC Educational Resources Information Center
Zemsky, Robert; Shaman, Susan; Shapiro, Daniel B.
2001-01-01
Describes the Collegiate Results Instrument (CRI), which measures a range of collegiate outcomes for alumni 6 years after graduation. The CRI was designed to target alumni from institutions across market segments and assess their values, abilities, work skills, occupations, and pursuit of lifelong learning. (EV)
Accurate monotone cubic interpolation
NASA Technical Reports Server (NTRS)
Huynh, Hung T.
1991-01-01
Monotone piecewise cubic interpolants are simple and effective. They are generally third-order accurate, except near strict local extrema, where accuracy degenerates to second order because of the monotonicity constraint. Algorithms for piecewise cubic interpolants that preserve monotonicity as well as uniform third- and fourth-order accuracy are presented. The gain in accuracy is obtained by relaxing the monotonicity constraint in a geometric framework in which the median function plays a crucial role.
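For context, a sketch of the conventional scheme such algorithms improve upon: a Fritsch-Carlson-style monotone cubic Hermite interpolant with harmonic-mean slope limiting, which flattens slopes at local extrema (hence the second-order degeneration noted above). This is a standard baseline, not Huynh's higher-order method, and the sample data are invented:

```python
import numpy as np

def monotone_cubic(x, y, xq):
    """Monotone piecewise-cubic Hermite interpolation with Fritsch-Carlson-
    style harmonic-mean slope limiting. x must be strictly increasing and
    xq must lie within [x[0], x[-1]]."""
    x, y, xq = map(np.asarray, (x, y, xq))
    d = np.diff(y) / np.diff(x)                      # secant slopes
    prod = d[:-1] * d[1:]
    denom = np.where(d[:-1] + d[1:] == 0.0, 1.0, d[:-1] + d[1:])
    m = np.empty_like(y)
    m[1:-1] = np.where(prod > 0, 2.0 * prod / denom, 0.0)  # flatten extrema
    m[0], m[-1] = d[0], d[-1]                        # one-sided end slopes
    i = np.clip(np.searchsorted(x, xq) - 1, 0, len(x) - 2)
    h = x[i + 1] - x[i]
    t = (xq - x[i]) / h
    # Cubic Hermite basis evaluation on each interval.
    return ((1 + 2*t) * (1 - t)**2 * y[i] + t * (1 - t)**2 * h * m[i]
            + t**2 * (3 - 2*t) * y[i + 1] + t**2 * (t - 1) * h * m[i + 1])

x = np.arange(6.0)
y = np.array([0.0, 0.1, 0.5, 2.0, 2.1, 2.2])         # monotone sample data
print(monotone_cubic(x, y, np.array([2.5])))
```

The harmonic-mean limiter guarantees the interpolant is monotone wherever the data are, at the cost of clipping accuracy near extrema, which is exactly the trade-off the abstract's geometric median-function framework relaxes.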
Guertal, William R.; Stewart, Marie; Barbaro, Jeffrey R.; McHale, Timothy J.
2004-01-01
A joint study by the Dover National Test Site and the U.S. Geological Survey was conducted from June 27 through July 18, 2001 to determine the spatial distribution of the gasoline oxygenate additive methyl tert-butyl ether and selected water-quality constituents in the surficial aquifer underlying the Dover National Test Site at Dover Air Force Base, Delaware. The study was conducted to support a planned enhanced bio-remediation demonstration and to assist the Dover National Test Site in identifying possible locations for future methyl tert-butyl ether remediation demonstrations. This report presents the analytical results from ground-water samples collected during the direct-push ground-water sampling study. A direct-push drill rig was used to quickly collect 115 ground-water samples over a large area at varying depths. The ground-water samples and associated quality-control samples were analyzed for volatile organic compounds and methyl tert-butyl ether by the Dover National Test Site analytical laboratory. Volatile organic compounds were above the method reporting limits in 59 of the 115 ground-water samples. The concentrations ranged from below detection limits to maximum values of 12.4 micrograms per liter of cis-1,2-dichloroethene, 1.14 micrograms per liter of trichloroethene, 2.65 micrograms per liter of tetrachloroethene, 1,070 micrograms per liter of methyl tert-butyl ether, 4.36 micrograms per liter of benzene, and 1.8 micrograms per liter of toluene. Vinyl chloride, ethylbenzene, p,m-xylene, and o-xylene were not detected in any of the samples collected during this investigation. Methyl tert-butyl ether was detected in 47 of the 115 ground-water samples. The highest methyl tert-butyl ether concentrations were found in the surficial aquifer from -4.6 to 6.4 feet mean sea level; however, methyl tert-butyl ether was detected as deep as -9.5 feet mean sea level. Increased methane concentrations and decreased dissolved oxygen concentrations were found in
NASA Astrophysics Data System (ADS)
Groote, S.; Körner, J. G.; Tuvike, P.
2013-05-01
We provide analytical O(α_s) results for the three polarized decay structure functions H_{++}, H_{00} and H_{--} that describe the decay of a polarized W boson into massive quark-antiquark pairs. As an application we consider the decay t → b + W⁺ involving the helicity fractions ρ_{mm} of the W⁺ boson, followed by the polarized decay W⁺(↑) → q₁q̄₂ described by the polarized decay structure functions H_{mm}. We thereby determine the O(α_s) polar angle decay distribution of the cascade decay process t → b + W⁺(→ q₁q̄₂). As a second example we analyze quark mass and off-shell effects in the cascade decays H → W⁻ + W*⁺(→ q₁q̄₂) and H → Z + Z*(→ qq̄). For the decays H → W⁻ + W*⁺(→ cb̄) and H → Z + Z*(→ bb̄) we find substantial deviations from the mass-zero approximation, in particular in the vicinity of the threshold region.
NASA Astrophysics Data System (ADS)
Buchberger, G.; Schoeftner, J.
2013-03-01
In this work a theory for a slender piezoelectric laminated beam taking into account lossy electrodes is developed. For the modeling of the bending behavior of the beam with conductivity, the kinematical assumptions of Bernoulli-Euler and a simplified form of the Telegraph equations are used. Applying d’Alembert’s principle, Gauss’ law of electrostatics and Kirchhoff’s voltage and current rules, the partial differential equations of motion are derived, describing the bending vibrations of the beam and the voltage distribution and current flow along the resistive electrodes. The theory is valid for applications that are used for actuation and for sensing. In the first case the voltage at a certain location on the electrodes is prescribed and the beam is deformed, whereas in the second case the structure is excited by a distributed external load and the voltage distribution is a result of the structural deformation. For a bimorph with constant width and constant material properties the beam is governed by two coupled partial differential equations for the elastic deformation and for the voltage distribution: the first one is an extension of the Bernoulli-Euler equation of an elastic beam, the second one is a diffusion equation for the voltage. The analytical results of the developed theory are validated by means of three-dimensional electromechanically coupled finite element simulations with ANSYS 11.0. Different mechanical and electrical boundary conditions and resistances of the electrodes are considered in the numerical case study. Eigenfrequencies are compared and the frequency responses of the mechanical and electrical quantities show a good agreement between the proposed beam theory and FE results.
NASA Astrophysics Data System (ADS)
Gliese, U.; Gershman, D. J.; Dorelli, J.; Avanov, L. A.; Barrie, A. C.; Clark, G. B.; Kujawski, J. T.; Mariano, A. J.; Coffey, V. N.; Tucker, C. J.; Chornay, D. J.; Cao, N. T.; Zeuch, M. A.; Dickson, C.; Smith, D. L.; Salo, C.; MacDonald, E.; Kreisler, S.; Jacques, A. D.; Giles, B. L.; Pollock, C. J.
2015-12-01
The Fast Plasma Investigation (FPI) on NASA's Magnetospheric MultiScale (MMS) mission employs 16 Dual Electron Spectrometers and 16 Dual Ion Spectrometers with 4 of each type on each of 4 spacecraft to enable fast (30 ms for electrons; 150 ms for ions) and spatially differentiated measurements of the full 3D particle velocity distributions. This approach presents a new and challenging aspect to the calibration and operation of these instruments on ground and in flight. The response uniformity, the reliability of their calibration and the approach to handling any temporal evolution of these calibrated characteristics all assume enhanced importance in this application, where we attempt to understand the meaning of particle distributions within the ion and electron diffusion regions of magnetically reconnecting plasmas. We have developed a detailed model of the spectrometer detection system, its behavior and its signal, crosstalk and noise sources. Based on this, we have devised a new calibration method that enables accurate and repeatable measurement of micro-channel plate (MCP) gain, signal loss due to variation in MCP gain and crosstalk effects in one single measurement. The foundational concepts of this new calibration method, named threshold scan, are presented. It is shown how this method has been successfully applied both on ground and in-flight to achieve highly accurate and precise calibration of all 64 spectrometers. Calibration parameters that will evolve in flight are determined daily providing a robust characterization of sensor suite performance, as a basis for both in-situ hardware adjustment and data processing to scientific units, throughout mission lifetime. This is shown to be very desirable as the instruments will produce higher quality raw science data that will require smaller post-acquisition data-corrections using results from in-flight derived pitch angle distribution measurements and ground calibration measurements. The practical application
Kapoor, Alok; Kraemer, Kevin L.; Smith, Kenneth J.; Roberts, Mark S.; Saitz, Richard
2009-01-01
Background The %carbohydrate deficient transferrin (%CDT) test offers objective evidence of unhealthy alcohol use but its cost-effectiveness in primary care conditions is unknown. Methods Using a decision tree and Markov model, we performed a literature-based cost-effectiveness analysis of 4 strategies for detecting unhealthy alcohol use in adult primary care patients: (i) Questionnaire Only, using a validated 3-item alcohol questionnaire; (ii) %CDT Only; (iii) Questionnaire followed by %CDT (Questionnaire-%CDT) if the questionnaire is negative; and (iv) No Screening. For those patients screening positive, clinicians performed more detailed assessment to characterize unhealthy use and determine therapy. We estimated costs using Medicare reimbursement and the Medical Expenditure Panel Survey. We determined sensitivity, specificity, prevalence of disease, and mortality from the medical literature. In the base case, we calculated the incremental cost-effectiveness ratio (ICER) in 2006 dollars per quality-adjusted life year ($/QALY) for a 50-year-old cohort. Results In the base case, the ICER for the Questionnaire-%CDT strategy was $15,500/QALY compared with the Questionnaire Only strategy. Other strategies were dominated. When the prevalence of unhealthy alcohol use exceeded 15% and screening age was <60 years, the Questionnaire-%CDT strategy costs less than $50,000/QALY compared to the Questionnaire Only strategy. Conclusions Adding %CDT to questionnaire-based screening for unhealthy alcohol use was cost-effective in our literature-based decision analytic model set in typical primary care conditions. Screening with %CDT should be considered for adults up to the age of 60 when the prevalence of unhealthy alcohol use is 15% or more and screening questionnaires are negative. PMID:19426168
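The $/QALY figures quoted above are incremental cost-effectiveness ratios. A minimal sketch of that computation, with purely illustrative numbers (the costs and QALYs below are invented, not taken from the study):

```python
def icer(cost_a, qaly_a, cost_b, qaly_b):
    """Incremental cost-effectiveness ratio of strategy A versus comparator
    B: (cost_A - cost_B) / (QALY_A - QALY_B), in $ per QALY gained."""
    d_qaly = qaly_a - qaly_b
    if d_qaly <= 0:
        # A dominated/no-gain case cannot be summarized by an ICER.
        raise ValueError("strategy A does not gain QALYs over B")
    return (cost_a - cost_b) / d_qaly

# Invented example: strategy A costs $310 more per patient and yields
# 0.02 additional QALYs relative to B.
print(icer(1310.0, 10.02, 1000.0, 10.00))  # ~15500 $/QALY
```

A strategy is "dominated" (as in the abstract) when another strategy is both cheaper and more effective, so no ICER is reported for it.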
Klima, Miriam; Altenburger, Markus J; Kempf, Jürgen; Auwärter, Volker; Neukamm, Merja A
2016-08-01
In burnt or skeletonized bodies, dental hard tissue sometimes is the only remaining specimen available. Therefore, it could be used as an alternative matrix in post mortem toxicology. Additionally, analysis of dental tissues could provide a unique retrospective window of detection. For forensic interpretation, routes and rates of incorporation of different drugs as well as physicochemical differences between tooth root, tooth crown and carious material have to be taken into account. In a pilot study, one post mortem tooth each from three drug users was analyzed for medicinal and illicit drugs. The pulp was removed in two cases; in one case the tooth was root canal treated. The teeth were separated into root, crown and carious material, and drugs were extracted from the powdered material with methanol under ultrasonication. The extracts were screened for drugs by LC-MS(n) (ToxTyper™) and quantitatively analyzed with LC-ESI-MS/MS in MRM mode. The findings were compared to the analytical results for cardiac blood, femoral blood, urine, stomach content and hair. In dental hard tissues, 12 drugs (amphetamine, MDMA, morphine, codeine, norcodeine, methadone, EDDP, fentanyl, tramadol, diazepam, nordazepam, and promethazine) could be detected, and concentrations ranged from approximately 0.13 pg/mg to 2,400 pg/mg. The concentrations declined in the following order: carious material > root > crown. Only the root canal treated tooth showed higher concentrations in the crown than in the root. In post mortem toxicology, dental hard tissue could be a useful alternative matrix facilitating a more differentiated consideration of drug consumption patterns, as the window of detection seems to overlap those for body fluids and hair.
Coplen, T.B.; Qi, H.
2012-01-01
Because there are no internationally distributed stable hydrogen and oxygen isotopic reference materials of human hair, the U.S. Geological Survey (USGS) has prepared two such materials, USGS42 and USGS43. These reference materials span values commonly encountered in human hair stable isotope analysis and are isotopically homogeneous at sample sizes larger than 0.2 mg. USGS42 and USGS43 human-hair isotopic reference materials are intended for calibration of δ(2)H and δ(18)O measurements of unknown human hair by quantifying (1) drift with time, (2) mass-dependent isotopic fractionation, and (3) isotope-ratio-scale contraction. While they are intended for measurements of the stable isotopes of hydrogen and oxygen, they also are suitable for measurements of the stable isotopes of carbon, nitrogen, and sulfur in human and mammalian hair. Preliminary isotopic compositions of the non-exchangeable fractions of these materials are: USGS42 (Tibetan hair), δ(2)H(VSMOW-SLAP) = -78.5 ± 2.3‰ (n = 62) and δ(18)O(VSMOW-SLAP) = +8.56 ± 0.10‰ (n = 18); USGS43 (Indian hair), δ(2)H(VSMOW-SLAP) = -50.3 ± 2.8‰ (n = 64) and δ(18)O(VSMOW-SLAP) = +14.11 ± 0.10‰ (n = 18). Using the recommended analytical protocols presented herein for δ(2)H(VSMOW-SLAP) and δ(18)O(VSMOW-SLAP) measurements, the least squares fit regression of 11 human hair reference materials is δ(2)H(VSMOW-SLAP) = 6.085 δ(18)O(VSMOW-SLAP) - 136.0‰ with an R-square value of 0.95. The δ(2)H difference between the calibrated results of human hair in this investigation and a commonly accepted human-hair relationship is a remarkable 34‰. It is critical that readers pay attention to the δ(2)H(VSMOW-SLAP) and δ(18)O(VSMOW-SLAP) values of isotopic reference materials in publications, and they need to adjust the δ(2)H(VSMOW-SLAP) and δ(18)O(VSMOW-SLAP) measurement results of human hair in previous publications, as needed, to ensure all results are on the same scales.
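The calibration role these reference materials play can be sketched as a two-point scale normalization: raw instrument delta values are mapped onto the VSMOW-SLAP scale using the two references measured in the same sequence. The δ(2)H values of USGS42 and USGS43 below are from the abstract; the raw instrument readings and the unknown sample value are invented for illustration:

```python
def normalize_delta(raw, raw_ref1, raw_ref2, true_ref1, true_ref2):
    """Two-point normalization of a raw delta value onto the reference
    scale, correcting scale contraction (slope) and offset."""
    slope = (true_ref2 - true_ref1) / (raw_ref2 - raw_ref1)
    return true_ref1 + slope * (raw - raw_ref1)

# Accepted δ(2)H(VSMOW-SLAP) values of USGS42 and USGS43 (from the abstract).
USGS42_D2H, USGS43_D2H = -78.5, -50.3
# Invented raw readings for the two references and one unknown hair sample.
print(normalize_delta(-60.0, -75.0, -48.0, USGS42_D2H, USGS43_D2H))
```

By construction, the mapping returns each reference material's accepted value exactly when fed its own raw reading, and interpolates (or extrapolates) linearly for unknowns.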
Accurate ab Initio Spin Densities.
Boguslawski, Katharina; Marti, Konrad H; Legeza, Ors; Reiher, Markus
2012-06-12
We present an approach for the calculation of spin density distributions for molecules that require very large active spaces for a qualitatively correct description of their electronic structure. Our approach is based on the density-matrix renormalization group (DMRG) algorithm to calculate the spin density matrix elements as a basic quantity for the spatially resolved spin density distribution. The spin density matrix elements are directly determined from the second-quantized elementary operators optimized by the DMRG algorithm. As an analytic convergence criterion for the spin density distribution, we employ our recently developed sampling-reconstruction scheme [J. Chem. Phys. 2011, 134, 224101] to build an accurate complete-active-space configuration-interaction (CASCI) wave function from the optimized matrix product states. The spin density matrix elements can then also be determined as an expectation value employing the reconstructed wave function expansion. Furthermore, the explicit reconstruction of a CASCI-type wave function provides insight into chemically interesting features of the molecule under study, such as the distribution of α and β electrons in terms of Slater determinants, CI coefficients, and natural orbitals. The methodology is applied to an iron nitrosyl complex, which we have identified as a challenging system for standard approaches [J. Chem. Theory Comput. 2011, 7, 2740].
ERIC Educational Resources Information Center
Kilgo, Cindy A.; Pascarella, Ernest T.
2016-01-01
This study examines the effects of undergraduate students participating in independent research with faculty members on four-year graduation and graduate/professional degree aspirations. We analyzed four-year longitudinal data from the Wabash National Study of Liberal Arts Education using multiple analytic techniques. The findings support the…
ERIC Educational Resources Information Center
Olson, Carol Booth; Kim, James S.; Scarcella, Robin; Kramer, Jason; Pearson, Matthew; van Dyk, David A.; Collins, Penny; Land, Robert E.
2012-01-01
In this study, 72 secondary English teachers from the Santa Ana Unified School District were randomly assigned to participate in the Pathway Project, a cognitive strategies approach to teaching interpretive reading and analytical writing, or to a control condition involving typical district training focusing on teaching content from the textbook.…
Tank 241-AX-101 grab samples 1AX-97-1 through 1AX-97-3 analytical results for the final report
Esch, R.A.
1997-11-13
This document is the final report for tank 241-AX-101 grab samples. Four grab samples were collected from riser 5B on July 29, 1997. Analyses were performed on samples 1AX-97-1, 1AX-97-2 and 1AX-97-3 in accordance with the Compatibility Grab Sampling and Analysis Plan (TSAP) and the Data Quality Objectives for Tank Farms Waste Compatibility Program (DQO) (Rev. 1: Fowler, 1995; Rev. 2: Mulkey and Miller, 1997). The analytical results are presented in Table 1. No notification limits were exceeded. All four samples contained settled solids that appeared to be large salt crystals that precipitated upon cooling to ambient temperature. Less than 25% settled solids were present in the first three samples; therefore, only the supernate was sampled and analyzed. Sample 1AX-97-4 contained approximately 25.3% settled solids. Compatibility analyses were not performed on this sample. Attachment 1 is provided as a cross-reference for relating the tank farm customer identification numbers with the 222-S Laboratory sample numbers and the portion of sample analyzed. Table 2 provides the appearance information. The settled solids in samples 1AX-97-1, 1AX-97-2 and 1AX-97-3 were less than 25% by volume. Therefore, for these three samples, two 15-mL subsamples were pipetted from the surface of the liquid and submitted to the laboratory for analysis. In addition, a portion of the liquid was taken from each of these three samples to perform an acidified ammonia analysis. No analysis was performed on the settled solid portion of the samples. Sample 1AX-97-4 was reserved for the Process Chemistry group to perform boil down and dissolution testing in accordance with Letter of Instruction for Non-Routine Analysis of Single-Shell Tank 241-AX-101 Grab Samples (Field, 1997) (Correspondence 1). However, prior to the analysis, the sample was inadvertently
NASA Technical Reports Server (NTRS)
Frady, Greg; Smalley, Kurt; LaVerde, Bruce; Bishop, Jim
2004-01-01
The paper will discuss practical and analytical findings of a test program conducted to assist engineers in determining which analytical strain fields are most appropriate to describe the crack initiating and crack propagating stresses in thin walled cylindrical hardware that serves as part of the Space Shuttle Main Engine's fuel system. In service the hardware is excited by fluctuating dynamic pressures in a cryogenic fuel that arise from turbulent flow/pump cavitation. A bench test using a simplified system was conducted using acoustic energy in air to excite the test articles. Strain measurements were used to reveal response characteristics of two Flowliner test articles that are assembled as a pair when installed in the engine feed system.
BIOACCESSIBILITY TESTS ACCURATELY ESTIMATE ...
Hazards of soil-borne Pb to wild birds may be more accurately quantified if the bioavailability of that Pb is known. To better understand the bioavailability of Pb to birds, we measured blood Pb concentrations in Japanese quail (Coturnix japonica) fed diets containing Pb-contaminated soils. Relative bioavailabilities were expressed by comparison with blood Pb concentrations in quail fed a Pb acetate reference diet. Diets containing soil from five Pb-contaminated Superfund sites had relative bioavailabilities from 33%-63%, with a mean of about 50%. Treatment of two of the soils with P significantly reduced the bioavailability of Pb. The bioaccessibility of the Pb in the test soils was then measured in six in vitro tests and regressed on bioavailability. They were: the “Relative Bioavailability Leaching Procedure” (RBALP) at pH 1.5, the same test conducted at pH 2.5, the “Ohio State University In vitro Gastrointestinal” method (OSU IVG), the “Urban Soil Bioaccessible Lead Test”, the modified “Physiologically Based Extraction Test” and the “Waterfowl Physiologically Based Extraction Test.” All regressions had positive slopes. Based on criteria of slope and coefficient of determination, the RBALP pH 2.5 and OSU IVG tests performed very well. Speciation by X-ray absorption spectroscopy demonstrated that, on average, most of the Pb in the sampled soils was sorbed to minerals (30%), bound to organic matter (24%), or present as Pb sulfate (18%). Ad
Enengel, Barbara; Penker, Marianne; Muhar, Andreas; Williams, Rachael
2011-04-01
Participatory processes in general, and also in relation to managing landscape issues, are gaining importance, mostly due to arguments surrounding legitimacy and effectiveness in decision-making. The main aim of this research, based on a transaction costs perspective, is to present an integrated analytical framework to determine individual efforts (time, money), benefits and risks of participants in landscape co-management processes. Furthermore, a reflection on the analytical approach developed and the arising lessons for landscape co-management are presented. In the analytical framework, benefit components comprise factors such as 'contributing to landscape maintenance/development and nature protection', 'representing one's interest group', 'co-deciding on relevant topics', 'providing and broadening one's knowledge' and 'building networks'. The risks of participation are related to a lack of information and agreements, and missing support and actual decision-making power. The analytical framework is applied to two case studies in Austria: an EU LIFE-Nature project and a Cultural Landscape Project of the Provincial Government of Lower Austria. Analysis of the effort-benefit relations provides an indication for a more effective design of co-management. Although the processes are rated as quite adequate, there is a low willingness of participants to commit additional time to co-management processes. In contrast to the Cultural Landscape Project, in the LIFE-Nature project professionally involved persons participate alongside partial and full volunteers. These uneven conditions of participation and an unfair distribution of transaction costs jeopardize the promising chances co-management bears for landscape governance.
Not Available
2006-06-01
In the Analytical Microscopy group, within the National Center for Photovoltaic's Measurements and Characterization Division, we combine two complementary areas of analytical microscopy--electron microscopy and proximal-probe techniques--and use a variety of state-of-the-art imaging and analytical tools. We also design and build custom instrumentation and develop novel techniques that provide unique capabilities for studying materials and devices. In our work, we collaborate with you to solve materials- and device-related R&D problems. This sheet summarizes the uses and features of four major tools: transmission electron microscopy, scanning electron microscopy, the dual-beam focused-ion-beam workstation, and scanning probe microscopy.
Selecting MODFLOW cell sizes for accurate flow fields.
Haitjema, H; Kelson, V; de Lange, W
2001-01-01
Contaminant transport models often use a velocity field derived from a MODFLOW flow field. Consequently, the accuracy of MODFLOW in representing a ground water flow field determines in part the accuracy of the transport predictions, particularly when advective transport is dominant. We compared MODFLOW ground water flow rates and MODPATH particle traces (advective transport) for a variety of conceptual models and different grid spacings to exact or approximate analytic solutions. All of our numerical experiments concerned flow in a single confined or semiconfined aquifer. While MODFLOW appeared robust in terms of both local and global water balance, we found that ground water flow rates, particle traces, and associated ground water travel times are accurate only when sufficiently small cells are used. For instance, a minimum of four or five cells is required to accurately model total ground water inflow in tributaries or other narrow surface water bodies that end inside the model domain. Also, about 50 cells are needed to represent zones of differing transmissivities; otherwise, an incorrect flow field and (locally) inaccurate ground water travel times may result. Finally, to adequately represent leakage through aquitards or through the bottom of surface water bodies, it was found that the maximum allowable cell dimension should not exceed a characteristic leakage length lambda, which is defined as the square root of the aquifer transmissivity times the resistance of the aquitard or stream bottom. In some cases a cell size of one-tenth of lambda is necessary to obtain accurate results.
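The cell-size guideline above reduces to a one-line formula. A minimal sketch (the helper names and the example values of transmissivity and resistance are illustrative, not from the study):

```python
import math

def leakage_length(transmissivity, resistance):
    """Characteristic leakage length lambda = sqrt(T * c), where T is the
    aquifer transmissivity [m^2/d] and c the aquitard or streambed
    resistance [d], as defined in the abstract."""
    return math.sqrt(transmissivity * resistance)

def max_cell_size(transmissivity, resistance, conservative=False):
    """Upper bound on the MODFLOW cell dimension per the guideline: cells
    no larger than lambda, or lambda/10 in the demanding cases noted."""
    lam = leakage_length(transmissivity, resistance)
    return lam / 10.0 if conservative else lam

# Example: T = 100 m^2/d, c = 100 d gives lambda = 100 m.
print(max_cell_size(100.0, 100.0))                      # 100.0 m
print(max_cell_size(100.0, 100.0, conservative=True))   # 10.0 m
```

In practice one would evaluate lambda separately for each leaky zone and let the smallest resulting bound drive the grid design there.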
Loser, H
1985-12-01
The environmental control and life support subsystem (ECLS) of the Spacelab module provides various functions which can be assigned to its branches as follows: thermal insulation from the external environment is achieved by the passive thermal control subsystem (PTCS); rejection of the heat produced by the Spacelab subsystem equipment and by the various experiments is the task of the active thermal control subsystem (ATCS); and life support in the form of cabin air ventilation, oxygen/carbon dioxide partial pressure control, and total pressure and air temperature/humidity control is achieved by the life support subsystem (LSS). In the first part of the paper a brief description of the various elements and loops forming the Spacelab ECLS is given by discussing the PTCS, ATCS and LSS in some detail. The objective of the verification flight test program--as the title implies--is the verification of the major requirements the ECLS has to comply with. Those requirements are comprehensively discussed in the second part of the paper. A description of the analytical approach is given in the third part; however, only those areas are addressed which were included in the verification flight test program. A brief description of the flight instrumentation and the data transmission and collection follows in the fourth part. In the fifth part the approach to selecting and compiling flight test data obtained during the first mission (Shuttle flight STS-9), from November 28 to December 8, 1983, is illustrated, and flight test data are compared with the analytical predictions in the form of examples. In the sixth and last part the actual (measured) performance is compared with the requirements, and conclusions are drawn with respect to the comprehensiveness and accuracy of the flight test verification and the compliance of Spacelab's actual performance with the requirements.
Dynamical correction of control laws for marine ships' accurate steering
NASA Astrophysics Data System (ADS)
Veremey, Evgeny I.
2014-06-01
The objective of this work is the analytical synthesis of autopilots for marine vehicles. Despite the numerous known methods of solution, the problem remains complicated because of the extensive set of dynamical conditions, requirements, and restrictions that must be satisfied by the appropriate choice of a steering control law. The aim of this paper is to simplify the synthesis procedure while providing accurate steering with desirable dynamics of the control system. The approach proposed here is based on a special unified multipurpose control law structure that allows the synthesis to be decoupled into simpler particular optimization problems. In particular, this structure includes a dynamical corrector that supports the desired features of the vehicle's motion under the action of sea wave disturbances. As a result, a new specialized method for corrector design is proposed to provide accurate steering or a trade-off between accurate and economical steering of the ship. This method guarantees a certain flexibility of the control law with respect to the actual sailing environment; the corresponding tuning can be realized in real time onboard.
Accurate modelling of unsteady flows in collapsible tubes.
Marchandise, Emilie; Flaud, Patrice
2010-01-01
The context of this paper is the development of a general and efficient numerical haemodynamic tool to help clinicians and researchers in understanding physiological flow phenomena. We propose an accurate one-dimensional Runge-Kutta discontinuous Galerkin (RK-DG) method coupled with lumped-parameter models for the boundary conditions. The suggested model has already been successfully applied to haemodynamics in arteries and is now extended to flow in collapsible tubes such as veins. The main difference from cardiovascular simulations is that the flow may become supercritical and elastic jumps may appear, with the numerical consequence that the scheme may not remain monotone unless a limiting procedure is introduced. We show that our second-order RK-DG method, equipped with an approximate Roe Riemann solver and a slope-limiting procedure, allows us to capture elastic jumps accurately. Moreover, this paper demonstrates that the complex physics associated with such flows is modelled more accurately than with traditional methods such as finite difference or finite volume methods. We present various benchmark problems that show the flexibility and applicability of the numerical method. Our solutions are compared with analytical solutions when they are available and with solutions obtained using other numerical methods. Finally, to illustrate the clinical interest, we study the emptying process in a calf vein squeezed by contracting skeletal muscle in a normal and a pathological subject. We compare our results with experimental results and discuss the sensitivity of our model to its parameters.
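As the abstract notes, the scheme may lose monotonicity at elastic jumps without slope limiting. A generic minmod limiter, a standard choice rather than necessarily the authors' exact procedure, can be sketched as:

```python
def minmod(a, b):
    """Minmod limiter: the smaller-magnitude slope when signs agree,
    zero otherwise (which suppresses oscillations at jumps)."""
    if a * b <= 0.0:
        return 0.0
    return a if abs(a) < abs(b) else b

def limited_slopes(u, dx):
    """Limited cell slopes for a piecewise-linear reconstruction; a generic
    sketch of slope limiting near elastic jumps, not the authors' exact
    limiter."""
    n = len(u)
    s = [0.0] * n
    for i in range(1, n - 1):
        s[i] = minmod((u[i] - u[i - 1]) / dx, (u[i + 1] - u[i]) / dx)
    return s

# Near a discontinuity the limited slope drops to zero, keeping the
# reconstruction monotone:
u = [1.0, 1.0, 1.0, 0.0, 0.0, 0.0]
print(limited_slopes(u, 1.0))   # all zeros around the jump
```

In smooth regions the limiter leaves second-order slopes essentially untouched; only at jumps does it flatten the reconstruction.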
Accurate orbit propagation with planetary close encounters
NASA Astrophysics Data System (ADS)
Baù, Giulio; Milani Comparetti, Andrea; Guerra, Francesca
2015-08-01
We tackle the problem of accurately propagating the motion of those small bodies that undergo close approaches with a planet. The literature is lacking on this topic, and the reliability of the numerical results is not sufficiently discussed. The high-frequency components of the perturbation generated by a close encounter make the propagation particularly challenging, both in terms of the dynamical stability of the formulation and the numerical stability of the integrator. In our approach a fixed step-size and order multistep integrator is combined with a regularized formulation of the perturbed two-body problem. When the propagated object enters the region of influence of a celestial body, the latter becomes the new primary body of attraction. Moreover, the formulation and the step-size are also changed if necessary. We present: 1) the restarter procedure applied to the multistep integrator whenever the primary body is changed; 2) new analytical formulae for setting the step-size (given the order of the multistep method, the formulation, and the initial osculating orbit) in order to control the accumulation of the local truncation error and guarantee the numerical stability of the propagation; 3) a new definition of the region of influence in the phase space. We test the propagator with some real asteroids subject to the gravitational attraction of the planets and to the Yarkovsky and relativistic perturbations. Our goal is to show that the proposed approach improves the performance of both the propagator implemented in the OrbFit software package (which is currently used by the NEODyS service) and a propagator consisting of a variable step-size and order multistep method combined with Cowell's formulation (i.e., direct integration of position and velocity in either the physical or a fictitious time).
Simple analytic potentials for linear ion traps
NASA Technical Reports Server (NTRS)
Janik, G. R.; Prestage, J. D.; Maleki, L.
1989-01-01
A simple analytical model was developed for the electric and ponderomotive (trapping) potentials in linear ion traps. This model was used to calculate the required voltage drive to a mercury trap, and the result compares well with experiments. The model gives a detailed picture of the geometric shape of the trapping potential and allows an accurate calculation of the well depth. The simplicity of the model allowed an investigation of related, more exotic trap designs which may have advantages in light-collection efficiency.
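The well-depth calculation that such a model allows can be illustrated with the textbook pseudopotential of an ideal linear quadrupole trap; the formula below is the standard result, and the numerical parameters are assumed for illustration rather than taken from the paper.

```python
import math

E_CHARGE = 1.602176634e-19   # elementary charge, C
AMU = 1.66053906660e-27      # atomic mass unit, kg

def well_depth(V0, omega_rf, r0, mass, charge=E_CHARGE):
    """Ponderomotive (pseudopotential) well depth of an ideal linear
    quadrupole trap, D = q^2 V0^2 / (4 m Omega^2 r0^2), in joules.
    V0: RF voltage amplitude; omega_rf: RF angular frequency;
    r0: electrode-to-axis distance; mass: ion mass in kg."""
    return (charge * V0) ** 2 / (4.0 * mass * omega_rf ** 2 * r0 ** 2)

# Illustrative parameters for a trapped Hg-199 ion (values assumed,
# not taken from the paper): 200 V at 2*pi*1 MHz, r0 = 5 mm.
depth_eV = well_depth(200.0, 2 * math.pi * 1e6, 5e-3, 199 * AMU) / E_CHARGE
print(round(depth_eV, 2))   # a few eV, a physically plausible depth
```

Inverting the same relation gives the required voltage drive for a target well depth, which is the calculation the abstract describes.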
Simple analytic potentials for linear ion traps
NASA Technical Reports Server (NTRS)
Janik, G. R.; Prestage, J. D.; Maleki, L.
1990-01-01
A simple analytical model was developed for the electric and ponderomotive (trapping) potentials in linear ion traps. This model was used to calculate the required voltage drive to a mercury trap, and the result compares well with experiments. The model gives a detailed picture of the geometric shape of the trapping potential and allows an accurate calculation of the well depth. The simplicity of the model allowed an investigation of related, more exotic trap designs which may have advantages in light-collection efficiency.
NASA Astrophysics Data System (ADS)
Panetta, Robert James; Seed, Mike
2016-04-01
Stable isotope applications that call for preconcentration (i.e., greenhouse gas measurements, small carbonate samples, etc.) universally call for cryogenic fluids such as liquid nitrogen, dry ice slurries, or expensive external recirculation chillers. This adds significant complexity, first and foremost in the requirements to store and handle such dangerous materials. A second layer of complexity is the instrument itself, with mechanisms either to move coolant around the trap or to move the trap in and out of the coolant, not to mention design requirements for hardware that can safely isolate the fluid from other sensitive areas. In an effort to simplify the isotopic analysis of gases requiring preconcentration, we have developed a new separation technology, UltiTrapTM (patent pending), which leverages the proprietary Advanced Purge & Trap (APT) Technology employed in elemental analysers from Elementar Analysensysteme GmbH. UltiTrapTM has been specially developed as a micro-volume, dynamically heated GC separation column. The introduction of solid-state cooling technology enables sub-zero temperatures without cryogenics or refrigerants, eliminates all moving parts, and increases analytical longevity because there are no boiling losses of coolant. This new technology makes it possible for the system to be deployed both as a focussing device and as a gas separation device. Initial data on synthetic gas mixtures (CO2/CH4/N2O in air) and real-world applications, including long-term room air monitoring and a comparison between carbonated waters of different origins, show excellent agreement with previous technologies.
On numerically accurate finite element solutions in the fully plastic range
NASA Technical Reports Server (NTRS)
Nagtegaal, J. C.; Parks, D. M.; Rice, J. R.
1974-01-01
A general criterion is given for testing a mesh with topologically similar repeat units, and the analysis shows that only a few conventional element types and arrangements are, or can be made, suitable for computations in the fully plastic range. Further, a new variational principle, which can easily be incorporated into an existing finite element program, is presented. This allows accurate computations to be made even for element designs that would not normally be suitable. Numerical results are given for three plane strain problems, namely pure bending of a beam, a thick-walled tube under pressure, and a deep double-edge-cracked tensile specimen. The effects of various element designs and of the new variational procedure are illustrated. Elastic-plastic computations at finite strain are also discussed.
Statistically Qualified Neuro-Analytic system and Method for Process Monitoring
Vilim, Richard B.; Garcia, Humberto E.; Chen, Frederick W.
1998-11-04
An apparatus and method for monitoring a process involves development and application of a statistically qualified neuro-analytic (SQNA) model to accurately and reliably identify process change. The development of the SQNA model is accomplished in two steps: deterministic model adaptation and stochastic model adaptation. Deterministic model adaptation involves formulating an analytic model of the process representing known process characteristics, augmenting the analytic model with a neural network that captures unknown process characteristics, and training the resulting neuro-analytic model by adjusting the neural network weights according to a unique scaled equation error minimization technique. Stochastic model adaptation involves qualifying any remaining uncertainty in the trained neuro-analytic model by formulating a likelihood function, given an error propagation equation, for computing the probability that the neuro-analytic model generates the measured process output. Preferably, the developed SQNA model is validated using known sequential probability ratio tests and applied to the process as an on-line monitoring system.
Fontaine, Johannes; Schirmer, Barbara; Hörr, Jutta
2002-07-03
Further NIRS calibrations were developed for the accurate and fast prediction of the total contents of methionine, cystine, lysine, threonine, tryptophan, and other essential amino acids, protein, and moisture in the most important cereals and brans or middlings for animal feed production. More than 1100 samples of global origin collected over five years were analyzed for amino acids following the Official Methods of the United States and the European Union. Detailed data and graphics are given to characterize the obtained calibration equations. NIRS was validated with 98 independent samples for wheat and 78 samples for corn and compared to amino acid predictions using linear crude protein regression equations. With a few exceptions, validation showed that 70-98% of the amino acid variance in the samples could be explained using NIRS. Especially for lysine and methionine, the most limiting amino acids for farm animals, NIRS can predict contents in cereals much better than crude protein regressions can. Through its low cost and high speed of analysis, NIRS enables the amino acid analysis of many samples, improving the accuracy of feed formulation and yielding better quality and lower production costs.
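The "variance explained" validation statistic corresponds to 1 - SSE/SST between reference analyses and NIRS predictions; a minimal sketch with invented lysine values:

```python
def explained_variance(reference, predicted):
    """Fraction of reference variance explained by the predictions,
    1 - SSE/SST; the validation statistic quoted as 70-98% in the text."""
    n = len(reference)
    mean = sum(reference) / n
    sst = sum((y - mean) ** 2 for y in reference)
    sse = sum((y - p) ** 2 for y, p in zip(reference, predicted))
    return 1.0 - sse / sst

# Made-up lysine contents (g/kg) for a handful of validation samples
ref = [2.9, 3.4, 3.1, 4.0, 3.6, 2.7]
pred = [3.0, 3.3, 3.2, 3.9, 3.5, 2.8]
print(round(explained_variance(ref, pred), 3))   # close to 1 for a good fit
```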
Nielsen, Marie Katrine Klose; Johansen, Sys Stybe; Linnet, Kristian
2014-01-01
Assessment of the total uncertainty of analytical methods for the measurement of drugs in human hair has mainly been derived from the analytical variation. However, in hair analysis several other sources of uncertainty contribute to the total uncertainty. Particularly in segmental hair analysis, pre-analytical variations associated with the sampling and segmentation may be significant factors in the assessment of the total uncertainty budget. The aim of this study was to develop and validate a method for the analysis of 31 common drugs in hair using ultra-performance liquid chromatography-tandem mass spectrometry (UHPLC-MS/MS), with focus on the assessment of both the analytical and pre-analytical sampling variations. The validated method was specific, accurate (80-120%), and precise (CV ≤ 20%) across a wide linear concentration range of 0.025-25 ng/mg for most compounds. The analytical variation was estimated to be less than 15% for almost all compounds. The method was successfully applied to 25 segmented hair specimens from deceased drug addicts showing a broad pattern of poly-drug use. The pre-analytical sampling variation was estimated from genuine duplicate measurements of two bundles of hair collected from each subject, after subtraction of the analytical component. For the most frequently detected analytes, the pre-analytical variation was estimated to be 26-69%. Thus, the pre-analytical variation was 3-7 times larger than the analytical variation (7-13%) and hence the dominant component in the total variation (29-70%). The present study demonstrated the importance of including the pre-analytical variation in the assessment of the total uncertainty budget and in the setting of the 95%-uncertainty interval (±2CVT). Excluding the pre-analytical sampling variation could significantly affect the interpretation of results from segmental hair analysis.
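The uncertainty budget described above combines in quadrature; the sketch below, using the mid-range percentages quoted in the abstract as illustrative inputs, shows the combination and the ±2CV_T interval.

```python
import math

def total_cv(analytical_cv, preanalytical_cv):
    """Combine independent analytical and pre-analytical components:
    CV_T = sqrt(CV_a^2 + CV_pre^2)."""
    return math.sqrt(analytical_cv ** 2 + preanalytical_cv ** 2)

def preanalytical_cv(duplicate_cv, analytical_cv):
    """Pre-analytical component recovered from duplicate-bundle variation
    after subtracting the analytical variance, as described above."""
    return math.sqrt(duplicate_cv ** 2 - analytical_cv ** 2)

def uncertainty_interval(concentration, cv_t):
    """95%-uncertainty interval set as +/- 2*CV_T around the result."""
    half = 2.0 * cv_t * concentration
    return (concentration - half, concentration + half)

# Mid-range figures from the abstract: ~10% analytical, ~45% pre-analytical
cv_t = total_cv(0.10, 0.45)
print(round(cv_t, 3))                    # 0.461: dominated by sampling
print(uncertainty_interval(1.0, cv_t))   # a wide interval for 1.0 ng/mg
```

The quadrature sum makes the study's point directly: with CV_pre several times CV_a, the analytical term barely moves the total.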
An Accurate and Efficient Method of Computing Differential Seismograms
NASA Astrophysics Data System (ADS)
Hu, S.; Zhu, L.
2013-12-01
Inversion of seismic waveforms for Earth structure usually requires computing partial derivatives of seismograms with respect to velocity model parameters. We developed an accurate and efficient method to calculate differential seismograms for multi-layered elastic media, based on the Thomson-Haskell propagator matrix technique. We first derived the partial derivatives of the Haskell matrix and its compound matrix with respect to the layer parameters (P-wave velocity, shear-wave velocity, and density). We then derived the partial derivatives of the surface displacement kernels in the frequency-wavenumber domain. The differential seismograms are obtained by using the frequency-wavenumber double integration method. The implementation is computationally efficient: the total computing time is proportional to the time of computing the seismogram itself, with a ratio that is independent of the number of layers in the model. We verified the correctness of the results by comparing with differential seismograms computed using the finite-difference method. Our results are more accurate because of the analytical nature of the derived partial derivatives.
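The verification step, comparing analytic derivatives against finite differences, can be illustrated on a toy function standing in for a seismogram; everything named below is a stand-in, since the real kernels come from the propagator matrices.

```python
import math

def central_difference(f, x, h=1e-6):
    """Central-difference approximation of df/dx, the kind of reference
    the authors use to verify their analytic differential seismograms."""
    return (f(x + h) - f(x - h)) / (2.0 * h)

# Toy stand-in for a seismogram as a function of one layer parameter
# (e.g. shear velocity); the real kernels come from the propagator matrices.
def seismogram(beta):
    return math.sin(beta) * math.exp(-0.1 * beta)

def analytic_derivative(beta):
    return math.exp(-0.1 * beta) * (math.cos(beta) - 0.1 * math.sin(beta))

beta = 3.5
err = abs(central_difference(seismogram, beta) - analytic_derivative(beta))
print(err < 1e-8)   # True: the two derivatives agree closely
```

The analytic route wins on accuracy because the finite-difference estimate carries both truncation and cancellation error, which is the abstract's closing point.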
Finding accurate frontiers: A knowledge-intensive approach to relational learning
NASA Technical Reports Server (NTRS)
Pazzani, Michael; Brunk, Clifford
1994-01-01
An approach to analytic learning is described that searches for accurate entailments of a Horn-clause domain theory. A hill-climbing search, guided by an information-based evaluation function, is performed by applying a set of operators that derive frontiers from domain theories. The analytic learning system is one component of a multi-strategy relational learning system. We compare the accuracy of concepts learned with this analytic strategy to that of concepts learned with an analytic strategy that simply operationalizes the domain theory.
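A minimal sketch of the greedy hill-climbing described above, with hypothetical `neighbors` and `score` callables standing in for the frontier-deriving operators and the information-based evaluation function:

```python
def hill_climb(start, neighbors, score, max_steps=100):
    """Greedy hill-climbing: move to the best-scoring neighbor until no
    neighbor improves on the current candidate. `neighbors` stands in for
    the operators that derive new frontiers from the domain theory and
    `score` for the information-based evaluation function; both names are
    illustrative, not from the system described above."""
    current = start
    for _ in range(max_steps):
        candidates = neighbors(current)
        if not candidates:
            return current
        best = max(candidates, key=score)
        if score(best) <= score(current):
            return current
        current = best
    return current

# Toy usage: climb the integers toward the peak of -(x - 7)^2
result = hill_climb(0, lambda x: [x - 1, x + 1], lambda x: -(x - 7) ** 2)
print(result)   # 7
```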
Accurate interlaminar stress recovery from finite element analysis
NASA Technical Reports Server (NTRS)
Tessler, Alexander; Riggs, H. Ronald
1994-01-01
The accuracy and robustness of a two-dimensional smoothing methodology is examined for the problem of recovering accurate interlaminar shear stress distributions in laminated composite and sandwich plates. The smoothing methodology is based on a variational formulation which combines discrete least-squares and penalty-constraint functionals in a single variational form. The smoothing analysis utilizes optimal strains computed at discrete locations in a finite element analysis. These discrete strain data are smoothed with a smoothing-element discretization, producing strains and first strain gradients of superior accuracy. The approach enables the resulting smooth strain field to be practically C1-continuous throughout the domain of smoothing, exhibiting superconvergent properties of the smoothed quantity. The continuous strain gradients are also obtained directly from the solution. The recovered strain gradients are subsequently employed in the integration of the equilibrium equations to obtain accurate interlaminar shear stresses. The test problem is a simply supported rectangular plate under a doubly sinusoidal load; it has an exact analytic solution which serves as a measure of the goodness of the recovered interlaminar shear stresses. The method has the versatility of being applicable to the analysis of rather general and complex structures built of distinct components and materials, such as found in aircraft design. For these types of structures, the smoothing is achieved with 'patches', each patch covering the domain in which the smoothed quantity is physically continuous.
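The final recovery step, integrating the equilibrium equations through the thickness using the smoothed gradients, can be sketched as a simple trapezoid integration (an illustrative caricature, not the paper's implementation):

```python
def interlaminar_shear(z, dsxx_dx, dtxy_dy):
    """Recover tau_xz(z) by integrating the 3-D equilibrium equation
    tau_xz(z) = -integral_{-h/2}^{z} (d sigma_xx/dx + d tau_xy/dy) dz',
    starting from the traction-free bottom face tau_xz(-h/2) = 0.
    z: ascending through-thickness stations; the two gradient lists hold
    the smoothed in-plane stress gradients at those stations."""
    tau = [0.0]
    for i in range(1, len(z)):
        g0 = dsxx_dx[i - 1] + dtxy_dy[i - 1]
        g1 = dsxx_dx[i] + dtxy_dy[i]
        tau.append(tau[-1] - 0.5 * (g0 + g1) * (z[i] - z[i - 1]))  # trapezoid
    return tau

# Sanity check: a constant gradient sum of 1.0 across a unit thickness
z = [-0.5 + 0.1 * k for k in range(11)]
tau = interlaminar_shear(z, [1.0] * 11, [0.0] * 11)
print(abs(tau[-1] + 1.0) < 1e-12)   # True: tau(h/2) = -1.0
```

The quality of the recovered tau_xz hinges entirely on the accuracy of the input gradients, which is why the smoothing step matters.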
McCurry, M.; Welhan, J.A.
1996-07-01
This report summarizes results of groundwater analyses for samples collected from wells USGS-44, -45, -46 and -59 in conjunction with the INEL Oversight Program straddle-packer project between 1992 and 1995. The purpose of this project was to develop and deploy a high-quality straddle-packer system for characterization of the three-dimensional geometry of solute plumes and aquifer hydrology near the Idaho Chemical Processing Plant (ICPP). Principal objectives included (1) characterizing vertical variations in aquifer chemistry; (2) documenting deviations in aquifer chemistry from that monitored by the existing network; and (3) making recommendations for improving monitoring efforts.
NASA Technical Reports Server (NTRS)
Johnson, Paul K.
2007-01-01
NASA Glenn Research Center (GRC) contracted Barber-Nichols of Arvada, CO, to construct a dual Brayton power conversion system for use as a hardware proof of concept and to validate results from a computational code known as the Closed Cycle System Simulation (CCSS). Initial checkout tests were performed at Barber-Nichols to ready the system for delivery to GRC. This presentation describes the system hardware components and lists the types of checkout tests performed, along with a couple of issues encountered while conducting the tests. A description of the CCSS model is also presented. The checkout tests did not focus on generating data; therefore, no test data or model analyses are presented.
STEEN, F.H.
1999-12-01
This document is the format IV, final report for the tank 241-S-111 (S-111) grab samples taken in August 1999 to address waste compatibility concerns. Chemical, radiochemical, and physical analyses on the tank S-111 samples were performed as directed in Compatibility Grab Sampling and Analysis Plan for Fiscal Year 1999 (Sasaki 1999a,b). Any deviations from the instructions provided in the tank sampling and analysis plan (TSAP) are discussed in this narrative. The notification limit for ¹³⁷Cs was exceeded on two samples; results are discussed in Section 5.3.2. No other notification limits were exceeded.
BELL, K.E.
1999-08-12
This document is the format IV, final report for the tank 241-AP-107 (AP-107) grab samples taken in May 1999 to address waste compatibility concerns. Chemical, radiochemical, and physical analyses on the tank AP-107 samples were performed as directed in Compatibility Grab Sampling and Analysis Plan for Fiscal Year 1999. Any deviations from the instructions provided in the tank sampling and analysis plan (TSAP) are discussed in this narrative. Interim data were provided earlier to River Protection Project (RPP) personnel; however, the data presented here represent the official results. No notification limits were exceeded.
Analytic theory for the selection of 2-D needle crystal at arbitrary Peclet number
NASA Technical Reports Server (NTRS)
Tanveer, Saleh
1989-01-01
An accurate analytic theory is presented for the velocity selection of a two-dimensional needle crystal for arbitrary Peclet number for small values of the surface tension parameter. The velocity selection is caused by the effect of transcendentally small terms which are determined by analytic continuation to the complex plane and analysis of nonlinear equations. The work supports the general conclusion of previous small Peclet number analytical results of other investigators, though there are some discrepancies in details. It also addresses questions raised on the validity of selection theory owing to assumptions made on shape corrections at large distances from the tip.
Analytic theory for the selection of a two-dimensional needle crystal at arbitrary Peclet number
NASA Technical Reports Server (NTRS)
Tanveer, S.
1989-01-01
An accurate analytic theory is presented for the velocity selection of a two-dimensional needle crystal for arbitrary Peclet number for small values of the surface tension parameter. The velocity selection is caused by the effect of transcendentally small terms which are determined by analytic continuation to the complex plane and analysis of nonlinear equations. The work supports the general conclusion of previous small Peclet number analytical results of other investigators, though there are some discrepancies in details. It also addresses questions raised on the validity of selection theory owing to assumptions made on shape corrections at large distances from the tip.
Stauffer, Eric
2006-09-01
This paper reviews the literature on the analysis of vegetable (and animal) oil residues from fire debris samples. The examination sequence starts with the solvent extraction of the residues from the substrate. The extract is then prepared for instrumental analysis by derivatizing fatty acids (FAs) into fatty acid methyl esters. The analysis is then carried out by gas chromatography or gas chromatography-mass spectrometry. The interpretation of the results is a difficult operation, seriously limited by a lack of research on the subject. The present data analysis scheme utilizes FA ratios to determine the presence of vegetable oils and their propensity to self-heat and, possibly, to spontaneously ignite. Preliminary work has demonstrated that it is possible to detect chemical compounds specific to an oil that underwent spontaneous ignition. Guidelines for future research in the analysis of vegetable oil residues from fire debris samples are also presented.
NASA Technical Reports Server (NTRS)
Eades, J. B., Jr.
1974-01-01
The mathematical developments carried out for this investigation are reported. In addition to describing and discussing the solutions which were acquired, compendia of data are presented herein which summarize the equations and describe them as representative trace geometries. In this analysis the relative motion problems have been referred to two particular frames of reference: one which is inertially aligned, and one which is (local-)horizon oriented. In addition to the classical initial-value solutions, there are results which describe cases in which applied specific forces serve as forcing functions. Also, in order to provide a complete state representation, the speed components as well as the displacements have been described. These coordinates are traced on representative planes analogous to the displacement geometries. By this procedure a complete description of a relative motion is developed; as a consequence, range-rate as well as range information is obtained.
Menoni, O; Battevi, N; Colombini, D; Ricci, M G; Occhipinti, E; Zecchi, G
1999-01-01
The paper reports the results of risk evaluation for patient lifting or moving obtained from a multicentre study of 216 wards, both in acute-care hospitals and in geriatric residences. In all situations the exposure to patient lifting was assessed using a concise index (MAPO). Analysis of the results showed that only 9% of the workers could be considered exposed to negligible risk (MAPO Index = 0-1.5); of these, 95.7% worked in hospital wards and only 4.3% in geriatric wards. A further confirmation of the higher level of exposure of workers in long-term hospitalization was that 42.3% were exposed to elevated levels (MAPO Index > 5), compared with 27.7% observed among hospital ward workers. The mean values of the exposure index were 6.8 for hospital wards and 9.64 for geriatric residences; although much higher in the latter, both categories showed high exposure. In the orthopaedic departments of the hospitals the values were higher than in the geriatric wards (MAPO Index = 10.1); medical and surgical departments showed values similar to the mean values observed in the geriatric wards. These high values were due to: a severe shortage of equipment like lifting devices (95.5%) and minor aids (99.5%), partial inadequacy of the working environment (69.2%), and poor training and information (96.1% lacking); only the supply of wheelchairs was adequate (65.8%). All of which points to an almost generalized non-observance of the regulations listed under Chapter V of Law No. 626/94. However, the proposed method of evaluation allows anyone who has to carry out prevention and improvement measures to identify priority criteria specifically aimed at the individual factors taken into consideration. By simulating an intervention for improvement aimed at equipment and training, 96% of the wards would be included in the negligible exposure class (MAPO Index = 0-1.5).
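The MAPO cut-offs quoted above lend themselves to a trivial classifier; note that the label for the intermediate band (1.51-5) is an assumption, since the abstract names only the negligible and elevated classes.

```python
def mapo_class(index):
    """Risk band for a MAPO exposure index. The 0-1.5 and >5 cut-offs are
    quoted in the text; the label for the intermediate band is assumed."""
    if index <= 1.5:
        return "negligible"
    if index <= 5.0:
        return "intermediate"   # assumed label for MAPO 1.51-5
    return "elevated"

print(mapo_class(6.8))    # mean hospital-ward index -> elevated
print(mapo_class(9.64))   # mean geriatric-residence index -> elevated
```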
Analytical Relativity of Black Holes
NASA Astrophysics Data System (ADS)
Damour, Thibault
The successful detection and analysis of gravitational wave (GW) signals from coalescing binary black holes necessitates the accurate prior knowledge of the form of the GW signals. This knowledge can be acquired through a synergy between Analytical Relativity (AR) methods and Numerical Relativity (NR) ones. We describe here the most promising AR formalism for describing the motion and radiation of coalescing binary black holes, the Effective One Body (EOB) method, and discuss its comparison with NR simulations.
NASA Astrophysics Data System (ADS)
Jacobs, Kurt; Nurdin, Hendra I.; Strauch, Frederick W.; James, Matthew
2015-04-01
We show that in the regime of ground-state cooling, simple expressions can be derived for the performance of resolved-sideband cooling—an example of coherent feedback control—and optimal linear measurement-based feedback cooling for a harmonic oscillator. These results are valid to leading order in the small parameters that define this regime. They provide insight into the origins of the limitations of coherent and measurement-based feedback for linear systems, and the relationship between them. These limitations are not fundamental bounds imposed by quantum mechanics, but are due to the fact that both cooling methods are restricted to use only a linear interaction with the resonator. We compare the performance of the two methods on an equal footing—that is, for the same interaction strength—and confirm that coherent feedback is able to make much better use of the linear interaction than measurement-based feedback. We find that this performance gap is caused not by the back-action noise of the measurement but by the projection noise. We also obtain simple expressions for the maximal cooling that can be obtained by both methods in this regime, optimized over the interaction strength.
NASA Technical Reports Server (NTRS)
Zolensky, M. E.; Floss, C.; Allen, C.; Bajit, S.; Bechtel, H. A.; Borg, J.; Brenker, F.; Bridges, J; Brownlee, D. E.; Burchell, M.; Burghammer, M.; Butterworth, A. L.; Cloetens, P.; Davis, A. M.; Doll, R.; Flynn, G. J.; Frank, D.; Gainsforth, Z.; Grun, E.; Heck, P. R.; Hillier, J. K.; Hoppe, P
2011-01-01
In addition to samples from comet 81P/Wild 2, NASA's Stardust mission may have returned the first samples of contemporary interstellar dust. The interstellar tray collected particles for 229 days during two exposures prior to the spacecraft encounter with Wild 2 and tracked the interstellar dust stream for all but 34 days of that time. In addition to aerogel capture cells, the tray contains Al foils that make up approx. 15% of the total exposed collection surface. Interstellar dust fluxes are poorly constrained, but suggest that on the order of 12-15 particles may have impacted the total exposed foil area of 15,300 sq mm; 2/3 of these are estimated to be less than approx. 1 micrometer in size. Examination of the interstellar foils to locate the small, rare craters expected from these impacts is proceeding under the auspices of the Stardust Interstellar Preliminary Examination (ISPE) plan. Below we outline the automated high-resolution imaging protocol we have established for this work and report results obtained from two interstellar foils.
Montesinos, Andres; Ardiaca, Maria; Juan-Sallés, Carles; Tesouro, Miguel A
2015-03-01
In this study we evaluated the effects of meloxicam administered at 0.5 mg/kg IM q12h for 14 days on hematologic and plasma biochemical values and on kidney tissue in 11 healthy African grey parrots (Psittacus erithacus). Before treatment with meloxicam, blood samples were collected and renal biopsy samples were obtained from the cranial portion of the left kidney from each of the birds. On day 14 of treatment, a second blood sample and biopsy from the middle portion of the left kidney were obtained from each bird. All birds remained clinically normal throughout the study period. No significant differences were found between hematologic and plasma biochemical values before and after 14 days of treatment with meloxicam, except for a slight increase in median beta globulin and corresponding total globulin concentrations, and a slight decrease in median phosphorus concentration. Renal lesions were absent in 9 of 10 representative posttreatment biopsy samples. On the basis of these results, meloxicam administered at the dosage used in this study protocol does not appear to cause renal disease in African grey parrots.
Analytic integrable systems: Analytic normalization and embedding flows
NASA Astrophysics Data System (ADS)
Zhang, Xiang
In this paper we mainly study the existence of an analytic normalization and the normal form of finite-dimensional complete analytically integrable dynamical systems. More precisely, we prove that any complete analytically integrable diffeomorphism F(x)=Bx+f(x) in (C^n,0), with B having no eigenvalue of modulus 1 and f(x)=O(|x|^2), is locally analytically conjugate to its normal form. We also prove that any complete analytically integrable differential system x˙=Ax+f(x) in (C^n,0), with A having nonzero eigenvalues and f(x)=O(|x|^2), is locally analytically conjugate to its normal form. Furthermore, we prove that any complete analytically integrable diffeomorphism defined on an analytic manifold can be embedded in a complete analytically integrable flow. We note that parts of our results improve those of Moser in J. Moser, The analytic invariants of an area-preserving mapping near a hyperbolic fixed point, Comm. Pure Appl. Math. 9 (1956) 673-692, and of Poincaré in H. Poincaré, Sur l'intégration des équations différentielles du premier ordre et du premier degré, II, Rend. Circ. Mat. Palermo 11 (1897) 193-239. These results also improve those in Xiang Zhang, Analytic normalization of analytic integrable systems and the embedding flows, J. Differential Equations 244 (2008) 1080-1092, in that the linear part of the systems can be nonhyperbolic, and the one in N.T. Zung, Convergence versus integrability in Poincaré-Dulac normal form, Math. Res. Lett. 9 (2002) 217-228, in that our paper presents the concrete expression of the normal form in a restricted case.
Statistically qualified neuro-analytic failure detection method and system
Vilim, Richard B.; Garcia, Humberto E.; Chen, Frederick W.
2002-03-02
An apparatus and method for monitoring a process involve development and application of a statistically qualified neuro-analytic (SQNA) model to accurately and reliably identify process change. The development of the SQNA model is accomplished in two stages: deterministic model adaptation, followed by stochastic modification of the adapted deterministic model. Deterministic model adaptation involves formulating an analytic model of the process representing known process characteristics, augmenting the analytic model with a neural network that captures unknown process characteristics, and training the resulting neuro-analytic model by adjusting the neural network weights according to a unique scaled equation-error minimization technique. Stochastic model modification involves qualifying any remaining uncertainty in the trained neuro-analytic model by formulating a likelihood function, given an error propagation equation, for computing the probability that the neuro-analytic model generates the measured process output. Preferably, the developed SQNA model is validated using known sequential probability ratio tests and applied to the process as an on-line monitoring system. Illustrative of the method and apparatus, the method is applied to a peristaltic pump system.
NASA Technical Reports Server (NTRS)
Harrington, Douglas (Technical Monitor); Schweiger, P.; Stern, A.; Gamble, E.; Barber, T.; Chiappetta, L.; LaBarre, R.; Salikuddin, M.; Shin, H.; Majjigi, R.
2005-01-01
Hot flow aero-acoustic tests were conducted with Pratt & Whitney's High-Speed Civil Transport (HSCT) Mixer-Ejector Exhaust Nozzles by General Electric Aircraft Engines (GEAE) in the GEAE Anechoic Freejet Noise Facility (Cell 41) located in Evendale, Ohio. The tests evaluated the impact of various geometric and design parameters on the noise generated by a two-dimensional (2-D) shrouded, 8-lobed, mixer-ejector exhaust nozzle. The shrouded mixer-ejector provides noise suppression by mixing relatively low energy ambient air with the hot, high-speed primary exhaust jet. Additional attenuation was obtained by lining the shroud internal walls with acoustic panels, which absorb acoustic energy generated during the mixing process. Two mixer designs were investigated, the high mixing "vortical" and aligned flow "axial", along with variations in the shroud internal mixing area ratios and shroud length. The shrouds were tested as hardwall or lined with acoustic panels packed with a bulk absorber. A total of 21 model configurations at 1:11.47 scale were tested. The models were tested over a range of primary nozzle pressure ratios and primary exhaust temperatures representative of typical HSCT aerothermodynamic cycles. Static as well as flight-simulated data were acquired during testing. A round convergent unshrouded nozzle was tested to provide an acoustic baseline for comparison to the test configurations. Comparisons were made to previous test results obtained with this hardware at NASA Glenn's 9- by 15-foot low-speed wind tunnel (LSWT). Laser velocimetry was used to investigate external as well as ejector internal velocity profiles for comparison to computational predictions. Ejector interior wall static pressure data were also obtained. A significant reduction in exhaust system noise was demonstrated with the 2-D shrouded nozzle designs.
NASA Astrophysics Data System (ADS)
Sobral, R. R.; Guimarães, A. P.; da Silva, X. A.
1994-10-01
The eigenvalues of the Crystalline Electric Field (CEF) Hamiltonian with cubic symmetry are analytically obtained for trivalent rare-earth ions of ground state J = 5/2, 7/2, 4, 9/2, 6, 15/2 and 8, via a Computer Algebra approach. In the presence of both the CEF and an effective exchange field, Computer Algebra still allows a partial factorization of the characteristic polynomial equation associated with the total Hamiltonian, a result of interest for the study of the magnetic behavior of rare-earth intermetallics. An application to the PrX2 intermetallic compounds (X = Mg, Al, Ru, Rh, Pt) is reported.
Analytical Chemistry of Nitric Oxide
Hetrick, Evan M.
2013-01-01
Nitric oxide (NO) is the focus of intense research, owing primarily to its wide-ranging biological and physiological actions. A requirement for understanding its origin, activity, and regulation is the need for accurate and precise measurement techniques. Unfortunately, analytical assays for monitoring NO are challenged by NO’s unique chemical and physical properties, including its reactivity, rapid diffusion, and short half-life. Moreover, NO concentrations may span pM to µM in physiological milieu, requiring techniques with wide dynamic response ranges. Despite such challenges, many analytical techniques have emerged for the detection of NO. Herein, we review the most common spectroscopic and electrochemical methods, with special focus on the fundamentals behind each technique and approaches that have been coupled with modern analytical measurement tools or exploited to create novel NO sensors.
Analytical chemistry of nitric oxide.
Hetrick, Evan M; Schoenfisch, Mark H
2009-01-01
Nitric oxide (NO) is the focus of intense research primarily because of its wide-ranging biological and physiological actions. To understand its origin, activity, and regulation, accurate and precise measurement techniques are needed. Unfortunately, analytical assays for monitoring NO are challenged by NO's unique chemical and physical properties, including its reactivity, rapid diffusion, and short half-life. Moreover, NO concentrations may span the picomolar-to-micromolar range in physiological milieus, requiring techniques with wide dynamic response ranges. Despite such challenges, many analytical techniques have emerged for the detection of NO. Herein, we review the most common spectroscopic and electrochemical methods, with a focus on the underlying mechanism of each technique and on approaches that have been coupled with modern analytical measurement tools to create novel NO sensors.
Manyazewal, Tsegahun; Paterniti, Antonio D; Redfield, Robert R; Marinucci, Francesco
2013-01-01
Providing regular external quality assessment of primary-level laboratories and timely feedback is crucial to ensure the reliability of the testing capacity of the whole laboratory network. This study aimed to assess the diagnostic performance of primary-level laboratories in Southwest Showa Zone in Ethiopia. An external quality assessment protocol was devised whereby, from among all the samples collected on-site at 4 health centers (HCs), each HC sent to a district hospital (DH) on a weekly basis 2 TB slides (1 Ziehl-Neelsen stained and another unstained), 2 malaria slides (1 Giemsa stained and another unstained), and 2 blood samples for HIV testing (1 whole blood and another plasma) for a comparative analysis. Similarly, the DH preserved the same amount and type of specimens to send to each HC for retesting. From October to November 2011, 192 single-blinded specimens were rechecked: 64 TB slides, 64 malaria slides, and 64 blood specimens for HIV testing. The analyses demonstrated an overall agreement of 95.3% (183/192) between the test and the retest, and 98.4% (63/64), 92.2% (59/64), and 95.3% (61/64) for TB microscopy, malaria microscopy, and HIV rapid testing, respectively. Of the total TB slides tested positive, 20/23 (87%) were quantified similarly in both laboratories. The agreement on HIV rapid testing was 100% (32/32) when plasma samples were tested either at the HC (16/16) or at the DH (16/16), while when whole blood specimens were tested, the agreement was 87.5% (14/16) and 93.8% (15/16) for samples prepared by the HCs and the DH, respectively. Results of this new approach proved that secondary laboratories could play a vital role in assuring laboratory quality at primary-level HCs, without depending on remotely located national and regional laboratories to provide this support.
Groote, S.; Koerner, J. G.
2009-08-01
We determine the O(α_s) radiative corrections to polarized top quark pair production in e+e- annihilation with a specified gluon energy cut. We write down fully analytical results for the unpolarized and polarized O(α_s) cross sections e+e- → tt̄(G) and e+e- → tt̄↑(G), including their polar orientation dependence relative to the beam direction. In the soft-gluon limit we recover the usual factorizing form known from the soft-gluon approximation. In the limit where the gluon energy cut takes its maximum value we recover the totally inclusive unpolarized and polarized cross sections calculated previously. We provide some numerical results on the cutoff dependence of the various polarized and unpolarized cross sections and discuss how the exact results differ numerically from the approximate soft-gluon results.
Understanding Business Analytics
2015-01-05
Business Analytics, Decision Analytics, Business Intelligence, Advanced Analytics, Data Science... to a certain degree, to label is to limit - if only... Business Analytics. [Figure 1: Google trending of daily searches for various analytic disciplines, 2004-2014]
Creating analytically divergence-free velocity fields from grid-based data
NASA Astrophysics Data System (ADS)
Ravu, Bharath; Rudman, Murray; Metcalfe, Guy; Lester, Daniel R.; Khakhar, Devang V.
2016-10-01
We present a method, based on B-splines, to calculate a C2-continuous analytic vector potential from discrete 3D velocity data on a regular grid. A continuous, analytically divergence-free velocity field can then be obtained from the curl of the potential. This field can be used to robustly and accurately integrate particle trajectories in incompressible flow fields. Based on the method of Finn and Chacon (2005) [10], this new method ensures that the analytic velocity field matches the grid values almost everywhere, with errors that are two to four orders of magnitude lower than those of existing methods. We demonstrate its application to three different problems (each in a different coordinate system) and provide details of the specifics required in each case. We show how the additional accuracy of the method yields qualitatively and quantitatively superior trajectories that result in more accurate identification of Lagrangian coherent structures.
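The core idea, that differentiating a single spline potential yields an exactly divergence-free field, can be sketched in 2D (a reduced analogue of the paper's 3D B-spline vector potential; the stream function, grid, and library calls below are illustrative assumptions, not the authors' implementation). For a spline stream function ψ, the field (u, v) = (∂ψ/∂y, -∂ψ/∂x) has divergence ψ_yx - ψ_xy = 0 identically:

```python
import numpy as np
from scipy.interpolate import RectBivariateSpline

# Grid samples of a scalar stream function psi (2D stand-in for the
# paper's 3D vector potential; this particular psi is illustrative).
x = np.linspace(0.0, 1.0, 41)
y = np.linspace(0.0, 1.0, 41)
X, Y = np.meshgrid(x, y, indexing="ij")
psi_grid = np.sin(np.pi * X) * np.sin(np.pi * Y)

# Cubic-spline representation of psi gives C2 continuity in the interior.
psi = RectBivariateSpline(x, y, psi_grid, kx=3, ky=3)

def velocity(px, py):
    """(u, v) = (dpsi/dy, -dpsi/dx): the 2D curl of (0, 0, psi)."""
    u = psi(px, py, dx=0, dy=1)[0, 0]
    v = -psi(px, py, dx=1, dy=0)[0, 0]
    return u, v

def divergence(px, py):
    """du/dx + dv/dy = psi_xy - psi_yx: zero by equality of mixed
    derivatives of the single spline psi, at every point in the domain."""
    return psi(px, py, dx=1, dy=1)[0, 0] - psi(px, py, dx=1, dy=1)[0, 0]
```

Because both terms of the divergence come from the same mixed derivative of one spline, the cancellation is exact to machine precision everywhere, not just at grid nodes.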
A simple analytical solution for slab detachment
NASA Astrophysics Data System (ADS)
Schmalholz, Stefan M.
2011-04-01
An analytical solution is presented for the nonlinear dynamics of high-amplitude necking in a free layer of power-law fluid extended in the layer-parallel direction by buoyancy stress. The solution is one-dimensional (1-D) and contains three dimensionless parameters: the thinning factor (i.e. the ratio of current to initial layer thickness), the power-law stress exponent, n, and the ratio of time to the characteristic deformation time of a viscous layer under buoyancy stress, t/ tc. tc is the ratio of the layer's effective viscosity to the applied buoyancy stress. The value of tc/ n specifies the time for detachment, i.e. the time it takes until the layer thickness has thinned to zero. The first-order accuracy of the 1-D solution is confirmed with 2-D finite element simulations of buoyancy-driven necking in a layer of power-law fluid embedded in a linear or power-law viscous medium. The analytical solution is accurate within a factor of about 2 if the effective viscosity ratio between the layer and the medium is larger than about 100 and if the medium is a power-law fluid. The analytical solution is applied to slab detachment using dislocation creep laws for dry and wet olivine. Results show that one of the most important parameters controlling the dynamics of slab detachment is the strength of the slab, which strongly depends on temperature and rheological parameters. The fundamental conclusions concerning slab detachment resulting from both the analytical solution and from earlier published thermo-mechanical numerical simulations agree well, indicating the usefulness of the highly simplified analytical solution for better understanding slab detachment. Slab detachment resulting from viscous necking is a combination of inhomogeneous thinning due to varying buoyancy stress within the slab and a necking instability due to the power-law viscous rheology ( n > 1). Application of the analytical solution to the Hindu Kush slab provides no "order-of-magnitude argument" against
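The structure of such a solution can be illustrated with a simple closed form that reproduces the properties the abstract lists (a hedged sketch: it assumes necking under a constant buoyancy force with strain rate proportional to stress to the power n, and is not necessarily the paper's exact expression): h/h0 = (1 - n·t/tc)^(1/n), which gives detachment, h → 0, at t = tc/n.

```python
import math

def thinning_factor(t_over_tc, n):
    """Thinning factor h/h0 of a necking power-law layer (stress
    exponent n) at time t, in units of the characteristic time tc.
    Detachment (h -> 0) occurs at t/tc = 1/n."""
    arg = max(1.0 - n * float(t_over_tc), 0.0)
    return arg ** (1.0 / n)

# A Newtonian layer (n = 1) thins linearly in time, while a power-law
# layer (n = 3.5, an assumed olivine-like value) necks catastrophically:
# it stays thick for most of its lifetime, then thins abruptly.
profiles = {n: [thinning_factor(t / 20.0, n) for t in range(21)]
            for n in (1.0, 3.5)}
```

The abrupt final thinning for n > 1 is the necking instability the abstract describes: the buoyancy stress concentrates in the thinning neck, and the power-law rheology amplifies the resulting strain rate.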
Accurate and fast computation of transmission cross coefficients
NASA Astrophysics Data System (ADS)
Apostol, Štefan; Hurley, Paul; Ionescu, Radu-Cristian
2015-03-01
Precise and fast computation of aerial images is essential. Typical lithographic simulators employ a Köhler illumination system for which aerial imagery is obtained using a large number of Transmission Cross Coefficients (TCCs). These are generally computed by a slow numerical evaluation of a double integral. We review the general framework in which the 2D imagery is solved and then propose a fast and accurate method to obtain the TCCs. We obtain analytical solutions and thus avoid the complexity-accuracy trade-off encountered with numerical integration. Compared to other analytical integration methods, the one presented is faster, more general and more tractable.
NASA Astrophysics Data System (ADS)
Swanson, Charles; Kaganovich, I. D.
2016-09-01
The technique of suppressing secondary electron emission (SEE) from a surface by texturing it has been developing rapidly in recent years. We have specific and general results in support of this technique: we have performed numerical and analytic calculations to determine the effective secondary electron yield (SEY) from velvet, which is an array of long cylinders on the micro-scale, and found velvet to be suitable for suppressing the SEY from a normally incident primary distribution. We have also performed numerical and analytic calculations for metallic foams, which are an isotropic lattice of fibers on the micro-scale, and found foams to be suitable for suppressing the SEY from an isotropic primary distribution. More generally, we have created a geometric weighted view-factor model for determining the SEY suppression of a given surface geometry, which has optimization of the SEY as a natural application. The optimal surface for suppressing SEY does not have finite area and has no smallest feature size, making it fractal in nature. This model gives simple criteria for a physical, non-fractal surface to suppress SEY. We found families of optimal surfaces that suppress SEY given a finite surface area. The research is supported by the Air Force Office of Scientific Research (AFOSR).
Analytic prediction of airplane equilibrium spin characteristics
NASA Technical Reports Server (NTRS)
Adams, W. M., Jr.
1972-01-01
The nonlinear equations of motion are solved algebraically for conditions under which an airplane is in an equilibrium spin. Constrained minimization techniques are employed in obtaining the solution. Linear characteristics of the airplane about the equilibrium points are also presented, and their significance in identifying the stability characteristics of the equilibrium points is discussed. Computer time requirements are small, making the method appear potentially applicable in airplane design. Results are obtained for several configurations and are compared with other analytic-numerical methods employed in spin prediction. Correlation with experimental results is discussed for one configuration for which a rather extensive data base was available. A need is indicated for higher-Reynolds-number data taken under conditions that more accurately simulate a spin.
King, Harley D.; Chaffee, Maurice A.
2000-01-01
Desert BLM Resource Area and vicinity. Included in the 1,245 stream-sediment samples collected by the USGS are 284 samples collected as part of the current study, 817 samples collected as part of investigations of the 12 BLM WSAs and re-analyzed for the present study, 45 samples from the Needles 1° x 2° quadrangle, and 99 samples from the El Centro 1° x 2° quadrangle. The NURE stream-sediment and soil samples were re-analyzed as part of the USGS study in the Needles quadrangle. Analytical data for samples from the Chocolate Mountain Aerial Gunnery Range, which is located within the area of the NECD, were previously reported (King and Chaffee, 1999a). For completeness, these results are also included in this report. Analytical data for samples from the area of Joshua Tree National Park that is within the NECD have also been reported (King and Chaffee, 1999b); these results are not included in this report. The analytical data presented here can be used for baseline geochemical, mineral resource, and environmental geochemical studies.
How accurate is the Kubelka-Munk theory of diffuse reflection? A quantitative answer
NASA Astrophysics Data System (ADS)
Joseph, Richard I.; Thomas, Michael E.
2012-10-01
The (heuristic) Kubelka-Munk theory of diffuse reflectance and transmittance of a film on a substrate, which is widely used because it gives simple analytic results, is compared to the rigorous radiative transfer model of Chandrasekhar. The rigorous model must be solved numerically and is thus less intuitive. The Kubelka-Munk theory uses an absorption coefficient and a scattering coefficient as inputs, similar to the rigorous model of Chandrasekhar. The relationship between these two sets of coefficients is addressed. It is shown that the Kubelka-Munk theory is remarkably accurate if one uses the proper albedo parameter.
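For reference, the standard Kubelka-Munk result for a film of thickness d on a substrate of reflectance Rg can be sketched as follows (a textbook form of the theory, not an expression taken from this paper; K and S are the KM absorption and scattering coefficients):

```python
import math

def km_reflectance(K, S, d, Rg):
    """Kubelka-Munk diffuse reflectance of a film of thickness d with
    absorption coefficient K and scattering coefficient S (per unit
    length) on a substrate of reflectance Rg."""
    a = 1.0 + K / S
    b = math.sqrt(a * a - 1.0)
    coth = 1.0 / math.tanh(b * S * d)
    return (1.0 - Rg * (a - b * coth)) / (a - Rg + b * coth)

def km_r_infinity(K, S):
    """Reflectance of an optically thick layer, R_inf = a - b."""
    a = 1.0 + K / S
    return a - math.sqrt(a * a - 1.0)
```

As S·d grows the substrate term drops out and R approaches R∞ = 1 + K/S - sqrt((K/S)² + 2·K/S), the simple closed form that makes the theory attractive despite its heuristic derivation.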
Accurate analysis of planar optical waveguide devices using higher-order FDTD scheme.
Kong, Fanmin; Li, Kang; Liu, Xin
2006-11-27
A higher-order finite-difference time-domain (HO-FDTD) numerical method is proposed for the time-domain analysis of planar optical waveguide devices. The anisotropic perfectly matched layer (APML) absorbing boundary condition for the HO-FDTD scheme is implemented, and the numerical dispersion of this scheme is studied. Numerical simulations for the parallel-slab directional coupler are presented, and the computed results using this scheme are in close agreement with analytical solutions. Compared with the conventional FDTD method, this scheme can save considerable computational resources without sacrificing solution accuracy, and is especially applicable to the accurate analysis of optical devices.
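The flavor of a higher-order scheme can be sketched with the standard fourth-order staggered first-derivative stencil used by (2,4)-type FDTD schemes (an illustrative textbook stencil; the paper's particular HO-FDTD scheme and its APML details are not reproduced here):

```python
import math

def d1_staggered4(f, x, h):
    """Fourth-order-accurate staggered first derivative, the spatial
    stencil of (2,4)-type higher-order FDTD schemes."""
    return (9.0 / 8.0 * (f(x + 0.5 * h) - f(x - 0.5 * h))
            - 1.0 / 24.0 * (f(x + 1.5 * h) - f(x - 1.5 * h))) / h

# Halving h should cut the error by about 2**4 = 16.
errs = [abs(d1_staggered4(math.sin, 0.3, h) - math.cos(0.3))
        for h in (0.1, 0.05)]
order = math.log(errs[0] / errs[1], 2)
```

The fourth-order spatial error is what lets such schemes use coarser grids than conventional second-order FDTD at the same accuracy, which is the source of the computational savings claimed above.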
Accurate verification of the conserved-vector-current and standard-model predictions
Sirlin, A.; Zucchini, R.
1986-10-20
An approximate analytic calculation of O(Zα²) corrections to Fermi decays is presented. When the analysis of Koslowsky et al. is modified to take into account the new results, it is found that each of the eight accurately studied Ft values differs from the average by less than about 1σ, thus significantly improving the comparison of experiments with conserved-vector-current predictions. The new Ft values are lower than before, which also brings experiments into very good agreement with the three-generation standard model, at the level of its quantum corrections.
Accurate formula for conversion of tunneling current in dynamic atomic force spectroscopy
NASA Astrophysics Data System (ADS)
Sader, John E.; Sugimoto, Yoshiaki
2010-07-01
Recent developments in frequency modulation atomic force microscopy enable simultaneous measurement of frequency shift and time-averaged tunneling current. Determination of the interaction force is facilitated using an analytical formula, valid for arbitrary oscillation amplitudes [Sader and Jarvis, Appl. Phys. Lett. 84, 1801 (2004)]. Here we present the complementary formula for evaluation of the instantaneous tunneling current from the time-averaged tunneling current. This simple and accurate formula is valid for any oscillation amplitude and current law. The resulting theoretical framework allows for simultaneous measurement of the instantaneous tunneling current and interaction force in dynamic atomic force microscopy.
Guggenheim, James A.; Bargigia, Ilaria; Farina, Andrea; Pifferi, Antonio; Dehghani, Hamid
2016-01-01
A novel, straightforward, accessible and efficient approach is presented for performing hyperspectral time-domain diffuse optical spectroscopy to determine the optical properties of samples accurately using geometry-specific models. To allow bulk parameter recovery from measured spectra, a set of libraries based on a numerical model of the domain being investigated is developed, as opposed to the conventional approach of using an analytical semi-infinite slab approximation, which is known and shown to introduce boundary effects. Results demonstrate that the method improves the accuracy of derived spectrally varying optical properties over the use of the semi-infinite approximation.
Tank 241-U-102 Grab Samples 2U-99-1, 2U-99-2, and 2U-99-3 Analytical Results for the Final Report
STEEN, F.H.
1999-08-03
This document is the final report for tank 241-U-102 grab samples. Five grab samples were collected from riser 13 on May 26, 1999 and received by the 222-S laboratory on May 26 and May 27, 1999. Samples 2U-99-3 and 2U-99-4 were submitted to the Process Chemistry Laboratory for special studies. Samples 2U-99-1, 2U-99-2 and 2U-99-5 were submitted to the laboratory for analyses. Analyses were performed in accordance with the Compatibility Grab Sampling and Analysis Plan for Fiscal Year 1999 (TSAP) (Sasaki, 1999) and the Data Quality Objectives for Tank Farms Waste Compatibility Program (DQO) (Fowler 1995; Mulkey and Miller 1998). The analytical results are presented in the data summary report. None of the subsamples submitted for differential scanning calorimetry (DSC), total organic carbon (TOC) and plutonium-239 (Pu-239) analyses exceeded the notification limits as stated in the TSAP.
ERIC Educational Resources Information Center
MacNeill, Sheila; Campbell, Lorna M.; Hawksey, Martin
2014-01-01
This article presents an overview of the development and use of analytics in the context of education. Using Buckingham Shum's three levels of analytics, the authors present a critical analysis of current developments in the domain of learning analytics, and contrast the potential value of analytics research and development with real world…
ERIC Educational Resources Information Center
Oblinger, Diana G.
2012-01-01
Talk about analytics seems to be everywhere. Everyone is talking about analytics. Yet even with all the talk, many in higher education have questions about--and objections to--using analytics in colleges and universities. In this article, the author explores the use of analytics in, and all around, higher education. (Contains 1 note.)
Technology Transfer Automated Retrieval System (TEKTRAN)
Analytical methods for the determination of mycotoxins in foods are commonly based on chromatographic techniques (GC, HPLC or LC-MS). Although these methods permit a sensitive and accurate determination of the analyte, they require skilled personnel and are time-consuming, expensive, and unsuitable ...
Green analytical chemistry--theory and practice.
Tobiszewski, Marek; Mechlińska, Agata; Namieśnik, Jacek
2010-08-01
This tutorial review summarises the current state of green analytical chemistry with special emphasis on environmentally friendly sample preparation techniques. Green analytical chemistry is a part of the sustainable development concept; its history and origins are described. Miniaturisation of analytical devices and shortening the time elapsing between performing analysis and obtaining reliable analytical results are important aspects of green analytical chemistry. Solventless extraction techniques, the application of alternative solvents and assisted extractions are considered to be the main approaches complying with green analytical chemistry principles.
A non linear analytical model of switched reluctance machines
NASA Astrophysics Data System (ADS)
Sofiane, Y.; Tounzi, A.; Piriou, F.
2002-06-01
Nowadays, switched reluctance machines are widely used. To determine their performance and to elaborate control strategies, the linear analytical model is generally used. Unfortunately, this model is not very accurate. To obtain accurate modelling results, numerical models based on either the 2D or the 3D Finite Element Method are used instead. However, this approach is very expensive in terms of computation time; while it remains suitable for studying the behaviour of a whole device, it is not, a priori, adapted to elaborating control strategies for electrical machines. This paper deals with a nonlinear analytical model in terms of variable inductances. The theoretical development of the proposed model is introduced. Then, the model is applied to study the behaviour of a whole controlled switched reluctance machine. The parameters of the structure are identified from a 2D numerical model; they can also be determined from an experimental bench. Finally, the results given by the proposed model are compared to those obtained from the 2D-FEM approach and from the classical linear analytical model.
An Accurate, Simplified Model of Intrabeam Scattering
Bane, Karl LF
2002-05-23
Beginning with the general Bjorken-Mtingwa solution for intrabeam scattering (IBS), we derive an accurate, greatly simplified model of IBS, valid for high-energy beams in normal storage ring lattices. In addition, we show that, under the same conditions, a modified version of Piwinski's IBS formulation (where η_{x,y}²/β_{x,y} has been replaced by H_{x,y}) asymptotically approaches the result of Bjorken-Mtingwa.
On accurate determination of contact angle
NASA Technical Reports Server (NTRS)
Concus, P.; Finn, R.
1992-01-01
Methods are proposed that exploit a microgravity environment to obtain highly accurate measurement of contact angle. These methods, which are based on our earlier mathematical results, do not require detailed measurement of a liquid free-surface, as they incorporate discontinuous or nearly-discontinuous behavior of the liquid bulk in certain container geometries. Physical testing is planned in the forthcoming IML-2 space flight and in related preparatory ground-based experiments.
Fabricating Cotton Analytical Devices.
Lin, Shang-Chi; Hsu, Min-Yen; Kuan, Chen-Meng; Tseng, Fan-Gang; Cheng, Chao-Min
2016-08-30
A robust, low-cost analytical device should be user-friendly, rapid, and affordable. Such devices should also be able to operate with scarce samples and provide information for follow-up treatment. Here, we demonstrate the development of a cotton-based urinalysis (i.e., nitrite, total protein, and urobilinogen assays) analytical device that employs a lateral flow-based format, and is inexpensive, easily fabricated, rapid, and can be used to conduct multiple tests without cross-contamination worries. Cotton is composed of cellulose fibers with natural absorptive properties that can be leveraged for flow-based analysis. The simple but elegant fabrication process of our cotton-based analytical device is described in this study. The arrangement of the cotton structure and test pad takes advantage of the hydrophobicity and absorptive strength of each material. Because of these physical characteristics, colorimetric results can persistently adhere to the test pad. This device enables physicians to receive clinical information in a timely manner and shows great potential as a tool for early intervention.
Analytic descriptions of cylindrical electromagnetic waves in a nonlinear medium
Xiong, Hao; Si, Liu-Gang; Yang, Xiaoxue; Wu, Ying
2015-01-01
A simple but highly efficient approach to the problem of cylindrical electromagnetic wave propagation in a nonlinear medium is proposed, based on an exact solution proposed recently. We derive an explicit analytical formula, which exhibits rich and interesting nonlinear effects, to describe the propagation of any number of cylindrical electromagnetic waves in a nonlinear medium. The results obtained using the present method agree accurately with the results of using traditional coupled-wave equations. As an example of application, we discuss how a third wave affects the sum- and difference-frequency generation of two waves propagating in the nonlinear medium.
Electron Microprobe Analysis Techniques for Accurate Measurements of Apatite
NASA Astrophysics Data System (ADS)
Goldoff, B. A.; Webster, J. D.; Harlov, D. E.
2010-12-01
Apatite [Ca5(PO4)3(F, Cl, OH)] is a ubiquitous accessory mineral in igneous, metamorphic, and sedimentary rocks. The mineral contains halogens and hydroxyl ions, which can provide important constraints on fugacities of volatile components in fluids and other phases in igneous and metamorphic environments in which apatite has equilibrated. Accurate measurements of these components in apatite are therefore necessary. Analyzing apatite by electron microprobe (EMPA), which is a commonly used geochemical analytical technique, has often been found to be problematic and previous studies have identified sources of error. For example, Stormer et al. (1993) demonstrated that the orientation of an apatite grain relative to the incident electron beam could significantly affect the concentration results. In this study, a variety of alternative EMPA operating conditions for apatite analysis were investigated: a range of electron beam settings, count times, crystal grain orientations, and calibration standards were tested. Twenty synthetic anhydrous apatite samples that span the fluorapatite-chlorapatite solid solution series, and whose halogen concentrations were determined by wet chemistry, were analyzed. Accurate measurements of these samples were obtained with many EMPA techniques. One effective method includes setting a static electron beam to 10-15nA, 15kV, and 10 microns in diameter. Additionally, the apatite sample is oriented with the crystal’s c-axis parallel to the slide surface and the count times are moderate. Importantly, the F and Cl EMPA concentrations are in extremely good agreement with the wet-chemical data. We also present EMPA operating conditions and techniques that are problematic and should be avoided. J.C. Stormer, Jr. et al., Am. Mineral. 78 (1993) 641-648.
Davenport, Thomas H
2006-01-01
We all know the power of the killer app. It's not just a support tool; it's a strategic weapon. Companies questing for killer apps generally focus all their firepower on the one area that promises to create the greatest competitive advantage. But a new breed of organization has upped the stakes: Amazon, Harrah's, Capital One, and the Boston Red Sox have all dominated their fields by deploying industrial-strength analytics across a wide variety of activities. At a time when firms in many industries offer similar products and use comparable technologies, business processes are among the few remaining points of differentiation--and analytics competitors wring every last drop of value from those processes. Employees hired for their expertise with numbers or trained to recognize their importance are armed with the best evidence and the best quantitative tools. As a result, they make the best decisions. In companies that compete on analytics, senior executives make it clear--from the top down--that analytics is central to strategy. Such organizations launch multiple initiatives involving complex data and statistical analysis, and quantitative activity is managed at the enterprise (not departmental) level. In this article, professor Thomas H. Davenport lays out the characteristics and practices of these statistical masters and describes some of the very substantial changes other companies must undergo in order to compete on quantitative turf. As one would expect, the transformation requires a significant investment in technology, the accumulation of massive stores of data, and the formulation of company-wide strategies for managing the data. But, at least as important, it also requires executives' vocal, unswerving commitment and willingness to change the way employees think, work, and are treated.
Semi-Analytic Reconstruction of Flux in Finite Volume Formulations
NASA Technical Reports Server (NTRS)
Gnoffo, Peter A.
2006-01-01
Semi-analytic reconstruction uses the analytic solution to a second-order, steady, ordinary differential equation (ODE) to simultaneously evaluate the convective and diffusive flux at all interfaces of a finite volume formulation. The second-order ODE is itself a linearized approximation to the governing first- and second-order partial differential equation conservation laws. Thus, semi-analytic reconstruction defines a family of formulations for finite volume interface fluxes using analytic solutions to approximating equations. Limiters are not applied in a conventional sense; rather, diffusivity is adjusted in the vicinity of changes in sign of eigenvalues in order to achieve a sufficiently small cell Reynolds number in the analytic formulation across critical points. Several approaches for application of semi-analytic reconstruction for the solution of one-dimensional scalar equations are introduced. Results are compared with exact analytic solutions to Burgers' equation as well as a conventional, upwind discretization using Roe's method. One approach, the end-point wave speed (EPWS) approximation, is further developed for more complex applications. One-dimensional vector equations are tested on a quasi one-dimensional nozzle application. The EPWS algorithm has a more compact difference stencil than Roe's algorithm but reconstruction time is approximately a factor of four larger than for Roe. Though both are second-order accurate schemes, Roe's method approaches a grid converged solution with fewer grid points. Reconstruction of flux in the context of multi-dimensional, vector conservation laws including effects of thermochemical nonequilibrium in the Navier-Stokes equations is developed.
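As a concrete point of comparison for the upwind baseline mentioned above, a minimal first-order finite volume solver for the inviscid Burgers' equation using Roe's flux can be sketched as follows. This is a generic illustration of Roe's method, not the paper's EPWS or semi-analytic reconstruction schemes; the grid, initial data, and CFL number are illustrative choices.

```python
import numpy as np

def f(u):                          # Burgers' flux f(u) = u^2 / 2
    return 0.5 * u ** 2

def roe_flux(ul, ur):
    a = 0.5 * (ul + ur)            # Roe-averaged wave speed
    return f(ul) if a >= 0 else f(ur)

def solve_burgers(nx=201, t_end=0.5, cfl=0.4):
    x = np.linspace(-1.0, 1.0, nx)
    dx = x[1] - x[0]
    u = np.where(x < 0.0, 1.0, 0.0)   # right-moving shock, speed 1/2
    t = 0.0
    while t < t_end:
        dt = min(cfl * dx / max(abs(u).max(), 1e-12), t_end - t)
        flux = np.array([roe_flux(u[i], u[i + 1]) for i in range(nx - 1)])
        u[1:-1] = u[1:-1] - dt / dx * (flux[1:] - flux[:-1])
        t += dt
    return x, u

x, u = solve_burgers()
# mid-jump location approximates the exact shock position s*t = 0.25
shock = x[np.argmin(np.abs(u - 0.5))]
```

The exact shock speed for this Riemann problem is (uL + uR)/2 = 0.5, so at t = 0.5 the shock sits near x = 0.25; the first-order scheme recovers this position while smearing the front over a few cells, which is the behavior the abstract's grid-convergence comparison quantifies.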
Storti, Simona; Masotti, Silvia; Prontera, Concetta; Franzini, Maria; Buzzi, Paola; Casagranda, Ivo; Ciofini, Enrica; Zucchelli, Gian Carlo; Ndreu, Rudina; Passino, Claudio; Clerico, Aldo
2015-12-07
The study aims were to evaluate the analytical performance and the clinical results of the chemiluminescent Access AccuTnI+3 immunoassay for the determination of cardiac troponin I (cTnI) on the DxI 800 and Access2 platforms, and to compare the clinical results obtained with this method with those of three cTnI immunoassays recently introduced in the European market. The limit of blank (LoB) and limit of detection (LoD) were 4.5 ng/L and 10.9 ng/L, and the limits of quantitation (LoQ) at 20% CV and 10% CV were 17.1 and 30.4 ng/L, respectively. The results of STAT Architect high Sensitive TnI (Abbott Diagnostics), ADVIA Centaur Troponin I Ultra (Siemens Healthcare Diagnostics), ST AIA-Pack cTnI third generation (Tosoh Bioscience), and Access AccuTnI+3 (Beckman Coulter Diagnostics) showed very close correlations (R ranging from 0.901 to 0.994) in 122 samples of patients admitted to the emergency department. However, on average there was a difference of up to 2.4-fold between the method measuring the highest (ADVIA) and the lowest cTnI values (AccuTnI+3). The differences from the consensus mean values between methods ranged from 6.2% to 29.6% in 18 quality control samples distributed in an external quality control study (cTnI concentrations ranging from 29.3 ng/L to 1557.5 ng/L). In conclusion, the results of our analytical evaluation of the AccuTnI+3 method on the DxI platform are well in agreement with those suggested by the manufacturer as well as those reported by some recent studies using the Access2 platform. Our results confirm that the AccuTnI+3 method on the Access2 and DxI 800 platforms is a clinically usable method for cTnI measurement.
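The pattern described above, high between-method correlation despite a large proportional bias, can be illustrated with simulated paired data. The generating model and all numbers below are hypothetical, chosen only to mimic the reported up-to-2.4-fold spread; this is not the study's data.

```python
import numpy as np

rng = np.random.default_rng(0)

# hypothetical paired cTnI results (ng/L): two assays read the same
# samples with a 2.4-fold proportional bias plus small additive noise
true_ctni = rng.lognormal(mean=3.0, sigma=1.0, size=122)
assay_low = true_ctni + rng.normal(0.0, 2.0, 122)          # low-reading
assay_high = 2.4 * true_ctni + rng.normal(0.0, 2.0, 122)   # high-reading

r = np.corrcoef(assay_low, assay_high)[0, 1]     # correlation stays high
slope, intercept = np.polyfit(assay_low, assay_high, 1)  # slope shows bias
```

Correlation alone hides a calibration offset: the regression slope (near 2.4 here) is what reveals the systematic between-method difference, which is why external quality control comparisons report percentage deviations rather than R alone.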
NASA Astrophysics Data System (ADS)
Wu, Su-Yong; Long, Xing-Wu; Yang, Kai-Yong
2009-09-01
To improve the low speed and poor efficiency of current domestic multilayer optical coating design when the layer number is large, accurate calculation and fast realization of the merit function's gradient and Hessian matrix are addressed. Based on the matrix method for calculating the spectral properties of multilayer optical coatings, an analytic model is established theoretically, and the corresponding accurate and fast computation is achieved by programming in Matlab. Theoretical and simulated results indicate that this model is mathematically strict and accurate, and its precision can reach the floating-point limit of the computer, with short computation time and fast speed. It is therefore well suited to improving the search speed and efficiency of local optimization methods based on the derivatives of the merit function, and it performs outstandingly in multilayer optical coating design with a large layer number.
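The spectral properties entering such a merit function are conventionally computed with the characteristic-matrix method the abstract builds on. A minimal normal-incidence sketch for non-absorbing layers, with an illustrative quarter-wave MgF2 antireflection coating on glass (the indices and wavelength are textbook assumptions, not the paper's designs):

```python
import numpy as np

def reflectance(layers, n_inc, n_sub, wavelength):
    """Normal-incidence reflectance of a non-absorbing multilayer via the
    characteristic (transfer) matrix; layers = [(index, thickness), ...]."""
    M = np.eye(2, dtype=complex)
    for n, d in layers:
        delta = 2 * np.pi * n * d / wavelength       # phase thickness
        M = M @ np.array([[np.cos(delta), 1j * np.sin(delta) / n],
                          [1j * n * np.sin(delta), np.cos(delta)]])
    B, C = M @ np.array([1.0, n_sub])
    r = (n_inc * B - C) / (n_inc * B + C)            # amplitude reflectance
    return abs(r) ** 2

lam0 = 550e-9                                        # design wavelength, m
coating = [(1.38, lam0 / (4 * 1.38))]                # quarter-wave MgF2
R_coated = reflectance(coating, 1.0, 1.52, lam0)     # ~1.3 % on glass
R_bare = reflectance([], 1.0, 1.52, lam0)            # ~4.3 % bare glass
```

Each layer contributes one 2x2 matrix, so evaluating the merit function scales linearly in the layer count; the abstract's point is that the gradient and Hessian of this quantity can likewise be obtained analytically rather than by repeated finite differencing.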
Accurate Guitar Tuning by Cochlear Implant Musicians
Lu, Thomas; Huang, Juan; Zeng, Fan-Gang
2014-01-01
Modern cochlear implant (CI) users understand speech but find difficulty in music appreciation due to poor pitch perception. Still, some deaf musicians continue to perform with their CI. Here we show unexpected results that CI musicians can reliably tune a guitar by CI alone and, under controlled conditions, match simultaneously presented tones to <0.5 Hz. One subject had normal contralateral hearing and produced more accurate tuning with CI than his normal ear. To understand these counterintuitive findings, we presented tones sequentially and found that tuning error was larger at ∼30 Hz for both subjects. A third subject, a non-musician CI user with normal contralateral hearing, showed similar trends in performance between CI and normal hearing ears but with less precision. This difference, along with electric analysis, showed that accurate tuning was achieved by listening to beats rather than discriminating pitch, effectively turning a spectral task into a temporal discrimination task. PMID:24651081
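The beat cue described above is easy to demonstrate numerically: two tones a few hertz apart produce an amplitude envelope at their frequency difference, which a square-law (envelope) detector converts into a low-frequency spectral line. The tone frequencies and duration below are illustrative, not the study's stimuli.

```python
import numpy as np

fs = 8000                              # sample rate, Hz
t = np.arange(0, 4.0, 1.0 / fs)        # 4 s -> 0.25 Hz FFT resolution
f_ref, f_string = 440.0, 442.0         # reference tone vs slightly sharp string
s = np.sin(2 * np.pi * f_ref * t) + np.sin(2 * np.pi * f_string * t)

# Squaring the mixture moves the beat to a spectral line at
# |f_ref - f_string|, turning a pitch comparison into a temporal cue.
env = s ** 2 - np.mean(s ** 2)
spec = np.abs(np.fft.rfft(env))
freqs = np.fft.rfftfreq(len(env), 1.0 / fs)
low = freqs < 50.0                     # search only the beat-frequency region
beat = freqs[low][np.argmax(spec[low])]
```

The recovered beat frequency (2 Hz here) requires no spectral resolution near 440 Hz at all, which is consistent with the finding that CI users can tune accurately from beats despite poor pitch discrimination.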
Modern Analytical Chemistry in the Contemporary World
ERIC Educational Resources Information Center
Šíma, Jan
2016-01-01
Students not familiar with chemistry tend to misinterpret analytical chemistry as some kind of sorcery in which analytical chemists, working as modern wizards, handle magical black boxes able to provide fascinating results. However, this approach is evidently improper and misleading. Therefore, the position of modern analytical chemistry among…
Application and Evaluation of Analytic Gaming
Riensche, Roderick M.; Martucci, Louis M.; Scholtz, Jean; Whiting, Mark A.
2009-08-31
We describe an "analytic gaming" framework and methodology, and introduce formal methods for evaluation of the analytic gaming process. This process involves conception, development, and playing of games that are informed by predictive models and driven by players. Evaluation of analytic gaming examines both the process of game development and the results of game play exercises.
Analytical models of steady-state plumes undergoing sequential first-order degradation.
Burnell, Daniel K; Mercer, James W; Sims, Lawrence S
2012-01-01
An exact, closed-form analytical solution is derived for one-dimensional (1D), coupled, steady-state advection-dispersion equations with sequential first-order degradation of three dissolved species in groundwater. Dimensionless and mathematical analyses are used to examine the sensitivity of the parent and daughter analytical solutions to longitudinal dispersivity. The results indicate that the relative error decreases to less than 15% for the 1D advection-dominated and advection-dispersion analytical solutions of the parent and daughter when the Damköhler number of the parent decreases to less than 1 (slow degradation rate) and the Peclet number increases to greater than 6 (advection-dominated). To estimate first-order daughter product rate constants in advection-dominated zones, 1D, two-dimensional (2D), and three-dimensional (3D) steady-state analytical solutions with zero longitudinal dispersivity are also derived for three first-order sequentially degrading compounds. The closed form of these exact analytical solutions has the advantage of having (1) no numerical integration or evaluation of complex-valued error function arguments, (2) computational efficiency compared to problems with long times to reach steady state, and (3) minimal effort for incorporation into spreadsheets. These multispecies analytical solutions indicate that BIOCHLOR produces accurate results for 1D steady-state applications with longitudinal dispersion. Although BIOCHLOR is inaccurate in multidimensional applications with longitudinal dispersion, these multidimensional multispecies analytical solutions indicate that BIOCHLOR produces accurate steady-state results when the longitudinal dispersion is zero. As an application, the 1D advection-dominated analytical solution is applied to estimate field-scale rate constants of 0.81, 0.74, and 0.69/year for trichloroethene, cis-1,2-dichloroethene, and vinyl chloride, respectively, at the Harris Palm Bay, FL, CERCLA site.
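For the zero-dispersivity case, the steady-state solutions for a three-member chain reduce to Bateman-type expressions in the travel time x/v. The sketch below is that textbook closed form (parent-only source, distinct rate constants), using the field-scale rates quoted in the abstract; the velocity, source concentration, and grid are illustrative assumptions, and this is not BIOCHLOR's implementation.

```python
import numpy as np

def three_species_steady(x, v, c10, k1, k2, k3):
    """Closed-form steady-state concentrations along x for a 1D,
    advection-only (zero dispersivity) chain c1 -> c2 -> c3 with
    distinct first-order rates and a parent-only source c10."""
    tau = x / v                                  # travel time
    c1 = c10 * np.exp(-k1 * tau)
    c2 = c10 * k1 / (k2 - k1) * (np.exp(-k1 * tau) - np.exp(-k2 * tau))
    c3 = c10 * k1 * k2 * (
        np.exp(-k1 * tau) / ((k2 - k1) * (k3 - k1))
        + np.exp(-k2 * tau) / ((k1 - k2) * (k3 - k2))
        + np.exp(-k3 * tau) / ((k1 - k3) * (k2 - k3)))
    return c1, c2, c3

# rates from the abstract (TCE, cis-DCE, VC), 1/yr; v in m/yr is assumed
x = np.linspace(0.0, 500.0, 5001)                # m
c1, c2, c3 = three_species_steady(x, v=100.0, c10=1.0,
                                  k1=0.81, k2=0.74, k3=0.69)
```

Each daughter satisfies v dc/dx = k_parent c_parent - k c, which is the governing balance the closed form solves exactly; this is the spreadsheet-friendly property the abstract emphasizes.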
Jakubowska, Natalia; Beldì, Giorgia; Peychès Bach, Aurélie; Simoneau, Catherine
2014-01-01
This paper presents the outcome of the development, optimisation and validation at European Union level of an analytical method for using poly(2,6-diphenyl phenylene oxide--PPPO), which is stipulated in Regulation (EU) No. 10/2011, as food simulant E for testing specific migration from plastics into dry foodstuffs. Two methods for fortifying respectively PPPO and a low-density polyethylene (LDPE) film with surrogate substances that are relevant to food contact were developed. A protocol for cleaning the PPPO and an efficient analytical method were developed for the quantification of butylhydroxytoluene (BHT), benzophenone (BP), diisobutylphthalate (DiBP), bis(2-ethylhexyl) adipate (DEHA) and 1,2-cyclohexanedicarboxylic acid, diisononyl ester (DINCH) from PPPO. A protocol for a migration test from plastics using small migration cells was also developed. The method was validated by an inter-laboratory comparison (ILC) with 16 national reference laboratories for food contact materials in the European Union. This allowed for the first time data to be obtained on the precision and laboratory performance of both migration and quantification. The results showed that the validation ILC was successful even when taking into account the complexity of the exercise. The results showed that the method performance was 7-9% repeatability standard deviation (rSD) for most substances (regardless of concentration), with 12% rSD for the high level of BHT and for DiBP at very low levels. The reproducibility standard deviation results for the 16 European Union laboratories were in the range of 20-30% for the quantification from PPPO (for the three levels of concentrations of the five substances) and 15-40% from migration experiments from the fortified plastic at 60°C for 10 days and subsequent quantification. Considering the lack of data previously available in the literature, this work has demonstrated that the validation of a method is possible both for migration from a film and for
Temporal Learning Analytics for Adaptive Assessment
ERIC Educational Resources Information Center
Papamitsiou, Zacharoula; Economides, Anastasios A.
2014-01-01
Accurate and early predictions of student performance could significantly affect interventions during teaching and assessment, which gradually could lead to improved learning outcomes. In our research, we seek to identify and formalize temporal parameters as predictors of performance ("temporal learning analytics" or TLA) and examine…
Monsanto analytical testing program for NPDES discharge self-monitoring
Hoogheem, T.J.; Woods, L.A.
1985-06-01
The Monsanto Analytical Testing (MAT) program was devised and implemented in order to provide analytical standards to Monsanto manufacturing plants involved in the self-monitoring of plant discharges as required by National Pollutant Discharge Elimination System (NPDES) permit conditions. Standards are prepared and supplied at concentration levels normally observed at each individual plant. These levels were established by canvassing all Monsanto plants having NPDES permits and by determining which analyses and concentrations were most appropriate. Standards are prepared by Monsanto's Environmental Sciences Center (ESC) using Environmental Protection Agency (EPA) methods. Eleven standards are currently available, each in three concentrations. Standards are issued quarterly in a company internal round-robin program or on a per request basis or both. Since initiation of the MAT program in 1981, the internal round-robin program has become an integral part of Monsanto's overall Good Laboratory Practices (GLP) program. Overall, results have shown that the company's plant analytical personnel can accurately analyze and report standard test samples. More importantly, such personnel have gained increased confidence in their ability to report accurate values for compounds regulated in their respective plant NPDES permits. 3 references, 3 tables.
Extremely accurate sequential verification of RELAP5-3D
Mesina, George L.; Aumiller, David L.; Buschman, Francis X.
2015-11-19
Large computer programs like RELAP5-3D solve complex systems of governing, closure and special process equations to model the underlying physics of nuclear power plants. Further, these programs incorporate many other features for physics, input, output, data management, user interaction, and post-processing. For software quality assurance, the code must be verified and validated before being released to users. For RELAP5-3D, verification and validation are restricted to nuclear power plant applications. Verification means ensuring that the program is built right by checking that it meets its design specifications, comparing coding to algorithms and equations, and comparing calculations against analytical solutions and the method of manufactured solutions. Sequential verification performs these comparisons initially, but thereafter only compares code calculations between consecutive code versions to demonstrate that no unintended changes have been introduced. Recently, an automated, highly accurate sequential verification method has been developed for RELAP5-3D. The method also provides tests to confirm that no unintended consequences result from code development in the following code capabilities: repeating a timestep advancement, continuing a run from a restart file, running multiple cases in a single code execution, and modes of coupled/uncoupled operation. In conclusion, mathematical analyses of the adequacy of the checks used in the comparisons are provided.
Raman Spectroscopy as an Accurate Probe of Defects in Graphene
NASA Astrophysics Data System (ADS)
Rodriguez-Nieva, Joaquin; Barros, Eduardo; Saito, Riichiro; Dresselhaus, Mildred
2014-03-01
Raman spectroscopy has proved to be an invaluable non-destructive technique that allows us to obtain intrinsic information about graphene. Furthermore, defect-induced Raman features, namely the D and D' bands, have previously been used to assess the purity of graphitic samples. However, quantitative study of the signatures of the different types of defects on the Raman spectra is still an open problem. Experimental results already suggest that the Raman intensity ratio ID/ID' may allow us to identify the nature of the defects. We study from a theoretical point of view the power and limitations of Raman spectroscopy in the study of defects in graphene. We derive an analytic model that describes the double resonance Raman process in disordered graphene samples and explicitly shows the roles played by both the defect-dependent parameters and the experimentally controlled variables. We compare our model with previous Raman experiments and use it to guide new ways in which defects in graphene can be accurately probed with Raman spectroscopy. We acknowledge support from NSF grant DMR1004147.
Thermodynamics of Gas Turbine Cycles with Analytic Derivatives in OpenMDAO
NASA Technical Reports Server (NTRS)
Gray, Justin; Chin, Jeffrey; Hearn, Tristan; Hendricks, Eric; Lavelle, Thomas; Martins, Joaquim R. R. A.
2016-01-01
A new equilibrium thermodynamics analysis tool was built based on the CEA method using the OpenMDAO framework. The new tool provides forward and adjoint analytic derivatives for use with gradient-based optimization algorithms. The new tool was validated against the original CEA code to ensure an accurate analysis, and the analytic derivatives were validated against finite-difference approximations. Performance comparisons between analytic and finite-difference methods showed a significant speed advantage for the analytic methods. To further test the new analysis tool, a sample optimization was performed to find the optimal air-fuel equivalence ratio maximizing combustion temperature for a range of different pressures. Collectively, the results demonstrate the viability of the new tool to serve as the thermodynamic backbone for future work on a full propulsion modeling tool.
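Speed aside, the accuracy argument for analytic derivatives can be seen on a toy function: a forward finite difference trades truncation error against round-off as the step shrinks, while the analytic derivative suffers from neither. The function below is a stand-in for illustration, not the CEA thermodynamics.

```python
import numpy as np

def f(x):
    # stand-in analysis output; in the real tool this would be the CEA solve
    return np.exp(x) * np.sin(x)

def df_analytic(x):
    # hand-derived exact derivative of the stand-in function
    return np.exp(x) * (np.sin(x) + np.cos(x))

x0 = 1.3

def fd(h):
    # forward finite difference: O(h) truncation error, O(eps/h) round-off
    return (f(x0 + h) - f(x0)) / h

errs = {h: abs(fd(h) - df_analytic(x0)) for h in (1e-2, 1e-6, 1e-10)}
```

The finite-difference error first shrinks with h and then grows again as floating-point cancellation dominates, so there is no universally safe step size; an analytic (or adjoint) derivative avoids this tuning problem entirely, which is the validation point the abstract makes.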
Multimedia Analysis plus Visual Analytics = Multimedia Analytics
Chinchor, Nancy; Thomas, James J.; Wong, Pak C.; Christel, Michael; Ribarsky, Martin W.
2010-10-01
Multimedia analysis has focused on images, video, and to some extent audio and has made progress in single channels excluding text. Visual analytics has focused on the user interaction with data during the analytic process plus the fundamental mathematics and has continued to treat text as did its precursor, information visualization. The general problem we address in this tutorial is the combining of multimedia analysis and visual analytics to deal with multimedia information gathered from different sources, with different goals or objectives, and containing all media types and combinations in common usage.
Analytical instrument qualification in capillary electrophoresis.
Cianciulli, Claudia; Wätzig, Hermann
2012-06-01
Capillary electrophoresis (CE) is a well-established and frequently used technique in the pharmaceutical industry. Therefore an appropriate analytical instrument qualification (AIQ) is required for quality assurance. AIQ forms the basis of quality management, followed by analytical method validation, system suitability tests (SSTs) and quality control checks. Two parts of the AIQ, namely the operational qualification (OQ) and the performance qualification (PQ), are of particular interest in the daily routine of the laboratory. A new concept for OQ and PQ was developed to assure the correct function of a CE system. The significance of each parameter, possible test methods and acceptance criteria are presented and discussed in detail. In particular, the temperature adjustment by the cooling system and the voltage supply must be tested for accurate and precise operation. The detector noise, wavelength accuracy and detector linearity have to be checked as well. Finally, the injection linearity, accuracy and precision need to be qualified. The proposed set of qualification procedures is easy to implement and was already tested on five CE instruments from three different manufacturers. A time- and cost-saving continuous PQ was derived, using results from method-specific SSTs and some additional experiments. This holistic concept continuously surveys the most relevant parameters, hence assuring the suitability of the instruments used and decreasing their downtimes.
Analytic model of electromagnetic fields around a plasma bubble in the blow-out regime
Yi, S. A.; Khudik, V.; Siemon, C.; Shvets, G.
2013-01-15
An analytic model of the electric and magnetic fields surrounding the nonlinear plasma 'bubble' formed around the high-current electron bunch in a plasma wakefield accelerator is developed. The model, justified by the results of particle-in-cell simulations, accurately captures the thin high-density plasma sheath and extended return current layer surrounding the bubble. The resulting global fields inside and outside the bubble are used to investigate electron self-injection in a plasma with a smooth density gradient. It is shown that accurate description of the current/density sheaths is crucial for quantitative description of self-injection.
NASA Technical Reports Server (NTRS)
Graves, R. A., Jr.
1975-01-01
The previously obtained second-order-accurate partial implicitization numerical technique used in the solution of fluid dynamic problems was modified with little complication to achieve fourth-order accuracy. The Von Neumann stability analysis demonstrated the unconditional linear stability of the technique. The order of the truncation error was deduced from the Taylor series expansions of the linearized difference equations and was verified by numerical solutions to Burgers' equation. For comparison, results were also obtained for Burgers' equation using a second-order-accurate partial-implicitization scheme, as well as the fourth-order scheme of Kreiss.
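The truncation-order verification described above is commonly done by measuring the observed order of accuracy under grid halving, p = log2(e_h / e_{h/2}). A sketch using generic second- and fourth-order central differences on a smooth test function (not the partial-implicitization scheme itself):

```python
import numpy as np

f, df = np.sin, np.cos              # test function with known derivative

def d2(g, x, h):                    # second-order central difference
    return (g(x + h) - g(x - h)) / (2 * h)

def d4(g, x, h):                    # fourth-order central difference
    return (-g(x + 2 * h) + 8 * g(x + h)
            - 8 * g(x - h) + g(x - 2 * h)) / (12 * h)

def observed_order(scheme, x=1.0, h=0.1):
    e1 = abs(scheme(f, x, h) - df(x))       # error at step h
    e2 = abs(scheme(f, x, h / 2) - df(x))   # error at step h/2
    return np.log2(e1 / e2)                 # error ratio -> observed order

p2 = observed_order(d2)             # ~2 for the second-order stencil
p4 = observed_order(d4)             # ~4 for the fourth-order stencil
```

Halving the step should divide the error by 2^p for a p-th-order scheme, so the measured exponent confirms (or refutes) the order deduced from the Taylor-series truncation analysis.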
Analytical Challenges in Biotechnology.
ERIC Educational Resources Information Center
Glajch, Joseph L.
1986-01-01
Highlights five major analytical areas (electrophoresis, immunoassay, chromatographic separations, protein and DNA sequencing, and molecular structures determination) and discusses how analytical chemistry could further improve these techniques and thereby have a major impact on biotechnology. (JN)
NASA Astrophysics Data System (ADS)
Zhao, Xiao-mei; Xie, Dong-fan; Li, Qi
2015-02-01
With the development of intelligent transport systems, advanced information feedback strategies have been developed to reduce traffic congestion and enhance capacity. Previous strategies provide accurate information to travelers, yet our simulation results show that accurate information can bring negative effects, especially when the information is delayed. Travelers prefer the route reported to be in the best condition, but delayed information reflects past rather than current traffic conditions, so travelers make wrong routing decisions, decreasing the capacity, increasing oscillations, and driving the system away from equilibrium. To avoid these negative effects, bounded rationality is taken into account by introducing a boundedly rational threshold BR: when the difference between the two routes is less than BR, the routes are chosen with equal probability. Bounded rationality is helpful to improve efficiency in terms of capacity, oscillation, and the gap from the system equilibrium.
Approximate analytic solutions to the NPDD: Short exposure approximations
NASA Astrophysics Data System (ADS)
Close, Ciara E.; Sheridan, John T.
2014-04-01
There have been many attempts to accurately describe the photochemical processes that take place in photopolymer materials. As the models have become more accurate, solving them has become more numerically intensive and more 'opaque'. Recent models incorporate the major photochemical reactions taking place as well as the diffusion effects resulting from the photo-polymerisation process, and have accurately described these processes in a number of different materials. It is our aim to develop accessible mathematical expressions which provide physical insights and simple quantitative predictions of practical value to material designers and users. In this paper, starting with the Non-Local Photo-Polymerisation Driven Diffusion (NPDD) model coupled integro-differential equations, we first simplify these equations and validate the accuracy of the resulting approximate model. This new set of governing equations is then used to produce accurate analytic solutions (polynomials) describing the evolution of the monomer and polymer concentrations, and the grating refractive index modulation, in the case of short low-intensity sinusoidal exposures. The physical significance of the results and their consequences for holographic data storage (HDS) are then discussed.
Analytical prediction of digital signal crosstalk of FCC
NASA Technical Reports Server (NTRS)
Belleisle, A. P.
1972-01-01
The results are presented of a study effort whose aim was the development of accurate means of analyzing and predicting signal crosstalk in multi-wire digital data cables. A complete analytical model is developed for n + 1 wire systems of uniform transmission lines with arbitrary linear boundary conditions. In addition, the minimum set of parameter measurements required for application of the model is presented. Comparisons between crosstalk predicted by this model and actual measured crosstalk are shown for a six-conductor ribbon cable.
Analytical models of optical refraction in the troposphere.
Nener, Brett D; Fowkes, Neville; Borredon, Laurent
2003-05-01
An extremely accurate but simple asymptotic description (with known error) is obtained for the path of a ray propagating over a curved Earth with radial variations in refractive index. The result is sufficiently simple that analytic solutions for the path can be obtained for linear and quadratic index profiles. As well as rendering the inverse problem trivial for these profiles, this formulation shows that images are uniformly magnified in the vertical direction when viewed through a quadratic refractive-index profile. Nonuniform vertical distortions occur for higher-order refractive-index profiles.
[Application of analytical pyrolysis in air pollution control for green sand casting industry].
Wang, Yu-jue; Zhao, Qi; Chen, Ying; Wang, Cheng-wen
2010-02-01
Analytical pyrolysis was conducted to simulate the heating conditions that the raw materials of green sand experience during the metal casting process. The volatile organic compound (VOC) and hazardous air pollutant (HAP) emissions from analytical pyrolysis were analyzed by gas chromatography-flame ionization detection/mass spectrometry (GC-FID/MS). The emissions from analytical pyrolysis exhibited some similarity in composition and distribution to those from actual casting processes. The major components of the emissions included benzene, toluene and phenol. The relative changes in emission levels observed in analytical pyrolysis of the various raw materials also showed trends similar to those observed in actual metal casting processes. The emission testing results of both analytical pyrolysis and pre-production foundry trials have shown that, compared to the conventional phenolic urethane binder, the new non-naphthalene phenolic urethane binder diminished more than 50% of polycyclic aromatic hydrocarbon emissions, and the protein-based binder diminished more than 90% of HAP emissions. The similar trends in the two sets of tests offer promise that analytical pyrolysis techniques could be a fast and accurate way to establish emission inventories and to evaluate the relative emission levels of various raw materials in the casting industry. The results of analytical pyrolysis could provide useful guidance for foundries in selecting and developing proper clean raw materials for casting production.
Analyticity without Differentiability
ERIC Educational Resources Information Center
Kirillova, Evgenia; Spindler, Karlheinz
2008-01-01
In this article we derive all salient properties of analytic functions, including the analytic version of the inverse function theorem, using only the most elementary convergence properties of series. Not even the notion of differentiability is required to do so. Instead, analytical arguments are replaced by combinatorial arguments exhibiting…
Analytic orbit propagation of planets in binary star systems
NASA Astrophysics Data System (ADS)
Eggl, Siegfried; Georgakarakos, Nikolaos
2015-08-01
We present an analytical framework that accurately describes the motion of co-planar planets in binary star systems on orbital as well as secular timescales. The method builds upon analytic solutions of the differential equations governing the behavior of the system's perturbed Laplace-Runge-Lenz vectors. Multiple time-scale analysis is used to derive the short-period evolution of the system, while octupole secular theory is applied to describe its long-term behavior. A post-Newtonian correction on the stellar orbit is included for circumbinary planets. Our model is tested against results from numerical integrations of the full equations of motion. An application to circumbinary planetary systems discovered by NASA's Kepler satellite reveals that the formation history of the systems Kepler-34 and Kepler-413 has most likely been different from that of Kepler-16, Kepler-35, Kepler-38 and Kepler-64, as the former systems are not compatible with the assumption of almost circular initial planetary orbits.
Analytical evaluation of chromatic dispersion in photonic crystal fibers.
Silvestre, Enrique; Pinheiro-Ortega, Teresa; Andrés, Pedro; Miret, Juan J; Ortigosa-Blanch, Arturo
2005-03-01
We present a two-dimensional modal approach for the evaluation, in an analytical manner, of chromatic dispersion in any kind of optical fiber. It combines an iterative Fourier technique to compute the propagation constant at any fixed wavelength and an analytical procedure to calculate its derivatives. The proposed formulation takes into account the effective anisotropy of the interfaces and allows us to deal with microstructured fibers, in general, and specifically with realistic photonic crystal fibers (PCFs), including arbitrary spatial refractive-index distributions of dispersive and absorbing materials. This fast and accurate numerical technique is extremely useful for both analysis and design. We show some results of analysis of PCFs with high anisotropy, and we also describe PCFs with new dispersive properties.
NASA Astrophysics Data System (ADS)
Schnase, J. L.; Duffy, D. Q.; McInerney, M. A.; Tamkin, G. S.; Thompson, J. H.; Gill, R.; Grieg, C. M.
2012-12-01
MERRA Analytic Services (MERRA/AS) is a cyberinfrastructure resource for developing and evaluating a new generation of climate data analysis capabilities. MERRA/AS supports OBS4MIP activities by reducing the time spent in the preparation of Modern Era Retrospective-Analysis for Research and Applications (MERRA) data used in data-model intercomparison. It also provides a testbed for experimental development of high-performance analytics. MERRA/AS is a cloud-based service built around the Virtual Climate Data Server (vCDS) technology that is currently used by the NASA Center for Climate Simulation (NCCS) to deliver Intergovernmental Panel on Climate Change (IPCC) data to the Earth System Grid Federation (ESGF). Crucial to its effectiveness, MERRA/AS's servers will use a workflow-generated realizable object capability to perform analyses over the MERRA data using the MapReduce approach to parallel storage-based computation. The results produced by these operations will be stored by the vCDS, which will also be able to host code sets for those who wish to explore the use of MapReduce for more advanced analytics. While the work described here focuses on the MERRA collection, these technologies can be used to publish other reanalysis, observational, and ancillary OBS4MIP data to ESGF and, importantly, offer an architectural approach to climate data services that can be generalized to applications and customers beyond the traditional climate research community. In this presentation, we describe our approach, experiences, lessons learned, and plans for the future.
(Figure: (A) MERRA/AS software stack. (B) Example MERRA/AS interfaces.)
The analytical validation of the Oncotype DX Recurrence Score assay
Baehner, Frederick L
2016-01-01
In vitro diagnostic multivariate index assays are highly complex molecular assays that can provide clinically actionable information regarding the underlying tumour biology and facilitate personalised treatment. These assays are only useful in clinical practice if all of the following are established: analytical validation (i.e., how accurately/reliably the assay measures the molecular characteristics), clinical validation (i.e., how consistently/accurately the test detects/predicts the outcomes of interest), and clinical utility (i.e., how likely the test is to significantly improve patient outcomes). In considering the use of these assays, clinicians often focus primarily on the clinical validity/utility; however, the analytical validity of an assay (e.g., its accuracy, reproducibility, and standardisation) should also be evaluated and carefully considered. This review focuses on the rigorous analytical validation and performance of the Oncotype DX® Breast Cancer Assay, which is performed at the Central Clinical Reference Laboratory of Genomic Health, Inc. The assay process includes tumour tissue enrichment (if needed), RNA extraction, gene expression quantitation (using a gene panel consisting of 16 cancer genes plus 5 reference genes and quantitative real-time RT-PCR), and an automated computer algorithm to produce a Recurrence Score® result (scale: 0–100). This review presents evidence showing that the Recurrence Score result reported for each patient falls within a tight clinically relevant confidence interval. Specifically, the review discusses how the development of the assay was designed to optimise assay performance, presents data supporting its analytical validity, and describes the quality control and assurance programmes that ensure optimal test performance over time. PMID:27729940
Joint iris boundary detection and fit: a real-time method for accurate pupil tracking.
Barbosa, Marconi; James, Andrew C
2014-08-01
A range of applications in visual science rely on accurate tracking of the human pupil's movement and contraction in response to light. While the literature on independent contour detection and fitting of the iris-pupil boundary is vast, a joint approach, in which the pupil is assumed to have a given geometric shape, has been largely overlooked. We present here a global method for simultaneously finding and fitting an elliptic or circular contour against a dark interior, which produces consistently accurate results even under non-ideal recording conditions, such as reflections near and over the boundary, droopy eyelids, or the sudden formation of tears. The specific form of the proposed optimization problem allows us to write down closed analytic formulae for the gradient and the Hessian of the objective function. Moreover, both the objective function and its derivatives can be cast into vectorized form, making the proposed algorithm significantly faster than its closest relative in the literature. We compare methods in multiple ways, both analytically and numerically, using real iris images as well as idealizations of the iris for which the ground truth boundary is precisely known. The method proposed here is illustrated under challenging recording conditions and is shown to be robust.
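The contour-fitting step can be illustrated with a much simpler algebraic circle fit. This is a hedged sketch under our own assumptions, not the authors' joint optimization (which fits the contour against a dark interior using analytic gradients and Hessians); the function name and approach (a Kåsa-style least-squares fit) are ours:

```python
import numpy as np

def fit_circle(x, y):
    """Algebraic (Kasa) least-squares circle fit to boundary points.

    Solves x^2 + y^2 = 2*a*x + 2*b*y + c in the least-squares sense;
    the center is (a, b) and the radius is sqrt(c + a^2 + b^2).
    A toy stand-in for a joint elliptic/circular contour fit.
    """
    A = np.column_stack((2 * x, 2 * y, np.ones_like(x)))
    a, b, c = np.linalg.lstsq(A, x**2 + y**2, rcond=None)[0]
    return np.array([a, b]), np.sqrt(c + a**2 + b**2)
```

On noise-free points the fit is exact; with noisy pupil boundaries it gives a fast initial estimate that a geometric refinement could polish.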
Analytical derivation of the radial distribution function in spherical dark matter halos
NASA Astrophysics Data System (ADS)
Eilersen, Andreas; Hansen, Steen H.; Zhang, Xingyu
2017-01-01
The velocity distribution of dark matter near the Earth is important for an accurate analysis of the signals in terrestrial detectors. This distribution is typically extracted from numerical simulations. Here we address the possibility of deriving the velocity distribution function analytically. We derive a differential equation which is a function of radius and the radial component of the velocity. Under various assumptions this can be solved, and we compare the solution with the results from controlled numerical simulations. Our findings complement the previously derived tangential velocity distribution. We hereby demonstrate that the entire distribution function, below ~0.7 v_esc, can be derived analytically for spherical and equilibrated dark matter structures.
A New Analytic Alignment Method for a SINS.
Tan, Caiming; Zhu, Xinhua; Su, Yan; Wang, Yu; Wu, Zhiqiang; Gu, Dongbing
2015-11-04
Analytic alignment is a type of self-alignment for a strapdown inertial navigation system (SINS) that is based solely on two non-collinear vectors: the gravity and Earth-rotation-rate vectors at a stationary base on the ground. The attitude of the SINS with respect to the Earth can be obtained directly using the TRIAD algorithm, given the two vector measurements. In traditional analytic coarse alignment, all six outputs from the inertial measurement unit (IMU) are used to compute the attitude. In this study, a novel analytic alignment method called selective alignment is presented. This method uses only three outputs of the IMU, plus a few properties of the remaining outputs such as their sign and approximate value, to calculate the attitude. Simulations and experimental results demonstrate the validity of this method; in the vehicle experiment, the precision of yaw is improved with selective alignment compared to traditional analytic coarse alignment. The selective alignment principle provides an accurate relationship between the outputs and the attitude of the SINS relative to the Earth for a stationary base, and it is an extension of the TRIAD algorithm. The selective alignment approach has potential uses in applications such as self-alignment, fault detection, and self-calibration.
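The TRIAD step this work builds on can be sketched directly: two non-collinear vectors known in both the body and navigation frames define two orthonormal triads, and the attitude matrix maps one onto the other. A minimal sketch (function and variable names are ours, not from the paper):

```python
import numpy as np

def triad(v1_b, v2_b, v1_n, v2_n):
    """TRIAD attitude determination.

    v1_b, v2_b: two non-collinear vectors measured in the body frame
    (e.g., gravity and the Earth rotation rate from the IMU).
    v1_n, v2_n: the same vectors expressed in the navigation frame.
    Returns the rotation matrix R such that v_n = R @ v_b.
    """
    def frame(v1, v2):
        t1 = v1 / np.linalg.norm(v1)          # first axis along v1
        t2 = np.cross(v1, v2)                 # second axis normal to the plane
        t2 /= np.linalg.norm(t2)
        t3 = np.cross(t1, t2)                 # completes the right-handed triad
        return np.column_stack((t1, t2, t3))
    return frame(v1_n, v2_n) @ frame(v1_b, v2_b).T
```

With noise-free measurements the reconstruction is exact; with noisy IMU outputs the result favors the first (more accurate) vector, which is why gravity is usually chosen as v1.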
Accurate measurement of unsteady state fluid temperature
NASA Astrophysics Data System (ADS)
Jaremkiewicz, Magdalena
2017-03-01
In this paper, two accurate methods for determining transient fluid temperature are presented. Measurements were conducted in boiling water, since its temperature is known. Initially the thermometers are at ambient temperature; they are then suddenly immersed in saturated water. The measurements were carried out with two thermometers of different construction but with the same housing outer diameter of 15 mm. One of them is a commercially available K-type industrial thermometer. The temperature indicated by this thermometer was corrected by treating the thermometer as a first- or second-order inertia device. A new thermometer design was also proposed and used to measure the temperature of boiling water. Its characteristic feature is a cylinder-shaped housing with a sheathed thermocouple located at its center. The fluid temperature was determined from measurements taken at the axis of the solid cylindrical housing using the inverse space-marching method. Measurements of the transient temperature of air flowing through a wind tunnel were also carried out with the same thermometers. The proposed measurement technique provides more accurate results than industrial thermometers combined with a simple first- or second-order inertia correction. The comparison demonstrated that the new thermometer yields the fluid temperature much faster and with higher accuracy than the industrial thermometer. Accurate measurement of rapidly changing fluid temperatures is possible thanks to the low-inertia thermometer and the fast space-marching method applied to solve the inverse heat conduction problem.
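The first-order inertia correction mentioned above has a simple form: the fluid temperature is approximately the indicated temperature plus the time constant times its rate of change. A minimal sketch, assuming the time constant tau is known from a step-response test (the function name is ours):

```python
import numpy as np

def correct_first_order(t, T_ind, tau):
    """First-order inertia correction for a thermometer:
    T_fluid ~= T_indicated + tau * dT_indicated/dt.

    t     : sample times [s]
    T_ind : indicated temperatures [deg C]
    tau   : thermometer time constant [s]
    """
    return T_ind + tau * np.gradient(T_ind, t)
```

For a step change the indicated reading approaches the fluid temperature exponentially, while the corrected signal recovers it almost immediately; noise amplification by the derivative is the practical limitation that motivates the paper's more elaborate inverse method.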
Rapid Non-Linear Uncertainty Propagation via Analytical Techniques
NASA Astrophysics Data System (ADS)
Fujimoto, K.; Scheeres, D. J.
2012-09-01
Space situational awareness (SSA) is known to be a data-starved problem compared to traditional estimation problems, in that observation gaps per object may span days if not weeks. Therefore, consistent characterization of the uncertainty associated with these objects, including non-linear effects, is crucial in maintaining an accurate catalog of objects in Earth orbit. At the same time, the motion of satellites in Earth orbit is well-modeled and particularly amenable to having its solution and its uncertainty described through analytic or semi-analytic techniques. Even when stronger non-gravitational perturbations such as solar radiation pressure and atmospheric drag are encountered, these perturbations generally have deterministic components that are substantially larger than their time-varying stochastic components. Analytic techniques are powerful because time propagation is only a matter of changing the time parameter, allowing for rapid computational turnaround. These two ideas are combined in this paper: a method of analytically propagating non-linear orbit uncertainties is discussed. In particular, the uncertainty is expressed as an analytic probability density function (pdf) for all time. For a deterministic system model, such pdfs may be obtained if the initial pdf and the system states for all time are also given analytically. Even when closed-form solutions are not available, approximate solutions exist in the form of Edgeworth series for pdfs and Taylor series for the states. The coefficients of the latter expansion are referred to as state transition tensors (STTs), which are a generalization of state transition matrices to arbitrary order. Analytically expressed pdfs can be incorporated in many practical tasks in SSA. One can compute the mean and covariance of the uncertainty, for example, with the moments of the initial pdf as inputs. This process does not involve any sampling, and its accuracy can be determined a priori.
NASA Astrophysics Data System (ADS)
Papp, P.; Matejčík, Š.; Mach, P.; Urban, J.; Paidarová, I.; Horáček, J.
2013-06-01
The method of analytic continuation in the coupling constant (ACCC) in combination with use of the statistical Padé approximation is applied to the determination of resonance energy and width of some amino acids and formic acid dimer. Standard quantum chemistry codes provide accurate data which can be used for analytic continuation in the coupling constant to obtain the resonance energy and width of organic molecules with a good accuracy. The obtained results are compared with the existing experimental ones.
The analytic renormalization group
NASA Astrophysics Data System (ADS)
Ferrari, Frank
2016-08-01
Finite temperature Euclidean two-point functions in quantum mechanics or quantum field theory are characterized by a discrete set of Fourier coefficients Gk, k ∈ Z, associated with the Matsubara frequencies νk = 2 πk / β. We show that analyticity implies that the coefficients Gk must satisfy an infinite number of model-independent linear equations that we write down explicitly. In particular, we construct "Analytic Renormalization Group" linear maps Aμ which, for any choice of cut-off μ, allow one to express the low energy Fourier coefficients for |νk | < μ (with the possible exception of the zero mode G0), together with the real-time correlators and spectral functions, in terms of the high energy Fourier coefficients for |νk | ≥ μ. Using a simple numerical algorithm, we show that the exact universal linear constraints on Gk can be used to systematically improve any random approximate data set obtained, for example, from Monte-Carlo simulations. Our results are illustrated on several explicit examples.
NASA Technical Reports Server (NTRS)
Yarrow, Maurice; Vastano, John A.; Lomax, Harvard
1992-01-01
Generic shapes are subjected to pulsed plane waves of arbitrary shape. The resulting scattered electromagnetic fields are determined analytically. These fields are then computed efficiently at field locations for which numerically determined EM fields are required. Of particular interest are the pulsed waveform shapes typically utilized by radar systems. The results can be used to validate the accuracy of finite difference time domain Maxwell's equations solvers. A two-dimensional solver which is second- and fourth-order accurate in space and fourth-order accurate in time is examined. Dielectric media properties are modeled by a ramping technique which simplifies the associated gridding of body shapes. The attributes of the ramping technique are evaluated by comparison with the analytic solutions.
Magnetic anomaly depth and structural index estimation using different height analytic signals data
NASA Astrophysics Data System (ADS)
Zhou, Shuai; Huang, Danian; Su, Chao
2016-09-01
This paper proposes a new semi-automatic inversion method for magnetic anomaly data interpretation that uses a combination of analytic signals of the anomaly at different heights to determine the depth and the structural index N of the sources. The new method utilizes analytic signals of the original anomaly at different heights to effectively suppress the noise contained in the anomaly. Compared with other high-order-derivative calculation methods based on analytic signals, our method computes only first-order derivatives of the anomaly, which yields more stable and accurate results. Tests on synthetic noise-free and noise-corrupted magnetic data indicate that the new method can estimate the depth and N efficiently. The technique is applied to a real measured magnetic anomaly in Southern Illinois caused by a known dike, and the result agrees with the drilling information and inversion results within acceptable calculation error.
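For a 2-D (profile) anomaly, the analytic-signal amplitude indeed requires only first-order derivatives: the vertical derivative can be obtained as the Hilbert transform of the horizontal one. A hedged sketch of that single-height building block (not the authors' multi-height combination; names are ours):

```python
import numpy as np
from scipy.signal import hilbert

def analytic_signal_amplitude(anomaly, dx):
    """Amplitude of the analytic signal of a 2-D magnetic profile.

    The horizontal derivative is computed by finite differences; for a
    2-D source geometry the vertical derivative is its Hilbert transform,
    so only first-order derivatives of the measured anomaly are needed.
    """
    dTdx = np.gradient(anomaly, dx)
    dTdz = np.imag(hilbert(dTdx))
    return np.sqrt(dTdx**2 + dTdz**2)
```

The amplitude peaks over the source regardless of magnetization direction, which is what makes it a convenient quantity for depth and structural-index estimation.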
38 CFR 4.46 - Accurate measurement.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 38 Pensions, Bonuses, and Veterans' Relief 1 2013-07-01 2013-07-01 false Accurate measurement. 4... RATING DISABILITIES Disability Ratings The Musculoskeletal System § 4.46 Accurate measurement. Accurate measurement of the length of stumps, excursion of joints, dimensions and location of scars with respect...
38 CFR 4.46 - Accurate measurement.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 38 Pensions, Bonuses, and Veterans' Relief 1 2012-07-01 2012-07-01 false Accurate measurement. 4... RATING DISABILITIES Disability Ratings The Musculoskeletal System § 4.46 Accurate measurement. Accurate measurement of the length of stumps, excursion of joints, dimensions and location of scars with respect...
38 CFR 4.46 - Accurate measurement.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 38 Pensions, Bonuses, and Veterans' Relief 1 2010-07-01 2010-07-01 false Accurate measurement. 4... RATING DISABILITIES Disability Ratings The Musculoskeletal System § 4.46 Accurate measurement. Accurate measurement of the length of stumps, excursion of joints, dimensions and location of scars with respect...
38 CFR 4.46 - Accurate measurement.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 38 Pensions, Bonuses, and Veterans' Relief 1 2014-07-01 2014-07-01 false Accurate measurement. 4... RATING DISABILITIES Disability Ratings The Musculoskeletal System § 4.46 Accurate measurement. Accurate measurement of the length of stumps, excursion of joints, dimensions and location of scars with respect...
38 CFR 4.46 - Accurate measurement.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 38 Pensions, Bonuses, and Veterans' Relief 1 2011-07-01 2011-07-01 false Accurate measurement. 4... RATING DISABILITIES Disability Ratings The Musculoskeletal System § 4.46 Accurate measurement. Accurate measurement of the length of stumps, excursion of joints, dimensions and location of scars with respect...
NASA Technical Reports Server (NTRS)
Boyd, D. E.; Rao, C. K. P.
1973-01-01
The derivation and application of a Rayleigh-Ritz modal vibration analysis are presented for ring and/or stringer stiffened noncircular cylindrical shells with arbitrary end conditions. Comparisons with previous results from experimental and analytical studies showed this method of analysis to be accurate for a variety of end conditions. Results indicate a greater effect of rings on natural frequencies than of stringers.
Accurate upwind methods for the Euler equations
NASA Technical Reports Server (NTRS)
Huynh, Hung T.
1993-01-01
A new class of piecewise linear methods for the numerical solution of the one-dimensional Euler equations of gas dynamics is presented. These methods are uniformly second-order accurate and can be considered extensions of Godunov's scheme. With an appropriate definition of monotonicity preservation for the case of linear convection, it can be shown that they preserve monotonicity. Similar to Van Leer's MUSCL scheme, they consist of two key steps: a reconstruction step followed by an upwind step. For the reconstruction step, a monotonicity constraint that preserves uniform second-order accuracy is introduced. Computational efficiency is enhanced by devising a criterion that detects the 'smooth' part of the data where the constraint is redundant. The concept and coding of the constraint are simplified by the use of the median function. A slope steepening technique, which has no effect in smooth regions and can resolve a contact discontinuity in four cells, is described. As for the upwind step, existing and new methods are applied in a manner slightly different from those in the literature. These methods are derived by approximating the Euler equations via linearization and diagonalization. At a 'smooth' interface, Harten, Lax, and Van Leer's one-intermediate-state model is employed. A modification of this model that can resolve contact discontinuities is presented. Near a discontinuity, either this modified model or a more accurate one, namely Roe's flux-difference splitting, is used. The current presentation of Roe's method, via the conceptually simple flux-vector splitting, not only establishes a connection between the two splittings but also leads to an admissibility correction with no conditional statement and an efficient approximation to Osher's approximate Riemann solver. These reconstruction and upwind steps result in schemes that are uniformly second-order accurate and economical at smooth regions, and yield high resolution at discontinuities.
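The use of the median function in a monotonicity constraint can be sketched with a classic MUSCL-type limited slope. This is a simplified stand-in (an MC-type constraint), not Huynh's exact scheme; it relies on the identity that the median of 0, x, y equals minmod(x, y):

```python
def median3(a, b, c):
    """Median of three numbers; note median3(0, x, y) is minmod(x, y)."""
    return max(min(a, b), min(max(a, b), c))

def limited_slope(u_left, u_mid, u_right):
    """Monotonicity-limited cell slope for piecewise linear reconstruction:
    the central difference, limited to at most twice either one-sided
    difference, and zeroed at local extrema."""
    central = 0.5 * (u_right - u_left)
    one_sided = median3(0.0, 2.0 * (u_mid - u_left), 2.0 * (u_right - u_mid))
    return median3(0.0, central, one_sided)
```

In smooth monotone regions the limiter returns the second-order central slope unchanged, which is exactly the "constraint is redundant" case the paper's smoothness criterion detects.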
Partially Coherent Scattering in Stellar Chromospheres. Part 4; Analytic Wing Approximations
NASA Technical Reports Server (NTRS)
Gayley, K. G.
1993-01-01
Simple analytic expressions are derived to understand resonance-line wings in stellar chromospheres and similar astrophysical plasmas. The results are approximate, but compare well with accurate numerical simulations. The redistribution is modeled using an extension of the partially coherent scattering approximation (PCS) which we term the comoving-frame partially coherent scattering approximation (CPCS). The distinction is made here because Doppler diffusion is included in the coherent/noncoherent decomposition, in a form slightly improved from the earlier papers in this series.
Accurate free and forced rotational motions of rigid Venus
NASA Astrophysics Data System (ADS)
Cottereau, L.; Souchay, J.; Aljbaae, S.
2010-06-01
Context. The precise and accurate modelling of a terrestrial planet like Venus is an exciting and challenging topic, all the more interesting because it can be compared with that of the Earth, for which such modelling has already been achieved at the milli-arcsecond level. Aims: We aim to complete a previous study by determining the polhode at the milli-arcsecond level, i.e. the torque-free motion of the angular momentum axis of a rigid Venus in a body-fixed frame, as well as the nutation of its third axis of figure in space, which is fundamental from an observational point of view. Methods: We use the same theoretical framework as Kinoshita (1977, Celest. Mech., 15, 277) did to determine the precession-nutation motion of a rigid Earth. It is based on a representation of the rotation of a rigid Venus with the help of Andoyer variables and a set of canonical equations in Hamiltonian formalism. Results: In the first part we computed the polhode and showed that this motion is highly elliptical, with a very long period of 525 cy compared with 430 d for the Earth. This is due to the very small dynamical flattening of Venus in comparison with our planet. In the second part we precisely computed the Oppolzer terms, which allow us to represent the motion in space of the third Venus figure axis with respect to the Venus angular momentum axis under the influence of the solar gravitational torque. We determined the corresponding tables of the nutation coefficients of the third figure axis, both in longitude and in obliquity, due to the Sun, which are of the same order of amplitude as for the Earth. We showed that the nutation coefficients for the third figure axis are significantly different from those of the angular momentum axis, in contrast to the case of the Earth. Our analytical results have been validated by a numerical integration, which revealed the indirect planetary effects.
Efficient and accurate computation of the incomplete Airy functions
NASA Technical Reports Server (NTRS)
Constantinides, E. D.; Marhefka, R. J.
1993-01-01
The incomplete Airy integrals serve as canonical functions for the uniform ray optical solutions to several high-frequency scattering and diffraction problems that involve a class of integrals characterized by two stationary points that are arbitrarily close to one another or to an integration endpoint. Integrals with such analytical properties describe transition region phenomena associated with composite shadow boundaries. An efficient and accurate method for computing the incomplete Airy functions would make the solutions to such problems useful for engineering purposes. In this paper a convergent series solution for the incomplete Airy functions is derived. Asymptotic expansions involving several terms are also developed and serve as large argument approximations. The combination of the series solution with the asymptotic formulae provides for an efficient and accurate computation of the incomplete Airy functions. Validation of accuracy is accomplished using direct numerical integration data.
Analytical Chemistry in Russia.
Zolotov, Yuri
2016-09-06
Research in Russian analytical chemistry (AC) is carried out on a significant scale, and the analytical service solves practical tasks in geological survey, environmental protection, medicine, industry, agriculture, etc. The education system trains highly skilled professionals in AC. The development, and especially the manufacturing, of analytical instruments should be improved; in spite of this, there are several good domestic instruments, and others satisfy some requirements. Russian AC has rather good historical roots.
Science Update: Analytical Chemistry.
ERIC Educational Resources Information Center
Worthy, Ward
1980-01-01
Briefly discusses new instrumentation in the field of analytical chemistry. Advances in liquid chromatography, photoacoustic spectroscopy, the use of lasers, and mass spectrometry are also discussed. (CS)
Service line analytics in the new era.
Spence, Jay; Seargeant, Dan
2015-08-01
To succeed under the value-based business model, hospitals and health systems require effective service line analytics that combine inpatient and outpatient data and that incorporate quality metrics for evaluating clinical operations. When developing a framework for collection, analysis, and dissemination of service line data, healthcare organizations should focus on five key aspects of effective service line analytics: Updated service line definitions. Ability to analyze and trend service line net patient revenues by payment source. Access to accurate service line cost information across multiple dimensions with drill-through capabilities. Ability to redesign key reports based on changing requirements. Clear assignment of accountability.
Road Transportable Analytical Laboratory (RTAL) system
Finger, S.M.
1995-10-01
The goal of the Road Transportable Analytical Laboratory (RTAL) Project is the development and demonstration of a system to meet the unique needs of the DOE for rapid, accurate analysis of a wide variety of hazardous and radioactive contaminants in soil, groundwater, and surface waters. This laboratory system has been designed to provide the field and laboratory analytical equipment necessary to detect and quantify radionuclides, organics, heavy metals and other inorganic compounds. The laboratory system consists of a set of individual laboratory modules deployable independently or as an interconnected group to meet each DOE site's specific needs.
Krings, Thomas; Mauerhofer, Eric
2011-06-01
This work improves the reliability and accuracy of the reconstruction of the total isotope activity content in heterogeneous nuclear waste drums containing point sources. The method is based on χ²-fits of the angular-dependent count rate distribution measured during drum rotation in segmented gamma scanning. A new description of the analytical calculation of the angular count rate distribution is introduced, based on a more precise model of the collimated detector. The new description is validated and compared to the old description using MCNP5 simulations of angular-dependent count rate distributions of Co-60 and Cs-137 point sources. It is shown that the new model describes the angular-dependent count rate distribution significantly more accurately than the old model. Hence, the reconstruction of the activity is more accurate and the errors are considerably reduced, leading to more reliable results. Furthermore, the results are compared to the conventional reconstruction method, which assumes a homogeneous matrix and activity distribution.
Assessment of the analytical capabilities of inductively coupled plasma-mass spectrometry
Taylor, H.E.; Garbarino, J.R.
1988-01-01
A thorough assessment of the analytical capabilities of inductively coupled plasma-mass spectrometry was conducted for selected analytes of importance in water quality applications and hydrologic research. A multielement calibration curve technique was designed to produce accurate and precise results in analysis times of approximately one minute. The suite of elements included Al, As, B, Ba, Be, Cd, Co, Cr, Cu, Hg, Li, Mn, Mo, Ni, Pb, Se, Sr, V, and Zn. The effects of sample matrix composition on the accuracy of the determinations showed that matrix elements (such as Na, Ca, Mg, and K) that may be present in natural water samples at concentration levels greater than 50 mg/L resulted in as much as a 10% suppression in ion current for analyte elements. Operational detection limits are presented.
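A multielement calibration-curve technique of the kind described rests on a per-element linear fit of signal against standard concentration. A minimal sketch, assuming a simple linear response with no internal-standard correction (function name and synthetic values are ours):

```python
import numpy as np

def make_calibration(conc_standards, signals):
    """Fit signal = slope * concentration + intercept by least squares
    and return a function mapping measured signals back to concentrations."""
    slope, intercept = np.polyfit(conc_standards, signals, 1)
    return lambda s: (np.asarray(s) - intercept) / slope
```

In practice each analyte in the suite gets its own curve, and matrix effects such as the ~10% ion-current suppression noted above would bias the inferred concentrations unless corrected or matrix-matched standards are used.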
NASA Astrophysics Data System (ADS)
Zhu, Ting-Lei; Zhao, Chang-Yin; Zhang, Ming-Jiang
2017-04-01
This paper aims to obtain an analytic approximation to the evolution of circular orbits governed by the Earth's J2 and the luni-solar gravitational perturbations. Assuming that the lunar orbital plane coincides with the ecliptic plane, Allan and Cook (Proc. R. Soc. A, Math. Phys. Eng. Sci. 280(1380):97, 1964) derived an analytic solution for the orbital plane evolution of circular orbits. Using their result as an intermediate solution, we establish an approximate analytic model with the lunar orbital inclination and its nodal regression taken into account. Finally, an approximate analytic expression is derived that agrees well with numerical results, except in the resonant cases when the period of the reference orbit approximately equals an integer multiple (especially 1 or 2 times) of the lunar nodal regression period.
Accurate Cross Sections for Microanalysis
Rez, Peter
2002-01-01
Calculating the intensity of x-ray emission in electron beam microanalysis requires knowledge of the energy distribution of the electrons in the solid, the energy variation of the ionization cross section of the relevant subshell, the fraction of ionization events producing x rays of interest, and the absorption coefficient of the x rays on the path to the detector. The theoretical predictions and experimental data available for ionization cross sections are limited mainly to the K shells of a few elements. Results of systematic plane wave Born approximation calculations with exchange for K, L, and M shell ionization cross sections over the range of electron energies used in microanalysis are presented. Comparisons are made with experimental measurements for selected K shells, and it is shown that the plane wave theory is not appropriate for overvoltages less than 2.5. PMID:27446747
Photovoltaic Degradation Rates -- An Analytical Review
Jordan, D. C.; Kurtz, S. R.
2012-06-01
As photovoltaic penetration of the power grid increases, accurate predictions of return on investment require accurate prediction of decreased power output over time. Degradation rates must be known in order to predict power delivery. This article reviews degradation rates of flat-plate terrestrial modules and systems reported in published literature from field testing throughout the last 40 years. Nearly 2000 degradation rates, measured on individual modules or entire systems, have been assembled from the literature, showing a median value of 0.5%/year. The review consists of three parts: a brief historical outline, an analytical summary of degradation rates, and a detailed bibliography partitioned by technology.
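The review's headline figure, a median degradation rate of 0.5%/year, connects directly to power-delivery predictions. A minimal sketch of that aggregation and projection, with made-up sample rates:

```python
import numpy as np

# Hypothetical sample of reported degradation rates (%/year); the review
# itself assembles nearly 2000 such values from the literature.
rates = np.array([0.2, 0.3, 0.4, 0.5, 0.5, 0.6, 0.8, 1.0, 1.5])
median = float(np.median(rates))
print(median)  # 0.5

# Projected relative power after 25 years, compounding at the median rate
power_25y = (1.0 - median / 100.0) ** 25
print(round(power_25y, 3))  # 0.882
```

Whether degradation compounds or is applied linearly changes the 25-year figure only slightly at these rates; the sketch uses compounding.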
Fast and spectrally accurate Ewald summation for 2-periodic electrostatic systems
NASA Astrophysics Data System (ADS)
Lindbo, Dag; Tornberg, Anna-Karin
2012-04-01
A new method for Ewald summation in planar/slablike geometry, i.e., systems where periodicity applies in two dimensions and the last dimension is "free" (2P), is presented. We employ a spectral representation in terms of both Fourier series and integrals. This allows us to concisely derive both the 2P Ewald sum and a fast particle mesh Ewald (PME)-type method suitable for large-scale computations. The primary results are: (i) close and illuminating connections between the 2P problem and the standard Ewald sum and associated fast methods for full periodicity; (ii) a fast, O(N log N), and spectrally accurate PME-type method for the 2P k-space Ewald sum that uses vastly less memory than traditional PME methods; (iii) errors that decouple, such that parameter selection is simplified. We give analytical and numerical results to support this.
Estimation of bone permeability using accurate microstructural measurements.
Beno, Thomas; Yoon, Young-June; Cowin, Stephen C; Fritton, Susannah P
2006-01-01
While interstitial fluid flow is necessary for the viability of osteocytes, it is also believed to play a role in bone's mechanosensory system by shearing bone cell membranes or causing cytoskeleton deformation and thus activating biochemical responses that lead to the process of bone adaptation. However, the fluid flow properties that regulate bone's adaptive response are poorly understood. In this paper, we present an analytical approach to determine the degree of anisotropy of the permeability of the lacunar-canalicular porosity in bone. First, we estimate the total number of canaliculi emanating from each osteocyte lacuna based on published measurements from parallel-fibered shaft bones of several species (chick, rabbit, bovine, horse, dog, and human). Next, we determine the local three-dimensional permeability of the lacunar-canalicular porosity for these species using recent microstructural measurements and adapting a previously developed model. Results demonstrated that the number of canaliculi per osteocyte lacuna ranged from 41 for human to 115 for horse. Permeability coefficients were found to be different in three local principal directions, indicating local orthotropic symmetry of bone permeability in parallel-fibered cortical bone for all species examined. For the range of parameters investigated, the local lacunar-canalicular permeability varied more than three orders of magnitude, with the osteocyte lacunar shape and size along with the 3-D canalicular distribution determining the degree of anisotropy of the local permeability. This two-step theoretical approach to determine the degree of anisotropy of the permeability of the lacunar-canalicular porosity will be useful for accurate quantification of interstitial fluid movement in bone.
Signals: Applying Academic Analytics
ERIC Educational Resources Information Center
Arnold, Kimberly E.
2010-01-01
Academic analytics helps address the public's desire for institutional accountability with regard to student success, given the widespread concern over the cost of higher education and the difficult economic and budgetary conditions prevailing worldwide. Purdue University's Signals project applies the principles of analytics widely used in…
Extreme Scale Visual Analytics
Wong, Pak C.; Shen, Han-Wei; Pascucci, Valerio
2012-05-08
Extreme-scale visual analytics (VA) is about applying VA to extreme-scale data. The articles in this special issue examine advances related to extreme-scale VA problems, their analytical and computational challenges, and their real-world applications.
Learning Analytics Considered Harmful
ERIC Educational Resources Information Center
Dringus, Laurie P.
2012-01-01
This essay is written to present a prospective stance on how learning analytics, as a core evaluative approach, must help instructors uncover the important trends and evidence of quality learner data in the online course. A critique is presented of strategic and tactical issues of learning analytics. The approach to the critique is taken through…
Analytical mass spectrometry. Abstracts
Not Available
1990-12-31
This 43rd Annual Summer Symposium on Analytical Chemistry was held July 24--27, 1990 at Oak Ridge, TN and contained sessions on the following topics: Fundamentals of Analytical Mass Spectrometry (MS), MS in the National Laboratories, Lasers and Fourier Transform Methods, Future of MS, New Ionization and LC/MS Methods, and an extra session. (WET)
ERIC Educational Resources Information Center
Ember, Lois R.
1977-01-01
The procedures utilized by the Association of Official Analytical Chemists (AOAC) to develop, evaluate, and validate analytical methods for the analysis of chemical pollutants are detailed. Methods validated by AOAC are used by the EPA and FDA in their enforcement programs and are granted preferential treatment by the courts. (BT)
[Analytical epidemiology of urolithiasis].
Kodama, H; Ohno, Y
1989-06-01
In this paper, urolithiasis is reviewed from the standpoint of analytical epidemiology, which examines the statistical association between a given disease and a hypothesized factor with the aim of inferring causality. Factors epidemiologically incriminated in stone formation include age, sex, occupation, social class (level of affluence), season of the year and climate, dietary and fluid intake, and genetic predisposition. Since some of these factors are interlinked, they are broadly classified into five categories and reviewed epidemiologically here. Genetic predisposition is essentially endorsed by the more frequent episodes of stone formation in family members of stone formers, as compared to non-stone formers. Nevertheless, some environmental factors (likely dietary habits) shared by family members are believed to be relatively more important than genetic predisposition. A hot, sunny climate may influence stone formation by inducing dehydration through increased perspiration and increased solute concentration with decreased urine volume, coupled with inadequate fluid intake, and possibly through greater exposure to ultraviolet radiation, which eventually results in increased vitamin D production, conceivably correlated with seasonal variation in calcium and oxalate excretion into the urine. Urinary tract infections play an important role in the formation of magnesium ammonium phosphate stones in particular. The association with regional water hardness remains controversial. Excessive intake of coffee, tea, and alcoholic beverages seemingly increases the risk of renal calculi, though this has not been consistently confirmed. Many dietary elements have been suggested by numerous clinical and experimental investigations, but only a few are substantiated by analytical epidemiological investigations. An increased ingestion of animal protein and sugar and a decreased ingestion of dietary fiber and green-yellow vegetables are linked with the higher
Analytical approximations for spiral waves.
Löber, Jakob; Engel, Harald
2013-12-01
We propose a non-perturbative attempt to solve the kinematic equations for spiral waves in excitable media. From the eikonal equation for the wave front we derive an implicit analytical relation between rotation frequency Ω and core radius R(0). For free, rigidly rotating spiral waves our analytical prediction is in good agreement with numerical solutions of the linear eikonal equation not only for very large but also for intermediate and small values of the core radius. An equivalent Ω(R(+)) dependence improves the result by Keener and Tyson for spiral waves pinned to a circular defect of radius R(+) with Neumann boundaries at the periphery. Simultaneously, analytical approximations for the shape of free and pinned spirals are given. We discuss the reasons why the ansatz fails to correctly describe the dependence of the rotation frequency on the excitability of the medium.
LSM: perceptually accurate line segment merging
NASA Astrophysics Data System (ADS)
Hamid, Naila; Khan, Nazar
2016-11-01
Existing line segment detectors tend to break up perceptually distinct line segments into multiple segments. We propose an algorithm for merging such broken segments to recover the original perceptually accurate line segments. The algorithm proceeds by grouping line segments on the basis of angular and spatial proximity. Then those line segment pairs within each group that satisfy unique, adaptive mergeability criteria are successively merged to form a single line segment. This process is repeated until no more line segments can be merged. We also propose a method for quantitative comparison of line segment detection algorithms. Results on the York Urban dataset show that our merged line segments are closer to human-marked ground-truth line segments compared to state-of-the-art line segment detection algorithms.
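The grouping-and-merging step can be sketched for a single pair of segments as follows. The fixed angular and gap thresholds below are illustrative stand-ins for the paper's adaptive mergeability criteria:

```python
import math

def angle(seg):
    (x1, y1), (x2, y2) = seg
    return math.atan2(y2 - y1, x2 - x1) % math.pi  # undirected orientation in [0, pi)

def merge_if_close(s1, s2, max_angle=0.05, max_gap=5.0):
    """Merge two segments when their orientations and endpoints are close.
    Fixed thresholds here are illustrative; the paper uses adaptive criteria."""
    da = abs(angle(s1) - angle(s2))
    da = min(da, math.pi - da)           # angular proximity (with wrap-around)
    if da > max_angle:
        return None
    gap = min(math.dist(p, q) for p in s1 for q in s2)  # spatial proximity
    if gap > max_gap:
        return None
    # merged segment spans the two farthest endpoints of the pair
    pts = list(s1) + list(s2)
    return max(((p, q) for p in pts for q in pts),
               key=lambda pq: math.dist(*pq))

# Two nearly collinear fragments separated by a small gap merge into one:
merged = merge_if_close(((0, 0), (10, 0.1)), ((12, 0.2), (20, 0.3)))
print(merged)  # ((0, 0), (20, 0.3))
```

A full detector would apply this pairwise test repeatedly within each proximity group until no more merges occur, as the abstract describes.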
Micron Accurate Absolute Ranging System: Range Extension
NASA Technical Reports Server (NTRS)
Smalley, Larry L.; Smith, Kely L.
1999-01-01
The purpose of this research is to investigate Fresnel diffraction as a means of obtaining absolute distance measurements with micron or greater accuracy. It is believed that such a system would prove useful to the Next Generation Space Telescope (NGST) as a non-intrusive, non-contact measuring system for use with secondary concentrator station-keeping systems. The present research attempts to validate past experiments and develop ways to apply the phenomenon of Fresnel diffraction to micron accurate measurement. This report discusses past research on the phenomenon and the basis of using Fresnel diffraction for distance metrology. The apparatus used in the recent investigations, the experimental procedures, and preliminary results are discussed in detail. Continued research and equipment requirements for extending the effective range of Fresnel diffraction systems are also described.
Accurate and precise zinc isotope ratio measurements in urban aerosols.
Gioia, Simone; Weiss, Dominik; Coles, Barry; Arnold, Tim; Babinski, Marly
2008-12-15
We developed an analytical method and constrained procedural boundary conditions that enable accurate and precise Zn isotope ratio measurements in urban aerosols. We also demonstrate the potential of this new isotope system for air pollutant source tracing. The procedural blank is around 5 ng, significantly lower than published methods due to a tailored ion chromatographic separation. Accurate mass bias correction using external correction with Cu is limited to a Zn sample content of approximately 50 ng, due to the combined effect of the Cu and Zn blank contribution from the ion exchange procedure and the need to maintain a Cu/Zn ratio of approximately 1. Mass bias is instead corrected for by applying the common analyte internal standardization approach. Comparison with other mass bias correction methods demonstrates the accuracy of the method. The average precision of delta(66)Zn determinations in aerosols is around 0.05 per thousand per atomic mass unit. The method was tested on aerosols collected in Sao Paulo City, Brazil. The measurements reveal significant variations in delta(66)Zn(Imperial), ranging between -0.96 and -0.37 per thousand in coarse and between -1.04 and 0.02 per thousand in fine particulate matter. This variability suggests that Zn isotopic compositions can distinguish atmospheric sources. The isotopically light signature suggests traffic as the main source. We present further delta(66)Zn(Imperial) data for the standard reference material NIST SRM 2783 (delta(66)Zn(Imperial) = 0.26 +/- 0.10 per thousand).
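The delta values quoted above follow standard delta notation: the per-mil deviation of the sample's 66Zn/64Zn ratio from that of a reference standard. A minimal sketch with made-up ratios:

```python
# Hypothetical raw 66Zn/64Zn ion-current ratios for a sample and the
# bracketing reference standard (values are made up for illustration).
R_sample = 0.56502
R_standard = 0.56560

# Standard delta notation: per-mil deviation from the reference standard.
delta66Zn = (R_sample / R_standard - 1.0) * 1000.0
print(round(delta66Zn, 2))  # -1.03
```

A value near -1.0 per thousand would fall in the range reported above for fine particulate matter.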
A non-grey analytical model for irradiated atmospheres. II. Analytical vs. numerical solutions
NASA Astrophysics Data System (ADS)
Parmentier, Vivien; Guillot, Tristan; Fortney, Jonathan J.; Marley, Mark S.
2015-02-01
Context. The recent discovery and characterization of the diversity of the atmospheres of exoplanets and brown dwarfs calls for the development of fast and accurate analytical models. Aims: We wish to assess the goodness of the different approximations used to solve the radiative transfer problem in irradiated atmospheres analytically, and we aim to provide a useful tool for a fast computation of analytical temperature profiles that remains correct over a wide range of atmospheric characteristics. Methods: We quantify the accuracy of the analytical solution derived in paper I for an irradiated, non-grey atmosphere by comparing it to a state-of-the-art radiative transfer model. Then, using a grid of numerical models, we calibrate the different coefficients of our analytical model for irradiated solar-composition atmospheres of giant exoplanets and brown dwarfs. Results: We show that the so-called Eddington approximation used to solve the angular dependency of the radiation field leads to relative errors of up to ~5% on the temperature profile. For grey or semi-grey atmospheres (i.e., when the visible and thermal opacities, respectively, can be considered independent of wavelength), we show that the presence of a convective zone has a limited effect on the radiative atmosphere above it and leads to modifications of the radiative temperature profile of approximately ~2%. However, for realistic non-grey planetary atmospheres, the presence of a convective zone that extends to optical depths smaller than unity can lead to changes in the radiative temperature profile on the order of 20% or more. When the convective zone is located at deeper levels (such as for strongly irradiated hot Jupiters), its effect on the radiative atmosphere is again on the same order (~2%) as in the semi-grey case. We show that the temperature inversion induced by a strong absorber in the optical, such as TiO or VO is mainly due to non-grey thermal effects reducing the ability of the upper
Problems in publishing accurate color in IEEE journals.
Vrhel, Michael J; Trussell, H J
2002-01-01
To demonstrate the performance of color image processing algorithms, it is desirable to be able to accurately display color images in archival publications. In poster presentations, the authors have substantial control of the printing process, although little control of the illumination. For journal publication, the authors must rely on professional intermediaries (printers) to accurately reproduce their results. Our previous work describes requirements for accurately rendering images using one's own equipment. This paper discusses the problems of dealing with intermediaries and offers suggestions for improved communication and rendering.
2017-01-01
%). Most quality control schemes at Sulaimani hospitals focus only on the analytical phase, and none of the pre-analytical errors were recorded. Interestingly, none of the labs were internationally accredited; therefore, corrective actions are needed at these hospitals to ensure better health outcomes. Internal and External Quality Assessment Schemes (EQAS) for the pre-analytical phase at Sulaimani clinical laboratories should be implemented at public hospitals. Furthermore, lab personnel, particularly phlebotomists, need continuous training on the importance of sample quality to obtain accurate test results. PMID:28107395
Evaluation of higher order PMD effects using Jones matrix analytical models: a comparative study
NASA Astrophysics Data System (ADS)
Ferreira, M. F.
2006-04-01
A comparative study of Jones matrix analytical models for high-order PMD is presented. Models that use an exponential expansion truncated at second order, or that treat the dispersion vector as a Taylor series expansion, do not approximate high-order PMD effects well, because the modulus of their dispersion vectors grows without bound as a function of frequency. On the other hand, the analytical model that describes the dispersion vector as rotating on a circumference in Stokes space is found to be the most accurate. Moreover, it can be used to obtain an analytical expression for the pulse broadening, which is often chosen as a system-quality parameter.
Scholtz, Jean; Burtner, Edwin R.; Cook, Kristin A.
2016-06-13
This course will introduce the field of Visual Analytics to HCI researchers and practitioners highlighting the contributions they can make to this field. Topics will include a definition of visual analytics along with examples of current systems, types of tasks and end users, issues in defining user requirements, design of visualizations and interactions, guidelines and heuristics, the current state of user-centered evaluations, and metrics for evaluation. We encourage designers, HCI researchers, and HCI practitioners to attend to learn how their skills can contribute to advancing the state of the art of visual analytics
An analytic Padé-motivated QCD coupling
Martinez, H. E.; Cvetic, G.
2010-08-04
We consider a modification of the Minimal Analytic (MA) coupling of Shirkov and Solovtsov. This modified MA (mMA) coupling reflects the desired analytic properties of the space-like observables. We show that an approximation by Dirac deltas of its discontinuity function ρ is equivalent to a Padé (rational) approximation of the mMA coupling that keeps its analytic structure. We propose a modification to mMA that, as preliminary results indicate, could improve the evaluation of low-energy observables compared with other analytic couplings.
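The delta-function trick can be sketched numerically: replacing the discontinuity function ρ by a few weighted Dirac deltas turns the dispersive integral into a rational function of Q², i.e., a Padé-like approximant. The spectral function, node positions, and weights below are illustrative assumptions, not the mMA ingredients:

```python
import numpy as np

# Illustrative discontinuity (spectral) function; the paper's mMA rho differs.
def rho(sigma):
    return 1.0 / (1.0 + sigma) ** 2

# "Exact" dispersive coupling  A(Q^2) = (1/pi) * Int_0^inf rho(s)/(s + Q^2) ds
sig = np.geomspace(1e-4, 1e4, 4000)
def A_exact(Q2):
    return float(np.trapz(rho(sig) / (sig + Q2), sig) / np.pi)

# Dirac-delta approximation  rho(s) ~ pi * sum_i w_i * delta(s - s_i)
# turns the integral into a rational (Pade-like) function of Q^2:
#     A(Q^2) ~ sum_i w_i / (s_i + Q^2)
nodes = np.array([0.05, 0.5, 5.0])   # assumed delta positions
edges = [0.0, 0.2, 2.0, 1e4]         # sub-interval each delta represents

def seg_integral(a, b):
    s = np.linspace(a, b, 20001)
    return np.trapz(rho(s), s)

weights = np.array([seg_integral(a, b)
                    for a, b in zip(edges[:-1], edges[1:])]) / np.pi

def A_delta(Q2):
    return float(np.sum(weights / (nodes + Q2)))

for Q2 in (0.1, 1.0, 10.0):
    print(Q2, round(A_exact(Q2), 4), round(A_delta(Q2), 4))
```

Even with only three crudely placed deltas, the rational approximant tracks the dispersive integral to within tens of percent over two decades in Q², which is the structure-preserving property the paper exploits.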
Quench in superconducting magnets. 2: Analytic solution
NASA Astrophysics Data System (ADS)
Shajii, A.; Freidberg, J. P.
1994-09-01
A set of analytic solutions for the Quencher model, as described in Part 1 (Shajii and Freidberg, 1994), is presented in this paper. These analytic solutions represent the first such results that remain valid for the long time scales of interest during a quench process. The assumptions and the resulting simplifications that lead to the analytic solutions are discussed, and the regimes of validity of the various approximations are specified. The predictions of the analytic results are shown to be in very good agreement with numerical as well as experimental results. Important analytic scaling relations are verified by such comparisons, and the consequences of some of these scalings on currently designed superconducting magnets are discussed.
Idealized textile composites for experimental/analytical correlation
NASA Technical Reports Server (NTRS)
Adams, Daniel O.
1994-01-01
Textile composites are fiber reinforced materials produced by weaving, braiding, knitting, or stitching. These materials offer possible reductions in manufacturing costs compared to conventional laminated composites. Thus, they are attractive candidate materials for aircraft structures. To date, numerous experimental studies have been performed to characterize the mechanical performance of specific textile architectures. Since many materials and architectures are of interest, there is a need for analytical models to predict the mechanical properties of a specific textile composite material. Models of varying sophistication have been proposed based on mechanics of materials, classical laminated plate theory, and the finite element method. These modeling approaches assume an idealized textile architecture and generally consider a single unit cell. Due to randomness of the textile architectures produced using conventional processing techniques, experimental data obtained has been of limited use for verifying the accuracy of these analytical approaches. This research is focused on fabricating woven textile composites with highly aligned and accurately placed fiber tows that closely represent the idealized architectures assumed in analytical models. These idealized textile composites have been fabricated with three types of layer nesting configurations: stacked, diagonal, and split-span. Compression testing results have identified strength variations as a function of nesting. Moire interferometry experiments are being used to determine localized deformations for detailed correlation with model predictions.
An Analytic Function of Lunar Surface Temperature for Exospheric Modeling
NASA Technical Reports Server (NTRS)
Hurley, Dana M.; Sarantos, Menelaos; Grava, Cesare; Williams, Jean-Pierre; Retherford, Kurt D.; Siegler, Matthew; Greenhagen, Benjamin; Paige, David
2014-01-01
We present an analytic expression to represent the lunar surface temperature as a function of Sun-state latitude and local time. The approximation represents neither topographical features nor compositional effects and therefore does not change as a function of selenographic latitude and longitude. The function reproduces the surface temperature measured by Diviner to within +/-10 K at 72% of grid points for dayside solar zenith angles of less than 80 deg, and at 98% of grid points for nightside solar zenith angles greater than 100 deg. The analytic function is least accurate at the terminator, where there is a strong temperature gradient, and in the polar regions. Topographic features have a larger effect on the actual temperature near the terminator than at other solar zenith angles. For exospheric modeling, the effects of topography on the thermal model can be approximated by using an effective longitude for determining the temperature. This effective longitude is randomly redistributed with 1 sigma of 4.5 deg. The resulting "roughened" analytical model well represents the statistical dispersion in the Diviner data and is expected to be generally useful for future models of lunar surface temperature, especially those implemented within exospheric simulations that address questions of volatile transport.
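A toy version of such an analytic temperature function is easy to write down. The cos^(1/4) radiative-equilibrium form and the constants below are textbook-style assumptions, not the fitted Diviner-based expression the abstract describes:

```python
import math

def lunar_surface_temp(sza_deg, t_subsolar=390.0, t_night=100.0):
    """Illustrative analytic surface temperature (K) vs. solar zenith angle
    (degrees). The cos^(1/4) radiative-equilibrium form and the constants
    are generic assumptions, not the fitted function from the paper."""
    if sza_deg >= 90.0:
        return t_night                                   # nightside: flat floor
    t_day = t_subsolar * math.cos(math.radians(sza_deg)) ** 0.25
    return max(t_day, t_night)                           # clamp near the terminator

for sza in (0, 45, 80, 120):
    print(sza, round(lunar_surface_temp(sza), 1))
```

A realistic model would let the nightside floor decay with time after sunset and smooth the terminator discontinuity, which is exactly where the abstract says the analytic fit is least accurate.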
Analytical techniques: A compilation
NASA Technical Reports Server (NTRS)
1975-01-01
A compilation, containing articles on a number of analytical techniques for quality control engineers and laboratory workers, is presented. Data cover techniques for testing electronic, mechanical, and optical systems, nondestructive testing techniques, and gas analysis techniques.
ENVIRONMENTAL ANALYTICAL CHEMISTRY OF ...
Within the scope of a number of emerging contaminant issues in environmental analysis, one area that has received a great deal of public interest has been the assessment of the role of pharmaceuticals and personal care products (PPCPs) as stressors and agents of change in ecosystems, as well as their role in unplanned human exposure. The relationship between personal actions and the occurrence of PPCPs in the environment is clear-cut and comprehensible to the public. In this overview, we attempt to examine the separations aspect of the analytical approach to the vast array of potential analytes among this class of compounds. We also highlight the relationship between these compounds and endocrine disrupting compounds (EDCs), and between PPCPs and EDCs and the more traditional environmental analytes such as the persistent organic pollutants (POPs). Although the spectrum of chemical behavior extends from hydrophobic to hydrophilic, the current focus has shifted to moderately and highly polar analytes. Thus, emphasis on HPLC and LC/MS has grown, and MS/MS has become a detection technique of choice, with either electrospray ionization or atmospheric pressure chemical ionization. This contrasts markedly with the benchmark approach of capillary GC, GC/MS, and electron ionization in traditional environmental analysis. The expansion of the analyte list has fostered new vigor in the development of environmental analytical chemistry, modernized the range of tools appli
Analytic boosted boson discrimination
Larkoski, Andrew J.; Moult, Ian; Neill, Duff
2016-05-20
Observables which discriminate boosted topologies from massive QCD jets are of great importance for the success of the jet substructure program at the Large Hadron Collider. Such observables, while both widely and successfully used, have been studied almost exclusively with Monte Carlo simulations. In this paper we present the first all-orders factorization theorem for a two-prong discriminant based on a jet shape variable, D_{2}, valid for both signal and background jets. Our factorization theorem simultaneously describes the production of both collinear and soft subjets, and we introduce a novel zero-bin procedure to correctly describe the transition region between these limits. By proving an all orders factorization theorem, we enable a systematically improvable description, and allow for precision comparisons between data, Monte Carlo, and first principles QCD calculations for jet substructure observables. Using our factorization theorem, we present numerical results for the discrimination of a boosted Z boson from massive QCD background jets. We compare our results with Monte Carlo predictions which allows for a detailed understanding of the extent to which these generators accurately describe the formation of two-prong QCD jets, and informs their usage in substructure analyses. In conclusion, our calculation also provides considerable insight into the discrimination power and calculability of jet substructure observables in general.
Fast and accurate exhaled breath ammonia measurement.
Solga, Steven F; Mudalel, Matthew L; Spacek, Lisa A; Risby, Terence H
2014-06-11
This exhaled breath ammonia method uses a fast and highly sensitive spectroscopic technique known as quartz-enhanced photoacoustic spectroscopy (QEPAS) based on a quantum cascade laser. The monitor is coupled to a sampler that measures mouth pressure and carbon dioxide. The system is temperature controlled and specifically designed to address the reactivity of this compound. The sampler provides immediate feedback to the subject and the technician on the quality of the breath effort. Together with the quick response time of the monitor, this system is capable of accurately measuring exhaled breath ammonia representative of deep lung systemic levels. Because the system is easy to use and produces real-time results, it has enabled experiments to identify factors that influence measurements. For example, mouth rinse and oral pH reproducibly and significantly affect results and therefore must be controlled. Temperature and mode of breathing are other examples. As our understanding of these factors evolves, error is reduced, and clinical studies become more meaningful. This system is very reliable and individual measurements are inexpensive. The sampler is relatively inexpensive and quite portable, but the monitor is neither. This limits options for some clinical studies and provides rationale for future innovations.
NASA Astrophysics Data System (ADS)
Liu, Qianlong
2011-09-01
Prosperetti's seminal Physalis method, an Immersed Boundary/spectral method, has been used extensively to investigate fluid flows with suspended solid particles. Its underlying idea of creating a cage and using a spectral general analytical solution around a discontinuity in a surrounding field as a computational mechanism to enable the accommodation of physical and geometric discontinuities is a general concept, and can be applied to other problems of importance to physics, mechanics, and chemistry. In this paper we provide a foundation for the application of this approach to the determination of the distribution of electric charge in heterogeneous mixtures of dielectrics and conductors. The proposed Physalis method is remarkably accurate and efficient. In the method, a spectral analytical solution is used to tackle the discontinuity, and thus the discontinuous boundary conditions at the interface of two media are satisfied exactly. Owing to the hybrid finite difference and spectral schemes, the method is spectrally accurate if the modes are not sufficiently resolved, while higher than second-order accurate if the modes are sufficiently resolved, for the solved potential field. Because of the features of the analytical solutions, the derivative quantities of importance, such as electric field, charge distribution, and force, have the same order of accuracy as the solved potential field during postprocessing. This is an important advantage of the Physalis method over other numerical methods involving interpolation, differentiation, and integration during postprocessing, which may significantly degrade the accuracy of the derivative quantities of importance. The analytical solutions enable the user to use relatively few mesh points to accurately represent the regions of discontinuity. In addition, the spectral convergence and a linear relationship between the cost of computer memory/computation and particle numbers result in a very efficient method. In the present
Coarse-grained red blood cell model with accurate mechanical properties, rheology and dynamics.
Fedosov, Dmitry A; Caswell, Bruce; Karniadakis, George E
2009-01-01
We present a coarse-grained red blood cell (RBC) model with accurate and realistic mechanical properties, rheology and dynamics. The modeled membrane is represented by a triangular mesh which incorporates in-plane shear energy, bending energy, and area and volume conservation constraints. The macroscopic membrane elastic properties are imposed through semi-analytic theory, and are matched with those obtained in optical tweezers stretching experiments. Rheological measurements characterized by a time-dependent complex modulus are extracted from the membrane thermal fluctuations, and compared with those obtained from optical magnetic twisting cytometry. The results allow us to define a meaningful characteristic time of the membrane. The dynamics of RBCs observed in shear flow suggests that a purely elastic model for the RBC membrane is not appropriate, and therefore a viscoelastic model is required. The set of proposed analyses and numerical tests can be used as a complete model testbed in order to calibrate modeled viscoelastic membranes to accurately represent RBCs in health and disease.
NASA Astrophysics Data System (ADS)
Allegretti, O.; Dionisi-Vici, P.; Bontadi, J.; Raffaelli, F.
2017-01-01
In this paper, a long-term monitoring experience on a panel painting, carried out with a whole set of analytical tools before the scheduled conservation intervention, is described. The object under analysis is a XVI-century painting, "The daughters of the Emperor Ferdinand I" by Jakob Seisenegger, painted around 1534 and preserved in the storerooms of the Superintendency of the Trento Province, of pertinence of the Buonconsiglio Castle. The wooden support is built from many narrow planks of pear wood; some evidence of degradation is visible, caused by insects and by the previous storage conditions. Earlier invasive conservation interventions, during which the present battened-crosspiece structure was applied, have also contributed to the non-optimal state of conservation. The panel painting has been monitored for some years in its storeroom, climatically uncontrolled for most of the time, using: displacement transducers, placed in different positions relevant from the structural point of view;
Importance of Accurate Measurements in Nutrition Research: Dietary Flavonoids as a Case Study
Technology Transfer Automated Retrieval System (TEKTRAN)
Accurate measurements of the secondary metabolites in natural products and plant foods are critical to establishing diet/health relationships. There are as many as 50,000 secondary metabolites which may influence human health. Their structural and chemical diversity present a challenge to analytic...
Comparison between analytical and numerical solution of mathematical drying model
NASA Astrophysics Data System (ADS)
Shahari, N.; Rasmani, K.; Jamil, N.
2016-02-01
Drying, a process of shifting heat and mass within food that helps preserve it, is central to the food industry. Previous research using a mass transfer equation was mostly concerned with comparisons between simulation models and experimental data. In this paper, the finite difference method was used to solve a mass equation during drying under two kinds of boundary condition: equilibrium and convective. The results of these two models provide a comparison between the analytical and the numerical solution, and show a close match between the two solution curves. It is concluded that the two proposed models produce an accurate solution describing the moisture content distribution during the drying process, giving confidence in the behaviour of moisture in the numerical simulation. The combined analytical and numerical approach thus demonstrates that the system behaves physically. On this basis, the mass transfer model was extended to include temperature transfer, and the result shows a trend similar to that of the simpler case.
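The kind of model the abstract describes can be sketched in a few lines: a 1-D diffusion (mass transfer) equation solved by explicit finite differences, with either an equilibrium (Dirichlet) or a convective (Robin) surface boundary condition. This is a generic illustration, not the paper's model; all parameter values (D, L, M_eq, k_c) are made-up illustrative numbers.

```python
import numpy as np

# 1-D drying of a slab of half-thickness L: dM/dt = D d2M/dx2,
# with a symmetry condition at the centre and a choice of surface BC.
def dry_slab(bc="equilibrium", D=1e-9, L=5e-3, M0=1.0, M_eq=0.1,
             k_c=1e-7, nx=51, t_end=3600.0):
    dx = L / (nx - 1)
    dt = 0.4 * dx**2 / D                 # stable explicit time step
    M = np.full(nx, M0)
    for _ in range(int(t_end / dt)):
        Mn = M.copy()
        M[1:-1] = Mn[1:-1] + D * dt / dx**2 * (Mn[2:] - 2*Mn[1:-1] + Mn[:-2])
        M[0] = M[1]                      # zero-flux symmetry at the slab centre
        if bc == "equilibrium":
            M[-1] = M_eq                 # surface instantly at equilibrium moisture
        else:                            # convective: -D dM/dx = k_c (M - M_eq)
            M[-1] = (D/dx * M[-2] + k_c * M_eq) / (D/dx + k_c)
    return M
```

The equilibrium condition is the infinite-Biot-number limit of the convective one, so it dries the slab faster; comparing the two profiles reproduces the qualitative comparison discussed in the abstract.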
Mill profiler machines soft materials accurately
NASA Technical Reports Server (NTRS)
Rauschl, J. A.
1966-01-01
Mill profiler machines bevels, slots, and grooves in soft materials, such as styrofoam phenolic-filled cores, to any desired thickness. A single operator can accurately control cutting depths in contour or straight line work.
Accurate paleointensities - the multi-method approach
NASA Astrophysics Data System (ADS)
de Groot, Lennart
2016-04-01
The accuracy of models describing rapid changes in the geomagnetic field over the past millennia critically depends on the availability of reliable paleointensity estimates. Over the past decade, methods to derive paleointensities from lavas (the only recorder of the geomagnetic field that is available all over the globe and through geologic times) have seen significant improvements, and various alternative techniques were proposed. The 'classical' Thellier-style approach was optimized and selection criteria were defined in the 'Standard Paleointensity Definitions' (Paterson et al, 2014). The Multispecimen approach was validated, and the importance of additional tests and criteria to assess Multispecimen results was emphasized. Recently, a non-heating, relative paleointensity technique was proposed -the pseudo-Thellier protocol- which shows great potential in both accuracy and efficiency, but currently lacks a solid theoretical underpinning. Here I present work using all three of the aforementioned paleointensity methods on suites of young lavas taken from the volcanic islands of Hawaii, La Palma, Gran Canaria, Tenerife, and Terceira. Many of the sampled cooling units are <100 years old, so the actual field strength at the time of cooling is reasonably well known. Rather intuitively, flows that produce coherent results from two or more different paleointensity methods yield the most accurate estimates of the paleofield. Furthermore, the results for some flows pass the selection criteria for one method, but fail in other techniques. Scrutinizing and combining all acceptable results yielded reliable paleointensity estimates for 60-70% of all sampled cooling units - an exceptionally high success rate. This 'multi-method paleointensity approach' therefore has high potential to provide the much-needed paleointensities to improve geomagnetic field models for the Holocene.
Fast and accurate predictions of covalent bonds in chemical space
NASA Astrophysics Data System (ADS)
Chang, K. Y. Samuel; Fias, Stijn; Ramakrishnan, Raghunathan; von Lilienfeld, O. Anatole
2016-05-01
We assess the predictive accuracy of perturbation theory based estimates of changes in covalent bonding due to linear alchemical interpolations among molecules. We have investigated σ bonding to hydrogen, as well as σ and π bonding between main-group elements, occurring in small sets of iso-valence-electronic molecules with elements drawn from second to fourth rows in the p-block of the periodic table. Numerical evidence suggests that first order Taylor expansions of covalent bonding potentials can achieve high accuracy if (i) the alchemical interpolation is vertical (fixed geometry), (ii) it involves elements from the third and fourth rows of the periodic table, and (iii) an optimal reference geometry is used. This leads to near linear changes in the bonding potential, resulting in analytical predictions with chemical accuracy (˜1 kcal/mol). Second order estimates deteriorate the prediction. If initial and final molecules differ not only in composition but also in geometry, all estimates become substantially worse, with second order being slightly more accurate than first order. The independent particle approximation based second order perturbation theory performs poorly when compared to the coupled perturbed or finite difference approach. Taylor series expansions up to fourth order of the potential energy curve of highly symmetric systems indicate a finite radius of convergence, as illustrated for the alchemical stretching of H 2+ . Results are presented for (i) covalent bonds to hydrogen in 12 molecules with 8 valence electrons (CH4, NH3, H2O, HF, SiH4, PH3, H2S, HCl, GeH4, AsH3, H2Se, HBr); (ii) main-group single bonds in 9 molecules with 14 valence electrons (CH3F, CH3Cl, CH3Br, SiH3F, SiH3Cl, SiH3Br, GeH3F, GeH3Cl, GeH3Br); (iii) main-group double bonds in 9 molecules with 12 valence electrons (CH2O, CH2S, CH2Se, SiH2O, SiH2S, SiH2Se, GeH2O, GeH2S, GeH2Se); (iv) main-group triple bonds in 9 molecules with 10 valence electrons (HCN, HCP, HCAs, HSiN, HSi
Insight solutions are correct more often than analytic solutions
Salvi, Carola; Bricolo, Emanuela; Kounios, John; Bowden, Edward; Beeman, Mark
2016-01-01
How accurate are insights compared to analytical solutions? In four experiments, we investigated how participants’ solving strategies influenced their solution accuracies across different types of problems, including one that was linguistic, one that was visual and two that were mixed visual-linguistic. In each experiment, participants’ self-judged insight solutions were, on average, more accurate than their analytic ones. We hypothesised that insight solutions have superior accuracy because they emerge into consciousness in an all-or-nothing fashion when the unconscious solving process is complete, whereas analytic solutions can be guesses based on conscious, prematurely terminated, processing. This hypothesis is supported by the finding that participants’ analytic solutions included relatively more incorrect responses (i.e., errors of commission) than timeouts (i.e., errors of omission) compared to their insight responses. PMID:27667960
Accurate Theoretical Thermochemistry for Fluoroethyl Radicals.
Ganyecz, Ádám; Kállay, Mihály; Csontos, József
2017-02-09
An accurate coupled-cluster (CC) based model chemistry was applied to calculate reliable thermochemical quantities for hydrofluorocarbon derivatives including the radicals 1-fluoroethyl (CH3-CHF), 1,1-difluoroethyl (CH3-CF2), 2-fluoroethyl (CH2F-CH2), 1,2-difluoroethyl (CH2F-CHF), 2,2-difluoroethyl (CHF2-CH2), 2,2,2-trifluoroethyl (CF3-CH2), 1,2,2,2-tetrafluoroethyl (CF3-CHF), and pentafluoroethyl (CF3-CF2). The model chemistry used contains iterative triple and perturbative quadruple excitations in CC theory, as well as scalar relativistic and diagonal Born-Oppenheimer corrections. To obtain heat of formation values with better than chemical accuracy, perturbative quadruple excitations and scalar relativistic corrections proved indispensable. Their contributions to the heats of formation steadily increase with the number of fluorine atoms in the radical, reaching 10 kJ/mol for CF3-CF2. When discrepancies were found between the experimental values and ours, it was always possible to resolve the issue by recalculating the experimental result with currently recommended auxiliary data. For each radical studied here, this study delivers the best available heat of formation and entropy data.
Accurate equilibrium structures for piperidine and cyclohexane.
Demaison, Jean; Craig, Norman C; Groner, Peter; Écija, Patricia; Cocinero, Emilio J; Lesarri, Alberto; Rudolph, Heinz Dieter
2015-03-05
Extended and improved microwave (MW) measurements are reported for the isotopologues of piperidine. New ground state (GS) rotational constants are fitted to MW transitions with quartic centrifugal distortion constants taken from ab initio calculations. Predicate values for the geometric parameters of piperidine and cyclohexane are found from a high level of ab initio theory including adjustments for basis set dependence and for correlation of the core electrons. Equilibrium rotational constants are obtained from GS rotational constants corrected for vibration-rotation interactions and electronic contributions. Equilibrium structures for piperidine and cyclohexane are fitted by the mixed estimation method. In this method, structural parameters are fitted concurrently to predicate parameters (with appropriate uncertainties) and moments of inertia (with uncertainties). The new structures are regarded as being accurate to 0.001 Å and 0.2°. Comparisons are made between bond parameters in equatorial piperidine and cyclohexane. Another interesting result of this study is that a structure determination is an effective way to check the accuracy of the ground state experimental rotational constants.
Accurate upper body rehabilitation system using kinect.
Sinha, Sanjana; Bhowmick, Brojeshwar; Chakravarty, Kingshuk; Sinha, Aniruddha; Das, Abhijit
2016-08-01
The growing importance of Kinect as a tool for clinical assessment and rehabilitation is due to its portability, low cost and markerless system for human motion capture. However, the accuracy of Kinect in measuring three-dimensional body joint center locations often fails to meet clinical standards when compared to marker-based motion capture systems such as Vicon. The length of the body segment connecting any two joints, measured as the distance between three-dimensional Kinect skeleton joint coordinates, has been observed to vary with time. The orientation of the line connecting adjoining Kinect skeletal coordinates has also been seen to differ from the actual orientation of the physical body segment. Hence we have proposed an optimization method that utilizes Kinect depth and RGB information to search for the joint center location that satisfies constraints on both body segment length and orientation. An experimental study has been carried out on ten healthy participants performing upper body range of motion exercises. The results report a 72% reduction in body segment length variance and a 2° improvement in Range of Motion (ROM) angle, enabling more accurate measurements for upper limb exercises.
SU-E-T-517: Analytic Formalism to Compute in Real Time Dose Distributions Delivered by HDR Units
Pokhrel, S; Loyalka, S; Palaniswaamy, G; Rangaraj, D; Izaguirre, E
2014-06-01
Purpose: Develop an analytical algorithm to compute the dose delivered by Ir-192 dwell positions with high accuracy using the 3-dimensional (3D) dose distribution of an HDR source. Using our analytical function, the dose delivered by an HDR unit as treatment progresses can be determined from the actual delivered temporal and positional data of each individual dwell. Consequently, the true delivered dose can be computed when each catheter becomes active. We hypothesize that such an analytical formulation will allow developing HDR systems with a real-time treatment evaluation tool to avoid mistreatments. Methods: In our analytic formulation, the dose is computed by using the full anisotropy function data of the TG-43 formalism with a 3D ellipsoidal function. The discrepancy between the planned dose and the delivered dose is computed using an analytic perturbation method over the initial dose distribution. This methodology speeds up the computation because only changes in dose discrepancies originating from spatial and temporal deviations are computed. A dose difference map at the point of interest is obtained from these functions, and this difference can be shown during treatment in real time to examine the treatment accuracy. Results: We determine the analytical solution and a perturbation function for the 3 translational, 3 rotational, and 1D temporal errors in source distributions. The analytic formulation is a sequence of simple equations that can be processed on any modern computer in a few seconds. Because computations are based on an analytical solution, small dose deviations caused by sub-millimeter positional changes can be detected. Conclusions: We formulated an analytical method to compute 4D dose distributions and dose differences based on an analytical solution and perturbations to the original dose. This method is highly accurate.
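The perturbation idea above can be illustrated in miniature: instead of recomputing the full dose, estimate the dose change at a point of interest from a first-order expansion around the planned dwell position. A bare inverse-square "source" stands in for the full TG-43 dose model here, and the 1 mm positional error is illustrative; none of this is the paper's actual formulation.

```python
import numpy as np

def dose(p, s):
    # toy point-source dose: inverse square of the point-to-source distance
    r = np.linalg.norm(p - s)
    return 1.0 / r**2

def dose_shift_first_order(p, s, ds):
    # first-order dose change for a small shift ds of the source position:
    # d(1/r^2)/ds = 2 (p - s) / r^4, dotted with the shift
    r_vec = p - s
    r = np.linalg.norm(r_vec)
    return (2.0 * r_vec / r**4) @ ds

p = np.array([0.0, 0.0, 50.0])     # point of interest (mm)
s = np.array([0.0, 0.0, 0.0])      # planned dwell position (mm)
ds = np.array([0.0, 0.0, 1.0])     # 1 mm positional error toward the point
exact = dose(p, s + ds) - dose(p, s)
approx = dose_shift_first_order(p, s, ds)
```

Because only the perturbation term is evaluated per deviation, a map of such differences can be refreshed in real time, which is the efficiency argument made in the abstract.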
Analytical methods under emergency conditions
Sedlet, J.
1983-01-01
This lecture discusses methods for the radiochemical determination of internal contamination of the body under emergency conditions, here defined as a situation in which results on internal radioactive contamination are needed quickly. The purpose of speed is to determine the necessity for medical treatment to increase the natural elimination rate. Analytical methods discussed include whole-body counting, organ counting, wound monitoring, and excreta analysis. 12 references. (ACR)
Advances in analytical chemistry
NASA Technical Reports Server (NTRS)
Arendale, W. F.; Congo, Richard T.; Nielsen, Bruce J.
1991-01-01
Implementation of computer programs based on multivariate statistical algorithms makes possible obtaining reliable information from long data vectors that contain large amounts of extraneous information, for example, noise and/or analytes that we do not wish to control. Three examples are described. Each of these applications requires the use of techniques characteristic of modern analytical chemistry. The first example, using a quantitative or analytical model, describes the determination of the acid dissociation constant for 2,2'-pyridyl thiophene using archived data. The second example describes an investigation to determine the active biocidal species of iodine in aqueous solutions. The third example is taken from a research program directed toward advanced fiber-optic chemical sensors. The second and third examples require heuristic or empirical models.
Competing on talent analytics.
Davenport, Thomas H; Harris, Jeanne; Shapiro, Jeremy
2010-10-01
Do investments in your employees actually affect workforce performance? Who are your top performers? How can you empower and motivate other employees to excel? Leading-edge companies such as Google, Best Buy, Procter & Gamble, and Sysco use sophisticated data-collection technology and analysis to answer these questions, leveraging a range of analytics to improve the way they attract and retain talent, connect their employee data to business performance, differentiate themselves from competitors, and more. The authors present the six key ways in which companies track, analyze, and use data about their people-ranging from a simple baseline of metrics to monitor the organization's overall health to custom modeling for predicting future head count depending on various "what if" scenarios. They go on to show that companies competing on talent analytics manage data and technology at an enterprise level, support what analytical leaders do, choose realistic targets for analysis, and hire analysts with strong interpersonal skills as well as broad expertise.
Physics-based analytical model for ferromagnetic single electron transistor
NASA Astrophysics Data System (ADS)
Jamshidnezhad, K.; Sharifi, M. J.
2017-03-01
A physically based compact analytical model is proposed for a ferromagnetic single electron transistor (FSET). This model is based on the orthodox theory and solves the master equation, spin conservation equation, and charge neutrality equation simultaneously. The model can be applied to both symmetric and asymmetric devices and does not introduce any limitation on the applied bias voltages. This feature makes the model suitable for both analog and digital applications. To verify the accuracy of the model, its results regarding a typical FSET in both low and high voltage regimes are compared with the existing numerical results. Moreover, the model's results of a parallel configuration FSET, where no spin accumulation exists in the island, are compared with the results obtained from a Monte Carlo simulation using SIMON. These two comparisons show that our model is valid and accurate. As another comparison, the model is compared analytically with an existing model for a double barrier ferromagnetic junction (having no gate). This also verifies the accuracy of the model.
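The orthodox-theory workflow the model builds on can be sketched generically: enumerate island charge states n, assign tunnelling rates from the free-energy change of each tunnelling event, assemble the master-equation generator, and solve for the steady state. This sketch assumes a symmetric two-junction device with the bias split evenly, reduced units with e = 1, and no gate; the spin-accumulation and charge-neutrality equations of the actual FSET model are omitted.

```python
import numpy as np

def rate(dF, R, kT):
    # orthodox tunnelling rate: Gamma = dF / (e^2 R (1 - exp(-dF/kT))),
    # with e = 1; the dF -> 0 limit is kT / R
    x = dF / kT
    if abs(x) < 1e-12:
        return kT / R
    return (dF / R) / (1.0 - np.exp(-x))

def steady_state(nmax=5, Ec=1.0, kT=0.1, V=0.5, R=1.0):
    ns = np.arange(-nmax, nmax + 1)          # island charge states
    G = np.zeros((ns.size, ns.size))
    for i, n in enumerate(ns[:-1]):
        dF_in = V / 2 - Ec * (2 * n + 1)     # free-energy gain, n -> n+1
        dF_out = -V / 2 + Ec * (2 * n + 1)   # free-energy gain, n+1 -> n
        G[i + 1, i] += rate(dF_in, R, kT)
        G[i, i + 1] += rate(dF_out, R, kT)
    G -= np.diag(G.sum(axis=0))              # columns sum to zero (generator)
    w, v = np.linalg.eig(G)                  # steady state = null vector of G
    p = np.real(v[:, np.argmin(np.abs(w))])
    p = np.abs(p) / np.abs(p).sum()
    return ns, p
```

In the Coulomb-blockade regime (kT, V well below the charging energy) the stationary distribution concentrates on the neutral state, which is a quick sanity check on any such solver.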
Analytical optical scattering in clouds
NASA Technical Reports Server (NTRS)
Phanord, Dieudonne D.
1989-01-01
An analytical optical model for the scattering of light due to lightning by clouds of different geometry is being developed. The self-consistent approach and the equivalent-medium concept of Twersky were used to treat the case corresponding to outside illumination. Thus, the resulting multiple scattering problem is transformed, with the knowledge of the bulk parameters, into scattering by a single obstacle in isolation. Based on the size parameter of a typical water droplet compared to the incident wavelength, the problem for the single scatterer equivalent to the distribution of cloud particles can be solved either by Mie or Rayleigh scattering theory. The supercomputing code of Wiscombe can be used immediately to produce results that can be compared to the Monte Carlo computer simulation for outside incidence. A fairly reasonable inverse approach using the solution of the outside illumination case was proposed to model analytically the situation for point sources located inside the optically thick cloud. Its mathematical details are still being investigated; when finished, it will provide scientists with an enhanced capability to study more realistic clouds. For testing purposes, the direct approach to the inside illumination of clouds by lightning is under consideration. An analytical solution for the cubic cloud will soon be obtained. For cylindrical or spherical clouds, preliminary results are needed for scattering by bounded obstacles above or below a penetrable surface interface.
Monitoring the analytic surface.
Spence, D P; Mayes, L C; Dahl, H
1994-01-01
How do we listen during an analytic hour? Systematic analysis of the speech patterns of one patient (Mrs. C.) strongly suggests that the clustering of shared pronouns (e.g., you/me) represents an important aspect of the analytic surface, preconsciously sensed by the analyst and used by him to determine when to intervene. Sensitivity to these patterns increases over the course of treatment, and in a final block of 10 hours shows a striking degree of contingent responsivity: specific utterances by the patient are consistently echoed by the analyst's interventions.
Frontiers in analytical chemistry
Amato, I.
1988-12-15
Doing more with less was the modus operandi of R. Buckminster Fuller, the late science genius and inventor of such things as the geodesic dome. In late September, chemists described their own version of this maxim--learning more chemistry from less material and in less time--in a symposium titled Frontiers in Analytical Chemistry at the 196th National Meeting of the American Chemical Society in Los Angeles. Symposium organizer Allen J. Bard of the University of Texas at Austin assembled six speakers, himself among them, to survey widely different areas of analytical chemistry.
NASA Astrophysics Data System (ADS)
Zhang, Shunli; Zhang, Dinghua; Gong, Hao; Ghasemalizadeh, Omid; Wang, Ge; Cao, Guohua
2014-11-01
Iterative algorithms, such as the algebraic reconstruction technique (ART), are popular for image reconstruction. For iterative reconstruction, the area integral model (AIM) is more accurate for better reconstruction quality than the line integral model (LIM). However, the computation of the system matrix for AIM is more complex and time-consuming than that for LIM. Here, we propose a fast and accurate method to compute the system matrix for AIM. First, we calculate the intersection of each boundary line of a narrow fan-beam with pixels in a recursive and efficient manner. Then, by grouping the beam-pixel intersection area into six types according to the slopes of the two boundary lines, we analytically compute the intersection area of the narrow fan-beam with the pixels in a simple algebraic fashion. Overall, experimental results show that our method is about three times faster than the Siddon algorithm and about two times faster than the distance-driven model (DDM) in computation of the system matrix. The reconstruction speed of our AIM-based ART is also faster than the LIM-based ART that uses the Siddon algorithm and DDM-based ART, for one iteration. The fast reconstruction speed of our method was accomplished without compromising the image quality.
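For contrast with the paper's area-integral system matrix, the simpler line-integral (Siddon-type) baseline it is compared against can be sketched as follows: the intersection length of one ray with each pixel of a unit-spaced grid, found by merging the parametric crossings of the grid lines. This is a hypothetical minimal helper, not the authors' code, and it handles one ray of a 2-D grid with pixel corners at integer coordinates.

```python
import numpy as np

def ray_pixel_lengths(p0, p1, nx, ny):
    # Returns {(i, j): length} for the segment p0 -> p1 crossing an
    # nx-by-ny grid of unit pixels covering [0, nx] x [0, ny].
    p0, p1 = np.asarray(p0, float), np.asarray(p1, float)
    d = p1 - p0
    alphas = [0.0, 1.0]
    for axis, n in ((0, nx), (1, ny)):
        if d[axis] != 0.0:
            a = (np.arange(n + 1) - p0[axis]) / d[axis]  # grid-line crossings
            alphas.extend(a[(a > 0.0) & (a < 1.0)])
    alphas = np.unique(alphas)
    seg = np.linalg.norm(d)
    lengths = {}
    for a0, a1 in zip(alphas[:-1], alphas[1:]):
        mid = p0 + 0.5 * (a0 + a1) * d       # midpoint identifies the pixel
        i, j = int(mid[0]), int(mid[1])
        if 0 <= i < nx and 0 <= j < ny:
            lengths[(i, j)] = lengths.get((i, j), 0.0) + (a1 - a0) * seg
    return lengths
```

The area-integral model replaces each such length by the intersection area of a narrow fan-beam with the pixel, which is what the recursive boundary-line computation and the six slope-based cases in the abstract accelerate.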
Accurate Measurement of the in vivo Ammonium Concentration in Saccharomyces cerevisiae.
Cueto-Rojas, Hugo F; Maleki Seifar, Reza; Ten Pierick, Angela; Heijnen, Sef J; Wahl, Aljoscha
2016-04-23
Ammonium (NH₄⁺) is the most common N-source for yeast fermentations, and N-limitation is frequently applied to reduce growth and increase product yields. While there is significant molecular knowledge on NH₄⁺ transport and assimilation, there have been few attempts to measure the in vivo concentration of this metabolite. In this article, we present a sensitive and accurate analytical method to quantify the in vivo intracellular ammonium concentration in Saccharomyces cerevisiae based on standard rapid sampling and metabolomics techniques. The method validation experiments required the development of a proper sample processing protocol to minimize ammonium production/consumption during biomass extraction by assessing the impact of amino acid degradation-an element that is often overlooked. The resulting cold chloroform metabolite extraction method, together with quantification using ultra high performance liquid chromatography-isotope dilution mass spectrometry (UHPLC-IDMS), was not only more sensitive than most of the existing methods but also more accurate than methods that use electrodes, enzymatic reactions, or boiling water or boiling ethanol biomass extraction because it minimized ammonium consumption/production during sampling processing and interference from other metabolites in the quantification of intracellular ammonium. Finally, our validation experiments showed that other metabolites such as pyruvate or 2-oxoglutarate (αKG) need to be extracted with cold chloroform to avoid measurements being biased by the degradation of other metabolites (e.g., amino acids).
Accurate Computation of Divided Differences of the Exponential Function,
1983-06-01
The divided differences considered are not those of arbitrary smooth functions f but of well-known analytic functions such as exp, sin, and cos, whose properties can therefore be exploited. Divided differences have a bad name in practice; however, in a number of applications the functional form of f is known (e.g., exp) and can be exploited to obtain accurate results.
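A two-point example shows why exploiting the known function matters: the naive forward formula (e^x1 - e^x0)/(x1 - x0) suffers catastrophic cancellation for close nodes, while the analytically equivalent form exp((x0+x1)/2) * sinh(h/2)/(h/2), with h = x1 - x0, stays accurate. This illustrates the report's theme rather than its actual algorithm.

```python
import numpy as np

def dd_exp_naive(x0, x1):
    # first divided difference of exp, computed directly (cancels for x1 ~ x0)
    return (np.exp(x1) - np.exp(x0)) / (x1 - x0)

def dd_exp_stable(x0, x1):
    # same quantity via exp(mid) * sinh(h/2) / (h/2), exact algebraic identity
    h = x1 - x0
    return np.exp(0.5 * (x0 + x1)) * np.sinh(0.5 * h) / (0.5 * h)
```

For well-separated nodes both forms agree; as h -> 0 the stable form smoothly approaches exp(x0), which is the correct confluent limit of the divided difference.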
Important Nearby Galaxies without Accurate Distances
NASA Astrophysics Data System (ADS)
McQuinn, Kristen
2014-10-01
The Spitzer Infrared Nearby Galaxies Survey (SINGS) and its offspring programs (e.g., THINGS, HERACLES, KINGFISH) have resulted in a fundamental change in our view of star formation and the ISM in galaxies, and together they represent the most complete multi-wavelength data set yet assembled for a large sample of nearby galaxies. These great investments of observing time have been dedicated to the goal of understanding the interstellar medium, the star formation process, and, more generally, galactic evolution at the present epoch. Nearby galaxies provide the basis on which we interpret the distant universe, and the SINGS sample represents the best studied nearby galaxies. Accurate distances are fundamental to interpreting observations of galaxies. Surprisingly, many of the SINGS spiral galaxies have numerous conflicting distance estimates. We can rectify this situation for 8 of the SINGS spiral galaxies within 10 Mpc at very low cost through measurements of the tip of the red giant branch. The proposed observations will provide an accuracy of better than 0.1 in distance modulus. Our sample includes such well-known galaxies as M51 (the Whirlpool), M63 (the Sunflower), M104 (the Sombrero), and M74 (the archetypal grand-design spiral). We are also proposing coordinated parallel WFC3 UV observations of the central regions of the galaxies, rich with high-mass UV-bright stars. As a secondary science goal we will compare the resolved UV stellar populations with integrated UV emission measurements used in calibrating star formation rates. Our observations will complement the growing HST UV atlas of high-resolution images of nearby galaxies.
Capacity of the circular plate condenser: analytical solutions for large gaps between the plates
NASA Astrophysics Data System (ADS)
Rao, T. V.
2005-11-01
A solution of Love's integral equation (Love E R 1949 Q. J. Mech. Appl. Math. 2 428), which forms the basis for the analysis of the electrostatic field due to two equal circular co-axial parallel conducting plates, is considered for the case when the ratio, τ, of distance of separation to radius of the plates is greater than 2. The kernel of the integral equation is expanded into an infinite series in odd powers of 1/τ and an approximate kernel accurate to O(τ^{-(2N+1)}) is deduced therefrom by terminating the series after an arbitrary but finite number of terms, N. The approximate kernel is rearranged into a degenerate form and the integral equation with this kernel is reduced to a system of N linear equations. An explicit analytical solution is obtained for N = 4 and the resulting analytical expression for the capacity of the circular plate condenser is shown to be accurate to O(τ^{-9}). Analytical expressions of lower orders of accuracy with respect to 1/τ are deduced from the four-term (i.e., N = 4) solution, and predictions of capacity from the expressions of different orders of accuracy (with respect to 1/τ) are compared with very accurate numerical solutions obtained by solving the linear system for large enough N. It is shown that the O(τ^{-9}) approximation predicts the capacity extremely well for any τ >= 2 and that an O(τ^{-3}) approximation gives, for all practical purposes, results of adequate accuracy for τ >= 4. It is further shown that an approximate solution, applicable for the case of large distances of separation between the plates, due to Sneddon (Sneddon I N 1966 Mixed Boundary Value Problems in Potential Theory (Amsterdam: North-Holland) pp 230-46) is accurate to O(τ^{-6}) for τ >= 2.
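For readers who want to reproduce the "very accurate numerical solutions" route numerically, Love's equation can be solved directly by a Nyström (quadrature) discretization. The sketch below is a minimal version of that approach; the sign convention used for the condenser kernel and the capacity normalization c = C/(4·ε0·a) are assumptions checked only against the large-τ limit, not taken from the paper.

```python
import numpy as np

def love_capacity(tau: float, n: int = 200) -> float:
    """Solve Love's integral equation for the circular plate condenser,
        f(x) = 1 + (1/pi) * Integral_{-1}^{1} tau / (tau**2 + (x-t)**2) f(t) dt,
    by Nystrom discretization with the trapezoidal rule, and return an
    assumed normalized capacity c = Integral_0^1 f(t) dt (so that, in the
    normalization guessed here, C = 4*eps0*a*c; conventions vary)."""
    x, h = np.linspace(-1.0, 1.0, n, retstep=True)
    w = np.full(n, h)
    w[0] = w[-1] = h / 2.0                       # trapezoidal weights
    K = (tau / np.pi) / (tau**2 + (x[:, None] - x[None, :])**2)
    f = np.linalg.solve(np.eye(n) - K * w[None, :], np.ones(n))
    return 0.5 * float(np.sum(w * f))            # f is even, so 0.5 * int_{-1}^{1}

for tau in (2.0, 4.0, 8.0):
    print(tau, love_capacity(tau))
```

For large τ the normalized capacity should approach 1 + 2/(π·τ), which provides a quick sanity check on both the sign convention and the quadrature.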
Visual Analytics for MOOC Data.
Qu, Huamin; Chen, Qing
2015-01-01
With the rise of massive open online courses (MOOCs), tens of millions of learners can now enroll in more than 1,000 courses via MOOC platforms such as Coursera and edX. As a result, a huge amount of data has been collected. Compared with traditional education records, the data from MOOCs has much finer granularity and also contains new pieces of information. It is the first time in history that such comprehensive data related to learning behavior has become available for analysis. What roles can visual analytics play in this MOOC movement? The authors survey the current practice and argue that MOOCs provide an opportunity for visualization researchers and that visual analytics systems for MOOCs can benefit a range of end users such as course instructors, education researchers, students, university administrators, and MOOC providers.
Analytical Services Management System
Church, Shane; Nigbor, Mike; Hillman, Daniel
2005-03-30
Analytical Services Management System (ASMS) provides sample management services. Sample management includes sample planning for analytical requests, sample tracking for shipping and receiving by the laboratory, receipt of the analytical data deliverable, processing the deliverable and payment of the laboratory conducting the analyses. ASMS is a web-based application that provides the ability to manage these activities at multiple locations for different customers. ASMS provides for the assignment of single to multiple samples for standard chemical and radiochemical analyses. ASMS is a flexible system which allows the users to request analyses by line item code. Line item codes are selected based on the Basic Ordering Agreement (BOA) format for contracting with participating laboratories. ASMS also allows contracting with non-BOA laboratories using a similar line item code contracting format for their services. ASMS allows sample and analysis tracking from sample planning and collection in the field through sample shipment, laboratory sample receipt, laboratory analysis and submittal of the requested analyses, electronic data transfer, and payment of the laboratories for the completed analyses. The software, when in operation, contains business-sensitive material that is used as a principal portion of the Kaiser Analytical Management Services business model. The software version provided is the most recent version; however, the copy of the application does not contain business-sensitive data from the associated Oracle tables, such as contract information or price per line item code.
Analytics: Changing the Conversation
ERIC Educational Resources Information Center
Oblinger, Diana G.
2013-01-01
In this third and concluding discussion on analytics, the author notes that we live in an information culture. We are accustomed to having information instantly available and accessible, along with feedback and recommendations. We want to know what people think and like (or dislike). We want to know how we compare with "others like me."…
Analytical Chemistry Laboratory
NASA Technical Reports Server (NTRS)
Anderson, Mark
2013-01-01
The Analytical Chemistry and Material Development Group maintains a capability in chemical analysis, materials R&D, failure analysis and contamination control. The uniquely qualified staff and facility support the needs of flight projects, science instrument development and various technical tasks, as well as Caltech.
ERIC Educational Resources Information Center
Buckingham Shum, Simon; Ferguson, Rebecca
2012-01-01
We propose that the design and implementation of effective "Social Learning Analytics (SLA)" present significant challenges and opportunities for both research and enterprise, in three important respects. The first is that the learning landscape is extraordinarily turbulent at present, in no small part due to technological drivers.…
Accurate Biomass Estimation via Bayesian Adaptive Sampling
NASA Technical Reports Server (NTRS)
Wheeler, Kevin R.; Knuth, Kevin H.; Castle, Joseph P.; Lvov, Nikolay
2005-01-01
The following concepts were introduced: a) Bayesian adaptive sampling for solving biomass estimation; b) characterization of MISR Rahman model parameters conditioned upon MODIS land cover; c) a rigorous non-parametric Bayesian approach to analytic mixture model determination; d) a unique U.S. asset for science product validation and verification.
Two Approaches in the Lunar Libration Theory: Analytical vs. Numerical Methods
NASA Astrophysics Data System (ADS)
Petrova, Natalia; Zagidullin, Arthur; Nefediev, Yurii; Kosulin, Valerii
2016-10-01
Observation of the physical libration of the Moon and other celestial bodies is one of the astronomical methods to remotely evaluate the internal structure of a celestial body without using expensive space experiments. A review of the results obtained through the study of physical libration is presented in the report. The main emphasis is placed on the description of successful lunar laser ranging for libration determination and on the methods of simulating the physical libration. As a result, estimates of the viscoelastic and dissipative properties of the lunar body and of the lunar core parameters were obtained. The core's existence was confirmed by the recent reprocessing of seismic data from the Apollo missions. Attention is paid to the physical interpretation of the phenomenon of free libration and methods of its determination. A significant part of the report is devoted to describing the practical application of the most accurate analytical tables of lunar libration to date, built by comprehensive analytical processing of the residual differences obtained when comparing long-term series of laser observations with the numerical ephemeris DE421 [1]. In general, the basic outline of the report reflects the effectiveness of two approaches in libration theory: numerical and analytical solution. It is shown that the two approaches complement each other in the study of the Moon in different aspects: the numerical approach provides the high accuracy of the theory necessary for adequate treatment of modern high-accuracy observations, while the analytic approach allows one to see the essence of the various manifestations in the lunar rotation and to predict and interpret new effects in observations of physical libration [2]. [1] Rambaux, N., J. G. Williams, 2011, The Moon's physical librations and determination of their free modes, Celest. Mech. Dyn. Astron., 109, 85-100. [2] Petrova N., A. Zagidullin, Yu. Nefediev. Analysis of long-periodic variations of lunar libration parameters on the basis of
NASA Astrophysics Data System (ADS)
Gong, J.; Thompson, L.; Li, G.
2016-12-01
A semi-analytical model for determining the equilibrium configuration and the radial breathing mode (RBM) frequency of single-wall carbon nanotubes (CNTs) is presented. By taking advantage of the symmetry characteristics, a CNT structure is represented by five independent variables. A line search optimization procedure is employed to determine the equilibrium values of these variables by minimizing the potential energy. With the equilibrium configuration obtained, the semi-analytical model enables an efficient calculation of the RBM frequency of the CNTs. The radius and radial breathing mode frequency results obtained from the semi-analytical approach are compared with those from molecular dynamics (MD) and ab initio calculations. The results demonstrate that the semi-analytical approach offers an efficient and accurate way to determine the equilibrium structure and radial breathing mode frequency of CNTs.
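A common shortcut in the CNT literature relates the radial breathing mode frequency to the tube diameter through an empirical ω ≈ A/d law. The sketch below combines that law with the rolled-graphene diameter formula; the constant A ≈ 228 cm⁻¹·nm and the graphene lattice constant 0.246 nm are assumed literature-typical values, not outputs of the semi-analytical model described above.

```python
import math

A_LATTICE = 0.246   # graphene lattice constant, nm
A_RBM = 228.0       # empirical RBM constant, cm^-1 * nm (assumed value)

def cnt_diameter(n: int, m: int) -> float:
    """Diameter (nm) of an (n, m) nanotube from rolled-graphene geometry:
    d = a * sqrt(n^2 + n*m + m^2) / pi."""
    return A_LATTICE * math.sqrt(n * n + n * m + m * m) / math.pi

def rbm_frequency(n: int, m: int) -> float:
    """Radial breathing mode frequency (cm^-1) via the empirical omega ~ A/d."""
    return A_RBM / cnt_diameter(n, m)

print(cnt_diameter(10, 10))   # ~1.36 nm for the (10, 10) armchair tube
print(rbm_frequency(10, 10))
```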
Palm: Easing the Burden of Analytical Performance Modeling
Tallent, Nathan R.; Hoisie, Adolfy
2014-06-01
Analytical (predictive) application performance models are critical for diagnosing performance-limiting resources, optimizing systems, and designing machines. Creating models, however, is difficult because they must be both accurate and concise. To ease the burden of performance modeling, we developed Palm, a modeling tool that combines top-down (human-provided) semantic insight with bottom-up static and dynamic analysis. To express insight, Palm defines a source code modeling annotation language. By coordinating models and source code, Palm's models are `first-class' and reproducible. Unlike prior work, Palm formally links models, functions, and measurements. As a result, Palm (a) uses functions to either abstract or express complexity (b) generates hierarchical models (representing an application's static and dynamic structure); and (c) automatically incorporates measurements to focus attention, represent constant behavior, and validate models. We discuss generating models for three different applications.
A new and accurate continuum description of moving fronts
NASA Astrophysics Data System (ADS)
Johnston, S. T.; Baker, R. E.; Simpson, M. J.
2017-03-01
Processes that involve moving fronts of populations are prevalent in ecology and cell biology. A common approach to describe these processes is a lattice-based random walk model, which can include mechanisms such as crowding, birth, death, movement and agent–agent adhesion. However, these models are generally analytically intractable and it is computationally expensive to perform sufficiently many realisations of the model to obtain an estimate of average behaviour that is not dominated by random fluctuations. To avoid these issues, both mean-field (MF) and corrected mean-field (CMF) continuum descriptions of random walk models have been proposed. However, both continuum descriptions are inaccurate outside of limited parameter regimes, and CMF descriptions cannot be employed to describe moving fronts. Here we present an alternative description in terms of the dynamics of groups of contiguous occupied lattice sites and contiguous vacant lattice sites. Our description provides an accurate prediction of the average random walk behaviour in all parameter regimes. Critically, our description accurately predicts the persistence or extinction of the population in situations where previous continuum descriptions predict the opposite outcome. Furthermore, unlike traditional MF models, our approach provides information about the spatial clustering within the population and, subsequently, the moving front.
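A minimal example of the class of lattice-based random walk models described above (crowding via simple exclusion, plus proliferation) and of why many realisations must be averaged is sketched below. The parameter values and the random sequential update rule are illustrative assumptions, not the paper's model.

```python
import random

def simulate(width=200, init_occupied=20, pm=1.0, pp=0.1,
             steps=100, reps=20, seed=0):
    """Average occupancy profile of a 1D lattice random walk with crowding
    (simple exclusion), motility probability pm and proliferation probability
    pp per time step. Returns the per-site occupancy averaged over reps
    realisations (a single realisation is dominated by random fluctuations)."""
    rng = random.Random(seed)
    total = [0.0] * width
    for _ in range(reps):
        occ = [i < init_occupied for i in range(width)]   # front starts at the left
        for _ in range(steps):
            # random sequential update; an agent that moves may be updated
            # again later in the same sweep (acceptable for a sketch)
            for i in rng.sample(range(width), width):
                if not occ[i]:
                    continue
                if rng.random() < pm:                     # attempted move
                    j = i + rng.choice((-1, 1))
                    if 0 <= j < width and not occ[j]:     # exclusion: blocked if full
                        occ[i], occ[j] = False, True
                        i = j
                if rng.random() < pp:                     # attempted proliferation
                    j = i + rng.choice((-1, 1))
                    if 0 <= j < width and not occ[j]:
                        occ[j] = True
        for i, o in enumerate(occ):
            total[i] += o
    return [t / reps for t in total]

profile = simulate()
```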
Analytical quality by design: a tool for regulatory flexibility and robust analytics.
Peraman, Ramalingam; Bhadraya, Kalva; Padmanabha Reddy, Yiragamreddy
2015-01-01
Very recently, the Food and Drug Administration (FDA) has approved a few new drug applications (NDAs) with regulatory flexibility for quality-by-design (QbD) based analytical approaches. The concept of QbD applied to analytical method development is now known as AQbD (analytical quality by design). It allows the analytical method to move within the method operable design region (MODR). Unlike current methods, an analytical method developed using the AQbD approach reduces the number of out-of-trend (OOT) and out-of-specification (OOS) results owing to the robustness of the method within the region. It is a current trend in the pharmaceutical industry to implement AQbD in the method development process as part of risk management, pharmaceutical development, and the pharmaceutical quality system (ICH Q10). Owing to the lack of explanatory reviews, this paper discusses different views of analytical scientists about the implementation of AQbD in the pharmaceutical quality system and also correlates it with product quality by design and process analytical technology (PAT).
Lee, Ping I
2011-10-10
The purpose of this review is to provide an overview of approximate analytical solutions to the general moving boundary diffusion problems encountered during the release of a dispersed drug from matrix systems. Starting from the theoretical basis of the Higuchi equation and its subsequent improvement and refinement, available approximate analytical solutions for the more complicated cases involving heterogeneous matrix, boundary layer effect, finite release medium, surface erosion, and finite dissolution rate are also discussed. Among various modeling approaches, the pseudo-steady state assumption employed in deriving the Higuchi equation and related approximate analytical solutions appears to yield reasonably accurate results in describing the early stage release of a dispersed drug from matrices of different geometries whenever the initial drug loading (A) is much larger than the drug solubility (C(s)) in the matrix (or A≫C(s)). However, when the drug loading is not in great excess of the drug solubility (i.e. low A/C(s) values) or when the drug loading approaches the drug solubility (A→C(s)), which occurs often with drugs of high aqueous solubility, approximate analytical solutions based on the pseudo-steady state assumption tend to fail, with the Higuchi equation for planar geometry exhibiting an 11.38% error as compared with the exact solution. In contrast, approximate analytical solutions to this problem without making the pseudo-steady state assumption, based on either the double-integration refinement of the heat balance integral method or the direct simplification of available exact analytical solutions, show close agreement with the exact solutions in different geometries, particularly in the case of low A/C(s) values or drug loading approaching the drug solubility (A→C(s)). However, the double-integration heat balance integral approach is generally more useful in obtaining approximate analytical solutions especially when exact solutions are not
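The Higuchi equation discussed above has a simple closed form for a planar matrix, Q(t) = sqrt(D·Cs·(2A − Cs)·t). The sketch below evaluates it across A/Cs ratios; the units and parameter values are hypothetical, and per the abstract the pseudo-steady-state form becomes unreliable (an 11.38% error in the planar case) as A approaches Cs.

```python
import math

def higuchi_release(D: float, Cs: float, A: float, t: float) -> float:
    """Cumulative drug released per unit area from a planar matrix,
    Q(t) = sqrt(D * Cs * (2*A - Cs) * t),
    under the pseudo-steady-state assumption (valid for A >> Cs)."""
    return math.sqrt(D * Cs * (2.0 * A - Cs) * t)

# Hypothetical units: D in cm^2/s, concentrations in mg/cm^3, t in s.
D, Cs = 1e-6, 1.0
for ratio in (20.0, 2.0, 1.0):      # A/Cs from large excess down to A -> Cs
    A = ratio * Cs
    Q = higuchi_release(D, Cs, A, t=3600.0)
    print(f"A/Cs = {ratio:4.1f}: Q(1 h) = {Q:.4f} mg/cm^2")
```

Note the characteristic square-root-of-time release: doubling t multiplies Q by sqrt(2).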
Accurate pointing of tungsten welding electrodes
NASA Technical Reports Server (NTRS)
Ziegelmeier, P.
1971-01-01
Thoriated-tungsten is pointed accurately and quickly by using sodium nitrite. Point produced is smooth and no effort is necessary to hold the tungsten rod concentric. The chemically produced point can be used several times longer than ground points. This method reduces time and cost of preparing tungsten electrodes.
Nonexposure accurate location K-anonymity algorithm in LBS.
Jia, Jinying; Zhang, Fengli
2014-01-01
This paper tackles location privacy protection in current location-based services (LBS) where mobile users have to report their exact location information to an LBS provider in order to obtain their desired services. Location cloaking has been proposed and well studied to protect user privacy. It blurs the user's accurate coordinate and replaces it with a well-shaped cloaked region. However, to obtain such an anonymous spatial region (ASR), nearly all existent cloaking algorithms require knowing the accurate locations of all users. Therefore, location cloaking without exposing the user's accurate location to any party is urgently needed. In this paper, we present such two nonexposure accurate location cloaking algorithms. They are designed for K-anonymity, and cloaking is performed based on the identifications (IDs) of the grid areas which were reported by all the users, instead of directly on their accurate coordinates. Experimental results show that our algorithms are more secure than the existent cloaking algorithms, need not have all the users reporting their locations all the time, and can generate smaller ASR.
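The core idea of cloaking on grid-area identifications rather than exact coordinates can be sketched as follows. This is a minimal illustration, not either of the paper's two algorithms: the cell size, the square-growth strategy, and the assumption that at least k users have reported cells are all choices made here for brevity.

```python
from collections import Counter

CELL = 100.0   # grid cell size in metres (assumed)

def cell_id(x: float, y: float) -> tuple:
    """Quantize a coordinate to its grid-cell ID; only this ID (never the
    accurate coordinate) needs to leave the user's device."""
    return (int(x // CELL), int(y // CELL))

def cloak(requester_cell, reported_cells, k):
    """Grow a square block of cells around the requester's cell until the
    block covers at least k reported users, then return the block as the
    anonymous spatial region (ASR). Assumes >= k users have reported cells."""
    counts = Counter(reported_cells)
    r = 0
    while True:
        block = [(requester_cell[0] + dx, requester_cell[1] + dy)
                 for dx in range(-r, r + 1) for dy in range(-r, r + 1)]
        if sum(counts[c] for c in block) >= k:
            return block
        r += 1
```

Because the ASR is computed from cell counts alone, the anonymizer never learns any user's accurate location, which is the property the paper's nonexposure algorithms target.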
Accurate torque-speed performance prediction for brushless dc motors
NASA Astrophysics Data System (ADS)
Gipper, Patrick D.
Desirable characteristics of the brushless dc motor (BLDCM) have resulted in its application in electrohydrostatic (EH) and electromechanical (EM) actuation systems. But applying the BLDCM effectively requires accurate prediction of performance. The minimum necessary performance characteristics are motor torque versus speed, peak and average supply current, and efficiency. BLDCM nonlinear simulation software specifically adapted for torque-speed prediction is presented. The capability of the software to quickly and accurately predict performance has been verified on fractional to integral HP motor sizes and is presented. Additionally, the capability of torque-speed prediction with commutation angle advance is demonstrated.
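As a baseline for what "torque versus speed" prediction means, the first-order linear BLDCM model below gives the ideal torque-speed line from assumed motor constants; the nonlinear commutation and winding effects that the article's simulation software captures are deliberately ignored here.

```python
def torque(omega: float, V: float = 28.0, R: float = 0.5,
           Kt: float = 0.05, Ke: float = 0.05) -> float:
    """First-order BLDCM torque-speed line: T = Kt * (V - Ke * omega) / R.
    V is the supply voltage (V), R the winding resistance (ohm), Kt the
    torque constant (N*m/A), Ke the back-EMF constant (V*s/rad).
    All parameter values are illustrative assumptions."""
    return Kt * (V - Ke * omega) / R

no_load_speed = 28.0 / 0.05    # rad/s: speed at which torque crosses zero
stall_torque = torque(0.0)     # N*m: torque at zero speed
```

The model predicts a straight line from the stall torque Kt·V/R down to zero at the no-load speed V/Ke, which is the envelope any more detailed simulation should roughly reproduce.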
Accurate upwind-monotone (nonoscillatory) methods for conservation laws
NASA Technical Reports Server (NTRS)
Huynh, Hung T.
1992-01-01
The well-known MUSCL scheme of Van Leer is constructed using a piecewise linear approximation. The MUSCL scheme is second-order accurate in the smooth parts of the solution, except at extrema, where the accuracy degenerates to first order due to the monotonicity constraint. To construct accurate schemes which are free from oscillations, the author introduces the concept of upwind monotonicity. Several classes of schemes which are upwind monotone and of uniform second- or third-order accuracy are then presented. Results for advection with constant speed are shown. It is also shown that the new scheme compares favorably with state-of-the-art methods.
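A textbook MUSCL-type scheme for constant-speed advection, using the minmod limiter to enforce the monotonicity constraint mentioned above, can be sketched as follows. This illustrates the baseline scheme, not Huynh's upwind-monotone extensions.

```python
import numpy as np

def minmod(a, b):
    """Minmod slope limiter: the smaller slope when signs agree, else zero."""
    return np.where(a * b > 0.0, np.where(np.abs(a) < np.abs(b), a, b), 0.0)

def muscl_advect(u, c, steps):
    """Advance u_t + a*u_x = 0 (a > 0) with a MUSCL-type scheme:
    piecewise-linear reconstruction with minmod slopes, upwind flux,
    periodic boundaries, CFL number c = a*dt/dx in (0, 1]."""
    u = u.astype(float).copy()
    for _ in range(steps):
        dl = u - np.roll(u, 1)             # backward differences
        dr = np.roll(u, -1) - u            # forward differences
        s = minmod(dl, dr)                 # limited slope in each cell
        uR = u + 0.5 * (1.0 - c) * s       # value at each cell's right face
        flux = c * uR                      # upwind flux for a > 0
        u += np.roll(flux, 1) - flux       # conservative update
    return u

u0 = np.zeros(100)
u0[10:30] = 1.0                            # square wave: the hard case
u1 = muscl_advect(u0, 0.5, 40)             # advects the profile by 20 cells
```

The conservative update guarantees the total "mass" is preserved exactly, and the limiter keeps the advected square wave free of the over- and undershoots an unlimited second-order scheme would produce.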
Time-Accurate Numerical Simulations of Synthetic Jet Quiescent Air
NASA Technical Reports Server (NTRS)
Rupesh, K-A. B.; Ravi, B. R.; Mittal, R.; Raju, R.; Gallas, Q.; Cattafesta, L.
2007-01-01
The unsteady evolution of three-dimensional synthetic jet into quiescent air is studied by time-accurate numerical simulations using a second-order accurate mixed explicit-implicit fractional step scheme on Cartesian grids. Both two-dimensional and three-dimensional calculations of synthetic jet are carried out at a Reynolds number (based on average velocity during the discharge phase of the cycle V(sub j), and jet width d) of 750 and Stokes number of 17.02. The results obtained are assessed against PIV and hotwire measurements provided for the NASA LaRC workshop on CFD validation of synthetic jets.
Requirements for Predictive Analytics
Troy Hiltbrand
2012-03-01
It is important to have a clear understanding of how traditional Business Intelligence (BI) and analytics are different and how they fit together in optimizing organizational decision making. With traditional BI, activities are focused primarily on providing context to enhance a known set of information through aggregation, data cleansing and delivery mechanisms. As these organizations mature their BI ecosystems, they achieve a clearer picture of the key performance indicators signaling the relative health of their operations. Organizations that embark on activities surrounding predictive analytics and data mining go beyond simply presenting the data in a manner that will allow decision makers to have a complete context around the information. These organizations generate models based on known information and then apply other organizational data against these models to reveal unknown information.
Multifunctional nanoparticles: analytical prospects.
de Dios, Alejandro Simón; Díaz-García, Marta Elena
2010-05-07
Multifunctional nanoparticles are among the most exciting nanomaterials with promising applications in analytical chemistry. These applications include (bio)sensing, (bio)assays, catalysis and separations. Although most of these applications are based on the magnetic, optical and electrochemical properties of multifunctional nanoparticles, other aspects such as the synergistic effect of the functional groups and the amplification effect associated with the nanoscale dimension have also been observed. Considering not only the nature of the raw material but also the shape, there is a huge variety of nanoparticles. In this review only magnetic, quantum dots, gold nanoparticles, carbon and inorganic nanotubes as well as silica, titania and gadolinium oxide nanoparticles are addressed. This review presents a narrative summary on the use of multifunctional nanoparticles for analytical applications, along with a discussion on some critical challenges existing in the field and possible solutions that have been or are being developed to overcome these challenges.
Cowell, Andrew J.; Cowell, Amanda K.
2009-08-29
This paper discusses the design and use of anthropomorphic computer characters as nonplayer characters (NPCs) within analytical games. These new environments allow avatars to play a central role in supporting training and education goals instead of filling the supporting cast role. This new 'science' of gaming, driven by high-powered but inexpensive computers, dedicated graphics processors and realistic game engines, enables game developers to create learning and training opportunities on par with expensive real-world training scenarios. However, care and attention must be placed on how avatars are represented and thus perceived. A taxonomy of non-verbal behavior is presented and its application to analytical gaming discussed.
Brune, D.; Forkman, B.; Persson, B.
1984-01-01
This book covers the general theories and techniques of nuclear chemical analysis, directed at applications in analytical chemistry, nuclear medicine, radiophysics, agriculture, environmental sciences, geological exploration, industrial process control, etc. The main principles of nuclear physics and nuclear detection on which the analysis is based are briefly outlined. An attempt is made to emphasise the fundamentals of activation analysis, detection and activation methods, as well as their applications. The book provides guidance in analytical chemistry, agriculture, environmental and biomedical sciences, etc. The contents include: the nuclear periodic system; nuclear decay; nuclear reactions; nuclear radiation sources; interaction of radiation with matter; principles of radiation detectors; nuclear electronics; statistical methods and spectral analysis; methods of radiation detection; neutron activation analysis; charged particle activation analysis; photon activation analysis; sample preparation and chemical separation; nuclear chemical analysis in biological and medical research; the use of nuclear chemical analysis in the field of criminology; nuclear chemical analysis in environmental sciences, geology and mineral exploration; and radiation protection.
Ultrasound in analytical chemistry.
Priego Capote, F; Luque de Castro, M D
2007-01-01
Ultrasound is a type of energy which can help analytical chemists in almost all their laboratory tasks, from cleaning to detection. A generic view of the different steps which can be assisted by ultrasound is given here. These steps include preliminary operations usually not considered in most analytical methods (e.g. cleaning, degassing, and atomization), sample preparation being the main area of application. In sample preparation ultrasound is used to assist solid-sample treatment (e.g. digestion, leaching, slurry formation) and liquid-sample preparation (e.g. liquid-liquid extraction, emulsification, homogenization) or to promote heterogeneous sample treatment (e.g. filtration, aggregation, dissolution of solids, crystallization, precipitation, defoaming, degassing). Detection techniques based on use of ultrasonic radiation, the principles on which they are based, responses, and the quantities measured are also discussed.
Differential equation based method for accurate approximations in optimization
NASA Technical Reports Server (NTRS)
Pritchard, Jocelyn I.; Adelman, Howard M.
1990-01-01
A method to efficiently and accurately approximate the effect of design changes on structural response is described. The key to this method is to interpret sensitivity equations as differential equations that may be solved explicitly for closed form approximations, hence, the method is denoted the Differential Equation Based (DEB) method. Approximations were developed for vibration frequencies, mode shapes and static displacements. The DEB approximation method was applied to a cantilever beam and results compared with the commonly-used linear Taylor series approximations and exact solutions. The test calculations involved perturbing the height, width, cross-sectional area, tip mass, and bending inertia of the beam. The DEB method proved to be very accurate, and in most cases, was more accurate than the linear Taylor series approximation. The method is applicable to simultaneous perturbation of several design variables. Also, the approximations may be used to calculate other system response quantities. For example, the approximations for displacements are used to approximate bending stresses.
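The flavor of the DEB idea can be shown on a toy response: a rectangular-section cantilever tip displacement scales as 1/h³ (δ = P·L³/(3·E·I) with I = b·h³/12). Integrating the sensitivity equation dδ/dh = −3δ/h in closed form reproduces the exact power law, whereas the linear Taylor approximation does not. The numbers below are illustrative; this is a sketch of the idea, not the paper's test case or equations.

```python
def deb_approx(delta0: float, h0: float, h: float) -> float:
    """'Differential equation based' approximation: interpret the sensitivity
    d(delta)/dh = -3 * delta / h as an ODE and solve it in closed form,
    giving delta0 * (h0 / h)**3 (exact for this pure power-law response)."""
    return delta0 * (h0 / h) ** 3

def taylor_approx(delta0: float, h0: float, h: float) -> float:
    """Linear Taylor series built from the same sensitivity evaluated at h0."""
    return delta0 * (1.0 - 3.0 * (h - h0) / h0)

delta0, h0 = 1.0, 1.0   # illustrative baseline displacement and beam height
for h in (1.1, 1.3, 1.5):
    exact = delta0 * (h0 / h) ** 3
    print(h, exact, deb_approx(delta0, h0, h), taylor_approx(delta0, h0, h))
```

For a 30% height increase the Taylor estimate has already collapsed toward zero while the DEB form tracks the exact curve, mirroring the paper's finding that the DEB method usually beats the linear Taylor approximation.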
Analytical Ultrasonics in Materials Research and Testing
NASA Technical Reports Server (NTRS)
Vary, A.
1986-01-01
Research results in analytical ultrasonics for characterizing structural materials from metals and ceramics to composites are presented. General topics covered by the conference included: status and advances in analytical ultrasonics for characterizing material microstructures and mechanical properties; status and prospects for ultrasonic measurements of microdamage, degradation, and underlying morphological factors; status and problems in precision measurements of frequency-dependent velocity and attenuation for materials analysis; procedures and requirements for automated, digital signal acquisition, processing, analysis, and interpretation; incentives for analytical ultrasonics in materials research and materials processing, testing, and inspection; and examples of progress in ultrasonics for interrelating microstructure, mechanical properties, and dynamic response.
A Simple and Accurate Method for Measuring Enzyme Activity.
ERIC Educational Resources Information Center
Yip, Din-Yan
1997-01-01
Presents methods commonly used for investigating enzyme activity using catalase and presents a new method for measuring catalase activity that is more reliable and accurate. Provides results that are readily reproduced and quantified. Can also be used for investigations of enzyme properties such as the effects of temperature, pH, inhibitors,…
Analytic Modeling of Insurgencies
2014-08-01
influenced by interests and utilities. 4.1 Carrots and Sticks. An analytic model that captures the aforementioned utilitarian aspect is presented in... "carrots" x. A dynamic utility-based model is developed in [26], in which the state variables are the fractions of contrarians (supporters of the... "Unanticipated Political Revolution," Public Choice, vol. 61, pp. 41-74, 1989. [26] M. P. Atkinson, M. Kress and R. Szechtman, "Carrots, Sticks and Fog."
Industrial Analytics Corporation
Industrial Analytics Corporation
2004-01-30
The lost foam casting process is sensitive to the properties of the EPS patterns used for the casting operation. In this project Industrial Analytics Corporation (IAC) has developed a new low voltage x-ray instrument for x-ray radiography of very low mass EPS patterns. IAC has also developed a transmitted visible light method for characterizing the properties of EPS patterns. The systems developed are also applicable to other low density materials including graphite foams.
Analytical satellite theories based on a new set of canonical elements
NASA Technical Reports Server (NTRS)
Scheifele, G.; Graf, O.
1974-01-01
A new analytical satellite theory is presented. Instead of the six classical elements of Delaunay, a set of eight canonical elements is used. Whereas time is the independent variable in the classical theory, the true anomaly is the independent variable in the new theory. The new approach has four features: (1) the amount of formulas in the solution is reduced considerably; (2) the first-order results are almost as accurate as second-order results in the classical theory; (3) the theory is easier to understand from a didactic point of view; (4) the problems connected with the inaccuracy of the mean motion that are typical for classical satellite theory are no longer present. The new elements are applied to analytical solutions of the zonal oblateness problem and to the problem of the 24-hour satellite.
The Locus analytical framework for indoor localization and tracking applications
NASA Astrophysics Data System (ADS)
Segou, Olga E.; Thomopoulos, Stelios C. A.
2015-05-01
Obtaining location information can be of paramount importance in the context of pervasive and context-aware computing applications. Many systems have been proposed to date, e.g., GPS, which has been shown to offer satisfactory results in outdoor areas. The increased effect of large- and small-scale fading in indoor environments, however, makes localization a challenge. This is particularly reflected in the multitude of different systems that have been proposed in the context of indoor localization (e.g., RADAR, Cricket). The performance of such systems is often validated on vastly different test beds and under different conditions, making performance comparisons difficult and often irrelevant. The Locus analytical framework incorporates algorithms from multiple disciplines, such as channel modeling, non-uniform random number generation, computational geometry, localization, tracking and probabilistic modeling, in order to provide: (a) fast and accurate signal propagation simulation, (b) fast experimentation with localization and tracking algorithms and (c) an in-depth analysis methodology for estimating the performance limits of any Received Signal Strength localization system. Simulation results for the well-known Fingerprinting and Trilateration algorithms are herein presented and validated with experimental data collected in real conditions using IEEE 802.15.4 ZigBee modules. The analysis shows that the Locus framework accurately predicts the underlying distribution of the localization error and produces further estimates of the system's performance limitations (on a best-case/worst-case scenario basis).
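The trilateration approach evaluated above can be illustrated with a short least-squares solver. This is an illustrative sketch, not code from the Locus framework; the anchor layout and ranges below are hypothetical, and real RSS-derived ranges would carry fading noise.

```python
import numpy as np

def trilaterate(anchors, dists):
    """Least-squares 2D trilateration from ranges to known anchors.

    Linearizes the range equations by subtracting the first anchor's
    equation, leaving an overdetermined linear system A x = b.
    """
    anchors = np.asarray(anchors, dtype=float)
    dists = np.asarray(dists, dtype=float)
    A = 2.0 * (anchors[1:] - anchors[0])
    b = (dists[0] ** 2 - dists[1:] ** 2
         + np.sum(anchors[1:] ** 2, axis=1) - np.sum(anchors[0] ** 2))
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

# Three hypothetical anchors; true position (2, 3) with noise-free ranges.
anchors = [(0, 0), (10, 0), (0, 10)]
true = np.array([2.0, 3.0])
dists = [np.linalg.norm(true - np.array(a)) for a in anchors]
print(trilaterate(anchors, dists))  # ≈ [2. 3.]
```

With noisy ranges the same solver returns the least-squares position estimate, which is why more than three anchors are commonly used.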
STRengthening analytical thinking for observational studies: the STRATOS initiative.
Sauerbrei, Willi; Abrahamowicz, Michal; Altman, Douglas G; le Cessie, Saskia; Carpenter, James
2014-12-30
The validity and practical utility of observational medical research depends critically on good study design, excellent data quality, appropriate statistical methods and accurate interpretation of results. Statistical methodology has seen substantial development in recent times. Unfortunately, many of these methodological developments are ignored in practice. Consequently, design and analysis of observational studies often exhibit serious weaknesses. The lack of guidance on vital practical issues discourages many applied researchers from using more sophisticated and possibly more appropriate methods when analyzing observational studies. Furthermore, many analyses are conducted by researchers with a relatively weak statistical background and limited experience in using statistical methodology and software. Consequently, even 'standard' analyses reported in the medical literature are often flawed, casting doubt on their results and conclusions. An efficient way to help researchers to keep up with recent methodological developments is to develop guidance documents that are spread to the research community at large. These observations led to the initiation of the strengthening analytical thinking for observational studies (STRATOS) initiative, a large collaboration of experts in many different areas of biostatistical research. The objective of STRATOS is to provide accessible and accurate guidance in the design and analysis of observational studies. The guidance is intended for applied statisticians and other data analysts with varying levels of statistical education, experience and interests. In this article, we introduce the STRATOS initiative and its main aims, present the need for guidance documents and outline the planned approach and progress so far. We encourage other biostatisticians to become involved.
Wernicke, A Gabriella; Greenwood, Eleni A; Coplowitz, Shana; Parashar, Bhupesh; Kulidzhanov, Fridon; Christos, Paul J; Fischer, Andrew; Nori, Dattatreyudu; Chao, Kun S Clifford
2013-01-01
Identification of radiation-induced fibrosis (RIF) remains a challenge with Late Effects of Normal Tissue-Subjective Objective Management Analytical (LENT-SOMA). Tissue compliance meter (TCM), a non-invasive applicator, may render a more reproducible tool for measuring RIF. In this study, we prospectively quantify RIF after intracavitary brachytherapy (IB) accelerated partial breast irradiation (APBI) with TCM and compare it with LENT-SOMA. Thirty-nine women with American Joint Committee on Cancer Stages 0-I breast cancer, treated with lumpectomy and intracavitary brachytherapy delivered by accelerated partial breast irradiation (IBAPBI), were evaluated by two raters in a prospective manner pre-IBAPBI and every 6 months post-IBAPBI for development of RIF, using TCM and LENT-SOMA. The TCM classification scale grades RIF as 0 = none, 1 = mild, 2 = moderate, and 3 = severe, corresponding to a change in TCM (ΔTCM) between the IBAPBI and non-irradiated breasts of ≤2.9, 3.0-5.9, 6.0-8.9, and ≥9.0 mm, respectively. The LENT-SOMA scale employs clinical palpation to grade RIF as 0 = none, 1 = mild, 2 = moderate, and 3 = severe. Correlation coefficients (intraclass [ICC], Pearson's r, and Cohen's kappa [κ]) were employed to assess the reliability of TCM and LENT-SOMA. Multivariate and univariate linear models explored the relationship between RIF and anatomical parameters (bra cup size), antihormonal therapy, and dosimetric factors (balloon diameter, skin-to-balloon distance [SBD], V150, and V200). Median time to follow-up from completion of IBAPBI is 3.6 years (range, 0.8-4.9 years). Median age is 69 years (range, 47-82 years). Median breast cup size is 39D (range, 34B-44DDD). Median balloon size is 41.2 cc (range, 37.6-50.0 cc), and median SBD is 1.4 cm (range, 0.2-5.5 cm). At pre-IBAPBI, TCM measurements demonstrate high interobserver agreement between the two raters in all four quadrants of both breasts, ICC ≥ 0.997 (95% CI 0.994-1.000). After 36 months, RIF is graded by TCM scale as 0
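The ΔTCM thresholds quoted above translate directly into a grading function; a minimal sketch of that mapping (grades and cut-offs taken from the abstract):

```python
def tcm_grade(delta_mm):
    """Map a ΔTCM measurement (mm) to a fibrosis grade.

    Thresholds per the TCM classification scale:
    <=2.9 none (0), 3.0-5.9 mild (1), 6.0-8.9 moderate (2), >=9.0 severe (3).
    """
    if delta_mm <= 2.9:
        return 0  # none
    if delta_mm <= 5.9:
        return 1  # mild
    if delta_mm <= 8.9:
        return 2  # moderate
    return 3      # severe

print([tcm_grade(d) for d in (2.0, 4.5, 7.0, 10.0)])  # [0, 1, 2, 3]
```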
Approximated analytical solution to an Ebola optimal control problem
NASA Astrophysics Data System (ADS)
Hincapié-Palacio, Doracelly; Ospina, Juan; Torres, Delfim F. M.
2016-11-01
An analytical expression for the optimal control of an Ebola problem is obtained. The analytical solution is found as a first-order approximation to the Pontryagin Maximum Principle via the Euler-Lagrange equation. An implementation of the method is given using the computer algebra system Maple. Our analytical solutions confirm the results recently reported in the literature using numerical methods.
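The Euler-Lagrange step described above can be reproduced symbolically. The sketch below uses SymPy rather than the authors' Maple implementation, and applies it to a toy quadratic Lagrangian (an assumption, not the Ebola control model itself) purely to show the mechanics:

```python
import sympy as sp
from sympy.calculus.euler import euler_equations

t = sp.symbols('t')
x = sp.Function('x')

# Toy functional (hypothetical): minimizing the integral of x'^2 + x^2
# yields the Euler-Lagrange equation x'' = x.
L = sp.Derivative(x(t), t)**2 + x(t)**2
eqs = euler_equations(L, x(t), t)
print(eqs[0])
```

The same call applied to the Hamiltonian-derived Lagrangian of an optimal control problem produces the first-order necessary conditions in symbolic form.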
Aquatic concentrations of chemical analytes compared to ...
We describe screening-level estimates of the potential aquatic toxicity posed by 227 chemical analytes that were measured in 25 ambient water samples collected as part of a joint USGS/USEPA drinking water plant study. Measured concentrations were compared to biological effect concentration (EC) estimates, including USEPA aquatic life criteria, effective plasma concentrations of pharmaceuticals, published toxicity data summarized in the USEPA ECOTOX database, and chemical structure-based predictions. Potential dietary exposures were estimated using a generic 3-tiered food web accumulation scenario. For many analytes, few or no measured effect data were found, and for some analytes, reporting limits exceeded EC estimates, limiting the scope of conclusions. Results suggest occasional occurrence above ECs for copper, aluminum, strontium, lead, uranium, and nitrate. Sparse effect data for manganese, antimony, and vanadium suggest that these analytes may occur above ECs, but additional effect data would be desirable to corroborate EC estimates. These conclusions were not affected by bioaccumulation estimates. No organic analyte concentrations were found to exceed EC estimates, but ten analytes had concentrations in excess of 1/10th of their respective EC: triclocarban, norverapamil, progesterone, atrazine, metolachlor, triclosan, para-nonylphenol, ibuprofen, venlafaxine, and amitriptyline, suggesting that more detailed characterization of these analytes is warranted. Purpose: to provide sc
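The screening logic described above, comparison against an EC with an additional 1/10th-of-EC watch level, amounts to a hazard-quotient check. A minimal sketch with hypothetical concentrations:

```python
def hazard_screen(conc, ec):
    """Classify a measured concentration against its effect concentration
    (EC), using the exceedance and 1/10th-of-EC levels from the screening.
    Both inputs must share the same units (e.g. ug/L)."""
    hq = conc / ec  # hazard quotient
    if hq >= 1.0:
        return "exceeds EC"
    if hq >= 0.1:
        return "within 10x of EC"
    return "below screening level"

# Hypothetical analyte with EC = 4.0 ug/L at three measured levels:
print(hazard_screen(5.0, 4.0))   # exceeds EC
print(hazard_screen(0.5, 4.0))   # within 10x of EC
print(hazard_screen(0.05, 4.0))  # below screening level
```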
ERIC Educational Resources Information Center
Badami, Rokhsareh; VaezMousavi, Mohammad; Wulf, Gabriele; Namazizadeh, Mahdi
2012-01-01
One purpose of the present study was to examine whether self-confidence or anxiety would be differentially affected by feedback from more accurate rather than less accurate trials. The second purpose was to determine whether arousal variations (activation) would predict performance. On Day 1, participants performed a golf putting task under one of…
Zupan, Andrej; Glavač, Damjan
2015-12-01
Medullary thyroid carcinoma (MTC) is a rare endocrine malignancy with distinctive features separating it from other thyroid cancers. The cancer may be sporadic or occur as a consequence of the hereditary syndrome called multiple endocrine neoplasia type 2 (MEN2), which has three distinct phenotypes: MEN2A, MEN2B and FMTC. Each variant of MEN2 results from different RET gene mutations, with a good genotype-phenotype correlation. The goal of the study was to develop a fast and accurate screening method for reliable detection of hot-spot RET germline and sporadic tumor mutations. From a cohort of 191 patients with MTC and their relatives, 38 who tested positive and 31 who tested negative for a germline or somatic tumor RET mutation were selected. A positive HRM mutation pattern was detected in all mutation-positive patients, and altogether the method was able to clearly differentiate between twenty different genotypes. A novel germline variant, p.Ala639Thr, was detected in an MTC patient and was determined to be likely benign. Analytical specificity was determined to be 98.6% and the sensitivity threshold was determined to be 30%. The fast and accurate HRM method reduces the turnaround time, providing fast and important information, especially when targeted anti-tyrosine kinase therapy on tumor samples is considered. Overall, we developed a high-throughput, accurate and cost-effective approach for the detection of RET germline and sporadic tumor mutations.
Analytical calculation of two-dimensional spectra.
Bell, Joshua D; Conrad, Rebecca; Siemens, Mark E
2015-04-01
We demonstrate an analytical calculation of two-dimensional (2D) coherent spectra of electronic or vibrational resonances. Starting with the solution to the optical Bloch equations for a two-level system in the 2D time domain, we show that a fully analytical 2D Fourier transform can be performed if the projection-slice and Fourier-shift theorems of Fourier transforms are applied. Results can be fit to experimental 2D coherent spectra of resonances with arbitrary inhomogeneity.
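One of the two transform identities the authors rely on, the Fourier-shift theorem, is easy to verify numerically. An illustrative 1D check (not the paper's 2D spectroscopy code):

```python
import numpy as np

# Fourier-shift theorem: circularly shifting a signal multiplies its
# spectrum by a linear phase ramp exp(-2*pi*i*k*s/N).
rng = np.random.default_rng(0)
N = 64
x = rng.random(N)
s = 5  # integer shift (illustrative)

X = np.fft.fft(x)
X_shifted = np.fft.fft(np.roll(x, s))
phase = np.exp(-2j * np.pi * np.arange(N) * s / N)

assert np.allclose(X_shifted, X * phase)
print("shift theorem holds")
```

In the 2D case the analogous identity, together with the projection-slice theorem, is what lets the 2D Fourier transform be carried out analytically.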
New model accurately predicts reformate composition
Ancheyta-Juarez, J.; Aguilar-Rodriguez, E.
1994-01-31
Although naphtha reforming is a well-known process, the evolution of catalyst formulation, as well as new trends in gasoline specifications, have led to rapid evolution of the process, including: reactor design, regeneration mode, and operating conditions. Mathematical modeling of the reforming process is an increasingly important tool. It is fundamental to the proper design of new reactors and revamp of existing ones. Modeling can be used to optimize operating conditions, analyze the effects of process variables, and enhance unit performance. Instituto Mexicano del Petroleo has developed a model of the catalytic reforming process that accurately predicts reformate composition at the higher-severity conditions at which new reformers are being designed. The new AA model is more accurate than previous proposals because it takes into account the effects of temperature and pressure on the rate constants of each chemical reaction.
Accurate colorimetric feedback for RGB LED clusters
NASA Astrophysics Data System (ADS)
Man, Kwong; Ashdown, Ian
2006-08-01
We present an empirical model of LED emission spectra that is applicable to both InGaN and AlInGaP high-flux LEDs, and which accurately predicts their relative spectral power distributions over a wide range of LED junction temperatures. We further demonstrate with laboratory measurements that changes in LED spectral power distribution with temperature can be accurately predicted with first- or second-order equations. This provides the basis for a real-time colorimetric feedback system for RGB LED clusters that can maintain the chromaticity of white light at constant intensity to within ±0.003 Δuv over a range of 45 degrees Celsius, and to within 0.01 Δuv when dimmed over an intensity range of 10:1.
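The first- or second-order temperature models mentioned above can be fit with an ordinary polynomial least-squares. The calibration data below are hypothetical, purely to show the mechanics:

```python
import numpy as np

# Hypothetical calibration: peak wavelength of a red AlInGaP LED vs.
# junction temperature (values are illustrative, not measured data).
T = np.array([25.0, 45.0, 65.0, 85.0])        # junction temperature, deg C
wl = np.array([627.0, 628.1, 629.4, 630.9])   # peak wavelength, nm

coeffs = np.polyfit(T, wl, 2)   # second-order fit
predict = np.poly1d(coeffs)
print(round(float(predict(55.0)), 1))  # → 628.7
```

A feedback controller would evaluate such a model at the measured junction temperature and re-mix the R, G, B drive levels to hold the target chromaticity.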
Melatonin: aeromedical, toxicopharmacological, and analytical aspects.
Sanders, D C; Chaturvedi, A K; Hordinsky, J R
1999-01-01
of accidental death may allow forensic scientists and accident investigators to use the relationship between its concentration and the time of day when death occurred. The most accurate estimations of the time of death result from analysis of melatonin content of the whole pineal body, whereas less accurate estimates are obtained from serum and urine analyses. Pineal levels of melatonin are unlikely to be altered by exogenous melatonin, but its blood and urine levels would change. High blood levels in a daytime crash victim would suggest exogenous supplementation. The possible interfering effects of postmortem biochemical processes on melatonin concentrations in whole blood and in other tissues are not well understood, and there is a need for the continuing research into melatonin's chronobiological properties to define its proper applications and limitations. The indiscriminate use of melatonin by aviation professionals may pose unacceptable safety risks for air travel.
An accurate registration technique for distorted images
NASA Technical Reports Server (NTRS)
Delapena, Michele; Shaw, Richard A.; Linde, Peter; Dravins, Dainis
1990-01-01
Accurate registration of International Ultraviolet Explorer (IUE) images is crucial because the variability of the geometrical distortions that are introduced by the SEC-Vidicon cameras ensures that raw science images are never perfectly aligned with the Intensity Transfer Functions (ITFs) (i.e., graded floodlamp exposures that are used to linearize and normalize the camera response). A technique for precisely registering IUE images which uses a cross correlation of the fixed pattern that exists in all raw IUE images is described.
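FFT-based cross-correlation registration of the kind described can be sketched briefly. This is an illustrative integer-shift estimator on synthetic data, not the IUE pipeline's fixed-pattern implementation:

```python
import numpy as np

def estimate_shift(ref, img):
    """Estimate the integer (dy, dx) translation of img relative to ref
    from the peak of their FFT-based circular cross-correlation."""
    corr = np.fft.ifft2(np.fft.fft2(ref) * np.conj(np.fft.fft2(img))).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = corr.shape
    # Map peak indices to signed shifts
    dy = int(dy) - h if dy > h // 2 else int(dy)
    dx = int(dx) - w if dx > w // 2 else int(dx)
    return dy, dx

rng = np.random.default_rng(1)
img = rng.random((32, 32))
ref = np.roll(img, (3, -4), axis=(0, 1))  # known misalignment
print(estimate_shift(ref, img))  # (3, -4)
```

Sub-pixel accuracy, needed for camera-distortion work like the IUE case, is typically obtained by interpolating around the correlation peak.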
Extracting Time-Accurate Acceleration Vectors From Nontrivial Accelerometer Arrangements.
Franck, Jennifer A; Blume, Janet; Crisco, Joseph J; Franck, Christian
2015-09-01
Sports-related concussions are of significant concern in many impact sports, and their detection relies on accurate measurements of the head kinematics during impact. Among the most prevalent recording technologies are videography, and more recently, the use of single-axis accelerometers mounted in a helmet, such as the HIT system. Successful extraction of the linear and angular impact accelerations depends on an accurate analysis methodology governed by the equations of motion. Current algorithms are able to estimate the magnitude of acceleration and hit location, but make assumptions about the hit orientation and are often limited in the position and/or orientation of the accelerometers. The newly formulated algorithm presented in this manuscript accurately extracts the full linear and rotational acceleration vectors from a broad arrangement of six single-axis accelerometers directly from the governing set of kinematic equations. The new formulation linearizes the nonlinear centripetal acceleration term with a finite-difference approximation and provides a fast and accurate solution for all six components of acceleration over long time periods (>250 ms). The approximation of the nonlinear centripetal acceleration term provides an accurate computation of the rotational velocity as a function of time and allows for reconstruction of a multiple-impact signal. Furthermore, the algorithm determines the impact location and orientation and can distinguish between glancing, high rotational velocity impacts, or direct impacts through the center of mass. Results are shown for ten simulated impact locations on a headform geometry computed with three different accelerometer configurations in varying degrees of signal noise. Since the algorithm does not require simplifications of the actual impacted geometry, the impact vector, or a specific arrangement of accelerometer orientations, it can be easily applied to many impact investigations in which accurate kinematics need to
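The governing relation for a single-axis accelerometer on a rigid body reads s = n·a + (r × n)·α once the centripetal term is set aside, which yields a 6x6 linear system for six sensors. A sketch with a hypothetical sensor layout (the paper additionally handles the centripetal term by a finite-difference approximation; it is dropped here to keep the sketch linear, and gravity is ignored):

```python
import numpy as np

def solve_rigid_accel(normals, positions, readings):
    """Recover linear acceleration a and angular acceleration alpha of a
    rigid body from six single-axis accelerometers.

    Each sensor at position r with sensitive axis n reads
    s = n . a + (r x n) . alpha  (centripetal term neglected).
    """
    A = np.array([np.concatenate([n, np.cross(r, n)])
                  for n, r in zip(normals, positions)])
    sol = np.linalg.solve(A, np.asarray(readings, dtype=float))
    return sol[:3], sol[3:]

# Hypothetical layout: sensors on the coordinate axes, sensitive axes
# chosen so the 6x6 system is well conditioned.
positions = [(1, 0, 0), (1, 0, 0), (0, 1, 0), (0, 1, 0), (0, 0, 1), (0, 0, 1)]
normals   = [(0, 1, 0), (0, 0, 1), (1, 0, 0), (0, 0, 1), (1, 0, 0), (0, 1, 0)]
a_true = np.array([1.0, 2.0, 3.0])
alpha_true = np.array([0.1, -0.2, 0.3])
readings = [np.dot(n, a_true) + np.dot(np.cross(r, n), alpha_true)
            for n, r in zip(normals, positions)]

a, alpha = solve_rigid_accel(normals, positions, readings)
print(np.round(a, 6), np.round(alpha, 6))
```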
Levecke, Bruno; Rinaldi, Laura; Charlier, Johannes; Maurelli, Maria Paola; Morgoglione, Maria Elena; Vercruysse, Jozef; Cringoli, Giuseppe
2011-09-01
The faecal egg count reduction test (FECR) is the recommended technique to monitor anthelmintic drug efficacy in livestock. However, results are often inconclusive due to the low analytic sensitivity of the diagnostic technique or the conflict in results from FECR formulae. A novel experimental set-up was, therefore, used to compare the impact of analytic sensitivity and formulae on FECR results. Four McMaster techniques (analytic sensitivities 50, 33.3, 15 and 10) and a FLOTAC technique (analytic sensitivity ~ 1) were used on faecal samples of 30 calves with a FEC of less than 200 eggs per gram. True drug efficacies of 70%, 80% and 90% were experimentally mimicked by comparing FEC before and after dilution (3:10, 2:10 and 1:10, respectively). The FECR was summarized using group (FECR(1)) and individual (FECR(2)) based formulae. There was a significant increase in precision of FECR when the analytic sensitivity increased (p < 0.0001). The precision also depended on the formula used, FECR(1) (p < 0.05) resulting in more precise FECR compared to FECR(2). The accuracy of the FECR differed marginally between the two formulae (p = 0.06), FECR(1) being more accurate. In conclusion, the present study describes a novel methodology to compare techniques for the precision and the accuracy of their FECR results. The results underscored that techniques with high analytic sensitivity will improve the interpretation of FECR in animal populations where baseline FEC are low. They also point out that the precision of individual-based formulae is affected by the analytic sensitivity.
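Group-based and individual-based FECR formulas, as commonly defined in the FECRT literature, differ exactly in the way the abstract describes; the paper's FECR(1)/FECR(2) may differ in detail. A sketch with hypothetical egg counts:

```python
import numpy as np

def fecr_group(pre, post):
    """Group-based FECR: percent reduction of the arithmetic mean counts."""
    return 100.0 * (1.0 - np.mean(post) / np.mean(pre))

def fecr_individual(pre, post):
    """Individual-based FECR: mean of the per-animal percent reductions."""
    pre, post = np.asarray(pre, float), np.asarray(post, float)
    return 100.0 * np.mean(1.0 - post / pre)

# Hypothetical pre-/post-treatment eggs-per-gram counts for four animals:
pre = [100, 150, 200, 50]
post = [10, 30, 20, 15]
print(round(float(fecr_group(pre, post)), 1))       # 85.0
print(round(float(fecr_individual(pre, post)), 1))  # 82.5
```

With low baseline counts and coarse analytic sensitivity the two estimators can diverge much more strongly, which is the behaviour the study quantifies.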
An analytical model of a longitudinal-torsional ultrasonic transducer
NASA Astrophysics Data System (ADS)
Al-Budairi, Hassan; Lucas, Margaret
2012-08-01
The combination of longitudinal and torsional (LT) vibrations at high frequencies finds many applications such as ultrasonic drilling, ultrasonic welding, and ultrasonic motors. The LT mode can be obtained by modifications to the design of a standard bolted Langevin ultrasonic transducer driven by an axially poled piezoceramic stack, by a technique that degenerates the longitudinal mode to an LT motion by a geometrical alteration of the wave path. The transducer design is developed and optimised through numerical modelling which can represent the geometry and mechanical properties of the transducer and its vibration response to an electrical input applied across the piezoceramic stack. However, although these models can allow accurate descriptions of the mechanical behaviour, they do not generally provide adequate insights into the electrical characteristics of the transducer. In this work, an analytical model is developed to present the LT transducer based on the equivalent circuit method. This model can represent both the mechanical and electrical aspects and is used to extract many of the design parameters, such as resonance and anti-resonance frequencies, the impedance spectra and the coupling coefficient of the transducer. The validity of the analytical model is demonstrated by close agreement with experimental results.
Proactive supply chain performance management with predictive analytics.
Stefanovic, Nenad
2014-01-01
Today's business climate requires supply chains to be proactive rather than reactive, which demands a new approach that incorporates data mining predictive analytics. This paper introduces a predictive supply chain performance management model which combines process modelling, performance measurement, data mining models, and web portal technologies into a unique model. It presents a supply chain modelling approach based on a specialized metamodel which allows modelling of any supply chain configuration at different levels of detail. The paper also presents the supply chain semantic business intelligence (BI) model, which encapsulates data sources and business rules and includes the data warehouse model with specific supply chain dimensions, measures, and KPIs (key performance indicators). Next, the paper describes two generic approaches for designing the KPI predictive data mining models based on the BI semantic model. KPI predictive models were trained and tested with a real-world data set. Finally, a specialized analytical web portal which offers collaborative performance monitoring and decision making is presented. The results show that these models give very accurate KPI projections and provide valuable insights into newly emerging trends, opportunities, and problems. This should lead to more intelligent, predictive, and responsive supply chains capable of adapting to a future business environment.
Pre-analytical errors management in the clinical laboratory: a five-year study
Giménez-Marín, Angeles; Rivas-Ruiz, Francisco; Pérez-Hidalgo, Maria del Mar; Molina-Mendoza, Pedro
2014-01-01
Introduction: This study describes quality indicators for the pre-analytical process, grouping errors according to patient risk as critical or major, and assesses their evolution over a five-year period. Materials and methods: A descriptive study was made of the temporal evolution of quality indicators, with a study population of 751,441 analytical requests made during the period 2007–2011. The Runs Test for randomness was calculated to assess changes in the trend of the series, and the degree of control over the process was estimated on the Six Sigma scale. Results: The overall rate of critical pre-analytical errors was 0.047%, with a Six Sigma value of 4.9. The total rate of sampling errors in the study period was 13.54% (P = 0.003). The highest rates were found for the indicators “haemolysed sample” (8.76%), “urine sample not submitted” (1.66%) and “clotted sample” (1.41%), with Six Sigma values of 3.7, 3.7 and 2.9, respectively. Conclusions: The magnitude of pre-analytical errors was accurately assessed. While the processes that triggered critical errors are well controlled, the results obtained for specimen collection are borderline unacceptable; this is particularly so for the indicator “haemolysed sample”. PMID:24969918
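The Six Sigma values quoted above follow from the error rates via the normal quantile plus the conventional 1.5-sigma long-term shift; that convention is an assumption here, but it reproduces the reported figure approximately:

```python
from statistics import NormalDist

def sigma_level(defect_rate, shift=1.5):
    """Approximate long-term Six Sigma level for a given defect rate,
    using the conventional 1.5-sigma shift (an assumed convention)."""
    return NormalDist().inv_cdf(1.0 - defect_rate) + shift

# Critical pre-analytical error rate reported above: 0.047%
print(round(sigma_level(0.00047), 1))  # ≈ 4.8, close to the reported 4.9
```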
High Frequency QRS ECG Accurately Detects Cardiomyopathy
NASA Technical Reports Server (NTRS)
Schlegel, Todd T.; Arenare, Brian; Poulin, Gregory; Moser, Daniel R.; Delgado, Reynolds
2005-01-01
High frequency (HF, 150-250 Hz) analysis over the entire QRS interval of the ECG is more sensitive than conventional ECG for detecting myocardial ischemia. However, the accuracy of HF QRS ECG for detecting cardiomyopathy is unknown. We obtained simultaneous resting conventional and HF QRS 12-lead ECGs in 66 patients with cardiomyopathy (EF = 23.2 ± 6.1%, mean ± SD) and in 66 age- and gender-matched healthy controls using PC-based ECG software recently developed at NASA. The single most accurate ECG parameter for detecting cardiomyopathy was an HF QRS morphological score that takes into consideration the total number and severity of reduced amplitude zones (RAZs) present plus the clustering of RAZs together in contiguous leads. This RAZ score had an area under the receiver operating characteristic (ROC) curve of 0.91, and was 88% sensitive, 82% specific and 85% accurate for identifying cardiomyopathy at an optimum score cut-off of 140 points. Although conventional ECG parameters such as the QRS and QTc intervals were also significantly longer in patients than controls (P < 0.001, BBBs excluded), these conventional parameters were less accurate (area under the ROC curve = 0.77 and 0.77, respectively) than HF QRS morphological parameters for identifying underlying cardiomyopathy. The total amplitude of the HF QRS complexes, as measured by summed root mean square voltages (RMSVs), also differed between patients and controls (33.8 ± 11.5 vs. 41.5 ± 13.6 mV, respectively, P < 0.003), but this parameter was even less accurate in distinguishing the two groups (area under the ROC curve = 0.67) than the HF QRS morphologic and conventional ECG parameters. Diagnostic accuracy was optimal (86%) when the RAZ score from the HF QRS ECG and the QTc interval from the conventional ECG were used simultaneously with cut-offs of ≥ 40 points and ≥ 445 ms, respectively. In conclusion 12-lead HF QRS ECG employing
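The sensitivity/specificity/accuracy figures above follow from a 2x2 confusion table; the counts below are back-calculated from the reported percentages and cohort sizes, so they are illustrative rather than taken from the paper:

```python
def diagnostic_metrics(tp, fn, tn, fp):
    """Sensitivity, specificity, and accuracy from a 2x2 confusion table."""
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    acc = (tp + tn) / (tp + fn + tn + fp)
    return sens, spec, acc

# 66 patients / 66 controls; 88% sensitivity and 82% specificity imply
# roughly 58 true positives and 54 true negatives (back-calculated).
sens, spec, acc = diagnostic_metrics(58, 8, 54, 12)
print(round(sens, 2), round(spec, 2), round(acc, 2))  # 0.88 0.82 0.85
```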