Accurate Analytic Results for the Steady State Distribution of the Eigen Model
NASA Astrophysics Data System (ADS)
Huang, Guan-Rong; Saakian, David B.; Hu, Chin-Kun
2016-04-01
The Eigen model of molecular evolution is widely used in studying complex biological and biomedical systems. Using the Hamilton-Jacobi equation method, we have derived analytic equations for the steady state distribution of the Eigen model with a relative accuracy of O(1/N), where N is the length of the genome. Our results can be applied to the case of small genome length N, as well as to cases where direct numerics cannot give accurate results, e.g., the tail of the distribution.
Robust Accurate Non-Invasive Analyte Monitor
Robinson, Mark R.
1998-11-03
An improved method and apparatus for determining noninvasively and in vivo one or more unknown values of a known characteristic, particularly the concentration of an analyte in human tissue. The method includes: (1) irradiating the tissue with infrared energy (400 nm-2400 nm) having at least several wavelengths in a given range of wavelengths so that there is differential absorption of at least some of the wavelengths by the tissue as a function of the wavelengths and the known characteristic, the differential absorption causing intensity variations of the wavelengths incident from the tissue; (2) providing a first path through the tissue; (3) optimizing the first path for a first sub-region of the range of wavelengths to maximize the differential absorption by at least some of the wavelengths in the first sub-region; (4) providing a second path through the tissue; and (5) optimizing the second path for a second sub-region of the range, to maximize the differential absorption by at least some of the wavelengths in the second sub-region. In the preferred embodiment a third path through the tissue is provided for, which path is optimized for a third sub-region of the range. With this arrangement, spectral variations which are the result of tissue differences (e.g., melanin and temperature) can be reduced. At least one of the paths represents a partial transmission path through the tissue. This partial transmission path may pass through the nail of a finger once and, preferably, twice. Also included are apparatus for: (1) reducing the arterial pulsations within the tissue; and (2) maximizing the blood content in the tissue.
Highly accurate analytical energy of a two-dimensional exciton in a constant magnetic field
NASA Astrophysics Data System (ADS)
Hoang, Ngoc-Tram D.; Nguyen, Duy-Anh P.; Hoang, Van-Hung; Le, Van-Hoang
2016-08-01
Explicit expressions are given for analytically describing the dependence of the energy of a two-dimensional exciton on magnetic field intensity. These expressions are highly accurate with the precision of up to three decimal places for the whole range of the magnetic field intensity. The results are shown for the ground state and some excited states; moreover, we have all formulae to obtain similar expressions of any excited state. Analysis of numerical results shows that the precision of three decimal places is maintained for the excited states with the principal quantum number of up to n=100.
Li, Rui; Ye, Hongfei; Zhang, Weisheng; Ma, Guojun; Su, Yewang
2015-10-29
Spring constant calibration of the atomic force microscope (AFM) cantilever is of fundamental importance for quantifying the force between the AFM cantilever tip and the sample. The calibration within the framework of thin plate theory undoubtedly has a higher accuracy and broader scope than that within the well-established beam theory. However, thin plate theory-based accurate analytic determination of the constant has been perceived as an extremely difficult issue. In this paper, we implement the thin plate theory-based analytic modeling for the static behavior of rectangular AFM cantilevers, which reveals that the three-dimensional effect and Poisson effect play important roles in accurate determination of the spring constants. A quantitative scaling law is found that the normalized spring constant depends only on the Poisson's ratio, normalized dimension and normalized load coordinate. Both the literature and our refined finite element model validate the present results. The developed model is expected to serve as the benchmark for accurate calibration of rectangular AFM cantilevers.
Development and application of accurate analytical models for single active electron potentials
NASA Astrophysics Data System (ADS)
Miller, Michelle; Jaron-Becker, Agnieszka; Becker, Andreas
2015-05-01
The single active electron (SAE) approximation is a theoretical model frequently employed to study scenarios in which inner-shell electrons may productively be treated as frozen spectators to a physical process of interest, and accurate analytical approximations for these potentials are sought as a useful simulation tool. Density functional theory is often used to construct an SAE potential, requiring that a further approximation for the exchange-correlation functional be enacted. In this study, we employ the Krieger, Li, and Iafrate (KLI) modification to the optimized-effective-potential (OEP) method to reduce the complexity of the problem to the straightforward solution of a system of linear equations, through simple arguments regarding the behavior of the exchange-correlation potential in regions where a single orbital dominates. We employ this method for the solution of atomic and molecular potentials, and use the resultant curve to devise a systematic construction of highly accurate and useful analytical approximations for several systems. Supported by the U.S. Department of Energy (Grant No. DE-FG02-09ER16103), and the U.S. National Science Foundation (Graduate Research Fellowship, Grants No. PHY-1125844 and No. PHY-1068706).
NASA Technical Reports Server (NTRS)
Schlosser, Herbert; Ferrante, John
1989-01-01
An accurate analytic expression for the nonlinear change of the volume of a solid as a function of applied pressure is of great interest in high-pressure experimentation. It is found that a two-parameter analytic expression fits the experimental volume-change data to within a few percent over the entire experimentally attainable pressure range. Results are presented for 24 different materials including metals, ceramic semiconductors, polymers, and ionic and rare-gas solids.
Accurate stress resultants equations for laminated composite deep thick shells
Qatu, M.S.
1995-11-01
This paper derives accurate equations for the normal and shear force as well as the bending and twisting moment resultants for laminated composite deep, thick shells. The stress resultant equations for laminated composite thick shells are shown to be different from those of plates. This is due to the fact that the stresses over the thickness of the shell have to be integrated on a trapezoidal-like shell element to obtain the stress resultants. Numerical results are obtained and show that accurate stress resultants are needed for laminated composite deep thick shells, especially if the curvature is not spherical.
Fast and accurate analytical model to solve inverse problem in SHM using Lamb wave propagation
NASA Astrophysics Data System (ADS)
Poddar, Banibrata; Giurgiutiu, Victor
2016-04-01
Lamb wave propagation is at the center of attention of researchers for structural health monitoring of thin-walled structures. This is due to the fact that Lamb wave modes are natural modes of wave propagation in these structures, with long travel distances and without much attenuation. This offers the prospect of monitoring large structures with only a few sensors/actuators. However, the problem of damage detection and identification is an "inverse problem" in which we do not have the luxury of knowing the exact mathematical model of the system. The problem is made more challenging by the confounding factors of statistical variation of the material and geometric properties, and it may also be ill posed. Due to all these complexities, the direct solution of the problem of damage detection and identification in SHM is impossible. Therefore an indirect method using the solution of the "forward problem" is popular for solving the "inverse problem." This requires a fast forward-problem solver. Because of the complexities involved in the forward problem of scattering of Lamb waves from damage, researchers rely primarily on numerical techniques such as FEM, BEM, etc. But these methods are slow and practically unusable in structural health monitoring. We have developed a fast and accurate analytical forward-problem solver for this purpose. This solver, CMEP (complex modes expansion and vector projection), can simulate scattering of Lamb waves from all types of damage in thin-walled structures quickly and accurately to assist the inverse problem solver.
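The indirect strategy the abstract describes, wrapping a fast forward solver inside an inverse search, can be sketched generically. Everything below is a hypothetical stand-in: `forward_model` is a toy monotonic damage-response curve, not CMEP, and the root search is plain bisection; the sketch only illustrates why a fast forward solver makes the inverse problem tractable.

```python
import math

def forward_model(damage_size):
    # Hypothetical stand-in for a fast forward solver such as CMEP:
    # maps a damage parameter to a predicted scattering amplitude.
    # Chosen to be monotonically decreasing on [0, 10].
    return math.exp(-0.5 * damage_size) * (1.0 + 0.5 * damage_size)

def solve_inverse(measured, lo=0.0, hi=10.0, tol=1e-8):
    # Scalar inverse problem solved by bisection on the residual;
    # assumes the forward model is monotonic on [lo, hi].
    f = lambda d: forward_model(d) - measured
    assert f(lo) * f(hi) <= 0, "measurement outside model range"
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0:
            hi = mid          # root lies in the lower half
        else:
            lo = mid          # root lies in the upper half
    return 0.5 * (lo + hi)

# Simulate a measurement from a known damage size, then invert it.
true_damage = 2.5
measurement = forward_model(true_damage)
estimate = solve_inverse(measurement)
print(round(estimate, 4))  # → 2.5
```

In practice the forward solver is called thousands of times inside the optimizer, which is exactly why FEM/BEM are too slow for this role.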
NASA Astrophysics Data System (ADS)
Colalongo, Luigi; Ghittorelli, Matteo; Torricelli, Fabrizio; Kovács-Vajna, Zsolt Miklos
2015-12-01
Surface-potential-based mathematical models are among the most accurate and physically based compact models of Thin-Film Transistors (TFTs) and, in turn, of Organic Thin-Film Transistors (OTFTs), available today. However, the need for iterative computation of the surface potential limits their computational efficiency and their diffusion in CAD applications. The existing closed-form approximations of the surface potential are based on regional approximations and empirical smoothing functions that may not be accurate enough to model OTFTs and, in particular, their transconductances and transcapacitances. In this paper we present an accurate and computationally efficient closed-form approximation of the surface potential, based on the Lagrange Reversion Theorem, that can be exploited in advanced surface-potential-based OTFT and TFT device models.
Milestone M4900: Simulant Mixing Analytical Results
Kaplan, D.I.
2001-07-26
This report addresses Milestone M4900, "Simulant Mixing Sample Analysis Results," and contains the data generated during the "Mixing of Process Heels, Process Solutions, and Recycle Streams: Small-Scale Simulant" task. The Task Technical and Quality Assurance Plan for this task is BNF-003-98-0079A. A report with a narrative description and discussion of the data will be issued separately.
Interpolation method for accurate affinity ranking of arrayed ligand-analyte interactions.
Schasfoort, Richard B M; Andree, Kiki C; van der Velde, Niels; van der Kooi, Alex; Stojanović, Ivan; Terstappen, Leon W M M
2016-05-01
The values of the affinity constants (kd, ka, and KD) that are determined by label-free interaction analysis methods are affected by the ligand density. This article outlines a surface plasmon resonance (SPR) imaging method that yields high-throughput globally fitted affinity ranking values using a 96-plex array. A kinetic titration experiment without a regeneration step has been applied for various coupled antibodies binding to a single antigen. Globally fitted rate (kd and ka) and dissociation equilibrium (KD) constants for various ligand densities and analyte concentrations are exponentially interpolated to the KD at Rmax = 100 RU response level (KD(R100)).
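The rate and equilibrium constants named here are related through the standard 1:1 (Langmuir) interaction model commonly used in SPR global fitting: dR/dt = ka·C·(Rmax − R) − kd·R, with KD = kd/ka. The sketch below uses illustrative numbers, not values from the paper, and the Rmax = 100 RU level mirrors the KD(R100) convention described in the abstract.

```python
import math

# Illustrative 1:1 Langmuir binding parameters (not from the paper).
ka = 1.0e5       # association rate constant, 1/(M*s)
kd = 1.0e-3      # dissociation rate constant, 1/s
C = 1.0e-8       # analyte concentration, M
Rmax = 100.0     # saturation response, RU (the R100 reference level)

# Equilibrium dissociation constant.
KD = kd / ka     # = 1e-8 M here

# Equilibrium response of the 1:1 model: Req = ka*C*Rmax / (ka*C + kd).
Req = ka * C * Rmax / (ka * C + kd)

def response(t):
    # Association-phase response R(t) = Req * (1 - exp(-kobs*t)),
    # with observed rate kobs = ka*C + kd.
    kobs = ka * C + kd
    return Req * (1.0 - math.exp(-kobs * t))

print(round(Req, 1))  # → 50.0 RU, i.e. half of Rmax, since C == KD here
```

Choosing C equal to KD makes the half-saturation behavior visible directly: the equilibrium response lands at exactly Rmax/2.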
Ambipolar transition voltage spectroscopy: Analytical results and experimental agreement
NASA Astrophysics Data System (ADS)
Bâldea, Ioan
2012-01-01
This work emphasizes that the transition voltages Vt± for both bias polarities (V ≷ 0) should be used to properly determine the energy offset ɛ0 of the molecular orbital closest to the electrodes’ Fermi level and the bias asymmetry γ in molecular junctions. Accurate analytical formulas are deduced to estimate ɛ0 and γ solely in terms of Vt±. These estimates are validated against experiments, by showing that full experimental I-V curves measured by Beebe et al. [Phys. Rev. Lett. 97, 026801 (2006)] and Tan et al. [Appl. Phys. Lett. 96, 013110 (2010)] for both bias polarities can be excellently reproduced.
Two-level laser: Analytical results and the laser transition
Gartner, Paul
2011-11-15
The problem of the two-level laser is studied analytically. The steady-state solution is expressed as a continued fraction and allows for accurate approximation by rational functions. Moreover, we show that the abrupt change observed in the pump dependence of the steady-state population is directly connected to the transition to the lasing regime. The condition for a sharp transition to Poissonian statistics is expressed as a scaling limit of vanishing cavity loss and light-matter coupling, κ → 0, g → 0, such that g²/κ stays finite and g²/κ > 2γ, where γ is the rate of nonradiative losses. The same scaling procedure is also shown to describe a similar change to the Poisson distribution in the Scully-Lamb laser model, suggesting that the low-κ, low-g asymptotics is of more general significance for the laser transition.
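The two computational ingredients mentioned above, evaluating a truncated continued fraction by backward recurrence and checking the g²/κ > 2γ scaling condition, can be sketched generically. The coefficients and parameter values below are placeholders, not the actual laser coefficients derived in the paper.

```python
def eval_continued_fraction(a, b):
    # Evaluate b[0] + a[1]/(b[1] + a[2]/(b[2] + ...)), truncated at the
    # depth of the coefficient lists, by backward recurrence. Truncation
    # at finite depth yields the rational approximants mentioned above.
    x = b[-1]
    for k in range(len(a) - 1, 0, -1):
        x = b[k - 1] + a[k] / x
    return x

# Sanity check on a classic fraction: 1 + 1/(1 + 1/(1 + ...)) converges
# to the golden ratio (1 + sqrt(5))/2.
phi = eval_continued_fraction([0.0] + [1.0] * 50, [1.0] * 51)

# Scaling condition from the abstract for a sharp (Poissonian) transition:
# g**2/kappa stays finite and exceeds 2*gamma as kappa, g -> 0.
g, kappa, gamma = 1e-3, 1e-4, 4e-3   # illustrative values only
sharp_transition = (g ** 2 / kappa) > 2 * gamma
print(round(phi, 6), sharp_transition)  # → 1.618034 True
```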
Preliminary Results on Uncertainty Quantification for Pattern Analytics
Stracuzzi, David John; Brost, Randolph; Chen, Maximillian Gene; Malinas, Rebecca; Peterson, Matthew Gregor; Phillips, Cynthia A.; Robinson, David G.; Woodbridge, Diane
2015-09-01
This report summarizes preliminary research into uncertainty quantification for pattern analytics within the context of the Pattern Analytics to Support High-Performance Exploitation and Reasoning (PANTHER) project. The primary focus of PANTHER was to make large quantities of remote sensing data searchable by analysts. The work described in this report adds nuance to both the initial data preparation steps and the search process. Search queries are transformed from "does the specified pattern exist in the data?" to "how certain is the system that the returned results match the query?" We show example results for both data processing and search, and discuss a number of possible improvements for each.
Gravitational lensing from compact bodies: Analytical results for strong and weak deflection limits
Amore, Paolo; Cervantes, Mayra; De Pace, Arturo; Fernandez, Francisco M.
2007-04-15
We develop a nonperturbative method that yields analytical expressions for the deflection angle of light in a general static and spherically symmetric metric. The method works by introducing into the problem an artificial parameter, called δ, and by performing an expansion in this parameter to a given order. The results obtained are analytical and nonperturbative because they do not correspond to a polynomial expression in the physical parameters. Already to first order in δ, the analytical formulas obtained using our method provide, at the same time, accurate approximations both at large distances (weak deflection limit) and at distances close to the photon sphere (strong deflection limit). We have applied our technique to different metrics and verified that the error is at most 0.5% for all regimes. We have also proposed an alternative approach which provides simpler formulas, although with larger errors.
Accurate analytical modelling of cosmic ray induced failure rates of power semiconductor devices
NASA Astrophysics Data System (ADS)
Bauer, Friedhelm D.
2009-06-01
A new, simple and efficient approach is presented to conduct estimations of the cosmic ray induced failure rate for high voltage silicon power devices early in the design phase. This allows combining common design issues such as device losses and safe operating area with the constraints imposed by the reliability to result in a better and overall more efficient design methodology. Starting from an experimental and theoretical background brought forth a few years ago [Kabza H et al. Cosmic radiation as a cause for power device failure and possible countermeasures. In: Proceedings of the sixth international symposium on power semiconductor devices and IC's, Davos, Switzerland; 1994. p. 9-12, Zeller HR. Cosmic ray induced breakdown in high voltage semiconductor devices, microscopic model and possible countermeasures. In: Proceedings of the sixth international symposium on power semiconductor devices and IC's, Davos, Switzerland; 1994. p. 339-40, and Matsuda H et al. Analysis of GTO failure mode during d.c. blocking. In: Proceedings of the sixth international symposium on power semiconductor devices and IC's, Davos, Switzerland; 1994. p. 221-5], an exact solution of the failure rate integral is derived and presented in a form which lends itself to be combined with the results available from commercial semiconductor simulation tools. Hence, failure rate integrals can be obtained with relative ease for realistic two- and even three-dimensional semiconductor geometries. Two case studies relating to IGBT cell design and planar junction termination layout demonstrate the purpose of the method.
Dismer, Florian; Hansen, Sigrid; Oelmeier, Stefan Alexander; Hubbuch, Jürgen
2013-03-01
Chromatography is the method of choice for the separation of proteins, at both analytical and preparative scale. Orthogonal purification strategies for industrial use can easily be implemented by combining different modes of adsorption. Nevertheless, with flexibility comes the freedom of choice and optimal conditions for consecutive steps need to be identified in a robust and reproducible fashion. One way to address this issue is the use of mathematical models that allow for an in silico process optimization. Although this has been shown to work, model parameter estimation for complex feedstocks becomes the bottleneck in process development. An integral part of parameter assessment is the accurate measurement of retention times in a series of isocratic or gradient elution experiments. As high-resolution analytics that can differentiate between proteins are often not readily available, pure protein is mandatory for parameter determination. In this work, we present an approach that has the potential to solve this problem. Based on the uniqueness of UV absorption spectra of proteins, we were able to accurately measure retention times in systems of up to four co-eluting compounds. The presented approach is calibration-free, meaning that prior knowledge of pure component absorption spectra is not required. Actually, pure protein spectra can be determined from co-eluting proteins as part of the methodology. The approach was tested for size-exclusion chromatograms of 38 mixtures of co-eluting proteins. Retention times were determined with an average error of 0.6 s (1.6% of average peak width), approximated and measured pure component spectra showed an average coefficient of correlation of 0.992.
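The mixture model underlying this kind of UV-spectral deconvolution is the linear Beer-Lambert superposition: each measured spectrum is a linear combination of pure-component spectra, D = C·Sᵀ. The sketch below assumes the pure spectra are known and recovers the co-eluting concentration profiles by least squares; the published method goes further and is calibration-free, estimating the pure spectra from the co-eluting data itself. All spectra and elution profiles here are synthetic placeholders.

```python
import numpy as np

# Hypothetical pure-component UV absorption spectra on a wavelength grid.
wavelengths = np.linspace(240, 320, 81)
s1 = np.exp(-((wavelengths - 280) / 10.0) ** 2)   # hypothetical protein 1
s2 = np.exp(-((wavelengths - 260) / 12.0) ** 2)   # hypothetical protein 2
S = np.stack([s1, s2], axis=1)                    # (wavelengths, components)

# Two overlapping Gaussian elution profiles: co-eluting peaks.
t = np.linspace(0, 60, 121)
c_true = np.stack([np.exp(-((t - 25) / 5.0) ** 2),
                   np.exp(-((t - 32) / 5.0) ** 2)], axis=1)  # (times, comps)

# Beer-Lambert mixture: measured data matrix, times x wavelengths.
D = c_true @ S.T

# Recover the concentration profiles by least squares at every time point.
c_est, *_ = np.linalg.lstsq(S, D.T, rcond=None)
c_est = c_est.T

print(np.allclose(c_est, c_true, atol=1e-8))  # → True (noiseless case)
```

With noise-free synthetic data the recovery is exact; the practical difficulty the paper addresses is doing this when the columns of S are unknown and the spectra are highly similar.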
Analytical results for resonance and runup in piecewise linear bathymetries
NASA Astrophysics Data System (ADS)
Fuentes, Mauricio; Riquelme, Sebastián; Ruiz, Javier; Campos, Jaime
2015-04-01
A general method of solution for the runup evolution and some analytical results concerning a more general bathymetry than the canonical sloping beach model are presented. We studied theoretically the water wave elevation and runup generated on a continuous piecewise linear bathymetry, by solving analytically the linear shallow water wave equations in the 1+1 dimensional case. Non-horizontal linear segments are assumed, and we develop a specific matrix propagator scheme, similar to the ones used in the propagation of elastic seismic wave fields in layered media, to obtain an exact integral form for the runup. A general closed expression for the maximum runup was computed analytically via Cauchy's residue theorem for an incident solitary wave and an isosceles leading-depression N-wave in the case of n+1 linear segments. It is already known that the maximum runup strongly depends only on the slope closest to the shore, although this has not yet been mathematically demonstrated for arbitrary bathymetries. Analytical and numerical verifications were done to check the validity of the asymptotic maximum runup, and we provide the mathematical and bathymetric conditions that must be satisfied by the model to obtain correct asymptotic solutions. We applied our model to study the runup evolution on a more realistic bathymetry than the canonical sloping beach model. The seabed in a Chilean subduction zone was approximated - from the trench to the shore - by two linear segments adjusting the continental slope and shelf. Assuming an incident solitary wave, the two-segment bathymetry generates a larger runup than the simple sloping beach model. We also discuss the differences in the runup evolution computed numerically from incident leading-depression and leading-elevation isosceles N-waves; in the latter case, the water elevation at the shore shows symmetrical behavior in terms of their waveforms. Finally, we applied our solution to study the resonance effects due to
NASA Astrophysics Data System (ADS)
Roy Choudhury, Raja; Roy Choudhury, Arundhati; Kanti Ghose, Mrinal
2013-01-01
A semi-analytical model with three optimizing parameters and a novel non-Gaussian function as the fundamental modal field solution has been proposed to arrive at an accurate solution to predict various propagation parameters of graded-index fibers with less computational burden than numerical methods. In our semi-analytical formulation, the optimization of the core parameter U, which is usually uncertain, noisy, or even discontinuous, is carried out by the Nelder-Mead method of nonlinear unconstrained minimization, as it is an efficient and compact direct search method and does not need any derivative information. Three optimizing parameters are included in the formulation of the fundamental modal field of an optical fiber to make it more flexible and accurate than other available approximations. Employing the variational technique, Petermann I and II spot sizes have been evaluated for triangular- and trapezoidal-index fibers with the proposed fundamental modal field. It has been demonstrated that the results of the proposed solution identically match the numerical results over a wide range of normalized frequencies. This approximation can also be used in the study of doped and nonlinear fiber amplifiers.
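Since the core parameter U is optimized with Nelder-Mead, a derivative-free direct search, a minimal SciPy sketch is shown below. The objective is a smooth surrogate with a known minimum at U = 2.3, standing in for the paper's variational expression for the modal field, which is not reproduced here.

```python
from scipy.optimize import minimize

def objective(u):
    # Surrogate for the variational functional of the core parameter U
    # (hypothetical; the real expression comes from the modal field model).
    # Known minimum at u = 2.3 with value 1.7.
    return (u[0] - 2.3) ** 2 + 1.7

# Nelder-Mead needs only function values, no derivatives, which suits
# objectives that are noisy or even discontinuous in U.
res = minimize(objective, x0=[1.0], method="Nelder-Mead",
               options={"xatol": 1e-8, "fatol": 1e-8})

print(round(res.x[0], 4))  # → 2.3
```

The same pattern extends to the three-parameter case by passing a length-3 `x0`; the simplex then has four vertices.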
NASA Astrophysics Data System (ADS)
Amador, Davi H. T.; de Oliveira, Heibbe C. B.; Sambrano, Julio R.; Gargano, Ricardo; de Macedo, Luiz Guilherme M.
2016-10-01
A prolapse-free basis set for Eka-Actinium (E121, Z = 121), numerical atomic calculations on E121, spectroscopic constants, and an accurate analytical form for the potential energy curve of diatomic E121F obtained at the 4-component all-electron CCSD(T) level including the Gaunt interaction are presented. The results show a strong and polarized bond (≈181 kcal/mol in strength) between E121 and F; the outermost frontier molecular orbitals of E121F should be fairly similar to those of AcF, and there is no evidence of a break in periodic trends. Moreover, the Gaunt interaction, although small, is expected to influence the overall rovibrational spectra considerably.
Analytical results on back propagation nonlinear compensator with coherent detection.
Tanimura, Takahito; Nölle, Markus; Fischer, Johannes Karl; Schubert, Colja
2012-12-17
We derive analytic formulas for the improvement in effective optical signal-to-noise ratio brought by a digital nonlinear compensator for dispersion uncompensated links. By assuming Gaussian distributed nonlinear noise, we are able to take both nonlinear signal-to-signal and nonlinear signal-to-noise interactions into account. In the limit of weak nonlinear signal-to-noise interactions, we derive an upper boundary of the OSNR improvement. This upper boundary only depends on fiber parameters as well as on the total bandwidth of the considered wavelength-division multiplexing (WDM) signal and the bandwidth available for back propagation. We discuss the dependency of the upper boundary on different fiber types and also the OSNR improvement in practical system conditions. Furthermore, the analytical formulas are validated by numerical simulations. PMID:23263117
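A common Gaussian-noise-style effective-OSNR model captures the idea behind this abstract: nonlinear interference is treated as additive Gaussian noise growing as P³, and back propagation removes part of the signal-signal contribution. The functional form and all parameter values below are illustrative assumptions, not the paper's exact formulas.

```python
import math

def effective_osnr_db(p_mw, p_ase_mw, eta, compensated=0.0):
    # Effective OSNR with nonlinear interference modeled as additive
    # Gaussian noise of power eta * P**3 (GN-model-style assumption).
    # `compensated` is the fraction of signal-signal nonlinear noise
    # removed by the digital back-propagation compensator.
    p_nli = eta * p_mw ** 3 * (1.0 - compensated)
    return 10.0 * math.log10(p_mw / (p_ase_mw + p_nli))

# Hypothetical link parameters (all in consistent mW units).
P, P_ase, eta = 1.0, 0.01, 0.02

before = effective_osnr_db(P, P_ase, eta)                   # no compensation
after = effective_osnr_db(P, P_ase, eta, compensated=0.8)   # 80% of NLI removed

print(round(after - before, 2))  # → 3.31 dB OSNR improvement
```

Consistent with the abstract, residual uncompensated noise (here the remaining 20% of the NLI plus the ASE term) caps the achievable improvement, which is the role of the upper boundary derived in the paper.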
A results-based process for evaluation of diverse visual analytics tools
NASA Astrophysics Data System (ADS)
Rubin, Gary; Berger, David H.
2013-05-01
With the pervasiveness of still and full-motion imagery in commercial and military applications, the need to ingest and analyze these media has grown rapidly in recent years. Additionally, video hosting and live camera websites provide a near real-time view of our changing world with unprecedented spatial coverage. To take advantage of these controlled and crowd-sourced opportunities, sophisticated visual analytics (VA) tools are required to accurately and efficiently convert raw imagery into usable information. Whether investing in VA products or evaluating algorithms for potential development, it is important for stakeholders to understand the capabilities and limitations of visual analytics tools. Visual analytics algorithms are being applied to problems related to Intelligence, Surveillance, and Reconnaissance (ISR), facility security, and public safety monitoring, to name a few. The diversity of requirements means that a one-size-fits-all approach to performance assessment will not work. We present a process for evaluating the efficacy of algorithms in real-world conditions, thereby allowing users and developers of video analytics software to understand software capabilities and identify potential shortcomings. The results-based approach described in this paper uses an analysis of end-user requirements and Concept of Operations (CONOPS) to define Measures of Effectiveness (MOEs), test data requirements, and evaluation strategies. We define metrics that individually do not fully characterize a system, but when used together, are a powerful way to reveal both strengths and weaknesses. We provide examples of data products, such as heatmaps, performance maps, detection timelines, and rank-based probability-of-detection curves.
Shakhashiro, A; Mabit, L
2009-01-01
Fallout radionuclides (FRNs) such as (210)Pb and (137)Cs have been widely used to assess soil erosion and sedimentation processes. It is of major importance to obtain accurate analytical results for FRNs by gamma analysis before any data treatment through a conversion model, and to allow subsequent comparison of erosion and sedimentation rates from different case studies. Therefore, the IAEA organized an inter-comparison exercise to assess the validity and reliability of the analytical results for (137)Cs and total (210)Pb obtained using gamma spectrometry in the various laboratories participating in the IAEA Co-ordinated Research Project on "Assess the effectiveness of soil conservation measures for sustainable watershed management using fallout radionuclides". Reference materials were distributed to 14 participating laboratories and, using a rating system, their analytical results were compared to the assigned reference values. In the case of (137)Cs, the analytical results were satisfactory, with 66% of the laboratories producing acceptable results. Only the sample with low (137)Cs activity (2.6 +/- 0.2 Bq kg(-1)) gave less accurate results, with more than 25% of results not acceptable. The total (210)Pb analysis indicated a clear need for corrective actions in the analysis process, as only 36% of the laboratories involved in the proficiency test were able to assess total (210)Pb acceptably (bias 10%). This inter-laboratory test underlines that further inter-comparison exercises should be organized by the IAEA or regional laboratories to ensure the quality of the analytical data produced in Member States. As a result of the above-mentioned proficiency test, some recommendations have been provided to improve accurate gamma measurement of both (137)Cs and total (210)Pb. PMID:18760612
Krommes, J.A.
2000-01-18
Recent results and future challenges in the systematic analytical description of plasma turbulence are described. First, the importance of statistical realizability is stressed, and the development and successes of the Realizable Markovian Closure are briefly reviewed. Next, submarginal turbulence (linearly stable but nonlinearly self-sustained fluctuations) is considered and the relevance of nonlinear instability in neutral-fluid shear flows to submarginal turbulence in magnetized plasmas is discussed. For the Hasegawa-Wakatani equations, a self-consistency loop that leads to steady-state vortex regeneration in the presence of dissipation is demonstrated and a partial unification of recent work of Drake (for plasmas) and of Waleffe (for neutral fluids) is given. Brief remarks are made on the difficulties facing a quantitatively accurate statistical description of submarginal turbulence. Finally, possible connections between intermittency, submarginal turbulence, and self-organized criticality (SOC) are considered and outstanding questions are identified.
Communicating Qualitative Analytical Results Following Grice's Conversational Maxims
ERIC Educational Resources Information Center
Chenail, Jan S.; Chenail, Ronald J.
2011-01-01
Conducting qualitative research can be seen as a developing communication act through which researchers engage in a variety of conversations. Articulating the results of qualitative data analysis results can be an especially challenging part of this scholarly discussion for qualitative researchers. To help guide investigators through this…
Microgravity Fluid Separation Physics: Experimental and Analytical Results
NASA Technical Reports Server (NTRS)
Shoemaker, J. Michael; Schrage, Dean S.
1997-01-01
Effective, low power, two-phase separation systems are vital for the cost-effective study and utilization of two-phase flow systems and the flow physics of two-phase flows. The study of microgravity flows has the potential to reveal significant insight into the controlling mechanisms for the behavior of flows in both normal and reduced gravity environments. The microgravity environment results in a reduction of gravity-induced buoyancy forces acting on the discrete phases. Thus, surface tension, viscous, and inertial forces exert an increased influence on the behavior of the flow, as demonstrated by the axisymmetric flow patterns. Several space technology and operations groups have studied flow behavior in reduced gravity, since gas-liquid flows are encountered in several systems such as cabin humidity control, wastewater treatment, thermal management, and Rankine power systems.
Lim, Chee Wei; Tai, Siew Hoon; Lee, Lin Min; Chan, Sheot Harn
2012-07-01
The current food crisis demands unambiguous determination of mycotoxin contamination in staple foods to achieve safer food for consumption. This paper describes the first accurate LC-MS/MS method developed to analyze trichothecenes in grains by applying multiple reaction monitoring (MRM) transition and MS(3) quantitation strategies in tandem. The trichothecenes are nivalenol, deoxynivalenol, deoxynivalenol-3-glucoside, fusarenon X, 3-acetyl-deoxynivalenol, 15-acetyldeoxynivalenol, diacetoxyscirpenol, and HT-2 and T-2 toxins. Acetic acid and ammonium acetate were used to convert the analytes into their respective acetate adducts and ammonium adducts under negative and positive MS polarity conditions, respectively. The mycotoxins were separated by reversed-phase LC in a 13.5-min run, ionized using electrospray ionization, and detected by tandem mass spectrometry. Analyte-specific mass-to-charge (m/z) ratios were used to perform quantitation under MRM transition and MS(3) (linear ion trap) modes. Three experiments were performed for each quantitation mode and matrix in batches over 6 days for recovery studies. The matrix effect was investigated at concentration levels of 20, 40, 80, 120, 160, and 200 μg kg(-1) (n = 3) in 5 g corn flour and rice flour. Extraction with acetonitrile provided a good overall recovery range of 90-108% (n = 3) at three spiking concentration levels of 40, 80, and 120 μg kg(-1). A quantitation limit of 2-6 μg kg(-1) was achieved by applying an MRM transition quantitation strategy. Under MS(3) mode, a quantitation limit of 4-10 μg kg(-1) was achieved. Relative standard deviations of 2-10% and 2-11% were reported for MRM transition and MS(3) quantitation, respectively. The successful utilization of MS(3) enabled accurate analyte fragmentation pattern matching and its quantitation, leading to the development of analytical methods in fields that demand both analyte specificity and fragmentation fingerprint-matching capabilities that are
Transcriptional Bursting in Gene Expression: Analytical Results for General Stochastic Models.
Kumar, Niraj; Singh, Abhyudai; Kulkarni, Rahul V
2015-10-01
Gene expression in individual cells is highly variable and sporadic, often resulting in the synthesis of mRNAs and proteins in bursts. Such bursting has important consequences for cell-fate decisions in diverse processes ranging from HIV-1 viral infections to stem-cell differentiation. It is generally assumed that bursts are geometrically distributed and that they arrive according to a Poisson process. On the other hand, recent single-cell experiments provide evidence for complex burst arrival processes, highlighting the need for analysis of more general stochastic models. To address this issue, we invoke a mapping between general stochastic models of gene expression and systems studied in queueing theory to derive exact analytical expressions for the moments associated with mRNA/protein steady-state distributions. These results are then used to derive noise signatures, i.e. explicit conditions based entirely on experimentally measurable quantities, that determine if the burst distributions deviate from the geometric distribution or if burst arrival deviates from a Poisson process. For non-Poisson arrivals, we develop approaches for accurate estimation of burst parameters. The proposed approaches can lead to new insights into transcriptional bursting based on measurements of steady-state mRNA/protein distributions. PMID:26474290
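The standard bursting model referenced above (bursts arriving as a Poisson process, geometrically distributed burst sizes, first-order mRNA degradation) can be checked against its known steady-state mean ⟨n⟩ = k·b/γ with a short stochastic simulation. The sketch below is illustrative only; the function name and all rate values are invented for the example and are not taken from the paper:

```python
import random

def simulate_bursty_mrna(k_burst=0.5, mean_burst=4.0, gamma=1.0,
                         t_end=20000.0, seed=1):
    """Gillespie simulation of bursty mRNA production.

    Bursts arrive as a Poisson process (rate k_burst); each burst adds a
    geometrically distributed number of transcripts (mean mean_burst,
    support {1, 2, ...}); each transcript degrades at rate gamma.
    Returns the time-averaged copy number.
    """
    rng = random.Random(seed)
    p = 1.0 / mean_burst              # geometric success probability
    t, n = 0.0, 0
    time_weighted_n = 0.0
    while t < t_end:
        total_rate = k_burst + gamma * n
        dt = rng.expovariate(total_rate)
        time_weighted_n += n * min(dt, t_end - t)   # n is constant over dt
        t += dt
        if rng.random() < k_burst / total_rate:
            size = 1                  # draw a geometric burst size
            while rng.random() > p:
                size += 1
            n += size
        else:
            n -= 1                    # degradation of one transcript
    return time_weighted_n / t_end
```

With burst rate 0.5, mean burst size 4, and unit degradation rate, the time-averaged copy number should settle near k·b/γ = 2.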
Tank 48H Waste Composition and Results of Investigation of Analytical Methods
Walker, D.D.
1997-04-02
This report serves two purposes. First, it documents the analytical results of Tank 48H samples taken between April and August 1996. Second, it describes investigations of the precision of the sampling and analytical methods used on the Tank 48H samples.
Accurate Analytic Potential Functions for the a ^3Π_1 and X ^1Σ^+ States of {IBr}
NASA Astrophysics Data System (ADS)
Yukiya, Tokio; Nishimiya, Nobuo; Suzuki, Masao; Le Roy, Robert
2014-06-01
Spectra of IBr in various wavelength regions have been measured by a number of researchers using traditional diffraction grating and microwave methods, as well as using high-resolution laser techniques combined with a Fourier transform spectrometer. In a previous paper at this meeting, we reported a preliminary determination of analytic potential energy functions for the A ^3Π_1 and X ^1Σ^+ states of IBr from a direct-potential-fit (DPF) analysis of all of the data available at that time. That study also confirmed the presence of anomalous fluctuations in the v-dependence of the first differences of the inertial rotational constant, ΔB_v = B_{v+1} - B_v, in the A ^3Π_1 state for vibrational levels with v'(A) in the mid 20's. However, our previous experience in a recent study of the analogous A ^3Π_1-X ^1Σ_g^+ system of Br_2 suggested that the effect of such fluctuations may be overcome if sufficient data are available. The present work therefore reports new measurements of transitions to levels in the v'(A)=23-26 region, together with a new global DPF analysis that uses "robust" least-squares fits to average properly over the effect of such fluctuations in order to provide an optimum delineation of the underlying potential energy curve(s). L.E. Selin, Ark. Fys. 21, 479 (1962); E. Tiemann and Th. Moeller, Z. Naturforsch. A 30, 986 (1975); E.M. Weinstock and A. Preston, J. Mol. Spectrosc. 70, 188 (1978); D.R.T. Appadoo, P.F. Bernath, and R.J. Le Roy, Can. J. Phys. 72, 1265 (1994); N. Nishimiya, T. Yukiya and M. Suzuki, J. Mol. Spectrosc. 173, 8 (1995); T. Yukiya, N. Nishimiya, and R.J. Le Roy, Paper MF12 at the 65th Ohio State University International Symposium on Molecular Spectroscopy, Columbus, Ohio, June 20-24, 2011; T. Yukiya, N. Nishimiya, Y. Samajima, K. Yamaguchi, M. Suzuki, C.D. Boone, I. Ozier and R.J. Le Roy, J. Mol. Spectrosc. 283, 32 (2013); J.K.G. Watson, J. Mol. Spectrosc. 219, 326 (2003).
Tank 241-A-101 cores 154 and 156 analytical results for the final report
Steen, F.H.
1997-05-02
This report contains tables of the analytical results from sampling Tank 241-A-101 for the following: fluorides, chlorides, nitrites, bromides, nitrates, phosphates, sulfates, and oxalates. This tank is listed on the Hydrogen Watch List.
Ballestra, S.; Vas, D.; Holm, E.; Lopez, J.J.; Parsi, P.
1988-01-01
The Analytical Quality Control Services Program of the IAEA-ILMR covers a wide variety of intercalibration and reference materials. The purpose of the program is to ensure the comparability of the results obtained by the different participants and to enable laboratories engaged in low-level analyses of marine environmental materials to control their analytical performance. Within the past five years, the International Laboratory of Marine Radioactivity in Monaco has organized eight intercomparison exercises, on a world-wide basis, on natural materials of marine origin comprising sea water, sediment, seaweed and fish flesh. Results on artificial (fission and activation products, transuranium elements) and natural radionuclides were compiled and evaluated. Reference concentration values were established for a number of the intercalibration samples allowing them to become certified as reference materials available for general distribution. The results of the fish flesh sample and those of the deep-sea sediment are reviewed. The present status of three on-going intercomparison exercises on post-Chernobyl samples IAEA-306 (Baltic Sea sediment), IAEA-307 (Mediterranean sea-plant Posidonia oceanica) and IAEA-308 (Mediterranean mixed seaweed) is also described. 1 refs., 4 tabs.
Anderson, Oscar A.
2006-08-06
The well-known Kapchinskij-Vladimirskij (KV) equations are difficult to solve in general, but the problem is simplified for the matched-beam case with sufficient symmetry. They show that the interdependence of the two KV equations is eliminated, so that only one needs to be solved--a great simplification. They present an iterative method of solution which can potentially yield any desired level of accuracy. The lowest level, the well-known smooth approximation, yields simple, explicit results with good accuracy for weak or moderate focusing fields. The next level improves the accuracy for high fields; they previously showed how to maintain a simple explicit format for the results. That paper used expansion in a small parameter to obtain the second level. The present paper, using straightforward iteration, obtains equations of first, second, and third levels of accuracy. For a periodic lattice with beam matched to lattice, they use the lattice and beam parameters as input and solve for phase advances and envelope waveforms. They find excellent agreement with numerical solutions over a wide range of beam emittances and intensities.
NASA Astrophysics Data System (ADS)
Bini, Donato; Damour, Thibault; Geralico, Andrea
2016-05-01
We raise the analytical knowledge of the eccentricity expansion of the Detweiler-Barack-Sago redshift invariant in a Schwarzschild spacetime up to the 9.5th post-Newtonian order (included) for the e^2 and e^4 contributions, and up to the 4th post-Newtonian order for the higher eccentricity contributions through e^20. We convert this information into an analytical knowledge of the effective-one-body radial potentials d̄(u), ρ(u), and q(u) through the 9.5th post-Newtonian order. We find that our analytical results are compatible with current corresponding numerical self-force data.
NASA Technical Reports Server (NTRS)
Lueck, Dale E.; Captain, Janine E.; Gibson, Tracy L.; Peterson, Barbara V.; Berger, Cristina M.; Levine, Lanfang
2008-01-01
The RESOLVE project requires an analytical system to identify and quantitate the volatiles released from a lunar drill core sample as it is crushed and heated to 150 C. The expected gases and their range of concentrations were used to assess Gas Chromatography (GC) and Mass Spectrometry (MS), along with specific analyzers for use on this potential lunar lander. The ability of these systems to accurately quantitate water and hydrogen in an unknown matrix led to the selection of a small MEMS commercial process GC for use in this project. The modification, development, and testing of this instrument for the specific needs of the project are covered.
NASA Astrophysics Data System (ADS)
Le Roy, Robert J.; Walji, Sadru; Sentjens, Katherine
2013-06-01
Alkali hydride diatomic molecules have long been the object of spectroscopic studies. However, their small reduced mass makes them species for which the conventional semiclassical-based methods of analysis tend to have the largest errors. To date, the only quantum-mechanically accurate direct-potential-fit (DPF) analysis for one of these molecules was the one for LiH reported by Coxon and Dickinson. The present paper extends this level of analysis to NaH, and reports a DPF analysis of all available spectroscopic data for the A ^1Σ^+-X ^1Σ^+ system of NaH which yields analytic potential energy functions for these two states that account for those data (on average) to within the experimental uncertainties. W.C. Stwalley, W.T. Zemke and S.C. Yang, J. Phys. Chem. Ref. Data 20, 153-187 (1991); J.A. Coxon and C.S. Dickinson, J. Chem. Phys. 121, 8378 (2004).
Bearup, Daniel; Petrovskaya, Natalia; Petrovskii, Sergei
2015-05-01
Monitoring of pest insects is an important part of integrated pest management. It aims to provide information about pest insect abundance at a given location. This includes data collection, usually using traps, and their subsequent analysis and/or interpretation. However, interpretation of trap counts (the number of insects caught over a fixed time) remains a challenging problem. First, an increase in either the population density or insect activity can result in a similar increase in the number of insects trapped (the so-called "activity-density" problem). Second, a genuine increase of the local population density can be attributed to qualitatively different ecological mechanisms, such as multiplication or immigration. Identification of the true factor causing an increase in trap counts is important, as different mechanisms require different control strategies. In this paper, we consider a mean-field mathematical model of insect trapping based on the diffusion equation. Although the diffusion equation is a well-studied model, its analytical solution in closed form is available only for a few special cases, whilst in the more general case the problem has to be solved numerically. We choose finite differences as the baseline numerical method and show that numerical solution of the problem, especially in the realistic 2D case, is not at all straightforward, as it requires a sufficiently accurate approximation of the diffusion fluxes. Once the numerical method is justified and tested, we apply it to the corresponding boundary problem, where different types of boundary forcing describe different scenarios of pest insect immigration, and reveal the corresponding patterns in the trap count growth. PMID:25744607
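As a toy illustration of the finite-difference approach described above, the sketch below solves the 1D diffusion equation with an absorbing boundary representing the trap and accumulates the trap count from the boundary flux. All names and parameter values are invented for this example, and the paper's actual schemes (especially the 2D flux approximations) are more careful than this simple explicit scheme:

```python
def trap_count_1d(D=1.0, L=10.0, nx=201, t_end=5.0, u0=1.0):
    """Explicit finite-difference solution of du/dt = D d2u/dx2 on [0, L]
    with an absorbing boundary (the trap) at x = 0 and a reflecting
    boundary at x = L.  Returns (trap_count, remaining_mass, initial_mass).
    """
    dx = L / (nx - 1)
    dt = 0.25 * dx * dx / D            # stable: D*dt/dx^2 <= 0.5
    u = [u0] * nx                      # uniform initial insect density
    u[0] = 0.0                         # trap pins the density to zero
    initial = u0 * L
    trapped = 0.0
    t = 0.0
    while t < t_end:
        # flux into the trap: D * du/dx at x = 0 (one-sided difference)
        trapped += D * (u[1] - u[0]) / dx * dt
        new = u[:]
        for i in range(1, nx - 1):
            new[i] = u[i] + D * dt / dx**2 * (u[i+1] - 2*u[i] + u[i-1])
        new[-1] = new[-2]              # zero-flux (reflecting) boundary
        new[0] = 0.0                   # absorbing boundary
        u = new
        t += dt
    remaining = sum(u) * dx
    return trapped, remaining, initial
```

A basic sanity check is mass balance: the accumulated trap count plus the mass remaining in the domain should approximately equal the initial mass, and the trap count should track the semi-infinite analytical result 2·u0·√(Dt/π) while the diffusive boundary layer stays well inside the domain.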
Are restrained eaters accurate monitors of their intoxication? Results from a field experiment.
Buchholz, Laura J; Crowther, Janis H; Olds, R Scott; Smith, Kathryn E; Ridolfi, Danielle R
2013-04-01
Brief interventions encourage college students to eat more before drinking to prevent harm (Dimeff et al., 1999), although many women decrease their caloric intake (Giles et al., 2009) and the number of eating episodes (Luce et al., 2012) prior to drinking alcohol. Participants were 37 undergraduate women (24.3% Caucasian) who were recruited from a local bar district in the Midwest. This study examined whether changes in eating before intending to drink interacted with dietary restraint to predict the accuracy of one's perceived intoxication. Results indicated that changes in eating significantly moderated the relationship between dietary restraint and the accuracy of one's perceived intoxication level. After eating more food before intending to drink, women higher in restraint were more likely to overestimate their intoxication than women lower in restraint. There were no differences between women with high and low levels of dietary restraint in the accuracy of their perceived intoxication after eating less food before intending to drink. Future research would benefit from examining interoceptive awareness as a possible mechanism involved in this relationship.
Olivieri, Alejandro C
2015-04-01
Practical guidelines for reporting analytical calibration results are provided. General topics, such as the number of reported significant figures and the optimization of analytical procedures, affect all calibration scenarios. In the specific case of single-component or univariate calibration, relevant issues discussed in the present Tutorial include: (1) how linearity can be assessed, (2) how to correctly estimate the limits of detection and quantitation, (3) when and how standard addition should be employed, (4) how to apply recovery studies for evaluating accuracy and precision, and (5) how average prediction errors can be compared for different analytical methodologies. For multi-component calibration procedures based on multivariate data, pertinent subjects included here are the choice of algorithms, the estimation of analytical figures of merit (detection capabilities, sensitivity, selectivity), the use of non-linear models, the consideration of the model regression coefficients for variable selection, and the application of certain mathematical pre-processing procedures such as smoothing.
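Two of the univariate topics above, assessing linearity and estimating the limits of detection and quantitation, reduce to an ordinary least-squares fit plus the common ICH-style estimates LOD = 3.3·s/slope and LOQ = 10·s/slope, where s is the residual standard deviation of the calibration line. The following is a minimal sketch under those conventions; the function name and data are invented for the example and the routine is not taken from the Tutorial:

```python
import math

def calibrate(conc, signal):
    """Ordinary least-squares calibration line plus ICH-style detection
    limits (LOD = 3.3*s/slope, LOQ = 10*s/slope, s = residual standard
    deviation).  Illustrative sketch, not a validated routine."""
    n = len(conc)
    mx = sum(conc) / n
    my = sum(signal) / n
    sxx = sum((x - mx) ** 2 for x in conc)
    sxy = sum((x - mx) * (y - my) for x, y in zip(conc, signal))
    slope = sxy / sxx
    intercept = my - slope * mx
    # residual standard deviation with n - 2 degrees of freedom
    ss_res = sum((y - (intercept + slope * x)) ** 2
                 for x, y in zip(conc, signal))
    s = math.sqrt(ss_res / (n - 2))
    return slope, intercept, 3.3 * s / slope, 10.0 * s / slope
```

For a perfectly linear data set the residuals vanish and both limits go to zero; with real replicate data, s reflects the calibration noise and the limits become meaningful.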
NASA Technical Reports Server (NTRS)
Friedmann, P. P.; Venkatesan, C.
1988-01-01
The results of an analytical study aimed at predicting the aeromechanical stability of a helicopter in ground resonance, with the inclusion of aerodynamic forces, are presented. The theoretical results are found to be in good agreement with the experimental results available in the literature, indicating that the coupled rotor/fuselage system can be represented by a reasonably simple mathematical model.
NASA Astrophysics Data System (ADS)
Fontana, A.; Marzari, F.
2016-05-01
Context. Planetesimals and planets embedded in a circumstellar disk are dynamically perturbed by the disk gravity. This causes an apsidal line precession at a rate that depends on the disk density profile and on the distance of the massive body from the star. Aims: Different analytical models are exploited to compute the precession rate of the perihelion ϖ˙. We compare them to verify their equivalence, in particular after analytical manipulations performed to derive handy formulas, and test their predictions against numerical models in some selected cases. Methods: The theoretical precession rates were computed with analytical algorithms found in the literature using the Mathematica symbolic code, while the numerical simulations were performed with the hydrodynamical code FARGO. Results: For low-mass bodies (planetesimals) the analytical approaches described in Binney & Tremaine (2008, Galactic Dynamics, p. 96), Ward (1981, Icarus, 47, 234), and Silsbee & Rafikov (2015a, ApJ, 798, 71) are equivalent under the same initial conditions for the disk in terms of mass, density profile, and inner and outer borders. They also match the numerical values computed with FARGO reasonably well away from the outer border of the disk. On the other hand, the predictions of the classical Mestel disk (Mestel 1963, MNRAS, 126, 553) for disks with p = 1 depart significantly from the numerical solution for radial distances beyond one-third of the disk extension, because the underlying assumption of the Mestel disk is that the outer disk border extends to infinity. For massive bodies such as terrestrial and giant planets, the agreement of the analytical approaches is progressively poorer because of the changes in the disk structure that are induced by the planet's gravity. For giant planets the precession rate changes sign and is higher than the modulus of the theoretical value by a factor ranging from 1.5 to 1.8. In this case, the correction of the formula proposed by Ward (1981) to
Review of analytical results from the proposed agent disposal facility site, Aberdeen Proving Ground
Brubaker, K.L.; Reed, L.L.; Myers, S.W.; Shepard, L.T.; Sydelko, T.G.
1997-09-01
Argonne National Laboratory reviewed the analytical results from 57 composite soil samples collected in the Bush River area of Aberdeen Proving Ground, Maryland. A suite of 16 analytical tests involving 11 different SW-846 methods was used to detect a wide range of organic and inorganic contaminants. One method (BTEX) was considered redundant, and two "single-number" methods (TPH and TOX) were found to lack the required specificity to yield unambiguous results, especially in a preliminary investigation. Volatile analytes detected at the site include 1,1,2,2-tetrachloroethane, trichloroethylene, and tetrachloroethylene, all of which probably represent residual site contamination from past activities. Other volatile analytes detected include toluene, tridecane, methylene chloride, and trichlorofluoromethane. These compounds are probably not associated with site contamination but likely represent cross-contamination or, in the case of tridecane, a naturally occurring material. Semivolatile analytes detected include three different phthalates and low part-per-billion amounts of the pesticide DDT and its degradation product DDE. The pesticide could represent residual site contamination from past activities, and the phthalates are likely due, in part, to cross-contamination during sample handling. A number of high-molecular-weight hydrocarbons and hydrocarbon derivatives were detected and were probably naturally occurring compounds. 4 refs., 1 fig., 8 tabs.
Tank 241-S-102, Core 232 analytical results for the final report
STEEN, F.H.
1998-11-04
This document is the analytical laboratory report for tank 241-S-102 push mode core segments collected between March 5, 1998 and April 2, 1998. The segments were subsampled and analyzed in accordance with the Tank 241-S-102 Retained Gas Sampler System Sampling and Analysis Plan (TSAP) (McCain, 1998), Letter of Instruction for Compatibility Analysis of Samples from Tank 241-S-102 (LOI) (Thompson, 1998) and the Data Quality Objectives for Tank Farms Waste Compatibility Program (DQO) (Mulkey and Miller, 1998). The analytical results are included in the data summary table (Table 1).
B Plant canyon sample TK-21-1 analytical results for the final report
Steen, F.H.
1998-04-10
This document is the analytical laboratory report for the TK-21-1 sample collected from the B Plant Canyon on February 18, 1998. The sample was analyzed in accordance with the Sampling and Analysis Plan for B Plant Solutions (SAP) (Simmons, 1997) in support of the B Plant decommissioning project. Samples were analyzed to provide data both to describe the material which would remain in the tanks after the B Plant transition is complete and to determine Tank Farm compatibility. The analytical results are included in the data summary table (Table 1).
Barbati, Alexander C; Kirby, Brian J
2016-07-01
We derive an approximate analytical representation of the conductivity for a 1D system with porous and charged layers grafted onto parallel plates. Our theory improves on prior work by developing approximate analytical expressions applicable over an arbitrary range of potentials, both large and small compared to the thermal voltage (RT/F). Further, we describe these results in a framework of simplifying nondimensional parameters, indicating the relative dominance of various physicochemical processes. We demonstrate the efficacy of our approximate expression with comparisons to numerical representations of the exact analytical conductivity. Finally, we utilize this conductivity expression, in concert with other components of the electrokinetic coupling matrix, to describe the streaming potential and electroviscous effect in systems with porous and charged layers.
Peters, T.; Fink, S.
2011-09-29
As part of the implementation process for the Next Generation Cesium Extraction Solvent (NGCS), SRNL and F/H Lab performed a series of analytical cross-checks to ensure that the components in the NGCS solvent system do not constitute an undue analytical challenge. For measurement of entrained Isopar® L in aqueous solutions, both labs performed similarly, with results more reliable at higher concentrations (near 50 mg/L). Low bias occurred in both labs, as seen previously in comparable blind studies for the baseline solvent system. SRNL recommends consideration of the use of Teflon™ caps on all sample containers used for this purpose. For pH measurements, the labs showed reasonable agreement but considerable positive bias for dilute boric acid solutions. SRNL recommends consideration of an alternate analytical method for qualification of boric acid concentrations.
Tank 241-BY-109, cores 201 and 203, analytical results for the final report
Esch, R.A.
1997-11-20
This document is the final laboratory report for tank 241-BY-109 push mode core segments collected between June 6, 1997 and June 17, 1997. The segments were subsampled and analyzed in accordance with the Tank Push Mode Core Sampling and Analysis Plan (Bell, 1997) and the Tank Safety Screening Data Quality Objective (Dukelow, et al., 1995). The analytical results are included.
Feynman Path Integral Approach to Electron Diffraction for One and Two Slits: Analytical Results
ERIC Educational Resources Information Center
Beau, Mathieu
2012-01-01
In this paper we present an analytic solution of the famous problem of diffraction and interference of electrons through one and two slits (for simplicity, only the one-dimensional case is considered). In addition to exact formulae, various approximations of the electron distribution are shown which facilitate the interpretation of the results.…
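For comparison with exact path-integral formulae like those discussed above, the familiar Fraunhofer (far-field) limit of the two-slit electron pattern, a cos² interference term under a single-slit sinc² envelope, is easy to sketch numerically. This is the standard textbook approximation, not the exact propagator result of the paper, and the function name and all parameter values below are arbitrary illustrative choices:

```python
import math

def double_slit_intensity(theta, wavelength, slit_width, separation):
    """Fraunhofer (far-field) two-slit intensity at diffraction angle
    theta, normalised to 1 at theta = 0: a cos^2 interference term
    modulated by the single-slit sinc^2 envelope."""
    beta = math.pi * slit_width * math.sin(theta) / wavelength
    delta = math.pi * separation * math.sin(theta) / wavelength
    envelope = (math.sin(beta) / beta) ** 2 if beta != 0.0 else 1.0
    return envelope * math.cos(delta) ** 2
```

The interference minima fall where the path difference is a half-integer number of wavelengths, i.e. at sin θ = (m + 1/2)·λ/d, while the envelope suppresses the fringes far from the axis.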
Recent Results on the Accurate Measurements of the Dielectric Constant of Seawater at 1.413GHZ
NASA Technical Reports Server (NTRS)
Lang, R.H.; Tarkocin, Y.; Utku, C.; Le Vine, D.M.
2008-01-01
Measurements of the complex dielectric constant of seawater at 30.00 psu, 35.00 psu, and 38.27 psu over the temperature range from 5 C to 35 C at 1.413 GHz are given and compared with the Klein-Swift results. A resonant cavity technique is used. The calibration constant used in the cavity perturbation formulas is determined experimentally using methanol and ethanediol (ethylene glycol) as reference liquids. Analysis of the data shows that the measurements are accurate to better than 1.0% in almost all cases studied.
Bounding the Higgs width at the LHC using full analytic results for $$gg → e^- e^+ \\mu^- \\mu^+$$
Campbell, John M.; Ellis, R. Keith; Williams, Ciaran
2014-04-09
We revisit the hadronic production of the four-lepton final state, e– e+ μ– μ+, through the fusion of initial state gluons. This process is mediated by loops of quarks, and we provide the first full analytic results for helicity amplitudes that account for both the effects of the quark mass in the loop and off-shell vector bosons. The analytic results have been implemented in the Monte Carlo program MCFM and are both fast and numerically stable in the region of low Z transverse momentum. We use our results to study the interference between Higgs-mediated and continuum production of four-lepton final states, which is necessary in order to obtain accurate theoretical predictions outside the Higgs resonance region. We have confirmed and extended a recent analysis of Caola and Melnikov that proposes to use a measurement of the off-shell region to constrain the total width of the Higgs boson. Using a simple cut-and-count method, existing LHC data should bound the width at the level of 25-45 times the Standard Model expectation. We investigate the power of using a matrix element method to construct a kinematic discriminant to sharpen the constraint. Furthermore, in our analysis the bound on the Higgs width is improved by a factor of about 1.6 using a simple cut on the MEM discriminant, compared to an invariant mass cut m_4l > 300 GeV.
Analytical and Numerical Results for an Adhesively Bonded Joint Subjected to Pure Bending
NASA Technical Reports Server (NTRS)
Smeltzer, Stanley S., III; Lundgren, Eric
2006-01-01
A one-dimensional, semi-analytical methodology that was previously developed for evaluating adhesively bonded joints composed of anisotropic adherends and adhesives that exhibit inelastic material behavior is further verified in the present paper. A summary of the first-order differential equations and applied joint loading used to determine the adhesive response from the methodology is also presented. The method was previously verified against a variety of single-lap joint configurations from the literature that subjected the joints to cases of axial tension and pure bending. Using the same joint configuration and applied bending load presented in a study by Yang, the finite element analysis software ABAQUS was used to further verify the semi-analytical method. Linear static ABAQUS results are presented for two models, one with a coarse and one with a fine element mesh, that were used to verify convergence of the finite element analyses. Close agreement between the finite element results and the semi-analytical methodology was found for both the shear and normal stress responses of the adhesive bondline. Thus, the semi-analytical methodology was successfully verified using the ABAQUS finite element software and a single-lap joint configuration subjected to pure bending.
Analytical results for quasiparticle excitations in the Fractional Quantum Hall Effect regime
NASA Astrophysics Data System (ADS)
Bentalha, Z.
2016-07-01
In this work, quasiparticle energies for systems with N = 3, 4, 5, 6, and 7 electrons are calculated analytically in both the Laughlin and composite fermion (CF) theories by considering the electron-electron interaction potential. The exact results we have obtained for the first and the second excited states agree with previous numerical results. This study shows that at this level the CF wave function has lower energy than the Laughlin wave function.
Improving the trust in results of numerical simulations and scientific data analytics
Cappello, Franck; Constantinescu, Emil; Hovland, Paul; Peterka, Tom; Phillips, Carolyn; Snir, Marc; Wild, Stefan
2015-04-30
This white paper investigates several key aspects of the trust that a user can give to the results of numerical simulations and scientific data analytics. In this document, the notion of trust is related to the integrity of numerical simulations and data analytics applications. This white paper complements the DOE ASCR report on Cybersecurity for Scientific Computing Integrity by (1) exploring the sources of trust loss; (2) reviewing the definitions of trust in several areas; (3) providing numerous cases of result alteration, some of them leading to catastrophic failures; (4) examining the current notion of trust in numerical simulation and scientific data analytics; (5) providing a gap analysis; and (6) suggesting two important research directions and their respective research topics. To simplify the presentation without loss of generality, we consider that trust in results can be lost (or the results’ integrity impaired) because of any form of corruption happening during the execution of the numerical simulation or the data analytics application. In general, the sources of such corruption are threefold: errors, bugs, and attacks. Current applications are already using techniques to deal with different types of corruption. However, not all potential corruptions are covered by these techniques. We firmly believe that the current level of trust that a user has in the results is at least partially founded on ignorance of this issue or the hope that no undetected corruptions will occur during the execution. This white paper explores the notion of trust and suggests recommendations for developing a more scientifically grounded notion of trust in numerical simulation and scientific data analytics. We first formulate the problem and show that it goes beyond previous questions regarding the quality of results such as V&V, uncertainty quantification, and data assimilation. We then explore the complexity of this difficult problem, and we sketch complementary general
NASA Astrophysics Data System (ADS)
Mori, Takuro; Nakatani, Makoto; Tesfamariam, Solomon
2015-12-01
This paper presents analytical and numerical models for semirigid timber frame with Lagscrewbolt (LSB) connections. A series of static and reverse cyclic experimental tests were carried out for different beam sizes (400, 500, and 600 mm depth) and column-base connections with different numbers of LSBs (4, 5, 8). For the beam-column connections, with increase in beam depth, moment resistance and stiffness values increased, and ductility factor reduced. For the column-base connection, with increase in the number of LSBs, the strength, stiffness, and ductility values increased. A material model available in OpenSees, Pinching4 hysteretic model, was calibrated for all connection test results. Finally, analytical model of the portal frame was developed and compared with the experimental test results. Overall, there was good agreement with the experimental test results, and the Pinching4 hysteretic model can readily be used for full-scale structural model.
Tank 241-T-112, cores 185 and 186 analytical results for the final report
Steen, F.H.
1997-06-03
This document is the analytical laboratory report for tank 241-T-112 push mode core segments collected between February 26, 1997 and March 19, 1997. The segments were subsampled and analyzed in accordance with the Tank 241-T-112 Push Mode Core Sampling and Analysis Plan (TSAP) and the Safety Screening Data Quality Objective (DQO). The analytical results are included in the data summary table. None of the samples submitted for Differential Scanning Calorimetry and Total Alpha Activity (AT) exceeded notification limits as stated in the TSAP. The statistical results of the 95% confidence interval on the mean calculations are provided by the Tank Waste Remediation Systems Technical Basis Group in accordance with the Memorandum of Understanding and are not considered in this report.
Tank 241-AX-103, cores 212 and 214 analytical results for the final report
Steen, F.H.
1998-02-05
This document is the analytical laboratory report for tank 241-AX-103 push mode core segments collected between July 30, 1997 and August 11, 1997. The segments were subsampled and analyzed in accordance with the Tank 241-AX-103 Push Mode Core Sampling and Analysis Plan (TSAP) (Conner, 1997), the Safety Screening Data Quality Objective (DQO) (Dukelow, et al., 1995) and the Data Quality Objective to Support Resolution of the Organic Complexant Safety Issue (Organic DQO) (Turner, et al., 1995). The analytical results are included in the data summary table (Table 1). None of the samples submitted for Differential Scanning Calorimetry (DSC), Total Alpha Activity (AT), plutonium 239 (Pu239), and Total Organic Carbon (TOC) exceeded notification limits as stated in the TSAP (Conner, 1997). The statistical results of the 95% confidence interval on the mean calculations are provided by the Tank Waste Remediation Systems Technical Basis Group in accordance with the Memorandum of Understanding (Schreiber, 1997) and are not considered in this report.
Drilling forces in high-curvature wellbores: A comparison of analytical model results with MWD data
Rocheleau, D.N.; Zhao, M.
1997-07-01
Horizontal drilling is commonly used to reach lateral targets in oil and gas reservoirs. A method is presented which predicts the drilling forces encountered while tripping-in and tripping-out of high-curvature wellbores during horizontal and extended reach drilling. The method is based on modeling the drillstring as a set of continuous beams using Timoshenko beam theory. The paper first describes how the drillstring is modeled; it then develops the analytical equations of the model and outlines a computer implementation of these equations. Lastly, the results predicted by the analytical model are compared with actual field results based on measurement while drilling (MWD) data obtained from high-curvature wellbores in the Gulf of Mexico.
Analytical Results from the Area G Nitrate Salt Samples Submitted to C-AAC
Drake, Lawrence Randall
2014-09-03
Table 1 is a summary of the analysis results on samples 4174-1-6/7 and 4174-2-6/7. The results in Table 2 are the major and trace metals analysis values. Samples 4174-1-6 and 4174-2-7 were introduced into a radiological glovebox in CMR for partitioning and analysis. Samples 4174-1-7 and 4174-2-6 were assayed by gamma spectrometry and then sent to TA-48 (C-NR) for further analysis. The validated analytical procedures used by C-AAC are cited at the end of this document. The results have not been approved by formal QA release.
Study of a vibrating plate: comparison between experimental (ESPI) and analytical results
NASA Astrophysics Data System (ADS)
Romero, G.; Alvarez, L.; Alanís, E.; Nallim, L.; Grossi, R.
2003-07-01
Real-time electronic speckle pattern interferometry (ESPI) was used for tuning and visualization of natural frequencies of a trapezoidal plate. The plate was excited to resonant vibration by a sinusoidal acoustical source, which provided a continuous range of audio frequencies. Fringe patterns produced during the time-average recording of the vibrating plate—corresponding to several resonant frequencies—were registered. From these interferograms, calculations of vibrational amplitudes by means of zero-order Bessel functions were performed in some particular cases. The system was also studied analytically. The analytical approach developed is based on the Rayleigh-Ritz method and on the use of non-orthogonal right triangular co-ordinates. The deflection of the plate is approximated by a set of beam characteristic orthogonal polynomials generated by using the Gram-Schmidt procedure. A high degree of correlation between computational analysis and experimental results was observed.
2015-01-01
Background Though cluster analysis has become a routine analytic task for bioinformatics research, it is still arduous for researchers to assess the quality of a clustering result. To select the best clustering method and its parameters for a dataset, researchers have to run multiple clustering algorithms and compare them. However, such a comparison task with multiple clustering results is cognitively demanding and laborious. Results In this paper, we present XCluSim, a visual analytics tool that enables users to interactively compare multiple clustering results based on the Visual Information Seeking Mantra. We build a taxonomy for categorizing existing techniques of clustering results visualization in terms of the Gestalt principles of grouping. Using the taxonomy, we choose the most appropriate interactive visualizations for presenting individual clustering results from different types of clustering algorithms. The efficacy of XCluSim is shown through case studies with a bioinformatician. Conclusions Compared to other relevant tools, XCluSim enables users to compare multiple clustering results in a more scalable manner. Moreover, XCluSim supports diverse clustering algorithms and dedicated visualizations and interactions for different types of clustering results, allowing more effective exploration of details on demand. Through case studies with a bioinformatics researcher, we received positive feedback on the functionalities of XCluSim, including its ability to help identify stably clustered items across multiple clustering results. PMID:26328893
Field comparison of analytical results from discrete-depth ground water samplers
Zemo, D.A.; Delfino, T.A.; Gallinatti, J.D.; Baker, V.A.; Hilpert, L.R.
1995-07-01
Discrete-depth ground water samplers are used during environmental screening investigations to collect ground water samples in lieu of installing and sampling monitoring wells. Two of the most commonly used samplers are the BAT Enviroprobe and the QED HydroPunch I, which rely on differing sample collection mechanics. Although these devices have been on the market for several years, it was unknown what, if any, effect the differences would have on analytical results for ground water samples containing low to moderate concentrations of chlorinated volatile organic compounds (VOCs). This study investigated whether the discrete-depth ground water sampler used introduces statistically significant differences in analytical results. The goal was to provide a technical basis for allowing the two devices to be used interchangeably during screening investigations. Because this study was based on field samples, it included several sources of potential variability. It was necessary to separate differences due to sampler type from variability due to sampling location, sample handling, and laboratory analytical error. To statistically evaluate these sources of variability, the experiment was arranged in a nested design. Sixteen ground water samples were collected from eight random locations within a 15-foot by 15-foot grid. The grid was located in an area where shallow ground water was believed to be uniformly affected by VOCs. The data were evaluated using analysis of variance.
Variability of physicochemical analytical results from minesoils and QA/QC considerations
Brandt, J.E.; Horbaczewski, J.K.
1997-12-31
The Texas Mining and Reclamation Association (TMRA) has sponsored a soil sample round robin program since 1990. To date, 17 different soil samples (including blind duplicate samples M and P) have been analyzed in a series of six rounds of analysis. Five laboratories have participated in this program. Three of these were commercial laboratories, one was an electric utility's in-house laboratory, and one was the state regulatory authority's laboratory. Samples submitted for analysis included minesoils and native soils. Results indicate that average inter-laboratory variability was approximately five times greater than intra-laboratory variability. The level of variation was affected by the sample being analyzed and the analytical parameter involved. The larger inter-laboratory variability suggests that a greater convergence of analytical results may be attained through a rigorous examination of individual laboratory methodologies and implementation of widely used standard operating procedures (SOPs). An examination of coefficient of variation (C.V.) data from twelve analytical parameters indicated that pH was the least variable and trace elements were among the most variable. The inclusion of values near the minimum detection level (MDL) tended to increase the C.V. of the data. Analyses of variance (ANOVA) indicated that the sources of variation of most parameters were inconsistent over time. Differences in equipment, sample extraction, and personnel could also account for different values among the laboratories. Analytical parameters that exhibited a sample by laboratory (S x L) interaction suggest that there was inconsistency among laboratories over time. Samples M and P were blind duplicates of the same sample, included in the round robin to assess the precision of the participating laboratories.
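The coefficient-of-variation comparison described in this abstract is simple to reproduce. The sketch below uses hypothetical analyte values (the round-robin data themselves are not given here) to show why pH scores low and a trace element scores high:

```python
import statistics

def cv_percent(values):
    """Coefficient of variation: sample standard deviation expressed
    as a percentage of the mean -- the inter-laboratory spread measure
    discussed in the abstract."""
    return 100.0 * statistics.stdev(values) / statistics.mean(values)

# Hypothetical inter-laboratory results for one soil sample:
ph_results = [6.1, 6.2, 6.1, 6.0]        # pH: tightly clustered
trace_results = [0.8, 1.9, 0.5, 2.6]     # a trace element (mg/kg): scattered

low_cv = cv_percent(ph_results)      # small: near-MDL noise absent
high_cv = cv_percent(trace_results)  # large: values span the MDL region
```

Because the C.V. normalizes by the mean, near-MDL values (a small mean with fixed absolute noise) inflate it, which matches the MDL effect the authors report.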
Bicanic, Dane; Swarts, Jan; Luterotti, Svjetlana; Pietraperzia, Giangaetano; Dóka, Otto; de Rooij, Hans
2004-09-01
The concept of the optothermal window (OW) is proposed as a reliable analytical tool to rapidly determine the concentration of lycopene in a large variety of commercial tomato products in an extremely simple way (the determination is achieved without the need for pretreatment of the sample). The OW is a relative technique, as the information is deduced from a calibration curve that relates the OW data (i.e., the product of the absorption coefficient β and the thermal diffusion length μ) with the lycopene concentration obtained from spectrophotometric measurements. The accuracy of the method has been ascertained with a high correlation coefficient (R = 0.98) between the OW data and results acquired from the same samples by means of the conventional extraction spectrophotometric method. The intrinsic precision of the OW method is quite high (better than 1%), whereas the repeatability of the determination (RSD = 0.4-9.5%, n = 3-10) is comparable to that of spectrophotometry.
NASA Technical Reports Server (NTRS)
Smutek, C.; Bontoux, P.; Roux, B.; Schiroky, G. H.; Hurford, A. C.
1985-01-01
The results of a three-dimensional numerical simulation of Boussinesq free convection in a horizontal differentially heated cylinder are presented. The computation was based on a Samarskii-Andreyev scheme (described by Leong, 1981) and a false-transient advancement in time, with vorticity, velocity, and temperature as dependent variables. Solutions for velocity and temperature distributions were obtained for Rayleigh numbers (based on the radius) Ra = 74-18,700, thus covering the core- and boundary-layer-driven regimes. Numerical solutions are compared with asymptotic analytical solutions and experimental data. The numerical results reproduce well the complex three-dimensional flows observed experimentally.
Analytic results for the percolation transitions of the enhanced binary tree.
Minnhagen, Petter; Baek, Seung Ki
2010-07-01
Percolation for a planar lattice has a single percolation threshold, whereas percolation for a negatively curved lattice displays two separate thresholds. The enhanced binary tree (EBT) can be viewed as a prototype model displaying two separate percolation thresholds. We present an analytic result for the EBT model which gives two critical percolation threshold probabilities, p_c1 = √13/2 - 3/2 and p_c2 = 1/2, and yields a size-scaling exponent Φ = ln[p(1 + p)/(1 - p(1 - p))]/ln 2. It is inferred that the two threshold values give exact upper limits and that p_c1 is furthermore exact. In addition, we argue that p_c2 is also exact. The physics of the model and the results are described within the midpoint-percolation concept: Monte Carlo simulations are presented for the number of boundary points which are reached from the midpoint, and the results are compared to the number of routes from the midpoint to the boundary given by the analytic solution. These comparisons provide a more precise physical picture of what happens at the transitions. Finally, the results are compared to related works, in particular the Cayley tree and Monte Carlo results for hyperbolic lattices, as well as earlier results for the EBT model. This disproves a conjecture that the EBT has an exact relation to the thresholds of its dual lattice.
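Spelled out in display form (a direct evaluation of the threshold expression quoted in the abstract; the decimal value is a straightforward numerical check, not part of the original record):

```latex
\[
p_{c1} \;=\; \frac{\sqrt{13}}{2} - \frac{3}{2}
       \;=\; \frac{\sqrt{13}-3}{2}
       \;\approx\; 0.3028,
\qquad
p_{c2} \;=\; \frac{1}{2}.
\]
```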
NASA Astrophysics Data System (ADS)
Pasternack, G. B.; Wyrick, J. R.; Jackson, J. R.
2014-12-01
Long practiced in fisheries, visual substrate mapping of coarse-bedded rivers is eschewed by geomorphologists for its inaccuracy and limited sizing data. Geomorphologists instead perform time-consuming measurements of surficial grains, with the few measured locations precluding spatially explicit mapping and analysis of sediment facies. Remote sensing works for bare land, but not for vegetated or subaqueous sediments. Because visual systems apply the log2 Wentworth scale made for sieving, they suffer from the human inability to readily discern those classes. We hypothesized that size classes centered on the PDF of the anticipated sediment size distribution would enable field crews to accurately (i) identify the presence/absence of each class in a facies patch and (ii) estimate the relative amount of each class to within 10%. We first tested 6 people using 14 measured samples with different mixtures. Next, we carried out facies mapping for ~37 km of the lower Yuba River in California. Finally, we tested the resulting data to see whether it produced statistically significant hydraulic-sedimentary-geomorphic results. Presence/absence performance error was 0-4% for four people, 13% for one person, and 33% for one person. The last person was excluded from further effort. For abundance estimation, performance error was 1% for one person, 7-12% for three people, and 33% for one person. This last person was further trained and re-tested. We found that the samples easiest to visually quantify were unimodal and bimodal, while the most difficult had nearly equal amounts of each size. This confirms psychological studies showing that humans have a more difficult time quantifying abundances of subgroups when confronted with well-mixed groups. In the Yuba, mean grain size decreased downstream, as is typical for an alluvial river. When averaged by reach, mean grain size and bed slope were correlated with an r2 of 0.95. At the morphological unit (MU) scale, eight in-channel bed MU types had an r2 of 0.90 between mean
Analytical Round Robin for Elastic-Plastic Analysis of Surface Cracked Plates: Phase I Results
NASA Technical Reports Server (NTRS)
Wells, D. N.; Allen, P. A.
2012-01-01
An analytical round robin for the elastic-plastic analysis of surface cracks in flat plates was conducted with 15 participants. Experimental results from a surface crack tension test in 2219-T8 aluminum plate provided the basis for the inter-laboratory study (ILS). The study proceeded in a blind fashion given that the analysis methodology was not specified to the participants, and key experimental results were withheld. This approach allowed the ILS to serve as a current measure of the state of the art for elastic-plastic fracture mechanics analysis. The analytical results and the associated methodologies were collected for comparison, and sources of variability were studied and isolated. The results of the study revealed that the J-integral analysis methodology using the domain integral method is robust, providing reliable J-integral values without being overly sensitive to modeling details. General modeling choices such as analysis code, model size (mesh density), crack tip meshing, or boundary conditions, were not found to be sources of significant variability. For analyses controlled only by far-field boundary conditions, the greatest source of variability in the J-integral assessment is introduced through the constitutive model. This variability can be substantially reduced by using crack mouth opening displacements to anchor the assessment. Conclusions provide recommendations for analysis standardization.
Warwick, Peter D.; Breland, F. Clayton; Hackley, Paul C.; Dulong, Frank T.; Nichols, Douglas J.; Karlsen, Alexander W.; Bustin, R. Marc; Barker, Charles E.; Willett, Jason C.; Trippi, Michael H.
2006-01-01
In 2001, and 2002, the U.S. Geological Survey (USGS) and the Louisiana Geological Survey (LGS), through a Cooperative Research and Development Agreement (CRADA) with Devon SFS Operating, Inc. (Devon), participated in an exploratory drilling and coring program for coal-bed methane in north-central Louisiana. The USGS and LGS collected 25 coal core and cuttings samples from two coal-bed methane test wells that were drilled in west-central Caldwell Parish, Louisiana. The purpose of this report is to provide the results of the analytical program conducted on the USGS/LGS samples. The data generated from this project are summarized in various topical sections that include: 1. molecular and isotopic data from coal gas samples; 2. results of low-temperature ashing and X-ray analysis; 3. palynological data; 4. down-hole temperature data; 5. detailed core descriptions and selected core photographs; 6. coal physical and chemical analytical data; 7. coal gas desorption results; 8. methane and carbon dioxide coal sorption data; 9. coal petrographic results; and 10. geophysical logs.
Bounding the Higgs width at the LHC using full analytic results for $gg \to e^- e^+ \mu^- \mu^+$
Campbell, John M.; Ellis, R. Keith; Williams, Ciaran
2014-04-09
We revisit the hadronic production of the four-lepton final state, e^- e^+ μ^- μ^+, through the fusion of initial-state gluons. This process is mediated by loops of quarks, and we provide the first full analytic results for the helicity amplitudes that account for both the effects of the quark mass in the loop and off-shell vector bosons. The analytic results have been implemented in the Monte Carlo program MCFM and are both fast and numerically stable in the region of low Z transverse momentum. We use our results to study the interference between Higgs-mediated and continuum production of four-lepton final states, which is necessary in order to obtain accurate theoretical predictions outside the Higgs resonance region. We have confirmed and extended a recent analysis of Caola and Melnikov that proposes to use a measurement of the off-shell region to constrain the total width of the Higgs boson. Using a simple cut-and-count method, existing LHC data should bound the width at the level of 25-45 times the Standard Model expectation. We investigate the power of using a matrix element method to construct a kinematic discriminant to sharpen the constraint. In our analysis, the bound on the Higgs width is improved by a factor of about 1.6 using a simple cut on the MEM discriminant, compared to an invariant mass cut m_{4l} > 300 GeV.
Tank 241-U-106, cores 147 and 148, analytical results for the final report
Steen, F.H.
1996-09-27
This document is the final report deliverable for tank 241-U-106 push mode core segments collected between May 8, 1996 and May 10, 1996 and received by the 222-S Laboratory between May 14, 1996 and May 16, 1996. The segments were subsampled and analyzed in accordance with the Tank 241-U-106 Push Mode Core Sampling and Analysis Plan (TSAP), the Historical Model Evaluation Data Requirements (Historical DQO), the Data Quality Objective to Support Resolution of the Organic Complexant Safety Issue (Organic DQO) and the Safety Screening Data Quality Objective (DQO). The analytical results are included in Table 1.
Van Overwalle, Frank; Baetens, Kris; Mariën, Peter; Vandekerckhove, Marie
2015-08-01
A recent meta-analysis explored the role of the cerebellum in social cognition and documented that this part of the brain is critically implicated in social cognition, especially in more abstract and complex forms of mentalizing. The authors found an overlap with clusters involved in sensorimotor (during mirror and self-judgment tasks) as well as in executive processes (across all tasks) documented in earlier nonsocial cerebellar meta-analyses, and hence interpreted their results in terms of a domain-general function of the cerebellum. However, these meta-analytic results might be interpreted in a different, complementary way. Indeed, the results reveal a striking overlap with the parcellation of cerebellar topography offered by a recent functional connectivity analysis. In particular, the majority of social cognitive activity in the cerebellum can also be explained as located within the boundaries of a default/mentalizing network of the cerebellum, with the exception of the involvement of primary and integrative somatomotor networks for self-related and mirror tasks, respectively. Given the substantial overlap, a novel interpretation of the meta-analytic findings is put forward suggesting that cerebellar activity during social judgments might reflect a more domain-specific mentalizing functionality in some areas of the cerebellum than assumed before. PMID:25621820
Tank 241-AW-105, grab samples, analytical results for the final report
Esch, R.A.
1997-02-20
This document is the final report for tank 241-AW-105 grab samples. Twenty grab samples were collected from risers 10A and 15A on August 20 and 21, 1996, of which eight were designated for the K Basin sludge compatibility and mixing studies. This document presents the analytical results for the remaining twelve samples. Analyses were performed in accordance with the Compatibility Grab Sampling and Analysis Plan (TSAP) and the Data Quality Objectives for Tank Farms Waste Compatibility Program (DQO). The results for the previous sampling of this tank were reported in WHC-SD-WM-DP-149, Rev. 0, 60-Day Waste Compatibility Safety Issue and Final Results for Tank 241-AW-105, Grab Samples 5AW-95-1, 5AW-95-2 and 5AW-95-3. Three supernate samples exceeded the TOC notification limit (30,000 µg C/g dry weight). Appropriate notifications were made. No immediate notifications were required for any other analyte. The TSAP requested analyses for polychlorinated biphenyls (PCB) for all liquids and centrifuged solid subsamples. The PCB analysis of the liquid samples has been delayed and will be presented in a revision to this document.
Shokes, T.; Einerson, J.
2007-07-01
One goal of characterizing, processing, and shipping waste to the Waste Isolation Pilot Plant (WIPP) is to make all activities as efficient as possible. Data management and repetitive calculations are a critical part of the process that can be automated, thereby increasing the accuracy and rate at which work is completed and reducing costs. This paper presents the tools developed to automate statistical analysis and other calculations required by the WIPP Hazardous Waste Facility Permit (HWFP). Statistical analyses are performed on the analytical results on gas samples from the headspace of waste containers and solid samples from the core of the waste container. The calculations include determining the number of samples, test for the shape of the distribution of the analytical results, mean, standard deviation, upper 90-percent confidence limit of the mean, and the minimum required Waste Acceptance Plan (WAP) sample size. The input data for these calculations are from the batch data reports for headspace gas analytical results and solids analysis, which must also be obtained and collated for proper use. The most challenging component of the statistical analysis, if performed manually, is the determination of the distribution shape; therefore, the distribution testing is typically performed using a certified software tool. All other calculations can be completed manually, with a spreadsheet, custom developed software, and/or certified software tool. Out of the options available, manually performing the calculations or using a spreadsheet are the least desirable. These methods rely heavily on the availability of an expert, such as a statistician, to perform the calculation. These methods are also more open to human error such as transcription or 'cut and paste' errors. A SAS program is in the process of being developed to perform the calculations. Due to the potential size of the data input files and the need to archive the data in an accessible format, the SAS
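The repetitive calculation chain described here (sample count, mean, standard deviation, upper 90-percent confidence limit of the mean) is easy to sketch. The data and the tabulated t value below are illustrative assumptions, not WIPP values, and the distribution-shape test the text mentions is left to the certified software it describes:

```python
import math
import statistics

def ucl90(values, t_crit):
    """Upper 90-percent confidence limit of the mean: xbar + t * s / sqrt(n).
    t_crit is the one-sided 90% Student-t critical value for n - 1 degrees
    of freedom, supplied from a table here."""
    n = len(values)
    return statistics.mean(values) + t_crit * statistics.stdev(values) / math.sqrt(n)

# Hypothetical headspace-gas results (ppmv) for one analyte:
sample = [12.0, 15.5, 9.8, 14.2, 11.7]
limit = ucl90(sample, t_crit=1.533)   # one-sided t(0.90), 4 degrees of freedom
```

Automating exactly this kind of transcription-prone arithmetic is the cost and accuracy argument the abstract makes against manual or spreadsheet calculation.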
Silva, Romesh; Amouzou, Agbessi; Munos, Melinda; Marsh, Andrew; Hazel, Elizabeth; Victora, Cesar; Black, Robert; Bryce, Jennifer
2016-01-01
Introduction Most low-income countries lack complete and accurate vital registration systems. As a result, measures of under-five mortality rates rely mostly on household surveys. In collaboration with partners in Ethiopia, Ghana, Malawi, and Mali, we assessed the completeness and accuracy of reporting of births and deaths by community-based health workers, and the accuracy of annualized under-five mortality rate estimates derived from these data. Here we report on results from Ethiopia, Malawi and Mali. Method In all three countries, community health workers (CHWs) were trained, equipped and supported to report pregnancies, births and deaths within defined geographic areas over a period of at least fifteen months. In-country institutions collected these data every month. At each study site, we administered a full birth history (FBH) or full pregnancy history (FPH) to women of reproductive age via a census of households in Mali and via household surveys in Ethiopia and Malawi. Using these FBHs/FPHs as a validation data source, we assessed the completeness of the counts of births and deaths and the accuracy of under-five, infant, and neonatal mortality rates from the community-based method against the retrospective FBH/FPH for rolling twelve-month periods. For each method we calculated total cost, average annual cost per 1,000 population, and average cost per vital event reported. Results On average, CHWs submitted monthly vital event reports for over 95 percent of catchment areas in Ethiopia and Malawi, and for 100 percent of catchment areas in Mali. The completeness of vital events reporting by CHWs varied: we estimated that 30%-90% of annualized expected births (i.e. the number of births estimated using a FPH) were documented by CHWs and 22%-91% of annualized expected under-five deaths were documented by CHWs. Resulting annualized under-five mortality rates based on the CHW vital events reporting were, on average, underestimated by 28% in Ethiopia, 32% in
Comparison of experimental and analytical results for free vibration of laminated composite plates
Maruyama, Koichi; Narita, Yoshihiro; Ichinomiya, Osamu
1995-11-01
Fibrous composite materials are increasingly employed in high performance structures, including pressure vessel and piping applications. These materials are usually used in the form of laminated flat or curved plates, and an understanding of the natural frequencies and corresponding mode shapes is essential to a reliable structural design. Although many references have been published on the analytical study of laminated composite plates, only a limited number of experimental studies have dealt with the vibration characteristics of these plates. This paper presents both experimental and analytical results for the problem. In the experiment, holographic interferometry is used to measure the resonant frequencies and corresponding mode shapes of six-layered CFRP (carbon fiber reinforced plastic) composite plates. The material constants of a lamina are calculated from fiber and matrix material constants by using several different composite rules. With the calculated constants, the natural frequencies of the laminated CFRP plates are theoretically determined by the Ritz method. From the comparison of the two sets of results, the effect of choosing different composite rules in the vibration study of laminated composite plates is discussed.
Tank 241-TX-118, core 236 analytical results for the final report
ESCH, R.A.
1998-11-19
This document is the analytical laboratory report for tank 241-TX-118 push mode core segments collected between April 1, 1998 and April 13, 1998. The segments were subsampled and analyzed in accordance with the Tank 241-TX-118 Push Mode Core Sampling and Analysis Plan (TSAP) (Benar, 1997), the Safety Screening Data Quality Objective (DQO) (Dukelow, et al., 1995), the Data Quality Objective to Support Resolution of the Organic Complexant Safety Issue (Organic DQO) (Turner, et al., 1995) and the Historical Model Evaluation Data Requirements (Historical DQO) (Simpson, et al., 1995). The analytical results are included in the data summary table (Table 1). None of the samples submitted for Differential Scanning Calorimetry (DSC) and Total Organic Carbon (TOC) exceeded notification limits as stated in the TSAP (Benar, 1997). One sample, core 236 segment 1 lower half solids (S98T001524), exceeded the Total Alpha Activity (AT) analysis notification limit of 38.4 µCi/g (based on a bulk density of 1.6). Appropriate notifications were made. Plutonium 239/240 analysis was requested as a secondary analysis. The statistical results of the 95% confidence interval on the mean calculations are provided by the Tank Waste Remediation Systems Technical Basis Group in accordance with the Memorandum of Understanding (Schreiber, 1997) and are not considered in this report.
Tank 241-T-203, core 190 analytical results for the final report
Steen, F.H.
1997-08-05
This document is the analytical laboratory report for tank 241-T-203 push mode core segments collected on April 17, 1997 and April 18, 1997. The segments were subsampled and analyzed in accordance with the Tank 241-T-203 Push Mode Core Sampling and Analysis Plan (TSAP) (Schreiber, 1997a), the Safety Screening Data Quality Objective (DQO) (Dukelow, et al., 1995) and the Letter of Instruction for Core Sample Analysis of Tanks 241-T-201, 241-T-202, 241-T-203, and 241-T-204 (LOI) (Hall, 1997). The analytical results are included in the data summary report (Table 1). None of the samples submitted for Differential Scanning Calorimetry (DSC), Total Alpha Activity (AT) and Total Organic Carbon (TOC) exceeded notification limits as stated in the TSAP (Schreiber, 1997a). The statistical results of the 95% confidence interval on the mean calculations are provided by the Tank Waste Remediation Systems (TWRS) Technical Basis Group in accordance with the Memorandum of Understanding (Schreiber, 1997b) and are not considered in this report.
Farrar, Jerry W.; Copen, Ashley M.
2000-01-01
This report presents the results of the U.S. Geological Survey's analytical evaluation program for six standard reference samples -- T-161 (trace constituents), M-154 (major constituents), N-65 (nutrient constituents), N-66 (nutrient constituents), P-34 (low ionic strength constituents), and Hg-30 (mercury) -- that were distributed in March 2000 to 144 laboratories enrolled in the U.S. Geological Survey sponsored interlaboratory testing program. Analytical data that were received from 132 of the laboratories were evaluated with respect to overall laboratory performance and relative laboratory performance for each analyte in the six reference samples. Results of these evaluations are presented in tabular form. Also presented are tables and graphs summarizing the analytical data provided by each laboratory for each analyte in the six standard reference samples. The most probable value for each analyte was determined using nonparametric statistics.
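The report does not spell out which nonparametric estimator it uses; the median is the usual choice for a most probable value because it resists outlying laboratory results. A minimal sketch with hypothetical reported concentrations:

```python
import statistics

def most_probable_value(lab_results):
    """Nonparametric central-value estimate for one analyte: the median
    is unaffected by a single wildly discrepant laboratory result."""
    return statistics.median(lab_results)

# Hypothetical concentrations (mg/L) reported by six laboratories:
reported = [4.1, 3.9, 4.0, 4.3, 9.7, 4.2]   # 9.7 is an outlier
mpv = most_probable_value(reported)          # median of the six values
```

A parametric mean of these values would be pulled toward the outlier, which is why interlaboratory programs of this kind favor rank-based statistics.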
Connor, B.F.; Currier, J.P.; Woodworth, M.T.
2001-01-01
This report presents the results of the U.S. Geological Survey's analytical evaluation program for six standard reference samples -- T-163 (trace constituents), M-156 (major constituents), N-67 (nutrient constituents), N-68 (nutrient constituents), P-35 (low ionic strength constituents), and Hg-31 (mercury) -- that were distributed in October 2000 to 126 laboratories enrolled in the U.S. Geological Survey sponsored interlaboratory testing program. Analytical data that were received from 122 of the laboratories were evaluated with respect to overall laboratory performance and relative laboratory performance for each analyte in the six reference samples. Results of these evaluations are presented in tabular form. Also presented are tables and graphs summarizing the analytical data provided by each laboratory for each analyte in the six standard reference samples. The most probable value for each analyte was determined using nonparametric statistics.
Woodworth, M.T.; Connor, B.F.
2001-01-01
This report presents the results of the U.S. Geological Survey's analytical evaluation program for six standard reference samples -- T-165 (trace constituents), M-158 (major constituents), N-69 (nutrient constituents), N-70 (nutrient constituents), P-36 (low ionic-strength constituents), and Hg-32 (mercury) -- that were distributed in April 2001 to laboratories enrolled in the U.S. Geological Survey sponsored interlaboratory testing program. Analytical data received from 73 laboratories were evaluated with respect to overall laboratory performance and relative laboratory performance for each analyte in the six reference samples. Results of these evaluations are presented in tabular form. Also presented are tables and graphs summarizing the analytical data provided by each laboratory for each analyte in the six standard reference samples. The most probable value for each analyte was determined using nonparametric statistics.
Farrar, Jerry W.; Chleboun, Kimberly M.
1999-01-01
This report presents the results of the U.S. Geological Survey's analytical evaluation program for eight standard reference samples -- T-157 (trace constituents), M-150 (major constituents), N-61 (nutrient constituents), N-62 (nutrient constituents), P-32 (low ionic strength constituents), GWT-5 (ground-water trace constituents), GWM-4 (ground-water major constituents), and Hg-28 (mercury) -- that were distributed in March 1999 to 120 laboratories enrolled in the U.S. Geological Survey sponsored interlaboratory testing program. Analytical data that were received from 111 of the laboratories were evaluated with respect to overall laboratory performance and relative laboratory performance for each analyte in the eight reference samples. Results of these evaluations are presented in tabular form. Also presented are tables and graphs summarizing the analytical data provided by each laboratory for each analyte in the eight standard reference samples. The most probable value for each analyte was determined using nonparametric statistics.
Woodworth, Mark T.; Connor, Brooke F.
2003-01-01
This report presents the results of the U.S. Geological Survey's analytical evaluation program for six standard reference samples -- T-171 (trace constituents), M-164 (major constituents), N-75 (nutrient constituents), N-76 (nutrient constituents), P-39 (low ionic-strength constituents), and Hg-35 (mercury) -- that were distributed in September 2002 to laboratories enrolled in the U.S. Geological Survey sponsored interlaboratory testing program. Analytical data received from 102 laboratories were evaluated with respect to overall laboratory performance and relative laboratory performance for each analyte in the six reference samples. Results of these evaluations are presented in tabular form. Also presented are tables and graphs summarizing the analytical data provided by each laboratory for each analyte in the six standard reference samples. The most probable value for each analyte was determined using nonparametric statistics.
Woodworth, Mark T.; Connor, Brooke F.
2003-01-01
This report presents the results of the U.S. Geological Survey's analytical evaluation program for six standard reference samples -- T-173 (trace constituents), M-166 (major constituents), N-77 (nutrient constituents), N-78 (nutrient constituents), P-40 (low ionic-strength constituents), and Hg-36 (mercury) -- that were distributed in March 2003 to laboratories enrolled in the U.S. Geological Survey sponsored interlaboratory testing program. Analytical data received from 110 laboratories were evaluated with respect to overall laboratory performance and relative laboratory performance for each analyte in the six reference samples. Results of these evaluations are presented in tabular form. Also presented are tables and graphs summarizing the analytical data provided by each laboratory for each analyte in the six standard reference samples. The most probable value for each analyte was determined using nonparametric statistics.
Woodworth, Mark T.; Connor, Brooke F.
2002-01-01
This report presents the results of the U.S. Geological Survey's analytical evaluation program for six standard reference samples -- T-167 (trace constituents), M-160 (major constituents), N-71 (nutrient constituents), N-72 (nutrient constituents), P-37 (low ionic-strength constituents), and Hg-33 (mercury) -- that were distributed in September 2001 to laboratories enrolled in the U.S. Geological Survey sponsored interlaboratory testing program. Analytical data received from 98 laboratories were evaluated with respect to overall laboratory performance and relative laboratory performance for each analyte in the six reference samples. Results of these evaluations are presented in tabular form. Also presented are tables and graphs summarizing the analytical data provided by each laboratory for each analyte in the six standard reference samples. The most probable value for each analyte was determined using nonparametric statistics.
Farrar, T.W.
2000-01-01
This report presents the results of the U.S. Geological Survey's analytical evaluation program for six standard reference samples -- T-159 (trace constituents), M-152 (major constituents), N-63 (nutrient constituents), N-64 (nutrient constituents), P-33 (low ionic strength constituents), and Hg-29 (mercury) -- that were distributed in October 1999 to 149 laboratories enrolled in the U.S. Geological Survey sponsored interlaboratory testing program. Analytical data that were received from 131 of the laboratories were evaluated with respect to overall laboratory performance and relative laboratory performance for each analyte in the six reference samples. Results of these evaluations are presented in tabular form. Also presented are tables and graphs summarizing the analytical data provided by each laboratory for each analyte in the six standard reference samples. The most probable value for each analyte was determined using nonparametric statistics.
Woodworth, M.T.; Connor, B.F.
2002-01-01
This report presents the results of the U.S. Geological Survey's analytical evaluation program for six standard reference samples -- T-169 (trace constituents), M-162 (major constituents), N-73 (nutrient constituents), N-74 (nutrient constituents), P-38 (low ionic-strength constituents), and Hg-34 (mercury) -- that were distributed in March 2002 to laboratories enrolled in the U.S. Geological Survey sponsored interlaboratory testing program. Analytical data received from 93 laboratories were evaluated with respect to overall laboratory performance and relative laboratory performance for each analyte in the six reference samples. Results of these evaluations are presented in tabular form. Also presented are tables and graphs summarizing the analytical data provided by each laboratory for each analyte in the six standard reference samples. The most probable value for each analyte was determined using nonparametric statistics.
Tank 241-A-101, cores 154 and 156 analytical results for the 45 day report
Steen, F.H.
1996-10-18
This document is the 45-day laboratory report for tank 241-A-101 push mode core segments collected between July 11, 1996 and July 25, 1996. The segments were subsampled and analyzed in accordance with the Tank 241-A-101 Push Mode Core Sampling and Analysis Plan (TSAP) (Field, 1996) and the Safety Screening Data Quality Objective (DQO) (Dukelow, et al., 1995). The analytical results are included in the data summary table (Table 1). None of the samples submitted for Total Alpha Activity (AT) or Differential Scanning Calorimetry (DSC) analyses exceeded notification limits as stated in the Safety Screening DQO (Dukelow, et al., 1995). Statistical evaluation of results by calculating the 95% upper confidence limit is not performed by the 222-S Laboratory and is not considered in this report. Primary safety screening results and the raw data from thermogravimetric analysis (TGA) and DSC analyses are included in this report.
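The 95% upper confidence limit mentioned in these tank reports (computed elsewhere, not by the 222-S Laboratory) is, in one common formulation, a one-sided Student-t limit on the mean. A minimal sketch under that assumption, with purely illustrative numbers that are not data from this tank:

```python
import math
import statistics

def upper_confidence_limit_95(samples, t_crit):
    """One-sided 95% upper confidence limit on the mean:
    UCL = mean + t * s / sqrt(n), where t_crit is the one-tailed
    Student-t critical value for n - 1 degrees of freedom."""
    n = len(samples)
    return (statistics.mean(samples)
            + t_crit * statistics.stdev(samples) / math.sqrt(n))

# Illustrative DSC exotherm results in J/g (dry weight basis).
# 2.132 is the one-tailed 95% t value for 4 degrees of freedom (n = 5).
dsc = [310.0, 295.0, 330.0, 305.0, 320.0]
ucl = upper_confidence_limit_95(dsc, t_crit=2.132)
# The UCL (about 325 J/g here), not the raw mean, is what would be
# compared against a notification limit such as 480 J/g.
```

The design choice is conservative: screening against the upper confidence limit rather than the sample mean guards against declaring a tank safe on the strength of a small, noisy sample.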
Tank 241-AN-105, cores 152 and 153, analytical results for the 45 day report
Steen, F.H.
1996-09-20
This document is the 45-day laboratory report for tank 241-AN-105 push mode core segments collected between June 10, 1996 and June 28, 1996. The segments were subsampled and analyzed in accordance with the Tank 241-AN-105 Push Mode Core Sampling and Analysis Plan (TSAP) and the Safety Screening Data Quality Objective (DQO). The analytical results are included in the data summary table. None of the samples submitted for Total Alpha Activity or Differential Scanning Calorimetry (DSC) analyses exceeded notification limits as stated in the Safety Screening DQO. Statistical evaluation of results by calculating the 95% upper confidence limit is not performed by the 222-S Laboratory and is not considered in this report. Primary safety screening results and the raw data from thermogravimetric analysis (TGA) and DSC analyses are included in this report.
NASA Astrophysics Data System (ADS)
Bozkaya, Uǧur; Sherrill, C. David
2013-08-01
Orbital-optimized coupled-electron pair theory [or simply "optimized CEPA(0)," OCEPA(0), for short] and its analytic energy gradients are presented. For variational optimization of the molecular orbitals for the OCEPA(0) method, a Lagrangian-based approach is used along with an orbital direct inversion of the iterative subspace algorithm. The cost of the method is comparable to that of CCSD [O(N^6) scaling] for energy computations. However, for analytic gradient computations the OCEPA(0) method is only half as expensive as CCSD since there is no need to solve the λ2-amplitude equation for OCEPA(0). The performance of the OCEPA(0) method is compared with that of the canonical MP2, CEPA(0), CCSD, and CCSD(T) methods, for equilibrium geometries, harmonic vibrational frequencies, and hydrogen transfer reactions between radicals. For bond lengths of both closed and open-shell molecules, the OCEPA(0) method improves upon CEPA(0) and CCSD by 25%-43% and 38%-53%, respectively, with Dunning's cc-pCVQZ basis set. Especially for the open-shell test set, the performance of OCEPA(0) is comparable with that of CCSD(T) (ΔR is 0.0003 Å on average). For harmonic vibrational frequencies of closed-shell molecules, the OCEPA(0) method again outperforms CEPA(0) and CCSD by 33%-79% and 53%-79%, respectively. For harmonic vibrational frequencies of open-shell molecules, the mean absolute error (MAE) of the OCEPA(0) method (39 cm^-1) is fortuitously even better than that of CCSD(T) (50 cm^-1), while the MAEs of CEPA(0) (184 cm^-1) and CCSD (84 cm^-1) are considerably higher. For complete basis set estimates of hydrogen transfer reaction energies, the OCEPA(0) method again exhibits a substantially better performance than CEPA(0), providing a mean absolute error of 0.7 kcal mol^-1, which is more than 6 times lower than that of CEPA(0) (4.6 kcal mol^-1), and comparing to MP2 (7.7 kcal mol^-1) there is a more than 10-fold reduction in errors. Whereas the MAE for the CCSD method is only 0.1 kcal
Analytical results on Casimir forces for conductors with edges and tips
Maghrebi, Mohammad F.; Rahi, Sahand Jamal; Emig, Thorsten; Graham, Noah; Jaffe, Robert L.; Kardar, Mehran
2011-01-01
Casimir forces between conductors at the submicron scale are paramount to the design and operation of microelectromechanical devices. However, these forces depend nontrivially on geometry, and existing analytical formulae and approximations cannot deal with realistic micromachinery components with sharp edges and tips. Here, we employ a novel approach to electromagnetic scattering, appropriate to perfect conductors with sharp edges and tips, specifically wedges and cones. The Casimir interaction of these objects with a metal plate (and among themselves) is then computed systematically by a multiple-scattering series. For the wedge, we obtain analytical expressions for the interaction with a plate, as functions of opening angle and tilt, which should provide a particularly useful tool for the design of microelectromechanical devices. Our result for the Casimir interactions between conducting cones and plates applies directly to the force on the tip of a scanning tunneling probe. We find an unexpectedly large temperature dependence of the force on the cone tip, which is of immediate relevance to experiments.
Tank 241-BY-107, Cores 151 and 161, Analytical Results for the 45 day report
Fritts, L.L.
1996-09-09
This document is the 45-day laboratory report for tank 241-BY-107. Push mode core segments were removed from risers 8 and 9B between June 5, 1996, and July 26, 1996. Segments were received and extruded at the 222-S Analytical Laboratory. Analyses were performed in accordance with the Tank 241-BY-107 Push Mode Core Sampling and Analysis Plan (TSAP) and the Safety Screening Data Quality Objective (DQO). None of the subsamples submitted for Total Alpha Activity (AT) analysis or Differential Scanning Calorimetry (DSC) exceeded the notification limits as stated in the DQO. Statistical evaluation of results by calculating the 95% upper confidence limit is not performed by the 222-S Laboratory and is not considered in this report. Primary safety screening results are included in the data summary table. The raw data from DSC and TGA analyses are included in this report.
Tank 241-T-105, cores 205 and 207 analytical results for the final report
Esch, R. A.
1997-10-21
This document is the final laboratory report for tank 241-T-105 push mode core segments collected between June 24, 1997 and June 30, 1997. The segments were subsampled and analyzed in accordance with the Tank Push Mode Core Sampling and Analysis Plan (TSAP) (Field, 1997), the Tank Safety Screening Data Quality Objective (Safety DQO) (Dukelow, et al., 1995) and Tank 241-T-105 Sample Analysis (memo) (Field, 1997a). The analytical results are included in Table 1. None of the subsamples submitted for the differential scanning calorimetry (DSC) analysis or total alpha activity (AT) exceeded the notification limits as stated in the TSAP (Field, 1997). The statistical results of the 95% confidence interval on the mean calculations are provided by the Tank Waste Remediation Systems (TWRS) Technical Basis Group in accordance with the Memorandum of Understanding (Schreiber, 1997) and are not considered in this report.
Tank 241-T-204, core 188 analytical results for the final report
Nuzum, J.L.
1997-07-24
This document is the final laboratory report for Tank 241-T-204. Push mode core segments were removed from Riser 3 between March 27, 1997, and April 11, 1997. Segments were received and extruded at the 222-S Laboratory. Analyses were performed in accordance with the Tank 241-T-204 Push Mode Core Sampling and Analysis Plan (TSAP) (Winkleman, 1997), the Letter of Instruction for Core Sample Analysis of Tanks 241-T-201, 241-T-202, 241-T-203, and 241-T-204 (Bell, 1997), and the Safety Screening Data Quality Objective (DQO) (Dukelow, et al., 1995). None of the subsamples submitted for total alpha activity (AT) or differential scanning calorimetry (DSC) analyses exceeded the notification limits stated in the DQO. The statistical results of the 95% confidence interval on the mean calculations are provided by the Tank Waste Remediation Systems Technical Basis Group and are not considered in this report.
Harper, Martin; Sarkisian, Khatchatur; Andrew, Michael
2015-01-01
Analysis of Proficiency Analytical Testing (PAT) results between 2003 and 2013 suggest that the variation in respirable crystalline silica analysis is much smaller today than it was in the period 1990–1998, partly because of a change in sample production procedure and because the colorimetric method has been phased out, although quality improvements in the x-ray diffraction (XRD) or infrared (IR) methods may have also played a role. There is no practical difference between laboratories using XRD or IR methods or between laboratories which are accredited or those which are not. Reference laboratory means (assigned values) are not different from the means of all participants across the current range of mass loading, although there is a small difference in variance in the ratios of all participants to reference laboratory means based on method because the reference laboratories are much more likely to use XRD than are the others. Matrix interference does not lead to biases or substantially larger variances for either XRD or IR methods. Data from proficiency test sample analyses that include results from poorly performing laboratories should not be used to determine the validity of a method. PAT samples are not produced below 40 μg and variance may increase with lower masses, although this is not particularly predictable. PAT data from lower mass loadings will be required to evaluate analytical performance if exposure limits are lowered without change in sampling method. Task-specific exposure measurements for periods shorter than a full shift typically result in lower mass loadings and the quality of these analyses would also be better assured from being within the range of PAT mass loadings. High flow rate cyclones, whose performance has been validated, can be used to obtain higher mass loadings in environments of lower concentrations or where shorter sampling times are desired. PMID:25175284
Tank 241-B-109, cores 169 and 170 analytical results for the final report
Nuzum, J.L.
1997-01-20
This document is the final laboratory report for tank 241-B-109. Push mode core segments were removed from risers 4 and 7 between August 22, 1996, and August 27, 1996. Segments were received and extruded at the 222-S Analytical Laboratory. Analyses were performed in accordance with the Tank 241-B-109 Push Mode Core Sampling and Analysis Plan (TSAP) and the Tank Safety Screening Data Quality Objective (DQO). The results for primary safety screening data, including differential scanning calorimetry (DSC) analyses, thermogravimetric analyses (TGA), bulk density determinations, and total alpha activity analyses for each subsegment, were presented in the 45-Day report (Rev. 0 of this document). The 45-Day report is included as Part II of this revision. The raw data for DSC and TGA are found in Part II of this report. The raw data for all other analyses are included in this revision.
Interacting steps with finite-range interactions: Analytical approximation and numerical results
NASA Astrophysics Data System (ADS)
Jaramillo, Diego Felipe; Téllez, Gabriel; González, Diego Luis; Einstein, T. L.
2013-05-01
We calculate an analytical expression for the terrace-width distribution P(s) for an interacting step system with nearest- and next-nearest-neighbor interactions. Our model is derived by mapping the step system onto a statistically equivalent one-dimensional system of classical particles. The validity of the model is tested with several numerical simulations and experimental results. We explore the effect of the range of interactions q on the functional form of the terrace-width distribution and pair correlation functions. For physically plausible interactions, we find modest changes when next-nearest neighbor interactions are included and generally negligible changes when more distant interactions are allowed. We discuss methods for extracting from simulated experimental data the characteristic scale-setting terms in assumed potential forms.
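The paper's closed-form P(s) for nearest- and next-nearest-neighbor interactions is not reproduced here; as background, the single-parameter generalized Wigner surmise that terrace-width studies commonly use as a reference distribution can be sketched as follows. The function name and the normalization check are illustrative assumptions, not the authors' result:

```python
import math

def wigner_twd(rho):
    """Generalized Wigner surmise P(s) = a * s**rho * exp(-b * s**2),
    with a and b fixed so the distribution has unit area and unit mean
    terrace width. rho parameterizes the step-step repulsion strength."""
    b = (math.gamma((rho + 2) / 2) / math.gamma((rho + 1) / 2)) ** 2
    a = 2 * b ** ((rho + 1) / 2) / math.gamma((rho + 1) / 2)
    return lambda s: a * s ** rho * math.exp(-b * s * s)

# rho = 2 is the classic surmise for inverse-square step repulsion.
P = wigner_twd(2)

# Crude Riemann-sum check that P integrates to 1 over 0 < s < 10.
ds = 1e-3
area = sum(P(i * ds) for i in range(1, 10000)) * ds
```

Fitting experimental terrace-width histograms with this one-parameter family, then reading the interaction strength off the fitted rho, is the usual workflow the surmise supports.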
Recent Analytical and Numerical Results for The Navier-Stokes-Voigt Model and Related Models
NASA Astrophysics Data System (ADS)
Larios, Adam; Titi, Edriss; Petersen, Mark; Wingate, Beth
2010-11-01
The equations which govern the motions of fluids are notoriously difficult to handle both mathematically and computationally. Recently, a new approach to these equations, known as the Voigt-regularization, has been investigated as both a numerical and analytical regularization for the 3D Navier-Stokes equations, the Euler equations, and related fluid models. This inviscid regularization is related to the alpha-models of turbulent flow; however, it overcomes many of the problems present in those models. I will discuss recent work on the Voigt-regularization, as well as a new criterion for the finite-time blow-up of the Euler equations based on their Voigt-regularization. Time permitting, I will discuss some numerical results, as well as applications of this technique to the Magnetohydrodynamic (MHD) equations and various equations of ocean dynamics.
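For reference, the Voigt regularization discussed above adds a single dispersive term to the 3D Navier-Stokes equations. In the form commonly written (with regularization length alpha >= 0, viscosity nu, velocity u, pressure p, and forcing f), the Navier-Stokes-Voigt system is:

```latex
\begin{aligned}
  \partial_t\bigl(u - \alpha^2 \Delta u\bigr) - \nu \Delta u
      + (u \cdot \nabla)u + \nabla p &= f, \\
  \nabla \cdot u &= 0.
\end{aligned}
```

Setting alpha = 0 recovers the Navier-Stokes equations, while setting nu = 0 gives the inviscid Euler-Voigt regularization of the Euler equations mentioned in the abstract.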
NASA Astrophysics Data System (ADS)
Milošević, M.; Dimitrijević, D. D.; Djordjević, G. S.; Stojanović, M. D.
2016-06-01
The role that tachyon fields may play in the evolution of the early universe is discussed in this paper. We consider the evolution of a flat and homogeneous universe governed by a tachyon scalar field with the DBI-type action and calculate the slow-roll parameters of inflation, the scalar spectral index (n), and the tensor-to-scalar ratio (r) for the given potentials. We pay special attention to the inverse power potential, first of all to V(x) ∼ x^{-4}, and compare the available results obtained by analytical and numerical methods with those obtained by observation. It is shown that the computed values of the observational parameters and the observed ones are in good agreement for high values of the constant X_0. The possibility that the influence of the radion field can extend the range of acceptable values of the constant X_0 to the string-theory-motivated sector of its values is briefly considered.
Motion effects on an IFR hovering task: Analytical predictions and experimental results
NASA Technical Reports Server (NTRS)
Ringland, R. F.; Stapleford, R. L.; Magdaleno, R. E.
1971-01-01
An analytical pilot model incorporating the effects of motion cues and display scanning and sampling is tested by comparing predictions against experimental results on a moving base simulator. The simulated task is that of precision hovering of a VTOL having varying amounts of rate damping, and using separated instrument displays. Motion cue effects are investigated by running the experiment under fixed and moving base conditions, the latter in two modes; full motion, and angular motion only. Display scanning behavior is measured on some of the runs. The results of the program show that performance is best with angular motion only, most probably because a g-vector tilt cue is available to the pilot in this motion condition. This provides an attitude indication even when not visually fixating the attitude display. Vestibular threshold effects are also present in the results because of the display scaling used to permit hovering position control within the motion simulator limits; no washouts are used in the simulator drive signals. The IFR nature of the task results in large decrements in pilot opinion and performance relative to VFR conditions because of the scanning workload. Measurements of scanning behavior are sensitive to motion conditions and show more attention to attitude control under fixed base conditions.
Tank 241-S-106, cores 183, 184 and 187 analytical results for the final report
Esch, R.A.
1997-06-30
This document is the final laboratory report for tank 241-S-106 push mode core segments collected between February 12, 1997 and March 21, 1997. The segments were subsampled and analyzed in accordance with the Tank Push Mode Core Sampling and Analysis Plan (TSAP), the Tank Safety Screening Data Quality Objective (Safety DQO), the Historical Model Evaluation Data Requirements (Historical DQO) and the Data Quality Objective to Support Resolution of the Organic Complexant Safety Issue (Organic DQO). The analytical results are included in Table 1. Six of the twenty-four subsamples submitted for the differential scanning calorimetry (DSC) analysis exceeded the notification limit of 480 Joules/g stated in the DQO. Appropriate notifications were made. Total Organic Carbon (TOC) analyses were performed on all samples that produced exotherms during the DSC analysis. All results were less than the notification limit of three weight percent TOC. No cyanide analysis was performed, per agreement with the Tank Safety Program. None of the samples submitted for Total Alpha Activity exceeded notification limits as stated in the TSAP. Statistical evaluation of results by calculating the 95% upper confidence limit is not performed by the 222-S Laboratory and is not considered in this report. No core composites were created because there was insufficient solid material from any of the three core sampling events to generate a composite that would be representative of the tank contents.
Tank 241-B-108, cores 172 and 173 analytical results for the final report
Nuzum, J.L., Fluor Daniel Hanford
1997-03-04
The Data Summary Table (Table 3) included in this report compiles analytical results in compliance with all applicable DQOs. Liquid subsamples that were prepared for analysis by an acid adjustment of the direct subsample are indicated by a `D` in the A column in Table 3. Solid subsamples that were prepared for analysis by performing a fusion digest are indicated by an `F` in the A column in Table 3. Solid subsamples that were prepared for analysis by performing a water digest are indicated by a `W` or an `I` in the A column of Table 3. Due to poor precision and accuracy in the original analysis of both Lower Half Segment 2 of Core 173 and the core composite of Core 173, fusion and water digests were performed a second time. Precision and accuracy improved with the repreparation of the Core 173 Composite. Analyses with the repreparation of Lower Half Segment 2 of Core 173 did not show improvement and suggest sample heterogeneity. Results from both preparations are included in Table 3.
NASA Astrophysics Data System (ADS)
Sotnikov, V.; Kim, T.; Lundberg, J.; Paraschiv, I.; Mehlhorn, T.
2014-09-01
The presence of plasma turbulence can strongly influence the propagation properties of electromagnetic signals used for surveillance and communication. In particular, we are interested in the generation of low frequency plasma density irregularities in the form of coherent vortex structures. Interchange or flute type density irregularities in magnetized plasma are associated with Rayleigh-Taylor type instability. These types of density irregularities play an important role in the refraction and scattering of high frequency electromagnetic signals propagating in the Earth's ionosphere, in high energy density physics (HEDP), and in many other applications. We will discuss scattering of high frequency electromagnetic waves on low frequency density irregularities due to the presence of vortex density structures associated with interchange instability. We will also present PIC simulation results on EM scattering on vortex type density structures using the LSP code and compare them with analytical results. Acknowledgement: This work was supported by the Air Force Research Laboratory, the Air Force Office of Scientific Research, the Naval Research Laboratory, and NNSA/DOE grant no. DE-FC52-06NA27616 at the University of Nevada at Reno.
Tank 241-TX-104, cores 230 and 231 analytical results for the final report
Diaz, L.A.
1998-07-07
This document is the analytical laboratory report for tank 241-TX-104 push mode core segments collected between February 18, 1998 and February 23, 1998. The segments were subsampled and analyzed in accordance with the Tank 241-TX-104 Push Mode Core Sampling and Analysis Plan (TSAP) (McCain, 1997), the Data Quality Objective to Support Resolution of the Organic Complexant Safety Issue (Organic DQO) (Turner, et al., 1995) and the Safety Screening Data Quality Objective (DQO) (Dukelow, et al., 1995). The analytical results are included in the data summary table. None of the samples submitted for Differential Scanning Calorimetry (DSC) and Total Alpha Activity (AT) exceeded notification limits as stated in the TSAP. The statistical results of the 95% confidence interval on the mean calculations are provided by the Tank Waste Remediation Systems Technical Basis Group in accordance with the Memorandum of Understanding (Schreiber, 1997) and are not considered in this report. Appearance and Sample Handling: Attachment 1 is a cross-reference to relate the tank farm identification numbers to the 222-S Laboratory LabCore/LIMS sample numbers. The subsamples generated in the laboratory for analyses are identified in these diagrams with their sources shown. Core 230: Three push mode core segments were removed from tank 241-TX-104 riser 9A on February 18, 1998. Segments were received by the 222-S Laboratory on February 19, 1998. Two segments were expected for this core. However, due to poor sample recovery, an additional segment was taken and identified as 2A. Core 231: Four push mode core segments were removed from tank 241-TX-104 riser 13A between February 19, 1998 and February 23, 1998. Segments were received by the 222-S Laboratory on February 24, 1998. Two segments were expected for this core. However, due to poor sample recovery, additional segments were taken and identified as 2A and 2B. The TSAP states the core samples should be transported to the laboratory within three
Tank 241-AN-103, cores 166 and 167 analytical results for the final report
Steen, F.H.
1997-05-15
This document is the analytical laboratory report for tank 241-AN-103 [Hydrogen Watch Listed] push mode core segments collected between September 13, 1996 and September 23, 1996. The segments were subsampled and analyzed in accordance with the Tank 241-AN-103 Push Mode Core Sampling and Analysis Plan (TSAP), the Safety Screening Data Quality Objective (DQO) and the Flammable Gas Data Quality Objective (DQO). The analytical results are included in the data summary table. The raw data are included in this document. None of the samples submitted for Total Alpha Activity (AT), Total Organic Carbon (TOC) and Plutonium analyses exceeded notification limits as stated in the TSAP. One sample submitted for Differential Scanning Calorimetry (DSC) analysis exceeded the notification limit of 480 Joules/g (dry weight basis) as stated in the Safety Screening DQO. Appropriate notifications were made. Statistical evaluation of results by calculating the 95% upper confidence limit is not performed by the 222-S Laboratory and is not considered in this report. Appearance and Sample Handling Attachment 1 is a cross reference to relate the tank farm identification numbers to the 222-S Laboratory LabCore/LIMS sample numbers. The subsamples generated in the laboratory for analyses are identified in these diagrams with their sources shown. The diagrams identifying the core composites are also included. Core 166 Nineteen push mode core segments were removed from tank 241-AN-103 riser 12A between September 13, 1996 and September 17, 1996. Segments were received by the 222-S Laboratory between September 20, 1996 and September 30, 1996. Table 2 summarizes the extrusion information. Selected segments (2, 5 and 14) were sampled using the Retained Gas Sampler (RGS) and extruded by the Process Chemistry and Statistical Analysis Group. Core 167 Eighteen push mode core segments were removed from tank 241-AN-103 riser 21A between September 18, 1996 and September 23, 1996. Tank Farm Operations were
New Analytic Results for the Spectrum of Intensity Fluctuations in Strong Scatter
NASA Astrophysics Data System (ADS)
Carrano, C. S.; Rino, C. L.
2015-12-01
In a recent work, Carrano and Rino (Proc. of the Ionospheric Effects Symposium, 2015) extended the phase screen power law theory of ionospheric scintillation to account for the case where the refractive index irregularities follow a two-component power law spectrum. A specific normalization was invoked to exploit the self-similar properties of the problem and achieve a universal scaling, such that different combinations of perturbation strength, propagation distance, and frequency produce the same results. Using this model, numerical quadrature was employed to obtain essentially exact solutions of the 4th moment equation governing the intensity fluctuations resulting from propagation through two-dimensional field-aligned ionospheric irregularities. In this paper, we present a series of new asymptotic solutions for the case of a one-component spectrum for all integer and half-integer values of the phase spectral index, p, between 1 and 5. In addition, we present an asymptotic solution to the high frequency portion of the intensity spectrum for the case of a general two-component spectrum with 1
Pool, K.H.
1994-03-01
The potential for a ferrocyanide explosion in Hanford site single-shell waste storage tanks (SSTs) poses a serious safety concern. This potential danger developed in the 1950s when ¹³⁷Cs was scavenged during the reprocessing of uranium recovery process waste by co-precipitating it along with sodium in nickel ferrocyanide salt. Sodium or potassium ferrocyanide and nickel sulfate were added to the liquid waste stored in SSTs. The tank storage space resulting from the scavenging process was subsequently used to store other waste types. Ferrocyanide salts in combination with oxidizing agents, such as nitrate and nitrite, are known to explode when key parameters (temperature, water content, oxidant concentration, and fuel [cyanide]) are in place. Therefore, reliable total cyanide analysis data for actual SST materials are required to address the safety issue. Accepted cyanide analysis procedures do not yield reliable results for samples containing nickel ferrocyanide materials because the compounds are insoluble in acidic media. Analytical chemists at Pacific Northwest Laboratory (PNL) have developed a modified microdistillation procedure (see below) for analyzing total cyanide in waste tank matrices containing nickel ferrocyanide materials. Pacific Northwest Laboratory analyzed samples from Hanford Waste Tank 241-C-112 cores 34, 35, and 36 for total cyanide content using technical procedure PNL-ALO-285, "Total Cyanide by Remote Microdistillation and Argentometric Titration," Rev. 0. This report summarizes the results of these analyses along with supporting quality control data and, in addition, summarizes the results of the test to check the efficacy of sodium nickel ferrocyanide solubilization from an actual core sample by aqueous EDTA/en to verify that nickel ferrocyanide compounds were quantitatively solubilized before actual distillation.
Miller, G.L.
1997-06-02
Turnaround time for this project was 60 days, as required in Reference 2. The analyses were to be performed using SW-846 procedures whenever possible to meet analytical requirements as a Resource Conservation and Recovery Act (RCRA) protocol project. Except for the preparation and analyses of polychlorinated biphenyls (PCBs) and Nickel-63, which the program deleted as required analytes for the 222-S Laboratory, all preparative and analytical work was performed at the 222-S Laboratory. Quanterra Environmental Services of Earth City, Missouri, performed the PCB analyses. During work on this project, two events occurred nearly simultaneously that negatively impacted the 60-day deliverable schedule: an analytical hold due to waste handling issues at the 222-S Laboratory, and the discovery of PCBs at concentrations of regulatory significance in the 105-N Basin samples. Due to findings of regulatory non-compliance by the Washington State Department of Ecology, the 222-S Laboratory placed a temporary administrative hold on its analytical work until all waste handling, designation and segregation issues were resolved. During the hold of approximately three weeks, all analytical and waste handling procedures were rewritten to comply with the legal regulations, and all staff were retrained in the designation, segregation and disposal of RCRA liquid and solid wastes.
Lanman, Richard B; Mortimer, Stefanie A; Zill, Oliver A; Sebisanovic, Dragan; Lopez, Rene; Blau, Sibel; Collisson, Eric A; Divers, Stephen G; Hoon, Dave S B; Kopetz, E Scott; Lee, Jeeyun; Nikolinakos, Petros G; Baca, Arthur M; Kermani, Bahram G; Eltoukhy, Helmy; Talasaz, AmirAli
2015-01-01
Next-generation sequencing of cell-free circulating solid tumor DNA addresses two challenges in contemporary cancer care. First, this method of massively parallel and deep sequencing enables assessment of a comprehensive panel of genomic targets from a single sample, and second, it obviates the need for repeat invasive tissue biopsies. Digital Sequencing™ is a novel method for high-quality sequencing of circulating tumor DNA simultaneously across a comprehensive panel of over 50 cancer-related genes with a simple blood test. Here we report the analytic and clinical validation of the gene panel. Analytic sensitivity down to 0.1% mutant allele fraction is demonstrated via serial dilution studies of known samples. Near-perfect analytic specificity (> 99.9999%) enables complete coverage of many genes without the false positives typically seen with traditional sequencing assays at mutant allele frequencies or fractions below 5%. We compared digital sequencing of plasma-derived cell-free DNA to tissue-based sequencing on 165 consecutive matched samples from five outside centers in patients with stage III-IV solid tumor cancers. Clinical sensitivity of plasma-derived NGS was 85.0%, comparable to 80.7% sensitivity for tissue. The assay success rate on 1,000 consecutive samples in clinical practice was 99.8%. Digital sequencing of plasma-derived DNA is indicated in advanced cancer patients to prevent repeated invasive biopsies when the initial biopsy is inadequate, unobtainable for genomic testing, or uninformative, or when the patient's cancer has progressed despite treatment. Its clinical utility is derived from reduction in the costs, complications and delays associated with invasive tissue biopsies for genomic testing.
LiF TLD-100 as a Dosimeter in High Energy Proton Beam Therapy-Can It Yield Accurate Results?
Zullo, John R.; Kudchadker, Rajat J.; Zhu, X. Ronald; Sahoo, Narayan; Gillin, Michael T.
2010-04-01
In the region of high-dose gradients at the end of the proton range, the stopping power ratio of the protons undergoes significant changes, allowing for a broad spectrum of proton energies to be deposited within a relatively small volume. Because of the potential linear energy transfer dependence of LiF TLD-100 (thermoluminescent dosimeter), dose measurements made in the distal fall-off region of a proton beam may be less accurate than those made in regions of low-dose gradients. The purpose of this study is to determine the accuracy and precision of dose measured using TLD-100 for a pristine Bragg peak, particularly in the distal fall-off region. All measurements were made along the central axis of an unmodulated 200-MeV proton beam from a Probeat passive beam-scattering proton accelerator (Hitachi, Ltd., Tokyo, Japan) at varying depths along the Bragg peak. Measurements were made using TLD-100 powder flat packs, placed in a virtual water slab phantom. The measurements were repeated using a parallel plate ionization chamber. The dose measurements using TLD-100 in a proton beam were accurate to within ±5.0% of the expected dose, consistent with our past photon and electron measurements. The ionization chamber and the TLD relative dose measurements agreed well with each other. Absolute dose measurements using TLD agreed with ionization chamber measurements to within ±3.0 cGy, for an exposure of 100 cGy. In our study, the differences in the dose measured by the ionization chamber and those measured by TLD-100 were minimal, indicating that the accuracy and precision of measurements made in the distal fall-off region of a pristine Bragg peak are within the expected range. Thus, the rapid change in stopping power ratios at the end of the range should not affect such measurements, and TLD-100 may be used with confidence as an in vivo dosimeter for proton beam therapy.
Network Traffic Analysis With Query Driven Visualization - SC 2005 HPC Analytics Results
Stockinger, Kurt; Wu, Kesheng; Campbell, Scott; Lau, Stephen; Fisk, Mike; Gavrilov, Eugene; Kent, Alex; Davis, Christopher E.; Olinger,Rick; Young, Rob; Prewett, Jim; Weber, Paul; Caudell, Thomas P.; Bethel,E. Wes; Smith, Steve
2005-09-01
Our analytics challenge is to identify, characterize, and visualize anomalous subsets of large collections of network connection data. We use a combination of HPC resources, advanced algorithms, and visualization techniques. To effectively and efficiently identify the salient portions of the data, we rely on a multi-stage workflow that includes data acquisition, summarization (feature extraction), novelty detection, and classification. Once these subsets of interest have been identified and automatically characterized, we use a state-of-the-art high-dimensional query system to extract data subsets for interactive visualization. Our approach is equally useful for other large-data analysis problems where it is more practical to identify interesting subsets of the data for visualization than to render all data elements. By reducing the size of the rendering workload, we enable highly interactive and useful visualizations. As a result of this work we were able to analyze six months' worth of data interactively with response times two orders of magnitude shorter than with conventional methods.
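The selection step in the workflow above - evaluate a predicate over the full connection table, hand only the matching rows to the renderer - can be sketched as follows. This is a minimal numpy illustration of query-driven subset extraction, not the authors' system; the column names and the synthetic log are assumptions, and a production system would answer the query through an index rather than a full scan.

```python
import numpy as np

# Hypothetical connection log with one million records.
rng = np.random.default_rng(0)
n = 1_000_000
conns = {
    "ts": np.sort(rng.uniform(0.0, 86_400.0, n)),   # seconds since midnight
    "dst_port": rng.integers(0, 65_536, n),          # destination port
    "bytes": rng.lognormal(6.0, 2.0, n),             # bytes transferred
}

# Query-driven selection: build a boolean mask from the predicate, then
# extract only the matching subset for interactive visualization.
mask = (conns["dst_port"] == 22) & (conns["bytes"] > 1e5)
subset = {name: col[mask] for name, col in conns.items()}

print(f"rendering {int(mask.sum())} of {n} records")
```

Rendering only `subset` instead of all of `conns` is what shrinks the visualization workload by orders of magnitude.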
Semi-Analytic Galaxy Evolution (SAGE): Model Calibration and Basic Results
NASA Astrophysics Data System (ADS)
Croton, Darren J.; Stevens, Adam R. H.; Tonini, Chiara; Garel, Thibault; Bernyk, Maksym; Bibiano, Antonio; Hodkinson, Luke; Mutch, Simon J.; Poole, Gregory B.; Shattow, Genevieve M.
2016-02-01
This paper describes a new publicly available codebase for modeling galaxy formation in a cosmological context, the “Semi-Analytic Galaxy Evolution” model, or sage for short. sage is a significant update to the 2006 model of Croton et al. and has been rebuilt to be modular and customizable. The model will run on any N-body simulation whose trees are organized in a supported format and contain a minimum set of basic halo properties. In this work, we present the baryonic prescriptions implemented in sage to describe the formation and evolution of galaxies, and their calibration for three N-body simulations: Millennium, Bolshoi, and GiggleZ. Updated physics include the following: gas accretion, ejection due to feedback, and reincorporation via the galactic fountain; a new gas cooling-radio mode active galactic nucleus (AGN) heating cycle; AGN feedback in the quasar mode; a new treatment of gas in satellite galaxies; and galaxy mergers, disruption, and the build-up of intra-cluster stars. Throughout, we show the results of a common default parameterization on each simulation, with a focus on the local galaxy population.
Alastuey, A; Ballenegger, V
2012-12-01
We compute thermodynamical properties of a low-density hydrogen gas within the physical picture, in which the system is described as a quantum electron-proton plasma interacting via the Coulomb potential. Our calculations are done using the exact scaled low-temperature (SLT) expansion, which provides a rigorous extension of the well-known virial expansion (valid in the fully ionized phase) into the Saha regime, where the system is partially or fully recombined into hydrogen atoms. After recalling the SLT expansion of the pressure [A. Alastuey et al., J. Stat. Phys. 130, 1119 (2008)], we obtain the SLT expansions of the chemical potential and of the internal energy, up to order exp(-|E_{H}|/kT) included (E_{H}≃-13.6 eV). Those truncated expansions describe the first five nonideal corrections to the ideal Saha law. They account exactly, up to the considered order, for all effects of interactions and thermal excitations, including the formation of bound states (atom H, ions H^{-} and H_{2}^{+}, molecule H_{2},⋯) and atom-charge and atom-atom interactions. Among the five leading corrections, three are easy to evaluate, while the remaining ones involve well-defined internal partition functions for the molecule H_{2} and ions H^{-} and H_{2}^{+}, for which no closed-form analytical formulas currently exist. We provide accurate low-temperature approximations for those partition functions by using known values of rotational and vibrational energies. We then compare the predictions of the SLT expansion, for the pressure and the internal energy, with, on the one hand, the equation-of-state tables obtained within the opacity program at Livermore (OPAL) and, on the other hand, data of path integral quantum Monte Carlo (PIMC) simulations. In general, a good agreement is found. At low densities, the simple analytical SLT formulas reproduce the values of the OPAL tables up to the last digit in a large range of temperatures, while at higher densities (ρ∼10^{-2} g/cm^{3}), some
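A partition-function evaluation of the kind mentioned above can be sketched in a few lines. This is a generic rigid-rotor/harmonic-oscillator approximation for H_2, using standard spectroscopic constants; it is an assumption-laden toy, not the paper's method, which sums measured level energies directly, and it omits ortho/para nuclear-spin weights.

```python
import numpy as np

K_CM = 0.695035   # Boltzmann constant in cm^-1 per kelvin
B = 60.853        # H2 rotational constant, cm^-1 (standard value, assumed)
OMEGA = 4401.2    # H2 vibrational wavenumber, cm^-1 (standard value, assumed)

def z_internal(T, j_max=40, v_max=10):
    """Internal partition function Z_rot * Z_vib for H2 at temperature T (K),
    summing rotational levels B*J*(J+1) and vibrational levels OMEGA*v."""
    beta = 1.0 / (K_CM * T)
    j = np.arange(j_max + 1)
    v = np.arange(v_max + 1)
    z_rot = np.sum((2 * j + 1) * np.exp(-beta * B * j * (j + 1)))
    z_vib = np.sum(np.exp(-beta * OMEGA * v))
    return float(z_rot * z_vib)

print(z_internal(300.0))
```

At low temperature only the first few rotational levels contribute, which is why truncating the sums at modest j_max and v_max is safe in the Saha regime.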
A compressed sensing method with analytical results for lidar feature classification
NASA Astrophysics Data System (ADS)
Allen, Josef D.; Yuan, Jiangbo; Liu, Xiuwen; Rahmes, Mark
2011-04-01
We present an innovative way to autonomously classify LiDAR points into bare earth, building, vegetation, and other categories. One desirable product of LiDAR data is the automatic classification of the points in the scene. Our algorithm automatically classifies scene points using compressed sensing methods via Orthogonal Matching Pursuit algorithms, utilizing a generalized K-Means clustering algorithm to extract buildings and foliage from a Digital Surface Model (DSM). This technology reduces manual editing while being cost effective for large scale automated global scene modeling. Quantitative analyses are provided using Receiver Operating Characteristic (ROC) curves to show Probability of Detection and False Alarm of buildings vs. vegetation classification. Histograms are shown with sample size metrics. Our inpainting algorithms then fill the voids where buildings and vegetation were removed, utilizing Computational Fluid Dynamics (CFD) techniques and Partial Differential Equations (PDE) to create an accurate Digital Terrain Model (DTM) [6]. Inpainting preserves building height contour consistency and edge sharpness of identified inpainted regions. Qualitative results illustrate other benefits such as Terrain Inpainting's unique ability to minimize or eliminate undesirable terrain data artifacts.
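The Orthogonal Matching Pursuit building block named in the abstract can be sketched in isolation. This is a generic numpy implementation of OMP, recovering a sparse code over a dictionary; the dictionary, signal, and sparsity level are synthetic, and the authors' full classifier (learned dictionary, K-Means labeling) is not reproduced here.

```python
import numpy as np

def omp(D, y, k):
    """Orthogonal Matching Pursuit: greedily select k unit-norm columns of
    dictionary D that best explain signal y, re-fitting the coefficients on
    the growing support by least squares at every step."""
    residual = y.copy()
    support = []
    coef = np.zeros(0)
    for _ in range(k):
        # Atom most correlated with the current residual.
        idx = int(np.argmax(np.abs(D.T @ residual)))
        support.append(idx)
        # Least-squares re-fit on the selected support.
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    x = np.zeros(D.shape[1])
    x[support] = coef
    return x

# Toy check: recover a 2-sparse code from a random Gaussian dictionary.
rng = np.random.default_rng(1)
D = rng.standard_normal((64, 256))
D /= np.linalg.norm(D, axis=0)
x_true = np.zeros(256)
x_true[[10, 200]] = [1.5, -2.0]
x_hat = omp(D, D @ x_true, k=2)
print(np.flatnonzero(x_hat))
```

In a classification setting, the reconstruction error of each candidate class dictionary (rather than the code itself) is typically what drives the label decision.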
General analytic results for nonlinear waves and solitons in molecular clouds
NASA Technical Reports Server (NTRS)
Adams, Fred C.; Fatuzzo, Marco; Watkins, Richard
1994-01-01
We study nonlinear wave phenomena in self-gravitating fluid systems, with a particular emphasis on applications to molecular clouds. This paper presents analytical results for one spatial dimension. We show that a large class of physical systems can be described by theories with a 'charge density' q(ρ); this quantity replaces the density on the right-hand side of the Poisson equation for the gravitational potential. We use this formulation to prove general results about nonlinear wave motions in self-gravitating systems. We show that in order for stationary waves to exist, the total charge (the integral of the charge density over the wave profile) must vanish. This 'no-charge' property for solitary waves is related to the capability of a system to be stable to gravitational perturbations for arbitrarily long wavelengths. We find necessary and sufficient conditions on the charge density for the existence of solitary waves and stationary waves. We study nonlinear wave motions for Jeans-type theories (where q(ρ) = ρ - ρ₀) and find that nonlinear waves of large amplitude are confined to a rather narrow range of wavelengths. We also study wave motions for molecular clouds threaded by magnetic fields and show how the allowed range of wavelengths is affected by the field strength. Since the gravitational force in one spatial dimension does not fall off with distance, we consider two classes of models with more realistic gravity: Yukawa potentials and a pseudo two-dimensional treatment. We study the allowed types of wave behavior for these models. Finally, we discuss the implications of this work for molecular cloud structure. We argue that molecular clouds can support a wide variety of wave motions and suggest that stationary waves (such as those considered in this paper) may have already been observed.
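Written out, the relations stated in the abstract take the following compact form (a transcription only, with ψ denoting the gravitational potential):

```latex
% Poisson equation with a generalized charge density (one spatial dimension):
\frac{\partial^{2}\psi}{\partial x^{2}} = q(\rho),
\qquad q(\rho) = \rho - \rho_{0} \quad \text{(Jeans-type theories)} .
% "No-charge" condition for a stationary solitary wave with profile \rho(x):
\int_{-\infty}^{\infty} q\bigl(\rho(x)\bigr)\, dx = 0 .
```

The integral condition is the statement that the total charge over the wave profile must vanish for a stationary wave to exist.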
Donoghue, J. K.; Dyson, E. D.; Hislop, J. S.; Leach, A. M.; Spoor, N. L.
1972-01-01
Donoghue, J. K., Dyson, E. D., Hislop, J. S., Leach, A. M., and Spoor, N. L. (1972). Brit. J. industr. Med., 29, 81-89. Human exposure to natural uranium: a case history and analytical results from some postmortem tissues. After the collapse and sudden death of an employee who had worked for 10 years in a natural uranium workshop, in which the airborne uranium was largely U3O8 with an Activity Median Aerodynamic Diameter in the range 3·5-6·0 μm and average concentration of 300 μg/m3, his internal organs were analysed for uranium. The tissues examined included lungs (1041 g), pulmonary lymph nodes (12 g), sternum (114 g), and kidneys (217 g). Uranium was estimated by neutron activation analysis, using irradiated tissue ash, and counting the delayed neutrons from uranium-235. The concentrations of uranium (μg U/g wet tissue) in the lungs, lymph nodes, sternum, and kidneys were 1·2, 1·8, 0·09, and 0·14 respectively. The weights deposited in the lungs and lymph nodes are less than 1% of the amounts calculated from the environmental data using the parameters currently applied in radiological protection. The figures are compatible with those reported by Quigley, Heatherton, and Ziegler in 1958 and by Meichen in 1962. The relation between these results, the environmental exposure data, and biological monitoring data is discussed in the context of current views on the metabolism of inhaled insoluble uranium. PMID:5060250
Working with Real Data: Getting Analytic Element Groundwater Model Results to Honor Field Data
NASA Astrophysics Data System (ADS)
Congdon, R. D.
2014-12-01
Models of groundwater flow often work best when very little field data exist. In such cases, some knowledge of the depth to the water table, annual precipitation totals, and basic geological makeup is sufficient to produce a reasonable-looking and potentially useful model. However, in this case, where a good deal of information is available regarding the depth to the bottom of a dune field aquifer, attempting to incorporate the data set into the model has variously resulted in convergence, failure to achieve target water level criteria, or complete failure to converge. The first model did not take the data set into consideration, but used general information that the aquifer was thinner in the north and thicker in the south. This model would run and produce apparently useful results. The first attempt at satisfying the data set (in this case, 51 wells showing the bottom elevation of a Pacific coast sand dune aquifer) was to use the isopach interpretation of Robinson (OFR 73-241). Using inhomogeneities (areas of equal characteristics) delineated by Robinson's isopach diagram did not enable an adequate fit to the water table lakes, and caused convergence problems when adding pumping wells. The second attempt was to use a Thiessen polygon approach, creating an aquifer thickness zone for each data point. The results for the non-pumping scenario were better, but run times were considerably greater. Also, there were frequent runs with non-convergence, especially when water supply wells were added. Non-convergence may be the result of the lake line-sinks crossing the polygon boundaries or the proximity of pumping wells to inhomogeneity boundaries. The third approach was to merge adjacent polygons of similar depths, in this case within 5% of each other. The results and run times were better, but matching lake levels was not satisfactory. The fourth approach was to reduce the number of inhomogeneities to four, and to average the depth data over the inhomogeneity. The thicknesses were
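The Thiessen-polygon zoning and the 5% merging step described above can be sketched as a nearest-well lookup. The 51 well coordinates and bottom elevations below are synthetic placeholders (not the Robinson OFR 73-241 data set), and the simple sorted-sweep grouping is one plausible reading of "merge polygons within 5% of each other".

```python
import numpy as np

rng = np.random.default_rng(2)
wells_xy = rng.uniform(0.0, 10_000.0, size=(51, 2))   # well coordinates, m
bottom_elev = rng.uniform(-60.0, -20.0, size=51)      # aquifer bottom, m

def thiessen_value(points, well_xy, well_val):
    """Assign each point the value of its nearest well (Thiessen zones)."""
    d2 = ((points[:, None, :] - well_xy[None, :, :]) ** 2).sum(axis=2)
    return well_val[np.argmin(d2, axis=1)]

# Merge wells whose bottom elevations lie within 5% of each other (sweep in
# sorted order), then average the elevation within each merged group.
order = np.argsort(bottom_elev)
groups = []
for i in order:
    if groups and abs(bottom_elev[i] - bottom_elev[groups[-1][0]]) \
            <= 0.05 * abs(bottom_elev[groups[-1][0]]):
        groups[-1].append(i)
    else:
        groups.append([i])
zone_val = bottom_elev.copy()
for g in groups:
    zone_val[g] = bottom_elev[g].mean()

# Sample the merged zones on an arbitrary set of model points.
grid = rng.uniform(0.0, 10_000.0, size=(1_000, 2))
grid_bottom = thiessen_value(grid, wells_xy, zone_val)
print(len(groups), "merged zones from", len(wells_xy), "wells")
```

Merging reduces the number of inhomogeneity boundaries the flow solver must honor, which is the point of the third approach in the abstract.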
Crock, J.G.; Smith, D.B.; Yager, T.J.B.
2009-01-01
Since late 1993, Metro Wastewater Reclamation District of Denver (Metro District, MWRD), a large wastewater treatment plant in Denver, Colorado, has applied Grade I, Class B biosolids to about 52,000 acres of nonirrigated farmland and rangeland near Deer Trail, Colorado, USA. In cooperation with the Metro District in 1993, the U.S. Geological Survey (USGS) began monitoring groundwater at part of this site. In 1999, the USGS began a more comprehensive monitoring study of the entire site to address stakeholder concerns about the potential chemical effects of biosolids applications to water, soil, and vegetation. This more comprehensive monitoring program has recently been extended through 2010. Monitoring components of the more comprehensive study include biosolids collected at the wastewater treatment plant, soil, crops, dust, alluvial and bedrock groundwater, and stream bed sediment. Soils for this study were defined as the plow zone of the dry land agricultural fields - the top twelve inches of the soil column. This report presents analytical results for the soil samples collected at the Metro District farm land near Deer Trail, Colorado, during three separate sampling events during 1999, 2000, and 2002. Soil samples taken in 1999 were to be a representation of the original baseline of the agricultural soils prior to any biosolids application. The soil samples taken in 2000 represent the soils after one application of biosolids to the middle field at each site and those taken in 2002 represent the soils after two applications. There have been no biosolids applied to any of the four control fields. The next soil sampling is scheduled for the spring of 2010. Priority parameters for biosolids identified by the stakeholders and also regulated by Colorado when used as an agricultural soil amendment include the total concentrations of nine trace elements (arsenic, cadmium, copper, lead, mercury, molybdenum, nickel, selenium, and zinc), plutonium isotopes, and gross
Kim, Ellen S; Satter, Martin; Reed, Marilyn; Fadell, Ronald; Kardan, Arash
2016-06-01
Glioblastoma multiforme (GBM) is the most common and lethal malignant glioma in adults. Currently, the modality of choice for diagnosing brain tumor is high-resolution magnetic resonance imaging (MRI) with contrast, which provides anatomic detail and localization. Studies have demonstrated, however, that MRI may have limited utility in delineating the full tumor extent precisely. Studies suggest that MR spectroscopy (MRS) can also be used to distinguish high-grade from low-grade gliomas. However, due to operator dependent variables and the heterogeneous nature of gliomas, the potential for error in diagnostic accuracy with MRS is a concern. Positron emission tomography (PET) imaging with (11)C-methionine (MET) and (18)F-fluorodeoxyglucose (FDG) has been shown to add additional information with respect to tumor grade, extent, and prognosis based on the premise of biochemical changes preceding anatomic changes. Combined PET/MRS is a technique that integrates information from PET in guiding the location for the most accurate metabolic characterization of a lesion via MRS. We describe a case of glioblastoma multiforme in which MRS was initially non-diagnostic for malignancy, but when MRS was repeated with PET guidance, demonstrated elevated choline/N-acetylaspartate (Cho/NAA) ratio in the right parietal mass consistent with a high-grade malignancy. Stereotactic biopsy, followed by PET image-guided resection, confirmed the diagnosis of grade IV GBM. To our knowledge, this is the first reported case of an integrated PET/MRS technique for the voxel placement of MRS. Our findings suggest that integrated PET/MRS may potentially improve diagnostic accuracy in high-grade gliomas.
NASA Astrophysics Data System (ADS)
Sun, Yuansheng; Periasamy, Ammasi
2010-03-01
Förster resonance energy transfer (FRET) microscopy is commonly used to monitor protein interactions with filter-based imaging systems, which require spectral bleedthrough (or cross talk) correction to accurately measure energy transfer efficiency (E). The double-label (donor+acceptor) specimen is excited at the donor wavelength; the acceptor emission provides the uncorrected FRET signal, and the donor emission (the donor channel) represents the quenched donor (qD), the basis for the E calculation. Our results indicate this is not the most accurate determination of the quenched donor signal, as it fails to consider the donor spectral bleedthrough (DSBT) signals in the qD for the E calculation. Our new model addresses this, leading to a more accurate E result. This refinement improves E comparisons made with lifetime and spectral FRET imaging microscopy, as shown here using several genetic (FRET standard) constructs in which cerulean and venus fluorescent proteins are tethered by different amino acid linkers.
NASA Technical Reports Server (NTRS)
Redd, L. T.; Hanson, P. W.; Wynne, E. C.
1979-01-01
A wind tunnel technique for obtaining gust frequency response functions for use in predicting the response of flexible aircraft to atmospheric turbulence is evaluated. The tunnel test results for a dynamically scaled cable supported aeroelastic model are compared with analytical and flight data. The wind tunnel technique, which employs oscillating vanes in the tunnel throat section to generate a sinusoidally varying flow field around the model, was evaluated by use of a 1/30 scale model of the B-52E airplane. Correlation between the wind tunnel results, flight test results, and analytical predictions for response in the short period and wing first elastic modes of motion are presented.
A stereo triangulation system for structural identification: Analytical and experimental results
NASA Technical Reports Server (NTRS)
Junkins, J. L.; James, G. H., III; Pollock, T. C.; Rahman, Z. H.
1988-01-01
Identification of large space structures' distributed mass, stiffness, and energy dissipation characteristics poses formidable analytical, numerical, and implementation difficulties. Development of reliable on-orbit structural identification methods is important for implementing active vibration suppression concepts which are under widespread study in the large space structures community. Near the heart of the identification problem lies the necessity of making a large number of spatially distributed measurements of the structure's vibratory response and the associated force/moment inputs with sufficient spatial and frequency resolution. In the present paper, we discuss a method whereby tens of active or passive (retro-reflecting) targets on the structure are tracked simultaneously by the focal planes of two or more video cameras mounted on an adjacent platform. Triangulation (optical ray intersection) of the conjugate image centroids yields inertial trajectories of each target on the structure. Given the triangulated motion of the targets, we apply and extend methodology developed by Creamer, Junkins, and Juang to identify the frequencies, mode shapes, and updated estimates for the mass/stiffness/damping parameterization of the structure. The methodology is semi-automated; for example, the post-experiment analysis of the video imagery to determine the inertial trajectories of the targets typically requires less than thirty minutes of real time. Using the methodology discussed herein, the frequency response of a large number of points on the structure (where reflective targets are mounted) can be determined from optical measurements alone. For comparison purposes, we also utilize measurements from accelerometers and a calibrated impulse hammer. While our experimental work remains in a research stage of development, we have successfully tracked and stereo-triangulated 20 targets (on a vibrating cantilevered grid structure) at a sample frequency of 200 Hz.
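The optical ray intersection step can be illustrated with a minimal least-squares (midpoint) triangulation sketch; camera calibration, target detection, and centroiding of the actual system are omitted, and the interface below is an assumption for illustration.

```python
import numpy as np

def triangulate_midpoint(p1, d1, p2, d2):
    """Least-squares 'intersection' of two rays p_i + t_i * d_i:
    solve the normal equations for the closest points on each ray,
    then return the midpoint of the connecting segment."""
    p1, p2 = np.asarray(p1, float), np.asarray(p2, float)
    d1 = np.asarray(d1, float) / np.linalg.norm(d1)
    d2 = np.asarray(d2, float) / np.linalg.norm(d2)
    b = p2 - p1
    A = np.array([[d1 @ d1, -(d1 @ d2)],
                  [d1 @ d2, -(d2 @ d2)]])
    t1, t2 = np.linalg.solve(A, np.array([d1 @ b, d2 @ b]))
    return 0.5 * ((p1 + t1 * d1) + (p2 + t2 * d2))
```

For exactly intersecting rays the midpoint coincides with the intersection; for noisy centroids it gives the natural least-squares compromise between the two lines of sight.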
STABLE CONIC-HELICAL ORBITS OF PLANETS AROUND BINARY STARS: ANALYTICAL RESULTS
Oks, E.
2015-05-10
Studies of planets in binary star systems are especially important because it was estimated that about half of binary stars are capable of supporting habitable terrestrial planets within stable orbital ranges. One-planet binary star systems (OBSS) have a limited analogy to objects studied in atomic/molecular physics: one-electron Rydberg quasimolecules (ORQ). Specifically, ORQ, consisting of two fully stripped ions of the nuclear charges Z and Z′ plus one highly excited electron, are encountered in various plasmas containing more than one kind of ion. Classical analytical studies of ORQ resulted in the discovery of classical stable electronic orbits with the shape of a helix on the surface of a cone. In the present paper we show that despite several important distinctions between OBSS and ORQ, it is possible for OBSS to have stable planetary orbits in the shape of a helix on a conical surface whose axis of symmetry coincides with the interstellar axis; the stability is not affected by the rotation of the stars. Further, we demonstrate that the eccentricity of the stars' orbits does not affect the stability of the helical planetary motion if the center of symmetry of the helix is relatively close to the star of the larger mass. We also show that if the center of symmetry of the conic-helical planetary orbit is relatively close to the star of the smaller mass, a sufficiently large eccentricity of the stars' orbits can switch the planetary motion to the unstable mode and the planet would escape the system. We demonstrate that such planets are transitable for the overwhelming majority of inclinations of the plane of the stars' orbits (i.e., the projections of the planet and the adjacent star on the plane of the sky coincide once in a while). This means that conic-helical planetary orbits at binary stars can be detected photometrically. We consider, as an example, the Kepler-16 binary stars to provide illustrative numerical data on the possible parameters and the
Distribution of Steps with Finite-Range Interactions: Analytic Approximations and Numerical Results
NASA Astrophysics Data System (ADS)
GonzáLez, Diego Luis; Jaramillo, Diego Felipe; TéLlez, Gabriel; Einstein, T. L.
2013-03-01
While most Monte Carlo simulations assume only nearest-neighbor steps interact elastically, most analytic frameworks (especially the generalized Wigner distribution) posit that each step elastically repels all others. In addition to the elastic repulsions, we allow for possible surface-state-mediated interactions. We investigate analytically and numerically how next-nearest-neighbor (NNN) interactions and, more generally, interactions out to the q'th nearest neighbor alter the form of the terrace-width distribution and of pair correlation functions (i.e., the sum over n'th neighbor distribution functions), which we investigated recently [2]. For physically plausible interactions, we find modest changes when NNN interactions are included and generally negligible changes when more distant interactions are allowed. We discuss methods for extracting from simulated experimental data the characteristic scale-setting terms in assumed potential forms.
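For reference, the generalized Wigner surmise used in such terrace-width analyses has a simple closed form; the sketch below uses the standard unit-norm, unit-mean convention and is a generic illustration rather than anything specific to this paper.

```python
import math

def wigner_twd(s, rho):
    """Generalized Wigner terrace-width distribution
    P(s) = a * s**rho * exp(-b * s**2), with a and b fixed so the
    distribution is normalized and has unit mean:
    b = [Gamma((rho+2)/2) / Gamma((rho+1)/2)]**2,
    a = 2 * b**((rho+1)/2) / Gamma((rho+1)/2)."""
    g1 = math.gamma((rho + 1.0) / 2.0)
    g2 = math.gamma((rho + 2.0) / 2.0)
    b = (g2 / g1) ** 2
    a = 2.0 * b ** ((rho + 1.0) / 2.0) / g1
    return a * s ** rho * math.exp(-b * s * s)
```

The exponent rho encodes the strength of the step-step repulsion; fitting it to measured terrace-width histograms is how the scale-setting interaction term is usually extracted.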
NASA Technical Reports Server (NTRS)
Shufflebarger, C C; Payne, Chester B; Cahen, George L
1958-01-01
An analytical study of the effects of wing flexibility on wing strains due to gusts has been made for four spanwise stations of a four-engine bomber airplane, and the results have been correlated with results of a previous flight investigation.
Kokhanovsky, Alexander A
2007-04-01
Analytical equations for the diffused scattered light correction factor of Sun photometers are derived and analyzed. It is shown that corrections are weakly dependent on the atmospheric optical thickness. They are influenced mostly by the size of aerosol particles encountered by sunlight on its way to a Sun photometer. In addition, the accuracy of the small-angle approximation used in the work is studied with numerical calculations based on the exact radiative transfer equation.
NASA Astrophysics Data System (ADS)
West, J. B.; Ehleringer, J. R.; Cerling, T.
2006-12-01
Understanding how the biosphere responds to change is at the heart of biogeochemistry, ecology, and other Earth sciences. The dramatic increase in human population and technological capacity over the past 200 years or so has resulted in numerous, simultaneous changes to biosphere structure and function. This, in turn, has led to increased urgency in the scientific community to try to understand how systems have already responded to these changes, and how they might do so in the future. Since all biospheric processes exhibit some patchiness or patterns over space, as well as time, we believe that understanding the dynamic interactions between natural systems and human technological manipulations can be improved if these systems are studied in an explicitly spatial context. We present here results of some of our efforts to model the spatial variation in the stable isotope ratios (δ2H and δ18O) of plants over large spatial extents, and how these spatial model predictions compare to spatially explicit data. Stable isotopes trace and record ecological processes and, as such, if modeled correctly over Earth's surface, allow us insights into changes in biosphere states and processes across spatial scales. The data-model comparisons show good agreement, in spite of the remaining uncertainties (e.g., plant source water isotopic composition). For example, inter-annual changes in climate are recorded in wine stable isotope ratios. Also, a much simpler model of leaf water enrichment driven with spatially continuous global rasters of precipitation and climate normals largely agrees with complex GCM modeling that includes leaf water δ18O. Our results suggest that modeling plant stable isotope ratios across large spatial extents may be done with reasonable accuracy, including over time. These spatial maps, or isoscapes, can now be utilized to help understand spatially distributed data, as well as to help guide future studies designed to understand ecological change across
González, Lorenzo; Thorne, Leigh; Jeffrey, Martin; Martin, Stuart; Spiropoulos, John; Beck, Katy E; Lockey, Richard W; Vickery, Christopher M; Holder, Thomas; Terry, Linda
2012-11-01
It is widely accepted that abnormal forms of the prion protein (PrP) are the best surrogate marker for the infectious agent of prion diseases and, in practice, the detection of such disease-associated (PrP(d)) and/or protease-resistant (PrP(res)) forms of PrP is the cornerstone of diagnosis and surveillance of the transmissible spongiform encephalopathies (TSEs). Nevertheless, some studies question the consistent association between infectivity and abnormal PrP detection. To address this discrepancy, 11 brain samples of sheep affected with natural scrapie or experimental bovine spongiform encephalopathy were selected on the basis of the magnitude and predominant types of PrP(d) accumulation, as shown by immunohistochemical (IHC) examination; contra-lateral hemi-brain samples were inoculated at three different dilutions into transgenic mice overexpressing ovine PrP and were also subjected to quantitative analysis by three biochemical tests (BCTs). Six samples gave 'low' infectious titres (10⁶·⁵ to 10⁶·⁷ LD₅₀ g⁻¹) and five gave 'high titres' (10⁸·¹ to ≥ 10⁸·⁷ LD₅₀ g⁻¹) and, with the exception of the Western blot analysis, those two groups tended to correspond with samples with lower PrP(d)/PrP(res) results by IHC/BCTs. However, no statistical association could be confirmed due to high individual sample variability. It is concluded that although detection of abnormal forms of PrP by laboratory methods remains useful to confirm TSE infection, infectivity titres cannot be predicted from quantitative test results, at least for the TSE sources and host PRNP genotypes used in this study. Furthermore, the near inverse correlation between infectious titres and Western blot results (high protease pre-treatment) argues for a dissociation between infectivity and PrP(res).
NASA Astrophysics Data System (ADS)
Crivellini, A.
2016-02-01
This paper deals with the numerical performance of a sponge layer as a non-reflective boundary condition. This technique is well known and widely adopted, but only recently have the reasons for a sponge failure been recognised, in an analysis by Mani. For multidimensional problems, the ineffectiveness of the method is due to the self-reflections of the sponge occurring when it interacts with an oblique acoustic wave. Based on his theoretical investigations, Mani gives some useful guidelines for implementing effective sponge layers. However, in our opinion, some practical indications are still missing from the current literature. Here, an extensive numerical study of the performance of this technique is presented. Moreover, we analyse a reduced sponge implementation characterised by undamped partial differential equations for the velocity components. The main aim of this paper is the determination of the minimal width of the layer, as well as the corresponding strength, required to obtain a reflection error of no more than a few per cent of that observed when solving the same problem on the same grid, but without employing the sponge layer term. For this purpose, a test case of computational aeroacoustics, the single airfoil gust response problem, has been addressed in several configurations. As a direct consequence of our investigation, we present a well documented and highly validated reference solution for the far-field acoustic intensity, a result that is not well established in the literature. Lastly, the proof of the accuracy of an algorithm for coupling sub-domains solved by the linear and non-linear Euler governing equations is given. This result is here exploited to adopt a linear-based sponge layer even in a non-linear computation.
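The basic mechanism being evaluated, a damping term sigma(x)*u that absorbs outgoing waves near a boundary, can be sketched in one dimension. This toy upwind-advection example is an assumption-laden illustration of the sponge idea, not the paper's aeroacoustic setup; all parameter values are arbitrary.

```python
import numpy as np

def advect_with_sponge(n=400, steps=1200, c=1.0, sigma_max=40.0, sponge_frac=0.15):
    """First-order upwind solution of u_t + c u_x = -sigma(x) u on [0, 1],
    with a quadratic sponge ramp sigma(x) occupying the last `sponge_frac`
    of the domain to absorb the outgoing pulse."""
    x = np.linspace(0.0, 1.0, n)
    dx = x[1] - x[0]
    dt = 0.5 * dx / c                               # CFL number 0.5
    u = np.exp(-((x - 0.3) / 0.05) ** 2)            # initial Gaussian pulse
    x0 = 1.0 - sponge_frac                          # sponge starts here
    sigma = np.where(x > x0, sigma_max * ((x - x0) / sponge_frac) ** 2, 0.0)
    damp = np.exp(-sigma * dt)
    for _ in range(steps):
        u[1:] -= c * dt / dx * (u[1:] - u[:-1])     # upwind transport step
        u[0] = 0.0                                  # quiescent inflow
        u *= damp                                   # sponge absorption
    return u
```

Ramping sigma gradually (here quadratically) matters: an abrupt jump in the damping coefficient itself reflects waves at the sponge interface, which is the kind of self-reflection failure discussed above.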
Plumlee, Geoffrey S.; Martin, Deborah A.; Hoefen, Todd; Kokaly, Raymond F.; Hageman, Philip; Eckberg, Alison; Meeker, Gregory P.; Adams, Monique; Anthony, Michael; Lamothe, Paul J.
2007-01-01
Overview The U.S. Geological Survey (USGS) collected ash and burned soils from about 28 sites in southern California wildfire areas (Harris, Witch, Ammo, Santiago, Canyon and Grass Valley) from Nov. 2 through 9, 2007 (table 1). USGS researchers are applying a wide variety of analytical methods to these samples, with the goal of helping identify characteristics of the ash and soils from wildland and suburban burned areas that may be of concern for their potential to adversely affect water quality, human health, endangered species, and debris-flow or flooding hazards. These studies are part of the Southern California Multi-Hazards Demonstration Project, and preliminary findings are presented here.
Dalmont, Jean-Pierre; Frappé, Cyrille
2007-08-01
In the context of a simplified model of the clarinet in which the losses are assumed to be frequency independent, the analytic expressions for the various thresholds were calculated in a previous paper [Dalmont et al., J. Acoust. Soc. Am. 118, 3294-3305 (2005)]. The present work is a quantitative comparison between "theoretical" values of the thresholds and their experimental values measured using an artificial mouth. It is shown that the "Raman" model, provided that nonlinear losses are taken into account, is reliable and able to predict the values of the thresholds.
Scale dependence of the effective matrix diffusion coefficient:some analytical results
Liu, H.H.; Zhang, Y.Q.; Molz, F.J.
2005-05-30
Matrix diffusion is an important process affecting solute transport in fractured rock, and the matrix diffusion coefficient is a key parameter for describing this process. Previous studies have indicated that the effective matrix-diffusion coefficient values, obtained from a number of field tracer tests, are enhanced in comparison with local values and may increase with test scale. In this communication, we develop analytical expressions for the effective matrix diffusion coefficient for two simple fracture-matrix systems, and demonstrate that heterogeneities in the rock matrix at different scales contribute to the scale dependence of the effective matrix diffusion coefficient.
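As a generic illustration of how matrix heterogeneity feeds into an effective coefficient (a sketch of the parallel-blocks argument, not the specific fracture-matrix systems analysed in the paper): early-time diffusive uptake into a semi-infinite region scales as the square root of D, so adding the fluxes from interface area fractions f_i with local coefficients D_i gives sqrt(D_eff) = sum_i f_i * sqrt(D_i).

```python
import math

def effective_matrix_diffusion(fractions, d_locals):
    """Effective matrix diffusion coefficient for independent, parallel,
    semi-infinite matrix regions, from early-time flux additivity:
    sqrt(D_eff) = sum_i f_i * sqrt(D_i). Illustrative sketch only."""
    assert abs(sum(fractions) - 1.0) < 1e-12, "area fractions must sum to 1"
    return sum(f * math.sqrt(d) for f, d in zip(fractions, d_locals)) ** 2
```

Even this toy composite shows that the value inferred from a bulk tracer response is not any single local coefficient, which is the qualitative point behind the observed scale dependence.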
B Plant, TK-21-1, analytical results for the final report
Fritts, L.L., Westinghouse Hanford
1996-12-09
This document is the final laboratory report for B Plant Tk-21-1. A Resource Conservation and Recovery Act (RCRA) sample was taken from Tk-21-1 on September 26, 1996. This sample was received at the 222-S Analytical Laboratory on September 27, 1996. Analyses were performed in accordance with the accompanying Request for Sample Analysis (RSA) and Letter of Instruction B PLANT RCRA SAMPLES TO 222S LABORATORY, LETTER OF INSTRUCTION (LOI) 2B-96-LOI-012-01 (LOI) (Westra, 1996). The LOI was issued subsequent to the RSA and replaces Letter of Instruction 2C-96-LOI-004-01 referenced in the RSA.
The route to MBxNyCz molecular wheels: II. Results using accurate functionals and basis sets
NASA Astrophysics Data System (ADS)
Güthler, A.; Mukhopadhyay, S.; Pandey, R.; Boustani, I.
2014-04-01
Applying ab initio quantum chemical methods, molecular wheels composed of metal and light atoms were investigated. High-quality basis sets 6-31G*, TZPV, and cc-pVTZ as well as exchange and non-local correlation functionals B3LYP, BP86 and B3P86 were used. The ground-state energy and structures of cyclic planar and pyramidal clusters TiBn (for n = 3-10) were computed. In addition, the relative stability and electronic structures of molecular wheels TiBxNyCz (for x, y, z = 0-10) and MBnC10-n (for n = 2 to 5 and M = Sc to Zn) were determined. This paper constitutes a follow-up study to the previous one of Boustani and Pandey [Solid State Sci. 14 (2012) 1591], in which the calculations were carried out at the HF-SCF/STO3G/6-31G level of theory to determine the initial stability and properties. The results show that there is a competition between the 2D planar and the 3D pyramidal TiBn clusters (for n = 3-8). Different isomers of TiB10 clusters were also studied, and a structural transition of the 3D isomer into the 2D wheel is presented. Substituting boron in TiB10 with carbon and/or nitrogen atoms enhances the stability and leads toward the most stable wheel, TiB3C7. Furthermore, the computations show that Sc, Ti and V at the center of the molecular wheels are energetically favored over other transition metal atoms of the first row.
Analytical test results for archived core composite samples from tanks 241-TY-101 and 241-TY-103
Beck, M.A.
1993-07-16
This report describes the analytical tests performed on archived core composite samples from a 1.085 sampling of the 241-TY-101 (101-TY) and 241-TY-103 (103-TY) single-shell waste tanks. Both tanks are suspected of containing quantities of ferrocyanide compounds as a result of process activities in the late 1950s. Although limited quantities of the composite samples remained, attempts were made to obtain as much analytical information as possible, especially regarding the chemical and thermal properties of the material.
Shalchi, A.; Danos, R. J.
2013-03-10
A spatially varying mean magnetic field gives rise to so-called adiabatic focusing of energetic particles propagating through the universe. In the past, different analytical approaches have been proposed to calculate the particle diffusion coefficient along the mean field with focusing. In the present paper, we show how these different results are related to each other. New results for the parallel diffusion coefficient that are more general than previous results are also presented.
Hidden modes in open disordered media: analytical, numerical, and experimental results
NASA Astrophysics Data System (ADS)
Bliokh, Yury P.; Freilikher, Valentin; Shi, Z.; Genack, A. Z.; Nori, Franco
2015-11-01
We explore numerically, analytically, and experimentally the relationship between quasi-normal modes (QNMs) and transmission resonance (TR) peaks in the transmission spectrum of one-dimensional (1D) and quasi-1D open disordered systems. It is shown that for weak disorder there exist two types of eigenstates: ordinary QNMs, which are associated with a TR, and hidden QNMs, which do not exhibit peaks in transmission or within the sample. The distinctive feature of the hidden modes is that, unlike ordinary ones, their lifetimes remain constant over a wide range of disorder strengths. In this range, the averaged ratio of the number of transmission peaks N_res to the number of QNMs N_mod, N_res/N_mod, is insensitive to the type and degree of disorder and is close to the value √(2/5), which we derive analytically in the weak-scattering approximation. The physical nature of the hidden modes is illustrated in simple examples with a few scatterers. The analogy between ordinary and hidden QNMs and the segregation of superradiant states and trapped modes is discussed. When the coupling to the environment is tuned by external edge reflectors, the superradiance transition is reproduced. Hidden modes have also been found in microwave measurements in quasi-1D open disordered samples. The microwave measurements and modal analysis of transmission in the crossover to localization in quasi-1D systems give a ratio N_res/N_mod close to √(2/5). In diffusive quasi-1D samples, however, N_res/N_mod falls as the effective number of transmission eigenchannels M increases. Once N_mod is divided by M, however, the ratio N_res/N_mod is close to the ratio found in 1D.
NASA Astrophysics Data System (ADS)
Wu, Yang; Kelly, Damien P.
2014-12-01
The distribution of the complex field in the focal region of a lens is a classical optical diffraction problem. Today, it remains of significant theoretical importance for understanding the properties of imaging systems. In the paraxial regime, it is possible to find analytical solutions in the neighborhood of the focus, when a plane wave is incident on a focusing lens whose finite extent is limited by a circular aperture. For example, in Born and Wolf's treatment of this problem, two different, but mathematically equivalent, analytical solutions are presented that describe the 3D field distribution using infinite sums of Un and Vn type Lommel functions. An alternative solution expresses the distribution in terms of Zernike polynomials, and was presented by Nijboer in 1947. More recently, Cao derived an alternative analytical solution by expanding the Fresnel kernel using a Taylor series expansion. In practical calculations, however, only a finite number of terms from these infinite series expansions is actually used to calculate the distribution in the focal region. In this manuscript, we compare and contrast each of these different solutions to a numerically calculated result, paying particular attention to how quickly each solution converges for a range of different spatial locations behind the focusing lens. We also examine the time taken to calculate each of the analytical solutions. The numerical solution is calculated in a polar coordinate system and is semi-analytic. The integration over the angle is solved analytically, while the radial coordinate is sampled with a sampling interval of Δρ and then numerically integrated. This produces an infinite set of replicas in the diffraction plane, that are located in circular rings centered at the optical axis and each with radii given by 2πm/Δρ, where m is the replica order. These circular replicas are shown to be fundamentally different from the replicas that arise in a Cartesian coordinate system.
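The truncated Lommel-function series discussed above can be sketched directly (Born & Wolf convention; the truncation length is an arbitrary choice here, and in the focal plane u -> 0 the intensity reduces to the Airy pattern [2 J1(v)/v]^2, which provides a convenient check).

```python
import numpy as np
from scipy.special import jv  # Bessel function of the first kind J_k(v)

def lommel_U(n, u, v, terms=60):
    """Truncated Lommel function
    U_n(u, v) = sum_{s=0}^{terms-1} (-1)^s (u/v)^(n+2s) J_{n+2s}(v)."""
    s = np.arange(terms)
    return float(np.sum((-1.0) ** s * (u / v) ** (n + 2 * s) * jv(n + 2 * s, v)))

def focal_intensity(u, v):
    """Normalized intensity near focus: I(u, v) = (2/u)^2 (U_1^2 + U_2^2)."""
    u1, u2 = lommel_U(1, u, v), lommel_U(2, u, v)
    return (2.0 / u) ** 2 * (u1 ** 2 + u2 ** 2)
```

Because J_{n+2s}(v) decays factorially in the order, the truncated sum converges for any (u, v), but the number of terms needed grows with u and v, which is exactly the convergence behaviour the manuscript compares across solutions.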
Box, Stephen E.; Bookstrom, Arthur A.; Ikramuddin, Mohammed; Lindsay, James
2001-01-01
(Fe), manganese (Mn), arsenic (As), and cadmium (Cd). In general, inter-laboratory correlations are better for samples within the compositional range of the Standard Reference Materials (SRMs) from the National Institute of Standards and Technology (NIST). Analyses by EWU are the most accurate relative to the NIST standards (mean recoveries within 1% for Pb, Fe, Mn, and As, 3% for Zn, and 5% for Cd) and are the most precise (within 7% of the mean at the 95% confidence interval). USGS-EDXRF is similarly accurate for Pb and Zn. XRAL and ACZ are relatively accurate for Pb (within 5-8% of certified NIST values), but were considerably less accurate for the other 5 elements of concern (10-25% of NIST values). However, analyses of sample splits by more than one laboratory reveal that, for some elements, XRAL (Pb, Mn, Cd) and ACZ (Pb, Mn, Zn, Fe) analyses were comparable to EWU analyses of the same samples (when values are within the range of NIST SRMs). These results suggest that, for some elements, XRAL and ACZ dissolutions are more effective on the matrix of the CdA samples than on the matrix of the NIST samples (obtained from soils around Butte, Montana). Splits of CdA samples analyzed by CHEMEX were the least accurate, yielding values 10-25% less than those of EWU.
Timme, Marc; Geisel, Theo; Wolf, Fred
2006-03-01
We analyze the dynamics of networks of spiking neural oscillators. First, we present an exact linear stability theory of the synchronous state for networks of arbitrary connectivity. For general neuron rise functions, stability is determined by multiple operators, for which standard analysis is not suitable. We describe a general nonstandard solution to the multioperator problem. Subsequently, we derive a class of neuronal rise functions for which all stability operators become degenerate and standard eigenvalue analysis becomes a suitable tool. Interestingly, this class is found to consist of networks of leaky integrate-and-fire neurons. For random networks of inhibitory integrate-and-fire neurons, we then develop an analytical approach, based on the theory of random matrices, to precisely determine the eigenvalue distributions of the stability operators. This yields the asymptotic relaxation time for perturbations to the synchronous state which provides the characteristic time scale on which neurons can coordinate their activity in such networks. For networks with finite in-degree, i.e., finite number of presynaptic inputs per neuron, we find a speed limit to coordinating spiking activity. Even with arbitrarily strong interaction strengths neurons cannot synchronize faster than at a certain maximal speed determined by the typical in-degree.
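A toy numerical counterpart of such a stability analysis: build a random inhibitory coupling matrix with fixed in-degree and inspect its eigenvalues with dense linear algebra. This is an illustration of the kind of operator being analysed, not the authors' analytical random-matrix treatment; sizes and strengths are arbitrary.

```python
import numpy as np

def inhibitory_spectrum(n=200, k=20, J=1.0, seed=0):
    """Eigenvalues of an n x n coupling matrix in which every neuron receives
    exactly k inhibitory inputs of strength -J/k. Because every row sums to
    exactly -J, the all-ones vector is an eigenvector with eigenvalue -J;
    the remaining eigenvalues form a bulk whose radius shrinks as the
    in-degree k grows (roughly like J / sqrt(k))."""
    rng = np.random.default_rng(seed)
    A = np.zeros((n, n))
    for i in range(n):
        A[i, rng.choice(n, size=k, replace=False)] = -J / k
    return np.linalg.eigvals(A)
```

The gap between the deterministic eigenvalue at -J and the edge of the random bulk is what controls the slowest relaxation toward synchrony, echoing the in-degree-limited "speed limit" described above.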
Recent experimental and analytical results of BNL direct containment heating programs
Ginsberg, T.; Tutu, N.K.
1986-01-01
The direct containment heating (DCH) scenario involves high-pressure ejection of molten core material from the reactor vessel into the region beneath the vessel and into various subcompartments of the containment building. The stored energy in the melt consists of the sensible energy of melt and the chemical reaction energy of the various components assuming that they can react with either the oxygen or the steam within containment. The metallic phase may first react with steam, if local conditions permit, and thereby produce hydrogen. The hydrogen may then burn at some later time at a different location. In order to predict the containment response, one must follow the melt through the various subcompartments of the containment building, while computing the integrated release of energy from the melt to the containment atmosphere and the quantity of hydrogen produced during the time period that the melt is suspended. The BNL research program in the area of direct containment heating is directed towards the development of a methodology to predict the hydrodynamic, chemical and thermal interactions which could take place in three regions of PWR containment buildings: the reactor cavity, the intermediate compartments (e.g., steam generator room) and the containment dome. Separate effects, scaled experiments are performed related to selected aspects of the DCH problem, and analytical models are developed to characterize the relevant phenomena.
Fabro, M A; Milanesio, H V; Robert, L M; Speranza, J L; Murphy, M; Rodríguez, G; Castañeda, R
2006-03-01
In Argentina, one analytical method is usually carried out to determine acidity in whole raw milk: the Instituto Nacional de Racionalización de Materiales standard (no. 14005), based on the Dornic method of French origin. In national and international regulations, the Association of Official Analytical Chemists International method (no. 947.05) is proposed as the standard method of analysis. Although these methods have the same foundation, there is no evidence that results obtained using the 2 methods are equivalent. The presence of some trends and discordant data led us to perform a statistical study to verify the equivalency of the obtained results. We analyzed 266 samples, and the existence of significant differences between the results obtained by both methods was determined.
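The unit conventions behind the two methods can be made explicit. The sketch below uses the standard textbook definitions (1 Dornic degree = 0.1 g lactic acid per litre; AOAC-style titratable acidity expressed as % lactic acid, with 0.090 g of lactic acid per milliequivalent of NaOH); the numbers are illustrative, not data from this study.

```python
LACTIC_ACID_G_PER_MEQ = 0.090  # grams of lactic acid per milliequivalent of NaOH

def acidity_percent_lactic(ml_naoh, normality, sample_ml):
    """AOAC-style titratable acidity, expressed as % lactic acid
    (grams of lactic acid per 100 mL of milk)."""
    return ml_naoh * normality * LACTIC_ACID_G_PER_MEQ * 100.0 / sample_ml

def dornic_to_percent_lactic(degrees_dornic):
    """1 degree Dornic = 0.1 g lactic acid / L = 0.01 % lactic acid."""
    return 0.01 * degrees_dornic
```

On paper the two scales are linearly related, which is why equivalence of the methods is plausible; the statistical question addressed in the study is whether real titrations actually reproduce that relationship.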
Global adjustment of analytical theories of planetary motion to observations: the first results
NASA Astrophysics Data System (ADS)
Fienga, A.
1999-12-01
In this work, we have begun the first adjustment of the analytical theories of the planets built at the IMC-BDL, VSOP. We gathered together several types of observations and reduced and homogenized them. There are very different types of data: old and recent transit observations spread over a period of more than 2 centuries (1750-1997), photographic and CCD observations, radar ranging data, and positions of planets deduced from tracking observations of space probes by use of VLBI techniques. We also used positions of outer planets deduced from absolute positions of their satellites. We treated a large number of data, limiting our first study to the planets Mercury, Venus, Jupiter and Saturn. From the first fit, made on Mercury and Venus observations, we deduced a new link between the inertial dynamical reference frame of VSOP and the inertial kinematic reference frame of the ICRS. We chose to test the quality of the new solution of motion of the Earth-Moon barycenter by including this solution in the fitted orbit of the outer planets. We made a second fit based on observations of Jupiter and Saturn, computed in the reference frame deduced from the fit on Mercury and Venus observations. We improved the accuracy by a factor of 2 on the positions of Jupiter when comparing positions deduced from the fitted solution with observed positions (made at La Palma in 1983-1993) that were not included in the fit. Finally, we made one of the first wavelet analyses of non-regularly sampled time series (see other presentation).
Reigel, M.; Cozzi, A.
2010-08-17
This report details the chemical analysis results for the characterization of the May 19, 2010 inadvertent transfer from the Saltstone Production Facility (SPF) to the Saltstone Disposal Facility (SDF). On May 19, 2010, the SPF inadvertently transferred approximately 1800 gallons of untreated low-level salt solution from the salt feed tank (SFT) to Cell F of Vault 4. The transfer was identified, and during shutdown to a safe configuration approximately 70 gallons of SFT material was left in the Saltstone hopper. After the shutdown, the material in the hopper remained undisturbed, while the SFT received approximately 1400 gallons of drain water from the Vault 4 bleed system. The drain water path from Vault 4 to the SFT does not include the hopper (Figure 1); therefore, it was determined that the material remaining in the hopper was the most representative sample of the salt solution transferred to the vault. To complete item No. 5 of Reference 1, Savannah River National Laboratory (SRNL) was asked to analyze the liquid sample retrieved from the hopper for pH and for metals identified by the Resource Conservation and Recovery Act (RCRA). SRNL prepared a report to complete item No. 5 and determine the hazardous nature of the transfer. Waste Solidification Engineering then instructed SRNL to provide a more detailed analysis of the slurried sample to assist in determining the portion of Tank 50 waste in the hopper sample.
Gimeno, Pascal; Maggio, Annie-Françoise; Bousquet, Claudine; Quoirez, Audrey; Civade, Corinne; Bonnet, Pierre-Antoine
2012-08-31
Esters of phthalic acid, more commonly named phthalates, may be present in cosmetic products as ingredients or contaminants. Their presence as contaminants can be due to the manufacturing process, to the raw materials used, or to the migration of phthalates from packaging when plastic (polyvinyl chloride, PVC) is used. Eight phthalates (DBP, DEHP, BBP, DMEP, DnPP, DiPP, DPP, and DiBP), classified H360 or H361, are forbidden in cosmetics according to the European regulation on cosmetics 1223/2009. A GC/MS method was developed for the assay of 12 phthalates in cosmetics, including the 8 regulated phthalates. Analyses are carried out on a GC/MS system in electron impact ionization mode (EI). The separation of phthalates is obtained on a cross-linked 5%-phenyl/95%-dimethylpolysiloxane capillary column, 30 m × 0.25 mm (i.d.) × 0.25 µm film thickness, using a temperature gradient. Phthalate quantification is performed by external calibration using an internal standard. Validation data obtained on standard solutions highlight satisfactory system conformity (resolution > 1.5), a common quantification limit of 0.25 ng injected, acceptable linearity between 0.5 μg mL⁻¹ and 5.0 μg mL⁻¹, as well as precision and accuracy in agreement with in-house specifications. Cosmetic samples ready for analytical injection are analyzed after dilution in ethanol, whereas more complex cosmetic matrices, like milks and creams, are assayed after a liquid/liquid extraction using tert-butyl methyl ether (TBME). Depending on the type of cosmetic analyzed, the common limits of quantification for the 12 phthalates were set at 0.5 or 2.5 μg g⁻¹. All samples were assayed using the analytical approach described in the ISO 12787 international standard "Cosmetics-Analytical methods-Validation criteria for analytical results using chromatographic techniques". This analytical protocol is particularly suitable when reconstituted sample matrices cannot be prepared.
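Quantification by external calibration with an internal standard, as described above, amounts to fitting the analyte-to-internal-standard response ratio against standard concentrations and inverting the fit for unknowns; a minimal sketch with illustrative numbers, not the paper's calibration data:

```python
import numpy as np

# External calibration with internal standard (illustrative values):
# response ratio (analyte peak area / IS peak area) vs concentration.
conc = np.array([0.5, 1.0, 2.0, 3.0, 5.0])        # standards, ug/mL
ratio = np.array([0.24, 0.51, 0.99, 1.52, 2.48])  # measured A / A_IS

slope, intercept = np.polyfit(conc, ratio, 1)      # linear calibration

# Quantify an unknown sample from its measured response ratio
unknown_ratio = 1.20
unknown_conc = (unknown_ratio - intercept) / slope
print(f"estimated concentration = {unknown_conc:.2f} ug/mL")
```

Normalizing to the internal standard compensates for injection-volume and matrix variability, which is why the calibration is built on the ratio rather than the raw analyte area.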
Crock, J.G.; Smith, D.B.; Yager, T.J.B.; Berry, C.J.; Adams, M.G.
2010-01-01
Since late 1993, Metro Wastewater Reclamation District of Denver, a large wastewater treatment plant in Denver, Colo., has applied Grade I, Class B biosolids to about 52,000 acres of nonirrigated farmland and rangeland near Deer Trail, Colo., U.S.A. In cooperation with the Metro District in 1993, the U.S. Geological Survey began monitoring groundwater at part of this site. In 1999, the Survey began a more comprehensive monitoring study of the entire site to address stakeholder concerns about the potential chemical effects of biosolids applications to water, soil, and vegetation. This more comprehensive monitoring program has recently been extended through the end of 2010. Monitoring components of the more comprehensive study include biosolids collected at the wastewater treatment plant, soil, crops, dust, alluvial and bedrock groundwater, and stream-bed sediment. Streams at the site are dry most of the year, so samples of stream-bed sediment deposited after rain were used to indicate surface-water effects. This report presents analytical results for the biosolids samples collected at the Metro District wastewater treatment plant in Denver and analyzed for 2009. In general, the objective of each component of the study was to determine whether concentrations of nine trace elements ('priority analytes') (1) were higher than regulatory limits, (2) were increasing with time, or (3) were significantly higher in biosolids-applied areas than in a similar farmed area where biosolids were not applied. Previous analytical results indicate that the elemental composition of biosolids from the Denver plant was consistent during 1999-2008, and this consistency continues with the samples for 2009. Total concentrations of regulated trace elements remain consistently lower than the regulatory limits for the entire monitoring period. None of the priority analytes appears to have increased in concentration during the 11 years of this study.
Crock, J.G.; Smith, D.B.; Yager, T.J.B.; Berry, C.J.; Adams, M.G.
2011-01-01
Since late 1993, Metro Wastewater Reclamation District of Denver (Metro District), a large wastewater treatment plant in Denver, Colo., has applied Grade I, Class B biosolids to about 52,000 acres of nonirrigated farmland and rangeland near Deer Trail, Colo., U.S.A. In cooperation with the Metro District in 1993, the U.S. Geological Survey (USGS) began monitoring groundwater at part of this site. In 1999, the USGS began a more comprehensive monitoring study of the entire site to address stakeholder concerns about the potential chemical effects of biosolids applications to water, soil, and vegetation. This more comprehensive monitoring program was recently extended through the end of 2010 and is now completed. Monitoring components of the more comprehensive study include biosolids collected at the wastewater treatment plant, soil, crops, dust, alluvial and bedrock groundwater, and stream-bed sediment. Streams at the site are dry most of the year, so samples of stream-bed sediment deposited after rain were used to indicate surface-water runoff effects. This report summarizes analytical results for the biosolids samples collected at the Metro District wastewater treatment plant in Denver and analyzed for 2010. In general, the objective of each component of the study was to determine whether concentrations of nine trace elements ("priority analytes") (1) were higher than regulatory limits, (2) were increasing with time, or (3) were significantly higher in biosolids-applied areas than in a similar farmed area where biosolids were not applied (background). Previous analytical results indicate that the elemental composition of biosolids from the Denver plant was consistent during 1999-2009, and this consistency continues with the samples for 2010. Total concentrations of regulated trace elements remain consistently lower than the regulatory limits for the entire monitoring period. None of the priority analytes appears to have increased in concentration during the 12 years of this study.
Analytic Result for the Two-loop Six-point NMHV Amplitude in N = 4 Super Yang-Mills Theory
Dixon, Lance J.; Drummond, James M.; Henn, Johannes M. (Humboldt U., Berlin; Princeton, Inst. Advanced Study)
2012-02-15
We provide a simple analytic formula for the two-loop six-point ratio function of planar N = 4 super Yang-Mills theory. This result extends the analytic knowledge of multi-loop six-point amplitudes beyond those with maximal helicity violation. We make a natural ansatz for the symbols of the relevant functions appearing in the two-loop amplitude, and impose various consistency conditions, including symmetry, the absence of spurious poles, the correct collinear behavior, and agreement with the operator product expansion for light-like (super) Wilson loops. This information reduces the ansatz to a small number of relatively simple functions. In order to fix these parameters uniquely, we utilize an explicit representation of the amplitude in terms of loop integrals that can be evaluated analytically in various kinematic limits. The final compact analytic result is expressed in terms of classical polylogarithms, whose arguments are rational functions of the dual conformal cross-ratios, plus precisely two functions that are not of this type. One of the functions, the loop integral Ω^(2), also plays a key role in a new representation of the remainder function R_6^(2) in the maximally helicity violating sector. Another interesting feature at two loops is the appearance of a new (parity odd) x (parity odd) sector of the amplitude, which is absent at one loop, and which is uniquely determined in a natural way in terms of the more familiar (parity even) x (parity even) part. The second non-polylogarithmic function, the loop integral Ω̃^(2), characterizes this sector. Both Ω^(2) and Ω̃^(2) can be expressed as one-dimensional integrals over classical polylogarithms with rational arguments.
NASA Astrophysics Data System (ADS)
Kang, Jae-Do; Tagawa, Hiroshi
2016-03-01
This paper presents results of experimental and numerical investigations of a seesaw energy dissipation system (SEDS) using fluid viscous dampers (FVDs). To confirm the characteristics of the FVDs used in the tests, harmonic dynamic loading tests were conducted in advance of the free vibration tests and the shaking table tests. Shaking table tests were conducted to demonstrate the damping capacity of the SEDS under random excitations such as seismic waves, and the results showed that SEDSs have sufficient damping capacity for reducing the seismic response of frames. Free vibration tests were conducted to confirm the reliability of a simplified analysis. Time history response analyses were also conducted, and their results are in close agreement with the shaking table test results.
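Free-vibration tests of this kind are typically used to identify the damping ratio from the decay of successive response peaks (the logarithmic decrement); a minimal sketch for a single-degree-of-freedom stand-in with illustrative parameters, not the tested frame:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Damped SDOF oscillator m*x'' + c*x' + k*x = 0, a stand-in for the
# frame-plus-damper system; all parameters are illustrative.
m, k = 100.0, 4.0e4          # mass (kg), stiffness (N/m)
zeta = 0.15                  # damping ratio provided by the damper
c = 2 * zeta * np.sqrt(k * m)

def ode(t, y):
    x, v = y
    return [v, -(c * v + k * x) / m]

sol = solve_ivp(ode, (0, 5), [0.01, 0.0], dense_output=True,
                rtol=1e-9, atol=1e-12)

# Logarithmic decrement from two successive positive peaks recovers zeta
t = np.linspace(0, 5, 20000)
x = sol.sol(t)[0]
peaks = [i for i in range(1, len(x) - 1)
         if x[i] > x[i - 1] and x[i] > x[i + 1] and x[i] > 0]
delta = np.log(x[peaks[0]] / x[peaks[1]])
zeta_est = delta / np.sqrt(4 * np.pi**2 + delta**2)
print(f"identified damping ratio = {zeta_est:.3f}")
```

The same peak-decay identification applies to measured free-vibration records, which is how test data are checked against a simplified analysis.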
Tank 241-BY-112, cores 174 and 177 analytical results for the final report
Nuzum, J.L.
1997-05-06
Results from bulk density tests ranged from 1.03 g/mL to 1.86 g/mL. The highest bulk density result of 1.86 g/mL was used to calculate the solid total alpha activity notification limit for this tank (33.1 µCi/g). Total Alpha (AT) Analysis: Attachment 2 contains the Data Verification and Deliverable (DVD) Summary Report for AT analyses. This report summarizes results from AT analyses and provides data qualifiers and total propagated uncertainty (TPU) values for the results. The TPU values are based on the uncertainties inherent in each step of the analysis process. They may be used as an additional reference to determine reasonable RPD values, which may be used to accept valid data that do not meet the TSAP acceptance criteria. A report guide is provided with the report to assist in understanding this summary report.
Analytical results for post-buckling behaviour of plates in compression and in shear
NASA Technical Reports Server (NTRS)
Stein, M.
1985-01-01
The postbuckling behavior of long rectangular isotropic and orthotropic plates is determined. By assuming trigonometric functions in one direction, the nonlinear partial differential equations of von Karman large deflection plate theory are converted into nonlinear ordinary differential equations. The ordinary differential equations are solved numerically using an available boundary value problem solver which makes use of Newton's method. Results for longitudinal compression show different postbuckling behavior between isotropic and orthotropic plates. Results for shear show that change in inplane edge constraints can cause large change in postbuckling stiffness.
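The solution strategy described above (trigonometric reduction of the von Karman equations to nonlinear ODEs, then a Newton-based boundary value solver) can be illustrated on a simpler nonlinear BVP; the sketch below uses SciPy's collocation solver, which also relies on Newton iteration, on the classical Bratu problem as a stand-in for the plate equations:

```python
import numpy as np
from scipy.integrate import solve_bvp

# Bratu problem: y'' + exp(y) = 0, y(0) = y(1) = 0 -- a standard
# nonlinear BVP test case, used here only to illustrate the method.
def rhs(x, y):
    return np.vstack([y[1], -np.exp(y[0])])

def bc(ya, yb):
    return np.array([ya[0], yb[0]])

x = np.linspace(0.0, 1.0, 11)
y0 = np.zeros((2, x.size))           # zero initial guess -> lower branch
sol = solve_bvp(rhs, bc, x, y0)
print(sol.success, sol.sol(0.5)[0])  # peak deflection at midspan
```

As in the paper's approach, the solver linearizes the nonlinear ODE system at each Newton step and refines the mesh until the residual tolerance is met.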
Test and analytical results of a new bolt configuration for a diagnostic/device canister connection
Boyce, L.
1981-09-01
Underground nuclear explosive tests utilize a nuclear device canister suspended from a canister containing diagnostic equipment. A standard design for these canisters and their connection is being developed by the Test Systems Section of the Nuclear Test Engineering Division of Lawrence Livermore National Laboratory. Test and analysis of a new bolt configuration for a portion of this bolted canister connection have been carried out, and results are presented and compared for channel loads of 100,000 and 200,000 lb. When results for this connection design are compared with an earlier one, significant reductions are found in bolt loads, end plate separations, and certain stresses and moments.
Critical behavior of two-dimensional models with spatially modulated phases: Analytic results
NASA Astrophysics Data System (ADS)
Ruján, P.
1981-12-01
The two-dimensional Elliott [or axial next-nearest-neighbor Ising (ANNNI)] model is mapped into an eight-vertex model with direct and staggered fields. With the use of the transfer-matrix approach it is shown that the dual of the ANNNI model belongs to the universality class of the one-dimensional quantum XY model in a staggered field at T=0. The phase structure is investigated by high- and low-temperature expansions of the correlation length and by spin-wave-like approximations valid in first order at low and high temperatures, respectively. The fact that the phase diagram obtained at low temperatures agrees qualitatively with recent results by Villain and Bak and by Coppersmith et al. shows that the paramagnetic phase extends until T=0. The role of the umklapp scattering in determining the critical wave vector in the modulated phase and in stabilizing the <2> antiphase is pointed out. In the eight-vertex representation the critical indices are identified in the floating, massless phase. The dislocations destabilizing this incommensurate phase correspond to the energy operator of the eight-vertex model. Finally, it is argued that the apparent contradiction between the low-temperature results on one hand, and the Monte Carlo simulations and high-temperature-expansion results on the other hand, is probably due to the strong oscillatory behavior of spin-spin correlation functions in the massive paramagnetic region.
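The transfer-matrix approach used above can be illustrated on the simplest case, the 1D Ising chain, where the correlation length follows from the ratio of the two transfer-matrix eigenvalues; a minimal sketch (not the eight-vertex calculation itself):

```python
import numpy as np

# Transfer matrix of the zero-field 1D Ising chain, K = J/(kT):
# T = [[e^K, e^-K], [e^-K, e^K]]; the correlation length is
# xi = 1 / ln(lambda_1 / lambda_2) from the two eigenvalues.
def correlation_length(K):
    T = np.array([[np.exp(K), np.exp(-K)],
                  [np.exp(-K), np.exp(K)]])
    lam = np.sort(np.linalg.eigvalsh(T))[::-1]
    return 1.0 / np.log(lam[0] / lam[1])

K = 1.0
xi = correlation_length(K)
# Agrees with the exact result xi = -1 / ln(tanh K)
print(xi, -1.0 / np.log(np.tanh(K)))
```

The same eigenvalue-ratio formula underlies the high- and low-temperature expansions of the correlation length mentioned above, with a larger transfer matrix for the eight-vertex representation.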
Hartwell, William T.; Daniels, Jeffrey; Nikolich, George; Shadel, Craig; Giles, Ken; Karr, Lynn; Kluesner, Tammy
2012-01-01
During the period April to June 2008, at the behest of the Department of Energy (DOE), National Nuclear Security Administration, Nevada Site Office (NNSA/NSO), the Desert Research Institute (DRI) constructed and deployed two portable environmental monitoring stations at the Tonopah Test Range (TTR) as part of the Environmental Restoration Project Soils Activity. DRI has operated these stations since that time. A third station was deployed in the period May to September 2011. The TTR is located within the northwest corner of the Nevada Test and Training Range (NTTR), and covers an area of approximately 725.20 km2 (280 mi2). The primary objective of the monitoring stations is to evaluate whether and under what conditions there is wind transport of radiological contaminants from Soils Corrective Action Units (CAUs) associated with Operation Roller Coaster on TTR. Operation Roller Coaster was a series of tests, conducted in 1963, designed to examine the stability and dispersal of plutonium in storage and transportation accidents. These tests did not result in any nuclear explosive yield. However, the tests did result in the dispersal of plutonium and contamination of surface soils in the surrounding area.
Tank 241-BY-101, cores 189 and 199 analytical results for the final report
Nuzum, J.L.
1997-09-25
This document is the final laboratory report for Tank 241-BY-101. Push mode core segments were removed from Risers 10B and 10D between May 27, 1997, and June 1, 1997. Segments were received and extruded at 222-S Laboratory. Analyses were performed in accordance with Tank 241-BY-101 Push Core Sampling and Analysis Plan (TSAP) and Tank Safety Screening Data Quality Objective (DQO). None of the subsamples submitted for total alpha activity (AT) or differential scanning calorimetry (DSC) analysis exceeded the notification limits as stated in TSAP and DQO. The statistical results of the 95% confidence interval on the mean calculations are provided by the Tank Waste Remediation Systems (TWRS) Technical Basis Group, and are not considered in this report. Near Infrared Spectroscopy (NIR) analysis was requested in order to compare NIR results with those obtained from percent water gravimetric analysis (%H2O) and thermogravimetric analysis (TGA). The TWRS Technical Basis Group rescinded the request for this analysis, and neither NIR nor %H2O analyses were performed.
Tank 241-AP-105, cores 208, 209 and 210, analytical results for the final report
Nuzum, J.L.
1997-10-24
This document is the final laboratory report for Tank 241-AP-105. Push mode core segments were removed from Risers 24 and 28 between July 2, 1997, and July 14, 1997. Segments were received and extruded at 222-S Laboratory. Analyses were performed in accordance with Tank 241-AP-105 Push Mode Core Sampling and Analysis Plan (TSAP) (Hu, 1997) and Tank Safety Screening Data Quality Objective (DQO) (Dukelow, et al., 1995). None of the subsamples submitted for total alpha activity (AT), differential scanning calorimetry (DSC) analysis, or total organic carbon (TOC) analysis exceeded the notification limits as stated in TSAP and DQO. The statistical results of the 95% confidence interval on the mean calculations are provided by the Tank Waste Remediation Systems Technical Basis Group, and are not considered in this report. Appearance and Sample Handling: Two cores, each consisting of four segments, were expected from Tank 241-AP-105. Three cores were sampled, and complete cores were not obtained. TSAP states core samples should be transported to the laboratory within three calendar days from the time each segment is removed from the tank. This requirement was not met for all cores. Attachment 1 illustrates subsamples generated in the laboratory for analysis and identifies their sources. This reference also relates tank farm identification numbers to their corresponding 222-S Laboratory sample numbers.
Tank 241-T-201, core 192 analytical results for the final report
Nuzum, J.L.
1997-08-07
This document is the final laboratory report for Tank 241-T-201. Push mode core segments were removed from Riser 3 between April 24, 1997, and April 25, 1997. Segments were received and extruded at 222-S Laboratory. Analyses were performed in accordance with Tank 241-T-201 Push Mode Core Sampling and Analysis Plan (TSAP) (Hu, 1997), Letter of Instruction for Core Sample Analysis of Tanks 241-T-201, 241-T-202, 241-T-203, and 241-T-204 (LOI) (Bell, 1997), Additional Core Composite Sample from Drainable Liquid Samples for Tank 241-T-201 (ACC) (Hall, 1997), and Safety Screening Data Quality Objective (DQO) (Dukelow, et al., 1995). None of the subsamples submitted for total alpha activity (AT) or differential scanning calorimetry (DSC) analyses exceeded the notification limits stated in DQO. The statistical results of the 95% confidence interval on the mean calculations are provided by the Tank Waste Remediation Systems Technical Basis Group, and are not considered in this report.
Following a trend with an exponential moving average: Analytical results for a Gaussian model
NASA Astrophysics Data System (ADS)
Grebenkov, Denis S.; Serror, Jeremy
2014-01-01
We investigate how price variations of a stock are transformed into the profits and losses (P&Ls) of a trend following strategy. In the frame of a Gaussian model, we derive the probability distribution of P&Ls and analyze its moments (mean, variance, skewness and kurtosis) and asymptotic behavior (quantiles). We show that the asymmetry of the distribution (with often small losses and less frequent but significant profits) is characteristic of trend following strategies and depends only weakly on the peculiarities of price variations. At short times, trend following strategies admit larger losses than one may anticipate from standard Gaussian estimates, while smaller losses are ensured at longer times. Simple explicit formulas characterizing the distribution of P&Ls illustrate the basic mechanisms of momentum trading, while general matrix representations can be applied to arbitrary Gaussian models. We also compute explicitly the annualized risk-adjusted P&L and the strategy turnover to account for transaction costs. We deduce the optimal trend following timescale and its dependence on both the autocorrelation level and transaction costs. Theoretical results are illustrated on the Dow Jones index.
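The mechanism analyzed above, an exponential moving average of past price increments used as a trading signal, can be simulated directly; a minimal sketch under the Gaussian assumption, with illustrative parameters (for uncorrelated increments the expected P&L is zero; autocorrelated, i.e. trending, increments would make it positive):

```python
import numpy as np

rng = np.random.default_rng(42)
n, eta = 100_000, 0.05          # steps and EMA decay rate (illustrative)
dp = rng.normal(0.0, 1.0, n)    # Gaussian price increments, no true trend

# Trading signal: exponential moving average of past increments
ema = np.empty(n)
ema[0] = 0.0
for t in range(1, n):
    ema[t] = (1 - eta) * ema[t - 1] + eta * dp[t - 1]

# P&L increment: position proportional to the signal times the next move
pnl = ema * dp
print(f"mean P&L per step = {pnl.mean():.5f}, variance = {pnl.var():.5f}")
```

The stationary variance of the signal is eta/(2 - eta) times the increment variance, so the P&L variance here is about 0.026 per step; distributional asymmetry appears once the position enters multiplicatively, as in the model above.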
Audebert, P.; Temmar, A.
1997-05-01
In continuation of a series of tests, original results for oak drying in an evacuated kiln are presented here for different plate temperatures and various pressures in the kiln. These results include, in particular, the drying curves and the evolution of temperature, moisture, and pressure in and on the wood. They reveal the pressures and temperature levels occurring in the wood during the drying period. These results also allow the development of two types of drying models: a simple one-dimensional model of the drying curves, based on analytical solutions of the equations of water diffusion in wood, and a two-dimensional model of the temperature, moisture, and pressure fields in the wood, obtained by applying the finite element method. The boundary conditions of the second model can be fixed precisely thanks to the results of the first model. In both cases, the proposed solutions are supported by experimental results.
Tank 103, 219-S Facility at 222-S Laboratory, analytical results for the final report
Fuller, R.K.
1998-06-18
This is the final report for the polychlorinated biphenyls (PCB) analysis of Tank 103 (TK-103) in the 219-S Facility at 222-S Laboratory. Twenty 1-liter bottles (sample numbers S98SO00074 through S98SO00093) were received from TK-103 during two sampling events, on May 5 and May 7, 1998. The samples were centrifuged to separate the solids and liquids. The centrifuged sludge was analyzed for PCBs as Aroclor mixtures. The results are discussed on page 6. The sample breakdown diagram (page 114) provides a cross-reference of the sample identification of the bulk samples to the laboratory identification number for the solids. The request for sample analysis (RSA) form is provided as page 117. The raw data are presented on page 43. Sample Description, Handling, and Preparation: Twenty samples were received in the laboratory in 1-liter bottles. The first 8 samples were received on May 5, 1998. There were insufficient solids to perform the requested PCB analysis, and 12 additional samples were collected and received on May 7, 1998. Breakdown and subsampling were performed on May 8, 1998. Sample number S98SO00084 was lost due to a broken bottle. Nineteen samples were centrifuged and the solids were collected in 8 centrifuge cones. After the last sample was processed, the solids were consolidated into 2 centrifuge cones. The first cone contained 9.7 grams of solids and 13.0 grams were collected in the second cone. The wet sludge from the first centrifuge cone was submitted to the laboratory for PCB analysis (sample number S98SO00102). The other sample portion (S98SO00103) was retained for possible additional analyses.
Ficklin, W.H.; Nowlan, G.A.; Preston, D.J.
1983-01-01
Water samples were collected in the vicinity of Jackman, Maine as a part of the study of the relationship of dissolved constituents in water to the sediments subjacent to the water. Each sample was analyzed for specific conductance, alkalinity, acidity, pH, fluoride, chloride, sulfate, phosphate, nitrate, sodium, potassium, calcium, magnesium, and silica. Trace elements determined were copper, zinc, molybdenum, lead, iron, manganese, arsenic, cobalt, nickel, and strontium. The longitude and latitude of each sample location and a sample site map are included in the report as well as a table of the analytical results.
Magnuson, Matthew; Campisano, Romy; Griggs, John; Fitz-James, Schatzi; Hall, Kathy; Mapp, Latisha; Mullins, Marissa; Nichols, Tonya; Shah, Sanjiv; Silvestri, Erin; Smith, Terry; Willison, Stuart; Ernst, Hiba
2014-11-01
Catastrophic incidents can generate a large number of samples of analytically diverse types, including forensic, clinical, environmental, food, and others. Environmental samples include water, wastewater, soil, air, urban building and infrastructure materials, and surface residue. Such samples may arise not only from contamination from the incident but also from the multitude of activities surrounding the response to the incident, including decontamination. This document summarizes a range of activities to help build laboratory capability in preparation for sample analysis following a catastrophic incident, including selection and development of fit-for-purpose analytical methods for chemical, biological, and radiological contaminants. Fit-for-purpose methods are those which have been selected to meet project-specific data quality objectives. For example, methods could be fit for screening contamination in the early phases of investigation of contamination incidents because they are rapid and easily implemented, but those same methods may not be fit for the purpose of remediating the environment to acceptable levels when a more sensitive method is required. While the exact data quality objectives defining fitness-for-purpose can vary with each incident, a governing principle of the method selection and development process for environmental remediation and recovery is based on achieving high throughput while maintaining high quality analytical results. This paper illustrates the result of applying this principle, in the form of a compendium of analytical methods for contaminants of interest. The compendium is based on experience with actual incidents, where appropriate and available. This paper also discusses efforts aimed at adaptation of existing methods to increase fitness-for-purpose and development of innovative methods when necessary. The contaminants of interest are primarily those potentially released through catastrophes resulting from malicious activity.
NASA Technical Reports Server (NTRS)
Farassat, F.; Casper, J.
2003-01-01
Alan Powell has made significant contributions to the understanding of many aeroacoustic problems, in particular, the problems of broadband noise from jets and boundary layers. In this paper, some analytic results are presented for the calculation of the correlation function of the broadband noise radiated from a wing, a propeller, and a jet in uniform forward motion. It is shown that, when the observer (or microphone) motion is suitably chosen, the geometric terms of the radiation formula become time independent. The time independence of these terms leads to a significant simplification of the statistical analysis of the radiated noise, even when the near field terms are included. For a wing in forward motion, if the observer is in the moving reference frame, then the correlation function of the near and far field noise can be related to a space-time cross-correlation function of the pressure on the wing surface. A similar result holds for a propeller in forward flight if the observer is in a reference frame that is attached to the propeller and rotates at the shaft speed. For a jet in motion, it is shown that the correlation function of the radiated noise can be related to the space-time cross-correlation of the Lighthill stress tensor in the jet. Exact analytical results are derived for all three cases. For the cases under present consideration, the inclusion of the near field terms does not introduce additional complexity, as compared to existing formulations that are limited to the far field.
Sotnikov, V.; Kim, T.; Lundberg, J.; Paraschiv, I.; Mehlhorn, T. A.
2014-05-15
The presence of plasma turbulence can strongly influence propagation properties of electromagnetic signals used for surveillance and communication. In particular, we are interested in the generation of low frequency plasma density irregularities in the form of coherent vortex structures. Interchange or flute type density irregularities in magnetized plasma are associated with Rayleigh-Taylor type instability. These types of density irregularities play an important role in refraction and scattering of high frequency electromagnetic signals propagating in the earth ionosphere, in high energy density physics, and in many other applications. We will discuss scattering of high frequency electromagnetic waves on low frequency density irregularities due to the presence of vortex density structures associated with interchange instability. We will also present particle-in-cell simulation results of electromagnetic scattering on vortex type density structures using the large scale plasma code LSP and compare them with analytical results.
Pele, Maria; Brohée, Marcel; Anklam, Elke; Van Hengel, Arjon J
2007-12-01
Accidental exposure to hazelnut or peanut constitutes a real threat to the health of allergic consumers. Correct information regarding food product ingredients is of paramount importance for the consumer, thereby reducing exposure to food allergens. In this study, 569 cookies and chocolates on the European market were purchased. All products were analysed to determine peanut and hazelnut content, allowing a comparison of the analytical results with information provided on the product label. Compared to cookies, chocolates are more likely to contain undeclared allergens, while, in both food categories, hazelnut traces were detected at higher frequencies than peanut. The presence of a precautionary label was found to be related to a higher frequency of positive test results. The majority of chocolates carrying a precautionary label tested positive for hazelnut, whereas peanut traces were not detected in 75% of the cookies carrying a precautionary label.
NASA Astrophysics Data System (ADS)
Florens, Serge; Snyman, Izak
2015-11-01
We analyze the spatial correlation structure of the spin density of an electron gas in the vicinity of an antiferromagnetically coupled Kondo impurity. Our analysis extends to the regime of spin-anisotropic couplings, where there are no quantitative results for spatial correlations in the literature. We use an original and numerically exact method, based on a systematic coherent-state expansion of the ground state of the underlying spin-boson Hamiltonian. It has not yet been applied to the computation of observables that are specific to the fermionic Kondo model. We also present an important technical improvement to the method that obviates the need to discretize modes of the Fermi sea, and allows one to tackle the problem in the thermodynamic limit. As a result, one can obtain excellent spatial resolution over arbitrary length scales, for a relatively low computational cost, a feature that gives the method an advantage over popular techniques such as the numerical and density-matrix renormalization groups. We find that the anisotropic Kondo model shows rich universal scaling behavior in the spatial structure of the entanglement cloud. First, SU(2) spin-symmetry is dynamically restored in a finite domain in the parameter space in the vicinity of the isotropic line, as expected from poor man's scaling. More surprisingly, we are able to obtain in closed analytical form a set of different, yet universal, scaling curves for strong exchange asymmetry, which are parametrized by the longitudinal exchange coupling. Deep inside the cloud, i.e., for distances smaller than the Kondo length, the correlation between the electron spin density and the impurity spin oscillates between ferromagnetic and antiferromagnetic values at the scale of the Fermi wavelength, an effect that is drastically enhanced at strongly anisotropic couplings. Our results also provide further numerical checks and alternative analytical approximations for the Kondo overlaps that were recently computed by
NASA Technical Reports Server (NTRS)
Bittker, D. A.
1979-01-01
The effect of combustor operating conditions on the conversion of fuel-bound nitrogen (FBN) to nitrogen oxides (NOx) was analytically determined. The effect of FBN and of operating conditions on carbon monoxide (CO) formation was also studied. For these computations, the combustor was assumed to be a two-stage, adiabatic, perfectly stirred reactor. Propane-air was used as the combustible mixture, and fuel-bound nitrogen was simulated by adding nitrogen atoms to the mixture. The oxidation of propane and the formation of NOx and CO were modeled by a fifty-seven-reaction chemical mechanism. The results for NOx and CO formation are given as functions of primary- and secondary-stage equivalence ratios and residence times.
Crock, J.G.; Smith, D.B.; Yager, T.J.B.; Berry, C.J.; Adams, M.G.
2009-01-01
Since late 1993, Metro Wastewater Reclamation District of Denver (Metro District), a large wastewater treatment plant in Denver, Colo., has applied Grade I, Class B biosolids to about 52,000 acres of nonirrigated farmland and rangeland near Deer Trail, Colo. (U.S.A.). In cooperation with the Metro District in 1993, the U.S. Geological Survey (USGS) began monitoring groundwater at part of this site. In 1999, the USGS began a more comprehensive monitoring study of the entire site to address stakeholder concerns about the potential chemical effects of biosolids applications to water, soil, and vegetation. This more comprehensive monitoring program has recently been extended through 2010. Monitoring components of the more comprehensive study include biosolids collected at the wastewater treatment plant, soil, crops, dust, alluvial and bedrock groundwater, and stream-bed sediment. Streams at the site are dry most of the year, so samples of stream-bed sediment deposited after rain were used to indicate surface-water effects. This report will present only analytical results for the biosolids samples collected at the Metro District wastewater treatment plant in Denver and analyzed during 2008. Crock and others have earlier presented a compilation of analytical results for the biosolids samples collected and analyzed for 1999 through 2006, and in a separate report, data for the 2007 biosolids are reported. More information about the other monitoring components is presented elsewhere in the literature. Priority parameters for biosolids identified by the stakeholders and also regulated by Colorado when used as an agricultural soil amendment include the total concentrations of nine trace elements (arsenic, cadmium, copper, lead, mercury, molybdenum, nickel, selenium, and zinc), plutonium isotopes, and gross alpha and beta activity. Nitrogen and chromium also were priority parameters for groundwater and sediment components.
Goheen, Steven C.
2001-07-01
Characterizing environmental samples has been exhaustively addressed in the literature for most analytes of environmental concern. One of the weak areas of environmental analytical chemistry is that of radionuclides and samples contaminated with radionuclides. The analysis of samples containing high levels of radionuclides can be far more complex than that of non-radioactive samples. This chapter addresses the analysis of samples with a wide range of radioactivity. The other areas of characterization examined in this chapter are the hazardous components of mixed waste, and special analytes often associated with radioactive materials. Characterizing mixed waste is often similar to characterizing waste components in non-radioactive materials. The largest differences are in associated safety precautions to minimize exposure to dangerous levels of radioactivity. One must attempt to keep radiological dose as low as reasonably achievable (ALARA). This chapter outlines recommended procedures to safely and accurately characterize regulated components of radioactive samples.
NASA Technical Reports Server (NTRS)
Bieber, J. W.; Gray, P. C.; Matthaeus, W. H.
1995-01-01
Parallel and perpendicular diffusion coefficients were computed numerically by following particle orbits in a simulated magnetic field. The simulated field was chosen to have δB/B₀ small, so as to provide a test of quasilinear theory in a regime where the theory should be most accurate. The simulation space is large enough to contain many magnetic field correlation lengths, so that effects of field line random walk can be studied. After presenting results for parallel diffusion, we will focus on two controversial issues relating to perpendicular diffusion: (1) Do quasilinear descriptions of perpendicular diffusion retain any validity for particles whose Larmor radius is smaller than a correlation length? (2) Does field line random walk lead to particle diffusion in the usual sense, or does it produce 'compound' diffusion for which particles spread out proportionally to t^(1/4) instead of t^(1/2)?
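Diffusion coefficients in orbit-following studies of this kind are typically estimated from the running mean-square displacement of an ensemble, D(t) = ⟨Δx²⟩/2t: normal diffusion shows up as a plateau in D(t), while 'compound' diffusion (spreading ∝ t^(1/4)) appears as a decaying D(t). A minimal sketch of the estimator, using a plain unbiased random walk as a stand-in for the simulated particle orbits:

```python
import numpy as np

rng = np.random.default_rng(0)

def running_diffusion(n_particles=2000, n_steps=4000, dt=1.0, step=1.0):
    """Running diffusion coefficient D(t) = <dx^2> / (2 t) estimated from an
    ensemble of unbiased random-walk trajectories (a stand-in for the orbits
    followed in the simulated magnetic field)."""
    kicks = rng.choice([-step, step], size=(n_particles, n_steps))
    x = np.cumsum(kicks, axis=1)              # position of each particle vs time
    t = dt * np.arange(1, n_steps + 1)
    return np.mean(x**2, axis=0) / (2.0 * t)

D = running_diffusion()
# Normal diffusion: D(t) plateaus near step^2 / (2 dt) = 0.5 here.
# Compound diffusion would instead show D(t) decaying with time.
print(D[-1])
```

The same estimator applied separately to displacements along and across the mean field gives the parallel and perpendicular coefficients.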
Crock, J.G.; Smith, D.B.; Yager, T.J.B.; Berry, C.J.; Adams, M.G.
2008-01-01
Since late 1993, the Metro Wastewater Reclamation District of Denver (Metro District), a large wastewater treatment plant in Denver, Colorado, has applied Grade I, Class B biosolids to about 52,000 acres of nonirrigated farmland and rangeland near Deer Trail, Colorado (U.S.A.). In cooperation with the Metro District in 1993, the U.S. Geological Survey (USGS) began monitoring ground water at part of this site. In 1999, the USGS began a more comprehensive monitoring study of the entire site to address stakeholder concerns about the potential chemical effects of biosolids applications to water, soil, and vegetation. This more comprehensive monitoring program recently has been extended through 2010. Monitoring components of the more comprehensive study include biosolids collected at the wastewater treatment plant, soil, crops, dust, alluvial and bedrock ground water, and streambed sediment. Streams at the site are dry most of the year, so samples of streambed sediment deposited after rain were used to indicate surface-water effects. This report will present only analytical results for the biosolids samples collected at the Metro District wastewater treatment plant in Denver and analyzed during 2007. We have presented earlier a compilation of analytical results for the biosolids samples collected and analyzed for 1999 through 2006. More information about the other monitoring components is presented elsewhere in the literature. Priority parameters for biosolids identified by the stakeholders and also regulated by Colorado when used as an agricultural soil amendment include the total concentrations of nine trace elements (arsenic, cadmium, copper, lead, mercury, molybdenum, nickel, selenium, and zinc), plutonium isotopes, and gross alpha and beta activity. Nitrogen and chromium also were priority parameters for ground water and sediment components. In general, the objective of each component of the study was to determine whether concentrations of priority parameters (1
Crock, J.G.; Smith, D.B.; Yager, T.J.B.; Brown, Z.A.; Adams, M.G.
2008-01-01
Since late 1993, Metro Wastewater Reclamation District of Denver (Metro District), a large wastewater treatment plant in Denver, Colorado, has applied Grade I, Class B biosolids to about 52,000 acres of non-irrigated farmland and rangeland near Deer Trail, Colorado. In cooperation with the Metro District in 1993, the U.S. Geological Survey (USGS) began monitoring ground water at part of this site (Yager and Arnold, 2003). In 1999, the USGS began a more comprehensive monitoring study of the entire site to address stakeholder concerns about the potential chemical effects of biosolids applications. This more comprehensive monitoring program has recently been extended through 2010. Monitoring components of the more comprehensive study include biosolids collected at the wastewater treatment plant, soil, crops, dust, alluvial and bedrock ground water, and stream bed sediment. Streams at the site are dry most of the year, so samples of stream bed sediment deposited after rain were used to indicate surface-water effects. This report will present only analytical results for the biosolids samples collected at the Metro District wastewater treatment plant in Denver and analyzed during 1999 through 2006. More information about the other monitoring components is presented elsewhere in the literature (e.g., Yager and others, 2004a, 2004b, 2004c, 2004d). Priority parameters for biosolids identified by the stakeholders and also regulated by Colorado when used as an agricultural soil amendment include the total concentrations of nine trace elements (arsenic, cadmium, copper, lead, mercury, molybdenum, nickel, selenium, and zinc), plutonium isotopes, and gross alpha and beta activity. Nitrogen and chromium also were priority parameters for ground water and sediment components. In general, the objective of each component of the study was to determine whether concentrations of priority parameters (1) were higher than regulatory limits, (2) were increasing with time, or (3) were
Madsen, Berit L. E-mail: ronblm@vmmc.org; Hsi, R. Alex; Pham, Huong T.; Fowler, Jack F.; Esagui, Laura C.; Corman, John
2007-03-15
Purpose: To evaluate the feasibility and toxicity of stereotactic hypofractionated accurate radiotherapy (SHARP) for localized prostate cancer. Methods and Materials: A Phase I/II trial of SHARP was performed for localized prostate cancer using 33.5 Gy in 5 fractions, calculated to be biologically equivalent to 78 Gy in 2-Gy fractions (α/β ratio of 1.5 Gy). Noncoplanar conformal fields and daily stereotactic localization of implanted fiducials were used for treatment. Genitourinary (GU) and gastrointestinal (GI) toxicity were evaluated by American Urologic Association (AUA) score and Common Toxicity Criteria (CTC). Prostate-specific antigen (PSA) values and self-reported sexual function were recorded at specified follow-up intervals. Results: The study includes 40 patients. The median follow-up is 41 months (range, 21-60 months). Acute Grade 1-2 toxicity was 48.5% (GU) and 39% (GI); there was 1 acute Grade 3 GU toxicity. Late Grade 1-2 toxicity was 45% (GU) and 37% (GI). No late Grade 3 or higher toxicity was reported. Twenty-six patients reported potency before therapy; 6 (23%) have developed impotence. Median time to PSA nadir was 18 months, with the majority of nadirs less than 1.0 ng/mL. The actuarial 48-month biochemical freedom from relapse is 70% by the American Society for Therapeutic Radiology and Oncology definition and 90% by the alternative nadir + 2 ng/mL failure definition. Conclusions: SHARP for localized prostate cancer is feasible with minimal acute or late toxicity. Dose escalation should be possible.
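The stated fractionation equivalence can be checked with the standard linear-quadratic conversion to equivalent dose in 2-Gy fractions, EQD2 = D(d + α/β)/(2 + α/β), where d is the dose per fraction. A quick sketch using the α/β = 1.5 Gy value quoted in the abstract:

```python
def eqd2(total_dose, n_fractions, alpha_beta):
    """Equivalent dose in 2-Gy fractions from the linear-quadratic model:
    EQD2 = D * (d + alpha/beta) / (2 + alpha/beta), d = dose per fraction."""
    d = total_dose / n_fractions
    return total_dose * (d + alpha_beta) / (2.0 + alpha_beta)

print(round(eqd2(33.5, 5, 1.5), 1))  # -> 78.5, consistent with the quoted 78 Gy
```

With 33.5 Gy delivered in 5 fractions (6.7 Gy per fraction), the formula gives about 78.5 Gy, matching the 78 Gy quoted above.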
Plumlee, Geoffrey S.; Casadevall, Thomas J.; Wibowo, Handoko T.; Rosenbauer, Robert J.; Johnson, Craig A.; Breit, George N.; Lowers, Heather; Wolf, Ruth E.; Hageman, Philip L.; Goldstein, Harland L.; Anthony, Michael W.; Berry, Cyrus J.; Fey, David L.; Meeker, Gregory P.; Morman, Suzette A.
2008-01-01
On May 29, 2006, mud and gases began erupting unexpectedly from a vent 150 meters away from a hydrocarbon exploration well near Sidoarjo, East Java, Indonesia. The eruption, called the LUSI (Lumpur 'mud'-Sidoarjo) mud volcano, has continued since then at rates as high as 160,000 m3 per day. At the request of the United States Department of State, the U.S. Geological Survey (USGS) has been providing technical assistance to the Indonesian Government on the geological and geochemical aspects of the mud eruption. This report presents initial characterization results of a sample of the mud collected on September 22, 2007, as well as interpretive findings based on the analytical results. The focus is on characteristics of the mud sample (including the solid and water components of the mud) that may be of potential environmental or human health concern. Characteristics that provide insights into the possible origins of the mud and its contained solids and waters have also been evaluated.
NASA Astrophysics Data System (ADS)
Sotnikov, V.; Kim, T.; Lundberg, J.; Paraschiv, I.; Mehlhorn, T. A.
2014-10-01
Interchange or flute-type density irregularities in magnetized plasma are associated with the Rayleigh-Taylor type instability. In particular, we are interested in the generation of low-frequency plasma density irregularities in the form of flute-type vortex density structures and in the interaction of high-frequency electromagnetic waves used for surveillance and communication with such structures. These types of density irregularities play an important role in refraction and scattering of high-frequency electromagnetic signals propagating in the Earth's ionosphere, in high energy density physics (HEDP), and in many other applications. We will present PIC simulation results of EM scattering on vortex-type density structures using the LSP code and compare them with analytical results. Two cases will be analyzed. In the first case, electromagnetic wave scattering will take place in the ionospheric plasma. In the second case, laser probing in a high-beta Z-pinch plasma will be presented. This work was supported by the Air Force Research Laboratory, the Air Force Office of Scientific Research, the Naval Research Laboratory, and NNSA/DOE Grant No. DE-FC52-06NA27616 at the University of Nevada at Reno.
Catastrophic incidents can generate a large number of samples with analytically diverse types including forensic, clinical, environmental, food, and others. Environmental samples include water, wastewater, soil, air, urban building and infrastructure materials, and surface resid...
Hachisu, Yushi; Hashimoto, Ruiko; Kishida, Kazunori; Yokoyama, Eiji
2013-12-01
Variable number of tandem repeats (VNTR) analysis is one of the methods used in molecular epidemiological studies of Mycobacterium tuberculosis. VNTR analysis is a PCR-based method that provides rapid, highly reproducible results and higher strain discrimination power than the restriction fragment length polymorphism (RFLP) analysis widely used in molecular epidemiological studies of Mycobacterium tuberculosis. The genetic lineage compositions of Mycobacterium tuberculosis clinical isolates differ among the regions from which they are isolated, and allelic diversity at each locus also differs among the genetic lineages of Mycobacterium tuberculosis. Therefore, no single combination of VNTR loci provides high discrimination capacity for analysis in every region. The Japan Anti-Tuberculosis Association (JATA) reported a standard combination of VNTR loci for analysis in Japan, JATA12 (15), and a combination with hypervariable (HV) loci added to JATA12 (15), which has very high discrimination capacity, was also reported. From these reports, data sharing between institutions and construction of a nationwide database are expected to progress. With databases of VNTR profiles in place, VNTR analysis has become an effective tool to trace the route of tuberculosis infection, and it also helps in decision-making during the treatment course. However, in order to utilize the results of VNTR analysis effectively, it is important that each related organization cooperate closely, and analysis should be applied within a system in which accurate control and protection of private information are ensured.
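The 'discrimination capacity' of a set of VNTR loci is conventionally quantified with the Hunter-Gaston discriminatory index: the probability that two isolates drawn at random carry different typing profiles. A small sketch (the isolate profiles below are hypothetical):

```python
from collections import Counter

def hunter_gaston_di(profiles):
    """Hunter-Gaston discriminatory index for a typing scheme:
    D = 1 - sum n_j (n_j - 1) / (N (N - 1)), where n_j is the number of
    isolates sharing profile j and N the total number of isolates."""
    counts = Counter(profiles).values()
    n = sum(counts)
    return 1.0 - sum(c * (c - 1) for c in counts) / (n * (n - 1))

# Hypothetical copy-number profiles at three VNTR loci for six isolates:
isolates = [(2, 5, 3), (2, 5, 3), (4, 5, 3), (2, 6, 1), (4, 5, 3), (3, 3, 3)]
print(round(hunter_gaston_di(isolates), 3))  # -> 0.867
```

A value near 1 indicates that the chosen loci separate almost all isolate pairs, which is the property compared across candidate locus combinations.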
NASA Technical Reports Server (NTRS)
Bittker, D. A.
1980-01-01
The influence of ground-based gas turbine combustor operating conditions and fuel-bound nitrogen (FBN) found in coal-derived liquid fuels on the formation of nitrogen oxides and carbon monoxide is investigated. Analytical predictions of NOx and CO concentrations are obtained for a two-stage, adiabatic, perfectly-stirred reactor operating on a propane-air mixture, with primary equivalence ratios from 0.5 to 1.7, secondary equivalence ratios of 0.5 or 0.7, primary stage residence times from 12 to 20 msec, secondary stage residence times of 1, 2 and 3 msec and fuel nitrogen contents of 0.5, 1.0 and 2.0 wt %. Minimum nitrogen oxide but maximum carbon monoxide formation is obtained at primary zone equivalence ratios between 1.4 and 1.5, with percentage conversion of FBN to NOx decreasing with increased fuel nitrogen content. Additional secondary dilution is observed to reduce final pollutant concentrations, with NOx concentration independent of secondary residence time and CO decreasing with secondary residence time; primary zone residence time is not observed to affect final NOx and CO concentrations significantly. Finally, comparison of computed results with experimental values shows a good semiquantitative agreement.
NASA Astrophysics Data System (ADS)
Parniak, Michał; Wasilewski, Wojciech
2015-02-01
We develop a model to calculate the nonlinear polarization in nondegenerate four-wave mixing in the diamond configuration, which includes the effects of hyperfine structure and Doppler broadening. We verify the model against experiment with the 5²S₁/₂, 5²P₃/₂, 5²D₃/₂, and 5²P₁/₂ levels of rubidium-85. Treating the multilevel atomic system as a combination of many four-level systems, we are able to express the nonlinear susceptibility of a thermal ensemble in the low-intensity regime in terms of Voigt-type profiles and obtain excellent conformity of theory and experiment within this complex system. The agreement is also satisfactory at high intensity, and the analytical model correctly predicts the positions and shapes of resonances. Our results elucidate the physics of coherent interaction of light with atoms involving higher excited levels in vapors at room temperature, which is used in an increasing range of applications.
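The Voigt-type profiles mentioned arise as the convolution of a Lorentzian (homogeneous width) with a Doppler Gaussian, and are conveniently evaluated through the Faddeeva function w(z). A generic sketch, not tied to the paper's specific susceptibility expressions:

```python
import numpy as np
from scipy.special import wofz

def voigt_profile(delta, gamma_l, sigma_g):
    """Area-normalized Voigt profile at detuning delta: a Lorentzian of HWHM
    gamma_l convolved with a Gaussian of standard deviation sigma_g,
    evaluated via the Faddeeva function w(z) = exp(-z^2) erfc(-i z)."""
    z = (delta + 1j * gamma_l) / (sigma_g * np.sqrt(2.0))
    return np.real(wofz(z)) / (sigma_g * np.sqrt(2.0 * np.pi))

# Sanity check: in the pure-Doppler limit the peak approaches the Gaussian peak.
print(voigt_profile(0.0, 1e-9, 1.0))  # ~ 1/sqrt(2*pi) ≈ 0.399
```

In a multilevel treatment like the one described, each four-level pathway contributes such a profile with its own detuning and widths, and the total response is their weighted sum.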
Thorn, Conde R.; Heywood, Charles E.
2001-01-01
The City of Albuquerque, New Mexico, is interested in gaining a better understanding, both quantitative and qualitative, of the aquifer system in and around Albuquerque. Currently (2000), the City of Albuquerque and surrounding municipalities are completely dependent on ground-water reserves for their municipal water supply. This report presents the results of a long-term aquifer test conducted near the Rio Grande in Albuquerque. The long-term aquifer test was conducted during the winter of 1994-95. The City of Albuquerque Griegos 1 water production well was pumped continuously for 54 days at an average pumping rate of 2,331 gallons per minute. During the 54-day pumping and a 30-day recovery period, water levels were recorded in a monitoring network that consisted of 3 production wells and 19 piezometers located at nine sites. These wells and piezometers were screened in river alluvium and (or) the upper and middle parts of the Santa Fe Group aquifer system. In addition to the measurement of water levels, aquifer-system compaction was monitored during the aquifer test by an extensometer. Well-bore video and flowmeter surveys were conducted in the Griegos 1 water production well at the end of the recovery period to identify the location of primary water-producing zones along the screened interval. Analytical results from the aquifer test presented in this report are based on the methods used to analyze a leaky confined aquifer system and were performed using the computer software package AQTESOLV. Estimated transmissivities for the Griegos 1 and 4 water production wells ranged from 10,570 to 24,810 feet squared per day; the storage coefficient for the Griegos 4 well was 0.0025. A transmissivity of 13,540 feet squared per day and a storage coefficient of 0.0011 were estimated from the data collected from a piezometer completed in the production interval of the Griegos 1 well.
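For a confined aquifer without leakage, drawdown around a pumped well follows the Theis solution s = Q W(u)/(4πT), with W the exponential-integral well function and u = r²S/(4Tt); packages such as AQTESOLV fit leaky-aquifer generalizations (e.g., Hantush-Jacob) of the same idea to observed water levels. A sketch using the transmissivity and storage coefficient reported above (the 100-ft observation radius is an illustrative assumption, not from the report):

```python
import numpy as np
from scipy.special import exp1

def theis_drawdown(Q, T, S, r, t):
    """Theis drawdown s = Q * W(u) / (4 pi T), with u = r^2 S / (4 T t) and
    well function W(u) = E1(u). Any self-consistent unit system works."""
    u = r**2 * S / (4.0 * T * t)
    return Q / (4.0 * np.pi * T) * exp1(u)

# Values loosely based on the reported estimates (the 100-ft radius is assumed):
Q = 2331 * 0.133681 * 1440        # 2,331 gal/min converted to ft^3/day
T, S = 13540.0, 0.0011            # transmissivity (ft^2/day), storage coefficient
s = theis_drawdown(Q, T, S, r=100.0, t=54.0)
print(s)                          # predicted drawdown (ft) after 54 days of pumping
```

In practice the inverse problem is solved: T and S are adjusted until curves like this match the recorded water levels.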
NASA Astrophysics Data System (ADS)
Mazoyer, Johan; Pueyo, Laurent; Norman, Colin; N'Diaye, Mamadou; van der Marel, Roeland P.; Soummer, Rémi
2016-03-01
The new frontier in the quest for the highest contrast levels in the focal plane of a coronagraph is now the correction of the large diffraction artifacts introduced at the science camera by apertures of increasing complexity. Indeed, the future generation of space- and ground-based coronagraphic instruments will be mounted on on-axis and/or segmented telescopes; the design of coronagraphic instruments for such observatories is currently a domain undergoing rapid progress. One approach consists of using two sequential deformable mirrors (DMs) to correct for aberrations introduced by secondary mirror structures and segmentation of the primary mirror. The coronagraph for the WFIRST-AFTA mission will be the first of such instruments in space with a two-DM wavefront control system. Regardless of the control algorithm for these multiple DMs, they will have to rely on quick and accurate simulation of the propagation effects introduced by the out-of-pupil surface. In the first part of this paper, we present the analytical description of the different approximations to simulate these propagation effects. In Appendix A, we prove analytically that in the special case of surfaces inducing a converging beam, the Fresnel method yields high fidelity for simulations of these effects. We provide numerical simulations showing this effect. In the second part, we use these tools in the framework of the active compensation of aperture discontinuities (ACAD) technique applied to pupil geometries similar to WFIRST-AFTA. We present these simulations in the context of the optical layout of the high-contrast imager for complex aperture telescopes, which will test ACAD on an optical bench. The results of this analysis show that using the ACAD method, an apodized pupil Lyot coronagraph, and the performance of our current DMs, we are able to obtain, in numerical simulations, a dark hole with a WFIRST-AFTA-like aperture. Our numerical simulation shows that we can obtain contrast better than 2×10⁻⁹ in
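The Fresnel approximation discussed here is commonly evaluated as a transfer function applied in the Fourier domain, which is fast and unitary (energy conserving). A generic sketch of that propagator, not the authors' implementation (the square aperture and sampling values are arbitrary):

```python
import numpy as np

def fresnel_propagate(field, wavelength, z, dx):
    """Fresnel propagation of a sampled 2-D complex field over distance z via
    the transfer function exp(-i pi lambda z f^2) applied in the Fourier
    domain; dx is the sample spacing of the input grid."""
    n = field.shape[0]
    f = np.fft.fftfreq(n, d=dx)
    fx, fy = np.meshgrid(f, f)
    H = np.exp(-1j * np.pi * wavelength * z * (fx**2 + fy**2))
    return np.fft.ifft2(np.fft.fft2(field) * H)

# A square aperture (arbitrary sampling); |H| = 1, so energy is conserved.
aperture = np.zeros((256, 256))
aperture[96:160, 96:160] = 1.0
out = fresnel_propagate(aperture, 633e-9, 0.1, 1e-4)
print(np.allclose(np.sum(np.abs(out)**2), np.sum(aperture**2)))  # -> True
```

Because the transfer function has unit modulus, this form is well suited to the repeated forward/backward propagations a DM control loop requires.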
Grassa, Fausto; Capasso, Giorgio; Oliveri, Ygor; Sollami, Aldo; Carreira, Paula; Rosario Carvalho, M; Marques, Jose M; Nunes, Joao C
2010-06-01
A continuous-flow GC/IRMS technique has been developed to analyse δ¹⁵N values for molecular nitrogen in gas samples. This method provides reliable results with accuracy better than 0.15‰ and reproducibility (1σ) within ±0.1‰ for volumes of N₂ between 1.35 µL (about 56 nmol) and 48.9 µL (about 2 µmol). The method was tested on magmatic and hydrothermal gases as well as on natural gas samples collected from various sites. Since the analysis of nitrogen isotope composition may be prone to atmospheric contamination, mainly in samples with low N₂ concentration, we set the instrument to determine also N₂ and ³⁶Ar contents in a single run. In fact, based on the simultaneously determined N₂/³⁶Ar ratios and assuming that the ³⁶Ar content in crustal and mantle-derived fluids is negligible with respect to the ³⁶Ar concentration in the atmosphere, the degree of atmospheric contamination can be accurately evaluated for each sample. Therefore, the measured δ¹⁵N values can be properly corrected for air contamination.
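The contamination correction amounts to a two-component mixing calculation: with all ³⁶Ar assumed atmospheric, the air-derived fraction of the sample N₂ is the air N₂/³⁶Ar ratio divided by the measured sample ratio, and since air defines δ¹⁵N = 0, the corrected value follows from mass balance. A sketch (the air N₂/³⁶Ar ≈ 2.5×10⁴ used as the default is a rounded assumption, not a value from the paper):

```python
def correct_d15n(d15n_measured, n2_ar36_sample, n2_ar36_air=2.5e4):
    """Correct a measured d15N (per mil) for air contamination, assuming all
    36Ar is atmospheric. The air-derived fraction of the sample N2 is
    f_air = (N2/36Ar)_air / (N2/36Ar)_sample, and since air has d15N = 0,
    mass balance gives d15N_measured = (1 - f_air) * d15N_true."""
    f_air = n2_ar36_air / n2_ar36_sample
    return d15n_measured / (1.0 - f_air)

# A sample whose N2/36Ar is 4x the air value carries 25% atmospheric N2:
print(correct_d15n(2.0, 1.0e5))  # -> 2.666..., a +0.67 per mil correction
```

The correction blows up as the sample ratio approaches the air ratio, which is why low-N₂, air-rich samples are the difficult case noted in the abstract.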
Accurate momentum transfer cross section for the attractive Yukawa potential
Khrapak, S. A.
2014-04-15
An accurate expression for the momentum transfer cross section for the attractive Yukawa potential is proposed. This simple analytic expression agrees with numerical results to within ±2% in the regime relevant for ion-particle collisions in complex (dusty) plasmas.
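The numerical reference values for such a cross section can be generated directly from classical orbits: compute the deflection angle χ(b) over a grid of impact parameters and evaluate σ_MT = 2π∫(1−cos χ) b db. A sketch in dimensionless units (screening length = 1, m = v₀ = 1); the coupling values are illustrative, the quadrature is deliberately crude, and the direct integration is only safe away from the orbiting regime:

```python
import numpy as np
from scipy.integrate import solve_ivp

def deflection_angle(b, beta, v0=1.0, r_start=30.0):
    """Scattering angle chi(b) in the attractive Yukawa potential
    U(r) = -beta exp(-r)/r (lengths in screening radii, m = 1), obtained by
    integrating the planar equations of motion from far upstream until the
    particle has left the interaction region again."""
    def rhs(t, y):
        x, yy, vx, vy = y
        r = np.hypot(x, yy)
        f = -beta * np.exp(-r) * (1.0 / r + 1.0 / r**2)  # radial force (< 0: inward)
        return [vx, vy, f * x / r, f * yy / r]

    leave = lambda t, y: np.hypot(y[0], y[1]) - 1.2 * r_start
    leave.terminal, leave.direction = True, 1.0
    sol = solve_ivp(rhs, [0.0, 20.0 * r_start / v0], [-r_start, b, v0, 0.0],
                    events=leave, rtol=1e-8, atol=1e-10, max_step=1.0)
    return np.arctan2(sol.y[3, -1], sol.y[2, -1])

def sigma_mt(beta, b_max=8.0, n=50):
    """sigma_MT = 2 pi * int (1 - cos chi) b db via the trapezoid rule."""
    bs = np.linspace(0.2, b_max, n)
    vals = np.array([(1.0 - np.cos(deflection_angle(b, beta))) * b for b in bs])
    db = bs[1] - bs[0]
    return 2.0 * np.pi * db * (vals.sum() - 0.5 * (vals[0] + vals[-1]))

s = sigma_mt(0.3)   # illustrative coupling; units of squared screening length
print(s)
```

Benchmarks of this kind are what a proposed analytic fit is compared against; a stronger attraction should give a larger momentum-transfer cross section.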
Hibbard, Judith H; Greaves, Felix; Dudley, R Adams
2015-01-01
Background In the context of the Affordable Care Act, there is extensive emphasis on making provider quality transparent and publicly available. Online public reports of quality exist, but little is known about how visitors find reports or about their purpose in visiting. Objective To address this gap, we gathered website analytics data from a national group of online public reports of hospital or physician quality and surveyed real-time visitors to those websites. Methods Websites were recruited from a national group of online public reports of hospital or physician quality. Analytics data were gathered from each website: number of unique visitors, method of arrival for each unique visitor, and search terms resulting in visits. Depending on the website, a survey invitation was launched for unique visitors on landing pages or on pages with quality information. Survey topics included type of respondent (eg, consumer, health care professional), purpose of visit, areas of interest, website experience, and demographics. Results There were 116,657 unique visitors to the 18 participating websites (1440 unique visitors/month per website), with most unique visitors arriving through search (63.95%, 74,606/116,657). Websites with a higher percent of traffic from search engines garnered more unique visitors (P=.001). The most common search terms were for individual hospitals (23.25%, 27,122/74,606) and website names (19.43%, 22,672/74,606); medical condition terms were uncommon (0.81%, 605/74,606). Survey view rate was 42.48% (49,560/116,657 invited) resulting in 1755 respondents (participation rate=3.6%). There were substantial proportions of consumer (48.43%, 850/1755) and health care professional respondents (31.39%, 551/1755). Across websites, proportions of consumer (21%-71%) and health care professional respondents (16%-48%) varied. Consumers were frequently interested in using the information to choose providers or assess the quality of their provider (52.7%, 225
Aguirre-Urreta, Miguel I; Ellis, Michael E; Sun, Wenying
2012-03-01
This research investigates the performance of a proportion-based approach to meta-analytic moderator estimation through a series of Monte Carlo simulations. This approach is most useful when the moderating potential of a categorical variable has not been recognized in primary research and thus heterogeneous groups have been pooled together as a single sample. Alternative scenarios representing different distributions of group proportions are examined along with varying numbers of studies, subjects per study, and correlation combinations. Our results suggest that the approach is largely unbiased in its estimation of the magnitude of between-group differences and performs well with regard to statistical power and Type I error. In particular, the average percentage bias of the estimated correlation for the reference group is positive and largely negligible, in the 0.5% to 1.8% range; the average percentage bias of the difference between correlations is also minimal, in the −0.1% to 1.2% range. Further analysis also suggests both biases decrease as the magnitude of the underlying difference increases, as the number of subjects in each simulated primary study increases, and as the number of simulated studies in each meta-analysis increases. The bias was most evident when the number of subjects and the number of studies were the smallest (80 and 36, respectively). A sensitivity analysis that examines its performance in scenarios down to 12 studies and 40 primary subjects is also included. This research is the first that thoroughly examines the adequacy of the proportion-based approach. Copyright © 2012 John Wiley & Sons, Ltd.
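The essence of the proportion-based approach can be reproduced in a few lines: simulate primary studies that pool two subgroups with different true correlations, then meta-regress the observed pooled correlations on the subgroup proportion; the intercept recovers the reference-group correlation and intercept + slope the other group's. A simplified sketch of such a Monte Carlo (all numbers illustrative, not the paper's design):

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_study(n, p, r_a, r_b):
    """One primary study that pools two subgroups (proportions p and 1 - p)
    with true correlations r_a and r_b; returns the observed pooled r."""
    def draw(n_g, r):
        cov = np.array([[1.0, r], [r, 1.0]])
        return rng.multivariate_normal([0.0, 0.0], cov, size=n_g)
    n_a = int(round(p * n))
    data = np.vstack([draw(n_a, r_a), draw(n - n_a, r_b)])
    return np.corrcoef(data.T)[0, 1]

# Meta-regression of observed correlations on the subgroup proportion:
# the intercept estimates r_b (reference group), intercept + slope estimates r_a.
props = rng.uniform(0.2, 0.8, size=200)                 # 200 simulated studies
obs = np.array([simulate_study(400, p, 0.5, 0.2) for p in props])
slope, intercept = np.polyfit(props, obs, 1)
print(intercept, intercept + slope)                     # close to 0.2 and 0.5
```

With standardized subgroups the pooled correlation is approximately p·r_a + (1−p)·r_b, which is what makes the linear meta-regression recover both group correlations.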
NASA Astrophysics Data System (ADS)
Wöhling, Thomas; Barkle, Greg; Stenger, Roland; Moorhead, Brian; Wall, Aaron; Clague, Juliet
2014-05-01
Automated equilibrium tension plate lysimeters (AETLs) are arguably the most accurate method to measure unsaturated water and contaminant fluxes below the root zone at scales of up to 1 m². The AETL technique utilizes a porous sintered stainless-steel plate to provide a comparatively large sampling area with a continuously controlled vacuum that is in "equilibrium" with the surrounding vadose zone matric pressure, to ensure measured fluxes represent those under undisturbed conditions. This novel lysimeter technique was used at an intensive research site for investigations of contaminant pathways from the land surface to the groundwater on a sheep and beef farm under pastoral land use in the Tutaeuaua subcatchment, New Zealand. The Spydia research facility was constructed in 2005 and was fully operational between 2006 and 2011. Extending from a central access caisson, 15 separately controlled AETLs with 0.2 m² surface area were installed at five depths between 0.4 m and 5.1 m into the undisturbed volcanic vadose zone materials. The unique setup of the facility ensured minimum interference of the experimental equipment and external factors with the measurements. Over a period of more than five years, a comprehensive data set was collected at each of the 15 AETL locations, comprising time series of soil water flux, pressure head, volumetric water content, and soil temperature. The soil water was regularly analysed for EC, pH, dissolved carbon, various nitrogen compounds (including nitrate, ammonia, and organic N), phosphorus, bromide, chloride, sulphate, silica, and a range of other major ions, as well as for various metals. Climate data were measured directly at the site (rainfall) and at a climate station 500 m away. The shallow groundwater was sampled at three different depths directly from the Spydia caisson and at various observation wells surrounding the facility. Two tracer experiments were conducted at the site in 2009 and 2010. In the 2009
NASA Technical Reports Server (NTRS)
Hiel, C. C.; Adamson, M. J.
1986-01-01
The epoxy resins currently in use can slowly absorb moisture from the atmosphere over a long period. This reduces those mechanical properties of composites which depend strongly on the matrix, such as compressive strength and buckling instabilities. The effect becomes greater at elevated temperatures. The paper will discuss new phenomena which occur under simultaneous temperature and moisture variations. An analytical model will also be discussed and documented.
NASA Technical Reports Server (NTRS)
Flannelly, W. G.; Fabunmi, J. A.; Nagy, E. J.
1981-01-01
Analytical methods for combining flight acceleration and strain data with shake test mobility data to predict the effects of structural changes on flight vibrations and strains are presented. This integration of structural dynamic analysis with flight performance is referred to as analytical testing. The objective of this methodology is to analytically estimate the results of flight testing contemplated structural changes with minimum flying and change trials. The category of changes to the aircraft includes mass, stiffness, absorbers, isolators, and active suppressors. Examples of applying the analytical testing methodology using flight test and shake test data measured on an AH-1G helicopter are included. The techniques and procedures for vibration testing and modal analysis are also described.
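The core of such analytical testing is predicting the modified frequency response from measured mobilities without re-flying the aircraft: if H(ω) is the baseline FRF matrix and ΔD(ω) the change in dynamic stiffness (e.g., −ω²ΔM for an added mass), the modified FRF is H_mod = (I + H ΔD)⁻¹ H. A small sketch verifying this identity on a hypothetical undamped 2-DOF system (all numbers illustrative):

```python
import numpy as np

def modified_frf(H, dD):
    """Predicted FRF of a modified structure from the baseline FRF matrix H
    and the change in dynamic stiffness dD (for an added mass dM at circular
    frequency w, dD = -w**2 * dM): H_mod = (I + H @ dD)^-1 @ H."""
    return np.linalg.solve(np.eye(H.shape[0]) + H @ dD, H)

# Hypothetical undamped 2-DOF system; verify against direct reassembly.
M = np.diag([1.0, 2.0])
K = np.array([[300.0, -100.0], [-100.0, 100.0]])
dM = np.diag([0.3, 0.0])                  # contemplated added mass at DOF 1
w = 5.0
H = np.linalg.inv(K - w**2 * M)           # baseline receptance at w
H_pred = modified_frf(H, -w**2 * dM)
H_direct = np.linalg.inv(K - w**2 * (M + dM))
print(np.allclose(H_pred, H_direct))      # -> True
```

The useful property is that only the measured H and the contemplated change ΔD are needed, so stiffness changes, absorbers, or isolators can be screened on paper before any change trial.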
NASA Astrophysics Data System (ADS)
Fedyushin, B. T.
1992-01-01
The concepts developed earlier are used to propose a simple analytic model describing the spatial-temporal distribution of a mechanical load (pressure, impulse) resulting from interaction of laser radiation with a planar barrier surrounded by air. The correctness of the model is supported by a comparison with experimental results.
Accurate Finite Difference Algorithms
NASA Technical Reports Server (NTRS)
Goodrich, John W.
1996-01-01
Two families of finite difference algorithms for computational aeroacoustics are presented and compared. All of the algorithms are single-step explicit methods, they have the same order of accuracy in both space and time, with examples up to eleventh order, and they have multidimensional extensions. One of the algorithm families has spectral-like high resolution. Propagation with high-order and high-resolution algorithms can produce accurate results after O(10⁶) periods of propagation with eight grid points per wavelength.
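The quoted resolution requirement (accurate results at eight grid points per wavelength over ~10⁶ periods) is far beyond what low-order schemes deliver, which is easy to demonstrate: a second-order single-step explicit scheme such as Lax-Wendroff already loses most of a wave's amplitude after a single trip around the domain at that resolution. A sketch using Lax-Wendroff as a low-order stand-in (the high-order schemes of the paper are designed to shrink exactly this error):

```python
import numpy as np

def lax_wendroff(u, c, n_steps):
    """Single-step explicit Lax-Wendroff update for u_t + a u_x = 0 on a
    periodic grid (CFL number c = a dt / dx); second order in space and
    time, so both dissipative and dispersive errors accumulate quickly."""
    for _ in range(n_steps):
        up, um = np.roll(u, -1), np.roll(u, 1)
        u = u - 0.5 * c * (up - um) + 0.5 * c**2 * (up - 2.0 * u + um)
    return u

n, c = 64, 0.5
x = np.arange(n) / n
u0 = np.sin(2.0 * np.pi * 8.0 * x)          # eight grid points per wavelength
u = lax_wendroff(u0.copy(), c, int(n / c))  # one trip around the periodic domain
err = np.max(np.abs(u - u0))
print(err)  # O(1) error after only 8 wavelengths of travel at this resolution
```

Since the exact solution returns to the initial condition after one trip around the periodic domain, `err` isolates the accumulated amplitude and phase error of the scheme.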
NASA Astrophysics Data System (ADS)
Pietsch, W.; Petit, A.; Briand, A.
1998-05-01
We report in this paper the first determination of the isotope ratio (238/235) in a uranium sample by optical emission spectroscopy on a laser-produced plasma at reduced pressure (2.67 Pa). Investigations aimed at developing a new application of laser ablation for analytical isotope control of uranium are presented. Optimized experimental conditions allow one to obtain atomic emission spectra characterized by the narrowest possible line widths, of the order of 0.01 nm, for the investigated transition UII 424.437 nm. We show the possibility of achieving a relative precision in the range of 5% for an enrichment of 3.5% ²³⁵U. The influence of different relevant plasma parameters on the measured line width is discussed.
Quo vadis, analytical chemistry?
Valcárcel, Miguel
2016-01-01
This paper presents an open, personal, fresh approach to the future of Analytical Chemistry in the context of the deep changes Science and Technology are anticipated to experience. Its main aim is to challenge young analytical chemists because the future of our scientific discipline is in their hands. A description of not completely accurate overall conceptions of our discipline, both past and present, that should be avoided is followed by a flexible, integral definition of Analytical Chemistry and its cornerstones (viz., aims and objectives, quality trade-offs, the third basic analytical reference, the information hierarchy, social responsibility, independent research, transfer of knowledge and technology, interfaces to other scientific-technical disciplines, and well-oriented education). Obsolete paradigms, as well as the more accurate general and specific ones that can be expected to provide the framework for our discipline in the coming years, are described. Finally, the three possible responses of analytical chemists to the proposed changes in our discipline are discussed. PMID:26631024
NASA Astrophysics Data System (ADS)
Khan, Sheema; Morton, Thomas L.; Ronis, David
1987-05-01
The static correlations in highly charged colloidal and micellar suspensions, with and without added electrolyte, are examined using the hypernetted-chain approximation (HNC) for the macro-ion-macro-ion correlations and the mean-spherical approximation for the other correlations. By taking the point-ion limit for the counter-ions, an analytic solution for the counter-ion part of the problem can be obtained; this maps the macro-ion part of the problem onto a one-component problem where the macro-ions interact via a screened Coulomb potential with the Gouy-Chapman form for the screening length and an effective charge that depends on the macro-ion-macro-ion pair correlations. Numerical solutions of the effective one-component equation in the HNC approximation are presented, and in particular, the effects of macro-ion charge, nonadditive core diameters, and added electrolyte are examined. As we show, there can be a strong renormalization of the effective macro-ion charge and reentrant melting in colloidal crystals.
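The effective one-component interaction described above has a screened Coulomb (Yukawa) form. A minimal sketch of that pair energy, in units of kT, follows; the effective charge, screening constant, and Bjerrum length used in the checks are illustrative values, not the HNC solution or the paper's parameters.

```python
import math

def yukawa_energy(r, z_eff, kappa, bjerrum_length):
    """Screened-Coulomb (Yukawa) pair energy in units of kT for point-like
    macro-ions: u(r)/kT = Z_eff^2 * l_B * exp(-kappa * r) / r,
    where 1/kappa is the Debye screening length."""
    return z_eff**2 * bjerrum_length * math.exp(-kappa * r) / r
```

Increasing kappa (e.g., by adding electrolyte) shortens the screening length and weakens the repulsion at fixed separation, which is the qualitative trend the HNC calculations quantify.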
2013-01-01
Background Despite the introduction of free antiretroviral therapy (ART), the use of voluntary counselling and testing (VCT) services remains persistently low in many African countries. This study investigates how prior experience of HIV and VCT, and knowledge about HIV and ART, influence VCT use in rural Tanzania. Methods In 2006-7, VCT was offered to study participants during the fifth survey round of an HIV community cohort study that includes HIV testing for research purposes without results disclosure, and a questionnaire covering knowledge, attitudes and practices around HIV infection and HIV services. Categorical variables were created for HIV knowledge and ART knowledge, with "good" HIV and ART knowledge defined as correctly answering at least 4/6 and 5/7 questions about HIV and ART respectively. Experience of HIV was defined as knowing people living with HIV or people who had died from AIDS. Logistic regression methods were used to assess how HIV and ART knowledge, and prior experiences of HIV and VCT, were associated with VCT uptake, with adjustment for HIV status and socio-demographic confounders. Results 2,695/3,886 (69%) men and 2,708/5,575 (49%) women had "good" HIV knowledge, while 613/3,886 (16%) men and 585/5,575 (10%) women had "good" ART knowledge. Misconceptions about HIV transmission were common, including through kissing (55% of women, 43% of men), or mosquito bites (42% of women, 34% of men). 19% of men and 16% of women used VCT during the survey. After controlling for HIV status and socio-demographic factors, the odds of VCT use were lower among those with poor HIV knowledge (aOR = 0.5; p = 0.01 for men and aOR = 0.6; p < 0.01 for women) and poor ART knowledge (aOR = 0.8; p = 0.06 for men, aOR = 0.8; p < 0.01 for women), and higher among those with HIV experience (aOR = 1.3 for men and aOR = 1.6 for women, p < 0.01) and positive prior VCT experience (aOR = 2.0 for all men and aOR = 2
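The study reports adjusted odds ratios from logistic regression; as a simpler illustration of what an odds ratio measures, here is the crude odds ratio from a 2x2 table. The counts below are hypothetical, not the study's data.

```python
def odds_ratio(a, b, c, d):
    """Crude odds ratio from a 2x2 table:
       a = exposed cases,   b = exposed non-cases,
       c = unexposed cases, d = unexposed non-cases.
    An OR > 1 means the exposure is associated with higher odds of the outcome."""
    return (a / b) / (c / d)
```

For example, if 20 of 100 people with HIV experience used VCT versus 10 of 100 without, the crude OR is (20/80)/(10/90) = 2.25; the adjusted ORs in the study additionally control for HIV status and socio-demographic confounders.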
Visual analytics of brain networks.
Li, Kaiming; Guo, Lei; Faraco, Carlos; Zhu, Dajiang; Chen, Hanbo; Yuan, Yixuan; Lv, Jinglei; Deng, Fan; Jiang, Xi; Zhang, Tuo; Hu, Xintao; Zhang, Degang; Miller, L Stephen; Liu, Tianming
2012-05-15
Identification of regions of interest (ROIs) is a fundamental issue in brain network construction and analysis. Recent studies demonstrate that multimodal neuroimaging approaches and joint analysis strategies are crucial for accurate, reliable and individualized identification of brain ROIs. In this paper, we present a novel approach of visual analytics and its open-source software for ROI definition and brain network construction. By combining neuroscience knowledge and computational intelligence capabilities, visual analytics can generate accurate, reliable and individualized ROIs for brain networks via joint modeling of multimodal neuroimaging data and an intuitive and real-time visual analytics interface. Furthermore, it can be used as a functional ROI optimization and prediction solution when fMRI data is unavailable or inadequate. We have applied this approach to an operation span working memory fMRI/DTI dataset, a schizophrenia DTI/resting state fMRI (R-fMRI) dataset, and a mild cognitive impairment DTI/R-fMRI dataset, in order to demonstrate the effectiveness of visual analytics. Our experimental results are encouraging.
NASA Astrophysics Data System (ADS)
Dattani, Nikesh S.; Welsh, Staszek
2014-06-01
Being the simplest neutral open-shell molecule, BeH is a very important benchmark system for ab initio calculations. However, the most accurate empirical potentials and Born-Oppenheimer breakdown (BOB) functions for this system are nearly a decade old and are not reliable in the long-range region. In particular, the uncertainties in their dissociation energies were about ±200 cm-1, and even the number of vibrational levels predicted was at the time very questionable, meaning that no good benchmark existed for ab initio calculations on neutral open-shell molecules. We build new empirical potentials for BeH, BeD, and BeT that are much more reliable in the long-range region. Being the second-lightest heteronuclear molecule with a stable ground electronic state, BeH is also very important for the study of isotope effects, such as BOB. We extensively study isotope effects in this system, and we show that the empirical BOB functions fitted from the data of any two isotopologues are sufficient to predict crucial properties of the third isotopologue.
Accurate quantum chemical calculations
NASA Technical Reports Server (NTRS)
Bauschlicher, Charles W., Jr.; Langhoff, Stephen R.; Taylor, Peter R.
1989-01-01
An important goal of quantum chemical calculations is to provide an understanding of chemical bonding and molecular electronic structure. A second goal, the prediction of energy differences to chemical accuracy, has been much harder to attain. First, the computational resources required to achieve such accuracy are very large, and second, it is not straightforward to demonstrate that an apparently accurate result, in terms of agreement with experiment, does not result from a cancellation of errors. Recent advances in electronic structure methodology, coupled with the power of vector supercomputers, have made it possible to solve a number of electronic structure problems exactly using the full configuration interaction (FCI) method within a subspace of the complete Hilbert space. These exact results can be used to benchmark approximate techniques that are applicable to a wider range of chemical and physical problems. The methodology of many-electron quantum chemistry is reviewed. Methods are considered in detail for performing FCI calculations. The application of FCI methods to several three-electron problems in molecular physics is discussed. A number of benchmark applications of FCI wave functions are described. Atomic basis sets and the development of improved methods for handling very large basis sets are discussed; these are then applied to a number of chemical and spectroscopic problems, to transition metals, and to problems involving potential energy surfaces. Although the experiences described give considerable grounds for optimism about the general ability to perform accurate calculations, there are several problems that have proved less tractable, at least with current computer resources, and these and possible solutions are discussed.
Dorofeev, S.B.; Efimenko, A.A.; Kochurko, A.S.; Sidorov, V.P.
1996-03-01
A review of hydrogen combustion research at the Kurchatov Institute is presented. A criterion for the possibility of spontaneous detonation onset and its application to severe accidents in a nuclear power plant are discussed. Theoretical and experimental results on spontaneous detonation onset conditions are summarized. Three series of large-scale turbulent jet initiation experiments have been carried out in the KOPER facility (50 m³ and 150 m³). A series of jet initiation experiments in initially confined H2-air mixtures has been carried out in the KOPER facility (20-46 m³). Turbulent deflagration/DDT experiments were carried out in a large-scale confined volume of 480 m³ in the RUT facility. Results showed that the characteristic volume size should be used for conservative estimates in accident analysis. A series of experiments on detonation transition from one mixture to another of lower sensitivity has been carried out in the DRIVER facility. These experiments were aimed at estimating the minimum size of a detonation kernel. The results are in good agreement with the 7 cell width criterion. Results of combined hydrogen injection/ignition experiments are presented. These experiments were aimed at investigating the possible consequences of deliberate ignition under dynamic conditions. Analysis of the experimental data showed the applicability of the 7 cell width criterion to dynamic conditions. The results on the scaling of spontaneous detonations are discussed in connection with the strategy of hydrogen mitigation in severe accidents.
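The 7 cell width criterion mentioned above can be read as a simple conservative size check: detonation onset is deemed possible when the characteristic size of the mixture volume is at least seven detonation-cell widths. A schematic sketch; the exact geometric definition of the characteristic size (and the units) is an assumption here, not taken from the abstract.

```python
def detonation_onset_possible(characteristic_size, cell_width):
    """Conservative 7-cell-width check (schematic): onset deemed possible
    when the characteristic volume size L satisfies L >= 7 * lambda,
    where lambda is the detonation cell width (same units for both)."""
    return characteristic_size >= 7.0 * cell_width
```

With a cell width of 0.1 m, a 0.8 m characteristic size passes the check while 0.5 m does not, which is the style of conservative screening the abstract describes for accident analysis.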
Modern analytical chemistry in the contemporary world
NASA Astrophysics Data System (ADS)
Šíma, Jan
2016-02-01
Students not familiar with chemistry tend to misinterpret analytical chemistry as some kind of sorcery in which analytical chemists, working as modern wizards, handle magical black boxes able to provide fascinating results. However, this view is evidently improper and misleading. Therefore, the position of modern analytical chemistry among the sciences and in the contemporary world is discussed. Its interdisciplinary character and the necessity of collaboration between analytical chemists and other experts in order to effectively solve the actual problems of human society and the environment are emphasized. The importance of analytical method validation in order to obtain accurate and precise results is highlighted. Invalid results are not only useless; they can often even be fatal (e.g., in clinical laboratories). The curriculum of analytical chemistry at schools and universities is discussed; it should be much broader than traditional equilibrium chemistry coupled with a simple description of individual analytical methods. Above all, the schooling of analytical chemistry should closely connect theory and practice.
NASA Technical Reports Server (NTRS)
Westphalen, H.; Spjeldvik, W. N.
1982-01-01
A theoretical method by which the energy dependence of the radial diffusion coefficient may be deduced from spectral observations of the particle population at the inner edge of the earth's radiation belts is presented. This region has previously been analyzed with numerical techniques; in this report an analytical treatment that illustrates characteristic limiting cases in the L shell range where the time scale of Coulomb losses is substantially shorter than that of radial diffusion (L approximately 1-2) is given. It is demonstrated both analytically and numerically that the particle spectra there are shaped by the energy dependence of the radial diffusion coefficient regardless of the spectral shapes of the particle populations diffusing inward from the outer radiation zone, so that from observed spectra the energy dependence of the diffusion coefficient can be determined. To ensure realistic simulations, inner zone data obtained from experiments on the DIAL, AZUR, and ESRO 2 spacecraft have been used as boundary conditions. Excellent agreement between analytic and numerical results is reported.
222-S Analytical services final results for Tank 241-U-101, grab samples 1U-96-1 through 1U-96-4
Miller, G.L., Westinghouse Hanford
1996-08-23
This document is the final, format IV, laboratory report for characterization of tank 241-U-101 (U-101) grab samples from risers 1 and 7. It transmits additional analytical data for specific gravity (Sp.G.) and all raw analytical data which were not provided in the 45-day report. The 45-day report is attached to this final report as Part II. Secondary analyses were not performed on any of the U-101 samples because none of the primary analyte limits which trigger the performance of secondary analyses were exceeded. Grab samples were taken on May 29, 1996 and May 30, 1996 from risers 1 and 7, respectively, and were received at the 222-S Laboratory on the same days that they were collected. Analyses were performed in accordance with the Tank Sampling and Analysis Plan (TSAP) for this tank and the Safety Screening Data Quality Objective (DQO). The samples were analyzed for differential scanning calorimetry (DSC), thermogravimetric analysis (TGA), total alpha activity (AT), visual appearance, bulk density, and specific gravity. A sample data summary table includes sample analytical data accompanied by quality control data (for example, duplicate, spike, blank, and standard results, as well as detection limits and counting errors). The table includes data for DSC, TGA, AT, bulk density, volume percent solids, and Sp.G. analyses. Data regarding the visual appearance of samples, volume percent solids, and density of the solids are provided in tabular form in the 45-day report (attached as Part II). The table in the 45-day report also associates the original customer sample number with corresponding laboratory sample numbers. The TSAP specified notification limits for only DSC and total alpha. Notification limits were not exceeded for DSC or total alpha analyses for any of the samples; consequently, immediate notifications were not necessary and were not made.
Liberatore, S.; Jaouen, S.; Tabakhoff, E.; Canaud, B.
2009-04-15
Magnetic Rayleigh-Taylor instability is addressed in compressible hydrostatic media. A full model is presented and compared to numerical results from a linear perturbation code. Perfect agreement between the two approaches is obtained over a wide range of parameters. Compressibility effects are examined, and substantial deviations from classical Chandrasekhar growth rates are obtained and confirmed by both the model and the numerical calculations.
Straight, William H; Karr, Jonathan D; Cox, Julia E; Barrick, Reese E
2004-01-01
Although previous work has demonstrated that biological phosphates ('biophosphates') record significant changes in δ18O associated with variations in local climate and seasonality, the repeatability of these analyses between laboratories has not previously been tested. We serially sampled enamel on four Cretaceous dinosaur teeth for phosphate δ18O analysis at up to three different facilities. With the exception of one set of unprocessed enamel samples, the material supplied to each laboratory was chemically processed to silver phosphate. Each laboratory analyzed sample sets by pyrolysis (thermochemical decomposition) in a ThermoFinnigan TC/EA attached to a ThermoFinnigan Delta Plus mass spectrometer. Significant interference between phosphate samples and the NIST reference material 8557 barium sulfate (NBS 127) distorts some of the results. Samples analyzed immediately following NBS 127 may be isotopically depleted by 6 per thousand and reduced in instrument peak amplitude response by 80%. Substantial interference can persist over the subsequent 20 silver phosphate samples and can influence the instrument peak amplitude response from some organic standards. Experiments using reagent-grade silver phosphate link these effects to divalent cations, particularly Ca2+ and Ba2+, which linger in the reactor and scavenge oxygen evolved from pyrolysis of subsequent samples. Unprocessed enamel includes 40 wt% calcium and self-scavenges oxygen, disrupting the isotopic measurements for the first half of a set and depleting subsequent organic standards by up to 9 per thousand. In sets without NBS 127 or calcium, such interference did not occur, and an interlaboratory comparison of results from enamel shows reproducible, significantly correlated peaked δ18O patterns with a 2-3 per thousand dynamic range, consistent with previous results from contemporaneous teeth. Whereas both unprocessed enamel and the NBS 127 barium sulfate should be applied to biological phosphate
BELL, K.E.
2000-05-11
This document is the format IV, final report for the tank 241-SY-102 (SY-102) grab samples taken in January 2000 to address waste compatibility concerns. Chemical, radiochemical, and physical analyses on the tank SY-102 samples were performed as directed in the Compatibility Grab Sampling and Analysis Plan for Fiscal Year 2000 (Sasaki 1999). No notification limits were exceeded. Preliminary data on samples 2SY-99-5, -6, and -7 were reported in ''Format II Report on Tank 241-SY-102 Waste Compatibility Grab Samples Taken in January 2000'' (Lockrem 2000). The data presented here represent the final results.
Ivey, Wade
2013-12-17
Oak Ridge Associated Universities (ORAU), under the Oak Ridge Institute for Science and Education (ORISE) contract, received five swipe samples on December 10, 2013 from the Northern Biomedical Research Facility in Norton Shores, Michigan. The samples were analyzed for tritium and carbon-14 according to the NRC Form 303 supplied with the samples. The sample identification numbers are presented in Table 1 and the tritium and carbon-14 results are provided in Table 2. The pertinent procedure references are included with the data tables.
Wilcox, Ralph
1995-01-01
The six sites investigated include silver recovery units; a buried caustic drain line; a neutralization pit; an evaporation/infiltration pond; the Manzano fire training area; and a waste oil underground storage tank. Environmental samples of soil, pond sediment, soil gas, and water and gas in floor drains were collected and analyzed. Field quality-control samples were also collected and analyzed in association with the environmental samples. The six sites were investigated because past or current activities could have resulted in contamination of soil, pond sediment, or water and sediment in drains.
NASA Astrophysics Data System (ADS)
Chen, R.; Pagonis, V.; Lawless, J. L.
2006-02-01
Nonmonotonic dose dependence of optically stimulated luminescence (OSL) has been reported in a number of materials including Al2O3:C, which is one of the main dosimetric materials. In a recent work, the nonmonotonic effect has been shown to result, under certain circumstances, from competition either during excitation or during readout between trapping states or recombination centers. In the present work, we report on a study of the effect in a more concrete framework of two trapping states and two kinds of recombination centers involved in the luminescence processes in Al2O3:C. Using sets of trapping parameters, based on available experimental data, previously utilized to explain the nonmonotonic dose dependence of thermoluminescence including nonzero initial occupancies of recombination centers (F+ centers), the OSL along with the occupancies of the relevant traps and centers are simulated numerically. The connection between these different resulting quantities is discussed, giving better insight into the ranges of increase and decrease of the integral OSL as a function of dose, as well as the constant equilibrium value occurring at high doses.
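The qualitative mechanism above, competition between a dosimetric trap and a non-radiative competitor, can be reproduced by a toy model: the dosimetric trap fills quickly with dose while the competitor fills slowly, so the integral OSL first rises and then falls. All parameters below are hypothetical and are not the Al2O3:C trapping parameters used in the paper.

```python
import math

def toy_integral_osl(dose, n1_sat=1.0, d1=1.0, n2_sat=10.0, d2=20.0, m_rad=1.0):
    """Toy competition model (hypothetical parameters):
    n1 = quickly saturating dosimetric trap, n2 = slowly filling
    non-radiative competitor. Integral OSL is taken proportional to n1
    times the radiative branching ratio m_rad / (m_rad + n2)."""
    n1 = n1_sat * (1.0 - math.exp(-dose / d1))   # saturating-exponential fill
    n2 = n2_sat * (1.0 - math.exp(-dose / d2))   # competitor keeps growing
    return n1 * m_rad / (m_rad + n2)
```

With these illustrative parameters the signal peaks near a dose of ~1.6 (in units of d1) and then decreases toward a lower plateau at high dose, mirroring the nonmonotonic behavior the full simulations capture quantitatively.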
Lewis, D.W. (Dept. of Geology); McConchie, D.M. (Centre for Coastal Management)
1994-01-01
Both a self-instruction manual and a "cookbook" guide to field and laboratory analytical procedures, this book provides an essential reference for non-specialists. With a minimum of mathematics and virtually no theory, it introduces practitioners to easy, inexpensive options for sample collection and preparation, data acquisition, analytical protocols, result interpretation, and verification techniques. This step-by-step guide considers the advantages and limitations of different procedures, discusses safety and troubleshooting, and explains support skills like mapping, photography and report writing. It also offers managers, off-site engineers and others using sediments data a quick course in commissioning studies and making the most of the reports. This manual will answer the growing needs of practitioners in the field, either alone or accompanied by Practical Sedimentology, which surveys the science of sedimentology and provides a basic overview of the principles behind the applications.
NNLOPS accurate associated HW production
NASA Astrophysics Data System (ADS)
Astill, William; Bizon, Wojciech; Re, Emanuele; Zanderighi, Giulia
2016-06-01
We present a next-to-next-to-leading order accurate description of associated HW production consistently matched to a parton shower. The method is based on reweighting events obtained with the HW plus one jet NLO accurate calculation implemented in POWHEG, extended with the MiNLO procedure, to reproduce NNLO accurate Born distributions. Since the Born kinematics is more complex than in the cases treated before, we use a parametrization of the Collins-Soper angles to reduce the number of variables required for the reweighting. We present phenomenological results at 13 TeV, with cuts suggested by the Higgs Cross Section Working Group.
Romeo, G.; Frulla, G.
1995-09-01
The coefficient of thermal expansion (CTE) as determined by Classical Laminate Theory is very sensitive to some orthotropic elastic constants and to the laminate layup. In particular, the non-Hookean behavior of a unidirectional lamina in the fiber direction has to be taken into account to predict the CTE exactly. To verify the theoretical analysis, a new test facility has been designed to carefully measure the CTE of advanced composite materials having a quasi-zero CTE. Measurement error in the CTE was minimized by a careful choice of displacement sensors and tight control of their thermal stability. The results show that a variation of +/- 1 deg in the lamina orientation can change the CTE of the quasi-isotropic laminate by up to -/+ 50.5% of the theoretical value. A variation of +/- 5% in the physical and mechanical properties can change the CTE by up to -/+ 48%. 14 refs.
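The strong sensitivity of CTE to lamina orientation follows from the standard strain-tensor rotation of the principal lamina CTEs. A sketch for a single off-axis lamina; this is not the full Classical Laminate Theory stack-up calculation of the paper, and the property values used in the check are illustrative (typical carbon/epoxy magnitudes), not measured data.

```python
import math

def lamina_cte_offaxis(alpha1, alpha2, theta_deg):
    """In-plane CTEs of a unidirectional lamina rotated by theta degrees,
    via the standard strain-tensor transformation:
    alpha1 = CTE along the fibers, alpha2 = transverse CTE."""
    c = math.cos(math.radians(theta_deg))
    s = math.sin(math.radians(theta_deg))
    alpha_x = alpha1 * c * c + alpha2 * s * s
    alpha_y = alpha1 * s * s + alpha2 * c * c
    alpha_xy = 2.0 * (alpha1 - alpha2) * s * c  # shear-expansion coupling off-axis
    return alpha_x, alpha_y, alpha_xy
```

Because alpha1 (slightly negative along carbon fibers) and alpha2 (large and positive transverse) differ by orders of magnitude, even a 1 degree misorientation shifts alpha_x appreciably, which is consistent with the large percentage changes reported above.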
Brühl, Annette Beatrix; Delsignore, Aba; Komossa, Katja; Weidt, Steffi
2014-11-01
Social anxiety disorder (SAD) is one of the most frequent anxiety disorders. The landmark meta-analysis of functional neuroimaging studies by Etkin and Wager (2007) revealed primarily the typical fear circuit as overactive in SAD. Since then, new methodological approaches such as functional connectivity analysis and more standardized structural analyses of grey and white matter have been developed. We provide a comprehensive update and a meta-analysis of neuroimaging studies in SAD since 2007 and present a new model of the neurobiology of SAD. We confirmed the hyperactivation of the fear circuit (amygdala, insula, anterior cingulate and prefrontal cortex) in SAD. In addition, task-related functional studies revealed hyperactivation of medial parietal and occipital regions (posterior cingulate, precuneus, cuneus) in SAD and reduced connectivity between parietal and limbic and executive network regions. Based on the results of this meta-analysis and review, we present an updated model of SAD adopting a network-based perspective. The disconnection of the medial parietal hub in SAD extends current frameworks for future research in anxiety disorders.
Godin, O A; Chapman, D M
2001-10-01
In the upper tens of meters of ocean bottom, unconsolidated marine sediments consisting of clay, silt, or fine sand with high porosity are "almost incompressible" in the sense that the shear wave velocity is much smaller than the compressional wave velocity. The shear velocity has very large gradients close to the ocean floor leading to strong coupling of compressional and shear waves in such "soft" sediments. The weak compressibility opens an avenue for developing a theory of elastic wave propagation in continuously stratified soft sediments that fully accounts for the coupling. Elastic waves in soft sediments consist of "fast" waves propagating with velocities close to the compressional velocity and "slow" waves propagating with velocities on the order of the shear velocity. For the slow waves, the theory predicts the existence of surface waves at the ocean-sediment boundary. In the important special case of the power-law depth-dependence of shear rigidity, phase and group velocities of the interface waves are shown to scale as a certain power of frequency. An explicit, exact solution was obtained for the surface waves in sediments characterized by constant density and a linear increase of shear rigidity with depth, that is, for the case of shear speed proportional to the square root of the depth below the sediment-water interface. Asymptotic and perturbation techniques were used to extend the result to more general environments. Theoretical dispersion relations agreed well with numerical simulations and available experimental data and, as demonstrated in a companion paper [D. M. F. Chapman and O. A. Godin, J. Acoust. Soc. Am 110, 1908 (2001)] led to a simple and robust inversion of interface wave travel times for shear velocity profiles in the sediment.
STEEN, F.H.
1999-02-23
This document is the final report for tank 241-S-304 grab samples. Four grab samples were collected from riser 4 on July 30, 1998. Analyses were performed in accordance with the Compatibility Grab Sampling and Analysis Plan (TSAP) (Sasaki, 1998) and the Data Quality Objectives for Tank Farms Waste Compatibility Program (DQO). The analytical results are presented in the data summary report (Table 1). None of the subsamples submitted for differential scanning calorimetry (DSC), total organic carbon (TOC), and plutonium-239 (Pu-239) analyses exceeded the notification limits as stated in the TSAP (Sasaki, 1998).
Tank 241-AP-106, grab samples, 6AP-96-1 through 6AP-96-3 analytical results for the final report
Esch, R.A., Westinghouse Hanford
1996-12-11
This document is the final report for tank 241-AP-106 grab samples. This document presents the analytical results for three samples (6AP-96-1, 6AP-96-2 and 6AP-96-3) taken from riser 1 at 150 degrees of tank 241-AP-106 on September 12, 1996. Analyses were performed in accordance with the Compatibility Grab Sampling and Analysis Plan (TSAP) (Sasaki, 1996) and the Data Quality Objectives for Tank Farms Waste Compatibility Program (DQO) (Fowler, 1995).
FULLER, R.K.
1999-02-24
This document is the final report for tank 241-AN-101 grab samples. Three grab samples, 1AN-98-1, 1AN-98-2 and 1AN-98-3, were taken from riser 16 of tank 241-AN-101 on April 8, 1998 and received by the 222-S Laboratory on April 9, 1998. Analyses were performed in accordance with the ''Compatibility Grab Sampling and Analysis Plan'' (TSAP) and the ''Data Quality Objectives for Tank Farms Waste Compatibility Program'' (DQO). The analytical results are presented in the data summary report. No notification limits were exceeded.
Tucker, Robert E.; McHugh, J.B.; Ficklin, W.H.; Motooka, J.M.; Preston, D.J.; Miller, W.R.
1983-01-01
Two hundred and forty-nine water samples were collected from small first-order streams, springs, and mine drainages from the Mount Belknap caldera and Deer Trail Mountain-Alunite Ridge areas and vicinity in southwestern Utah. The samples were collected during three hydrogeochemical surveys in 1978, 1979, and 1981. The water samples were analyzed for Ca, Mg, Na, K, Li, SiO2, alkalinity (HCO3), SO4, Cl, F, Zn, Cu, Mo, As, U, and pH. Temperature and specific conductance were also measured. Analytical results are presented in this report.
FULLER, R.K.
1999-02-24
This document is the final report for catch tank 241-ER-311 grab samples. Three grab samples, ER311-98-1, ER311-98-2 and ER311-98-3, were taken from the east riser of tank 241-ER-311 on August 4, 1998 and received by the 222-S Laboratory on August 4, 1998. Analyses were performed in accordance with the Compatibility Grab Sampling and Analysis Plan (TSAP) (Sasaki, 1998) and the Data Quality Objectives for Tank Farms Waste Compatibility Program (DQO) (Mulkey and Miller, 1997). The analytical results are presented in the data summary report (Table 1). No notification limits were exceeded.
Bonanno, Lisa M.; Kwong, Tai C.; DeLouise, Lisa A.
2010-01-01
In this work we evaluate for the first time the performance of a label-free porous silicon (PSi) immunosensor assay in a blind clinical study designed to screen authentic patient urine specimens for a broad range of opiates. The PSi opiate immunosensor achieved 96% concordance with liquid chromatography-mass spectrometry/tandem mass spectrometry (LC-MS/MS) results on samples that underwent standard opiate testing (n=50). In addition, successful detection of a commonly abused opiate, oxycodone, resulted in 100% qualitative agreement between the PSi opiate sensor and LC-MS/MS. In contrast, a commercial broad opiate immunoassay technique (CEDIA®) achieved 65% qualitative concordance with LC-MS/MS. Evaluation of important performance attributes including precision, accuracy, and recovery was completed on blank urine specimens spiked with test analytes. Variability of morphine detection as a model opiate target was < 9% both within-run and between-day at and above the cutoff limit of 300 ng/ml. This study validates the analytical screening capability of label-free PSi opiate immunosensors in authentic patient samples and is the first semi-quantitative demonstration of the technology's successful clinical use. These results motivate future development of PSi technology to reduce complexity and cost of diagnostic testing particularly in a point-of-care setting. PMID:21062030
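The concordance figures quoted above are simple agreement fractions over paired qualitative (positive/negative) results. A minimal sketch; the paired labels used in the check are a hypothetical arrangement consistent with 48/50 agreement (96%), not the study's actual call-by-call data.

```python
def qualitative_concordance(method_a, method_b):
    """Fraction of specimens on which two qualitative (positive/negative)
    screening methods agree, given paired result lists."""
    if len(method_a) != len(method_b):
        raise ValueError("paired results of equal length required")
    agree = sum(1 for a, b in zip(method_a, method_b) if a == b)
    return agree / len(method_a)
```

The same function applied to the CEDIA comparison would simply use that assay's calls in place of the PSi sensor's.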
FULLER, R.K.
1999-02-23
This document is the final report for tank 241-AP-106 grab samples. Three grab samples, 6AP-98-1, 6AP-98-2 and 6AP-98-3, were taken from riser 1 of tank 241-AP-106 on May 28, 1998 and received by the 222-S Laboratory the same day. Analyses were performed in accordance with the ''Compatibility Grab Sampling and Analysis Plan'' (TSAP) (Sasaki, 1998) and the ''Data Quality Objectives for Tank Farms Waste Compatibility Program'' (DQO). The analytical results are presented in the data summary report. No notification limits were exceeded. The request for sample analysis received for AP-106 indicated that the samples were polychlorinated biphenyl (PCB) suspects. The results of this analysis indicated that no PCBs were present at the Toxic Substances Control Act (TSCA) regulated limit of 50 ppm. The results and raw data for the PCB analysis are included in this document.
NASA Astrophysics Data System (ADS)
Katsaros, Th.; Liritzis, I.; Laskaris, N.
According to Theophrastus of Eressos (4th c. B.C.), Melian earth was a very bright white pigment used by the painters of his era. Pliny the Elder described it as the white pigment of the famous painter Apelles (c. 352-308 BC). Earlier investigations on the island of Melos (Aegean Sea) did not identify the specific extraction site of this material because its chemical character was unknown. Our new analytical data from excavations (Turkey, Italy, England) attest to the presence of a TiO2 phase in the white ground decoration of ceramics, a result reinforced by a meticulous re-exploration of the island of Melos from this new point of view. At the western side of the island, kaolin containing 1% TiO2 by weight was found in the locality of Kontaros. Analytical results from the white decoration layer of white-ground lekythoi give the same level of TiO2. We propose that the famous white pigment known in antiquity as Melian earth could be a form of natural titania occurring as an impurity in the kaolin.
NASA Technical Reports Server (NTRS)
Uslenghi, Piergiorgio L. E.; Laxpati, Sharad R.; Kawalko, Stephen F.
1993-01-01
The third phase of the development of computer codes for scattering by coated bodies, part of an ongoing effort in the Electromagnetics Laboratory of the Electrical Engineering and Computer Science Department at the University of Illinois at Chicago, is described. The work reported discusses the analytical and numerical results for the scattering of an obliquely incident plane wave by impedance bodies of revolution with phi variation of the surface impedance. An integral equation formulation of the problem is considered. All three types of integral equations, electric field, magnetic field, and combined field, are considered. These equations are solved numerically via the method of moments with parametric elements. Both TE and TM polarizations of the incident plane wave are considered. The surface impedance is allowed to vary both along the profile of the scatterer and in the phi direction. The computer code developed for this purpose determines the electric surface current as well as the bistatic radar cross section. The results obtained with this code were validated by comparison with available results for specific scatterers such as the perfectly conducting sphere. Results for the cone-sphere and cone-cylinder-sphere for the case of an axially incident plane wave were validated by comparison with those obtained in the first phase of this project. Results for body-of-revolution scatterers with an abrupt change in the surface impedance along both the profile of the scatterer and the phi direction are presented.
NASA Astrophysics Data System (ADS)
Avazmohammadi, Reza; Ponte Castañeda, Pedro
2014-04-01
This paper presents a homogenization-based constitutive model for the mechanical behaviour of particle-reinforced elastomers with random microstructures subjected to finite deformations. The model is based on a recently improved version of the tangent second-order (TSO) method (Avazmohammadi and Ponte Castañeda, J. Elasticity 112 (2013) p.139-183) for two-phase, hyperelastic composites and is able to directly account for the shape, orientation, and concentration of the particles. After a brief summary of the TSO homogenization method, we describe its application to composites consisting of an incompressible rubber reinforced by aligned, spheroidal, rigid particles, undergoing generally non-aligned, three-dimensional loadings. While the results are valid for finite particle concentrations, in the dilute limit they can be viewed as providing a generalization of Eshelby's results in linear elasticity. In particular, we provide analytical estimates for the overall response and microstructure evolution of the particle-reinforced composites with generalized neo-Hookean matrix phases under non-aligned loadings. For the special case of aligned pure shear and axisymmetric shear loadings, we give closed-form expressions for the effective stored-energy function of the composites with neo-Hookean matrix behaviour. Moreover, we investigate the possible development of "macroscopic" (shear band-type) instabilities in the homogenized behaviour of the composite at sufficiently large deformations. These instabilities whose wavelengths are much larger than the typical size of the microstructure are detected by making use of the loss of strong ellipticity condition for the effective stored-energy function of the composites. The analytical results presented in this paper will be complemented in Part II (Avazmohammadi and Ponte Castaneda, Phil. Mag. (2014)) of this work by specific applications for several representative microstructures and loading configurations.
Boltz, D.R.; Johnson, W.H.; Serkiz, S.M.
1994-10-01
The Quantification of Soil Source Terms and Determination of the Geochemistry Controlling Distribution Coefficients (K_d values) of Contaminants at the F- and H-Area Seepage Basins (FHSB) study was designed to generate site-specific contaminant transport factors for contaminated groundwater downgradient of the Basins. The experimental approach employed in this study was to collect soil and its associated porewater from contaminated areas downgradient of the FHSB. Samples were collected over a wide range of geochemical conditions (e.g., pH, conductivity, and contaminant concentration) and were used to describe the partitioning of contaminants between the aqueous phase and soil surfaces at the site. The partitioning behavior may be used to develop site-specific transport factors. This report summarizes the analytical procedures and results for both soil and porewater samples collected as part of this study and the database management of these data.
Cerebral cortical activity associated with non-experts' most accurate motor performance.
Dyke, Ford; Godwin, Maurice M; Goel, Paras; Rehm, Jared; Rietschel, Jeremy C; Hunt, Carly A; Miller, Matthew W
2014-10-01
This study's specific aim was to determine if non-experts' most accurate motor performance is associated with verbal-analytic- and working memory-related cerebral cortical activity during motor preparation. To assess this, EEG was recorded from non-expert golfers executing putts; EEG spectral power and coherence were calculated for the epoch preceding putt execution; and spectral power and coherence for the five most accurate putts were contrasted with that for the five least accurate. Results revealed marked power in the theta frequency bandwidth at all cerebral cortical regions for the most accurate putts relative to the least accurate, and considerable power in the low-beta frequency bandwidth at the left temporal region for the most accurate compared to the least. As theta power is associated with working memory and low-beta power at the left temporal region with verbal analysis, results suggest non-experts' most accurate motor performance is associated with verbal-analytic- and working memory-related cerebral cortical activity during motor preparation. PMID:25058623
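The band-limited spectral power contrasted in the study above can be sketched as follows. This is a generic periodogram band-power computation (theta taken as 4-7 Hz, low-beta as roughly 13-20 Hz), not the authors' EEG pipeline; real analyses would use Welch averaging, artifact rejection, and coherence estimates, and the function name is illustrative.

```python
import numpy as np

def band_power(signal, fs, f_lo, f_hi):
    """Mean spectral power of `signal` in the [f_lo, f_hi] Hz band,
    computed from a plain rFFT periodogram."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)  # periodogram
    band = (freqs >= f_lo) & (freqs <= f_hi)              # bin mask
    return psd[band].mean()
```

For example, a pure 6 Hz oscillation sampled at 256 Hz concentrates its power in the theta band, so `band_power(sig, 256, 4, 7)` greatly exceeds `band_power(sig, 256, 13, 20)`.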
Accurate monotone cubic interpolation
NASA Technical Reports Server (NTRS)
Huynh, Hung T.
1991-01-01
Monotone piecewise cubic interpolants are simple and effective. They are generally third-order accurate, except near strict local extrema, where accuracy degenerates to second order because of the monotonicity constraint. Algorithms for piecewise cubic interpolants that preserve monotonicity as well as uniform third- and fourth-order accuracy are presented. The gain in accuracy is obtained by relaxing the monotonicity constraint in a geometric framework in which the median function plays a crucial role.
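The slope-limiting idea behind monotone cubic interpolation can be sketched as follows. This is a generic Fritsch-Carlson-style monotone Hermite interpolant (harmonic-mean slope limiting), not Huynh's median-based algorithm, which achieves higher accuracy near extrema; all function names are illustrative.

```python
import numpy as np

def hermite_monotone(x, y):
    """Return an evaluator for a monotonicity-preserving piecewise cubic
    Hermite interpolant through the points (x, y)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    h = np.diff(x)
    d = np.diff(y) / h                      # secant slopes
    m = np.zeros_like(y)
    # Interior node slopes: zero at sign changes (local extrema),
    # harmonic mean of neighboring secants otherwise.
    for i in range(1, len(x) - 1):
        if d[i - 1] * d[i] > 0:
            m[i] = 2.0 / (1.0 / d[i - 1] + 1.0 / d[i])
    m[0], m[-1] = d[0], d[-1]               # one-sided end slopes

    def f(t):
        i = np.clip(np.searchsorted(x, t) - 1, 0, len(h) - 1)
        s = (t - x[i]) / h[i]               # local coordinate in [0, 1]
        h00 = (1 + 2 * s) * (1 - s) ** 2    # cubic Hermite basis
        h10 = s * (1 - s) ** 2
        h01 = s ** 2 * (3 - 2 * s)
        h11 = s ** 2 * (s - 1)
        return h00 * y[i] + h10 * h[i] * m[i] + h01 * y[i + 1] + h11 * h[i] * m[i + 1]
    return f
```

Because the limited slopes lie in the Fritsch-Carlson monotonicity region, the interpolant of monotone data stays monotone, at the cost of the accuracy loss near extrema that Huynh's relaxed constraint recovers.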
Williams, M.; Jantzen, C.; Burket, P.; Crawford, C.; Daniel, G.; Aponte, C.; Johnson, C.
2009-12-28
Testing assessed the ability of the TTT steam reforming process to destroy organics in the Tank 48 simulant and to produce a soluble carbonate waste form. The ESTD was operated at varying feed rates and Denitration and Mineralization Reformer (DMR) temperatures, and at a constant Carbon Reduction Reformer (CRR) temperature of 950 C. The process produced a dissolvable carbonate product suitable for processing downstream. ESTD testing was performed in 2009 at the Hazen facility to demonstrate the long-term operability of an integrated FBSR processing system with carbonate product and carbonate slurry handling capability. The final testing demonstrated the capability of the integrated TTT FBSR process to convert the Tank 48 simulant slurry feed into a greater than 99.9% organic-free and primarily dissolved carbonate FBSR product slurry. This paper discusses the SRNL analytical results of samples analyzed from the 2008 and 2009 THOR® steam reforming ESTD performed with Tank 48H simulant at HRI in Golden, Colorado. The final analytical results are compared to prior analytical results from samples in terms of organic, nitrite, and nitrate destruction.
Nonose, Naoko; Hioki, Akiharu; Chiba, Koichi
2014-01-01
In the present study the effects of the detector dead-time and its uncertainty on the accuracy and uncertainty of isotope dilution mass spectrometry (IDMS) were examined through an interlaboratory study on the analysis of low-alloy steel using an ICP sector-field mass spectrometer. An optimized mixing ratio of the sample and the spike for obtaining highly precise results was also investigated theoretically and experimentally. The detector dead-time used in the interlaboratory study had a negative value. However, it had little effect on the trueness of the analytical result as long as the dead-time correction for the measured isotope ratio was performed properly. As many researchers have pointed out, the detector dead-time shows a clear mass dependence. It is therefore desirable to check the dead-time for every target element by using assay standards or isotopic standards, which leads to an accurate result even when the detector dead-time is negative. On the other hand, the effect of the uncertainty of the detector dead-time is minimized when both the isotope ratios and the ICP-MS signals of the [sample + spike] blend in IDMS are equal to those of the [spike + assay standard] blend in reverse IDMS. From the standpoints of error-magnification theory and the precision of the isotope ratio measurement, an optimized isotope ratio of the sample-spike blend is 1.0 for an element whose two isotopes used for IDMS differ in atomic fraction by a factor of ten or more. For an element with no significant difference between the atomic fractions of the two isotopes, an optimized isotope ratio can be calculated from a formula expressed as a function of the atomic fractions of the sample and the spike as well as the ICP-MS signal.
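The dead-time correction discussed above can be sketched with the standard non-paralyzable detector formula, n_true = n_obs / (1 - n_obs * tau). This is a generic illustration, not the authors' code; the same algebra applies when tau is negative, and the second function shows how correcting both count rates shifts the measured isotope ratio. All names are illustrative.

```python
def dead_time_correct(rate_obs, tau):
    """True count rate (counts/s) from an observed rate, for a
    non-paralyzable detector dead-time tau (s). Works unchanged
    for a negative tau value."""
    return rate_obs / (1.0 - rate_obs * tau)

def ratio_corrected(rate_a, rate_b, tau):
    """Isotope ratio a/b after dead-time correction of both observed
    count rates; shows the bias an uncorrected tau introduces,
    since the larger rate is suppressed more strongly."""
    return dead_time_correct(rate_a, tau) / dead_time_correct(rate_b, tau)
```

For a 20 ns dead-time, a raw ratio of 10 between a 1 MHz and a 100 kHz signal corrects upward, because the more intense isotope loses proportionally more counts.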
McFarland, L V
2015-01-01
Meta-analyses are used to evaluate the pooled effects of a wide variety of investigational agents, but translating the results into clinical practice may be difficult. This mini-review offers a three-step process to enable healthcare providers to decipher pooled meta-analysis estimates into results that are useful for therapeutic decisions. As an example of how meta-analyses should be interpreted, a recent meta-analysis of probiotics for the prevention of paediatric antibiotic-associated diarrhoea (AAD) and the prevention of Clostridium difficile infections (CDI) is used. First, the pooled results of this meta-analysis indicate significant protective efficacy against AAD when the 16 different types of probiotics are combined (pooled relative risk (RR) = 0.43, 95% confidence interval (CI) = 0.33-0.56), and a significant reduction of paediatric CDI (pooled RR = 0.34, 95% CI = 0.16-0.74) when four different types of probiotics are pooled. Secondly, because the efficacy of probiotics is strain-specific, it is necessary to do a sensitivity analysis restricting the meta-analysis to one specific strain. Two strains, Saccharomyces boulardii lyo and Lactobacillus rhamnosus GG, showed significant efficacy for paediatric AAD when pooled (pooled RR for S. boulardii = 0.43, 95% CI = 0.21-0.86; pooled RR for L. rhamnosus GG = 0.44, 95% CI = 0.20-0.95). Thirdly, if studies within probiotic types have different results, it is prudent to examine those studies individually to determine why non-significant differences in efficacy were found. By drilling down through these three analytic layers, physicians can be confident in recommending the correct probiotic strain to their patients.
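The pooling step behind estimates like "RR = 0.43, 95% CI = 0.33-0.56" can be sketched with fixed-effect inverse-variance weighting on the log scale. The event counts below are purely illustrative, not the trial data from the meta-analysis, and the function name is an assumption; real meta-analyses would also assess heterogeneity and possibly use a random-effects model.

```python
import math

def pooled_rr(studies):
    """Fixed-effect inverse-variance pooled relative risk.
    `studies` is a list of (events_trt, n_trt, events_ctl, n_ctl) tuples.
    Returns (pooled RR, (lower 95% CI, upper 95% CI))."""
    num = den = 0.0
    for a, n1, c, n2 in studies:
        log_rr = math.log((a / n1) / (c / n2))
        var = 1 / a - 1 / n1 + 1 / c - 1 / n2   # variance of log RR
        w = 1.0 / var                            # inverse-variance weight
        num += w * log_rr
        den += w
    mean = num / den
    se = math.sqrt(1.0 / den)                    # SE of pooled log RR
    ci = (math.exp(mean - 1.96 * se), math.exp(mean + 1.96 * se))
    return math.exp(mean), ci
```

A sensitivity analysis restricted to one strain is then just a call on the subset of tuples belonging to that strain.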
Guertal, William R.; Stewart, Marie; Barbaro, Jeffrey R.; McHale, Timothy J.
2004-01-01
A joint study by the Dover National Test Site and the U.S. Geological Survey was conducted from June 27 through July 18, 2001 to determine the spatial distribution of the gasoline oxygenate additive methyl tert-butyl ether and selected water-quality constituents in the surficial aquifer underlying the Dover National Test Site at Dover Air Force Base, Delaware. The study was conducted to support a planned enhanced bioremediation demonstration and to assist the Dover National Test Site in identifying possible locations for future methyl tert-butyl ether remediation demonstrations. This report presents the analytical results from ground-water samples collected during the direct-push ground-water sampling study. A direct-push drill rig was used to quickly collect 115 ground-water samples over a large area at varying depths. The ground-water samples and associated quality-control samples were analyzed for volatile organic compounds and methyl tert-butyl ether by the Dover National Test Site analytical laboratory. Volatile organic compounds were above the method reporting limits in 59 of the 115 ground-water samples. The concentrations ranged from below detection limits to maximum values of 12.4 micrograms per liter of cis-1,2-dichloroethene, 1.14 micrograms per liter of trichloroethene, 2.65 micrograms per liter of tetrachloroethene, 1,070 micrograms per liter of methyl tert-butyl ether, 4.36 micrograms per liter of benzene, and 1.8 micrograms per liter of toluene. Vinyl chloride, ethylbenzene, p,m-xylene, and o-xylene were not detected in any of the samples collected during this investigation. Methyl tert-butyl ether was detected in 47 of the 115 ground-water samples. The highest methyl tert-butyl ether concentrations were found in the surficial aquifer from -4.6 to 6.4 feet mean sea level; however, methyl tert-butyl ether was detected as deep as -9.5 feet mean sea level. Increased methane concentrations and decreased dissolved oxygen concentrations were found in
Pereira, Paulo; Westgard, James O; Encarnação, Pedro; Seghatchian, Jerard
2014-10-01
The evaluation of measurement uncertainty is not required by European Union regulation for blood establishments’ laboratory tests. However, it is required for tests accredited to ISO 15189. In addition, the forthcoming ISO 9001 edition requires “risk-based thinking”, with risk described as “the effect of uncertainty on an expected result”. ISO recommends GUM models for the determination of measurement uncertainty, but their application is not intended for ordinal measurements, such as the binary results of screening tests. This article reviews, discusses and proposes concepts intended for the measurement uncertainty of screening test results. The precision model focuses on the cutoff level, allowing the evaluation of the indeterminate interval using analytical sources of variance. This interval is considered in the estimation of the seroconversion window period. The delta-value of patients’ and healthy subjects’ samples allows ranking two tests according to the probability of the two classes of indeterminate results: the chance of false negative results and the chance of false positive results (waste of budget). PMID:25457752
NASA Astrophysics Data System (ADS)
Buchberger, G.; Schoeftner, J.
2013-03-01
In this work a theory for a slender piezoelectric laminated beam taking into account lossy electrodes is developed. For the modeling of the bending behavior of the beam with conductivity, the kinematical assumptions of Bernoulli-Euler and a simplified form of the Telegraph equations are used. Applying d’Alembert’s principle, Gauss’ law of electrostatics and Kirchhoff’s voltage and current rules, the partial differential equations of motion are derived, describing the bending vibrations of the beam and the voltage distribution and current flow along the resistive electrodes. The theory is valid for applications that are used for actuation and for sensing. In the first case the voltage at a certain location on the electrodes is prescribed and the beam is deformed, whereas in the second case the structure is excited by a distributed external load and the voltage distribution is a result of the structural deformation. For a bimorph with constant width and constant material properties the beam is governed by two coupled partial differential equations for the elastic deformation and for the voltage distribution: the first one is an extension of the Bernoulli-Euler equation of an elastic beam, the second one is a diffusion equation for the voltage. The analytical results of the developed theory are validated by means of three-dimensional electromechanically coupled finite element simulations with ANSYS 11.0. Different mechanical and electrical boundary conditions and resistances of the electrodes are considered in the numerical case study. Eigenfrequencies are compared and the frequency responses of the mechanical and electrical quantities show a good agreement between the proposed beam theory and FE results.
ERIC Educational Resources Information Center
Zemsky, Robert; Shaman, Susan; Shapiro, Daniel B.
2001-01-01
Describes the Collegiate Results Instrument (CRI), which measures a range of collegiate outcomes for alumni 6 years after graduation. The CRI was designed to target alumni from institutions across market segments and assess their values, abilities, work skills, occupations, and pursuit of lifelong learning. (EV)
NASA Technical Reports Server (NTRS)
Taylor, William J.; Chato, David J.
1993-01-01
The NASA Lewis Research Center (NASA/LeRC) has been investigating a no-vent fill method for refilling cryogenic storage tanks in low gravity. Analytical modeling based on analyzing the heat transfer of a droplet has successfully represented the process in 0.034 and 0.142 cubic m commercial dewars using liquid nitrogen and hydrogen. Recently a large tank (4.96 cubic m) was tested with hydrogen. This lightweight tank is representative of spacecraft construction. This paper presents efforts to model the large tank test data. The droplet heat transfer model is found to overpredict the tank pressure level when compared to the large tank data. A new model based on equilibrium thermodynamics has been formulated. This new model is compared to the published large scale tank's test results as well as some additional test runs with the same equipment. The results are shown to match the test results within the measurement uncertainty of the test data except for the initial transient wall cooldown where it is conservative (i.e., overpredicts the initial pressure spike found in this time frame).
NASA Technical Reports Server (NTRS)
Taylor, William J.; Chato, David J.
1992-01-01
NASA-Lewis has been investigating a no-vent fill method for refilling cryogenic storage tanks in low gravity. Analytical modeling based on analyzing the heat transfer of a droplet has successfully represented the process in 0.034 and 0.142 cubic m commercial dewars using liquid nitrogen and hydrogen. Recently a large tank (4.96 cubic m) was tested with hydrogen. This lightweight tank is representative of spacecraft construction. This paper presents efforts to model the large tank test data. The droplet heat transfer model is found to overpredict the tank pressure level when compared to the large tank data. A new model based on equilibrium thermodynamics has been formulated. This new model is compared to the published large scale tank's test results as well as some additional test runs with the same equipment. The results are shown to match the test results within the measurement uncertainty of the test data except for the initial transient wall cooldown where it is conservative (i.e., overpredicts the initial pressure spike found in this time frame).
Klima, Miriam; Altenburger, Markus J; Kempf, Jürgen; Auwärter, Volker; Neukamm, Merja A
2016-08-01
In burnt or skeletonized bodies, dental hard tissue is sometimes the only remaining specimen available. Therefore, it could be used as an alternative matrix in post mortem toxicology. Additionally, analysis of dental tissues could provide a unique retrospective window of detection. For forensic interpretation, routes and rates of incorporation of different drugs as well as physicochemical differences between tooth root, tooth crown and carious material have to be taken into account. In a pilot study, one post mortem tooth each from three drug users was analyzed for medicinal and illicit drugs. The pulp was removed in two cases; in one case the tooth was root canal treated. The teeth were separated into root, crown and carious material, and drugs were extracted from the powdered material with methanol under ultrasonication. The extracts were screened for drugs by LC-MS(n) (ToxTyper™) and quantitatively analyzed with LC-ESI-MS/MS in MRM mode. The findings were compared to the analytical results for cardiac blood, femoral blood, urine, stomach content and hair. In dental hard tissues, 11 drugs (amphetamine, MDMA, morphine, codeine, norcodeine, methadone, EDDP, fentanyl, tramadol, diazepam, nordazepam, and promethazine) could be detected, and concentrations ranged from approximately 0.13 pg/mg to 2,400 pg/mg. The concentrations declined in the following order: carious material > root > crown. Only the root canal treated tooth showed higher concentrations in the crown than in the root. In post mortem toxicology, dental hard tissue could be a useful alternative matrix facilitating a more differentiated consideration of drug consumption patterns, as the window of detection seems to overlap those for body fluids and hair. PMID:26930453
Coplen, Tyler B; Qi, Haiping
2012-01-10
Because there are no internationally distributed stable hydrogen and oxygen isotopic reference materials of human hair, the U.S. Geological Survey (USGS) has prepared two such materials, USGS42 and USGS43. These reference materials span values commonly encountered in human hair stable isotope analysis and are isotopically homogeneous at sample sizes larger than 0.2 mg. USGS42 and USGS43 human-hair isotopic reference materials are intended for calibration of δ(2)H and δ(18)O measurements of unknown human hair by quantifying (1) drift with time, (2) mass-dependent isotopic fractionation, and (3) isotope-ratio-scale contraction. While they are intended for measurements of the stable isotopes of hydrogen and oxygen, they also are suitable for measurements of the stable isotopes of carbon, nitrogen, and sulfur in human and mammalian hair. Preliminary isotopic compositions of the non-exchangeable fractions of these materials are USGS42 (Tibetan hair): δ(2)H(VSMOW-SLAP) = -78.5 ± 2.3‰ (n = 62) and δ(18)O(VSMOW-SLAP) = +8.56 ± 0.10‰ (n = 18); USGS43 (Indian hair): δ(2)H(VSMOW-SLAP) = -50.3 ± 2.8‰ (n = 64) and δ(18)O(VSMOW-SLAP) = +14.11 ± 0.10‰ (n = 18). Using the recommended analytical protocols presented herein for δ(2)H(VSMOW-SLAP) and δ(18)O(VSMOW-SLAP) measurements, the least-squares regression of 11 human hair reference materials is δ(2)H(VSMOW-SLAP) = 6.085 δ(18)O(VSMOW-SLAP) - 136.0‰, with an R-square value of 0.95. The δ(2)H difference between the calibrated results of human hair in this investigation and a commonly accepted human-hair relationship is a remarkable 34‰. It is critical that readers pay attention to the δ(2)H(VSMOW-SLAP) and δ(18)O(VSMOW-SLAP) values of isotopic reference materials in publications, and they need to adjust the δ(2)H(VSMOW-SLAP) and δ(18)O(VSMOW-SLAP) measurement results of human hair in previous publications, as needed, to ensure all results are on the same scales.
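The calibration role of a pair of reference materials like these can be sketched as a two-point (stretch-and-shift) normalization of raw instrument delta values to the VSMOW-SLAP scale. The anchor values below are the δ(2)H compositions quoted in the abstract; the raw readings are hypothetical, and the function name is illustrative.

```python
def normalize_delta(delta_raw, ref1_raw, ref1_true, ref2_raw, ref2_true):
    """Two-point normalization of a raw delta value (per mil) onto the
    reporting scale, using two calibrated reference materials that
    bracket the samples (e.g. USGS42 and USGS43 for human hair).
    Corrects both additive drift and scale contraction."""
    slope = (ref2_true - ref1_true) / (ref2_raw - ref1_raw)  # scale stretch
    return ref1_true + slope * (delta_raw - ref1_raw)        # shift + stretch
```

With hypothetical raw readings of -80.0‰ for USGS42 and -52.0‰ for USGS43, an unknown measured at -66.0‰ normalizes to a value between the two anchors; samples measured at the anchor readings return exactly the certified values.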
ERIC Educational Resources Information Center
Kilgo, Cindy A.; Pascarella, Ernest T.
2016-01-01
This study examines the effects of undergraduate students participating in independent research with faculty members on four-year graduation and graduate/professional degree aspirations. We analyzed four-year longitudinal data from the Wabash National Study of Liberal Arts Education using multiple analytic techniques. The findings support the…
ERIC Educational Resources Information Center
Olson, Carol Booth; Kim, James S.; Scarcella, Robin; Kramer, Jason; Pearson, Matthew; van Dyk, David A.; Collins, Penny; Land, Robert E.
2012-01-01
In this study, 72 secondary English teachers from the Santa Ana Unified School District were randomly assigned to participate in the Pathway Project, a cognitive strategies approach to teaching interpretive reading and analytical writing, or to a control condition involving typical district training focusing on teaching content from the textbook.…
Not Available
2006-06-01
In the Analytical Microscopy group, within the National Center for Photovoltaic's Measurements and Characterization Division, we combine two complementary areas of analytical microscopy--electron microscopy and proximal-probe techniques--and use a variety of state-of-the-art imaging and analytical tools. We also design and build custom instrumentation and develop novel techniques that provide unique capabilities for studying materials and devices. In our work, we collaborate with you to solve materials- and device-related R&D problems. This sheet summarizes the uses and features of four major tools: transmission electron microscopy, scanning electron microscopy, the dual-beam focused-ion-beam workstation, and scanning probe microscopy.
NASA Astrophysics Data System (ADS)
Molodenskii, S. M.; Molodenskii, M. S.; Begitova, T. A.
2016-09-01
In the first part of the paper, a new method was developed for solving the inverse problem of coseismic and postseismic deformations in the real (imperfectly elastic, radially and horizontally heterogeneous, self-gravitating) Earth with hydrostatic initial stresses from highly accurate modern satellite data. The method is based on the decomposition of the sought parameters in an orthogonalized basis. A method was suggested for estimating the ambiguity of the solution of the inverse problem for coseismic and postseismic deformations. For obtaining this estimate, the orthogonal complement is constructed to the n-dimensional space spanned by the system of functional derivatives of the residuals in the system of n observed and model data on the coseismic and postseismic displacements at a variety of sites on the ground surface with small variations in the models. Below, we present the results of the numerical modeling of the elastic displacements of the ground surface, based on calculating Green's functions of the real Earth for the plane dislocation surface and different orientations of the displacement vector as described in part I of the paper. The calculations were conducted for the model of a horizontally homogeneous but radially heterogeneous self-gravitating Earth with hydrostatic initial stresses and the mantle rheology described by the Lomnitz logarithmic creep function according to (M. Molodenskii, 2014). We compare our results with the previous numerical calculations (Okada, 1985; 1992) for the simplest model of a perfectly elastic, non-gravitating, homogeneous Earth. It is shown that with source depths starting from the first hundreds of kilometers and with magnitudes of about 8.0 and higher, the discrepancies significantly exceed the errors of the observations and should therefore be taken into account. We present examples of the numerical calculations of the creep function of the crust and upper mantle for the coseismic deformations. We
Accurate Optical Reference Catalogs
NASA Astrophysics Data System (ADS)
Zacharias, N.
2006-08-01
Current and near future all-sky astrometric catalogs on the ICRF are reviewed with the emphasis on reference star data at optical wavelengths for user applications. The standard error of a Hipparcos Catalogue star position is now about 15 mas per coordinate. For the Tycho-2 data it is typically 20 to 100 mas, depending on magnitude. The USNO CCD Astrograph Catalog (UCAC) observing program was completed in 2004 and reductions toward the final UCAC3 release are in progress. This all-sky reference catalog will have positional errors of 15 to 70 mas for stars in the 10 to 16 mag range, with a high degree of completeness. Proper motions for the roughly 60 million UCAC stars will be derived by combining UCAC astrometry with available early epoch data, including as-yet-unpublished scans of the complete set of AGK2, Hamburg Zone astrograph and USNO Black Birch programs. Accurate positional and proper motion data are combined in the Naval Observatory Merged Astrometric Dataset (NOMAD), which includes Hipparcos, Tycho-2, UCAC2, USNO-B1, and NPM+SPM plate scan data for astrometry, and is supplemented by multi-band optical photometry as well as 2MASS near infrared photometry. The Milli-Arcsecond Pathfinder Survey (MAPS) mission is currently being planned at USNO. This is a micro-satellite to obtain 1 mas positions, parallaxes, and 1 mas/yr proper motions for all bright stars down to about 15th magnitude. This program will be supplemented by a ground-based program to reach 18th magnitude at the 5 mas level.
Tank 241-AX-101 grab samples 1AX-97-1 through 1AX-97-3 analytical results for the final report
Esch, R.A.
1997-11-13
This document is the final report for tank 241-AX-101 grab samples. Four grab samples were collected from riser 5B on July 29, 1997. Analyses were performed on samples 1AX-97-1, 1AX-97-2 and 1AX-97-3 in accordance with the Compatibility Grab Sampling and Analysis Plan (TSAP) and the Data Quality Objectives for Tank Farms Waste Compatibility Program (DQO) (Rev. 1: Fowler, 1995; Rev. 2: Mulkey and Miller, 1997). The analytical results are presented in Table 1. No notification limits were exceeded. Attachment 1 is provided as a cross-reference for relating the tank farm customer identification numbers with the 222-S Laboratory sample numbers and the portion of sample analyzed. Table 2 provides the appearance information. All four samples contained settled solids that appeared to be large salt crystals that precipitated upon cooling to ambient temperature. The settled solids in samples 1AX-97-1, 1AX-97-2 and 1AX-97-3 were less than 25% by volume; therefore, for these three samples, only the supernate was sampled and analyzed: two 15-mL subsamples were pipetted from the surface of the liquid and submitted to the laboratory for analysis, and a portion of the liquid was taken from each of these three samples to perform an acidified ammonia analysis. No analysis was performed on the settled solid portion of the samples. Sample 1AX-97-4 contained approximately 25.3% settled solids, and compatibility analyses were not performed on it; it was reserved for the Process Chemistry group to perform boil-down and dissolution testing in accordance with Letter of Instruction for Non-Routine Analysis of Single-Shell Tank 241-AX-101 Grab Samples (Field, 1997) (Correspondence 1). However, prior to the analysis, the sample was inadvertently
NASA Technical Reports Server (NTRS)
Frady, Greg; Smaolloey, Kurt; LaVerde, Bruce; Bishop, Jim
2004-01-01
The paper will discuss practical and analytical findings of a test program conducted to assist engineers in determining which analytical strain fields are most appropriate to describe the crack-initiating and crack-propagating stresses in thin-walled cylindrical hardware that serves as part of the Space Shuttle Main Engine's fuel system. In service, the hardware is excited by fluctuating dynamic pressures in a cryogenic fuel that arise from turbulent flow/pump cavitation. A bench test using a simplified system was conducted using acoustic energy in air to excite the test articles. Strain measurements were used to reveal response characteristics of two Flowliner test articles that are assembled as a pair when installed in the engine feed system.
NASA Astrophysics Data System (ADS)
Chavanis, Pierre-Henri
2011-08-01
We provide an approximate analytical expression of the mass-radius relation of a Newtonian self-gravitating Bose-Einstein condensate (BEC) with short-range interactions described by the Gross-Pitaevskii-Poisson system. These equations model astrophysical objects such as boson stars and, presumably, dark matter galactic halos. Our study connects the noninteracting case studied by Ruffini and Bonazzola (1969) to the Thomas-Fermi limit studied by Böhmer and Harko (2007). For repulsive short-range interactions (positive scattering lengths), there exist configurations of arbitrary mass but their radius is always larger than a minimum value. For attractive short-range interactions (negative scattering lengths), equilibrium configurations only exist below a maximum mass. Above that mass, the system is expected to collapse and form a black hole. We also study the radius versus scattering length relation for a given mass. We find that equilibrium configurations only exist above a (negative) minimum scattering length. Our approximate analytical solution, based on a Gaussian ansatz, provides a very good agreement with the exact solution obtained by numerically solving a nonlinear differential equation representing hydrostatic equilibrium. Our analytical treatment is, however, easier to handle and permits one to study the stability problem, and derive an expression of the pulsation period, by developing an analogy with a simple mechanical problem.
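The Gaussian-ansatz energy minimization described in this abstract can be imitated numerically. The sketch below minimizes a schematic dimensionless energy E(R) = M/R^2 - M^2/R + s*M^2/R^3 (quantum kinetic, gravitational, and short-range interaction terms; the order-unity coefficients are placeholder assumptions, not the paper's exact prefactors) and recovers the analytic noninteracting result R = 2/M for this toy normalization.

```python
def energy(R, M, s):
    # Schematic dimensionless energy of a self-gravitating BEC (Gaussian ansatz):
    # quantum kinetic + gravitational + short-range interaction terms.
    # The order-unity coefficients are placeholders, not the paper's prefactors.
    return M / R**2 - M**2 / R + s * M**2 / R**3

def equilibrium_radius(M, s, lo=1e-3, hi=1e3, iters=200):
    # Golden-section search for the minimizer of energy(R, M, s) on [lo, hi].
    phi = (5 ** 0.5 - 1) / 2
    a, b = lo, hi
    for _ in range(iters):
        c, d = b - phi * (b - a), a + phi * (b - a)
        if energy(c, M, s) < energy(d, M, s):
            b = d
        else:
            a = c
    return (a + b) / 2

if __name__ == "__main__":
    # With s = 0 (no short-range interactions), dE/dR = 0 gives R = 2/M exactly.
    print(equilibrium_radius(1.0, 0.0))  # ~2.0
    print(equilibrium_radius(2.0, 0.0))  # ~1.0
```

The same one-dimensional minimization, with the paper's actual coefficients substituted, is what turns the Gaussian ansatz into a mass-radius relation.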
Building pit dewatering: application of transient analytic elements.
Zaadnoordijk, Willem J
2006-01-01
Analytic elements are well suited for the design of building pit dewatering. Wells and drains can be modeled accurately by analytic elements, both nearby to determine the pumping level and at some distance to verify the targeted drawdown at the building site and to estimate the consequences in the vicinity. The ability to shift locations of wells or drains easily makes the design process very flexible. The temporary pumping has transient effects, for which transient analytic elements may be used. This is illustrated using the free, open-source, object-oriented analytic element simulator Tim(SL) for the design of a building pit dewatering near a canal. Steady calculations are complemented with transient calculations. Finally, the bandwidths of the results are estimated using linear variance analysis.
Simple analytic potentials for linear ion traps
NASA Technical Reports Server (NTRS)
Janik, G. R.; Prestage, J. D.; Maleki, L.
1990-01-01
A simple analytical model was developed for the electric and ponderomotive (trapping) potentials in linear ion traps. This model was used to calculate the required voltage drive to a mercury trap, and the result compares well with experiments. The model gives a detailed picture of the geometric shape of the trapping potential and allows an accurate calculation of the well depth. The simplicity of the model allowed an investigation of related, more exotic trap designs which may have advantages in light-collection efficiency.
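The ponderomotive picture such a model is built on can be sketched directly. Assuming the common quadrupole convention Phi = V0*cos(Omega*t)*(x^2 - y^2)/r0^2 (the paper's exact geometry factors may differ), the time-averaged pseudopotential is psi(r) = q^2*V0^2*r^2/(m*Omega^2*r0^4), and the well depth is psi evaluated at the electrode radius r0. All drive parameters below are illustrative assumptions, not values from the paper.

```python
import math

def pseudopotential(r, q, m, V0, Omega, r0):
    # Time-averaged ponderomotive potential psi = q^2 * E_amp^2 / (4*m*Omega^2),
    # with RF field amplitude E_amp = 2*V0*r/r0^2 for the assumed quadrupole
    # convention Phi = V0*cos(Omega*t)*(x^2 - y^2)/r0^2.
    E_amp = 2.0 * V0 * r / r0**2
    return q**2 * E_amp**2 / (4.0 * m * Omega**2)

def well_depth(q, m, V0, Omega, r0):
    # Radial well depth: pseudopotential at the electrode radius r0.
    return pseudopotential(r0, q, m, V0, Omega, r0)

if __name__ == "__main__":
    # Illustrative numbers for a single 199Hg+ ion (assumed, not from the paper):
    q = 1.602e-19                 # elementary charge, C
    m = 199 * 1.6605e-27          # ion mass, kg
    V0 = 200.0                    # RF amplitude, V
    Omega = 2 * math.pi * 1.0e6   # drive frequency, rad/s
    r0 = 5.0e-3                   # electrode radius, m
    print(well_depth(q, m, V0, Omega, r0) / q, "eV")
```

The quadratic radial dependence of psi is what gives the trap its harmonic "geometric shape" near the axis; the well depth follows from evaluating the same expression at the electrodes.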
Time-domain Raman analytical forward solvers.
Martelli, Fabrizio; Binzoni, Tiziano; Sekar, Sanathana Konugolu Venkata; Farina, Andrea; Cavalieri, Stefano; Pifferi, Antonio
2016-09-01
A set of time-domain analytical forward solvers for Raman signals detected from homogeneous diffusive media is presented. The time-domain solvers have been developed for two geometries: the parallelepiped and the finite cylinder. The potential presence of a background fluorescence emission, contaminating the Raman signal, has also been taken into account. All the solvers have been obtained as solutions of the time-dependent diffusion equation. The validation of the solvers has been performed by means of comparisons with the results of "gold standard" Monte Carlo simulations. These forward solvers provide an accurate tool to explore the information content encoded in the time-resolved Raman measurements. PMID:27607645
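For orientation, the simplest time-domain diffusion solution of the kind these solvers build on is the infinite-medium Green's function (a standard textbook form, not the authors' parallelepiped or cylinder solvers): Phi(r,t) = v*(4*pi*D*v*t)^(-3/2) * exp(-r^2/(4*D*v*t)) * exp(-mua*v*t), with D = 1/(3*musp).

```python
import math

def fluence_td(r, t, mua, musp, v=2.26e10):
    # Time-domain Green's function of the photon diffusion equation in an
    # infinite homogeneous medium (textbook building block):
    #   D = 1/(3*musp)                                  [cm]
    #   phi = v*(4*pi*D*v*t)^(-3/2)
    #         * exp(-r^2/(4*D*v*t)) * exp(-mua*v*t)
    # Units: r in cm, t in s, mua/musp in 1/cm, v = light speed in tissue, cm/s.
    D = 1.0 / (3.0 * musp)
    return (v * (4.0 * math.pi * D * v * t) ** -1.5
            * math.exp(-r * r / (4.0 * D * v * t))
            * math.exp(-mua * v * t))

if __name__ == "__main__":
    # Typical diffusive-tissue optical properties (illustrative values).
    mua, musp, r = 0.1, 10.0, 2.0   # 1/cm, 1/cm, cm
    for t_ps in (100, 500, 1000, 2000):
        print(t_ps, "ps:", fluence_td(r, t_ps * 1e-12, mua, musp))
```

The printed curve rises to a peak and then decays, which is the characteristic shape of a time-resolved diffuse measurement; bounded geometries add image-source or eigenfunction corrections to this kernel.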
Borecka, Marta; Białk-Bielińska, Anna; Siedlewicz, Grzegorz; Kornowska, Kinga; Kumirska, Jolanta; Stepnowski, Piotr; Pazdro, Ksenia
2013-08-23
Although the uncertainty estimate should be a necessary component of an analytical result, the presentation of measurements together with their uncertainties is still a serious problem, especially in the monitoring of the presence of pharmaceuticals in the environment. Here we discuss the estimation of expanded uncertainty in analytical procedures for determining residues of twelve pharmaceuticals in seawater using solid-phase extraction (SPE) with H2O-Philic BAKERBOND speed disks and liquid chromatography coupled with tandem mass spectrometry (LC-MS/MS). Matrix effects, extraction efficiency and absolute recovery of the developed analytical method were determined. A validation was performed to obtain the method's linearity, precision, accuracy, limits of detection (LODs) and quantification (LOQs). The expanded uncertainty of the data obtained was estimated according to the Guide to the Expression of Uncertainty in Measurement and the ISO 17025:2005 standard. We applied our method to the analysis of drugs in seawater samples from the coastal area of the southern Baltic Sea. As a result, a new approach (concerning the uncertainty estimation as well as the development of the analytical method) to the analysis of pharmaceutical residues in environmental samples is presented. The information given here should facilitate the introduction of uncertainty estimation in chromatographic measurements on a much greater scale than is currently the case. PMID:23885670
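The GUM-style combination the authors apply can be illustrated generically: independent relative standard uncertainties are added in quadrature into a combined uncertainty u_c, which is multiplied by a coverage factor k = 2 for roughly 95 % confidence. The budget components below are hypothetical, not the paper's values.

```python
import math

def expanded_uncertainty(components, k=2.0):
    # Combine independent relative standard uncertainties in quadrature
    # (GUM approach) and apply coverage factor k (k = 2 ~ 95 % confidence).
    u_c = math.sqrt(sum(u * u for u in components))
    return k * u_c

if __name__ == "__main__":
    # Hypothetical budget: precision, recovery, calibration, matrix effect.
    budget = [0.05, 0.08, 0.03, 0.10]
    U = expanded_uncertainty(budget)
    print(f"relative expanded uncertainty U = {U:.3f}")
```

Reporting a concentration as c*(1 +/- U) then carries the full budget, which is exactly the practice the abstract argues for.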
Flanagan, R J; Widdop, B; Ramsey, J D; Loveland, M
1988-09-01
1. Major advances in analytical toxicology followed the introduction of spectroscopic and chromatographic techniques in the 1940s and early 1950s and thin layer chromatography remains important together with some spectrophotometric and other tests. However, gas- and high performance-liquid chromatography together with a variety of immunoassay techniques are now widely used. 2. The scope and complexity of forensic and clinical toxicology continues to increase, although the compounds for which emergency analyses are needed to guide therapy are few. Exclusion of the presence of hypnotic drugs can be important in suspected 'brain death' cases. 3. Screening for drugs of abuse has assumed greater importance not only for the management of the habituated patient, but also in 'pre-employment' and 'employment' screening. The detection of illicit drug administration in sport is also an area of increasing importance. 4. In industrial toxicology, the range of compounds for which blood or urine measurements (so called 'biological monitoring') can indicate the degree of exposure is increasing. The monitoring of environmental contaminants (lead, chlorinated pesticides) in biological samples has also proved valuable. 5. In the near future a consensus as to the units of measurement to be used is urgently required and more emphasis will be placed on interpretation, especially as regards possible behavioural effects of drugs or other poisons. Despite many advances in analytical techniques there remains a need for reliable, simple tests to detect poisons for use in smaller hospital and other laboratories.
NASA Astrophysics Data System (ADS)
Panetta, Robert James; Seed, Mike
2016-04-01
Stable isotope applications that call for preconcentration (i.e., greenhouse gas measurements, small carbonate samples, etc.) universally rely on cryogenic fluids such as liquid nitrogen, dry ice slurries, or expensive external recirculation chillers. This adds significant complexity, first and foremost in the requirements to store and handle such dangerous materials. A second layer of complexity is the instrument itself, with mechanisms to physically move either coolant around the trap or a trap in or out of the coolant, as well as design requirements for hardware that can safely isolate the fluid from other sensitive areas. In an effort to simplify the isotopic analysis of gases requiring preconcentration, we have developed a new separation technology, UltiTrapTM (patent pending), which leverages the proprietary Advanced Purge & Trap (APT) Technology employed in elemental analysers from Elementar Analysensysteme GmbH. UltiTrapTM has been specially developed as a micro-volume, dynamically heated GC separation column. The introduction of solid-state cooling technology enables sub-zero temperatures without cryogenics or refrigerants, eliminates all moving parts, and increases analytical longevity because there are no boiling losses of coolant. This new technology makes it possible for the system to be deployed as both a focussing device and as a gas separation device. Initial data on synthetic gas mixtures (CO2/CH4/N2O in air), and real-world applications including long-term room air and a comparison between carbonated waters of different origins, show excellent agreement with previous technologies.
Accurate thickness measurement of graphene
NASA Astrophysics Data System (ADS)
Shearer, Cameron J.; Slattery, Ashley D.; Stapleton, Andrew J.; Shapter, Joseph G.; Gibson, Christopher T.
2016-03-01
Graphene has emerged as a material with a vast variety of applications. The electronic, optical and mechanical properties of graphene are strongly influenced by the number of layers present in a sample. As a result, the dimensional characterization of graphene films is crucial, especially with the continued development of new synthesis methods and applications. A number of techniques exist to determine the thickness of graphene films including optical contrast, Raman scattering and scanning probe microscopy techniques. Atomic force microscopy (AFM), in particular, is used extensively since it provides three-dimensional images that enable the measurement of the lateral dimensions of graphene films as well as the thickness, and by extension the number of layers present. However, in the literature AFM has proven to be inaccurate with a wide range of measured values for single layer graphene thickness reported (between 0.4 and 1.7 nm). This discrepancy has been attributed to tip-surface interactions, image feedback settings and surface chemistry. In this work, we use standard and carbon nanotube modified AFM probes and a relatively new AFM imaging mode known as PeakForce tapping mode to establish a protocol that will allow users to accurately determine the thickness of graphene films. In particular, the error in measuring the first layer is reduced from 0.1-1.3 nm to 0.1-0.3 nm. Furthermore, in the process we establish that the graphene-substrate adsorbate layer and imaging force, in particular the pressure the tip exerts on the surface, are crucial components in the accurate measurement of graphene using AFM. These findings can be applied to other 2D materials.
Statistically Qualified Neuro-Analytic system and Method for Process Monitoring
Vilim, Richard B.; Garcia, Humberto E.; Chen, Frederick W.
1998-11-04
An apparatus and method for monitoring a process involves development and application of a statistically qualified neuro-analytic (SQNA) model to accurately and reliably identify process change. The development of the SQNA model is accomplished in two steps: deterministic model adaptation and stochastic model adaptation. Deterministic model adaptation involves formulating an analytic model of the process representing known process characteristics, augmenting the analytic model with a neural network that captures unknown process characteristics, and training the resulting neuro-analytic model by adjusting the neural network weights according to a unique scaled equation error minimization technique. Stochastic model adaptation involves qualifying any remaining uncertainty in the trained neuro-analytic model by formulating a likelihood function, given an error propagation equation, for computing the probability that the neuro-analytic model generates the measured process output. Preferably, the developed SQNA model is validated using known sequential probability ratio tests and applied to the process as an on-line monitoring system.
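The sequential probability ratio test mentioned in the final step can be sketched for the simplest case: Gaussian residuals tested for a hypothesized mean shift. This is the generic Wald SPRT with made-up parameters, not the patented SQNA implementation.

```python
import math

def sprt(samples, mu0=0.0, mu1=1.0, sigma=1.0, alpha=0.01, beta=0.01):
    # Wald's sequential probability ratio test for a Gaussian mean shift.
    # H0: residual mean mu0 (normal operation); H1: mean mu1 (fault).
    upper = math.log((1 - beta) / alpha)   # cross above -> accept H1
    lower = math.log(beta / (1 - alpha))   # cross below -> accept H0
    llr = 0.0
    for n, x in enumerate(samples, 1):
        # Log-likelihood-ratio increment for one Gaussian observation.
        llr += ((x - mu0) ** 2 - (x - mu1) ** 2) / (2 * sigma ** 2)
        if llr >= upper:
            return "fault", n
        if llr <= lower:
            return "normal", n
    return "undecided", len(samples)

if __name__ == "__main__":
    print(sprt([0.1, -0.2, 0.0, 0.1, -0.1, 0.05, 0.0, -0.05, 0.1, 0.0]))
    print(sprt([1.1, 0.9, 1.2, 1.0, 0.8, 1.1, 1.0, 0.9, 1.2, 1.0]))
```

In the patented scheme the likelihood function comes from the error-propagation equation of the trained neuro-analytic model rather than from a fixed Gaussian, but the stopping logic is the same.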
A direct analytical approach for solving linear inverse heat conduction problems
NASA Astrophysics Data System (ADS)
Ainajem, N. M.; Ozisik, M. N.
1985-08-01
The analytical approach presented for the solution of linear inverse heat conduction problems demonstrates that applied surface conditions involving abrupt changes with time can be effectively accommodated with polynomial representations in time over the entire time domain; the resulting inverse analysis predicts surface conditions accurately. All previous attempts have experienced difficulties in the development of analytic solutions that are applicable over the entire time domain when a polynomial representation is used.
Nielsen, Marie Katrine Klose; Johansen, Sys Stybe; Linnet, Kristian
2014-01-01
Assessment of the total uncertainty of analytical methods for the measurement of drugs in human hair has mainly been derived from the analytical variation. However, in hair analysis several other sources of uncertainty will contribute to the total uncertainty. Particularly, in segmental hair analysis pre-analytical variations associated with the sampling and segmentation may be significant factors in the assessment of the total uncertainty budget. The aim of this study was to develop and validate a method for the analysis of 31 common drugs in hair using ultra-high-performance liquid chromatography-tandem mass spectrometry (UHPLC-MS/MS) with focus on the assessment of both the analytical and pre-analytical sampling variations. The validated method was specific, accurate (80-120%), and precise (CV≤20%) across a wide linear concentration range from 0.025 to 25 ng/mg for most compounds. The analytical variation was estimated to be less than 15% for almost all compounds. The method was successfully applied to 25 segmented hair specimens from deceased drug addicts showing a broad pattern of poly-drug use. The pre-analytical sampling variation was estimated from the genuine duplicate measurements of two bundles of hair collected from each subject after subtraction of the analytical component. For the most frequently detected analytes, the pre-analytical variation was estimated to be 26-69%. Thus, the pre-analytical variation was 3-7-fold larger than the analytical variation (7-13%) and hence the dominant component in the total variation (29-70%). The present study demonstrated the importance of including the pre-analytical variation in the assessment of the total uncertainty budget and in the setting of the 95%-uncertainty interval (±2CVT). Excluding the pre-analytical sampling variation could significantly affect the interpretation of results from segmental hair analysis. PMID:24378297
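The variance bookkeeping behind these figures is simple quadrature: independent coefficients of variation combine as CV_T = sqrt(CV_A^2 + CV_P^2), and the 95 % interval is result*(1 +/- 2*CV_T). The numbers below are illustrative values within the ranges the abstract reports, not per-analyte results from the paper.

```python
import math

def total_cv(cv_analytical, cv_preanalytical):
    # Independent relative variations combine in quadrature.
    return math.sqrt(cv_analytical ** 2 + cv_preanalytical ** 2)

def uncertainty_interval(result, cv_t):
    # 95 % uncertainty interval expressed as result * (1 +/- 2*CV_T).
    return result * (1 - 2 * cv_t), result * (1 + 2 * cv_t)

if __name__ == "__main__":
    cv_t = total_cv(0.10, 0.40)   # e.g. 10 % analytical, 40 % pre-analytical
    print(round(cv_t, 3))         # ~0.412, dominated by the sampling term
    print(uncertainty_interval(1.0, cv_t))
```

The quadrature makes the paper's point visible at a glance: when the pre-analytical CV is several-fold larger, the analytical CV barely moves the total.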
Precise and Accurate Density Determination of Explosives Using Hydrostatic Weighing
B. Olinger
2005-07-01
Precise and accurate density determination requires weight measurements in air and water using sufficiently precise analytical balances, knowledge of the densities of air and water, knowledge of thermal expansions, availability of a density standard, and a method to estimate the time to achieve thermal equilibrium with water. Density distributions in pressed explosives are inferred from the densities of elements from a central slice.
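The underlying relation is Archimedes' principle with an air-buoyancy correction: rho_sample = m_air/(m_air - m_water)*(rho_water - rho_air) + rho_air. A minimal sketch with illustrative balance readings (not data from the report):

```python
def hydrostatic_density(m_air, m_water, rho_water=0.99705, rho_air=0.0012):
    # Archimedes' principle with air-buoyancy correction.
    # m_air, m_water: balance readings in air and submerged in water (g).
    # rho_water, rho_air: fluid densities at the measurement temperature (g/cm^3);
    # the defaults are typical room-temperature values.
    return m_air / (m_air - m_water) * (rho_water - rho_air) + rho_air

if __name__ == "__main__":
    # Hypothetical pressed pellet: 10.000 g in air, 4.580 g submerged.
    print(round(hydrostatic_density(10.000, 4.580), 4))
```

In practice rho_water and rho_air must be taken from tables at the measured temperature, which is why the abstract stresses thermal expansion and the time to reach thermal equilibrium with the water.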
Analytic barrage attack model. Final report, January 1986-January 1989
St Ledger, J.W.; Naegeli, R.E.; Dowden, N.A.
1989-01-01
An analytic model is developed for a nuclear barrage attack, assuming weapons with no aiming error and a cookie-cutter damage function. The model is then extended with approximations for the effects of aiming error and distance damage sigma. The final result is a fast running model which calculates probability of damage for a barrage attack. The probability of damage is accurate to within seven percent or better, for weapon reliabilities of 50 to 100 percent, distance damage sigmas of 0.5 or less, and zero to very large circular error probabilities. FORTRAN 77 coding is included in the report for the analytic model and for a numerical model used to check the analytic results.
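The kind of closed-form quantity such a model produces can be sketched with standard relations (these are textbook cookie-cutter/CEP formulas for a point target, not the report's FORTRAN model): a single weapon with lethal radius R and circular normal aiming error of given CEP damages the target with probability P = 1 - 2^(-(R/CEP)^2), and n independent weapons of reliability rel give 1 - (1 - rel*P)^n.

```python
def single_shot_pd(lethal_radius, cep):
    # Cookie-cutter damage function + circular normal aiming error:
    # P = 1 - exp(-R^2/(2*sigma^2)) with CEP = sigma*sqrt(2*ln 2),
    # which simplifies to P = 1 - 2^(-(R/CEP)^2).
    return 1.0 - 2.0 ** (-(lethal_radius / cep) ** 2)

def barrage_pd(lethal_radius, cep, n_weapons, reliability=1.0):
    # Probability that at least one of n independent weapons damages the
    # target, derating each shot by the weapon reliability.
    p = reliability * single_shot_pd(lethal_radius, cep)
    return 1.0 - (1.0 - p) ** n_weapons

if __name__ == "__main__":
    print(single_shot_pd(300.0, 300.0))       # R = CEP gives exactly 0.5
    print(barrage_pd(300.0, 300.0, 4, 0.8))   # four shots at 80 % reliability
```

The report's model additionally handles barrage spacing and distance-damage sigma, which smear the cookie-cutter edge; the formulas above are the zero-sigma limiting case.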
Badami, Rokhsareh; VaezMousavi, Mohammad; Wulf, Gabriele; Namazizadeh, Mahdi
2012-06-01
One purpose of the present study was to examine whether self-confidence or anxiety would be differentially affected by feedback from more accurate rather than less accurate trials. The second purpose was to determine whether arousal variations (activation) would predict performance. On day 1, participants performed a golf putting task under one of two conditions: one group received feedback on the most accurate trials, whereas another group received feedback on the least accurate trials. On day 2, participants completed an anxiety questionnaire and performed a retention test. Skin conductance level, as a measure of arousal, was determined. The results indicated that feedback about more accurate trials resulted in more effective learning as well as increased self-confidence. Also, activation was a predictor of performance. PMID:22808705
Raillon, R.; Mahaut, S.; Leymarie, N.; Lonne, S.; Spies, M.
2009-03-03
This paper presents the results of the 2008 UT modeling benchmark obtained with the ultrasonic simulation code for predicting echo-responses from flaws integrated into the Civa software platform and with the code developed by M. Spies. The UT configurations addressed are similar to those of 2007, to better understand some responses obtained last year. The experimental results concern the responses of flat-bottom holes at different depths inside surface-curved blocks inspected by an immersion probe at normal incidence. They investigate the influence of surface curvature upon the amplitude and shape of flaw responses. Comparison of the simulated and experimental results is discussed.
Finding accurate frontiers: A knowledge-intensive approach to relational learning
NASA Technical Reports Server (NTRS)
Pazzani, Michael; Brunk, Clifford
1994-01-01
An approach to analytic learning is described that searches for accurate entailments of a Horn Clause domain theory. A hill-climbing search, guided by an information based evaluation function, is performed by applying a set of operators that derive frontiers from domain theories. The analytic learning system is one component of a multi-strategy relational learning system. We compare the accuracy of concepts learned with this analytic strategy to concepts learned with an analytic strategy that operationalizes the domain theory.
Accurate interlaminar stress recovery from finite element analysis
NASA Technical Reports Server (NTRS)
Tessler, Alexander; Riggs, H. Ronald
1994-01-01
The accuracy and robustness of a two-dimensional smoothing methodology is examined for the problem of recovering accurate interlaminar shear stress distributions in laminated composite and sandwich plates. The smoothing methodology is based on a variational formulation which combines discrete least-squares and penalty-constraint functionals in a single variational form. The smoothing analysis utilizes optimal strains computed at discrete locations in a finite element analysis. These discrete strain data are smoothed with a smoothing element discretization, producing superior-accuracy strains and their first gradients. The approach enables the resulting smooth strain field to be practically C1-continuous throughout the domain of smoothing, exhibiting superconvergent properties of the smoothed quantity. The continuous strain gradients are also obtained directly from the solution. The recovered strain gradients are subsequently employed in the integration of equilibrium equations to obtain accurate interlaminar shear stresses. The test problem is a simply supported rectangular plate under a doubly sinusoidal load, which has an exact analytic solution that serves as a measure of goodness of the recovered interlaminar shear stresses. The method has the versatility of being applicable to the analysis of rather general and complex structures built of distinct components and materials, such as found in aircraft design. For these types of structures, the smoothing is achieved with 'patches', each patch covering the domain in which the smoothed quantity is physically continuous.
Analytic theory for the selection of 2-D needle crystal at arbitrary Peclet number
NASA Technical Reports Server (NTRS)
Tanveer, Saleh
1989-01-01
An accurate analytic theory is presented for the velocity selection of a two-dimensional needle crystal for arbitrary Peclet number for small values of the surface tension parameter. The velocity selection is caused by the effect of transcendentally small terms which are determined by analytic continuation to the complex plane and analysis of nonlinear equations. The work supports the general conclusion of previous small Peclet number analytical results of other investigators, though there are some discrepancies in details. It also addresses questions raised on the validity of selection theory owing to assumptions made on shape corrections at large distances from the tip.
NASA Technical Reports Server (NTRS)
Schwenke, David W.
1993-01-01
We report the results of a series of calculations of state-to-state integral cross sections for collisions between O and nonvibrating H2O in the gas phase on a model nonreactive potential energy surface. The dynamical methods used include converged quantum mechanical scattering calculations, the j(z) conserving centrifugal sudden (j(z)-CCS) approximation, and quasi-classical trajectory (QCT) calculations. We consider three total energies 0.001, 0.002, and 0.005 E(h) and the nine initial states with rotational angular momentum less than or equal to 2 (h/2 pi). The j(z)-CCS approximation gives good results, while the QCT method can be quite unreliable for transitions to specific rotational sublevels. However, the QCT cross sections summed over final sublevels and averaged over initial sublevels are in better agreement with the quantum results.
Ribeiro, Edison; Tauhata, Luiz; dos Santos, Eliane Eugenia; da Silveira Corrêa, Rosangela
2011-02-01
This paper presents the results of the Environmental Monitoring Program for the radioactive waste repository of Abadia de Goiás, which originated from the Goiania accident, conducted by the Regional Center of Nuclear Sciences (CRCN-CO) of the National Commission on Nuclear Energy (CNEN) from 1998 to 2008. The results are related to the determination of (137)Cs activity per unit of mass or volume of samples from surface water, ground water, depth sediments of the river, soil and vegetation, and also the air-kerma rate estimation for gamma exposure in the monitored site. In the operational phase of the Environmental Monitoring Program, the values of the geometric mean and geometric standard deviation obtained for (137)Cs activity per unit of mass or volume in the analyzed samples were (0.08 ± 1.16) Bq.L(-1) for surface and underground water, (0.22 ± 2.79) Bq.kg(-1) for soil, (0.19 ± 2.72) Bq.kg(-1) for sediment, and (0.19 ± 2.30) Bq.kg(-1) for vegetation. These results were similar to the values of the pre-operational Environmental Monitoring Program. With these data, estimates of the effective dose were evaluated for individuals of the public in the neighborhood of the waste repository, considering the main possible ways of exposure of this population group. The annual effective dose obtained from the analysis of these results was lower than 0.3 mSv.y(-1), which is the limit established by CNEN for environmental impact on individuals of the public, indicating that the facility is operating safely, without any radiological impact on the surrounding environment.
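The geometric mean ± geometric standard deviation quoted above is computed on the log scale: both are exponentials of ordinary statistics of ln(x). A minimal sketch with made-up activity readings (not the program's data):

```python
import math

def geometric_stats(values):
    # Geometric mean and multiplicative (geometric) standard deviation:
    # exp of the mean and of the sample standard deviation of ln(x).
    logs = [math.log(v) for v in values]
    n = len(logs)
    mean = sum(logs) / n
    var = sum((l - mean) ** 2 for l in logs) / (n - 1)  # sample variance
    return math.exp(mean), math.exp(math.sqrt(var))

if __name__ == "__main__":
    activities = [0.05, 0.08, 0.06, 0.12, 0.10]  # hypothetical Bq/L readings
    gm, gsd = geometric_stats(activities)
    print(round(gm, 3), round(gsd, 3))
```

Note the geometric standard deviation is a dimensionless multiplicative factor (always at least 1), which is why the paper's "±" values such as 2.79 sit next to much smaller means.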
STEEN, F.H.
1999-12-01
This document is the format IV, final report for the tank 241-S-111 (S-111) grab samples taken in August 1999 to address waste compatibility concerns. Chemical, radiochemical, and physical analyses on the tank S-111 samples were performed as directed in Compatibility Grab Sampling and Analysis Plan for Fiscal Year 1999 (Sasaki 1999a,b). Any deviations from the instructions provided in the tank sampling and analysis plan (TSAP) were discussed in this narrative. The notification limit for {sup 137}Cs was exceeded on two samples. Results are discussed in Section 5.3.2. No other notification limits were exceeded.
BELL, K.E.
1999-08-12
This document is the format IV, final report for the tank 241-AP-107 (AP-107) grab samples taken in May 1999 to address waste compatibility concerns. Chemical, radiochemical, and physical analyses on the tank AP-107 samples were performed as directed in Compatibility Grab Sampling and Analysis Plan for Fiscal Year 1999. Any deviations from the instructions provided in the tank sampling and analysis plan (TSAP) were discussed in this narrative. Interim data were provided earlier to River Protection Project (RPP) personnel; however, the data presented here represent the official results. No notification limits were exceeded.
NASA Technical Reports Server (NTRS)
Johnson, Paul K.
2007-01-01
NASA Glenn Research Center (GRC) contracted Barber-Nichols of Arvada, CO, to construct a dual Brayton power conversion system for use as a hardware proof of concept and to validate results from a computational code known as the Closed Cycle System Simulation (CCSS). Initial checkout tests were performed at Barber-Nichols to ready the system for delivery to GRC. This presentation describes the system hardware components and lists the types of checkout tests performed, along with a couple of issues encountered while conducting the tests. A description of the CCSS model is also presented. The checkout tests did not focus on generating data; therefore, no test data or model analyses are presented.
McCurry, M.; Welhan, J.A.
1996-07-01
This report summarizes results of groundwater analyses for samples collected from wells USGS-44, -45, -46, and -59 in conjunction with the INEL Oversight Program straddle-packer project between 1992 and 1995. The purpose of this project was to develop and deploy a high-quality straddle-packer system for characterization of the three-dimensional geometry of solute plumes and aquifer hydrology near the Idaho Chemical Processing Plant (ICPP). Principal objectives included (1) characterizing vertical variations in aquifer chemistry; (2) documenting deviations in aquifer chemistry from that monitored by the existing network; and (3) making recommendations for improving monitoring efforts.
Spomer, Judith E.
2010-09-01
Ranking search results is a thorny issue for enterprise search. Search engines rank results using a variety of sophisticated algorithms, but users still complain that search can't ever seem to find anything useful or relevant! The challenge is to provide results that are ranked according to the user's definition of relevancy. Sandia National Laboratories has enhanced its commercial search engine to discover user preferences, re-ranking results accordingly. Immediate positive impact was achieved by modeling historical data consisting of user queries and subsequent result clicks. New data are incorporated into the model daily. An important benefit is that results improve naturally and automatically over time as a function of user actions. This session presents the method employed, how it was integrated with the search engine, metrics illustrating the subsequent improvement to the user's search experience, and plans for implementation with Sandia's FAST for SharePoint 2010 search engine.
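The abstract does not disclose Sandia's actual ranking model, but the general idea of blending an engine's base relevance score with historical click data can be sketched as follows (the class name, boost parameter, and scoring formula are all hypothetical, chosen only to illustrate the approach):

```python
from collections import defaultdict

class ClickRanker:
    """Illustrative re-ranker: blends a search engine's base score with
    historical click-through rates observed for (query, document) pairs."""

    def __init__(self, boost=0.5):
        self.boost = boost
        self.clicks = defaultdict(int)       # (query, doc) -> click count
        self.impressions = defaultdict(int)  # (query, doc) -> times shown

    def record(self, query, doc, clicked):
        """Ingest one historical interaction (run daily on new log data)."""
        self.impressions[(query, doc)] += 1
        if clicked:
            self.clicks[(query, doc)] += 1

    def ctr(self, query, doc):
        """Historical click-through rate for this query/document pair."""
        n = self.impressions[(query, doc)]
        return self.clicks[(query, doc)] / n if n else 0.0

    def rerank(self, query, results):
        """results: list of (doc, base_score); returns docs re-ordered so
        that frequently clicked documents rise over time."""
        scored = [(doc, base + self.boost * self.ctr(query, doc))
                  for doc, base in results]
        return [doc for doc, _ in sorted(scored, key=lambda t: -t[1])]
```

Because the click model is additive, results improve automatically as user actions accumulate: a document users consistently choose eventually outranks one with a higher engine-assigned score.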
Analytic integrable systems: Analytic normalization and embedding flows
NASA Astrophysics Data System (ADS)
Zhang, Xiang
In this paper we mainly study the existence of analytic normalizations and the normal forms of finite-dimensional complete analytic integrable dynamical systems. In more detail, we prove that any complete analytic integrable diffeomorphism F(x)=Bx+f(x) in (C^n,0), with B having eigenvalues not of modulus 1 and f(x)=O(|x|^2), is locally analytically conjugate to its normal form. Meanwhile, we also prove that any complete analytic integrable differential system x˙=Ax+f(x) in (C^n,0), with A having nonzero eigenvalues and f(x)=O(|x|^2), is locally analytically conjugate to its normal form. Furthermore, we prove that any complete analytic integrable diffeomorphism defined on an analytic manifold can be embedded in a complete analytic integrable flow. We note that parts of our results improve those of Moser in J. Moser, The analytic invariants of an area-preserving mapping near a hyperbolic fixed point, Comm. Pure Appl. Math. 9 (1956) 673-692, and of Poincaré in H. Poincaré, Sur l'intégration des équations différentielles du premier ordre et du premier degré, II, Rend. Circ. Mat. Palermo 11 (1897) 193-239. These results also improve those in Xiang Zhang, Analytic normalization of analytic integrable systems and the embedding flows, J. Differential Equations 244 (2008) 1080-1092, in the sense that the linear part of the systems can be nonhyperbolic, and the one in N.T. Zung, Convergence versus integrability in Poincaré-Dulac normal form, Math. Res. Lett. 9 (2002) 217-228, in that our paper presents the concrete expression of the normal form in a restricted case.
Statistically qualified neuro-analytic failure detection method and system
Vilim, Richard B.; Garcia, Humberto E.; Chen, Frederick W.
2002-03-02
An apparatus and method for monitoring a process involve development and application of a statistically qualified neuro-analytic (SQNA) model to accurately and reliably identify process change. The development of the SQNA model is accomplished in two stages: deterministic model adaptation and stochastic modification of the deterministic model adaptation. Deterministic model adaptation involves formulating an analytic model of the process representing known process characteristics, augmenting the analytic model with a neural network that captures unknown process characteristics, and training the resulting neuro-analytic model by adjusting the neural network weights according to a unique scaled equation-error minimization technique. Stochastic model modification involves qualifying any remaining uncertainty in the trained neuro-analytic model by formulating a likelihood function, given an error propagation equation, for computing the probability that the neuro-analytic model generates the measured process output. Preferably, the developed SQNA model is validated using known sequential probability ratio tests and applied to the process as an on-line monitoring system. Illustrative of the method and apparatus, the method is applied to a peristaltic pump system.
Numerical and Analytical Design of Functionally Graded Piezoelectric Transducers
NASA Astrophysics Data System (ADS)
Rubio, Wilfredo Montealegre; Buiochi, Flavio; Adamowski, Julio C.; Silva, Emílio Carlos Nelli
2008-02-01
This paper presents analytical and finite element methods to model broadband transducers with a graded piezoelectric parameter. Applying the FGM (Functionally Graded Materials) concept to piezoelectric transducer design allows the design of composite transducers without an interface between materials (e.g., piezoelectric ceramic and backing material), due to the continuous change of property values. Thus, large improvements can be achieved in their performance characteristics, mainly in generating short-time-waveform ultrasonic pulses. Nevertheless, recent research on functionally graded piezoelectric transducers shows a lack of studies comparing the numerical and analytical approaches used in their design. In this work, analytical and numerical models of FGM piezoelectric transducers are developed to analyze the effects of piezoelectric material gradation, specifically in ultrasonic applications. In addition, results using FGM piezoelectric transducers are compared with non-FGM piezoelectric transducers. We concluded that the developed modeling techniques are accurate, providing a useful tool for designing FGM piezoelectric transducers.
NASA Technical Reports Server (NTRS)
Eades, J. B., Jr.
1974-01-01
The mathematical developments carried out for this investigation are reported. In addition to describing and discussing the solutions that were acquired, compendia of data are presented herein that summarize the equations and describe them as representative trace geometries. In this analysis the relative motion problems have been referred to two particular frames of reference: one that is inertially aligned, and one that is (local) horizon oriented. In addition to the classical initial-value solutions, there are results describing cases in which applied specific forces serve as forcing functions. Also, in order to provide a complete state representation, the speed components, as well as the displacements, have been described. These coordinates are traced on representative planes analogous to the displacement geometries. By this procedure a complete description of a relative motion is developed; as a consequence, range rate as well as range information is obtained.
Menoni, O; Battevi, N; Colombini, D; Ricci, M G; Occhipinti, E; Zecchi, G
1999-01-01
The paper reports the results of risk evaluation of patient lifting or moving obtained from a multicentre study on 216 wards, covering both acute hospital patients and geriatric residences. In all situations the exposure to patient lifting was assessed using a concise index (MAPO). Analysis of the results showed that only 9% of the workers could be considered as exposed to negligible risk (MAPO Index = 0-1.5); of these, 95.7% worked in hospital wards and only 4.3% in geriatric wards. A further confirmation of the higher level of exposure of workers in long-term hospitalization was that 42.3% were exposed to elevated levels (MAPO Index > 5) compared with 27.7% observed in hospital ward workers. The mean values of the exposure index were 6.8 for hospital wards and 9.64 for geriatric residences and, although much higher in the latter, both categories showed high exposure. In the orthopaedic departments of the hospitals the values were higher than in the geriatric wards (MAPO Index = 10.1); medical and surgical departments showed values similar to the mean values observed in the geriatric wards. These high values were due to: a severe shortage of equipment like lifting devices (95.5%) and minor aids (99.5%); partial inadequacy of the working environment (69.2%); and poor training and information (96.1% lacking); only the supply of wheelchairs was adequate (65.8%). All of this points to an almost generalized non-observance of the regulations listed under Chapter V of Law No. 626/94. However, the proposed method of evaluation allows anyone who has to carry out prevention and improvement measures to identify priority criteria specifically aimed at the individual factors taken into consideration. By simulating an intervention for improvement aimed at equipment and training, 96% of the wards would be included in the negligible exposure class (MAPO Index = 0-1.5).
Walker, R.L.; Smith, D.H.; Carter, J.A.; Musick, W.R.; Donohue, D.L.; Deron, S.; Asakura, Y.; Kagami, K.; Irinouchi, S.; Masui, J.
1981-01-01
The first part of this report covers the background of resin bead spectrometry and the new batch resin bead method. In the original technique, about ten anion resin beads in the nitrate form were exposed to the diluted sample solution. The solution was adjusted to 8 M HNO3 and to have about 1 µg U per bead. Up to 48 hours of static contact between beads and solution was required for adsorption of 1 to 3 ng Pu and U per bead to be achieved. Under these conditions, contamination was a problem at reprocessing facilities. The new batch technique reduces the risk of contamination by handling one hundred times more U in the final diluted sample, which is exposed to a proportionately larger number of beads. Moreover, it requires only ten minutes of adsorption time to provide about 1000 purified samples for mass spectrometry. The amounts of Pu and U adsorbed versus time were determined and the results are tabulated. The second part of this report briefly summarizes results of resin bead field tests completed at the Power Reactor and Nuclear Fuel Development Corporation (PNC) reprocessing plant in Tokai-mura, Japan. Both methods, the original small-sample resin bead and the batch technique, were investigated on spent fuel solutions. Beads were prepared at PNC and distributed to IAEA and ORNL along with dried residues for conventional mass spectrometric analysis at IAEA. Parallel measurements were made at PNC using their normal measuring routines. The U and Pu measurements of all resin bead samples and those of PNC are in excellent agreement for the batch method. Discrepancies were noted in the U measurements by the original method.
Analytical Chemistry of Nitric Oxide
Hetrick, Evan M.
2013-01-01
Nitric oxide (NO) is the focus of intense research, owing primarily to its wide-ranging biological and physiological actions. A requirement for understanding its origin, activity, and regulation is the need for accurate and precise measurement techniques. Unfortunately, analytical assays for monitoring NO are challenged by NO’s unique chemical and physical properties, including its reactivity, rapid diffusion, and short half-life. Moreover, NO concentrations may span pM to µM in physiological milieu, requiring techniques with wide dynamic response ranges. Despite such challenges, many analytical techniques have emerged for the detection of NO. Herein, we review the most common spectroscopic and electrochemical methods, with special focus on the fundamentals behind each technique and approaches that have been coupled with modern analytical measurement tools or exploited to create novel NO sensors. PMID:20636069
High resolution semi-analytical three dimensional linear motions for tension leg platforms
Mullarkey, T.P.; McNamara, J.F.
1995-12-31
Previous work by the authors has resulted in very accurate semi-analytical solutions for the three-dimensional linear hydrodynamics of submerged pontoons, floating columns and combinations of these solutions for the analysis of structures such as the ISSC TLP. The present paper is an extension of the semi-analytical approach to establishing the linear motions of the ISSC TLP including the effects of the tethers. A major advantage of the approach is that response functions may be computed accurately at an arbitrarily high resolution due to the inherent geometric and computational simplicity of the semi-analytical solution scheme. Therefore, the solution serves as a benchmark for the evaluation of results generated using panel methods, and this is illustrated by comparisons with the scatter obtained with available results in the literature.
Montesinos, Andres; Ardiaca, Maria; Juan-Sallés, Carles; Tesouro, Miguel A
2015-03-01
In this study we evaluated the effects of meloxicam administered at 0.5 mg/kg IM q12h for 14 days on hematologic and plasma biochemical values and on kidney tissue in 11 healthy African grey parrots (Psittacus erithacus). Before treatment with meloxicam, blood samples were collected and renal biopsy samples were obtained from the cranial portion of the left kidney from each of the birds. On day 14 of treatment, a second blood sample and biopsy from the middle portion of the left kidney were obtained from each bird. All birds remained clinically normal throughout the study period. No significant differences were found between hematologic and plasma biochemical values before and after 14 days of treatment with meloxicam, except for a slight increase in median beta globulin and corresponding total globulin concentrations, and a slight decrease in median phosphorus concentration. Renal lesions were absent in 9 of 10 representative posttreatment biopsy samples. On the basis of these results, meloxicam administered at the dosage used in this study protocol does not appear to cause renal disease in African grey parrots.
NASA Technical Reports Server (NTRS)
Zolensky, M. E.; Floss, C.; Allen, C.; Bajit, S.; Bechtel, H. A.; Borg, J.; Brenker, F.; Bridges, J; Brownlee, D. E.; Burchell, M.; Burghammer, M.; Butterworth, A. L.; Cloetens, P.; Davis, A. M.; Doll, R.; Flynn, G. J.; Frank, D.; Gainsforth, Z.; Grun, E.; Heck, P. R.; Hillier, J. K.; Hoppe, P
2011-01-01
In addition to samples from comet 81P/Wild 2, NASA's Stardust mission may have returned the first samples of contemporary interstellar dust. The interstellar tray collected particles for 229 days during two exposures prior to the spacecraft encounter with Wild 2 and tracked the interstellar dust stream for all but 34 days of that time. In addition to aerogel capture cells, the tray contains Al foils that make up approx. 15% of the total exposed collection surface. Interstellar dust fluxes are poorly constrained, but suggest that on the order of 12-15 particles may have impacted the total exposed foil area of 15,300 sq mm; 2/3 of these are estimated to be less than approx. 1 micrometer in size. Examination of the interstellar foils to locate the small rare craters expected from these impacts is proceeding under the auspices of the Stardust Interstellar Preliminary Examination (ISPE) plan. Below we outline the automated high-resolution imaging protocol we have established for this work and report results obtained from two interstellar foils.
NASA Technical Reports Server (NTRS)
Harrington, Douglas (Technical Monitor); Schweiger, P.; Stern, A.; Gamble, E.; Barber, T.; Chiappetta, L.; LaBarre, R.; Salikuddin, M.; Shin, H.; Majjigi, R.
2005-01-01
Hot flow aero-acoustic tests were conducted with Pratt & Whitney's High-Speed Civil Transport (HSCT) Mixer-Ejector Exhaust Nozzles by General Electric Aircraft Engines (GEAE) in the GEAE Anechoic Freejet Noise Facility (Cell 41) located in Evendale, Ohio. The tests evaluated the impact of various geometric and design parameters on the noise generated by a two-dimensional (2-D) shrouded, 8-lobed, mixer-ejector exhaust nozzle. The shrouded mixer-ejector provides noise suppression by mixing relatively low energy ambient air with the hot, high-speed primary exhaust jet. Additional attenuation was obtained by lining the shroud internal walls with acoustic panels, which absorb acoustic energy generated during the mixing process. Two mixer designs were investigated, the high mixing "vortical" and aligned flow "axial", along with variations in the shroud internal mixing area ratios and shroud length. The shrouds were tested as hardwall or lined with acoustic panels packed with a bulk absorber. A total of 21 model configurations at 1:11.47 scale were tested. The models were tested over a range of primary nozzle pressure ratios and primary exhaust temperatures representative of typical HSCT aero thermodynamic cycles. Static as well as flight simulated data were acquired during testing. A round convergent unshrouded nozzle was tested to provide an acoustic baseline for comparison to the test configurations. Comparisons were made to previous test results obtained with this hardware at NASA Glenn's 9- by 15-foot low-speed wind tunnel (LSWT). Laser velocimetry was used to investigate external as well as ejector internal velocity profiles for comparison to computational predictions. Ejector interior wall static pressure data were also obtained. A significant reduction in exhaust system noise was demonstrated with the 2-D shrouded nozzle designs.
Cram, Dawn; Roth, Christopher J; Towbin, Alexander J
2016-10-01
The decision to implement an orders-based versus an encounters-based imaging workflow poses various implications to image capture and storage. The impacts include workflows before and after an imaging procedure, electronic health record build, technical infrastructure, analytics, resulting, and revenue. Orders-based workflows tend to favor some imaging specialties while others require an encounters-based approach. The intent of this HIMSS-SIIM white paper is to offer lessons learned from early adopting institutions to physician champions and informatics leadership developing strategic planning and operational rollouts for specialties capturing clinical multimedia. PMID:27417208
Analytical scatter kernels for portal imaging at 6 MV.
Spies, L; Bortfeld, T
2001-04-01
X-ray photon scatter kernels for 6 MV electronic portal imaging are investigated using an analytical and a semi-analytical model. The models are tested on homogeneous phantoms for a range of uniform circular fields and scatterer-to-detector air gaps relevant for clinical use. It is found that a fully analytical model based on an exact treatment of photons undergoing a single Compton scatter event and an approximate treatment of second and higher order scatter events, assuming a multiple-scatter source at the center of the scatter volume, is accurate within 1% (i.e., the residual scatter signal is less than 1% of the primary signal) for field sizes up to 100 cm2 and air gaps over 30 cm, but shows significant discrepancies for larger field sizes. Monte Carlo results are presented showing that the effective multiple-scatter source is located toward the exit surface of the scatterer, rather than at its center. A second model is therefore investigated where second and higher-order scattering is instead modeled by fitting an analytical function describing a nonstationary isotropic point-scatter source to Monte Carlo generated data. This second model is shown to be accurate to within 1% for air gaps down to 20 cm, for field sizes up to 900 cm2 and phantom thicknesses up to 50 cm. PMID:11339752
Analytic three-loop static potential
NASA Astrophysics Data System (ADS)
Lee, Roman N.; Smirnov, Alexander V.; Smirnov, Vladimir A.; Steinhauser, Matthias
2016-09-01
We present analytic results for the three-loop static potential of two heavy quarks. The analytic calculation of the missing ingredients is outlined, and results for the singlet and octet potential are provided.
Creating analytically divergence-free velocity fields from grid-based data
NASA Astrophysics Data System (ADS)
Ravu, Bharath; Rudman, Murray; Metcalfe, Guy; Lester, Daniel R.; Khakhar, Devang V.
2016-10-01
We present a method, based on B-splines, to calculate a C2-continuous analytic vector potential from discrete 3D velocity data on a regular grid. A continuous, analytically divergence-free velocity field can then be obtained from the curl of the potential. This field can be used to robustly and accurately integrate particle trajectories in incompressible flow fields. Based on the method of Finn and Chacon (2005) [10], this new method ensures that the analytic velocity field matches the grid values almost everywhere, with errors that are two to four orders of magnitude lower than those of existing methods. We demonstrate its application to three different problems (each in a different coordinate system) and provide details of the specifics required in each case. We show how the additional accuracy of the method results in qualitatively and quantitatively superior trajectories and more accurate identification of Lagrangian coherent structures.
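The identity the method relies on, that the curl of any sufficiently smooth vector potential is identically divergence-free, can be checked symbolically. The vector potential below is a hypothetical smooth stand-in for the fitted B-spline potential of the paper:

```python
import sympy as sp

x, y, z = sp.symbols('x y z')

# A hypothetical smooth vector potential A = (A_x, A_y, A_z).
A = (sp.sin(y) * z, x**2 * z, sp.cos(x * y))

# Velocity field u = curl A, written out component by component.
u = (sp.diff(A[2], y) - sp.diff(A[1], z),
     sp.diff(A[0], z) - sp.diff(A[2], x),
     sp.diff(A[1], x) - sp.diff(A[0], y))

# div u = div(curl A) vanishes identically, so u is divergence-free
# regardless of the potential chosen.
div_u = sp.simplify(sp.diff(u[0], x) + sp.diff(u[1], y) + sp.diff(u[2], z))
print(div_u)  # 0
```

Because the identity holds for any C2 potential, interpolating the potential (rather than the velocity itself) guarantees an exactly divergence-free interpolant, which is why the approach is well suited to trajectory integration in incompressible flows.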
Profitable capitation requires accurate costing.
West, D A; Hicks, L L; Balas, E A; West, T D
1996-01-01
In the name of costing accuracy, nurses are asked to track inventory use on a per-treatment basis, while more significant costs, such as general overhead and nursing salaries, are usually allocated to patients or treatments on an average-cost basis. Accurate treatment costing and financial viability require analysis of all resources actually consumed in treatment delivery, including nursing services and inventory. More precise costing information enables more profitable decisions, as is demonstrated by comparing the ratio-of-cost-to-treatment method (aggregate costing) with alternative activity-based costing (ABC) methods. Nurses must participate in this costing process to ensure that capitation bids are based upon accurate costs rather than simple averages. PMID:8788799
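The contrast between aggregate and activity-based costing can be made concrete with a toy calculation (all figures, rates, and cost drivers below are hypothetical, chosen only to illustrate how averaging hides the spread between simple and complex treatments):

```python
# Hypothetical department figures for one period.
total_costs = 500_000.0      # all costs: salaries, overhead, supplies
total_treatments = 2_000

# Aggregate (ratio-of-cost-to-treatment): every treatment gets the average.
aggregate_cost = total_costs / total_treatments   # 250.0 per treatment

# Activity-based costing: trace resources each treatment actually consumes.
# Assumed cost drivers: nursing minutes, supplies used, and an overhead share.
nursing_rate = 1.2           # hypothetical cost per nursing minute

def abc_cost(nursing_minutes, supplies, overhead_share):
    return nursing_minutes * nursing_rate + supplies + overhead_share

simple_case  = abc_cost(nursing_minutes=30,  supplies=20, overhead_share=80)
complex_case = abc_cost(nursing_minutes=120, supplies=90, overhead_share=80)

# A capitation bid priced at the 250.0 average overcharges simple cases
# (136.0) and undercharges complex ones (314.0).
print(aggregate_cost, simple_case, complex_case)
```

The averaging method prices both cases identically, so a payer mix skewed toward complex cases would make a bid based on the aggregate figure unprofitable, which is the article's argument for tracing nursing time and inventory per treatment.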
Keyword Search over Data Service Integration for Accurate Results
NASA Astrophysics Data System (ADS)
Zemleris, Vidmantas; Kuznetsov, Valentin; Gwadera, Robert
2014-06-01
Virtual Data Integration provides a coherent interface for querying heterogeneous data sources (e.g., web services, proprietary systems) with minimum upfront effort. Still, this requires its users to learn a new query language and to get acquainted with data organization which may pose problems even to proficient users. We present a keyword search system, which proposes a ranked list of structured queries along with their explanations. It operates mainly on the metadata, such as the constraints on inputs accepted by services. It was developed as an integral part of the CMS data discovery service, and is currently available as open source.
Analytical advantages of multivariate data processing. One, two, three, infinity?
Olivieri, Alejandro C
2008-08-01
Multidimensional data are being abundantly produced by modern analytical instrumentation, calling for new and powerful data-processing techniques. Research in the last two decades has resulted in the development of a multitude of different processing algorithms, each equipped with its own sophisticated artillery. Analysts have slowly discovered that this body of knowledge can be appropriately classified, and that common aspects pervade all these seemingly different ways of analyzing data. As a result, going from univariate data (a single datum per sample, employed in the well-known classical univariate calibration) to multivariate data (data arrays per sample of increasingly complex structure and number of dimensions) is known to provide a gain in sensitivity and selectivity, combined with analytical advantages which cannot be overestimated. The first-order advantage, achieved using vector sample data, allows analysts to flag new samples which cannot be adequately modeled with the current calibration set. The second-order advantage, achieved with second- (or higher-) order sample data, allows one not only to mark new samples containing components which do not occur in the calibration phase but also to model their contribution to the overall signal, and most importantly, to accurately quantitate the calibrated analyte(s). No additional analytical advantages appear to be known for third-order data processing. Future research may permit, among other interesting issues, to assess if this "1, 2, 3, infinity" situation of multivariate calibration is really true. PMID:18613646
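The progression from univariate to higher-order data described above corresponds to per-sample arrays of increasing dimensionality; a minimal sketch of the shapes involved (array sizes are arbitrary, for illustration only):

```python
import numpy as np

# Data measured for a single sample, by order:
zeroth = 0.42                     # one datum: classical univariate calibration
first  = np.zeros(100)            # a vector, e.g. a spectrum (first-order)
second = np.zeros((100, 50))      # a matrix, e.g. excitation-emission (second-order)
third  = np.zeros((100, 50, 20))  # a three-way array per sample (third-order)

# A calibration set of N samples stacks these, adding one more dimension:
N = 10
first_order_set  = np.zeros((N, 100))       # N x J matrix
second_order_set = np.zeros((N, 100, 50))   # N x J x K three-way array

print(first.ndim, second.ndim, third.ndim)  # 1 2 3
```

The order of the per-sample array is what determines the available advantage: vector data enable outlier flagging (first-order advantage), while matrix data additionally allow quantitation in the presence of uncalibrated interferents (second-order advantage).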
Analytic prediction of airplane equilibrium spin characteristics
NASA Technical Reports Server (NTRS)
Adams, W. M., Jr.
1972-01-01
The nonlinear equations of motion are solved algebraically for conditions under which an airplane is in an equilibrium spin. Constrained minimization techniques are employed in obtaining the solution. Linear characteristics of the airplane about the equilibrium points are also presented, and their significance in identifying the stability characteristics of the equilibrium points is discussed. Computer time requirements are small, making the method appear potentially applicable in airplane design. Results are obtained for several configurations and are compared with other analytic-numerical methods employed in spin prediction. Correlation with experimental results is discussed for one configuration for which a rather extensive data base was available. A need is indicated for higher-Reynolds-number data taken under conditions that more accurately simulate a spin.
Accurate Mass Measurements in Proteomics
Liu, Tao; Belov, Mikhail E.; Jaitly, Navdeep; Qian, Weijun; Smith, Richard D.
2007-08-01
To understand different aspects of life at the molecular level, one would think that ideally all components of specific processes should be individually isolated and studied in detail. Reductionist approaches, i.e., studying one biological event on a one-gene or one-protein-at-a-time basis, have indeed made significant contributions to our understanding of many basic facts of biology. However, these individual “building blocks” cannot be visualized as a comprehensive “model” of the life of cells, tissues, and organisms without using more integrative approaches.1,2 For example, the emerging field of “systems biology” aims to quantify all of the components of a biological system to assess their interactions and to integrate diverse types of information obtainable from this system into models that could explain and predict behaviors.3-6 Recent breakthroughs in genomics, proteomics, and bioinformatics are making this daunting task a reality.7-14 Proteomics, the systematic study of the entire complement of proteins expressed by an organism, tissue, or cell under a specific set of conditions at a specific time (i.e., the proteome), has become an essential enabling component of systems biology. While the genome of an organism may be considered static over short timescales, the expression of that genome as the actual gene products (i.e., mRNAs and proteins) is a dynamic event that is constantly changing due to the influence of environmental and physiological conditions. Exclusive monitoring of the transcriptomes can be carried out using high-throughput cDNA microarray analysis,15-17 however the measured mRNA levels do not necessarily correlate strongly with the corresponding abundances of proteins.18-20 The actual amount of functional proteins can be altered significantly and become independent of mRNA levels as a result of post-translational modifications (PTMs),21 alternative splicing,22,23 and protein turnover.24,25 Moreover, the functions of expressed
Guggenheim, James A.; Bargigia, Ilaria; Farina, Andrea; Pifferi, Antonio; Dehghani, Hamid
2016-01-01
A novel straightforward, accessible and efficient approach is presented for performing hyperspectral time-domain diffuse optical spectroscopy to determine the optical properties of samples accurately using geometry specific models. To allow bulk parameter recovery from measured spectra, a set of libraries based on a numerical model of the domain being investigated is developed as opposed to the conventional approach of using an analytical semi-infinite slab approximation, which is known and shown to introduce boundary effects. Results demonstrate that the method improves the accuracy of derived spectrally varying optical properties over the use of the semi-infinite approximation. PMID:27699137
Accurate verification of the conserved-vector-current and standard-model predictions
Sirlin, A.; Zucchini, R.
1986-10-20
An approximate analytic calculation of O(Zα²) corrections to Fermi decays is presented. When the analysis of Koslowsky et al. is modified to take into account the new results, it is found that each of the eight accurately studied ℱt values differs from the average by less than approximately 1σ, thus significantly improving the comparison of experiments with conserved-vector-current predictions. The new ℱt values are lower than before, which also brings experiments into very good agreement with the three-generation standard model, at the level of its quantum corrections.
King, Harley D.; Chaffee, Maurice A.
2000-01-01
Desert BLM Resource Area and vicinity. Included in the 1,245 stream-sediment samples collected by the USGS are 284 samples collected as part of the current study, 817 samples collected as part of investigations of the 12 BLM WSAs and re-analyzed for the present study, 45 samples from the Needles 1° × 2° quadrangle, and 99 samples from the El Centro 1° × 2° quadrangle. The NURE stream-sediment and soil samples were re-analyzed as part of the USGS study in the Needles quadrangle. Analytical data for samples from the Chocolate Mountain Aerial Gunnery Range, which is located within the area of the NECD, were previously reported (King and Chaffee, 1999a). For completeness, these results are also included in this report. Analytical data for samples from the area of Joshua Tree National Park that is within the NECD have also been reported (King and Chaffee, 1999b). These results are not included in this report. The analytical data presented here can be used for baseline geochemical, mineral resource, and environmental geochemical studies.
Accurate documentation and wound measurement.
Hampton, Sylvie
This article, part 4 in a series on wound management, addresses the sometimes routine yet crucial task of documentation. Clear and accurate records of a wound enable its progress to be determined so the appropriate treatment can be applied. Thorough records mean any practitioner picking up a patient's notes will know when the wound was last checked, how it looked and what dressing and/or treatment was applied, ensuring continuity of care. Documenting every assessment also has legal implications, demonstrating due consideration and care of the patient and the rationale for any treatment carried out. Part 5 in the series discusses wound dressing characteristics and selection.
ERIC Educational Resources Information Center
MacNeill, Sheila; Campbell, Lorna M.; Hawksey, Martin
2014-01-01
This article presents an overview of the development and use of analytics in the context of education. Using Buckingham Shum's three levels of analytics, the authors present a critical analysis of current developments in the domain of learning analytics, and contrast the potential value of analytics research and development with real world…
ERIC Educational Resources Information Center
Oblinger, Diana G.
2012-01-01
Talk about analytics seems to be everywhere. Everyone is talking about analytics. Yet even with all the talk, many in higher education have questions about--and objections to--using analytics in colleges and universities. In this article, the author explores the use of analytics in, and all around, higher education. (Contains 1 note.)
Second-order analytic solutions for re-entry trajectories
NASA Astrophysics Data System (ADS)
Kim, Eun-Kyou
1993-01-01
With the development of aeroassist technology, either for near-earth orbital transfer with or without a plane change or for planetary aerocapture, it is of interest to have accurate analytic solutions for reentry trajectories in an explicit form. Starting with the equations of motion of a non-thrusting aerodynamic vehicle entering a non-rotating spherical planetary atmosphere, a normalization technique is used to transform the equations into a form suitable for an analytic integration. Then, depending on the type of planar entry modes with a constant angle-of-attack, namely, ballistic fly-through, lifting skip, and equilibrium glide trajectories, the first-order solutions are obtained with the appropriate simplification. By analytic continuation, the second-order solutions for the altitude, speed, and flight path angle are derived. The closed form solutions lead to explicit forms for the physical quantities of interest, such as the deceleration and aerodynamic heating rates. The analytic solutions for the planar case are extended to three-dimensional skip trajectories with a constant bank angle. The approximate solutions for the heading and latitude are developed to the second order. In each type of trajectory examined, explicit relations among the principal variables are in a form suitable for guidance and navigation purposes. The analytic solutions have excellent agreement with the numerical integrations. They also provide some new results which were not reported in the existing classical theory.
Tank 241-U-102 Grab Samples 2U-99-1, 2U-99-2, and 2U-99-3 Analytical Results for the Final Report
STEEN, F.H.
1999-08-03
This document is the final report for tank 241-U-102 grab samples. Five grab samples were collected from riser 13 on May 26, 1999 and received by the 222-S laboratory on May 26 and May 27, 1999. Samples 2U-99-3 and 2U-99-4 were submitted to the Process Chemistry Laboratory for special studies. Samples 2U-99-1, 2U-99-2, and 2U-99-5 were submitted to the laboratory for analyses. Analyses were performed in accordance with the Compatibility Grab Sampling and Analysis Plan for Fiscal Year 1999 (TSAP) (Sasaki, 1999) and the Data Quality Objectives for Tank Farms Waste Compatibility Program (DQO) (Fowler 1995, Mulkey and Miller 1998). The analytical results are presented in the data summary report. None of the subsamples submitted for differential scanning calorimetry (DSC), total organic carbon (TOC), and plutonium-239 (Pu-239) analyses exceeded the notification limits as stated in the TSAP.
Tank 241-S-109 Grab Samples 9S-99-1, 9S-99-2, and 9S-99-3 Analytical Results for the Final Report
STEEN, F.H.
1999-11-23
This document is the final report for tank 241-S-109 grab samples. Three grab samples were collected from riser 13 on July 28, 1999 and received by the 222-S laboratory on July 28, 1999. Analyses were performed in accordance with the Compatibility Grab Sampling and Analysis Plan for Fiscal Year 1999 (TSAP) (Sasaki, 1999) and the Data Quality Objectives for Tank Farms Waste Compatibility Program (DQO) (Fowler 1995, Mulkey and Miller 1998). The analytical results are presented in the data summary report (Table 1). None of the subsamples submitted for differential scanning calorimetry (DSC), total organic carbon (TOC), and plutonium-239 (Pu-239) analyses exceeded the notification limits as stated in the TSAP (Sasaki, 1999).
Technology Transfer Automated Retrieval System (TEKTRAN)
Analytical methods for the determination of mycotoxins in foods are commonly based on chromatographic techniques (GC, HPLC or LC-MS). Although these methods permit a sensitive and accurate determination of the analyte, they require skilled personnel and are time-consuming, expensive, and unsuitable ...
Shells on nanowires detected by analytical TEM
NASA Astrophysics Data System (ADS)
Thomas, Jürgen; Gemming, Thomas
2005-09-01
Nanostructures in the form of nanowires or filled nanotubes and nanoparticles covered by shells are of great interest in materials science. They allow the creation of new materials with tailored properties. For the characterisation of these structures and their shells by means of analytical transmission electron microscopy (TEM), especially by energy-dispersive X-ray spectroscopy (EDXS) and electron energy loss spectroscopy (EELS), accurate analysis of linescan intensity profiles is necessary. A mathematical model suitable for this analysis is described. It considers the finite electron beam size, the beam convergence, and the beam broadening within the specimen. It is shown that the beam size influences the measured core radius and shell thickness. On the other hand, the influence of the beam broadening within the specimen is negligible. For EELS, the specimen thickness must be smaller than the mean free path for inelastic scattering; otherwise, artifacts in the signal profile of a nanowire can mimic a nanotube.
Comparing numerical and analytic approximate gravitational waveforms
NASA Astrophysics Data System (ADS)
Afshari, Nousha; Lovelace, Geoffrey; SXS Collaboration
2016-03-01
A direct observation of gravitational waves will test Einstein's theory of general relativity under the most extreme conditions. The Laser Interferometer Gravitational-Wave Observatory, or LIGO, began searching for gravitational waves in September 2015 with three times the sensitivity of initial LIGO. To help Advanced LIGO detect as many gravitational waves as possible, a major research effort is underway to accurately predict the expected waves. In this poster, I will explore how the gravitational waveform produced by a long binary-black-hole inspiral, merger, and ringdown is affected by how fast the larger black hole spins. In particular, I will present results from simulations of merging black holes, completed using the Spectral Einstein Code (black-holes.org/SpEC.html), including some new, long simulations designed to mimic black hole-neutron star mergers. I will present comparisons of the numerical waveforms with analytic approximations.
Electron Microprobe Analysis Techniques for Accurate Measurements of Apatite
NASA Astrophysics Data System (ADS)
Goldoff, B. A.; Webster, J. D.; Harlov, D. E.
2010-12-01
Apatite [Ca5(PO4)3(F, Cl, OH)] is a ubiquitous accessory mineral in igneous, metamorphic, and sedimentary rocks. The mineral contains halogens and hydroxyl ions, which can provide important constraints on fugacities of volatile components in fluids and other phases in igneous and metamorphic environments in which apatite has equilibrated. Accurate measurements of these components in apatite are therefore necessary. Analyzing apatite by electron microprobe (EMPA), which is a commonly used geochemical analytical technique, has often been found to be problematic, and previous studies have identified sources of error. For example, Stormer et al. (1993) demonstrated that the orientation of an apatite grain relative to the incident electron beam could significantly affect the concentration results. In this study, a variety of alternative EMPA operating conditions for apatite analysis were investigated: a range of electron beam settings, count times, crystal grain orientations, and calibration standards were tested. Twenty synthetic anhydrous apatite samples that span the fluorapatite-chlorapatite solid solution series, and whose halogen concentrations were determined by wet chemistry, were analyzed. Accurate measurements of these samples were obtained with many EMPA techniques. One effective method includes setting a static electron beam to 10-15 nA, 15 kV, and 10 microns in diameter. Additionally, the apatite sample is oriented with the crystal’s c-axis parallel to the slide surface and the count times are moderate. Importantly, the F and Cl EMPA concentrations are in extremely good agreement with the wet-chemical data. We also present EMPA operating conditions and techniques that are problematic and should be avoided. J.C. Stormer, Jr. et al., Am. Mineral. 78 (1993) 641-648.
Analytic Approximate Solution for Falkner-Skan Equation
Marinca, Bogdan
2014-01-01
This paper deals with the Falkner-Skan nonlinear differential equation. An analytic approximate technique, namely, optimal homotopy asymptotic method (OHAM), is employed to propose a procedure to solve a boundary-layer problem. Our method does not depend upon small parameters and provides us with a convenient way to optimally control the convergence of the approximate solutions. The obtained results reveal that this procedure is very effective, simple, and accurate. A very good agreement was found between our approximate results and numerical solutions, which prove that OHAM is very efficient in practice, ensuring a very rapid convergence after only one iteration. PMID:24883417
Analytic descriptions of cylindrical electromagnetic waves in a nonlinear medium.
Xiong, Hao; Si, Liu-Gang; Yang, Xiaoxue; Wu, Ying
2015-01-01
A simple but highly efficient approach is proposed for the problem of cylindrical electromagnetic wave propagation in a nonlinear medium, based on a recently proposed exact solution. We derive an explicit analytical formula, which exhibits rich and interesting nonlinear effects, describing the propagation of any number of cylindrical electromagnetic waves in a nonlinear medium. The results obtained with the present method agree closely with those of the traditional coupled-wave equations. As an example of application, we discuss how a third wave affects the sum- and difference-frequency generation of two waves propagating in the nonlinear medium.
Davenport, Thomas H
2006-01-01
We all know the power of the killer app. It's not just a support tool; it's a strategic weapon. Companies questing for killer apps generally focus all their firepower on the one area that promises to create the greatest competitive advantage. But a new breed of organization has upped the stakes: Amazon, Harrah's, Capital One, and the Boston Red Sox have all dominated their fields by deploying industrial-strength analytics across a wide variety of activities. At a time when firms in many industries offer similar products and use comparable technologies, business processes are among the few remaining points of differentiation--and analytics competitors wring every last drop of value from those processes. Employees hired for their expertise with numbers or trained to recognize their importance are armed with the best evidence and the best quantitative tools. As a result, they make the best decisions. In companies that compete on analytics, senior executives make it clear--from the top down--that analytics is central to strategy. Such organizations launch multiple initiatives involving complex data and statistical analysis, and quantitative activity is managed at the enterprise (not departmental) level. In this article, professor Thomas H. Davenport lays out the characteristics and practices of these statistical masters and describes some of the very substantial changes other companies must undergo in order to compete on quantitative turf. As one would expect, the transformation requires a significant investment in technology, the accumulation of massive stores of data, and the formulation of company-wide strategies for managing the data. But, at least as important, it also requires executives' vocal, unswerving commitment and willingness to change the way employees think, work, and are treated.
SPLASH: Accurate OH maser positions
NASA Astrophysics Data System (ADS)
Walsh, Andrew; Gomez, Jose F.; Jones, Paul; Cunningham, Maria; Green, James; Dawson, Joanne; Ellingsen, Simon; Breen, Shari; Imai, Hiroshi; Lowe, Vicki; Jones, Courtney
2013-10-01
The hydroxyl (OH) 18 cm lines are powerful and versatile probes of diffuse molecular gas, that may trace a largely unstudied component of the Galactic ISM. SPLASH (the Southern Parkes Large Area Survey in Hydroxyl) is a large, unbiased and fully-sampled survey of OH emission, absorption and masers in the Galactic Plane that will achieve sensitivities an order of magnitude better than previous work. In this proposal, we request ATCA time to follow up OH maser candidates. This will give us accurate (~10") positions of the masers, which can be compared to other maser positions from HOPS, MMB and MALT-45 and will provide full polarisation measurements towards a sample of OH masers that have not been observed in MAGMO.
Semi-Analytic Reconstruction of Flux in Finite Volume Formulations
NASA Technical Reports Server (NTRS)
Gnoffo, Peter A.
2006-01-01
Semi-analytic reconstruction uses the analytic solution to a second-order, steady, ordinary differential equation (ODE) to simultaneously evaluate the convective and diffusive flux at all interfaces of a finite volume formulation. The second-order ODE is itself a linearized approximation to the governing first- and second-order partial differential equation conservation laws. Thus, semi-analytic reconstruction defines a family of formulations for finite volume interface fluxes using analytic solutions to approximating equations. Limiters are not applied in a conventional sense; rather, diffusivity is adjusted in the vicinity of changes in sign of eigenvalues in order to achieve a sufficiently small cell Reynolds number in the analytic formulation across critical points. Several approaches for application of semi-analytic reconstruction for the solution of one-dimensional scalar equations are introduced. Results are compared with exact analytic solutions to Burgers' equation as well as a conventional, upwind discretization using Roe's method. One approach, the end-point wave speed (EPWS) approximation, is further developed for more complex applications. One-dimensional vector equations are tested on a quasi-one-dimensional nozzle application. The EPWS algorithm has a more compact difference stencil than Roe's algorithm, but reconstruction time is approximately a factor of four larger than for Roe. Though both are second-order accurate schemes, Roe's method approaches a grid converged solution with fewer grid points. Reconstruction of flux in the context of multi-dimensional, vector conservation laws including effects of thermochemical nonequilibrium in the Navier-Stokes equations is developed.
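The abstract's core idea, defining the interface flux from the exact solution of a linearized advection-diffusion ODE, is closely related to classical exponential fitting (the Scharfetter-Gummel flux). The sketch below is that textbook special case for a scalar, constant-coefficient ODE, not NASA's EPWS scheme; the values of `u`, `D`, and `h` are arbitrary illustrative parameters:

```python
import math

def bernoulli(x):
    """B(x) = x / (exp(x) - 1), with a series fallback near x = 0."""
    if abs(x) < 1e-8:
        return 1.0 - x / 2.0
    return x / math.expm1(x)

def interface_flux(c_left, c_right, u, D, h):
    """Exact flux F = u*c - D*dc/dx of the steady ODE u*c' = D*c''
    between two nodes a distance h apart (exponential fitting)."""
    pe = u * h / D                      # cell Peclet (cell Reynolds) number
    return (D / h) * (bernoulli(-pe) * c_left - bernoulli(pe) * c_right)

# Limiting behaviour of the same formula:
print(interface_flux(1.0, 0.0, u=0.0, D=1.0, h=1.0))   # pure-diffusion limit, (D/h)*(cL-cR)
print(interface_flux(1.0, 0.0, u=50.0, D=1.0, h=1.0))  # strong-advection limit, ~u*cL (upwind)
```

Because the exponential weighting follows the exact solution, the same formula reduces to central differencing for a small cell Reynolds number and to first-order upwinding as the number grows, with no explicit limiter, which mirrors the abstract's remark about adjusting diffusivity rather than applying conventional limiters.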
Evaluation of the WIND System atmospheric models: An analytic approach
Fast, J.D.
1991-11-25
An analytic approach was used in this study to test the logic, coding, and the theoretical limits of the WIND System atmospheric models for the Savannah River Plant. In this method, dose or concentration estimates predicted by the models were compared to the analytic solutions to evaluate their performance. The results from AREA EVACUATION and PUFF/PLUME were very nearly identical to the analytic solutions they are based on, and the evaluation procedure demonstrated that these models were able to reproduce the theoretical characteristics of a puff or a plume. The dose or concentration predicted by PUFF/PLUME was always within 1% of the analytic solution. Differences between the dose predicted by 2DPUF and its analytic solution were substantially greater than those associated with PUFF/PLUME, but were usually smaller than 6%. This behavior was expected because PUFF/PLUME solves a form of the analytic solution for a single puff, and 2DPUF performs an integration over a period of time for several puffs to obtain the dose. Relatively large differences between the dose predicted by 2DPUF and its analytic solution were found to occur close to the source under stable atmospheric conditions. WIND System users should be aware of these situations, in which the assumptions of the System atmospheric models may be violated, so that dose predictions can be interpreted correctly. The WIND System atmospheric models are similar to many other dispersion codes used by the EPA, NRC, and DOE. If the quality of the source term and meteorological data is high, relatively accurate and timely forecasts for emergency response situations can be made by the WIND System atmospheric models.
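The WIND System source is not shown here, but the kind of closed-form benchmark such a study relies on is the standard ground-reflected Gaussian plume solution. The following sketch (with hypothetical wind speed, dispersion parameters, and release height, and unit source strength by default) shows the analytic expression a plume code's output would be differenced against:

```python
import math

def gaussian_plume(y, z, u, sig_y, sig_z, H, Q=1.0):
    """Steady-state Gaussian plume concentration at crosswind offset y and
    height z, for wind speed u, dispersion parameters sig_y and sig_z
    (evaluated at the downwind distance of interest), and release height H.
    A mirror-image source below the ground enforces zero flux through it."""
    lateral = math.exp(-0.5 * (y / sig_y) ** 2)
    vertical = (math.exp(-0.5 * ((z - H) / sig_z) ** 2) +
                math.exp(-0.5 * ((z + H) / sig_z) ** 2))   # image term
    return Q * lateral * vertical / (2.0 * math.pi * u * sig_y * sig_z)

# Ground-level release, ground-level receptor: the reflection doubles the
# unreflected centreline value Q / (2*pi*u*sig_y*sig_z).
c = gaussian_plume(y=0.0, z=0.0, u=2.0, sig_y=10.0, sig_z=5.0, H=0.0)
print(c)
```

A numerical model's concentration field can be compared against this expression point by point, which is the style of comparison the study uses to bound PUFF/PLUME within 1% and 2DPUF within roughly 6% of the analytic solution.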
Two highly accurate methods for pitch calibration
NASA Astrophysics Data System (ADS)
Kniel, K.; Härtig, F.; Osawa, S.; Sato, O.
2009-11-01
Along with profile, helix, and tooth thickness, pitch is one of the most important parameters in involute gear measurement and evaluation. In principle, coordinate measuring machines (CMMs), and CNC-controlled gear measuring machines as a variant of a CMM, are suited for these kinds of gear measurements. Now the Japan National Institute of Advanced Industrial Science and Technology (NMIJ/AIST) and the German national metrology institute, the Physikalisch-Technische Bundesanstalt (PTB), have each independently developed highly accurate pitch calibration methods applicable to CMMs or gear measuring machines. Both calibration methods are based on the so-called closure technique, which allows the separation of the systematic errors of the measurement device from the errors of the gear. For the verification of both calibration methods, NMIJ/AIST and PTB performed measurements on a specially designed pitch artifact. The comparison of the results shows that both methods can be used for highly accurate calibrations of pitch standards.
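The closure technique mentioned above can be illustrated with a toy simulation. If an n-tooth artifact is measured in n rotational positions, every machine probe position eventually sees every tooth, so averaging separates the two error sources. The indexing convention, the zero-mean assumption on tooth deviations, and all numerical values below are hypothetical; this is a sketch of the idea, not either institute's actual procedure:

```python
import random

def closure_separation(measured):
    """Separate machine-position errors from artifact (tooth) deviations.
    measured[i][j] = tooth[(j - i) % n] + machine[j] for rotation i and
    probe position j.  Assumes the tooth deviations average to zero, so
    averaging over rotations isolates the machine error at each position."""
    n = len(measured)
    machine = [sum(measured[i][j] for i in range(n)) / n for j in range(n)]
    tooth = [measured[0][j] - machine[j] for j in range(n)]  # rotation i = 0
    return tooth, machine

# Simulate a hypothetical 12-tooth artifact on an imperfect machine.
random.seed(1)
n = 12
raw = [random.gauss(0.0, 0.5) for _ in range(n)]
mean = sum(raw) / n
tooth_true = [v - mean for v in raw]          # zero-mean tooth deviations
machine_true = [random.gauss(0.0, 0.5) for _ in range(n)]
measured = [[tooth_true[(j - i) % n] + machine_true[j] for j in range(n)]
            for i in range(n)]
tooth_est, machine_est = closure_separation(measured)
```

Within round-off, the estimated machine errors match the simulated ones, so subtracting them recovers the artifact's deviations independently of the machine, which is the sense in which closure separates device errors from gear errors.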
Accurate guitar tuning by cochlear implant musicians.
Lu, Thomas; Huang, Juan; Zeng, Fan-Gang
2014-01-01
Modern cochlear implant (CI) users understand speech but find difficulty in music appreciation due to poor pitch perception. Still, some deaf musicians continue to perform with their CI. Here we show unexpected results that CI musicians can reliably tune a guitar by CI alone and, under controlled conditions, match simultaneously presented tones to <0.5 Hz. One subject had normal contralateral hearing and produced more accurate tuning with CI than his normal ear. To understand these counterintuitive findings, we presented tones sequentially and found that tuning error was larger at ∼ 30 Hz for both subjects. A third subject, a non-musician CI user with normal contralateral hearing, showed similar trends in performance between CI and normal hearing ears but with less precision. This difference, along with electric analysis, showed that accurate tuning was achieved by listening to beats rather than discriminating pitch, effectively turning a spectral task into a temporal discrimination task. PMID:24651081
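The beat-listening strategy can be made concrete with the trigonometric product identity: two tones at nearby frequencies sum to a carrier at the mean frequency whose loudness envelope waxes and wanes at the difference frequency. The tone frequencies below are illustrative, not taken from the study:

```python
import math

f_ref, f_string = 440.0, 440.5      # reference tone and a slightly sharp string
beat_freq = abs(f_string - f_ref)   # envelope frequency: one beat per 2 s here

# sin(a) + sin(b) = 2 sin((a+b)/2) cos((a-b)/2): a carrier at the mean
# frequency, amplitude-modulated so that loudness maxima recur at |f1 - f2|.
def two_tone(t):
    return math.sin(2 * math.pi * f_ref * t) + math.sin(2 * math.pi * f_string * t)

def product_form(t):
    return (2 * math.sin(math.pi * (f_ref + f_string) * t)
            * math.cos(math.pi * (f_string - f_ref) * t))

# Verify the identity numerically over 4 s sampled at 1 kHz.
err = max(abs(two_tone(k / 1000.0) - product_form(k / 1000.0))
          for k in range(4001))
print(beat_freq, err)
```

A string 0.5 Hz sharp therefore beats once every two seconds against the reference, a purely temporal cue that remains usable even when the pitch difference itself is far below a CI listener's discrimination threshold, consistent with the electric analysis described above.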
Line gas sampling system ensures accurate analysis
Not Available
1992-06-01
Tremendous changes in the natural gas business have resulted in new approaches to the way natural gas is measured. Electronic flow measurement has altered the business forever, with developments in instrumentation and a new sensitivity to the importance of proper natural gas sampling techniques. This paper reports that YZ Industries Inc., Snyder, Texas, combined its 40 years of sampling experience with the latest in microprocessor-based technology to develop the KynaPak 2000 series, the first on-line natural gas sampling system that is both compact and extremely accurate. This means the composition of the sampled gas must be representative of the whole and related to flow. If so, relative measurement and sampling techniques are married, gas volumes are accurately accounted for and adjustments to composition can be made.
Accurate mask model for advanced nodes
NASA Astrophysics Data System (ADS)
Zine El Abidine, Nacer; Sundermann, Frank; Yesilada, Emek; Ndiaye, El Hadji Omar; Mishra, Kushlendra; Paninjath, Sankaranarayanan; Bork, Ingo; Buck, Peter; Toublan, Olivier; Schanen, Isabelle
2014-07-01
Standard OPC models consist of a physical optical model and an empirical resist model. The resist model compensates for the optical model's imprecision on top of modeling resist development. The optical model imprecision may result from mask topography effects and real mask information, including mask e-beam writing and mask process contributions. For advanced technology nodes, significant progress has been made in modeling mask topography to improve optical model accuracy. However, mask information is difficult to decorrelate from the standard OPC model. Our goal is to establish an accurate mask model through a dedicated calibration exercise. In this paper, we present a flow to calibrate an accurate mask model and enable its implementation. The study covers the different effects that should be embedded in the mask model as well as the experiments required to model them.
White, J A; Dutton, A W; Schmidt, J A; Roemer, R B
2000-01-01
An automated three-element meshing method for generating finite element based models for the accurate thermal analysis of blood vessels embedded in tissue has been developed and evaluated. The meshing method places eight-noded hexahedral elements inside the vessels, where advective flows exist, and four-noded tetrahedral elements in the surrounding tissue. The higher-order hexahedra are used where advective flow fields occur, since high accuracy is required and effective upwinding algorithms exist. Tetrahedral elements are placed in the remaining tissue region, since they are computationally more efficient and existing automatic tetrahedral mesh generators can be used. Five-noded pyramid elements connect the hexahedra and tetrahedra. A convective energy equation (CEE) based finite element algorithm solves for the temperature distributions in the flowing blood, while a finite element formulation of a generalized conduction equation is used in the surrounding tissue. Use of the CEE allows accurate solutions to be obtained without the necessity of assuming ad hoc values for heat transfer coefficients. Comparisons of the predictions of the three-element model to analytical solutions show that the three-element model accurately simulates temperature fields. Energy balance checks show that the three-element model has small, acceptable errors. In summary, this method provides an accurate, automatic finite element gridding procedure for thermal analysis of irregularly shaped tissue regions that contain important blood vessels. At present, the models so generated are relatively large (in order to obtain accurate results) and are thus best used for providing accurate reference values for checking other approximate formulations to complicated, conjugated blood heat transfer problems.
Application and Evaluation of Analytic Gaming
Riensche, Roderick M.; Martucci, Louis M.; Scholtz, Jean; Whiting, Mark A.
2009-08-31
We describe an "analytic gaming" framework and methodology, and introduce formal methods for evaluation of the analytic gaming process. This process involves conception, development, and playing of games that are informed by predictive models and driven by players. Evaluation of analytic gaming examines both the process of game development and the results of game play exercises.
Analytical models of steady-state plumes undergoing sequential first-order degradation.
Burnell, Daniel K; Mercer, James W; Sims, Lawrence S
2012-01-01
An exact, closed-form analytical solution is derived for one-dimensional (1D), coupled, steady-state advection-dispersion equations with sequential first-order degradation of three dissolved species in groundwater. Dimensionless and mathematical analyses are used to examine the sensitivity of longitudinal dispersivity in the parent and daughter analytical solutions. The results indicate that the relative error decreases to less than 15% for the 1D advection-dominated and advection-dispersion analytical solutions of the parent and daughter when the Damköhler number of the parent decreases to less than 1 (slow degradation rate) and the Peclet number increases to greater than 6 (advection-dominated). To estimate first-order daughter product rate constants in advection-dominated zones, 1D, two-dimensional (2D), and three-dimensional (3D) steady-state analytical solutions with zero longitudinal dispersivity are also derived for three first-order sequentially degrading compounds. The closed form of these exact analytical solutions has the advantage of having (1) no numerical integration or evaluation of complex-valued error function arguments, (2) computational efficiency compared to problems with long times to reach steady state, and (3) minimal effort for incorporation into spreadsheets. These multispecies analytical solutions indicate that BIOCHLOR produces accurate results for 1D steady-state applications with longitudinal dispersion. Although BIOCHLOR is inaccurate in multidimensional applications with longitudinal dispersion, these multidimensional multispecies analytical solutions indicate that BIOCHLOR produces accurate steady-state results when the longitudinal dispersion is zero. As an application, the 1D advection-dominated analytical solution is applied to estimate field-scale rate constants of 0.81, 0.74, and 0.69/year for trichloroethene, cis-1,2-dichloroethene, and vinyl chloride, respectively, at the Harris Palm Bay, FL, CERCLA site. PMID:21883193
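In the zero-dispersion (plug-flow) limit the abstract uses for rate estimation, the steady-state profiles reduce to Bateman-type expressions in the travel time t = x/v. The sketch below uses the field-scale rate constants quoted for the Harris Palm Bay site, but the source concentration and travel time are invented; it also assumes daughters are absent at the source and omits stoichiometric yield factors, so it is an illustration of the solution form rather than the paper's full solution:

```python
import math

def sequential_decay(c0, k1, k2, k3, t):
    """Steady-state plug-flow concentrations of parent -> daughter ->
    granddaughter after travel time t = x / v, with zero dispersion,
    daughters absent at the source, distinct rate constants, and unit
    yields (Bateman-type solution)."""
    e1, e2, e3 = (math.exp(-k * t) for k in (k1, k2, k3))
    c1 = c0 * e1
    c2 = c0 * k1 / (k2 - k1) * (e1 - e2)
    c3 = c0 * k1 * k2 * (e1 / ((k2 - k1) * (k3 - k1))
                         + e2 / ((k1 - k2) * (k3 - k2))
                         + e3 / ((k1 - k3) * (k2 - k3)))
    return c1, c2, c3

# Rates (1/yr) estimated in the abstract for TCE -> cis-DCE -> VC;
# the source concentration and travel time are illustrative only.
k_tce, k_dce, k_vc = 0.81, 0.74, 0.69
print(sequential_decay(100.0, k_tce, k_dce, k_vc, t=2.0))
```

The expressions assume distinct rate constants; repeated rates require the usual confluent limits. Checking the closed form against a direct numerical integration of the sequential decay ODEs is a quick way to confirm the algebra.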
Extremely accurate sequential verification of RELAP5-3D
Mesina, George L.; Aumiller, David L.; Buschman, Francis X.
2015-11-19
Large computer programs like RELAP5-3D solve complex systems of governing, closure and special process equations to model the underlying physics of nuclear power plants. Further, these programs incorporate many other features for physics, input, output, data management, user-interaction, and post-processing. For software quality assurance, the code must be verified and validated before being released to users. For RELAP5-3D, verification and validation are restricted to nuclear power plant applications. Verification means ensuring that the program is built right by checking that it meets its design specifications, comparing coding to algorithms and equations and comparing calculations against analytical solutions and the method of manufactured solutions. Sequential verification performs these comparisons initially, but thereafter only compares code calculations between consecutive code versions to demonstrate that no unintended changes have been introduced. Recently, an automated, highly accurate sequential verification method has been developed for RELAP5-3D. The method also provides tests that no unintended consequences result from code development in the following code capabilities: repeating a timestep advancement, continuing a run from a restart file, multiple cases in a single code execution, and modes of coupled/uncoupled operation. In conclusion, mathematical analyses of the adequacy of the checks used in the comparisons are provided.
Analytical transmission electron microscopy in materials science
Fraser, H.L.
1980-01-01
Microcharacterization of materials on a scale of less than 10 nm has been afforded by recent advances in analytical transmission electron microscopy. The factors limiting accurate analysis at the limit of spatial resolution for the case of a combination of scanning transmission electron microscopy and energy dispersive x-ray spectroscopy are examined in this paper.
Monsanto analytical testing program for NPDES discharge self-monitoring
Hoogheem, T.J.; Woods, L.A.
1985-06-01
The Monsanto Analytical Testing (MAT) program was devised and implemented in order to provide analytical standards to Monsanto manufacturing plants involved in the self-monitoring of plant discharges as required by National Pollutant Discharge Elimination System (NPDES) permit conditions. Standards are prepared and supplied at concentration levels normally observed at each individual plant. These levels were established by canvassing all Monsanto plants having NPDES permits and by determining which analyses and concentrations were most appropriate. Standards are prepared by Monsanto's Environmental Sciences Center (ESC) using Environmental Protection Agency (EPA) methods. Eleven standards are currently available, each in three concentrations. Standards are issued quarterly in a company internal round-robin program, on a per-request basis, or both. Since initiation of the MAT program in 1981, the internal round-robin program has become an integral part of Monsanto's overall Good Laboratory Practices (GLP) program. Overall, results have shown that the company's plant analytical personnel can accurately analyze and report standard test samples. More importantly, such personnel have gained increased confidence in their ability to report accurate values for compounds regulated in their respective plant NPDES permits. 3 references, 3 tables.
Analytical multikinks in smooth potentials
NASA Astrophysics Data System (ADS)
de Brito, G. P.; Correa, R. A. C.; de Souza Dutra, A.
2014-03-01
In this work we present an approach that can be systematically used to construct nonlinear systems possessing analytical multikink profile configurations. In contrast with previous approaches to the problem, we are able to do it by using field potentials that are considerably smoother than the ones of the doubly quadratic family of potentials. This is done without losing the capacity of writing exact analytical solutions. The resulting field configurations can be applied to the study of problems from condensed matter to braneworld scenarios.
Thermodynamics of Gas Turbine Cycles with Analytic Derivatives in OpenMDAO
NASA Technical Reports Server (NTRS)
Gray, Justin; Chin, Jeffrey; Hearn, Tristan; Hendricks, Eric; Lavelle, Thomas; Martins, Joaquim R. R. A.
2016-01-01
A new equilibrium thermodynamics analysis tool was built based on the CEA method using the OpenMDAO framework. The new tool provides forward and adjoint analytic derivatives for use with gradient-based optimization algorithms. The new tool was validated against the original CEA code to ensure an accurate analysis, and the analytic derivatives were validated against finite-difference approximations. Performance comparisons between analytic and finite-difference methods showed a significant speed advantage for the analytic methods. To further test the new analysis tool, a sample optimization was performed to find the optimal air-fuel equivalence ratio maximizing combustion temperature for a range of different pressures. Collectively, the results demonstrate the viability of the new tool to serve as the thermodynamic backbone for future work on a full propulsion modeling tool.
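The finite-difference cross-check described can be sketched generically; the toy function below stands in for a thermodynamic model output and is purely an illustrative assumption, not the CEA code:

```python
def model(x):
    """Toy stand-in for a model output (e.g. a temperature as a function of
    one design variable); chosen only so the derivative is known exactly."""
    return x**3 + 2.0 * x

def dmodel_analytic(x):
    """Hand-derived ('analytic') derivative of the toy model."""
    return 3.0 * x**2 + 2.0

def dmodel_fd(x, h=1e-6):
    """Central finite-difference approximation used to validate the analytic one."""
    return (model(x + h) - model(x - h)) / (2.0 * h)
```

The validation pattern is simply that the two derivative routes agree to within finite-difference truncation error, while the analytic route avoids the extra model evaluations that make finite differencing expensive inside a gradient-based optimizer.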
Health Informatics for Neonatal Intensive Care Units: An Analytical Modeling Perspective.
Khazaei, Hamzeh; Mench-Bressan, Nadja; McGregor, Carolyn; Pugh, James Edward
2015-01-01
The effective use of data within intensive care units (ICUs) has great potential to create new cloud-based health analytics solutions for disease prevention or earlier condition onset detection. The Artemis project aims to achieve the above goals in the area of neonatal ICUs (NICU). In this paper, we propose an analytical model for the Artemis cloud project, which will be deployed at McMaster Children's Hospital in Hamilton. We collect not only physiological data but also data from the infusion pumps attached to NICU beds. Using the proposed analytical model, we predict the amount of storage, memory, and computation power required for the system. Capacity planning and tradeoff analysis become more accurate and systematic when the proposed analytical model is applied. Numerical results are obtained using real inputs acquired from McMaster Children's Hospital and a pilot deployment of the system at The Hospital for Sick Children (SickKids) in Toronto. PMID:27170907
Analytical chemistry of nickel.
Stoeppler, M
1984-01-01
Analytical chemists are faced with nickel contents in environmental and biological materials ranging from the mg/kg down to the ng/kg level. Sampling and sample treatment have to be performed with great care at lower levels, and this also applies to enrichment and separation procedures. The classical determination methods formerly used have been replaced almost entirely by different forms of atomic absorption spectrometry. Electroanalytical methods are also of increasing importance and at present provide the most sensitive approach. Despite the powerful methods available, achieving reliable results is still a challenge for the analyst requiring proper quality control measures.
Multimedia Analysis plus Visual Analytics = Multimedia Analytics
Chinchor, Nancy; Thomas, James J.; Wong, Pak C.; Christel, Michael; Ribarsky, Martin W.
2010-10-01
Multimedia analysis has focused on images, video, and to some extent audio and has made progress in single channels excluding text. Visual analytics has focused on the user interaction with data during the analytic process plus the fundamental mathematics and has continued to treat text as did its precursor, information visualization. The general problem we address in this tutorial is the combining of multimedia analysis and visual analytics to deal with multimedia information gathered from different sources, with different goals or objectives, and containing all media types and combinations in common usage.
Accurate phase-shift velocimetry in rock.
Shukla, Matsyendra Nath; Vallatos, Antoine; Phoenix, Vernon R; Holmes, William M
2016-06-01
Spatially resolved Pulsed Field Gradient (PFG) velocimetry techniques can provide precious information concerning flow through opaque systems, including rocks. This velocimetry data is used to enhance flow models in a wide range of systems, from oil behaviour in reservoir rocks to contaminant transport in aquifers. Phase-shift velocimetry is the fastest way to produce velocity maps but critical issues have been reported when studying flow through rocks and porous media, leading to inaccurate results. Combining PFG measurements for flow through Bentheimer sandstone with simulations, we demonstrate that asymmetries in the molecular displacement distributions within each voxel are the main source of phase-shift velocimetry errors. We show that when flow-related average molecular displacements are negligible compared to self-diffusion ones, symmetric displacement distributions can be obtained while phase measurement noise is minimised. We elaborate a complete method for the production of accurate phase-shift velocimetry maps in rocks and low porosity media and demonstrate its validity for a range of flow rates. This development of accurate phase-shift velocimetry now enables more rapid and accurate velocity analysis, potentially helping to inform both industrial applications and theoretical models. PMID:27111139
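As background only (the standard PFG relation, not the authors' asymmetry-correction method), the velocity in each voxel is obtained from the measured phase shift; a minimal sketch, taking the narrow-pulse relation phi = gamma * G * delta * Delta * v as an assumption:

```python
GAMMA_H = 2.675e8  # 1H gyromagnetic ratio (rad s^-1 T^-1)

def velocity_from_phase(dphi, G, delta, Delta, gamma=GAMMA_H):
    """Mean voxel velocity from the PFG phase shift, assuming the standard
    narrow-pulse relation dphi = gamma * G * delta * Delta * v, where G is
    the gradient amplitude (T/m), delta its duration (s), and Delta the
    spacing between the two gradient pulses (s)."""
    return dphi / (gamma * G * delta * Delta)
```

The paper's point is that this linear phase-to-velocity mapping only holds when the intra-voxel displacement distribution is symmetric; otherwise the measured phase is biased, which is the error source the authors identify in rocks.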
NASA Astrophysics Data System (ADS)
Zhao, Xiao-mei; Xie, Dong-fan; Li, Qi
2015-02-01
With the development of intelligent transport systems, advanced information feedback strategies have been developed to reduce traffic congestion and enhance capacity. However, previous strategies provide accurate information to travelers, and our simulation results show that accurate information brings negative effects, especially in the delayed case: travelers prefer the best-condition route indicated by the information, yet delayed information reflects past rather than current traffic conditions. Travelers therefore make wrong routing decisions, decreasing the capacity, increasing oscillations, and driving the system away from equilibrium. To avoid this negative effect, bounded rationality is taken into account by introducing a boundedly rational threshold BR. When the difference between two routes is less than BR, the routes are chosen with equal probability. Bounded rationality is shown to improve efficiency in terms of capacity, oscillation, and the gap from the system equilibrium.
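A minimal sketch of the boundedly rational choice rule described (the threshold name BR comes from the abstract; everything else here is an illustrative assumption):

```python
import random

def choose_route(travel_costs, br):
    """Boundedly rational route choice: any route whose cost is within the
    threshold br of the best route is treated as equally good, and one of
    those near-ties is picked uniformly at random."""
    best = min(travel_costs)
    near_ties = [i for i, c in enumerate(travel_costs) if c - best <= br]
    return random.choice(near_ties)
```

With br = 0 this collapses to always taking the apparently best route (the behaviour the abstract identifies as harmful under delayed information); a positive br spreads travelers across near-equivalent routes instead.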
Analytical Challenges in Biotechnology.
ERIC Educational Resources Information Center
Glajch, Joseph L.
1986-01-01
Highlights five major analytical areas (electrophoresis, immunoassay, chromatographic separations, protein and DNA sequencing, and molecular structures determination) and discusses how analytical chemistry could further improve these techniques and thereby have a major impact on biotechnology. (JN)
Abrupt PN junctions: Analytical solutions under equilibrium and non-equilibrium
NASA Astrophysics Data System (ADS)
Khorasani, Sina
2016-08-01
We present an explicit solution of carrier and field distributions in abrupt PN junctions under equilibrium. An accurate logarithmic numerical method is implemented and results are compared to the analytical solutions. Analysis of the results shows reasonable agreement with the numerical solution as well as the depletion-layer approximation. We discuss extensions to asymmetric junctions. Approximate relations for the differential capacitance C-V and current-voltage I-V characteristics are also found under non-zero external bias.
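For context, the depletion-layer approximation that the abstract benchmarks against reduces to a few closed-form textbook expressions; a hedged numeric sketch for a silicon abrupt junction at 300 K (the material constants are generic assumptions, not values from the paper):

```python
import math

Q = 1.602e-19              # elementary charge (C)
VT = 0.02585               # thermal voltage kT/q at 300 K (V)
EPS_SI = 11.7 * 8.854e-12  # permittivity of silicon (F/m)
NI = 1.0e16                # approximate intrinsic carrier density of Si (m^-3)

def depletion_approximation(Na, Nd, Va=0.0):
    """Built-in potential, depletion width, and junction capacitance per unit
    area for an abrupt PN junction, under the depletion approximation.
    Na, Nd: acceptor/donor densities (m^-3); Va: applied bias (V, reverse < 0)."""
    Vbi = VT * math.log(Na * Nd / NI**2)
    W = math.sqrt(2.0 * EPS_SI / Q * (1.0 / Na + 1.0 / Nd) * (Vbi - Va))
    C = EPS_SI / W  # parallel-plate form of the differential capacitance
    return Vbi, W, C
```

The C here is the small-signal C-V quantity the abstract's approximate relations refer to: reverse bias widens W and therefore lowers the capacitance.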
Analyticity without Differentiability
ERIC Educational Resources Information Center
Kirillova, Evgenia; Spindler, Karlheinz
2008-01-01
In this article we derive all salient properties of analytic functions, including the analytic version of the inverse function theorem, using only the most elementary convergence properties of series. Not even the notion of differentiability is required to do so. Instead, analytical arguments are replaced by combinatorial arguments exhibiting…
[Application of analytical pyrolysis in air pollution control for green sand casting industry].
Wang, Yu-jue; Zhao, Qi; Chen, Ying; Wang, Cheng-wen
2010-02-01
Analytical pyrolysis was conducted to simulate the heating conditions that the raw materials of green sand would experience during the metal casting process. The volatile organic compound (VOC) and hazardous air pollutant (HAP) emissions from analytical pyrolysis were analyzed by gas chromatograph-flame ionization detector/mass spectrometry (GC-FID/MS). The emissions from analytical pyrolysis exhibited some similarity in compositions and distributions with those from actual casting processes. The major compositions of the emissions included benzene, toluene and phenol. The relative changes of emission levels that were observed in analytical pyrolysis of the various raw materials also showed similar trends with those observed in actual metal casting processes. The emission testing results of both analytical pyrolysis and the pre-production foundry have shown that compared to the conventional phenolic urethane binder, the new non-naphthalene phenolic urethane binder diminished more than 50% of polycyclic aromatic hydrocarbon emissions, and the protein-based binder diminished more than 90% of HAP emissions. The similar trends in the two sets of tests offered promise that analytical pyrolysis techniques could be a fast and accurate way to establish emission inventories, and to evaluate the relative emission levels of various raw materials in the casting industry. The results of analytical pyrolysis could provide useful guides for foundries to select and develop proper clean raw materials for casting production. PMID:20391731
NASA Astrophysics Data System (ADS)
Schnase, J. L.; Duffy, D. Q.; McInerney, M. A.; Tamkin, G. S.; Thompson, J. H.; Gill, R.; Grieg, C. M.
2012-12-01
MERRA Analytic Services (MERRA/AS) is a cyberinfrastructure resource for developing and evaluating a new generation of climate data analysis capabilities. MERRA/AS supports OBS4MIP activities by reducing the time spent in the preparation of Modern Era Retrospective-Analysis for Research and Applications (MERRA) data used in data-model intercomparison. It also provides a testbed for experimental development of high-performance analytics. MERRA/AS is a cloud-based service built around the Virtual Climate Data Server (vCDS) technology that is currently used by the NASA Center for Climate Simulation (NCCS) to deliver Intergovernmental Panel on Climate Change (IPCC) data to the Earth System Grid Federation (ESGF). Crucial to its effectiveness, MERRA/AS's servers will use a workflow-generated realizable object capability to perform analyses over the MERRA data using the MapReduce approach to parallel storage-based computation. The results produced by these operations will be stored by the vCDS, which will also be able to host code sets for those who wish to explore the use of MapReduce for more advanced analytics. While the work described here will focus on the MERRA collection, these technologies can be used to publish other reanalysis, observational, and ancillary OBS4MIP data to ESGF and, importantly, offer an architectural approach to climate data services that can be generalized to applications and customers beyond the traditional climate research community. In this presentation, we describe our approach, experiences, lessons learned, and plans for the future. (Figure: (A) MERRA/AS software stack. (B) Example MERRA/AS interfaces.)
Proficiency analytical testing program
Groff, J.H.; Schlecht, P.C.
1994-03-01
The Proficiency Analytical Testing (PAT) Program is a collaborative effort of the American Industrial Hygiene Association (AIHA) and researchers at the Centers for Disease Control and Prevention (CDC), National Institute for Occupational Safety and Health (NIOSH). The PAT Program provides quality control reference samples to over 1400 occupational health and environmental laboratories in over 15 countries. Although one objective of the PAT Program is to evaluate the analytical ability of participating laboratories, the primary objective is to assist these laboratories in improving their laboratory performance. Each calendar quarter (designated a round), samples are mailed to participating laboratories and the data are analyzed to evaluate laboratory performance on a series of analyses. Each mailing and subsequent data analysis is completed in time for participants to obtain repeat samples and to correct analytical problems before the next calendar quarter starts. The PAT Program currently includes four sets of samples. A mixture of 3 of the 4 possible metals, and 3 of the 15 possible organic solvents are rotated for each round. Laboratories are evaluated for each analysis by comparing their reported results against an acceptable performance limit for each PAT Program sample the laboratory analyzes. Reference laboratories are preselected to provide the performance limits for each sample. These reference laboratories must meet the following criteria: (1) the laboratory was rated proficient in the last PAT evaluation of all the contaminants in the Program; and (2) the laboratory, if located in the United States, is AIHA accredited. Data are acceptable if they fall within the performance limits. Laboratories are rated based upon performance in the PAT Program over the last year (i.e., four calendar quarters), as well as on individual contaminant performance and overall performance. 1 ref., 3 tabs.
Proficiency analytical testing program
Schlecht, P.C.; Groff, J.H.
1994-06-01
The Proficiency Analytical Testing (PAT) Program is a collaborative effort of the American Industrial Hygiene Association (AIHA) and researchers at the Centers for Disease Control and Prevention, National Institute for Occupational Safety and Health (NIOSH). The PAT Program provides quality control reference samples to over 1400 occupational health and environmental laboratories in over 15 countries. Although one objective of the PAT Program is to evaluate the analytical ability of participating laboratories, the primary objective is to assist these laboratories in improving their laboratory performance. Each calendar quarter (designated a round), samples are mailed to participating laboratories and the data are analyzed to evaluate laboratory performance on a series of analyses. Each mailing and subsequent data analysis is completed in time for participants to obtain repeat samples and to correct analytical problems before the next calendar quarter starts. The PAT Program currently includes four sets of samples. A mixture of 3 of the 4 possible metals, and 3 of the 15 possible organic solvents are rotated for each round. Laboratories are evaluated for each analysis by comparing their reported results against an acceptable performance limit for each PAT Program sample the laboratory analyzes. Reference laboratories are preselected to provide the performance limits for each sample. These reference laboratories must meet the following criteria: (1) the laboratory was rated proficient in the last PAT evaluation of all the contaminants in the Program; and (2) the laboratory, if located in the United States, is AIHA accredited. Data are acceptable if they fall within the performance limits. Laboratories are rated based upon performance in the PAT Program over the last year (i.e., four calendar quarters), as well as on individual contaminant performance and overall performance. 1 ref., 3 tabs.
Accurate density functional thermochemistry for larger molecules.
Raghavachari, K.; Stefanov, B. B.; Curtiss, L. A.; Lucent Tech.
1997-06-20
Density functional methods are combined with isodesmic bond separation reaction energies to yield accurate thermochemistry for larger molecules. Seven different density functionals are assessed for the evaluation of heats of formation, ΔHf°(298 K), for a test set of 40 molecules composed of H, C, O, and N. The use of bond separation energies results in a dramatic improvement in the accuracy of all the density functionals. The B3-LYP functional has the smallest mean absolute deviation from experiment (1.5 kcal/mol).
Analytical calculation of spectral phase of grism pairs by the geometrical ray tracing method
NASA Astrophysics Data System (ADS)
Rahimi, L.; Askari, A. A.; Saghafifar, H.
2016-07-01
Optimal operation of a grism pair is practically achievable only when an analytical expression for its spectral phase is in hand. In this paper, we have employed the accurate geometrical ray tracing method to calculate the analytical phase shift of a grism pair in transmission and reflection configurations. As shown by the results, for a great variety of complicated configurations, the spectral phase of a grism pair takes the same form as that of a prism pair. The only exception is when the light enters into and exits from different facets of a reflection grism. The analytical result has been used to calculate the second-order dispersion of several examples of grism pairs in various possible configurations. All results are in complete agreement with those from the ray tracing method. The result of this work can be very helpful in the optimal design and application of grism pairs in various configurations.
The analytical validation of the Oncotype DX Recurrence Score assay
Baehner, Frederick L
2016-01-01
In vitro diagnostic multivariate index assays are highly complex molecular assays that can provide clinically actionable information regarding the underlying tumour biology and facilitate personalised treatment. These assays are only useful in clinical practice if all of the following are established: analytical validation (i.e., how accurately/reliably the assay measures the molecular characteristics), clinical validation (i.e., how consistently/accurately the test detects/predicts the outcomes of interest), and clinical utility (i.e., how likely the test is to significantly improve patient outcomes). In considering the use of these assays, clinicians often focus primarily on the clinical validity/utility; however, the analytical validity of an assay (e.g., its accuracy, reproducibility, and standardisation) should also be evaluated and carefully considered. This review focuses on the rigorous analytical validation and performance of the Oncotype DX® Breast Cancer Assay, which is performed at the Central Clinical Reference Laboratory of Genomic Health, Inc. The assay process includes tumour tissue enrichment (if needed), RNA extraction, gene expression quantitation (using a gene panel consisting of 16 cancer genes plus 5 reference genes and quantitative real-time RT-PCR), and an automated computer algorithm to produce a Recurrence Score® result (scale: 0–100). This review presents evidence showing that the Recurrence Score result reported for each patient falls within a tight clinically relevant confidence interval. Specifically, the review discusses how the development of the assay was designed to optimise assay performance, presents data supporting its analytical validity, and describes the quality control and assurance programmes that ensure optimal test performance over time. PMID:27729940
NASA Astrophysics Data System (ADS)
Omori, Takayuki; Sano, Katsuhiro; Yoneda, Minoru
2014-05-01
This paper presents new correction approaches for "early" radiocarbon ages to reconstruct the Paleolithic absolute chronology. To discuss the time-space distribution of the replacement of archaic humans, including Neanderthals in Europe, by modern humans, a massive dataset covering a wide area is needed. Today, several radiocarbon databases focused on the Paleolithic have been published and used for chronological studies. From the viewpoint of current analytical technology, however, these databases contain unreliable results that make interpretation of radiocarbon dates difficult. Most of these unreliable ages were published in the early days of radiocarbon analysis. In recent years, new analytical methods to determine highly accurate dates have been developed. Ultrafiltration and ABOx-SC methods, as new sample pretreatments for bone and charcoal respectively, have attracted attention because they can remove imperceptible contaminants and yield reliably accurate ages. To evaluate the reliability of "early" data, we investigated the differences and variability of radiocarbon ages under different pretreatments, and attempted to develop correction functions for assessing reliability. The reliability of the corrected ages can thereby be increased, and the corrected ages can be applied to chronological research together with recent measurements. Here, we introduce the methodological framework and archaeological applications.
Joint iris boundary detection and fit: a real-time method for accurate pupil tracking.
Barbosa, Marconi; James, Andrew C
2014-08-01
A range of applications in visual science rely on accurate tracking of the human pupil's movement and contraction in response to light. While the literature for independent contour detection and fitting of the iris-pupil boundary is vast, a joint approach, in which it is assumed that the pupil has a given geometric shape, has been largely overlooked. We present here a global method for simultaneously finding and fitting an elliptic or circular contour against a dark interior, which produces consistently accurate results even under non-ideal recording conditions, such as reflections near and over the boundary, droopy eyelids, or the sudden formation of tears. The specific form of the proposed optimization problem allows us to write down closed analytic formulae for the gradient and the Hessian of the objective function. Moreover, both the objective function and its derivatives can be cast into vectorized form, making the proposed algorithm significantly faster than its closest relative in the literature. We compare methods in multiple ways, both analytically and numerically, using real iris images as well as idealizations of the iris for which the ground truth boundary is precisely known. The method proposed here is illustrated under challenging recording conditions and it is shown to be robust. PMID:25136477
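The authors' joint find-and-fit method is more elaborate, but the closed-form character of circular-boundary fitting can be illustrated with the classic algebraic (Kåsa) least-squares circle fit, which reduces to one small linear system (a standard technique shown for illustration, not the paper's algorithm):

```python
import numpy as np

def fit_circle(x, y):
    """Kasa algebraic circle fit: writing (x-a)^2 + (y-b)^2 = r^2 as
    x^2 + y^2 = 2ax + 2by + c with c = r^2 - a^2 - b^2 makes the problem
    linear in (a, b, c), solvable by ordinary least squares."""
    A = np.column_stack((2.0 * x, 2.0 * y, np.ones_like(x)))
    rhs = x**2 + y**2
    (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    return a, b, np.sqrt(c + a**2 + b**2)
```

Because the fit is linear, the gradient and Hessian of the residual are available in closed form, the same property the paper exploits for speed in its (more robust) formulation.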
Esch, R.A.
1998-03-12
This document is the final report for tank 241-TX-302C grab samples. Six grab samples (302C-TX-97-1A, 302C-TX-97-1B, 302C-TX-97-2A, 302C-TX-97-2B, 302C-TX-97-3A, and 302C-TX-97-3B) were collected from the catch tank level gauge riser on December 19, 1997. The "A" and "B" portions from each sample location were composited, and analyses were performed on the composites in accordance with the Compatibility Grab Sampling and Analysis Plan (TSAP) (Sasaki, 1997) and the Data Quality Objectives for Tank Farms Waste Compatibility Program (DQO) (Rev. 1: Fowler, 1995; Rev. 2: Mulkey and Miller, 1997). The analytical results are presented in Table 1. No notification limits were exceeded. Regarding appearance and sample handling, Attachment 1 is provided as a cross-reference for relating the tank farm customer identification numbers with the 222-S Laboratory sample numbers and the portion of sample analyzed. Table 2 provides the appearance information.
Koch, Nicholas C; Newhauser, Wayne D
2010-02-01
Proton beam radiotherapy is an effective and non-invasive treatment for uveal melanoma. Recent research efforts have focused on improving the dosimetric accuracy of treatment planning and overcoming the present limitation of relative analytical dose calculations. Monte Carlo algorithms have been shown to accurately predict dose per monitor unit (D/MU) values, but this has yet to be shown for analytical algorithms dedicated to ocular proton therapy, which are typically less computationally expensive than Monte Carlo algorithms. The objective of this study was to determine if an analytical method could predict absolute dose distributions and D/MU values for a variety of treatment fields like those used in ocular proton therapy. To accomplish this objective, we used a previously validated Monte Carlo model of an ocular nozzle to develop an analytical algorithm to predict three-dimensional distributions of D/MU values from pristine Bragg peaks and therapeutically useful spread-out Bragg peaks (SOBPs). Results demonstrated generally good agreement between the analytical and Monte Carlo absolute dose calculations. While agreement in the proximal region decreased for beams with less penetrating Bragg peaks compared with the open-beam condition, the difference was shown to be largely attributable to edge-scattered protons. A method for including this effect in any future analytical algorithm was proposed. Comparisons of D/MU values showed typical agreement to within 0.5%. We conclude that analytical algorithms can be employed to accurately predict absolute proton dose distributions delivered by an ocular nozzle.
A New Analytic Alignment Method for a SINS.
Tan, Caiming; Zhu, Xinhua; Su, Yan; Wang, Yu; Wu, Zhiqiang; Gu, Dongbing
2015-01-01
Analytic alignment is a type of self-alignment for a strapdown inertial navigation system (SINS) that is based solely on two non-collinear vectors: the gravity and Earth-rotation vectors at a stationary base on the ground. The attitude of the SINS with respect to the Earth can be obtained directly from these two vector measurements using the TRIAD algorithm. In traditional analytic coarse alignment, all six outputs of the inertial measurement unit (IMU) are used to compute the attitude. In this study, a novel analytic alignment method called selective alignment is presented. This method uses only three outputs of the IMU, together with a few properties of the remaining outputs, such as their signs and approximate values, to calculate the attitude. Simulations and experimental results demonstrate the validity of this method, and in the vehicle experiment the yaw precision is improved with selective alignment compared to traditional analytic coarse alignment. The selective alignment principle provides an accurate relationship between the outputs and the attitude of the SINS relative to the Earth for a stationary base, and it is an extension of the TRIAD algorithm. The selective alignment approach has potential uses in applications such as self-alignment, fault detection, and self-calibration.
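The TRIAD step at the heart of analytic alignment can be sketched in a few lines. A minimal numpy illustration with assumed gravity and Earth-rate reference vectors (the numerical values and frame conventions are illustrative, not taken from the paper):

```python
import numpy as np

def triad(v1_ref, v2_ref, v1_body, v2_body):
    """TRIAD: attitude matrix R mapping body-frame vectors into the
    reference frame, from two non-collinear vector observations."""
    def frame(a, b):
        t1 = a / np.linalg.norm(a)
        t2 = np.cross(a, b)
        t2 /= np.linalg.norm(t2)
        t3 = np.cross(t1, t2)
        return np.column_stack((t1, t2, t3))
    return frame(v1_ref, v2_ref) @ frame(v1_body, v2_body).T

# Gravity and Earth-rotation vectors in a local reference frame
# (illustrative values; any two non-collinear vectors work).
g_ref = np.array([0.0, 0.0, -9.81])
w_ref = np.array([0.0, 5.5e-5, 4.5e-5])

# A known attitude used to generate synthetic body-frame measurements.
ang = np.deg2rad(30.0)
R_true = np.array([[np.cos(ang), -np.sin(ang), 0.0],
                   [np.sin(ang),  np.cos(ang), 0.0],
                   [0.0,          0.0,         1.0]])
g_body = R_true.T @ g_ref
w_body = R_true.T @ w_ref

R_est = triad(g_ref, w_ref, g_body, w_body)
```

With noise-free measurements the estimate reproduces the true attitude exactly; in practice the accuracy is limited by IMU noise, which is what motivates refinements such as the selective alignment above.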
Accurate measurement of unsteady state fluid temperature
NASA Astrophysics Data System (ADS)
Jaremkiewicz, Magdalena
2016-07-01
In this paper, two accurate methods for determining transient fluid temperature are presented. Measurements were conducted in boiling water, whose temperature is known. The thermometers start at ambient temperature and are then suddenly immersed in the saturated water. The measurements were carried out with two thermometers of different construction but with the same housing outer diameter of 15 mm. One of them is a K-type industrial thermometer that is widely available commercially; the temperature it indicated was corrected by treating the thermometer as a first- or second-order inertia device. A new thermometer design was also proposed and used to measure the temperature of boiling water. Its characteristic feature is a cylinder-shaped housing with a sheathed thermocouple located at its center. The fluid temperature was determined from measurements taken at the axis of the solid cylindrical housing using the inverse space marching method. Measurements of the transient temperature of air flowing through a wind tunnel were also carried out with the same thermometers. The proposed technique provides more accurate results than industrial thermometers combined with a simple first- or second-order inertia correction. The comparison demonstrates that the new thermometer yields the fluid temperature much faster and with higher accuracy than the industrial thermometer. Accurate measurement of rapidly changing fluid temperature is possible thanks to the low-inertia thermometer and the fast space marching method applied to the inverse heat conduction problem.
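The first- or second-order inertia correction mentioned above is easy to illustrate for the first-order case: the model tau*dT_m/dt + T_m = T_f implies T_f = T_m + tau*dT_m/dt. A sketch with an assumed time constant and an ideal step response (not the paper's measurement data):

```python
import numpy as np

tau = 8.0            # thermometer time constant, s (assumed)
T_fluid = 100.0      # boiling water, deg C
T_amb = 20.0         # initial thermometer temperature, deg C

# Ideal first-order response of the immersed thermometer.
t = np.linspace(0.0, 40.0, 2001)
T_meas = T_fluid + (T_amb - T_fluid) * np.exp(-t / tau)

# Correction: recover the fluid temperature from the measured signal
# and its time derivative (central differences via np.gradient).
dTdt = np.gradient(T_meas, t)
T_rec = T_meas + tau * dTdt
```

The raw reading is still far from 100 deg C at the end of the record, while the corrected signal recovers the fluid temperature almost immediately; with real, noisy data the derivative would need smoothing, which is one motivation for the inverse-method thermometer in the paper.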
The analytic renormalization group
NASA Astrophysics Data System (ADS)
Ferrari, Frank
2016-08-01
Finite-temperature Euclidean two-point functions in quantum mechanics or quantum field theory are characterized by a discrete set of Fourier coefficients Gk, k ∈ Z, associated with the Matsubara frequencies νk = 2πk/β. We show that analyticity implies that the coefficients Gk must satisfy an infinite number of model-independent linear equations that we write down explicitly. In particular, we construct "Analytic Renormalization Group" linear maps Aμ which, for any choice of cut-off μ, allow one to express the low-energy Fourier coefficients for |νk| < μ (with the possible exception of the zero mode G0), together with the real-time correlators and spectral functions, in terms of the high-energy Fourier coefficients for |νk| ≥ μ. Using a simple numerical algorithm, we show that the exact universal linear constraints on Gk can be used to systematically improve any noisy approximate data set obtained, for example, from Monte Carlo simulations. Our results are illustrated on several explicit examples.
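As context for the notation, the Matsubara coefficients of a single decaying mode can be checked numerically against their closed form (assumed parameter values; this illustrates the Fourier setup, not the paper's construction of the Aμ maps):

```python
import numpy as np

beta, omega = 2.0, 1.3   # inverse temperature and mode energy (assumed)

# Euclidean correlator on [0, beta] and its Fourier coefficient at the
# Matsubara frequency nu_k = 2*pi*k/beta.  For G(tau) = exp(-omega*tau)
# the closed form is G_k = (1 - exp(-beta*omega)) / (omega - i*nu_k).
tau = np.linspace(0.0, beta, 200001)
dt = tau[1] - tau[0]
G_tau = np.exp(-omega * tau)

def fourier_coeff(k):
    f = np.exp(1j * 2.0 * np.pi * k / beta * tau) * G_tau
    return dt * (f.sum() - 0.5 * (f[0] + f[-1]))   # trapezoidal rule

def exact_coeff(k):
    nu_k = 2.0 * np.pi * k / beta
    return (1.0 - np.exp(-beta * omega)) / (omega - 1j * nu_k)
```

The coefficients decay like 1/|νk| at large k, which is why relations tying the low-frequency to the high-frequency coefficients carry nontrivial information.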
NASA Astrophysics Data System (ADS)
Papp, P.; Matejčík, Š.; Mach, P.; Urban, J.; Paidarová, I.; Horáček, J.
2013-06-01
The method of analytic continuation in the coupling constant (ACCC), combined with the statistical Padé approximation, is applied to determine the resonance energies and widths of some amino acids and of the formic acid dimer. Standard quantum chemistry codes provide accurate data that can be used for analytic continuation in the coupling constant to obtain the resonance energy and width of organic molecules with good accuracy. The results obtained are compared with existing experimental data.
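The statistical Padé step builds on ordinary Padé approximants, which extend a truncated power series beyond the region where its partial sums are useful. A numpy sketch of a generic [L/M] construction, tested on the textbook series for ln(1+x) rather than on ACCC data:

```python
import numpy as np

def pade(c, L, M):
    """Pade approximant [L/M] from Taylor coefficients c[0..L+M].
    Returns (p, q), coefficient arrays (lowest order first), q[0] = 1."""
    c = np.asarray(c, dtype=float)
    # Denominator: sum_{j=1..M} q_j * c[L+i-j] = -c[L+i],  i = 1..M
    A = np.array([[c[L + i - j] if L + i - j >= 0 else 0.0
                   for j in range(1, M + 1)] for i in range(1, M + 1)])
    q = np.concatenate(([1.0], np.linalg.solve(A, -c[L + 1:L + M + 1])))
    # Numerator follows from matching the low-order terms.
    p = np.array([sum(q[j] * c[k - j] for j in range(0, min(k, M) + 1))
                  for k in range(L + 1)])
    return p, q

# Taylor coefficients of ln(1+x): 0, 1, -1/2, 1/3, -1/4
c = [0.0, 1.0, -0.5, 1.0 / 3.0, -0.25]
p, q = pade(c, 2, 2)

x = 1.0
approx = np.polyval(p[::-1], x) / np.polyval(q[::-1], x)
```

At x = 1 the truncated series gives 0.583 against ln 2 = 0.693, while the [2/2] approximant is accurate to about 1e-3, which is the qualitative gain the ACCC method exploits.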
38 CFR 4.46 - Accurate measurement.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 38 Pensions, Bonuses, and Veterans' Relief 1 2010-07-01 2010-07-01 false Accurate measurement. 4... RATING DISABILITIES Disability Ratings The Musculoskeletal System § 4.46 Accurate measurement. Accurate measurement of the length of stumps, excursion of joints, dimensions and location of scars with respect...
38 CFR 4.46 - Accurate measurement.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 38 Pensions, Bonuses, and Veterans' Relief 1 2013-07-01 2013-07-01 false Accurate measurement. 4... RATING DISABILITIES Disability Ratings The Musculoskeletal System § 4.46 Accurate measurement. Accurate measurement of the length of stumps, excursion of joints, dimensions and location of scars with respect...
38 CFR 4.46 - Accurate measurement.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 38 Pensions, Bonuses, and Veterans' Relief 1 2011-07-01 2011-07-01 false Accurate measurement. 4... RATING DISABILITIES Disability Ratings The Musculoskeletal System § 4.46 Accurate measurement. Accurate measurement of the length of stumps, excursion of joints, dimensions and location of scars with respect...
38 CFR 4.46 - Accurate measurement.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 38 Pensions, Bonuses, and Veterans' Relief 1 2014-07-01 2014-07-01 false Accurate measurement. 4... RATING DISABILITIES Disability Ratings The Musculoskeletal System § 4.46 Accurate measurement. Accurate measurement of the length of stumps, excursion of joints, dimensions and location of scars with respect...
38 CFR 4.46 - Accurate measurement.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 38 Pensions, Bonuses, and Veterans' Relief 1 2012-07-01 2012-07-01 false Accurate measurement. 4... RATING DISABILITIES Disability Ratings The Musculoskeletal System § 4.46 Accurate measurement. Accurate measurement of the length of stumps, excursion of joints, dimensions and location of scars with respect...
Accurate determination of characteristic relative permeability curves
NASA Astrophysics Data System (ADS)
Krause, Michael H.; Benson, Sally M.
2015-09-01
A recently developed technique to accurately characterize sub-core-scale heterogeneity is applied to investigate the factors responsible for flowrate-dependent effective relative permeability curves measured on core samples in the laboratory. The dependence of laboratory-measured relative permeability on flowrate has long been both supported and challenged by a number of investigators. Studies have shown that this apparent flowrate dependence results from both sub-core-scale heterogeneity and outlet boundary effects; however, this has only been demonstrated numerically for highly simplified models of porous media. In this paper, flowrate dependence of effective relative permeability is demonstrated using two rock cores: a Berea Sandstone and a heterogeneous sandstone from the Otway Basin Pilot Project in Australia. Numerical simulations of steady-state coreflooding experiments are conducted at a number of injection rates using a single set of input characteristic relative permeability curves. Effective relative permeability is then calculated from the simulation data using standard interpretation methods for steady-state tests. Results show that simplified approaches may be used to determine flowrate-independent characteristic relative permeability provided the flow rate is sufficiently high and the core heterogeneity is relatively low. It is also shown that characteristic relative permeability can be determined at any typical flowrate, and even for geologically complex models, when using accurate three-dimensional models.
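The standard steady-state interpretation mentioned above reduces to Darcy's law applied to each phase: kr = q*mu*L / (k*A*dP) at steady state. A sketch with assumed core and fluid properties (illustrative numbers, not the Berea or Otway data):

```python
import numpy as np

def rel_perm(q, mu, L, k_abs, A, dP):
    """Effective relative permeability of one phase from a steady-state
    core flood: q [m^3/s], mu [Pa*s], L [m], k_abs [m^2], A [m^2], dP [Pa]."""
    return q * mu * L / (k_abs * A * dP)

# Illustrative core-flood numbers (assumed).
k_abs = 1.0e-13               # absolute permeability, ~100 mD
A = np.pi * 0.025 ** 2        # 5 cm diameter core
L = 0.1                       # 10 cm long
mu_w, mu_n = 1.0e-3, 5.0e-5   # water and CO2-like viscosities
q_w, q_n = 2.0e-8, 2.0e-8     # co-injection rates
dP = 4.0e4                    # measured steady-state pressure drop, Pa

krw = rel_perm(q_w, mu_w, L, k_abs, A, dP)
krn = rel_perm(q_n, mu_n, L, k_abs, A, dP)
```

The flowrate dependence discussed in the paper enters because, in a heterogeneous core, the measured dP does not correspond to a uniform saturation, so the kr values returned by this formula become "effective" rather than characteristic.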
NASA Technical Reports Server (NTRS)
Yarrow, Maurice; Vastano, John A.; Lomax, Harvard
1992-01-01
Generic shapes are subjected to pulsed plane waves of arbitrary shape. The resulting scattered electromagnetic fields are determined analytically. These fields are then computed efficiently at field locations for which numerically determined EM fields are required. Of particular interest are the pulsed waveform shapes typically utilized by radar systems. The results can be used to validate the accuracy of finite difference time domain Maxwell's equations solvers. A two-dimensional solver which is second- and fourth-order accurate in space and fourth-order accurate in time is examined. Dielectric media properties are modeled by a ramping technique which simplifies the associated gridding of body shapes. The attributes of the ramping technique are evaluated by comparison with the analytic solutions.
Magnetic anomaly depth and structural index estimation using different height analytic signals data
NASA Astrophysics Data System (ADS)
Zhou, Shuai; Huang, Danian; Su, Chao
2016-09-01
This paper proposes a new semi-automatic inversion method for magnetic anomaly data interpretation that uses the combination of analytic signals of the anomaly at different heights to determine the depth and the structural index N of the sources. The new method utilizes analytic signals of the original anomaly at different height to effectively suppress the noise contained in the anomaly. Compared with the other high-order derivative calculation methods based on analytic signals, our method only computes first-order derivatives of the anomaly, which can be used to obtain more stable and accurate results. Tests on synthetic noise-free and noise-corrupted magnetic data indicate that the new method can estimate the depth and N efficiently. The technique is applied to a real measured magnetic anomaly in Southern Illinois caused by a known dike, and the result is in agreement with the drilling information and inversion results within acceptable calculation error.
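The analytic-signal amplitude underlying such methods, |A(x)| = sqrt((dT/dx)^2 + (dT/dz)^2), can be illustrated on a synthetic profile; for 2D profile data the vertical derivative is, up to sign, the Hilbert transform of the horizontal derivative. A numpy sketch with an assumed horizontal-cylinder (2D line) source, not the Illinois dike data:

```python
import numpy as np

def analytic_signal(s):
    """FFT construction of the analytic signal s + i*H{s} (Marple)."""
    n = s.size
    S = np.fft.fft(s)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.fft.ifft(S * h)

# Synthetic anomaly of a 2D line source at x0 = 0, depth z0 = 10
# (illustrative shape: the real part of -1/(x - i*z0)^2).
x = np.linspace(-200.0, 200.0, 4001)
x0, z0 = 0.0, 10.0
T = (z0 ** 2 - (x - x0) ** 2) / ((x - x0) ** 2 + z0 ** 2) ** 2

# |A| from the horizontal derivative and its Hilbert transform.
dTdx = np.gradient(T, x)
amp = np.abs(analytic_signal(dTdx))
x_peak = x[np.argmax(amp)]
```

For this source |A(x)| is proportional to (x^2 + z0^2)^(-3/2), so it peaks directly over the body and its fall-off encodes the depth, which is the information the combined multi-height inversion exploits.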
Accurately Mapping M31's Microlensing Population
NASA Astrophysics Data System (ADS)
Crotts, Arlin
2004-07-01
We propose to augment an existing microlensing survey of M31 with source identifications provided by a modest amount of ACS {and WFPC2 parallel} observations to yield an accurate measurement of the masses responsible for microlensing in M31, and presumably much of its dark matter. The main benefit of these data is the determination of the physical {or "einstein"} timescale of each microlensing event, rather than an effective {"FWHM"} timescale, allowing masses to be determined more than twice as accurately as without HST data. The einstein timescale is the ratio of the lensing cross-sectional radius and relative velocities. Velocities are known from kinematics, and the cross-section is directly proportional to the {unknown} lensing mass. We cannot easily measure these quantities without knowing the amplification, hence the baseline magnitude, which requires the resolution of HST to find the source star. This makes a crucial difference because M31 lens mass determinations can be more accurate than those towards the Magellanic Clouds through our Galaxy's halo {for the same number of microlensing events} due to the better constrained geometry in the M31 microlensing situation. Furthermore, our larger survey, just completed, should yield at least 100 M31 microlensing events, more than any Magellanic survey. A small amount of ACS+WFPC2 imaging will deliver the potential of this large database {about 350 nights}. For the whole survey {and a delta-function mass distribution} the mass error should approach only about 15%, or about 6% error in slope for a power-law distribution. These results will better allow us to pinpoint the lens halo fraction, and the shape of the halo lens spatial distribution, and allow generalization/comparison of the nature of halo dark matter in spiral galaxies. In addition, we will be able to establish the baseline magnitude for about 50,000 variable stars, as well as measure an unprecedentedly detailed color-magnitude diagram and luminosity
Accurate upwind methods for the Euler equations
NASA Technical Reports Server (NTRS)
Huynh, Hung T.
1993-01-01
A new class of piecewise linear methods for the numerical solution of the one-dimensional Euler equations of gas dynamics is presented. These methods are uniformly second-order accurate, and can be considered as extensions of Godunov's scheme. With an appropriate definition of monotonicity preservation for the case of linear convection, it can be shown that they preserve monotonicity. Similar to Van Leer's MUSCL scheme, they consist of two key steps: a reconstruction step followed by an upwind step. For the reconstruction step, a monotonicity constraint that preserves uniform second-order accuracy is introduced. Computational efficiency is enhanced by devising a criterion that detects the 'smooth' part of the data where the constraint is redundant. The concept and coding of the constraint are simplified by the use of the median function. A slope steepening technique, which has no effect at smooth regions and can resolve a contact discontinuity in four cells, is described. As for the upwind step, existing and new methods are applied in a manner slightly different from those in the literature. These methods are derived by approximating the Euler equations via linearization and diagonalization. At a 'smooth' interface, Harten, Lax, and Van Leer's one-intermediate-state model is employed. A modification for this model that can resolve contact discontinuities is presented. Near a discontinuity, either this modified model or a more accurate one, namely, Roe's flux-difference splitting, is used. The current presentation of Roe's method, via the conceptually simple flux-vector splitting, not only establishes a connection between the two splittings, but also leads to an admissibility correction with no conditional statement, and an efficient approximation to Osher's approximate Riemann solver. These reconstruction and upwind steps result in schemes that are uniformly second-order accurate and economical at smooth regions, and yield high resolution at discontinuities.
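The reconstruction-plus-upwind structure is easiest to see for linear advection, where a minmod-limited scheme preserves monotonicity. A minimal sketch (a generic TVD scheme for u_t + a*u_x = 0 on a periodic grid, not the paper's Euler solver):

```python
import numpy as np

def minmod(a, b):
    """Minmod slope: the median of (a, b, 0)."""
    return np.where(a * b > 0.0, np.where(np.abs(a) < np.abs(b), a, b), 0.0)

def muscl_advect(u0, c, steps):
    """Second-order TVD upwind scheme for u_t + a*u_x = 0 (a > 0):
    minmod-limited reconstruction + upwind flux on a periodic grid,
    monotonicity-preserving for Courant number 0 <= c <= 1."""
    u = u0.copy()
    for _ in range(steps):
        s = minmod(u - np.roll(u, 1), np.roll(u, -1) - u)  # limited slope
        ur = u + 0.5 * (1.0 - c) * s          # interface value u_{i+1/2}
        u = u - c * (ur - np.roll(ur, 1))     # conservative update
    return u

n = 100
u0 = np.where((np.arange(n) > 20) & (np.arange(n) < 40), 1.0, 0.0)  # step data
u1 = muscl_advect(u0, c=0.5, steps=50)

tv = lambda v: np.sum(np.abs(v - np.roll(v, 1)))
```

The limited slope is exactly a median-function construction of the kind the abstract mentions: it is second-order where the data are smooth but reverts to first-order upwinding at extrema, so no new overshoots are created.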
Time-accurate Navier-Stokes computations of classical two-dimensional edge tone flow fields
NASA Technical Reports Server (NTRS)
Liu, B. L.; O'Farrell, J. M.; Jones, Jess H.
1990-01-01
Time-accurate Navier-Stokes computations were performed to study a Class II (acoustic) whistle, the edge tone, and gain knowledge of the vortex-acoustic coupling mechanisms driving production of these tones. Results were obtained by solving the full Navier-Stokes equations for laminar compressible air flow of a two-dimensional jet issuing from a slit interacting with a wedge. Cases considered were determined by varying the distance from the slit to the edge. Flow speed was kept constant at 1750 cm/sec as was the slit thickness of 0.1 cm, corresponding to conditions in the experiments of Brown. Excellent agreement was obtained in all four edge tone stage cases between the present computational results and the experimentally obtained results of Brown. Specific edge tone generated phenomena and further confirmation of certain theories concerning these phenomena were brought to light in this analytical simulation of edge tones.
Partially Coherent Scattering in Stellar Chromospheres. Part 4; Analytic Wing Approximations
NASA Technical Reports Server (NTRS)
Gayley, K. G.
1993-01-01
Simple analytic expressions are derived to understand resonance-line wings in stellar chromospheres and similar astrophysical plasmas. The results are approximate, but compare well with accurate numerical simulations. The redistribution is modeled using an extension of the partially coherent scattering approximation (PCS) which we term the comoving-frame partially coherent scattering approximation (CPCS). The distinction is made here because Doppler diffusion is included in the coherent/noncoherent decomposition, in a form slightly improved from the earlier papers in this series.
Analytical Chemistry in Russia.
Zolotov, Yuri
2016-09-01
Research in Russian analytical chemistry (AC) is carried out on a significant scale, and the analytical service addresses practical tasks in geological surveying, environmental protection, medicine, industry, agriculture, etc. The education system trains highly skilled professionals in AC. The development, and especially the manufacturing, of analytical instruments should be improved; nevertheless, there are several good domestic instruments, and others satisfy some requirements. Russian AC has rather good historical roots.
ANALYTIC MODELING OF STARSHADES
Cash, Webster
2011-09-01
External occulters, otherwise known as starshades, have been proposed as a solution to one of the highest priority yet technically vexing problems facing astrophysics: the direct imaging and characterization of terrestrial planets around other stars. New apodization functions, developed over the past few years, now enable starshades of just a few tens of meters in diameter to occult central stars so efficiently that the orbiting exoplanets can be revealed and other high-contrast imaging challenges addressed. In this paper, an analytic approach to the analysis of these apodization functions is presented. It is used to develop a tolerance analysis suitable for use in designing practical starshades. The results provide a mathematical basis for understanding starshades and a quantitative approach to setting tolerances.
Science Update: Analytical Chemistry.
ERIC Educational Resources Information Center
Worthy, Ward
1980-01-01
Briefly discusses new instrumentation in the field of analytical chemistry. Advances in liquid chromatography, photoacoustic spectroscopy, the use of lasers, and mass spectrometry are also discussed. (CS)
Accurate Thermal Stresses for Beams: Normal Stress
NASA Technical Reports Server (NTRS)
Johnson, Theodore F.; Pilkey, Walter D.
2003-01-01
Formulations for a general theory of thermoelasticity to generate accurate thermal stresses for structural members of aeronautical vehicles were developed in 1954 by Boley. The formulation also provides three normal stresses and a shear stress along the entire length of the beam. The Poisson effect of the lateral and transverse normal stresses on a thermally loaded beam is taken into account in this theory by employing an Airy stress function. The Airy stress function enables the reduction of the three-dimensional thermal stress problem to a two-dimensional one. Numerical results from the general theory of thermoelasticity are compared to those obtained from strength of materials. It is concluded that the theory of thermoelasticity for prismatic beams proposed in this paper can be used instead of strength of materials when precise stress results are desired.
Accurate Stellar Parameters for Exoplanet Host Stars
NASA Astrophysics Data System (ADS)
Brewer, John Michael; Fischer, Debra; Basu, Sarbani; Valenti, Jeff A.
2015-01-01
A large impediment to our understanding of planet formation is obtaining a clear picture of planet radii and densities. Although determining precise ratios between a planet and its stellar host is relatively easy, determining accurate stellar parameters is still a difficult and costly undertaking. High-resolution spectral analysis has traditionally yielded precise values for some stellar parameters, but stars in common between catalogs from different authors, or analyzed using different techniques, often show offsets far in excess of their uncertainties. Most analyses now use some external constraint, when available, to break observed degeneracies between surface gravity, effective temperature, and metallicity, which can otherwise lead to correlated errors in results. However, these external constraints are impossible to obtain for all stars and can require more costly observations than the initial high-resolution spectra. We demonstrate that these discrepancies can be mitigated by use of a larger line list with carefully tuned atomic line data. We use an iterative modeling technique that does not require external constraints. We compare the surface gravity obtained with our spectral synthesis modeling to asteroseismically determined values for 42 Kepler stars. Our analysis agrees well, with only a 0.048 dex offset and an rms scatter of 0.05 dex. Such accurate stellar gravities can reduce the primary source of uncertainty in radii by almost an order of magnitude over unconstrained spectral analysis.
Road Transportable Analytical Laboratory (RTAL) system
Finger, S.M.
1995-10-01
The goal of the Road Transportable Analytical Laboratory (RTAL) Project is the development and demonstration of a system to meet the unique needs of the DOE for rapid, accurate analysis of a wide variety of hazardous and radioactive contaminants in soil, groundwater, and surface waters. This laboratory system has been designed to provide the field and laboratory analytical equipment necessary to detect and quantify radionuclides, organics, heavy metals and other inorganic compounds. The laboratory system consists of a set of individual laboratory modules deployable independently or as an interconnected group to meet each DOE site's specific needs.
Service line analytics in the new era.
Spence, Jay; Seargeant, Dan
2015-08-01
To succeed under the value-based business model, hospitals and health systems require effective service line analytics that combine inpatient and outpatient data and that incorporate quality metrics for evaluating clinical operations. When developing a framework for collection, analysis, and dissemination of service line data, healthcare organizations should focus on five key aspects of effective service line analytics: Updated service line definitions. Ability to analyze and trend service line net patient revenues by payment source. Access to accurate service line cost information across multiple dimensions with drill-through capabilities. Ability to redesign key reports based on changing requirements. Clear assignment of accountability.
NASA Technical Reports Server (NTRS)
Prust, H. W., Jr.
1978-01-01
Published experimental aerodynamic efficiency results were compared with results predicted from two published analytical methods. This is the second of two such comparisons. One of the analytical methods was used as published; the other was modified for certain cases of coolant discharge from the blade suction surface. The results show that for 23 cases of single row and multirow discharge covering coolant fractions from 0 to about 9 percent, the difference between the experimental and predicted results was no greater than about 1 percent in any case and less than 1/2 percent in most cases.
Krings, Thomas; Mauerhofer, Eric
2011-06-01
This work improves the reliability and accuracy of the reconstruction of the total isotope activity content in heterogeneous nuclear waste drums containing point sources. The method is based on χ²-fits of the angle-dependent count rate distribution measured during drum rotation in segmented gamma scanning. A new description of the analytical calculation of the angular count rate distribution is introduced, based on a more precise model of the collimated detector. The new description is validated and compared to the old one using MCNP5 simulations of angle-dependent count rate distributions of Co-60 and Cs-137 point sources. It is shown that the new model describes the angular count rate distribution significantly more accurately than the old model. Hence, the activity reconstruction is more accurate and the errors are considerably reduced, leading to more reliable results. Furthermore, the results are compared to the conventional reconstruction method, which assumes homogeneous matrix and activity distributions.
Accurate Cross Sections for Microanalysis
Rez, Peter
2002-01-01
To calculate the intensity of x-ray emission in electron beam microanalysis requires knowledge of the energy distribution of the electrons in the solid, the energy variation of the ionization cross section of the relevant subshell, the fraction of ionization events producing the x rays of interest, and the absorption coefficient of the x rays on the path to the detector. The theoretical predictions and experimental data available for ionization cross sections are limited mainly to the K shells of a few elements. Results of systematic plane-wave Born approximation calculations with exchange for K, L, and M shell ionization cross sections over the range of electron energies used in microanalysis are presented. Comparisons are made with experimental measurements for selected K shells, and it is shown that the plane-wave theory is not appropriate for overvoltages less than 2.5. PMID:27446747
Assessment of the analytical capabilities of inductively coupled plasma-mass spectrometry
Taylor, H.E.; Garbarino, J.R.
1988-01-01
A thorough assessment of the analytical capabilities of inductively coupled plasma-mass spectrometry was conducted for selected analytes of importance in water quality applications and hydrologic research. A multielement calibration curve technique was designed to produce accurate and precise results in analysis times of approximately one minute. The suite of elements included Al, As, B, Ba, Be, Cd, Co, Cr, Cu, Hg, Li, Mn, Mo, Ni, Pb, Se, Sr, V, and Zn. The effects of sample matrix composition on the accuracy of the determinations showed that matrix elements (such as Na, Ca, Mg, and K) that may be present in natural water samples at concentration levels greater than 50 mg/L resulted in as much as a 10% suppression in ion current for analyte elements. Operational detection limits are presented.
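The multielement calibration approach rests on per-element linear calibration curves. A minimal single-element sketch with invented standards (illustrative numbers, not the study's data):

```python
import numpy as np

# Linear calibration: fit detector counts vs. standard concentration,
# then invert the fit to report sample concentrations.
conc_std = np.array([0.0, 1.0, 5.0, 10.0, 50.0])               # ug/L standards
counts = np.array([120.0, 1120.0, 5120.0, 10120.0, 50120.0])   # counts/s

slope, intercept = np.polyfit(conc_std, counts, 1)

def predict_conc(sample_counts):
    """Invert the calibration line for an unknown sample."""
    return (sample_counts - intercept) / slope

unknown = predict_conc(25120.0)
```

In the multielement case one such curve is maintained per analyte isotope, and the nonzero intercept plays the role of the blank; matrix suppression of the kind reported in the study shows up as a sample-dependent change in the effective slope.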
Hadjitheodorou, Amalia; Kalosakas, George
2014-09-01
We investigate, both analytically and numerically, diffusion-controlled drug release from composite spherical formulations consisting of an inner core and an outer shell of different drug diffusion coefficients. Theoretically derived analytical results are based on the exact solution of Fick's second law of diffusion for a composite sphere, while numerical data are obtained using Monte Carlo simulations. In both cases, and for the range of matrix parameter values considered in this work, fractional drug release profiles are described accurately by a stretched exponential function. The resulting release kinetics are quantified through a detailed investigation of the dependence of the two stretched exponential release parameters on the device characteristics, namely the geometrical radii of the inner core and outer shell and the corresponding drug diffusion coefficients. Similar behaviors are revealed by both the theoretical results and the numerical simulations, and approximate analytical expressions are presented for the dependencies. PMID:25063169
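The stretched-exponential description can be reproduced in the simplest homogeneous-sphere limit (a limiting case of the composite formulation; all parameters are illustrative): generate the exact Fickian release curve, then fit 1 - exp(-(t/tau)^b):

```python
import numpy as np

def sphere_release(t, D, R):
    """Exact Fickian fractional release from a homogeneous sphere:
    M_t/M_inf = 1 - (6/pi^2) * sum_n (1/n^2) exp(-n^2 pi^2 D t / R^2)."""
    n = np.arange(1, 200)[:, None]
    s = np.sum(np.exp(-(n ** 2) * np.pi ** 2 * D * t / R ** 2) / n ** 2, axis=0)
    return 1.0 - (6.0 / np.pi ** 2) * s

D, R = 1.0e-6, 0.1               # illustrative diffusivity and radius
t = np.linspace(1.0, 4000.0, 400)
f = sphere_release(t, D, R)

# Fit the stretched exponential 1 - exp(-(t/tau)**b) by a coarse grid
# search in (tau, b), minimizing the maximum absolute deviation.
tau_fit, b_fit, err = 0.0, 0.0, np.inf
for tau in np.linspace(200.0, 2000.0, 181):
    for b in np.linspace(0.4, 1.0, 61):
        e = np.max(np.abs(1.0 - np.exp(-(t / tau) ** b) - f))
        if e < err:
            tau_fit, b_fit, err = tau, b, e
```

The best-fit exponent comes out well below 1 (reflecting the t^(1/2) early-time behavior of Fickian release), and the two fitted parameters play the same role as the ones whose dependence on core/shell radii and diffusivities the paper maps out.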
Li, Rui; Wang, Pengcheng; Tian, Yu; Wang, Bo; Li, Gang
2015-01-01
A unified analytic solution approach to both static bending and free vibration problems of rectangular thin plates is demonstrated in this paper, with focus on its application to corner-supported plates. The solution procedure is based on a novel symplectic superposition method, which transforms the problems into the Hamiltonian system and yields sufficiently accurate results via rigorous step-by-step derivation. The main advantage of the developed approach is its wide applicability, since no trial solutions are needed in the analysis, in contrast to other methods. Numerical examples for both static bending and free vibration of plates are presented to validate the developed analytic solutions and to offer new numerical results. The approach is expected to serve as a benchmark analytic approach due to its effectiveness and accuracy. PMID:26608602
Photovoltaic Degradation Rates -- An Analytical Review
Jordan, D. C.; Kurtz, S. R.
2012-06-01
As photovoltaic penetration of the power grid increases, accurate predictions of return on investment require accurate prediction of decreased power output over time. Degradation rates must be known in order to predict power delivery. This article reviews degradation rates of flat-plate terrestrial modules and systems reported in published literature from field testing throughout the last 40 years. Nearly 2000 degradation rates, measured on individual modules or entire systems, have been assembled from the literature, showing a median value of 0.5%/year. The review consists of three parts: a brief historical outline, an analytical summary of degradation rates, and a detailed bibliography partitioned by technology.
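A degradation rate of the kind aggregated in this review is commonly extracted as the slope of a least-squares line through time-normalized power measurements. A minimal sketch with synthetic data; the -0.5%/year value is the review's reported median, while the measurement cadence and noise level are assumptions:

```python
import numpy as np

# Monthly normalized power over 5 years with a built-in degradation of
# -0.5 %/year (the review's median) plus measurement noise (assumed level).
rng = np.random.default_rng(0)
months = np.arange(60)
power = 1.0 - 0.005 * (months / 12.0) + rng.normal(0.0, 0.002, 60)

# Degradation rate = slope of a least-squares line, expressed in %/year.
slope, intercept = np.polyfit(months / 12.0, power, 1)
rate_pct_per_year = 100.0 * slope / intercept
print(round(rate_pct_per_year, 2))
```

The review's point about observation length follows directly: with noisy data, the uncertainty of this slope shrinks only as the time span grows.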
A SPICE model for a phase-change memory cell based on the analytical conductivity model
NASA Astrophysics Data System (ADS)
Yiqun, Wei; Xinnan, Lin; Yuchao, Jia; Xiaole, Cui; Jin, He; Xing, Zhang
2012-11-01
For peripheral circuit design of phase-change memory, an accurate compact model of a phase-change memory cell is needed for circuit simulation. Compared with existing models, the model presented in this work includes an analytical conductivity model deduced from carrier transport theory rather than a fitting model based on measurements. In addition, this model includes an analytical temperature model based on the 1D heat-transfer equation and a phase-transition dynamic model based on the JMA equation to simulate the phase-change process. The above models for phase-change memory are integrated using the Verilog-A language, and results show that this model is able to simulate the I-V characteristics and the programming characteristics accurately.
Learning Analytics Considered Harmful
ERIC Educational Resources Information Center
Dringus, Laurie P.
2012-01-01
This essay is written to present a prospective stance on how learning analytics, as a core evaluative approach, must help instructors uncover the important trends and evidence of quality learner data in the online course. A critique is presented of strategic and tactical issues of learning analytics. The approach to the critique is taken through…
ERIC Educational Resources Information Center
Ember, Lois R.
1977-01-01
The procedures utilized by the Association of Official Analytical Chemists (AOAC) to develop, evaluate, and validate analytical methods for the analysis of chemical pollutants are detailed. Methods validated by AOAC are used by the EPA and FDA in their enforcement programs and are granted preferential treatment by the courts. (BT)
Analytical mass spectrometry. Abstracts
Not Available
1990-12-31
This 43rd Annual Summer Symposium on Analytical Chemistry was held July 24--27, 1990 at Oak Ridge, TN and contained sessions on the following topics: Fundamentals of Analytical Mass Spectrometry (MS), MS in the National Laboratories, Lasers and Fourier Transform Methods, Future of MS, New Ionization and LC/MS Methods, and an extra session. (WET)
Extreme Scale Visual Analytics
Wong, Pak C.; Shen, Han-Wei; Pascucci, Valerio
2012-05-08
Extreme-scale visual analytics (VA) is about applying VA to extreme-scale data. The articles in this special issue examine advances related to extreme-scale VA problems, their analytical and computational challenges, and their real-world applications.
Signals: Applying Academic Analytics
ERIC Educational Resources Information Center
Arnold, Kimberly E.
2010-01-01
Academic analytics helps address the public's desire for institutional accountability with regard to student success, given the widespread concern over the cost of higher education and the difficult economic and budgetary conditions prevailing worldwide. Purdue University's Signals project applies the principles of analytics widely used in…
ERIC Educational Resources Information Center
Jackson, Brian
2010-01-01
Using a survey of 138 writing programs, I argue that we must be more explicit about what we think students should get out of analysis to make it more likely that students will transfer their analytical skills to different settings. To ensure our students take analytical skills with them at the end of the semester, we must simplify the task we…
Chen, W M; Deng, H W
2001-07-01
Transmission disequilibrium test (TDT) is a nuclear family-based analysis that can test linkage in the presence of association. It has gained extensive attention in theoretical investigation and in practical application; in both cases, the accuracy and generality of the power computation of the TDT are crucial. Despite extensive investigations, previous approaches for computing the statistical power of the TDT are neither accurate nor general. In this paper, we develop a general and highly accurate approach to analytically compute the power of the TDT. We compare the results from our approach with those from several other recent papers, all against the results obtained from computer simulations. We show that the results computed from our approach are more accurate than or at least the same as those from other approaches. More importantly, our approach can handle various situations, which include (1) families that consist of one or more children and that have any configuration of affected and nonaffected sibs; (2) families ascertained through the affection status of parent(s); (3) any mixed sample with different types of families in (1) and (2); (4) the marker locus is not a disease susceptibility locus; and (5) existence of allelic heterogeneity. We implement this approach in a user-friendly computer program: TDT Power Calculator. Its applications are demonstrated. The approach and the program developed here should be significant for theoreticians to accurately investigate the statistical power of the TDT in various situations, and for empirical geneticists to plan efficient studies using the TDT.
Micron Accurate Absolute Ranging System: Range Extension
NASA Technical Reports Server (NTRS)
Smalley, Larry L.; Smith, Kely L.
1999-01-01
The purpose of this research is to investigate Fresnel diffraction as a means of obtaining absolute distance measurements with micron or greater accuracy. It is believed that such a system would prove useful to the Next Generation Space Telescope (NGST) as a non-intrusive, non-contact measuring system for use with secondary concentrator station-keeping systems. The present research attempts to validate past experiments and develop ways to apply the phenomenon of Fresnel diffraction to micron-accurate measurement. This report discusses past research on the phenomenon and the basis of the use of Fresnel diffraction for distance metrology. The apparatus used in the recent investigations, the experimental procedures followed, and preliminary results are discussed in detail. Continued research and the equipment requirements for extending the effective range of Fresnel diffraction systems are also described.
NASA Technical Reports Server (NTRS)
Prust, H. W., Jr.
1978-01-01
Previously published experimental aerodynamic efficiency results for a film cooled turbine stator blade are compared with analytical results computed from two published analytical methods. One method was used as published; the other was modified for certain cases of coolant discharge from the blade suction surface. For coolant ejection from blade surface regions where the surface static pressures are higher than the blade exit pressure, both methods predict the experimental results quite well. However, for ejection from regions with surface static pressures lower than the blade exit pressure, both methods predict too small a change in efficiency. The modified method gives the better prediction.
Accurate FDTD modelling for dispersive media using rational function and particle swarm optimisation
NASA Astrophysics Data System (ADS)
Chung, Haejun; Ha, Sang-Gyu; Choi, Jaehoon; Jung, Kyung-Young
2015-07-01
This article presents an accurate finite-difference time-domain (FDTD) dispersive modelling approach suitable for complex dispersive media. A quadratic complex rational function (QCRF) is used to characterise their dispersion relations. To obtain accurate QCRF coefficients, we use an analytical approach and particle swarm optimisation (PSO) simultaneously. Specifically, the analytical approach is used to obtain the QCRF matrix-solving equation, and PSO is applied to adjust a weighting function of this equation. Numerical examples illustrate the validity of the proposed FDTD dispersion model.
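The analytical half of the procedure, setting up and solving the QCRF matrix equation in a least-squares sense, can be sketched as follows. The Debye test medium and the omission of the PSO weighting step are assumptions made for illustration:

```python
import numpy as np

# Fit the quadratic complex rational function (QCRF)
#   eps(s) = (a0 + a1*s + a2*s**2) / (1 + b1*s + b2*s**2),  s = j*w*tau,
# to sampled permittivity data by linear least squares. Normalizing the
# frequency by tau keeps the system well conditioned. (The paper further
# tunes a weighting function with PSO, omitted in this sketch.)
w = np.linspace(1e8, 1e10, 200)              # angular frequency, rad/s
eps_inf, d_eps, tau = 2.0, 8.0, 1.0e-9       # illustrative Debye medium
eps = eps_inf + d_eps / (1.0 + 1j * w * tau)

s = 1j * w * tau
# Multiply through by the denominator: a-terms - eps*(b1*s + b2*s**2) = eps.
M = np.column_stack([np.ones_like(s), s, s**2, -eps * s, -eps * s**2])
A = np.vstack([M.real, M.imag])              # solve in real arithmetic
rhs = np.concatenate([eps.real, eps.imag])
a0, a1, a2, b1, b2 = np.linalg.lstsq(A, rhs, rcond=None)[0]

eps_fit = (a0 + a1 * s + a2 * s**2) / (1.0 + b1 * s + b2 * s**2)
print(np.max(np.abs(eps_fit - eps)) < 1e-6)  # True: Debye is an exact QCRF
```

Because a first-order Debye response is itself a rational function, the fit here recovers it essentially exactly; for media that are not exactly rational, the weighting step the paper adds controls where the residual error is pushed.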
NASA Astrophysics Data System (ADS)
Martínez, M. J.; Marco, F. J.; López, J. A.
2009-02-01
The Hipparcos catalog provides a reference frame at optical wavelengths for the new International Celestial Reference System (ICRS). This new reference system was adopted following the resolution agreed at the 23rd IAU General Assembly held in Kyoto in 1997. Differences in the Hipparcos system of proper motions and the previous materialization of the reference frame, the FK5, are expected to be caused only by the combined effects of the motion of the equinox of the FK5 and the precession of the equator and the ecliptic. Several authors have pointed out an inconsistency between the differences in proper motion of the Hipparcos-FK5 and the correction of the precessional values derived from VLBI and lunar laser ranging (LLR) observations. Most of them have claimed that these discrepancies are due to slightly biased proper motions in the FK5 catalog. The different mathematical models that have been employed to explain these errors have not fully accounted for the discrepancies in the correction of the precessional parameters. Our goal here is to offer an explanation for this fact. We propose the use of independent parametric and nonparametric models. The introduction of a nonparametric model, combined with the inner product in the square integrable functions over the unitary sphere, would give us values which do not depend on the possible interdependencies existing in the data set. The evidence shows that zonal studies are needed. This would lead us to introduce a local nonparametric model. All these models will provide independent corrections to the precessional values, which could then be compared in order to study the reliability in each case. Finally, we obtain values for the precession corrections that are very consistent with those that are currently adopted.
The importance of accurate atmospheric modeling
NASA Astrophysics Data System (ADS)
Payne, Dylan; Schroeder, John; Liang, Pang
2014-11-01
This paper will focus on the effect of atmospheric conditions on EO sensor performance using computer models. We have shown the importance of accurately modeling atmospheric effects for predicting the performance of an EO sensor. A simple example will demonstrate how real conditions for several sites in China significantly impact image correction, hyperspectral imaging, and remote sensing. The current state-of-the-art model for computing atmospheric transmission and radiance is MODTRAN® 5, developed by the US Air Force Research Laboratory and Spectral Science, Inc. Research by the US Air Force, Navy, and Army resulted in the public release of LOWTRAN 2 in the early 1970s. Subsequent releases of LOWTRAN and MODTRAN® have continued until the present. The paper will demonstrate the importance of using validated models and locally measured meteorological, atmospheric, and aerosol conditions to accurately simulate atmospheric transmission and radiance. Frequently, default conditions are used, which can produce errors of as much as 75% in these values. This can have significant impact on remote sensing applications.
Accurate Weather Forecasting for Radio Astronomy
NASA Astrophysics Data System (ADS)
Maddalena, Ronald J.
2010-01-01
The NRAO Green Bank Telescope routinely observes at wavelengths from 3 mm to 1 m. As with all mm-wave telescopes, observing conditions depend upon the variable atmospheric water content. The site provides over 100 days/yr when opacities are low enough for good observing at 3 mm, but winds on the open-air structure reduce the time suitable for 3-mm observing, where pointing is critical. Thus, to maximize productivity, the observing wavelength needs to match weather conditions. For 6 years the telescope has used a dynamic scheduling system (recently upgraded; www.gb.nrao.edu/DSS) that requires accurate multi-day forecasts for winds and opacities. Since opacity forecasts are not provided by the National Weather Service (NWS), I have developed an automated system that takes available forecasts, derives forecasted opacities, and deploys the results on the web in user-friendly graphical overviews (www.gb.nrao.edu/rmaddale/Weather). The system relies on the "North American Mesoscale" models, which are updated by the NWS every 6 hrs, have a 12 km horizontal resolution, 1 hr temporal resolution, run to 84 hrs, and have 60 vertical layers that extend to 20 km. Each forecast consists of a time series of ground conditions, cloud coverage, etc., and, most importantly, temperature, pressure, and humidity as a function of height. I use Liebe's MWP model (Radio Science, 20, 1069, 1985) to determine the absorption in each layer for each hour at 30 observing wavelengths. Radiative transfer provides, for each hour and wavelength, the total opacity and the radio brightness of the atmosphere, which at some wavelengths contributes substantially to Tsys and the observational noise. Comparisons of measured and forecasted Tsys at 22.2 and 44 GHz imply that the forecasted opacities are good to about 0.01 nepers, which is sufficient for forecasting and accurate calibration. Reliability is high out to 2 days and degrades slowly for longer-range forecasts.
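For a single effective atmospheric layer, the radiative-transfer step described above reduces to T_sky = T_atm * (1 - exp(-tau)), with the astronomical signal attenuated by exp(-tau). A sketch of why the quoted 0.01 Np forecast accuracy is adequate; the 260 K effective atmospheric temperature is an assumed round number:

```python
import math

# Single-layer radiative transfer: the atmosphere with zenith opacity tau
# (nepers) adds T_sky = T_atm * (1 - exp(-tau)) to the system temperature.
def sky_brightness(tau, t_atm=260.0):
    return t_atm * (1.0 - math.exp(-tau))

# Effect of a 0.01 Np opacity error (the quoted forecast accuracy) on T_sky:
delta = sky_brightness(0.06) - sky_brightness(0.05)
print(round(delta, 2))  # 2.46 K
```

A ~2.5 K shift in sky contribution is small against typical mm-wave system temperatures, which is what makes 0.01 Np "sufficient for forecasting and accurate calibration."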
NASA Astrophysics Data System (ADS)
Park, Jong Ho; Park, Jung Jin; Park, O. Ok; Jin, Chang-Soo; Yang, Jung Hoon
2016-04-01
Because of the rise in renewable energy use, the redox flow battery (RFB) has attracted extensive attention as an energy storage system. Thus, many studies have focused on improving the performance of the felt electrodes used in RFBs. However, existing analysis cells are unsuitable for characterizing felt electrodes because of their complex 3-dimensional structure. Analysis is also greatly affected by the measurement conditions, viz. compression ratio, contact area, and contact strength between the felt and current collector. To address the growing need for practical analytical apparatus, we report a new analysis cell for accurate electrochemical characterization of felt electrodes under various conditions, and compare it with previous ones. In this cell, the measurement conditions can be exhaustively controlled with a compression supporter. The cell showed excellent reproducibility in cyclic voltammetry analysis and the results agreed well with actual RFB charge-discharge performance.
Enzymes in Analytical Chemistry.
ERIC Educational Resources Information Center
Fishman, Myer M.
1980-01-01
Presents tabular information concerning recent research in the field of enzymes in analytic chemistry, with methods, substrate or reaction catalyzed, assay, comments and references listed. The table refers to 128 references. Also listed are 13 general citations. (CS)
Idealized textile composites for experimental/analytical correlation
NASA Technical Reports Server (NTRS)
Adams, Daniel O.
1994-01-01
Textile composites are fiber reinforced materials produced by weaving, braiding, knitting, or stitching. These materials offer possible reductions in manufacturing costs compared to conventional laminated composites. Thus, they are attractive candidate materials for aircraft structures. To date, numerous experimental studies have been performed to characterize the mechanical performance of specific textile architectures. Since many materials and architectures are of interest, there is a need for analytical models to predict the mechanical properties of a specific textile composite material. Models of varying sophistication have been proposed based on mechanics of materials, classical laminated plate theory, and the finite element method. These modeling approaches assume an idealized textile architecture and generally consider a single unit cell. Due to randomness of the textile architectures produced using conventional processing techniques, experimental data obtained has been of limited use for verifying the accuracy of these analytical approaches. This research is focused on fabricating woven textile composites with highly aligned and accurately placed fiber tows that closely represent the idealized architectures assumed in analytical models. These idealized textile composites have been fabricated with three types of layer nesting configurations: stacked, diagonal, and split-span. Compression testing results have identified strength variations as a function of nesting. Moire interferometry experiments are being used to determine localized deformations for detailed correlation with model predictions.
An Analytic Function of Lunar Surface Temperature for Exospheric Modeling
NASA Technical Reports Server (NTRS)
Hurley, Dana M.; Sarantos, Menelaos; Grava, Cesare; Williams, Jean-Pierre; Retherford, Kurt D.; Siegler, Matthew; Greenhagen, Benjamin; Paige, David
2014-01-01
We present an analytic expression to represent the lunar surface temperature as a function of Sun-state latitude and local time. The approximation represents neither topographical features nor compositional effects and therefore does not change as a function of selenographic latitude and longitude. The function reproduces the surface temperature measured by Diviner to within ±10 K at 72% of grid points for dayside solar zenith angles of less than 80°, and at 98% of grid points for nightside solar zenith angles greater than 100°. The analytic function is least accurate at the terminator, where there is a strong gradient in the temperature, and in the polar regions. Topographic features have a larger effect on the actual temperature near the terminator than at other solar zenith angles. For exospheric modeling, the effects of topography on the thermal model can be approximated by using an effective longitude for determining the temperature. This effective longitude is randomly redistributed with a 1σ of 4.5°. The resulting "roughened" analytical model represents the statistical dispersion in the Diviner data well and is expected to be generally useful for future models of lunar surface temperature, especially those implemented within exospheric simulations that address questions of volatile transport.
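The general shape of such a function, radiative-equilibrium cos^(1/4) scaling on the dayside joined to a nightside floor, can be sketched as below. The 390 K and 95 K values are round illustrative numbers, not the paper's fitted coefficients:

```python
import math

# Illustrative lunar surface temperature vs solar zenith angle (SZA):
# radiative equilibrium ~cos(SZA)**(1/4) on the dayside, a constant floor
# on the nightside. Coefficients are assumptions for illustration only.
T_SS, T_NIGHT = 390.0, 95.0   # subsolar and nightside temperatures, K

def surface_temp(sza_deg):
    if sza_deg >= 90.0:
        return T_NIGHT
    return max(T_NIGHT, T_SS * math.cos(math.radians(sza_deg)) ** 0.25)

print(round(surface_temp(0.0)))    # 390 at the subsolar point
print(round(surface_temp(120.0)))  # 95 on the nightside
```

The sharp change of this function near SZA = 90° is exactly the terminator region the abstract flags as least accurate, and which the randomized effective longitude is introduced to roughen.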
Extreme Scale Visual Analytics
Steed, Chad A; Potok, Thomas E; Pullum, Laura L; Ramanathan, Arvind; Shipman, Galen M; Thornton, Peter E; Potok, Thomas E
2013-01-01
Given the scale and complexity of today's data, visual analytics is rapidly becoming a necessity rather than an option for comprehensive exploratory analysis. In this paper, we provide an overview of three applications of visual analytics for addressing the challenges of analyzing climate, text streams, and biosurveillance data. These systems feature varying levels of interaction and high-performance computing technology integration to permit exploratory analysis of large and complex data of global significance.
Analytical Improvements in PV Degradation Rate Determination
Jordan, D. C.; Kurtz, S. R.
2011-02-01
As photovoltaic (PV) penetration of the power grid increases, it becomes vital to know how decreased power output may affect cost over time. In order to predict power delivery, the decline or degradation rates must be determined accurately. For non-spectrally corrected data several complete seasonal cycles (typically 3-5 years) are required to obtain reasonably accurate degradation rates. In a rapidly evolving industry such a time span is often unacceptable and the need exists to determine degradation rates accurately in a shorter period of time. Occurrence of outliers and data shifts are two examples of analytical problems leading to greater uncertainty and therefore to longer observation times. In this paper we compare three methodologies of data analysis for robustness in the presence of outliers, data shifts and shorter measurement time periods.
Analytic boosted boson discrimination
NASA Astrophysics Data System (ADS)
Larkoski, Andrew J.; Moult, Ian; Neill, Duff
2016-05-01
Observables which discriminate boosted topologies from massive QCD jets are of great importance for the success of the jet substructure program at the Large Hadron Collider. Such observables, while both widely and successfully used, have been studied almost exclusively with Monte Carlo simulations. In this paper we present the first all-orders factorization theorem for a two-prong discriminant based on a jet shape variable, D2, valid for both signal and background jets. Our factorization theorem simultaneously describes the production of both collinear and soft subjets, and we introduce a novel zero-bin procedure to correctly describe the transition region between these limits. By proving an all orders factorization theorem, we enable a systematically improvable description, and allow for precision comparisons between data, Monte Carlo, and first principles QCD calculations for jet substructure observables. Using our factorization theorem, we present numerical results for the discrimination of a boosted Z boson from massive QCD background jets. We compare our results with Monte Carlo predictions which allows for a detailed understanding of the extent to which these generators accurately describe the formation of two-prong QCD jets, and informs their usage in substructure analyses. Our calculation also provides considerable insight into the discrimination power and calculability of jet substructure observables in general.
Analyticity and Features of Semantic Interaction.
ERIC Educational Resources Information Center
Steinberg, Danny D.
The findings reported in this paper are the result of an experiment to determine the empirical validity of such semantic concepts as analytic, synthetic, and contradictory. Twenty-eight university students were presented with 156 sentences to assign to one of four semantic categories: (1) synthetic ("The dog is a poodle"), (2) analytic ("The tulip…
Accurate adiabatic correction in the hydrogen molecule
NASA Astrophysics Data System (ADS)
Pachucki, Krzysztof; Komasa, Jacek
2014-12-01
A new formalism for the accurate treatment of adiabatic effects in the hydrogen molecule is presented, in which the electronic wave function is expanded in the James-Coolidge basis functions. Systematic increase in the size of the basis set permits estimation of the accuracy. Numerical results for the adiabatic correction to the Born-Oppenheimer interaction energy reveal a relative precision of 10⁻¹² at an arbitrary internuclear distance. Such calculations have been performed for 88 internuclear distances in the range of 0 < R ⩽ 12 bohrs to construct the adiabatic correction potential and to solve the nuclear Schrödinger equation. Finally, the adiabatic correction to the dissociation energies of all rovibrational levels in H2, HD, HT, D2, DT, and T2 has been determined. For the ground state of H2 the estimated precision is 3 × 10⁻⁷ cm⁻¹, which is almost three orders of magnitude higher than that of the best previous result. The achieved accuracy removes the adiabatic contribution from the overall error budget of the present day theoretical predictions for the rovibrational levels.
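For context, the quantity being computed is, in its standard (diagonal Born-Oppenheimer) form, the expectation value of the nuclear kinetic energy operator in the electronic wave function:

\[
E_{\mathrm{ad}}(R) \;=\; \Big\langle \phi_{\mathrm{el}}(\mathbf{r};R) \,\Big|\, -\sum_{a} \frac{1}{2 M_a}\,\nabla_a^{2} \,\Big|\, \phi_{\mathrm{el}}(\mathbf{r};R) \Big\rangle_{\mathbf{r}},
\]

where the sum runs over the two nuclei with masses \(M_a\), the integration is over electronic coordinates only, and \(E_{\mathrm{ad}}(R)\) is added to the Born-Oppenheimer potential before solving the nuclear Schrödinger equation.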
Fast and Accurate Exhaled Breath Ammonia Measurement
Solga, Steven F.; Mudalel, Matthew L.; Spacek, Lisa A.; Risby, Terence H.
2014-01-01
This exhaled breath ammonia method uses a fast and highly sensitive spectroscopic method known as quartz enhanced photoacoustic spectroscopy (QEPAS) that uses a quantum cascade based laser. The monitor is coupled to a sampler that measures mouth pressure and carbon dioxide. The system is temperature controlled and specifically designed to address the reactivity of this compound. The sampler provides immediate feedback to the subject and the technician on the quality of the breath effort. Together with the quick response time of the monitor, this system is capable of accurately measuring exhaled breath ammonia representative of deep lung systemic levels. Because the system is easy to use and produces real time results, it has enabled experiments to identify factors that influence measurements. For example, mouth rinse and oral pH reproducibly and significantly affect results and therefore must be controlled. Temperature and mode of breathing are other examples. As our understanding of these factors evolves, error is reduced, and clinical studies become more meaningful. This system is very reliable and individual measurements are inexpensive. The sampler is relatively inexpensive and quite portable, but the monitor is neither. This limits options for some clinical studies and provides a rationale for future innovations. PMID:24962141
Waste minimization in analytical methods
Green, D.W.; Smith, L.L.; Crain, J.S.; Boparai, A.S.; Kiely, J.T.; Yaeger, J.S.; Schilling, J.B.
1995-05-01
The US Department of Energy (DOE) will require a large number of waste characterizations over a multi-year period to accomplish the Department's goals in environmental restoration and waste management. Estimates vary, but two million analyses annually are expected. The waste generated by the analytical procedures used for characterizations is a significant source of new DOE waste. Success in reducing the volume of secondary waste and the costs of handling this waste would significantly decrease the overall cost of this DOE program. Selection of appropriate analytical methods depends on the intended use of the resultant data. It is not always necessary to use a high-powered analytical method, typically at higher cost, to obtain data needed to make decisions about waste management. Indeed, for samples taken from some heterogeneous systems, the meaning of high accuracy becomes clouded if the data generated are intended to measure a property of this system. Among the factors to be considered in selecting the analytical method are the lower limit of detection, accuracy, turnaround time, cost, reproducibility (precision), interferences, and simplicity. Occasionally, there must be tradeoffs among these factors to achieve the multiple goals of a characterization program. The purpose of the work described here is to add waste minimization to the list of characteristics to be considered. In this paper the authors present results of modifying analytical methods for waste characterization to reduce both the cost of analysis and volume of secondary wastes. Although tradeoffs may be required to minimize waste while still generating data of acceptable quality for the decision-making process, they have data demonstrating that wastes can be reduced in some cases without sacrificing accuracy or precision.
Analytical SAR-GMTI principles
NASA Astrophysics Data System (ADS)
Soumekh, Mehrdad; Majumder, Uttam K.; Barnes, Christopher; Sobota, David; Minardi, Michael
2016-05-01
This paper provides analytical principles to relate the signature of a moving target to parameters in a SAR system. Our objective is to establish analytical tools that could predict the shift and smearing of a moving target in a subaperture SAR image. Hence, a user could identify the system parameters such as the coherent processing interval for a subaperture that is suitable to localize the signature of a moving target for detection, tracking and geolocating the moving target. The paper begins by outlining two well-known SAR data collection methods to detect moving targets. One uses a scanning beam in the azimuth domain with a relatively high PRF to separate the moving targets and the stationary background (clutter); this is also known as Doppler Beam Sharpening. The other scheme uses two receivers along the track to null the clutter and, thus, provide GMTI. We also present results on implementing our SAR-GMTI analytical principles for the anticipated shift and smearing of a moving target in a simulated code. The code would provide a tool for the user to change the SAR system and moving target parameters, and predict the properties of a moving target signature in a subaperture SAR image for a scene that is composed of both stationary and moving targets. Hence, the SAR simulation and imaging code could be used to demonstrate the validity and accuracy of the above analytical principles to predict the properties of a moving target signature in a subaperture SAR image.
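The anticipated azimuth shift of a mover can be illustrated with the standard first-order GMTI displacement relation (a textbook approximation, not the paper's full analytical development; the function name and parameters are illustrative):

```python
def gmti_azimuth_shift(v_radial, v_platform, slant_range):
    """First-order SAR-GMTI rule of thumb: a target with radial velocity
    v_radial is displaced in azimuth by about (v_radial / v_platform) *
    slant_range in the focused image (the 'train off the tracks' effect)."""
    return (v_radial / v_platform) * slant_range
```

For example, a 5 m/s radial mover at 10 km slant range, imaged from a 200 m/s platform, appears displaced by 250 m in azimuth.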
Applying an accurate spherical model to gamma-ray burst afterglow observations
NASA Astrophysics Data System (ADS)
Leventis, K.; van der Horst, A. J.; van Eerten, H. J.; Wijers, R. A. M. J.
2013-05-01
We present results of model fits to afterglow data sets of GRB 970508, GRB 980703 and GRB 070125, characterized by long and broad-band coverage. The model assumes synchrotron radiation (including self-absorption) from a spherical adiabatic blast wave and consists of analytic flux prescriptions based on numerical results. For the first time it combines the accuracy of hydrodynamic simulations through different stages of the outflow dynamics with the flexibility of simple heuristic formulas. The prescriptions are especially geared towards accurate description of the dynamical transition of the outflow from relativistic to Newtonian velocities in an arbitrary power-law density environment. We show that the spherical model can accurately describe the data only in the case of GRB 970508, for which we find a circumburst medium density n ∝ r-2. We investigate in detail the implied spectra and physical parameters of that burst. For the microphysics we show evidence for equipartition between the fraction of energy density carried by relativistic electrons and magnetic field. We also find that for the blast wave to be adiabatic, the fraction of electrons accelerated at the shock has to be smaller than 1. We present best-fitting parameters for the afterglows of all three bursts, including uncertainties in the parameters of GRB 970508, and compare the inferred values to those obtained by different authors.
Accurate paleointensities - the multi-method approach
NASA Astrophysics Data System (ADS)
de Groot, Lennart
2016-04-01
The accuracy of models describing rapid changes in the geomagnetic field over the past millennia critically depends on the availability of reliable paleointensity estimates. Over the past decade, methods to derive paleointensities from lavas (the only recorder of the geomagnetic field that is available all over the globe and through geologic times) have seen significant improvements, and various alternative techniques were proposed. The 'classical' Thellier-style approach was optimized and selection criteria were defined in the 'Standard Paleointensity Definitions' (Paterson et al, 2014). The Multispecimen approach was validated, and the importance of additional tests and criteria to assess Multispecimen results was emphasized. Recently, a non-heating, relative paleointensity technique was proposed, the pseudo-Thellier protocol, which shows great potential in both accuracy and efficiency but currently lacks a solid theoretical underpinning. Here I present work using all three of the aforementioned paleointensity methods on suites of young lavas taken from the volcanic islands of Hawaii, La Palma, Gran Canaria, Tenerife, and Terceira. Many of the sampled cooling units are <100 years old; the actual field strength at the time of cooling is therefore reasonably well known. Rather intuitively, flows that produce coherent results from two or more different paleointensity methods yield the most accurate estimates of the paleofield. Furthermore, the results for some flows pass the selection criteria for one method but fail in other techniques. Scrutinizing and combining all acceptable results yielded reliable paleointensity estimates for 60-70% of all sampled cooling units - an exceptionally high success rate. This 'multi-method paleointensity approach' therefore has high potential to provide the much-needed paleointensities to improve geomagnetic field models for the Holocene.
Mill profiler machines soft materials accurately
NASA Technical Reports Server (NTRS)
Rauschl, J. A.
1966-01-01
Mill profiler machines bevels, slots, and grooves in soft materials, such as styrofoam phenolic-filled cores, to any desired thickness. A single operator can accurately control cutting depths in contour or straight line work.
Comparison between analytical and numerical solution of mathematical drying model
NASA Astrophysics Data System (ADS)
Shahari, N.; Rasmani, K.; Jamil, N.
2016-02-01
Drying, a process of transferring heat and mass within food, is widely used in the food industry to preserve food. Previous research based on a mass transfer equation was mostly concerned with comparing simulation models against experimental data. In this paper, the finite difference method was used to solve a mass equation during drying under two kinds of boundary condition: equilibrium and convective. The results of these two models provide a comparison between the analytical and the numerical solution, and the two solution curves show a close match. It is concluded that both proposed models produce an accurate description of the moisture content distribution during the drying process. This analysis gives confidence in the behaviour of moisture in the numerical simulation and demonstrates that the combined analytical and numerical approach shows the system behaving physically. On this basis, the mass transfer model was extended to include temperature transfer, and the result shows a trend similar to that of the simpler case.
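The numerical side described above can be sketched as an explicit finite-difference solution of the 1D moisture-diffusion equation with an equilibrium surface boundary condition and a symmetry condition at the slab centre (a minimal sketch; all parameter values are illustrative, not taken from the paper):

```python
import numpy as np

def dry_slab(D=1e-9, L=1e-3, M0=1.0, Me=0.1, nx=21, dt=0.01, steps=2000):
    """Explicit finite-difference solution of dM/dt = D * d2M/dx2 on half a
    slab, with an equilibrium boundary condition (surface moisture fixed at
    Me) and a zero-flux symmetry condition at the centre (x = 0)."""
    dx = L / (nx - 1)
    r = D * dt / dx**2
    assert r <= 0.5, "explicit scheme stability limit"
    M = np.full(nx, float(M0))
    M[-1] = Me
    for _ in range(steps):
        Mn = M.copy()
        # interior update: central difference in space, forward in time
        Mn[1:-1] = M[1:-1] + r * (M[2:] - 2.0 * M[1:-1] + M[:-2])
        Mn[0] = Mn[1]      # symmetry: no flux through the slab centre
        Mn[-1] = Me        # equilibrium boundary at the drying surface
        M = Mn
    return M
```

The convective boundary condition mentioned in the abstract would replace the fixed-value surface line with a flux balance at `M[-1]`.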
Importance of Accurate Measurements in Nutrition Research: Dietary Flavonoids as a Case Study
Technology Transfer Automated Retrieval System (TEKTRAN)
Accurate measurements of the secondary metabolites in natural products and plant foods are critical to establishing diet/health relationships. There are as many as 50,000 secondary metabolites which may influence human health. Their structural and chemical diversity present a challenge to analytic...
Fast and accurate predictions of covalent bonds in chemical space.
Chang, K Y Samuel; Fias, Stijn; Ramakrishnan, Raghunathan; von Lilienfeld, O Anatole
2016-05-01
We assess the predictive accuracy of perturbation theory based estimates of changes in covalent bonding due to linear alchemical interpolations among molecules. We have investigated σ bonding to hydrogen, as well as σ and π bonding between main-group elements, occurring in small sets of iso-valence-electronic molecules with elements drawn from second to fourth rows in the p-block of the periodic table. Numerical evidence suggests that first order Taylor expansions of covalent bonding potentials can achieve high accuracy if (i) the alchemical interpolation is vertical (fixed geometry), (ii) it involves elements from the third and fourth rows of the periodic table, and (iii) an optimal reference geometry is used. This leads to near linear changes in the bonding potential, resulting in analytical predictions with chemical accuracy (∼1 kcal/mol). Second order estimates deteriorate the prediction. If initial and final molecules differ not only in composition but also in geometry, all estimates become substantially worse, with second order being slightly more accurate than first order. The independent particle approximation based second order perturbation theory performs poorly when compared to the coupled perturbed or finite difference approach. Taylor series expansions up to fourth order of the potential energy curve of highly symmetric systems indicate a finite radius of convergence, as illustrated for the alchemical stretching of H2+. Results are presented for (i) covalent bonds to hydrogen in 12 molecules with 8 valence electrons (CH4, NH3, H2O, HF, SiH4, PH3, H2S, HCl, GeH4, AsH3, H2Se, HBr); (ii) main-group single bonds in 9 molecules with 14 valence electrons (CH3F, CH3Cl, CH3Br, SiH3F, SiH3Cl, SiH3Br, GeH3F, GeH3Cl, GeH3Br); (iii) main-group double bonds in 9 molecules with 12 valence electrons (CH2O, CH2S, CH2Se, SiH2O, SiH2S, SiH2Se, GeH2O, GeH2S, GeH2Se); (iv) main-group triple bonds in 9 molecules with 10 valence electrons (HCN, HCP, HCAs, HSiN, HSi
Fast and accurate predictions of covalent bonds in chemical space
NASA Astrophysics Data System (ADS)
Chang, K. Y. Samuel; Fias, Stijn; Ramakrishnan, Raghunathan; von Lilienfeld, O. Anatole
2016-05-01
We assess the predictive accuracy of perturbation theory based estimates of changes in covalent bonding due to linear alchemical interpolations among molecules. We have investigated σ bonding to hydrogen, as well as σ and π bonding between main-group elements, occurring in small sets of iso-valence-electronic molecules with elements drawn from second to fourth rows in the p-block of the periodic table. Numerical evidence suggests that first order Taylor expansions of covalent bonding potentials can achieve high accuracy if (i) the alchemical interpolation is vertical (fixed geometry), (ii) it involves elements from the third and fourth rows of the periodic table, and (iii) an optimal reference geometry is used. This leads to near linear changes in the bonding potential, resulting in analytical predictions with chemical accuracy (∼1 kcal/mol). Second order estimates deteriorate the prediction. If initial and final molecules differ not only in composition but also in geometry, all estimates become substantially worse, with second order being slightly more accurate than first order. The independent particle approximation based second order perturbation theory performs poorly when compared to the coupled perturbed or finite difference approach. Taylor series expansions up to fourth order of the potential energy curve of highly symmetric systems indicate a finite radius of convergence, as illustrated for the alchemical stretching of H2+. Results are presented for (i) covalent bonds to hydrogen in 12 molecules with 8 valence electrons (CH4, NH3, H2O, HF, SiH4, PH3, H2S, HCl, GeH4, AsH3, H2Se, HBr); (ii) main-group single bonds in 9 molecules with 14 valence electrons (CH3F, CH3Cl, CH3Br, SiH3F, SiH3Cl, SiH3Br, GeH3F, GeH3Cl, GeH3Br); (iii) main-group double bonds in 9 molecules with 12 valence electrons (CH2O, CH2S, CH2Se, SiH2O, SiH2S, SiH2Se, GeH2O, GeH2S, GeH2Se); (iv) main-group triple bonds in 9 molecules with 10 valence electrons (HCN, HCP, HCAs, HSiN, HSi
Johnson, Timothy C.; Wellman, Dawn M.
2015-06-26
Electrical resistivity tomography (ERT) has been widely used in environmental applications to study processes associated with subsurface contaminants and contaminant remediation. Anthropogenic alterations in subsurface electrical conductivity associated with contamination often originate from highly industrialized areas with significant amounts of buried metallic infrastructure. The deleterious influence of such infrastructure on imaging results generally limits the utility of ERT where it might otherwise prove useful for subsurface investigation and monitoring. In this manuscript we present a method of accurately modeling the effects of buried conductive infrastructure within the forward modeling algorithm, thereby removing them from the inversion results. The method is implemented in parallel using immersed interface boundary conditions, whereby the global solution is reconstructed from a series of well-conditioned partial solutions. Forward modeling accuracy is demonstrated by comparison with analytic solutions. Synthetic imaging examples are used to investigate imaging capabilities within a subsurface containing electrically conductive buried tanks, transfer piping, and well casing, using both well casings and vertical electrode arrays as current sources and potential measurement electrodes. Results show that, although accurate infrastructure modeling removes the dominating influence of buried metallic features, the presence of metallic infrastructure degrades imaging resolution compared to standard ERT imaging. However, accurate imaging results may be obtained if electrodes are appropriately located.
Accurate free energy calculation along optimized paths.
Chen, Changjun; Xiao, Yi
2010-05-01
The path-based methods of free energy calculation, such as thermodynamic integration and free energy perturbation, are simple in theory, but difficult in practice because in most cases smooth paths do not exist, especially for large molecules. In this article, we present a novel method to build the transition path of a peptide. We use harmonic potentials to restrain its nonhydrogen atom dihedrals in the initial state and set the equilibrium angles of the potentials as those in the final state. Through a series of steps of geometrical optimization, we can construct a smooth and short path from the initial state to the final state. This path can be used to calculate free energy difference. To validate this method, we apply it to a small 10-ALA peptide and find that the calculated free energy changes in helix-helix and helix-hairpin transitions are both self-convergent and cross-convergent. We also calculate the free energy differences between different stable states of beta-hairpin trpzip2, and the results show that this method is more efficient than the conventional molecular dynamics method in accurate free energy calculation.
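Thermodynamic integration itself, named above, can be illustrated on a toy system with a known answer. Here the spring constant of a harmonic oscillator is interpolated between two states and dF/dλ = ⟨dU/dλ⟩ is integrated over λ; the closed-form ensemble average ⟨x²⟩ = kT/k(λ) stands in for molecular dynamics sampling (a pedagogical sketch, not the authors' path-optimization method):

```python
def ti_free_energy(k0, k1, kT=1.0, n=1000):
    """Free energy change by thermodynamic integration for
    U(x; lam) = 0.5 * k(lam) * x**2 with k(lam) = k0 + lam * (k1 - k0).
    dF/dlam = <dU/dlam> = 0.5 * (k1 - k0) * <x**2>, and for this toy
    system <x**2> = kT / k(lam) is known exactly, so the only remaining
    approximation is the trapezoidal quadrature over lam.
    Exact answer for comparison: 0.5 * kT * ln(k1 / k0)."""
    dk = k1 - k0

    def mean_dU_dlam(lam):
        return 0.5 * dk * kT / (k0 + lam * dk)

    h = 1.0 / n
    total = 0.5 * (mean_dU_dlam(0.0) + mean_dU_dlam(1.0))
    total += sum(mean_dU_dlam(i * h) for i in range(1, n))
    return total * h
```

With k0 = 1 and k1 = 4 the quadrature reproduces the analytic result 0.5·kT·ln 4 to a few parts in 10⁶.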
Fast and Provably Accurate Bilateral Filtering.
Chaudhury, Kunal N; Dabhade, Swapnil D
2016-06-01
The bilateral filter is a non-linear filter that uses a range filter along with a spatial filter to perform edge-preserving smoothing of images. A direct computation of the bilateral filter requires O(S) operations per pixel, where S is the size of the support of the spatial filter. In this paper, we present a fast and provably accurate algorithm for approximating the bilateral filter when the range kernel is Gaussian. In particular, for box and Gaussian spatial filters, the proposed algorithm can cut down the complexity to O(1) per pixel for any arbitrary S. The algorithm has a simple implementation involving N+1 spatial filterings, where N is the approximation order. We give a detailed analysis of the filtering accuracy that can be achieved by the proposed approximation in relation to the target bilateral filter. This allows us to estimate the order N required to obtain a given accuracy. We also present comprehensive numerical results to demonstrate that the proposed algorithm is competitive with the state-of-the-art methods in terms of speed and accuracy. PMID:27093722
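The baseline that the fast algorithm approximates, the direct O(S)-per-pixel bilateral filter with Gaussian spatial and range kernels, can be sketched as follows (a reference implementation for illustration only, not the paper's O(1) algorithm):

```python
import numpy as np

def bilateral(img, sigma_s=2.0, sigma_r=0.1, radius=3):
    """Direct bilateral filter: O(S) operations per pixel, where
    S = (2*radius + 1)**2 is the spatial support."""
    h, w = img.shape
    out = np.empty((h, w), dtype=float)
    ax = np.arange(-radius, radius + 1)
    dx, dy = np.meshgrid(ax, ax)
    spatial = np.exp(-(dx**2 + dy**2) / (2.0 * sigma_s**2))  # spatial Gaussian
    pad = np.pad(img.astype(float), radius, mode="edge")
    for i in range(h):
        for j in range(w):
            patch = pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            # range Gaussian penalizes intensity differences, so pixels on
            # the far side of an edge get near-zero weight
            rng = np.exp(-(patch - pad[i + radius, j + radius])**2
                         / (2.0 * sigma_r**2))
            weights = spatial * rng
            out[i, j] = (weights * patch).sum() / weights.sum()
    return out
```

A constant image passes through unchanged, and with a small range sigma a step edge is smoothed on each side but not blurred across.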
Insight solutions are correct more often than analytic solutions
Salvi, Carola; Bricolo, Emanuela; Kounios, John; Bowden, Edward; Beeman, Mark
2016-01-01
How accurate are insights compared to analytical solutions? In four experiments, we investigated how participants’ solving strategies influenced their solution accuracies across different types of problems, including one that was linguistic, one that was visual and two that were mixed visual-linguistic. In each experiment, participants’ self-judged insight solutions were, on average, more accurate than their analytic ones. We hypothesised that insight solutions have superior accuracy because they emerge into consciousness in an all-or-nothing fashion when the unconscious solving process is complete, whereas analytic solutions can be guesses based on conscious, prematurely terminated, processing. This hypothesis is supported by the finding that participants’ analytic solutions included relatively more incorrect responses (i.e., errors of commission) than timeouts (i.e., errors of omission) compared to their insight responses. PMID:27667960
Analytical methods under emergency conditions
Sedlet, J.
1983-01-01
This lecture discusses methods for the radiochemical determination of internal contamination of the body under emergency conditions, here defined as a situation in which results on internal radioactive contamination are needed quickly. The purpose of speed is to determine the necessity for medical treatment to increase the natural elimination rate. Analytical methods discussed include whole-body counting, organ counting, wound monitoring, and excreta analysis. 12 references. (ACR)
Advances in analytical chemistry
NASA Technical Reports Server (NTRS)
Arendale, W. F.; Congo, Richard T.; Nielsen, Bruce J.
1991-01-01
Implementation of computer programs based on multivariate statistical algorithms makes possible obtaining reliable information from long data vectors that contain large amounts of extraneous information, for example, noise and/or analytes that we do not wish to control. Three examples are described. Each of these applications requires the use of techniques characteristic of modern analytical chemistry. The first example, using a quantitative or analytical model, describes the determination of the acid dissociation constant for 2,2'-pyridyl thiophene using archived data. The second example describes an investigation to determine the active biocidal species of iodine in aqueous solutions. The third example is taken from a research program directed toward advanced fiber-optic chemical sensors. The second and third examples require heuristic or empirical models.
Competing on talent analytics.
Davenport, Thomas H; Harris, Jeanne; Shapiro, Jeremy
2010-10-01
Do investments in your employees actually affect workforce performance? Who are your top performers? How can you empower and motivate other employees to excel? Leading-edge companies such as Google, Best Buy, Procter & Gamble, and Sysco use sophisticated data-collection technology and analysis to answer these questions, leveraging a range of analytics to improve the way they attract and retain talent, connect their employee data to business performance, differentiate themselves from competitors, and more. The authors present the six key ways in which companies track, analyze, and use data about their people, ranging from a simple baseline of metrics to monitor the organization's overall health to custom modeling for predicting future head count depending on various "what if" scenarios. They go on to show that companies competing on talent analytics manage data and technology at an enterprise level, support what analytical leaders do, choose realistic targets for analysis, and hire analysts with strong interpersonal skills as well as broad expertise.
NASA Astrophysics Data System (ADS)
Rassoulinejad-Mousavi, S. M.; Abbasbandy, S.; Alsulami, H. H.
2014-08-01
Hydrodynamics of a conducting visco-elastic fluid in a porous medium sandwiched between two parallel plates, under the effect of the Lorentz force, for both moving and stationary wall boundary conditions, is considered in this paper. The non-linear momentum equation is solved analytically using the optimal homotopy analysis method (OHAM), and the effect of the parameters governing the physics of the problem on the dimensionless velocity profile and skin friction coefficient is demonstrated. The robustness of the analytical solution is checked by comparison with numerical results and by plotting the residual errors. Results show excellent agreement between the numerical and analytical solutions. Furthermore, the error diagrams show that OHAM yields accurate results for all values of the effective parameters, such as the porous medium shape factor, Forchheimer number, visco-elastic parameter, Reynolds number and the parameters related to the Lorentz force.
Analytical optical scattering in clouds
NASA Technical Reports Server (NTRS)
Phanord, Dieudonne D.
1989-01-01
An analytical optical model for scattering of light due to lightning by clouds of different geometry is being developed. The self-consistent approach and the equivalent-medium concept of Twersky were used to treat the case corresponding to outside illumination. Thus, with knowledge of the bulk parameters, the resulting multiple scattering problem is transformed into scattering by a single obstacle in isolation. Based on the size parameter of a typical water droplet compared to the incident wavelength, the problem for the single scatterer equivalent to the distribution of cloud particles can be solved by either Mie or Rayleigh scattering theory. The supercomputing code of Wiscombe can be used immediately to produce results that can be compared to the Monte Carlo computer simulation for outside incidence. A fairly reasonable inverse approach using the solution of the outside-illumination case was proposed to model analytically the situation for point sources located inside an optically thick cloud. Its mathematical details are still being investigated. When finished, it will provide scientists with an enhanced capability to study more realistic clouds. For testing purposes, the direct approach to the inside illumination of clouds by lightning is under consideration. An analytical solution for the cubic cloud will soon be obtained. For cylindrical or spherical clouds, preliminary results are needed for scattering by bounded obstacles above or below a penetrable surface interface.
Frontiers in analytical chemistry
Amato, I.
1988-12-15
Doing more with less was the modus operandi of R. Buckminster Fuller, the late science genius and inventor of such things as the geodesic dome. In late September, chemists described their own version of this maxim--learning more chemistry from less material and in less time--in a symposium titled Frontiers in Analytical Chemistry at the 196th National Meeting of the American Chemical Society in Los Angeles. Symposium organizer Allen J. Bard of the University of Texas at Austin assembled six speakers, himself among them, to survey widely differing areas of analytical chemistry.
Monitoring the analytic surface.
Spence, D P; Mayes, L C; Dahl, H
1994-01-01
How do we listen during an analytic hour? Systematic analysis of the speech patterns of one patient (Mrs. C.) strongly suggests that the clustering of shared pronouns (e.g., you/me) represents an important aspect of the analytic surface, preconsciously sensed by the analyst and used by him to determine when to intervene. Sensitivity to these patterns increases over the course of treatment, and in a final block of 10 hours shows a striking degree of contingent responsivity: specific utterances by the patient are consistently echoed by the analyst's interventions. PMID:8182248
SU-E-T-517: Analytic Formalism to Compute in Real Time Dose Distributions Delivered by HDR Units
Pokhrel, S; Loyalka, S; Palaniswaamy, G; Rangaraj, D; Izaguirre, E
2014-06-01
Purpose: Develop an analytical algorithm to compute the dose delivered by Ir-192 dwell positions with high accuracy using the 3-dimensional (3D) dose distribution of an HDR source. Using our analytical function, the dose delivered by an HDR unit as treatment progresses can be determined from the actual delivered temporal and positional data of each individual dwell. Consequently, the true delivered dose can be computed as each catheter becomes active. We hypothesize that such an analytical formulation will allow the development of HDR systems with a real-time treatment evaluation tool to avoid mistreatments. Methods: In our analytic formulation, the dose is computed using the full anisotropy function data of the TG-43 formalism with a 3D ellipsoidal function. The discrepancy between the planned dose and the delivered dose is computed using an analytic perturbation method over the initial dose distribution. This methodology speeds up the computation because only the changes in dose originating from spatial and temporal deviations are computed. A dose difference map at the point of interest is obtained from these functions, and this difference can be shown in real time during treatment to examine the treatment accuracy. Results: We determine the analytical solution and a perturbation function for the three translational, three rotational, and one temporal error components in source distributions. The analytic formulation is a sequence of simple equations that can be processed on any modern computer in a few seconds. Because the computations are based on an analytical solution, small dose deviations arising from sub-millimeter positional changes can be detected. Conclusions: We formulated an analytical method to compute 4D dose distributions and dose differences based on an analytical solution and perturbations to the original dose. This method is highly accurate.
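The TG-43 evaluation underlying such a dose computation can be illustrated with the point-source form of the protocol (a minimal sketch; the dose-rate constant and radial dose function passed in below are placeholders, not Ir-192 consensus data):

```python
def tg43_point_dose_rate(air_kerma_strength, dose_rate_constant,
                         r_cm, radial_dose, anisotropy=1.0, r0_cm=1.0):
    """Point-source TG-43 dose rate:
        D(r) = S_K * Lambda * (r0/r)**2 * g(r) * phi(r)
    where (r0/r)**2 is the point-source geometry-function ratio,
    g(r) the radial dose function (a callable here), and phi(r)
    the anisotropy factor; r0 is the 1 cm reference distance."""
    geometry_ratio = (r0_cm / r_cm) ** 2
    return (air_kerma_strength * dose_rate_constant *
            geometry_ratio * radial_dose(r_cm) * anisotropy)
```

With g ≡ 1, doubling the distance from the reference point cuts the dose rate by a factor of four, which is the inverse-square behaviour the perturbation method in the abstract builds on.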
A-B Similarity-Complementarity and Accurate Empathy.
ERIC Educational Resources Information Center
Gillam, Sandra; McGinley, Hugh
1983-01-01
Rated the audio portions of videotaped segments of 32 dyadic interviews between A-type and B-type undergraduate males for accurate empathy using Truax's AE-Scale. Results indicated B-types elicited higher levels of empathy when they interacted with other B-types, while any dyad that contained an A-type resulted in less empathy. (JAC)
Toward more accurate loss tangent measurements in reentrant cavities
Moyer, R. D.
1980-05-01
Karpova has described an absolute method for measurement of dielectric properties of a solid in a coaxial reentrant cavity. His cavity resonance equation yields very accurate results for dielectric constants. However, he presented only approximate expressions for the loss tangent. This report presents more exact expressions for that quantity and summarizes some experimental results.
NASA Astrophysics Data System (ADS)
Zhang, Shunli; Zhang, Dinghua; Gong, Hao; Ghasemalizadeh, Omid; Wang, Ge; Cao, Guohua
2014-11-01
Iterative algorithms, such as the algebraic reconstruction technique (ART), are popular for image reconstruction. For iterative reconstruction, the area integral model (AIM) is more accurate for better reconstruction quality than the line integral model (LIM). However, the computation of the system matrix for AIM is more complex and time-consuming than that for LIM. Here, we propose a fast and accurate method to compute the system matrix for AIM. First, we calculate the intersection of each boundary line of a narrow fan-beam with pixels in a recursive and efficient manner. Then, by grouping the beam-pixel intersection area into six types according to the slopes of the two boundary lines, we analytically compute the intersection area of the narrow fan-beam with the pixels in a simple algebraic fashion. Overall, experimental results show that our method is about three times faster than the Siddon algorithm and about two times faster than the distance-driven model (DDM) in computation of the system matrix. The reconstruction speed of our AIM-based ART is also faster than the LIM-based ART that uses the Siddon algorithm and DDM-based ART, for one iteration. The fast reconstruction speed of our method was accomplished without compromising the image quality.
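For contrast with the area-integral model above, the line-integral system-matrix entries produced by the Siddon algorithm are exact ray-pixel intersection lengths. A compact Siddon-style sketch (illustrative only, not the authors' implementation) is:

```python
import numpy as np

def ray_pixel_lengths(p0, p1, nx, ny, pixel=1.0):
    """Exact intersection lengths of the segment p0 -> p1 with an
    nx-by-ny pixel grid, returned as {(ix, iy): length}. Grid lines
    sit at multiples of the pixel size, origin at (0, 0)."""
    p0, p1 = np.asarray(p0, float), np.asarray(p1, float)
    d = p1 - p0
    # parametric values (0 <= a <= 1) where the ray crosses grid lines
    alphas = {0.0, 1.0}
    for axis, n in ((0, nx), (1, ny)):
        if d[axis] != 0.0:
            for k in range(n + 1):
                a = (k * pixel - p0[axis]) / d[axis]
                if 0.0 < a < 1.0:
                    alphas.add(a)
    alphas = sorted(alphas)
    length = float(np.linalg.norm(d))
    out = {}
    for a0, a1 in zip(alphas[:-1], alphas[1:]):
        mid = p0 + 0.5 * (a0 + a1) * d       # midpoint locates the pixel
        ix, iy = int(mid[0] // pixel), int(mid[1] // pixel)
        if 0 <= ix < nx and 0 <= iy < ny:
            out[(ix, iy)] = out.get((ix, iy), 0.0) + (a1 - a0) * length
    return out
```

The area-integral model replaces each length with the overlap area between a narrow fan-beam and the pixel, which is what the grouping into six slope cases in the paper accelerates.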
Liu, Hongcheng; Ponniah, Gomathinayagam; Neill, Alyssa; Patel, Rekha; Andrien, Bruce
2013-12-17
Methionine (Met) oxidation is a major modification of proteins, which converts Met to Met sulfoxide as the common product. It is challenging to determine the level of Met sulfoxide, because it can be generated during sample preparation and analysis as an artifact. To determine the level of Met sulfoxide in proteins accurately, an isotope labeling and LC-MS peptide mapping method was developed. Met residues in proteins were fully oxidized using hydrogen peroxide enriched with (18)O atoms before sample preparation. Therefore, it was impossible to generate Met sulfoxide as an artifact during sample preparation. The molecular weight difference of 2 Da between Met sulfoxide with the (16)O atom and Met sulfoxide with the (18)O atom was used to differentiate and calculate the level of Met sulfoxide in the sample originally. Using a recombinant monoclonal antibody as a model protein, much lower levels of Met sulfoxide were detected for the two susceptible Met residues with this new method compared to a typical peptide mapping procedure. The results demonstrated efficient elimination of the analytical artifact during LC-MS peptide mapping for the measurement of Met sulfoxide. This method can thus be used when accurate determination of the level of Met sulfoxide is critical.
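The key arithmetic is that sulfoxide peaks carrying 16O must predate the labeling, since the deliberate pre-oxidation deposits only 18O; the original oxidation level then follows from the two isotopic peak intensities (a minimal sketch; the function name is illustrative):

```python
def preexisting_oxidation_fraction(intensity_16o, intensity_18o):
    """Fraction of Met sulfoxide present before labeling. Sulfoxide
    peaks 2 Da lighter (16O) were oxidized in the original sample;
    peaks carrying 18O arise from the full oxidation with 18O-enriched
    hydrogen peroxide performed before sample preparation."""
    return intensity_16o / (intensity_16o + intensity_18o)
```

For instance, a 16O peak at 5% of the combined isotopic intensity implies a 5% pre-existing oxidation level, regardless of any oxidation introduced later during preparation.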
Important Nearby Galaxies without Accurate Distances
NASA Astrophysics Data System (ADS)
McQuinn, Kristen
2014-10-01
The Spitzer Infrared Nearby Galaxies Survey (SINGS) and its offspring programs (e.g., THINGS, HERACLES, KINGFISH) have resulted in a fundamental change in our view of star formation and the ISM in galaxies, and together they represent the most complete multi-wavelength data set yet assembled for a large sample of nearby galaxies. These great investments of observing time have been dedicated to the goal of understanding the interstellar medium, the star formation process, and, more generally, galactic evolution at the present epoch. Nearby galaxies provide the basis for which we interpret the distant universe, and the SINGS sample represents the best studied nearby galaxies. Accurate distances are fundamental to interpreting observations of galaxies. Surprisingly, many of the SINGS spiral galaxies have numerous distance estimates, resulting in confusion. We can rectify this situation for 8 of the SINGS spiral galaxies within 10 Mpc at a very low cost through measurements of the tip of the red giant branch. The proposed observations will provide an accuracy of better than 0.1 in distance modulus. Our sample includes such well known galaxies as M51 (the Whirlpool), M63 (the Sunflower), M104 (the Sombrero), and M74 (the archetypal grand design spiral). We are also proposing coordinated parallel WFC3 UV observations of the central regions of the galaxies, rich with high-mass UV-bright stars. As a secondary science goal we will compare the resolved UV stellar populations with integrated UV emission measurements used in calibrating star formation rates. Our observations will complement the growing HST UV atlas of high resolution images of nearby galaxies.
Accurate 12D dipole moment surfaces of ethylene
NASA Astrophysics Data System (ADS)
Delahaye, Thibault; Nikitin, Andrei V.; Rey, Michael; Szalay, Péter G.; Tyuterev, Vladimir G.
2015-10-01
Accurate ab initio full-dimensional dipole moment surfaces of ethylene are computed using the coupled-cluster approach and its explicitly correlated counterpart CCSD(T)-F12, combined respectively with the cc-pVQZ and cc-pVTZ-F12 basis sets. Their analytical representations are provided through 4th-order normal mode expansions. First-principles predictions of the line intensities using the variational method up to J = 30 are in excellent agreement with the experimental data in the range 0-3200 cm-1. Errors of 0.25-6.75% in integrated intensities for fundamental bands are comparable with experimental uncertainties. The overall calculated C2H4 opacity in the 600-3300 cm-1 range agrees with the experimental determination to better than 0.5%.
Visual Analytics for MOOC Data.
Qu, Huamin; Chen, Qing
2015-01-01
With the rise of massive open online courses (MOOCs), tens of millions of learners can now enroll in more than 1,000 courses via MOOC platforms such as Coursera and edX. As a result, a huge amount of data has been collected. Compared with traditional education records, the data from MOOCs has much finer granularity and also contains new pieces of information. It is the first time in history that such comprehensive data related to learning behavior has become available for analysis. What roles can visual analytics play in this MOOC movement? The authors survey the current practice and argue that MOOCs provide an opportunity for visualization researchers and that visual analytics systems for MOOCs can benefit a range of end users such as course instructors, education researchers, students, university administrators, and MOOC providers. PMID:26594957
Analytical Services Management System
Church, Shane; Nigbor, Mike; Hillman, Daniel
2005-03-30
Analytical Services Management System (ASMS) provides sample management services. Sample management includes sample planning for analytical requests, sample tracking for shipping and receiving by the laboratory, receipt of the analytical data deliverable, processing the deliverable, and payment of the laboratory conducting the analyses. ASMS is a web-based application that provides the ability to manage these activities at multiple locations for different customers. ASMS provides for the assignment of single to multiple samples for standard chemical and radiochemical analyses. ASMS is a flexible system which allows the users to request analyses by line item code. Line item codes are selected based on the Basic Ordering Agreement (BOA) format for contracting with participating laboratories. ASMS also allows contracting with non-BOA laboratories using a similar line item code contracting format for their services. ASMS allows sample and analysis tracking from sample planning and collection in the field through sample shipment, laboratory sample receipt, laboratory analysis and submittal of the requested analyses, electronic data transfer, and payment of the laboratories for the completed analyses. The software, when in operation, contains business sensitive material that is used as a principal portion of the Kaiser Analytical Management Services business model. The software version provided is the most recent version; however, the copy of the application does not contain business sensitive data from the associated Oracle tables, such as contract information or price per line item code.
Challenges for Visual Analytics
Thomas, James J.; Kielman, Joseph
2009-09-23
Visual analytics has seen unprecedented growth in its first five years of mainstream existence. Great progress has been made in a short time, yet great challenges must be met in the next decade to provide new technologies that will be widely accepted by societies throughout the world. This paper sets the stage for some of those challenges in an effort to provide the stimulus for the research, both basic and applied, to address and exceed the envisioned potential for visual analytics technologies. We start with a brief summary of the initial challenges, followed by a discussion of the initial driving domains and applications, as well as additional applications and domains that have been a part of recent rapid expansion of visual analytics usage. We look at the common characteristics of several tools illustrating emerging visual analytics technologies, and conclude with the top ten challenges for the field of study. We encourage feedback and collaborative participation by members of the research community, the wide array of user communities, and private industry.
Analytical Chemistry Laboratory
NASA Technical Reports Server (NTRS)
Anderson, Mark
2013-01-01
The Analytical Chemistry and Material Development Group maintains a capability in chemical analysis, materials R&D, failure analysis, and contamination control. The uniquely qualified staff and facility support the needs of flight projects, science instrument development and various technical tasks, as well as Caltech.
Analytics: Changing the Conversation
ERIC Educational Resources Information Center
Oblinger, Diana G.
2013-01-01
In this third and concluding discussion on analytics, the author notes that we live in an information culture. We are accustomed to having information instantly available and accessible, along with feedback and recommendations. We want to know what people think and like (or dislike). We want to know how we compare with "others like me."…
Specificity is key to successful application of analytics.
Costello, David; Kaldenberg, Dennis
2015-02-01
More data do not necessarily equate to better analytics. Choosing the right analytics tools and applying them to specific areas leads to better results. A Midwest hospital used single-point metrics to identify underperforming facilities and drive improvements.
Accurate Biomass Estimation via Bayesian Adaptive Sampling
NASA Technical Reports Server (NTRS)
Wheeler, Kevin R.; Knuth, Kevin H.; Castle, Joseph P.; Lvov, Nikolay
2005-01-01
The following concepts were introduced: a) Bayesian adaptive sampling for solving biomass estimation; b) characterization of MISR Rahman model parameters conditioned upon MODIS landcover; c) a rigorous non-parametric Bayesian approach to analytic mixture model determination; and d) a unique U.S. asset for science product validation and verification.
Modified chemiluminescent NO analyzer accurately measures NOX
NASA Technical Reports Server (NTRS)
Summers, R. L.
1978-01-01
Installation of a molybdenum nitric oxide (NO)-to-higher oxides of nitrogen (NOx) converter in a chemiluminescent gas analyzer, together with use of an air purge, allows accurate measurements of NOx in exhaust gases containing as much as thirty percent carbon monoxide (CO). Measurements using a conventional analyzer are highly inaccurate for NOx if as little as five percent CO is present. In the modified analyzer, molybdenum has a high tolerance to CO, and the air purge substantially quenches NOx destruction. In tests, the modified chemiluminescent analyzer accurately measured NO and NOx concentrations for over 4 months with no degradation in performance.
Two Approaches in the Lunar Libration Theory: Analytical vs. Numerical Methods
NASA Astrophysics Data System (ADS)
Petrova, Natalia; Zagidullin, Arthur; Nefediev, Yurii; Kosulin, Valerii
2016-10-01
Observation of the physical libration of the Moon and other celestial bodies is one of the astronomical methods to remotely evaluate the internal structure of a celestial body without using expensive space experiments. A review of the results obtained from the study of physical libration is presented in the report. The main emphasis is placed on the description of successful lunar laser ranging for libration determination and on the methods of simulating the physical libration. As a result, estimates of the viscoelastic and dissipative properties of the lunar body and of the lunar core parameters were obtained. The core's existence was confirmed by the recent reprocessing of seismic data from the Apollo missions. Attention is paid to the physical interpretation of the phenomenon of free libration and methods of its determination. A significant part of the report is devoted to describing the practical application of the most accurate analytical tables of lunar libration to date, built by comprehensive analytical processing of the residual differences obtained when comparing long-term series of laser observations with the numerical ephemeris DE421 [1]. In general, the basic outline of the report reflects the effectiveness of two approaches in libration theory: numerical and analytical solution. It is shown that the two approaches complement each other in studying the Moon in different aspects: the numerical approach provides the high accuracy of the theory necessary for adequate treatment of modern high-accuracy observations, while the analytic approach reveals the essence of the various manifestations in the lunar rotation and allows one to predict and interpret new effects in observations of physical libration [2]. [1] Rambaux, N., J. G. Williams, 2011, The Moon's physical librations and determination of their free modes, Celest. Mech. Dyn. Astron., 109, 85-100. [2] Petrova N., A. Zagidullin, Yu. Nefediev. Analysis of long-periodic variations of lunar libration parameters on the basis
Accurately measuring MPI broadcasts in a computational grid
Karonis N T; de Supinski, B R
1999-05-06
An MPI library's implementation of broadcast communication can significantly affect the performance of applications built with that library. In order to choose between similar implementations or to evaluate available libraries, accurate measurements of broadcast performance are required. As we demonstrate, existing methods for measuring broadcast performance are either inaccurate or inadequate. Fortunately, we have designed an accurate method for measuring broadcast performance, even in a challenging grid environment. Measuring broadcast performance is not easy. Simply sending one broadcast after another allows them to proceed through the network concurrently, thus resulting in inaccurate per-broadcast timings. Existing methods either fail to eliminate this pipelining effect or eliminate it by introducing overheads that are as difficult to measure as the performance of the broadcast itself. This problem becomes even more challenging in grid environments. Latencies along different links can vary significantly. Thus, an algorithm's performance is difficult to predict from its communication pattern. Even when accurate prediction is possible, the pattern is often unknown. Our method introduces a measurable overhead to eliminate the pipelining effect, regardless of variations in link latencies. Accurate measurements could help users choose between different available implementations. Also, accurate and complete measurements could guide use of a given implementation to improve application performance. These choices will become even more important as grid-enabled MPI libraries [6, 7] become more common, since bad choices are likely to cost significantly more in grid environments. In short, the distributed processing community needs accurate, succinct and complete measurements of collective communications performance. Since successive collective communications can often proceed concurrently, accurately measuring them is difficult. Some benchmarks use knowledge of the communication algorithm to predict the
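The pipelining effect described above can be made concrete with a toy model (my own, purely for illustration; not the paper's benchmark): if broadcasts overlap in the network, dividing the total elapsed time for a long run of back-to-back broadcasts by their count recovers the inter-broadcast gap, not the latency of a single broadcast.

```python
def back_to_back_elapsed(latency, gap, n):
    """Elapsed time for n pipelined broadcasts (toy model): the first
    completes after `latency`, each subsequent one `gap` later."""
    return latency + (n - 1) * gap

latency, gap, n = 10.0, 2.0, 1000   # arbitrary time units, assumed values
naive_per_bcast = back_to_back_elapsed(latency, gap, n) / n
print(naive_per_bcast)  # ~2.0: close to the gap, far below the true latency of 10.0
```

This is why the paper's method deliberately introduces a measurable overhead between broadcasts to break the pipeline before timing them.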
Analytic evolution of singular distribution amplitudes in QCD
NASA Astrophysics Data System (ADS)
Radyushkin, A. V.; Tandogan, A.
2014-04-01
We describe a method of analytic evolution of distribution amplitudes (DAs) that have singularities, such as nonzero values at the end points of the support region, jumps at some points inside the support region and cusps. We illustrate the method by applying it to the evolution of a flat (constant) DA and antisymmetric flat DA, and then use the method for evolution of the two-photon generalized distribution amplitude. Our approach has advantages over the standard method of expansion in Gegenbauer polynomials, which requires an infinite number of terms in order to accurately reproduce functions in the vicinity of singular points, and over a straightforward iteration of an initial distribution with evolution kernel. The latter produces logarithmically divergent terms at each iteration, while in our method the logarithmic singularities are summed from the start, which immediately produces a continuous curve, with only one or two iterations needed afterwards in order to get rather precise results.
Palm: Easing the Burden of Analytical Performance Modeling
Tallent, Nathan R.; Hoisie, Adolfy
2014-06-01
Analytical (predictive) application performance models are critical for diagnosing performance-limiting resources, optimizing systems, and designing machines. Creating models, however, is difficult because they must be both accurate and concise. To ease the burden of performance modeling, we developed Palm, a modeling tool that combines top-down (human-provided) semantic insight with bottom-up static and dynamic analysis. To express insight, Palm defines a source code modeling annotation language. By coordinating models and source code, Palm's models are `first-class' and reproducible. Unlike prior work, Palm formally links models, functions, and measurements. As a result, Palm (a) uses functions to either abstract or express complexity; (b) generates hierarchical models (representing an application's static and dynamic structure); and (c) automatically incorporates measurements to focus attention, represent constant behavior, and validate models. We discuss generating models for three different applications.
Analytic Evolution of Singular Distribution Amplitudes in QCD
Radyushkin, Anatoly V.; Tandogan Kunkel, Asli
2014-03-01
We describe a method of analytic evolution of distribution amplitudes (DAs) that have singularities, such as non-zero values at the end-points of the support region, jumps at some points inside the support region, and cusps. We illustrate the method by applying it to the evolution of a flat (constant) DA and an antisymmetric flat DA, and then use it for evolution of the two-photon generalized distribution amplitude. Our approach has advantages over the standard method of expansion in Gegenbauer polynomials, which requires an infinite number of terms in order to accurately reproduce functions in the vicinity of singular points, and over a straightforward iteration of an initial distribution with the evolution kernel. The latter produces logarithmically divergent terms at each iteration, while in our method the logarithmic singularities are summed from the start, which immediately produces a continuous curve, with only one or two iterations needed afterwards in order to get rather precise results.
Analytical Characterization of Scalar-Field Oscillons in Quartic Potentials
NASA Astrophysics Data System (ADS)
Sicilia, David Pasquale
In this thesis I present a series of simple models of scalar field oscillons which allow estimation of the basic properties of oscillons using nonperturbative analytical methods, with minimal dependence on computer simulation. The methods are applied to oscillons in phi^4 Klein-Gordon models in two and three spatial dimensions, yielding results with good accuracy in the characterization of most aspects of oscillon dynamics. In particular, I show how oscillons can be interpreted as long-lived perturbations about an attractor in field configuration space. By investigating their radiation rate as they approach the attractor, I obtain an accurate estimate of their lifetimes in d=3 and explain why they seem to be perturbatively stable in d=2, where d is the number of spatial dimensions. I also present some preliminary work on a method to calculate the form of the spatial profile of the oscillon.
Networked analytical sample management system
Kerrigan, W.J.; Spencer, W.A.
1986-01-01
Since 1982, the Savannah River Laboratory (SRL) has operated a computer-controlled analytical sample management system. The system, programmed in COBOL, runs on the site IBM 3081 mainframe computer. The system provides for the following subtasks: sample logging, analytical method assignment, worklist generation, cost accounting, and results reporting. Within these subtasks the system functions in a time-sharing mode. Communications between subtasks are done overnight in a batch mode. The system currently supports management of up to 3000 samples a month. Each sample requires, on average, three independent methods. Approximately 100 different analytical techniques are available for customized input of data. The laboratory has implemented extensive computer networking using Ethernet. Electronic mail, RS/1, and online literature searches are in place. Based on our experience with the existing sample management system, we have begun a project to develop a second generation system. The new system will utilize the panel designs developed for the present LIMS, incorporate more realtime features, and take advantage of the many commercial LIMS systems.
Can Appraisers Rate Work Performance Accurately?
ERIC Educational Resources Information Center
Hedge, Jerry W.; Laue, Frances J.
The ability of individuals to make accurate judgments about others is examined and literature on this subject is reviewed. A wide variety of situational factors affects the appraisal of performance. It is generally accepted that the purpose of the appraisal influences the accuracy of the appraiser. The instrumentation, or tools, available to the…
Accurate pointing of tungsten welding electrodes
NASA Technical Reports Server (NTRS)
Ziegelmeier, P.
1971-01-01
Thoriated-tungsten is pointed accurately and quickly by using sodium nitrite. Point produced is smooth and no effort is necessary to hold the tungsten rod concentric. The chemically produced point can be used several times longer than ground points. This method reduces time and cost of preparing tungsten electrodes.
Analytical quality by design: a tool for regulatory flexibility and robust analytics.
Peraman, Ramalingam; Bhadraya, Kalva; Padmanabha Reddy, Yiragamreddy
2015-01-01
Very recently, the Food and Drug Administration (FDA) has approved a few new drug applications (NDAs) with regulatory flexibility for a quality by design (QbD) based analytical approach. The concept of QbD applied to analytical method development is now known as AQbD (analytical quality by design). It allows the analytical method to move within the method operable design region (MODR). Unlike current methods, an analytical method developed using the AQbD approach reduces the number of out-of-trend (OOT) results and out-of-specification (OOS) results due to the robustness of the method within the region. It is a current trend in the pharmaceutical industry to implement AQbD in the method development process as a part of risk management, pharmaceutical development, and the pharmaceutical quality system (ICH Q10). Owing to the lack of explanatory reviews, this paper has been communicated to discuss different views of analytical scientists about implementation of AQbD in the pharmaceutical quality system and also to correlate it with product quality by design and pharmaceutical analytical technology (PAT).
Analytical Quality by Design: A Tool for Regulatory Flexibility and Robust Analytics
Bhadraya, Kalva; Padmanabha Reddy, Yiragamreddy
2015-01-01
Very recently, the Food and Drug Administration (FDA) has approved a few new drug applications (NDAs) with regulatory flexibility for a quality by design (QbD) based analytical approach. The concept of QbD applied to analytical method development is now known as AQbD (analytical quality by design). It allows the analytical method to move within the method operable design region (MODR). Unlike current methods, an analytical method developed using the AQbD approach reduces the number of out-of-trend (OOT) results and out-of-specification (OOS) results due to the robustness of the method within the region. It is a current trend in the pharmaceutical industry to implement AQbD in the method development process as a part of risk management, pharmaceutical development, and the pharmaceutical quality system (ICH Q10). Owing to the lack of explanatory reviews, this paper has been communicated to discuss different views of analytical scientists about implementation of AQbD in the pharmaceutical quality system and also to correlate it with product quality by design and pharmaceutical analytical technology (PAT). PMID:25722723
Accurate upwind-monotone (nonoscillatory) methods for conservation laws
NASA Technical Reports Server (NTRS)
Huynh, Hung T.
1992-01-01
The well known MUSCL scheme of Van Leer is constructed using a piecewise linear approximation. The MUSCL scheme is second-order accurate in the smooth part of the solution except at extrema, where the accuracy degenerates to first order due to the monotonicity constraint. To construct accurate schemes which are free from oscillations, the author introduces the concept of upwind monotonicity. Several classes of schemes, which are upwind monotone and of uniform second- or third-order accuracy, are then presented. Results for advection with constant speed are shown. It is also shown that the new scheme compares favorably with state-of-the-art methods.
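As a concrete point of reference for the monotonicity constraint discussed above, here is a minimal minmod-limited second-order upwind step for linear advection. This is a generic textbook MUSCL-type variant of my own choosing, not Huynh's upwind-monotone schemes themselves; its total variation (TV) cannot grow, which is the oscillation-free property at issue.

```python
import numpy as np

def minmod(a, b):
    """Slope limiter: the smaller-magnitude argument when signs agree, else 0."""
    return np.where(a * b > 0.0, np.sign(a) * np.minimum(np.abs(a), np.abs(b)), 0.0)

def muscl_step(u, c):
    """One step of a minmod-limited second-order upwind scheme for
    u_t + a u_x = 0 (a > 0), CFL number c in (0, 1], periodic boundaries."""
    s = minmod(u - np.roll(u, 1), np.roll(u, -1) - u)   # limited cell slopes
    face = u + 0.5 * (1.0 - c) * s                      # upwind interface values
    return u - c * (face - np.roll(face, 1))

# advect a square wave; a monotone (TVD) scheme must not create oscillations,
# so the total variation of the solution cannot increase
x = np.linspace(0.0, 1.0, 100, endpoint=False)
u0 = np.where((x > 0.32) & (x < 0.58), 1.0, 0.0)
tv = lambda w: np.abs(w - np.roll(w, 1)).sum()
u = u0.copy()
for _ in range(50):
    u = muscl_step(u, 0.5)
print(tv(u0), tv(u))  # TV after stepping does not exceed the initial TV
```

At extrema the limiter returns zero slope, which is exactly the first-order degeneracy the abstract describes.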
Differential equation based method for accurate approximations in optimization
NASA Technical Reports Server (NTRS)
Pritchard, Jocelyn I.; Adelman, Howard M.
1990-01-01
A method to efficiently and accurately approximate the effect of design changes on structural response is described. The key to this method is to interpret sensitivity equations as differential equations that may be solved explicitly for closed form approximations, hence, the method is denoted the Differential Equation Based (DEB) method. Approximations were developed for vibration frequencies, mode shapes and static displacements. The DEB approximation method was applied to a cantilever beam and results compared with the commonly-used linear Taylor series approximations and exact solutions. The test calculations involved perturbing the height, width, cross-sectional area, tip mass, and bending inertia of the beam. The DEB method proved to be very accurate, and in most cases, was more accurate than the linear Taylor series approximation. The method is applicable to simultaneous perturbation of several design variables. Also, the approximations may be used to calculate other system response quantities. For example, the approximations for displacements are used to approximate bending stresses.
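The core idea can be illustrated with a toy response of my own (not the paper's beam model): read the exact sensitivity at the baseline design as the differential equation d(delta)/dh = p*delta/h, whose closed-form solution is a power law; for responses that really are power laws this is exact, while the linear Taylor step can err badly.

```python
# toy response: cantilever tip deflection delta(h) = C / h**3, where h is the
# section height (C lumps load, length, modulus, width -- assumed values)
C, h0 = 2.0, 1.0
delta0 = C / h0**3
sens = -3.0 * C / h0**4            # exact sensitivity d(delta)/dh at h0

# linear Taylor approximation about h0
taylor = lambda h: delta0 + sens * (h - h0)

# DEB-style approximation: interpret the sensitivity as the ODE
# d(delta)/dh = p * delta / h and integrate it in closed form (a power law)
p = sens * h0 / delta0             # = -3 for this response
deb = lambda h: delta0 * (h / h0) ** p

h = 1.3                            # a 30% design perturbation
exact = C / h**3
print(exact, deb(h), taylor(h))    # the DEB form reproduces the exact value here
```

The 30% perturbation is deliberately large: near h0 both approximations agree, and the difference appears only for sizable design changes.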
Analytical technique for satellite projected cross-sectional area calculation
NASA Astrophysics Data System (ADS)
Ben-Yaacov, Ohad; Edlerman, Eviatar; Gurfil, Pini
2015-07-01
Calculating the projected cross-sectional area (PCSA) of a satellite along a given direction is essential for implementing attitude control modes such as Sun pointing or minimum-drag. The PCSA may also be required for estimating the forces and torques induced by atmospheric drag and solar radiation pressure. This paper develops a new analytical method for calculating the PCSA, the concomitant torques and the satellite exposed surface area, based on the theory of convex polygons. A scheme for approximating the outer surface of any satellite by polygons is developed. Then, a methodology for calculating the projections of the polygons along a given vector is employed. The methodology also accounts for overlaps among projections, and is capable of providing the true PCSA in a computationally-efficient manner. Using the Space Autonomous Mission for Swarming and Geo-locating Nanosatellites mechanical model, it is shown that the new analytical method yields accurate results, which are similar to results obtained from alternative numerical tools.
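For a convex closed surface the overlap bookkeeping described above disappears: projections of front-facing polygons never overlap, so the PCSA is simply the sum of their projected areas. The sketch below is my own minimal implementation in the spirit of the method (not the authors' code), checked on a unit cube.

```python
import numpy as np

def polygon_area_vector(verts):
    """Vector area (unit normal times area) of a planar 3-D polygon,
    0.5 * sum(v_i x v_{i+1}); verts is an (n,3) sequence ordered around
    the boundary, origin-independent for a closed loop."""
    v = np.asarray(verts, float)
    return np.cross(v, np.roll(v, -1, axis=0)).sum(axis=0) / 2.0

def projected_area(faces, direction):
    """PCSA of a convex, closed, outward-oriented polygonal surface along
    `direction`: sum of front-facing projected face areas (no overlaps
    by convexity; a general satellite model would need overlap handling)."""
    d = np.asarray(direction, float)
    d = d / np.linalg.norm(d)
    total = 0.0
    for f in faces:
        total += max(0.0, polygon_area_vector(f) @ d)  # back faces contribute 0
    return total

# unit cube with outward-oriented faces; PCSA along z must be exactly 1
cube = [
    [(0,0,0),(0,1,0),(1,1,0),(1,0,0)],  # z=0 face, normal -z
    [(0,0,1),(1,0,1),(1,1,1),(0,1,1)],  # z=1 face, normal +z
    [(0,0,0),(1,0,0),(1,0,1),(0,0,1)],  # y=0 face, normal -y
    [(0,1,0),(0,1,1),(1,1,1),(1,1,0)],  # y=1 face, normal +y
    [(0,0,0),(0,0,1),(0,1,1),(0,1,0)],  # x=0 face, normal -x
    [(1,0,0),(1,1,0),(1,1,1),(1,0,1)],  # x=1 face, normal +x
]
print(projected_area(cube, (0, 0, 1)))  # 1.0
```

Along the body diagonal (1,1,1) three faces are front-facing and the cube's PCSA is sqrt(3), a handy second sanity check.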
Lee, Ping I
2011-10-10
The purpose of this review is to provide an overview of approximate analytical solutions to the general moving boundary diffusion problems encountered during the release of a dispersed drug from matrix systems. Starting from the theoretical basis of the Higuchi equation and its subsequent improvement and refinement, available approximate analytical solutions for the more complicated cases involving heterogeneous matrix, boundary layer effect, finite release medium, surface erosion, and finite dissolution rate are also discussed. Among various modeling approaches, the pseudo-steady state assumption employed in deriving the Higuchi equation and related approximate analytical solutions appears to yield reasonably accurate results in describing the early stage release of a dispersed drug from matrices of different geometries whenever the initial drug loading (A) is much larger than the drug solubility (C(s)) in the matrix (or A≫C(s)). However, when the drug loading is not in great excess of the drug solubility (i.e. low A/C(s) values) or when the drug loading approaches the drug solubility (A→C(s)) which occurs often with drugs of high aqueous solubility, approximate analytical solutions based on the pseudo-steady state assumption tend to fail, with the Higuchi equation for planar geometry exhibiting a 11.38% error as compared with the exact solution. In contrast, approximate analytical solutions to this problem without making the pseudo-steady state assumption, based on either the double-integration refinement of the heat balance integral method or the direct simplification of available exact analytical solutions, show close agreement with the exact solutions in different geometries, particularly in the case of low A/C(s) values or drug loading approaching the drug solubility (A→C(s)). However, the double-integration heat balance integral approach is generally more useful in obtaining approximate analytical solutions especially when exact solutions are not
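The pseudo-steady-state Higuchi equation that anchors this review is compact enough to state directly. The sketch below is illustrative only (units are the caller's responsibility), showing the planar-geometry form and its characteristic square-root-of-time release.

```python
import math

def higuchi_q(t, D, Cs, A):
    """Cumulative drug released per unit area from a planar matrix:
    Q(t) = sqrt(D * Cs * (2A - Cs) * t), derived under the pseudo-steady-state
    assumption and accurate when the loading A greatly exceeds the solubility Cs."""
    if A < Cs:
        raise ValueError("dispersed-drug regime requires A >= Cs")
    return math.sqrt(D * Cs * (2.0 * A - Cs) * t)

# release follows sqrt(t): quadrupling the time doubles the released amount
q1 = higuchi_q(100.0, D=1e-6, Cs=1.0, A=10.0)   # assumed illustrative values
q2 = higuchi_q(400.0, D=1e-6, Cs=1.0, A=10.0)
print(q2 / q1)  # 2.0
```

As the review notes, this form degrades as A approaches Cs (an 11.38% error versus the exact planar solution), which is where the non-pseudo-steady-state approximations it surveys take over.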
A Simple and Accurate Method for Measuring Enzyme Activity.
ERIC Educational Resources Information Center
Yip, Din-Yan
1997-01-01
Presents methods commonly used for investigating enzyme activity using catalase and presents a new method for measuring catalase activity that is more reliable and accurate. Provides results that are readily reproduced and quantified. Can also be used for investigations of enzyme properties such as the effects of temperature, pH, inhibitors,…
NASA Technical Reports Server (NTRS)
Schmidt, R. F.
1987-01-01
This document discusses the determination of caustic surfaces in terms of rays, reflectors, and wavefronts. Analytical caustics are obtained as a family of lines, a set of points, and several types of equations for geometries encountered in optics and microwave applications. Standard methods of differential geometry are applied under different approaches: directly to reflector surfaces, and alternatively, to wavefronts, to obtain analytical caustics of two sheets or branches. Gauss/Seidel aberrations are introduced into the wavefront approach, forcing the retention of all three coefficients of both the first- and the second-fundamental forms of differential geometry. An existing method for obtaining caustic surfaces through exploitation of the singularities in flux density is examined, and several constant-intensity contour maps are developed using only the intrinsic Gaussian, mean, and normal curvatures of the reflector. Numerous references are provided for extending the material of the present document to the morphologies of caustics and their associated diffraction patterns.
Requirements for Predictive Analytics
Troy Hiltbrand
2012-03-01
It is important to have a clear understanding of how traditional Business Intelligence (BI) and analytics are different and how they fit together in optimizing organizational decision making. With traditional BI, activities are focused primarily on providing context to enhance a known set of information through aggregation, data cleansing and delivery mechanisms. As these organizations mature their BI ecosystems, they achieve a clearer picture of the key performance indicators signaling the relative health of their operations. Organizations that embark on activities surrounding predictive analytics and data mining go beyond simply presenting the data in a manner that will allow decision makers to have complete context around the information. These organizations generate models based on known information and then apply other organizational data against these models to reveal unknown information.
Analytic ICF Hohlraum Energetics
Rosen, M D; Hammer, J
2003-08-27
We apply recent analytic solutions to the radiation diffusion equation to problems of interest for ICF hohlraums. The solutions provide quantitative values for absorbed energy which are of use for generating a desired radiation temperature vs. time within the hohlraum. Comparison of supersonic and subsonic solutions (heat front velocity faster or slower, respectively, than the speed of sound in the x-ray heated material) suggests that there may be some advantage in using high-Z metallic foams as hohlraum wall material to reduce hydrodynamic losses, and hence, net absorbed energy by the walls. Analytic and numerical calculations suggest that the loss per unit area might be reduced by approximately 20% through use of foam hohlraum walls. Reduced hydrodynamic motion of the wall material may also reduce symmetry swings, as found for heavy ion targets.
Brune, D.; Forkman, B.; Persson, B.
1984-01-01
This book covers the general theories and techniques of nuclear chemical analysis, directed at applications in analytical chemistry, nuclear medicine, radiophysics, agriculture, environmental sciences, geological exploration, industrial process control, etc. The main principles of nuclear physics and nuclear detection on which the analysis is based are briefly outlined. An attempt is made to emphasise the fundamentals of activation analysis, detection and activation methods, as well as their applications. The book provides guidance in analytical chemistry, agriculture, environmental and biomedical sciences, etc. The contents include: the nuclear periodic system; nuclear decay; nuclear reactions; nuclear radiation sources; interaction of radiation with matter; principles of radiation detectors; nuclear electronics; statistical methods and spectral analysis; methods of radiation detection; neutron activation analysis; charged particle activation analysis; photon activation analysis; sample preparation and chemical separation; nuclear chemical analysis in biological and medical research; the use of nuclear chemical analysis in the field of criminology; nuclear chemical analysis in environmental sciences, geology and mineral exploration; and radiation protection.
Analytical applications of aptamers
NASA Astrophysics Data System (ADS)
Tombelli, S.; Minunni, M.; Mascini, M.
2007-05-01
Aptamers are single-stranded DNA or RNA ligands which can be selected for different targets starting from a library of molecules containing randomly created sequences. Aptamers have been selected to bind very different targets, from proteins to small organic dyes. Aptamers are proposed as alternatives to antibodies as biorecognition elements in analytical devices with ever increasing frequency. This is in order to satisfy the demand for quick, cheap, simple and highly reproducible analytical devices, especially for protein detection in the medical field or for the detection of smaller molecules in environmental and food analysis. In our recent experience, DNA and RNA aptamers, specific for three different proteins (Tat, IgE and thrombin), have been exploited as bio-recognition elements to develop specific biosensors (aptasensors). These recognition elements have been coupled to piezoelectric quartz crystals and surface plasmon resonance (SPR) devices as transducers, where the aptamers have been immobilized on the gold surface of the crystal electrodes or on SPR chips, respectively.
Analytic holographic superconductor
NASA Astrophysics Data System (ADS)
Herzog, Christopher P.
2010-06-01
We investigate a holographic superconductor that admits an analytic treatment near the phase transition. In the dual 3+1-dimensional field theory, the phase transition occurs when a scalar operator of scaling dimension two gets a vacuum expectation value. We calculate current-current correlation functions along with the speed of second sound near the critical temperature. We also make some remarks about critical exponents. An analytic treatment is possible because an underlying Heun equation describing the zero mode of the phase transition has a polynomial solution. Amusingly, the treatment here may generalize for an order parameter with any integer spin, and we propose a Lagrangian for a spin-two holographic superconductor.
Cowell, Andrew J.; Cowell, Amanda K.
2009-08-29
This paper discusses the design and use of anthropomorphic computer characters as nonplayer characters (NPCs) within analytical games. These new environments allow avatars to play a central role in supporting training and education goals instead of playing the supporting cast role. This new 'science' of gaming, driven by high-powered but inexpensive computers, dedicated graphics processors and realistic game engines, enables game developers to create learning and training opportunities on par with expensive real-world training scenarios. However, care and attention must be placed on how avatars are represented and thus perceived. A taxonomy of non-verbal behavior is presented and its application to analytical gaming discussed.
Industrial Analytics Corporation
Industrial Analytics Corporation
2004-01-30
The lost foam casting process is sensitive to the properties of the EPS patterns used for the casting operation. In this project Industrial Analytics Corporation (IAC) has developed a new low voltage x-ray instrument for x-ray radiography of very low mass EPS patterns. IAC has also developed a transmitted visible light method for characterizing the properties of EPS patterns. The systems developed are also applicable to other low density materials including graphite foams.
Analytical investigation of squeeze film dampers
NASA Astrophysics Data System (ADS)
Bicak, Mehmet Murat Altug
Squeeze film damping effects naturally occur when structures are subjected to loading situations such that a very thin film of fluid is trapped within structural joints, interfaces, etc. An accurate estimate of squeeze film effects is important for predicting the performance of dynamic structures. Starting from the linear Reynolds equation, which governs the fluid behavior, coupled with the structural domain modeled by the Kirchhoff plate equation, the effects of nondimensional parameters on the damped natural frequencies are presented using boundary characteristic orthogonal functions. For this purpose, the nondimensional coupled partial differential equations are obtained using the Rayleigh-Ritz method and the weak formulation, and are solved using polynomial and sinusoidal boundary characteristic orthogonal functions for the structure and fluid domains, respectively. In order to apply the present approach to complex geometries, a two-dimensional isoparametric coupled finite element is developed based on Reissner-Mindlin plate theory and the linearized Reynolds equation. The coupling between fluid and structure is handled by considering the pressure forces and structural surface velocities on the boundaries. The effects of the driving parameters on the frequency response functions are investigated. As the next logical step, an analytical method for the solution of squeeze film damping, based upon a Green's function for the nonlinear Reynolds equation and considering an elastic plate, is studied. This allows modal damping and stiffness forces to be calculated rapidly for various boundary conditions. The nonlinear Reynolds equation is divided into multiple linear non-homogeneous Helmholtz equations, which can then be solved using the presented approach. Approximate mode shapes of a rectangular elastic plate are used, enabling calculation of the damping ratio and frequency shift as well as the complex resistant pressure. Moreover, the theoretical results are correlated and compared with experimental results both in the
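For reference, the compressible isothermal Reynolds equation from which such squeeze-film analyses start can be written (in generic notation — p for film pressure, h for gap, μ for viscosity — not necessarily this thesis's symbols) as:

```latex
\nabla \cdot \left( \frac{p\,h^{3}}{12\mu}\,\nabla p \right)
  = \frac{\partial (p h)}{\partial t},
```

and its standard linearization about ambient pressure $p_a$ and nominal gap $h_0$, with small perturbations $p = p_a(1+\psi)$ and $h = h_0(1+\eta)$, reduces to a diffusion-type equation,

```latex
\nabla^{2}\psi
  = \frac{12\mu}{p_a h_0^{2}}\,
    \frac{\partial(\psi + \eta)}{\partial t},
```

which is the linear form referred to in the abstract; harmonic motion then turns it into the family of non-homogeneous Helmholtz equations mentioned there.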
Analytical Ultrasonics in Materials Research and Testing
NASA Technical Reports Server (NTRS)
Vary, A.
1986-01-01
Research results in analytical ultrasonics for characterizing structural materials from metals and ceramics to composites are presented. General topics covered by the conference included: status and advances in analytical ultrasonics for characterizing material microstructures and mechanical properties; status and prospects for ultrasonic measurements of microdamage, degradation, and underlying morphological factors; status and problems in precision measurements of frequency-dependent velocity and attenuation for materials analysis; procedures and requirements for automated, digital signal acquisition, processing, analysis, and interpretation; incentives for analytical ultrasonics in materials research and materials processing, testing, and inspection; and examples of progress in ultrasonics for interrelating microstructure, mechanical properties, and dynamic response.
Pérez-Ortega, Patricia; Lara-Ortega, Felipe J; García-Reyes, Juan F; Gilbert-López, Bienvenida; Trojanowicz, Marek; Molina-Díaz, Antonio
2016-11-01
The feasibility of accurate-mass multi-residue screening methods based on ultra-high-performance liquid chromatography high-resolution mass spectrometry (UHPLC-HRMS) with time-of-flight detection has been evaluated, including over 625 multiclass food contaminants as a case study. Aspects such as the selectivity and confirmation capability provided by HRMS with different acquisition modes (full-scan, or full-scan combined with collision-induced dissociation (CID) with no precursor ion isolation) and chromatographic separation have been examined, along with main limitations such as sensitivity and automated data processing. Compound identification was accomplished with retention time matching and accurate mass measurements of the targeted ions for each analyte (mainly (de)protonated molecules). Compounds with the same nominal mass (isobaric species) were very frequent due to the large number of compounds included. Although 76% of database compounds were involved in isobaric groups, they were resolved in most cases (99% of these isobaric species were distinguished by retention time, resolving power, isotopic profile or fragment ions). Only three pairs could not be resolved with these tools. In-source CID fragmentation was evaluated in depth, although the results obtained in terms of information provided were not as thorough as those obtained using fragmentation experiments without precursor ion isolation (all-ion mode). The latter acquisition mode was found to be the best suited for this type of large-scale screening method, instead of the classic product ion scan, as it provided excellent fragmentation information for confirmatory purposes for an unlimited number of compounds. Leaving aside the sample treatment limitations, the main weaknesses noticed are the relatively low sensitivity for compounds that do not respond well to electrospray ionization, and quantitation issues such as signal suppression due to either matrix effects from coeluting matrix or from
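The accurate-mass matching step described above can be sketched as a simple ppm-tolerance lookup. The compound names and exact masses below are illustrative placeholders, not entries from the study's 625-compound database:

```python
# Sketch of accurate-mass matching with a ppm tolerance window, as used in
# large-scale HRMS screening. Masses and names are hypothetical examples.

def ppm_error(measured, exact):
    """Mass error in parts per million."""
    return (measured - exact) / exact * 1e6

def match_candidates(measured_mz, database, tol_ppm=5.0):
    """Return database entries whose exact mass lies within tol_ppm
    of the measured m/z."""
    return [name for name, exact in database.items()
            if abs(ppm_error(measured_mz, exact)) <= tol_ppm]

database = {
    "compound_A": 222.1130,   # illustrative [M+H]+ exact mass
    "compound_B": 222.1125,   # isobaric with compound_A at nominal mass 222
    "compound_C": 202.0433,
}

# Both isobars fall inside a 5 ppm window; retention time, isotopic
# profile, or fragment ions must then distinguish them, as the
# abstract describes.
print(match_candidates(222.1128, database, tol_ppm=5.0))
```

This also illustrates why isobaric species were so frequent in a 625-compound database: a narrow ppm window still admits every compound sharing the same exact-mass neighborhood, so orthogonal evidence is required for confirmation.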
STRengthening Analytical Thinking for Observational Studies: the STRATOS initiative
Sauerbrei, Willi; Abrahamowicz, Michal; Altman, Douglas G; le Cessie, Saskia; Carpenter, James
2014-01-01
The validity and practical utility of observational medical research depends critically on good study design, excellent data quality, appropriate statistical methods and accurate interpretation of results. Statistical methodology has seen substantial development in recent times. Unfortunately, many of these methodological developments are ignored in practice. Consequently, design and analysis of observational studies often exhibit serious weaknesses. The lack of guidance on vital practical issues discourages many applied researchers from using more sophisticated and possibly more appropriate methods when analyzing observational studies. Furthermore, many analyses are conducted by researchers with a relatively weak statistical background and limited experience in using statistical methodology and software. Consequently, even ‘standard’ analyses reported in the medical literature are often flawed, casting doubt on their results and conclusions. An efficient way to help researchers to keep up with recent methodological developments is to develop guidance documents that are spread to the research community at large. These observations led to the initiation of the strengthening analytical thinking for observational studies (STRATOS) initiative, a large collaboration of experts in many different areas of biostatistical research. The objective of STRATOS is to provide accessible and accurate guidance in the design and analysis of observational studies. The guidance is intended for applied statisticians and other data analysts with varying levels of statistical education, experience and interests. In this article, we introduce the STRATOS initiative and its main aims, present the need for guidance documents and outline the planned approach and progress so far. We encourage other biostatisticians to become involved. PMID:25074480
The Locus analytical framework for indoor localization and tracking applications
NASA Astrophysics Data System (ADS)
Segou, Olga E.; Thomopoulos, Stelios C. A.
2015-05-01
Obtaining location information can be of paramount importance in the context of pervasive and context-aware computing applications. Many systems have been proposed to date, e.g., GPS, which has been proven to offer satisfactory results in outdoor areas. The increased effect of large and small scale fading in indoor environments, however, makes localization a challenge. This is particularly reflected in the multitude of different systems that have been proposed in the context of indoor localization (e.g., RADAR, Cricket, etc.). The performance of such systems is often validated on vastly different test beds and conditions, making performance comparisons difficult and often irrelevant. The Locus analytical framework incorporates algorithms from multiple disciplines, such as channel modeling, non-uniform random number generation, computational geometry, localization, tracking and probabilistic modeling, in order to provide: (a) fast and accurate signal propagation simulation, (b) fast experimentation with localization and tracking algorithms and (c) an in-depth analysis methodology for estimating the performance limits of any Received Signal Strength localization system. Simulation results for the well-known Fingerprinting and Trilateration algorithms are herein presented and validated with experimental data collected in real conditions using IEEE 802.15.4 ZigBee modules. The analysis shows that the Locus framework accurately predicts the underlying distribution of the localization error and produces further estimates of the system's performance limitations (in a best-case/worst-case scenario basis).
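A minimal sketch of the trilateration algorithm mentioned above, assuming range estimates have already been derived from received signal strength via a path-loss model; the anchor coordinates and distances are hypothetical, not data from the Locus experiments:

```python
# Trilateration by linearizing the three circle equations
# (x - xi)^2 + (y - yi)^2 = ri^2 against the first anchor,
# which yields a 2x2 linear system in (x, y).

def trilaterate(anchors, dists):
    """Solve for (x, y) given three anchor positions and ranges."""
    (x1, y1), (x2, y2), (x3, y3) = anchors
    r1, r2, r3 = dists
    # Subtracting the first circle equation from the other two:
    a11, a12 = 2 * (x2 - x1), 2 * (y2 - y1)
    a21, a22 = 2 * (x3 - x1), 2 * (y3 - y1)
    b1 = r1**2 - r2**2 + x2**2 - x1**2 + y2**2 - y1**2
    b2 = r1**2 - r3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a11 * a22 - a12 * a21  # nonzero for non-collinear anchors
    return ((b1 * a22 - b2 * a12) / det,
            (a11 * b2 - a21 * b1) / det)

anchors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
true_pos = (4.0, 3.0)
dists = [((true_pos[0] - x)**2 + (true_pos[1] - y)**2) ** 0.5
         for x, y in anchors]
print(trilaterate(anchors, dists))  # recovers (4.0, 3.0) with exact ranges
```

With noisy RSS-derived ranges the circles no longer intersect in a point, which is why a framework like Locus characterizes the resulting error distribution rather than a single estimate.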
ERIC Educational Resources Information Center
Badami, Rokhsareh; VaezMousavi, Mohammad; Wulf, Gabriele; Namazizadeh, Mahdi
2012-01-01
One purpose of the present study was to examine whether self-confidence or anxiety would be differentially affected by feedback from more accurate rather than less accurate trials. The second purpose was to determine whether arousal variations (activation) would predict performance. On Day 1, participants performed a golf putting task under one of…
Analytical investigation of curved steel girder behaviour
NASA Astrophysics Data System (ADS)
Simpson, Michael Donald
Horizontally curved bridges meet an increasing demand for complex highway geometries in congested urban areas. A popular type of curved bridge consists of steel I-girders interconnected by cross-frames and a composite concrete deck slab. Prior to hardening of the concrete deck, each I-girder is susceptible to a lateral torsional buckling-type failure. Unlike a straight I-girder, a curved I-girder resists major components of stress resulting from strong axis bending, weak axis bending and warping. The combination of these stresses reduces the available strength of a curved girder versus that of an equivalent straight girder. Experiments demonstrating the ultimate strength characteristics of curved girders are few in number. Of the available experimental research, few studies have used full-scale tests and boundary conditions indicative of those found in an actual bridge structure. Unlike straight girders, curved girders are characterized by nonlinear out-of-plane deformations which, depending upon the magnitude of curvature, may occur at very low load levels. Because of the inherent nonlinear behaviour, some have questioned the application of the term lateral torsional buckling to curved girders; rather, curved girders behave in a manner consistent with a deflection-amplification problem. Even with the advent of sophisticated analytical techniques, there is a glaring void in the documented literature regarding calibration of these techniques with known experimental curved girder behaviour. Presented here is an analytical study of the nonlinear modelling of curved steel girders and bridges. This is accomplished by incorporating large deflection and nonlinear material behaviour into three dimensional finite element models generated using the program ANSYS. Emphasis is placed on the calibration of the finite element method with known experimental ultimate strength data. It is demonstrated that accurate predictions of load deformation and ultimate strength are attainable via the
Visual Analytics: How Much Visualization and How Much Analytics?
Keim, Daniel; Mansmann, Florian; Thomas, James J.
2009-12-16
The term Visual Analytics has been around for almost five years now, but there are still ongoing discussions about what it actually is and, in particular, what is new about it. The core of our view on Visual Analytics is the new enabling and accessible analytic reasoning interactions supported by the combination of automated and visual analytics. In this paper, we outline the scope of Visual Analytics using two problem classes and three methodological classes in order to work out the need for and purpose of Visual Analytics. Thereby, the respective methods are explained, plus examples of analytic reasoning interaction leading to a glimpse into the future of how Visual Analytics methods will enable us to go beyond what is possible when separately using the two methods.
Preparation and accurate measurement of pure ozone.
Janssen, Christof; Simone, Daniela; Guinet, Mickaël
2011-03-01
Preparation of high purity ozone as well as precise and accurate measurement of its pressure are metrological requirements that are difficult to meet due to ozone decomposition occurring in pressure sensors. The most stable and precise transducer heads are heated and, therefore, prone to accelerated ozone decomposition, limiting measurement accuracy and compromising purity. Here, we describe a vacuum system and a method for ozone production, suitable to accurately determine the pressure of pure ozone by avoiding the problem of decomposition. We use an inert gas in a particularly designed buffer volume and can thus achieve high measurement accuracy and negligible degradation of ozone with purities of 99.8% or better. The high degree of purity is ensured by comprehensive compositional analyses of ozone samples. The method may also be applied to other reactive gases. PMID:21456766
Accurate modeling of parallel scientific computations
NASA Technical Reports Server (NTRS)
Nicol, David M.; Townsend, James C.
1988-01-01
Scientific codes are usually parallelized by partitioning a grid among processors. To achieve top performance it is necessary to partition the grid so as to balance workload and minimize communication/synchronization costs. This problem is particularly acute when the grid is irregular, changes over the course of the computation, and is not known until load time. Critical mapping and remapping decisions rest on the ability to accurately predict performance, given a description of a grid and its partition. This paper discusses one approach to this problem, and illustrates its use on a one-dimensional fluids code. The models constructed are shown to be accurate, and are used to find optimal remapping schedules.
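The mapping decision described above can be illustrated with a toy cost model in the same spirit: predicted step time is the maximum over processors of compute cost plus communication/synchronization cost, so an unbalanced partition is penalized by its slowest processor. The cost constants are hypothetical calibration parameters, not values from the paper:

```python
# Toy performance model for a 1-D grid partitioned across a chain of
# processors. Each processor's time is (cells * per-cell cost) plus a
# fixed cost per shared boundary; the slowest processor sets step time.

def predict_time(partition, t_cell=1.0, t_comm=5.0):
    """partition: list giving the number of grid cells per processor."""
    times = []
    for i, cells in enumerate(partition):
        # Interior processors exchange with two neighbors, ends with one.
        boundaries = (i > 0) + (i < len(partition) - 1)
        times.append(cells * t_cell + boundaries * t_comm)
    return max(times)

balanced = [25, 25, 25, 25]
skewed = [40, 20, 20, 20]
print(predict_time(balanced))  # 35.0: interior procs, 25 + 2*5
print(predict_time(skewed))    # 45.0: the overloaded end proc dominates
```

A model like this, once calibrated against measured per-cell and per-message costs, is what lets a remapping scheduler compare candidate partitions without running them, which is the role the abstract assigns to accurate performance prediction.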