NASA Astrophysics Data System (ADS)
Perry, Adam J.; Hodges, James N.; Markus, Charles R.; Kocheril, G. Stephen; McCall, Benjamin J.
2015-11-01
The H3+ molecular ion has served as a long-standing benchmark for state-of-the-art ab initio calculations of molecular potentials and variational calculations of rovibrational energy levels. However, the accuracy of such calculations could not have been confirmed without the wealth of spectroscopic data that has been made available for this molecule. Recently, a new high-precision ion spectroscopy technique was demonstrated by Hodges et al., which led to the first highly accurate and precise (∼MHz) H3+ transition frequencies. As an extension of this work, we present ten additional R-branch transitions measured to similar precision, a next step toward the ultimate goal of a comprehensive high-precision survey of this molecule, from which rovibrational energy levels can be calculated.
Measurement Model and Precision Analysis of Accelerometers for Maglev Vibration Isolation Platforms
Wu, Qianqian; Yue, Honghao; Liu, Rongqiang; Zhang, Xiaoyou; Ding, Liang; Liang, Tian; Deng, Zongquan
2015-08-14
High precision measurement of acceleration levels is required to allow active control for vibration isolation platforms. It is necessary to propose an accelerometer configuration measurement model that yields such a high measuring precision. In this paper, an accelerometer configuration to improve measurement accuracy is proposed. The corresponding calculation formulas of the angular acceleration were derived through theoretical analysis. A method is presented to minimize angular acceleration noise based on analysis of the root mean square noise of the angular acceleration. Moreover, the influence of installation position errors and accelerometer orientation errors on the calculation precision of the angular acceleration is studied. Comparisons of the output differences between the proposed configuration and the previous planar triangle configuration under the same installation errors are conducted by simulation. The simulation results show that installation errors have a relatively small impact on the calculation accuracy of the proposed configuration. To further verify the high calculation precision of the proposed configuration, experiments are carried out for both the proposed configuration and the planar triangle configuration. On the basis of the results of simulations and experiments, it can be concluded that the proposed configuration has higher angular acceleration calculation precision and can be applied to different platforms. PMID:26287203
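The paper's specific configuration is not reproduced here, but the underlying idea of deriving angular acceleration from spatially separated linear accelerometers can be sketched with a minimal two-sensor baseline (a simplification of the multi-accelerometer configurations discussed; names and the noise model are illustrative assumptions):

```python
import numpy as np

def angular_acceleration(a1, a2, d):
    """Estimate angular acceleration of a rigid platform from two linear
    accelerometers mounted a baseline d apart, with parallel sensitive axes.
    For a rigid body, a2 - a1 = alpha * d (tangential terms), so
    alpha = (a2 - a1) / d."""
    return (np.asarray(a2) - np.asarray(a1)) / d

def angular_rms_noise(sigma_a, d):
    """RMS noise of the derived angular acceleration, assuming equal,
    independent per-channel RMS noise sigma_a: the difference of two
    channels scales the noise by sqrt(2), and dividing by d scales it
    by 1/d. Increasing the baseline d therefore reduces angular noise."""
    return np.sqrt(2.0) * sigma_a / d
```

This makes explicit why the accelerometer layout matters: the angular-acceleration noise floor is set by the per-channel noise divided by the sensor baseline.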
Computational Calorimetry: High-Precision Calculation of Host–Guest Binding Thermodynamics
2015-01-01
We present a strategy for carrying out high-precision calculations of binding free energy and binding enthalpy values from molecular dynamics simulations with explicit solvent. The approach is used to calculate the thermodynamic profiles for binding of nine small molecule guests to either the cucurbit[7]uril (CB7) or β-cyclodextrin (βCD) host. For these systems, calculations using commodity hardware can yield binding free energy and binding enthalpy values with a precision of ∼0.5 kcal/mol (95% CI) in a matter of days. Crucially, the self-consistency of the approach is established by calculating the binding enthalpy directly, via end point potential energy calculations, and indirectly, via the temperature dependence of the binding free energy, i.e., by the van’t Hoff equation. Excellent agreement between the direct and van’t Hoff methods is demonstrated for both host–guest systems and an ion-pair model system for which particularly well-converged results are attainable. Additionally, we find that hydrogen mass repartitioning allows marked acceleration of the calculations with no discernible cost in precision or accuracy. Finally, we provide guidance for accurately assessing numerical uncertainty of the results in settings where complex correlations in the time series can pose challenges to statistical analysis. The routine nature and high precision of these binding calculations opens the possibility of including measured binding thermodynamics as target data in force field optimization so that simulations may be used to reliably interpret experimental data and guide molecular design. PMID:26523125
Hadron production and neutrino beams
NASA Astrophysics Data System (ADS)
Guglielmi, A.
2006-11-01
The precise measurements of the neutrino mixing parameters in the oscillation experiments at accelerators require new high-intensity and high-purity neutrino beams. Ancillary hadron-production measurements are then needed as inputs to precise calculation of neutrino beams and of atmospheric neutrino fluxes.
The advancement of the high precision stress polishing
NASA Astrophysics Data System (ADS)
Li, Chaoqiang; Lei, Baiping; Han, Yu
2016-10-01
Stress polishing is a highly efficient technology for machining large-diameter aspheric surfaces. This paper reviews its principle, its application to the processing of large aspheric mirrors, and the state of research both in China and abroad, addressing two problems: the insufficient precision of the mirror-surface deformation calculated by some traditional theories, and the inability of the output precision and stability of the support device used in stress polishing to meet requirements. Improvements are proposed in three aspects: the characterization of the mirror's elastic deformation during stress polishing, the deformation theory of the influence function together with the calculation of the correction force, and the mechanical design of the actuator. These improvements raise the precision of stress polishing and provide a theoretical basis for its further application to large-diameter aspheric machining.
Spectroscopic Factors From the Single Neutron Pickup Reaction ^64Zn(d,t)
NASA Astrophysics Data System (ADS)
Leach, Kyle; Garrett, P. E.; Demand, G. A.; Finlay, P.; Green, K. L.; Phillips, A. A.; Sumithrarachchi, C. S.; Svensson, C. E.; Triambak, S.; Ball, G. C.; Faestermann, T.; Krücken, R.; Wirth, H.-F.; Herten-Berger, R.
2008-10-01
A great deal of attention has recently been paid to high-precision superallowed β-decay Ft values. With the availability of extremely high-precision (<0.1%) experimental data, the precision on Ft is now limited by the ˜1% theoretical corrections [I.S. Towner and J.C. Hardy, Phys. Rev. C 77, 025501 (2008)]. This limitation is most evident in heavier superallowed nuclei (e.g. ^62Ga), where the isospin-symmetry-breaking correction calculations become more difficult due to the truncated model space. Experimental data are needed to help constrain input parameters for these calculations, and thus experimental spectroscopic factors for these nuclei are important. Preliminary results from the single-nucleon-transfer reaction ^64Zn(d,t)^63Zn will be presented, and the implications for calculations of isospin-symmetry breaking in the superallowed 0^+ decay of ^62Ga will be discussed.
High-precision positioning system of four-quadrant detector based on the database query
NASA Astrophysics Data System (ADS)
Zhang, Xin; Deng, Xiao-guo; Su, Xiu-qin; Zheng, Xiao-qiang
2015-02-01
The fine pointing mechanism of the Acquisition, Pointing and Tracking (APT) system in free-space laser communication usually uses a four-quadrant detector (QD) to point and track the laser beam accurately, and the positioning precision of the QD is one of the key factors in the pointing accuracy of the APT system. In this paper, a positioning system based on an FPGA and a DSP is designed, which implements A/D sampling, the positioning algorithm, and control of the fast steering mirror. Starting from the working principle of the QD, we analyze the positioning error of the spot center calculated by the universal algorithm when the spot energy follows a Gaussian distribution. A database is built by calculation and simulation in MATLAB, in which the spot center calculated by the universal algorithm is paired with the true center of the Gaussian beam; the database is stored in two E2PROM chips serving as external memory for the DSP. The DSP then recovers the center of the Gaussian beam by querying the database with the center calculated by the universal algorithm. Experimental results show that the positioning accuracy of this high-precision positioning system is much better than that of the universal algorithm alone.
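The correction scheme can be illustrated with a toy one-dimensional sketch: the "universal" quadrant-sum estimate is computed, and a precomputed table mapping true spot positions to their universal estimates is inverted by nearest-neighbour query. The quadrant labelling, the spot model, and the query method are illustrative assumptions, not the paper's exact implementation:

```python
def qd_centroid(qa, qb, qc, qd):
    """'Universal' four-quadrant estimate of the spot centre:
    normalised differences of opposing quadrant sums
    (quadrants labelled A top-right, B top-left, C bottom-left,
    D bottom-right)."""
    s = qa + qb + qc + qd
    x = ((qa + qd) - (qb + qc)) / s
    y = ((qa + qb) - (qc + qd)) / s
    return x, y

def build_table(true_positions, spot_model):
    """Tabulate (universal estimate, true position) pairs, where
    spot_model(p) returns the universal estimate produced by a
    Gaussian spot centred at p."""
    return [(spot_model(p), p) for p in true_positions]

def query(table, measured):
    """Recover the true spot centre whose tabulated universal
    estimate is closest to the measured one."""
    return min(table, key=lambda kv: abs(kv[0] - measured))[1]
```

In the real system the table is two-dimensional and stored in E2PROM; here a linear compression factor stands in for the Gaussian-spot response.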
High-precision arithmetic in mathematical physics
Bailey, David H.; Borwein, Jonathan M.
2015-05-12
For many scientific calculations, particularly those involving empirical data, IEEE 32-bit floating-point arithmetic produces results of sufficient accuracy, while for other applications IEEE 64-bit floating-point is more appropriate. But for some very demanding applications, even higher levels of precision are often required. This article discusses the challenge of high-precision computation in the context of mathematical physics, and highlights what facilities are required to support future computation, in light of emerging developments in computer architecture.
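The kind of beyond-64-bit precision discussed here can be demonstrated with Python's standard-library decimal module; the catastrophic-cancellation example below is our own illustration, not one from the article:

```python
from decimal import Decimal, getcontext

getcontext().prec = 50  # work with 50 significant decimal digits

# In IEEE 64-bit arithmetic, 1e16 + 1 rounds back to 1e16 (the unit in
# the last place at this magnitude is 2), so the small term vanishes:
x_float = (1e16 + 1.0) - 1e16      # 0.0 in double precision

# With 50-digit decimal arithmetic the small term survives exactly:
x_dec = (Decimal(10) ** 16 + 1) - Decimal(10) ** 16   # exactly 1
```

Libraries such as mpmath or MPFR-based packages offer the same idea with binary arithmetic and transcendental functions; the point is that precision becomes a runtime parameter rather than a hardware constant.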
High precision NC lathe feeding system rigid-flexible coupling model reduction technology
NASA Astrophysics Data System (ADS)
Xuan, He; Hua, Qingsong; Cheng, Lianjun; Zhang, Hongxin; Zhao, Qinghai; Mao, Xinkai
2017-08-01
This paper proposes a dynamic-substructure model-order-reduction approach for the rigid-flexible coupling model of a high-precision NC lathe feed system: a rigid-flexible coupling simulation model of the lathe is established in ADAMS, and the multi-degree-of-freedom model of the bolted connections in the feed system is reduced. Vibration simulation with the FD 3D damper shows the reduction to be very effective, making the vibration simulation both more accurate and faster.
Quantitative Determination of Isotope Ratios from Experimental Isotopic Distributions
Kaur, Parminder; O’Connor, Peter B.
2008-01-01
Isotope variability due to natural processes provides important information for studying a variety of complex natural phenomena from the origins of a particular sample to the traces of biochemical reaction mechanisms. These measurements require high-precision determination of isotope ratios of a particular element involved. Isotope Ratio Mass Spectrometers (IRMS) are widely employed tools for such a high-precision analysis, which have some limitations. This work aims at overcoming the limitations inherent to IRMS by estimating the elemental isotopic abundance from the experimental isotopic distribution. In particular, a computational method has been derived which allows the calculation of 13C/12C ratios from the whole isotopic distributions, given certain caveats, and these calculations are applied to several cases to demonstrate their utility. The limitations of the method in terms of the required number of ions and S/N ratio are discussed. For high-precision estimates of the isotope ratios, this method requires very precise measurement of the experimental isotopic distribution abundances, free from any artifacts introduced by noise, sample heterogeneity, or other experimental sources. PMID:17263354
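A much-simplified sketch of the idea of recovering an isotope ratio from a whole isotopic distribution: for a molecule with n carbons and no other polyisotopic elements (a deliberate simplification; the paper's method handles the general case and its caveats), the isotopologue intensities are binomial in the 13C fraction p, so p follows from the mean mass shift:

```python
def carbon13_fraction(intensities, n_carbons):
    """Estimate the 13C atomic fraction p from measured isotopologue
    intensities I_0, I_1, ..., assuming a purely binomial carbon
    isotope pattern. The binomial mean is n*p, so
    p = (mean mass shift) / n. Intensities need not be normalised."""
    total = sum(intensities)
    mean_shift = sum(k * i for k, i in enumerate(intensities)) / total
    return mean_shift / n_carbons

def ratio_13c_12c(intensities, n_carbons):
    """Convert the atomic fraction p into the 13C/12C ratio p/(1-p)."""
    p = carbon13_fraction(intensities, n_carbons)
    return p / (1.0 - p)
```

As the abstract notes, any such estimate is only as good as the measured abundances: noise, sample heterogeneity, or detector artifacts bias the mean shift directly.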
Self-Force Corrections to the Periapsis Advance around a Spinning Black Hole
NASA Astrophysics Data System (ADS)
van de Meent, Maarten
2017-01-01
The linear in mass ratio correction to the periapsis advance of equatorial nearly circular orbits around a spinning black hole is calculated for the first time and to very high precision, providing a key benchmark for different approaches modeling spinning binaries. The high precision of the calculation is leveraged to discriminate between two recent incompatible derivations of the fourth post-Newtonian (4PN) equations of motion. Finally, the limit of the periapsis advance near the innermost stable circular orbit (ISCO) allows the determination of the ISCO shift, validating previous calculations using the first law of binary mechanics. The calculation of the ISCO shift is further extended into the near-extremal regime (with spins up to 1 − a = 10^-20), revealing new, unexpected phenomenology. In particular, we find that the shift of the ISCO does not have a well-defined extremal limit but instead continues to oscillate.
High Astrometric Precision in the Calculation of the Coordinates of Orbiters in the GEO Ring
NASA Astrophysics Data System (ADS)
Lacruz, E.; Abad, C.; Downes, J. J.; Hernández-Pérez, F.; Casanova, D.; Tresaco, E.
2018-04-01
We present an astrometric method for calculating the positions of orbiters in the GEO ring with high precision, through a rigorous astrometric treatment of observations with a 1-m class telescope, which are part of the CIDA survey of the GEO ring. We compute the distortion pattern to correct for the systematic errors introduced by the optics and electronics of the telescope, resulting in absolute mean errors of 0.16″ and 0.12″ in right ascension and declination, respectively. These correspond to ≈ 25 m at the mean distance of the GEO ring, confirming the good quality of the results.
Nakata, Maho; Braams, Bastiaan J; Fujisawa, Katsuki; Fukuda, Mituhiro; Percus, Jerome K; Yamashita, Makoto; Zhao, Zhengji
2008-04-28
The reduced density matrix (RDM) method, a variational calculation based on the second-order reduced density matrix, is applied to the ground-state energies and dipole moments of 57 different states of atoms and molecules, and to the ground-state energies and 2-RDM elements of the Hubbard model. We explore the well-known N-representability conditions (P, Q, and G) together with the more recent and much stronger T1 and T2′ conditions; the T2′ condition was recently rederived and implies the T2 condition. Using these N-representability conditions, we can usually recover between 100% and 101% of the correlation energy, an accuracy similar to CCSD(T), and even better for high-spin states or anionic systems where CCSD(T) fails. Highly accurate calculations are carried out by handling equality constraints and/or by developing multiple-precision arithmetic in the semidefinite programming (SDP) solver. The results show that handling the equality constraints correctly improves the accuracy by 0.1 to 0.6 mhartree, and replacing the T2 condition with the T2′ condition typically gains a further 0.1-0.5 mhartree. The newly developed multiple-precision version of the SDP solver yields extraordinarily accurate energies for the one-dimensional Hubbard model and the Be atom: at least 16 significant digits, where double-precision calculations give only two to eight. It also provides physically meaningful results for the Hubbard model in the high-correlation limit.
Sakurai Prize: The Future of Higgs Physics
NASA Astrophysics Data System (ADS)
Dawson, Sally
2017-01-01
The discovery of the Higgs boson relied critically on precision calculations. The quantum contributions from the Higgs boson to the W and top quark masses suggested long before the Higgs discovery that a Standard Model Higgs boson should have a mass in the 100-200 GeV range. The experimental extraction of Higgs properties requires normalization to the predicted Higgs production and decay rates, for which higher order corrections are also essential. As Higgs physics becomes a mature subject, more and more precise calculations will be required. If there is new physics at high scales, it will contribute to the predictions and precision Higgs physics will be a window to beyond the Standard Model physics.
Inexact hardware for modelling weather & climate
NASA Astrophysics Data System (ADS)
Düben, Peter D.; McNamara, Hugh; Palmer, Tim
2014-05-01
The use of stochastic processing hardware and low precision arithmetic in atmospheric models is investigated. Stochastic processors allow hardware-induced faults in calculations, sacrificing exact calculations in exchange for improvements in performance, potentially accuracy, and a reduction in power consumption. A similar trade-off is achieved using low precision arithmetic, with improvements in computation and communication speed and savings in storage and memory requirements. As high-performance computing becomes more massively parallel and power intensive, these two approaches may be important stepping stones in the pursuit of global cloud-resolving atmospheric modelling. The impact of both hardware-induced faults and low precision arithmetic is tested in the dynamical core of a global atmosphere model. Our simulations show that neither approach to inexact calculation substantially affects the quality of the model simulations, provided it is restricted to act only on the smaller scales. This suggests that inexact calculations at the small scale could reduce computation and power costs without adversely affecting the quality of the simulations.
Spectroscopic Factors from the Single Neutron Pickup ^64Zn(d,t)
NASA Astrophysics Data System (ADS)
Leach, Kyle; Garrett, P. E.; Demand, G. A.; Finlay, P.; Green, K. L.; Phillips, A. A.; Rand, E. T.; Sumithrarachchi, C. S.; Svensson, C. E.; Triambak, S.; Wong, J.; Towner, I. S.; Ball, G. C.; Faestermann, T.; Krücken, R.; Hertenberger, R.; Wirth, H.-F.
2010-11-01
A great deal of attention has recently been paid to high-precision superallowed β-decay Ft values. With the availability of extremely high-precision (<0.1%) experimental data, the precision on the individual Ft values is now dominated by the ˜1% theoretical corrections. This limitation is most evident in heavier superallowed nuclei (e.g. ^62Ga), where the isospin-symmetry-breaking (ISB) correction calculations become more difficult due to the truncated model space. Experimental spectroscopic factors for these nuclei are important for identifying the relevant orbitals that should be included in the model space of the calculations. Motivated by this need, the single-nucleon-transfer reaction ^64Zn(d,t)^63Zn was conducted at the Maier-Leibnitz-Laboratory (MLL) of TUM/LMU in Munich, Germany, using a 22 MeV polarized deuteron beam from the tandem Van de Graaff accelerator and the TUM/LMU Q3D magnetic spectrograph, with angular distributions from 10° to 60°. Results from this experiment will be presented, and implications for calculations of ISB corrections in the superallowed 0^+ decay of ^62Ga will be discussed.
Real-time polarization imaging algorithm for camera-based polarization navigation sensors.
Lu, Hao; Zhao, Kaichun; You, Zheng; Huang, Kaoli
2017-04-10
Biologically inspired polarization navigation is a promising approach due to its autonomous nature, high precision, and robustness. Many researchers have built point-source-based and camera-based polarization navigation prototypes in recent years. Camera-based prototypes benefit from high spatial resolution but incur a heavy computation load, since the pattern recognition step in most polarization imaging algorithms involves several nonlinear calculations. In this paper, the polarization imaging and pattern recognition algorithms are optimized by exploiting the orthogonality of the Stokes parameters and the features of the solar meridian and the patterns of polarized skylight, reducing them to several linear calculations without affecting precision. The algorithm contains a pattern recognition stage based on a Hough transform as well as orientation measurement algorithms. The algorithm was loaded and run on a digital signal processing system to test its computational complexity. The test showed that the running time decreased from several thousand milliseconds to several tens of milliseconds. Simulations and experiments showed that the algorithm can measure orientation without reduced precision, and it can hence satisfy the practical demands of low computational load and high precision for use in embedded systems.
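The linearity the abstract exploits can be seen in the basic Stokes computation. A minimal sketch, assuming an idealised division-of-focal-plane camera that supplies intensities behind polarizers at 0°, 45°, 90°, and 135° (the exact pixel layout and calibration are the paper's concern, not shown here):

```python
import math

def stokes_aop(i0, i45, i90, i135):
    """Linear Stokes computation from four polarizer orientations.
    Q and U are simple differences of the measured intensities; only
    one atan2 is needed to recover the angle of polarization (AoP).
    For this ideal four-channel measurement, S0 = (sum of channels)/2,
    giving the degree of linear polarization (DoLP) as well."""
    q = i0 - i90
    u = i45 - i135
    aop = 0.5 * math.atan2(u, q)            # radians
    dolp = math.sqrt(q * q + u * u) / ((i0 + i45 + i90 + i135) / 2.0)
    return aop, dolp
```

Because q and u are linear in the pixel values, the per-pixel cost is a handful of additions; the single transcendental call per pixel is what the DSP implementation has to budget for.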
The use of imprecise processing to improve accuracy in weather & climate prediction
NASA Astrophysics Data System (ADS)
Düben, Peter D.; McNamara, Hugh; Palmer, T. N.
2014-08-01
The use of stochastic processing hardware and low precision arithmetic in atmospheric models is investigated. Stochastic processors allow hardware-induced faults in calculations, sacrificing bit-reproducibility and precision in exchange for improvements in performance and potentially accuracy of forecasts, due to a reduction in power consumption that could allow higher resolution. A similar trade-off is achieved using low precision arithmetic, with improvements in computation and communication speed and savings in storage and memory requirements. As high-performance computing becomes more massively parallel and power intensive, these two approaches may be important stepping stones in the pursuit of global cloud-resolving atmospheric modelling. The impact of both hardware-induced faults and low precision arithmetic is tested using the Lorenz '96 model and the dynamical core of a global atmosphere model. In the Lorenz '96 model there is a natural scale separation; the spectral discretisation used in the dynamical core also allows large and small scale dynamics to be treated separately within the code. Such scale separation allows the impact of lower-accuracy arithmetic to be restricted to components close to the truncation scales and hence close to the necessarily inexact parametrised representations of unresolved processes. By contrast, the larger scales are calculated using high precision deterministic arithmetic. Hardware faults from stochastic processors are emulated using a bit-flip model with different fault rates. Our simulations show that both approaches to inexact calculations do not substantially affect the large-scale behaviour, provided they are restricted to act only on the smaller scales. By contrast, results from the Lorenz '96 simulations are superior when the small scales are calculated on an emulated stochastic processor compared to when those scales are parametrised.
This suggests that inexact calculations at the small scale could reduce computation and power costs without adversely affecting the quality of the simulations. This would allow higher resolution models to be run at the same computational cost.
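Both forms of inexactness can be emulated in software on conventional hardware. A minimal sketch (the choice of flipping only the low 32 mantissa bits and the independent-per-bit fault model are our simplifying assumptions, not the paper's exact emulator):

```python
import random
import struct

def truncate_mantissa(x, keep_bits):
    """Emulate low-precision arithmetic by zeroing the trailing
    (52 - keep_bits) mantissa bits of an IEEE-754 double."""
    bits = struct.unpack('<Q', struct.pack('<d', x))[0]
    mask = ~((1 << (52 - keep_bits)) - 1) & 0xFFFFFFFFFFFFFFFF
    return struct.unpack('<d', struct.pack('<Q', bits & mask))[0]

def bit_flip(x, rate, rng=random):
    """Crude stochastic-processor fault model: flip each of the low
    32 mantissa bits independently with probability `rate`, leaving
    exponent and sign untouched so faults stay small in magnitude."""
    bits = struct.unpack('<Q', struct.pack('<d', x))[0]
    for b in range(32):
        if rng.random() < rate:
            bits ^= 1 << b
    return struct.unpack('<d', struct.pack('<Q', bits))[0]
```

Applying `truncate_mantissa` only to the high-wavenumber spectral coefficients, and full precision elsewhere, mirrors the scale-separation strategy described above.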
Compensation of Horizontal Gravity Disturbances for High Precision Inertial Navigation
Cao, Juliang; Wu, Meiping; Lian, Junxiang; Cai, Shaokun; Wang, Lin
2018-01-01
Horizontal gravity disturbances are an important factor affecting the accuracy of inertial navigation systems in long-duration ship navigation. In this paper, from the perspective of the coordinate system and vector calculation, the effects of horizontal gravity disturbances on the initial alignment and the navigation calculation are analyzed simultaneously. Horizontal gravity disturbances cause the navigation coordinate frame built during initial alignment to be inconsistent with the navigation coordinate frame in which the navigation calculation is implemented. This coordinate-frame mismatch violates the rules of vector calculation, which adversely affects the precision of the inertial navigation system. To address this issue, two compensation methods suitable for the two different navigation coordinate frames are proposed: one implements the compensation in the velocity calculation, and the other in the attitude calculation. Finally, simulations and ship navigation experiments confirm the effectiveness of the proposed methods. PMID:29562653
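The size of the effect can be illustrated by forming the gravity vector with horizontal disturbance terms from deflections of the vertical. A minimal sketch, not the paper's compensation scheme; note that sign conventions for the deflections vary between references, so the signs below are an assumption:

```python
import numpy as np

ARCSEC = np.pi / (180.0 * 3600.0)   # arcseconds to radians

def gravity_with_disturbance(g0, xi_arcsec, eta_arcsec):
    """Gravity vector in a local NED frame including horizontal
    disturbance components from deflections of the vertical:
    xi (north-south) and eta (east-west), in arcseconds.
    This sketch uses delta_gN = -xi*g0 and delta_gE = -eta*g0;
    some references use the opposite signs."""
    xi = xi_arcsec * ARCSEC
    eta = eta_arcsec * ARCSEC
    return np.array([-xi * g0, -eta * g0, g0])
```

A 5-arcsecond deflection already produces a horizontal acceleration of roughly 2.4e-4 m/s^2, which integrates into substantial position error over a long voyage, which is why the alignment and navigation frames must be made consistent.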
High-precision relative position and attitude measurement for on-orbit maintenance of spacecraft
NASA Astrophysics Data System (ADS)
Zhu, Bing; Chen, Feng; Li, Dongdong; Wang, Ying
2018-02-01
To keep satellites, space stations, and other spacecraft running on orbit for long periods, spacecraft life can be extended not only through long-life design of devices but also through on-orbit servicing and maintenance, which makes precise and detailed maintenance of key components necessary. In this paper, a high-precision relative position and attitude measurement method for the maintenance of key components is presented. The method mainly addresses the design of the passive cooperative marker, the light-emitting device, and the high-resolution camera in the presence of spatial stray light and noise. Through a series of algorithms, such as background elimination, feature extraction, and position and attitude calculation, high-precision relative pose parameters between the key operational parts and the maintenance equipment are obtained as input to the control system. Simulation results show that the algorithm is accurate and effective, satisfying the requirements of precision operation.
Design and algorithm research of high precision airborne infrared touch screen
NASA Astrophysics Data System (ADS)
Zhang, Xiao-Bing; Wang, Shuang-Jie; Fu, Yan; Chen, Zhao-Quan
2016-10-01
Infrared touch screens suffer from low precision, touch jitter, and a sharp decrease in touch precision when emitting or receiving tubes fail. A high-precision positioning algorithm based on extended axes is proposed to solve these problems. First, the unimpeded state of the beam between an emitting and a receiving tube is recorded as 0, and the impeded state as 1. Then an oblique scan is used, in which the light of one emitting tube is received by five receiving tubes, and the impeded-state information of all emitting and receiving tubes is collected as a matrix. Finally, the position of the touch object is calculated by arithmetic averaging. The extended-axis positioning algorithm retains high precision when individual infrared tubes fail, with only a slight effect on accuracy. Experimental results show that in 90% of the display area the touch error is less than 0.25D, where D is the distance between adjacent emitting tubes. It is concluded that the extended-axis algorithm offers high precision, little impact from the failure of individual infrared tubes, and ease of use.
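The arithmetic-averaging step can be sketched as follows: each blocked emitter-to-receiver beam contributes one candidate point, and the touch position is the mean of those candidates. The geometry (emitters along one edge, receivers along the opposite edge) and the use of beam midpoints are illustrative assumptions, not the paper's exact layout:

```python
def touch_position(blocked, emitter_xy, receiver_xy):
    """Estimate the touch point as the arithmetic mean of the midpoints
    of all blocked emitter->receiver beams. `blocked` is the list of
    (emitter_index, receiver_index) pairs recorded as 1 in the scan
    matrix; emitter_xy and receiver_xy map indices to (x, y) positions.
    Averaging over many oblique beams is what makes the estimate robust
    to the failure of any single tube."""
    xs, ys = [], []
    for e, r in blocked:
        ex, ey = emitter_xy[e]
        rx, ry = receiver_xy[r]
        xs.append((ex + rx) / 2.0)
        ys.append((ey + ry) / 2.0)
    n = len(blocked)
    return sum(xs) / n, sum(ys) / n
```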
Asymptotic Energies and QED Shifts for Rydberg States of Helium
NASA Technical Reports Server (NTRS)
Drake, G.W.F.
2007-01-01
This paper reviews progress that has been made in obtaining essentially exact solutions to the nonrelativistic three-body problem for helium by a combination of variational and asymptotic expansion methods. The calculation of relativistic and quantum electrodynamic corrections by perturbation theory is discussed, and in particular, methods for the accurate calculation of the Bethe logarithm part of the electron self-energy are presented. As an example, the results are applied to the calculation of isotope shifts for the short-lived 'halo' nucleus He-6 relative to He-4 in order to determine the nuclear charge radius of He-6 from high-precision spectroscopic measurements carried out at the Argonne National Laboratory. The results demonstrate that the high precision now available from atomic theory is opening new opportunities for novel measurement tools, and helium, along with hydrogen, can be regarded as a fundamental atomic system whose spectrum is well understood for all practical purposes.
Kumada, H; Saito, K; Nakamura, T; Sakae, T; Sakurai, H; Matsumura, A; Ono, K
2011-12-01
Treatment planning for boron neutron capture therapy generally utilizes Monte-Carlo methods for calculation of the dose distribution. The new treatment planning system JCDS-FX employs the multi-purpose Monte-Carlo code PHITS to calculate the dose distribution. JCDS-FX allows building a precise voxel model consisting of pixel-based voxel cells as small as 0.4×0.4×2.0 mm(3), in order to perform high-accuracy dose estimation, e.g. for the purpose of calculating the dose distribution in a human body. However, the miniaturization of the voxel size increases calculation time considerably. The aim of this study is to investigate sophisticated modeling methods which can perform Monte-Carlo calculations for human geometry efficiently. We therefore devised a new voxel modeling method, the "Multistep Lattice-Voxel method," which can configure a voxel model that combines different voxel sizes by applying the lattice function repeatedly. To verify the performance of calculations with this modeling method, several calculations for human geometry were carried out. The results demonstrate that the Multistep Lattice-Voxel method can reduce the calculation time for a precise voxel model substantially while maintaining high accuracy of the dose estimation.
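The payoff of mixing voxel sizes can be seen in a toy two-level model: a coarse grid everywhere, with a finer sub-lattice nested only inside a region of interest. This is purely illustrative of the nesting idea, not the PHITS/JCDS-FX implementation (class and parameter names are ours):

```python
import numpy as np

class MultistepLatticeVoxel:
    """Toy two-level voxel model: a coarse grid covers the whole
    geometry, and the cells inside `roi_slice` are subdivided by
    `refine` along each axis, mimicking a repeated-lattice nesting."""

    def __init__(self, coarse_shape, coarse_size_mm, roi_slice, refine):
        self.coarse = np.zeros(coarse_shape, dtype=np.int16)  # material ids
        self.coarse_size_mm = coarse_size_mm
        self.roi = roi_slice
        roi_shape = tuple((s.stop - s.start) * refine for s in roi_slice)
        self.fine = np.zeros(roi_shape, dtype=np.int16)
        self.refine = refine

    def n_cells(self):
        """Total cell count: coarse cells outside the ROI plus fine
        cells inside it. Refining only the ROI keeps this far below
        refining the whole grid."""
        coarse_in_roi = int(np.prod([s.stop - s.start for s in self.roi]))
        return self.coarse.size - coarse_in_roi + self.fine.size
```

Refining a 2x2x2 corner of a 10^3 grid by a factor of 4 costs about 1.5x the original cell count, whereas uniform refinement would cost 64x, which is the essence of the reported time savings.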
Spectroscopic Factors from the Single Neutron Pickup Reaction ^64Zn(d,t)
NASA Astrophysics Data System (ADS)
Leach, Kyle; Garrett, P. E.; Ball, G. C.; Bangay, J. C.; Bianco, L.; Demand, G. A.; Faestermann, T.; Finlay, P.; Green, K. L.; Hertenberger, R.; Krücken, R.; Phillips, A. A.; Rand, E. T.; Sumithrarachchi, C. S.; Svensson, C. E.; Triambak, S.; Wirth, H.-F.; Wong, J.
2009-10-01
A great deal of attention has recently been paid to high-precision superallowed β-decay Ft values. With the availability of extremely high-precision (<0.1%) experimental data, the precision on the individual Ft values is now dominated by the ˜1% theoretical corrections [1]. This limitation is most evident in heavier superallowed nuclei (e.g. ^62Ga), where the isospin-symmetry-breaking (ISB) correction calculations become more difficult due to the truncated model space. Experimental spectroscopic factors for these nuclei are important for identifying the relevant orbitals that should be included in the model space of the calculations. Motivated by this need, the single-nucleon-transfer reaction ^64Zn(d,t)^63Zn was conducted at the Maier-Leibnitz-Laboratory (MLL) of TUM/LMU in Munich, Germany, using a 22 MeV polarized deuteron beam from the tandem Van de Graaff accelerator and the TUM/LMU Q3D magnetic spectrograph, with angular distributions from 10° to 60°. Results from this experiment will be presented, and implications for calculations of ISB corrections in the superallowed 0^+ decay of ^62Ga will be discussed. [1] I.S. Towner and J.C. Hardy, Phys. Rev. C 77, 025501 (2008).
NASA Astrophysics Data System (ADS)
Perera, Dimuthu
Diffusion-weighted (DW) imaging is a non-invasive MR technique that probes tissue microstructure through the diffusion of water molecules. The diffusion is generally characterized by the apparent diffusion coefficient (ADC) parametric map. The purpose of this study is to investigate in silico how the calculation of ADC is affected by image SNR, b-values, and the true tissue ADC; to provide optimal parameter combinations based on the percentage accuracy and precision for prostate peripheral-region cancer applications; and to suggest parameter choices for any type of tissue, together with the expected accuracy and precision. In this research, DW images were generated assuming a mono-exponential signal model at two different b-values and for known true ADC values. Rician noise of different levels was added to the DWI images to adjust the image SNR. Using the two DWI images, the ADC was calculated with a mono-exponential model for each set of b-values, SNR, and true ADC. For each parameter setting, 40,000 ADC values were collected to determine the mean and standard deviation of the calculated ADC, as well as the percentage accuracy and precision with respect to the true ADC. The accuracy was calculated from the difference between the known and calculated ADC; the precision was calculated from the standard deviation of the calculated ADC. The optimal parameters for a specific study were determined when both the percentage accuracy and precision were minimized. In our study, we simulated two true ADCs (0.00102 mm²/s for tumor and 0.00180 mm²/s for normal prostate peripheral-region tissue). Image SNR was varied from 2 to 100, and b-values were varied from 0 to 2000 s/mm². The results show that the percentage accuracy and percentage precision improved with increasing image SNR. To increase SNR, 10 signal averages (NEX) were used, considering the limitation on total scan time.
The optimal NEX combination for tumor and normal tissue in the prostate peripheral region was 1:9. The minimum percentage accuracy and percentage precision were obtained with a low b-value of 0 and a high b-value of 800 s/mm² for normal tissue and 1400 s/mm² for tumor tissue. Results also showed that for tissues with 1 × 10⁻³ < ADC < 2.1 × 10⁻³ mm²/s, the parameter combination SNR = 20 and b-value pair (0, 800 s/mm²) with NEX = 1:9 can calculate ADC with a percentage accuracy of less than 2% and a percentage precision of 6-8%. Likewise, for tissues with 0.6 × 10⁻³ < ADC < 1.25 × 10⁻³ mm²/s, the parameter combination SNR = 20 and b-value pair (0, 1400 s/mm²) with NEX = 1:9 can calculate ADC with a percentage accuracy of less than 2% and a percentage precision of 6-8%.
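The mono-exponential ADC estimate at the heart of this simulation can be sketched in a few lines. This is a minimal illustration of the described procedure, not the authors' code; the SNR definition (noise level referenced to the b = 0 signal) and all numerical values are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def rician(clean, sigma):
    """Rician noise: magnitude of the signal after complex Gaussian noise."""
    re = clean + rng.normal(0.0, sigma, clean.shape)
    im = rng.normal(0.0, sigma, clean.shape)
    return np.hypot(re, im)

def adc_accuracy_precision(true_adc, b_low, b_high, snr, n=40_000, s0=1.0):
    """Monte-Carlo percentage accuracy (bias) and precision (spread) of ADC."""
    sigma = s0 / snr  # image noise level, referenced to the b = 0 signal
    s_low = rician(np.full(n, s0 * np.exp(-b_low * true_adc)), sigma)
    s_high = rician(np.full(n, s0 * np.exp(-b_high * true_adc)), sigma)
    adc = np.log(s_low / s_high) / (b_high - b_low)  # mono-exponential model
    accuracy = 100.0 * abs(adc.mean() - true_adc) / true_adc
    precision = 100.0 * adc.std() / true_adc
    return accuracy, precision

# tumor-like tissue, b-value pair (0, 1400 s/mm^2), SNR = 20
acc, prec = adc_accuracy_precision(1.02e-3, 0.0, 1400.0, 20.0)
```

Sweeping `snr` and `b_high` over grids and minimizing both outputs reproduces the kind of parameter search described above.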
High speed FPGA-based Phasemeter for the far-infrared laser interferometers on EAST
NASA Astrophysics Data System (ADS)
Yao, Y.; Liu, H.; Zou, Z.; Li, W.; Lian, H.; Jie, Y.
2017-12-01
The far-infrared laser-based HCN interferometer and POlarimeter/INTerferometer (POINT) system are important diagnostics for plasma density measurement on the EAST tokamak. Both HCN and POINT provide electron density measurements with high spatial and temporal resolution and are used for plasma density feedback control. The density is calculated by measuring the real-time phase difference between the reference beams and the probe beams. For long-pulse operations on EAST, the density calculation has to meet real-time and high-precision requirements. In this paper, a Phasemeter for far-infrared laser-based interferometers is introduced. The FPGA-based Phasemeter leverages fast ADCs to acquire the three-frequency signals from VDI planar-diode mixers, and implements digital filters and an FFT algorithm in the FPGA to provide real-time, high-precision electron density output. Implementation of the Phasemeter will be helpful for future plasma real-time feedback control in long-pulse discharges.
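The core operation, recovering the phase difference between reference and probe channels at the beat frequency, can be illustrated offline. This is a hedged sketch of generic FFT-based phase extraction, not the actual FPGA firmware; the sample rate, beat frequency, and signal model are invented for the example.

```python
import numpy as np

FS = 1.0e6       # sample rate in Hz (assumed)
F_BEAT = 50.0e3  # intermediate (beat) frequency in Hz (assumed)
N = 1024

t = np.arange(N) / FS
true_phase = 0.7  # plasma-induced phase shift (rad) to recover

ref = np.cos(2 * np.pi * F_BEAT * t)
probe = np.cos(2 * np.pi * F_BEAT * t - true_phase)

def phase_at(signal, freq):
    """Phase of the FFT bin nearest `freq`, with a Hann window to limit leakage."""
    spectrum = np.fft.rfft(signal * np.hanning(len(signal)))
    k = int(round(freq * len(signal) / FS))
    return np.angle(spectrum[k])

# Off-bin phase offsets are common to both channels and cancel in the difference.
dphi = (phase_at(ref, F_BEAT) - phase_at(probe, F_BEAT)) % (2 * np.pi)
# line-integrated electron density is proportional to dphi for a given wavelength
```

In a real-time system the same arithmetic runs continuously on streaming ADC samples; only the filtering and buffering differ.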
Proton Radii of 4,6,8He Isotopes from High-Precision Nucleon-Nucleon Interactions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Caurier, E; Navratil, P
2005-11-16
Recently, precision laser spectroscopy on {sup 6}He atoms accurately determined the isotope shift between {sup 4}He and {sup 6}He and, consequently, the charge radius of {sup 6}He. A similar experiment for {sup 8}He is under way. We have performed large-scale ab initio calculations for the {sup 4,6,8}He isotopes using high-precision nucleon-nucleon (NN) interactions within the no-core shell model (NCSM) approach. With the CD-Bonn 2000 NN potential we found point-proton root-mean-square (rms) radii of {sup 4}He and {sup 6}He of 1.45(1) fm and 1.89(4) fm, respectively, in agreement with experiment, and predict the {sup 8}He point-proton rms radius to be 1.88(6) fm. At the same time, our calculations show that the recently developed nonlocal INOY NN potential gives binding energies closer to experiment, but underestimates the charge radii.
Progress Towards a High-Precision Infrared Spectroscopic Survey of the H_3^+ Ion
NASA Astrophysics Data System (ADS)
Perry, Adam J.; Hodges, James N.; Markus, Charles R.; Kocheril, G. Stephen; Jenkins, Paul A., II; McCall, Benjamin J.
2015-06-01
The trihydrogen cation, H_3^+, represents one of the most important and fundamental molecular systems. Having only two electrons and three nuclei, H_3^+ is the simplest polyatomic system and is a key testing ground for the development of new techniques for calculating potential energy surfaces and predicting molecular spectra. Corrections that go beyond the Born-Oppenheimer approximation, including adiabatic, non-adiabatic, relativistic, and quantum electrodynamic corrections are becoming more feasible to calculate. As a result, experimental measurements performed on the H_3^+ ion serve as important benchmarks which are used to test the predictive power of new computational methods. By measuring many infrared transitions with precision at the sub-MHz level it is possible to construct a list of the most highly precise experimental rovibrational energy levels for this molecule. Until recently, only a select handful of infrared transitions of this molecule have been measured with high precision (˜ 1 MHz). Using the technique of Noise Immune Cavity Enhanced Optical Heterodyne Velocity Modulation Spectroscopy, we are aiming to produce the largest high-precision spectroscopic dataset for this molecule to date. Presented here are the current results from our survey along with a discussion of the combination differences analysis used to extract the experimentally determined rovibrational energy levels. O. Polyansky, et al., Phil. Trans. R. Soc. A (2012), 370, 5014. M. Pavanello, et al., J. Chem. Phys. (2012), 136, 184303. L. Diniz, et al., Phys. Rev. A (2013), 88, 032506. L. Lodi, et al., Phys. Rev. A (2014), 89, 032505. J. Hodges, et al., J. Chem. Phys (2013), 139, 164201.
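A combination difference uses two transitions that share an upper level: their frequency difference gives the spacing between the two lower levels, with the upper-state energy cancelling out. A minimal sketch with invented numbers (the frequencies and uncertainties below are placeholders, not measured H_3^+ lines):

```python
import math

# Two hypothetical transitions to the SAME upper level u, as (value, 1-sigma) in MHz
nu_a = (81_234_567.12, 0.3)  # lower level a -> u
nu_b = (81_001_234.56, 0.4)  # lower level b -> u

# nu_a - nu_b = (E_u - E_a) - (E_u - E_b) = E_b - E_a: the upper level cancels
spacing = nu_a[0] - nu_b[0]
sigma = math.hypot(nu_a[1], nu_b[1])  # independent errors add in quadrature
```

Chaining many such differences over a large line list is what allows a self-consistent set of experimental rovibrational energy levels to be extracted.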
High-precision radius automatic measurement using laser differential confocal technology
NASA Astrophysics Data System (ADS)
Jiang, Hongwei; Zhao, Weiqian; Yang, Jiamiao; Guo, Yongkui; Xiao, Yang
2015-02-01
A high-precision automatic radius measurement method using laser differential confocal technology is proposed. Based on the property of the axial intensity curve that its null point precisely corresponds to the focus of the objective, together with its bipolar character, the method uses composite PID (proportional-integral-derivative) control to ensure steady motor movement during quick-trigger scanning, uses least-squares linear fitting to obtain the cat-eye and confocal positions, and then calculates the radius of curvature of the lens. By setting the number of measurement repetitions, precise automatically repeated measurement of the radius of curvature is achieved. The experiment indicates that the method has a measurement accuracy of better than 2 ppm and a repeatability of better than 0.05 μm. Compared with the existing manual single measurement, this method has higher measurement precision, stronger immunity to environmental interference, and better repeatability, which is only a tenth of the former's.
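The least-squares step described above can be sketched as follows: fit a line to the differential axial response near each null, take the zero crossings as the cat-eye and confocal positions, and difference them for the radius. This is a simplified illustration with idealized noise-free data; the scan positions, slope, and radius value are invented.

```python
import numpy as np

def null_position(z, intensity):
    """Least-squares line through the near-null samples; return its zero crossing."""
    a, b = np.polyfit(z, intensity, 1)  # I(z) ~ a*z + b near the null
    return -b / a

# Hypothetical axial scan samples near the cat-eye and confocal nulls (mm)
z1 = np.array([-0.02, -0.01, 0.00, 0.01, 0.02])
i1 = 5.0 * z1            # differential signal crosses zero at z = 0 (cat-eye)
z2 = z1 + 25.4           # confocal null lies one radius away (assumed 25.4 mm)
i2 = 5.0 * (z2 - 25.4)

radius = abs(null_position(z2, i2) - null_position(z1, i1))  # radius of curvature
```

With noisy data the linear fit averages over many samples near each null, which is where the method's precision advantage over a single-point reading comes from.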
Dynamical investigations of the multiple stars
NASA Astrophysics Data System (ADS)
Kiyaeva, Olga V.; Zhuchkov, Roman Ya.
2017-11-01
Two multiple stars, the quadruple star - Bootis (ADS 9173) and the triple star T Tauri, were investigated. The visual double star - Bootis was studied on the basis of Pulkovo 26-inch refractor observations from 1982-2013. An invisible satellite of component A was discovered thanks to the long-term uniform series of observations; its orbital period is 20 ± 2 years. The known invisible satellite of component B, with a period of about 5 years, was confirmed by high-precision CCD observations. The astrometric orbits of both components were calculated. The orbits of the inner and outer pairs of the pre-main-sequence binary T Tauri were calculated on the basis of high-precision observations with the VLT and the Keck II Telescope. This weakly hierarchical triple system is stable with a probability of more than 70%.
NASA Astrophysics Data System (ADS)
Eason, Thomas J.; Bond, Leonard J.; Lozev, Mark G.
2016-02-01
The accuracy, precision, and reliability of ultrasonic thickness structural health monitoring systems are discussed, including the influence of systematic and environmental factors. To quantify some of these factors, a compression-wave ultrasonic thickness structural health monitoring experiment is conducted on a flat calibration block at ambient temperature with forty-four thin-film sol-gel transducers and various time-of-flight thickness calculation methods. As an initial calibration, the voltage response signals from each sensor are used to determine the common material velocity as well as the signal offset unique to each calculation method. Next, the measurement precision of the thickness error of each method is determined with a proposed weighted censored relative maximum likelihood analysis technique incorporating the propagation of asymmetric measurement uncertainty. The results are presented as upper and lower confidence limits analogous to the a90/95 terminology used in industry-recognized Probability-of-Detection assessments. Future work is proposed to apply the statistical analysis technique to quantify the measurement precision of various thickness calculation methods under different environmental conditions such as high temperature, rough back-wall surfaces, and system degradation, with an intended application of monitoring naphthenic acid corrosion in oil refineries.
Aaltonen, T.; Álvarez González, B.; Amerio, S.; ...
2012-09-26
The transverse momentum cross section of e⁺e⁻ pairs in the Z-boson mass region of 66–116 GeV/c² is precisely measured using Run II data corresponding to 2.1 fb⁻¹ of integrated luminosity recorded by the Collider Detector at Fermilab. The cross section is compared with two quantum chromodynamic calculations. One is a fixed-order perturbative calculation at O(αs²), and the other combines perturbative predictions at high transverse momentum with the gluon resummation formalism at low transverse momentum. Comparisons of the measurement with the calculations show reasonable agreement. The measurement is of sufficient precision to allow refinements in the understanding of the transverse momentum distribution.
Precision half-life measurement of 11C: The most precise mirror transition Ft value
NASA Astrophysics Data System (ADS)
Valverde, A. A.; Brodeur, M.; Ahn, T.; Allen, J.; Bardayan, D. W.; Becchetti, F. D.; Blankstein, D.; Brown, G.; Burdette, D. P.; Frentz, B.; Gilardy, G.; Hall, M. R.; King, S.; Kolata, J. J.; Long, J.; Macon, K. T.; Nelson, A.; O'Malley, P. D.; Skulski, M.; Strauss, S. Y.; Vande Kolk, B.
2018-03-01
Background: The precise determination of the Ft value in T = 1/2 mixed mirror decays is an important avenue for testing the standard model of the electroweak interaction through the determination of Vud in nuclear β decays. 11C is an interesting case, as its low mass and small QEC value make it particularly sensitive to violations of the conserved vector current hypothesis. The present dominant source of uncertainty in the 11C Ft value is the half-life. Purpose: A high-precision measurement of the 11C half-life was performed, and a new world-average half-life was calculated. Method: 11C was created by transfer reactions and separated using the TwinSol facility at the Nuclear Science Laboratory at the University of Notre Dame. It was then implanted into a tantalum foil, and β counting was used to determine the half-life. Results: The new half-life, t1/2 = 1220.27(26) s, is consistent with the previous values but significantly more precise. A new world average was calculated, t1/2(world) = 1220.41(32) s, and a new estimate for the Gamow-Teller to Fermi mixing ratio ρ is presented along with standard model correlation parameters. Conclusions: The new 11C world-average half-life allows the calculation of an Ft(mirror) value that is now the most precise for all superallowed mixed mirror transitions. This gives a strong impetus for an experimental determination of ρ, to allow the determination of Vud from this decay.
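A world average of this kind is typically an inverse-variance weighted mean, with the uncertainty scaled when the measurements scatter more than their quoted errors suggest. A generic sketch (the input values below are placeholders, not the actual 11C data set):

```python
import math

def world_average(values, sigmas):
    """Inverse-variance weighted mean with a PDG-style scale factor on the error."""
    w = [1.0 / s**2 for s in sigmas]
    mean = sum(wi * v for wi, v in zip(w, values)) / sum(w)
    sigma = math.sqrt(1.0 / sum(w))
    dof = len(values) - 1
    chi2 = sum(wi * (v - mean)**2 for wi, v in zip(w, values))
    # inflate the error when chi2/dof > 1, i.e. the inputs are mutually inconsistent
    scale = math.sqrt(chi2 / dof) if dof > 0 and chi2 > dof else 1.0
    return mean, sigma * scale

# hypothetical half-life measurements in seconds (value, 1-sigma pairs)
mean, err = world_average([1220.27, 1220.9, 1221.0], [0.26, 0.8, 1.0])
```

The most precise input dominates the weighted mean, which is why a single improved half-life measurement can shift the world average and its uncertainty appreciably.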
Isotope dependence of the Zeeman effect in lithium-like calcium
Köhler, Florian; Blaum, Klaus; Block, Michael; Chenmarev, Stanislav; Eliseev, Sergey; Glazov, Dmitry A.; Goncharov, Mikhail; Hou, Jiamin; Kracke, Anke; Nesterenko, Dmitri A.; Novikov, Yuri N.; Quint, Wolfgang; Minaya Ramirez, Enrique; Shabaev, Vladimir M.; Sturm, Sven; Volotka, Andrey V.; Werth, Günter
2016-01-01
The magnetic moment μ of a bound electron, generally expressed via the g-factor μ = −g μB s ħ⁻¹, with μB the Bohr magneton and s the electron's spin, can be calculated by bound-state quantum electrodynamics (BS-QED) to very high precision. The recent ultra-precise experiment on hydrogen-like silicon determined this value to eleven significant digits, and thus allowed the validity of BS-QED to be rigorously probed. Yet the investigation of one of the most interesting contributions to the g-factor, the relativistic interaction between electron and nucleus, is limited by our knowledge of BS-QED effects. By comparing the g-factors of two isotopes, it is possible to cancel most of these contributions and sensitively probe nuclear effects. Here we present calculations and experiments on the isotope dependence of the Zeeman effect in lithium-like calcium ions. The good agreement between the theoretically predicted recoil contribution and the high-precision g-factor measurements paves the way for a new generation of BS-QED tests. PMID:26776466
NASA Astrophysics Data System (ADS)
Wang, Yue; Yu, Jingjun; Pei, Xu
2018-06-01
A new forward kinematics algorithm for 3-RPS (R: revolute; P: prismatic; S: spherical) parallel manipulators is proposed in this study. The algorithm is based on the special geometric conditions of the 3-RPS parallel mechanism, and it eliminates the errors produced by parasitic motions to improve and ensure accuracy; specifically, the errors can be less than 10⁻⁶. With this method, only the group of solutions consistent with the actual configuration of the platform is obtained, and obtained rapidly. The algorithm substantially improves calculation efficiency because the selected initial values are reasonable and all the formulas in the calculation are analytical. This novel forward kinematics algorithm is well suited for real-time, high-precision control of the 3-RPS parallel mechanism.
The efficacy of a novel mobile phone application for Goldmann ptosis visual field interpretation.
Maamari, Robi N; D'Ambrosio, Michael V; Joseph, Jeffrey M; Tao, Jeremiah P
2014-01-01
To evaluate the efficacy of a novel mobile phone application that calculates superior visual field defects on Goldmann visual field charts. Experimental study in which the mobile phone application and 14 oculoplastic surgeons interpreted the superior visual field defect in 10 Goldmann charts. The percent error of the mobile phone application and of the oculoplastic surgeons' estimates was calculated against computer-software computation of the actual defects. Precision and time efficiency of the application were evaluated by processing the same Goldmann visual field chart 10 repeated times. The mobile phone application was associated with a mean percent error of 1.98% (95% confidence interval [CI], 0.87%-3.10%) in superior visual field defect calculation. The average mean percent error of the oculoplastic surgeons' visual estimates was 19.75% (95% CI, 14.39%-25.11%). Oculoplastic surgeons, on average, underestimated the defect in all 10 Goldmann charts, and there was high interobserver variance among them. The percent error of the 10 repeated measurements on a single chart was 0.93% (95% CI, 0.40%-1.46%). The average time to process one chart was 12.9 seconds (95% CI, 10.9-15.0 seconds). The mobile phone application was highly accurate, precise, and time-efficient in calculating the percent superior visual field defect using Goldmann charts. Oculoplastic surgeon visual interpretations were highly inaccurate, highly variable, and usually underestimated the visual field loss.
Ultra-High Precision Half-Life Measurement for the Superallowed β⁺ Emitter ^26Al^m
NASA Astrophysics Data System (ADS)
Finlay, P.; Demand, G.; Garrett, P. E.; Leach, K. G.; Phillips, A. A.; Sumithrarachchi, C. S.; Svensson, C. E.; Triambak, S.; Grinyer, G. F.; Leslie, J. R.; Andreoiu, C.; Cross, D.; Austin, R. A. E.; Ball, G. C.; Bandyopadhyay, D.; Djongolov, M.; Ettenauer, S.; Hackman, G.; Pearson, C. J.; Williams, S. J.
2009-10-01
The calculated nuclear-structure-dependent correction for ^26Al^m (δC-δNS = 0.305(27)% [1]) is smaller by nearly a factor of two than those of the other twelve precision superallowed cases, making it an ideal case in which to pursue a reduction in the experimental errors contributing to the Ft value. An ultra-high-precision half-life measurement for the superallowed β⁺ emitter ^26Al^m has been made at the Isotope Separator and Accelerator (ISAC) facility at TRIUMF in Vancouver, Canada. A beam of ~10^5 ^26Al^m/s was delivered in October 2007, and its decay was observed using a 4π continuous-gas-flow proportional counter as part of an ongoing experimental program in superallowed Fermi β-decay studies. With a statistical precision of ~0.008%, the present work represents the single most precise measurement of any superallowed half-life to date. [1] I.S. Towner and J.C. Hardy, Phys. Rev. C 79, 055502 (2009).
Ultra-High Precision Half-Life Measurement for the Superallowed β⁺ Emitter ^26Al^m
NASA Astrophysics Data System (ADS)
Finlay, P.; Demand, G.; Garrett, P. E.; Leach, K. G.; Phillips, A. A.; Sumithrarachchi, C. S.; Svensson, C. E.; Triambak, S.; Ball, G. C.; Bandyopadhyay, D.; Djongolov, M.; Ettenauer, S.; Hackman, G.; Pearson, C. J.; Williams, S. J.; Andreoiu, C.; Cross, D.; Austin, R. A. E.; Grinyer, G. F.; Leslie, J. R.
2008-10-01
The calculated nuclear-structure-dependent correction for ^26Al^m (δC-δNS = 0.305(27)% [1]) is smaller by nearly a factor of two than those of the other twelve precision superallowed cases, making it an ideal case in which to pursue a reduction in the experimental errors contributing to the Ft value. An ultra-high-precision half-life measurement for the superallowed β⁺ emitter ^26Al^m has been made using a 4π continuous-gas-flow proportional counter as part of an ongoing experimental program in superallowed Fermi β-decay studies at the Isotope Separator and Accelerator (ISAC) facility at TRIUMF in Vancouver, Canada, which delivered a beam of ~10^5 ^26Al^m/s in October 2007. With a statistical precision of ~0.008%, the present work represents the single most precise measurement of any superallowed half-life to date. [1] I.S. Towner and J.C. Hardy, Phys. Rev. C 77, 025501 (2008).
Influence of Waveform Characteristics on LiDAR Ranging Accuracy and Precision
Yang, Bingwei; Xie, Xinhao; Li, Duan
2018-01-01
Time-of-flight (TOF) light detection and ranging (LiDAR) is a technology that calculates distance from the flight time between start and stop signals. In our lab-built LiDAR, the two systems for measuring this flight time are a time-to-digital converter (TDC), which counts the time between trigger signals, and an analog-to-digital converter (ADC), which processes the sampled start/stop pulse waveforms for time estimation. We study the influence of waveform characteristics on the range accuracy and precision of the two kinds of ranging system. Comparing waveform-based ranging (WR) with analog discrete-return ranging (AR), a peak-detection method (WR-PK) shows the best ranging performance because of its short execution time, high ranging accuracy, and stable precision. Based on the maximal information coefficient (MIC), a novel statistical measure, the WR-PK precision has a strong linear relationship with the standard deviation of the received pulse width. Keeping the received pulse width as stable as possible when measuring a constant distance can therefore improve ranging precision. PMID:29642639
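The WR-PK idea, locating each pulse by its waveform peak and converting the start-stop interval to range, can be sketched as below. This is an illustrative reconstruction, not the lab system's code; the sample rate, pulse shapes, and the parabolic sub-sample refinement are assumptions.

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s
FS = 1.0e9         # ADC sample rate, 1 GS/s (assumed)

def pulse_peak_index(wave):
    """WR-PK style: locate the pulse peak with three-point parabolic refinement."""
    k = int(np.argmax(wave))
    y0, y1, y2 = wave[k - 1], wave[k], wave[k + 1]
    # sub-sample offset of the vertex of the parabola through the three samples
    return k + 0.5 * (y0 - y2) / (y0 - 2.0 * y1 + y2)

t = np.arange(2048) / FS
start = np.exp(-((t - 100e-9) / 5e-9) ** 2)          # start pulse at 100 ns
stop = 0.6 * np.exp(-((t - 500e-9) / 6e-9) ** 2)     # echo pulse at 500 ns

tof = (pulse_peak_index(stop) - pulse_peak_index(start)) / FS
distance = C * tof / 2.0  # two-way travel time halved
```

Because the peak position is insensitive to pulse amplitude, this estimator tolerates echo attenuation, consistent with the stable precision reported for WR-PK.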
Boucher, M.S.
1994-01-01
Water-level measurements have been made in deep boreholes in the Yucca Mountain area, Nye County, Nevada, since 1983 in support of the U.S. Department of Energy's Yucca Mountain Project, which is an evaluation of the area to determine its suitability as a potential storage area for high-level nuclear waste. Water-level measurements were taken either manually, using various water-level measuring equipment such as steel tapes, or continuously, using automated data recorders and pressure transducers. This report presents precision range and accuracy data established for manual water-level measurements taken in the Yucca Mountain area, 1988-90. Precision and accuracy ranges were determined for all phases of the water-level measuring process, and overall accuracy ranges are presented. Precision ranges were determined for three steel tapes using a total of 462 data points. Mean precision ranges of these three tapes ranged from 0.014 foot to 0.026 foot. A mean precision range of 0.093 foot was calculated for the multiconductor cable, using 72 data points. Mean accuracy values were calculated on the basis of calibrations of the steel tapes and the multiconductor cable against a reference steel tape. The mean accuracy values of the steel tapes ranged from 0.053 foot, based on three data points, to 0.078 foot, based on six data points. The mean accuracy of the multiconductor cable was 0.15 foot, based on six data points. Overall accuracy of the water-level measurements was calculated by taking the square root of the sum of the squares of the individual accuracy values. Overall accuracy was calculated to be 0.36 foot for water-level measurements taken with steel tapes, without accounting for the inaccuracy of borehole deviations from vertical. An overall accuracy of 0.36 foot for measurements made with steel tapes is considered satisfactory for this project.
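The root-sum-square combination used for the overall accuracy can be shown with a one-line calculation. The component values below are placeholders for illustration, not the report's actual component-by-component figures.

```python
import math

# Hypothetical per-phase accuracy values (feet) for one measurement chain
components_ft = [0.078, 0.30, 0.15, 0.05]

# overall accuracy = square root of the sum of the squares of the components
overall_ft = math.sqrt(sum(a * a for a in components_ft))
```

Summing in quadrature assumes the individual error sources are independent, so no single component dominates unless it is much larger than the rest.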
NASA Astrophysics Data System (ADS)
Wang, Minghai; Wang, Hujun; Liu, Zhonghai
2011-05-01
Isotropic pyrolytic graphite (IPG) is a new kind of brittle material that can be used for sealing aero-engine turbine shafts and high-temperature ethylene equipment. It not only has the general advantages of ordinary carbonaceous materials, such as high-temperature resistance, lubrication, and abrasion resistance, but also offers the impermeability and machinability that carbon/carbon composites lack. It therefore has broad prospects for development. The mechanism of brittle-ductile transition in IPG is the foundation of its precision cutting, and the plastic deformation of IPG is the essential and most important mechanical behavior in precision cutting. Using strain-gradient theory, the mechanism of material removal during precision cutting is analyzed, and the critical cutting thickness of IPG is calculated for the first time. Furthermore, the cutting process parameters, such as cutting depth and feed rate, corresponding to the scale of the brittle-ductile transition of IPG are calculated. Finally, based on micromechanics, the deformation behaviors of IPG, including brittle fracture, plastic deformation, and their mutual transformation, are simulated under the Sih G.C. fracture criterion, with the material under pressure-shear loading. The result shows that the best cutting angle for IPG precision cutting is -30°. The theoretical analysis and the simulation results are validated by precision cutting experiments.
Development of High Precision Tsunami Runup Calculation Method Coupled with Structure Analysis
NASA Astrophysics Data System (ADS)
Arikawa, Taro; Seki, Katsumi; Chida, Yu; Takagawa, Tomohiro; Shimosako, Kenichiro
2017-04-01
The 2011 Great East Japan Earthquake (GEJE) has shown that tsunami disasters are not limited to inundation damage in a specified region, but may destroy a wide area, causing a major disaster. Evaluating standing land structures and the damage to them requires highly precise evaluation of three-dimensional fluid motion, an expensive process. Our research goals were thus to couple STOC-CADMAS (Arikawa and Tomita, 2016) with structure analysis (Arikawa et al., 2009) to efficiently calculate all stages from the tsunami source to runup, including the deformation of structures, and to verify its applicability. We also investigated the stability of breakwaters at Kamaishi Bay. Fig. 1 shows the whole calculation system. The STOC-ML simulator approximates pressure as hydrostatic and calculates the wave profiles based on an equation of continuity, thereby lowering calculation cost; it primarily covers the region from the epicenter to shallow water. STOC-IC solves for pressure with a Poisson equation to account for shallower, more complex topography, while still reducing computation cost slightly by setting the water surface from an equation of continuity; it is used to calculate the area near a port. CS3D solves the Navier-Stokes equations and tracks the water surface by VOF to deal with the runup area, with its complex surfaces of overflows and bores. STR performs the structure analysis, including the geotechnical analysis, based on Biot's formulation. By coupling these, the system efficiently calculates the tsunami profile from propagation to inundation. The numerical results were compared with the physical experiments of Arikawa et al. (2012) and showed good agreement. Finally, the system was applied to the local situation at Kamaishi Bay. Almost all the breakwaters were washed away, in agreement with the actual damage at Kamaishi Bay. REFERENCES: T. Arikawa and T. Tomita (2016): "Development of High Precision Tsunami Runup Calculation Method Based on a Hierarchical Simulation", Journal of Disaster Research, Vol. 11, No. 4. T. Arikawa, K. Hamaguchi, K. Kitagawa, T. Suzuki (2009): "Development of Numerical Wave Tank Coupled with Structure Analysis Based on FEM", Journal of J.S.C.E., Ser. B2 (Coastal Engineering), Vol. 65, No. 1. T. Arikawa et al. (2012): "Failure Mechanism of Kamaishi Breakwaters due to the Great East Japan Earthquake Tsunami", 33rd International Conference on Coastal Engineering, No. 1191.
NASA Astrophysics Data System (ADS)
Burgess, S. D.; Bowring, S. A.; Heaman, L. M.
2012-12-01
Accurate and precise U-Pb geochronology of accessory phases other than zircon is required for dating some LIP basalts or determining the temporal patterns of kimberlite pipes, for example. Advances in precision and accuracy lead directly to an increase in the complexity of questions that can be posed. U-Pb geochronology of perovskite (CaTiO3) has been applied to silica-undersaturated basalts, carbonatites, alkaline igneous rocks, and kimberlites. Most published ID-TIMS perovskite dates have 2-sigma precisions at the ~0.2% level for weighted-mean 206Pb/238U dates, considerably poorer than is possible with ID-TIMS analyses of zircon, which limits the applicability of perovskite in high-precision applications. Precision on perovskite dates is lower than for zircon because of common Pb, which in some cases can be up to 50% of the total Pb and must be corrected for and accurately partitioned between blank and initial components. Relatively small changes in the assumed composition of common Pb can result in precise but inaccurate dates. In many cases minerals with significant common Pb are corrected using the Stacey and Kramers (1975) two-stage Pb evolution model. This can be done without serious consequence to the final date for minerals with high U/Pb ratios. In the more common case where U/Pb ratios are relatively low and the proportion of common Pb is large, applying a model-derived Pb isotopic composition rather than measuring it directly can introduce percent-level inaccuracy into dates calculated with precisely known U/Pb ratios. Direct measurement of the common-Pb composition can be done on a U-poor mineral that co-crystallized with perovskite; feldspar and clinopyroxene are commonly used. Clinopyroxene can contain significant ingrown radiogenic Pb, and our experiments indicate that it is not eliminated by aggressive step-wise leaching. The U/Pb ratio in clinopyroxene is generally low (20 < μ < 50) but significant. Other workers (e.g., Kamo et al., 2003; Corfu and Dahlgren, 2008) have used two methods to determine the amount of ingrown Pb. First, by measuring the U/Pb ratio in clinopyroxene and assuming a crystallization age, the amount of ingrown Pb can be calculated. Second, by assuming that perovskite and clinopyroxene (± other phases) are isochronous, the initial Pb isotopic composition can be calculated from the y-intercept on 206Pb/238U, 207Pb/235U, and 3-D isochron diagrams. To further develop a perovskite mineral standard for use in high-precision dating applications, we have focused on single grains/fragments of perovskite and multi-grain clinopyroxene fractions from a melteigite sample (IR90.3) within the Ice River complex, a zoned alkaline-ultramafic intrusion in southeastern British Columbia. Perovskite from this sample has variable measured 206Pb/204Pb (22-263), making it an ideal sample on which to test the sensitivity of dates on grains with variable amounts of radiogenic Pb to changes in common-Pb composition. Using co-existing clinopyroxene for the initial common-Pb composition, both by direct measurement and by the isochron method, allows us to calculate an accurate weighted-mean 206Pb/238U date on perovskite at the <0.1% level, which overlaps within uncertainty between the two methods. We recommend the Ice River 90.3 perovskite as a suitable EARTHTIME standard for interlaboratory and intertechnique comparison.
Emitter location errors in electronic recognition system
NASA Astrophysics Data System (ADS)
Matuszewski, Jan; Dikta, Anna
2017-04-01
The paper describes some of the problems associated with emitter location calculations, the most important part of the series of tasks in electronic recognition systems. The basic tasks include: detection of electromagnetic signal emissions, tracking (determining the direction to emitter sources), signal analysis to classify different emitter types, and identification of emission sources of the same type. The paper presents a brief description of the main emitter localization methods and the basic mathematical formulae for calculating their locations. Error estimation was carried out for emitter location determination with three different methods and different scenarios of emitter and direction-finding (DF) sensor deployment in the electromagnetic environment. The emitter location was computed using a dedicated computer program. On the basis of extensive numerical calculations, the precision of emitter location in recognition systems was evaluated for different configurations of the bearing devices and the emitter. The calculations, based on simulated data for the different location methods, are presented in figures and corresponding tables. The obtained results demonstrate that the precision of the calculated emitter location depends on: the number of DF sensors, the distances between the emitter and the DF sensors, their mutual positions in the reconnaissance area, and the bearing errors. The precision also varies with the number of obtained bearings: the higher the number of bearings, the better the accuracy of the calculated emitter location, in spite of relatively high bearing errors at each DF sensor.
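A common way to compute an emitter position from several bearings is a least-squares intersection of the bearing lines. The paper does not give its algorithms, so the sketch below is a generic illustration; the sensor positions, emitter position, and bearing convention (angles measured from the +x axis) are assumptions.

```python
import numpy as np

def locate_emitter(sensors, bearings_rad):
    """Least-squares intersection of bearing lines from several DF sensors.

    Each bearing line through sensor p with direction d = (cos th, sin th) has
    normal n = (-sin th, cos th); the emitter x satisfies n.x = n.p for every
    line, giving an overdetermined linear system solved in least squares.
    """
    A, b = [], []
    for (px, py), th in zip(sensors, bearings_rad):
        n = (-np.sin(th), np.cos(th))  # normal to the bearing direction
        A.append(n)
        b.append(n[0] * px + n[1] * py)
    (x, y), *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return x, y

sensors = [(0.0, 0.0), (10_000.0, 0.0), (0.0, 8_000.0)]  # DF sensor positions, m
emitter = np.array([6_000.0, 5_000.0])                    # ground truth for the demo
bearings = [np.arctan2(emitter[1] - py, emitter[0] - px) for px, py in sensors]
x, y = locate_emitter(sensors, bearings)
```

With noisy bearings the system no longer intersects at a point, and the least-squares residual grows with bearing error and with unfavorable sensor-emitter geometry, matching the dependencies reported above.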
NASA Technical Reports Server (NTRS)
Prevot, Thomas
2012-01-01
This paper describes the underlying principles and algorithms for computing the primary controller-managed spacing (CMS) tools developed at NASA for precisely spacing aircraft along efficient descent paths. The trajectory-based CMS tools include slot markers, delay indications and speed advisories. These tools are one of three core NASA technologies integrated in NASA's ATM Technology Demonstration-1 (ATD-1), which will operationally demonstrate the feasibility of fuel-efficient, high-throughput arrival operations using Automatic Dependent Surveillance-Broadcast (ADS-B) and ground-based and airborne NASA technologies for precision scheduling and spacing.
Towards a Precision Measurement of the Lamb Shift in Hydrogen-Like Nitrogen
NASA Astrophysics Data System (ADS)
Myers, E. G.; Tarbutt, M. R.
Measurements of the 2S1/2-2P1/2 and 2S1/2-2P3/2 transitions in moderate Z hydrogen-like ions can test Quantum-Electrodynamic calculations relevant to the interpretation of high-precision spectroscopy of atomic hydrogen. There is now particular interest in testing calculations of the two-loop self-energy. Experimental conditions are favorable for a measurement of the 2S1/2-2P3/2 transition in N6+ using a carbon dioxide laser. As a preliminary experiment, we have observed the 2S1/2-2P3/2 transition in 14N6+ using a 2.5 MeV2 laser operating on the hot band of 12C16O2. The measured value of the transition centroid, 834.94(7) cm-1, agrees with, but is less precise than, theory. However, the counting rate and signal-to-background ratio obtained indicate that, with careful control of systematics, a precision test of the theory is practical. Work towards constructing such a set-up is in progress.
A Dynamic Precision Evaluation Method for the Star Sensor in the Stellar-Inertial Navigation System.
Lu, Jiazhen; Lei, Chaohua; Yang, Yanqiang
2017-06-28
Integrating the advantages of an INS (inertial navigation system) and the star sensor, the stellar-inertial navigation system has been used for a wide variety of applications. The star sensor is a high-precision attitude measurement instrument; therefore, determining how to validate its accuracy is critical to guaranteeing its practical precision. A dynamic precision evaluation of the star sensor is more difficult than a static evaluation because of dynamic reference values and other impacts. This paper proposes a dynamic precision verification method for the star sensor that uses an inertial navigation device to realize real-time attitude accuracy measurement. Based on the gold-standard reference generated by a star simulator, the altitude and azimuth angle errors of the star sensor are calculated as evaluation criteria. To diminish the impact of factors such as sensor drift, the innovative aspect of this method is to employ static accuracy for comparison. If the dynamic results are as good as the static results, which have accuracy comparable to the single star sensor's precision, the practical precision of the star sensor is sufficiently high to meet the requirements of the system specification. The experiments demonstrate the feasibility and effectiveness of the proposed method.
A high precision extrapolation method in multiphase-field model for simulating dendrite growth
NASA Astrophysics Data System (ADS)
Yang, Cong; Xu, Qingyan; Liu, Baicheng
2018-05-01
The phase-field method coupled with thermodynamic data has become a trend for predicting microstructure formation in technical alloys. Nevertheless, the frequent access to thermodynamic databases and the calculation of local equilibrium conditions can be time intensive. Extrapolation methods, derived from Taylor expansions, can provide approximate results with high computational efficiency, and have proven successful in applications. This paper presents a high-precision second-order extrapolation method for calculating the driving force in phase transformation. To obtain the phase compositions, different methods of solving the quasi-equilibrium condition are tested, and the M-slope approach is chosen for its best accuracy. The developed second-order extrapolation method, along with the M-slope approach and the first-order extrapolation method, is applied to simulate dendrite growth in a Ni-Al-Cr ternary alloy. The results of the extrapolation methods are compared with the exact solution with respect to the composition profile and dendrite tip position, demonstrating the high precision and efficiency of the newly developed algorithm. To accelerate the phase-field and extrapolation computation, a graphics processing unit (GPU) based parallel computing scheme is developed. The application to a large-scale simulation of multi-dendrite growth in an isothermal cross-section demonstrates the ability of the developed GPU-accelerated second-order extrapolation approach for the multiphase-field model.
Gravity Compensation Using EGM2008 for High-Precision Long-Term Inertial Navigation Systems
Wu, Ruonan; Wu, Qiuping; Han, Fengtian; Liu, Tianyi; Hu, Peida; Li, Haixia
2016-01-01
The gravity disturbance vector is one of the major error sources in high-precision and long-term inertial navigation applications. Specific to the inertial navigation systems (INSs) with high-order horizontal damping networks, analyses of the error propagation show that the gravity-induced errors exist almost exclusively in the horizontal channels and are mostly caused by deflections of the vertical (DOV). Low-frequency components of the DOV propagate into the latitude and longitude errors at a ratio of 1:1, and time-varying fluctuations in the DOV excite Schuler oscillation. This paper presents two gravity compensation methods using the Earth Gravitational Model 2008 (EGM2008), namely, interpolation from the off-line database and computing gravity vectors directly using the spherical harmonic model. Particular attention is given to the error contribution of the gravity update interval and computing time delay. It is recommended for marine navigation that a gravity vector should be calculated within 1 s and updated every 100 s at most. To meet this demand, the time needed to calculate the current gravity vector using EGM2008 has been reduced to less than 1 s by optimizing the calculation procedure. A few off-line experiments were conducted using the data of a shipborne INS collected during an actual sea test. With the aid of EGM2008, most of the low-frequency components of the position errors caused by the gravity disturbance vector have been removed and the Schuler oscillation has been attenuated effectively. In rugged terrain, the horizontal position error could be reduced by up to 48.85% of its regional maximum. The experimental results match the theoretical analysis and indicate that EGM2008 is suitable for gravity compensation of high-precision and long-term INSs.
Towards a dispersive determination of the pion transition form factor
NASA Astrophysics Data System (ADS)
Leupold, Stefan; Hoferichter, Martin; Kubis, Bastian; Niecknig, Franz; Schneider, Sebastian P.
2018-01-01
We start with a brief motivation why the pion transition form factor is interesting and, in particular, how it is related to the high-precision standard-model calculation of the gyromagnetic ratio of the muon. Then we report on the current status of our ongoing project to calculate the pion transition form factor using dispersion theory. Finally we present and discuss a wish list of experimental data that would help to improve the input for our calculations and/or to cross-check our results.
Error measuring system of rotary Inductosyn
NASA Astrophysics Data System (ADS)
Liu, Chengjun; Zou, Jibin; Fu, Xinghe
2008-10-01
The inductosyn is a kind of high-precision angle-position sensor with important applications in servo tables, precision machine tools and other products. The precision of an inductosyn is characterized by its error, and measuring this error during production and application of the inductosyn is an important problem. At present, the error of an inductosyn is mainly obtained by manual measurement, with disadvantages that cannot be ignored: high labour intensity for the operator, errors that easily occur, poor repeatability, and so on. To solve these problems, a new automatic measurement method based on a high-precision optical dividing head is put forward in this paper. The error signal is obtained by precisely processing the output signals of the inductosyn and the optical dividing head. When the inductosyn rotates continuously, its zero-position error can be measured dynamically, and zero-error curves can be output automatically. Measuring and calculating errors caused by human factors are overcome by this method, and it makes the measuring process quicker, more exact and more reliable. Experiment proves that the accuracy of the error measuring system is 1.1 arc-seconds (peak-to-peak value).
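A toy sketch of the core of such an automatic error measurement, under the assumption (not stated in the abstract) that the error curve is simply the wrapped difference between the inductosyn reading and the optical dividing head reference, reported in arcseconds:

```python
import numpy as np

def error_curve_arcsec(reference_deg, measured_deg):
    """Zero-position error curve: angle difference wrapped to [-180, 180), in arcseconds."""
    diff = (measured_deg - reference_deg + 180.0) % 360.0 - 180.0
    return diff * 3600.0

# Synthetic rotation: the optical dividing head supplies the reference,
# the inductosyn reading carries a small two-cycle sinusoidal error.
ref = np.linspace(0.0, 360.0, 1000, endpoint=False)
meas = ref + 0.0003 * np.sin(2.0 * np.deg2rad(ref))
err = error_curve_arcsec(ref, meas)
print(err.max() - err.min())  # peak-to-peak error ≈ 2.16 arcsec
```

The wrap keeps the curve continuous across the 0°/360° boundary; the peak-to-peak value of the curve is the figure of merit quoted in the abstract.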
NASA Astrophysics Data System (ADS)
Majumder, Tiku
2017-04-01
In recent decades, substantial experimental effort has centered on heavy (high-Z) atomic and molecular systems for atomic-physics-based tests of standard model physics, through (for example) measurements of atomic parity nonconservation and searches for permanent electric dipole moments. In all of this work, a crucial role is played by atomic theorists, whose accurate wave function calculations are essential in connecting experimental observables to tests of relevant fundamental physics parameters. At Williams College, with essential contributions from dozens of undergraduate students, we have pursued a series of precise atomic structure measurements in heavy metal atoms such as thallium, indium, and lead. These include measurements of hyperfine structure, transition amplitudes, and atomic polarizability. This work, involving diode lasers, heated vapor cells, and an atomic beam apparatus, has both tested the accuracy and helped guide the refinement of new atomic theory calculations. I will discuss a number of our recent experimental results, emphasizing the role played by students and the opportunities that have been afforded for research-training in this undergraduate environment. Work supported by Research Corporation, the NIST Precision Measurement Grants program, and the National Science Foundation.
NASA Astrophysics Data System (ADS)
Meng, ZhuXuan; Fan, Hu; Peng, Ke; Zhang, WeiHua; Yang, HuiXin
2016-12-01
This article presents a rapid and accurate aeroheating calculation method for hypersonic vehicles. The main innovation is combining the accuracy of numerical methods with the efficiency of engineering methods, which makes aeroheating simulation both more precise and faster. Based on the Prandtl boundary layer theory, the entire flow field is divided into inviscid and viscid flow at the outer edge of the boundary layer. The parameters at the outer edge of the boundary layer are calculated numerically by assuming inviscid flow. The thermodynamic parameters of constant-volume specific heat, constant-pressure specific heat and the specific heat ratio are calculated, the streamlines on the vehicle surface are derived, and the heat flux is then obtained. The results for the double cone show that, at 0° and 10° angles of attack, the aeroheating calculation method based on inviscid outer-edge boundary layer parameters reproduces the experimental data better than the engineering method. The proposed simulation results for the flight vehicle also reproduce the viscid numerical results well. Hence, this method provides a promising way to overcome the high cost of numerical calculation and improve precision.
In vivo thermoluminescence dosimetry for total body irradiation.
Palkosková, P; Hlavata, H; Dvorák, P; Novotný, J; Novotný, J
2002-01-01
An improvement in the clinical results obtained using total body irradiation (TBI) with photon beams requires precise TBI treatment planning, reproducible irradiation, precise in vivo dosimetry, accurate documentation and careful evaluation. In vivo dosimetry using LiF Harshaw TLD-100 chips was used during the TBI treatments performed in our department. The results of in vivo thermoluminescence dosimetry (TLD) show that using TLD measurements and interactive adjustment of some treatment parameters based on these measurements, like monitor unit calculations, lung shielding thickness and patient positioning, it is possible to achieve high precision in absorbed dose delivery (less than 0.5%) as well as in homogeneity of irradiation (less than 6%).
Head-target tracking control of well drilling
NASA Astrophysics Data System (ADS)
Agzamov, Z. V.
2018-05-01
The method of directional drilling trajectory control for oil and gas wells using predictive models is considered in the paper. The developed method does not apply optimization, and therefore there is no need for high-performance computing. Nevertheless, it allows following the well-plan with high precision, taking into account process input saturation. The controller output is calculated both from the present target reference point of the well-plan and from a well trajectory prediction using the analytical model. This method allows following a well-plan not only in angular but also in Cartesian coordinates. Simulation of the control system has confirmed high precision and operating performance over a wide range of random disturbances.
High Precision Spectroscopy of CH_5^+ Using Nice-Ohvms
NASA Astrophysics Data System (ADS)
Hodges, James N.; Perry, Adam J.; McCall, Benjamin J.
2013-06-01
The elusive methonium ion, CH_5^+, is of great interest due to its highly fluxional nature. The only published high-resolution infrared spectrum remains completely unassigned to this date. The primary challenge in understanding the CH_5^+ spectrum is that traditional spectroscopic approaches rely on a molecule having only small (or even large) amplitude motions about a well-defined reference geometry, and this is not the case with CH_5^+. We are in the process of re-scanning Oka's spectrum, in the original Black Widow discharge cell, using the new technique of Noise Immune Cavity Enhanced Optical Heterodyne Velocity Modulation Spectroscopy (NICE-OHVMS). The high precision afforded by optical saturation in conjunction with a frequency comb allows transition line centers to be determined with sub-MHz accuracy and precision -- a substantial improvement over the 90 MHz precision of Oka's work. With a high-precision linelist in hand, we plan to search for four line combination differences to directly determine the spacings between rotational energy levels. Such a search is currently infeasible due to the large number of false positives resulting from the relatively low precision and high spectral density of Oka's spectrum. The resulting combination differences, in conjunction with state-of-the-art theoretical calculations from Tucker Carrington, may provide the first insight into the rotational structure of this unique molecular system. E. T. White, J. Tang, T. Oka, Science (1999) 284, 135--137. B. M. Siller, et al. Opt. Express (2011), 19, 24822--24827. K. N. Crabtree, et al. Chem. Phys. Lett. (2012), 551, 1--6. X. Wang, T. Carrington, J. Chem. Phys., (2008), 129, 234102.
Rashev, Svetoslav; Moule, David C; Rashev, Vladimir
2012-11-01
We perform converged high precision variational calculations to determine the frequencies of a large number of vibrational levels in S(0) D(2)CO, extending from low to very high excess vibrational energies. For the calculations we use our specific vibrational method (recently employed for studies on H(2)CO), consisting of a combination of a search/selection algorithm and a Lanczos iteration procedure. Using the same method we perform large scale converged calculations on the vibrational level spectral structure and fragmentation at selected highly excited overtone states, up to excess vibrational energies of ∼17,000 cm(-1), in order to study the characteristics of intramolecular vibrational redistribution (IVR), vibrational level density and mode selectivity.
Computational Science: Ensuring America’s Competitiveness
2005-06-01
Supercharging U.S. Innovation & Competitiveness, Washington, D.C., July 2004. Davies, C. T. H., et al., "High-Precision Lattice QCD Confronts Experiment"...together to form a class of particles called hadrons (that include protons and neutrons). For 30 years, researchers in lattice QCD have been trying to use...the basic QCD equations to calculate the properties of hadrons, especially their masses, using numerical lattice gauge theory calculations in order to
An Online Gravity Modeling Method Applied for High Precision Free-INS
Wang, Jing; Yang, Gongliu; Li, Jing; Zhou, Xiao
2016-01-01
For real-time solution of an inertial navigation system (INS), the high-degree spherical harmonic gravity model (SHM) is not applicable because of its time and space complexity, so the traditional normal gravity model (NGM) has been the dominant technique for gravity compensation. In this paper, a two-dimensional second-order polynomial model is derived from the SHM according to the approximately linear characteristic of the regional disturbing potential. Firstly, deflections of the vertical (DOVs) on dense grids are calculated with the SHM in an external computer. Then, the polynomial coefficients are obtained using these DOVs. To achieve global navigation, the coefficients and the applicable region of the polynomial model are both updated synchronously in the above computer. Compared with the high-degree SHM, the polynomial model takes less storage and computational time at the expense of a minor loss of precision. Meanwhile, the model is more accurate than the NGM. Finally, a numerical test and an INS experiment show that the proposed method outperforms traditional gravity models applied for high-precision free-INS.
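The two-dimensional second-order polynomial described above can be fitted to gridded DOV samples by ordinary least squares. The sketch below uses synthetic data with made-up coefficients; the design-matrix layout is an assumption, since the abstract does not give the exact parameterization.

```python
import numpy as np

def design_matrix(x, y):
    """Columns for f(x, y) = c0 + c1*x + c2*y + c3*x^2 + c4*x*y + c5*y^2."""
    return np.column_stack([np.ones_like(x), x, y, x**2, x * y, y**2])

def fit_quadratic_surface(x, y, dov):
    """Least-squares coefficients of the second-order polynomial model."""
    coeffs, *_ = np.linalg.lstsq(design_matrix(x, y), dov, rcond=None)
    return coeffs

# Synthetic DOV samples on a dense grid, generated from known coefficients
# (illustrative values only, not real gravity data)
true_c = np.array([1.0, 0.5, -0.3, 0.02, 0.01, -0.04])
gx, gy = np.meshgrid(np.linspace(-1, 1, 20), np.linspace(-1, 1, 20))
x, y = gx.ravel(), gy.ravel()
dov = design_matrix(x, y) @ true_c
print(np.allclose(fit_quadratic_surface(x, y, dov), true_c))  # True
```

Evaluating the fitted polynomial in the navigation loop costs a handful of multiplies, in contrast to the thousands of terms of a high-degree spherical harmonic expansion, which is the trade-off the paper exploits.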
a Rigorous Comparison of Theoretical and Measured Carbon Dioxide Line Intensities
NASA Astrophysics Data System (ADS)
Yi, Hongming; Fleisher, Adam J.; Gameson, Lyn; Zak, Emil J.; Polyansky, Oleg; Tennyson, Jonathan; Hodges, Joseph T.
2017-06-01
The ability to calculate molecular line intensities from first principles plays an increasingly important role in populating line-by-line spectroscopic databases because of its generality and extensibility to various isotopologues, spectral ranges and temperature conditions. Such calculations require a spectroscopically determined potential energy surface, and an accurate dipole moment surface that can be either fully ab initio or an effective quantity based on fits to measurements. Following our recent work, where we used high-precision measurements of intensities in the (30013 →00001) band of ^{12}C^{16}O_2 to bound the uncertainty of calculated line lists, here we carry out high-precision, frequency-stabilized cavity ring-down spectroscopy measurements in the R-branch of the ^{12}C^{16}O_2 (20012 →00001) band from J = 16 to 52. Gas samples consisted of 50 μmol mol^{-1} or 100 μmol mol^{-1} of nitrogen-broadened carbon dioxide with gravimetrically determined SI-traceable molar composition. We demonstrate relative measurement precision (Type A) at the 0.15 % level and estimate systematic (Type B) uncertainty contributions in % of: isotopic abundance, 0.01; sample density, 0.016; cavity free spectral range, 0.03; line shape, 0.05; line interferences, 0.05; and carbon dioxide molar fraction, 0.06. Combined in quadrature, these components yield a relative standard uncertainty in measured line intensity of less than 0.2 % for most observed transitions. These intensities differ by more than 2 % from those measured by Fourier transform spectroscopy and archived in HITRAN 2012, but differ by less than 0.5 % from the calculations of Zak et al. E. Zak et al., J. Quant. Spectrosc. Radiat. Transf. 177, (2016) 31. Huang et al., J. Quant. Spectrosc. Radiat. Transf. 130, (2013) 134. Tashkun et al., J. Quant. Spectrosc. Radiat. Transf. 152, (2015) 45.
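The quoted uncertainty budget can be checked by combining the listed components in quadrature. The sketch below assumes (one plausible reading of the abstract) that the 0.15 % Type A precision is combined with the six Type B terms, which reproduces the "less than 0.2 %" claim:

```python
import math

# Type B components (relative, in %) listed in the abstract
type_b = [0.01, 0.016, 0.03, 0.05, 0.05, 0.06]
u_b = math.sqrt(sum(c**2 for c in type_b))

# Combine with the 0.15 % Type A (measurement precision) in quadrature
u_total = math.sqrt(0.15**2 + u_b**2)
print(round(u_b, 3), round(u_total, 3))  # ≈ 0.099 0.18
```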
Pezzoli, Lorenzo; Andrews, Nick; Ronveaux, Olivier
2010-05-01
Vaccination programmes targeting disease elimination aim to achieve very high coverage levels (e.g. 95%). We calculated the precision of different clustered lot quality assurance sampling (LQAS) designs in computer-simulated surveys to provide local health officers in the field with preset LQAS plans to simply and rapidly assess programmes with high coverage targets. We calculated sample size (N), decision value (d) and misclassification errors (alpha and beta) of several LQAS plans by running 10 000 simulations. We kept the upper coverage threshold (UT) at 90% or 95% and decreased the lower threshold (LT) progressively by 5%. We measured the proportion of simulations with ≤d unvaccinated individuals if the coverage was set at the UT (pUT) to calculate beta (1-pUT), and the proportion of simulations with >d unvaccinated individuals if the coverage was LT% (pLT) to calculate alpha (1-pLT). We divided N into clusters (between 5 and 10) and recalculated the errors, hypothesising that the coverage would vary in the clusters according to a binomial distribution with preset standard deviations of 0.05 and 0.1 from the mean lot coverage. We selected the plans fulfilling these criteria: alpha ≤ 5% and beta ≤ 20% in the unclustered design; alpha ≤ 10% and beta ≤ 25% when the lots were divided into five clusters. When the interval between UT and LT was larger than 10% (e.g. 15%), we were able to select precise LQAS plans dividing the lot into five clusters with N = 50 (5 x 10) and d = 4 to evaluate programmes with a 95% coverage target and d = 7 to evaluate programmes with a 90% target. These plans will considerably increase the feasibility and rapidity of conducting LQAS in the field.
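The Monte Carlo procedure described can be sketched as follows, for the unclustered N = 50, d = 4 plan with UT = 95 % and the 15 % interval given as an example (so LT = 80 %); this is an illustrative re-implementation, not the authors' code:

```python
import numpy as np

rng = np.random.default_rng(0)
SIMS = 10_000

def simulate_unvaccinated(n, coverage):
    """Number of unvaccinated individuals in each simulated lot of size n."""
    return rng.binomial(n, 1.0 - coverage, size=SIMS)

# Plan from the abstract: N = 50, d = 4, UT = 95 %; LT = 80 % follows from
# the 15 % interval given as an example.
n, d, ut, lt = 50, 4, 0.95, 0.80
beta = 1.0 - np.mean(simulate_unvaccinated(n, ut) <= d)   # rejecting a good lot
alpha = 1.0 - np.mean(simulate_unvaccinated(n, lt) > d)   # accepting a bad lot
print(f"alpha={alpha:.3f}, beta={beta:.3f}")  # expect alpha below 5 %, beta below 20 %
```

Sweeping d for a given N and pair of thresholds, and keeping only plans meeting the alpha/beta criteria, yields preset field plans of the kind tabulated in the paper.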
NASA Astrophysics Data System (ADS)
Li, Qing; Lin, Haibo; Xiu, Yu-Feng; Wang, Ruixue; Yi, Chuijie
The test platform for wheat precision seeding based on image processing techniques is designed to develop a wheat precision seed metering device with high efficiency and precision. Using image processing techniques, this platform gathers images of seeds (wheat) on the conveyor belt as they fall from the seed metering device. These data are then processed and analyzed to calculate the qualified rate, reseeding rate and leakage (missed) sowing rate, etc. This paper introduces the overall structure and design parameters of the platform and the hardware and software of the image acquisition system, as well as the method of seed identification and seed-spacing measurement using image thresholding and seed-center counting. Analysis of the experimental results shows that the measurement error is less than ±1 mm.
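A minimal sketch of the seed identification step, assuming (as the abstract suggests) thresholding followed by blob centroids, with spacing taken as the difference of centroid positions along the belt. The synthetic image and the scipy-based labeling are illustrative choices, not the platform's actual pipeline.

```python
import numpy as np
from scipy import ndimage

def seed_spacings(image, threshold):
    """Threshold the belt image, label seed blobs, and return centroid spacings."""
    binary = image > threshold
    labels, n = ndimage.label(binary)
    centers = ndimage.center_of_mass(binary, labels, range(1, n + 1))
    # Sort centroids along the belt direction (columns) and difference them
    cols = sorted(c[1] for c in centers)
    return np.diff(cols)

# Synthetic belt image: two bright 5x5 "seeds" centered at columns 20 and 70
img = np.zeros((40, 100))
img[18:23, 18:23] = 1.0
img[18:23, 68:73] = 1.0
print(seed_spacings(img, 0.5))  # [50.]
```

Comparing measured spacings against the nominal seeding interval gives the qualified, reseeding (too close) and missed (too far) rates the platform reports.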
DOE Office of Scientific and Technical Information (OSTI.GOV)
Medvedev, Emile S., E-mail: esmedved@orc.ru; Meshkov, Vladimir V.; Stolyarov, Andrey V.
In the recent work devoted to the calculation of the rovibrational line list of the CO molecule [G. Li et al., Astrophys. J., Suppl. Ser. 216, 15 (2015)], rigorous validation of the calculated parameters including intensities was carried out. In particular, the Normal Intensity Distribution Law (NIDL) [E. S. Medvedev, J. Chem. Phys. 137, 174307 (2012)] was employed for the validation purposes, and it was found that, in the original CO line list calculated for large changes of the vibrational quantum number up to Δn = 41, intensities with Δn > 11 were unphysical. Therefore, very high overtone transitions were removed from the published list in Li et al. Here, we show how this type of validation is carried out and prove that the quadruple precision is indispensably required to predict the reliable intensities using the conventional 32-bit computers. Based on these calculations, the NIDL is shown to hold up for the 0 → n transitions till the dissociation limit around n = 83, covering 45 orders of magnitude in the intensity. The low-intensity 0 → n transition predicted in the work of Medvedev [Determination of a new molecular constant for diatomic systems. Normal intensity distribution law for overtone spectra of diatomic and polyatomic molecules and anomalies in overtone absorption spectra of diatomic molecules, Institute of Chemical Physics, Russian Academy of Sciences, Chernogolovka, 1984] at n = 5 is confirmed, and two additional “abnormal” intensities are found at n = 14 and 23. Criteria for the appearance of such “anomalies” are formulated. The results could be useful to revise the high-overtone molecular transition probabilities provided in spectroscopic databases.
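The role of arithmetic precision here can be illustrated with a standard toy example (not the authors' calculation): summing the Taylor series of exp(-30), where terms spanning many orders of magnitude cancel, fails in 64-bit floats but succeeds when carried at 60 significant digits, analogous to the 45 orders of magnitude spanned by the overtone intensities.

```python
import math
from decimal import Decimal, getcontext

def exp_taylor_float(x, terms=200):
    """Naive Taylor series for exp(x) in 64-bit floats."""
    term, total = 1.0, 1.0
    for n in range(1, terms):
        term *= x / n
        total += term
    return total

def exp_taylor_decimal(x, terms=200, digits=60):
    """The same series carried at 60 significant digits."""
    getcontext().prec = digits
    term, total = Decimal(1), Decimal(1)
    for n in range(1, terms):
        term = term * Decimal(x) / n
        total += term
    return float(total)

true = math.exp(-30)             # ≈ 9.36e-14
naive = exp_taylor_float(-30)    # swamped by cancellation among terms up to ~8e11
high = exp_taylor_decimal(-30)   # matches math.exp to full double precision
print(true, naive, high)
```

The largest terms in the series are of order 1e12, so 64-bit rounding alone injects absolute errors far larger than the 1e-13 result; extended precision keeps enough guard digits through the cancellation.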
Homer, Michael D.; Peterson, James T.; Jennings, Cecil A.
2015-01-01
Back-calculation of length-at-age from otoliths and spines is a common technique employed in fisheries biology, but few studies have compared the precision of data collected with this method for catfish populations. We compared precision of back-calculated lengths-at-age for an introduced Ictalurus furcatus (Blue Catfish) population among 3 commonly used cross-sectioning techniques. We used gillnets to collect Blue Catfish (n = 153) from Lake Oconee, GA. We estimated ages from a basal recess, an articulating process, and an otolith cross-section from each fish. We employed the Fraser-Lee method to back-calculate length-at-age for each fish and compared the precision of back-calculated lengths among techniques using hierarchical linear models. Precision in age assignments was highest for otoliths (83.5%) and lowest for basal recesses (71.4%). Back-calculated lengths were variable among fish ages 1–3 for the techniques compared; otoliths and basal recesses yielded variable lengths at age 8. We concluded that otoliths and articulating processes are adequate for age estimation of Blue Catfish.
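The Fraser-Lee back-calculation referenced above follows L_i = c + (L_c - c)(S_i/S_c), where c is the body length intercept, L_c the length at capture, and S_i, S_c the structure radii at age i and at capture. The numbers below are hypothetical, purely to illustrate the arithmetic:

```python
def fraser_lee(length_capture, radius_capture, radius_at_age, intercept):
    """Fraser-Lee back-calculated length at age:
    L_i = c + (L_c - c) * (S_i / S_c), with c the body-structure intercept."""
    return intercept + (length_capture - intercept) * (radius_at_age / radius_capture)

# Hypothetical example: 600 mm catfish, otolith radius 2.0 mm at capture,
# annulus radius 0.5 mm at age 1, intercept c = 35 mm (illustrative only)
print(fraser_lee(600.0, 2.0, 0.5, 35.0))  # 176.25
```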
NASA Astrophysics Data System (ADS)
Fan, X. W.; Chen, X. J.; Zhou, S. J.; Zheng, Y.; Brion, C. E.; Frey, R.; Davidson, E. R.
1997-09-01
A newly constructed energy dispersive multichannel electron momentum spectrometer has been used to image the electron density of the outer valence orbitals of CO with high precision. Binding energy spectra are obtained at a coincidence energy resolution of 1.2 eV FWHM. The measured electron density profiles in momentum space for the outer valence orbitals of CO are compared with cross sections calculated using SCF wavefunctions with basis sets of varying complexity, up to near-Hartree-Fock-limit quality. The effects of correlation and electronic relaxation on the calculated momentum profiles are investigated using large MRSD-CI calculations of the full ion-neutral overlap distributions, as well as large basis set DFT calculations with local and non-local (gradient corrected) functionals.
NASA Astrophysics Data System (ADS)
Debras, F.; Chabrier, G.
2018-01-01
A few years ago, Hubbard (2012, ApJ, 756, L15; 2013, ApJ, 768, 43) presented an elegant, non-perturbative method, called the concentric MacLaurin spheroid (CMS) method, to calculate with very high accuracy the gravitational moments of a rotating fluid body following a barotropic pressure-density relationship. Having such an accurate method is of great importance for taking full advantage of the Juno mission, and its extremely precise determination of Jupiter's gravitational moments, to better constrain the internal structure of the planet. Recently, several authors have applied this method to the Juno mission with 512 spheroids linearly spaced in altitude. We demonstrate in this paper that such calculations lead to errors larger than Juno's error bars, invalidating the previously derived Jupiter models at the level required by Juno's precision. We show that, in order to fulfill Juno's observational constraints, at least 1500 spheroids must be used, with a cubic, squared or exponential distribution, the most reliable solutions. When using a realistic equation of state instead of a polytrope, we highlight the necessity of properly describing the outermost layers to derive an accurate boundary condition, excluding in particular a zero-pressure outer condition. Provided all these constraints are fulfilled, the CMS method can indeed be used to derive Jupiter models within Juno's present observational constraints. However, we show that the treatment of the outermost layers leads to irreducible errors in the calculation of the gravitational moments and thus in the inferred physical quantities for the planet. We have quantified these errors and evaluated the maximum precision that can be reached with the CMS method in the present and future exploitation of Juno's data.
Kandegedara, R. M. E. B.; Bollen, G.; Eibach, M.; ...
2017-10-20
This manuscript describes a measurement of the Q value for the highly forbidden beta-decays of 50V and the double electron capture decay of 50Cr. The Q value corresponds to the total energy released during the decay and is equivalent to the mass difference between parent and daughter atoms. This mass difference was measured using high precision Penning trap mass spectrometry with the Low Energy Beam and Ion Trap facility at the National Superconducting Cyclotron Laboratory. The Q value enters into theoretical calculations of the half-life and beta-decay spectrum for the decay, and so improves these calculations. In addition, the Q value corresponds to the end point energy of the beta-decay spectrum, which has been precisely measured for several highly-forbidden decays using modern low background detector techniques. Hence, our Q value measurements provide a test of systematics for these detectors. In addition, we have measured the absolute atomic masses of 46,47,49,50Ti, 50,51V, and 50,52-52Cr, providing improvements in precision by factors of up to 3. These atomic masses help to strengthen global evaluations of all atomic mass data, such as the Atomic Mass Evaluation.
Buchner, Lena; Güntert, Peter
2015-02-03
Nuclear magnetic resonance (NMR) structures are represented by bundles of conformers calculated from different randomized initial structures using identical experimental input data. The spread among these conformers indicates the precision of the atomic coordinates. However, there is as yet no reliable measure of structural accuracy, i.e., how close NMR conformers are to the "true" structure. Instead, the precision of structure bundles is widely (mis)interpreted as a measure of structural quality. Attempts to increase precision often produce tight bundles of high precision but much lower accuracy, thereby overestimating accuracy. To overcome this problem, we introduce a protocol for NMR structure determination with the software package CYANA, which produces, like the traditional method, bundles of conformers in agreement with a common set of conformational restraints, but with a realistic precision that is, across a variety of proteins and NMR data sets, a much better estimate of structural accuracy than the precision of conventional structure bundles. Copyright © 2015 Elsevier Ltd. All rights reserved.
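The precision-versus-accuracy distinction drawn above can be made concrete with a toy numerical example (this is not CYANA code): a bundle of conformers that is tightly clustered around its own mean (high precision) yet systematically offset from the reference structure (low accuracy).

```python
import numpy as np

# Toy "bundle": 20 conformers of a 10-atom structure, tightly clustered
# but biased ~1 unit away from the reference along x.
rng = np.random.default_rng(0)
reference = np.zeros((10, 3))                     # "true" atom coordinates
bias = np.array([1.0, 0.0, 0.0])                  # systematic error
bundle = [reference + bias + 0.05 * rng.standard_normal((10, 3))
          for _ in range(20)]

mean_conf = np.mean(bundle, axis=0)
# Precision: average RMSD of each conformer to the bundle mean.
precision = np.mean([np.sqrt(np.mean(np.sum((c - mean_conf)**2, axis=1)))
                     for c in bundle])
# Accuracy: RMSD of the bundle mean to the reference structure.
accuracy = np.sqrt(np.mean(np.sum((mean_conf - reference)**2, axis=1)))

print(precision < 0.2, accuracy > 0.9)  # tight bundle, yet far from the truth
```

The tight spread would suggest a high-quality structure, while the accuracy number reveals the systematic error, which is precisely the misinterpretation the protocol above addresses.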
Improved algorithm of ray tracing in ICF cryogenic targets
NASA Astrophysics Data System (ADS)
Zhang, Rui; Yang, Yongying; Ling, Tong; Jiang, Jiabin
2016-10-01
High-precision ray tracing inside inertial confinement fusion (ICF) cryogenic targets plays an important role in the reconstruction of the three-dimensional density distribution by the algebraic reconstruction technique (ART) algorithm. The traditional Runge-Kutta method, which is restricted by the precision of the grid division and the step size of the ray trace, cannot compute accurately where the refractive index changes abruptly. In this paper, we propose an improved ray-tracing algorithm based on the Runge-Kutta method and Snell's law of refraction to achieve high tracing precision. On refractive-index boundaries, we apply Snell's law of refraction together with a contact-point search algorithm to ensure the accuracy of the simulation. Inside the cryogenic target, the combination of the Runge-Kutta method and a self-adaptive step algorithm is employed for computation. The original refractive-index data used to mesh the target can be obtained by experimental measurement or from an a priori refractive-index distribution function. A finite-difference method is used to calculate the refractive-index gradient at the mesh nodes, and distance-weighted average interpolation is used to obtain the refractive index and its gradient at each point in space. In the simulation, we use an ideal ICF target, a Luneburg lens and a graded-index rod as models to calculate the spot diagram and wavefront map. Comparison of the simulation results with Zemax shows that the improved ray-tracing algorithm based on the fourth-order Runge-Kutta method and Snell's law of refraction exhibits high accuracy. The relative error of the spot diagram is 0.2%, and the peak-to-valley (PV) and root-mean-square (RMS) errors of the wavefront map are less than λ/35 and λ/100, respectively.
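The Runge-Kutta portion of such a tracer integrates the ray equation d/ds(n dr/ds) = ∇n through a smooth gradient-index region. The following is a minimal sketch under that assumption (not the paper's code): state (r, T) with T = n·dr/ds, one fourth-order Runge-Kutta step, and a numerical gradient of an arbitrary index function n(r).

```python
import numpy as np

def grad_n(n, r, h=1e-6):
    """Central-difference gradient of the refractive-index function n at r."""
    g = np.zeros(3)
    for i in range(3):
        e = np.zeros(3); e[i] = h
        g[i] = (n(r + e) - n(r - e)) / (2 * h)
    return g

def rk4_ray_step(n, r, T, ds):
    """One RK4 step of dr/ds = T/n, dT/ds = grad n (ray equation)."""
    def deriv(r, T):
        return T / n(r), grad_n(n, r)
    k1r, k1T = deriv(r, T)
    k2r, k2T = deriv(r + 0.5 * ds * k1r, T + 0.5 * ds * k1T)
    k3r, k3T = deriv(r + 0.5 * ds * k2r, T + 0.5 * ds * k2T)
    k4r, k4T = deriv(r + ds * k3r, T + ds * k3T)
    r_new = r + ds / 6 * (k1r + 2 * k2r + 2 * k3r + k4r)
    T_new = T + ds / 6 * (k1T + 2 * k2T + 2 * k3T + k4T)
    return r_new, T_new

# Sanity check: in a homogeneous medium the ray must stay straight.
n_const = lambda r: 1.5
r, T = np.zeros(3), np.array([0.0, 0.0, 1.5])   # T = n * unit direction
for _ in range(100):
    r, T = rk4_ray_step(n_const, r, T, 0.01)
print(np.allclose(r, [0.0, 0.0, 1.0]))  # traveled 1.0 along z
```

At an index discontinuity this smooth integrator fails, which is where the boundary handling via Snell's law described above takes over.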
Analysis of photopole data reduction models
NASA Technical Reports Server (NTRS)
Cheek, James B.
1987-01-01
The total impulse delivered by a buried explosive charge can be calculated from displacement-versus-time points taken from successive film frames of high-speed motion pictures of the explosive event. The indicator of that motion is a pole and baseplate (photopole), which is placed on or within the soil overburden. Here, the researchers are concerned with the precision of the impulse calculation and ways to improve that precision. Also examined is the effect of each initial condition on the curve-fitting process. It is shown that the zero-initial-velocity criterion should not be applied, owing to the linear acceleration-versus-time character of the cubic power series. The applicability of the new method to photopole data records whose early-time motions are obscured is illustrated.
Development and simulation of microfluidic Wheatstone bridge for high-precision sensor
NASA Astrophysics Data System (ADS)
Shipulya, N. D.; Konakov, S. A.; Krzhizhanovskaya, V. V.
2016-08-01
In this work we present the results of analytical modeling and 3D computer simulation of a microfluidic Wheatstone bridge, which is used for high-accuracy measurements and precision instruments. We propose and simulate a new method for balancing the bridge by changing the microchannel geometry. This process is based on the "etching in microchannel" technology we developed earlier (doi:10.1088/1742-6596/681/1/012035). Our method ensures precise control of the flow rate and flow direction in the bridge microchannel. The advantage of our approach is the ability to work without any control valves or other active electronic systems, which are usually used for bridge balancing. The geometrical configuration of the microchannels was selected based on analytical estimates. A detailed 3D numerical model was based on the Navier-Stokes equations for laminar fluid flow at low Reynolds numbers. We investigated the behavior of the Wheatstone bridge under different process conditions; found a relation between the channel resistance and the flow rate through the bridge; and calculated the pressure drop across the system under different total flow rates and viscosities. Finally, we describe a high-precision microfluidic pressure sensor that employs the Wheatstone bridge and discuss other applications in complex precision microfluidic systems.
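At low Reynolds number the bridge behaves like its electrical analogue: each channel has a hydraulic resistance (for a round channel, Hagen-Poiseuille gives R = 8μL/(πr⁴)), and the bridge channel carries no flow when the arm ratios match. The following is an illustrative sketch of that analogy, not the authors' model; arm labels R1..R5 and the unit inlet pressure are assumptions.

```python
import numpy as np

# Hydraulic Wheatstone bridge: arms R1 (in->C), R2 (C->out), R3 (in->D),
# R4 (D->out), and bridge channel R5 (C-D). Outlet pressure is 0.
def bridge_flow(R1, R2, R3, R4, R5, P_in=1.0):
    """Flow rate through the bridge channel (from node C to node D)."""
    # Mass conservation at internal nodes C and D (unknown pressures pC, pD):
    A = np.array([[1/R1 + 1/R2 + 1/R5, -1/R5],
                  [-1/R5, 1/R3 + 1/R4 + 1/R5]])
    b = np.array([P_in / R1, P_in / R3])
    pC, pD = np.linalg.solve(A, b)
    return (pC - pD) / R5

# Balanced bridge (R1/R2 == R3/R4): no flow through the bridge channel.
print(abs(bridge_flow(1.0, 2.0, 2.0, 4.0, 1.0)) < 1e-12)
# Unbalanced bridge: a net flow appears, whose sign gives the flow direction.
print(abs(bridge_flow(1.0, 2.0, 2.0, 1.0, 1.0)) > 1e-3)
```

Etching one arm changes its resistance and hence the balance condition, which is the geometric balancing idea described above.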
Method for measuring retardation of infrared wave-plate by modulated-polarized visible light
NASA Astrophysics Data System (ADS)
Zhang, Ying; Song, Feijun
2012-11-01
A new method for precisely measuring the optical phase retardation of wave-plates in the infrared spectral region, using modulated-polarized visible light, is presented. An electro-optic modulator is used to accurately determine the zero point from the frequency-doubled signal of the modulated-polarized light. A Babinet-Soleil compensator is employed to compensate the phase delay. Based on this method, an instrument is set up to measure the retardations of infrared wave-plates with a visible-region laser. Measurements of high accuracy and good repeatability are obtained by simple calculation, with a repeatability within 0.3%.
High Precision Edge Detection Algorithm for Mechanical Parts
NASA Astrophysics Data System (ADS)
Duan, Zhenyun; Wang, Ning; Fu, Jingshun; Zhao, Wenhui; Duan, Boqiang; Zhao, Jungui
2018-04-01
High-precision and high-efficiency measurement is becoming an imperative requirement for many mechanical parts. In this study, a subpixel-level edge detection algorithm based on a Gaussian integral model is proposed. For this purpose, the Gaussian integral model of the step edge along the normal section line of the backlight image is constructed, combining the point spread function and the single-step model. The gray values of discrete points on the normal section line of the pixel edge are then calculated by surface interpolation, and the coordinate and gray information affected by noise are fitted according to the Gaussian integral model. The precise location of a subpixel edge is then determined by searching for the mean point. Finally, a gear tooth was measured with an M&M3525 gear measurement center to verify the proposed algorithm. The theoretical analysis and experimental results show that local edge fluctuation is reduced effectively by the proposed method in comparison with existing subpixel edge detection algorithms, and both the subpixel edge location accuracy and the computation speed are improved. The maximum error of the gear tooth profile total deviation is 1.9 μm compared with the measurement result from the gear measurement center. This indicates that the method is sufficiently reliable to meet the requirements of high-precision measurement.
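The "mean point" idea can be illustrated with a minimal 1-D sketch (this is not the paper's Gaussian-integral fit): a blurred step edge is modelled with the error function, and the subpixel edge location is recovered as the gradient-weighted centroid of the profile. The edge position and blur width below are made-up test values.

```python
import math

# Synthetic blurred step edge: erf profile with true edge at x = 10.3 px.
true_edge, sigma = 10.3, 1.2
xs = list(range(21))
profile = [0.5 * (1 + math.erf((x - true_edge) / (sigma * math.sqrt(2))))
           for x in xs]

# Central-difference gradient, then gradient-weighted centroid ("mean point"):
grad = [(profile[i + 1] - profile[i - 1]) / 2 for i in range(1, 20)]
edge = sum(x * g for x, g in zip(xs[1:20], grad)) / sum(grad)
print(abs(edge - true_edge) < 0.05)  # recovered to well below one pixel
```

Because the gradient of an erf step is a symmetric Gaussian, its centroid coincides with the edge location, which is why subpixel accuracy is achievable from integer-pixel samples.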
High precision predictions for exclusive VH production at the LHC
Li, Ye; Liu, Xiaohui
2014-06-04
We present a resummation-improved prediction for pp → VH + 0 jets at the Large Hadron Collider. We focus on highly boosted final states in the presence of a jet veto to suppress the tt¯ background. In this case, conventional fixed-order calculations are plagued by the existence of large Sudakov logarithms α_s^n log^m(p_T^veto/Q) for Q ~ m_V + m_H, which lead to unreliable predictions as well as large theoretical uncertainties, and thus limit the accuracy when comparing experimental measurements to the Standard Model. In this work, we show that the resummation of Sudakov logarithms beyond next-to-next-to-leading-log accuracy, combined with the next-to-next-to-leading order calculation, reduces the scale uncertainty and stabilizes the perturbative expansion in the region where the vector bosons carry large transverse momentum. Thus, our result improves the precision with which Higgs properties can be determined from LHC measurements using boosted Higgs techniques.
Bärnreuther, Peter; Czakon, Michał; Mitov, Alexander
2012-09-28
We compute the next-to-next-to-leading order QCD corrections to the partonic reaction that dominates top-pair production at the Tevatron. This is the first ever next-to-next-to-leading order calculation of an observable with more than two colored partons and/or massive fermions at hadron colliders. Augmenting our fixed order calculation with soft-gluon resummation through next-to-next-to-leading logarithmic accuracy, we observe that the predicted total inclusive cross section exhibits a very small perturbative uncertainty, estimated at ±2.7%. We expect that once all subdominant partonic reactions are accounted for, and work in this direction is ongoing, the perturbative theoretical uncertainty for this observable could drop below ±2%. Our calculation demonstrates the power of our computational approach and proves it can be successfully applied to all processes at hadron colliders for which high-precision analyses are needed.
Equation of state of liquid Indium under high pressure
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Huaming (huamingli@gatech.edu); Li, Mo (mo.li@gatech.edu); School of Materials Science and Engineering, Georgia Institute of Technology, Atlanta, GA 30332
2015-09-15
We apply an equation of state of a power-law form to liquid indium to study its thermodynamic properties under high temperature and high pressure. The molar volume of molten indium is calculated along the 710 K isotherm with good precision, as compared with experimental data from an externally heated diamond anvil cell. Bulk modulus, thermal expansion and internal pressure are obtained for isothermal compression. Other thermodynamic properties are also calculated along the fitted high-pressure melting line. While our results suggest that the power-law form may be a better choice for the equation of state of liquids, these detailed predictions are yet to be confirmed by further experiment.
Proposal for the determination of nuclear masses by high-precision spectroscopy of Rydberg states
NASA Astrophysics Data System (ADS)
Wundt, B. J.; Jentschura, U. D.
2010-06-01
The theoretical treatment of Rydberg states in one-electron ions is facilitated by the virtual absence of the nuclear-size correction, and fundamental constants like the Rydberg constant may be within the reach of planned high-precision spectroscopic experiments. The dominant nuclear effect that shifts transition energies among Rydberg states is therefore due to the nuclear mass. As a consequence, spectroscopic measurements of Rydberg transitions can be used to precisely deduce nuclear masses. A possible application of this approach to hydrogen and deuterium, and to hydrogen-like lithium and carbon, is explored in detail. In order to complete the analysis, numerical and analytic calculations of the quantum electrodynamic self-energy remainder function are described for states with principal quantum number n = 5, ..., 8 and angular momentum ℓ = n − 1 and ℓ = n − 2 (j = ℓ ± 1/2).
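The nuclear-mass sensitivity comes from the reduced-mass scaling of hydrogenic energies: the effective Rydberg constant is R_M = R∞/(1 + m_e/M), so the fractional shift of a transition frequency is approximately m_e/M. A minimal numerical sketch of this scaling argument (constants are CODATA-level approximations; this is not the paper's QED calculation):

```python
# Reduced-mass scaling of hydrogenic transition energies.
R_INF = 10973731.568160        # Rydberg constant for infinite nuclear mass, m^-1
ME_OVER_MP = 1 / 1836.15267343  # electron-to-proton mass ratio

def rydberg_for_mass(me_over_M):
    """Effective Rydberg constant for a nucleus of mass M."""
    return R_INF / (1.0 + me_over_M)

# Fractional shift between infinite and finite (proton) nuclear mass:
shift = 1.0 - rydberg_for_mass(ME_OVER_MP) / R_INF
print(abs(shift - ME_OVER_MP) < 1e-6)  # shift ~ m_e/M to first order
```

Since the shift is of order 5×10⁻⁴ for hydrogen, a transition frequency measured to 10⁻¹² fractional precision constrains m_e/M, and hence the nuclear mass, at roughly the 10⁻⁹ level.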
Limiting Energy Dissipation Induces Glassy Kinetics in Single-Cell High-Precision Responses
Das, Jayajit
2016-01-01
Single cells often generate precise responses by involving dissipative out-of-thermodynamic-equilibrium processes in signaling networks. The available free energy to fuel these processes could become limited depending on the metabolic state of an individual cell. How does limiting dissipation affect the kinetics of high-precision responses in single cells? I address this question in the context of a kinetic proofreading scheme used in a simple model of early-time T cell signaling. Using exact analytical calculations and numerical simulations, I show that limiting dissipation qualitatively changes the kinetics in single cells marked by emergence of slow kinetics, large cell-to-cell variations of copy numbers, temporally correlated stochastic events (dynamic facilitation), and ergodicity breaking. Thus, constraints in energy dissipation, in addition to negatively affecting ligand discrimination in T cells, can create a fundamental difficulty in determining single-cell kinetics from cell-population results.
High-Precision Half-Life Measurement for the Superallowed β+ Emitter 22Mg
NASA Astrophysics Data System (ADS)
Dunlop, Michelle
2017-09-01
High precision measurements of the Ft values for superallowed Fermi beta transitions between 0+ isobaric analogue states allow for stringent tests of the electroweak interaction. These transitions provide an experimental probe of the Conserved-Vector-Current hypothesis, the most precise determination of the up-down element of the Cabibbo-Kobayashi-Maskawa matrix, and set stringent limits on the existence of scalar currents in the weak interaction. To calculate the Ft values several theoretical corrections must be applied to the experimental data, some of which have large model dependent variations. Precise experimental determinations of the ft values can be used to help constrain the different models. The uncertainty in the 22Mg superallowed Ft value is dominated by the uncertainty in the experimental ft value. The adopted half-life of 22Mg is determined from two measurements which disagree with one another, resulting in the inflation of the weighted-average half-life uncertainty by a factor of 2. The 22Mg half-life was measured with a precision of 0.02% via direct β counting at TRIUMF's ISAC facility, leading to an improvement in the world-average half-life by more than a factor of 3.
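The uncertainty inflation mentioned above follows the standard procedure for averaging discrepant measurements: form the weighted mean, compute χ², and scale the uncertainty by √(χ²/dof) when χ²/dof > 1. A sketch of that procedure (the half-life numbers below are made up, not the 22Mg measurements):

```python
import math

def weighted_average(values, errors):
    """Weighted mean with PDG-style error-bar inflation for discrepant data."""
    w = [1 / e**2 for e in errors]
    mean = sum(wi * v for wi, v in zip(w, values)) / sum(w)
    err = math.sqrt(1 / sum(w))
    # If the inputs disagree (chi2/dof > 1), inflate the uncertainty:
    chi2 = sum(wi * (v - mean)**2 for wi, v in zip(w, values))
    dof = len(values) - 1
    scale = math.sqrt(chi2 / dof) if dof and chi2 > dof else 1.0
    return mean, err * scale, scale

# Two hypothetical, mutually inconsistent half-life values (seconds):
mean, err, scale = weighted_average([3.8755, 3.8775], [0.0004, 0.0004])
print(scale > 2)  # discrepant inputs force a large inflation factor
```

A single new measurement of much higher precision both pulls the weighted mean toward itself and reduces the scale factor, which is how the TRIUMF result improves the world average.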
A fiducial skull marker for precise MRI-based stereotaxic surgery in large animal models.
Glud, Andreas Nørgaard; Bech, Johannes; Tvilling, Laura; Zaer, Hamed; Orlowski, Dariusz; Fitting, Lise Moberg; Ziedler, Dora; Geneser, Michael; Sangill, Ryan; Alstrup, Aage Kristian Olsen; Bjarkam, Carsten Reidies; Sørensen, Jens Christian Hedemann
2017-06-15
Stereotaxic neurosurgery in large animals is used widely in different sophisticated models, where precision is becoming more crucial as the desired anatomical target regions become smaller. Individually calculated coordinates are necessary in large animal models with cortical and subcortical anatomical differences. We present a convenient method for making an MRI-visible skull fiducial for 3D MRI-based stereotaxic procedures in larger experimental animals. Plastic screws were filled with either copper-sulfate solution or MRI-visible paste from a commercially available cranial head marker. The screw fiducials were inserted in the animal skulls and T1-weighted MRI was performed, allowing identification of the inserted skull marker. Both types of fiducial markers were clearly visible on the MRIs. This allows high precision in stereotaxic space. The use of skull-bone-based fiducial markers gives high precision for both targeting and evaluation of stereotaxic systems. There are no metal artifacts, and the fiducial is easily removed after surgery. The fiducial marker can be used as a very precise reference point, either for direct targeting or for evaluation of other stereotaxic systems. Copyright © 2017 Elsevier B.V. All rights reserved.
Prospects of photonic nanojets for precise exposure on microobjects
DOE Office of Scientific and Technical Information (OSTI.GOV)
Geints, Yu. E., E-mail: ygeints@iao.ru; Zuev Institute of Atmospheric Optics, SB Russian Academy of Sciences, Acad. Zuev Square 1, Tomsk, 634021; Panina, E. K., E-mail: pek@iao.ru
We report on a new optical tool for precise manipulation of various microobjects. This tool is referred to as a "photonic nanojet" (PJ) and corresponds to a specific spatially localized, high-intensity area formed near micron-sized transparent spherical dielectric particles illuminated by visible laser radiation. A descriptive analysis of the morphological shapes of photonic nanojets is presented. The PJ shape characterization is based on numerical calculations of the near-field distribution according to Mie theory and accounts for jet dimensions and shape complexity.
Analyzing power Ay(θ) of n-3He elastic scattering between 1.60 and 5.54 MeV.
Esterline, J; Tornow, W; Deltuva, A; Fonseca, A C
2013-04-12
Comprehensive and high-accuracy n-3He elastic scattering analyzing power Ay(θ) angular distributions were obtained at five incident neutron energies between 1.60 and 5.54 MeV. The data are compared to rigorous four-nucleon calculations using high-precision nucleon-nucleon potential models; three-nucleon force effects are found to be very small. The agreement between data and calculations is fair at the lower energies and becomes less satisfactory with increasing neutron energy. Comparison to p-3He scattering over the same energy range exhibits unexpectedly large isospin effects.
NASA Astrophysics Data System (ADS)
Li, Chengqi; Ren, Zhigang; Yang, Bo; An, Qinghao; Yu, Xiangru; Li, Jinping
2017-12-01
In the process of dismounting and assembling the drop switch with the high-voltage electric power live-line working (EPL2W) robot, one of the key problems is the positioning precision of the manipulators, the gripper and the bolts used to fix the drop switch. To solve it, we study the theory of the robot's binocular vision system and the characteristics of dismounting and assembling the drop switch. We propose a coarse-to-fine image registration algorithm based on image correlation, which significantly improves the positioning precision of the manipulators and bolts. The algorithm performs three steps. First, the target points are marked in the right and left views, and the system judges whether the target point in the right view satisfies the minimum registration accuracy by using the similarity of the target points' backgrounds in the two views; this is a typical coarse-to-fine strategy. Second, the system calculates the epipolar line, and a sequence of candidate regions containing potential matching points is generated from the neighborhood of the epipolar line; the optimal matching image is selected by calculating, via correlation matching, the similarity between the template image in the left view and each region in the sequence. Finally, the precise coordinates of the target points in the right and left views are calculated from the optimal matching image. The experimental results indicate that the positioning accuracy in image coordinates is within 2 pixels and the positioning accuracy in the world coordinate system is within 3 mm; the positioning accuracy of the binocular vision system thus satisfies the requirements for dismounting and assembling the drop switch.
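The correlation-matching step along the epipolar line can be sketched with normalized cross-correlation (NCC) over candidate windows in a strip of the right image. This is an illustrative stand-in for the robot's matcher, not its actual code; the strip size, template width and true offset are made-up test values.

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation of two equally sized image patches."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a**2).sum() * (b**2).sum())
    return float((a * b).sum() / denom) if denom else 0.0

rng = np.random.default_rng(1)
right_strip = rng.random((16, 100))        # band around the epipolar line
template = right_strip[:, 40:56].copy()    # left-view template; true match at u = 40

# Slide the template along the strip and keep the best-scoring position:
scores = [ncc(template, right_strip[:, u:u + 16]) for u in range(100 - 16)]
print(int(np.argmax(scores)))  # -> 40
```

In the coarse-to-fine scheme above, a low best score would reject the point at the coarse stage, while a high score selects the window used for the precise coordinate computation.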
A versatile program for the calculation of linear accelerator room shielding.
Hassan, Zeinab El-Taher; Farag, Nehad M; Elshemey, Wael M
2018-03-22
This work aims at designing a computer program to calculate the necessary amount of shielding for a given or proposed linear accelerator room design in radiotherapy. The program (Shield Calculation in Radiotherapy, SCR) has been developed using Microsoft Visual Basic. It applies the treatment room shielding calculations of NCRP report no. 151 to calculate proper shielding thicknesses for a given linear accelerator treatment room design. The program is composed of six main user-friendly interfaces. The first enables the user to upload their choice of treatment room design and to measure the distances required for shielding calculations. The second enables the user to calculate the primary barrier thickness for three-dimensional conformal radiotherapy (3D-CRT), intensity modulated radiotherapy (IMRT) and total body irradiation (TBI). The third calculates the required secondary barrier thickness due to both scattered and leakage radiation. The fourth and fifth provide a means to calculate the photon dose equivalent for low- and high-energy radiation, respectively, in the door and maze areas. The sixth enables the user to calculate skyshine radiation for photons and neutrons. The SCR program has been successfully validated, precisely reproducing all of the calculated examples presented in NCRP report no. 151 in a simple and fast manner. Moreover, it easily performed the same calculations for a test design that was also calculated manually, and produced the same results. The program includes a new and important feature: the ability to calculate the required treatment room barrier thickness for IMRT and TBI. It is characterised by simplicity, precision, and data saving, printing and retrieval, in addition to providing a means for uploading and testing any proposed treatment room shielding design. The SCR program provides comprehensive, simple, fast and accurate room shielding calculations in radiotherapy.
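The core of the NCRP report no. 151 primary-barrier calculation such a program implements is short: the required transmission factor B = P·d²/(W·U·T) is converted into a number of tenth-value layers, n = −log₁₀B, and the thickness follows from the first and equilibrium TVLs. The sketch below shows this chain; all numeric inputs (design goal, workload, TVLs) are illustrative placeholders, not values taken from the report.

```python
import math

def primary_barrier_thickness(P, d, W, U, T, tvl1, tvle):
    """NCRP 151-style primary barrier thickness (m).

    P: shielding design goal (Sv/week) at the point of interest
    d: distance from target to the point of interest (m)
    W: workload (Gy/week); U: use factor; T: occupancy factor
    tvl1, tvle: first and equilibrium tenth-value layers (m)
    """
    B = P * d**2 / (W * U * T)      # required barrier transmission
    n = -math.log10(B)              # number of tenth-value layers
    return tvl1 + (n - 1) * tvle    # barrier thickness

# Hypothetical controlled-area example:
t = primary_barrier_thickness(P=2e-5, d=6.0, W=450.0, U=0.25, T=1.0,
                              tvl1=0.37, tvle=0.33)
print(round(t, 3))  # -> 1.754
```

Secondary-barrier, door/maze and skyshine calculations follow the same pattern with different source terms, which is why a six-interface program structure maps naturally onto the report.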
Fine Structure in Helium-like Fluorine by Fast-Beam Laser Spectroscopy
NASA Astrophysics Data System (ADS)
Myers, E. G.; Thompson, J. K.; Silver, J. D.
1998-05-01
With the aim of providing an additional precise test of higher-order corrections to high-precision calculations of fine structure in helium and helium-like ions (T. Zhang, Z.-C. Yan and G.W.F. Drake, Phys. Rev. Lett. 77, 1715 (1996)), a measurement of the 2^3P_2,F - 2^3P_1,F' fine structure in ^19F^7+ is in progress. The method involves Doppler-tuned laser spectroscopy using a CO2 laser on a foil-stripped fluorine ion beam. We aim to achieve a higher precision, compared to an earlier measurement (E.G. Myers, P. Kuske, H.J. Andrae, I.A. Armour, H.A. Klein, J.D. Silver, and E. Traebert, Phys. Rev. Lett. 47, 87 (1981)), by using laser beams parallel and anti-parallel to the ion beam to obtain partial cancellation of the Doppler shift (J.K. Thompson, D.J.H. Howie and E.G. Myers, Phys. Rev. A 57, 180 (1998)). A calculation of the hyperfine structure, allowing for relativistic, QED and nuclear-size effects, will be required to obtain the "hyperfine-free" fine structure interval from the measurements.
Consistent calculation of the screening and exchange effects in allowed β- transitions
NASA Astrophysics Data System (ADS)
Mougeot, X.; Bisch, C.
2014-07-01
The atomic exchange effect has previously been demonstrated to have a great influence at low energy on the Pu241 β- transition. The screening effect has been given as a possible explanation for a remaining discrepancy. Improved calculations have been made to consistently evaluate these two atomic effects, compared here to the recent high-precision measurements of Pu241 and Ni63 β spectra. In this paper a screening correction has been defined to account for the spatial extension of the electron wave functions. Excellent overall agreement of about 1% from 500 eV to the end-point energy has been obtained for both β spectra, which demonstrates that a rather simple β decay model for allowed transitions, including atomic effects within an independent-particle model, is sufficient to describe well the current most precise measurements.
Measurement of material mechanical properties in microforming
NASA Astrophysics Data System (ADS)
Yun, Wang; Xu, Zhenying; Hui, Huang; Zhou, Jianzhong
2006-02-01
As micro-electro-mechanical systems find wide development and application, ranging from mobile phones to medical apparatus, the demand for metal micro-parts is increasing steadily, and microforming technology challenges conventional plastic-processing technology. Previous findings have shown that, with the grain size of the specimen held constant, the flow stress changes with increasing miniaturization, as do the necking elongation and the uniform elongation. It is impossible to obtain the material properties of such specimens with a conventional tensile-test machine, especially at high precision. Therefore, a new high-precision method for measuring specimen mechanical properties is proposed. In this method, a high-speed charge-coupled device (CCD) camera is coupled with a high-precision coordinate measuring machine (CMM) to obtain the elongation and tensile strain over the gauge length. The elongation, yield stress and other mechanical properties can then be calculated from the relationship between the images and the CCD camera movement. This measuring method can be extended to other experiments, such as alignment of the tool and specimen, and the micro-drawing process.
NASA Astrophysics Data System (ADS)
Bonesini, Maurizio
2017-12-01
The FAMU (Fisica degli Atomi Muonici) experiment has the goal of precisely measuring the proton Zemach radius, thus contributing to the solution of the so-called proton radius "puzzle". To this aim, it makes use of a high-intensity pulsed muon beam at RIKEN-RAL impinging on a cryogenic hydrogen target with a high-Z gas admixture, together with a tunable mid-IR high-power laser, to measure the hyperfine (HFS) splitting of the 1S state of muonic hydrogen. From the value of the exciting laser frequency, the energy of the HFS transition may be derived with high precision (∼10^-5) and thus, via QED calculations, the Zemach radius of the proton. The experimental apparatus includes a precise fiber-SiPMT beam hodoscope, a crown of eight LaBr3 crystals and a few HPGe detectors for detection of the emitted characteristic X-rays. Preliminary runs to optimize the gas target filling and its operating conditions were taken in 2014 and 2015-2016. The final run, with the pump laser driving the HFS transition, is expected in 2018.
NASA Astrophysics Data System (ADS)
Rennick, Chris; Bausi, Francesco; Arnold, Tim
2017-04-01
On the global scale, methane (CH4) concentrations have more than doubled over the last 150 years, and their contribution to the enhanced greenhouse effect is almost half of that due to the increase in carbon dioxide (CO2) over the same period. Microbial, fossil fuel, biomass burning and landfill are dominant methane sources with differing annual variabilities; however, in the UK, for example, mixing-ratio measurements from a tall-tower network and regional-scale inversion modelling have thus far been unable to disaggregate emissions from specific source categories with any significant certainty. Measurement of the methane isotopologue ratios will provide the additional information needed for more robust sector attribution, which will be important for directing policy action. Here we explore the potential for isotope-ratio measurements to improve the interpretation of atmospheric mixing ratios beyond calculation of total UK emissions, and describe current analytical work at the National Physical Laboratory that will realise deployment of such measurements. We simulate isotopic variations at the four UK greenhouse gas tall-tower network sites to understand where deployment of the first isotope analyser would be best situated. We calculate the levels of precision needed in both δ13C and δD in order to detect particular scenarios of emissions. Spectroscopic measurement in the infrared by quantum cascade laser (QCL) absorption is a well-established technique for quantifying the mixing ratios of trace species in atmospheric samples and, as was demonstrated in 2016, high-precision measurements are possible if the spectrometer is coupled to a suitable preconcentrator. The current preconcentration system under development at NPL is designed to make the highest-precision measurements yet of the standard isotope ratios via a new large-volume cryogenic trap design and controlled thermal desorption into a QCL spectrometer.
Finally we explore the potential for the measurement of clumped isotopes at high frequency and precision. The doubly-substituted 13CH3D isotopologue is a tracer for methane formed at geological temperatures, and will provide additional information for identification of these sources.
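The δ values discussed above are defined relative to an international standard: δ = (R_sample/R_standard − 1)×1000, in per mil. A minimal sketch of this arithmetic follows; the VPDB 13C/12C ratio used is the commonly quoted approximate value, and the sample ratio is a made-up example of isotopically light (microbial-type) methane.

```python
# Delta-notation arithmetic for isotope-ratio measurements.
R_VPDB = 0.011180  # approximate 13C/12C ratio of the VPDB standard

def delta_per_mil(r_sample, r_standard):
    """Delta value in per mil: (R_sample / R_standard - 1) * 1000."""
    return (r_sample / r_standard - 1.0) * 1000.0

# A sample depleted in 13C relative to VPDB, typical of a light source:
print(round(delta_per_mil(0.010509, R_VPDB), 1))  # -> -60.0
```

Since source categories differ by only a few per mil to a few tens of per mil in δ13C, resolving them against background air is what drives the sub-per-mil precision requirement that motivates the preconcentration system.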
NASA Astrophysics Data System (ADS)
Safronova, M. S.; Safronova, U. I.; Porsev, S. G.; Kozlov, M. G.; Ralchenko, Yu.
2018-01-01
Energy levels, wavelengths, and magnetic-dipole and electric-quadrupole transition rates between the low-lying states are evaluated for W^51+ to W^54+ ions with 3d^n (n = 2 to 5) electronic configurations, using an approach combining configuration interaction with the linearized coupled-cluster single-double method. The QED corrections are directly incorporated into the calculations and their effect is studied in detail. Uncertainties of the calculations are discussed. This study of such highly charged ions with the present method opens the way for future applications, allowing an accurate prediction of properties for a very wide range of highly charged ions aimed at providing precision benchmarks for various applications.
Theoretical precision analysis of RFM localization of satellite remote sensing imagery
NASA Astrophysics Data System (ADS)
Zhang, Jianqing; Xv, Biao
2009-11-01
The traditional method of assessing the precision of the Rational Function Model (RFM) is to use a large number of check points, calculating the mean square error by comparing computed coordinates with known coordinates. This method comes from probability theory: the mean square error is estimated statistically from a large number of samples, and the estimate can be taken to approach the true value when the sample is large enough. This paper instead approaches the problem from the standpoint of survey adjustment, taking the law of propagation of error as its theoretical basis, and derives the theoretical precision of RFM localization. SPOT5 three-line-array imagery is then used as experimental data, and the results of the traditional method and the method described in this paper are compared. The comparison confirms that the traditional method is feasible, and answers the question of its theoretical precision from the survey adjustment point of view.
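The law of propagation of error invoked above has the generic form Cov(y) = J Cov(x) J^T for y = f(x), with J the Jacobian of f. A minimal sketch (the Jacobian and input variances below are made-up illustrations, not the actual RFM geometry):

```python
import numpy as np

# Law of propagation of error: for y = f(x), Cov(y) ~= J Cov(x) J^T,
# where J is the Jacobian of f evaluated at x.
# The 2x3 Jacobian below is purely illustrative (assumption).
J = np.array([[0.80, 0.10, -0.20],
              [0.05, 1.10,  0.30]])
cov_x = np.diag([0.5**2, 0.5**2, 1.0**2])  # assumed input variances

cov_y = J @ cov_x @ J.T                 # propagated covariance of outputs
sigma_y = np.sqrt(np.diag(cov_y))       # theoretical 1-sigma output precision
```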
Recent progress of laser spectroscopy experiments on antiprotonic helium
NASA Astrophysics Data System (ADS)
Hori, Masaki
2018-03-01
The Atomic Spectroscopy and Collisions Using Slow Antiprotons (ASACUSA) collaboration is currently carrying out laser spectroscopy experiments on antiprotonic helium (p̄He+) atoms at CERN's Antiproton Decelerator facility. Two-photon spectroscopic techniques have been employed to reduce the Doppler width of the measured p̄He+ resonance lines and determine the atomic transition frequencies to a fractional precision of 2.3-5 parts in 10^9. More recently, single-photon spectroscopy of buffer-gas-cooled p̄He+ has reached a similar precision. By comparing the results with three-body quantum electrodynamics calculations, the antiproton-to-electron mass ratio was determined; it agrees with the known proton-to-electron mass ratio to a precision of 8×10^-10. The high-quality antiproton beam provided by the future Extra Low Energy Antiproton Ring (ELENA) facility should enable further improvements in the experimental precision. This article is part of the Theo Murphy meeting issue `Antiproton physics in the ELENA era'.
NASA Astrophysics Data System (ADS)
Komura, Yukihiro; Okabe, Yutaka
2016-04-01
We study the Ising models on the Penrose lattice and the dual Penrose lattice by means of high-precision Monte Carlo simulation. Simulating systems up to a total size of N = 20,633,239 sites, we estimate the critical temperatures on those lattices with high accuracy. For high-speed calculation, we use the generalized single-GPU method for the Swendsen-Wang multi-cluster Monte Carlo algorithm. As a result, we estimate the critical temperature on the Penrose lattice as Tc/J = 2.39781 ± 0.00005 and that of the dual Penrose lattice as Tc*/J = 2.14987 ± 0.00005. Moreover, we confirm the duality relation between the critical temperatures on the dual pair of quasilattices to a high degree of accuracy: sinh(2J/Tc) sinh(2J/Tc*) = 1.00000 ± 0.00004.
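The quoted Kramers-Wannier-type duality relation can be verified directly from the two critical temperature estimates (a quick numerical sanity check, not part of the original analysis):

```python
import math

# Duality check for the dual pair of quasilattices (J = 1):
# sinh(2J/Tc) * sinh(2J/Tc*) should equal 1 within the quoted error bars.
Tc = 2.39781        # Penrose lattice critical temperature
Tc_dual = 2.14987   # dual Penrose lattice critical temperature

duality_product = math.sinh(2.0 / Tc) * math.sinh(2.0 / Tc_dual)
```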
NASA Astrophysics Data System (ADS)
Zou, Shuzhen; Chen, Han; Yu, Haijuan; Sun, Jing; Zhao, Pengfei; Lin, Xuechun
2017-12-01
We demonstrate a new method for fabricating a (6 + 1) × 1 pump-signal combiner based on reducing the signal fiber diameter by corrosion. This method avoids the mismatch loss at the splice between the signal fiber and the output fiber that is caused by signal fiber taper processing. The optimum radius of the corroded signal fiber was calculated from an analysis of the influence of the cladding thickness on the laser light propagating in the fiber core. In addition, we developed a two-step splicing method to achieve high-precision alignment between the signal fiber core and the output fiber core. A high-efficiency (6 + 1) × 1 pump-signal combiner was produced with an average pump power transmission efficiency of 98.0% and a signal power transmission efficiency of 97.7%, making it well suited for high-power fiber laser systems.
Penning trap mass spectrometry Q-value determinations for highly forbidden β-decays
NASA Astrophysics Data System (ADS)
Sandler, Rachel; Bollen, Georg; Eibach, Martin; Gamage, Nadeesha; Gulyuz, Kerim; Hamaker, Alec; Izzo, Chris; Kandegedara, Rathnayake; Redshaw, Matt; Ringle, Ryan; Valverde, Adrian; Yandow, Isaac; Low Energy Beam Ion Trap Team
2017-09-01
Over the last several decades, extremely sensitive, ultra-low-background beta and gamma detection techniques have been developed. These techniques have enabled the observation of very rare processes, such as the highly forbidden beta decays of 113Cd, 50V, and 138La. Half-life measurements of highly forbidden beta decays provide a testing ground for theoretical nuclear models, and the comparison of calculated and measured energy spectra could enable a determination of the values of the weak coupling constants. Precision Q-value measurements also allow for systematic tests of the beta-particle detection techniques. We will present the results and current status of Q-value determinations for highly forbidden beta decays. The Q values, i.e., the mass differences between parent and daughter nuclides, are measured using the high-precision Penning trap mass spectrometer LEBIT at the National Superconducting Cyclotron Laboratory.
Size Reduction of Hamiltonian Matrix for Large-Scale Energy Band Calculations Using Plane Wave Bases
NASA Astrophysics Data System (ADS)
Morifuji, Masato
2018-01-01
We present a method of reducing the size of the Hamiltonian matrix used in calculations of electronic states. In electronic structure calculations using plane wave basis functions, a large number of plane waves are often required to obtain precise results, and even with state-of-the-art techniques the Hamiltonian matrix often becomes very large. The computational time and memory necessary for diagonalization limit the widespread use of band calculations. We show a procedure for deriving a reduced Hamiltonian constructed from a small number of low-energy bases by renormalizing the high-energy bases. We demonstrate numerically that a significant speedup of eigenstate evaluation is achieved without losing accuracy.
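Renormalizing high-energy bases into a smaller low-energy block is the same idea as Löwdin partitioning (downfolding); the paper's actual reduction scheme may differ. A sketch with an illustrative random Hamiltonian, showing that every exact eigenvalue E of the full matrix is reproduced by the energy-dependent effective block H_eff(E) = H_LL + H_LH (E I - H_HH)^(-1) H_HL:

```python
import numpy as np

# Löwdin-style downfolding sketch (illustrative matrices, not the paper's
# Hamiltonian): fold the high-energy block into a small low-energy block.
rng = np.random.default_rng(0)
n, n_low = 6, 2
A = rng.standard_normal((n, n))
H = (A + A.T) / 2 + np.diag([0, 0, 5, 6, 7, 8])  # low/high energy separation

Hll, Hlh = H[:n_low, :n_low], H[:n_low, n_low:]
Hhl, Hhh = H[n_low:, :n_low], H[n_low:, n_low:]

def h_eff(E):
    """Energy-dependent effective Hamiltonian in the low-energy subspace."""
    return Hll + Hlh @ np.linalg.solve(E * np.eye(n - n_low) - Hhh, Hhl)

# Self-consistency: if H v = E0 v, then E0 is an eigenvalue of h_eff(E0).
E0 = np.linalg.eigvalsh(H)[0]
resid = np.abs(np.linalg.eigvals(h_eff(E0)) - E0).min()
```

In practice the energy dependence is handled iteratively or linearized around a reference energy; the point of the sketch is only that the reduced block loses no accuracy for the targeted low-energy states.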
Precision determination of weak charge of {sup 133}Cs from atomic parity violation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Porsev, S. G.; School of Physics, University of New South Wales, Sydney, New South Wales 2052; Petersburg Nuclear Physics Institute, Gatchina, Leningrad District 188300
2010-08-01
We discuss results of the most accurate test to date of the low-energy electroweak sector of the standard model of elementary particles. Combining previous measurements with our high-precision calculations, we extracted the weak charge of the 133Cs nucleus, Q_W = -73.16(29)_exp(20)_th [S. G. Porsev, K. Beloy, and A. Derevianko, Phys. Rev. Lett. 102, 181601 (2009)]. The result is in perfect agreement with the standard model prediction, Q_W^SM = -73.16(3); it confirms the energy dependence (or running) of the electroweak interaction and places constraints on a variety of new-physics scenarios beyond the standard model. In particular, we increase the lower limit on the masses of extra Z bosons predicted by models of grand unification and string theories. This paper provides additional details to the earlier paper. We discuss large-scale calculations in the framework of the coupled-cluster method, including full treatment of single, double, and valence triple excitations. To determine the accuracy of the calculations we computed energies, electric-dipole amplitudes, and hyperfine-structure constants. An extensive comparison with high-accuracy experimental data was carried out.
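The quoted agreement can be checked by combining the experimental, theoretical, and standard-model uncertainties in quadrature (a quick consistency check on the numbers above, not part of the original analysis):

```python
import math

# Significance of the APV-vs-SM comparison for the 133Cs weak charge.
qw_apv, err_exp, err_th = -73.16, 0.29, 0.20   # extracted value
qw_sm, err_sm = -73.16, 0.03                   # standard model prediction

err_total = math.sqrt(err_exp**2 + err_th**2 + err_sm**2)
n_sigma = abs(qw_apv - qw_sm) / err_total      # deviation in standard deviations
```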
Fragment approach to constrained density functional theory calculations using Daubechies wavelets
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ratcliff, Laura E.; Genovese, Luigi; Mohr, Stephan
2015-06-21
In a recent paper, we presented a linear scaling Kohn-Sham density functional theory (DFT) code based on Daubechies wavelets, where a minimal set of localized support functions are optimized in situ and therefore adapted to the chemical properties of the molecular system. Thanks to the systematically controllable accuracy of the underlying basis set, this approach is able to provide an optimal contracted basis for a given system: accuracies for ground state energies and atomic forces are of the same quality as an uncontracted, cubic scaling approach. This basis set offers, by construction, a natural subset where the density matrix of the system can be projected. In this paper, we demonstrate the flexibility of this minimal basis formalism in providing a basis set that can be reused as-is, i.e., without reoptimization, for charge-constrained DFT calculations within a fragment approach. Support functions, represented in the underlying wavelet grid, of the template fragments are roto-translated with high numerical precision to the required positions and used as projectors for the charge weight function. We demonstrate the interest of this approach to express highly precise and efficient calculations for preparing diabatic states and for the computational setup of systems in complex environments.
Last Glacial Maximum Salinity Reconstruction
NASA Astrophysics Data System (ADS)
Homola, K.; Spivack, A. J.
2016-12-01
It has been previously demonstrated that salinity can be reconstructed from sediment porewater. The goal of our study is to reconstruct high-precision salinity during the Last Glacial Maximum (LGM). Salinity is usually determined at high precision via conductivity, which requires a larger volume of water than can be extracted from a sediment core, or via chloride titration, which yields lower than ideal precision. It has been demonstrated for water column samples that high-precision density measurements can be used to determine salinity at the precision of a conductivity measurement using the equation of state of seawater. However, water column seawater has a relatively constant composition, in contrast to porewater, where variations from standard seawater composition occur. These deviations, which affect the equation of state, must be corrected for through precise measurements of each ion's concentration and knowledge of its apparent partial molar density in seawater. We have developed a density-based method for determining porewater salinity that requires only 5 mL of sample, achieving density precisions of 10^-6 g/mL. We have applied this method to porewater samples extracted from long cores collected along a N-S transect across the western North Atlantic (R/V Knorr cruise KN223). Density was determined to a precision of 2.3×10^-6 g/mL, which translates to a salinity uncertainty of 0.002 g/kg if the effect of differences in composition is well constrained. Concentrations of anions (Cl- and SO4-2) and cations (Na+, Mg+2, Ca+2, and K+) were measured. To correct salinities at the precision required to unravel LGM Meridional Overturning Circulation, our ion precisions must be better than 0.1% for SO4-2/Cl- and Mg+2/Na+, and 0.4% for Ca+2/Na+ and K+/Na+. Alkalinity, pH and dissolved inorganic carbon of the porewater were determined to precisions better than 4% when ratioed to Cl-, and used to calculate HCO3- and CO3-2.
Apparent partial molar densities in seawater were determined experimentally. We compare the high precision salinity profiles determined using our new method to profiles determined from the traditional chloride titrations of parallel samples. Our technique provides a more accurate reconstruction of past salinity, informing questions of water mass composition and distribution during the LGM.
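The link between density precision and salinity precision can be sketched by simple error propagation. The haline sensitivity used below is an assumed round number near typical seawater values, not the study's full equation-of-state treatment, so the result only shows that the quoted density precision supports a milligram-per-kilogram-level salinity uncertainty:

```python
# Back-of-envelope propagation: sigma_S ~= sigma_rho / (d rho / d S).
drho_dS = 7.5e-4      # g/mL per (g/kg); assumed typical seawater sensitivity
sigma_rho = 2.3e-6    # g/mL; achieved density precision from the abstract

sigma_S = sigma_rho / drho_dS   # resulting salinity uncertainty in g/kg
```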
Development of a Method to Assess the Precision of the z-axis X-ray Beam Collimation in a CT Scanner
NASA Astrophysics Data System (ADS)
Kim, Yon-Min
2018-05-01
X-ray equipment generally specifies collimator accuracy measurement as a quality control item, but computed tomography (CT) scanners, despite their high dose, have no corresponding collimator accuracy item. If the radiation dose is to be reduced, an important step is to check whether the beam is precisely collimated to the body part being scanned. However, few methods are available to assess how precisely the X-ray beam is collimated. In this regard, this paper provides a way to assess the precision of z-axis X-ray beam collimation in a CT scanner. After an image plate cassette was exposed to the X-ray beam, the exposed width was detected automatically by a computer program developed by the research team, which calculated the difference between the exposed width and the imaged width (at isocenter). The results showed that with a narrow beam of 1.25 mm imaged width, the exposed width was 3.8 mm, an overexposure as high as 304%. In this study, the precision of the beam collimation of a CT scanner in routine medical use was measured in a convenient way using the image plate (IP) cassette.
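One plausible reading of the 304% figure is the ratio of the measured exposed width to the nominal imaged width at isocenter, which the quoted numbers reproduce exactly:

```python
# Overexposure as the ratio of exposed to nominal imaged width (assumption:
# the abstract's percentage is this ratio, not the excess over 100%).
imaged_width_mm = 1.25   # nominal imaged width at isocenter
exposed_width_mm = 3.8   # width measured on the image plate

overexposure_pct = exposed_width_mm / imaged_width_mm * 100.0
```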
An Improved Interferometric Calibration Method Based on Independent Parameter Decomposition
NASA Astrophysics Data System (ADS)
Fan, J.; Zuo, X.; Li, T.; Chen, Q.; Geng, X.
2018-04-01
Interferometric SAR is sensitive to earth surface undulation. The accuracy of the interferometric parameters plays a significant role in producing a precise digital elevation model (DEM). Interferometric calibration aims to obtain a high-precision global DEM by solving for the interferometric parameters using ground control points (GCPs). However, the interferometric parameters are usually estimated jointly, making them difficult to decompose precisely. In this paper, we propose an interferometric calibration method based on independent parameter decomposition (IPD). Firstly, the parameters related to the interferometric SAR measurement are determined based on the three-dimensional reconstruction model. Secondly, the sensitivity of the interferometric parameters is quantitatively analyzed after the geometric parameters are completely decomposed. Finally, each interferometric parameter is calculated based on IPD and an interferometric calibration model is established. We take Weinan in Shaanxi province as an example and choose 4 TerraDEM-X image pairs to carry out an interferometric calibration experiment. The results show that the elevation accuracy of all SAR images is better than 2.54 m after interferometric calibration. Furthermore, the proposed method can obtain DEM products with accuracies better than 2.43 m in flat areas and 6.97 m in mountainous areas, which demonstrates the correctness and effectiveness of the proposed IPD-based interferometric calibration method. The results provide a technical basis for topographic mapping at 1:50,000 and larger scales in flat and mountainous areas.
Accurate and precise determination of isotopic ratios by MC-ICP-MS: a review.
Yang, Lu
2009-01-01
For many decades the accurate and precise determination of isotope ratios has remained of strong interest to many researchers due to its important applications in earth, environmental, biological, archeological, and medical sciences. Traditionally, thermal ionization mass spectrometry (TIMS) has been the technique of choice for achieving the highest accuracy and precision. However, recent developments in multi-collector inductively coupled plasma mass spectrometry (MC-ICP-MS) have brought a new dimension to this field. In addition to its simple and robust sample introduction, high sample throughput, and high mass resolution, the flat-topped peaks generated by this technique provide for accurate and precise determination of isotope ratios with precision reaching 0.001%, comparable to that achieved with TIMS. These features, in combination with the ability of the ICP source to ionize nearly all elements in the periodic table, have resulted in an increased use of MC-ICP-MS for such measurements in various sample matrices. To determine accurate and precise isotope ratios with MC-ICP-MS, utmost care must be exercised during sample preparation, optimization of the instrument, and mass bias corrections. Unfortunately, there are inconsistencies and errors evident in many MC-ICP-MS publications, including errors in mass bias correction models. This review examines state-of-the-art methodologies presented in the literature for achieving precise and accurate determinations of isotope ratios by MC-ICP-MS. Some general rules for such accurate and precise measurements are suggested, and calculations of the combined uncertainty of the data using a few common mass bias correction models are outlined.
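One of the common mass bias correction models mentioned above is the exponential law. A minimal sketch (the isotope masses are nominal values and the measured ratio and fractionation factor are made-up illustrations, not data from the review):

```python
# Exponential-law mass bias correction, widely used in MC-ICP-MS work:
# R_true = R_measured * (m_numerator / m_denominator) ** f,
# where f is the mass fractionation factor (often derived from an internal
# standard). All numeric values below are illustrative assumptions.
def correct_ratio(r_measured: float, m_num: float, m_den: float, f: float) -> float:
    """Apply the exponential mass bias law to a measured isotope ratio."""
    return r_measured * (m_num / m_den) ** f

# Example: correct a measured 66Zn/64Zn ratio with an assumed f = -1.8.
r_true = correct_ratio(0.2850, 65.9260, 63.9291, -1.8)
```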
Quadratic electroweak corrections for polarized Møller scattering
DOE Office of Scientific and Technical Information (OSTI.GOV)
A. Aleksejevs, S. Barkanova, Y. Kolomensky, E. Kuraev, V. Zykunov
2012-01-01
The paper discusses the two-loop (NNLO) electroweak radiative corrections to the parity violating electron-electron scattering asymmetry induced by squaring one-loop diagrams. The calculations are relevant for the ultra-precise 11 GeV MOLLER experiment planned at Jefferson Laboratory and experiments at high-energy future electron colliders. The imaginary parts of the amplitudes are taken into consideration consistently in both the infrared-finite and divergent terms. The size of the obtained partial correction is significant, which indicates a need for a complete study of the two-loop electroweak radiative corrections in order to meet the precision goals of future experiments.
DOE Office of Scientific and Technical Information (OSTI.GOV)
McKenzie, IV, George Espy; Goda, Joetta Marie; Grove, Travis Justin
This paper examines the MCNP® code's capability to calculate kinetics parameters effectively for a thermal system containing highly enriched uranium (HEU). The Rossi-α parameter was chosen for this examination because it is relatively easy to measure as well as easy to calculate using MCNP®'s kopts card. The Rossi-α also incorporates many other parameters of interest in nuclear kinetics, most of which are more difficult to measure precisely. The comparison considers two nuclear data libraries against the experimental data: ENDF/B-VI (.66c) and ENDF/B-VII (.80c).
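In point kinetics, the Rossi-α at delayed critical reduces to the prompt-neutron decay constant -β_eff/Λ, which is why it bundles several kinetics parameters into one measurable quantity. A sketch with illustrative values (not the benchmark's actual parameters):

```python
# Point-kinetics relation for the Rossi-alpha at delayed critical:
# alpha = -beta_eff / Lambda. The values below are illustrative for a
# thermal system (assumption), not the paper's measured benchmark.
beta_eff = 0.0072   # effective delayed neutron fraction
Lambda = 4.0e-5     # neutron generation time, s

alpha_delayed_critical = -beta_eff / Lambda   # prompt decay constant, 1/s
```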
NASA Astrophysics Data System (ADS)
Zhou, Yu; Wang, Tianyi; Dai, Bing; Li, Wenjun; Wang, Wei; You, Chengwu; Wang, Kejia; Liu, Jinsong; Wang, Shenglie; Yang, Zhengang
2018-02-01
Inspired by the extensive application of terahertz (THz) imaging technologies in the field of aerospace, we exploit a THz frequency modulated continuous-wave imaging method with a continuous wavelet transform (CWT) algorithm to detect a multilayer heat shield made of special materials. This method uses the frequency modulated continuous-wave system to capture the reflected THz signal and then processes the image data with the CWT using different basis functions. By calculating the sizes of the defect areas in the final images and comparing the results with the real samples, a practical high-precision THz imaging method is demonstrated. Our method can be an effective tool for THz nondestructive testing of composites, drugs, and cultural heritage objects.
Extraction of the neutron magnetic form factor from quasielastic 3He→(e→,e') at Q2=0.1-0.6(GeV/c)2
NASA Astrophysics Data System (ADS)
Anderson, B.; Auberbach, L.; Averett, T.; Bertozzi, W.; Black, T.; Calarco, J.; Cardman, L.; Cates, G. D.; Chai, Z. W.; Chen, J. P.; Choi, Seonho; Chudakov, E.; Churchwell, S.; Corrado, G. S.; Crawford, C.; Dale, D.; Deur, A.; Djawotho, P.; Dutta, D.; Finn, J. M.; Gao, H.; Gilman, R.; Glamazdin, A. V.; Glashausser, C.; Glöckle, W.; Golak, J.; Gomez, J.; Gorbenko, V. G.; Hansen, J.-O.; Hersman, F. W.; Higinbotham, D. W.; Holmes, R.; Howell, C. R.; Hughes, E.; Humensky, B.; Incerti, S.; Jager, C. W. De; Jensen, J. S.; Jiang, X.; Jones, C. E.; Jones, M.; Kahl, R.; Kamada, H.; Kievsky, A.; Kominis, I.; Korsch, W.; Kramer, K.; Kumbartzki, G.; Kuss, M.; Lakuriqi, E.; Liang, M.; Liyanage, N.; Lerose, J.; Malov, S.; Margaziotis, D. J.; Martin, J. W.; McCormick, K.; McKeown, R. D.; McIlhany, K.; Meziani, Z.-E.; Michaels, R.; Miller, G. W.; Mitchell, J.; Nanda, S.; Pace, E.; Pavlin, T.; Petratos, G. G.; Pomatsalyuk, R. I.; Pripstein, D.; Prout, D.; Ransome, R. D.; Roblin, Y.; Rvachev, M.; Saha, A.; Salmè, G.; Schnee, M.; Seely, J.; Shin, T.; Slifer, K.; Souder, P. A.; Strauch, S.; Suleiman, R.; Sutter, M.; Tipton, B.; Todor, L.; Viviani, M.; Vlahovic, B.; Watson, J.; Williamson, C. F.; Witała, H.; Wojtsekhowski, B.; Xiong, F.; Xu, W.; Yeh, J.; Żołnierczuk, P.
2007-03-01
We have measured the transverse asymmetry AT' in the quasielastic 3He→(e→,e') process with high precision at Q2 values from 0.1 to 0.6(GeV/c)2. The neutron magnetic form factor GMn was extracted at Q2 values of 0.1 and 0.2(GeV/c)2 using a nonrelativistic Faddeev calculation which includes both final-state interactions (FSI) and meson-exchange currents (MEC). Theoretical uncertainties due to the FSI and MEC effects were constrained with a precision measurement of the spin-dependent asymmetry in the threshold region of 3He→(e→,e'). We also extracted the neutron magnetic form factor GMn at Q2 values of 0.3 to 0.6(GeV/c)2 based on plane wave impulse approximation calculations.
Potential Application of a Graphical Processing Unit to Parallel Computations in the NUBEAM Code
NASA Astrophysics Data System (ADS)
Payne, J.; McCune, D.; Prater, R.
2010-11-01
NUBEAM is a comprehensive computational Monte Carlo based model for neutral beam injection (NBI) in tokamaks. NUBEAM computes NBI-relevant profiles in tokamak plasmas by tracking the deposition and the slowing of fast ions. At the core of NUBEAM are vector calculations used to track fast ions. These calculations have recently been parallelized to run on MPI clusters. However, cost and interlink bandwidth limit the ability to fully parallelize NUBEAM on an MPI cluster. Recent implementation of double precision capabilities for Graphical Processing Units (GPUs) presents a cost effective and high performance alternative or complement to MPI computation. Commercially available graphics cards can achieve up to 672 GFLOPS double precision and can handle hundreds of thousands of threads. The ability to execute at least one thread per particle simultaneously could significantly reduce the execution time and the statistical noise of NUBEAM. Progress on implementation on a GPU will be presented.
Immobilisation precision in VMAT for oral cancer patients
NASA Astrophysics Data System (ADS)
Norfadilah, M. N.; Ahmad, R.; Heng, S. P.; Lam, K. S.; Radzi, A. B. Ahmad; John, L. S. H.
2017-05-01
A study was conducted to evaluate and quantify the precision of the interfraction setup with different immobilisation devices throughout the treatment course. Local setup accuracy was analysed for 8 oral cancer patients receiving radiotherapy: 4 with a HeadFIX® mouthpiece moulded with wax (HFW) and 4 with a 10 ml (cc) syringe barrel (SYR). Each patient underwent image guided radiotherapy (IGRT), with a total of 209 cone-beam computed tomography (CBCT) data sets used for setup error measurement. The setup variations in the mediolateral (ML), craniocaudal (CC), and anteroposterior (AP) dimensions were measured. The overall mean displacement (M), the population systematic (Σ) and random (σ) errors, and the 3D vector length were calculated. Clinical target volume to planning target volume (CTV-PTV) margins were calculated according to the van Herk formula (2.5Σ + 0.7σ). The M values for both groups were < 1 mm and < 1° in all translational and rotational directions, indicating no significant imprecision in the equipment (lasers) or the procedure. The interfraction translational 3D vectors for HFW and SYR were 1.93±0.66 mm and 3.84±1.34 mm, respectively; the average interfraction rotational errors were 0.00°±0.65° and 0.34°±0.59°, respectively. CTV-PTV margins along the three translational axes (right-left, superior-inferior, anterior-posterior) were 3.08, 2.22 and 0.81 mm for HFW and 3.76, 6.24 and 5.06 mm for SYR. These results demonstrate that HFW reproduces patient position more precisely than the conventionally used SYR (p<0.001). All calculated margins were within the hospital protocol (5 mm) except the S-I and A-P axes using the syringe. For this reason, daily IGRT is highly recommended to improve immobilisation precision.
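The van Herk margin recipe used in the study is a simple linear combination of the population systematic and random errors per axis. A sketch with illustrative inputs (the Σ and σ values below are not the study's measured errors):

```python
# van Herk CTV-to-PTV margin: margin = 2.5 * Sigma + 0.7 * sigma, applied
# per axis, where Sigma is the population systematic error and sigma the
# random error (both in mm). Inputs below are illustrative assumptions.
def van_herk_margin(systematic_mm: float, random_mm: float) -> float:
    """CTV-PTV margin in mm from population systematic and random errors."""
    return 2.5 * systematic_mm + 0.7 * random_mm

# e.g. Sigma = 1.0 mm and sigma = 1.5 mm give a 3.55 mm margin:
margin = van_herk_margin(1.0, 1.5)
```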
Arithmetic Abilities in Children with Developmental Dyslexia: Performance on French ZAREKI-R Test
ERIC Educational Resources Information Center
De Clercq-Quaegebeur, Maryse; Casalis, Séverine; Vilette, Bruno; Lemaitre, Marie-Pierre; Vallée, Louis
2018-01-01
A high comorbidity between reading and arithmetic disabilities has already been reported. The present study aims at identifying more precisely patterns of arithmetic performance in children with developmental dyslexia, defined with severe and specific criteria. By means of a standardized test of achievement in mathematics ("Calculation and…
NASA Astrophysics Data System (ADS)
Ford, Eric B.
2009-05-01
We present the results of a highly parallel Kepler equation solver using the Graphics Processing Unit (GPU) on a commercial nVidia GeForce GTX 280 and the "Compute Unified Device Architecture" (CUDA) programming environment. We apply this to evaluate a goodness-of-fit statistic (e.g., χ2) for Doppler observations of stars potentially harboring multiple planetary companions (assuming negligible planet-planet interactions). Given the high dimensionality of the model parameter space (at least five dimensions per planet), a global search is extremely computationally demanding. We expect that the underlying Kepler solver and model evaluator will be combined with a wide variety of more sophisticated algorithms to provide efficient global search, parameter estimation, model comparison, and adaptive experimental design for radial velocity and/or astrometric planet searches. We tested multiple implementations using single precision, double precision, pairs of single precision, and mixed precision arithmetic. We find that the vast majority of computations can be performed using single precision arithmetic, with selective use of compensated summation for increased precision. However, standard single precision is not adequate for calculating the mean anomaly from the time of observation and orbital period when evaluating the goodness-of-fit for real planetary systems and observational data sets. Using all double precision, our GPU code outperforms a similar code using a modern CPU by a factor of over 60. Using mixed precision, our GPU code provides a speed-up factor of over 600 when evaluating nsys > 1024 model planetary systems, each containing npl = 4 planets and assuming nobs = 256 observations of each system. We conclude that modern GPUs also offer a powerful tool for repeatedly evaluating Kepler's equation and a goodness-of-fit statistic for orbital models when presented with a large parameter space.
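The kernel being parallelized is the solution of Kepler's equation M = E - e sin E for the eccentric anomaly. A scalar Newton-iteration sketch of the numerics (this is only an illustration of the equation being solved, not the paper's CUDA implementation):

```python
import math

# Newton's method for Kepler's equation M = E - e*sin(E).
def solve_kepler(mean_anomaly: float, ecc: float, tol: float = 1e-12) -> float:
    """Return the eccentric anomaly E solving M = E - e*sin(E)."""
    E = mean_anomaly if ecc < 0.8 else math.pi  # common starting guess
    for _ in range(50):
        # Newton step: dE = f(E) / f'(E), with f(E) = E - e*sin(E) - M.
        dE = (E - ecc * math.sin(E) - mean_anomaly) / (1.0 - ecc * math.cos(E))
        E -= dE
        if abs(dE) < tol:
            break
    return E

E = solve_kepler(1.0, 0.3)
```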
Fundamental limits of scintillation detector timing precision
NASA Astrophysics Data System (ADS)
Derenzo, Stephen E.; Choong, Woon-Seng; Moses, William W.
2014-07-01
In this paper we review the primary factors that affect the timing precision of a scintillation detector. Monte Carlo calculations were performed to explore the dependence of the timing precision on the number of photoelectrons, the scintillator decay and rise times, the depth of interaction uncertainty, the time dispersion of the optical photons (modeled as an exponential decay), the photodetector rise time and transit time jitter, the leading-edge trigger level, and electronic noise. The Monte Carlo code was used to estimate the practical limits on the timing precision for an energy deposition of 511 keV in 3 mm × 3 mm × 30 mm Lu2SiO5:Ce and LaBr3:Ce crystals. The calculated timing precisions are consistent with the best experimental literature values. We then calculated the timing precision for 820 cases that sampled scintillator rise times from 0 to 1.0 ns, photon dispersion times from 0 to 0.2 ns, photodetector time jitters from 0 to 0.5 ns fwhm, and A from 10 to 10,000 photoelectrons per ns decay time. Since the timing precision R was found to depend on A^-1/2 more than any other factor, we tabulated the parameter B, where R = B·A^-1/2. An empirical analytical formula was found that fit the tabulated values of B with an rms deviation of 2.2% of the value of B. The theoretical lower bound of the timing precision was calculated for the example of 0.5 ns rise time, 0.1 ns photon dispersion, and 0.2 ns fwhm photodetector time jitter. The lower bound was at most 15% lower than leading-edge timing discrimination for A from 10 to 10,000 photoelectrons per ns. A timing precision of 8 ps fwhm should be possible for an energy deposition of 511 keV using currently available photodetectors if a theoretically possible scintillator were developed that could produce 10,000 photoelectrons per ns.
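The tabulated scaling R = B·A^(-1/2) makes the trade-off explicit: quadrupling the photoelectron rate halves the predicted timing spread. A sketch using a made-up coefficient B (the paper tabulates B per detector configuration; the value below is not one of its entries):

```python
# Timing precision scaling from the paper's parameterization: R = B / sqrt(A),
# with A the photoelectron rate (photoelectrons per ns of decay time).
# B_ps below is an illustrative assumption, not a tabulated value.
def timing_fwhm_ps(B_ps: float, A_pe_per_ns: float) -> float:
    """Predicted timing precision R = B * A**(-1/2), in ps fwhm."""
    return B_ps * A_pe_per_ns ** -0.5

r1 = timing_fwhm_ps(800.0, 100.0)   # baseline rate
r2 = timing_fwhm_ps(800.0, 400.0)   # 4x the rate: half the spread
```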
NASA Astrophysics Data System (ADS)
Artem'ev, V. A.; Nezvanov, A. Yu.; Nesvizhevsky, V. V.
2016-01-01
We discuss properties of the interaction of slow neutrons with nano-dispersed media and their application to neutron reflectors. In order to increase the accuracy of model simulations of the interaction of neutrons with nanopowders, we perform a precise quantum mechanical calculation of the potential scattering of neutrons on single nanoparticles using the method of phase functions. We compare the results of the precise calculations with those performed within the first Born approximation for nanodiamonds with radii of 2-5 nm and neutron energies of 3×10^-7 to 10^-3 eV. The Born approximation overestimates the probability of scattering to large angles, while the accuracy of the evaluation of integral characteristics (cross sections, albedo) is acceptable. Using the Monte Carlo method, we calculate the albedo of neutrons from different layers of piled-up diamond nanopowder.
Determining Energy Expenditure during Some Household and Garden Tasks.
ERIC Educational Resources Information Center
Gunn, Simon M.; Brooks, Anthony G.; Withers, Robert T.; Gore, Christopher J.; Owen, Neville; Booth, Michael L.; Bauman, Adrian E.
2002-01-01
Calculated the reproducibility and precision of VO2 during moderate-paced walking and four housework and gardening activities, examining which activities rated at least 3.0 when exercise intensity was calculated in METs and in multiples of measured resting metabolic rate (MRM). VO2 was measured with good reproducibility and precision. Expressing energy expenditure in…
NASA Astrophysics Data System (ADS)
Jiang, Shanchao; Wang, Jing; Sui, Qingmei
2018-03-01
To achieve rotation angle measurement, a novel miniaturized fiber Bragg grating (FBG) rotation angle sensor with high measurement precision and temperature self-compensation is proposed and studied in this paper. The FBG rotation angle sensor mainly consists of two core sensing elements (FBG1 and FBG2), a triangular cantilever beam, and a rotation angle transfer element. In theory, the proposed sensor achieves temperature self-compensation through the complementary responses of the two core sensing elements (FBG1 and FBG2), and it has a boundless angle measurement range with a 2π rad period due to the function of the rotation angle transfer element. After introducing the joint working processes, the theoretical calculation model of the FBG rotation angle sensor is established, and a calibration experiment on a prototype is carried out to obtain its measurement performance. The experimental data analysis shows that the measurement precision of the FBG rotation angle sensor prototype is 0.2° with excellent linearity, and the temperature sensitivities of FBG1 and FBG2 are 10 pm/°C and 10.1 pm/°C, respectively. These experimental results confirm that the FBG rotation angle sensor can achieve large-range angle measurement with high precision and temperature self-compensation.
Bischoff, James L.; Williams, Ross W.; Rosenbauer, Robert J.; Aramburu, Arantza; Arsuaga, Juan Luis; Garcia, Nuria; Cuenca-Bescos, Gloria
2007-01-01
The Sima de los Huesos site of the Atapuerca complex near Burgos, Spain, contains the skeletal remains of at least 28 individuals in a mud breccia underlying an accumulation of the Middle Pleistocene cave bear (Ursus deningeri). We report here new high-precision dates on the recently discovered speleothem SRA-3, which overlies human bones within the Sima de los Huesos. Earlier analyses of this speleothem by TIMS (thermal-ionization mass spectrometry) showed the lower part to be indistinguishable from internal isotopic equilibrium at the precision of the TIMS instrumentation used, yielding a minimum age of 350 kyr (kyr = 10^3 yr before present). Reanalysis of six samples of SRA-3 by inductively coupled plasma multicollector mass spectrometry (ICP-MS) produced high-precision analytical results allowing calculation of finite dates. The new dates cluster around 600 kyr. A conservative conclusion takes the lower error limits of the ages as the minimum age of the speleothem, or 530 kyr. This places the SH hominids at the very beginning of the Neandertal evolutionary lineage.
Submillisecond fireball timing using de Bruijn timecodes
NASA Astrophysics Data System (ADS)
Howie, Robert M.; Paxman, Jonathan; Bland, Philip A.; Towner, Martin C.; Sansom, Eleanor K.; Devillepoix, Hadrien A. R.
2017-08-01
Long-exposure fireball photographs have been used to systematically record meteoroid trajectories, calculate heliocentric orbits, and determine meteorite fall positions since the mid-20th century. Periodic shuttering is used to determine meteoroid velocity, but up until this point, a separate method of precisely determining the arrival time of a meteoroid was required. We show it is possible to encode precise arrival times directly into the meteor image by driving the periodic shutter according to a particular pattern—a de Bruijn sequence—and eliminate the need for a separate subsystem to record absolute fireball timing. The Desert Fireball Network has implemented this approach using a microcontroller driven electro-optic shutter synchronized with GNSS UTC time to create small, simple, and cost-effective high-precision fireball observatories with submillisecond timing accuracy.
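The shutter pattern described above can be generated with the standard recursive (Lyndon-word) construction of a de Bruijn sequence, in which every subsequence of length n occurs exactly once per cycle, so any n consecutive shutter states identify an absolute position in the timecode. The alphabet size and order below are illustrative and much smaller than a fireball observatory would use.

```python
def de_bruijn(k, n):
    """Cyclic de Bruijn sequence B(k, n) over the alphabet 0..k-1:
    every length-n word appears exactly once per cycle."""
    a = [0] * (k * n)
    seq = []

    def db(t, p):
        if t > n:
            if n % p == 0:
                seq.extend(a[1:p + 1])
        else:
            a[t] = a[t - p]
            db(t + 1, p)
            for j in range(a[t - p] + 1, k):
                a[t] = j
                db(t + 1, t)

    db(1, 1)
    return seq

# Binary sequence of order 4: 16 shutter states, all 4-windows unique.
s = de_bruijn(2, 4)
```

Reading any n consecutive shutter states off the meteor trail therefore yields the absolute position within the timecode cycle, which is what lets a single image encode arrival time.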
The structure of the proton in the LHC precision era
NASA Astrophysics Data System (ADS)
Gao, Jun; Harland-Lang, Lucian; Rojo, Juan
2018-05-01
We review recent progress in the determination of the parton distribution functions (PDFs) of the proton, with emphasis on the applications for precision phenomenology at the Large Hadron Collider (LHC). First of all, we introduce the general theoretical framework underlying the global QCD analysis of the quark and gluon internal structure of protons. We then present a detailed overview of the hard-scattering measurements, and the corresponding theory predictions, that are used in state-of-the-art PDF fits. We emphasize here the role that higher-order QCD and electroweak corrections play in the description of recent high-precision collider data. We present the methodology used to extract PDFs in global analyses, including the PDF parametrization strategy and the definition and propagation of PDF uncertainties. Then we review and compare the most recent releases from the various PDF fitting collaborations, highlighting their differences and similarities. We discuss the role that QED corrections and photon-initiated contributions play in modern PDF analysis. We provide representative examples of the implications of PDF fits for high-precision LHC phenomenological applications, such as Higgs coupling measurements and searches for high-mass New Physics resonances. We conclude this report by discussing some selected topics relevant for the future of PDF determinations, including the treatment of theoretical uncertainties, the connection with lattice QCD calculations, and the role of PDFs at future high-energy colliders beyond the LHC.
Algorithm of dynamic regulation of a system of duct, for a high accuracy climatic system
NASA Astrophysics Data System (ADS)
Arbatskiy, A. A.; Afonina, G. N.; Glazov, V. S.
2017-11-01
Currently, most climatic systems are designed to operate at a fixed design point only. At the same time, many modern industrial sites require constant or periodic changes in the technological process: up to 80% of the time, an industrial site does not need the ventilation system to run at its design point, yet high precision of the climatic parameters must still be maintained. When a climatic system serving several rooms in parallel is not in constant use, balancing the duct system becomes a problem. To address this, an algorithm for quantity (flow-rate) regulation requiring minimal changes to the system was created. Dynamic duct system: a parallel control system for the air balance was developed that maintains high precision of the climatic parameters. The algorithm keeps a constant pressure in the main duct under varying air flows, so each terminal device is left with only one regulation parameter: the open area of its flap. The precision of regulation increases, and the climatic system maintains temperature and humidity tightly (0.5 °C for temperature, 5% for relative humidity). Result: the research was carried out in the CFD code PHOENICS. Air velocities and pressures in the duct were obtained for different operating modes, equations for the air valve positions as functions of the required room climate parameters were derived, and the energy saving potential of the dynamic duct system was calculated for different types of rooms.
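The "single regulation parameter" idea can be illustrated with the standard orifice relation Q = Cd·A·sqrt(2ΔP/ρ): once the algorithm holds the main-duct pressure constant, the flap open area a terminal device needs is directly proportional to the demanded flow. The discharge coefficient and air density below are generic textbook assumptions, not values from this study.

```python
import math

def flap_area(Q, dP, Cd=0.65, rho=1.2):
    """Flap open area (m^2) needed to pass volume flow Q (m^3/s) across a
    constant pressure drop dP (Pa), from the orifice equation
    Q = Cd * A * sqrt(2 * dP / rho). Cd and rho are assumed values."""
    return Q / (Cd * math.sqrt(2.0 * dP / rho))
```

With dP fixed by the pressure controller, doubling the demanded flow simply doubles the required open area, which is what makes single-parameter regulation at each terminal device possible.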
Isaksen, Geir Villy; Andberg, Tor Arne Heim; Åqvist, Johan; Brandsdal, Bjørn Olav
2015-07-01
Structural information and activity data have increased rapidly for many protein targets during the last decades. In this paper, we present a high-throughput interface (Qgui) for automated free energy and empirical valence bond (EVB) calculations that use molecular dynamics (MD) simulations for conformational sampling. Applications to ligand binding using both the linear interaction energy (LIE) method and the free energy perturbation (FEP) technique are given using the estrogen receptor (ERα) as a model system. Examples of free energy profiles obtained using the EVB method for the rate-limiting step of the enzymatic reaction catalyzed by trypsin are also shown. In addition, we present calculation of high-precision Arrhenius plots to obtain the thermodynamic activation enthalpy and entropy with Qgui from running a large number of EVB simulations. Copyright © 2015 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Jansen, Paul; Semeria, Luca; Scheidegger, Simon; Merkt, Frederic
2015-06-01
Having only three electrons, He_2^+ represents a system for which highly accurate ab initio calculations are possible. The latest calculations of rovibrational energies in He_2^+ do not include relativistic or QED corrections but claim an accuracy of about 120 MHz. The available experimental data on He_2^+, though accurate to 300 MHz, are not precise enough to rigorously test these calculations or reveal the magnitude of the relativistic and QED corrections. We have performed high-resolution Rydberg spectroscopy of metastable He_2 molecules and employed multichannel quantum-defect-theory extrapolation techniques to determine the rotational energy-level structure of the He_2^+ ion. To this end we have produced samples of helium molecules in the a ^3Σ_u^+ state in supersonic beams with velocities tunable down to 100 m/s by combining a cryogenic supersonic-beam source with a multistage Zeeman decelerator. The metastable He_2 molecules are excited to np Rydberg states using the frequency-doubled output of a pulse-amplified ring dye laser. Although the bandwidth of the laser system is too large to observe the reduction of the Doppler width resulting from deceleration, the deceleration greatly simplifies the spectral assignments because of its spin-rotational state selectivity. Our approach enabled us to determine the rotational structure of He_2^+ with unprecedented accuracy, to determine the size of the relativistic and QED corrections by comparison with the results of the ab initio calculations, and to precisely measure the rotational structure of the metastable state for comparison with the results of Focsa et al. W.-C. Tung, M. Pavanello, L. Adamowicz, J. Chem. Phys. 136, 104309 (2012). D. Sprecher, J. Liu, T. Krähenmann, M. Schäfer, and F. Merkt, J. Chem. Phys. 140, 064304 (2014). M. Motsch, P. Jansen, J. A. Agner, H. Schmutz, and F. Merkt, Phys. Rev. A 89, 043420 (2014). C. Focsa, P. F. Bernath, and R. Colin, J. Mol. Spectrosc. 191, 209 (1998).
Evaluation and attribution of OCO-2 XCO2 uncertainties
NASA Astrophysics Data System (ADS)
Worden, John R.; Doran, Gary; Kulawik, Susan; Eldering, Annmarie; Crisp, David; Frankenberg, Christian; O'Dell, Chris; Bowman, Kevin
2017-07-01
Evaluating and attributing uncertainties in total column atmospheric CO2 measurements (XCO2) from the OCO-2 instrument is critical for testing hypotheses related to the underlying processes controlling XCO2 and for developing the quality flags needed to choose those measurements that are usable for carbon cycle science. Here we test the reported uncertainties of version 7 OCO-2 XCO2 measurements by examining variations of the XCO2 measurements and their calculated uncertainties within small regions (~100 km × 10.5 km) in which natural CO2 variability is expected to be small relative to variations imparted by noise or interferences. Over 39 000 of these small neighborhoods, comprising approximately 190 observations each, are used for this analysis. We find that a typical ocean measurement has a precision and accuracy of 0.35 and 0.24 ppm, respectively, for calculated precisions larger than ~0.25 ppm. These values are approximately consistent with the calculated errors of 0.33 and 0.14 ppm for the noise and interference error, assuming that the accuracy is bounded by the calculated interference error. The actual precision for ocean data becomes worse as the signal-to-noise ratio increases or the calculated precision decreases below 0.25 ppm, for reasons that are not well understood. A typical land measurement, both nadir and glint, is found to have a precision and accuracy of approximately 0.75 and 0.65 ppm, respectively, as compared to the calculated precision and accuracy of approximately 0.36 and 0.2 ppm. The difference in accuracy between ocean and land suggests that the accuracy of XCO2 data is likely related to interferences such as aerosols or surface albedo, as these vary less over ocean than over land. The accuracy derived here is also likely a lower bound, as it does not account for possible systematic biases between the regions used in this analysis.
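The small-neighborhood analysis can be sketched as follows: within each ~100 km × 10.5 km box the true XCO2 is taken as effectively constant, so the within-box scatter estimates the single-sounding precision. This is a schematic of the idea only, run on synthetic data, not on OCO-2 soundings.

```python
import random
import statistics

def neighborhood_precision(neighborhoods):
    """Median within-neighborhood standard deviation (ppm): if the true XCO2
    is constant inside each neighborhood, this estimates single-sounding
    precision."""
    return statistics.median(statistics.stdev(v) for v in neighborhoods)

# Synthetic test: 200 boxes of ~190 soundings each with 0.35 ppm noise.
random.seed(0)
boxes = [[400.0 + random.gauss(0.0, 0.35) for _ in range(190)]
         for _ in range(200)]
est = neighborhood_precision(boxes)
```

The median is used rather than the mean so that a few boxes with genuine CO2 gradients or outlier soundings do not bias the precision estimate upward.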
High Precision Rovibrational Spectroscopy of OH+
NASA Astrophysics Data System (ADS)
Markus, Charles R.; Hodges, James N.; Perry, Adam J.; Kocheril, G. Stephen; Müller, Holger S. P.; McCall, Benjamin J.
2016-02-01
The molecular ion OH+ has long been known to be an important component of the interstellar medium. Its relative abundance can be used to indirectly measure cosmic-ray ionization rates of hydrogen, and it is the first intermediate in the interstellar formation of water. To date, only a limited number of pure rotational transitions have been observed in the laboratory, making it necessary to calculate rotational levels indirectly from high-precision rovibrational spectroscopy. We have remeasured 30 transitions in the fundamental band with MHz-level precision in order to enable the prediction of the THz spectrum of OH+. The ions were produced in a water-cooled discharge of O2, H2, and He, and the rovibrational transitions were measured with the technique of Noise Immune Cavity Enhanced Optical Heterodyne Velocity Modulation Spectroscopy. These values have been included in a global fit of field-free data to a 3Σ- linear-molecule effective Hamiltonian to determine improved spectroscopic parameters, which were used to predict the pure rotational transition frequencies.
Limiting Energy Dissipation Induces Glassy Kinetics in Single-Cell High-Precision Responses.
Das, Jayajit
2016-03-08
Single cells often generate precise responses by involving dissipative out-of-thermodynamic-equilibrium processes in signaling networks. The available free energy to fuel these processes could become limited depending on the metabolic state of an individual cell. How does limiting dissipation affect the kinetics of high-precision responses in single cells? I address this question in the context of a kinetic proofreading scheme used in a simple model of early-time T cell signaling. Using exact analytical calculations and numerical simulations, I show that limiting dissipation qualitatively changes the kinetics in single cells marked by emergence of slow kinetics, large cell-to-cell variations of copy numbers, temporally correlated stochastic events (dynamic facilitation), and ergodicity breaking. Thus, constraints in energy dissipation, in addition to negatively affecting ligand discrimination in T cells, can create a fundamental difficulty in determining single-cell kinetics from cell-population results. Copyright © 2016 Biophysical Society. Published by Elsevier Inc. All rights reserved.
Monte-Carlo Method Application for Precising Meteor Velocity from TV Observations
NASA Astrophysics Data System (ADS)
Kozak, P.
2014-12-01
The Monte-Carlo method (method of statistical trials) as applied to the processing of meteor observations was developed in the author's Ph.D. thesis in 2005 and first used in his works in 2008. The idea of the method is that if we generate random values of the input data (the equatorial coordinates of the meteor head in a sequence of TV frames) in accordance with their statistical distributions, we can plot the probability density distributions for all of its kinematical parameters and obtain their mean values and dispersions. This opens the theoretical possibility of refining the most important parameter, the geocentric velocity of the meteor, which has the strongest influence on the precision of the calculated elements of the meteor's heliocentric orbit. In the classical approach the velocity vector is calculated in two stages: first, its direction is computed as the vector product of the poles of the great circles of the meteor trajectory determined from the two observational points; then the absolute value of the velocity is calculated independently from each observational point, and one of the two values is selected, for whatever reason, as the final parameter. In the given method we propose instead to obtain the statistical distribution of the absolute value of the velocity as the intersection of the two distributions corresponding to the velocity values obtained from the different points. We expect that such an approach should substantially increase the precision of the meteor velocity calculation and remove subjective inaccuracies.
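For approximately Gaussian errors, the "intersection" of the two single-station velocity distributions reduces to the renormalized product of two normal densities, i.e. an inverse-variance-weighted combination. A minimal analytic sketch follows; the velocity values and uncertainties are illustrative, not taken from the paper.

```python
import math

def combine_gaussians(mu1, s1, mu2, s2):
    """Product of two normal densities N(mu1, s1) and N(mu2, s2),
    renormalized: the combined estimate has an inverse-variance-weighted
    mean and a smaller standard deviation than either input."""
    w1, w2 = 1.0 / s1 ** 2, 1.0 / s2 ** 2
    mu = (w1 * mu1 + w2 * mu2) / (w1 + w2)
    s = math.sqrt(1.0 / (w1 + w2))
    return mu, s

# Two single-station geocentric velocity estimates (km/s), illustrative.
mu, s = combine_gaussians(59.8, 0.6, 60.3, 0.8)
```

In the full Monte-Carlo scheme the two distributions come from the trial histograms rather than analytic Gaussians, but the effect is the same: the combined velocity estimate is tighter than either single-station one.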
Smart and precise alignment of optical systems
NASA Astrophysics Data System (ADS)
Langehanenberg, Patrik; Heinisch, Josef; Stickler, Daniel
2013-09-01
For the assembly of any kind of optical system, the precise centration of every single element is of particular importance. Classically, the alignment of optical components is based on centering each component on an external axis (usually a high-precision rotary spindle axis). The main drawback of this time-consuming process is that it is significantly sensitive to misalignments of the reference axis (e.g., the housing axis). In order to simplify this process, in this contribution we present a novel alignment strategy for the TRIOPTICS OptiCentric® instrument family that directly aligns two elements with respect to each other by measuring the first element's axis and using this axis as the alignment reference, without the detour of an external reference. According to the optical design, any axis in the system can be chosen as the target axis. In the case of alignment to a barrel, this axis is measured using a distance sensor (e.g., the classically used dial indicator). Instead of driving a fine alignment, the obtained data are used to calculate the axis orientation within the setup. Alternatively, the axis of an optical element (a single lens or a group of lenses) whose orientation is measured with the standard OptiCentric MultiLens concept can be used as the reference. In the instrument's software, the decentering of the element being adjusted relative to the calculated axis is displayed in real time and indicated by a target mark that can be used for manual alignment. In addition, the obtained information can also be applied to active and fully automated alignment of lens assemblies with the help of motorized actuators.
PRECISE ANGLE MONITOR BASED ON THE CONCEPT OF PENCIL-BEAM INTERFEROMETRY
DOE Office of Scientific and Technical Information (OSTI.GOV)
QIAN,S.; TAKACS,P.
2000-07-30
Precise angle monitoring is a very important metrology task for research, development, and industrial applications. The autocollimator, based on the principles of geometric optics, is one of the most powerful and widely applied instruments for small-angle monitoring. In this paper the authors introduce a new precise angle monitoring system, the Pencil-beam Angle Monitor (PAM), based on pencil-beam interferometry. Its principle of operation is a combination of physical and geometrical optics. The angle calculation method is similar to that of the autocollimator; however, whereas the autocollimator creates a cross image, the pencil-beam angle monitoring system produces an interference fringe on the focal plane. The advantages of the PAM are: high angular sensitivity; long-term stability, making angle monitoring over long time periods possible; high measurement accuracy, on the order of sub-microradians; the ability to measure simultaneously in two perpendicular directions or on two different objects; the possibility of dynamic measurement; insensitivity to vibration and air turbulence; automatic display, storage, and analysis by computer; a small beam diameter, making alignment extremely easy; and a longer test distance. Some test examples are presented.
NASA Astrophysics Data System (ADS)
Beminiwattha, Rakitha; Moller Collaboration
2017-09-01
Parity-Violating Electron Scattering (PVES) is an extremely successful precision-frontier tool that has been used for testing the Standard Model (SM) and understanding nucleon structure. Several generations of highly successful PVES programs at SLAC, MIT-Bates, MAMI-Mainz, and Jefferson Lab have contributed to the understanding of nucleon structure and to testing the SM. But missing phenomena such as the matter-antimatter asymmetry, neutrino flavor oscillations, and dark matter and dark energy suggest that the SM is only a 'low energy' effective theory. The MOLLER experiment at Jefferson Lab will measure the weak charge of the electron, Q_W^e = 1 - 4 sin^2 θ_W, with a precision of 2.4% by measuring the parity-violating asymmetry in electron-electron (Møller) scattering, and will be sensitive to subtle but measurable deviations from precisely calculable predictions of the SM. The MOLLER experiment will provide the best contact-interaction search for leptons at low or high energy, making it a probe of physics beyond the Standard Model with sensitivity to mass scales of new parity-violating physics up to 7.5 TeV. An overview of the experiment and recent pre-R&D progress will be reported.
Isotalo, Aarno E.; Wieselquist, William A.
2015-05-15
A method for including external feed with polynomial time dependence in depletion calculations with the Chebyshev Rational Approximation Method (CRAM) is presented, and the implementation of CRAM in the ORIGEN module of the SCALE suite is described. In addition to being able to handle time-dependent feed rates, the new solver also adds the capability to perform adjoint calculations. Results obtained with the new CRAM solver and the original depletion solver of ORIGEN are compared to high-precision reference calculations, which shows the new solver to be orders of magnitude more accurate. Lastly, in most cases, the new solver is up to several times faster, as it does not require the substepping needed by the original one.
Schellenberg, François; Mennetrey, Louise; Girre, Catherine; Nalpas, Bertrand; Pagès, Jean Christophe
2008-01-01
In this study, we evaluated the new %CDT by HPLC method (Bio-Rad, Germany) on a Variant™ HPLC system (Bio-Rad), checked the correlation with well-known methods, and calculated the diagnostic value of the test. Intra-run and day-to-day precision values were calculated for samples with extreme serum transferrin concentrations, high trisialotransferrin, and interfering conditions (haemolysed, lactescent, and icteric samples). The method was compared with two routine procedures, the %CDT TIA (Bio-Rad, Hercules, CA, USA) and the Capillarys™ CDT (Sebia, France). A total of 350 clinical serum samples were used for a case-control study. Precision values were better in high-CDT and medium-CDT pools than in low-CDT pools. The serum transferrin concentration had no effect on CDT measurement, except in samples with serum transferrin <1 g/L. Haemolysis was the only interfering situation. The method showed high correlation (r² > 0.95) with the two other methods (%CDT TIA and CZE %CDT). The global predictive value of the test was >0.90 at a 1.9% cut-off. These results demonstrate that the %CDT by HPLC test is suitable for routine CDT measurement; the results from the high-throughput Variant™ system correlate well with other methods and are of high diagnostic value.
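Intra-run and day-to-day precision of the kind reported here is conventionally expressed as a coefficient of variation (CV%); a minimal sketch with illustrative replicate values, not data from the study:

```python
import statistics

def cv_percent(replicates):
    """Coefficient of variation in percent: 100 * sample SD / mean."""
    return 100.0 * statistics.stdev(replicates) / statistics.mean(replicates)

# Illustrative replicate %CDT measurements of one pool.
cv = cv_percent([2.0, 2.1, 1.9])
```

Computed per pool (low-, medium-, and high-CDT), this is the figure that shows precision degrading in low-CDT pools, where the absolute signal is smallest.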
Mechanism and experimental research on ultra-precision grinding of ferrite
NASA Astrophysics Data System (ADS)
Ban, Xinxing; Zhao, Huiying; Dong, Longchao; Zhu, Xueliang; Zhang, Chupeng; Gu, Yawen
2017-02-01
Ultra-precision grinding of ferrite is conducted to investigate the removal mechanism. The effect of the accuracy of key machine tool components on the ground surface quality is analyzed, and a surface generation model of ferrite ultra-precision grinding is established. Furthermore, in order to reveal the surface formation mechanism of ferrite during ultra-precision grinding, a calculation model for the grinding surface roughness is proposed and its accuracy is verified. An orthogonal experiment is designed using a high-precision aerostatic turntable and aerostatic spindle for ferrite, a typical hard and brittle material. Based on the experimental results, the factors influencing the ultra-precision ground surface of ferrite and their governing laws are discussed through analysis of the surface roughness. The results show that the ground surface quality is best at a wheel speed of 20000 r/min, a feed rate of 10 mm/min, a grinding depth of 0.005 mm, and a turntable rotary speed of 5 r/min, at which the surface roughness Ra reaches 75 nm.
Lattice field theory applications in high energy physics
NASA Astrophysics Data System (ADS)
Gottlieb, Steven
2016-10-01
Lattice gauge theory was formulated by Kenneth Wilson in 1974. In the ensuing decades, improvements in actions, algorithms, and computers have enabled tremendous progress in QCD, to the point where lattice calculations can yield sub-percent level precision for some quantities. Beyond QCD, lattice methods are being used to explore possible beyond the standard model (BSM) theories of dynamical symmetry breaking and supersymmetry. We survey progress in extracting information about the parameters of the standard model by confronting lattice calculations with experimental results and searching for evidence of BSM effects.
High precision measurements of 26Na β- decay
NASA Astrophysics Data System (ADS)
Grinyer, G. F.; Svensson, C. E.; Andreoiu, C.; Andreyev, A. N.; Austin, R. A.; Ball, G. C.; Chakrawarthy, R. S.; Finlay, P.; Garrett, P. E.; Hackman, G.; Hardy, J. C.; Hyland, B.; Iacob, V. E.; Koopmans, K. A.; Kulp, W. D.; Leslie, J. R.; MacDonald, J. A.; Morton, A. C.; Ormand, W. E.; Osborne, C. J.; Pearson, C. J.; Phillips, A. A.; Sarazin, F.; Schumaker, M. A.; Scraggs, H. C.; Schwarzenberg, J.; Smith, M. B.; Valiente-Dobón, J. J.; Waddington, J. C.; Wood, J. L.; Zganjar, E. F.
2005-04-01
High-precision measurements of the half-life and β-branching ratios for the β- decay of 26Na to 26Mg were performed in β-counting and γ-decay experiments, respectively. A 4π proportional counter and a fast tape-transport system were employed for the half-life measurement, whereas the γ rays emitted by the daughter nucleus 26Mg were detected with the 8π γ-ray spectrometer, both located at TRIUMF's isotope separator and accelerator radioactive beam facility. The half-life of 26Na was determined to be T_1/2 = 1.07128 ± 0.00013 ± 0.00021 s, where the first error is statistical and the second systematic. The log ft values derived from these experiments are compared with theoretical values from a full sd-shell model calculation.
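Extracting a half-life from decay-counting data amounts to fitting N(t) = N0·exp(-λt), with T_1/2 = ln 2 / λ. A log-linear least-squares sketch follows; the count data below are synthetic and noise-free, not the TRIUMF measurements (which also require dead-time and background corrections).

```python
import math

def half_life(times, counts):
    """Half-life from a log-linear least-squares fit of N(t) = N0*exp(-lam*t):
    the slope of ln(counts) vs time is -lam, and T_1/2 = ln(2)/lam."""
    ys = [math.log(c) for c in counts]
    n = len(times)
    mx = sum(times) / n
    my = sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(times, ys)) \
        / sum((x - mx) ** 2 for x in times)
    return math.log(2.0) / -slope

# Synthetic decay curve with a known half-life (seconds), illustrative.
T_true = 1.07128
lam = math.log(2.0) / T_true
times = [0.1 * i for i in range(20)]
counts = [1.0e6 * math.exp(-lam * t) for t in times]
T_fit = half_life(times, counts)
```

On noise-free data the fit recovers the input half-life to floating-point precision; with Poisson counting noise the points should additionally be weighted by their counts.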
Boehm, K. -J.; Gibson, C. R.; Hollaway, J. R.; ...
2016-09-01
This study presents the design of a flexure-based mount allowing adjustment in three rotational degrees of freedom (DOFs) through high-precision set-screw actuators. The requirements of the application called for small but controlled angular adjustments for mounting a cantilevered beam. The proposed design is based on an array of parallel beams that provides sufficiently high stiffness in the translational directions while allowing angular adjustment through the actuators. A simplified physical model in combination with standard beam theory was applied to estimate the deflection profile and maximum stresses in the beams. A finite element model was built to calculate the stresses and beam profiles for scenarios in which the flexure is simultaneously actuated in more than one DOF.
NNLO corrections to top-pair production at hadron colliders: the all-fermionic scattering channels
NASA Astrophysics Data System (ADS)
Czakon, Michal; Mitov, Alexander
2012-12-01
This is the second paper in our ongoing calculation of the next-to-next-to-leading order (NNLO) QCD corrections to the total inclusive top-pair production cross-section at hadron colliders. In this paper we calculate the reaction q q̄ → t t̄ + q q̄, which was not considered in our previous work on q q̄ → t t̄ + X [1] due to its phenomenologically negligible size. We also calculate all remaining fermion-pair-initiated partonic channels qq′, q q̄′, and qq that contribute to top-pair production starting from NNLO. The contributions of these reactions to the total cross-section for top-pair production at the Tevatron and LHC are small, at the permil level. The most interesting feature of these reactions is their characteristic logarithmic rise in the high-energy limit. We compute the constant term in the leading power behavior in this limit, and achieve precision that is an order of magnitude better than the precision of a recent theoretical prediction for this constant. All four partonic reactions computed in this paper are included in our numerical program Top++. The calculation of the NNLO corrections to the two remaining partonic reactions, qg → t t̄ + X and gg → t t̄ + X, is ongoing.
NASA Astrophysics Data System (ADS)
Zhou, Shudao; Ma, Zhongliang; Wang, Min; Peng, Shuling
2018-05-01
This paper proposes a novel alignment system based on the measurement of optical path using a light beam scanning mode in a transmissometer. The system controls both the probe beam and the receiving field of view while scanning in two vertical directions. The system then calculates the azimuth angle of the transmitter and the receiver to determine the precise alignment of the optical path. Experiments show that this method can determine the alignment angles in less than 10 min with errors smaller than 66 μrad in the azimuth. This system also features high collimation precision, process automation and simple installation.
Yan, Liping; Chen, Benyong; Zhang, Enzheng; Zhang, Shihua; Yang, Ye
2015-08-01
A novel method for the precision measurement of the refractive index of air (n_air), based on combining laser synthetic-wavelength interferometry with an Edlén-equation estimate, is proposed. First, an estimate n_air_e is calculated from the modified Edlén equation using environmental parameters measured by low-precision sensors, with an uncertainty of 10^-6. Second, the unique integral fringe number N corresponding to n_air is determined from the calculated n_air_e. Then, the fractional fringe ε corresponding to n_air is obtained with high accuracy according to the principle of fringe subdivision in laser synthetic-wavelength interferometry. Finally, a highly accurate measurement of n_air is achieved from the determined fringes N and ε. The merit of the proposed method is that it not only removes the limit that the accuracies of the environmental sensors place on the measurement accuracy of n_air, but also avoids the complicated vacuum pumping used to measure the integral fringe N in conventional laser interferometry. To verify the feasibility of the proposed method, comparison experiments against the Edlén equations were performed over short and long time scales. The experimental results show that the measurement accuracy of n_air is better than 2.5 × 10^-8 in the short-time tests and 6.2 × 10^-8 in the long-time tests.
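The integer/fractional fringe bookkeeping can be sketched as follows. Over a path L, the air contributes (n_air - 1)·L/λ_s fringes at the synthetic wavelength λ_s; the coarse Edlén estimate (good to ~10^-6) fixes the integer part N, and the interferometer supplies the fraction ε. All numerical values below are illustrative, not from the paper.

```python
import math

def refine_n_air(n_edlen, frac_fringe, L, lam_s):
    """Combine a coarse Edlén estimate of the refractive index of air with
    the measured fractional fringe at the synthetic wavelength lam_s over
    path length L: the estimate fixes the integer fringe number N, the
    interferometer supplies the fraction frac_fringe."""
    total_e = L * (n_edlen - 1.0) / lam_s     # fringes implied by Edlén
    base = math.floor(total_e)
    # Pick the integer that makes N + eps closest to the Edlén estimate.
    N = min((base - 1, base, base + 1),
            key=lambda m: abs(m + frac_fringe - total_e))
    return 1.0 + (N + frac_fringe) * lam_s / L

# Illustrative numbers: 1 m path, 100 um synthetic wavelength.
L, lam_s = 1.0, 1.0e-4
n_true = 1.0 + 2.718e-4
eps = (L * (n_true - 1.0) / lam_s) % 1.0      # "measured" fraction
n_meas = refine_n_air(n_true + 8.0e-7, eps, L, lam_s)
```

The long synthetic wavelength is what makes the scheme work: at 100 μm the 10^-6 Edlén uncertainty corresponds to only ~0.01 fringe over 1 m, so the integer N is resolved unambiguously, which would not be the case at a bare optical wavelength.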
Smilowitz, Jennifer T; Gho, Deborah S; Mirmiran, Majid; German, J Bruce; Underwood, Mark A
2014-05-01
Although it is well established that human milk varies widely in macronutrient content, it remains common for human milk fortification for premature infants to be based on historic mean values. As a result, those caring for premature infants often underestimate protein intake. Rapid precise measurement of human milk protein, fat, and lactose to allow individualized fortification has been proposed for decades but remains elusive due to technical challenges. This study aimed to evaluate the accuracy and precision of a Fourier transform (FT) mid-infrared (IR) spectroscope in the neonatal intensive care unit to measure human milk fat, total protein, lactose, and calculated energy compared with standard chemical analyses. One hundred sixteen breast milk samples across lactation stages from women who delivered at term (n = 69) and preterm (n = 5) were analyzed with the FT mid-IR spectroscope and with standard chemical methods. Ten of the samples were tested in replicate using the FT mid-IR spectroscope to determine repeatability. The agreement between the FT mid-IR spectroscope analysis and reference methods was high for protein and fat and moderate for lactose and energy. The intra-assay coefficients of variation for all outcomes were less than 3%. The FT mid-IR spectroscope demonstrated high accuracy in measurement of total protein and fat of preterm and term milk with high precision.
NASA Astrophysics Data System (ADS)
Hełminiak, K. G.; Konacki, M.; Muterspaugh, M. W.; Browne, S. E.; Howard, A. W.; Kulkarni, S. R.
2012-01-01
We present the most precise orbital and physical parameters to date of the well-known short-period (P = 5.975 d), eccentric (e = 0.3) double-lined spectroscopic binary BY Draconis (BY Dra), a prototype of a class of late-type, active, spotted flare stars. We calculate the full spectroscopic/astrometric orbital solution by combining our precise radial velocities (RVs) and the archival astrometric measurements from the Palomar Testbed Interferometer (PTI). The RVs were derived from high-resolution echelle spectra taken between 2004 and 2008 with the Keck I/high-resolution echelle spectrograph, Shane/CAT/HamSpec and TNG/SARG telescopes/spectrographs using our novel iodine-cell technique for double-lined binary stars. The RVs and the available PTI astrometric data, spanning over eight years, allow us to reach a 0.2-0.5 per cent level of precision in M sin³i and the parallax, but the geometry of the orbit (i ≃ 154°) limits the absolute mass precision to 3.3 per cent, which is still an order of magnitude better than in previous studies. We compare our results with a set of Yonsei-Yale theoretical stellar isochrones and conclude that BY Dra is probably a main-sequence system more metal-rich than the Sun. Using the orbital inclination and the available rotational velocities of the components, we also conclude that the rotational axes of the components are likely misaligned with the orbital angular momentum. Given BY Dra's main-sequence status, late spectral type and relatively short orbital period, its high orbital eccentricity and probable spin-orbit misalignment are not in agreement with tidal theory. This disagreement may possibly be explained by smaller rotational velocities of the components and the presence of a substellar-mass companion to BY Dra AB.
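The quoted M sin³i values follow from the standard relations for a double-lined spectroscopic orbit. A minimal sketch of those textbook relations (not the authors' combined RV plus astrometry fit, which also solves for the inclination):

```python
import math

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def msin3i(P, e, K1, K2):
    """Component masses times sin^3(i) from a double-lined
    spectroscopic orbit: period P in s, eccentricity e, and RV
    semi-amplitudes K1, K2 in m/s. Returns (M1 sin^3 i, M2 sin^3 i)
    in kg."""
    f = P * (1.0 - e**2) ** 1.5 * (K1 + K2) ** 2 / (2.0 * math.pi * G)
    return f * K2, f * K1
```

As a consistency check, for a circular edge-on orbit the two masses sum to the total mass implied by Kepler's third law.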
QSL Squasher: A Fast Quasi-separatrix Layer Map Calculator
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tassev, Svetlin; Savcheva, Antonia, E-mail: svetlin.tassev@cfa.harvard.edu
Quasi-Separatrix Layers (QSLs) are a useful proxy for the locations where current sheets can develop in the solar corona, and give valuable information about the connectivity in complicated magnetic field configurations. However, calculating QSL maps, even for two-dimensional slices through three-dimensional models of coronal magnetic fields, is a non-trivial task, as it usually involves tracing out millions of magnetic field lines with immense precision. Thus, extending QSL calculations to three dimensions has rarely been done until now. In order to address this challenge, we present QSL Squasher—a public, open-source code, which is optimized for calculating QSL maps in both two and three dimensions on graphics processing units. The code achieves large processing speeds for three reasons, each of which results in an order-of-magnitude speed-up. (1) The code is parallelized using OpenCL. (2) The precision requirements for the QSL calculation are drastically reduced by using perturbation theory. (3) A new boundary detection criterion between quasi-connectivity domains is used, which quickly identifies possible QSL locations that need to be finely sampled by the code. That boundary detection criterion relies on finding the locations of abrupt field-line length changes, which we do by introducing a new Field-line Length Edge (FLEDGE) map. We find FLEDGE maps useful on their own as a quick-and-dirty substitute for QSL maps. QSL Squasher allows construction of high-resolution 3D FLEDGE maps in a matter of minutes, which is two orders of magnitude faster than calculating the corresponding 3D QSL maps. We include a sample of calculations done using QSL Squasher to demonstrate its capabilities as a QSL calculator, as well as to compare QSL and FLEDGE maps.
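The FLEDGE idea — flagging abrupt changes in field-line length — can be sketched as a gradient filter applied to a precomputed length map. The field-line tracing itself is omitted here; the array `length_map` is assumed to hold the length of the field line rooted at each grid point.

```python
import numpy as np

def fledge_map(length_map):
    """Field-line Length Edge (FLEDGE) sketch: highlight abrupt
    changes in field-line length L(x, y) via the log of the norm
    of its finite-difference gradient, a cheap proxy for
    quasi-separatrix layer locations."""
    gy, gx = np.gradient(length_map)          # gradients along axis 0, axis 1
    return np.log10(np.hypot(gx, gy) + 1e-30)  # small floor avoids log(0)
```

A sharp jump in field-line length then shows up as a bright ridge in the FLEDGE map, marking a candidate region for fine QSL sampling.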
QSL Squasher: A Fast Quasi-separatrix Layer Map Calculator
NASA Astrophysics Data System (ADS)
Tassev, Svetlin; Savcheva, Antonia
2017-05-01
Quasi-Separatrix Layers (QSLs) are a useful proxy for the locations where current sheets can develop in the solar corona, and give valuable information about the connectivity in complicated magnetic field configurations. However, calculating QSL maps, even for two-dimensional slices through three-dimensional models of coronal magnetic fields, is a non-trivial task, as it usually involves tracing out millions of magnetic field lines with immense precision. Thus, extending QSL calculations to three dimensions has rarely been done until now. In order to address this challenge, we present QSL Squasher—a public, open-source code, which is optimized for calculating QSL maps in both two and three dimensions on graphics processing units. The code achieves large processing speeds for three reasons, each of which results in an order-of-magnitude speed-up. (1) The code is parallelized using OpenCL. (2) The precision requirements for the QSL calculation are drastically reduced by using perturbation theory. (3) A new boundary detection criterion between quasi-connectivity domains is used, which quickly identifies possible QSL locations that need to be finely sampled by the code. That boundary detection criterion relies on finding the locations of abrupt field-line length changes, which we do by introducing a new Field-line Length Edge (FLEDGE) map. We find FLEDGE maps useful on their own as a quick-and-dirty substitute for QSL maps. QSL Squasher allows construction of high-resolution 3D FLEDGE maps in a matter of minutes, which is two orders of magnitude faster than calculating the corresponding 3D QSL maps. We include a sample of calculations done using QSL Squasher to demonstrate its capabilities as a QSL calculator, as well as to compare QSL and FLEDGE maps.
Neutron density profile in the lunar subsurface produced by galactic cosmic rays
NASA Astrophysics Data System (ADS)
Ota, Shuya; Sihver, Lembit; Kobayashi, Shingo; Hasebe, Nobuyuki
Neutron production by galactic cosmic rays (GCR) in the lunar subsurface is very important when performing lunar and planetary nuclear spectroscopy and space dosimetry. Further improvements to estimate this production with increased accuracy are therefore required. GCR, which is the main contributor to neutron production in the lunar subsurface, consists not only of protons but also of heavy components such as He, C, N, O, and Fe. Because of that, it is important to precisely estimate the neutron production from such components for lunar spectroscopy and space dosimetry. Therefore, the neutron production from GCR particles, including heavy components, in the lunar subsurface was simulated with the Particle and Heavy Ion Transport code System (PHITS), using several heavy ion interaction models. This work presents PHITS simulations of the neutron density as a function of depth (the neutron density profile) in the lunar subsurface, and the results are compared with experimental data obtained by the Apollo 17 Lunar Neutron Probe Experiment (LNPE). From our previous study, it has been found that the accuracy of the proton-induced neutron production models is the most influential factor when performing precise calculations of neutron production in the lunar subsurface. Therefore, a benchmarking of proton-induced neutron production models against experimental data was performed to estimate and improve the precision of the calculations. It was found that the calculation using the best model combination of Cugnon Old (E < 3 GeV) and JAM (E > 3 GeV) gave values up to 30% higher than the experimental results. Therefore, a high energy nuclear data file (JENDL-HE) was used instead of the Cugnon Old model at energies below 3 GeV. The calculated neutron density profile then successfully reproduced the experimental data from LNPE within the experimental errors of 15% (measurement) + 30% (systematic).
In this presentation, we summarize and discuss our calculated results of neutron production in the lunar subsurface.
3D Printing of Preoperative Simulation Models of a Splenic Artery Aneurysm: Precision and Accuracy.
Takao, Hidemasa; Amemiya, Shiori; Shibata, Eisuke; Ohtomo, Kuni
2017-05-01
Three-dimensional (3D) printing is attracting increasing attention in the medical field. This study aimed to apply 3D printing to the production of hollow splenic artery aneurysm models for use in the simulation of endovascular treatment, and to evaluate the precision and accuracy of the simulation model. From 3D computed tomography (CT) angiography data of a splenic artery aneurysm, 10 hollow models reproducing the vascular lumen were created using a fused deposition modeling-type desktop 3D printer. After filling with water, each model was scanned using T2-weighted magnetic resonance imaging for the evaluation of the lumen. All images were coregistered, binarized, and then combined to create an overlap map. The cross-sectional area of the splenic artery aneurysm and its standard deviation (SD) were calculated perpendicular to the x- and y-axes. Most voxels overlapped among the models. The cross-sectional areas were similar among the models, with SDs <0.05 cm². The mean cross-sectional areas of the splenic artery aneurysm were slightly smaller than those calculated from the original mask images. The maximum mean cross-sectional areas calculated perpendicular to the x- and y-axes were 3.90 cm² (SD, 0.02) and 4.33 cm² (SD, 0.02), whereas those calculated from the original mask images were 4.14 cm² and 4.66 cm², respectively. The mean cross-sectional areas of the afferent artery were, however, almost the same as those calculated from the original mask images. The results suggest that 3D simulation modeling of a visceral artery aneurysm using a fused deposition modeling-type desktop 3D printer and computed tomography angiography data is highly precise and accurate. Copyright © 2017 The Association of University Radiologists. Published by Elsevier Inc. All rights reserved.
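The per-slice area statistics described above reduce to summing voxels in each coregistered binary mask. A sketch, assuming the masks are already binarized and stacked into a NumPy array (array layout and voxel area are illustrative):

```python
import numpy as np

def section_area_stats(masks, voxel_area):
    """Mean and SD of the cross-sectional area per slice across
    coregistered binary masks of shape (n_models, nx, ny, nz).
    Areas are taken perpendicular to the x-axis, i.e. one slice
    per index along axis 1."""
    areas = masks.sum(axis=(2, 3)) * voxel_area   # shape (n_models, nx)
    return areas.mean(axis=0), areas.std(axis=0)
```

Identical models then give an SD of zero at every slice, and inter-model variability appears directly as the per-slice SD.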
ERIC Educational Resources Information Center
González-Gómez, David; Rodríguez, Diego Airado; Cañada-Cañada, Florentina; Jeong, Jin Su
2015-01-01
Currently, there are a number of educational applications that allow students to reinforce theoretical or numerical concepts through an interactive way. More precisely, in the field of the analytical chemistry, MATLAB has been widely used to write easy-to-implement code, facilitating complex performances and/or tedious calculations. The main…
Special-purpose computer for holography HORN-2
NASA Astrophysics Data System (ADS)
Ito, Tomoyoshi; Eldeib, Hesham; Yoshida, Kenji; Takahashi, Shinya; Yabe, Takashi; Kunugi, Tomoaki
1996-01-01
We designed and built a special-purpose computer for holography, HORN-2 (HOlographic ReconstructioN). HORN-2 calculates light intensity at a high speed of 0.3 Gflops per board with single (32-bit floating-point) precision. The cost of one board is 500,000 Japanese yen (5,000 US dollars). We built three boards; operating them in parallel, we obtain about 1 Gflops.
[Research on fast implementation method of image Gaussian RBF interpolation based on CUDA].
Chen, Hao; Yu, Haizhong
2014-04-01
Image interpolation is often required during medical image processing and analysis. Although interpolation based on the Gaussian radial basis function (GRBF) has high precision, its long calculation time still limits its application to image interpolation. To overcome this problem, a method for two-dimensional and three-dimensional medical image GRBF interpolation based on the compute unified device architecture (CUDA) is proposed in this paper. Following the single-instruction multiple-thread (SIMT) execution model of CUDA, various optimizing measures such as coalesced access and shared memory are adopted in this study. To eliminate the edge distortion of image interpolation, a natural suture algorithm is utilized in the overlapping regions, while a data-space strategy of separating 2D images into blocks or dividing 3D images into sub-volumes is adopted. While keeping a high interpolation precision, the 2D and 3D medical image GRBF interpolation achieved great acceleration in each basic computing step. The experiments showed that the efficiency of image GRBF interpolation on the CUDA platform was markedly improved compared with CPU calculation. The present method is of considerable reference value for applications of image interpolation.
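A CPU reference of GRBF interpolation helps make the computational cost concrete: the dense kernel solve and evaluation below are exactly the arithmetic the paper accelerates with CUDA (the blocking, shared-memory and suture details are not reproduced here; `sigma` is an illustrative kernel width).

```python
import numpy as np

def grbf_interpolate(points, values, query, sigma=1.0):
    """Gaussian radial-basis-function interpolation, plain NumPy
    reference. points: (n, d) data sites; values: (n,) samples;
    query: (m, d) evaluation sites."""
    # pairwise squared distances between data sites
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    A = np.exp(-d2 / (2.0 * sigma**2))        # Gaussian kernel matrix
    w = np.linalg.solve(A, values)            # RBF weights
    q2 = ((query[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    return np.exp(-q2 / (2.0 * sigma**2)) @ w
```

Because the kernel matrix interpolates exactly, evaluating at the data sites must return the original samples, which makes a convenient sanity check.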
Dynamical scales for multi-TeV top-pair production at the LHC
NASA Astrophysics Data System (ADS)
Czakon, Michał; Heymes, David; Mitov, Alexander
2017-04-01
We calculate all major differential distributions with stable top-quarks at the LHC. The calculation covers the multi-TeV range that will be explored during LHC Run II and beyond. Our results are in the form of high-quality binned distributions. We offer predictions based on three different parton distribution function (pdf) sets. In the near future we will make our results available also in the more flexible fastNLO format that allows fast re-computation with any other pdf set. In order to be able to extend our calculation into the multi-TeV range we have had to derive a set of dynamic scales. Such scales are selected based on the principle of fastest perturbative convergence applied to the differential and inclusive cross-section. Many observations from our study are likely to be applicable and useful to other precision processes at the LHC. With scale uncertainty now under good control, pdfs arise as the leading source of uncertainty for TeV top production. Based on our findings, true precision in the boosted regime will likely only be possible after new and improved pdf sets appear. We expect that LHC top-quark data will play an important role in this process.
Cygnus OB2 DANCe: A high-precision proper motion study of the Cygnus OB2 association
NASA Astrophysics Data System (ADS)
Wright, Nicholas J.; Bouy, Herve; Drew, Janet E.; Sarro, Luis Manuel; Bertin, Emmanuel; Cuillandre, Jean-Charles; Barrado, David
2016-08-01
We present a high-precision proper motion study of 873 X-ray and spectroscopically selected stars in the massive OB association Cygnus OB2 as part of the DANCe project. These were calculated from images spanning a 15 yr baseline and have typical precisions <1 mas yr-1. We calculate the velocity dispersions along the two axes to be σ_α = 13.0^{+0.8}_{-0.7} and σ_δ = 9.1^{+0.5}_{-0.5} km s-1, using a two-component, two-dimensional model that takes into account the uncertainties on the measurements. This gives a three-dimensional velocity dispersion of σ3D = 17.8 ± 0.6 km s-1, implying a virial mass significantly larger than the observed stellar mass and confirming that the association is gravitationally unbound. The association appears to be dynamically unevolved, as evidenced by considerable kinematic substructure, non-isotropic velocity dispersions and a lack of energy equipartition. The proper motions show no evidence for a global expansion pattern, with approximately the same amount of kinetic energy in expansion as there is in contraction, which argues against the association being an expanded star cluster disrupted by processes such as residual gas expulsion or tidal heating. The kinematic substructures, which appear to be close to virial equilibrium and have typical masses of 40-400 M⊙, also do not appear to have been affected by the expulsion of the residual gas. We conclude that Cyg OB2 was most likely born highly substructured and globally unbound, with the individual subgroups born in (or close to) virial equilibrium, and that the OB association has not experienced significant dynamical evolution since then.
Activation measurement of the 3He(alpha,gamma)7Be cross section at low energy.
Bemmerer, D; Confortola, F; Costantini, H; Formicola, A; Gyürky, Gy; Bonetti, R; Broggini, C; Corvisiero, P; Elekes, Z; Fülöp, Zs; Gervino, G; Guglielmetti, A; Gustavino, C; Imbriani, G; Junker, M; Laubenstein, M; Lemut, A; Limata, B; Lozza, V; Marta, M; Menegazzo, R; Prati, P; Roca, V; Rolfs, C; Alvarez, C Rossi; Somorjai, E; Straniero, O; Strieder, F; Terrasi, F; Trautvetter, H P
2006-09-22
The nuclear physics input from the 3He(alpha,gamma)7Be cross section is a major uncertainty in the fluxes of 7Be and 8B neutrinos from the Sun predicted by solar models and in the 7Li abundance obtained in big-bang nucleosynthesis calculations. The present work reports on a new precision experiment using the activation technique at energies directly relevant to big-bang nucleosynthesis. Previously such low energies had been reached experimentally only by the prompt-gamma technique and with inferior precision. Using a windowless gas target, high beam intensity, and low background gamma-counting facilities, the 3He(alpha,gamma)7Be cross section has been determined at 127, 148, and 169 keV center-of-mass energy with a total uncertainty of 4%. The sources of systematic uncertainty are discussed in detail. The present data can be used in big-bang nucleosynthesis calculations and to constrain the extrapolation of the 3He(alpha,gamma)7Be astrophysical S factor to solar energies.
Study of a high-precision SAW-MOEMS strain sensor with laser optics
NASA Astrophysics Data System (ADS)
Liu, Xinwei; Chen, Shufen; Li, Honglang; Zou, Zhengfeng; Fu, Lei; Meng, Yanbin
2015-02-01
A novel structure design of a surface acoustic wave (SAW) micro-opto-electro-mechanical-system (MOEMS) strain sensor with a light readout unit is presented in this paper. By measuring the polarization intensity ratio of the TE/TM modes output from the waveguide, the strain produced in an object can be measured precisely. The basic working principle of the SAW MOEMS strain sensor is introduced and a mathematical model of the strain sensor system is established. The effect of strain on the SAW characteristics is mathematically deduced. The coupling coefficient between the SAW modes and the light modes is calculated based on coupled-mode theory, and the conversion coefficient of the polarized light modes is obtained. Given the restrictions of the specific device parameters, the level of fabrication technology and the material characteristics, the sensitivity of the strain sensor system is calculated through simulation to be 0.1 μɛ, with a dynamic range of 0 to ±50 μɛ.
Prototype design of singles processing unit for the small animal PET
NASA Astrophysics Data System (ADS)
Deng, P.; Zhao, L.; Lu, J.; Li, B.; Dong, R.; Liu, S.; An, Q.
2018-05-01
Positron Emission Tomography (PET) is an advanced clinical diagnostic imaging technique for nuclear medicine. Small animal PET is increasingly used for studying animal models of disease, new drugs and new therapies. A prototype Singles Processing Unit (SPU) for a small animal PET system was designed to obtain the time, energy, and position information. The energy and position are actually calculated through high-precision charge measurement, which is based on amplification, shaping, A/D conversion and area calculation in the digital signal processing domain. Analysis and simulations were also conducted to optimize the key parameters in the system design. Initial tests indicate that the charge and time precision are better than 3‰ FWHM and 350 ps FWHM, respectively, while the position resolution is better than 3.5‰ FWHM. Combined tests of the SPU prototype with the PET detector indicate that the system time precision is better than 2.5 ns, while the flood map and energy spectra agreed well with expectations.
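The "area calculation" step for the charge measurement can be sketched as baseline-subtracted digital integration of the shaped pulse. This is a generic sketch, not the SPU firmware; the sampling rate and baseline window are illustrative.

```python
import numpy as np

def pulse_charge(samples, fs, n_base=16):
    """Charge by digital area calculation: estimate the baseline
    from the first n_base samples, subtract it, and integrate the
    remainder of the digitized pulse. fs is the sampling rate in
    samples per unit time; the result is in (sample units) * time."""
    base = samples[:n_base].mean()            # pre-pulse baseline estimate
    return (samples[n_base:] - base).sum() / fs
```

For a pulse of known area riding on a flat baseline, the routine returns that area independent of the baseline level.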
NASA Astrophysics Data System (ADS)
Nitadori, Keigo; Makino, Junichiro; Hut, Piet
2006-12-01
The main performance bottleneck of gravitational N-body codes is the force calculation between two particles. We have succeeded in speeding up this pair-wise force calculation by factors between 2 and 10, depending on the code and the processor on which the code is run. These speed-ups were obtained by writing highly fine-tuned code for x86_64 microprocessors. Any existing N-body code running on these chips can easily incorporate our assembly code programs. In the current paper, we present an outline of our overall approach, which we illustrate with one specific example: the use of a Hermite scheme for a direct N²-type integration on a single 2.0 GHz Athlon 64 processor, for which we obtain an effective performance of 4.05 Gflops for double-precision accuracy. In subsequent papers, we will discuss other variations, including the combination with N log N codes, single-precision implementations, and performance on other microprocessors.
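For reference, the pairwise kernel a Hermite scheme needs — acceleration plus its time derivative (jerk) — looks as follows in plain NumPy. This sketch shows the arithmetic being tuned, not the authors' assembly; `eps2` is a softening parameter added for illustration.

```python
import numpy as np

def acc_jerk(pos, vel, mass, eps2=1e-6):
    """Pairwise Newtonian acceleration and jerk (G = 1 units) for a
    Hermite integrator: a_i = sum_j m_j dr/r^3 and its exact time
    derivative. O(N^2) double loop, the hot spot of direct codes."""
    n = len(mass)
    acc = np.zeros_like(pos)
    jerk = np.zeros_like(pos)
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            dr = pos[j] - pos[i]
            dv = vel[j] - vel[i]
            r2 = dr @ dr + eps2          # softened squared distance
            r3 = r2 * np.sqrt(r2)
            rv = dr @ dv
            acc[i] += mass[j] * dr / r3
            jerk[i] += mass[j] * (dv / r3 - 3.0 * rv * dr / (r2 * r3))
    return acc, jerk
```

Newton's third law provides an easy check: for two equal masses the accelerations and jerks are equal and opposite.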
Microhartree precision in density functional theory calculations
NASA Astrophysics Data System (ADS)
Gulans, Andris; Kozhevnikov, Anton; Draxl, Claudia
2018-04-01
To address ultimate precision in density functional theory calculations we employ the full-potential linearized augmented plane-wave + local-orbital (LAPW + lo) method and justify its usage as a benchmark method. LAPW + lo and two completely unrelated numerical approaches, the multiresolution analysis (MRA) and the linear combination of atomic orbitals, yield total energies of atoms with mean deviations of 0.9 and 0.2 μ Ha , respectively. Spectacular agreement with the MRA is reached also for total and atomization energies of the G2-1 set consisting of 55 molecules. With the example of α iron we demonstrate the capability of LAPW + lo to reach μ Ha /atom precision also for periodic systems, which allows also for the distinction between the numerical precision and the accuracy of a given functional.
Tian, Xiaochun; Chen, Jiabin; Han, Yongqiang; Shang, Jianyu; Li, Nan
2016-01-01
Zero velocity update (ZUPT) plays an important role in pedestrian navigation algorithms with the premise that the zero velocity interval (ZVI) should be detected accurately and effectively. A novel adaptive ZVI detection algorithm based on a smoothed pseudo Wigner–Ville distribution to remove multiple frequencies intelligently (SPWVD-RMFI) is proposed in this paper. The novel algorithm adopts the SPWVD-RMFI method to extract the pedestrian gait frequency and to calculate the optimal ZVI detection threshold in real time by establishing the function relationships between the thresholds and the gait frequency; then, the adaptive adjustment of thresholds with gait frequency is realized and improves the ZVI detection precision. To put it into practice, a ZVI detection experiment is carried out; the result shows that compared with the traditional fixed threshold ZVI detection method, the adaptive ZVI detection algorithm can effectively reduce the false and missed detection rate of ZVI; this indicates that the novel algorithm has high detection precision and good robustness. Furthermore, pedestrian trajectory positioning experiments at different walking speeds are carried out to evaluate the influence of the novel algorithm on positioning precision. The results show that the ZVI detected by the adaptive ZVI detection algorithm for pedestrian trajectory calculation can achieve better performance. PMID:27669266
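The fixed-threshold baseline that the adaptive SPWVD-RMFI detector improves upon can be sketched as a moving-variance test on the accelerometer magnitude. The window length and threshold below are illustrative, not values from the paper.

```python
import numpy as np

def detect_zvi(acc_norm, threshold, win=5):
    """Fixed-threshold zero-velocity-interval detection: flag sample
    i as zero-velocity when the variance of the accelerometer
    magnitude in a window of +/- win samples falls below threshold.
    (The paper's contribution is adapting this threshold to the
    gait frequency.)"""
    n = len(acc_norm)
    zvi = np.zeros(n, dtype=bool)
    for i in range(n):
        lo, hi = max(0, i - win), min(n, i + win + 1)
        zvi[i] = np.var(acc_norm[lo:hi]) < threshold
    return zvi
```

On a signal with a quiet stance phase followed by an oscillating swing phase, the quiet samples are flagged and the oscillating ones are not.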
Precise predictions for V+jets dark matter backgrounds
NASA Astrophysics Data System (ADS)
Lindert, J. M.; Pozzorini, S.; Boughezal, R.; Campbell, J. M.; Denner, A.; Dittmaier, S.; Gehrmann-De Ridder, A.; Gehrmann, T.; Glover, N.; Huss, A.; Kallweit, S.; Maierhöfer, P.; Mangano, M. L.; Morgan, T. A.; Mück, A.; Petriello, F.; Salam, G. P.; Schönherr, M.; Williams, C.
2017-12-01
High-energy jets recoiling against missing transverse energy (MET) are powerful probes of dark matter at the LHC. Searches based on large MET signatures require a precise control of the Z(νν̄)+jet background in the signal region. This can be achieved by taking accurate data in control regions dominated by Z(ℓ⁺ℓ⁻)+jet, W(ℓν)+jet and γ+jet production, and extrapolating to the Z(νν̄)+jet background by means of precise theoretical predictions. In this context, recent advances in perturbative calculations open the door to significant sensitivity improvements in dark matter searches. In this spirit, we present a combination of state-of-the-art calculations for all relevant V+jets processes, including throughout NNLO QCD corrections and NLO electroweak corrections supplemented by Sudakov logarithms at two loops. Predictions at parton level are provided together with detailed recommendations for their usage in experimental analyses based on the reweighting of Monte Carlo samples. Particular attention is devoted to the estimate of theoretical uncertainties in the framework of dark matter searches, where subtle aspects such as correlations across different V+jet processes play a key role. The anticipated theoretical uncertainty in the Z(νν̄)+jet background is at the few percent level up to the TeV range.
Coello Pérez, Eduardo A.; Papenbrock, Thomas F.
2015-07-27
In this paper, we present a model-independent approach to electric quadrupole transitions of deformed nuclei. Based on an effective theory for axially symmetric systems, the leading interactions with electromagnetic fields enter as minimal couplings to gauge potentials, while subleading corrections employ gauge-invariant nonminimal couplings. This approach yields transition operators that are consistent with the Hamiltonian, and the power counting of the effective theory provides us with theoretical uncertainty estimates. We successfully test the effective theory in homonuclear molecules that exhibit a large separation of scales. For ground-state band transitions of rotational nuclei, the effective theory describes data well within theoretical uncertainties at leading order. To probe the theory at subleading order, data with higher precision would be valuable. For transitional nuclei, next-to-leading-order calculations and the high-precision data are consistent within the theoretical uncertainty estimates. In addition, we study the faint interband transitions within the effective theory and focus on the E2 transitions from the 0⁺₂ band (the "β band") to the ground-state band. Here the predictions from the effective theory are consistent with data for several nuclei, thereby proposing a solution to a long-standing challenge.
A Novel Method for Measuring Electrical Conductivity of High Insulating Oil Using Charge Decay
NASA Astrophysics Data System (ADS)
Wang, Z. Q.; Qi, P.; Wang, D. S.; Wang, Y. D.; Zhou, W.
2016-05-01
For high insulating oils, it is difficult to measure the conductivity precisely using the voltammetry method. A high-precision measurement method based on charge decay is proposed for the bulk electrical conductivity of high insulating oils (about 10^-9 to 10^-15 S/m). The oil is first insulated and charged, and then fully grounded. During the experimental procedure, the charge decay is observed to follow an exponential law, in accordance with ohmic relaxation theory. The time dependence of the charge density is automatically recorded using an ADAS and a computer. The relaxation time constant is fitted from the data using the Gnuplot software, and the electrical conductivity is calculated from the relaxation time constant and the dielectric permittivity. The electric potential is substituted for the charge density, since the charge density is difficult to measure directly. The conductivity of five kinds of oils was measured. Using this method, the conductivity of diesel oil is easily measured to be as low as 0.961 pS/m, as shown in Fig. 5.
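The conversion from decay curve to conductivity follows the ohmic relaxation law σ = ε₀ε_r/τ. A sketch of the fitting step (a log-linear least-squares fit rather than the authors' Gnuplot workflow; the test data are synthetic):

```python
import numpy as np

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def conductivity_from_decay(t, q, eps_r):
    """Fit q(t) = q0 * exp(-t / tau) by a linear fit to log(q),
    then return sigma = eps0 * eps_r / tau (ohmic charge
    relaxation). t in s, q in arbitrary charge units."""
    slope, _ = np.polyfit(t, np.log(q), 1)    # slope = -1/tau
    tau = -1.0 / slope
    return EPS0 * eps_r / tau
```

For a synthetic decay with τ = 100 s and ε_r = 2.1, the recovered conductivity matches ε₀ε_r/τ directly.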
Precision Branching Ratio Measurement for the Superallowed β⁺ Emitter ^62Ga
NASA Astrophysics Data System (ADS)
Finlay, Paul; Svensson, C. E.; Austin, R. A. E.; Ball, G. C.; Bandyopadhyay, D.; Chaffey, A.; Chakrawarthy, R. S.; Garrett, P. E.; Grinyer, G. F.; Hackman, G.; Hyland, B.; Kanungo, R.; Leslie, J. R.; Mattoon, C.; Morton, A. C.; Pearson, C. J.; Ressler, J. J.; Sarazin, F.; Savajols, H.
2007-10-01
A high-precision branching ratio measurement for the superallowed β⁺ emitter ^62Ga has been made using the 8π γ-ray spectrometer in conjunction with the SCintillating Electron-Positron Tagging ARray (SCEPTAR), as part of an ongoing experimental program in superallowed Fermi beta decay studies at the Isotope Separator and Accelerator (ISAC) facility at TRIUMF in Vancouver, Canada, which delivered a high-purity beam of ˜10^4 ^62Ga/s in December 2005. The present work represents the highest-statistics measurement of the ^62Ga superallowed branching ratio to date. Twenty-five γ rays emitted following non-superallowed decay branches of ^62Ga have been identified and their intensities determined. These data yield a superallowed branching ratio with 10-4 precision, and our observed branch to the first nonanalogue 0^+ state sets a new upper limit on the isospin-mixing correction δC1. By comparing our ft value with the world-average Ft, we make stringent tests of the different calculations of the isospin-symmetry-breaking correction δC, which is predicted to be large for ^62Ga.
2010-01-01
Background Cell motility is a critical parameter in many physiological as well as pathophysiological processes. In time-lapse video microscopy, manual cell tracking remains the most common method of analyzing the migratory behavior of cell populations. In addition to being labor-intensive, this method is susceptible to user-dependent errors regarding the selection of "representative" subsets of cells and the manual determination of precise cell positions. Results We have quantitatively analyzed these error sources, demonstrating that manual cell tracking of pancreatic cancer cells led to miscalculations of migration rates of up to 410%. In order to provide objective measurements of cell migration rates, we have employed multi-target tracking technologies commonly used in radar applications to develop a fully automated cell identification and tracking system suitable for high-throughput screening of video sequences of unstained living cells. Conclusion We demonstrate that our automatic multi-target tracking system identifies cell objects, follows individual cells and computes migration rates with high precision, clearly outperforming manual procedures. PMID:20377897
A Continuous Method for Gene Flow
Palczewski, Michal; Beerli, Peter
2013-01-01
Most modern population genetics inference methods are based on the coalescence framework. Methods that allow estimating parameters of structured populations commonly insert migration events into the genealogies. For these methods the calculation of the coalescence probability density of a genealogy requires a product over all time periods between events. Data sets that contain populations with high rates of gene flow among them require an enormous number of calculations. A new method, transition probability-structured coalescence (TPSC), replaces the discrete migration events with probability statements. Because the speed of calculation is independent of the amount of gene flow, this method allows calculating the coalescence densities efficiently. The current implementation of TPSC uses an approximation simplifying the interaction among lineages. Simulations and coverage comparisons of TPSC vs. MIGRATE show that TPSC allows estimation of high migration rates more precisely, but because of the approximation the estimation of low migration rates is biased. The implementation of TPSC into programs that calculate quantities on phylogenetic tree structures is straightforward, so the TPSC approach will facilitate more general inferences in many computer programs. PMID:23666937
NASA Astrophysics Data System (ADS)
Li, Huaming; Tian, Yanting; Sun, Yongli; Li, Mo; Nonequilibrium Materials and Physics Team; Computational Materials Science Team
In this work, we apply a general equation of state for liquids and an ab initio molecular-dynamics method to study the thermodynamic properties of liquid potassium under high pressure. The isothermal bulk modulus and molar volume of molten potassium are calculated with good precision as compared with the experimental data. The calculated internal energy data and the calculated values of the isobaric heat capacity of molten potassium show a minimum along the isothermal lines, as in the previous result obtained for liquid sodium. Expressions for the acoustical parameter and the nonlinearity parameter are obtained from thermodynamic relations based on the equation of state. Both parameters for liquid potassium are calculated under high pressure along the isothermal lines using the available thermodynamic data and numerical derivatives. Furthermore, ab initio molecular-dynamics simulations are used to calculate some thermodynamic properties of liquid potassium along the isothermal lines. Supported by the Scientific Research Starting Foundation of Taiyuan University of Technology, the Shanxi Provincial government ("100-talents program"), the China Scholarship Council, and the National Natural Science Foundation of China (NSFC) under Grant No. 51602213.
Radiative decay rate of excitons in square quantum wells: Microscopic modeling and experiment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Khramtsov, E. S.; Grigoryev, P. S.; Ignatiev, I. V.
The binding energy and the corresponding wave function of excitons in GaAs-based finite square quantum wells (QWs) are calculated by the direct numerical solution of the three-dimensional Schrödinger equation. The precise results for the lowest exciton state are obtained by discretizing the Hamiltonian with a high-order finite-difference scheme. The microscopic calculations are compared with the results obtained by the standard variational approach. The exciton binding energies found by the two methods coincide within 0.1 meV over a wide range of QW widths. The radiative decay rate is calculated for QWs of various widths using the exciton wave functions obtained by the direct and variational methods. The radiative decay rates are compared with the experimental data measured for high-quality GaAs/AlGaAs and InGaAs/GaAs QW heterostructures grown by molecular beam epitaxy. The calculated and measured values are in good agreement, though slight differences with earlier calculations of the radiative decay rate are observed.
Synthesis of a combined system for precise stabilization of the Spektr-UF observatory: II
NASA Astrophysics Data System (ADS)
Bychkov, I. V.; Voronov, V. A.; Druzhinin, E. I.; Kozlov, R. I.; Ul'yanov, S. A.; Belyaev, B. B.; Telepnev, P. P.; Ul'yashin, A. I.
2014-03-01
The paper presents the second part of the results of exploratory studies for the development of a combined system for high-precision stabilization of the optical telescope of the planned Spektr-UF international observatory [1]. A new modification of the rigorous method for the synthesis of nonlinear discrete-continuous stabilization systems with uncertainties is described, based on minimization of the guaranteed accuracy estimate calculated using vector Lyapunov functions. Using this method, the feedback parameters in the mode of precise inertial stabilization of the optical telescope axis are synthesized, taking into account the structural nonrigidity, the quantization of signals in time and level, and the errors of the orientation sensors, as well as the errors and limitations of the control moments of the flywheel actuators. The results of numerical experiments that demonstrate the quality of the synthesized system are presented.
Refraction corrections for surveying
NASA Technical Reports Server (NTRS)
Lear, W. M.
1979-01-01
Optical measurements of range and elevation angle are distorted by the earth's atmosphere. High precision refraction correction equations are presented which are ideally suited for surveying because their inputs are optically measured range and optically measured elevation angle. The outputs are true straight line range and true geometric elevation angle. The 'short distances' used in surveying allow the calculations of true range and true elevation angle to be quickly made using a programmable pocket calculator. Topics covered include the spherical form of Snell's Law; ray path equations; and integrating the equations. Short-, medium-, and long-range refraction corrections are presented in tables.
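The short-range corrections described above can be illustrated with the classical first-order refraction formula, R ≈ (n0 − 1)·tan(z); this is a generic textbook approximation for moderate zenith angles, not the paper's full ray-path integration of the spherical Snell's law:

```python
import math

def refraction_correction_arcsec(apparent_elev_deg, n0=1.000293):
    """First-order refraction correction R ~ (n0 - 1) * tan(z), where z is
    the apparent zenith angle and n0 a representative surface refractivity.
    Valid only away from the horizon; returns arcseconds."""
    z = math.radians(90.0 - apparent_elev_deg)
    return (n0 - 1.0) * math.tan(z) * 206265.0  # radians -> arcseconds

def true_elevation_deg(apparent_elev_deg):
    """True geometric elevation = apparent elevation minus the correction."""
    return apparent_elev_deg - refraction_correction_arcsec(apparent_elev_deg) / 3600.0
```

At 45 degrees apparent elevation this gives roughly one arcminute of bending, the order of magnitude a surveyor would remove before computing a straight-line range.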
Polarization observables in deuteron photodisintegration below 360 MeV
Glister, J.; Ron, G.; Lee, B. W.; ...
2011-02-03
We performed high-precision measurements of induced and transferred recoil proton polarization in d($$\vec{γ}$$, $$\vec{p}$$)n for photon energies of 277-357 MeV and θcm = 20°-120°. The measurements were motivated by a longstanding discrepancy between meson-baryon model calculations and data at higher energies. Moreover, at the low energies of this experiment, theory continues to fail to reproduce the data, indicating that either something is missing in the calculations and/or there is a problem with the accuracy of the nucleon-nucleon potential being used.
Design and Analysis of Hydrostatic Transmission System
NASA Astrophysics Data System (ADS)
Mistry, Kayzad A.; Patel, Bhaumikkumar A.; Patel, Dhruvin J.; Parsana, Parth M.; Patel, Jitendra P.
2018-02-01
This study develops a hydraulic circuit to drive a conveying system that handles heavy and delicate loads. Various safety circuits are added to ensure stable operation at high pressure and precise control. We show the calculation procedure for an arbitrarily selected load, and present the circuit design and the calculations for the various components used, along with a system simulation. The results show that the circuit functions stably and is efficient enough to transmit heavy loads. With this information, one can design hydrostatic circuits for various heavy loading conditions.
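As a rough illustration of the kind of component sizing described (the paper's actual load case and component data are not reproduced here; all numbers below are assumptions), a hydrostatic drive can be sized from the load torque, speed, and chosen system pressure using the ideal relations T = Δp·D/(2π) and Q = D·n:

```python
import math

def size_hydrostatic_drive(load_torque_nm, motor_speed_rpm, delta_p_pa, overall_eff=0.85):
    """Back-of-the-envelope hydrostatic transmission sizing (illustrative).
    Motor displacement D from T = delta_p * D / (2*pi), pump flow Q = D * n,
    prime-mover power from hydraulic power over an assumed overall efficiency.
    Returns (displacement cm^3/rev, flow L/min, input power kW)."""
    disp_m3_per_rev = 2.0 * math.pi * load_torque_nm / delta_p_pa   # ideal displacement
    flow_m3_per_s = disp_m3_per_rev * motor_speed_rpm / 60.0        # ideal pump flow
    hyd_power_w = delta_p_pa * flow_m3_per_s                        # hydraulic power
    input_power_w = hyd_power_w / overall_eff                       # prime-mover power
    return disp_m3_per_rev * 1e6, flow_m3_per_s * 6e4, input_power_w / 1e3
```

For example, a 200 N·m conveyor load at 500 rpm and 20 MPa system pressure calls for roughly a 63 cm³/rev motor and a 31 L/min pump.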
Technical Note: The determination of enclosed water volume in large flexible-wall mesocosms "KOSMOS"
NASA Astrophysics Data System (ADS)
Czerny, J.; Schulz, K. G.; Krug, S. A.; Ludwig, A.; Riebesell, U.
2013-03-01
The volume of water enclosed inside flexible-wall mesocosm bags is hard to estimate using geometrical calculations and can be strongly variable among bags of the same dimensions. Here we present a method for precise water volume determination in mesocosms using salinity as a tracer. Knowledge of the precise volume of water enclosed allows establishment of exactly planned treatment concentrations and calculation of elemental budgets.
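The salinity-tracer idea reduces to a simple mass balance: adding a known volume of brine and measuring salinity before and after yields the enclosed volume. The sketch below is a minimal illustration of that balance (variable names are illustrative, not the authors' notation):

```python
def mesocosm_volume(s_before, s_after, v_added_m3, s_added):
    """Enclosed water volume from a salinity mass balance after adding a
    known volume of high-salinity brine:
        s_before * V + s_added * v_added = s_after * (V + v_added)
    Solving for V gives the expression below."""
    return v_added_m3 * (s_added - s_after) / (s_after - s_before)
```

With a well-mixed bag, the precision of the volume estimate is then set by the precision of the salinity measurements rather than by the bag's geometry.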
NASA Astrophysics Data System (ADS)
Weinheimer, Oliver; Wielpütz, Mark O.; Konietzke, Philip; Heussel, Claus P.; Kauczor, Hans-Ulrich; Brochhausen, Christoph; Hollemann, David; Savage, Dasha; Galbán, Craig J.; Robinson, Terry E.
2017-02-01
Cystic fibrosis (CF) results in severe bronchiectasis in nearly all cases. Bronchiectasis is a disease in which parts of the airways are permanently dilated. The development and progression of bronchiectasis are not evenly distributed over the entire lungs; rather, individual functional units are affected differently. We developed a fully automated method for the precise calculation of lobe-based airway taper indices. To calculate taper indices, some preparatory algorithms are needed. The airway tree is segmented, skeletonized, and transformed to a rooted acyclic graph. This graph is used to label the airways. Then a modified version of the previously validated integral-based method (IBM) for airway geometry determination is applied. The rooted graph and the airway lumen and wall information are then used to calculate the airway taper indices. Using a computer-generated phantom simulating 10 cross sections of airways, we present results showing a high accuracy of the modified IBM. The new taper index calculation method was applied to 144 volumetric inspiratory low-dose MDCT scans. The scans were acquired from 36 children with mild CF at 4 time points (baseline, 3 months, 1 year, 2 years). We found a moderate correlation with the visual lobar Brody bronchiectasis scores by three raters (r² = 0.36, p < .0001). The taper index has the potential to be a precise imaging biomarker, but further improvements are needed. In combination with other imaging biomarkers, taper index calculation can be an important tool for monitoring the progression and the individual treatment of patients with bronchiectasis.
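One plausible form of a taper index (an assumption for illustration; the authors' exact definition may differ) is the slope of log lumen area against distance along the airway centerline, which is zero for a perfect cylinder and shrinks toward zero in dilated, non-tapering airways:

```python
import math

def taper_index(distances_mm, lumen_areas_mm2):
    """Illustrative taper index: negative ordinary-least-squares slope of
    log(lumen area) versus centerline distance. A healthy, tapering airway
    yields a positive index; a dilated airway yields an index near zero."""
    xs = distances_mm
    ys = [math.log(a) for a in lumen_areas_mm2]
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return -slope  # positive = narrowing with distance
```

Applied per labeled airway path and averaged per lobe, an index of this kind gives one lobe-based number comparable across time points.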
Second Iteration of Photogrammetric Pipeline to Enhance the Accuracy of Image Pose Estimation
NASA Astrophysics Data System (ADS)
Nguyen, T. G.; Pierrot-Deseilligny, M.; Muller, J.-M.; Thom, C.
2017-05-01
In the classical photogrammetric processing pipeline, automatic tie point extraction plays a key role in the quality of the achieved results. Image tie points are crucial to pose estimation and have a significant influence on the precision of the calculated orientation parameters; therefore, both the relative and absolute orientations of the 3D model can be affected. By improving the precision of image tie point measurement, one can enhance the quality of image orientation. The quality of image tie points depends on several factors, such as their multiplicity, measurement precision, and distribution in the 2D images as well as in the 3D scene. In complex acquisition scenarios such as indoor applications and oblique aerial images, tie point extraction is limited when only image information can be exploited. Hence, we propose a method that improves the precision of pose estimation in complex scenarios by adding a second iteration to the classical processing pipeline. The result of the first iteration is used as a priori information to guide the extraction of new tie points with better quality. Evaluated on multiple case studies, the proposed method shows its validity and its high potential for precision improvement.
Precision direct photon spectra at high energy and comparison to the 8 TeV ATLAS data
Schwartz, Matthew D.
2016-09-01
The direct photon spectrum is computed to the highest currently available precision and compared to ATLAS data from 8 TeV collisions at the LHC. The prediction includes threshold resummation at next-to-next-to-next-to-leading logarithmic order through the program PeTeR, matched to next-to-leading fixed order with fragmentation effects using JetPhox and includes the resummation of leading-logarithmic electroweak Sudakov effects. Remarkably, improved agreement with data can be seen when each component of the calculation is added successively. This comparison demonstrates the importance of both threshold logs and electroweak Sudakov effects. Numerical values for the predictions are included.
Summary of the Topical Workshop on Top Quark Differential Distributions 2014
NASA Astrophysics Data System (ADS)
Czakon, Michal; Mitov, Alexander; Rojo, Juan
2016-01-01
We summarize the Topical Workshop on Top Quark Differential Distributions 2014, which took place in Cannes immediately before the annual Top2014 conference. The workshop was motivated by the availability of top quark differential distributions at next-to-next-to-leading order and the forthcoming Large Hadron Collider (LHC) 13 TeV data. The main goal of the workshop was to explore the impact of improved calculations of top quark production on precision LHC measurements, PDF determinations and searches for physics beyond the Standard Model, as well as finding ways in which the high precision data from ATLAS, CMS and LHCb can be used to further refine theoretical predictions for top production.
NASA Technical Reports Server (NTRS)
Flock, W. L.
1981-01-01
When high precision is required for range measurement on Earth-space paths, it is necessary to correct as accurately as possible for excess range delays due to the dry air, water vapor, and liquid water content of the atmosphere. Calculations based on representative values of atmospheric parameters are useful for illustrating the order of magnitude of the expected delays. Range delay, time delay, and phase delay are simply and directly related. Doppler frequency variations or noise are proportional to the time rate of change of excess range delay. Tropospheric effects were examined as part of an overall consideration of the capability of precision two-way ranging and Doppler systems.
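The order-of-magnitude delays mentioned can be sketched with a flat-Earth 1/sin(E) mapping function; the zenith values below are representative assumptions of the kind the text describes, not the paper's computed results:

```python
import math

def excess_range_delay_m(elev_deg, zenith_dry_m=2.3, zenith_wet_m=0.2):
    """Representative tropospheric excess range delay: typical zenith dry and
    wet delays scaled by a flat-Earth 1/sin(E) mapping function. Valid only
    well above the horizon; model and default values are illustrative."""
    mapping = 1.0 / math.sin(math.radians(elev_deg))
    return (zenith_dry_m + zenith_wet_m) * mapping
```

At zenith the excess delay is a couple of meters; at 30 degrees elevation it roughly doubles, and its time rate of change sets the Doppler noise contribution noted above.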
Garay-Avendaño, Roger L; Zamboni-Rached, Michel
2014-07-10
In this paper, we propose a method that is capable of describing in exact and analytic form the propagation of nonparaxial scalar and electromagnetic beams. The main features of the method presented here are its mathematical simplicity and the fast convergence in the cases of highly nonparaxial electromagnetic beams, enabling us to obtain high-precision results without the necessity of lengthy numerical simulations or other more complex analytical calculations. The method can be used in electromagnetism (optics, microwaves) as well as in acoustics.
The quantitative control and matching of an optical false color composite imaging system
NASA Astrophysics Data System (ADS)
Zhou, Chengxian; Dai, Zixin; Pan, Xizhe; Li, Yinxi
1993-10-01
Design of an imaging system for optical false color composite (OFCC) capable of high-precision density-exposure time control and color balance is presented. The system provides high quality FCC image data that can be analyzed using a quantitative calculation method. The quality requirement to each part of the image generation system is defined, and the distribution of satellite remote sensing image information is analyzed. The proposed technology makes it possible to present the remote sensing image data more effectively and accurately.
Yang, Xuming; Ye, Yijun; Xia, Yong; Wei, Xuanzhong; Wang, Zheyu; Ni, Hongmei; Zhu, Ying; Xu, Lingyu
2015-02-01
To develop a more precise and accurate method for locating acupoints, and to identify a procedure to verify whether an acupoint has been correctly located. On the face, we used acupoint locations from different acupuncture experts and obtained the most precise and accurate values of acupoint location based on a consistency information fusion algorithm, through a virtual simulation of the facial orientation coordinate system. Because of inconsistencies in each acupuncture expert's original data, systematic error affected the general weight calculation. First, we corrected each expert's systematic error in acupoint location, to obtain a rational quantification of each expert's consistency support degree for acupuncture and moxibustion acupoint location, and to obtain pointwise variable-precision fusion results, enhancing each expert's acupoint location fusion error to pointwise variable precision. Then, we made more effective use of the measured characteristics of the different experts' acupoint locations, to improve the utilization efficiency of the measurement information and the precision and accuracy of acupoint location. By applying the consistency-matrix pointwise fusion method to the acupuncture experts' acupoint location values, each expert's acupoint location information could be calculated, and the most precise and accurate values of each expert's acupoint location could be obtained.
Accuracy evaluation of intraoral optical impressions: A clinical study using a reference appliance.
Atieh, Mohammad A; Ritter, André V; Ko, Ching-Chang; Duqum, Ibrahim
2017-09-01
Trueness and precision are used to evaluate the accuracy of intraoral optical impressions. Although the in vivo precision of intraoral optical impressions has been reported, in vivo trueness has not been evaluated because of limitations in the available protocols. The purpose of this clinical study was to compare the accuracy (trueness and precision) of optical and conventional impressions by using a novel study design. Five study participants consented and were enrolled. For each participant, optical and conventional (vinylsiloxanether) impressions of a custom-made intraoral Co-Cr alloy reference appliance fitted to the mandibular arch were obtained by 1 operator. Three-dimensional (3D) digital models were created for stone casts obtained from the conventional impression group and for the reference appliances by using a validated high-accuracy reference scanner. For the optical impression group, 3D digital models were obtained directly from the intraoral scans. The total mean trueness of each impression system was calculated by averaging the mean absolute deviations of the impression replicates from their 3D reference model for each participant, followed by averaging the obtained values across all participants. The total mean precision for each impression system was calculated by averaging the mean absolute deviations between all the impression replicas for each participant (10 pairs), followed by averaging the obtained values across all participants. Data were analyzed using repeated measures ANOVA (α=.05), first to assess whether a systematic difference in trueness or precision of replicate impressions could be found among participants and second to assess whether the mean trueness and precision values differed between the 2 impression systems. Statistically significant differences were found between the 2 impression systems for both mean trueness (P=.010) and mean precision (P=.007). 
Conventional impressions were more accurate, with a mean trueness of 17.0 ±6.6 μm and a mean precision of 16.9 ±5.8 μm, than optical impressions, with a mean trueness of 46.2 ±11.4 μm and a mean precision of 61.1 ±4.9 μm. Complete-arch (first molar to first molar) optical impressions were less accurate than conventional impressions but may be adequate for quadrant impressions.
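The averaging scheme described for trueness and precision can be sketched as follows, with short lists of scalar deviations standing in for the 3D surface comparisons (a simplification; the study compares full digital models):

```python
from itertools import combinations

def mad(a, b):
    """Mean absolute deviation between two equal-length surface samples."""
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def trueness(replicas_by_participant, reference_by_participant):
    """Average over participants of the mean MAD of each impression replicate
    from that participant's 3D reference model."""
    vals = []
    for pid, reps in replicas_by_participant.items():
        ref = reference_by_participant[pid]
        vals.append(sum(mad(r, ref) for r in reps) / len(reps))
    return sum(vals) / len(vals)

def precision(replicas_by_participant):
    """Average over participants of the mean pairwise MAD among replicates
    (5 replicates give 10 pairs per participant, as in the study)."""
    vals = []
    for reps in replicas_by_participant.values():
        pairs = list(combinations(reps, 2))
        vals.append(sum(mad(a, b) for a, b in pairs) / len(pairs))
    return sum(vals) / len(vals)
```

Trueness thus needs the reference appliance, while precision is computable from the replicates alone, which is why only precision had been reported in vivo before this design.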
Tang, Tao; Stevenson, R Jan; Infante, Dana M
2016-10-15
Regional variation in both natural environment and human disturbance can influence performance of ecological assessments. In this study we calculated 5 types of benthic diatom multimetric indices (MMIs) with 3 different approaches to account for variation in ecological assessments. We used: site groups defined by ecoregions or diatom typologies; the same or different sets of metrics among site groups; and unmodeled or modeled MMIs, where models accounted for natural variation in metrics within site groups by calculating an expected reference condition for each metric and each site. We used data from the USEPA's National Rivers and Streams Assessment to calculate the MMIs and evaluate changes in MMI performance. MMI performance was evaluated with indices of precision, bias, responsiveness, sensitivity and relevancy which were respectively measured as MMI variation among reference sites, effects of natural variables on MMIs, difference between MMIs at reference and highly disturbed sites, percent of highly disturbed sites properly classified, and relation of MMIs to human disturbance and stressors. All 5 types of MMIs showed considerable discrimination ability. Using different metrics among ecoregions sometimes reduced precision, but it consistently increased responsiveness, sensitivity, and relevancy. Site specific metric modeling reduced bias and increased responsiveness. Combined use of different metrics among site groups and site specific modeling significantly improved MMI performance irrespective of site grouping approach. Compared to ecoregion site classification, grouping sites based on diatom typologies improved precision, but did not improve overall performance of MMIs if we accounted for natural variation in metrics with site specific models. We conclude that using different metrics among ecoregions and site specific metric modeling improve MMI performance, particularly when used together. 
Applying these MMI approaches in ecological assessments introduces a tradeoff with assessment consistency when metrics differ across site groups, but it justifies the convenient and consistent use of ecoregions.
High-Precision Sub-Doppler Infrared Spectroscopy of HeH^+
NASA Astrophysics Data System (ADS)
Perry, Adam J.; Hodges, James N.; Markus, Charles; Kocheril, G. Stephen; Jenkins, Paul A., II; McCall, Benjamin J.
2014-06-01
The helium hydride ion, HeH^+, is the simplest heteronuclear diatomic, and is composed of the two most abundant elements in the universe. It is widely believed that this ion was among the first molecules to be formed; thus it has been of great interest to scientists studying the chemistry of the early universe. HeH^+ is also isoelectronic to H_2, which makes it a great target for theorists seeking to include adiabatic and non-adiabatic corrections to its Born-Oppenheimer potential energy surface. The accuracy of such calculations is further improved by incorporating electron relativistic and quantum electrodynamic effects. Using the highly sensitive spectroscopic technique of Noise Immune Cavity Enhanced Optical Heterodyne Velocity Modulation Spectroscopy (NICE-OHVMS), we are able to perform sub-Doppler spectroscopy on ions of interest. When combined with frequency calibration from an optical frequency comb, we fit line centers with sub-MHz precision, as has previously been shown for the H3^+, HCO^+, and CH5^+ ions. Here we report a list of the most precisely measured rovibrational transitions of HeH^+ to date. These measurements should allow theorists to continue to push the boundaries of ab initio calculations in order to further study this important fundamental species. S. Lepp, P. C. Stancil, A. Dalgarno, J. Phys. B (2002), 35, R57. S. Lepp, Astrophys. Space Sci. (2003), 285, 737. K. Pachucki, J. Komasa, J. Chem. Phys. (2012), 137, 204314. J. N. Hodges, A. J. Perry, P. A. Jenkins II, B. M. Siller, B. J. McCall, J. Chem. Phys. (2013), 139, 164201.
NASA Technical Reports Server (NTRS)
Sackett, L. L.; Edelbaum, T. N.; Malchow, H. L.
1974-01-01
This manual is a guide for using a computer program which calculates time-optimal trajectories for high- and low-thrust geocentric transfers. Either SEP or NEP may be assumed, and a one- or two-impulse, fixed total delta-V, initial high-thrust phase may be included. A single impulse of specified delta-V may also be included after the low-thrust phase. The low-thrust phase utilizes equinoctial orbital elements to avoid the classical singularities, and Kryloff-Boguliuboff averaging to help ensure more rapid computation. The program is written in FORTRAN IV in double precision for use on an IBM 360 computer. The manual includes a description of the problem treated, input/output information, examples of runs, and source code listings.
Multi-spectral temperature measurement method for gas turbine blade
NASA Astrophysics Data System (ADS)
Gao, Shan; Feng, Chi; Wang, Lixin; Li, Dong
2016-02-01
One of the basic methods to improve both the thermal efficiency and power output of a gas turbine is to increase the firing temperature. However, gas turbine blades are easily damaged in harsh high-temperature and high-pressure environments. Therefore, ensuring that the blade temperature remains within the design limits is very important. There are unsolved problems in blade temperature measurement, relating to the emissivity of the blade surface, influences of the combustion gases, and reflections of radiant energy from the surroundings. In this study, the emissivity of blade surfaces has been measured, with errors reduced by a fitting method, influences of the combustion gases have been calculated for different operational conditions, and a reflection model has been built. An iterative computing method is proposed for calculating blade temperatures, and the experimental results show that this method has high precision.
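The paper's iterative method accounts for measured emissivity, combustion-gas influence, and reflected radiant energy. As a much-simplified illustration of radiance-based temperature calculation only (not the authors' model), a two-colour ratio temperature can be computed in the Wien approximation:

```python
import math

C2 = 1.4388e-2  # second radiation constant, m*K

def wien_radiance(lam_m, temp_k, eps=1.0):
    """Spectral radiance in the Wien approximation, with the constant
    prefactor dropped since only radiance ratios are used below."""
    return eps * lam_m ** -5 * math.exp(-C2 / (lam_m * temp_k))

def ratio_temperature(L1, L2, lam1, lam2, eps1=1.0, eps2=1.0):
    """Two-colour temperature from the radiance ratio at two wavelengths.
    Emissivity ratios (e.g. from a fitted emissivity model) enter here; a
    grey surface (eps1 == eps2) cancels them entirely."""
    lhs = math.log((L1 / L2) * (eps2 / eps1) * (lam1 / lam2) ** 5)
    return C2 * (1.0 / lam2 - 1.0 / lam1) / lhs
```

Because only the emissivity ratio enters, the ratio method is less sensitive to the absolute emissivity error that the fitting method in the paper is designed to reduce.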
Experimental Guidance for Isospin Symmetry Breaking Calculations via Single Neutron Pickup Reactions
NASA Astrophysics Data System (ADS)
Leach, K. G.; Garrett, P. E.; Bangay, J. C.; Bianco, L.; Demand, G. A.; Finlay, P.; Green, K. L.; Phillips, A. A.; Rand, E. T.; Sumithrarachchi, C. S.; Svensson, C. E.; Triambak, S.; Wong, J.; Ball, G.; Faestermann, T.; Krücken, R.; Hertenberger, R.; Wirth, H.-F.; Towner, I. S.
2013-03-01
Recent activity in superallowed isospin-symmetry-breaking correction calculations has prompted interest in experimental confirmation of these calculation techniques. The shell-model calculations of Towner and Hardy (2008) include the opening of specific core orbitals that were previously frozen. This has resulted in significant shifts in some of the δC values and in improved agreement of the individual corrected {F}t values with the adopted world average of the 13 cases currently included in the high-precision evaluation of Vud. While the nucleus-to-nucleus variation of {F}t is consistent with the conserved-vector-current (CVC) hypothesis of the Standard Model, these new calculations must be thoroughly tested, and guidance must be given for their improvement. Presented here are details of a 64Zn($\vec{d}$, t)63Zn experiment, undertaken to provide such guidance.
Precision gravity studies at Cerro Prieto: a progress report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Grannell, R.B.; Kroll, R.C.; Wyman, R.M.
A third and fourth year of precision gravity data collection and reduction have now been completed at the Cerro Prieto geothermal field. In summary, 66 permanently monumented stations were occupied between December and April of 1979 to 1980 and 1980 to 1981 by a LaCoste and Romberg gravity meter (G300) at least twice, with a minimum of four replicate values obtained each time. Station 20 alternate, a stable base located on Cerro Prieto volcano, was used as the reference base for the third year, and all the stations were tied to this base using four- to five-hour loops. The field data were reduced to observed gravity values by (1) multiplication by the appropriate calibration factor; (2) removal of calculated tidal effects; (3) calculation of average values at each station; and (4) linear removal of accumulated instrumental drift which remained after carrying out the first three reductions. Following the reduction of values and calculation of gravity differences between individual stations and the base stations, standard deviations were calculated for the averaged occupation values (two to three per station). In addition, pooled variance calculations were carried out to estimate precision for the surveys as a whole.
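The four reduction steps can be sketched directly; this is a simplified sketch that assumes the linear drift rate has already been estimated from the repeated base-station ties:

```python
def reduce_station(readings, tides_mgal, hours, cal_factor, drift_mgal_per_hour):
    """Reduce replicate gravimeter readings at one station following the four
    steps in the text: (1) apply the calibration factor, (2) subtract the
    computed tidal effect, (3) average the replicates, (4) remove linear
    instrumental drift (drift rate assumed known here). Values in mGal."""
    corrected = [cal_factor * r - tide - drift_mgal_per_hour * t
                 for r, tide, t in zip(readings, tides_mgal, hours)]
    return sum(corrected) / len(corrected)
```

Differencing the reduced station values against the base-station value then gives the gravity differences whose standard deviations and pooled variances quantify survey precision.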
Non-rigid Earth rotation series
NASA Astrophysics Data System (ADS)
Pashkevich, V. V.
2008-04-01
In recent years, many attempts have been made to derive a high-precision theory of non-rigid Earth rotation. For this purpose, different transfer functions are used. Usually these transfer functions are applied to the series representing the nutation in longitude and in obliquity of the rigid Earth rotation with respect to the ecliptic of date. The aim of this investigation is the construction of new high-precision non-rigid Earth rotation series (SN9000), dynamically adequate to the DE404/LE404 ephemeris over 2000 years, which are expressed as functions of the Euler angles ψ, θ and φ with respect to the fixed ecliptic plane and equinox J2000.0. The early stages of the previous investigation: 1. A high-precision numerical solution of the rigid Earth rotation was constructed (V. V. Pashkevich, G. I. Eroshkin and A. Brzezinski, 2004; V. V. Pashkevich and G. I. Eroshkin, Proceedings of Journées 2004). The initial conditions were calculated from SMART97 (P. Bretagnon, G. Francou, P. Rocher, J. L. Simon, 1998). The discrepancies between the numerical solution and the semi-analytical solution SMART97 were obtained in Euler angles over 2000 years with one-day spacing. 2. The discrepancies were investigated by least-squares and spectral analysis algorithms (V. V. Pashkevich and G. I. Eroshkin, Proceedings of Journées 2005), and the high-precision rigid Earth rotation series S9000 were determined (V. V. Pashkevich and G. I. Eroshkin, 2005). The next stage of this investigation: 3. The new high-precision non-rigid Earth rotation series (SN9000), expressed as functions of the Euler angles, are constructed using the method of P. Bretagnon, P. M. Mathews and J.-L. Simon (1999) and the transfer function MHB2002 (Mathews, P. M., Herring, T. A., and Buffett, B. A., 2002).
The research of adaptive-exposure on spot-detecting camera in ATP system
NASA Astrophysics Data System (ADS)
Qian, Feng; Jia, Jian-jun; Zhang, Liang; Wang, Jian-Yu
2013-08-01
High-precision acquisition, tracking, and pointing (ATP) is one of the key techniques of laser communication. The spot-detecting camera is used to detect the direction of the beacon in the laser communication link, so that it can provide the position information of the communication terminal to the ATP system. The positioning accuracy of the camera directly determines the capability of the laser communication system, so the spot-detecting camera in satellite-to-earth laser communication ATP systems requires high precision in target detection: the positioning accuracy should be better than +/-1 μrad. Spot-detecting cameras usually adopt a centroid algorithm to obtain the position of the light spot on the detector. When the intensity of the beacon is moderate, the results of the centroid algorithm are precise. But the intensity of the beacon changes greatly during communication because of distance, atmospheric scintillation, weather, etc. The output signal of the detector will be insufficient when the camera underexposes the beacon because of low light intensity; on the other hand, it will be saturated when the camera overexposes the beacon because of high light intensity. The accuracy of the centroid calculation becomes worse if the spot-detecting camera underexposes or overexposes, and the positioning accuracy of the camera is then reduced appreciably. To improve the accuracy, space-based cameras should regulate exposure time in real time according to the light intensity. The adaptive-exposure algorithm for a spot-detecting camera based on a complementary metal-oxide-semiconductor (CMOS) detector is analyzed. Based on the analytic results, a CMOS camera in a space-based laser communication system is described, which uses the adaptive-exposure algorithm to adjust exposure time. Test results from the imaging experiment system verify the design. Experimental results prove that this design can restrain the reduction of positioning accuracy caused by changes in light intensity, so the camera can maintain stable and high positioning accuracy during communication.
Centroiding Experiment for Determining the Positions of Stars with High Precision
NASA Astrophysics Data System (ADS)
Yano, T.; Araki, H.; Hanada, H.; Tazawa, S.; Gouda, N.; Kobayashi, Y.; Yamada, Y.; Niwa, Y.
2010-12-01
We have experimented with the determination of the positions of star images on a detector with high precision such as 10 microarcseconds, required by a space astrometry satellite, JASMINE. In order to accomplish such a precision, we take the following two procedures. (1) We determine the positions of star images on the detector with the precision of about 0.01 pixel for one measurement, using an algorithm for estimating them from photon weighted means of the star images. (2) We determine the positions of star images with the precision of about 0.0001-0.00001 pixel, which corresponds to that of 10 microarcseconds, using a large amount of data over 10000 measurements, that is, the error of the positions decreases according to the amount of data. Here, we note that the procedure 2 is not accomplished when the systematic error in our data is not excluded adequately even if we use a large amount of data. We first show the method to determine the positions of star images on the detector using photon weighted means of star images. This algorithm, used in this experiment, is very useful because it is easy to calculate the photon weighted mean from the data. This is very important in treating a large amount of data. Furthermore, we need not assume the shape of the point spread function in deriving the centroid of star images. Second, we show the results in the laboratory experiment for precision of determining the positions of star images. We obtain that the precision of estimation of positions of star images on the detector is under a variance of 0.01 pixel for one measurement (procedure 1). We also obtain that the precision of the positions of star images becomes a variance of about 0.0001 pixel using about 10000 measurements (procedure 2).
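Procedure 1's photon-weighted mean and procedure 2's averaging over many measurements can be sketched as follows (shown in 1-D over pixel indices for brevity):

```python
def centroid_1d(intensities):
    """Photon-weighted mean position over pixel indices: the centroid is the
    intensity-weighted average coordinate, with no PSF shape assumed."""
    total = sum(intensities)
    return sum(i * c for i, c in enumerate(intensities)) / total

def averaged_centroid(measurements):
    """Average the centroid over many independent measurements. If each frame
    has random error sigma, the mean improves roughly as sigma / sqrt(M),
    which is how ~0.01 px per frame reaches ~0.0001 px over ~10000 frames,
    provided systematic errors are adequately excluded."""
    cents = [centroid_1d(m) for m in measurements]
    return sum(cents) / len(cents)
```

The simplicity of the weighted mean is the point made in the text: it is cheap enough to apply to very large numbers of frames.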
NASA Astrophysics Data System (ADS)
Staier, Florian; Eipel, Heinz; Matula, Petr; Evsikov, Alexei V.; Kozubek, Michal; Cremer, Christoph; Hausmann, Michael
2011-09-01
With the development of novel fluorescence techniques, high-resolution light microscopy has become a challenging technique for investigations of the three-dimensional (3D) micro-cosmos in cells and sub-cellular components. So far, all fluorescence microscopes applied for 3D imaging in the biosciences show a spatially anisotropic point spread function, resulting in an anisotropic optical resolution or point localization precision. To overcome this shortcoming, micro axial tomography was suggested, which allows object tilting on the microscope stage and leads to an improvement in localization precision and spatial resolution. Here, we present a miniaturized device which can be implemented in a motor-driven microscope stage. The footprint of this device corresponds to a standard microscope slide. A special glass fiber can be manually adjusted in the object space of the microscope lens. Stepwise fiber rotation is controlled by a miniaturized stepping motor incorporated into the device. By means of a special mounting device, test particles were fixed onto glass fibers, optically localized with high precision, and automatically rotated to obtain views from different perspective angles under which the distances of corresponding pairs of objects were determined. From these angle-dependent distance values, the real 3D distance was calculated with a precision in the ten-nanometer range (corresponding here to an optical resolution of 10-30 nm) using standard microscopic equipment. As a proof of concept, the spindle apparatus of a mature mouse oocyte was imaged during metaphase II meiotic arrest under different perspectives. Only very few images registered under different rotation angles are sufficient for full 3D reconstruction. The results indicate the principal advantage of the micro axial tomography approach for many microscope setups, including those with resolutions improved by high-precision localization determination.
Precise, High-throughput Analysis of Bacterial Growth.
Kurokawa, Masaomi; Ying, Bei-Wen
2017-09-19
Bacterial growth is a central concept in the development of modern microbial physiology, as well as in the investigation of cellular dynamics at the systems level. Recent studies have reported correlations between bacterial growth and genome-wide events, such as genome reduction and transcriptome reorganization. Correctly analyzing bacterial growth is crucial for understanding the growth-dependent coordination of gene functions and cellular components. Accordingly, the precise quantitative evaluation of bacterial growth in a high-throughput manner is required. Emerging technological developments offer new experimental tools that allow updates of the methods used for studying bacterial growth. The protocol introduced here employs a microplate reader with a highly optimized experimental procedure for the reproducible and precise evaluation of bacterial growth. This protocol was used to evaluate the growth of several previously described Escherichia coli strains. The main steps of the protocol are as follows: the preparation of a large number of cell stocks in small vials for repeated tests with reproducible results, the use of 96-well plates for high-throughput growth evaluation, and the manual calculation of two major parameters (i.e., maximal growth rate and population density) representing the growth dynamics. In comparison to the traditional colony-forming unit (CFU) assay, which counts the cells that are cultured in glass tubes over time on agar plates, the present method is more efficient and provides more detailed temporal records of growth changes, but has a stricter detection limit at low population densities. In summary, the described method is advantageous for the precise and reproducible high-throughput analysis of bacterial growth, which can be used to draw conceptual conclusions or to make theoretical observations.
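The maximal growth rate mentioned above can be extracted from optical-density readings by fitting ln(OD) over a sliding window and keeping the steepest slope. The sketch below is illustrative only, not the authors' exact calculation; the window size, time points, and OD values are all invented:

```python
import math

def max_growth_rate(times_h, od, window=3):
    """Sliding-window linear fit of ln(OD) vs time; returns the
    steepest slope (per hour) as the maximal specific growth rate."""
    logs = [math.log(v) for v in od]
    best = 0.0
    for i in range(len(times_h) - window + 1):
        t = times_h[i:i + window]
        y = logs[i:i + window]
        n = len(t)
        tbar, ybar = sum(t) / n, sum(y) / n
        slope = sum((ti - tbar) * (yi - ybar) for ti, yi in zip(t, y)) \
                / sum((ti - tbar) ** 2 for ti in t)
        best = max(best, slope)
    return best

# Synthetic exponential growth at mu = 0.7 per hour.
times = [0, 0.5, 1.0, 1.5, 2.0]
od = [0.01 * math.exp(0.7 * t) for t in times]
print(max_growth_rate(times, od))
```

For ideal exponential data the fitted slope recovers the true specific growth rate; real plate-reader curves additionally require background subtraction and a detection-limit cutoff.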
Gjerde, Hallvard; Verstraete, Alain
2010-02-25
To study several methods for estimating the prevalence of high blood concentrations of tetrahydrocannabinol and amphetamine in a population of drug users by analysing oral fluid (saliva). Five methods were compared, including simple calculation procedures dividing the drug concentrations in oral fluid by average or median oral fluid/blood (OF/B) drug concentration ratios or linear regression coefficients, and more complex Monte Carlo simulations. Populations of 311 cannabis users and 197 amphetamine users from the Rosita-2 Project were studied. The results of a feasibility study suggested that the Monte Carlo simulations might give better accuracy than the simple calculations if good data on OF/B ratios are available. When using only 20 randomly selected OF/B ratios, a Monte Carlo simulation gave the best accuracy but not the best precision. Dividing by the OF/B regression coefficient gave acceptable accuracy and precision, and was therefore the best method. None of the methods gave acceptable accuracy if the prevalence of high blood drug concentrations was less than 15%. Dividing the drug concentration in oral fluid by the OF/B regression coefficient gave an acceptable estimate of high blood drug concentrations in a population, and may therefore give valuable additional information on possible drug impairment, e.g. in roadside surveys of drugs and driving. If good data on the distribution of OF/B ratios are available, a Monte Carlo simulation may give better accuracy.
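The two estimation strategies compared above can be sketched in a few lines. This is a schematic illustration, not the study's protocol; the concentrations, the regression coefficient of 4.0, and the threshold are all invented values:

```python
import random

def prevalence_high_blood(of_concs, regression_coeff, threshold):
    """Divide each oral-fluid concentration by the OF/B regression
    coefficient to estimate blood concentration, then count the
    fraction of subjects above the 'high' threshold."""
    est_blood = [c / regression_coeff for c in of_concs]
    return sum(b > threshold for b in est_blood) / len(est_blood)

def prevalence_monte_carlo(of_concs, ofb_ratios, threshold,
                           n_draws=10000, seed=1):
    """Monte Carlo variant: pair each random subject with a ratio drawn
    from the observed OF/B distribution and average over many draws."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_draws):
        c = rng.choice(of_concs)
        r = rng.choice(ofb_ratios)
        hits += (c / r) > threshold
    return hits / n_draws

of = [10, 40, 80, 120, 5, 60]   # hypothetical oral-fluid concentrations
print(prevalence_high_blood(of, regression_coeff=4.0, threshold=20.0))
```

The Monte Carlo version only pays off when the empirical OF/B ratio distribution is well sampled, mirroring the paper's conclusion.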
Real-time and high accuracy frequency measurements for intermediate frequency narrowband signals
NASA Astrophysics Data System (ADS)
Tian, Jing; Meng, Xiaofeng; Nie, Jing; Lin, Liwei
2018-01-01
Real-time and accurate measurements of intermediate frequency signals based on microprocessors are difficult due to computational complexity and limited time constraints. In this paper, a fast and precise methodology based on the sigma-delta modulator is designed and implemented by first generating the twiddle factors using a designed recursive scheme. Compared with conventional methods such as the discrete Fourier transform (DFT) and the fast Fourier transform, this scheme requires no multiplications and only half as many additions, by combining the DFT with the Rife algorithm and Fourier coefficient interpolation. Experimentally, when the sampling frequency is 10 MHz, the real-time frequency measurements of intermediate frequency, narrowband signals have a measurement mean squared error of ±2.4 Hz. Furthermore, a single measurement of the whole system requires only approximately 0.3 s, achieving fast iteration, high precision, and a low calculation load.
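The Rife refinement of a coarse DFT peak can be illustrated compactly. This is a textbook sketch, not the paper's microprocessor implementation: it uses a brute-force DFT, omits the recursive twiddle-factor scheme, and the sampling rate and test frequency are made up:

```python
import cmath
import math

def rife_frequency(x, fs):
    """Coarse DFT peak search followed by Rife two-point interpolation:
    f = (k0 + r*|X[k0+r]| / (|X[k0]| + |X[k0+r]|)) * fs/N,
    with r = +/-1 toward the larger neighbouring bin."""
    n = len(x)
    mags = []
    for k in range(n // 2):
        X = sum(x[m] * cmath.exp(-2j * math.pi * k * m / n)
                for m in range(n))
        mags.append(abs(X))
    k0 = max(range(1, n // 2 - 1), key=lambda k: mags[k])
    r = 1 if mags[k0 + 1] >= mags[k0 - 1] else -1
    delta = r * mags[k0 + r] / (mags[k0] + mags[k0 + r])
    return (k0 + delta) * fs / n

fs, n, f_true = 1000.0, 64, 208.0
sig = [math.sin(2 * math.pi * f_true * m / fs) for m in range(n)]
print(rife_frequency(sig, fs))
```

With only 64 samples the interpolated estimate lands within a small fraction of the 15.6 Hz bin spacing of the true frequency, which is the whole point of the two-point refinement.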
NASA Astrophysics Data System (ADS)
Wu, Tao; Li, Yan
2017-10-01
Asteroseismology is a powerful tool for probing stellar interiors and determining stellar fundamental parameters. In the present work, we adopt the χ2-minimization method, but use only the observed high-precision seismic observations (i.e., oscillation frequencies) to constrain theoretical models of the solar-like oscillator KIC 6225718. We find that the acoustic radius τ0 is the only global parameter that can be accurately measured by the χ2-matching method between observed frequencies and theoretical model calculations for a pure p-mode oscillation star, and we obtain its value in seconds for KIC 6225718. This leads to the mass and radius of the CMMs being degenerate with each other. In addition, we find that the distribution range of the acoustic radius is slightly enlarged at the lower end by some extreme cases, which possess both a larger mass and a higher (or lower) metal abundance.
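The frequency-only χ2 matching used to rank stellar models reduces to a weighted sum of squared residuals. A minimal sketch, with invented frequencies, uncertainties, and model names (the real analysis scans a large grid of stellar models):

```python
def chi2(obs_freqs, model_freqs, sigmas):
    """Frequency-only chi-squared between observed and model oscillation
    frequencies; the best-matching model minimizes this statistic."""
    return sum(((o - m) / s) ** 2
               for o, m, s in zip(obs_freqs, model_freqs, sigmas))

obs = [1200.1, 1300.4, 1400.9]                      # microHz, made up
models = {"A": [1200.0, 1300.5, 1401.0],            # hypothetical grid
          "B": [1205.0, 1306.0, 1395.0]}
sig = [0.5, 0.5, 0.5]
best = min(models, key=lambda name: chi2(obs, models[name], sig))
print(best)
```

Because many (mass, metallicity) combinations reproduce the same frequency pattern, several grid points can reach comparably low χ2, which is exactly the degeneracy the abstract describes.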
Technology of focus detection for 193nm projection lithographic tool
NASA Astrophysics Data System (ADS)
Di, Chengliang; Yan, Wei; Hu, Song; Xu, Feng; Li, Jinglong
2012-10-01
With the shortening printing wavelength and increasing numerical aperture of lithographic tools, the depth of focus (DOF) has dropped rapidly to a few hundred nanometers, while the repeatable accuracy of focusing and leveling must be one-tenth of the DOF, approximately several tens of nanometers. This article first reviews several focusing technologies and compares their advantages and disadvantages. The accuracy of the dual-grating focusing method is then obtained through theoretical calculation. The dual-grating focusing method based on photoelastic modulation is divided into coarse and precise focusing stages for analysis: an image processing model is established for coarse focusing and a photoelastic modulation model for accurate focusing. Finally, the focusing algorithm is simulated with MATLAB. In conclusion, the dual-grating focusing method offers high precision, high efficiency, and non-contact measurement of the focal plane, meeting the focusing demands of 193 nm projection lithography.
Nucleon Charges from 2+1+1-flavor HISQ and 2+1-flavor clover lattices
Gupta, Rajan
2016-07-24
Precise estimates of the nucleon charges gA, gS and gT are needed in many phenomenological analyses of SM and BSM physics. In this talk, we present results from two sets of calculations using clover fermions on 9 ensembles of 2+1+1-flavor HISQ and 4 ensembles of 2+1-flavor clover lattices. In addition, we show that high statistics can be obtained cost-effectively using the truncated solver method with bias correction and the coherent source sequential propagator technique. By performing simulations at 4-5 values of the source-sink separation tsep, we demonstrate control over excited-state contamination using 2- and 3-state fits. Using the high-precision 2+1+1-flavor data, we perform a simultaneous fit in a, Mπ and MπL to obtain our final results for the charges.
A two-ply polymer-based flexible tactile sensor sheet using electric capacitance.
Guo, Shijie; Shiraoka, Takahisa; Inada, Seisho; Mukai, Toshiharu
2014-01-29
Traditional capacitive tactile sensor sheets usually have a three-layered structure, with a dielectric layer sandwiched by two electrode layers. Each electrode layer has a number of parallel ribbon-like electrodes. The electrodes on the two electrode layers are oriented orthogonally, and each crossing point of the two perpendicular electrode arrays makes up a capacitive sensor cell on the sheet. It is well known that high measuring precision and high resolution are difficult to achieve simultaneously: decreasing the width of the electrodes is required to obtain high resolution, but this reduces the area of the sensor cells and, as a result, lowers the signal-to-noise (S/N) ratio. To overcome this problem, a new multilayered structure and a related calculation procedure are proposed. The new structure stacks two or more sensor sheets with shifts in position. Both high precision and high resolution can be obtained by combining the signals of the stacked sensor sheets. A prototype was fabricated and the effect was confirmed.
A high precision position sensor design and its signal processing algorithm for a maglev train.
Xue, Song; Long, Zhiqiang; He, Ning; Chang, Wensen
2012-01-01
High precision positioning technology for a kind of high speed maglev train with an electromagnetic suspension (EMS) system is studied. At first, the basic structure and functions of the position sensor are introduced and some key techniques to enhance the positioning precision are designed. Then, in order to further improve the positioning signal quality and the fault-tolerant ability of the sensor, a new kind of discrete-time tracking differentiator (TD) is proposed based on nonlinear optimal control theory. This new TD has good filtering and differentiating performance and a small calculation load. It is suitable for real-time signal processing. The stability, convergence property and frequency characteristics of the TD are studied and analyzed thoroughly. The delay constant of the TD is derived and an effective time delay compensation algorithm is proposed. Based on the TD technology, a filtering process is introduced to improve the positioning signal waveform when the sensor is under bad working conditions, and a two-sensor switching algorithm is designed to eliminate the positioning errors caused by the joint gaps of the long stator. The effectiveness and stability of the sensor and its signal processing algorithms are proved by experiments on a test train during a long-term test run.
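The core idea of a tracking differentiator — one dynamical system that simultaneously filters a signal and estimates its derivative — can be shown with a linear second-order stand-in. This is a simplified sketch, not the paper's nonlinear-optimal TD; the gain r, step size, and test signal are arbitrary:

```python
def track_differentiate(v, dt, r=50.0):
    """Linear second-order tracking differentiator: x1 tracks the input
    v, x2 estimates its derivative; the gain r sets tracking speed.
    Discretized with a forward-Euler step of size dt."""
    x1, x2 = v[0], 0.0
    out = []
    for vk in v:
        a = -r * r * (x1 - vk) - 2.0 * r * x2   # critically damped law
        x1 += dt * x2
        x2 += dt * a
        out.append((x1, x2))
    return out

# Ramp input with slope 2.0: the derivative estimate should settle
# near 2.0 after the transient.
dt = 0.001
sig = [2.0 * k * dt for k in range(3000)]
states = track_differentiate(sig, dt)
print(states[-1])
```

Increasing r speeds up tracking but amplifies noise; the nonlinear TD of the paper is designed to get a better trade-off between these two effects.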
Navigation Constellation Design Using a Multi-Objective Genetic Algorithm
2015-03-26
programs. This specific tool not only offers high fidelity simulations, but it also offers the visual aid provided by STK. The ability to ... MATLAB and STK. STK is a program that allows users to model, analyze, and visualize space systems. Users can create objects such as satellites and ... position dilution of precision (PDOP) and system cost. This thesis utilized Satellite Tool Kit (STK) to calculate PDOP values of navigation
Dissecting Reactor Antineutrino Flux Calculations
NASA Astrophysics Data System (ADS)
Sonzogni, A. A.; McCutchan, E. A.; Hayes, A. C.
2017-09-01
Current predictions for the antineutrino yield and spectra from a nuclear reactor rely on the experimental electron spectra from 235U, 239Pu, 241Pu and a numerical method to convert these aggregate electron spectra into their corresponding antineutrino ones. In the present work we investigate quantitatively some of the basic assumptions and approximations used in the conversion method, studying first the compatibility between two recent approaches for calculating electron and antineutrino spectra. We then explore different possibilities for the disagreement between the measured Daya Bay and the Huber-Mueller antineutrino spectra, including the 238U contribution as well as the effective charge and the allowed shape assumption used in the conversion method. We observe that including a shape correction of about +6% MeV⁻¹ in conversion calculations can better describe the Daya Bay spectrum. Because of a lack of experimental data, this correction cannot be ruled out, concluding that in order to confirm the existence of the reactor neutrino anomaly, or even quantify it, precisely measured electron spectra for about 50 relevant fission products are needed. With the advent of new rare ion facilities, the measurement of shape factors for these nuclides, for many of which precise beta intensity data from TAGS experiments already exist, would be highly desirable.
Accurate wavelengths for X-ray spectroscopy and the NIST hydrogen-like ion database
NASA Astrophysics Data System (ADS)
Kotochigova, S. A.; Kirby, K. P.; Brickhouse, N. S.; Mohr, P. J.; Tupitsyn, I. I.
2005-06-01
We have developed an ab initio multi-configuration Dirac-Fock-Sturm method for the precise calculation of X-ray emission spectra, including energies, transition wavelengths and transition probabilities. The calculations are based on non-orthogonal basis sets, generated by solving the Dirac-Fock and Dirac-Fock-Sturm equations. Inclusion of Sturm functions into the basis set provides an efficient description of correlation effects in highly charged ions and fast convergence of the configuration interaction procedure. A second part of our study is devoted to developing a theoretical procedure and creating an interactive database to generate energies and transition frequencies for hydrogen-like ions. This procedure is highly accurate and based on current knowledge of the relevant theory, which includes relativistic, quantum electrodynamic, recoil, and nuclear size effects.
Calculation of skiving cutter blade
NASA Astrophysics Data System (ADS)
Xu, Lei; Lao, Qicheng; Shang, Zhiyi
2018-05-01
The gear skiving method is a kind of gear machining technology with high efficiency and high precision. According to the method of gear machining, a method for calculating the blade of skiving cutter in machining an involute gear is proposed. Based on the principle of meshing gear and the kinematic relationship between the machined flank and the gear skiving, the mathematical model of skiving for machining the internal gear is built and the gear tooth surface is obtained by solving the meshing equation. The mathematical model of the gear blade curve of the skiving cutter is obtained by choosing the proper rake face and the cutter tooth surface for intersection. Through the analysis of the simulation of the skiving gear, the feasibility and correctness of the skiving cutter blade design are verified.
Tune-out wavelength for the 1s2s 3S - 1s3p 3P transition of helium: relativistic effects
NASA Astrophysics Data System (ADS)
Drake, Gordon W. F.; Manalo, Jacob
2017-04-01
The tune-out wavelength is the wavelength at which the frequency-dependent polarizability of an atom vanishes. It can be measured to very high precision by means of an interferometric comparison between two beams. This paper is part of a joint theoretical/experimental project with K. Baldwin et al. (Australian National University) and L.-Y. Tang et al. (Wuhan Institute of Physics and Mathematics) to perform a high precision comparison between theory and experiment as a probe of atomic structure, including relativistic and quantum electrodynamic effects. We will report the results of calculations for the tune-out wavelength that is closest to the 1s2s 3S - 1s3p 3P transition of 4He. Our result for the M = 0 magnetic substate, obtained with a fully correlated Hylleraas basis set, is 413.07995851(12) nm. This includes a leading relativistic contribution of -0.0592185(16) nm from the Breit interaction treated as a perturbation, and a relativistic recoil contribution of -0.00004447(17) nm. The results will be compared with recent relativistic CI calculations. Research supported by the Natural Sciences and Engineering Research Council of Canada.
Calculating Trajectories And Orbits
NASA Technical Reports Server (NTRS)
Alderson, Daniel J.; Brady, Franklyn H.; Breckheimer, Peter J.; Campbell, James K.; Christensen, Carl S.; Collier, James B.; Ekelund, John E.; Ellis, Jordan; Goltz, Gene L.; Hintz, Gerarld R.;
1989-01-01
The Double-Precision Trajectory Analysis Program, DPTRAJ, and the Orbit Determination Program, ODP, were developed and improved over the years to provide highly reliable and accurate navigation capability for deep-space missions like Voyager. Each is a collection of programs working together to provide the desired computational results. DPTRAJ, ODP, and their supporting utility programs are capable of handling massive amounts of data and performing the various numerical calculations required for solving navigation problems associated with planetary fly-by and lander missions. They have been used extensively in support of NASA's Voyager project. DPTRAJ-ODP is available in two machine versions: the UNIVAC version, NPO-15586, written in FORTRAN V, SFTRAN, and ASSEMBLER, and the VAX/VMS version, NPO-17201, written in FORTRAN V, SFTRAN, PL/1 and ASSEMBLER.
Apparatus for in-situ calibration of instruments that measure fluid depth
Campbell, Melvin D.
1994-01-01
The present invention provides a method and apparatus for in-situ calibration of distance measuring equipment. The method comprises obtaining a first distance measurement in a first location, then obtaining at least one other distance measurement in at least one other location of a precisely known distance from the first location, and calculating a calibration constant. The method is applied specifically to calculating a calibration constant for obtaining fluid level and embodied in an apparatus using a pressure transducer and a spacer of precisely known length. The calibration constant is used to calculate the depth of a fluid from subsequent single pressure measurements at any submerged position.
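For the fluid-depth embodiment, the calibration reduces to two small formulas: the spacer of known length gives depth-per-unit-pressure, and one subsequent reading gives depth. A sketch with invented pressure readings in kPa (the spacer length and all values are hypothetical, not from the patent):

```python
def calibration_constant(p1, p2, spacer_length_m):
    """Two pressure readings taken a precisely known vertical distance
    apart give the depth-per-unit-pressure constant C = d / (p2 - p1)."""
    return spacer_length_m / (p2 - p1)

def fluid_depth(p, p_surface, c):
    """Depth of the transducer below the surface from a single reading,
    referenced to the pressure at the fluid surface."""
    return c * (p - p_surface)

# Hypothetical readings: lowering the sensor by the 0.5 m spacer raises
# the measured pressure from 12.0 to 16.9 kPa.
c = calibration_constant(12.0, 16.9, 0.5)
print(fluid_depth(26.7, 2.0, c))
```

Because C is derived in place, drifts in transducer sensitivity are absorbed into the constant each time the spacer measurement is repeated, which is the point of in-situ calibration.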
Precisely and Accurately Inferring Single-Molecule Rate Constants
Kinz-Thompson, Colin D.; Bailey, Nevette A.; Gonzalez, Ruben L.
2017-01-01
The kinetics of biomolecular systems can be quantified by calculating the stochastic rate constants that govern the biomolecular state versus time trajectories (i.e., state trajectories) of individual biomolecules. To do so, the experimental signal versus time trajectories (i.e., signal trajectories) obtained from observing individual biomolecules are often idealized to generate state trajectories by methods such as thresholding or hidden Markov modeling. Here, we discuss approaches for idealizing signal trajectories and calculating stochastic rate constants from the resulting state trajectories. Importantly, we provide an analysis of how the finite length of signal trajectories restricts the precision of these approaches, and demonstrate how Bayesian inference-based versions of these approaches allow rigorous determination of this precision. Similarly, we provide an analysis of how the finite lengths and limited time resolutions of signal trajectories restrict the accuracy of these approaches, and describe methods that, by accounting for the effects of the finite length and limited time resolution of signal trajectories, substantially improve this accuracy. Collectively, therefore, the methods we consider here enable a rigorous assessment of the precision, and a significant enhancement of the accuracy, with which stochastic rate constants can be calculated from single-molecule signal trajectories.
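For a single exponential dwell-time distribution, the Bayesian link between trajectory length and rate-constant precision has a closed form: with a 1/k (Jeffreys-type) prior, N observed dwells of total duration T give a Gamma(N, T) posterior over k, so the relative uncertainty shrinks as 1/sqrt(N). A simplified sketch (the paper's full hidden-Markov treatment is far more general; the simulated rate and sample size here are arbitrary):

```python
import math
import random

def rate_posterior(dwell_times):
    """Posterior over the rate k for N exponential dwells under a 1/k
    prior: Gamma(N, sum(t)). Returns the posterior mean and an
    approximate 95% interval (mean +/- 2 posterior sd)."""
    n, total = len(dwell_times), sum(dwell_times)
    mean = n / total
    sd = math.sqrt(n) / total
    return mean, (mean - 2 * sd, mean + 2 * sd)

# 100 simulated dwells at k = 2.0 /s via inverse-transform sampling.
rng = random.Random(0)
dwells = [-math.log(1 - rng.random()) / 2.0 for _ in range(100)]
k_hat, (lo, hi) = rate_posterior(dwells)
print(k_hat, lo, hi)
```

The width of the interval scales as k/sqrt(N), making explicit how the finite number of observed transitions limits the achievable precision.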
High-precision photometry by telescope defocusing - VII. The ultrashort period planet WASP-103
NASA Astrophysics Data System (ADS)
Southworth, John; Mancini, L.; Ciceri, S.; Budaj, J.; Dominik, M.; Figuera Jaimes, R.; Haugbølle, T.; Jørgensen, U. G.; Popovas, A.; Rabus, M.; Rahvar, S.; von Essen, C.; Schmidt, R. W.; Wertz, O.; Alsubai, K. A.; Bozza, V.; Bramich, D. M.; Calchi Novati, S.; D'Ago, G.; Hinse, T. C.; Henning, Th.; Hundertmark, M.; Juncher, D.; Korhonen, H.; Skottfelt, J.; Snodgrass, C.; Starkey, D.; Surdej, J.
2015-02-01
We present 17 transit light curves of the ultrashort period planetary system WASP-103, a strong candidate for the detection of tidally-induced orbital decay. We use these to establish a high-precision reference epoch for transit timing studies. The time of the reference transit mid-point is now measured to an accuracy of 4.8 s, versus 67.4 s in the discovery paper, aiding future searches for orbital decay. With the help of published spectroscopic measurements and theoretical stellar models, we determine the physical properties of the system to high precision and present a detailed error budget for these calculations. The planet has a Roche lobe filling factor of 0.58, leading to a significant asphericity; we correct its measured mass and mean density for this phenomenon. A high-resolution Lucky Imaging observation shows no evidence for faint stars close enough to contaminate the point spread function of WASP-103. Our data were obtained in the Bessell RI and the SDSS griz passbands and yield a larger planet radius at bluer optical wavelengths, to a confidence level of 7.3σ. Interpreting this as an effect of Rayleigh scattering in the planetary atmosphere leads to a measurement of the planetary mass which is too small by a factor of 5, implying that Rayleigh scattering is not the main cause of the variation of radius with wavelength.
Direct visualization of atomically precise nitrogen-doped graphene nanoribbons
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Yi; Zhang, Yanfang; Li, Geng
2014-07-14
We have fabricated atomically precise nitrogen-doped chevron-type graphene nanoribbons by using the on-surface synthesis technique combined with the nitrogen substitution of the precursors. Scanning tunneling microscopy and spectroscopy indicate that the well-defined nanoribbons tend to align with the neighbors side-by-side with a band gap of 1.02 eV, which is in good agreement with the density functional theory calculation result. The influence of the high precursor coverage on the quality of the nanoribbons is also studied. We find that graphene nanoribbons with sufficient aspect ratios can only be fabricated at sub-monolayer precursor coverage. This work provides a way to construct atomically precise nitrogen-doped graphene nanoribbons.
CCD centroiding analysis for Nano-JASMINE observation data
NASA Astrophysics Data System (ADS)
Niwa, Yoshito; Yano, Taihei; Araki, Hiroshi; Gouda, Naoteru; Kobayashi, Yukiyasu; Yamada, Yoshiyuki; Tazawa, Seiichi; Hanada, Hideo
2010-07-01
Nano-JASMINE is a very small satellite mission for global space astrometry with milli-arcsecond accuracy, which will be launched in 2011. In this mission, the centroids of stars in CCD image frames are estimated with sub-pixel accuracy. In order to realize such high-precision centroiding, an algorithm utilizing a least squares method is employed. One of its advantages is that centroids can be calculated without explicit assumptions about the point spread functions of the stars. A CCD centroiding experiment has been performed to investigate whether this data analysis is feasible, and the centroids of artificial star images on a CCD were determined with a precision better than 0.001 pixel. This result indicates that parallaxes of stars within 300 pc of the Sun can be observed with Nano-JASMINE.
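The sub-pixel idea can be illustrated with a simple intensity-weighted (moment) centroid, which likewise requires no explicit point-spread-function model; the mission's least-squares estimator is more elaborate, and the toy image below is invented:

```python
def centroid(image):
    """Intensity-weighted (moment) centroid of a 2D image, returning
    (cx, cy) in pixel coordinates with sub-pixel precision."""
    total = sum(sum(row) for row in image)
    cy = sum(y * sum(row) for y, row in enumerate(image)) / total
    cx = sum(x * v for row in image for x, v in enumerate(row)) / total
    return cx, cy

# A tiny symmetric blob whose true center lies between pixel columns
# 1 and 2, on row 1.
img = [[0, 1, 1, 0],
       [0, 3, 3, 0],
       [0, 1, 1, 0]]
print(centroid(img))
```

For a symmetric blob the moment centroid is exact; with real noisy frames, its precision degrades with background level, which is one reason a least-squares formulation is preferred for milli-arcsecond work.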
DOE Office of Scientific and Technical Information (OSTI.GOV)
Spencer, Khalil J.; Rim, Jung Ho; Porterfield, Donivan R.
2015-06-29
In this study, we re-analyzed late-1940's, Manhattan Project era plutonium-rich sludge samples recovered from the "General's Tanks" located within the nation's oldest plutonium processing facility, Technical Area 21. These samples were initially characterized by lower accuracy, lower precision mass spectrometric techniques. We report here information that was previously not discernible: the two tanks contain isotopically distinct Pu not only for the major (i.e., 240Pu, 239Pu) but also the trace (238Pu, 241Pu, 242Pu) isotopes. The revised isotopics slightly changed the calculated 241Am-241Pu model ages and interpretations.
NASA Astrophysics Data System (ADS)
Pelc, Andrzej; Hałas, Stanisław; Niedźwiedzki, Robert
2011-01-01
We report the results of high-precision (±0.05‰) oxygen isotope analysis of phosphates in 6 teeth of fossil sharks from the Mangyshlak peninsula. This precision was achieved by the offline preparation of CO2, which was then analyzed on a dual-inlet and triple-collector IRMS. The teeth samples were separated from Middle- and Late Bartonian sediments cropping out in two locations, Usak and Kuilus. Seawater temperatures calculated from the δ18O data vary from 23 to 41°C. However, these temperatures are probably overestimated due to freshwater inflow. The data point to higher temperatures in the Late Bartonian than in the Middle Bartonian and suggest differences in the depth habitats of the shark species studied.
Radiographic absorptiometry method in measurement of localized alveolar bone density changes.
Kuhl, E D; Nummikoski, P V
2000-03-01
The objective of this study was to measure the accuracy and precision of a radiographic absorptiometry method by using an occlusal density reference wedge in quantification of localized alveolar bone density changes. Twenty-two volunteer subjects had baseline and follow-up radiographs taken of mandibular premolar-molar regions with an occlusal density reference wedge in both films and added bone chips in the baseline films. The absolute bone equivalent densities were calculated in the areas that contained bone chips from the baseline and follow-up radiographs. The differences in densities described the masses of the added bone chips that were then compared with the true masses by using regression analysis. The correlation between the estimated and true bone-chip masses ranged from R = 0.82 to 0.94, depending on the background bone density. There was an average 22% overestimation of the mass of the bone chips when they were in low-density background, and up to 69% overestimation when in high-density background. The precision error of the method, which was calculated from duplicate bone density measurements of non-changing areas in both films, was 4.5%. The accuracy of the intraoral radiographic absorptiometry method is low when used for absolute quantification of bone density. However, the precision of the method is good and the correlation is linear, indicating that the method can be used for serial assessment of bone density changes at individual sites.
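The quoted 4.5% precision error is the kind of figure obtained from duplicate measurements of non-changing sites. A sketch of a root-mean-square coefficient-of-variation calculation (the density readings are invented, and the study's exact formula may differ in detail):

```python
import math

def precision_error_cv(pairs):
    """Root-mean-square coefficient of variation from duplicate
    measurements: for each pair, sd = |a - b| / sqrt(2) and
    CV = sd / mean; return the RMS of the CVs as a percentage."""
    cvs = []
    for a, b in pairs:
        mean = (a + b) / 2.0
        sd = abs(a - b) / math.sqrt(2.0)
        cvs.append((sd / mean) ** 2)
    return 100.0 * math.sqrt(sum(cvs) / len(cvs))

# Hypothetical duplicate bone-equivalent density readings (same site,
# baseline film vs follow-up film, arbitrary units).
dups = [(100.0, 104.0), (98.0, 95.0), (110.0, 107.0)]
print(precision_error_cv(dups))
```

Because each duplicate pair contributes one degree of freedom, the |a - b|/sqrt(2) factor gives an unbiased per-pair standard deviation estimate, which is why this statistic is standard for serial-measurement precision.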
Superallowed Beta Decay Studies at TRIUMF --- Nuclear Structure and Fundamental Symmetries
NASA Astrophysics Data System (ADS)
Zganjar, E. F.; Achtzehn, T.; Albers, D.; Andreoiu, C.; Andreyev, A. N.; Austin, R. A. E.; Ball, G. C.; Behr, J. A.; Biosvert, G. C.; Bricault, P.; Bishop, S.; Chakrawarthy, R. S.; Churchman, R.; Cross, D.; Cunningham, E.; D'Auria, J. M.; Dombsky, M.; Finlay, P.; Garrett, P. E.; Grinyer, G. F.; Hackman, G.; Hanemaayer, V.; Hardy, J. C.; Hodgson, D. F.; Hyland, B.; Iacob, V.; Klages, P.; Koopmans, K. A.; Kulp, W. D.; Lassen, J.; Lavoie, J. P.; Leslie, J. R.; Linder, T.; MacDonald, J. A.; Mak, H.-B.; Melconian, D.; Morton, A. C.; Ormand, W. E.; Osborne, C. J.; Pearson, C. J.; Pearson, M. R.; Phillips, A. A.; Piechaczek, A.; Ressler, J.; Sarazin, F.; Savard, G.; Schumaker, M. A.; Scraggs, H. C.; Svensson, C. E.; Valiente-Dobon, J. J.; Towner, I. S.; Waddington, J. C.; Walker, P. M.; Wendt, K.; Wood, J. L.
2007-04-01
Precision measurement of the beta-decay half-life, Q-value, and branching ratio between nuclear analog states of Jπ = 0+ and T=1 can provide critical and fundamental tests of the Standard Model's description of electroweak interactions. A program has been initiated at TRIUMF-ISAC to measure the ft values of these superallowed beta transitions. Two Tz = 0, A > 60 cases, 74Rb and 62Ga, are presented. These are particularly relevant because they can provide critical tests of the calculated nuclear structure and isospin-symmetry breaking corrections that are predicted to be larger for heavier nuclei, and because they demonstrate the advance in the experimental precision on ft at TRIUMF-ISAC from 0.26% for 74Rb in 2002 to 0.05% for 62Ga in 2006. The high precision world data on experimental ft and corrected Ft values are discussed and shown to be consistent with CVC at the 10-4 level, yielding an average Ft = 3073.70(74) s. This Ft leads to Vud = 0.9737(4) for the up-down element of the Standard Model's CKM matrix. With this value and the Particle Data Group's 2006 values for Vus and Vub, the unitarity condition for the CKM matrix is met. Additional measurements and calculations are needed, however, to reduce the uncertainties in that evaluation. That objective is the focus of the continuing program on superallowed beta decay at TRIUMF-ISAC.
General post-Minkowskian expansion and application of the phase function
NASA Astrophysics Data System (ADS)
Qin, Cheng-Gang; Shao, Cheng-Gang
2017-07-01
The phase function is a useful tool for studying all observations of space missions, since it can give all the information about light propagation in a gravitational field. The extreme accuracy of modern space missions requires a precise relativistic modeling of observations. We therefore develop a recursive procedure enabling us to expand the phase function into a perturbative series of ascending powers of the Newtonian gravitational constant. Any nth-order perturbation of the phase function can be determined by an integral along the straight line connecting two point events. To illustrate the result, we carry out the calculation of the phase function outside a static, spherically symmetric body up to order G². We then develop a precise relativistic model that is able to calculate the phase function and its derivatives in the gravitational field of rotating and uniformly moving bodies. This model allows the computation of the Doppler, radio science, and astrometric observables of space missions in the Solar System. With the development of space technology, the relativistic corrections due to the motion of a planet's spin must be considered in high-precision space missions in the near future. As an example, we give estimates of the relativistic corrections on the observables for the space missions TianQin and BEACON.
Ayiku, Lynda; Levay, Paul; Hudson, Tom; Craven, Jenny; Barrett, Elizabeth; Finnegan, Amy; Adams, Rachel
2017-07-13
A validated geographic search filter for the retrieval of research about the United Kingdom (UK) from bibliographic databases had not previously been published. The aim was to develop and validate a geographic search filter to retrieve research about the UK from Ovid MEDLINE with high recall and precision. Three gold-standard sets of references were generated using the relative recall method. The sets contained references to studies about the UK which had informed National Institute for Health and Care Excellence (NICE) guidance. The first and second sets were used to develop and refine the MEDLINE UK filter. The third set was used to validate the filter. Recall, precision and number-needed-to-read (NNR) were calculated using a case study. The validated MEDLINE UK filter demonstrated 87.6% relative recall against the third gold-standard set. In the case study, the filter demonstrated 100% recall, 11.4% precision and an NNR of nine. A validated geographic search filter to retrieve research about the UK with high recall and precision has been developed. The MEDLINE UK filter can be applied to systematic literature searches in Ovid MEDLINE for topics with a UK focus. © 2017 Crown copyright. Health Information and Libraries Journal © 2017 Health Libraries Group. This article is published with the permission of the Controller of HMSO and the Queen's Printer for Scotland.
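The recall, precision and NNR figures above are related by standard formulas; a minimal sketch of the NNR arithmetic (the ceiling convention is an assumption consistent with the reported value):

```python
import math

# NNR (number needed to read) is the reciprocal of precision:
# at 11.4% precision there is one relevant record per ~8.8 retrieved,
# so a searcher reads nine records per relevant one.
precision = 0.114
nnr = math.ceil(1 / precision)
print(nnr)   # 9
```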
Huang Hua-Lin; Mo Ling-Fei; Liu Ying-Jie; Li Cheng-Yang; Xu Qi-Meng; Wu Zhi-Tong
2015-08-01
The number of stroke patients is increasing as population aging accelerates. Precise measurement of walking speed is very important for guiding the rehabilitation of stroke patients. Traditional speed-measurement methods such as a stopwatch have relatively low precision, while high-precision measurement instruments are too costly for wide use; moreover, these methods have difficulty measuring the walking speed of stroke patients accurately. UHF RFID tags have the advantages of small volume, low price, and long reading distance, and as wearable sensors they are well suited to accurately measuring the walking speed of stroke patients. To measure human walking speed, this paper uses four reader antennas separated by a fixed distance to read the signal strength of an RFID tag. Because the tag's RSSI (Received Signal Strength Indicator) varies with its distance from the reader, this paper studies the changes of RSSI with time to calculate walking speed. The verification results show that precise measurement of walking speed can be realized by a signal-processing method combining Gaussian fitting with a Kalman filter. From the variation of walking speed, doctors can predict the outcome of rehabilitation training and give appropriate rehabilitation guidance.
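As an illustration of the speed calculation, here is a hedged sketch with two antennas at a known spacing and an idealized Gaussian RSSI-vs-time profile; the spacing, timing, and noise-free data are invented, and a plain Gaussian fit stands in for the paper's Gaussian Fitting-Kalman Filter pipeline:

```python
import numpy as np

def peak_time_gaussian_fit(t, rssi):
    """Time of closest approach to one antenna: a Gaussian RSSI profile
    is a parabola in log space, so a quadratic least-squares fit gives
    the peak location as the parabola's vertex."""
    a, b, c = np.polyfit(t, np.log(rssi), 2)   # log rssi ~ a*t^2 + b*t + c
    return -b / (2.0 * a)

antenna_gap = 2.0    # m, assumed spacing between two reader antennas
true_speed = 1.25    # m/s, ground truth for this synthetic pass
t = np.linspace(0.0, 4.0, 400)
rssi_a = np.exp(-((t - 1.0) ** 2) / 0.5)                              # antenna A
rssi_b = np.exp(-((t - 1.0 - antenna_gap / true_speed) ** 2) / 0.5)   # antenna B

dt = peak_time_gaussian_fit(t, rssi_b) - peak_time_gaussian_fit(t, rssi_a)
speed = antenna_gap / dt
print(round(speed, 3))   # 1.25
```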
Lin, Ting; Harmsen, Stephen C.; Baker, Jack W.; Luco, Nicolas
2013-01-01
The conditional spectrum (CS) is a target spectrum (with conditional mean and conditional standard deviation) that links seismic hazard information with ground-motion selection for nonlinear dynamic analysis. Probabilistic seismic hazard analysis (PSHA) estimates the ground-motion hazard by incorporating the aleatory uncertainties in all earthquake scenarios and resulting ground motions, as well as the epistemic uncertainties in ground-motion prediction models (GMPMs) and seismic source models. Typical CS calculations to date are produced for a single earthquake scenario using a single GMPM, but more precise use requires consideration of at least multiple causal earthquakes and multiple GMPMs that are often considered in a PSHA computation. This paper presents the mathematics underlying these more precise CS calculations. Despite requiring more effort to compute than approximate calculations using a single causal earthquake and GMPM, the proposed approach produces an exact output that has a theoretical basis. To demonstrate the results of this approach and compare the exact and approximate calculations, several example calculations are performed for real sites in the western United States. The results also provide some insights regarding the circumstances under which approximate results are likely to closely match more exact results. To facilitate these more precise calculations for real applications, the exact CS calculations can now be performed for real sites in the United States using new deaggregation features in the U.S. Geological Survey hazard mapping tools. Details regarding this implementation are discussed in this paper.
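The "exact" combination over causal earthquakes and GMPMs is at heart a mixture calculation; a toy sketch at a single period with made-up deaggregation weights (not values from the paper):

```python
import numpy as np

# Three hypothetical scenario/GMPM branches with deaggregation weights p,
# each contributing a conditional mean and standard deviation of lnSA.
p     = np.array([0.5, 0.3, 0.2])
mu    = np.array([-1.20, -1.00, -0.80])   # conditional mean lnSA per branch
sigma = np.array([0.35, 0.40, 0.30])      # conditional std of lnSA per branch

# Exact CS: mixture mean, and variance = within-branch + between-branch parts.
mu_cs = np.sum(p * mu)
var_cs = np.sum(p * (sigma ** 2 + (mu - mu_cs) ** 2))
sigma_cs = np.sqrt(var_cs)
print(round(mu_cs, 3), round(sigma_cs, 3))   # -1.06 0.389
```

The between-branch term is what the single-scenario, single-GMPM approximation drops, which is why it can understate the conditional dispersion.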
Fast and efficient indexing approach for object recognition
NASA Astrophysics Data System (ADS)
Hefnawy, Alaa; Mashali, Samia A.; Rashwan, Mohsen; Fikri, Magdi
1999-08-01
This paper introduces a fast and efficient indexing approach for both 2D and 3D model-based object recognition in the presence of rotation, translation, and scale variations of objects. The indexing entries are computed after preprocessing the data by Haar wavelet decomposition. The scheme uses a unified image feature detection approach based on Zernike moments. A set of low-level features, e.g. high-precision edges and gray-level corners, is estimated by a set of orthogonal Zernike moments calculated locally around every image point. High-dimensional, highly descriptive indexing entries are then calculated based on the correlation of these local features and employed for fast access to the model database to generate hypotheses. A list of the most likely candidate models is then produced by evaluating the hypotheses. Experimental results are included to demonstrate the effectiveness of the proposed indexing approach.
A method to calculate the volume of palatine tonsils.
Prim, M P; De Diego, J I; García-Bermúdez, C; Pérez-Fernández, E; Hardisson, D
2010-12-01
The purpose of this study was to obtain a mathematical formula to calculate the tonsillar volume out of its measurements assessed on surgical specimens. Thirty consecutive surgical specimens of pediatric tonsils were studied. The maximum lengths ("a"), widths ("b"), and depths ("c") of the dissected specimens were measured in millimeters, and the volume of each tonsil was measured in milliliters. One-sample Kolmogorov-Smirnov test was used to check the normality of the sample. To calculate the reproducibility of the quantitative variables, intraclass correlation coefficients were used. Two formulas with high reproducibility (coefficient R between 0.75 and 1) were obtained: 1) [a*b*c* 0.5236] with R = 0.8688; and 2) [a*b*b* 0.3428] with R = 0.9073. It is possible to calculate the volume of the palatine tonsils in surgical specimens precisely enough based on their three measures, or their two main measures (length and width).
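Both fitted formulas are simple products; a small sketch (treating the inputs as mm and converting mm^3 to mL is my assumption about how the units reconcile):

```python
def tonsil_volume_abc(a_mm, b_mm, c_mm):
    """Formula 1: V = a*b*c*0.5236; 0.5236 is the ellipsoid factor pi/6."""
    return a_mm * b_mm * c_mm * 0.5236 / 1000.0   # mm^3 -> mL

def tonsil_volume_ab(a_mm, b_mm):
    """Formula 2, length and width only: V = a*b*b*0.3428."""
    return a_mm * b_mm * b_mm * 0.3428 / 1000.0   # mm^3 -> mL

# Hypothetical specimen: 30 mm long, 20 mm wide, 15 mm deep.
print(round(tonsil_volume_abc(30.0, 20.0, 15.0), 2))   # 4.71
print(round(tonsil_volume_ab(30.0, 20.0), 2))          # 4.11
```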
DOE Office of Scientific and Technical Information (OSTI.GOV)
Faddegon, B.A.; Villarreal-Barajas, J.E.; Mt. Diablo Regional Cancer Center, 2450 East Street, Concord, California
2005-11-15
The Final Aperture Superposition Technique (FAST) is described and applied to accurate, near-instantaneous calculation of the relative output factor (ROF) and central axis percentage depth dose curve (PDD) for clinical electron beams used in radiotherapy. FAST is based on precalculation of dose at select points for the two extreme situations of a fully open final aperture and a final aperture with no opening (fully shielded). This technique is different from conventional superposition of dose deposition kernels: the precalculated dose is differential in position of the electron or photon at the downstream surface of the insert. The calculation for a particular aperture (x-ray jaws or MLC, insert in electron applicator) is done by superposition of the precalculated dose data, using the open-field data over the open part of the aperture and the fully shielded data over the remainder. The calculation takes explicit account of all interactions in the shielded region of the aperture except the collimator effect: particles that pass from the open part into the shielded part, or vice versa. For the clinical demonstration, FAST was compared to full Monte Carlo simulation of 10×10, 2.5×2.5, and 2×8 cm² inserts. Dose was calculated to 0.5% precision in 0.4×0.4×0.2 cm³ voxels, spaced at 0.2 cm depth intervals along the central axis, using detailed Monte Carlo simulation of the treatment head of a commercial linear accelerator for six different electron beams with energies of 6-21 MeV. Each simulation took several hours on a personal computer with a 1.7 GHz processor. The calculation for the individual inserts, done with superposition, was completed in under a second on the same PC. Since simulations for the precalculation are only performed once, higher precision and resolution can be obtained without increasing the calculation time for individual inserts.
Fully shielded contributions were largest for small fields and high beam energy, at the surface, reaching a maximum of 5.6% at 21 MeV. Contributions from the collimator effect were largest for the large field size, high beam energy, and shallow depths, reaching a maximum of 4.7% at 21 MeV. Both shielding contributions and the collimator effect need to be taken into account to achieve an accuracy of 2%. FAST takes explicit account of the shielding contributions. With the collimator effect set to that of the largest field in the FAST calculation, the difference in dose on the central axis (product of ROF and PDD) between FAST and full simulation was generally under 2%. The maximum difference of 2.5% exceeded the statistical precision of the calculation by four standard deviations. This occurred at 18 MeV for the 2.5×2.5 cm² field. The differences are due to the method used to account for the collimator effect.
NASA Astrophysics Data System (ADS)
Adkins, Gregory
2016-03-01
Positronium spectroscopy is of continuing interest as a high-precision test of our understanding of binding in QED. Positronium, the electron-positron bound state, represents the purest example of binding in quantum field theory, as the constituents are structureless and their interactions are dominated by QED with only negligible contributions from strong or weak effects. Positronium differs from other Coulombic bound systems such as hydrogen or muonium in having maximal recoil (the constituent mass ratio m/M is one) and being subject to real and virtual annihilation into photons. Positronium spectroscopy (n = 1 hyperfine splitting, n = 2 fine structure, and the 2S-1S interval) has reached a precision of order 1 MHz, and ongoing experimental efforts may lead to improved results. Theoretical calculations of positronium energies at order mα^6 ~ 18.7 MHz are complete, but only partial results are known at order mα^7 ~ 0.14 MHz. I will report on the status of the positronium energy calculations and present new results for order mα^7 contributions. Support provided by the NSF through Grant No. PHY-1404268.
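The quoted frequency scales for the mα^6 and mα^7 orders can be checked against CODATA constants; a back-of-envelope sketch (not part of the abstract):

```python
# An order-m*alpha^n energy corresponds to a frequency m_e c^2 alpha^n / h.
alpha = 7.2973525693e-3      # fine-structure constant
me_c2_eV = 510998.95         # electron rest energy, eV
h_eVs = 4.135667696e-15      # Planck constant, eV s

f6_MHz = me_c2_eV * alpha ** 6 / h_eVs / 1e6
f7_MHz = f6_MHz * alpha
print(round(f6_MHz, 1), round(f7_MHz, 2))   # 18.7 0.14
```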
Electrode Models for Electric Current Computed Tomography
CHENG, KUO-SHENG; ISAACSON, DAVID; NEWELL, J. C.; GISSER, DAVID G.
2016-01-01
This paper develops a mathematical model for the physical properties of electrodes suitable for use in electric current computed tomography (ECCT). The model includes the effects of discretization, shunt, and contact impedance. The complete model was validated by experiment. Bath resistivities of 284.0, 139.7, 62.3, 29.5 Ω · cm were studied. Values of “effective” contact impedance z used in the numerical approximations were 58.0, 35.0, 15.0, and 7.5 Ω · cm2, respectively. Agreement between the calculated and experimentally measured values was excellent throughout the range of bath conductivities studied. It is desirable in electrical impedance imaging systems to model the observed voltages to the same precision as they are measured in order to be able to make the highest resolution reconstructions of the internal conductivity that the measurement precision allows. The complete electrode model, which includes the effects of discretization of the current pattern, the shunt effect due to the highly conductive electrode material, and the effect of an “effective” contact impedance, allows calculation of the voltages due to any current pattern applied to a homogeneous resistivity field. PMID:2777280
Two-port connecting-layer-based sandwiched grating by a polarization-independent design.
Li, Hongtao; Wang, Bo
2017-05-02
In this paper, a two-port connecting-layer-based sandwiched beam splitter grating with polarization-independent behavior is reported and designed. The grating can separate the transmitted polarized light into two diffraction orders with equal energies, realizing a nearly 50/50 output with good uniformity. For the given wavelength of 800 nm and period of 780 nm, a simplified modal method can determine an optimal duty cycle, from which an estimate of the grating depth can be calculated. To obtain precise grating parameters, rigorous coupled-wave analysis is employed to optimize the grating depth and the thickness of the connecting layer. Based on the optimized design, a high-efficiency two-port output grating with wideband performance can be obtained. Moreover, the diffraction efficiencies calculated with the two analytical methods agree well with each other. The grating is therefore a promising practical photonic element for engineering applications.
Toward 1-mm depth precision with a solid state full-field range imaging system
NASA Astrophysics Data System (ADS)
Dorrington, Adrian A.; Carnegie, Dale A.; Cree, Michael J.
2006-02-01
Previously, we demonstrated a novel heterodyne based solid-state full-field range-finding imaging system. This system is comprised of modulated LED illumination, a modulated image intensifier, and a digital video camera. A 10 MHz drive is provided with 1 Hz difference between the LEDs and image intensifier. A sequence of images of the resulting beating intensifier output are captured and processed to determine phase and hence distance to the object for each pixel. In a previous publication, we detailed results showing a one-sigma precision of 15 mm to 30 mm (depending on signal strength). Furthermore, we identified the limitations of the system and potential improvements that were expected to result in a range precision in the order of 1 mm. These primarily include increasing the operating frequency and improving optical coupling and sensitivity. In this paper, we report on the implementation of these improvements and the new system characteristics. We also comment on the factors that are important for high precision image ranging and present configuration strategies for best performance. Ranging with sub-millimeter precision is demonstrated by imaging a planar surface and calculating the deviations from a planar fit. The results are also illustrated graphically by imaging a garden gnome.
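The phase-to-distance step can be sketched for one pixel; the frame count, modulation depth, and 1.5 m target distance are invented here, and the 4π factor reflects the round-trip path at the 10 MHz modulation frequency:

```python
import numpy as np

c = 2.99792458e8   # m/s
f_mod = 10e6       # 10 MHz LED / intensifier modulation
f_beat = 1.0       # 1 Hz heterodyne difference frequency

# Simulate 30 frames over one beat period at one pixel, with the beat
# phase encoding a hypothetical 1.5 m target distance.
true_d = 1.5
true_phase = 4.0 * np.pi * f_mod * true_d / c
t = np.arange(30) / 30.0
frames = 1.0 + 0.5 * np.cos(2.0 * np.pi * f_beat * t + true_phase)

# Recover the phase from the 1 Hz DFT bin, then convert phase to range.
bin1 = np.sum(frames * np.exp(-2j * np.pi * f_beat * t))
phase = np.angle(bin1)
d = phase * c / (4.0 * np.pi * f_mod)
print(round(d, 6))   # 1.5
```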
Application of high-precision two-way ranging to Galileo Earth-1 encounter navigation
NASA Technical Reports Server (NTRS)
Pollmeier, V. M.; Thurman, S. W.
1992-01-01
The application of precision two-way ranging to orbit determination with relatively short data arcs is investigated for the Galileo spacecraft's approach to its first Earth encounter (December 8, 1990). Analysis of previous S-band (2.3-GHz) ranging data acquired from Galileo indicated that under good signal conditions submeter precision and 10-m ranging accuracy were achieved. It is shown that ranging data of sufficient accuracy, when acquired from multiple stations, can sense the geocentric angular position of a distant spacecraft. A range data filtering technique, in which explicit modeling of range measurement bias parameters for each station pass is utilized, is shown to largely remove the systematic ground system calibration errors and transmission media effects from the Galileo range measurements, which would otherwise corrupt the angle-finding capabilities of the data. The accuracy of the Galileo orbit solutions obtained with S-band Doppler and precision ranging were found to be consistent with simple theoretical calculations, which predicted that angular accuracies of 0.26-0.34 microrad were achievable. In addition, the navigation accuracy achieved with precision ranging was marginally better than that obtained using delta-differenced one-way range (delta DOR), the principal data type that was previously used to obtain spacecraft angular position measurements operationally.
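The angle-finding capability rests on simple geometry: ranges measured from two widely separated stations differ by roughly B cos θ, so a differenced-range error maps to an angular error of about δρ/B. A back-of-envelope sketch with illustrative numbers (not the mission values):

```python
baseline_m = 8.0e6   # ~8000 km projected interstation baseline (assumed)
drho_m = 2.0         # assumed differenced-range error after bias calibration
dtheta_urad = drho_m / baseline_m * 1e6
print(dtheta_urad)   # 0.25 microradian, the order of the quoted accuracies
```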
NASA Technical Reports Server (NTRS)
Smith, R. L.; Lyubomirsky, A. S.
1981-01-01
Two techniques were analyzed. The first is a representation using Chebyshev expansions in three-dimensional cells. The second technique employs a temporary file for storing the components of the nonspherical gravity force. Computer storage requirements and relative CPU time requirements are presented. The Chebyshev gravity representation can provide a significant reduction in CPU time in precision orbit calculations, but at the cost of a large amount of direct-access storage space, which is required for a global model.
Ultracold Anions for High-Precision Antihydrogen Experiments
NASA Astrophysics Data System (ADS)
Cerchiari, G.; Kellerbauer, A.; Safronova, M. S.; Safronova, U. I.; Yzombard, P.
2018-03-01
Experiments with antihydrogen (H̄) for a study of matter-antimatter symmetry and antimatter gravity require ultracold H̄ to reach ultimate precision. A promising path towards antiatoms much colder than a few kelvin involves the precooling of antiprotons by laser-cooled anions. Because of the weak binding of the valence electron in anions, dominated by polarization and correlation effects, only a few candidate systems with suitable transitions exist. We report on a combination of experimental and theoretical studies to fully determine the relevant binding energies, transition rates, and branching ratios of the most promising candidate, La-. Using combined transverse and collinear laser spectroscopy, we determined the resonant frequency of the laser cooling transition to be ν = 96.592713(91) THz and its transition rate to be A = 4.90(50)×10^4 s^-1. Using a novel high-precision theoretical treatment of La- we calculated as-yet unmeasured energy levels, transition rates, branching ratios, and lifetimes to complement experimental information on the laser cooling cycle of La-. The new data establish the suitability of La- for laser cooling and show that the cooling transition is significantly stronger than suggested by a previous theoretical study.
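Two quantities implied by the measured values can be derived directly; a sketch assuming the standard two-level Doppler-limit formula T = ħA/(2k_B) applies to this cooling line:

```python
c = 2.99792458e8        # m/s
hbar = 1.054571817e-34  # J s
kB = 1.380649e-23       # J/K

nu = 96.592713e12       # measured resonant frequency, Hz
A = 4.90e4              # measured transition rate, 1/s

wavelength_um = c / nu * 1e6                 # cooling wavelength, micrometers
T_doppler_uK = hbar * A / (2.0 * kB) * 1e6   # Doppler limit, microkelvin
print(round(wavelength_um, 3), round(T_doppler_uK, 3))   # 3.104 0.187
```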
NASA Astrophysics Data System (ADS)
Sikora, Mark; Compton@HIGS Team
2017-01-01
The electric (αn) and magnetic (βn) polarizabilities of the neutron are fundamental properties arising from its internal structure which describe the nucleon's response to applied electromagnetic fields. Precise measurements of the polarizabilities provide crucial constraints on models of Quantum Chromodynamics (QCD) in the low energy regime such as Chiral Effective Field Theories as well as emerging ab initio calculations from lattice-QCD. These values also contribute the most uncertainty to theoretical determinations of the proton-neutron mass difference. Historically, the experimental challenges to measuring αn and βn have been due to the difficulty in obtaining suitable targets and sufficiently intense beams, leading to significant statistical uncertainties. To address these issues, a program of Compton scattering experiments on the deuteron is underway at the High Intensity Gamma Source (HI γS) at Duke University with the aim of providing the world's most precise measurement of αn and βn. We report measurements of the Compton scattering differential cross section obtained at an incident photon energy of 65 MeV and discuss the sensitivity of these data to the polarizabilities.
NASA Astrophysics Data System (ADS)
Sikora, Mark
2016-09-01
The electric (αn) and magnetic (βn) polarizabilities of the neutron are fundamental properties arising from its internal structure which describe the nucleon's response to applied electromagnetic fields. Precise measurements of the polarizabilities provide crucial constraints on models of Quantum Chromodynamics (QCD) in the low energy regime such as Chiral Effective Field Theories as well as emerging ab initio calculations from lattice-QCD. These values also contribute the most uncertainty to theoretical determinations of the proton-neutron mass difference. Historically, the experimental challenges to measuring αn and βn have been due to the difficulty in obtaining suitable targets and sufficiently intense beams, leading to significant statistical uncertainties. To address these issues, a program of Compton scattering experiments on the deuteron is underway at the High Intensity Gamma Source (HI γS) at Duke University with the aim of providing the world's most precise measurement of αn and βn. We report measurements of the Compton scattering differential cross section obtained at incident photon energies of 65 and 85 MeV and discuss the sensitivity of these data to the polarizabilities.
NASA Astrophysics Data System (ADS)
Inochkin, F. M.; Kruglov, S. K.; Bronshtein, I. G.; Kompan, T. A.; Kondratjev, S. V.; Korenev, A. S.; Pukhov, N. F.
2017-06-01
A new method for precise subpixel edge estimation is presented. The principle of the method is iterative image approximation in 2D with subpixel accuracy until the simulated image matches the acquired image. A numerical image model is presented consisting of three parts: an edge model, an object and background brightness distribution model, and a lens aberration model including diffraction. The optimal values of the model parameters are determined by conjugate-gradient numerical optimization of a merit function corresponding to the L2 distance between the acquired and simulated images. A computationally efficient procedure for calculating the merit function, along with a sufficient gradient approximation, is described. Subpixel-accuracy image simulation is performed in the Fourier domain with theoretically unlimited precision of edge point location. The method is capable of compensating lens aberrations and obtaining edge information with increased resolution. Experimental verification of the method is shown with a digital micromirror device used to physically simulate an object with known edge geometry. Experimental results for various high-temperature materials within the temperature range of 1000 °C to 2400 °C are presented.
Understanding the many-body expansion for large systems. I. Precision considerations
NASA Astrophysics Data System (ADS)
Richard, Ryan M.; Lao, Ka Un; Herbert, John M.
2014-07-01
Electronic structure methods based on low-order "n-body" expansions are an increasingly popular means to defeat the highly nonlinear scaling of ab initio quantum chemistry calculations, taking advantage of the inherently distributable nature of the numerous subsystem calculations. Here, we examine how the finite precision of these subsystem calculations manifests in applications to large systems, in this case, a sequence of water clusters ranging in size up to (H_2O)_{47}. Using two different computer implementations of the n-body expansion, one fully integrated into a quantum chemistry program and the other written as a separate driver routine for the same program, we examine the reproducibility of total binding energies as a function of cluster size. The combinatorial nature of the n-body expansion amplifies subtle differences between the two implementations, especially for n ⩾ 4, leading to total energies that differ by as much as several kcal/mol between two implementations of what is ostensibly the same method. This behavior can be understood based on a propagation-of-errors analysis applied to a closed-form expression for the n-body expansion, which is derived here for the first time. Discrepancies between the two implementations arise primarily from the Coulomb self-energy correction that is required when electrostatic embedding charges are implemented by means of an external driver program. For reliable results in large systems, our analysis suggests that script- or driver-based implementations should read binary output files from an electronic structure program, in full double precision, or better yet be fully integrated in a way that avoids the need to compute the aforementioned self-energy. Moreover, four-body and higher-order expansions may be too sensitive to numerical thresholds to be of practical use in large systems.
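The propagation-of-errors point can be made concrete with one standard closed form for the truncated many-body expansion (the expression below and the independent-noise model are my assumptions for illustration; the paper derives its own closed-form result): E(n) = Σ_m (-1)^(n-m) C(N-m-1, n-m) S_m, where S_m sums all C(N,m) m-mer energies. If each subsystem energy carries independent noise ε, the noise in E(n) grows by the factor computed here:

```python
from math import comb, sqrt

def noise_amplification(N, n):
    """Square root of the sum over orders m of (number of m-mers) times
    the squared closed-form coefficient C(N-m-1, n-m)."""
    return sqrt(sum(comb(N, m) * comb(N - m - 1, n - m) ** 2
                    for m in range(1, n + 1)))

N = 47   # (H2O)47, the largest cluster considered in the paper
for n in (2, 3, 4):
    print(n, round(noise_amplification(N, n)))
# For n = 4 the factor is ~1e5: microhartree-level noise per subsystem
# calculation can become ~0.1 hartree in the total, consistent with the
# kcal/mol-scale discrepancies between implementations noted above.
```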
High-Precision Mass Measurement of
NASA Astrophysics Data System (ADS)
Valverde, A. A.; Brodeur, M.; Bollen, G.; Eibach, M.; Gulyuz, K.; Hamaker, A.; Izzo, C.; Ong, W.-J.; Puentes, D.; Redshaw, M.; Ringle, R.; Sandler, R.; Schwarz, S.; Sumithrarachchi, C. S.; Surbrook, J.; Villari, A. C. C.; Yandow, I. T.
2018-01-01
We report the mass measurement of
Absolute Helmholtz free energy of highly anharmonic crystals: theory vs Monte Carlo.
Yakub, Lydia; Yakub, Eugene
2012-04-14
We discuss the problem of quantitative theoretical prediction of the absolute free energy for classical, highly anharmonic solids. The Helmholtz free energy of the Lennard-Jones (LJ) crystal is calculated accurately while accounting for both the anharmonicity of atomic vibrations and the pair and triple correlations in displacements of the atoms from their lattice sites. Comparison with the most precise computer simulation data on the sublimation and melting lines revealed that the theoretical predictions are in excellent agreement with Monte Carlo simulation data over the whole range of temperatures and densities studied.
Spectroscopic diagnostics of solar flares
NASA Astrophysics Data System (ADS)
Bely-Dubau, F.; Dubau, J.; Faucher, P.; Loulergue, M.; Steenman-Clarke, L.
Observations made with the X-ray polychromator (XRP) on board the Solar Maximum Mission satellite were analyzed. Data from the bent crystal spectrometer portion of the XRP experiment, in the spectral domain 1 to 3 A, with high spectral and temporal resolution, were used. Results for the spectrum analysis of iron are given. The possibility of polarization effects is considered. Although it is demonstrated that hyperfine analyses of a given spectrum are obtainable, provided calculations include large quantities of high precision atomic data, the interpretation is limited by the hypothesis of homogeneity of the emitting plasma.
Zhao, Jing-Xin; Su, Xiu-Yun; Xiao, Ruo-Xiu; Zhao, Zhe; Zhang, Li-Hai; Zhang, Li-Cheng; Tang, Pei-Fu
2016-11-01
We established a mathematical method to precisely calculate the radiographic anteversion (RA) and radiographic inclination (RI) angles of the acetabular cup based on anterior-posterior (AP) pelvic radiographs after total hip arthroplasty. Using Mathematica software, a mathematical model of an oblique cone was established to simulate how AP pelvic radiographs are obtained and to relate the two-dimensional and three-dimensional geometry of the opening circle of the cup. In this model, the vertex was the X-ray beam source, and the generatrix was the ellipse in the radiograph projected from the opening circle of the acetabular cup. Using this model, we established a series of mathematical formulas to reveal the differences between the true RA and RI cup angles and the measurement results obtained with traditional methods on AP pelvic radiographs, and to precisely calculate the RA and RI cup angles from post-operative AP pelvic radiographs. Statistical analysis indicated that caution is warranted when traditional measurement methods are applied to AP pelvic radiographs to calculate the RA and RI cup angles. The entire calculation process can be performed by an orthopedic surgeon with mathematical knowledge of basic matrix and vector equations. Copyright © 2016 IPEM. Published by Elsevier Ltd. All rights reserved.
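For contrast with the exact oblique-cone treatment, the traditional first-order estimate that the paper urges caution about can be sketched as follows (the arcsin relation and the numbers are illustrative assumptions, not the paper's method):

```python
import math

def ra_from_ellipse(short_axis_mm, long_axis_mm):
    """Traditional planar approximation: the cup opening projects to an
    ellipse on the radiograph, and RA ~ arcsin(short axis / long axis)."""
    return math.degrees(math.asin(short_axis_mm / long_axis_mm))

# Hypothetical ellipse measured on a radiograph of a 46 mm cup.
print(round(ra_from_ellipse(18.0, 46.0), 1))   # 23.0
```

The paper's point is that this planar relation ignores the obliquity of the projecting cone, which the oblique-cone model corrects.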
Bassanese, Danielle N; Conlan, Xavier A; Barnett, Neil W; Stevenson, Paul G
2015-05-01
This paper explores the analytical figures of merit of two-dimensional high-performance liquid chromatography for the separation of antioxidant standards. The cumulative two-dimensional high-performance liquid chromatography peak area was calculated for 11 antioxidants by two different methods, the areas reported by the control software and fitting of the data with a Gaussian model, and these methods were evaluated for precision and sensitivity. Both methods demonstrated excellent precision with regard to retention time in the second dimension (%RSD below 1.16%) and cumulative second-dimension peak area (%RSD below 3.73% for the instrument software and 5.87% for the Gaussian method). Combining areas reported by the high-performance liquid chromatography control software displayed superior limits of detection, on the order of 1 × 10⁻⁶ M, almost an order of magnitude lower than the Gaussian method for some analytes. The introduction of the countergradient eliminated the strong solvent mismatch between dimensions, leading to much improved peak shape and better detection limits for quantification. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
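The Gaussian peak-fitting route to peak area can be sketched as follows; the retention time, amplitude and noise level below are hypothetical, and the area is taken analytically from the fitted parameters (amplitude × sigma × √(2π)):

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(t, amplitude, center, sigma):
    return amplitude * np.exp(-(t - center) ** 2 / (2 * sigma ** 2))

# synthetic second-dimension chromatographic peak (hypothetical values)
t = np.linspace(0, 10, 200)
signal = gaussian(t, 5.0, 4.2, 0.3) + np.random.default_rng(0).normal(0, 0.01, t.size)

# fit and take the analytic area of the fitted Gaussian
popt, _ = curve_fit(gaussian, t, signal, p0=[signal.max(), t[signal.argmax()], 0.5])
area = popt[0] * abs(popt[2]) * np.sqrt(2 * np.pi)
```

In the paper's comparison, areas obtained this way were contrasted with the areas reported directly by the instrument software.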
Dual-band plasmonic resonator based on Jerusalem cross-shaped nanoapertures
NASA Astrophysics Data System (ADS)
Cetin, Arif E.; Kaya, Sabri; Mertiri, Alket; Aslan, Ekin; Erramilli, Shyamsunder; Altug, Hatice; Turkmen, Mustafa
2015-06-01
In this paper, we both experimentally and numerically introduce a dual-resonant metamaterial based on subwavelength Jerusalem cross-shaped apertures. We numerically investigate the physical origin of the dual-resonant behavior, which originates from the constituting aperture elements, through finite difference time domain calculations. Our numerical calculations show that at the dual resonances, the aperture system supports large and easily accessible local electromagnetic fields. In order to experimentally realize the aperture system, we utilize a high-precision and lift-off-free fabrication method based on electron-beam lithography. We also introduce a fine-tuning mechanism for controlling the dual-resonant spectral response through geometrical device parameters. Finally, we show the aperture system's highly advantageous far- and near-field characteristics through numerical calculations of refractive index sensitivity. Quantitative analyses of the availability of the local fields supported by the aperture system are employed to explain the grounds for the sensitivity of each spectral feature within the dual-resonant behavior. Possessing dual resonances with large and accessible electromagnetic fields, Jerusalem cross-shaped apertures can be highly advantageous for a wide range of applications demanding multiple spectral features with strong near-field characteristics.
Super-resolution imaging applied to moving object tracking
NASA Astrophysics Data System (ADS)
Swalaganata, Galandaru; Ratna Sulistyaningrum, Dwi; Setiyono, Budi
2017-10-01
Moving object tracking in a video is a method used to detect and analyze changes that occur in an object being observed. High visual quality and precise localization of the tracked target are desired in modern tracking systems. The fact that the tracked object does not always appear clearly makes the tracking result less precise; the reasons include low-quality video, system noise, small object size, and other factors. In order to improve the precision of tracking, especially for small objects, we propose a two-step solution that integrates a super-resolution technique into the tracking approach. The first step is super-resolution imaging applied to the frame sequence, done by cropping several frames or all of the frames. The second step is tracking on the resulting super-resolution images. Super-resolution imaging is a technique for obtaining high-resolution images from low-resolution images. In this research a single-frame super-resolution technique is proposed for the tracking approach; single-frame super-resolution has the advantage of fast computation time. The method used for tracking is Camshift. The advantage of Camshift is its simple calculation based on the HSV color histogram, which copes with conditions where the color of the object varies. The computational complexity and large memory requirements of super-resolution and tracking were reduced, and the precision of the tracked target was good. Experiments showed that integrating super-resolution imaging into the tracking technique can track the object precisely under various backgrounds, shape changes of the object, and good lighting conditions.
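The window update at the heart of Camshift (which additionally adapts the window size and orientation) is a mean-shift step toward the centroid of the color-probability mass under the window. A minimal numpy sketch on a synthetic back-projection image, with hypothetical blob position and window size:

```python
import numpy as np

def mean_shift_window(prob, x0, y0, w, h, iters=30):
    """Shift a w x h tracking window to the centroid of the probability
    mass under it (the core update inside Camshift)."""
    x, y = x0, y0
    for _ in range(iters):
        win = prob[y:y + h, x:x + w]
        total = win.sum()
        if total == 0:
            break
        ys, xs = np.mgrid[0:h, 0:w]
        cx = int(round((xs * win).sum() / total))  # centroid in window coords
        cy = int(round((ys * win).sum() / total))
        # re-centre the window on the centroid, clamped to the image
        x = min(max(x + cx - w // 2, 0), prob.shape[1] - w)
        y = min(max(y + cy - h // 2, 0), prob.shape[0] - h)
    return x, y

# synthetic back-projection: a bright Gaussian blob centred at (60, 40)
yy, xx = np.mgrid[0:100, 0:100]
prob = np.exp(-((xx - 60) ** 2 + (yy - 40) ** 2) / (2 * 5.0 ** 2))
x, y = mean_shift_window(prob, 10, 10, 20, 20)  # window converges onto the blob
```

In a real pipeline the probability image would be the HSV-histogram back-projection of the current frame, computed on the super-resolved frames.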
Seismic displacements monitoring for 2015 Mw 7.8 Nepal earthquake with GNSS data
NASA Astrophysics Data System (ADS)
Geng, T.; Su, X.; Xie, X.
2017-12-01
The high-rate Global Navigation Satellite System (GNSS) has been recognized as one of the powerful tools for monitoring ground motions generated by seismic events. The high-rate GPS and BDS data collected during the 2015 Mw 7.8 Nepal earthquake have been analyzed using two methods: the variometric approach and precise point positioning (PPP). The variometric approach is based on a time-differencing technique that uses only GNSS broadcast products to estimate velocity time series from tracking observations in real time, followed by an integration procedure on the velocities to derive the displacements induced by the seismic event. PPP is a positioning method that calculates precise positions at centimeter- or even millimeter-level accuracy with a single GNSS receiver using precise satellite orbit and clock products. Displacement motions were recovered with an accuracy of 2 cm at far-field stations and 5 cm at near-field stations experiencing strong ground motions and static offsets of up to 1-2 m. Multi-GNSS (GPS + BDS) processing provides higher-accuracy displacements owing to the increased number of satellites and the improved Position Dilution of Precision (PDOP) values. Considering the time consumption of clock estimation and the precision of PPP solutions, a 5 s GNSS satellite clock interval is suggested. In addition, the GNSS-derived displacements are in good agreement with those from strong-motion data. These studies demonstrate the feasibility of capturing seismic waves in real time with multi-GNSS observations, which is of great promise for earthquake early warning and rapid hazard assessment.
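The integration step of the variometric approach, epoch-wise velocities accumulated into displacement, can be sketched as follows. The velocity waveform is hypothetical; real processing also removes the drift that integrating broadcast-product errors introduces:

```python
import numpy as np

def variometric_displacement(velocities, dt):
    """Integrate epoch-wise velocities (m/s) into displacement (m), as in
    the variometric approach (drift/trend removal omitted for brevity)."""
    return np.cumsum(velocities) * dt

# hypothetical 1 Hz velocity time series: a one-sided pulse that leaves
# a permanent (coseismic static) offset in the integrated displacement
dt = 1.0
v = np.zeros(100)
v[10:20] = 0.05                      # 0.05 m/s sustained for 10 s
disp = variometric_displacement(v, dt)  # ends at a ~0.5 m static offset
```

The permanent offset at the end of the integrated series is the analogue of the 1-2 m static offsets reported at near-field stations.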
NASA Astrophysics Data System (ADS)
Athron, Peter; Balázs, Csaba; Dal, Lars A.; Edsjö, Joakim; Farmer, Ben; Gonzalo, Tomás E.; Kvellestad, Anders; McKay, James; Putze, Antje; Rogan, Chris; Scott, Pat; Weniger, Christoph; White, Martin
2018-01-01
We present the GAMBIT modules SpecBit, DecayBit and PrecisionBit. Together they provide a new framework for linking publicly available spectrum generators, decay codes and other precision observable calculations in a physically and statistically consistent manner. This allows users to automatically run various combinations of existing codes as if they are a single package. The modular design allows software packages fulfilling the same role to be exchanged freely at runtime, with the results presented in a common format that can easily be passed to downstream dark matter, collider and flavour codes. These modules constitute an essential part of the broader GAMBIT framework, a major new software package for performing global fits. In this paper we present the observable calculations, data, and likelihood functions implemented in the three modules, as well as the conventions and assumptions used in interfacing them with external codes. We also present 3-BIT-HIT, a command-line utility for computing mass spectra, couplings, decays and precision observables in the MSSM, which shows how the three modules can easily be used independently of GAMBIT.
NASA Technical Reports Server (NTRS)
Hubbard, W. B.; Dewitt, H. E.
1985-01-01
A model free energy is presented which accurately represents results from 45 high-precision Monte Carlo calculations of the thermodynamics of hydrogen-helium mixtures at pressures of astrophysical and planetophysical interest. The free energy is calculated using free-electron perturbation theory (dielectric function theory), and is an extension of the expression given in an earlier paper in this series. However, it fits the Monte Carlo results more accurately, and is valid for the full range of compositions from pure hydrogen to pure helium. Using the new free energy, the phase diagram of mixtures of liquid metallic hydrogen and helium is calculated and compared with earlier results. Sample results for mixing volumes are also presented, and the new free energy expression is used to compute a theoretical Jovian adiabat and compare the adiabat with results from three-dimensional Thomas-Fermi-Dirac theory. The present theory gives slightly higher densities at pressures of about 10 megabars.
NASA Astrophysics Data System (ADS)
Xu, Weimin; Chen, Shi; Lu, Hongyan
2016-04-01
Integrated gravity is an efficient way to study the spatial and temporal characteristics of dynamics and tectonics. Differential measurements based on continuous and discrete gravity observations are highly competitive, in terms of both efficiency and precision, with single-station results. The differential continuous gravity variation between nearby stations is based on observations from Scintrex g-Phone relative gravimeters at each station; it is combined with repeated mobile relative measurements or absolute results to study regional integrated gravity changes. First, we preprocess the continuous records with the Tsoft software and calculate the theoretical earth tides and ocean tides with the "MT80TW" program, using high-precision tidal parameters from "WPARICET". Atmospheric loading effects and complex drift are strictly accounted for in the procedure. Through the above steps we obtain the continuous gravity at every station and can calculate the continuous gravity variation between nearby stations, which is called the differential continuous gravity change. Then the differential results between related stations are calculated from the repeated gravity measurements, which are carried out once or twice every year around the gravity stations; hence we obtain the discrete gravity results between nearby stations. Finally, the continuous and discrete gravity results for the same related stations are combined, including absolute gravity results if necessary, to obtain the regional integrated gravity changes. These differential gravity results are more accurate and effective for dynamic monitoring, regional hydrologic studies, tectonic activity and other geodynamical research. The time-frequency characteristics of the continuous gravity results are discussed to ensure the accuracy and efficiency of the procedure.
Analysis of RDSS positioning accuracy based on RNSS wide area differential technique
NASA Astrophysics Data System (ADS)
Xing, Nan; Su, RanRan; Zhou, JianHua; Hu, XiaoGong; Gong, XiuQiang; Liu, Li; He, Feng; Guo, Rui; Ren, Hui; Hu, GuangMing; Zhang, Lei
2013-10-01
The BeiDou Navigation Satellite System (BDS) provides a Radio Navigation Service System (RNSS) as well as a Radio Determination Service System (RDSS). RDSS users can obtain positioning by responding to Master Control Center (MCC) inquiries via signals transmitted through a GEO satellite transponder. The positioning result can be calculated by the MCC with an elevation constraint. The primary error sources affecting RDSS positioning accuracy are the RDSS signal transceiver delay, atmospheric transmission delay and GEO satellite position error. During GEO orbit maneuvers, poor orbit forecast accuracy significantly impacts RDSS services. A real-time 3-D orbital correction method based on the wide-area differential technique is proposed to correct the orbital error. Observation results show that the method can successfully improve positioning precision during orbital maneuvers, independently of the RDSS reference station; this improvement can reach up to 50%. Accurate calibration of the RDSS signal transceiver delay and an accurate digital elevation map play a critical role in high-precision RDSS positioning services.
NASA Astrophysics Data System (ADS)
Allen, G.; Shah, A.; Williams, P. I.; Ricketts, H.; Hollingsworth, P.; Kabbabe, K.; Bourn, M.; Pitt, J. R.; Helmore, J.; Lowry, D.; Robinson, R. A.; Finlayson, A.
2017-12-01
Emission controls for CH4 are a part of the Paris Agreement and other national emissions strategies. This work presents a new method for precise quantification of point-source and facility-level methane emission flux rates to inform both the climate science community and policymakers. In this paper, we describe the development of an integrated Unmanned Aerial System (UAS) for the measurement of high-precision in-situ CH4 concentrations. We also describe the development of a mass balance flux calculation model tailored to UAS plume sampling downwind, and the validation of this method using a known emission flux from a controlled release facility. A validation field trial was conducted at the UK Met Office site in Cardington, UK, between 31 Oct and 4 Nov 2016 using the UK National Physical Laboratory's Controlled Release Facility (CRF). A modified DJI S900 hexarotor UAS was tethered via an inlet to a ground-based Los Gatos Ultraportable Greenhouse Gas Analyser to record geospatially referenced methane (and carbon dioxide) concentrations. Methane fluxes from the CRF were emitted at 5 kg/hr and 10 kg/hr in a series of blind trials (fluxes were not reported to the team prior to the calculation of the UAS-derived flux) over a total of 7 UAS flights, each lasting around 20 minutes and sampling 200 m downwind of the source(s). The flux calculation method was adapted for sampling downwind of an emission source whose plume has not had sufficient time to develop a Gaussian morphology. The UAS-measured methane fluxes, and the representative flux uncertainty (derived from an error propagation model), were found to compare well with the controlled CH4 emission rate. For the 7 experiments, the standard error between the measured and emitted CH4 flux was +/-6% with a mean bias of +0.4 kg/hr. Limits of flux sensitivity (to within 25% uncertainty) were found to extend to as little as 0.12 kg/h.
Further improvements to the accuracy of flux calculation could be made by appropriate onboard measurement of wind speed and direction. This system would yield highly precise flux snapshots (case studies) of methane sources of interest such as oil and gas infrastructure, landfill, and urban environments, to help in the validation of bottom-up emission inventories and in identifying and mitigating so-called super-emitter facilities.
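A mass-balance flux of the kind described integrates concentration enhancement times wind speed over the sampled downwind plane. A minimal sketch, in which the enhancement grid, wind speed and plane dimensions are hypothetical and the ppm-to-density conversion assumes the ideal gas law (this is the generic mass-balance form, not the authors' exact model):

```python
import numpy as np

def mass_balance_flux(enhancement_ppm, u_wind, dy, dz, p=101325.0, T=288.15):
    """Mass-balance flux: sum of (enhancement x wind) over the sampled
    downwind plane. Enhancements in ppm; returns kg/hr of CH4."""
    R = 8.314            # J/(mol K)
    M_CH4 = 16.04e-3     # kg/mol
    mol_per_m3 = p / (R * T)                        # total air, mol/m^3
    kg_per_m3_per_ppm = mol_per_m3 * 1e-6 * M_CH4   # CH4 mass per ppm of air
    flux_kg_s = (enhancement_ppm * kg_per_m3_per_ppm * u_wind).sum() * dy * dz
    return flux_kg_s * 3600.0

# hypothetical 10 x 10 grid of enhancements (ppm) on a 50 m x 50 m plane,
# 5 m cell spacing, 3 m/s wind perpendicular to the plane
grid = np.full((10, 10), 0.02)
flux = mass_balance_flux(grid, u_wind=3.0, dy=5.0, dz=5.0)  # ~0.37 kg/hr
```

In practice the enhancement is measured concentration minus the background, and the wind field is the quantity the authors suggest measuring onboard to improve accuracy.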
Snyder, David A; Montelione, Gaetano T
2005-06-01
An important open question in the field of NMR-based biomolecular structure determination is how best to characterize the precision of the resulting ensemble of structures. Typically, the RMSD, as minimized in superimposing the ensemble of structures, is the preferred measure of precision. However, the presence of poorly determined atomic coordinates and multiple "RMSD-stable domains" (locally well-defined regions that are not aligned in global superimpositions) complicates RMSD calculations. In this paper, we present a method, based on a novel, structurally defined order parameter, for identifying a set of core atoms to use in determining superimpositions for RMSD calculations. In addition we present a method for deciding whether to partition that core atom set into "RMSD-stable domains" and, if so, how to determine partitioning of the core atom set. We demonstrate our algorithm and its application in calculating statistically sound RMSD values by applying it to a set of NMR-derived structural ensembles, superimposing each RMSD-stable domain (or the entire core atom set, where appropriate) found in each protein structure under consideration. A parameter calculated by our algorithm using a novel, kurtosis-based criterion, the epsilon-value, is a measure of precision of the superimposition that complements the RMSD. In addition, we compare our algorithm with previously described algorithms for determining core atom sets. The methods presented in this paper for biomolecular structure superimposition are quite general, and have application in many areas of structural bioinformatics and structural biology.
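The baseline RMSD-after-superposition computation that the paper builds on can be sketched with the standard Kabsch algorithm (this is the conventional method, not the paper's order-parameter-based core-atom selection):

```python
import numpy as np

def superimpose_rmsd(P, Q):
    """RMSD of coordinate set P (N x 3) onto Q (N x 3) after optimal
    rigid-body superposition (Kabsch algorithm via SVD)."""
    Pc = P - P.mean(axis=0)              # remove translations
    Qc = Q - Q.mean(axis=0)
    H = Pc.T @ Qc                        # 3 x 3 covariance matrix
    U, S, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T   # proper rotation (no reflection)
    diff = Pc @ R.T - Qc
    return np.sqrt((diff ** 2).sum() / len(P))

# sanity check: a rotated + translated copy superimposes to ~zero RMSD
rng = np.random.default_rng(1)
P = rng.normal(size=(20, 3))
theta = 0.7
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0, 0.0, 1.0]])
rmsd = superimpose_rmsd(P, P @ Rz.T + np.array([1.0, 2.0, 3.0]))
```

The paper's contribution is deciding *which* atoms (the core set, possibly split into RMSD-stable domains) enter P and Q before this computation is run.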
Lanchon, Cecilia; Custillon, Guillaume; Moreau-Gaudry, Alexandre; Descotes, Jean-Luc; Long, Jean-Alexandre; Fiard, Gaelle; Voros, Sandrine
2016-07-01
To guide the surgeon during laparoscopic or robot-assisted radical prostatectomy, an innovative laparoscopic/ultrasound fusion platform was developed using a motorized 3-dimensional transurethral ultrasound probe. We present what is to our knowledge the first preclinical evaluation of 3-dimensional prostate visualization using transurethral ultrasound and the preliminary results of this new augmented reality. The transurethral probe and laparoscopic/ultrasound registration were tested on realistic prostate phantoms made of standard polyvinyl chloride. The quality of transurethral ultrasound images and the detection of passive markers placed on the prostate surface were evaluated on 2-dimensional dynamic views and 3-dimensional reconstructions. The feasibility, precision and reproducibility of laparoscopic/transurethral ultrasound registration were then determined using 4, 5, 6 and 7 markers to assess the optimal number needed. The root mean square error was calculated for each registration, and the median root mean square error and IQR were calculated according to the number of markers. The transurethral ultrasound probe was easy to manipulate and the prostatic capsule was well visualized in 2 and 3 dimensions. Passive markers could be precisely localized in the volume. Laparoscopic/transurethral ultrasound registration procedures were performed on 74 phantoms of various sizes and shapes; all were successful. The median root mean square error of 1.1 mm (IQR 0.8-1.4) was significantly associated with the number of landmarks (p = 0.001). The highest accuracy was achieved using 6 markers. However, prostate volume did not affect registration precision. Transurethral ultrasound provided high-quality prostate reconstruction and easy marker detection. Laparoscopic/ultrasound registration was successful with acceptable millimeter precision. Further investigations are necessary to achieve sub-millimeter accuracy and assess feasibility in a human model.
Copyright © 2016 American Urological Association Education and Research, Inc. Published by Elsevier Inc. All rights reserved.
Determination of Vitamin E in Cereal Products and Biscuits by GC-FID.
Pasias, Ioannis N; Kiriakou, Ioannis K; Papakonstantinou, Lila; Proestos, Charalampos
2018-01-01
A rapid, precise and accurate method for the determination of vitamin E (α-tocopherol) in cereal products and biscuits has been developed. The uncertainty was calculated for the first time, and the methods were performed for different cereal products and biscuits, characterized as "superfoods". The limits of detection and quantification were calculated. The accuracy and precision were estimated using the certified reference material FAPAS T10112QC, and the determined values were in good accordance with the certified values. The health claims according to the daily reference values for vitamin E were calculated, and the results proved that the majority of the samples examined showed a percentage daily value higher than 15%.
Apparatus for in-situ calibration of instruments that measure fluid depth
Campbell, M.D.
1994-01-11
The present invention provides a method and apparatus for in-situ calibration of distance measuring equipment. The method comprises obtaining a first distance measurement in a first location, then obtaining at least one other distance measurement in at least one other location of a precisely known distance from the first location, and calculating a calibration constant. The method is applied specifically to calculating a calibration constant for obtaining fluid level and embodied in an apparatus using a pressure transducer and a spacer of precisely known length. The calibration constant is used to calculate the depth of a fluid from subsequent single pressure measurements at any submerged position. 8 figures.
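The two-reading calibration described in the patent can be sketched numerically; the pressure values below are hypothetical, and the spacer length plays the role of the precisely known distance between the two measurement locations:

```python
def calibration_constant(p_upper, p_lower, spacer_m):
    """In-situ calibration: two pressure readings taken a precisely known
    vertical distance apart give the pressure-to-depth scale factor
    (metres of fluid per unit pressure)."""
    return spacer_m / (p_lower - p_upper)

def depth(p, p_surface, k):
    """Fluid depth from a single subsequent pressure measurement."""
    return (p - p_surface) * k

# hypothetical transducer readings (kPa) with a 0.5 m spacer in water:
# lowering the transducer 0.5 m raises the reading by ~4.905 kPa
k = calibration_constant(12.000, 16.905, 0.5)
d = depth(30.0, 5.0, k)   # depth for a later single reading, ~2.55 m
```

The appeal of the scheme is that the fluid density and transducer gain never need to be known separately; they are absorbed into the single constant k.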
NASA Astrophysics Data System (ADS)
Aboulbanine, Zakaria; El Khayati, Naïma
2018-04-01
The use of phase space in medical linear accelerator Monte Carlo (MC) simulations significantly improves the execution time and leads to results comparable to those obtained from full calculations. The classical representation of phase space stores directly the information of millions of particles, producing bulky files. This paper presents a virtual source model (VSM) based on a reconstruction algorithm, taking as input a compressed file of roughly 800 kb derived from phase space data freely available in the International Atomic Energy Agency (IAEA) database. This VSM includes two main components, primary and scattered particle sources, with a specific reconstruction method developed for each. Energy spectra and other relevant variables were extracted from the IAEA phase space and stored in the input description data file for both sources. The VSM was validated for three photon beams: Elekta Precise 6 MV/10 MV and a Varian TrueBeam 6 MV. Extensive calculations in water and comparisons between dose distributions of the VSM and the IAEA phase space were performed to estimate the VSM precision. The Geant4 MC toolkit in multi-threaded mode (Geant4-[mt]) was used for fast dose calculations and optimized memory use. Four field configurations were chosen for dose calculation validation to test field size and symmetry effects: square fields of several sizes and an asymmetric rectangular field. Good agreement, in terms of the gamma formalism for 3%/3 mm and 2%/3 mm criteria, was obtained for each evaluated radiation field and photon beam within a computation time of 60 h on a single workstation for a 3 mm voxel matrix. Analyzing the VSM's precision in high-dose-gradient regions using the distance-to-agreement (DTA) concept also showed satisfactory results. In all investigated cases, the mean DTA was less than 1 mm in the build-up and penumbra regions.
In regard to calculation efficiency, the event processing speed is six times faster using Geant4-[mt] compared to sequential Geant4 when running the same simulation code. The developed VSM for the widely used 6 MV/10 MV beams is a general concept that is easy to adapt in order to reconstruct comparable beam qualities for various linac configurations, facilitating its integration for MC treatment planning purposes.
High-precision Penning trap mass measurements of 9,10Be and the one-neutron halo nuclide 11Be
NASA Astrophysics Data System (ADS)
Ringle, R.; Brodeur, M.; Brunner, T.; Ettenauer, S.; Smith, M.; Lapierre, A.; Ryjkov, V. L.; Delheij, P.; Drake, G. W. F.; Lassen, J.; Lunney, D.; Dilling, J.
2009-05-01
Penning trap mass measurements of 9Be, 10Be (t1/2 = 1.51 My), and the one-neutron halo nuclide 11Be (t1/2 = 13.8 s) have been performed using TITAN at TRIUMF. The resulting 11Be mass excess (ME = 20 177.60 (58) keV) is in agreement with the current Atomic Mass Evaluation (AME03) [G. Audi, et al., Nucl. Phys. A 729 (2003) 337] value, but is over an order of magnitude more precise. The precision of the mass values of 9,10Be has been improved by about a factor of four and reveals a ≈2σ deviation from the AME mass values. Results of new atomic physics calculations are presented for the isotope shift of 11Be relative to 9Be, and it is shown that the new mass values essentially remove atomic mass uncertainties as a contributing factor in determining the relative nuclear charge radius from the isotope shift. The new mass values of 10,11Be also allow for a more precise determination of the single-neutron binding energy of the halo neutron in 11Be.
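The single-neutron binding energy mentioned at the end follows from the standard mass-excess relation S_n(11Be) = ME(10Be) + ME(n) - ME(11Be). A sketch using the 11Be value reported in this work; the 10Be and neutron mass excesses below are approximate AME values and should be checked against the evaluation before use:

```python
# One-neutron separation energy of 11Be from mass excesses (all in keV):
# S_n(11Be) = ME(10Be) + ME(n) - ME(11Be)
ME_10BE = 12607.5     # approximate AME value (assumption)
ME_N = 8071.3         # neutron mass excess, approximate
ME_11BE = 20177.60    # value reported in this abstract

s_n = ME_10BE + ME_N - ME_11BE   # ~0.5 MeV: a very weakly bound halo neutron
```

The ~500 keV result is what makes 11Be a textbook one-neutron halo: the last neutron is bound by far less than the typical ~8 MeV per nucleon.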
Precise CCD positions of Himalia using Gaia DR1 in 2015-2016
NASA Astrophysics Data System (ADS)
Peng, H. W.; Peng, Q. Y.; Wang, N.
2017-05-01
In order to obtain high-precision CCD positions of Himalia, the sixth Jovian satellite, a total of 598 CCD observations were obtained during the years 2015-2016. The observations were made using the 2.4 and 1 m telescopes administered by Yunnan Observatories over 27 nights. Several factors that would influence the positional precision of Himalia were analysed, including the reference star catalogue used, the geometric distortion and the phase effect. The recently released Gaia Data Release 1 catalogue, with its unprecedented positional precision, was chosen to match reference stars in the CCD frames of both Himalia and open clusters, the latter observed for deriving the geometric distortion. The latest version of the SOFA library was used to calculate the positions of reference stars. The theoretical positions of Himalia were retrieved from the Jet Propulsion Laboratory Horizons System, which includes the satellite ephemeris JUP300, while the positions of Jupiter were based on the planetary ephemeris DE431. Our results show that the means of the observed minus computed (O - C) residuals are 0.071 and -0.001 arcsec in right ascension and declination, respectively. Their standard deviations are estimated at about 0.03 arcsec in each direction.
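The reported (O - C) statistics are plain residual means and standard deviations over the observation set; a minimal sketch with hypothetical residuals (the real values come from 598 observations):

```python
import numpy as np

def o_minus_c_stats(observed, computed):
    """Mean and sample standard deviation of (O - C) residuals (arcsec)."""
    res = np.asarray(observed) - np.asarray(computed)
    return res.mean(), res.std(ddof=1)

# hypothetical right-ascension residuals (arcsec) for a handful of frames
mean, std = o_minus_c_stats([0.10, 0.05, 0.08, 0.06, 0.07], [0.0] * 5)
```

A systematic non-zero mean (as in the paper's 0.071 arcsec in right ascension) points to an ephemeris offset rather than random measurement noise.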
NASA Astrophysics Data System (ADS)
Sun, Jiwen; Wei, Ling; Fu, Danying
2002-01-01
The camera is designed for high resolution and a wide swath. To ensure that its high optical precision survives the rigorous dynamic loads of launch, the camera must have high structural rigidity; a careful study of the dynamic characteristics of the camera structure is therefore required. A precise CAD model of the camera was built in Pro/E, and an interference examination was performed on it to refine the structural design. The structural dynamics analysis of the camera was accomplished, for the first time in China, by applying the structural analysis codes PATRAN and NASTRAN. The main research items include: 1) comparative modal analyses of the critical structure of the camera using 4-node and 10-node tetrahedral elements, respectively, in order to confirm the most reasonable general model; 2) modal analyses of the camera for several cases, from which the natural frequencies and mode shapes were obtained, further confirming the rationality of the structural design; 3) static analysis of the camera under self-gravity and overloads, giving the corresponding deformation and stress distributions; 4) response calculations for sine vibration of the camera, giving the response curves and the maximum acceleration responses with their corresponding frequencies. The combined modeling and analysis technique proved accurate and efficient. Based on the results and their sensitivity, the dynamic design and engineering optimization of the critical structure of the camera are discussed, providing fundamental technology for the design of forthcoming space optical instruments.
Directly measuring of thermal pulse transfer in one-dimensional highly aligned carbon nanotubes
Zhang, Guang; Liu, Changhong; Fan, Shoushan
2013-01-01
Using a simple and precise instrument system, we directly measured the thermo-physical properties of one-dimensional highly aligned carbon nanotubes (CNTs). A CNT-based macroscopic material named super-aligned carbon nanotube (SACNT) buckypaper was measured in our experiment. We defined a new one-dimensional parameter, the “thermal transfer speed”, to characterize the thermal damping mechanisms in the SACNT buckypapers. Our results indicated that SACNT buckypapers with different densities have distinctly different thermal transfer speeds. Furthermore, we found that the thermal transfer speed of high-density SACNT buckypapers may have an obvious damping factor along the CNT-aligned direction. The anisotropic thermal diffusivities of SACNT buckypapers can be calculated from the thermal transfer speeds. The thermal diffusivities increase markedly as the buckypaper density increases. For parallel SACNT buckypapers, the thermal diffusivity can be as high as 562.2 ± 55.4 mm²/s. The thermal conductivities of these SACNT buckypapers were also calculated by the equation k = Cpαρ. PMID:23989589
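The final conversion k = Cp·α·ρ is simple arithmetic once the specific heat and density are known. A sketch using the measured diffusivity; the Cp and ρ values below are placeholder assumptions for illustration only, not the paper's measured ones:

```python
# Thermal conductivity from diffusivity: k = Cp * alpha * rho
alpha = 562.2e-6     # m^2/s  (the measured 562.2 mm^2/s, converted)
cp = 700.0           # J/(kg K), assumed specific heat (placeholder)
rho = 300.0          # kg/m^3, assumed buckypaper density (placeholder)

k = cp * alpha * rho  # W/(m K)
```

Note the unit conversion: 1 mm²/s = 1e-6 m²/s, so keeping everything in SI gives k directly in W/(m·K).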
Bicubic uniform B-spline wavefront fitting technology applied in computer-generated holograms
NASA Astrophysics Data System (ADS)
Cao, Hui; Sun, Jun-qiang; Chen, Guo-jie
2006-02-01
This paper presents a bicubic uniform B-spline wavefront fitting technique to derive the analytical expression for the object wavefront used in computer-generated holograms (CGHs). In many cases, to decrease the difficulty of optical processing, off-axis CGHs rather than complex aspherical surface elements are used in modern advanced military optical systems. In order to design and fabricate an off-axis CGH, the analytical expression for the object wavefront must be fitted. Zernike polynomials are well suited to fitting the wavefronts of rotationally symmetric optical systems, but not those of off-axis systems. Although a high-degree polynomial fit achieves high precision at the fitting nodes, its greatest shortcoming is that any departure from the fitting nodes can result in large fitting error, the so-called pulsation phenomenon. Furthermore, high-degree polynomial fitting increases the calculation time in coding the computer-generated hologram and solving the basic equation. Based on the basis functions of the cubic uniform B-spline and the characteristic mesh of the bicubic uniform B-spline wavefront, the bicubic uniform B-spline wavefront is described as the product of a series of matrices. Employing standard MATLAB routines, four different analytical expressions for object wavefronts are fitted with bicubic uniform B-splines as well as with high-degree polynomials. The calculation results indicate that, compared with high-degree polynomials, the bicubic uniform B-spline is the more competitive method for fitting the analytical expression of the object wavefront used in an off-axis CGH, owing to its higher fitting precision and C2 continuity.
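The "product of a series of matrices" for one bicubic uniform B-spline patch is S(u,v) = U·M·P·Mᵀ·Vᵀ, with the standard cubic uniform B-spline basis matrix M. A minimal numpy sketch (the paper uses MATLAB; this is the same matrix form):

```python
import numpy as np

# Standard cubic uniform B-spline basis matrix (with the 1/6 factor),
# rows ordered for the monomial vector [u^3, u^2, u, 1]
M = np.array([[-1,  3, -3, 1],
              [ 3, -6,  3, 0],
              [-3,  0,  3, 0],
              [ 1,  4,  1, 0]]) / 6.0

def bicubic_bspline_point(P, u, v):
    """Evaluate one bicubic uniform B-spline patch at (u, v) in [0,1]^2.
    P is the 4 x 4 grid of control values (e.g. wavefront heights):
    S(u, v) = U M P M^T V^T."""
    U = np.array([u**3, u**2, u, 1.0])
    V = np.array([v**3, v**2, v, 1.0])
    return U @ M @ P @ M.T @ V

# constant control grid -> constant surface (the basis is a partition of unity)
val = bicubic_bspline_point(np.ones((4, 4)), 0.3, 0.7)
```

Because adjacent patches share three rows/columns of control points, the stitched surface is C2-continuous, which is the property the paper highlights over high-degree polynomial fits.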
Removing function model and experiments on ultrasonic polishing molding die
NASA Astrophysics Data System (ADS)
Huang, Qitai; Ni, Ying; Yu, Jingchi
2010-10-01
Low-temperature glass molding is expected to become the main method for volume production of high-precision small- and medium-diameter optical elements. Because the accuracy of the molding die directly affects the precision of the element, the development of high-precision molding dies is one of the most important parts of low-temperature glass molding technology. The molding die is manufactured from a rigid, brittle metal alloy. With the high vibration frequency and concentrated energy distribution characteristic of ultrasonic vibration, abrasive particles impact the rigid alloy surface at very high speed and remove material from the workpiece. Ultrasonic machining thus makes polishing of the rigid alloy molding die controllable and reduces its roughness and surface error. Unlike other ultrasonic fabrication methods, non-contact ultrasonic polishing is applied to the molding die: the tool does not touch the workpiece during polishing. The abrasive particles vibrate around their equilibrium positions at high speed and frequency, driven by the ultrasonic vibration in the liquid medium, and impact the workpiece surface; the energy of the abrasive particles comes from the ultrasonic vibration rather than from direct hammering by the tool. A disc-shaped vibrator in simple harmonic vibration on an infinite plane surface is therefore taken as the model of the ultrasonic polishing condition. According to Huygens' theory, the sound-field distribution on the plane surface is analyzed and calculated, and the tool removal function is deduced from this distribution. A single-point ultrasonic polishing experiment was then performed to verify the validity of the theory.
Shahbi, M; Rajabpour, A
2017-08-01
Phthorimaea operculella Zeller is an important pest of potato in Iran. Spatial distribution and fixed-precision sequential sampling for population estimation of the pest on two potato cultivars, Arinda® and Sante®, were studied in two separate potato fields during two growing seasons (2013-2014 and 2014-2015). Spatial distribution was investigated with Taylor's power law and Iwao's patchiness regression. The results showed that the spatial distribution of eggs and larvae was random. In contrast to Iwao's patchiness regression, Taylor's power law provided a highly significant relationship between variance and mean density. A fixed-precision sequential sampling plan was therefore developed with Green's model at two precision levels, 0.25 and 0.1. The optimum sample size on the Arinda® and Sante® cultivars at the 0.25 precision level ranged from 151 to 813 and from 149 to 802 leaves, respectively. At the 0.1 precision level, the sample sizes varied from 1054 to 5083 and from 1050 to 5100 leaves for the Arinda® and Sante® cultivars, respectively. The optimum sample sizes for the two cultivars, despite their different resistance levels, were therefore not significantly different. According to the calculated stop lines, sampling must continue until the cumulative number of eggs plus larvae reaches 15-16 or 96-101 individuals at the 0.25 or 0.1 precision level, respectively. The performance of the sampling plan was validated by resampling analysis using Resampling for Validation of Sampling Plans software. The sampling plan provided in this study can be used to obtain a rapid estimate of pest density with minimal effort.
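Green's stop line, named above, follows from Taylor's power law s² = a·mᵇ and the fixed precision D = SE/mean; solving for the cumulative count T_n after n samples gives ln T_n = ln(D²/a)/(b−2) + [(b−1)/(b−2)]·ln n. A sketch with hypothetical Taylor coefficients (the paper's fitted a and b are not given in the abstract):

```python
import math

def green_stop_line(n, a, b, D):
    """Cumulative count T_n at which sampling may stop after n sample units.

    a, b: Taylor's power law coefficients; D: fixed precision (SE/mean).
    """
    return math.exp(math.log(D**2 / a) / (b - 2)
                    + ((b - 1) / (b - 2)) * math.log(n))

a, b = 2.0, 1.3  # hypothetical Taylor's power law coefficients, for illustration
for D in (0.25, 0.1):
    print(f"D = {D}: stop line after 200 leaves = {green_stop_line(200, a, b, D):.1f}")
```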
Calculations of dose distributions using a neural network model
NASA Astrophysics Data System (ADS)
Mathieu, R.; Martin, E.; Gschwind, R.; Makovicka, L.; Contassot-Vivier, S.; Bahi, J.
2005-03-01
The main goal of external beam radiotherapy is the treatment of tumours while sparing, as much as possible, the surrounding healthy tissues. In order to master and optimize the dose distribution within the patient, dosimetric planning has to be carried out. Thus, for determining the most accurate dose distribution during treatment planning, a compromise must be found between the precision and the speed of calculation. Current techniques, using analytic methods, models and databases, are rapid but lack precision. Enhanced precision can be achieved by using calculation codes based, for example, on Monte Carlo methods. However, in spite of all efforts to optimize speed (methods and computer improvements), Monte Carlo based methods remain painfully slow. A newer way to handle these problems is to employ neural networks for dosimetric calculation. Neural networks (Wu and Zhu 2000 Phys. Med. Biol. 45 913-22) provide the advantages of the various approaches above while avoiding their main inconvenience, i.e., time-consuming calculations. This permits quick and accurate results during clinical treatment planning. Currently, results for a single depth-dose calculation using a Monte Carlo based code (such as BEAM (Rogers et al 2003 NRCC Report PIRS-0509(A) rev G)) require hours of computing. By contrast, the practical use of neural networks (Mathieu et al 2003 Proceedings Journées Scientifiques Francophones, SFRP) provides almost instant results with quite low errors (less than 2%) for a two-dimensional dosimetric map.
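The speed argument above rests on the fact that, once trained, a network's inference is a handful of matrix products. An illustrative sketch (not the authors' network; the depth-dose curve is synthetic and hypothetical) of fitting a tiny one-hidden-layer network to a 1D depth-dose profile in plain NumPy:

```python
import numpy as np

rng = np.random.default_rng(0)
z = np.linspace(0.0, 1.0, 64)[:, None]                 # normalized depth
dose = np.exp(-3.0 * z) * (1.0 - np.exp(-25.0 * z))    # synthetic depth-dose curve

# One hidden layer of 16 tanh units, trained by full-batch gradient descent.
W1 = rng.normal(0, 1, (1, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 1, (16, 1)); b2 = np.zeros(1)

def forward(x):
    h = np.tanh(x @ W1 + b1)
    return h @ W2 + b2, h

lr = 0.05
for _ in range(20000):
    y, h = forward(z)
    err = y - dose                                     # residual
    gW2 = h.T @ err / len(z); gb2 = err.mean(0)        # output-layer gradients
    gh = err @ W2.T * (1 - h**2)                       # backprop through tanh
    gW1 = z.T @ gh / len(z); gb1 = gh.mean(0)
    W2 -= lr * gW2; b2 -= lr * gb2; W1 -= lr * gW1; b1 -= lr * gb1

mse = float(((forward(z)[0] - dose) ** 2).mean())
print("final MSE:", mse)   # inference itself is just two small matrix products
```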
Calculation of precision satellite orbits with nonsingular elements /VOP formulation/
NASA Technical Reports Server (NTRS)
Velez, C. E.; Cefola, P. J.; Long, A. C.; Nimitz, K. S.
1974-01-01
Review of some results obtained in an effort to develop efficient, high-precision trajectory computation processes for artificial satellites by optimum selection of the form of the equations of motion of the satellite and the numerical integration method. In particular, the matching of a Gaussian variation-of-parameter (VOP) formulation is considered which is expressed in terms of equinoctial orbital elements and partially decouples the motion of the orbital frame from motion within the orbital frame. The performance of the resulting orbit generators is then compared with the popular classical Cowell/Gauss-Jackson formulation/integrator pair for two distinctly different orbit types - namely, the orbit of the ATS satellite at near-geosynchronous conditions and the near-circular orbit of the GEOS-C satellite at 1000 km.
Covariant spectator theory of np scattering: Deuteron quadrupole moment
Gross, Franz
2015-01-26
The deuteron quadrupole moment is calculated using two CST model wave functions obtained from the 2007 high precision fits to np scattering data. Included in the calculation are a new class of isoscalar np interaction currents automatically generated by the nuclear force model used in these fits. The prediction for model WJC-1, with larger relativistic P-state components, is 2.5% smaller than the experimental result, in common with the inability of models prior to 2014 to predict this important quantity. However, model WJC-2, with very small P-state components, gives agreement to better than 1%, similar to the results obtained recently from χEFT predictions to order N3LO.
Shock Hugoniot of single crystal copper
NASA Astrophysics Data System (ADS)
Chau, R.; Stölken, J.; Asoka-Kumar, P.; Kumar, M.; Holmes, N. C.
2010-01-01
The shock Hugoniot of single crystal copper is reported for stresses below 66 GPa. Symmetric impact experiments were used to measure the Hugoniots of three different crystal orientations of copper, [100], [110], and [111]. The photonic Doppler velocimetry (PDV) diagnostic was adapted into a very high precision time-of-arrival detector for these experiments. The measured Hugoniots along all three crystal directions were nearly identical to the experimental Hugoniot for polycrystalline Cu. The predicted orientation dependence of the Hugoniot from molecular dynamics calculations was not observed. At the lowest stresses, the sound speed in Cu was extracted from the PDV data. The measured sound speeds are in agreement with values calculated from the elastic constants for Cu.
Calculations Supporting Management Zones
USDA-ARS?s Scientific Manuscript database
Since the early 1990s, the tools of precision farming (GPS, yield monitors, soil sensors, etc.) have documented how spatial and temporal variability are important factors impacting crop yield response. For precision farming, variability can be measured and then used to divide up a field so that manageme...
Jeong, Jeho; Chen, Qing; Febo, Robert; Yang, Jie; Pham, Hai; Xiong, Jian-Ping; Zanzonico, Pat B.; Deasy, Joseph O.; Humm, John L.; Mageras, Gig S.
2016-01-01
Although spatially precise systems are now available for small-animal irradiations, there are currently limited software tools available for treatment planning for such irradiations. We report on the adaptation, commissioning, and evaluation of a 3-dimensional treatment planning system for use with a small-animal irradiation system. The 225-kV X-ray beam of the X-RAD 225Cx microirradiator (Precision X-Ray) was commissioned using both ion-chamber and radiochromic film for 10 different collimators ranging in field size from 1 mm in diameter to 40 × 40 mm2. A clinical 3-dimensional treatment planning system (Metropolis) developed at our institution was adapted to small-animal irradiation by making it compatible with the dimensions of mice and rats, modeling the microirradiator beam orientations and collimators, and incorporating the measured beam data for dose calculation. Dose calculations in Metropolis were verified by comparison with measurements in phantoms. Treatment plans for irradiation of a tumor-bearing mouse were generated with both the Metropolis and the vendor-supplied software. The calculated beam-on times and the plan evaluation tools were compared. The dose rate at the central axis ranges from 74 to 365 cGy/min depending on the collimator size. Doses calculated with Metropolis agreed with phantom measurements within 3% for all collimators. The beam-on times calculated by Metropolis and the vendor-supplied software agreed within 1% at the isocenter. The modified 3-dimensional treatment planning system provides better visualization of the relationship between the X-ray beams and the small-animal anatomy as well as more complete dosimetric information on target tissues and organs at risk. It thereby enhances the potential of image-guided microirradiator systems for evaluation of dose–response relationships and for preclinical experimentation generally. PMID:25948321
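The beam-on-time check described above reduces to prescribed dose divided by the collimator's central-axis dose rate. A sketch using the 74-365 cGy/min range reported in the abstract; the mapping of those two rates to specific collimators and the prescription value are hypothetical assumptions for illustration:

```python
# Per-collimator central-axis dose rates, cGy/min. The two values span the
# range quoted in the abstract; their assignment to collimators is assumed.
dose_rate_cgy_per_min = {
    "1 mm circle": 74.0,
    "40x40 mm2": 365.0,
}

prescribed_dose_cgy = 400.0  # hypothetical 4 Gy prescription

for collimator, rate in dose_rate_cgy_per_min.items():
    t_min = prescribed_dose_cgy / rate  # beam-on time in minutes
    print(f"{collimator}: beam-on time = {t_min:.2f} min")
```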
The NANOGrav 11-year Data Set: High-precision Timing of 45 Millisecond Pulsars
NASA Astrophysics Data System (ADS)
Arzoumanian, Zaven; Brazier, Adam; Burke-Spolaor, Sarah; Chamberlin, Sydney; Chatterjee, Shami; Christy, Brian; Cordes, James M.; Cornish, Neil J.; Crawford, Fronefield; Thankful Cromartie, H.; Crowter, Kathryn; DeCesar, Megan E.; Demorest, Paul B.; Dolch, Timothy; Ellis, Justin A.; Ferdman, Robert D.; Ferrara, Elizabeth C.; Fonseca, Emmanuel; Garver-Daniels, Nathan; Gentile, Peter A.; Halmrast, Daniel; Huerta, E. A.; Jenet, Fredrick A.; Jessup, Cody; Jones, Glenn; Jones, Megan L.; Kaplan, David L.; Lam, Michael T.; Lazio, T. Joseph W.; Levin, Lina; Lommen, Andrea; Lorimer, Duncan R.; Luo, Jing; Lynch, Ryan S.; Madison, Dustin; Matthews, Allison M.; McLaughlin, Maura A.; McWilliams, Sean T.; Mingarelli, Chiara; Ng, Cherry; Nice, David J.; Pennucci, Timothy T.; Ransom, Scott M.; Ray, Paul S.; Siemens, Xavier; Simon, Joseph; Spiewak, Renée; Stairs, Ingrid H.; Stinebring, Daniel R.; Stovall, Kevin; Swiggum, Joseph K.; Taylor, Stephen R.; Vallisneri, Michele; van Haasteren, Rutger; Vigeland, Sarah J.; Zhu, Weiwei; The NANOGrav Collaboration
2018-04-01
We present high-precision timing data over time spans of up to 11 years for 45 millisecond pulsars observed as part of the North American Nanohertz Observatory for Gravitational Waves (NANOGrav) project, aimed at detecting and characterizing low-frequency gravitational waves. The pulsars were observed with the Arecibo Observatory and/or the Green Bank Telescope at frequencies ranging from 327 MHz to 2.3 GHz. Most pulsars were observed with approximately monthly cadence, and six high-timing-precision pulsars were observed weekly. All were observed at widely separated frequencies at each observing epoch in order to fit for time-variable dispersion delays. We describe our methods for data processing, time-of-arrival (TOA) calculation, and the implementation of a new, automated method for removing outlier TOAs. We fit a timing model for each pulsar that includes spin, astrometric, and (for binary pulsars) orbital parameters; time-variable dispersion delays; and parameters that quantify pulse-profile evolution with frequency. The timing solutions provide three new parallax measurements, two new Shapiro delay measurements, and two new measurements of significant orbital-period variations. We fit models that characterize sources of noise for each pulsar. We find that 11 pulsars show significant red noise, with generally smaller spectral indices than typically measured for non-recycled pulsars, possibly suggesting a different origin. A companion paper uses these data to constrain the strength of the gravitational-wave background.
NASA Astrophysics Data System (ADS)
Wang, Yubing; Yin, Weihong; Han, Qin; Yang, Xiaohong; Ye, Han; Lü, Qianqian; Yin, Dongdong
2017-04-01
Graphene field-effect transistors have been intensively studied. However, in order to fabricate devices with more complicated structures, such as integration with waveguides and other two-dimensional materials, exfoliated graphene samples need to be transferred to a target position. Due to the small area of exfoliated graphene and its random distribution, the transfer method requires rather high precision. In this paper, we systematically study a method to selectively transfer mechanically exfoliated graphene samples to a target position with sub-micrometer precision. To characterize the doping level of this method, we transfer graphene flakes onto pre-patterned metal electrodes, forming graphene field-effect transistors. The hole doping of the graphene is calculated to be 2.16 × 10¹² cm⁻². In addition, we fabricate a waveguide-integrated multilayer graphene photodetector to demonstrate the viability and accuracy of this method. A photocurrent as high as 0.4 μA is obtained, corresponding to a photoresponsivity of 0.48 mA/W. The device performs uniformly over nine illumination cycles. Project supported by the National Key Research and Development Program of China (No. 2016YFB0402404), the High-Tech Research and Development Program of China (Nos. 2013AA031401, 2015AA016902, 2015AA016904), and the National Natural Science Foundation of China (Nos. 61674136, 61176053, 61274069, 61435002).
Schwientek, Marc; Guillet, Gaëlle; Rügner, Hermann; Kuch, Bertram; Grathwohl, Peter
2016-01-01
Increasing numbers of organic micropollutants are emitted into rivers via municipal wastewaters. Due to their persistence, many pollutants pass wastewater treatment plants without substantial removal. Transport and fate of pollutants in receiving waters and their export to downstream ecosystems are not well understood. In particular, a better knowledge of the processes governing their environmental behavior is needed. Although many data are available concerning the ubiquitous presence of micropollutants in rivers, accurate data on transport and removal rates are lacking. In this paper, a mass balance approach is presented, which is based on the Lagrangian sampling scheme but extended to account for precise transport velocities and mixing along river stretches. The calculated mass balances allow accurate quantification of pollutant reactivity along river segments. This is demonstrated for representative members of important groups of micropollutants, e.g. pharmaceuticals, musk fragrances, flame retardants, and pesticides. A model-aided analysis of the measured data series gives insight into the temporal dynamics of removal processes. The occurrence of different removal mechanisms such as photooxidation, microbial degradation, and volatilization is discussed. The results demonstrate that removal processes are highly variable in time and space and that this has to be considered in future studies. The high-precision sampling scheme presented could be a powerful tool for quantifying removal processes under different boundary conditions and in river segments with contrasting properties.
QED Effects in Molecules: Test on Rotational Quantum States of H2
NASA Astrophysics Data System (ADS)
Salumbides, E. J.; Dickenson, G. D.; Ivanov, T. I.; Ubachs, W.
2011-07-01
Quantum electrodynamic effects have been systematically tested in the progression of rotational quantum states in the X¹Σg⁺, v = 0 vibronic ground state of molecular hydrogen. High-precision Doppler-free spectroscopy of the EF¹Σg⁺-X¹Σg⁺ (0,0) band was performed with 0.005 cm⁻¹ accuracy on rotationally hot H2 (with rotational quantum states J up to 16). QED and relativistic contributions to rotational level energies as high as 0.13 cm⁻¹ are extracted, and are in perfect agreement with recent calculations of QED and high-order relativistic effects for the H2 ground state.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Park, Mi-Ae; Moore, Stephen C.; McQuaid, Sarah J.
Purpose: The authors have previously reported the advantages of high-sensitivity single-photon emission computed tomography (SPECT) systems for imaging structures located deep inside the brain. DaTscan (Isoflupane I-123) is a dopamine transporter (DaT) imaging agent that has shown potential for early detection of Parkinson disease (PD), as well as for monitoring progression of the disease. Realizing the full potential of DaTscan requires efficient estimation of striatal uptake from SPECT images. They have evaluated two SPECT systems, a conventional dual-head gamma camera with low-energy high-resolution collimators (conventional) and a dedicated high-sensitivity multidetector cardiac imaging system (dedicated), for imaging tasks related to PD. Methods: Cramer-Rao bounds (CRB) on precision of estimates of striatal and background activity concentrations were calculated from high-count, separate acquisitions of the compartments (right striata, left striata, background) of a striatal phantom. CRB on striatal and background activity concentration were calculated from essentially noise-free projection datasets, synthesized by scaling and summing the compartment projection datasets, for a range of total detected counts. They also calculated variances of estimates of specific-to-nonspecific binding ratios (BR) and asymmetry indices from these values using propagation of error analysis, as well as the precision of measuring changes in BR on the order of the average annual decline in early PD. Results: Under typical clinical conditions, the conventional camera detected 2 M counts while the dedicated camera detected 12 M counts. Assuming a normal BR of 5, the standard deviation of BR estimates was 0.042 and 0.021 for the conventional and dedicated system, respectively. For an 8% decrease to BR = 4.6, the signal-to-noise ratios were 6.8 (conventional) and 13.3 (dedicated); for a 5% decrease, they were 4.2 (conventional) and 8.3 (dedicated).
Conclusions: This implies that PD can be detected earlier with the dedicated system than with the conventional system; therefore, earlier identification of PD progression should be possible with the high-sensitivity dedicated SPECT camera.
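The quoted signal-to-noise ratios are consistent with a simple change-detection model: the change ΔBR divided by the standard deviation of the difference of two independent estimates, √2·σ. A sketch of that calculation, assuming this √2 model (it reproduces the reported figures to within rounding):

```python
import math

def change_snr(br, fractional_drop, sigma):
    """SNR for detecting a fractional drop in binding ratio between two scans,
    each with estimate standard deviation sigma."""
    delta = br * fractional_drop
    return delta / (math.sqrt(2.0) * sigma)

# sigma values from the abstract: 0.042 (conventional), 0.021 (dedicated)
for name, sigma in (("conventional", 0.042), ("dedicated", 0.021)):
    for drop in (0.08, 0.05):
        print(f"{name}, {drop:.0%} drop: SNR = {change_snr(5.0, drop, sigma):.1f}")
```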
Hoffmann, K; Kesners, P; Bader, A; Avermaete, A; Altmeyer, P; Gambichler, T
2001-11-01
Spectrophotometric assessment (in vitro) is the most established method for determining the ultraviolet protection factor (UPF) of textiles. Apart from stringent requirements for measurement precision, practical methods are required for the routine determination of the UPF. We report here spectrophotometric measurements of textiles using a newly developed autosampler. Measurement precision was evaluated under repeatable conditions. Fifteen different textiles were spectrophotometrically assessed for determination of the UPF. Sample handling inside the spectrophotometer was performed with a computer-controlled sampling device capable of loading and unloading a textile sample from a magazine as well as rotating the sample perpendicular to the spectrometer beam. In order to evaluate the repeatability of the measurements, one sample of each textile was assessed eight times under the same conditions in the same laboratory. A mean percentage standard error [E(UPF)] of 1% was calculated for the UPF measurements. For UPFs >30, a significantly higher E(UPF) was found (r=0.78; P<0.001). The E(UV) of ultraviolet A (UVA) transmission (3.9%) differed significantly from the E(UV) of ultraviolet B (UVB) transmission (1.1%) (P<0.05). Although a slight decrease in repeatability was observed for UVA transmission measurements and for UPFs higher than 30, our data indicate high measurement precision under repeatable conditions. In conclusion, spectrophotometric measurements of textiles with the aid of the autosampler presented here have been shown to be highly practical, time saving and precise.
Small field models with gravitational wave signature supported by CMB data
Brustein, Ramy
2018-01-01
We study the scale dependence of the cosmic microwave background (CMB) power spectrum in a class of small, single-field models of inflation which lead to a high value of the tensor-to-scalar ratio. The inflaton potentials that we consider are degree-5 polynomials, for which we precisely calculate the power spectrum and extract the cosmological parameters: the scalar index ns, the running of the scalar index nrun, and the tensor-to-scalar ratio r. We find that for non-vanishing nrun and for r as small as r = 0.001, the precisely calculated values of ns and nrun deviate significantly from what the standard analytic treatment predicts. We study in detail, and discuss, the probable reasons for such deviations. As such, all previously considered models of this kind are based on inaccurate assumptions. We scan the possible values of the potential parameters for which the cosmological parameters are within the range allowed by observations. The five-parameter class is able to reproduce all of the allowed values of ns and nrun for values of r as high as 0.001. This study thus refutes previous models of this kind built using the analytical Stewart-Lyth term, and revives the small-field branch by constructing models that yield an appreciable r while conforming to the known CMB observables. PMID:29795608
NASA Astrophysics Data System (ADS)
Schlichting, Johannes; Winkler, Kerstin; Koerner, Lienhard; Schletterer, Thomas; Burghardt, Berthold; Kahlert, Hans-Juergen
2000-10-01
The productive and accurate ablation of microstructures demands precise imaging of a mask pattern onto the substrate under work. The job can be done with high-performance wide-field lenses as a key component of the ablation equipment. The image field has dimensions of 20 to 30 mm. Typical dimensions and accuracies of the microstructures are on the order of a few microns. On the other hand, the working depth of focus (DOF) has to be on the order of some tens of microns to drill successfully through 20 to 50 μm substrates. All these features have to be achieved under high-power UV laser illumination. Several design principles for such systems are applied: an optimum number of elements, minimum tolerance sensitivity, material restrictions for the lens elements as well as the mechanical parts (mounting), restrictions on the permissible power densities on lens surfaces (including ghosts), and matched quality for the manufactured system. These special applications require appropriate performance criteria for theoretical calculation and measurement, which allow conclusions about the performance in the application. The basis is wavefront calculation and measurement (using a Shack-Hartmann sensor) in the UV. Derived criteria are calculated and compared with application results.
PrimerStation: a highly specific multiplex genomic PCR primer design server for the human genome
Yamada, Tomoyuki; Soma, Haruhiko; Morishita, Shinichi
2006-01-01
PrimerStation () is a web service that calculates primer sets guaranteeing high specificity against the entire human genome. To achieve high accuracy, we used the hybridization ratio of primers in liquid solution. Calculating the status of sequence hybridization in terms of the stringent hybridization ratio is computationally costly, and no web service checks the entire human genome and returns a highly specific primer set calculated using a precise physicochemical model. To shorten the response time, we precomputed candidates for specific primers using a massively parallel computer with 100 CPUs (SunFire 15 K) about 3 months in advance. This enables PrimerStation to search and output qualified primers interactively. PrimerStation can select highly specific primers suitable for multiplex PCR by seeking a wider temperature range that minimizes the possibility of cross-reaction. It also allows users to add heuristic rules to the primer design, e.g. the exclusion of single nucleotide polymorphisms (SNPs) in primers, the avoidance of poly(A) and CA-repeats in the PCR products, and the elimination of defective primers using the secondary structure prediction. We performed several tests to verify the PCR amplification of randomly selected primers for ChrX, and we confirmed that the primers amplify specific PCR products perfectly. PMID:16845094
Measurement of super large radius optics in the detection of gravitational waves
NASA Astrophysics Data System (ADS)
Yang, Cheng; Han, Sen; Wu, Quanying; Liang, Binming; Hou, Changlun
2015-10-01
The existence of gravitational waves (GWs) is one of the greatest predictions of Einstein's theory of relativity. It has played an important part in radiation theory, black hole theory, space exploration and so on. GW detection has become an important topic of modern physics. As the research proceeds, many challenges remain in the interferometer, the key instrument in GW detection, especially the measurement of super-large-radius optics. To solve this problem, one solution, Fizeau interference, for measuring super large radii is presented. We depart from the convention that a curved surface must be measured against a standard curved surface: we use a flat mirror as the reference flat, which greatly lowers both the cost and the test requirements. We selected a concave mirror with a radius of 1600 mm as a sample. After precision measurement and analysis, the experimental results show that the relative error of the radius is better than 3%, which fully meets the requirements for measuring super-large-radius optics. When calibrating each pixel with a standard cylinder, the edges are not sharp because of diffraction or other effects; we therefore detect the edge and calculate the diameter of the cylinder automatically, which improves the precision considerably. In general, this method is simple, fast, non-contact and highly precise, and it provides a new approach to the measurement of super-large-radius optics.
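The abstract does not give the radius formula, but one common way to recover a large radius of curvature from a measurement against a flat reference is the sagitta relation R = (h² + s²)/(2s), with h the semi-aperture and s the sag. A sketch under that assumption; R = 1600 mm matches the sample mirror in the abstract, while the aperture is a hypothetical value:

```python
import math

R = 1600.0                       # mm, radius of the sample concave mirror
h = 25.0                         # mm, hypothetical semi-aperture
s = R - math.sqrt(R**2 - h**2)   # forward sagitta of the surface over the aperture

# Inverse relation: radius recovered from semi-aperture and measured sag.
R_back = (h**2 + s**2) / (2.0 * s)
print(round(R_back, 6))          # recovers 1600.0 exactly (the relation is algebraically exact)
```

Note how small the sag is for such a long radius (about 0.2 mm over a 50 mm aperture), which is why measuring super large radii against a flat is demanding.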
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, Y M; Bush, K; Han, B
Purpose: Accurate and fast dose calculation is a prerequisite of precision radiation therapy in modern photon and particle therapy. While Monte Carlo (MC) dose calculation provides high dosimetric accuracy, the drastically increased computational time hinders its routine use. Deterministic dose calculation methods are fast, but problematic in the presence of tissue density inhomogeneity. We leverage the useful features of deterministic methods and MC to develop a hybrid dose calculation platform with autonomous utilization of MC and deterministic calculation depending on the local geometry, for optimal accuracy and speed. Methods: Our platform utilizes a Geant4 based “localized Monte Carlo” (LMC) method that isolates MC dose calculations to only those volumes that have potential for dosimetric inaccuracy. In our approach, additional structures are created encompassing heterogeneous volumes. Deterministic methods calculate dose and energy fluence up to the volume surfaces, where the energy fluence distribution is sampled into discrete histories and transported using MC. Histories exiting the volume are converted back into energy fluence and transported deterministically. By matching boundary conditions at both interfaces, the deterministic dose calculation accounts for dose perturbations “downstream” of localized heterogeneities. Hybrid dose calculation was performed for water and anthropomorphic phantoms. Results: We achieved <1% agreement between deterministic and MC calculations in the water benchmark for photon and proton beams, and dose differences of 2%-15% could be observed in heterogeneous phantoms. The saving in computational time (a factor of ∼4-7 compared to a full Monte Carlo dose calculation) was found to be approximately proportional to the volume of the heterogeneous region.
Conclusion: Our hybrid dose calculation approach takes advantage of the computational efficiency of deterministic methods and the accuracy of MC, providing a practical tool for high-performance dose calculation in modern RT. The approach is generalizable to all modalities where heterogeneities play a large role, notably particle therapy.
pp interaction at very high energies in cosmic ray experiments
NASA Astrophysics Data System (ADS)
Kendi Kohara, A.; Ferreira, Erasmo; Kodama, Takeshi
2014-11-01
An analysis of p-air cross section data from extensive air shower measurements is presented, based on an analytical representation of the pp scattering amplitudes that describes with high precision all available accelerator data at ISR, SPS and LHC energies. The theoretical basis of the representation, together with the very smooth energy dependence of parameters controlled by unitarity and dispersion relations, permits reliable extrapolation to the high-energy cosmic ray (CR) and asymptotic energy ranges. Calculations of σ_p-air^prod based on the Glauber formalism are made using the input values of the quantities σ, ρ, B_I and B_R at high energies, with attention given to the independence of the slope parameters B_I and B_R.
Measurement device for high-precision spectral transmittance of solar blind filter
NASA Astrophysics Data System (ADS)
Wang, Yan; Qian, Yunsheng; Lv, Yang; Feng, Cheng; Liu, Jian
2017-02-01
In order to accurately measure the spectral transmittance of a solar-blind filter from the ultraviolet to visible light, a high-precision filter transmittance measuring system based on an ultraviolet photomultiplier is developed. The system uses a calibration method to measure transmittance and consists of an ultraviolet photomultiplier as its core, with a lock-in amplifier combined with an optical modulator as an auxiliary measurement stage. The ultraviolet photomultiplier amplifies the current signal passing through the filter and has the characteristics of low dark current and high luminance gain. The optical modulator and the lock-in amplifier recover the signal from the photomultiplier and effectively suppress dark noise and spurious signals. With these two parts, the low light passing through the filter can be detected, and the transmittance can be calculated from the detected optical power. Based on the proposed system, the detection limit of the transmittance can reach 10^-12, while that of the conventional approach is merely 10^-6. Therefore, the system can provide an effective assessment of solar-blind ultraviolet filters.
NASA Astrophysics Data System (ADS)
Merzlaya, Anastasia;
2017-01-01
The heavy-ion programme of the NA61/SHINE experiment at the CERN SPS is expanding to allow precise measurements of exotic particles with decay lengths of a few hundred microns. A Vertex Detector for open charm measurements at the SPS is being constructed by the NA61/SHINE Collaboration to meet the challenges of high spatial resolution of secondary vertices and high track-registration efficiency. This task is addressed by employing coordinate-sensitive CMOS Monolithic Active Pixel Sensors with an extremely low material budget in the new Vertex Detector. A small-acceptance version of the Vertex Detector is being tested this year; it will later be expanded to a large-acceptance version. Simulation studies will be presented. A method of track reconstruction in the inhomogeneous magnetic field of the Vertex Detector was developed and implemented. Numerical calculations show the possibility of high-precision measurements in heavy-ion collisions of strange and multi-strange particles, as well as heavy flavours such as charmed particles.
Lubk, A; Rossell, M D; Seidel, J; He, Q; Yang, S Y; Chu, Y H; Ramesh, R; Hÿtch, M J; Snoeck, E
2012-07-27
Domain walls (DWs) substantially influence a large number of applications involving ferroelectric materials because of their limited mobility when shifted during polarization switching. The discovery of greatly enhanced conduction at BiFeO3 DWs has highlighted yet another role of DWs, as a local material state with unique properties. However, the lack of precise information on the local atomic structure still hampers a microscopic understanding of DW properties. Here, we examine the atomic structure of BiFeO3 109° DWs with picometer precision by a combination of high-angle annular dark-field scanning transmission electron microscopy and a dedicated structural analysis. By simultaneously measuring local polarization and strain, we provide direct experimental proof for the straight DW structure predicted by ab initio calculations as well as for the recently proposed theory of diffuse DWs, thus resolving a long-standing discrepancy between experimentally measured and theoretically predicted DW mobilities.
Optimization of the MINERVA Exoplanet Search Strategy via Simulations
NASA Astrophysics Data System (ADS)
Nava, Chantell; Johnson, Samson; McCrady, Nate; Minerva
2015-01-01
Detection of low-mass exoplanets requires high spectroscopic precision and high observational cadence. MINERVA is a dedicated observatory capable of sub-meter-per-second radial velocity precision. As a dedicated observatory, MINERVA can observe with the every-clear-night cadence that is essential for low-mass exoplanet detection. However, this cadence complicates the determination of an optimal observing strategy. We simulate MINERVA observations to optimize our observing strategy and maximize exoplanet detections. A dispatch scheduling algorithm provides observations of MINERVA targets every day over a three-year observing campaign. An exoplanet population with a distribution informed by Kepler statistics is assigned to the targets, and the radial velocity curves induced by the planets are constructed. We apply a correlated noise model that realistically simulates stellar astrophysical noise sources. The simulated radial velocity data are fed to the MINERVA planet detection code and the expected exoplanet yield is calculated. The full simulation provides a tool to test different strategies for scheduling observations of our targets and optimizing the MINERVA exoplanet search strategy.
Patient safety and systematic reviews: finding papers indexed in MEDLINE, EMBASE and CINAHL.
Tanon, A A; Champagne, F; Contandriopoulos, A-P; Pomey, M-P; Vadeboncoeur, A; Nguyen, H
2010-10-01
To develop search strategies for identifying papers on patient safety in MEDLINE, EMBASE and CINAHL. Six journals were electronically searched for papers on patient safety published between 2000 and 2006. Identified papers were divided into two gold standards: one to build and the other to validate the search strategies. Candidate terms for strategy construction were identified using a word frequency analysis of titles, abstracts and keywords used to index the papers in the databases. Searches were run for each one of the selected terms independently in every database. Sensitivity, precision and specificity were calculated for each candidate term. Terms with sensitivity greater than 10% were combined to form the final strategies. The search strategies developed were run against the validation gold standard to assess their performance. A final step in the validation process was to compare the performance of each strategy to those of other strategies found in the literature. We developed strategies for all three databases that were highly sensitive (range 95%-100%), precise (range 40%-60%) and balanced (the product of sensitivity and precision being in the range of 30%-40%). The strategies were very specific and outperformed those found in the literature. The strategies we developed can meet the needs of users aiming to maximise either sensitivity or precision, or seeking a reasonable compromise between sensitivity and precision, when searching for papers on patient safety in MEDLINE, EMBASE or CINAHL.
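As an illustrative aside, the sensitivity, precision, and specificity used above to rank candidate search terms are standard retrieval metrics computed from a 2x2 table of retrieved vs. gold-standard papers. The sketch below uses hypothetical counts, not the study's actual gold standards.

```python
# Hypothetical sketch of the retrieval metrics used to rank search terms.
# tp/fp/fn/tn are illustrative counts, not data from the paper.

def retrieval_metrics(tp, fp, fn, tn):
    """Return (sensitivity, precision, specificity) as fractions."""
    sensitivity = tp / (tp + fn)   # share of gold-standard papers retrieved
    precision = tp / (tp + fp)     # share of retrieved papers that are relevant
    specificity = tn / (tn + fp)   # share of irrelevant papers excluded
    return sensitivity, precision, specificity

sens, prec, spec = retrieval_metrics(tp=95, fp=120, fn=5, tn=880)
balance = sens * prec  # the "balanced" criterion: product of sensitivity and precision
```

With these made-up counts the strategy would be highly sensitive (0.95) but only moderately precise (~0.44), which mirrors the trade-off the authors report.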
Aulenbach, Brent T.
2013-01-01
A regression-model-based approach is a commonly used, efficient method for estimating streamwater constituent load when there is a relationship between streamwater constituent concentration and continuous variables such as streamwater discharge, season, and time. A subsetting experiment using a 30-year dataset of daily suspended sediment observations from the Mississippi River at Thebes, Illinois, was performed to determine the optimal sampling frequency, model calibration period length, and regression model methodology, as well as to determine the effect of serial correlation of model residuals on load estimate precision. Two regression-based methods were used to estimate streamwater loads: the Adjusted Maximum Likelihood Estimator (AMLE) and the composite method, a hybrid load estimation approach. While both methods accurately and precisely estimated loads at the time scale of the model’s calibration period, precision was progressively worse at shorter reporting periods, from annual to monthly. Serial correlation in model residuals caused the observed AMLE precision to be significantly worse than the model-calculated standard errors of prediction. The composite method effectively improved upon AMLE loads for shorter reporting periods, but required a sampling interval of 15 days or shorter when the serial correlations in the observed load residuals were greater than 0.15. AMLE precision was better at shorter sampling intervals and with the shortest model calibration periods, because the regression models then better fit the temporal changes in the concentration–discharge relationship. The models with the largest errors typically had poor high-flow sampling coverage, resulting in unrepresentative models. Increasing sampling frequency and/or targeted high-flow sampling are more efficient approaches for ensuring sufficient sampling and avoiding poorly performing models than increasing the calibration period length.
Pavanello, Michele; Adamowicz, Ludwik; Alijah, Alexander; Zobov, Nikolai F; Mizus, Irina I; Polyansky, Oleg L; Tennyson, Jonathan; Szidarovszky, Tamás; Császár, Attila G; Berg, Max; Petrignani, Annemieke; Wolf, Andreas
2012-01-13
First-principles computations and experimental measurements of transition energies are carried out for vibrational overtone lines of the triatomic hydrogen ion H3+ corresponding to floppy vibrations high above the barrier to linearity. Action spectroscopy is improved to detect extremely weak visible-light spectral lines in cold trapped H3+ ions. A highly accurate potential surface is obtained from variational calculations using explicitly correlated Gaussian wave function expansions. After nonadiabatic corrections, the floppy H3+ vibrational spectrum is reproduced at the 0.1 cm^-1 level up to 16600 cm^-1.
Third-order Zeeman effect in highly charged ions
NASA Astrophysics Data System (ADS)
Varentsova, A. S.; Agababaev, V. A.; Volchkova, A. M.; Glazov, D. A.; Volotka, A. V.; Shabaev, V. M.; Plunien, G.
2017-10-01
The contribution of the third order in magnetic field to the Zeeman splitting of the ground state of hydrogenlike, lithiumlike, and boronlike ions in the range Z = 6 - 82 is investigated within the relativistic approach. Both perturbative and non-perturbative methods of calculation are employed and found to be in agreement. For lithiumlike and boronlike ions the interelectronic-interaction effects are taken into account within the approximation of the local screening potential. The contribution of the third-order effect in low- and medium-Z boronlike ions is found to be important for anticipated high-precision measurements.
NASA Astrophysics Data System (ADS)
Niimura, Subaru; Kurosu, Hiromichi; Shoji, Akira
2010-04-01
To clarify the positive role of side-chain conformation in the stability of protein secondary structure (main-chain conformation), we calculated the optimized structures of a series of well-defined α-helical octadecapeptides composed of two L-phenylalanine (Phe) and 16 L-alanine (Ala) residues, using molecular orbital calculations with density functional theory (DFT/B3LYP/6-31G(d)). From the total energy calculations and a precise secondary-structure analysis, we found that the conformational stability of the α-helix is closely related to the reciprocal side-chain combinations (such as positional relation and side-chain conformation) of the two Phe residues in this system. Furthermore, we demonstrated that the 1H, 13C, 15N and 17O isotropic chemical shifts of each Phe residue depend on its side-chain conformation.
Modified pressure loss model for T-junctions of engine exhaust manifold
NASA Astrophysics Data System (ADS)
Wang, Wenhui; Lu, Xiaolu; Cui, Yi; Deng, Kangyao
2014-11-01
The T-junction model of engine exhaust manifolds significantly influences the simulation precision of the pressure wave and mass flow rate in the intake and exhaust manifolds of diesel engines. Current studies have focused on constant pressure models, constant static pressure models and pressure loss models. However, low model precision is a common disadvantage when simulating engine exhaust manifolds, particularly for turbocharged systems. To study the performance of junction flow, a cold wind tunnel experiment with high velocities at the junction of a diesel exhaust manifold is performed, and the variation in the pressure loss in the T-junction under different flow conditions is obtained. Although the total pressure loss coefficient calculated with the original pressure loss model follows the same trend as the experimental results, large differences exist between the calculated and experimental values, and the deviation becomes larger as the flow velocity increases. By improving the Vazsonyi formula to account for the flow velocity and introducing a distribution function, a modified pressure loss model is established, which is suitable for a higher velocity range. The new model is then adopted to solve one-dimensional, unsteady flow in a D6114 turbocharged diesel engine. The calculated values are compared with the measured data, and the results show that the simulation accuracy of the pressure wave before the turbine is improved by 4.3% with the modified pressure loss model because gas compressibility is considered when the flow velocities are high. These results provide valuable information for further junction flow research, particularly for correcting the boundary condition in one-dimensional simulation models.
Huang, Chenxi; Huang, Hongxin; Toyoda, Haruyoshi; Inoue, Takashi; Liu, Huafeng
2012-11-19
We propose a new method for realizing high-spatial-resolution detection of singularity points in optical vortex beams. The method uses a Shack-Hartmann wavefront sensor (SHWS) to record a Hartmanngram. A map of evaluation values related to phase slope is then calculated from the Hartmanngram. The position of an optical vortex is determined by comparing the map with reference maps that are calculated from numerically created spiral phases having various positions. Optical experiments were carried out to verify the method. We displayed various spiral phase distribution patterns on a phase-only spatial light modulator and measured the resulting singularity point using the proposed method. The results showed good linearity in detecting the position of singularity points. The RMS error of the measured position of the singularity point was approximately 0.056, in units normalized to the lens size of the lenslet array used in the SHWS.
P dopants induced ferromagnetism in g-C3N4 nanosheets: Experiments and calculations
NASA Astrophysics Data System (ADS)
Liu, Yonggang; Liu, Peitao; Sun, Changqi; Wang, Tongtong; Tao, Kun; Gao, Daqiang
2017-05-01
Outstanding magnetic properties are highly desired for two-dimensional (2D) semiconductor nanosheets because of their potential applications in spintronics. Metal-free ferromagnetic 2D materials, whose magnetism originates from a pure s/p electron configuration, could give a long spin relaxation time, which plays a vital role in spin information transfer. Here, we synthesize 2D g-C3N4 nanosheets with room-temperature ferromagnetism induced by P doping. In our case, the Curie temperature of P-doped g-C3N4 nanosheets reaches as high as 911 K, and precise control of the P concentration can further adjust the saturation magnetization of the samples. First-principles calculation results indicate that the magnetic moment is primarily due to strong hybridization between p bonds of the P, N, and C atoms, giving theoretical evidence for the ferromagnetism. This work opens another door to engineering a future generation of spintronic devices.
Dynamic Stark broadening as the Dicke narrowing effect
NASA Astrophysics Data System (ADS)
Calisti, A.; Mossé, C.; Ferri, S.; Talin, B.; Rosmej, F.; Bureyeva, L. A.; Lisitsa, V. S.
2010-01-01
A very fast method to account for charged-particle dynamics effects in calculations of spectral line shapes emitted by plasmas is presented. This method is based on a formulation of the frequency fluctuation model (FFM), which provides an expression for the dynamic line shape as a functional of the static distribution of frequencies. Thus, the main numerical work rests on the calculation of the quasistatic Stark profile. This method of taking into account ion dynamics allows a very fast and accurate calculation of the Stark broadening of atomic hydrogen high-n series emission lines, and it is not limited to hydrogen spectra. Results for helium-β and Lyman-α lines emitted by argon under microballoon implosion experiment conditions, compared with experimental data and simulation results, are also presented. The present approach reduces the computer time by more than 2 orders of magnitude compared with the original FFM while improving the calculation precision, and it opens broad possibilities for application in spectral line-shape codes.
NASA Astrophysics Data System (ADS)
Archer, Gregory J.
Highly siderophile element (HSE) abundances and 187Re-187Os isotopic systematics for H chondrites and ungrouped achondrites, as well as 182Hf-182W isotopic systematics of H and CR chondrites, are reported. Achondrite fractions with higher HSE abundances show little disturbance of 187Re-187Os isotopic systematics. By contrast, isotopic systematics for lower-abundance fractions are consistent with minor Re mobilization. For magnetically separated H chondrite fractions, the magnitudes of disturbance of the 187Re-187Os isotopic system follow the trend coarse-metal
Precise calculation of the local pressure tensor in Cartesian and spherical coordinates in LAMMPS
NASA Astrophysics Data System (ADS)
Nakamura, Takenobu; Kawamoto, Shuhei; Shinoda, Wataru
2015-05-01
An accurate and efficient algorithm for calculating the 3D pressure field has been developed and implemented in the open-source molecular dynamics package, LAMMPS. Additionally, an algorithm to compute the pressure profile along the radial direction in spherical coordinates has also been implemented. The latter is particularly useful for systems showing a spherical symmetry such as micelles and vesicles. These methods yield precise pressure fields based on the Irving-Kirkwood contour integration and are particularly useful for biomolecular force fields. The present methods are applied to several systems including a buckled membrane and a vesicle.
NASA Astrophysics Data System (ADS)
Zhong, Ruibo; Yuan, Ming; Gao, Haiyang; Bai, Zhijun; Guo, Jun; Zhao, Xinmin; Zhang, Feng
2016-03-01
Discrete biomolecule-nanoparticle (NP) conjugates play paramount roles in nanofabrication, for which the key is to obtain a precise molar extinction coefficient for the NPs. By taking advantage of a specific separation phenomenon in agarose gel electrophoresis (GE), amphiphilic-polymer-coated NPs carrying an exact number of bovine serum albumin (BSA) proteins can be extracted and then used to precisely calculate the molar extinction coefficient of the NPs. This method could further benefit the evaluation and extraction of any other dual-component NP-containing bioconjugates.
Feng, Zhao; Ling, Jie; Ming, Min; Xiao, Xiao-Hui
2017-08-01
For precision motion, high bandwidth and flexible tracking are two important issues for significant performance improvement. Iterative learning control (ILC) is an effective feedforward control method, but only for systems that operate strictly repetitively. Although projection ILC can track varying references, its performance is still limited by the fixed-bandwidth Q-filter, especially for the triangular waves commonly tracked by a piezo nanopositioner. In this paper, a wavelet transform-based linear time-varying (LTV) Q-filter design for projection ILC is proposed to compensate high-frequency errors and improve the ability to track varying references simultaneously. The LTV Q-filter is designed based on the modulus maxima of the wavelet detail coefficients calculated by the wavelet transform to determine the high-frequency locations in each iteration, with the advantages of avoiding cross-terms and manual segmentation. The proposed approach was verified on a piezo nanopositioner. Experimental results indicate that the proposed approach can locate the high-frequency regions accurately and achieves the best performance under varying references compared with traditional frequency-domain ILC and projection ILC with a fixed-bandwidth Q-filter, which validates that, by implementing the LTV filter in projection ILC, high-bandwidth and flexible tracking can be achieved simultaneously.
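The idea of locating high-frequency error regions from wavelet detail coefficients can be sketched with a single level of the Haar transform. This is an illustrative toy, not the paper's exact wavelet or filter design; the error signal and threshold are made up.

```python
# Illustrative sketch: one level of Haar wavelet detail coefficients flags
# high-frequency regions of a tracking-error signal, which could then gate
# a time-varying Q-filter. Signal and threshold are hypothetical.
import math

def haar_detail(x):
    """Level-1 Haar detail coefficients of an even-length signal."""
    return [(x[2 * k] - x[2 * k + 1]) / math.sqrt(2) for k in range(len(x) // 2)]

# error signal: slow ramp plus a high-frequency burst in the middle
err = [0.01 * t for t in range(32)]
for t in range(12, 20):
    err[t] += 0.5 * (-1) ** t

detail = haar_detail(err)
hot = [k for k, d in enumerate(detail) if abs(d) > 0.1]  # flagged regions
```

The slow ramp produces tiny detail coefficients, while the burst stands out cleanly, so the flagged indices localize the high-frequency content without manual segmentation.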
Hind, Karen; Oldroyd, Brian; Truscott, John G
2010-01-01
Knowledge of precision is integral to the monitoring of bone mineral density (BMD) changes using dual-energy X-ray absorptiometry (DXA). We evaluated the precision of bone measurements acquired using a GE Lunar iDXA (GE Healthcare, Waukesha, WI) in self-selected men and women with a mean age of 34.8 yr (standard deviation [SD]: 8.4; range: 20.1-50.5), heterogeneous in terms of body mass index (mean: 25.8 kg/m(2); SD: 5.1; range: 16.7-42.7 kg/m(2)). Two consecutive iDXA scans (with repositioning) of the total body, lumbar spine, and femur were conducted within 1 h for each subject. The coefficient of variation (CV), the root-mean-square (RMS) averages of the SDs of repeated measurements, and the corresponding 95% least significant change were calculated. Linear regression analyses were also undertaken. We found a high level of precision for BMD measurements, particularly for scans of the total body, lumbar spine, and total hip (RMS: 0.007, 0.004, and 0.007 g/cm(2); CV: 0.63%, 0.41%, and 0.53%, respectively). Precision error for the femoral neck was higher but still represented good reproducibility (RMS: 0.014 g/cm(2); CV: 1.36%). There were associations between body size and total-body BMD and total-hip BMD SD precisions (r=0.534-0.806, p<0.05) in male subjects. Regression parameters showed good association between consecutive measurements for all body sites (r(2)=0.98-0.99). The Lunar iDXA provided excellent precision for BMD measurements of the total body, lumbar spine, femoral neck, and total hip.
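The RMS precision error, %CV, and 95% least significant change used above follow the standard calculation for paired repeat scans: the RMS average of per-subject SDs, its ratio to the mean, and 2.77 times the precision error. The sketch below uses made-up BMD values, not the study's data.

```python
# Standard precision calculation for duplicate DXA scans (illustrative data).
# For a pair (a, b), the sample SD is |a - b| / sqrt(2); the precision error
# is the RMS average of those SDs, and LSC = 2.77 * precision error.
import math

pairs = [(1.012, 1.018), (0.944, 0.940), (1.102, 1.095), (0.988, 0.992)]  # g/cm^2

n = len(pairs)
sds = [abs(a - b) / math.sqrt(2) for a, b in pairs]   # per-subject SD
means = [(a + b) / 2 for a, b in pairs]

rms_sd = math.sqrt(sum(sd ** 2 for sd in sds) / n)    # precision error, g/cm^2
cv_pct = 100 * rms_sd / (sum(means) / n)              # %CV
lsc = 2.77 * rms_sd                                   # 95% least significant change
```

A follow-up BMD change smaller than the LSC cannot be distinguished from measurement noise at 95% confidence, which is why precision must be known before monitoring changes.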
NASA Astrophysics Data System (ADS)
Gulyuz, K.; Bollen, G.; Brodeur, M.; Bryce, R. A.; Cooper, K.; Eibach, M.; Izzo, C.; Kwan, E.; Manukyan, K.; Morrissey, D. J.; Naviliat-Cuncic, O.; Redshaw, M.; Ringle, R.; Sandler, R.; Schwarz, S.; Sumithrarachchi, C. S.; Valverde, A. A.; Villari, A. C. C.
2016-01-01
We report the determination of the QEC value of the mirror transition of 11C by measuring the atomic masses of 11C and 11B using Penning trap mass spectrometry. More than an order of magnitude improvement in precision is achieved as compared to the 2012 Atomic Mass Evaluation (Ame2012) [Chin. Phys. C 36, 1603 (2012)]. This leads to a factor of 3 improvement in the calculated Ft value. Using the new value, QEC = 1981.690(61) keV, the uncertainty on Ft is no longer dominated by the uncertainty on the QEC value. Based on this measurement, we provide an updated estimate of the Gamow-Teller to Fermi mixing ratio and standard model values of the correlation coefficients.
The fastclime Package for Linear Programming and Large-Scale Precision Matrix Estimation in R.
Pang, Haotian; Liu, Han; Vanderbei, Robert
2014-02-01
We develop an R package, fastclime, for solving a family of regularized linear programming (LP) problems. Our package efficiently implements the parametric simplex algorithm, which provides a scalable and sophisticated tool for solving large-scale linear programs. As an illustrative example, one use of our LP solver is to implement an important sparse precision matrix estimation method called CLIME (Constrained L1 Minimization Estimator). Compared with existing packages for this problem such as clime and flare, our package has three advantages: (1) it efficiently calculates the full piecewise-linear regularization path; (2) it provides an accurate dual certificate as a stopping criterion; (3) it is completely coded in C and is highly portable. This package is designed to be useful to statisticians and machine learning researchers for solving a wide range of problems.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Perez, A.; Acero, J.; Alberdi, B.
High-precision coil current control, stability and ripple content are very important aspects of a stellarator design. The TJ-II coils will be supplied by network-commutated current converters, and therefore the coil currents will contain harmonics which have to be kept to a very low level. An analytical investigation as well as numerous simulations with EMTP, SABER® and other software have been done in order to predict the harmonic currents and to verify compliance with the specified maximum levels. The calculations and the results are presented.
Number of independent parameters in the potentiometric titration of humic substances.
Lenoir, Thomas; Manceau, Alain
2010-03-16
With the advent of high-precision automatic titrators operating in pH stat mode, measuring the mass balance of protons in solid-solution mixtures against the pH of natural and synthetic polyelectrolytes is now routine. However, titration curves of complex molecules typically lack obvious inflection points, which complicates their analysis despite the high-precision measurements. The calculation of site densities and median proton affinity constants (pK) from such data can lead to considerable covariance between fit parameters. Knowing the number of independent parameters that can be freely varied during the least-squares minimization of a model fit to titration data is necessary to improve the model's applicability. This number was calculated for natural organic matter by applying principal component analysis (PCA) to a reference data set of 47 independent titration curves from fulvic and humic acids measured at I = 0.1 M. The complete data set was reconstructed statistically from pH 3.5 to 9.8 with only six parameters, compared to seven or eight generally adjusted with common semi-empirical speciation models for organic matter, and explains correlations that occur with the higher number of parameters. Existing proton-binding models are not necessarily overparametrized, but instead titration data lack the sensitivity needed to quantify the full set of binding properties of humic materials. Model-independent conditional pK values can be obtained directly from the derivative of titration data, and this approach is the most conservative. The apparent proton-binding constants of the 23 fulvic acids (FA) and 24 humic acids (HA) derived from a high-quality polynomial parametrization of the data set are pK(H,COOH)(FA) = 4.18 ± 0.21, pK(H,Ph-OH)(FA) = 9.29 ± 0.33, pK(H,COOH)(HA) = 4.49 ± 0.18, and pK(H,Ph-OH)(HA) = 9.29 ± 0.38. Their values at other ionic strengths are more reliably calculated with the empirical Davies equation than any existing model fit.
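The derivative approach mentioned above, reading a conditional pK off as the pH where the proton-binding derivative peaks, can be sketched on a synthetic one-site titration curve. The curve, grid, and pK value below are illustrative, not data from the study.

```python
# Hedged sketch of the derivative approach: a conditional pK is read off as
# the pH where dQ/dpH peaks. Synthetic one-site curve with pK = 4.2.
import math

ph = [3.0 + 0.1 * i for i in range(41)]                  # pH grid, 3.0 .. 7.0
q = [1.0 / (1.0 + 10 ** (4.2 - x)) for x in ph]          # dissociated fraction

# centered finite-difference derivative dQ/dpH
dq = [(q[i + 1] - q[i - 1]) / (ph[i + 1] - ph[i - 1]) for i in range(1, len(ph) - 1)]
pk_est = ph[1 + max(range(len(dq)), key=dq.__getitem__)]  # pH at the derivative peak
```

For real humic titration data the derivative is noisy and the sites overlap, which is why the authors apply a smooth polynomial parametrization before differentiating.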
Highly Accurate and Precise Infrared Transition Frequencies of the H_3^+ Cation
NASA Astrophysics Data System (ADS)
Perry, Adam J.; Markus, Charles R.; Hodges, James N.; Kocheril, G. Stephen; McCall, Benjamin J.
2016-06-01
Calculation of ab initio potential energy surfaces for molecules to high accuracy is only manageable for a handful of molecular systems. Among them is the simplest polyatomic molecule, the H_3^+ cation. In order to achieve a high degree of accuracy (<1 cm^-1), corrections must be made to the traditional Born-Oppenheimer approximation that take into account not only adiabatic and non-adiabatic couplings, but quantum electrodynamic corrections as well. For the lowest rovibrational levels the agreement between theory and experiment is approaching 0.001 cm^-1, whereas the agreement is on the order of 0.01 - 0.1 cm^-1 for higher levels, closely rivaling the uncertainties on the experimental data. As method development for calculating these various corrections progresses, it becomes necessary to improve the uncertainties on the experimental data in order to properly benchmark the calculations. Previously we have measured 20 rovibrational transitions of H_3^+ with MHz-level precision, all of which arose from low-lying rotational levels. Here we present new measurements of rovibrational transitions arising from higher rotational and vibrational levels. These transitions not only allow probing of higher energies on the potential energy surface, but, through the use of combination differences, will ultimately lead to prediction of the "forbidden" rotational transitions with MHz-level accuracy. L.G. Diniz, J.R. Mohallem, A. Alijah, M. Pavanello, L. Adamowicz, O.L. Polyansky, J. Tennyson, Phys. Rev. A (2013), 88, 032506; O.L. Polyansky, A. Alijah, N.F. Zobov, I.I. Mizus, R.I. Ovsyannikov, J. Tennyson, L. Lodi, T. Szidarovszky, A.G. Császár, Phil. Trans. R. Soc. A (2012), 370, 5014; J.N. Hodges, A.J. Perry, P.A. Jenkins II, B.M. Siller, B.J. McCall, J. Chem. Phys. (2013), 139, 164201; A.J. Perry, J.N. Hodges, C.R. Markus, G.S. Kocheril, B.J. McCall, J. Molec. Spectrosc. (2015), 317, 71-73.
Identifying Few-Molecule Water Clusters with High Precision on Au(111) Surface.
Dong, Anning; Yan, Lei; Sun, Lihuan; Yan, Shichao; Shan, Xinyan; Guo, Yang; Meng, Sheng; Lu, Xinghua
2018-06-01
Revealing the nature of the hydrogen-bond network in water structures is a long-standing objective of science. Using a low-temperature scanning tunneling microscope, water clusters on a Au(111) surface were directly imaged with molecular resolution by a functionalized tip. The internal structures of the water clusters as well as the geometry variations with increasing size were identified. In contrast to the buckled water hexamer predicted by previous theoretical calculations, our results present deterministic evidence for a flat configuration of water hexamers on Au(111), corroborated by density functional theory calculations with properly implemented van der Waals corrections. The consistency between the experimental observations and improved theoretical calculations not only unambiguously resolves the internal structures of adsorbed water clusters, but also directly manifests the crucial role of van der Waals interactions in constructing water-solid interfaces.
Hirschi, Jennifer S.; Takeya, Tetsuya; Hang, Chao; Singleton, Daniel A.
2009-01-01
We suggest here and evaluate a methodology for the measurement of specific interatomic distances from a combination of theoretical calculations and experimentally measured 13C kinetic isotope effects. This process takes advantage of a broad diversity of transition structures available for the epoxidation of 2-methyl-2-butene with oxaziridines. From the isotope effects calculated for these transition structures, a theory-independent relationship between the C-O bond distances of the newly forming bonds and the isotope effects is established. Within the precision of the measurement, this relationship in combination with the experimental isotope effects provides a highly accurate picture of the C-O bonds forming at the transition state. The diversity of transition structures also allows an evaluation of the Schramm process for defining transition state geometries based on calculations at non-stationary points, and the methodology is found to be reasonably accurate. PMID:19146405
The Impact of Different Sources of Fluctuations on Mutual Information in Biochemical Networks
Chevalier, Michael; Venturelli, Ophelia; El-Samad, Hana
2015-01-01
Stochastic fluctuations in signaling and gene expression limit the ability of cells to sense the state of their environment, transfer this information along cellular pathways, and respond to it with high precision. Mutual information is now often used to quantify the fidelity with which information is transmitted along a cellular pathway. Mutual information calculations from experimental data have mostly generated low values, suggesting that cells might have relatively low signal transmission fidelity. In this work, we demonstrate that mutual information calculations might be artificially lowered by cell-to-cell variability in both initial conditions and slowly fluctuating global factors across the population. We carry out our analysis computationally using a simple signaling pathway and demonstrate that in the presence of slow global fluctuations, every cell might have its own high information transmission capacity but that population averaging underestimates this value. We also construct a simple synthetic transcriptional network and demonstrate using experimental measurements coupled to computational modeling that its operation is dominated by slow global variability, and hence that its mutual information is underestimated by a population averaged calculation. PMID:26484538
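The effect described in this abstract can be illustrated with a minimal histogram-based mutual-information estimate. This is a sketch with synthetic data and assumed gain values, not the authors' model: each simulated subpopulation carries a fixed "global" gain, and pooling subpopulations lowers the apparent mutual information below the per-population value.

```python
import numpy as np

def mutual_information(x, y, bins=16):
    """Histogram estimate of I(X;Y) in bits from paired samples."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of X
    py = pxy.sum(axis=0, keepdims=True)   # marginal of Y
    nz = pxy > 0
    return float((pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])).sum())

rng = np.random.default_rng(0)
n = 20000
signal = rng.uniform(0, 1, n)                 # input to the pathway

# Hypothetical slow global factor: each subpopulation has a fixed gain.
gain_a, gain_b = 1.0, 3.0
resp_a = gain_a * signal + rng.normal(0, 0.05, n)
resp_b = gain_b * signal + rng.normal(0, 0.05, n)

mi_single = mutual_information(signal, resp_a)          # one "cell type" alone
# Population-averaged estimate: pool both subpopulations together.
mi_pooled = mutual_information(np.concatenate([signal, signal]),
                               np.concatenate([resp_a, resp_b]))
```

Because the pooled channel cannot distinguish which gain produced a given response, `mi_pooled` comes out below `mi_single`, mirroring the underestimation the authors describe.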
NASA Technical Reports Server (NTRS)
Andrews, Arlyn E.; Burris, John F.; Abshire, James B.; Krainak, Michael A.; Riris, Haris; Sun, Xiao-Li; Collatz, G. James
2002-01-01
Ground-based LIDAR observations can potentially provide continuous profiles of CO2 through the planetary boundary layer and into the free troposphere. We will present initial atmospheric measurements from a prototype system that is based on components developed by the telecommunications industry. Preliminary measurements and instrument performance calculations indicate that an optimized differential absorption LIDAR (DIAL) system will be capable of providing continuous hourly averaged profiles with 250m vertical resolution and better than 1 ppm precision at 1 km. Precision increases (decreases) at lower (higher) altitudes and is directly proportional to altitude resolution and acquisition time. Thus, precision can be improved if temporal or vertical resolution is sacrificed. Our approach measures absorption by CO2 of pulsed laser light at 1.6 microns backscattered from atmospheric aerosols. Aerosol concentrations in the planetary boundary layer are relatively high and are expected to provide adequate signal returns for the desired resolution. The long-term goal of the project is to develop a rugged, autonomous system using only commercially available components that can be replicated inexpensively for deployment in a monitoring network.
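The DIAL retrieval underlying such a system can be sketched in a few lines. The signals and the differential cross section below are assumed illustrative values, not instrument data: the gas number density follows from the range derivative of the log ratio of on-line and off-line backscatter, with geometry and aerosol terms cancelling in the ratio.

```python
import numpy as np

# Standard DIAL retrieval:
#   n(r) = ln[ P_off(r+dr) P_on(r) / (P_on(r+dr) P_off(r)) ] / (2 dsigma dr)
dsigma = 3.0e-27       # differential absorption cross section, m^2 (assumed)
n_true = 1.0e22        # CO2 number density, m^-3 (rough near-surface value)
dr = 250.0             # range-bin width, m (the stated vertical resolution)
r = np.arange(250.0, 5000.0, dr)

# Synthetic backscatter: the 1/r^2 geometric factor cancels in the ratio.
beta = 1.0 / r**2
p_on = beta * np.exp(-2 * (1.0e-28 + dsigma) * n_true * r)   # on-line
p_off = beta * np.exp(-2 * 1.0e-28 * n_true * r)             # off-line

ratio = (p_off[1:] * p_on[:-1]) / (p_on[1:] * p_off[:-1])
n_retrieved = np.log(ratio) / (2 * dsigma * dr)
```

With noise-free synthetic signals the retrieval reproduces `n_true` exactly; in practice detector noise in `p_on`/`p_off` is what sets the ppm-level precision quoted in the abstract.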
Interplay of threshold resummation and hadron mass corrections in deep inelastic processes
Accardi, Alberto; Anderle, Daniele P.; Ringer, Felix
2015-02-01
We discuss hadron mass corrections and threshold resummation for deep-inelastic scattering lN → l'X and semi-inclusive annihilation e+e- → hX processes, and provide a prescription for consistently combining these two corrections while respecting all kinematic thresholds. We find an interesting interplay between threshold resummation and target mass corrections for deep-inelastic scattering at large values of Bjorken xB. In semi-inclusive annihilation, on the contrary, the two considered corrections are relevant in different kinematic regions and do not affect each other. A detailed analysis is nonetheless of interest in light of recent high-precision data from BaBar and Belle on pion and kaon production, with which we compare our calculations. For both deep-inelastic scattering and single-inclusive annihilation, the size of the combined corrections compared to the precision of world data is shown to be large. Therefore, we conclude that these theoretical corrections are relevant for global QCD fits in order to extract precise parton distributions at large Bjorken xB, and fragmentation functions over the whole kinematic range.
Daubechies wavelets for linear scaling density functional theory.
Mohr, Stephan; Ratcliff, Laura E; Boulanger, Paul; Genovese, Luigi; Caliste, Damien; Deutsch, Thierry; Goedecker, Stefan
2014-05-28
We demonstrate that Daubechies wavelets can be used to construct a minimal set of optimized localized adaptively contracted basis functions in which the Kohn-Sham orbitals can be represented with an arbitrarily high, controllable precision. Ground state energies and the forces acting on the ions can be calculated in this basis with the same accuracy as if they were calculated directly in a Daubechies wavelets basis, provided that the amplitude of these adaptively contracted basis functions is sufficiently small on the surface of the localization region, which is guaranteed by the optimization procedure described in this work. This approach reduces the computational costs of density functional theory calculations, and can be combined with sparse matrix algebra to obtain linear scaling with respect to the number of electrons in the system. Calculations on systems of 10,000 atoms or more thus become feasible in a systematic basis set with moderate computational resources. Further computational savings can be achieved by exploiting the similarity of the adaptively contracted basis functions for closely related environments, e.g., in geometry optimizations or combined calculations of neutral and charged systems.
Precise methane absorption measurements in the 1.64 μm spectral region for the MERLIN mission.
Delahaye, T; Maxwell, S E; Reed, Z D; Lin, H; Hodges, J T; Sung, K; Devi, V M; Warneke, T; Spietz, P; Tran, H
2016-06-27
In this article we describe a high-precision laboratory measurement targeting the R(6) manifold of the 2 ν 3 band of 12 CH 4 . Accurate physical models of this absorption spectrum will be required by the Franco-German, Methane Remote Sensing LIDAR (MERLIN) space mission for retrievals of atmospheric methane. The analysis uses the Hartmann-Tran profile for modeling line shape and also includes line-mixing effects. To this end, six high-resolution and high signal-to-noise absorption spectra of air-broadened methane were recorded using a frequency-stabilized cavity ring-down spectroscopy apparatus. Sample conditions corresponded to room temperature and spanned total sample pressures of 40 hPa - 1013 hPa with methane molar fractions between 1 μmol mol -1 and 12 μmol mol -1 . All spectroscopic model parameters were simultaneously adjusted in a multispectrum nonlinear least-squares fit to the six measured spectra. Comparison of the fitted model to the measured spectra reveals the ability to calculate the room-temperature, methane absorption coefficient to better than 0.1% at the on-line position of the MERLIN mission. This is the first time that such fidelity has been reached in modeling methane absorption in the investigated spectral region, fulfilling the accuracy requirements of the MERLIN mission. We also found excellent agreement when comparing the present results with measurements obtained over different pressure conditions and using other laboratory techniques. Finally, we also evaluated the impact of these new spectral parameters on atmospheric transmissions spectra calculations.
Electronic structure and defect properties of selenophosphate Pb2P2Se6 for γ-ray detection
NASA Astrophysics Data System (ADS)
Kontsevoi, Oleg Y.; Im, Jino; Wessels, Bruce W.; Kanatzidis, Mercouri G.; Freeman, Arthur J.
The heavy-metal chalco-phosphate Pb2P2Se6 has shown significant promise as an X-ray and γ-ray detector material. To assess the fundamental physical properties important for its performance as a detector, theoretical calculations were performed for the electronic structure, band gaps, electron and hole effective masses, and static dielectric constants. The calculations were based on first-principles density functional theory (DFT), employed the highly precise full-potential linearized augmented plane-wave method and the projector augmented-wave method, and included nonlocal exchange-correlation functionals to overcome the band gap underestimation in DFT calculations. The calculations show that Pb2P2Se6 is an indirect band gap material with a calculated band gap of 2.0 eV; it has small effective masses, which could result in a good carrier mobility-lifetime product μτ, and a very high static dielectric constant, which could lead to high carrier mobility through screening of charged scattering centers. We further investigated a large set of native defects in Pb2P2Se6 to determine the optimal growth conditions for application as γ-ray detectors. The results suggest that the prevalent intrinsic defects are selenium vacancies, followed by lead vacancies, then phosphorus vacancies and antisite defects. The effect of various chemical environments on defect properties was examined and the optimal conditions for material synthesis were suggested. Supported by DHS (Grant No. 2014-DN-077-ARI086-01).
Cost and Precision of Brownian Clocks
NASA Astrophysics Data System (ADS)
Barato, Andre C.; Seifert, Udo
2016-10-01
Brownian clocks are biomolecular networks that can count time. A paradigmatic example is proteins that go through a cycle, thus regulating some oscillatory behavior in a living system. Typically, such a cycle requires free energy, often provided by ATP hydrolysis. We investigate the relation between the precision of such a clock and its thermodynamic costs. For clocks driven by a constant thermodynamic force, a given precision requires a minimal cost that diverges as the uncertainty of the clock vanishes. In marked contrast, we show that a clock driven by a periodic variation of an external protocol can achieve arbitrary precision at arbitrarily low cost. This result constitutes a fundamental difference between processes driven by a fixed thermodynamic force and those driven periodically. As a main technical tool, we map a periodically driven system with a deterministic protocol to one subject to an external protocol that changes in stochastic time intervals, which simplifies calculations significantly. In the nonequilibrium steady state of the resulting bipartite Markov process, the uncertainty of the clock can be deduced from the calculable dispersion of a corresponding current.
NASA Astrophysics Data System (ADS)
Du, W.; Chen, L.; Xie, H.; Hai, G.; Zhang, S.; Tong, X.
2017-09-01
This paper analyzes the precision and deviation of elevations acquired by Envisat and the Ice, Cloud and land Elevation Satellite (ICESat) over typical ice-gaining and ice-losing regions, i.e. the Lambert-Amery System (LAS) in East Antarctica and the Amundsen Sea Sector (ASS) in West Antarctica, during the same period from 2003 to 2008. We used the GLA12 dataset of ICESat and Level 2 data of Envisat. Data preprocessing included data filtering, projection transformation and track classification; in addition, a slope correction was applied to the Envisat data and a saturation correction to the ICESat data. Crossover analysis was then used to obtain the crossing points of the ICESat tracks, the Envisat tracks and the ICESat-Envisat tracks separately. The two tracks chosen for crossover analysis had to be in the same campaign for ICESat (within 33 days) or the same cycle for Envisat (within 35 days). The standard deviation of a set of elevation residuals at time-coincident crossovers is taken as the precision of each satellite, while the mean value gives the ICESat-Envisat deviation. Generally, the ICESat laser altimeter achieves better precision than the Envisat radar altimeter. For the Amundsen Sea Sector, the ICESat precision is found to vary from 8.9 cm to 17 cm and the Envisat precision varies from 0.81 m to 1.57 m. For the LAS area, the ICESat precision is found to vary from 6.7 cm to 14.3 cm and the Envisat precision varies from 0.46 m to 0.81 m. Comparison of Envisat and ICESat elevations shows a mean difference of 0.43 ± 7.14 m for the Amundsen Sea Sector and 0.53 ± 1.23 m over the LAS.
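Once time-coincident crossover pairs are formed, the statistics quoted above reduce to a simple computation. The following sketch uses synthetic elevations with an assumed noise level, not the paper's data: precision is the standard deviation of the elevation residuals at crossovers, and deviation is their mean.

```python
import numpy as np

rng = np.random.default_rng(1)
n_xovers = 500
true_elev = rng.uniform(100.0, 3000.0, n_xovers)   # surface height at crossovers, m

# Hypothetical single-satellite crossovers: two passes over the same point
# within one campaign/cycle, each carrying the instrument's random error.
sigma = 0.12                                        # per-measurement noise, m (assumed)
pass1 = true_elev + rng.normal(0, sigma, n_xovers)
pass2 = true_elev + rng.normal(0, sigma, n_xovers)
residuals = pass1 - pass2

precision = residuals.std(ddof=1)   # quoted "precision" of the altimeter
bias = residuals.mean()             # "deviation" (≈0 for a single sensor)
```

For a single instrument the residual standard deviation is √2 times the per-measurement noise, which is why crossover precision slightly overstates the single-shot error.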
Statistical analysis of radioimmunoassay. In comparison with bioassay (in Japanese)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nakano, R.
1973-01-01
Using RIA (radioimmunoassay) data, statistical procedures for two problems, the linearization of the dose-response curve and the calculation of relative potency, are described. There are three methods for linearizing the dose-response curve of RIA, plotting the following variables on the horizontal and vertical axes respectively: dose x vs. (B/T)⁻¹; c/(x + c) vs. B/T (where c is the dose at which B/T is 50%); and log x vs. logit B/T. Among them, the last method seems to be the most practical. The statistical procedures of bioassay were employed to calculate the relative potency of unknown samples compared to standard samples from the dose-response curves of the standard and unknown samples, using the regression coefficient. It is desirable that relative potency be calculated by plotting more than 5 points on the standard curve and more than 2 points for the unknown samples. To examine the statistical limit of precision of measurement, the LH activity of gonadotropin in urine was measured, and the relative potency, precision coefficient and the upper and lower limits of relative potency at the 95% confidence limit were calculated. Bioassay (by the ovarian ascorbic acid reduction method and the anterior lobe of prostate weighing method) was also performed on the same samples, and its precision was compared with that of RIA. In these examinations, the upper and lower limits of the relative potency at the 95% confidence limit were close to each other for RIA, while in bioassay a considerable difference was observed between the upper and lower limits. The necessity of standardization and systematization of the statistical procedures for increasing the precision of RIA was pointed out. (JA)
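The logit linearization recommended in this abstract, and the recovery of relative potency from parallel regression lines, can be sketched as follows. The dose-response data are synthetic, with an assumed true potency ratio of 2; in a parallel-line assay the relative potency is the horizontal offset between the two fitted lines.

```python
import numpy as np

def logit(p):
    return np.log(p / (1.0 - p))

# Synthetic dose-response curves: logit(B/T) is linear in log(dose).
slope, intercept = -1.2, 0.5               # assumed curve parameters
doses = np.array([1.0, 2.0, 5.0, 10.0, 20.0])
bt_std = 1.0 / (1.0 + np.exp(-(intercept + slope * np.log(doses))))

potency = 2.0                              # unknown sample is twice as potent
bt_unk = 1.0 / (1.0 + np.exp(-(intercept + slope * np.log(doses * potency))))

# Fit both linearized curves: logit(B/T) = a + b*log(dose).
b_s, a_s = np.polyfit(np.log(doses), logit(bt_std), 1)
b_u, a_u = np.polyfit(np.log(doses), logit(bt_unk), 1)

# Parallel-line assay: relative potency from the intercept difference.
rel_potency = np.exp((a_u - a_s) / b_s)
```

On noise-free data the fit recovers the assumed potency exactly; with real assay data the spread of the fitted intercepts yields the confidence limits discussed above.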
Accelerating calculations of RNA secondary structure partition functions using GPUs
2013-01-01
Background RNA performs many diverse functions in the cell in addition to its role as a messenger of genetic information. These functions depend on its ability to fold to a unique three-dimensional structure determined by the sequence. The conformation of RNA is in part determined by its secondary structure, or the particular set of contacts between pairs of complementary bases. Prediction of the secondary structure of RNA from its sequence is therefore of great interest, but can be computationally expensive. In this work we accelerate computations of base-pair probabilities using parallel graphics processing units (GPUs). Results Calculation of the probabilities of base pairs in RNA secondary structures using nearest-neighbor standard free energy change parameters has been implemented using CUDA to run on hardware with multiprocessor GPUs. A modified set of recursions was introduced, which reduces memory usage by about 25%. GPUs are fastest in single precision, and for some hardware, restricted to single precision. This may introduce significant roundoff error. However, deviations in base-pair probabilities calculated using single precision were found to be negligible compared to those resulting from shifting the nearest-neighbor parameters by a random amount of magnitude similar to their experimental uncertainties. For large sequences running on our particular hardware, the GPU implementation reduces execution time by a factor of close to 60 compared with an optimized serial implementation, and by a factor of 116 compared with the original code. Conclusions Using GPUs can greatly accelerate computation of RNA secondary structure partition functions, allowing calculation of base-pair probabilities for large sequences in a reasonable amount of time, with a negligible compromise in accuracy due to working in single precision. The source code is integrated into the RNAstructure software package and available for download at http://rna.urmc.rochester.edu.
PMID:24180434
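The single- versus double-precision comparison in the abstract above can be illustrated with a toy Boltzmann-weighted ensemble. The free energies are synthetic, and this is not the RNAstructure recursions: the point is simply that normalized probabilities computed in float32 deviate only negligibly from the float64 reference.

```python
import numpy as np

rng = np.random.default_rng(2)
# Toy "partition function": Boltzmann weights of many competing structures.
dG = rng.uniform(-10.0, 10.0, 100000)       # kcal/mol, synthetic free energies
RT = 0.616                                  # kcal/mol near 37 C

w64 = np.exp(-dG / RT)                      # double-precision weights
p64 = w64 / w64.sum()                       # normalized probabilities

w32 = np.exp(-(dG / RT).astype(np.float32)) # same computation in float32
p32 = w32 / w32.sum()

max_dev = np.abs(p64 - p32.astype(np.float64)).max()
```

The maximum probability deviation is many orders of magnitude below the effect of perturbing the energy parameters themselves, consistent with the paper's conclusion that single precision is an acceptable trade for GPU speed.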
Aboulbanine, Zakaria; El Khayati, Naïma
2018-04-13
The use of phase space in medical linear accelerator Monte Carlo (MC) simulations significantly improves the execution time and leads to results comparable to those obtained from full calculations. The classical representation of phase space directly stores the information of millions of particles, producing bulky files. This paper presents a virtual source model (VSM) based on a reconstruction algorithm, taking as input a compressed file of roughly 800 kb derived from phase space data freely available in the International Atomic Energy Agency (IAEA) database. This VSM includes two main components: primary and scattered particle sources, with a specific reconstruction method developed for each. Energy spectra and other relevant variables were extracted from IAEA phase space and stored in the input description data file for both sources. The VSM was validated for three photon beams: Elekta Precise 6 MV/10 MV and a Varian TrueBeam 6 MV. Extensive calculations in water and comparisons between dose distributions of the VSM and IAEA phase space were performed to estimate the VSM precision. The Geant4 MC toolkit in multi-threaded mode (Geant4-[mt]) was used for fast dose calculations and optimized memory use. Four field configurations were chosen for dose calculation validation to test field size and symmetry effects, [Formula: see text] [Formula: see text], [Formula: see text] [Formula: see text], and [Formula: see text] [Formula: see text] for squared fields, and [Formula: see text] [Formula: see text] for an asymmetric rectangular field. Good agreement in terms of [Formula: see text] formalism, for 3%/3 mm and 2%/3 mm criteria, for each evaluated radiation field and photon beam was obtained within a computation time of 60 h on a single workstation for a 3 mm voxel matrix. Analyzing the VSM's precision in high dose gradient regions, using the distance to agreement concept (DTA), also showed satisfactory results.
In all investigated cases, the mean DTA was less than 1 mm in the build-up and penumbra regions. With regard to calculation efficiency, the event processing speed is six times faster using Geant4-[mt] compared to sequential Geant4 when running the same simulation code on both. The developed VSM for the widely used 6 MV/10 MV beams is a general concept that is easy to adapt to reconstruct comparable beam qualities for various linac configurations, facilitating its integration for MC treatment planning purposes.
Study of high-performance canonical molecular orbitals calculation for proteins
NASA Astrophysics Data System (ADS)
Hirano, Toshiyuki; Sato, Fumitoshi
2017-11-01
The canonical molecular orbital (CMO) calculation can help in understanding chemical properties and reactions in proteins. However, it is difficult to perform CMO calculations of proteins because of the self-consistent field (SCF) convergence problem and the expensive computational cost. To reliably obtain the CMOs of proteins, we are engaged in the research and development of high-performance CMO applications and perform experimental studies. We have proposed the third-generation density-functional calculation method for solving the SCF problem, which is more advanced than the FILE and direct methods. Our method is based on Cholesky decomposition for the two-electron integrals and the modified grid-free method for the pure-XC term evaluation. With the third-generation density-functional calculation method, the Coulomb, Fock-exchange, and pure-XC terms can all be evaluated by simple linear-algebraic procedures in the SCF loop. Therefore, we can expect good parallel performance in solving the SCF problem by using a well-optimized linear algebra library such as BLAS on distributed-memory parallel computers. The third-generation density-functional calculation method is implemented in our program, ProteinDF. To compute the electronic structure of large molecules, not only must the expensive computational cost be overcome, but a good initial guess is also required for safe SCF convergence. In order to prepare a precise initial guess for macromolecular systems, we have developed the quasi-canonical localized orbital (QCLO) method. The QCLO has the characteristics of both localized and canonical orbitals in a certain region of the molecule. We have succeeded in CMO calculations of proteins by using the QCLO method. For simplified and semi-automated calculation with the QCLO method, we have also developed a Python-based program, QCLObot.
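The linear-algebraic evaluation of the Coulomb term via Cholesky vectors can be sketched on a toy positive-definite "ERI supermatrix" built from random data, not real integrals: once V = L Lᵀ, contracting with a density matrix needs only two matrix-vector products, exactly the BLAS-friendly pattern the abstract describes.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 6                                     # toy basis size (assumed)
# Symmetric positive-definite stand-in for the ERI supermatrix V[(pq),(rs)].
a = rng.normal(size=(n * n, n * n))
V = a @ a.T + n * n * np.eye(n * n)       # PD by construction (toy data)

L = np.linalg.cholesky(V)                 # V = L @ L.T

D = rng.normal(size=(n, n))
D = D + D.T                               # toy symmetric density matrix

# Coulomb term J_pq = sum_rs V[(pq),(rs)] D_rs via Cholesky vectors:
# contract D with the vectors once, then expand -- two BLAS-level products.
gamma = L.T @ D.reshape(-1)
J = (L @ gamma).reshape(n, n)

# Reference: direct contraction with the full supermatrix.
J_ref = (V @ D.reshape(-1)).reshape(n, n)
```

In a real code the Cholesky factorization is applied to the actual two-electron integral matrix and typically truncated at a pivot threshold, which is where the cost savings come from; the toy matrix here only demonstrates the contraction pattern.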
Wang, Qiang; Liu, Yuefei; Chen, Yiqiang; Ma, Jing; Tan, Liying; Yu, Siyuan
2017-03-01
Accurate location computation for a beacon is an important factor in the reliability of satellite optical communications. However, location precision is generally limited by the resolution of the CCD, so improving the location precision of a beacon is an important and urgent issue. In this paper, we present two precise centroid computation methods for locating a beacon in satellite optical communications. First, the beacon image is divided into several parts according to its gray-level gradients. Different numbers of interpolation points and different interpolation methods are then applied in each interpolation area; the centroid position is calculated after interpolation and the best strategy is chosen. This method is called the gradient segmentation interpolation (GSI) algorithm. To take full advantage of the pixels in the beacon's central portion, we also present an improved segmentation square weighting (SSW) algorithm, whose effectiveness is verified by simulation. Finally, an experiment was set up to verify the GSI and SSW algorithms. The results indicate that the GSI and SSW algorithms improve location accuracy over the traditional gray centroid method. These approaches help to greatly improve the location precision for a beacon in satellite optical communications.
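The traditional gray centroid method that the GSI and SSW algorithms improve upon can be sketched as follows, using a synthetic Gaussian spot with an assumed sub-pixel centre; the squared-intensity weighting at the end mimics the idea of emphasising the beacon's central pixels.

```python
import numpy as np

def gray_centroid(img):
    """Traditional intensity-weighted centroid of a beacon spot."""
    y, x = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    total = img.sum()
    return (x * img).sum() / total, (y * img).sum() / total

# Synthetic Gaussian beacon spot with a sub-pixel true centre (illustrative).
yy, xx = np.mgrid[0:32, 0:32]
cx, cy = 15.3, 16.7
spot = np.exp(-((xx - cx) ** 2 + (yy - cy) ** 2) / (2 * 2.0 ** 2))

est_x, est_y = gray_centroid(spot)

# Square-weighting idea (SSW-like): squaring the intensities weights the
# central pixels more heavily, which helps under background noise.
sq_x = (xx * spot ** 2).sum() / (spot ** 2).sum()
```

On this noiseless symmetric spot both estimators recover the true centre; the paper's contribution lies in how the methods behave on real, noisy, pixelated beacons.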
NASA Astrophysics Data System (ADS)
Wortham, B. E.; Banner, J. L.; James, E.; Loewy, S. L.
2013-12-01
Speleothems, calcite deposits in caves, preserve a record of climate in their growth rate, isotope ratios and trace element concentrations. These variables must be tied to precise ages to produce pre-instrumental records of climate. The 238U-234U-230Th disequilibrium method of dating can yield precise ages if the amount of 230Th from the decay of 238U can be constrained. 230Th in a speleothem calcite growth layer has two potential sources: 1) decay of radioactive 238U since the time of growth of the calcite layer; and 2) initial detrital 230Th, incorporated along with detrital 232Th, into the calcite layer at the time it grew. Although the calcite lattice does not typically incorporate Th, samples can contain impurities with relatively high Th contents. Initial 230Th/232Th is commonly estimated by assuming a source with bulk-Earth U/Th values in a state of secular equilibrium in the 238U decay chain. The uncertainty in this 230Th/232Th estimate is also assumed, typically at +/-100%. Both assumptions contribute to uncertainty in ages determined for young speleothems. If the amount of initial detrital 230Th can be better quantified for samples or sites, then U-series ages will have smaller uncertainties and more precisely define the time series of climate proxies. This study determined the initial 230Th/232Th of modern calcite to provide more precise dates for central Texas speleothems. Calcite was grown on glass-plate substrates placed under active drips in central Texas caves. The 230Th/232Th of this modern calcite was determined using thermal ionization mass spectrometry. Results show that: 1) initial 230Th/232Th ratios can be accurately determined in these young samples and 2) measuring 230Th/232Th reduces the uncertainties in previously-determined ages on stalagmites from under the same drips.
For example, measured initial 230Th/232Th in calcite collected on substrates from different locations in the cave at Westcave Preserve are 15.3 ± 0.67 ppm, 14.6 ± 0.83 ppm, 5.8 ± 0.56 ppm, and 5.9 ± 0.60 ppm, which are higher and more precise than the value commonly assumed for initial 230Th/232Th, 4.4 ± 4.4 ppm. Soil sampled above Westcave, a potential source of detrital Th incorporated into speleothems, also has a high calculated 230Th/232Th. We calculate soil 230Th/232Th from measured U and Th concentrations of soil leachates (using DI water and ammonium acetate). Calculated 230Th/232Th for Westcave soils range from 0.39 to 28.4 ppm, which encompasses the range of initial 230Th/232Th values found in the modern calcite. Soil leachates from Natural Bridge Caverns and Inner Space Cavern were analyzed by the same method, yielding calculated 230Th/232Th ranging from 1.5 to 12.6 ppm (Natural Bridge), and from 1.43 to 272 ppm (Inner Space). Soil and calcite data indicate that the commonly assumed initial 230Th/232Th is not always applicable and that initial 230Th/232Th can be estimated more accurately by measuring Th isotope ratios in modern calcite and soils to determine speleothem U-series ages.
Klein-Fedyshin, Michele; Ketchum, Andrea M; Arnold, Robert M; Fedyshin, Peter J
2014-12-01
MEDLINE offers the Core Clinical Journals filter to limit retrieval to clinically useful journals. To determine its effectiveness for searching and patient-centric decision making, this study compared the literature used for Morning Report in Internal Medicine with the journals in the filter. An EndNote library with references answering 327 patient-related questions during Morning Report from 2007 to 2012 was exported to a file listing variables including designated Core Clinical Journal, Impact Factor, date used and medical subject. Bradford's law of scattering was applied, ranking the journals and reflecting their clinical utility. Recall (sensitivity) and precision of the Core Morning Report journals and the non-Core set were calculated. This study applied bibliometrics to compare the 628 articles used against these criteria to determine the journals impacting decision making. Analysis shows 30% of clinically used articles are from the Core Clinical Journals filter and 16% of the journals represented are Core titles. When Bradford-ranked, 55% of the top 20 journals are Core. Articles <5 years old furnish 63% of sources used. Among the 63 Morning Report subjects, 55 have <50% precision and 41 have <50% recall, including 37 subjects with 0% precision and 0% recall. Low usage of publications within the Core Clinical Journals filter indicates less relevance for hospital-based care. The divergence from high-impact medicine titles suggests clinically valuable journals differ from academically important titles. With few subjects demonstrating high recall or precision, the MEDLINE Core Clinical Journals filter may require a review and update to better align with current clinical needs. © 2014 John Wiley & Sons, Ltd.
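Recall and precision as used in this study reduce to set overlaps between the articles a filter retrieves and the articles actually used clinically. A minimal sketch with hypothetical journal sets (illustrative names, not the study's data):

```python
def recall_precision(retrieved, relevant):
    """Recall and precision of a retrieved set against a relevant set."""
    retrieved, relevant = set(retrieved), set(relevant)
    hits = len(retrieved & relevant)
    recall = hits / len(relevant) if relevant else 0.0
    precision = hits / len(retrieved) if retrieved else 0.0
    return recall, precision

# Hypothetical example: journals the Core Clinical Journals filter covers
# vs. journals actually used to answer a Morning Report question.
core_hits = {"NEJM", "Lancet", "JAMA", "Ann Intern Med"}
used = {"NEJM", "JAMA", "Chest", "J Hosp Med", "Clin Infect Dis"}
r, p = recall_precision(core_hits, used)
```

Here two of the five clinically used journals fall inside the filter (recall 0.4), and two of the four filter journals were actually used (precision 0.5), the same per-subject computation the study repeats across its 63 topics.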
Precise measurement of scleral radius using anterior eye profilometry.
Jesus, Danilo A; Kedzia, Renata; Iskander, D Robert
2017-02-01
To develop a new and precise methodology to measure the scleral radius based on the anterior eye surface. The Eye Surface Profiler (ESP, Eaglet-Eye, Netherlands) was used to acquire the anterior eye surface of 23 emmetropic subjects aged 28.1 ± 6.6 years (mean ± standard deviation), ranging from 20 to 45 years. The scleral radius was obtained by approximating the topographical scleral data with a sphere using least-squares fitting, taking the axial length as a reference point. To better understand the role of the scleral radius in ocular biometry, measurements of corneal radius, central corneal thickness, anterior chamber depth and white-to-white corneal diameter were acquired with an IOLMaster 700 (Carl Zeiss Meditec AG, Jena, Germany). The estimated scleral radius (11.2 ± 0.3 mm) was shown to be highly precise, with a coefficient of variation of 0.4%. A statistically significant correlation between axial length and scleral radius (R² = 0.957, p < 0.001) was observed. Moreover, corneal radius (R² = 0.420, p < 0.001), anterior chamber depth (R² = 0.141, p = 0.039) and white-to-white corneal diameter (R² = 0.146, p = 0.036) also showed statistically significant correlations with the scleral radius. Lastly, no correlation was observed between the scleral radius and central corneal thickness (R² = 0.047, p = 0.161). Three-dimensional topography of the anterior eye acquired with the Eye Surface Profiler, together with an estimate of the axial length, can be used to calculate the scleral radius with high precision. Copyright © 2016 British Contact Lens Association. Published by Elsevier Ltd. All rights reserved.
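The least-squares sphere approximation at the heart of this method can be sketched with the standard algebraic (linear) formulation, here applied to synthetic scleral-cap points with an assumed radius of 11.2 mm rather than ESP data.

```python
import numpy as np

def fit_sphere(points):
    """Algebraic least-squares sphere fit: x^2+y^2+z^2 = 2ax+2by+2cz+d."""
    A = np.column_stack([2 * points, np.ones(len(points))])
    b = (points ** 2).sum(axis=1)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    centre, d = sol[:3], sol[3]
    radius = np.sqrt(d + (centre ** 2).sum())   # d = R^2 - |centre|^2
    return centre, radius

# Synthetic scleral cap: only a cap of the sphere is visible to the profiler.
rng = np.random.default_rng(4)
theta = rng.uniform(0, 0.6, 2000)               # polar angle of the cap, rad
phi = rng.uniform(0, 2 * np.pi, 2000)
r_true, c_true = 11.2, np.array([0.3, -0.2, 5.0])   # mm (assumed)
pts = c_true + r_true * np.column_stack([np.sin(theta) * np.cos(phi),
                                         np.sin(theta) * np.sin(phi),
                                         np.cos(theta)])
pts += rng.normal(0, 0.01, pts.shape)           # measurement noise, mm

centre, radius = fit_sphere(pts)
```

The algebraic form keeps the problem linear, so a single `lstsq` call suffices; fitting only a cap (as with real scleral topography) amplifies noise somewhat, which is why the paper reports the coefficient of variation across subjects.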
Absolute parameters for AI Phoenicis using WASP photometry
NASA Astrophysics Data System (ADS)
Kirkby-Kent, J. A.; Maxted, P. F. L.; Serenelli, A. M.; Turner, O. D.; Evans, D. F.; Anderson, D. R.; Hellier, C.; West, R. G.
2016-06-01
Context: AI Phe is a double-lined, detached eclipsing binary, in which a K-type sub-giant star totally eclipses its main-sequence companion every 24.6 days. This configuration makes AI Phe ideal for testing stellar evolutionary models. Difficulties in obtaining a complete lightcurve mean the precision of existing radii measurements could be improved. Aims: Our aim is to improve the precision of the radius measurements for the stars in AI Phe using high-precision photometry from the Wide Angle Search for Planets (WASP), and use these improved radius measurements together with estimates of the masses, temperatures and composition of the stars to place constraints on the mixing length, helium abundance and age of the system. Methods: A best-fit EBOP model is used to obtain lightcurve parameters, with their standard errors calculated using a prayer-bead algorithm. These were combined with previously published spectroscopic orbit results, to obtain masses and radii. A Bayesian method is used to estimate the age of the system for model grids with different mixing lengths and helium abundances. Results: The radii are found to be R1 = 1.835 ± 0.014 R⊙, R2 = 2.912 ± 0.014 R⊙ and the masses M1 = 1.1973 ± 0.0037 M⊙, M2 = 1.2473 ± 0.0039 M⊙. From the best-fit stellar models we infer a mixing length of 1.78, a helium abundance of YAI = 0.26 (+0.02/-0.01) and an age of 4.39 ± 0.32 Gyr. Times of primary minimum show the period of AI Phe is not constant. Currently, there are insufficient data to determine the cause of this variation. Conclusions: Improved precision in the masses and radii have improved the age estimate, and allowed the mixing length and helium abundance to be constrained. The eccentricity is now the largest source of uncertainty in calculating the masses. Further work is needed to characterise the orbit of AI Phe.
Obtaining more binaries with parameters measured to a similar level of precision would allow us to test for relationships between helium abundance and mixing length.
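The prayer-bead error estimate mentioned above can be sketched in a few lines. This is a minimal illustration, not the study's pipeline: a hypothetical straight-line model stands in for the lightcurve fit, the residuals are cyclically shifted, and the scatter of the refitted parameters serves as the error estimate.

```python
import numpy as np

def prayer_bead_errors(t, y, fit, model):
    """Estimate parameter errors by cyclically shifting residuals and refitting."""
    p0 = fit(t, y)                       # best-fit parameters on the real data
    resid = y - model(t, p0)             # residuals about the best fit
    refits = []
    for shift in range(1, len(y)):       # every cyclic shift of the residuals
        y_synth = model(t, p0) + np.roll(resid, shift)
        refits.append(fit(t, y_synth))
    return p0, np.std(refits, axis=0, ddof=1)

# Hypothetical example: straight-line "lightcurve" with noise
rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 60)
y = 2.0 * t + 0.5 + 0.05 * rng.standard_normal(60)

fit = lambda t, y: np.polyfit(t, y, 1)   # returns (slope, intercept)
model = lambda t, p: np.polyval(p, t)

p_best, p_err = prayer_bead_errors(t, y, fit, model)
```

Unlike a plain bootstrap, the cyclic shifts preserve any correlated structure in the residuals, which is why the method is popular for lightcurve fitting.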
NASA Technical Reports Server (NTRS)
Patch, R. W.
1971-01-01
The composition and thermodynamic properties were calculated for 100 to 110,000 K and 1.01325 x 10^2 to 1.01325 x 10^8 N/m^2 for chemical equilibrium in the Debye-Huckel and ideal-gas approximations. Quantities obtained were the concentrations of hydrogen atoms, protons, free electrons, hydrogen molecules, negative hydrogen ions, hydrogen diatomic molecular ions, and hydrogen triatomic molecular ions, and the enthalpy, entropy, average molecular weight, specific heat at constant pressure, density, and isentropic exponent. Electronically excited states of H and H2 were included. Choked, isentropic, one-dimensional nozzle flow with shifting chemical equilibrium was calculated in the Debye-Huckel and ideal-gas approximations for stagnation temperatures from 2500 to 100,000 K. The mass flow per unit throat area and the sonic flow factor were obtained. The pressure ratio, temperature, velocity, and ideal and vacuum specific impulses at the throat and for pressure ratios as low as 0.000001 downstream were found. For high temperatures at pressures approaching 1.01325 x 10^8 N/m^2, the ideal-gas approximation was found to be inadequate for calculations of composition, precise thermodynamic properties, and precise nozzle flow. The greatest discrepancy in nozzle flow occurred in the exit temperature, which was as much as 21 percent higher when the Debye-Huckel approximation was used.
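The simplest ingredient of such an equilibrium-composition calculation can be sketched with the ideal-gas Saha equation. The sketch below treats only H <-> H+ + e-, ignoring the molecules, negative ions, excited states, and Debye-Huckel corrections included in the report, and approximates the nuclei density by P/(kT); it is an illustration, not the report's method.

```python
import math

# Physical constants (SI)
K_B = 1.380649e-23                      # Boltzmann constant, J/K
M_E = 9.1093837e-31                     # electron mass, kg
H_PLANCK = 6.62607015e-34               # Planck constant, J s
CHI_H = 13.5984 * 1.602176634e-19       # H ionization energy, J

def ionization_fraction(T, P):
    """Ideal-gas Saha ionization fraction x for H <-> H+ + e-.

    Solves x**2/(1-x) = S/n_tot with n_tot ~ P/(kT); the degeneracy
    factor 2*g(H+)/g(H) equals 1 for ground-state hydrogen.
    """
    S = (2.0 * math.pi * M_E * K_B * T / H_PLANCK**2) ** 1.5 \
        * math.exp(-CHI_H / (K_B * T))
    n_tot = P / (K_B * T)
    r = S / n_tot
    # Positive root of x**2 + r*x - r = 0
    return (-r + math.sqrt(r * r + 4.0 * r)) / 2.0
```

At low pressure and high temperature the gas is essentially fully ionized, while at 5000 K and atmospheric pressure it is almost entirely neutral, which is the qualitative behavior the full equilibrium calculation refines.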
Epilepsy Treatment Simplified through Mobile Ketogenic Diet Planning.
Li, Hanzhou; Jauregui, Jeffrey L; Fenton, Cagla; Chee, Claire M; Bergqvist, A G Christina
2014-07-01
The Ketogenic Diet (KD) is an effective, alternative treatment for refractory epilepsy. This high-fat, low-protein and low-carbohydrate diet mimics the metabolic and hormonal changes that are associated with fasting. To maximize the effectiveness of the KD, each meal is precisely planned, calculated, and weighed to within 0.1 gram for the average three-year duration of treatment. Managing the KD is time-consuming and may deter caretakers and patients from pursuing or continuing this treatment. Thus, we investigated methods of planning the KD faster and making the process more portable through mobile applications. Nutritional data were gathered from the United States Department of Agriculture (USDA) Nutrient Database. User-selected foods are converted into linear equations with n variables and three constraints: prescribed fat content, prescribed protein content, and prescribed carbohydrate content. Techniques are applied to derive the solutions to the underdetermined system, depending on the number of foods chosen. The method was implemented on an iOS device and tested with a variety of foods and different numbers of foods selected. In each case, the application's constructed meal plan was within 95% precision of the KD requirements. In this study, we attempt to reduce the time needed to calculate a meal by automating the computation of the KD via a linear algebra model. We improve upon previous KD calculators by offering optimal suggestions and incorporating the USDA database. We believe this mobile application will help make the KD and other dietary treatment preparations less time-consuming and more convenient.
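The underdetermined linear system described above can be sketched with NumPy. The macronutrient fractions below are illustrative stand-ins, not USDA values, and a production planner would also enforce non-negative food amounts (e.g. with a constrained solver); here the minimum-norm least-squares solution shows the core idea.

```python
import numpy as np

# Rows: fat, protein, carbohydrate fraction per gram of food (illustrative).
# Columns: four hypothetical user-selected foods.
A = np.array([
    [0.37, 0.03, 0.00, 1.00],   # fat
    [0.02, 0.31, 0.03, 0.00],   # protein
    [0.03, 0.00, 0.07, 0.00],   # carbohydrate
])
b = np.array([60.0, 12.0, 8.0])  # prescribed grams of fat, protein, carbs

# Minimum-norm solution of the underdetermined system A @ grams = b
# (4 unknown food amounts, 3 constraints)
grams, *_ = np.linalg.lstsq(A, b, rcond=None)
achieved = A @ grams
```

Because A has full row rank, the least-squares solution satisfies the three macronutrient constraints exactly; with more foods than constraints there are infinitely many solutions, and `lstsq` picks the one with the smallest norm.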
Precise Point Positioning technique for short and long baselines time transfer
NASA Astrophysics Data System (ADS)
Lejba, Pawel; Nawrocki, Jerzy; Lemanski, Dariusz; Foks-Ryznar, Anna; Nogas, Pawel; Dunst, Piotr
2013-04-01
In this work, the determination of clock parameters for several timing receivers, TTS-4 (AOS), ASHTECH Z-XII3T (OP, ORB, PTB, USNO) and SEPTENTRIO POLARX4TR (ORB, since February 11, 2012), using the Precise Point Positioning (PPP) technique is presented. The clock parameters were determined for several time links based on the data delivered by the time and frequency laboratories mentioned above. The computations cover the period from January 1 to December 31, 2012 and were performed in two modes, with 7-day and one-month solutions for all links. All RINEX data files, which include phase and code GPS data, were recorded at 30-second intervals. All calculations were performed by means of Natural Resources Canada's GPS Precise Point Positioning (GPS-PPP) software, based on the high-quality precise satellite coordinates and satellite clocks delivered by the IGS as final products. The independent PPP technique is a powerful and simple method that allows for better control of antenna positions at AOS and for verification of other time transfer techniques such as GPS CV, GLONASS CV and TWSTFT. The PPP technique is also a very good alternative for calibration of the PL-AOS glass fiber link currently realized by AOS. The PPP technique is now one of the main time transfer methods used at AOS, which considerably improves and strengthens the quality of the Polish time scales UTC(AOS), UTC(PL), and TA(PL). KEYWORDS: Precise Point Positioning, time transfer, IGS products, GNSS, time scales.
Implementation of the common phrase index method on the phrase query for information retrieval
NASA Astrophysics Data System (ADS)
Fatmawati, Triyah; Zaman, Badrus; Werdiningsih, Indah
2017-08-01
With the development of technology, finding information in news text has become easy, because news is distributed not only in print media, such as newspapers, but also in electronic media that can be accessed using a search engine. In the process of finding relevant documents with a search engine, a phrase is often used as a query. The number of words that make up the phrase query and their positions obviously affect the relevance of the documents produced. As a result, the accuracy of the information obtained will be affected. Based on the outlined problem, the purpose of this research was to analyze the implementation of the common phrase index method for information retrieval. The research was conducted on English news text and implemented in a prototype to determine the relevance level of the documents produced. The system is built with the stages of pre-processing, indexing, term weighting calculation, and cosine similarity calculation. The system then displays the document search results in sequence, based on the cosine similarity. Furthermore, system testing was conducted using 100 documents and 20 queries, and the results were used for the evaluation stage. First, the relevant documents were determined using the kappa statistic. Second, the system success rate was determined using precision, recall, and F-measure. In this research, the kappa statistic was 0.71, so the relevance judgments are suitable for system evaluation. The calculation of precision, recall, and F-measure produced a precision of 0.37, a recall of 0.50, and an F-measure of 0.43. From these results, it can be said that the success rate of the system in producing relevant documents is low.
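The evaluation measures used above are the standard set-based ones. A minimal sketch, with made-up document IDs rather than the study's data:

```python
def evaluate(retrieved, relevant):
    """Set-based precision, recall, and F-measure for one query."""
    retrieved, relevant = set(retrieved), set(relevant)
    tp = len(retrieved & relevant)                     # relevant docs found
    precision = tp / len(retrieved) if retrieved else 0.0
    recall = tp / len(relevant) if relevant else 0.0
    f = (2 * precision * recall / (precision + recall)) if tp else 0.0
    return precision, recall, f

# Toy query: 10 documents retrieved, 8 truly relevant, 4 of them found
p, r, f = evaluate(retrieved=range(10), relevant=[0, 1, 2, 3, 10, 11, 12, 13])
```

As a consistency check on the abstract's numbers: 2 x 0.37 x 0.50 / (0.37 + 0.50) is approximately 0.43, matching the reported F-measure.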
Approximate Single-Diode Photovoltaic Model for Efficient I-V Characteristics Estimation
Ting, T. O.; Zhang, Nan; Guan, Sheng-Uei; Wong, Prudence W. H.
2013-01-01
Precise photovoltaic (PV) behavior models are normally described by nonlinear analytical equations. To solve such equations, it is necessary to use iterative procedures. Aiming to make the computation easier, this paper proposes an approximate single-diode PV model that enables high-speed predictions for the electrical characteristics of commercial PV modules. Based on the experimental data, statistical analysis is conducted to validate the approximate model. Simulation results show that the calculated current-voltage (I-V) characteristics fit the measured data with high accuracy. Furthermore, compared with the existing modeling methods, the proposed model reduces the simulation time by approximately 30% in this work. PMID:24298205
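The iterative baseline that the approximate model above is designed to avoid can be sketched as follows. The single-diode equation is implicit in the current, so each I-V point requires a root solve; all parameter values here are hypothetical illustrations, not the paper's.

```python
import math

def single_diode_current(V, Iph=5.0, I0=1e-9, n=1.3, Rs=0.005, Rsh=100.0,
                         Vt=0.02585):
    """Cell current at terminal voltage V from the implicit single-diode
    equation I = Iph - I0*(exp((V+I*Rs)/(n*Vt)) - 1) - (V+I*Rs)/Rsh,
    solved by bisection. Valid for 0 <= V below the open-circuit voltage,
    where the root lies in [0, Iph]."""
    def f(I):
        return (Iph - I0 * (math.exp((V + I * Rs) / (n * Vt)) - 1.0)
                - (V + I * Rs) / Rsh - I)
    lo, hi = 0.0, Iph
    for _ in range(100):            # f is strictly decreasing in I
        mid = 0.5 * (lo + hi)
        if f(mid) > 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

i_sc = single_diode_current(0.0)    # near the photocurrent at short circuit
i_mid = single_diode_current(0.6)   # lower current near the knee of the curve
```

Bisection is used here because the residual is monotone in the current, so convergence is guaranteed; it is this per-point iteration, repeated over a whole I-V sweep, that an explicit approximate model eliminates.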
Benchmark studies of induced radioactivity produced in LHC materials, Part I: Specific activities.
Brugger, M; Khater, H; Mayer, S; Prinz, A; Roesler, S; Ulrici, L; Vincke, H
2005-01-01
Samples of materials which will be used in the LHC machine for shielding and construction components were irradiated in the stray radiation field of the CERN-EU high-energy reference field facility. After irradiation, the specific activities induced in the various samples were analysed with a high-precision gamma spectrometer at various cooling times, allowing identification of isotopes with a wide range of half-lives. Furthermore, the irradiation experiment was simulated in detail with the FLUKA Monte Carlo code. A comparison of measured and calculated specific activities shows good agreement, supporting the use of FLUKA for estimating the level of induced activity in the LHC.
Calculation of precise firing statistics in a neural network model
NASA Astrophysics Data System (ADS)
Cho, Myoung Won
2017-08-01
A precise prediction of neural firing dynamics is requisite to understand the function of, and the learning process in, a biological neural network which works depending on exact spike timings. Basically, the prediction of firing statistics is a delicate many-body problem because the firing probability of a neuron at a given time is determined by the summation over all effects from past firing states. A neural network model with the Feynman path integral formulation was recently introduced. In this paper, we present several methods to calculate firing statistics in the model. We apply the methods to some cases and compare the theoretical predictions with simulation results.
NASA Technical Reports Server (NTRS)
French, R. A.; Cohen, B. A.; Miller, J. S.
2014-01-01
KArLE (Potassium-Argon Laser Experiment) has been developed for in situ planetary geochronology using the K-Ar (potassium-argon) isotope system, where material ablated by LIBS (Laser-Induced Breakdown Spectroscopy) is used to calculate isotope abundances. We are determining the accuracy and precision of volume measurements of the ablation pits using stereo and laser microscope data to better understand the ablation process for isotope abundance calculations. If a characteristic volume can be determined with sufficient accuracy and precision for specific rock types, KArLE will prove to be a useful instrument for future planetary rover missions.
Astrophysical S-factor for destructive reactions of lithium-7 in big bang nucleosynthesis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Komatsubara, Tetsuro; Kwon, YoungKwan; Moon, JunYoung
One of the most prominent successes of the Big Bang model is the precise reproduction of the mass abundance ratio for {sup 4}He. In spite of this success, the abundances of lithium isotopes are still inconsistent between observations and calculated results, which is known as the lithium abundance problem. Since the calculations were based on experimental reaction data together with theoretical estimations, more precise experimental measurements may improve the knowledge of Big Bang nucleosynthesis. As one of the destruction processes of lithium-7, we have performed measurements of the reaction cross sections of the {sup 7}Li({sup 3}He,p){sup 9}Be reaction.
Sample Size Calculations for Precise Interval Estimation of the Eta-Squared Effect Size
ERIC Educational Resources Information Center
Shieh, Gwowen
2015-01-01
Analysis of variance is one of the most frequently used statistical analyses in the behavioral, educational, and social sciences, and special attention has been paid to the selection and use of an appropriate effect size measure of association in analysis of variance. This article presents the sample size procedures for precise interval estimation…
Kawalilak, C E; Johnston, J D; Cooper, D M L; Olszynski, W P; Kontulainen, S A
2016-02-01
Precision errors of cortical bone micro-architecture from high-resolution peripheral quantitative computed tomography (HR-pQCT) ranged from 1 to 16 % and did not differ between automatic and manually modified endocortical contour methods in postmenopausal women or young adults. In postmenopausal women, manually modified contours led to generally higher cortical bone properties when compared to the automated method. The first objective of the study was to define in vivo precision errors (coefficient of variation root mean square (CV%RMS)) and least significant change (LSC) for cortical bone micro-architecture from HR-pQCT scans using two endocortical contouring methods, automatic (AUTO) and manually modified (MOD), in two groups (postmenopausal women and young adults). The second was to compare precision errors and bone outcomes obtained with both methods within and between groups. Using HR-pQCT, we scanned the distal radius and tibia of 34 postmenopausal women (mean age ± SD 74 ± 7 years) and 30 young adults (27 ± 9 years) twice. Cortical micro-architecture was determined using the AUTO and MOD contour methods. CV%RMS and LSC were calculated. Repeated measures and multivariate ANOVA were used to compare mean CV% and bone outcomes between the methods within and between the groups. Significance was accepted at P < 0.05. CV%RMS ranged from 0.9 to 16.3 %. Within-group precision did not differ between evaluation methods. Compared to young adults, postmenopausal women had better precision for radial cortical porosity (precision difference 9.3 %) and pore volume (7.5 %) with MOD. Young adults had better precision for cortical thickness (0.8 %, MOD) and tibial cortical density (0.2 %, AUTO). In postmenopausal women, MOD resulted in 0.2-54 % higher values for most cortical outcomes, as well as 6-8 % lower radial and tibial cortical BMD and 2 % lower tibial cortical thickness.
Results suggest that AUTO and MOD endocortical contour methods provide comparable repeatability. In postmenopausal women, manual modification of endocortical contours led to generally higher cortical bone properties when compared to the automated method, while no between-method differences were observed in young adults.
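The two precision statistics used above can be computed as follows. The repeat-scan values are hypothetical, and the LSC uses the conventional 2.77 factor for 95% confidence with duplicate measurements.

```python
import numpy as np

def cv_rms_percent(scan1, scan2):
    """Root-mean-square coefficient of variation (CV%RMS) over repeat scans."""
    pairs = np.column_stack([scan1, scan2]).astype(float)
    means = pairs.mean(axis=1)
    sds = pairs.std(axis=1, ddof=1)        # SD of each subject's two scans
    return 100.0 * np.sqrt(np.mean((sds / means) ** 2))

# Hypothetical repeat measurements of one cortical outcome, three subjects
scan1 = np.array([100.0, 200.0, 150.0])
scan2 = np.array([102.0, 196.0, 153.0])

cv = cv_rms_percent(scan1, scan2)
lsc = 2.77 * cv          # least significant change at 95% confidence
```

A follow-up change in an individual patient smaller than the LSC cannot be distinguished from measurement noise.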
NASA Technical Reports Server (NTRS)
Payne, M. H.
1973-01-01
A computer program is described for the calculation of the zeroes of the associated Legendre functions, Pnm, and their derivatives, for the calculation of the extrema of Pnm and also the integral between pairs of successive zeroes. The program has been run for all n,m from (0,0) to (20,20) and selected cases beyond that for n up to 40. Up to (20,20), the program (written in double precision) retains nearly full accuracy, and indications are that up to (40,40) there is still sufficient precision (4-5 decimal digits for a 54-bit mantissa) for estimation of various bounds and errors involved in geopotential modelling, the purpose for which the program was written.
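The same task, locating the zeros of the associated Legendre functions Pnm on (-1, 1), can be sketched in double precision using the standard stable recurrences plus bracketing and bisection. This is an illustrative reimplementation, not the described program.

```python
import math

def assoc_legendre(n, m, x):
    """P_n^m(x) via the standard recurrences (Condon-Shortley phase)."""
    pmm = 1.0
    if m > 0:
        s = math.sqrt((1.0 - x) * (1.0 + x))
        fact = 1.0
        for _ in range(m):
            pmm *= -fact * s
            fact += 2.0
    if n == m:
        return pmm
    pmmp1 = x * (2 * m + 1) * pmm          # P_{m+1}^m
    for ll in range(m + 2, n + 1):         # upward recurrence in degree
        pmm, pmmp1 = pmmp1, (x * (2 * ll - 1) * pmmp1
                             - (ll + m - 1) * pmm) / (ll - m)
    return pmmp1

def legendre_zeros(n, m, samples=4000):
    """Zeros of P_n^m on (-1, 1): bracket sign changes, then bisect."""
    f = lambda x: assoc_legendre(n, m, x)
    xs = [-1.0 + 2.0 * (i + 0.5) / samples for i in range(samples)]
    roots = []
    for a, b in zip(xs, xs[1:]):
        fa = f(a)
        if fa * f(b) < 0.0:
            for _ in range(80):
                c = 0.5 * (a + b)
                fc = f(c)
                if fa * fc <= 0.0:
                    b = c
                else:
                    a, fa = c, fc
            roots.append(0.5 * (a + b))
    return roots
```

As a sanity check, P_n^m has exactly n - m zeros inside (-1, 1); for P_2^0 = (3x^2 - 1)/2 they sit at plus and minus 1/sqrt(3).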
Measurement uncertainty of liquid chromatographic analyses visualized by Ishikawa diagrams.
Meyer, Veronika R
2003-09-01
Ishikawa, or cause-and-effect, diagrams help to visualize the parameters that influence a chromatographic analysis. Therefore, they facilitate the setup of the uncertainty budget of the analysis, which can then be expressed in mathematical form. If the uncertainty is calculated as the Gaussian sum of all uncertainty parameters, it is necessary to quantitate them all, a task that is usually not practical. The other possible approach is to use the intermediate precision as a basis for the uncertainty calculation. In this case, it is at least necessary to consider the uncertainty of the purity of the reference material in addition to the precision data. The Ishikawa diagram is then very simple, and so is the uncertainty calculation. This simplicity comes at the cost of losing information about the individual parameters that contribute to the measurement uncertainty.
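The simplified budget described above, intermediate precision combined with the reference-material purity uncertainty in quadrature, reduces to a two-term root-sum-square. The numbers below are illustrative only.

```python
import math

# Relative standard uncertainties (illustrative values)
u_precision = 0.03   # from intermediate precision of the method
u_purity    = 0.04   # from the reference standard's purity certificate

# Gaussian (root-sum-square) combination of independent contributions
u_combined = math.sqrt(u_precision**2 + u_purity**2)   # combined standard uncertainty
U_expanded = 2.0 * u_combined                          # k = 2, ~95% coverage
```

Each additional branch of the Ishikawa diagram that is quantified separately simply adds another squared term under the square root.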
Precision of MRI-based body composition measurements of postmenopausal women
Romu, Thobias; Thorell, Sofia; Lindblom, Hanna; Berin, Emilia; Holm, Anna-Clara Spetz; Åstrand, Lotta Lindh; Karlsson, Anette; Borga, Magnus; Hammar, Mats; Leinhard, Olof Dahlqvist
2018-01-01
Objectives To determine the precision of magnetic resonance imaging (MRI) based fat and muscle quantification in a group of postmenopausal women, and to extend the method to individual muscles relevant to upper-body exercise. Materials and methods This was a sub-study to a randomized controlled trial investigating the effects of resistance training to decrease hot flushes in postmenopausal women. Thirty-six women were included, mean age 56 ± 6 years. Each subject was scanned twice with a 3.0T MR-scanner using a whole-body Dixon protocol. Water and fat images were calculated using a 6-peak lipid model including R2*-correction. Body composition analyses were performed to measure visceral and subcutaneous fat volumes, as well as lean volumes and muscle fat infiltration (MFI) of three muscle groups (thigh, lower leg, and abdominal muscles) and three individual muscles (pectoralis, latissimus, and rhomboideus). Analysis was performed using a multi-atlas, calibrated water-fat separated quantification method. Liver fat was measured as the average proton density fat-fraction (PDFF) of three regions-of-interest. Precision was determined with Bland-Altman analysis, repeatability, and coefficient of variation. Results All of the 36 included women were successfully scanned and analysed. The coefficient of variation was 1.1% to 1.5% for abdominal fat compartments (visceral and subcutaneous), 0.8% to 1.9% for volumes of muscle groups (thigh, lower leg, and abdomen), and 2.3% to 7.0% for individual muscle volumes (pectoralis, latissimus, and rhomboideus). Limits of agreement for MFI were within ± 2.06% for muscle groups and within ± 5.13% for individual muscles. The limits of agreement for liver PDFF were within ± 1.9%. Conclusion Whole-body Dixon MRI could characterize a range of different fat and muscle compartments with high precision, including individual muscles, in the study group of postmenopausal women.
The inclusion of individual muscles, calculated from the same scan, enables analysis for specific intervention programs and studies. PMID:29415060
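The Bland-Altman limits of agreement used above come directly from the test-retest differences. A minimal sketch with hypothetical repeat volumes:

```python
import numpy as np

def bland_altman(scan1, scan2):
    """Mean bias and 95% limits of agreement between repeated measurements."""
    d = np.asarray(scan1, float) - np.asarray(scan2, float)
    bias = d.mean()
    sd = d.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Hypothetical test-retest volumes (litres) for four subjects
v1 = np.array([10.0, 12.0, 11.0, 13.0])
v2 = np.array([10.2, 11.8, 11.1, 13.3])
bias, (lo, hi) = bland_altman(v1, v2)
```

If the limits of agreement are narrower than the changes an intervention is expected to produce, the method is precise enough to track individual subjects.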
In vivo precision of conventional and digital methods for obtaining quadrant dental impressions.
Ender, Andreas; Zimmermann, Moritz; Attin, Thomas; Mehl, Albert
2016-09-01
Quadrant impressions are commonly used as an alternative to full-arch impressions. Digital impression systems provide the ability to take these impressions very quickly; however, few studies have investigated the accuracy of the technique in vivo. The aim of this study is to assess the precision of digital quadrant impressions in vivo in comparison to conventional impression techniques. Impressions were obtained via two conventional (metal full-arch tray, CI, and triple tray, T-Tray) and seven digital impression systems (Lava True Definition Scanner, T-Def; Lava Chairside Oral Scanner, COS; Cadent iTero, ITE; 3Shape Trios, TRI; 3Shape Trios Color, TRC; CEREC Bluecam, Software 4.0, BC4.0; CEREC Bluecam, Software 4.2, BC4.2; and CEREC Omnicam, OC). Impressions were taken three times for each of five subjects (n = 15). The impressions were then superimposed within the test groups. Differences from model surfaces were measured using a normal surface distance method. Precision was calculated using the Perc90_10 value. The values for all test groups were statistically compared. The precision ranged from 18.8 (CI) to 58.5 μm (T-Tray), with the highest precision in the CI, T-Def, BC4.0, TRC, and TRI groups. The deviation pattern varied distinctly depending on the impression method. Impression systems with single-shot capture exhibited greater deviations at the tooth surface whereas high-frame-rate impression systems differed more in gingival areas. Triple tray impressions displayed higher local deviation at the occlusal contact areas of the upper and lower jaw. Digital quadrant impression methods achieve a level of precision comparable to conventional impression techniques. However, there are significant differences in terms of absolute values and deviation pattern. With all tested digital impression systems, time-efficient capturing of quadrant impressions is possible.
The clinical precision of digital quadrant impression models is sufficient to cover a broad variety of restorative indications. Yet the precision differs significantly between the digital impression systems.
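The Perc90_10 statistic above is, as described, the spread between the 90th and 10th percentiles of the signed surface deviations from the superimposed models; that reading is an assumption here, and the deviation data below are simulated rather than taken from the study.

```python
import numpy as np

def perc90_10(deviations):
    """Spread between the 90th and 10th percentiles of surface deviations."""
    p90, p10 = np.percentile(deviations, [90, 10])
    return p90 - p10

# Simulated signed surface distances (micrometres) between two superimposed
# models: roughly Gaussian with a 10 um standard deviation
rng = np.random.default_rng(0)
dev = rng.normal(loc=0.0, scale=10.0, size=100_000)
spread = perc90_10(dev)
```

Unlike a standard deviation, this percentile-based spread is insensitive to a few extreme outlier points at the margins of the scan.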
Back-illuminate fiber system research for multi-object fiber spectroscopic telescope
NASA Astrophysics Data System (ADS)
Zhou, Zengxiang; Liu, Zhigang; Hu, Hongzhuan; Wang, Jianping; Zhai, Chao; Chu, Jiaru
2016-07-01
Using parallel-controlled fiber positioners as the spectroscopic receiver is an efficient observation system for spectral surveys; it has been used in LAMOST and has been proposed for CFHT and the rebuilt Mayall telescope. During telescope observation, the position of each fiber strongly influences how efficiently light is coupled into the fiber and delivered to the spectrograph. When the fibers are back-illuminated at the spectrograph end, they emit light at the positioner end, so the CCD cameras can capture images of the fiber tip positions across the focal plane, calculate precise position information by the light centroid method, and feed it back to the control system. After many years of research, back-illuminated fiber measurement has proven to be the best method for acquiring precise fiber positions. A back-illuminated fiber system was developed and combined with the low-resolution spectrograph instruments in LAMOST. It provides uniform light output from the fibers, meets the requirements for CCD camera measurement, and is controlled by the high-level observation system, which can shut it down during telescope observation. This paper introduces the design of the back-illuminated system and tests of different light sources. After optimization, the illumination system is comparable to an integrating sphere and meets the conditions for fiber position measurement.
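The light-centroid method mentioned above reduces each fiber spot to an intensity-weighted mean pixel position. A minimal sketch on a synthetic spot (the spot shape and size are illustrative):

```python
import numpy as np

def spot_centroid(img):
    """Intensity-weighted centroid (x, y) of a fiber spot image."""
    img = np.asarray(img, dtype=float)
    ys, xs = np.indices(img.shape)
    total = img.sum()
    return (xs * img).sum() / total, (ys * img).sum() / total

# Hypothetical 9x9 camera cutout: small Gaussian spot centred at (x=5.3, y=3.7)
ys, xs = np.indices((9, 9))
spot = np.exp(-((xs - 5.3) ** 2 + (ys - 3.7) ** 2) / 2.0)
cx, cy = spot_centroid(spot)
```

Because the centroid averages over many pixels, it locates the spot to a small fraction of a pixel, which is what makes micron-level fiber positioning feedback possible.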
NASA Astrophysics Data System (ADS)
Fagan, Mike; Dueben, Peter; Palem, Krishna; Carver, Glenn; Chantry, Matthew; Palmer, Tim; Schlacter, Jeremy
2017-04-01
It has been shown that a mixed precision approach that judiciously replaces double precision with single precision calculations can speed up global simulations. In particular, a mixed precision variation of the Integrated Forecast System (IFS) of the European Centre for Medium-Range Weather Forecasts (ECMWF) showed virtually the same quality model results as the standard double precision version (Vana et al., Single precision in weather forecasting models: An evaluation with the IFS, Monthly Weather Review, in print). In this study, we perform detailed measurements of savings in computing time and energy using a mixed precision variation of the OpenIFS model, analogous to the IFS variation used in Vana et al. We (1) present results of energy measurements for simulations in single and double precision using Intel's RAPL technology, (2) conduct a scaling study to quantify the effects that increasing model resolution has on both energy dissipation and computing cycles, (3) analyze the differences between single-core and multicore processing, and (4) compare the effects of different compiler technologies on the mixed precision OpenIFS code. In particular, we compare Intel icc/ifort with GNU gcc/gfortran.
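Why the replacement must be judicious can be seen in a toy accumulation experiment: repeatedly adding a small increment in single precision drifts far more than in double precision, which is the kind of error growth a mixed precision model must keep out of sensitive code paths. This sketch is illustrative and unrelated to the IFS code itself.

```python
import numpy as np

# Accumulate 0.1 one hundred thousand times in single and double precision.
n = 100_000
c32 = np.float32(0.1)
s32 = np.float32(0.0)
s64 = 0.0                      # Python float = IEEE double precision
for _ in range(n):
    s32 += c32
    s64 += 0.1

err32 = abs(float(s32) - 10_000.0)   # single-precision accumulation error
err64 = abs(s64 - 10_000.0)          # double-precision accumulation error
```

In practice, accumulators, pressure solvers, and other error-sensitive kernels are kept in double precision while bulk arithmetic runs in single precision.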
Efficient calculation of cosmological neutrino clustering in the non-linear regime
DOE Office of Scientific and Technical Information (OSTI.GOV)
Archidiacono, Maria; Hannestad, Steen, E-mail: archi@phys.au.dk, E-mail: sth@phys.au.dk
2016-06-01
We study in detail how neutrino perturbations can be followed in linear theory by using only terms up to l = 2 in the Boltzmann hierarchy. We provide a new approximation to the third moment and demonstrate that the neutrino power spectrum can be calculated to a precision of better than ∼ 5% for masses up to ∼ 1 eV and k ≲ 10 h/Mpc. The matter power spectrum can be calculated far more precisely, typically at least a factor of a few better than with existing approximations. We then proceed to study how the neutrino power spectrum can be reliably calculated even in the non-linear regime by using the non-linear gravitational potential, sourced by dark matter overdensities, as derived from semi-analytic methods based on N-body simulations, in the Boltzmann evolution hierarchy. Our results agree extremely well with results derived from N-body simulations that include cold dark matter and neutrinos as independent particles with different properties.
Virial Coefficients and Equations of State for Hard Polyhedron Fluids.
Irrgang, M Eric; Engel, Michael; Schultz, Andrew J; Kofke, David A; Glotzer, Sharon C
2017-10-24
Hard polyhedra are a natural extension of the hard sphere model for simple fluids, but there is no general scheme for predicting the effect of shape on thermodynamic properties, even in moderate-density fluids. Only the second virial coefficient is known analytically for general convex shapes, so higher-order equations of state have been elusive. Here we investigate high-precision state functions in the fluid phase of 14 representative polyhedra with different assembly behaviors. We discuss historic efforts in analytically approximating virial coefficients up to B4 and numerically evaluating them up to B8. Using virial coefficients as inputs, we show the convergence properties of four equations of state for hard convex bodies. In particular, the exponential approximant of Barlow et al. (J. Chem. Phys. 2012, 137, 204102) is found to be useful up to the first ordering transition for most polyhedra. The convergence behavior we explore can guide choices in expending additional resources for improved estimates. Fluids of arbitrary hard convex bodies are too complicated to be described in a general way at high densities, so the high-precision state data we provide can serve as a reference for future work in calculating state data or as a basis for thermodynamic integration.
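Using virial coefficients as inputs to an equation of state can be illustrated with the classic hard-sphere case rather than the paper's polyhedra: the known reduced hard-sphere coefficients give a truncated series for the compressibility factor, which can be checked against the Carnahan-Starling equation.

```python
# Compressibility factor Z = P/(rho*kT) for hard spheres from a truncated
# virial series in the packing fraction eta.
# Known reduced hard-sphere virial terms (coefficients of eta^1 .. eta^6,
# i.e. B2 through B7 in reduced form):
VIRIAL = [4.0, 10.0, 18.3648, 28.2245, 39.8151, 53.3444]

def z_virial(eta, order=4):
    """Z from the virial series truncated after B_order (order <= 7)."""
    return 1.0 + sum(c * eta ** (k + 1) for k, c in enumerate(VIRIAL[:order - 1]))

def z_carnahan_starling(eta):
    """Carnahan-Starling equation of state for hard spheres."""
    return (1.0 + eta + eta**2 - eta**3) / (1.0 - eta) ** 3

eta = 0.2
z4 = z_virial(eta, order=4)      # series through B4
z7 = z_virial(eta, order=7)      # series through B7
z_cs = z_carnahan_starling(eta)
```

Adding higher coefficients pulls the truncated series toward the reference equation of state, which is the convergence behavior the paper examines for polyhedra.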
High Precision Temperature Insensitive Strain Sensor Based on Fiber-Optic Delay
Yang, Ning; Su, Jun; Fan, Zhiqiang; Qiu, Qi
2017-01-01
A fiber-optic delay based strain sensor with high precision and temperature insensitivity was reported, which works by detecting the delay induced by strain instead of a spectral shift. To analyze the working principle of this sensor, the elastic property of fiber-optic delay was theoretically analyzed and the elastic coefficient was measured as 3.78 ps/km·με. In this sensor, an extra reference path was introduced to simplify the measurement of delay and resist the cross-effect of environmental temperature. Utilizing an optical fiber stretcher driven by piezoelectric ceramics, the performance of this strain sensor was tested. The experimental results demonstrate that temperature fluctuations contribute little to the strain error and that the calculated strain sensitivity is as high as 4.75 με in the range of 350 με. As a result, this strain sensor is shown to be feasible and practical, and is appropriate for strain measurement in a simple and economical way. Furthermore, on the basis of this sensor, quasi-distributed measurement could also be easily realized by wavelength division multiplexing and wavelength addressing for long-distance structural health and security monitoring. PMID:28468323
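The conversion from measured delay to strain follows directly from the elastic coefficient reported above; the fiber length and delay value in the example are hypothetical.

```python
# Elastic delay coefficient from the abstract: 3.78 ps per km of fiber
# per microstrain of applied strain.
K_DELAY = 3.78   # ps / (km * microstrain)

def strain_from_delay(delay_ps, fiber_km):
    """Convert a measured delay change (ps) on a fiber of given length (km)
    to strain in microstrain."""
    return delay_ps / (K_DELAY * fiber_km)

# Example: a 378 ps delay change measured on 1 km of fiber
eps = strain_from_delay(378.0, 1.0)
```

The linear relation also shows why longer sensing fibers improve resolution: the same strain produces a proportionally larger, easier-to-measure delay.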
Calculation and Measurement of Low-Energy Radiative Moller Scattering
NASA Astrophysics Data System (ADS)
Epstein, Charles; DarkLight Collaboration
2017-09-01
A number of current nuclear physics experiments have come to rely on precise knowledge of electron-electron (Moller) and positron-electron (Bhabha) scattering. Some of these experiments, having lepton beams on targets containing atomic electrons, use these purely-QED processes as normalization. In other scenarios, with electron beams at low energy and very high intensity, Moller scattering and radiative Moller scattering have such enormous cross-sections that the backgrounds they produce must be understood. In this low-energy regime, the electron mass is also not negligible in the calculation of the cross section. This is important, for example, in the DarkLight experiment (100 MeV). As a result, we have developed a new event generator for the radiative Moller and Bhabha processes, with new calculations that keep all terms of the electron mass. The MIT High Voltage Research Laboratory provides us a unique opportunity to study this process experimentally and compare it with our work, at a low beam energy of 2.5 MeV where the effects of the electron mass are significant. We are preparing a dedicated apparatus consisting of a magnetic spectrometer in order to directly measure this process. An overview of the calculation and the status of the experiment will be presented.
Liu, Bingqi; Wei, Shihui; Su, Guohua; Wang, Jiping; Lu, Jiazhen
2018-01-01
The navigation accuracy of the inertial navigation system (INS) can be greatly improved when the inertial measurement unit (IMU) is effectively calibrated and compensated, such as gyro drifts and accelerometer biases. To reduce the requirement for turntable precision in the classical calibration method, a continuous dynamic self-calibration method based on a three-axis rotating frame for the hybrid inertial navigation system is presented. First, by selecting a suitable IMU frame, the error models of accelerometers and gyros are established. Then, by taking the navigation errors during rolling as the observations, the overall twenty-one error parameters of hybrid inertial navigation system (HINS) are identified based on the calculation of the intermediate parameter. The actual experiment verifies that the method can identify all error parameters of HINS and this method has equivalent accuracy to the classical calibration on a high-precision turntable. In addition, this method is rapid, simple and feasible. PMID:29695041
The Mn-53-Cr-53 System in CAIs: An Update
NASA Technical Reports Server (NTRS)
Papanastassiou, D. A.; Wasserburg, G. J.; Bogdanovski, O.
2005-01-01
High-precision techniques have been developed for the measurement of Cr isotopes on the Triton mass spectrometer at JPL. Simultaneous ion collection with multiple Faraday cups can reduce the uncertainty of isotope ratios relative to single-cup collection, both by eliminating uncertainties from ion beam instabilities (for single-cup collection, ion beam intensities must be interpolated in time to calculate isotope ratios) and through a greatly increased data-collection duty cycle. Past efforts to measure Cr by simultaneous ion collection have not been successful. Determinations of Cr-50 through Cr-54 by simultaneous ion collection on the Finnigan MAT 262 instrument at Caltech resulted in large variations in extrinsic precision for normal Cr, of up to 1% in Cr-53/Cr-52 (data corrected for mass fractionation using Cr-50/Cr-52).
Reporting the accuracy of biochemical measurements for epidemiologic and nutrition studies.
McShane, L M; Clark, L C; Combs, G F; Turnbull, B W
1991-06-01
Procedures for reporting and monitoring the accuracy of biochemical measurements are presented. They are proposed as standard reporting procedures for laboratory assays in epidemiologic and clinical-nutrition studies. The recommended procedures require identification and estimation of all major sources of variability and explanations of the laboratory quality control procedures employed. Variance-components techniques are used to model the total variability and to calculate a maximum percent error, an easily understandable measure of laboratory precision that accounts for all sources of variability. This avoids the ambiguities encountered when reporting an SD that may take into account only a few of the potential sources of variability. Other proposed uses of the total-variability model include estimating the precision of laboratory methods under various replication schemes and developing effective quality-control checking schemes. These procedures are demonstrated with an example: the analysis of alpha-tocopherol in human plasma by high-performance liquid chromatography.
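A minimal sketch of the variance-components idea, assuming a simple additive model; the component names, numerical values, and the k = 2 coverage factor are illustrative assumptions, not the paper's definitions:

```python
import math

# Hedged sketch: independent sources of assay variability add as variances,
# and a "maximum percent error" can be quoted as k total standard deviations
# relative to the mean. Component names and values are invented.
def total_sd(components):
    """Total SD from independent variance components (given as variances)."""
    return math.sqrt(sum(components.values()))

def max_percent_error(mean, components, k=2.0):
    """k total SDs expressed as a percentage of the mean."""
    return 100.0 * k * total_sd(components) / mean

# e.g. a plasma alpha-tocopherol assay with three sources of variability
components = {"between_day": 0.25, "between_run": 0.09, "within_run": 0.16}
sd = total_sd(components)                   # sqrt(0.50)
mpe = max_percent_error(10.0, components)   # about 14% of a mean of 10
```

Reporting only the within-run component (SD = 0.4) would understate the variability a user of the assay actually experiences, which is the ambiguity the total-variability model avoids.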
ATLAS measurement of Electroweak Vector Boson production
NASA Astrophysics Data System (ADS)
Vittori, C.; Atlas Collaboration
2017-01-01
The measurements of the Drell-Yan production of W and Z/γ* bosons at the LHC provide a benchmark of our understanding of perturbative QCD and probe the proton structure in a unique way. The ATLAS collaboration has performed new high-precision measurements of the double-differential cross-sections as a function of the dilepton mass and rapidity. The measurements are compared to state-of-the-art calculations at NNLO in QCD and constrain the photon content of the proton. The angular distributions of the Drell-Yan lepton pairs around the Z-boson mass peak probe the underlying QCD dynamics of the Z-boson production mechanisms. The complete set of angular coefficients describing these distributions is presented and compared to theoretical predictions, highlighting different approaches to the QCD and EW modelling. The first precise inclusive measurements of W and Z production at 13 TeV are presented. The W/Z and W+/W- ratios profit from a cancellation of experimental uncertainties.
Brault, C; Gil, C; Boboc, A; Spuig, P
2011-04-01
On the Tore Supra tokamak, a far-infrared polarimeter diagnostic has been routinely used for diagnosing the current density by measuring the Faraday rotation angle. High measurement precision is needed to correctly reconstruct the current profile. To reach this precision, the electronics used to compute the phase and the amplitude of the detected signals must have good resilience to noise in the measurement. In this article, the analogue cards' response to the noise coming from the detectors and its impact on the Faraday angle measurements are analyzed, and we present numerical methods to calculate the phase and the amplitude. These validations have been done using real signals acquired in Tore Supra and JET experiments. These methods have been developed for real-time use in the future numerical cards that will replace the present Tore Supra analogue ones. © 2011 American Institute of Physics
ENDF/B-VII.0 Data Testing Using 1,172 Critical Assemblies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Plechaty, E F; Cullen, D E
2007-10-01
In order to test the ENDF/B-VII.0 neutron data library [1], 1,172 critical assemblies from [2] have been calculated using the Monte Carlo transport code TART [3]. TART's 'best' physics was used for all of these calculations; this included continuous-energy cross sections, delayed neutrons with their own spectrum (softer than the prompt-neutron spectrum), unresolved-resonance-region self-shielding, and thermal scattering (free-atom for all materials, plus thermal scattering law data S(α,β) when available). In this first pass through the assemblies the objective was to 'quickly' test the validity of the ENDF/B-VII.0 data [1], the assembly models as defined in [2] and coded for use with TART, and TART's physics treatment [3] of these assemblies. With TART we have the option of running criticality problems until K-eff has been calculated to a user-specified accuracy. In order to 'quickly' calculate all of these assemblies, K-eff was calculated in each case to +/- 0.002. For these calculations the assemblies were divided into ten types based on fuel (mixed, Pu239, U233, U235) and median fission energy (Fast, Midi, Slow). A table is provided that summarizes these results. This is followed by details for every assembly, and statistical information about the distribution of K-eff for each type of assembly. After a review of these results to eliminate any obvious errors in ENDF/B data, assembly models, or TART physics, all assemblies will be run again to a higher precision. Only after this second run is finished will we have highly precise results. Until then the results presented here should only be interpreted as approximate values of K-eff with a standard deviation of +/- 0.002; for such a large number of assemblies we expect the results to be approximately normal, with a spread out to several times the standard deviation; see the calculated statistical distributions and their comparisons to a normal distribution.
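The statistical summary described above can be sketched as follows, with an invented handful of K-eff values standing in for the 1,172 assemblies:

```python
import statistics

# Hedged sketch: summarize a batch of calculated K-eff values and compare
# their spread with a normal distribution, in the spirit of the benchmark
# summary described above. The eight sample values are invented.
keff = [0.998, 1.001, 1.003, 0.999, 1.002, 1.000, 0.997, 1.004]
mean = statistics.mean(keff)
sd = statistics.stdev(keff)

def fraction_within(k_sigma):
    """Fraction of assemblies within k_sigma standard deviations of the mean.
    A normal distribution gives roughly 0.68 and 0.95 for k_sigma = 1, 2."""
    return sum(abs(x - mean) <= k_sigma * sd for x in keff) / len(keff)

print(f"mean={mean:.4f}  sd={sd:.4f}  "
      f"within 1 sigma: {fraction_within(1):.2f}, 2 sigma: {fraction_within(2):.2f}")
```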
NASA Astrophysics Data System (ADS)
Yin, Zhifu; Sun, Lei; Zou, Helin; Cheng, E.
2015-05-01
A method for obtaining a low-cost, high-replication-precision two-dimensional (2D) nanofluidic device in a polymethyl methacrylate (PMMA) sheet is proposed. To improve the replication precision of the 2D PMMA nanochannels during the hot embossing process, the deformation of the PMMA sheet was analyzed by numerical simulation. The constants of the generalized Maxwell model used in the numerical simulation were calculated from experimental compressive creep curves using a previously established fitting formula. With optimized process parameters, 176 nm-wide and 180 nm-deep nanochannels were successfully replicated into the PMMA sheet with a replication precision of 98.2%. To thermally bond the 2D PMMA nanochannels with high bonding strength and low dimensional loss, the parameters of the oxygen plasma treatment and thermal bonding process were optimized. To measure the dimensional loss of the 2D nanochannels after thermal bonding, a dimensional-loss evaluation method based on nanoindentation experiments was proposed. According to this method, the total dimensional loss of the 2D nanochannels was 6 nm in width and 21 nm in depth. The tensile bonding strength of the 2D PMMA nanofluidic device was 0.57 MPa. Fluorescence images demonstrate that there was no blocking or leakage over the entire microchannels and nanochannels.
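The generalized Maxwell model mentioned above is commonly written as a Prony series; a minimal sketch, with invented PMMA constants rather than the paper's fitted values:

```python
import math

# Hedged sketch: a generalized Maxwell (Prony series) relaxation modulus,
# the standard form fitted from creep/relaxation data in viscoelastic
# hot-embossing simulations: E(t) = E_inf + sum_i E_i * exp(-t / tau_i).
# The PMMA constants below are invented placeholders, not fitted values.
def relaxation_modulus(t, e_inf, branches):
    """E(t) for Maxwell branches given as (E_i, tau_i) pairs."""
    return e_inf + sum(e_i * math.exp(-t / tau_i) for e_i, tau_i in branches)

branches = [(1200.0, 5.0), (800.0, 60.0)]      # (E_i in MPa, tau_i in s)
e0 = relaxation_modulus(0.0, 300.0, branches)  # instantaneous modulus: 2300 MPa
```

Fitting the (E_i, tau_i) pairs to measured creep curves is what turns this generic form into a material model usable in the embossing simulation.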
Sepúlveda, Nuno; Paulino, Carlos Daniel; Drakeley, Chris
2015-12-30
Several studies have highlighted the use of serological data in detecting a reduction in malaria transmission intensity. These studies have typically used serology as an adjunct measure, and no formal examination of sample size calculations for this approach has been conducted. A sample size calculator is proposed for cross-sectional surveys, using data simulation from a reverse catalytic model assuming a reduction in seroconversion rate (SCR) at a given change point before sampling. This calculator is based on logistic approximations to the underlying power curves for detecting a reduction in SCR relative to the hypothesis of a stable SCR for the same data. Sample sizes are illustrated for a hypothetical cross-sectional survey of an African population, assuming a known or unknown change point. Overall, data simulation demonstrates that power is strongly affected by whether the change point is assumed known or unknown. Small sample sizes are sufficient to detect strong reductions in SCR, but invariably lead to poor precision of estimates for the current SCR; in this situation, sample size is better determined by controlling the precision of the SCR estimates. Conversely, larger sample sizes are required for detecting more subtle reductions in malaria transmission, but these invariably increase precision whilst reducing putative estimation bias. The proposed sample size calculator, although based on data simulation, shows promise of being easily applicable to a range of populations and survey types. Since the change point is a major source of uncertainty, obtaining or assuming prior information about this parameter might reduce both the sample size and the chance of generating biased SCR estimates.
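A minimal sketch of the reverse catalytic model with a change in SCR, on which such simulations are built; the rates and change point below are illustrative assumptions:

```python
import math

# Hedged sketch of the reverse catalytic model with a drop in the
# seroconversion rate (SCR). lam1 and lam2 are the SCR before and after a
# change point t_change years before sampling; rho is the seroreversion
# rate. All parameter values are illustrative assumptions.
def seroprev_stable(age, lam, rho):
    """Seroprevalence at a given age under a constant SCR."""
    return lam / (lam + rho) * (1.0 - math.exp(-(lam + rho) * age))

def seroprev_changepoint(age, lam1, lam2, rho, t_change):
    """Seroprevalence when the SCR dropped from lam1 to lam2 t_change years ago."""
    if age <= t_change:
        # younger individuals only ever experienced the reduced rate
        return seroprev_stable(age, lam2, rho)
    # piecewise solution of dp/dt = lam - (lam + rho) * p:
    # lam1 for (age - t_change) years, then lam2 for t_change years
    p_before = seroprev_stable(age - t_change, lam1, rho)
    k, eq = lam2 + rho, lam2 / (lam2 + rho)
    return eq + (p_before - eq) * math.exp(-k * t_change)

p_young = seroprev_changepoint(5.0, 0.05, 0.02, 0.01, 10.0)
p_old = seroprev_changepoint(30.0, 0.05, 0.02, 0.01, 10.0)
```

Simulating binomial serostatus data from these age-seroprevalence curves, and testing the change-point model against the stable-SCR model, is the basis of the power and sample-size calculations described above.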
Application of Numerical Integration and Data Fusion in Unit Vector Method
NASA Astrophysics Data System (ADS)
Zhang, J.
2012-01-01
The Unit Vector Method (UVM) is a family of orbit determination methods designed by Purple Mountain Observatory (PMO) that has been applied extensively. It obtains the conditional equations for different kinds of data by projecting the basic equation onto different unit vectors, and it is well suited to weighting different kinds of data; high-precision data can then play a major role in orbit determination, and the accuracy of orbit determination is improved markedly. The improved UVM (PUVM2) extended the UVM from initial orbit determination to orbit improvement and unified the two dynamically, further improving precision and efficiency. In this thesis, further research has been done based on the UVM. First, improvements in observation methods and techniques have substantially improved the types and precision of the observational data, which in turn demands higher precision of orbit determination. Since analytical perturbation theory cannot meet this requirement, numerical integration for calculating the perturbations has been introduced into the UVM, so that the accuracy of the dynamical model matches the accuracy of the real data; the condition equations of the UVM are modified accordingly, and the accuracy of orbit determination is improved further. Second, a data fusion method has been introduced into the UVM. The convergence mechanism and the defect of the weighting strategy in the original UVM have been clarified; the new method solves this problem, simplifies the calculation of the approximate state transition matrix, and improves the weighting strategy for data of different dimensions and different precision. Results of orbit determination with simulated and real data show that the work of this thesis is effective: (1) After numerical integration is introduced into the UVM, the accuracy of orbit determination is improved markedly, and the method suits the high-accuracy data of available observation apparatus; compared with classical differential improvement with numerical integration, its calculation speed is also improved markedly. (2) After the data fusion method is introduced into the UVM, the weighting is distributed rationally according to the accuracy of the different kinds of data, all data are fully used, and the new method also exhibits good numerical stability.
Practical End-to-End Performance Testing Tool for High Speed 3G-Based Networks
NASA Astrophysics Data System (ADS)
Shinbo, Hiroyuki; Tagami, Atsushi; Ano, Shigehiro; Hasegawa, Toru; Suzuki, Kenji
High-speed IP communication is a killer application for 3rd generation (3G) mobile systems. Thus 3G network operators should perform extensive tests to check whether the expected end-to-end performance is provided to customers under various environments. An important objective of such tests is to check whether network nodes fulfill requirements on packet-processing durations, because long processing durations cause performance degradation. This requires testers (the persons who conduct the tests) to know precisely how long a packet is held by various network nodes. Without any tool's help, this task is time-consuming and error-prone. Thus we propose a multi-point packet header analysis tool which extracts and records packet headers with synchronized timestamps at multiple observation points. Such recorded packet headers enable testers to calculate these holding durations. The notable feature of this tool is that it is implemented on off-the-shelf hardware platforms, i.e., laptop personal computers. The key challenges of the implementation are precise clock synchronization without any special hardware and a sophisticated header extraction algorithm without any packet drops.
Low optical-loss facet preparation for silica-on-silicon photonics using the ductile dicing regime
NASA Astrophysics Data System (ADS)
Carpenter, Lewis G.; Rogers, Helen L.; Cooper, Peter A.; Holmes, Christopher; Gates, James C.; Smith, Peter G. R.
2013-11-01
The efficient production of high-quality facets for low-loss coupling is a significant production issue in integrated optics, usually requiring time-consuming and manually intensive lapping and polishing steps, which add considerably to device fabrication costs. The development of precision dicing saws with diamond-impregnated blades has allowed optical-grade surfaces to be machined in crystalline materials such as lithium niobate and garnets. In this report we investigate the optimization of dicing machine parameters to obtain optical-quality surfaces in a silica-on-silicon planar device, demonstrating high optical quality in a commercially important glassy material. We achieve a surface roughness of 4.9 nm (Sa) using the optimized dicing conditions. By machining a groove across a waveguide using the optimized dicing parameters, a grating-based loss measurement technique is used to precisely measure the average free-space interface loss per facet caused by scattering due to surface roughness. The average interface loss per facet was calculated to be -0.63 dB and -0.76 dB for the TE and TM polarizations, respectively.
New neutrino physics and the altered shapes of solar neutrino spectra
NASA Astrophysics Data System (ADS)
Lopes, Ilídio
2017-01-01
Neutrinos coming from the Sun's core have been measured with high precision, and the fundamental neutrino oscillation parameters have been determined with good accuracy. In this work, we estimate the impact that a new neutrino physics model, the so-called generalized Mikheyev-Smirnov-Wolfenstein (MSW) oscillation mechanism, has on the shapes of some of the leading solar neutrino spectra, some of which will be partially tested by the next generation of solar neutrino experiments. In these calculations, we use a high-precision standard solar model in good agreement with helioseismology data. We found that the neutrino spectra of the different solar nuclear reactions of the pp chains and carbon-nitrogen-oxygen cycle have quite distinct sensitivities to the new neutrino physics. The hep and 8B neutrino spectra are the ones whose shapes are most affected when neutrinos interact with quarks in addition to electrons. The shapes of the 15O and 17F neutrino spectra are also modified, although in these cases the impact is much smaller. Finally, the impact on the shapes of the pp and 13N neutrino spectra is practically negligible.
High-frequency CAD-based scattering model: SERMAT
NASA Astrophysics Data System (ADS)
Goupil, D.; Boutillier, M.
1991-09-01
Specifications for an industrial radar cross section (RCS) calculation code are given: it must be able to exchange data with many computer-aided design (CAD) systems, it must be fast, and it must have powerful graphic tools. Classical physical optics (PO) and equivalent currents (EC) techniques have long proven their efficiency on simple objects. Difficult geometric problems occur when objects with very complex shapes have to be computed; only a specific geometric code can solve these problems. We have established that, once these problems have been solved: (1) PO and EC give good results on complex objects that are large compared to the wavelength; and (2) the implementation of these techniques in a software package (SERMAT) allows fast and sufficiently precise RCS calculations to meet industry requirements in the domain of stealth.
Effect of Correlated Precision Errors on Uncertainty of a Subsonic Venturi Calibration
NASA Technical Reports Server (NTRS)
Hudson, S. T.; Bordelon, W. J., Jr.; Coleman, H. W.
1996-01-01
An uncertainty analysis performed in conjunction with the calibration of a subsonic venturi for use in a turbine test facility produced some unanticipated results that may have a significant impact in a variety of test situations. Precision uncertainty estimates using the preferred propagation techniques in the applicable American National Standards Institute/American Society of Mechanical Engineers standards were an order of magnitude larger than precision uncertainty estimates calculated directly from a sample of results (discharge coefficient) obtained at the same experimental set point. The differences were attributable to the effect of correlated precision errors, which previously have been considered negligible. An analysis explaining this phenomenon is presented. The article is not meant to document the venturi calibration, but rather to give a real example of results where correlated precision terms are important. The significance of the correlated precision terms could apply to many test situations.
NASA Astrophysics Data System (ADS)
Redshaw, Matthew
This dissertation describes high-precision measurements of atomic masses, made by measuring the cyclotron frequencies of ions trapped singly, or in pairs, in a precision, cryogenic Penning trap. By building on techniques developed at MIT for measuring the cyclotron frequency of single trapped ions, the atomic masses of 84,86Kr and 129,132,136Xe have been measured to less than a part in 10^10 fractional precision. By developing a new technique for measuring the cyclotron frequency ratio of a pair of simultaneously trapped ions, the atomic masses of 28Si, 31P and 32S have been measured to 2 or 3 parts in 10^11. This new technique has also been used to measure the dipole moment of PH+. During the course of these measurements, two significant, but previously unsuspected, sources of systematic error were discovered, characterized and eliminated. Extensive tests for other sources of systematic error were performed and are described in detail. The mass measurements presented here provide a significant increase in precision over previous values for these masses, by factors of 3 to 700. The results have a broad range of physics applications: the mass of 136Xe is important for searches for neutrinoless double-beta decay; the mass of 28Si is relevant to the redefinition of the artifact kilogram in terms of an atomic mass standard; the masses of 84,86Kr and 129,132,136Xe provide convenient reference masses for less precise mass spectrometers in diverse fields such as nuclear physics and chemistry; and the dipole moment of PH+ provides a test of molecular structure calculations.
Jones, David T; Kandathil, Shaun M
2018-04-26
In addition to substitution frequency data from protein sequence alignments, many state-of-the-art methods for contact prediction rely on additional sources of information, or features, of protein sequences in order to predict residue-residue contacts, such as solvent accessibility, predicted secondary structure, and scores from other contact prediction methods. It is unclear how much of this information is needed to achieve state-of-the-art results. Here, we show that using deep neural network models, simple alignment statistics contain sufficient information to achieve state-of-the-art precision. Our prediction method, DeepCov, uses fully convolutional neural networks operating on amino-acid pair frequency or covariance data derived directly from sequence alignments, without using global statistical methods such as sparse inverse covariance or pseudolikelihood estimation. Comparisons against CCMpred and MetaPSICOV2 show that using pairwise covariance data calculated from raw alignments as input allows us to match or exceed the performance of both of these methods. Almost all of the achieved precision is obtained when considering relatively local windows (around 15 residues) around any member of a given residue pairing; larger window sizes have comparable performance. Assessment on a set of shallow sequence alignments (fewer than 160 effective sequences) indicates that the new method is substantially more precise than CCMpred and MetaPSICOV2 in this regime, suggesting that improved precision is attainable on smaller sequence families. Overall, the performance of DeepCov is competitive with the state of the art, and our results demonstrate that global models, which employ features from all parts of the input alignment when predicting individual contacts, are not strictly needed in order to attain precise contact predictions. DeepCov is freely available at https://github.com/psipred/DeepCov.
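The covariance statistic that DeepCov-style inputs are computed from can be sketched for a single pair of alignment columns; the toy alignment below is invented, and the real method uses a 21-letter alphabet over all column pairs:

```python
from collections import Counter
from itertools import product

# Hedged toy sketch of the covariance statistic: for alignment columns i
# and j, c_ij(a, b) = f_ij(a, b) - f_i(a) * f_j(b), where the f's are
# observed residue frequencies. DeepCov itself computes this over a
# 21-letter alphabet for all column pairs and feeds the resulting tensor
# to a fully convolutional network; this sketch covers one column pair.
def column_covariance(msa, i, j):
    n = len(msa)
    fi = Counter(seq[i] for seq in msa)             # single-column counts
    fj = Counter(seq[j] for seq in msa)
    fij = Counter((seq[i], seq[j]) for seq in msa)  # pair counts
    return {(a, b): fij[(a, b)] / n - (fi[a] / n) * (fj[b] / n)
            for a, b in product(fi, fj)}

msa = ["AC", "AC", "AD", "GD"]  # tiny invented alignment
cov = column_covariance(msa, 0, 1)
```

Positive entries (here, A paired with C, and G with D) flag letter combinations that co-occur more often than their individual frequencies would predict, which is the raw signal of co-evolving positions.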
A high-speed tracking algorithm for dense granular media
NASA Astrophysics Data System (ADS)
Cerda, Mauricio; Navarro, Cristóbal A.; Silva, Juan; Waitukaitis, Scott R.; Mujica, Nicolás; Hitschfeld, Nancy
2018-06-01
Many fields of study, including medical imaging, granular physics, colloidal physics, and active matter, require the precise identification and tracking of particle-like objects in images. While many algorithms exist to track particles in diffuse conditions, these often perform poorly when particles are densely packed together, as in, for example, solid-like systems of granular materials. Incorrect particle identification can have significant effects on the calculation of physical quantities, which makes the development of more precise and faster tracking algorithms a worthwhile endeavor. In this work, we present a new tracking algorithm to identify particles in dense systems that is both highly accurate and fast. We demonstrate the efficacy of our approach by analyzing images of dense, solid-state granular media, where we achieve an identification error of 5% in the worst evaluated cases. Going further, we propose a parallelization strategy for our algorithm using a GPU, which results in a speedup of up to 10x when compared to a sequential CPU implementation in C and up to 40x when compared to the reference MATLAB library widely used for particle tracking. Our results extend the capabilities of state-of-the-art particle tracking methods by allowing fast, high-fidelity detection in dense media at high resolutions.
Zhou, Tao; Zhao, Motian; Wang, Jun; Lu, Hai
2008-01-01
Two enriched isotopes, 99.94 at.% 56Fe and 99.90 at.% 54Fe, were blended under gravimetric control to prepare ten synthetic isotope samples whose 56Fe isotope abundances ranged from 95% to 20%. For multiple-collector inductively coupled plasma mass spectrometry (MC-ICP-MS) measurements typical polyatomic interferences were removed by using Ar and H2 as collision gas and operating the MC-ICP-MS system in soft mode. Thus high-precision measurements of the Fe isotope abundance ratios were accomplished. Based on the measurement of the synthetic isotope abundance ratios by MC-ICP-MS, the correction factor for mass discrimination was calculated and the results were in agreement with results from IRMM014. The precision of all ten correction factors was 0.044%, indicating a good linearity of the MC-ICP-MS method for different isotope abundance ratio values. An isotopic reference material was certified under the same conditions as the instrument was calibrated. The uncertainties of ten correction factors K were calculated and the final extended uncertainties of the isotopic certified Fe reference material were 5.8363(37) at.% 54Fe, 91.7621(51) at.% 56Fe, 2.1219(23) at.% 57Fe, and 0.2797(32) at.% 58Fe.
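A minimal sketch of how a mass-discrimination correction factor is defined and applied; the convention R_true = K * R_measured and all numerical values are illustrative assumptions, not the paper's data:

```python
# Hedged sketch: applying a mass-discrimination (mass-bias) correction
# factor K in isotope-ratio mass spectrometry. The convention assumed here
# is R_true = K * R_measured, with K determined from a gravimetrically
# prepared synthetic mixture of known ratio and then applied to unknowns.
# All numerical values are invented, not the paper's data.
def correction_factor(r_true, r_measured):
    """K from a synthetic mixture whose true ratio is known gravimetrically."""
    return r_true / r_measured

def correct_ratio(r_measured, k):
    """Apply K to a measured ratio from an unknown sample."""
    return k * r_measured

k = correction_factor(r_true=15.698, r_measured=15.520)  # e.g. a 56Fe/54Fe ratio
corrected = correct_ratio(15.610, k)
```

Preparing several synthetic mixtures spanning a wide range of abundances, as in the study above, checks that K is constant, i.e. that the instrument response is linear across isotope-ratio values.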
Automated brain volumetrics in multiple sclerosis: a step closer to clinical application
Beadnall, H N; Hatton, S N; Bader, G; Tomic, D; Silva, D G
2016-01-01
Background: Whole brain volume (WBV) estimates in patients with multiple sclerosis (MS) correlate more robustly with clinical disability than traditional, lesion-based metrics. Numerous algorithms to measure WBV have been developed over the past two decades. We compare Structural Image Evaluation using Normalisation of Atrophy-Cross-sectional (SIENAX) to NeuroQuant and MSmetrix for assessment of cross-sectional WBV in patients with MS. Methods: MRIs from 61 patients with relapsing-remitting MS and 2 patients with clinically isolated syndrome were analysed. WBV measurements were calculated using SIENAX, NeuroQuant and MSmetrix. Statistical agreement between the methods was evaluated using linear regression and Bland-Altman plots. Precision and accuracy of WBV measurement were calculated for (1) NeuroQuant versus SIENAX and (2) MSmetrix versus SIENAX. Results: Precision (Pearson's r) of WBV estimation for NeuroQuant and MSmetrix versus SIENAX was 0.983 and 0.992, respectively. Accuracy (Cb) was 0.871 and 0.994, respectively. NeuroQuant and MSmetrix showed a 5.5% and 1.0% volume difference compared with SIENAX, respectively, that was consistent across low and high values. Conclusions: In the analysed population, NeuroQuant and MSmetrix both quantified cross-sectional WBV with comparable statistical agreement to SIENAX, a well-validated cross-sectional tool that has been used extensively in MS clinical studies. PMID:27071647
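The precision/accuracy terminology above matches the decomposition of Lin's concordance correlation coefficient, CCC = r * C_b; a minimal sketch with invented brain-volume values:

```python
import math
import statistics

# Hedged sketch: Lin's concordance correlation coefficient decomposes
# method agreement into precision (Pearson's r) and accuracy (the bias
# correction factor C_b), which appears to be the convention used above.
# The paired volume values below are invented, not study data.
def precision_accuracy(x, y):
    mx, my = statistics.mean(x), statistics.mean(y)
    sx2, sy2 = statistics.pvariance(x), statistics.pvariance(y)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y)) / len(x)
    r = sxy / math.sqrt(sx2 * sy2)                      # precision
    cb = 2 * math.sqrt(sx2 * sy2) / (sx2 + sy2 + (mx - my) ** 2)  # accuracy
    return r, cb  # CCC = r * cb

x = [1500, 1420, 1610, 1380, 1550]  # WBV by method A (mL, invented)
y = [1490, 1410, 1600, 1375, 1545]  # WBV by method B (mL, invented)
r, cb = precision_accuracy(x, y)
```

A high r with a lower C_b (as for NeuroQuant above) indicates a method that tracks the reference closely but with a systematic offset or scale difference.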
NASA Astrophysics Data System (ADS)
Kim, Jung Kyung; Prasad, Bibin; Kim, Suzy
2017-02-01
To evaluate the synergistic effect of radiotherapy and radiofrequency hyperthermia therapy in the treatment of lung and liver cancers, we studied the mechanism of heat absorption and transfer in the tumor using electro-thermal simulation and high-resolution temperature mapping techniques. A realistic tumor-induced mouse anatomy, which was reconstructed and segmented from computed tomography images, was used to determine the thermal distribution in tumors during radiofrequency (RF) heating at 13.56 MHz. An RF electrode was used as a heat source, and computations were performed with the aid of the multiphysics simulation platform Sim4Life. Experiments were carried out on a tumor-mimicking agar phantom and a mouse tumor model to obtain a spatiotemporal temperature map and thermal dose distribution. A high temperature increase was achieved in the tumor from both the computation and measurement, which elucidated that there was selective high-energy absorption in tumor tissue compared to the normal surrounding tissues. The study allows for effective treatment planning for combined radiation and hyperthermia therapy based on the high-resolution temperature mapping and high-precision thermal dose calculation.
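In the short-time limit, before heat conduction and blood perfusion matter, the link between absorbed RF power and temperature rise can be sketched as dT = SAR * t / c; the SAR value, exposure time, and heat capacity below are illustrative assumptions, not values from the study:

```python
# Hedged sketch: in the adiabatic short-time limit, the temperature rise
# from RF energy absorption is roughly dT = SAR * t / c, where SAR is the
# specific absorption rate (W/kg) and c the tissue specific heat capacity
# (J/(kg*K)). Full treatment-planning simulations (e.g. in Sim4Life) solve
# the bioheat equation instead; all values here are illustrative.
def temp_rise(sar_w_per_kg, seconds, c_j_per_kg_k=3600.0):
    """Adiabatic temperature rise in kelvin."""
    return sar_w_per_kg * seconds / c_j_per_kg_k

dt = temp_rise(50.0, 360.0)  # 50 W/kg for 6 minutes -> 5.0 K
```

Selective energy absorption in tumor tissue, as reported above, corresponds to a higher local SAR and hence a larger dT than in the surrounding normal tissue.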
Improved CVD Techniques for Depositing Passivation Layers of ICs
1975-10-01
Accuracy of Digital Impressions and Fitness of Single Crowns Based on Digital Impressions
Yang, Xin; Lv, Pin; Liu, Yihong; Si, Wenjie; Feng, Hailan
2015-01-01
In this study, the accuracy (precision and trueness) of digital impressions and the fitness of single crowns manufactured based on digital impressions were evaluated. #14-17 epoxy resin dentitions were made, while full-crown preparations of extracted natural teeth were embedded at #16. (1) To assess precision, deviations among repeated scan models made by intraoral scanner TRIOS and MHT and model scanner D700 and inEos were calculated through best-fit algorithm and three-dimensional (3D) comparison. Root mean square (RMS) and color-coded difference images were offered. (2) To assess trueness, micro computed tomography (micro-CT) was used to get the reference model (REF). Deviations between REF and repeated scan models (from (1)) were calculated. (3) To assess fitness, single crowns were manufactured based on TRIOS, MHT, D700 and inEos scan models. The adhesive gaps were evaluated under stereomicroscope after cross-sectioned. Digital impressions showed lower precision and better trueness. Except for MHT, the means of RMS for precision were lower than 10 μm. Digital impressions showed better internal fitness. Fitness of single crowns based on digital impressions was up to clinical standard. Digital impressions could be an alternative method for single crowns manufacturing. PMID:28793417
Dornacher, Daniel; Trubrich, Angela; Guelke, Joachim; Reichel, Heiko; Kappe, Thomas
2017-08-01
Regarding the tibial tuberosity-trochlear groove (TT-TG) distance in knee realignment surgery, two aspects have to be considered: first, there might be flaws in using absolute values for TT-TG, which ignore the knee size of the individual; second, in high-grade trochlear dysplasia with a dome-shaped trochlea, measurement of TT-TG has proven to lack precision and reliability. The purpose of this examination was to establish a knee rotation angle, independent of the size of the individual knee and unaffected by a dysplastic trochlea. A total of 114 consecutive MRI scans of knee joints were analysed retrospectively by two observers. Of these, 59 were obtained from patients with trochlear dysplasia, and another 55 from patients presenting with a different pathology of the knee joint. Trochlear dysplasia was classified into low grade and high grade. TT-TG was measured according to the method described by Schoettle et al. In addition, a modified knee rotation angle was assessed. The interobserver reliability of the knee rotation angle and its correlation with TT-TG were calculated. The knee rotation angle showed good correlation with TT-TG in the readings of observer 1 and observer 2. Interobserver correlation of the parameter showed excellent values for the scans with normal trochlea, low-grade and high-grade trochlear dysplasia, respectively. All calculations were statistically significant (p < 0.05). The knee rotation angle might meet the requirements for precise diagnostics in knee realignment surgery. Unlike TT-TG, this parameter seems not to be affected by a dysplastic trochlea. In addition, the parameter is independent of the knee size of the individual. Level of evidence: II.
Antisymmetric Amino-Wagging Band of Hydrazine up to K′ = 13 Levels
NASA Astrophysics Data System (ADS)
Gulaczyk, Iwona; Kre, Marek; Valentin, Alain
1997-12-01
A newly recorded high-resolution infrared spectrum of hydrazine has been studied in the 729-1198 cm-1 region (the ν12 antisymmetric wagging band) with a resolution of 0.002 cm-1. About 1350 transitions with K′ from 7 to 13 have been newly assigned, and about 2350 transitions with lower values of K′ have been reanalyzed with improved precision. The effective parameters have been calculated separately for each value of K′ using the Hougen-Ohashi Hamiltonian for hydrazine. The extended assignment completes the analysis of the ν12 band of hydrazine.
Fermi surface measurements in YBa2Cu3O(7-x) and La(1.874)Sr(0.126)CuO4
NASA Astrophysics Data System (ADS)
Howell, R. H.; Sterne, P. A.; Solal, F.; Fluss, M. J.; Haghighi, H.; Kaiser, J. H.; Rayner, S. L.; West, R. N.; Liu, J. Z.; Shelton, R.
1991-06-01
We report new, ultra-high-precision measurements of the electron-positron momentum spectra of YBa2Cu3O(7-x) and La(1.874)Sr(0.126)CuO4. The YBCO experiments were performed on twin-free single crystals and show discontinuities with the symmetry of the Fermi surface of the CuO chain bands. Conduction band and underlying features in LSCO share the same symmetry and can only be separated with the aid of LDA calculations.
Fermi surface measurements in YBa2Cu3O7-x and La1.874Sr0.126CuO4
NASA Astrophysics Data System (ADS)
Howell, R. H.; Sterne, P. A.; Solal, F.; Fluss, M. J.; Haghight, H.; Kaiser, J. H.; Rayner, S. L.; West, R. N.; Liu, J. Z.; Shelton, R.; Kojima, H.; Kitazawa, K.
1991-12-01
We report new, ultra-high-precision measurements of the electron-positron momentum spectra of YBa2Cu3O7-x and La1.874Sr0.126CuO4. The YBCO experiments were performed on twin-free single crystals and show discontinuities with the symmetry of the Fermi surface of the CuO chain bands. Conduction band and underlying features in LSCO share the same symmetry and can only be separated with the aid of LDA calculations.
NASA Technical Reports Server (NTRS)
Garcia-Espada, Susana; Haas, Rudiger; Colomer, Francisco
2010-01-01
An important limitation for the precision of results obtained by space geodetic techniques like VLBI and GPS is the tropospheric delay caused by the neutral atmosphere, see e.g. [1]. In recent years numerical weather models (NWM) have been applied to improve the mapping functions that are used for tropospheric delay modeling in VLBI and GPS data analyses. In this manuscript we use raytracing to calculate slant delays and apply these to the analysis of European VLBI data. The raytracing is performed through the limited-area numerical weather prediction (NWP) model HIRLAM. The advantages of this model are its high spatial resolution (0.2 deg. x 0.2 deg.) and high temporal resolution (three hours in prediction mode).
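The mapping-function approach that the raytraced slant delays refine can be sketched in a few lines. The code below is a minimal illustration, not the HIRLAM raytracing itself: it maps an assumed zenith delay to a slant delay using the simplest flat-atmosphere mapping function 1/sin(elevation); the 2.4 m zenith delay is a typical assumed value, and real analyses use Niell/VMF-style mapping functions or direct raytracing.

```python
import numpy as np

def slant_delay(zenith_delay_m, elevation_deg):
    """Map a zenith tropospheric delay to an arbitrary elevation.

    Uses the simplest flat-atmosphere mapping function 1/sin(el);
    real analyses use more refined mapping functions or, as in the
    text, direct raytracing through an NWP field."""
    el = np.radians(elevation_deg)
    return zenith_delay_m / np.sin(el)

# A typical total zenith delay is ~2.4 m at sea level (assumed value).
print(slant_delay(2.4, 90.0))  # zenith: the delay itself
print(slant_delay(2.4, 30.0))  # at 30 deg elevation the delay doubles
```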
Precision thermometry and the quantum speed limit
NASA Astrophysics Data System (ADS)
Campbell, Steve; Genoni, Marco G.; Deffner, Sebastian
2018-04-01
We assess precision thermometry for an arbitrary single quantum system. For a d-dimensional harmonic system we show that the gap sets a single temperature that can be optimally estimated. Furthermore, we establish a simple linear relationship between the gap and this temperature, and show that the precision exhibits a quadratic relationship. We extend our analysis to explore systems with arbitrary spectra, showing that exploiting anharmonicity and degeneracy can greatly enhance the precision of thermometry. Finally, we critically assess the dynamical features of two thermometry protocols for a two-level system. By calculating the quantum speed limit we find that, despite the gap fixing a preferred temperature to probe, there is no evidence of this emerging in the dynamical features.
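For the simplest case, a two-level system, the temperature dependence enters only through the thermal population and the Fisher information has a closed form. The sketch below is my illustration (not the paper's d-dimensional derivation; units with k_B = 1): it locates the optimal probe temperature numerically and shows that it scales linearly with the gap.

```python
import numpy as np

def fisher_info_temperature(gap, T):
    """Fisher information for temperature estimation from the thermal
    population of a two-level system with energy gap `gap` (k_B = 1).

    p = 1/(1+exp(gap/T)) is the excited-state population; for a state
    diagonal in the energy basis the quantum Fisher information reduces
    to the classical one, F_T = (dp/dT)^2 / (p(1-p)) = p(1-p) gap^2/T^4."""
    p = 1.0 / (1.0 + np.exp(gap / T))
    return p * (1.0 - p) * gap**2 / T**4

def optimal_temperature(gap):
    """Grid search for the temperature maximizing F_T."""
    T = np.linspace(0.01 * gap, 5.0 * gap, 20000)
    return T[np.argmax(fisher_info_temperature(gap, T))]

# Doubling the gap doubles the optimal probe temperature, illustrating
# the linear gap-temperature relationship described in the text.
print(optimal_temperature(2.0) / optimal_temperature(1.0))  # ~2
```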
Magnetic resonance imaging for precise radiotherapy of small laboratory animals.
Frenzel, Thorsten; Kaul, Michael Gerhard; Ernst, Thomas Michael; Salamon, Johannes; Jäckel, Maria; Schumacher, Udo; Krüll, Andreas
2017-03-01
Radiotherapy of small laboratory animals (SLA) is often not as precisely applied as in humans. Here we describe the use of a dedicated SLA magnetic resonance imaging (MRI) scanner for precise tumor volumetry, radiotherapy treatment planning, and diagnostic imaging in order to make the experiments more accurate. Different human cancer cells were injected at the lower trunk of pfp/rag2 and SCID mice to allow for local tumor growth. Data from cross sectional MRI scans were transferred to a clinical treatment planning system (TPS) for humans. Manual palpation of the tumor size was compared with calculated tumor size of the TPS and with tumor weight at necropsy. As a feasibility study MRI based treatment plans were calculated for a clinical 6MV linear accelerator using a micro multileaf collimator (μMLC). In addition, diagnostic MRI scans were used to investigate animals which did clinical poorly during the study. MRI is superior in precise tumor volume definition whereas manual palpation underestimates their size. Cross sectional MRI allow for treatment planning so that conformal irradiation of mice with a clinical linear accelerator using a μMLC is in principle feasible. Several internal pathologies were detected during the experiment using the dedicated scanner. MRI is a key technology for precise radiotherapy of SLA. The scanning protocols provided are suited for tumor volumetry, treatment planning, and diagnostic imaging. Copyright © 2016. Published by Elsevier GmbH.
NASA Astrophysics Data System (ADS)
Zhou, Chenyi; Guo, Hong
2017-01-01
We report a diagrammatic method to solve the general problem of calculating configurationally averaged Green's function correlators that appear in quantum transport theory for nanostructures containing disorder. The theory treats both equilibrium and nonequilibrium quantum statistics on an equal footing. Since random impurity scattering is a problem that cannot be solved exactly in a perturbative approach, we combine our diagrammatic method with the coherent potential approximation (CPA) so that a reliable closed-form solution can be obtained. Our theory not only ensures the internal consistency of the diagrams derived at different levels of the correlators but also satisfies a set of Ward-like identities that corroborate the conserving consistency of transport calculations within the formalism. The theory is applied to calculate the quantum transport properties such as average ac conductance and transmission moments of a disordered tight-binding model, and results are numerically verified to high precision by comparing to the exact solutions obtained from enumerating all possible disorder configurations. Our formalism can be employed to predict transport properties of a wide variety of physical systems where disorder scattering is important.
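The exact enumeration used to verify the theory can be illustrated on a toy model. The sketch below is an assumption-laden miniature, not the paper's CPA formalism: it averages the retarded Green's function of a short 1D tight-binding chain over all binary on-site disorder configurations.

```python
import itertools
import numpy as np

def average_green_function(n_sites=6, t=1.0, w=0.5, E=0.0, eta=0.05):
    """Configurationally averaged retarded Green's function of a 1D
    tight-binding chain with binary on-site disorder (+/- w, equal
    probability), computed by exact enumeration of all 2^n configs."""
    G_avg = np.zeros((n_sites, n_sites), dtype=complex)
    hop = -t * (np.eye(n_sites, k=1) + np.eye(n_sites, k=-1))
    for signs in itertools.product([+1.0, -1.0], repeat=n_sites):
        H = hop + np.diag(w * np.array(signs))
        G_avg += np.linalg.inv((E + 1j * eta) * np.eye(n_sites) - H)
    return G_avg / 2**n_sites

# Sanity checks: with w = 0 every configuration is the clean chain, so
# the average must equal the clean Green's function, and the local
# density of states -Im G_ii / pi must be positive.
G = average_green_function()
print(np.all(-G.diagonal().imag > 0))
```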
Miao, Yipu; Merz, Kenneth M
2015-04-14
We present an efficient implementation of ab initio self-consistent field (SCF) energy and gradient calculations that run on Compute Unified Device Architecture (CUDA) enabled graphical processing units (GPUs) using recurrence relations. We first discuss the machine-generated code that calculates the electron-repulsion integrals (ERIs) for different ERI types. Next we describe the porting of the SCF gradient calculation to GPUs, which results in an acceleration of the computation of the first-order derivatives of the ERIs. However, only s, p, and d ERIs and s and p derivatives could be executed simultaneously on GPUs using the current version of CUDA and generation of NVidia GPUs with a previously described algorithm [Miao and Merz J. Chem. Theory Comput. 2013, 9, 965-976]. Hence, we developed an algorithm to compute f type ERIs and d type ERI derivatives on GPUs. Our benchmarks show that GPU-enabled ERI and ERI derivative computation yielded speedups of 10-18 times relative to traditional CPU execution. An accuracy analysis using double-precision calculations demonstrates that the overall accuracy is satisfactory for most applications.
Sepúlveda, Nuno; Drakeley, Chris
2015-04-03
In the last decade, several epidemiological studies have demonstrated the potential of using seroprevalence (SP) and the seroconversion rate (SCR) as informative indicators of malaria burden in low-transmission settings or in populations on the cusp of elimination. However, most studies are designed to control the ensuing statistical inference on parasite rates rather than on these alternative malaria burden measures. SP is in essence a proportion and, thus, many methods exist for the respective sample size determination. In contrast, designing a study where SCR is the primary endpoint is not an easy task, because precision and statistical power are affected by the age distribution of a given population. Two sample size calculators for SCR estimation are proposed. The first one consists of transforming the confidence interval for SP into the corresponding one for SCR given a known seroreversion rate (SRR). The second calculator extends the previous one to the most common situation, where SRR is unknown. In this situation, data simulation was used together with linear regression in order to study the expected relationship between sample size and precision. The performance of the first sample size calculator was studied in terms of the coverage of the confidence intervals for SCR. The results pointed to possible under- or over-coverage for sample sizes ≤250 in very low and very high malaria transmission settings (SCR ≤ 0.0036 and SCR ≥ 0.29, respectively). The correct coverage was obtained for the remaining transmission intensities with sample sizes ≥ 50. Sample size determination was then carried out for cross-sectional surveys using realistic SCRs from past sero-epidemiological studies and typical age distributions from African and non-African populations. For SCR < 0.058, African studies require a larger sample size than their non-African counterparts in order to obtain the same precision. The opposite happens for the remaining transmission intensities.
With respect to the second sample size calculator, simulation revealed the likelihood of not having enough information to estimate SRR in low-transmission settings (SCR ≤ 0.0108). In that case, the respective estimates tend to underestimate the true SCR. This problem is minimized by sample sizes of no less than 500 individuals. The sample sizes determined by this second method confirmed the prior expectation that, when SRR is not known, sample sizes increase relative to the situation of a known SRR. In contrast to the first sample size calculation, African studies would now require fewer individuals than their counterparts conducted elsewhere, irrespective of the transmission intensity. Although the proposed sample size calculators can be instrumental in designing future cross-sectional surveys, the choice of a particular sample size must be seen as a much broader exercise that involves weighing statistical precision against ethical issues, available human and economic resources, and possible time constraints. Moreover, if the sample size determination is carried out over varying transmission intensities, as done here, the respective sample sizes can also be used in studies comparing sites with different malaria transmission intensities. In conclusion, the proposed sample size calculators are a step towards the design of better sero-epidemiological studies. Their basic ideas show promise for application to the planning of alternative sampling schemes that may target or oversample specific age groups.
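The first calculator's idea, sizing the study for a target seroprevalence precision and then translating that precision to the SCR scale, can be sketched as follows. This is a simplified illustration assuming the catalytic model without seroreversion, SP(a) = 1 − exp(−λa), evaluated at a single mean age; the paper's calculators handle full age distributions and unknown SRR.

```python
import math

def sample_size_for_proportion(p, half_width, z=1.96):
    """Smallest n giving a normal-approximation CI of +/- half_width for p."""
    return math.ceil(z**2 * p * (1.0 - p) / half_width**2)

def scr_from_sp(sp, mean_age):
    """Invert SP(a) = 1 - exp(-lambda a) at the mean age (no seroreversion)."""
    return -math.log(1.0 - sp) / mean_age

def scr_precision(sp, half_width, mean_age):
    """Delta method: se(lambda) ~= se(SP) / (a (1 - SP))."""
    return half_width / (mean_age * (1.0 - sp))

n = sample_size_for_proportion(0.5, 0.05)  # classic worst case: 385
lam = scr_from_sp(0.5, mean_age=20.0)      # SCR implied by SP = 50% at age 20
print(n, lam, scr_precision(0.5, 0.05, 20.0))
```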
Research: Comparison of the Accuracy of a Pocket versus Standard Pulse Oximeter.
da Costa, João Cordeiro; Faustino, Paula; Lima, Ricardo; Ladeira, Inês; Guimarães, Miguel
2016-01-01
Pulse oximetry has become an essential tool in clinical practice. With patient self-management becoming more prevalent, pulse oximetry self-monitoring has the potential to become common practice in the near future. This study sought to compare the accuracy of two pulse oximeters, a high-quality standard pulse oximeter and an inexpensive pocket pulse oximeter, and to compare both devices with arterial blood co-oximetry oxygen saturation. A total of 95 patients (35.8% women; mean [±SD] age 63.1 ± 13.9 years; mean arterial pressure 92 ± 12.0 mmHg; mean axillary temperature 36.3 ± 0.4°C) presenting to our hospital for blood gas analysis were evaluated. The Bland-Altman technique was used to calculate bias and precision, as well as agreement limits, and Student's t test was performed. The standard oximeter presented a bias of 1.84% and a precision error of 1.80%. The pocket oximeter presented a bias of 1.85% and a precision error of 2.21%. Agreement limits were -1.69% to 5.37% (standard oximeter) and -2.48% to 6.18% (pocket oximeter). Both oximeters presented bias, which was expected given previous research. The pocket oximeter was less precise but had agreement limits that were comparable with current evidence. Pocket oximeters can be powerful allies in clinical monitoring of patients based on a self-monitoring/efficacy strategy.
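The Bland-Altman quantities reported above (bias, precision as the SD of the device-reference differences, and 95% limits of agreement) can be computed as follows; the SpO2 values in the example are hypothetical.

```python
import numpy as np

def bland_altman(device, reference):
    """Bias, precision (SD of differences), and 95% limits of agreement."""
    diffs = np.asarray(device) - np.asarray(reference)
    bias = diffs.mean()
    sd = diffs.std(ddof=1)
    return bias, sd, (bias - 1.96 * sd, bias + 1.96 * sd)

# Hypothetical SpO2 readings (%) against co-oximetry values.
oximeter = np.array([96.0, 94.5, 97.0, 93.0, 95.5])
co_oximetry = np.array([94.0, 93.0, 95.0, 91.5, 93.5])
bias, sd, loa = bland_altman(oximeter, co_oximetry)
print(bias, sd, loa)
```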
Evaluation of jamming efficiency for the protection of a single ground object
NASA Astrophysics Data System (ADS)
Matuszewski, Jan
2018-04-01
Electronic countermeasures (ECM) include methods to completely prevent or restrict the effective use of the electromagnetic spectrum by the opponent. The most widespread means of disorganizing the operation of electronic devices is to create active and passive radio-electronic jamming. The paper presents a method of calculating jamming efficiency for protecting ground objects against radars mounted on airborne platforms. The basic mathematical formulas for calculating the efficiency of active radar jamming are presented. The numerical calculations for ground object protection are made for two different electronic warfare scenarios: the jammer placed very close to, and at a set distance from, the protected object. The results of these calculations are presented in figures showing the minimal distance of effective jamming. The realization of effective radar jamming in electronic warfare systems depends mainly on precise knowledge of the radar's and the jammer's technical parameters, the distance between them, the assumed value of the degradation coefficient, the conditions of electromagnetic energy propagation, and the applied jamming method. The conclusions from these calculations facilitate deciding how jamming should be conducted to achieve high efficiency during electronic warfare training.
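The kind of calculation described can be sketched with the textbook stand-off jamming power budget. These are generic radar/jammer equations with hypothetical parameter values, not the paper's specific formulas or scenarios: the radar echo falls off as R⁻⁴ while the jammer signal falls off only as Rj⁻², and the burn-through range is where the echo first overcomes the jamming.

```python
import numpy as np

def burn_through_range(Pt, Gt, sigma, Pj, Gj, Rj, js_min):
    """Burn-through range for a stand-off jammer, from the textbook
    one-way (jammer) vs. two-way (radar) power budgets:

        S/J = Pt Gt sigma Rj^2 / (4 pi Pj Gj R^4)

    solved for the range R at which S/J = js_min."""
    return (Pt * Gt * sigma * Rj**2 / (4.0 * np.pi * Pj * Gj * js_min)) ** 0.25

# Hypothetical parameters: 100 kW radar with 30 dB antenna gain, 5 m^2
# target, 100 W jammer with 10 dB gain at 100 km, detection needs S/J >= 1.
R_bt = burn_through_range(Pt=1e5, Gt=1e3, sigma=5.0,
                          Pj=100.0, Gj=10.0, Rj=1e5, js_min=1.0)
print(R_bt / 1e3)  # burn-through range in km
```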
Neural computing thermal comfort index PMV for the indoor environment intelligent control system
NASA Astrophysics Data System (ADS)
Liu, Chang; Chen, Yifei
2013-03-01
Providing indoor thermal comfort and saving energy are the two main goals of an indoor environmental control system. This paper presents an intelligent comfort control system that combines intelligent control and minimum-power control strategies for the indoor environment. To realize comfort control, the predicted mean vote (PMV) is chosen as the control goal and optimized to improve the indoor comfort level, taking six comfort-related variables into account. In addition, an RBF neural network based on a genetic algorithm is designed to calculate PMV, both for better performance and to overcome the nonlinearity of the PMV calculation. Formulas are given for calculating the expected output values from the input samples, and the RBF network model is trained on the input samples and expected outputs. The simulation results show that the design of the intelligent calculation method is valid. Moreover, the method achieves high precision, fast dynamic response, and good system performance, and can meet the required calculation error in practice.
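An RBF network with least-squares output weights can be sketched in a few lines. The example below is illustrative only: a 1-D toy function stands in for the six-variable PMV response surface, and a fixed grid of centers replaces the genetic-algorithm optimization described in the paper.

```python
import numpy as np

def rbf_design(x, centers, width):
    """Gaussian RBF design matrix."""
    return np.exp(-(x[:, None] - centers[None, :])**2 / (2.0 * width**2))

def train_rbf(x, y, centers, width):
    """Least-squares output weights (the GA-optimized center selection
    in the paper is replaced here by a fixed center grid)."""
    Phi = rbf_design(x, centers, width)
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return w

# Toy 1-D stand-in for the nonlinear PMV response surface.
rng = np.random.default_rng(0)
x = np.sort(rng.uniform(-3.0, 3.0, 200))
y = np.tanh(x) + 0.3 * np.sin(2.0 * x)
centers = np.linspace(-3.0, 3.0, 15)
w = train_rbf(x, y, centers, width=0.5)
pred = rbf_design(x, centers, 0.5) @ w
print(np.sqrt(np.mean((pred - y)**2)))  # small training RMSE
```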
Automated calculation of matrix elements and physics motivated observables
NASA Astrophysics Data System (ADS)
Was, Z.
2017-11-01
The central aspect of my personal scientific activity has focused on calculations useful for the interpretation of high-energy accelerator experimental results, especially in the domain of precision tests of the Standard Model. My activities started in the early 80s, when computer support for algebraic manipulations was in its infancy. But already then it was important for my work. It brought a multitude of benefits, but at the price of some inconvenience for physics intuition: calculations became more complex, work had to be distributed over teams of researchers and, due to automatization, some aspects of the intermediate results became more difficult to identify. In my talk I will not be exhaustive; I will present examples from my personal research only: (i) calculations of spin effects for the process e+e− → τ+τ−γ at PETRA/PEP energies; calculations (with the help of the Grace system of the Minami-Tateya group) and phenomenology of spin amplitudes for (ii) e+e− → 4f and (iii) e+e− → νeν̄eγγ processes; and (iv) phenomenology of CP-sensitive observables for Higgs boson parity in H → τ+τ−, τ± → ν 2(3)π cascade decays.
Reduction procedures for accurate analysis of MSX surveillance experiment data
NASA Technical Reports Server (NTRS)
Gaposchkin, E. Mike; Lane, Mark T.; Abbot, Rick I.
1994-01-01
Technical challenges of the Midcourse Space Experiment (MSX) science instruments require careful characterization and calibration of these sensors for analysis of surveillance experiment data. Procedures for reduction of Resident Space Object (RSO) detections will be presented which include refinement and calibration of the metric and radiometric (and photometric) data and calculation of a precise MSX ephemeris. Examples will be given which support the reduction, and these are taken from ground-test data similar in characteristics to the MSX sensors and from the IRAS satellite RSO detections. Examples to demonstrate the calculation of a precise ephemeris will be provided from satellites in similar orbits which are equipped with S-band transponders.
Comment on "Modified quantum-speed-limit bounds for open quantum dynamics in quantum channels"
NASA Astrophysics Data System (ADS)
Mirkin, Nicolás; Toscano, Fabricio; Wisniacki, Diego A.
2018-04-01
In a recent paper [Phys. Rev. A 95, 052118 (2017), 10.1103/PhysRevA.95.052118], the authors claim that our criticism, in Phys. Rev. A 94, 052125 (2016), 10.1103/PhysRevA.94.052125, of some quantum speed limit bounds for open quantum dynamics that appeared recently in the literature is invalid. According to the authors, the problem with our analysis would be generated by an artifact of the finite-precision numerical calculations. We analytically show here that it is not possible to have any inconsistency associated with the numerical precision of the calculations. Therefore, our criticism of the quantum speed limit bounds remains valid.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baines, Ellyn K.; Armstrong, J. Thomas; Schmitt, Henrique R.
Using the Navy Precision Optical Interferometer, we measured the angular diameters of 10 stars that have previously measured solar-like oscillations. Our sample covered a range of evolutionary stages but focused on evolved subgiant and giant stars. We combined our angular diameters with Hipparcos parallaxes to determine the stars' physical radii, and used photometry from the literature to calculate their bolometric fluxes, luminosities, and effective temperatures. We then used our results to test the scaling relations used by asteroseismology groups to calculate radii and found good agreement between the radii measured here and the radii predicted by stellar oscillation studies. The precision of the relations is not as well constrained for giant stars as it is for less evolved stars.
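The asteroseismic scaling relation being tested is commonly written R/R⊙ = (νmax/νmax,⊙)(Δν/Δν⊙)⁻²(Teff/Teff,⊙)^(1/2). A minimal implementation, where the solar reference values are typical literature choices and the subgiant-like inputs are hypothetical:

```python
def seismic_radius(nu_max, delta_nu, teff,
                   nu_max_sun=3090.0, delta_nu_sun=135.1, teff_sun=5777.0):
    """Radius in solar units from the standard asteroseismic scaling
    relation R/Rsun = (nu_max/nu_max_sun) (delta_nu/delta_nu_sun)^-2
    (Teff/Teff_sun)^0.5, with frequencies in microhertz and Teff in K.
    The solar reference values are typical literature choices."""
    return ((nu_max / nu_max_sun)
            * (delta_nu / delta_nu_sun) ** -2
            * (teff / teff_sun) ** 0.5)

print(seismic_radius(3090.0, 135.1, 5777.0))  # the Sun: 1.0
# A subgiant-like case (hypothetical seismic values):
print(seismic_radius(1000.0, 60.0, 5500.0))
```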
NASA Astrophysics Data System (ADS)
Saqib, Najam us; Faizan Mysorewala, Muhammad; Cheded, Lahouari
2017-12-01
In this paper, we propose a novel monitoring strategy for a wireless sensor network (WSN)-based water pipeline network. Our strategy uses a multi-pronged approach to reduce energy consumption, based on the use of two types of vibration sensors and pressure sensors, all having different energy levels, and a hierarchical adaptive sampling mechanism to determine the sampling frequency. The sampling rate of the sensors is adjusted according to the bandwidth of the vibration signal being monitored, using a wavelet-based adaptive thresholding scheme that calculates the new sampling frequency for the following cycle. In this multimodal sensing scheme, the duty-cycling approach is used for all sensors to reduce the sampling instances, such that the high-energy, high-precision (HE-HP) vibration sensors have low duty cycles and the low-energy, low-precision (LE-LP) vibration sensors have high duty cycles. The low duty-cycling (HE-HP) vibration sensor adjusts the sampling frequency of the high duty-cycling (LE-LP) vibration sensor. The simulated test bed considered here consists of a water pipeline network which uses pressure and vibration sensors, with the latter having different energy consumptions and precision levels, at various locations in the network. This makes the scheme all the more useful for conserving energy during extended monitoring. It is shown that by using the novel features of our proposed scheme, a significant reduction in energy consumption is achieved and the leak is effectively detected by the sensor node that is closest to it. Finally, both the total energy consumed by monitoring and the time for a WSN node to detect the leak are computed, showing the superiority of our proposed hierarchical adaptive sampling algorithm over a non-adaptive sampling approach.
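The hierarchical idea, dropping the sampling rate when the top wavelet detail bands carry negligible energy, can be sketched with a hand-rolled Haar decomposition. This is an illustrative stand-in for the paper's scheme; the 1% energy threshold and the test signals are assumptions.

```python
import numpy as np

def haar_detail_energies(x, levels=4):
    """Fraction of total energy in each Haar detail band (level 1 is
    the highest-frequency band, [fs/4, fs/2])."""
    x = np.asarray(x, dtype=float)
    total = np.sum(x**2)
    fractions = []
    approx = x
    for _ in range(levels):
        a = (approx[0::2] + approx[1::2]) / np.sqrt(2.0)
        d = (approx[0::2] - approx[1::2]) / np.sqrt(2.0)
        fractions.append(np.sum(d**2) / total)
        approx = a
    return fractions

def next_sampling_rate(x, fs, threshold=0.01):
    """Halve the sampling rate once for every leading detail band whose
    energy fraction is below `threshold` (a simple stand-in for the
    paper's wavelet-based adaptive thresholding)."""
    for frac in haar_detail_energies(x):
        if frac < threshold:
            fs /= 2.0
        else:
            break
    return fs

fs = 1000.0
t = np.arange(1024) / fs
slow = np.sin(2 * np.pi * 5.0 * t)    # far below fs/2: rate can drop
fast = np.sin(2 * np.pi * 400.0 * t)  # near fs/2: keep the full rate
print(next_sampling_rate(slow, fs), next_sampling_rate(fast, fs))
```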
Seiberl, Wolfgang; Jensen, Elisabeth; Merker, Josephine; Leitel, Marco; Schwirtz, Ansgar
2018-05-29
Force plates represent the "gold standard" in measuring running kinetics to predict performance or to identify the sources of running-related injuries. As these measurements are generally limited to laboratory analyses, wireless high-quality sensors for measuring in the field are needed. This work analysed the accuracy and precision of a new wireless insole force sensor for quantifying running-related kinetic parameters. Vertical ground reaction force (GRF) was simultaneously measured with pit-mounted force plates (1 kHz) and loadsol® sensors (100 Hz) under unshod forefoot and rearfoot running-step conditions. GRF data collections were repeated four times, each separated by 30 min of treadmill running, to test the influence of extended use. A repeated-measures ANOVA was used to identify differences between measurement devices. Additionally, mean bias and Bland-Altman limits of agreement (LoA) were calculated. We found a significant difference (p < .05) in ground contact time, peak force, and force rate, while there was no difference in the parameters impulse, time to peak, and negative force rate. There was no influence of the time point of measurement. The mean bias of ground contact time, impulse, peak force, and time to peak ranged between 0.6% and 3.4%, demonstrating high accuracy of the loadsol® devices for these parameters. For these same parameters, the LoA analysis showed that 95% of all measurement differences between insole and force plate measurements were less than 12%, demonstrating high precision of the sensors. However, highly dynamic behaviour of the GRF, such as the force rate, is not yet sufficiently resolved by the insole devices, which is likely explained by the low sampling rate.
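The kinetic parameters compared in this study can be extracted from a GRF trace as follows; the half-sine stance phase and the 20 N contact threshold are assumptions made for illustration.

```python
import numpy as np

def grf_parameters(force, fs, contact_threshold=20.0):
    """Running-kinetics parameters from a vertical GRF trace (N).

    Returns contact time (s), impulse (N s), peak force (N) and
    time-to-peak (s); the 20 N contact threshold is an assumption."""
    force = np.asarray(force, dtype=float)
    idx = np.flatnonzero(force > contact_threshold)
    contact_time = len(idx) / fs
    impulse = np.sum(force[idx]) / fs       # rectangle-rule integration
    peak = force.max()
    time_to_peak = (force.argmax() - idx[0]) / fs
    return contact_time, impulse, peak, time_to_peak

# Synthetic half-sine stance phase: 250 ms contact, 1600 N peak.
fs = 1000.0
t = np.arange(0, 0.25, 1.0 / fs)
force = 1600.0 * np.sin(np.pi * t / 0.25)
ct, imp, peak, ttp = grf_parameters(force, fs)
print(ct, imp, peak, ttp)
```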
A calibration method based on virtual large planar target for cameras with large FOV
NASA Astrophysics Data System (ADS)
Yu, Lei; Han, Yangyang; Nie, Hong; Ou, Qiaofeng; Xiong, Bangshu
2018-02-01
In order to obtain high precision in camera calibration, a target should be large enough to cover the whole field of view (FOV). For cameras with a large FOV, using a small target will seriously reduce the precision of calibration. However, using a large target causes many difficulties in making, carrying and employing it. In order to solve this problem, a calibration method based on a virtual large planar target (VLPT), which is virtually constructed from multiple small targets (STs), is proposed for cameras with a large FOV. In the VLPT-based calibration method, first, the positions and directions of the STs are changed several times to obtain a number of calibration images. Secondly, the VLPT of each calibration image is created by finding the virtual points corresponding to the feature points of the STs. Finally, the intrinsic and extrinsic parameters of the camera are calculated by using the VLPTs. Experimental results show that the proposed method not only achieves calibration precision similar to that obtained with a large target, but also has good stability over the whole measurement area. Thus, the difficulties of accurately calibrating cameras with a large FOV can be effectively addressed by the proposed method, which also offers good operability.
Utilization of a Terrestrial Laser Scanner for the Calibration of Mobile Mapping Systems
Hong, Seunghwan; Park, Ilsuk; Lee, Jisang; Lim, Kwangyong; Choi, Yoonjo; Sohn, Hong-Gyoo
2017-01-01
This paper proposes a practical calibration solution for estimating the boresight and lever-arm parameters of the sensors mounted on a Mobile Mapping System (MMS). On our MMS, devised for conducting the calibration experiment, three network video cameras, one mobile laser scanner, and one Global Navigation Satellite System (GNSS)/Inertial Navigation System (INS) were mounted. The geometric relationships between the three sensors were solved by the proposed calibration, considering the GNSS/INS as one unit sensor. Our solution uses the point cloud generated by a 3-dimensional (3D) terrestrial laser scanner rather than conventionally obtained 3D ground control features. With the terrestrial laser scanner, accurate and precise reference data could be produced, and the plane features corresponding to the sparse mobile laser scanning data could be determined with high precision. Furthermore, corresponding point features could be extracted from the dense terrestrial laser scanning data and the images captured by the video cameras. The parameters of the boresight and the lever-arm were calculated based on the least-squares approach, and a precision of 0.1 degrees for the boresight and 10 mm for the lever-arm could be achieved. PMID:28264457
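The least-squares core of such a calibration, recovering a rotation (boresight) and an offset (lever-arm) from corresponding point sets, can be sketched with the Kabsch algorithm. This is a generic illustration, not the authors' full pipeline with plane features; the 5-degree rotation and offset below are made-up values.

```python
import numpy as np

def rigid_transform(src, dst):
    """Least-squares rotation R and translation t with dst ~= src @ R.T + t
    (Kabsch algorithm via SVD)."""
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst.mean(axis=0) - src.mean(axis=0) @ R.T
    return R, t

# Recover a known boresight rotation (5 deg about z) and lever-arm offset.
rng = np.random.default_rng(1)
a = np.radians(5.0)
R_true = np.array([[np.cos(a), -np.sin(a), 0.0],
                   [np.sin(a),  np.cos(a), 0.0],
                   [0.0,        0.0,       1.0]])
t_true = np.array([0.5, -0.2, 1.0])
src = rng.normal(size=(50, 3))
dst = src @ R_true.T + t_true
R, t = rigid_transform(src, dst)
print(np.allclose(R, R_true), np.allclose(t, t_true))
```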
High-Precision Ionosphere Monitoring Using Continuous Measurements from BDS GEO Satellites
Yang, Haiyan; Yang, Xuhai; Zhang, Zhe; Zhao, Kunjuan
2018-01-01
The current constellation of the BeiDou Navigation Satellite System (BDS) consists of five geostationary earth orbit (GEO) satellites, five inclined geosynchronous satellite orbit (IGSO) satellites, and four medium earth orbit (MEO) satellites. The advantage of using GEO satellites to monitor the ionosphere is the almost motionless ionospheric pierce point (IPP), which is analyzed in comparison with the MEO and IGSO satellites. The results from the analysis of the observations using eight tracking sites indicate that the ionospheric total electron content (TEC) sequence derived from each GEO satellite at its respective fixed IPP is always continuous. The precision of the vertical TEC (VTEC) calculated using the BDS B1/B2, B1/B3, and B2/B3 dual-frequency combinations is compared and analyzed. The VTEC12 precision based on the B1/B2 dual-frequency measurements using the smoothed code and the raw code combination is 0.69 and 5.54 TECU, respectively, which is slightly higher than VTEC13 and much higher than VTEC23. Furthermore, the ionospheric monitoring results of site JFNG in the northern hemisphere and CUT0 in the southern hemisphere during the period from 1 January to 31 December 2015 are presented and discussed briefly. PMID:29495506
High-Precision Ionosphere Monitoring Using Continuous Measurements from BDS GEO Satellites.
Yang, Haiyan; Yang, Xuhai; Zhang, Zhe; Zhao, Kunjuan
2018-02-27
The current constellation of the BeiDou Navigation Satellite System (BDS) consists of five geostationary earth orbit (GEO) satellites, five inclined geosynchronous satellite orbit (IGSO) satellites, and four medium earth orbit (MEO) satellites. The advantage of using GEO satellites to monitor the ionosphere is the almost motionless ionospheric pierce point (IPP), which is analyzed in comparison with the MEO and IGSO satellites. The results from the analysis of the observations using eight tracking sites indicate that the ionospheric total electron content (TEC) sequence derived from each GEO satellite at its respective fixed IPP is always continuous. The precision of the vertical TEC (VTEC) calculated using the BDS B1/B2, B1/B3, and B2/B3 dual-frequency combinations is compared and analyzed. The VTEC12 precision based on the B1/B2 dual-frequency measurements using the smoothed code and the raw code combination is 0.69 and 5.54 TECU, respectively, which is slightly higher than VTEC13 and much higher than VTEC23. Furthermore, the ionospheric monitoring results of site JFNG in the northern hemisphere and CUT0 in the southern hemisphere during the period from 1 January to 31 December 2015 are presented and discussed briefly.
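The dual-frequency TEC retrieval rests on the dispersive code delay: a sketch using the geometry-free combination for BDS B1I/B2I. The frequencies are the published BDS-2 carrier values treated here as assumed constants, and the geometric range is hypothetical; real processing must additionally handle differential code biases and code smoothing, which is what the precision figures above reflect.

```python
# BDS-2 carrier frequencies in Hz (B1I / B2I); treated as assumed constants.
F_B1 = 1561.098e6
F_B2 = 1207.140e6

def slant_tec(p1, p2, f1=F_B1, f2=F_B2):
    """Slant TEC in TECU (1 TECU = 1e16 el/m^2) from the geometry-free
    code combination: the ionosphere delays the lower frequency more,
    so P2 - P1 is proportional to TEC."""
    tec_el_m2 = (p2 - p1) * f1**2 * f2**2 / (40.3 * (f1**2 - f2**2))
    return tec_el_m2 / 1e16

# Round trip: synthesize pseudoranges for a known 30 TECU slant TEC.
rho = 2.2e7                      # geometric range in m (hypothetical)
tec = 30.0 * 1e16                # 30 TECU in el/m^2
p1 = rho + 40.3 * tec / F_B1**2  # ionospheric code delay ~ 40.3 TEC / f^2
p2 = rho + 40.3 * tec / F_B2**2
print(slant_tec(p1, p2))  # ~30 TECU
```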
Gaia luminosities of pulsating A-F stars in the Kepler field
NASA Astrophysics Data System (ADS)
Balona, L. A.
2018-06-01
All stars in the Kepler field brighter than magnitude 12.5 have been classified according to variability type. A catalogue of δ Scuti and γ Doradus stars is presented. The problem of low frequencies in δ Sct stars, which occur in over 98 percent of these stars, is discussed. Gaia DR2 parallaxes were used to obtain precise luminosities, enabling the instability strips of the two classes of variable to be precisely defined. Surprisingly, it turns out that the instability region of the γ Dor stars lies entirely within the δ Sct instability strip. Thus γ Dor stars should not be considered a separate class of variable. The observed red and blue edges of the instability strip do not agree with recent model calculations. Stellar pulsation occurs in less than half of the stars in the instability region, and arguments are presented to show that this cannot be explained by assuming pulsation at a level too low to be detected. Precise Gaia DR2 luminosities of high-amplitude δ Sct stars (HADS) show that most of these are normal δ Sct stars and not transition objects. It is argued that current ideas on A-star envelopes need to be revised.
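Converting a parallax and an apparent magnitude into a luminosity follows the distance modulus. The sketch below idealizes away extinction and the bolometric correction unless they are supplied, which real DR2 work cannot do; it is an illustration of the arithmetic, not the paper's pipeline.

```python
import math

def luminosity_from_parallax(m_app, parallax_mas, bc=0.0, a_ext=0.0):
    """Luminosity in solar units from apparent magnitude and parallax.

    d[pc] = 1000/parallax[mas]; M = m - A - 5 log10(d/10);
    Mbol = M + BC; L/Lsun = 10^(-0.4 (Mbol - 4.74)).
    Extinction A and bolometric correction BC default to zero, an
    idealization; real work needs band-specific values of both."""
    d_pc = 1000.0 / parallax_mas
    M_abs = m_app - a_ext - 5.0 * math.log10(d_pc / 10.0)
    m_bol = M_abs + bc
    return 10.0 ** (-0.4 * (m_bol - 4.74))

# A star of apparent bolometric magnitude 4.74 at 10 pc (parallax
# 100 mas) recovers one solar luminosity.
print(luminosity_from_parallax(4.74, 100.0))  # 1.0
```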
A classification model of Hyperion image base on SAM combined decision tree
NASA Astrophysics Data System (ADS)
Wang, Zhenghai; Hu, Guangdao; Zhou, YongZhang; Liu, Xin
2009-10-01
Monitoring the Earth using imaging spectrometers has necessitated more accurate analyses and new applications of remote sensing. A very high-dimensional input space requires an exponentially large amount of data to adequately and reliably represent the classes in that space. On the other hand, as the input dimensionality increases the hypothesis space grows exponentially, which makes the classification performance highly unreliable. Traditional classification algorithms therefore struggle with hyperspectral images, and new algorithms have to be developed for hyperspectral data classification. The Spectral Angle Mapper (SAM) is a physically based spectral classifier that uses an n-dimensional angle to match pixels to reference spectra. The algorithm determines the spectral similarity between two spectra by calculating the angle between them, treating them as vectors in a space with dimensionality equal to the number of bands. The key difficulty is that the threshold of SAM must be defined manually, and the classification precision depends on how well that threshold is chosen. To resolve this problem, this paper proposes a new automatic classification model for remote sensing images using SAM combined with a decision tree. It can automatically choose an appropriate threshold of SAM and improve the classification precision of SAM based on analysis of field spectra. The test area, located in Heqing, Yunnan, was imaged by the EO-1 Hyperion imaging spectrometer using 224 bands in the visible and near infrared. The area included limestone areas, rock fields, soil and forests, and was classified into four different vegetation and soil types. The results show that this method chooses an appropriate threshold of SAM and effectively eliminates the disturbance and influence of unwanted objects, improving the classification precision.
Compared with the likelihood classification by field survey data, the classification precision of this model heightens 9.9%.
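The angle-plus-threshold rule that SAM applies, as described in the abstract above, can be sketched as follows. This is a minimal illustration, not the authors' implementation; the function names and the reference-spectrum dictionary are hypothetical:

```python
import numpy as np

def spectral_angle(pixel, reference):
    # Angle (radians) between two spectra treated as vectors in a space
    # whose dimensionality equals the number of bands.
    cos_theta = np.dot(pixel, reference) / (
        np.linalg.norm(pixel) * np.linalg.norm(reference))
    # Clip guards against floating-point overshoot outside [-1, 1].
    return float(np.arccos(np.clip(cos_theta, -1.0, 1.0)))

def classify(pixel, references, threshold):
    # Assign the pixel to the reference class with the smallest angle,
    # or to no class if even the best angle exceeds the threshold;
    # choosing that threshold well is the difficulty the paper addresses.
    angles = {name: spectral_angle(pixel, ref) for name, ref in references.items()}
    best = min(angles, key=angles.get)
    return best if angles[best] <= threshold else None
```

Because the angle ignores vector magnitude, SAM is insensitive to overall illumination differences between pixel and reference spectra.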
The photon PDF from high-mass Drell–Yan data at the LHC
Giuli, F.
2017-06-15
Achieving the highest precision for theoretical predictions at the LHC requires the calculation of hard-scattering cross sections that include perturbative QCD corrections up to (N)NNLO and electroweak (EW) corrections up to NLO. Parton distribution functions (PDFs) need to be provided with matching accuracy, which in the case of QED effects involves introducing the photon parton distribution of the proton, xγ(x,Q2). In this work a determination of the photon PDF from fits to recent ATLAS measurements of high-mass Drell–Yan dilepton production at √s = 8 TeV is presented. This analysis is based on the xFitter framework, and has required improvements both in the APFEL program, to account for NLO QED effects, and in the aMCfast interface to account for the photon-initiated contributions in the EW calculations within MadGraph5_aMC@NLO. The results are compared with other recent QED fits and determinations of the photon PDF, and consistent results are found.
Research and Development of High-performance Explosives
Cornell, Rodger; Wrobel, Erik; Anderson, Paul E.
2016-01-01
Developmental testing of high explosives for military applications involves small-scale formulation, safety testing, and finally detonation performance tests to verify theoretical calculations. For newly developed formulations, the process begins with small-scale mixes, thermal testing, and impact and friction sensitivity testing. Only then do subsequent larger-scale formulations proceed to detonation testing, which is covered in this paper. Recent advances in characterization techniques have led to unparalleled precision in the characterization of the early-time evolution of detonations. The new technique of photonic Doppler velocimetry (PDV) for the measurement of detonation pressure and velocity will be shared and compared with traditional fiber-optic detonation velocity measurements and plate-dent calculation of detonation pressure. In particular, the role of aluminum in explosive formulations will be discussed. Recent work has led to explosive formulations in which the aluminum reacts very early in the detonation product expansion. This enhanced reaction changes the detonation velocity and pressure due to reaction of the aluminum with oxygen in the expanding gas products. PMID:26966969
Optically-sectioned two-shot structured illumination microscopy with Hilbert-Huang processing.
Patorski, Krzysztof; Trusiak, Maciej; Tkaczyk, Tomasz
2014-04-21
We introduce a fast, simple, adaptive and experimentally robust method for reconstructing background-rejected optically-sectioned images using two-shot structured illumination microscopy. Our innovative data demodulation method needs two grid-illumination images mutually phase shifted by π (half a grid period), but a precise phase displacement between the two frames is not required. Upon frame subtraction, an input pattern with increased grid modulation is obtained. The first demodulation stage comprises two-dimensional data processing based on empirical mode decomposition for object spatial frequency selection (noise reduction and bias term removal). The second stage consists in calculating a high-contrast image using the two-dimensional spiral Hilbert transform. The effectiveness of our algorithm is compared with results calculated for the same input data using structured-illumination microscopy (SIM) and HiLo microscopy methods. The input data were collected for studying highly scattering tissue samples in reflectance mode. Results of our approach compare very favorably with the SIM and HiLo techniques.
NASA Astrophysics Data System (ADS)
Schröder, Markus; Meyer, Hans-Dieter
2017-08-01
We propose a Monte Carlo method, "Monte Carlo Potfit," for transforming high-dimensional potential energy surfaces evaluated on discrete grid points into a sum-of-products form, more precisely into a Tucker form. To this end we use a variational ansatz in which we replace numerically exact integrals with Monte Carlo integrals. This largely reduces the numerical cost by avoiding the evaluation of the potential on all grid points and allows a treatment of surfaces with up to 15-18 degrees of freedom. We furthermore show that the error made with this ansatz can be controlled and vanishes in certain limits. We present calculations on the potential of HFCO to demonstrate the features of the algorithm. To demonstrate the power of the method, we transformed a 15D potential of the protonated water dimer (Zundel cation) into sum-of-products form and calculated the ground and lowest 26 vibrationally excited states of the Zundel cation with the multi-configuration time-dependent Hartree method.
Precision corrections to fine tuning in SUSY
Buckley, Matthew R.; Monteux, Angelo; Shih, David
2017-06-20
Requiring that the contributions of supersymmetric particles to the Higgs mass are not highly tuned places upper limits on the masses of superpartners — in particular the higgsino, stop, and gluino. We revisit the details of the tuning calculation and introduce a number of improvements, including RGE resummation, two-loop effects, a proper treatment of UV vs. IR masses, and threshold corrections. This improved calculation more accurately connects the tuning measure with the physical masses of the superpartners at LHC-accessible energies. After these refinements, the tuning bound on the stop is now also sensitive to the masses of the 1st and 2nd generation squarks, which limits how far these can be decoupled in Effective SUSY scenarios. We find that, for a fixed level of tuning, our bounds can allow for heavier gluinos and stops than previously considered. Despite this, the natural region of supersymmetry is under pressure from the LHC constraints, with high messenger scales particularly disfavored.
Research on Modeling of Propeller in a Turboprop Engine
NASA Astrophysics Data System (ADS)
Huang, Jiaqin; Huang, Xianghua; Zhang, Tianhong
2015-05-01
In the simulation of engine-propeller integrated control system for a turboprop aircraft, a real-time propeller model with high-accuracy is required. A study is conducted to compare the real-time and precision performance of propeller models based on strip theory and lifting surface theory. The emphasis in modeling by strip theory is focused on three points as follows: First, FLUENT is adopted to calculate the lift and drag coefficients of the propeller. Next, a method to calculate the induced velocity which occurs in the ground rig test is presented. Finally, an approximate method is proposed to obtain the downwash angle of the propeller when the conventional algorithm has no solution. An advanced approximation of the velocities induced by helical horseshoe vortices is applied in the model based on lifting surface theory. This approximate method will reduce computing time and remain good accuracy. Comparison between the two modeling techniques shows that the model based on strip theory which owns more advantage on both real-time and high-accuracy can meet the requirement.
Claims-Based Definition of Death in Japanese Claims Database: Validity and Implications
Ooba, Nobuhiro; Setoguchi, Soko; Ando, Takashi; Sato, Tsugumichi; Yamaguchi, Takuhiro; Mochizuki, Mayumi; Kubota, Kiyoshi
2013-01-01
Background For the pending National Claims Database in Japan, researchers will not have access to death information in the enrollment files. We developed and evaluated a claims-based definition of death. Methodology/Principal Findings We used healthcare claims and enrollment data between January 2005 and August 2009 for 195,193 beneficiaries aged 20 to 74 in 3 private health insurance unions. We developed claims-based definitions of death using discharge or disease status and Charlson comorbidity index (CCI). We calculated sensitivity, specificity and positive predictive values (PPVs) using the enrollment data as a gold standard in the overall population and subgroups divided by demographic and other factors. We also assessed bias and precision in two example studies where an outcome was death. The definition based on the combination of discharge/disease status and CCI provided moderate sensitivity (around 60%) and high specificity (99.99%) and high PPVs (94.8%). In most subgroups, sensitivity of the preferred definition was also around 60% but varied from 28 to 91%. In an example study comparing death rates between two anticancer drug classes, the claims-based definition provided valid and precise hazard ratios (HRs). In another example study comparing two classes of anti-depressants, the HR with the claims-based definition was biased and had lower precision than that with the gold standard definition. Conclusions/Significance The claims-based definitions of death developed in this study had high specificity and PPVs while sensitivity was around 60%. The definitions will be useful in future studies when used with attention to the possible fluctuation of sensitivity in some subpopulations. PMID:23741526
NASA Astrophysics Data System (ADS)
Kim, Moojong; Kim, Jinyoung; Lee, Moon G.
Recently, in micro/nano fabrication equipment, linear motors have come into wide use as actuators to position workpieces, machining tools and measurement heads. To control them faster and more precisely, the motor should have a high actuating force and a small force ripple. A high actuating force enables workpieces to be moved with high acceleration, which may ultimately provide higher throughput. Force ripple has a detrimental effect on the precision and tracking performance of the equipment, so to achieve more precise motion it is important to lower the force ripple. Force ripple is categorized into cogging ripple and mutual ripple. The first depends on the shape of the magnets and/or core; the second does not, but depends instead on the current commutation. In this work, a coreless mover, i.e. a coil winding, is applied to the linear motor to avoid cogging ripple; therefore only the mutual ripple is considered for minimization. An ideal Halbach magnet array has continuously varying magnetization. The THMA (Halbach magnet array with T-shaped magnets) is proposed to approximate the ideal one. The THMA cannot produce an ideal sinusoidal flux; therefore, a linear motor with a THMA and sinusoidal current commutation generates mutual force ripple. In this paper, in order to compensate the mutual force ripple with a feedforward (FF) controller, we calculate the optimized commutation of the input current. The ripple is lower than 1.17% of the actuating force if the commutation current agrees with the magnetic flux from the THMA. The performance of the feedforward (FF) controller is verified by experiment.
Brooks, M.H.; Schroder, L.J.; Willoughby, T.C.
1987-01-01
The U.S. Geological Survey operated a blind audit sample program during 1984 to test the effects of the sample handling and shipping procedures used by the National Atmospheric Deposition Program and National Trends Network on the quality of wet deposition data produced by the combined networks. Blind audit samples, which were dilutions of standard reference water samples, were submitted by network site operators to the central analytical laboratory disguised as actual wet deposition samples. Results from the analyses of blind audit samples were used to calculate estimates of analyte bias associated with all network wet deposition samples analyzed in 1984 and to estimate analyte precision. Concentration differences between double-blind samples that were submitted to the central analytical laboratory and separate analyses of aliquots of those blind audit samples that had not undergone network sample handling and shipping were used to calculate the analyte masses that apparently were added to each blind audit sample by routine network handling and shipping procedures. These calculated masses indicated statistically significant biases for magnesium, sodium, potassium, chloride, and sulfate. Median calculated masses were 41.4 micrograms (ug) for calcium, 14.9 ug for magnesium, 23.3 ug for sodium, 0.7 ug for potassium, 16.5 ug for chloride and 55.3 ug for sulfate. Analyte precision was estimated using two different sets of replicate measurements performed by the central analytical laboratory. Estimated standard deviations were similar to those previously reported. (Author's abstract)
Using an electronic compass to determine telemetry azimuths
Cox, R.R.; Scalf, J.D.; Jamison, B.E.; Lutz, R.S.
2002-01-01
Researchers typically collect azimuths from known locations to estimate locations of radiomarked animals. Mobile, vehicle-mounted telemetry receiving systems frequently are used to gather azimuth data. Use of mobile systems typically involves estimating the vehicle's orientation to grid north (vehicle azimuth), recording an azimuth to the transmitter relative to the vehicle azimuth from a fixed rosette around the antenna mast (relative azimuth), and subsequently calculating an azimuth to the transmitter (animal azimuth). We incorporated electronic compasses into standard null-peak antenna systems by mounting the compass sensors atop the antenna masts and evaluated the precision of this configuration. This system increased efficiency by eliminating vehicle orientation and calculations to determine animal azimuths and produced estimates of precision (azimuth SD=2.6 deg., SE=0.16 deg.) similar to systems that required orienting the mobile system to grid north. Using an electronic compass increased efficiency without sacrificing precision and should produce more accurate estimates of locations when marked animals are moving or when vehicle orientation is problematic.
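The manual step that the electronic compass eliminates, combining the vehicle azimuth with the rosette's relative azimuth to obtain the animal azimuth, is a simple modular sum. A minimal sketch with illustrative names:

```python
def animal_azimuth(vehicle_azimuth, relative_azimuth):
    # Azimuth to the transmitter: vehicle orientation to grid north plus
    # the rosette reading, wrapped into [0, 360) degrees.
    return (vehicle_azimuth + relative_azimuth) % 360.0
```

Mounting the compass sensor atop the antenna mast lets the system read the transmitter azimuth directly, removing both the orientation step and this calculation.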
Estimation of suspended-sediment rating curves and mean suspended-sediment loads
Crawford, Charles G.
1991-01-01
A simulation study was done to evaluate: (1) the accuracy and precision of parameter estimates for the bias-corrected, transformed-linear and non-linear models obtained by the method of least squares; (2) the accuracy of mean suspended-sediment loads calculated by the flow-duration, rating-curve method using model parameters obtained by the alternative methods. Parameter estimates obtained by least squares for the bias-corrected, transformed-linear model were considerably more precise than those obtained for the non-linear or weighted non-linear model. The accuracy of parameter estimates obtained for the bias-corrected, transformed-linear and weighted non-linear models was similar and was much greater than the accuracy obtained by non-linear least squares. The improved parameter estimates obtained by the bias-corrected, transformed-linear or weighted non-linear model yield estimates of mean suspended-sediment load calculated by the flow-duration, rating-curve method that are more accurate and precise than those obtained for the non-linear model.
Precision of guided scanning procedures for full-arch digital impressions in vivo.
Zimmermann, Moritz; Koller, Christina; Rumetsch, Moritz; Ender, Andreas; Mehl, Albert
2017-11-01
System-specific scanning strategies have been shown to influence the accuracy of full-arch digital impressions. Special guided scanning procedures have been implemented for specific intraoral scanning systems with particular regard to the digital orthodontic workflow. The aim of this study was to evaluate the precision of guided scanning procedures compared to conventional impression techniques in vivo. Two intraoral scanning systems with implemented full-arch guided scanning procedures (Cerec Omnicam Ortho; Ormco Lythos) were included along with one conventional impression technique with irreversible hydrocolloid material (alginate). Full-arch impressions were taken three times each from 5 participants (n = 15). Impressions were then compared within the test groups using a point-to-surface distance method after best-fit model matching (OraCheck). Precision was calculated using the (90-10%)/2 quantile, and statistical analysis with one-way repeated measures ANOVA and post hoc Bonferroni test was performed. The conventional impression technique with alginate showed the lowest precision for full-arch impressions with 162.2 ± 71.3 µm. Both guided scanning procedures performed statistically significantly better than the conventional impression technique (p < 0.05). Mean values for group Cerec Omnicam Ortho were 74.5 ± 39.2 µm and for group Ormco Lythos 91.4 ± 48.8 µm. The in vivo precision of guided scanning procedures exceeds that of conventional impression techniques with the irreversible hydrocolloid material alginate. Guided scanning procedures may be highly promising for clinical applications, especially for digital orthodontic workflows.
Direct calculation of liquid-vapor phase equilibria from transition matrix Monte Carlo simulation
NASA Astrophysics Data System (ADS)
Errington, Jeffrey R.
2003-06-01
An approach for directly determining the liquid-vapor phase equilibrium of a model system at any temperature along the coexistence line is described. The method relies on transition matrix Monte Carlo ideas developed by Fitzgerald, Picard, and Silver [Europhys. Lett. 46, 282 (1999)]. During a Monte Carlo simulation attempted transitions between states along the Markov chain are monitored as opposed to tracking the number of times the chain visits a given state as is done in conventional simulations. Data collection is highly efficient and very precise results are obtained. The method is implemented in both the grand canonical and isothermal-isobaric ensemble. The main result from a simulation conducted at a given temperature is a density probability distribution for a range of densities that includes both liquid and vapor states. Vapor pressures and coexisting densities are calculated in a straightforward manner from the probability distribution. The approach is demonstrated with the Lennard-Jones fluid. Coexistence properties are directly calculated at temperatures spanning from the triple point to the critical point.
Polarographic determination of lead hydroxide formation constants at low ionic strength
Lind, Carol J.
1978-01-01
Values of formation constants for lead hydroxide at 25 °C were calculated from normal pulse polarographic measurements of 10⁻⁶ M lead in 0.01 M sodium perchlorate. The low concentrations simulate those found in many freshwaters, permitting direct application of the values when considering distributions of lead species. The precise evaluation of species distribution in waters at other ionic strengths requires activity coefficient corrections. As opposed to much of the previously published work done at high ionic strength, the values reported here were obtained at low ionic strength, permitting use of smaller and better defined activity coefficient corrections. These values were further confirmed by differential-pulse polarography and differential-pulse anodic stripping voltammetry data. The logs of the values for β1, β2, and β3 were calculated to be 6.59, 10.80, and 13.63, respectively. When corrected to zero ionic strength, these values were calculated to be 6.77, 11.07, and 13.89, respectively.
Improving Charging-Breeding Simulations with Space-Charge Effects
NASA Astrophysics Data System (ADS)
Bilek, Ryan; Kwiatkowski, Ania; Steinbrügge, René
2016-09-01
Rare-isotope-beam facilities use highly charged ions (HCI) in accelerators for heavy ions and to improve the measurement precision and resolving power of certain experiments. An Electron Beam Ion Trap (EBIT) can create HCI through successive electron impact, charge breeding trapped ions into higher charge states. CBSIM was created to calculate successive charge breeding with an EBIT. It was augmented by porting it to an object-oriented programming language, including additional elements, improving ion-ion collision factors, and exploring the overlap of the electron beam with the ions. The calculation is further enhanced with the effects of residual background gas by computing the space charge due to charge breeding. The program assimilates background species, ionizes and charge breeds them alongside the element being studied, and allows them to interact with the desired species through charge exchange, giving a fairer overview of realistic charge breeding. Calculations of charge breeding will be shown for realistic experimental conditions. We also reexamined the implementation of ionization energies, cross sections, and ion-ion interactions in the charge-breeding calculation.
Musil, Karel; Florianova, Veronika; Bucek, Pavel; Dohnal, Vlastimil; Kuca, Kamil; Musilek, Kamil
2016-01-05
Acetylcholinesterase reactivators (oximes) are compounds used for antidotal treatment in cases of organophosphorus poisoning. The dissociation constants (pK(a1)) of ten standard or promising acetylcholinesterase reactivators were determined by ultraviolet absorption spectrometry. Two methods of spectrum measurement (UV-vis spectrometry, FIA/UV-vis) were applied and compared. Soft and hard models for the calculation of pK(a1) values were employed. pK(a1) values in the range 7.00-8.35 were recommended, where at least 10% of the oximate anion is available for organophosphate reactivation. All tested oximes were found to have pK(a1) in this range. The FIA/UV-vis method provided rapid sample throughput, low sample consumption, and high sensitivity and precision compared to the standard UV-vis method. The hard calculation model was proposed as the more accurate for pK(a1) calculation. Copyright © 2015 Elsevier B.V. All rights reserved.
Nuclear recoil effect on the binding energies in highly charged He-like ions
NASA Astrophysics Data System (ADS)
Malyshev, A. V.; Popov, R. V.; Shabaev, V. M.; Zubova, N. A.
2018-04-01
The most precise evaluation to date of the nuclear recoil effect on the n = 1 and n = 2 energy levels of He-like ions is presented for the range Z = 12–100. The one-electron recoil contribution is calculated within the framework of the rigorous quantum electrodynamics approach to first order in the electron-to-nucleus mass ratio m/M and to all orders in the parameter αZ. The two-electron m/M recoil term is calculated employing the 1/Z perturbation theory. The recoil contribution of the zeroth order in 1/Z is evaluated to all orders in αZ, while the 1/Z term is calculated using the Breit approximation. The recoil corrections of the second and higher orders in 1/Z are taken into account within the nonrelativistic approach. The obtained results are compared with the previous evaluation of this effect (Artemyev et al 2005 Phys. Rev. A 71 062104).
Precision and accuracy of commonly used dental age estimation charts for the New Zealand population.
Baylis, Stephanie; Bassed, Richard
2017-08-01
Little research has been undertaken for the New Zealand population in the field of dental age estimation. This research to date indicates there are differences in dental developmental rates between the New Zealand population and other global population groups, and within the New Zealand population itself. Dental age estimation methods range from dental development charts to complex biometric analysis. Dental development charts are not the most accurate method of dental age estimation, but are time saving in their use. They are an excellent screening tool, particularly for post-mortem identification purposes, and for assessing variation from population norms in living individuals. The aim of this study was to test the precision and accuracy of three dental development charts (Schour and Massler, Blenkin and Taylor, and the London Atlas), used to estimate dental age of a sample of New Zealand juveniles between the ages of 5 and 18 years old (n=875). Percentage 'best fit' to correct age category and to expected chart stage were calculated to determine which chart was the most precise for the sample. Chronological ages were compared to estimated dental ages using a two-tailed paired t-test (P<0.05) for each of the three methods. The mean differences between CA and DA were calculated to determine bias and the absolute mean differences were calculated to indicate accuracy. The results of this study show that while accuracy and precision were low for all charts tested against the New Zealand population sample, the Blenkin and Taylor Australian charts performed best overall. Copyright © 2017 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Bryant, Justin; Park, Hyo In; Nica, Ninel; Iacob, Victor; Hardy, John
2017-09-01
We have extended our series of precision measurements of internal conversion coefficients (ICC) to include the 39.76-keV, E3 transition in 103Rh. Our goal has been to test the Dirac-Fock ICC calculations, specifically with respect to the role of the atomic vacancy created in the conversion process. We prepared a sample from pure (natural) ruthenium chloride by converting it to ruthenium oxide, electrochemically depositing it on an aluminum backing, and subsequently activating it with thermal neutrons at the Texas A&M TRIGA reactor for 20 hours. Decay spectra were then recorded for roughly 120 hours with a HPGe detector that has been precisely efficiency calibrated (±0.15% relative precision). In the acquired spectra, all impurities were identified and corrected for accordingly. A program was written using the ROOT framework developed by CERN to extract the area of the 39.76-keV gamma-ray peak from 103Rh, which partially overlapped the Kα x-ray peaks from a 153Gd impurity. From the ratio of the 39.76-keV peak to the ruthenium K x rays, we determined a preliminary value for the ICC: αK(39.76) = 134.6(19). This result agrees well with the theoretical calculation including the atomic vacancy, 135.2, and disagrees with the calculation excluding the vacancy, 127.4. This is consistent with our previous measurements, indicating that the atomic vacancy must be taken into account. Thanks to the NSF, DOE and Welch Foundation.
An Approach for High-precision Stand-alone Positioning in a Dynamic Environment
NASA Astrophysics Data System (ADS)
Halis Saka, M.; Metin Alkan, Reha; Ozpercin, Alişir
2015-04-01
In this study, an algorithm is developed for precise positioning in a dynamic environment utilizing a single geodetic GNSS receiver and carrier phase data. In this method, users start the measurement on a known point near the project area for a couple of seconds, making use of a single dual-frequency geodetic-grade receiver. The technique employs iono-free carrier phase observations with precise products. The equation of the algorithm is Sm(t(i+1)) = SC(ti) + [ΦIF(t(i+1)) - ΦIF(ti)], where Sm(t(i+1)) is the phase-range between the satellites and the receiver, SC(ti) is the initial range computed from the coordinates of the known initial point and the satellite coordinates, and ΦIF is the ionosphere-free phase measurement (in meters). Tropospheric path delays are modelled using the standard tropospheric model. To accomplish the processing, an in-house program was coded and some functions were adopted from Easy-Suite, available at http://kom.aau.dk/~borre/easy. In order to assess the performance of the introduced algorithm in a dynamic environment, a dataset from a kinematic test measurement in Istanbul, Turkey was used. In the test, a geodetic dual-frequency GNSS receiver, an Ashtech Z-Xtreme, was set up on a known point on the shore and a couple of epochs were recorded for initialization. The receiver was then moved to a vessel, data were collected for approximately 2.5 hours, and the measurement was finalized on a known point on the shore. While the kinematic measurements on the vessel were carried out, another GNSS receiver was set up on a geodetic point with known coordinates on the shore and data were collected in static mode to calculate the reference trajectory of the vessel using the differential technique. The coordinates of the vessel were calculated for each measurement epoch with the introduced method.
To obtain more robust results, all coordinates were calculated once again in reverse, i.e. from the last epoch to the first, so that the estimated coordinates were also checked. The average of the two computed coordinate sets was used as the vessel coordinates and then compared epoch by epoch with the reference trajectory derived from the shore-based geodetic receiver. The results indicate that the coordinates calculated with the introduced method are consistent with the reference trajectory to an accuracy of about 1 decimeter, while the height component shows lower accuracy, about 2 decimeters. This accuracy level meets the requirements of many applications, including some marine applications: precise hydrographic surveying, dredging, attitude control of ships, buoys and floating platforms, marine geodesy, navigation and oceanography.
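Read literally, the phase-range equation above propagates an initial geometric range forward by accumulating epoch-to-epoch changes in the ionosphere-free phase. A minimal sketch of that reading (the function name and inputs are assumptions, not the authors' in-house code):

```python
def phase_range_series(s_initial, phi_if):
    # s_initial: range S_C(t_0) computed from the known initial point and
    # the satellite coordinates; phi_if: ionosphere-free phase (meters),
    # one value per epoch for a given satellite.
    # Each epoch adds the phase change, per
    # Sm(t_{i+1}) = SC(t_i) + [PhiIF(t_{i+1}) - PhiIF(t_i)].
    ranges = [s_initial]
    for i in range(1, len(phi_if)):
        ranges.append(ranges[-1] + (phi_if[i] - phi_if[i - 1]))
    return ranges
```

Running the same recursion from the final known point back to the first epoch, as described above, then amounts to applying this function to the reversed phase series and averaging the two trajectories.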
Sim, Julius; Lewis, Martyn
2012-03-01
To investigate methods to determine the size of a pilot study to inform a power calculation for a randomized controlled trial (RCT) using an interval/ratio outcome measure. Calculations based on confidence intervals (CIs) for the sample standard deviation (SD). Based on CIs for the sample SD, methods are demonstrated whereby (1) the observed SD can be adjusted to secure the desired level of statistical power in the main study with a specified level of confidence; (2) the sample for the main study, if calculated using the observed SD, can be adjusted, again to obtain the desired level of statistical power in the main study; (3) the power of the main study can be calculated for the situation in which the SD in the pilot study proves to be an underestimate of the true SD; and (4) an "efficient" pilot size can be determined to minimize the combined size of the pilot and main RCT. Trialists should calculate the appropriate size of a pilot study, just as they should the size of the main RCT, taking into account the twin needs to demonstrate efficiency in terms of recruitment and to produce precise estimates of treatment effect. Copyright © 2012 Elsevier Inc. All rights reserved.
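Adjustment (1) above, inflating the observed pilot SD to an upper confidence limit before the main-trial power calculation, can be sketched as follows. The 80% confidence level and the normal-approximation sample-size formula are illustrative assumptions, not necessarily the authors' exact procedure:

```python
import math
from scipy import stats

def adjusted_sd(sd_pilot, n_pilot, confidence=0.80):
    # One-sided upper confidence limit for the population SD, from the
    # chi-square distribution of (n-1)*s^2/sigma^2. Powering the main RCT
    # with this inflated SD secures the target power with the stated
    # confidence even if the pilot SD underestimated the true SD.
    df = n_pilot - 1
    chi2_lower = stats.chi2.ppf(1.0 - confidence, df)
    return sd_pilot * math.sqrt(df / chi2_lower)

def n_per_arm(sd, delta, alpha=0.05, power=0.80):
    # Standard two-arm sample size for detecting a mean difference delta
    # with a two-sided test (normal approximation).
    z_a = stats.norm.ppf(1 - alpha / 2)
    z_b = stats.norm.ppf(power)
    return math.ceil(2 * ((z_a + z_b) * sd / delta) ** 2)
```

Feeding `adjusted_sd(...)` rather than the raw pilot SD into `n_per_arm(...)` is what protects the main trial's power against sampling error in a small pilot.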
Weighted Geometric Dilution of Precision Calculations with Matrix Multiplication
Chen, Chien-Sheng
2015-01-01
To enhance the performance of location estimation in wireless positioning systems, the geometric dilution of precision (GDOP) is widely used as a criterion for selecting measurement units. Since GDOP represents the geometric effect on the relationship between measurement error and positioning determination error, the measurement unit subset with the smallest GDOP is usually chosen for positioning. The conventional GDOP calculation based on matrix inversion requires many operations. Because more and more measurement units can be chosen nowadays, an efficient calculation should be designed to decrease the complexity. Since the performance of each measurement unit is different, the weighted GDOP (WGDOP), instead of GDOP, is used to select the measurement units to improve the accuracy of location. To calculate WGDOP effectively and efficiently, a closed-form solution for the WGDOP calculation is proposed for the case when more than four measurements are available. In this paper, an efficient WGDOP calculation method applying matrix multiplication, which is easy for hardware implementation, is proposed. In addition, the proposed method can be used when exactly four, as well as when more than four, measurements are available. Even when using the all-in-view method for positioning, the proposed method can still reduce the computational overhead. The proposed WGDOP methods with less computation are compatible with the global positioning system (GPS), wireless sensor networks (WSN) and cellular communication systems. PMID:25569755
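The quantity being optimized can be sketched as follows (a minimal illustration of the standard WGDOP definition via matrix inversion, i.e. the costly baseline the paper improves on, not the authors' matrix-multiplication algorithm; the geometry matrix H below is invented for the example):

```python
import numpy as np

def wgdop(H, W=None):
    """Weighted GDOP: sqrt(trace((H^T W H)^-1)). H is the geometry
    (design) matrix; W is a weight matrix reflecting the quality of each
    measurement unit (W = identity reduces to the plain GDOP)."""
    if W is None:
        W = np.eye(H.shape[0])
    return float(np.sqrt(np.trace(np.linalg.inv(H.T @ W @ H))))

# Invented 4-measurement geometry: unit line-of-sight rows + clock column.
H = np.array([
    [ 0.6,  0.8,  0.0, 1.0],
    [-0.8,  0.6,  0.0, 1.0],
    [ 0.0, -0.6,  0.8, 1.0],
    [ 0.6,  0.0, -0.8, 1.0],
])
```

Doubling the quality (quadrupling the weight) of every measurement halves the WGDOP, which is why weighting, rather than geometry alone, matters for unit selection.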
Air Traffic Management Technology Demonstration-1 Concept of Operations (ATD-1 ConOps)
NASA Technical Reports Server (NTRS)
Baxley, Brian T.; Johnson, William C.; Swenson, Harry; Robinson, John E.; Prevot, Thomas; Callantine, Todd; Scardina, John; Greene, Michael
2012-01-01
The operational goal of the ATD-1 ConOps is to enable aircraft, using their onboard FMS capabilities, to fly Optimized Profile Descents (OPDs) from cruise to the runway threshold at a high-density airport, at a high throughput rate, using primarily speed control to maintain in-trail separation and the arrival schedule. The three technologies in the ATD-1 ConOps achieve this by calculating a precise arrival schedule, using controller decision support tools to provide terminal controllers with speeds for aircraft to fly to meet times at particular meter points, and using onboard software to provide flight crews with speeds for the aircraft to fly to achieve a particular spacing behind preceding aircraft.
Radial rescaling approach for the eigenvalue problem of a particle in an arbitrarily shaped box.
Lijnen, Erwin; Chibotaru, Liviu F; Ceulemans, Arnout
2008-01-01
In the present work we introduce a methodology for solving a quantum billiard with Dirichlet boundary conditions. The procedure starts from the exactly known solutions for the particle in a circular disk, which are subsequently radially rescaled in such a way that they obey the new boundary conditions. In this way one constructs a complete basis set which can be used to obtain the eigenstates and eigenenergies of the corresponding quantum billiard to a high level of precision. Test calculations for several regular polygons show the efficiency of the method, which often requires only one or two basis functions to describe the lowest eigenstates with high accuracy.
Optical head tracking for functional magnetic resonance imaging using structured light.
Zaremba, Andrei A; MacFarlane, Duncan L; Tseng, Wei-Che; Stark, Andrew J; Briggs, Richard W; Gopinath, Kaundinya S; Cheshkov, Sergey; White, Keith D
2008-07-01
An accurate motion-tracking technique is needed to compensate for subject motion during functional magnetic resonance imaging (fMRI) procedures. Here, a novel approach to motion metrology is discussed. A structured light pattern specifically coded for digital signal processing is positioned onto a fiduciary of the patient. As the patient undergoes spatial transformations in 6 DoF (degrees of freedom), a high-resolution CCD camera captures successive images for analysis on a computing platform. A high-speed image processing algorithm is used to calculate spatial transformations in a time frame commensurate with patient movements (10-100 ms) and with a precision of at least 0.5 microm for translations and 0.1 deg for rotations.
Van Inghelandt, Delphine; Melchinger, Albrecht E; Lebreton, Claude; Stich, Benjamin
2010-05-01
Information about the genetic diversity and population structure in elite breeding material is of fundamental importance for the improvement of crops. The objectives of our study were to (a) examine the population structure and the genetic diversity in elite maize germplasm based on simple sequence repeat (SSR) markers, (b) compare these results with those obtained from single nucleotide polymorphism (SNP) markers, and (c) compare the coancestry coefficient calculated from pedigree records with genetic distance estimates calculated from SSR and SNP markers. Our study was based on 1,537 elite maize inbred lines genotyped with 359 SSR and 8,244 SNP markers. The average number of alleles per locus, of group specific alleles, and the gene diversity (D) were higher for SSRs than for SNPs. Modified Roger's distance (MRD) estimates and membership probabilities of the STRUCTURE matrices were higher for SSR than for SNP markers but the germplasm organization in four heterotic pools was consistent with STRUCTURE results based on SSRs and SNPs. MRD estimates calculated for the two marker systems were highly correlated (0.87). Our results suggested that the same conclusions regarding the structure and the diversity of heterotic pools could be drawn from both marker types. Furthermore, although our results suggested that the ratio of the number of SSRs and SNPs required to obtain MRD or D estimates with similar precision is not constant across the various precision levels, we propose that between 7 and 11 times more SNPs than SSRs should be used for analyzing population structure and genetic diversity.
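The Modified Roger's distance used above can be sketched from its standard definition (illustrative only; per-locus allele-frequency vectors are assumed as input):

```python
import numpy as np

def modified_rogers_distance(p, q):
    """Modified Roger's distance between two lines from per-locus
    allele-frequency vectors: the square root of the sum of squared
    frequency differences over all loci and alleles, divided by 2m,
    where m is the number of loci. Ranges from 0 to 1."""
    m = len(p)
    total = sum(np.sum((np.asarray(pi) - np.asarray(qi)) ** 2)
                for pi, qi in zip(p, q))
    return float(np.sqrt(total / (2.0 * m)))
```

For fully homozygous inbred lines the per-locus frequency vectors are 0/1 indicators, so two lines fixed for opposite alleles at every locus are at the maximum distance of 1.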
Hanuschkin, Alexander; Kunkel, Susanne; Helias, Moritz; Morrison, Abigail; Diesmann, Markus
2010-01-01
Traditionally, event-driven simulations have been limited to the very restricted class of neuronal models for which the timing of future spikes can be expressed in closed form. Recently, the class of models that is amenable to event-driven simulation has been extended by the development of techniques to accurately calculate firing times for some integrate-and-fire neuron models that do not enable the prediction of future spikes in closed form. The motivation of this development is the general perception that time-driven simulations are imprecise. Here, we demonstrate that a globally time-driven scheme can calculate firing times that cannot be discriminated from those calculated by an event-driven implementation of the same model; moreover, the time-driven scheme incurs lower computational costs. The key insight is that time-driven methods are based on identifying a threshold crossing in the recent past, which can be implemented by a much simpler algorithm than the techniques for predicting future threshold crossings that are necessary for event-driven approaches. As run time is dominated by the cost of the operations performed at each incoming spike, which includes spike prediction in the case of event-driven simulation and retrospective detection in the case of time-driven simulation, the simple time-driven algorithm outperforms the event-driven approaches. Additionally, our method is generally applicable to all commonly used integrate-and-fire neuronal models; we show that a non-linear model employing a standard adaptive solver can reproduce a reference spike train with a high degree of precision. PMID:21031031
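The key idea, detecting a threshold crossing in the recent past on a fixed time grid, can be sketched for a simple leaky integrate-and-fire neuron (a minimal illustration, not the authors' implementation; the forward-Euler step and all parameters are assumptions):

```python
def lif_spike_times(i_ext, dt=0.1, tau=10.0, v_th=1.0, v_reset=0.0):
    """Time-driven leaky integrate-and-fire simulation: advance on a
    fixed grid and detect threshold crossings retrospectively, linearly
    interpolating the crossing time inside the step that crossed."""
    v, spikes = 0.0, []
    for k, i in enumerate(i_ext):
        v_new = v + dt * (-v / tau + i)   # forward-Euler step
        if v_new >= v_th:
            frac = (v_th - v) / (v_new - v)   # where in the step it crossed
            spikes.append((k + frac) * dt)
            v_new = v_reset
        v = v_new
    return spikes
```

The retrospective check `v_new >= v_th` is the "much simpler algorithm" contrasted above with event-driven prediction of future crossings, which must be solved before the spike happens.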
Measurement of the 1s2s ^1S0 - 1s2p ^3P1 interval in helium-like silicon.
NASA Astrophysics Data System (ADS)
Redshaw, M.; Harry, R.; Myers, E. G.; Weatherford, C. A.
2001-05-01
Accurate calculation of the energy levels of helium-like ions is a basic problem in relativistic atomic theory. For the n=2 levels at moderate Z, published calculations give all ``structure'' but not all explicit QED contributions to order (Zα)^4 a.u. (D.R. Plante, W.R. Johnson and J. Sapirstein, Phys. Rev. A 49, 3519 (1994); K.T. Cheng, M.H. Chen, W.R. Johnson and J. Sapirstein, Phys. Rev. A 50, 247 (1994)). Measurements of the 1s2p ^3P - 1s2s ^3S transitions, which lie in the vacuum ultra-violet, are barely precise enough to challenge the theory. However, the intercombination 1s2s ^1S0 - 1s2p ^3P1 interval lies in the infra-red for Z<40 and enables precision measurements using laser spectroscopy (E.G. Myers, J.K. Thompson, E.P. Gavathas, N.R. Claussen, J.D. Silver and D.J.H. Howie, Phys. Rev. Lett. 75, 3637 (1995)). We aim to measure this interval in Si^12+ using a foil-stripped 1 MeV/u ion beam from the Florida State Van de Graaff accelerator and a single-mode c.w. Nd:YAG laser at 1.319 μm. To obtain a sufficient transition probability, the Si^12+ beam is merged collinearly with the laser light inside an ultra-high-finesse build-up cavity. The results should provide a clear test of current and developing calculations of QED contributions in two-electron ions.
Epilepsy Treatment Simplified through Mobile Ketogenic Diet Planning
Li, Hanzhou; Jauregui, Jeffrey L.; Fenton, Cagla; Chee, Claire M.; Bergqvist, A.G. Christina
2017-01-01
Background: The Ketogenic Diet (KD) is an effective, alternative treatment for refractory epilepsy. This high-fat, low-protein-and-carbohydrate diet mimics the metabolic and hormonal changes that are associated with fasting. Aims: To maximize the effectiveness of the KD, each meal is precisely planned, calculated, and weighed to within 0.1 gram for the average three-year duration of treatment. Managing the KD is time-consuming and may deter caretakers and patients from pursuing or continuing this treatment. Thus, we investigated methods of planning the KD faster and making the process more portable through mobile applications. Methods: Nutritional data were gathered from the United States Department of Agriculture (USDA) Nutrient Database. User-selected foods are converted into linear equations with n variables and three constraints: prescribed fat content, prescribed protein content, and prescribed carbohydrate content. Techniques are applied to derive the solutions to the underdetermined system depending on the number of foods chosen. Results: The method was implemented on an iOS device and tested with a variety of foods and different numbers of foods selected. In each case, the application’s constructed meal plan was within 95% precision of the KD requirements. Conclusion: In this study, we attempt to reduce the time needed to calculate a meal by automating the computation of the KD via a linear algebra model. We improve upon previous KD calculators by offering optimal suggestions and incorporating the USDA database. We believe this mobile application will help make the KD and other dietary treatment preparations less time-consuming and more convenient. PMID:28794808
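The underdetermined three-constraint system can be illustrated with a minimum-norm least-squares sketch (hypothetical nutrient values; the authors' actual solution techniques and the USDA data are not reproduced, and a deployable planner would also enforce nonnegative gram amounts):

```python
import numpy as np

def plan_meal(macros_per_gram, targets):
    """Solve the underdetermined 3 x n system A x = b, where x holds the
    gram amounts of each selected food and b the prescribed fat, protein
    and carbohydrate. With more than three foods, np.linalg.lstsq
    returns the minimum-norm exact solution."""
    A = np.asarray(macros_per_gram, dtype=float)
    b = np.asarray(targets, dtype=float)
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x

# Hypothetical per-gram macronutrient contents for four foods
# (rows: fat, protein, carbohydrate; one column per food).
foods = np.array([
    [0.37, 0.100, 0.810, 0.003],
    [0.02, 0.125, 0.009, 0.007],
    [0.03, 0.011, 0.001, 0.077],
])
grams = plan_meal(foods, (60.0, 15.0, 10.0))  # prescribed fat, protein, carb
```

Note that the minimum-norm solution can contain negative amounts; a real planner would add nonnegativity constraints (e.g. via constrained optimization), which the abstract's food-count-dependent techniques presumably handle.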
NASA Technical Reports Server (NTRS)
Parzen, Benjamin
1992-01-01
The theory of oscillator analysis in the immittance domain should be read in conjunction with the additional theory presented here. The combined theory enables the computer simulation of the steady state oscillator. The simulation makes the calculation of the oscillator total steady state performance practical, including noise at all oscillator locations. Some specific precision oscillators are analyzed.
High-Accuracy Comparison Between the Post-Newtonian and Self-Force Dynamics of Black-Hole Binaries
NASA Astrophysics Data System (ADS)
Blanchet, Luc; Detweiler, Steven; Le Tiec, Alexandre; Whiting, Bernard F.
The relativistic motion of a compact binary system moving in circular orbit is investigated using the post-Newtonian (PN) approximation and the perturbative self-force (SF) formalism. A particular gauge-invariant observable quantity is computed as a function of the binary's orbital frequency. The conservative effect induced by the gravitational SF is obtained numerically with high precision, and compared to the PN prediction developed to high order. The PN calculation involves the computation of the 3PN regularized metric at the location of the particle. Its divergent self-field is regularized by means of dimensional regularization. The poles ∝ (d - 3)^{-1} that occur within dimensional regularization at the 3PN order disappear from the final gauge-invariant result. The leading 4PN and next-to-leading 5PN conservative logarithmic contributions originating from gravitational wave tails are also obtained. Making use of these exact PN results, some previously unknown PN coefficients are measured up to the very high 7PN order by fitting to the numerical SF data. Using just the 2PN and new logarithmic terms, the value of the 3PN coefficient is also confirmed numerically with very high precision. The consistency of this cross-cultural comparison provides a crucial test of the very different regularization methods used in both SF and PN formalisms, and illustrates the complementarity of these approximation schemes when modeling compact binary systems.
NASA Astrophysics Data System (ADS)
Dickenson, G. D.; Salumbides, E. J.; Niu, M.; Jungen, Ch.; Ross, S. C.; Ubachs, W.
2012-09-01
Recently a high precision spectroscopic investigation of the EF1Σg+-X1Σg+ system of molecular hydrogen was reported yielding information on QED and relativistic effects in a sequence of rotational quantum states in the X1Σg+ ground state of the H2 molecule [Salumbides et al., Phys. Rev. Lett. 107, 043005 (2011)]. The present paper presents a more detailed description of the methods and results. Furthermore, the paper serves as a stepping stone towards a continuation of the previous study by extending the known level structure of the EF1Σg+ state to highly excited rovibrational levels through Doppler-free two-photon spectroscopy. Based on combination differences between vibrational levels in the ground state, and between three rotational branches (O, Q, and S branches) assignments of excited EF1Σg+ levels, involving high vibrational and rotational quantum numbers, can be unambiguously made. For the higher EF1Σg+ levels, where no combination differences are available, calculations were performed using the multichannel quantum defect method, for a broad class of vibrational and rotational levels up to J=19. These predictions were used for assigning high-J EF levels and are found to be accurate within 5 cm-1.
ERIC Educational Resources Information Center
Hagedorn, Linda Serra
1998-01-01
A study explored two distinct methods of calculating a precise measure of gender-based wage differentials among college faculty. The first estimation considered wage differences using a formula based on human capital; the second included compensation for past discriminatory practices. Both measures were used to predict three specific aspects of…
ResBos2: Precision Resummation for the LHC ERA
NASA Astrophysics Data System (ADS)
Isaacson, Joshua Paul
With the precision of data at the LHC, it is important to advance theoretical calculations to match it. Previously, the ResBos code was insufficient to adequately describe the data at the LHC. This required an advancement of the ResBos code, and led to the development of the ResBos2 package. This thesis discusses some of the major improvements that were implemented to advance the code and prepare it for the precision of the LHC. The resummation for color-singlet particles is improved from approximate NNLL+NLO accuracy to N3LL+NNLO accuracy. The ResBos2 calculation of the total cross-section for Drell-Yan processes is validated against fixed-order calculations, to ensure that the calculations are performed correctly. This allows for a prediction of the transverse momentum and φ*_η distributions for the Z boson that is consistent with the data from ATLAS at a collider energy of √s = 8 TeV. Also, the effects of the choice of resummation scheme are investigated for the Collins-Soper-Sterman and Catani-de Florian-Grazzini formalisms. It is shown that as long as each calculation is performed such that the order of the B coefficient is exactly one order higher than that of the C and H coefficients, the two formalisms are consistent. Additionally, the improved theoretical prediction will help to reduce the theoretical uncertainty on the mass of the W boson, by reducing the uncertainty in extrapolating the dσ/dp_T distribution of the W boson from the measured dσ/dp_T distribution of the Z boson through the ratio of the theory predictions for the Z and W transverse momenta. In addition to improving the accuracy of the color-singlet final-state resummation calculations, the ResBos2 code introduces the resummation of non-color-singlet final states. The details of the Higgs-plus-jet calculation are presented as an example of one such process.
It is shown that it is possible to perform this resummation, but the resummation formalism needs to be modified in order to do so. The major modification is the inclusion of the jet cone-size dependence in the Sudakov form factor. This result resolves, analytically, the Sudakov shoulder singularity. The ResBos2 predictions are compared to both fixed-order and parton-shower calculations, and are shown to be consistent for all of the distributions considered, up to the theoretical uncertainty. As the LHC continues to accumulate data and improve the precision of these observables, the ability to perform analytic resummation calculations for non-color-singlet final states will provide a strong check of perturbative QCD. Finally, the calculation of the terms needed to match to N3LO is carried out in this work. Once the perturbative results become publicly available, the ResBos2 code can easily be extended to include these corrections, and be used to predict the total cross-section at N3LO as well.
Influence of ion chamber response on in-air profile measurements in megavoltage photon beams.
Tonkopi, E; McEwen, M R; Walters, B R B; Kawrakow, I
2005-09-01
This article presents an investigation of the influence of the ion chamber response, including buildup caps, on the measurement of in-air off-axis ratio (OAR) profiles in megavoltage photon beams using Monte Carlo simulations with the EGSnrc system. Two new techniques for the calculation of OAR profiles are presented. Results of the Monte Carlo simulations are compared to measurements performed in 6, 10 and 25 MV photon beams produced by an Elekta Precise linac and shown to agree within the experimental and simulation uncertainties. Comparisons with calculated in-air kerma profiles demonstrate that using a plastic mini phantom gives more accurate air-kerma measurements than using high-Z material buildup caps and that the variation of chamber response with distance from the central axis must be taken into account.
NASA Astrophysics Data System (ADS)
Vitali, Ettore; Shi, Hao; Qin, Mingpu; Zhang, Shiwei
2017-12-01
Experiments with ultracold atoms provide a highly controllable laboratory setting with many unique opportunities for precision exploration of quantum many-body phenomena. The nature of such systems, with strong interaction and quantum entanglement, makes reliable theoretical calculations challenging. Especially difficult are excitation and dynamical properties, which are often the most directly relevant to experiment. We carry out exact numerical calculations, by Monte Carlo sampling of imaginary-time propagation of Slater determinants, to compute the pairing gap in the two-dimensional Fermi gas from first principles. Applying state-of-the-art analytic continuation techniques, we obtain the spectral function and the density and spin structure factors providing unique tools to visualize the BEC-BCS crossover. These quantities will allow for a direct comparison with experiments.
Small Radiation Beam Dosimetry for Radiosurgery of Trigeminal Neuralgia: One Case Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Garcia-Garduno, O. A.; Larraga-Gutierrez, J. M.; Unidad de Radioneurocirugia, Instituto Nacional de Neurologia y Neurocirugia. Insurgentes Sur 3677, Col. La Fama, C. P. 14269, Tlalpan, Mexico, D. F.
2008-08-11
The use of small radiation beams for trigeminal neuralgia (TN) treatment requires high precision and accuracy in dose distribution calculations and delivery. Special attention must be paid to the type of detector to be used. In this work, the use of GafChromic EBT radiochromic and X-OMAT V2 radiographic films for small radiation beam characterization is reported. The dosimetric information provided by the films (total output factors, tissue maximum ratios and off-axis ratios) is compared against measurements with a shielded solid-state (diode) reference detector. The film dosimetry was used for dose distribution calculations for the treatment of trigeminal neuralgia radiosurgery. Comparison of the isodose curves shows that the dosimetry produced with the X-OMAT radiographic film overestimates the dose distributions in the penumbra region.
Validity of the "Laplace Swindle" in Calculation of Giant-Planet Gravity Fields
NASA Astrophysics Data System (ADS)
Hubbard, William B.
2014-11-01
Jupiter and Saturn have large rotation-induced distortions, providing an opportunity to constrain interior structure via precise measurement of external gravity. Anticipated high-precision gravity measurements close to the surfaces of Jupiter (Juno spacecraft) and Saturn (Cassini spacecraft), possibly detecting zonal harmonics to J10 and beyond, will place unprecedented requirements on gravitational modeling via the theory of figures (TOF). It is not widely appreciated that the traditional TOF employs a formally nonconvergent expansion attributed to Laplace. This suspect expansion is intimately related to the standard zonal harmonic (J-coefficient) expansion of the external gravity potential. It can be shown (Hubbard, Schubert, Kong, and Zhang: Icarus, in press) that both Jupiter and Saturn are in the domain where Laplace's "swindle" works exactly, or at least as well as necessary. More highly-distorted objects such as rapidly spinning asteroids may not be in this domain, however. I present a numerical test for the validity and precision of TOF via polar "audit points". I extend the audit-point test to objects rotating differentially on cylinders, obtaining zonal harmonics to J20 and beyond. Models with only low-order differential rotation do not exhibit dramatic effects in the shape of the zonal harmonic spectrum. However, a model with Jupiter-like zonal winds exhibits a break in the zonal harmonic spectrum above about J10, and generally follows the more shallow Kaula power rule at higher orders. This confirms an earlier result obtained by a different method (Hubbard: Icarus 137, 357-359, 1999).
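The zonal-harmonic expansion of the external potential that these J-coefficients enter can be sketched as follows (standard textbook form of the expansion, not the theory-of-figures audit-point code itself):

```python
import numpy as np
from scipy.special import eval_legendre

def zonal_potential(r, colatitude, GM, a, J):
    """External gravitational potential with zonal harmonics:
    V = -(GM/r) * (1 - sum_n J_n (a/r)^n P_n(cos(colatitude))),
    with J a dict {n: J_n} and a the reference equatorial radius."""
    c = np.cos(colatitude)
    s = sum(Jn * (a / r) ** n * eval_legendre(n, c) for n, Jn in J.items())
    return -(GM / r) * (1.0 - s)
```

Evaluating this series along the axis and at "audit points" against an independent computation of the potential is the kind of consistency check the abstract describes for testing the validity and precision of the expansion.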
Lattice Calculations and the Muon Anomalous Magnetic Moment
NASA Astrophysics Data System (ADS)
Marinković, Marina Krstić
2017-07-01
Anomalous magnetic moment of the muon, a_μ = (g_μ - 2)/2, is one of the most precisely measured quantities in particle physics and it provides a stringent test of the Standard Model. The planned improvements of the experimental precision at Fermilab and at J-PARC propel further reduction of the theoretical uncertainty of a_μ. The hope is that the efforts on both sides will help resolve the current discrepancy between the experimental measurement of a_μ and its theoretical prediction, and potentially gain insight into new physics. The dominant sources of the uncertainty in the theoretical prediction of a_μ are the errors of the hadronic contributions. I will discuss recent progress on the determination of hadronic contributions to a_μ from lattice calculations.
Validation of a new reference depletion calculation for thermal reactors
NASA Astrophysics Data System (ADS)
Canbakan, Axel
Resonance self-shielding calculations are an essential component of a deterministic lattice code calculation. Even though their aim is to correct the cross-section deviations, they introduce a non-negligible error in evaluated parameters such as the flux. Until now, French studies for light water reactors have been based on effective reaction rates obtained using an equivalence-in-dilution technique. With the increase of computing capacities, this method starts to show its limits in precision and can be replaced by a subgroup method. Originally used for fast neutron reactor calculations, the subgroup method has many advantages, such as using an exact slowing-down equation. The aim of this thesis is to suggest a validation as precise as possible, first without burnup and then with an isotopic depletion study, for the subgroup method. In the end, users interested in implementing a subgroup method in their scheme for Pressurized Water Reactors can rely on this thesis to justify their modeling choices. Moreover, other parameters are validated to suggest a new reference scheme with fast execution and precise results. These new techniques are implemented in the French lattice scheme SHEM-MOC, composed of a Method Of Characteristics (MOC) flux calculation and a SHEM-like 281-energy-group mesh. First, the libraries processed by the CEA are compared. Then, this thesis suggests the most suitable energetic discretization for a subgroup method. Finally, other techniques such as the representation of the anisotropy of the scattering sources and the spatial representation of the source in the MOC calculation are studied. A DRAGON5 scheme is also validated as it shows interesting elements: the DRAGON5 subgroup method is run with a 295-energy-group mesh (compared to 361 groups for APOLLO2). There are two reasons to use this code. The first involves offering a new reference lattice scheme for Pressurized Water Reactors to DRAGON5 users.
The second is to study parameters that are not available in APOLLO2, such as self-shielding in a temperature gradient and a flux calculation based on MOC in the self-shielding part of the simulation. This thesis concludes that: (1) the subgroup method is more precise than a technique based on effective reaction rates only if a 361-energy-group mesh is used; (2) MOC with a linear source in a geometrical region gives better results than MOC with a constant source model, and a moderator discretization is compulsory; (3) a P3 scattering law is satisfactory, ensuring coherence with 2D full-core calculations; (4) SHEM295 is viable with a Subgroup Projection Method for DRAGON5.
Application of hybrid artificial fish swarm algorithm based on similar fragments in VRP
NASA Astrophysics Data System (ADS)
Che, Jinnuo; Zhou, Kang; Zhang, Xueyu; Tong, Xin; Hou, Lingyun; Jia, Shiyu; Zhen, Yiting
2018-03-01
To address the decrease of convergence speed and calculation precision at the end of the optimization process in the Artificial Fish Swarm Algorithm (AFSA), as well as the instability of its results, a hybrid AFSA based on similar fragments is proposed. Traditional AFSA enjoys obvious advantages in solving complex optimization problems like the Vehicle Routing Problem (VRP), but it has a few limitations, such as low convergence speed, low precision and instability of results. In this paper, two improvements are introduced. On the one hand, the definition of the distance for artificial fish is changed and the vision field of artificial fish is increased, which improves speed and precision when solving VRP. On the other hand, the artificial bee colony algorithm (ABC) is mixed into AFSA: the population of artificial fish is initialized by the ABC, which alleviates the instability of results to some extent. The experimental results demonstrate that the optimal solution of the hybrid AFSA approaches the optimal solution of the standard database more closely than the other two algorithms. In conclusion, the hybrid algorithm can effectively address the instability of results and the decrease of convergence speed and calculation precision at the end of the process.
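A minimal sketch of the basic AFSA "prey" behaviour on a toy continuous minimization problem may help fix ideas (illustrative only; the paper's VRP encoding, improved distance definition, enlarged vision field, and ABC initialization are not reproduced):

```python
import random

def prey_step(fish, fitness, rng, visual=1.0, tries=5):
    """AFSA 'prey' behaviour: sample candidate points within the visual
    range and move to the first one that improves fitness (minimization)."""
    for _ in range(tries):
        cand = [x + rng.uniform(-visual, visual) for x in fish]
        if fitness(cand) < fitness(fish):
            return cand
    return fish  # no improvement found: stay put

def afsa_minimise(fitness, dim=2, fish_count=10, iters=200, seed=1):
    """Tiny AFSA loop tracking the best fish (the 'bulletin board')."""
    rng = random.Random(seed)
    school = [[rng.uniform(-5.0, 5.0) for _ in range(dim)]
              for _ in range(fish_count)]
    best = min(school, key=fitness)
    for _ in range(iters):
        school = [prey_step(f, fitness, rng) for f in school]
        cand = min(school, key=fitness)
        if fitness(cand) < fitness(best):
            best = cand
    return best
```

Late in the run most sampled candidates no longer improve, which is exactly the end-of-process slowdown in convergence and precision that the hybrid method above targets.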
A comparative study of measurements from radiosondes, rocketsondes, and satellites
NASA Technical Reports Server (NTRS)
Nestler, M. S.
1983-01-01
Direct comparisons of operational products derived from measurements of radiance by satellites to measurements from conventional in situ sensors are important for the evaluation of satellite systems. However, errors in the in situ measurements themselves complicate such comparisons. Atmospheric temporal and spatial variability are also influential. These issues are investigated by means of a special field program composed of flights of dual radiosondes and multiple radiosondes launched near the time of NOAA-6 overpasses. Satellite-derived mean layer temperatures, geopotential heights, and winds are compared with the same quantities determined from the in situ sensors. Of particular interest is the impact of in situ errors on these comparisons. It is shown that the radiosonde provides a precise pressure-height relationship and therefore precise data for synoptic-type use. Radar tracking of the radiosondes reveals, however, an imprecise pressure measurement which causes large differences between the actual altitude of the radiosonde and the altitude at which it is calculated to be. Radiosondes should be radar-tracked and pressures calculated if the data are to be used for purposes other than synoptic use. Evaluation of rocketsonde data reveals a temperature precision of 1 to 2 K below about 55 km. Above 55 km, the precision decreases rapidly; rms differences of up to 11 K are obtained.
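The pressure-height relationship at issue is the hypsometric equation; a sketch (standard constants; a layer-mean virtual temperature is assumed as input):

```python
import math

R_D = 287.05   # dry-air gas constant, J kg^-1 K^-1
G0 = 9.80665   # standard gravity, m s^-2

def thickness(p_bottom, p_top, mean_temp_k):
    """Hypsometric equation: geopotential thickness (m) of the layer
    between two pressure levels (any consistent pressure units, since
    only the ratio enters) at the layer-mean temperature."""
    return (R_D * mean_temp_k / G0) * math.log(p_bottom / p_top)
```

Because height is obtained by integrating this relation over the reported pressures, a small pressure error maps directly into the altitude discrepancies revealed by radar tracking.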
Does choice of estimators influence conclusions from true metabolizable energy feeding trials?
Sherfy, M.H.; Kirkpatrick, R.L.; Webb, K.E.
2005-01-01
True metabolizable energy (TME) is a measure of avian dietary quality that accounts for metabolic fecal and endogenous urinary energy losses (EL) of non-dietary origin. The TME is calculated using a bird fed the test diet and an estimate of EL derived from another bird (Paired Bird Correction), the same bird (Self Correction), or several other birds (Group Mean Correction). We evaluated the precision of these estimators by using each to calculate the TME of three seed diets in blue-winged teal (Anas discors). The TME varied by <2% among estimators for all three diets, and Self Correction produced the least variable TMEs for each. The TME did not differ between estimators in nine paired comparisons within diets, but variation between estimators within individual birds was sufficient to be of practical consequence. Although differences in precision among methods were slight, Self Correction required the lowest sample size to achieve a given precision. Feeding trial methods that minimize variation among individuals have several desirable properties, including higher precision of TME estimates and more rigorous experimental control. Consequently, we believe that Self Correction is most likely to accurately represent the nutritional value of food items and should be considered the standard method for TME feeding trials. © Dt. Ornithologen-Gesellschaft e.V. 2005.
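The TME bookkeeping can be sketched as follows (illustrative units and numbers; the choice of estimator only changes where the endogenous-loss term EL comes from - the same bird fasted, a paired bird, or a group mean):

```python
def tme(gross_energy_fed_kj, excreta_energy_kj, endogenous_losses_kj, grams_fed):
    """True metabolizable energy (kJ/g): energy of the food fed, minus
    energy voided in excreta, with the non-dietary losses EL added back,
    per gram of food consumed."""
    return (gross_energy_fed_kj - excreta_energy_kj
            + endogenous_losses_kj) / grams_fed
```

For example, a bird fed 20 g of seed containing 400 kJ that voids 120 kJ, with an EL estimate of 20 kJ, yields a TME of 15 kJ/g; substituting a Self, Paired Bird, or Group Mean EL estimate into the same formula gives the three estimators compared above.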
Active Optics: stress polishing of toric mirrors for the VLT SPHERE adaptive optics system.
Hugot, Emmanuel; Ferrari, Marc; El Hadi, Kacem; Vola, Pascal; Gimenez, Jean Luc; Lemaitre, Gérard R; Rabou, Patrick; Dohlen, Kjetil; Puget, Pascal; Beuzit, Jean Luc; Hubin, Norbert
2009-05-20
The manufacturing of toric mirrors for the Very Large Telescope-Spectro-Polarimetric High-Contrast Exoplanet Research instrument (SPHERE) is based on Active Optics and stress polishing. This figuring technique allows minimizing mid and high spatial frequency errors on an aspherical surface by using spherical polishing with full size tools. In order to reach the tight precision required, the manufacturing error budget is described to optimize each parameter. Analytical calculations based on elasticity theory and finite element analysis lead to the mechanical design of the Zerodur blank to be warped during the stress polishing phase. Results on the larger (366 mm diameter) toric mirror are evaluated by interferometry. We obtain, as expected, a toric surface within specification at low, middle, and high spatial frequencies ranges.
Wolever, Thomas M S
2004-02-01
To evaluate the suitability for glycaemic index (GI) calculations of blood sampling schedules and methods of calculating the area under the curve (AUC) different from those recommended, the GI values of five foods were determined by the recommended method (capillary blood glucose measured seven times over 2.0 h) in forty-seven normal subjects, and different calculations were performed on the same data set. The AUC was calculated in four ways: incremental AUC (iAUC; the recommended method), iAUC above the minimum blood glucose value (AUCmin), net AUC (netAUC), and iAUC including only the area before the glycaemic response curve cuts the baseline (AUCcut). In addition, iAUC was calculated using four different sets of fewer than seven blood samples. GI values were derived using each AUC calculation. The mean GI values of the foods varied significantly according to the method of calculation. The standard deviation of GI values calculated using iAUC (20.4) was lower than for six of the seven other methods, and significantly less (P<0.05) than that using netAUC (24.0). To be a valid index of food glycaemic response independent of subject characteristics, GI values in subjects should not be related to their AUC after oral glucose. However, calculating GI using AUCmin or fewer than seven blood samples resulted in significant (P<0.05) relationships between GI and mean AUC. It is concluded that, in subjects without diabetes, the recommended blood sampling schedule and method of AUC calculation yield more valid and/or more precise GI values than the seven other methods tested here. The only method whose results agreed reasonably well with the recommended method (i.e. within ±5%) was AUCcut.
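The recommended iAUC can be sketched as the trapezoidal area above the fasting baseline, with excursions below the baseline contributing nothing. This is a common simplification: the full geometric method treats segments that cross the baseline between samples slightly differently. All glucose values below are hypothetical:

```python
import numpy as np

def incremental_auc(t_min, glucose_mmol, baseline=None):
    """Trapezoidal area above the fasting baseline; dips below the
    baseline are clipped to zero, per the incremental-AUC idea."""
    t = np.asarray(t_min, dtype=float)
    g = np.asarray(glucose_mmol, dtype=float)
    baseline = g[0] if baseline is None else baseline
    incr = np.clip(g - baseline, 0.0, None)
    # version-proof trapezoid rule (avoids the deprecated np.trapz)
    return float(np.sum(0.5 * (incr[1:] + incr[:-1]) * np.diff(t)))

t = [0, 15, 30, 45, 60, 90, 120]                # the recommended 7 samples over 2 h
food = [5.0, 7.0, 8.0, 7.0, 6.0, 5.0, 4.5]      # hypothetical test food (mmol/L)
glucose = [5.0, 8.0, 9.5, 8.5, 7.0, 5.5, 5.0]   # hypothetical reference glucose
gi = 100.0 * incremental_auc(t, food) / incremental_auc(t, glucose)
```

The GI is then the ratio of the food's iAUC to the reference glucose iAUC in the same subject, scaled to 100.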
Change Detection in High-Resolution Remote Sensing Images Using Levene-Test and Fuzzy Evaluation
NASA Astrophysics Data System (ADS)
Wang, G. H.; Wang, H. B.; Fan, W. F.; Liu, Y.; Liu, H. J.
2018-04-01
High-resolution remote sensing images possess complex spatial structure and rich texture information. Building on these characteristics, this paper presents a new change detection method based on the Levene test and fuzzy evaluation. The method first obtains map-spots by segmenting two overlapping, pre-processed images, and extracts features such as spectrum and texture. The change information of all map-spots screened by the Levene test is then counted to obtain the candidate changed regions; hue information (the H component) is extracted through the IHS transform, and change vector analysis is conducted in combination with the texture information. Finally, the threshold is determined by an iterative method, the membership degrees of the candidate changed regions are calculated, and the final changed regions are determined. Experimental results on multi-temporal ZY-3 high-resolution images of an area in Jiangsu Province show that, by extracting the map-spots with larger differences as the candidate changed regions, the Levene test decreases the computing load, improves the precision of change detection, and shows better fault tolerance for unchanged regions with relatively large differences. The combination of hue-texture features and the fuzzy evaluation method effectively decreases omissions and false detections, further improving the precision of change detection.
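The variance-screening step can be illustrated with SciPy's Levene test on synthetic map-spot pixel samples. This sketches only the statistical test, not the paper's segmentation or fuzzy-evaluation pipeline, and the sample values are invented:

```python
import numpy as np
from scipy.stats import levene

rng = np.random.default_rng(0)
unchanged = rng.normal(0.5, 0.02, 200)   # stable map-spot feature values
changed = rng.normal(0.5, 0.30, 200)     # same mean, much larger spread

stat, p = levene(unchanged, changed)     # tests equality of variances
is_candidate = p < 0.05                  # flag as a candidate changed region
```

Because the Levene test compares variances rather than means, it flags regions whose pixel-value spread differs between dates even when the average brightness is unchanged.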
$B$- and $D$-meson leptonic decay constants from four-flavor lattice QCD
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bazavov, A.; Bernard, C.; Brown, N.
We calculate the leptonic decay constants of heavy-light pseudoscalar mesons with charm and bottom quarks in lattice quantum chromodynamics on four-flavor QCD gauge-field configurations with dynamical $u$, $d$, $s$, and $c$ quarks. We analyze over twenty isospin-symmetric ensembles with six lattice spacings down to $a \approx 0.03$ fm and several values of the light-quark mass down to the physical value $\frac{1}{2}(m_u+m_d)$. We employ the highly improved staggered-quark (HISQ) action for the sea and valence quarks; on the finest lattice spacings, discretization errors are sufficiently small that we can calculate the $B$-meson decay constants with the HISQ action for the first time directly at the physical $b$-quark mass. We obtain the most precise determinations to date of the $D$- and $B$-meson decay constants and their ratios: $f_{D^+} = 212.6(0.5)$ MeV, $f_{D_s} = 249.8(0.4)$ MeV, $f_{D_s}/f_{D^+} = 1.1749(11)$, $f_{B^+} = 189.4(1.4)$ MeV, $f_{B_s} = 230.7(1.2)$ MeV, $f_{B_s}/f_{B^+} = 1.2180(49)$, where the errors include statistical and all systematic uncertainties. Our results for the $B$-meson decay constants are three times more precise than the previous best lattice-QCD calculations, and bring the QCD errors in the Standard-Model predictions for the rare leptonic decays $\overline{\mathcal{B}}(B_s \to \mu^+\mu^-) = 3.65(11) \times 10^{-9}$, $\overline{\mathcal{B}}(B^0 \to \mu^+\mu^-) = 1.00(3) \times 10^{-11}$, and $\overline{\mathcal{B}}(B^0 \to \mu^+\mu^-)/\overline{\mathcal{B}}(B_s \to \mu^+\mu^-) = 0.00264(7)$ to well below other sources of uncertainty. As a byproduct of our analysis, we also update our previously published results for the light-quark-mass ratios and the scale-setting quantities $f_{p4s}$, $M_{p4s}$, and $R_{p4s}$. We obtain the most precise lattice-QCD determination to date of the ratio $f_{K^+}/f_{\pi^+} = 1.1950(^{+15}_{-22})$.
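A quick check on the quoted ratio errors: propagating the individual decay-constant errors as if they were uncorrelated gives an uncertainty on $f_{B_s}/f_{B^+}$ more than twice the quoted 0.0049, which illustrates how strongly correlated the numerator and denominator are in the joint lattice analysis. Illustrative arithmetic only:

```python
from math import hypot

def ratio_with_uncorrelated_error(a, da, b, db):
    """Error on r = a/b assuming a and b are statistically independent."""
    r = a / b
    return r, r * hypot(da / a, db / b)

# f_Bs = 230.7(1.2) MeV, f_B+ = 189.4(1.4) MeV, from the abstract
r, dr = ratio_with_uncorrelated_error(230.7, 1.2, 189.4, 1.4)
# naive dr comes out near 0.011, versus the quoted correlated error of 0.0049
```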
Elwan, Ahmed; Singh, Ranvir; Patterson, Maree; Roygard, Jon; Horne, Dave; Clothier, Brent; Jones, Geoffrey
2018-01-11
Better management of water quality in streams, rivers and lakes requires precise and accurate estimates of different contaminant loads. We assessed four sampling frequencies (2-day, weekly, fortnightly and monthly) and five load calculation methods (global mean (GM), rating curve (RC), ratio estimator (RE), flow-stratified (FS) and flow-weighted (FW)) for quantifying loads of nitrate-nitrogen (NO3-N), soluble inorganic nitrogen (SIN), total nitrogen (TN), dissolved reactive phosphorus (DRP), total phosphorus (TP) and total suspended solids (TSS) in the Manawatu River, New Zealand. The estimated annual river loads were compared to reference 'true' loads, calculated using daily measurements of flow and water quality from May 2010 to April 2011, to quantify bias (i.e. accuracy) and root mean square error, RMSE (i.e. accuracy and precision). The GM method resulted in relatively higher RMSE values and a consistent negative bias (i.e. underestimation) in estimates of annual river loads across all sampling frequencies. The RC method gave the lowest RMSE for TN, TP and TSS at the monthly sampling frequency, yet greatly overestimated the loads of parameters that showed a dilution effect, such as NO3-N and SIN. The FW and RE methods gave similar results, with no essential improvement from using RE over FW. In general, FW and RE performed better than FS in terms of bias, but FS performed slightly better than FW and RE in terms of RMSE for most of the water quality parameters (DRP, TP, TN and TSS) at a monthly sampling frequency. We found no significant decrease in RMSE values for estimates of NO3-N, SIN, TN and DRP loads when the sampling frequency was increased from monthly to fortnightly. The bias and RMSE values in estimates of TP and TSS loads (estimated by FW, RE and FS), however, decreased significantly with weekly or 2-day sampling.
This suggests potential for a higher sampling frequency during flow peaks to obtain more precise and accurate estimates of annual river loads of TP and TSS, in the study river and under similar conditions elsewhere.
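The simplest two estimators compared above can be sketched as follows. Units are assumed (concentration in g/m3, daily mean flow in m3/day, so loads come out in grams), and this is a simplified illustration, not the study's code:

```python
import numpy as np

def global_mean_load(c_sampled, q_daily):
    """GM: mean sampled concentration times mean daily flow times number of days."""
    return np.mean(c_sampled) * np.mean(q_daily) * len(q_daily)

def flow_weighted_load(c_sampled, q_sampled, q_daily):
    """FW: flow-weighted mean concentration times mean daily flow times days,
    so samples taken at high flow carry proportionally more weight."""
    fw_conc = np.sum(np.asarray(c_sampled) * np.asarray(q_sampled)) / np.sum(q_sampled)
    return fw_conc * np.mean(q_daily) * len(q_daily)

c, q_s = [1.0, 2.0], [10.0, 30.0]      # sparse samples: concentration, flow on sample days
q_d = [10.0, 20.0, 30.0, 20.0]         # continuous daily flow record
gm = global_mean_load(c, q_d)          # 1.5 g/m3 * 20 m3/day * 4 days
fw = flow_weighted_load(c, q_s, q_d)   # 1.75 g/m3 * 20 m3/day * 4 days
```

For a contaminant whose concentration rises with flow, FW exceeds GM, which is one mechanism behind the consistent GM underestimation reported above.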
Precise orbit determination of the Fengyun-3C satellite using onboard GPS and BDS observations
NASA Astrophysics Data System (ADS)
Li, Min; Li, Wenwen; Shi, Chuang; Jiang, Kecai; Guo, Xiang; Dai, Xiaolei; Meng, Xiangguang; Yang, Zhongdong; Yang, Guanglin; Liao, Mi
2017-11-01
The GNSS Occultation Sounder (GNOS) instrument onboard the Chinese meteorological satellite Fengyun-3C (FY-3C) tracks both GPS and BDS signals for orbit determination. One month of onboard dual-frequency GPS and BDS data from the FY-3C satellite, collected during March 2015, is analyzed in this study. The onboard BDS and GPS measurement quality is evaluated in terms of data quantity as well as code multipath error. Severe multipath errors are observed for BDS code ranges, especially at high elevations for the BDS medium Earth orbit (MEO) satellites. The code multipath errors are estimated with a piecewise linear model on a 2° × 2° grid and applied in precise orbit determination (POD) calculations. POD of FY-3C is first performed with GPS data only, which shows an orbit consistency of approximately 2.7 cm in 3D RMS (root mean square) by overlap comparisons; the estimated orbits are then used as reference orbits for evaluating the orbit precision of combined GPS and BDS POD as well as BDS-only POD. The results indicate that the inclusion of BDS geosynchronous orbit (GEO) satellites can seriously degrade POD precision: with GEOs included, the orbit precisions of combined POD and BDS-only POD are 3.4 and 30.1 cm in 3D RMS, respectively. If BDS GEOs are excluded, however, the combined POD reaches a precision comparable to GPS-only POD, with orbit differences of about 0.8 cm, while the orbit precision of BDS-only POD improves to 8.4 cm. These results indicate that POD with onboard BDS data alone can reach a precision better than 10 cm with only five BDS inclined geosynchronous orbit (IGSO) satellites and three MEOs. As the GNOS receiver can track at most six BDS satellites for positioning, the performance of POD with onboard BDS data can be expected to improve further once this channel restriction is removed.
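A minimal sketch of the grid-based multipath correction described above: bin code residuals into 2° cells and average them, so the map can later be subtracted from raw code observations. The grid axes (azimuth, elevation) and plain averaging are assumptions, since the abstract does not specify the implementation:

```python
import numpy as np

def build_multipath_grid(az_deg, el_deg, residual_m, cell=2.0):
    """Average code residuals in (azimuth, elevation) cells of `cell` degrees.
    The resulting grid can be interpolated and applied as a correction
    to the code observations before the POD run."""
    n_az, n_el = int(360 / cell), int(90 / cell)
    sums = np.zeros((n_az, n_el))
    counts = np.zeros((n_az, n_el))
    ia = np.minimum((np.asarray(az_deg) / cell).astype(int), n_az - 1)
    ie = np.minimum((np.asarray(el_deg) / cell).astype(int), n_el - 1)
    np.add.at(sums, (ia, ie), residual_m)   # accumulate residuals per cell
    np.add.at(counts, (ia, ie), 1)
    return np.where(counts > 0, sums / np.maximum(counts, 1), 0.0)

grid = build_multipath_grid([1.0, 1.5, 10.0], [5.0, 5.5, 45.0], [0.4, 0.6, 1.0])
```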
Research on the tool holder mode in high speed machining
NASA Astrophysics Data System (ADS)
Zhenyu, Zhao; Yongquan, Zhou; Houming, Zhou; Xiaomei, Xu; Haibin, Xiao
2018-03-01
High speed machining technology can improve processing efficiency and precision while reducing processing cost, and is therefore highly regarded in industry. With the extensive application of high-speed machining, high-speed tool systems place ever higher requirements on the tool chuck. At present, several new kinds of chucks are used in high-speed precision machining, including heat-shrink tool holders, high-precision spring chucks, hydraulic tool holders, and three-rib deformation chucks. Among them, the heat-shrink tool holder is widely used for its high precision, high clamping force, high bending rigidity, and good dynamic balance. It is therefore of great significance to study the new requirements placed on the machining tool system. To meet the requirements of high-speed precision machining, this paper reviews the common tool holder technologies for high-precision machining, analyzes the characteristics and existing problems of tool clamping systems, and proposes how to correctly select a tool clamping system in practice.
Performance study of deterministic solvers on a sodium-cooled fast reactor core
NASA Astrophysics Data System (ADS)
Bay, Charlotte
Next-generation reactors, in particular the SFR design, represent a real challenge for current codes and solvers, which are mainly used for thermal cores. There is no guarantee that their capabilities can be directly transferred to a fast neutron spectrum or to major design differences, so it is necessary to assess the validity of these solvers and their potential shortcomings for fast neutron reactors. Carried out during an internship with CEA (France), and at the instigation of the EPM Nuclear Institute, this study covers the following codes: DRAGON/DONJON, ERANOS, PARIS and APOLLO3. Precision was assessed against the Monte Carlo code TRIPOLI4. Only core calculations were of interest, namely the precision and speed of the numerical methods; the lattice calculation (nuclear data, self-shielding, isotopic compositions) was outside the scope of the study, as were burnup and time-evolution effects. The study consists of two main steps: first, evaluating the sensitivity of each solver to its calculation parameters and determining its optimal parameter set; then, comparing the solvers in terms of precision and speed by collecting the usual quantities (effective multiplication factor, reaction rate maps) as well as quantities crucial to SFR design, namely control rod worth and the sodium void effect. Calculation time is also a key factor. Any conclusions or recommendations drawn from this study should first of all be applied within similar frameworks, that is to say small fast neutron cores with hexagonal geometry; possible extensions to large cores would have to be demonstrated in follow-up work.
NASA Astrophysics Data System (ADS)
Regnier, D.; Dubray, N.; Verrière, M.; Schunck, N.
2018-04-01
The time-dependent generator coordinate method (TDGCM) is a powerful method to study the large amplitude collective motion of quantum many-body systems such as atomic nuclei. Under the Gaussian Overlap Approximation (GOA), the TDGCM leads to a local, time-dependent Schrödinger equation in a multi-dimensional collective space. In this paper, we present version 2.0 of the code FELIX that solves the collective Schrödinger equation in a finite element basis. This new version features: (i) the ability to solve a generalized TDGCM+GOA equation with a metric term in the collective Hamiltonian, (ii) support for new kinds of finite elements and different types of quadrature to compute the discretized Hamiltonian and overlap matrices, (iii) the possibility to leverage the spectral element scheme, (iv) an explicit Krylov approximation of the time propagator for time integration instead of the implicit Crank-Nicolson method implemented in the first version, (v) an entirely redesigned workflow. We benchmark this release on an analytic problem as well as on realistic two-dimensional calculations of the low-energy fission of 240Pu and 256Fm. Low to moderate numerical precision calculations are most efficiently performed with simplex elements with a degree 2 polynomial basis. Higher precision calculations should instead use the spectral element method with a degree 4 polynomial basis. We emphasize that in a realistic calculation of fission mass distributions of 240Pu, FELIX-2.0 is about 20 times faster than its previous release (within a numerical precision of a few percent).
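Feature (iv) replaces the implicit Crank-Nicolson propagator with an explicit Krylov-type approximation of exp(-iHt). The contrast between the two schemes can be sketched on a toy Hermitian Hamiltonian using SciPy; this is an illustration of the two propagators, not FELIX's implementation, and the matrix and units are hypothetical:

```python
import numpy as np
from scipy.sparse import csc_matrix, identity
from scipy.sparse.linalg import expm_multiply, spsolve

# Toy Hermitian "collective Hamiltonian": a random symmetric matrix
rng = np.random.default_rng(1)
M = rng.standard_normal((50, 50))
H = csc_matrix((M + M.T) / 2)

psi = rng.standard_normal(50).astype(complex)
psi /= np.linalg.norm(psi)
dt = 1e-3

# Krylov-flavoured step: apply exp(-i*H*dt) to the state directly
psi_krylov = expm_multiply(-1j * dt * H, psi)

# Crank-Nicolson step: solve (I + i*dt/2*H) psi' = (I - i*dt/2*H) psi
I = identity(50, dtype=complex, format="csc")
psi_cn = spsolve(I + 0.5j * dt * H, (I - 0.5j * dt * H) @ psi)
```

Both schemes conserve the norm for a Hermitian H (the Crank-Nicolson step is a Cayley transform), and for a small time step they agree to third order in dt; the practical trade-off is matrix solves per step versus matrix-vector products.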
NASA Astrophysics Data System (ADS)
Laflamme Janssen, Jonathan
This thesis studies the limitations of density functional theory. These limits are explored in the context of a traditional implementation using a plane-wave basis set. First, we investigate the limit on the size of the systems that can be treated. Cutting-edge methods that push back these limitations are then used to simulate nanoscale systems. More specifically, the grafting of bromophenyl molecules on the sidewall of carbon nanotubes is studied with these methods, as a better understanding of this procedure could have a substantial impact on the electronics industry. Second, the limitations on the precision of density functional theory are explored. We begin with a quantitative study of the uncertainty of this method for the case of electron-phonon coupling calculations and find it to be substantially higher than what is widely presumed in the literature. The uncertainty of electron-phonon coupling calculations is then explored within the G0W0 method, which is found to be a substantially more precise alternative. However, this method has the drawback of being severely limited in the size of the systems that can be computed. In the following, theoretical solutions to overcome these limitations are developed and presented. The increased performance and precision of the resulting implementation opens new possibilities for the study and design of materials, such as superconductors, polymers for organic photovoltaics and semiconductors. Keywords: condensed matter physics, ab initio calculations, density functional theory, nanotechnology, carbon nanotubes, many-body perturbation theory, G0W0 method.
Burst strength of tubing and casing based on twin shear unified strength theory.
Lin, Yuanhua; Deng, Kuanhai; Sun, Yongxing; Zeng, Dezhi; Liu, Wanying; Kong, Xiangwei; Singh, Ambrish
2014-01-01
The internal pressure strength of tubing and casing often cannot satisfy the design requirements in high-pressure, high-temperature and high-H2S gas wells. Moreover, the practical safety coefficient of some wells is lower than the design standard according to the current API 5C3 standard, which complicates the design. ISO 10400:2007 provides a model that predicts the burst strength of tubing and casing better than the API 5C3 standard, but its accuracy is still not satisfactory, as about 50 percent of the predicted values are markedly higher than the measured burst values. To improve the strength design of tubing and casing, this paper therefore derives the plastic limit pressure of tubing and casing under internal pressure by applying the twin shear unified strength theory. Based on a study of how the yield-to-tensile strength ratio and mechanical properties influence the burst strength of tubing and casing, a more precise calculation model of the burst strength has been established that accounts for material hardening and the intermediate principal stress. Numerical and experimental comparisons show that the new burst strength model is much closer to the real burst values than other models. The research results provide an important reference for optimizing the tubing and casing design of deep and ultra-deep wells.
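For context, the classical thin-wall (Barlow) estimate that API-style internal-pressure ratings build on can be sketched as follows. The twin-shear model derived in the paper is not reproduced here, and the grade and geometry values are hypothetical:

```python
def barlow_burst_pressure_mpa(yield_strength_mpa, wall_mm, od_mm):
    """Classical thin-wall (Barlow) internal-yield pressure: p = 2*Y*t/D.
    Standards apply further factors (e.g. wall tolerance), and the paper's
    twin-shear model additionally accounts for hardening and the
    intermediate principal stress."""
    return 2.0 * yield_strength_mpa * wall_mm / od_mm

# hypothetical 7-inch casing: 758 MPa yield (P110-class), 10.54 mm wall, 177.8 mm OD
p_limit = barlow_burst_pressure_mpa(758.0, 10.54, 177.8)
```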
Yang, Xiao-Xing; Critchley, Lester A; Joynt, Gavin M
2011-01-01
Thermodilution cardiac output using a pulmonary artery catheter is the reference method against which all new methods of cardiac output measurement are judged. However, thermodilution lacks precision and has a quoted precision error of ± 20%. There is uncertainty about its true precision and this causes difficulty when validating new cardiac output technology. Our aim in this investigation was to determine the current precision error of thermodilution measurements. A test rig through which water circulated at different constant rates with ports to insert catheters into a flow chamber was assembled. Flow rate was measured by an externally placed transonic flowprobe and meter. The meter was calibrated by timed filling of a cylinder. Arrow and Edwards 7Fr thermodilution catheters, connected to a Siemens SC9000 cardiac output monitor, were tested. Thermodilution readings were made by injecting 5 mL of ice-cold water. Precision error was divided into random and systematic components, which were determined separately. Between-readings (random) variability was determined for each catheter by taking sets of 10 readings at different flow rates. Coefficient of variation (CV) was calculated for each set and averaged. Between-catheter systems (systematic) variability was derived by plotting calibration lines for sets of catheters. Slopes were used to estimate the systematic component. Performances of 3 cardiac output monitors were compared: Siemens SC9000, Siemens Sirecust 1261, and Philips MP50. Five Arrow and 5 Edwards catheters were tested using the Siemens SC9000 monitor. Flow rates between 0.7 and 7.0 L/min were studied. The CV (random error) for Arrow was 5.4% and for Edwards was 4.8%. The random precision error was ± 10.0% (95% confidence limits). CV (systematic error) was 5.8% and 6.0%, respectively. The systematic precision error was ± 11.6%. The total precision error of a single thermodilution reading was ± 15.3% and ± 13.0% for triplicate readings. 
Precision error increased by 45% when using the Sirecust monitor and 100% when using the Philips monitor. In vitro testing of pulmonary artery catheters enabled us to measure both the random and systematic error components of thermodilution cardiac output measurement, and thus calculate the precision error. Using the Siemens monitor, we established a precision error of ± 15.3% for single and ± 13.0% for triplicate readings, similar to the previous estimate of ± 20%. However, this precision error was significantly worsened by using the Sirecust and Philips monitors. Clinicians should recognize that the precision error of thermodilution cardiac output depends on the selection of catheter and monitor model.
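The error arithmetic above can be sketched directly: random and systematic components combine in quadrature, and averaging n readings shrinks only the random part by the square root of n. This reproduces the abstract's figures and is not the authors' code:

```python
from math import sqrt

def precision_error_pct(random_pct, systematic_pct, n_readings=1):
    """95% precision error: random and systematic components in quadrature;
    averaging n readings divides only the random component by sqrt(n)."""
    return sqrt((random_pct / sqrt(n_readings)) ** 2 + systematic_pct ** 2)

single = precision_error_pct(10.0, 11.6)          # one reading: about ±15.3%
triplicate = precision_error_pct(10.0, 11.6, 3)   # triplicate mean: about ±13.0%
```

Note that the systematic (between-catheter-system) term is irreducible by averaging, which is why triplicate readings improve the total error only modestly.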
Les Houches 2015: Physics at TeV Colliders Standard Model Working Group Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Andersen, J.R.; et al.
This Report summarizes the proceedings of the 2015 Les Houches workshop on Physics at TeV Colliders. Session 1 dealt with (I) new developments relevant for high precision Standard Model calculations, (II) the new PDF4LHC parton distributions, (III) issues in the theoretical description of the production of Standard Model Higgs bosons and how to relate experimental measurements, (IV) a host of phenomenological studies essential for comparing LHC data from Run I with theoretical predictions and projections for future measurements in Run II, and (V) new developments in Monte Carlo event generators.
NASA Astrophysics Data System (ADS)
Sauli, Vladimir
2018-05-01
The interference effect between leptonic radiative corrections and hadronic polarization functions is calculated via the optical theorem for μ-pair production in the vicinity of narrow resonances. Within the seven most dominant exclusive channels of the production cross section σh(e+e- → hadrons), one achieves the high accuracy necessary for comparison with experiments. The result is compared with the KLOE and KLOE2 measurements of μ-μ+ and μ-μ+γ production at the φ and ω/ρ meson energies.
NASA Astrophysics Data System (ADS)
Savin, A. A.; Guba, V. G.; Ladur, A. A.; Bykova, O. N.
2018-05-01
This paper is dedicated to a new method for extracting the material properties of high-frequency circuits, based on reflection measurements of a line shorted at two or more points along its length. The line should be fabricated on the material under test. To achieve more precise results, the proposed method uses processing in the time domain. The experimental results section reports the obtained estimates of relative permittivity and dielectric loss tangent for the RO4350B hydrocarbon ceramic laminate. Measurements were conducted over the frequency range up to 20 GHz.
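The time-domain step can be illustrated with a synthetic one-port measurement: transform the frequency-domain reflection coefficient into an impulse response and read off the delay of the reflection from the short. This is a sketch under idealized assumptions (single lossless echo, delay on a time-grid bin), not the authors' processing chain:

```python
import numpy as np

# Synthetic one-port reflection: a single echo delayed by tau seconds
n, df = 256, 100e6                     # 256 frequency points, 100 MHz spacing
f = np.arange(n) * df                  # DC up to 25.5 GHz
tau = 20 / (n * df)                    # delay placed exactly on a time-domain bin
s11 = np.exp(-2j * np.pi * f * tau)    # ideal unit-magnitude reflection

h = np.fft.ifft(s11)                   # impulse response (time domain)
peak_bin = int(np.argmax(np.abs(h)))
recovered_delay = peak_bin / (n * df)  # round-trip delay of the short
```

With the round-trip delay and the known distance between shorting points, the effective permittivity (and hence the material permittivity) follows from the propagation velocity.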
Terao, Takamichi
2010-08-01
We propose a numerical method to calculate interior eigenvalues and the corresponding eigenvectors of nonsymmetric matrices. Based on a subspace projection technique onto an expanded Ritz subspace, it becomes possible to obtain eigenvalues and eigenvectors with sufficiently high precision. This method overcomes the difficulties of the traditional nonsymmetric Lanczos algorithm and improves the accuracy of the obtained interior eigenvalues and eigenvectors. Using this algorithm, we investigate three-dimensional metamaterial composites consisting of positive and negative refractive index materials, and demonstrate that the finite-difference frequency-domain algorithm is applicable to the analysis of these metamaterial composites.
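The underlying task, interior eigenvalues of a nonsymmetric matrix, can be illustrated with standard shift-invert Arnoldi as implemented in SciPy. This is a generic sketch of the problem class, not the authors' expanded-Ritz algorithm:

```python
import numpy as np
from scipy.sparse import csc_matrix
from scipy.sparse.linalg import eigs

# Upper-triangular test matrix: its eigenvalues are exactly the diagonal 1..10,
# but the off-diagonal entries make it nonsymmetric.
A = csc_matrix(np.triu(0.3 * np.ones((10, 10)), k=1) + np.diag(np.arange(1.0, 11.0)))

# Shift-invert Arnoldi: request the eigenvalue closest to the interior shift 5.2
vals, vecs = eigs(A, k=1, sigma=5.2)
```

Plain Arnoldi/Lanczos iterations converge to extremal eigenvalues first; the shift-invert transform maps the eigenvalue nearest the shift to the dominant one, which is what makes interior targets reachable.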
Les Houches 2017: Physics at TeV Colliders Standard Model Working Group Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Andersen, J.R.; et al.
This Report summarizes the proceedings of the 2017 Les Houches workshop on Physics at TeV Colliders. Session 1 dealt with (I) new developments relevant for high precision Standard Model calculations, (II) theoretical uncertainties and dataset dependence of parton distribution functions, (III) new developments in jet substructure techniques, (IV) issues in the theoretical description of the production of Standard Model Higgs bosons and how to relate experimental measurements, (V) phenomenological studies essential for comparing LHC data from Run II with theoretical predictions and projections for future measurements, and (VI) new developments in Monte Carlo event generators.
Confirmation of shutdown cooling effects
NASA Astrophysics Data System (ADS)
Sato, Kotaro; Tabuchi, Masato; Sugimura, Naoki; Tatsumi, Masahiro
2015-12-01
After the Fukushima accidents, all nuclear power plants in Japan gradually stopped operating and have experienced long shutdown periods. During these periods, the reactivity of fuels continues to change significantly, especially for high-burnup UO2 fuels and MOX fuels, due to radioactive decay. These isotopic changes must be considered precisely in order to predict neutronics characteristics accurately. In this paper, the shutdown cooling (SDC) effects of UO2 and MOX fuels with unusual operation histories are confirmed with the advanced lattice code AEGIS. The calculation results show that the effects need to be considered even after nuclear power plants return to normal operation.
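A dominant SDC mechanism for MOX fuel is the decay of fissile Pu-241 (half-life about 14.3 years) into the neutron absorber Am-241, so reactivity drops steadily while the plant sits idle. A one-line sketch of the decay, illustrative only and not the AEGIS model:

```python
from math import exp, log

PU241_HALF_LIFE_YR = 14.29

def pu241_remaining(years_shutdown):
    """Fraction of the initial fissile Pu-241 surviving a shutdown period;
    the decay product Am-241 is a strong absorber, so reactivity falls."""
    return exp(-log(2.0) * years_shutdown / PU241_HALF_LIFE_YR)

frac_after_2yr = pu241_remaining(2.0)   # roughly 9% of the Pu-241 is gone
```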
Derivative expansion of one-loop effective energy of stiff membranes with tension
NASA Astrophysics Data System (ADS)
Borelli, M. E. S.; Kleinert, H.; Schakel, Adriaan M. J.
1999-03-01
With the help of a derivative expansion, the one-loop corrections to the energy functional of a nearly flat, stiff membrane with tension due to thermal fluctuations are calculated in the Monge parametrization. Contrary to previous studies, an arbitrary tilt of the surface is allowed in order to exhibit the nontrivial relations between the different, highly nonlinear terms accompanying the ultraviolet divergences. These terms are shown to have precisely the same form as those in the original energy functional, as necessary for renormalizability. Infrared divergences also arise; these, however, are shown to cancel in a nontrivial way.
Contribution from individual nearby sources to the spectrum of high-energy cosmic-ray electrons
NASA Astrophysics Data System (ADS)
Sedrati, R.; Attallah, R.
2014-04-01
In the last few years, very important data on high-energy cosmic-ray electrons and positrons from high-precision space-borne and ground-based experiments have attracted a great deal of interest. These particles represent a unique probe for studying local cosmic-ray accelerators because they lose energy very rapidly. These energy losses reduce the lifetime so drastically that high-energy cosmic-ray electrons can reach the Earth only from rather local astrophysical sources. This work aims at calculating, by means of Monte Carlo simulation, the contribution of some known nearby astrophysical sources to the cosmic-ray electron/positron spectra at high energy (≥ 10 GeV). The background to the electron energy spectrum from distant sources is determined with the help of the GALPROP code. The obtained numerical results are compared with a set of experimental data.
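The "drastic" lifetime reduction can be quantified with the standard radiative-loss law dE/dt = -bE², which gives a lifetime of roughly 1/(bE). The loss coefficient used below is an assumed typical interstellar value (synchrotron plus inverse-Compton), not a number from the paper:

```python
SECONDS_PER_YEAR = 3.156e7
B_LOSS = 1.4e-16   # GeV^-1 s^-1: assumed typical synchrotron + inverse-Compton coefficient

def loss_lifetime_yr(energy_gev):
    """For dE/dt = -b*E^2, an electron of energy E survives roughly
    tau = 1/(b*E) before radiating away most of its energy."""
    return 1.0 / (B_LOSS * energy_gev) / SECONDS_PER_YEAR

tau_100gev = loss_lifetime_yr(100.0)   # of order a few million years at 100 GeV
```

With diffusive propagation, such lifetimes limit 100 GeV electrons to source distances of at most a few hundred parsecs to a kiloparsec, which is why only nearby sources contribute at high energy.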
Lee, Hyunju; Lim, Soo Young; Kim, Kyung Hyo
2017-10-01
The World Health Organization (WHO) enzyme-linked immunosorbent assay (ELISA) guideline is currently accepted as the gold standard for the evaluation of immunoglobulin G (IgG) antibodies specific to pneumococcal capsular polysaccharide. We conducted validation of the WHO ELISA for 7 pneumococcal serotypes (4, 6B, 9V, 14, 18C, 19F, and 23F) by evaluating its specificity, precision (reproducibility and intermediate precision), accuracy, spiking recovery test, lower limit of quantification (LLOQ), and stability at the Ewha Center for Vaccine Evaluation and Study, Seoul, Korea. We found that the specificity, reproducibility, and intermediate precision were within acceptance ranges (reproducibility, coefficient of variability [CV] ≤ 15%; intermediate precision, CV ≤ 20%) for all serotypes. Comparisons between the provisional assignments of calibration sera and the results from this laboratory showed a high correlation > 94% for all 7 serotypes, supporting the accuracy of the ELISA. The spiking recovery test also fell within an acceptable range. The quantification limit, calculated using the LLOQ, for each of the serotypes was 0.05–0.093 μg/mL. The freeze-thaw stability and the short-term temperature stability were also within an acceptable range. In conclusion, we showed good performance using the standardized WHO ELISA for the evaluation of serotype-specific anti-pneumococcal IgG antibodies; the WHO ELISA can evaluate the immune response against pneumococcal vaccines with consistency and accuracy. PMID:28875600
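The precision acceptance criteria above reduce to a coefficient-of-variability check on replicate measurements. A sketch with hypothetical replicate concentrations:

```python
import numpy as np

def cv_percent(replicates):
    """Coefficient of variability: sample SD over mean, in percent."""
    r = np.asarray(replicates, dtype=float)
    return 100.0 * r.std(ddof=1) / r.mean()

# hypothetical replicate concentrations (ug/mL) for one serotype
cv = cv_percent([1.10, 1.02, 0.95, 1.05, 0.98])
meets_reproducibility = cv <= 15.0   # acceptance limit for reproducibility
meets_intermediate = cv <= 20.0      # acceptance limit for intermediate precision
```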
High precision Hugoniot measurements of D2 near maximum compression
NASA Astrophysics Data System (ADS)
Benage, John; Knudson, Marcus; Desjarlais, Michael
2015-11-01
The Hugoniot response of liquid deuterium has been widely studied due to its general importance and to the significant discrepancy in the inferred shock response obtained from early experiments. With improvements in dynamic compression platforms and experimental standards these results have converged and show general agreement with several equation of state (EOS) models, including quantum molecular dynamics (QMD) calculations within the Generalized Gradient Approximation (GGA). This approach to modeling the EOS has also proven quite successful for other materials and is rapidly becoming a standard approach. However, small differences remain among predictions obtained using different local and semi-local density functionals; these small differences show up in the deuterium Hugoniot at ~ 30-40 GPa near the region of maximum compression. Here we present experimental results focusing on that region of the Hugoniot and take advantage of advancements in the platform and standards, resulting in data with significantly higher precision than that obtained in previous studies. These new data may prove to distinguish between the subtle differences predicted by the various density functionals. Results of these experiments will be presented along with comparison to various QMD calculations. Sandia National Laboratories is a multi-program laboratory operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin company, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
Droplet digital PCR technology promises new applications and research areas.
Manoj, P
2016-01-01
Digital polymerase chain reaction (dPCR) is used to quantify nucleic acids; its applications include the detection and precise quantification of low-level pathogens, rare genetic sequences, copy number variants, and rare mutations, as well as relative gene expression. Here the PCR is performed in a large number of reaction chambers or partitions, and the reaction is carried out in each partition individually. This separation allows a more reliable and sensitive measurement of nucleic acid. Results are calculated by counting the partitions in which the target sequence is amplified (positive droplets) and the partitions in which there is no amplification (negative droplets). The mean number of target sequences is calculated with a Poisson algorithm; the Poisson correction compensates for the presence of more than one copy of the target gene in a droplet. The method provides accurate and precise information that is highly reproducible and less susceptible to inhibitors than qPCR. It has been demonstrated in studying variations in gene sequences, such as copy number variants and point mutations, in distinguishing differences in expression between nearly identical alleles, and in assessing clinically relevant genetic variations, and it is routinely used for clonal amplification of samples for next-generation sequencing (NGS) methods. dPCR enables more reliable prediction of tumor status and patient prognosis through absolute quantitation using reference normalization. Rare mitochondrial DNA deletions associated with a range of diseases and disorders, as well as aging, can be accurately detected with droplet digital PCR.
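The Poisson correction described above can be sketched as follows. The default droplet volume is an assumed value typical of common droplet platforms, not one given in this abstract:

```python
import math

def dpcr_quantify(total_partitions, positive_partitions, partition_volume_ul=0.00085):
    """Poisson-corrected target quantification for digital PCR.

    The fraction of negative partitions estimates the Poisson mean
    (lam) of copies per partition; lam times the partition count gives
    total copies, correcting for partitions that hold more than one copy.
    """
    negative_fraction = (total_partitions - positive_partitions) / total_partitions
    lam = -math.log(negative_fraction)          # mean copies per partition
    total_copies = lam * total_partitions
    copies_per_ul = lam / partition_volume_ul   # concentration in the reaction
    return total_copies, copies_per_ul
```

Note that with half the droplets positive, the corrected copy count exceeds the positive-droplet count, which is exactly the multiple-occupancy correction at work.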
Patni, Nidhi; Burela, Nagarjuna; Pasricha, Rajesh; Goyal, Jaishree; Soni, Tej Prakash; Kumar, T Senthil; Natarajan, T
2017-01-01
To achieve the best possible therapeutic ratio using high-precision external beam radiation therapy techniques (image-guided radiation therapy/volumetric modulated arc therapy [IGRT/VMAT]) in carcinoma of the cervix, using kilovoltage cone-beam computed tomography (kV-CBCT). One hundred and five patients with gynecological malignancies treated with high-precision techniques (IGRT/VMAT) were included in the study. CBCT was done once a week for intensity-modulated radiation therapy and daily for IGRT/VMAT. These images were registered with the planning CT images, and translational errors were applied and recorded. In all, 2078 CBCT images were studied. The margins of the planning target volume were calculated from the setup variations. The setup variation was 5.8, 10.3, and 5.6 mm in the anteroposterior, superoinferior, and mediolateral directions, respectively. This allowed adequate dose delivery to the clinical target volume and sparing of organs at risk. Daily kV-CBCT is a satisfactory method for accurate patient positioning when treating gynecological cancers with high-precision techniques, and it avoided geographic miss.
Animal research as a basis for clinical trials.
Faggion, Clovis M
2015-04-01
Animal experiments are critical for the development of new human therapeutics because they provide mechanistic information, as well as important information on efficacy and safety. Some evidence suggests that authors of animal research in dentistry do not observe important methodological issues when planning animal experiments, for example sample-size calculation. Low-quality animal research directly interferes with development of the research process in which multiple levels of research are interconnected. For example, high-quality animal experiments generate sound information for the further planning and development of randomized controlled trials in humans. These randomized controlled trials are the main source for the development of systematic reviews and meta-analyses, which will generate the best evidence for the development of clinical guidelines. Therefore, adequate planning of animal research is a sine qua non condition for increasing efficacy and efficiency in research. Ethical concerns arise when animal research is not performed with high standards. This Focus article presents the latest information on the standards of animal research in dentistry, more precisely in the field of implant dentistry. Issues on precision and risk of bias are discussed, and strategies to reduce risk of bias in animal research are reported. © 2015 Eur J Oral Sci.
NASA Astrophysics Data System (ADS)
Nie, Xuqing; Li, Shengyi; Song, Ci; Hu, Hao
2014-08-01
Because its curvature varies across the surface, an aspheric surface is difficult to finish to high precision with traditional polishing processes; controlling mid-spatial frequency errors (MSFR), in particular, is especially difficult. In this paper, a combined fabrication process based on smoothing polishing (SP) and magnetorheological finishing (MRF) is proposed. The pressure distributions of a rigid polishing lap and a semi-flexible polishing lap are calculated, and their shape-preserving capacity and smoothing effect are compared. The feasibility of smoothing an aspheric surface with the semi-flexible polishing lap is verified, and the key technologies in the SP process are discussed. A K4 parabolic surface with a diameter of 500 mm was then fabricated using the combined process: a Φ150 mm semi-flexible lap was used in the SP process to control the MSFR, and the deterministic MRF process was applied to figure the surface error. The root mean square (RMS) error of the aspheric surface converged from 0.083λ (λ = 632.8 nm) to 0.008λ. The power spectral density (PSD) results show that the MSFR were well suppressed while the surface error converged strongly.
High Sensitivity Gravity Measurements in the Adverse Environment of Oil Wells
NASA Astrophysics Data System (ADS)
Pfutzner, Harold
2014-03-01
Bulk density is a primary measurement within oil and gas reservoirs and is the basis of most reserves calculations by oil companies. The measurement is performed with a gamma-ray source and two scintillation gamma-ray detectors from within newly drilled exploration and production wells. This nuclear density measurement, while very precise, is also very shallow and is therefore susceptible to errors due to any alteration of the formation and fluids in the vicinity of the borehole caused by the drilling process. Measuring the acceleration due to gravity along a well provides a direct measure of bulk density with a very large depth of investigation, making it practically immune to errors from near-borehole effects. Advances in gravity sensors and the associated mechanics and electronics provide an opportunity for routine borehole gravity measurements with density precision comparable to the nuclear density measurement and with sufficient ruggedness to survive the rough handling and high temperatures experienced in oil well logging. We will describe a borehole gravity meter and its use under very realistic conditions in an oil well in Saudi Arabia, and the density measurements will be presented. Alberto Marsala (2), Paul Wanjau (1), Olivier Moyal (1), and Justin Mlcak (1); (1) Schlumberger, (2) Saudi Aramco.
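The density inference described above rests on the standard borehole-gravimetry relation ρ = (F − Δg/Δz)/(4πG), where F is the free-air gradient and Δg/Δz the measured vertical gravity gradient between two stations. A minimal sketch, assuming the conventional free-air gradient value:

```python
import math

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
FREE_AIR = 0.3086      # conventional free-air gradient, mGal/m
MGAL = 1e-5            # 1 mGal expressed in m/s^2

def interval_density(delta_g_mgal, delta_z_m):
    """Apparent bulk density (kg/m^3) of the interval between two
    gravity stations: rho = (F - dg/dz) / (4*pi*G)."""
    grad = delta_g_mgal * MGAL / delta_z_m       # measured gradient, s^-2
    return (FREE_AIR * MGAL - grad) / (4.0 * math.pi * G)
```

A smaller measured gravity change over the interval implies denser rock between the stations, which is why the formula subtracts the measured gradient from the free-air term.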
Quadrature mixture LO suppression via DSW DAC noise dither
Dubbert, Dale F [Cedar Crest, NM; Dudley, Peter A [Albuquerque, NM
2007-08-21
A Quadrature Error Corrected Digital Waveform Synthesizer (QECDWS) employs frequency-dependent phase error corrections to, in effect, pre-distort the phase characteristic of the chirp to compensate for the frequency-dependent phase nonlinearity of the RF and microwave subsystem. In addition, the QECDWS can apply frequency-dependent correction vectors to the quadrature amplitude and phase of the synthesized output. The quadrature corrections cancel the radar's quadrature upconverter (mixer) errors to null the unwanted spectral image. A result is the direct generation of an RF waveform with a theoretical chirp bandwidth equal to the QECDWS clock frequency (1 to 1.2 GHz) and the high Spurious Free Dynamic Range (SFDR) necessary for high dynamic range radar systems such as SAR. To correct for the problematic upconverter local oscillator (LO) leakage, precision DC offsets can be applied over the chirped pulse using a pseudo-random noise dither. The present dither technique can effectively produce a quadrature DC bias with the precision required to adequately suppress the LO leakage. A calibration technique can be employed to calculate both the quadrature correction vectors and the LO-nulling DC offsets using the radar's built-in test capability.
Quantitative analysis of pork and chicken products by droplet digital PCR.
Cai, Yicun; Li, Xiang; Lv, Rong; Yang, Jielin; Li, Jian; He, Yuping; Pan, Liangwen
2014-01-01
In this project, a highly precise quantitative method based on the digital polymerase chain reaction (dPCR) technique was developed to determine the weight of pork and chicken in meat products. Real-time quantitative polymerase chain reaction (qPCR) is currently used for quantitative molecular analysis of the presence of species-specific DNAs in meat products. However, it is limited by amplification efficiency and by its reliance on standard curves based on Ct values when detecting and quantifying low-copy-number target DNA, as in some complex mixed meat products. Using the dPCR method, we found that the relationship between raw meat weight and DNA weight, and that between DNA weight and DNA copy number, were both close to linear. This enabled us to establish formulae to calculate the raw meat weight from the DNA copy number. The accuracy and applicability of this method were tested and verified using samples of pork and chicken powder mixed in known proportions. Quantitative analysis indicated that dPCR is highly precise in quantifying pork and chicken in meat products and therefore has the potential to be used in routine analysis by government regulators and by the quality control departments of commercial food and feed enterprises.
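The two near-linear relationships above chain into a single linear calibration from dPCR copy number to raw meat weight. A minimal sketch; the calibration points below are hypothetical illustrations, not data from the study:

```python
def fit_slope(xs, ys):
    """Least-squares slope of a line through the origin,
    suitable for a proportional calibration y = slope * x."""
    return sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

# Hypothetical calibration: dPCR copy number vs. known raw pork weight (mg)
copies    = [1.0e4, 2.0e4, 4.0e4, 8.0e4]
weight_mg = [25.0, 51.0, 99.0, 202.0]

slope = fit_slope(copies, weight_mg)   # mg of raw meat per DNA copy

def meat_weight_mg(copy_number):
    """Estimate raw meat weight from a measured dPCR copy number."""
    return slope * copy_number
```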
Granton, Patrick V; Verhaegen, Frank
2013-05-21
Precision image-guided small animal radiotherapy is rapidly advancing through the use of dedicated micro-irradiation devices. However, precise modeling of these devices in model-based dose-calculation algorithms such as Monte Carlo (MC) simulations continues to present challenges due to a combination of very small beams, low mechanical tolerances on beam collimation and positioning, and long calculation times. The specific intent of this investigation is to introduce and demonstrate the viability of a fast analytical source model (AM) for use either in investigating improvements in collimator design or in faster dose calculations. MC models using BEAMnrc were developed for circular and square field sizes from 1 to 25 mm in diameter (or side) that incorporated the intensity distribution of the focal spot, modeled after an experimental pinhole image. These MC models were used to generate phase space files (PSFMC) at the exit of the collimators. An AM was developed that included the intensity distribution of the focal spot, a pre-calculated x-ray spectrum, and the collimator-specific entrance and exit apertures. The AM was used to generate photon fluence intensity distributions (ΦAM) and PSFAM containing photons radiating at angles according to the focal spot intensity distribution. MC dose calculations using DOSXYZnrc in water and mouse phantoms differing only by the source used (PSFMC versus PSFAM) were found to agree within 7% and 4% for the smallest 1 and 2 mm collimators, respectively, and within 1% for all other field sizes based on depth dose profiles. PSF generation times were approximately 1200 times faster for the smallest beam and 19 times faster for the largest beam. The influence of the focal spot intensity distribution on output and on beam shape was quantified and found to play a significant role in calculated dose distributions.
Beam profile differences due to collimator alignment were found in both small and large collimators, which were sensitive to shifts of 1 mm with respect to the central axis.
Accuracy of a hexapod parallel robot kinematics based external fixator.
Faschingbauer, Maximilian; Heuer, Hinrich J D; Seide, Klaus; Wendlandt, Robert; Münch, Matthias; Jürgens, Christian; Kirchner, Rainer
2015-12-01
Different hexapod-based external fixators are increasingly used to treat bone deformities and fractures. Accuracy has not been measured sufficiently for all models. An infrared tracking system was applied to measure positioning maneuvers with a motorized Precision Hexapod® fixator, detecting three-dimensional positions of reflective balls mounted in an L-arrangement on the fixator, simulating bone directions. By omitting one dimension of the coordinates, projections were simulated as if measured on standard radiographs. Accuracy was calculated as the absolute difference between targeted and measured positioning values. In 149 positioning maneuvers, the median values for positioning accuracy of translations and rotations (torsions/angulations) were below 0.3 mm and 0.2° with quartiles ranging from -0.5 mm to 0.5 mm and -1.0° to 0.9°, respectively. The experimental setup was found to be precise and reliable. It can be applied to compare different hexapod-based fixators. Accuracy of the investigated hexapod system was high. Copyright © 2014 John Wiley & Sons, Ltd.
NASA Technical Reports Server (NTRS)
Kreslavsky, Mikhail A.; Head, James W.; Neumann, Gregory A.; Zuber, Maria T.; Smith, David E.
2016-01-01
Global lunar topographic data derived from ranging measurements by the Lunar Orbiter Laser Altimeter (LOLA) onboard the LRO mission to the Moon have extremely high vertical precision. We use detrended topography as a means of exploiting this precision in geomorphological analysis. The detrended topography was calculated as the difference between the actual topography and a trend surface defined as the median topography in a circular sliding window. We found that, despite complicated distortions caused by the non-linear nature of the detrending procedure, visual inspection of these data facilitates identification of low-amplitude, gently sloping geomorphic features. We present specific examples of patterns of lava flows forming the lunar maria and revealing compound flow fields, a new class of lava flow complex on the Moon. We also highlight the identification of linear tectonic features that are otherwise obscured in images and in topographic data processed in a more traditional manner.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Alhroob, M.; Boyd, G.; Hasib, A.
Precision ultrasonic measurements in binary gas systems provide continuous real-time monitoring of mixture composition and flow. Using custom micro-controller-based electronics, we have developed an ultrasonic instrument, with numerous potential applications, capable of making continuous high-precision sound velocity measurements. The instrument measures sound transit times along two opposite directions aligned parallel to - or obliquely crossing - the gas flow. The difference between the two measured times yields the gas flow rate, while their average gives the sound velocity, which can be compared with a sound velocity vs. molar composition look-up table for the binary mixture at a given temperature and pressure. The look-up table may be generated from prior measurements in known mixtures of the two components, from theoretical calculations, or from a combination of the two. We describe the instrument and its performance in numerous applications within the ATLAS experiment at the CERN Large Hadron Collider (LHC). The instrument can be of interest in other areas where continuous in-situ binary gas analysis and flowmetry are required.
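The transit-time arithmetic described above (average gives sound speed, difference gives flow) can be sketched as follows, assuming a single path of known length aligned with the flow; the composition look-up step is omitted:

```python
def sound_speed_and_flow(path_m, t_down_s, t_up_s):
    """Recover sound speed c and flow velocity v from counter-propagating
    ultrasonic transit times along a path of length L:

        t_down = L / (c + v),  t_up = L / (c - v)
    =>  c = (L/2) * (1/t_down + 1/t_up)
        v = (L/2) * (1/t_down - 1/t_up)
    """
    c = 0.5 * path_m * (1.0 / t_down_s + 1.0 / t_up_s)
    v = 0.5 * path_m * (1.0 / t_down_s - 1.0 / t_up_s)
    return c, v
```

Because v comes from the difference of two reciprocal times, small timing errors cancel to first order in c but not in v, which is why high-precision time measurement matters most for the flow reading.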
NASA Astrophysics Data System (ADS)
Zong, Yali; Hu, Naigang; Duan, Baoyan; Yang, Guigeng; Cao, Hongjun; Xu, Wanye
2016-03-01
Inevitable manufacturing errors and inconsistency between assumed and actual boundary conditions can affect the shape precision and cable tensions of a cable-network antenna, and can even result in failure of the structure in service. In this paper, an analytical method for the sensitivity of the shape precision and cable tensions with respect to parameters carrying uncertainty was studied. Based on the sensitivity analysis, an optimal design procedure was proposed to alleviate the effects of the uncertain parameters. The validity of the calculated sensitivities was examined against those computed by a finite difference method. Comparison with a traditional design method shows that the presented design procedure can remarkably reduce the influence of the uncertainties on antenna performance. Moreover, the results suggest that slender front net cables, thick tension ties, relatively slender boundary cables, and a high tension level can improve the ability of cable-network antenna structures to resist the effects of the uncertainties on antenna performance.
Field, Nicholas; Konstantinidis, Spyridon; Velayudhan, Ajoy
2017-08-11
The combination of multi-well plates and automated liquid handling is well suited to the rapid measurement of the adsorption isotherms of proteins. Here, single and binary adsorption isotherms are reported for BSA, ovalbumin and conalbumin on a strong anion exchanger over a range of pH and salt levels. The impact of the main experimental factors at play on the accuracy and precision of the adsorbed protein concentrations is quantified theoretically and experimentally. In addition to the standard measurement of liquid concentrations before and after adsorption, the amounts eluted from the wells are measured directly. This additional measurement corroborates the calculation based on liquid concentration data, and improves precision especially under conditions of weak or moderate interaction strength. The traditional measurement of multicomponent isotherms is limited by the speed of HPLC analysis; this analytical bottleneck is alleviated by careful multivariate analysis of UV spectra. Copyright © 2017. Published by Elsevier B.V.
Liu, Miao; Yang, Shourui; Wang, Zhangying; Huang, Shujun; Liu, Yue; Niu, Zhenqi; Zhang, Xiaoxuan; Zhu, Jigui; Zhang, Zonghua
2016-05-30
Augmented reality systems can provide precise guidance for various kinds of manual work. The adaptability and guiding accuracy of such systems are determined by the computational model and the corresponding calibration method. In this paper, a novel type of augmented reality guiding system and the corresponding design scheme are proposed. Guided by external positioning equipment, the proposed system can achieve high relative indication accuracy in a large working space. The system is realized with a digital projector, and a general back-projection model is derived from the geometric relationship between the digitized 3D model and the projector in free space. A corresponding calibration method is also designed to obtain the parameters of the projector. To validate the proposed back-projection model, coordinate data collected by 3D positioning equipment are used to calculate and optimize the extrinsic parameters. The final projection indication accuracy of the system is verified with a subpixel pattern-projecting technique.
NASA Astrophysics Data System (ADS)
Yang, Lin; Zhang, Feng; Wang, Cai-Zhuang; Ho, Kai-Ming; Travesset, Alex
2018-04-01
We present an implementation of EAM and FS interatomic potentials, which are widely used in simulating metallic systems, in HOOMD-blue, a software package designed to perform classical molecular dynamics simulations with GPU acceleration. We first discuss the details of our implementation and then report extensive benchmark tests. We demonstrate that single-precision floating-point operations efficiently implemented on GPUs can produce sufficient accuracy when compared against double-precision codes, as demonstrated in test calculations of the glass-transition temperature of Cu64.5Zr35.5 and the pair correlation function g(r) of liquid Ni3Al. Our code scales well with the size of the simulated system on NVIDIA Tesla M40 and P100 GPUs. Compared with the popular package LAMMPS running on 32 cores of AMD Opteron 6220 processors, the GPU/CPU performance ratio can reach as high as 4.6. The source code can be accessed through the HOOMD-blue web page for free by any interested user.
Equilibrium mass-dependent fractionation relationships for triple oxygen isotopes
NASA Astrophysics Data System (ADS)
Cao, Xiaobin; Liu, Yun
2011-12-01
With growing interest in small 17O anomalies, there is a pressing need for precise values of the ratio ln 17α/ln 18α for particular mass-dependent fractionation processes (MDFPs) (e.g., for an equilibrium isotope exchange reaction). This ratio (also denoted "θ") can be determined experimentally; however, such efforts demand well-defined processes, or sets of processes, in addition to high-precision analytical capabilities. Here we present a theoretical approach from which high-precision ratios for MDFPs can be obtained. This approach will complement and serve as a benchmark for experimental studies. We use oxygen isotope exchange in equilibrium processes as an example. We propose that the equilibrium ratio, θE ≡ ln 17α/ln 18α, can be calculated through the equation θE(a-b) = κa + (κa − κb)·(ln 18βb)/(ln 18αa-b), where 18βb is the fractionation factor between a compound "b" and the mono-atomic ideal reference material "O", 18αa-b is the fractionation factor between a and b and equals 18βa/18βb, and κ is a new concept defined in this study as κ ≡ ln 17β/ln 18β. The relationship between θ and κ is similar to that between α and β. The advantages of using κ include the convenience of documenting a large number of θ values for MDFPs and of estimating any θ value from a small data set, owing to the fact that κ values are similar among O-bearing compounds with similar chemical groups. The frequency scaling factor, anharmonic corrections, and clumped isotope effects are found to be insignificant for the κ value calculation. However, use of the rule of the geometric mean (RGM) can significantly affect the κ value. There are only small differences in κ values among carbonates, and the structural effect is smaller than that of chemical composition. We provide κ values for most O-bearing compounds, and we argue that κ values for Mg-bearing and S-bearing compounds should be close to their high-temperature limits (i.e., 0.5210 for Mg and 0.5159 for S).
We also provide θ values for CO2(g)-water, quartz-water, and calcite-water oxygen isotope exchange reactions at temperatures from 0 to 100 °C.
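The quoted relation between θE and κ follows directly from the definitions κ ≡ ln 17β/ln 18β and 18αa-b = 18βa/18βb; a short derivation:

```latex
\ln {}^{17}\alpha_{a\text{-}b}
  = \ln {}^{17}\beta_a - \ln {}^{17}\beta_b
  = \kappa_a \ln {}^{18}\beta_a - \kappa_b \ln {}^{18}\beta_b ,
\qquad
\ln {}^{18}\alpha_{a\text{-}b}
  = \ln {}^{18}\beta_a - \ln {}^{18}\beta_b ,
```

so that

```latex
\theta^{E}_{a\text{-}b}
  = \frac{\ln {}^{17}\alpha_{a\text{-}b}}{\ln {}^{18}\alpha_{a\text{-}b}}
  = \frac{\kappa_a \left(\ln {}^{18}\alpha_{a\text{-}b} + \ln {}^{18}\beta_b\right)
          - \kappa_b \ln {}^{18}\beta_b}
         {\ln {}^{18}\alpha_{a\text{-}b}}
  = \kappa_a + \left(\kappa_a - \kappa_b\right)
    \frac{\ln {}^{18}\beta_b}{\ln {}^{18}\alpha_{a\text{-}b}} .
```

In particular, when κa = κb the second term vanishes and θE reduces to the common κ, consistent with the observation that κ values are similar among compounds with similar chemical groups.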
NASA Astrophysics Data System (ADS)
Aziz Zanjani, F.; Lin, G.
2016-12-01
Seismic activity in Oklahoma has greatly increased since 2013, when the number of wastewater disposal wells associated with oil and gas production was significantly increased in the area. An M5.8 earthquake at about 5 km depth struck near Pawnee, Oklahoma on September 3, 2016. This earthquake is postulated to be related to the anthropogenic activity in Oklahoma. In this study, we investigate the seismic characteristics in Oklahoma by using high-precision earthquake relocations and focal mechanisms. We acquire the seismic data between January 2013 and October 2016 recorded by the local and regional (within 200 km of the Pawnee mainshock) seismic stations from the Incorporated Research Institutions for Seismology (IRIS). We relocate all the earthquakes by applying the source-specific station term method and a differential-time relocation method based on waveform cross-correlation data. The high-precision earthquake relocation catalog is then used to perform full-waveform modeling. We use Muller's reflection method for Green's function construction and the mtinvers program for moment tensor inversion. The sensitivity of the solution to the station and component distribution is evaluated by jackknife resampling. These earthquake relocation and focal mechanism results will help constrain the fault orientation and the earthquake rupture length. To examine the static Coulomb stress change due to the 2016 Pawnee earthquake, we use the Coulomb 3 software in the vicinity of the mainshock and compare the aftershock pattern with the calculated stress variation. The stress change in the study area can be translated into the probability of seismic failure on other parts of the designated fault.
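The static Coulomb stress analysis mentioned above evaluates a failure-stress change resolved onto receiver faults. A minimal sketch of the standard relation; the effective friction coefficient of 0.4 is a commonly assumed default, not a value from this abstract:

```python
def coulomb_stress_change(delta_shear_mpa, delta_normal_mpa, friction=0.4):
    """Static Coulomb failure stress change on a receiver fault:

        dCFS = d_tau + mu' * d_sigma_n

    where d_tau is the shear stress change in the slip direction,
    d_sigma_n the normal stress change (positive = unclamping), and
    mu' an effective friction coefficient. Positive dCFS brings the
    fault closer to failure, which is where aftershocks are expected
    to concentrate.
    """
    return delta_shear_mpa + friction * delta_normal_mpa
```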
Model for intensity calculation in electron guns
NASA Astrophysics Data System (ADS)
Doyen, O.; De Conto, J. M.; Garnier, J. P.; Lefort, M.; Richard, N.
2007-04-01
The calculation of the current in an electron gun structure is one of the main lines of investigation in understanding electron gun physics. Various simulation codes exist but often show important discrepancies with experiments, and those differences cannot be reduced because of the lack of physical information in the codes. We present a simple physical three-dimensional model, valid for all kinds of gun geometries. This model is more precise than the other simulation codes and models we encountered and supports a genuine understanding of electron gun physics. It is based only on the calculation of the Laplace electric field at the cathode, the use of the classical Child-Langmuir current density, and a geometrical correction to this law. Finally, the current-versus-voltage characteristic curve can be precisely described with only a few physical parameters. Indeed, we have shown that the electron gun current generation is governed mainly by the shape of the electric field at the cathode without beam and by an equivalent infinite planar diode gap distance.
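The classical Child-Langmuir current density on which the model builds can be evaluated directly. This sketch covers only the ideal infinite planar diode law, not the paper's geometrical correction:

```python
import math

EPS0 = 8.8541878128e-12     # vacuum permittivity, F/m
E_CHARGE = 1.602176634e-19  # elementary charge, C
M_E = 9.1093837015e-31      # electron mass, kg

def child_langmuir_j(voltage_v, gap_m):
    """Space-charge-limited current density (A/m^2) of an ideal
    infinite planar diode:

        J = (4 * eps0 / 9) * sqrt(2e/m) * V^(3/2) / d^2
    """
    perveance = (4.0 * EPS0 / 9.0) * math.sqrt(2.0 * E_CHARGE / M_E)
    return perveance * voltage_v ** 1.5 / gap_m ** 2
```

For a 10 kV gap of 1 cm this gives a few amperes per square centimeter, the familiar order of magnitude for thermionic guns operating in the space-charge-limited regime.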
Hakim, B M; Beard, B B; Davis, C C
2018-01-01
Specific absorption rate (SAR) measurements require accurate calculations of the dielectric properties of tissue-equivalent liquids and associated calibration of E-field probes. We developed a precise tissue-equivalent dielectric measurement and E-field probe calibration system. The system consists of a rectangular waveguide, electric field probe, and data control and acquisition system. Dielectric properties are calculated using the field attenuation factor inside the tissue-equivalent liquid and power reflectance inside the waveguide at the air/dielectric-slab interface. Calibration factors were calculated using isotropicity measurements of the E-field probe. The frequencies used are 900 MHz and 1800 MHz. The uncertainties of the measured values are within ±3%, at the 95% confidence level. Using the same waveguide for dielectric measurements as well as calibrating E-field probes used in SAR assessments eliminates a source of uncertainty. Moreover, we clearly identified the system parameters that affect the overall uncertainty of the measurement system. PMID:29520129
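As a rough illustration of the two quantities used above (field attenuation in the lossy liquid and power reflectance at the air/dielectric interface), here is a plane-wave, normal-incidence simplification; the actual system uses rectangular-waveguide propagation, which this sketch does not model, and the example permittivity values are assumptions, not the paper's:

```python
import cmath
import math

C = 299_792_458.0  # speed of light, m/s

def plane_wave_params(freq_hz, eps_rel, loss_tangent):
    """Attenuation constant (Np/m) in a lossy dielectric and power
    reflectance at a normal-incidence air/dielectric interface,
    plane-wave simplification."""
    eps_c = eps_rel * (1.0 - 1j * loss_tangent)  # complex relative permittivity
    n_c = cmath.sqrt(eps_c)                      # complex refractive index
    k0 = 2.0 * math.pi * freq_hz / C             # free-space wavenumber
    alpha = k0 * -n_c.imag                       # Np/m (n_c.imag <= 0)
    gamma = (1.0 - n_c) / (1.0 + n_c)            # field reflection coefficient
    return alpha, abs(gamma) ** 2
```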
Seehaus, Frank; Schwarze, Michael; Flörkemeier, Thilo; von Lewinski, Gabriela; Kaptein, Bart L; Jakubowitz, Eike; Hurschler, Christof
2016-05-01
Implant migration can be accurately quantified by model-based Roentgen stereophotogrammetric analysis (RSA), using an implant surface model to locate the implant relative to the bone. In a clinical situation, a single reverse engineering (RE) model for each implant type and size is used. It is unclear to what extent the accuracy and precision of migration measurement are affected by implant manufacturing variability unaccounted for by a single representative model. Individual RE models were generated for five short-stem hip implants of the same type and size. Two phantom analyses and one clinical analysis were performed. "Accuracy, matched models": one stem was assessed, and the results from the original RE model were compared with those from randomly selected models. "Accuracy, random model": each of the five stems was assessed and analyzed using one randomly selected RE model. "Precision, clinical setting": implant migration was calculated for eight patients, and all five available RE models were applied to each case. For the two phantom experiments, the 95% CI of the bias ranged from -0.28 mm to 0.30 mm for translation and -2.3° to 2.5° for rotation. In the clinical setting, precision was within 0.5 mm and 1.2° for translation and rotation, respectively, except for rotations about the proximodistal axis (<4.1°). High accuracy and precision of model-based RSA can be achieved and are not biased by using a single representative RE model. At least for implants similar in shape to the investigated short stem, individual models are not necessary. © 2015 Orthopaedic Research Society. Published by Wiley Periodicals, Inc. J Orthop Res 34:903-910, 2016.
López-Miguel, Alberto; Martínez-Almeida, Loreto; González-García, María J; Coco-Martín, María B; Sobrado-Calvo, Paloma; Maldonado, Miguel J
2013-02-01
To assess the intrasession and intersession precision of ocular, corneal, and internal higher-order aberrations (HOAs) measured using an integrated topographer and Hartmann-Shack wavefront sensor (Topcon KR-1W) in refractive surgery candidates. IOBA-Eye Institute, Valladolid, Spain. Evaluation of diagnostic technology. To analyze intrasession repeatability, 1 experienced examiner measured eyes 9 times successively. To study intersession reproducibility, the same clinician obtained measurements from another set of eyes in 2 consecutive sessions 1 week apart. Ocular, corneal, and internal HOAs were obtained. Coma and spherical aberrations, 3rd- and 4th-order aberrations, and total HOAs were calculated for a 6.0 mm pupil diameter. For intrasession repeatability (75 eyes), excellent intraclass correlation coefficients (ICCs) were obtained (ICC >0.87), except for internal primary coma (ICC = 0.75) and 3rd-order (ICC = 0.72) HOAs. Repeatability precision (1.96 × S(w)) values ranged from 0.03 μm (corneal primary spherical) to 0.08 μm (ocular primary coma). For intersession reproducibility (50 eyes), ICCs were good (>0.8) for ocular primary spherical, 3rd-order, and total higher-order aberrations; reproducibility precision values ranged from 0.06 μm (corneal primary spherical) to 0.21 μm (internal 3rd order), with internal HOAs having the lowest precision (≥0.12 μm). No systematic bias was found between examinations on different days. The intrasession repeatability was high; therefore, the device's ability to measure HOAs in a reliable way was excellent. Under intersession reproducibility conditions, dependable corneal primary spherical aberrations were provided. No author has a financial or proprietary interest in any material or method mentioned. Copyright © 2012 ASCRS and ESCRS. Published by Elsevier Inc. All rights reserved.
Precision Column CO2 Measurement from Space Using Broad Band LIDAR
NASA Technical Reports Server (NTRS)
Heaps, William S.
2009-01-01
In order to better understand the budget of carbon dioxide in the Earth's atmosphere, it is necessary to develop a global, high-precision understanding of the carbon dioxide column. To uncover the "missing sink" that is responsible for the large discrepancies in the budget as we presently understand it, calculation has indicated that a measurement accuracy of 1 ppm is necessary. Because the typical column-average CO2 has now reached 380 ppm, this represents a precision on the order of 0.25% for these column measurements. No species has ever been measured from space at such a precision. In recognition of the importance of understanding the CO2 budget and its impact on global warming, the National Research Council in its decadal survey report to NASA recommended planning for a laser-based total-CO2 mapping mission in the near future. The extreme measurement accuracy requirements of this mission place very strong constraints on the laser system used for the measurement. This work presents an overview of the characteristics necessary in a laser system used to make this measurement. Consideration is given to the temperature dependence, pressure broadening, and pressure shift of the CO2 lines themselves and how these impact the laser system characteristics. We are examining the possibility of making precise measurements of atmospheric carbon dioxide using a broad-band source of radiation. This means that many of the difficulties in wavelength control can be handled in the detector portion of the system rather than in the laser source. It also greatly reduces the number of individual lasers required to make a measurement. Simplifications such as these are extremely desirable for systems designed to operate from space.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhai, Y. John
2016-06-15
Purpose: To obtain an improved, precise gamma-efficiency calibration curve for an HPGe (High-Purity Germanium) detector with a new comprehensive approach. Methods: Both radioactive sources and Monte Carlo simulation (CYLTRAN) are used to determine the HPGe gamma efficiency over the energy range 0-8 MeV. The HPGe is a GMX coaxial 280 cm³ N-type 70% gamma detector. Using the Momentum Achromat Recoil Spectrometer (MARS) at the K500 superconducting cyclotron of Texas A&M University, the radioactive nucleus ²⁴Al was produced and separated. This nucleus undergoes positron decay followed by gamma transitions up to 8 MeV from ²⁴Mg excited states, which were used for the HPGe efficiency calibration. Results: With the ²⁴Al gamma energy spectrum up to 8 MeV, the efficiency for the 7.07 MeV γ ray at 4.9 cm from the ²⁴Al source was determined to be 0.194(4)%, carefully accounting for factors such as positron annihilation, peak summing, beta-detector efficiency, and internal conversion. The Monte Carlo simulation (CYLTRAN) gave a value of 0.189%, in agreement with the experimental measurement. Applying the same procedure to different energy points then yielded a precise efficiency calibration curve of the HPGe detector up to 7.07 MeV at 4.9 cm from the source. Using the same data analysis procedure, the efficiency for the 7.07 MeV gamma ray at 15.1 cm from the source was determined to be 0.0387(6)%, while the MC simulation gave a similar value of 0.0395%. This discrepancy led us to assign an uncertainty of 3% to the efficiency at 15.1 cm up to 7.07 MeV. The MC calculations also reproduced the intensity of the observed single- and double-escape peaks, provided that the effects of positron annihilation in flight were incorporated.
Conclusion: The improved-precision gamma-efficiency calibration curve provides more accurate radiation detection and dose calculation for cancer radiotherapy treatment.
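A common way to build such a calibration curve is a low-order polynomial fit of log-efficiency against log-energy through the calibration points. A minimal sketch; the calibration points below are hypothetical except for the 0.194% value at 7.07 MeV quoted above:

```python
import numpy as np

# Hypothetical (energy MeV, absolute efficiency) calibration points at 4.9 cm;
# only the 7.07 MeV value (0.194%) is quoted in the abstract, the rest are
# invented for illustration.
points = [(0.5, 0.0105), (1.0, 0.0062), (2.0, 0.0038),
          (4.0, 0.0025), (7.07, 0.00194)]
log_e = np.log([e for e, _ in points])
log_eff = np.log([f for _, f in points])

# Low-order polynomial in log-log space, a common HPGe calibration form
coeffs = np.polyfit(log_e, log_eff, 2)

def efficiency(energy_mev):
    """Interpolated absolute full-energy peak efficiency at energy_mev."""
    return float(np.exp(np.polyval(coeffs, np.log(energy_mev))))
```

The fit order and the log-log parameterization are conventional choices, not the procedure used by the authors.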
Tashman, Scott; Anderst, William
2003-04-01
Dynamic assessment of three-dimensional (3D) skeletal kinematics is essential for understanding normal joint function as well as the effects of injury or disease. This paper presents a novel technique for measuring in-vivo skeletal kinematics that combines data collected from high-speed biplane radiography and static computed tomography (CT). The goals of the present study were to demonstrate that highly precise measurements can be obtained during dynamic movement studies employing high frame-rate biplane video-radiography, to develop a method for expressing joint kinematics in an anatomically relevant coordinate system and to demonstrate the application of this technique by calculating canine tibio-femoral kinematics during dynamic motion. The method consists of four components: the generation and acquisition of high frame rate biplane radiographs, identification and 3D tracking of implanted bone markers, CT-based coordinate system determination, and kinematic analysis routines for determining joint motion in anatomically based coordinates. Results from dynamic tracking of markers inserted in a phantom object showed the system bias was insignificant (-0.02 mm). The average precision in tracking implanted markers in-vivo was 0.064 mm for the distance between markers and 0.31 degree for the angles between markers. Across-trial standard deviations for tibio-femoral translations were similar for all three motion directions, averaging 0.14 mm (range 0.08 to 0.20 mm). Variability in tibio-femoral rotations was more dependent on rotation axis, with across-trial standard deviations averaging 1.71 degrees for flexion/extension, 0.90 degree for internal/external rotation, and 0.40 degree for varus/valgus rotation. Advantages of this technique over traditional motion analysis methods include the elimination of skin motion artifacts, improved tracking precision and the ability to present results in a consistent anatomical reference frame.
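Expressing tracked marker positions in a CT-based anatomical coordinate system ultimately reduces to estimating a least-squares rigid transform between corresponding point sets. A minimal sketch using the standard Kabsch/SVD solution (the paper's exact procedure is not reproduced here):

```python
import numpy as np

def rigid_transform(p, q):
    """Least-squares rigid transform (R, t) mapping points p -> q (Kabsch).

    p, q: (n, 3) arrays of corresponding marker positions, e.g. CT-frame
    marker coordinates and their radiographically tracked positions.
    """
    p, q = np.asarray(p, float), np.asarray(q, float)
    pc, qc = p - p.mean(axis=0), q - q.mean(axis=0)
    u, _, vt = np.linalg.svd(pc.T @ qc)          # covariance of centered sets
    d = np.sign(np.linalg.det(vt.T @ u.T))       # guard against reflections
    r = vt.T @ np.diag([1.0, 1.0, d]) @ u.T
    t = q.mean(axis=0) - r @ p.mean(axis=0)
    return r, t
```

With at least three non-collinear markers per bone, R and t give the pose from which anatomical joint angles and translations can be extracted.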
Thermodynamic properties by Equation of state of liquid sodium under pressure
NASA Astrophysics Data System (ADS)
Li, Huaming; Sun, Yongli; Zhang, Xiaoxiao; Li, Mo
Isothermal bulk modulus, molar volume, and speed of sound of molten sodium are calculated from an equation of state of power-law form, with good precision compared with the experimental data. The calculated internal energy shows a minimum along the isothermal lines, as in previous results but with slightly larger values. The calculated isobaric heat capacity shows an unexpected minimum under isothermal compression. The temperature and pressure derivatives of various thermodynamic quantities of liquid sodium are derived, and the contribution of entropy to the temperature and pressure derivatives of the isothermal bulk modulus is discussed. Expressions for the acoustical parameter and the nonlinearity parameter are obtained from thermodynamic relations based on the equation of state. Both parameters for liquid sodium are calculated under high pressure along the isothermal lines using the available thermodynamic data and numerical derivatives. The calculated values are found to be very close to the results from experimental measurements and quasi-thermodynamic theory at the melting point under ambient conditions. Furthermore, several other thermodynamic quantities are also presented. This work was supported by the Scientific Research Starting Foundation of Taiyuan University of Technology, the Shanxi Provincial government ("100-talents program"), the China Scholarship Council, and the National Natural Science Foundation of China (NSFC) under Grant No. 11204200.
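The isothermal bulk modulus K_T = −V(∂P/∂V)_T follows from any such equation of state by differentiation. The sketch below uses a hypothetical Murnaghan-like power-law form with placeholder parameters; it is not the functional form or the parameter values fitted for liquid sodium in this work:

```python
def pressure(v, v0=2.46e-5, b0=6.0e9, n=4.0):
    """Hypothetical power-law EOS: P(V) = (B0/n) * ((V0/V)**n - 1).

    v0 (m^3/mol), b0 (Pa), and n are placeholder values for illustration,
    not parameters from the paper.
    """
    return (b0 / n) * ((v0 / v) ** n - 1.0)

def bulk_modulus(v, dv=1e-12, **kw):
    """Isothermal bulk modulus K_T = -V dP/dV via central difference."""
    dpdv = (pressure(v + dv, **kw) - pressure(v - dv, **kw)) / (2.0 * dv)
    return -v * dpdv
```

For this form the analytic result is K_T = B0 (V0/V)^n, so at V = V0 the numerical derivative should recover B0; higher derivatives give the pressure dependence of K_T discussed in the abstract.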
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yu, Jaehyung; Wagner, Lucas K.; Ertekin, Elif, E-mail: ertekin@illinois.edu
2015-12-14
The fixed-node diffusion Monte Carlo (DMC) method has attracted interest in recent years as a way to calculate properties of solid materials with high accuracy. However, the framework for the calculation of properties such as total energies, atomization energies, and excited-state energies is not yet fully established. Several outstanding questions remain as to the effect of pseudopotentials, the magnitude of the fixed-node error, and the size of supercell finite-size effects. Here, we consider in detail the semiconductors ZnSe and ZnO and carry out systematic studies to assess the magnitude of the energy differences arising from controlled and uncontrolled approximations in DMC. The former include time-step errors and supercell finite-size effects for ground and optically excited states; the latter include pseudopotentials, the pseudopotential localization approximation, and the fixed-node approximation. We find that for these compounds the errors can be controlled to good precision using modern computational resources, and that quantum Monte Carlo calculations using Dirac-Fock pseudopotentials can offer good estimates of both the cohesive energy and the gap of these systems. We do, however, observe differences in the calculated optical gaps that arise when different pseudopotentials are used.
Ab initio calculations for non-strange and strange few-baryon systems
NASA Astrophysics Data System (ADS)
Leidemann, Winfried
2018-03-01
For the non-strange systems, the low-energy excitation spectra of the three- and four-body helium isotopes are studied. The objects of the study are the astrophysical S-factor S₁₂ of the radiative proton-deuteron capture d(p,γ)³He and the width of the ⁴He isoscalar monopole resonance. Both observables are calculated using the Lorentz integral transform (LIT) method. The LIT equations are solved via expansions of the LIT states on a specifically modified hyperspherical harmonics (HH) basis. It is illustrated that at low energies such a modification allows one to work with much higher LIT resolutions than with an unmodified HH basis, which opens up the possibility of determining astrophysical S-factors as well as the widths of low-lying resonances with the LIT method. In the strange-baryon sector, binding energies of the hypernucleus ³ΛH (hypertriton) are calculated using a nonsymmetrized HH basis, and the results are compared with those obtained by various other groups with different methods. For all the considered non-strange and strange baryon systems, high-precision results are obtained.
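For reference, the Lorentz integral transform has the following standard form (this is the textbook definition of the method, not an equation reproduced from this paper):

```latex
L(\sigma_R,\sigma_I)
  \;=\; \int \mathrm{d}E \, \frac{R(E)}{(E-\sigma_R)^2+\sigma_I^2}
  \;=\; \langle \tilde\Psi \,|\, \tilde\Psi \rangle ,
\qquad
\bigl(H - E_0 - \sigma_R + \mathrm{i}\sigma_I\bigr)\,|\tilde\Psi\rangle
  \;=\; \hat O \,|\Psi_0\rangle ,
```

where R(E) is the response function induced by the excitation operator Ô and the localized LIT state |Ψ̃⟩ is the object expanded on the (modified) HH basis; the finite width σ_I is what sets the "LIT resolution" mentioned above.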
Nonuniform fast Fourier transform method for numerical diffraction simulation on tilted planes.
Xiao, Yu; Tang, Xiahui; Qin, Yingxiong; Peng, Hao; Wang, Wei; Zhong, Lijing
2016-10-01
The method based on rotation of the angular spectrum in the frequency domain is generally used for diffraction simulation between tilted planes. Because of the rotation of the angular spectrum, the interval between sampling points in the Fourier domain is not uniform. Conventional fast Fourier transform (FFT)-based methods therefore require a spectrum interpolation to obtain approximate values on equidistant sampling points; however, owing to the numerical error introduced by this interpolation, the calculation accuracy degrades rapidly as the rotation angle increases. Here, diffraction propagation between tilted planes is recast as the problem of evaluating a discrete Fourier transform on unevenly spaced sampling points, which can be computed efficiently and precisely with the nonuniform fast Fourier transform (NUFFT) method. The most important advantage of this method is that the conventional spectrum interpolation is avoided, so high calculation accuracy is guaranteed for different rotation angles, even when the rotation angle is close to π/2. Moreover, its computational efficiency is comparable to that of conventional FFT-based methods. Numerical examples, together with a discussion of the calculation accuracy and the sampling method, are presented.
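The quantity the NUFFT evaluates is simply the DFT sum taken at non-integer frequency locations. A direct O(N·M) reference implementation is sketched below; the fast O(N log N) evaluation of the same sum is what NUFFT libraries such as FINUFFT provide:

```python
import numpy as np

def ndft_nonuniform_freq(samples, freqs):
    """Direct evaluation of a 1-D DFT at arbitrary (non-integer) frequencies.

    samples: complex field values on a uniform spatial grid.
    freqs:   arbitrary frequency locations in "index" units (integers
             0..N-1 reproduce the ordinary DFT).
    This is the O(N*M) reference computation that a NUFFT approximates.
    """
    samples = np.asarray(samples, dtype=complex)
    n = np.arange(len(samples))
    # One row of complex exponentials per requested frequency
    kernel = np.exp(-2j * np.pi * np.outer(freqs, n) / len(samples))
    return kernel @ samples
```

In the tilted-plane setting, the rotated angular-spectrum coordinates supply the uneven `freqs`, so no interpolation back onto a uniform grid is needed.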
Comparative Study of button BPM Trapped Mode Heating
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cameron,P.; Singh, O.
2009-05-04
The combination of short bunches and high currents found in modern light sources and colliders can result in the deposition of tens of watts of power in BPM buttons. The resulting thermal distortion is potentially problematic for maintaining high-precision beam position stability, and in the extreme case can result in mechanical damage. We present a simple algorithm that uses the input parameters of beam current, bunch length, button diameter, beampipe aperture, and fill pattern to calculate a relative figure of merit for button heating. Data for many of the world's light sources and colliders are compiled in a table which, using the algorithm, is sorted in order of the relative magnitude of button heating.
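A figure of merit of this kind combines the listed inputs into a single relative number. The sketch below uses a hypothetical scaling (charge per bunch times average current, divided by a power of the bunch length and weighted by a button-to-aperture geometry factor) chosen only to illustrate the idea; it is not the algorithm from the paper:

```python
def button_heating_fom(i_avg_ma, n_bunches, sigma_z_mm, d_button_mm, b_pipe_mm):
    """Relative figure of merit for BPM button heating.

    Hypothetical scaling for illustration only: assumes deposited power
    grows with (charge per bunch) * (average current), falls off with
    bunch length as sigma_z**1.5, and scales with the fraction of the
    beampipe wall the button subtends.
    """
    q_per_bunch = i_avg_ma / n_bunches            # relative bunch charge
    geometry = (d_button_mm / b_pipe_mm) ** 2     # button/aperture area ratio
    return q_per_bunch * i_avg_ma * geometry / sigma_z_mm ** 1.5
```

Whatever the exact exponents, a figure of merit of this shape lets machines with very different parameters be ranked on one axis, which is how the table in the paper is sorted.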
Towards future high performance computing: What will change? How can we be efficient?
NASA Astrophysics Data System (ADS)
Düben, Peter
2017-04-01
How can we make the most of the "exascale" supercomputers that will soon be available and will enable us to perform an astonishing 1,000,000,000,000,000,000 (10¹⁸) floating-point operations within a single second? How do we need to design applications to use these machines efficiently? What are the limits? We discuss the opportunities and limits of future high-performance computers from the perspective of Earth system modelling. We provide an overview of future challenges and outline how numerical applications will need to change to run efficiently on the supercomputers of the future. We also discuss how different disciplines can support each other, and address data handling and the numerical precision of data.
The use of heterodyne speckle photogrammetry to measure high-temperature strain distributions
NASA Technical Reports Server (NTRS)
Stetson, K. A.
1983-01-01
Thermal and mechanical strains have been measured on samples of a common material used in jet engine burner liners, heated from room temperature to 870 C and cooled back to 220 C in a laboratory furnace. The physical geometry of the sample surface was recorded at selected temperatures by means of a set of twelve single-exposure specklegrams. Sequential pairs of specklegrams were compared in a heterodyne interferometer, which allowed high-precision measurement of differential displacements. Good speckle correlation was also observed between the first and last specklegrams, which demonstrated the durability of the surface microstructure and permitted a check on accumulated errors. Agreement with the calculated thermal expansion was within a few hundred microstrain over a range of fourteen thousand microstrain.
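The "fourteen thousand microstrain" range is consistent with free thermal expansion, ε = αΔT. A quick check, using a textbook placeholder value for the expansion coefficient of a burner-liner superalloy (not a value taken from the paper):

```python
def thermal_strain_microstrain(alpha_per_k, t_hot_c, t_ref_c=20.0):
    """Free thermal expansion strain in microstrain: eps = alpha * dT * 1e6."""
    return alpha_per_k * (t_hot_c - t_ref_c) * 1e6

# Assumed CTE of ~16e-6 /K, typical of nickel-base superalloys (placeholder)
strain = thermal_strain_microstrain(16e-6, 870.0)
```

With these assumptions the strain at 870 C comes out near 13,600 microstrain, i.e. the same order as the range quoted in the abstract.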
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cai, Weiwei; Kaminski, Clemens F., E-mail: cfk23@cam.ac.uk
2014-01-20
This paper proposes a technique that can simultaneously retrieve distributions of temperature, concentration of chemical species, and pressure based on broad bandwidth, frequency-agile tomographic absorption spectroscopy. The technique holds particular promise for the study of dynamic combusting flows. A proof-of-concept numerical demonstration is presented, using representative phantoms to model conditions typically prevailing in near-atmospheric or high pressure flames. The simulations reveal both the feasibility of the proposed technique and its robustness. Our calculations indicate precisions of ∼70 K at flame temperatures and ∼0.05 bars at high pressure from reconstructions featuring as much as 5% Gaussian noise in the projections.
Precision bounds for gradient magnetometry with atomic ensembles
NASA Astrophysics Data System (ADS)
Apellaniz, Iagoba; Urizar-Lanz, Iñigo; Zimborás, Zoltán; Hyllus, Philipp; Tóth, Géza
2018-05-01
We study gradient magnetometry with an ensemble of atoms with arbitrary spin. We calculate precision bounds for estimating the gradient of the magnetic field based on the quantum Fisher information. For quantum states that are invariant under homogeneous magnetic fields, we need to measure a single observable to estimate the gradient. On the other hand, for states that are sensitive to homogeneous fields, a simultaneous measurement is needed, as the homogeneous field must also be estimated. We prove that for the cases studied in this paper, such a measurement is feasible. We present a method to calculate precision bounds for gradient estimation with a chain of atoms or with two spatially separated atomic ensembles. We also consider a single atomic ensemble with an arbitrary density profile, where the atoms cannot be addressed individually, and which is a very relevant case for experiments. Our model can take into account even correlations between particle positions. While in most of the discussion we consider an ensemble of localized particles that are classical with respect to their spatial degree of freedom, we also discuss the case of gradient metrology with a single Bose-Einstein condensate.
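The precision bounds referred to above derive from the quantum Cramér-Rao inequality, which in its standard form (not specific to this paper) reads:

```latex
(\Delta b_1)^2 \;\ge\; \frac{1}{\nu \, F_Q\!\left[\varrho,\hat G\right]},
```

where b₁ is the gradient parameter to be estimated, ν the number of independent repetitions of the measurement, F_Q the quantum Fisher information of the state ϱ, and Ĝ the generator associated with the gradient parameter; for states sensitive to homogeneous fields the bound generalizes to the multiparameter setting in which the homogeneous field is estimated simultaneously.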
Stability Analysis of Receiver ISB for BDS/GPS
NASA Astrophysics Data System (ADS)
Zhang, H.; Hao, J. M.; Tian, Y. G.; Yu, H. L.; Zhou, Y. L.
2017-07-01
Stability analysis of receiver ISB (Inter-System Bias) is essential for understanding the behaviour of ISB as well as for ISB modeling and prediction. In order to analyze the long-term stability of ISB, data from MGEX (Multi-GNSS Experiment) covering three weeks, one each from 2014, 2015, and 2016, are processed with the precise satellite clock and orbit products provided by Wuhan University and GeoForschungsZentrum (GFZ). Using the ISB calculated by BDS (BeiDou Navigation Satellite System)/GPS (Global Positioning System) combined PPP (Precise Point Positioning), the daily and weekly stability of ISB are investigated. The experimental results show that the diurnal variation of ISB is stable, with an average daily standard deviation of about 0.5 ns. The weekly averages and standard deviations of ISB vary considerably across years, and the weekly averages of ISB depend on receiver type. There is a systematic bias between the ISB calculated from the precise products of Wuhan University and those of GFZ. In addition, this systematic bias in the weekly average ISB is consistent from station to station.
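The daily stability figures quoted above amount to grouping the epoch-wise ISB estimates from the PPP solution by day and taking per-day means and standard deviations. A minimal sketch on a synthetic series (values invented for illustration, not MGEX data):

```python
import numpy as np

def daily_isb_stats(epochs_per_day, isb_ns):
    """Daily mean and standard deviation of epoch-wise ISB estimates (ns).

    isb_ns: 1-D sequence of ISB values laid out as consecutive days of
    `epochs_per_day` samples each (a simplified layout for illustration).
    """
    days = np.asarray(isb_ns, dtype=float).reshape(-1, epochs_per_day)
    return days.mean(axis=1), days.std(axis=1, ddof=1)

# Synthetic series: 3 days x 4 epochs, constant offset plus small noise
series = [10.1, 10.3, 9.9, 10.2,
          10.0, 10.4, 10.1, 9.8,
          10.2, 10.0, 10.3, 9.9]
means, stds = daily_isb_stats(4, series)
```

A real analysis would additionally align the epochs to GPS time and handle data gaps; the grouping logic is the same.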
NASA Astrophysics Data System (ADS)
Alparone, S.; Gambino, S.; Mostaccio, A.; Spampinato, S.; Tuvè, T.; Ursino, A.
2009-04-01
The north-eastern flank of Mt. Etna is crossed by an important and active tectonic structure, the Pernicana Fault, which has a mean WNW-ESE strike. It links westward to the active NE Rift and appears to play an important role in controlling the instability processes affecting the eastern flank of the volcano. Recent studies suggest that the Pernicana Fault is very active, with sinistral, oblique-slip movements; it is also characterised by frequent shallow seismicity (depth < 2 km b.s.l.) on the uphill western segment and by remarkable creep on the downhill eastern one. Pernicana Fault earthquakes, which can reach magnitudes up to 4.2, sometimes with coseismic surface faulting, have caused severe damage to tourist resorts and villages along or close to this structure. In recent years, a strong increase in seismicity, including swarms, was recorded by the INGV-CT permanent local seismic network near the Pernicana Fault. A three-step procedure was applied to calculate precise hypocentre locations. In the first step, we applied cross-correlation analysis in order to evaluate waveform similarity and identify earthquake families. In the second step, we calculated probabilistic earthquake locations using the software package NonLinLoc, which includes systematic, complete grid-search and global non-linear search methods. Subsequently, we relocated correlated event pairs using the double-difference earthquake algorithm implemented in the program HypoDD. The double-difference algorithm minimizes the residuals between observed and calculated travel-time differences for pairs of earthquakes at common stations by iteratively adjusting the vector difference between the hypocentres. We show the recognized spatial seismic clusters, identifying the most active and hazardous sectors of the structure, their geometry, and their depth.
Finally, in order to clarify the geodynamic framework of the area, we combine these results with focal mechanisms calculated for the most energetic earthquakes.
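The waveform-similarity step can be sketched with a normalized cross-correlation score: values near 1 flag candidate members of the same earthquake family. A minimal sketch (not the INGV processing code):

```python
import numpy as np

def max_norm_xcorr(a, b):
    """Maximum of the normalized cross-correlation between two waveforms.

    Returns a similarity score in [-1, 1]; identical waveforms score 1
    regardless of a constant offset or amplitude scale.
    """
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    a = (a - a.mean()) / (a.std() * len(a))
    b = (b - b.mean()) / b.std()
    return float(np.max(np.correlate(a, b, mode="full")))
```

In practice the score is computed per station over a short window around the P arrival, and event pairs above a threshold (often ~0.8-0.9) are grouped into families before the double-difference relocation.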