Correcting incompatible DN values and geometric errors in nighttime lights time series images
Zhao, Naizhuo; Zhou, Yuyu; Samson, Eric L.
2014-09-19
The Defense Meteorological Satellite Program's Operational Linescan System (DMSP-OLS) nighttime lights imagery has proven to be a powerful remote sensing tool to monitor urbanization and assess socioeconomic activity at large scales. However, incompatible digital number (DN) values and geometric errors severely limit the application of nighttime light image data to multi-year quantitative research. In this study we extend and improve previous work on inter-calibrating nighttime lights image data to obtain more compatible and reliable nighttime lights time series (NLT) image data for China and the United States (US) through four steps: inter-calibration, geometric correction, steady increase adjustment, and population data correction. We then use gross domestic product (GDP) data to test the processed NLT image data indirectly and find that sum light (the summed DN value of pixels in a nighttime light image) maintains a clear increasing trend when GDP growth rates are relatively large but neither increases nor decreases when GDP growth rates are relatively small. As nighttime light is a sensitive indicator of economic activity, the temporally consistent trends between sum light and GDP growth rate imply that the brightness of nighttime lights on the ground is correctly represented by the processed NLT image data. Finally, through analyzing the corrected NLT image data from 1992 to 2008, we find that China experienced marked nighttime lights development in 1992-1997 and 2001-2008, while the US suffered nighttime lights decay over large areas after 2001.
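The inter-calibration step described above is commonly implemented by regressing each year's DN values against a chosen reference image over a region assumed radiometrically stable. A minimal sketch of such a second-order polynomial calibration; the synthetic numbers are hypothetical, not from the study:

```python
import numpy as np

def fit_intercalibration(dn_target, dn_reference):
    """Fit a quadratic DN-to-DN mapping from a target year's image to a
    reference image, then clip to the 6-bit OLS DN range 0-63."""
    c2, c1, c0 = np.polyfit(dn_target, dn_reference, deg=2)
    return lambda dn: np.clip(c0 + c1 * dn + c2 * dn ** 2, 0, 63)

# Synthetic example: the target sensor reads systematically dim.
ref = np.arange(64, dtype=float)            # reference DN values
tgt = 0.8 * ref + 2.0                       # dimmed target DN values
calibrate = fit_intercalibration(tgt, ref)
sum_light = calibrate(tgt).sum()            # summed DN after calibration
```

In practice the regression would use pixel pairs drawn only from the stable reference region, and the fitted mapping would then be applied to the whole image before computing sum light.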
Accurate adiabatic correction in the hydrogen molecule
NASA Astrophysics Data System (ADS)
Pachucki, Krzysztof; Komasa, Jacek
2014-12-01
A new formalism for the accurate treatment of adiabatic effects in the hydrogen molecule is presented, in which the electronic wave function is expanded in the James-Coolidge basis functions. Systematic increase in the size of the basis set permits estimation of the accuracy. Numerical results for the adiabatic correction to the Born-Oppenheimer interaction energy reveal a relative precision of 10⁻¹² at an arbitrary internuclear distance. Such calculations have been performed for 88 internuclear distances in the range of 0 < R ⩽ 12 bohrs to construct the adiabatic correction potential and to solve the nuclear Schrödinger equation. Finally, the adiabatic correction to the dissociation energies of all rovibrational levels in H2, HD, HT, D2, DT, and T2 has been determined. For the ground state of H2 the estimated precision is 3 × 10⁻⁷ cm⁻¹, which is almost three orders of magnitude higher than that of the best previous result. The achieved accuracy removes the adiabatic contribution from the overall error budget of the present day theoretical predictions for the rovibrational levels.
Accurately Detecting Students' Lies regarding Relational Aggression by Correctional Instructions
ERIC Educational Resources Information Center
Dickhauser, Oliver; Reinhard, Marc-Andre; Marksteiner, Tamara
2012-01-01
This study investigates the effect of correctional instructions when detecting lies about relational aggression. Based on models from the field of social psychology, we predict that correctional instruction will lead to a less pronounced lie bias and to more accurate lie detection. Seventy-five teachers received videotapes of students' true denial…
An Accurate Temperature Correction Model for Thermocouple Hygrometers
Savage, Michael J.; Cass, Alfred; de Jager, James M.
1982-01-01
Numerous water relation studies have used thermocouple hygrometers routinely. However, the accurate temperature correction of hygrometer calibration curve slopes seems to have been largely neglected in both psychrometric and dewpoint techniques. In the case of thermocouple psychrometers, two temperature correction models are proposed, each based on measurement of the thermojunction radius and calculation of the theoretical voltage sensitivity to changes in water potential. The first model relies on calibration at a single temperature and the second at two temperatures. Both these models were more accurate than the temperature correction models currently in use for four psychrometers calibrated over a range of temperatures (15-38°C). The model based on calibration at two temperatures is superior to that based on only one calibration. The model proposed for dewpoint hygrometers is similar to that for psychrometers. It is based on the theoretical voltage sensitivity to changes in water potential. Comparison with empirical data from three dewpoint hygrometers calibrated at four different temperatures indicates that these instruments need only be calibrated at, e.g. 25°C, if the calibration slopes are corrected for temperature. PMID:16662241
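A hedged sketch of the idea: a calibration slope measured at one temperature is scaled to the measurement temperature before converting thermocouple output to water potential. The linear form and the coefficient k below are illustrative placeholders, not the paper's actual model (which derives the sensitivity from the thermojunction radius):

```python
def corrected_slope(slope_cal, t_cal, t_meas, k=0.0325):
    """Scale a hygrometer calibration-curve slope (uV per unit water
    potential) from the calibration temperature to the measurement
    temperature. The linear form and coefficient k (per deg C) are
    hypothetical placeholders for the paper's radius-based model."""
    return slope_cal * (1.0 + k * (t_meas - t_cal))

def water_potential(voltage_uv, slope_cal, t_cal, t_meas):
    """Convert thermocouple output to water potential using the
    temperature-corrected slope."""
    return voltage_uv / corrected_slope(slope_cal, t_cal, t_meas)
```

With a correction of this shape, a single calibration at, e.g., 25 °C suffices, which is the practical point the abstract makes for dewpoint hygrometers.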
Jeong, Hyunjo; Zhang, Shuzeng; Li, Xiongbing; Barnard, Dan
2015-09-15
The accurate measurement of the acoustic nonlinearity parameter β for fluids or solids generally requires corrections for diffraction effects due to the finite-size geometry of the transmitter and receiver. These effects are well known in linear acoustics, while those for second harmonic waves have not been well addressed and were therefore not properly considered in previous studies. In this work, we explicitly define the attenuation and diffraction corrections using the multi-Gaussian beam (MGB) equations developed from the quasilinear solutions of the KZK equation. The effects of making these corrections are examined through simulation of β determination in water. Diffraction corrections are found to have more significant effects than attenuation corrections, and the β values of water can be estimated experimentally with less than 5% error when the exact second harmonic diffraction corrections are used together with the negligible attenuation correction effects, on the basis of the linear frequency dependence between attenuation coefficients, α₂ ≃ 2α₁.
NASA Astrophysics Data System (ADS)
Park, Seyoun; Robinson, Adam; Quon, Harry; Kiess, Ana P.; Shen, Colette; Wong, John; Plishker, William; Shekhar, Raj; Lee, Junghoon
2016-03-01
In this paper, we propose a CT-CBCT registration method to accurately predict the tumor volume change based on daily cone-beam CTs (CBCTs) during radiotherapy. CBCT is commonly used to reduce patient setup error during radiotherapy, but its poor image quality impedes accurate monitoring of anatomical changes. Although physician's contours drawn on the planning CT can be automatically propagated to daily CBCTs by deformable image registration (DIR), artifacts in CBCT often cause undesirable errors. To improve the accuracy of the registration-based segmentation, we developed a DIR method that iteratively corrects CBCT intensities by local histogram matching. Three popular DIR algorithms (B-spline, demons, and optical flow) with the intensity correction were implemented on a graphics processing unit for efficient computation. We evaluated their performances on six head and neck (HN) cancer cases. For each case, four trained scientists manually contoured the nodal gross tumor volume (GTV) on the planning CT and on every other fraction CBCT, to which the GTV contours propagated by DIR were compared. The performance was also compared with commercial image registration software based on conventional mutual information (MI), VelocityAI (Varian Medical Systems Inc.). The volume differences (mean±std, in cc) between the average of the manual segmentations and the automatic segmentations are 3.70±2.30 (B-spline), 1.25±1.78 (demons), 0.93±1.14 (optical flow), and 4.39±3.86 (VelocityAI). The proposed method significantly reduced the estimation error by 9% (B-spline), 38% (demons), and 51% (optical flow) over the results using VelocityAI. Although demonstrated only on HN nodal GTVs, the results imply that the proposed method can produce improved segmentation of other critical structures over conventional methods.
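The intensity-correction idea can be illustrated with a plain-NumPy histogram matching routine. Note this is a global match; the paper's method applies matching in local neighborhoods, iteratively, inside the DIR loop:

```python
import numpy as np

def histogram_match(source, reference):
    """Remap source intensities so their cumulative histogram matches the
    reference's (global version of the local matching used in the paper)."""
    s_vals, s_idx, s_cnt = np.unique(source.ravel(),
                                     return_inverse=True, return_counts=True)
    r_vals, r_cnt = np.unique(reference.ravel(), return_counts=True)
    s_cdf = np.cumsum(s_cnt) / source.size
    r_cdf = np.cumsum(r_cnt) / reference.size
    # For each source intensity, pick the reference intensity at the same CDF.
    matched = np.interp(s_cdf, r_cdf, r_vals)
    return matched[s_idx].reshape(source.shape)
```

Applied patch-wise to a CBCT with the planning CT as reference, a transform like this pulls the two modalities onto a comparable intensity scale before the similarity metric is evaluated.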
Accurate mask model implementation in optical proximity correction model for 14-nm nodes and beyond
NASA Astrophysics Data System (ADS)
Zine El Abidine, Nacer; Sundermann, Frank; Yesilada, Emek; Farys, Vincent; Huguennet, Frederic; Armeanu, Ana-Maria; Bork, Ingo; Chomat, Michael; Buck, Peter; Schanen, Isabelle
2016-04-01
In a previous work, we demonstrated that the current optical proximity correction model assuming the mask pattern to be analogous to the designed data is no longer valid. An extreme case of line-end shortening shows a gap up to 10 nm difference (at mask level). For that reason, an accurate mask model has been calibrated for a 14-nm logic gate level. A model with a total RMS of 1.38 nm at mask level was obtained. Two-dimensional structures, such as line-end shortening and corner rounding, were well predicted using scanning electron microscopy pictures overlaid with simulated contours. The first part of this paper is dedicated to the implementation of our improved model in current flow. The improved model consists of a mask model capturing mask process and writing effects, and a standard optical and resist model addressing the litho exposure and development effects at wafer level. The second part will focus on results from the comparison of the two models, the new and the regular.
NASA Astrophysics Data System (ADS)
Gibbons, S. J.; Pabian, F.; Näsholm, S. P.; Kværna, T.; Mykkeltveit, S.
2016-10-01
Declared North Korean nuclear tests in 2006, 2009, 2013, and 2016 were observed seismically at regional and teleseismic distances. Waveform similarity allows the events to be located relatively with far greater accuracy than the absolute locations can be determined from seismic data alone. There is now significant redundancy in the data given the large number of regional and teleseismic stations that have recorded multiple events, and relative location estimates can be confirmed independently by performing calculations on many mutually exclusive sets of measurements. Using a 1-dimensional global velocity model, the distances between the events estimated using teleseismic P phases are found to be approximately 25% shorter than the distances between events estimated using regional Pn phases. The 2009, 2013, and 2016 events all take place within 1 km of each other and the discrepancy between the regional and teleseismic relative location estimates is no more than about 150 m. The discrepancy is much more significant when estimating the location of the more distant 2006 event relative to the later explosions with regional and teleseismic estimates varying by many hundreds of meters. The relative location of the 2006 event is challenging given the smaller number of observing stations, the lower signal-to-noise ratio, and significant waveform dissimilarity at some regional stations. The 2006 event is however highly significant in constraining the absolute locations in the terrain at the Punggye-ri test-site in relation to observed surface infrastructure. For each seismic arrival used to estimate the relative locations, we define a slowness scaling factor which multiplies the gradient of seismic traveltime versus distance, evaluated at the source, relative to the applied 1-d velocity model. A procedure for estimating correction terms which reduce the double-difference time residual vector norms is presented together with a discussion of the associated uncertainty. The
Grimme, Stefan
2004-09-01
An empirical method to account for van der Waals interactions in practical calculations with density functional theory (termed DFT-D) is tested for a wide variety of molecular complexes. As in previous schemes, the dispersive energy is described by damped interatomic potentials of the form C6·R⁻⁶. The use of pure, gradient-corrected density functionals (BLYP and PBE), together with the resolution-of-the-identity (RI) approximation for the Coulomb operator, allows very efficient computations for large systems. As opposed to previous work, extended AO basis sets of polarized TZV or QZV quality are employed, which reduces the basis set superposition error to a negligible extent. By using a global scaling factor for the atomic C6 coefficients, the functional dependence of the results could be strongly reduced. The "double counting" of correlation effects for strongly bound complexes is found to be insignificant if steep damping functions are employed. The method is applied to a total of 29 complexes of atoms and small molecules (Ne, CH4, NH3, H2O, CH3F, N2, F2, formic acid, ethene, and ethine) with each other and with benzene; to benzene, naphthalene, pyrene, and coronene dimers; the naphthalene trimer; coronene·H2O; and four H-bonded and stacked DNA base pairs (AT and GC). In almost all cases, very good agreement with reliable theoretical or experimental results for binding energies and intermolecular distances is obtained. For stacked aromatic systems and the important base pairs, the DFT-D-BLYP model seems to be even superior to standard MP2 treatments, which systematically overbind. The good results obtained suggest the approach as a practical tool to describe the properties of many important van der Waals systems in chemistry. Furthermore, the DFT-D data may either be used to calibrate much simpler (e.g., force-field) potentials or the optimized structures can be used as input for more accurate ab initio calculations of the interaction energies.
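A minimal sketch of a DFT-D-style pairwise dispersion correction with a Fermi-type damping function. The parameter values, the fixed damping radius r0, and the geometric-mean C6 combination rule are illustrative simplifications; the actual scheme uses element-specific coefficients and pairwise van der Waals radii:

```python
import numpy as np

def dispersion_energy(coords, c6, s6=1.0, d=20.0, r0=3.0):
    """Damped pairwise dispersion correction, E = -s6 * sum_{i<j}
    f_dmp(R_ij) * C6_ij / R_ij**6, with a Fermi-type damping function
    f_dmp(R) = 1 / (1 + exp(-d * (R / r0 - 1))).  The fixed r0 and the
    geometric-mean C6 combination are simplifications for illustration."""
    e = 0.0
    n = len(coords)
    for i in range(n):
        for j in range(i + 1, n):
            r = np.linalg.norm(coords[i] - coords[j])
            c6ij = np.sqrt(c6[i] * c6[j])
            fdmp = 1.0 / (1.0 + np.exp(-d * (r / r0 - 1.0)))
            e -= s6 * fdmp * c6ij / r ** 6
    return e
```

The steep damping switches the correction off at short range, which is what keeps the "double counting" of short-range correlation insignificant, as the abstract notes.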
Accurate elevation and normal moveout corrections of seismic reflection data on rugged topography
Liu, J.; Xia, J.; Chen, C.; Zhang, G.
2005-01-01
The application of the seismic reflection method is often limited in areas of complex terrain. The problem is incorrect correction of the time shifts caused by topography. To apply normal moveout (NMO) correction to reflection data correctly, static corrections must first be applied to compensate for the time distortions of topography and the time delays from near-surface weathered layers. For environmental and engineering investigations, the weathered layers are themselves the targets, so the static correction mainly adjusts for the time shifts due to an undulating surface. In practice, seismic reflected raypaths are assumed to be almost vertical through the near-surface layers because these have much lower velocities than the layers below. This assumption is acceptable in most cases since it results in little residual error for small elevation changes and small offsets in reflection events. Although static algorithms based on choosing a floating datum related to common midpoint gathers or residual surface-consistent functions are available and effective, errors caused by the assumption of vertical raypaths often generate pseudo-indications of structures. This paper presents a comparison of corrections based on vertical raypaths and on biased (non-vertical) raypaths, and provides an approach for combining elevation and NMO corrections. The advantages of the approach are demonstrated with synthetic and real-world examples of multi-coverage seismic reflection surveys on rough topography. © The Royal Society of New Zealand 2005.
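The two corrections the paper combines can be written down compactly. Below is a schematic of the standard NMO hyperbola and a vertical-raypath elevation static, i.e. the simple model whose errors the paper examines (variable names are illustrative):

```python
import numpy as np

def nmo_time(t0, offset, v_nmo):
    """Two-way traveltime along the NMO hyperbola:
    t(x) = sqrt(t0**2 + (x / v)**2)."""
    return np.sqrt(t0 ** 2 + (offset / v_nmo) ** 2)

def elevation_static(elevation, datum, v_near_surface):
    """Time shift that moves a source/receiver from its elevation to the
    datum, assuming a vertical raypath through the near surface."""
    return (datum - elevation) / v_near_surface
```

The paper's point is that for rugged topography the vertical-raypath static above becomes inaccurate, and a bias-raypath correction combined with NMO performs better.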
Band-Filling Correction Method for Accurate Adsorption Energy Calculations: A Cu/ZnO Case Study.
Hellström, Matti; Spångberg, Daniel; Hermansson, Kersti; Broqvist, Peter
2013-11-12
We present a simple method, the "band-filling correction", to calculate accurate adsorption energies (Eads) in the low coverage limit from finite-size supercell slab calculations using DFT. We show that it is necessary to use such a correction if charge transfer takes place between the adsorbate and the substrate, resulting in the substrate bands either filling up or becoming depleted. With this correction scheme, we calculate Eads of an isolated Cu atom adsorbed on the ZnO(101̅0) surface. Without the correction, the calculated Eads is highly coverage-dependent, even for surface supercells that would typically be considered very large (in the range from 1 nm × 1 nm to 2.5 nm × 2.5 nm). The correction scheme works very well for semilocal functionals, where the corrected Eads is converged within 0.01 eV for all coverages. The correction scheme also works well for hybrid functionals if a large supercell is used and the exact exchange interaction is screened. PMID:26583386
Surface EMG measurements during fMRI at 3T: accurate EMG recordings after artifact correction.
van Duinen, Hiske; Zijdewind, Inge; Hoogduin, Hans; Maurits, Natasha
2005-08-01
In this experiment, we have measured surface EMG of the first dorsal interosseus during predefined submaximal isometric contractions (5, 15, 30, 50, and 70% of maximal force) of the index finger simultaneously with fMRI measurements. Since we have used sparse sampling fMRI (3-s scanning; 2-s non-scanning), we were able to compare the mean amplitude of the undisturbed EMG (non-scanning) intervals with the mean amplitude of the EMG intervals during scanning, after MRI artifact correction. The agreement between the mean amplitudes of the corrected and the undisturbed EMG was excellent and the mean difference between the two amplitudes was not significantly different. Furthermore, there was no significant difference between the corrected and undisturbed amplitude at different force levels. In conclusion, we have shown that it is feasible to record surface EMG during scanning and that, after MRI artifact correction, the EMG recordings can be used to quantify isometric muscle activity, even at very low activation intensities.
Oyeyemi, Victor B.; Krisiloff, David B.; Keith, John A.; Libisch, Florian; Pavone, Michele; Carter, Emily A.
2014-01-28
Oxygenated hydrocarbons play important roles in combustion science as renewable fuels and additives, but many details about their combustion chemistry remain poorly understood. Although many methods exist for computing accurate electronic energies of molecules at equilibrium geometries, a consistent description of entire combustion reaction potential energy surfaces (PESs) requires multireference correlated wavefunction theories. Here we use bond dissociation energies (BDEs) as a foundational metric to benchmark methods based on multireference configuration interaction (MRCI) for several classes of oxygenated compounds (alcohols, aldehydes, carboxylic acids, and methyl esters). We compare results from multireference singles and doubles configuration interaction to those utilizing a posteriori and a priori size-extensivity corrections, benchmarked against experiment and coupled cluster theory. We demonstrate that size-extensivity corrections are necessary for chemically accurate BDE predictions even in relatively small molecules and furnish examples of unphysical BDE predictions resulting from using too-small orbital active spaces. We also outline the specific challenges in using MRCI methods for carbonyl-containing compounds. The resulting complete basis set extrapolated, size-extensivity-corrected MRCI scheme produces BDEs generally accurate to within 1 kcal/mol, laying the foundation for this scheme's use on larger molecules and for more complex regions of combustion PESs.
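The benchmark metric itself is simple: a bond dissociation energy is the energy of the two radical fragments minus that of the parent molecule. A small helper, assuming electronic energies in hartree and the standard conversion factor:

```python
HARTREE_TO_KCAL_PER_MOL = 627.509

def bond_dissociation_energy(e_parent, e_fragment1, e_fragment2):
    """BDE = E(R1*) + E(R2*) - E(R1-R2); inputs in hartree,
    result in kcal/mol."""
    return (e_fragment1 + e_fragment2 - e_parent) * HARTREE_TO_KCAL_PER_MOL
```

"Chemically accurate to within 1 kcal/mol", as the abstract puts it, means the computed value of this quantity agrees with experiment to better than 1 kcal/mol.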
Beare, Richard; Brown, Michael J. I.; Pimbblet, Kevin
2014-12-20
We describe an accurate new method for determining absolute magnitudes, and hence also K-corrections, that is simpler than most previous methods, being based on a quadratic function of just one suitably chosen observed color. The method relies on the extensive and accurate new set of 129 empirical galaxy template spectral energy distributions from Brown et al. A key advantage of our method is that we can reliably estimate random errors in computed absolute magnitudes due to galaxy diversity, photometric error, and redshift error. We derive K-corrections for the five Sloan Digital Sky Survey filters and provide parameter tables for use by the astronomical community. Using the NYU Value-Added Galaxy Catalog, we compare our K-corrections with those from kcorrect. Our K-corrections produce absolute magnitudes that are generally in good agreement with kcorrect. Absolute griz magnitudes differ by less than 0.02 mag and those in the u band by ∼0.04 mag. The evolution of rest-frame colors as a function of redshift is better behaved using our method, with relatively few galaxies being assigned anomalously red colors and a tight red sequence being observed across the whole 0.0 < z < 0.5 redshift range.
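The functional form is compact enough to sketch directly. The coefficient values below are hypothetical placeholders; the actual tables depend on filter and redshift:

```python
def k_correction(color, coeffs):
    """Quadratic K-correction in a single observed color:
    K(c) = a0 + a1*c + a2*c**2.  Coefficient values are hypothetical;
    the published tables give them per filter and redshift."""
    a0, a1, a2 = coeffs
    return a0 + a1 * color + a2 * color ** 2

def absolute_magnitude(m_apparent, distance_modulus, color, coeffs):
    """M = m - DM(z) - K(color)."""
    return m_apparent - distance_modulus - k_correction(color, coeffs)
```

Because K depends on one observed color only, propagating photometric and redshift errors through this expression is straightforward, which is the advantage the abstract highlights.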
NASA Astrophysics Data System (ADS)
Zohoun, Sylvain; Agoua, Eusèbe; Degan, Gérard; Perre, Patrick
2002-08-01
This paper presents an experimental study of mass diffusion in the hygroscopic region of four temperate wood species and three tropical ones. To simplify the interpretation of the phenomena, a dimensionless parameter called the reduced diffusivity is defined, varying from 0 to 1. The method first determines this parameter from measurements of the mass flux, taking into account the operating conditions of a standard device (tightness, dimensional variations, easy installation of wood samples, and good stability of temperature and humidity). The reasons why this parameter has to be corrected are then presented, and an abacus for correcting the mass diffusivity of wood in the steady regime is plotted. This work provides an up-to-date reference for characterising these forest species.
Koubar, Khodor; Bekaert, Virgile; Brasse, David; Laquerriere, Patrice
2015-06-01
Bone mineral density plays an important role in the determination of bone strength and fracture risk, so it is very important to obtain accurate bone mineral density measurements. The microcomputerized tomography system provides 3D information about the architectural properties of bone. The accuracy of quantitative analysis is decreased by the presence of artefacts in the reconstructed images, mainly due to beam hardening (such as cupping artefacts). In this paper, we introduce a new beam hardening correction method based on a post-reconstruction technique performed with off-line water and bone linearization curves, experimentally calculated in order to take into account the nonhomogeneity of the scanned animal. To evaluate the mass correction rate, a calibration line was established to convert the reconstructed linear attenuation coefficients into bone masses. The correction method was then applied to a multi-material cylindrical phantom and to mouse skeleton images. Mass correction rates of up to 18% between uncorrected and corrected images were obtained, along with a marked improvement in the calculated mass of a mouse femur. The results were also compared with those obtained using the simple water linearization technique, which does not take the nonhomogeneity of the object into account.
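The water-linearization idea underlying such corrections can be sketched in a few lines: fit a polynomial that remaps beam-hardened projection values onto the ideal monochromatic line, here with simulated cupping (the paper extends this with a second, bone curve for nonhomogeneous objects; all numbers below are synthetic):

```python
import numpy as np

def linearization_curve(p_measured, thickness, mu_eff):
    """Fit a cubic that remaps beam-hardened projection values onto the
    ideal monochromatic line mu_eff * thickness (water linearization)."""
    coeffs = np.polyfit(p_measured, mu_eff * thickness, deg=3)
    return lambda p: np.polyval(coeffs, p)

# Simulated cupping: projections fall below the linear trend.
t = np.linspace(0.0, 5.0, 50)          # path length through water (cm)
mu = 0.2                               # effective attenuation (1/cm)
p = mu * t - 0.002 * t ** 2            # beam-hardened projection values
correct = linearization_curve(p, t, mu)
residual_before = np.abs(p - mu * t).max()
residual_after = np.abs(correct(p) - mu * t).max()
```

After remapping, reconstruction from the corrected projections is largely free of the cupping that biases attenuation-to-mass conversion.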
NASA Astrophysics Data System (ADS)
Kasaragod, Deepa; Sugiyama, Satoshi; Ikuno, Yasushi; Alonso-Caneiro, David; Yamanari, Masahiro; Fukuda, Shinichi; Oshika, Tetsuro; Hong, Young-Joo; Li, En; Makita, Shuichi; Miura, Masahiro; Yasuno, Yoshiaki
2016-03-01
Polarization sensitive optical coherence tomography (PS-OCT) is a functional extension of OCT that contrasts the polarization properties of tissues. It has been applied to ophthalmology, cardiology, and other fields. Proper quantitative imaging is required for widespread clinical utility. However, the conventional method of averaging to improve the signal-to-noise ratio (SNR) and the contrast of phase retardation (or birefringence) images introduces a noise bias offset from the true value. This bias reduces the effectiveness of birefringence contrast for quantitative studies. Although coherent averaging of Jones matrix tomography has been widely utilized and has improved image quality, the fundamental limitation of the nonlinear dependency of phase retardation and birefringence on SNR was not overcome, so the birefringence obtained by PS-OCT was still not accurate enough for quantitative imaging. The nonlinear effect of SNR on phase retardation and birefringence measurement was previously formulated in detail for Jones matrix OCT (JM-OCT) [1]. Based on this, we had developed a maximum a posteriori (MAP) estimator, and quantitative birefringence imaging was demonstrated [2]. However, this first version of the estimator had a theoretical shortcoming: it did not take into account the stochastic nature of the SNR of the OCT signal. In this paper, we present an improved version of the MAP estimator which takes the stochastic property of SNR into account. This estimator uses a probability distribution function (PDF) of the true local retardation, which is proportional to birefringence, under a specific set of measurements of the birefringence and SNR. The PDF was pre-computed by a Monte-Carlo (MC) simulation based on the mathematical model of JM-OCT before the measurement. A comparison between this new MAP estimator, our previous MAP estimator [2], and the standard mean estimator is presented. The comparisons are performed both by numerical simulation and in vivo measurements of anterior and
Peng, Xiangda; Zhang, Yuebin; Chu, Huiying; Li, Yan; Zhang, Dinglin; Cao, Liaoran; Li, Guohui
2016-06-14
Classical molecular dynamics (MD) simulation of membrane proteins faces significant challenges in accurately reproducing and predicting experimental observables such as ion conductance and permeability, owing to its inability to precisely describe the electronic interactions in heterogeneous systems. In this work, the free energy profiles of K(+) and Na(+) permeating through the gramicidin A channel are characterized by using the AMOEBA polarizable force field with a total sampling time of 1 μs. Our results indicate that by explicitly introducing the multipole terms and polarization into the electrostatic potentials, the permeation free energy barrier of K(+) through the gA channel is considerably reduced compared to the overestimated results obtained from the fixed-charge model. Moreover, the estimated maximum conductances, without any corrections, for both K(+) and Na(+) passing through the gA channel are much closer to the experimental results than those from any classical MD simulation, demonstrating the power of AMOEBA in investigating membrane proteins.
NASA Technical Reports Server (NTRS)
Hagstrom, Thomas; Hariharan, S. I.; Maccamy, R. C.
1993-01-01
We consider the solution of scattering problems for the wave equation using approximate boundary conditions at artificial boundaries. These conditions are explicitly viewed as approximations to an exact boundary condition satisfied by the solution on the unbounded domain. We study the short and long term behavior of the error. It is proved that, in two space dimensions, no local in time, constant coefficient boundary operator can lead to accurate results uniformly in time for the class of problems we consider. A variable coefficient operator is developed which attains better accuracy (uniformly in time) than is possible with constant coefficient approximations. The theory is illustrated by numerical examples. We also analyze the proposed boundary conditions using energy methods, leading to asymptotically correct error bounds.
2015-11-01
In the article by Heuslein et al, which published online ahead of print on September 3, 2015 (DOI: 10.1161/ATVBAHA.115.305775), a correction was needed. Brett R. Blackman was added as the penultimate author of the article. The article has been corrected for publication in the November 2015 issue.
NASA Astrophysics Data System (ADS)
Liu, Miao; Yin, Shibin; Yang, Shourui; Zhang, Zonghua
2015-10-01
Digital projectors are frequently used to generate fringe patterns in phase calculation-based three-dimensional (3D) imaging systems. Because a projector usually works together with a camera in such systems, the intensity response of the projector should be linear to ensure measurement precision, especially in Phase-Measuring Profilometry (PMP). Correction methods are therefore applied to cope with the nonlinear intensity response of the digital projector. These methods usually rely on a camera, and a gamma function is often applied to compensate the nonlinear response, so the correction performance is restricted by the dynamic range of the camera. In addition, the gamma function is not suitable for compensating a nonmonotonic intensity response. This paper proposes a gamma correction method based on precise detection of the optical energy, instead of using a plate and camera. A photodiode with high dynamic range and linear response is used to capture the light output of the digital projector directly. After the real gamma curve is obtained precisely with the photodiode, a gray-level look-up table (LUT) is generated to correct the image to be projected. Finally, the proposed method is verified experimentally.
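The LUT construction described above can be sketched as follows. This is a minimal illustration: the calibration levels, the simulated gamma-2.2 response, and the function name are hypothetical, standing in for the photodiode measurements.

```python
import numpy as np

def build_gamma_lut(levels, measured, n_gray=256):
    """Build a gray-level look-up table that linearizes a projector's
    intensity response, given measurements of output optical power at a
    set of commanded gray levels (requires a monotonic response)."""
    measured = np.asarray(measured, dtype=float)
    # Normalize the measured response to [0, 1]
    resp = (measured - measured.min()) / (measured.max() - measured.min())
    # Desired linear response for every output gray level
    target = np.linspace(0.0, 1.0, n_gray)
    # Invert the response curve: for each desired output intensity, find
    # the commanded gray level that produces it
    return np.interp(target, resp, levels).round().astype(np.uint8)

# Simulated gamma-2.2 projector response sampled at 17 calibration levels
levels = np.linspace(0, 255, 17)
measured = (levels / 255.0) ** 2.2
lut = build_gamma_lut(levels, measured)
```

Projecting `lut[image]` instead of `image` then yields an approximately linear measured intensity.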
Karton, A.; Martin, J. M. L.; Ruscic, B.; Chemistry; Weizmann Institute of Science
2007-06-01
A benchmark calculation of the atomization energy of the 'simple' organic molecule C2H6 (ethane) has been carried out by means of W4 theory. While the molecule is straightforward in terms of one-particle and n-particle basis set convergence, its large zero-point vibrational energy (and anharmonic correction thereto) and nontrivial diagonal Born-Oppenheimer correction (DBOC) represent interesting challenges. For the W4 set of molecules and C2H6, we show that DBOCs to the total atomization energy are systematically overestimated at the SCF level, and that the correlation correction converges very rapidly with the basis set. Thus, even at the CISD/cc-pVDZ level, useful correlation corrections to the DBOC are obtained. When applying such a correction, overall agreement with experiment was only marginally improved, but a more significant improvement is seen when hydrogen-containing systems are considered in isolation. We conclude that for closed-shell organic molecules, the greatest obstacles to highly accurate computational thermochemistry may not lie in the solution of the clamped-nuclei Schroedinger equation, but rather in the zero-point vibrational energy and the diagonal Born-Oppenheimer correction.
NASA Astrophysics Data System (ADS)
Yuan, Hong-Lin; Gao, Shan; Zong, Chun-Lei; Dai, Meng-Ning
2009-11-01
In this study, we employ a sectional power-law (SPL) correction that provides accurate and precise measurements of 176Lu/175Lu ratios in geological samples using multiple collector-inductively coupled plasma-mass spectrometry (MC-ICP-MS). Three independent power laws were adopted based on the 176Lu/176Yb ratios of samples measured after chemical chromatography. Using isotope dilution (ID) techniques and the SPL correction method, the measured lutetium contents of United States Geological Survey rock standards (BHVO-1, BHVO-2, BCR-2, AGV-1, and G-2) agree well with the recommended values. Results obtained by conventional ICP-MS and INAA are generally higher than those obtained by ID-TIMS and ID-MC-ICP-MS; this discrepancy probably reflects oxide interference and inaccurate corrections.
NASA Astrophysics Data System (ADS)
Liu, Xinming; Shaw, Chris C.; Wang, Tianpeng; Chen, Lingyun; Altunbas, Mustafa C.; Kappadath, S. Cheenu
2006-03-01
We developed and investigated a scanning sampled measurement (SSM) technique for scatter measurement and correction in cone beam breast CT imaging. A cylindrical polypropylene phantom (water equivalent) was mounted on a rotating table in a stationary-gantry experimental cone beam breast CT imaging system. A 2-D array of lead beads, spaced about 1 cm apart and slightly tilted vertically, was placed between the object and the x-ray source. A series of projection images was acquired as the phantom was rotated 1 degree per projection view and the lead bead array was shifted vertically from one projection view to the next. A series of lead bars was also placed at the phantom edge to produce a better scatter estimate across the phantom edges. Image signals in the lead bead/bar shadows were used to obtain sampled scatter measurements, which were then interpolated to form an estimated scatter distribution across the projection images. The image data behind the lead bead/bar shadows were restored by interpolating image data from two adjacent projection views to form beam-block-free projection images. The estimated scatter distribution was then subtracted from the corresponding restored projection image to obtain the scatter-removed projection images. Our preliminary experiment demonstrated that it is feasible to implement the SSM technique for scatter estimation and correction in cone beam breast CT imaging. Scatter correction was successfully performed on all projection images using the scatter distribution interpolated from SSM and the restored projection image data. The scatter-corrected projection image data resulted in elevated CT numbers and largely reduced cupping effects.
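The interpolate-and-subtract step above can be sketched in one dimension as follows. The sample positions, scatter values, and flat primary signal are hypothetical; the real method interpolates a 2-D scatter distribution across full projection images.

```python
import numpy as np

def estimate_scatter(sample_pos, sample_vals, n_cols):
    """Interpolate sparsely sampled scatter measurements (taken in the
    lead bead/bar shadows) across a detector row to form an estimated
    scatter profile."""
    cols = np.arange(n_cols)
    return np.interp(cols, sample_pos, sample_vals)

# Hypothetical one-row example: scatter sampled roughly every 64 columns
sample_pos = np.array([0, 64, 128, 192, 255])
sample_vals = np.array([10.0, 14.0, 18.0, 13.0, 9.0])

scatter = estimate_scatter(sample_pos, sample_vals, 256)
projection = np.full(256, 100.0) + scatter   # primary signal plus scatter
corrected = projection - scatter             # scatter-removed projection
```

Subtracting the interpolated estimate recovers the primary signal wherever the scatter field is smooth between samples.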
NASA Astrophysics Data System (ADS)
Sun, Yuansheng; Periasamy, Ammasi
2010-03-01
Förster resonance energy transfer (FRET) microscopy is commonly used to monitor protein interactions with filter-based imaging systems, which require spectral bleedthrough (or cross talk) correction to accurately measure energy transfer efficiency (E). The double-label (donor+acceptor) specimen is excited at the donor wavelength; the acceptor emission provides the uncorrected FRET signal, and the donor emission (the donor channel) represents the quenched donor (qD), the basis for the E calculation. Our results indicate this is not the most accurate determination of the quenched donor signal, as it fails to consider the donor spectral bleedthrough (DSBT) signals in the qD for the E calculation. Our new model addresses this, leading to a more accurate E result. This refinement improves E comparisons made with lifetime and spectral FRET imaging microscopy, as shown here using several genetic (FRET standard) constructs, where cerulean and venus fluorescent proteins are tethered by different amino acid linkers.
NASA Astrophysics Data System (ADS)
Liu, Zhangweiyi; Wang, Xiaocheng; Sun, Dongning; Dong, Yi; Hu, Weisheng
2015-08-01
We have demonstrated the optical generation and distribution of a highly stable millimeter-wave signal, transferring a 300 GHz signal to two remote ends over different optical fiber links for signal stability comparison. The transmission delay variations of each fiber link caused by temperature and mechanical perturbations are compensated by a high-precision phase-correction system. The residual phase noise between the two remote-end signals is detected by dual-heterodyne phase error transfer and reaches -46 dBc/Hz at 1 Hz frequency offset from the carrier. The relative instability is 8×10⁻¹⁷ at 1000 s averaging time.
NASA Astrophysics Data System (ADS)
Tranchida, Davide; Piccarolo, Stefano; Loos, Joachim; Alexeev, Alexander
2006-10-01
The Oliver and Pharr [J. Mater. Res. 7, 1564 (1992)] procedure is a widely used tool to analyze nanoindentation force curves obtained on metals or ceramics. Its application to polymers is, however, difficult, as Young's moduli are commonly overestimated mainly because of viscoelastic effects and pileup. However, polymers spanning a large range of morphologies have been used in this work to introduce a phenomenological correction factor. It depends on indenter geometry: sets of calibration indentations have to be performed on some polymers with known elastic moduli to characterize each indenter.
Bigdeli, T. Bernard; Lee, Donghyung; Webb, Bradley Todd; Riley, Brien P.; Vladimirov, Vladimir I.; Fanous, Ayman H.; Kendler, Kenneth S.; Bacanu, Silviu-Alin
2016-01-01
Motivation: For genetic studies, statistically significant variants explain far less trait variance than ‘sub-threshold’ association signals. To dimension follow-up studies, researchers need to accurately estimate ‘true’ effect sizes at each SNP, e.g. the true mean of odds ratios (ORs)/regression coefficients (RRs) or Z-score noncentralities. Naïve estimates of effect sizes incur winner’s curse biases, which are reduced only by laborious winner’s curse adjustments (WCAs). Given that Z-score estimates can be theoretically translated to other scales, we propose a simple method to compute WCA for Z-scores, i.e. their true means/noncentralities. Results: WCA of Z-scores shrinks them toward zero, while, on the P-value scale, multiple testing adjustment (MTA) shrinks P-values toward one, which corresponds to a zero Z-score. Thus, WCA on the Z-score scale is a proxy for MTA on the P-value scale. Therefore, to estimate Z-score noncentralities for all SNPs in genome scans, we propose the FDR Inverse Quantile Transformation (FIQT). It (i) performs the simpler MTA of P-values using FDR and (ii) obtains noncentralities by back-transforming MTA P-values to the Z-score scale. When compared to competitors, realistic simulations suggest that FIQT is more (i) accurate and (ii) computationally efficient by orders of magnitude. Practical application of FIQT to the Psychiatric Genetic Consortium schizophrenia cohort predicts a non-trivial fraction of sub-threshold signals which become significant in much larger supersamples. Conclusions: FIQT is a simple, yet accurate, WCA method for Z-scores (and ORs/RRs, via simple transformations). Availability and Implementation: A 10-line R function implementation is available at https://github.com/bacanusa/FIQT. Contact: sabacanu@vcu.edu Supplementary information: Supplementary data are available at Bioinformatics online.
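The two FIQT steps can be sketched as follows. This is a re-implementation of the idea described in the abstract, not the authors' R code: p-values are Benjamini-Hochberg adjusted, then mapped back to the Z-score scale with the original signs.

```python
import numpy as np
from statistics import NormalDist

_N = NormalDist()

def fiqt(z, min_p=1e-15):
    """FDR Inverse Quantile Transformation (sketch): (i) BH-adjust the
    two-sided p-values of the observed Z-scores, (ii) back-transform the
    adjusted p-values to the Z-score scale, keeping the original signs,
    giving winner's-curse-adjusted noncentrality estimates."""
    z = np.asarray(z, dtype=float)
    p = np.array([2 * _N.cdf(-abs(v)) for v in z])
    # Benjamini-Hochberg step-up adjustment
    n = p.size
    order = np.argsort(p)
    stepped = p[order] * n / np.arange(1, n + 1)
    adj = np.minimum.accumulate(stepped[::-1])[::-1]
    p_adj = np.empty(n)
    p_adj[order] = np.clip(adj, min_p, 1.0)
    # Back-transform: adjusted two-sided p -> |Z|, then restore the sign
    return np.array([s * _N.inv_cdf(1 - q / 2)
                     for s, q in zip(np.sign(z), p_adj)])
```

Because adjusted p-values are never smaller than the raw ones, every estimate is shrunk toward zero, mirroring a winner's curse adjustment.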
Hagen, Nils T
2008-01-01
Authorship credit for multi-authored scientific publications is routinely allocated either by issuing full publication credit repeatedly to all coauthors, or by dividing one credit equally among all coauthors. The ensuing inflationary and equalizing biases distort derived bibliometric measures of merit by systematically benefiting secondary authors at the expense of primary authors. Here I show how harmonic counting, which allocates credit according to authorship rank and the number of coauthors, provides simultaneous source-level correction for both biases as well as accommodating further decoding of byline information. I also demonstrate large and erratic effects of counting bias on the original h-index, and show how the harmonic version of the h-index provides unbiased bibliometric ranking of scientific merit while retaining the original's essential simplicity, transparency and intended fairness. Harmonic decoding of byline information resolves the conundrum of authorship credit allocation by providing a simple recipe for source-level correction of inflationary and equalizing bias. Harmonic counting could also offer unrivalled accuracy in automated assessments of scientific productivity, impact and achievement.
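Harmonic counting as described above has a simple closed form: the i-th of N coauthors receives credit proportional to 1/i, normalized so each paper distributes exactly one credit. A minimal sketch (the function name is ours):

```python
def harmonic_credit(n_authors):
    """Harmonic authorship credit: the i-th of N coauthors receives
    (1/i) / (1 + 1/2 + ... + 1/N), so per-paper credits sum to one,
    avoiding both the inflationary bias of full counting and the
    equalizing bias of fractional counting."""
    denom = sum(1.0 / k for k in range(1, n_authors + 1))
    return [(1.0 / i) / denom for i in range(1, n_authors + 1)]

# Example: a four-author byline
credits = harmonic_credit(4)  # first author gets 0.48, last gets 0.12
```

The first author always receives twice the credit of the second, three times that of the third, and so on, reflecting authorship rank.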
NASA Astrophysics Data System (ADS)
2012-09-01
The feature article "Material advantage?" on the effects of technology and rule changes on sporting performance (July pp28-30) stated that sprinters are less affected by lower oxygen levels at high altitudes because they run "aerobically". They run anaerobically. The feature about the search for the Higgs boson (August pp22-26) incorrectly gave the boson's mass as roughly 125 MeV; it is 125 GeV, as correctly stated elsewhere in the issue. The article also gave a wrong value for the intended collision energy of the Superconducting Super Collider, which was designed to collide protons with a total energy of 40 TeV.
2015-05-22
The Circulation Research article by Keith and Bolli (“String Theory” of c-kitpos Cardiac Cells: A New Paradigm Regarding the Nature of These Cells That May Reconcile Apparently Discrepant Results. Circ Res. 2015;116:1216-1230. doi: 10.1161/CIRCRESAHA.116.305557) states that van Berlo et al (2014) observed that large numbers of fibroblasts and adventitial cells, some smooth muscle and endothelial cells, and rare cardiomyocytes originated from c-kit positive progenitors. However, van Berlo et al reported that only occasional fibroblasts and adventitial cells derived from c-kit positive progenitors in their studies. Accordingly, the review has been corrected to indicate that van Berlo et al (2014) observed that large numbers of endothelial cells, with some smooth muscle cells and fibroblasts, and more rarely cardiomyocytes, originated from c-kit positive progenitors in their murine model. The authors apologize for this error, and the error has been noted and corrected in the online version of the article, which is available at http://circres.ahajournals.org/content/116/7/1216.full
Fang, Changming; Li, Wun-Fan; Koster, Rik S; Klimeš, Jiří; van Blaaderen, Alfons; van Huis, Marijn A
2015-01-01
Knowledge about the intrinsic electronic properties of water is imperative for understanding the behaviour of aqueous solutions that are used throughout biology, chemistry, physics, and industry. The calculation of the electronic band gap of liquids is challenging, because the most accurate ab initio approaches can be applied only to small numbers of atoms, while large numbers of atoms are required to obtain configurations that are representative of a liquid. Here we show that a high-accuracy value for the electronic band gap of water can be obtained by combining beyond-DFT methods and statistical time-averaging. Liquid water is simulated at 300 K using a plane-wave density functional theory molecular dynamics (PW-DFT-MD) simulation and a van der Waals density functional (optB88-vdW). After applying a self-consistent GW correction, the band gap of liquid water at 300 K is calculated as 7.3 eV, in good agreement with recent experimental observations in the literature (6.9 eV). For simulations of phase transformations and chemical reactions in water or aqueous solutions in which an accurate description of the electronic structure is required, we suggest using these advanced GW corrections in combination with the statistical analysis of quantum mechanical MD simulations.
Slagmolen, Pieter; Hermans, Jeroen; Maes, Frederik; Budiharto, Tom; Haustermans, Karin; Heuvel, Frank van den
2010-04-15
Purpose: A robust and accurate method is proposed for the automatic detection of fiducial markers in MV and kV projection image pairs, allowing automatic correction for inter- or intrafraction motion. Methods: Intratreatment MV projection images are acquired during each of five treatment beams of prostate cancer patients with four implanted fiducial markers. The projection images are first preprocessed using a series of marker-enhancing filters. 2D candidate marker locations are generated for each of the filtered projection images, and 3D candidate marker locations are reconstructed by pairing candidates in subsequent projection images. The correct marker positions are retrieved in 3D by minimization of a cost function that combines 2D image intensity and 3D geometric or shape information for the entire marker configuration simultaneously. This optimization problem is solved using dynamic programming such that the globally optimal configuration for all markers is always found. Translational interfraction and intrafraction prostate motion and the required patient repositioning are assessed from the position of the centroid of the detected markers in different MV image pairs. The method was validated on a phantom using CT as ground truth and on clinical data sets of 16 patients using manual marker annotations as ground truth. Results: The entire setup was confirmed to be accurate to around 1 mm by the phantom measurements. The reproducibility of the manual marker selection was less than 3.5 pixels in the MV images. In patient images, markers were correctly identified in at least 99% of the cases for anterior projection images and 96% of the cases for oblique projection images. The average marker detection accuracy was 1.4±1.8 pixels in the projection images. The centroid of all four reconstructed marker positions in 3D was positioned within 2 mm of the ground-truth position in 99.73% of all cases. Detecting four markers in a pair of MV images
Darby, B J; Todd, T C; Herman, M A
2013-11-01
Nematodes are abundant consumers in grassland soils, but more sensitive and specific methods of enumeration are needed to improve our understanding of how different nematode species affect, and are affected by, ecosystem processes. High-throughput amplicon sequencing is used to enumerate microbial and invertebrate communities at a high level of taxonomic resolution, but the method requires validation against traditional specimen-based morphological identifications. To investigate the consistency between these approaches, we enumerated nematodes from a 25-year field experiment using both morphological and molecular identification techniques in order to determine the long-term effects of annual burning and nitrogen enrichment on soil nematode communities. Family-level frequencies based on amplicon sequencing were not initially consistent with specimen-based counts, but correction for differences in rRNA gene copy number using a genetic algorithm improved quantitative accuracy. Multivariate analysis of corrected sequence-based abundances of nematode families was consistent with, but not identical to, analysis of specimen-based counts. In both cases, herbivores, fungivores and predator/omnivores generally were more abundant in burned than nonburned plots, while bacterivores generally were more abundant in nonburned or nitrogen-enriched plots. Discriminant analysis of sequence-based abundances identified putative indicator species representing each trophic group. We conclude that high-throughput amplicon sequencing can be a valuable method for characterizing nematode communities at high taxonomic resolution as long as rRNA gene copy number variation is accounted for and accurate sequence databases are available.
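The copy-number correction has a simple core, sketched below with hypothetical counts. Note the paper estimates the copy numbers themselves with a genetic algorithm; this sketch assumes they are already known and only shows the normalization step.

```python
import numpy as np

def correct_for_copy_number(read_counts, copy_numbers):
    """Convert amplicon read counts to estimated relative abundances by
    dividing each taxon's reads by its rRNA gene copy number and
    renormalizing, so taxa with many gene copies are no longer
    over-represented."""
    reads = np.asarray(read_counts, dtype=float)
    copies = np.asarray(copy_numbers, dtype=float)
    weighted = reads / copies
    return weighted / weighted.sum()

# Hypothetical three-family example: raw reads track copy number, so the
# corrected abundances come out equal
abund = correct_for_copy_number([600, 300, 100], [6.0, 3.0, 1.0])
```

Here three families that look 6:3:1 in raw reads are revealed to be equally abundant once copy number is accounted for.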
Spencer, Robert J; Axelrod, Bradley N; Drag, Lauren L; Waldron-Perrine, Brigid; Pangilinan, Percival H; Bieliauskas, Linas A
2013-01-01
Reliable Digit Span (RDS) is a measure of effort derived from the Digit Span subtest of the Wechsler intelligence scales. Some authors have suggested that the age-corrected scaled score provides a more accurate measure of effort than RDS. This study examined the relative diagnostic accuracy of the traditional RDS, an extended RDS including the new Sequencing task from the Wechsler Adult Intelligence Scale-IV, and the age-corrected scaled score, relative to performance validity as determined by the Test of Memory Malingering. Data were collected from 138 Veterans seen in a traumatic brain injury clinic. The traditional RDS (≤7), revised RDS (≤11), and Digit Span age-corrected scaled score (≤6) had respective sensitivities of 39%, 39%, and 33%, and respective specificities of 82%, 89%, and 91%. Of these three indices, the revised RDS and the Digit Span age-corrected scaled score provided the most accurate measures of performance validity.
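The RDS score itself follows a simple rule, sketched below: the longest string length at which both trials were repeated correctly, summed over the forward and backward conditions. The dictionary format and function name are our illustration, not a published scoring implementation.

```python
def reliable_digit_span(forward, backward):
    """Reliable Digit Span (sketch of the standard scoring rule): for
    each condition, take the longest string length at which BOTH trials
    were recalled correctly, then sum the forward and backward values.
    Trials are given as {length: (trial1_correct, trial2_correct)}."""
    def longest_reliable(trials):
        reliable = [length for length, (t1, t2) in trials.items() if t1 and t2]
        return max(reliable, default=0)
    return longest_reliable(forward) + longest_reliable(backward)

# Example: both 5-digit forward trials passed; both 3-digit backward passed
rds = reliable_digit_span(
    {4: (True, True), 5: (True, True), 6: (True, False)},
    {3: (True, True), 4: (False, True)},
)
```

With the traditional cutoff in the abstract, a score of 7 or below would raise concern about effort; this example scores 8 and passes.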
NASA Astrophysics Data System (ADS)
Hernandez-Pajares, M.; Juan, J.; Sanz, J.; Aragon-Angel, A.
2007-05-01
The main focus of this presentation is to show the recent improvements in real-time GNSS ionospheric determination extending the service area of the so-called "Wide Area Real Time Kinematic" technique (WARTK), which allows centimeter-error-level navigation up to hundreds of kilometers from the nearest GNSS reference site. Real-time GNSS navigation with centimeters of error has been feasible since the nineties thanks to the so-called "Real-Time Kinematic" technique (RTK), by exactly solving the integer values of the double-differenced carrier phase ambiguities. This was possible thanks to dual-frequency carrier phase data acquired simultaneously with data from a close (less than 10-20 km) reference GNSS site, under the assumption of common atmospheric effects on the satellite signal. This technique has been improved by different authors through the consideration of a network of reference sites. However, the differential ionospheric refraction has remained the main limiting factor on the applicable distance to the reference site. In this context the authors have been developing the Wide Area RTK technique (WARTK) in different works and projects since 1998, overcoming the mentioned limitations. In this way RTK becomes applicable with the existing sparse (Wide Area) networks of reference GPS stations, separated by hundreds of kilometers. Such networks are presently deployed in the context of other projects, such as SBAS support, over Europe and North America (EGNOS and WAAS, respectively), among other regions. In particular, WARTK is based on computing very accurate differential ionospheric corrections from a Wide Area network of permanent GNSS receivers and providing them in real time to the users. The key points addressed by the technique are an accurate real-time ionospheric modeling, combined with the corresponding geodetic model, by means of: a) A tomographic voxel model of the ionosphere
Frolov, Alexei M.; Wardlaw, David M.
2014-09-14
Mass-dependent and field shift components of the isotopic shift are determined to high accuracy for the ground 1¹S states of some light two-electron Li⁺, Be²⁺, B³⁺, and C⁴⁺ ions. To determine the field components of these isotopic shifts we apply the Racah-Rosental-Breit formula. We also determine the lowest order QED corrections to the isotopic shifts for each of these two-electron ions.
Li, Y.; Krieger, J.B.; Norman, M.R.; Iafrate, G.J.
1991-11-15
The optimized-effective-potential (OEP) method and a method developed recently by Krieger, Li, and Iafrate (KLI) are applied to the band-structure calculations of noble-gas and alkali halide solids employing the self-interaction-corrected (SIC) local-spin-density (LSD) approximation for the exchange-correlation energy functional. The resulting band gaps from both calculations are found to be in fair agreement with the experimental values. The discrepancies are typically within a few percent with results that are nearly the same as those of previously published orbital-dependent multipotential SIC calculations, whereas the LSD results underestimate the band gaps by as much as 40%. As in the LSD---and it is believed to be the case even for the exact Kohn-Sham potential---both the OEP and KLI predict valence-band widths which are narrower than those of experiment. In all cases, the KLI method yields essentially the same results as the OEP.
Osbahr, Inga; Krause, Joachim; Bachmann, Kai; Gutzmer, Jens
2015-10-01
Identification and accurate characterization of platinum-group minerals (PGMs) is usually a very cumbersome procedure due to their small grain size (typically below 10 µm) and inconspicuous appearance under reflected light. A novel strategy for finding PGMs and quantifying their composition was developed. It combines a mineral liberation analyzer (MLA), a point logging system, and electron probe microanalysis (EPMA). As a first step, the PGMs are identified using the MLA. Grains identified as PGMs are then marked and coordinates recorded and transferred to the EPMA. Case studies illustrate that the combination of MLA, point logging, and EPMA results in the identification of a significantly higher number of PGM grains than reflected light microscopy. Analysis of PGMs by EPMA requires considerable effort due to the often significant overlaps between the X-ray spectra of almost all platinum-group and associated elements. X-ray lines suitable for quantitative analysis need to be carefully selected. As peak overlaps cannot be avoided completely, an offline overlap correction based on weight proportions has been developed. Results obtained with the procedure proposed in this study attain acceptable totals and atomic proportions, indicating that the applied corrections are appropriate.
Szidarovszky, Tamás; Császár, Attila G.
2015-01-07
The total partition functions Q(T) and their first two moments Q′(T) and Q″(T), together with the isobaric heat capacities Cp(T), are computed a priori for three major MgH isotopologues on the temperature range of T = 100-3000 K using the recent highly accurate potential energy curve, spin-rotation, and non-adiabatic correction functions of Henderson et al. [J. Phys. Chem. A 117, 13373 (2013)]. Nuclear motion computations are carried out on the ground electronic state to determine the (ro)vibrational energy levels and the scattering phase shifts. The effect of resonance states is found to be significant above about 1000 K and it increases with temperature. Even very short-lived states, due to their relatively large number, have significant contributions to Q(T) at elevated temperatures. The contribution of scattering states is around one fourth of that of resonance states but opposite in sign. Uncertainty estimates are given for the possible error sources, suggesting that all computed thermochemical properties have an accuracy better than 0.005% up to 1200 K. Between 1200 and 2500 K, the uncertainties can rise to around 0.1%, while between 2500 K and 3000 K, a further increase to 0.5% might be observed for Q″(T) and Cp(T), principally due to the neglect of excited electronic states. The accurate thermochemical data determined are presented in the supplementary material for the three isotopologues ²⁴MgH, ²⁵MgH, and ²⁶MgH at 1 K increments. These data, which differ significantly from older standard data, should prove useful for astronomical models incorporating thermodynamic properties of these species.
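The direct-sum form of the partition function underlying Q(T) can be sketched as follows. The level list here is a toy rigid-rotor-like example, not the actual MgH term values, and it omits the resonance and scattering-state contributions discussed in the abstract.

```python
import math

def partition_function(levels, T):
    """Direct-sum partition function Q(T) = sum_i g_i exp(-E_i/(k_B T)),
    with level energies E_i in cm^-1 and degeneracies g_i = 2J + 1."""
    c2 = 1.4387769  # second radiation constant hc/k_B in cm*K
    return sum(g * math.exp(-E * c2 / T) for E, g in levels)

# Toy level list: (energy in cm^-1, degeneracy 2J+1) for a few rotational
# levels; illustrative values only
levels = [(0.0, 1), (11.0, 3), (33.0, 5), (66.0, 7)]
Q = partition_function(levels, 300.0)
```

At high temperature Q approaches the sum of the degeneracies, as every included level becomes equally populated.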
NASA Astrophysics Data System (ADS)
Zhang, T.; Zhou, L.; Tong, S.
2015-12-01
The absolute determination of the Cu isotope ratio in NIST SRM 3114 based on a regression mass bias correction model is performed for the first time with NIST SRM 944 Ga as the calibrant. A value of 0.4471±0.0013 (2SD, n=37) for the 65Cu/63Cu ratio was obtained, with a value of +0.18±0.04‰ (2SD, n=5) for δ65Cu relative to NIST 976. The availability of the NIST SRM 3114 material, now with an absolute value of the 65Cu/63Cu ratio and a δ65Cu value relative to NIST 976, makes it suitable as a new candidate reference material for Cu isotope studies. In addition, a protocol is described for the accurate and precise determination of δ65Cu values of geological reference materials. Purification of Cu from the sample matrix was performed using the AG MP-1M Bio-Rad resin. The column recovery for geological samples was found to be 100±2% (2SD, n=15). A modified method of standard-sample bracketing with internal normalization for mass bias correction was employed by adding natural Ga to both the sample and the solution of NIST SRM 3114, which was used as the bracketing standard. The absolute value of 0.4471±0.0013 (2SD, n=37) for 65Cu/63Cu quantified in this study was used to calibrate the 69Ga/71Ga ratio in the two adjacent bracketing standards of SRM 3114; their average 69Ga/71Ga value was then used to correct the 65Cu/63Cu ratio in the sample. Measured δ65Cu values of 0.18±0.04‰ (2SD, n=20), 0.13±0.04‰ (2SD, n=9), 0.08±0.03‰ (2SD, n=6), 0.01±0.06‰ (2SD, n=4) and 0.26±0.04‰ (2SD, n=7) were obtained for the five geological reference materials BCR-2, BHVO-2, AGV-2, BIR-1a, and GSP-2, respectively, in agreement with values obtained in previous studies.
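The internal-normalization step can be sketched with the exponential mass-fractionation law: the bias factor determined from the Ga ratio is transferred to the Cu ratio. This is a generic sketch of that step, not the authors' regression model; the isotope masses are approximate values included for illustration.

```python
import math

# Approximate atomic masses (u) of the isotopes involved (illustrative)
M63, M65 = 62.9295975, 64.9277895   # 63Cu, 65Cu
M69, M71 = 68.9255736, 70.9247013   # 69Ga, 71Ga

def exp_law_beta(r_true, r_meas, m_num, m_den):
    """Mass-bias factor beta of the exponential law:
    r_true = r_meas * (m_num / m_den) ** beta."""
    return math.log(r_true / r_meas) / math.log(m_num / m_den)

def correct_cu(r65_63_meas, r69_71_meas, r69_71_true):
    """Correct a measured 65Cu/63Cu ratio using the Ga internal standard:
    beta derived from the 69Ga/71Ga ratio is applied to the Cu pair."""
    beta = exp_law_beta(r69_71_true, r69_71_meas, M69, M71)
    return r65_63_meas * (M65 / M63) ** beta
```

With identical fractionation of Cu and Ga (the core assumption of internal normalization), a synthetically biased measurement is restored exactly.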
Trinquier, Anne; Touboul, Mathieu; Walker, Richard J
2016-02-01
Determination of the ¹⁸²W/¹⁸⁴W ratio to a precision of ±5 ppm (2σ) is desirable for constraining the timing of core formation and other early planetary differentiation processes. However, WO₃⁻ analysis by negative thermal ionization mass spectrometry normally results in a residual correlation between the instrumental-mass-fractionation-corrected ¹⁸²W/¹⁸⁴W and ¹⁸³W/¹⁸⁴W ratios that is attributed to mass-dependent variability of O isotopes over the course of an analysis and between different analyses. A second-order correction using the ¹⁸³W/¹⁸⁴W ratio relies on the assumption that this ratio is constant in nature. This may prove invalid, as has already been realized for other isotope systems. The present study utilizes simultaneous monitoring of the ¹⁸O/¹⁶O and W isotope ratios to correct oxide interferences on a per-integration basis and thus avoid the need for a double normalization of W isotopes. After normalization of W isotope ratios to a pair of W isotopes, following the exponential law, no residual W-O isotope correlation is observed. However, there is a nonideal mass bias residual correlation between ¹⁸²W/ⁱW and ¹⁸³W/ⁱW with time. Without double normalization of W isotopes and on the basis of three or four duplicate analyses, the external reproducibility per session of ¹⁸²W/¹⁸⁴W and ¹⁸³W/¹⁸⁴W normalized to ¹⁸⁶W/¹⁸³W is 5-6 ppm (2σ, 1-3 μg loads). The combined uncertainty per session is less than 4 ppm for ¹⁸³W/¹⁸⁴W and less than 6 ppm for ¹⁸²W/¹⁸⁴W (2σm) for loads between 50 and 3000 ng.
Political Correctness--Correct?
ERIC Educational Resources Information Center
Boase, Paul H.
1993-01-01
Examines the phenomenon of political correctness, its roots and objectives, and its successes and failures in coping with the conflicts and clashes of multicultural campuses. Argues that speech codes indicate failure in academia's primary mission to civilize and educate through talk, discussion, thought, and persuasion. (SR)
NASA Astrophysics Data System (ADS)
Chimot, J.; Vlemmix, T.; Veefkind, J. P.; de Haan, J. F.; Levelt, P. F.
2015-08-01
The Ozone Monitoring Instrument (OMI) has provided daily global measurements of tropospheric NO2 for more than a decade. Numerous studies have drawn attention to the complexities related to measurements of tropospheric NO2 in the presence of aerosols. Fine particles affect the OMI spectral measurements and the length of the average light path followed by the photons. However, they are not explicitly taken into account in the current OMI tropospheric NO2 retrieval chain. Instead, the operational OMI O2-O2 cloud retrieval algorithm is applied both to cloudy scenes and to cloud-free scenes with aerosols present. This paper describes in detail the complex interplay between the spectral effects of aerosols, the OMI O2-O2 cloud retrieval algorithm and the impact on the accuracy of the tropospheric NO2 retrievals through the computed Air Mass Factor (AMF) over cloud-free scenes. Collocated OMI NO2 and MODIS Aqua aerosol products are analysed over the industrialized East China area. In addition, aerosol effects on the tropospheric NO2 AMF and the retrieval of OMI cloud parameters are simulated. Both the observation-based and the simulation-based approaches demonstrate that the retrieved cloud fraction increases linearly with increasing Aerosol Optical Thickness (AOT), but the magnitude of this increase depends on the aerosol properties and surface albedo. This increase is induced by the additional scattering effects of aerosols, which enhance the scene brightness. The decreasing effective cloud pressure with increasing AOT represents primarily the absorbing effects of aerosols. The study cases show that the actual aerosol correction based on the implemented OMI cloud model results in biases between -20 and -40 % for the DOMINO tropospheric NO2 product in cases of high aerosol pollution (AOT ≥ 0.6) and elevated particles. On the contrary, when aerosols are relatively close to the surface or mixed with NO2, aerosol correction based on the cloud model results in
NASA Astrophysics Data System (ADS)
Rocklin, Gabriel J.; Mobley, David L.; Dill, Ken A.; Hünenberger, Philippe H.
2013-11-01
The calculation of a protein-ligand binding free energy based on molecular dynamics (MD) simulations generally relies on a thermodynamic cycle in which the ligand is alchemically inserted into the system, both in the solvated protein and free in solution. The corresponding ligand-insertion free energies are typically calculated in nanoscale computational boxes simulated under periodic boundary conditions and considering electrostatic interactions defined by a periodic lattice-sum. This is distinct from the ideal bulk situation of a system of macroscopic size simulated under non-periodic boundary conditions with Coulombic electrostatic interactions. This discrepancy results in finite-size effects, which affect primarily the charging component of the insertion free energy, are dependent on the box size, and can be large when the ligand bears a net charge, especially if the protein is charged as well. This article investigates finite-size effects on calculated charging free energies using as a test case the binding of the ligand 2-amino-5-methylthiazole (net charge +1 e) to a mutant form of yeast cytochrome c peroxidase in water. Considering different charge isoforms of the protein (net charges -5, 0, +3, or +9 e), either in the absence or the presence of neutralizing counter-ions, and sizes of the cubic computational box (edges ranging from 7.42 to 11.02 nm), the potentially large magnitude of finite-size effects on the raw charging free energies (up to 17.1 kJ mol-1) is demonstrated. Two correction schemes are then proposed to eliminate these effects, a numerical and an analytical one. Both schemes are based on a continuum-electrostatics analysis and require performing Poisson-Boltzmann (PB) calculations on the protein-ligand system. While the numerical scheme requires PB calculations under both non-periodic and periodic boundary conditions, the latter at the box size considered in the MD simulations, the analytical scheme only requires three non-periodic PB
NASA Astrophysics Data System (ADS)
Chimot, J.; Vlemmix, T.; Veefkind, J. P.; de Haan, J. F.; Levelt, P. F.
2016-02-01
The Ozone Monitoring Instrument (OMI) has provided daily global measurements of tropospheric NO2 for more than a decade. Numerous studies have drawn attention to the complexities related to measurements of tropospheric NO2 in the presence of aerosols. Fine particles affect the OMI spectral measurements and the length of the average light path followed by the photons. However, they are not explicitly taken into account in the current operational OMI tropospheric NO2 retrieval chain (DOMINO - Derivation of OMI tropospheric NO2) product. Instead, the operational OMI O2 - O2 cloud retrieval algorithm is applied both to cloudy and to cloud-free scenes (i.e. clear sky) dominated by the presence of aerosols. This paper describes in detail the complex interplay between the spectral effects of aerosols in the satellite observation and the associated response of the OMI O2 - O2 cloud retrieval algorithm. Then, it evaluates the impact on the accuracy of the tropospheric NO2 retrievals through the computed Air Mass Factor (AMF) with a focus on cloud-free scenes. For that purpose, collocated OMI NO2 and MODIS (Moderate Resolution Imaging Spectroradiometer) Aqua aerosol products are analysed over the strongly industrialized East China area. In addition, aerosol effects on the tropospheric NO2 AMF and the retrieval of OMI cloud parameters are simulated. Both the observation-based and the simulation-based approaches demonstrate that the retrieved cloud fraction increases with increasing Aerosol Optical Thickness (AOT), but the magnitude of this increase depends on the aerosol properties and surface albedo. This increase is induced by the additional scattering effects of aerosols which enhance the scene brightness. The decreasing effective cloud pressure with increasing AOT primarily represents the shielding effects of the O2 - O2 column located below the aerosol layers. The study cases show that the aerosol correction based on the implemented OMI cloud model results in biases
New analysis of the light time effect in TU Ursae Majoris
NASA Astrophysics Data System (ADS)
Liška, J.; Skarka, M.; Mikulášek, Z.; Zejda, M.; Chrastina, M.
2016-05-01
Context. Recent statistical studies prove that the percentage of RR Lyrae pulsators that are located in binaries or multiple stellar systems is considerably lower than might be expected. This can be better understood from an in-depth analysis of individual candidates. We investigate in detail the light time effect of the most probable binary candidate TU UMa. This is complicated because the pulsation period shows secular variation. Aims: We model the possible light time effect of TU UMa using a new code applied to previously available and newly determined maxima timings to confirm binarity and refine the parameters of the orbit of the RRab component in the binary system. The binary hypothesis is also tested using radial velocity measurements. Methods: We used a new approach to determine brightness maxima timings based on template fitting, which can also be used on sparse or scattered data. This approach was successfully applied to measurements from different sources. To determine the orbital parameters of the double star TU UMa, we developed a new code to analyse the light time effect that also includes secular variation in the pulsation period. Its usability was successfully tested on CL Aur, an eclipsing binary with mass transfer in a triple system that shows similar changes in the O-C diagram. Since orbital motion would cause systematic shifts in mean radial velocities (dominated by pulsations), we computed and compared our model with centre-of-mass velocities. They were determined using high-quality templates of radial velocity curves of RRab stars. Results: Maxima timings adopted from the GEOS database (168) together with those newly determined from sky surveys and new measurements (85) were used to construct an O-C diagram spanning almost five proposed orbital cycles. This data set is three times larger than the data sets used by previous authors. Modelling of the O-C dependence resulted in a 23.3-yr orbital period, which translates into a minimum mass of the second component of
NASA Astrophysics Data System (ADS)
Deng, Xue-Mei
2016-05-01
The light time equation and frequency shift are worked out in the framework of a second parametrized post-Newtonian (2PPN) formalism in the Solar System barycentric reference system (SSBRS) developed in a recently published paper. Effects of each body’s oblateness, spin and translational motion are taken into account for the light propagation. It is found that, at the second post-Newtonian (2PN) approximation, the light time and frequency shift depend on the parameter η only.
Gillespie, Thomas W; Frankenberg, Elizabeth; Chum, Kai Fung; Thomas, Duncan
2014-01-01
On 26 December 2004, a magnitude 9.2 earthquake off the west coast of northern Sumatra, Indonesia, resulted in the deaths of some 160,000 Indonesians. We examine the Defense Meteorological Satellite Program-Operational Linescan System (DMSP-OLS) nighttime light imagery brightness values for 307 communities in the Study of the Tsunami Aftermath and Recovery (STAR), a household survey in Sumatra from 2004 to 2008. We examined relationships in the nighttime light time series between annual brightness and the extent of damage, and economic metrics collected from STAR households and aggregated to the community level. There were significant changes in brightness values from 2004 to 2008, with a significant drop in brightness in 2005 due to the tsunami and a return to pre-tsunami nighttime light values in 2006 for all damage zones. There were significant relationships between the nighttime imagery brightness and per capita expenditures, and spending on energy and on food. Results suggest that Defense Meteorological Satellite Program nighttime light imagery can be used to capture the impacts of and recovery from the tsunami and other natural disasters and to estimate time series economic metrics at the community level in developing countries.
The Near-contact Binary RZ Draconis with Two Possible Light-time Orbits
NASA Astrophysics Data System (ADS)
Yang, Y.-G.; Li, H.-L.; Dai, H.-F.; Zhang, L.-Y.
2010-12-01
We present new multicolor photometry for RZ Draconis, observed in 2009 at the Xinglong Station of the National Astronomical Observatories of China. By using the updated version of the Wilson-Devinney code, the photometric-spectroscopic elements were deduced from new photometric observations and published radial velocity data. The mass ratio and orbital inclination are q = 0.375(±0.002) and i = 84°.60(±0°.13), respectively. The fill-out factor of the primary is f = 98.3%, implying that RZ Dra is an Algol-like near-contact binary. Based on 683 light minimum times from 1907 to 2009, the orbital period change was investigated in detail. From the O - C curve, it is found that two quasi-sinusoidal variations may exist (i.e., P3 = 75.62(±2.20) yr and P4 = 27.59(±0.10) yr), which likely result from light-time effects via the presence of two additional bodies. In a coplanar orbit with the binary system, the third and fourth bodies may be low-mass dwarfs (i.e., M3 = 0.175 M⊙ and M4 = 0.074 M⊙). If this is true, RZ Dra may be a quadruple star. The additional bodies could extract angular momentum from the binary system, which may cause the orbit to shrink. With the orbit shrinking, the primary may fill its Roche lobe and RZ Dra evolves into a contact configuration.
Accurate monotone cubic interpolation
NASA Technical Reports Server (NTRS)
Huynh, Hung T.
1991-01-01
Monotone piecewise cubic interpolants are simple and effective. They are generally third-order accurate, except near strict local extrema, where accuracy degenerates to second order due to the monotonicity constraint. Algorithms for piecewise cubic interpolants that preserve monotonicity as well as uniform third- and fourth-order accuracy are presented. The gain in accuracy is obtained by relaxing the monotonicity constraint in a geometric framework in which the median function plays a crucial role.
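A basic monotone cubic interpolant built around the median function can be sketched as follows. This is a simplified Fritsch-Carlson-style limiter for illustration only, not the higher-order algorithms of the paper; the function names are hypothetical:

```python
import numpy as np

def median3(a, b, c):
    """Median of three numbers, the building block of median-based limiters."""
    return sorted((a, b, c))[1]

def monotone_slopes(x, y):
    """Limited derivative estimates that guarantee a monotone Hermite interpolant.
    Interior slopes are clipped to median(0, average secant, 3*minmod of the
    adjacent secants), which keeps d_i/Delta_i in [0, 3]."""
    dx = np.diff(x)
    delta = np.diff(y) / dx                      # secant slopes
    d = np.zeros(len(x))
    for i in range(1, len(x) - 1):
        if delta[i - 1] * delta[i] <= 0:
            d[i] = 0.0                           # strict local extremum: flatten
        else:
            avg = 0.5 * (delta[i - 1] + delta[i])
            lim = 3.0 * np.sign(delta[i]) * min(abs(delta[i - 1]), abs(delta[i]))
            d[i] = median3(0.0, avg, lim)
    d[0], d[-1] = delta[0], delta[-1]            # one-sided end slopes
    return d

def hermite_eval(x, y, d, xq):
    """Evaluate the piecewise cubic Hermite interpolant at points xq."""
    i = np.clip(np.searchsorted(x, xq) - 1, 0, len(x) - 2)
    h = x[i + 1] - x[i]
    t = (np.asarray(xq) - x[i]) / h
    h00 = (1 + 2 * t) * (1 - t) ** 2
    h10 = t * (1 - t) ** 2
    h01 = t ** 2 * (3 - 2 * t)
    h11 = t ** 2 * (t - 1)
    return h00 * y[i] + h10 * h * d[i] + h01 * y[i + 1] + h11 * h * d[i + 1]
```

Because every slope stays within three times the smaller neighboring secant and shares its sign, the Hermite cubic is monotone on each interval.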
Accurate Finite Difference Algorithms
NASA Technical Reports Server (NTRS)
Goodrich, John W.
1996-01-01
Two families of finite difference algorithms for computational aeroacoustics are presented and compared. All of the algorithms are single-step explicit methods; they have the same order of accuracy in both space and time, with examples up to eleventh order, and they have multidimensional extensions. One of the algorithm families has spectral-like high resolution. Propagation with high-order and high-resolution algorithms can produce accurate results after O(10⁶) periods of propagation with eight grid points per wavelength.
NASA Astrophysics Data System (ADS)
Barbieri, Riccardo
2016-10-01
The test of the electroweak corrections has played a major role in providing evidence for the gauge and the Higgs sectors of the Standard Model. At the same time the consideration of the electroweak corrections has given significant indirect information on the masses of the top and the Higgs boson before their discoveries and important orientation/constraints on the searches for new physics, still highly valuable in the present situation. The progression of these contributions is reviewed.
Accurate quantum chemical calculations
NASA Technical Reports Server (NTRS)
Bauschlicher, Charles W., Jr.; Langhoff, Stephen R.; Taylor, Peter R.
1989-01-01
An important goal of quantum chemical calculations is to provide an understanding of chemical bonding and molecular electronic structure. A second goal, the prediction of energy differences to chemical accuracy, has been much harder to attain. First, the computational resources required to achieve such accuracy are very large, and second, it is not straightforward to demonstrate that an apparently accurate result, in terms of agreement with experiment, does not result from a cancellation of errors. Recent advances in electronic structure methodology, coupled with the power of vector supercomputers, have made it possible to solve a number of electronic structure problems exactly using the full configuration interaction (FCI) method within a subspace of the complete Hilbert space. These exact results can be used to benchmark approximate techniques that are applicable to a wider range of chemical and physical problems. The methodology of many-electron quantum chemistry is reviewed. Methods are considered in detail for performing FCI calculations. The application of FCI methods to several three-electron problems in molecular physics are discussed. A number of benchmark applications of FCI wave functions are described. Atomic basis sets and the development of improved methods for handling very large basis sets are discussed: these are then applied to a number of chemical and spectroscopic problems; to transition metals; and to problems involving potential energy surfaces. Although the experiences described give considerable grounds for optimism about the general ability to perform accurate calculations, there are several problems that have proved less tractable, at least with current computer resources, and these and possible solutions are discussed.
Accurate Optical Reference Catalogs
NASA Astrophysics Data System (ADS)
Zacharias, N.
2006-08-01
Current and near future all-sky astrometric catalogs on the ICRF are reviewed with the emphasis on reference star data at optical wavelengths for user applications. The standard error of a Hipparcos Catalogue star position is now about 15 mas per coordinate. For the Tycho-2 data it is typically 20 to 100 mas, depending on magnitude. The USNO CCD Astrograph Catalog (UCAC) observing program was completed in 2004 and reductions toward the final UCAC3 release are in progress. This all-sky reference catalogue will have positional errors of 15 to 70 mas for stars in the 10 to 16 mag range, with a high degree of completeness. Proper motions for the roughly 60 million UCAC stars will be derived by combining UCAC astrometry with available early epoch data, including yet unpublished scans of the complete set of AGK2, Hamburg Zone astrograph and USNO Black Birch programs. Accurate positional and proper motion data are combined in the Naval Observatory Merged Astrometric Dataset (NOMAD), which includes Hipparcos, Tycho-2, UCAC2, USNO-B1, and NPM+SPM plate scan data for astrometry, and is supplemented by multi-band optical photometry as well as 2MASS near infrared photometry. The Milli-Arcsecond Pathfinder Survey (MAPS) mission is currently being planned at USNO. This is a micro-satellite to obtain 1 mas positions, parallaxes, and 1 mas/yr proper motions for all bright stars down to about 15th magnitude. This program will be supplemented by a ground-based program to reach 18th magnitude on the 5 mas level.
NASA Technical Reports Server (NTRS)
Waegell, Mordecai J.; Palacios, David M.
2011-01-01
Jitter_Correct.m is a MATLAB function that automatically measures and corrects inter-frame jitter in an image sequence to a user-specified precision. In addition, the algorithm dynamically adjusts the image sample size to increase the accuracy of the measurement. The Jitter_Correct.m function takes an image sequence with unknown frame-to-frame jitter and computes the translations of each frame (column and row, in pixels) relative to a chosen reference frame with sub-pixel accuracy. The translations are measured using a cross-correlation Fourier transform method in which the relative phase of the two transformed images is fit to a plane. The measured translations are then used to correct the inter-frame jitter of the image sequence. The function also dynamically expands the image sample size over which the cross-correlation is measured to increase the accuracy of the measurement. This increases the robustness of the measurement to variable magnitudes of inter-frame jitter.
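The integer-pixel core of such a cross-correlation jitter measurement can be sketched in a few lines. This is a Python stand-in for the MATLAB function, under stated simplifications: the sub-pixel plane fit to the cross-spectrum phase and the dynamic sample-size adjustment are omitted.

```python
import numpy as np

def measure_shift(ref, frame):
    """Return the (row, col) shift that aligns `frame` back onto `ref`,
    found as the peak of the phase correlation surface (integer pixels)."""
    F = np.fft.fft2(ref)
    G = np.fft.fft2(frame)
    xps = F * np.conj(G)
    xps /= np.abs(xps) + 1e-15              # normalized cross-power spectrum
    corr = np.fft.ifft2(xps).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap peaks beyond half the frame size into negative shifts.
    return tuple(p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape))

def correct_jitter(frames, ref_index=0):
    """Shift every frame onto the reference frame (periodic roll for brevity)."""
    ref = frames[ref_index]
    out = []
    for f in frames:
        dr, dc = measure_shift(ref, f)
        out.append(np.roll(np.roll(f, dr, axis=0), dc, axis=1))
    return out
```

Rolling the image is a periodic simplification; a production version would crop or pad the registered frames instead.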
Accurate hydrogen depth profiling by reflection elastic recoil detection analysis
Verda, R. D.; Tesmer, Joseph R.; Nastasi, Michael Anthony,; Bower, R. W.
2001-01-01
A technique to convert reflection elastic recoil detection analysis spectra to depth profiles, the channel-depth conversion, was introduced by Verda et al. [1]. But the channel-depth conversion does not correct for energy spread, the unwanted broadening in the energy of the spectra, which can lead to errors in depth profiling. A work in progress introduces a technique that corrects for energy spread in elastic recoil detection analysis spectra, the energy spread correction [2]. Together, the energy spread correction and the channel-depth conversion comprise an accurate and convenient hydrogen depth profiling method.
Timebias corrections to predictions
NASA Technical Reports Server (NTRS)
Wood, Roger; Gibbs, Philip
1993-01-01
The importance of an accurate knowledge of the time bias corrections to predicted orbits to a satellite laser ranging (SLR) observer, especially for low satellites, is highlighted. Sources of time bias values and the optimum strategy for extrapolation are discussed from the viewpoint of the observer wishing to maximize the chances of getting returns from the next pass. What is said may be seen as a commercial encouraging wider and speedier use of existing data centers for mutually beneficial exchange of time bias data.
Johnson, D
1940-03-22
In a recently published volume on "The Origin of Submarine Canyons" the writer inadvertently credited to A. C. Veatch an excerpt from a submarine chart actually contoured by P. A. Smith, of the U. S. Coast and Geodetic Survey. The chart in question is Chart IVB of Special Paper No. 7 of the Geological Society of America entitled "Atlantic Submarine Valleys of the United States and the Congo Submarine Valley", by A. C. Veatch and P. A. Smith, and the excerpt appears as Plate III of the volume first cited above. In view of the heavy labor involved in contouring the charts accompanying the paper by Veatch and Smith and the beauty of the finished product, it would be unfair to Mr. Smith to permit the error to go uncorrected. Excerpts from two other charts are correctly ascribed to Dr. Veatch.
The importance of accurate atmospheric modeling
NASA Astrophysics Data System (ADS)
Payne, Dylan; Schroeder, John; Liang, Pang
2014-11-01
This paper will focus on the effect of atmospheric conditions on EO sensor performance using computer models. We have shown the importance of accurately modeling atmospheric effects for predicting the performance of an EO sensor. A simple example demonstrates how real conditions at several sites in China can significantly impact image correction, hyperspectral imaging, and remote sensing. The current state-of-the-art model for computing atmospheric transmission and radiance is MODTRAN® 5, developed by the US Air Force Research Laboratory and Spectral Science, Inc. Research by the US Air Force, Navy and Army resulted in the public release of LOWTRAN 2 in the early 1970s. Subsequent releases of LOWTRAN and MODTRAN® have continued until the present. The paper will demonstrate the importance of using validated models and locally measured meteorological, atmospheric and aerosol conditions to accurately simulate the atmospheric transmission and radiance. Frequently default conditions are used, which can produce errors of as much as 75% in these values. This can have significant impact on remote sensing applications.
NASA Astrophysics Data System (ADS)
Yang, Y.-G.; Li, H.-L.; Dai, H.-F.
2012-01-01
We present the CCD photometry of two Algol-type binaries, AL Gem and BM Mon, observed from 2008 November to 2011 January. With the updated Wilson-Devinney program, photometric solutions were deduced from their EA-type light curves. The mass ratios and fill-out factors of the primaries are found to be qph = 0.090(±0.005) and f1 = 47.3%(±0.3%) for AL Gem, and qph = 0.275(±0.007) and f1 = 55.4%(±0.5%) for BM Mon, respectively. By analyzing the O-C curves, we discovered that the periods of AL Gem and BM Mon change in a quasi-sinusoidal mode, which may possibly result from the light-time effect via the presence of a third body. Periods, amplitudes, and eccentricities of the light-time orbits are 78.83(±1.17) yr, 0.0204 d (±0.0007 d), and 0.28(±0.02) for AL Gem and 97.78(±2.67) yr, 0.0175 d (±0.0006 d), and 0.29(±0.02) for BM Mon, respectively. Assuming a coplanar orbit with the binary, the masses of the third bodies would be 0.29 M⊙ for AL Gem and 0.26 M⊙ for BM Mon. This kind of additional companion can extract angular momentum from the close binary orbit, and such processes may play an important role in multiple star evolution.
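A light-time analysis of this kind amounts to fitting the O-C residuals with a quadratic ephemeris (secular period change) plus an orbital sinusoid. A minimal sketch, assuming a circular light-time orbit and a simple grid search over candidate periods (all names and the synthetic data in the test are illustrative):

```python
import numpy as np

def oc_model_fit(epochs, oc, trial_periods):
    """Fit O-C residuals with c0 + c1*E + c2*E^2 + a*sin(wE) + b*cos(wE),
    scanning candidate orbital periods (in cycle counts). The model is
    linear in its coefficients for a fixed period, so each trial is a
    single least-squares solve; the best-fitting period is returned."""
    best = None
    for P in trial_periods:
        w = 2 * np.pi / P
        A = np.column_stack([np.ones_like(epochs), epochs, epochs ** 2,
                             np.sin(w * epochs), np.cos(w * epochs)])
        coef = np.linalg.lstsq(A, oc, rcond=None)[0]
        rss = np.sum((A @ coef - oc) ** 2)   # residual sum of squares
        if best is None or rss < best[0]:
            best = (rss, P, coef)
    return best[1], best[2]
```

The light-time semi-amplitude then follows from the quadrature sum of the sine and cosine coefficients.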
Accurate measurement of unsteady state fluid temperature
NASA Astrophysics Data System (ADS)
Jaremkiewicz, Magdalena
2016-07-01
In this paper, two accurate methods for determining the transient fluid temperature are presented. Measurements were conducted for boiling water since its temperature is known. At the beginning the thermometers are at ambient temperature; they are then immediately immersed into saturated water. The measurements were carried out with two thermometers of different construction but with the same housing outer diameter, equal to 15 mm. One of them is a K-type industrial thermometer widely available commercially. The temperature indicated by the thermometer was corrected by treating the thermometer as a first- or second-order inertia device. A new thermometer design was proposed and also used to measure the temperature of boiling water. Its characteristic feature is a cylinder-shaped housing with the sheath thermocouple located in its center. The temperature of the fluid was determined based on measurements taken in the axis of the solid cylindrical element (housing) using the inverse space marching method. Measurements of the transient temperature of the air flowing through a wind tunnel using the same thermometers were also carried out. The proposed measurement technique provides more accurate results compared with measurements using industrial thermometers in conjunction with a simple correction based on a first- or second-order inertia model. By comparing the results, it was demonstrated that the new thermometer allows obtaining the fluid temperature much faster and with higher accuracy than the industrial thermometer. Accurate measurements of fast-changing fluid temperature are possible due to the low-inertia thermometer and the fast space marching method applied for solving the inverse heat conduction problem.
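The first-order inertia correction mentioned above inverts the thermometer's transfer equation τ dT/dt + T = T_fluid. A minimal sketch (the centered-difference derivative is an assumption; noisy real signals would need smoothing first):

```python
import numpy as np

def correct_first_order(t, T_meas, tau):
    """Recover the fluid temperature from a first-order thermometer reading:
    tau * dT/dt + T = T_fluid  =>  T_fluid = T + tau * dT/dt.
    The time derivative is formed with (second-order) finite differences."""
    dTdt = np.gradient(T_meas, t)     # central differences in the interior
    return T_meas + tau * dTdt
```

For an ideal step immersion, T_meas(t) = T_f - (T_f - T_0) exp(-t/τ), and the correction recovers the constant fluid temperature T_f almost immediately, which the test below verifies.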
NNLOPS accurate associated HW production
NASA Astrophysics Data System (ADS)
Astill, William; Bizon, Wojciech; Re, Emanuele; Zanderighi, Giulia
2016-06-01
We present a next-to-next-to-leading order accurate description of associated HW production consistently matched to a parton shower. The method is based on reweighting events obtained with the HW plus one jet NLO accurate calculation implemented in POWHEG, extended with the MiNLO procedure, to reproduce NNLO accurate Born distributions. Since the Born kinematics is more complex than the cases treated before, we use a parametrization of the Collins-Soper angles to reduce the number of variables required for the reweighting. We present phenomenological results at 13 TeV, with cuts suggested by the Higgs Cross section Working Group.
Device accurately measures and records low gas-flow rates
NASA Technical Reports Server (NTRS)
Branum, L. W.
1966-01-01
Free-floating piston in a vertical column accurately measures and records low gas-flow rates. The system may be calibrated, using an adjustable flow-rate gas supply, a low pressure gage, and a sequence recorder. From the calibration rates, a nomograph may be made for easy reduction. Temperature correction may be added for further accuracy.
Accurate metacognition for visual sensory memory representations.
Vandenbroucke, Annelinde R E; Sligte, Ilja G; Barrett, Adam B; Seth, Anil K; Fahrenfort, Johannes J; Lamme, Victor A F
2014-04-01
The capacity to attend to multiple objects in the visual field is limited. However, introspectively, people feel that they see the whole visual world at once. Some scholars suggest that this introspective feeling is based on short-lived sensory memory representations, whereas others argue that the feeling of seeing more than can be attended to is illusory. Here, we investigated this phenomenon by combining objective memory performance with subjective confidence ratings during a change-detection task. This allowed us to compute a measure of metacognition--the degree of knowledge that subjects have about the correctness of their decisions--for different stages of memory. We show that subjects store more objects in sensory memory than they can attend to but, at the same time, have similar metacognition for sensory memory and working memory representations. This suggests that these subjective impressions are not an illusion but accurate reflections of the richness of visual perception.
Accurate shear measurement with faint sources
Zhang, Jun; Foucaud, Sebastien; Luo, Wentao E-mail: walt@shao.ac.cn
2015-01-01
For cosmic shear to become an accurate cosmological probe, systematic errors in the shear measurement method must be unambiguously identified and corrected for. Previous work of this series has demonstrated that cosmic shears can be measured accurately in Fourier space in the presence of background noise and finite pixel size, without assumptions on the morphologies of galaxy and PSF. The remaining major source of error is source Poisson noise, due to the finiteness of source photon number. This problem is particularly important for faint galaxies in space-based weak lensing measurements, and for ground-based images of short exposure times. In this work, we propose a simple and rigorous way of removing the shear bias from the source Poisson noise. Our noise treatment can be generalized for images made of multiple exposures through MultiDrizzle. This is demonstrated with the SDSS and COSMOS/ACS data. With a large ensemble of mock galaxy images of unrestricted morphologies, we show that our shear measurement method can achieve sub-percent level accuracy even for images of signal-to-noise ratio less than 5 in general, making it the most promising technique for cosmic shear measurement in the ongoing and upcoming large scale galaxy surveys.
Universality of quantum gravity corrections.
Das, Saurya; Vagenas, Elias C
2008-11-28
We show that the existence of a minimum measurable length and the related generalized uncertainty principle (GUP), predicted by theories of quantum gravity, influence all quantum Hamiltonians. Thus, they predict quantum gravity corrections to various quantum phenomena. We compute such corrections to the Lamb shift, the Landau levels, and the tunneling current in a scanning tunneling microscope. We show that these corrections can be interpreted in two ways: (a) either that they are exceedingly small, beyond the reach of current experiments, or (b) that they predict upper bounds on the quantum gravity parameter in the GUP, compatible with experiments at the electroweak scale. Thus, more accurate measurements in the future should either be able to test these predictions, or further tighten the above bounds and predict an intermediate length scale between the electroweak and the Planck scale.
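The GUP referred to above is commonly written as a one-parameter deformation of the Heisenberg relation. A standard form (conventions and numerical factors vary between papers in this line of work, so treat this as indicative rather than as the paper's exact expression):

```latex
% One-parameter generalized uncertainty principle (GUP); \beta sets the
% quantum-gravity scale. Signs and factors vary between conventions.
[x, p] = i\hbar \left( 1 + \beta p^{2} \right),
\qquad
\Delta x \, \Delta p \;\ge\; \frac{\hbar}{2}
  \left[ 1 + \beta \, (\Delta p)^{2} \right]
```

Minimizing the right-hand side over $\Delta p$ gives a minimal measurable length $\Delta x_{\min} = \hbar\sqrt{\beta}$, and the corrections to the Lamb shift, Landau levels, and tunneling current then scale with powers of $\beta$.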
Accurate upwind methods for the Euler equations
NASA Technical Reports Server (NTRS)
Huynh, Hung T.
1993-01-01
A new class of piecewise linear methods for the numerical solution of the one-dimensional Euler equations of gas dynamics is presented. These methods are uniformly second-order accurate, and can be considered as extensions of Godunov's scheme. With an appropriate definition of monotonicity preservation for the case of linear convection, it can be shown that they preserve monotonicity. Similar to Van Leer's MUSCL scheme, they consist of two key steps: a reconstruction step followed by an upwind step. For the reconstruction step, a monotonicity constraint that preserves uniform second-order accuracy is introduced. Computational efficiency is enhanced by devising a criterion that detects the 'smooth' part of the data where the constraint is redundant. The concept and coding of the constraint are simplified by the use of the median function. A slope steepening technique, which has no effect at smooth regions and can resolve a contact discontinuity in four cells, is described. As for the upwind step, existing and new methods are applied in a manner slightly different from those in the literature. These methods are derived by approximating the Euler equations via linearization and diagonalization. At a 'smooth' interface, Harten, Lax, and Van Leer's one intermediate state model is employed. A modification for this model that can resolve contact discontinuities is presented. Near a discontinuity, either this modified model or a more accurate one, namely, Roe's flux-difference splitting, is used. The current presentation of Roe's method, via the conceptually simple flux-vector splitting, not only establishes a connection between the two splittings, but also leads to an admissibility correction with no conditional statement, and an efficient approximation to Osher's approximate Riemann solver. These reconstruction and upwind steps result in schemes that are uniformly second-order accurate and economical at smooth regions, and yield high resolution at discontinuities.
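The median function mentioned in the abstract has a simple relation to the familiar minmod limiter: median(a, b, c) = a + minmod(b - a, c - a). A hedged sketch of these building blocks (the exact monotonicity constraint in the paper differs; this only shows the primitives):

```python
# Building blocks for median-based slope limiting in MUSCL-type schemes.
# The specific constraint of the paper is not reproduced here; this is an
# illustrative limiter assembled from the same primitives.

def minmod(x, y):
    """Zero if the slopes disagree in sign, else the smaller in magnitude."""
    if x * y <= 0.0:
        return 0.0
    return x if abs(x) < abs(y) else y

def median3(a, b, c):
    """Middle value of a, b, c, via the minmod identity."""
    return a + minmod(b - a, c - a)

# Limit a central slope against one-sided slopes on cell averages u:
u = [0.0, 1.0, 5.0]                      # u[i-1], u[i], u[i+1]
central = (u[2] - u[0]) / 2.0            # 2.5
left, right = u[1] - u[0], u[2] - u[1]   # 1.0, 4.0
mm = minmod(left, right)
limited = median3(central, mm, 2.0 * mm)  # clip central slope into [mm, 2mm]
print(limited)  # -> 2.0
```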
77 FR 72199 - Technical Corrections; Correction
Federal Register 2010, 2011, 2012, 2013, 2014
2012-12-05
...) is correcting a final rule that was published in the Federal Register on July 6, 2012 (77 FR 39899), and effective on August 6, 2012. That final rule amended the NRC regulations to make technical... COMMISSION 10 CFR Part 171 RIN 3150-AJ16 Technical Corrections; Correction AGENCY: Nuclear...
NASA Astrophysics Data System (ADS)
Jiang, Tian-Yu; Li, Li-Fang; Han, Zhan-Wen; Jiang, Deng-Kai
2010-04-01
The first complete charge-coupled device (CCD) light curves in B and V passbands of a neglected contact binary system, CW Cassiopeiae (CW Cas), are presented. They were analyzed simultaneously by using the Wilson and Devinney (WD) code (1971, ApJ, 166, 605). The photometric solution indicates that CW Cas is a W-type W UMa system with a mass ratio of m2/m1 = 2.234, and that it is in a marginal contact state with a contact degree of ~6.5% and a relatively large temperature difference of ~327 K between its two components. Based on the minimum times collected from the literature, together with the new ones obtained in this study, the orbital period changes of CW Cas were investigated in detail. It was found that a periodic variation overlaps with a secular period decrease in its orbital period. The long-term period decrease with a rate of dP/dt = -3.44 × 10^-8 d yr^-1 can be interpreted either by mass transfer from the more-massive component to the less-massive one with a rate of dm2/dt = -3.6 × 10^-8 M⊙ yr^-1, or by mass and angular-momentum losses through magnetic braking due to a magnetic stellar wind. A low-amplitude cyclic variation with a period of T = 63.7 yr might be caused by the light-time effect due to the presence of a third body.
Profitable capitation requires accurate costing.
West, D A; Hicks, L L; Balas, E A; West, T D
1996-01-01
In the name of costing accuracy, nurses are asked to track inventory use on a per-treatment basis, while more significant costs, such as general overhead and nursing salaries, are usually allocated to patients or treatments on an average-cost basis. Accurate treatment costing and financial viability require analysis of all resources actually consumed in treatment delivery, including nursing services and inventory. More precise costing information enables more profitable decisions, as demonstrated by comparing the ratio-of-cost-to-treatment method (aggregate costing) with alternative activity-based costing (ABC) methods. Nurses must participate in this costing process to ensure that capitation bids are based upon accurate costs rather than simple averages. PMID:8788799
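The contrast between aggregate and activity-based allocation can be made concrete with a toy calculation. All figures below are hypothetical; they only illustrate why the two methods can price the same treatment differently.

```python
# Toy comparison of aggregate (ratio-of-cost) vs activity-based costing.
# Numbers are invented for illustration, not drawn from the article.

treatments = {
    "dialysis": {"charges": 400.0, "nurse_hours": 3.0, "supplies": 50.0},
    "infusion": {"charges": 400.0, "nurse_hours": 1.0, "supplies": 10.0},
}
overhead = 240.0  # total nursing/overhead pool to allocate

# Aggregate: split overhead in proportion to billed charges.
total_charges = sum(t["charges"] for t in treatments.values())
aggregate = {k: t["supplies"] + overhead * t["charges"] / total_charges
             for k, t in treatments.items()}

# Activity-based: split overhead by the cost driver actually consumed
# (here, nursing hours per treatment).
total_hours = sum(t["nurse_hours"] for t in treatments.values())
abc = {k: t["supplies"] + overhead * t["nurse_hours"] / total_hours
       for k, t in treatments.items()}

# Equal charges hide unequal nursing effort: ABC shifts cost to dialysis.
print(aggregate["dialysis"], abc["dialysis"])  # -> 170.0 230.0
```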
Correctness issues in workflow management
NASA Astrophysics Data System (ADS)
Kamath, Mohan; Ramamritham, Krithi
1996-12-01
Workflow management is a technique to integrate and automate the execution of steps that comprise a complex process, e.g., a business process. Workflow management systems (WFMSs) primarily evolved from industry to cater to the growing demand for office automation tools among businesses. Coincidentally, database researchers developed several extended transaction models to handle similar applications. Although the goals of both the communities were the same, the issues they focused on were different. The workflow community primarily focused on modelling aspects to accurately capture the data and control flow requirements between the steps that comprise a workflow, while the database community focused on correctness aspects to ensure data consistency of sub-transactions that comprise a transaction. However, we now see a confluence of some of the ideas, with additional features being gradually offered by WFMSs. This paper provides an overview of correctness in workflow management. Correctness is an important aspect of WFMSs and a proper understanding of the available concepts and techniques by WFMS developers and workflow designers will help in building workflows that are flexible enough to capture the requirements of real world applications and robust enough to provide the necessary correctness and reliability properties. We first enumerate the correctness issues that have to be considered to ensure data consistency. Then we survey techniques that have been proposed or are being used in WFMSs for ensuring correctness of workflows. These techniques emerge from the areas of workflow management, extended transaction models, multidatabases and transactional workflows. Finally, we present some open issues related to correctness of workflows in the presence of concurrency and failures.
Rethinking political correctness.
Ely, Robin J; Meyerson, Debra E; Davidson, Martin N
2006-09-01
Legal and cultural changes over the past 40 years ushered unprecedented numbers of women and people of color into companies' professional ranks. Laws now protect these traditionally underrepresented groups from blatant forms of discrimination in hiring and promotion. Meanwhile, political correctness has reset the standards for civility and respect in people's day-to-day interactions. Despite this obvious progress, the authors' research has shown that political correctness is a double-edged sword. While it has helped many employees feel unlimited by their race, gender, or religion,the PC rule book can hinder people's ability to develop effective relationships across race, gender, and religious lines. Companies need to equip workers with skills--not rules--for building these relationships. The authors offer the following five principles for healthy resolution of the tensions that commonly arise over difference: Pause to short-circuit the emotion and reflect; connect with others, affirming the importance of relationships; question yourself to identify blind spots and discover what makes you defensive; get genuine support that helps you gain a broader perspective; and shift your mind-set from one that says, "You need to change," to one that asks, "What can I change?" When people treat their cultural differences--and related conflicts and tensions--as opportunities to gain a more accurate view of themselves, one another, and the situation, trust builds and relationships become stronger. Leaders should put aside the PC rule book and instead model and encourage risk taking in the service of building the organization's relational capacity. The benefits will reverberate through every dimension of the company's work.
Accurate documentation and wound measurement.
Hampton, Sylvie
This article, part 4 in a series on wound management, addresses the sometimes routine yet crucial task of documentation. Clear and accurate records of a wound enable its progress to be determined so the appropriate treatment can be applied. Thorough records mean any practitioner picking up a patient's notes will know when the wound was last checked, how it looked and what dressing and/or treatment was applied, ensuring continuity of care. Documenting every assessment also has legal implications, demonstrating due consideration and care of the patient and the rationale for any treatment carried out. Part 5 in the series discusses wound dressing characteristics and selection.
Thermodynamics of Error Correction
NASA Astrophysics Data System (ADS)
Sartori, Pablo; Pigolotti, Simone
2015-10-01
Information processing at the molecular scale is limited by thermal fluctuations. This can cause undesired consequences in copying information since thermal noise can lead to errors that can compromise the functionality of the copy. For example, a high error rate during DNA duplication can lead to cell death. Given the importance of accurate copying at the molecular scale, it is fundamental to understand its thermodynamic features. In this paper, we derive a universal expression for the copy error as a function of entropy production and work dissipated by the system during wrong incorporations. Its derivation is based on the second law of thermodynamics; hence, its validity is independent of the details of the molecular machinery, be it any polymerase or artificial copying device. Using this expression, we find that information can be copied in three different regimes. In two of them, work is dissipated to either increase or decrease the error. In the third regime, the protocol extracts work while correcting errors, reminiscent of a Maxwell demon. As a case study, we apply our framework to study a copy protocol assisted by kinetic proofreading, and show that it can operate in any of these three regimes. We finally show that, for any effective proofreading scheme, error reduction is limited by the chemical driving of the proofreading reaction.
SPLASH: Accurate OH maser positions
NASA Astrophysics Data System (ADS)
Walsh, Andrew; Gomez, Jose F.; Jones, Paul; Cunningham, Maria; Green, James; Dawson, Joanne; Ellingsen, Simon; Breen, Shari; Imai, Hiroshi; Lowe, Vicki; Jones, Courtney
2013-10-01
The hydroxyl (OH) 18 cm lines are powerful and versatile probes of diffuse molecular gas, that may trace a largely unstudied component of the Galactic ISM. SPLASH (the Southern Parkes Large Area Survey in Hydroxyl) is a large, unbiased and fully-sampled survey of OH emission, absorption and masers in the Galactic Plane that will achieve sensitivities an order of magnitude better than previous work. In this proposal, we request ATCA time to follow up OH maser candidates. This will give us accurate (~10") positions of the masers, which can be compared to other maser positions from HOPS, MMB and MALT-45 and will provide full polarisation measurements towards a sample of OH masers that have not been observed in MAGMO.
Accurate thickness measurement of graphene
NASA Astrophysics Data System (ADS)
Shearer, Cameron J.; Slattery, Ashley D.; Stapleton, Andrew J.; Shapter, Joseph G.; Gibson, Christopher T.
2016-03-01
Graphene has emerged as a material with a vast variety of applications. The electronic, optical and mechanical properties of graphene are strongly influenced by the number of layers present in a sample. As a result, the dimensional characterization of graphene films is crucial, especially with the continued development of new synthesis methods and applications. A number of techniques exist to determine the thickness of graphene films including optical contrast, Raman scattering and scanning probe microscopy techniques. Atomic force microscopy (AFM), in particular, is used extensively since it provides three-dimensional images that enable the measurement of the lateral dimensions of graphene films as well as the thickness, and by extension the number of layers present. However, in the literature AFM has proven to be inaccurate with a wide range of measured values for single layer graphene thickness reported (between 0.4 and 1.7 nm). This discrepancy has been attributed to tip-surface interactions, image feedback settings and surface chemistry. In this work, we use standard and carbon nanotube modified AFM probes and a relatively new AFM imaging mode known as PeakForce tapping mode to establish a protocol that will allow users to accurately determine the thickness of graphene films. In particular, the error in measuring the first layer is reduced from 0.1-1.3 nm to 0.1-0.3 nm. Furthermore, in the process we establish that the graphene-substrate adsorbate layer and imaging force, in particular the pressure the tip exerts on the surface, are crucial components in the accurate measurement of graphene using AFM. These findings can be applied to other 2D materials.
Accurate SHAPE-directed RNA structure determination
Deigan, Katherine E.; Li, Tian W.; Mathews, David H.; Weeks, Kevin M.
2009-01-01
Almost all RNAs can fold to form extensive base-paired secondary structures. Many of these structures then modulate numerous fundamental elements of gene expression. Deducing these structure–function relationships requires that it be possible to predict RNA secondary structures accurately. However, RNA secondary structure prediction for large RNAs, such that a single predicted structure for a single sequence reliably represents the correct structure, has remained an unsolved problem. Here, we demonstrate that quantitative, nucleotide-resolution information from a SHAPE experiment can be interpreted as a pseudo-free energy change term and used to determine RNA secondary structure with high accuracy. Free energy minimization, by using SHAPE pseudo-free energies, in conjunction with nearest neighbor parameters, predicts the secondary structure of deproteinized Escherichia coli 16S rRNA (>1,300 nt) and a set of smaller RNAs (75–155 nt) with accuracies of up to 96–100%, which are comparable to the best accuracies achievable by comparative sequence analysis. PMID:19109441
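The pseudo-free energy change term described above takes a simple logarithmic form in this line of work: a per-nucleotide penalty dG_SHAPE(i) = m * ln(reactivity_i + 1) + b added to the nearest-neighbor energy of paired positions. The slope and intercept below (~2.6 and -0.8 kcal/mol) are the commonly cited values; treat them as indicative rather than definitive.

```python
import math

# Sketch of the SHAPE pseudo-free-energy term: reactive (flexible, likely
# unpaired) nucleotides are penalized for base pairing, while unreactive
# ones receive a small pairing bonus. Parameter values are the commonly
# cited ones, shown here for illustration only.

def shape_pseudo_energy(reactivity, m=2.6, b=-0.8):
    """Per-nucleotide pseudo-free energy (kcal/mol) from SHAPE reactivity."""
    return m * math.log(reactivity + 1.0) + b

print(round(shape_pseudo_energy(0.0), 2))  # -> -0.8 (unreactive: bonus)
print(shape_pseudo_energy(2.0) > 0)        # -> True (reactive: penalty)
```

In a folding program, this term would be added to the stacking energy of each paired nucleotide before free energy minimization, biasing predictions toward structures consistent with the chemical probing data.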
Accurate Fiber Length Measurement Using Time-of-Flight Technique
NASA Astrophysics Data System (ADS)
Terra, Osama; Hussein, Hatem
2016-06-01
Fiber artifacts of very well-measured length are required for the calibration of optical time domain reflectometers (OTDR). In this paper accurate length measurement of different fiber lengths using the time-of-flight technique is performed. A setup is proposed to measure accurately lengths from 1 to 40 km at 1,550 and 1,310 nm using high-speed electro-optic modulator and photodetector. This setup offers traceability to the SI unit of time, the second (and hence to meter by definition), by locking the time interval counter to the Global Positioning System (GPS)-disciplined quartz oscillator. Additionally, the length of a recirculating loop artifact is measured and compared with the measurement made for the same fiber by the National Physical Laboratory of United Kingdom (NPL). Finally, a method is proposed to relatively correct the fiber refractive index to allow accurate fiber length measurement.
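The core time-of-flight relation is L = c * t / n_g, where t is the pulse transit time and n_g the group refractive index of the fiber; the last step of the abstract (correcting the refractive index) amounts to calibrating n_g. A minimal sketch, assuming a typical group index of about 1.468 near 1550 nm for standard single-mode fiber:

```python
# Time-of-flight fiber length sketch. The group index below is a typical
# assumed value, not a calibrated constant; in practice it is what the
# authors propose to correct against a known-length artifact.

C = 299_792_458.0  # speed of light in vacuum, m/s (exact SI value)

def fiber_length(transit_time_s, group_index=1.468):
    """One-way fiber length from pulse transit time."""
    return C * transit_time_s / group_index

def transit_time(length_m, group_index=1.468):
    """Expected one-way transit time for a given fiber length."""
    return length_m * group_index / C

t = transit_time(20_000.0)     # ~98 microseconds for a 20 km fiber
print(round(fiber_length(t)))  # -> 20000
```

This also shows why the setup is traceable to the second: the length measurement reduces entirely to a time-interval measurement against a GPS-disciplined reference, plus the index calibration.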
Accurate and Inaccurate Conceptions about Osmosis That Accompanied Meaningful Problem Solving.
ERIC Educational Resources Information Center
Zuckerman, June Trop
This study focused on the knowledge of six outstanding science students who solved an osmosis problem meaningfully. That is, they used appropriate and substantially accurate conceptual knowledge to generate an answer. Three generated a correct answer; three, an incorrect answer. This paper identifies both the accurate and inaccurate conceptions…
ERIC Educational Resources Information Center
McCane-Bowling, Sara J.; Strait, Andrea D.; Guess, Pamela E.; Wiedo, Jennifer R.; Muncie, Eric
2014-01-01
This study examined the predictive utility of five formative reading measures: words correct per minute, number of comprehension questions correct, reading comprehension rate, number of maze correct responses, and maze accurate response rate (MARR). Broad Reading cluster scores obtained via the Woodcock-Johnson III (WJ III) Tests of Achievement…
Accurate, meshless methods for magnetohydrodynamics
NASA Astrophysics Data System (ADS)
Hopkins, Philip F.; Raives, Matthias J.
2016-01-01
Recently, we explored new meshless finite-volume Lagrangian methods for hydrodynamics: the 'meshless finite mass' (MFM) and 'meshless finite volume' (MFV) methods; these capture advantages of both smoothed particle hydrodynamics (SPH) and adaptive mesh refinement (AMR) schemes. We extend these to include ideal magnetohydrodynamics (MHD). The MHD equations are second-order consistent and conservative. We augment these with a divergence-cleaning scheme, which maintains ∇·B ≈ 0. We implement these in the code GIZMO, together with state-of-the-art SPH MHD. We consider a large test suite, and show that on all problems the new methods are competitive with AMR using constrained transport (CT) to ensure ∇·B = 0. They correctly capture the growth/structure of the magnetorotational instability, MHD turbulence, and launching of magnetic jets, in some cases converging more rapidly than state-of-the-art AMR. Compared to SPH, the MFM/MFV methods exhibit convergence at fixed neighbour number, sharp shock-capturing, and dramatically reduced noise, divergence errors, and diffusion. Still, 'modern' SPH can handle most test problems, at the cost of larger kernels and 'by hand' adjustment of artificial diffusion. Compared to non-moving meshes, the new methods exhibit enhanced 'grid noise' but reduced advection errors and diffusion, easily include self-gravity, and feature velocity-independent errors and superior angular momentum conservation. They converge more slowly on some problems (smooth, slow-moving flows), but more rapidly on others (involving advection/rotation). In all cases, we show divergence control beyond the Powell 8-wave approach is necessary, or all methods can converge to unphysical answers even at high resolution.
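The divergence constraint at the heart of the abstract can be illustrated with a quick grid-based check, entirely unrelated to GIZMO's actual meshless implementation: build B as the curl of a vector potential A (so div B = 0 analytically) and verify that the discrete divergence vanishes when the same central-difference stencil is used for both operators.

```python
import numpy as np

# Illustrative div(curl A) = 0 check with central differences. Mixed
# partial-difference operators along different axes commute exactly, so
# the interior residual is at floating-point round-off level.

h = 0.2
x, y, z = np.meshgrid(*(np.arange(0.0, 2.0, h),) * 3, indexing="ij")
Ax, Ay, Az = np.sin(y) * z, np.cos(z) * x, np.sin(x) * y  # smooth potential

# B = curl A, central differences in the interior (np.gradient):
Bx = np.gradient(Az, h, axis=1) - np.gradient(Ay, h, axis=2)
By = np.gradient(Ax, h, axis=2) - np.gradient(Az, h, axis=0)
Bz = np.gradient(Ay, h, axis=0) - np.gradient(Ax, h, axis=1)

div_B = (np.gradient(Bx, h, axis=0) + np.gradient(By, h, axis=1)
         + np.gradient(Bz, h, axis=2))

# Interior only: one-sided stencils at the boundary do not cancel.
print(np.abs(div_B[2:-2, 2:-2, 2:-2]).max() < 1e-10)  # -> True
```

Real MHD codes face exactly this issue in reverse: the discrete update does not keep ∇·B at round-off automatically, which is why cleaning schemes, constrained transport, or Powell-type source terms are needed.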
Eyeglasses for Vision Correction
Dec. 12, 2015. Wearing eyeglasses is an easy way to correct refractive errors. Improving your vision with eyeglasses offers the opportunity to select from ...
Illinois Corrections Project Report
ERIC Educational Resources Information Center
Hungerford, Jack
1974-01-01
The Illinois Corrections Project for Law-Focused Education, which brings law-focused curriculum into corrections institutions, was initiated in 1973 with a summer institute and includes programs in nine participating institutions. (JH)
Teaching Politically Correct Language
ERIC Educational Resources Information Center
Tsehelska, Maryna
2006-01-01
This article argues that teaching politically correct language to English learners provides them with important information and opportunities to be exposed to cultural issues. The author offers a brief review of how political correctness became an issue and how being politically correct influences the use of language. The article then presents…
Research in Correctional Rehabilitation.
ERIC Educational Resources Information Center
Rehabilitation Services Administration (DHEW), Washington, DC.
Forty-three leaders in corrections and rehabilitation participated in the seminar planned to provide an indication of the status of research in correctional rehabilitation. Papers include: (1) "Program Trends in Correctional Rehabilitation" by John P. Conrad, (2) "Federal Offenders Rehabilitation Program" by Percy B. Bell and Merlyn Mathews, (3)…
How flatbed scanners upset accurate film dosimetry
NASA Astrophysics Data System (ADS)
van Battum, L. J.; Huizenga, H.; Verdaasdonk, R. M.; Heukelom, S.
2016-01-01
Film is an excellent dosimeter for verification of dose distributions due to its high spatial resolution. Irradiated film can be digitized with low-cost, transmission, flatbed scanners. However, a disadvantage is their lateral scan effect (LSE): a scanner readout change over its lateral scan axis. Although anisotropic light scattering was presented as the origin of the LSE, this paper presents an alternative cause. Hereto, LSE for two flatbed scanners (Epson 1680 Expression Pro and Epson 10000XL), and Gafchromic film (EBT, EBT2, EBT3) was investigated, focused on three effects: cross talk, optical path length and polarization. Cross talk was examined using triangular sheets of various optical densities. The optical path length effect was studied using absorptive and reflective neutral density filters with well-defined optical characteristics (OD range 0.2-2.0). Linear polarizer sheets were used to investigate light polarization on the CCD signal in absence and presence of (un)irradiated Gafchromic film. Film dose values ranged between 0.2 to 9 Gy, i.e. an optical density range between 0.25 to 1.1. Measurements were performed in the scanner’s transmission mode, with red-green-blue channels. LSE was found to depend on scanner construction and film type. Its magnitude depends on dose: for 9 Gy increasing up to 14% at maximum lateral position. Cross talk was only significant in high contrast regions, up to 2% for very small fields. The optical path length effect introduced by film on the scanner causes 3% for pixels in the extreme lateral position. Light polarization due to film and the scanner’s optical mirror system is the main contributor, different in magnitude for the red, green and blue channel. We concluded that any Gafchromic EBT type film scanned with a flatbed scanner will face these optical effects. Accurate dosimetry requires correction of LSE, therefore, determination of the LSE per color channel and dose delivered to the film.
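The correction the authors call for, determining the LSE per color channel and dose, amounts to dividing each scanned column by a lateral response profile. A minimal sketch with a hypothetical quadratic profile (a real correction is calibrated from uniformly irradiated film strips, per channel and dose level):

```python
import numpy as np

# Hedged sketch of a lateral-scan-effect (LSE) correction for one color
# channel. The quadratic profile and its 14% edge gain are illustrative,
# chosen to match the magnitude reported in the abstract.

def lateral_profile(n_cols, edge_gain=0.14):
    """Relative readout vs lateral position: up to +14% at the scan edges."""
    x = np.linspace(-1.0, 1.0, n_cols)
    return 1.0 + edge_gain * x**2

def correct_lse(image):
    """image: (rows, cols) single-channel scan; returns a corrected copy."""
    return image / lateral_profile(image.shape[1])[np.newaxis, :]

# Synthetic uniform film distorted by the profile, then recovered exactly:
true = np.full((4, 101), 0.60)            # uniform optical density
scanned = true * lateral_profile(101)     # apply simulated LSE
flat = correct_lse(scanned)
print(np.allclose(flat, true))            # -> True
```

In practice a separate profile per channel (red, green, blue) and per dose level would be stored, since the abstract reports the effect differs in magnitude across channels and grows with dose.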
ERIC Educational Resources Information Center
Jewell, Jennifer; Malecki, Christine Kerres
2005-01-01
This study examined the utility of three categories of CBM written language indices including production-dependent indices (Total Words Written, Words Spelled Correctly, and Correct Writing Sequences), production-independent indices (Percentage of Words Spelled Correctly and Percentage of Correct Writing Sequences), and an accurate-production…
38 CFR 4.46 - Accurate measurement.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 38 Pensions, Bonuses, and Veterans' Relief 1 2010-07-01 2010-07-01 false Accurate measurement. 4... RATING DISABILITIES Disability Ratings The Musculoskeletal System § 4.46 Accurate measurement. Accurate measurement of the length of stumps, excursion of joints, dimensions and location of scars with respect...
BASIC: A Simple and Accurate Modular DNA Assembly Method.
Storch, Marko; Casini, Arturo; Mackrow, Ben; Ellis, Tom; Baldwin, Geoff S
2017-01-01
Biopart Assembly Standard for Idempotent Cloning (BASIC) is a simple, accurate, and robust DNA assembly method. The method is based on linker-mediated DNA assembly and provides highly accurate DNA assembly with 99 % correct assemblies for four parts and 90 % correct assemblies for seven parts [1]. The BASIC standard defines a single entry vector for all parts flanked by the same prefix and suffix sequences and its idempotent nature means that the assembled construct is returned in the same format. Once a part has been adapted into the BASIC format it can be placed at any position within a BASIC assembly without the need for reformatting. This allows laboratories to grow comprehensive and universal part libraries and to share them efficiently. The modularity within the BASIC framework is further extended by the possibility of encoding ribosomal binding sites (RBS) and peptide linker sequences directly on the linkers used for assembly. This makes BASIC a highly versatile library construction method for combinatorial part assembly including the construction of promoter, RBS, gene variant, and protein-tag libraries. In comparison with other DNA assembly standards and methods, BASIC offers a simple robust protocol; it relies on a single entry vector, provides for easy hierarchical assembly, and is highly accurate for up to seven parts per assembly round [2]. PMID:27671933
Source distribution dependent scatter correction for PVI
Barney, J.S.; Harrop, R.; Dykstra, C.J. (School of Computing Science, TRIUMF, Vancouver, British Columbia)
1993-08-01
Source-distribution-dependent scatter correction methods which incorporate different amounts of information about the source position and material distribution have been developed and tested. The techniques use image-to-projection integral transformation incorporating varying degrees of information on the distribution of scattering material, or convolution-subtraction methods, with some information about the scattering material included in one of the convolution methods. To test the techniques, the authors apply them to data generated by Monte Carlo simulations which use geometric shapes or a voxelized density map to model the scattering material. Source position and material distribution have been found to have some effect on scatter correction. An image-to-projection method which incorporates a density map produces accurate scatter correction but is computationally expensive. Simpler methods, both image-to-projection and convolution, can also provide effective scatter correction.
Shuttle program: Computing atmospheric scale height for refraction corrections
NASA Technical Reports Server (NTRS)
Lear, W. M.
1980-01-01
Methods for computing the atmospheric scale height to determine radio wave refraction were investigated for different atmospheres, and different angles of elevation. Tables of refractivity versus altitude are included. The equations used to compute the refraction corrections are given. It is concluded that very accurate corrections are determined with the assumption of an exponential atmosphere.
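The exponential-atmosphere assumption above lends itself to a short sketch: if refractivity follows N(h) = N0·exp(-h/H), the scale height H can be recovered from a refractivity-versus-altitude table by a log-linear fit. The table values below are illustrative, not taken from the report.

```python
import numpy as np

# Hypothetical refractivity table: altitude (km) vs refractivity (N-units).
# Values are generated from an exponential atmosphere N(h) = N0 * exp(-h / H)
# with N0 = 313 and H = 7 km; a real table would come from measured profiles.
H_true = 7.0
altitudes = np.array([0.0, 1.0, 2.0, 5.0, 10.0, 15.0])
refractivity = 313.0 * np.exp(-altitudes / H_true)

# Fit the scale height from the table: ln N is linear in h with slope -1/H.
slope, intercept = np.polyfit(altitudes, np.log(refractivity), 1)
H_est = -1.0 / slope
print(round(H_est, 3))  # recovers the 7 km scale height
```

With exact exponential data the fit is exact; with tabulated real data the residuals of the log-linear fit indicate how well the exponential assumption holds over the chosen altitude span.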
[Orthognathic surgery: corrective bone operations].
Reuther, J
2000-05-01
The article reviews the history of orthognathic surgery from the middle of the last century up to the present. Initially, mandibular osteotomies were only performed in cases of severe malformations. But during the last century a precise and standardized procedure for correction of the mandible was established. Multiple modifications allowed control of small fragments, functionally stable osteosynthesis, and finally a precise positioning of the condyle. In 1955 Obwegeser and Trauner introduced the sagittal split osteotomy by an intraoral approach. It was the final breakthrough for orthognathic surgery as a standard treatment for corrections of the mandible. Surgery of the maxilla dates back to the nineteenth century. B. von Langenbeck from Berlin is said to have performed the first Le Fort I osteotomy in 1859. After minor changes, Wassmund corrected a posttraumatic malocclusion by a Le Fort I osteotomy in 1927. But it was Axhausen who risked the total mobilization of the maxilla in 1934. By additional modifications and further refinements, Obwegeser paved the way for this approach to become a standard procedure in maxillofacial surgery. Tessier mobilized the whole midface by a Le Fort III osteotomy and showed new perspectives in the correction of severe malformations of the facial bones, creating the basis of modern craniofacial surgery. While the last 150 years were distinguished by the creation and standardization of surgical methods, the present focus lies on precise treatment planning and the consideration of functional aspects of the whole stomatognathic system. To date, 3D visualization by CT scans, stereolithographic models, and computer-aided treatment planning and simulation allow surgery of complex cases and accurate predictions of soft tissue changes.
Clarifying types of uncertainty: when are models accurate, and uncertainties small?
Cox, Louis Anthony Tony
2011-10-01
Professor Aven has recently noted the importance of clarifying the meaning of terms such as "scientific uncertainty" for use in risk management and policy decisions, such as when to trigger application of the precautionary principle. This comment examines some fundamental conceptual challenges for efforts to define "accurate" models and "small" input uncertainties by showing that increasing uncertainty in model inputs may reduce uncertainty in model outputs; that even correct models with "small" input uncertainties need not yield accurate or useful predictions for quantities of interest in risk management (such as the duration of an epidemic); and that accurate predictive models need not be accurate causal models.
An accurate method for two-point boundary value problems
NASA Technical Reports Server (NTRS)
Walker, J. D. A.; Weigand, G. G.
1979-01-01
A second-order method for solving two-point boundary value problems on a uniform mesh is presented where the local truncation error is obtained for use with the deferred correction process. In this simple finite difference method the tridiagonal nature of the classical method is preserved but the magnitude of each term in the truncation error is reduced by a factor of two. The method is applied to a number of linear and nonlinear problems and it is shown to produce more accurate results than either the classical method or the technique proposed by Keller (1969).
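A minimal sketch of the classical second-order finite-difference baseline that deferred correction refines (the deferred-correction step itself is omitted; the problem and mesh size are example choices):

```python
import numpy as np

# Classical second-order finite differences for the two-point boundary value
# problem y'' = f(x), y(0) = alpha, y(1) = beta, on a uniform mesh. The
# resulting linear system is tridiagonal, the property the deferred-correction
# process preserves.
def solve_bvp(f, alpha, beta, n):
    h = 1.0 / (n + 1)
    x = np.linspace(h, 1.0 - h, n)          # interior mesh points
    # Matrix of the central-difference approximation of y''.
    A = (np.diag(-2.0 * np.ones(n)) +
         np.diag(np.ones(n - 1), 1) +
         np.diag(np.ones(n - 1), -1)) / h**2
    rhs = f(x)
    rhs[0] -= alpha / h**2                  # fold boundary values into rhs
    rhs[-1] -= beta / h**2
    return x, np.linalg.solve(A, rhs)

# y'' = -2 with y(0) = y(1) = 0 has exact solution y = x(1 - x); the
# central-difference scheme reproduces quadratics exactly.
x, y = solve_bvp(lambda x: -2.0 * np.ones_like(x), 0.0, 0.0, 9)
print(np.max(np.abs(y - x * (1 - x))))
```

For non-polynomial solutions the error is O(h²); the deferred-correction idea in the entry above estimates the local truncation error of this scheme and feeds it back to raise the accuracy without losing the tridiagonal structure.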
Accurate pressure gradient calculations in hydrostatic atmospheric models
NASA Technical Reports Server (NTRS)
Carroll, John J.; Mendez-Nunez, Luis R.; Tanrikulu, Saffet
1987-01-01
A method for the accurate calculation of the horizontal pressure gradient acceleration in hydrostatic atmospheric models is presented which is especially useful in situations where the isothermal surfaces are not parallel to the vertical coordinate surfaces. The present method is shown to be exact if the potential temperature lapse rate is constant between the vertical pressure integration limits. The technique is applied to both the integration of the hydrostatic equation and the computation of the slope correction term in the horizontal pressure gradient. A fixed vertical grid and a dynamic grid defined by the significant levels in the vertical temperature distribution are employed.
Attenuation correction for small animal PET tomographs
NASA Astrophysics Data System (ADS)
Chow, Patrick L.; Rannou, Fernando R.; Chatziioannou, Arion F.
2005-04-01
Attenuation correction is one of the important corrections required for quantitative positron emission tomography (PET). This work will compare the quantitative accuracy of attenuation correction using a simple global scale factor with traditional transmission-based methods acquired either with a small animal PET or a small animal x-ray computed tomography (CT) scanner. Two phantoms (one mouse-sized and one rat-sized) and two animal subjects (one mouse and one rat) were scanned in CTI Concorde Microsystem's microPET® Focus™ for emission and transmission data and in ImTek's MicroCAT™ II for transmission data. PET emission image values were calibrated against a scintillation well counter. Results indicate that the scale factor method of attenuation correction places the average measured activity concentration about the expected value, without correcting for the cupping artefact from attenuation. Noise analysis in the phantom studies with the PET-based method shows that noise in the transmission data increases the noise in the corrected emission data. The CT-based method was accurate and delivered low-noise images suitable for both PET data correction and PET tracer localization.
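The global-scale-factor approach compared in this entry can be illustrated with a toy calculation. Everything below is an assumption for illustration: a uniform water-like object, a mean-chord path length, and example diameters; none of these numbers come from the study.

```python
import math

# Illustrative global-scale-factor attenuation correction for small-animal PET.
MU_WATER_511KEV = 0.096  # linear attenuation coefficient of water at 511 keV, 1/cm

def global_scale_factor(diameter_cm, mu=MU_WATER_511KEV):
    """Single multiplicative correction for a uniform cylinder, using the
    mean chord length (pi/4 times the diameter) as the average photon path.
    A single factor cannot correct the spatial (cupping) artefact."""
    mean_path = diameter_cm * math.pi / 4.0
    return math.exp(mu * mean_path)

# A mouse-sized (~3 cm) object needs noticeably less correction than a
# rat-sized (~6 cm) one.
print(round(global_scale_factor(3.0), 3))
print(round(global_scale_factor(6.0), 3))
```

The point of the comparison in the abstract is exactly this trade-off: the single factor restores the average activity concentration cheaply, while transmission- or CT-based maps are needed to remove the position-dependent part of the attenuation.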
Accurate theoretical chemistry with coupled pair models.
Neese, Frank; Hansen, Andreas; Wennmohs, Frank; Grimme, Stefan
2009-05-19
Quantum chemistry has found its way into the everyday work of many experimental chemists. Calculations can predict the outcome of chemical reactions, afford insight into reaction mechanisms, and be used to interpret structure and bonding in molecules. Thus, contemporary theory offers tremendous opportunities in experimental chemical research. However, even with present-day computers and algorithms, we cannot solve the many particle Schrodinger equation exactly; inevitably some error is introduced in approximating the solutions of this equation. Thus, the accuracy of quantum chemical calculations is of critical importance. The affordable accuracy depends on molecular size and particularly on the total number of atoms: for orientation, ethanol has 9 atoms, aspirin 21 atoms, morphine 40 atoms, sildenafil 63 atoms, paclitaxel 113 atoms, insulin nearly 800 atoms, and quaternary hemoglobin almost 12,000 atoms. Currently, molecules with up to approximately 10 atoms can be very accurately studied by coupled cluster (CC) theory, approximately 100 atoms with second-order Møller-Plesset perturbation theory (MP2), approximately 1000 atoms with density functional theory (DFT), and beyond that number with semiempirical quantum chemistry and force-field methods. The overwhelming majority of present-day calculations in the 100-atom range use DFT. Although these methods have been very successful in quantum chemistry, they do not offer a well-defined hierarchy of calculations that allows one to systematically converge to the correct answer. Recently a number of rather spectacular failures of DFT methods have been found-even for seemingly simple systems such as hydrocarbons, fueling renewed interest in wave function-based methods that incorporate the relevant physics of electron correlation in a more systematic way. Thus, it would be highly desirable to fill the gap between 10 and 100 atoms with highly correlated ab initio methods. We have found that one of the earliest (and now
Symon, K.
1987-11-01
There are various reasons for preferring local (e.g., three bump) orbit correction methods to global corrections. One is the difficulty of solving the mN equations for the required mN correcting bumps, where N is the number of superperiods and m is the number of bumps per superperiod. The latter is not a valid reason for avoiding global corrections, since, we can take advantage of the superperiod symmetry to reduce the mN simultaneous equations to N separate problems, each involving only m simultaneous equations. Previously, I have shown how to solve the general problem when the machine contains unknown magnet errors of known probability distribution; we made measurements of known precision of the orbit displacements at a set of points, and we wish to apply correcting bumps to minimize the weighted rms orbit deviations. In this report, we will consider two simpler problems, using similar methods. We consider the case when we make M beam position measurements per superperiod, and we wish to apply an equal number M of orbit correcting bumps to reduce the measured position errors to zero. We also consider the problem when the number of correcting bumps is less than the number of measurements, and we wish to minimize the weighted rms position errors. We will see that the latter problem involves solving equations of a different form, but involving the same matrices as the former problem.
Contrast image correction method
NASA Astrophysics Data System (ADS)
Schettini, Raimondo; Gasparini, Francesca; Corchs, Silvia; Marini, Fabrizio; Capra, Alessandro; Castorina, Alfio
2010-04-01
A method for contrast enhancement is proposed. The algorithm is based on a local and image-dependent exponential correction. The technique aims to correct images that simultaneously present overexposed and underexposed regions. To prevent halo artifacts, the bilateral filter is used as the mask of the exponential correction. Depending on the characteristics of the image (piloted by histogram analysis), an automated parameter-tuning step is introduced, followed by stretching, clipping, and saturation preserving treatments. Comparisons with other contrast enhancement techniques are presented. The Mean Opinion Score (MOS) experiment on grayscale images gives the greatest preference score for our algorithm.
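The local exponential correction described above can be sketched as follows. The paper uses a bilateral filter as the mask to prevent halos; here a plain box blur stands in for it (so halo protection is not reproduced), and the exponent mapping is an illustrative choice, not the authors' tuned model.

```python
import numpy as np

# Local exponential (gamma-like) correction: the exponent applied to each
# pixel is piloted by a smoothed mask of the image, so dark regions are
# brightened (exponent < 1) and bright regions darkened (exponent > 1).
def box_blur(img, k=7):
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def exponential_correction(img):
    """img: float array in [0, 1]."""
    mask = box_blur(img)                   # stand-in for the bilateral filter
    gamma = 2.0 ** ((mask - 0.5) / 0.5)    # exponent in [0.5, 2]
    return np.clip(img ** gamma, 0.0, 1.0)

rng = np.random.default_rng(0)
img = rng.random((32, 32))
out = exponential_correction(img)
print(out.shape)
```

Because the exponent is computed from a smoothed local mask rather than a global histogram, overexposed and underexposed regions of the same image receive opposite corrections, which is the core idea of the method.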
Molnar, Michael; Ilie, Lucian
2015-07-01
Next-generation sequencing technologies revolutionized the ways in which genetic information is obtained and have opened the door for many essential applications in biomedical sciences. Hundreds of gigabytes of data are being produced, and all applications are affected by the errors in the data. Many programs have been designed to correct these errors, most of them targeting the data produced by the dominant technology of Illumina. We present a thorough comparison of these programs. Both HiSeq and MiSeq types of Illumina data are analyzed, and correcting performance is evaluated as the gain in depth and breadth of coverage, as given by correct reads and k-mers. Time and memory requirements, scalability and parallelism are considered as well. Practical guidelines are provided for the effective use of these tools. We also evaluate the efficiency of the current state-of-the-art programs for correcting Illumina data and provide research directions for further improvement.
Federal Register 2010, 2011, 2012, 2013, 2014
2010-11-08
...'' (Presidential Sig.) [FR Doc. C1-2010-27668 Filed 11-5-10; 8:45 am] Billing Code 1505-01-D ..., 2010--Continuation of U.S. Drug Interdiction Assistance to the Government of Colombia Correction...
Federal Register 2010, 2011, 2012, 2013, 2014
2013-12-06
.... Drug Interdiction Assistance to the Government of Colombia''. (Presidential Sig.) [FR Doc. C1-2013...--Continuation of U.S. Drug Interdiction Assistance to the Government of Colombia Correction In...
ERIC Educational Resources Information Center
Shaw, John M.; Sheahen, Thomas P.
1994-01-01
Describes the theory behind the workings of the Hubble Space Telescope, the spherical aberration in the primary mirror that caused a reduction in image quality, and the corrective device that compensated for the error. (JRH)
Method of absorbance correction in a spectroscopic heating value sensor
Saveliev, Alexei; Jangale, Vilas Vyankatrao; Zelepouga, Sergeui; Pratapas, John
2013-09-17
A method and apparatus for absorbance correction in a spectroscopic heating value sensor in which a reference light intensity measurement is made on a non-absorbing reference fluid, a light intensity measurement is made on a sample fluid, and a measured light absorbance of the sample fluid is determined. A corrective light intensity measurement at a non-absorbing wavelength of the sample fluid is made on the sample fluid from which an absorbance correction factor is determined. The absorbance correction factor is then applied to the measured light absorbance of the sample fluid to arrive at a true or accurate absorbance for the sample fluid.
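The correction flow in this patent-style abstract reduces to a short calculation. The intensity values below are hypothetical; the structure (reference measurement, sample measurement, correction at a non-absorbing wavelength) follows the description above.

```python
import math

# Absorbance correction: at a wavelength the sample does not absorb, any
# apparent absorbance must come from non-spectral effects (scattering,
# window fouling), so it is subtracted from the measured absorbance at the
# analytical wavelength to obtain the true absorbance.
def absorbance(i_reference, i_sample):
    return math.log10(i_reference / i_sample)

i_ref, i_sample = 1000.0, 500.0            # analytical wavelength (example)
i_ref_na, i_sample_na = 1000.0, 950.0      # non-absorbing wavelength (example)

a_measured = absorbance(i_ref, i_sample)
a_correction = absorbance(i_ref_na, i_sample_na)
a_true = a_measured - a_correction
print(round(a_true, 4))
```

Subtracting absorbances is equivalent to ratioing the transmitted intensities, so the correction cancels broadband losses that affect both wavelengths equally.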
moco: Fast Motion Correction for Calcium Imaging.
Dubbs, Alexander; Guevara, James; Yuste, Rafael
2016-01-01
Motion correction is the first step in a pipeline of algorithms to analyze calcium imaging videos and extract biologically relevant information, for example the network structure of the neurons therein. Fast motion correction is especially critical for closed-loop activity-triggered stimulation experiments, where accurate detection and targeting of specific cells is necessary. We introduce a novel motion-correction algorithm which uses a Fourier-transform approach and a combination of judicious downsampling and the accelerated computation of many L2 norms using dynamic programming and two-dimensional, FFT-accelerated convolutions to enhance its efficiency. Its accuracy is comparable to that of established community-used algorithms, and it is more stable to large translational motions. It is programmed in Java and is compatible with ImageJ. PMID:26909035
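The Fourier-transform core of such translational motion correction can be sketched in a few lines. This is the generic FFT cross-correlation technique, not the actual moco implementation; the test frames are synthetic.

```python
import numpy as np

# FFT-based registration: the peak of the circular cross-correlation between
# a template and a shifted frame gives the integer (dy, dx) displacement.
def register_translation(template, frame):
    corr = np.fft.ifft2(np.conj(np.fft.fft2(template)) * np.fft.fft2(frame)).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Map shifts past half the image size to negative displacements.
    if dy > template.shape[0] // 2:
        dy -= template.shape[0]
    if dx > template.shape[1] // 2:
        dx -= template.shape[1]
    return int(dy), int(dx)

rng = np.random.default_rng(1)
template = rng.random((64, 64))
frame = np.roll(template, shift=(5, -3), axis=(0, 1))  # simulated motion
print(register_translation(template, frame))
```

The two FFTs and one inverse FFT cost O(N log N) regardless of the shift magnitude, which is what makes the approach stable to large translations compared with local search.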
Respiration correction by clustering in ultrasound images
NASA Astrophysics Data System (ADS)
Wu, Kaizhi; Chen, Xi; Ding, Mingyue; Sang, Nong
2016-03-01
Respiratory motion is a challenging factor for image acquisition, image-guided procedures, and perfusion quantification using contrast-enhanced ultrasound in the abdominal and thoracic region. To reduce the influence of respiratory motion, we investigated respiratory correction methods. In this paper we propose a novel, cluster-based respiratory correction method. In the proposed method, we first assign the image frames of the corresponding respiratory phase using spectral clustering. We then achieve image correction automatically by finding a cluster in which points are close to each other. Unlike traditional gating methods, we do not need to estimate the breathing cycle accurately, because images at the corresponding respiratory phase are similar and therefore close in high-dimensional space. The proposed method is tested on a simulated image sequence and a real ultrasound image sequence. The experimental results show the effectiveness of our proposed method both quantitatively and qualitatively.
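The key observation in this entry, that frames at the same respiratory phase are close in high-dimensional space, can be demonstrated with a toy example. Simple distance thresholding to a seed frame stands in for the spectral clustering used in the paper; the frame model is entirely synthetic.

```python
import numpy as np

# Frames acquired at similar respiratory phases are close in pixel space, so
# grouping nearby frames collects one phase without ever estimating the
# breathing cycle itself.
rng = np.random.default_rng(2)
n_frames, n_pixels = 120, 200
phase = 2 * np.pi * rng.random(n_frames)            # unknown breathing phases
pattern = rng.random(n_pixels)
# Each toy frame is a base pattern offset by a phase-dependent intensity.
frames = pattern[None, :] + 0.5 * np.sin(phase)[:, None]
frames += 0.01 * rng.standard_normal(frames.shape)  # acquisition noise

seed = 0
dist = np.linalg.norm(frames - frames[seed], axis=1)
cluster = np.where(dist < 0.5)[0]                   # frames near the seed
spread = np.abs(np.sin(phase[cluster]) - np.sin(phase[seed]))
print(float(spread.max()))
```

All frames selected by pixel-space proximity turn out to share nearly the same respiratory state (small `spread`), which is the property the cluster-based correction exploits.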
Mobile image based color correction using deblurring
NASA Astrophysics Data System (ADS)
Wang, Yu; Xu, Chang; Boushey, Carol; Zhu, Fengqing; Delp, Edward J.
2015-03-01
Dietary intake, the process of determining what someone eats during the course of a day, provides valuable insights for mounting intervention programs for prevention of many chronic diseases such as obesity and cancer. The goal of the Technology Assisted Dietary Assessment (TADA) System, developed at Purdue University, is to automatically identify and quantify foods and beverages consumed by utilizing food images acquired with a mobile device. Color correction serves as a critical step to ensure accurate food identification and volume estimation. We make use of a specifically designed color checkerboard (i.e., a fiducial marker) to calibrate the imaging system so that the variations of food appearance under different lighting conditions can be determined. In this paper, we propose an image quality enhancement technique that combines image deblurring and color correction. The contribution consists of introducing an automatic camera-shake removal method using a saliency map and improving the polynomial color correction model using the LMS color space.
Adaptable DC offset correction
NASA Technical Reports Server (NTRS)
Golusky, John M. (Inventor); Muldoon, Kelly P. (Inventor)
2009-01-01
Methods and systems for adaptable DC offset correction are provided. An exemplary adaptable DC offset correction system evaluates an incoming baseband signal to determine an appropriate DC offset removal scheme; removes a DC offset from the incoming baseband signal based on the appropriate DC offset scheme in response to the evaluated incoming baseband signal; and outputs a reduced DC baseband signal in response to the DC offset removed from the incoming baseband signal.
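The evaluate-then-remove structure described in this entry can be sketched as below. The scheme-selection logic and thresholds are illustrative assumptions; the patent abstract does not specify them.

```python
import numpy as np

# Adaptable DC offset removal: a quick evaluation of the incoming baseband
# samples picks between a one-shot mean subtraction (static offset) and a
# slow one-pole high-pass tracker (drifting offset).
def remove_dc(samples, drift_threshold=0.05, alpha=0.01):
    first, second = np.array_split(samples, 2)
    drift = abs(first.mean() - second.mean())
    if drift < drift_threshold:                 # offset looks static
        return samples - samples.mean()
    # Offset drifts: track it with a one-pole IIR estimate and subtract.
    estimate, out = 0.0, np.empty_like(samples)
    for i, s in enumerate(samples):
        estimate += alpha * (s - estimate)
        out[i] = s - estimate
    return out

t = np.arange(2000)
signal = np.sin(2 * np.pi * 0.05 * t)
static = remove_dc(signal + 0.7)                # constant offset: mean removed
print(abs(float(static.mean())))
```

The one-pole tracker trades a small residual lag for the ability to follow slow offset drift, which a single mean subtraction cannot do; the up-front evaluation avoids paying that lag when the offset is genuinely static.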
NASA Astrophysics Data System (ADS)
Lidar, Daniel A.; Brun, Todd A.
2013-09-01
Prologue; Preface; Part I. Background: 1. Introduction to decoherence and noise in open quantum systems Daniel Lidar and Todd Brun; 2. Introduction to quantum error correction Dave Bacon; 3. Introduction to decoherence-free subspaces and noiseless subsystems Daniel Lidar; 4. Introduction to quantum dynamical decoupling Lorenza Viola; 5. Introduction to quantum fault tolerance Panos Aliferis; Part II. Generalized Approaches to Quantum Error Correction: 6. Operator quantum error correction David Kribs and David Poulin; 7. Entanglement-assisted quantum error-correcting codes Todd Brun and Min-Hsiu Hsieh; 8. Continuous-time quantum error correction Ognyan Oreshkov; Part III. Advanced Quantum Codes: 9. Quantum convolutional codes Mark Wilde; 10. Non-additive quantum codes Markus Grassl and Martin Rötteler; 11. Iterative quantum coding systems David Poulin; 12. Algebraic quantum coding theory Andreas Klappenecker; 13. Optimization-based quantum error correction Andrew Fletcher; Part IV. Advanced Dynamical Decoupling: 14. High order dynamical decoupling Zhen-Yu Wang and Ren-Bao Liu; 15. Combinatorial approaches to dynamical decoupling Martin Rötteler and Pawel Wocjan; Part V. Alternative Quantum Computation Approaches: 16. Holonomic quantum computation Paolo Zanardi; 17. Fault tolerance for holonomic quantum computation Ognyan Oreshkov, Todd Brun and Daniel Lidar; 18. Fault tolerant measurement-based quantum computing Debbie Leung; Part VI. Topological Methods: 19. Topological codes Héctor Bombín; 20. Fault tolerant topological cluster state quantum computing Austin Fowler and Kovid Goyal; Part VII. Applications and Implementations: 21. Experimental quantum error correction Dave Bacon; 22. Experimental dynamical decoupling Lorenza Viola; 23. Architectures Jacob Taylor; 24. Error correction in quantum communication Mark Wilde; Part VIII. Critical Evaluation of Fault Tolerance: 25. Hamiltonian methods in QEC and fault tolerance Eduardo Novais, Eduardo Mucciolo and
Accurate transition rates for intercombination lines of singly ionized nitrogen
Tayal, S. S.
2011-01-15
The transition energies and rates for the 2s²2p² ³P₁,₂–2s2p³ ⁵S₂ᵒ and 2s²2p3s–2s²2p3p intercombination transitions have been calculated using term-dependent nonorthogonal orbitals in the multiconfiguration Hartree-Fock approach. Several sets of spectroscopic and correlation nonorthogonal functions have been chosen to describe adequately term dependence of wave functions and various correlation corrections. Special attention has been focused on the accurate representation of strong interactions between the 2s2p³ ¹,³P₁ᵒ and 2s²2p3s ¹,³P₁ᵒ levels. The relativistic corrections are included through the one-body mass correction, Darwin, and spin-orbit operators and two-body spin-other-orbit and spin-spin operators in the Breit-Pauli Hamiltonian. The importance of core-valence correlation effects has been examined. The accuracy of present transition rates is evaluated by the agreement between the length and velocity formulations combined with the agreement between the calculated and measured transition energies. The present results for transition probabilities, branching fractions, and lifetimes have been compared with previous calculations and experiments.
Accurate ab initio vibrational energies of methyl chloride
Owens, Alec; Yurchenko, Sergei N.; Yachmenev, Andrey; Tennyson, Jonathan; Thiel, Walter
2015-06-28
Two new nine-dimensional potential energy surfaces (PESs) have been generated using high-level ab initio theory for the two main isotopologues of methyl chloride, CH₃³⁵Cl and CH₃³⁷Cl. The respective PESs, CBS-35ᴴᴸ and CBS-37ᴴᴸ, are based on explicitly correlated coupled cluster calculations with extrapolation to the complete basis set (CBS) limit, and incorporate a range of higher-level (HL) additive energy corrections to account for core-valence electron correlation, higher-order coupled cluster terms, scalar relativistic effects, and diagonal Born-Oppenheimer corrections. Variational calculations of the vibrational energy levels were performed using the computer program TROVE, whose functionality has been extended to handle molecules of the form XY₃Z. Fully converged energies were obtained by means of a complete vibrational basis set extrapolation. The CBS-35ᴴᴸ and CBS-37ᴴᴸ PESs reproduce the fundamental term values with root-mean-square errors of 0.75 and 1.00 cm⁻¹, respectively. An analysis of the combined effect of the HL corrections and CBS extrapolation on the vibrational wavenumbers indicates that both are needed to compute accurate theoretical results for methyl chloride. We believe that it would be extremely challenging to go beyond the accuracy currently achieved for CH₃Cl without empirical refinement of the respective PESs.
Accurate thermoelastic tensor and acoustic velocities of NaCl
NASA Astrophysics Data System (ADS)
Marcondes, Michel L.; Shukla, Gaurav; da Silveira, Pedro; Wentzcovitch, Renata M.
2015-12-01
Despite the importance of thermoelastic properties of minerals in geology and geophysics, their measurement at high pressures and temperatures are still challenging. Thus, ab initio calculations are an essential tool for predicting these properties at extreme conditions. Owing to the approximate description of the exchange-correlation energy, approximations used in calculations of vibrational effects, and numerical/methodological approximations, these methods produce systematic deviations. Hybrid schemes combining experimental data and theoretical results have emerged as a way to reconcile available information and offer more reliable predictions at experimentally inaccessible thermodynamics conditions. Here we introduce a method to improve the calculated thermoelastic tensor by using highly accurate thermal equation of state (EoS). The corrective scheme is general, applicable to crystalline solids with any symmetry, and can produce accurate results at conditions where experimental data may not exist. We apply it to rock-salt-type NaCl, a material whose structural properties have been challenging to describe accurately by standard ab initio methods and whose acoustic/seismic properties are important for the gas and oil industry.
Videometric terminal guidance method and system for UAV accurate landing
NASA Astrophysics Data System (ADS)
Zhou, Xiang; Lei, Zhihui; Yu, Qifeng; Zhang, Hongliang; Shang, Yang; Du, Jing; Gui, Yang; Guo, Pengyu
2012-06-01
We present a videometric method and system to implement terminal guidance for accurate Unmanned Aerial Vehicle (UAV) landing. In the videometric system, two calibrated cameras attached to the ground are used, and a calibration method applying at least 5 control points is developed to calibrate the inner and exterior parameters of the cameras. Cameras with an 850 nm spectral filter are used to recognize an 850 nm LED target fixed on the UAV, which highlights itself in images with complicated backgrounds. An NNLOG (normalized negative Laplacian of Gaussian) operator is developed for automatic target detection and tracking. Finally, the 3-D position of the UAV can be calculated with high accuracy and transferred to the control system to direct an accurate landing. The videometric system works at a rate of 50 Hz. Many real flight and static accuracy experiments demonstrate the correctness and veracity of the method proposed in this paper, and they also indicate the reliability and robustness of the system. The static accuracy experiments show that the deviation is less than 10 cm when the target is far from the cameras and less than 2 cm within a 100 m range. The real flight experiments show that the deviation from DGPS is less than 20 cm. The system implemented in this paper won the first prize in the AVIC Cup-International UAV Innovation Grand Prix, and it is the only one that achieved accurate UAV landing without GPS or DGPS.
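The two-camera 3-D position computation at the heart of such a system is standard linear (DLT) triangulation. The sketch below uses synthetic projection matrices and an exact image observation; the paper's calibration and target-detection steps are not reproduced.

```python
import numpy as np

# Linear triangulation: given 3x4 projection matrices P1, P2 of two calibrated
# cameras and the target's image coordinates (u, v) in both views, the 3-D
# point is the null vector of a 4x4 system built from u*P[2]-P[0], v*P[2]-P[1].
def triangulate(P1, P2, uv1, uv2):
    A = np.vstack([
        uv1[0] * P1[2] - P1[0],
        uv1[1] * P1[2] - P1[1],
        uv2[0] * P2[2] - P2[0],
        uv2[1] * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]                      # homogeneous solution
    return X[:3] / X[3]

# Two synthetic cameras observing a known point.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])                  # at the origin
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])  # shifted in x
point = np.array([0.3, -0.2, 5.0, 1.0])
uv1 = (P1 @ point)[:2] / (P1 @ point)[2]
uv2 = (P2 @ point)[:2] / (P2 @ point)[2]
print(triangulate(P1, P2, uv1, uv2))
```

With noisy detections the same SVD solution becomes a least-squares estimate, and the residual of the smallest singular value indicates the consistency of the two views.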
Geological Corrections in Gravimetry
NASA Astrophysics Data System (ADS)
Mikuška, J.; Marušiak, I.
2015-12-01
Applying corrections for the known geology to gravity data can be traced back to the first quarter of the 20th century. Later on, mostly in areas with sedimentary cover, at local and regional scales, the correction known as gravity stripping has been in use since the mid 1960s, provided that there was enough geological information. Stripping at regional to global scales became possible after the release of the CRUST 2.0 and later CRUST 1.0 models in the years 2000 and 2013, respectively. Especially the latter model provides quite a new view of the relevant geometries and of the topographic and crustal densities, as well as of the crust/mantle density contrast. Thus, the isostatic corrections, which have often been used in the past, can now be replaced by procedures working with independent information interpreted primarily from seismic studies. We have developed software for performing geological corrections in the space domain, based on a priori geometry and density grids, which can be of either rectangular or spherical/ellipsoidal type with cells shaped as rectangles, tesseroids, or triangles. It enables us to calculate the required gravitational effects not only in the form of surface maps or profiles but, for instance, also along vertical lines, which can shed some additional light on the nature of the geological correction. The software can work at a variety of scales and considers the input information out to an optional distance from the calculation point, up to the antipodes. Our main objective is to treat the geological correction as an alternative to accounting for the topography with varying densities, since the bottoms of the topographic masses, namely the geoid or ellipsoid, generally do not represent geological boundaries. We would also like to call attention to possible distortions of the corrected gravity anomalies. This work was supported by the Slovak Research and Development Agency under contract APVV-0827-12.
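The core of any such correction is summing the gravitational effect of density cells at a station. A minimal sketch, assuming a crude point-mass approximation for each rectangular cell (production codes such as the one described use closed-form prism or tesseroid formulas instead):

```python
import math

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def gz_point_mass(cells, station):
    """Vertical gravitational effect (mGal) of density cells approximated
    as point masses at their centers. Coordinates in meters, z positive
    downward, densities in kg/m^3. A sketch only; real geological
    corrections use exact prism/tesseroid formulas."""
    sx, sy, sz = station
    gz = 0.0
    for cx, cy, cz, dx, dy, dz, rho in cells:
        mass = rho * dx * dy * dz
        rx, ry, rz = cx - sx, cy - sy, cz - sz
        r = math.sqrt(rx * rx + ry * ry + rz * rz)
        gz += G * mass * rz / r**3  # downward component
    return gz * 1e5  # m/s^2 -> mGal

# One 1 km cube of anomalous density 300 kg/m^3, centered 2 km below the station.
cells = [(0.0, 0.0, 2000.0, 1000.0, 1000.0, 1000.0, 300.0)]
gz = gz_point_mass(cells, (0.0, 0.0, 0.0))
print(round(gz, 3))  # ~0.5 mGal
```

The same loop, run over a gridded CRUST-style model with the station kept fixed and the cell depth varied, produces the along-vertical-line effects mentioned in the abstract.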
Mill profiler machines soft materials accurately
NASA Technical Reports Server (NTRS)
Rauschl, J. A.
1966-01-01
Mill profiler machines bevels, slots, and grooves in soft materials, such as styrofoam phenolic-filled cores, to any desired thickness. A single operator can accurately control cutting depths in contour or straight line work.
Aureolegraph internal scattering correction.
DeVore, John; Villanucci, Dennis; LePage, Andrew
2012-11-20
Two methods of determining instrumental scattering for correcting aureolegraph measurements of particulate solar scattering are presented. One involves subtracting measurements made with and without an external occluding ball and the other is a modification of the Langley Plot method and involves extrapolating aureolegraph measurements collected through a large range of solar zenith angles. Examples of internal scattering correction determinations using the latter method show similar power-law dependencies on scattering, but vary by roughly a factor of 8 and suggest that changing aerosol conditions during the determinations render this method problematic. Examples of corrections of scattering profiles using the former method are presented for a range of atmospheric particulate layers from aerosols to cumulus and cirrus clouds.
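The modified Langley method builds on the classical Langley regression, which can be sketched as a straight-line fit of log signal against airmass. The sketch below shows only that underlying extrapolation idea, on synthetic clear-sky data; the paper's modification for internal scattering is not reproduced here.

```python
import numpy as np

def langley_fit(airmass, signal):
    """Classic Langley regression: ln(V) = ln(V0) - tau * m.
    Returns the zero-airmass intercept V0 and optical depth tau."""
    slope, intercept = np.polyfit(airmass, np.log(signal), 1)
    return np.exp(intercept), -slope

# Synthetic stable-sky series: V0 = 1.5, optical depth tau = 0.2.
m = np.linspace(1.0, 6.0, 25)
v = 1.5 * np.exp(-0.2 * m)
v0, tau = langley_fit(m, v)
print(round(v0, 3), round(tau, 3))  # -> 1.5 0.2
```

The abstract's caveat applies directly: if aerosol conditions change while the solar zenith angle range is being collected, the assumed single (V0, tau) pair no longer holds and the fit becomes unreliable.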
Wang, S.T.
1994-11-01
A wire cable assembly adapted for the winding of electrical coils is taught. A primary intended use is in particle tube assemblies for the Superconducting Super Collider. The correction coil cables have wires collected in a wire array with a center rib sandwiched therebetween to form a core assembly. The core assembly is surrounded by an assembly housing having an inner spiral wrap and a counter-wound outer spiral wrap. An alternate embodiment of the invention is rolled into a keystoned shape to improve radial alignment of the correction coil cable on a particle tube in a particle tube assembly. 7 figs.
Corrections and clarifications.
1994-11-11
The 1994 and 1995 federal science budget appropriations for two of the activities were inadvertently transposed in a table that accompanied the article "Hitting the President's target is mixed blessing for agencies" by Jeffrey Mervis (News & Comment, 14 Oct., p. 211). The correct figures for Defense Department spending on university research are $1.460 billion in 1994 and $1.279 billion in 1995; for research and development at NASA, the correct figures are $9.455 billion in 1994 and $9.824 billion in 1995.
Refraction corrections for surveying
NASA Technical Reports Server (NTRS)
Lear, W. M.
1979-01-01
Optical measurements of range and elevation angle are distorted by the earth's atmosphere. High precision refraction correction equations are presented which are ideally suited for surveying because their inputs are optically measured range and optically measured elevation angle. The outputs are true straight line range and true geometric elevation angle. The 'short distances' used in surveying allow the calculations of true range and true elevation angle to be quickly made using a programmable pocket calculator. Topics covered include the spherical form of Snell's Law; ray path equations; and integrating the equations. Short-, medium-, and long-range refraction corrections are presented in tables.
DNA barcode data accurately assign higher spider taxa
Coddington, Jonathan A.; Agnarsson, Ingi; Cheng, Ren-Chung; Čandek, Klemen; Driskell, Amy; Frick, Holger; Gregorič, Matjaž; Kostanjšek, Rok; Kropf, Christian; Kweskin, Matthew; Lokovšek, Tjaša; Pipan, Miha; Vidergar, Nina
2016-01-01
The use of unique DNA sequences as a method for taxonomic identification is no longer fundamentally controversial, even though debate continues on the best markers, methods, and technology to use. Although both existing databanks such as GenBank and BOLD, as well as reference taxonomies, are imperfect, in best case scenarios “barcodes” (whether single or multiple, organelle or nuclear, loci) clearly are an increasingly fast and inexpensive method of identification, especially as compared to manual identification of unknowns by increasingly rare expert taxonomists. Because most species on Earth are undescribed, a complete reference database at the species level is impractical in the near term. The question therefore arises whether unidentified species can, using DNA barcodes, be accurately assigned to more inclusive groups such as genera and families—taxonomic ranks of putatively monophyletic groups for which the global inventory is more complete and stable. We used a carefully chosen test library of CO1 sequences from 49 families, 313 genera, and 816 species of spiders to assess the accuracy of genus and family-level assignment. We used BLAST queries of each sequence against the entire library and got the top ten hits. The percent sequence identity was reported from these hits (PIdent, range 75–100%). Accurate assignment of higher taxa (PIdent above which errors totaled less than 5%) occurred for genera at PIdent values >95 and families at PIdent values ≥ 91, suggesting these as heuristic thresholds for accurate generic and familial identifications in spiders. Accuracy of identification increases with numbers of species/genus and genera/family in the library; above five genera per family and fifteen species per genus all higher taxon assignments were correct. We propose that using percent sequence identity between conventional barcode sequences may be a feasible and reasonably accurate method to identify animals to family/genus. However, the quality of
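The heuristic thresholds reported above reduce to a simple decision rule on the top hit's percent identity. A minimal sketch (the actual study also inspects the full set of top-ten hits and library composition, which this rule ignores):

```python
def assign_rank(top_hit_pident, genus_thresh=95.0, family_thresh=91.0):
    """Assign the most specific higher taxon supported by the top BLAST
    hit's percent identity, using the spider-barcode heuristic
    thresholds (>95 -> genus, >=91 -> family); otherwise unassigned."""
    if top_hit_pident > genus_thresh:
        return "genus"
    if top_hit_pident >= family_thresh:
        return "family"
    return "unassigned"

print(assign_rank(97.2))  # genus
print(assign_rank(93.0))  # family
print(assign_rank(85.4))  # unassigned
```

As the abstract notes, accuracy under such a rule still depends on library coverage: sparsely sampled genera and families fall below the five-genera and fifteen-species densities at which all assignments were correct.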
On the very accurate numerical evaluation of the Generalized Fermi-Dirac Integrals
NASA Astrophysics Data System (ADS)
Mohankumar, N.; Natarajan, A.
2016-10-01
We indicate a new and very accurate algorithm for the evaluation of the Generalized Fermi-Dirac Integral with a relative error less than 10^-20. The method involves Double Exponential, Trapezoidal and Gauss-Legendre quadratures. For the residue correction of the Gauss-Legendre scheme, a simple and precise continued fraction algorithm is used.
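Two of the ingredients named above, a double-exponential substitution followed by the trapezoidal rule, already give many digits for the ordinary (non-relativistic) Fermi-Dirac integral. A sketch under that simplification; the paper's full scheme additionally uses Gauss-Legendre quadrature with a continued-fraction residue correction, which is not shown:

```python
import numpy as np

def fermi_dirac(k, eta, h=0.005, t_range=(-4.5, 5.0)):
    """F_k(eta) = int_0^inf x^k / (1 + exp(x - eta)) dx, evaluated with
    the double-exponential substitution x = exp(t - exp(-t)) and the
    trapezoidal rule. A sketch of the DE + trapezoid ingredients only."""
    t = np.arange(t_range[0], t_range[1] + h, h)
    x = np.exp(t - np.exp(-t))
    dxdt = x * (1.0 + np.exp(-t))       # chain rule for the substitution
    f = x**k / (1.0 + np.exp(x - eta)) * dxdt
    return h * (f.sum() - 0.5 * (f[0] + f[-1]))

# Check: F_{1/2}(0) = Gamma(3/2) * (1 - 2^{-1/2}) * zeta(3/2) ~ 0.678094
print(round(fermi_dirac(0.5, 0.0), 6))
```

The DE transform clusters nodes where the integrand varies and makes the trapezoidal error decay super-exponentially in 1/h, which is why so few nodes suffice.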
Precise and accurate isotopic measurements using multiple-collector ICPMS
NASA Astrophysics Data System (ADS)
Albarède, F.; Telouk, Philippe; Blichert-Toft, Janne; Boyet, Maud; Agranier, Arnaud; Nelson, Bruce
2004-06-01
New techniques of isotopic measurements by a new generation of mass spectrometers equipped with an inductively-coupled-plasma source, a magnetic mass filter, and multiple collection (MC-ICPMS) are quickly developing. These techniques are valuable because of (1) the ability of ICP sources to ionize virtually every element in the periodic table, and (2) the large sample throughput. However, because of the complex trajectories of multiple ion beams produced in the plasma source, whether from the same or different elements, the acquisition of precise and accurate isotopic data with this type of instrument still requires a good understanding of instrumental fractionation processes, both mass-dependent and mass-independent. Although the physical processes responsible for the instrumental mass bias are still to be understood more fully, we here present a theoretical framework that allows for most of the analytical limitations to high precision and accuracy to be overcome. After a presentation of a unifying phenomenological theory for mass-dependent fractionation in mass spectrometers, we show how this theory accounts for the techniques of standard bracketing and of isotopic normalization by a ratio of either the same or a different element, such as the use of Tl to correct mass bias on Pb. Accuracy is discussed with reference to the concept of cup efficiencies. Although these can be simply calibrated by analyzing standards, we derive a straightforward, very general method to calculate accurate isotopic ratios from dynamic measurements. In this study, we successfully applied the dynamic method to Nd and Pb as examples. We confirm that the assumption of identical mass bias for neighboring elements (notably Pb and Tl, and Yb and Lu) is both unnecessary and incorrect. We further discuss the dangers of straightforward standard-sample bracketing when chemical purification of the element to be analyzed is imperfect. Pooling runs to improve precision is acceptable provided the pooled
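The internal-normalization technique discussed above is commonly written as an exponential mass-bias law. A minimal sketch, with made-up measured ratios (only the masses and the accepted 146Nd/144Nd value are real); the paper's phenomenological theory generalizes this standard recipe:

```python
import math

def exp_law_correct(r_meas, m_num, m_den, r_ref_meas, r_ref_true, m_ref_num, m_ref_den):
    """Exponential-law mass-bias correction by internal normalization:
    R_meas = R_true * (M_num/M_den)**beta. The exponent beta is derived
    from a reference ratio of known value, then applied to the ratio of
    interest."""
    beta = math.log(r_ref_meas / r_ref_true) / math.log(m_ref_num / m_ref_den)
    return r_meas * (m_num / m_den) ** (-beta)

# Hypothetical Nd run, normalized to the accepted 146Nd/144Nd = 0.7219.
m143, m144, m146 = 142.90981, 143.91009, 145.91312
r146_meas = 0.7300   # measured, mass-biased (made-up number)
r143_meas = 0.5120   # measured 143Nd/144Nd of interest (made-up number)
r143_corr = exp_law_correct(r143_meas, m143, m144, r146_meas, 0.7219, m146, m144)
print(round(r143_corr, 6))
```

Because 143Nd is lighter than 144Nd while 146Nd is heavier, the correction moves the two ratios in opposite directions, which is the usual sanity check on the sign of beta.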
Issues in Correctional Training and Casework. Correctional Monograph.
ERIC Educational Resources Information Center
Wolford, Bruce I., Ed.; Lawrenz, Pam, Ed.
The eight papers contained in this monograph were drawn from two national meetings on correctional training and casework. Titles and authors are: "The Challenge of Professionalism in Correctional Training" (Michael J. Gilbert); "A New Perspective in Correctional Training" (Jack Lewis); "Reasonable Expectations in Correctional Officer Training:…
Space charge stopband correction
Huang, Xiaobiao; Lee, S.Y.; /Indiana U.
2005-09-01
It is speculated that the space charge effect causes beam emittance growth through resonant envelope oscillation. Based on this theory, we propose an approach, called space charge stopband correction, to reduce such emittance growth by compensating the half-integer stopband width of the resonant oscillation. It is illustrated with the Fermilab Booster model.
Counselor Education for Corrections.
ERIC Educational Resources Information Center
Parsigian, Linda
Counselor education programs most often prepare their graduates to work in either a school setting, anywhere from the elementary level through higher education, or a community agency. There is little indication that counselor education programs have seriously undertaken the task of training counselors to enter the correctional field. If…
Refraction corrections for surveying
NASA Technical Reports Server (NTRS)
Lear, W. M.
1980-01-01
Optical measurements of range and elevation angle are distorted by refraction in the Earth's atmosphere. A theoretical discussion of the effect, along with equations for determining exact range and elevation corrections, is presented in the report. Potentially useful in optical site surveying and related applications, the analysis is easily programmed on a pocket calculator. Inputs to the equations are measured range and measured elevation; outputs are true range and true elevation.
Federal Register 2010, 2011, 2012, 2013, 2014
2010-11-08
Presidential Determination No. 2010-14 of September 3, 2010--Unexpected Urgent Refugee And... In the presidential determination beginning on page 67015 in the issue of Monday, November 1, 2010, make the following correction: On page 67015, the Presidential Determination number should read ``2010-14'' (Presidential Sig.) [FR Doc....
Federal Register 2010, 2011, 2012, 2013, 2014
2010-11-08
Presidential Determination No. 2010-12 of August 26, 2010--Unexpected Urgent Refugee and... In the presidential determination beginning on page 67013 in the issue of Monday, November 1, 2010, make the following correction: On page 67013, the Presidential Determination number should read ``2010-12'' (Presidential Sig.) [FR Doc....
Interactive Isogeometric Volume Visualization with Pixel-Accurate Geometry.
Fuchs, Franz G; Hjelmervik, Jon M
2016-02-01
A recent development, called isogeometric analysis, provides a unified approach for design, analysis and optimization of functional products in industry. Traditional volume rendering methods for inspecting the results from the numerical simulations cannot be applied directly to isogeometric models. We present a novel approach for interactive visualization of isogeometric analysis results, ensuring correct, i.e., pixel-accurate geometry of the volume including its bounding surfaces. The entire OpenGL pipeline is used in a multi-stage algorithm leveraging techniques from surface rendering, order-independent transparency, as well as theory and numerical methods for ordinary differential equations. We showcase the efficiency of our approach on different models relevant to industry, ranging from quality inspection of the parametrization of the geometry, to stress analysis in linear elasticity, to visualization of computational fluid dynamics results. PMID:26731454
Accurate Determination of Conformational Transitions in Oligomeric Membrane Proteins
Sanz-Hernández, Máximo; Vostrikov, Vitaly V.; Veglia, Gianluigi; De Simone, Alfonso
2016-01-01
The structural dynamics governing collective motions in oligomeric membrane proteins play key roles in vital biomolecular processes at cellular membranes. In this study, we present a structural refinement approach that combines solid-state NMR experiments and molecular simulations to accurately describe concerted conformational transitions identifying the overall structural, dynamical, and topological states of oligomeric membrane proteins. The accuracy of the structural ensembles generated with this method is shown to reach the statistical error limit, and is further demonstrated by correctly reproducing orthogonal NMR data. We demonstrate the accuracy of this approach by characterising the pentameric state of phospholamban, a key player in the regulation of calcium uptake in the sarcoplasmic reticulum, and by probing its dynamical activation upon phosphorylation. Our results underline the importance of using an ensemble approach to characterise the conformational transitions that are often responsible for the biological function of oligomeric membrane protein states. PMID:26975211
Neutron supermirrors: an accurate theory for layer thickness computation
NASA Astrophysics Data System (ADS)
Bray, Michael
2001-11-01
We present a new theory for the computation of Super-Mirror stacks, using accurate formulas derived from the classical optics field. Approximations are introduced into the computation, but at a later stage than existing theories, providing a more rigorous treatment of the problem. The final result is a continuous thickness stack, whose properties can be determined at the outset of the design. We find that the well-known fourth power dependence of number of layers versus maximum angle is (of course) asymptotically correct. We find a formula giving directly the relation between desired reflectance, maximum angle, and number of layers (for a given pair of materials). Note: The author of this article, a classical opticist, has limited knowledge of the Neutron world, and begs forgiveness for any shortcomings, erroneous assumptions and/or misinterpretation of previous authors' work on the subject.
Fast and accurate determination of modularity and its effect size
NASA Astrophysics Data System (ADS)
Treviño, Santiago, III; Nyberg, Amy; Del Genio, Charo I.; Bassler, Kevin E.
2015-02-01
We present a fast spectral algorithm for community detection in complex networks. Our method searches for the partition with the maximum value of the modularity via the interplay of several refinement steps that include both agglomeration and division. We validate the accuracy of the algorithm by applying it to several real-world benchmark networks. On all these, our algorithm performs as well or better than any other known polynomial scheme. This allows us to extensively study the modularity distribution in ensembles of Erdős-Rényi networks, producing theoretical predictions for means and variances inclusive of finite-size corrections. Our work provides a way to accurately estimate the effect size of modularity, providing a z-score measure of it and enabling a more informative comparison of networks with different numbers of nodes and links.
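The quantity being optimized is Newman's modularity, Q = sum over communities of [L_c/m - (d_c/2m)^2], where L_c is the number of internal edges, d_c the total degree of community c, and m the edge count. A minimal sketch that evaluates Q for a given partition (the paper's contribution, the fast spectral search and the finite-size Erdős-Rényi statistics, is not reproduced here):

```python
def modularity(edges, membership):
    """Newman modularity Q = sum_c [L_c/m - (d_c/(2m))^2] for an
    undirected graph given as an edge list and a node->community map."""
    m = len(edges)
    deg, internal = {}, {}
    for u, v in edges:
        deg[u] = deg.get(u, 0) + 1
        deg[v] = deg.get(v, 0) + 1
        if membership[u] == membership[v]:
            internal[membership[u]] = internal.get(membership[u], 0) + 1
    d_c = {}
    for node, d in deg.items():
        c = membership[node]
        d_c[c] = d_c.get(c, 0) + d
    return sum(internal.get(c, 0) / m - (d_c[c] / (2 * m)) ** 2 for c in d_c)

# Two triangles joined by one bridge edge, split into their natural communities.
edges = [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]
part = {0: "A", 1: "A", 2: "A", 3: "B", 4: "B", 5: "B"}
q = modularity(edges, part)
print(round(q, 4))  # -> 0.3571
```

The effect-size measure described in the abstract would then compare this Q with the mean and variance of the maximum modularity over an ensemble of random graphs with the same numbers of nodes and links, yielding a z-score.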
NASA Astrophysics Data System (ADS)
Tanrıver, Mehmet
2015-04-01
In this article, a period analysis of the late-type eclipsing binary VV UMa is presented. This work is based on the periodic variation of eclipse timings of the VV UMa binary. We determined the orbital properties and mass of a third orbiting body in the system by analyzing the light-travel time effect. The O-C diagram constructed for all available minima times of VV UMa exhibits a cyclic character superimposed on a linear variation. This variation includes three maxima and two minima within approximately 28,240 orbital periods of the system, which can be explained as the light-travel time effect (LITE) of an unseen third body in a triple system that causes variations of the eclipse arrival times. New parameter values of the light-travel time effect due to the third body were computed with a period of 23.22 ± 0.17 years in the system. The cyclic-variation analysis produces a value of 0.0139 day as the semi-amplitude of the light-travel time effect and 0.35 as the orbital eccentricity of the third body. The mass of the third body that orbits the eclipsing binary stars is 0.787 ± 0.02 M⊙, and the semi-major axis of its orbit is 10.75 AU.
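The chain from a LITE fit to a third-body mass can be sketched as follows: the semi-amplitude in days times the light travel speed gives a12 sin i, that cubed over P3 squared gives the mass function, and a root solve gives m3. The sketch below uses the abstract's A and P3 but drops the eccentricity terms and assumes a binary mass and sin i = 1, so its answer is illustrative and deliberately not the published 0.787 M⊙:

```python
def lite_third_body_mass(A_days, P3_years, M12, sin_i=1.0):
    """Third-body mass from a LITE fit in the circular-orbit
    approximation. A_days: LITE semi-amplitude (days); P3_years: outer
    period; M12: assumed total mass of the eclipsing pair (solar
    masses, not given in the abstract). Units: AU, years, M_sun."""
    C_AU_PER_DAY = 173.145                 # light travels ~173 AU per day
    a12_sin_i = A_days * C_AU_PER_DAY      # projected size of the binary's orbit
    f = a12_sin_i**3 / P3_years**2         # mass function, solar masses
    lo, hi = 1e-6, 100.0                   # bisection on the increasing
    for _ in range(200):                   # function m3^3/(M12+m3)^2
        m3 = 0.5 * (lo + hi)
        if (m3 * sin_i) ** 3 / (M12 + m3) ** 2 > f:
            hi = m3
        else:
            lo = m3
    return m3

m3 = lite_third_body_mass(A_days=0.0139, P3_years=23.22, M12=2.0)
print(round(m3, 3))
```

With the eccentricity (e = 0.35) and argument of periastron included, and the authors' adopted binary mass, the same chain leads to the published value.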
Modified chemiluminescent NO analyzer accurately measures NOX
NASA Technical Reports Server (NTRS)
Summers, R. L.
1978-01-01
Installation of a molybdenum nitric oxide (NO)-to-higher oxides of nitrogen (NOx) converter in a chemiluminescent gas analyzer and use of an air purge allow accurate measurements of NOx in exhaust gases containing as much as thirty percent carbon monoxide (CO). Measurements using a conventional analyzer are highly inaccurate for NOx if as little as five percent CO is present. In the modified analyzer, molybdenum has high tolerance to CO, and the air purge substantially quenches NOx destruction. In tests, the modified chemiluminescent analyzer accurately measured NO and NOx concentrations for over 4 months with no degradation in performance.
Quasars as very-accurate clock synchronizers
NASA Technical Reports Server (NTRS)
Hurd, W. J.; Goldstein, R. M.
1975-01-01
Quasars can be employed to synchronize global data communications, geophysical measurements, and atomic clocks. The technique is potentially two to three orders of magnitude better than the presently used Moon-bounce system. Comparisons between quasar and clock pulses are used to develop correction or synchronization factors for station clocks.
Can Appraisers Rate Work Performance Accurately?
ERIC Educational Resources Information Center
Hedge, Jerry W.; Laue, Frances J.
The ability of individuals to make accurate judgments about others is examined and literature on this subject is reviewed. A wide variety of situational factors affects the appraisal of performance. It is generally accepted that the purpose of the appraisal influences the accuracy of the appraiser. The instrumentation, or tools, available to the…
Accurate pointing of tungsten welding electrodes
NASA Technical Reports Server (NTRS)
Ziegelmeier, P.
1971-01-01
Thoriated-tungsten is pointed accurately and quickly by using sodium nitrite. Point produced is smooth and no effort is necessary to hold the tungsten rod concentric. The chemically produced point can be used several times longer than ground points. This method reduces time and cost of preparing tungsten electrodes.
A highly accurate ab initio potential energy surface for methane
NASA Astrophysics Data System (ADS)
Owens, Alec; Yurchenko, Sergei N.; Yachmenev, Andrey; Tennyson, Jonathan; Thiel, Walter
2016-09-01
A new nine-dimensional potential energy surface (PES) for methane has been generated using state-of-the-art ab initio theory. The PES is based on explicitly correlated coupled cluster calculations with extrapolation to the complete basis set limit and incorporates a range of higher-level additive energy corrections. These include core-valence electron correlation, higher-order coupled cluster terms beyond perturbative triples, scalar relativistic effects, and the diagonal Born-Oppenheimer correction. Sub-wavenumber accuracy is achieved for the majority of experimentally known vibrational energy levels with the four fundamentals of 12CH4 reproduced with a root-mean-square error of 0.70 cm-1. The computed ab initio equilibrium C-H bond length is in excellent agreement with previous values despite pure rotational energies displaying minor systematic errors as J (rotational excitation) increases. It is shown that these errors can be significantly reduced by adjusting the equilibrium geometry. The PES represents the most accurate ab initio surface to date and will serve as a good starting point for empirical refinement.
Accurate estimation of sigma(exp 0) using AIRSAR data
NASA Technical Reports Server (NTRS)
Holecz, Francesco; Rignot, Eric
1995-01-01
During recent years signature analysis, classification, and modeling of Synthetic Aperture Radar (SAR) data as well as estimation of geophysical parameters from SAR data have received a great deal of interest. An important requirement for the quantitative use of SAR data is the accurate estimation of the backscattering coefficient sigma(exp 0). In terrain with relief variations radar signals are distorted due to the projection of the scene topography into the slant range-Doppler plane. The effect of these variations is to change the physical size of the scattering area, leading to errors in the radar backscatter values and incidence angle. For this reason the local incidence angle, derived from sensor position and Digital Elevation Model (DEM) data, must always be considered. Especially in the airborne case, the antenna gain pattern can be an additional source of radiometric error, because the radar look angle is not known precisely as a result of the aircraft motions and the local surface topography. Consequently, radiometric distortions due to the antenna gain pattern must also be corrected for each resolution cell, by taking into account aircraft displacements (position and attitude) and the position of the backscatter element, defined by the DEM data. In this paper, a method to derive an accurate estimation of the backscattering coefficient using NASA/JPL AIRSAR data is presented. The results are evaluated in terms of geometric accuracy, radiometric variations of sigma(exp 0), and precision of the estimated forest biomass.
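The scattering-area effect described above is often handled with a simple sine-ratio area normalization. A minimal sketch of that geometric part only (the paper's method additionally corrects the antenna gain pattern per resolution cell from aircraft attitude and DEM data, which is not modeled here; the reference angle is an assumption):

```python
import math

def terrain_correct_sigma0(sigma0, theta_loc_deg, theta_ref_deg=45.0):
    """Area-based radiometric terrain correction:
    sigma0_corr = sigma0 * sin(theta_loc) / sin(theta_ref).
    theta_loc is the DEM-derived local incidence angle; theta_ref is an
    assumed reference incidence angle."""
    return sigma0 * math.sin(math.radians(theta_loc_deg)) / math.sin(math.radians(theta_ref_deg))

# A foreslope pixel (small local incidence angle) appears over-bright;
# the correction scales its backscatter down.
s = terrain_correct_sigma0(0.20, theta_loc_deg=25.0)
print(round(s, 4))
```

Without this step, slope-induced brightness variations would leak directly into any biomass estimate derived from sigma(exp 0).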
Highly Accurate Inverse Consistent Registration: A Robust Approach
Reuter, Martin; Rosas, H. Diana; Fischl, Bruce
2010-01-01
The registration of images is a task that is at the core of many applications in computer vision. In computational neuroimaging where the automated segmentation of brain structures is frequently used to quantify change, a highly accurate registration is necessary for motion correction of images taken in the same session, or across time in longitudinal studies where changes in the images can be expected. This paper, inspired by Nestares and Heeger (2000), presents a method based on robust statistics to register images in the presence of differences, such as jaw movement, differential MR distortions and true anatomical change. The approach we present guarantees inverse consistency (symmetry), can deal with different intensity scales and automatically estimates a sensitivity parameter to detect outlier regions in the images. The resulting registrations are highly accurate due to their ability to ignore outlier regions and show superior robustness with respect to noise, to intensity scaling and outliers when compared to state-of-the-art registration tools such as FLIRT (in FSL) or the coregistration tool in SPM. PMID:20637289
Accurate phylogenetic classification of DNA fragments based on sequence composition
McHardy, Alice C.; Garcia Martin, Hector; Tsirigos, Aristotelis; Hugenholtz, Philip; Rigoutsos, Isidore
2006-05-01
Metagenome studies have retrieved vast amounts of sequence out of a variety of environments, leading to novel discoveries and great insights into the uncultured microbial world. Except for very simple communities, diversity makes sequence assembly and analysis a very challenging problem. To understand the structure and function of microbial communities, a taxonomic characterization of the obtained sequence fragments is highly desirable, yet currently limited mostly to those sequences that contain phylogenetic marker genes. We show that for clades at the rank of domain down to genus, sequence composition allows the very accurate phylogenetic characterization of genomic sequence. We developed a composition-based classifier, PhyloPythia, for de novo phylogenetic sequence characterization and have trained it on a data set of 340 genomes. By extensive evaluation experiments we show that the method is accurate across all taxonomic ranks considered, even for sequences that originate from novel organisms and are as short as 1 kb. Application to two metagenome data sets obtained from samples of phosphorus-removing sludge showed that the method allows the accurate classification at genus level of most sequence fragments from the dominant populations, while at the same time correctly characterizing even larger parts of the samples at higher taxonomic levels.
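Composition-based classification of the kind described here rests on k-mer frequency profiles of the sequence fragments. A minimal sketch of such a feature vector follows; the choice k = 4 and the function name are assumptions for illustration, and PhyloPythia's actual features and classifier are considerably more elaborate:

```python
from collections import Counter
from itertools import product

def kmer_profile(seq, k=4):
    """Normalized k-mer frequency vector over the ACGT alphabet.

    Returns a list of length 4**k giving the relative frequency of each
    k-mer in a fixed lexicographic order -- the kind of composition
    feature a classifier can be trained on.
    """
    kmers = [''.join(p) for p in product('ACGT', repeat=k)]
    counts = Counter(seq[i:i + k] for i in range(len(seq) - k + 1))
    # Normalize over valid ACGT k-mers only (windows with other characters
    # are simply ignored); guard against empty input.
    total = max(sum(counts[m] for m in kmers), 1)
    return [counts[m] / total for m in kmers]
```

A downstream classifier would then be trained on such vectors computed from fragments of known genomes.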
Isomerism of Cyanomethanimine: Accurate Structural, Energetic, and Spectroscopic Characterization.
Puzzarini, Cristina
2015-11-25
The structures, relative stabilities, and rotational and vibrational parameters of the Z-C-, E-C-, and N-cyanomethanimine isomers have been evaluated using state-of-the-art quantum-chemical approaches. Equilibrium geometries have been calculated by means of a composite scheme based on coupled-cluster calculations that accounts for the extrapolation to the complete basis set limit and core-correlation effects. The latter approach is proved to provide molecular structures with an accuracy of 0.001-0.002 Å and 0.05-0.1° for bond lengths and angles, respectively. Systematically extrapolated ab initio energies, accounting for electron correlation through coupled-cluster theory, including up to single, double, triple, and quadruple excitations, and corrected for core-electron correlation and anharmonic zero-point vibrational energy, have been used to accurately determine relative energies and the Z-E isomerization barrier with an accuracy of about 1 kJ/mol. Vibrational and rotational spectroscopic parameters have been investigated by means of hybrid schemes that allow us to obtain rotational constants accurate to about a few megahertz and vibrational frequencies with a mean absolute error of ∼1%. Where available, for all properties considered, a very good agreement with experimental data has been observed.
Aberration corrected emittance exchange
NASA Astrophysics Data System (ADS)
Nanni, E. A.; Graves, W. S.
2015-08-01
Full exploitation of emittance exchange (EEX) requires aberration-free performance of a complex imaging system including active radio-frequency (rf) elements which can add temporal distortions. We investigate the performance of an EEX line where the exchange occurs between two dimensions with normalized emittances which differ by multiple orders of magnitude. The transverse emittance is exchanged into the longitudinal dimension using a double dogleg emittance exchange setup with a five cell rf deflector cavity. Aberration correction is performed on the four most dominant aberrations. These include temporal aberrations that are corrected with higher order magnetic optical elements located where longitudinal and transverse emittance are coupled. We demonstrate aberration-free performance of an EEX line with emittances differing by four orders of magnitude, i.e., an initial transverse emittance of 1 pm-rad is exchanged with a longitudinal emittance of 10 nm-rad.
Wang, Sou-Tien
1994-11-01
A wire cable assembly (10, 310) adapted for the winding of electrical coils is taught. A primary intended use is in particle tube assemblies (532) for the superconducting super collider. The correction coil cables (10, 310) have wires (14, 314) collected in wire arrays (12, 312) with a center rib (16, 316) sandwiched therebetween to form a core assembly (18, 318). The core assembly (18, 318) is surrounded by an assembly housing (20, 320) having an inner spiral wrap (22, 322) and a counter-wound outer spiral wrap (24, 324). An alternate embodiment (410) of the invention is rolled into a keystoned shape to improve radial alignment of the correction coil cable (410) on a particle tube (733) in a particle tube assembly (732).
Surgical correction of brachymetatarsia.
Bartolomei, F J
1990-02-01
Brachymetatarsia describes the condition of an abnormally short metatarsal. Although the condition has been recorded since antiquity, surgical options to correct the deformity have been available for only two decades. Most published procedures involve metaphyseal lengthening with autogenous grafts from different donor sites. The author discusses one such surgical technique. In addition, the author proposes specific criteria for the objective diagnosis of brachymetatarsia. PMID:2406417
The FLUKA Code: An Accurate Simulation Tool for Particle Therapy
Battistoni, Giuseppe; Bauer, Julia; Boehlen, Till T.; Cerutti, Francesco; Chin, Mary P. W.; Dos Santos Augusto, Ricardo; Ferrari, Alfredo; Ortega, Pablo G.; Kozłowska, Wioletta; Magro, Giuseppe; Mairani, Andrea; Parodi, Katia; Sala, Paola R.; Schoofs, Philippe; Tessonnier, Thomas; Vlachoudis, Vasilis
2016-01-01
Monte Carlo (MC) codes are increasingly widespread in the hadrontherapy community due to their detailed description of radiation transport and interaction with matter. The suitability of an MC code for application to hadrontherapy demands accurate and reliable physical models capable of handling all components of the expected radiation field. This becomes extremely important for correctly performing not only physical but also biologically based dose calculations, especially in cases where ions heavier than protons are involved. In addition, accurate prediction of emerging secondary radiation is of utmost importance in innovative areas of research aiming at in vivo treatment verification. This contribution addresses the recent developments of the FLUKA MC code and its practical applications in this field. Refinements of the FLUKA nuclear models in the therapeutic energy interval lead to an improved description of the mixed radiation field, as shown in the presented benchmarks against experimental data with both 4He and 12C ion beams. Accurate description of ionization energy losses and of particle scattering and interactions leads to excellent agreement of calculated depth-dose profiles with those measured at leading European hadron therapy centers, with both proton and ion beams. In order to support the application of FLUKA in hospital-based environments, Flair, the FLUKA graphical interface, has been enhanced with the capability of translating CT DICOM images into voxel-based computational phantoms in a fast and well-structured way. The interface is also capable of importing radiotherapy treatment data described in the DICOM RT standard. In addition, the interface is equipped with an intuitive PET scanner geometry generator and automatic recording of coincidence events. Clinically similar cases will be presented in terms of both absorbed dose and biological dose calculations, describing the various available features. PMID:27242956
Interventions to Correct Misinformation About Tobacco Products
Cappella, Joseph N.; Maloney, Erin; Ophir, Yotam; Brennan, Emily
2016-01-01
In 2006, the U.S. District Court held that tobacco companies had "falsely and fraudulently" denied that: tobacco causes lung cancer; environmental smoke endangers children's respiratory systems; nicotine is highly addictive; low-tar cigarettes were marketed as less harmful when they were not; they marketed to children; they manipulated nicotine delivery to enhance addiction; and they concealed and destroyed evidence to prevent accurate public knowledge. The court required the tobacco companies to repair this misinformation. Several studies have evaluated types of corrective statements (CS). We argue that most proposed CSs ("simple CSs") will fall prey to "belief echoes," correcting underlying knowledge while leaving affective remnants of the misinformation untouched. Alternative forms of CS ("enhanced CSs") are proposed that include narrative forms, causal linkage, and emotional links to the receiver. PMID:27135046
FIELD CORRECTION FACTORS FOR PERSONAL NEUTRON DOSEMETERS.
Luszik-Bhadra, M
2016-09-01
A field-dependent correction factor can be obtained by comparing the readings of two albedo neutron dosemeters fixed in opposite directions on a polyethylene sphere to the H*(10) reading as determined with a thermal neutron detector in the centre of the same sphere. The work shows that the field calibration technique as used for albedo neutron dosemeters can be generalised to all kinds of dosemeters, since H*(10) is a conservative estimate of the sum of the personal dose equivalents Hp(10) in two opposite directions. This result is drawn from reference values as determined by spectrometers within the EVIDOS project at workplaces of nuclear installations in Europe. More accurate field-dependent correction factors can be achieved by the analysis of several personal dosemeters on a phantom, but reliable angular responses of these dosemeters need to be taken into account. PMID:26493946
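The field calibration described here reduces to a simple ratio of the reference reading to the summed opposite-facing dosemeter readings. A minimal sketch, with the function and argument names assumed for illustration only:

```python
def field_correction_factor(h_star_10, hp_front, hp_back):
    """Field-dependent correction factor for a pair of dosemeters.

    h_star_10 : ambient dose equivalent H*(10) measured at the centre
                of the polyethylene sphere
    hp_front, hp_back : readings of the two dosemeters fixed in opposite
                        directions on the sphere

    Since H*(10) conservatively estimates the sum of the two personal
    dose equivalents, this ratio calibrates the dosemeter pair for the
    specific workplace field.
    """
    return h_star_10 / (hp_front + hp_back)
```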
Refining atmospheric correction for aquatic remote spectroscopy
NASA Astrophysics Data System (ADS)
Thompson, D. R.; Guild, L. S.; Negrey, K.; Kudela, R. M.; Palacios, S. L.; Gao, B. C.; Green, R. O.
2015-12-01
Remote spectroscopic investigations of aquatic ecosystems typically measure radiance at high spectral resolution and then correct these data for atmospheric effects to estimate Remote Sensing Reflectance (Rrs) at the surface. These reflectance spectra reveal phytoplankton absorption and scattering features, enabling accurate retrieval of traditional remote sensing parameters, such as chlorophyll-a, and new retrievals of additional parameters, such as phytoplankton functional type. Future missions will significantly expand coverage of these datasets with airborne campaigns (CORAL, ORCAS, and the HyspIRI Preparatory Campaign) and orbital instruments (EnMAP, HyspIRI). Remote characterization of phytoplankton can be influenced by errors in atmospheric correction due to uncertain atmospheric constituents such as aerosols. The "empirical line method" is an expedient solution that estimates a linear relationship between observed radiances and in-situ reflectance measurements. While this approach is common for terrestrial data, there are few examples involving aquatic scenes. Aquatic scenes are challenging due to the difficulty of acquiring in situ measurements from open water; with only a handful of reference spectra, the resulting corrections may not be stable. Here we present a brief overview of methods for atmospheric correction, and describe ongoing experiments on empirical line adjustment with AVIRIS overflights of Monterey Bay from the 2013-2014 HyspIRI preparatory campaign. We present new methods, based on generalized Tikhonov regularization, to improve stability and performance when few reference spectra are available. Copyright 2015 California Institute of Technology. All Rights Reserved. US Government Support Acknowledged.
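The regularized empirical line fit discussed in this abstract can be sketched, per spectral band, as a ridge-penalized linear fit of in-situ reflectance against observed radiance. This is a hedged illustration: the function name, the scalar penalty `lam`, and the prior `(a0, b0)` are assumptions of this sketch, not the authors' generalized Tikhonov formulation:

```python
import numpy as np

def empirical_line_tikhonov(radiance, rrs_insitu, lam=1e-2, prior=(0.0, 0.0)):
    """Per-band gain/offset fit: rrs ≈ a * radiance + b.

    With only a handful of reference spectra the plain least-squares fit
    is unstable; a Tikhonov (ridge) penalty pulls (a, b) toward a prior
    estimate and stabilizes the solution.
    """
    A = np.column_stack([radiance, np.ones_like(radiance)])
    y = np.asarray(rrs_insitu, dtype=float)
    x0 = np.asarray(prior, dtype=float)
    # Regularized normal equations: (A^T A + lam I) x = A^T y + lam x0
    lhs = A.T @ A + lam * np.eye(2)
    rhs = A.T @ y + lam * x0
    a, b = np.linalg.solve(lhs, rhs)
    return a, b
```

With many samples or a vanishing penalty the fit reduces to ordinary least squares; with few samples the prior dominates, keeping the correction stable.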
Exemplar-based human action pose correction.
Shen, Wei; Deng, Ke; Bai, Xiang; Leyvand, Tommer; Guo, Baining; Tu, Zhuowen
2014-07-01
The launch of Xbox Kinect has built a very successful computer vision product and made a big impact on the gaming industry. This sheds light on a wide variety of potential applications related to action recognition. The accurate estimation of human poses from the depth image is a universally critical step. However, existing pose estimation systems exhibit failures when facing severe occlusion. In this paper, we propose an exemplar-based method to learn to correct the initially estimated poses. We learn an inhomogeneous systematic bias by leveraging the exemplar information within a specific human action domain. Furthermore, as an extension, we learn a conditional model by incorporation of pose tags to further increase the accuracy of pose correction. In the experiments, significant improvements on both joint-based skeleton correction and tag prediction are observed over the contemporary approaches, including what is delivered by the current Kinect system. Our experiments on facial landmark correction also illustrate that our algorithm can improve the accuracy of other detection/estimation systems. PMID:24058046
Quantum-electrodynamics corrections in pionic hydrogen
Schlesser, S.; Le Bigot, E.-O.; Indelicato, P.; Pachucki, K.
2011-07-15
We investigate all pure quantum-electrodynamics corrections to the np → 1s, n = 2-4 transition energies of pionic hydrogen larger than 1 meV, which requires an accurate evaluation of all relevant contributions up to order α⁵. These values are needed to extract an accurate strong interaction shift from experiment. Many small effects, such as the second-order and double vacuum polarization contributions, proton and pion self-energies, finite size and recoil effects, are included with exact mass dependence. Our final value differs from previous calculations by up to ≈11 ppm for the 1s state, while a recent experiment aims at a 4 ppm accuracy.
ERIC Educational Resources Information Center
Badami, Rokhsareh; VaezMousavi, Mohammad; Wulf, Gabriele; Namazizadeh, Mahdi
2012-06-01
One purpose of the present study was to examine whether self-confidence or anxiety would be differentially affected by feedback from more accurate rather than less accurate trials. The second purpose was to determine whether arousal variations (activation) would predict performance. On Day 1, participants performed a golf putting task under one of two conditions: one group received feedback on the most accurate trials, whereas another group received feedback on the least accurate trials. On Day 2, participants completed an anxiety questionnaire and performed a retention test. Skin conductance level, as a measure of arousal, was determined. The results indicated that feedback about more accurate trials resulted in more effective learning as well as increased self-confidence. Also, activation was a predictor of performance. PMID:22808705
[An Algorithm for Correcting Fetal Heart Rate Baseline].
Li, Xiaodong; Lu, Yaosheng
2015-10-01
Fetal heart rate (FHR) baseline estimation is important for the computerized analysis of the FHR and the assessment of fetal state. In this work, an FHR baseline correction algorithm is presented to make an existing baseline more accurate and better fitted to the tracings. First, deviations of the existing FHR baseline are identified and corrected; a new baseline is then obtained after smoothing. To assess the performance of the baseline correction algorithm, a new FHR baseline estimation algorithm that combines a baseline estimation algorithm with the baseline correction algorithm was compared with two existing FHR baseline estimation algorithms. The results showed that the new estimation algorithm performed well in both accuracy and efficiency, and they also demonstrated the effectiveness of the FHR baseline correction algorithm.
Two highly accurate methods for pitch calibration
NASA Astrophysics Data System (ADS)
Kniel, K.; Härtig, F.; Osawa, S.; Sato, O.
2009-11-01
Among profile, helix, and tooth thickness, pitch is one of the most important parameters in involute gear measurement evaluation. In principle, coordinate measuring machines (CMMs) and CNC-controlled gear measuring machines, as a variant of the CMM, are suited for these kinds of gear measurements. The Japan National Institute of Advanced Industrial Science and Technology (NMIJ/AIST) and the German national metrology institute, the Physikalisch-Technische Bundesanstalt (PTB), have each independently developed highly accurate pitch calibration methods applicable to CMMs or gear measuring machines. Both calibration methods are based on the so-called closure technique, which allows the separation of the systematic errors of the measurement device from the errors of the gear. For the verification of both calibration methods, NMIJ/AIST and PTB performed measurements on a specially designed pitch artifact. The comparison of the results shows that both methods can be used for highly accurate calibrations of pitch standards.
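The closure technique mentioned here can be illustrated with a toy multi-position measurement. This is a minimal sketch under an assumed error model (readings[i, j] = gear[(i + j) mod n] + instrument[j], with pitch deviations summing to zero over one full revolution); the model and function name are illustrative, not NMIJ/AIST's or PTB's actual procedure:

```python
import numpy as np

def closure_separation(readings):
    """Separate instrument errors from gear pitch deviations.

    readings[i, j] : deviation measured at instrument position j with the
                     gear rotated by i teeth (n x n matrix for n teeth).
    Because the gear's pitch deviations close (sum to zero) over one
    revolution, averaging over all rotations cancels the gear term at
    each instrument position, and realigning by rotation recovers the
    gear term at each tooth.
    """
    r = np.asarray(readings, dtype=float)
    n = r.shape[0]
    instrument = r.mean(axis=0)            # gear term averages to zero
    gear = np.empty(n)
    for k in range(n):
        # All readings that observed tooth k, across rotations
        gear[k] = np.mean([r[i, (k - i) % n] for i in range(n)])
    gear -= r.mean()                       # remove the instrument mean
    return gear, instrument
```

Under this model the separation is exact, which is what makes closure-based calibration attractive: the artifact need not be perfect, only stable.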
Accurate guitar tuning by cochlear implant musicians.
Lu, Thomas; Huang, Juan; Zeng, Fan-Gang
2014-01-01
Modern cochlear implant (CI) users understand speech but find difficulty in music appreciation due to poor pitch perception. Still, some deaf musicians continue to perform with their CI. Here we show unexpected results that CI musicians can reliably tune a guitar by CI alone and, under controlled conditions, match simultaneously presented tones to <0.5 Hz. One subject had normal contralateral hearing and produced more accurate tuning with CI than his normal ear. To understand these counterintuitive findings, we presented tones sequentially and found that tuning error was larger at ∼ 30 Hz for both subjects. A third subject, a non-musician CI user with normal contralateral hearing, showed similar trends in performance between CI and normal hearing ears but with less precision. This difference, along with electric analysis, showed that accurate tuning was achieved by listening to beats rather than discriminating pitch, effectively turning a spectral task into a temporal discrimination task. PMID:24651081
Preparation and accurate measurement of pure ozone.
Janssen, Christof; Simone, Daniela; Guinet, Mickaël
2011-03-01
Preparation of high purity ozone as well as precise and accurate measurement of its pressure are metrological requirements that are difficult to meet due to ozone decomposition occurring in pressure sensors. The most stable and precise transducer heads are heated and, therefore, prone to accelerated ozone decomposition, limiting measurement accuracy and compromising purity. Here, we describe a vacuum system and a method for ozone production, suitable to accurately determine the pressure of pure ozone by avoiding the problem of decomposition. We use an inert gas in a particularly designed buffer volume and can thus achieve high measurement accuracy and negligible degradation of ozone with purities of 99.8% or better. The high degree of purity is ensured by comprehensive compositional analyses of ozone samples. The method may also be applied to other reactive gases. PMID:21456766
Accurate modeling of parallel scientific computations
NASA Technical Reports Server (NTRS)
Nicol, David M.; Townsend, James C.
1988-01-01
Scientific codes are usually parallelized by partitioning a grid among processors. To achieve top performance it is necessary to partition the grid so as to balance workload and minimize communication/synchronization costs. This problem is particularly acute when the grid is irregular, changes over the course of the computation, and is not known until load time. Critical mapping and remapping decisions rest on the ability to accurately predict performance, given a description of a grid and its partition. This paper discusses one approach to this problem, and illustrates its use on a one-dimensional fluids code. The models constructed are shown to be accurate, and are used to find optimal remapping schedules.
Line gas sampling system ensures accurate analysis
Not Available
1992-06-01
Tremendous changes in the natural gas business have resulted in new approaches to the way natural gas is measured. Electronic flow measurement has altered the business forever, with developments in instrumentation and a new sensitivity to the importance of proper natural gas sampling techniques. This paper reports that YZ Industries Inc., Snyder, Texas, combined its 40 years of sampling experience with the latest in microprocessor-based technology to develop the KynaPak 2000 series, the first on-line natural gas sampling system that is both compact and extremely accurate. Accuracy requires that the composition of the sampled gas be representative of the whole and related to flow. When it is, measurement and sampling techniques are married, gas volumes are accurately accounted for, and adjustments to composition can be made.
Accurate mask model for advanced nodes
NASA Astrophysics Data System (ADS)
Zine El Abidine, Nacer; Sundermann, Frank; Yesilada, Emek; Ndiaye, El Hadji Omar; Mishra, Kushlendra; Paninjath, Sankaranarayanan; Bork, Ingo; Buck, Peter; Toublan, Olivier; Schanen, Isabelle
2014-07-01
Standard OPC models consist of a physical optical model and an empirical resist model. The resist model compensates for the optical model's imprecision on top of modeling resist development. The optical model imprecision may result from mask topography effects and real mask information, including mask e-beam writing and mask process contributions. For advanced technology nodes, significant progress has been made in modeling mask topography to improve optical model accuracy. However, mask information is difficult to decorrelate from the standard OPC model. Our goal is to establish an accurate mask model through a dedicated calibration exercise. In this paper, we present a flow to calibrate an accurate mask model and enable its implementation. The study covers the different effects that should be embedded in the mask model as well as the experiments required to model them.
Accurate maser positions for MALT-45
NASA Astrophysics Data System (ADS)
Jordan, Christopher; Bains, Indra; Voronkov, Maxim; Lo, Nadia; Jones, Paul; Muller, Erik; Cunningham, Maria; Burton, Michael; Brooks, Kate; Green, James; Fuller, Gary; Barnes, Peter; Ellingsen, Simon; Urquhart, James; Morgan, Larry; Rowell, Gavin; Walsh, Andrew; Loenen, Edo; Baan, Willem; Hill, Tracey; Purcell, Cormac; Breen, Shari; Peretto, Nicolas; Jackson, James; Lowe, Vicki; Longmore, Steven
2013-10-01
MALT-45 is an untargeted survey, mapping the Galactic plane in CS (1-0), Class I methanol masers, SiO masers and thermal emission, and high frequency continuum emission. After obtaining images from the survey, a number of masers were detected, but without accurate positions. This project seeks to resolve each maser and its environment, with the ultimate goal of placing the Class I methanol maser into a timeline of high mass star formation.
Alves, Gelio; Wang, Guanghui; Ogurtsov, Aleksey Y; Drake, Steven K; Gucek, Marjan; Suffredini, Anthony F; Sacks, David B; Yu, Yi-Kuo
2016-02-01
Correct and rapid identification of microorganisms is the key to the success of many important applications in health and safety, including, but not limited to, infection treatment, food safety, and biodefense. With the advance of mass spectrometry (MS) technology, the speed of identification can be greatly improved. However, the increasing number of microbes sequenced is challenging correct microbial identification because of the large number of choices present. To properly disentangle candidate microbes, one needs to go beyond apparent morphology or simple 'fingerprinting'; to correctly prioritize the candidate microbes, one needs to have accurate statistical significance in microbial identification. We meet these challenges by using peptidome profiles of microbes to better separate them and by designing an analysis method that yields accurate statistical significance. Here, we present an analysis pipeline that uses tandem MS (MS/MS) spectra for microbial identification or classification. We have demonstrated, using MS/MS data of 81 samples, each composed of a single known microorganism, that the proposed pipeline can correctly identify microorganisms at least at the genus and species levels. We have also shown that the proposed pipeline computes accurate statistical significances, i.e., E-values for identified peptides and unified E-values for identified microorganisms. The proposed analysis pipeline has been implemented in MiCId, a freely available software for Microorganism Classification and Identification. MiCId is available for download at http://www.ncbi.nlm.nih.gov/CBBresearch/Yu/downloads.html.
Accurate Molecular Polarizabilities Based on Continuum Electrostatics
Truchon, Jean-François; Nicholls, Anthony; Iftimie, Radu I.; Roux, Benoît; Bayly, Christopher I.
2013-01-01
A novel approach for representing the intramolecular polarizability as a continuum dielectric is introduced to account for molecular electronic polarization. It is shown, using a finite-difference solution to the Poisson equation, that the Electronic Polarization from Internal Continuum (EPIC) model yields accurate gas-phase molecular polarizability tensors for a test set of 98 challenging molecules composed of heteroaromatics, alkanes and diatomics. The electronic polarization originates from a high intramolecular dielectric that produces polarizabilities consistent with B3LYP/aug-cc-pVTZ and experimental values when surrounded by vacuum dielectric. In contrast to other approaches to model electronic polarization, this simple model avoids the polarizability catastrophe and accurately calculates molecular anisotropy with the use of very few fitted parameters and without resorting to auxiliary sites or anisotropic atomic centers. On average, the unsigned errors in the average polarizability and anisotropy compared to B3LYP are 2% and 5%, respectively. The correlation between the polarizability components from B3LYP and this approach leads to an R2 of 0.990 and a slope of 0.999. Even the F2 anisotropy, shown to be a difficult case for existing polarizability models, can be reproduced within 2% error. In addition to providing new parameters for a rapid method directly applicable to the calculation of polarizabilities, this work extends the widely used Poisson equation to areas where accurate molecular polarizabilities matter. PMID:23646034
Accurate phase-shift velocimetry in rock.
Shukla, Matsyendra Nath; Vallatos, Antoine; Phoenix, Vernon R; Holmes, William M
2016-06-01
Spatially resolved Pulsed Field Gradient (PFG) velocimetry techniques can provide precious information concerning flow through opaque systems, including rocks. This velocimetry data is used to enhance flow models in a wide range of systems, from oil behaviour in reservoir rocks to contaminant transport in aquifers. Phase-shift velocimetry is the fastest way to produce velocity maps but critical issues have been reported when studying flow through rocks and porous media, leading to inaccurate results. Combining PFG measurements for flow through Bentheimer sandstone with simulations, we demonstrate that asymmetries in the molecular displacement distributions within each voxel are the main source of phase-shift velocimetry errors. We show that when flow-related average molecular displacements are negligible compared to self-diffusion ones, symmetric displacement distributions can be obtained while phase measurement noise is minimised. We elaborate a complete method for the production of accurate phase-shift velocimetry maps in rocks and low porosity media and demonstrate its validity for a range of flow rates. This development of accurate phase-shift velocimetry now enables more rapid and accurate velocity analysis, potentially helping to inform both industrial applications and theoretical models. PMID:27111139
High Frequency QRS ECG Accurately Detects Cardiomyopathy
NASA Technical Reports Server (NTRS)
Schlegel, Todd T.; Arenare, Brian; Poulin, Gregory; Moser, Daniel R.; Delgado, Reynolds
2005-01-01
High frequency (HF, 150-250 Hz) analysis over the entire QRS interval of the ECG is more sensitive than conventional ECG for detecting myocardial ischemia. However, the accuracy of HF QRS ECG for detecting cardiomyopathy is unknown. We obtained simultaneous resting conventional and HF QRS 12-lead ECGs in 66 patients with cardiomyopathy (EF = 23.2 plus or minus 6.1%, mean plus or minus SD) and in 66 age- and gender-matched healthy controls using PC-based ECG software recently developed at NASA. The single most accurate ECG parameter for detecting cardiomyopathy was an HF QRS morphological score that takes into consideration the total number and severity of reduced amplitude zones (RAZs) present plus the clustering of RAZs together in contiguous leads. This RAZ score had an area under the receiver operating characteristic (ROC) curve of 0.91, and was 88% sensitive, 82% specific and 85% accurate for identifying cardiomyopathy at the optimum score cut-off of 140 points. Although conventional ECG parameters such as the QRS and QTc intervals were also significantly longer in patients than controls (P less than 0.001, BBBs excluded), these conventional parameters were less accurate (area under the ROC = 0.77 and 0.77, respectively) than HF QRS morphological parameters for identifying underlying cardiomyopathy. The total amplitude of the HF QRS complexes, as measured by summed root mean square voltages (RMSVs), also differed between patients and controls (33.8 plus or minus 11.5 vs. 41.5 plus or minus 13.6 mV, respectively, P less than 0.003), but this parameter was even less accurate in distinguishing the two groups (area under ROC = 0.67) than the HF QRS morphologic and conventional ECG parameters. Diagnostic accuracy was optimal (86%) when the RAZ score from the HF QRS ECG and the QTc interval from the conventional ECG were used simultaneously with cut-offs of greater than or equal to 40 points and greater than or equal to 445 ms, respectively. In conclusion 12-lead HF QRS ECG employing
NASA Technical Reports Server (NTRS)
Martin, D. R.; Smaulon, A. S.; Hamori, A. S.
1980-01-01
A processor architecture for performing onboard geometric and radiometric correction of LANDSAT imagery is described. The design uses a general purpose processor to calculate the distortion values at selected points in the image and a special purpose processor to resample (calculate distortion at each image point and interpolate the intensity) the sensor output data. A distinct special purpose processor is used for each spectral band. Because of the sensor's high output data rate, 80 Mbit per second, the special purpose processors use a pipeline architecture. Sizing has been done on both the general and special purpose hardware.
Fisher Transformations for Correlations Corrected for Selection and Missing Data.
ERIC Educational Resources Information Center
Mendoza, Jorge L.
1993-01-01
A Fisher's Z transformation is developed for the corrected correlation under conditions where the criterion data are missing because of selection on the predictor and where the criterion was missing at random, not because of selection. The two Z transformations were evaluated in a computer simulation and found to be accurate. (SLD)
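The uncorrected transformation that this work extends is simple to state; a minimal sketch in Python (the selection-corrected variants developed in the report are not reproduced here):

```python
import math

def fisher_z(r):
    """Classical Fisher Z transformation of a correlation coefficient.

    Maps r in (-1, 1) onto the real line, where the sampling
    distribution is approximately normal. Equivalent to atanh(r).
    """
    return 0.5 * math.log((1.0 + r) / (1.0 - r))

def inverse_fisher_z(z):
    """Back-transform: tanh(z) recovers the correlation."""
    return math.tanh(z)

z = fisher_z(0.5)
print(round(z, 4))                    # 0.5493
print(round(inverse_fisher_z(z), 4))  # 0.5
```

Because the back-transform recovers r exactly, confidence intervals built on the (approximately normal) z scale can be mapped back to the correlation scale.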
NASA Astrophysics Data System (ADS)
Hendrikse, Anne; Veldhuis, Raymond; Spreeuwers, Luuk
2013-12-01
Second-order statistics play an important role in data modeling. Nowadays, there is a tendency toward measuring more signals with higher resolution (e.g., high-resolution video), causing a rapid increase of dimensionality of the measured samples, while the number of samples remains more or less the same. As a result the eigenvalue estimates are significantly biased as described by the Marčenko–Pastur equation for the limit of both the number of samples and their dimensionality going to infinity. By introducing a smoothness factor, we show that the Marčenko–Pastur equation can be used in practical situations where both the number of samples and their dimensionality remain finite. Based on this result we derive methods, one already known and one new to our knowledge, to estimate the sample eigenvalues when the population eigenvalues are known. However, usually the sample eigenvalues are known and the population eigenvalues are required. We therefore applied one of these methods in a feedback loop, resulting in an eigenvalue bias correction method. We compare this eigenvalue correction method with the state-of-the-art methods and show that our method outperforms other methods, particularly in real-life situations often encountered in biometrics: underdetermined configurations, high-dimensional configurations, and configurations where the eigenvalues are exponentially distributed.
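To illustrate the bias being corrected (a hypothetical demonstration, not the authors' method): for white Gaussian data every population eigenvalue equals 1, yet the sample eigenvalues spread across the Marčenko–Pastur support [(1−√γ)², (1+√γ)²], where γ is the ratio of dimensionality to sample count.

```python
import numpy as np

rng = np.random.default_rng(0)
p, n = 200, 800                      # dimensionality, number of samples
gamma = p / n                        # aspect ratio driving the bias

# Population covariance is the identity: all population eigenvalues are 1.
X = rng.standard_normal((n, p))
sample_eigs = np.linalg.eigvalsh(X.T @ X / n)

# Marcenko-Pastur support edges for an identity population covariance.
lo = (1 - np.sqrt(gamma)) ** 2       # 0.25 for gamma = 0.25
hi = (1 + np.sqrt(gamma)) ** 2       # 2.25 for gamma = 0.25

print(sample_eigs.min(), sample_eigs.max())  # spread well away from 1.0
```

With γ = 0.25 the sample eigenvalues range roughly from 0.25 to 2.25 even though every population eigenvalue is exactly 1; undoing this spread is what a feedback-loop bias correction has to accomplish.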
Complications of auricular correction
Staindl, Otto; Siedek, Vanessa
2008-01-01
The risk of complications of auricular correction is underestimated. There is around a 5% risk of early complications (haematoma, infection, fistulae caused by stitches and granulomas, allergic reactions, pressure ulcers, feelings of pain and asymmetry in side comparison) and a 20% risk of late complications (recurrences, telephone ear, excessive edge formation, auricle fitting too closely, narrowing of the auditory canal, keloids and complete collapse of the ear). Deformities are evaluated less critically by patients than by the surgeons, provided they do not concern how the ear is positioned. The causes of complications and deformities are, in the vast majority of cases, incorrect diagnosis and wrong choice of operating procedure. The choice of operating procedure must be adapted to suit the individual ear morphology. Bandaging technique and inspections and, if necessary, early revision are of great importance for the occurrence and progress of early complications, in addition to operation techniques. In cases of late complications such as keloids and auricles that are too closely fitting, unfixed full-thickness skin flaps have proved to be the most successful. Large deformities can often only be corrected to a limited degree of satisfaction. PMID:22073079
Contact Lenses for Vision Correction
Written by: Kierstan Boyd. Reviewed by: Brenda ... on the surface of the eye. They correct vision like eyeglasses do and are safe when used ...
Accurate verification of the conserved-vector-current and standard-model predictions
Sirlin, A.; Zucchini, R.
1986-10-20
An approximate analytic calculation of O(Zα²) corrections to Fermi decays is presented. When the analysis of Koslowsky et al. is modified to take into account the new results, it is found that each of the eight accurately studied ℱt values differs from the average by ≲1σ, thus significantly improving the comparison of experiments with conserved-vector-current predictions. The new ℱt values are lower than before, which also brings experiments into very good agreement with the three-generation standard model, at the level of its quantum corrections.
Accurately Mapping M31's Microlensing Population
NASA Astrophysics Data System (ADS)
Crotts, Arlin
2004-07-01
We propose to augment an existing microlensing survey of M31 with source identifications provided by a modest amount of ACS {and WFPC2 parallel} observations to yield an accurate measurement of the masses responsible for microlensing in M31, and presumably much of its dark matter. The main benefit of these data is the determination of the physical {or "Einstein"} timescale of each microlensing event, rather than an effective {"FWHM"} timescale, allowing masses to be determined more than twice as accurately as without HST data. The Einstein timescale is the ratio of the lensing cross-sectional radius and relative velocities. Velocities are known from kinematics, and the cross-section is directly proportional to the {unknown} lensing mass. We cannot easily measure these quantities without knowing the amplification, hence the baseline magnitude, which requires the resolution of HST to find the source star. This makes a crucial difference because M31 lens mass determinations can be more accurate than those towards the Magellanic Clouds through our Galaxy's halo {for the same number of microlensing events} due to the better constrained geometry in the M31 microlensing situation. Furthermore, our larger survey, just completed, should yield at least 100 M31 microlensing events, more than any Magellanic survey. A small amount of ACS+WFPC2 imaging will deliver the potential of this large database {about 350 nights}. For the whole survey {and a delta-function mass distribution} the mass error should approach only about 15%, or about 6% error in slope for a power-law distribution. These results will better allow us to pinpoint the lens halo fraction, and the shape of the halo lens spatial distribution, and allow generalization/comparison of the nature of halo dark matter in spiral galaxies. In addition, we will be able to establish the baseline magnitude for about 50,000 variable stars, as well as measure an unprecedentedly detailed color-magnitude diagram and luminosity
The first accurate description of an aurora
NASA Astrophysics Data System (ADS)
Schröder, Wilfried
2006-12-01
As technology has advanced, the scientific study of auroral phenomena has increased by leaps and bounds. A look back at the earliest descriptions of aurorae offers an interesting look into how medieval scholars viewed the subjects that we study. Although there are earlier fragmentary references in the literature, the first accurate description of the aurora borealis appears to be that published by the German Catholic scholar Konrad von Megenberg (1309-1374) in his book Das Buch der Natur (The Book of Nature). The book was written between 1349 and 1350.
New law requires 'medically accurate' lesson plans.
1999-09-17
The California Legislature has passed a bill requiring all textbooks and materials used to teach about AIDS be medically accurate and objective. Statements made within the curriculum must be supported by research conducted in compliance with scientific methods, and published in peer-reviewed journals. Some of the current lesson plans were found to contain scientifically unsupported and biased information. In addition, the bill requires material to be "free of racial, ethnic, or gender biases." The legislation is supported by a wide range of interests, but opposed by the California Right to Life Education Fund, because they believe it discredits abstinence-only material.
Accurate density functional thermochemistry for larger molecules.
Raghavachari, K.; Stefanov, B. B.; Curtiss, L. A.; Lucent Tech.
1997-06-20
Density functional methods are combined with isodesmic bond separation reaction energies to yield accurate thermochemistry for larger molecules. Seven different density functionals are assessed for the evaluation of heats of formation, ΔH⁰(298 K), for a test set of 40 molecules composed of H, C, O and N. The use of bond separation energies results in a dramatic improvement in the accuracy of all the density functionals. The B3-LYP functional has the smallest mean absolute deviation from experiment (1.5 kcal mol⁻¹).
Universality: Accurate Checks in Dyson's Hierarchical Model
NASA Astrophysics Data System (ADS)
Godina, J. J.; Meurice, Y.; Oktay, M. B.
2003-06-01
In this talk we present high-accuracy calculations of the susceptibility near βc for Dyson's hierarchical model in D = 3. Using linear fitting, we estimate the leading (γ) and subleading (Δ) exponents. Independent estimates are obtained by calculating the first two eigenvalues of the linearized renormalization group transformation. We found γ = 1.29914073 ± 10⁻⁸ and Δ = 0.4259469 ± 10⁻⁷, independently of the choice of local integration measure (Ising or Landau-Ginzburg). After a suitable rescaling, the approximate fixed points for a large class of local measures coincide accurately with a fixed point constructed by Koch and Wittwer.
Misra, Satyajeet; Sinha, Prabhat K; Koshy, Thomas; Sandhyamani, Samavedam; Parija, Chandrabhanu; Gopal, Kirun
2009-11-01
Angiolipoma (angiolipohamartoma) of the tricuspid valve (TV) is a rare tumor which may be occasionally misdiagnosed as right atrial (RA) myxoma. Transesophageal echocardiography (TEE) provides accurate information regarding the size, shape, mobility as well as site of attachment of RA tumors and is a superior modality as compared to transthoracic echocardiography (TTE). Correct diagnosis of RA tumors has therapeutic significance and guides management of patients, as myxomas are generally more aggressively managed than lipomas. We describe a rare case of a pedunculated angiolipoma of the TV which was misdiagnosed as RA myxoma on TTE and discuss the echocardiographic-pathologic correlates of the tumor as well as its accurate localization by TEE.
Radiation camera motion correction system
Hoffer, P.B.
1973-12-18
The device determines the ratio of the intensity of radiation received by a radiation camera from two separate portions of the object. A correction signal is developed to maintain this ratio at a substantially constant value and this correction signal is combined with the camera signal to correct for object motion. (Official Gazette)
Political Correctness and Cultural Studies.
ERIC Educational Resources Information Center
Carey, James W.
1992-01-01
Discusses political correctness and cultural studies, dealing with cultural studies and the left, the conservative assault on cultural studies, and political correctness in the university. Describes some of the underlying changes in the university, largely unaddressed in the political correctness debate, that provide the deep structure to the…
Job Satisfaction in Correctional Officers.
ERIC Educational Resources Information Center
Diehl, Ron J.
For more than a decade, correctional leaders throughout the country have attempted to come to grips with the basic issues involved in ascertaining and meeting the needs of correctional institutions. This study investigated job satisfaction in 122 correctional officers employed in both rural and urban prison locations for the State of Kansas…
Yearbook of Correctional Education 1989.
ERIC Educational Resources Information Center
Duguid, Stephen, Ed.
This yearbook contains conference papers, commissioned papers, reprints of earlier works, and research-in-progress. They offer a retrospective view as well as address the mission and perspective of correctional education, its international dimension, correctional education in action, and current research. Papers include "Correctional Education and…
EDITORIAL: Politically correct physics?
NASA Astrophysics Data System (ADS)
Pople Deputy Editor, Stephen
1997-03-01
If you were a caring, thinking, liberally minded person in the 1960s, you marched against the bomb, against the Vietnam war, and for civil rights. By the 1980s, your voice was raised about the destruction of the rainforests and the threat to our whole planetary environment. At the same time, you opposed discrimination against any group because of race, sex or sexual orientation. You reasoned that people who spoke or acted in a discriminatory manner should be discriminated against. In other words, you became politically correct. Despite its oft-quoted excesses, the political correctness movement sprang from well-founded concerns about injustices in our society. So, on balance, I am all for it. Or, at least, I was until it started to invade science. Biologists were the first to feel the impact. No longer could they refer to 'higher' and 'lower' orders, or 'primitive' forms of life. To the list of undesirable 'isms' - sexism, racism, ageism - had been added a new one: speciesism. Chemists remained immune to the PC invasion, but what else could you expect from a group of people so steeped in tradition that their principal unit, the mole, requires the use of the thoroughly unreconstructed gram? Now it is the turn of the physicists. This time, the offenders are not those who talk disparagingly about other people or animals, but those who refer to 'forms of energy' and 'heat'. Political correctness has evolved into physical correctness. I was always rather fond of the various forms of energy: potential, kinetic, chemical, electrical, sound and so on. My students might merge heat and internal energy into a single, fuzzy concept loosely associated with moving molecules. They might be a little confused at a whole new crop of energies - hydroelectric, solar, wind, geothermal and tidal - but they could tell me what devices turned chemical energy into electrical energy, even if they couldn't quite appreciate that turning tidal energy into geothermal energy wasn't part of the
Temperature Corrected Bootstrap Algorithm
NASA Technical Reports Server (NTRS)
Comiso, Joey C.; Zwally, H. Jay
1997-01-01
A temperature corrected Bootstrap Algorithm has been developed using Nimbus-7 Scanning Multichannel Microwave Radiometer data in preparation for the upcoming AMSR instrument aboard ADEOS and EOS-PM. The procedure first calculates the effective surface emissivity using emissivities of ice and water at 6 GHz and a mixing formulation that utilizes ice concentrations derived using the current Bootstrap algorithm but using brightness temperatures from 6 GHz and 37 GHz channels. These effective emissivities are then used to calculate surface ice temperatures, which in turn are used to convert the 18 GHz and 37 GHz brightness temperatures to emissivities. Ice concentrations are then derived using the same technique as with the Bootstrap algorithm but using emissivities instead of brightness temperatures. The results show significant improvement in areas where ice temperature is expected to vary considerably, such as near the continental areas in the Antarctic, where the ice temperature is colder than average, and in marginal ice zones.
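The emissivity mixing step described above can be sketched as follows; all numeric values are hypothetical placeholders, and the actual algorithm's channel combinations and ice-concentration retrieval are not reproduced here.

```python
def effective_emissivity(ice_conc, e_ice, e_water):
    """Linear mixing of ice and open-water emissivities,
    weighted by the ice concentration (0..1)."""
    return ice_conc * e_ice + (1.0 - ice_conc) * e_water

def brightness_to_emissivity(t_brightness, t_surface):
    """Convert a brightness temperature (K) to an emissivity,
    given an estimate of the physical surface temperature (K)."""
    return t_brightness / t_surface

# Hypothetical values for illustration only.
e_eff = effective_emissivity(0.8, e_ice=0.92, e_water=0.45)  # 0.826
t_surface = 250.0 / e_eff       # surface temperature from a 6 GHz T_B
e_37 = brightness_to_emissivity(230.0, t_surface)
print(round(e_eff, 3), round(e_37, 3))
```

The point of the conversion is that the derived emissivities, unlike raw brightness temperatures, are largely insensitive to the surface temperature variations the correction targets.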
Electronic measurement correction devices
Mahns, R.R.
1984-04-01
The electronics semi-conductor revolution has touched every industry and home in the nation. The gas industry is no exception. Sophisticated gas measurement instrumentation has been with us for several decades now, but only in the last 10 years or so has it really begun to boom. First marketed were the flow computers dedicated to orifice meter measurement; but with steadily decreasing manufacturing costs, electronic instrumentation is now moving into the area of base volume, pressure and temperature correction previously handled almost solely by mechanical integrating instruments. This paper takes a brief look at some of the features of the newcomers on the market and how they stack up against the old standby mechanical base volume/pressure/temperature correctors.
Accurate basis set truncation for wavefunction embedding
NASA Astrophysics Data System (ADS)
Barnes, Taylor A.; Goodpaster, Jason D.; Manby, Frederick R.; Miller, Thomas F.
2013-07-01
Density functional theory (DFT) provides a formally exact framework for performing embedded subsystem electronic structure calculations, including DFT-in-DFT and wavefunction theory-in-DFT descriptions. In the interest of efficiency, it is desirable to truncate the atomic orbital basis set in which the subsystem calculation is performed, thus avoiding high-order scaling with respect to the size of the MO virtual space. In this study, we extend a recently introduced projection-based embedding method [F. R. Manby, M. Stella, J. D. Goodpaster, and T. F. Miller III, J. Chem. Theory Comput. 8, 2564 (2012); doi:10.1021/ct300544e] to allow for the systematic and accurate truncation of the embedded subsystem basis set. The approach is applied to both covalently and non-covalently bound test cases, including water clusters and polypeptide chains, and it is demonstrated that errors associated with basis set truncation are controllable to well within chemical accuracy. Furthermore, we show that this approach allows for switching between accurate projection-based embedding and DFT embedding with approximate kinetic energy (KE) functionals; in this sense, the approach provides a means of systematically improving upon the use of approximate KE functionals in DFT embedding.
Accurate determination of characteristic relative permeability curves
NASA Astrophysics Data System (ADS)
Krause, Michael H.; Benson, Sally M.
2015-09-01
A recently developed technique to accurately characterize sub-core scale heterogeneity is applied to investigate the factors responsible for flowrate-dependent effective relative permeability curves measured on core samples in the laboratory. The dependency of laboratory measured relative permeability on flowrate has long been both supported and challenged by a number of investigators. Studies have shown that this apparent flowrate dependency is a result of both sub-core scale heterogeneity and outlet boundary effects. However this has only been demonstrated numerically for highly simplified models of porous media. In this paper, flowrate dependency of effective relative permeability is demonstrated using two rock cores, a Berea Sandstone and a heterogeneous sandstone from the Otway Basin Pilot Project in Australia. Numerical simulations of steady-state coreflooding experiments are conducted at a number of injection rates using a single set of input characteristic relative permeability curves. Effective relative permeability is then calculated from the simulation data using standard interpretation methods for calculating relative permeability from steady-state tests. Results show that simplified approaches may be used to determine flowrate-independent characteristic relative permeability provided flow rate is sufficiently high, and the core heterogeneity is relatively low. It is also shown that characteristic relative permeability can be determined at any typical flowrate, and even for geologically complex models, when using accurate three-dimensional models.
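The standard steady-state interpretation referred to above reduces to Darcy's law applied to each phase; a minimal sketch with hypothetical coreflood values (symbols and numbers are illustrative only):

```python
def relative_permeability(q, mu, length, k_abs, area, dp):
    """Effective relative permeability of one phase from a
    steady-state coreflood, via Darcy's law rearranged as
        kr = q * mu * L / (k_abs * A * dP)
    All quantities in SI units."""
    return q * mu * length / (k_abs * area * dp)

# Hypothetical coreflood numbers for illustration.
kr_water = relative_permeability(
    q=1e-8,       # water flow rate, m^3/s
    mu=1e-3,      # water viscosity, Pa*s
    length=0.1,   # core length, m
    k_abs=1e-13,  # absolute permeability, m^2
    area=1e-3,    # cross-sectional area, m^2
    dp=1e5,       # pressure drop across the core, Pa
)
print(kr_water)   # approximately 0.1
```

Repeating this interpretation at several injection rates, as in the simulations described, reveals whether the resulting curves are flowrate-independent or contaminated by heterogeneity and outlet boundary effects.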
How Accurately can we Calculate Thermal Systems?
Cullen, D; Blomquist, R N; Dean, C; Heinrichs, D; Kalugin, M A; Lee, M; Lee, Y; MacFarlan, R; Nagaya, Y; Trkov, A
2004-04-20
I would like to determine how accurately a variety of neutron transport code packages (code and cross section libraries) can calculate simple integral parameters, such as K_eff, for systems that are sensitive to thermal neutron scattering. Since we will only consider theoretical systems, we cannot really determine absolute accuracy compared to any real system. Therefore rather than accuracy, it would be more precise to say that I would like to determine the spread in answers that we obtain from a variety of code packages. This spread should serve as an excellent indicator of how accurately we can really model and calculate such systems today. Hopefully, eventually this will lead to improvements in both our codes and the thermal scattering models that they use in the future. In order to accomplish this I propose a number of extremely simple systems that involve thermal neutron scattering that can be easily modeled and calculated by a variety of neutron transport codes. These are theoretical systems designed to emphasize the effects of thermal scattering, since that is what we are interested in studying. I have attempted to keep these systems very simple, and yet at the same time they include most, if not all, of the important thermal scattering effects encountered in a large, water-moderated, uranium-fueled thermal system, i.e., our typical thermal reactors.
Accurate Stellar Parameters for Exoplanet Host Stars
NASA Astrophysics Data System (ADS)
Brewer, John Michael; Fischer, Debra; Basu, Sarbani; Valenti, Jeff A.
2015-01-01
A large impediment to our understanding of planet formation is obtaining a clear picture of planet radii and densities. Although determining precise ratios between a planet and its stellar host is relatively easy, determining accurate stellar parameters is still a difficult and costly undertaking. High resolution spectral analysis has traditionally yielded precise values for some stellar parameters, but stars in common between catalogs from different authors or analyzed using different techniques often show offsets far in excess of their uncertainties. Most analyses now use some external constraint, when available, to break observed degeneracies between surface gravity, effective temperature, and metallicity which can otherwise lead to correlated errors in results. However, these external constraints are impossible to obtain for all stars and can require more costly observations than the initial high resolution spectra. We demonstrate that these discrepancies can be mitigated by use of a larger line list that has carefully tuned atomic line data. We use an iterative modeling technique that does not require external constraints. We compare the surface gravity obtained with our spectral synthesis modeling to asteroseismically determined values for 42 Kepler stars. Our analysis agrees well, with only a 0.048 dex offset and an rms scatter of 0.05 dex. Such accurate stellar gravities can reduce the primary source of uncertainty in radii by almost an order of magnitude over unconstrained spectral analysis.
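The quoted comparison (a 0.048 dex offset and 0.05 dex rms scatter) comes down to simple statistics on the residuals between two gravity scales. A sketch with synthetic numbers, purely to show the arithmetic:

```python
import numpy as np

# Illustrative sketch: comparing spectroscopic surface gravities to an
# external (e.g. asteroseismic) reference. The log g values below are
# synthetic, not the 42 Kepler stars of the study.
logg_spec = np.array([4.40, 4.12, 3.95, 4.51, 4.02])
logg_seis = np.array([4.35, 4.08, 3.92, 4.45, 3.98])

resid = logg_spec - logg_seis
offset = resid.mean()                          # systematic offset (dex)
rms = np.sqrt(((resid - offset) ** 2).mean())  # scatter about the offset
```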
Accurate Classification of RNA Structures Using Topological Fingerprints
Li, Kejie; Gribskov, Michael
2016-01-01
While RNAs are well known to possess complex structures, functionally similar RNAs often have little sequence similarity. Although the exact size and spacing of base-paired regions vary, functionally similar RNAs have pronounced similarity in the arrangement, or topology, of base-paired stems. Furthermore, predicted RNA structures often lack pseudoknots (a crucial aspect of biological activity), and are only partially correct, or incomplete. A topological approach addresses all of these difficulties. In this work we describe each RNA structure as a graph that can be converted to a topological spectrum (RNA fingerprint). The set of subgraphs in an RNA structure, its RNA fingerprint, can be compared with the fingerprints of other RNA structures to identify and correctly classify functionally related RNAs. Topologically similar RNAs can be identified even when a large fraction, up to 30%, of the stems are omitted, indicating that highly accurate structures are not necessary. We investigate the performance of the RNA fingerprint approach on a set of eight highly curated RNA families, with diverse sizes and functions, containing pseudoknots, and with little sequence similarity, an especially difficult test set. In spite of the difficult test set, the RNA fingerprint approach is very successful (ROC AUC > 0.95). Due to the inclusion of pseudoknots, the RNA fingerprint approach covers a wider range of possible structures than methods based only on secondary structure, and its tolerance for incomplete structures suggests that it can be applied even to predicted structures. Source code is freely available at https://github.rcac.purdue.edu/mgribsko/XIOS_RNA_fingerprint. PMID:27755571
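Comparing set-valued fingerprints, as described above, can be illustrated with a set-similarity measure. This is a deliberately simplified stand-in, not the XIOS algorithm: the real fingerprints are derived from stem-topology graphs, and the string "signatures" below are made up:

```python
# Simplified sketch: each RNA structure is reduced to a set of subgraph
# signatures (here, plain strings), and two fingerprints are compared by
# Jaccard similarity. Signatures are hypothetical placeholders.

def jaccard(fp_a, fp_b):
    """Jaccard similarity between two fingerprint sets."""
    return len(fp_a & fp_b) / len(fp_a | fp_b)

fp1 = {"H", "HH", "HK", "HHK"}   # hypothetical subgraph labels
fp2 = {"H", "HH", "HK", "HKK"}
sim = jaccard(fp1, fp2)
```

Sets shrug off missing elements gracefully, which mirrors the paper's observation that fingerprints tolerate omission of a sizeable fraction of stems.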
Accurate Orientation Estimation Using AHRS under Conditions of Magnetic Distortion
Yadav, Nagesh; Bleakley, Chris
2014-01-01
Low cost, compact attitude heading reference systems (AHRS) are now being used to track human body movements in indoor environments by estimation of the 3D orientation of body segments. In many of these systems, heading estimation is achieved by monitoring the strength of the Earth's magnetic field. However, the Earth's magnetic field can be locally distorted due to the proximity of ferrous and/or magnetic objects. Herein, we propose a novel method for accurate 3D orientation estimation using an AHRS, comprised of an accelerometer, gyroscope and magnetometer, under conditions of magnetic field distortion. The system performs online detection and compensation for magnetic disturbances, due to, for example, the presence of ferrous objects. The magnetic distortions are detected by exploiting variations in magnetic dip angle, relative to the gravity vector, and in magnetic strength. We investigate and show the advantages of using both magnetic strength and magnetic dip angle for detecting the presence of magnetic distortions. The correction method is based on a particle filter, which performs the correction using an adaptive cost function and by adapting the variance during particle resampling, so as to place more emphasis on the results of dead reckoning of the gyroscope measurements and less on the magnetometer readings. The proposed method was tested in an indoor environment in the presence of various magnetic distortions and under various accelerations (up to 3 g). In the experiments, the proposed algorithm achieves <2° static peak-to-peak error and <5° dynamic peak-to-peak error, significantly outperforming previous methods. PMID:25347584
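The distortion test described above, flagging a sample when either the field magnitude or the dip angle relative to gravity departs from its undistorted reference, can be sketched as follows. The reference values, thresholds, and test vectors are illustrative, not the paper's calibration:

```python
import numpy as np

# Hedged sketch of magnetic-distortion detection: compare the measured
# field strength and dip angle (relative to the gravity vector) against
# site-dependent reference values. All constants here are made up.

REF_NORM = 50.0      # reference field strength (uT), site-dependent
REF_DIP = 60.0       # reference dip angle (degrees), site-dependent

def is_distorted(mag, grav, norm_tol=5.0, dip_tol=5.0):
    """mag, grav: 3-vectors (magnetometer reading, gravity direction)."""
    norm = np.linalg.norm(mag)
    cos_angle = np.dot(mag, grav) / (norm * np.linalg.norm(grav))
    dip = 90.0 - np.degrees(np.arccos(cos_angle))   # angle below horizontal
    return abs(norm - REF_NORM) > norm_tol or abs(dip - REF_DIP) > dip_tol

grav = np.array([0.0, 0.0, 1.0])                        # gravity "down"
clean = np.array([25.0, 0.0, 50.0 * np.sin(np.radians(60.0))])
flag_clean = is_distorted(clean, grav)   # matches reference field
flag_bad = is_distorted(1.5 * clean, grav)   # inflated by ferrous object
```

When a sample is flagged, the paper's particle filter then down-weights the magnetometer relative to gyroscope dead reckoning; that adaptive step is beyond this sketch.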
Building dynamic population graph for accurate correspondence detection.
Du, Shaoyi; Guo, Yanrong; Sanroma, Gerard; Ni, Dong; Wu, Guorong; Shen, Dinggang
2015-12-01
In medical imaging studies, there is an increasing trend toward discovering the intrinsic anatomical difference across individual subjects in a dataset, such as hand images for skeletal bone age estimation. Pair-wise matching is often used to detect correspondences between each individual subject and a pre-selected model image with manually-placed landmarks. However, the large anatomical variability across individual subjects can easily compromise such a pair-wise matching step. In this paper, we present a new framework to simultaneously detect correspondences among a population of individual subjects, by propagating all manually-placed landmarks from a small set of model images through a dynamically constructed image graph. Specifically, we first establish graph links between models and individual subjects according to pair-wise shape similarity (the forward step). Next, we detect correspondences for the individual subjects with direct links to any of the model images, which is achieved by a new multi-model correspondence detection approach based on our recently-published sparse point matching method. To correct inaccurate correspondences, we further apply an error detection mechanism to automatically detect wrong correspondences and then update the image graph accordingly (the backward step). After that, all subject images with detected correspondences are included in the set of model images, and the above two steps of graph expansion and error correction are repeated until accurate correspondences for all subject images are established. Evaluations on real hand X-ray images demonstrate that our proposed method using a dynamic graph construction approach can achieve much higher accuracy and robustness, when compared with the state-of-the-art pair-wise correspondence detection methods as well as a similar method but using a static population graph.
Accurate, Fully-Automated NMR Spectral Profiling for Metabolomics
Ravanbakhsh, Siamak; Liu, Philip; Bjordahl, Trent C.; Mandal, Rupasri; Grant, Jason R.; Wilson, Michael; Eisner, Roman; Sinelnikov, Igor; Hu, Xiaoyu; Luchinat, Claudio; Greiner, Russell; Wishart, David S.
2015-01-01
Many diseases cause significant changes to the concentrations of small molecules (a.k.a. metabolites) that appear in a person's biofluids, which means such diseases can often be readily detected from a person's "metabolic profile", i.e., the list of concentrations of those metabolites. This information can be extracted from a biofluid's Nuclear Magnetic Resonance (NMR) spectrum. However, due to its complexity, NMR spectral profiling has remained manual, resulting in slow, expensive and error-prone procedures that have hindered clinical and industrial adoption of metabolomics via NMR. This paper presents a system, BAYESIL, which can quickly, accurately, and autonomously produce a person's metabolic profile. Given a 1D 1H NMR spectrum of a complex biofluid (specifically serum or cerebrospinal fluid), BAYESIL can automatically determine the metabolic profile. This requires first performing several spectral processing steps, then matching the resulting spectrum against a reference compound library, which contains the "signatures" of each relevant metabolite. BAYESIL views spectral matching as an inference problem within a probabilistic graphical model that rapidly approximates the most probable metabolic profile. Our extensive studies on a diverse set of complex mixtures, including real biological samples (serum and CSF), defined mixtures, and realistic computer-generated spectra involving >50 compounds, show that BAYESIL can autonomously find the concentration of NMR-detectable metabolites accurately (~90% correct identification and ~10% quantification error) in less than 5 minutes on a single CPU. These results demonstrate that BAYESIL is the first fully-automatic publicly-accessible system that provides quantitative NMR spectral profiling effectively, with an accuracy on these biofluids that meets or exceeds the performance of trained experts. We anticipate this tool will usher in high-throughput metabolomics and enable a wealth of new applications.
Motor equivalence during multi-finger accurate force production
Mattos, Daniela; Schöner, Gregor; Zatsiorsky, Vladimir M.; Latash, Mark L.
2014-01-01
We explored the stability of a multi-finger cyclical accurate force production action by analysis of responses to small perturbations applied to one of the fingers and by inter-cycle analysis of variance. Healthy subjects performed two versions of the cyclical task, with and without an explicit target. The “inverse piano” apparatus was used to lift/lower a finger by 1 cm over 0.5 s; the subjects were always instructed to perform the task as accurately as they could at all times. Deviations in the spaces of finger forces and modes (hypothetical commands to individual fingers) were quantified in directions that did not change total force (motor equivalent) and in directions that changed the total force (non-motor equivalent). Motor equivalent deviations started immediately with the perturbation and increased progressively with time. After a sequence of lifting-lowering perturbations leading back to the initial conditions, motor equivalent deviations dominated. These phenomena were less pronounced for analyses of the total moment of force about an axis parallel to the forearm/hand. Analysis of inter-cycle variance showed consistently higher variance in a subspace that did not change the total force as compared to the variance that affected total force. We interpret the results as reflections of task-specific stability of the redundant multi-finger system. Large motor equivalent deviations suggest that reactions of the neuromotor system to a perturbation involve large changes in neural commands that do not affect salient performance variables, even during actions with the purpose to correct those salient variables. Consistency between the motor equivalence and variance analyses provides additional support for the idea of task-specific stability ensured at a neural level. PMID:25344311
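The motor-equivalent versus non-motor-equivalent decomposition can be illustrated with a toy computation: for total force F = f1 + f2 + f3 + f4, the Jacobian is a row of ones, and any deviation vector splits into a null-space (motor equivalent) component, which leaves total force unchanged, and its orthogonal complement. The deviation values below are made up for illustration, not data from the study:

```python
import numpy as np

# Hedged sketch of the motor-equivalent decomposition: project a
# finger-force deviation onto the row space of the total-force Jacobian
# (non-motor-equivalent part); the remainder lies in the null space
# (motor-equivalent part, no change in total force).

J = np.ones((1, 4))                       # d(total force)/d(finger forces)
dev = np.array([0.8, -0.5, -0.2, 0.1])    # illustrative deviations (N)

# Projection onto the row space of J (non-motor-equivalent component)
non_me = J.T @ np.linalg.inv(J @ J.T) @ (J @ dev)
me = dev - non_me                         # motor-equivalent component
```

By construction the motor-equivalent part sums to zero, i.e., it has no effect on the total force, which is the sense in which such deviations leave the salient variable intact.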
Stanley, Jeffrey R; Adkins, Joshua N; Slysz, Gordon W; Monroe, Matthew E; Purvine, Samuel O; Karpievitch, Yuliya V; Anderson, Gordon A; Smith, Richard D; Dabney, Alan R
2011-08-15
Current algorithms for quantifying peptide identification confidence in the accurate mass and time (AMT) tag approach assume that the AMT tags themselves have been correctly identified. However, there is uncertainty in the identification of AMT tags, because this is based on matching LC-MS/MS fragmentation spectra to peptide sequences. In this paper, we incorporate confidence measures for the AMT tag identifications into the calculation of probabilities for correct matches to an AMT tag database, resulting in a more accurate overall measure of identification confidence for the AMT tag approach. The method is referred to as Statistical Tools for AMT Tag Confidence (STAC). STAC additionally provides a uniqueness probability (UP) to help distinguish between multiple matches to an AMT tag and a method to calculate an overall false discovery rate (FDR). STAC is freely available for download, as both a command line and a Windows graphical application.
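The core idea, discounting a database-match probability by the probability that the AMT tag itself was identified correctly rather than assuming it is 1, can be sketched in one line. This is a hedged illustration of the principle, not the actual STAC algorithm:

```python
# Hedged sketch (NOT the STAC algorithm): the overall confidence of a
# peptide match is the match probability multiplied by the probability
# that the AMT tag identification was itself correct. Values are
# illustrative.

def combined_confidence(p_match, p_tag_correct):
    """Overall confidence that a peptide-to-database match is correct."""
    return p_match * p_tag_correct

naive = combined_confidence(0.95, 1.0)      # assumes tag is always right
adjusted = combined_confidence(0.95, 0.90)  # discounted by tag confidence
```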
Social contagion of correct and incorrect information in memory.
Rush, Ryan A; Clark, Steven E
2014-01-01
The present study examines how discussion between individuals regarding a shared memory affects their subsequent individual memory reports. In three experiments pairs of participants recalled items from photographs of common household scenes, discussed their recall with each other, and then recalled the items again individually. Results showed that after the discussion, individuals recalled more correct items and more incorrect items, with very small non-significant increases, or no change, in recall accuracy. The information people were exposed to during the discussion was generally accurate, although not as accurate as individuals' initial recall. Individuals incorporated correct exposure items into their subsequent recall at a higher rate than incorrect exposure items. Participants who were initially more accurate became less accurate, and initially less-accurate participants became more accurate as a result of their discussion. Comparisons to no-discussion control groups suggest that the effects were not simply the product of repeated recall opportunities or self-cueing, but rather reflect the transmission of information between individuals.
Conductivity Cell Thermal Inertia Correction Revisited
NASA Astrophysics Data System (ADS)
Eriksen, C. C.
2012-12-01
Salinity measurements made with a CTD (conductivity-temperature-depth instrument) rely on accurate estimation of water temperature within their conductivity cell. Lueck (1990) developed a theoretical framework for heat transfer between the cell body and water passing through it. Based on this model, Lueck and Picklo (1990) introduced the practice of correcting for cell thermal inertia by filtering a temperature time series using two parameters, an amplitude α and a decay time constant τ, a practice now widely used. Typically these two parameters are chosen for a given cell configuration and internal flushing speed by a statistical method applied to a particular data set. Here, thermal inertia correction theory has been extended to apply to flow speeds spanning well over an order of magnitude, both within and outside a conductivity cell, to provide predictions of α and τ from cell geometry and composition. The extended model enables thermal inertia correction for the variable flows encountered by conductivity cells on autonomous gliders and floats, as well as tethered platforms. The length scale formed as the product of cell encounter speed of isotherms, α, and τ can be used to gauge the size of the temperature correction for a given thermal stratification. For cells flushed by dynamic pressure variation induced by platform motion, this length varies by less than a factor of 2 over more than a decade of speed variation. The magnitude of correction for free-flow flushed sensors is comparable to that of pumped cells, but at an order of magnitude in energy savings. Flow conditions around a cell's exterior are found to be of comparable importance to thermal inertia response as flushing speed. Simplification of cell thermal response to a single normal mode is most valid at slow speed. Error in thermal inertia estimation arises from both neglect of higher modes and numerical discretization of the correction scheme, both of which can be easily quantified.
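The two-parameter (α, τ) filter of Lueck and Picklo referenced above is typically implemented as a short recursion on the temperature series. Coefficient conventions vary between processing packages; the form below follows one common discretization, and the α, τ values and temperature series are purely illustrative:

```python
import numpy as np

# Hedged sketch of a Lueck-and-Picklo-style thermal-inertia filter:
# a first-order recursion estimates the cell temperature error from
# temperature differences. Conventions differ between packages; this is
# one common discrete form, with arbitrary alpha and tau.

def thermal_lag_error(temp, dt, alpha, tau):
    """Recursive estimate of the cell temperature error time series."""
    fn = 1.0 / (2.0 * dt)                       # Nyquist frequency
    a = 4.0 * fn * alpha * tau / (1.0 + 4.0 * fn * tau)
    b = 1.0 - 2.0 * a / alpha
    err = np.zeros_like(temp)
    for n in range(1, len(temp)):
        err[n] = -b * err[n - 1] + a * (temp[n] - temp[n - 1])
    return err

t = np.linspace(0.0, 10.0, 101)
temp = 15.0 + 0.5 * np.tanh(t - 5.0)   # synthetic thermocline crossing
err = thermal_lag_error(temp, dt=0.1, alpha=0.03, tau=7.0)
```

The error estimate is zero in uniform water and grows only while temperature is changing, which is the qualitative behavior the correction exploits.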
Radiometric correction of scatterometric wind measurements
NASA Technical Reports Server (NTRS)
1995-01-01
Use of a spaceborne scatterometer to determine the ocean-surface wind vector requires accurate measurement of radar backscatter from ocean. Such measurements are hindered by the effect of attenuation in the precipitating regions over sea. The attenuation can be estimated reasonably well with the knowledge of brightness temperatures observed by a microwave radiometer. The NASA SeaWinds scatterometer is to be flown on the Japanese ADEOS2. The AMSR multi-frequency radiometer on ADEOS2 will be used to correct errors due to attenuation in the SeaWinds scatterometer measurements. Here we investigate the errors in the attenuation corrections. Errors would be quite small if the radiometer and scatterometer footprints were identical and filled with uniform rain. However, the footprints are not identical, and because of their size one cannot expect uniform rain across each cell. Simulations were performed with the SeaWinds scatterometer (13.4 GHz) and AMSR (18.7 GHz) footprints with gradients of attenuation. The study shows that the resulting wind speed errors after correction (using the radiometer) are small for most cases. However, variations in the degree of overlap between the radiometer and scatterometer footprints affect the accuracy of the wind speed measurements.
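At its simplest, the attenuation correction described here is additive in decibel space: the two-way path attenuation estimated from radiometer brightness temperatures is added back to the measured backscatter. A hedged sketch of that step only (the actual SeaWinds/AMSR processing, with mismatched footprints and nonuniform rain, is far more involved; values are illustrative):

```python
# Hedged sketch: restoring rain-attenuated radar backscatter in dB.
# The factor of 2 accounts for the two-way (down-and-back) path.
# Input values are made up for illustration.

def correct_sigma0(sigma0_meas_db, one_way_atten_db):
    """Attenuation-corrected normalized radar cross section (dB)."""
    return sigma0_meas_db + 2.0 * one_way_atten_db

sigma0 = correct_sigma0(sigma0_meas_db=-22.4, one_way_atten_db=0.7)
```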
Accurately Diagnosing and Treating Borderline Personality Disorder
Gentile, Julie P.; Correll, Terry L.
2010-01-01
The high prevalence of comorbid bipolar and borderline personality disorders, together with diagnostic criteria common to both conditions, presents both diagnostic and therapeutic challenges. This article delineates certain symptoms which, by careful history taking, may be attributed more closely to one of these two disorders. Making the correct primary diagnosis along with comorbid psychiatric conditions and choosing the appropriate type of psychotherapy and pharmacotherapy are critical steps to a patient's recovery. In this article, we will use a case example to illustrate some of the challenges the psychiatrist may face in diagnosing and treating borderline personality disorder. In addition, we will explore treatment strategies, including various types of therapy modalities and medication classes, which may prove effective in stabilizing or reducing a broad range of symptomatology associated with borderline personality disorder. PMID:20508805
Chemically accurate energy barriers of small gas molecules moving through hexagonal water rings.
Hjertenæs, Eirik; Trinh, Thuat T; Koch, Henrik
2016-07-21
We present chemically accurate potential energy curves of CH4, CO2 and H2 moving through hexagonal water rings, calculated by CCSD(T)/aug-cc-pVTZ with counterpoise correction. The barriers are extracted from a potential energy surface obtained by allowing the water ring to expand while the gas molecule diffuses through. State-of-the-art XC-functionals are evaluated against the CCSD(T) potential energy surface.
ACCURATE CHARACTERIZATION OF HIGH-DEGREE MODES USING MDI OBSERVATIONS
Korzennik, S. G.; Rabello-Soares, M. C.; Schou, J.; Larson, T. P.
2013-08-01
We present the first accurate characterization of high-degree modes, derived using the best Michelson Doppler Imager (MDI) full-disk full-resolution data set available. A 90 day long time series of full-disk 2 arcsec pixel⁻¹ resolution Dopplergrams was acquired in 2001, thanks to the high rate telemetry provided by the Deep Space Network. These Dopplergrams were spatially decomposed using our best estimate of the image scale and the known components of MDI's image distortion. A multi-taper power spectrum estimator was used to generate power spectra for all degrees and all azimuthal orders, up to l = 1000. We used a large number of tapers to reduce the realization noise, since at high degrees the individual modes blend into ridges and thus there is no reason to preserve a high spectral resolution. These power spectra were fitted for all degrees and all azimuthal orders, between l = 100 and l = 1000, and for all the orders with substantial amplitude. This fitting generated in excess of 5.2 × 10⁶ individual estimates of ridge frequencies, line widths, amplitudes, and asymmetries (singlets), corresponding to some 5700 multiplets (l, n). Fitting at high degrees yields ridge characteristics that do not correspond to the underlying mode characteristics. We used sophisticated forward modeling to recover the best possible estimate of the underlying mode characteristics (mode frequencies, as well as line widths, amplitudes, and asymmetries). We describe this modeling and its validation in detail. The modeling has been extensively reviewed and refined, by including an iterative process to improve its input parameters to better match the observations. Also, the contribution of the leakage matrix to the accuracy of the procedure has been carefully assessed. We present and discuss the derived set of corrected mode characteristics, which includes not only frequencies, but line widths, asymmetries, and amplitudes.
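Multi-taper power spectrum estimation of the kind used above can be sketched with DPSS (Slepian) tapers: the series is windowed by several orthogonal tapers and the tapered periodograms are averaged, trading spectral resolution for lower realization noise. The taper parameters and test signal below are arbitrary; this is not the authors' pipeline:

```python
import numpy as np
from scipy.signal.windows import dpss

# Hedged sketch of multi-taper spectral estimation: average the
# periodograms of a series windowed by K orthogonal DPSS tapers.
# NW and K are arbitrary example choices.

def multitaper_psd(x, nw=4.0, k=7):
    n = len(x)
    tapers = dpss(n, nw, Kmax=k)                 # shape (k, n)
    spectra = np.abs(np.fft.rfft(tapers * x, axis=1)) ** 2
    return spectra.mean(axis=0)

rng = np.random.default_rng(0)
t = np.arange(1024)
x = np.sin(2 * np.pi * 0.1 * t) + 0.5 * rng.standard_normal(1024)
psd = multitaper_psd(x)                          # peak near bin 0.1 * 1024
```

Averaging over K tapers reduces the variance of the estimate by roughly a factor of K at the cost of broadening each peak, which is acceptable when, as in the ridge regime above, fine spectral resolution is not needed.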
Highly accurate articulated coordinate measuring machine
Bieg, Lothar F.; Jokiel, Jr., Bernhard; Ensz, Mark T.; Watson, Robert D.
2003-12-30
Disclosed is a highly accurate articulated coordinate measuring machine, comprising a revolute joint, comprising a circular encoder wheel, having an axis of rotation; a plurality of marks disposed around at least a portion of the circumference of the encoder wheel; bearing means for supporting the encoder wheel, while permitting free rotation of the encoder wheel about the wheel's axis of rotation; and a sensor, rigidly attached to the bearing means, for detecting the motion of at least some of the marks as the encoder wheel rotates; a probe arm, having a proximal end rigidly attached to the encoder wheel, and having a distal end with a probe tip attached thereto; and coordinate processing means, operatively connected to the sensor, for converting the output of the sensor into a set of cylindrical coordinates representing the position of the probe tip relative to a reference cylindrical coordinate system.
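The coordinate-processing step in the claim, converting encoder output into cylindrical coordinates of the probe tip, can be sketched for a single revolute joint. The counts-per-revolution and arm geometry are made-up example values, not figures from the patent:

```python
import math

# Illustrative sketch: map an encoder reading of a single revolute
# joint to cylindrical coordinates (r, theta, z) of the probe tip.
# COUNTS_PER_REV and ARM_LENGTH are hypothetical example values.

COUNTS_PER_REV = 36000      # encoder marks per full rotation (example)
ARM_LENGTH = 0.250          # probe arm length in metres (example)

def probe_tip_cylindrical(counts, z_offset):
    """Probe tip position (r, theta, z) from an encoder count."""
    theta = 2.0 * math.pi * counts / COUNTS_PER_REV
    return (ARM_LENGTH, theta, z_offset)

r, theta, z = probe_tip_cylindrical(counts=9000, z_offset=0.05)
```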
Practical aspects of spatially high accurate methods
NASA Technical Reports Server (NTRS)
Godfrey, Andrew G.; Mitchell, Curtis R.; Walters, Robert W.
1992-01-01
The computational qualities of high order spatially accurate methods for the finite volume solution of the Euler equations are presented. Two dimensional essentially non-oscillatory (ENO), k-exact, and 'dimension by dimension' ENO reconstruction operators are discussed and compared in terms of reconstruction and solution accuracy, computational cost and oscillatory behavior in supersonic flows with shocks. Inherent steady state convergence difficulties are demonstrated for adaptive stencil algorithms. An exact solution to the heat equation is used to determine reconstruction error, and the computational intensity is reflected in operation counts. Standard MUSCL differencing is included for comparison. Numerical experiments presented include the Ringleb flow for numerical accuracy and a shock reflection problem. A vortex-shock interaction demonstrates the ability of the ENO scheme to excel in simulating unsteady high-frequency flow physics.
Toward Accurate and Quantitative Comparative Metagenomics.
Nayfach, Stephen; Pollard, Katherine S
2016-08-25
Shotgun metagenomics and computational analysis are used to compare the taxonomic and functional profiles of microbial communities. Leveraging this approach to understand roles of microbes in human biology and other environments requires quantitative data summaries whose values are comparable across samples and studies. Comparability is currently hampered by the use of abundance statistics that do not estimate a meaningful parameter of the microbial community and biases introduced by experimental protocols and data-cleaning approaches. Addressing these challenges, along with improving study design, data access, metadata standardization, and analysis tools, will enable accurate comparative metagenomics. We envision a future in which microbiome studies are replicable and new metagenomes are easily and rapidly integrated with existing data. Only then can the potential of metagenomics for predictive ecological modeling, well-powered association studies, and effective microbiome medicine be fully realized. PMID:27565341
Apparatus for accurately measuring high temperatures
Smith, Douglas D.
1985-01-01
The present invention is a thermometer used for measuring furnace temperatures in the range of about 1800° to 2700°C. The thermometer comprises a broadband multicolor thermal radiation sensor positioned to be in optical alignment with the end of a blackbody sight tube extending into the furnace. A valve-shutter arrangement is positioned between the radiation sensor and the sight tube, and a chamber for containing a charge of high pressure gas is positioned between the valve-shutter arrangement and the radiation sensor. A momentary opening of the valve-shutter arrangement allows a pulse of the high pressure gas to purge the sight tube of air-borne thermal radiation contaminants, which permits the radiation sensor to accurately measure the thermal radiation emanating from the end of the sight tube.
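The abstract does not specify how the multicolor sensor's readings are converted to temperature, but two-color (ratio) pyrometry under Wien's approximation illustrates the principle of multicolor radiation thermometry. Wavelengths and the example temperature below are assumptions for illustration only:

```python
import math

# Hedged sketch of two-color (ratio) pyrometry under Wien's
# approximation: temperature is recovered from the ratio of spectral
# radiances at two wavelengths. Not the patent's actual processing.

C2 = 1.4388e-2  # second radiation constant (m*K)

def wien_radiance(lam, temp):
    """Spectral radiance up to a constant factor (Wien approximation)."""
    return lam ** -5 * math.exp(-C2 / (lam * temp))

def ratio_temperature(ratio, lam1, lam2):
    """Temperature from the radiance ratio L(lam1)/L(lam2)."""
    return C2 * (1.0 / lam1 - 1.0 / lam2) / (
        5.0 * math.log(lam2 / lam1) - math.log(ratio))

lam1, lam2 = 0.65e-6, 0.90e-6   # example wavelengths (m)
true_t = 2500.0                 # example blackbody temperature (K)
r = wien_radiance(lam1, true_t) / wien_radiance(lam2, true_t)
t_est = ratio_temperature(r, lam1, lam2)   # round-trip recovery
```

A practical advantage of the ratio method is that any attenuation common to both wavelengths cancels, which complements the purge-gas scheme's removal of wavelength-dependent contaminants.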
Micron Accurate Absolute Ranging System: Range Extension
NASA Technical Reports Server (NTRS)
Smalley, Larry L.; Smith, Kely L.
1999-01-01
The purpose of this research is to investigate Fresnel diffraction as a means of obtaining absolute distance measurements with micron or greater accuracy. It is believed that such a system would prove useful to the Next Generation Space Telescope (NGST) as a non-intrusive, non-contact measuring system for use with secondary concentrator station-keeping systems. The present research attempts to validate past experiments and develop ways to apply the phenomenon of Fresnel diffraction to micron-accurate measurement. This report discusses past research on the phenomenon and the basis for using Fresnel diffraction in distance metrology. The apparatus used in the recent investigations, the experimental procedures, and preliminary results are discussed in detail. Continued research and equipment requirements for extending the effective range of the Fresnel diffraction systems are also described.
Accurate Thermal Stresses for Beams: Normal Stress
NASA Technical Reports Server (NTRS)
Johnson, Theodore F.; Pilkey, Walter D.
2003-01-01
Formulations for a general theory of thermoelasticity to generate accurate thermal stresses for structural members of aeronautical vehicles were developed in 1954 by Boley. The formulation also provides three normal stresses and a shear stress along the entire length of the beam. The Poisson effect of the lateral and transverse normal stresses on a thermally loaded beam is taken into account in this theory by employing an Airy stress function. The Airy stress function enables the reduction of the three-dimensional thermal stress problem to a two-dimensional one. Numerical results from the general theory of thermoelasticity are compared to those obtained from strength of materials. It is concluded that the theory of thermoelasticity for prismatic beams proposed in this paper can be used instead of strength of materials when precise stress results are desired.
Accurate Telescope Mount Positioning with MEMS Accelerometers
NASA Astrophysics Data System (ADS)
Mészáros, L.; Jaskó, A.; Pál, A.; Csépány, G.
2014-08-01
This paper describes the advantages and challenges of applying microelectromechanical accelerometer systems (MEMS accelerometers) in order to attain precise, accurate, and stateless positioning of telescope mounts. This provides a method completely independent of other forms of electronic, optical, mechanical, or magnetic feedback or real-time astrometry. Our goal is to reach the subarcminute range, which is considerably smaller than the field of view of conventional imaging telescope systems. Here we present how this subarcminute accuracy can be achieved with very cheap MEMS sensors, and we also detail how our procedures can be extended in order to attain even finer measurements. In addition, our paper discusses how a complete system design can be implemented as part of a telescope control system.
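The core of the stateless positioning idea can be sketched in a few lines: at rest, a 3-axis MEMS accelerometer measures only gravity, so the tilt of the telescope tube follows from the direction of the measured acceleration vector. The sketch below is illustrative only; the axis convention (sensor x along the tube) is an assumption, not a detail from the paper.

```python
import math

def elevation_from_accel(ax, ay, az):
    """Estimate the elevation (altitude) angle of a telescope tube from a
    3-axis accelerometer rigidly mounted on it. At rest the sensor sees
    only gravity, so the tilt relative to the horizon follows from the
    direction of the measured acceleration vector (units cancel)."""
    # Component along the tube axis (assumed to be the sensor x axis)
    # versus the component in the plane perpendicular to it.
    return math.degrees(math.atan2(ax, math.hypot(ay, az)))

# A tube pointing straight up sees all of gravity along its axis:
print(round(elevation_from_accel(9.81, 0.0, 0.0)))  # 90
# A horizontal tube sees gravity only across its axis:
print(round(elevation_from_accel(0.0, 0.0, 9.81)))  # 0
```

In practice the paper's subarcminute goal would additionally require averaging, temperature compensation, and per-sensor calibration; the geometry above is only the starting point.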
Toward Accurate and Quantitative Comparative Metagenomics
Nayfach, Stephen; Pollard, Katherine S.
2016-01-01
Shotgun metagenomics and computational analysis are used to compare the taxonomic and functional profiles of microbial communities. Leveraging this approach to understand roles of microbes in human biology and other environments requires quantitative data summaries whose values are comparable across samples and studies. Comparability is currently hampered by the use of abundance statistics that do not estimate a meaningful parameter of the microbial community and biases introduced by experimental protocols and data-cleaning approaches. Addressing these challenges, along with improving study design, data access, metadata standardization, and analysis tools, will enable accurate comparative metagenomics. We envision a future in which microbiome studies are replicable and new metagenomes are easily and rapidly integrated with existing data. Only then can the potential of metagenomics for predictive ecological modeling, well-powered association studies, and effective microbiome medicine be fully realized. PMID:27565341
Accurate Weather Forecasting for Radio Astronomy
NASA Astrophysics Data System (ADS)
Maddalena, Ronald J.
2010-01-01
The NRAO Green Bank Telescope routinely observes at wavelengths from 3 mm to 1 m. As with all mm-wave telescopes, observing conditions depend upon the variable atmospheric water content. The site provides over 100 days/yr when opacities are low enough for good observing at 3 mm, but winds on the open-air structure reduce the time suitable for 3-mm observing, where pointing is critical. Thus, to maximize productivity, the observing wavelength needs to match weather conditions. For 6 years the telescope has used a dynamic scheduling system (recently upgraded; www.gb.nrao.edu/DSS) that requires accurate multi-day forecasts for winds and opacities. Since opacity forecasts are not provided by the National Weather Service (NWS), I have developed an automated system that takes available forecasts, derives forecasted opacities, and deploys the results on the web in user-friendly graphical overviews (www.gb.nrao.edu/~rmaddale/Weather). The system relies on the "North American Mesoscale" models, which are updated by the NWS every 6 hrs, have a 12 km horizontal resolution, 1 hr temporal resolution, run to 84 hrs, and have 60 vertical layers that extend to 20 km. Each forecast consists of a time series of ground conditions, cloud coverage, etc., and, most importantly, temperature, pressure, and humidity as a function of height. I use Liebe's MPM model (Radio Science, 20, 1069, 1985) to determine the absorption in each layer for each hour for 30 observing wavelengths. Radiative transfer provides, for each hour and wavelength, the total opacity and the radio brightness of the atmosphere, which contributes substantially at some wavelengths to Tsys and the observational noise. Comparisons of measured and forecasted Tsys at 22.2 and 44 GHz imply that the forecasted opacities are good to about 0.01 nepers, which is sufficient for forecasting and accurate calibration. Reliability is high out to 2 days and degrades slowly for longer-range forecasts.
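The radiative-transfer step described above, where per-layer absorption is accumulated into a total zenith opacity and a sky brightness temperature, can be sketched as follows. This is a minimal illustration of the standard layered calculation, not the author's actual pipeline; the layer values in the example are invented.

```python
import math

def atmosphere_radiative_transfer(layers):
    """Given (absorption_per_km, thickness_km, temperature_K) tuples
    ordered from the ground up, return the total zenith opacity (nepers)
    and the sky brightness temperature (K) seen from the ground."""
    total_tau = 0.0
    t_sky = 0.0
    # Accumulate from the ground up: each layer's emission is attenuated
    # by the opacity of the layers beneath it (closer to the observer).
    for alpha, dz, temp in layers:
        d_tau = alpha * dz
        t_sky += temp * (1.0 - math.exp(-d_tau)) * math.exp(-total_tau)
        total_tau += d_tau
    return total_tau, t_sky

# Two invented layers: a warm, moist layer near the ground and a cooler,
# drier one above it.
tau, t_sky = atmosphere_radiative_transfer([(0.005, 2.0, 280.0),
                                            (0.002, 5.0, 250.0)])
```

In the full system a model such as Liebe's supplies the per-layer absorption coefficients from the forecast temperature, pressure, and humidity profiles; the sum above is then repeated per hour and per wavelength.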
The high cost of accurate knowledge.
Sutcliffe, Kathleen M; Weber, Klaus
2003-05-01
Many business thinkers believe it's the role of senior managers to scan the external environment to monitor contingencies and constraints, and to use that precise knowledge to modify the company's strategy and design. As these thinkers see it, managers need accurate and abundant information to carry out that role. According to that logic, it makes sense to invest heavily in systems for collecting and organizing competitive information. Another school of pundits contends that, since today's complex information often isn't precise anyway, it's not worth going overboard with such investments. In other words, it's not the accuracy and abundance of information that should matter most to top executives--rather, it's how that information is interpreted. After all, the role of senior managers isn't just to make decisions; it's to set direction and motivate others in the face of ambiguities and conflicting demands. Top executives must interpret information and communicate those interpretations--they must manage meaning more than they must manage information. So which of these competing views is the right one? Research conducted by academics Sutcliffe and Weber found that how accurate senior executives are about their competitive environments is indeed less important for strategy and corresponding organizational changes than the way in which they interpret information about their environments. Investments in shaping those interpretations, therefore, may create a more durable competitive advantage than investments in obtaining and organizing more information. And what kinds of interpretations are most closely linked with high performance? Their research suggests that high performers respond positively to opportunities, yet they aren't overconfident in their abilities to take advantage of those opportunities.
Accurate and precise determination of isotopic ratios by MC-ICP-MS: a review.
Yang, Lu
2009-01-01
For many decades the accurate and precise determination of isotope ratios has remained of very strong interest to many researchers due to its important applications in earth, environmental, biological, archeological, and medical sciences. Traditionally, thermal ionization mass spectrometry (TIMS) has been the technique of choice for achieving the highest accuracy and precision. However, recent developments in multi-collector inductively coupled plasma mass spectrometry (MC-ICP-MS) have brought a new dimension to this field. In addition to its simple and robust sample introduction, high sample throughput, and high mass resolution, the flat-topped peaks generated by this technique provide for accurate and precise determination of isotope ratios with precision reaching 0.001%, comparable to that achieved with TIMS. These features, in combination with the ability of the ICP source to ionize nearly all elements in the periodic table, have resulted in an increased use of MC-ICP-MS for such measurements in various sample matrices. To determine accurate and precise isotope ratios with MC-ICP-MS, utmost care must be exercised during sample preparation, optimization of the instrument, and mass bias corrections. Unfortunately, inconsistencies and errors are evident in many MC-ICP-MS publications, including errors in mass bias correction models. This review examines "state-of-the-art" methodologies presented in the literature for achievement of precise and accurate determinations of isotope ratios by MC-ICP-MS. Some general rules for such accurate and precise measurements are suggested, and calculations of combined uncertainty of the data using a few common mass bias correction models are outlined.
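As an illustration of the mass bias corrections the review discusses, the widely used exponential (Russell) law can be applied via internal normalisation: a reference isotope pair with a known true ratio fixes the fractionation exponent, which then corrects the analyte ratio measured in the same run. The masses and ratios below are purely illustrative, not data from any study.

```python
import math

def mass_bias_factor(r_true, r_measured, m_num, m_den):
    """Fractionation exponent f of the exponential (Russell) law,
    derived from a reference isotope pair with a known true ratio:
    R_true = R_measured * (m_num / m_den) ** f."""
    return math.log(r_true / r_measured) / math.log(m_num / m_den)

def correct_ratio(r_measured, m_num, m_den, f):
    """Apply the exponential-law correction to a measured ratio."""
    return r_measured * (m_num / m_den) ** f

# Internal normalisation with invented numbers: calibrate f on a
# reference pair, then correct an analyte pair from the same run.
f = mass_bias_factor(r_true=8.3738, r_measured=8.50, m_num=88.9, m_den=86.9)
corrected = correct_ratio(0.7100, 87.9, 85.9, f)
```

The review's point stands even in this toy form: picking the wrong correction model (linear vs. power vs. exponential) or mishandling the exponent propagates directly into the reported ratio and its uncertainty.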
NASA Astrophysics Data System (ADS)
Zhao, Xiao-mei; Xie, Dong-fan; Li, Qi
2015-02-01
With the development of intelligent transport systems, advanced information feedback strategies have been developed to reduce traffic congestion and enhance capacity. However, previous strategies provide accurate information to travelers, and our simulation results show that accurate information brings negative effects, especially in the delayed case: travelers prefer the route reported to be in the best condition, yet delayed information reflects past rather than current traffic conditions. Travelers therefore make wrong routing decisions, decreasing the capacity, increasing oscillations, and driving the system away from equilibrium. To avoid these negative effects, bounded rationality is taken into account by introducing a boundedly rational threshold BR. When the difference between the two routes is less than BR, the routes are chosen with equal probability. Bounded rationality is shown to improve efficiency in terms of capacity, oscillation, and the gap from the system equilibrium.
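The boundedly rational choice rule described above can be sketched directly: if the reported difference between the two routes is within the threshold BR, the traveler is indifferent and chooses at random; otherwise the faster route wins. This is a sketch of the rule only, not the paper's full traffic simulation.

```python
import random

def choose_route(travel_time_a, travel_time_b, br_threshold, rng=random):
    """Boundedly rational route choice between routes "A" and "B".
    Within the threshold the traveler is indifferent and picks either
    route with equal probability; outside it, the faster route is
    chosen deterministically."""
    if abs(travel_time_a - travel_time_b) <= br_threshold:
        return rng.choice(["A", "B"])
    return "A" if travel_time_a < travel_time_b else "B"
```

Setting `br_threshold = 0` recovers the fully rational traveler; the paper's finding is that a nonzero threshold damps the oscillations that delayed feedback would otherwise amplify.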
A symmetric multivariate leakage correction for MEG connectomes
Colclough, G.L.; Brookes, M.J.; Smith, S.M.; Woolrich, M.W.
2015-01-01
Ambiguities in the source reconstruction of magnetoencephalographic (MEG) measurements can cause spurious correlations between estimated source time-courses. In this paper, we propose a symmetric orthogonalisation method to correct for these artificial correlations between a set of multiple regions of interest (ROIs). This process enables the straightforward application of network modelling methods, including partial correlation or multivariate autoregressive modelling, to infer connectomes, or functional networks, from the corrected ROIs. Here, we apply the correction to simulated MEG recordings of simple networks and to a resting-state dataset collected from eight subjects, before computing the partial correlations between power envelopes of the corrected ROI time-courses. We show accurate reconstruction of our simulated networks, and in the analysis of real MEG resting-state connectivity, we find dense bilateral connections within the motor and visual networks, together with longer-range direct fronto-parietal connections. PMID:25862259
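To illustrate the leakage-correction idea: zero-lag linear dependence between two estimated source time-courses can be removed by projecting one signal out of the other. The paper's contribution is a symmetric multivariate generalisation of this (no signal is privileged, and all ROIs are corrected jointly via a closest-orthonormal-matrix computation); the pairwise sketch below shows only the underlying projection, not the authors' method.

```python
def orthogonalise(signal, reference):
    """Remove the zero-lag linear dependence of `signal` on `reference`
    by projecting it out, so the corrected signal has zero instantaneous
    correlation with the reference (the signature of leakage)."""
    dot_sr = sum(s * r for s, r in zip(signal, reference))
    dot_rr = sum(r * r for r in reference)
    coeff = dot_sr / dot_rr
    return [s - coeff * r for s, r in zip(signal, reference)]
```

After such a correction, any surviving correlation between power envelopes cannot be explained by instantaneous signal mixing, which is what licenses the network analyses the paper goes on to perform.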
Addition of noise by scatter correction methods in PVI
Barney, J.S. (Div. of Nuclear Medicine); Harrop, R.; Atkins, M.S. (School of Computing Science)
1994-08-01
Effective scatter correction techniques are required to account for errors due to high scatter fraction seen in positron volume imaging (PVI). To be effective, the correction techniques must be accurate and practical, but they also must not add excessively to the statistical noise in the image. The authors have investigated the noise added by three correction methods: a convolution/subtraction method; a method that interpolates the scatter from the events outside the object; and a dual energy window method with and without smoothing of the scatter estimate. The methods were applied to data generated by Monte Carlo simulation to determine their effect on the variance of the corrected projections. The convolution and interpolation methods did not add significantly to the variance. The dual energy window subtraction method without smoothing increased the variance by a factor of more than twelve, but this factor was improved to 1.2 by smoothing the scatter estimate.
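The noise-addition result for the dual energy window method follows from simple variance propagation. Assuming independent, Poisson-distributed window counts (an idealisation for illustration, not the authors' Monte Carlo model), the corrected value C = T − k·L has variance T + k²·L, which is why a large scale factor k inflates the noise unless the scatter estimate is smoothed first:

```python
def corrected_variance(total_counts, lower_counts, k):
    """Variance of the scatter-corrected value C = T - k*L, assuming
    the two energy-window counts T and L are independent and Poisson
    distributed (variance equals mean):
        Var(C) = Var(T) + k**2 * Var(L) = T + k**2 * L.
    Smoothing the scatter estimate effectively shrinks the k**2 * L
    term, mirroring the drop the paper reports (from ~12x to ~1.2x)."""
    return total_counts + k**2 * lower_counts
```

With k = 2 the subtraction term alone can dominate the variance, while convolution- or interpolation-based estimates, being smooth by construction, add almost nothing, consistent with the paper's comparison.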
Topologically correct cortical segmentation using Khalimsky's cubic complex framework
NASA Astrophysics Data System (ADS)
Cardoso, Manuel J.; Clarkson, Matthew J.; Modat, Marc; Talbot, Hugues; Couprie, Michel; Ourselin, Sébastien
2011-03-01
Automatic segmentation of the cerebral cortex from magnetic resonance brain images is a valuable tool for neuroscience research. Due to the presence of noise, intensity non-uniformity, partial volume effects, the limited resolution of MRI and the highly convoluted shape of the cerebral cortex, segmenting the brain in a robust, accurate and topologically correct way still poses a challenge. In this paper we describe a topologically correct Expectation Maximisation based Maximum a Posteriori segmentation algorithm formulated within the Khalimsky cubic complex framework, where both the solution of the EM algorithm and the information derived from a geodesic distance function are used to locally modify the weighting of a Markov Random Field and drive the topology correction operations. Experiments performed on 20 Brainweb datasets show that the proposed method obtains a topologically correct segmentation without significant loss in accuracy when compared to two well established techniques.
Star catalog position and proper motion corrections in asteroid astrometry
NASA Astrophysics Data System (ADS)
Farnocchia, D.; Chesley, S. R.; Chamberlin, A. B.; Tholen, D. J.
2015-01-01
We provide a scheme to correct asteroid astrometric observations for star catalog systematic errors due to inaccurate star positions and proper motions. As reference we select the most accurate stars in the PPMXL catalog, i.e., those based on 2MASS astrometry. We compute position and proper motion corrections for 19 of the most used star catalogs. The use of these corrections provides better ephemeris predictions and improves the error statistics of astrometric observations, e.g., by removing most of the regional systematic errors previously seen in Pan-STARRS PS1 asteroid astrometry. The correction table is publicly available at ftp://ssd.jpl.nasa.gov/pub/ssd/debias/debias_2014.tgz and can be freely used in orbit determination algorithms to obtain more reliable asteroid trajectories.
Accurate Completion of Medical Report on Diagnosing Death.
Savić, Slobodan; Alempijević, Djordje; Andjelić, Sladjana
2015-01-01
Diagnosing death and issuing a Death Diagnosing Form (DDF) represents an activity that carries a great deal of public responsibility for medical professionals of the Emergency Medical Services (EMS) and is perpetually exposed to the scrutiny of the general public. Diagnosing death is necessary to confirm true death and to exclude apparent death, and consequently to avoid burying a person alive, i.e. one who is only apparently dead. These expert-methodological guidelines, based on the most up-to-date medical evidence, have the goal of helping the physicians of the EMS accurately fill out a medical report on diagnosing death. If the outcome of applied cardiopulmonary resuscitation measures is negative, or when the person is found dead, the physician is under obligation to diagnose death and correctly fill out the DDF. It is also recommended to perform electrocardiography (EKG) and record asystole in at least two leads. In the process of diagnostics and treatment, it is a moral obligation of each Belgrade EMS physician to apply all available achievements and knowledge of modern medicine acquired from extensive international studies, which have indeed been the major theoretical basis for the creation of these expert-methodological guidelines. Those acting differently do so in accordance with their conscience and risk professional and even criminal sanctions.
A novel algorithm for scalable and accurate Bayesian network learning.
Brown, Laura E; Tsamardinos, Ioannis; Aliferis, Constantin F
2004-01-01
Bayesian networks (BNs) are a knowledge representation formalism that has proven valuable in biomedicine for constructing decision support systems and for generating causal hypotheses from data. Given the emergence of datasets in medicine and biology with thousands of variables, and given that current algorithms do not scale beyond a few hundred variables in practical domains, new efficient and accurate algorithms are needed to learn high-quality BNs from data. We present a new algorithm called Max-Min Hill-Climbing (MMHC) that builds upon and improves the Sparse Candidate (SC) algorithm, a state-of-the-art algorithm that scales up to datasets involving hundreds of variables provided the generating networks are sparse. Compared to SC, on a number of datasets from medicine and biology, (a) MMHC discovers BNs that are structurally closer to the data-generating BN, (b) the discovered networks are more probable given the data, (c) MMHC is computationally more efficient and scalable than SC, and (d) the generating networks are not required to be uniformly sparse, nor is the user of MMHC required to correctly guess the network connectivity.
Accurate Inference of Local Phased Ancestry of Modern Admixed Populations
Ma, Yamin; Zhao, Jian; Wong, Jian-Syuan; Ma, Li; Li, Wenzhi; Fu, Guoxing; Xu, Wei; Zhang, Kui; Kittles, Rick A.; Li, Yun; Song, Qing
2014-01-01
Population stratification is a growing concern in genetic-association studies. Averaged ancestry at the genome level (global ancestry) is insufficient for detecting the population substructures and correcting population stratifications in association studies. Local and phase stratification are needed for human genetic studies, but current technologies cannot be applied on the entire genome data due to various technical caveats. Here we developed a novel approach (aMAP, ancestry of Modern Admixed Populations) for inferring local phased ancestry. It took about 3 seconds on a desktop computer to finish a local ancestry analysis for each human genome with 1.4-million SNPs. This method also exhibits the scalability to larger datasets with respect to the number of SNPs, the number of samples, and the size of reference panels. It can detect the lack of the proxy of reference panels. The accuracy was 99.4%. The aMAP software has a capacity for analyzing 6-way admixed individuals. As the biomedical community continues to expand its efforts to increase the representation of diverse populations, and as the number of large whole-genome sequence datasets continues to grow rapidly, there is an increasing demand on rapid and accurate local ancestry analysis in genetics, pharmacogenomics, population genetics, and clinical diagnosis. PMID:25052506
Accurate measurement of RF exposure from emerging wireless communication systems
NASA Astrophysics Data System (ADS)
Letertre, Thierry; Monebhurrun, Vikass; Toffano, Zeno
2013-04-01
Isotropic broadband probes or spectrum analyzers (SAs) may be used for the measurement of rapidly varying electromagnetic fields generated by emerging wireless communication systems. In this paper this problem is investigated by comparing the responses measured by two different isotropic broadband probes typically used to perform electric field (E-field) evaluations. The broadband probes are subjected to signals with variable duty cycles (DC) and crest factors (CF), either with or without Orthogonal Frequency Division Multiplexing (OFDM) modulation but with the same root-mean-square (RMS) power. The two probes do not provide sufficiently accurate results for deterministic signals such as Worldwide Interoperability for Microwave Access (WiMAX) or Long Term Evolution (LTE), or for non-deterministic signals such as Wireless Fidelity (WiFi). The legacy measurement protocols should be adapted to cope with the emerging wireless communication technologies based on the OFDM modulation scheme. This is not easily achieved except when the statistics of the RF emission are well known. In this case the measurement errors are shown to be systematic, and a correction factor or calibration can be applied to obtain a good approximation of the total RMS power.
Real-time lens distortion correction: speed, accuracy and efficiency
NASA Astrophysics Data System (ADS)
Bax, Michael R.; Shahidi, Ramin
2014-11-01
Optical lens systems suffer from nonlinear geometrical distortion. Optical imaging applications such as image-enhanced endoscopy and image-based bronchoscope tracking require correction of this distortion for accurate localization, tracking, registration, and measurement of image features. Real-time capability is desirable for interactive systems and live video. The use of a texture-mapping graphics accelerator, which is standard hardware on current motherboard chipsets and add-in video graphics cards, to perform distortion correction is proposed. Mesh generation for image tessellation, an error analysis, and performance results are presented. It is shown that distortion correction using commodity graphics hardware is substantially faster than using the main processor and can be performed at video frame rates (faster than 30 frames per second), and that the polar-based method of mesh generation proposed here is more accurate than a conventional grid-based approach. Using graphics hardware to perform distortion correction is not only fast and accurate but also efficient as it frees the main processor for other tasks, which is an important issue in some real-time applications.
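To make the computation concrete, a common single-coefficient radial distortion model (a simplification; real endoscope calibrations typically carry more coefficients) and its fixed-point inversion are sketched below. This per-pixel iterative inversion is exactly the cost that the proposed texture-mapped mesh moves off the main processor and onto the graphics hardware.

```python
def distort(xu, yu, k1):
    """Forward single-coefficient radial distortion model: the distorted
    radius is r_d = r_u * (1 + k1 * r_u**2), applied about the optical
    centre (assumed here to be the origin, in normalised coordinates)."""
    r2 = xu * xu + yu * yu
    s = 1.0 + k1 * r2
    return xu * s, yu * s

def undistort(xd, yd, k1, iterations=20):
    """Invert the model by fixed-point iteration (converges for the
    mild distortions typical of calibrated lenses)."""
    xu, yu = xd, yd
    for _ in range(iterations):
        r2 = xu * xu + yu * yu
        s = 1.0 + k1 * r2
        xu, yu = xd / s, yd / s
    return xu, yu
```

A mesh-based corrector evaluates `undistort` only at mesh vertices ahead of time and lets the texture-mapping hardware interpolate between them, which is where both the speed and the accuracy trade-offs discussed in the paper arise.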
Reference module selection criteria for accurate testing of photovoltaic (PV) panels
Roy, J.N.; Gariki, Govardhan Rao; Nagalakhsmi, V.
2010-01-15
It is shown that for accurate testing of PV panels the correct selection of reference modules is important. A detailed description of the test methodology is given. Three different types of reference modules, having different I_SC (short-circuit current) and power (in Wp), have been used for this study. These reference modules have been calibrated by NREL. It has been found that for accurate testing, both the I_SC and the power of the reference module must be similar to or exceed those of the modules under test. If the corresponding values of the reference module fall below those of the test modules by more than a particular limit, the measurements may not be accurate. The experimental results obtained have been modeled using a simple equivalent-circuit model and the associated I-V equations. (author)
Accurate masses for dispersion-supported galaxies
NASA Astrophysics Data System (ADS)
Wolf, Joe; Martinez, Gregory D.; Bullock, James S.; Kaplinghat, Manoj; Geha, Marla; Muñoz, Ricardo R.; Simon, Joshua D.; Avedo, Frank F.
2010-08-01
We derive an accurate mass estimator for dispersion-supported stellar systems and demonstrate its validity by analysing resolved line-of-sight velocity data for globular clusters, dwarf galaxies and elliptical galaxies. Specifically, by manipulating the spherical Jeans equation we show that the mass enclosed within the 3D deprojected half-light radius r_1/2 can be determined with only mild assumptions about the spatial variation of the stellar velocity dispersion anisotropy as long as the projected velocity dispersion profile is fairly flat near the half-light radius, as is typically observed. We find M_1/2 = 3 G^-1 ⟨σ²_los⟩ r_1/2 ≈ 4 G^-1 ⟨σ²_los⟩ R_e, where ⟨σ²_los⟩ is the luminosity-weighted square of the line-of-sight velocity dispersion and R_e is the 2D projected half-light radius. While deceptively familiar in form, this formula is not the virial theorem, which cannot be used to determine accurate masses unless the radial profile of the total mass is known a priori. We utilize this finding to show that all of the Milky Way dwarf spheroidal galaxies (MW dSphs) are consistent with having formed within a halo of mass approximately 3 × 10^9 M_⊙, assuming a Λ cold dark matter cosmology. The faintest MW dSphs seem to have formed in dark matter haloes that are at least as massive as those of the brightest MW dSphs, despite the almost five orders of magnitude spread in luminosity between them. We expand our analysis to the full range of observed dispersion-supported stellar systems and examine their dynamical I-band mass-to-light ratios Υ^I_1/2. The Υ^I_1/2 versus M_1/2 relation for dispersion-supported galaxies follows a U shape, with a broad minimum near Υ^I_1/2 ≈ 3 that spans dwarf elliptical galaxies to normal ellipticals, a steep rise to Υ^I_1/2 ≈ 3200 for ultra-faint dSphs, and a more shallow rise to Υ^I_1/2 ≈ 800 for galaxy cluster spheroids.
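The estimator itself is a one-liner, which can be sketched as follows; the unit convention (G in kpc·(km/s)²/M_⊙) and the example dispersion and radius are illustrative choices, not values from the paper.

```python
G = 4.30091e-6  # gravitational constant in kpc * (km/s)^2 / M_sun

def half_light_mass(sigma_los_kms, r_half_kpc):
    """Mass enclosed within the 3D deprojected half-light radius,
    M_1/2 = 3 * <sigma_los^2> * r_1/2 / G, following the estimator
    quoted in the abstract."""
    return 3.0 * sigma_los_kms**2 * r_half_kpc / G

# Illustrative classical dwarf spheroidal: sigma ~ 10 km/s, r_1/2 ~ 0.3 kpc,
# giving a mass of roughly 2e7 solar masses.
m = half_light_mass(10.0, 0.3)
```

Note that only the luminosity-weighted dispersion and a half-light radius are needed, which is precisely why the estimator sidesteps the anisotropy degeneracy that afflicts naive virial-theorem masses.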
Accurate Measurement of Bone Density with QCT
NASA Technical Reports Server (NTRS)
Cleek, Tammy M.; Beaupre, Gary S.; Matsubara, Miki; Whalen, Robert T.; Dalton, Bonnie P. (Technical Monitor)
2002-01-01
The objective of this study was to determine the accuracy of bone density measurement with a new QCT technology. A phantom was fabricated using two materials, a water-equivalent compound and hydroxyapatite (HA), combined in precise proportions (QRM GmbH, Germany). The phantom was designed to have the approximate physical size and range in bone density of a human calcaneus, with regions of 0, 50, 100, 200, 400, and 800 mg/cc HA. The phantom was scanned at 80, 120, and 140 kVp with a GE CT/i HiSpeed Advantage scanner. A ring of highly attenuating material (polyvinyl chloride or Teflon) was slipped over the phantom to alter the image by introducing non-axisymmetric beam hardening. Images were corrected with the new QCT technology using an estimate of the effective X-ray beam spectrum to eliminate beam hardening artifacts. The algorithm computes the volume fraction of HA and water-equivalent matrix in each voxel. We found excellent agreement between expected and computed HA volume fractions. Results were insensitive to beam-hardening ring material, HA concentration, and scan voltage settings. Data from all three voltages are displayed with a best-fit linear regression.
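The voxel decomposition step can be sketched as a linear two-material mixture. The function names and the pure-HA density used for conversion are assumptions for illustration; the study's actual algorithm additionally models the effective beam spectrum to remove beam hardening before this step.

```python
def ha_volume_fraction(mu_voxel, mu_water, mu_ha):
    """Two-material decomposition of a voxel's linear attenuation into
    a hydroxyapatite (HA) fraction f and a water-equivalent fraction,
    assuming the voxel value mixes the two linearly:
        mu_voxel = f * mu_ha + (1 - f) * mu_water."""
    return (mu_voxel - mu_water) / (mu_ha - mu_water)

def density_mg_cc(fraction, rho_ha=3160.0):
    """Convert an HA volume fraction to an equivalent density in mg/cc,
    given an assumed density for pure hydroxyapatite (~3160 mg/cc)."""
    return fraction * rho_ha
```

The linearity assumption is exactly what beam hardening violates, which is why the spectrum-aware correction is needed before the fractions can be trusted across ring materials and kVp settings.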
Highly accurate fast lung CT registration
NASA Astrophysics Data System (ADS)
Rühaak, Jan; Heldmann, Stefan; Kipshagen, Till; Fischer, Bernd
2013-03-01
Lung registration in thoracic CT scans has received much attention in the medical imaging community. Possible applications range from follow-up analysis, motion correction for radiation therapy, monitoring of air flow and pulmonary function to lung elasticity analysis. In a clinical environment, runtime is always a critical issue, ruling out quite a few excellent registration approaches. In this paper, a highly efficient variational lung registration method based on minimizing the normalized gradient fields distance measure with curvature regularization is presented. The method ensures diffeomorphic deformations by an additional volume regularization. Supplemental user knowledge, like a segmentation of the lungs, may be incorporated as well. The accuracy of our method was evaluated on 40 test cases from clinical routine. In the EMPIRE10 lung registration challenge, our scheme ranks third, with respect to various validation criteria, out of 28 algorithms with an average landmark distance of 0.72 mm. The average runtime is about 1:50 min on a standard PC, making it by far the fastest approach of the top-ranking algorithms. Additionally, the ten publicly available DIR-Lab inhale-exhale scan pairs were registered to subvoxel accuracy at computation times of only 20 seconds. Our method thus combines very attractive runtimes with state-of-the-art accuracy in a unique way.
Accurate Position Calibrations for Charged Fragments
NASA Astrophysics Data System (ADS)
Russell, Autumn; Finck, Joseph E.; Spyrou, Artemis; Thoennessen, Michael
2009-10-01
The Modular Neutron Array (MoNA), located at the National Superconducting Cyclotron Laboratory at Michigan State University, is used in conjunction with the MSU/FSU Sweeper Magnet to study the breakup of neutron-rich nuclei. Fragmentation reactions create particle-unstable nuclei near the neutron dripline which spontaneously break up by the decay of one or two neutrons with energies that reflect the nuclear structure of unbound excited and ground states. The neutrons continue forward into MoNA, where their position and time of flight are recorded, and the charged fragments' position and energy are measured by an array of detectors following the Sweeper Magnet. In such experiments the identification of the fragment of interest is done through energy-loss and time-of-flight measurements using plastic scintillators. The emitted angles of the fragments are determined with the use of CRDCs. The purpose of the present work was the calibration of the CRDCs in the X and Y axes (where Z is the beam axis) using specially designed masks. This calibration was also used for the correction of the signal of the plastic scintillators, which is position dependent. The results of this work are used for the determination of the ground state of the neutron-unbound ^24N.
Cool Cluster Correctly Correlated
Varganov, Sergey Aleksandrovich
2005-01-01
Atomic clusters are unique objects, which occupy an intermediate position between atoms and condensed matter systems. For a long time it was thought that the physical and chemical properties of atomic clusters change monotonically with cluster size, from a single atom up to a condensed matter system. However, it has recently become clear that many properties of atomic clusters can change drastically with the size of the clusters. Because the physical and chemical properties of clusters can be adjusted simply by changing the cluster's size, different applications of atomic clusters have been proposed. One example is the catalytic activity of clusters of specific sizes in different chemical reactions. Another example is a potential application of atomic clusters in microelectronics, where their band gaps can be tuned simply by changing cluster sizes. In recent years, significant advances in experimental techniques have made it possible to synthesize and study atomic clusters of specified sizes. However, the interpretation of the results is often difficult, and theoretical methods are frequently used to help interpret complex experimental data. Most theoretical approaches have been based on empirical or semiempirical methods. These methods allow one to study large and small clusters using the same approximations. However, since empirical and semiempirical methods rely on simple models with many parameters, it is often difficult to estimate the quantitative or even qualitative accuracy of the results. On the other hand, because of significant advances in quantum chemical methods and computer capabilities, it is now possible to perform high-quality ab initio calculations not only on systems of a few atoms but on clusters of practical interest as well. In addition to accurate results for specific clusters, such methods can be used for benchmarking different empirical and semiempirical approaches. The atomic clusters studied in this work contain from a few atoms to
In Situ Mosaic Brightness Correction
NASA Technical Reports Server (NTRS)
Deen, Robert G.; Lorre, Jean J.
2012-01-01
In situ missions typically have pointable, mast-mounted cameras capable of taking panoramic mosaics composed of many individual frames that are mosaicked together. While the mosaic software applies radiometric correction to the images, in many cases brightness/contrast seams still exist between frames. This is largely due to errors in the radiometric correction and the absence of a correction for photometric effects in the mosaic processing chain. The software analyzes the overlaps between adjacent frames in the mosaic and determines correction factors for each image in an attempt to reduce or eliminate these brightness seams.
QCD corrections to triboson production
NASA Astrophysics Data System (ADS)
Lazopoulos, Achilleas; Melnikov, Kirill; Petriello, Frank
2007-07-01
We present a computation of the next-to-leading order QCD corrections to the production of three Z bosons at the Large Hadron Collider. We calculate these corrections using a completely numerical method that combines sector decomposition to extract infrared singularities with contour deformation of the Feynman parameter integrals to avoid internal loop thresholds. The NLO QCD corrections to pp→ZZZ are approximately 50% and are badly underestimated by the leading order scale dependence. However, the kinematic dependence of the corrections is minimal in phase space regions accessible at leading order.
Entropic Corrections to Coulomb's Law
NASA Astrophysics Data System (ADS)
Hendi, S. H.; Sheykhi, A.
2012-04-01
Two well-known quantum corrections to the area law have been introduced in the literature: logarithmic and power-law corrections. The logarithmic correction arises in loop quantum gravity from thermal equilibrium and quantum fluctuations, while the power-law correction appears when dealing with the entanglement of quantum fields inside and outside the horizon. Inspired by Verlinde's argument on the entropic force, and assuming the quantum-corrected relation for the entropy, we propose in this note an entropic origin for Coulomb's law. We also investigate the Uehling potential as a radiative correction to the Coulomb potential at one-loop order and show that for some values of the distance the entropic corrections to Coulomb's law are compatible with the vacuum-polarization correction in QED. We thus derive a modified Coulomb's law as well as the entropy-corrected Poisson's equation governing the evolution of the scalar potential ϕ. Our study further supports the unification of gravity and electromagnetic interactions based on the holographic principle.
Open quantum systems and error correction
NASA Astrophysics Data System (ADS)
Shabani Barzegar, Alireza
Quantum effects can be harnessed to manipulate information in a desired way. Quantum systems designed for this purpose suffer from harmful interactions with their surrounding environment and from inaccuracies in control forces. Engineering methods to combat errors in quantum devices is therefore in high demand. In this thesis, I focus on realistic formulations of quantum error correction methods, where a realistic formulation is one that incorporates experimental challenges. The thesis is presented in two parts: open quantum systems and quantum error correction. Chapters 2 and 3 cover the material on open quantum system theory; it is essential to first study a noise process and then to contemplate methods to cancel its effect. In the second chapter, I present the non-completely-positive formulation of quantum maps. Most of these results are published in [Shabani and Lidar, 2009b,a], except a subsection on the geometric characterization of the positivity domain of a quantum map. The real-time formulation of the dynamics is the topic of the third chapter. After introducing the concept of the Markovian regime, a new post-Markovian quantum master equation is derived, published in [Shabani and Lidar, 2005a]. The quantum error correction part is presented in chapters 4 through 7. In chapter 4, we introduce a generalized theory of decoherence-free subspaces and subsystems (DFSs), which do not require accurate initialization (published in [Shabani and Lidar, 2005b]). In Chapter 5, we present a semidefinite program optimization approach to quantum error correction that yields codes and recovery procedures that are robust against significant variations in the noise channel. Our approach allows us to optimize the encoding, recovery, or both, and is amenable to approximations that significantly improve computational cost while retaining fidelity (see [Kosut et al., 2008] for a published version). Chapter 6 is devoted to a theory of quantum error correction (QEC
Accurate lineshape spectroscopy and the Boltzmann constant
Truong, G.-W.; Anstie, J. D.; May, E. F.; Stace, T. M.; Luiten, A. N.
2015-01-01
Spectroscopy has an illustrious history delivering serendipitous discoveries and providing a stringent testbed for new physical predictions, including applications from trace materials detection, to understanding the atmospheres of stars and planets, and even constraining cosmological models. Reaching fundamental-noise limits permits optimal extraction of spectroscopic information from an absorption measurement. Here, we demonstrate a quantum-limited spectrometer that delivers high-precision measurements of the absorption lineshape. These measurements yield a very accurate value for the excited-state (6P1/2) hyperfine splitting in Cs and reveal a breakdown in the well-known Voigt spectral profile. We develop a theoretical model that accounts for this breakdown, explaining the observations to within the shot-noise limit. Our model enables us to infer the thermal velocity dispersion of the Cs vapour with an uncertainty of 35 p.p.m. within an hour. This allows us to determine a value for Boltzmann's constant with a precision of 6 p.p.m., and an uncertainty of 71 p.p.m. PMID:26465085
Accurate free energy calculation along optimized paths.
Chen, Changjun; Xiao, Yi
2010-05-01
The path-based methods of free energy calculation, such as thermodynamic integration and free energy perturbation, are simple in theory, but difficult in practice because in most cases smooth paths do not exist, especially for large molecules. In this article, we present a novel method to build the transition path of a peptide. We use harmonic potentials to restrain its nonhydrogen atom dihedrals in the initial state and set the equilibrium angles of the potentials as those in the final state. Through a series of steps of geometrical optimization, we can construct a smooth and short path from the initial state to the final state. This path can be used to calculate free energy difference. To validate this method, we apply it to a small 10-ALA peptide and find that the calculated free energy changes in helix-helix and helix-hairpin transitions are both self-convergent and cross-convergent. We also calculate the free energy differences between different stable states of beta-hairpin trpzip2, and the results show that this method is more efficient than the conventional molecular dynamics method in accurate free energy calculation.
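The thermodynamic integration mentioned in this abstract reduces, numerically, to integrating the ensemble average of dU/dλ over the coupling parameter. The sketch below does this with the trapezoidal rule for a toy harmonic system whose exact answer is known analytically; the harmonic model and all names are illustrative assumptions, not the authors' peptide protocol.

```python
import numpy as np

def thermodynamic_integration(dudl_mean, lambdas):
    """Free energy difference via thermodynamic integration:
    Delta F = integral_0^1 <dU/dlambda>_lambda dlambda,
    evaluated with the trapezoidal rule over sampled lambda windows."""
    return np.trapz(dudl_mean, lambdas)

# Toy check: U = 0.5*k(lam)*x^2 with k(lam) = k0 + lam*(k1 - k0).
# Then <dU/dlam> = 0.5*(k1 - k0)*kT/k(lam) and the exact result
# is Delta F = (kT/2) * ln(k1/k0).
kT, k0, k1 = 1.0, 1.0, 4.0
lams = np.linspace(0.0, 1.0, 2001)
dudl = 0.5 * (k1 - k0) * kT / (k0 + lams * (k1 - k0))  # analytic averages
dF = thermodynamic_integration(dudl, lams)             # ~ 0.5*ln(4)
```

In a real calculation the `dudl` values would come from equilibrium sampling at each λ window along the optimized path, which is exactly where a smooth, short path pays off.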
Fast and Provably Accurate Bilateral Filtering.
Chaudhury, Kunal N; Dabhade, Swapnil D
2016-06-01
The bilateral filter is a non-linear filter that uses a range filter along with a spatial filter to perform edge-preserving smoothing of images. A direct computation of the bilateral filter requires O(S) operations per pixel, where S is the size of the support of the spatial filter. In this paper, we present a fast and provably accurate algorithm for approximating the bilateral filter when the range kernel is Gaussian. In particular, for box and Gaussian spatial filters, the proposed algorithm can cut down the complexity to O(1) per pixel for any arbitrary S. The algorithm has a simple implementation involving N+1 spatial filterings, where N is the approximation order. We give a detailed analysis of the filtering accuracy that can be achieved by the proposed approximation in relation to the target bilateral filter. This allows us to estimate the order N required to obtain a given accuracy. We also present comprehensive numerical results to demonstrate that the proposed algorithm is competitive with the state-of-the-art methods in terms of speed and accuracy. PMID:27093722
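For reference, this is the direct O(S)-per-pixel bilateral filter that fast O(1) schemes like the one in this paper approximate. It is a brute-force baseline sketch, not the authors' algorithm; parameter names and defaults are illustrative.

```python
import numpy as np

def bilateral_filter(img, sigma_s=2.0, sigma_r=0.1, radius=4):
    """Direct bilateral filter on a 2-D float image: O(S) work per
    pixel, where S = (2*radius+1)^2 is the spatial support size."""
    H, W = img.shape
    out = np.zeros_like(img)
    # Gaussian spatial kernel over the support window (fixed per image).
    y, x = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(x**2 + y**2) / (2 * sigma_s**2))
    pad = np.pad(img, radius, mode='edge')
    for i in range(H):
        for j in range(W):
            window = pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            # Gaussian range kernel: down-weights pixels whose intensity
            # differs from the center -- this is what preserves edges.
            rng = np.exp(-(window - img[i, j])**2 / (2 * sigma_r**2))
            w = spatial * rng
            out[i, j] = (w * window).sum() / w.sum()
    return out
```

The double loop makes the per-pixel cost scale with the window size S, which is precisely the cost the constant-time approximation removes.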
Accurate, reliable prototype earth horizon sensor head
NASA Technical Reports Server (NTRS)
Schwarz, F.; Cohen, H.
1973-01-01
The design and performance of an accurate and reliable prototype earth sensor head (ARPESH) are described. The ARPESH employs a detection-logic 'locator' concept and horizon sensor mechanization that should lead to high-accuracy horizon sensing, minimally degraded by spatial or temporal variations in sensing attitude, from a satellite in orbit around the earth at altitudes near 500 km. An accuracy of horizon location to within 0.7 km has been predicted, independent of meteorological conditions; this corresponds to an error of 0.015 deg at 500 km altitude. Laboratory evaluation of the sensor indicates that this accuracy is achieved. First, the basic operating principles of ARPESH are described; next, detailed design and construction data are presented; and then the performance of the sensor is reported under laboratory conditions, with the sensor installed in a simulator that permits it to scan a blackbody source against a background representing the earth-space interface for various equivalent planet temperatures.
Fast and Accurate Exhaled Breath Ammonia Measurement
Solga, Steven F.; Mudalel, Matthew L.; Spacek, Lisa A.; Risby, Terence H.
2014-01-01
This exhaled breath ammonia method uses a fast and highly sensitive spectroscopic method known as quartz enhanced photoacoustic spectroscopy (QEPAS) that uses a quantum cascade based laser. The monitor is coupled to a sampler that measures mouth pressure and carbon dioxide. The system is temperature controlled and specifically designed to address the reactivity of this compound. The sampler provides immediate feedback to the subject and the technician on the quality of the breath effort. Together with the quick response time of the monitor, this system is capable of accurately measuring exhaled breath ammonia representative of deep lung systemic levels. Because the system is easy to use and produces real-time results, it has enabled experiments to identify factors that influence measurements. For example, mouth rinse and oral pH reproducibly and significantly affect results and therefore must be controlled. Temperature and mode of breathing are other examples. As our understanding of these factors evolves, error is reduced, and clinical studies become more meaningful. This system is very reliable and individual measurements are inexpensive. The sampler is relatively inexpensive and quite portable, but the monitor is neither. This limits options for some clinical studies and provides rationale for future innovations. PMID:24962141
MEMS accelerometers in accurate mount positioning systems
NASA Astrophysics Data System (ADS)
Mészáros, László; Pál, András; Jaskó, Attila
2014-07-01
In order to attain precise, accurate and stateless positioning of telescope mounts we apply microelectromechanical accelerometer systems (also known as MEMS accelerometers). In common practice, feedback from the mount position is provided by electronic, optical or magneto-mechanical systems, or via a real-time astrometric solution based on the acquired images. MEMS-based systems, by contrast, are completely independent of these mechanisms. Our goal is to investigate the advantages and challenges of applying such devices and to reach the sub-arcminute range, which is well below the field of view of conventional imaging telescope systems. We present how this sub-arcminute accuracy can be achieved with very cheap MEMS sensors. Out of the box, these sensors yield raw output with an accuracy of only a few degrees. We show what kinds of calibration procedures can exploit spherical and cylindrical constraints between accelerometer output channels in order to achieve the previously mentioned accuracy level. We also demonstrate how our implementation can be inserted into a telescope control system. Although this attainable precision is less than both the resolution of telescope mount drive mechanics and the accuracy of astrometric solutions, the independent nature of attitude determination could significantly increase the reliability of autonomous or remotely operated astronomical observations.
A time-accurate adaptive grid method and the numerical simulation of a shock-vortex interaction
NASA Technical Reports Server (NTRS)
Bockelie, Michael J.; Eiseman, Peter R.
1990-01-01
A time accurate, general purpose, adaptive grid method is developed that is suitable for multidimensional steady and unsteady numerical simulations. The grid point movement is performed in a manner that generates smooth grids which resolve the severe solution gradients and the sharp transitions in the solution gradients. The temporal coupling of the adaptive grid and the PDE solver is performed with a grid prediction correction method that is simple to implement and ensures the time accuracy of the grid. Time accurate solutions of the 2-D Euler equations for an unsteady shock vortex interaction demonstrate the ability of the adaptive method to accurately adapt the grid to multiple solution features.
New orbit correction method uniting global and local orbit corrections
NASA Astrophysics Data System (ADS)
Nakamura, N.; Takaki, H.; Sakai, H.; Satoh, M.; Harada, K.; Kamiya, Y.
2006-01-01
A new orbit correction method, called the eigenvector method with constraints (EVC), is proposed and formulated to unite global and local orbit corrections for ring accelerators, especially synchrotron radiation (SR) sources. The EVC can exactly correct the beam positions at arbitrarily selected ring positions such as light source points, simultaneously reducing closed orbit distortion (COD) around the whole ring. Computer simulations clearly demonstrate these features of the EVC for both cases of the Super-SOR light source and the Advanced Light Source (ALS) that have typical structures of high-brilliance SR sources. In addition, the effects of errors in beam position monitor (BPM) reading and steering magnet setting on the orbit correction are analytically expressed and also compared with the computer simulations. Simulation results show that the EVC is very effective and useful for orbit correction and beam position stabilization in SR sources.
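The "exact at selected points, least-squares everywhere else" idea can be sketched as a generic equality-constrained least-squares problem. The snippet below is a toy model with a random response matrix and a hypothetical function name, not the published EVC formulation: it minimizes the global closed orbit distortion while forcing zero corrected orbit at the selected monitor indices.

```python
import numpy as np

def constrained_orbit_correction(R, x, constrained):
    """Find steering kicks theta minimizing ||R @ theta + x||^2
    (global COD reduction) subject to exact zero corrected orbit at
    the BPM indices in `constrained`, via the KKT system of the
    equality-constrained least-squares problem.
    R: (m BPMs x n steerers) linear response matrix, x: measured orbit."""
    C = R[constrained]                  # rows that must be zeroed exactly
    d = x[constrained]
    n, k = R.shape[1], len(constrained)
    # KKT conditions: 2 R^T R theta + C^T lam = -2 R^T x,  C theta = -d.
    KKT = np.block([[2 * R.T @ R, C.T],
                    [C, np.zeros((k, k))]])
    rhs = np.concatenate([-2 * R.T @ x, -d])
    sol = np.linalg.solve(KKT, rhs)
    return sol[:n]
```

In practice the constrained rows would correspond to light source points, and the remaining rows to the ordinary BPMs whose residual COD is minimized in the least-squares sense.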
PET measurements of cerebral metabolism corrected for CSF contributions
Chawluk, J.; Alavi, A.; Dann, R.; Kushner, M.J.; Hurtig, H.; Zimmerman, R.A.; Reivich, M.
1984-01-01
Thirty-three subjects have been studied with PET and anatomic imaging (proton NMR and/or CT) in order to determine the effect of cerebral atrophy on calculations of metabolic rates. Subgroups of neurologic disease investigated include stroke, brain tumor, epilepsy, psychosis, and dementia. Anatomic images were digitized through a Vidicon camera and analyzed volumetrically. Relative areas for ventricles, sulci, and brain tissue were calculated. Preliminary analysis suggests that ventricular volumes as determined by NMR and CT are similar, while sulcal volumes are larger on NMR scans. Metabolic rates (18F-FDG) were calculated before and after correction for CSF spaces, with initial focus upon dementia and normal aging. Correction for atrophy led to a greater percentage increase in global metabolic rates in demented individuals (18.2 ± 5.3) compared to elderly controls (8.3 ± 3.0, p < .05). A trend towards significantly lower glucose metabolism in demented subjects before CSF correction was not seen following correction for atrophy. These data suggest that volumetric analysis of NMR images may more accurately reflect the degree of cerebral atrophy, since NMR does not suffer from beam-hardening artifacts due to bone-parenchyma juxtapositions. Furthermore, an appropriate correction for CSF spaces should be employed if current-resolution PET scanners are to accurately measure residual brain tissue metabolism in various pathological states.
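The arithmetic behind an atrophy correction of this kind can be sketched as a rescaling by the tissue fraction: metabolically inert CSF in a region of interest dilutes the measured global rate, so dividing by (1 - CSF fraction) recovers the rate per unit of actual brain tissue. This is a schematic version; the function name and numbers are illustrative, not taken from the study.

```python
def atrophy_corrected_rate(measured_rate, csf_fraction):
    """Rescale a global metabolic rate to the brain tissue actually
    present: dividing by the tissue fraction (1 - CSF fraction)
    removes the dilution caused by metabolically inert CSF spaces."""
    if not 0.0 <= csf_fraction < 1.0:
        raise ValueError("CSF fraction must lie in [0, 1)")
    return measured_rate / (1.0 - csf_fraction)

# Example: a measured global rate of 4.5 with 10% CSF in the region
# becomes 4.5 / 0.9 = 5.0 after correction, an ~11% increase.
corrected = atrophy_corrected_rate(4.5, 0.10)
```

A subject with more atrophy (larger CSF fraction) gets a larger upward correction, which is why the correction widens the gap between demented subjects and controls in the abstract.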
Progress toward accurate high spatial resolution actinide analysis by EPMA
NASA Astrophysics Data System (ADS)
Jercinovic, M. J.; Allaz, J. M.; Williams, M. L.
2010-12-01
High-precision, high-spatial-resolution EPMA of actinides is a significant issue for geochronology, resource geochemistry, and studies involving the nuclear fuel cycle. Particular interest focuses on understanding the behavior of Th and U in the growth and breakdown reactions relevant to actinide-bearing phases (monazite, zircon, thorite, allanite, etc.), and on geochemical fractionation processes involving Th and U in fluid interactions. Unfortunately, the measurement of minor and trace concentrations of U in the presence of major concentrations of Th and/or REEs is particularly problematic, especially in complexly zoned phases with large compositional variation on the micro- or nanoscale - spatial resolutions now accessible with modern instruments. Sub-micron, high-precision compositional analysis of minor components is feasible in very high-Z phases where scattering is limited at lower kV (15 kV or less) and where the beam diameter can be kept below 400 nm at high current (e.g. 200-500 nA). High-collection-efficiency spectrometers and high-performance electron optics in EPMA now allow the use of lower overvoltage through an exceptional range in beam current, facilitating higher-spatial-resolution quantitative analysis. The U LIII edge at 17.2 kV precludes L-series analysis at low kV (high spatial resolution), requiring careful measurements of the actinide M series. Also, U La detection (wavelength = 0.9 Å) requires the use of LiF (220) or (420), not generally available on most instruments. Strong peak overlaps of Th on U make highly accurate interference correction mandatory, with problems compounded by the Th MIV and Th MV absorption edges affecting peak, background, and interference calibration measurements (especially the interference of the Th M line family on U Mβ). Complex REE-bearing phases such as monazite, zircon, and allanite have particularly complex interference issues due to multiple peak and background overlaps from elements present in the activation
Micromagnetometer calibration for accurate orientation estimation.
Zhang, Zhi-Qiang; Yang, Guang-Zhong
2015-02-01
Micromagnetometers, together with inertial sensors, are widely used for attitude estimation in a wide variety of applications. However, appropriate sensor calibration, which is essential to the accuracy of attitude reconstruction, must be performed in advance. Thus far, many different magnetometer calibration methods have been proposed to compensate for errors such as scale, offset, and nonorthogonality; they have also been used to obviate magnetic errors due to soft and hard iron. However, in order to combine the magnetometer with an inertial sensor for attitude reconstruction, the alignment difference between the magnetometer and the axes of the inertial sensor must be determined as well. This paper proposes a practical means of sensor error correction by simultaneous consideration of sensor errors, magnetic errors, and alignment difference. We take the summation of the offset and hard-iron error as the combined bias and then amalgamate the alignment difference and all the other errors into a transformation matrix. A two-step approach is presented to determine the combined bias and transformation matrix separately. In the first step, the combined bias is determined by finding an optimal ellipsoid that can best fit the sensor readings. In the second step, the intrinsic relationships of the raw sensor readings are explored to estimate the transformation matrix as a homogeneous linear least-squares problem. Singular value decomposition is then applied to estimate both the transformation matrix and the magnetic vector. The proposed method is then applied to calibrate our sensor node. Although there is no ground truth for the combined bias and transformation matrix for our node, the consistency of calibration results among different trials and less than 3° root-mean-square error for orientation estimation have been achieved, which illustrates the effectiveness of the proposed sensor calibration method for practical applications. PMID:25265625
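The first step of such a two-step calibration, estimating the combined bias from the geometry of the readings, can be sketched with a sphere fit: in a constant ambient field, bias-free readings lie on a sphere centered at the origin, so the fitted center is the bias. A sphere is a simplified stand-in for the full ellipsoid fit in the paper, and the function name is illustrative.

```python
import numpy as np

def fit_sphere_bias(readings):
    """Estimate the combined bias (offset + hard iron) by fitting a
    sphere to raw magnetometer readings via linear least squares.
    readings: (N, 3) array of raw field measurements.
    Returns (center, radius); the center is the bias estimate."""
    p = np.asarray(readings, dtype=float)
    # |p - c|^2 = r^2  =>  2 p.c + (r^2 - |c|^2) = |p|^2, linear in c.
    A = np.hstack([2 * p, np.ones((len(p), 1))])
    b = (p**2).sum(axis=1)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    center = sol[:3]
    radius = np.sqrt(sol[3] + center @ center)
    return center, radius
```

Handling scale and nonorthogonality (which turn the sphere into an ellipsoid) and the alignment difference is the job of the transformation matrix estimated in the second step.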
Atmospheric correction of high resolution land surface images
NASA Technical Reports Server (NTRS)
Diner, D. J.; Martonchik, J. V.; Danielson, E. D.; Bruegge, C. J.
1989-01-01
Algorithms to correct for atmospheric-scattering effects in high-spatial-resolution land-surface images require the ability to perform rapid and accurate computations of the top-of-atmosphere diffuse radiance field for arbitrarily general surface reflectance distributions (which may be both heterogeneous and non-Lambertian) and atmospheric models. Such algorithms are being developed using three-dimensional radiative transfer (3DRT) theory. The methodology used to perform the 3DRT calculations is described. It is shown how these calculations are used to perform atmospheric corrections, and the sensitivity of the retrieved surface reflectances to atmospheric structural parameters is illustrated.
Towards Accurate Application Characterization for Exascale (APEX)
Hammond, Simon David
2015-09-01
Sandia National Laboratories has been engaged in hardware and software codesign activities for a number of years; indeed, it might be argued that prototyping of clusters as far back as the CPLANT machines, and many large capability resources including ASCI Red and RedStorm, were examples of codesigned solutions. As the research supporting our codesign activities has moved closer to investigating on-node runtime behavior, a natural hunger has grown for detailed analysis of both hardware and algorithm performance from the perspective of low-level operations. The Application Characterization for Exascale (APEX) LDRD was a project conceived to address some of these concerns. Primarily, the research was intended to focus on generating accurate and reproducible low-level performance metrics using tools that could scale to production-class code bases. Alongside this research was an advocacy and analysis role associated with evaluating tools for production use, working with leading industry vendors to develop and refine solutions required by our code teams, and directly engaging with production code developers to form a context for the application analysis and a bridge to the research community within Sandia. On each of these accounts significant progress has been made, particularly, as this report will cover, in the low-level analysis of operations for important classes of algorithms. This report summarizes the development of a collection of tools under the APEX research program and leaves to other SAND and L2 milestone reports the description of codesign progress with Sandia's production users/developers.
Accurate paleointensities - the multi-method approach
NASA Astrophysics Data System (ADS)
de Groot, Lennart
2016-04-01
The accuracy of models describing rapid changes in the geomagnetic field over the past millennia critically depends on the availability of reliable paleointensity estimates. Over the past decade, methods to derive paleointensities from lavas (the only recorder of the geomagnetic field that is available all over the globe and through geologic time) have seen significant improvements, and various alternative techniques were proposed. The 'classical' Thellier-style approach was optimized and selection criteria were defined in the 'Standard Paleointensity Definitions' (Paterson et al, 2014). The Multispecimen approach was validated, and the importance of additional tests and criteria to assess Multispecimen results was emphasized. Recently, a non-heating, relative paleointensity technique was proposed - the pseudo-Thellier protocol - which shows great potential in both accuracy and efficiency, but currently lacks a solid theoretical underpinning. Here I present work using all three of the aforementioned paleointensity methods on suites of young lavas taken from the volcanic islands of Hawaii, La Palma, Gran Canaria, Tenerife, and Terceira. Many of the sampled cooling units are <100 years old, so the actual field strength at the time of cooling is reasonably well known. Rather intuitively, flows that produce coherent results from two or more different paleointensity methods yield the most accurate estimates of the paleofield. Furthermore, the results for some flows pass the selection criteria for one method but fail in other techniques. Scrutinizing and combining all acceptable results yielded reliable paleointensity estimates for 60-70% of all sampled cooling units - an exceptionally high success rate. This 'multi-method paleointensity approach' therefore has high potential to provide the much-needed paleointensities to improve geomagnetic field models for the Holocene.
Important Nearby Galaxies without Accurate Distances
NASA Astrophysics Data System (ADS)
McQuinn, Kristen
2014-10-01
The Spitzer Infrared Nearby Galaxies Survey (SINGS) and its offspring programs (e.g., THINGS, HERACLES, KINGFISH) have resulted in a fundamental change in our view of star formation and the ISM in galaxies, and together they represent the most complete multi-wavelength data set yet assembled for a large sample of nearby galaxies. These great investments of observing time have been dedicated to the goal of understanding the interstellar medium, the star formation process, and, more generally, galactic evolution at the present epoch. Nearby galaxies provide the basis for which we interpret the distant universe, and the SINGS sample represents the best studied nearby galaxies. Accurate distances are fundamental to interpreting observations of galaxies. Surprisingly, many of the SINGS spiral galaxies have numerous distance estimates resulting in confusion. We can rectify this situation for 8 of the SINGS spiral galaxies within 10 Mpc at a very low cost through measurements of the tip of the red giant branch. The proposed observations will provide an accuracy of better than 0.1 in distance modulus. Our sample includes such well known galaxies as M51 (the Whirlpool), M63 (the Sunflower), M104 (the Sombrero), and M74 (the archetypal grand design spiral). We are also proposing coordinated parallel WFC3 UV observations of the central regions of the galaxies, rich with high-mass UV-bright stars. As a secondary science goal we will compare the resolved UV stellar populations with integrated UV emission measurements used in calibrating star formation rates. Our observations will complement the growing HST UV atlas of high resolution images of nearby galaxies.
Accurate Thermal Conductivities from First Principles
NASA Astrophysics Data System (ADS)
Carbogno, Christian
2015-03-01
In spite of significant research efforts, a first-principles determination of the thermal conductivity at high temperatures has remained elusive. On the one hand, Boltzmann transport techniques that include anharmonic effects in the nuclear dynamics only perturbatively become inaccurate or inapplicable under such conditions. On the other hand, non-equilibrium molecular dynamics (MD) methods suffer from enormous finite-size artifacts in the computationally feasible supercells, which prevent an accurate extrapolation to the bulk limit of the thermal conductivity. In this work, we overcome this limitation by performing ab initio MD simulations in thermodynamic equilibrium that account for all orders of anharmonicity. The thermal conductivity is then assessed from the auto-correlation function of the heat flux using the Green-Kubo formalism. Foremost, we discuss the fundamental theory underlying a first-principles definition of the heat flux using the virial theorem. We validate our approach and in particular the techniques developed to overcome finite time and size effects, e.g., by inspecting silicon, the thermal conductivity of which is particularly challenging to converge. Furthermore, we use this framework to investigate the thermal conductivity of ZrO2, which is known for its high degree of anharmonicity. Our calculations shed light on the heat resistance mechanism active in this material, which eventually allows us to discuss how the thermal conductivity can be controlled by doping and co-doping. This work has been performed in collaboration with R. Ramprasad (University of Connecticut), C. G. Levi and C. G. Van de Walle (University of California Santa Barbara).
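The Green-Kubo step described above, integrating the heat-flux autocorrelation function over time, can be sketched in a few lines. The flux here is synthetic and the prefactor is set to 1 in place of V/(k_B T^2); everything below is an illustrative assumption, not the first-principles workflow of the talk.

```python
import numpy as np

def green_kubo_kappa(flux, dt, prefactor=1.0):
    """Green-Kubo estimate of a transport coefficient: integrate the
    flux autocorrelation function <J(0)J(t)> over time. Returns the
    ACF and the running (cumulative-trapezoid) kappa estimate."""
    n = len(flux)
    flux = flux - flux.mean()
    # Autocorrelation averaged over time origins, up to lag n//2.
    acf = np.array([np.mean(flux[:n - k] * flux[k:]) for k in range(n // 2)])
    # Cumulative trapezoidal integral -> running estimate vs. cutoff time.
    kappa = prefactor * np.concatenate(
        ([0.0], np.cumsum((acf[1:] + acf[:-1]) / 2) * dt))
    return acf, kappa

# Synthetic stand-in for an equilibrium MD heat-flux time series.
rng = np.random.default_rng(3)
J = rng.normal(size=1000)
acf, kappa = green_kubo_kappa(J, dt=0.5)
```

In practice the plateau of the running `kappa` against the integration cutoff is what is reported, and controlling finite time and size effects in that plateau is exactly the convergence issue the abstract highlights.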
Diamagnetic Corrections and Pascal's Constants
ERIC Educational Resources Information Center
Bain, Gordon A.; Berry, John F.
2008-01-01
Measured magnetic susceptibilities of paramagnetic substances must typically be corrected for their underlying diamagnetism. This correction is often accomplished by using tabulated values for the diamagnetism of atoms, ions, or whole molecules. These tabulated values can be problematic since many sources contain incomplete and conflicting data.…
Barometric and Earth Tide Correction
Toll, Nathaniel J.
2005-11-10
BETCO corrects for barometric and earth tide effects in long-term water level records. A regression deconvolution method is used to solve a series of linear equations to determine an impulse response function for the well pressure head. Using the response function, a pressure head correction is calculated and applied.
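The regression deconvolution step can be sketched as an ordinary least-squares fit of water level changes against lagged barometric changes: the fitted coefficients are the impulse response, and the predicted barometric component is the correction. This is a single-forcing sketch in the spirit of BETCO (no earth tide term), and the names are illustrative.

```python
import numpy as np

def estimate_impulse_response(dhead, dbaro, nlags):
    """Regress water level changes on current and lagged barometric
    pressure changes to recover an impulse response function, then
    return it together with the corrected (residual) head changes.
    dhead, dbaro: 1-D arrays of first differences, same length."""
    n = len(dhead) - nlags
    # Row t holds [dbaro[t], dbaro[t-1], ..., dbaro[t-nlags]].
    X = np.column_stack(
        [dbaro[nlags - k: nlags - k + n] for k in range(nlags + 1)])
    y = dhead[nlags:]
    resp, *_ = np.linalg.lstsq(X, y, rcond=None)
    correction = X @ resp            # predicted barometric component
    return resp, y - correction      # impulse response, corrected changes
```

Summing the corrected changes reconstructs a water level record with the barometric signal removed, which is the output a user of such a tool would work with.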
Atmospheric correction of satellite data
NASA Astrophysics Data System (ADS)
Shmirko, Konstantin; Bobrikov, Alexey; Pavlov, Andrey
2015-11-01
The atmosphere is responsible for more than 90% of all radiation measured by satellite sensors. Atmospheric correction therefore plays an important role in separating the water-leaving radiance from the signal and in evaluating the concentrations of various water pigments (chlorophyll-a, DOM, CDOM, etc.). The elimination of the atmosphere's intrinsic radiance from the remote sensing signal is referred to as atmospheric correction.
Correcting Slightly Less Simple Movements
ERIC Educational Resources Information Center
Aivar, M. P.; Brenner, E.; Smeets, J. B. J.
2005-01-01
Many studies have analysed how goal directed movements are corrected in response to changes in the properties of the target. However, only simple movements to single targets have been used in those studies, so little is known about movement corrections under more complex situations. Evidence from studies that ask for movements to several targets…
Fine-Tuning Corrective Feedback.
ERIC Educational Resources Information Center
Han, ZhaoHong
2001-01-01
Explores the notion of "fine-tuning" in connection with the corrective feedback process. Describes a longitudinal case study, conducted in the context of Norwegian as a second language, that shows how fine-tuning and lack thereof in the provision of written corrective feedback differentially affects a second language learner's restructuring of…
Geometric Correction System Capabilities, Processing, and Application
Brewster, S.B.
1999-06-30
The U.S. Department of Energy's Remote Sensing Laboratory developed the geometric correction system (GCS) as a state-of-the-art solution for removing distortions from multispectral line scanner data caused by aircraft motion. The system operates on Daedalus AADS-1268 scanner data acquired from fixed-wing and helicopter platforms. The aircraft attitude, altitude, acceleration, and location are recorded and applied to the data, thereby determining the location on the earth with respect to a given datum and projection. The GCS has yielded a positional accuracy of 0.5 meters when used with a 1-meter digital elevation model. Data at this level of accuracy are invaluable in making precise areal estimates and as input into a geographic information system. The combination of high-spatial resolution and accurate geo-rectification makes the GCS a unique tool in identifying and locating environmental conditions, finding targets of interest, and detecting changes as they occur over time.
Algorithmic scatter correction in dual-energy digital mammography
Chen, Xi; Mou, Xuanqin; Nishikawa, Robert M.; Lau, Beverly A.; Chan, Suk-tak; Zhang, Lei
2013-11-15
background DE calcification signals obtained using scatter-uncorrected data were reduced by 58% with scatter-corrected data from the algorithmic method. With the scatter-correction algorithm and denoising, the minimum visible calcification size can be reduced from 380 to 280 μm. Conclusions: When applying the proposed algorithmic scatter correction to images, the resultant background DE calcification signals can be reduced and the CNR of calcifications can be improved. This method has similar or even better performance than the pinhole-array interpolation method in scatter correction for DEDM; moreover, this method is convenient and requires no extra exposure to the patient. Although the proposed scatter correction method is effective, it was validated with a 5-cm-thick phantom with calcifications and a homogeneous background. The method should be tested on structured backgrounds to more accurately gauge its effectiveness.
Causal instrument corrections for short-period and broadband seismometers
Haney, Matthew M.; Power, John; West, Michael; Michaels, Paul
2012-01-01
Of all the filters applied to recordings of seismic waves, which include source, path, and site effects, the one we know most precisely is the instrument filter. Therefore, it behooves seismologists to accurately remove the effect of the instrument from raw seismograms. Applying instrument corrections allows analysis of the seismogram in terms of physical units (e.g., displacement or particle velocity of the Earth’s surface) instead of the output of the instrument (e.g., digital counts). The instrument correction can be considered the most fundamental processing step in seismology since it relates the raw data to an observable quantity of interest to seismologists. Complicating matters is the fact that, in practice, the term “instrument correction” refers to more than simply the seismometer. The instrument correction compensates for the complete recording system including the seismometer, telemetry, digitizer, and any anti‐alias filters. Knowledge of all these components is necessary to perform an accurate instrument correction. The subject of instrument corrections has been covered extensively in the literature (Seidl, 1980; Scherbaum, 1996). However, the prospect of applying instrument corrections still evokes angst among many seismologists—the authors of this paper included. There may be several reasons for this. For instance, the seminal paper by Seidl (1980) exists in a journal that is not currently available in electronic format and cannot be accessed online. Also, a standard method for applying instrument corrections involves the programs TRANSFER and EVALRESP in the Seismic Analysis Code (SAC) package (Goldstein et al., 2003). The exact mathematical methods implemented in these codes are not thoroughly described in the documentation accompanying SAC.
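The basic instrument-correction operation described above, recovering ground motion from digital counts, amounts to a deconvolution of the recording system's response. A simplified frequency-domain sketch with a water-level stabilizer, an assumed simplification of what SAC's TRANSFER performs rather than its actual algorithm:

```python
import numpy as np

def remove_response(raw, response_fft, water_level=1e-6):
    """Deconvolve an instrument response in the frequency domain.
    response_fft: the recording system's response sampled at the
    rfft frequencies of the trace. Near-zero spectral values are
    floored ("water level") to stabilize the division."""
    R = response_fft.copy()
    floor = water_level * np.abs(R).max()
    R[np.abs(R) < floor] = floor
    return np.fft.irfft(np.fft.rfft(raw) / R, n=len(raw))

# usage: a flat (unit) response returns the raw trace unchanged
raw = np.sin(np.linspace(0, 10, 256))
corrected = remove_response(raw, np.ones(129))  # rfft length = n//2 + 1
```

Real corrections additionally band-limit the result, since dividing by a small response amplifies out-of-band noise.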
On the accurate estimation of gap fraction during daytime with digital cover photography
NASA Astrophysics Data System (ADS)
Hwang, Y. R.; Ryu, Y.; Kimm, H.; Macfarlane, C.; Lang, M.; Sonnentag, O.
2015-12-01
Digital cover photography (DCP) has emerged as an indirect method to obtain gap fraction accurately. Thus far, however, the intervention of subjectivity, such as determining the camera relative exposure value (REV) and the threshold in the histogram, has hindered computing accurate gap fraction. Here we propose a novel method that enables us to measure gap fraction accurately during daytime under various sky conditions by DCP. The novel method computes gap fraction using a single unsaturated raw DCP image which is corrected for scattering effects by canopies and a sky image reconstructed from the raw format image. To test the sensitivity of the gap fraction derived by the novel method to diverse REVs, solar zenith angles and canopy structures, we took photos at one-hour intervals between sunrise and midday under dense and sparse canopies with REV 0 to -5. The novel method showed little variation of gap fraction across different REVs in both dense and sparse canopies across a diverse range of solar zenith angles. The perforated panel experiment, which was used to test the accuracy of the estimated gap fraction, confirmed that the novel method resulted in accurate and consistent gap fractions across different hole sizes, gap fractions and solar zenith angles. These findings highlight that the novel method opens new opportunities to estimate gap fraction accurately during daytime from sparse to dense canopies, which will be useful in monitoring LAI precisely and validating satellite remote sensing LAI products efficiently.
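Once a photo has been classified into sky and canopy, the gap fraction itself is simply the share of sky pixels. The sketch below uses a plain brightness threshold as a stand-in for the authors' raw-image scattering correction:

```python
import numpy as np

def gap_fraction(img, threshold):
    """Gap fraction = fraction of pixels classified as sky.
    The fixed brightness cut is an illustrative placeholder for a
    proper classification of a corrected raw image."""
    sky = img > threshold  # True where the sky shows through the canopy
    return sky.mean()

# synthetic 100x100 "image": top 30 rows are bright sky pixels
img = np.zeros((100, 100))
img[:30, :] = 255.0
gf = gap_fraction(img, threshold=128)
```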
Evaluation of QNI corrections in porous media applications
NASA Astrophysics Data System (ADS)
Radebe, M. J.; de Beer, F. C.; Nshimirimana, R.
2011-09-01
Qualitative measurements using digital neutron imaging have been explored more thoroughly than accurate quantitative measurements. The reason for this bias is that quantitative measurements require correction for background and material scatter, and for neutron spectral effects. The Quantitative Neutron Imaging (QNI) software package has resulted from efforts at the Paul Scherrer Institute, Helmholtz Zentrum Berlin (HZB) and Necsa to correct for these effects, while the sample-detector distance (SDD) principle has previously been demonstrated as a measure to eliminate the material scatter effect. This work evaluates the capabilities of the QNI software package to produce accurate quantitative results on specific characteristics of porous media, and its role in the nondestructive quantification of material with and without calibration. The work further complements QNI's abilities by the use of different SDDs. Studies of the effective porosity (%) of mortar and the attenuation coefficient of water using QNI and the SDD principle are reported.
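An illustrative sketch of the kind of quantification at stake: recovering an effective attenuation coefficient from a neutron transmission measurement via the Beer-Lambert law, then reading porosity as the attenuation deficit relative to the fully dense solid. The relation and the numbers are assumptions for demonstration, not the QNI package:

```python
import math

def mu_from_transmission(I, I0, d):
    """Effective attenuation coefficient from Beer-Lambert,
    I = I0 * exp(-mu * d); d is the sample thickness."""
    return -math.log(I / I0) / d

def porosity(mu_eff, mu_solid):
    """Porosity as the attenuation deficit vs. the dense solid
    (assumed linear mixing of solid and void)."""
    return 1.0 - mu_eff / mu_solid

mu_solid = 0.5               # cm^-1, dense mortar (placeholder value)
d = 2.0                      # sample thickness in cm
I_ratio = math.exp(-0.4 * d) # simulated transmission of a 20%-porous sample
phi = porosity(mu_from_transmission(I_ratio, 1.0, d), mu_solid)
```

Scatter and spectral effects, the subject of the abstract, are precisely what make the raw transmission deviate from this idealized law.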
Surface consistent finite frequency phase corrections
NASA Astrophysics Data System (ADS)
Kimman, W. P.
2016-07-01
Static time-delay corrections are frequency independent and ignore velocity variations away from the assumed vertical ray path through the subsurface. There is therefore a clear potential for improvement if the finite frequency nature of wave propagation can be properly accounted for. Such a method is presented here based on the Born approximation, the assumption of surface consistency and the misfit of instantaneous phase. The concept of instantaneous phase lends itself very well to sweep-like signals, hence these are the focus of this study. Analytical sensitivity kernels are derived that accurately predict frequency-dependent phase shifts due to P-wave anomalies in the near surface. They are quick to compute and robust near the source and receivers. An additional correction is presented that re-introduces the nonlinear relation between model perturbation and phase delay, which becomes relevant for stronger velocity anomalies. The phase shift as function of frequency is a slowly varying signal, its computation therefore does not require fine sampling even for broad-band sweeps. The kernels reveal interesting features of the sensitivity of seismic arrivals to the near surface: small anomalies can have a relatively large impact resulting from the medium field term that is dominant near the source and receivers. Furthermore, even simple velocity anomalies can produce a distinct frequency-dependent phase behaviour. Unlike statics, the predicted phase corrections are smooth in space. Verification with spectral element simulations shows an excellent match for the predicted phase shifts over the entire seismic frequency band. Applying the phase shift to the reference sweep corrects for wavelet distortion, making the technique akin to surface consistent deconvolution, even though no division in the spectral domain is involved. As long as multiple scattering is mild, surface consistent finite frequency phase corrections outperform traditional statics for moderately large…
Fiona: a parallel and automatic strategy for read error correction
Schulz, Marcel H.; Weese, David; Holtgrewe, Manuel; Dimitrova, Viktoria; Niu, Sijia; Reinert, Knut; Richard, Hugues
2014-01-01
Motivation: Automatic error correction of high-throughput sequencing data can have a dramatic impact on the amount of usable base pairs and their quality. It has been shown that the performance of tasks such as de novo genome assembly and SNP calling can be dramatically improved after read error correction. While a large number of methods specialized for correcting substitution errors as found in Illumina data exist, few methods for the correction of indel errors, common to technologies like 454 or Ion Torrent, have been proposed. Results: We present Fiona, a new stand-alone read error–correction method. Fiona provides a new statistical approach for sequencing error detection and optimal error correction and estimates its parameters automatically. Fiona is able to correct substitution, insertion and deletion errors and can be applied to any sequencing technology. It uses an efficient implementation of the partial suffix array to detect read overlaps with different seed lengths in parallel. We tested Fiona on several real datasets from a variety of organisms with different read lengths and compared its performance with state-of-the-art methods. Fiona shows a consistently higher correction accuracy over a broad range of datasets from 454 and Ion Torrent sequencers, without compromise in speed. Conclusion: Fiona is an accurate parameter-free read error–correction method that can be run on inexpensive hardware and can make use of multicore parallelization whenever available. Fiona was implemented using the SeqAn library for sequence analysis and is publicly available for download at http://www.seqan.de/projects/fiona. Contact: mschulz@mmci.uni-saarland.de or hugues.richard@upmc.fr Supplementary information: Supplementary data are available at Bioinformatics online. PMID:25161220
NASA Astrophysics Data System (ADS)
Mokrov, Yu. V.; Morozova, S. V.; Timoshenko, G. N.; Krylov, V. A.
2014-11-01
The results of correcting the readings of DVGN-01 albedo dosimeters behind the shielding of the MC400 cyclotron at the Laboratory of Nuclear Reactions (LNR) with the use of the spherical albedo system are presented. The formulas approximating the dependences of correction coefficients used to correct the readings on the hardness parameters of low-energy neutron spectra were obtained based on these results and the results of earlier studies. Neutron spectra were measured at three points behind the MC400 shielding, and the correction coefficients for DVGN-01 were calculated based on these spectra. It was demonstrated that these coefficients agree well with the coefficients obtained with the use of the spherical albedo system. This suggests that the obtained correction coefficient values are accurate. The recommended correction coefficient values to be used in the individual dosimetric control at LNR were specified based on the results of the present study and the data given in other papers.
Accurate Critical Stress Intensity Factor Griffith Crack Theory Measurements by Numerical Techniques
Petersen, Richard C.
2014-01-01
Critical stress intensity factor (KIc) has been an approximation for fracture toughness using only load-cell measurements. However, artificial man-made cracks several orders of magnitude longer and wider than natural flaws have required a correction factor term (Y) that can be up to about 3 times the recorded experimental value [1-3]. In fact, over 30 years ago a National Academy of Sciences advisory board stated that empirical KIc testing was of serious concern and further requested that an accurate bulk fracture toughness method be found [4]. Now that fracture toughness can be calculated accurately by numerical integration from the load/deflection curve as resilience, work of fracture (WOF) and strain energy release (SIc) [5, 6], KIc appears to be unnecessary. However, the large body of previous KIc experimental test results found in the literature offers the opportunity for continued meta-analysis with other more practical and accurate fracture toughness results using energy methods and numerical integration. Therefore, KIc is derived from the classical Griffith Crack Theory [6] to include SIc as a more accurate term for the strain energy release rate (𝒢Ic), along with crack surface energy (γ), crack length (a), modulus (E), applied stress (σ), Y, crack-tip plastic zone defect region (rp) and yield strength (σys), all of which can be determined from load and deflection data. Polymer matrix discontinuous quartz fiber-reinforced composites were prepared for flexural mechanical testing to accentuate toughness differences, comprising 3 mm fibers at volume percentages from 0 to 54.0 vol% and, at 28.2 vol%, fiber lengths from 0.0 to 6.0 mm. Results provided a new correction factor and regression analyses between several numerical integration fracture toughness test methods to support KIc results. Further, accurate bulk KIc experimental values are compared with empirical test results found in the literature. Also, several fracture toughness mechanisms…
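The classical relation underlying the KIc discussion above is K_Ic = Y·σ·√(πa). A worked numerical sketch with illustrative values, not the paper's composite data:

```python
import math

def k_ic(sigma, a, Y=1.0):
    """Classical stress intensity factor K_Ic = Y * sigma * sqrt(pi * a).
    sigma: applied stress (MPa), a: crack length (m), Y: geometry
    correction factor (the term the abstract says can reach ~3)."""
    return Y * sigma * math.sqrt(math.pi * a)

# usage: a 1 mm crack under 100 MPa with an assumed Y of 1.12
k = k_ic(sigma=100.0, a=1e-3, Y=1.12)  # units: MPa*sqrt(m)
```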
Radiative corrections to 0+-0+ β transitions
NASA Astrophysics Data System (ADS)
Jaus, W.; Rasche, G.
1987-06-01
We reexamine and refine our former analysis of electromagnetic corrections to 0+-0+ β transitions. The disagreement with a recent approximate calculation of Sirlin and Zucchini is due to an error in our earlier numerical computation. The new results lead to much better agreement between the Ft values of the eight accurately studied decays. We find an average value of Ft = 3072.4 ± 1.6 s.
Automated misspelling detection and correction in clinical free-text records.
Lai, Kenneth H; Topaz, Maxim; Goss, Foster R; Zhou, Li
2015-06-01
Accurate electronic health records are important for clinical care and research as well as ensuring patient safety. It is crucial for misspelled words to be corrected in order to ensure that medical records are interpreted correctly. This paper describes the development of a spelling correction system for medical text. Our spell checker is based on Shannon's noisy channel model, and uses an extensive dictionary compiled from many sources. We also use named entity recognition, so that names are not wrongly corrected as misspellings. We apply our spell checker to three different types of free-text data: clinical notes, allergy entries, and medication orders; and evaluate its performance on both misspelling detection and correction. Our spell checker achieves detection performance of up to 94.4% and correction accuracy of up to 88.2%. We show that high-performance spelling correction is possible on a variety of clinical documents. PMID:25917057
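The noisy channel model mentioned in the abstract selects the dictionary word w maximizing P(w)·P(x|w) for an observed token x. The toy error model below, a likelihood decaying with edit distance, is an assumption standing in for the authors' trained channel model:

```python
def edit_distance(a, b):
    """Levenshtein distance via standard dynamic programming."""
    d = [[i + j if 0 in (i, j) else 0 for j in range(len(b) + 1)]
         for i in range(len(a) + 1)]
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            d[i][j] = min(d[i-1][j] + 1, d[i][j-1] + 1,
                          d[i-1][j-1] + (a[i-1] != b[j-1]))
    return d[len(a)][len(b)]

def correct(x, lexicon):
    """Noisy channel decision: argmax_w P(w) * P(x|w).
    P(w) ~ corpus count; P(x|w) modeled crudely as 0.1**edits."""
    return max(lexicon, key=lambda w: lexicon[w] * 0.1 ** edit_distance(x, w))

# usage with a tiny medical lexicon (counts are made up)
lexicon = {"penicillin": 900, "penicillium": 100}
best = correct("penicilin", lexicon)
```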
Psychiatric stigma in correctional facilities.
Miller, R D; Metzner, J L
1994-01-01
While legislatively sanctioned discrimination against the mentally ill in general society has largely disappeared, it persists in correctional systems where inmates are denied earn-time reductions in sentences, parole opportunities, placement in less restrictive facilities, and opportunities to participate in sentence-reducing programs because of their status as psychiatric patients or their need for psychotropic medications. The authors discuss the prevalence of such problems from detailed examinations of several correctional systems and from the results of a national survey of correctional medical directors.
Software for Correcting the Dynamic Error of Force Transducers
Miyashita, Naoki; Watanabe, Kazuhide; Irisa, Kyouhei; Iwashita, Hiroshi; Araki, Ryosuke; Takita, Akihiro; Yamaguchi, Takao; Fujii, Yusaku
2014-01-01
Software which corrects the dynamic error of force transducers in impact force measurements using their own output signal has been developed. The software corrects the output waveform of the transducers using the output waveform itself, estimates its uncertainty and displays the results. In the experiment, the dynamic error of three transducers of the same model are evaluated using the Levitation Mass Method (LMM), in which the impact forces applied to the transducers are accurately determined as the inertial force of the moving part of the aerostatic linear bearing. The parameters for correcting the dynamic error are determined from the results of one set of impact measurements of one transducer. Then, the validity of the obtained parameters is evaluated using the results of the other sets of measurements of all the three transducers. The uncertainties in the uncorrected force and those in the corrected force are also estimated. If manufacturers determine the correction parameters for each model using the proposed method, and provide the software with the parameters corresponding to each model, then users can obtain the waveform corrected against dynamic error and its uncertainty. The present status and the future prospects of the developed software are discussed in this paper. PMID:25004158
How well does multiple OCR error correction generalize?
NASA Astrophysics Data System (ADS)
Lund, William B.; Ringger, Eric K.; Walker, Daniel D.
2013-12-01
As the digitization of historical documents, such as newspapers, becomes more common, the need of the archive patron for accurate digital text from those documents increases. Building on our earlier work, the contributions of this paper are: 1. demonstrating the applicability of novel methods for correcting optical character recognition (OCR) on disparate data sets, including a new synthetic training set, 2. enhancing the correction algorithm with novel features, and 3. assessing the data requirements of the correction learning method. First, we correct errors using conditional random fields (CRF) trained on synthetic training data sets in order to demonstrate the applicability of the methodology to unrelated test sets. Second, we show the strength of lexical features from the training sets on two unrelated test sets, yielding a relative reduction in word error rate on the test sets of 6.52%. New features capture the recurrence of hypothesis tokens and yield an additional relative reduction in WER of 2.30%. Further, we show that only 2.0% of the full training corpus of over 500,000 feature cases is needed to achieve correction results comparable to those using the entire training corpus, effectively reducing both the complexity of the training process and the learned correction model.
NASA Astrophysics Data System (ADS)
Omori, Takayuki; Sano, Katsuhiro; Yoneda, Minoru
2014-05-01
This paper presents new correction approaches for "early" radiocarbon ages to reconstruct the Paleolithic absolute chronology. To discuss the time-space distribution of the replacement of archaic humans, including Neanderthals in Europe, by modern humans, a massive dataset covering a wide area is needed. Today, several radiocarbon databases focused on the Paleolithic have been published and used for chronological studies. From the viewpoint of current analytical technology, however, these databases contain unreliable results that make interpretation of radiocarbon dates difficult. Most of these unreliable ages were published in the early days of radiocarbon analysis. In recent years, new analytical methods to determine highly accurate dates have been developed. Ultrafiltration and ABOx-SC methods, as new sample pretreatments for bone and charcoal respectively, have attracted attention because they can remove imperceptible contaminants and yield reliably accurate ages. To evaluate the reliability of "early" data, we investigated the differences and variability of radiocarbon ages under different pretreatments, and attempted to develop correction functions for assessing that reliability. The reliability of the corrected ages can be expected to increase, allowing them to be applied to chronological research together with recent ages. Here, we introduce the methodological frameworks and archaeological applications.
77 FR 3800 - Accurate NDE & Inspection, LLC; Confirmatory Order
Federal Register 2010, 2011, 2012, 2013, 2014
2012-01-25
... COMMISSION Accurate NDE & Inspection, LLC; Confirmatory Order In the Matter of Accurate NDE & Docket: 150... request ADR with the NRC in an attempt to resolve issues associated with this matter. In response, on August 9, 2011, Accurate NDE requested ADR to resolve this matter with the NRC. On September 28,...
Corrective optics space telescope axial replacement alignment system
NASA Astrophysics Data System (ADS)
Slusher, Robert B.; Satter, Michael J.; Kaplan, Michael L.; Martella, Mark A.; Freymiller, Ed D.; Buzzetta, Victor
1993-10-01
To facilitate the accurate placement and alignment of the corrective optics space telescope axial replacement (COSTAR) structure, mechanisms, and optics, the COSTAR Alignment System (CAS) has been designed and assembled. It consists of a 20-foot optical bench, support structures for holding and aligning the COSTAR instrument at various stages of assembly, a focal plane target fixture (FPTF) providing an accurate reference to the as-built Hubble Space Telescope (HST) focal plane, two alignment translation stages with interchangeable alignment telescopes and alignment lasers, and a Zygo Mark IV interferometer with a reference sphere custom designed to allow accurate double-pass operation of the COSTAR correction optics. The system is used to align the fixed optical bench (FOB), the track, the deployable optical bench (DOB), the mechanisms, and the optics to ensure that the correction mirrors are all located in the required positions and orientations on-orbit after deployment. In this paper, the layout of the CAS is presented and the various alignment operations are listed along with the relevant alignment requirements. In addition, calibration of the necessary support structure elements and alignment aids is described, including the two-axis translation stages, the latch positions, the FPTF, and the COSTAR-mounted alignment cubes.
An accurate and practical method for inference of weak gravitational lensing from galaxy images
NASA Astrophysics Data System (ADS)
Bernstein, Gary M.; Armstrong, Robert; Krawiec, Christina; March, Marisa C.
2016-07-01
We demonstrate highly accurate recovery of weak gravitational lensing shear using an implementation of the Bayesian Fourier Domain (BFD) method proposed by Bernstein & Armstrong, extended to correct for selection biases. The BFD formalism is rigorously correct for Nyquist-sampled, background-limited, uncrowded images of background galaxies. BFD does not assign shapes to galaxies, instead compressing the pixel data D into a vector of moments M, such that we have an analytic expression for the probability P(M|g) of obtaining the observations with gravitational lensing distortion g along the line of sight. We implement an algorithm for conducting BFD's integrations over the population of unlensed source galaxies which measures ≈10 galaxies s⁻¹ core⁻¹ with good scaling properties. Initial tests of this code on ≈10⁹ simulated lensed galaxy images recover the simulated shear to a fractional accuracy of m = (2.1 ± 0.4) × 10⁻³, substantially more accurate than has been demonstrated previously for any generally applicable method. Deep sky exposures generate a sufficiently accurate approximation to the noiseless, unlensed galaxy population distribution assumed as input to BFD. Potential extensions of the method include simultaneous measurement of magnification and shear; multiple-exposure, multiband observations; and joint inference of photometric redshifts and lensing tomography.
Benchmark data base for accurate van der Waals interaction in inorganic fragments
NASA Astrophysics Data System (ADS)
Brndiar, Jan; Stich, Ivan
2012-02-01
A range of inorganic materials, such as Sb, As, P, S, and Se, are built from van der Waals (vdW) interacting units forming the crystals, which neither the standard DFT GGA description nor cheap quantum chemistry methods, such as MP2, describe correctly. We use this database, for which we have performed ultra-accurate CCSD(T) calculations in the complete basis set limit, to test alternative approximate theories, such as Grimme [1], Langreth-Lundqvist [2], and Tkatchenko-Scheffler [3]. While none of these theories gives an entirely correct description, Grimme consistently provides more accurate results than Langreth-Lundqvist, which tends to overestimate the distances and underestimate the interaction energies for this set of systems. In contrast, Tkatchenko-Scheffler appears to yield a surprisingly accurate, computationally cheap and convenient description applicable also to systems with appreciable charge transfer. [1] S. Grimme, J. Comp. Chem. 27, 1787 (2006) [2] K. Lee, et al., Phys. Rev. B 82, 081101(R) (2010) [3] A. Tkatchenko and M. Scheffler, Phys. Rev. Lett. 102, 073005 (2009).
Spectroscopically Accurate Line Lists for Application in Sulphur Chemistry
NASA Astrophysics Data System (ADS)
Underwood, D. S.; Azzam, A. A. A.; Yurchenko, S. N.; Tennyson, J.
2013-09-01
for inclusion in standard atmospheric and planetary spectroscopic databases. The methods involved in computing the ab initio potential energy and dipole moment surfaces involved minor corrections to the equilibrium S-O distance, which produced good agreement with experimentally determined rotational energies. However, the purely ab initio method was not able to reproduce an equally spectroscopically accurate representation of vibrational motion. We therefore present an empirical refinement to this original, ab initio potential surface, based on the experimental data available. This will not only be used to reproduce the room-temperature spectrum to a greater degree of accuracy, but is essential in the production of a larger, accurate line list necessary for the simulation of higher temperature spectra: we aim for coverage suitable for T ≤ 800 K. Our preliminary studies on SO3 have also shown it to exhibit an interesting "forbidden" rotational spectrum and "clustering" of rotational states; to our knowledge this phenomenon has not been observed in other examples of trigonal planar molecules and is also an investigative avenue we wish to pursue. Finally, the IR absorption bands for SO2 and SO3 exhibit a strong overlap, and the inclusion of SO2 as a complement to our studies is something that we will be interested in doing in the near future.
Reflection error correction of gas turbine blade temperature
NASA Astrophysics Data System (ADS)
Kipngetich, Ketui Daniel; Feng, Chi; Gao, Shan
2016-03-01
Accurate measurement of gas turbine blades' temperature is one of the greatest challenges encountered in gas turbine temperature measurements. Within an enclosed gas turbine environment with surfaces of varying temperature and low emissivities, a new challenge is introduced into the use of radiation thermometers due to the problem of reflection error. A method for correcting this error has been proposed and demonstrated in this work through computer simulation and experiment. The method assumed that the emissivities of all surfaces exchanging thermal radiation are known. Simulations were carried out considering targets with low and high emissivities of 0.3 and 0.8 respectively, while experimental measurements were carried out on blades with an emissivity of 0.76. Simulated results showed the possibility of achieving an error of less than 1%, while the experimental result corrected the error to 1.1%. It was thus concluded that the method is appropriate for correcting the reflection error commonly encountered in temperature measurement of gas turbine blades.
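The reflection-correction idea can be sketched as follows: the detected radiance mixes target emission with reflected surroundings, L = ε·B(T_target) + (1−ε)·B(T_env), so subtract the reflected part and invert. The grey-body total-radiance form of B used below (σT⁴) is a schematic stand-in for the spectral radiance a radiation thermometer actually measures:

```python
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def corrected_temperature(L_meas, eps, T_env):
    """Invert L_meas = eps*sigma*T^4 + (1-eps)*sigma*T_env^4 for T.
    Assumes known emissivity eps and surroundings temperature T_env."""
    L_emitted = L_meas - (1.0 - eps) * SIGMA * T_env**4
    return (L_emitted / (eps * SIGMA)) ** 0.25

# round-trip check: synthesize radiance for a 1200 K blade near 900 K walls
eps, T_true, T_env = 0.76, 1200.0, 900.0
L = eps * SIGMA * T_true**4 + (1 - eps) * SIGMA * T_env**4
T_rec = corrected_temperature(L, eps, T_env)
```

Without the correction, inverting L directly as a pure emitter would overestimate the blade temperature, which is the reflection error the abstract quantifies.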
Three-Dimensional Turbulent RANS Adjoint-Based Error Correction
NASA Technical Reports Server (NTRS)
Park, Michael A.
2003-01-01
Engineering problems commonly require functional outputs of computational fluid dynamics (CFD) simulations with specified accuracy. These simulations are performed with limited computational resources. Computable error estimates offer the possibility of quantifying accuracy on a given mesh and predicting a fine grid functional on a coarser mesh. Such an estimate can be computed by solving the flow equations and the associated adjoint problem for the functional of interest. An adjoint-based error correction procedure is demonstrated for transonic inviscid and subsonic laminar and turbulent flow. A mesh adaptation procedure is formulated to target uncertainty in the corrected functional and terminate when the error remaining in the calculation is less than a user-specified error tolerance. This adaptation scheme is shown to yield anisotropic meshes with corrected functionals that are more accurate for a given number of grid points than isotropically adapted and uniformly refined grids.
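The core mechanism can be shown on a linear model problem (a sketch under stated assumptions, not the paper's RANS implementation). For a discrete system A u = f and an output J(u) = gᵀu, an inexact solution u_h can be corrected with the adjoint solution ψ of Aᵀψ = g via J_corr = gᵀu_h + ψᵀ(f − A u_h); for a linear problem this recovers the exact output.

```python
import numpy as np

# Minimal linear illustration of adjoint-based functional correction.
rng = np.random.default_rng(0)
n = 20
A = np.eye(n) + 0.1 * rng.standard_normal((n, n))  # well-conditioned system
f = rng.standard_normal(n)
g = rng.standard_normal(n)                          # functional weights

u_exact = np.linalg.solve(A, f)
u_h = u_exact + 0.05 * rng.standard_normal(n)       # "coarse" inexact solution

psi = np.linalg.solve(A.T, g)                       # adjoint solve
residual = f - A @ u_h                              # primal residual
J_h = g @ u_h                                       # uncorrected output
J_corr = J_h + psi @ residual                       # adjoint-corrected output
```

In the nonlinear CFD setting the correction is no longer exact, but the same residual-weighting structure yields the computable error estimate that drives the adaptation.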
Bowman, Caitlin R; Dennis, Nancy A
2015-06-01
Successful memory retrieval is predicated not only on recognizing old information, but also on correctly rejecting new information (lures) in order to avoid false memories. Correctly rejecting lures is more difficult when they are perceptually or semantically related to information presented at study as compared to when lures are distinct from previously studied information. This behavioral difference suggests that the cognitive and neural basis of correct rejections differs with respect to the relatedness between lures and studied items. The present study sought to identify neural activity that aids in suppressing false memories by examining the network of brain regions underlying correct rejection of related and unrelated lures. Results showed neural overlap in the right hippocampus and anterior parahippocampal gyrus associated with both related and unrelated correct rejections, indicating that some neural regions support correctly rejecting lures regardless of their semantic/perceptual characteristics. Direct comparisons between related and unrelated correct rejections showed that unrelated correct rejections were associated with greater activity in bilateral middle and inferior temporal cortices, regions that have been associated with categorical processing and semantic labels. Related correct rejections showed greater activation in visual and lateral prefrontal cortices, which have been associated with perceptual processing and retrieval monitoring. Thus, while related and unrelated correct rejections show some common neural correlates, related correct rejections are driven by greater perceptual processing whereas unrelated correct rejections show greater reliance on salient categorical cues to support quick and accurate memory decisions. PMID:25862563
LETTER TO THE EDITOR: Accurate Hylleraas-like functions for the He atom with correct cusp conditions
NASA Astrophysics Data System (ADS)
Rodriguez, K. V.; Gasaneo, G.
2005-08-01
In this letter, a set of ground state wavefunctions for the He atom is given. The functions are constructed in terms of exponential and power series as similar as possible to the Hylleraas functions of Chandrasekhar and Herzberg (1955 Phys. Rev. 98 1050). The accuracy of the calculated energies is found to be about 10-4 au and all the cusp conditions at the Coulomb singularities are satisfied. The nine-parameter functions proposed here are found to have better local energy than those given by the 6 and 14 terms Hylleraas functions of Chandrasekhar. The mean value of various functions evaluated with the different proposals shows their good quality. These properties highly qualify the function to be used as an alternative to the Chandrasekhar functions in collisional problems. The whole set of functions given here can be considered as an alternative to the proposals of Chandrasekhar (1955 Phys. Rev. 98 1050), Bonham and Kohl (1966 J. Chem. Phys. 45 2471) and Le Sech (1997 J. Phys. B: At. Mol. Opt. Phys. 30 L47).
NASA Technical Reports Server (NTRS)
Durden, S.; Haddad, Z.
1998-01-01
Observations of Doppler velocity of hydrometeors from airborne Doppler weather radars normally contain a component due to the aircraft motion. Accurate hydrometeor velocity measurements thus require correction by subtracting this velocity from the observed velocity.
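The subtraction itself is a projection of the platform velocity onto the beam direction. The following is a minimal sketch of that step (illustrative only, not the instrument's actual processing chain):

```python
import math
import numpy as np

def platform_corrected_velocity(v_observed, v_aircraft, beam_unit):
    """Remove the along-beam aircraft-motion component from an observed
    Doppler velocity (m/s). beam_unit is normalized defensively."""
    beam_unit = np.asarray(beam_unit, dtype=float)
    beam_unit = beam_unit / np.linalg.norm(beam_unit)
    return v_observed - float(np.dot(v_aircraft, beam_unit))

# Aircraft flying 120 m/s along x, beam pointing straight down (-z):
# no along-beam platform component, so the observation is unchanged.
v1 = platform_corrected_velocity(5.0, np.array([120.0, 0.0, 0.0]),
                                 [0.0, 0.0, -1.0])
# A beam tilted 30 degrees forward of nadir picks up 120*sin(30) = 60 m/s
# of platform motion, which the correction removes.
beam = [math.sin(math.radians(30)), 0.0, -math.cos(math.radians(30))]
v2 = platform_corrected_velocity(65.0, np.array([120.0, 0.0, 0.0]), beam)
```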
Fully 3D refraction correction dosimetry system
NASA Astrophysics Data System (ADS)
Manjappa, Rakesh; Sharath Makki, S.; Kumar, Rajesh; Mohan Vasu, Ram; Kanhirodan, Rajan
2016-02-01
medium is 71.8%, an increase of 6.4% compared to that achieved using the conventional ART algorithm. Smaller diameter dosimeters are scanned in dry air using a wide-angle lens that collects refracted light. The images reconstructed using cone-beam geometry are seen to deteriorate in some planes, as those regions are not scanned. Refraction correction is important and needs to be taken into consideration to achieve quantitatively accurate dose reconstructions. Refraction modeling is crucial in array-based scanners, as it is not possible to identify refracted rays in the sinogram space.
Sansone, Giuseppe; Maschio, Lorenzo; Usvyat, Denis; Schütz, Martin; Karttunen, Antti
2016-01-01
The black phosphorus (black-P) crystal is formed of covalently bound layers of phosphorene stacked together by weak van der Waals interactions. An experimental measurement of the exfoliation energy of black-P is not available presently, making theoretical studies the most important source of information for the optimization of phosphorene production. Here, we provide an accurate estimate of the exfoliation energy of black-P on the basis of multilevel quantum chemical calculations, which include the periodic local Møller-Plesset perturbation theory of second order, augmented by higher-order corrections, which are evaluated with finite clusters mimicking the crystal. Very similar results are also obtained by density functional theory with the D3-version of Grimme's empirical dispersion correction. Our estimate of the exfoliation energy for black-P of -151 meV/atom is substantially larger than that of graphite, suggesting the need for different strategies to generate isolated layers for these two systems. PMID:26651397
Dinelle, Katie; Cheng, Ju-Chieh; Shilov, Mikhail A.; Segars, William P.; Lidstone, Sarah C.; Blinder, Stephan; Rousset, Olivier G.; Vajihollahi, Hamid; Tsui, Benjamin M. W.; Wong, Dean F.; Sossi, Vesna
2010-01-01
With continuing improvements in spatial resolution of positron emission tomography (PET) scanners, small patient movements during PET imaging become a significant source of resolution degradation. This work develops and investigates a comprehensive formalism for accurate motion-compensated reconstruction which at the same time is very feasible in the context of high-resolution PET. In particular, this paper proposes an effective method to incorporate presence of scattered and random coincidences in the context of motion (which is similarly applicable to various other motion correction schemes). The overall reconstruction framework takes into consideration missing projection data which are not detected due to motion, and additionally, incorporates information from all detected events, including those which fall outside the field-of-view following motion correction. The proposed approach has been extensively validated using phantom experiments as well as realistic simulations of a new mathematical brain phantom developed in this work, and the results for a dynamic patient study are also presented. PMID:18672420
NASA Technical Reports Server (NTRS)
Lee, Timothy J.; Dateo, Christopher E.; Schwenke, David W.; Chaban, Galina M.
2005-01-01
Accurate quartic force fields have been determined for the CCH- and NH2- molecular anions using the singles and doubles coupled-cluster method that includes a perturbational estimate of the effects of connected triple excitations, CCSD(T). Very large one-particle basis sets have been used, including diffuse functions and up through g-type functions. Correlation of the nitrogen and carbon core electrons has been included, as well as other "small" effects such as the diagonal Born-Oppenheimer correction, basis set extrapolation, and corrections for higher-order correlation effects and scalar relativistic effects. Fundamental vibrational frequencies have been computed using standard second-order perturbation theory as well as variational methods. Comparison with the available experimental data is presented and discussed. The implications of our research for the astronomical observation of molecular anions will be discussed.
Automated Fast and Accurate Display Calibration Using ADT Compensated LCD for Mobile Phone
NASA Astrophysics Data System (ADS)
Han, Chan-Ho; Park, Kil-Houm
Gamma correction is an essential and time-consuming task for every display device, such as CRTs and LCDs, and the gray-scale CCT reproduction of most LCDs differs considerably from that of a standard CRT. An automated, fast, and accurate display adjustment method and system for gamma correction and for constant gray-scale CCT calibration of mobile phone LCDs is presented in this paper. We developed a test-pattern display and register-control program on the mobile phone and an automatic measurement program on a computer using a spectroradiometer. The proposed system maintains the given gamma and CCT values accurately. In addition, the system makes fast adjustment of a mobile phone LCD possible within one hour.
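The remapping at the heart of gamma correction can be sketched as a lookup table: given a measured panel gamma, each input gray level is remapped so the effective response follows the target gamma. The function below is an illustrative sketch (the target gamma of 2.2 and 8-bit depth are assumptions, not taken from the paper):

```python
def gamma_correction_lut(measured_gamma, target_gamma=2.2, levels=256):
    """Build a LUT remapping gray levels so that a panel with the given
    measured gamma exhibits the target gamma overall (illustrative)."""
    lut = []
    for v in range(levels):
        x = v / (levels - 1)
        y = x ** target_gamma                 # desired relative luminance
        corrected = y ** (1.0 / measured_gamma)  # level the panel needs
        lut.append(round(corrected * (levels - 1)))
    return lut

lut = gamma_correction_lut(1.8)  # e.g. a panel measured at gamma 1.8
```

Driving the panel through this LUT makes its end-to-end response approximately x**2.2, up to 8-bit quantization.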
Madebene, Bruno; Ulusoy, Inga; Mancera, Luis; Scribano, Yohann; Chulkov, Sergey
2011-01-01
Summary We present a theoretical framework for the computation of anharmonic vibrational frequencies for large systems, with a particular focus on determining adsorbate frequencies from first principles. We give a detailed account of our local implementation of the vibrational self-consistent field approach and its correlation corrections. We show that our approach is robust and accurate and can be easily deployed on computational grids in order to provide an efficient computational tool. We also present results on the vibrational spectrum of hydrogen fluoride on pyrene, on the thiophene molecule in the gas phase, and on small neutral gold clusters. PMID:22003450
Taverna, Ettore; Ufenast, Henri; Broffoni, Laura; Garavaglia, Guido
2013-01-01
The Latarjet procedure is a confirmed method for the treatment of shoulder instability in the presence of bone loss. It is a challenging procedure for which a key point is the correct placement of the coracoid graft onto the glenoid neck. We here present our technique for an arthroscopically assisted Latarjet procedure with a new drill guide, permitting an accurate and reproducible positioning of the coracoid graft, with optimal compression of the graft onto the glenoid neck due to the perfect position of the screws: perpendicular to the graft and the glenoid neck and parallel between them. PMID:24167405
Delegation in Correctional Nursing Practice.
Tompkins, Frances
2016-07-01
Correctional nurses face daily challenges as a result of their work environment. Common challenges include availability of resources for appropriate care delivery, negotiating with custody staff for access to patients, adherence to scope of practice standards, and working with a varied staffing mix. Professional correctional nurses must consider the educational backgrounds and competency of other nurses and assistive personnel in planning for care delivery. Budgetary constraints and varied staff preparation can be a challenge for the professional nurse. Adequate care planning requires understanding the educational level and competency of licensed and unlicensed staff. Delegation is the process of assessing patient needs and transferring responsibility for care to appropriately educated and competent staff. Correctional nurses can benefit from increased knowledge about delegation. PMID:27302707
Hubeny, Veronika; Maloney, Alexander; Rangamani, Mukund
2005-02-07
We investigate the geometry of four dimensional black hole solutions in the presence of stringy higher curvature corrections to the low energy effective action. For certain supersymmetric two charge black holes these corrections drastically alter the causal structure of the solution, converting seemingly pathological null singularities into timelike singularities hidden behind a finite area horizon. We establish, analytically and numerically, that the string-corrected two-charge black hole metric has the same Penrose diagram as the extremal four-charge black hole. The higher derivative terms lead to another dramatic effect -- the gravitational force exerted by a black hole on an inertial observer is no longer purely attractive! The magnitude of this effect is related to the size of the compactification manifold.
Error Field Correction in ITER
Park, Jong-kyu; Boozer, Allen H.; Menard, Jonathan E.; Schaffer, Michael J.
2008-05-22
A new method for correcting magnetic field errors in the ITER tokamak is developed using the Ideal Perturbed Equilibrium Code (IPEC). The dominant external magnetic field for driving islands is shown to be localized to the outboard midplane for three ITER equilibria that represent the projected range of operational scenarios. The coupling matrices between the poloidal harmonics of the external magnetic perturbations and the resonant fields on the rational surfaces that drive islands are combined for different equilibria and used to determine an ordered list of the dominant errors in the external magnetic field. It is found that efficient and robust error field correction is possible with a fixed setting of the correction currents relative to the currents in the main coils across the range of ITER operating scenarios that was considered.
When correction turns positive: processing corrective prosody in Dutch.
Dimitrova, Diana V; Stowe, Laurie A; Hoeks, John C J
2015-01-01
Current research on spoken language does not provide a consistent picture as to whether prosody, the melody and rhythm of speech, conveys a specific meaning. Perception studies show that English listeners assign meaning to prosodic patterns, and, for instance, associate some accents with contrast, whereas Dutch listeners behave more controversially. In two ERP studies we tested how Dutch listeners process words carrying two types of accents, which either provided new information (new information accents) or corrected information (corrective accents), both in single sentences (experiment 1) and after corrective and new information questions (experiment 2). In both experiments corrective accents elicited a sustained positivity as compared to new information accents, which started earlier in context than in single sentences. The positivity was not modulated by the nature of the preceding question, suggesting that the underlying neural mechanism likely reflects the construction of an interpretation to the accented word, either by identifying an alternative in context or by inferring it when no context is present. Our experimental results provide strong evidence for inferential processes related to prosodic contours in Dutch.
1992-12-11
Last month, the U.S. Postal Service (USPS) prompted a 13 November Random Sample naming a group of scientists whose faces were appearing, USPS said, on stamps belonging to its Black Heritage Series. Among them: chemist Percy Lavon Julian; George Washington Carver; physician Charles R. Drew; astronomer and mathematician Benjamin Banneker; and inventor Jan Matzeliger. Science readers knew better. Two of the quintet appeared years ago: a stamp bearing Carver's picture was issued in 1948, and Drew appeared in the Great Americans Series in 1981. PMID:17831650
2015-03-01
In the January 2015 issue of Cyberpsychology, Behavior, and Social Networking (vol. 18, no. 1, pp. 3–7), the article "Individual Differences in Cyber Security Behaviors: An Examination of Who Is Sharing Passwords." by Prof. Monica Whitty et al., has an error in wording in the abstract. The sentence in question was originally printed as: Contrary to our hypotheses, we found older people and individuals who score high on self-monitoring were more likely to share passwords. It should read: Contrary to our hypotheses, we found younger people and individuals who score high on self-monitoring were more likely to share passwords. The authors wish to apologize for the error. PMID:25751054
NASA Astrophysics Data System (ADS)
2009-12-01
Due to an error in converting energy data from "quads" (one quadrillion, or 10^15, British thermal units) to watt-hours, the opening paragraph of Grant's article contained several incorrect values for world energy consumption.
1991-05-01
Contrary to what we reported, the horned dinosaur Chasmosaurus (Science, 12 April, p. 207) did not have the largest skull of any land animal. Paleontologist Paul Sereno of the University of Chicago says that honor belongs to Triceratops, another member of the family Ceratopsidae.
1991-11-29
Because of a production error, the photographs of Pierre Chambon and Harald zur Hausen, which appeared on pages 1116 and 1117 of last week's issue (22 November), were transposed. Here's what you should have seen: Chambon is on the left, zur Hausen on the right.
NASA Astrophysics Data System (ADS)
2016-09-01
The feature article “Neutrons for new drugs” (August pp26–29) stated that neutron crystallography was used to determine the structures of “well-known complex biological molecules such as lysine, insulin and trypsin”.
NASA Astrophysics Data System (ADS)
2004-05-01
1. The first photograph on p12 of News in Physics Education January 2004 is of Prof. Paul Black and not Prof. Jonathan Osborne, as stated. 2. The review of Flowlog on p209 of the March 2004 issue wrongly gives the maximum sampling rate of the analogue inputs as 25 kHz (40 ms) instead of 25 kHz (40 µs) and the digital inputs as 100 kHz (10 ms) instead of 100 kHz (10 µs). 3. The letter entitled 'A trial of two energies' by Eric McIldowie on pp212-4 of the March 2004 issue was edited to fit the space available. We regret that a few small errors were made in doing this. Rather than detail these, the interested reader can access the whole of the original letter as a Word file from the link below.
NASA Astrophysics Data System (ADS)
2013-08-01
In the 9 July issue of Eos, the feature "Peak Oil and Energy Independence: Myth and Reality"(Eos, 94(28), 245-246, doi:10.1002/2013EO280001) gave the price of natural gas in terms of dollars per Mcf and defined Mcf to be million cubic feet. However, Mcf means thousand cubic feet—the M comes from the Latin mille (thousand).
1992-05-15
In the 24 April "Inside AAAS" article "AAAS organizes more meetings of the mind" (p. 548), it is stated incorrectly that Paul Berg of Stanford University will be giving the keynote address and that Helen Donis-Keller of Washington University will be presenting a paper at the Science Innovation '92 meeting in San Francisco (21 to 25 July 1992). The Science Innovation '92 program was tentative at the time the article was written. Joseph Martin of the University of California, San Francisco, will deliver the keynote address on one of the major themes of the meeting, "Mapping the Human Brain." Helen Donis-Keller and Paul Berg were invited to speak but will not be on the program this year.
NASA Astrophysics Data System (ADS)
1999-11-01
Synsedimentary deformation in the Jurassic of southeastern Utah—A case of impact shaking? COMMENT Geology, v. 27, p. 661 (July 1999) The sentence on p. 661, first column, second paragraph, line one, should read: The 1600 m of Pennsylvanian Paradox Formation is 75-90% salt in Arches National Park. The sentence on p. 661, second column, third paragraph, line seven, should read: This high-pressured hydrothermal solution created the clastic dikes, chert nodules from reprecipitated siliceous cement that have been called “siliceous impactites” (Kriens et al., 1997), and much of the present structure at Upheaval Dome by further faulting.
Atmospheric Corrections in Coastal Altimetry
NASA Astrophysics Data System (ADS)
Antonita, Maria; Kumar, Raj
2012-07-01
The range measurements from the altimeter are associated with a large number of geophysical corrections which need special attention near coasts and in shallow water regions. The corrections due to the ionosphere, the dry and wet troposphere, and the sea state are of primary importance in altimetry. Water vapor dominates the wet tropospheric correction, which is more complex near coasts owing to its higher spatio-temporal variations and thus needs careful attention. In addition, rain is one of the major atmospheric phenomena that attenuate the backscatter measured by the altimeter, which in turn affects the altimeter-derived wind and wave measurements. Thus during rain events utmost care should be taken while deriving altimeter wind speeds and wave heights. The first objective of the present study is to compare the water vapor corrections estimated from radiosonde measurements near coastal regions with the model-estimated corrections applied to the altimeter range measurements. The analysis has been performed on the Coastal Altimeter products provided by PISTACH to observe these corrections. The second objective is to estimate the rain rate using altimeter backscatter measurements. The differential attenuation of the Ku band relative to the C band due to rain has been utilized to identify rain events and to estimate the amount of rainfall. JASON-2 altimeter data during two tropical cyclonic events over the Bay of Bengal have been used for this purpose. An attempt is made to compare the rain rate estimated from altimeter measurements with other available collocated satellite observations such as KALPANA and TRMM-TMI. The results are encouraging and can be used to provide valid rain flags in the altimeter products in addition to the radiometer rain flags.
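Retrievals of this kind typically invert a power-law relation k = a·R^b between specific attenuation k (dB/km) and rain rate R (mm/h). The sketch below shows only the inversion step; the coefficients a and b are illustrative placeholders, not calibrated Ku-band values, and the real retrieval infers the path attenuation from the Ku/C backscatter difference rather than taking it as given.

```python
# Hypothetical-constant sketch of rain-rate retrieval from path
# attenuation (a and b below are placeholders, not calibrated values).
def rain_rate_from_attenuation(two_way_atten_db, path_km, a=0.03, b=1.1):
    """Invert the k = a * R**b power law for rain rate R in mm/h,
    given a two-way path attenuation in dB over path_km of rain."""
    k = two_way_atten_db / (2.0 * path_km)  # one-way specific attenuation
    return (k / a) ** (1.0 / b)

# Round trip: a 10 mm/h rain over a 5 km path produces an attenuation
# that the inversion maps back to 10 mm/h.
atten_db = 2.0 * 0.03 * 10.0 ** 1.1 * 5.0
r_est = rain_rate_from_attenuation(atten_db, 5.0)
```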
Accurate and Timely Forecasting of CME-Driven Geomagnetic Storms
NASA Astrophysics Data System (ADS)
Chen, J.; Kunkel, V.; Skov, T. M.
2015-12-01
Wide-spread and severe geomagnetic storms are primarily caused by the ejecta of coronal mass ejections (CMEs) that impose long durations of strong southward interplanetary magnetic field (IMF) on the magnetosphere, the duration and magnitude of the southward IMF (Bs) being the main determinants of geoeffectiveness. Another important quantity to forecast is the arrival time of the expected geoeffective CME ejecta. In order to accurately forecast these quantities in a timely manner (say, 24--48 hours of advance warning time), it is necessary to calculate the evolving CME ejecta---its structure and magnetic field vector in three dimensions---using remote sensing solar data alone. We discuss a method based on the validated erupting flux rope (EFR) model of CME dynamics. It has been shown using STEREO data that the model can calculate the correct size, magnetic field, and the plasma parameters of a CME ejecta detected at 1 AU, using the observed CME position-time data alone as input (Kunkel and Chen 2010). One disparity is in the arrival time, which is attributed to the simplified geometry of the circular toroidal axis of the CME flux rope. Accordingly, the model has been extended to self-consistently include the transverse expansion of the flux rope (Kunkel 2012; Kunkel and Chen 2015). We show that the extended formulation provides a better prediction of arrival time even if the CME apex does not propagate directly toward the earth. We apply the new method to a number of CME events and compare predicted flux ropes at 1 AU to the observed ejecta structures inferred from in situ magnetic and plasma data. The EFR model also predicts the asymptotic ambient solar wind speed (Vsw) for each event, which has not been validated yet. The predicted Vsw values are tested using the ENLIL model. We discuss the minimum and sufficient required input data for an operational forecasting system for predicting the drivers of large geomagnetic storms. Kunkel, V., and Chen, J., ApJ Lett, 715, L80, 2010. Kunkel, V., Ph
Bunch mode specific rate corrections for PILATUS3 detectors
Trueb, P.; Dejoie, C.; Kobas, M.; Pattison, P.; Peake, D. J.; Radicci, V.; Sobott, B. A.; Walko, D. A.; Broennimann, C.
2015-04-09
PILATUS X-ray detectors are in operation at many synchrotron beamlines around the world. This article reports on the characterization of the new PILATUS3 detector generation at high count rates. As for all counting detectors, the measured intensities have to be corrected for the dead-time of the counting mechanism at high photon fluxes. The large number of different bunch modes at these synchrotrons as well as the wide range of detector settings presents a challenge for providing accurate corrections. To avoid the intricate measurement of the count rate behaviour for every bunch mode, a Monte Carlo simulation of the counting mechanism has been implemented, which is able to predict the corrections for arbitrary bunch modes and a wide range of detector settings. This article compares the simulated results with experimental data acquired at different synchrotrons. It is found that the usage of bunch mode specific corrections based on this simulation improves the accuracy of the measured intensities by up to 40% for high photon rates and highly structured bunch modes. For less structured bunch modes, the instant retrigger technology of PILATUS3 detectors substantially reduces the dependency of the rate correction on the bunch mode. The acquired data also demonstrate that the instant retrigger technology allows for data acquisition up to 15 million photons per second per pixel.
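A toy version of such a Monte Carlo conveys the principle (an illustrative sketch with assumed numbers, not the PILATUS3 firmware model): photons arrive as a Poisson process, arrivals within the dead-time after a counted photon are lost, and the analytic nonparalyzable model r_meas = r_true/(1 + r_true·τ) both matches the simulation and can be inverted to correct measured rates.

```python
import numpy as np

# Toy Monte Carlo of a nonparalyzable counting channel with an assumed
# dead-time; rates and exposure are illustrative, not detector specs.
rng = np.random.default_rng(1)
r_true = 5e6      # incident photons/s per pixel
tau = 120e-9      # dead-time, s (assumed)
t_total = 0.02    # simulated exposure, s

n = rng.poisson(r_true * t_total)
arrivals = np.sort(rng.uniform(0.0, t_total, n))

counted = 0
next_live = 0.0
for t in arrivals:
    if t >= next_live:         # channel is live: count and go dead
        counted += 1
        next_live = t + tau
    # photons arriving during the dead-time are simply lost

r_meas = counted / t_total
r_model = r_true / (1.0 + r_true * tau)       # analytic dead-time model
r_corrected = r_meas / (1.0 - r_meas * tau)   # inverted model: corrected rate
```

For a continuous (unstructured) fill the analytic inversion suffices; the value of the full simulation in the article is precisely that structured bunch modes break this simple formula.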
Bunch mode specific rate corrections for PILATUS3 detectors
Trueb, P.; Dejoie, C.; Kobas, M.; Pattison, P.; Peake, D. J.; Radicci, V.; Sobott, B. A.; Walko, D. A.; Broennimann, C.
2015-01-01
PILATUS X-ray detectors are in operation at many synchrotron beamlines around the world. This article reports on the characterization of the new PILATUS3 detector generation at high count rates. As for all counting detectors, the measured intensities have to be corrected for the dead-time of the counting mechanism at high photon fluxes. The large number of different bunch modes at these synchrotrons as well as the wide range of detector settings presents a challenge for providing accurate corrections. To avoid the intricate measurement of the count rate behaviour for every bunch mode, a Monte Carlo simulation of the counting mechanism has been implemented, which is able to predict the corrections for arbitrary bunch modes and a wide range of detector settings. This article compares the simulated results with experimental data acquired at different synchrotrons. It is found that the usage of bunch mode specific corrections based on this simulation improves the accuracy of the measured intensities by up to 40% for high photon rates and highly structured bunch modes. For less structured bunch modes, the instant retrigger technology of PILATUS3 detectors substantially reduces the dependency of the rate correction on the bunch mode. The acquired data also demonstrate that the instant retrigger technology allows for data acquisition up to 15 million photons per second per pixel. PMID:25931086
Achieving Algorithmic Resilience for Temporal Integration through Spectral Deferred Corrections
Grout, R. W.; Kolla, H.; Minion, M. L.; Bell, J. B.
2015-04-06
Spectral deferred corrections (SDC) is an iterative approach for constructing higher-order accurate numerical approximations of ordinary differential equations. SDC starts with an initial approximation of the solution defined at a set of Gaussian or spectral collocation nodes over a time interval and uses an iterative application of lower-order time discretizations applied to a correction equation to improve the solution at these nodes. Each deferred correction sweep increases the formal order of accuracy of the method up to the limit inherent in the accuracy defined by the collocation points. In this paper, we demonstrate that SDC is well suited to recovering from soft (transient) hardware faults in the data. A strategy where extra correction iterations are used to recover from soft errors and provide algorithmic resilience is proposed. Specifically, in this approach the iteration is continued until the residual (a measure of the error in the approximation) is small relative to the residual on the first correction iteration and changes slowly between successive iterations. We demonstrate the effectiveness of this strategy for both canonical test problems and a comprehensive situation involving a mature scientific application code that solves the reacting Navier-Stokes equations for combustion research.
DARHT Radiographic Grid Scale Correction
Warthen, Barry J.
2015-02-13
Recently it became apparent that the radiographic grid which has been used to calibrate the dimensional scale of DARHT radiographs was not centered at the location where the objects have been centered. This offset produced an error of 0.188% in the dimensional scaling of the radiographic images processed under the assumption that the grid and objects had the same center. This paper will show the derivation of the scaling correction, explain how new radiographs are being processed to account for the difference in location, and provide the details of how to correct radiographic images processed with the erroneous scale factor.
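In point-projection radiography the scale error from a misplaced calibration grid follows from the ratio of the two magnifications. The sketch below uses a hypothetical geometry (the distances are made up, not DARHT's):

```python
def scale_error(L_det, L_obj, d):
    """Fractional scale error incurred by calibrating with a grid at
    distance L_obj + d from the source instead of the object plane at
    L_obj, for a detector at L_det.  Magnification M = L_det / L_plane,
    so the error is M_true / M_grid - 1 = d / L_obj."""
    M_true = L_det / L_obj
    M_grid = L_det / (L_obj + d)
    return M_true / M_grid - 1.0
```

For example, a grid 0.2% of the object distance behind the object plane produces a 0.2% scale error, independent of the detector distance.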
Anterior endoscopic correction of scoliosis.
Picetti, George D; Ertl, Janos P; Bueff, H Ulrich
2002-04-01
Our technique of anterior endoscopic scoliosis correction demonstrates the ability to perform an anterior approach through a minimally invasive technique with minimal disruption of the local biology. The initial results appear to equal curve correction and fusion rates to those of a formal open anterior approach. Additional benefits are: 1) shortened operative time, 2) lower blood loss, 3) shortened rehabilitation time, 4) less pain, and 5) shortened hospital stays. Endoscopic technique shows great promise in the management of scoliosis curves; however, this is a technically demanding procedure that requires cross-training in endoscopic discectomy and scoliosis management as well as familiarity with the anterior approach anatomy. PMID:12389288
Accurate description of calcium solvation in concentrated aqueous solutions.
Kohagen, Miriam; Mason, Philip E; Jungwirth, Pavel
2014-07-17
Calcium is one of the biologically most important ions; however, its accurate description by classical molecular dynamics simulations is complicated by strong electrostatic and polarization interactions with surroundings due to its divalent nature. Here, we explore the recently suggested approach for effectively accounting for polarization effects via ionic charge rescaling and develop a new and accurate parametrization of the calcium dication. Comparison to neutron scattering and viscosity measurements demonstrates that our model allows for an accurate description of concentrated aqueous calcium chloride solutions. The present model should find broad use in efficient and accurate modeling of calcium in aqueous environments, such as those encountered in biological and technological applications.
Regional deconvolution method for partial volume correction in brain PET
NASA Astrophysics Data System (ADS)
Rusinek, Henry; Tsui, Wai-Hon; de Leon, Mony J.
2001-05-01
Correction of PET images for partial volume effects (PVE) is of particular utility in studies of metabolism in brain aging and brain disorders. PVE is commonly corrected using voxel-by-voxel factors obtained from a high resolution brain mask (obtained from the coregistered MR scan), after convolution with the point spread function (PSF) of the imaging system. In a recently proposed regional deconvolution (RD) method, the observed regional activity is expressed as linear combinations of the true metabolic activity. The weights are obtained by integrating the PSF over the geometric extent of the brain regions. We have analyzed the accuracy of RD and two other PVE correction algorithms under a variety of conditions using simulated PET scans. Each of the brain regions was assigned a distribution of metabolic activity, with gray matter/white matter contrast representative of subjects in several age categories. Simulations were performed over a wide range of PET resolutions. The influence of PET/MR misregistration and heterogeneity of brain metabolism were also evaluated. Our results demonstrate the importance of correcting PET metabolic images for PVE. Without such correction, the regional brain activity values are contaminated with 30-40% errors. Under most conditions studied, the accuracy of RD and of the three-compartmental method were superior to the accuracy of the two-compartmental method. Our study provides the first demonstration of the feasibility of the RD algorithm to provide accurate correction for a large number (n = 109) of brain compartments. PVE correction methods appear to be promising tools in studies of metabolism in normal brain, brain aging, and brain disorders.
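The RD idea, observed regional activities expressed as linear combinations of the true activities, amounts to solving a small linear system. A minimal sketch with a hand-built spill-over matrix standing in for the PSF integrals:

```python
def regional_deconvolution(W, observed):
    """Solve observed = W @ true for the true regional activities by
    Gauss-Jordan elimination with partial pivoting.  W[i][j] is the
    fraction of region j's activity that the PSF spills into region i
    (a toy stand-in for integrating the scanner PSF over each region)."""
    n = len(W)
    A = [row[:] + [observed[i]] for i, row in enumerate(W)]  # augmented matrix
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        for r in range(n):
            if r != col:
                factor = A[r][col] / A[col][col]
                A[r] = [a - factor * b for a, b in zip(A[r], A[col])]
    return [A[i][n] / A[i][i] for i in range(n)]
```

With a two-region toy PSF that keeps 80%/70% of activity in-region, observed values [90, 65] deconvolve back to the true [100, 50].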
A correction on coastal heads for groundwater flow models.
Lu, Chunhui; Werner, Adrian D; Simmons, Craig T; Luo, Jian
2015-01-01
We introduce a simple correction to coastal heads for constant-density groundwater flow models that contain a coastal boundary, based on previous analytical solutions for interface flow. The results demonstrate that accurate discharge to the sea in confined aquifers can be obtained by direct application of Darcy's law (for constant-density flow) if the coastal heads are corrected to ((α + 1)/α)hs - B/2α, in which hs is the mean sea level above the aquifer base, B is the aquifer thickness, and α is the density factor. For unconfined aquifers, the coastal head should be assigned the value hs√((1 + α)/α). The accuracy of using these corrections is demonstrated by consistency between the constant-density Darcy solution and variable-density flow numerical simulations. The errors introduced by adopting two previous approaches (i.e., no correction and using the equivalent fresh water head at the middle position of the aquifer to represent the hydraulic head at the coastal boundary) are evaluated. Sensitivity analysis shows that errors in discharge to the sea could be larger than 100% for typical coastal aquifer parameter ranges. The location of observation wells relative to the interface toe is a key factor controlling the estimation error, as it determines the length of aquifer in which flow is effectively constant-density relative to the length in which it is variable-density. The coastal head correction method introduced in this study facilitates the rapid and accurate estimation of the fresh water flux from a given hydraulic head measurement and allows for an improved representation of the coastal boundary condition in regional constant-density groundwater flow models.
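The quoted corrections are straightforward to apply. The sketch below implements them, with the unconfined formula reconstructed here as hs√((1 + α)/α) (verify against the original paper before relying on it):

```python
import math

def corrected_coastal_head(hs, B, alpha, confined=True):
    """Coastal-boundary head for a constant-density groundwater model.
    hs: mean sea level above the aquifer base; B: aquifer thickness;
    alpha: density factor (~40 for seawater over fresh water).
    Confined: ((alpha + 1)/alpha) * hs - B/(2*alpha).
    Unconfined: hs * sqrt((1 + alpha)/alpha)  (reconstructed form)."""
    if confined:
        return (alpha + 1) / alpha * hs - B / (2 * alpha)
    return hs * math.sqrt((1 + alpha) / alpha)
```

With hs = 10 m, B = 20 m, and α = 40, the confined correction gives exactly 10.0 m, i.e. a small but non-negligible shift from the naive assignment.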
When 95% Accurate Isn't: Exploring Bayes's Theorem
ERIC Educational Resources Information Center
CadwalladerOlsker, Todd D.
2011-01-01
Bayes's theorem is notorious for being a difficult topic to learn and to teach. Problems involving Bayes's theorem (either implicitly or explicitly) generally involve calculations based on two or more given probabilities and their complements. Further, a correct solution depends on students' ability to interpret the problem correctly. Most people…
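The classic computation behind the title: a test that is "95% accurate" (sensitivity = specificity = 0.95) applied to a rare condition yields a surprisingly low posterior. A minimal sketch:

```python
def posterior(prevalence, sensitivity, specificity):
    """P(condition | positive test) via Bayes's theorem."""
    p_pos = sensitivity * prevalence + (1 - specificity) * (1 - prevalence)
    return sensitivity * prevalence / p_pos
```

With 1% prevalence, a positive result from the "95% accurate" test implies only about a 16% chance of actually having the condition, which is the counterintuitive point such problems are built around.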
Speech Correction in the Schools.
ERIC Educational Resources Information Center
Eisenson, Jon; Ogilvie, Mardel
An introduction to the problems and therapeutic needs of school age children whose speech requires remedial attention, the text is intended for both the classroom teacher and the speech correctionist. General considerations include classification and incidence of speech defects, speech correction services, the teacher as a speaker, the mechanism…
ADMINISTRATIVE GUIDE IN SPEECH CORRECTION.
ERIC Educational Resources Information Center
HEALEY, WILLIAM C.
Written primarily for school superintendents, principals, speech clinicians, and supervisors, this guide outlines the mechanics of organizing and conducting speech correction activities in the public schools. It includes the requirements for certification of a speech clinician in Missouri and describes essential steps for the development of a…
Teaching Politically without Political Correctness.
ERIC Educational Resources Information Center
Graff, Gerald
2000-01-01
Discusses how to bring political issues into the classroom, highlighting the influence of local context and noting conservative and liberal criticisms of political correctness. Suggests the need for a different idea of how to teach politically from the advocacy pedagogy advanced by recent critical educators, explaining that bringing students into…
The Politics of Political Correctness.
ERIC Educational Resources Information Center
Minsky, Leonard
1992-01-01
This article reacts to President Bush's entry into the dispute over "political correctness" on college campuses. The paper summarizes discussions of students, faculty, and others in the Washington, D.C. area which concluded that this seeming defense of free speech is actually an attack on affirmative action and multiculturalism stemming from the…
Political Correctness and American Academe.
ERIC Educational Resources Information Center
Drucker, Peter F.
1994-01-01
Argues that today's political correctness atmosphere is a throwback to attempts made by the Nazis and Stalinists to force society into conformity. Academia, it is claimed, is being forced to conform to gain control of the institution of higher education. It is predicted that this effort will fail. (GR)
Special Language and Political Correctness.
ERIC Educational Resources Information Center
Corbett, Jenny
1994-01-01
This article looks at the way in which the language used in relation to special education needs has changed and evolved since the 1960s, based on articles published in the British special education literature. Vocabulary, images, and attitudes are discussed in the context of political correctness and its impact on behavior. (DB)
Terrain Corrections for Gravity Gradiometry
NASA Astrophysics Data System (ADS)
Huang, Ou
This study developed a geostatistical method to determine the required extent of terrain corrections for gravity gradients under criteria appropriate to different applications. We present different methods to compute the terrain corrections for gravity gradients for the cases of ground and airborne gravity gradiometry. In order to verify our geostatistical method and study the required extent for different types of terrain, we also developed a method to simulate topography based on the covariance model. The required extents were determined from the variance of the truncation error for one point, or furthermore from the variance of the truncation error difference for a pair of points, and these variances were verified against those from the deterministic method. The extent of the terrain correction was determined for ground gradiometry based on simulated, ultra-high resolution topography for very local applications, and also based on mountainous topography of large areas. For airborne gradiometry, we computed the terrain corrections and the required extent based on Air-FTG observations at Vinton Dome, LA and the Parkfield, CA area, and verified them against the results of Bell Geospace. Finally, from the mostly flat, medium rough, and mountainous areas, an empirical relationship was developed with the property that the required extent increases by a factor of 4 when the amplitude of the topographic PSD increases by a factor of 100 between mostly flat and mountainous areas; the relationship can be interpolated for other types of topography from their geostatistics.
Correcting the AGS depolarizing resonances
Ratner, L.G.
1986-01-01
For the 1986 AGS run, the technique of correcting an imperfection resonance using a beat harmonic instead of the direct harmonic was applied and found to be useful in achieving a 22 GeV/c polarized beam. Both conventional and modified techniques are explained. (LEW)
The correct "ball bearings" data.
Caroni, C
2002-12-01
The famous data on fatigue failure times of ball bearings have been quoted incorrectly from Lieblein and Zelen's original paper. The correct data include censored values, as well as non-fatigue failures that must be handled appropriately. They could be described by a mixture of Weibull distributions, corresponding to different modes of failure.
Tube dimpling tool assures accurate dip-brazed joints
NASA Technical Reports Server (NTRS)
Beuyukian, C. S.; Heisman, R. M.
1968-01-01
Portable, hand-held dimpling tool assures accurate brazed joints between tubes of different diameters. Prior to brazing, the tool performs precise dimpling and nipple forming and also provides control and accurate measuring of the height of nipples and depth of dimples so formed.
31 CFR 205.24 - How are accurate estimates maintained?
Code of Federal Regulations, 2010 CFR
2010-07-01
... 31 Money and Finance: Treasury 2 2010-07-01 2010-07-01 false How are accurate estimates maintained... Treasury-State Agreement § 205.24 How are accurate estimates maintained? (a) If a State has knowledge that an estimate does not reasonably correspond to the State's cash needs for a Federal assistance...
78 FR 34604 - Submitting Complete and Accurate Information
Federal Register 2010, 2011, 2012, 2013, 2014
2013-06-10
... COMMISSION 10 CFR Part 50 Submitting Complete and Accurate Information AGENCY: Nuclear Regulatory Commission... accurate information as would a licensee or an applicant for a license.'' DATES: Submit comments by August... may submit comments by any of the following methods (unless this document describes a different...
Distortion Correction of OCT Images of the Crystalline Lens: GRIN Approach
Siedlecki, Damian; de Castro, Alberto; Gambra, Enrique; Ortiz, Sergio; Borja, David; Uhlhorn, Stephen; Manns, Fabrice; Marcos, Susana; Parel, Jean-Marie
2012-01-01
Purpose: To propose a method to correct Optical Coherence Tomography (OCT) images of the posterior surface of the crystalline lens incorporating its gradient index (GRIN) distribution, and to explore its possibilities for posterior surface shape reconstruction in comparison to existing methods of correction. Methods: 2-D images of 9 human lenses were obtained with a time-domain OCT system. The shape of the posterior lens surface was corrected using the proposed iterative correction method. The parameters defining the GRIN distribution used for the correction were taken from a previous publication. The results of correction were evaluated relative to the nominal surface shape (accessible in vitro) and compared to the performance of two other existing methods (simple division; refraction correction assuming a homogeneous index). Comparisons were made in terms of posterior surface radius, conic constant, root mean square, peak-to-valley, and lens thickness shifts from the nominal data. Results: Differences in the retrieved radius and conic constant were not statistically significant across methods. However, GRIN distortion correction with optimal shape GRIN parameters provided more accurate estimates of the posterior lens surface, in terms of RMS and peak values, with errors less than 6 μm and 13 μm respectively, on average. Thickness was also more accurately estimated with the new method, with a mean discrepancy of 8 μm. Conclusions: The posterior surface of the crystalline lens and lens thickness can be accurately reconstructed from OCT images, with the accuracy improving with an accurate model of the GRIN distribution. The algorithm can be used to improve quantitative knowledge of the crystalline lens from OCT imaging in vivo. Although the improvements over other methods are modest in 2-D, it is expected that 3-D imaging will fully exploit the potential of the technique. The method will also benefit from increasing experimental data of GRIN distribution in the lens of larger
Do Bond Functions Help for the Calculation of Accurate Bond Energies?
NASA Technical Reports Server (NTRS)
Bauschlicher, Charles W., Jr.; Arnold, James (Technical Monitor)
1998-01-01
The bond energies of 8 chemically bound diatomics are computed using several basis sets with and without bond functions (BF). The bond energies obtained using the aug-pVnZ+BF basis sets (with a correction for basis set superposition error, BSSE) tend to be slightly smaller than the results obtained using the aug-pV(n+1)Z basis sets, but slightly larger than the BSSE corrected aug-pV(n+1)Z results. The aug-cc-pVDZ+BF and aug-cc-pVTZ+BF basis sets yield reasonable estimates of bond energies, but, in most cases, these results cannot be considered highly accurate. Extrapolation of the results obtained with basis sets including bond functions appears to be inferior to the results obtained by extrapolation using atom-centered basis sets. Therefore bond functions do not appear to offer a path for obtaining highly accurate results for chemically bound systems at a lower computational cost than atom-centered basis sets.
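Extrapolation with atom-centered basis sets is typically done with an inverse-cubic form. A sketch of the common two-point scheme (a standard recipe, not necessarily the exact one used in this study):

```python
def cbs_extrapolate(e_small, e_large, n_small, n_large):
    """Two-point complete-basis-set extrapolation assuming the common
    E(n) = E_CBS + A / n**3 form for correlation energies.  Solving the
    two-equation system for E_CBS gives the closed form below."""
    a, b = n_small ** 3, n_large ** 3
    return (b * e_large - a * e_small) / (b - a)
```

If the energies follow E(n) = E_CBS + A/n³ exactly, the formula recovers E_CBS from any pair of cardinal numbers, e.g. triple-zeta (n = 3) and quadruple-zeta (n = 4).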
Ciancio, Dennis; Thompson, Kelly; Schall, Megan; Skinner, Christopher; Foorman, Barbara
2015-10-01
The relationship between reading comprehension rate measures and broad reading skill development was examined using data from approximately 1425 students (grades 1-3). Students read 3 passages, from a pool of 30, and answered open-ended comprehension questions. Accurate reading comprehension rate (ARCR) was calculated by dividing the percentage of questions answered correctly (%QC) by the seconds required to read the passage. Across all 30 passages, ARCR and its two components, %QC and reading speed (1/seconds spent reading the passage), were significantly correlated with broad reading scores, with %QC producing the lowest correlations. Two sequential regressions supported previous findings suggesting that ARCR measures consistently produced meaningful incremental increases beyond %QC in the amount of variance explained in broad reading skill; however, ARCR produced small or no incremental increases beyond reading time. Discussion focuses on the importance of the measure of reading time embedded in brief accurate reading rate measures and on directions for future research.
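The ARCR definition reduces to one line; a sketch with illustrative names:

```python
def accurate_reading_comprehension_rate(n_correct, n_questions, seconds_reading):
    """ARCR as defined in the abstract: percentage of questions answered
    correctly (%QC) divided by the seconds required to read the passage."""
    pct_correct = 100.0 * n_correct / n_questions
    return pct_correct / seconds_reading
```

A student answering 4 of 5 questions correctly after 100 s of reading scores an ARCR of 0.8 percentage points per second.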
75 FR 33587 - Defense Science Board; Correction
Federal Register 2010, 2011, 2012, 2013, 2014
2010-06-14
... of advisory committee meeting; correction. SUMMARY: On June 8, 2010, DoD published a notice (75 FR... one instance of irrelevant text. This notice corrects that information. Correction In the notice (FR Doc. 2010-13770) published on June 8, 2010 (75 FR 32416), make the following correction. On page...
Frequency-domain correction of sensor dynamic error for step response.
Yang, Shuang-Long; Xu, Ke-Jun
2012-11-01
To obtain accurate results in dynamic measurements it is required that the sensors have good dynamic performance. In practice, sensors have non-ideal dynamic characteristics due to their small damping ratios and low natural frequencies. In this case some dynamic error correction methods can be adopted to process the sensor responses and eliminate the effect of their dynamic characteristics. Frequency-domain correction of sensor dynamic error is a common method. Using the existing calculation method, however, the correct frequency-domain correction function (FCF) cannot be obtained from the step response calibration experimental data. This is because of the leakage error and invalid FCF values caused by the cyclic extension of the finite-length step input-output data. To solve these problems, data splicing preprocessing and FCF interpolation are put forward, and the FCF calculation steps as well as the sensor dynamic error correction procedure using the calculated FCF are presented in this paper. The proposed solution is applied to the dynamic error correction of a bar-shaped wind tunnel strain gauge balance to verify its effectiveness. The dynamic error correction results show that the adjustment time of the balance step response is shortened to 10 ms (less than 1/30 of that before correction) after frequency-domain correction, and the overshoot falls within 5% (less than 1/10 of that before correction). The dynamic measurement accuracy of the balance is improved significantly. PMID:23206091
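The core of frequency-domain correction, dividing the measured spectrum by the sensor's frequency response, can be sketched for a toy first-order sensor. The sensor model and the circular-convolution setup are assumptions for illustration; the paper's actual contribution (data splicing and FCF interpolation to handle step inputs, where some spectral bins are invalid) is not reproduced here:

```python
import cmath

def dft(x):
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(X):
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N) for k in range(N)).real / N
            for n in range(N)]

def correct_output(y, a):
    """Frequency-domain dynamic-error correction for a toy first-order
    sensor y[n] = a*y[n-1] + (1-a)*u[n].  The correction function is
    1/H(e^{jw}) with H(e^{jw}) = (1-a) / (1 - a*e^{-jw}); dividing the
    output spectrum by H recovers the input spectrum."""
    N = len(y)
    Y = dft(y)
    U = []
    for k in range(N):
        w = 2 * cmath.pi * k / N
        H = (1 - a) / (1 - a * cmath.exp(-1j * w))
        U.append(Y[k] / H)
    return idft(U)
```

For this toy model H is never zero, so every bin is valid; for a real step-response calibration some bins are not, which is exactly the problem the splicing and interpolation steps address.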
The Right Track for Vision Correction
NASA Technical Reports Server (NTRS)
2003-01-01
More and more people are putting away their eyeglasses and contact lenses as a result of laser vision correction surgery. LASIK, the most widely performed version of this surgical procedure, improves vision by reshaping the cornea, the clear front surface of the eye, using an excimer laser. One excimer laser system, Alcon's LADARVision 4000, utilizes a laser radar (LADAR) eye tracking device that gives it unmatched precision. During LASIK surgery, laser pulses must be accurately placed to reshape the cornea. A challenge to this procedure is the patient's constant eye movement. A person's eyes make small, involuntary movements known as saccadic movements about 100 times per second. Since the saccadic movements will not stop during LASIK surgery, most excimer laser systems use an eye tracking device that measures the movements and guides the placement of the laser beam. LADARVision's eye tracking device stems from the LADAR technology originally developed through several Small Business Innovation Research (SBIR) contracts with NASA's Johnson Space Center and the U.S. Department of Defense's Ballistic Missile Defense Office (BMDO). In the 1980s, Johnson awarded Autonomous Technologies Corporation a Phase I SBIR contract to develop technology for autonomous rendezvous and docking of space vehicles to service satellites. During Phase II of the Johnson SBIR contract, Autonomous Technologies developed a prototype range and velocity imaging LADAR to demonstrate technology that could be used for this purpose.
Shuffler bias corrections using calculated count rates
Rinard, Phillip M.; Hurd, J. R.; Hsue, F.
2001-04-01
Los Alamos National Laboratory has two identical shufflers that have been calibrated with a dozen U3O8 certified standards from 10 g 235U to 3600 g 235U. The shufflers are used to assay a wide variety of material types for their 235U contents. When the items differ greatly in chemical composition or shape from the U3O8 standards a bias is introduced because the calibration is not appropriate. Recently a new tool has been created to calculate shuffler count rates accurately, and this has been applied to generate bias correction factors. The tool has also been used to verify the masses and count rates of some uncertified U3O8 standards up to 8.0 kg of 235U, which were used to provisionally extend the calibration beyond the 3.6 kg 235U mass when a special need arose. Metallic uranium has significantly different neutronic properties from the U3O8 standards, and measured count rates from metals are biased low when the U3O8 calibration is applied. The application of the calculational tool to generate bias corrections for assorted metals will be described. The accuracy of the calculational tool was verified using highly enriched metal disk standards that could be stacked to form cylinders or put into spread arrays.
Entropic corrections to Friedmann equations
Sheykhi, Ahmad
2010-05-15
Recently, Verlinde argued that gravity can be understood as an entropic force caused by changes in the information associated with the positions of material bodies. In Verlinde's argument, the area law of the black hole entropy plays a crucial role. However, the entropy-area relation can be modified by the inclusion of quantum effects, motivated by loop quantum gravity. In this note, by employing this modified entropy-area relation, we derive corrections to Newton's law of gravitation as well as modified Friedmann equations by adopting the viewpoint that gravity can emerge as an entropic force. Our study further supports the universality of the log correction and provides a strong consistency check on Verlinde's model.
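Schematically, the quantum-corrected entropy-area relation used in such derivations has a log-corrected form; the coefficients below are placeholders for illustration, not values taken from the paper:

```latex
S \;=\; \frac{A}{4\ell_p^{2}} \;+\; \beta \,\ln\frac{A}{4\ell_p^{2}},
\qquad A = 4\pi R^{2},
```

Applying Verlinde's entropic-force argument (F = T\,\partial S/\partial R with the Unruh temperature on the holographic screen), the log term contributes a short-distance piece to Newton's law of the schematic form

```latex
F \;=\; -\,\frac{G M m}{R^{2}}
\left(1 \;+\; c\,\beta\,\frac{\ell_p^{2}}{R^{2}} \;+\; \cdots\right),
```

where c is a model-dependent constant; the same modified dS/dA then carries through to the corrected Friedmann equations.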
Proximity effect correction sensitivity analysis
NASA Astrophysics Data System (ADS)
Zepka, Alex; Zimmermann, Rainer; Hoppe, Wolfgang; Schulz, Martin
2010-05-01
Determining the quality of a proximity effect correction (PEC) is often done via 1-dimensional measurements such as: CD deviations from target, corner rounding, or line-end shortening. An alternative approach would compare the entire perimeter of the exposed shape and its original design. Unfortunately, this is not a viable solution as there is a practical limit to the number of metrology measurements that can be done in a reasonable amount of time. In this paper we make use of simulated results and introduce a method which may be considered complementary to the standard way of PEC qualification. It compares simulated contours with the target layout via a Boolean XOR operation with the area of the XOR differences providing a direct measure of how close a corrected layout approximates the target.
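The XOR-area metric can be sketched by rasterizing both shapes and counting cells covered by exactly one of them. Point-in-shape predicates stand in for real contours here; a production flow would use polygon Boolean operations on the actual simulated and target geometries:

```python
def xor_area(contour_a, contour_b, nx=200, ny=200, bounds=(0.0, 0.0, 1.0, 1.0)):
    """Rasterized XOR-area between two shapes given as point-in-shape
    predicates: sample cell centers on a grid and sum the area of cells
    covered by exactly one shape (a direct measure of mismatch)."""
    x0, y0, x1, y1 = bounds
    cell = ((x1 - x0) / nx) * ((y1 - y0) / ny)
    count = 0
    for i in range(nx):
        x = x0 + (i + 0.5) * (x1 - x0) / nx
        for j in range(ny):
            y = y0 + (j + 0.5) * (y1 - y0) / ny
            if contour_a(x, y) != contour_b(x, y):
                count += 1
    return count * cell
```

Two identical shapes give zero; the returned area grows with corner rounding, line-end shortening, and any other deviation at once, which is the appeal of the metric over 1-D measurements.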
Interaction and self-correction
Satne, Glenda L.
2014-01-01
In this paper, I address the question of how to account for the normative dimension involved in conceptual competence in a naturalistic framework. First, I present what I call the naturalist challenge (NC), referring to both the phylogenetic and ontogenetic dimensions of conceptual possession and acquisition. I then criticize two models that have been dominant in thinking about conceptual competence, the interpretationist and the causalist models. Both fail to meet NC, by failing to account for the abilities involved in conceptual self-correction. I then offer an alternative account of self-correction that I develop with the help of the interactionist theory of mutual understanding arising from recent developments in phenomenology and developmental psychology. PMID:25101044
Trajectory correction propulsion for TOPS
NASA Technical Reports Server (NTRS)
Long, H. R.; Bjorklund, R. A.
1972-01-01
A blowdown-pressurized hydrazine propulsion system was selected to provide trajectory correction impulse for outer planet flyby spacecraft as the result of cost/mass/reliability tradeoff analyses. Present hydrazine component and system technology and component designs were evaluated for application to the Thermoelectric Outer Planet Spacecraft (TOPS); while general hydrazine technology was adequate, component design changes were deemed necessary for TOPS-type missions. A prototype hydrazine propulsion system was fabricated and fired nine times for a total of 1600 s to demonstrate the operation and performance of the TOPS propulsion configuration. A flight-weight trajectory correction propulsion subsystem (TCPS) was designed for the TOPS based on actual and estimated advanced components.
ACCURATE KAP METER CALIBRATION AS A PREREQUISITE FOR OPTIMISATION IN PROJECTION RADIOGRAPHY.
Malusek, A; Sandborg, M; Carlsson, G Alm
2016-06-01
Modern X-ray units register the air kerma-area product, PKA, with a built-in KAP meter. Some KAP meters show an energy-dependent bias comparable with the maximum uncertainty articulated by the IEC (25 %), adversely affecting dose-optimisation processes. To correct for the bias, a reference KAP meter calibrated at a standards laboratory and the two calibration methods described here can be used to achieve an uncertainty of <7 %, as recommended by the IAEA. A computational model of the reference KAP meter is used to calculate beam quality correction factors for transfer of the calibration coefficient at the standards laboratory beam quality, Q0, to any beam quality, Q, in the clinic. Alternatively, beam quality corrections are measured with an energy-independent dosemeter via a reference beam quality in the clinic, Q1, to beam quality Q. Biases up to 35 % of built-in KAP meter readings were noted. Energy-dependent calibration factors are needed for unbiased PKA.
Correctness criteria for process migration
NASA Technical Reports Server (NTRS)
Lu, Chin; Liu, J. W. S.
1987-01-01
Two correctness criteria, the state consistency criterion and the property consistency criterion for process migration are discussed. The state machine approach is used to model the interactions between a user process and its environment. These criteria are defined in terms of the model. The idea of environment view was introduced to distinguish what a user process observes about its environment from what its environment state really is and argue that a consistent view of the environment must be maintained for every migrating process.
Holographic superconductors with Weyl corrections
NASA Astrophysics Data System (ADS)
Momeni, Davood; Raza, Muhammad; Myrzakulov, Ratbay
2016-10-01
A quick review of the analytical aspects of holographic superconductors (HSCs) with Weyl corrections is presented. Mainly, we focus on the matching method and variational approaches. Different types of such HSCs have been investigated: s-wave, p-wave and Stückelberg ones. We also review the fundamental construction of a p-wave type, in which the non-Abelian gauge field is coupled to the Weyl tensor. Numerical results are compared with the analytical ones.
An overview of correctional psychiatry.
Metzner, Jeffrey; Dvoskin, Joel
2006-09-01
Supermax facilities may be an unfortunate and unpleasant necessity in modern corrections. Because of the serious dangers posed by prison gangs, they are unlikely to disappear completely from the correctional landscape any time soon. But such units should be carefully reserved for those inmates who pose the most serious danger to the prison environment. Further, the constitutional duty to provide medical and mental health care does not end at the supermax door. There is a great deal of common ground between the opponents of such environments and those who view them as a necessity. No one should want these expensive beds to be used for people who could be more therapeutically and safely managed in mental health treatment environments. No one should want people with serious mental illnesses to be punished for their symptoms. Finally, no one wants these units to make people more, instead of less, dangerous. It is in everyone's interests to learn as much as possible about the potential of these units for good and for harm. Corrections is a profession, and professions base their practices on data. If we are to avoid the most egregious and harmful effects of supermax confinement, we need to understand them far better than we currently do. Though there is a role for advocacy from those supporting or opposed to such environments, there is also a need for objective, scientifically rigorous study of these units and the people who live there.
Quantum Corrections to Entropic Gravity
NASA Astrophysics Data System (ADS)
Chen, Pisin; Wang, Chiao-Hsuan
2013-12-01
The entropic gravity scenario recently proposed by Erik Verlinde reproduced Newton's law of purely classical gravity yet the key assumptions of this approach all have quantum mechanical origins. As is typical for emergent phenomena in physics, the underlying, more fundamental physics often reveals itself as corrections to the leading classical behavior. So one naturally wonders: where is ħ hiding in entropic gravity? To address this question, we first revisit the idea of holographic screen as well as entropy and its variation law in order to obtain a self-consistent approach to the problem. Next we argue that as the concept of minimal length has been invoked in the Bekenstein entropic derivation, the generalized uncertainty principle (GUP), which is a direct consequence of the minimal length, should be taken into consideration in the entropic interpretation of gravity. Indeed based on GUP it has been demonstrated that the black hole Bekenstein entropy area law must be modified not only in the strong but also in the weak gravity regime where in the weak gravity limit the GUP modified entropy exhibits a logarithmic correction. When applying it to the entropic interpretation, we demonstrate that the resulting gravity force law does include sub-leading order correction terms that depend on ħ. Such deviation from the classical Newton's law may serve as a probe to the validity of entropic gravity.
Accurate Automatic Detection of Densely Distributed Cell Nuclei in 3D Space
Tokunaga, Terumasa; Kanamori, Manami; Teramoto, Takayuki; Jang, Moon Sun; Kuge, Sayuri; Ishihara, Takeshi; Yoshida, Ryo; Iino, Yuichi
2016-01-01
To measure the activity of neurons using whole-brain activity imaging, precise detection of each neuron or its nucleus is required. In the head region of the nematode C. elegans, the neuronal cell bodies are distributed densely in three-dimensional (3D) space. However, no existing computational methods of image analysis can separate them with sufficient accuracy. Here we propose a highly accurate segmentation method based on the curvatures of the iso-intensity surfaces. To obtain accurate positions of nuclei, we also developed a new procedure for least squares fitting with a Gaussian mixture model. Combining these methods enables accurate detection of densely distributed cell nuclei in a 3D space. The proposed method was implemented as a graphical user interface program that allows visualization and correction of the results of automatic detection. Additionally, the proposed method was applied to time-lapse 3D calcium imaging data, and most of the nuclei in the images were successfully tracked and measured. PMID:27271939
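The least-squares Gaussian-mixture step can be sketched with a minimal EM fit of spherical Gaussians to synthetic 3D point data (an illustration only; the authors' method additionally uses iso-intensity surface curvatures, and all data below are synthetic):

```python
import numpy as np

def fit_gmm(points, k, n_iter=200):
    """Minimal EM fit of a mixture of k spherical Gaussians; the component
    means are the estimated nucleus centres. Farthest-point seeding keeps
    the sketch deterministic."""
    n, d = points.shape
    mu = [points[0]]
    for _ in range(1, k):                       # farthest-point seeding
        d2min = ((points[:, None, :] - np.asarray(mu)[None]) ** 2).sum(-1).min(1)
        mu.append(points[np.argmax(d2min)])
    mu = np.asarray(mu, dtype=float)
    var = np.full(k, points.var())              # spherical variances
    w = np.full(k, 1.0 / k)                     # mixing weights
    for _ in range(n_iter):
        # E-step: responsibilities r[i, j] = P(component j | point i)
        d2 = ((points[:, None, :] - mu[None]) ** 2).sum(-1)
        logp = -0.5 * d2 / var - 0.5 * d * np.log(2 * np.pi * var) + np.log(w)
        logp -= logp.max(1, keepdims=True)
        r = np.exp(logp)
        r /= r.sum(1, keepdims=True)
        # M-step: re-estimate weights, centres and variances
        nk = r.sum(0)
        w = nk / n
        mu = (r.T @ points) / nk[:, None]
        d2 = ((points[:, None, :] - mu[None]) ** 2).sum(-1)
        var = np.maximum((r * d2).sum(0) / (d * nk), 1e-6)
    return mu, w

# Two overlapping synthetic "nuclei" in 3D
rng = np.random.default_rng(1)
pts = np.vstack([rng.normal([0.0, 0.0, 0.0], 0.3, (300, 3)),
                 rng.normal([1.5, 0.0, 0.0], 0.3, (300, 3))])
centres, weights = fit_gmm(pts, k=2)
```

Even with substantial overlap, the fitted means recover the two centres, which is what makes mixture fitting attractive for densely packed nuclei.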
Stewart, W C L; Hager, V R
2016-01-01
In the analysis of DNA sequences on related individuals, most methods strive to incorporate as much information as possible, with little or no attention paid to the issue of statistical significance. For example, a modern workstation can easily handle the computations needed to perform a large-scale genome-wide inheritance-by-descent (IBD) scan, but accurate assessment of the significance of that scan is often hindered by inaccurate approximations and computationally intensive simulation. To address these issues, we developed gLOD—a test of co-segregation that, for large samples, models chromosome-specific IBD statistics as a collection of stationary Gaussian processes. With this simple model, the parametric bootstrap yields an accurate and rapid assessment of significance—the genome-wide corrected P-value. Furthermore, we show that (i) under the null hypothesis, the limiting distribution of the gLOD is the standard Gumbel distribution; (ii) our parametric bootstrap simulator is approximately 40 000 times faster than gene-dropping methods, and it is more powerful than methods that approximate the adjusted P-value; and, (iii) the gLOD has the same statistical power as the widely used maximum Kong and Cox LOD. Thus, our approach gives researchers the ability to determine quickly and accurately the significance of most large-scale IBD scans, which may contain multiple traits, thousands of families and tens of thousands of DNA sequences. PMID:27245422
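The bootstrap logic can be sketched as follows, with an AR(1) process standing in for the chromosome-wise stationary Gaussian processes; the chromosome count, locus density, and correlation are placeholder values, not the gLOD defaults:

```python
import numpy as np

def genomewide_p(observed_max, n_chrom=22, n_loci=200, rho=0.98,
                 n_boot=1000, seed=0):
    """Parametric bootstrap for a genome-wide corrected P-value: under the
    null, each chromosome's IBD statistic is modelled as a stationary AR(1)
    Gaussian process; the genome-wide maximum is simulated n_boot times and
    compared with the observed maximum score."""
    rng = np.random.default_rng(seed)
    e = rng.standard_normal((n_boot, n_chrom, n_loci))
    z = np.empty_like(e)
    z[..., 0] = e[..., 0]
    c = np.sqrt(1.0 - rho ** 2)
    for i in range(1, n_loci):          # AR(1) recursion along the chromosome
        z[..., i] = rho * z[..., i - 1] + c * e[..., i]
    maxima = z.max(axis=(1, 2))         # genome-wide maximum per replicate
    return (1 + np.sum(maxima >= observed_max)) / (n_boot + 1)

# Corrected P-value for a hypothetical observed peak score
print(genomewide_p(4.2))
```

Because each replicate only requires Gaussian draws and a short recursion, this is why the parametric bootstrap can be orders of magnitude faster than gene-dropping through pedigrees.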
Accurate calculation of diffraction-limited encircled and ensquared energy.
Andersen, Torben B
2015-09-01
Mathematical properties of the encircled and ensquared energy functions for the diffraction-limited point-spread function (PSF) are presented. These include power series and a set of linear differential equations that facilitate the accurate calculation of these functions. Asymptotic expressions are derived that provide very accurate estimates for the relative amount of energy in the diffraction PSF that falls outside a square or rectangular large detector. Tables with accurate values of the encircled and ensquared energy functions are also presented. PMID:26368873
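For the circular case, Rayleigh's classical result gives the encircled energy of the Airy pattern in closed form; a minimal numerical check (using SciPy's Bessel functions, not the paper's series or tables) is:

```python
import numpy as np
from scipy.special import j0, j1

def encircled_energy(v):
    """Fraction of the diffraction-limited (Airy) PSF energy inside reduced
    radius v = (pi*D/(lambda*f)) * r, from Rayleigh's formula
    E(v) = 1 - J0(v)^2 - J1(v)^2."""
    return 1.0 - j0(v) ** 2 - j1(v) ** 2

# First dark ring of the Airy pattern (v = 3.8317...) encloses ~83.8%
print(encircled_energy(3.8317))
```

The slow convergence of E(v) toward 1 at large v is exactly why the asymptotic expressions for energy falling outside a large detector are useful.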
Drift correction of the dissolved signal in single particle ICPMS.
Cornelis, Geert; Rauch, Sebastien
2016-07-01
A method is presented where drift, the random fluctuation of the signal intensity, is compensated for based on the estimation of the drift function by a moving average. It was shown using single particle ICPMS (spICPMS) measurements of 10 and 60 nm Au NPs that drift reduces the accuracy of spICPMS analysis at the calibration stage and during calculations of the particle size distribution (PSD), but that the present method can again correct the average signal intensity as well as the signal distribution of particle-containing samples skewed by drift. Moreover, deconvolution, a method that models signal distributions of dissolved signals, fails in some cases when using standards and samples affected by drift, but the present method was shown to improve accuracy again. Relatively high particle signals have to be removed prior to drift correction in this procedure, which was done using a 3 × sigma method, and the signals are treated separately and added again. The method can also correct for flicker noise, which increases when the signal intensity increases because of drift. The accuracy was improved in many cases when flicker correction was used, but when accurate results were obtained despite drift, the correction procedures did not reduce accuracy. The procedure may be useful to extract results from experimental runs that would otherwise have to be run again. Graphical Abstract: A method is presented where a spICP-MS signal affected by drift (left) is corrected (right) by adjusting the local (moving) averages (green) and standard deviations (purple) to the respective values at a reference time (red). In combination with removing particle events (blue) in the case of calibration standards, this method is shown to obtain particle size distributions where that would otherwise be impossible, even when the deconvolution method is used to discriminate dissolved and particle signals.
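The moving-average step can be sketched as follows (the window length, linear drift shape, and multiplicative rescaling to a reference time are illustrative assumptions, not the paper's exact algorithm, which also adjusts local standard deviations):

```python
import numpy as np

def correct_drift(signal, window=201, t_ref=0):
    """Divide out slow drift estimated by a centred moving average,
    rescaling the whole trace to its local mean at a reference time."""
    pad = window // 2
    padded = np.pad(signal, pad, mode="edge")
    kernel = np.ones(window) / window
    trend = np.convolve(padded, kernel, mode="valid")  # local mean, same length
    return signal * (trend[t_ref] / trend)

# Synthetic dissolved-signal trace with a linear drift of +50% over the run
rng = np.random.default_rng(0)
t = np.arange(5000)
raw = (100 + 0.01 * t) + rng.normal(0, 5, t.size)
corrected = correct_drift(raw)
```

In a real spICPMS workflow, particle events would be removed (e.g. by the 3 × sigma rule) before estimating the trend, then added back after correction.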
Correcting electrode impedance effects in broadband SIP measurements
NASA Astrophysics Data System (ADS)
Huisman, Johan Alexander; Zimmermann, Egon; Esser, Odilia; Haegel, Franz-Hubert; Vereecken, Harry
2016-04-01
Broadband spectral induced polarization (SIP) measurements of the complex electrical resistivity can be affected by the contact impedance of the potential electrodes above 100 Hz. In this study, we present a correction procedure to remove electrode impedance effects from SIP measurements. The first step in this correction procedure is to estimate the electrode impedance using a measurement with reversed current and potential electrodes. In a second step, this estimated electrode impedance is used to correct SIP measurements based on a simplified electrical model of the SIP measurement system. We evaluated this new correction procedure using SIP measurements on water because of the well-defined dielectric properties. It was found that the difference between the corrected and expected phase of the complex electrical resistivity of water was below 0.1 mrad at 1 kHz for a wide range of electrode impedances. In addition, SIP measurements on a saturated unconsolidated sediment sample with two types of potential electrodes showed that the measured phase of the electrical resistivity was very similar (difference <0.2 mrad) up to a frequency of 10 kHz after the effect of the different electrode impedances was removed. Finally, SIP measurements on variably saturated unconsolidated sand were made. Here, the plausibility of the phase of the electrical resistivity was improved for frequencies up to 1 kHz, but errors remained for higher frequencies due to the approximate nature of the electrode impedance estimates and some remaining unknown parasitic capacitances that led to current leakage. It was concluded that the proposed correction procedure for SIP measurements improved the accuracy of the phase measurements by an order of magnitude in the kHz frequency range. Further improvement of this accuracy requires a method to accurately estimate parasitic capacitances in situ.
Intelligent Automated Correction of Baseplane and Systematic Noise in Two-Dimensional NMR Spectra
NASA Astrophysics Data System (ADS)
Levy, G. C.; Jeong, G. W.; Yu, J. Q.; Wang, K.
A computer program useful for 2D NMR data is described that provides automatic two-dimensional baseplane correction and subsequent t1 and t2 ridge suppression. The algorithm performs combined correction of smooth baseplane distortions and sharp ridges in 2D NMR spectra through five steps: (1) identification of resonance peaks and ridges, (2) extraction of an initial, putative global baseplane, (3) window filtering of the corresponding time domain, (4) construction of a 2D spectrum free of baseplane distortion, and (5) suppression of ridges. The optimal parameters for baseplane and ridge correction are automatically determined by the program, yielding a greatly improved spectrum, together with more accurate spectral information.
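A one-dimensional analogue of steps (1)-(2), peak identification followed by baseline extraction, might look like this; the iterative masking, polynomial degree, and synthetic peaks are assumptions for illustration, not the program's actual algorithm:

```python
import numpy as np

def correct_baseline(spectrum, deg=3, z=3.0):
    """Sketch of automatic baseline correction for one row of a spectrum:
    iteratively flag peak points, fit a low-order polynomial to the
    remaining baseline points, then subtract the fitted baseline."""
    x = np.arange(spectrum.size)
    mask = np.ones(spectrum.size, dtype=bool)   # True = baseline point
    for _ in range(5):                          # iterate peak rejection
        coef = np.polyfit(x[mask], spectrum[mask], deg)
        base = np.polyval(coef, x)
        resid = spectrum - base
        sigma = resid[mask].std()
        mask = resid < z * sigma                # drop points far above the fit
    return spectrum - base

# Synthetic row: curved baseline distortion plus two sharp peaks
x = np.arange(512)
baseline = 1e-4 * (x - 250) ** 2
peaks = 50 / (1 + ((x - 150) / 3) ** 2) + 80 / (1 + ((x - 350) / 3) ** 2)
row = baseline + peaks + np.random.default_rng(0).normal(0, 0.5, 512)
flat = correct_baseline(row)
```

The 2D program applies the same idea plane-wise and additionally suppresses the t1/t2 ridges, which a smooth polynomial cannot capture.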
NASA Astrophysics Data System (ADS)
Ma, S.; Quan, C.; Zhu, R.; Tay, C. J.
2012-08-01
Digital sinusoidal phase-shifting fringe projection profilometry (DSPFPP) is a powerful tool for reconstructing the three-dimensional (3D) surface of diffuse objects. However, highly accurate profiles are often hindered by the nonlinear response, color crosstalk, and imbalance between the digital projector and the CCD/CMOS camera. In this paper, several phase error correction methods for gray and color-encoded DSPFPP are described, such as Look-Up-Table (LUT) compensation, intensity correction, gamma correction, a LUT-based hybrid method, and blind phase error suppression. Experimental results are also presented to evaluate the effectiveness of each method.
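Of the listed methods, gamma correction is the simplest to sketch: pre-distort the commanded fringe with an inverse-gamma look-up table so that the projector's assumed power-law response cancels (the gamma value and fringe pattern below are hypothetical):

```python
import numpy as np

def build_gamma_lut(gamma, levels=256):
    """Inverse-gamma LUT: pre-distort commanded intensities so that a
    projector response I_out = I_in**gamma is cancelled."""
    x = np.linspace(0.0, 1.0, levels)
    return np.round((x ** (1.0 / gamma)) * (levels - 1)).astype(np.uint16)

gamma = 2.2            # assumed projector gamma
levels = 256
fringe = (0.5 + 0.5 * np.cos(np.linspace(0, 4 * np.pi, 1024))) * (levels - 1)
corrected = build_gamma_lut(gamma)[fringe.astype(np.uint8)]
# After the projector applies its gamma, the displayed fringe is ~sinusoidal:
displayed = (corrected / (levels - 1)) ** gamma * (levels - 1)
```

A purely sinusoidal displayed fringe is what removes the harmonic-induced phase error from the phase-shifting algorithm; the LUT and hybrid methods refine this further with measured rather than assumed responses.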
Correction for instrument time constant in determination of reaction kinetics.
Chilton, Marie; Clark, Jared; Thomas, Nathan; Nicholson, Allen; Hansen, Lee D.; Hansen, Clifford W.; Hansen, Jaron
2010-02-01
Rates of reactions can be expressed as dn/dt = kcf(n), where n is moles of reaction, k is a rate constant, c is a proportionality constant, and f(n) is a function of the properties of the sample. When the instrument time constant, τ, and k are sufficiently comparable that measured rates are significantly affected by instrument response, correction for instrument response must be done to obtain accurate reaction kinetics. Correction for instrument response has previously been done by truncating early data or by use of the Tian equation. Both methods can lead to significant errors. We describe a method for simultaneous determination of τ, k, and c by fitting equations describing the combined instrument response and rate law to rates observed as a function of time. The method was tested with data on the heat rate from acid-catalyzed hydrolysis of sucrose.
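The simultaneous-fitting idea can be sketched for first-order kinetics seen through a first-order instrument response (synthetic data; scipy.optimize.curve_fit stands in for whatever fitting routine the authors used):

```python
import numpy as np
from scipy.optimize import curve_fit

def measured_rate(t, k, tau, a):
    """Rate of a first-order reaction, a*k*exp(-k*t), convolved with a
    first-order instrument response of time constant tau."""
    return a * k / (1.0 - k * tau) * (np.exp(-k * t) - np.exp(-t / tau))

# Synthetic calorimetric data: k = 0.01 1/s, tau = 30 s, amplitude a = 100
t = np.linspace(1.0, 600.0, 300)
rng = np.random.default_rng(0)
data = measured_rate(t, 0.01, 30.0, 100.0) + rng.normal(0.0, 0.005, t.size)
(k_fit, tau_fit, a_fit), _ = curve_fit(measured_rate, t, data,
                                       p0=[0.02, 10.0, 50.0])
# Note: the two exponentials enter the model symmetrically, so the fit
# determines the pair {k, 1/tau} only up to interchange; rough prior
# knowledge of the instrument time constant resolves the ambiguity.
```

Unlike truncating early data or applying the Tian equation pointwise, fitting the combined model uses the full record and returns τ, k, and the amplitude in one step.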
Professional orientation and pluralistic ignorance among jail correctional officers.
Cook, Carrie L; Lane, Jodi
2014-06-01
Research about the attitudes and beliefs of correctional officers has historically been conducted in prison facilities while ignoring jail settings. This study contributes to our understanding of correctional officers by examining the perceptions of those who work in jails, specifically measuring professional orientations about counseling roles, punitiveness, corruption of authority by inmates, and social distance from inmates. The study also examines whether officers are accurate in estimating these same perceptions of their peers, a line of inquiry that has been relatively ignored. Findings indicate that the sample was concerned about various aspects of their job and the management of inmates. Specifically, officers were uncertain about adopting counseling roles, were somewhat punitive, and were concerned both with maintaining social distance from inmates and with an inmate's ability to corrupt their authority. Officers also misperceived the professional orientation of their fellow officers and assumed their peer group to be less progressive than they actually were.
Assessment of a long-range corrected hybrid functional
Vydrov, Oleg A.; Scuseria, Gustavo E.
2006-12-21
Common approximate exchange-correlation functionals suffer from self-interaction error, and as a result, their corresponding potentials have incorrect asymptotic behavior. The exact asymptote can be imposed by introducing range separation into the exchange component and replacing the long-range portion of the approximate exchange by the Hartree-Fock counterpart. The authors show that this long-range correction works particularly well in combination with the short-range variant of the Perdew-Burke-Ernzerhof (PBE) exchange functional. This long-range-corrected hybrid, here denoted LC-{omega}PBE, is remarkably accurate for a broad range of molecular properties, such as thermochemistry, barrier heights of chemical reactions, bond lengths, and most notably, description of processes involving long-range charge transfer.
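The range separation underlying LC-ωPBE is the standard error-function split of the Coulomb operator:

```latex
% Short-range part handled by (screened) PBE exchange, long-range by HF:
\frac{1}{r_{12}}
  = \underbrace{\frac{1-\operatorname{erf}(\omega r_{12})}{r_{12}}}_{\text{short range}}
  + \underbrace{\frac{\operatorname{erf}(\omega r_{12})}{r_{12}}}_{\text{long range}}
```

Because the long-range portion is treated with exact Hartree-Fock exchange, the exchange potential decays as −1/r, restoring the correct asymptote that semilocal functionals miss and improving long-range charge-transfer descriptions.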
Correction factors for gravimetric measurement of peritumoural oedema in man.
Bell, B A; Smith, M A; Tocher, J L; Miller, J D
1987-01-01
The water content of samples of normal and oedematous brain in lobectomy specimens from 16 patients with cerebral tumours has been measured by gravimetry and by wet and dry weighing. Uncorrected gravimetry underestimated the water content of oedematous peritumoural cortex by a mean of 1.17%, and of oedematous peritumoural white matter by a mean of 2.52%. Gravimetric correction equations calculated theoretically and from an animal model of serum infusion white matter oedema overestimate peritumoural white matter oedema in man, and empirical gravimetric error correction factors for oedematous peritumoural human white matter and cortex have therefore been derived. These enable gravimetry to be used to accurately determine peritumoural oedema in man. PMID:3268140
Thermal correction of deformations in a telescope mirror
NASA Technical Reports Server (NTRS)
Rhodes, M. D.
1973-01-01
Orbiting astronomical observatories have the potential for making observations far superior to those from earth-based mirrors. In order for this performance to be realized, the contour of the primary mirror must be very accurately controlled. A preliminary investigation of the use of thermally induced elastic strains for correcting axisymmetric deformations in space telescope mirrors has been presented. The relation between axial deformation and thermal inputs was determined by a finite difference solution of the equations for thin elastic shells. The use of this technique was demonstrated analytically on a beryllium paraboloid. This mirror had 10 equally spaced thermal inputs and results are presented which show the nature of the temperature distribution required to correct deformations due to an acceleration-type loading.
Sensor Data Management, Validation, Correction, and Provenance for Building Technologies
Castello, Charles C; Sanyal, Jibonananda; Rossiter, Jeffrey S; Hensley, Zachary; New, Joshua Ryan
2014-01-01
Oak Ridge National Laboratory (ORNL) conducts research on technologies that use a wide range of sensors to develop and characterize building energy performance. The management of high-resolution sensor data, analysis, and tracing lineage of such activities is challenging. Missing or corrupt data due to sensor failure, fouling, drifting, calibration error, or data logger failure is another issue. This paper focuses on sensor data management, validation, correction, and provenance to combat these issues, ensuring complete and accurate sensor datasets for building technologies applications and research. The design and development of two integrated software products are discussed: Sensor Data Validation and Correction (SensorDVC) and the Provenance Data Management System (ProvDMS) platform.
Optimal Drift Correction for Superresolution Localization Microscopy with Bayesian Inference.
Elmokadem, Ahmed; Yu, Ji
2015-11-01
Single-molecule-localization-based superresolution microscopy requires accurate sample drift correction to achieve good results. Common approaches for drift compensation include using fiducial markers and direct drift estimation by image correlation. The former increases the experimental complexity and the latter estimates drift at a reduced temporal resolution. Here, we present, to our knowledge, a new approach for drift correction based on the Bayesian statistical framework. The technique has the advantage of being able to calculate the drifts for every image frame of the data set directly from the single-molecule coordinates. We present the theoretical foundation of the algorithm and an implementation that achieves significantly higher accuracy than image-correlation-based estimations.
Measurement and correction of leaf open times in helical tomotherapy
Sevillano, David; Minguez, Cristina; Sanchez, Alicia; Sanchez-Reyes, Alberto
2012-11-15
showed that, while treatments affected by latency effects were improved, those affected by individual leaf errors were not. Conclusions: Measurement of MLC performance in real treatments provides the authors with a valuable tool for ensuring the quality of HT delivery. The LOTs of MLC are very accurate in most cases. Sources of error were found and correction methods proposed and applied. The corrections decreased the amount of LOT errors. The dosimetric impact of these corrections should be evaluated more thoroughly using 3D dose distribution analysis.
Aerosol effects and corrections in the Halogen Occultation Experiment
NASA Technical Reports Server (NTRS)
Hervig, Mark E.; Russell, James M., III; Gordley, Larry L.; Daniels, John; Drayson, S. Roland; Park, Jae H.
1995-01-01
The eruptions of Mt. Pinatubo in June 1991 increased stratospheric aerosol loading by a factor of 30, affecting chemistry, radiative transfer, and remote measurements of the stratosphere. The Halogen Occultation Experiment (HALOE) instrument on board Upper Atmosphere Research Satellite (UARS) makes measurements globally for inferring profiles of NO2, H2O, O3, HF, HCl, CH4, NO, and temperature in addition to aerosol extinction at five wavelengths. Understanding and removing the aerosol extinction is essential for obtaining accurate retrievals from the radiometer channels of NO2, H2O and O3 in the lower stratosphere since these measurements are severely affected by contaminant aerosol absorption. If ignored, aerosol absorption in the radiometer measurements is interpreted as additional absorption by the target gas, resulting in anomalously large mixing ratios. To correct the radiometer measurements for aerosol effects, a retrieved aerosol extinction profile is extrapolated to the radiometer wavelengths and then included as continuum attenuation. The sensitivity of the extrapolation to size distribution and composition is small for certain wavelength combinations, reducing the correction uncertainty. The aerosol corrections extend the usable range of profiles retrieved from the radiometer channels to the tropopause with results that agree well with correlative measurements. In situations of heavy aerosol loading, errors due to aerosol in the retrieved mixing ratios are reduced to values of about 15, 25, and 60% in H2O, O3, and NO2, respectively, levels that are much less than the correction magnitude.
Assessment of ionospheric and tropospheric corrections for PPP-RTK
NASA Astrophysics Data System (ADS)
de Oliveira, Paulo; Fund, François; Morel, Laurent; Monico, João; Durand, Stéphane; Durand, Fréderic
2016-04-01
PPP-RTK is a state-of-the-art GNSS (Global Navigation Satellite System) technique employed to determine accurate positions in real-time. To perform PPP-RTK it is necessary to accomplish the SSR (State Space Representation) of the spatially correlated errors affecting the GNSS observables, such as the tropospheric delay and the ionospheric effect. Using GNSS data of local or regional GNSS active networks, it is possible to determine quite well the atmospheric errors for any position in the network coverage area, by modeling these effects or biases. This work presents the results of tropospheric and ionospheric modeling employed to obtain the respective corrections. The region in the study is France and the Orphéon GNSS active network is used to generate the atmospheric corrections. The CNES (Centre National d'Etudes Spatiales) satellite orbit products are used to perform ambiguity fixing in GNSS processing. Two atmospheric modeling approaches are considered: 1) generation of a priori corrections from coefficients estimated using the GNSS network and 2) the use of interpolated ionospheric and tropospheric effects from the closest reference stations to the user's location, as suggested in the second stage of RTCM (Radio Technical Commission for Maritime Services) messages development. Finally, the atmospheric corrections are introduced in PPP-RTK as a priori values to allow improvements in ambiguity fixing and to reduce its convergence time. The discussion emphasizes the strengths and weaknesses of each solution, as well as their combined use.
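The second modeling approach, interpolating atmospheric corrections from the closest reference stations to the user's location, can be sketched with simple inverse-distance weighting; the station layout and delay values below are invented purely for illustration.

```python
import math

def idw_interpolate(stations, user_pos, power=2):
    """Inverse-distance-weighted interpolation of per-station atmospheric
    corrections (e.g. zenith tropospheric delays) to a user position.
    stations: list of ((x, y), value) pairs from nearby reference stations."""
    num = 0.0
    den = 0.0
    for (x, y), value in stations:
        d = math.hypot(x - user_pos[0], y - user_pos[1])
        if d == 0.0:
            return value  # user coincides with a reference station
        w = 1.0 / d ** power
        num += w * value
        den += w
    return num / den

# Hypothetical tropospheric delays (metres) at three reference sites
stations = [((0.0, 0.0), 2.40), ((10.0, 0.0), 2.50), ((0.0, 10.0), 2.44)]
print(round(idw_interpolate(stations, (1.0, 1.0)), 3))
```

The interpolated value is dominated by the nearest station, which is the behaviour the RTCM-style "closest reference stations" scheme relies on.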
Children's perception of their synthetically corrected speech production.
Strömbergsson, Sofia; Wengelin, Asa; House, David
2014-06-01
We explore children's perception of their own speech - in its online form, in its recorded form, and in synthetically modified forms. Children with phonological disorder (PD) and children with typical speech and language development (TD) performed tasks of evaluating accuracy of the different types of speech stimuli, either immediately after having produced the utterance or after a delay. In addition, they performed a task designed to assess their ability to detect synthetic modification. Both groups showed high performance in tasks involving evaluation of other children's speech, whereas in tasks of evaluating one's own speech, the children with PD were less accurate than their TD peers. The children with PD were less sensitive to misproductions in immediate conjunction with their production of an utterance, and more accurate after a delay. Within-category modification often passed undetected, indicating a satisfactory quality of the generated speech. Potential clinical benefits of using corrective re-synthesis are discussed.
Stray Light Correction in the Optical Spectroscopy of Crystals
Hendler, Richard W.; Meuse, Curtis W.; Gallagher, Travis; Labahn, Joerg; Kubicek, Jan; Smith, Paul D.; Kakareka, John W.
2015-01-01
It has long been known in spectroscopy that light not passing through a sample, but reaching the detector (i.e., stray light), results in a distortion of the spectrum known as absorption flattening. In spectroscopy with crystals, one must either include such stray light or take steps to exclude it. In the former case, the derived spectra are not accurate. In the latter case, a significant amount of the crystal must be masked off and excluded. In this paper, we describe a method that allows use of the entire crystal by correcting the distorted spectrum. PMID:26688880
Higher-order binding corrections to the Lamb shift
Pachucki, K. )
1993-08-15
In this work a new analytical method for calculating the one-loop self-energy correction to the Lamb shift is presented in detail. The technique relies on division into the low and the high energy parts. The low energy part is calculated using the multipole expansion and the high energy part is calculated by expanding the Dirac-Coulomb propagator in powers of the Coulomb field. The obtained results are in agreement with those previously known, but are more accurate. A new theoretical value of the Lamb shift is also given. 47 refs., 2 figs., 1 tab.
Automated motion correction based on target tracking for dynamic nuclear medicine studies
NASA Astrophysics Data System (ADS)
Cao, Xinhua; Tetrault, Tracy; Fahey, Fred; Treves, Ted
2008-03-01
Nuclear medicine dynamic studies of kidneys, bladder and stomach are important diagnostic tools. Accurate generation of time-activity curves from regions of interest (ROIs) requires that the patient remains motionless for the duration of the study. This is not always possible since some dynamic studies may last from several minutes to one hour. Several motion correction solutions have been explored. Motion correction using external point sources is inconvenient and not accurate especially when motion results from breathing, organ motion or feeding rather than from body motion alone. Centroid-based motion correction assumes that activity distribution is only inside the single organ (without background) and uniform, but this approach is impractical in most clinical studies. In this paper, we present a novel technique of motion correction that first tracks the organ of interest in a dynamic series then aligns the organ. The implementation algorithm for target tracking-based motion correction consists of image preprocessing, target detection, target positioning, motion estimation and prediction, tracking (new search region generation) and target alignment. The targeted organ is tracked from the first frame to the last one in the dynamic series to generate a moving trajectory of the organ. Motion correction is implemented by aligning the organ ROIs in the image series to the location of the organ in the first image. The proposed method of motion correction has been applied to several dynamic nuclear medicine studies including radionuclide cystography, dynamic renal scintigraphy, diuretic renography and gastric emptying scintigraphy.
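The final alignment step described above, shifting each frame so the tracked organ returns to its first-frame position, might look like the following sketch (toy integer-pixel shifts via `np.roll`; a real implementation would interpolate sub-pixel motion and handle image borders):

```python
import numpy as np

def align_series(frames, trajectory):
    """Align each frame of a dynamic series so that the tracked organ
    returns to its position in the first frame.
    frames: (T, H, W) array; trajectory: list of (row, col) organ centers,
    one per frame, as produced by a target-tracking step."""
    ref_r, ref_c = trajectory[0]
    aligned = np.zeros_like(frames)
    for t, (r, c) in enumerate(trajectory):
        dr, dc = ref_r - r, ref_c - c        # shift back to frame-0 position
        aligned[t] = np.roll(np.roll(frames[t], dr, axis=0), dc, axis=1)
    return aligned

# Toy series: a bright "organ" pixel drifting one row per frame
frames = np.zeros((3, 8, 8))
for t in range(3):
    frames[t, 2 + t, 4] = 1.0
aligned = align_series(frames, [(2, 4), (3, 4), (4, 4)])
print([tuple(map(int, np.argwhere(a == 1.0)[0])) for a in aligned])  # -> [(2, 4), (2, 4), (2, 4)]
```

After alignment a single fixed ROI can generate the time-activity curve, which is the point of the correction.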
Radiosondes Corrected for Inaccuracy in RH Measurements
Miloshevich, Larry
2008-01-15
Corrections for inaccuracy in Vaisala radiosonde RH measurements have been applied to ARM SGP radiosonde soundings. The magnitude of the corrections can vary considerably between soundings. The radiosonde measurement accuracy, and therefore the correction magnitude, is a function of atmospheric conditions, mainly T, RH, and dRH/dt (humidity gradient). The corrections are also very sensitive to the RH sensor type, and there are 3 Vaisala sensor types represented in this dataset (RS80-H, RS90, and RS92). Depending on the sensor type and the radiosonde production date, one or more of the following three corrections were applied to the RH data: Temperature-Dependence correction (TD), Contamination-Dry Bias correction (C), Time Lag correction (TL). The estimated absolute accuracy of NIGHTTIME corrected and uncorrected Vaisala RH measurements, as determined by comparison to simultaneous reference-quality measurements from Holger Voemel's (CU/CIRES) cryogenic frostpoint hygrometer (CFH), is given by Miloshevich et al. (2006).
Plans for Jet Energy Corrections at CMS
NASA Astrophysics Data System (ADS)
Mishra, Kalanand
2009-05-01
We present a plan for Jet Energy Corrections at CMS. Jet corrections at CMS will come initially from simulation tuned on test beam data, directly from collision data when available, and ultimately from a simulation tuned on collision data. The corrections will be factorized into a fixed sequence of sub-corrections associated with different detector and physics effects. The following three factors are minimum requirements for most analyses: offset corrections for pile-up and noise; correction for the response of the calorimeter as a function of jet pseudorapidity relative to the barrel; correction for the absolute response as a function of transverse momentum in the barrel. The required correction gives a jet Lorentz vector equivalent to the sum of particles in the jet cone emanating from a QCD hard collision. We discuss the status of these corrections, the planned data-driven techniques for their derivation, and their anticipated evolution with the stages of the CMS experiment.
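The factorized scheme can be illustrated as a fixed chain of multiplicative factors, each evaluated on the jet pT as corrected so far. The numerical factors below are invented for illustration, not actual CMS corrections.

```python
def corrected_jet_pt(raw_pt, corrections):
    """Apply a fixed sequence of multiplicative jet-energy sub-corrections.
    Each factor is a function of the pT corrected so far, mirroring the
    factorized scheme (offset, relative-eta, absolute-pT, ...)."""
    pt = raw_pt
    for corr in corrections:
        pt *= corr(pt)
    return pt

# Illustrative-only correction factors (not real CMS constants):
offset   = lambda pt: (pt - 2.0) / pt   # subtract a 2 GeV pile-up/noise offset
relative = lambda pt: 1.02              # flat eta-relative response factor
absolute = lambda pt: 1.0 / 0.85        # undo an assumed 85% calorimeter response

print(round(corrected_jet_pt(50.0, [offset, relative, absolute]), 2))
```

Because the factors are applied in sequence, each sub-correction can be derived and validated independently, which is the motivation for the factorized approach.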
Some ideas and opportunities concerning three-dimensional wind-tunnel wall corrections
NASA Technical Reports Server (NTRS)
Rubbert, P. E.
1982-01-01
Opportunities for improving the accuracy and reliability of wall corrections in conventional ventilated test sections are presented. The approach encompasses state-of-the-art technology in transonic computational methods combined with the measurement of tunnel-wall pressures. The objective is to arrive at correction procedures of known, verifiable accuracy that are practical within a production testing environment. It is concluded that: accurate and reliable correction procedures can be developed for cruise-type aerodynamic testing for any wall configuration; passive walls can be optimized for minimal interference for cruise-type aerodynamic testing (tailored slots, variable open area ratio, etc.); monitoring and assessment of noncorrectable interference (buoyancy and curvature in a transonic stream) can be an integral part of a correction procedure; and reasonably good correction procedures can probably be developed for complex flows involving extensive separation and other unpredictable phenomena.
Accurate Alignment of Plasma Channels Based on Laser Centroid Oscillations
Gonsalves, Anthony; Nakamura, Kei; Lin, Chen; Osterhoff, Jens; Shiraishi, Satomi; Schroeder, Carl; Geddes, Cameron; Toth, Csaba; Esarey, Eric; Leemans, Wim
2011-03-23
A technique has been developed to accurately align a laser beam through a plasma channel by minimizing the shift in laser centroid and angle at the channel output. If only the shift in centroid or angle is measured, then accurate alignment is provided by minimizing laser centroid motion at the channel exit as the channel properties are scanned. The improvement in alignment accuracy provided by this technique is important for minimizing electron beam pointing errors in laser plasma accelerators.
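A minimal sketch of the procedure: scan an input tilt and keep the setting that minimizes the motion of the exit centroid as a channel property is scanned. The channel model below is a toy assumption, not the physics of the actual experiment.

```python
def best_alignment(channel_response, tilt_grid):
    """Pick the input tilt that minimizes the spread of the laser centroid
    at the channel exit while a channel property is scanned."""
    def centroid_motion(tilt):
        # exit centroid positions recorded while scanning a channel property
        exits = [channel_response(tilt, prop) for prop in (0.8, 1.0, 1.2)]
        return max(exits) - min(exits)   # spread of the centroid positions
    return min(tilt_grid, key=centroid_motion)

# Toy channel model: the exit centroid depends on the scanned property
# only when the input is misaligned (tilt != 0)
model = lambda tilt, prop: tilt * prop * 5.0
tilts = [-0.2, -0.1, 0.0, 0.1, 0.2]
print(best_alignment(model, tilts))   # -> 0.0 (the aligned setting)
```

A perfectly aligned beam shows no centroid motion as the channel is varied, which is why minimizing the spread identifies the aligned input.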
Hyde, Christian; Wilmut, Kate; Fuelscher, Ian; Williams, Jacqueline
2013-01-01
Neurocomputational models of reaching indicate that efficient purposive correction of movement midflight (e.g., online control) depends on one's ability to generate and monitor an accurate internal (neural) movement representation. In the first study to test this empirically, the authors investigated the relationship between healthy young adults' implicit motor imagery performance and their capacity to correct their reaching trajectory. As expected, after controlling for general reaching speed, hierarchical regression demonstrated that imagery ability was a significant predictor of hand correction speed; that is, faster and more accurate imagery performance was associated with faster corrections to reaching following target displacement at movement onset. They argue that these findings provide preliminary support for the view that a link exists between an individual's ability to represent movement mentally and correct movement online efficiently.
BFC: correcting Illumina sequencing errors
2015-01-01
Summary: BFC is a free, fast and easy-to-use sequencing error corrector designed for Illumina short reads. It uses a non-greedy algorithm but still maintains a speed comparable to implementations based on greedy methods. In evaluations on real data, BFC appears to correct more errors with fewer overcorrections in comparison to existing tools. It particularly does well in suppressing systematic sequencing errors, which helps to improve the base accuracy of de novo assemblies. Availability and implementation: https://github.com/lh3/bfc Contact: hengli@broadinstitute.org Supplementary information: Supplementary data are available at Bioinformatics online. PMID:25953801
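BFC's actual algorithm is more sophisticated, but the core idea of k-mer-spectrum-based error correction, replacing a base when the substitution makes every overlapping k-mer "solid" (frequent in the read set), can be sketched as follows. This is an illustrative toy, not BFC's implementation.

```python
from collections import Counter

def correct_read(read, kmer_counts, k=3, solid=2):
    """Position-by-position sketch of k-mer spectrum correction: replace a
    base when the substitution makes every overlapping k-mer solid."""
    read = list(read)
    for i in range(len(read)):
        def covering(s):
            # all k-mers of s that overlap position i
            return ["".join(s[j:j + k])
                    for j in range(max(0, i - k + 1), min(i + 1, len(s) - k + 1))]
        if all(kmer_counts[m] >= solid for m in covering(read)):
            continue                        # position already well supported
        for base in "ACGT":
            trial = read[:i] + [base] + read[i + 1:]
            if all(kmer_counts[m] >= solid for m in covering(trial)):
                read = trial
                break
    return "".join(read)

# Build a k-mer spectrum from error-free reads, then fix a one-base error
reads = ["ACGTACGT"] * 5
counts = Counter(m for r in reads for m in (r[j:j + 3] for j in range(len(r) - 2)))
print(correct_read("ACGTACCT", counts))   # -> ACGTACGT
```

Systematic errors produce k-mers that never become solid, which is why spectrum-based correctors suppress them well.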
Correction of Distributed Optical Aberrations
Baker, K; Olivier, S; Carrano, C; Phillion, D
2006-02-12
The objective of this project was to demonstrate the use of multiple distributed deformable mirrors (DMs) to improve the performance of optical systems with distributed aberrations. This concept is expected to provide dramatic improvement in the optical performance of systems in applications where the aberrations are distributed along the optical path or within the instrument itself. Our approach used multiple actuated DMs distributed to match the aberration distribution. The project developed the algorithms necessary to determine the required corrections and simulate the performance of these multiple DM systems.
Aberration correction of unstable resonators
NASA Technical Reports Server (NTRS)
Lang, Robert J. (Inventor)
1994-01-01
Aspheric reflectors are constructed for unstable resonator lasers to provide an arbitrary laser mode inside the resonator and to correct aberrations of the output beam. This is done by shaping the end reflector opposite the output reflector of the resonator cavity, for example to correct aberrations resulting from refraction of a beam exiting the solid of a resonator having an index of refraction greater than 1, or to produce an aberration in the output beam that precisely compensates for the aberration of an optical train into which the resonator beam is coupled.
Cosmic strings with curvature corrections
NASA Astrophysics Data System (ADS)
Boisseau, Bruno; Letelier, Patricio S.
1992-08-01
A generic model of string described by a Lagrangian density that depends on the extrinsic curvature of the string worldsheet is studied. Using a system of coordinates adapted to the string worldsheet, the equation of motion and the energy-momentum tensor are derived for strings evolving in curved spacetime. We find that the curvature corrections may change the relation between the string energy density and the tension. They can also introduce heat propagation along the string. We also find, for the Polyakov as well as Nambu strings with a topological term, that the open string end points can travel with a speed less than the velocity of light.
GyrB polymorphisms accurately assign invasive viridans group streptococcal species.
Galloway-Peña, Jessica; Sahasrabhojane, Pranoti; Tarrand, Jeffrey; Han, Xiang Y; Shelburne, Samuel A
2014-08-01
Viridans group streptococci (VGS) are a heterogeneous group of medically important bacteria that cannot be accurately assigned to a particular species using conventional phenotypic methods. Although multilocus sequence analysis (MLSA) is considered the gold standard for VGS species-level identification, MLSA is not yet feasible in the clinical setting. Conversely, molecular methods, such as sodA and 16S rRNA gene sequencing, are clinically practical but not sufficiently accurate for VGS species-level identification. Here, we present data regarding the use of an ∼ 400-nucleotide internal fragment of the gene encoding DNA gyrase subunit B (GyrB) for VGS species-level identification. MLSA, internal gyrB, sodA, full-length, and 5' 16S gene sequences were used to characterize 102 unique VGS blood isolates collected from 2011 to 2012. When using the MLSA species assignment as a reference, full-length and 5' partial 16S gene and sodA sequence analyses failed to correctly assign all strains to a species. Precise species determination was particularly problematic for Streptococcus mitis and Streptococcus oralis isolates. However, the internal gyrB fragment allowed for accurate species designations for all 102 strains. We validated these findings using 54 VGS strains for which MLSA, 16S gene, sodA, and gyrB data are available at the NCBI, showing that gyrB is superior to 16S gene and sodA sequence analyses for VGS species identification. We also observed that specific polymorphisms in the 133-amino acid sequence of the internal GyrB fragment can be used to identify invasive VGS species. Thus, the GyrB amino acid sequence may offer a more practical and accurate method for classifying invasive VGS strains to the species level. PMID:24899021
Words Correct per Minute: The Variance in Standardized Reading Scores Accounted for by Reading Speed
ERIC Educational Resources Information Center
Williams, Jacqueline L.; Skinner, Christopher H.; Floyd, Randy G.; Hale, Andrea D.; Neddenriep, Christine; Kirk, Emily P.
2011-01-01
The measure words correct per minute (WC/M) incorporates a measure of accurate aloud word reading and a measure of reading speed. The current article describes two studies designed to parse the variance in global reading scores accounted for by reading speed. In Study I, reading speed accounted for more than 40% of the reading composite score…
The Role of the Components of Knowledge of Results Information in Error Correction.
ERIC Educational Resources Information Center
Reeve, T. Gilmour; Magill, Richard A.
1981-01-01
A study was done to determine the usefulness of the components of a knowledge of results (KR) statement for organizing response correction. Errors in direction and distance components of a KR statement testing psychomotor skills were manipulated across four groups. The groups receiving directional information were more accurate in error…
In situ corrective action technologies are being proposed and installed at an increasing number of underground storage tank (UST) sites contaminated with petroleum products in saturated and unsaturated zones. It is often difficult to accurately assess the performance of these sy...
Drift-corrected nanoplasmonic hydrogen sensing by polarization
NASA Astrophysics Data System (ADS)
Wadell, Carl; Langhammer, Christoph
2015-06-01
Accurate and reliable hydrogen sensors are an important enabling technology for the large-scale introduction of hydrogen as a fuel or energy storage medium. As an example, in a hydrogen-powered fuel cell car of the type now introduced to the market, more than 15 hydrogen sensors are required for safe operation. To enable the long-term use of plasmonic sensors in this particular context, we introduce a concept for drift-correction based on light polarization utilizing symmetric sensor and sensing material nanoparticles arranged in a heterodimer. In this way the inert gold sensor element of the plasmonic dimer couples to a sensing-active palladium element if illuminated in the dimer-parallel polarization direction but not the perpendicular one. Thus the perpendicular polarization readout can be used to efficiently correct for drifts occurring due to changes of the sensor element itself or due to non-specific events like a temperature change. Furthermore, by the use of a polarizing beamsplitter, both polarization signals can be read out simultaneously making it possible to continuously correct the sensor response to eliminate long-term drift and ageing effects. Since our approach is generic, we also foresee its usefulness for other applications of nanoplasmonic sensors than hydrogen sensing.
2006-09-01
The purpose of this Corrective Action Plan is to provide the detailed scope of work required to implement the recommended corrective actions as specified in the approved Corrective Action Decision Document.
Accurately measuring MPI broadcasts in a computational grid
Karonis N T; de Supinski, B R
1999-05-06
An MPI library's implementation of broadcast communication can significantly affect the performance of applications built with that library. In order to choose between similar implementations or to evaluate available libraries, accurate measurements of broadcast performance are required. As we demonstrate, existing methods for measuring broadcast performance are either inaccurate or inadequate. Fortunately, we have designed an accurate method for measuring broadcast performance, even in a challenging grid environment. Measuring broadcast performance is not easy. Simply sending one broadcast after another allows them to proceed through the network concurrently, thus resulting in inaccurate per-broadcast timings. Existing methods either fail to eliminate this pipelining effect or eliminate it by introducing overheads that are as difficult to measure as the performance of the broadcast itself. This problem becomes even more challenging in grid environments. Latencies along different links can vary significantly. Thus, an algorithm's performance is difficult to predict from its communication pattern. Even when accurate prediction is possible, the pattern is often unknown. Our method introduces a measurable overhead to eliminate the pipelining effect, regardless of variations in link latencies. Accurate measurements allow users to choose between different available implementations. Also, accurate and complete measurements could guide use of a given implementation to improve application performance. These choices will become even more important as grid-enabled MPI libraries [6, 7] become more common since bad choices are likely to cost significantly more in grid environments. In short, the distributed processing community needs accurate, succinct and complete measurements of collective communications performance. Since successive collective communications can often proceed concurrently, accurately measuring them is difficult. Some benchmarks use knowledge of the communication algorithm to predict the
Aberration correction past and present.
Hawkes, P W
2009-09-28
Electron lenses are extremely poor: if glass lenses were as bad, we should see as well with the naked eye as with a microscope! The demonstration by Otto Scherzer in 1936 that skillful lens design could never eliminate the spherical and chromatic aberrations of rotationally symmetric electron lenses was therefore most unwelcome and the other great electron optician of those years, Walter Glaser, never ceased striving to find a loophole in Scherzer's proof. In the wartime and early post-war years, the first proposals for correcting C(s) were made and in 1947, in a second milestone paper, Scherzer listed these and other ways of correcting lenses; soon after, Dennis Gabor invented holography for the same purpose. These approaches will be briefly summarized and the work that led to the successful implementation of quadrupole-octopole and sextupole correctors in the 1990s will be analysed. In conclusion, the elegant role of image algebra in describing image formation and processing and, above all, in developing new methods will be mentioned. PMID:19687058
78 FR 76193 - Special Notice; Correction
Federal Register 2010, 2011, 2012, 2013, 2014
2013-12-16
... From the Federal Register Online via the Government Publishing Office DEPARTMENT OF VETERANS AFFAIRS Special Notice; Correction AGENCY: National Cemetery Administration, Department of Veterans Affairs. ACTION: Notice; correction. SUMMARY: The Department of Veterans Affairs (VA) published...
78 FR 76193 - Special Notice; Correction
Federal Register 2010, 2011, 2012, 2013, 2014
2013-12-16
... From the Federal Register Online via the Government Publishing Office DEPARTMENT OF VETERANS... Questionnaire)] Special Notice; Correction AGENCY: Veterans Benefits Administration, Department of Veterans Affairs. ACTION: Notice; correction. SUMMARY: The Department of Veterans Affairs (VA) published...
Effective Correctional Treatment: Bibliotherapy for Cynics.
ERIC Educational Resources Information Center
Gendreau, Paul; Ross, Bob
1979-01-01
Presents recent evidence, obtained from a review of the literature on correctional treatment published since 1973, appealing the verdict that correctional rehabilitation is ineffective. There are several types of intervention programs that have proved successful with offender populations. (Author)
Accurately measuring dynamic coefficient of friction in ultraform finishing
NASA Astrophysics Data System (ADS)
Briggs, Dennis; Echaves, Samantha; Pidgeon, Brendan; Travis, Nathan; Ellis, Jonathan D.
2013-09-01
UltraForm Finishing (UFF) is a deterministic sub-aperture computer numerically controlled grinding and polishing platform designed by OptiPro Systems. UFF is used to grind and polish a variety of optics from simple spherical to fully freeform, and numerous materials from glasses to optical ceramics. The UFF system consists of an abrasive belt around a compliant wheel that rotates and contacts the part to remove material. This work aims to accurately measure the dynamic coefficient of friction (μ), how it changes as a function of belt wear, and how this ultimately affects material removal rates. The coefficient of friction has been examined in terms of contact mechanics and Preston's equation to determine accurate material removal rates. By accurately predicting changes in μ, polishing iterations can be more accurately predicted, reducing the total number of iterations required to meet specifications. We have established an experimental apparatus that can accurately measure μ by measuring triaxial forces during translating loading conditions or while manufacturing the removal spots used to calculate material removal rates. Using this system, we will demonstrate μ measurements for UFF belts during different states of their lifecycle and assess the material removal function from spot diagrams as a function of wear. Ultimately, we will use this system for qualifying belt-wheel-material combinations to develop a spot-morphing model to better predict instantaneous material removal functions.
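Preston's equation relates the removal rate to contact pressure and relative surface speed. The sketch below assumes, purely for illustration, that the Preston coefficient scales linearly with the measured dynamic coefficient of friction as the belt wears; the numerical values are hypothetical.

```python
def preston_removal_rate(k_p, pressure, velocity):
    """Preston's equation: removal rate dh/dt = k_p * P * V."""
    return k_p * pressure * velocity

def worn_belt_kp(k_p_new, mu_new, mu_worn):
    """Illustrative assumption only: scale the Preston coefficient with the
    measured dynamic coefficient of friction as the belt wears."""
    return k_p_new * (mu_worn / mu_new)

k_p = 1.0e-13                                   # hypothetical coefficient, m^2/N
rate_new  = preston_removal_rate(k_p, 2.0e5, 5.0)                      # Pa, m/s
rate_worn = preston_removal_rate(worn_belt_kp(k_p, 0.6, 0.45), 2.0e5, 5.0)
print(rate_worn / rate_new)                     # removal drops as mu drops
```

Tracking mu through the belt lifecycle and updating k_p accordingly is what lets the removal function, and hence the polishing iterations, be predicted rather than re-measured.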
Correcting for Telluric Absorption: Methods, Case Studies, and Release of the TelFit Code
NASA Astrophysics Data System (ADS)
Gullikson, Kevin; Dodson-Robinson, Sarah; Kraus, Adam
2014-09-01
Ground-based astronomical spectra are contaminated by the Earth's atmosphere to varying degrees in all spectral regions. We present a Python code that can accurately fit a model to the telluric absorption spectrum present in astronomical data, with residuals of ~3%-5% of the continuum for moderately strong lines. We demonstrate the quality of the correction by fitting the telluric spectrum in a nearly featureless A0V star, HIP 20264, as well as to a series of dwarf M star spectra near the 819 nm sodium doublet. We directly compare the results to an empirical telluric correction of HIP 20264 and find that our model-fitting procedure is at least as good and sometimes more accurate. The telluric correction code, which we make freely available to the astronomical community, can be used as a replacement for telluric standard star observations for many purposes.
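Once a telluric transmission model has been fitted, the correction itself amounts to dividing it out of the observed spectrum, with very deep lines masked rather than amplified into noise. This is a generic sketch of that final step, not TelFit's implementation.

```python
import numpy as np

def telluric_correct(observed, telluric_model, floor=0.05):
    """Divide out a fitted telluric transmission model; mask (NaN) pixels
    where the model transmission is too deep to correct reliably."""
    model = np.asarray(telluric_model, dtype=float)
    return np.where(model >= floor,
                    observed / np.maximum(model, floor),
                    np.nan)

# Toy flat continuum multiplied by a telluric line of 40% depth
truth     = np.ones(5)
telluric  = np.array([1.0, 0.9, 0.6, 0.9, 1.0])
corrected = telluric_correct(truth * telluric, telluric)
print(corrected)   # recovers the flat continuum
```

Fitting a physical model rather than observing a standard star means the division uses a noise-free template, which is where the ~3%-5% residual figure comes from.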
Correcting for telluric absorption: Methods, case studies, and release of the TelFit code
Gullikson, Kevin; Kraus, Adam; Dodson-Robinson, Sarah
2014-09-01
Ground-based astronomical spectra are contaminated by the Earth's atmosphere to varying degrees in all spectral regions. We present a Python code that can accurately fit a model to the telluric absorption spectrum present in astronomical data, with residuals of ∼3%-5% of the continuum for moderately strong lines. We demonstrate the quality of the correction by fitting the telluric spectrum in a nearly featureless A0V star, HIP 20264, as well as to a series of dwarf M star spectra near the 819 nm sodium doublet. We directly compare the results to an empirical telluric correction of HIP 20264 and find that our model-fitting procedure is at least as good and sometimes more accurate. The telluric correction code, which we make freely available to the astronomical community, can be used as a replacement for telluric standard star observations for many purposes.
NASA Astrophysics Data System (ADS)
Lee, J.-K.; Kim, J.-H.; Suk, M.-K.
2015-04-01
There are many potential sources of bias in the radar rainfall estimation process. This study classified the biases from the rainfall estimation process into the reflectivity measurement bias and the QPE model bias, and conducted bias correction methods to improve the accuracy of the Radar-AWS Rainrate (RAR) calculation system operated by the Korea Meteorological Administration (KMA). For the Z bias correction, this study utilized a bias correction algorithm for the reflectivity. The concept of this algorithm is that the reflectivity of target single-pol radars is corrected based on a reference dual-pol radar corrected for hardware and software biases. This study then applied two post-processing methods, the Mean Field Bias Correction (MFBC) method and the Local Gauge Correction (LGC) method, to correct the rainfall bias. The Z bias and rainfall-bias correction methods were applied to the RAR system. The accuracy of the RAR system improved after correcting the Z bias. Among rainfall types, although the accuracy for Changma-front and local torrential cases was slightly improved even without the Z bias correction, the accuracy for typhoon cases in particular was worse than the existing results. As a result of the rainfall-bias correction, the RAR system using Z bias_LGC was especially superior to the MFBC method because different rainfall biases were applied to each grid rainfall amount in the LGC method. Across rainfall types, results of the Z bias_LGC showed that rainfall estimates for all types were more accurate than with the Z bias correction alone and, especially, outcomes in typhoon cases were vastly superior to the others.
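The difference between the two post-processing methods can be sketched as follows: MFBC applies one gauge/radar factor to the whole field, while LGC interpolates per-gauge bias factors to each grid cell (inverse-distance weighting is used here as an illustrative choice; the gauge positions and bias values are invented).

```python
import numpy as np

def mean_field_bias(radar_at_gauges, gauges):
    """MFBC: a single multiplicative factor, total gauge over total radar."""
    return np.sum(gauges) / np.sum(radar_at_gauges)

def local_gauge_correction(radar_grid, gauge_xy, gauge_bias, power=2):
    """LGC sketch: interpolate per-gauge bias factors to every grid cell
    and apply them cell by cell."""
    corrected = np.empty_like(radar_grid)
    for iy in range(radar_grid.shape[0]):
        for ix in range(radar_grid.shape[1]):
            d2 = [(iy - y) ** 2 + (ix - x) ** 2 for y, x in gauge_xy]
            if 0 in d2:                          # cell contains a gauge
                b = gauge_bias[d2.index(0)]
            else:                                # inverse-distance weights
                w = np.array([d ** (-power / 2) for d in d2])
                b = np.dot(w, gauge_bias) / w.sum()
            corrected[iy, ix] = radar_grid[iy, ix] * b
    return corrected

radar = np.full((3, 3), 10.0)
mfb = mean_field_bias(np.array([10.0, 10.0]), np.array([12.0, 8.0]))
lgc = local_gauge_correction(radar, [(0, 0), (2, 2)], np.array([1.2, 0.8]))
print(mfb, lgc[0, 0], lgc[2, 2])   # one uniform factor vs local factors
```

In this toy case the opposing gauge biases cancel in the MFBC factor (1.0), while LGC still corrects each neighbourhood, which mirrors why the per-grid LGC outperformed MFBC in the study.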
NASA Astrophysics Data System (ADS)
Lee, J.-K.; Kim, J.-H.; Suk, M.-K.
2015-11-01
There are many potential sources of bias in the radar rainfall estimation process. This study classified the biases from the rainfall estimation process into the reflectivity measurement bias and the rainfall estimation bias of the Quantitative Precipitation Estimation (QPE) model, and conducted bias correction methods to improve the accuracy of the Radar-AWS Rainrate (RAR) calculation system operated by the Korea Meteorological Administration (KMA). For the reflectivity biases that occur when measuring rainfall, this study utilized a Z bias correction algorithm. The concept of this algorithm is that the reflectivity of the target single-pol radars is corrected based on a reference dual-pol radar corrected for hardware and software biases. This study then applied two post-processing methods, the Mean Field Bias Correction (MFBC) method and the Local Gauge Correction (LGC) method, to correct the rainfall estimation bias of the QPE model. The Z bias and rainfall estimation bias correction methods were applied to the RAR system. The accuracy of the RAR system was improved after correcting the Z bias. Among rainfall types, although the accuracy for the Changma-front and local torrential cases was slightly improved even without the Z bias correction, the accuracy for the typhoon cases in particular was worse than the existing results. As a result of the rainfall estimation bias correction, the Z bias_LGC was especially superior to the MFBC method because different rainfall biases were applied to each grid rainfall amount in the LGC method. Across rainfall types, the results of the Z bias_LGC showed that rainfall estimates for all types were more accurate than with the Z bias correction alone and, especially, the outcomes in the typhoon cases were vastly superior to the others.
Breast tissue decomposition with spectral distortion correction: A postmortem study
Ding, Huanjun; Zhao, Bo; Baturin, Pavlo; Behroozi, Farnaz; Molloi, Sabee
2014-10-15
Purpose: To investigate the feasibility of an accurate measurement of water, lipid, and protein composition of breast tissue using a photon-counting spectral computed tomography (CT) with spectral distortion corrections. Methods: Thirty-eight postmortem breasts were imaged with a cadmium-zinc-telluride-based photon-counting spectral CT system at 100 kV. The energy-resolving capability of the photon-counting detector was used to separate photons into low and high energy bins with a splitting energy of 42 keV. The estimated mean glandular dose for each breast ranged from 1.8 to 2.2 mGy. Two spectral distortion correction techniques were implemented, respectively, on the raw images to correct the nonlinear detector response due to pulse pileup and charge-sharing artifacts. Dual energy decomposition was then used to characterize each breast in terms of water, lipid, and protein content. In the meantime, the breasts were chemically decomposed into their respective water, lipid, and protein components to provide a gold standard for comparison with dual energy decomposition results. Results: The accuracy of the tissue compositional measurement with spectral CT was determined by comparing to the reference standard from chemical analysis. The averaged root-mean-square error in percentage composition was reduced from 15.5% to 2.8% after spectral distortion corrections. Conclusions: The results indicate that spectral CT can be used to quantify the water, lipid, and protein content in breast tissue. The accuracy of the compositional analysis depends on the applied spectral distortion correction technique.
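The dual energy decomposition used here can be illustrated with a small linear model. Assuming known effective attenuation coefficients for the three basis materials in each energy bin (the numbers in the test are placeholders, not calibrated values), the water, lipid, and protein fractions follow from the two bin measurements plus a volume-conservation constraint:

```python
import numpy as np

def decompose(mu_low, mu_high, basis_low, basis_high):
    """Solve for water/lipid/protein volume fractions from attenuation
    measured in the low and high energy bins.  basis_low and basis_high
    hold the effective attenuation coefficients of the three basis
    materials in each bin; the third equation enforces that the
    fractions sum to one (volume conservation)."""
    A = np.array([basis_low, basis_high, [1.0, 1.0, 1.0]])
    b = np.array([mu_low, mu_high, 1.0])
    return np.linalg.solve(A, b)
```

In practice the spectral distortion corrections described in the abstract must be applied first, since pulse pileup and charge sharing bias the bin measurements that feed this system.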
A rigid motion correction method for helical computed tomography (CT)
NASA Astrophysics Data System (ADS)
Kim, J.-H.; Nuyts, J.; Kyme, A.; Kuncic, Z.; Fulton, R.
2015-03-01
We propose a method to compensate for six degree-of-freedom rigid motion in helical CT of the head. The method is demonstrated in simulations and in helical scans performed on a 16-slice CT scanner. Scans of a Hoffman brain phantom were acquired while an optical motion tracking system recorded the motion of the bed and the phantom. Motion correction was performed by restoring projection consistency using data from the motion tracking system, and reconstructing with an iterative fully 3D algorithm. Motion correction accuracy was evaluated by comparing reconstructed images with a stationary reference scan. We also investigated the effects on accuracy of tracker sampling rate, measurement jitter, interpolation of tracker measurements, and the synchronization of motion data and CT projections. After optimization of these aspects, motion-corrected images corresponded remarkably closely to images of the stationary phantom, with correlation and similarity coefficients both above 0.9. We performed a simulation study using volunteer head motion and found similarly that our method is capable of compensating effectively for realistic human head movements. To the best of our knowledge, this is the first practical demonstration of generalized rigid motion correction in helical CT. Its clinical value, which we have yet to explore, may be significant. For example, it could reduce the necessity for repeat scans and resource-intensive anesthetic and sedation procedures in patient groups prone to motion, such as young children. It is applicable not only to dedicated CT imaging but also to hybrid PET/CT and SPECT/CT, where it could ensure an accurate CT image for lesion localization and attenuation correction of the functional image data.
DOE /NV
2000-11-03
This addendum to the Corrective Action Investigation Plan (CAIP) contains the U.S. Department of Energy, Nevada Operations Office's approach to determine the extent of contamination existing at Corrective Action Unit (CAU) 321. This addendum was required when the extent of contamination exceeded the estimate in the original Corrective Action Decision Document (CADD). Located in Area 22 on the Nevada Test Site, Corrective Action Unit 321, Weather Station Fuel Storage, consists of Corrective Action Site 22-99-05, Fuel Storage Area, which was used to store fuel and other petroleum products necessary for motorized operations at the historic Camp Desert Rock facility. This facility was operational from 1951 to 1958 and dismantled after 1958. Based on site history and earlier investigation activities at CAU 321, the contaminant of potential concern (COPC) was previously identified as total petroleum hydrocarbons (diesel-range organics). The scope of this corrective action investigation for the Fuel Storage Area will include the selection of biased sample locations to determine the vertical and lateral extent of contamination, collection of soil samples using rotary sonic drilling techniques, and the utilization of field-screening methods to accurately determine the extent of COPC contamination. The results of this field investigation will support a defensible evaluation of corrective action alternatives and be included in the revised CADD.
Corrective Feedback and Learner Uptake in CALL
ERIC Educational Resources Information Center
Heift, Trude
2004-01-01
This paper describes a study in which we investigated the effects of corrective feedback on learner uptake in CALL. Learner uptake is here defined as learner responses to corrective feedback in which, in case of an error, students attempt to correct their mistake(s). 177 students from three Canadian universities participated in the study during…
75 FR 2510 - Procurement List; Corrections
Federal Register 2010, 2011, 2012, 2013, 2014
2010-01-15
... services on January 11, 2010 (75 FR 1354-1355). The correct date that comments should be received is... FR 1355-1356). The correct effective date should be February 11, 2010. ADDRESSES: Committee for... PEOPLE WHO ARE BLIND OR SEVERELY DISABLED Procurement List; Corrections AGENCY: Committee for...
Working toward Literacy in Correctional Education ESL
ERIC Educational Resources Information Center
Gardner, Susanne
2014-01-01
Correctional Education English as a Second Language (ESL) literacy programs vary from state to state, region to region. Some states enroll their correctional ESL students in adult basic education (ABE) classes; other states have separate classes and programs. At the Maryland Correctional Institution in Jessup, the ESL class is a self-contained…
Spectroscopically Accurate Calculations of the Rovibrational Energies of Diatomic Hydrogen
NASA Astrophysics Data System (ADS)
Perry, Jason
2005-05-01
The Born-Oppenheimer approximation has been used to calculate the rotational and vibrational states of diatomic hydrogen. Because it is only an approximation, our group now wants to calculate the electronic energy from a Born-Oppenheimer potential that has been corrected to match spectroscopic results closely. We are using a code that includes corrections for adiabatic, relativistic, radiative, and non-adiabatic effects. The rovibrational energies have now been calculated for both bound and quasi-bound states. We also want to compute quadrupole transition probabilities for diatomic hydrogen. These calculations aim to investigate diatomic hydrogen in astrophysical environments.
Deformation field correction for spatial normalization of PET images
Bilgel, Murat; Carass, Aaron; Resnick, Susan M.; Wong, Dean F.; Prince, Jerry L.
2015-01-01
Spatial normalization of positron emission tomography (PET) images is essential for population studies, yet the current state of the art in PET-to-PET registration is limited to the application of conventional deformable registration methods that were developed for structural images. A method is presented for the spatial normalization of PET images that improves their anatomical alignment over the state of the art. The approach works by correcting the deformable registration result using a model that is learned from training data having both PET and structural images. In particular, viewing the structural registration of training data as ground truth, correction factors are learned by using a generalized ridge regression at each voxel given the PET intensities and voxel locations in a population-based PET template. The trained model can then be used to obtain more accurate registration of PET images to the PET template without the use of a structural image. A cross validation evaluation on 79 subjects shows that the proposed method yields more accurate alignment of the PET images compared to deformable PET-to-PET registration as revealed by 1) a visual examination of the deformed images, 2) a smaller error in the deformation fields, and 3) a greater overlap of the deformed anatomical labels with ground truth segmentations. PMID:26142272
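The voxelwise correction model can be sketched with the closed-form ridge estimator. This is a generic sketch of generalized ridge regression, not the authors' exact feature set or regularization matrix:

```python
import numpy as np

def ridge_fit(X, y, lam=1.0):
    """Closed-form ridge regression: w = (X'X + lam*I)^-1 X'y.
    In the paper's setting, each voxel would get its own model with
    features built from PET intensities and voxel locations."""
    n_feat = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(n_feat), X.T @ y)

def predict(X, w):
    return X @ w
```

Setting lam=0 recovers ordinary least squares; increasing lam shrinks the learned correction factors toward zero, which stabilizes the per-voxel fits when training data are limited.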
Memory conformity affects inaccurate memories more than accurate memories.
Wright, Daniel B; Villalba, Daniella K
2012-01-01
After controlling for initial confidence, inaccurate memories were shown to be more easily distorted than accurate memories. In two experiments groups of participants viewed 50 stimuli and were then presented with these stimuli plus 50 fillers. During this test phase participants reported their confidence that each stimulus was originally shown. This was followed by computer-generated responses from a bogus participant. After being exposed to this response participants again rated the confidence of their memory. The computer-generated responses systematically distorted participants' responses. Memory distortion depended on initial memory confidence, with uncertain memories being more malleable than confident memories. This effect was moderated by whether the participant's memory was initially accurate or inaccurate. Inaccurate memories were more malleable than accurate memories. The data were consistent with a model describing two types of memory (i.e., recollective and non-recollective memories), which differ in how susceptible these memories are to memory distortion.
Differential equation based method for accurate approximations in optimization
NASA Technical Reports Server (NTRS)
Pritchard, Jocelyn I.; Adelman, Howard M.
1990-01-01
A method to efficiently and accurately approximate the effect of design changes on structural response is described. The key to this method is to interpret sensitivity equations as differential equations that may be solved explicitly for closed form approximations, hence, the method is denoted the Differential Equation Based (DEB) method. Approximations were developed for vibration frequencies, mode shapes and static displacements. The DEB approximation method was applied to a cantilever beam and results compared with the commonly-used linear Taylor series approximations and exact solutions. The test calculations involved perturbing the height, width, cross-sectional area, tip mass, and bending inertia of the beam. The DEB method proved to be very accurate, and in most cases, was more accurate than the linear Taylor series approximation. The method is applicable to simultaneous perturbation of several design variables. Also, the approximations may be used to calculate other system response quantities. For example, the approximations for displacements are used to approximate bending stresses.
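The DEB idea can be illustrated on a toy sensitivity equation (this example is illustrative, not the paper's actual beam equations). If a response obeys df/dv = c·f/v, a form typical of frequencies under sizing variables, integrating the sensitivity equation exactly yields a closed-form approximation that a linear Taylor expansion only matches to first order:

```python
def deb_approx(f0, v0, c, v):
    """Closed-form approximation from integrating the sensitivity ODE
    df/dv = c*f/v exactly: f(v) = f0 * (v/v0)**c."""
    return f0 * (v / v0) ** c

def taylor_approx(f0, v0, c, v):
    """Linear Taylor series using the same sensitivity evaluated at v0."""
    return f0 * (1.0 + c * (v - v0) / v0)
```

For f(v) = sqrt(v) (c = 1/2), the DEB form reproduces the exact response for any perturbation size, while the Taylor line drifts away from it, mirroring the accuracy advantage reported in the abstract.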
A high order accurate difference scheme for complex flow fields
Dexun Fu; Yanwen Ma
1997-06-01
A high order accurate finite difference method for direct numerical simulation of coherent structure in the mixing layers is presented. The reason for oscillation production in numerical solutions is analyzed. It is caused by a nonuniform group velocity of wavepackets. A method of group velocity control for the improvement of the shock resolution is presented. In numerical simulation the fifth-order accurate upwind compact difference relation is used to approximate the derivatives in the convection terms of the compressible N-S equations, a sixth-order accurate symmetric compact difference relation is used to approximate the viscous terms, and a three-stage R-K method is used to advance in time. In order to improve the shock resolution the scheme is reconstructed with the method of diffusion analogy which is used to control the group velocity of wavepackets. 18 refs., 12 figs., 1 tab.
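Compact difference relations of the kind used here solve a small implicit system for the derivative values. As a simpler stand-in for the paper's fifth- and sixth-order relations, the classical fourth-order symmetric compact (Padé) scheme on a periodic grid looks like this (a dense solve is used for clarity; production codes use a tridiagonal solver):

```python
import numpy as np

def compact_derivative(f, h):
    """Fourth-order symmetric compact first derivative, periodic grid:
        (1/4) f'_{i-1} + f'_i + (1/4) f'_{i+1}
            = 3 (f_{i+1} - f_{i-1}) / (4 h)."""
    n = len(f)
    A = np.eye(n)
    rhs = np.zeros(n)
    for i in range(n):
        A[i, (i - 1) % n] = 0.25
        A[i, (i + 1) % n] = 0.25
        rhs[i] = 3.0 * (f[(i + 1) % n] - f[(i - 1) % n]) / (4.0 * h)
    return np.linalg.solve(A, rhs)
```

The implicit coupling is what gives compact schemes their spectral-like resolution on coarse grids compared with explicit stencils of the same width.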
Accurate estimation of influenza epidemics using Google search data via ARGO.
Yang, Shihao; Santillana, Mauricio; Kou, S C
2015-11-24
Accurate real-time tracking of influenza outbreaks helps public health officials make timely and meaningful decisions that could save lives. We propose an influenza tracking model, ARGO (AutoRegression with GOogle search data), that uses publicly available online search data. In addition to having a rigorous statistical foundation, ARGO outperforms all previously available Google-search-based tracking models, including the latest version of Google Flu Trends, even though it uses only low-quality search data as input from publicly available Google Trends and Google Correlate websites. ARGO not only incorporates the seasonality in influenza epidemics but also captures changes in people's online search behavior over time. ARGO is also flexible, self-correcting, robust, and scalable, making it a potentially powerful tool that can be used for real-time tracking of other social events at multiple temporal and spatial resolutions.
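The core of an ARGO-style model is autoregression on past flu activity with exogenous search-query terms. A minimal ordinary-least-squares sketch (the published model uses many query terms, log transforms, and L1 regularization over sliding windows, none of which are shown here):

```python
import numpy as np

def build_argo_design(flu, queries, p=3):
    """Design matrix with an intercept, p autoregressive lags of flu
    activity, and contemporaneous search-query frequencies (ARX form).
    `queries` is an (n_weeks, n_terms) array."""
    n = len(flu)
    rows, targets = [], []
    for t in range(p, n):
        rows.append(np.concatenate(([1.0], flu[t - p:t], queries[t])))
        targets.append(flu[t])
    return np.array(rows), np.array(targets)

def fit_ols(X, y):
    return np.linalg.lstsq(X, y, rcond=None)[0]
```

The autoregressive lags capture the epidemic's own seasonality and persistence, while the query columns let the model react to changes in search behavior, the two ingredients the abstract highlights.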
Accurate and precise calibration of AFM cantilever spring constants using laser Doppler vibrometry.
Gates, Richard S; Pratt, Jon R
2012-09-21
Accurate cantilever spring constants are important in atomic force microscopy both in control of sensitive imaging and to provide correct nanomechanical property measurements. Conventional atomic force microscope (AFM) spring constant calibration techniques are usually performed in an AFM. They rely on significant handling and often require touching the cantilever probe tip to a surface to calibrate the optical lever sensitivity of the configuration. This can damage the tip. The thermal calibration technique developed for laser Doppler vibrometry (LDV) can be used to calibrate cantilevers without handling or touching the tip to a surface. Both flexural and torsional spring constants can be measured. Using both Euler-Bernoulli modeling and an SI traceable electrostatic force balance technique as a comparison we demonstrate that the LDV thermal technique is capable of providing rapid calibrations with a combination of ease, accuracy and precision beyond anything previously available.
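The thermal calibration rests on the equipartition theorem: the mean-square thermal displacement of the cantilever fixes its spring constant. A bare-bones sketch, omitting the mode-shape and detection-sensitivity corrections a careful calibration requires:

```python
def spring_constant_thermal(mean_sq_disp_m2, temperature_K=295.0):
    """Equipartition estimate: (1/2) k <x^2> = (1/2) kB T, so
    k = kB T / <x^2>.  Mode-shape correction factors (about 0.97 for
    the fundamental flexural mode) are omitted in this sketch."""
    kB = 1.380649e-23  # Boltzmann constant, J/K
    return kB * temperature_K / mean_sq_disp_m2
```

The advantage of measuring <x^2> with laser Doppler vibrometry, as the abstract notes, is that the displacement is obtained directly, with no need to touch the tip to a surface to calibrate an optical-lever sensitivity.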
Generation of accurate integral surfaces in time-dependent vector fields.
Garth, Christoph; Krishnan, Han; Tricoche, Xavier; Bobach, Tom; Joy, Kenneth I
2008-01-01
We present a novel approach for the direct computation of integral surfaces in time-dependent vector fields. As opposed to previous work, which we analyze in detail, our approach is based on a separation of integral surface computation into two stages: surface approximation and generation of a graphical representation. This allows us to overcome several limitations of existing techniques. We first describe an algorithm for surface integration that approximates a series of time lines using iterative refinement and computes a skeleton of the integral surface. In a second step, we generate a well-conditioned triangulation. Our approach allows a highly accurate treatment of very large time-varying vector fields in an efficient, streaming fashion. We examine the properties of the presented methods on several example datasets and perform a numerical study of its correctness and accuracy. Finally, we investigate some visualization aspects of integral surfaces. PMID:18988990
Simple accurate approximations for the optical properties of metallic nanospheres and nanoshells.
Schebarchov, Dmitri; Auguié, Baptiste; Le Ru, Eric C
2013-03-28
This work aims to provide simple and accurate closed-form approximations to predict the scattering and absorption spectra of metallic nanospheres and nanoshells supporting localised surface plasmon resonances. Particular attention is given to the validity and accuracy of these expressions in the range of nanoparticle sizes relevant to plasmonics, typically limited to around 100 nm in diameter. Using recent results on the rigorous radiative correction of electrostatic solutions, we propose a new set of long-wavelength polarizability approximations for both nanospheres and nanoshells. The improvement offered by these expressions is demonstrated with direct comparisons to other approximations previously obtained in the literature, and their absolute accuracy is tested against the exact Mie theory. PMID:23358525
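The long-wavelength polarizability approximations discussed here start from the quasi-static dipole polarizability of a sphere and apply a radiative correction. A sketch of the standard form (the paper's improved expressions add further size-dependent terms not shown here):

```python
import numpy as np

def sphere_polarizability(eps, a, k):
    """Quasi-static dipole polarizability of a sphere of radius a and
    relative permittivity eps, with the radiative correction:
        alpha0 = 4*pi*a^3 * (eps - 1) / (eps + 2)
        alpha  = alpha0 / (1 - 1j * k^3 * alpha0 / (6*pi))."""
    alpha0 = 4.0 * np.pi * a**3 * (eps - 1.0) / (eps + 2.0)
    return alpha0 / (1.0 - 1j * k**3 * alpha0 / (6.0 * np.pi))

def extinction_cross_section(eps, a, k):
    # Optical theorem for a point dipole: sigma_ext = k * Im(alpha).
    return k * np.imag(sphere_polarizability(eps, a, k))
```

The radiative correction is what makes the extinction of a lossless sphere nonzero (pure scattering) and keeps the optical theorem satisfied, which the uncorrected electrostatic solution violates.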
Accurate calculation of computer-generated holograms using angular-spectrum layer-oriented method.
Zhao, Yan; Cao, Liangcai; Zhang, Hao; Kong, Dezhao; Jin, Guofan
2015-10-01
Fast calculation and correct depth cue are crucial issues in the calculation of computer-generated hologram (CGH) for high quality three-dimensional (3-D) display. An angular-spectrum based algorithm for layer-oriented CGH is proposed. Angular spectra from each layer are synthesized as a layer-corresponded sub-hologram based on the fast Fourier transform without paraxial approximation. The proposed method can avoid the huge computational cost of the point-oriented method and yield accurate predictions of the whole diffracted field compared with other layer-oriented methods. CGHs of versatile formats of 3-D digital scenes, including computed tomography and 3-D digital models, are demonstrated with precise depth performance and advanced image quality. PMID:26480062
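Angular-spectrum propagation of a single layer, the building block of the proposed method, can be sketched with two FFTs and the exact (non-paraxial) transfer function. This is a generic sketch, not the authors' implementation:

```python
import numpy as np

def angular_spectrum_propagate(u0, wavelength, dx, z):
    """Propagate a sampled complex field u0 (square array, pixel pitch
    dx) a distance z using the angular-spectrum transfer function
    exp(i*kz*z), with no paraxial approximation.  Evanescent components
    (kz imaginary) are simply zeroed."""
    n = u0.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 / wavelength**2 - FX**2 - FY**2
    kz = 2.0 * np.pi * np.sqrt(np.maximum(arg, 0.0))
    H = np.exp(1j * kz * z) * (arg > 0)
    return np.fft.ifft2(np.fft.fft2(u0) * H)
```

Summing such propagated spectra over depth layers, each at its own z, is the layer-oriented synthesis the abstract describes; the FFT keeps the cost far below point-oriented summation.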
Limbago, Brandi M
2016-03-01
Bacteria in the Staphylococcus intermedius group, including Staphylococcus pseudintermedius, often encode mecA-mediated methicillin resistance. Reliable detection of this phenotype for proper treatment and infection control decisions requires that these coagulase-positive staphylococci are accurately identified and specifically that they are not misidentified as S. aureus. As correct species level bacterial identification becomes more commonplace in clinical laboratories, one can expect to see changes in guidance for antimicrobial susceptibility testing and interpretation. The study by Wu et al. in this issue (M. T. Wu, C.-A. D. Burnham, L. F. Westblade, J. Dien Bard, S. D. Lawhon, M. A. Wallace, T. Stanley, E. Burd, J. Hindler, R. M. Humphries, J Clin Microbiol 54:535-542, 2016, http://dx.doi.org/10.1128/JCM.02864-15) highlights the impact of robust identification of S. intermedius group organisms on the selection of appropriate antimicrobial susceptibility testing methods and interpretation.
Accurate force fields and methods for modelling organic molecular crystals at finite temperatures.
Nyman, Jonas; Pundyke, Orla Sheehan; Day, Graeme M
2016-06-21
We present an assessment of the performance of several force fields for modelling intermolecular interactions in organic molecular crystals using the X23 benchmark set. The performance of the force fields is compared to several popular dispersion corrected density functional methods. In addition, we present our implementation of lattice vibrational free energy calculations in the quasi-harmonic approximation, using several methods to account for phonon dispersion. This allows us to also benchmark the force fields' reproduction of finite temperature crystal structures. The results demonstrate that anisotropic atom-atom multipole-based force fields can be as accurate as several popular DFT-D methods, but have errors 2-3 times larger than the current best DFT-D methods. The largest error in the examined force fields is a systematic underestimation of the (absolute) lattice energy.
An Accurate Quartic Force Field and Vibrational Frequencies for HNO and DNO
NASA Technical Reports Server (NTRS)
Dateo, Christopher E.; Lee, Timothy J.; Schwenke, David W.
1994-01-01
An accurate ab initio quartic force field for HNO has been determined using the singles and doubles coupled-cluster method that includes a perturbational estimate of the effects of connected triple excitations, CCSD(T), in conjunction with the correlation consistent polarized valence triple zeta (cc-pVTZ) basis set. Improved harmonic frequencies were determined with the cc-pVQZ basis set. Fundamental vibrational frequencies were determined using a second-order perturbation theory analysis and also using variational calculations. The N-O stretch and bending fundamentals are determined well from both vibrational analyses. The H-N stretch, however, is shown to have an unusually large anharmonic correction, and is not well determined using second-order perturbation theory. The H-N fundamental is well determined from the variational calculations, demonstrating the quality of the ab initio quartic force field. The zero-point energy of HNO that should be used in isodesmic reactions is also discussed.
Extracting Time-Accurate Acceleration Vectors From Nontrivial Accelerometer Arrangements.
Franck, Jennifer A; Blume, Janet; Crisco, Joseph J; Franck, Christian
2015-09-01
Sports-related concussions are of significant concern in many impact sports, and their detection relies on accurate measurements of the head kinematics during impact. Among the most prevalent recording technologies are videography, and more recently, the use of single-axis accelerometers mounted in a helmet, such as the HIT system. Successful extraction of the linear and angular impact accelerations depends on an accurate analysis methodology governed by the equations of motion. Current algorithms are able to estimate the magnitude of acceleration and hit location, but make assumptions about the hit orientation and are often limited in the position and/or orientation of the accelerometers. The newly formulated algorithm presented in this manuscript accurately extracts the full linear and rotational acceleration vectors from a broad arrangement of six single-axis accelerometers directly from the governing set of kinematic equations. The new formulation linearizes the nonlinear centripetal acceleration term with a finite-difference approximation and provides a fast and accurate solution for all six components of acceleration over long time periods (>250 ms). The approximation of the nonlinear centripetal acceleration term provides an accurate computation of the rotational velocity as a function of time and allows for reconstruction of a multiple-impact signal. Furthermore, the algorithm determines the impact location and orientation and can distinguish between glancing, high rotational velocity impacts, or direct impacts through the center of mass. Results are shown for ten simulated impact locations on a headform geometry computed with three different accelerometer configurations in varying degrees of signal noise. Since the algorithm does not require simplifications of the actual impacted geometry, the impact vector, or a specific arrangement of accelerometer orientations, it can be easily applied to many impact investigations in which accurate kinematics need to
Thermoelectric Corrections to Quantum Measurement
NASA Astrophysics Data System (ADS)
Bergfield, Justin; Ratner, Mark; Stafford, Charles; di Ventra, Massimiliano
The voltage and temperature measured by a floating probe of a nonequilibrium quantum system is shown to exhibit nontrivial thermoelectric corrections at finite temperature. Using a realistic model of a scanning thermal microscope to calculate the voltage and temperature distributions, we predict quantum temperature variations along graphene nanoribbons subject to a thermal bias which are not simply related to the local density of states. Experimentally, the wavelength of the oscillations can be tuned over several orders of magnitude by gating/doping, bringing quantum temperature oscillations within reach of the spatial resolution of existing measurement techniques. We also find that the Peltier cooling/heating which causes the temperature oscillations can lead to significant errors in voltage measurements for a wide range of systems.
Quantum Error Correction for Metrology
NASA Astrophysics Data System (ADS)
Sushkov, Alex; Kessler, Eric; Lovchinsky, Igor; Lukin, Mikhail
2014-05-01
The question of the best achievable sensitivity in a quantum measurement is of great experimental relevance, and has seen a lot of attention in recent years. Recent studies [e.g., Nat. Phys. 7, 406 (2011), Nat. Comms. 3, 1063 (2012)] suggest that in most generic scenarios any potential quantum gain (e.g. through the use of entangled states) vanishes in the presence of environmental noise. To overcome these limitations, we propose and analyze a new approach to improve quantum metrology based on quantum error correction (QEC). We identify the conditions under which QEC allows one to improve the signal-to-noise ratio in quantum-limited measurements, and we demonstrate that it enables, in certain situations, Heisenberg-limited sensitivity. We discuss specific applications to nanoscale sensing using nitrogen-vacancy centers in diamond in which QEC can significantly improve the measurement sensitivity and bandwidth under realistic experimental conditions.
NASA Astrophysics Data System (ADS)
Fitzpatrick, A. Liam; Kaplan, Jared
2016-05-01
We use results on Virasoro conformal blocks to study chaotic dynamics in CFT2 at large central charge c. The Lyapunov exponent λ_L, which is a diagnostic for the early onset of chaos, receives 1/c corrections that may be interpreted as λ_L = (2π/β)(1 + 12/c). However, out-of-time-order correlators receive other equally important 1/c-suppressed contributions that do not have such a simple interpretation. We revisit the proof of a bound on λ_L that emerges at large c, focusing on CFT2 and explaining why our results do not conflict with the analysis leading to the bound. We also comment on relationships between chaos, scattering, causality, and bulk locality.
Accurate stress resultants equations for laminated composite deep thick shells
Qatu, M.S.
1995-11-01
This paper derives accurate equations for the normal and shear force as well as the bending and twisting moment resultants for laminated composite deep, thick shells. The stress resultant equations for laminated composite thick shells are shown to be different from those of plates. This is due to the fact that the stresses over the thickness of the shell have to be integrated on a trapezoidal-like shell element to obtain the stress resultants. Numerical results are obtained and show that accurate stress resultants are needed for laminated composite deep thick shells, especially if the curvature is not spherical.
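The trapezoidal-element effect the paper describes enters the thickness integration as a (1 + z/R) factor, where R is the shell's radius of curvature. A numerical sketch (illustrative, not the paper's laminate formulation):

```python
import numpy as np

def shell_force_resultant(sigma, z, R):
    """In-plane force resultant for a shell of curvature radius R.
    The (1 + z/R) factor accounts for the trapezoidal element shape;
    as R -> infinity this reduces to the flat-plate integral of sigma
    through the thickness.  Uses a composite trapezoid rule."""
    g = sigma * (1.0 + z / R)
    return np.sum(0.5 * (g[1:] + g[:-1]) * np.diff(z))
```

For a pure bending stress distribution (sigma linear in z) the plate resultant vanishes, while the deep shell picks up a curvature-dependent contribution of h^3/(12R), which is why plate-style resultant equations break down for deep thick shells.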
Must Kohn-Sham oscillator strengths be accurate at threshold?
Yang Zenghui; Burke, Kieron; Faassen, Meta van
2009-09-21
The exact ground-state Kohn-Sham (KS) potential for the helium atom is known from accurate wave function calculations of the ground-state density. The threshold for photoabsorption from this potential matches the physical system exactly. By carefully studying its absorption spectrum, we show the answer to the title question is no. To address this problem in detail, we generate a highly accurate simple fit of a two-electron spectrum near the threshold, and apply the method to both the experimental spectrum and that of the exact ground-state Kohn-Sham potential.
Accurate upwind-monotone (nonoscillatory) methods for conservation laws
NASA Technical Reports Server (NTRS)
Huynh, Hung T.
1992-01-01
The well known MUSCL scheme of Van Leer is constructed using a piecewise linear approximation. The MUSCL scheme is second order accurate at the smooth part of the solution except at extrema where the accuracy degenerates to first order due to the monotonicity constraint. To construct accurate schemes which are free from oscillations, the author introduces the concept of upwind monotonicity. Several classes of schemes, which are upwind monotone and of uniform second or third order accuracy are then presented. Results for advection with constant speed are shown. It is also shown that the new scheme compares favorably with state of the art methods.
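A minmod-limited MUSCL update for linear advection illustrates the monotonicity constraint the abstract refers to: the limiter zeroes the reconstruction slope at extrema, which preserves monotonicity but is also what degrades the accuracy to first order there. A sketch, not Van Leer's original formulation:

```python
import numpy as np

def minmod(a, b):
    """Limited slope: zero where a and b disagree in sign (an extremum),
    otherwise the smaller-magnitude of the two one-sided slopes."""
    return np.where(a * b > 0.0,
                    np.where(np.abs(a) < np.abs(b), a, b), 0.0)

def muscl_step(u, c):
    """One MUSCL step for u_t + a u_x = 0 with a > 0 on a periodic
    grid; c = a*dt/dx is the CFL number (stable for 0 < c <= 1)."""
    up = np.roll(u, -1)                  # u_{i+1}
    um = np.roll(u, 1)                   # u_{i-1}
    s = minmod(u - um, up - u)           # limited slope in each cell
    flux = u + 0.5 * (1.0 - c) * s       # reconstructed value at i+1/2
    return u - c * (flux - np.roll(flux, 1))
```

With the limiter active the scheme is total-variation diminishing: an advected step profile develops no new over- or undershoots, at the cost of first-order clipping at the profile's corners.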
Fringe Capacitance Correction for a Coaxial Soil Cell
Pelletier, Mathew G.; Viera, Joseph A.; Schwartz, Robert C.; Lascano, Robert J.; Evett, Steven R.; Green, Tim R.; Wanjura, John D.; Holt, Greg A.
2011-01-01
Accurate measurement of moisture content is a prime requirement in hydrological, geophysical and biogeochemical research as well as for material characterization and process control. Within these areas, accurate measurements of surface area and bound water content are becoming increasingly important for providing answers to many fundamental questions, ranging from characterization of cotton fiber maturity, to accurate characterization of soil water content in soil water conservation research, to bio-plant water utilization, to chemical reactions and diffusion of ionic species across membranes in cells as well as in the dense suspensions that occur in surface films. One promising technique to address the increasing demands for higher accuracy water content measurements is electrical permittivity characterization of materials. This technique has enjoyed a strong following in the soil-science and geological community through measurements of apparent permittivity via time-domain reflectometry (TDR), as well as in many process control applications. Recent research, however, indicates a need to increase the accuracy beyond that available from traditional TDR. The most logical pathway then becomes a transition from TDR-based measurements to network analyzer measurements of absolute permittivity, which will remove the adverse effects that high surface area soils and conductivity impart onto measurements of apparent permittivity in traditional TDR applications. This research examines an observed experimental error for the coaxial probe, from which the modern TDR probe originated, which is hypothesized to be due to fringe capacitance. The research provides an experimental and theoretical basis for the cause of the error and provides a technique by which to correct the system to remove this source of error. To test this theory, a Poisson model of a coaxial cell was formulated to calculate the effective theoretical extra length caused by the fringe capacitance.
Temperature correction in conductivity measurements
Smith, Stanford H.
1962-01-01
Electrical conductivity has been widely used in freshwater research but usual methods employed by limnologists for converting measurements to conductance at a given temperature have not given uniformly accurate results. The temperature coefficient used to adjust conductivity of natural waters to a given temperature varies depending on the kinds and concentrations of electrolytes, the temperature at the time of measurement, and the temperature to which measurements are being adjusted. The temperature coefficient was found to differ for various lake and stream waters, and showed seasonal changes. High precision can be obtained only by determining temperature coefficients for each water studied. Mean temperature coefficients are given for various temperature ranges that may be used where less precision is required.
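The adjustment described above is commonly expressed with a per-degree temperature coefficient; a minimal sketch, where the default coefficient is purely illustrative, since the abstract stresses that the coefficient varies by water body and season:

```python
def conductivity_at_reference(c_measured, t_measured, alpha=0.02, t_ref=25.0):
    """Convert a conductivity measured at t_measured (deg C) to the
    reference temperature t_ref, assuming a linear fractional change of
    alpha per degree.  alpha = 0.02 (2 %/deg C) is only a representative
    value; for high precision it must be determined per water studied.
    """
    return c_measured / (1.0 + alpha * (t_measured - t_ref))
```

For example, 500 uS/cm measured at 15 deg C adjusts to 625 uS/cm at 25 deg C with the 2 %/deg C coefficient.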
ACCURATE TEMPERATURE MEASUREMENTS IN A NATURALLY-ASPIRATED RADIATION SHIELD
Kurzeja, R.
2009-09-09
Experiments and calculations were conducted with a 0.13 mm fine wire thermocouple within a naturally-aspirated Gill radiation shield to assess and improve the accuracy of air temperature measurements without the use of mechanical aspiration, wind speed or radiation measurements. It was found that this thermocouple measured the air temperature with root-mean-square errors of 0.35 K within the Gill shield without correction. A linear temperature correction was evaluated based on the difference between the interior plate and thermocouple temperatures. This correction was found to be relatively insensitive to shield design and yielded an error of 0.16 K for combined day and night observations. The correction was reliable in the daytime when the wind speed usually exceeds 1 m s{sup -1} but occasionally performed poorly at night during very light winds. Inspection of the standard deviation in the thermocouple wire temperature identified these periods but did not unambiguously locate the most serious events. However, estimates of sensor accuracy during these periods are complicated by the much larger sampling volume of the mechanically-aspirated sensor compared with the naturally-aspirated sensor and the presence of significant near-surface temperature gradients. The root-mean-square errors therefore are upper limits to the aspiration error since they include intrinsic sensor differences and intermittent volume sampling differences.
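A linear correction of the kind evaluated here can be sketched as follows; the slope k would be fit against mechanically-aspirated reference data, and the value used below is purely illustrative:

```python
def corrected_air_temperature(t_thermocouple, t_plate, k=0.2):
    """Correct a naturally-aspirated thermocouple reading using the
    difference between the shield's interior plate temperature and the
    thermocouple temperature.  k is a regression slope determined from
    aspirated reference observations; 0.2 is only an illustrative value.
    """
    return t_thermocouple - k * (t_plate - t_thermocouple)
```

With k = 0.2, a thermocouple reading of 20.5 deg C alongside a plate at 23.0 deg C corrects to 20.0 deg C, removing part of the daytime radiative warm bias.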
Joint Correction of Ionospheric Artifact and Orbital Error in L-band SAR Interferometry
NASA Astrophysics Data System (ADS)
Jung, H.; Liu, Z.; Lu, Z.
2012-12-01
Synthetic aperture radar interferometry (InSAR) is a powerful technique to measure surface deformation. However, the accuracy of this technique for L-band synthetic aperture radar (SAR) systems is largely compromised by ionospheric path delays on the radar signals. The ionospheric effect causes severe distortion, called azimuth streaking, in SAR backscattering intensity images as well as long-wavelength phase distortion similar to orbital ramp error. Effective detection and correction of ionospheric phase distortion from L-band InSAR images are necessary to measure and interpret surface displacement accurately. Recently, Jung et al. (2012) proposed an efficient method to correct ionospheric phase distortions using the multiple aperture interferometry (MAI) interferogram. In this study, we extend this technique to correct the ionosphere effect in InSAR measurements of interseismic deformation. We present case studies in southern California using L-band ALOS PALSAR data and in-situ GPS measurements and show that the long-wavelength noise can be removed by joint correction of the ionospheric artifact and the orbital error. [Figure: displacement maps created from the 20070715-20091020 ALOS PALSAR pair, (a-b) before and after joint correction of ionospheric artifact and orbital error, and (c) after correction from a 2D-polynomial fit.] [Figure: displacement maps created from the 20071015-20091020 ALOS PALSAR pair, (a-b) before and after joint correction of ionospheric artifact and orbital error, and (c) after correction from a 2D-polynomial fit.]
Comparison and Analysis of Geometric Correction Models of Spaceborne SAR.
Jiang, Weihao; Yu, Anxi; Dong, Zhen; Wang, Qingsong
2016-01-01
Following the development of synthetic aperture radar (SAR), SAR images have become increasingly common. Many researchers have conducted large studies on geolocation models, but little work has been conducted on the available models for the geometric correction of SAR images of different terrain. To address the terrain issue, four different models were compared and are described in this paper: a rigorous range-Doppler (RD) model, a rational polynomial coefficients (RPC) model, a revised polynomial (PM) model and an elevation derivation (EDM) model. The results of comparisons of the geolocation capabilities of the models show that a proper model for a SAR image of a specific terrain can be determined. A solution table was obtained to recommend a suitable model for users. Three TerraSAR-X images, two ALOS-PALSAR images and one Envisat-ASAR image were used for the experiment, including flat terrain and mountain terrain SAR images as well as two large area images. Geolocation accuracies of the models for the different terrain SAR images were computed and analyzed. The comparisons of the models show that the RD model was accurate but was the least efficient; therefore, it is not the ideal model for real-time implementations. The RPC model is sufficiently accurate and efficient for the geometric correction of SAR images of flat terrain, with precision below 0.001 pixels. The EDM model is suitable for the geolocation of SAR images of mountainous terrain, and its precision can reach 0.007 pixels. Although the PM model does not produce results as precise as the other models, its efficiency is excellent and its potential should not be underestimated. With respect to the geometric correction of SAR images over large areas, the EDM model achieves accuracy better than one pixel, whereas the RPC model consumes one third of the time of the EDM model. PMID:27347973
Empirical corrections for atmospheric neutral density derived from thermospheric models
NASA Astrophysics Data System (ADS)
Forootan, Ehsan; Kusche, Jürgen; Börger, Klaus; Henze, Christina; Löcher, Anno; Eickmans, Marius; Agena, Jens
2016-04-01
Accurately predicting satellite positions is a prerequisite for various applications from space situational awareness to precise orbit determination (POD). Given the fact that atmospheric drag represents a dominant influence on the position of low-Earth orbit objects, an accurate evaluation of thermospheric mass density is of great importance to low Earth orbital prediction. Over decades, various empirical atmospheric models have been developed to support computation of density changes within the atmosphere. The quality of these models is, however, restricted mainly due to the complexity of atmospheric density changes and the limited resolution of indices used to account for atmospheric temperature and neutral density changes caused by solar and geomagnetic activity. Satellite missions, such as Challenging Mini-Satellite Payload (CHAMP) and Gravity Recovery and Climate Experiment (GRACE), provide a direct measurement of non-conservative accelerations, acting on the surface of satellites. These measurements provide valuable data for improving our knowledge of thermosphere density and winds. In this paper we present two empirical frameworks to correct model-derived neutral density simulations by the along-track thermospheric density measurements of CHAMP and GRACE. First, empirical scale factors are estimated by analyzing daily CHAMP and GRACE acceleration measurements and are used to correct the density simulation of Jacchia and MSIS (Mass-Spectrometer-Incoherent-Scatter) thermospheric models. The evolution of daily scale factors is then related to solar and magnetic activity enabling their prediction in time. In the second approach, principal component analysis (PCA) is applied to extract the dominant modes of differences between CHAMP/GRACE observations and thermospheric model simulations. Afterwards an adaptive correction procedure is used to account for long-term and high-frequency differences. We conclude the study by providing recommendations on possible
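The first framework's daily scale factor can be illustrated as a least-squares ratio between observed and model-derived densities along the orbit; this is a minimal sketch of the idea, not the authors' exact estimator:

```python
import numpy as np


def daily_scale_factor(rho_observed, rho_model):
    """Least-squares scale factor s minimizing ||rho_observed - s * rho_model||
    over one day of along-track thermospheric density samples.  Applying
    s to the model densities corrects their overall level toward the
    accelerometer-derived observations.
    """
    rho_observed = np.asarray(rho_observed, dtype=float)
    rho_model = np.asarray(rho_model, dtype=float)
    return float(rho_observed @ rho_model / (rho_model @ rho_model))
```

If the model underestimates density by a uniform factor of two, the estimator returns exactly 2.0; in practice the daily factors fluctuate with solar and geomagnetic activity, which is what makes them predictable from activity indices.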
Correction for etch proximity: new models and applications
NASA Astrophysics Data System (ADS)
Granik, Yuri
2001-09-01
Short-range etch proximity effects increase intra-die CD variability and degrade the IC performance and yield. Tight control of the etch bias is an increasingly critical factor in realizing the ITRS technology nodes. The 2000 technology nodes revision added a new category, the post-etch 'physical' gate length metric, which is 9 - 17% smaller than the 'in-resist' gate length. We present new etch proximity correction methods and models designed to reduce the negative impact of etch-induced CD variability and increase the uniformity of controlled over-etching. Resolution Enhancement Technologies (RET) design correction methods typically employ 'lumped' process models. We found that an alternative methodology based upon separation of the process factors and the related models may yield better accuracy and performance, and better suit the design and process optimization flows. The contributions from the reticle, the optics, the wafer, and etch are individually determined and then used either separately or in aggregation for the most flexible and optimum correction of their respective contributions. The etch corrections are based on the Variable Etch Bias model (VEB model). This semi-empirical model requires experimental CD information to be collected from the test patterns under fixed process conditions (point-process model). It demonstrates excellent fit to the early experimental CD-SEM data gathered to date, which spans a variety of layout features and process conditions. The VEB model works in conjunction with the Calibre software system's Variable Threshold Resist-Extended (VTR-E) model; however, the etching is modeled separately from the optics and the resist processing. This yields better understanding and more accurate explanation of the experiments than those that are produced by the 'lumped' process modeling. The VEB model explains etch-induced bias in terms of the following three proximity characteristics or variables: effective trench width (or pattern separation), pattern
Digital correction of computed X-radiographs for coral densitometry
NASA Astrophysics Data System (ADS)
Boucher, H.; Duprey, N.; Jiménez, C.
2011-12-01
Corals are widely used for assessment of environmental and climatic changes as their skeletal growth is influenced by the surrounding environment. Variations in skeletal density are sensitive to environmental variations (water temperature, nutrient concentration, etc.). Digitized X-radiographs have been used for coral skeleton density measurements since the 1980s. However, the shape of the X-ray beam emitted during the irradiation process is strongly distorted due to spherical spreading (inverse square law) and the heel effect. Consequently, the X-ray intensity intersecting the surface of the sensitive film or the electronic sensor (e.g. PSL plate) is heterogeneous. These heterogeneities are characterized by an asymmetrical concentric pattern of decreasing intensity from the center to the edges of the X-radiographs. It commonly generates an error on density measurements that may reach up to 40%. This is twice as much as the seasonal density variations that are usually found in corals. Until now, extra X-ray images or aluminum standards were used to correct X-radiographs. Such corrective methods may be constraining when working with a high number of coral samples. We present an inexpensive, straightforward, and accurate method to correct strong heterogeneities of X-ray irradiation that affect X-ray images. The method relies on the relation between optical density (OD) and skeletal density; it is non-destructive, and provides high-resolution measurements. Our method was applied to measure density variations on the Caribbean reef-building coral Siderastrea siderea from Costa Rica. The basic assumption is that the X-radiograph background, i.e., areas without objects, records the asymmetrical concentric pattern of X-ray intensity. A full image of this pattern was created with a natural neighbor interpolation. The resulting modeled image was then subtracted from the original X-ray image, thus permitting a reliable OD measurement directly on the corrected X-ray image. This Digital
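The background-subtraction idea can be sketched in simplified form; the paper interpolates the full heterogeneity pattern with natural neighbors, whereas this illustration fits only a plane to the background pixels to keep the code short:

```python
import numpy as np


def subtract_background(image, background_mask):
    """Estimate a smooth background from pixels flagged as object-free
    film background and subtract it from the whole image.  Here the
    background model is a plane fitted by least squares; the actual
    method uses a natural-neighbor interpolation of the full pattern.
    """
    yy, xx = np.indices(image.shape)
    # Design matrix [x, y, 1] for the background pixels only.
    A = np.column_stack([xx[background_mask],
                         yy[background_mask],
                         np.ones(int(background_mask.sum()))])
    coef, *_ = np.linalg.lstsq(A, image[background_mask], rcond=None)
    background = coef[0] * xx + coef[1] * yy + coef[2]
    return image - background
```

If the heterogeneity really is planar, the corrected image of an object-free radiograph is flat at zero, so optical density can be read directly off the result.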
NASA Astrophysics Data System (ADS)
Tao, Jianmin; Rappe, Andrew M.
2016-01-01
Due to the absence of the long-range van der Waals (vdW) interaction, conventional density functional theory (DFT) often fails in the description of molecular complexes and solids. In recent years, considerable progress has been made in the development of the vdW correction. However, the vdW correction based on the leading-order coefficient C6 alone can only achieve limited accuracy, while accurate modeling of higher-order coefficients remains a formidable task, due to the strong non-additivity effect. Here, we apply a model dynamic multipole polarizability within a modified single-frequency approximation to calculate C8 and C10 between small molecules. We find that the higher-order vdW coefficients from this model can achieve remarkable accuracy, with mean absolute relative deviations of 5% for C8 and 7% for C10. Inclusion of accurate higher-order contributions in the vdW correction will effectively enhance the predictive power of DFT in condensed matter physics and quantum chemistry.
Accurate and predictive antibody repertoire profiling by molecular amplification fingerprinting
Khan, Tarik A.; Friedensohn, Simon; de Vries, Arthur R. Gorter; Straszewski, Jakub; Ruscheweyh, Hans-Joachim; Reddy, Sai T.
2016-01-01
High-throughput antibody repertoire sequencing (Ig-seq) provides quantitative molecular information on humoral immunity. However, Ig-seq is compromised by biases and errors introduced during library preparation and sequencing. By using synthetic antibody spike-in genes, we determined that primer bias from multiplex polymerase chain reaction (PCR) library preparation resulted in antibody frequencies with only 42 to 62% accuracy. Additionally, Ig-seq errors resulted in antibody diversity measurements being overestimated by up to 5000-fold. To rectify this, we developed molecular amplification fingerprinting (MAF), which uses unique molecular identifier (UID) tagging before and during multiplex PCR amplification, which enabled tagging of transcripts while accounting for PCR efficiency. Combined with a bioinformatic pipeline, MAF bias correction led to measurements of antibody frequencies with up to 99% accuracy. We also used MAF to correct PCR and sequencing errors, resulting in enhanced accuracy of full-length antibody diversity measurements, achieving 98 to 100% error correction. Using murine MAF-corrected data, we established a quantitative metric of recent clonal expansion—the intraclonal diversity index—which measures the number of unique transcripts associated with an antibody clone. We used this intraclonal diversity index along with antibody frequencies and somatic hypermutation to build a logistic regression model for prediction of the immunological status of clones. The model was able to predict clonal status with high confidence but only when using MAF error and bias corrected Ig-seq data. Improved accuracy by MAF provides the potential to greatly advance Ig-seq and its utility in immunology and biotechnology. PMID:26998518
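The error-correction step enabled by UID tagging can be illustrated by a per-position majority vote over reads sharing a UID; this is a simplified stand-in for the MAF pipeline, not its actual implementation:

```python
from collections import Counter


def uid_consensus(reads_by_uid):
    """Collapse reads that share a unique molecular identifier (UID) into
    one consensus sequence by per-position majority vote, which cancels
    isolated PCR and sequencing errors.  reads_by_uid maps each UID to a
    list of equal-length sequences derived from the same transcript.
    """
    consensus = {}
    for uid, seqs in reads_by_uid.items():
        consensus[uid] = "".join(
            Counter(column).most_common(1)[0][0] for column in zip(*seqs))
    return consensus
```

A single miscalled base in one of three reads is outvoted, so the diversity measurement counts one molecule instead of two artifactual variants.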
Radiometric terrain correction of SPOT5 image
NASA Astrophysics Data System (ADS)
Feng, Xiuli; Zhang, Feng; Wang, Ke
2007-06-01
Remote sensing SPOT5 images have been widely applied to the surveying of agriculture and forest resources and to the monitoring of the ecological environment of mountain areas. However, the accuracy of land-cover classification of mountain areas is often influenced by the topographical shadow effect. Radiometric terrain correction is important for this kind of application. In this study, a radiometric terrain correction model based on the rationale of moment matching was built in ERDAS IMAGINE using the Spatial Modeler Language. With Lanxi City, China, as the study area, a SPOT5 multispectral image of that mountainous area, with a spatial resolution of 10 m, was corrected by the model. Furthermore, to demonstrate the advantage of this new model in radiometric terrain correction of remote sensing SPOT5 images, the traditional C correction approach was also applied to the same area to compare its result with that of the radiometric terrain correction model. The results show that the C correction approach keeps the overall statistical characteristics of the spectral bands: the mean and the standard deviation of the corrected image are the same as the original ones. However, the standard deviation became smaller under the radiometric terrain correction model and the mean changed accordingly. The reason for these changes is that, before the correction, the histogram of the original image showed a positively skewed distribution due to the relief-caused shading effect; after correction by the model, the histogram of the image followed a normal distribution and the shading effect of the relief had been removed. As for the result of the traditional C approach, the skewness of the histogram remained the same after the correction. Besides, some portions of the mountain area were over-corrected. Thus, in the study area, the C correction approach cannot ideally remove the shading effect of the relief. The results show that the radiometric
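The core operation of moment matching is a linear rescale of pixel values toward target first and second moments; a minimal sketch (the paper implements the model in the ERDAS Spatial Modeler, and how the target moments are chosen, e.g. from unshaded reference areas, is an assumption here):

```python
import numpy as np


def moment_match(band, target_mean, target_std):
    """Linearly rescale a band (e.g. shaded pixels of a SPOT5 band) so
    that its mean and standard deviation match the target moments -- the
    basic operation behind a moment-matching radiometric correction.
    """
    band = np.asarray(band, dtype=float)
    return (band - band.mean()) / band.std() * target_std + target_mean
```

After the rescale the output's mean and standard deviation equal the targets exactly, which is why, unlike the C approach, the corrected histogram no longer carries the shade-induced skew offset in its moments.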
Monitoring circuit accurately measures movement of solenoid valve
NASA Technical Reports Server (NTRS)
Gillett, J. D.
1966-01-01
Solenoid-operated valve in a control system powered by direct current is monitored to accurately measure the valve travel. This system is currently in operation with a 28-vdc power system used for control of fluids in liquid rocket motor test facilities.
Instrument accurately measures small temperature changes on test surface
NASA Technical Reports Server (NTRS)
Harvey, W. D.; Miller, H. B.
1966-01-01
Calorimeter apparatus accurately measures very small temperature rises on a test surface subjected to aerodynamic heating. A continuous thin sheet of a sensing material is attached to a base support plate through which a series of holes of known diameter have been drilled for attaching thermocouples to the material.
Quantifying Accurate Calorie Estimation Using the "Think Aloud" Method
ERIC Educational Resources Information Center
Holmstrup, Michael E.; Stearns-Bruening, Kay; Rozelle, Jeffrey
2013-01-01
Objective: Clients often have limited time in a nutrition education setting. An improved understanding of the strategies used to accurately estimate calories may help to identify areas of focused instruction to improve nutrition knowledge. Methods: A "Think Aloud" exercise was recorded during the estimation of calories in a standard dinner meal…
Second-order accurate difference schemes on highly irregular meshes
Manteuffel, T.A.; White, A.B. Jr.
1988-01-01
In this paper compact-as-possible second-order accurate difference schemes will be constructed for boundary-value problems of arbitrary order on highly irregular meshes. It will be shown that for equations of order (K) these schemes will have truncation error of order (3 - K). This phenomenon is known as supraconvergence. 7 refs.
A Simple and Accurate Method for Measuring Enzyme Activity.
ERIC Educational Resources Information Center
Yip, Din-Yan
1997-01-01
Presents methods commonly used for investigating enzyme activity using catalase and presents a new method for measuring catalase activity that is more reliable and accurate. Provides results that are readily reproduced and quantified. Can also be used for investigations of enzyme properties such as the effects of temperature, pH, inhibitors,…
What's Normal? Accurately and Efficiently Assessing Menstrual Function.
Takemoto, Darcie M; Beharry, Meera S
2015-09-01
Many young women are unsure of what constitutes normal menses. By asking focused questions, pediatric providers can quickly and accurately assess menstrual function and dispel anxiety and myths. In this article, we review signs and symptoms of normal versus pathologic menstrual functioning and provide suggestions to improve menstrual history taking.
Benchmarking accurate spectral phase retrieval of single attosecond pulses
NASA Astrophysics Data System (ADS)
Wei, Hui; Le, Anh-Thu; Morishita, Toru; Yu, Chao; Lin, C. D.
2015-02-01
A single extreme-ultraviolet (XUV) attosecond pulse or pulse train in the time domain is fully characterized if its spectral amplitude and phase are both determined. The spectral amplitude can be easily obtained from photoionization of simple atoms where accurate photoionization cross sections have been measured using, e.g., synchrotron radiation. To determine the spectral phase, at present the standard method is to carry out XUV photoionization in the presence of a dressing infrared (IR) laser. In this work, we examine the accuracy of current phase retrieval methods (PROOF and iPROOF) where the dressing IR is relatively weak such that photoelectron spectra can be accurately calculated by second-order perturbation theory. We suggest a modified method named swPROOF (scattering wave phase retrieval by omega oscillation filtering) which utilizes accurate one-photon and two-photon dipole transition matrix elements and removes the approximations made in PROOF and iPROOF. We show that the swPROOF method can in general retrieve an accurate spectral phase compared to other, simpler models that have been suggested. We benchmark the accuracy of these phase retrieval methods through simulating the spectrogram by solving the time-dependent Schrödinger equation numerically using several known single attosecond pulses with a fixed spectral amplitude but different spectral phases.
Precise and Accurate Density Determination of Explosives Using Hydrostatic Weighing
B. Olinger
2005-07-01
Precise and accurate density determination requires weight measurements in air and water using sufficiently precise analytical balances, knowledge of the densities of air and water, knowledge of thermal expansions, availability of a density standard, and a method to estimate the time to achieve thermal equilibrium with water. Density distributions in pressed explosives are inferred from the densities of elements from a central slice.
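The density determination follows Archimedes' principle with an air-buoyancy term; a minimal sketch, with representative (not measured) fluid densities:

```python
def hydrostatic_density(w_air, w_water, rho_water=0.99705, rho_air=0.0012):
    """Sample density from balance readings in air (w_air) and fully
    submerged in water (w_water), both in mass units.  Derived from
    Archimedes' principle including air buoyancy:
        rho = w_air * (rho_water - rho_air) / (w_air - w_water) + rho_air
    The default fluid densities (g/cm^3, roughly 25 C) are representative
    values only; the paper stresses they must be known accurately.
    """
    return w_air * (rho_water - rho_air) / (w_air - w_water) + rho_air
```

Neglecting air buoyancy (rho_air = 0) recovers the textbook formula rho = w_air * rho_water / (w_air - w_water).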
Second-order accurate nonoscillatory schemes for scalar conservation laws
NASA Technical Reports Server (NTRS)
Huynh, Hung T.
1989-01-01
Explicit finite difference schemes for the computation of weak solutions of nonlinear scalar conservation laws are presented and analyzed. These schemes are uniformly second-order accurate and nonoscillatory in the sense that the number of extrema of the discrete solution is not increasing in time.
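A standard ingredient for second-order nonoscillatory reconstruction is a minmod-limited slope; the sketch below illustrates that building block, not necessarily the specific schemes analyzed here:

```python
import numpy as np


def minmod(a, b):
    """Slope limiter: returns the smaller-magnitude slope when a and b
    agree in sign, and zero otherwise.  Zeroing the slope at extrema is
    what prevents the reconstruction from creating new oscillations.
    """
    return np.where(a * b > 0, np.sign(a) * np.minimum(np.abs(a), np.abs(b)), 0.0)


def limited_slopes(u):
    """Second-order limited cell slopes for a periodic array of cell
    averages u, from forward and backward differences."""
    fwd = np.roll(u, -1) - u
    bwd = u - np.roll(u, 1)
    return minmod(fwd, bwd)
```

On monotone data the limiter keeps a genuine second-order slope; at a local extremum the two one-sided differences disagree in sign and the slope collapses to zero, reducing the scheme to first order there.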
How Accurate Are Judgments of Intelligence by Strangers?
ERIC Educational Resources Information Center
Borkenau, Peter
Whether judgments made by complete strangers as to the intelligence of subjects are accurate or merely illusory was studied in Germany. Target subjects were 50 female and 50 male adults recruited through a newspaper article. Eighteen judges, who did not know the subjects, were recruited from a university community. Videorecordings of the subjects,…
Laser Guided Automated Calibrating System for Accurate Bracket Placement
Anitha, A; Kumar, AJ; Mascarenhas, R; Husain, A
2015-01-01
Background: The basic premise of the preadjusted bracket system is accurate bracket positioning. It is widely recognized that accurate bracket placement is of critical importance in the efficient application of biomechanics and in realizing the full potential of a preadjusted edgewise appliance. Aim: The purpose of this study was to design a calibrating system to accurately detect a point on a plane as well as to determine the accuracy of the Laser Guided Automated Calibrating (LGAC) System. Materials and Methods: To the lowest order of approximation, a plane having two parallel lines is used to verify the accuracy of the system. On prescribing the distance of a point from the line, images of the plane are analyzed from controlled angles, calibrated and the point is identified with a laser marker. Results: The image was captured and analyzed using MATLAB ver. 7 software (The MathWorks Inc.). Each pixel in the image corresponded to a distance of 1 cm/413 (10 mm/413) = 0.0242 mm (L/P). This implies that any variation in distance above 0.024 mm can be measured and acted upon, which sets the highest possible accuracy for this system. Conclusion: A new automated system is introduced having an accuracy of 0.024 mm for accurate bracket placement. PMID:25745575
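The per-pixel resolution quoted above is simply the imaged length divided by the number of pixels spanning it (the L/P ratio):

```python
def mm_per_pixel(length_mm, length_px):
    """Smallest resolvable displacement of an imaging system: the
    physical length of the imaged span (mm) divided by the pixels it
    occupies (the L/P ratio in the paper)."""
    return length_mm / length_px
```

With the paper's figures, 10 mm spanning 413 pixels gives about 0.0242 mm per pixel, the stated accuracy limit of the system.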
Bioaccessibility tests accurately estimate bioavailability of lead to quail
Technology Transfer Automated Retrieval System (TEKTRAN)
Hazards of soil-borne Pb to wild birds may be more accurately quantified if the bioavailability of that Pb is known. To better understand the bioavailability of Pb, we incorporated Pb-contaminated soils or Pb acetate into diets for Japanese quail (Coturnix japonica), fed the quail for 15 days, and ...
Accurate momentum transfer cross section for the attractive Yukawa potential
Khrapak, S. A.
2014-04-15
An accurate expression for the momentum transfer cross section for the attractive Yukawa potential is proposed. This simple analytic expression agrees with the numerical results to within ±2% in the regime relevant for ion-particle collisions in complex (dusty) plasmas.
A-B Similarity-Complementarity and Accurate Empathy.
ERIC Educational Resources Information Center
Gillam, Sandra; McGinley, Hugh
1983-01-01
Rated the audio portions of videotaped segments of 32 dyadic interviews between A-type and B-type undergraduate males for accurate empathy using Truax's AE-Scale. Results indicated B-types elicited higher levels of empathy when they interacted with other B-types, while any dyad that contained an A-type resulted in less empathy. (JAC)
BIOACCESSIBILITY TESTS ACCURATELY ESTIMATE BIOAVAILABILITY OF LEAD TO QUAIL
Hazards of soil-borne Pb to wild birds may be more accurately quantified if the bioavailability of that Pb is known. To better understand the bioavailability of Pb to birds, we measured blood Pb concentrations in Japanese quail (Coturnix japonica) fed diets containing Pb-contami...