Sample records for function correction applied

  1. Multi-objective optimization for an automated and simultaneous phase and baseline correction of NMR spectral data

    NASA Astrophysics Data System (ADS)

    Sawall, Mathias; von Harbou, Erik; Moog, Annekathrin; Behrens, Richard; Schröder, Henning; Simoneau, Joël; Steimers, Ellen; Neymeyr, Klaus

    2018-04-01

    Spectral data preprocessing is an integral and sometimes inevitable part of chemometric analyses. For Nuclear Magnetic Resonance (NMR) spectra a possible first preprocessing step is a phase correction which is applied to the Fourier transformed free induction decay (FID) signal. This preprocessing step can be followed by a separate baseline correction step. Especially when series of high-resolution spectra are considered, automated and computationally fast preprocessing routines are desirable. A new method is suggested that applies the phase and the baseline corrections simultaneously in an automated form without manual input, which distinguishes this work from other approaches. The underlying multi-objective optimization or Pareto optimization provides improved results compared to consecutively applied correction steps. The optimization process uses an objective function which applies strong penalty constraints and weaker regularization conditions. The new method includes an approach for the detection of zero baseline regions. The baseline correction uses a modified Whittaker smoother. The functionality of the new method is demonstrated for experimental NMR spectra. The results are verified against gravimetric data. The method is compared to alternative preprocessing tools. Additionally, the simultaneous correction method is compared to a consecutive application of the two correction steps.
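    The paper's modified Whittaker smoother is not specified in the abstract; as context, a minimal sketch of the standard Whittaker smoother (penalized least squares with a difference penalty; the smoothing parameter here is an arbitrary illustrative choice) might look like:

```python
import numpy as np

def whittaker_smooth(y, lam=1e4, d=2):
    """Standard Whittaker smoother: minimise |y - z|^2 + lam * |D z|^2,
    where D is the d-th order finite-difference matrix."""
    n = len(y)
    D = np.diff(np.eye(n), n=d, axis=0)      # (n-d) x n difference operator
    A = np.eye(n) + lam * D.T @ D            # normal equations of the penalty
    return np.linalg.solve(A, np.asarray(y, float))

# Noisy linear ramp: a second-difference penalty leaves linear trends
# unpenalised, so the smoother should recover the slow trend.
rng = np.random.default_rng(0)
y = np.linspace(0.0, 1.0, 200) + 0.1 * rng.standard_normal(200)
z = whittaker_smooth(y, lam=1e4)
```

    The paper's version is "modified" and additionally detects zero-baseline regions; this sketch shows only the unmodified core.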

  2. LOKI WIND CORRECTION COMPUTER AND WIND STUDIES FOR LOKI

    DTIC Science & Technology

    which relates burnout deviation of flight path with the distributed wind along the boost trajectory. The wind influence function was applied to...electrical outputs. A complete wind correction computer system based on the influence function and the results of wind studies was designed.

  3. Adiabatic corrections to density functional theory energies and wave functions.

    PubMed

    Mohallem, José R; Coura, Thiago de O; Diniz, Leonardo G; de Castro, Gustavo; Assafrão, Denise; Heine, Thomas

    2008-09-25

    The adiabatic finite-nuclear-mass-correction (FNMC) to the electronic energies and wave functions of atoms and molecules is formulated for density-functional theory and implemented in the deMon code. The approach is tested for a series of local and gradient corrected density functionals, using MP2 results and diagonal-Born-Oppenheimer corrections from the literature for comparison. In the evaluation of absolute energy corrections of nonorganic molecules the LDA PZ81 functional surprisingly outperforms the others. For organic molecules the GGA BLYP functional has the best performance. FNMC with GGA functionals, mainly BLYP, shows a good performance in the evaluation of relative corrections, except for nonorganic molecules containing H atoms. The PW86 functional stands out with the best evaluation of the barrier to linearity of H2O and the isotopic dipole moment of HDO. In general, DFT functionals display accuracy superior to common belief, and because the corrections are based on a change of the electronic kinetic energy they are here ranked in a new, appropriate way. The approach is applied to obtain the adiabatic correction for full atomization of alkanes C(n)H(2n+2), n = 4-10. The 1 mHartree threshold is approached for adiabatic corrections, justifying their inclusion in DFT.

  4. Self-interaction corrections applied to Mg-porphyrin, C60, and pentacene molecules

    NASA Astrophysics Data System (ADS)

    Pederson, Mark R.; Baruah, Tunna; Kao, Der-you; Basurto, Luis

    2016-04-01

    We have applied a recently developed method to incorporate the self-interaction correction through Fermi orbitals to Mg-porphyrin, C60, and pentacene molecules. The Fermi-Löwdin orbitals are localized and unitarily invariant to the Kohn-Sham orbitals from which they are constructed. The self-interaction-corrected energy is obtained variationally leading to an optimum set of Fermi-Löwdin orbitals (orthonormalized Fermi orbitals) that gives the minimum energy. A Fermi orbital, by definition, is dependent on a certain point which is referred to as the descriptor position. The degree to which the initial choice of descriptor positions influences the variational approach to the minimum and the complexity of the energy landscape as a function of Fermi-orbital descriptors is examined in detail for Mg-porphyrin. The applications presented here also demonstrate that the method can be applied to larger molecular systems containing a few hundred electrons. The atomization energy of the C60 molecule within the Fermi-Löwdin-orbital self-interaction-correction approach is significantly improved compared to local density approximation in the Perdew-Wang 92 functional and generalized gradient approximation of Perdew-Burke-Ernzerhof functionals. The eigenvalues of the highest occupied molecular orbitals show qualitative improvement.

  5. Radiosondes Corrected for Inaccuracy in RH Measurements

    DOE Data Explorer

    Miloshevich, Larry

    2008-01-15

    Corrections for inaccuracy in Vaisala radiosonde RH measurements have been applied to ARM SGP radiosonde soundings. The magnitude of the corrections can vary considerably between soundings. The radiosonde measurement accuracy, and therefore the correction magnitude, is a function of atmospheric conditions, mainly T, RH, and dRH/dt (humidity gradient). The corrections are also very sensitive to the RH sensor type, and there are 3 Vaisala sensor types represented in this dataset (RS80-H, RS90, and RS92). Depending on the sensor type and the radiosonde production date, one or more of the following three corrections were applied to the RH data: Temperature-Dependence correction (TD), Contamination-Dry Bias correction (C), Time Lag correction (TL). The estimated absolute accuracy of NIGHTTIME corrected and uncorrected Vaisala RH measurements, as determined by comparison to simultaneous reference-quality measurements from Holger Voemel's (CU/CIRES) cryogenic frostpoint hygrometer (CFH), is given by Miloshevich et al. (2006).

  6. Ellipsoidal corrections for geoid undulation computations using gravity anomalies in a cap

    NASA Technical Reports Server (NTRS)

    Rapp, R. H.

    1981-01-01

    Ellipsoidal correction terms have been derived for geoid undulation computations when the Stokes equation using gravity anomalies in a cap is combined with potential coefficient information. The correction terms are long wavelength and depend on the cap size in which its gravity anomalies are given. Using the regular Stokes equation, the maximum correction for a cap size of 20 deg is -33 cm, which reduces to -27 cm when the Stokes function is modified by subtracting the value of the Stokes function at the cap radius. Ellipsoidal correction terms were also derived for the well-known Marsh/Chang geoids. When no gravity was used, the correction could reach 101 cm, while for a cap size of 20 deg the maximum correction was -45 cm. Global correction maps are given for a number of different cases. For work requiring accurate geoid computations these correction terms should be applied.

  7. Combined Henyey-Greenstein and Rayleigh phase function.

    PubMed

    Liu, Quanhua; Weng, Fuzhong

    2006-10-01

    The phase function is an important parameter that affects the distribution of scattered radiation. In Rayleigh scattering, a scatterer is approximated by a dipole, and its phase function is analytically related to the scattering angle. For the Henyey-Greenstein (HG) approximation, the phase function preserves only the correct asymmetry factor (i.e., the first moment), which is especially important for anisotropic scattering. When the HG function is applied to small particles, it produces a significant error in radiance. In addition, the HG function applies only to intensity radiative transfer. We develop a combined HG and Rayleigh (HG-Rayleigh) phase function. The HG phase function plays the role of a modulator, extending the application of the Rayleigh phase function to weakly asymmetric scattering. The HG-Rayleigh phase function guarantees the correct asymmetry factor and is valid for polarized radiative transfer. It approaches the Rayleigh phase function for small particles. Thus the HG-Rayleigh phase function has wider applications for both intensity and polarimetric radiative transfers. For the microwave radiative transfer modeling in this study, the largest errors in the brightness temperature calculations for weakly asymmetric scattering are generally below 0.02 K when the HG-Rayleigh phase function is used. The errors can be much larger, in the 1-3 K range, if the Rayleigh and HG functions are applied separately.
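    The combined HG-Rayleigh form itself is given in the paper and not reproduced here; as context, a sketch of the two ingredient phase functions, with a numerical check of the asymmetry-factor property the abstract relies on (HG preserves the first moment g exactly; Rayleigh has g = 0):

```python
import math

def hg_phase(cos_t, g):
    """Henyey-Greenstein phase function, normalised so the integral
    over the full sphere equals 1."""
    return (1.0 - g * g) / (4.0 * math.pi *
                            (1.0 + g * g - 2.0 * g * cos_t) ** 1.5)

def rayleigh_phase(cos_t):
    """Rayleigh (dipole) phase function, same normalisation."""
    return 3.0 / (16.0 * math.pi) * (1.0 + cos_t * cos_t)

def asymmetry(phase, n=20000):
    """First moment <cos theta> by midpoint quadrature over mu = cos theta:
    integral over the sphere of mu * p(mu), i.e. 2*pi * int_{-1}^{1}."""
    total = 0.0
    h = 2.0 / n
    for i in range(n):
        mu = -1.0 + (i + 0.5) * h
        total += mu * phase(mu) * 2.0 * math.pi * h
    return total

g = 0.3
g_est = asymmetry(lambda mu: hg_phase(mu, g))   # close to g = 0.3
g_ray = asymmetry(rayleigh_phase)               # close to 0
```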

  8. THE SYSTEMATIC ERROR TEST FOR PSF CORRECTION IN WEAK GRAVITATIONAL LENSING SHEAR MEASUREMENT BY THE ERA METHOD BY IDEALIZING PSF

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Okura, Yuki; Futamase, Toshifumi, E-mail: yuki.okura@riken.jp

    We improve the ellipticity of re-smeared artificial image (ERA) method of point-spread function (PSF) correction in a weak lensing shear analysis in order to treat the realistic shape of galaxies and the PSF. This is done by re-smearing the PSF and the observed galaxy image using a re-smearing function (RSF) and allows us to use a new PSF with a simple shape and to correct the PSF effect without any approximations or assumptions. We perform a numerical test to show that the method applied for galaxies and PSF with some complicated shapes can correct the PSF effect with a systematic error of less than 0.1%. We also apply the ERA method for real data of the Abell 1689 cluster to confirm that it is able to detect the systematic weak lensing shear pattern. The ERA method requires less than 0.1 or 1 s to correct the PSF for each object in a numerical test and a real data analysis, respectively.

  9. Application of Two-Parameter Stabilizing Functions in Solving a Convolution-Type Integral Equation by Regularization Method

    NASA Astrophysics Data System (ADS)

    Maslakov, M. L.

    2018-04-01

    This paper examines the solution of convolution-type integral equations of the first kind by applying the Tikhonov regularization method with two-parameter stabilizing functions. The class of stabilizing functions is expanded in order to improve the accuracy of the resulting solution. The features of the problem formulation for identification and adaptive signal correction are described. A method for choosing regularization parameters in problems of identification and adaptive signal correction is suggested.
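    The paper's two-parameter stabilizing functions are its own contribution; as context, ordinary (zeroth-order, single-parameter) Tikhonov regularization of a convolution equation, solved in the Fourier domain, illustrates the underlying method. All signal values below are toy data for illustration:

```python
import numpy as np

def tikhonov_deconvolve(y, h, alpha):
    """Zeroth-order Tikhonov solution of the circular convolution equation
    h * x = y, computed in the Fourier domain:
    X = conj(H) * Y / (|H|^2 + alpha)."""
    H = np.fft.fft(h, len(y))
    Y = np.fft.fft(y)
    X = np.conj(H) * Y / (np.abs(H) ** 2 + alpha)
    return np.real(np.fft.ifft(X))

# Circular convolution of a sparse signal with a smoothing kernel.
n = 64
x = np.zeros(n); x[10] = 1.0; x[30] = 0.5
h = np.zeros(n); h[:3] = [0.25, 0.5, 0.25]
y = np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(h)))

x_rec = tikhonov_deconvolve(y, h, alpha=1e-6)   # close to x
```

    The regularization parameter alpha trades off noise amplification against resolution; the paper's contribution is a rule for choosing such parameters in identification and adaptive signal correction problems.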

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lepori, Francesca; Viel, Matteo; Baccigalupi, Carlo

    We investigate the Alcock-Paczyński (AP) test applied to the Baryon Acoustic Oscillation (BAO) feature in the galaxy correlation function. By using a general formalism that includes relativistic effects, we quantify the importance of the linear redshift space distortions and gravitational lensing corrections to the galaxy number density fluctuation. We show that redshift space distortions significantly affect the shape of the correlation function, both in radial and transverse directions, causing different values of galaxy bias to induce offsets up to 1% in the AP test. On the other hand, we find that the lensing correction around the BAO scale modifies the amplitude but not the shape of the correlation function and therefore does not introduce any systematic effect. Furthermore, we investigate in detail how the AP test is sensitive to redshift binning: a window function in the transverse direction suppresses correlations and shifts the peak position toward smaller angular scales. We determine the correction that should be applied in order to account for this effect, when performing the test with data from three future planned galaxy redshift surveys: Euclid, the Dark Energy Spectroscopic Instrument (DESI) and the Square Kilometer Array (SKA).

  11. Stripe nonuniformity correction for infrared imaging system based on single image optimization

    NASA Astrophysics Data System (ADS)

    Hua, Weiping; Zhao, Jufeng; Cui, Guangmang; Gong, Xiaoli; Ge, Peng; Zhang, Jiang; Xu, Zhihai

    2018-06-01

    Infrared imaging is often disturbed by stripe nonuniformity noise. Scene-based correction methods can effectively reduce the impact of stripe noise. In this paper, a stripe nonuniformity correction method based on a differential constraint is proposed. First, the gray distribution of the stripe nonuniformity is analyzed and a penalty function is constructed from the difference between the horizontal and vertical gradients. With a weight function, the penalty function is optimized to obtain the corrected image. Experiments show that, compared with other single-frame approaches, the proposed method performs better in both subjective and objective analysis and does less damage to edges and details. Meanwhile, the proposed method runs faster. We have also discussed the differences between the proposed idea and multi-frame methods. Our method is finally applied successfully in a hardware system.

  12. Comparing multilayer brain networks between groups: Introducing graph metrics and recommendations.

    PubMed

    Mandke, Kanad; Meier, Jil; Brookes, Matthew J; O'Dea, Reuben D; Van Mieghem, Piet; Stam, Cornelis J; Hillebrand, Arjan; Tewarie, Prejaas

    2018-02-01

    There is an increasing awareness of the advantages of multi-modal neuroimaging. Networks obtained from different modalities are usually treated in isolation, which is however contradictory to accumulating evidence that these networks show non-trivial interdependencies. Even networks obtained from a single modality, such as frequency-band specific functional networks measured from magnetoencephalography (MEG), are often treated independently. Here, we discuss how a multilayer network framework allows for integration of multiple networks into a single network description and how graph metrics can be applied to quantify multilayer network organisation for group comparison. We analyse how well-known biases for single layer networks, such as effects of group differences in link density and/or average connectivity, influence multilayer networks, and we compare four schemes that aim to correct for such biases: the minimum spanning tree (MST), effective graph resistance cost minimisation, efficiency cost optimisation (ECO) and a normalisation scheme based on singular value decomposition (SVD). These schemes can be applied to the layers independently or to the multilayer network as a whole. For correction applied to whole multilayer networks, only the SVD showed sufficient bias correction. For correction applied to individual layers, three schemes (ECO, MST, SVD) could correct for biases. By using generative models as well as empirical MEG and functional magnetic resonance imaging (fMRI) data, we further demonstrated that all schemes were sensitive to changes in network topology when the original networks were perturbed. In conclusion, uncorrected multilayer network analysis leads to biases. These biases may differ between centres and studies and could consequently lead to unreproducible results in the same way as for single-layer networks. We therefore recommend using correction schemes prior to multilayer network analysis for group comparisons.
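    Of the four correction schemes compared, the MST is the simplest to sketch: each weighted layer is reduced to a spanning-tree backbone with a fixed number of links (n-1 for n nodes), which removes the dependence on link density. A pure-Python sketch for a single layer, using a maximum spanning tree as is conventional when weights encode connectivity rather than distance (the matrix values are toy data):

```python
def max_spanning_tree(w):
    """Prim-style maximum spanning tree on a dense, symmetric connectivity
    matrix w (higher value = stronger link). Returns a list of n-1 edges."""
    n = len(w)
    in_tree = [True] + [False] * (n - 1)
    edges = []
    for _ in range(n - 1):
        best = None                       # (i, j, weight) of strongest link
        for i in range(n):
            if not in_tree[i]:
                continue
            for j in range(n):
                if not in_tree[j] and (best is None or w[i][j] > best[2]):
                    best = (i, j, w[i][j])
        edges.append((best[0], best[1]))
        in_tree[best[1]] = True
    return edges

# Toy 4-node connectivity matrix (hypothetical values).
w = [[0, 9, 4, 1],
     [9, 0, 2, 3],
     [4, 2, 0, 8],
     [1, 3, 8, 0]]
backbone = max_spanning_tree(w)   # always exactly 3 edges for 4 nodes
```

    Applying this per layer, as the paper discusses, equalises link density across groups before graph metrics are computed; the multilayer-wide schemes (e.g. SVD normalisation) act on the full supra-adjacency structure instead.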

  13. A high speed model-based approach for wavefront sensorless adaptive optics systems

    NASA Astrophysics Data System (ADS)

    Lianghua, Wen; Yang, Ping; Shuai, Wang; Wenjing, Liu; Shanqiu, Chen; Xu, Bing

    2018-02-01

    To improve the temporal-frequency properties of wavefront sensorless adaptive optics (AO) systems, a fast general model-based aberration correction algorithm is presented. The approach is based on the approximately linear relation between the mean square of the aberration gradients and the second moment of the far-field intensity distribution. The presented model-based method can effectively correct a modal aberration by applying only one perturbation to the deformable mirror (one correction per perturbation); the perturbation is reconstructed by singular value decomposition of the correlation matrix of the Zernike functions' gradients. Numerical simulations of AO corrections under various random and dynamic aberrations are implemented. The simulation results indicate that the equivalent control bandwidth is 2-3 times that of the previous method, which achieves one aberration correction only after applying N perturbations to the deformable mirror (one correction per N perturbations).

  14. Spectral functions with the density matrix renormalization group: Krylov-space approach for correction vectors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Frequency-dependent correlations, such as the spectral function and the dynamical structure factor, help illustrate condensed matter experiments. Within the density matrix renormalization group (DMRG) framework, an accurate method for calculating spectral functions directly in frequency is the correction-vector method. The correction vector can be computed by solving a linear equation or by minimizing a functional. Our paper proposes an alternative to calculate the correction vector: to use the Krylov-space approach. This paper also studies the accuracy and performance of the Krylov-space approach, when applied to the Heisenberg, the t-J, and the Hubbard models. The cases we studied indicate that the Krylov-space approach can be more accurate and efficient than the conjugate gradient, and that the error of the former integrates best when a Krylov-space decomposition is also used for ground state DMRG.

  15. Spectral functions with the density matrix renormalization group: Krylov-space approach for correction vectors

    DOE PAGES

    2016-11-21

    Frequency-dependent correlations, such as the spectral function and the dynamical structure factor, help illustrate condensed matter experiments. Within the density matrix renormalization group (DMRG) framework, an accurate method for calculating spectral functions directly in frequency is the correction-vector method. The correction vector can be computed by solving a linear equation or by minimizing a functional. Our paper proposes an alternative to calculate the correction vector: to use the Krylov-space approach. This paper also studies the accuracy and performance of the Krylov-space approach, when applied to the Heisenberg, the t-J, and the Hubbard models. The cases we studied indicate that the Krylov-space approach can be more accurate and efficient than the conjugate gradient, and that the error of the former integrates best when a Krylov-space decomposition is also used for ground state DMRG.

  16. The impact of physiologic noise correction applied to functional MRI of pain at 1.5 and 3.0 T.

    PubMed

    Vogt, Keith M; Ibinson, James W; Schmalbrock, Petra; Small, Robert H

    2011-07-01

    This study quantified the impact of the well-known physiologic noise correction algorithm RETROICOR applied to a pain functional magnetic resonance imaging (FMRI) experiment at two field strengths: 1.5 and 3.0 T. In the 1.5-T acquisition, there was an 8.2% decrease in time course variance (σ) and a 227% improvement in average model fit (increase in mean adjusted R²). In the 3.0-T acquisition, significantly greater improvements were seen: a 10.4% decrease in σ and a 240% increase in mean adjusted R². End-tidal carbon dioxide data were also collected during scanning and used to account for low-frequency changes in cerebral blood flow; however, the impact of this correction was trivial compared to applying RETROICOR. Comparison between two implementations of RETROICOR demonstrated that oversampled physiologic data can be applied by either downsampling or modification of the timing in the RETROICOR algorithm, with equivalent results. Furthermore, there was no significant effect from manually aligning the physiologic data with corresponding image slices from an interleaved acquisition, indicating that RETROICOR accounts for timing differences between physiologic changes and MR signal changes. These findings suggest that RETROICOR correction, as it is commonly implemented, should be included as part of the data analysis for pain FMRI studies performed at 1.5 and 3.0 T.
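    RETROICOR regresses out low-order Fourier expansions of the cardiac and respiratory phases at each acquisition time (Glover et al., 2000). A minimal sketch of building such nuisance regressors; the phase values are hypothetical placeholders, not data from the study:

```python
import math

def retroicor_regressors(phases, order=2):
    """RETROICOR-style Fourier regressors: for each acquisition time the
    cardiac (or respiratory) phase phi in [0, 2*pi) is expanded as
    cos(m*phi), sin(m*phi) for m = 1..order. Each row is one time point."""
    rows = []
    for phi in phases:
        row = []
        for m in range(1, order + 1):
            row += [math.cos(m * phi), math.sin(m * phi)]
        rows.append(row)
    return rows

# Hypothetical phases of four image acquisitions within the cardiac cycle.
phases = [0.0, math.pi / 2, math.pi, 3 * math.pi / 2]
X = retroicor_regressors(phases, order=2)   # 4 time points x 4 regressors
```

    These columns are then included as nuisance terms in the FMRI general linear model; the paper's downsampling-versus-timing-modification comparison concerns how oversampled physiologic recordings are mapped onto these per-slice phase values.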

  17. A modified adjoint-based grid adaptation and error correction method for unstructured grid

    NASA Astrophysics Data System (ADS)

    Cui, Pengcheng; Li, Bin; Tang, Jing; Chen, Jiangtao; Deng, Youqi

    2018-05-01

    Grid adaptation is an important strategy for improving the accuracy of output functions (e.g. drag, lift, etc.) in computational fluid dynamics (CFD) analysis and design applications. This paper presents a modified robust grid adaptation and error correction method for reducing simulation errors in integral outputs. The procedure is based on discrete adjoint optimization theory, in which the estimated global error of output functions can be directly related to the local residual error. According to this relationship, the local residual error contribution can be used as an indicator in a grid adaptation strategy designed to generate refined grids for accurately estimating the output functions. This grid adaptation and error correction method is applied to subsonic and supersonic simulations around three-dimensional configurations. Numerical results demonstrate that the grid regions to which the output functions are sensitive are detected and refined after grid adaptation, and the accuracy of the output functions is clearly improved after error correction. The proposed grid adaptation and error correction method compares very favorably in terms of output accuracy and computational efficiency with traditional feature-based grid adaptation.
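    In the discrete-adjoint framework the abstract describes, the output correction is commonly written as follows (a standard sketch of adjoint error estimation in the usual notation, not necessarily the paper's own):

```latex
% Discrete adjoint psi_h: sensitivity of the output J_h to the residual R_h
\left( \frac{\partial R_h}{\partial u_h} \right)^{\!T} \psi_h
  = \left( \frac{\partial J_h}{\partial u_h} \right)^{\!T}

% Corrected output: the adjoint weights the local residual evaluated at the
% coarse solution u_h^H; the same weighted residual serves as the
% grid-adaptation indicator
J_h(u_h^H) \;-\; \psi_h^{T}\, R_h(u_h^H) \;\approx\; J_h(u_h)
```

    The weighted residual term both corrects the integral output and localises which cells contribute most to its error, which is why the same quantity drives the refinement strategy.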

  18. Implementation and benchmark of a long-range corrected functional in the density functional based tight-binding method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lutsker, V.; Niehaus, T. A., E-mail: thomas.niehaus@physik.uni-regensburg.de; Aradi, B.

    2015-11-14

    Bridging the gap between first principles methods and empirical schemes, the density functional based tight-binding method (DFTB) has become a versatile tool in predictive atomistic simulations over the past years. One of the major restrictions of this method is the limitation to local or gradient corrected exchange-correlation functionals. This excludes the important class of hybrid or long-range corrected functionals, which are advantageous in thermochemistry, as well as in the computation of vibrational, photoelectron, and optical spectra. The present work provides a detailed account of the implementation of DFTB for a long-range corrected functional in generalized Kohn-Sham theory. We apply the method to a set of organic molecules and compare ionization potentials and electron affinities with the original DFTB method and higher level theory. The new scheme cures the significant overpolarization in electric fields found for local DFTB, which parallels the functional dependence in first principles density functional theory (DFT). At the same time, the computational savings with respect to full DFT calculations are not compromised as evidenced by numerical benchmark data.

  19. Algorithms for calculating mass-velocity and Darwin relativistic corrections with n-electron explicitly correlated Gaussians with shifted centers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stanke, Monika, E-mail: monika@fizyka.umk.pl; Palikot, Ewa, E-mail: epalikot@doktorant.umk.pl; Adamowicz, Ludwik, E-mail: ludwik@email.arizona.edu

    2016-05-07

    Algorithms for calculating the leading mass-velocity (MV) and Darwin (D) relativistic corrections are derived for electronic wave functions expanded in terms of n-electron explicitly correlated Gaussian functions with shifted centers and without pre-exponential angular factors. The algorithms are implemented and tested in calculations of MV and D corrections for several points on the ground-state potential energy curves of the H{sub 2} and LiH molecules. The algorithms are general and can be applied in calculations of systems with an arbitrary number of electrons.
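    For reference, the leading-order corrections the algorithms target are the standard mass-velocity and Darwin expectation values (atomic units, with α the fine-structure constant, Z_A the nuclear charges, and ψ the nonrelativistic wave function; this is the textbook form, shown here as context rather than the paper's working equations):

```latex
E_{\mathrm{MV}} \;=\; -\frac{\alpha^{2}}{8} \sum_{i=1}^{n}
    \langle \psi \,|\, \nabla_i^{4} \,|\, \psi \rangle ,
\qquad
E_{\mathrm{D}} \;=\; \frac{\pi \alpha^{2}}{2} \sum_{i=1}^{n} \sum_{A}
    Z_A \,\langle \psi \,|\, \delta(\mathbf{r}_{iA}) \,|\, \psi \rangle
```

    The paper's contribution is evaluating these matrix elements analytically for explicitly correlated Gaussians with shifted centers.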

  20. Technical and Applied Features of Functional Behavioral Assessments and Behavior Intervention Plans

    ERIC Educational Resources Information Center

    Hawkins, Shannon M.

    2012-01-01

    When conducted correctly, functional behavior assessments (FBAs) can help professionals intervene with problem behavior using function-based interventions. Despite the fact that researchers have shown that effective interventions are based on function, recent investigators have found that most behavioral intervention plans (BIPs) are written…

  1. 34 CFR 489.5 - What definitions apply?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ..., DEPARTMENT OF EDUCATION FUNCTIONAL LITERACY FOR STATE AND LOCAL PRISONERS PROGRAM General § 489.5 What...— Functional literacy means at least an eighth grade equivalence, or a functional criterion score, on a nationally recognized literacy assessment. Local correctional agency means any agency of local government...

  2. Method for controlling gas metal arc welding

    DOEpatents

    Smartt, Herschel B.; Einerson, Carolyn J.; Watkins, Arthur D.

    1989-01-01

    The heat input and mass input in a Gas Metal Arc welding process are controlled by a method that comprises calculating appropriate values for weld speed, filler wire feed rate and an expected value for the welding current by algorithmic function means, applying such values for weld speed and filler wire feed rate to the welding process, measuring the welding current, comparing the measured current to the calculated current, using said comparison to calculate corrections for the weld speed and filler wire feed rate, and applying corrections.
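    The control loop described in the claim can be sketched as a single proportional correction step; the gains and operating values below are illustrative placeholders, not values from the patent:

```python
def correction_step(measured_current, expected_current, weld_speed, feed_rate,
                    k_speed=0.005, k_feed=0.01):
    """One pass of the claimed loop: compare the measured welding current to
    the algorithmically calculated (expected) current, and use the error to
    correct weld speed and filler-wire feed rate. Gains are hypothetical."""
    error = measured_current - expected_current
    new_speed = weld_speed * (1.0 - k_speed * error)
    new_feed = feed_rate * (1.0 - k_feed * error)
    return new_speed, new_feed

# Measured current 10 A above expectation -> both set-points are nudged down.
speed, feed = correction_step(210.0, 200.0, weld_speed=5.0, feed_rate=100.0)
```

    In the patent this comparison-and-correction cycle repeats continuously, so heat input and mass input track the values calculated by the algorithmic function means.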

  3. Local blur analysis and phase error correction method for fringe projection profilometry systems.

    PubMed

    Rao, Li; Da, Feipeng

    2018-05-20

    We introduce a flexible error correction method for fringe projection profilometry (FPP) systems in the presence of local blur phenomenon. Local blur caused by global light transport such as camera defocus, projector defocus, and subsurface scattering will cause significant systematic errors in FPP systems. Previous methods, which adopt high-frequency patterns to separate the direct and global components, fail when the global light phenomenon occurs locally. In this paper, the influence of local blur on phase quality is thoroughly analyzed, and a concise error correction method is proposed to compensate the phase errors. For defocus phenomenon, this method can be directly applied. With the aid of spatially varying point spread functions and local frontal plane assumption, experiments show that the proposed method can effectively alleviate the system errors and improve the final reconstruction accuracy in various scenes. For a subsurface scattering scenario, if the translucent object is dominated by multiple scattering, the proposed method can also be applied to correct systematic errors once the bidirectional scattering-surface reflectance distribution function of the object material is measured.

  4. Size Distribution of Sea-Salt Emissions as a Function of Relative Humidity

    NASA Astrophysics Data System (ADS)

    Zhang, K. M.; Knipping, E. M.; Wexler, A. S.; Bhave, P. V.; Tonnesen, G. S.

    2004-12-01

    Here we introduce a simple method for correcting sea-salt particle-size distributions as a function of relative humidity. Distinct from previous approaches, our derivation uses particle size at formation as the reference state rather than dry particle size. The correction factors, corresponding to the size at formation and the size at 80% RH, are given as polynomial functions of local relative humidity which are straightforward to implement. Without major compromises, the correction factors are thermodynamically accurate and can be applied between 0.45 and 0.99 RH. Since the thermodynamic properties of sea-salt electrolytes are weakly dependent on ambient temperature, these factors can be regarded as temperature independent. The correction factor w.r.t. the size at 80% RH is in excellent agreement with those from Fitzgerald's and Gerber's growth equations, while the correction factor w.r.t. the size at formation has the advantage of being independent of dry size and relative humidity at formation. The resultant sea-salt emissions can be used directly in atmospheric model simulations at urban, regional and global scales without further correction. Application of this method to several common open-ocean and surf-zone sea-salt-particle source functions is described.

  5. Large-field-of-view imaging by multi-pupil adaptive optics.

    PubMed

    Park, Jung-Hoon; Kong, Lingjie; Zhou, Yifeng; Cui, Meng

    2017-06-01

    Adaptive optics can correct for optical aberrations. We developed multi-pupil adaptive optics (MPAO), which enables simultaneous wavefront correction over a field of view of 450 × 450 μm² and expands the correction area to nine times that of conventional methods. MPAO's ability to perform spatially independent wavefront control further enables 3D nonplanar imaging. We applied MPAO to in vivo structural and functional imaging in the mouse brain.

  6. [Study on phase correction method of spatial heterodyne spectrometer].

    PubMed

    Wang, Xin-Qiang; Ye, Song; Zhang, Li-Juan; Xiong, Wei

    2013-05-01

    Phase distortion exists in collected interferograms for a variety of measurement reasons when spatial heterodyne spectrometers are used in practice, so an improved phase correction method is presented. The phase curve of the interferogram was obtained through inverse Fourier transform of the extracted single-side transform spectrum; from this, the phase distortions were obtained by fitting the phase slope, yielding the phase correction functions, and the transform spectrum was then convolved with the phase correction function to implement spectral phase correction. The method was applied to phase correction of an actually measured monochromatic spectrum and a simulated water vapor spectrum. Experimental results show that the low-frequency false signals in the monochromatic spectrum fringe are effectively eliminated, increasing the periodicity and symmetry of the interferogram; in addition, when the continuous spectrum with imposed phase error was corrected, the standard deviation between it and the original spectrum was reduced from 0.47 to 0.20, improving the accuracy of the spectrum.

  7. Method for controlling gas metal arc welding

    DOEpatents

    Smartt, H.B.; Einerson, C.J.; Watkins, A.D.

    1987-08-10

    The heat input and mass input in a Gas Metal Arc welding process are controlled by a method that comprises calculating appropriate values for weld speed, filler wire feed rate and an expected value for the welding current by algorithmic function means, applying such values for weld speed and filler wire feed rate to the welding process, measuring the welding current, comparing the measured current to the calculated current, using said comparison to calculate corrections for the weld speed and filler wire feed rate, and applying corrections. 3 figs., 1 tab.

  8. Research on controlling thermal deformable mirror's influence functions via manipulating thermal fields.

    PubMed

    Xue, Qiao; Huang, Lei; Hu, Dongxia; Yan, Ping; Gong, Mali

    2014-01-10

    For thermal deformable mirrors (DMs), control of the thermal field is important because it determines the aberration correction performance. To better manipulate the thermal fields, a simple water convection system is proposed. The water convection system, which can be applied to thermal-field bimetal DMs, shows effective control of the thermal fields and influence functions. This is verified by simulations and by contrast experiments on two prototypes, one using air convection and the other water convection. Controlling the thermal fields greatly improves the influence-function adjustability and aberration correction ability of thermal DMs.

  9. Multivariate quantile mapping bias correction: an N-dimensional probability density function transform for climate model simulations of multiple variables

    NASA Astrophysics Data System (ADS)

    Cannon, Alex J.

    2018-01-01

    Most bias correction algorithms used in climatology, for example quantile mapping, are applied to univariate time series. They neglect the dependence between different variables. Those that are multivariate often correct only limited measures of joint dependence, such as Pearson or Spearman rank correlation. Here, an image processing technique designed to transfer colour information from one image to another—the N-dimensional probability density function transform—is adapted for use as a multivariate bias correction algorithm (MBCn) for climate model projections/predictions of multiple climate variables. MBCn is a multivariate generalization of quantile mapping that transfers all aspects of an observed continuous multivariate distribution to the corresponding multivariate distribution of variables from a climate model. When applied to climate model projections, changes in quantiles of each variable between the historical and projection period are also preserved. The MBCn algorithm is demonstrated on three case studies. First, the method is applied to an image processing example with characteristics that mimic a climate projection problem. Second, MBCn is used to correct a suite of 3-hourly surface meteorological variables from the Canadian Centre for Climate Modelling and Analysis Regional Climate Model (CanRCM4) across a North American domain. Components of the Canadian Forest Fire Weather Index (FWI) System, a complicated set of multivariate indices that characterizes the risk of wildfire, are then calculated and verified against observed values. Third, MBCn is used to correct biases in the spatial dependence structure of CanRCM4 precipitation fields. Results are compared against a univariate quantile mapping algorithm, which neglects the dependence between variables, and two multivariate bias correction algorithms, each of which corrects a different form of inter-variable correlation structure. 
MBCn outperforms these alternatives, often by a large margin, particularly for annual maxima of the FWI distribution and spatiotemporal autocorrelation of precipitation fields.
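
    For comparison, the univariate quantile mapping that MBCn generalises can be sketched with empirical CDFs; this is a minimal per-variable version, not the paper's multivariate algorithm.

```python
import numpy as np

def quantile_map(model_hist, obs_hist, model_proj):
    """Univariate quantile mapping sketch: find each projected value's
    quantile in the historical model distribution, then read off the same
    quantile of the observed distribution."""
    model_hist = np.sort(np.asarray(model_hist, dtype=float))
    obs_hist = np.asarray(obs_hist, dtype=float)
    # Empirical CDF position of each projected value in the model climate
    ranks = np.searchsorted(model_hist, model_proj) / len(model_hist)
    ranks = np.clip(ranks, 0.0, 1.0)
    # Same quantiles in the observed distribution
    return np.quantile(obs_hist, ranks)
```

    Applied variable by variable, this corrects marginal distributions only; MBCn's contribution is transferring the full multivariate dependence structure as well.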

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Song, Jong-Won; Hirao, Kimihiko

    Long-range corrected density functional theory (LC-DFT) attracts much attention from chemists as a quantum chemical method applicable to large molecular systems and their property calculations. However, the expensive cost of evaluating the long-range HF exchange is a major obstacle to applying it to large molecular systems and solid-state materials. To address this problem, we propose a linear-scaling method for the HF exchange integration, in particular for LC-DFT hybrid functionals.

  11. Thermodynamically constrained correction to ab initio equations of state

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    French, Martin; Mattsson, Thomas R.

    2014-07-07

    We show how equations of state generated by density functional theory methods can be augmented to match experimental data without distorting the correct behavior in the high- and low-density limits. The technique is thermodynamically consistent and relies on knowledge of the density and bulk modulus at a reference state and an estimation of the critical density of the liquid phase. We apply the method to four materials representing different classes of solids: carbon, molybdenum, lithium, and lithium fluoride. It is demonstrated that the corrected equations of state for both the liquid and solid phases show a significantly reduced dependence on the exchange-correlation functional used.

  12. Using a neural network to proximity correct patterns written with a Cambridge electron beam microfabricator 10.5 lithography system

    NASA Astrophysics Data System (ADS)

    Cummings, K. D.; Frye, R. C.; Rietman, E. A.

    1990-10-01

    This letter describes the initial results of using a theoretical determination of the proximity function and an adaptively trained neural network to proximity-correct patterns written on a Cambridge electron beam lithography system. The methods described are complete and may be applied to any electron beam exposure system that can modify the dose during exposure. The patterns produced in resist show the effects of proximity correction versus noncorrected patterns.

  13. Band-gap corrected density functional theory calculations for InAs/GaSb type II superlattices

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Jianwei; Zhang, Yong

    2014-12-07

    We performed pseudopotential based density functional theory (DFT) calculations for GaSb/InAs type II superlattices (T2SLs), with bandgap errors from the local density approximation mitigated by applying an empirical method to correct the bulk bandgaps. Specifically, this work (1) compared the calculated bandgaps with experimental data and non-self-consistent atomistic methods; (2) calculated the T2SL band structures with varying structural parameters; (3) investigated the interfacial effects associated with the no-common-atom heterostructure; and (4) studied the strain effect due to lattice mismatch between the two components. This work demonstrates the feasibility of applying the DFT method to more exotic heterostructures and defect problems related to this material system.

  14. Combining extrapolation with ghost interaction correction in range-separated ensemble density functional theory for excited states

    NASA Astrophysics Data System (ADS)

    Alam, Md. Mehboob; Deur, Killian; Knecht, Stefan; Fromager, Emmanuel

    2017-11-01

    The extrapolation technique of Savin [J. Chem. Phys. 140, 18A509 (2014)], which was initially applied to range-separated ground-state-density-functional Hamiltonians, is adapted in this work to ghost-interaction-corrected (GIC) range-separated ensemble density-functional theory (eDFT) for excited states. While standard extrapolations rely on energies that decay as μ⁻² in the large range-separation-parameter μ limit, we show analytically that (approximate) range-separated GIC ensemble energies converge more rapidly (as μ⁻³) towards their pure wavefunction theory values (μ → +∞ limit), thus requiring a different extrapolation correction. The purpose of such a correction is to further improve on the convergence and, consequently, to obtain more accurate excitation energies for a finite (and, in practice, relatively small) μ value. As a proof of concept, we apply the extrapolation method to He and small molecular systems (viz., H₂, HeH⁺, and LiH), thus considering different types of excitations such as Rydberg, charge transfer, and double excitations. Potential energy profiles of the first three and four singlet Σ⁺ excitation energies in HeH⁺ and H₂, respectively, are studied with a particular focus on avoided crossings for the latter. Finally, the extraction of individual state energies from the ensemble energy is discussed in the context of range-separated eDFT, as a perspective.

  15. A new approach for beam hardening correction based on the local spectrum distributions

    NASA Astrophysics Data System (ADS)

    Rasoulpour, Naser; Kamali-Asl, Alireza; Hemmati, Hamidreza

    2015-09-01

    The energy dependence of material absorption and the polychromatic nature of x-ray beams in Computed Tomography (CT) cause a phenomenon called "beam hardening". The purpose of this study is to provide a novel approach for Beam Hardening (BH) correction, based on the linear attenuation coefficients of Local Spectrum Distributions (LSDs) at various depths of a phantom. The proposed method includes two steps. First, the hardened spectra at various depths of the phantom (the LSDs) are estimated with the Expectation Maximization (EM) algorithm for arbitrary thickness intervals of known materials in the phantom; the performance of the LSD estimation technique is evaluated by applying random Gaussian noise to the transmission data. The linear attenuation coefficients corresponding to the mean energies of the LSDs are then obtained. Second, a correction function based on the calculated attenuation coefficients is derived in order to correct the polychromatic raw data. Since a correction function is used to convert the polychromatic data to monochromatic data, the effect of BH in the proposed reconstruction is reduced in comparison with polychromatic reconstruction. The approach was assessed in phantoms containing up to two materials, but the correction function has been extended for use in phantoms constructed with more than two materials. The relative mean energy difference in the LSD estimates based on noise-free transmission data was less than 1.5%, and it remains acceptable when random Gaussian noise is applied to the transmission data. The cupping artifact in the proposed reconstruction method is effectively reduced, and the proposed reconstruction profile is more uniform than the polychromatic reconstruction profile.

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pederson, Mark R.; Baruah, Tunna; Basurto, Luis

    We have applied a recently developed method to incorporate the self-interaction correction through Fermi orbitals to Mg-porphyrin, C₆₀, and pentacene molecules. The Fermi-Löwdin orbitals are localized and unitarily invariant to the Kohn-Sham orbitals from which they are constructed. The self-interaction-corrected energy is obtained variationally, leading to an optimum set of Fermi-Löwdin orbitals (orthonormalized Fermi orbitals) that gives the minimum energy. A Fermi orbital, by definition, depends on a certain point which is referred to as the descriptor position. The degree to which the initial choice of descriptor positions influences the variational approach to the minimum, and the complexity of the energy landscape as a function of Fermi-orbital descriptors, is examined in detail for Mg-porphyrin. The applications presented here also demonstrate that the method can be applied to larger molecular systems containing a few hundred electrons. The atomization energy of the C₆₀ molecule within the Fermi-Löwdin-orbital self-interaction-correction approach is significantly improved compared to the local density approximation in the Perdew-Wang 92 functional and the generalized gradient approximation of the Perdew-Burke-Ernzerhof functional. The eigenvalues of the highest occupied molecular orbitals show qualitative improvement.

  17. Accurate elevation and normal moveout corrections of seismic reflection data on rugged topography

    USGS Publications Warehouse

    Liu, J.; Xia, J.; Chen, C.; Zhang, G.

    2005-01-01

    The application of the seismic reflection method is often limited in areas of complex terrain. The problem is the incorrect correction of time shifts caused by topography. To correctly apply normal moveout (NMO) correction to reflection data, static corrections must be applied in advance to compensate for the time distortions of topography and the time delays from near-surface weathered layers. For environmental and engineering investigations, weathered layers are the targets, so the static correction mainly serves to adjust time shifts due to an undulating surface. In practice, seismic reflected raypaths are assumed to be almost vertical through the near-surface layers because these have much lower velocities than the layers below. This assumption is acceptable in most cases since it results in little residual error for small elevation changes and small offsets in reflection events. Although static algorithms based on choosing a floating datum related to common midpoint gathers or on residual surface-consistent functions are available and effective, errors caused by the assumption of vertical raypaths often generate pseudo-indications of structures. This paper compares corrections based on vertical raypaths with those based on biased (non-vertical) raypaths, and provides an approach that combines elevation and NMO corrections. The advantages of the approach are demonstrated by synthetic and real-world examples of multi-coverage seismic reflection surveys over rough topography. © The Royal Society of New Zealand 2005.
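
    Under the vertical-raypath assumption discussed in this record, the elevation static reduces to a one-line two-way time shift; this sketch ignores weathering-layer replacement and offset dependence, which a full implementation must handle.

```python
def elevation_static(elevation, datum, v_near):
    """Two-way vertical-raypath time shift (in seconds) that moves a
    source or receiver from its surface elevation down to a flat datum
    through a near-surface layer of velocity v_near (same length units)."""
    return 2.0 * (elevation - datum) / v_near
```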

  18. Laser correcting mirror

    DOEpatents

    Sawicki, Richard H.

    1994-01-01

    An improved laser correction mirror (10) for correcting aberrations in a laser beam wavefront, having a rectangular mirror body (12) with a plurality of legs (14, 16, 18, 20, 22, 24, 26, 28) arranged into opposing pairs (34, 36, 38, 40) along the long sides (30, 32) of the mirror body (12). Vector force pairs (49, 50, 52, 54) are applied by adjustment mechanisms (42, 44, 46, 48) between members of the opposing pairs (34, 36, 38, 40) for bending a reflective surface (13) of the mirror body (12) into a shape defining a function which can be used to correct for comatic aberrations.

  19. Evaluation of the Vienna APL corrections using reprocessed GNSS series

    NASA Astrophysics Data System (ADS)

    Steigenberger, P.; Dach, R.

    2011-12-01

    The Institute of Geodesy and Geophysics of the Vienna University of Technology recently started an operational service to provide non-tidal atmospheric pressure loading (APL) corrections. As the series is based on European Centre for Medium-Range Weather Forecasts (ECMWF) pressure data, it is fully consistent with the Vienna Mapping Function 1 (VMF1) atmospheric delay correction model for microwave measurements. Whereas VMF1 is widely used for, e.g., observations of Global Navigation Satellite Systems (GNSS), applying APL corrections is not yet a standard nowadays. The Center for Orbit Determination in Europe (CODE) - a joint venture between the Astronomical Institute of the University of Bern (AIUB, Bern, Switzerland), the Federal Office of Topography (swisstopo, Wabern, Switzerland), the Federal Office for Cartography and Geodesy (BKG, Frankfurt am Main, Germany), and the Insitute for Astronomical and Physical Geodesy, TU Muenchen (IAPG, Munich, Germany) - uses a recently generated series of reprocessed multi-GNSS data (considering GPS and GLONASS) to evaluate the APL corrections provided by the Vienna group. The results are also used to investigate the propagation of the APL effect in GNSS-derived results if no corrections are applied.

  20. Beam hardening correction in CT myocardial perfusion measurement

    NASA Astrophysics Data System (ADS)

    So, Aaron; Hsieh, Jiang; Li, Jian-Ying; Lee, Ting-Yim

    2009-05-01

    This paper presents a method for correcting beam hardening (BH) in cardiac CT perfusion imaging. The proposed algorithm works with reconstructed images instead of projection data. It applies thresholds to separate low (soft tissue) and high (bone and contrast) attenuating material in a CT image. The BH error in each projection is estimated by a polynomial function of the forward projection of the segmented image. The error image is reconstructed by back-projection of the estimated errors. A BH-corrected image is then obtained by subtracting a scaled error image from the original image. Phantoms were designed to simulate the BH artifacts encountered in cardiac CT perfusion studies of humans and animals that are most commonly used in cardiac research. These phantoms were used to investigate whether BH artifacts can be reduced with our approach and to determine the optimal settings, which depend upon the anatomy of the scanned subject, of the correction algorithm for patient and animal studies. The correction algorithm was also applied to correct BH in a clinical study to further demonstrate the effectiveness of our technique.
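
    The per-projection part of the algorithm can be sketched as follows; the polynomial coefficients are hypothetical placeholders for the calibrated values determined from the phantom studies.

```python
import numpy as np

def bh_correct_projection(raw_proj, bone_proj, coeffs=(0.0, 0.0, 0.05)):
    """Sketch: model the BH error in a projection as a polynomial
    (c0 + c1*x + c2*x**2) of the forward projection x of the segmented
    high-attenuation image, then subtract it (coefficients hypothetical)."""
    x = np.asarray(bone_proj, dtype=float)
    error = coeffs[0] + coeffs[1] * x + coeffs[2] * x ** 2
    return np.asarray(raw_proj, dtype=float) - error
```

    In the full method, the estimated errors are back-projected to form an error image, which is scaled and subtracted from the original reconstruction.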

  1. QCD evolution of the Sivers function

    NASA Astrophysics Data System (ADS)

    Aybat, S. M.; Collins, J. C.; Qiu, J. W.; Rogers, T. C.

    2012-02-01

    We extend the Collins-Soper-Sterman (CSS) formalism to apply it to the spin dependence governed by the Sivers function. We use it to give a correct numerical QCD evolution of existing fixed-scale fits of the Sivers function. With the aid of approximations useful for the nonperturbative region, we present the results as parametrizations of a Gaussian form in transverse-momentum space, rather than in the Fourier conjugate transverse coordinate space normally used in the CSS formalism. They are specifically valid at small transverse momentum. Since evolution has been applied, our results can be used to make predictions for Drell-Yan and semi-inclusive deep inelastic scattering at energies different from those where the original fits were made. Our evolved functions are of a form that they can be used in the same parton-model factorization formulas as used in the original fits, but now with a predicted scale dependence in the fit parameters. We also present a method by which our evolved functions can be corrected to allow for twist-3 contributions at large parton transverse momentum.

  2. Valence and charge-transfer optical properties for some SiₙCₘ (m, n ≤ 12) clusters: Comparing TD-DFT, complete-basis-limit EOMCC, and benchmarks from spectroscopy

    NASA Astrophysics Data System (ADS)

    Lutz, Jesse J.; Duan, Xiaofeng F.; Ranasinghe, Duminda S.; Jin, Yifan; Margraf, Johannes T.; Perera, Ajith; Burggraf, Larry W.; Bartlett, Rodney J.

    2018-05-01

    Accurate optical characterization of the closo-Si₁₂C₁₂ molecule is important to guide experimental efforts toward the synthesis of nano-wires, cyclic nano-arrays, and related array structures, which are anticipated to be robust and efficient exciton materials for opto-electronic devices. Working toward calibrated methods for the description of closo-Si₁₂C₁₂ oligomers, various electronic structure approaches are evaluated for their ability to reproduce measured optical transitions of the SiC₂, Si₂Cₙ (n = 1-3), and Si₃Cₙ (n = 1, 2) clusters reported earlier by Steglich and Maier [Astrophys. J. 801, 119 (2015)]. Complete-basis-limit equation-of-motion coupled-cluster (EOMCC) results are presented and a comparison is made between perturbative and renormalized non-iterative triples corrections. The effect of adding a renormalized correction for quadruples is also tested. Benchmark test sets derived from both measurement and high-level EOMCC calculations are then used to evaluate the performance of a variety of density functionals within the time-dependent density functional theory (TD-DFT) framework. The best-performing functionals are subsequently applied to predict valence TD-DFT excitation energies for the lowest-energy isomers of SiₙC and Siₙ₋₁C₇₋ₙ (n = 4-6). TD-DFT approaches are then applied to the SiₙCₙ (n = 4-12) clusters and unique spectroscopic signatures of closo-Si₁₂C₁₂ are discussed. Finally, various long-range corrected density functionals, including those from the CAM-QTP family, are applied to a charge-transfer excitation in a cyclic (Si₄C₄)₄ oligomer. Approaches for gauging the extent of charge-transfer character are also tested and EOMCC results are used to benchmark functionals and make recommendations.

  3. An efficient algorithm for automatic phase correction of NMR spectra based on entropy minimization

    NASA Astrophysics Data System (ADS)

    Chen, Li; Weng, Zhiqiang; Goh, LaiYoong; Garland, Marc

    2002-09-01

    A new algorithm for automatic phase correction of NMR spectra based on entropy minimization is proposed. The optimal zero-order and first-order phase corrections for an NMR spectrum are determined by minimizing entropy. The objective function is constructed using a Shannon-type information entropy measure, with entropy defined on the normalized derivative of the NMR spectral data. The algorithm has been successfully applied to experimental ¹H NMR spectra. The results of automatic phase correction are found to be comparable to, or perhaps better than, manual phase correction. The advantages of this automatic phase correction algorithm include its simple mathematical basis and its straightforward, reproducible, and efficient optimization procedure. The algorithm is implemented in the Matlab program ACME—Automated phase Correction based on Minimization of Entropy.
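
    The entropy objective and a brute-force version of the minimisation can be sketched as follows; ACME itself uses a proper optimiser and additional penalty terms, which this sketch omits.

```python
import numpy as np

def entropy(phased_real):
    """Shannon-type entropy of the normalized first derivative."""
    h = np.abs(np.diff(phased_real))
    p = h / (h.sum() + 1e-12)
    p = p[p > 0]
    return -(p * np.log(p)).sum()

def autophase(spectrum, n_grid=90):
    """Grid-search the zero-order (ph0) and first-order (ph1) phase
    corrections that minimise the derivative entropy of the real part."""
    x = np.linspace(0.0, 1.0, len(spectrum))
    best = (np.inf, 0.0, 0.0)
    for ph0 in np.linspace(-np.pi, np.pi, n_grid):
        for ph1 in np.linspace(-np.pi, np.pi, n_grid):
            e = entropy(np.real(spectrum * np.exp(1j * (ph0 + ph1 * x))))
            if e < best[0]:
                best = (e, ph0, ph1)
    _, ph0, ph1 = best
    return np.real(spectrum * np.exp(1j * (ph0 + ph1 * x)))
```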

  4. Assisting People with Multiple Disabilities by Actively Keeping the Head in an Upright Position with a Nintendo Wii Remote Controller through the Control of an Environmental Stimulation

    ERIC Educational Resources Information Center

    Shih, Ching-Hsiang; Shih, Chia-Ju; Shih, Ching-Tien

    2011-01-01

    Recent research has adopted software technology, applying the Nintendo Wii Remote Controller to the correction of hyperactive limb behavior. This study extended Wii Remote Controller functionality to improper head position (posture) correction (i.e. actively adjusting abnormal head posture) to assess whether two people with multiple…

  5. Corrections of arterial input function for dynamic H₂¹⁵O PET to assess perfusion of pelvic tumours: arterial blood sampling versus image extraction

    NASA Astrophysics Data System (ADS)

    Lüdemann, L.; Sreenivasa, G.; Michel, R.; Rosner, C.; Plotkin, M.; Felix, R.; Wust, P.; Amthauer, H.

    2006-06-01

    Assessment of perfusion with ¹⁵O-labelled water (H₂¹⁵O) requires measurement of the arterial input function (AIF). The arterial time activity curve (TAC) measured using the peripheral sampling scheme requires corrections for delay and dispersion. In this study, parametrizations with and without arterial spillover correction for fitting of the tissue curve are evaluated. Additionally, a completely noninvasive method for generating the AIF from a dynamic positron emission tomography (PET) acquisition is applied to assess perfusion of pelvic tumours. This method uses a volume of interest (VOI) to extract the TAC from the femoral artery. The VOI TAC is corrected for spillover using a separate tissue TAC and for recovery by determining the recovery coefficient on a coregistered CT data set. The techniques were applied in five patients with pelvic tumours who underwent a total of 11 examinations. Delay and dispersion correction of the blood TAC without arterial spillover correction yielded physiologically inconsistent solutions in seven examinations. Correction of arterial spillover increased the fitting accuracy and yielded consistent results in all patients. Generation of an AIF from PET image data was investigated as an alternative to arterial blood sampling and was shown to have an intrinsic potential to determine the AIF noninvasively and reproducibly. The AIF extracted from a VOI in a dynamic PET scan was similar in shape to the blood AIF but yielded significantly higher tissue perfusion values (mean of 104.0 ± 52.0%) and lower partition coefficients (-31.6 ± 24.2%). The perfusion values and partition coefficients determined with the VOI technique have to be corrected in order to compare the results with those of studies using a blood AIF.

  6. Adaptive control for accelerators

    DOEpatents

    Eaton, Lawrie E.; Jachim, Stephen P.; Natter, Eckard F.

    1991-01-01

    An adaptive feedforward control loop is provided to stabilize accelerator beam loading of the radio frequency field in an accelerator cavity during successive pulses of the beam into the cavity. A digital signal processor enables an adaptive algorithm to generate a feedforward error correcting signal functionally determined by the feedback error obtained by a beam pulse loading the cavity after the previous correcting signal was applied to the cavity. Each cavity feedforward correcting signal is successively stored in the digital processor and modified by the feedback error resulting from its application to generate the next feedforward error correcting signal. A feedforward error correcting signal is generated by the digital processor in advance of the beam pulse to enable a composite correcting signal and the beam pulse to arrive concurrently at the cavity.
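
    The pulse-to-pulse update at the heart of the loop can be sketched as an iterative-learning step; the loop gain is a hypothetical stand-in for the processor's actual update rule.

```python
def update_feedforward(ff_table, error_table, gain=0.5):
    """Sketch: each stored feedforward sample is nudged by the feedback
    error measured on the pulse where the previous correction was applied."""
    return [ff + gain * err for ff, err in zip(ff_table, error_table)]
```

    Iterating this update against a constant beam-loading disturbance drives the stored correction toward the disturbance and the feedback error toward zero.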

  7. Performance of the STIS CCD Dark Rate Temperature Correction

    NASA Astrophysics Data System (ADS)

    Branton, Doug; STScI STIS Team

    2018-06-01

    Since July 2001, the Space Telescope Imaging Spectrograph (STIS) onboard Hubble has operated on its Side-2 electronics due to a failure in the primary Side-1 electronics. While nearly identical, Side-2 lacks a functioning temperature sensor for the CCD, introducing a variability in the CCD operating temperature. Previous analysis utilized the CCD housing temperature telemetry to characterize the relationship between the housing temperature and the dark rate. It was found that a first-order 7%/°C uniform dark correction demonstrated a considerable improvement in the quality of dark subtraction on Side-2 era CCD data, and that value has been used on all Side-2 CCD darks since. In this report, we show how this temperature correction has performed historically. We compare the current 7%/°C value against the ideal first-order correction at a given time (which can vary between ~6%/°C and ~10%/°C) as well as against a more complex second-order correction that applies a unique slope to each pixel as a function of dark rate and time. At worst, the current correction has performed ~1% worse than the second-order correction. Additionally, we present initial evidence suggesting that the variability in pixel temperature-sensitivity is significant enough to warrant a temperature correction that considers pixels individually rather than correcting them uniformly.
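
    The first-order uniform correction has a one-line form; whether the 7%/°C factor compounds per degree, as assumed here, or is applied linearly is a detail this sketch does not settle.

```python
def scale_dark(dark_rate, housing_temp, ref_temp, slope=0.07):
    """Sketch: scale a dark rate by (1 + slope) per degree of housing
    temperature away from the reference (compounding form assumed)."""
    return dark_rate * (1.0 + slope) ** (housing_temp - ref_temp)
```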

  8. An empirical method to correct for temperature-dependent variations in the overlap function of CHM15k ceilometers

    NASA Astrophysics Data System (ADS)

    Hervo, Maxime; Poltera, Yann; Haefele, Alexander

    2016-07-01

    Imperfections in a lidar's overlap function lead to artefacts in the background, range and overlap-corrected lidar signals. These artefacts can erroneously be interpreted as an aerosol gradient or, in extreme cases, as a cloud base leading to false cloud detection. A correct specification of the overlap function is hence crucial in the use of automatic elastic lidars (ceilometers) for the detection of the planetary boundary layer or of low cloud. In this study, an algorithm is presented to correct such artefacts. It is based on the assumption of a homogeneous boundary layer and a correct specification of the overlap function down to a minimum range, which must be situated within the boundary layer. The strength of the algorithm lies in a sophisticated quality-check scheme which allows the reliable identification of favourable atmospheric conditions. The algorithm was applied to 2 years of data from a CHM15k ceilometer from the company Lufft. Backscatter signals corrected for background, range and overlap were compared using the overlap function provided by the manufacturer and the one corrected with the presented algorithm. Differences between corrected and uncorrected signals reached up to 45 % in the first 300 m above ground. The amplitude of the correction turned out to be temperature dependent and was larger for higher temperatures. A linear model of the correction as a function of the instrument's internal temperature was derived from the experimental data. Case studies and a statistical analysis of the strongest gradient derived from corrected signals reveal that the temperature model is capable of a high-quality correction of overlap artefacts, in particular those due to diurnal variations. 
The presented correction method has the potential to significantly improve the detection of the boundary layer with gradient-based methods because it removes false candidates and hence simplifies the attribution of the detected gradients to the planetary boundary layer. A particularly significant benefit can be expected for the detection of shallow stable layers typical of night-time situations. The algorithm is completely automatic and does not require any on-site intervention but requires the definition of an adequate instrument-specific configuration. It is therefore suited for use in large ceilometer networks.

  9. Advancement of the anterior maxilla by distraction (case report).

    PubMed

    Karakasis, Dimitri; Hadjipetrou, Loucia

    2004-06-01

    Several techniques of distraction osteogenesis have been applied for the correction of the compromised midface in patients with clefts of the lip, alveolus and palate. This article presents a technique of callus distraction applied in a specific case of hypoplasia of a cleft maxilla, in which the anterior maxillary segment was advanced sagittally without affecting velopharyngeal function. Applying distraction osteogenesis for advancement of the anterior maxillary segment in cleft patients offers many advantages.

  10. Automatic software correction of residual aberrations in reconstructed HRTEM exit waves of crystalline samples

    DOE PAGES

    Ophus, Colin; Rasool, Haider I.; Linck, Martin; ...

    2016-11-30

    We develop an automatic and objective method to measure and correct residual aberrations in atomic-resolution HRTEM complex exit waves for crystalline samples aligned along a low-index zone axis. Our method uses the approximate rotational point symmetry of a column of atoms or single atom to iteratively calculate a best-fit numerical phase plate for this symmetry condition, and does not require information about the sample thickness or precise structure. We apply our method to two experimental focal series reconstructions, imaging a β-Si₃N₄ wedge with O and N doping, and a single-layer graphene grain boundary. We use peak and lattice fitting to evaluate the precision of the corrected exit waves. We also apply our method to the exit wave of a Si wedge retrieved by off-axis electron holography. In all cases, the software correction of the residual aberration function improves the accuracy of the measured exit waves.

  12. Correction Approach for Delta Function Convolution Model Fitting of Fluorescence Decay Data in the Case of a Monoexponential Reference Fluorophore.

    PubMed

    Talbot, Clifford B; Lagarto, João; Warren, Sean; Neil, Mark A A; French, Paul M W; Dunsby, Chris

    2015-09-01

    A correction is proposed to the Delta function convolution method (DFCM) for fitting a multiexponential decay model to time-resolved fluorescence decay data using a monoexponential reference fluorophore. A theoretical analysis of the discretised DFCM multiexponential decay function shows the presence of an extra exponential decay term with the same lifetime as the reference fluorophore, which we denote the residual reference component. This extra decay component arises from the discretised convolution of one of the two terms in the modified model function required by the DFCM. The effect of the residual reference component becomes more pronounced when the fluorescence lifetime of the reference is longer than all of the individual components of the specimen under inspection and when the temporal sampling interval is not negligible compared to the quantity (τR^(-1) - τ^(-1))^(-1), where τR and τ are the fluorescence lifetimes of the reference and the specimen, respectively. It is shown that the unwanted residual reference component results in systematic errors when fitting simulated data and that these errors are absent when the proposed correction is applied. The correction is also verified using experimental data.

  13. Orbit-orbit relativistic correction calculated with all-electron molecular explicitly correlated Gaussians.

    PubMed

    Stanke, Monika; Palikot, Ewa; Kȩdziera, Dariusz; Adamowicz, Ludwik

    2016-12-14

    An algorithm for calculating the first-order electronic orbit-orbit magnetic interaction correction for an electronic wave function expanded in terms of all-electron explicitly correlated molecular Gaussian (ECG) functions with shifted centers is derived and implemented. The algorithm is tested in calculations concerning the H2 molecule. It is also applied in calculations for the LiH and H3+ molecular systems. The implementation completes our work on the leading relativistic correction for ECGs and paves the way for very accurate ECG calculations of ground and excited potential energy surfaces (PESs) of small molecules with two or more nuclei and two or more electrons, such as HeH-, H3+, HeH2+, and LiH2+. The PESs will be used to determine rovibrational spectra of these systems.

  14. Modified Displacement Transfer Functions for Deformed Shape Predictions of Slender Curved Structures with Varying Curvatures

    NASA Technical Reports Server (NTRS)

    Ko, William L.; Fleischer, Van Tran

    2014-01-01

    To eliminate the need for finite-element modeling in structure shape predictions, a new method was invented: the Displacement Transfer Functions transform measured surface strains into deflections, mapping out the overall deformed shape of a structure. The Displacement Transfer Functions are expressed in terms of rectilinearly distributed surface strains and contain no material properties. This report applies the patented method to the shape predictions of non-symmetrically loaded slender curved structures with different curvatures up to a full circle. Because measured surface strains were not available, finite-element analysis was used to analytically generate the surface strains. Previously formulated straight-beam Displacement Transfer Functions were modified by introducing curvature-effect correction terms. Through single-point or dual-point collocations with finite-element-generated deflection curves, functional forms of the curvature-effect correction terms were empirically established. The resulting modified Displacement Transfer Functions then provide quite accurate shape predictions. The uniform straight-beam Displacement Transfer Function was also applied to the shape predictions of a section-cut of a generic capsule (GC) outer curved sandwich wall. The resulting GC shape predictions are quite accurate in regions where the radius of curvature does not change sharply.

  15. Can small field diode correction factors be applied universally?

    PubMed

    Liu, Paul Z Y; Suchowerska, Natalka; McKenzie, David R

    2014-09-01

    Diode detectors are commonly used in dosimetry but have been reported to over-respond in small fields, and diode correction factors have been reported in the literature. The purpose of this study is to determine whether correction factors for a given diode type can be universally applied over a range of irradiation conditions, including beams of different qualities. A mathematical relation for diode over-response as a function of field size was developed using previously published experimental data in which diodes were compared to an air-core scintillation dosimeter. Correction factors calculated from this relation were then compared to those available in the literature. The relation was found to predict the measured diode correction factors for fields between 5 and 30 mm in width; the average deviation between measured and predicted over-response was 0.32% for IBA SFD and PTW Type E diodes. Diode over-response was found not to depend strongly on the type of linac, the method of collimation, or the measurement depth. The relation agreed with published diode correction factors derived from Monte Carlo simulations and measurements, indicating that correction factors are robust in their transportability between different radiation beams.

  16. Statistical bias correction method applied on CMIP5 datasets over the Indian region during the summer monsoon season for climate change applications

    NASA Astrophysics Data System (ADS)

    Prasanna, V.

    2018-01-01

    This study makes use of temperature and precipitation from CMIP5 climate model output for climate change application studies over the Indian region during the summer monsoon season (JJAS). Bias correction of temperature and precipitation from CMIP5 GCM simulations with respect to observations is discussed in detail. Non-linear statistical bias correction is well suited to climate change data because it is simple and does not add artificial uncertainty to the impact assessment of climate change scenarios for applications such as agricultural production changes. The simple statistical bias correction uses observational constraints on the GCM baseline, and the projected results are scaled with respect to the changing magnitude in future scenarios, which varies from one model to another. Two bias correction techniques are shown here: (1) a simple bias correction using a percentile-based quantile-mapping algorithm and (2) a simple but improved method, a cumulative distribution function (CDF; Weibull distribution)-based quantile-mapping algorithm. This study shows that the percentile-based quantile-mapping method gives results similar to the CDF (Weibull)-based method, and the two are comparable. The bias correction is applied to temperature and precipitation for the present climate and for future projections, and the corrected data are used in a simple statistical model to understand future changes in crop production over the Indian region during the summer monsoon season. In total, 12 CMIP5 models are used for the Historical (1901-2005), RCP4.5 (2005-2100), and RCP8.5 (2005-2100) scenarios. The climate index from each CMIP5 model and the observed agricultural yield index over the Indian region are used in a regression model to project changes in agricultural yield under the RCP4.5 and RCP8.5 scenarios. The results revealed better convergence of model projections in the bias-corrected data than in the uncorrected data. The study can be extended to localized regional domains aimed at understanding future changes in agricultural productivity with an agro-economic or simple statistical model. The statistical model indicated that total food grain yield will increase over the Indian region in the future: by approximately 50 kg/ha under the RCP4.5 scenario and by approximately 90 kg/ha under the RCP8.5 scenario, from 2001 until the end of 2100. While many studies have used bias correction techniques, this study applies bias correction to future climate scenario data from CMIP5 models and uses the corrected data with crop statistics to project future crop yield changes over the Indian region.
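The percentile-based quantile mapping described above can be sketched in a few lines. This is a generic empirical-quantile implementation on illustrative synthetic data, not the author's code; the function and variable names are hypothetical.

```python
import numpy as np

def quantile_map(model_hist, obs_hist, model_future):
    """Percentile-based quantile mapping: locate each future model value's
    percentile within the historical model distribution, then read off the
    observed value at that same percentile."""
    pct = np.searchsorted(np.sort(model_hist), model_future) / len(model_hist)
    pct = np.clip(pct, 0.0, 1.0)
    return np.quantile(obs_hist, pct)

rng = np.random.default_rng(0)
obs = rng.gamma(2.0, 3.0, 5000)      # "observed" monsoon rainfall (made up)
model = obs * 0.7 + 1.0              # systematically biased model output
corrected = quantile_map(model, obs, model)
# After correction, the model quantiles line up with the observed quantiles.
```

A CDF-based variant would replace the empirical percentile lookup with a fitted Weibull CDF and its inverse.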

  17. Anharmonic effects in the quantum cluster equilibrium method

    NASA Astrophysics Data System (ADS)

    von Domaros, Michael; Perlt, Eva

    2017-03-01

    The well-established quantum cluster equilibrium (QCE) model provides a statistical thermodynamic framework for applying high-level ab initio calculations of finite cluster structures to macroscopic liquid phases via the partition function. So far, the harmonic approximation has been applied throughout the calculations. In this article, we apply an important correction in the evaluation of the one-particle partition function to account for anharmonicity. To this end, we implemented an analytical approximation to the Morse partition function and the derivatives of its logarithm with respect to temperature, which are required for the evaluation of thermodynamic quantities. This anharmonic QCE approach has been applied to liquid hydrogen chloride; cluster distributions, the molar volume, the volumetric thermal expansion coefficient, and the isobaric heat capacity have been calculated. An improved description of all properties is observed when anharmonic effects are considered.
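As a rough illustration of why replacing the harmonic oscillator by a Morse oscillator changes the partition function, the sketch below sums the finite set of bound Morse levels in dimensionless units and compares the result with the closed-form harmonic expression. This is a toy model, not the analytical approximation implemented by the authors; the anharmonicity constant is illustrative.

```python
import numpy as np

def morse_partition(beta, xe):
    """Vibrational partition function for dimensionless Morse levels
    E_n = (n + 1/2) - xe*(n + 1/2)^2 (in units of hbar*omega), summed
    over the finite set of bound states with (n + 1/2) < 1/(2*xe)."""
    n_max = int(np.floor(1.0 / (2.0 * xe) - 0.5))
    n = np.arange(n_max + 1)
    energies = (n + 0.5) - xe * (n + 0.5) ** 2
    return float(np.sum(np.exp(-beta * energies)))

def harmonic_partition(beta):
    # Closed form for a harmonic oscillator of the same frequency
    return np.exp(-beta / 2.0) / (1.0 - np.exp(-beta))

beta, xe = 2.0, 0.02        # illustrative inverse temperature and anharmonicity
z_anh = morse_partition(beta, xe)
z_harm = harmonic_partition(beta)
# Anharmonicity lowers every bound level, so the Morse sum exceeds
# the harmonic partition function at the same temperature.
```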

  18. How do Stability Corrections Perform in the Stable Boundary Layer Over Snow?

    NASA Astrophysics Data System (ADS)

    Schlögl, Sebastian; Lehning, Michael; Nishimura, Kouichi; Huwald, Hendrik; Cullen, Nicolas J.; Mott, Rebecca

    2017-10-01

    We assess sensible heat-flux parametrizations in stable conditions over snow surfaces by testing and developing stability correction functions for two alpine and two polar test sites. Five turbulence datasets are analyzed with respect to (a) the validity of Monin-Obukhov similarity theory, (b) the model performance of well-established stability corrections, and (c) the development of new univariate and multivariate stability corrections. Testing a wide range of stability corrections reveals an overestimation of the turbulent sensible heat flux for high wind speeds and a generally poor performance of all investigated functions for large temperature differences (>10 K) between the snow and the atmosphere above. Applying the Monin-Obukhov bulk formulation introduces a mean absolute error in the sensible heat flux of 6 W m^{-2} (compared with heat fluxes calculated directly from eddy covariance). The stability corrections introduce an additional error between 1 and 5 W m^{-2}, with the smallest error among published stability corrections found for the Holtslag scheme. We confirm from previous studies that stability corrections need improvement for large temperature differences and wind speeds, where sensible heat fluxes are distinctly overestimated. Under these atmospheric conditions our newly developed stability corrections slightly improve the model performance. However, the differences between stability corrections are typically small compared to the residual error, which stems from the Monin-Obukhov bulk formulation itself.
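A minimal sketch of the bulk formulation being assessed, using the simple log-linear stable correction psi = -5 z/L rather than the Holtslag scheme evaluated in the paper; all parameter values are illustrative.

```python
import numpy as np

KAPPA = 0.4        # von Karman constant
RHO_CP = 1231.0    # air density * specific heat, J m^-3 K^-1 (rough value)

def sensible_heat_flux(u, dT, z=2.0, z0=1e-3, L=np.inf):
    """Bulk sensible heat flux H = rho*cp*C_H*u*dT, with the log-linear
    stable correction psi = -5*z/L applied to both momentum and heat
    profiles (sketch only; schemes in the paper differ in psi)."""
    psi = 0.0 if np.isinf(L) else -5.0 * z / L
    c_h = KAPPA ** 2 / (np.log(z / z0) - psi) ** 2
    return RHO_CP * c_h * u * dT

h_neutral = sensible_heat_flux(5.0, -5.0)           # neutral stratification
h_stable = sensible_heat_flux(5.0, -5.0, L=20.0)    # stable, L = 20 m
# In stable conditions psi < 0, so the transfer coefficient and hence
# the flux magnitude are reduced relative to neutral.
```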

  19. Seiberg-Witten/Whitham Equations and Instanton Corrections in N = 2 Supersymmetric Yang-Mills Theory

    NASA Astrophysics Data System (ADS)

    Dai, Jia-Liang; Fan, En-Gui

    2018-05-01

    We obtain the instanton correction recursion relations for the low-energy effective prepotential in pure N = 2 SU(n) supersymmetric Yang-Mills gauge theory from the Whitham hierarchy and the Seiberg-Witten/Whitham equations. These formulae provide a powerful tool for calculating arbitrary-order instanton correction coefficients from the perturbative contributions to the effective prepotential in Seiberg-Witten gauge theory. We apply this idea to evaluate the first- and second-order instanton correction coefficients explicitly in the SU(n) case through the dynamical scale parameter expressed in terms of Riemann's theta-function. Supported by the National Natural Science Foundation of China under Grant No. 11271079.

  20. CoFFEE: Corrections For Formation Energy and Eigenvalues for charged defect simulations

    NASA Astrophysics Data System (ADS)

    Naik, Mit H.; Jain, Manish

    2018-05-01

    Charged point defects in materials are widely studied using Density Functional Theory (DFT) packages with periodic boundary conditions. The formation energy and defect level computed from these simulations need to be corrected to remove contributions from the spurious long-range interaction between the defect and its periodic images. To this end, the CoFFEE code implements the Freysoldt-Neugebauer-Van de Walle (FNV) correction scheme. The corrections can be applied to charged defects in a complete range of material shapes and sizes: bulk, slab (or two-dimensional), wires, and nanoribbons. The code is written in Python and features MPI parallelization and optimization of the slow steps using the Cython package.

  1. VizieR Online Data Catalog: 3D correction in 5 photometric systems (Bonifacio+, 2018)

    NASA Astrophysics Data System (ADS)

    Bonifacio, P.; Caffau, E.; Ludwig, H.-G.; Steffen, M.; Castelli, F.; Gallagher, A. J.; Kucinskas, A.; Prakapavicius, D.; Cayrel, R.; Freytag, B.; Plez, B.; Homeier, D.

    2018-01-01

    We have used the CIFIST grid of CO5BOLD models to investigate the effects of granulation on the fluxes and colours of stars of spectral types F, G, and K. We publish tables of 3D corrections that can be applied to colours computed from any 1D model atmosphere. For Teff >= 5000 K, the corrections are smooth enough, as a function of atmospheric parameters, that it is possible to interpolate them between grid points; thus the coarseness of the CIFIST grid should not be a major limitation. However, at the cool end there are still far too few models to allow a reliable interpolation. (20 data files).

  2. Ultra-high resolution computed tomography imaging

    DOEpatents

    Paulus, Michael J.; Sari-Sarraf, Hamed; Tobin, Jr., Kenneth William; Gleason, Shaun S.; Thomas, Jr., Clarence E.

    2002-01-01

    A method for ultra-high resolution computed tomography imaging, comprising the steps of: focusing a high-energy particle beam, for example x-rays or gamma-rays, onto a target object; acquiring a 2-dimensional projection data set representative of the target object; generating a corrected projection data set by applying a deconvolution algorithm, having an experimentally determined transfer function, to the 2-dimensional data set; storing the corrected projection data set; incrementally rotating the target object through an angle of approximately 180 degrees and, after each incremental rotation, repeating the radiating, acquiring, generating, and storing steps; and, after the rotating step, applying a cone-beam algorithm, for example a modified tomographic reconstruction algorithm, to the corrected projection data sets to generate a 3-dimensional image. The size of the focal spot of the beam is reduced to not greater than approximately 1 micron, and even to not greater than approximately 0.5 microns.

  3. Valence and charge-transfer optical properties for some SinCm (m, n ≤ 12) clusters: Comparing TD-DFT, complete-basis-limit EOMCC, and benchmarks from spectroscopy.

    PubMed

    Lutz, Jesse J; Duan, Xiaofeng F; Ranasinghe, Duminda S; Jin, Yifan; Margraf, Johannes T; Perera, Ajith; Burggraf, Larry W; Bartlett, Rodney J

    2018-05-07

    Accurate optical characterization of the closo-Si12C12 molecule is important to guide experimental efforts toward the synthesis of nano-wires, cyclic nano-arrays, and related array structures, which are anticipated to be robust and efficient exciton materials for opto-electronic devices. Working toward calibrated methods for the description of closo-Si12C12 oligomers, various electronic structure approaches are evaluated for their ability to reproduce measured optical transitions of the SiC2, Si2Cn (n = 1-3), and Si3Cn (n = 1, 2) clusters reported earlier by Steglich and Maier [Astrophys. J. 801, 119 (2015)]. Complete-basis-limit equation-of-motion coupled-cluster (EOMCC) results are presented and a comparison is made between perturbative and renormalized non-iterative triples corrections. The effect of adding a renormalized correction for quadruples is also tested. Benchmark test sets derived from both measurement and high-level EOMCC calculations are then used to evaluate the performance of a variety of density functionals within the time-dependent density functional theory (TD-DFT) framework. The best-performing functionals are subsequently applied to predict valence TD-DFT excitation energies for the lowest-energy isomers of SinC and Sin-1C7-n (n = 4-6). TD-DFT approaches are then applied to the SinCn (n = 4-12) clusters and unique spectroscopic signatures of closo-Si12C12 are discussed. Finally, various long-range corrected density functionals, including those from the CAM-QTP family, are applied to a charge-transfer excitation in a cyclic (Si4C4)4 oligomer. Approaches for gauging the extent of charge-transfer character are also tested and EOMCC results are used to benchmark functionals and make recommendations.

  4. Prediction-Correction Algorithms for Time-Varying Constrained Optimization

    DOE PAGES

    Simonetto, Andrea; Dall'Anese, Emiliano

    2017-07-26

    This article develops online algorithms to track solutions of time-varying constrained optimization problems. Resembling workhorse Kalman filtering-based approaches for dynamical systems, the proposed methods involve prediction-correction steps to provably track the trajectory of the optimal solutions of time-varying convex problems. The merits of existing prediction-correction methods have been shown for unconstrained problems and for setups where computing the inverse of the Hessian of the cost function is computationally affordable. This paper addresses the limitations of existing methods by tackling constrained problems and by designing first-order prediction steps that rely on the Hessian of the cost function (and do not require the computation of its inverse). In addition, the proposed methods are shown to improve the convergence speed of existing prediction-correction methods when applied to unconstrained problems. Numerical simulations corroborate the analytical results and showcase the performance and benefits of the proposed algorithms. A realistic application of the proposed method to real-time control of energy resources is presented.
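A toy prediction-correction tracker for a scalar time-varying quadratic illustrates the idea: predict how the optimum drifts in time, then correct with a gradient step at the new time. This is a sketch under simplifying assumptions (known drift, unit Hessian, no constraints), not the first-order constrained method of the paper.

```python
import math

def track(T=20.0, h=0.05, alpha=0.8):
    """Track argmin_x f(x; t) for f(x; t) = 0.5*(x - sin(t))^2.
    Prediction: since the Hessian is 1 and grad_tx f = -cos(t),
    the optimum drifts by h*cos(t) per step.
    Correction: one gradient step toward the optimum at the new time."""
    x, t = 0.0, 0.0
    while t < T:
        x += h * math.cos(t)             # prediction step
        t += h
        x -= alpha * (x - math.sin(t))   # correction: -alpha * grad f(x; t)
    return x, math.sin(t)

x_final, target = track()
err = abs(x_final - target)   # should stay small despite the moving optimum
```

Without the prediction step, the tracking error would be limited by how far the optimum moves per sampling interval; prediction removes the first-order part of that drift.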

  5. Contact resistance extraction methods for short- and long-channel carbon nanotube field-effect transistors

    NASA Astrophysics Data System (ADS)

    Pacheco-Sanchez, Anibal; Claus, Martin; Mothes, Sven; Schröter, Michael

    2016-11-01

    Three different methods for the extraction of the contact resistance, based on the well-known transfer length method (TLM) and two variants of the Y-function method, have been applied to simulation and experimental data of short- and long-channel CNTFETs. While TLM requires special CNT test structures, standard electrical device characteristics are sufficient for the Y-function methods. The methods have been applied to CNTFETs with low and high channel resistance. It turned out that the standard Y-function method fails to deliver the correct contact resistance when the channel resistance is relatively high compared to the contact resistances. A physics-based validation is also given for the application of these methods, based on applying traditional Si MOSFET theory to quasi-ballistic CNTFETs.
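The Y-function idea can be sketched on synthetic data: because Y = I_D/sqrt(g_m) is linear in V_G even when a series-resistance term degrades the current, the threshold voltage and gain factor can be recovered from a straight-line fit. All parameter values below are illustrative, and the simple MOSFET-style current model is an assumption for the sketch.

```python
import numpy as np

# Synthetic linear-region transfer curve with degradation factor theta
# (which absorbs the contact/series resistance):
#   I_D = beta*(VG - VT)*VD / (1 + theta*(VG - VT))
beta, vt, vd, theta = 2e-4, 0.5, 0.05, 2.0
vg = np.linspace(0.8, 2.0, 200)
i_d = beta * (vg - vt) * vd / (1.0 + theta * (vg - vt))

gm = np.gradient(i_d, vg)      # transconductance from the "measured" curve
y = i_d / np.sqrt(gm)          # Y-function: equals sqrt(beta*VD)*(VG - VT)

slope, intercept = np.polyfit(vg, y, 1)
beta_est = slope ** 2 / vd     # recovered gain factor, unaffected by theta
vt_est = -intercept / slope    # recovered threshold voltage
```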

  6. B97-3c: A revised low-cost variant of the B97-D density functional method

    NASA Astrophysics Data System (ADS)

    Brandenburg, Jan Gerit; Bannwarth, Christoph; Hansen, Andreas; Grimme, Stefan

    2018-02-01

    A revised version of the well-established B97-D density functional approximation with general applicability for chemical properties of large systems is proposed. Like B97-D, it is based on Becke's power-series ansatz from 1997 and is explicitly parametrized by including the standard D3 semi-classical dispersion correction. The orbitals are expanded in a modified valence triple-zeta Gaussian basis set, which is available for all elements up to Rn. Remaining basis set errors are mostly absorbed in the modified B97 parametrization, while an established atom-pairwise short-range potential is applied to correct for the systematically too long bonds of main group elements which are typical for most semi-local density functionals. The new composite scheme (termed B97-3c) completes the hierarchy of "low-cost" electronic structure methods, which are all mainly free of basis set superposition error and account for most interactions in a physically sound and asymptotically correct manner. B97-3c yields excellent molecular and condensed phase geometries, similar to most hybrid functionals evaluated in a larger basis set expansion. Results on the comprehensive GMTKN55 energy database demonstrate its good performance for main group thermochemistry, kinetics, and non-covalent interactions, when compared to functionals of the same class. This also transfers to metal-organic reactions, which is a major area of applicability for semi-local functionals. B97-3c can be routinely applied to hundreds of atoms on a single processor and we suggest it as a robust computational tool, in particular, for more strongly correlated systems where our previously published "3c" schemes might be problematic.

  7. Applying Formal Verification Techniques to Ambient Assisted Living Systems

    NASA Astrophysics Data System (ADS)

    Benghazi, Kawtar; Visitación Hurtado, María; Rodríguez, María Luisa; Noguera, Manuel

    This paper presents a verification approach based on timed traces semantics and MEDISTAM-RT [1] to check the fulfillment of non-functional requirements, such as timeliness and safety, and assure the correct functioning of the Ambient Assisted Living (AAL) systems. We validate this approach by its application to an Emergency Assistance System for monitoring people suffering from cardiac alteration with syncope.

  8. Extracting the pair distribution function of liquids and liquid-vapor surfaces by grazing incidence x-ray diffraction mode.

    PubMed

    Vaknin, David; Bu, Wei; Travesset, Alex

    2008-07-28

    We show that the structure factor S(q) of water can be obtained from x-ray synchrotron experiments at grazing angles of incidence (in reflection mode) using a liquid surface diffractometer. The corrections used to obtain S(q) self-consistently are described. Applying these corrections to scans at different incident beam angles (above the critical angle) collapses the measured intensities onto a single master curve, without fitting parameters, which within a scale factor yields S(q). Performing the measurements below the critical angle for total reflectivity yields the structure factor of the topmost layers of the water/vapor interface. Our results indicate water restructuring at the vapor/water interface. We also introduce a new approach to extract g(r), the pair distribution function (PDF), by expressing the PDF as a linear sum of error functions whose parameters are refined by applying a nonlinear least-squares fit. This approach enables a straightforward determination of the inherent uncertainties in the PDF. Implications of our results for previously measured and theoretically predicted PDFs are also discussed.

  9. Impact of respiratory motion correction and spatial resolution on lesion detection in PET: a simulation study based on real MR dynamic data

    NASA Astrophysics Data System (ADS)

    Polycarpou, Irene; Tsoumpas, Charalampos; King, Andrew P.; Marsden, Paul K.

    2014-02-01

    The aim of this study is to investigate the impact of respiratory motion correction and spatial resolution on lesion detectability in PET as a function of lesion size and tracer uptake. Real respiratory signals describing different breathing types are combined with a motion model formed from real dynamic MR data to simulate multiple dynamic PET datasets acquired from a continuously moving subject. Lung and liver lesions were simulated with diameters ranging from 6 to 12 mm and lesion-to-background ratios ranging from 3:1 to 6:1. Projection data for 6 and 3 mm PET scanner resolution were generated using analytic simulations and reconstructed without and with motion correction. Motion correction was achieved using motion-compensated image reconstruction. Detectability was quantified by a receiver operating characteristic (ROC) analysis obtained using a channelized Hotelling observer, with the area under the ROC curve (AUC) as the figure of merit. The results indicate that respiratory motion limits the detectability of lung and liver lesions, depending on the variation of the breathing cycle length and amplitude. Patients with large quiescent periods had a greater AUC than patients with regular breathing cycles and patients with long-term variability in respiratory cycle or higher motion amplitude. In addition, small (less than 10 mm diameter) or low contrast (3:1) lesions showed the greatest improvement in AUC as a result of applying motion correction. In particular, after applying motion correction the AUC is improved by up to 42% with current PET resolution (i.e. 6 mm) and up to 51% for higher PET resolution (i.e. 3 mm). Finally, the benefit of increasing the scanner resolution is small unless motion correction is applied. This investigation indicates the high impact of respiratory motion correction on lesion detectability in PET and highlights the importance of motion correction in order to benefit from the increased resolution of future PET scanners.
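For a given set of observer scores, the AUC figure of merit reduces to the Mann-Whitney statistic, the probability that a lesion-present score exceeds a lesion-absent score. A minimal sketch with made-up scores:

```python
import numpy as np

def auc(present, absent):
    """AUC = P(score_present > score_absent) + 0.5 * P(tie),
    the Mann-Whitney form of the area under the ROC curve."""
    present = np.asarray(present)[:, None]
    absent = np.asarray(absent)[None, :]
    wins = (present > absent).mean()
    ties = (present == absent).mean()
    return wins + 0.5 * ties

# Toy observer scores: motion correction should increase the separation
# between lesion-present and lesion-absent scores, raising the AUC.
auc_uncorrected = auc([3.0, 4.0, 5.0], [1.0, 2.0, 3.5])
auc_corrected = auc([4.0, 5.0, 6.0], [1.0, 2.0, 3.0])
```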

  10. Application of London-type dispersion corrections to the solid-state density functional theory simulation of the terahertz spectra of crystalline pharmaceuticals.

    PubMed

    King, Matthew D; Buchanan, William D; Korter, Timothy M

    2011-03-14

    The effects of applying an empirical dispersion correction to solid-state density functional theory methods were evaluated in the simulation of the crystal structure and low-frequency (10 to 90 cm^(-1)) terahertz spectrum of the non-steroidal anti-inflammatory drug naproxen. The naproxen molecular crystal is bound largely by weak London force interactions, as well as by more prominent interactions such as hydrogen bonding, and thus serves as a good model for the assessment of the pair-wise dispersion correction term in systems influenced by intermolecular interactions of various strengths. Modifications to the dispersion parameters were tested both in fully optimized unit cell dimensions and in those determined by X-ray crystallography, with subsequent simulations of the THz spectrum being performed. Use of the unmodified PBE density functional leads to an unrealistic expansion of the unit cell volume and a poor representation of the THz spectrum. Inclusion of a modified dispersion correction enabled a high-quality simulation of the THz spectrum and crystal structure of naproxen to be achieved without the need to artificially constrain the unit cell dimensions.

  11. Nominal SARAL Transfer Function

    NASA Technical Reports Server (NTRS)

    Arnold, David A.; Lemoine, Frank (Editor)

    2015-01-01

    This paper gives a calculation of the range correction and cross section of the SARAL (Satellite with Argos and ALtiKa) Indian/French ocean radar satellite retroreflector array assuming the cube corners are coated and have a dihedral angle offset of about 1.5 arcseconds to account for velocity aberration. The cubes are assumed to all have the same orientation within the mounting. The derived range correction may be applied in precise orbit determination analyses that use Satellite Laser Ranging (SLR) data to SARAL.

  12. Effect of tubing length on the dispersion correction of an arterially sampled input function for kinetic modeling in PET.

    PubMed

    O'Doherty, Jim; Chilcott, Anna; Dunn, Joel

    2015-11-01

    Arterial sampling with dispersion correction is routinely performed for kinetic analysis of PET studies. With the advent of PET-MRI systems, non-MR-safe instrumentation must be kept outside the scan room, which requires the length of the tubing between the patient and the detector to increase, worsening the effects of dispersion. We examined the effects of dispersion in idealized radioactive blood studies using various lengths of tubing (1.5, 3, and 4.5 m) and applied a well-known transmission-dispersion model to correct the resulting traces. A simulation study was also carried out to examine the noise characteristics of the model. The model was applied to patient traces acquired with 1.5 m tubing and extended to its use at 3 m. Satisfactory dispersion correction of the blood traces was achieved for the 1.5 m line. Predictions based on experimental measurements, numerical simulations, and noise analysis of the resulting traces show that corrections of blood data can also be achieved using the 3 m tubing. The effects of dispersion could not be corrected for the 4.5 m line by the selected transmission-dispersion model. On the basis of our setup, dispersion in arterial sampling tubing up to 3 m can be corrected by the transmission-dispersion model; the model could not dispersion-correct data acquired using 4.5 m arterial tubing.
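A common monoexponential form of dispersion (used here as a sketch; the paper's exact transmission-dispersion model may differ) can be inverted analytically: if the measured trace is the true input function convolved with (1/tau)*exp(-t/tau), then true(t) = measured(t) + tau * d(measured)/dt. The curve shape and time constant below are illustrative.

```python
import numpy as np

dt, tau = 0.005, 2.0
t = np.arange(0.0, 20.0, dt)
true = t ** 2 * np.exp(-t)       # idealized arterial input curve (made up)

# Disperse: convolve with the monoexponential dispersion kernel
kernel = np.exp(-t / tau) / tau
measured = np.convolve(true, kernel)[: t.size] * dt

# Correct: invert the convolution with the derivative formula
corrected = measured + tau * np.gradient(measured, dt)
```

Longer tubing corresponds to a larger effective tau, so the derivative term amplifies noise more strongly, which is one intuition for why correction fails beyond some tubing length.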

  13. A symmetric multivariate leakage correction for MEG connectomes

    PubMed Central

    Colclough, G.L.; Brookes, M.J.; Smith, S.M.; Woolrich, M.W.

    2015-01-01

    Ambiguities in the source reconstruction of magnetoencephalographic (MEG) measurements can cause spurious correlations between estimated source time-courses. In this paper, we propose a symmetric orthogonalisation method to correct for these artificial correlations between a set of multiple regions of interest (ROIs). This process enables the straightforward application of network modelling methods, including partial correlation or multivariate autoregressive modelling, to infer connectomes, or functional networks, from the corrected ROIs. Here, we apply the correction to simulated MEG recordings of simple networks and to a resting-state dataset collected from eight subjects, before computing the partial correlations between power envelopes of the corrected ROI time-courses. We show accurate reconstruction of our simulated networks, and in the analysis of real MEG resting-state connectivity, we find dense bilateral connections within the motor and visual networks, together with longer-range direct fronto-parietal connections. PMID:25862259
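The core of a symmetric (non-sequential) orthogonalisation can be sketched with an SVD: the orthogonal Procrustes solution gives the closest set of mutually orthogonal time-courses to the leaky ones, treating all ROIs on an equal footing. The published method additionally iterates a per-column rescaling, which is omitted here; the data and leakage matrix are synthetic.

```python
import numpy as np

def symmetric_orthogonalise(x):
    """Closest mutually orthogonal time-courses to the columns of x
    (time x ROIs), via the orthogonal Procrustes solution U @ Vt."""
    u, s, vt = np.linalg.svd(x, full_matrices=False)
    return u @ vt

rng = np.random.default_rng(1)
src = rng.standard_normal((1000, 3))            # "true" ROI time-courses
mix = np.array([[1.0, 0.3, 0.0],
                [0.0, 1.0, 0.2],
                [0.1, 0.0, 1.0]])
leak = src @ mix                                # artificial linear leakage
clean = symmetric_orthogonalise(leak)
gram = clean.T @ clean                          # identity: zero correlations
```

After the correction, any remaining connectivity between envelope time-courses cannot be explained by zero-lag linear leakage.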

  14. Nonlinear responses of chiral fluids from kinetic theory

    NASA Astrophysics Data System (ADS)

    Hidaka, Yoshimasa; Pu, Shi; Yang, Di-Lun

    2018-01-01

    The second-order nonlinear responses of inviscid chiral fluids near local equilibrium are investigated by applying the chiral kinetic theory (CKT) incorporating side-jump effects. It is shown that the local equilibrium distribution function can be nontrivially introduced in a comoving frame with respect to the fluid velocity when the quantum corrections in collisions are involved. For the study of anomalous transport, contributions from both quantum corrections in anomalous hydrodynamic equations of motion and those from the CKT and Wigner functions are considered under the relaxation-time (RT) approximation, which result in anomalous charge Hall currents propagating along the cross product of the background electric field and the temperature (or chemical-potential) gradient and of the temperature and chemical-potential gradients. On the other hand, the nonlinear quantum correction on the charge density vanishes in the classical RT approximation, which in fact satisfies the matching condition given by the anomalous equation obtained from the CKT.

  15. Chiropractic biophysics technique: a linear algebra approach to posture in chiropractic.

    PubMed

    Harrison, D D; Janik, T J; Harrison, G R; Troyanovich, S; Harrison, D E; Harrison, S O

    1996-10-01

    This paper discusses linear algebra as applied to human posture in chiropractic, specifically chiropractic biophysics technique (CBP). Rotations, reflections and translations are geometric functions studied in vector spaces in linear algebra. These mathematical functions are termed rigid body transformations and are applied to segmental spinal movement in the literature. Review of the literature indicates that these linear algebra concepts have been used to describe vertebral motion. However, these rigid body movers are presented here as applying to the global postural movements of the head, thoracic cage and pelvis. The unique inverse functions of rotations, reflections and translations provide a theoretical basis for making postural corrections in neutral static resting posture. Chiropractic biophysics technique (CBP) uses these concepts in examination procedures, manual spinal manipulation, instrument assisted spinal manipulation, postural exercises, extension traction and clinical outcome measures.
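The rigid-body transformations mentioned above, and the role of their inverse functions in making postural corrections, can be illustrated numerically in two dimensions (a hypothetical sketch, not CBP's clinical procedure):

```python
import numpy as np

def rotation_2d(theta):
    """2-D rotation matrix; its inverse is the rotation by -theta."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

# hypothetical landmark displaced by a rotation followed by a translation
p = np.array([1.0, 0.0])
R = rotation_2d(np.deg2rad(30.0))
t = np.array([0.5, -0.2])
displaced = R @ p + t

# the corrective move applies the inverse functions in reverse order
restored = rotation_2d(-np.deg2rad(30.0)) @ (displaced - t)
```

Because each rigid-body transformation has a unique inverse, the corrective sequence recovers the original neutral position exactly.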

  16. Improving the local wavenumber method by automatic DEXP transformation

    NASA Astrophysics Data System (ADS)

    Abbas, Mahmoud Ahmed; Fedi, Maurizio; Florio, Giovanni

    2014-12-01

    In this paper we present a new method for source parameter estimation based on the local wavenumber function. We make use of the stable properties of the Depth from EXtreme Points (DEXP) method, in which the depth to the source is determined at the extreme points of the field scaled with a power law of the altitude. The method is thus particularly suited to high-order local wavenumbers, as it overcomes the known instability caused by the use of high-order derivatives. The DEXP transformation has a notable feature when applied to the local wavenumber function: its scaling law is independent of the structural index. So, unlike the DEXP transformation applied directly to potential fields, the local wavenumber DEXP transformation is fully automatic and may be implemented as a very fast imaging method, mapping every kind of source at the correct depth. The simultaneous presence of sources with different homogeneity degrees can also be treated easily and correctly. The method was applied to synthetic examples and to real examples from Bulgaria and Italy, and the results agree well with known information about the causative sources.

  17. Peptidic tools applied to redirect alternative splicing events.

    PubMed

    Nancy, Martínez-Montiel; Nora, Rosas-Murrieta; Rebeca, Martínez-Contreras

    2015-05-01

    Peptides are versatile and attractive biomolecules that can be applied to modulate genetic mechanisms like alternative splicing. In this process, a single transcript yields different mature RNAs leading to the production of protein isoforms with diverse or even antagonistic functions. During splicing events, errors can be caused either by mutations present in the genome or by defects or imbalances in regulatory protein factors. In any case, defects in alternative splicing have been related to several genetic diseases including muscular dystrophy, Alzheimer's disease and cancer from almost every origin. One of the most effective approaches to redirect alternative splicing events has been to attach cell-penetrating peptides to oligonucleotides that can modulate a single splicing event and restore correct gene expression. Here, we summarize how natural existing and bioengineered peptides have been applied over the last few years to regulate alternative splicing and genetic expression. Under different genetic and cellular backgrounds, peptides have been shown to function as potent vehicles for splice correction, and their therapeutic benefits have reached clinical trials and patenting stages, emphasizing the use of regulatory peptides as an exciting therapeutic tool for the treatment of different genetic diseases. Copyright © 2015 Elsevier Inc. All rights reserved.

  18. Density-functional theory applied to d- and f-electron systems

    NASA Astrophysics Data System (ADS)

    Wu, Xueyuan

    Density functional theory (DFT) has been applied to study the electronic and geometric structures of prototype d- and f-electron systems. For the d-electron system, all electron DFT with gradient corrections to the exchange and correlation functionals has been used to investigate the properties of small neutral and cationic vanadium clusters. Results are in good agreement with available experimental and other theoretical data. For the f-electron system, a hybrid DFT, namely, B3LYP (Becke's 3-parameter hybrid functional using the correlation functional of Lee, Yang and Parr) with relativistic effective core potentials and cluster models has been applied to investigate the nature of chemical bonding of both the bulk and the surfaces of plutonium monoxide and dioxide. Using periodic models, the electronic and geometric structures of PuO2 and its (110) surface, as well as water adsorption on this surface have also been investigated using DFT in both local density approximation (LDA) and generalized gradient approximation (GGA) formalisms.

  19. [Correction of respiratory movement using ultrasound for cardiac nuclear medicine examinations: fundamental study using an X-ray TV machine].

    PubMed

    Yoda, Kazushige; Umeda, Tokuo; Hasegawa, Tomoyuki

    2003-11-01

    Organ movements that occur naturally as a result of vital functions such as respiration and heartbeat cause deterioration of image quality in nuclear medicine imaging. Among these movements, respiration has a large effect, but there has been no practical method of correcting for this. In the present study, we examined a method of correction that uses ultrasound images to correct baseline shifts caused by respiration in cardiac nuclear medicine examinations. To evaluate the validity of this method, simulation studies were conducted with an X-ray TV machine instead of a nuclear medicine scanner. The X-ray TV images and ultrasound images were recorded as digital movies and processed with public domain software (Scion Image). Organ movements were detected in the ultrasound images of the subcostal four-chamber view mode using slit regions of interest and were measured in a two-dimensional image coordinate system. Then translational shifts were applied to the X-ray TV images to correct these movements by using macro-functions of the software. As a result, respiratory movements of about 20.1 mm were successfully reduced to less than 2.6 mm. We conclude that this correction technique is potentially useful in nuclear medicine cardiology.
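The translational-shift correction described above can be sketched with integer pixel shifts (a simplified illustration; the study used macro functions in Scion Image rather than the hypothetical helper below):

```python
import numpy as np

def correct_translation(frame, shift_rows, shift_cols):
    """Undo a measured rigid in-plane translation by shifting the frame back.
    Integer pixel shifts only; edges wrap, which is acceptable for small
    baseline shifts in a padded field of view."""
    return np.roll(frame, (-shift_rows, -shift_cols), axis=(0, 1))

frame = np.zeros((8, 8))
frame[2, 3] = 1.0                                  # a bright marker
moved = np.roll(frame, (1, 2), axis=(0, 1))        # simulated respiratory shift
corrected = correct_translation(moved, 1, 2)       # shift measured from ultrasound
```

In the actual setup the (shift_rows, shift_cols) pair would come from the displacement measured in the ultrasound images, frame by frame.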

  20. Fuzzy cluster analysis of simple physicochemical properties of amino acids for recognizing secondary structure in proteins.

    PubMed Central

    Mocz, G.

    1995-01-01

    Fuzzy cluster analysis has been applied to the 20 amino acids by using 65 physicochemical properties as a basis for classification. The clustering products, the fuzzy sets (i.e., classical sets with associated membership functions), have provided a new measure of amino acid similarities for use in protein folding studies. This work demonstrates that fuzzy sets of simple molecular attributes, when assigned to amino acid residues in a protein's sequence, can predict the secondary structure of the sequence with reasonable accuracy. An approach is presented for discriminating standard folding states, using near-optimum information splitting in half-overlapping segments of the sequence of assigned membership functions. The method is applied to a nonredundant set of 252 proteins and yields approximately 73% matching for correctly predicted and correctly rejected residues with approximately 60% overall success rate for the correctly recognized ones in three folding states: alpha-helix, beta-strand, and coil. The most useful attributes for discriminating these states appear to be related to size, polarity, and thermodynamic factors. Van der Waals volume, apparent average thickness of surrounding molecular free volume, and a measure of dimensionless surface electron density can explain approximately 95% of prediction results. Hydrogen bonding and hydrophobicity indices do not yet enable clear clustering and prediction. PMID:7549882
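Fuzzy clustering of property vectors is commonly implemented as fuzzy c-means, in which each item receives a graded membership in every cluster rather than a hard assignment. A minimal sketch (standard fuzzy c-means, not necessarily the exact variant used in the paper):

```python
import numpy as np

def fuzzy_c_means(X, c, m=2.0, n_iter=100, seed=0):
    """Minimal fuzzy c-means: returns memberships U (n x c) and centres."""
    rng = np.random.default_rng(seed)
    U = rng.random((X.shape[0], c))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(n_iter):
        W = U ** m                                      # fuzzified weights
        centres = (W.T @ X) / W.sum(axis=0)[:, None]    # weighted means
        d = np.linalg.norm(X[:, None, :] - centres[None, :, :], axis=2) + 1e-12
        inv = d ** (-2.0 / (m - 1.0))                   # membership update
        U = inv / inv.sum(axis=1, keepdims=True)
    return U, centres

# two well-separated groups of "property vectors"
X = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1],
              [5.0, 5.0], [5.1, 5.0], [5.0, 5.1]])
U, centres = fuzzy_c_means(X, c=2)
```

Each row of U sums to one; the membership grades play the role of the membership functions assigned to residues in the abstract.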

  1. Validation of drift and diffusion coefficients from experimental data

    NASA Astrophysics Data System (ADS)

    Riera, R.; Anteneodo, C.

    2010-04-01

    Many fluctuation phenomena, in physics and other fields, can be modeled by Fokker-Planck or stochastic differential equations whose coefficients, associated with drift and diffusion components, may be estimated directly from the observed time series. Their correct characterization is crucial to determine the system quantifiers. However, due to the finite sampling rates of real data, the empirical estimates may significantly differ from their true functional forms. In the literature, low-order corrections, or even no corrections, have been applied to the finite-time estimates. A frequent outcome consists of linear drift and quadratic diffusion coefficients. For this case, exact corrections have been recently found, from Itô-Taylor expansions. Nevertheless, model validation constitutes a necessary step before determining and applying the appropriate corrections. Here, we exploit the consequences of the exact theoretical results obtained for the linear-quadratic model. In particular, we discuss whether the observed finite-time estimates are actually a manifestation of that model. The relevance of this analysis is put into evidence by its application to two contrasting real data examples in which finite-time linear drift and quadratic diffusion coefficients are observed. In one case the linear-quadratic model is readily rejected while in the other, although the model constitutes a very good approximation, low-order corrections are inappropriate. These examples give warning signs about the proper interpretation of finite-time analysis even in more general diffusion processes.
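The finite-time estimates discussed above are conditional moments of the increments: drift ≈ ⟨Δx | x⟩/Δt and diffusion ≈ ⟨Δx² | x⟩/(2Δt). A minimal sketch for a simulated linear-drift, constant-diffusion process (parameters, binning, and the Euler simulation are all illustrative):

```python
import numpy as np

# simulate dX = -a X dt + sqrt(2 D) dW: linear drift, constant diffusion
rng = np.random.default_rng(1)
a, D, dt, n = 1.0, 0.5, 1e-3, 200_000
noise = np.sqrt(2.0 * D * dt) * rng.standard_normal(n)
x = np.empty(n)
x[0] = 0.0
for i in range(1, n):
    x[i] = x[i - 1] * (1.0 - a * dt) + noise[i]

# naive finite-time estimators: conditional moments of the increments
dx = np.diff(x)
bins = np.linspace(-1.5, 1.5, 13)
idx = np.digitize(x[:-1], bins)
centres, d1, d2 = [], [], []
for b in range(1, len(bins)):
    sel = idx == b
    if sel.sum() > 500:
        centres.append(0.5 * (bins[b - 1] + bins[b]))
        d1.append(dx[sel].mean() / dt)                 # drift estimate
        d2.append((dx[sel] ** 2).mean() / (2 * dt))    # diffusion estimate

slope = np.polyfit(centres, d1, 1)[0]   # should approximate -a
```

At this small Δt the naive estimates are close to the true coefficients; the finite-sampling distortions the paper analyses grow as Δt increases.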

  2. Vascular input function correction of inflow enhancement for improved pharmacokinetic modeling of liver DCE-MRI.

    PubMed

    Ning, Jia; Schubert, Tilman; Johnson, Kevin M; Roldán-Alzate, Alejandro; Chen, Huijun; Yuan, Chun; Reeder, Scott B

    2018-06-01

    To propose a simple method to correct vascular input function (VIF) due to inflow effects and to test whether the proposed method can provide more accurate VIFs for improved pharmacokinetic modeling. A spoiled gradient echo sequence-based inflow quantification and contrast agent concentration correction method was proposed. Simulations were conducted to illustrate improvement in the accuracy of VIF estimation and pharmacokinetic fitting. Animal studies with dynamic contrast-enhanced MR scans were conducted before, 1 week after, and 2 weeks after portal vein embolization (PVE) was performed in the left portal circulation of pigs. The proposed method was applied to correct the VIFs for model fitting. Pharmacokinetic parameters fitted using corrected and uncorrected VIFs were compared between different lobes and visits. Simulation results demonstrated that the proposed method can improve accuracy of VIF estimation and pharmacokinetic fitting. In animal study results, pharmacokinetic fitting using corrected VIFs demonstrated changes in perfusion consistent with changes expected after PVE, whereas the perfusion estimates derived by uncorrected VIFs showed no significant changes. The proposed correction method improves accuracy of VIFs and therefore provides more precise pharmacokinetic fitting. This method may be promising in improving the reliability of perfusion quantification. Magn Reson Med 79:3093-3102, 2018. © 2017 International Society for Magnetic Resonance in Medicine.

  3. On-sky Closed-loop Correction of Atmospheric Dispersion for High-contrast Coronagraphy and Astrometry

    NASA Astrophysics Data System (ADS)

    Pathak, P.; Guyon, O.; Jovanovic, N.; Lozi, J.; Martinache, F.; Minowa, Y.; Kudo, T.; Kotani, T.; Takami, H.

    2018-02-01

    Adaptive optics (AO) systems delivering high levels of wavefront correction are now common at observatories. One of the main limitations to image quality after wavefront correction comes from atmospheric refraction. An atmospheric dispersion compensator (ADC) is employed to correct for atmospheric refraction. The correction is applied based on a look-up table consisting of dispersion values as a function of telescope elevation angle. The look-up table-based correction of atmospheric dispersion results in imperfect compensation leading to the presence of residual dispersion in the point spread function (PSF) and is insufficient when sub-milliarcsecond precision is required. The presence of residual dispersion can limit the achievable contrast while employing high-performance coronagraphs or can compromise high-precision astrometric measurements. In this paper, we present the first on-sky closed-loop correction of atmospheric dispersion by directly using science path images. The concept behind the measurement of dispersion utilizes the chromatic scaling of focal plane speckles. An adaptive speckle grid generated with a deformable mirror (DM) that has a sufficiently large number of actuators is used to accurately measure the residual dispersion and subsequently correct it by driving the ADC. We have demonstrated with the Subaru Coronagraphic Extreme AO (SCExAO) system on-sky closed-loop correction of residual dispersion to <1 mas across H-band. This work will aid in the direct detection of habitable exoplanets with upcoming extremely large telescopes (ELTs) and also provide a diagnostic tool to test the performance of instruments which require sub-milliarcsecond correction.

  4. Correctional officers' perceptions of a solution-focused training program: potential implications for working with offenders.

    PubMed

    Pan, Peter Jen Der; Deng, Liang-Yu F; Chang, Shona Shih Hua; Jiang, Karen Jye-Ru

    2011-09-01

    The purpose of this exploratory study was to explore correctional officers' perceptions and experiences during a solution-focused training program and to initiate development of a modified pattern for correctional officers to use in jails. The study uses grounded theory procedures combined with a follow-up survey. The findings identified six emergent themes: obstacles to doing counseling work in prisons, offenders' amenability to change, correctional officers' self-image, advantages of a solution-focused approach (SFA), potential advantages of applying SFA to offenders, and the need for the consolidation of learning and transformation. Participants perceived the use of solution-focused techniques as appropriate, important, functional, and of only moderate difficulty in interacting with offenders. Finally, a modified pattern was developed for officers to use when working with offenders in jails. Suggestions and recommendations are made for correctional interventions and future studies.

  5. Adjustment of spatio-temporal precipitation patterns in a high Alpine environment

    NASA Astrophysics Data System (ADS)

    Herrnegger, Mathew; Senoner, Tobias; Nachtnebel, Hans-Peter

    2018-01-01

    This contribution presents a method for correcting the spatial and temporal distribution of precipitation fields in a mountainous environment. The approach is applied within a flood forecasting model in the Upper Enns catchment in the Central Austrian Alps. Precipitation exhibits a large spatio-temporal variability in Alpine areas. Additionally the density of the monitoring network is low and measurements are subjected to major errors. This can lead to significant deficits in water balance estimation and stream flow simulations, e.g. for flood forecasting models. Therefore precipitation correction factors are frequently applied. For the presented study a multiplicative, stepwise linear correction model is implemented in the rainfall-runoff model COSERO to adjust the precipitation pattern as a function of elevation. To account for the local meteorological conditions, the correction model is derived for two elevation zones: (1) Valley floors to 2000 m a.s.l. and (2) above 2000 m a.s.l. to mountain peaks. Measurement errors also depend on the precipitation type, with higher magnitudes in winter months during snow fall. Therefore, additionally, separate correction factors for winter and summer months are estimated. Significant improvements in the runoff simulations could be achieved, not only in the long-term water balance simulation and the overall model performance, but also in the simulation of flood peaks.
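A stepwise multiplicative correction of the kind described above reduces to a small lookup keyed on elevation zone and season. A sketch with purely illustrative factors (the calibrated COSERO values are not given in the abstract):

```python
def precip_correction_factor(elevation_m, month):
    """Stepwise multiplicative precipitation correction with two elevation
    zones (split at 2000 m a.s.l.) and a winter/summer split. The factor
    values here are purely illustrative, not the calibrated ones."""
    winter = month in (11, 12, 1, 2, 3, 4)
    if elevation_m <= 2000:
        return 1.15 if winter else 1.05
    return 1.35 if winter else 1.15

# a 10 mm gauge reading at 2500 m a.s.l. in January
corrected = 10.0 * precip_correction_factor(2500, 1)
```

The larger winter factor reflects the greater gauge undercatch during snowfall noted in the abstract.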

  6. A Model-Based Approach for Microvasculature Structure Distortion Correction in Two-Photon Fluorescence Microscopy Images

    PubMed Central

    Dao, Lam; Glancy, Brian; Lucotte, Bertrand; Chang, Lin-Ching; Balaban, Robert S; Hsu, Li-Yueh

    2015-01-01

    This paper investigates a post-processing approach to correct spatial distortion in two-photon fluorescence microscopy images for vascular network reconstruction. It is aimed at in vivo imaging of large field-of-view, deep-tissue studies of vascular structures. Based on simple geometric modeling of the object-of-interest, a distortion function is directly estimated from the image volume by deconvolution analysis. This distortion function is then applied to sub-volumes of the image stack to adaptively adjust for spatially varying distortion and reduce the image blurring through blind deconvolution. The proposed technique was first evaluated in phantom imaging of fluorescent microspheres that are comparable in size to the underlying capillary vascular structures. The effectiveness of restoring the three-dimensional spherical geometry of the microspheres using the estimated distortion function was compared with an empirically measured point-spread function. Next, the proposed approach was applied to in vivo vascular imaging of mouse skeletal muscle to reduce the image distortion of the capillary structures. We show that the proposed method effectively improves image quality and reduces the spatially varying distortion that occurs in large field-of-view, deep-tissue vascular datasets. The proposed method will help in qualitative interpretation and quantitative analysis of vascular structures from fluorescence microscopy images. PMID:26224257

  7. Satellite clock corrections estimation to accomplish real time ppp: experiments for brazilian real time network

    NASA Astrophysics Data System (ADS)

    Marques, Haroldo; Monico, João; Aquino, Marcio; Melo, Weyller

    2014-05-01

    The real time PPP method requires the availability of real time precise orbits and satellite clock corrections. Currently, it is possible to apply the clock and orbit solutions made available by the Federal Agency for Cartography and Geodesy (BKG) within the context of the IGS real-time Pilot Project, or to use the operational predicted IGU ephemerides. The accuracy of the satellite positions available in the IGU products is sufficient for several applications requiring good quality. However, the satellite clock corrections do not provide enough accuracy (3 ns ~ 0.9 m) to accomplish real time PPP at the same level of accuracy. Therefore, for real time PPP applications it is necessary to research and develop appropriate methodologies for estimating the satellite clock corrections in real time with better accuracy. The BKG corrections are disseminated in a newly proposed RTCM 3.x format and can be applied to the broadcast orbits and clocks. Some investigations have proposed estimating the satellite clock corrections using GNSS code and phase observables at the double-difference level between satellites and epochs (MERVAT, DOUSA, 2007). Another possibility consists of applying a Kalman filter in the network PPP mode (HAUSCHILD, 2010), and it is also possible to integrate both methods, using network PPP and observables at the double-difference level in specific time intervals (ZHANG; LI; GUO, 2010). For this work the methodology adopted consists in the estimation of the satellite clock corrections based on data adjustment in the PPP mode, but for a network of GNSS stations. The clock solution can be obtained using two types of observables: code smoothed by carrier phase, or undifferenced code together with carrier phase. 
In the former, we estimate receiver clock error, satellite clock correction and troposphere, considering that the phase ambiguities are eliminated when applying differences between consecutive epochs. However, when using undifferenced code and phase, the ambiguities must be estimated together with receiver clock errors, satellite clock corrections and troposphere parameters. In both strategies it is also possible to correct the troposphere delay from a numerical weather forecast model instead of estimating it. The prediction of the satellite clock correction can be performed by fitting a straight line or a second-degree polynomial to the time series of the estimated satellite clocks. To estimate the satellite clock corrections and to accomplish real time PPP, two pieces of software have been developed: "RT_PPP" and "RT_SAT_CLOCK". The former is able to process GNSS code and phase data using precise ephemerides and precise satellite clock corrections together with the several corrections required for PPP. In RT_SAT_CLOCK we apply a Kalman filter algorithm to estimate the satellite clock corrections in the network PPP mode; in this case, all PPP corrections must be applied for each station. The experiments were generated in real time and in post-processed mode (simulating real time) considering data from the Brazilian continuous GPS network and also from the IGS network in a global satellite clock solution. We used IGU ephemerides for the satellite positions and estimated the satellite clock corrections, performing updates as soon as new ephemeris files became available. Experiments were accomplished in order to assess the accuracy of the estimated clocks when using the Brazilian Numerical Weather Forecast Model (BNWFM) from CPTEC/INPE, when using the ZTD from the European Centre for Medium-Range Weather Forecasts (ECMWF) together with the Vienna Mapping Function (VMF), or when estimating the troposphere together with clocks and ambiguities in the Kalman filter. 
The daily precision of the estimated satellite clock corrections reached the order of 0.15 nanoseconds. The clocks were applied in the Real Time PPP for Brazilian network stations and also for flight test of the Brazilian airplanes and the results show that it is possible to accomplish real time PPP in the static and kinematic modes with accuracy of the order of 10 to 20 cm, respectively.
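The polynomial clock prediction mentioned above (a straight line or a second-degree polynomial fitted to the estimated clock series) can be sketched with a synthetic series; the offset, drift, and aging values below are illustrative, not real satellite clock data:

```python
import numpy as np

# synthetic series of estimated satellite clock corrections (seconds):
# offset + drift + a small quadratic term (values are illustrative)
t = np.arange(0.0, 300.0, 30.0)              # estimation epochs, s
clk = 1e-6 + 2e-9 * t + 1e-13 * t ** 2

coeffs = np.polyfit(t, clk, 2)               # second-degree fit, as in the text
predicted = np.polyval(coeffs, 330.0)        # extrapolate one epoch ahead
```

In operation the fit would be refreshed each time a new clock estimate arrives, and the extrapolated value used until then by the real time PPP user.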

  8. Including screening in van der Waals corrected density functional theory calculations: the case of atoms and small molecules physisorbed on graphene.

    PubMed

    Silvestrelli, Pier Luigi; Ambrosetti, Alberto

    2014-03-28

    The Density Functional Theory (DFT)/van der Waals-Quantum Harmonic Oscillator-Wannier function (vdW-QHO-WF) method, recently developed to include the vdW interactions in approximated DFT by combining the quantum harmonic oscillator model with the maximally localized Wannier function technique, is applied to the cases of atoms and small molecules (X=Ar, CO, H2, H2O) weakly interacting with benzene and with the ideal planar graphene surface. Comparison is also presented with the results obtained by other DFT vdW-corrected schemes, including PBE+D, vdW-DF, vdW-DF2, rVV10, and by the simpler Local Density Approximation (LDA) and semilocal generalized gradient approximation approaches. While for the X-benzene systems all the considered vdW-corrected schemes perform reasonably well, it turns out that an accurate description of the X-graphene interaction requires a proper treatment of many-body contributions and of short-range screening effects, as demonstrated by adopting an improved version of the DFT/vdW-QHO-WF method. We also comment on the widespread attitude of relying on LDA to get a rough description of weakly interacting systems.

  9. Cognitive Diagnostic Attribute-Level Discrimination Indices

    ERIC Educational Resources Information Center

    Henson, Robert; Roussos, Louis; Douglas, Jeff; He, Xuming

    2008-01-01

    Cognitive diagnostic models (CDMs) model the probability of correctly answering an item as a function of an examinee's attribute mastery pattern. Because estimation of the mastery pattern involves more than a continuous measure of ability, reliability concepts introduced by classical test theory and item response theory do not apply. The cognitive…
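A common concrete instance of a CDM is the DINA model, in which the probability of a correct answer depends on whether the examinee masters every attribute the item requires, through per-item slip and guess parameters. A sketch (DINA is an assumption here; the abstract does not name a specific model):

```python
def dina_p_correct(alpha, q, slip, guess):
    """DINA item response: alpha is the examinee's attribute mastery
    pattern, q the item's required attributes (both 0/1 sequences).
    P(correct) = 1 - slip if all required attributes are mastered,
    else guess."""
    eta = all(a >= r for a, r in zip(alpha, q))
    return (1.0 - slip) if eta else guess

p_master = dina_p_correct((1, 1, 0), (1, 1, 0), slip=0.1, guess=0.2)
p_nonmaster = dina_p_correct((1, 0, 0), (1, 1, 0), slip=0.1, guess=0.2)
```

Because the latent variable is a discrete mastery pattern rather than a continuous ability, classical reliability coefficients do not transfer, which motivates the attribute-level discrimination indices the article develops.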

  10. Asymptotic One-Point Functions in Gauge-String Duality with Defects.

    PubMed

    Buhl-Mortensen, Isak; de Leeuw, Marius; Ipsen, Asger C; Kristjansen, Charlotte; Wilhelm, Matthias

    2017-12-29

    We take the first step in extending the integrability approach to one-point functions in AdS/dCFT to higher loop orders. More precisely, we argue that the formula encoding all tree-level one-point functions of SU(2) operators in the defect version of N=4 supersymmetric Yang-Mills theory, dual to the D5-D3 probe-brane system with flux, has a natural asymptotic generalization to higher loop orders. The asymptotic formula correctly encodes the information about the one-loop correction to the one-point functions of nonprotected operators once dressed by a simple flux-dependent factor, as we demonstrate by an explicit computation involving a novel object denoted as an amputated matrix product state. Furthermore, when applied to the Berenstein-Maldacena-Nastase vacuum state, the asymptotic formula gives a result for the one-point function which in a certain double-scaling limit agrees with that obtained in the dual string theory up to wrapping order.

  11. [PREPARATIONS OF PAMIDRONOVIC ACID IN COMPLEX TREATMENT ON OSTEOGENESIS IMPERFECTA].

    PubMed

    Zyma, A M; Guk, Yu M; Magomedov, O M; Gayko, O G; Kincha-Polishchuk, T A

    2015-07-01

    A modern view of drug therapy in the complex treatment of the orthopedic manifestations of osteogenesis imperfecta (OI) is presented. A system of drug correction of the structural and functional state of bone tissue (BT) using pamidronic acid preparations was developed and tested, depending on the severity of osteoporosis and the type of disease. Such therapy is appropriate both on its own and in conjunction with surgery to correct deformities of the long bones of the lower extremities. The effectiveness and feasibility of the proposed methods of drug therapy were demonstrated; most patients regained walking and support function.

  12. Metallicity-Corrected Tip of the Red Giant Branch Distances to M66 and M96

    NASA Astrophysics Data System (ADS)

    Mager, Violet; Madore, Barry F.; Freedman, Wendy L.

    2018-06-01

    We present distances to M66 and M96 obtained through measurements of the tip of the red giant branch (TRGB) in HST ACS/WFC images, and give details of our method. The TRGB can be difficult to determine in color-magnitude diagrams where it is not a hard, well-defined edge. We discuss our approach to this in our edge-detection algorithm. Furthermore, metals affect the magnitude of the TRGB as a function of color, creating a slope to the edge that has been dealt with in the past by applying a red color cut-off. We instead apply a metallicity correction to the data that removes this effect, increasing the number of useable stars and providing a more accurate distance measurement.
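Tip detection on a binned luminosity function is often done with a first-difference (edge-detection) filter: the TRGB appears as a sharp jump in star counts at the tip magnitude. A simplified synthetic sketch, without the metallicity correction or the authors' specific edge-detection algorithm:

```python
import numpy as np

rng = np.random.default_rng(2)
trgb = 26.3                                   # true tip magnitude (synthetic)
mags = np.concatenate([
    rng.uniform(24.0, trgb, 500),             # sparse AGB stars above the tip
    rng.uniform(trgb, 28.0, 5000),            # dense RGB stars below the tip
])

bin_edges = np.arange(24.0, 28.0, 0.05)
counts, edges = np.histogram(mags, bins=bin_edges)
response = np.diff(counts)                    # first-difference edge filter
tip_estimate = edges[np.argmax(response) + 1]
```

The metallicity correction described in the abstract would be applied to each star's magnitude, as a function of color, before binning, flattening the sloped tip into a single sharp edge.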

  13. A reference tristimulus colorimeter

    NASA Astrophysics Data System (ADS)

    Eppeldauer, George P.

    2002-06-01

    A reference tristimulus colorimeter has been developed at NIST with a transmission-type silicon trap detector (1) and four temperature-controlled filter packages to realize the Commission Internationale de l'Eclairage (CIE) x(λ), y(λ) and z(λ) color matching functions (2). Instead of lamp standards, high accuracy detector standards are used for the colorimeter calibration. A detector-based calibration procedure is suggested for tristimulus colorimeters, where the absolute spectral responsivity of the tristimulus channels is determined. Then, color (spectral) correction and peak (amplitude) normalization are applied to minimize uncertainties caused by the imperfect realizations of the CIE functions. As a result of the corrections, the chromaticity coordinates of stable light sources with different spectral power distributions can be measured with uncertainties less than 0.0005 (k=1).
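The chromaticity coordinates follow directly from the measured tristimulus values, x = X/(X+Y+Z) and y = Y/(X+Y+Z). A minimal check using the standard CIE D65 white point:

```python
def chromaticity(X, Y, Z):
    """CIE 1931 chromaticity coordinates from tristimulus values."""
    s = X + Y + Z
    return X / s, Y / s

# CIE standard illuminant D65 tristimulus values (Y normalised to 100)
x, y = chromaticity(95.047, 100.0, 108.883)   # roughly (0.3127, 0.3290)
```

The 0.0005 uncertainty quoted in the abstract refers to these (x, y) coordinates.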

  14. Can quantile mapping improve precipitation extremes from regional climate models?

    NASA Astrophysics Data System (ADS)

    Tani, Satyanarayana; Gobiet, Andreas

    2015-04-01

    The ability of quantile mapping to accurately bias-correct precipitation extremes is investigated in this study. We developed new methods by extending standard quantile mapping (QMα) to improve the quality of bias-corrected extreme precipitation events as simulated by regional climate model (RCM) output. The new QM version (QMβ) was developed by combining parametric and nonparametric bias correction methods. The new nonparametric method is tested with and without a controlling shape parameter (QMβ1 and QMβ0, respectively). Bias corrections are applied to hindcast simulations for a small ensemble of RCMs at six different locations over Europe. We examined the quality of the extremes through split-sample and cross-validation approaches for these three bias correction methods. The split-sample approach mimics the application to future climate scenarios. A cross-validation framework with particular focus on new extremes was developed. Error characteristics, q-q plots and Mean Absolute Error (MAEx) skill scores are used for evaluation. We demonstrate the unstable behaviour of the correction function at higher quantiles with QMα, whereas the correction functions for QMβ0 and QMβ1 are smoother, with QMβ1 providing the most reasonable correction values. The q-q plots demonstrate that all bias correction methods are capable of producing new extremes, but QMβ1 reproduces new extremes with low biases in all seasons compared to QMα and QMβ0. Our results clearly demonstrate the inherent limitations of empirical bias correction methods employed for extremes, particularly new extremes, and our findings reveal that the new bias correction method (QMβ1) produces more reliable climate scenarios for new extremes. These findings present a methodology that can better capture future extreme precipitation events, which is necessary to improve regional climate change impact studies.
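Standard empirical quantile mapping (the QMα baseline) sends each model value through the historical model CDF onto the observed CDF. A minimal sketch; the instability the authors describe arises precisely when new extremes fall beyond the calibrated range and this mapping must extrapolate:

```python
import numpy as np

def quantile_map(model_hist, obs_hist, model_new):
    """Empirical quantile mapping: map each new model value through the
    historical model CDF onto the observed CDF. No extrapolation scheme:
    values beyond the calibration range are clipped to the tails."""
    model_sorted = np.sort(model_hist)
    obs_sorted = np.sort(obs_hist)
    p = np.searchsorted(model_sorted, model_new) / len(model_sorted)
    p = np.clip(p, 0.0, 1.0)
    return np.interp(p, np.linspace(0.0, 1.0, len(obs_sorted)), obs_sorted)

rng = np.random.default_rng(3)
obs = rng.gamma(2.0, 5.0, 5000)        # "observed" daily precipitation
model = 0.5 * obs                       # model run with a systematic dry bias
corrected = quantile_map(model, obs, model)
```

Within the calibration range the dry bias is removed; the parametric shape parameter of QMβ1 is aimed at making the tail of this mapping stable beyond it.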

  15. Retrieval of background surface reflectance with BRD components from pre-running BRDF

    NASA Astrophysics Data System (ADS)

    Choi, Sungwon; Lee, Kyeong-Sang; Jin, Donghyun; Lee, Darae; Han, Kyung-Soo

    2016-10-01

    Many countries launch satellites to observe the Earth's surface, and as the importance of surface remote sensing grows, surface reflectance has become a core parameter of the ground climate. However, observing surface reflectance from satellites has weaknesses, such as limited temporal resolution and sensitivity to view and solar angles. The bidirectional effects of surface reflectance introduce noise into the time series, and this noise can lead to errors when determining surface reflectance. Correcting this bidirectional error therefore requires a correction model that normalizes the sensor data. The Bidirectional Reflectance Distribution Function (BRDF) is a method that improves accuracy by correcting for scattering (isotropic, geometric, and volumetric). In this study, we apply the BRDF to retrieve surface reflectance, using two steps to retrieve the Background Surface Reflectance (BSR). The first step retrieves the Bidirectional Reflectance Distribution (BRD) coefficients: in a pre-run of the BRDF, we use observed surface reflectance from SPOT/VEGETATION (VGT-S1) together with angular data to obtain the BRD coefficients needed to calculate the scattering terms. In the second step, we run the BRDF in the opposite direction with the BRD coefficients and angular data to retrieve the BSR. As a result, the BSR reflectance is very similar to that of VGT-S1, and its values are adequate: the highest BSR reflectance does not exceed 0.4 in the blue channel, 0.45 in the red channel, and 0.55 in the NIR channel. For validation, we compare the reflectance of clear-sky pixels identified from the SPOT/VGT status map data.
    Comparing the BSR with VGT-S1 gives a bias from 0.0116 to 0.0158 and an RMSE from 0.0459 to 0.0545. These are very reasonable results, confirming that the BSR is similar to VGT-S1. A weakness of this study is the missing pixels in the BSR, which correspond to locations observed too few times to retrieve the BRD components. If these missing pixels are filled, the BSR will retrieve surface products more accurately, and we expect that, once filled and made more accurate, it will be useful for retrieving surface products derived from surface reflectance, such as cloud masking and aerosol retrieval.
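    The two-step retrieval described above can be sketched as a linear kernel-driven BRDF fit followed by a forward run at a reference geometry. This is an illustrative reconstruction, not the study's code: the RossThick volumetric kernel below is the standard one, but the geometric kernel is a simplified placeholder (the study does not specify its kernels), and all names are hypothetical.

```python
import numpy as np

def rossthick_kernel(theta_s, theta_v, phi):
    """RossThick volumetric-scattering kernel (all angles in radians)."""
    cos_xi = (np.cos(theta_s) * np.cos(theta_v)
              + np.sin(theta_s) * np.sin(theta_v) * np.cos(phi))
    xi = np.arccos(np.clip(cos_xi, -1.0, 1.0))
    return (((np.pi / 2 - xi) * np.cos(xi) + np.sin(xi))
            / (np.cos(theta_s) + np.cos(theta_v)) - np.pi / 4)

def geometric_kernel(theta_s, theta_v, phi):
    """Placeholder geometric-scattering kernel (stand-in for e.g. LiSparse)."""
    return np.cos(theta_s) * np.cos(theta_v) * np.cos(phi)

def fit_brd_coefficients(rho, theta_s, theta_v, phi):
    """Step 1: least-squares fit of the isotropic/volumetric/geometric
    weights from a series of observed reflectances and angles."""
    a = np.column_stack([np.ones_like(rho),
                         rossthick_kernel(theta_s, theta_v, phi),
                         geometric_kernel(theta_s, theta_v, phi)])
    coeffs, *_ = np.linalg.lstsq(a, rho, rcond=None)
    return coeffs  # (f_iso, f_vol, f_geo)

def background_surface_reflectance(coeffs, theta_s, theta_v=0.0, phi=0.0):
    """Step 2: run the fitted model forward at a reference geometry."""
    f_iso, f_vol, f_geo = coeffs
    return (f_iso + f_vol * rossthick_kernel(theta_s, theta_v, phi)
            + f_geo * geometric_kernel(theta_s, theta_v, phi))
```

Pixels observed too few times leave the least-squares system underdetermined, which is exactly the missing-pixel weakness the abstract notes.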

  16. Relative importance of first and second derivatives of nuclear magnetic resonance chemical shifts and spin-spin coupling constants for vibrational averaging.

    PubMed

    Dracínský, Martin; Kaminský, Jakub; Bour, Petr

    2009-03-07

    The relative importance of anharmonic corrections to molecular vibrational energies, nuclear magnetic resonance (NMR) chemical shifts, and J-coupling constants was assessed for a model set of methane derivatives, differently charged alanine forms, and sugar models. Molecular quartic force fields and NMR parameter derivatives were obtained quantum mechanically by numerical differentiation. In most cases the harmonic vibrational function combined with the property second derivatives provided the largest correction to the equilibrium values, while anharmonic corrections (third and fourth energy derivatives) were found to be less important. The most computationally expensive off-diagonal quartic energy derivatives involving four different coordinates provided a negligible contribution. The vibrational corrections of NMR shifts were small and yielded a convincing improvement only for very accurate wave function calculations. For the indirect spin-spin coupling constants, the averaging significantly improved even the equilibrium values obtained at the density functional theory level. Both first and complete second shielding derivatives were found important for the shift corrections, while for the J-coupling constants the vibrational parts were dominated by the diagonal second derivatives. The vibrational corrections were also applied to some isotopic effects, where the corrected values reproduced the experiment reasonably well, but only if a full second-order expansion of the NMR parameters was included. Contributions of individual vibrational modes to the averaging are discussed. Similar behavior was found for the methane derivatives and for the larger and polar molecules. The vibrational averaging thus facilitates interpretation of previous experimental results and suggests that it can make future molecular structural studies more reliable.
    Because of the lengthy numerical differentiation required to compute the NMR parameter derivatives, their analytical implementation in future quantum chemistry packages is desirable.
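    The second-order expansion underlying these corrections can be written explicitly (a standard form in generic notation, not quoted from the paper): for a property P (a shielding or coupling constant) expanded in normal coordinates Q_i,

```latex
\langle P \rangle \;=\; P_{e}
  \;+\; \sum_i \frac{\partial P}{\partial Q_i}\,\langle Q_i \rangle
  \;+\; \frac{1}{2}\sum_{i,j} \frac{\partial^{2} P}{\partial Q_i\,\partial Q_j}\,\langle Q_i Q_j \rangle .
```

    In the harmonic approximation $\langle Q_i Q_j\rangle$ is diagonal, and $\langle Q_i\rangle$ vanishes unless cubic anharmonicity (third energy derivatives) is included, which is consistent with the finding that the diagonal second property derivatives dominate the J-coupling corrections while the first-derivative terms require the anharmonic force field.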

  17. Diagrammatic expansion for positive spectral functions beyond GW: Application to vertex corrections in the electron gas

    NASA Astrophysics Data System (ADS)

    Stefanucci, G.; Pavlyukh, Y.; Uimonen, A.-M.; van Leeuwen, R.

    2014-09-01

    We present a diagrammatic approach to construct self-energy approximations within many-body perturbation theory with positive spectral properties. The method cures the problem of negative spectral functions which arises from a straightforward inclusion of vertex diagrams beyond the GW approximation. Our approach consists of a two-step procedure: We first express the approximate many-body self-energy as a product of half-diagrams and then identify the minimal number of half-diagrams to add in order to form a perfect square. The resulting self-energy is an unconventional sum of self-energy diagrams in which the internal lines of half a diagram are time-ordered Green's functions, whereas those of the other half are anti-time-ordered Green's functions, and the lines joining the two halves are either lesser or greater Green's functions. The theory is developed using noninteracting Green's functions and subsequently extended to self-consistent Green's functions. Issues related to the conserving properties of diagrammatic approximations with positive spectral functions are also addressed. As a major application of the formalism we derive the minimal set of additional diagrams to make positive the spectral function of the GW approximation with lowest-order vertex corrections and screened interactions. The method is then applied to vertex corrections in the three-dimensional homogeneous electron gas by using a combination of analytical frequency integrations and numerical Monte Carlo momentum integrations to evaluate the diagrams.

  18. Calibration and correction procedures for cosmic-ray neutron soil moisture probes located across Australia

    NASA Astrophysics Data System (ADS)

    Hawdon, Aaron; McJannet, David; Wallace, Jim

    2014-06-01

    The cosmic-ray probe (CRP) provides continuous estimates of soil moisture over an area of ˜30 ha by counting fast neutrons produced from cosmic rays which are predominantly moderated by water molecules in the soil. This paper describes the setup, measurement correction procedures, and field calibration of CRPs at nine locations across Australia with contrasting soil type, climate, and land cover. These probes form the inaugural Australian CRP network, which is known as CosmOz. CRP measurements require neutron count rates to be corrected for effects of atmospheric pressure, water vapor pressure changes, and variations in incoming neutron intensity. We assess the magnitude and importance of these corrections and present standardized approaches for network-wide analysis. In particular, we present a new approach to correct for incoming neutron intensity variations and test its performance against existing procedures used in other studies. Our field calibration results indicate that a generalized calibration function for relating neutron counts to soil moisture is suitable for all soil types, with the possible exception of very sandy soils with low water content. Using multiple calibration data sets, we demonstrate that the generalized calibration function only applies after accounting for persistent sources of hydrogen in the soil profile. Finally, we demonstrate that by following standardized correction procedures and scaling neutron counting rates of all CRPs to a single reference location, differences in calibrations between sites are related to site biomass. This observation provides a means for estimating biomass at a given location or for deriving coefficients for the calibration function in the absence of field calibration data.
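    The correction and calibration chain described above can be sketched as follows. This is a generic illustration: the coefficient values (beta, alpha, A0, A1, A2) are typical literature values for Desilets-style processing, not the CosmOz network's own, and the function names are hypothetical.

```python
import math

A0, A1, A2 = 0.0808, 0.372, 0.115   # typical calibration shape parameters

def correct_counts(n_raw, pressure, p_ref, humidity, h_ref, intensity, i_ref,
                   beta=0.0077, alpha=0.0054):
    """Apply the three corrections named in the abstract: barometric
    pressure, atmospheric water vapour, and incoming neutron intensity."""
    f_pressure = math.exp(beta * (pressure - p_ref))   # hPa
    f_vapour = 1.0 + alpha * (humidity - h_ref)        # absolute humidity, g/m^3
    f_intensity = i_ref / intensity                    # neutron-monitor ratio
    return n_raw * f_pressure * f_vapour * f_intensity

def soil_moisture(n_corr, n0):
    """Generalized calibration function: volumetric water content from
    corrected counts, with n0 the count rate over dry soil."""
    return A0 / (n_corr / n0 - A1) - A2
```

Scaling all probes' counting rates to a single reference location, as the paper does, amounts to fixing a common `n0` so that calibrations become comparable across sites.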

  19. Calibrating the Griggs Apparatus using Experiments performed at the Quartz-Coesite Transition

    NASA Astrophysics Data System (ADS)

    Heilbronner, R.; Stunitz, H.; Richter, B.

    2015-12-01

    The Griggs deformation apparatus is increasingly used for shear experiments. The tested material is placed on a 45° pre-cut between two forcing blocks. During the experiment, the axial displacement, load, temperature, and confining pressure are recorded as a function of time. From these records, stress, strain, and other mechanical data can be calculated - provided the machine is calibrated. Experimentalists are well aware that calibrating a Griggs apparatus is not easy. The stiffness correction accounts for the elastic extension of the rig as load is applied to the sample. An 'area correction' accounts for the decreasing overlap of the forcing blocks as slip along the pre-cut progresses. Other corrections are sometimes used to account for machine specific behaviour. While the rig stiffness can be measured very accurately, the area correction involves model assumptions. Depending on the choice of the model, the calculated stresses may vary by as much as 100 MPa. Also, while the assumptions appear to be theoretically valid, in practice they tend to over-correct the data, yielding strain hardening curves even in cases where constant flow stress or weakening is expected. Using the results of experiments on quartz gouge at the quartz-coesite transition (see Richter et al. this conference), we are now able to improve and constrain our corrections. We introduce an elastic salt correction based on the assumption that the confining pressure is increased as the piston advances and reduces the volume in the confining medium. As the compressibility of salt is low, the correction is significant and increases with strain. Applying this correction, the strain hardening artefact introduced by the area correction can be counter-balanced. Using a combination of area correction and salt correction we can now reproduce strain weakening, for which there is evidence in samples where coesite transforms back to quartz.
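    Of the corrections discussed above, the stiffness correction is the one that can be stated without model assumptions; a minimal sketch (names and units are illustrative, and the area and salt corrections, which involve the model choices discussed above, are deliberately left out):

```python
def correct_record(displacement, load, rig_stiffness, area):
    """Stiffness correction for a Griggs-type record: subtract the elastic
    extension of the rig (load / stiffness) from the measured axial
    displacement, then convert load to stress over the contact area.
    Illustrative units: displacement m, load N, stiffness N/m, area m^2."""
    sample_displacement = displacement - load / rig_stiffness
    stress = load / area
    return sample_displacement, stress
```

In a full reduction, `area` would itself be a function of slip (the area correction) and the confining pressure would be adjusted for piston advance (the salt correction).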

  20. Functional Characterization of Two Novel Human Prostate Cancer Metastasis Related Genes

    DTIC Science & Technology

    2008-02-01

    systems (27-29), a major leap in functional genomic investigation would be the ability to perform genetic subtractive analysis with in vivo-derived...been designed to detect and isolate different DNA sequences present in one complementary (31) or genomic (32) DNA library but absent in another. The...many disorders if applied correctly, the use of control specimens different from the native tissue for subtractive genomic analysis in some studies has

  1. Beyond Kohn-Sham Approximation: Hybrid Multistate Wave Function and Density Functional Theory.

    PubMed

    Gao, Jiali; Grofe, Adam; Ren, Haisheng; Bao, Peng

    2016-12-15

    A multistate density functional theory (MSDFT) is presented in which the energies and densities for the ground and excited states are treated on the same footing using multiconfigurational approaches. The method can be applied to systems with strong correlation and to correctly describe the dimensionality of the conical intersections between strongly coupled dissociative potential energy surfaces. A dynamic-then-static framework for treating electron correlation is developed to first incorporate dynamic correlation into contracted state functions through block-localized Kohn-Sham density functional theory (KSDFT), followed by diagonalization of the effective Hamiltonian to include static correlation. MSDFT can be regarded as a hybrid of wave function and density functional theory. The method is built on and makes use of the current approximate density functional developed in KSDFT, yet it retains its computational efficiency to treat strongly correlated systems that are problematic for KSDFT but too large for accurate WFT. The results presented in this work show that MSDFT can be applied to photochemical processes involving conical intersections.

  2. MRS proof-of-concept on atmospheric corrections. Atmospheric corrections using an orbital pointable imaging system

    NASA Technical Reports Server (NTRS)

    Slater, P. N. (Principal Investigator)

    1980-01-01

    The feasibility of using a pointable imager to determine atmospheric parameters was studied. In particular the determination of the atmospheric extinction coefficient and the path radiance, the two quantities that have to be known in order to correct spectral signatures for atmospheric effects, was simulated. The study included the consideration of the geometry of ground irradiance and observation conditions for a pointable imager in a LANDSAT orbit as a function of time of year. A simulation study was conducted on the sensitivity of scene classification accuracy to changes in atmospheric condition. A two wavelength and a nonlinear regression method for determining the required atmospheric parameters were investigated. The results indicate the feasibility of using a pointable imaging system (1) for the determination of the atmospheric parameters required to improve classification accuracies in urban-rural transition zones and to apply in studies of bi-directional reflectance distribution function data and polarization effects; and (2) for the determination of the spectral reflectances of ground features.
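    The role of the two atmospheric quantities named above can be made concrete with the usual single-layer relation L_sensor = T * L_ground + L_path. The sketch below simply inverts that relation; it assumes this simplified form and is not the study's two-wavelength or regression method.

```python
import math

def ground_radiance(l_sensor, path_radiance, optical_depth, view_zenith_deg):
    """Recover ground-leaving radiance from at-sensor radiance given the
    extinction (optical depth) and path radiance, assuming
    L_sensor = T * L_ground + L_path with T = exp(-tau / cos(theta_v))."""
    t = math.exp(-optical_depth / math.cos(math.radians(view_zenith_deg)))
    return (l_sensor - path_radiance) / t
```

This is why the pointable imager only needs to determine the extinction coefficient and the path radiance to correct spectral signatures for atmospheric effects.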

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Simonetto, Andrea; Dall'Anese, Emiliano

    This article develops online algorithms to track solutions of time-varying constrained optimization problems. Particularly, resembling workhorse Kalman filtering-based approaches for dynamical systems, the proposed methods involve prediction-correction steps to provably track the trajectory of the optimal solutions of time-varying convex problems. The merits of existing prediction-correction methods have been shown for unconstrained problems and for setups where computing the inverse of the Hessian of the cost function is computationally affordable. This paper addresses the limitations of existing methods by tackling constrained problems and by designing first-order prediction steps that rely on the Hessian of the cost function (and do not require the computation of its inverse). In addition, the proposed methods are shown to improve the convergence speed of existing prediction-correction methods when applied to unconstrained problems. Numerical simulations corroborate the analytical results and showcase performance and benefits of the proposed algorithms. A realistic application of the proposed method to real-time control of energy resources is presented.
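    The prediction-correction idea can be illustrated on a toy scalar problem. This sketch is not the paper's constrained first-order method: it tracks the minimizer of f(x; t) = 0.5 * (x - target(t))**2, predicts by extrapolating the last observed solution drift, and corrects with a few gradient steps on the newly sampled cost.

```python
import math

def track(target, h=0.1, steps=200, n_corr=3, alpha=0.5):
    """Toy prediction-correction tracking of x*(t) = target(t)."""
    t, x, x_prev = 0.0, 0.0, 0.0
    errors = []
    for _ in range(steps):
        x_pred = x + (x - x_prev)        # prediction: continue last drift
        x_prev = x
        t += h                           # the problem changes ...
        x = x_pred
        for _ in range(n_corr):          # ... then correct on the new cost
            x -= alpha * (x - target(t))
        errors.append(abs(x - target(t)))
    return errors
```

Even this crude predictor keeps the tracking error an order of magnitude below what correction alone would achieve at the same sampling rate, which is the intuition behind the paper's convergence-speed claim.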

  4. Properties of the Bayesian Knowledge Tracing Model

    ERIC Educational Resources Information Center

    van de Sande, Brett

    2013-01-01

    Bayesian Knowledge Tracing is used very widely to model student learning. It comes in two different forms: The first form is the Bayesian Knowledge Tracing "hidden Markov model" which predicts the probability of correct application of a skill as a function of the number of previous opportunities to apply that skill and the model…
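    The "probability of correct application as a function of prior opportunities" can be sketched with the standard BKT parameters (initial mastery, learning, slip, and guess). The parameter values below are arbitrary illustrations, and this marginal form deliberately ignores the evidence-conditioned posterior update that full BKT performs after each observed response.

```python
def bkt_correct_prob(n_opportunities, p_l0=0.2, p_t=0.15, p_s=0.1, p_g=0.25):
    """P(correct) after n prior opportunities to apply the skill.
    Mastery evolves as P(L_n) = 1 - (1 - P(L0)) * (1 - P(T))**n; a correct
    response comes from mastery without a slip, or from a lucky guess."""
    p_l = 1.0 - (1.0 - p_l0) * (1.0 - p_t) ** n_opportunities
    return p_l * (1.0 - p_s) + (1.0 - p_l) * p_g
```

As n grows, the prediction climbs from P(L0)(1-P(S)) + (1-P(L0))P(G) toward the ceiling 1 - P(S), the asymptote set by the slip parameter.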

  5. A three-dimensional model-based partial volume correction strategy for gated cardiac mouse PET imaging

    NASA Astrophysics Data System (ADS)

    Dumouchel, Tyler; Thorn, Stephanie; Kordos, Myra; DaSilva, Jean; Beanlands, Rob S. B.; deKemp, Robert A.

    2012-07-01

    Quantification in cardiac mouse positron emission tomography (PET) imaging is limited by the imaging spatial resolution. Spillover of left ventricle (LV) myocardial activity into adjacent organs results in partial volume (PV) losses leading to underestimation of myocardial activity. A PV correction method was developed to restore accuracy of the activity distribution for FDG mouse imaging. The PV correction model was based on convolving an LV image estimate with a 3D point spread function. The LV model was described regionally by a five-parameter profile including myocardial, background and blood activities which were separated into three compartments by the endocardial radius and myocardium wall thickness. The PV correction was tested with digital simulations and a physical 3D mouse LV phantom. In vivo cardiac FDG mouse PET imaging was also performed. Following imaging, the mice were sacrificed and the tracer biodistribution in the LV and liver tissue was measured using a gamma-counter. The PV correction algorithm improved recovery from 50% to within 5% of the truth for the simulated and measured phantom data and image uniformity by 5-13%. The PV correction algorithm improved the mean myocardial LV recovery from 0.56 (0.54) to 1.13 (1.10) without (with) scatter and attenuation corrections. The mean image uniformity was improved from 26% (26%) to 17% (16%) without (with) scatter and attenuation corrections applied. Scatter and attenuation corrections were not observed to significantly impact PV-corrected myocardial recovery or image uniformity. Image-based PV correction algorithm can increase the accuracy of PET image activity and improve the uniformity of the activity distribution in normal mice. The algorithm may be applied using different tracers, in transgenic models that affect myocardial uptake, or in different species provided there is sufficient image quality and similar contrast between the myocardium and surrounding structures.
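    The convolution-based forward model at the heart of the correction can be sketched in one dimension (the paper's model is 3D and fitted regionally; all names and numbers here are illustrative): a five-parameter profile of blood, myocardium, and background blurred by a Gaussian point spread function.

```python
import numpy as np

def forward_model(r_inner, wall, act_myo, act_blood, act_bkg, x, fwhm):
    """1D analogue of the five-parameter LV profile (blood pool inside
    r_inner, myocardium of thickness `wall`, background outside),
    convolved with a Gaussian PSF of the given FWHM."""
    profile = np.where(np.abs(x) < r_inner, act_blood,
               np.where(np.abs(x) < r_inner + wall, act_myo, act_bkg))
    sigma = fwhm / 2.355                        # FWHM -> Gaussian sigma
    k = np.exp(-0.5 * ((x[:, None] - x[None, :]) / sigma) ** 2)
    k /= k.sum(axis=1, keepdims=True)           # row-normalized blur matrix
    return k @ profile
```

Blurring a thin wall with a PSF wider than the wall pulls the measured myocardial peak well below the true activity; fitting the model parameters so the blurred profile matches the image is what restores recovery toward 1.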

  6. Fermi orbital self-interaction corrected electronic structure of molecules beyond local density approximation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hahn, T., E-mail: torsten.hahn@physik.tu-freiberg.de; Liebing, S.; Kortus, J.

    2015-12-14

    The correction of the self-interaction error that is inherent to all standard density functional theory calculations is an object of increasing interest. In this article, we apply the very recently developed Fermi-orbital based approach for the self-interaction correction [M. R. Pederson et al., J. Chem. Phys. 140, 121103 (2014) and M. R. Pederson, J. Chem. Phys. 142, 064112 (2015)] to a set of different molecular systems. Our study covers systems ranging from simple diatomic to large organic molecules. We focus our analysis on the direct estimation of the ionization potential from orbital eigenvalues. Further, we show that the Fermi orbital positions in structurally similar molecules appear to be transferable.

  7. Performance and Self-Consistency of the Generalized Dielectric Dependent Hybrid Functional

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brawand, Nicholas P.; Govoni, Marco; Vörös, Márton

    Here, we analyze the performance of the recently proposed screened exchange constant functional (SX) on the GW100 test set, and we discuss results obtained at different levels of self-consistency. The SX functional is a generalization of dielectric dependent hybrid functionals to finite systems; it is nonempirical and depends on the average screening of the exchange interaction. We compare results for ionization potentials obtained with SX to those of CCSD(T) calculations and experiments, and we find excellent agreement, on par with recent state of the art methods based on many body perturbation theory. Applying SX perturbatively to correct PBE eigenvalues yields improved results in most cases, except for ionic molecules, for which wave function self-consistency is instead crucial. Calculations where wave functions and the screened exchange constant (αSX) are determined self-consistently, and those where αSX is fixed to the value determined within PBE, yield results of comparable accuracy. Perturbative G0W0 corrections of eigenvalues obtained with self-consistent αSX are small on average, for all molecules in the GW100 test set.

  8. Performance and Self-Consistency of the Generalized Dielectric Dependent Hybrid Functional

    DOE PAGES

    Brawand, Nicholas P.; Govoni, Marco; Vörös, Márton; ...

    2017-05-24

    Here, we analyze the performance of the recently proposed screened exchange constant functional (SX) on the GW100 test set, and we discuss results obtained at different levels of self-consistency. The SX functional is a generalization of dielectric dependent hybrid functionals to finite systems; it is nonempirical and depends on the average screening of the exchange interaction. We compare results for ionization potentials obtained with SX to those of CCSD(T) calculations and experiments, and we find excellent agreement, on par with recent state of the art methods based on many body perturbation theory. Applying SX perturbatively to correct PBE eigenvalues yields improved results in most cases, except for ionic molecules, for which wave function self-consistency is instead crucial. Calculations where wave functions and the screened exchange constant (αSX) are determined self-consistently, and those where αSX is fixed to the value determined within PBE, yield results of comparable accuracy. Perturbative G0W0 corrections of eigenvalues obtained with self-consistent αSX are small on average, for all molecules in the GW100 test set.

  9. Corrected confidence bands for functional data using principal components.

    PubMed

    Goldsmith, J; Greven, S; Crainiceanu, C

    2013-03-01

    Functional principal components (FPC) analysis is widely used to decompose and express functional observations. Curve estimates implicitly condition on basis functions and other quantities derived from FPC decompositions; however these objects are unknown in practice. In this article, we propose a method for obtaining correct curve estimates by accounting for uncertainty in FPC decompositions. Additionally, pointwise and simultaneous confidence intervals that account for both model- and decomposition-based variability are constructed. Standard mixed model representations of functional expansions are used to construct curve estimates and variances conditional on a specific decomposition. Iterated expectation and variance formulas combine model-based conditional estimates across the distribution of decompositions. A bootstrap procedure is implemented to understand the uncertainty in principal component decomposition quantities. Our method compares favorably to competing approaches in simulation studies that include both densely and sparsely observed functions. We apply our method to sparse observations of CD4 cell counts and to dense white-matter tract profiles. Code for the analyses and simulations is publicly available, and our method is implemented in the R package refund on CRAN. Copyright © 2013, The International Biometric Society.
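    The iterated expectation and variance step can be sketched directly: given curve estimates and conditional variances from each bootstrap decomposition, combine them by the law of total expectation and total variance. A minimal sketch with hypothetical array shapes (n_boot x n_points), not the refund package's implementation:

```python
import numpy as np

def combine_bootstrap(curve_means, curve_vars):
    """Combine conditional curve estimates across bootstrap decompositions:
      E[Y]   = E_b[ E[Y | b] ]
      Var[Y] = E_b[ Var[Y | b] ] + Var_b[ E[Y | b] ]."""
    means = np.asarray(curve_means)   # (n_boot, n_points)
    varis = np.asarray(curve_vars)    # (n_boot, n_points)
    mean = means.mean(axis=0)
    var = varis.mean(axis=0) + means.var(axis=0)
    return mean, var
```

The second variance term is exactly the decomposition-based variability that conditioning on a single FPC decomposition would discard.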

  10. Corrected Confidence Bands for Functional Data Using Principal Components

    PubMed Central

    Goldsmith, J.; Greven, S.; Crainiceanu, C.

    2014-01-01

    Functional principal components (FPC) analysis is widely used to decompose and express functional observations. Curve estimates implicitly condition on basis functions and other quantities derived from FPC decompositions; however these objects are unknown in practice. In this article, we propose a method for obtaining correct curve estimates by accounting for uncertainty in FPC decompositions. Additionally, pointwise and simultaneous confidence intervals that account for both model- and decomposition-based variability are constructed. Standard mixed model representations of functional expansions are used to construct curve estimates and variances conditional on a specific decomposition. Iterated expectation and variance formulas combine model-based conditional estimates across the distribution of decompositions. A bootstrap procedure is implemented to understand the uncertainty in principal component decomposition quantities. Our method compares favorably to competing approaches in simulation studies that include both densely and sparsely observed functions. We apply our method to sparse observations of CD4 cell counts and to dense white-matter tract profiles. Code for the analyses and simulations is publicly available, and our method is implemented in the R package refund on CRAN. PMID:23003003

  11. Error analysis of motion correction method for laser scanning of moving objects

    NASA Astrophysics Data System (ADS)

    Goel, S.; Lohani, B.

    2014-05-01

    The limitation of conventional laser scanning methods is that the objects being scanned should be static. The need to scan moving objects has resulted in the development of new methods capable of generating correct 3D geometry of moving objects. Limited literature is available showing development of very few methods capable of catering to the problem of object motion during scanning. All the existing methods utilize their own models or sensors. Studies on error modelling or analysis of any of these motion correction methods are lacking in the literature. In this paper, we develop the error budget and present the analysis of one such `motion correction' method. This method assumes availability of position and orientation information of the moving object, which in general can be obtained by installing a POS system on board or by use of some tracking devices. It then uses this information along with laser scanner data to apply a correction to the laser data, thus resulting in correct geometry despite the object being mobile during scanning. The major applications of this method lie in the shipping industry, to scan ships either moving or moored at sea, and in scanning other objects such as hot air balloons or aerostats. It is to be noted that the other methods of "motion correction" explained in the literature cannot be applied to scan the objects mentioned here, making the chosen method quite unique. This paper presents some interesting insights into the functioning of the "motion correction" method as well as a detailed account of the behavior and variation of the error due to different sensor components, alone and in combination with each other. The analysis can be used to obtain insights into optimal utilization of available components for achieving the best results.
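    The core of such a motion correction is a frame transformation: each laser return, timestamped, is mapped from the scanner (world) frame into the moving object's body frame using the pose reported by the POS system at that instant. A minimal 2D sketch (a real POS supplies full 3D position and orientation; names here are hypothetical):

```python
import numpy as np

def motion_correct(point_world, object_position, object_yaw):
    """Map a scanner-frame point into the moving object's body frame
    using the object's pose (2D position + yaw) at the point's timestamp."""
    c, s = np.cos(object_yaw), np.sin(object_yaw)
    r_wb = np.array([[c, -s], [s, c]])          # body -> world rotation
    return r_wb.T @ (np.asarray(point_world) - np.asarray(object_position))
```

Errors in the reported position or yaw propagate directly through this transform, which is why the paper's error budget is organized around the individual sensor components.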

  12. Distortion correction for diffusion-weighted MRI tractography and fMRI in the temporal lobes.

    PubMed

    Embleton, Karl V; Haroon, Hamied A; Morris, David M; Ralph, Matthew A Lambon; Parker, Geoff J M

    2010-10-01

    Single shot echo-planar imaging (EPI) sequences are currently the most commonly used sequences for diffusion-weighted imaging (DWI) and functional magnetic resonance imaging (fMRI) as they allow relatively high signal to noise with rapid acquisition time. A major drawback of EPI is the substantial geometric distortion and signal loss that can occur due to magnetic field inhomogeneities close to air-tissue boundaries. If DWI-based tractography and fMRI are to be applied to these regions, then the distortions must be accurately corrected to achieve meaningful results. We describe robust acquisition and processing methods for correcting such distortions in spin echo (SE) EPI using a variant of the reversed direction k space traversal method with a number of novel additions. We demonstrate that dual direction k space traversal with maintained diffusion-encoding gradient strength and direction results in correction of the great majority of eddy current-associated distortions in DWI, in addition to those created by variations in magnetic susceptibility. We also provide examples to demonstrate that the presence of severe distortions cannot be ignored if meaningful tractography results are desired. The distortion correction routine was applied to SE-EPI fMRI acquisitions and allowed detection of activation in the temporal lobe that had been previously found using PET but not conventional fMRI. © 2010 Wiley-Liss, Inc.

  13. Parton distribution functions from reduced Ioffe-time distributions

    NASA Astrophysics Data System (ADS)

    Zhang, Jian-Hui; Chen, Jiunn-Wei; Monahan, Christopher

    2018-04-01

    We show that the correct way to extract parton distribution functions from the reduced Ioffe-time distribution, a ratio of the Ioffe-time distribution for a moving hadron and a hadron at rest, is through a factorization formula. This factorization exists because, at small distances, forming the ratio does not change the infrared behavior of the numerator, which is factorizable. We illustrate the effect of such a factorization by applying it to results in the literature.

  14. Novel Hamiltonian method for collective dynamics analysis of an intense charged particle beam propagating through a periodic focusing quadrupole lattice a)

    NASA Astrophysics Data System (ADS)

    Startsev, Edward A.; Davidson, Ronald C.

    2011-05-01

    Identifying regimes for quiescent propagation of intense beams over long distances has been a major challenge in accelerator research. In particular, the development of systematic theoretical approaches that are able to treat self-consistently the applied oscillating force and the nonlinear self-field force of the beam particles simultaneously has been a major challenge of modern beam physics. In this paper, the recently developed Hamiltonian averaging technique [E. A. Startsev, R. C. Davidson, and M. Dorf, Phys. Rev. ST Accel. Beams 13, 064402 (2010)], which incorporates both the applied periodic focusing force and the self-field force of the beam particles, is generalized to the case of time-dependent beam distributions. The new formulation allows not only a determination of quasi-equilibrium solutions of the nonlinear Vlasov-Poisson system of equations but also a detailed study of their stability properties. The corrections to the well-known "smooth-focusing" approximation are derived, and the results are applied to a matched beam with thermal equilibrium distribution function. It is shown that the corrections remain small even for moderate values of the vacuum phase advance σv. Nonetheless, because the corrections to the average self-field potential are non-axisymmetric, the stability properties of the different beam quasi-equilibria can change significantly.

  15. Limitations of the method of complex basis functions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baumel, R.T.; Crocker, M.C.; Nuttall, J.

    1975-08-01

    The method of complex basis functions proposed by Rescigno and Reinhardt is applied to the calculation of the amplitude in a model problem which can be treated analytically. It is found for an important class of potentials, including some of infinite range and also the square well, that the method does not provide a converging sequence of approximations. However, in some cases, approximations of relatively low order might be close to the correct result. The method is also applied to S-wave e-H elastic scattering above the ionization threshold, and spurious ''convergence'' to the wrong result is found. A procedure which might overcome the difficulties of the method is proposed.

  16. Total variation approach for adaptive nonuniformity correction in focal-plane arrays.

    PubMed

    Vera, Esteban; Meza, Pablo; Torres, Sergio

    2011-01-15

    In this Letter we propose an adaptive scene-based nonuniformity correction method for fixed-pattern noise removal in imaging arrays. It is based on the minimization of the total variation of the estimated irradiance, and the resulting function is optimized by an isotropic total variation approach making use of an alternating minimization strategy. The proposed method provides enhanced results when applied to a diverse set of real IR imagery, accurately estimating the nonuniformity parameters of each detector in the focal-plane array at a fast convergence rate, while also forming fewer ghosting artifacts.
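    The idea of estimating per-detector fixed-pattern parameters by minimizing total variation can be illustrated with a toy version: gradient descent on per-pixel offsets using a smoothed 1D horizontal TV. This is only a sketch of the principle; the Letter uses isotropic TV with an alternating minimization strategy and also estimates gains, and every name below is hypothetical.

```python
import numpy as np

def estimate_offsets(frames, n_iter=500, lr=0.02, eps=0.01):
    """Estimate fixed-pattern offsets o by gradient descent on the
    smoothed horizontal total variation of the corrected frames x - o."""
    o = np.zeros(frames.shape[1:])
    for _ in range(n_iter):
        g = np.zeros_like(o)
        for x in frames:
            d = np.diff(x - o, axis=1)        # horizontal differences
            w = d / np.sqrt(d * d + eps)      # d/dd of sqrt(d^2 + eps)
            g[:, :-1] += w                    # d_j depends on +o_j ...
            g[:, 1:] -= w                     # ... and on -o_{j+1}
        o -= lr * g / len(frames)
        o -= o.mean()                         # fix the global-offset ambiguity
    return o
```

Because the true scene is assumed smoother than the fixed-pattern noise, driving the TV of the corrected frames down recovers the pattern up to a constant.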

  17. Bistatic scattering from a cone frustum

    NASA Technical Reports Server (NTRS)

    Ebihara, W.; Marhefka, R. J.

    1986-01-01

    The bistatic scattering from a perfectly conducting cone frustum is investigated using the Geometrical Theory of Diffraction (GTD). The first-order GTD edge-diffraction solution has been extended by correcting for its failure in the specular region off the curved surface and in the rim-caustic regions of the endcaps. The corrections are accomplished by the use of transition functions which are developed and introduced into the diffraction coefficients. Theoretical results are verified in the principal plane by comparison with the moment method solution and experimental measurements. The resulting solution for the scattered fields is accurate, easy to apply, and fast to compute.

  18. [THE CORRECTION OF TROPHIC DISORDERS IN CHILDREN WITH CHRONIC GASTRODUODENITIS BY THE METHOD OF LOW-FREQUENCY LIGHT-MAGNETOTHERAPY].

    PubMed

    Kolosova, T A; Sadovnikova, I V; Belousova, T E

    2015-01-01

    This paper presents the results of a survey of school children with chronic gastroduodenitis who received, at an early period, medical rehabilitation with the method of low-frequency light-magnetotherapy. During hospital treatment, vegetative-trophic status was evaluated by cardiointervalography and thermovision functional tests. Alongside the normalization of clinical parameters, a correction in the dynamics of the vegetative status of the children was observed, which confirms the effectiveness of the therapy. It is shown that the use of low-frequency light-magnetotherapy has a positive effect on the vegetative-trophic state of the organism and normalizes vegetative dysfunction.

  19. Comparison of motion correction techniques applied to functional near-infrared spectroscopy data from children

    NASA Astrophysics Data System (ADS)

    Hu, Xiao-Su; Arredondo, Maria M.; Gomba, Megan; Confer, Nicole; DaSilva, Alexandre F.; Johnson, Timothy D.; Shalinsky, Mark; Kovelman, Ioulia

    2015-12-01

    Motion artifacts are the most significant sources of noise in pediatric brain-imaging designs and data analyses, especially in applications of functional near-infrared spectroscopy (fNIRS), where motion can severely degrade the quality of the acquired data. Different methods have been developed to correct motion artifacts in fNIRS data, but their relative effectiveness for data from child and infant subjects (which are often significantly noisier than adult data) remains largely unexplored. The issue is further complicated by the heterogeneity of fNIRS data artifacts. We compared the efficacy of the six most prevalent motion-artifact correction techniques on fNIRS data acquired from children participating in a language acquisition task: wavelet filtering, spline interpolation, principal component analysis, moving average (MA), correlation-based signal improvement, and a combination of wavelet filtering and MA. Evaluation on five predefined metrics suggests that the MA and wavelet methods yield the best outcomes. These findings elucidate the varied nature of fNIRS data artifacts and the efficacy of artifact correction methods for pediatric populations, and help inform both the theory and practice of optical brain imaging analysis.
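
    Of the six techniques compared, the moving average (MA) is the simplest to sketch. The following is a minimal stand-in, not the pipeline used in the study (real fNIRS toolchains such as HOMER implement far more careful versions); the signal and spike are simulated for illustration.

```python
import numpy as np

def moving_average(signal, window=11):
    """Centered moving average via convolution with a uniform kernel."""
    kernel = np.ones(window) / window
    return np.convolve(signal, kernel, mode="same")

# Hemodynamic-like slow wave plus one sharp motion spike (all simulated).
t = np.linspace(0, 10, 500)
hb = 0.5 * np.sin(2 * np.pi * 0.1 * t)
hb_spiked = hb.copy()
hb_spiked[250] += 5.0                      # motion artifact
smoothed = moving_average(hb_spiked)
```

    The MA spreads the spike's energy over the window, which reduces its amplitude but also slightly blurs the underlying hemodynamic response, one reason the study evaluates several competing techniques.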

  20. Validation of experimental molecular crystal structures with dispersion-corrected density functional theory calculations.

    PubMed

    van de Streek, Jacco; Neumann, Marcus A

    2010-10-01

    This paper describes the validation of a dispersion-corrected density functional theory (d-DFT) method for the purpose of assessing the correctness of experimental organic crystal structures and enhancing the information content of purely experimental data. 241 experimental organic crystal structures from the August 2008 issue of Acta Cryst. Section E were energy-minimized in full, including unit-cell parameters. The differences between the experimental and the minimized crystal structures were subjected to statistical analysis. The r.m.s. Cartesian displacement excluding H atoms upon energy minimization with flexible unit-cell parameters is selected as a pertinent indicator of the correctness of a crystal structure. All 241 experimental crystal structures are reproduced very well: the average r.m.s. Cartesian displacement for the 241 crystal structures, including 16 disordered structures, is only 0.095 Å (0.084 Å for the 225 ordered structures). R.m.s. Cartesian displacements above 0.25 Å either indicate incorrect experimental crystal structures or reveal interesting structural features such as exceptionally large temperature effects, incorrectly modelled disorder or symmetry-breaking H atoms. After validation, the method is applied to nine examples that are known to be ambiguous or subtly incorrect.

  1. Statistical bias correction modelling for seasonal rainfall forecast for the case of Bali island

    NASA Astrophysics Data System (ADS)

    Lealdi, D.; Nurdiati, S.; Sopaheluwakan, A.

    2018-04-01

    Rainfall is an element of climate that strongly influences the agricultural sector. Rain pattern and distribution largely determine the sustainability of agricultural activities, so rainfall information is very useful for the agricultural sector and for farmers in anticipating extreme events that often cause failures of agricultural production. This research aims to identify the biases in seasonal rainfall forecast products from ECMWF (European Centre for Medium-Range Weather Forecasts) and to build a transfer function that corrects the distribution biases, yielding a new prediction model based on a quantile mapping approach. We apply this approach to the case of Bali Island and find that correcting the model's systematic biases gives markedly better results than the raw forecasts. In general, the bias correction performs better during the rainy season than during the dry season.
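
    The quantile mapping idea can be sketched compactly: map each model value to the observed value at the same quantile of the historical model distribution. This is a generic empirical version under assumed synthetic "rainfall" distributions, not the authors' calibrated transfer function for Bali.

```python
import numpy as np

def quantile_map(model_hist, obs_hist, model_new):
    """Empirical quantile mapping: look up each new model value's quantile
    in the historical model CDF, then read off the observed CDF."""
    q = np.linspace(0, 1, 101)
    model_q = np.quantile(model_hist, q)
    obs_q = np.quantile(obs_hist, q)
    ranks = np.interp(model_new, model_q, q)   # quantile of each new value
    return np.interp(ranks, q, obs_q)          # observed value at that quantile

rng = np.random.default_rng(42)
obs = rng.gamma(2.0, 5.0, 3000)      # "observed" seasonal rainfall (synthetic)
model = rng.gamma(2.0, 7.0, 3000)    # biased model output (systematically too wet)
corrected = quantile_map(model, obs, model)
```

    After the mapping, the corrected distribution matches the observed one quantile by quantile, which is exactly the sense in which the method removes distributional bias.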

  2. Ka-Band ARM Zenith Radar Corrections Value-Added Product

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Johnson, Karen; Toto, Tami; Giangrande, Scott

    The KAZRCOR Value-Added Product (VAP) performs several corrections to the ingested KAZR moments and creates a significant-detection mask for each radar mode. The VAP computes gaseous attenuation as a function of time and radial distance from the radar antenna, based on ambient meteorological observations, and corrects the observed reflectivities for that effect. KAZRCOR also dealiases mean Doppler velocities, correcting velocities whose magnitudes exceed the radar's Nyquist velocity. Input KAZR data fields are passed through into the KAZRCOR output files in their native time and range coordinates. Complementary corrected reflectivity and velocity fields are provided, along with a mask of significant detections and a number of data quality flags. This report covers the KAZRCOR VAP as applied to the original KAZR radars and the upgraded KAZR2 radars. Currently there are two separate code bases for the two radar versions, but once the KAZR and KAZR2 data formats are harmonized, only a single code base will be required.
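
    Velocity dealiasing is easy to illustrate in one dimension. A radar reports Doppler velocities folded into the interval ±Vn (the Nyquist velocity); a gate-to-gate unfolder adds multiples of 2·Vn so adjacent samples stay continuous. This toy version assumes the first sample is unaliased and is not the actual KAZRCOR algorithm.

```python
import numpy as np

def dealias(velocities, v_nyquist):
    """Gate-to-gate dealiasing: shift each sample by multiples of 2*Vn
    until it lies within Vn of its predecessor."""
    out = np.array(velocities, dtype=float)
    for i in range(1, len(out)):
        while out[i] - out[i - 1] > v_nyquist:
            out[i] -= 2 * v_nyquist
        while out[i] - out[i - 1] < -v_nyquist:
            out[i] += 2 * v_nyquist
    return out

vn = 8.0
true_v = np.linspace(0, 20, 50)              # true velocities exceed Vn
folded = (true_v + vn) % (2 * vn) - vn       # what the radar would report
unfolded = dealias(folded, vn)
```

    Real dealiasers must also handle noisy gates and genuine discontinuities, which is why operational VAPs combine this logic with quality masks.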

  3. Secular Orbit Evolution in Systems with a Strong External Perturber—A Simple and Accurate Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Andrade-Ines, Eduardo; Eggl, Siegfried, E-mail: eandrade.ines@gmail.com, E-mail: siegfried.eggl@jpl.nasa.gov

    We present a semi-analytical correction to the seminal solution for the secular motion of a planet's orbit under the gravitational influence of an external perturber derived by Heppenheimer. A comparison between analytical predictions and numerical simulations allows us to determine corrective factors for the secular frequency and forced eccentricity in the coplanar restricted three-body problem. The correction is given in the form of a polynomial function of the system's parameters that can be applied to first-order forced-eccentricity and secular-frequency estimates. The resulting secular equations are simple, straightforward to use, and improve the fidelity of Heppenheimer's solution well beyond higher-order models. The quality and convergence of the corrected secular equations are tested for a wide range of parameters, and the limits of their applicability are given.

  4. Corrections for the effects of significant wave height and attitude on Geosat radar altimeter measurements

    NASA Technical Reports Server (NTRS)

    Hayne, G. S.; Hancock, D. W., III

    1990-01-01

    Range estimates from a radar altimeter have biases which are a function of the significant wave height (SWH) and the satellite attitude angle (AA). Based on results of prelaunch Geosat modeling and simulation, a correction for SWH and AA was already applied to the sea-surface height estimates from Geosat's production data processing. By fitting a detailed model radar return waveform to Geosat waveform sampler data, it is possible to provide independent estimates of the height bias, the SWH, and the AA. The waveform fitting has been carried out for 10-sec averages of Geosat waveform sampler data over a wide range of SWH and AA values. The results confirm that Geosat sea-surface-height correction is good to well within the original dm-level specification, but that an additional height correction can be made at the level of several cm.

  5. Correcting the influence of vegetation on surface soil moisture indices by using hyperspectral artificial 3D-canopy models

    NASA Astrophysics Data System (ADS)

    Spengler, D.; Kuester, T.; Frick, A.; Scheffler, D.; Kaufmann, H.

    2013-10-01

    Surface soil moisture content is one of the key variables in many applications, especially in hydrology, meteorology and agriculture. Hyperspectral remote sensing provides effective methodologies for mapping soil moisture content over a broad area through indices such as NSMI [1,2] and SMGM [3]. Both indices can achieve high accuracy for soil samples unaffected by vegetation, but their accuracy is limited in the presence of vegetation, since increasing vegetation cover leads to non-linear variations of the indices. In this study a new methodology for correcting the influence of vegetation on soil moisture indices is presented, consisting of several processing steps. First, hyperspectral reflectance data are classified in terms of crop type and growth stage. Second, based on these parameters, 3D plant models from a database are used to simulate typical canopy reflectance, considering variations in canopy structure (e.g. plant density and distribution) and soil moisture content for the actual solar illumination and sensor viewing angles. Third, a vegetation correction function is developed from the soil moisture indices and vegetation indices calculated from the simulated canopy reflectance data. Finally, this function is applied to hyperspectral image data. The method is tested on two hyperspectral image data sets from the AISA DUAL sensor at the Fichtwald test site in Germany. The results show a significant improvement compared to the sole use of the NSMI index: up to a vegetation cover of 75%, the correction function significantly reduces the influence of vegetation, while for denser vegetation the method no longer predicts soil moisture content with adequate quality. In summary, applying the method to locations weakly to moderately covered by vegetation enables a significant improvement in the quantification of soil moisture and thus greatly expands the scope of the NSMI.

  6. Reconstructing functional near-infrared spectroscopy (fNIRS) signals impaired by extra-cranial confounds: an easy-to-use filter method.

    PubMed

    Haeussinger, F B; Dresler, T; Heinzel, S; Schecklmann, M; Fallgatter, A J; Ehlis, A-C

    2014-07-15

    Functional near-infrared spectroscopy (fNIRS) is an optical neuroimaging method that detects temporal concentration changes of oxygenated and deoxygenated hemoglobin within the cortex, so that neural activation can be inferred. However, even though fNIRS is a very practical and well-tolerated method with several advantages particularly in methodically challenging measurement situations (e.g., during tasks involving movement or open speech), it has been shown to be confounded by systemic signal components of non-cerebral, extra-cranial origin (e.g., changes in blood pressure and heart rate). In particular, event-related signal patterns induced by dilation or constriction of superficial forehead and temple veins impair the detection of frontal brain activation elicited by cognitive tasks. To further investigate this phenomenon, we conducted a simultaneous fNIRS-fMRI study applying a working-memory paradigm (n-back). Extra-cranial signals were obtained by extracting the BOLD signal from fMRI voxels within the skin. To develop a filter method that corrects for extra-cranial skin blood flow, particularly intended for fNIRS data sets recorded by widely used continuous-wave systems with fixed optode distances, we identified channels over the forehead with probable major extra-cranial signal contributions. The averaged signal from these channels was then subtracted from all fNIRS channels of the probe set. Additionally, the data were corrected for motion and non-evoked systemic artifacts. Applying these filters, we show that measurement of brain activation in frontal brain areas with fNIRS is substantially improved: the resulting signal resembles the fMRI parameters more closely than before the correction. Future fNIRS studies measuring functional brain activation in the forehead region should consider different filter options to correct for interfering extra-cranial signals. Copyright © 2014 Elsevier Inc. All rights reserved.
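
    The core filter step, averaging skin-dominated channels and subtracting that average from every channel, is a one-liner in array form. The data below are entirely simulated (a shared "skin" oscillation plus a task response in half the channels); this sketch omits the study's additional motion and systemic-artifact corrections.

```python
import numpy as np

rng = np.random.default_rng(1)
n_ch, n_t = 8, 1000
neural = np.zeros((n_ch, n_t))
neural[:4, 400:600] = 1.0                  # task response in channels 0-3 only
skin = np.sin(np.linspace(0, 20, n_t))     # shared extra-cranial oscillation
data = neural + skin + 0.05 * rng.standard_normal((n_ch, n_t))

# Channels 4-7 play the role of the forehead channels assumed to be
# dominated by skin blood flow: average them and subtract everywhere.
skin_estimate = data[4:].mean(axis=0)
filtered = data - skin_estimate
```

    After subtraction the task-responsive channels retain their event-related response while the shared extra-cranial component is largely removed.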

  7. BeiDou Geostationary Satellite Code Bias Modeling Using Fengyun-3C Onboard Measurements.

    PubMed

    Jiang, Kecai; Li, Min; Zhao, Qile; Li, Wenwen; Guo, Xiang

    2017-10-27

    This study validated and investigated elevation- and frequency-dependent systematic biases observed in ground-based code measurements of the Chinese BeiDou navigation satellite system, using the onboard BeiDou code measurement data from the Chinese meteorological satellite Fengyun-3C. Particularly for geostationary earth orbit satellites, sky-view coverage can be achieved over the entire elevation and azimuth angle ranges with the available onboard tracking data, which is more favorable to modeling code biases. Apart from the BeiDou-satellite-induced biases, the onboard BeiDou code multipath effects also indicate pronounced near-field systematic biases that depend only on signal frequency and the line-of-sight directions. To correct these biases, we developed a code correction model that estimates the BeiDou-satellite-induced biases as piece-wise linear functions within different satellite groups and the near-field systematic biases with a grid approach. To validate the code bias model, we carried out orbit determination using single-frequency BeiDou data with and without code bias corrections applied. Orbit precision statistics indicate that those code biases can seriously degrade single-frequency orbit determination. After the correction model was applied, the orbit position errors, 3D root mean square, were reduced from 150.6 to 56.3 cm.
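
    A piece-wise linear bias model of the kind described reduces, at evaluation time, to interpolation between calibration nodes. The node elevations and bias values below are invented for illustration; they are not the paper's estimated BeiDou biases.

```python
import numpy as np

# Hypothetical elevation-dependent code bias (metres) at calibration nodes.
node_elev = np.array([0.0, 15.0, 30.0, 45.0, 60.0, 90.0])
node_bias = np.array([0.0, -0.2, -0.4, -0.5, -0.4, -0.1])

def code_bias(elevation_deg):
    """Piece-wise linear bias model: interpolate between nodes."""
    return np.interp(elevation_deg, node_elev, node_bias)

# Apply the correction by subtracting the modeled bias from a (made-up)
# biased pseudorange observation at 37.5 degrees elevation.
raw_pseudorange = 2.0e7 + code_bias(37.5)
corrected = raw_pseudorange - code_bias(37.5)
```

    In the actual model the node values are estimated per satellite group, and a separate grid model handles the near-field biases.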

  8. BeiDou Geostationary Satellite Code Bias Modeling Using Fengyun-3C Onboard Measurements

    PubMed Central

    Jiang, Kecai; Li, Min; Zhao, Qile; Li, Wenwen; Guo, Xiang

    2017-01-01

    This study validated and investigated elevation- and frequency-dependent systematic biases observed in ground-based code measurements of the Chinese BeiDou navigation satellite system, using the onboard BeiDou code measurement data from the Chinese meteorological satellite Fengyun-3C. Particularly for geostationary earth orbit satellites, sky-view coverage can be achieved over the entire elevation and azimuth angle ranges with the available onboard tracking data, which is more favorable to modeling code biases. Apart from the BeiDou-satellite-induced biases, the onboard BeiDou code multipath effects also indicate pronounced near-field systematic biases that depend only on signal frequency and the line-of-sight directions. To correct these biases, we developed a code correction model that estimates the BeiDou-satellite-induced biases as piece-wise linear functions within different satellite groups and the near-field systematic biases with a grid approach. To validate the code bias model, we carried out orbit determination using single-frequency BeiDou data with and without code bias corrections applied. Orbit precision statistics indicate that those code biases can seriously degrade single-frequency orbit determination. After the correction model was applied, the orbit position errors, 3D root mean square, were reduced from 150.6 to 56.3 cm. PMID:29076998

  9. Epistasis analysis for quantitative traits by functional regression model.

    PubMed

    Zhang, Futao; Boerwinkle, Eric; Xiong, Momiao

    2014-06-01

    The critical barrier in interaction analysis for rare variants is that most traditional statistical methods for testing interactions were originally designed for testing the interaction between common variants; they are difficult to apply to rare variants because of their prohibitive computational time and low statistical power. The great challenges for successful detection of interactions with next-generation sequencing (NGS) data are (1) the lack of methods for interaction analysis with rare variants, (2) severe multiple testing, and (3) time-consuming computations. To meet these challenges, we shift the paradigm from interaction analysis between two loci to interaction analysis between two sets of loci or genomic regions, collectively testing interactions between all possible pairs of SNPs within two genomic regions. In other words, we take a genome region as the basic unit of interaction analysis and use high-dimensional data reduction and functional data analysis techniques to develop a novel functional regression model that collectively tests interactions between all possible pairs of single nucleotide polymorphisms (SNPs) within two genome regions. By intensive simulations, we demonstrate that the functional regression models for interaction analysis of a quantitative trait have the correct type 1 error rates and much greater power to detect interactions than current pairwise interaction analysis. The proposed method was applied to exome sequence data from the NHLBI's Exome Sequencing Project (ESP) and the CHARGE-S study. We discovered 27 pairs of genes showing significant interactions after applying the Bonferroni correction (P-values < 4.58 × 10^-10) in the ESP, and 11 were replicated in the CHARGE-S study. © 2014 Zhang et al.; Published by Cold Spring Harbor Laboratory Press.
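
    The multiple-testing step is standard and easy to make concrete. The sketch below implements the ordinary Bonferroni rule (significant when p < α/m for m tests) on made-up p-values; the quoted 4.58 × 10^-10 cutoff is what α = 0.05 divided by roughly 10^8 gene-pair tests would give.

```python
import numpy as np

def bonferroni(p_values, alpha=0.05):
    """Bonferroni correction: declare significance when p < alpha / m."""
    p = np.asarray(p_values)
    threshold = alpha / p.size
    return p < threshold, threshold

# Toy set of 4 interaction-test p-values (illustrative only).
p = np.array([1e-12, 3e-10, 0.01, 0.2])
significant, thr = bonferroni(p, alpha=0.05)
```

    Bonferroni controls the family-wise error rate but is conservative at genome scale, which is one reason region-level tests that reduce the number of comparisons are attractive.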

  10. Impact force identification for composite helicopter blades using minimal sensing

    NASA Astrophysics Data System (ADS)

    Budde, Carson N.

    In this research a method for online impact identification using minimal sensors is developed for rotor hubs with composite blades. Modal impact data and the corresponding responses are recorded at several locations to develop a frequency response function model for each composite blade on the rotor hub. The frequency response model for each blade is used to develop an impact identification algorithm that can identify the location and magnitude of impacts. Impacts are applied in two experimental setups, a four-blade spin test rig and a cantilevered full-sized composite blade. The impacts are estimated to have been applied at the correct location 92.3% of the time for static fiberglass blades, 97.4% of the time for static carbon fiber blades and 99.2% of the time for the full-sized static blade. The estimated location is assessed further and determined to have been placed in the correct chord position 96.1% of the time for static fiberglass blades, 100% of the time for carbon fiber blades and 99.2% of the time for the full-sized blade. Projectile impacts are also applied, statically and during rotation, to the carbon fiber blades on the spin test rig at 57 and 83 RPM. The applied impacts are located to the correct position 63.9%, 41.7% and 33.3% of the time at 0, 57 and 83 RPM, respectively, while the correct chord location is estimated 100% of the time. The impact identification algorithm also estimates the force of an impact, with an average percent difference of 4.64, 2.61 and 1.00 for static fiberglass, full-sized, and carbon fiber blades, respectively. Using a load cell and work equations, the force of impact for a projectile fired from a dynamic firing setup is estimated at about 400 N. The average force measured for applied projectile impacts to the carbon fiber blades, rotating at 0, 57 and 83 RPM, is 368.8, 373.7 and 432.4 N, respectively.

  11. Improved H-κ Method by Harmonic Analysis on Ps and Crustal Multiples in Receiver Functions with respect to Dipping Moho and Crustal Anisotropy

    NASA Astrophysics Data System (ADS)

    Li, J.; Song, X.; Wang, P.; Zhu, L.

    2017-12-01

    The H-κ method (Zhu and Kanamori, 2000) has been widely used to estimate crustal thickness and Vp/Vs ratio from receiver functions. However, in regions where the crustal structure is complicated, the method may produce uncertain or even unrealistic results, arising particularly from a dipping Moho and/or crustal anisotropy. Here, we propose an improved H-κ method that corrects for these effects before stacking. The effect of a dipping Moho and crustal anisotropy on the Ps receiver function has been well studied, but not as much on the crustal multiples (PpPs and PpSs+PsPs). Synthetic tests show that the effect of crustal anisotropy on the multiples is similar to that on Ps, while the effect of a dipping Moho on the multiples is five times that on Ps (the same cosine trend but five times the time shift). A Harmonic Analysis (HA) method for dipping/anisotropy was developed by Wang et al. (2017) for crustal Ps receiver functions to extract parameters of the dipping Moho and crustal azimuthal anisotropy. In real data the crustal multiples are much more complicated than Ps, so we use the HA method (Wang et al., 2017) but apply it separately to Ps and the multiples. Although complicated, the trend of the multiples can still be reasonably well represented by the HA. We then perform separate azimuthal corrections for Ps and the multiples and stack to obtain a combined receiver function. Lastly, the traditional H-κ procedure is applied to the stacked receiver function. We apply the improved H-κ method to 40 CNDSN (Chinese National Digital Seismic Network) stations distributed across a variety of geological settings in the Chinese continent. The results show apparent improvement over the traditional H-κ method, with clearer traces of multiples and stronger stacking energy in the grid search, as well as more reliable H-κ values.
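
    The harmonic-analysis step amounts to fitting a cosine trend in backazimuth and removing it before stacking. The sketch below does this by least squares on synthetic Ps delay times; the 5 s baseline, 0.3 s amplitude and 40° phase are invented, and this is a first-harmonic toy, not the full Wang et al. (2017) decomposition.

```python
import numpy as np

# Synthetic Ps delays vs. backazimuth with the cosine trend a dipping
# Moho produces (values illustrative only).
rng = np.random.default_rng(7)
baz = np.deg2rad(np.arange(0, 360, 10))
t_ps = 5.0 + 0.3 * np.cos(baz - np.deg2rad(40.0)) \
       + 0.02 * rng.standard_normal(baz.size)

# Harmonic analysis: least-squares fit t = a0 + a1*cos(baz) + b1*sin(baz).
G = np.column_stack([np.ones_like(baz), np.cos(baz), np.sin(baz)])
a0, a1, b1 = np.linalg.lstsq(G, t_ps, rcond=None)[0]
amplitude = np.hypot(a1, b1)
phase_deg = np.degrees(np.arctan2(b1, a1)) % 360

# Azimuthal correction: remove the fitted harmonic before stacking.
t_corrected = t_ps - (a1 * np.cos(baz) + b1 * np.sin(baz))
```

    In the proposed method the same idea is applied separately to Ps and to the multiples (whose time shifts are about five times larger) before the corrected traces are stacked for H-κ search.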

  12. Detection of trans–cis flips and peptide-plane flips in protein structures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Touw, Wouter G., E-mail: wouter.touw@radboudumc.nl; Joosten, Robbie P.; Vriend, Gert, E-mail: wouter.touw@radboudumc.nl

    A coordinate-based method is presented to detect peptide bonds that need correction either by a peptide-plane flip or by a trans–cis inversion of the peptide bond. When applied to the whole Protein Data Bank, the method predicts 4617 trans–cis flips and many thousands of hitherto unknown peptide-plane flips. A few examples are highlighted for which a correction of the peptide-plane geometry leads to a corrected understanding of the structure–function relation. All data, including 1088 manually validated cases, are freely available, and the method is accessible from a web server, a web-service interface and through WHAT-CHECK.

  13. ISR corrections to associated HZ production at future Higgs factories

    NASA Astrophysics Data System (ADS)

    Greco, Mario; Montagna, Guido; Nicrosini, Oreste; Piccinini, Fulvio; Volpi, Gabriele

    2018-02-01

    We evaluate the QED corrections due to initial state radiation (ISR) to associated Higgs boson production in electron-positron (e+e-) annihilation at typical energies of interest for the measurement of the Higgs properties at future e+e- colliders, such as CEPC and FCC-ee. We apply the QED Structure Function approach to the four-fermion production process e+e- → μ+μ- bb̄, including both signal and background contributions. We emphasize the relevance of the ISR corrections particularly near threshold and show that finite third-order collinear contributions are mandatory to meet the expected experimental accuracy. We analyze in turn the role played by a full four-fermion calculation and by beam energy spread in precision calculations for Higgs physics at future e+e- colliders.

  14. A bias-corrected CMIP5 dataset for Africa using the CDF-t method - a contribution to agricultural impact studies

    NASA Astrophysics Data System (ADS)

    Moise Famien, Adjoua; Janicot, Serge; Delfin Ochou, Abe; Vrac, Mathieu; Defrance, Dimitri; Sultan, Benjamin; Noël, Thomas

    2018-03-01

    The objective of this paper is to present a new dataset of bias-corrected CMIP5 global climate model (GCM) daily data over Africa. This dataset was obtained using the cumulative distribution function transform (CDF-t) method, a method that has been applied to several regions and contexts but never to Africa. Here CDF-t has been applied over the period 1950-2099, combining Historical runs and climate change scenarios for six variables that are critical for agricultural purposes: precipitation, mean near-surface air temperature, near-surface maximum air temperature, near-surface minimum air temperature, surface downwelling shortwave radiation, and wind speed. WFDEI has been used as the reference dataset to correct the GCMs. Evaluation of the results over West Africa has been carried out on a list of priority user-based metrics that were discussed and selected with stakeholders, including simulated yield from a crop model of maize growth. These bias-corrected GCM data have been compared with another available dataset of bias-corrected GCMs using WATCH Forcing Data as the reference dataset. The impact of the WFD, WFDEI, and EWEMBI reference datasets has also been examined in detail. It is shown that CDF-t is very effective at removing the biases and reducing the high inter-GCM scattering. Differences with other bias-corrected GCM data are mainly due to the differences among the reference datasets. This is particularly true for surface downwelling shortwave radiation, which has a significant impact on simulated maize yields. Projections of future yields over West Africa are quite different, depending on the bias-correction method used. However, all these projections show a similar relative decreasing trend over the 21st century.

  15. Topside correction of IRI by global modeling of ionospheric scale height using COSMIC radio occultation data

    NASA Astrophysics Data System (ADS)

    Wu, M. J.; Guo, P.; Fu, N. F.; Xu, T. L.; Xu, X. S.; Jin, H. L.; Hu, X. G.

    2016-06-01

    The ionospheric scale height is one of the most significant ionospheric parameters, containing information about ion and electron temperatures and the dynamics of the upper ionosphere. In this paper, an empirical orthogonal function (EOF) analysis method is applied to all ionospheric radio occultations of GPS/COSMIC (Constellation Observing System for Meteorology, Ionosphere, and Climate) from 2007 to 2011 to reconstruct a global ionospheric scale height model. This monthly median model has a spatial resolution of 5° in geomagnetic latitude (-87.5° to 87.5°) and a temporal resolution of 2 h in local time. The EOF analysis preserves the characteristics of the scale height quite well in its geomagnetic latitudinal, annual, seasonal, and diurnal variations. In comparison with COSMIC measurements for the year 2012, the reconstructed model shows reasonable accuracy. To improve the topside model of the International Reference Ionosphere (IRI), we adopt the scale height model in the Bent topside model by applying a scale factor q as an additional constraint. With the factor q acting in the exponential profile of the topside ionosphere, the IRI scale height is forced to equal the precise COSMIC measurements, bringing the IRI topside profile closer to realistic density profiles. An internal quality check of this approach is carried out by comparing realistic COSMIC measurements with IRI with and without the correction. In general, the initial IRI model overestimates the topside electron density to some extent, and with the correction introduced by the COSMIC scale height model, the deviation in vertical total electron content (VTEC) between them is reduced. Furthermore, independent validation with Global Ionospheric Maps VTEC implies a reasonable improvement in the IRI VTEC with the topside model correction.
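
    EOF analysis of a space-time field is, computationally, an SVD of the anomaly matrix followed by truncation to the leading modes. The sketch below builds a toy "scale height" field from two known spatial patterns (nothing here is COSMIC data) and reconstructs it from two EOFs.

```python
import numpy as np

# Toy field: 200 time samples x 36 pseudo-latitude grid points, built
# from two spatial patterns plus weak noise (purely illustrative).
rng = np.random.default_rng(3)
grid = np.linspace(-np.pi / 2, np.pi / 2, 36)
pat1, pat2 = np.cos(grid), np.sin(2 * grid)
amps = rng.standard_normal((200, 2))
field = amps @ np.vstack([pat1, pat2]) + 0.01 * rng.standard_normal((200, 36))

# EOF analysis = SVD of the anomaly matrix; keep the k leading modes.
mean = field.mean(axis=0)
U, s, Vt = np.linalg.svd(field - mean, full_matrices=False)
k = 2
reconstructed = U[:, :k] * s[:k] @ Vt[:k] + mean
explained = (s[:k] ** 2).sum() / (s ** 2).sum()
```

    In the paper the retained EOF coefficients are then modeled as smooth functions of month, local time and latitude, which is what turns the decomposition into a predictive climatological model.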

  16. Early correction of septum JJ deformity in unilateral cleft lip-cleft palate.

    PubMed

    Morselli, Paolo G; Pinto, Valentina; Negosanti, Luca; Firinu, Antonella; Fabbri, Erich

    2012-09-01

    The treatment of patients affected by unilateral cleft lip-cleft palate is based on a multistage procedure of surgical and nonsurgical treatments in accordance with the different types of deformity. Over time, the surgical approach for the correction of a nasal deformity in a cleft lip-cleft palate has changed notably and the protocol of treatment has evolved continuously. Not touching the cleft lip nose in the primary repair was dogmatic in the past, even though this meant severe functional, aesthetic, and psychological problems for the child. McComb reported a new technique for placement of the alar cartilage during lip repair. The positive results of this new approach proved that early correction of the alar cartilage anomaly is essential for harmonious facial growth with stable results and without discomfort for the child. The authors applied the same principles used for the treatment of the alar cartilage to correction of the septum deformity, introducing a primary rhinoseptoplasty during the cheiloplasty. The authors compared two groups: group A, which underwent septoplasty during cleft lip repair; and group B, which did not. After anthropometric evaluation of the two groups, the authors observed better symmetry of nasal shape, correct growth of the nose, and a strong reduction of the nasal deformity in the patients who underwent primary JJ septum deformity correction. The authors conclude that, like the alar cartilage, the septum can be repositioned during primary surgery without causing growth anomalies, improving the morphologic and functional results.

  17. Simulation and Correction of Triana-Viewed Earth Radiation Budget with ERBE/ISCCP Data

    NASA Technical Reports Server (NTRS)

    Huang, Jian-Ping; Minnis, Patrick; Doelling, David R.; Valero, Francisco P. J.

    2002-01-01

    This paper describes the simulation of the earth radiation budget (ERB) as viewed by Triana and the development of correction models for converting Triana-viewed radiances into a complete ERB. A full range of Triana views and global radiation fields is simulated using a combination of datasets from ERBE (Earth Radiation Budget Experiment) and ISCCP (International Satellite Cloud Climatology Project) and analyzed with a set of empirical correction factors specific to the Triana views. The results show that the accuracy of the global correction factors used to estimate the ERB from Triana radiances is a function of the Triana position relative to the Lagrange-1 (L1) point or the Sun location. Spectral analysis of the global correction factor indicates that both shortwave (SW; 0.2-5.0 microns) and longwave (LW; 5-50 microns) parameters undergo seasonal and diurnal cycles that dominate the periodic fluctuations. The diurnal cycle, especially its amplitude, is also strongly dependent on the seasonal cycle. Based on these results, models are developed to correct the radiances for unviewed areas and anisotropic emission and reflection. A preliminary assessment indicates that these correction models can be applied to Triana radiances to produce the most accurate global ERB to date.

  18. Generalized Green's function molecular dynamics for canonical ensemble simulations

    NASA Astrophysics Data System (ADS)

    Coluci, V. R.; Dantas, S. O.; Tewary, V. K.

    2018-05-01

    The need for small integration time steps (~1 fs) in conventional molecular dynamics simulations is an important issue that inhibits the study of physical, chemical, and biological systems on realistic timescales. Additionally, to simulate those systems in contact with a thermal bath, thermostating techniques are usually applied. In this work, we generalize the Green's function molecular dynamics technique to allow simulations within the canonical ensemble. By applying this technique to one-dimensional systems, we were able to correctly describe important thermodynamic properties such as temperature fluctuations, the temperature distribution, and the velocity autocorrelation function. We show that the proposed technique also allows the use of time steps one order of magnitude larger than those typically used in conventional molecular dynamics simulations. We expect that this technique can be used in long-timescale molecular dynamics simulations.
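
    One of the validation quantities named above, the velocity autocorrelation function (VACF), is straightforward to compute from any trajectory. The sketch below estimates the normalized VACF for a discretized Ornstein-Uhlenbeck velocity process standing in for a thermostatted particle; it is a generic estimator, not the Green's-function MD method itself.

```python
import numpy as np

def vacf(v, max_lag):
    """Normalized velocity autocorrelation <v(0) v(t)> / <v(0) v(0)>,
    averaged over all time origins."""
    n = len(v)
    c = np.array([np.mean(v[:n - lag] * v[lag:]) for lag in range(max_lag)])
    return c / c[0]

# Discrete Ornstein-Uhlenbeck velocities (friction gamma, unit thermal
# variance); the VACF should decay roughly like exp(-gamma * t).
rng = np.random.default_rng(5)
gamma, dt, n = 1.0, 0.01, 200_000
v = np.empty(n)
v[0] = 0.0
for i in range(1, n):
    v[i] = v[i - 1] * (1 - gamma * dt) + np.sqrt(2 * gamma * dt) * rng.standard_normal()
c = vacf(v, max_lag=500)
```

    Comparing such estimated VACFs (and temperature histograms) against the analytic canonical-ensemble expectations is a standard way to check that a thermostatted integrator samples the right ensemble.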

  19. SU-E-T-469: A Practical Approach for the Determination of Small Field Output Factors Using Published Monte Carlo Derived Correction Factors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Calderon, E; Siergiej, D

    2014-06-01

    Purpose: Output factor determination for small fields (less than 20 mm) presents significant challenges due to ion chamber volume averaging and diode over-response. Measured output factor values between detectors are known to have large deviations as field sizes are decreased. No set standard to resolve this difference in measurement exists. We observed differences between measured output factors of up to 14% using two different detectors. Published Monte Carlo derived correction factors were used to address this challenge and decrease the output factor deviation between detectors. Methods: Output factors for Elekta's linac-based stereotactic cone system were measured using the EDGE detector (Sun Nuclear) and the A16 ion chamber (Standard Imaging). Measurement conditions were 100 cm SSD (source to surface distance) and 1.5 cm depth. Output factors were first normalized to a 10.4 cm × 10.4 cm field size using a daisy-chaining technique to minimize the dependence of detector response on field size. An equation expressing the relation between published Monte Carlo correction factors as a function of field size for each detector was derived. The measured output factors were then multiplied by the calculated correction factors. EBT3 gafchromic film dosimetry was used to independently validate the corrected output factors. Results: Without correction, the deviation in output factors between the EDGE and A16 detectors ranged from 1.3 to 14.8%, depending on cone size. After applying the calculated correction factors, this deviation fell to 0 to 3.4%. Output factors determined with film agree within 3.5% of the corrected output factors. Conclusion: We present a practical approach to applying published Monte Carlo derived correction factors to measured small field output factors for the EDGE and A16 detectors. Using this method, we were able to decrease the percent deviation between both detectors from 14.8% to 3.4%.
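    The correction step described above reduces to multiplying each measured output factor by a field-size-dependent Monte Carlo correction factor. A minimal Python sketch; the output factors and k values below are invented for illustration, since real values are detector-specific and come from the published Monte Carlo data:

```python
# Hypothetical illustration of applying Monte Carlo derived correction
# factors to measured small-field output factors. All numbers invented.

def correct_output_factor(of_measured, k_mc):
    """Multiply a measured output factor by its MC correction factor."""
    return of_measured * k_mc

# Example: a diode over-responds in small fields, so its k is below 1.
measured = {5.0: 0.68, 10.0: 0.79, 20.0: 0.87}   # cone diameter (mm) -> OF
k_factors = {5.0: 0.95, 10.0: 0.97, 20.0: 0.99}  # invented k values

corrected = {cone: correct_output_factor(measured[cone], k_factors[cone])
             for cone in measured}
```

The interpolating equation mentioned in the abstract would supply `k_factors` as a continuous function of field size rather than a lookup table.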

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Borowik, Piotr, E-mail: pborow@poczta.onet.pl; Thobel, Jean-Luc, E-mail: jean-luc.thobel@iemn.univ-lille1.fr; Adamowicz, Leszek, E-mail: adamo@if.pw.edu.pl

    Standard computational methods used to take the Pauli exclusion principle into account in Monte Carlo (MC) simulations of electron transport in semiconductors may give unphysical results in the low-field regime, where the obtained electron distribution function takes values exceeding unity. Modified algorithms have already been proposed that correctly account for electron scattering on phonons or impurities. The present paper extends this approach and proposes an improved simulation scheme that includes the Pauli exclusion principle for electron–electron (e–e) scattering in MC simulations. Simulations with significantly reduced computational cost recreate correct values of the electron distribution function. The proposed algorithm is applied to study transport properties of degenerate electrons in graphene with e–e interactions. This required adapting the treatment of e–e scattering to the case of a linear band dispersion relation. Hence, this part of the simulation algorithm is described in detail.
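    The core of a Pauli-blocking Monte Carlo step is a rejection test: a proposed scattering event into final state k' is accepted with probability 1 - f(k'), where f is the occupation of the candidate final state. A minimal sketch of that standard rejection step (the paper's modified algorithms concern how f is estimated during the simulation, which this sketch does not attempt):

```python
import random

def attempt_scattering(f_final, rng=random.random):
    """Accept a proposed scattering event with probability 1 - f(k'),
    where f_final = f(k') is the occupation of the candidate final
    state. This enforces Pauli blocking: full states reject transitions."""
    return rng() < (1.0 - f_final)

# A fully occupied final state (f = 1) blocks every transition;
# an empty one (f = 0) blocks none.
random.seed(0)
blocked = [attempt_scattering(1.0) for _ in range(1000)]
allowed = [attempt_scattering(0.0) for _ in range(1000)]
```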

  1. Practical Weak-lensing Shear Measurement with Metacalibration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sheldon, Erin S.; Huff, Eric M.

    2017-05-20

    Metacalibration is a recently introduced method to accurately measure weak gravitational lensing shear using only the available imaging data, without the need for prior information about galaxy properties or calibration from simulations. The method involves distorting the image with a small known shear and calculating the response of a shear estimator to that applied shear. The method was shown to be accurate in moderate-sized simulations with galaxy images that had relatively high signal-to-noise ratios and without significant selection effects. In this work we introduce a formalism to correct for both shear response and selection biases. We also observe that for images with relatively low signal-to-noise ratios, the correlated noise that arises during the metacalibration process results in significant bias, for which we develop a simple empirical correction. To test this formalism, we created large image simulations based on both parametric models and real galaxy images, including tests with realistic point-spread functions. We varied the point-spread function ellipticity at the five-percent level. In each simulation we applied a small few-percent shear to the galaxy images. We introduced additional challenges that arise in real data, such as detection thresholds, stellar contamination, and missing data. We applied cuts on the measured galaxy properties to induce significant selection effects. Using our formalism, we recovered the input shear with an accuracy better than a part in a thousand in all cases.
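    The shear response mentioned above is, at its core, a finite difference: the estimator is run on copies of the image sheared by +Δγ and -Δγ, and the response R = (e₊ - e₋)/(2Δγ) divides the mean ellipticity. A toy sketch with invented numbers (the actual method averages the response and additional selection-response terms over a galaxy ensemble):

```python
def shear_response(e_plus, e_minus, dgamma):
    """Finite-difference response of an ellipticity estimator to a
    small artificial shear of +/- dgamma applied to the image."""
    return (e_plus - e_minus) / (2.0 * dgamma)

def calibrated_shear(mean_e, mean_response):
    """Response-corrected shear estimate: <gamma> ~ <e> / <R>."""
    return mean_e / mean_response

# Invented numbers: an estimator that recovers 80% of an applied shear.
R = shear_response(e_plus=0.016, e_minus=0.000, dgamma=0.01)
gamma_hat = calibrated_shear(mean_e=0.008, mean_response=R)
```

The selection-bias correction in the paper generalizes the same idea: the mean response is computed over the galaxies that pass the cuts, so the division also undoes selection effects.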

  2. Precipitate shape fitting and reconstruction by means of 3D Zernike functions

    NASA Astrophysics Data System (ADS)

    Callahan, P. G.; De Graef, M.

    2012-01-01

    3D Zernike functions are defined and used for the reconstruction of precipitate shapes. These functions are orthogonal over the unit ball and allow for an arbitrary shape, scaled to fit inside an embedding sphere, to be decomposed into 3D harmonics. Explicit expressions are given for the general Zernike moments, correcting typographical errors in the literature. Explicit expressions of the Zernike moments for the ellipsoid and the cube are given. The 3D Zernike functions and moments are applied to the reconstruction of γ' precipitate shapes in two Ni-based superalloys, one with nearly cuboidal precipitate shapes, and one with more complex dendritic shapes.

  3. Modeling the archetype cysteine protease reaction using dispersion corrected density functional methods in ONIOM-type hybrid QM/MM calculations; the proteolytic reaction of papain.

    PubMed

    Fekete, Attila; Komáromi, István

    2016-12-07

    A proteolytic reaction of papain with the simple peptide model substrate N-methylacetamide has been studied. Our aim was twofold: (i) we proposed a plausible reaction mechanism with the aid of potential energy surface scans and second geometrical derivatives calculated at the stationary points, and (ii) we investigated the applicability of dispersion corrected density functional methods in comparison with the popular hybrid generalized gradient approximation (GGA) method (B3LYP) without such a correction in the QM/MM calculations for this particular problem. In the resting state of papain the ion pair and neutral forms of the Cys-His catalytic dyad have approximately the same energy and are separated by only a small barrier. Zero point vibrational energy correction shifted this equilibrium slightly to the neutral form. On the other hand, the electrostatic solvation free energy corrections, calculated using the Poisson-Boltzmann method for structures sampled from molecular dynamics simulation trajectories, resulted in a more stable ion-pair form. All methods we applied predicted an acylation process of at least two elementary steps via a zwitterionic tetrahedral intermediate. Using dispersion corrected DFT methods, the thioester S-C bond formation and the proton transfer from histidine occur in the same elementary step, although not synchronously. The proton transfer lags behind (or at least does not precede) the S-C bond formation. The predicted transition state corresponds mainly to the S-C bond formation while the proton is still on the histidine Nδ atom. In contrast, the B3LYP method using larger basis sets predicts a transition state in which the S-C bond is almost fully formed and which is mainly characterized by the Nδ(histidine) to N(amide) proton transfer. Considerably lower activation energy was predicted (especially by the B3LYP method) for the subsequent amide bond breaking elementary step of acyl-enzyme formation.
Deacylation appeared to be a single elementary step process in all the methods we applied.

  4. Application of the N-quantum approximation to the proton radius problem

    NASA Astrophysics Data System (ADS)

    Cowen, Steven

    This thesis is organized into three parts: 1. Introduction and bound state calculations of electronic and muonic hydrogen, 2. Bound states in motion, and 3. Treatment of soft photons. In the first part, we apply the N-Quantum Approximation (NQA) to electronic and muonic hydrogen and search for any new corrections to energy levels that could account for the 0.31 meV discrepancy of the proton radius problem. We derive a bound state equation and compare our numerical solutions and wave functions to those of the Dirac equation. We find NQA Lamb shift diagrams and calculate the associated energy shift contributions. We do not find any new corrections large enough to account for the discrepancy. In part 2, we discuss the effects of motion on bound states using the NQA. We find classical Lorentz contraction of the lowest order NQA wave function. Finally, in part 3, we develop a clothing transformation for interacting fields in order to produce the correct asymptotic limits. We find that the clothing eliminates a trilinear interaction Hamiltonian term and produces a quadrilinear soft photon interaction term.

  5. Discrete Thermodynamics

    DOE PAGES

    Margolin, L. G.; Hunter, A.

    2017-10-18

    Here, we consider the dependence of velocity probability distribution functions on the finite size of a thermodynamic system. We are motivated by applications to computational fluid dynamics, hence discrete thermodynamics. We begin by describing a coarsening process that represents geometric renormalization. Then, based only on the requirements of conservation, we demonstrate that the pervasive assumption of local thermodynamic equilibrium is not form invariant. We develop a perturbative correction that restores form invariance to second order in a small parameter associated with macroscopic gradients. Finally, we interpret the corrections in terms of unresolved kinetic energy and discuss the implications of our results both in theory and as applied to numerical simulation.

  7. [Principles of PET].

    PubMed

    Beuthien-Baumann, B

    2018-05-01

    Positron emission tomography (PET) is a nuclear medicine procedure applied predominantly in oncological diagnostics. In the form of modern hybrid machines, such as PET computed tomography (PET/CT) and PET magnetic resonance imaging (PET/MRI), it has found wide acceptance and availability. PET is more than just another imaging technique: it is a functional method that, in addition to showing the distribution pattern of the radiopharmaceutical, allows quantification, and its results are used for therapeutic decisions. A profound knowledge of the principles of PET, including the correct indications, patient preparation, and possible artifacts, is mandatory for the correct interpretation of PET results.

  8. Application Research of Fault Tree Analysis in Grid Communication System Corrective Maintenance

    NASA Astrophysics Data System (ADS)

    Wang, Jian; Yang, Zhenwei; Kang, Mei

    2018-01-01

    This paper applies the fault tree analysis method to corrective maintenance of grid communication systems. A fault tree model of a typical system is established from engineering experience, and fault tree theory is used to analyze it, covering structural functions, probability importance, and related measures. The results show that fault tree analysis enables fast fault localization and effective repair of the system. The analysis method also offers guidance for reliability research and upgrading of the system.
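    For independent basic events, the quantitative side of fault tree analysis reduces to combining failure probabilities through AND/OR gates up to the top event. A minimal sketch; the gate structure and probabilities below are invented for illustration, not taken from the paper's grid communication model:

```python
def or_gate(probs):
    """OR gate: the output event occurs if any independent input occurs."""
    p_none = 1.0
    for q in probs:
        p_none *= (1.0 - q)
    return 1.0 - p_none

def and_gate(probs):
    """AND gate: the output event occurs only if all inputs occur."""
    p_all = 1.0
    for q in probs:
        p_all *= q
    return p_all

# Invented tree: top event = OR(AND(A, B), C) with independent
# basic-event probabilities A = 0.1, B = 0.2, C = 0.05.
p_top = or_gate([and_gate([0.1, 0.2]), 0.05])
# p_top = 1 - (1 - 0.1*0.2) * (1 - 0.05) = 0.069
```

Probability importance measures then follow by perturbing one basic-event probability at a time and observing the change in the top-event probability.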

  9. Why not energy conservation?

    NASA Astrophysics Data System (ADS)

    Carlson, Shawn

    2016-01-01

    Energy conservation is a deep principle that is obeyed by all of the fundamental forces of nature. It puts stringent constraints on all systems, particularly systems that are ‘isolated,’ meaning that no energy can enter or escape. Notwithstanding the success of the principle of stationary action, it is fair to wonder to what extent physics can be formulated from the principle of stationary energy. We show that if one interprets mechanical energy as a state function, then its stationarity leads to a novel formulation of classical mechanics. However, unlike Lagrangian and Hamiltonian mechanics, which deliver their state functions via algebraic prescriptions (i.e., the Lagrangian is always the difference between a system’s kinetic and potential energies), this new formalism identifies its state functions as the solutions to a differential equation. This is an important difference because differential equations can generate more general solutions than algebraic recipes. When applied to Newtonian systems for which the energy function is separable, these state functions are always the mechanical energy. However, while the stationary state function for a charged particle moving in an electromagnetic field proves not to be energy, the function nevertheless correctly encodes the dynamics of the system. Moreover, the stationary state function for a free relativistic particle proves not to be the energy either. Rather, our differential equation yields the relativistic free-particle Lagrangian (plus a non-dynamical constant) in its correct dynamical context. To explain how this new formalism can consistently deliver stationary state functions that give the correct dynamics but that are not always the mechanical energy, we propose that energy conservation is a specific realization of a deeper principle of stationarity that governs both relativistic and non-relativistic mechanics.

  10. Correction of clock errors in seismic data using noise cross-correlations

    NASA Astrophysics Data System (ADS)

    Hable, Sarah; Sigloch, Karin; Barruol, Guilhem; Hadziioannou, Céline

    2017-04-01

    Correct and verifiable timing of seismic records is crucial for most seismological applications. For seismic land stations, frequent synchronization of the internal station clock with a GPS signal should ensure accurate timing, but loss of GPS synchronization is a common occurrence, especially for remote, temporary stations. In such cases, retrieval of clock timing has been a long-standing problem. The same timing problem applies to Ocean Bottom Seismometers (OBS), where no GPS signal can be received during deployment and only two GPS synchronizations can be attempted upon deployment and recovery. If successful, a skew correction is usually applied, where the final timing deviation is interpolated linearly across the entire operation period. If GPS synchronization upon recovery fails, then even this simple and unverified, first-order correction is not possible. In recent years, the usage of cross-correlation functions (CCFs) of ambient seismic noise has been demonstrated as a clock-correction method for certain network geometries. We demonstrate the great potential of this technique for island stations and OBS that were installed in the course of the Réunion Hotspot and Upper Mantle - Réunions Unterer Mantel (RHUM-RUM) project in the western Indian Ocean. Four stations on the island La Réunion were affected by clock errors of up to several minutes due to a missing GPS signal. CCFs are calculated for each day and compared with a reference cross-correlation function (RCF), which is usually the average of all CCFs. The clock error of each day is then determined from the measured shift between the daily CCFs and the RCF. To improve the accuracy of the method, CCFs are computed for several land stations and all three seismic components. Averaging over these station pairs and their 9 component pairs reduces the standard deviation of the clock errors by a factor of 4 (from 80 ms to 20 ms). 
This procedure permits a continuous monitoring of clock errors where small clock drifts (1 ms/day) as well as large clock jumps (6 min) are identified. The same method is applied to records of five OBS stations deployed within a radius of 150 km around La Réunion. The assumption of a linear clock drift is verified by correlating OBS for which GPS-based skew corrections were available with land stations. For two OBS stations without skew estimates, we find clock drifts of 0.9 ms/day and 0.4 ms/day. This study salvages expensive seismic records from remote regions that would be otherwise lost for seismicity or tomography studies.
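    The central computation above — the shift between a daily CCF and the reference CCF — can be sketched as the argmax of their cross-correlation. The waveforms below are synthetic; the real analysis averages over station pairs and the nine component pairs, and measures sub-sample shifts rather than the integer-sample lag used in this sketch:

```python
import numpy as np

def clock_error(daily_ccf, reference_ccf, dt):
    """Estimate a clock error as the lag (in seconds) that best aligns
    a daily noise cross-correlation function (CCF) with the reference
    CCF, found via the argmax of their cross-correlation."""
    cc = np.correlate(daily_ccf, reference_ccf, mode="full")
    lag = int(np.argmax(cc)) - (len(reference_ccf) - 1)
    return lag * dt

# Synthetic check: a CCF delayed by 3 samples at dt = 0.05 s.
t = np.arange(200)
ref = np.exp(-0.5 * ((t - 100) / 5.0) ** 2)  # toy reference CCF
daily = np.roll(ref, 3)                       # same CCF, clock off by 3 samples
err = clock_error(daily, ref, dt=0.05)        # 3 * 0.05 = 0.15 s
```

Averaging such estimates over many station and component pairs, as described above, reduces the scatter of the recovered clock errors.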

  11. A comparative study of nonparametric methods for pattern recognition

    NASA Technical Reports Server (NTRS)

    Hahn, S. F.; Nelson, G. D.

    1972-01-01

    The applied research discussed in this report determines and compares the correct classification percentage of the nonparametric sign test, Wilcoxon's signed rank test, and K-class classifier with the performance of the Bayes classifier. The performance is determined for data which have Gaussian, Laplacian and Rayleigh probability density functions. The correct classification percentage is shown graphically for differences in modes and/or means of the probability density functions for four, eight and sixteen samples. The K-class classifier performed very well with respect to the other classifiers used. Since the K-class classifier is a nonparametric technique, it usually performed better than the Bayes classifier which assumes the data to be Gaussian even though it may not be. The K-class classifier has the advantage over the Bayes in that it works well with non-Gaussian data without having to determine the probability density function of the data. It should be noted that the data in this experiment was always unimodal.

  12. Computer-assisted design and synthesis of a highly selective smart adsorbent for extraction of clonazepam from human serum.

    PubMed

    Aqababa, Heydar; Tabandeh, Mehrdad; Tabatabaei, Meisam; Hasheminejad, Meisam; Emadi, Masoomeh

    2013-01-01

    A computational approach was applied to screen functional monomers and polymerization solvents for the rational design of molecularly imprinted polymers (MIPs) as smart adsorbents for solid-phase extraction of clonazepam (CLO) from human serum. The computed binding energies of the complexes formed between the template and functional monomers were compared. The primary computational results were corrected by taking into account both the basis set superposition error (BSSE) and the effect of the polymerization solvent, using the counterpoise (CP) correction and the polarizable continuum model, respectively. Based on the theoretical calculations, trifluoromethyl acrylic acid (TFMAA) and acrylonitrile (ACN) were found to be the best and the worst functional monomers, respectively. To test the accuracy of the computational results, three MIPs were synthesized with different functional monomers and their Langmuir-Freundlich (LF) isotherms were studied. The experimental results confirmed the computational results and indicated that the MIP synthesized using TFMAA had the highest affinity for CLO in human serum despite the presence of a vast spectrum of ions. Copyright © 2012 Elsevier B.V. All rights reserved.

  13. Accurate donor electron wave functions from a multivalley effective mass theory.

    NASA Astrophysics Data System (ADS)

    Pendo, Luke; Hu, Xuedong

    Multivalley effective mass (MEM) theories combine physical intuition with a marginal need for computational resources, but they tend to be insensitive to variations in the wavefunction. However, recent papers suggest full Bloch functions and suitable central cell donor potential corrections are essential to replicating qualitative and quantitative features of the wavefunction. In this talk, we consider a variational MEM method that can accurately predict both the spectrum and wavefunction of isolated phosphorus donors. Following Gamble et al., we employ a truncated series representation of the Bloch function with a tetrahedrally symmetric central cell correction. We use a dynamic dielectric constant, a feature commonly seen in tight-binding methods. Uniquely, we use a freely extensible basis of either all Slater- or all Gaussian-type functions. With a large basis able to capture the influence of higher energy eigenstates, this method is well positioned to consider the influence of external perturbations, such as an electric field or applied strain, on the charge density. This work is supported by the US Army Research Office (W911NF1210609).

  14. Image-based spectral distortion correction for photon-counting x-ray detectors

    PubMed Central

    Ding, Huanjun; Molloi, Sabee

    2012-01-01

    Purpose: To investigate the feasibility of using an image-based method to correct for distortions induced by various artifacts in the x-ray spectrum recorded with photon-counting detectors for their application in breast computed tomography (CT). Methods: The polyenergetic incident spectrum was simulated with the tungsten anode spectral model using the interpolating polynomials (TASMIP) code and carefully calibrated to match the x-ray tube in this study. Experiments were performed on a Cadmium-Zinc-Telluride (CZT) photon-counting detector with five energy thresholds. Energy bins were adjusted to evenly distribute the recorded counts above the noise floor. BR12 phantoms of various thicknesses were used for calibration. A nonlinear function was selected to fit the count correlation between the simulated and the measured spectra in the calibration process. To evaluate the proposed spectral distortion correction method, an empirical fitting derived from the calibration process was applied on the raw images recorded for polymethyl methacrylate (PMMA) phantoms of 8.7, 48.8, and 100.0 mm. Both the corrected counts and the effective attenuation coefficient were compared to the simulated values for each of the five energy bins. The feasibility of applying the proposed method to quantitative material decomposition was tested using a dual-energy imaging technique with a three-material phantom that consisted of water, lipid, and protein. The performance of the spectral distortion correction method was quantified using the relative root-mean-square (RMS) error with respect to the expected values from simulations or areal analysis of the decomposition phantom. Results: The implementation of the proposed method reduced the relative RMS error of the output counts in the five energy bins with respect to the simulated incident counts from 23.0%, 33.0%, and 54.0% to 1.2%, 1.8%, and 7.7% for 8.7, 48.8, and 100.0 mm PMMA phantoms, respectively. 
The accuracy of the effective attenuation coefficient of PMMA estimate was also improved with the proposed spectral distortion correction. Finally, the relative RMS error of water, lipid, and protein decompositions in dual-energy imaging was significantly reduced from 53.4% to 6.8% after correction was applied. Conclusions: The study demonstrated that dramatic distortions in the recorded raw image yielded from a photon-counting detector could be expected, which presents great challenges for applying the quantitative material decomposition method in spectral CT. The proposed semi-empirical correction method can effectively reduce these errors caused by various artifacts, including pulse pileup and charge sharing effects. Furthermore, rather than detector-specific simulation packages, the method requires a relatively simple calibration process and knowledge about the incident spectrum. Therefore, it may be used as a generalized procedure for the spectral distortion correction of different photon-counting detectors in clinical breast CT systems. PMID:22482608
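    The calibration at the heart of the method fits a nonlinear function mapping measured bin counts to simulated (distortion-free) counts and then applies that fit to raw images. A sketch using a polynomial as a stand-in for the paper's unspecified fitting function; the calibration numbers and the distortion model are invented so the fit recovers them exactly:

```python
import numpy as np

def fit_bin_correction(measured, simulated, deg=2):
    """Fit a polynomial mapping measured bin counts to simulated
    (distortion-free) counts; one such fit is made per energy bin.
    The polynomial is a stand-in for the paper's nonlinear function."""
    return np.polyfit(measured, simulated, deg)

def correct_counts(raw_counts, coeffs):
    """Apply the fitted correction to raw recorded counts."""
    return np.polyval(coeffs, raw_counts)

# Invented calibration data related by an exactly quadratic distortion,
# so the degree-2 fit recovers the true counts.
meas_counts = np.array([1e3, 5e3, 1e4, 5e4, 1e5])
true_counts = 1.05 * meas_counts + 1e-6 * meas_counts ** 2
coeffs = fit_bin_correction(meas_counts, true_counts)
recovered = correct_counts(meas_counts, coeffs)
```

In the paper's workflow, the calibration pairs come from BR12 phantom measurements against TASMIP-simulated spectra, with one fit per energy bin.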

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yanai, Takeshi; Fann, George I.; Beylkin, Gregory

    We present a multiresolution analysis (MRA) approach to fully numerical time-dependent Hartree–Fock and density functional theory (TD-HF/DFT) calculations with the Tamm–Dancoff (TD) approximation. From a reformulation with effective use of the density matrix operator, we obtain a general form of the HF/DFT linear response equation in the first quantization formalism. It can be readily rewritten as an integral equation with the bound-state Helmholtz (BSH) kernel for the Green's function. The MRA implementation of the resultant equation permits excited state calculations without virtual orbitals. Moreover, the integral equation is efficiently and adaptively solved using a numerical multiresolution solver with multiwavelet bases. Our implementation of the TD-HF/DFT methods is applied to calculating the excitation energies of H2, Be, N2, H2O, and C2H4 molecules. The numerical errors of the calculated excitation energies converge in proportion to the residuals of the equation in the molecular orbitals and response functions. The energies of the excited states at a variety of length scales, ranging from short-range valence excitations to long-range Rydberg-type ones, are consistently accurate. It is shown that the multiresolution calculations yield the correct exponential asymptotic tails for the response functions, whereas those computed with Gaussian basis functions are too diffuse or decay too rapidly. Finally, we introduce a simple asymptotic correction to the local spin-density approximation (LSDA) so that in the TDDFT calculations the excited states are correctly bound.

  16. Applying a pelvic corrective force induces forced use of the paretic leg and improves paretic leg EMG activities of individuals post-stroke during treadmill walking.

    PubMed

    Hsu, Chao-Jung; Kim, Janis; Tang, Rongnian; Roth, Elliot J; Rymer, William Z; Wu, Ming

    2017-10-01

    To determine whether applying a mediolateral corrective force to the pelvis during treadmill walking would enhance muscle activity of the paretic leg and improve gait symmetry in individuals with post-stroke hemiparesis. Fifteen subjects with post-stroke hemiparesis participated in this study. A customized cable-driven robotic system based over a treadmill generated a mediolateral corrective force to the pelvis toward the paretic side during early stance phase. Three different amounts of corrective force were applied. Electromyographic (EMG) activity of the paretic leg, spatiotemporal gait parameters and pelvis lateral displacement were collected. Significant increases in integrated EMG of hip abductor, medial hamstrings, soleus, rectus femoris, vastus medialis and tibialis anterior were observed when pelvic corrective force was applied, with pelvic corrective force at 9% of body weight inducing greater muscle activity than 3% or 6% of body weight. Pelvis lateral displacement was more symmetric with pelvic corrective force at 9% of body weight. Applying a mediolateral pelvic corrective force toward the paretic side may enhance muscle activity of the paretic leg and improve pelvis displacement symmetry in individuals post-stroke. Forceful weight shift to the paretic side could potentially force additional use of the paretic leg and improve the walking pattern. Copyright © 2017 International Federation of Clinical Neurophysiology. Published by Elsevier B.V. All rights reserved.

  17. A vibration correction method for free-fall absolute gravimeters

    NASA Astrophysics Data System (ADS)

    Qian, J.; Wang, G.; Wu, K.; Wang, L. J.

    2018-02-01

    An accurate determination of the gravitational acceleration, usually approximated as 9.8 m s⁻², has long played an important role in metrology, geophysics, and geodesy. Absolute gravimetry has experienced rapid development in recent years. Most absolute gravimeters today employ a free-fall method to measure the gravitational acceleration. Noise from ground vibration has become one of the most serious factors limiting measurement precision. Compared to vibration isolators, the vibration correction method is a simple and feasible way to reduce the influence of ground vibrations. A modified vibration correction method is proposed and demonstrated. A two-dimensional golden section search algorithm is used to search for the best parameters of the hypothetical transfer function. Experiments using a T-1 absolute gravimeter are performed. It is verified that for an identical group of drop data, the modified method proposed in this paper achieves better correction with much less computation than previous methods. Compared to vibration isolators, the correction method applies to more hostile environments and even dynamic platforms, and is expected to be used in a wider range of applications.
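    The parameter search described above uses a two-dimensional golden section search; its one-dimensional core can be sketched as follows. The quadratic objective here is a placeholder for illustration, not the gravimeter's transfer-function residual:

```python
import math

def golden_section_min(f, a, b, tol=1e-8):
    """1-D golden-section search for the minimum of a unimodal f on
    [a, b]. The paper applies a two-dimensional variant to find the
    best parameters of the hypothetical transfer function."""
    invphi = (math.sqrt(5.0) - 1.0) / 2.0  # 1/phi, ~0.618
    c = b - invphi * (b - a)
    d = a + invphi * (b - a)
    while abs(b - a) > tol:
        if f(c) < f(d):          # minimum lies in [a, d]
            b, d = d, c
            c = b - invphi * (b - a)
        else:                    # minimum lies in [c, b]
            a, c = c, d
            d = a + invphi * (b - a)
    return 0.5 * (a + b)

# Placeholder objective with a known minimum at x = 2.
x_min = golden_section_min(lambda x: (x - 2.0) ** 2, 0.0, 5.0)
```

A two-dimensional version typically alternates or nests this search over the two transfer-function parameters.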

  18. Protein-ligand interaction energies with dispersion corrected density functional theory and high-level wave function based methods.

    PubMed

    Antony, Jens; Grimme, Stefan; Liakos, Dimitrios G; Neese, Frank

    2011-10-20

    With dispersion-corrected density functional theory (DFT-D3) intermolecular interaction energies for a diverse set of noncovalently bound protein-ligand complexes from the Protein Data Bank are calculated. The focus is on major contacts occurring between the drug molecule and the binding site. Generalized gradient approximation (GGA), meta-GGA, and hybrid functionals are used. DFT-D3 interaction energies are benchmarked against the best available wave function based results that are provided by the estimated complete basis set (CBS) limit of the local pair natural orbital coupled-electron pair approximation (LPNO-CEPA/1) and compared to MP2 and semiempirical data. The size of the complexes and their interaction energies (ΔE(PL)) varies between 50 and 300 atoms and from -1 to -65 kcal/mol, respectively. Basis set effects are considered by applying extended sets of triple- to quadruple-ζ quality. Computed total ΔE(PL) values show a good correlation with the dispersion contribution despite the fact that the protein-ligand complexes contain many hydrogen bonds. It is concluded that an adequate, for example, asymptotically correct, treatment of dispersion interactions is necessary for the realistic modeling of protein-ligand binding. Inclusion of the dispersion correction drastically reduces the dependence of the computed interaction energies on the density functional compared to uncorrected DFT results. DFT-D3 methods provide results that are consistent with LPNO-CEPA/1 and MP2, the differences of about 1-2 kcal/mol on average (<5% of ΔE(PL)) being on the order of their accuracy, while dispersion-corrected semiempirical AM1 and PM3 approaches show a deviating behavior. The DFT-D3 results are found to depend insignificantly on the choice of the short-range damping model. 
We propose to use DFT-D3 as an essential ingredient in a QM/MM approach for advanced virtual screening approaches of protein-ligand interactions to be combined with similarly "first-principle" accounts for the estimation of solvation and entropic effects.
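    The dispersion correction at the center of this abstract has the generic form of damped pairwise -C6/r^6 terms added to the DFT energy. Below is a minimal numerical sketch with Becke-Johnson-style damping and entirely hypothetical C6 coefficients and damping parameters; the real DFT-D3 scheme uses geometry-dependent coefficients, additional C8 terms, and fitted per-functional parameters.

```python
import math

def dispersion_energy(coords, c6, s6=1.0, a1=0.4, a2=4.8):
    """Illustrative pairwise -C6/r^6 dispersion correction with a
    Becke-Johnson-style damping denominator. All parameters here are
    hypothetical placeholders, not fitted DFT-D3 values."""
    energy = 0.0
    n = len(coords)
    for i in range(n):
        for j in range(i + 1, n):
            r = math.dist(coords[i], coords[j])
            c6ij = math.sqrt(c6[i] * c6[j])        # simple combination rule
            r0 = a1 * c6ij ** (1.0 / 6.0) + a2     # damping radius
            energy += -s6 * c6ij / (r ** 6 + r0 ** 6)  # finite at r -> 0
    return energy
```

The damping denominator keeps the correction finite at short range, where the density functional already describes the interaction.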

  19. Transient Spectra in TDDFT: Corrections and Correlations

    NASA Astrophysics Data System (ADS)

    Parkhill, John; Nguyen, Triet

We introduce an atomistic, all-electron, black-box electronic structure code to simulate transient absorption (TA) spectra and apply it to simulate pyrazole and a GFP chromophore derivative. The method is an application of OSCF2, our dissipative extension of time-dependent density functional theory. We compare our simulated spectra directly with recent ultra-fast spectroscopic experiments, showing that they are usefully predicted. We also relate bleaches in the TA signal to Fermi-blocking which would be missed in a simplified model. An important ingredient in the method is the stationary-TDDFT correction scheme recently put forward by Fischer, Govind, and Cramer which allows us to overcome a limitation of adiabatic TDDFT. We demonstrate that OSCF2 is able to predict both the energies of bleaches and induced absorptions, as well as the decay of the transient spectrum, with only the molecular structure as input. With remaining time we will discuss corrections which resolve the non-resonant behavior of driven TDDFT, and correlated corrections to mean-field dynamics.

  20. Evaluation of different approaches to modeling the second-order ionospheric delay on GPS measurements

    NASA Astrophysics Data System (ADS)

    Garcia-Fernandez, M.; Desai, S. D.; Butala, M. D.; Komjathy, A.

    2013-12-01

This work evaluates various approaches to compute the second order ionospheric correction (SOIC) to Global Positioning System (GPS) measurements. When estimating the reference frame using GPS, applying this correction is known to primarily affect the realization of the origin of the Earth's reference frame along the spin axis (Z coordinate). Therefore, the Z translation relative to the International Terrestrial Reference Frame 2008 is used as the metric to evaluate various published approaches to determining the slant total electron content (TEC) for the SOIC: deriving slant TEC directly from GPS measurements, or converting the vertical TEC given by a Global Ionospheric Model (GIM) to slant TEC via a mapping function. All of these approaches agree to 1 mm if the ionospheric shell height needed in GIM-based approaches is set to 600 km. The commonly used shell height of 450 km introduces an offset of 1 to 2 mm. When the SOIC is not applied, the Z axis translation can be reasonably modeled with a ratio of +0.23 mm/TEC units of the daily median GIM vertical TEC. Also, precise point positioning (PPP) solutions (positions and clocks) determined with and without SOIC differ by less than 1 mm only if they are based upon GPS orbit and clock solutions that have, respectively, consistently applied or not applied the correction. Otherwise, deviations of a few millimeters in the north component of the PPP solutions can arise due to inconsistencies with the satellite orbit and clock products, and those deviations exhibit a dependency on solar cycle conditions.
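    The GIM-based approaches above convert vertical TEC to slant TEC with a thin-shell mapping function that depends on the assumed shell height, which is exactly why the 450 km versus 600 km choice matters. A sketch of the standard single-layer mapping (the paper's exact mapping function may differ):

```python
import math

R_EARTH_KM = 6371.0

def slant_tec(vtec, elevation_deg, shell_height_km=600.0):
    """Map vertical TEC to slant TEC with the standard thin-shell
    (single-layer) mapping function. The 600 km default reflects the
    shell height the abstract found to give millimeter-level agreement."""
    z = math.radians(90.0 - elevation_deg)  # zenith angle at the receiver
    sin_zp = R_EARTH_KM / (R_EARTH_KM + shell_height_km) * math.sin(z)
    return vtec / math.sqrt(1.0 - sin_zp ** 2)  # VTEC / cos(z') at the shell
```

A higher shell height lowers the mapping factor, so the same vertical TEC yields a smaller slant TEC, which propagates into the SOIC.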

  1. Multiresolution quantum chemistry in multiwavelet bases: excited states from time-dependent Hartree–Fock and density functional theory via linear response

    DOE PAGES

    Yanai, Takeshi; Fann, George I.; Beylkin, Gregory; ...

    2015-02-25

We present a fully numerical method for time-dependent Hartree–Fock and density functional theory (TD-HF/DFT) with the Tamm–Dancoff (TD) approximation, using a multiresolution analysis (MRA) approach. From a reformulation with effective use of the density matrix operator, we obtain a general form of the HF/DFT linear response equation in the first quantization formalism. It can be readily rewritten as an integral equation with the bound-state Helmholtz (BSH) kernel for the Green's function. The MRA implementation of the resultant equation permits excited state calculations without virtual orbitals. Moreover, the integral equation is efficiently and adaptively solved using a numerical multiresolution solver with multiwavelet bases. Our implementation of the TD-HF/DFT methods is applied to calculating the excitation energies of H2, Be, N2, H2O, and C2H4 molecules. The numerical errors of the calculated excitation energies converge in proportion to the residuals of the equation in the molecular orbitals and response functions. The energies of the excited states at a variety of length scales, ranging from short-range valence excitations to long-range Rydberg-type ones, are consistently accurate. It is shown that the multiresolution calculations yield the correct exponential asymptotic tails for the response functions, whereas those computed with Gaussian basis functions are too diffuse or decay too rapidly. Finally, we introduce a simple asymptotic correction to the local spin-density approximation (LSDA) so that in the TDDFT calculations the excited states are correctly bound.

  2. Science reflects history as society influences science: brief history of "race," "race correction," and the spirometer.

    PubMed

    Lujan, Heidi L; DiCarlo, Stephen E

    2018-06-01

Spirometers are used globally to diagnose respiratory diseases, and most commercially available spirometers "correct" for race. "Race correction" is built into the software of spirometers. To evaluate pulmonary function and to make recordings, the operator must enter the subject's race. In fact, the Joint Working Party of the American Thoracic Society/European Respiratory Society recommends the use of race- and ethnic-specific reference values. In the United States, spirometers apply correction factors of 10-15% for individuals labeled "Black" and 4-6% for people labeled "Asian." Thus race is purported to be a biologically important and scientifically valid category. However, history suggests that race corrections may represent an implicit bias, discrimination, and racism. Furthermore, this practice masks economic and environmental factors. The flawed logic of innate, racial difference is also considered with disability estimates, preemployment physicals, and clinical diagnoses that rely on the spirometer. Thomas Jefferson's Notes on the State of Virginia (1832) may have initiated this mistaken belief by noting deficiencies of the "pulmonary apparatus" of blacks. Plantation physicians used Jefferson's statement to support slavery, believing that forced labor was a way to "vitalize the blood" of deficient black slaves. Samuel Cartwright, a Southern physician and slave holder, was the first to use spirometry to record deficiencies in pulmonary function of blacks. A massive study by Benjamin Apthorp Gould (1869) during the Civil War validated his results. The history of slavery created an environment where racial difference in lung capacity became so widely accepted that race correction became a scientifically valid procedure.

  3. Modified Monte Carlo method for study of electron transport in degenerate electron gas in the presence of electron-electron interactions, application to graphene

    NASA Astrophysics Data System (ADS)

    Borowik, Piotr; Thobel, Jean-Luc; Adamowicz, Leszek

    2017-07-01

Standard computational methods used to incorporate the Pauli exclusion principle into Monte Carlo (MC) simulations of electron transport in semiconductors may give unphysical results in the low-field regime, where the obtained electron distribution function takes values exceeding unity. Modified algorithms have already been proposed that correctly account for electron scattering on phonons or impurities. The present paper extends this approach and proposes an improved simulation scheme that includes the Pauli exclusion principle for electron-electron (e-e) scattering in MC simulations. Simulations with significantly reduced computational cost recreate correct values of the electron distribution function. The proposed algorithm is applied to study the transport properties of degenerate electrons in graphene with e-e interactions. This required adapting the treatment of e-e scattering to the case of a linear band dispersion relation. Hence, this part of the simulation algorithm is described in detail.
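    The core of any Pauli-blocking scheme is a rejection step: a scattering event is kept only if the final states are vacant. A schematic sketch for e-e scattering, where both final states must be unoccupied (illustrative only; the paper's contribution is a reduced-cost way of organizing such tests, which this toy does not reproduce):

```python
import random

def accept_ee_scattering(f1_final, f2_final, rng=random.random):
    """Schematic Pauli-blocking rejection step for electron-electron
    scattering: keep the event only with probability (1 - f1')(1 - f2'),
    the joint vacancy of the two final states. Without such a test the
    simulated distribution function can exceed unity for degenerate gases."""
    return rng() < (1.0 - f1_final) * (1.0 - f2_final)
```

With fully vacant final states every attempt is accepted; with either final state fully occupied every attempt is rejected, which is what keeps the occupancy bounded by one.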

  4. Event-by-Event Continuous Respiratory Motion Correction for Dynamic PET Imaging.

    PubMed

    Yu, Yunhan; Chan, Chung; Ma, Tianyu; Liu, Yaqiang; Gallezot, Jean-Dominique; Naganawa, Mika; Kelada, Olivia J; Germino, Mary; Sinusas, Albert J; Carson, Richard E; Liu, Chi

    2016-07-01

Existing respiratory motion-correction methods are applied only to static PET imaging. We have previously developed an event-by-event respiratory motion-correction method with correlations between internal organ motion and external respiratory signals (INTEX). This method is uniquely appropriate for dynamic imaging because it corrects motion for each time point. In this study, we applied INTEX to human dynamic PET studies with various tracers and investigated the impact on kinetic parameter estimation. The use of 3 tracers was investigated in a study of 12 human subjects: a myocardial perfusion tracer, (82)Rb (n = 7); a pancreatic β-cell tracer, (18)F-FP(+)DTBZ (n = 4); and a tumor hypoxia tracer, (18)F-fluoromisonidazole ((18)F-FMISO) (n = 1). Both rest and stress studies were performed for (82)Rb. The Anzai belt system was used to record respiratory motion. Three-dimensional internal organ motion in high temporal resolution was calculated by INTEX to guide event-by-event respiratory motion correction of target organs in each dynamic frame. Time-activity curves of regions of interest drawn based on end-expiration PET images were obtained. For (82)Rb studies, K1 was obtained with a 1-tissue model using a left-ventricle input function. Rest-stress myocardial blood flow (MBF) and coronary flow reserve (CFR) were determined. For (18)F-FP(+)DTBZ studies, the total volume of distribution was estimated with arterial input functions using the multilinear analysis 1 method. For the (18)F-FMISO study, the net uptake rate Ki was obtained with a 2-tissue irreversible model using a left-ventricle input function. All parameters were compared with the values derived without motion correction. With INTEX, K1 and MBF increased by 10% ± 12% and 15% ± 19%, respectively, for (82)Rb stress studies. CFR increased by 19% ± 21%. For studies with motion amplitudes greater than 8 mm (n = 3), K1, MBF, and CFR increased by 20% ± 12%, 30% ± 20%, and 34% ± 23%, respectively.
For (82)Rb rest studies, INTEX had minimal effect on parameter estimation. The total volume of distribution of (18)F-FP(+)DTBZ and Ki of (18)F-FMISO increased by 17% ± 6% and 20%, respectively. Respiratory motion can have a substantial impact on dynamic PET in the thorax and abdomen. The INTEX method using continuous external motion data substantially changed parameters in kinetic modeling. More accurate estimation is expected with INTEX. © 2016 by the Society of Nuclear Medicine and Molecular Imaging, Inc.

  5. Estimation of arterial input by a noninvasive image derived method in brain H2 15O PET study: confirmation of arterial location using MR angiography

    NASA Astrophysics Data System (ADS)

    Muinul Islam, Muhammad; Tsujikawa, Tetsuya; Mori, Tetsuya; Kiyono, Yasushi; Okazawa, Hidehiko

    2017-06-01

A noninvasive method to estimate the input function directly from H2 15O brain PET data for measurement of cerebral blood flow (CBF) was proposed in this study. The image derived input function (IDIF) method extracted the time-activity curves (TAC) of the major cerebral arteries at the skull base from the dynamic PET data. The extracted primordial IDIF showed almost the same radioactivity as the arterial input function (AIF) from sampled blood at the plateau part in the later phase, but significantly lower radioactivity in the initial arterial phase compared with that of the AIF-TAC. To correct the initial part of the IDIF, a dispersion function was applied, and two constants for the correction were determined by fitting with the individual AIF in 15 patients with unilateral arterial stenoocclusive lesions. The areas under the curves (AUC) from the two input functions showed good agreement, with a mean AUCIDIF/AUCAIF ratio of 0.92  ±  0.09. The final products of CBF and arterial-to-capillary vascular volume (V 0) obtained from the IDIF and AIF showed no difference and were highly correlated.
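    A common way to undo dispersion in a measured input function (not necessarily the two-constant model fitted in this paper) assumes the measured curve is the true input convolved with an exponential kernel exp(-t/tau)/tau; the true curve is then recovered as C_true(t) = C_meas(t) + tau * dC_meas/dt. A minimal sketch under that assumption:

```python
import numpy as np

def dispersion_correct(tac, t, tau):
    """Correct an input-function time-activity curve for exponential
    dispersion with time constant tau. Assumes the standard model
    C_meas = C_true * exp(-t/tau)/tau (convolution), which inverts to
    C_true = C_meas + tau * dC_meas/dt."""
    return tac + tau * np.gradient(tac, t)
```

For example, a step input of height A dispersed with time constant tau appears as A(1 - exp(-t/tau)), and the correction recovers the constant A.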

  6. Combined distribution functions: A powerful tool to identify cation coordination geometries in liquid systems

    NASA Astrophysics Data System (ADS)

    Sessa, Francesco; D'Angelo, Paola; Migliorati, Valentina

    2018-01-01

    In this work we have developed an analytical procedure to identify metal ion coordination geometries in liquid media based on the calculation of Combined Distribution Functions (CDFs) starting from Molecular Dynamics (MD) simulations. CDFs provide a fingerprint which can be easily and unambiguously assigned to a reference polyhedron. The CDF analysis has been tested on five systems and has proven to reliably identify the correct geometries of several ion coordination complexes. This tool is simple and general and can be efficiently applied to different MD simulations of liquid systems.
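    In spirit, a combined distribution function is a joint histogram of two structural descriptors accumulated over MD frames, so a coordination polyhedron appears as a characteristic 2D fingerprint rather than two overlapping 1D peaks. A minimal sketch (the descriptors and binning here are illustrative, not the paper's exact choices):

```python
import numpy as np

def combined_distribution(dist, angle, bins=(50, 50)):
    """Joint 2D histogram of two per-frame coordination descriptors,
    e.g. a metal-ligand distance and a ligand-metal-ligand angle,
    normalized to a density. Illustrative choice of descriptors."""
    hist, d_edges, a_edges = np.histogram2d(dist, angle, bins=bins,
                                            density=True)
    return hist, d_edges, a_edges
```

Comparing the resulting 2D pattern with those computed for ideal reference polyhedra is what allows an unambiguous geometry assignment.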

  7. A minimal multiconfigurational technique.

    PubMed

    Fernández Rico, J; Paniagua, M; GarcíA De La Vega, J M; Fernández-Alonso, J I; Fantucci, P

    1986-04-01

A direct minimization method previously presented by the authors is applied here to biconfigurational wave functions. A very moderate increase in time per iteration with respect to the one-determinant calculation and good convergence properties have been found. Thus, qualitatively correct studies on singlet systems with strong biradical character can be performed at a cost similar to that required by Hartree-Fock calculations. Copyright © 1986 John Wiley & Sons, Inc.

  8. A novel method for correcting scanline-observational bias of discontinuity orientation

    PubMed Central

    Huang, Lei; Tang, Huiming; Tan, Qinwen; Wang, Dingjian; Wang, Liangqing; Ez Eldin, Mutasim A. M.; Li, Changdong; Wu, Qiong

    2016-01-01

    Scanline observation is known to introduce an angular bias into the probability distribution of orientation in three-dimensional space. In this paper, numerical solutions expressing the functional relationship between the scanline-observational distribution (in one-dimensional space) and the inherent distribution (in three-dimensional space) are derived using probability theory and calculus under the independence hypothesis of dip direction and dip angle. Based on these solutions, a novel method for obtaining the inherent distribution (also for correcting the bias) is proposed, an approach which includes two procedures: 1) Correcting the cumulative probabilities of orientation according to the solutions, and 2) Determining the distribution of the corrected orientations using approximation methods such as the one-sample Kolmogorov-Smirnov test. The inherent distribution corrected by the proposed method can be used for discrete fracture network (DFN) modelling, which is applied to such areas as rockmass stability evaluation, rockmass permeability analysis, rockmass quality calculation and other related fields. To maximize the correction capacity of the proposed method, the observed sample size is suggested through effectiveness tests for different distribution types, dispersions and sample sizes. The performance of the proposed method and the comparison of its correction capacity with existing methods are illustrated with two case studies. PMID:26961249
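    For context, the classical (Terzaghi) treatment of this bias weights each measured discontinuity by the reciprocal sine of the angle between the scanline and the discontinuity plane, usually with a cap near parallelism; the paper's contribution is a distribution-level correction that goes beyond such per-observation weights. A sketch of the classical weight only:

```python
import math

def terzaghi_weight(delta_deg, cap=10.0):
    """Classical Terzaghi weight 1/sin(delta) for a discontinuity whose
    plane makes angle delta (degrees) with the scanline, capped to avoid
    blow-up for near-parallel orientations. Shown for context; this is
    NOT the distribution-based method proposed in the paper."""
    sin_d = math.sin(math.radians(delta_deg))
    return min(1.0 / max(sin_d, 1e-9), cap)
```

Planes perpendicular to the scanline get unit weight; nearly parallel planes, which the scanline almost never intersects, get the capped maximum weight.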

  9. Correction of a Depth-Dependent Lateral Distortion in 3D Super-Resolution Imaging

    PubMed Central

    Manley, Suliana

    2015-01-01

Three-dimensional (3D) localization-based super-resolution microscopy (SR) requires correction of aberrations to accurately represent 3D structure. Here we show how a depth-dependent lateral shift in the apparent position of a fluorescent point source, which we term "wobble", results in warped 3D SR images and provide a software tool to correct this distortion. This system-specific, lateral shift is typically > 80 nm across an axial range of ~ 1 μm. A theoretical analysis based on phase retrieval data from our microscope suggests that the wobble is caused by non-rotationally symmetric phase and amplitude aberrations in the microscope’s pupil function. We then apply our correction to the bacterial cytoskeletal protein FtsZ in live bacteria and demonstrate that the corrected data more accurately represent the true shape of this vertically-oriented ring-like structure. We also include this correction method in a registration procedure for dual-color, 3D SR data and show that it improves target registration error (TRE) at the axial limits over an imaging depth of 1 μm, yielding TRE values of < 20 nm. This work highlights the importance of correcting aberrations in 3D SR to achieve high fidelity between the measurements and the sample. PMID:26600467
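    Once the depth-dependent shift has been calibrated (e.g. from bead z-stacks), applying the correction to localization data amounts to an interpolated subtraction per localization. A sketch with hypothetical calibration arrays standing in for a real calibration curve:

```python
import numpy as np

def correct_wobble(x, y, z, z_cal, dx_cal, dy_cal):
    """Subtract the depth-dependent lateral shift ('wobble') from 3D
    localizations. z_cal, dx_cal, dy_cal are a calibration curve of
    apparent lateral shift versus depth (hypothetical example data);
    the shift at each localization's z is linearly interpolated."""
    x_corr = x - np.interp(z, z_cal, dx_cal)
    y_corr = y - np.interp(z, z_cal, dy_cal)
    return x_corr, y_corr
```

All units must match the calibration (typically nanometers for both the lateral shifts and the axial positions).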

  10. Large deviation principle at work: Computation of the statistical properties of the exact one-point aperture mass

    NASA Astrophysics Data System (ADS)

    Reimberg, Paulo; Bernardeau, Francis

    2018-01-01

We present a formalism based on the large deviation principle (LDP) applied to cosmological density fields, and more specifically to arbitrary functionals of density profiles, and we apply it to the derivation of the cumulant generating function and one-point probability distribution function (PDF) of the aperture mass (Map ), a common observable for cosmic shear observations. We show that the LDP can indeed be used in practice for a much larger family of observables than previously envisioned, such as those built from continuous and nonlinear functionals of density profiles. Taking advantage of this formalism, we can extend previous results, which were based on crude definitions of the aperture mass, with top-hat windows and the use of the reduced shear approximation (replacing the reduced shear with the shear itself). We were precisely able to quantify how this latter approximation affects the Map statistical properties. In particular, we derive the corrective term for the skewness of the Map and reconstruct its one-point PDF.

  11. Detecting long-term growth trends using tree rings: a critical evaluation of methods.

    PubMed

    Peters, Richard L; Groenendijk, Peter; Vlam, Mart; Zuidema, Pieter A

    2015-05-01

Tree-ring analysis is often used to assess long-term trends in tree growth. A variety of growth-trend detection methods (GDMs) exist to disentangle age/size trends in growth from long-term growth changes. However, these detrending methods strongly differ in approach, with possible implications for their output. Here, we critically evaluate the consistency, sensitivity, reliability and accuracy of the four most widely used GDMs: conservative detrending (CD) applies mathematical functions to correct for decreasing ring widths with age; basal area correction (BAC) transforms diameter into basal area growth; regional curve standardization (RCS) detrends individual tree-ring series using average age/size trends; and size class isolation (SCI) calculates growth trends within separate size classes. First, we evaluated whether these GDMs produce consistent results applied to an empirical tree-ring data set of Melia azedarach, a tropical tree species from Thailand. Three GDMs yielded similar results - a growth decline over time - but the widely used CD method did not detect any change. Second, we assessed the sensitivity (probability of correct growth-trend detection), reliability (100% minus probability of detecting false trends) and accuracy (whether the strength of imposed trends is correctly detected) of these GDMs, by applying them to simulated growth trajectories with different imposed trends: no trend, strong trends (-6% and +6% change per decade) and weak trends (-2%, +2%). All methods except CD showed high sensitivity, reliability and accuracy to detect strong imposed trends. However, these were considerably lower in the weak or no-trend scenarios. BAC showed good sensitivity and accuracy, but low reliability, indicating uncertainty of trend detection using this method. Our study reveals that the choice of GDM influences results of growth-trend studies.
We recommend applying multiple methods when analysing trends and encourage performing sensitivity and reliability analysis. Finally, we recommend SCI and RCS, as these methods showed highest reliability to detect long-term growth trends. © 2014 John Wiley & Sons Ltd.
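    The basal area correction (BAC) evaluated above rests on simple geometry: with ring widths measured outward from the pith, each year's basal area increment is the area of the annulus added that year. A minimal sketch assuming a circular stem cross-section:

```python
import math

def basal_area_increments(ring_widths):
    """Convert a pith-to-bark ring-width series into annual basal area
    increments (same length units squared), assuming a circular stem.
    This is the geometric core of BAC, not the full trend analysis."""
    increments = []
    radius = 0.0
    for width in ring_widths:
        new_radius = radius + width
        # area of the annulus grown this year
        increments.append(math.pi * (new_radius ** 2 - radius ** 2))
        radius = new_radius
    return increments
```

Because the same ring width laid down at a larger radius represents more basal area, this transform removes part of the geometric decline in ring width with size.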

  12. Mathematical Formulation used by MATLAB Code to Convert FTIR Interferograms to Calibrated Spectra

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Armstrong, Derek Elswick

This report discusses the mathematical procedures used to convert raw interferograms from Fourier transform infrared (FTIR) sensors to calibrated spectra. The work discussed in this report was completed as part of the Helios project at Los Alamos National Laboratory. MATLAB code was developed to convert the raw interferograms to calibrated spectra. The report summarizes the developed MATLAB scripts and functions, along with a description of the mathematical methods used by the code. The first step in working with raw interferograms is to convert them to uncalibrated spectra by applying an apodization function to the raw data and then by performing a Fourier transform. The developed MATLAB code also addresses phase error correction by applying the Mertz method. This report provides documentation for the MATLAB scripts.
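    The first step described above (apodize, then Fourier-transform) can be sketched compactly. This minimal illustration uses a Hann apodization window and returns a magnitude spectrum; it omits the Mertz phase correction and radiometric calibration steps the report also covers:

```python
import numpy as np

def interferogram_to_spectrum(ifg):
    """Convert a raw interferogram to an uncalibrated magnitude spectrum:
    remove the DC offset, apply an apodization window (Hann here, one
    common choice), then take the real FFT."""
    n = len(ifg)
    centered = ifg - ifg.mean()      # remove DC offset
    apodized = centered * np.hanning(n)
    return np.abs(np.fft.rfft(apodized))
```

Apodization tapers the interferogram ends, trading spectral resolution for suppressed sidelobes in the transformed spectrum.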

  13. Calibration and prediction of removal function in magnetorheological finishing.

    PubMed

    Dai, Yifan; Song, Ci; Peng, Xiaoqiang; Shi, Feng

    2010-01-20

    A calibrated and predictive model of the removal function has been established based on the analysis of a magnetorheological finishing (MRF) process. By introducing an efficiency coefficient of the removal function, the model can be used to calibrate the removal function in a MRF figuring process and to accurately predict the removal function of a workpiece to be polished whose material is different from the spot part. Its correctness and feasibility have been validated by simulations. Furthermore, applying this model to the MRF figuring experiments, the efficiency coefficient of the removal function can be identified accurately to make the MRF figuring process deterministic and controllable. Therefore, all the results indicate that the calibrated and predictive model of the removal function can improve the finishing determinacy and increase the model applicability in a MRF process.

  14. [Rapid Identification of Epicarpium Citri Grandis via Infrared Spectroscopy and Fluorescence Spectrum Imaging Technology Combined with Neural Network].

    PubMed

    Pan, Sha-sha; Huang, Fu-rong; Xiao, Chi; Xian, Rui-yi; Ma, Zhi-guo

    2015-10-01

To explore rapid and reliable methods for the detection of Epicarpium citri grandis (ECG), Fourier transform attenuated total reflection infrared spectroscopy (FTIR/ATR) and fluorescence spectrum imaging technology, each combined with multilayer perceptron (MLP) neural network pattern recognition, were applied to the identification of ECG, and the two methods were compared. Infrared spectra and fluorescence spectral images of 118 samples, 81 ECG and 37 of other kinds, were collected. According to the differences in the spectra, the spectral data in the 550-1800 cm(-1) wavenumber range and the 400-720 nm wavelength range were taken as the objects of discriminant analysis. Principal component analysis (PCA) was then applied to reduce the dimension of the spectroscopic data of ECG, and an MLP neural network was used in combination to classify them. The effects of different data preprocessing methods on the model were compared: multiplicative scatter correction (MSC), standard normal variate correction (SNV), first-order derivative (FD), second-order derivative (SD), and Savitzky-Golay (SG) smoothing. The results showed that after the infrared spectral data were pretreated with Savitzky-Golay (SG) smoothing, an MLP neural network with a sigmoid hidden-layer function gave the best discrimination of ECG: the correct rates for the training set and testing set were both 100%. For the fluorescence spectral imaging technology, multiplicative scatter correction (MSC) was the most effective pretreatment. After data preprocessing, a three-layer MLP neural network with a sigmoid hidden-layer function achieved a 100% correct rate for the training set and 96.7% for the testing set. It was shown that FTIR/ATR and fluorescence spectral imaging technology combined with an MLP neural network can be used for the identification of ECG, with the advantage of being rapid and reliable.
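    Of the preprocessing options compared in this study, multiplicative scatter correction (MSC) has a particularly compact definition: each spectrum is regressed on the mean spectrum of the set, and the fitted offset and slope are removed. A minimal sketch:

```python
import numpy as np

def msc(spectra):
    """Multiplicative scatter correction: regress each spectrum (row) on
    the mean spectrum and remove the fitted additive offset and
    multiplicative slope, leaving scatter-corrected spectra."""
    ref = spectra.mean(axis=0)
    corrected = np.empty_like(spectra, dtype=float)
    for i, s in enumerate(spectra):
        slope, offset = np.polyfit(ref, s, 1)
        corrected[i] = (s - offset) / slope
    return corrected
```

Spectra that differ from the reference only by additive and multiplicative scatter effects collapse onto the same corrected curve.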

  15. A non-JKL density matrix functional for intergeminal correlation between closed-shell geminals from analysis of natural orbital configuration interaction expansions

    NASA Astrophysics Data System (ADS)

    van Meer, R.; Gritsenko, O. V.; Baerends, E. J.

    2018-03-01

    Almost all functionals that are currently used in density matrix functional theory have been created by some a priori ansatz that generates approximations to the second-order reduced density matrix (2RDM). In this paper, a more consistent approach is used: we analyze the 2RDMs (in the natural orbital basis) of rather accurate multi-reference configuration interaction expansions for several small molecules (CH4, NH3, H2O, FH, and N2) and use the knowledge gained to generate new functionals. The analysis shows that a geminal-like structure is present in the 2RDMs, even though no geminal theory has been applied from the onset. It is also shown that the leading non-geminal dynamical correlation contributions are generated by a specific set of double excitations. The corresponding determinants give rise to non-JKL (non Coulomb/Exchange like) multipole-multipole dispersive attractive terms between geminals. Due to the proximity of the geminals, these dispersion terms are large and cannot be omitted, proving pure JKL functionals to be essentially deficient. A second correction emerges from the observation that the "normal" geminal-like exchange between geminals breaks down when one breaks multiple bonds. This problem can be fixed by doubling the exchange between bond broken geminals, effectively restoring the often physically correct high-spin configurations on the bond broken fragments. Both of these corrections have been added to the commonly used antisymmetrized product of strongly orthogonal geminals functional. The resulting non-JKL functional Extended Löwdin-Shull Dynamical-Multibond is capable of reproducing complete active space self-consistent field curves, in which one active orbital is used for each valence electron.

  16. Generalized nonequilibrium vertex correction method in coherent medium theory for quantum transport simulation of disordered nanoelectronics

    NASA Astrophysics Data System (ADS)

    Yan, Jiawei; Ke, Youqi

    2016-07-01

    Electron transport properties of nanoelectronics can be significantly influenced by the inevitable and randomly distributed impurities/defects. For theoretical simulation of disordered nanoscale electronics, one is interested in both the configurationally averaged transport property and its statistical fluctuation that tells device-to-device variability induced by disorder. However, due to the lack of an effective method to do disorder averaging under the nonequilibrium condition, the important effects of disorders on electron transport remain largely unexplored or poorly understood. In this work, we report a general formalism of Green's function based nonequilibrium effective medium theory to calculate the disordered nanoelectronics. In this method, based on a generalized coherent potential approximation for the Keldysh nonequilibrium Green's function, we developed a generalized nonequilibrium vertex correction method to calculate the average of a two-Keldysh-Green's-function correlator. We obtain nine nonequilibrium vertex correction terms, as a complete family, to express the average of any two-Green's-function correlator and find they can be solved by a set of linear equations. As an important result, the averaged nonequilibrium density matrix, averaged current, disorder-induced current fluctuation, and averaged shot noise, which involve different two-Green's-function correlators, can all be derived and computed in an effective and unified way. To test the general applicability of this method, we applied it to compute the transmission coefficient and its fluctuation with a square-lattice tight-binding model and compared with the exact results and other previously proposed approximations. Our results show very good agreement with the exact results for a wide range of disorder concentrations and energies. 
In addition, to incorporate with density functional theory to realize first-principles quantum transport simulation, we have also derived a general form of conditionally averaged nonequilibrium Green's function for multicomponent disorders.

  17. Vertical spatial coherence model for a transient signal forward-scattered from the sea surface

    USGS Publications Warehouse

    Yoerger, E.J.; McDaniel, S.T.

    1996-01-01

    The treatment of acoustic energy forward scattered from the sea surface, which is modeled as a random communications scatter channel, is the basis for developing an expression for the time-dependent coherence function across a vertical receiving array. The derivation of this model uses linear filter theory applied to the Fresnel-corrected Kirchhoff approximation in obtaining an equation for the covariance function for the forward-scattered problem. The resulting formulation is used to study the dependence of the covariance on experimental and environmental factors. The modeled coherence functions are then formed for various geometrical and environmental parameters and compared to experimental data.

  18. Spline function approximation techniques for image geometric distortion representation. [for registration of multitemporal remote sensor imagery

    NASA Technical Reports Server (NTRS)

    Anuta, P. E.

    1975-01-01

    Least squares approximation techniques were developed for use in computer aided correction of spatial image distortions for registration of multitemporal remote sensor imagery. Polynomials were first used to define image distortion over the entire two dimensional image space. Spline functions were then investigated to determine if the combination of lower order polynomials could approximate a higher order distortion with less computational difficulty. Algorithms for generating approximating functions were developed and applied to the description of image distortion in aircraft multispectral scanner imagery. Other applications of the techniques were suggested for earth resources data processing areas other than geometric distortion representation.

  19. Computational logic: its origins and applications.

    PubMed

    Paulson, Lawrence C

    2018-02-01

    Computational logic is the use of computers to establish facts in a logical formalism. Originating in nineteenth century attempts to understand the nature of mathematical reasoning, the subject now comprises a wide variety of formalisms, techniques and technologies. One strand of work follows the 'logic for computable functions (LCF) approach' pioneered by Robin Milner, where proofs can be constructed interactively or with the help of users' code (which does not compromise correctness). A refinement of LCF, called Isabelle, retains these advantages while providing flexibility in the choice of logical formalism and much stronger automation. The main application of these techniques has been to prove the correctness of hardware and software systems, but increasingly researchers have been applying them to mathematics itself.

  20. Practical Weak-lensing Shear Measurement with Metacalibration

    DOE PAGES

    Sheldon, Erin S.; Huff, Eric M.

    2017-05-19

    We report that metacalibration is a recently introduced method to accurately measure weak gravitational lensing shear using only the available imaging data, without the need for prior information about galaxy properties or calibration from simulations. The method involves distorting the image with a small known shear, and calculating the response of a shear estimator to that applied shear. The method was shown to be accurate in moderate-sized simulations with galaxy images that had relatively high signal-to-noise ratios, and without significant selection effects. In this work we introduce a formalism to correct for both shear response and selection biases. We also observe that for images with relatively low signal-to-noise ratios, the correlated noise that arises during the metacalibration process results in significant bias, for which we develop a simple empirical correction. To test this formalism, we created large image simulations based on both parametric models and real galaxy images, including tests with realistic point-spread functions. We varied the point-spread function ellipticity at the five-percent level. In each simulation we applied a small few-percent shear to the galaxy images. We introduced additional challenges that arise in real data, such as detection thresholds, stellar contamination, and missing data. We applied cuts on the measured galaxy properties to induce significant selection effects. Finally, using our formalism, we recovered the input shear with an accuracy better than a part in a thousand in all cases.

  1. Genome-wide gene–gene interaction analysis for next-generation sequencing

    PubMed Central

    Zhao, Jinying; Zhu, Yun; Xiong, Momiao

    2016-01-01

    The critical barrier in interaction analysis for next-generation sequencing (NGS) data is that the traditional pairwise interaction analysis that is suitable for common variants is difficult to apply to rare variants because of their prohibitive computational time, large number of tests and low power. The great challenges for successful detection of interactions with NGS data are (1) the demand for a paradigm change in interaction analysis; (2) severe multiple testing; and (3) heavy computations. To meet these challenges, we shift the paradigm of interaction analysis between two SNPs to interaction analysis between two genomic regions. In other words, we take a gene as a unit of analysis and use functional data analysis techniques as dimensional reduction tools to develop a novel statistic to collectively test interaction between all possible pairs of SNPs within two genomic regions. Through intensive simulations, we demonstrate that the functional logistic regression for interaction analysis has the correct type I error rates and higher power to detect interaction than the currently used methods. The proposed method was applied to a coronary artery disease dataset from the Wellcome Trust Case Control Consortium (WTCCC) study and the Framingham Heart Study (FHS) dataset, and the early-onset myocardial infarction (EOMI) exome sequence datasets with European origin from the NHLBI's Exome Sequencing Project. We discovered that 6 of 27 pairs of significantly interacted genes in the FHS were replicated in the independent WTCCC study and 24 pairs of significantly interacted genes after applying Bonferroni correction in the EOMI study. PMID:26173972

  2. Minimisation of Signal Intensity Differences in Distortion Correction Approaches of Brain Magnetic Resonance Diffusion Tensor Imaging.

    PubMed

    Lee, Dong-Hoon; Lee, Do-Wan; Henry, David; Park, Hae-Jin; Han, Bong-Soo; Woo, Dong-Cheol

    2018-04-12

    To evaluate the effects of signal intensity differences between the b0 image and diffusion tensor imaging (DTI) in the image registration process. To correct signal intensity differences between the b0 image and DTI data, a simple image intensity compensation (SIMIC) method, which is a b0 image re-calculation process from DTI data, was applied before the image registration. The re-calculated b0 image (b0_ext) from each diffusion direction was registered to the b0 image acquired through the MR scanning (b0_nd) with two types of cost functions, and their transformation matrices were acquired. These transformation matrices were then used to register the DTI data. For quantification, the Dice similarity coefficient (DSC) values, diffusion scalar matrix, and quantified fibre numbers and lengths were calculated. The combined SIMIC method with two cost functions showed the highest DSC value (0.802 ± 0.007). Regarding diffusion scalar values and numbers and lengths of fibres from the corpus callosum, superior longitudinal fasciculus, and cortico-spinal tract, only using normalised cross correlation (NCC) showed a specific tendency toward lower values in the brain regions. Image-based distortion correction with SIMIC for DTI data would help in image analysis by accounting for signal intensity differences as one additional option for DTI analysis. • We evaluated the effects of signal intensity differences at DTI registration. • The non-diffusion-weighted image re-calculation process from DTI data was applied. • SIMIC can minimise the signal intensity differences at DTI registration.
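    The Dice similarity coefficient used for quantification above is simply DSC = 2|A ∩ B| / (|A| + |B|) over two binary masks. A minimal sketch with toy masks (illustrative data, not the paper's):

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

a = np.array([[1, 1, 0, 0],
              [1, 1, 0, 0]])
b = np.array([[0, 1, 1, 0],
              [0, 1, 1, 0]])
print(dice(a, b))  # 2 overlapping pixels out of 4 + 4 -> 0.5
print(dice(a, a))  # identical masks -> 1.0
```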

  3. The correct estimate of the probability of false detection of the matched filter in weak-signal detection problems

    NASA Astrophysics Data System (ADS)

    Vio, R.; Andreani, P.

    2016-05-01

    The reliable detection of weak signals is a critical issue in many astronomical contexts and may have severe consequences for determining number counts and luminosity functions, but also for optimizing the use of telescope time in follow-up observations. Because of its optimal properties, one of the most popular and widely used detection techniques is the matched filter (MF). This is a linear filter designed to maximise the detectability of a signal of known structure that is buried in additive Gaussian random noise. In this work we show that in the very common situation where the number and position of the searched signals within a data sequence (e.g. an emission line in a spectrum) or an image (e.g. a point-source in an interferometric map) are unknown, this technique, when applied in its standard form, may severely underestimate the probability of false detection. This is because the correct use of the MF relies upon a priori knowledge of the position of the signal of interest. In the absence of this information, the statistical significance of features that are actually noise is overestimated, and detections are claimed that are actually spurious. For this reason, we present an alternative method of computing the probability of false detection that is based on the probability density function (PDF) of the peaks of a random field. It is able to provide a correct estimate of the probability of false detection for the one-, two- and three-dimensional cases. We apply this technique to a real two-dimensional interferometric map obtained with ALMA.
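    In its standard form the matched filter is just a correlation of the data with the known signal template, with the detection statistic read off at the peak. The sketch below (synthetic data; the template, amplitude, and position are invented for the example) shows the basic operation; note that searching for the peak over all positions, as done here, is precisely the situation in which the abstract warns that the naive false-detection probability is underestimated.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2048
template = np.exp(-0.5 * ((np.arange(31) - 15) / 4.0) ** 2)  # Gaussian line profile
template /= np.linalg.norm(template)                          # unit-norm template

signal = rng.normal(0.0, 1.0, n)                  # additive Gaussian noise
true_pos = 700
signal[true_pos:true_pos + 31] += 8.0 * template  # buried signal of known shape

# Matched filter = correlation of the data with the (normalized) template.
mf = np.correlate(signal, template, mode="valid")
detected = int(np.argmax(mf))
print(detected)  # peak of the matched-filter output marks the candidate position
```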

  4. WE-G-18C-05: Characterization of Cross-Vendor, Cross-Field Strength MR Image Intensity Variations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Paulson, E; Prah, D

    2014-06-15

    Purpose: Variations in MR image intensity and image intensity nonuniformity (IINU) can challenge the accuracy of intensity-based image segmentation and registration algorithms commonly applied in radiotherapy. The goal of this work was to characterize MR image intensity variations across scanner vendors and field strengths commonly used in radiotherapy. Methods: ACR-MRI phantom images were acquired at 1.5T and 3.0T on GE (450w and 750, 23.1), Siemens (Espree and Verio, VB17B), and Philips (Ingenia, 4.1.3) scanners using commercial spin-echo sequences with matched parameters (TE/TR: 20/500 ms, rBW: 62.5 kHz, TH/skip: 5/5 mm). Two radiofrequency (RF) coil combinations were used for each scanner: body coil alone, and combined body and phased-array head coils. Vendor-specific B1- corrections (PURE/Pre-Scan Normalize/CLEAR) were applied in all head coil cases. Images were transferred offline, corrected for IINU using the MNI N3 algorithm, and normalized. Coefficients of variation (CV=σ/μ) and peak image uniformity (PIU = 1−(Smax−Smin)/(Smax+Smin)) estimates were calculated for one homogeneous phantom slice. Kruskal-Wallis and Wilcoxon matched-pairs tests compared mean MR signal intensities and differences between original and N3 image CV and PIU. Results: Wide variations in both MR image intensity and IINU were observed across scanner vendors, field strengths, and RF coil configurations. Applying the MNI N3 correction for IINU resulted in significant improvements in both CV and PIU (p=0.0115, p=0.0235). However, wide variations in overall image intensity persisted, requiring image normalization to improve consistency across vendors, field strengths, and RF coils. These results indicate that B1- correction routines alone may be insufficient in compensating for IINU and image scaling, warranting additional corrections prior to use of MR images in radiotherapy. Conclusions: MR image intensities and IINU vary as a function of scanner vendor, field strength, and RF coil configuration. A two-step strategy consisting of MNI N3 correction followed by normalization was required to improve MR image consistency. Funding provided by Advancing a Healthier Wisconsin.

  5. Full self-consistency versus quasiparticle self-consistency in diagrammatic approaches: Exactly solvable two-site Hubbard model

    DOE PAGES

    Kutepov, A. L.

    2015-07-22

    Self-consistent solutions of Hedin's equations (HE) for the two-site Hubbard model (HM) have been studied. They have been found for three-point vertices of increasing complexity (Γ = 1 (GW approximation), Γ₁ from the first-order perturbation theory, and the exact vertex Γ(E)). Comparison is made between the cases when an additional quasiparticle (QP) approximation for Green's functions is applied during the self-consistent iterative solving of HE and when the QP approximation is not applied. Results obtained with the exact vertex are directly related to the present open question: which approximation is more advantageous for future implementations, GW + DMFT or QPGW + DMFT. It is shown that in a regime of strong correlations only the originally proposed GW + DMFT scheme is able to provide reliable results. Vertex corrections based on perturbation theory (PT) systematically improve the GW results when full self-consistency is applied. The application of QP self-consistency combined with PT vertex corrections shows similar problems to the case when the exact vertex is applied combined with QP self-consistency. An analysis of Ward identity violation is performed for all approximations studied in this work, and its relation to the general accuracy of the schemes used is provided.

  6. Full self-consistency versus quasiparticle self-consistency in diagrammatic approaches: exactly solvable two-site Hubbard model.

    PubMed

    Kutepov, A L

    2015-08-12

    Self-consistent solutions of Hedin's equations (HE) for the two-site Hubbard model (HM) have been studied. They have been found for three-point vertices of increasing complexity (Γ = 1 (GW approximation), Γ1 from the first-order perturbation theory, and the exact vertex Γ(E)). Comparison is made between the cases when an additional quasiparticle (QP) approximation for Green's functions is applied during the self-consistent iterative solving of HE and when the QP approximation is not applied. The results obtained with the exact vertex are directly related to the present open question: which approximation is more advantageous for future implementations, GW + DMFT or QPGW + DMFT. It is shown that in a regime of strong correlations only the originally proposed GW + DMFT scheme is able to provide reliable results. Vertex corrections based on perturbation theory (PT) systematically improve the GW results when full self-consistency is applied. The application of QP self-consistency combined with PT vertex corrections shows similar problems to the case when the exact vertex is applied combined with QP self-consistency. An analysis of Ward identity violation is performed for all approximations studied in this work, and its relation to the general accuracy of the schemes used is provided.

  7. Adaptive optics stochastic optical reconstruction microscopy (AO-STORM) by particle swarm optimization

    PubMed Central

    Tehrani, Kayvan F.; Zhang, Yiwen; Shen, Ping; Kner, Peter

    2017-01-01

    Stochastic optical reconstruction microscopy (STORM) can achieve resolutions better than 20 nm when imaging single fluorescently labeled cells. However, when optical aberrations induced by larger biological samples degrade the point spread function (PSF), the localization accuracy and number of localizations are both reduced, destroying the resolution of STORM. Adaptive optics (AO) can be used to correct the wavefront, restoring the high resolution of STORM. A challenge for AO-STORM microscopy is the development of robust optimization algorithms which can efficiently correct the wavefront from stochastic raw STORM images. Here we present the implementation of a particle swarm optimization (PSO) approach with a Fourier metric for real-time correction of wavefront aberrations during STORM acquisition. We apply our approach to imaging boutons 100 μm deep inside the central nervous system (CNS) of Drosophila melanogaster larvae achieving a resolution of 146 nm. PMID:29188105
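    The core PSO loop can be sketched as follows. This is a generic, minimal PSO on a toy quadratic objective, not the authors' implementation: in their application the function `f` would be the Fourier-based image-sharpness metric evaluated on acquired frames, and `pos` would parameterize the deformable-mirror wavefront modes (all names and constants here are illustrative).

```python
import numpy as np

def f(x):
    """Toy objective with minimum at x = (3, 3); stands in for the image metric."""
    return np.sum((x - 3.0) ** 2, axis=-1)

rng = np.random.default_rng(2)
n_particles, dim, iters = 30, 2, 200
w, c1, c2 = 0.7, 1.5, 1.5  # inertia and cognitive/social acceleration coefficients

pos = rng.uniform(-10, 10, (n_particles, dim))
vel = np.zeros((n_particles, dim))
pbest = pos.copy()            # each particle's best-known position
pbest_val = f(pos)
gbest = pbest[np.argmin(pbest_val)].copy()  # swarm's best-known position

for _ in range(iters):
    r1, r2 = rng.random((2, n_particles, dim))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = pos + vel
    val = f(pos)
    improved = val < pbest_val
    pbest[improved] = pos[improved]
    pbest_val[improved] = val[improved]
    gbest = pbest[np.argmin(pbest_val)].copy()

print(gbest)  # converges near (3, 3)
```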

  8. Single-image-based solution for optics temperature-dependent nonuniformity correction in an uncooled long-wave infrared camera.

    PubMed

    Cao, Yanpeng; Tisse, Christel-Loic

    2014-02-01

    In this Letter, we propose an efficient and accurate solution to remove temperature-dependent nonuniformity effects introduced by the imaging optics. This single-image-based approach computes optics-related fixed pattern noise (FPN) by fitting the derivatives of the correction model to the gradient components, locally computed on an infrared image. A modified bilateral filtering algorithm is applied to local pixel output variations, so that the refined gradients are most likely caused by the nonuniformity associated with optics. The estimated bias field is subtracted from the raw infrared imagery to compensate for the intensity variations caused by optics. The proposed method is fundamentally different from the existing nonuniformity correction (NUC) techniques developed for focal plane arrays (FPAs) and provides an essential image processing functionality to achieve completely shutterless NUC for uncooled long-wave infrared (LWIR) imaging systems.
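    The edge-preserving smoothing at the heart of this step can be illustrated with a plain 1D bilateral filter; this is a standard textbook sketch on synthetic data, not the authors' modified variant, and all parameters are illustrative. The weights combine spatial proximity with range (intensity) proximity, so noise is averaged away while sharp transitions survive.

```python
import numpy as np

def bilateral_1d(x, sigma_s=3.0, sigma_r=0.5, radius=6):
    """Edge-preserving smoothing: Gaussian weights in both space and intensity."""
    out = np.empty_like(x, dtype=float)
    for i in range(len(x)):
        lo, hi = max(0, i - radius), min(len(x), i + radius + 1)
        idx = np.arange(lo, hi)
        w = np.exp(-(idx - i) ** 2 / (2 * sigma_s ** 2)
                   - (x[idx] - x[i]) ** 2 / (2 * sigma_r ** 2))
        out[i] = np.sum(w * x[idx]) / np.sum(w)
    return out

# A step edge with small noise: smoothing flattens the noise but keeps the edge.
rng = np.random.default_rng(4)
step = np.where(np.arange(200) < 100, 0.0, 5.0) + 0.05 * rng.normal(size=200)
smooth = bilateral_1d(step)
print(abs(smooth[99] - smooth[100]))  # the 0 -> 5 edge survives filtering
```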

  9. Adaptive optics stochastic optical reconstruction microscopy (AO-STORM) by particle swarm optimization.

    PubMed

    Tehrani, Kayvan F; Zhang, Yiwen; Shen, Ping; Kner, Peter

    2017-11-01

    Stochastic optical reconstruction microscopy (STORM) can achieve resolutions better than 20 nm when imaging single fluorescently labeled cells. However, when optical aberrations induced by larger biological samples degrade the point spread function (PSF), the localization accuracy and number of localizations are both reduced, destroying the resolution of STORM. Adaptive optics (AO) can be used to correct the wavefront, restoring the high resolution of STORM. A challenge for AO-STORM microscopy is the development of robust optimization algorithms which can efficiently correct the wavefront from stochastic raw STORM images. Here we present the implementation of a particle swarm optimization (PSO) approach with a Fourier metric for real-time correction of wavefront aberrations during STORM acquisition. We apply our approach to imaging boutons 100 μm deep inside the central nervous system (CNS) of Drosophila melanogaster larvae achieving a resolution of 146 nm.

  10. Quantum oscillations in the kinetic energy density: Gradient corrections from the Airy gas

    NASA Astrophysics Data System (ADS)

    Lindmaa, Alexander; Mattsson, Ann E.; Armiento, Rickard

    2014-03-01

    We show how one can systematically derive exact quantum corrections to the kinetic energy density (KED) in the Thomas-Fermi (TF) limit of the Airy gas (AG). The resulting expression is of second order in the density variation and we demonstrate how it applies universally to a certain class of model systems in the slowly varying regime, for which the accuracy of the gradient corrections of the extended Thomas-Fermi (ETF) model is limited. In particular we study two kinds of related electronic edges, the Hermite gas (HG) and the Mathieu gas (MG), which are both relevant for discussing periodic systems. We also consider two systems with finite integer particle number, namely non-interacting electrons subject to harmonic confinement as well as the hydrogenic potential. Finally we discuss possible implications of our findings mainly related to the field of functional development of the local kinetic energy contribution.

  11. Research of generalized wavelet transformations of Haar correctness in remote sensing of the Earth

    NASA Astrophysics Data System (ADS)

    Kazaryan, Maretta; Shakhramanyan, Mihail; Nedkov, Roumen; Richter, Andrey; Borisova, Denitsa; Stankova, Nataliya; Ivanova, Iva; Zaharinova, Mariana

    2017-10-01

    In this paper, Haar's generalized wavelet functions are applied to the problem of ecological monitoring by remote sensing of the Earth. We study generalized Haar wavelet series and suggest the use of Tikhonov's regularization method for investigating their correctness. An important role in this problem is played by the classes of functions introduced and described in detail by I. M. Sobol for the study of multidimensional quadrature formulas, which contain functions with rapidly convergent Haar wavelet series. A theorem on the stability and uniform convergence of the regularized summation function of a generalized Haar wavelet series with approximate coefficients is proved for functions from this class. The article also examines the use of orthogonal transformations in Earth remote sensing technologies for environmental monitoring. Remote sensing of the Earth provides medium- and high-spatial-resolution information from spacecraft and enables hyperspectral measurements; spacecraft carry tens or hundreds of spectral channels. Discrete orthogonal transforms, namely wavelet transforms, were used to process the images. The aim of the work is to apply the regularization method to one of the problems associated with remote sensing of the Earth and then to process the satellite images with discrete orthogonal transformations, in particular generalized Haar wavelet transforms. General methods of research: Tikhonov's regularization method, elements of mathematical analysis, the theory of discrete orthogonal transformations, and methods for decoding satellite images. Scientific novelty: the processing of archival satellite images, in particular signal filtering, is investigated as an ill-posed problem, and the regularization parameters for discrete orthogonal transformations are determined.
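    For reference, one level of the classical (orthonormal) discrete Haar wavelet transform and its inverse can be written in a few lines; this is a minimal sketch of the ordinary Haar transform with toy data, not the generalized construction studied in the paper.

```python
import numpy as np

def haar_forward(x):
    """One level of the orthonormal Haar DWT: pairwise averages and differences."""
    x = np.asarray(x, dtype=float)
    s = (x[0::2] + x[1::2]) / np.sqrt(2.0)  # approximation (low-pass) coefficients
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)  # detail (high-pass) coefficients
    return s, d

def haar_inverse(s, d):
    """Exact inverse of one Haar level."""
    x = np.empty(2 * len(s))
    x[0::2] = (s + d) / np.sqrt(2.0)
    x[1::2] = (s - d) / np.sqrt(2.0)
    return x

sig = np.array([4.0, 6.0, 10.0, 12.0, 8.0, 6.0, 5.0, 5.0])
s, d = haar_forward(sig)
recon = haar_inverse(s, d)
print(np.allclose(recon, sig))  # the transform is orthogonal, so it inverts exactly
```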

  12. Reed-Solomon error-correction as a software patch mechanism.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pendley, Kevin D.

    This report explores how error-correction data generated by a Reed-Solomon code may be used as a mechanism to apply changes to an existing installed codebase. Error-correction data generated from a changed or updated codebase can be applied to an existing installation to both validate it and introduce the changes or updates from the upstream source.
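    The patch idea itself can be illustrated with a toy stand-in. The sketch below uses plain XOR parity rather than an actual Reed-Solomon code (which would additionally detect and correct unexpected differences between the installed codebase and the expected one); it shows only the core mechanism of "correction data doubling as a patch", and the file contents are invented for the example.

```python
# Toy illustration only: XOR parity stands in for Reed-Solomon parity data.
old = bytearray(b"def f(x):\n    return x + 1\n")  # installed codebase
new = bytearray(b"def f(x):\n    return x + 2\n")  # upstream updated codebase

# Upstream side: generate "correction data" relating old to new.
patch = bytes(a ^ b for a, b in zip(old, new))

# Installed site: applying the correction data recovers the updated codebase.
recovered = bytes(a ^ p for a, p in zip(old, patch))
print(recovered == bytes(new))  # True
```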

  13. Galaxy and mass assembly (GAMA): dust obscuration in galaxies and their recent star formation histories

    NASA Astrophysics Data System (ADS)

    Wijesinghe, D. B.; Hopkins, A. M.; Sharp, R.; Gunawardhana, M.; Brough, S.; Sadler, E. M.; Driver, S.; Baldry, I.; Bamford, S.; Liske, J.; Loveday, J.; Norberg, P.; Peacock, J.; Popescu, C. C.; Tuffs, R. J.; Bland-Hawthorn, J.; Cameron, E.; Croom, S.; Frenk, C.; Hill, D.; Jones, D. H.; van Kampen, E.; Kelvin, L.; Kuijken, K.; Madore, B.; Nichol, B.; Parkinson, H.; Pimbblet, K. A.; Prescott, M.; Robotham, A. S. G.; Seibert, M.; Simmat, E.; Sutherland, W.; Taylor, E.; Thomas, D.

    2011-02-01

    We present self-consistent star formation rates derived through pan-spectral analysis of galaxies drawn from the Galaxy and Mass Assembly (GAMA) survey. We determine the most appropriate form of dust obscuration correction via application of a range of extinction laws drawn from the literature as applied to Hα, [O II] and UV luminosities. These corrections are applied to a sample of 31 508 galaxies from the GAMA survey at z < 0.35. We consider several different obscuration curves, including the Milky Way, Calzetti and Fischera & Dopita curves, and their effects on the observed luminosities. At the core of this technique is the observed Balmer decrement, and we provide a prescription to apply optimal obscuration corrections using the Balmer decrement. We carry out an analysis of the star formation history (SFH) using stellar population synthesis tools to investigate the evolutionary history of our sample of galaxies as well as to understand the effects of variation in the initial mass function (IMF) and the effects this has on the evolutionary history of galaxies. We find that the Fischera & Dopita obscuration curve with an R_V value of 4.5 gives the best agreement between the different SFR indicators. The 2200 Å feature needed to be removed from this curve to obtain complete consistency between all SFR indicators, suggesting that this feature may not be common in the average integrated attenuation of galaxy emission. We also find that the UV dust obscuration is strongly dependent on the SFR.
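    The standard Balmer-decrement correction underlying such prescriptions can be sketched in a few lines. This is an illustrative version only: the k(λ) values below are approximate Calzetti-curve values and the Case B intrinsic ratio of 2.86 is the usual assumption, whereas the paper itself favours a modified Fischera & Dopita curve.

```python
import numpy as np

k_Ha, k_Hb = 2.53, 3.61   # attenuation-curve values at Halpha, Hbeta (illustrative)
intrinsic_ratio = 2.86    # Case B intrinsic Halpha/Hbeta ratio

def corrected_halpha(L_Ha_obs, balmer_dec_obs):
    """Dust-correct an observed Halpha luminosity using the Balmer decrement."""
    ebv = 2.5 / (k_Hb - k_Ha) * np.log10(balmer_dec_obs / intrinsic_ratio)
    A_Ha = k_Ha * ebv                      # magnitudes of attenuation at Halpha
    return L_Ha_obs * 10 ** (0.4 * A_Ha)   # de-attenuated luminosity

# Example: an observed decrement of 4.0 implies roughly a factor ~2 correction.
print(corrected_halpha(1.0e41, 4.0))
```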

  14. The combination of the error correction methods of GAFCHROMIC EBT3 film

    PubMed Central

    Li, Yinghui; Chen, Lixin; Zhu, Jinhan; Liu, Xiaowei

    2017-01-01

    Purpose: The aim of this study was to combine a set of methods for radiochromic film dosimetry, including calibration, correction for lateral effects and a proposed triple-channel analysis. These methods can be applied to GAFCHROMIC EBT3 film dosimetry for radiation field analysis and verification of IMRT plans. Methods: A single-film exposure was used to achieve dose calibration, and the accuracy was verified based on comparisons with the square-field calibration method. Before performing the dose analysis, the lateral effects on pixel values were corrected. The position dependence of the lateral effect was fitted by a parabolic function, and the curvature factors of different dose levels were obtained using a quadratic formula. After lateral effect correction, a triple-channel analysis was used to reduce disturbances and convert scanned images from films into dose maps. The dose profiles of open fields were measured using EBT3 films and compared with the data obtained using an ionization chamber. Eighteen IMRT plans with different field sizes were measured and verified with EBT3 films using our methods, and compared to TPS dose maps, to check the correct implementation of the film dosimetry proposed here. Results: The uncertainty of lateral effects can be reduced to ±1 cGy. Compared with the results of Micke et al., the residual disturbances of the proposed triple-channel method at 48, 176 and 415 cGy are 5.3%, 20.9% and 31.4% smaller, respectively. Compared with the ionization chamber results, the differences in the off-axis ratio and percentage depth dose are within 1% and 2%, respectively. For IMRT verification, there was no difference between the two triple-channel methods. Compared with correction by the triple-channel method alone, the IMRT results of the combined method (lateral-effect correction plus the present triple-channel method) show a 2% improvement for large IMRT fields with the 3%/3 mm criteria. PMID:28750023
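    The parabolic lateral-effect correction can be sketched as follows: fit a parabola to the response of a uniformly exposed strip across the scanner's lateral axis, then divide it out to flatten subsequent scans. The data below are synthetic and the curvature value is invented for the example; in the paper the curvature factors are dose-level dependent.

```python
import numpy as np

lateral = np.linspace(-10, 10, 101)   # cm from the scanner centre line
true_curvature = -4.0e-4              # illustrative curvature of the lateral effect
flat_response = 3.0e4 * (1 + true_curvature * lateral**2)  # uniformly exposed strip

coeffs = np.polyfit(lateral, flat_response, 2)   # parabolic fit to the response
fitted = np.polyval(coeffs, lateral)
# Normalise to the centre value so the correction is unity at the scanner axis.
corrected = flat_response * fitted[len(fitted) // 2] / fitted
print(np.ptp(corrected))  # residual lateral variation after correction (~0 here)
```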

  15. Controlling Type I Error Rates in Assessing DIF for Logistic Regression Method Combined with SIBTEST Regression Correction Procedure and DIF-Free-Then-DIF Strategy

    ERIC Educational Resources Information Center

    Shih, Ching-Lin; Liu, Tien-Hsiang; Wang, Wen-Chung

    2014-01-01

    The simultaneous item bias test (SIBTEST) regression correction procedure and the differential item functioning (DIF)-free-then-DIF strategy are applied to the logistic regression (LR) method simultaneously in this study. These procedures are used to adjust the effects of matching true score on observed score and to better control the Type I error…

  16. Space Vehicle Chemical Interactions and Technologies

    DTIC Science & Technology

    2015-05-26

    the signal intensities for product and transmitted primary ions and applying the Lambert-Beer expression. Measurements are corrected for reactions... a function of the emitted cluster radius. The surface electric field is calculated from Coulomb's law and levels off at approximately the

  17. Fast and automatic algorithm for optic disc extraction in retinal images using principle-component-analysis-based preprocessing and curvelet transform.

    PubMed

    Shahbeig, Saleh; Pourghassem, Hossein

    2013-01-01

    Optic disc or optic nerve (ON) head extraction in retinal images has widespread applications in retinal disease diagnosis and human identification in biometric systems. This paper introduces a fast and automatic algorithm for detecting and extracting the ON region accurately from retinal images without the use of blood-vessel information. In this algorithm, to compensate for destructive illumination changes and to enhance the contrast of the retinal images, we estimate the background illumination and apply an adaptive correction function to the curvelet transform coefficients of the retinal images. In other words, we eliminate the confounding factors and pave the way to extracting the ON region exactly. Then, we detect the ON region from the retinal images using morphology operators based on geodesic conversions, applying a proper adaptive correction function to the reconstructed image's curvelet transform coefficients together with a novel, powerful criterion. Finally, using local thresholding on the detected area of the retinal images, we extract the ON region. The proposed algorithm is evaluated on available images of the DRIVE and STARE databases. The experimental results indicate that the proposed algorithm obtains an accuracy rate of 100% and 97.53% for the ON extractions on the DRIVE and STARE databases, respectively.

  18. Distortion correction of echo planar images applying the concept of finite rate of innovation to point spread function mapping (FRIP).

    PubMed

    Nunes, Rita G; Hajnal, Joseph V

    2018-06-01

    Point spread function (PSF) mapping enables estimating the displacement fields required for distortion correction of echo planar images. Recently, a highly accelerated approach was introduced for estimating displacements from the phase slope of under-sampled PSF mapping data. Sampling schemes with varying spacing were proposed, requiring stepwise phase unwrapping. To avoid unwrapping errors, an alternative approach applying the concept of finite rate of innovation to PSF mapping (FRIP) is introduced, using a pattern search strategy to locate the PSF peak, and the two methods are compared. Fully sampled PSF data was acquired in six subjects at 3.0 T, and distortion maps were estimated after retrospective under-sampling. The two methods were compared for both previously published and newly optimized sampling patterns. Prospectively under-sampled data were also acquired. Shift maps were estimated and deviations relative to the fully sampled reference map were calculated. The best performance was achieved when using FRIP with a previously proposed sampling scheme. The two methods were comparable for the remaining schemes. The displacement field errors tended to be lower as the number of samples or their spacing increased. A robust method for estimating the position of the PSF peak has been introduced.

  19. Correction of patient motion in cone-beam CT using 3D-2D registration

    NASA Astrophysics Data System (ADS)

    Ouadah, S.; Jacobson, M.; Stayman, J. W.; Ehtiati, T.; Weiss, C.; Siewerdsen, J. H.

    2017-12-01

    Cone-beam CT (CBCT) is increasingly common in guidance of interventional procedures, but can be subject to artifacts arising from patient motion during fairly long (~5-60 s) scan times. We present a fiducial-free method to mitigate motion artifacts using 3D-2D image registration that simultaneously corrects residual errors in the intrinsic and extrinsic parameters of geometric calibration. The 3D-2D registration process registers each projection to a prior 3D image by maximizing gradient orientation using the covariance matrix adaptation-evolution strategy optimizer. The resulting rigid transforms are applied to the system projection matrices, and a 3D image is reconstructed via model-based iterative reconstruction. Phantom experiments were conducted using a Zeego robotic C-arm to image a head phantom undergoing 5-15 cm translations and 5-15° rotations. To further test the algorithm, clinical images were acquired with a CBCT head scanner in which long scan times were susceptible to significant patient motion. CBCT images were reconstructed using a penalized likelihood objective function. For phantom studies the structural similarity (SSIM) between motion-free and motion-corrected images was >0.995, with significant improvement (p < 0.001) compared to the SSIM values of uncorrected images. Additionally, motion-corrected images exhibited a point-spread function with full-width at half maximum comparable to that of the motion-free reference image. Qualitative comparison of the motion-corrupted and motion-corrected clinical images demonstrated a significant improvement in image quality after motion correction. This indicates that the 3D-2D registration method could provide a useful approach to motion artifact correction under assumptions of local rigidity, as in the head, pelvis, and extremities. The method is highly parallelizable, and the automatic correction of residual geometric calibration errors provides added benefit that could be valuable in routine use.
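    The SSIM figure of merit used for the phantom comparisons can be sketched in its simplest, single-window form; this is a simplified whole-image SSIM on synthetic data, not the windowed implementation typically used in practice.

```python
import numpy as np

def ssim_global(x, y, data_range=1.0):
    """Single-window SSIM over the whole image (simplified sketch)."""
    c1 = (0.01 * data_range) ** 2   # stabilising constants from the SSIM definition
    c2 = (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx**2 + my**2 + c1) * (vx + vy + c2))

rng = np.random.default_rng(5)
img = rng.random((64, 64))
print(ssim_global(img, img))         # identical images give SSIM = 1
print(ssim_global(img, img + 0.1))   # a biased copy scores lower
```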

  20. TRIPPy: Trailed Image Photometry in Python

    NASA Astrophysics Data System (ADS)

    Fraser, Wesley; Alexandersen, Mike; Schwamb, Megan E.; Marsset, Michaël; Pike, Rosemary E.; Kavelaars, J. J.; Bannister, Michele T.; Benecchi, Susan; Delsanti, Audrey

    2016-06-01

    Photometry of moving sources typically suffers from a reduced signal-to-noise ratio (S/N) or flux measurements biased to incorrect low values through the use of circular apertures. To address this issue, we present the software package, TRIPPy: TRailed Image Photometry in Python. TRIPPy introduces the pill aperture, which is the natural extension of the circular aperture appropriate for linearly trailed sources. The pill shape is a rectangle with two semicircular end-caps and is described by three parameters: the trail length and angle, and the radius. The TRIPPy software package also includes a new technique to generate accurate model point-spread functions (PSFs) and trailed PSFs (TSFs) from stationary background sources in sidereally tracked images. The TSF is merely the convolution of the model PSF, which consists of a Moffat profile and a super-sampled lookup table. From the TSF, accurate pill aperture corrections can be estimated as a function of pill radius with an accuracy of 10 mmag for highly trailed sources. Analogous to the use of small circular apertures and associated aperture corrections, small radius pill apertures can be used to preserve S/Ns of low flux sources, with appropriate aperture correction applied to provide an accurate, unbiased flux measurement at all S/Ns.
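    Geometrically, the pill aperture is the set of pixels within radius r of a central segment of length L (the trail) at the trail angle, so its area is L·2r + πr². A minimal mask construction (illustrative parameters, not TRIPPy's API):

```python
import numpy as np

def pill_mask(shape, cx, cy, length, angle, r):
    """Boolean mask of a pill aperture centred on (cx, cy)."""
    yy, xx = np.mgrid[:shape[0], :shape[1]].astype(float)
    dx, dy = xx - cx, yy - cy
    ca, sa = np.cos(angle), np.sin(angle)
    u = ca * dx + sa * dy        # coordinate along the trail
    v = -sa * dx + ca * dy       # coordinate across the trail
    # distance from the central segment of half-length L/2, then the radius test
    du = np.maximum(np.abs(u) - length / 2.0, 0.0)
    return du**2 + v**2 <= r**2

mask = pill_mask((101, 101), 50, 50, length=20, angle=0.3, r=6)
area = mask.sum()
print(area, 20 * 2 * 6 + np.pi * 6**2)  # pixelated area vs analytic L*2r + pi*r^2
```

    With `length=0` the pill reduces to the ordinary circular aperture, which is the sense in which it is the "natural extension" mentioned above.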

  1. Stabilised finite-element methods for solving the level set equation with mass conservation

    NASA Astrophysics Data System (ADS)

    Kabirou Touré, Mamadou; Fahsi, Adil; Soulaïmani, Azzeddine

    2016-01-01

    Finite-element methods are studied for solving moving interface flow problems using the level set approach and a stabilised variational formulation proposed in Touré and Soulaïmani (2012, 2016), coupled with a level set correction method intended to improve mass conservation. The stabilised variational formulation constrains the level set function to remain close to the signed distance function, while the mass correction step enforces global mass balance. The eXtended finite-element method (XFEM) is used to take into account the discontinuities of the properties within an element. XFEM is applied to solve the Navier-Stokes equations for two-phase flows. The numerical methods are evaluated on several test cases, such as time-reversed vortex flow, a rigid-body rotation of Zalesak's disc, sloshing flow in a tank, a dam-break over a bed, and a rising bubble subjected to buoyancy. The numerical results show the importance of satisfying global mass conservation to accurately capture the interface position.
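    A common way to realize a global mass-conservation correction for a level set function is to shift it by a constant chosen so the enclosed volume matches a target; the sketch below does this by bisection on a simple cell-counting volume estimate. This is an illustrative stand-in under assumed conventions (phi < 0 inside), not the authors' correction method.

```python
def enclosed_area(phi, h):
    """Cell-counting estimate of the area where the level set is negative."""
    return sum(1 for v in phi if v < 0.0) * h * h

def mass_correct(phi, target, h, iters=60):
    """Shift phi by a global constant so the enclosed area matches the target
    (bisection; enclosed area decreases as the constant increases)."""
    lo, hi = -1.0, 1.0                      # assumed bracket for the shift
    for _ in range(iters):
        c = 0.5 * (lo + hi)
        if enclosed_area([v + c for v in phi], h) > target:
            lo = c
        else:
            hi = c
    c = 0.5 * (lo + hi)
    return [v + c for v in phi]

# 1D toy field with 50 "inside" cells, corrected down to ~30
phi = [i * 0.01 - 0.5 for i in range(100)]
corrected = mass_correct(phi, 30.0, 1.0)
assert abs(enclosed_area(corrected, 1.0) - 30.0) <= 1.0
```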

  2. Power cepstrum technique with application to model helicopter acoustic data

    NASA Technical Reports Server (NTRS)

    Martin, R. M.; Burley, C. L.

    1986-01-01

    The application of the power cepstrum to measured helicopter-rotor acoustic data is investigated. A previously applied correction to the reconstructed spectrum is shown to be incorrect. For an exact echoed signal, the amplitude of the cepstrum echo spike at the delay time is linearly related to the echo relative amplitude in the time domain. If the measured spectrum is not entirely from the source signal, the cepstrum will not yield the desired echo characteristics, and cepstral aliasing may occur because of the effective sample rate in the frequency domain. The spectral analysis bandwidth must be less than one-half the echo ripple frequency or cepstral aliasing can occur. The power cepstrum editing technique is a useful tool for removing some of the contamination due to acoustic reflections from measured rotor acoustic spectra. The cepstrum editing yields an improved estimate of the free-field spectrum, but the correction process is limited by the lack of accurate knowledge of the echo transfer function. An alternate procedure, which does not require cepstral editing, is proposed that allows the complete correction of a contaminated spectrum through use of both the transfer function and delay time of the echo process.
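    The power cepstrum itself is straightforward to compute: the squared magnitude of the inverse transform of the log power spectrum, in which an echo at delay d appears as a spike at quefrency d. A self-contained sketch using a plain O(n^2) DFT (illustrative only; a production code would use an FFT):

```python
import cmath
import math

def dft(x, inverse=False):
    """Plain O(n^2) discrete Fourier transform (illustrative only)."""
    n = len(x)
    sign = 1 if inverse else -1
    out = [sum(x[k] * cmath.exp(sign * 2j * math.pi * j * k / n)
               for k in range(n)) for j in range(n)]
    return [v / n for v in out] if inverse else out

def power_cepstrum(x):
    """Squared magnitude of the inverse transform of the log power spectrum."""
    log_power = [math.log(abs(v) ** 2 + 1e-12) for v in dft(x)]
    return [abs(v) ** 2 for v in dft(log_power, inverse=True)]

# impulse plus a half-amplitude echo at delay 8: cepstral spike at quefrency 8
n, delay, amp = 64, 8, 0.5
y = [0.0] * n
y[0], y[delay] = 1.0, amp
c = power_cepstrum(y)
assert max(range(1, n // 2), key=lambda q: c[q]) == delay
```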

  3. Covariate Measurement Error Correction Methods in Mediation Analysis with Failure Time Data

    PubMed Central

    Zhao, Shanshan

    2014-01-01

    Mediation analysis is important for understanding the mechanisms whereby one variable causes changes in another. Measurement error could obscure the ability of the potential mediator to explain such changes. This paper focuses on developing correction methods for measurement error in the mediator with failure time outcomes. We consider a broad definition of measurement error, including technical error and error associated with temporal variation. The underlying model with the ‘true’ mediator is assumed to be of the Cox proportional hazards model form. The induced hazard ratio for the observed mediator no longer has a simple form independent of the baseline hazard function, due to the conditioning event. We propose a mean-variance regression calibration approach and a follow-up time regression calibration approach, to approximate the partial likelihood for the induced hazard function. Both methods demonstrate value in assessing mediation effects in simulation studies. These methods are generalized to multiple biomarkers and to both case-cohort and nested case-control sampling design. We apply these correction methods to the Women's Health Initiative hormone therapy trials to understand the mediation effect of several serum sex hormone measures on the relationship between postmenopausal hormone therapy and breast cancer risk. PMID:25139469

  4. Covariate measurement error correction methods in mediation analysis with failure time data.

    PubMed

    Zhao, Shanshan; Prentice, Ross L

    2014-12-01

    Mediation analysis is important for understanding the mechanisms whereby one variable causes changes in another. Measurement error could obscure the ability of the potential mediator to explain such changes. This article focuses on developing correction methods for measurement error in the mediator with failure time outcomes. We consider a broad definition of measurement error, including technical error, and error associated with temporal variation. The underlying model with the "true" mediator is assumed to be of the Cox proportional hazards model form. The induced hazard ratio for the observed mediator no longer has a simple form independent of the baseline hazard function, due to the conditioning event. We propose a mean-variance regression calibration approach and a follow-up time regression calibration approach, to approximate the partial likelihood for the induced hazard function. Both methods demonstrate value in assessing mediation effects in simulation studies. These methods are generalized to multiple biomarkers and to both case-cohort and nested case-control sampling designs. We apply these correction methods to the Women's Health Initiative hormone therapy trials to understand the mediation effect of several serum sex hormone measures on the relationship between postmenopausal hormone therapy and breast cancer risk. © 2014, The International Biometric Society.
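    The classical regression-calibration idea underlying these approaches replaces the error-prone mediator by its conditional expectation given the observed value, shrinking toward the mean by the reliability ratio. A minimal sketch of that substitution step (the paper's mean-variance and follow-up time calibrations are refinements of this; variable names are illustrative):

```python
def calibrate(w, mu, var_true, var_error):
    """Regression-calibration substitute E[x | w] for an observed mediator w,
    shrinking toward the mean by the reliability ratio."""
    reliability = var_true / (var_true + var_error)
    return mu + reliability * (w - mu)

# with 3:1 signal-to-error variance, an observation of 2.0 is shrunk to 1.5
assert calibrate(2.0, 0.0, 3.0, 1.0) == 1.5
```

The calibrated value would then be used in place of the observed mediator in the (here omitted) Cox partial likelihood.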

  5. Generalization of the Hartree-Fock approach to collision processes

    NASA Astrophysics Data System (ADS)

    Hahn, Yukap

    1997-06-01

    The conventional Hartree and Hartree-Fock approaches for bound states are generalized to treat atomic collision processes. All the single-particle orbitals, for both bound and scattering states, are determined simultaneously by requiring full self-consistency. This generalization is achieved by introducing two Ansätze: (a) the weak asymptotic boundary condition, which maintains the correct scattering energy and target orbitals with the correct number of nodes, and (b) square-integrable amputated scattering functions to generate self-consistent field (SCF) potentials for the target orbitals. The exact initial target and final-state asymptotic wave functions are not required and thus need not be specified a priori, as they are determined simultaneously by the SCF iterations. To check the asymptotic behavior of the solution, the theory is applied to elastic electron-hydrogen scattering at low energies. The solution is found to be stable, and the weak asymptotic condition is sufficient to produce the correct scattering amplitudes. The SCF potential for the target orbital shows the strong penetration by the projectile electron during the collision, but the exchange term tends to restore the original form. Potential applications of this extension are discussed, including the treatment of ionization and shake-off processes.

  6. Localized orbital corrections applied to thermochemical errors in density functional theory: The role of basis set and application to molecular reactions

    NASA Astrophysics Data System (ADS)

    Goldfeld, Dahlia A.; Bochevarov, Arteum D.; Friesner, Richard A.

    2008-12-01

    This paper is a logical continuation of the 22-parameter, localized orbital correction (LOC) methodology that we developed in previous papers [R. A. Friesner et al., J. Chem. Phys. 125, 124107 (2006); E. H. Knoll and R. A. Friesner, J. Phys. Chem. B 110, 18787 (2006)]. This methodology allows one to redress systematic density functional theory (DFT) errors, rooted in DFT's inherent inability to accurately describe nondynamical correlation. Variants of the LOC scheme, in conjunction with B3LYP (denoted as B3LYP-LOC), were previously applied to enthalpies of formation, ionization potentials, and electron affinities and showed impressive reduction in the errors. In this paper, we demonstrate for the first time that the B3LYP-LOC scheme is robust across different basis sets [6-31G∗, 6-311++G(3df,3pd), cc-pVTZ, and aug-cc-pVTZ] and reaction types (atomization reactions and molecular reactions). For example, for a test set of 70 molecular reactions, the LOC scheme reduces their mean unsigned error from 4.7 kcal/mol [obtained with B3LYP/6-311++G(3df,3pd)] to 0.8 kcal/mol. We also examined whether the LOC methodology would be equally successful if applied to the promising M05-2X functional. We conclude that although M05-2X produces better reaction enthalpies than B3LYP, the LOC scheme does not combine nearly as successfully with M05-2X as with B3LYP. A brief analysis of another functional, M06-2X, reveals that it is more accurate than M05-2X but its combination with LOC still cannot compete in accuracy with B3LYP-LOC. Indeed, B3LYP-LOC remains the best method of computing reaction enthalpies.
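    At its core, an LOC-style scheme fits additive correction parameters by least squares so that counted occurrences of each orbital environment reproduce the systematic DFT errors over a training set. A toy two-parameter sketch of that fit (illustrative only; the actual scheme uses 22 parameters and chemically defined orbital classes):

```python
def fit_corrections(counts, errors):
    """Least-squares additive corrections c = (c1, c2) minimizing
    sum_j (errors[j] - counts[j] . c)^2, via the 2x2 normal equations."""
    a11 = sum(n[0] * n[0] for n in counts)
    a12 = sum(n[0] * n[1] for n in counts)
    a22 = sum(n[1] * n[1] for n in counts)
    b1 = sum(n[0] * e for n, e in zip(counts, errors))
    b2 = sum(n[1] * e for n, e in zip(counts, errors))
    det = a11 * a22 - a12 * a12
    return (a22 * b1 - a12 * b2) / det, (a11 * b2 - a12 * b1) / det

# exactly representable toy data: corrections (2, 3) reproduce all errors
c1, c2 = fit_corrections([(1, 0), (0, 1), (1, 1)], [2.0, 3.0, 5.0])
assert abs(c1 - 2.0) < 1e-9 and abs(c2 - 3.0) < 1e-9
```

The corrected energy of a new molecule would then be the raw DFT value plus the count-weighted sum of the fitted parameters.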

  7. Composite Overwrap Pressure Vessels: Mechanics and Stress Rupture Lifing Philosophy

    NASA Technical Reports Server (NTRS)

    Thesken, John C.; Murthy, Pappu L. N.; Phoenix, S. L.

    2009-01-01

    The NASA Engineering and Safety Center (NESC) has been conducting an independent technical assessment to address safety concerns related to the known stress rupture failure mode of filament wound pressure vessels in use on Shuttle and the International Space Station. The Shuttle's Kevlar-49 (DuPont) fiber overwrapped tanks are of particular concern due to their long usage and the poorly understood stress rupture process in Kevlar-49 filaments. Existing long term data show that the rupture process is a function of stress, temperature and time. However, due to the presence of load sharing liners and the complex manufacturing procedures, the state of actual fiber stress in flight hardware and test articles is not clearly known. Indeed, nonconservative life predictions have been made where stress rupture data and lifing procedures have ignored the contribution of the liner in favor of applied pressure as the controlling load parameter. With the aid of analytical and finite element results, this paper examines the fundamental mechanical response of composite overwrapped pressure vessels including the influence of elastic plastic liners and degraded/creeping overwrap properties. Graphical methods are presented describing the non-linear relationship of applied pressure to Kevlar-49 fiber stress/strain during manufacturing, operations and burst loadings. These are applied to experimental measurements made on a variety of vessel systems to demonstrate the correct calibration of fiber stress as a function of pressure. Applying this analysis to the actual qualification burst data for Shuttle flight hardware revealed that the nominal fiber stress at burst was in some cases 23 percent lower than what had previously been used to predict stress rupture life. These results motivate a detailed discussion of the appropriate stress rupture lifing philosophy for COPVs including the correct transference of stress rupture life data between dissimilar vessels and test articles.

  8. Composite Overwrap Pressure Vessels: Mechanics and Stress Rupture Lifing Philosophy

    NASA Technical Reports Server (NTRS)

    Thesken, John C.; Murthy, Pappu L. N.; Phoenix, Leigh

    2007-01-01

    The NASA Engineering and Safety Center (NESC) has been conducting an independent technical assessment to address safety concerns related to the known stress rupture failure mode of filament wound pressure vessels in use on Shuttle and the International Space Station. The Shuttle's Kevlar-49 fiber overwrapped tanks are of particular concern due to their long usage and the poorly understood stress rupture process in Kevlar-49 filaments. Existing long term data show that the rupture process is a function of stress, temperature and time. However, due to the presence of load sharing liners and the complex manufacturing procedures, the state of actual fiber stress in flight hardware and test articles is not clearly known. Indeed, non-conservative life predictions have been made where stress rupture data and lifing procedures have ignored the contribution of the liner in favor of applied pressure as the controlling load parameter. With the aid of analytical and finite element results, this paper examines the fundamental mechanical response of composite overwrapped pressure vessels including the influence of elastic-plastic liners and degraded/creeping overwrap properties. Graphical methods are presented describing the non-linear relationship of applied pressure to Kevlar-49 fiber stress/strain during manufacturing, operations and burst loadings. These are applied to experimental measurements made on a variety of vessel systems to demonstrate the correct calibration of fiber stress as a function of pressure. Applying this analysis to the actual qualification burst data for Shuttle flight hardware revealed that the nominal fiber stress at burst was in some cases 23% lower than what had previously been used to predict stress rupture life. These results motivate a detailed discussion of the appropriate stress rupture lifing philosophy for COPVs including the correct transference of stress rupture life data between dissimilar vessels and test articles.

  9. Partial volume correction of PET-imaged tumor heterogeneity using expectation maximization with a spatially varying point spread function

    PubMed Central

    Barbee, David L; Flynn, Ryan T; Holden, James E; Nickles, Robert J; Jeraj, Robert

    2010-01-01

    Tumor heterogeneities observed in positron emission tomography (PET) imaging are frequently compromised by partial volume effects, which may affect treatment prognosis, assessment, or future implementations such as biologically optimized treatment planning (dose painting). This paper presents a method for partial volume correction of PET-imaged heterogeneous tumors. A point source was scanned on a GE Discovery LS at positions of increasing radii from the scanner’s center to obtain the spatially varying point spread function (PSF). PSF images were fit in three dimensions to Gaussian distributions using least squares optimization. Continuous expressions were devised for each Gaussian width as a function of radial distance, allowing for generation of the system PSF at any position in space. A spatially varying partial volume correction (SV-PVC) technique was developed using expectation maximization (EM) and a stopping criterion based on the method’s correction matrix generated for each iteration. The SV-PVC was validated using a standard tumor phantom and a tumor heterogeneity phantom, and was applied to a heterogeneous patient tumor. SV-PVC results were compared to results obtained from spatially invariant partial volume correction (SINV-PVC), which used directionally uniform three dimensional kernels. SV-PVC of the standard tumor phantom increased the maximum observed sphere activity by 55 and 40% for 10 and 13 mm diameter spheres, respectively. Tumor heterogeneity phantom results demonstrated that as net changes in the EM correction matrix decreased below 35%, further iterations improved overall quantitative accuracy by less than 1%. SV-PVC of clinically observed tumors frequently exhibited changes of ±30% in regions of heterogeneity. The SV-PVC method implemented spatially varying kernel widths and automatically determined the number of iterations for optimal restoration, parameters which are arbitrarily chosen in SINV-PVC. 
Comparing SV-PVC to SINV-PVC demonstrated that similar results could be reached using both methods, but large differences result for the arbitrary selection of SINV-PVC parameters. The presented SV-PVC method was performed without user intervention, requiring only a tumor mask as input. Research involving PET-imaged tumor heterogeneity should include correcting for partial volume effects to improve the quantitative accuracy of results. PMID:20009194
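    The spatially varying kernel model reduces to fitting each Gaussian width as a smooth function of radial distance. As an illustrative stand-in for the continuous expressions described above, the sketch below fits a simple linear width model by least squares (the paper's actual functional form is not specified here, and the width values are hypothetical):

```python
def fit_line(radii, widths):
    """Least-squares straight line width(r) = a + b * r."""
    n = len(radii)
    mr = sum(radii) / n
    mw = sum(widths) / n
    b = (sum((r - mr) * (w - mw) for r, w in zip(radii, widths))
         / sum((r - mr) ** 2 for r in radii))
    return mw - b * mr, b

# hypothetical Gaussian widths (mm) growing linearly with radius (cm)
a, b = fit_line([0.0, 5.0, 10.0, 15.0], [4.0, 4.5, 5.0, 5.5])
assert abs(a - 4.0) < 1e-9 and abs(b - 0.1) < 1e-9
```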

  10. Septal penetration correction in I-131 imaging following thyroid cancer treatment

    NASA Astrophysics Data System (ADS)

    Barrack, Fiona; Scuffham, James; McQuaid, Sarah

    2018-04-01

    Whole body gamma camera images acquired after I-131 treatment for thyroid cancer can suffer from collimator septal penetration artefacts because of the high energy of the gamma photons. This results in the appearance of ‘spoke’ artefacts, emanating from regions of high activity concentration, caused by the non-isotropic attenuation of the collimator. Deconvolution has the potential to reduce such artefacts, by taking into account the non-Gaussian point-spread-function (PSF) of the system. A Richardson–Lucy deconvolution algorithm, with and without prior scatter-correction was tested as a method of reducing septal penetration in planar gamma camera images. Phantom images (hot spheres within a warm background) were acquired and deconvolution using a measured PSF was applied. The results were evaluated through region-of-interest and line profile analysis to determine the success of artefact reduction and the optimal number of deconvolution iterations and damping parameter (λ). Without scatter-correction, the optimal results were obtained with 15 iterations and λ  =  0.01, with the counts in the spokes reduced to 20% of the original value, indicating a substantial decrease in their prominence. When a triple-energy-window scatter-correction was applied prior to deconvolution, the optimal results were obtained with six iterations and λ  =  0.02, which reduced the spoke counts to 3% of the original value. The prior application of scatter-correction therefore produced the best results, with a marked change in the appearance of the images. The optimal settings were then applied to six patient datasets, to demonstrate its utility in the clinical setting. In all datasets, spoke artefacts were substantially reduced after the application of scatter-correction and deconvolution, with the mean spoke count being reduced to 10% of the original value. 
This indicates that deconvolution is a promising technique for septal penetration artefact reduction that could potentially improve the diagnostic accuracy of I-131 imaging. Novelty and significance: This work has demonstrated that scatter correction combined with deconvolution can be used to substantially reduce the appearance of septal penetration artefacts in I-131 phantom and patient gamma camera planar images, enabling improved visualisation of the I-131 distribution. Deconvolution with a symmetric PSF has previously been used to reduce artefacts in gamma camera images; however, this work details the novel use of an asymmetric PSF to remove the angularly dependent septal penetration artefacts.
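The correction above applies Richardson–Lucy deconvolution with a measured PSF. A minimal 1D sketch of the undamped update (illustrative; the damping parameter λ, the scatter correction, and the asymmetric 2D PSF of the actual study are omitted):

```python
def conv(x, k):
    """'Same'-size convolution, kernel truncated at the borders."""
    h = len(k) // 2
    n = len(x)
    return [sum(k[j] * x[i + j - h] for j in range(len(k)) if 0 <= i + j - h < n)
            for i in range(n)]

def richardson_lucy(d, k, n_iter=15, eps=1e-12):
    """Undamped Richardson-Lucy deconvolution (symmetric kernel, so K^T = K)."""
    u = [sum(d) / len(d)] * len(d)          # flat initial estimate
    for _ in range(n_iter):
        est = conv(u, k)                    # forward model K u
        ratio = [di / (ei + eps) for di, ei in zip(d, est)]
        u = [ui * ci for ui, ci in zip(u, conv(ratio, k))]
    return u

# blurred point source: deconvolution re-concentrates flux at the peak
k = [0.25, 0.5, 0.25]
truth = [0.0] * 21
truth[10] = 4.0
d = conv(truth, k)
u = richardson_lucy(d, k)
assert max(range(21), key=lambda i: u[i]) == 10 and u[10] > d[10]
```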

  11. Bilateral Radial Agenesis in a Cat Treated with Bilateral Ulnocarpal Arthrodesis.

    PubMed

    Bezhentseva, Alla; Singh, Harpreet; Boudrieau, Randy J

    2018-06-20

    This article describes corrective antebrachiocarpal re-alignment and arthrodesis for bilateral radial hemimelia (radial agenesis) in an 8-month-old domestic short-haired cat. Bilateral forelimb deformity of ulnocarpal varus with complete luxation and rotation of the antebrachiocarpal joint spaces, and joint contracture, was observed. Several carpal bones and metacarpal bones I and II and their associated phalanges were absent. Abnormal ambulation and weight bearing on the dorsolateral part of the manus were present. The deformities were treated by bilateral distal ulnar ostectomy and ulnocarpal arthrodesis using a 2.0-mm locking compression plate applied with hybrid fixation and allograft. Successful deformity correction was obtained with subsequent fusion of the antebrachiocarpal joints. No complications were observed. At long-term follow-up (4.75 years), there was a good-to-excellent functional result, with approximately 15° internal rotation of the right forelimb manus and a shortened stride with slight circumduction and lameness. All implants remained stable and continued bone remodelling was present. The cat was assessed to have good-to-excellent short- and long-term functional results with excellent owner satisfaction. Treatment of radial agenesis in the cat has previously been limited to conservative management or limb amputation. While there are several reports of corrective limb-sparing procedures used to treat dogs, this is the first report of a cat with successful salvage corrective surgery. Schattauer GmbH Stuttgart.

  12. Correcting the MoCA for education: effect on sensitivity.

    PubMed

    Gagnon, Genevieve; Hansen, Kevin T; Woolmore-Goodwin, Sarah; Gutmanis, Iris; Wells, Jennie; Borrie, Michael; Fogarty, Jennifer

    2013-09-01

    The goal of this study was to quantify the impact of the suggested education correction on the sensitivity and specificity of the Montreal Cognitive Assessment (MoCA). Twenty-five outpatients with dementia and 39 with amnestic mild cognitive impairment (aMCI) underwent a diagnostic evaluation, which included the MoCA. Thirty-seven healthy controls also completed the MoCA and psychiatric, medical, neurological, functional, and cognitive difficulties were ruled out. For the total MoCA score, unadjusted for education, a cut-off score of 26 yielded the best balance between sensitivity and specificity (80% and 89% respectively) in identifying cognitive impairment (people with either dementia or aMCI, versus controls). When applying the education correction, sensitivity decreased from 80% to 69% for a small specificity increase (89% to 92%). The cut-off score yielding the best balance between sensitivity and specificity for the education adjusted MoCA score fell to 25 (61% and 97%, respectively). Adjusting the MoCA total score for education had a detrimental effect on sensitivity with only a slight increase in specificity. Clinically, this loss in sensitivity can lead to an increased number of false negatives, as education level does not always correlate with premorbid intellectual function. Clinical judgment about premorbid status should guide interpretation. However, as this effect may be cohort specific, age and education corrected norms and cut-offs should be developed to help guide MoCA interpretation.
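    Sensitivity and specificity at a given cutoff follow directly from the two score distributions. The toy sketch below (hypothetical scores, scoring "positive" as a score below the cutoff) illustrates the mechanism reported above: adding an education point to borderline cases moves them above the cutoff and lowers sensitivity.

```python
def sens_spec(impaired, controls, cutoff):
    """Sensitivity/specificity when 'positive' means score below the cutoff."""
    tp = sum(1 for s in impaired if s < cutoff)
    tn = sum(1 for s in controls if s >= cutoff)
    return tp / len(impaired), tn / len(controls)

# hypothetical MoCA scores
impaired = [20, 24, 25, 27]
controls = [26, 28, 29, 30]
sens, spec = sens_spec(impaired, controls, 26)
assert (sens, spec) == (0.75, 1.0)

# +1 education point (illustrative): a borderline case crosses the cutoff
adjusted = [s + 1 for s in impaired]
assert sens_spec(adjusted, controls, 26)[0] == 0.5
```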

  13. Evaluation of Analytical Modeling Functions for the Phonation Onset Process.

    PubMed

    Petermann, Simon; Kniesburges, Stefan; Ziethe, Anke; Schützenberger, Anne; Döllinger, Michael

    2016-01-01

    The human voice originates from oscillations of the vocal folds in the larynx. The duration of the voice onset (VO), called the voice onset time (VOT), is currently under investigation as a clinical indicator for correct laryngeal functionality. Different analytical approaches for computing the VOT based on endoscopic imaging were compared to determine the most reliable method to quantify automatically the transient vocal fold oscillations during VO. Transnasal endoscopic imaging in combination with a high-speed camera (8000 fps) was applied to visualize the phonation onset process. Two different definitions of VO interval were investigated. Six analytical functions were tested that approximate the envelope of the filtered or unfiltered glottal area waveform (GAW) during phonation onset. A total of 126 recordings from nine healthy males and 210 recordings from 15 healthy females were evaluated. Three criteria were analyzed to determine the most appropriate computation approach: (1) reliability of the fit function for a correct approximation of VO; (2) consistency represented by the standard deviation of VOT; and (3) accuracy of the approximation of VO. The results suggest the computation of VOT by a fourth-order polynomial approximation in the interval between 32.2 and 67.8% of the saturation amplitude of the filtered GAW.
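    Given a fitted envelope of the glottal area waveform, the VOT over the recommended interval is the time between the 32.2% and 67.8% crossings of the saturation amplitude. A sketch using bisection on a monotone-rising envelope (illustrative; the paper first fits a fourth-order polynomial to the filtered GAW, which is omitted here):

```python
def crossing_time(env, level, t0, t1, tol=1e-9):
    """Bisection for the crossing of a monotone-rising envelope with a level."""
    lo, hi = t0, t1
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if env(mid) < level:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def vot(env, saturation, t0, t1):
    """Voice onset time as the 32.2%-67.8% rise interval of the envelope."""
    return (crossing_time(env, 0.678 * saturation, t0, t1)
            - crossing_time(env, 0.322 * saturation, t0, t1))

# linear toy envelope on [0, 1]: VOT = 0.678 - 0.322 = 0.356
assert abs(vot(lambda t: t, 1.0, 0.0, 1.0) - 0.356) < 1e-6
```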

  14. Energy design for protein-protein interactions

    PubMed Central

    Ravikant, D. V. S.; Elber, Ron

    2011-01-01

    Proteins bind to other proteins efficiently and specifically to carry out many cell functions such as signaling, activation, transport, enzymatic reactions, and more. To determine the geometry and strength of binding of a protein pair, an energy function is required. An algorithm to design an optimal energy function, based on empirical data of protein complexes, is proposed and applied. Emphasis is made on negative design in which incorrect geometries are presented to the algorithm that learns to avoid them. For the docking problem the search for plausible geometries can be performed exhaustively. The possible geometries of the complex are generated on a grid with the help of a fast Fourier transform algorithm. A novel formulation of negative design makes it possible to investigate iteratively hundreds of millions of negative examples while monotonically improving the quality of the potential. Experimental structures for 640 protein complexes are used to generate positive and negative examples for learning parameters. The algorithm designed in this work finds the correct binding structure as the lowest energy minimum in 318 cases of the 640 examples. Further benchmarks on independent sets confirm the significant capacity of the scoring function to recognize correct modes of interactions. PMID:21842951

  15. A mass-conservative adaptive FAS multigrid solver for cell-centered finite difference methods on block-structured, locally-cartesian grids

    NASA Astrophysics Data System (ADS)

    Feng, Wenqiang; Guo, Zhenlin; Lowengrub, John S.; Wise, Steven M.

    2018-01-01

    We present a mass-conservative full approximation storage (FAS) multigrid solver for cell-centered finite difference methods on block-structured, locally cartesian grids. The algorithm is essentially a standard adaptive FAS (AFAS) scheme, but with a simple modification that comes in the form of a mass-conservative correction to the coarse-level force. This correction is facilitated by the creation of a zombie variable, analogous to a ghost variable, but defined on the coarse grid and lying under the fine grid refinement patch. We show that a number of different types of fine-level ghost cell interpolation strategies could be used in our framework, including low-order linear interpolation. In our approach, the smoother, prolongation, and restriction operations need never be aware of the mass conservation conditions at the coarse-fine interface. To maintain global mass conservation, we need only modify the usual FAS algorithm by correcting the coarse-level force function at points adjacent to the coarse-fine interface. We demonstrate through simulations that the solver converges geometrically, at a rate that is h-independent, and we show the generality of the solver, applying it to several nonlinear, time-dependent, and multi-dimensional problems. In several tests, we show that second-order asymptotic (h → 0) convergence is observed for the discretizations, provided that (1) at least linear interpolation of the ghost variables is employed, and (2) the mass conservation corrections are applied to the coarse-level force term.

  16. An analysis of the ArcCHECK-MR diode array's performance for ViewRay quality assurance.

    PubMed

    Ellefson, Steven T; Culberson, Wesley S; Bednarz, Bryan P; DeWerd, Larry A; Bayouth, John E

    2017-07-01

    The ArcCHECK-MR diode array utilizes a correction system with a virtual inclinometer to correct the angular response dependencies of the diodes. However, this correction system cannot be applied to measurements on the ViewRay MR-IGRT system due to the virtual inclinometer's incompatibility with the ViewRay's multiple simultaneous beams. Additionally, the ArcCHECK's current correction factors were determined without magnetic field effects taken into account. In the course of performing ViewRay IMRT quality assurance with the ArcCHECK, measurements were observed to be consistently higher than the ViewRay TPS predictions. The goals of this study were to quantify the observed discrepancies and test whether applying the current factors improves the ArcCHECK's accuracy for measurements on the ViewRay. Gamma and frequency analysis were performed on 19 ViewRay patient plans. Ion chamber measurements were performed at a subset of diode locations using a PMMA phantom with the same dimensions as the ArcCHECK. A new method for applying directionally dependent factors utilizing beam information from the ViewRay TPS was developed in order to analyze the current ArcCHECK correction factors. To test the current factors, nine ViewRay plans were altered to be delivered with only a single simultaneous beam and were measured with the ArcCHECK. The current correction factors were applied using both the new and current methods. The new method was also used to apply corrections to the original 19 ViewRay plans. It was found that the ArcCHECK systematically reports doses higher than those actually delivered by the ViewRay. Application of the current correction factors by either method did not consistently improve measurement accuracy. As dose deposition and diode response have both been shown to change under the influence of a magnetic field, it can be concluded that the current ArcCHECK correction factors are invalid and/or inadequate to correct measurements on the ViewRay system. 
© 2017 The Authors. Journal of Applied Clinical Medical Physics published by Wiley Periodicals, Inc. on behalf of American Association of Physicists in Medicine.
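The gamma analysis mentioned above scores each measured point against an evaluated dose profile by the minimum combined distance-to-agreement and dose-difference metric; a point passes when gamma ≤ 1. A 1D illustrative sketch, using 3 mm / 3% global criteria as example values:

```python
import math

def gamma_point(r_m, d_m, eval_profile, dta_mm=3.0, dd_frac=0.03, d_norm=100.0):
    """Global 1D gamma index of one measured point against an evaluated profile."""
    return min(math.sqrt(((r - r_m) / dta_mm) ** 2
                         + ((d - d_m) / (dd_frac * d_norm)) ** 2)
               for r, d in eval_profile)

# identical profiles agree perfectly: gamma = 0 at every point
profile = [(float(i), 100.0) for i in range(10)]
assert max(gamma_point(r, d, profile) for r, d in profile) == 0.0
```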

  17. Correcting reaction rates measured by saturation-transfer magnetic resonance spectroscopy

    NASA Astrophysics Data System (ADS)

    Gabr, Refaat E.; Weiss, Robert G.; Bottomley, Paul A.

    2008-04-01

    Off-resonance or spillover irradiation and incomplete saturation can introduce significant errors in the estimates of chemical rate constants measured by saturation-transfer magnetic resonance spectroscopy (MRS). Existing methods of correction are effective only over a limited parameter range. Here, a general approach of numerically solving the Bloch-McConnell equations to calculate exchange rates, relaxation times and concentrations for the saturation-transfer experiment is investigated, but found to require more measurements and higher signal-to-noise ratios than in vivo studies can practically afford. As an alternative, correction formulae for the reaction rate are provided which account for the expected parameter ranges and limited measurements available in vivo. The correction term is a quadratic function of experimental measurements. In computer simulations, the new formulae showed negligible bias and reduced the maximum error in the rate constants by about 3-fold compared to traditional formulae, and the error scatter by about 4-fold, over a wide range of parameters for conventional saturation transfer employing progressive saturation, and for the four-angle saturation-transfer method applied to the creatine kinase (CK) reaction in the human heart at 1.5 T. In normal in vivo spectra affected by spillover, the correction increases the mean calculated forward CK reaction rate by 6-16% over traditional and prior correction formulae.

  18. Effect of dispersion correction on the Au(1 1 1)-H2O interface: A first-principles study

    NASA Astrophysics Data System (ADS)

    Nadler, Roger; Sanz, Javier Fdez.

    2012-09-01

    A theoretical study of the H2O-Au(1 1 1) interface based on first principles density functional theory (DFT) calculations with and without inclusion of dispersion correction is reported. Three different computational approaches are considered. First, the standard generalized gradient approximation (GGA) functional PBE is employed. Second, an additional energy term is further included that adds a semi-empirically derived dispersion correction (PBE-D2), and, finally, a recently proposed functional that includes van der Waals (vdW) interactions directly in its functional form (optB86b-vdW) was used to represent the state-of-the-art of DFT functionals. The monomeric water adsorption was first considered in order to explore the dependency of geometry on the details of the model slab used to represent it (size, thickness, coverage). When the dispersion corrections are included, the Au-H2O interaction is stronger, as manifested by the shorter Au-O distance and stronger adsorption energies. Additionally, the interfacial region between Au(1 1 1) slab surfaces and a liquid water layer was investigated with Born-Oppenheimer molecular dynamics (BOMD) using the same functionals. Two or three interfacial orientations can be determined, depending on the theoretical methodology applied. Closest to the surface, H2O is adsorbed O-down, whereas further away it is oriented with one OH bond pointing to the surface and the molecular plane parallel to the normal direction. For the optB86b-vdW functional a third orientation is found where one H atom points into the bulk water layer and the second OH bond is oriented parallel to the metal surface. As for the water density in the first adsorption layer, we find a very small increase of roughly 8%. From the analysis of vibrational spectra a weakening of the H-bond network is observed upon the inclusion of the Au(1 1 1) slab; however, no disruption of H-bonds is observed. 
While the PBE and PBE-D2 spectra are very similar, the optB86b-vdW spectrum shows that the H-bonds are even more weakened.
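
    The PBE-D2 approach discussed above adds a damped, pairwise C6/R^6 term to the DFT total energy. A minimal sketch of that dispersion sum follows; the per-atom C6 coefficients and van der Waals radii passed in are illustrative placeholders, not Grimme's tabulated values.

```python
import math

def d2_dispersion(coords, c6, r_vdw, s6=0.75, d=20.0):
    """Grimme-D2-style pairwise dispersion energy (sketch).

    coords : atomic positions in Angstrom, list of (x, y, z)
    c6     : per-atom C6 coefficients (illustrative values)
    r_vdw  : per-atom van der Waals radii in Angstrom
    s6     : global scaling factor (0.75 is the published PBE value)
    d      : damping steepness
    """
    energy = 0.0
    n = len(coords)
    for i in range(n):
        for j in range(i + 1, n):
            rij = math.dist(coords[i], coords[j])
            c6ij = math.sqrt(c6[i] * c6[j])      # geometric-mean combination rule
            rr = r_vdw[i] + r_vdw[j]             # sum of vdW radii sets damping range
            fdmp = 1.0 / (1.0 + math.exp(-d * (rij / rr - 1.0)))
            energy -= s6 * c6ij / rij**6 * fdmp  # attractive, hence negative
    return energy
```

    The correction is always attractive, which is why the Au-H2O binding strengthens when it is included.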

  19. GET electronics samples data analysis

    NASA Astrophysics Data System (ADS)

    Giovinazzo, J.; Goigoux, T.; Anvar, S.; Baron, P.; Blank, B.; Delagnes, E.; Grinyer, G. F.; Pancin, J.; Pedroza, J. L.; Pibernat, J.; Pollacco, E.; Rebii, A.; Roger, T.; Sizun, P.

    2016-12-01

    The General Electronics for TPCs (GET) has been developed to equip a generation of time projection chamber detectors for nuclear physics, and may also be used for a wider range of detector types. The goal of this paper is to propose initial analysis procedures to be applied to raw data samples from the GET system, in order to correct for systematic effects observed in test measurements. We also present a method to estimate the response function of the GET system channels. The response function is required in analyses where the input signal needs to be reconstructed, in terms of time distribution, from the registered output samples.
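
    Once the per-channel response function is estimated, the input time distribution can be recovered from the registered samples by deconvolution. The following is a hedged sketch using regularized FFT deconvolution, not the GET collaboration's actual algorithm; the damping constant eps is an assumption.

```python
import numpy as np

def reconstruct_input(output, response, eps=1e-6):
    """Recover the input time distribution from registered output samples
    by regularized frequency-domain deconvolution (Wiener-style damping
    of spectral components where the response is weak)."""
    n = len(output)
    out_f = np.fft.rfft(output, n)
    resp_f = np.fft.rfft(response, n)
    in_f = out_f * np.conj(resp_f) / (np.abs(resp_f) ** 2 + eps)
    return np.fft.irfft(in_f, n)
```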

  20. Driving-stress waveform and the determination of rock internal friction by the stress-strain curve method.

    USGS Publications Warehouse

    Hsi-Ping, Liu

    1980-01-01

    Harmonic distortion in the stress-time function applied to rock specimens affects the measurement of rock internal friction at seismic wave periods by the stress-strain hysteresis loop method. If neglected, the harmonic distortion can cause measurements of rock internal friction to be in error by 30% in the linear range. The stress-time function therefore must be recorded and Fourier analysed for correct interpretation of the experimental data. Such a procedure would also yield a value for internal friction at the higher harmonic frequencies. -Author
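
    The recommended procedure, recording the stress-time function and Fourier analysing it, can be sketched as below; the window choice and the number of harmonics examined are illustrative assumptions, not the author's prescription.

```python
import numpy as np

def harmonic_distortion(stress, fs, f0, n_harmonics=5):
    """Estimate the harmonic distortion of a nominally sinusoidal
    stress-time record sampled at fs Hz with drive frequency f0 Hz."""
    n = len(stress)
    spectrum = np.abs(np.fft.rfft(stress * np.hanning(n)))  # windowed amplitude spectrum
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)

    def amplitude(f):
        return spectrum[np.argmin(np.abs(freqs - f))]  # nearest-bin amplitude

    fundamental = amplitude(f0)
    harmonics = [amplitude(k * f0) for k in range(2, n_harmonics + 1)]
    return float(np.sqrt(sum(h * h for h in harmonics)) / fundamental)
```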

  1. Least-Squares Neutron Spectral Adjustment with STAYSL PNNL

    NASA Astrophysics Data System (ADS)

    Greenwood, L. R.; Johnson, C. D.

    2016-02-01

    The STAYSL PNNL computer code, a descendant of the STAY'SL code [1], performs neutron spectral adjustment of a starting neutron spectrum, applying a least squares method to determine adjustments based on saturated activation rates, neutron cross sections from evaluated nuclear data libraries, and all associated covariances. STAYSL PNNL is provided as part of a comprehensive suite of programs [2], where additional tools in the suite are used for assembling a set of nuclear data libraries and determining all required corrections to the measured data to determine saturated activation rates. Neutron cross section and covariance data are taken from the International Reactor Dosimetry File (IRDF-2002) [3], which was sponsored by the International Atomic Energy Agency (IAEA), though work is planned to update to data from the IAEA's International Reactor Dosimetry and Fusion File (IRDFF) [4]. The nuclear data and associated covariances are extracted from IRDF-2002 using the third-party NJOY99 computer code [5]. The NJpp translation code converts the extracted data into a library data array format suitable for use as input to STAYSL PNNL. The software suite also includes three utilities to calculate corrections to measured activation rates. Neutron self-shielding corrections are calculated as a function of neutron energy with the SHIELD code and are applied to the group cross sections prior to spectral adjustment, thus making the corrections independent of the neutron spectrum. The SigPhi Calculator is a Microsoft Excel spreadsheet used for calculating saturated activation rates from raw gamma activities by applying corrections for gamma self-absorption, neutron burn-up, and the irradiation history. Gamma self-absorption and neutron burn-up corrections are calculated (iteratively in the case of the burn-up) within the SigPhi Calculator spreadsheet. 
The irradiation history corrections are calculated using the BCF computer code and are inserted into the SigPhi Calculator workbook for use in correcting the measured activities. Output from the SigPhi Calculator is automatically produced, and consists of a portion of the STAYSL PNNL input file data that is required to run the spectral adjustment calculations. Within STAYSL PNNL, the least-squares process is performed in one step, without iteration, and provides rapid results on PC platforms. STAYSL PNNL creates multiple output files with tabulated results, data suitable for plotting, and data formatted for use in subsequent radiation damage calculations using the SPECTER computer code (which is not included in the STAYSL PNNL suite). All components of the software suite have undergone extensive testing and validation prior to release and test cases are provided with the package.
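
    The one-step, non-iterative least-squares adjustment performed by STAYSL PNNL can be illustrated with a simplified generalized-least-squares update; this sketch adjusts only the flux and its covariance and, unlike the real code, ignores the cross-section covariances.

```python
import numpy as np

def adjust_spectrum(phi0, cov_phi, sigma, rates, cov_rates):
    """Simplified STAY'SL-type least-squares spectral adjustment.

    phi0      : prior group fluxes, shape (g,)
    cov_phi   : prior flux covariance, shape (g, g)
    sigma     : group cross sections, one row per reaction, shape (r, g)
    rates     : measured saturated activation rates, shape (r,)
    cov_rates : activation-rate covariance, shape (r, r)
    """
    residual = rates - sigma @ phi0
    gain = cov_phi @ sigma.T @ np.linalg.inv(sigma @ cov_phi @ sigma.T + cov_rates)
    phi = phi0 + gain @ residual            # adjusted spectrum
    cov = cov_phi - gain @ sigma @ cov_phi  # reduced flux covariance
    return phi, cov
```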

  2. Detector signal correction method and system

    DOEpatents

    Carangelo, Robert M.; Duran, Andrew J.; Kudman, Irwin

    1995-07-11

    Corrective factors are applied so as to remove anomalous features from the signal generated by a photoconductive detector, and to thereby render the output signal highly linear with respect to the energy of incident, time-varying radiation. The corrective factors may be applied through the use of either digital electronic data processing means or analog circuitry, or through a combination of those effects.

  3. Detector signal correction method and system

    DOEpatents

    Carangelo, R.M.; Duran, A.J.; Kudman, I.

    1995-07-11

    Corrective factors are applied so as to remove anomalous features from the signal generated by a photoconductive detector, and to thereby render the output signal highly linear with respect to the energy of incident, time-varying radiation. The corrective factors may be applied through the use of either digital electronic data processing means or analog circuitry, or through a combination of those effects. 5 figs.

  4. Improving the CD linearity and proximity performance of photomasks written on the Sigma7500-II DUV laser writer through embedded OPC

    NASA Astrophysics Data System (ADS)

    Österberg, Anders; Ivansen, Lars; Beyerl, Angela; Newman, Tom; Bowhill, Amanda; Sahouria, Emile; Schulze, Steffen

    2007-10-01

    Optical proximity correction (OPC) is widely used in wafer lithography to produce a printed image that best matches the design intent while optimizing CD control. OPC software applies corrections to the mask pattern data, but in general it does not compensate for the mask writer and mask process characteristics. The Sigma7500-II deep-UV laser mask writer projects the image of a programmable spatial light modulator (SLM) using partially coherent optics similar to wafer steppers, and the optical proximity effects of the mask writer are in principle correctable with established OPC methods. To enhance mask patterning, an embedded OPC function, LinearityEqualize™, has been developed for the Sigma7500-II that is transparent to the user and does not degrade mask throughput. It employs a Calibre™ rule-based OPC engine from Mentor Graphics, selected for the computational speed necessary for mask run-time execution. A multinode cluster computer applies optimized table-based CD corrections to polygonized pattern data that is then fractured into an internal writer format for subsequent data processing. This embedded proximity correction flattens the linearity behavior for all linewidths and pitches, with the aim of improving CD uniformity on production photomasks. Printing results show that the CD linearity error is reduced to below 5 nm for linewidths down to 200 nm, for both clear and dark tones and for isolated and dense features, and that sub-resolution assist features (SRAFs) are reliably printed down to 120 nm. This reduction of proximity effects for main mask features and the extension of the practical resolution for SRAFs expands the application space of DUV laser mask writing.
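
    The table-based CD correction described above amounts to looking up the expected linearity bias for a given feature size and pre-compensating the pattern data. A minimal sketch with hypothetical bias values (the real tables are also indexed by pitch and tone):

```python
import numpy as np

# Hypothetical CD-linearity bias table: expected printed-vs-drawn bias (nm)
# as a function of drawn linewidth (nm).
LINEWIDTHS = np.array([200.0, 400.0, 800.0, 1600.0])
BIASES = np.array([6.0, 3.0, 1.0, 0.0])

def precorrect(drawn_nm):
    """Subtract the interpolated expected bias from the drawn linewidth
    so that the printed CD lands on target."""
    return drawn_nm - np.interp(drawn_nm, LINEWIDTHS, BIASES)
```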

  5. Application of overlay modeling and control with Zernike polynomials in an HVM environment

    NASA Astrophysics Data System (ADS)

    Ju, JaeWuk; Kim, MinGyu; Lee, JuHan; Nabeth, Jeremy; Jeon, Sanghuck; Heo, Hoyoung; Robinson, John C.; Pierson, Bill

    2016-03-01

    Shrinking technology nodes and smaller process margins require improved photolithography overlay control. Generally, overlay measurement results are modeled with Cartesian polynomial functions for both intra-field and inter-field models, and the model coefficients are sent to an advanced process control (APC) system operating in an XY Cartesian basis. Dampened overlay corrections, typically via an exponentially or linearly weighted moving average in time, are then retrieved from the APC system and applied on the scanner in XY Cartesian form for subsequent lot exposure. The goal of the above method is to process lots with corrections that target the least possible overlay misregistration in steady state as well as in change-point situations. In this study, we model overlay errors on product using Zernike polynomials with the same fitting capability as the process of reference (POR) to represent the wafer-level terms, and use the standard Cartesian polynomials to represent the field-level terms. APC calculations for wafer-level correction are performed in the Zernike basis, while field-level calculations use the standard XY Cartesian basis. Finally, weighted wafer-level correction terms are converted to XY Cartesian space in order to be applied on the scanner, along with field-level corrections, for future wafer exposures. Since Zernike polynomials are orthogonal on the unit disk, we are able to reduce the amount of collinearity between terms and improve overlay stability. Our real-time Zernike modeling and feedback evaluation was performed on a 20-lot dataset in a high volume manufacturing (HVM) environment. The measured on-product results were compared to the POR and showed a 7% reduction in overlay variation, including a 22% reduction in term variation. This led to an on-product raw overlay Mean + 3Sigma X&Y improvement of 5% and a 0.1% yield improvement.
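
    Wafer-level Zernike modeling reduces, in essence, to a least-squares fit of the measured overlay errors against a few Zernike basis terms. A sketch with only the lowest-order terms (the production model uses many more, and wafer coordinates are assumed normalized to the unit disk):

```python
import numpy as np

def zernike_basis(x, y):
    """A few low-order Zernike polynomials on the unit disk."""
    r2 = x ** 2 + y ** 2
    return np.column_stack([
        np.ones_like(x),   # piston
        x,                 # tilt x
        y,                 # tilt y
        2.0 * r2 - 1.0,    # defocus
    ])

def fit_overlay(x, y, error):
    """Least-squares Zernike coefficients for one overlay component;
    the coefficients become the APC wafer-level correction terms."""
    coeffs, *_ = np.linalg.lstsq(zernike_basis(x, y), error, rcond=None)
    return coeffs
```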

  6. Simulating correction of adjustable optics for an x-ray telescope

    NASA Astrophysics Data System (ADS)

    Aldcroft, Thomas L.; Schwartz, Daniel A.; Reid, Paul B.; Cotroneo, Vincenzo; Davis, William N.

    2012-10-01

    The next generation of large X-ray telescopes with sub-arcsecond resolution will require very thin, highly nested grazing incidence optics. To correct the low order figure errors resulting from initial manufacture, the mounting process, and the effects of going from 1 g during ground alignment to zero g on-orbit, we plan to adjust the shapes via piezoelectric "cells" deposited on the backs of the reflecting surfaces. This presentation investigates how well the corrections might be made. We take a benchmark conical glass element, 410×205 mm, with a 20×20 array of piezoelectric cells 19×9 mm in size. We use finite element analysis to calculate the influence function of each cell. We then simulate the correction via pseudo matrix inversion to calculate the stress to be applied by each cell, considering distortion due to gravity as calculated by finite element analysis, and by putative low order manufacturing distortions described by Legendre polynomials. We describe our algorithm and its performance, and the implications for the sensitivity of the resulting slope errors to the optimization strategy.
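
    The correction step, solving for the cell stresses that best cancel a measured figure error given finite-element influence functions, is a classic pseudo-inverse least-squares problem. A minimal sketch:

```python
import numpy as np

def correction_commands(influence, figure_error):
    """Least-squares actuator commands via the Moore-Penrose pseudo-inverse.

    influence    : (n_points, n_cells) matrix; column j is the figure change
                   per unit stress applied by piezoelectric cell j
    figure_error : (n_points,) measured low-order figure error to cancel
    """
    commands = np.linalg.pinv(influence) @ (-figure_error)
    residual = figure_error + influence @ commands  # what the correction leaves behind
    return commands, residual
```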

  7. Particle swarm optimization applied to automatic lens design

    NASA Astrophysics Data System (ADS)

    Qin, Hua

    2011-06-01

    This paper describes a novel application of the Particle Swarm Optimization (PSO) technique to lens design. A mathematical model is constructed, and merit functions of an optical system, combining the radii of curvature, the thicknesses between lens surfaces, and the refractive indices, are employed as fitness functions. Using these functions, aberration correction is carried out. A design example using PSO is given. Results show that PSO is a practical and powerful optical design tool: the method no longer depends on the initial lens structure and can freely set the search ranges of the structural parameters of a lens system, an important step towards automatic design with artificial intelligence.
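
    A minimal particle swarm optimizer of the kind described, minimizing a scalar merit function over bounded structural parameters, can be sketched as follows; the inertia and acceleration constants are common textbook defaults, not the paper's settings.

```python
import numpy as np

def pso(merit, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimizer for a scalar merit function."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    pos = rng.uniform(lo, hi, (n_particles, len(lo)))
    vel = np.zeros_like(pos)
    pbest = pos.copy()                                  # personal best positions
    pbest_val = np.array([merit(p) for p in pos])
    gbest = pbest[pbest_val.argmin()].copy()            # global best position
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, len(lo)))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, lo, hi)                # keep particles in bounds
        vals = np.array([merit(p) for p in pos])
        better = vals < pbest_val
        pbest[better] = pos[better]
        pbest_val[better] = vals[better]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, float(pbest_val.min())
```

    In a lens-design setting, merit would evaluate aberrations for a candidate parameter vector and bounds would delimit the allowed radii, thicknesses, and indices.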

  8. Analysis of pressure distortion testing

    NASA Technical Reports Server (NTRS)

    Koch, K. E.; Rees, R. L.

    1976-01-01

    The development of a distortion methodology, method D, was documented, and its application to steady state and unsteady data was demonstrated. Three methodologies based upon DIDENT, a NASA-LeRC distortion methodology based upon the parallel compressor model, were investigated by applying them to a set of steady state data. The best formulation was then applied to an independent data set. The good correlation achieved with this data set showed that method E, one of the above methodologies, is a viable concept. Unsteady data were analyzed by using the method E methodology. This analysis pointed out that the method E sensitivities are functions of pressure defect level as well as corrected speed and pattern.

  9. Peptide identification

    DOEpatents

    Jarman, Kristin H [Richland, WA; Cannon, William R [Richland, WA; Jarman, Kenneth D [Richland, WA; Heredia-Langner, Alejandro [Richland, WA

    2011-07-12

    Peptides are identified from a list of candidates using collision-induced dissociation tandem mass spectrometry data. A probabilistic model for the occurrence of spectral peaks corresponding to frequently observed partial peptide fragment ions is applied. As part of the identification procedure, a probability score is produced that indicates the likelihood of any given candidate being the correct match. The statistical significance of the score is known without necessarily having reference to the actual identity of the peptide. In one form of the invention, a genetic algorithm is applied to candidate peptides using an objective function that takes into account the number of shifted peaks appearing in the candidate spectrum relative to the test spectrum.

  10. Predictions of nucleation theory applied to Ehrenfest thermodynamic transitions

    NASA Technical Reports Server (NTRS)

    Barker, R. E., Jr.; Campbell, K. W.

    1984-01-01

    A modified nucleation theory is used to determine a critical nucleus size and a critical activation-energy barrier for second-order Ehrenfest thermodynamic transitions as functions of the degree of undercooling, the interfacial energy, the heat-capacity difference, the specific volume of the transformed phase, and the equilibrium transition temperature. The customary approximations of nucleation theory are avoided by expanding the Gibbs free energy in a Maclaurin series and applying analytical thermodynamic expressions to evaluate the expansion coefficients. Nonlinear correction terms for first-order-transition calculations are derived, and numerical results are presented graphically for water and polystyrene as examples of first-order and quasi-second-order transitions, respectively.

  11. Computational logic: its origins and applications

    PubMed Central

    2018-01-01

    Computational logic is the use of computers to establish facts in a logical formalism. Originating in nineteenth century attempts to understand the nature of mathematical reasoning, the subject now comprises a wide variety of formalisms, techniques and technologies. One strand of work follows the ‘logic for computable functions (LCF) approach’ pioneered by Robin Milner, where proofs can be constructed interactively or with the help of users’ code (which does not compromise correctness). A refinement of LCF, called Isabelle, retains these advantages while providing flexibility in the choice of logical formalism and much stronger automation. The main application of these techniques has been to prove the correctness of hardware and software systems, but increasingly researchers have been applying them to mathematics itself. PMID:29507522

  12. Dynamic Computation Offloading for Low-Power Wearable Health Monitoring Systems.

    PubMed

    Kalantarian, Haik; Sideris, Costas; Mortazavi, Bobak; Alshurafa, Nabil; Sarrafzadeh, Majid

    2017-03-01

    The objective of this paper is to describe and evaluate an algorithm to reduce power usage and increase battery lifetime for wearable health-monitoring devices. We describe a novel dynamic computation offloading scheme for real-time wearable health monitoring devices that adjusts the partitioning of data processing between the wearable device and mobile application as a function of desired classification accuracy. By making the correct offloading decision based on current system parameters, we show that we are able to reduce system power by as much as 20%. We demonstrate that computation offloading can be applied to real-time monitoring systems, and yields significant power savings. Making correct offloading decisions for health monitoring devices can extend battery life and improve adherence.
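
    The offloading decision reduces to choosing, among candidate partitionings of the processing pipeline, the one with the lowest estimated power that still meets the desired classification accuracy. A sketch with hypothetical power and accuracy numbers:

```python
def choose_partitioning(options, min_accuracy):
    """Pick the lowest-power partitioning that meets the accuracy target.

    options : list of dicts with keys 'name', 'power_mw', 'accuracy'
    """
    feasible = [o for o in options if o["accuracy"] >= min_accuracy]
    if not feasible:
        raise ValueError("no partitioning meets the accuracy target")
    return min(feasible, key=lambda o: o["power_mw"])

# Hypothetical operating points for a wearable classifier.
OPTIONS = [
    {"name": "all-on-device", "power_mw": 95.0, "accuracy": 0.89},
    {"name": "offload-features", "power_mw": 76.0, "accuracy": 0.93},
    {"name": "offload-raw-data", "power_mw": 120.0, "accuracy": 0.95},
]
```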

  13. Probability of failure prediction for step-stress fatigue under sine or random stress

    NASA Technical Reports Server (NTRS)

    Lambert, R. G.

    1979-01-01

    A previously proposed cumulative fatigue damage law is extended to predict the probability of failure or fatigue life for structural materials with S-N fatigue curves represented as a scatterband of failure points. The proposed law applies to structures subjected to sinusoidal or random stresses and includes the effect of initial crack (i.e., flaw) sizes. The corrected cycle ratio damage function is shown to have physical significance.

  14. Mineralogical Mapping of Asteroid Itokawa using Calibrated Hayabusa AMICA images and NIRS Spectrometer Data

    NASA Astrophysics Data System (ADS)

    Le Corre, Lucille; Becker, Kris J.; Reddy, Vishnu; Li, Jian-Yang; Bhatt, Megha

    2016-10-01

    The goal of our work is to restore data from the Hayabusa spacecraft that is available in the Planetary Data System (PDS) Small Bodies Node. More specifically, our objectives are to radiometrically calibrate and photometrically correct AMICA (Asteroid Multi-Band Imaging Camera) images of Itokawa. The existing images archived in the PDS are not in reflectance units and are not corrected for the effect of viewing geometry. AMICA images are processed with the Integrated Software for Imagers and Spectrometers (ISIS) system from the USGS, widely used for planetary image analysis. The processing consists of ingesting the images into ISIS (amica2isis), updating the AMICA start times (sumspice), radiometric calibration (amicacal) including smear correction, applying SPICE ephemerides, adjusting control using Gaskell SUMFILEs (sumspice), projecting individual images (cam2map), and creating global or local mosaics. The amicacal application also has an option to remove pixels corresponding to the polarizing filters on the left side of the image frame, and it will include a correction for the point spread function (PSF). The latest version of the PSF, published by Ishiguro et al. in 2014, includes a correction for the effect of scattered light. This effect is important to correct because it can introduce errors at the 10% level and mostly affects the longer-wavelength filters such as zs and p. The Hayabusa team decided to use the color data for six of the filters for scientific analysis after correcting for the scattered light. We will present calibrated data in I/F for all seven AMICA color filters. All newly implemented ISIS applications and map projections from this work have been or will be distributed to the community via ISIS public releases. We also processed the NIRS spectrometer data, and we will perform photometric modeling, then apply photometric corrections, and finally extract mineralogical parameters. 
The end results will be the creation of pyroxene chemistry and olivine/pyroxene ratio maps of Itokawa using NIRS and AMICA map products. All the products from this work will be archived on the PDS website. This work was supported by NASA Planetary Missions Data Analysis Program grant NNX13AP27G.

  15. Correcting Biases in a lower resolution global circulation model with data assimilation

    NASA Astrophysics Data System (ADS)

    Canter, Martin; Barth, Alexander

    2016-04-01

    With this work, we aim at developing a new method of bias correction using data assimilation. This method is based on the stochastic forcing of a model to correct bias. First, through a preliminary run, we estimate the bias of the model and its possible sources. Then, we establish a forcing term which is directly added inside the model's equations. We create an ensemble of runs and consider the forcing term as a control variable during the assimilation of observations. We then use this analysed forcing term to correct the bias of the model. Since the forcing is added inside the model, it acts as a source term, unlike external forcings such as wind. This procedure has been developed and successfully tested with a twin experiment on a Lorenz 95 model. It is currently being applied and tested on the sea ice ocean NEMO LIM model, which is used in the PredAntar project. NEMO LIM is a global, low-resolution (2 degrees) coupled model (hydrodynamic model and sea ice model) with long time steps allowing simulations over several decades. Due to its low resolution, the model is subject to bias in areas where strong currents are present. We aim at correcting this bias by using perturbed current fields from higher resolution models and randomly generated perturbations. The random perturbations need to be constrained in order to respect the physical properties of the ocean, and not create unwanted phenomena. To construct those random perturbations, we first create a random field with the Diva tool (Data-Interpolating Variational Analysis). Using a cost function, this tool penalizes abrupt variations in the field, while using a custom correlation length. It also decouples disconnected areas based on topography. Then, we filter the field to smooth it and remove small-scale variations. We use this field as a random stream function, and take its derivatives to get zonal and meridional velocity fields. 
    We also constrain the stream function along the coasts so that no currents flow perpendicular to the coast. The randomly generated stochastic forcings are then injected directly into the NEMO LIM model's equations in order to force the model at each timestep, and not only during the assimilation step. Results from a twin experiment will be presented. This method is being applied to a real case, with sea surface height observations available from the mean dynamic topography of CNES (Centre national d'études spatiales). The model, the bias correction, and more extensive forcings, in particular with a three-dimensional structure and a time-varying component, will also be presented.
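
    Taking derivatives of the random stream function to obtain non-divergent zonal and meridional velocity perturbations can be sketched with centered finite differences; the sign convention (u = -dψ/dy, v = dψ/dx) is one common choice and guarantees zero divergence.

```python
import numpy as np

def velocities_from_streamfunction(psi, dx, dy):
    """Non-divergent velocity perturbations from a stream function psi
    sampled on a regular grid (rows vary with y, columns with x)."""
    dpsi_dy, dpsi_dx = np.gradient(psi, dy, dx)  # axis 0 is y, axis 1 is x
    u = -dpsi_dy  # zonal component
    v = dpsi_dx   # meridional component
    return u, v
```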

  16. Improving Rydberg Excitations within Time-Dependent Density Functional Theory with Generalized Gradient Approximations: The Exchange-Enhancement-for-Large-Gradient Scheme.

    PubMed

    Li, Shaohong L; Truhlar, Donald G

    2015-07-14

    Time-dependent density functional theory (TDDFT) with conventional local and hybrid functionals such as the local and hybrid generalized gradient approximations (GGA) seriously underestimates the excitation energies of Rydberg states, which limits its usefulness for applications such as spectroscopy and photochemistry. We present here a scheme that modifies the exchange-enhancement factor to improve GGA functionals for Rydberg excitations within the TDDFT framework while retaining their accuracy for valence excitations and for the thermochemical energetics calculated by ground-state density functional theory. The scheme is applied to a popular hybrid GGA functional and tested on data sets of valence and Rydberg excitations and atomization energies, and the results are encouraging. The scheme is simple and flexible. It can be used to correct existing functionals, and it can also be used as a strategy for the development of new functionals.
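
    The idea of the scheme, keeping the GGA exchange-enhancement factor F(s) unchanged at small reduced gradient s while boosting it at large s, can be caricatured as below. The constants mu and kappa are the published PBE values, but the switching function, threshold s0, and boost alpha are invented for illustration; they are not the paper's parametrization.

```python
import math

def enhancement_factor(s, mu=0.2195, kappa=0.804, s0=6.0, alpha=0.5):
    """PBE-style exchange-enhancement factor, smoothly switched to a
    boosted form beyond a threshold reduced gradient (illustrative)."""
    f_pbe = 1.0 + kappa - kappa / (1.0 + mu * s * s / kappa)
    f_large = (1.0 + alpha) * f_pbe             # hypothetical large-s boost
    switch = 1.0 / (1.0 + math.exp(-(s - s0)))  # smooth step centered at s0
    return (1.0 - switch) * f_pbe + switch * f_large
```

    Small-s behavior, and hence valence and thermochemical accuracy, is essentially untouched, while the large-gradient exchange is enhanced.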

  17. Improving Rydberg Excitations within Time-Dependent Density Functional Theory with Generalized Gradient Approximations: The Exchange-Enhancement-for-Large-Gradient Scheme

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Shaohong L.; Truhlar, Donald G.

    Time-dependent density functional theory (TDDFT) with conventional local and hybrid functionals such as the local and hybrid generalized gradient approximations (GGA) seriously underestimates the excitation energies of Rydberg states, which limits its usefulness for applications such as spectroscopy and photochemistry. We present here a scheme that modifies the exchange-enhancement factor to improve GGA functionals for Rydberg excitations within the TDDFT framework while retaining their accuracy for valence excitations and for the thermochemical energetics calculated by ground-state density functional theory. The scheme is applied to a popular hybrid GGA functional and tested on data sets of valence and Rydberg excitations and atomization energies, and the results are encouraging. The scheme is simple and flexible. It can be used to correct existing functionals, and it can also be used as a strategy for the development of new functionals.

  18. Improving Rydberg Excitations within Time-Dependent Density Functional Theory with Generalized Gradient Approximations: The Exchange-Enhancement-for-Large-Gradient Scheme

    DOE PAGES

    Li, Shaohong L.; Truhlar, Donald G.

    2015-05-22

    Time-dependent density functional theory (TDDFT) with conventional local and hybrid functionals such as the local and hybrid generalized gradient approximations (GGA) seriously underestimates the excitation energies of Rydberg states, which limits its usefulness for applications such as spectroscopy and photochemistry. We present here a scheme that modifies the exchange-enhancement factor to improve GGA functionals for Rydberg excitations within the TDDFT framework while retaining their accuracy for valence excitations and for the thermochemical energetics calculated by ground-state density functional theory. The scheme is applied to a popular hybrid GGA functional and tested on data sets of valence and Rydberg excitations and atomization energies, and the results are encouraging. The scheme is simple and flexible. It can be used to correct existing functionals, and it can also be used as a strategy for the development of new functionals.

  19. Response functions of Fuji imaging plates to monoenergetic protons in the energy range 0.6-3.2 MeV

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bonnet, T.; Denis-Petit, D.; Gobet, F.

    2013-01-15

    We have measured the responses of Fuji MS, SR, and TR imaging plates (IPs) to protons with energies ranging from 0.6 to 3.2 MeV. Monoenergetic protons were produced with the 3.5 MV AIFIRA (Applications Interdisciplinaires de Faisceaux d'Ions en Region Aquitaine) accelerator at the Centre d'Etudes Nucleaires de Bordeaux Gradignan (CENBG). The IPs were irradiated with protons backscattered off a tantalum target. We present the photo-stimulated luminescence response of the IPs together with the fading measurements for these IPs. A method is applied to allow correction of fading effects for variable proton irradiation duration. Using the IP fading corrections, a model of the IP response function to protons was developed. The model enables extrapolation of the IP response to protons up to proton energies of 10 MeV. Our work is finally compared to previous works conducted on the Fuji TR IP response to protons.
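
    Correcting for fading under a finite irradiation amounts to dividing the scanned PSL value by the fading factor averaged over the signal deposition times. A sketch with a two-exponential fading model; the amplitudes and time constants are placeholders, not the fitted Fuji-IP values.

```python
import math

def fading_factor(t_min, a=0.55, tau1=40.0, b=0.45, tau2=3000.0):
    """Fraction of the photo-stimulated luminescence remaining t_min
    minutes after exposure (two-exponential model, placeholder values)."""
    return a * math.exp(-t_min / tau1) + b * math.exp(-t_min / tau2)

def corrected_psl(measured_psl, t_irr, t_wait, n_steps=1000):
    """Correct a scanned PSL value for fading during an irradiation of
    duration t_irr (minutes) followed by a delay t_wait before scanning:
    average the fading factor over the deposition times."""
    dt = t_irr / n_steps
    mean_fading = sum(
        fading_factor(t_wait + (i + 0.5) * dt) for i in range(n_steps)
    ) / n_steps
    return measured_psl / mean_fading
```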

  20. Method and system for photoconductive detector signal correction

    DOEpatents

    Carangelo, Robert M.; Hamblen, David G.; Brouillette, Carl R.

    1992-08-04

    A corrective factor is applied so as to remove anomalous features from the signal generated by a photoconductive detector, and to thereby render the output signal highly linear with respect to the energy of incident, time-varying radiation. The corrective factor may be applied through the use of either digital electronic data processing means or analog circuitry, or through a combination of those effects.

  1. Method and system for photoconductive detector signal correction

    DOEpatents

    Carangelo, R.M.; Hamblen, D.G.; Brouillette, C.R.

    1992-08-04

    A corrective factor is applied so as to remove anomalous features from the signal generated by a photoconductive detector, and to thereby render the output signal highly linear with respect to the energy of incident, time-varying radiation. The corrective factor may be applied through the use of either digital electronic data processing means or analog circuitry, or through a combination of those effects. 5 figs.

  2. One-loop quantum gravity repulsion in the early Universe.

    PubMed

    Broda, Bogusław

    2011-03-11

    Perturbative quantum gravity formalism is applied to compute the lowest order corrections to the classical spatially flat cosmological Friedmann-Lemaître-Robertson-Walker solution (for radiation). The presented approach is analogous to the approach applied to compute quantum corrections to the Coulomb potential in electrodynamics, or rather to the approach applied to compute quantum corrections to the Schwarzschild solution in gravity. In the framework of standard perturbative quantum gravity, it is shown that the corrections to the classical deceleration, coming from the one-loop graviton vacuum polarization (self-energy), have (UV-cutoff-free) repulsive properties, opposite to the classical behavior, which are not negligible in the very early Universe. The repulsive "quantum forces" resemble those known from loop quantum cosmology.

  3. Unified double- and single-sided homogeneous Green’s function representations

    PubMed Central

    van der Neut, Joost; Slob, Evert

    2016-01-01

    In wave theory, the homogeneous Green’s function consists of the impulse response to a point source, minus its time-reversal. It can be represented by a closed boundary integral. In many practical situations, the closed boundary integral needs to be approximated by an open boundary integral because the medium of interest is often accessible from one side only. The inherent approximations are acceptable as long as the effects of multiple scattering are negligible. However, in case of strongly inhomogeneous media, the effects of multiple scattering can be severe. We derive double- and single-sided homogeneous Green’s function representations. The single-sided representation applies to situations where the medium can be accessed from one side only. It correctly handles multiple scattering. It employs a focusing function instead of the backward propagating Green’s function in the classical (double-sided) representation. When reflection measurements are available at the accessible boundary of the medium, the focusing function can be retrieved from these measurements. Throughout the paper, we use a unified notation which applies to acoustic, quantum-mechanical, electromagnetic and elastodynamic waves. We foresee many interesting applications of the unified single-sided homogeneous Green’s function representation in holographic imaging and inverse scattering, time-reversed wave field propagation and interferometric Green’s function retrieval. PMID:27436983

  4. Unified double- and single-sided homogeneous Green's function representations

    NASA Astrophysics Data System (ADS)

    Wapenaar, Kees; van der Neut, Joost; Slob, Evert

    2016-06-01

    In wave theory, the homogeneous Green's function consists of the impulse response to a point source, minus its time-reversal. It can be represented by a closed boundary integral. In many practical situations, the closed boundary integral needs to be approximated by an open boundary integral because the medium of interest is often accessible from one side only. The inherent approximations are acceptable as long as the effects of multiple scattering are negligible. However, in case of strongly inhomogeneous media, the effects of multiple scattering can be severe. We derive double- and single-sided homogeneous Green's function representations. The single-sided representation applies to situations where the medium can be accessed from one side only. It correctly handles multiple scattering. It employs a focusing function instead of the backward propagating Green's function in the classical (double-sided) representation. When reflection measurements are available at the accessible boundary of the medium, the focusing function can be retrieved from these measurements. Throughout the paper, we use a unified notation which applies to acoustic, quantum-mechanical, electromagnetic and elastodynamic waves. We foresee many interesting applications of the unified single-sided homogeneous Green's function representation in holographic imaging and inverse scattering, time-reversed wave field propagation and interferometric Green's function retrieval.

  5. Measurement of regional cerebral blood flow with copper-62-PTSM and a three-compartment model.

    PubMed

    Okazawa, H; Yonekura, Y; Fujibayashi, Y; Mukai, T; Nishizawa, S; Magata, Y; Ishizu, K; Tamaki, N; Konishi, J

    1996-07-01

    We evaluated quantitatively 62Cu-labeled pyruvaldehyde bis(N4-methylthiosemicarbazone) copper II (62Cu-PTSM) as a brain perfusion tracer for positron emission tomography (PET). For quantitative measurement, the octanol extraction method is needed to correct for arterial radioactivity in estimating the lipophilic input function, but the procedure is not practical for clinical studies. To measure regional cerebral blood flow (rCBF) by 62Cu-PTSM with simple arterial blood sampling, a standard curve of the octanol extraction ratio and a three-compartment model were applied. We performed both 15O-labeled water PET and 62Cu-PTSM PET with dynamic data acquisition and arterial sampling in six subjects. Data obtained in 10 subjects studied previously were used for the standard octanol extraction curve. Arterial activity was measured and corrected to obtain the true input function using the standard curve. Graphical analysis (Gjedde-Patlak plot) with the data for each subject fitted by a straight regression line suggested that 62Cu-PTSM can be analyzed by the three-compartment model with negligible K4. Using this model, K1-K3 were estimated from curve fitting of the cerebral time-activity curve and the corrected input function. The fractional uptake of 62Cu-PTSM was corrected to rCBF with the individual extraction at steady state calculated from K1-K3. The influx rates (Ki) obtained from the three-compartment model and graphical analyses were compared for validation of the model. A comparison of rCBF values obtained from the 62Cu-PTSM and 15O-water studies demonstrated excellent correlation. The results suggest the potential feasibility of quantitation of cerebral perfusion with 62Cu-PTSM accompanied by dynamic PET and simple arterial sampling.
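The Gjedde-Patlak graphical analysis mentioned above can be sketched in a few lines: for an irreversibly trapped tracer (negligible K4), the plot of tissue-to-plasma ratio against normalized integrated plasma activity becomes linear, with slope equal to the influx rate Ki. All numbers below are synthetic illustrations, not values from the study.

```python
import numpy as np

# Sketch of Gjedde-Patlak analysis: plot C_tissue(t)/C_p(t) versus
# (integral of C_p)/C_p(t); the slope of the linear phase estimates Ki.
t = np.linspace(0.1, 60.0, 300)                # minutes
Cp = 10.0 * np.exp(-0.1 * t) + 1.0             # assumed plasma input function
Ki_true, V0 = 0.04, 0.5                        # assumed influx rate and initial volume

int_Cp = np.cumsum(Cp) * (t[1] - t[0])         # running integral of the input
Ct = Ki_true * int_Cp + V0 * Cp                # tissue curve implied by the model

x = int_Cp / Cp                                # Patlak abscissa ("stretched time")
y = Ct / Cp                                    # Patlak ordinate
slope, intercept = np.polyfit(x, y, 1)         # straight-line fit
assert abs(slope - Ki_true) < 1e-6             # slope recovers Ki
```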

  6. Understanding the difference in cohesive energies between alpha and beta tin in DFT calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Legrain, Fleur; Manzhos, Sergei, E-mail: mpemanzh@nus.edu.sg

    2016-04-15

    The transition temperature between the low-temperature alpha phase of tin and beta tin is close to room temperature (Tαβ = 13 °C), and the difference in cohesive energy of the two phases at 0 K, about ΔEcoh = 0.02 eV/atom, is at the limit of the accuracy of DFT (density functional theory) with available exchange-correlation functionals. It is however critically important to model the relative phase energies correctly for any reasonable description of phenomena and technologies involving these phases, for example, the performance of tin electrodes in electrochemical batteries. Here, we show that several commonly used and converged DFT setups using the most practical and widely used PBE functional result in ΔEcoh ≈ 0.04 eV/atom, with different types of basis sets and with different models of core electrons (all-electron or pseudopotentials of different types), which leads to a significant overestimation of Tαβ. We show that this is due to errors in the relative positions of s- and p-like bands, which, combined with different populations of these bands in α and β Sn, leads to overstabilization of alpha tin. We show that this error can be effectively corrected by applying a Hubbard +U correction to s-like states, whereby correct cohesive energies of both α and β Sn can be obtained with the same computational scheme. We quantify for the first time the effects of anharmonicity on ΔEcoh and find that they are negligible.

  7. [CORRECTION OF VARUS KNEE WITH REDUCTION OSTEOTOMY DURING TOTAL KNEE ARTHROPLASTY].

    PubMed

    Su, Weiping; Xie, Jie; Li, Mingqing; Zeng, Min; Lei, Pengfei; Wang, Long; Hu, Yihe

    2015-12-01

    To evaluate the effectiveness of reduction osteotomy for correction of varus knee during total knee arthroplasty. A retrospective analysis was made on the clinical data of 16 patients (24 knees) who received reduction osteotomy for correcting varus knee during total knee arthroplasty between May 2010 and July 2012. There were 2 males (3 knees) and 14 females (21 knees), with an average age of 67 years (range, 57-79 years). The disease duration ranged from 3 to 15 years (mean, 9.1 years). The Knee Society Score (KSS) was 38.71 ± 10.04 for clinical score and 50.31 ± 14.31 for functional score. The range of motion (ROM) of the knee was (91.88 ± 13.01)°. The tibiofemoral angle was (9.04 ± 4.53)° of varus deformity. Reduction osteotomy was applied to correct varus knee. The operation time was 85-245 minutes (mean, 165.5 minutes); the intraoperative blood loss was 10-800 mL (mean, 183.1 mL); the hospitalization time was 8-22 days (mean, 13.6 days). All incisions healed by first intention. No neurovascular injury or patellar fracture occurred. The follow-up duration ranged from 37 to 62 months (mean, 48 months). The tibiofemoral angle was corrected to (3.92 ± 1.89)° of valgus at 48 hours after operation. The lower limb alignment recovered to normal. The X-ray films showed no evidence of obvious radiolucent line, osteolysis, or prosthesis subsidence. The results of KSS were significantly improved to 84.21 ± 6.49 for clinical score and 85.31 ± 6.95 for functional score (t = 20.665, P = 0.000; t = 9.585, P = 0.000); and ROM of the knee was significantly increased to (105.83 ± 11.29)° (t = 8.333, P = 0.000) at last follow-up. The effectiveness of reduction osteotomy for varus knee deformity during total knee arthroplasty is satisfactory. Proper alignment, ROM, and function of knee can be achieved.

  8. Combining MRI With PET for Partial Volume Correction Improves Image-Derived Input Functions in Mice

    NASA Astrophysics Data System (ADS)

    Evans, Eleanor; Buonincontri, Guido; Izquierdo, David; Methner, Carmen; Hawkes, Rob C.; Ansorge, Richard E.; Krieg, Thomas; Carpenter, T. Adrian; Sawiak, Stephen J.

    2015-06-01

    Accurate kinetic modelling using dynamic PET requires knowledge of the tracer concentration in plasma, known as the arterial input function (AIF). AIFs are usually determined by invasive blood sampling, but this is prohibitive in murine studies due to low total blood volumes. As a result of the low spatial resolution of PET, image-derived input functions (IDIFs) must be extracted from left ventricular blood pool (LVBP) ROIs of the mouse heart. This is challenging because of partial volume and spillover effects between the LVBP and myocardium, contaminating IDIFs with tissue signal. We have applied the geometric transfer matrix (GTM) method of partial volume correction (PVC) to 12 mice injected with 18F-FDG and affected by myocardial infarction (MI), of which 6 were treated with a drug which reduced infarction size [1]. We utilised high-resolution MRI to assist in segmenting mouse hearts into 5 classes: LVBP, infarcted myocardium, healthy myocardium, lungs/body and background. The signal contribution from these 5 classes was convolved with the point spread function (PSF) of the Cambridge split magnet PET scanner and a non-linear fit was performed on the 5 measured signal components. The corrected IDIF was taken as the fitted LVBP component. It was found that the GTM PVC method could recover an IDIF with less contamination from spillover than an IDIF extracted from PET data alone. More realistic values of Ki were achieved using GTM IDIFs, which were shown to be significantly different (p < 0.05) between the treated and untreated groups.
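The core of the GTM idea described above is a small linear system: the observed mean in each ROI is a PSF-weighted mixture of the true activities of all tissue classes, so the true values (including the spillover-corrected blood-pool signal) are recovered by inverting the mixing matrix. The 3-class matrix and activities below are illustrative assumptions, not measured values.

```python
import numpy as np

# Sketch of GTM partial volume correction: observed = W @ true,
# where W[i, j] is the fraction of (PSF-smeared) class j seen in ROI i.
W = np.array([
    [0.80, 0.15, 0.05],   # LV blood pool ROI: mostly blood, some myocardial spill-in
    [0.20, 0.70, 0.10],   # myocardium ROI
    [0.05, 0.10, 0.85],   # background ROI
])
true_activity = np.array([100.0, 40.0, 5.0])   # assumed ground truth (kBq/mL)
observed = W @ true_activity                   # what the ROI means would measure

recovered = np.linalg.solve(W, observed)       # GTM correction: invert the mixing
assert np.allclose(recovered, true_activity)
```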

  9. Multimodal Randomized Functional MR Imaging of the Effects of Methylene Blue in the Human Brain.

    PubMed

    Rodriguez, Pavel; Zhou, Wei; Barrett, Douglas W; Altmeyer, Wilson; Gutierrez, Juan E; Li, Jinqi; Lancaster, Jack L; Gonzalez-Lima, Francisco; Duong, Timothy Q

    2016-11-01

    Purpose: To investigate the sustained-attention and memory-enhancing neural correlates of the oral administration of methylene blue in the healthy human brain. Materials and Methods: The institutional review board approved this prospective, HIPAA-compliant, randomized, double-blinded, placebo-controlled clinical trial, and all subjects provided informed consent. Twenty-six subjects (age range, 22-62 years) were enrolled. Functional magnetic resonance (MR) imaging was performed with a psychomotor vigilance task (sustained attention) and delayed match-to-sample tasks (short-term memory) before and 1 hour after administration of low-dose methylene blue or a placebo. Cerebrovascular reactivity effects were also measured with the carbon dioxide challenge, and a 2 × 2 repeated-measures analysis of variance was performed with drug (methylene blue vs placebo) and time (before vs after administration) as factors to assess drug × time between-group interactions. Multiple comparison correction was applied, with cluster-corrected P < .05 indicating a significant difference. Results: Administration of methylene blue increased response in the bilateral insular cortex during a psychomotor vigilance task (Z = 2.9-3.4, P = .01-.008) and functional MR imaging response during a short-term memory task involving the prefrontal, parietal, and occipital cortex (Z = 2.9-4.2, P = .03-.0003). Methylene blue was also associated with a 7% increase in correct responses during memory retrieval (P = .01). Conclusion: Low-dose methylene blue can increase functional MR imaging activity during sustained attention and short-term memory tasks and enhance memory retrieval. © RSNA, 2016. Online supplemental material is available for this article.

  10. Determination of sex from the hyoid bone in a contemporary White population.

    PubMed

    Logar, Ciara J; Peckmann, Tanya R; Meek, Susan; Walls, Stephen G

    2016-04-01

    Six discriminant functions, developed from an historic White population, were tested on a contemporary White population for determination of sex from the hyoid. One hundred and thirty-four fused and unfused hyoids from a contemporary White population were used. Individuals ranged between 20 and 49 years old. Six historic White discriminant functions were applied to the fused and unfused hyoids of the pooled contemporary White population, i.e. all males and females and all age ranges combined. The overall accuracy rates were between 72.1% and 92.3%. Correct sex determination for contemporary White males ranged between 88.2% and 96.3%, while correct sex determination for contemporary White females ranged between 31.3% and 92.0%. Discriminant functions were created for the contemporary White population with overall mean accuracy rates between 67.0% and 93.0%. The multivariate discriminant function overall accuracy rates were between 89.0% and 93.0% and the univariate discriminant function overall accuracy rates were between 67.0% and 86.8%. The contemporary White population data were compared to other populations and showed significant differences between many of the variables measured. This study illustrated the need for population-specific and temporally-specific discriminant functions for determination of sex from the hyoid bone. Copyright © 2016 Elsevier Ltd and Faculty of Forensic and Legal Medicine. All rights reserved.
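A two-group discriminant function of the kind tested above can be sketched with a plain Fisher linear discriminant. The hyoid "measurements" below are synthetic illustrations (means, spreads, and sample sizes are all assumptions), used only to show how the function and its accuracy rate are computed.

```python
import numpy as np

# Sketch of a two-group (male/female) linear discriminant function on
# synthetic hyoid measurements (e.g. length, width in mm).
rng = np.random.default_rng(0)
males = rng.normal([42.0, 38.0], 2.0, size=(60, 2))
females = rng.normal([37.0, 33.0], 2.0, size=(60, 2))

# Fisher direction w = Sw^{-1} (mu_m - mu_f); cutoff at the midpoint score.
Sw = np.cov(males.T) + np.cov(females.T)
w = np.linalg.solve(Sw, males.mean(0) - females.mean(0))
cut = 0.5 * (males.mean(0) + females.mean(0)) @ w

pred_male = np.concatenate([males, females]) @ w > cut
truth = np.array([True] * 60 + [False] * 60)
accuracy = (pred_male == truth).mean()
assert accuracy > 0.8   # well-separated synthetic groups classify reliably
```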

  11. The externally corrected coupled cluster approach with four- and five-body clusters from the CASSCF wave function.

    PubMed

    Xu, Enhua; Li, Shuhua

    2015-03-07

    An externally corrected CCSDt (coupled cluster with singles, doubles, and active triples) approach employing four- and five-body clusters from the complete active space self-consistent field (CASSCF) wave function (denoted as ecCCSDt-CASSCF) is presented. The quadruple and quintuple excitation amplitudes within the active space are extracted from the CASSCF wave function and then fed into the CCSDt-like equations, which can be solved in an iterative way as the standard CCSDt equations. With a size-extensive CASSCF reference function, the ecCCSDt-CASSCF method is size-extensive. When the CASSCF wave function is readily available, the computational cost of the ecCCSDt-CASSCF method scales as the popular CCSD method (if the number of active orbitals is small compared to the total number of orbitals). The ecCCSDt-CASSCF approach has been applied to investigate the potential energy surface for the simultaneous dissociation of two O-H bonds in H2O, the equilibrium distances and spectroscopic constants of 4 diatomic molecules (F2(+), O2(+), Be2, and NiC), and the reaction barriers for the automerization reaction of cyclobutadiene and the Cl + O3 → ClO + O2 reaction. In most cases, the ecCCSDt-CASSCF approach can provide better results than the CASPT2 (second order perturbation theory with a CASSCF reference function) and CCSDT methods.

  12. A downscaling method for the assessment of local climate change

    NASA Astrophysics Data System (ADS)

    Bruno, E.; Portoghese, I.; Vurro, M.

    2009-04-01

    The use of complementary models is necessary to study the impact of climate change scenarios on the hydrological response at different space-time scales. However, the structure of GCMs is such that their space resolution (hundreds of kilometres) is too coarse to describe the variability of extreme events at basin scale (Burlando and Rosso, 2002). Bridging the space-time gap between the climate scenarios and the usual scale of the inputs for hydrological prediction models is a fundamental requisite for the evaluation of climate change impacts on water resources. Since models operate a simplification of a complex reality, their results cannot be expected to fit climate observations. Identifying local climate scenarios for impact analysis implies defining more detailed local scenarios by downscaling GCM or RCM results. Among the output correction methods we consider the statistical approach by Déqué (2007), reported as a 'variable correction method', in which the correction of model outputs is obtained by a function built from the observation dataset and operating a quantile-quantile transformation (Q-Q transform). However, in the case of daily precipitation fields the Q-Q transform is not able to correct the temporal properties of the model output concerning the dry-wet lacunarity process. An alternative correction method is proposed based on a stochastic description of the arrival-duration-intensity processes in coherence with the Poissonian Rectangular Pulse scheme (PRP) (Eagleson, 1972). In this proposed approach, the Q-Q transform is applied to the PRP variables derived from the daily rainfall datasets. Consequently the corrected PRP parameters are used for the synthetic generation of statistically homogeneous rainfall time series that mimic the persistency of daily observations for the reference period. Then the PRP parameters are forced through the GCM scenarios to generate local-scale rainfall records for the 21st century. The statistical parameters characterizing daily storm occurrence, storm intensity and duration needed to apply the PRP scheme are considered among the STARDEX collection of extreme indices.
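The Q-Q transform at the heart of the variable correction method can be sketched as empirical quantile mapping: each model value is replaced by the observed value at the same empirical quantile, so the corrected series inherits the observed distribution. The two gamma samples below are synthetic stand-ins for model output and observations over a common reference period.

```python
import numpy as np

# Sketch of a quantile-quantile (Q-Q) bias correction with synthetic data.
rng = np.random.default_rng(42)
model = rng.gamma(shape=1.5, scale=4.0, size=5000)   # biased "model" rainfall
obs = rng.gamma(shape=2.0, scale=3.0, size=5000)     # "observed" rainfall

q = np.linspace(0.0, 1.0, 101)
model_q = np.quantile(model, q)                      # model quantile function
obs_q = np.quantile(obs, q)                          # observed quantile function

def qq_correct(x):
    """Map model values onto the observed distribution, quantile by quantile."""
    return np.interp(x, model_q, obs_q)

corrected = qq_correct(model)
# After correction the model distribution should track the observed one.
assert abs(np.median(corrected) - np.median(obs)) < 0.5
```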

  13. Quantum corrections of the truncated Wigner approximation applied to an exciton transport model.

    PubMed

    Ivanov, Anton; Breuer, Heinz-Peter

    2017-04-01

    We modify the path integral representation of exciton transport in open quantum systems such that an exact description of the quantum fluctuations around the classical evolution of the system is possible. As a consequence, the time evolution of the system observables is obtained by calculating the average of a stochastic difference equation which is weighted with a product of pseudoprobability density functions. From the exact equation of motion one can clearly identify the terms that are also present if we apply the truncated Wigner approximation. This description of the problem is used as a basis for the derivation of a new approximation, whose validity goes beyond the truncated Wigner approximation. To demonstrate this we apply the formalism to a donor-acceptor transport model.

  14. Fatigue reliability of deck structures subjected to correlated crack growth

    NASA Astrophysics Data System (ADS)

    Feng, G. Q.; Garbatov, Y.; Guedes Soares, C.

    2013-12-01

    The objective of this work is to analyse the fatigue reliability of deck structures subjected to correlated crack growth. The stress intensity factors of the correlated cracks are obtained by finite element analysis, and from these the geometry correction functions are derived. Monte Carlo simulations are applied to predict the statistical descriptors of correlated cracks based on the Paris-Erdogan equation. A probabilistic model of crack growth as a function of time is used to analyse the fatigue reliability of deck structures accounting for the crack propagation correlation. A deck structure is modelled as a series system of stiffened panels, where a stiffened panel is regarded as a parallel system composed of plates and longitudinals. It has been proven that the method developed here can be conveniently applied to perform the fatigue reliability assessment of structures subjected to correlated crack growth.
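A Monte Carlo crack-growth step based on the Paris-Erdogan equation, da/dN = C (ΔK)^m with ΔK = Y Δσ √(πa), can be sketched as below. The material constants, geometry factor Y, load range, and initial-crack scatter are illustrative assumptions, not the paper's values.

```python
import numpy as np

# Sketch: Monte Carlo integration of the Paris-Erdogan law over a population
# of scattered initial crack sizes, yielding statistical descriptors of
# crack size after a load history.
rng = np.random.default_rng(1)
C, m = 1e-12, 3.0          # Paris constants (assumed; a in m, dK in MPa*sqrt(m))
Y, dsigma = 1.1, 80.0      # geometry correction factor and stress range (MPa)
a0 = rng.normal(1e-3, 1e-4, size=2000)   # scattered initial crack sizes (m)
dN, blocks = 1000, 200                   # cycles per block, number of blocks

a = a0.copy()
for _ in range(blocks):                  # explicit cycle-block integration
    dK = Y * dsigma * np.sqrt(np.pi * a)
    a = a + C * dK**m * dN

mean_a, std_a = a.mean(), a.std()        # statistical descriptors of crack size
assert mean_a > a0.mean()                # cracks only grow under the Paris law
```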

  15. Study of thermodynamic properties of liquid binary alloys by a pseudopotential method

    NASA Astrophysics Data System (ADS)

    Vora, Aditya M.

    2010-11-01

    On the basis of the Percus-Yevick hard-sphere model as a reference system and the Gibbs-Bogoliubov inequality, a thermodynamic perturbation method is applied with the use of the well-known model potential. By applying a variational method, the hard-core diameters are found which correspond to a minimum free energy. With this procedure, the thermodynamic properties such as the internal energy, entropy, Helmholtz free energy, entropy of mixing, and heat of mixing are computed for liquid NaK binary systems. The influence of the local-field correction functions of Hartree, Taylor, Ichimaru-Utsumi, Farid-Heine-Engel-Robertson, and Sarkar-Sen-Haldar-Roy is also investigated. The computed excess entropy is in agreement with available experimental data in the case of liquid alloys, whereas the agreement for the heat of mixing is poor. This may be due to the sensitivity of the latter to the potential parameters and dielectric function.

  16. A novel baseline correction method using convex optimization framework in laser-induced breakdown spectroscopy quantitative analysis

    NASA Astrophysics Data System (ADS)

    Yi, Cancan; Lv, Yong; Xiao, Han; Ke, Ke; Yu, Xun

    2017-12-01

    For the laser-induced breakdown spectroscopy (LIBS) quantitative analysis technique, baseline correction is an essential part of LIBS data preprocessing. As a widely occurring problem, baseline drift is generated by the fluctuation of laser energy, inhomogeneity of sample surfaces and background noise, and it has aroused the interest of many researchers. Most of the prevalent algorithms need to preset some key parameters, such as a suitable spline function and the fitting order, and thus lack adaptability. Based on the characteristics of LIBS, such as the sparsity of spectral peaks and the low-pass filtered feature of the baseline, a novel baseline correction and spectral data denoising method is studied in this paper. The improved technique utilizes a convex optimization scheme to form a non-parametric baseline correction model. Meanwhile, an asymmetric penalty function is used to enhance the signal-to-noise ratio (SNR) of the LIBS signal and improve reconstruction precision. Furthermore, an efficient iterative algorithm is applied to the optimization process, so as to ensure the convergence of this algorithm. To validate the proposed method, the concentration analysis of chromium (Cr), manganese (Mn) and nickel (Ni) contained in 23 certified high alloy steel samples is assessed by using quantitative models with Partial Least Squares (PLS) and Support Vector Machine (SVM). Because there is no prior knowledge of sample composition or mathematical hypothesis, the method proposed in this paper has better accuracy in quantitative analysis compared with other methods, and fully reflects its adaptive ability.
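An asymmetric-penalty baseline estimate in the spirit described above can be sketched with the well-known AsLS scheme of Eilers (a Whittaker smoother whose weights penalize points above the estimated baseline less than points below it, so sparse emission peaks are ignored). This is not the paper's exact model; the smoothing parameters and the synthetic spectrum are assumptions.

```python
import numpy as np
from scipy import sparse
from scipy.sparse.linalg import spsolve

def asls_baseline(y, lam=1e5, p=0.01, n_iter=10):
    """Asymmetric least-squares baseline (Whittaker smoother + asymmetric weights)."""
    L = len(y)
    D = sparse.diags([1.0, -2.0, 1.0], [0, 1, 2], shape=(L - 2, L))  # 2nd differences
    w = np.ones(L)
    for _ in range(n_iter):
        W = sparse.diags(w)
        z = spsolve((W + lam * D.T @ D).tocsc(), w * y)
        w = np.where(y > z, p, 1.0 - p)   # asymmetric weighting: peaks barely count
    return z

# Synthetic LIBS-like spectrum: slow baseline drift plus a sparse peak.
x = np.linspace(0.0, 1.0, 500)
baseline = 2.0 + 1.5 * x
y = baseline + 10.0 * np.exp(-((x - 0.5) / 0.01) ** 2)

est = asls_baseline(y)
assert np.abs(est - baseline).mean() < 0.2   # drift recovered, peak ignored
```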

  17. Test of Monin-Obukhov similarity theory using distributed temperature sensing

    NASA Astrophysics Data System (ADS)

    Cheng, Y.; Sayde, C.; Li, Q.; Gentine, P.

    2017-12-01

    Monin-Obukhov similarity theory [Monin and Obukhov, 1954] (MOST) has been widely used to calculate atmospheric surface fluxes by applying the stability correction functions [Stull, 1988]. The exact forms of the correction functions for momentum and heat, which depend on the vertical gradients of velocity and temperature, have been determined empirically, mostly from the Kansas experiment [Kaimal et al., 1972]. However, due to the limitations of point measurements, the vertical gradients of temperature and horizontal wind speed are not well captured. Here we propose a way to measure the vertical gradients of temperature and horizontal wind speed with high resolution in space (every 12.7 cm) and time (every second) using Distributed Temperature Sensing [Selker et al., 2006] (DTS), thus determining the exact form of the stability correction functions of MOST under various stability conditions. Two parallel vertical fiber optic cables will be placed on a tower at the central facility of the ARM SGP site. Vertical air temperature will be measured every 12.7 cm by the fiber optics, and horizontal wind speed along the fiber will be measured. The vertical gradients of temperature and horizontal wind speed will then be calculated, and the stability correction functions for momentum and heat will be determined. References: Kaimal, J. C., Wyngaard, J. C., Izumi, Y., and Cote, O. R. (1972), Spectral characteristics of surface-layer turbulence, Quarterly Journal of the Royal Meteorological Society, 98(417), 563-589, doi: 10.1002/qj.49709841707. Monin, A., and Obukhov, A. (1954), Basic laws of turbulent mixing in the surface layer of the atmosphere, Contrib. Geophys. Inst. Acad. Sci. USSR, 24(151), 163-187. Selker, J., Thévenaz, L., Huwald, H., Mallet, A., Luxemburg, W., van de Giesen, N., Stejskal, M., Zeman, J., Westhoff, M., and Parlange, M. B. (2006), Distributed fiber-optic temperature sensing for hydrologic systems, Water Resources Research, 42, W12202, doi: 10.1029/2006wr005326. Stull, R. (1988), An Introduction to Boundary Layer Meteorology, pp. 666, Kluwer Academic Publishers, Dordrecht.
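The empirically determined correction functions referenced above are commonly written in the Businger-Dyer form; the sketch below evaluates phi_m(zeta) for momentum, with zeta = z/L, using the widely quoted Kansas-experiment constants. This is an illustration of the functional form, not the exact functions the proposed DTS experiment would determine.

```python
import numpy as np

# Businger-Dyer stability correction function for momentum:
#   unstable (zeta < 0):  phi_m = (1 - 16*zeta)^(-1/4)
#   stable   (zeta >= 0): phi_m = 1 + 5*zeta
def phi_m(zeta):
    zeta = np.asarray(zeta, dtype=float)
    unstable = (1.0 - 16.0 * np.minimum(zeta, 0.0)) ** -0.25  # clip to avoid NaN
    stable = 1.0 + 5.0 * zeta
    return np.where(zeta < 0.0, unstable, stable)

# Neutral conditions recover the log law: phi_m(0) = 1.
assert np.isclose(phi_m(0.0), 1.0)
# Unstable air mixes more efficiently (phi_m < 1); stable air less (phi_m > 1).
assert phi_m(-1.0) < 1.0 < phi_m(0.5)
```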

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nagesh, S Setlur; Rana, R; Russ, M

    Purpose: CMOS-based aSe detectors, compared to CsI-TFT-based flat panels, have the advantages of higher spatial sampling due to smaller pixel size and the decreased blurring characteristic of direct rather than indirect detection. For systems with such detectors, the limiting factor degrading image resolution then becomes focal-spot geometric unsharpness. This effect can seriously limit the use of such detectors in areas such as cone beam computed tomography, clinical fluoroscopy and angiography. In this work a technique to remove the effect of focal-spot blur is presented for a simulated aSe detector. Method: To simulate images from an aSe detector affected by focal-spot blur, a set of high-resolution images of a stent (FRED from Microvention, Inc.) was first acquired using a 75µm pixel size Dexela-Perkin-Elmer detector and averaged to reduce quantum noise. The averaged image was then blurred with a known Gaussian blur at two different magnifications to simulate an idealized focal spot. The blurred images were then deconvolved with a set of different Gaussian blurs to remove the effect of focal-spot blurring using a threshold-based, inverse-filtering method. Results: The blur was removed by deconvolving the images using a set of Gaussian functions for both magnifications. Selecting the correct function resulted in an image close to the original; however, selection of too wide a function caused severe artifacts. Conclusion: Experimentally, focal-spot blur at different magnifications can be measured using a pinhole with a high-resolution detector. This spread function can be used to deblur input images acquired at corresponding magnifications to correct for the focal-spot blur. For CBCT applications, the magnification of specific objects can be obtained using initial reconstructions and then corrected for focal-spot blurring to improve resolution. Similarly, if object magnification can be determined, such correction may be applied in fluoroscopy and angiography.
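The threshold-based inverse filtering described above can be sketched in 1-D: divide the blurred signal's spectrum by the Gaussian transfer function wherever that function exceeds a threshold, and zero the rest to avoid noise blow-up. The blur width, threshold, and test signal are illustrative assumptions.

```python
import numpy as np

# Sketch of threshold-based inverse filtering for a known Gaussian blur.
n = 256
x = np.exp(-0.5 * ((np.arange(n) - n // 2) / 2.0) ** 2)   # sharp test feature

freq = np.fft.fftfreq(n)
sigma = 3.0                                    # assumed focal-spot blur (pixels)
H = np.exp(-2.0 * (np.pi * sigma * freq) ** 2)  # Gaussian transfer function

blurred = np.fft.ifft(np.fft.fft(x) * H).real

eps = 1e-3                                     # inverse-filter threshold
H_inv = np.where(H > eps, 1.0 / H, 0.0)        # invert only well-conditioned bins
deblurred = np.fft.ifft(np.fft.fft(blurred) * H_inv).real

# Deconvolution brings the image much closer to the original than the blur.
assert np.abs(deblurred - x).max() < np.abs(blurred - x).max()
```

Too small a threshold (or, equivalently, assuming too wide a blur) amplifies poorly transferred frequencies, which is the artifact mechanism noted in the abstract.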

  19. Variationally consistent approximation scheme for charge transfer

    NASA Technical Reports Server (NTRS)

    Halpern, A. M.

    1978-01-01

    The author has developed a technique for testing various charge-transfer approximation schemes for consistency with the requirements of the Kohn variational principle, which guarantees that the amplitude is correct to second order in the scattering wave functions. Applied to Born-type approximations for charge transfer, it allows the selection of particular groups of first-, second-, and higher-Born-type terms that obey the consistency requirement and hence yield more reliable approximations to the amplitude.

  20. A Viable Paradigm for Quantum Reality

    NASA Astrophysics Data System (ADS)

    Srivastava, Jagdish

    2010-10-01

    After a brief discussion of the EPR paradox, Bell's inequality, and Aspect's experiment, arguments will be presented in favor of the following statements: "As it stands, quantum mechanics is incomplete. There is further hidden structure, which would involve variables. No influence can move faster than light. The wave function is one whole thing and any change in its structure instantly influences its outcomes. Bell's theorem has not been applied correctly. There is a better paradigm." The said paradigm will be presented.

  1. GEEC All the Way Down

    DTIC Science & Technology

    2015-01-13

    applying formal methods to systems software, e.g., IronClad [16] and seL4 [19], promise that this vision is not a fool's errand after all. In this...kernel seL4 [19] is fully verified for functional correctness and it runs with other deprivileged services. However, the verification process used...portion, which is non-trivial for theorem proving-based approaches. In our COSS example, adding the trusted network logging extensions to seL4 will

  2. False vacuum decay in quantum mechanics and four dimensional scalar field theory

    NASA Astrophysics Data System (ADS)

    Bezuglov, Maxim

    2018-04-01

    When the Higgs boson was discovered in 2012, it was realized that the electroweak vacuum may be metastable at the Planck scale and can eventually decay. To understand this problem it is important to have reliable predictions for the vacuum decay rate within the framework of quantum field theory. For now, this can only be done at one-loop level, which apparently is not enough. The aim of this work is to develop a technique for the calculation of two- and higher-order radiative corrections to the false vacuum decay rate in the framework of four dimensional scalar quantum field theory and then apply it to the case of the Standard Model. To achieve this goal, we first start from the case of d = 1 dimensional QFT, i.e. quantum mechanics. We show that for some potentials two- and three-loop corrections can be very important and must be taken into account. Next, we use the quantum mechanical example as a template for the general d = 4 dimensional theory, in which we concentrate on the calculation of the bounce solution and the corresponding Green function in the so-called thin-wall approximation. The obtained Green function is then used as the main ingredient for the calculation of two-loop radiative corrections to the false vacuum decay rate.

  3. Logistic regression function for detection of suspicious performance during baseline evaluations using concussion vital signs.

    PubMed

    Hill, Benjamin David; Womble, Melissa N; Rohling, Martin L

    2015-01-01

    This study utilized logistic regression to determine whether performance patterns on Concussion Vital Signs (CVS) could differentiate known groups with either genuine or feigned performance. For the embedded measure development group (n = 174), clinical patients and undergraduate students categorized as feigning obtained significantly lower scores on the overall test battery mean for the CVS, Shipley-2 composite score, and California Verbal Learning Test-Second Edition subtests than did genuinely performing individuals. The final full model of 3 predictor variables (Verbal Memory immediate hits, Verbal Memory immediate correct passes, and Stroop Test complex reaction time correct) was significant and correctly classified individuals in their known group 83% of the time (sensitivity = .65; specificity = .97) in a mixed sample of young-adult clinical cases and simulators. The CVS logistic regression function was applied to a separate undergraduate college group (n = 378) that was asked to perform genuinely and identified 5% as having possibly feigned performance indicating a low false-positive rate. The failure rate was 11% and 16% at baseline cognitive testing in samples of high school and college athletes, respectively. These findings have particular relevance given the increasing use of computerized test batteries for baseline cognitive testing and return-to-play decisions after concussion.
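A logistic-regression embedded measure of the kind described above can be sketched as follows: a few performance scores predict group membership (feigned vs. genuine), and the fitted function is evaluated by sensitivity and specificity. The data, group means, and coefficients below are synthetic illustrations, not the CVS study's values.

```python
import numpy as np

# Sketch: fit a logistic regression by gradient descent on synthetic
# "genuine" vs "feigned" performance scores, then score the classifier.
rng = np.random.default_rng(7)
n = 200
genuine = rng.normal([50.0, 48.0, 0.6], [5.0, 5.0, 0.1], size=(n, 3))
feigned = rng.normal([35.0, 33.0, 0.9], [5.0, 5.0, 0.1], size=(n, 3))
X = np.vstack([genuine, feigned])
y = np.array([0] * n + [1] * n)          # 1 = feigned performance

# Standardize features, add an intercept, and minimize the log-loss.
Xs = (X - X.mean(0)) / X.std(0)
Xb = np.hstack([np.ones((2 * n, 1)), Xs])
w = np.zeros(4)
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-Xb @ w))
    w -= 0.1 * Xb.T @ (p - y) / len(y)   # gradient step on mean log-loss

pred = (1.0 / (1.0 + np.exp(-Xb @ w))) > 0.5
sensitivity = pred[y == 1].mean()        # feigners correctly flagged
specificity = 1.0 - pred[y == 0].mean()  # genuine performers cleared
assert sensitivity > 0.8 and specificity > 0.8
```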

  4. Artifacts reduction in VIR/Dawn data.

    PubMed

    Carrozzo, F G; Raponi, A; De Sanctis, M C; Ammannito, E; Giardino, M; D'Aversa, E; Fonte, S; Tosi, F

    2016-12-01

    Remote sensing images are generally affected by different types of noise that degrade the quality of the spectral data (i.e., stripes and spikes). Hyperspectral images returned by the Visible and InfraRed (VIR) spectrometer onboard the NASA Dawn mission exhibit residual systematic artifacts. VIR is an imaging spectrometer coupling high spectral and spatial resolutions in the visible and infrared spectral domain (0.25-5.0 μm). VIR data present noise that may mask or distort real features (i.e., spikes and stripes), which may lead to misinterpretation of the surface composition. This paper presents a technique for the minimization of artifacts in VIR data that includes a new instrument response function combining ground and in-flight radiometric measurements, correction of spectral spikes, odd-even band effects, systematic vertical stripes and high-frequency noise, and comparison with ground telescopic spectra of Vesta and Ceres. We developed a correction of artifacts in a two-step process: creation of the artifacts matrix and application of the same matrix to the VIR dataset. In the approach presented here, a polynomial function is used to fit the high-frequency variations. After applying these corrections, the resulting spectra show improvements in the quality of the data. The new calibrated data enhance the significance of results from the spectral analysis of Vesta and Ceres.
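A simple spike-removal step in the spirit of the spectral-spike correction above can be sketched with a robust local-median filter: samples deviating strongly from the local median are treated as spikes and replaced. This is an illustrative scheme, not the VIR pipeline; the synthetic spectrum, window, and threshold are assumptions.

```python
import numpy as np

# Sketch: despike a synthetic reflectance spectrum over the VIR range.
rng = np.random.default_rng(3)
wavelength = np.linspace(0.25, 5.0, 1000)            # microns
spectrum = 0.2 + 0.05 * np.sin(3.0 * wavelength) + rng.normal(0, 0.002, 1000)
clean = spectrum.copy()
spectrum[[150, 400, 820]] += 0.5                     # inject artificial spikes

def despike(y, window=5, k=6.0):
    """Replace points more than k robust sigmas away from the local median."""
    med = np.array([np.median(y[max(0, i - window):i + window + 1])
                    for i in range(len(y))])
    resid = y - med
    sigma = 1.4826 * np.median(np.abs(resid))        # robust scale via the MAD
    out = y.copy()
    spikes = np.abs(resid) > k * sigma
    out[spikes] = med[spikes]                        # replace flagged samples
    return out

fixed = despike(spectrum)
assert np.abs(fixed - clean).max() < 0.05            # spikes removed, signal kept
```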

  5. Simultaneous determination of penicillin G salts by infrared spectroscopy: Evaluation of combining orthogonal signal correction with radial basis function-partial least squares regression

    NASA Astrophysics Data System (ADS)

    Talebpour, Zahra; Tavallaie, Roya; Ahmadi, Seyyed Hamid; Abdollahpour, Assem

    2010-09-01

    In this study, a new method for the simultaneous determination of penicillin G salts in a pharmaceutical mixture via FT-IR spectroscopy combined with chemometrics was investigated. The mixture of penicillin G salts is a complex system due to the similar analytical characteristics of its components. Partial least squares (PLS) and radial basis function-partial least squares (RBF-PLS) were used to develop the linear and nonlinear relations between spectra and components, respectively. The orthogonal signal correction (OSC) preprocessing method was used to correct for unexpected information, such as spectral overlapping and scattering effects. In order to compare the influence of OSC on the PLS and RBF-PLS models, the optimal linear (PLS) and nonlinear (RBF-PLS) models based on conventional and OSC-preprocessed spectra were established and compared. The obtained results demonstrated that OSC clearly enhanced the performance of both the RBF-PLS and PLS calibration models. Also, in the case of some nonlinear relation between spectra and components, the OSC-RBF-PLS model gave more satisfactory results than the OSC-PLS model, which indicated that OSC was helpful for removing extrinsic deviations from linearity without eliminating nonlinear information related to the components. The chemometric models were tested on an external dataset and finally applied to the analysis of a commercialized injection product of penicillin G salts.

  6. Impact of Atmospheric Chromatic Effects on Weak Lensing Measurements

    NASA Astrophysics Data System (ADS)

    Meyers, Joshua E.; Burchat, Patricia R.

    2015-07-01

    Current and future imaging surveys will measure cosmic shear with statistical precision that demands a deeper understanding of potential systematic biases in galaxy shape measurements than has been achieved to date. We use analytic and computational techniques to study the impact on shape measurements of two atmospheric chromatic effects for ground-based surveys such as the Dark Energy Survey and the Large Synoptic Survey Telescope (LSST): (1) atmospheric differential chromatic refraction and (2) wavelength dependence of seeing. We investigate the effects of using the point-spread function (PSF) measured with stars to determine the shapes of galaxies that have different spectral energy distributions than the stars. We find that both chromatic effects lead to significant biases in galaxy shape measurements for current and future surveys, if not corrected. Using simulated galaxy images, we find a form of chromatic “model bias” that arises when fitting a galaxy image with a model that has been convolved with a stellar, instead of galactic, PSF. We show that both forms of atmospheric chromatic biases can be predicted (and corrected) with minimal model bias by applying an ordered set of perturbative PSF-level corrections based on machine-learning techniques applied to six-band photometry. Catalog-level corrections do not address the model bias. We conclude that achieving the ultimate precision for weak lensing from current and future ground-based imaging surveys requires a detailed understanding of the wavelength dependence of the PSF from the atmosphere, and from other sources such as optics and sensors. The source code for this analysis is available at https://github.com/DarkEnergyScienceCollaboration/chroma.
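    The origin of the differential-chromatic-refraction bias can be illustrated with a toy calculation of the SED-weighted effective refraction: a blue star and a red galaxy observed through the same filter refract by different effective amounts. The dispersion law and SEDs below are made-up stand-ins, not the paper's atmosphere model.

```python
import numpy as np

def effective_refraction(wavelengths_nm, sed, zenith_tan=1.0):
    """SED-weighted mean refraction angle (arbitrary units).
    Toy dispersion law: refraction falls off as lambda^-2."""
    refraction = zenith_tan / (wavelengths_nm / 500.0) ** 2
    weights = sed / sed.sum()
    return np.sum(weights * refraction)

wl = np.linspace(400.0, 550.0, 151)          # a hypothetical band
blue_sed = np.exp(-(wl - 420.0) ** 2 / 2e3)  # blue (star-like) SED
red_sed = np.exp(-(wl - 530.0) ** 2 / 2e3)   # red (galaxy-like) SED
shift = effective_refraction(wl, blue_sed) - effective_refraction(wl, red_sed)
```

    A nonzero shift means a PSF calibrated on stars is displaced and elongated relative to the PSF actually experienced by a galaxy with a different SED.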

  7. Determination of the ephemeris time correction from photographs of the total solar eclipse of 1973 June 30.

    NASA Astrophysics Data System (ADS)

    Haupt, H. F.; Firneis, M. G.; Fritzer, J. M.

    Photographs of the partial phases of the 1973 June 30 total solar eclipse, taken in Mauritania, were used to derive the correction ΔT = ET - UT. The introduction describes the problems and history of ΔT determinations. After a brief description of the expedition, location, instruments, and observations, the reduction is treated in some detail. Probably owing to improper exposure of the films, the derived solar radius was too small, while the lunar radius showed a peculiar increase after totality. Correspondingly, this increase produced a decrease in the Sun-Moon distances that had to be accounted for by introducing a correction function. After applying this plausible correction, several trial values of ΔT were used in the computations, and the one yielding a mean residual of zero was adopted as the ΔT of the date, namely ΔT(1973.5) = +54.91 s, which is in good agreement with other determinations.

  8. Protection of Mobile Agents Execution Using a Modified Self-Validating Branch-Based Software Watermarking with External Sentinel

    NASA Astrophysics Data System (ADS)

    Tomàs-Buliart, Joan; Fernández, Marcel; Soriano, Miguel

    Critical infrastructures are usually controlled by software entities. To monitor the correct functioning of these entities, a solution based on mobile agents is proposed. Some proposals exist to detect modifications of mobile agents, such as digital signatures of code, but they are oriented toward protecting software against modification or verifying that an agent has been executed correctly. The aim of our proposal is to guarantee that the software is being executed correctly by a non-trusted host. We achieve this objective by improving the Self-Validating Branch-Based Software Watermarking scheme of Myles et al. The proposed modification is the incorporation of an external element, called a sentinel, which controls branch targets. Applied to mobile agents, this technique can guarantee the correct operation of an agent or, at least, detect suspicious behaviour of a malicious host during the execution of the agent rather than only after the execution has finished.

  9. HST image restoration: A comparison of pre- and post-servicing mission results

    NASA Technical Reports Server (NTRS)

    Hanisch, R. J.; Mo, J.

    1992-01-01

    A variety of image restoration techniques (e.g., Wiener filter, Lucy-Richardson, MEM) have been applied quite successfully to the aberrated HST images. The HST servicing mission (scheduled for late 1993 or early 1994) will install a corrective optics system (COSTAR) for the Faint Object Camera and spectrographs and replace the Wide Field/Planetary Camera with a second generation instrument (WF/PC-II) having its own corrective elements. The image quality is expected to be improved substantially with these new instruments. What then is the role of image restoration for the HST in the long term? Through a series of numerical experiments using model point-spread functions for both aberrated and unaberrated optics, we find that substantial improvements in image resolution can be obtained for post-servicing mission data using the same or similar algorithms as those employed now to correct aberrated images. Included in our investigations are studies of the photometric integrity of the restoration algorithms and explicit models for HST pointing errors (spacecraft jitter).
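    The Lucy-Richardson scheme mentioned above can be sketched in one dimension. This is the generic textbook iteration, not the STScI implementation; the Gaussian PSF and point-source scene are invented for illustration.

```python
import numpy as np

def richardson_lucy(observed, psf, iterations=50, eps=1e-12):
    """Iterative Richardson-Lucy deconvolution (1-D, symmetric PSF)."""
    estimate = np.full_like(observed, observed.mean())
    psf_mirror = psf[::-1]
    for _ in range(iterations):
        blurred = np.convolve(estimate, psf, mode="same")
        ratio = observed / (blurred + eps)
        estimate *= np.convolve(ratio, psf_mirror, mode="same")
    return estimate

# Toy scene: a point source on a faint background, blurred by a Gaussian PSF
x = np.arange(-8, 9)
psf = np.exp(-0.5 * (x / 2.0) ** 2)
psf /= psf.sum()
truth = np.full(64, 0.01)
truth[30] = 1.0
observed = np.convolve(truth, psf, mode="same")
restored = richardson_lucy(observed, psf)
```

    The multiplicative update keeps the estimate non-negative, which is one reason the method behaves well on photon-counting data.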

  10. Data-driven sensitivity inference for Thomson scattering electron density measurement systems.

    PubMed

    Fujii, Keisuke; Yamada, Ichihiro; Hasuo, Masahiro

    2017-01-01

    We developed a method to infer the calibration parameters of multichannel measurement systems, such as channel variations of sensitivity and noise amplitude, from experimental data. We regard such uncertainties in the calibration parameters as dependent noise. The statistical properties of the dependent noise and those of the latent functions were modeled and implemented in a Gaussian process kernel. Based on their statistical differences, both sets of parameters were inferred from the data. We applied this method to the electron density measurement system based on Thomson scattering for the Large Helical Device plasma, which is equipped with 141 spatial channels. Based on 210 sets of experimental data, we evaluated the correction factor of the sensitivity and the noise amplitude for each channel. The correction factor varies by ≈10%, and the random noise amplitude is ≈2%, i.e., the measurement accuracy increases by a factor of 5 after this sensitivity correction. An improvement in the certainty of the spatial-derivative inference was also demonstrated.
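    The latent-profile part of such a model can be sketched as plain Gaussian-process regression. This omits the paper's dependent-noise kernel terms entirely and uses a squared-exponential kernel with made-up hyperparameters, so it is only the smoothing backbone of the approach.

```python
import numpy as np

def gp_posterior_mean(x_train, y_train, x_test, length=1.0, sig_f=1.0, sig_n=0.05):
    """Posterior mean of a GP with a squared-exponential kernel."""
    def k(a, b):
        d = a[:, None] - b[None, :]
        return sig_f ** 2 * np.exp(-0.5 * (d / length) ** 2)
    K = k(x_train, x_train) + sig_n ** 2 * np.eye(len(x_train))
    Ks = k(x_test, x_train)
    return Ks @ np.linalg.solve(K, y_train)

# Hypothetical noisy "profile": a smooth function sampled on 30 channels
x_train = np.linspace(0.0, 6.0, 30)
y_train = np.sin(x_train)
mean = gp_posterior_mean(x_train, y_train, x_train)
```

    In the paper's setting, channel-dependent sensitivity errors would enter as additional, correlated terms in the kernel K rather than the plain white-noise diagonal used here.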

  11. Assessment of C-band Polarimetric Radar Rainfall Measurements During Strong Attenuation.

    NASA Astrophysics Data System (ADS)

    Paredes-Victoria, P. N.; Rico-Ramirez, M. A.; Pedrozo-Acuña, A.

    2016-12-01

    Reliable spatiotemporal rainfall measurements are the keystone of modern hydrological modelling and its applications in flood-forecasting systems and climate modelling. Rain gauges are the foundation of rainfall data collection in hydrology; however, they are prone to errors (e.g., systematic, malfunction, and instrumental errors). Moreover, rainfall data from gauges are often used to calibrate and validate weather-radar rainfall, which is distributed in space. It is therefore important to apply quality-control techniques to the rain gauge data in order to guarantee a high level of confidence in the rainfall measurements used for radar calibration and numerical weather modelling. The reliability of radar data is likewise limited by errors in the radar signal (e.g., clutter, variation of the vertical reflectivity profile, beam blockage, attenuation), which need to be corrected in order to increase the accuracy of radar rainfall estimation. This paper presents a method for rain gauge quality control based on inverse distance weighting as a function of correlated climatology (i.e., computed using the reflectivity from the weather radar). A Clutter Mitigation Decision (CMD) algorithm is applied for clutter filtering, and three algorithms based on differential phase measurements are applied to correct radar signal attenuation. The quality-control method shows that the correlated climatology is very sensitive within the first 100 km for this area. The results also show that ground clutter only slightly affects the radar measurements, owing to the low terrain gradient in the area. However, strong radar signal attenuation is often found in this data set because of the heavy storms in this region, and the differential phase measurements are crucial for correcting attenuation at C-band frequencies. The study area is located in Sabancuy, Campeche, Mexico (latitude 18.97° N, longitude 91.17° W); the radar rainfall measurements are obtained from a C-band polarimetric radar, whereas the rain gauge measurements come from stations with 10-min and 24-h time resolutions.
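    The inverse-distance-weighting step can be sketched generically. This minimal NumPy version omits the climatological-correlation weighting the authors add; gauge positions and values are invented.

```python
import numpy as np

def idw(xy_gauges, values, xy_target, power=2.0, eps=1e-10):
    """Inverse-distance-weighted estimate at a target location."""
    d = np.sqrt(np.sum((xy_gauges - xy_target) ** 2, axis=1))
    if np.any(d < eps):                    # target coincides with a gauge
        return values[np.argmin(d)]
    w = 1.0 / d ** power
    return np.sum(w * values) / np.sum(w)

# Hypothetical gauges (km) and rainfall (mm)
gauges = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
rain = np.array([2.0, 6.0, 4.0])
estimate = idw(gauges, rain, np.array([5.0, 5.0]))
```

    A quality-control test then compares each gauge reading against the IDW estimate built from its neighbours (here weighted by distance only; in the paper, by radar-derived climatological correlation).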

  12. Testing the Perey effect

    DOE PAGES

    Titus, L. J.; Nunes, Filomena M.

    2014-03-12

    Here, the effects of non-local potentials have historically been approximately included by applying a correction factor to the solution of the corresponding equation for the local equivalent interaction. This is usually referred to as the Perey correction factor. In this work we investigate the validity of the Perey correction factor for single-channel bound and scattering states, as well as in transfer (p, d) cross sections. Method: We solve the scattering and bound state equations for non-local interactions of the Perey-Buck type, through an iterative method. Using the distorted wave Born approximation, we construct the T-matrix for (p,d) on 17O, 41Ca, 49Ca, 127Sn, 133Sn, and 209Pb at 20 and 50 MeV. As a result, we found that for bound states, the Perey corrected wave function resulting from the local equation agreed well with that from the non-local equation in the interior region, but discrepancies were found in the surface and peripheral regions. Overall, the Perey correction factor was adequate for scattering states, with the exception of a few partial waves corresponding to the grazing impact parameters. These differences proved to be important for transfer reactions. In conclusion, the Perey correction factor does offer an improvement over taking a direct local equivalent solution. However, if the desired accuracy is to be better than 10%, the exact solution of the non-local equation should be pursued.
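    For reference, the standard Perey-Buck correction factor has the form F(r) = [1 - μβ²U_loc(r)/(2ℏ²)]^(-1/2), where β is the nonlocality range and U_loc the local-equivalent potential. A sketch with an illustrative (made-up) Woods-Saxon well, not the parameters used in the paper:

```python
import numpy as np

HBARC = 197.327      # hbar*c in MeV fm
MU = 469.5           # reduced mass (MeV/c^2), illustrative nucleon-scale value
BETA = 0.85          # Perey-Buck nonlocality range (fm), typical value

def woods_saxon(r, v0=50.0, radius=4.0, a=0.65):
    """Attractive Woods-Saxon potential in MeV (illustrative parameters)."""
    return -v0 / (1.0 + np.exp((r - radius) / a))

def perey_factor(r):
    """Perey correction factor for the local-equivalent potential."""
    u = woods_saxon(r)
    return (1.0 - MU * BETA ** 2 * u / (2.0 * HBARC ** 2)) ** -0.5

r = np.linspace(0.0, 15.0, 151)
F = perey_factor(r)
```

    For an attractive potential F < 1 in the nuclear interior and tends to 1 outside, i.e., the corrected local wave function is damped inside the nucleus, which is exactly where the paper finds the factor adequate (interior) versus deficient (surface).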

  13. Bracing of pectus carinatum: A quantitative analysis.

    PubMed

    Bugajski, Tomasz; Murari, Kartikeya; Lopushinsky, Steven; Schneider, Marc; Ronsky, Janet

    2018-05-01

    Primary treatment of pectus carinatum (PC) is performed with an external brace that compresses the protrusion. Patients are 'prescribed' a brace tightening force; however, no visual guides exist to display this force magnitude. The purpose of this study was to determine the repeatability of patients in applying their prescribed force over time and to determine whether the protrusion stiffness influences the patient-applied forces and the protrusion correction rate. Twenty-one male participants (12-17 years) with chondrogladiolar PC were recruited at the time of brace fitting. Participants were evaluated on three visits: fitting, one month postfitting, and two months postfitting. Differences between the prescribed force and the patient-applied force were evaluated, and the relationships of patient-applied force and correction rate with protrusion stiffness were assessed. The majority of individuals followed for two months (75%) had a patient-applied force significantly different (p<0.05) from their prescribed force. Protrusion stiffness had a positive relationship with patient-applied force, but no relationship with correction rate. Patients did not follow their prescribed force. The magnitudes of these differences require further investigation to determine clinical significance. Patient-applied forces were influenced by protrusion stiffness, but correction rate was not. Other factors, such as patient compliance, may influence these variables. Treatment Study - Level IV. Copyright © 2018 Elsevier Inc. All rights reserved.

  14. Causal instrument corrections for short-period and broadband seismometers

    USGS Publications Warehouse

    Haney, Matthew M.; Power, John; West, Michael; Michaels, Paul

    2012-01-01

    Of all the filters applied to recordings of seismic waves, which include source, path, and site effects, the one we know most precisely is the instrument filter. Therefore, it behooves seismologists to accurately remove the effect of the instrument from raw seismograms. Applying instrument corrections allows analysis of the seismogram in terms of physical units (e.g., displacement or particle velocity of the Earth’s surface) instead of the output of the instrument (e.g., digital counts). The instrument correction can be considered the most fundamental processing step in seismology since it relates the raw data to an observable quantity of interest to seismologists. Complicating matters is the fact that, in practice, the term “instrument correction” refers to more than simply the seismometer. The instrument correction compensates for the complete recording system including the seismometer, telemetry, digitizer, and any anti‐alias filters. Knowledge of all these components is necessary to perform an accurate instrument correction. The subject of instrument corrections has been covered extensively in the literature (Seidl, 1980; Scherbaum, 1996). However, the prospect of applying instrument corrections still evokes angst among many seismologists—the authors of this paper included. There may be several reasons for this. For instance, the seminal paper by Seidl (1980) exists in a journal that is not currently available in electronic format and cannot be accessed online. Also, a standard method for applying instrument corrections involves the programs TRANSFER and EVALRESP in the Seismic Analysis Code (SAC) package (Goldstein et al., 2003). The exact mathematical methods implemented in these codes are not thoroughly described in the documentation accompanying SAC.
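    At its core, an instrument correction divides the data spectrum by the recording system's transfer function, usually with a water-level term to stabilize frequencies where the response is small. A generic frequency-domain sketch (not the algorithm of SAC's TRANSFER/EVALRESP); the two-tap "instrument" below is invented and chosen so its transfer function never vanishes.

```python
import numpy as np

def remove_response(data, response_fir, water_level=1e-12):
    """Deconvolve an instrument response (given as a short FIR filter)
    from a record, using water-level regularization in the frequency domain."""
    n = len(data)
    R = np.fft.fft(response_fir, n)                  # transfer function
    D = np.fft.fft(data)
    corrected = D * np.conj(R) / (np.abs(R) ** 2 + water_level)
    return np.fft.ifft(corrected).real

# Synthetic ground motion recorded through a simple two-tap "instrument"
t = np.linspace(0.0, 1.0, 256, endpoint=False)
ground = np.sin(2 * np.pi * 5 * t)
response = np.array([1.0, 0.5])
recorded = np.fft.ifft(np.fft.fft(ground) * np.fft.fft(response, 256)).real
restored = remove_response(recorded, response)
```

    Real corrections must also compose the seismometer poles and zeros with the digitizer gain and anti-alias filters, as the abstract emphasizes; the water level guards against blowing up noise where |R| is near zero.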

  15. Functional neuroimaging of high-risk 6-month-old infants predicts a diagnosis of autism at 24 months of age

    PubMed Central

    Emerson, Robert W.; Adams, Chloe; Nishino, Tomoyuki; Hazlett, Heather Cody; Wolff, Jason J.; Zwaigenbaum, Lonnie; Constantino, John N.; Shen, Mark D.; Swanson, Meghan R.; Elison, Jed T.; Kandala, Sridhar; Estes, Annette M.; Botteron, Kelly N.; Collins, Louis; Dager, Stephen R.; Evans, Alan C.; Gerig, Guido; Gu, Hongbin; McKinstry, Robert C.; Paterson, Sarah; Schultz, Robert T.; Styner, Martin; Network, IBIS; Schlaggar, Bradley L.; Pruett, John R.; Piven, Joseph

    2018-01-01

    Autism spectrum disorder (ASD) is a neurodevelopmental disorder characterized by social deficits and repetitive behaviors that typically emerge by 24 months of age. To develop effective early interventions that can potentially ameliorate the defining deficits of ASD and improve long-term outcomes, early detection is essential. Using prospective neuroimaging of 59 6-month-old infants with a high familial risk for ASD, we show that functional connectivity magnetic resonance imaging correctly identified which individual children would receive a research clinical best-estimate diagnosis of ASD at 24 months of age. Functional brain connections were defined in 6-month-old infants that correlated with 24-month scores on measures of social behavior, language, motor development, and repetitive behavior, which are all features common to the diagnosis of ASD. A fully cross-validated machine learning algorithm applied at age 6 months had a positive predictive value of 100% [95% confidence interval (CI), 62.9 to 100], correctly predicting 9 of 11 infants who received a diagnosis of ASD at 24 months (sensitivity, 81.8%; 95% CI, 47.8 to 96.8). All 48 6-month-old infants who were not diagnosed with ASD were correctly classified [specificity, 100% (95% CI, 90.8 to 100); negative predictive value, 96.0% (95% CI, 85.1 to 99.3)]. These findings have clinical implications for early risk assessment and the feasibility of developing early preventative interventions for ASD. PMID:28592562
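    The reported classifier statistics follow directly from the confusion counts given in the abstract (9 of 11 ASD infants predicted, 48 of 48 non-ASD infants correctly classified); a quick check of the arithmetic:

```python
# Confusion counts from the abstract
tp, fn = 9, 2        # ASD infants: correctly predicted / missed
tn, fp = 48, 0       # non-ASD infants: correctly classified / false alarms

sensitivity = tp / (tp + fn)     # 9/11
specificity = tn / (tn + fp)     # 48/48
ppv = tp / (tp + fp)             # positive predictive value
npv = tn / (tn + fn)             # negative predictive value
```

    These reproduce the abstract's 81.8% sensitivity, 100% specificity, 100% PPV, and 96.0% NPV (the confidence intervals require the binomial interval calculation, which is not shown here).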

  16. Anamorphic quasiperiodic universes in modified and Einstein gravity with loop quantum gravity corrections

    NASA Astrophysics Data System (ADS)

    Amaral, Marcelo M.; Aschheim, Raymond; Bubuianu, Laurenţiu; Irwin, Klee; Vacaru, Sergiu I.; Woolridge, Daniel

    2017-09-01

    The goal of this work is to elaborate on new geometric methods of constructing exact and parametric quasiperiodic solutions for anamorphic cosmology models in modified gravity theories, MGTs, and general relativity, GR. There exist previously studied generic off-diagonal and diagonalizable cosmological metrics encoding gravitational and matter fields with quasicrystal like structures, QC, and holonomy corrections from loop quantum gravity, LQG. We apply the anholonomic frame deformation method, AFDM, in order to decouple the (modified) gravitational and matter field equations in general form. This allows us to find integral varieties of cosmological solutions determined by generating functions, effective sources, integration functions and constants. The coefficients of metrics and connections for such cosmological configurations depend, in general, on all spacetime coordinates and can be chosen to generate observable (quasi)-periodic/aperiodic/fractal/stochastic/(super) cluster/filament/polymer like (continuous, stochastic, fractal and/or discrete) structures in MGTs and/or GR. In this work, we study new classes of solutions for anamorphic cosmology with LQG holonomy corrections. Such solutions are characterized by nonlinear symmetries of generating functions for generic off-diagonal cosmological metrics and generalized connections, with possible nonholonomic constraints to Levi-Civita configurations and diagonalizable metrics depending only on a time like coordinate. We argue that anamorphic quasiperiodic cosmological models integrate the concept of a quantum discrete spacetime, with certain gravitational QC-like vacuum and nonvacuum structures, with that of a contracting universe that homogenizes, isotropizes, and flattens without introducing initial-condition or multiverse problems.

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhong, X.; Rungger, I.; Zapol, P.

    Understanding the electronic properties of substoichiometric phases of titanium oxide such as the Magneli phase Ti4O7 is crucial in designing and modeling resistive switching devices. Here we present our study of Magneli phase Ti4O7 together with rutile TiO2 and Ti2O3 using density functional theory methods with atomic-orbital-based self-interaction correction (ASIC). We predict a new antiferromagnetic (AF) ground state in the low temperature (LT) phase, and we explain its energy difference with a competing AF state using a Heisenberg model. The predicted energy ordering of these states in the LT phase is calculated to be robust in a wide range of modeled isotropic strain. We have also investigated the dependence of the electronic structures of the Ti-O phases on stoichiometry. The splitting of the titanium t2g orbitals is enhanced with increasing oxygen deficiency as Ti-O is reduced. Furthermore, the electronic properties of all these phases can be reasonably well described by applying ASIC with a "standard" value for transition metal oxides of the empirical parameter alpha of 0.5, representing the magnitude of the applied self-interaction correction.

  19. On-site identification of meat species in processed foods by a rapid real-time polymerase chain reaction system.

    PubMed

    Furutani, Shunsuke; Hagihara, Yoshihisa; Nagai, Hidenori

    2017-09-01

    Correct labeling of foods is critical for consumers who wish to avoid specific meat species for religious or cultural reasons. Therefore, gene-based point-of-care food analysis by real-time polymerase chain reaction (PCR) is expected to contribute to quality control in the food industry. In this study, we perform rapid identification of meat species with our portable rapid real-time PCR system, following a very simple DNA extraction method. Applying these techniques, we correctly identified beef, pork, chicken, rabbit, horse, and mutton in processed foods in 20 min. Our system was sensitive enough to detect the interfusion of about 0.1% chicken egg-derived DNA in a processed food sample. Our rapid real-time PCR system is expected to contribute to quality control in the food industry because it can be applied to the identification of meat species, and future applications can expand its functionality to the detection of genetically modified organisms or mutations. Copyright © 2017 Elsevier Ltd. All rights reserved.

  20. Dynamically corrected gates for singlet-triplet spin qubits with control-dependent errors

    NASA Astrophysics Data System (ADS)

    Jacobson, N. Tobias; Witzel, Wayne M.; Nielsen, Erik; Carroll, Malcolm S.

    2013-03-01

    Magnetic field inhomogeneity due to random polarization of quasi-static local magnetic impurities is a major source of environmentally induced error for singlet-triplet double quantum dot (DQD) spin qubits. Moreover, for singlet-triplet qubits this error may depend on the applied controls. This effect is significant when a static magnetic field gradient is applied to enable full qubit control. Through a configuration interaction analysis, we observe that the dependence of the field inhomogeneity-induced error on the DQD bias voltage can vary systematically as a function of the controls for certain experimentally relevant operating regimes. To account for this effect, we have developed a straightforward prescription for adapting dynamically corrected gate sequences that assume control-independent errors into sequences that compensate for systematic control-dependent errors. We show that accounting for such errors may lead to a substantial increase in gate fidelities. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. DOE's National Nuclear Security Administration under contract DE-AC04-94AL85000.

  1. PSF mapping-based correction of eddy-current-induced distortions in diffusion-weighted echo-planar imaging.

    PubMed

    In, Myung-Ho; Posnansky, Oleg; Speck, Oliver

    2016-05-01

    To accurately correct diffusion-encoding direction-dependent, eddy-current-induced geometric distortions in diffusion-weighted echo-planar imaging (DW-EPI) and to minimize the calibration time at 7 Tesla (T). A point spread function (PSF) mapping-based eddy-current calibration method is presented to determine eddy-current-induced geometric distortions, including nonlinear eddy-current effects within the readout acquisition window. To evaluate the temporal stability of the eddy-current maps, calibration was performed four times within 3 months. Furthermore, spatial variations of the measured eddy-current maps versus their linear superposition were investigated to enable correction in DW-EPIs with arbitrary diffusion directions without direct calibration. For comparison, an image-based eddy-current correction method was additionally applied. Finally, this method was combined with a PSF-based susceptibility-induced distortion correction approach proposed previously to correct both susceptibility- and eddy-current-induced distortions in DW-EPIs. Very fast eddy-current calibration in a three-dimensional volume is possible with the proposed method. The measured eddy-current maps are very stable over time, and very similar maps can be obtained by linear superposition of the principal-axes eddy-current maps. High-resolution in vivo brain results demonstrate that the proposed method allows more efficient eddy-current correction than the image-based method. The combination of both PSF-based approaches yields distortion-free images, which permit reliable analysis in diffusion tensor imaging applications at 7T. © 2015 Wiley Periodicals, Inc.

  2. Concurrent variation of response bias and sensitivity in an operant-psychophysical test.

    NASA Technical Reports Server (NTRS)

    Terman, M.; Terman, J. S.

    1972-01-01

    The yes-no signal detection procedure was applied to a single-response operant paradigm in which rats discriminated between a standard auditory intensity and attenuated comparison values. The payoff matrix was symmetrical (with reinforcing brain stimulation for correct detections and brief time-out for errors), but signal probability and intensity differences were varied to generate a family of isobias and isosensitivity functions. The d' parameter remained fairly constant across a wide range of bias levels. Isobias functions deviated from a strict matching strategy as discrimination difficulty increased, although an orderly relation was maintained between signal probability value and the degree and direction of response bias.
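    The sensitivity and bias indices used in such yes-no analyses are computed from hit and false-alarm rates with the inverse normal CDF. A standard textbook computation (Python stdlib only; the example rates are chosen so the answer is known analytically):

```python
from statistics import NormalDist

def detection_indices(hit_rate, fa_rate):
    """Signal-detection sensitivity d' and criterion c from a yes-no task."""
    z = NormalDist().inv_cdf
    d_prime = z(hit_rate) - z(fa_rate)
    criterion = -0.5 * (z(hit_rate) + z(fa_rate))
    return d_prime, criterion

# Example: hit rate = Phi(1), false-alarm rate = Phi(-1)  ->  d' = 2, unbiased
d_prime, criterion = detection_indices(0.8413447460685429, 0.15865525393145707)
```

    Holding d' constant while the criterion varies traces out an isosensitivity (ROC) curve, which is exactly the family of functions the experiment generated by manipulating signal probability.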

  3. Application of Van Der Waals Density Functional Theory to Study Physical Properties of Energetic Materials

    NASA Astrophysics Data System (ADS)

    Conroy, M. W.; Budzevich, M. M.; Lin, Y.; Oleynik, I. I.; White, C. T.

    2009-12-01

    An empirical correction to account for van der Waals interactions based on the work of Neumann and Perrin [J. Phys. Chem. B 109, 15531 (2005)] was applied to density functional theory calculations of energetic molecular crystals. The calculated equilibrium unit-cell volumes of FOX-7, β-HMX, solid nitromethane, PETN-I, α-RDX, and TATB show a significant improvement in the agreement with experimental results. Hydrostatic-compression simulations of β-HMX, PETN-I, and α-RDX were also performed. The isothermal equations of state calculated from the results show increased agreement with experiment in the pressure intervals studied.
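    Empirical dispersion corrections of this family add damped pairwise -C6/r^6 terms to the DFT energy. A generic sketch with a Fermi-type damping function and made-up coefficients (the published Neumann-Perrin parameterization is not reproduced here):

```python
import numpy as np

def dispersion_energy(positions, c6, r0, d=20.0):
    """Damped pairwise -C6/r^6 correction (Fermi-type damping).
    positions: (N, 3) array; c6, r0: per-atom arrays (illustrative values)."""
    energy = 0.0
    n = len(positions)
    for i in range(n):
        for j in range(i + 1, n):
            r = np.linalg.norm(positions[i] - positions[j])
            c6ij = np.sqrt(c6[i] * c6[j])          # geometric combining rule
            r0ij = 0.5 * (r0[i] + r0[j])
            damp = 1.0 / (1.0 + np.exp(-d * (r / r0ij - 1.0)))
            energy -= damp * c6ij / r ** 6
    return energy

# Two hypothetical atoms 4 Angstrom apart
pos = np.array([[0.0, 0.0, 0.0], [4.0, 0.0, 0.0]])
e = dispersion_energy(pos, c6=np.array([30.0, 30.0]), r0=np.array([3.0, 3.0]))
```

    The damping switches the correction off at short range, where the density functional already describes the interaction, which is why the correction mainly tightens the equilibrium unit-cell volumes of molecular crystals.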

  4. A Systematic Methodology for Verifying Superscalar Microprocessors

    NASA Technical Reports Server (NTRS)

    Srivas, Mandayam; Hosabettu, Ravi; Gopalakrishnan, Ganesh

    1999-01-01

    We present a systematic approach to decompose and incrementally build the proof of correctness of pipelined microprocessors. The central idea is to construct the abstraction function by using completion functions, one per unfinished instruction, each of which specifies the effect (on the observables) of completing the instruction. In addition to avoiding the term size and case explosion problem that limits the pure flushing approach, our method helps localize errors, and also handles stages with interactive loops. The technique is illustrated on pipelined and superscalar pipelined implementations of a subset of the DLX architecture. It has also been applied to a processor with out-of-order execution.

  5. LANDSAT data preprocessing

    NASA Technical Reports Server (NTRS)

    Austin, W. W.

    1983-01-01

    The effect on LANDSAT data of a Sun angle correction, an intersatellite LANDSAT-2 and LANDSAT-3 data range adjustment, and the atmospheric correction algorithm was evaluated. Fourteen 1978 crop year LACIE sites were used as the site data set. The preprocessing techniques were applied to multispectral scanner channel data, and the transformed data were plotted and used to analyze the effectiveness of the preprocessing techniques. Ratio transformations effectively reduce the need for preprocessing techniques to be applied directly to the data. Subtractive transformations are more sensitive to Sun angle and atmospheric corrections than ratios. Preprocessing techniques, other than those applied at the Goddard Space Flight Center, should only be applied as an option of the user. While performed on LANDSAT data, the study results are also applicable to meteorological satellite data.

  6. Algorithm for Atmospheric Corrections of Aircraft and Satellite Imagery

    NASA Technical Reports Server (NTRS)

    Fraser, Robert S.; Kaufman, Yoram J.; Ferrare, Richard A.; Mattoo, Shana

    1989-01-01

    A simple and fast atmospheric correction algorithm is described which is used to correct radiances of scattered sunlight measured by aircraft and/or satellite above a uniform surface. The atmospheric effect, the basic equations, a description of the computational procedure, and a sensitivity study are discussed. The program is designed to take the measured radiances, view and illumination directions, and the aerosol and gaseous absorption optical thickness to compute the radiance just above the surface, the irradiance on the surface, and surface reflectance. Alternatively, the program will compute the upward radiance at a specific altitude for a given surface reflectance, view and illumination directions, and aerosol and gaseous absorption optical thickness. The algorithm can be applied for any view and illumination directions and any wavelength in the range 0.48 micron to 2.2 micron. The relation between the measured radiance and surface reflectance, which is expressed as a function of atmospheric properties and measurement geometry, is computed using a radiative transfer routine. The results of the computations are presented in a table which forms the basis of the correction algorithm. The algorithm can be used for atmospheric corrections in the presence of a rural aerosol. The sensitivity of the derived surface reflectance to uncertainties in the model and input data is discussed.
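    The core of such a table-based correction is inverting a precomputed radiance-versus-reflectance relation for the given geometry and optical thickness. A toy sketch using linear interpolation; the path radiance, transmittance, and table values below are invented, not from the paper:

```python
import numpy as np

# Hypothetical lookup table: TOA radiance computed for a grid of surface
# reflectances at one fixed geometry and aerosol optical thickness
reflectance_grid = np.linspace(0.0, 0.6, 13)
path_radiance, transmittance = 20.0, 0.8           # invented atmosphere
radiance_grid = path_radiance + 150.0 * transmittance * reflectance_grid

def surface_reflectance(measured_radiance):
    """Invert the monotonic radiance table with linear interpolation."""
    return np.interp(measured_radiance, radiance_grid, reflectance_grid)

rho = surface_reflectance(path_radiance + 150.0 * transmittance * 0.25)
```

    A real implementation tabulates this relation over view and illumination angles and wavelengths (the radiative transfer output described in the abstract) and interpolates in all of those dimensions, not just radiance.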

  7. Using bipedicled myocutaneous Tripier flap to correct ectropion after excision of lower eyelid basal cell carcinoma.

    PubMed

    Maghsodnia, Gholamreza; Ebrahimi, Ali; Arshadi, Amirabbas

    2011-03-01

    Many techniques have been described for correcting ectropion, but when the ectropion follows skin cancer excision, only a technique that replaces the missing skin should be used. The bipedicled Tripier flap tends to leave some excess bulk at each end but gives an excellent correction of ectropion. The aim of this study was to apply a musculocutaneous bipedicled Tripier flap from the upper lid for correction of ectropion due to previous excision of lower-lid malignancies and to evaluate its outcome. This was a prospective case series. In this study, 15 patients (6 women, 9 men), ranging from 35 to 72 years old (mean, 51 years), underwent operation with a Tripier flap for reconstruction of ectropion after basal cell carcinoma (BCC) resection. In patients with ectropion, a Tripier flap with or without ear or nasal septal cartilage was used for reconstruction of deformities 3 months after lower-lid reconstruction with local flaps. All patients were satisfied, and the ectropion was corrected in all cases. There were no complications such as dry eye or corneal abrasion after the operation, and there were no cases of flap ischemia. We suggest that the Tripier flap is one of the best methods for reconstruction of lower-lid retraction or ectropion. It is a desirable method, both functionally and aesthetically.

  8. Algorithm for atmospheric corrections of aircraft and satellite imagery

    NASA Technical Reports Server (NTRS)

    Fraser, R. S.; Ferrare, R. A.; Kaufman, Y. J.; Markham, B. L.; Mattoo, S.

    1992-01-01

    A simple and fast atmospheric correction algorithm is described which is used to correct radiances of scattered sunlight measured by aircraft and/or satellite above a uniform surface. The atmospheric effect, the basic equations, a description of the computational procedure, and a sensitivity study are discussed. The program is designed to take the measured radiances, view and illumination directions, and the aerosol and gaseous absorption optical thickness to compute the radiance just above the surface, the irradiance on the surface, and surface reflectance. Alternatively, the program will compute the upward radiance at a specific altitude for a given surface reflectance, view and illumination directions, and aerosol and gaseous absorption optical thickness. The algorithm can be applied for any view and illumination directions and any wavelength in the range 0.48 micron to 2.2 microns. The relation between the measured radiance and surface reflectance, which is expressed as a function of atmospheric properties and measurement geometry, is computed using a radiative transfer routine. The results of the computations are presented in a table which forms the basis of the correction algorithm. The algorithm can be used for atmospheric corrections in the presence of a rural aerosol. The sensitivity of the derived surface reflectance to uncertainties in the model and input data is discussed.
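    The table-based relation above can be illustrated with the standard single-layer coupling between at-sensor reflectance and surface reflectance; the sketch below is a minimal stand-in, with hypothetical path reflectance, transmittances, and spherical albedo in place of the paper's precomputed radiative-transfer table.

```python
# Minimal sketch of a table-based atmospheric correction, assuming the
# standard single-layer relation between at-sensor reflectance rho_star
# and surface reflectance rho:
#     rho_star = rho_path + t_down * t_up * rho / (1 - s * rho)
# The coefficients below (path reflectance, transmittances, and
# spherical albedo s) are hypothetical stand-ins for the precomputed
# radiative-transfer table, not values from the paper.

def toa_reflectance(rho, rho_path, t_down, t_up, s):
    """Forward model: reflectance at the sensor for a given surface."""
    return rho_path + t_down * t_up * rho / (1.0 - s * rho)

def surface_reflectance(rho_star, rho_path, t_down, t_up, s):
    """Invert the forward relation to recover surface reflectance."""
    y = (rho_star - rho_path) / (t_down * t_up)
    return y / (1.0 + s * y)

coeffs = dict(rho_path=0.05, t_down=0.85, t_up=0.90, s=0.10)
rho_star = toa_reflectance(0.30, **coeffs)
recovered = surface_reflectance(rho_star, **coeffs)
```

    Running the round trip with the assumed coefficients recovers the input surface reflectance, which is the basic consistency check one would apply to any such lookup table.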

  9. Analysis of Cricoid Pressure Force and Technique Among Anesthesiologists, Nurse Anesthetists, and Registered Nurses.

    PubMed

    Lefave, Melissa; Harrell, Brad; Wright, Molly

    2016-06-01

    The purpose of this project was to assess the ability of anesthesiologists, nurse anesthetists, and registered nurses to correctly identify the anatomic landmarks for cricoid pressure and apply the correct amount of force. The project included an educational intervention with a one-group pretest-posttest design. Participants demonstrated cricoid pressure on a laryngotracheal model. After an educational intervention video, participants were asked to repeat cricoid pressure on the model. Participants with a nurse anesthesia background applied more appropriate force at pretest than other participants; however, posttest results, while improved, showed no significant difference among providers. Participant identification of the correct anatomy of the cricoid cartilage and application of correct force were significantly improved after education. This study revealed that, before the educational intervention, participants lacked knowledge of correct cricoid anatomy and pressure as well as the ability to apply correct force to the laryngotracheal model. The intervention used in this study proved successful in educating health care providers. Copyright © 2016 American Society of PeriAnesthesia Nurses. Published by Elsevier Inc. All rights reserved.

  10. Evaluation of three methods for retrospective correction of vignetting on medical microscopy images utilizing two open source software tools.

    PubMed

    Babaloukas, Georgios; Tentolouris, Nicholas; Liatis, Stavros; Sklavounou, Alexandra; Perrea, Despoina

    2011-12-01

    Correction of vignetting on images obtained by a digital camera mounted on a microscope is essential before applying image analysis. The aim of this study is to evaluate three methods for retrospective correction of vignetting on medical microscopy images and compare them with a prospective correction method. One digital image from each of four different tissues was used, and a vignetting effect was applied to each of these images. The resulting vignetted image was replicated four times, and in each replica a different method for vignetting correction was applied with the Fiji and GIMP software tools. The highest peak signal-to-noise ratio from the comparison of each method to the original image was obtained from the prospective method in all tissues. The morphological filtering method provided the highest peak signal-to-noise ratio value amongst the retrospective methods. The prospective method is suggested as the method of choice for correction of vignetting; if it is not applicable, then morphological filtering may be suggested as the retrospective alternative. © 2011 The Authors Journal of Microscopy © 2011 Royal Microscopical Society.
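    The peak signal-to-noise ratio comparison can be sketched as follows; the synthetic radial vignette and the division-based (prospective) correction are illustrative assumptions, and the retrospective methods evaluated in the study (e.g., morphological filtering) would instead estimate the illumination field from the image itself.

```python
import numpy as np

def psnr(reference, test_img, peak=1.0):
    """Peak signal-to-noise ratio in dB; infinite for an exact match."""
    mse = np.mean((reference - test_img) ** 2)
    return float('inf') if mse == 0 else 10 * np.log10(peak ** 2 / mse)

# Synthetic example: apply a radial vignette to a flat image, then
# correct it prospectively by dividing out the known illumination field.
h, w = 64, 64
yy, xx = np.mgrid[0:h, 0:w]
r2 = ((yy - h / 2) ** 2 + (xx - w / 2) ** 2) / (h / 2) ** 2
vignette = 1.0 - 0.4 * r2            # assumed illumination falloff model
original = np.full((h, w), 0.5)
observed = original * vignette       # vignetted image
corrected = observed / vignette      # prospective (flat-field) correction
```

    The prospective correction scores highest by construction here, mirroring the paper's finding that it outperforms the retrospective alternatives whenever a flat-field reference is available.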

  11. Linearized semiclassical initial value time correlation functions with maximum entropy analytic continuation.

    PubMed

    Liu, Jian; Miller, William H

    2008-09-28

    The maximum entropy analytic continuation (MEAC) method is used to extend the range of accuracy of the linearized semiclassical initial value representation (LSC-IVR)/classical Wigner approximation for real time correlation functions. LSC-IVR provides a very effective "prior" for the MEAC procedure since it is very good for short times, exact at all times and temperatures for harmonic potentials (even for correlation functions of nonlinear operators), and becomes exact in the classical high temperature limit. This combined MEAC+LSC-IVR approach is applied here to two highly nonlinear dynamical systems, a pure quartic potential in one dimension and liquid para-hydrogen at two thermal state points (25 and 14 K under nearly zero external pressure). The former example shows the MEAC procedure to be a very significant enhancement of the LSC-IVR for correlation functions of both linear and nonlinear operators, especially at low temperature where semiclassical approximations are least accurate. For liquid para-hydrogen, the LSC-IVR is already seen to be excellent at T=25 K, but the MEAC procedure produces a significant correction at the lower temperature (T=14 K). Comparisons are also made as to how the MEAC procedure is able to provide corrections for other trajectory-based dynamical approximations when used as priors.

  12. Characterizing Bonding Patterns in Diradicals and Triradicals by Density-Based Wave Function Analysis: A Uniform Approach.

    PubMed

    Orms, Natalie; Rehn, Dirk R; Dreuw, Andreas; Krylov, Anna I

    2018-02-13

    Density-based wave function analysis enables unambiguous comparisons of the electronic structure computed by different methods and removes ambiguity of orbital choices. We use this tool to investigate the performance of different spin-flip methods for several prototypical diradicals and triradicals. In contrast to previous calibration studies that focused on energy gaps between high- and low spin-states, we focus on the properties of the underlying wave functions, such as the number of effectively unpaired electrons. Comparison of different density functional and wave function theory results provides insight into the performance of the different methods when applied to strongly correlated systems such as polyradicals. We show that canonical molecular orbitals for species like large copper-containing diradicals fail to correctly represent the underlying electronic structure due to highly non-Koopmans character, while density-based analysis of the same wave function delivers a clear picture of the bonding pattern.

  13. Spatial Point Pattern Analysis of Neurons Using Ripley's K-Function in 3D

    PubMed Central

    Jafari-Mamaghani, Mehrdad; Andersson, Mikael; Krieger, Patrik

    2010-01-01

    The aim of this paper is to apply a non-parametric statistical tool, Ripley's K-function, to analyze the 3-dimensional distribution of pyramidal neurons. Ripley's K-function is a widely used tool in spatial point pattern analysis. Several approaches exist for computing and analyzing this function in 2D domains. Drawing consistent inferences on the underlying 3D point pattern distributions in various applications is of great importance, as the acquisition of 3D biological data now poses less of a challenge due to technological progress. As of now, most applications of Ripley's K-function in 3D domains do not address the phenomenon of edge correction, which is discussed thoroughly in this paper. The main goal is to extend the theoretical and practical utilization of Ripley's K-function and corresponding tests based on bootstrap resampling from 2D to 3D domains. PMID:20577588
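    For reference, the unadjusted 3D estimator of Ripley's K-function (no edge correction, which is exactly the gap the paper addresses) can be sketched as:

```python
import numpy as np

# Unadjusted 3D Ripley K estimator (no edge correction):
#     K(r) = V / (n * (n - 1)) * #{ordered pairs (i, j), i != j, d_ij <= r}

def ripley_k_3d(points, r, volume):
    points = np.asarray(points, dtype=float)
    n = len(points)
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    pairs = int(np.sum(d <= r)) - n   # drop the n zero self-distances
    return volume * pairs / (n * (n - 1))

# Three collinear points in the unit cube: only one unordered pair lies
# within r = 0.2, giving K = 1 * 2 / (3 * 2) = 1/3.
pts = [(0.0, 0.0, 0.0), (0.1, 0.0, 0.0), (0.5, 0.0, 0.0)]
k = ripley_k_3d(pts, r=0.2, volume=1.0)
```

    Under complete spatial randomness K(r) should be close to (4/3)πr³; edge-corrected estimators replace the raw pair counts with weighted counts that compensate for neighborhoods truncated by the observation window.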

  14. Spatially coupled low-density parity-check error correction for holographic data storage

    NASA Astrophysics Data System (ADS)

    Ishii, Norihiko; Katano, Yutaro; Muroi, Tetsuhiko; Kinoshita, Nobuhiro

    2017-09-01

    The spatially coupled low-density parity-check (SC-LDPC) code was considered for holographic data storage. The superiority of SC-LDPC was studied by simulation. The simulations show that the performance of SC-LDPC depends on the lifting number; when the lifting number is over 100, SC-LDPC shows better error correctability than irregular LDPC. SC-LDPC was applied to the 5:9 modulation code, which is one of the differential codes. In simulation, the error-free point is near 2.8 dB, and error rates over 10⁻¹ can be corrected. From these simulation results, this error correction code can be applied to actual holographic data storage test equipment. Results showed that an error rate of 8 × 10⁻² can be corrected; furthermore, the code works effectively and shows good error correctability.

  15. TLD linearity vs. beam energy and modality.

    PubMed

    Troncalli, Andrew J; Chapman, Jane

    2002-01-01

    Thermoluminescent dosimetry (TLD) is considered to be a valuable dosimetric tool in determining patient dose. Lithium fluoride doped with magnesium and titanium (TLD-100) is widely used, as it does not display widely divergent energy dependence. For many years, we have known that TLD-100 shows supralinearity to dose. In a radiotherapy clinic, there are beams of multiple energies and modalities. This work investigates whether individual linearity corrections must be used for each beam or whether a single correction can be applied to all beams. The response of TLD as a function of dose was measured from 25 cGy to 1000 cGy for both electrons and photons from 6 to 18 MeV. This work shows that, within our measurement uncertainty, TLD-100 exhibits supralinearity at all megavoltage energies and modalities.
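    A single supralinearity correction of the kind investigated here can be sketched under the assumption of a quadratic dose response, a common TLD model; the coefficients below are illustrative, not measured TLD-100 values.

```python
import numpy as np

# Sketch of one linearity (supralinearity) correction applied to all
# beams, assuming a quadratic dose response R = a*D + b*D**2. The
# coefficients are illustrative stand-ins, not measured TLD-100 data.

a, b = 1.0, 2e-4                       # linear response and supralinear term
doses = np.array([25.0, 100.0, 400.0, 1000.0])   # delivered doses in cGy
readings = a * doses + b * doses ** 2  # synthetic supralinear TLD readings

def dose_from_reading(reading, a, b):
    """Invert the quadratic response to recover the delivered dose."""
    if b == 0:
        return reading / a
    return (-a + np.sqrt(a * a + 4.0 * b * reading)) / (2.0 * b)

recovered = dose_from_reading(readings, a, b)
```

    If the same (a, b) pair fits all beams within measurement uncertainty, a single correction suffices, which is the question the work poses.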

  16. GENERAL: Scattering Phase Correction for Semiclassical Quantization Rules in Multi-Dimensional Quantum Systems

    NASA Astrophysics Data System (ADS)

    Huang, Wen-Min; Mou, Chung-Yu; Chang, Cheng-Hung

    2010-02-01

    While the scattering phase for several one-dimensional potentials can be exactly derived, less is known in multi-dimensional quantum systems. This work provides a method to extend the one-dimensional phase knowledge to multi-dimensional quantization rules. The extension is illustrated in the example of Bogomolny's transfer operator method applied in two quantum wells bounded by step potentials of different heights. This generalized semiclassical method accurately determines the energy spectrum of the systems, which indicates the substantial role of the proposed phase correction. Theoretically, the result can be extended to other semiclassical methods, such as Gutzwiller trace formula, dynamical zeta functions, and semiclassical Landauer-Büttiker formula. In practice, this recipe enhances the applicability of semiclassical methods to multi-dimensional quantum systems bounded by general soft potentials.

  17. Heavy quarkonium production at collider energies: Partonic cross section and polarization

    DOE PAGES

    Qiu, Jian -Wei; Kang, Zhong -Bo; Ma, Yan -Qing; ...

    2015-01-27

    We calculate the O(αs³) short-distance, QCD collinear-factorized coefficient functions for all partonic channels that include the production of a heavy quark pair at short distances. This provides the first power correction to the collinear-factorized inclusive hadronic production of heavy quarkonia at large transverse momentum, pT, including the full leading-order perturbative contributions to the production of heavy quark pairs in all color and spin states employed in NRQCD treatments of this process. We discuss the role of the first power correction in the production rates and the polarizations of heavy quarkonia in high-energy hadronic collisions. The consistency of QCD collinear factorization and nonrelativistic QCD factorization applied to heavy quarkonium production is also discussed.

  18. HTGR plant availability and reliability evaluations. Volume I. Summary of evaluations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cadwallader, G.J.; Hannaman, G.W.; Jacobsen, F.K.

    1976-12-01

    The report (1) describes a reliability assessment methodology for systematically locating and correcting areas which may contribute to unavailability of new and uniquely designed components and systems, (2) illustrates the methodology by applying it to such components in a high-temperature gas-cooled reactor (Public Service Company of Colorado's Fort St. Vrain 330-MW(e) HTGR), and (3) compares the results of the assessment with actual experience. The methodology can be applied to any component or system; however, it is particularly valuable for assessments of components or systems which provide essential functions, or the failure or mishandling of which could result in relatively large economic losses.

  19. A sentence sliding window approach to extract protein annotations from biomedical articles

    PubMed Central

    Krallinger, Martin; Padron, Maria; Valencia, Alfonso

    2005-01-01

    Background Within the emerging field of text mining and statistical natural language processing (NLP) applied to biomedical articles, a broad variety of techniques have been developed during the past years. Nevertheless, there is still a great need for comparative assessment of the performance of the proposed methods and for the development of common evaluation criteria. This issue was addressed by the Critical Assessment of Text Mining Methods in Molecular Biology (BioCreative) contest. The aim of this contest was to assess the performance of text mining systems applied to biomedical texts, including tools which recognize named entities such as genes and proteins, and tools which automatically extract protein annotations. Results The "sentence sliding window" approach proposed here was found to efficiently extract text fragments from full text articles containing annotations on proteins, providing the highest number of correctly predicted annotations. Moreover, the number of correct extractions of individual entities (i.e. proteins and GO terms) involved in the relationships used for the annotations was significantly higher than the correct extractions of the complete annotations (protein-function relations). Conclusion We explored the use of averaging sentence sliding windows for information extraction, especially in a context where conventional training data is unavailable. The combination of our approach with more refined statistical estimators and machine learning techniques might be a way to improve annotation extraction for future biomedical text mining applications. PMID:15960831
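    The sliding-window idea can be sketched as a toy scorer over consecutive sentences; the term list, window width, and scoring below are illustrative assumptions, not the BioCreative system itself.

```python
# Toy "sentence sliding window": score each window of consecutive
# sentences by how many query terms it contains and keep the
# best-scoring fragment. Terms and scoring are illustrative.

def best_window(sentences, terms, width=3):
    terms = [t.lower() for t in terms]
    best, best_score = None, -1
    for i in range(max(1, len(sentences) - width + 1)):
        window = sentences[i:i + width]
        text = " ".join(window).lower()
        score = sum(text.count(t) for t in terms)
        if score > best_score:
            best, best_score = window, score
    return best, best_score

sents = ["The assay was repeated twice.",
         "P53 binds DNA and activates transcription.",
         "DNA repair requires P53 function.",
         "Samples were stored at -80C."]
frag, score = best_window(sents, ["P53", "DNA"], width=2)
```

    The middle two sentences win because they mention both terms, which is the intuition behind extracting annotation-bearing fragments rather than whole articles.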

  20. Enhanced intercarrier interference mitigation based on encoded bit-sequence distribution inside optical superchannels

    NASA Astrophysics Data System (ADS)

    Torres, Jhon James Granada; Soto, Ana María Cárdenas; González, Neil Guerrero

    2016-10-01

    In the context of gridless optical multicarrier systems, we propose a method for intercarrier interference (ICI) mitigation which allows bit error correction in scenarios of nonspectral flatness between the subcarriers composing the multicarrier system and sub-Nyquist carrier spacing. We propose a hybrid ICI mitigation technique which exploits the advantages of signal equalization at both levels: the physical level for any digital and analog pulse shaping, and the bit-data level and its ability to incorporate advanced correcting codes. The concatenation of these two complementary techniques consists of a nondata-aided equalizer applied to each optical subcarrier, and a hard-decision forward error correction applied to the sequence of bits distributed along the optical subcarriers regardless of prior subchannel quality assessment as performed in orthogonal frequency-division multiplexing modulations for the implementation of the bit-loading technique. The impact of the ICI is systematically evaluated in terms of bit-error-rate as a function of the carrier frequency spacing and the roll-off factor of the digital pulse-shaping filter for a simulated 3×32-Gbaud single-polarization quadrature phase shift keying Nyquist-wavelength division multiplexing system. After the ICI mitigation, a back-to-back error-free decoding was obtained for sub-Nyquist carrier spacings of 28.5 and 30 GHz and roll-off values of 0.1 and 0.4, respectively.

  1. On Study of Air/Space-borne Dual-Wavelength Radar for Estimates of Rain Profiles

    NASA Technical Reports Server (NTRS)

    Liao, Liang; Meneghini, Robert

    2004-01-01

    In this study, a framework is discussed to apply air/space-borne dual-wavelength radar for the estimation of characteristic parameters of hydrometeors. The focus of our study is on the Global Precipitation Measurement (GPM) precipitation radar, a dual-wavelength radar that operates at Ku (13.8 GHz) and Ka (35 GHz) bands. As the droplet size distributions (DSD) of rain are expressed as the Gamma function, a procedure is described to derive the median volume diameter (D(sub 0)) and particle number concentration (N(sub T)) of rain. The correspondence of an important dual-wavelength radar quantity, the differential frequency ratio (DFR), to D(sub 0) in the melting region is given as a function of the distance from the 0°C isotherm. A self-consistent iterative algorithm that shows promise in accounting for rain attenuation and inferring the DSD without use of the surface reference technique (SRT) is examined by applying it to apparent radar reflectivity profiles simulated from the DSD model and then comparing the estimates with the model (true) results. For light to moderate rain, the self-consistent rain profiling approach converges to unique and correct solutions only if the same shape factors of the Gamma functions are used both to generate and retrieve the rain profiles, but it does not converge to the true solutions if the DSD form is not chosen correctly. To further examine the dual-wavelength techniques, the self-consistent algorithm, along with forward and backward rain profiling algorithms, is then applied to measurements taken from the 2nd generation Precipitation Radar (PR-2) built by the Jet Propulsion Laboratory. It is found that rain profiles estimated from the forward and backward approaches are not sensitive to the shape factor of the DSD Gamma distribution, but the self-consistent method is.

  2. Reduction of CMIP5 models bias using Cumulative Distribution Function transform and impact on crops yields simulations across West Africa.

    NASA Astrophysics Data System (ADS)

    Moise Famien, Adjoua; Defrance, Dimitri; Sultan, Benjamin; Janicot, Serge; Vrac, Mathieu

    2017-04-01

    Different CMIP exercises show that simulations of future and current temperature and precipitation are complex, with a high degree of uncertainty. For example, the African monsoon system is not correctly simulated and most of the CMIP5 models underestimate the precipitation. Global Climate Models (GCMs) therefore show significant systematic biases that require correction before they can be used in impact studies. Several bias-correction methods have been developed over the years, increasingly relying on more complex statistical methods. The aim of this work is to show the interest of the CDFt (Cumulative Distribution Function transform (Michelangeli et al., 2009)) method for reducing the bias of data from 29 CMIP5 GCMs over Africa, and to assess the impact of bias-corrected data on crop yield predictions by the end of the 21st century. In this work, we apply the CDFt to daily data covering the period from 1950 to 2099 (Historical and RCP8.5) and we correct the climate variables (temperature, precipitation, solar radiation, wind) using the new daily database from the EU project WATer and global CHange (WATCH), available from 1979 to 2013, as reference data. The performance of the method is assessed in several cases. First, data are corrected based on different calibration periods and are compared, on the one hand, with observations to estimate the sensitivity of the method to the calibration period and, on the other hand, with another bias-correction method used in the ISIMIP project. We find that, whatever the calibration period used, CDFt corrects the mean state of the variables well and preserves their trend, as well as daily rainfall occurrence and intensity distributions. However, some differences appear when compared to the outputs obtained with the method used in ISIMIP, showing that the quality of the correction is strongly related to the reference data.
    Second, we validate the bias-correction method with agronomic simulations (SARRA-H model (Kouressy et al., 2008)) by comparison with FAO crop yield estimates over West Africa. Impact simulations show that the crop model is sensitive to input data. They also show decreasing crop yields by the end of this century. Michelangeli, P. A., Vrac, M., & Loukos, H. (2009). Probabilistic downscaling approaches: Application to wind cumulative distribution functions. Geophysical Research Letters, 36(11). Kouressy M, Dingkuhn M, Vaksmann M and Heinemann A B 2008: Adaptation to diverse semi-arid environments of sorghum genotypes having different plant type and sensitivity to photoperiod. Agric. Forest Meteorol., http://dx.doi.org/10.1016/j.agrformet.2007.09.009
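    The distribution-based correction idea can be sketched with plain empirical quantile mapping; this is a simplified stand-in for CDF-t (which additionally transfers the calibration-period transform to future periods), and the synthetic series below are illustrative.

```python
import numpy as np

# Simplified empirical quantile mapping as a stand-in for a CDF-based
# bias correction: each model value is passed through the model CDF and
# back through the inverse observed CDF.

def quantile_map(model, obs, values):
    q = np.linspace(0.0, 1.0, 101)
    model_q = np.quantile(model, q)    # empirical model quantiles
    obs_q = np.quantile(obs, q)        # empirical observed quantiles
    p = np.interp(values, model_q, q)  # p = F_model(values)
    return np.interp(p, q, obs_q)      # F_obs^{-1}(p)

obs = np.arange(100.0)        # synthetic "observed" reference series
model = obs + 5.0             # model output with a constant bias
corrected = quantile_map(model, obs, model)
```

    With a constant bias the mapping simply shifts the model distribution back onto the observed one; for real climate variables it also reshapes occurrence and intensity distributions, which is why such methods are preferred over a mean-shift correction.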

  3. Alternative formulation of explicitly correlated third-order Møller-Plesset perturbation theory

    NASA Astrophysics Data System (ADS)

    Ohnishi, Yu-ya; Ten-no, Seiichiro

    2013-09-01

    The second-order wave operator in the explicitly correlated wave function theory has been newly defined as an extension of the conventional s- and p-wave (SP) ansatz (also referred to as the FIXED amplitude ansatz) based on the linked-diagram theorem. The newly defined second-order wave operator has been applied to the calculation of the F12 correction to the third-order many-body perturbation (MP3) energy. In addition to this new wave operator, the F12 correction with the conventional first-order wave operator has been derived and calculated. Among the three components of the MP3 correlation energy, the particle ladder contribution, which has shown the slowest convergence with respect to the basis set size, is fairly ameliorated by employing these F12 corrections. Both the newly defined and conventional formalisms of the F12 corrections exhibit a similar recovery of over 90% of the complete basis set limit of the particle ladder contribution of the MP3 correlation energy with a triple-zeta quality basis set for the neon atom, while the amount is about 75% without the F12 correction. The corrections to the ring term are small, but the corrected energy shows a recovery similar to that of the particle ladder term. The hole ladder term shows rapid convergence even without the F12 corrections. Owing to these balanced recoveries, the deviation of the total MP3 correlation energy from the complete basis set limit has been calculated to be about 1 kcal/mol with the triple-zeta quality basis set, which is more than five times smaller than the error without the F12 correction.

  4. Four Theorems on the Psychometric Function

    PubMed Central

    May, Keith A.; Solomon, Joshua A.

    2013-01-01

    In a 2-alternative forced-choice (2AFC) discrimination task, observers choose which of two stimuli has the higher value. The psychometric function for this task gives the probability of a correct response for a given stimulus difference, Δx. This paper proves four theorems about the psychometric function. Assuming the observer applies a transducer and adds noise, Theorem 1 derives a convenient general expression for the psychometric function. Discrimination data are often fitted with a Weibull function. Theorem 2 proves that the Weibull "slope" parameter, β, can be approximated by β_Noise × β_transducer, where β_Noise is the β of the Weibull function that fits best to the cumulative noise distribution, and β_transducer depends on the transducer. We derive general expressions for β_Noise and β_transducer, from which we derive expressions for specific cases. One case that follows naturally from our general analysis is Pelli's finding relating β to the exponent of a power-function transducer. We also consider two limiting cases. Theorem 3 proves that, as sensitivity improves, 2AFC performance will usually approach that for a linear transducer, whatever the actual transducer; we show that this does not apply at signal levels where the transducer gradient is zero, which explains why it does not apply to contrast detection. Theorem 4 proves that, when the exponent of a power-function transducer approaches zero, 2AFC performance approaches that of a logarithmic transducer. We show that the power-function exponents of 0.4–0.5 fitted to suprathreshold contrast discrimination data are close enough to zero for the fitted psychometric function to be practically indistinguishable from that of a log transducer. Finally, the Weibull β reflects the shape of the noise distribution, and we used our results to assess the recent claim that internal noise has higher kurtosis than a Gaussian. Our analysis of β for contrast discrimination suggests that, if internal noise is stimulus-independent, it has lower kurtosis than a Gaussian. PMID:24124456
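    The transducer-plus-noise setup behind Theorem 1 can be sketched numerically; the power-function transducer, its exponent, and the noise level below are illustrative choices, not fitted values from the paper.

```python
import math

# Numerical sketch of a transducer-plus-Gaussian-noise 2AFC model:
# with transducer f and independent Gaussian noise of standard
# deviation sigma added to each interval's response, proportion
# correct for a stimulus difference dx at pedestal x is
#     P(dx) = Phi((f(x + dx) - f(x)) / (sigma * sqrt(2))).

def phi(z):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def p_correct(x, dx, sigma, transducer):
    d = (transducer(x + dx) - transducer(x)) / (sigma * math.sqrt(2.0))
    return phi(d)

power = lambda c, tau=0.45: c ** tau   # illustrative power-function transducer
p = p_correct(x=0.2, dx=0.1, sigma=0.05, transducer=power)
```

    At dx = 0 the model predicts chance performance (0.5), and performance rises toward 1 as the transduced difference grows relative to the noise, tracing out the psychometric function.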

  5. 40 CFR 264.90 - Applicability.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... applying to a regulated unit with alternative requirements for groundwater monitoring and corrective action for releases to groundwater set out in the permit (or in an enforceable document) (as defined in 40... contributed to the release; and (2) It is not necessary to apply the groundwater monitoring and corrective...

  6. 40 CFR 264.90 - Applicability.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... applying to a regulated unit with alternative requirements for groundwater monitoring and corrective action for releases to groundwater set out in the permit (or in an enforceable document) (as defined in 40... contributed to the release; and (2) It is not necessary to apply the groundwater monitoring and corrective...

  7. 40 CFR 264.90 - Applicability.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... applying to a regulated unit with alternative requirements for groundwater monitoring and corrective action for releases to groundwater set out in the permit (or in an enforceable document) (as defined in 40... contributed to the release; and (2) It is not necessary to apply the groundwater monitoring and corrective...

  8. A Systematic Error Correction Method for TOVS Radiances

    NASA Technical Reports Server (NTRS)

    Joiner, Joanna; Rokke, Laurie; Einaudi, Franco (Technical Monitor)

    2000-01-01

    Treatment of systematic errors is crucial for the successful use of satellite data in a data assimilation system. Systematic errors in TOVS radiance measurements and radiative transfer calculations can be as large or larger than random instrument errors. The usual assumption in data assimilation is that observational errors are unbiased. If biases are not effectively removed prior to assimilation, the impact of satellite data will be lessened and can even be detrimental. Treatment of systematic errors is important for short-term forecast skill as well as the creation of climate data sets. A systematic error correction algorithm has been developed as part of a 1D radiance assimilation. This scheme corrects for spectroscopic errors, errors in the instrument response function, and other biases in the forward radiance calculation for TOVS. Such algorithms are often referred to as tuning of the radiances. The scheme is able to account for the complex, air-mass dependent biases that are seen in the differences between TOVS radiance observations and forward model calculations. We will show results of systematic error correction applied to the NOAA 15 Advanced TOVS as well as its predecessors. We will also discuss the ramifications of inter-instrument bias with a focus on stratospheric measurements.
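    A minimal sketch of such a tuning step, assuming a linear predictor-based bias model; the predictors and coefficients below are placeholders, not the actual TOVS scheme.

```python
import numpy as np

# Sketch of an air-mass-dependent bias ("tuning") correction: model the
# observed-minus-calculated radiance differences as a linear function of
# a few predictors and subtract the least-squares fit. Predictors and
# coefficients are illustrative placeholders.

n = 50
airmass = np.linspace(1.0, 2.0, n)                           # path-length proxy
thickness = 220.0 + 15.0 * np.cos(np.linspace(0.0, 3.0, n))  # layer-thickness proxy
X = np.column_stack([np.ones(n), airmass, thickness])

true_c = np.array([0.5, 0.3, -0.01])   # assumed bias coefficients
omc = X @ true_c                       # synthetic observed-minus-calculated

coef, *_ = np.linalg.lstsq(X, omc, rcond=None)
debiased = omc - X @ coef              # residual after bias removal
```

    In an operational scheme the fit would be computed per channel against collocated forward-model calculations, so that only (ideally) unbiased departures enter the assimilation.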

  9. 76 FR 59897 - Branded Prescription Drug Fee; Correction

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-09-28

    ... Prescription Drug Fee; Correction AGENCY: Internal Revenue Service (IRS), Treasury. ACTION: Correcting... branded prescription drugs. This fee was enacted by section 9008 of the Patient Protection and Affordable...: This correction is effective on September 28, 2011 and applies to any fee on branded prescription drug...

  10. Phase-space quantum mechanics study of two identical particles in an external oscillatory potential

    NASA Technical Reports Server (NTRS)

    Nieto, Luis M.; Gadella, Manuel

    1993-01-01

    This simple example is used to show how the Moyal formalism works when applied to systems of identical particles. The symmetric and antisymmetric Moyal propagators are evaluated for this case; from them, the correct energy levels are obtained, as well as the Wigner functions for the symmetric and antisymmetric states of the two-identical-particle system. Finally, the solution of the Bloch equation is straightforwardly obtained from the expressions of the Moyal propagators.

  11. Location of laccase in ordered mesoporous materials

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mayoral, Álvaro; Gascón, Victoria; Blanco, Rosa M.

    2014-11-01

    The functionalization with amine groups was developed on the SBA-15, and its effect in the laccase immobilization was compared with that of a Periodic Mesoporous Aminosilica. A method to encapsulate the laccase in situ has now been developed. In this work, spherical aberration (Cs) corrected scanning transmission electron microscopy combined with high angle annular dark field detector and electron energy loss spectroscopy were applied to identify the exact location of the enzyme in the matrix formed by the ordered mesoporous solids.

  12. Seasat low-rate data system

    NASA Technical Reports Server (NTRS)

    Brown, J. W.; Cleven, G. C.; Klose, J. C.; Lame, D. B.; Yamarone, C. A.

    1979-01-01

    The Seasat low-rate data system, an end-to-end data-processing and data-distribution system for the four low-rate sensors (radar altimeter, Seasat-A scatterometer system, scanning multichannel microwave radiometer, and visible and infrared radiometer) carried aboard the satellite, is discussed. The function of the distributed, nonreal-time, magnetic-tape system is to apply necessary calibrations, corrections, and conversions to yield geophysically meaningful products from raw telemetry data. The algorithms developed for processing data from the different sensors are described, together with the data catalogs compiled.

  13. Location of laccase in ordered mesoporous materials

    NASA Astrophysics Data System (ADS)

    Mayoral, Álvaro; Gascón, Victoria; Blanco, Rosa M.; Márquez-Álvarez, Carlos; Díaz, Isabel

    2014-11-01

    The functionalization with amine groups was developed on the SBA-15, and its effect in the laccase immobilization was compared with that of a Periodic Mesoporous Aminosilica. A method to encapsulate the laccase in situ has now been developed. In this work, spherical aberration (Cs) corrected scanning transmission electron microscopy combined with high angle annular dark field detector and electron energy loss spectroscopy were applied to identify the exact location of the enzyme in the matrix formed by the ordered mesoporous solids.

  14. Primordial nucleosynthesis and neutrino physics

    NASA Astrophysics Data System (ADS)

    Smith, Christel Johanna

    We study primordial nucleosynthesis abundance yields for assumed ranges of cosmological lepton numbers, sterile neutrino mass-squared differences, and active-sterile vacuum mixing angles. We fix the baryon-to-photon ratio at the value derived from the cosmic microwave background (CMB) data and then calculate the deviation of the ²H, ⁴He, and ⁷Li abundance yields from those expected in the zero-lepton-number, no-new-neutrino-physics case. We conclude that high-precision (< 5% error) measurements of the primordial ²H abundance from, e.g., QSO absorption line observations, coupled with high-precision (< 1% error) baryon density measurements from the CMB, could have the power to either (1) reveal or rule out the existence of a light sterile neutrino if the sign of the cosmological lepton number is known, or (2) place strong constraints on lepton numbers, sterile neutrino mixing properties, and resonance sweep physics. Similar conclusions would hold if the primordial ⁴He abundance could be determined to better than 10%. We have performed new Big Bang Nucleosynthesis calculations which employ arbitrarily specified, time-dependent neutrino and antineutrino distribution functions for each of up to four neutrino flavors. We self-consistently couple these distributions to the thermodynamics, the expansion rate and scale factor-time/temperature relationship, and all relevant weak, electromagnetic, and strong nuclear reaction processes in the early universe. With this approach, we can treat any scenario in which neutrino or antineutrino spectral distortion might arise. These scenarios might include, for example, decaying particles, active-sterile neutrino oscillations, and active-active neutrino oscillations in the presence of significant lepton numbers. Our calculations allow lepton numbers and sterile neutrinos to be constrained with observationally determined primordial helium and deuterium abundances. We have modified a standard BBN code to perform these calculations and have made it available to the community. We have applied a fully relativistic Coulomb wave correction to the weak reactions in the full Kawano/Wagoner Big Bang Nucleosynthesis (BBN) code. We have also added the zero-temperature radiative correction. We find that using this higher-accuracy Coulomb correction results in good agreement with previous work, giving only a modest ~0.04% increase in the helium mass fraction over correction prescriptions applied previously in BBN calculations. We have calculated the effect of these corrections on other light-element abundance yields in BBN and have studied these yields as functions of the electron neutrino lepton number. This has allowed insights into the role of the Coulomb correction in the setting of the neutron-to-proton ratio during the BBN epoch. We find that the lepton capture processes' contributions to this ratio are only second order in the Coulomb correction.

  15. Computer Vision and Machine Learning for Autonomous Characterization of AM Powder Feedstocks

    NASA Astrophysics Data System (ADS)

    DeCost, Brian L.; Jain, Harshvardhan; Rollett, Anthony D.; Holm, Elizabeth A.

    2017-03-01

    By applying computer vision and machine learning methods, we develop a system to characterize powder feedstock materials for metal additive manufacturing (AM). Feature detection and description algorithms are applied to create a microstructural scale image representation that can be used to cluster, compare, and analyze powder micrographs. When applied to eight commercial feedstock powders, the system classifies powder images into the correct material systems with greater than 95% accuracy. The system also identifies both representative and atypical powder images. These results suggest the possibility of measuring variations in powders as a function of processing history, relating microstructural features of powders to properties relevant to their performance in AM processes, and defining objective material standards based on visual images. A significant advantage of the computer vision approach is that it is autonomous, objective, and repeatable.
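    The pipeline described (local feature extraction, clustering into a visual dictionary, classification of the resulting histogram representations) can be sketched as follows. This is not the authors' system: the patch statistics, the cluster count k=8, the two synthetic "powder" textures, and the 1-nearest-neighbour classifier are all illustrative assumptions standing in for the keypoint descriptors and visual-dictionary machinery used in the paper.

```python
# Minimal bag-of-visual-words sketch: patch features -> k-means codebook ->
# normalized histogram per image -> nearest-neighbour classification.
import numpy as np

rng = np.random.default_rng(0)

def patch_features(img, size=8):
    """Describe an image by the mean and std of non-overlapping patches."""
    h, w = img.shape
    feats = []
    for i in range(0, h - size + 1, size):
        for j in range(0, w - size + 1, size):
            p = img[i:i + size, j:j + size]
            feats.append([p.mean(), p.std()])
    return np.array(feats)

def kmeans(X, k, iters=20):
    # Deterministic, spread-out initialization across the feature list.
    centers = X[np.linspace(0, len(X) - 1, k).astype(int)].copy()
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for c in range(k):
            if np.any(labels == c):
                centers[c] = X[labels == c].mean(axis=0)
    return centers

def bovw_histogram(img, centers):
    f = patch_features(img)
    words = np.argmin(((f[:, None] - centers[None]) ** 2).sum(-1), axis=1)
    hist = np.bincount(words, minlength=len(centers)).astype(float)
    return hist / hist.sum()

# Two synthetic "powder" classes: fine-grained noise vs coarse blocky texture.
def make_img(coarse):
    base = rng.normal(0.0, 1.0, (64, 64))
    if coarse:
        base = np.repeat(np.repeat(base[::8, ::8], 8, axis=0), 8, axis=1)
    return base

train = [(make_img(c), c) for c in [0, 1] * 10]
centers = kmeans(np.vstack([patch_features(im) for im, _ in train]), k=8)
hists = np.array([bovw_histogram(im, centers) for im, _ in train])
labels = np.array([c for _, c in train])

def classify(img):
    h = bovw_histogram(img, centers)
    d = ((hists - h) ** 2).sum(axis=1)
    return labels[np.argmin(d)]  # 1-nearest-neighbour on histograms
```

The histogram representation also supports the clustering and outlier detection mentioned in the abstract, since any distance on histograms can be reused for both.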

  16. First Principle Predictions of Isotopic Shifts in H2O

    NASA Technical Reports Server (NTRS)

    Schwenke, David W.; Kwak, Dochan (Technical Monitor)

    2002-01-01

    We compute isotope-independent first- and second-order corrections to the Born-Oppenheimer approximation for water and use them to predict isotopic shifts. For the diagonal correction, we use icMRCI wavefunctions and derivatives with respect to mass-dependent internal coordinates to generate the mass-independent correction functions. For the non-adiabatic correction, we use scaled SCF/CIS wave functions and a generalization of the Handy method to obtain mass-independent correction functions. We find that including the non-adiabatic correction gives significantly improved results compared to including only the diagonal correction when the Born-Oppenheimer potential energy surface is optimized for H2O-16. The agreement with experimental results for deuterium- and tritium-containing isotopes is nearly as good as our best empirical correction; however, the present correction is expected to be more reliable for higher, uncharacterized levels.

  17. Transport through correlated systems with density functional theory

    NASA Astrophysics Data System (ADS)

    Kurth, S.; Stefanucci, G.

    2017-10-01

    We present recent advances in density functional theory (DFT) for applications in the field of quantum transport, with particular emphasis on transport through strongly correlated systems. We review the foundations of the popular Landauer-Büttiker (LB)+DFT approach. This formalism, when using approximations to the exchange-correlation (xc) potential with steps at integer occupation, correctly captures the Kondo plateau in the zero-bias conductance at zero temperature but completely fails to capture the transition to the Coulomb blockade (CB) regime as the temperature increases. To overcome the limitations of LB+DFT, the quantum transport problem is treated from a time-dependent (TD) perspective using TDDFT, an exact framework for dealing with nonequilibrium situations. The steady-state limit of TDDFT shows that in addition to an xc potential in the junction, there also exists an xc correction to the applied bias. Open-shell molecules in the CB regime provide the most striking examples of the importance of the xc bias correction. Using the Anderson model as guidance, we estimate these corrections in the limit of zero bias. For the general case we put forward a steady-state DFT based on a one-to-one correspondence between the pair of basic variables (steady density on, and steady current across, the junction) and the pair (local potential on, and bias across, the junction). Like TDDFT, this framework also leads to both an xc potential in the junction and an xc correction to the bias. Unlike TDDFT, these potentials are independent of history. We highlight the universal features of both the xc potential and the xc bias corrections for junctions in the CB regime and provide an accurate parametrization of the Anderson model at arbitrary temperatures and interaction strengths, thus providing a unified DFT description of both the Kondo and CB regimes and the transition between them.

  18. Time-dependent phase error correction using digital waveform synthesis

    DOEpatents

    Doerry, Armin W.; Buskirk, Stephen

    2017-10-10

    The various technologies presented herein relate to correcting a time-dependent phase error generated as part of the formation of a radar waveform. A waveform can be pre-distorted to facilitate correction of an error induced into the waveform by a downstream operation/component in a radar system. For example, the amplifier power droop effect can engender a time-dependent phase error in a waveform as part of a radar signal generating operation. The error can be quantified, and a corresponding complementary distortion can be applied to the waveform to negate the error during subsequent processing of the waveform. A time-domain correction can be applied by a phase error correction lookup table incorporated into a waveform phase generator.
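    As a rough illustration of the pre-distortion idea (not the patented implementation), one can quantify the phase error, store its complement as a lookup table, and multiply it onto the waveform before transmission. The exponential power-droop model and the waveform parameters below are assumed placeholders.

```python
# Sketch: pre-distort a chirp to cancel a known time-dependent phase error.
import numpy as np

fs = 1.0e6                                    # sample rate, Hz (illustrative)
t = np.arange(4096) / fs
chirp = np.exp(1j * np.pi * 1.0e8 * t**2)     # ideal linear-FM waveform

# Assumed droop-induced phase error, quantified beforehand (radians).
phase_err = 0.5 * (1.0 - np.exp(-t / 1.0e-3))

# Time-domain correction via a phase LUT: apply the complementary phase.
lut = np.exp(-1j * phase_err)
predistorted = chirp * lut

# Downstream, the amplifier adds phase_err back; the result matches the ideal.
received = predistorted * np.exp(1j * phase_err)
residual = np.angle(received * np.conj(chirp))
```

After the downstream distortion is re-applied, the residual phase is zero to floating-point precision, which is the point of the complementary pre-distortion.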

  19. Preferred color correction for digital LCD TVs

    NASA Astrophysics Data System (ADS)

    Kim, Kyoung Tae; Kim, Choon-Woo; Ahn, Ji-Young; Kang, Dong-Woo; Shin, Hyun-Ho

    2009-01-01

    Instead of colorimetric color reproduction, preferred color correction is applied in digital TVs to improve subjective image quality. The first step of preferred color correction is to survey the preferred color coordinates of memory colors; this can be achieved by off-line human visual tests. The next step is to extract pixels of memory colors representing skin, grass, and sky. For the detected pixels, colors are shifted towards the desired coordinates identified in advance. This correction process may result in undesirable contours on the boundaries between corrected and uncorrected areas. For digital TV applications, the extraction and correction process must be applied in every frame of the moving images. This paper presents a preferred color correction method in LCH color space. Values of chroma and hue are corrected independently, and undesirable contours on the boundaries of correction are minimized. The proposed method changes the coordinates of memory-color pixels towards the target color coordinates; the amount of correction is determined based on the averaged coordinate of the extracted pixels. The proposed method maintains the relative color differences within memory-color areas. Performance is evaluated using paired comparison. Results of experiments indicate that the proposed method can reproduce perceptually pleasing images for viewers.
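    A minimal sketch of an LCH chroma/hue shift with a smooth membership weight, which is what suppresses hard contours at the boundary of the corrected region. The Gaussian weight, the hue centre/width, and the 30% shift strength are invented for illustration; the paper's actual correction functions are not reproduced here.

```python
# Hypothetical preferred-color shift in LCH: pull pixels near a memory-color
# hue toward a target chroma/hue, with a weight that decays smoothly in hue.
import numpy as np

def correct_lch(L, C, H, center_h, sigma_h, target_c, target_h, strength=0.3):
    """Shift chroma and hue independently toward a preferred target."""
    dh = (H - center_h + 180.0) % 360.0 - 180.0        # wrapped hue distance
    w = strength * np.exp(-0.5 * (dh / sigma_h) ** 2)  # smooth membership weight
    C_out = C + w * (target_c - C)
    H_out = (H + w * ((target_h - H + 180.0) % 360.0 - 180.0)) % 360.0
    return L, C_out, H_out
```

Because the weight falls off continuously with hue distance, pixels just outside the memory-color region receive a near-zero correction, avoiding visible contours between corrected and uncorrected areas.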

  20. Retrievals of atmospheric columnar carbon dioxide and methane from GOSAT observations with photon path-length probability density function (PPDF) method

    NASA Astrophysics Data System (ADS)

    Bril, A.; Oshchepkov, S.; Yokota, T.; Yoshida, Y.; Morino, I.; Uchino, O.; Belikov, D. A.; Maksyutov, S. S.

    2014-12-01

    We retrieved the column-averaged dry-air mole fractions of atmospheric carbon dioxide (XCO2) and methane (XCH4) from the radiance spectra measured by the Greenhouse gases Observing SATellite (GOSAT) for 48 months of satellite operation from June 2009. A recent version of the photon path-length probability density function (PPDF)-based algorithm was used to estimate XCO2 and optical path modifications in terms of PPDF parameters. We also present results of numerical simulations for over-land observations and "sharp edge" tests for sun-glint mode to discuss the algorithm's accuracy under conditions of strong optical path modification. For the methane abundance retrieved from the 1.67 µm absorption band, we applied an optical path correction based on PPDF parameters from the 1.6 µm carbon dioxide (CO2) absorption band. Like the CO2-proxy technique, this correction assumes identical light path modifications in the 1.67 µm and 1.6 µm bands. However, the proxy approach needs pre-defined XCO2 values to compute XCH4, whilst the PPDF-based approach does not use prior assumptions on CO2 concentrations. Post-processing data correction for XCO2 and XCH4 over land was performed using a regression matrix based on multivariate analysis of variance (MANOVA). The MANOVA statistics were applied to the GOSAT retrievals using reference collocated measurements from the Total Carbon Column Observing Network (TCCON). The regression matrix was constructed using the parameters found to correlate with GOSAT-TCCON discrepancies: the PPDF parameters α and ρ, which are mainly responsible for shortening and lengthening of the optical path due to atmospheric light scattering; solar and satellite zenith angles; surface pressure; and surface albedo in the three GOSAT short-wave infrared (SWIR) bands. Application of the post-correction generally improves the statistical characteristics of the GOSAT-TCCON correlation diagrams for individual stations as well as for aggregated data. In addition to the analysis of the observations over 12 TCCON stations, we estimated temporal and spatial trends (interannual XCO2 and XCH4 variations, seasonal cycles, latitudinal gradients) and compared them with modeled results as well as with similar estimates from other GOSAT retrievals.
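    The post-correction step (regress satellite-minus-reference residuals on auxiliary parameters, then subtract the fitted bias) can be illustrated with plain least squares on synthetic data. The two predictors below merely stand in for the PPDF, geometry, and albedo parameters, and the MANOVA-based matrix construction is not reproduced.

```python
# Sketch of regression-based post-correction of retrievals against a reference.
import numpy as np

rng = np.random.default_rng(1)
n = 500
alpha = rng.normal(0.0, 0.05, n)            # stand-in for PPDF alpha
albedo = rng.uniform(0.1, 0.4, n)           # stand-in for SWIR albedo
truth = 400.0 + rng.normal(0.0, 0.5, n)     # "TCCON" XCO2, ppm (synthetic)
# Synthetic retrieval: truth plus parameter-dependent bias plus noise.
retrieved = truth + 8.0 * alpha - 3.0 * (albedo - 0.25) + rng.normal(0, 0.1, n)

# Fit the residuals against the predictors, then subtract the predicted bias.
X = np.column_stack([np.ones(n), alpha, albedo])
coef, *_ = np.linalg.lstsq(X, retrieved - truth, rcond=None)
corrected = retrieved - X @ coef

rms_before = np.sqrt(np.mean((retrieved - truth) ** 2))
rms_after = np.sqrt(np.mean((corrected - truth) ** 2))
```

In practice the regression would be trained only on collocations with the reference network and then applied to all soundings; here the same synthetic sample serves both roles for brevity.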

  1. Multimodal Randomized Functional MR Imaging of the Effects of Methylene Blue in the Human Brain

    PubMed Central

    Rodriguez, Pavel; Zhou, Wei; Barrett, Douglas W.; Altmeyer, Wilson; Gutierrez, Juan E.; Li, Jinqi; Lancaster, Jack L.; Gonzalez-Lima, Francisco

    2016-01-01

    Purpose To investigate the sustained-attention and memory-enhancing neural correlates of the oral administration of methylene blue in the healthy human brain. Materials and Methods The institutional review board approved this prospective, HIPAA-compliant, randomized, double-blinded, placebo-controlled clinical trial, and all patients provided informed consent. Twenty-six subjects (age range, 22–62 years) were enrolled. Functional magnetic resonance (MR) imaging was performed with a psychomotor vigilance task (sustained attention) and delayed match-to-sample tasks (short-term memory) before and 1 hour after administration of low-dose methylene blue or a placebo. Cerebrovascular reactivity effects were also measured with the carbon dioxide challenge. A 2 × 2 repeated-measures analysis of variance was performed with drug (methylene blue vs placebo) and time (before vs after administration) as factors to assess drug × time between-group interactions. Multiple-comparison correction was applied, with cluster-corrected P < .05 indicating a significant difference. Results Administration of methylene blue increased response in the bilateral insular cortex during a psychomotor vigilance task (Z = 2.9–3.4, P = .01–.008) and functional MR imaging response during a short-term memory task involving the prefrontal, parietal, and occipital cortex (Z = 2.9–4.2, P = .03–.0003). Methylene blue was also associated with a 7% increase in correct responses during memory retrieval (P = .01). Conclusion Low-dose methylene blue can increase functional MR imaging activity during sustained-attention and short-term memory tasks and enhance memory retrieval. © RSNA, 2016 Online supplemental material is available for this article. PMID:27351678

  2. Dispersion-correcting potentials can significantly improve the bond dissociation enthalpies and noncovalent binding energies predicted by density-functional theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    DiLabio, Gino A., E-mail: Gino.DiLabio@nrc.ca; Department of Chemistry, University of British Columbia, Okanagan, 3333 University Way, Kelowna, British Columbia V1V 1V7; Koleini, Mohammad

    2014-05-14

    Dispersion-correcting potentials (DCPs) are atom-centered Gaussian functions that are applied in a manner that is similar to effective core potentials. Previous work on DCPs has focussed on their use as a simple means of improving the ability of conventional density-functional theory methods to predict the binding energies of noncovalently bonded molecular dimers. We show in this work that DCPs developed for use with the LC-ωPBE functional along with 6-31+G(2d,2p) basis sets are capable of simultaneously improving predicted noncovalent binding energies of van der Waals dimer complexes and covalent bond dissociation enthalpies in molecules. Specifically, the DCPs developed herein for the C, H, N, and O atoms provide binding energies for a set of 66 noncovalently bonded molecular dimers (the “S66” set) with a mean absolute error (MAE) of 0.21 kcal/mol, which represents an improvement of more than a factor of 10 over unadorned LC-ωPBE/6-31+G(2d,2p) and almost a factor of two improvement over LC-ωPBE/6-31+G(2d,2p) used in conjunction with the “D3” pairwise dispersion energy corrections. In addition, the DCPs reduce the MAE of calculated X-H and X-Y (X,Y = C, H, N, O) bond dissociation enthalpies for a set of 40 species from 3.2 kcal/mol obtained with unadorned LC-ωPBE/6-31+G(2d,2p) to 1.6 kcal/mol. Our findings demonstrate that broad improvements to the performance of DFT methods may be achievable through the use of DCPs.

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reppert, Mike; Kell, Adam; Pruitt, Thomas

    The vibrational spectral density is an important physical parameter needed to describe both linear and non-linear spectra of multi-chromophore systems such as photosynthetic complexes. Low-temperature techniques such as hole burning (HB) and fluorescence line narrowing are commonly used to extract the spectral density for a given electronic transition from experimental data. We report here that the lineshape function formula reported by Hayes et al. [J. Phys. Chem. 98, 7337 (1994)] in the mean-phonon approximation and frequently applied to analyzing HB data contains inconsistencies in notation, leading to essentially incorrect expressions in cases of moderate and strong electron-phonon (el-ph) coupling strengths. A corrected lineshape function L(ω) is given that retains the computational and intuitive advantages of the expression of Hayes et al. [J. Phys. Chem. 98, 7337 (1994)]. Although the corrected lineshape function could be used in modeling studies of various optical spectra, we suggest that it is better to calculate the lineshape function numerically, without introducing the mean-phonon approximation. New theoretical fits of the P870 and P960 absorption bands and frequency-dependent resonant HB spectra of Rb. sphaeroides and Rps. viridis reaction centers are provided as examples to demonstrate the importance of correct lineshape expressions. Comparison with the previously determined el-ph coupling parameters [Johnson et al., J. Phys. Chem. 94, 5849 (1990); Lyle et al., ibid. 97, 6924 (1993); Reddy et al., ibid. 97, 6934 (1993)] is also provided. The new fits lead to modified el-ph coupling strengths and different frequencies of the special-pair marker mode, ω_sp, for Rb. sphaeroides that could be used in the future for more advanced calculations of absorption and HB spectra obtained for various bacterial reaction centers.

  4. Determining return water levels at ungauged coastal sites: a case study for northern Germany

    NASA Astrophysics Data System (ADS)

    Arns, Arne; Wahl, Thomas; Haigh, Ivan D.; Jensen, Jürgen

    2015-04-01

    We estimate return periods and levels of extreme still-water levels for the highly vulnerable, historically and culturally important small marsh islands known as the Halligen, located in the Wadden Sea offshore of northern Germany. This is a challenging task, as only a few water-level records are available for this region, and they are currently too short for traditional extreme value analysis methods. We therefore use the Regional Frequency Analysis (RFA) approach. This originates in hydrology but has been used before in several coastal studies and is currently applied by the local federal administration responsible for coastal protection in the study area. The RFA enables us to indirectly estimate return levels by transferring hydrological information from gauged to related ungauged sites. Our analyses highlight that this methodology has some drawbacks and may over- or underestimate return levels compared to direct analyses using station data. To overcome these issues, we present an alternative approach combining numerical and statistical models. First, we produced a numerical multidecadal model hindcast of water levels for the entire North Sea. Predicted water levels from the hindcast are bias-corrected using the information from the available tide gauge records, so that the simulated water levels agree well with the measured water levels at gauged sites. The bias correction is then interpolated spatially to obtain correction functions for the simulated water levels at each coastal and island model grid point in the study area. Using a procedure for extreme value analysis recommended in a companion study, return water levels suitable for coastal infrastructure design are estimated continuously along the entire coastline of the study area, including the offshore islands. A similar methodology can be applied in other regions of the world where tide gauge observations are sparse.
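    The bias-correction-and-interpolation step can be sketched as follows; inverse-distance weighting is an assumed stand-in for whatever spatial interpolation scheme the study actually uses, and all numbers are invented.

```python
# Sketch: compute model-minus-gauge bias at gauged sites, interpolate it to an
# ungauged grid point, and subtract it from the modelled level there.
import numpy as np

gauge_xy = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])  # gauge positions (km)
gauge_obs = np.array([1.90, 2.10, 2.40])        # observed levels, m
model_at_gauges = np.array([2.00, 2.25, 2.60])  # hindcast levels at the gauges, m
bias = model_at_gauges - gauge_obs              # local model bias, to be removed

def idw(xy, sites, values, power=2.0):
    """Inverse-distance-weighted interpolation of site values to point xy."""
    d = np.linalg.norm(sites - xy, axis=1)
    if np.any(d < 1e-9):                        # exactly at a gauge site
        return values[np.argmin(d)]
    w = d ** -power
    return np.sum(w * values) / np.sum(w)

grid_point = np.array([5.0, 5.0])
corrected = 2.3 - idw(grid_point, gauge_xy, bias)  # model value minus local bias
```

At a gauged site the correction reproduces the observed level exactly; between gauges it varies smoothly, which is what makes the corrected hindcast usable for continuous extreme value analysis along the coast.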

  5. 76 FR 59897 - Branded Prescription Drug Fee; Correction

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-09-28

    ... Prescription Drug Fee; Correction AGENCY: Internal Revenue Service (IRS), Treasury. ACTION: Correction to... branded prescription drugs. This fee was enacted by section 9008 of the Patient Protection and Affordable...: This correction is effective on September 28, 2011 and applies to any fee on branded prescription drug...

  6. Iterative Potts and Blake–Zisserman minimization for the recovery of functions with discontinuities from indirect measurements

    PubMed Central

    Weinmann, Andreas; Storath, Martin

    2015-01-01

    Signals with discontinuities appear in many problems in the applied sciences, ranging from mechanics and electrical engineering to biology and medicine. The concrete data acquired are typically discrete, indirect and noisy measurements of some quantities describing the signal under consideration. The task is to restore the signal and, in particular, the discontinuities. In this respect, classical methods perform rather poorly, whereas non-convex, non-smooth variational methods seem to be the correct choice. Examples are methods based on Mumford–Shah and piecewise-constant Mumford–Shah functionals and discretized versions known as Blake–Zisserman and Potts functionals. Owing to their non-convexity, minimization of such functionals is challenging. In this paper, we propose a new iterative minimization strategy for Blake–Zisserman as well as Potts functionals and a related jump-sparsity problem dealing with indirect, noisy measurements. We provide a convergence analysis and underpin our findings with numerical experiments. PMID:27547074
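    For intuition, the direct (not indirect) 1D Potts problem, gamma times the number of jumps plus a squared data term, can be minimized exactly by dynamic programming; such exact 1D solvers are standard building blocks for iterative schemes of the kind proposed. This is a textbook sketch, not the paper's algorithm for indirect measurements.

```python
# Exact 1D Potts minimizer: gamma * (#jumps) + sum (u - y)^2 over piecewise-
# constant u, solved by O(n^2) dynamic programming with prefix sums.
import numpy as np

def potts_1d(y, gamma):
    n = len(y)
    s1 = np.concatenate([[0.0], np.cumsum(y)])       # prefix sums of y
    s2 = np.concatenate([[0.0], np.cumsum(y ** 2)])  # prefix sums of y^2
    best = np.zeros(n + 1)                           # best[i]: optimal cost of y[:i]
    jump = np.zeros(n + 1, dtype=int)                # start index of last segment
    for r in range(1, n + 1):
        best[r] = np.inf
        for l in range(r):                           # candidate segment y[l:r]
            m = (s1[r] - s1[l]) / (r - l)
            err = (s2[r] - s2[l]) - (r - l) * m * m  # SSE of constant fit
            cost = best[l] + err + (gamma if l > 0 else 0.0)
            if cost < best[r]:
                best[r], jump[r] = cost, l
    u = np.empty(n)                                  # backtrack the partition
    r = n
    while r > 0:
        l = jump[r]
        u[l:r] = (s1[r] - s1[l]) / (r - l)
        r = l
    return u

# Noisy step signal: the minimizer recovers the two plateaus and the one jump.
y = np.concatenate([np.full(20, 1.0), np.full(20, 3.0)]) + \
    np.random.default_rng(2).normal(0, 0.1, 40)
u = potts_1d(y, gamma=0.5)
```

The jump penalty gamma trades data fidelity against sparsity of discontinuities: splitting a pure-noise segment never pays once its SSE is below gamma, while a genuine step survives because merging it would cost far more than gamma.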

  7. Application of commercial MOSFET detectors for in vivo dosimetry in the therapeutic x-ray range from 80 kV to 250 kV

    NASA Astrophysics Data System (ADS)

    Ehringfeld, Christian; Schmid, Susanne; Poljanc, Karin; Kirisits, Christian; Aiginger, Hannes; Georg, Dietmar

    2005-01-01

    The purpose of this study was to investigate the dosimetric characteristics (energy dependence, linearity, fading, reproducibility, etc.) of MOSFET detectors for in vivo dosimetry in the kV x-ray range. Experience with MOSFET in vivo dosimetry in a pre-clinical study using the Alderson phantom and in clinical practice is also reported. All measurements were performed with a Gulmay D3300 kV unit and TN-502RDI MOSFET detectors. For the determination of correction factors, different solid phantoms and a calibrated Farmer-type chamber were used. The MOSFET signal was linear with applied dose in the range from 0.2 to 2 Gy for all energies. Due to fading, it is recommended to read the MOSFET signal during the first 15 min after irradiation; for longer intervals between irradiation and readout, the fading can vary considerably between detectors. The temperature dependence of the detector signal was small (0.3% °C⁻¹) in the temperature range between 22 and 40 °C. The variation of the measuring signal with beam incidence amounts to ±5% and should be considered in clinical applications. Finally, for entrance dose measurements, energy-dependent calibration factors and correction factors for field size and irradiated cable length were applied. The overall accuracy of all measurements was dominated by reproducibility as a function of applied dose. During the pre-clinical in vivo study, the agreement between MOSFET and TLD measurements was well within 3%. The results of the MOSFET measurements, for the dosimetric characterization as well as for clinical applications, showed that MOSFET detectors are suitable for in vivo dosimetry in the kV range. However, some energy-dependent dosimetry effects need to be considered and corrected for. Due to reproducibility effects at low dose levels, accurate in vivo measurements are only possible if the applied dose is equal to or larger than 2 Gy.
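    The way such calibration and correction factors combine into an entrance dose can be sketched in a single expression; the numerical factors below are placeholders, not the paper's calibration data.

```python
# Hedged sketch: entrance dose from a MOSFET reading via an energy-dependent
# calibration factor plus field-size and cable-length correction factors.
def entrance_dose(reading_mV, cal_mV_per_Gy, k_energy=1.0, k_field=1.0, k_cable=1.0):
    """Dose (Gy) = reading / calibration, scaled by the correction factors."""
    return reading_mV / cal_mV_per_Gy * k_energy * k_field * k_cable

# Illustrative values: a ~1 Gy irradiation with small corrections applied.
dose = entrance_dose(2700.0, 2650.0, k_energy=0.97, k_field=1.02, k_cable=0.99)
```

Keeping the factors explicit makes it easy to audit which correction dominates the overall uncertainty for a given beam quality and setup.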

  8. Application of commercial MOSFET detectors for in vivo dosimetry in the therapeutic x-ray range from 80 kV to 250 kV.

    PubMed

    Ehringfeld, Christian; Schmid, Susanne; Poljanc, Karin; Kirisits, Christian; Aiginger, Hannes; Georg, Dietmar

    2005-01-21

    The purpose of this study was to investigate the dosimetric characteristics (energy dependence, linearity, fading, reproducibility, etc.) of MOSFET detectors for in vivo dosimetry in the kV x-ray range. The experience of MOSFET in vivo dosimetry in a pre-clinical study using the Alderson phantom and in clinical practice is also reported. All measurements were performed with a Gulmay D3300 kV unit and TN-502RDI MOSFET detectors. For the determination of correction factors different solid phantoms and a calibrated Farmer-type chamber were used. The MOSFET signal was linear with applied dose in the range from 0.2 to 2 Gy for all energies. Due to fading it is recommended to read the MOSFET signal during the first 15 min after irradiation. For long time intervals between irradiation and readout the fading can vary largely with the detector. The temperature dependence of the detector signal was small (0.3% °C⁻¹) in the temperature range between 22 and 40 °C. The variation of the measuring signal with beam incidence amounts to ±5% and should be considered in clinical applications. Finally, for entrance dose measurements energy-dependent calibration factors, correction factors for field size and irradiated cable length were applied. The overall accuracy, for all measurements, was dominated by reproducibility as a function of applied dose. During the pre-clinical in vivo study, the agreement between MOSFET and TLD measurements was well within 3%. The results of MOSFET measurements, to determine the dosimetric characteristics as well as clinical applications, showed that MOSFET detectors are suitable for in vivo dosimetry in the kV range. However, some energy-dependent dosimetry effects need to be considered and corrected for. Due to reproducibility effects at low dose levels accurate in vivo measurements are only possible if the applied dose is equal to or larger than 2 Gy.

  9. Intra-operative measurement of applied forces during anterior scoliosis correction.

    PubMed

    Fairhurst, H; Little, J P; Adam, C J

    2016-12-01

    Spinal instrumentation and fusion for the treatment of scoliosis is primarily a mechanical intervention to correct the deformity and halt further progression. While implant-related complications remain a concern, little is known about the magnitudes of the forces applied to the spine during surgery, which may affect post-surgical outcomes. In this study, the compressive forces applied to each spinal segment during anterior instrumentation were measured in a series of patients with Adolescent Idiopathic Scoliosis. A force transducer was designed and retrofitted to a routinely used surgical tool, and the compressive forces applied to each segment during surgery were measured for 15 scoliosis patients. The Cobb angle correction achieved by each force was measured on intra-operative fluoroscope images. Relative changes in the orientation of the screw within the vertebra were also measured to detect intra-operative screw plough. Intra-operative forces were measured for a total of 95 spinal segments. The mean applied compressive force was 540 N (SD 230 N, range 88-1019 N). There was a clear trend for higher forces to be applied at segments toward the apex of the scoliosis. Fluoroscopic evidence of screw plough was detected at 10 segments (10.5%). The magnitudes of the forces applied during anterior scoliosis correction vary over a broad range and do reach levels capable of causing intra-operative vertebral body screw plough. Surgeons should be aware that there is a risk of tissue overload during correction; however, the clinical implications of intra-operative screw plough remain unclear. The dataset presented here is valuable for providing realistic input parameters for in silico surgical simulations. Copyright © 2016 Elsevier Ltd. All rights reserved.

  10. 77 FR 1941 - Statement of Organization, Functions, and Delegations of Authority; Correction

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-01-12

    ... DEPARTMENT OF HEALTH AND HUMAN SERVICES National Institutes of Health Statement of Organization, Functions, and Delegations of Authority; Correction Correction In the Federal Register of January 6, 2012... of Health Statement of Organization, Functions, and Delegations of Authority. On page 797, in the...

  11. Correcting the vertical component of ocean bottom seismometers for the effects of tilt and compliance

    NASA Astrophysics Data System (ADS)

    Bell, S. W.; Forsyth, D. W.

    2013-12-01

    Typically there are very high noise levels at long periods on the horizontal components of ocean bottom seismographs due to the turbulent interaction of bottom currents with the seismometer package on the seafloor. When there is a slight tilt of the instrument, some of the horizontal displacement caused by bottom currents leaks onto the vertical component record, which can severely increase the apparent vertical noise. Another major type of noise, compliance noise, is created when pressure variations associated with water (gravity) waves deform the seabed. Compliance noise increases with decreasing water depth, and at water depths of less than a few hundred meters, compliance noise typically obscures most earthquake signals. Following Crawford and Webb (2000), we have developed a methodology for reducing these noise sources by 1-2 orders of magnitude, revealing many events that could not be distinguished before noise reduction. Our methodology relies on transfer functions between different channels. We calculate the compliance noise in the vertical displacement record by applying a transfer function to the differential pressure gauge record. Similarly, we calculate the tilt-induced bottom current noise in the vertical displacement record by applying a transfer function to the horizontal displacement records. Using data from the Cascadia experiment and other experiments, we calculate these transfer functions at a range of stations with varying tilts and water depths. The compliance noise transfer function depends strongly on water depth, and we provide a theoretical and empirical description of this dependence. Tilt noise appears to be very highly correlated with instrument design, with negligible tilt noise observed for the 'abalone' instruments from the Scripps Institution of Oceanography and significant tilt observed for the Woods Hole Oceanographic Institution instruments in the first year deployment of the Cascadia experiment.
Tilt orientation appears relatively constant, but we observe significant day-to-day variation in tilt angle, requiring the calculation of a tilt transfer function for each individual day for optimum removal of bottom current noise. In removing the compliance noise, there is some distortion of the signal. We show how to correct for this distortion using theoretical and empirical transfer functions between pressure and displacement records for seismic signals.
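    The core transfer-function idea can be sketched in the frequency domain: estimate the transfer function from a reference channel (horizontal displacement, or pressure) to the vertical channel from averaged cross- and auto-spectra, then subtract the predicted coherent part. The single-reference synthetic example below is a simplification of the actual multi-channel, windowed, day-by-day processing; the leakage coefficient and signal level are invented.

```python
# Sketch: remove the part of the vertical channel coherent with a reference
# channel, using a cross-spectral transfer-function estimate over windows.
import numpy as np

rng = np.random.default_rng(3)
n, nwin = 4096, 16
h = rng.normal(0.0, 1.0, (nwin, n))               # reference (e.g. horizontal)
signal = 0.05 * rng.normal(0.0, 1.0, (nwin, n))   # small "earthquake" signal
z = 0.8 * h + signal                              # vertical: leaked noise + signal

H = np.fft.rfft(h, axis=1)
Z = np.fft.rfft(z, axis=1)
# Transfer function per frequency from window-averaged cross/auto spectra.
T = np.mean(Z * np.conj(H), axis=0) / np.mean(np.abs(H) ** 2, axis=0)
Z_clean = Z - T[None, :] * H                      # subtract predicted coherent part
z_clean = np.fft.irfft(Z_clean, n=n, axis=1)

rms_before = np.sqrt(np.mean(z ** 2))
rms_after = np.sqrt(np.mean(z_clean ** 2))
```

Averaging the spectra over many windows is what keeps the transfer-function estimate from fitting (and thus removing) the incoherent earthquake signal along with the current-induced noise.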

  12. On the shape of things: From holography to elastica

    NASA Astrophysics Data System (ADS)

    Fonda, Piermarco; Jejjala, Vishnu; Veliz-Osorio, Alvaro

    2017-10-01

    We explore the question of which shape a manifold is compelled to take when immersed in another one, provided it must be the extremum of some functional. We consider a family of functionals which depend quadratically on the extrinsic curvatures and on projections of the ambient curvatures. These functionals capture a number of physical setups ranging from holography to the study of membranes and elastica. We present a detailed derivation of the equations of motion, known as the shape equations, placing particular emphasis on the issue of gauge freedom in the choice of normal frame. We apply these equations to the particular case of holographic entanglement entropy for higher curvature three dimensional gravity and find new classes of entangling curves. In particular, we discuss the case of New Massive Gravity, where we show that non-geodesic entangling curves always have a smaller on-shell value of the entropy functional. We then apply this formalism to the computation of the entanglement entropy for dual logarithmic CFTs; nevertheless, the correct value for the entanglement entropy is provided by geodesics. Finally, we discuss the importance of these equations in the context of classical elastica and comment on terms that break gauge invariance.

  13. Accuracy Assessment and Correction of Vaisala RS92 Radiosonde Water Vapor Measurements

    NASA Technical Reports Server (NTRS)

    Whiteman, David N.; Miloshevich, Larry M.; Vomel, Holger; Leblanc, Thierry

    2008-01-01

    Relative humidity (RH) measurements from Vaisala RS92 radiosondes are widely used in both research and operational applications, although the measurement accuracy is not well characterized as a function of its known dependences on height, RH, and time of day (or solar altitude angle). This study characterizes RS92 mean bias error as a function of its dependences by comparing simultaneous measurements from RS92 radiosondes and from three reference instruments of known accuracy. The cryogenic frostpoint hygrometer (CFH) gives the RS92 accuracy above the 700 mb level; the ARM microwave radiometer gives the RS92 accuracy in the lower troposphere; and the ARM SurTHref system gives the RS92 accuracy at the surface using 6 RH probes with NIST-traceable calibrations. These RS92 assessments are combined using the principle of Consensus Referencing to yield a detailed estimate of RS92 accuracy from the surface to the lowermost stratosphere. An empirical bias correction is derived to remove the mean bias error, yielding corrected RS92 measurements whose mean accuracy is estimated to be +/-3% of the measured RH value for nighttime soundings and +/-4% for daytime soundings, plus an RH offset uncertainty of +/-0.5%RH that is significant for dry conditions. The accuracy of individual RS92 soundings is further characterized by the 1-sigma "production variability," estimated to be +/-1.5% of the measured RH value. The daytime bias correction should not be applied to cloudy daytime soundings, because clouds affect the solar radiation error in a complicated and uncharacterized way.
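
    The form of such an empirical bias correction can be sketched as below. Every coefficient here is invented for illustration only; the published correction is an empirical function of height, RH, and solar altitude derived from the CFH, microwave-radiometer, and surface-reference comparisons, not this toy model.

    ```python
    def correct_rs92_rh(rh_measured, pressure_hpa, daytime):
        """Illustrative RS92-style mean-bias removal: divide the measured
        RH by a bias factor that grows with altitude (lower pressure) and
        is larger for daytime (solar-heated) soundings.

        Coefficients are hypothetical placeholders, not the published fit."""
        # assumed dry bias: none at the surface, growing aloft
        bias = 1.0 - 0.00005 * (1013.0 - pressure_hpa)
        if daytime:
            bias -= 0.02  # assumed extra daytime solar-radiation dry bias
        return rh_measured / bias
    ```

    Consistent with the abstract, a correction of this shape should not be applied to cloudy daytime soundings, where the solar radiation error is uncharacterized.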

  14. Inverted Nipple Correction with Selective Dissection of Lactiferous Ducts Using an Operative Microscope and a Traction Technique.

    PubMed

    Sowa, Yoshihiro; Itsukage, Sizu; Morita, Daiki; Numajiri, Toshiaki

    2017-10-01

    An inverted nipple is a common congenital condition in young women that may cause breastfeeding difficulty, psychological distress, repeated inflammation, and loss of sensation. Various surgical techniques have been reported for correction of inverted nipples, and all have advantages and disadvantages. Here, we report a new technique for correction of an inverted nipple using an operative microscope and traction that results in low recurrence and preserves lactation function and sensation. Between January 2010 and January 2013, we treated eight inverted nipples in seven patients with selective lactiferous duct dissection using an operative microscope. An opposite Z-plasty was added at the junction of the nipple and areola. Postoperatively, traction was applied through an apparatus made from a rubber gasket attached to a sterile syringe. Patients were followed up for 15-48 months. Adequate projection was achieved in all patients, and there was no wound dehiscence or complications such as infection. Three patients had successful pregnancies and subsequent breastfeeding that was not adversely affected by the treatment. There was no loss of sensation in any patient during the postoperative period. Our technique for treating an inverted nipple is effective and preserves lactation function and nipple sensation. The method maintains traction for a longer period, which we believe increases the success rate of the surgery for correction of severely inverted nipples.

  15. Beyond Poisson-Boltzmann: Fluctuation effects and correlation functions

    NASA Astrophysics Data System (ADS)

    Netz, R. R.; Orland, H.

    2000-02-01

    We formulate the exact non-linear field theory for a fluctuating counter-ion distribution in the presence of a fixed, arbitrary charge distribution. The Poisson-Boltzmann equation is obtained as the saddle-point of the field-theoretic action, and the effects of counter-ion fluctuations are included by a loop-wise expansion around this saddle point. The Poisson equation is obeyed at each order in this loop expansion. We explicitly give the expansion of the Gibbs potential up to two loops. We then apply our field-theoretic formalism to the case of a single impenetrable wall with counter ions only (in the absence of salt ions). We obtain the fluctuation corrections to the electrostatic potential and the counter-ion density to one-loop order without further approximations. The relative importance of fluctuation corrections is controlled by a single parameter, which is proportional to the cube of the counter-ion valency and to the surface charge density. The effective interactions and correlation functions between charged particles close to the charged wall are obtained on the one-loop level.
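
    The structure described can be summarized schematically. Conventions assumed here (they may differ from the paper's by prefactors): ℓ_B is the Bjerrum length, q the counter-ion valency, λ the counter-ion fugacity, σ_s the surface charge density.

    ```latex
    % Saddle point of the field-theoretic action: the Poisson-Boltzmann
    % equation for counter-ions in the presence of fixed charges \rho_{fix}
    \nabla^{2}\psi(\mathbf{r})
      = -4\pi \ell_B \left[\, q\,\lambda\, e^{-q\,\psi(\mathbf{r})}
        + \rho_{\mathrm{fix}}(\mathbf{r}) \,\right]

    % Loop corrections are controlled by a single coupling parameter that
    % grows with the cube of the valency and linearly with the surface
    % charge density (commonly written as)
    \Xi = 2\pi\, q^{3}\, \ell_B^{2}\, \sigma_s
    ```

    For Ξ ≪ 1 the one-loop fluctuation corrections to the potential and counter-ion density are small perturbations around the Poisson-Boltzmann profile; for large Ξ the loop expansion breaks down.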

  16. An Update on Oxidative Damage to Spermatozoa and Oocytes.

    PubMed

    Opuwari, Chinyerum S; Henkel, Ralf R

    2016-01-01

    On the one hand, reactive oxygen species (ROS) are mandatory mediators for essential cellular functions including the function of germ cells (oocytes and spermatozoa) and thereby the fertilization process. However, the exposure of these cells to excessive levels of oxidative stress by too high levels of ROS or too low levels of antioxidative protection will render these cells dysfunctional thereby failing the fertilization process and causing couples to be infertile. Numerous causes are responsible for the delicate bodily redox system being out of balance and causing disease and infertility. Many of these causes are modifiable such as lifestyle factors like obesity, poor nutrition, heat stress, smoking, or alcohol abuse. Possible correctable measures include foremost lifestyle changes, but also supplementation with antioxidants to scavenge excessive ROS. However, this should only be done after careful examination of the patient and establishment of the individual bodily antioxidant needs. In addition, other corrective measures include sperm separation for assisted reproductive techniques. However, these techniques have to be carried out very carefully as they, if applied wrongly, bear risks of generating ROS damaging the germ cells and preventing fertilization.

  17. An Update on Oxidative Damage to Spermatozoa and Oocytes

    PubMed Central

    Opuwari, Chinyerum S.; Henkel, Ralf R.

    2016-01-01

    On the one hand, reactive oxygen species (ROS) are mandatory mediators for essential cellular functions including the function of germ cells (oocytes and spermatozoa) and thereby the fertilization process. However, the exposure of these cells to excessive levels of oxidative stress by too high levels of ROS or too low levels of antioxidative protection will render these cells dysfunctional thereby failing the fertilization process and causing couples to be infertile. Numerous causes are responsible for the delicate bodily redox system being out of balance and causing disease and infertility. Many of these causes are modifiable such as lifestyle factors like obesity, poor nutrition, heat stress, smoking, or alcohol abuse. Possible correctable measures include foremost lifestyle changes, but also supplementation with antioxidants to scavenge excessive ROS. However, this should only be done after careful examination of the patient and establishment of the individual bodily antioxidant needs. In addition, other corrective measures include sperm separation for assisted reproductive techniques. However, these techniques have to be carried out very carefully as they, if applied wrongly, bear risks of generating ROS damaging the germ cells and preventing fertilization. PMID:26942204

  18. Transfrontal orbitotomy in the dog: an adaptable three-step approach to the orbit.

    PubMed

    Håkansson, Nils Wallin; Håkansson, Berit Wallin

    2010-11-01

    To describe an adaptable and extensive method for orbitotomy in the dog. An adaptable three-step technique for orbitotomy was developed and applied in nine consecutive cases. The steps are zygomatic arch resection laterally, temporalis muscle elevation medially and zygomatic process osteotomy anteriorly-dorsally. The entire orbit is accessed with excellent exposure and room for surgical manipulation. Facial nerve, lacrimal nerve and lacrimal gland function are preserved. The procedure can easily be converted into an orbital exenteration. Exposure of the orbit was excellent in all cases and anatomically correct closure was achieved. Signs of postoperative discomfort were limited, with moderate, reversible swelling in two cases and mild in seven. Wound infection or emphysema did not occur, nor did any other complication attributable to the operative procedure. Blinking ability and lacrimal function were preserved over follow-up times ranging from 1 to 4 years. Transfrontal orbitotomy in the dog offers excellent exposure and room for manipulation. Anatomically correct closure is easily accomplished, postoperative discomfort is limited and complications are mild and temporary. © 2010 American College of Veterinary Ophthalmologists.

  19. Comparing bias correction methods in downscaling meteorological variables for a hydrologic impact study in an arid area in China

    NASA Astrophysics Data System (ADS)

    Fang, G. H.; Yang, J.; Chen, Y. N.; Zammit, C.

    2015-06-01

    Water resources are essential to the ecosystem and social economy in the desert and oasis of the arid Tarim River basin, northwestern China, and expected to be vulnerable to climate change. It has been demonstrated that regional climate models (RCMs) provide more reliable results for a regional impact study of climate change (e.g., on water resources) than general circulation models (GCMs). However, due to their considerable bias it is still necessary to apply bias correction before they are used for water resources research. In this paper, after a sensitivity analysis on input meteorological variables based on the Sobol' method, we compared five precipitation correction methods and three temperature correction methods in downscaling RCM simulations applied over the Kaidu River basin, one of the headwaters of the Tarim River basin. Precipitation correction methods applied include linear scaling (LS), local intensity scaling (LOCI), power transformation (PT), distribution mapping (DM) and quantile mapping (QM), while temperature correction methods are LS, variance scaling (VARI) and DM. The corrected precipitation and temperature were compared to the observed meteorological data, prior to being used as meteorological inputs of a distributed hydrologic model to study their impacts on streamflow. 
The results show (1) streamflows are sensitive to precipitation, temperature and solar radiation but not to relative humidity and wind speed; (2) raw RCM simulations are heavily biased from observed meteorological data, and their use for streamflow simulations results in large biases from observed streamflow, and all bias correction methods effectively improved these simulations; (3) for precipitation, PT and QM methods performed equally well in correcting the frequency-based indices (e.g., standard deviation, percentile values) while the LOCI method performed best in terms of the time-series-based indices (e.g., Nash-Sutcliffe coefficient, R2); (4) for temperature, all correction methods performed equally well in correcting raw temperature; and (5) for simulated streamflow, precipitation correction methods have more significant influence than temperature correction methods and the performances of streamflow simulations are consistent with those of corrected precipitation; i.e., the PT and QM methods performed equally well in correcting the flow duration curve and peak flow while the LOCI method performed best in terms of the time-series-based indices. The case study is for an arid area in China based on a specific RCM and hydrologic model, but the methodology and some results can be applied to other areas and models.
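
Of the precipitation correction methods compared, empirical quantile mapping (QM) can be sketched as follows: each raw RCM value is replaced by the observed value with the same non-exceedance probability in the calibration period. The number of quantile knots is an implementation choice, not taken from the paper.

```python
import numpy as np

def quantile_map(raw, obs_ref, raw_ref):
    """Empirical quantile mapping: map each raw value through the
    calibration-period quantile curves of the raw and observed data."""
    q = np.linspace(0.0, 1.0, 101)          # quantile knots (assumed)
    raw_q = np.quantile(raw_ref, q)         # raw calibration quantiles
    obs_q = np.quantile(obs_ref, q)         # observed quantiles
    # piecewise-linear transfer function raw -> observed distribution
    return np.interp(raw, raw_q, obs_q)

def linear_scale(raw, obs_ref, raw_ref):
    """Linear scaling (LS): match the calibration-period mean only."""
    return raw * np.mean(obs_ref) / np.mean(raw_ref)
```

LS corrects only the mean, while QM corrects the whole distribution, which is why QM (and PT) do better on frequency-based indices such as standard deviation and percentile values.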

  20. Slow-roll corrections in multi-field inflation: a separate universes approach

    NASA Astrophysics Data System (ADS)

    Karčiauskas, Mindaugas; Kohri, Kazunori; Mori, Taro; White, Jonathan

    2018-05-01

    In view of cosmological parameters being measured to ever higher precision, theoretical predictions must also be computed to an equally high level of precision. In this work we investigate the impact on such predictions of relaxing some of the simplifying assumptions often used in these computations. In particular, we investigate the importance of slow-roll corrections in the computation of multi-field inflation observables, such as the amplitude of the scalar spectrum Pζ, its spectral tilt ns, the tensor-to-scalar ratio r and the non-Gaussianity parameter fNL. To this end we use the separate universes approach and δ N formalism, which allows us to consider slow-roll corrections to the non-Gaussianity of the primordial curvature perturbation as well as corrections to its two-point statistics. In the context of the δ N expansion, we divide slow-roll corrections into two categories: those associated with calculating the correlation functions of the field perturbations on the initial flat hypersurface and those associated with determining the derivatives of the e-folding number with respect to the field values on the initial flat hypersurface. Using the results of Nakamura & Stewart '96, corrections of the first kind can be written in a compact form. Corrections of the second kind arise from using different levels of slow-roll approximation in solving for the super-horizon evolution, which in turn corresponds to using different levels of slow-roll approximation in the background equations of motion. We consider four different levels of approximation and apply the results to a few example models. The various approximations are also compared to exact numerical solutions.
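
    For reference, the leading-order δN expressions being corrected are the standard ones below (a star denotes horizon exit; the field perturbations are taken as uncorrelated with amplitude H/2π, which is precisely the approximation the slow-roll corrections of Nakamura & Stewart refine):

    ```latex
    % \delta N expansion of the curvature perturbation
    \zeta \simeq \sum_I N_I\,\delta\phi^I
          + \frac{1}{2}\sum_{I,J} N_{IJ}\,\delta\phi^I\delta\phi^J,
    \qquad
    N_I \equiv \frac{\partial N}{\partial \phi^I},\quad
    N_{IJ} \equiv \frac{\partial^2 N}{\partial \phi^I \partial \phi^J}

    % Leading-order spectrum and local non-Gaussianity
    P_\zeta \simeq \left(\frac{H_*}{2\pi}\right)^{\!2}\sum_I N_I N_I,
    \qquad
    f_{NL} \simeq \frac{5}{6}\,
      \frac{\sum_{I,J} N_I N_J N_{IJ}}{\bigl(\sum_K N_K N_K\bigr)^{2}}
    ```

    The paper's two categories of corrections enter here separately: the first modifies the correlators of δφ on the initial flat hypersurface, the second modifies the derivatives N_I and N_IJ through the level of slow-roll approximation used for the super-horizon evolution.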

  1. A geometrical correction for the inter- and intra-molecular basis set superposition error in Hartree-Fock and density functional theory calculations for large systems

    NASA Astrophysics Data System (ADS)

    Kruse, Holger; Grimme, Stefan

    2012-04-01

    A semi-empirical counterpoise-type correction for basis set superposition error (BSSE) in molecular systems is presented. An atom pair-wise potential corrects for the inter- and intra-molecular BSSE in supermolecular Hartree-Fock (HF) or density functional theory (DFT) calculations. This scheme, denoted geometrical counterpoise (gCP), depends only on the molecular geometry, i.e., no input from the electronic wave-function is required, and hence it is applicable to molecules with tens of thousands of atoms. The four necessary parameters have been determined by a fit to standard Boys and Bernardi counterpoise corrections for Hobza's S66×8 set of non-covalently bound complexes (528 data points). The method's targets are small basis sets (e.g., minimal, split-valence, 6-31G*), but reliable results are also obtained for larger triple-ζ sets. The intermolecular BSSE is calculated by gCP within a typical error of 10%-30% that proves sufficient in many practical applications. The approach is suggested as a quantitative correction in production work and can also be routinely applied to estimate the magnitude of the BSSE beforehand. The applicability for biomolecules as the primary target is tested for the crambin protein, where gCP removes intramolecular BSSE effectively and yields conformational energies comparable to def2-TZVP basis results. Good mutual agreement is also found with Jensen's ACP(4) scheme, estimating the intramolecular BSSE in the phenylalanine-glycine-phenylalanine tripeptide, for which a relaxed rotational energy profile is also presented. A variety of minimal and double-ζ basis sets combined with gCP and the dispersion corrections DFT-D3 and DFT-NL are successfully benchmarked on the S22 and S66 sets of non-covalent interactions. Outstanding performance with a mean absolute deviation (MAD) of 0.51 kcal/mol (0.38 kcal/mol after D3-refit) is obtained at the gCP-corrected HF-D3/(minimal basis) level for the S66 benchmark. 
The gCP-corrected B3LYP-D3/6-31G* model chemistry yields MAD=0.68 kcal/mol, which represents a huge improvement over plain B3LYP/6-31G* (MAD=2.3 kcal/mol). Application of gCP-corrected B97-D3 and HF-D3 to a set of large protein-ligand complexes proves the robustness of the method. Analytical gCP gradients make optimizations of large systems feasible with small basis sets, as demonstrated for the inter-ring distances of 9-helicene and most of the complexes in Hobza's S22 test set. The method is implemented in a freely available FORTRAN program obtainable from the authors' website.
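
A geometry-only pair-wise correction of this general shape can be sketched as below. This is a deliberately simplified caricature: the parameter values and per-element "missing basis" energies are invented, and the real gCP scheme additionally scales each pair term by a Slater-overlap/virtual-orbital factor that is omitted here.

```python
import numpy as np

# Invented parameters; the real scheme fits four global parameters per
# basis set against Boys-Bernardi counterpoise reference data.
SIGMA, ALPHA, BETA = 0.5, 0.7, 1.3
EMISS = {"H": 0.01, "C": 0.05, "N": 0.06, "O": 0.08}  # hypothetical
# per-atom basis-incompleteness energies (hartree)

def gcp_energy(symbols, coords):
    """Geometry-only counterpoise-type BSSE estimate: each atom's
    incompleteness energy contributes through an exponentially decaying
    function of its distances to all other atoms."""
    coords = np.asarray(coords, float)
    e = 0.0
    for a, sym_a in enumerate(symbols):
        for b in range(len(symbols)):
            if a == b:
                continue
            r = np.linalg.norm(coords[a] - coords[b])
            e += EMISS[sym_a] * np.exp(-ALPHA * r**BETA)
    return SIGMA * e
```

Because the correction depends only on coordinates, its analytical gradient is cheap, which is what makes small-basis geometry optimizations of large systems practical.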

  2. A diffraction correction for storage and loss moduli imaging using radiation force based elastography.

    PubMed

    Budelli, Eliana; Brum, Javier; Bernal, Miguel; Deffieux, Thomas; Tanter, Mickaël; Lema, Patricia; Negreira, Carlos; Gennisson, Jean-Luc

    2017-01-07

    Noninvasive evaluation of the rheological behavior of soft tissues may provide an important diagnosis tool. Nowadays, available commercial ultrasound systems only provide shear elasticity estimation by shear wave speed assessment under the hypothesis of a purely elastic model. However, to fully characterize the rheological behavior of tissues, given by its storage (G') and loss (G″) moduli, it is necessary to estimate both: shear wave speed and shear wave attenuation. Most elastography techniques use the acoustic radiation force to generate shear waves. For this type of source the shear waves are not plane and a diffraction correction is needed to properly estimate the shear wave attenuation. The use of a cylindrical wave approximation to evaluate diffraction has been proposed by other authors before. Here the validity of such approximation is numerically and experimentally revisited. Then, it is used to generate images of G' and G″ in heterogeneous viscoelastic mediums. A simulation algorithm based on the anisotropic and viscoelastic Green's function was used to establish the validity of the cylindrical approximation. Moreover, two experiments were carried out: a transient elastography experiment where plane shear waves were generated using a vibrating plate and a SSI experiment that uses the acoustic radiation force to generate shear waves. For both experiments the shear wave propagation was followed with an ultrafast ultrasound scanner. Then, the shear wave velocity and shear wave attenuation were recovered from the phase and amplitude decay versus distance respectively. In the SSI experiment the cylindrical approximation was applied to correct attenuation due to diffraction effects. The numerical and experimental results validate the use of a cylindrical correction to assess shear wave attenuation. 
Finally, by applying the cylindrical correction G' and G″ images were generated in heterogeneous phantoms and a preliminary in vivo feasibility study was carried out in the human liver.
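
The cylindrical diffraction correction amounts to removing a 1/sqrt(r) geometric spreading before fitting the exponential decay. A minimal sketch of that attenuation estimate (fitting procedure assumed, not taken from the paper):

```python
import numpy as np

def shear_attenuation(r, amplitude):
    """Estimate the shear-wave attenuation coefficient alpha (1/m) from
    amplitude decay versus propagation distance r (m), assuming the
    cylindrical-wave model A(r) = A0 * exp(-alpha * r) / sqrt(r)."""
    corrected = amplitude * np.sqrt(r)   # remove geometric spreading
    # linear fit of ln(corrected) vs r: slope = -alpha
    slope, _ = np.polyfit(r, np.log(corrected), 1)
    return -slope
```

Skipping the sqrt(r) correction would fold the diffraction decay into alpha and bias the loss modulus G'' high, which is the error the paper quantifies for radiation-force sources.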

  3. Posterior multilevel vertebral osteotomy for correction of severe and rigid neuromuscular scoliosis: a preliminary study.

    PubMed

    Suh, Seung Woo; Modi, Hitesh N; Yang, Jaehyuk; Song, Hae-Ryong; Jang, Ki-Mo

    2009-05-20

    Prospective study. To determine the effectiveness and correction with posterior multilevel vertebral osteotomy in severe and rigid curves without anterior release. For the correction of severe and rigid scoliotic curve, anterior-posterior combined or posterior vertebral column resection (PVCR) procedures are used. Anterior procedure might compromise pulmonary functions, and PVCR might carry risk of neurologic injuries. Therefore, authors developed a new technique, which reduces both. Thirteen neuromuscular patients (7 cerebral palsy, 2 Duchenne muscular dystrophy, and 4 spinal muscular atrophy) who had rigid curve >100 degrees were prospectively selected. All were operated with posterior-only approach using pedicle screw construct. To achieve desired correction, posterior multilevel vertebral osteotomies were performed at 3 to 5 levels (apex, and 1-2 levels above and below apex) through partial laminotomy sites connecting from concave to convex side, just above the pedicle; and repeated cantilever manipulation was applied over temporary short-segment fixation, above and below the apex, on convex side. On concave side, rod was assembled with screws and rod-derotation maneuver was performed. Finally, short-segment fixation on convex side was replaced with full-length construct. Intraoperative MEP monitoring was applied in all. Mean age was 21 years and average follow-up was 25 months. Average preoperative flexibility was 20.3% (24.1 degrees). Average Cobb's angle, pelvic obliquity, and apical rotation were 118.2 degrees, 16.7 degrees, and 57 degrees preoperatively, respectively, and 48.8 degrees, 8 degrees, and 43 degrees after surgery showing significant correction of 59.4%, 46.1%, and 24.5%. Average number of osteotomy level was 4.2 and average blood loss was 3356 +/- 884 mL. Mean operation time was 330 +/- 46 minutes. 
None of the patients required postoperative ventilator support or displayed any signs of neurologic or vascular injuries during or after the operation. This technique should be recommended because (1) it provides release of the anterior column without an anterior approach and (2) our results support its superiority as a technique.

  4. A diffraction correction for storage and loss moduli imaging using radiation force based elastography

    NASA Astrophysics Data System (ADS)

    Budelli, Eliana; Brum, Javier; Bernal, Miguel; Deffieux, Thomas; Tanter, Mickaël; Lema, Patricia; Negreira, Carlos; Gennisson, Jean-Luc

    2017-01-01

    Noninvasive evaluation of the rheological behavior of soft tissues may provide an important diagnosis tool. Nowadays, available commercial ultrasound systems only provide shear elasticity estimation by shear wave speed assessment under the hypothesis of a purely elastic model. However, to fully characterize the rheological behavior of tissues, given by its storage (G‧) and loss (G″) moduli, it is necessary to estimate both: shear wave speed and shear wave attenuation. Most elastography techniques use the acoustic radiation force to generate shear waves. For this type of source the shear waves are not plane and a diffraction correction is needed to properly estimate the shear wave attenuation. The use of a cylindrical wave approximation to evaluate diffraction has been proposed by other authors before. Here the validity of such approximation is numerically and experimentally revisited. Then, it is used to generate images of G‧ and G″ in heterogeneous viscoelastic mediums. A simulation algorithm based on the anisotropic and viscoelastic Green’s function was used to establish the validity of the cylindrical approximation. Moreover, two experiments were carried out: a transient elastography experiment where plane shear waves were generated using a vibrating plate and a SSI experiment that uses the acoustic radiation force to generate shear waves. For both experiments the shear wave propagation was followed with an ultrafast ultrasound scanner. Then, the shear wave velocity and shear wave attenuation were recovered from the phase and amplitude decay versus distance respectively. In the SSI experiment the cylindrical approximation was applied to correct attenuation due to diffraction effects. The numerical and experimental results validate the use of a cylindrical correction to assess shear wave attenuation. 
Finally, by applying the cylindrical correction G‧ and G″ images were generated in heterogeneous phantoms and a preliminary in vivo feasibility study was carried out in the human liver.

  5. Correcting length-frequency distributions for imperfect detection

    USGS Publications Warehouse

    Breton, André R.; Hawkins, John A.; Winkelman, Dana L.

    2013-01-01

    Sampling gear selects for specific sizes of fish, which may bias length-frequency distributions that are commonly used to assess population size structure, recruitment patterns, growth, and survival. To properly correct for sampling biases caused by gear and other sources, length-frequency distributions need to be corrected for imperfect detection. We describe a method for adjusting length-frequency distributions when capture and recapture probabilities are a function of fish length, temporal variation, and capture history. The method is applied to a study involving the removal of Smallmouth Bass Micropterus dolomieu by boat electrofishing from a 38.6-km reach on the Yampa River, Colorado. Smallmouth Bass longer than 100 mm were marked and released alive from 2005 to 2010 on one or more electrofishing passes and removed on all other passes from the population. Using the Huggins mark–recapture model, we detected a significant effect of fish total length, previous capture history (behavior), year, pass, year×behavior, and year×pass on capture and recapture probabilities. We demonstrate how to partition the Huggins estimate of abundance into length frequencies to correct for these effects. Uncorrected length frequencies of fish removed from Little Yampa Canyon were negatively biased in every year by as much as 88% relative to mark–recapture estimates for the smallest length-class in our analysis (100–110 mm). Bias declined but remained high even for adult length-classes (≥200 mm). The pattern of bias across length-classes was variable across years. The percentage of unadjusted counts that were below the lower 95% confidence interval from our adjusted length-frequency estimates were 95, 89, 84, 78, 81, and 92% from 2005 to 2010, respectively. Length-frequency distributions are widely used in fisheries science and management. 
Our simple method for correcting length-frequency estimates for imperfect detection could be widely applied when mark–recapture data are available.
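
The core of such an adjustment is Horvitz-Thompson-style weighting: each captured fish represents 1/p(length) fish in the population. The sketch below shows only that idea; the paper instead partitions a Huggins mark-recapture abundance estimate, and the bin edges and p(length) model here are illustrative.

```python
import numpy as np

def corrected_length_frequency(lengths, capture_prob):
    """Correct a length-frequency distribution for imperfect,
    length-dependent detection by summing 1/p(length) within each
    length class instead of raw counts."""
    lengths = np.asarray(lengths, float)
    bins = np.arange(100, 310, 10)            # 10-mm classes, 100-300 mm
    p = capture_prob(lengths)                  # capture probability per fish
    counts, _ = np.histogram(lengths, bins=bins)
    adjusted, _ = np.histogram(lengths, bins=bins, weights=1.0 / p)
    return bins, counts, adjusted
```

With electrofishing gear that detects small fish poorly, the adjustment inflates the smallest classes the most, matching the up-to-88% negative bias reported for the 100-110 mm class.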

  6. Analytical linear energy transfer model including secondary particles: calculations along the central axis of the proton pencil beam

    NASA Astrophysics Data System (ADS)

    Marsolat, F.; De Marzi, L.; Pouzoulet, F.; Mazal, A.

    2016-01-01

    In proton therapy, the relative biological effectiveness (RBE) depends on various types of parameters such as linear energy transfer (LET). An analytical model for LET calculation exists (Wilkens’ model), but secondary particles are not included in this model. In the present study, we propose a correction factor, L_sec, for Wilkens’ model in order to take into account the LET contributions of certain secondary particles. This study includes secondary protons and deuterons, since the effects of these two types of particles can be described by the same RBE-LET relationship. L_sec was evaluated by Monte Carlo (MC) simulations using the GATE/GEANT4 platform and was defined by the ratio of the LET_d distributions of all protons and deuterons and only primary protons. This method was applied to the innovative Pencil Beam Scanning (PBS) delivery systems and L_sec was evaluated along the beam axis. This correction factor indicates the high contribution of secondary particles in the entrance region, with L_sec values higher than 1.6 for a 220 MeV clinical pencil beam. MC simulations showed the impact of pencil beam parameters, such as mean initial energy, spot size, and depth in water, on L_sec. The variation of L_sec with these different parameters was integrated in a polynomial function of the L_sec factor in order to obtain a model universally applicable to all PBS delivery systems. The validity of this correction factor applied to Wilkens’ model was verified along the beam axis of various pencil beams in comparison with MC simulations. A good agreement was obtained between the corrected analytical model and the MC calculations, with mean-LET deviations along the beam axis less than 0.05 keV μm^-1. These results demonstrate the efficacy of our new correction of the existing LET model in order to take into account secondary protons and deuterons along the pencil beam axis.
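
    The use of such a multiplicative correction can be sketched as below. The polynomial coefficients are invented placeholders: the published model fits the correction factor as a function of mean initial energy, spot size, and depth for each PBS system, which this sketch does not reproduce.

    ```python
    import numpy as np

    # Hypothetical depth-only polynomial; only the entrance value (~1.65,
    # consistent with the ">1.6" quoted above) is anchored to the abstract.
    LSEC_COEFFS = (1.65, -1.2e-2, 2.0e-5)

    def lsec(depth_mm):
        """Secondary-particle correction factor multiplying the primary
        LET_d; floored at 1 so it never reduces the primary value."""
        z = np.asarray(depth_mm, float)
        c0, c1, c2 = LSEC_COEFFS
        return np.maximum(1.0, c0 + c1 * z + c2 * z**2)

    def corrected_letd(letd_primary, depth_mm):
        """Apply the correction to a Wilkens-style primary-proton LET_d."""
        return lsec(depth_mm) * letd_primary
    ```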

  7. Adam Smith's invisible hand is unstable: physics and dynamics reasoning applied to economic theorizing

    NASA Astrophysics Data System (ADS)

    McCauley, Joseph L.

    2002-11-01

    Neo-classical economic theory is based on the postulated, nonempiric notion of utility. Neo-classical economists assume that prices, dynamics, and market equilibria can be derived from utility, and that the results represent mathematically the stabilizing action of Adam Smith's invisible hand. In deterministic excess demand dynamics, however, a utility function generally does not exist mathematically due to nonintegrability. Price as a function of demand does not exist and all equilibria are unstable. Qualitatively, and empirically, the neo-classical prediction of price as a function of demand describes neither consumer nor trader demand. We also discuss five inconsistent definitions of equilibrium used in economics and finance, only one of which is correct, and then explain the fallacy in the economists’ notion of ‘temporary price equilibria’.

  8. APPLICATION OF NEURAL NETWORK ALGORITHMS FOR BPM LINEARIZATION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Musson, John C.; Seaton, Chad; Spata, Mike F.

    2012-11-01

    Stripline BPM sensors contain inherent non-linearities, as a result of field distortions from the pickup elements. Many methods have been devised to facilitate corrections, often employing polynomial fitting. The cost of computation makes real-time correction difficult, particularly when integer math is utilized. The application of neural-network technology, particularly the multi-layer perceptron algorithm, is proposed as an efficient alternative for electrode linearization. A process of supervised learning is initially used to determine the weighting coefficients, which are subsequently applied to the incoming electrode data. A non-linear layer, known as an activation layer, is responsible for the removal of saturation effects. Implementation of a perceptron in an FPGA-based software-defined radio (SDR) is presented, along with performance comparisons. In addition, efficient calculation of the sigmoidal activation function via the CORDIC algorithm is presented.
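
    The inference step of such a perceptron linearizer can be sketched as follows. The layer sizes and weights are placeholders: in practice the weights come from offline supervised learning against a bench-calibrated position map, and the FPGA implementation evaluates the sigmoidal activation via CORDIC rather than calling tanh.

    ```python
    import numpy as np

    def mlp_linearize(electrodes, W1, b1, W2, b2):
        """Single-hidden-layer perceptron mapping the four stripline
        electrode amplitudes to a corrected (x, y) beam position."""
        h = np.tanh(W1 @ electrodes + b1)   # sigmoidal activation layer
        return W2 @ h + b2                  # linear output layer

    # Random weights just to show the shapes (4 inputs -> 8 hidden -> 2 out);
    # real coefficients are fit by supervised learning.
    rng = np.random.default_rng(1)
    W1, b1 = rng.standard_normal((8, 4)), np.zeros(8)
    W2, b2 = rng.standard_normal((2, 8)), np.zeros(2)
    xy = mlp_linearize(np.array([0.9, 1.1, 1.0, 1.0]), W1, b1, W2, b2)
    ```

    The appeal over polynomial fitting is that inference is a fixed number of multiply-accumulates plus one activation per hidden node, which maps cleanly onto integer FPGA arithmetic.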

  9. New methodology for adjusting rotating shadowband irradiometer measurements

    NASA Astrophysics Data System (ADS)

    Vignola, Frank; Peterson, Josh; Wilbert, Stefan; Blanc, Philippe; Geuder, Norbert; Kern, Chris

    2017-06-01

    A new method is developed for correcting systematic errors found in rotating shadowband irradiometer measurements. Since the responsivity of the photodiode-based pyranometers typically utilized for RSI sensors depends on the wavelength of the incident radiation, and the spectral distribution of the incident radiation is different for the Direct Normal Irradiance and the Diffuse Horizontal Irradiance, spectral effects have to be considered. These spectral effects cause the most problematic errors when applying currently available correction functions to RSI measurements. Hence, direct normal and diffuse contributions are analyzed and modeled separately. An additional advantage of this methodology is that it provides a prescription for how to modify the adjustment algorithms for locations with atmospheric characteristics different from those of the location where the calibration and adjustment algorithms were developed. A summary of results and areas for future efforts is then discussed.
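    The separate treatment of the beam and diffuse components can be sketched as follows. Here `f_beam` and `f_diffuse` are hypothetical placeholders for the spectrally dependent responsivity adjustments the paper derives; the recombination itself is just the standard closure relation GHI = DNI·cos(z) + DHI.

```python
import math

def corrected_ghi(dni_raw, dhi_raw, sza_deg, f_beam, f_diffuse):
    """Recombine separately corrected irradiance components.

    f_beam and f_diffuse stand in for the spectral/responsivity
    adjustment functions, which differ because the direct beam and the
    diffuse sky have different spectra (values here are illustrative).
    """
    dni = dni_raw * f_beam      # corrected direct normal irradiance
    dhi = dhi_raw * f_diffuse   # corrected diffuse horizontal irradiance
    return dni * math.cos(math.radians(sza_deg)) + dhi

# Example: 800 W/m^2 beam, 100 W/m^2 diffuse, 30 degree solar zenith angle.
ghi = corrected_ghi(800.0, 100.0, 30.0, 1.02, 0.97)
print(round(ghi, 1))  # 803.7
```

Because each component carries its own correction, the same structure accommodates site-specific adjustments by swapping in factors fitted for the local atmosphere.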

  10. 3D resolved mapping of optical aberrations in thick tissues

    PubMed Central

    Zeng, Jun; Mahou, Pierre; Schanne-Klein, Marie-Claire; Beaurepaire, Emmanuel; Débarre, Delphine

    2012-01-01

    We demonstrate a simple method for mapping optical aberrations with 3D resolution within thick samples. The method relies on the local measurement of the variation in image quality with externally applied aberrations. We discuss the accuracy of the method as a function of the signal strength and of the aberration amplitude and we derive the achievable resolution for the resulting measurements. We then report on measured 3D aberration maps in human skin biopsies and mouse brain slices. From these data, we analyse the consequences of tissue structure and refractive index distribution on aberrations and imaging depth in normal and cleared tissue samples. The aberration maps allow the estimation of the typical aplanetism region size over which aberrations can be uniformly corrected. This method and data pave the way towards efficient correction strategies for tissue imaging applications. PMID:22876353
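    The measurement principle above rests on the locally parabolic dependence of image quality on an applied aberration mode: three images taken with bias amplitudes -Δ, 0, +Δ of one mode suffice to locate the quality maximum. A minimal sketch of that estimator (the function name and the quadratic-metric assumption are ours, not taken verbatim from the paper):

```python
def modal_correction(m_minus, m_zero, m_plus, delta):
    """Estimate the aberration amplitude from three image-quality readings.

    Assumes the metric M(a) is locally parabolic in the mode amplitude a,
    sampled at a = -delta, 0, +delta; returns the location of the maximum.
    """
    return 0.5 * delta * (m_plus - m_minus) / (2 * m_zero - m_plus - m_minus)

# Synthetic check: a metric peaked at a = 0.3 is recovered exactly.
metric = lambda a: 1.0 - (a - 0.3) ** 2
estimate = modal_correction(metric(-1.0), metric(0.0), metric(1.0), 1.0)
```

Repeating the estimate per mode and per image subvolume is what would build up a 3D-resolved aberration map of the kind reported here.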

  11. Interference detection and correction applied to incoherent-scatter radar power spectrum measurement

    NASA Technical Reports Server (NTRS)

    Ying, W. P.; Mathews, J. D.; Rastogi, P. K.

    1986-01-01

    A median filter based interference detection and correction technique is evaluated, and its application to Arecibo incoherent-scatter radar D-region ionospheric power spectra is discussed. The method can be extended to other kinds of data provided the statistics involved in the process remain valid.
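    A minimal sketch of a median-filter-based interference detector of this kind follows. The window size and the MAD-based flagging threshold are our illustrative choices, not necessarily those of the paper; the idea is simply that interference spikes stand out against the running median of the spectrum and can be replaced by it.

```python
import numpy as np

def despike(spectrum, window=5, k=4.0):
    """Flag and correct interference spikes using a running median.

    A point is flagged when it deviates from the local median by more
    than k times the median absolute deviation (threshold illustrative).
    """
    x = np.asarray(spectrum, dtype=float)
    half = window // 2
    padded = np.pad(x, half, mode="edge")
    med = np.array([np.median(padded[i:i + window]) for i in range(len(x))])
    mad = np.median(np.abs(x - med)) + 1e-12   # avoid divide-by-zero
    out = x.copy()
    bad = np.abs(x - med) > k * mad
    out[bad] = med[bad]                        # replace spikes by local median
    return out, bad

x = np.ones(20); x[7] = 50.0          # one interference spike
clean, flagged = despike(x)
print(int(flagged.sum()), clean[7])   # 1 1.0
```

The statistical caveat in the abstract applies here too: the flagging rule presumes the uncontaminated spectrum varies slowly compared with the window.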

  12. Reconstruction of an infrared band of meteorological satellite imagery with abductive networks

    NASA Technical Reports Server (NTRS)

    Singer, Harvey A.; Cockayne, John E.; Versteegen, Peter L.

    1995-01-01

    As the current fleet of meteorological satellites age, the accuracy of the imagery sensed on a spectral channel of the image scanning system is continually and progressively degraded by noise. In time, that data may even become unusable. We describe a novel approach to the reconstruction of the noisy satellite imagery according to empirical functional relationships that tie the spectral channels together. Abductive networks are applied to automatically learn the empirical functional relationships between the data sensed on the other spectral channels to calculate the data that should have been sensed on the corrupted channel. Using imagery unaffected by noise, it is demonstrated that abductive networks correctly predict the noise-free observed data.

  13. Stress Intensity Factor Plasticity Correction for Flaws in Stress Concentration Regions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Friedman, E.; Wilson, W.K.

    2000-02-01

    Plasticity corrections to elastically computed stress intensity factors are often included in brittle fracture evaluation procedures. These corrections are based on the existence of a plastic zone in the vicinity of the crack tip. Such a plastic zone correction is included in the flaw evaluation procedure of Appendix A to Section XI of the ASME Boiler and Pressure Vessel Code. Plasticity effects from the results of elastic and elastic-plastic explicit flaw finite element analyses are examined for various size cracks emanating from the root of a notch in a panel and for cracks located at fillet radii. The results of these calculations provide conditions under which the crack-tip plastic zone correction based on the Irwin plastic zone size overestimates the plasticity effect for crack-like flaws embedded in stress concentration regions in which the elastically computed stress exceeds the yield strength of the material. A failure assessment diagram (FAD) curve is employed to graphically characterize the effect of plasticity on the crack driving force. The Option 1 FAD curve of the Level 3 advanced fracture assessment procedure of British Standard PD 6493:1991, adjusted for stress concentration effects by a term that is a function of the applied load and the ratio of the local radius of curvature at the flaw location to the flaw depth, provides a satisfactory bound to all the FAD curves derived from the explicit flaw finite element calculations. The adjusted FAD curve is a less restrictive plasticity correction than the plastic zone correction of Section XI for flaws embedded in plastic zones at geometric stress concentrators. This enables unnecessary conservatism to be removed from flaw evaluation procedures that utilize plasticity corrections.
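    For reference, the Irwin plastic-zone correction discussed above has a simple closed form. This sketch uses the standard textbook expression for the first-order plastic-zone radius; the units and the plane-strain factor of 3 are conventional, not taken from the report.

```python
import math

def irwin_plastic_zone(K, sigma_y, plane_strain=False):
    """Irwin first-order plastic-zone radius ahead of the crack tip.

    K is the stress intensity factor in MPa*sqrt(m), sigma_y the yield
    strength in MPa. Plane strain divides by an extra factor of 3
    (standard textbook form); the result is in meters.
    """
    r = (K / sigma_y) ** 2 / (2.0 * math.pi)
    return r / 3.0 if plane_strain else r

# Example: K = 50 MPa*sqrt(m), yield strength 400 MPa.
r = irwin_plastic_zone(50.0, 400.0)   # ~0.0025 m, i.e. about 2.5 mm
```

The correction then replaces the crack length a by a_eff = a + r when recomputing K, which is exactly the step the report finds overly conservative for flaws sitting inside stress concentration regions.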

  14. 0–0 Energies Using Hybrid Schemes: Benchmarks of TD-DFT, CIS(D), ADC(2), CC2, and BSE/GW formalisms for 80 Real-Life Compounds

    PubMed Central

    2015-01-01

    The 0–0 energies of 80 medium and large molecules have been computed with a large panel of theoretical formalisms. We have used an approach computationally tractable for large molecules, that is, the structural and vibrational parameters are obtained with TD-DFT, the solvent effects are accounted for with the PCM model, whereas the total and transition energies have been determined with TD-DFT and with five wave function approaches accounting for contributions from double excitations, namely, CIS(D), ADC(2), CC2, SCS-CC2, and SOS-CC2, as well as the Green's-function-based BSE/GW approach. Atomic basis sets including diffuse functions have been systematically applied, and several variations of the PCM have been evaluated. Using solvent corrections obtained with the corrected linear-response approach, we found that three schemes, namely, ADC(2), CC2, and BSE/GW, allow one to reach a mean absolute deviation smaller than 0.15 eV compared to the measurements, the two former yielding slightly better correlation with experiments than the latter. CIS(D), SCS-CC2, and SOS-CC2 provide significantly larger deviations, though the latter approach delivers highly consistent transition energies. In addition, we show that (i) ADC(2) and CC2 values are extremely close to each other except for systems absorbing at low energies; (ii) the linear-response PCM scheme tends to overestimate solvation effects; and (iii) the average impact of the nonequilibrium correction on 0–0 energies is negligible. PMID:26574326

  15. Body image, psychosocial functioning, and personality: how different are adolescents and young adults applying for plastic surgery?

    PubMed

    Simis, K J; Verhulst, F C; Koot, H M

    2001-07-01

    This study addressed three questions: (1) Do adolescents undergoing plastic surgery have a realistic view of their body? (2) How urgent is the psychosocial need of adolescents to undergo plastic surgery? (3) Which relations exist between bodily attitudes and psychosocial functioning and personality? From 1995 to 1997, 184 plastic surgical patients aged 12 to 22 years, a comparison group of 684 adolescents and young adults from the general population aged 12 to 22 years, and their parents were interviewed and completed questionnaires and standardised rating scales. Adolescents accepted for plastic surgery had realistic appearance attitudes and were psychologically healthy overall. Patients were as satisfied with their overall appearance as the comparison group, but more dissatisfied with the specific body parts concerned for operation, especially when undergoing corrective operations. Patients had measurable appearance-related psychosocial problems. Patient boys reported less self-confidence in social areas than all other groups. There were very few patient-comparison group differences in correlations between bodily and psychosocial variables, indicating that bodily attitudes and satisfaction are not differentially related to psychosocial functioning and self-perception in patients compared with peers. We concluded that adolescents accepted for plastic surgery have considerable appearance-related psychosocial problems, patients in the corrective group reporting more so than in the reconstructive group. Plastic surgeons may assume that these adolescents in general have a realistic attitude towards their appearance, are psychologically healthy, and are mainly dissatisfied with the body parts concerned for operation, corrective patients more so than reconstructive patients. Introverted patients may need more attention from plastic surgeons during the psychosocial assessment.

  16. Clinical relevance of gait research applied to clinical trials in spinal cord injury.

    PubMed

    Ditunno, John; Scivoletto, Giorgio

    2009-01-15

    The restoration of walking function following SCI is extremely important to consumers and has stimulated a response of new treatments by scientists, the pharmaceutical industry and clinical entrepreneurs. Several of the proposed interventions: (1) the use of functional electrical stimulation (FES) and (2) locomotor training have been examined in clinical trials and recent reviews of the scientific literature. Each of these interventions is based on research of human locomotion. Therefore, the systematic study of walking function and gait in normal individuals and those with injury to the spinal cord has contributed to the identification of the impairments of walking, the development of new treatments and how they will be measured to determine effectiveness. In this context gait research applied to interventions to improve walking function is of high clinical relevance. This research helps identify walking impairments to be corrected and measures of walking function to be utilized as endpoints for clinical trials. The most common impairments following SCI diagnosed by observational gait analysis include inadequate hip extension during stance, persistent plantar flexion and hip/knee flexion during swing and foot placement at heel strike. FES has been employed as one strategy for correcting these impairments, based on analyses that range from simple measures of speed, cadence and stride length to more sophisticated systems of three-dimensional video motion analysis and multichannel EMG tracings of integrated walking. A recent review of the entire FES literature identified 36 studies that merit comment, and the full range of outcome measures for walking function was used, from simple velocity to video motion analysis. In addition to measures of walking function developed for FES interventions, the first randomized multicenter clinical trial on locomotor training in subacute SCI was recently published with an extensive review of these measures.
    In this study, outcome measures of motor strength (impairment), balance, the Walking Index for SCI (WISCI), speed, and the 5-min walk (walking capacity), as well as the locomotor functional independence measure (L-FIM), a disability measure, all showed improvement in walking function based on the strategy of the response of activity-based plasticity to step training. Although the scientific basis for this intervention will be covered in other articles in this series, the evolution of clinical outcome measures of walking function continues to be important for the determination of effectiveness in clinical trials.

  17. Length bias correction in one-day cross-sectional assessments - The nutritionDay study.

    PubMed

    Frantal, Sophie; Pernicka, Elisabeth; Hiesmayr, Michael; Schindler, Karin; Bauer, Peter

    2016-04-01

    A major problem occurring in cross-sectional studies is sampling bias. Length of hospital stay (LOS) differs strongly between patients and causes a length bias, as patients with longer LOS are more likely to be included and are therefore overrepresented in this type of study. To adjust for the length bias, higher weights are allocated to patients with shorter LOS. We determined the effect of length-bias adjustment in two independent populations. Length-bias correction is applied to the data of the nutritionDay project, a one-day multinational cross-sectional audit capturing data on disease and nutrition of patients admitted to hospital wards, with right-censoring after 30 days of follow-up. We applied the weighting method for estimating the distribution function of patient baseline variables based on the method of non-parametric maximum likelihood. Results are validated using data from all patients admitted to the General Hospital of Vienna between 2005 and 2009, where the distribution of LOS can be assumed to be known. Additionally, a simplified calculation scheme for estimating the adjusted distribution function of LOS is demonstrated on a small patient example. The crude median (lower quartile; upper quartile) LOS in the cross-sectional sample was 14 (8; 24) days and decreased to 7 (4; 12) days when adjusted. Hence, adjustment for length bias in cross-sectional studies is essential to obtain appropriate estimates. Copyright © 2015 Elsevier Ltd and European Society for Clinical Nutrition and Metabolism. All rights reserved.
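    The weighting idea can be illustrated with a deliberately simplified sketch: since a patient with stay L is included in a one-day cross-section with probability proportional to L, weighting each sampled patient by 1/L approximates the admission-cohort distribution. This toy version ignores the right-censoring at 30 days and the non-parametric maximum-likelihood machinery of the actual method.

```python
def length_bias_adjusted_cdf(los_values):
    """Weighted empirical CDF with weights proportional to 1/LOS.

    Simplified sketch: each observed LOS l gets weight 1/l, which undoes
    the length-proportional inclusion probability of a one-day audit.
    """
    w = [1.0 / l for l in los_values]
    total = sum(w)
    pairs = sorted(zip(los_values, w))
    cdf, acc = [], 0.0
    for l, wi in pairs:
        acc += wi / total
        cdf.append((l, acc))
    return cdf

def weighted_median(los_values):
    """Adjusted median: first LOS at which the weighted CDF reaches 0.5."""
    for l, p in length_bias_adjusted_cdf(los_values):
        if p >= 0.5:
            return l

print(weighted_median([2, 4, 8]))  # 2
```

Note the direction of the effect matches the abstract: the crude median of [2, 4, 8] is 4, while the length-bias-adjusted median drops to 2.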

  18. Preprocessing method to correct illumination pattern in sinusoidal-based structured illumination microscopy

    NASA Astrophysics Data System (ADS)

    Shabani, H.; Doblas, A.; Saavedra, G.; Preza, C.

    2018-02-01

    The restored images in structured illumination microscopy (SIM) can be affected by residual fringes due to a mismatch between the illumination pattern and the sinusoidal model assumed by the restoration method. When a Fresnel biprism is used to generate a structured pattern, this pattern cannot be described by a pure sinusoidal function since it is distorted by an envelope due to the biprism's edge. In this contribution, we have investigated the effect of the envelope on the restored SIM images and propose a computational method in order to address it. The proposed approach to reduce the effect of the envelope consists of two parts. First, the envelope of the structured pattern, determined through calibration data, is removed from the raw SIM data via a preprocessing step. In the second step, a notch filter is applied to the images, which are restored using the well-known generalized Wiener filter, to filter any residual undesired fringes. The performance of our approach has been evaluated numerically by simulating the effect of the envelope on synthetic forward images of a 6-μm spherical bead generated using the real pattern and then restored using the SIM approach that is based on an ideal pure sinusoidal function before and after our proposed correction method. The simulation result shows 74% reduction in the contrast of the residual pattern when the proposed method is applied. Experimental results from a pollen grain sample also validate the proposed approach.
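    The two correction steps described above can be sketched as below. The envelope array is assumed to come from the calibration data and the notch position `freq_idx` from the known fringe frequency; the function names and the simple square notch are our illustrative choices, not the paper's exact filter design.

```python
import numpy as np

def remove_envelope(raw, envelope, eps=1e-6):
    """Step 1: divide out the calibrated illumination envelope."""
    return raw / np.maximum(envelope, eps)

def notch_filter(image, freq_idx, width=1):
    """Step 2: zero a small square around the residual fringe peaks at
    +/- freq_idx FFT bins along the fringe axis (square notch, illustrative)."""
    F = np.fft.fftshift(np.fft.fft2(image))
    cr, cc = F.shape[0] // 2, F.shape[1] // 2
    for s in (+1, -1):
        q = cc + s * freq_idx
        F[cr - width:cr + width + 1, q - width:q + width + 1] = 0.0
    return np.fft.ifft2(np.fft.ifftshift(F)).real

# Synthetic check: a pure horizontal fringe is removed, leaving the mean.
N = 64
x = np.arange(N)
fringe = 1.0 + 0.5 * np.cos(2 * np.pi * 8 * x / N)
img = np.tile(fringe, (N, 1))
flat = notch_filter(img, 8)
```

In the full pipeline the notch is applied after the generalized Wiener reconstruction, so it only has to suppress whatever fringe component survives the envelope-removal preprocessing.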

  19. An asymptotically consistent approximant method with application to soft- and hard-sphere fluids.

    PubMed

    Barlow, N S; Schultz, A J; Weinstein, S J; Kofke, D A

    2012-11-28

    A modified Padé approximant is used to construct an equation of state, which has the same large-density asymptotic behavior as the model fluid being described, while still retaining the low-density behavior of the virial equation of state (virial series). Within this framework, all sequences of rational functions that are analytic in the physical domain converge to the correct behavior at the same rate, eliminating the ambiguity of choosing the correct form of Padé approximant. The method is applied to fluids composed of "soft" spherical particles with separation distance r interacting through an inverse-power pair potential, φ(r) = ε(σ/r)^n, where ε and σ are model parameters and n is the "hardness" of the spheres. For n < 9, the approximants provide a significant improvement over the 8-term virial series, when compared against molecular simulation data. For n ≥ 9, both the approximants and the 8-term virial series give an accurate description of the fluid behavior, when compared with simulation data. When taking the limit as n → ∞, an equation of state for hard spheres is obtained, which is closer to simulation data than the 10-term virial series for hard spheres, and is comparable in accuracy to other recently proposed equations of state. By applying a least-squares fit to the approximants, we obtain a general and accurate soft-sphere equation of state as a function of n, valid over the full range of density in the fluid phase.
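    The classical Padé construction that the method builds on can be sketched as follows. This standard version matches only the low-density (virial) coefficients and omits the paper's additional constraint on the large-density asymptotic exponent, which is the "modified" part of their approximant.

```python
import numpy as np

def pade(coeffs, m, n):
    """[m/n] Padé approximant of the power series sum_k coeffs[k] x^k.

    Returns numerator and denominator coefficient arrays (b[0] = 1),
    obtained from the standard linear system matching coefficients up
    to order m + n.
    """
    c = np.asarray(coeffs, dtype=float)
    # Denominator b1..bn from: sum_j b_j c_{m+i-j} = -c_{m+i}, i = 1..n.
    A = np.array([[c[m + i - j] if 0 <= m + i - j < len(c) else 0.0
                   for j in range(1, n + 1)] for i in range(1, n + 1)])
    b = np.concatenate(([1.0], np.linalg.solve(A, -c[m + 1:m + n + 1])))
    # Numerator a_k = sum_j b_j c_{k-j}, k = 0..m.
    a = np.array([sum(b[j] * c[k - j] for j in range(min(k, n) + 1))
                  for k in range(m + 1)])
    return a, b

# Check: the geometric series 1 + x + x^2 + ... has [1/1] approximant 1/(1-x).
a, b = pade([1.0, 1.0, 1.0, 1.0], 1, 1)
```

For the geometric series the [1/1] form is exact, which illustrates why a rational function can capture behavior (here, a pole) that any truncated power series misses, the same reason a Padé-type form can encode the large-density divergence of an equation of state.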

  20. Blind Bayesian restoration of adaptive optics telescope images using generalized Gaussian Markov random field models

    NASA Astrophysics Data System (ADS)

    Jeffs, Brian D.; Christou, Julian C.

    1998-09-01

    This paper addresses post processing for resolution enhancement of sequences of short exposure adaptive optics (AO) images of space objects. The unknown residual blur is removed using Bayesian maximum a posteriori blind image restoration techniques. In the problem formulation, both the true image and the unknown blur psf's are represented by the flexible generalized Gaussian Markov random field (GGMRF) model. The GGMRF probability density function provides a natural mechanism for expressing available prior information about the image and blur. Incorporating such prior knowledge in the deconvolution optimization is crucial for the success of blind restoration algorithms. For example, space objects often contain sharp edge boundaries and geometric structures, while the residual blur psf in the corresponding partially corrected AO image is spectrally band limited and exhibits smoothed, random, texture-like features on a peaked central core. By properly choosing parameters, GGMRF models can accurately represent both the blur psf and the object, and serve to regularize the deconvolution problem. These two GGMRF models also serve as discriminator functions to separate blur and object in the solution. Algorithm performance is demonstrated with examples from synthetic AO images. Results indicate significant resolution enhancement when applied to partially corrected AO images. An efficient computational algorithm is described.
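    The GGMRF prior itself is compact enough to write down. Below is a minimal sketch of its unnormalized log-density over nearest-neighbor pixel differences; the parameter values and the simple 4-neighbor clique structure are illustrative, not the paper's exact configuration.

```python
import numpy as np

def ggmrf_log_prior(img, p=1.2, gamma=1.0):
    """Unnormalized GGMRF log-density over nearest-neighbor differences.

    p = 2 gives a Gaussian MRF (favors smoothness), while p closer to 1
    penalizes large jumps less and so preserves edges. This shape
    parameter is what lets one prior family describe both a sharp-edged
    object and a smooth blur psf (parameter values illustrative).
    """
    x = np.asarray(img, dtype=float)
    dh = np.abs(np.diff(x, axis=1)) ** p   # horizontal neighbor cliques
    dv = np.abs(np.diff(x, axis=0)) ** p   # vertical neighbor cliques
    return -(dh.sum() + dv.sum()) / (p * gamma ** p)
```

In a MAP restoration this term (one instance for the object, one with different parameters for the psf) is added to the data-fit log-likelihood and the sum is maximized over both unknowns.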

  1. Acceleration spectra for subduction zone earthquakes

    USGS Publications Warehouse

    Boatwright, J.; Choy, G.L.

    1989-01-01

    We estimate the source spectra of shallow earthquakes from digital recordings of teleseismic P wave groups, that is, P+pP+sP, by making frequency dependent corrections for the attenuation and for the interference of the free surface. The correction for the interference of the free surface assumes that the earthquake radiates energy from a range of depths. We apply this spectral analysis to a set of 12 subduction zone earthquakes which range in size from Ms = 6.2 to 8.1, obtaining corrected P wave acceleration spectra on the frequency band from 0.01 to 2.0 Hz. Seismic moment estimates from surface waves and normal modes are used to extend these P wave spectra to the frequency band from 0.001 to 0.01 Hz. The acceleration spectra of large subduction zone earthquakes, that is, earthquakes whose seismic moments are greater than 10^27 dyn cm, exhibit intermediate slopes where ü(ω) ∝ ω^(5/4) for frequencies from 0.005 to 0.05 Hz. For these earthquakes, spectral shape appears to be a discontinuous function of seismic moment. Using reasonable assumptions for the phase characteristics, we transform the spectral shape observed for large earthquakes into the time domain to fit Ekstrom's (1987) moment rate functions for the Ms = 8.1 Michoacan earthquake of September 19, 1985, and the Ms = 7.6 Michoacan aftershock of September 21, 1985. -from Authors

  2. Microscopically based energy density functionals for nuclei using the density matrix expansion. II. Full optimization and validation

    NASA Astrophysics Data System (ADS)

    Navarro Pérez, R.; Schunck, N.; Dyhdalo, A.; Furnstahl, R. J.; Bogner, S. K.

    2018-05-01

    Background: Energy density functional methods provide a generic framework to compute properties of atomic nuclei starting from models of nuclear potentials and the rules of quantum mechanics. Until now, the overwhelming majority of functionals have been constructed either from empirical nuclear potentials such as the Skyrme or Gogny forces, or from systematic gradient-like expansions in the spirit of the density functional theory for atoms. Purpose: We seek to obtain a usable form of the nuclear energy density functional that is rooted in the modern theory of nuclear forces. We thus consider a functional obtained from the density matrix expansion of local nuclear potentials from chiral effective field theory. We propose a parametrization of this functional carefully calibrated and validated on selected ground-state properties that is suitable for large-scale calculations of nuclear properties. Methods: Our energy functional comprises two main components. The first component is a non-local functional of the density and corresponds to the direct part (Hartree term) of the expectation value of local chiral potentials on a Slater determinant. Contributions to the mean field and the energy of this term are computed by expanding the spatial, finite-range components of the chiral potential onto Gaussian functions. The second component is a local functional of the density and is obtained by applying the density matrix expansion to the exchange part (Fock term) of the expectation value of the local chiral potential. We apply the UNEDF2 optimization protocol to determine the coupling constants of this energy functional. Results: We obtain a set of microscopically constrained functionals for local chiral potentials from leading order up to next-to-next-to-leading order with and without three-body forces and contributions from Δ excitations. 
These functionals are validated on the calculation of nuclear and neutron matter, nuclear mass tables, single-particle shell structure in closed-shell nuclei, and the fission barrier of 240Pu. Quantitatively, they perform noticeably better than the more phenomenological Skyrme functionals. Conclusions: The inclusion of higher-order terms in the chiral perturbation expansion seems to produce a systematic improvement in predicting nuclear binding energies while the impact on other observables is not really significant. This result is especially promising since all the fits have been performed at the single-reference level of the energy density functional approach, where important collective correlations such as center-of-mass correction, rotational correction, or zero-point vibrational energies have not been taken into account yet.

  3. Quantum Kramers model: Corrections to the linear response theory for continuous bath spectrum

    NASA Astrophysics Data System (ADS)

    Rips, Ilya

    2017-01-01

    Decay of the metastable state is analyzed within the quantum Kramers model in the weak-to-intermediate dissipation regime. The decay kinetics in this regime is determined by energy exchange between the unstable mode and the stable modes of thermal bath. In our previous paper [Phys. Rev. A 42, 4427 (1990), 10.1103/PhysRevA.42.4427], Grabert's perturbative approach to well dynamics in the case of the discrete bath [Phys. Rev. Lett. 61, 1683 (1988), 10.1103/PhysRevLett.61.1683] has been extended to account for the second order terms in the classical equations of motion (EOM) for the stable modes. Account of the secular terms reduces EOM for the stable modes to those of the forced oscillator with the time-dependent frequency (TDF oscillator). Analytic expression for the characteristic function of energy loss of the unstable mode has been derived in terms of the generating function of the transition probabilities for the quantum forced TDF oscillator. In this paper, the approach is further developed and applied to the case of the continuous frequency spectrum of the bath. The spectral density functions of the bath of stable modes are expressed in terms of the dissipative properties (the friction function) of the original bath. They simplify considerably for the one-dimensional systems, when the density of phonon states is constant. Explicit expressions for the fourth order corrections to the linear response theory result for the characteristic function of the energy loss and its cumulants are obtained for the particular case of the cubic potential with Ohmic (Markovian) dissipation. The range of validity of the perturbative approach in this case is determined (γ/ω_b < 0.26), which includes the turnover region. The dominant correction to the linear response theory result is associated with the "work function" and leads to reduction of the average energy loss and its dispersion. 
    This reduction increases with the increasing dissipation strength (up to ~10%) within the range of validity of the approach. We have also calculated corrections to the depopulation factor and the escape rate for the quantum and for the classical Kramers models. Results for the classical escape rate are in very good agreement with the numerical simulations for high barriers. The results can serve as an additional proof of the robustness and accuracy of the linear response theory.

  4. Quantum Kramers model: Corrections to the linear response theory for continuous bath spectrum.

    PubMed

    Rips, Ilya

    2017-01-01

    Decay of the metastable state is analyzed within the quantum Kramers model in the weak-to-intermediate dissipation regime. The decay kinetics in this regime is determined by energy exchange between the unstable mode and the stable modes of thermal bath. In our previous paper [Phys. Rev. A 42, 4427 (1990), 10.1103/PhysRevA.42.4427], Grabert's perturbative approach to well dynamics in the case of the discrete bath [Phys. Rev. Lett. 61, 1683 (1988), 10.1103/PhysRevLett.61.1683] has been extended to account for the second order terms in the classical equations of motion (EOM) for the stable modes. Account of the secular terms reduces EOM for the stable modes to those of the forced oscillator with the time-dependent frequency (TDF oscillator). Analytic expression for the characteristic function of energy loss of the unstable mode has been derived in terms of the generating function of the transition probabilities for the quantum forced TDF oscillator. In this paper, the approach is further developed and applied to the case of the continuous frequency spectrum of the bath. The spectral density functions of the bath of stable modes are expressed in terms of the dissipative properties (the friction function) of the original bath. They simplify considerably for the one-dimensional systems, when the density of phonon states is constant. Explicit expressions for the fourth order corrections to the linear response theory result for the characteristic function of the energy loss and its cumulants are obtained for the particular case of the cubic potential with Ohmic (Markovian) dissipation. The range of validity of the perturbative approach in this case is determined (γ/ω_b < 0.26), which includes the turnover region. The dominant correction to the linear response theory result is associated with the "work function" and leads to reduction of the average energy loss and its dispersion. 
This reduction increases with the increasing dissipation strength (up to ∼10%) within the range of validity of the approach. We have also calculated corrections to the depopulation factor and the escape rate for the quantum and for the classical Kramers models. Results for the classical escape rate are in very good agreement with the numerical simulations for high barriers. The results can serve as an additional proof of the robustness and accuracy of the linear response theory.

  5. Maximum entropy formalism for the analytic continuation of matrix-valued Green's functions

    NASA Astrophysics Data System (ADS)

    Kraberger, Gernot J.; Triebl, Robert; Zingl, Manuel; Aichhorn, Markus

    2017-10-01

    We present a generalization of the maximum entropy method to the analytic continuation of matrix-valued Green's functions. To treat off-diagonal elements correctly based on Bayesian probability theory, the entropy term has to be extended for spectral functions that are possibly negative in some frequency ranges. In that way, all matrix elements of the Green's function matrix can be analytically continued; we introduce a computationally cheap element-wise method for this purpose. However, this method cannot ensure important constraints on the mathematical properties of the resulting spectral functions, namely positive semidefiniteness and Hermiticity. To improve on this, we present a full matrix formalism, where all matrix elements are treated simultaneously. We show the capabilities of these methods using insulating and metallic dynamical mean-field theory (DMFT) Green's functions as test cases. Finally, we apply the methods to realistic material calculations for LaTiO3, where off-diagonal matrix elements in the Green's function appear due to the distorted crystal structure.

  6. Numerical simulation and analysis of accurate blood oxygenation measurement by using optical resolution photoacoustic microscopy

    NASA Astrophysics Data System (ADS)

    Yu, Tianhao; Li, Qian; Li, Lin; Zhou, Chuanqing

    2016-10-01

    The accuracy of the photoacoustic signal is crucial for measuring oxygen saturation in functional photoacoustic imaging, and it is influenced by factors such as defocus of the laser beam, the curved shape of large vessels, and the nonlinear saturation of optical absorption in biological tissues. We apply a Monte Carlo model to simulate energy deposition in tissue and obtain the photoacoustic signals reaching a simulated focused surface detector, in order to investigate the corresponding influence of these factors. We also apply compensation to photoacoustic images of in vivo cat cerebral cortex blood vessels, in which signals from different lateral positions of vessels are corrected based on the simulation results. This processing improves the smoothness and accuracy of the oxygen saturation results.

  7. A Numerical, Literal, and Converged Perturbation Algorithm

    NASA Astrophysics Data System (ADS)

    Wiesel, William E.

    2017-09-01

    The KAM theorem and von Zeipel's method are applied to a perturbed harmonic oscillator, and it is noted that the KAM methodology does not allow for necessary frequency or angle corrections, while von Zeipel's does. The KAM methodology can be carried out with purely numerical methods, since its generating function does not contain momentum dependence. The KAM iteration is extended to allow for frequency and angle changes, and in the process apparently can be successfully applied to degenerate systems normally ruled out by the classical KAM theorem. Convergence is observed to be geometric, not exponential, but it does proceed smoothly to machine precision. The algorithm produces a converged perturbation solution by numerical methods, while still retaining literal variable dependence, at least in the vicinity of a given trajectory.

  8. [Methodic approaches to evaluation of microclimate at workplace, with application of various types of protective clothing against occupational hazards].

    PubMed

    Prokopenko, L V; Afanas'eva, R F; Bessonova, N A; Burmistrova, O V; Losik, T K; Konstantinov, E I

    2013-01-01

    Studies of the thermal state of humans performing physical work in a heated environment while wearing various types of protective clothing demonstrated the role of the clothing in modifying the thermal load on the body, and the possibility of decreasing this load through correction of air temperature and humidity and through shorter stays at the workplace. The authors present hygienic requirements for the air temperature range in accordance with the allowable degree of body heating, and suggest a mathematical model to forecast an integral parameter of the human functional state depending on the type of protective clothing applied. The article also covers the necessity of an upper air temperature limit during the hot season when protective clothing made of materials with low air permeability and hydraulic conductivity is used.

  9. A nonlinear lag correction algorithm for a-Si flat-panel x-ray detectors

    PubMed Central

    Starman, Jared; Star-Lack, Josh; Virshup, Gary; Shapiro, Edward; Fahrig, Rebecca

    2012-01-01

    Purpose: Detector lag, or residual signal, in a-Si flat-panel (FP) detectors can cause significant shading artifacts in cone-beam computed tomography reconstructions. To date, most correction models have assumed a linear, time-invariant (LTI) model and correct lag by deconvolution with an impulse response function (IRF). However, the lag correction is sensitive to both the exposure intensity and the technique used for determining the IRF. Even when the LTI correction that produces the minimum error is found, residual artifact remains. A new non-LTI method was developed to take into account the IRF measurement technique and exposure dependencies. Methods: First, a multiexponential (N = 4) LTI model was implemented for lag correction. Next, a non-LTI lag correction, known as the nonlinear consistent stored charge (NLCSC) method, was developed based on the LTI multiexponential method. It differs from other nonlinear lag correction algorithms in that it maintains a consistent estimate of the amount of charge stored in the FP and it does not require intimate knowledge of the semiconductor parameters specific to the FP. For the NLCSC method, all coefficients of the IRF are functions of exposure intensity. Another nonlinear lag correction method that only used an intensity weighting of the IRF was also compared. The correction algorithms were applied to step-response projection data and CT acquisitions of a large pelvic phantom and an acrylic head phantom. The authors collected rising and falling edge step-response data on a Varian 4030CB a-Si FP detector operating in dynamic gain mode at 15 fps at nine incident exposures (2.0%–92% of the detector saturation exposure). For projection data, 1st and 50th frame lag were measured before and after correction. For the CT reconstructions, five pairs of ROIs were defined and the maximum and mean signal differences within a pair were calculated for the different exposures and step-response edge techniques. 
Results: The LTI corrections left residual 1st and 50th frame lag up to 1.4% and 0.48%, while the NLCSC lag correction reduced 1st and 50th frame residual lags to less than 0.29% and 0.0052%. For CT reconstructions, the NLCSC lag correction gave an average error of 11 HU for the pelvic phantom and 3 HU for the head phantom, compared to 14–19 HU and 2–11 HU for the LTI corrections and 15 HU and 9 HU for the intensity weighted non-LTI algorithm. The maximum ROI error was always smallest for the NLCSC correction. The NLCSC correction was also superior to the intensity weighting algorithm. Conclusions: The NLCSC lag algorithm corrected for the exposure dependence of lag, provided superior image improvement for the pelvic phantom reconstruction, and gave similar results to the best case LTI results for the head phantom. The blurred ring artifact that is left over in the LTI corrections was better removed by the NLCSC correction in all cases. PMID:23039642
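    The LTI baseline against which the NLCSC method is compared deconvolves the lag with an impulse response function. The paper uses an N = 4 multiexponential IRF; the sketch below uses a single-pole (one-exponential) toy IRF to show the deconvolution idea, so the names and the pole value are illustrative assumptions:

    ```python
    import numpy as np

    def add_lag(x, a=0.3):
        """Simulate single-pole detector lag: part of each frame's signal
        is released over later frames (toy stand-in for the N=4 IRF)."""
        y = np.zeros_like(x, dtype=float)
        state = 0.0
        for n, xn in enumerate(x):
            state = a * state + (1 - a) * xn   # charge trapped/released each frame
            y[n] = state
        return y

    def correct_lag(y, a=0.3):
        """Invert the single-pole IRF by deconvolution (the LTI correction)."""
        x = np.empty_like(y, dtype=float)
        x[0] = y[0] / (1 - a)
        x[1:] = (y[1:] - a * y[:-1]) / (1 - a)
        return x

    # Step exposure: detector signal rises slowly, then decays after beam-off
    x = np.r_[np.ones(50), np.zeros(50)]
    y = add_lag(x)
    x_hat = correct_lag(y)
    print(np.abs(x_hat - x).max())   # residual lag after correction
    ```

    The nonlinear NLCSC method goes further by making the IRF coefficients functions of exposure intensity and tracking the stored charge consistently, which a fixed-coefficient inversion like this cannot do.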

  10. Forensic psychology and correctional psychology: Distinct but related subfields of psychological science and practice.

    PubMed

    Neal, Tess M S

    2018-02-12

    This article delineates 2 separate but related subfields of psychological science and practice applicable across all major areas of the field (e.g., clinical, counseling, developmental, social, cognitive, community). Forensic and correctional psychology are related by their historical roots, involvement in the justice system, and the shared population of people they study and serve. The practical and ethical contexts of these subfields are distinct from other areas of psychology, and from one another, with important implications for ecologically valid research and ethically sound practice. Forensic psychology is a subfield of psychology in which basic and applied psychological science or scientifically oriented professional practice is applied to the law to help resolve legal, contractual, or administrative matters. Correctional psychology is a subfield of psychology in which basic and applied psychological science or scientifically oriented professional practice is applied to the justice system to inform the classification, treatment, and management of offenders to reduce risk and improve public safety. There has been and continues to be great interest in both subfields, especially the potential for forensic and correctional psychological science to help resolve practical issues and questions in legal and justice settings. This article traces the shared and separate developmental histories of these subfields, outlines their important distinctions and implications, and provides a common understanding and shared language for psychologists interested in applying their knowledge in forensic or correctional contexts. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  11. Interleaved segment correction achieves higher improvement factors in using genetic algorithm to optimize light focusing through scattering media

    NASA Astrophysics Data System (ADS)

    Li, Runze; Peng, Tong; Liang, Yansheng; Yang, Yanlong; Yao, Baoli; Yu, Xianghua; Min, Junwei; Lei, Ming; Yan, Shaohui; Zhang, Chunmin; Ye, Tong

    2017-10-01

    Focusing and imaging through scattering media has been proven possible with high-resolution wavefront shaping. A completely scrambled scattering field can be corrected by applying a correction phase mask on a phase-only spatial light modulator (SLM), thereby improving the focusing quality. The correction phase is often found by global search algorithms, among which the Genetic Algorithm (GA) stands out for its parallel optimization process and high performance in noisy environments. However, the convergence of GA slows down gradually as the optimization progresses, causing the improvement factor to reach a plateau eventually. In this report, we propose an interleaved segment correction (ISC) method that can significantly boost the improvement factor for the same number of iterations compared with the conventional all-segment correction method. In the ISC method, all phase segments are divided into a number of interleaved groups; GA optimization procedures are performed individually and sequentially on each group of segments. The final correction phase mask is formed by applying the correction phases of all interleaved groups together on the SLM. The ISC method proves especially useful in practice because of its ability to achieve better improvement factors when noise is present in the system. We have also demonstrated that the imaging quality improves as better correction phases are found and applied on the SLM. Additionally, the ISC method lowers the dynamic range demands on detection devices. The proposed method holds potential for applications such as high-resolution imaging in deep tissue.
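    The group-wise optimization scheme can be sketched with a toy focusing model: a correction mask is sought that cancels unknown scattering phases, and a tiny GA refines one interleaved group of segments at a time. The GA operators, population sizes, and the 4-group split are illustrative assumptions, not the authors' parameters:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    N = 64                                   # number of SLM phase segments
    scatter = rng.uniform(0, 2 * np.pi, N)   # unknown scattering phases

    def intensity(phi):
        # normalized focus intensity for correction mask phi (ideal: phi = -scatter)
        return abs(np.exp(1j * (phi + scatter)).sum()) ** 2 / N ** 2

    def ga_optimize(phi, idx, pop=20, gens=40, sigma=0.5):
        """Tiny GA refining only the segments listed in idx (elitist selection)."""
        population = [phi.copy() for _ in range(pop)]
        for p in population[1:]:
            p[idx] = rng.uniform(0, 2 * np.pi, idx.size)
        for _ in range(gens):
            order = np.argsort([intensity(p) for p in population])[::-1]
            parents = [population[i] for i in order[:pop // 2]]
            children = []
            for _ in range(pop - len(parents)):
                a, b = rng.choice(len(parents), 2, replace=False)
                child = parents[a].copy()
                mask = rng.random(idx.size) < 0.5        # uniform crossover
                child[idx[mask]] = parents[b][idx[mask]]
                mut = rng.random(idx.size) < 0.1         # mutation
                child[idx[mut]] += rng.normal(0, sigma, mut.sum())
                children.append(child)
            population = parents + children
        return max(population, key=intensity)

    # Interleaved segment correction: optimize the interleaved groups sequentially
    phi = np.zeros(N)
    groups = [np.arange(g, N, 4) for g in range(4)]      # 4 interleaved groups
    for idx in groups:
        phi = ga_optimize(phi, idx)
    print(intensity(phi))   # enhancement relative to the ideal focus (= 1.0)
    ```

    Because each group is small, the GA searches a lower-dimensional space per stage, which is the intuition behind the improvement-factor boost reported in the abstract.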

  12. GAMA/H-ATLAS: common star formation rate indicators and their dependence on galaxy physical parameters

    NASA Astrophysics Data System (ADS)

    Wang, L.; Norberg, P.; Gunawardhana, M. L. P.; Heinis, S.; Baldry, I. K.; Bland-Hawthorn, J.; Bourne, N.; Brough, S.; Brown, M. J. I.; Cluver, M. E.; Cooray, A.; da Cunha, E.; Driver, S. P.; Dunne, L.; Dye, S.; Eales, S.; Grootes, M. W.; Holwerda, B. W.; Hopkins, A. M.; Ibar, E.; Ivison, R.; Lacey, C.; Lara-Lopez, M. A.; Loveday, J.; Maddox, S. J.; Michałowski, M. J.; Oteo, I.; Owers, M. S.; Popescu, C. C.; Smith, D. J. B.; Taylor, E. N.; Tuffs, R. J.; van der Werf, P.

    2016-09-01

    We compare common star formation rate (SFR) indicators in the local Universe in the Galaxy and Mass Assembly (GAMA) equatorial fields (~160 deg²), using ultraviolet (UV) photometry from GALEX, far-infrared and sub-millimetre (sub-mm) photometry from the Herschel Astrophysical Terahertz Large Area Survey, and Hα spectroscopy from the GAMA survey. With a high-quality sample of 745 galaxies (median redshift = 0.08), we consider three SFR tracers: UV luminosity corrected for dust attenuation using the UV spectral slope β (SFRUV, corr), Hα line luminosity corrected for dust using the Balmer decrement (BD) (SFRH α, corr), and the combination of UV and infrared (IR) emission (SFRUV + IR). We demonstrate that SFRUV, corr can be reconciled with the other two tracers after applying attenuation corrections by calibrating the infrared excess (IRX, i.e., the IR-to-UV luminosity ratio) and the attenuation in Hα (derived from the BD) against β. However, β on its own is very unlikely to be a reliable attenuation indicator. We find that attenuation correction factors depend on parameters such as stellar mass (M*), z and dust temperature (Tdust), but not on Hα equivalent width or Sérsic index. Due to the large scatter in the IRX versus β correlation, when compared to SFRUV + IR, the β-corrected SFRUV, corr exhibits systematic deviations as a function of IRX, BD and Tdust.
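    The β-based attenuation correction works by converting the UV slope into magnitudes of attenuation and brightening the observed UV luminosity accordingly. As a minimal sketch we use the classic Meurer et al. (1999) calibration A_1600 = 4.43 + 1.99 β as a stand-in for the paper's own GAMA-calibrated relations:

    ```python
    import numpy as np

    def uv_dust_correction(l_uv_obs, beta):
        """Correct an observed UV luminosity for dust using an IRX-beta type
        relation.  Uses the Meurer et al. (1999) calibration
        A_1600 = 4.43 + 1.99*beta as an illustrative stand-in for the
        paper's own calibrations; attenuation is floored at zero."""
        a_uv = np.clip(4.43 + 1.99 * np.asarray(beta), 0.0, None)  # magnitudes
        return l_uv_obs * 10 ** (0.4 * a_uv)

    # Example: beta = -1.0 gives A_1600 = 2.44 mag, i.e. a factor of ~9.5
    print(uv_dust_correction(1.0, -1.0))
    ```

    The abstract's point is precisely that such a one-parameter β correction carries large scatter, so the correction factor should additionally depend on quantities like stellar mass and dust temperature.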

  13. Corrections on the Thermometer Reading in an Air Stream

    NASA Technical Reports Server (NTRS)

    Van Der Maas, H J; Wynia, S

    1940-01-01

    A method is described for checking a correction formula, based partly on theoretical considerations, for adiabatic compression and friction in flight tests, and for determining the value of the constant. It is necessary to apply a threefold correction to each thermometer reading: corrections for adiabatic compression, for friction, and for time lag.
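    The adiabatic-compression part of such a correction is conventionally written as a temperature rise proportional to the square of the airspeed, dT = r v² / (2 c_p), with a recovery factor r absorbing the friction effects. The recovery factor and c_p values below are illustrative, not the constants determined in the report:

    ```python
    def adiabatic_correction(v, recovery=0.85, cp=1005.0):
        """Temperature rise (K) of a thermometer in an airstream due to
        adiabatic compression and friction: dT = r * v**2 / (2 * cp).
        recovery (dimensionless) and cp (J/(kg*K) for air) are
        illustrative values, not those of the report."""
        return recovery * v ** 2 / (2.0 * cp)

    # A probe at 100 m/s reads roughly 4 K too high
    print(adiabatic_correction(100.0))
    ```

    The time-lag correction is separate: it depends on the thermometer's thermal inertia and the rate of change of the ambient temperature rather than on airspeed.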

  14. Influence of CT-based depth correction of renal scintigraphy in evaluation of living kidney donors on side selection and postoperative renal function: is it necessary to know the relative renal function?

    PubMed

    Weinberger, Sarah; Klarholz-Pevere, Carola; Liefeldt, Lutz; Baeder, Michael; Steckhan, Nico; Friedersdorff, Frank

    2018-03-22

    To analyse the influence of CT-based depth correction in the assessment of split renal function in potential living kidney donors. In 116 consecutive living kidney donors, preoperative split renal function was assessed using CT-based depth correction. The influence on donor side selection and on the postoperative renal function of the living kidney donors was analyzed. Linear regression analysis was performed to identify predictors of postoperative renal function. A left versus right kidney depth variation of more than 1 cm was found in 40/114 donors (35%). Eleven patients (10%) had a difference of more than 5% in relative renal function after depth correction. Kidney depth variation and changes in relative renal function after depth correction would have influenced side selection in 30 of 114 living kidney donors. CT depth correction did not improve the predictability of postoperative renal function of the living kidney donor. In general, it was not possible to predict postoperative renal function from preoperative total and relative renal function. In multivariate linear regression analysis, age and BMI were identified as the most important predictors of postoperative renal function of the living kidney donors. Our results clearly indicate that, concerning the postoperative renal function of living kidney donors, the relative renal function of the donated kidney seems to be less important than other factors. A multimodal assessment considering all available results, including kidney size, location of the kidney, and split renal function, remains necessary.

  15. Semi-empirical and phenomenological instrument functions for the scanning tunneling microscope

    NASA Astrophysics Data System (ADS)

    Feuchtwang, T. E.; Cutler, P. H.; Notea, A.

    1988-08-01

    Recent progress in the development of a convenient algorithm for the determination of a quantitative local density of states (LDOS) of the sample, from data measured in the STM, is reviewed. It is argued that the sample LDOS strikes a good balance between the information content of a surface characteristic and the effort required to obtain it experimentally. Hence, procedures to determine the sample LDOS as directly and as tip-model independently as possible are emphasized. The solution of the STM's "inverse" problem in terms of novel versions of the instrument (or Green) function technique is considered in preference to the well known, more direct solutions. Two types of instrument functions are considered: approximations of the basic tip-instrument function obtained from the transfer Hamiltonian theory of the STM-STS, and phenomenological instrument functions devised as a systematic scheme for semi-empirical first-order corrections of "ideal" models. The instrument function, in this case, describes the corrections as the response of an independent component of the measuring apparatus inserted between the "ideal" instrument and the measured data. This linear response theory of measurement is reviewed and applied. A procedure for the estimation of the consistency of the model and the systematic errors due to the use of an approximate instrument function is presented. The independence of the instrument function techniques from explicit microscopic models of the tip is noted. The need for semi-empirical, as opposed to strictly empirical or analytical, determination of the instrument function is discussed. The extension of the theory to the scanning tunneling spectrometer is noted, as well as its use in a theory of resolution.

  16. Resolution of the COBE Earth sensor anomaly

    NASA Technical Reports Server (NTRS)

    Sedler, J.

    1993-01-01

    Since its launch on November 18, 1989, the Earth sensors on the Cosmic Background Explorer (COBE) have shown much greater noise than expected. The problem was traced to an error in Earth horizon acquisition-of-signal (AOS) times. Due to this error, the AOS timing correction was ignored, causing Earth sensor split-to-index (SI) angles to be incorrectly time-tagged to minor frame synchronization times. Resulting Earth sensor residuals, based on gyro-propagated fine attitude solutions, were as large as plus or minus 0.45 deg (much greater than the plus or minus 0.10 deg of the scanner specifications (Reference 1)). Also, discontinuities in single-frame coarse attitude pitch and roll angles (as large as 0.80 and 0.30 deg, respectively) were noted several times during each orbit. However, over the course of the mission, each Earth sensor was observed to independently and unexpectedly reset and then reactivate into a new configuration. Although the telemetered AOS timing corrections are still in error, a procedure has been developed to approximate and apply these corrections. This paper describes the approach, analysis, and results of approximating and applying AOS timing adjustments to correct Earth scanner data. Furthermore, due to the continuing degradation of COBE's gyroscopes, gyro-propagated fine attitude solutions may soon become unavailable, requiring an alternative method for attitude determination. By correcting Earth scanner AOS telemetry, as described in this paper, more accurate single-frame attitude solutions are obtained. All aforementioned pitch and roll discontinuities are removed. When proper AOS corrections are applied, the standard deviation of pitch residuals between coarse attitude and gyro-propagated fine attitude solutions decreases by a factor of 3. Also, the overall standard deviation of SI residuals from fine attitude solutions decreases by a factor of 4 (meeting sensor specifications) when AOS corrections are applied.

  17. Correction for specimen movement and rotation errors for in-vivo Optical Projection Tomography

    PubMed Central

    Birk, Udo Jochen; Rieckher, Matthias; Konstantinides, Nikos; Darrell, Alex; Sarasa-Renedo, Ana; Meyer, Heiko; Tavernarakis, Nektarios; Ripoll, Jorge

    2010-01-01

    The application of optical projection tomography to in-vivo experiments is limited by specimen movement during the acquisition. We present a set of mathematical correction methods applied to the acquired data stacks to correct for movement in both directions of the image plane. These methods have been applied to correct experimental data taken from in-vivo optical projection tomography experiments in Caenorhabditis elegans. Successful reconstructions for both fluorescence and white light (absorption) measurements are shown. Since no distinction is made between movement of the animal and movement of the rotation axis, this approach simultaneously removes artifacts due to mechanical drifts and errors in the assumed center of rotation. PMID:21258448
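    The abstract does not spell out the registration machinery, but in-plane movement correction of projections is often done by estimating a rigid shift between frames and undoing it. As a generic stand-in, a minimal phase-correlation sketch (not the authors' method):

    ```python
    import numpy as np

    def estimate_shift(a, b):
        """Estimate the integer (dy, dx) shift of image a relative to image b
        by phase correlation; a generic stand-in for movement estimation."""
        f = np.fft.fft2(a) * np.conj(np.fft.fft2(b))
        r = np.fft.ifft2(f / (np.abs(f) + 1e-12)).real
        dy, dx = np.unravel_index(np.argmax(r), r.shape)
        # map wrap-around indices to signed shifts
        dy = dy - a.shape[0] if dy > a.shape[0] // 2 else dy
        dx = dx - a.shape[1] if dx > a.shape[1] // 2 else dx
        return dy, dx

    rng = np.random.default_rng(2)
    ref = rng.random((64, 64))
    moved = np.roll(ref, (3, -5), axis=(0, 1))       # simulated specimen drift
    dy, dx = estimate_shift(moved, ref)
    corrected = np.roll(moved, (-dy, -dx), axis=(0, 1))
    print(dy, dx)   # prints 3 -5, the simulated drift
    ```

    Applying such a shift estimate frame by frame removes both specimen drift and rotation-axis error at once, which matches the abstract's observation that the two need not be distinguished.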

  18. Improved hepatic arterial fraction estimation using cardiac output correction of arterial input functions for liver DCE MRI

    NASA Astrophysics Data System (ADS)

    Chouhan, Manil D.; Bainbridge, Alan; Atkinson, David; Punwani, Shonit; Mookerjee, Rajeshwar P.; Lythgoe, Mark F.; Taylor, Stuart A.

    2017-02-01

    Liver dynamic contrast enhanced (DCE) MRI pharmacokinetic modelling could be useful in the assessment of diffuse liver disease and focal liver lesions, but is compromised by errors in arterial input function (AIF) sampling. In this study, we apply cardiac output correction to arterial input functions (AIFs) for liver DCE MRI and investigate the effect on dual-input single compartment hepatic perfusion parameter estimation and reproducibility. Thirteen healthy volunteers (28.7  ±  1.94 years, seven males) underwent liver DCE MRI and cardiac output measurement using aortic root phase contrast MRI (PCMRI), with reproducibility (n  =  9) measured at 7 d. Cardiac output AIF correction was undertaken by constraining the first pass AIF enhancement curve using the indicator-dilution principle. Hepatic perfusion parameters with and without cardiac output AIF correction were compared and 7 d reproducibility assessed. Differences between cardiac output corrected and uncorrected liver DCE MRI portal venous (PV) perfusion (p  =  0.066), total liver blood flow (TLBF) (p  =  0.101), hepatic arterial (HA) fraction (p  =  0.895), mean transit time (MTT) (p  =  0.646), distribution volume (DV) (p  =  0.890) were not significantly different. Seven day corrected HA fraction reproducibility was improved (mean difference 0.3%, Bland-Altman 95% limits-of-agreement (BA95%LoA)  ±27.9%, coefficient of variation (CoV) 61.4% versus 9.3%, ±35.5%, 81.7% respectively without correction). Seven day uncorrected PV perfusion was also improved (mean difference 9.3 ml min-1/100 g, BA95%LoA  ±506.1 ml min-1/100 g, CoV 64.1% versus 0.9 ml min-1/100 g, ±562.8 ml min-1/100 g, 65.1% respectively with correction) as was uncorrected TLBF (mean difference 43.8 ml min-1/100 g, BA95%LoA  ±586.7 ml min-1/ 100 g, CoV 58.3% versus 13.3 ml min-1/100 g, ±661.5 ml min-1/100 g, 60.9% respectively with correction). 
    Reproducibility of MTT was similar with and without correction (mean difference 2.4 s, BA95%LoA  ±26.7 s, CoV 60.8% uncorrected versus 3.7 s, ±27.8 s, 62.0% with correction), as was that of DV (mean difference 14.1%, BA95%LoA  ±48.2%, CoV 24.7% uncorrected versus 10.3%, ±46.0%, 23.9% with correction). Cardiac output AIF correction does not significantly affect the estimation of hepatic perfusion parameters, but it improves normal-volunteer 7 d HA fraction reproducibility while worsening PV perfusion and TLBF reproducibility. Improved HA fraction reproducibility may be important because arterialisation of liver perfusion is increased in chronic liver disease and within malignant liver lesions.
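    The indicator-dilution constraint mentioned in the abstract fixes the area under the first-pass AIF: the injected dose divided by the cardiac output. A minimal sketch of that rescaling, with function names, units, and the gamma-variate test curve as assumptions rather than the paper's implementation:

    ```python
    import numpy as np

    def trap(y, x):
        """Trapezoidal integral (written out to avoid NumPy version differences)."""
        return float(((y[1:] + y[:-1]) * np.diff(x) / 2.0).sum())

    def correct_aif(t, aif, dose_mmol, co_l_per_s):
        """Rescale the measured first-pass AIF so its area obeys the
        indicator-dilution principle: area = dose / cardiac output
        (mmol / (L/s) = mM*s).  Names and units are illustrative."""
        target_area = dose_mmol / co_l_per_s
        return aif * (target_area / trap(aif, t))

    t = np.linspace(0.0, 30.0, 301)                              # seconds
    aif = 5.0 * (t / 6.0) ** 3 * np.exp(3.0 * (1.0 - t / 6.0))   # gamma-variate first pass, mM
    aif_corr = correct_aif(t, aif, dose_mmol=7.5, co_l_per_s=0.1)
    print(trap(aif_corr, t))   # prints 75.0, i.e. dose / CO in mM*s
    ```

    In practice the constraint is applied only to the first-pass portion of the enhancement curve, with recirculation handled separately.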

  19. Toward an applied technology for quality measurement in health care.

    PubMed

    Berwick, D M

    1988-01-01

    Cost containment, financial incentives to conserve resources, the growth of for-profit hospitals, an aggressive malpractice environment, and demands from purchasers are among the forces today increasing the need for improved methods that measure quality in health care. At the same time, increasingly sophisticated databases and the existence of managed care systems yield new opportunities to observe and correct quality problems. Research on targets of measurement (structure, process, and outcome) and methods of measurement (implicit, explicit, and sentinel methods) has not yet produced managerially useful applied technology for quality measurement in real-world settings. Such an applied technology would have to be cheaper, faster, more flexible, better reported, and more multidimensional than the majority of current research on quality assurance. In developing a new applied technology for the measurement of health care quality, quantitative disciplines have much to offer, such as decision support systems, criteria based on rigorous decision analyses, utility theory, tools for functional status measurement, and advances in operations research.

  20. Technical Note: A Time-Dependent I(sub 0) Correction for Solar Occultation Instruments

    NASA Technical Reports Server (NTRS)

    Burton, Sharon P.; Thomason, Larry W.; Zawodny, Joseph M.

    2009-01-01

    Solar occultation has proven to be a reliable technique for the measurement of atmospheric constituents in the stratosphere. NASA's Stratospheric Aerosol and Gas Experiments (SAGE, SAGE II, and SAGE III) together have provided over 25 years of quality solar occultation data, a record that has been an important resource for the scientific exploration of atmospheric composition and climate change. Herein, we describe an improvement to the processing of SAGE data that corrects for a previously uncorrected short-term time-dependence in the calibration function. The variability relates to the apparent rotation of the scanning track with respect to the face of the Sun due to the motion of the satellite. Correcting for this effect decreases the measurement noise in the Level 1 line-of-sight optical depth measurements by approximately 40% in the middle and upper stratosphere for SAGE II and SAGE III, where it has been applied. The technique is potentially useful for any scanning solar occultation instrument, and suggests further improvement for future occultation measurements if a full-disk imaging system can be included.

  1. The free energy of a reaction coordinate at multiple constraints: a concise formulation

    NASA Astrophysics Data System (ADS)

    Schlitter, Jürgen; Klähn, Marco

    The free energy as a function of the reaction coordinate (rc) is the key quantity for the computation of equilibrium and kinetic quantities. When it is considered as the potential of mean force, the problem reduces to the calculation of the mean force at given values of the rc. We reinvestigate the PMCF (potential of mean constraint force) method, which applies a constraint to the rc and computes the mean force as the mean negative constraint force plus a metric tensor correction. The latter accounts for the constraint imposed on the rc and for possible artefacts due to additional constraints on other variables, which for practical reasons are often used in numerical simulations. Two main results are obtained that are of theoretical and practical interest. First, the correction term is given a very concise and simple form, which facilitates its interpretation and evaluation. Secondly, a theorem describes various rcs and possible combinations with constraints that can be used without introducing any correction to the constraint force. The results facilitate the computation of free energy by molecular dynamics simulations.
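    For context, one commonly quoted textbook form of the constrained (blue-moon / PMCF) mean-force result is sketched below. This is generic notation, not the paper's own concise reformulation, and the sign of the Lagrange multiplier λ depends on the convention chosen for the constraint force:

    ```latex
    \frac{\mathrm{d}A}{\mathrm{d}\xi'}
      = \frac{\left\langle Z^{-1/2}\,\bigl[\lambda + k_{\mathrm B}T\,G\bigr]\right\rangle_{\xi'}}
             {\left\langle Z^{-1/2}\right\rangle_{\xi'}},
    \qquad
    Z = \sum_{i}\frac{1}{m_i}
        \left(\frac{\partial \xi}{\partial \mathbf r_i}\right)^{\!2},
    \qquad
    G = \frac{1}{Z^{2}}\sum_{i,j}\frac{1}{m_i m_j}\,
        \frac{\partial \xi}{\partial \mathbf r_i}\cdot
        \frac{\partial^{2}\xi}{\partial \mathbf r_i\,\partial \mathbf r_j}\cdot
        \frac{\partial \xi}{\partial \mathbf r_j}.
    ```

    The factor involving Z and G is the metric tensor correction referred to in the abstract; the paper's contribution is a more concise form of this term and conditions under which it vanishes.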

  2. Communication: Accurate higher-order van der Waals coefficients between molecules from a model dynamic multipole polarizability

    DOE PAGES

    Tao, Jianmin; Rappe, Andrew M.

    2016-01-20

    Due to the absence of the long-range van der Waals (vdW) interaction, conventional density functional theory (DFT) often fails in the description of molecular complexes and solids. In recent years, considerable progress has been made in the development of the vdW correction. However, the vdW correction based on the leading-order coefficient C6 alone can only achieve limited accuracy, while accurate modeling of higher-order coefficients remains a formidable task, due to the strong non-additivity effect. Here, we apply a model dynamic multipole polarizability within a modified single-frequency approximation to calculate C8 and C10 between small molecules. We find that the higher-order vdW coefficients from this model can achieve remarkable accuracy, with mean absolute relative deviations of 5% for C8 and 7% for C10. As a result, inclusion of accurate higher-order contributions in the vdW correction will effectively enhance the predictive power of DFT in condensed matter physics and quantum chemistry.

  3. Pressure cell for investigations of solid-liquid interfaces by neutron reflectivity.

    PubMed

    Kreuzer, Martin; Kaltofen, Thomas; Steitz, Roland; Zehnder, Beat H; Dahint, Reiner

    2011-02-01

    We describe an apparatus for measuring scattering length density and structure of molecular layers at planar solid-liquid interfaces under high hydrostatic pressure conditions. The device is designed for in situ characterizations utilizing neutron reflectometry in the pressure range 0.1-100 MPa at temperatures between 5 and 60 °C. The pressure cell is constructed such that stratified molecular layers on crystalline substrates of silicon, quartz, or sapphire with a surface area of 28 cm² can be investigated against noncorrosive liquid phases. The large substrate surface area enables reflectivity to be measured down to 10⁻⁵ (without background correction) and thus facilitates determination of the scattering length density profile across the interface as a function of applied load. Our current interest is in the stability of oligolamellar lipid coatings on silicon surfaces against aqueous phases as a function of applied hydrostatic pressure and temperature, but the device can also be employed to probe the structure of any other solid-liquid interface.

  4. Statistical Method to Overcome Overfitting Issue in Rational Function Models

    NASA Astrophysics Data System (ADS)

    Alizadeh Moghaddam, S. H.; Mokhtarzade, M.; Alizadeh Naeini, A.; Alizadeh Moghaddam, S. A.

    2017-09-01

    Rational function models (RFMs) are known as one of the most appealing models and are extensively applied in geometric correction of satellite images and map production. Overfitting is a common issue with terrain-dependent RFMs that degrades the accuracy of RFM-derived geospatial products. This issue, resulting from the high number of RFM parameters, leads to ill-posedness of the RFMs. To tackle this problem, in this study a fast and robust statistical approach is proposed and compared to the Tikhonov regularization (TR) method, a frequently used solution to RFM overfitting. In the proposed method, a statistical test, namely a significance test, is applied to search for the RFM parameters that are resistant to the overfitting issue. The performance of the proposed method was evaluated on two real data sets of Cartosat-1 satellite images. The obtained results demonstrate the efficiency of the proposed method in terms of the achievable level of accuracy. Indeed, this technique shows an improvement of 50-80% over TR.
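    The significance-screening idea can be sketched on a linear-in-parameters toy model standing in for the (linearized) RFM terms: fit all candidate terms, compute a t-statistic per coefficient, and keep only the significant ones before refitting. The threshold and the stand-in model are illustrative assumptions:

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    # Toy model: many candidate polynomial terms, only a few truly active,
    # so an unrestricted fit risks overfitting (the RFM problem in miniature).
    n, terms = 60, 12
    X = rng.uniform(-1, 1, (n, terms))
    true_coef = np.zeros(terms)
    true_coef[[0, 3, 7]] = [2.0, -1.5, 0.8]
    y = X @ true_coef + rng.normal(0, 0.05, n)

    def significant_terms(X, y, t_crit=2.0):
        """Keep parameters whose t-statistic exceeds t_crit (illustrative
        threshold); returns a boolean mask over the columns of X."""
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        dof = X.shape[0] - X.shape[1]
        sigma2 = ((y - X @ beta) ** 2).sum() / dof        # residual variance
        cov = sigma2 * np.linalg.inv(X.T @ X)             # coefficient covariance
        t = beta / np.sqrt(np.diag(cov))
        return np.abs(t) >= t_crit

    keep = significant_terms(X, y)
    beta_reduced, *_ = np.linalg.lstsq(X[:, keep], y, rcond=None)
    print(np.flatnonzero(keep))   # indices of retained terms
    ```

    Dropping insignificant parameters shrinks the model the way the paper's screening shrinks the RFM, whereas TR instead keeps all parameters and penalizes their magnitudes.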

  5. Study of Thermodynamics of Liquid Noble-Metals Alloys Through a Pseudopotential Theory

    NASA Astrophysics Data System (ADS)

    Vora, Aditya M.

    2010-09-01

    The Gibbs-Bogoliubov (GB) inequality is applied to investigate the thermodynamic properties of some equiatomic noble-metal alloys in the liquid phase, such as Au-Cu, Ag-Cu, and Ag-Au, using a well-recognized pseudopotential formalism. For the description of the structure, the well-known Percus-Yevick (PY) hard-sphere model is used as a reference system. By applying a variational method, the hard-core diameters corresponding to minimum free energy have been found. With this procedure the thermodynamic properties such as entropy and heat of mixing have been computed. The influence of the local field correction functions of Hartree (H), Taylor (T), Ichimaru-Utsumi (IU), Farid et al. (F), and Sarkar et al. (S) is also investigated. The computed results for the excess entropy compare favourably with experiment in the case of liquid alloys, while the agreement is poor in the case of the heats of mixing. This may be due to the sensitivity of the heats of mixing to the potential parameters and the dielectric function.

  6. An efficient solution to the decoherence enhanced trivial crossing problem in surface hopping

    NASA Astrophysics Data System (ADS)

    Bai, Xin; Qiu, Jing; Wang, Linjun

    2018-03-01

    We provide an in-depth investigation of the time interval convergence when both trivial crossing and decoherence corrections are applied to Tully's fewest switches surface hopping (FSSH) algorithm. Using one force-based and one energy-based decoherence strategy as examples, we show that decoherence corrections intrinsically enhance the trivial crossing problem. We propose a restricted decoherence (RD) strategy and incorporate it into the self-consistent (SC) fewest switches surface hopping algorithm [L. Wang and O. V. Prezhdo, J. Phys. Chem. Lett. 5, 713 (2014)]. The resulting SC-FSSH-RD approach is applied to general Hamiltonians with different electronic couplings and electron-phonon couplings to mimic charge transport in tens to hundreds of molecules. In all cases, SC-FSSH-RD allows us to use a large time interval of 0.1 fs for convergence, and the simulation time is reduced by over one order of magnitude. Both the band and hopping mechanisms of charge transport have been captured perfectly. SC-FSSH-RD makes surface hops in the adiabatic representation and can be implemented in both diabatic and locally diabatic representations for wave function propagation. SC-FSSH-RD can potentially describe general nonadiabatic dynamics of electrons and excitons in organics and other materials.

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Genest-Beaulieu, C.; Bergeron, P., E-mail: genest@astro.umontreal.ca, E-mail: bergeron@astro.umontreal.ca

    We present a comparative analysis of atmospheric parameters obtained with the so-called photometric and spectroscopic techniques. Photometric and spectroscopic data for 1360 DA white dwarfs from the Sloan Digital Sky Survey (SDSS) are used, as well as spectroscopic data from the Villanova White Dwarf Catalog. We first test the calibration of the ugriz photometric system by using model atmosphere fits to observed data. Our photometric analysis indicates that the ugriz photometry appears well calibrated when the SDSS to AB_95 zero-point corrections are applied. The spectroscopic analysis of the same data set reveals that the so-called high-log g problem can be solved by applying published correction functions that take into account three-dimensional hydrodynamical effects. However, a comparison between the SDSS and the White Dwarf Catalog spectra also suggests that the SDSS spectra still suffer from a small calibration problem. We then compare the atmospheric parameters obtained from both fitting techniques and show that the photometric temperatures are systematically lower than those obtained from spectroscopic data. This systematic offset may be linked to the hydrogen line profiles used in the model atmospheres. We finally present the results of an analysis aimed at measuring surface gravities using photometric data only.

  8. BLACK HOLE MASS AND EDDINGTON RATIO DISTRIBUTION FUNCTIONS OF X-RAY-SELECTED BROAD-LINE AGNs AT z {approx} 1.4 IN THE SUBARU XMM-NEWTON DEEP FIELD

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nobuta, K.; Akiyama, M.; Ueda, Y.

    2012-12-20

    In order to investigate the growth of supermassive black holes (SMBHs), we construct the black hole mass function (BHMF) and Eddington ratio distribution function (ERDF) of X-ray-selected broad-line active galactic nuclei (AGNs) at z {approx} 1.4 in the Subaru XMM-Newton Deep Survey (SXDS) field. A significant part of the accretion growth of SMBHs is thought to take place in this redshift range. Black hole masses of X-ray-selected broad-line AGNs are estimated using the width of the broad Mg II line and the 3000 A monochromatic luminosity. We supplement the Mg II FWHM values with the H{alpha} FWHM obtained from our NIR spectroscopic survey. Using the black hole masses of broad-line AGNs at redshifts between 1.18 and 1.68, the binned broad-line AGN BHMFs and ERDFs are calculated using the V{sub max} method. To properly account for selection effects that impact the binned estimates, we derive the corrected broad-line AGN BHMFs and ERDFs by applying the maximum likelihood method, assuming that the ERDF is constant regardless of the black hole mass. We do not correct for the non-negligible uncertainties in virial BH mass estimates. Comparing the corrected broad-line AGN BHMF at z = 1.4 with that in the local universe, the z = 1.4 BHMF has a higher number density above 10{sup 8} M{sub Sun} but a lower number density below that mass range. The evolution may be indicative of a downsizing trend of accretion activity among the SMBH population. The evolution of broad-line AGN ERDFs from z = 1.4 to 0 indicates that the fraction of broad-line AGNs with accretion rates close to the Eddington limit is higher at higher redshifts.
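
    The binned V{sub max} estimator mentioned above weights each object by the inverse of the maximum comoving volume in which it could have been detected by the survey. A minimal sketch with hypothetical toy numbers (masses in log M/M_Sun, volumes in Mpc^3):

```python
import numpy as np

def binned_mass_function(log_masses, v_max, bin_edges):
    """Binned 1/Vmax estimator: each object contributes 1/Vmax to the
    number density of its mass bin, normalized per dex of mass."""
    phi = np.zeros(len(bin_edges) - 1)
    widths = np.diff(bin_edges)
    for logm, vmax in zip(log_masses, v_max):
        i = np.searchsorted(bin_edges, logm, side="right") - 1
        if 0 <= i < len(phi):
            phi[i] += 1.0 / vmax
    return phi / widths  # number density in Mpc^-3 dex^-1

# toy sample: three objects, two mass bins of width 0.5 dex
phi = binned_mass_function([8.1, 8.2, 8.9], [100.0, 200.0, 50.0],
                           np.array([8.0, 8.5, 9.0]))
```

The maximum-likelihood correction in the paper goes beyond this simple estimator by modeling the selection function explicitly.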

  9. Elucidation of molecular kinetic schemes from macroscopic traces using system identification

    PubMed Central

    González-Maeso, Javier; Sealfon, Stuart C.; Galocha-Iragüen, Belén; Brezina, Vladimir

    2017-01-01

    Overall cellular responses to biologically-relevant stimuli are mediated by networks of simpler lower-level processes. Although information about some of these processes can now be obtained by visualizing and recording events at the molecular level, this is still possible only in especially favorable cases. Therefore the development of methods to extract the dynamics and relationships between the different lower-level (microscopic) processes from the overall (macroscopic) response remains a crucial challenge in the understanding of many aspects of physiology. Here we have devised a hybrid computational-analytical method to accomplish this task, the SYStems-based MOLecular kinetic scheme Extractor (SYSMOLE). SYSMOLE utilizes system-identification input-output analysis to obtain a transfer function between the stimulus and the overall cellular response in the Laplace-transformed domain. It then derives a Markov-chain state molecular kinetic scheme uniquely associated with the transfer function by means of a classification procedure and an analytical step that imposes general biological constraints. We first tested SYSMOLE with synthetic data and evaluated its performance in terms of its rate of convergence to the correct molecular kinetic scheme and its robustness to noise. We then examined its performance on real experimental traces by analyzing macroscopic calcium-current traces elicited by membrane depolarization. SYSMOLE derived the correct, previously known molecular kinetic scheme describing the activation and inactivation of the underlying calcium channels and correctly identified the accepted mechanism of action of nifedipine, a calcium-channel blocker clinically used in patients with cardiovascular disease. Finally, we applied SYSMOLE to study the pharmacology of a new class of glutamate antipsychotic drugs and their crosstalk mechanism through a heteromeric complex of G protein-coupled receptors. 
Our results indicate that our methodology can be successfully applied to accurately derive molecular kinetic schemes from experimental macroscopic traces, and we anticipate that it may be useful in the study of a wide variety of biological systems. PMID:28192423

  10. In Our Lifetime: A Reformist View of Correctional Education.

    ERIC Educational Resources Information Center

    Arbenz, Richard L.

    1994-01-01

    Effective correctional education strategies should be used as models for comprehensive prison reform: prisons should become schools. Cognitive-democratic educational theory has been successfully applied in correctional education, demonstrating belief in people's intrinsic worth and ability to change. (SK)

  11. Personalized pseudophakic model

    NASA Astrophysics Data System (ADS)

    Ribeiro, F.; Castanheira-Dinis, A.; Dias, J. M.

    2014-08-01

    With the aim of taking into account all optical aberrations, a personalized pseudophakic optical model was designed for refractive evaluation using ray tracing software. Starting from a generic model, all clinically measurable data were replaced by personalized measurements. Data from the corneal anterior and posterior surfaces were imported from a grid of elevation data obtained by topography, and a formula for calculating the intraocular lens (IOL) position was developed based on the lens equator. For the assessment of refractive error, a merit function was built that minimizes the deviation of the Modulation Transfer Function values from their diffraction-limited values at spatial frequencies up to the discrimination limit of the human eye, weighted by the human contrast sensitivity function. The model was tested on the refractive evaluation of 50 pseudophakic eyes. The developed model shows good correlation with the subjective evaluation of a pseudophakic population and has the added advantage of being independent of corrective factors, allowing it to be immediately adaptable to new technological developments. In conclusion, this personalized model, which uses individual biometric values, allows for a precise refractive assessment and is a valuable tool for accurate IOL power calculation, including in conditions to which population averages and the commonly used regression correction factors do not apply, thus achieving the goal of being both personalized and universally applicable.

  12. Automatic red eye correction and its quality metric

    NASA Astrophysics Data System (ADS)

    Safonov, Ilia V.; Rychagov, Michael N.; Kang, KiMin; Kim, Sang Ho

    2008-01-01

    Red eye artifacts are a troublesome defect of amateur photos. Correcting red eyes during printing without user intervention, and thereby making photos more pleasant for an observer, is an important task. A novel, efficient technique for automatic correction of red eyes, aimed at photo printers, is proposed. The algorithm is independent of face orientation and is capable of detecting paired red eyes as well as single red eyes. The approach is based on 3D tables with typicalness levels for red eyes and human skin tones, and on directional edge detection filters for processing the redness image. Machine learning is applied for feature selection. For classification of red eye regions, a cascade of classifiers is applied, including a Gentle AdaBoost committee of Classification and Regression Trees (CART). The retouching stage includes desaturation, darkening, and blending with the initial image. Several implementations of the approach are possible, trading off detection and correction quality, processing time, and memory footprint. A numeric quality criterion for automatic red eye correction is also proposed. This quality metric is constructed by applying the Analytic Hierarchy Process (AHP) to consumer opinions about correction outcomes. The proposed numeric metric helped to choose algorithm parameters via an optimization procedure. Experimental results demonstrate the high accuracy and efficiency of the proposed algorithm in comparison with existing solutions.
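
    The retouching stage described above (desaturation, darkening, blending with the initial image) can be sketched as follows. The channel weights, darkening factor, and per-pixel blend mask here are illustrative assumptions, not the authors' tuned values:

```python
import numpy as np

def retouch_red_eye(rgb, mask, darken=0.7):
    """Replace red pupil pixels by a desaturated, darkened value,
    alpha-blended with the original image using the mask weight (0..1)."""
    img = rgb.astype(float)
    # desaturate using the mean of the green and blue channels, which
    # are largely unaffected by the red-eye flash artifact
    gray = 0.5 * (img[..., 1] + img[..., 2])
    corrected = np.stack([gray, gray, gray], axis=-1) * darken
    alpha = mask[..., None]  # blend weight from the detection stage
    out = alpha * corrected + (1.0 - alpha) * img
    return np.clip(out, 0, 255).astype(np.uint8)

# one fully masked "red" pixel: (200, 80, 40) -> desaturated and darkened
out = retouch_red_eye(np.array([[[200, 80, 40]]], dtype=np.uint8),
                      np.array([[1.0]]))
```

Pixels with mask weight 0 are returned unchanged, so the correction blends smoothly into the surrounding iris.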

  13. Dispersion- and Exchange-Corrected Density Functional Theory for Sodium Ion Hydration.

    PubMed

    Soniat, Marielle; Rogers, David M; Rempe, Susan B

    2015-07-14

    A challenge in density functional theory is developing functionals that simultaneously describe intermolecular electron correlation and electron delocalization. Recent exchange-correlation functionals address those two issues by adding corrections important at long ranges: an atom-centered pairwise dispersion term to account for correlation and a modified long-range component of the electron exchange term to correct for delocalization. Here we investigate how those corrections influence the accuracy of binding free energy predictions for sodium-water clusters. We find that the dual-corrected ωB97X-D functional gives cluster binding energies closest to high-level ab initio methods (CCSD(T)). Binding energy decomposition shows that the ωB97X-D functional predicts the smallest ion-water (pairwise) interaction energy and larger multibody contributions for a four-water cluster than most other functionals - a trend consistent with CCSD(T) results. Also, ωB97X-D produces the smallest amounts of charge transfer and the least polarizable waters of the density functionals studied, which mimics the lower polarizability of CCSD. When compared with experimental binding free energies, however, the exchange-corrected CAM-B3LYP functional performs best (error <1 kcal/mol), possibly because of its parametrization to experimental formation enthalpies. For clusters containing more than four waters, "split-shell" coordination must be considered to obtain accurate free energies in comparison with experiment.

  14. 40 CFR 1065.650 - Emission calculations.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... for performing the drift validation according to § 1065.550(b). When applying the initial THC and CH4...-corrected set of calculations as described in § 1065.520(g)(7). (ii) Correct all THC and CH4 concentrations... § 1065.667. (5) Mass of NMHC. Compare the corrected mass of NMHC to corrected mass of THC. If the...

  15. Characterizing bonding patterns in diradicals and triradicals by density-based wave function analysis: A uniform approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Orms, Natalie; Rehn, Dirk; Dreuw, Andreas

    Density-based wave function analysis enables unambiguous comparisons of electronic structure computed by different methods and removes the ambiguity of orbital choices. Here, we use this tool to investigate the performance of different spin-flip methods for several prototypical diradicals and triradicals. In contrast to previous calibration studies that focused on energy gaps between high- and low-spin states, we focus on the properties of the underlying wave functions, such as the number of effectively unpaired electrons. Comparison of different density functional and wave function theory results provides insight into the performance of the different methods when applied to strongly correlated systems such as polyradicals. We also show that canonical molecular orbitals for species like large copper-containing diradicals fail to correctly represent the underlying electronic structure due to highly non-Koopmans character, while density-based analysis of the same wave function delivers a clear picture of the bonding pattern.

  16. Characterizing bonding patterns in diradicals and triradicals by density-based wave function analysis: A uniform approach

    DOE PAGES

    Orms, Natalie; Rehn, Dirk; Dreuw, Andreas; ...

    2017-12-21

    Density-based wave function analysis enables unambiguous comparisons of electronic structure computed by different methods and removes the ambiguity of orbital choices. Here, we use this tool to investigate the performance of different spin-flip methods for several prototypical diradicals and triradicals. In contrast to previous calibration studies that focused on energy gaps between high- and low-spin states, we focus on the properties of the underlying wave functions, such as the number of effectively unpaired electrons. Comparison of different density functional and wave function theory results provides insight into the performance of the different methods when applied to strongly correlated systems such as polyradicals. We also show that canonical molecular orbitals for species like large copper-containing diradicals fail to correctly represent the underlying electronic structure due to highly non-Koopmans character, while density-based analysis of the same wave function delivers a clear picture of the bonding pattern.

  17. The role of the cerebellum in the regulation of language functions.

    PubMed

    Starowicz-Filip, Anna; Chrobak, Adrian Andrzej; Moskała, Marek; Krzyżewski, Roger M; Kwinta, Borys; Kwiatkowski, Stanisław; Milczarek, Olga; Rajtar-Zembaty, Anna; Przewoźnik, Dorota

    2017-08-29

    The present paper is a review of studies on the role of the cerebellum in the regulation of language functions. This brain structure, until recently associated chiefly with motor skills, visual-motor coordination, and balance, proves to be significant also for cognitive functioning. With regard to language functions, studies show that the cerebellum determines verbal fluency (both semantic and formal), expressive and receptive grammar processing, the ability to identify and correct language mistakes, and writing skills. Cerebellar damage is a possible cause of aphasia or the cerebellar mutism syndrome (CMS). Decreased cerebellocortical connectivity as well as anomalies in the structure of the cerebellum are emphasized in numerous developmental dyslexia theories. The cerebellum is characterized by linguistic lateralization. From the neuroanatomical perspective, its right hemisphere and dentate nucleus, which have multiple cerebellocortical connections with the cerebral cortical language areas, are particularly important for language functions. Usually, language deficits that develop as a result of cerebellar damage have subclinical intensity and require sensitive neuropsychological diagnostic tools designed to assess higher verbal functions.

  18. Effects of a multichannel dynamic functional electrical stimulation system on hemiplegic gait and muscle forces

    PubMed Central

    Qian, Jing-guang; Rong, Ke; Qian, Zhenyun; Wen, Chen; Zhang, Songning

    2015-01-01

    [Purpose] The purpose of the study was to design and implement a multichannel dynamic functional electrical stimulation system and investigate the acute effects of functional electrical stimulation of the tibialis anterior and rectus femoris on ankle and knee sagittal-plane kinematics and related muscle forces during hemiplegic gait. [Subjects and Methods] A multichannel dynamic electrical stimulation system was developed with 8-channel low frequency current generators. Eight male hemiplegic patients were trained for 4 weeks with electrical stimulation of the tibialis anterior and rectus femoris muscles during walking, which was coupled with active contraction. Kinematic data were collected, and muscle forces of the tibialis anterior and rectus femoris of the affected limbs were analyzed using a musculoskeletal modeling approach before and after training. A paired sample t-test was used to detect the differences between before and after training. [Results] The step length of the affected limb significantly increased after the stimulation was applied. The maximum dorsiflexion angle and maximum knee flexion angle of the affected limb both increased significantly during stimulation. The maximum muscle forces of both the tibialis anterior and rectus femoris increased significantly during stimulation compared with before functional electrical stimulation was applied. [Conclusion] This study established a functional electrical stimulation strategy based on hemiplegic gait analysis and musculoskeletal modeling. The multichannel functional electrical stimulation system successfully corrected foot drop and altered the circumduction hemiplegic gait pattern. PMID:26696734

  19. Improvement of dose calculation in radiation therapy due to metal artifact correction using the augmented likelihood image reconstruction.

    PubMed

    Ziemann, Christian; Stille, Maik; Cremers, Florian; Buzug, Thorsten M; Rades, Dirk

    2018-04-17

    Metal artifacts caused by high-density implants lead to incorrectly reconstructed Hounsfield units in computed tomography images. This can result in a loss of accuracy in dose calculation in radiation therapy. This study investigates the potential of the metal artifact reduction algorithms Augmented Likelihood Image Reconstruction and linear interpolation to improve dose calculation in the presence of metal artifacts. In order to simulate a pelvis with a double-sided total endoprosthesis, a polymethylmethacrylate phantom was equipped with two steel bars. Artifacts were reduced by applying the Augmented Likelihood Image Reconstruction, a linear interpolation, and a manual correction approach. Using the treatment planning system Eclipse™, identical planning target volumes for an idealized prostate, as well as structures for bladder and rectum, were defined in corrected and noncorrected images. Volumetric modulated arc therapy plans were created with double arc rotations, with and without avoidance sectors that mask out the prosthesis. The irradiation plans were analyzed for variations in the dose distribution and its homogeneity. Dosimetric measurements were performed using isocentrically positioned ionization chambers. Irradiation plans based on images containing artifacts lead to a dose error of up to 8.4% at the isocenter. Corrections with the Augmented Likelihood Image Reconstruction reduce this dose error to 2.7%, corrections with linear interpolation to 3.2%, and manual artifact correction to 4.1%. When artifact correction was applied, dose homogeneity was slightly improved for all investigated methods. Furthermore, the calculated mean doses for rectum and bladder are higher if avoidance sectors are applied. Streaking artifacts cause imprecise dose calculation in irradiation plans. With a metal artifact correction algorithm, planning accuracy can be significantly improved. Best results were achieved with the Augmented Likelihood Image Reconstruction algorithm. © 2018 The Authors. Journal of Applied Clinical Medical Physics published by Wiley Periodicals, Inc. on behalf of American Association of Physicists in Medicine.

  20. Comparison of observation level versus 24-hour average atmospheric loading corrections in VLBI analysis

    NASA Astrophysics Data System (ADS)

    MacMillan, D. S.; van Dam, T. M.

    2009-04-01

    Variations in the horizontal distribution of atmospheric mass induce displacements of the Earth's surface. Theoretical estimates indicate that the predicted surface displacement is often large enough to be detected by current geodetic techniques. In fact, the effects of atmospheric pressure loading have been detected in Global Positioning System (GPS) coordinate time series [van Dam et al., 1994; Dong et al., 2002; Scherneck et al., 2003; Zerbini et al., 2004] and very long baseline interferometry (VLBI) coordinates [Rabbel and Schuh, 1986; Manabe et al., 1991; van Dam and Herring, 1994; Schuh et al., 2003; MacMillan and Gipson, 1994; and Petrov and Boy, 2004]. Some of these studies applied the atmospheric displacement at the observation level, while in other studies the predicted atmospheric and observed geodetic surface displacements have been averaged over 24 hours. A direct comparison of observation-level and 24-hour corrections has not been carried out for VLBI to determine whether one approach is superior. In this presentation, we address the following questions: 1) Is it better to correct geodetic data at the observation level rather than applying corrections averaged over 24 hours to estimated geodetic coordinates a posteriori? 2) At sub-daily periods, the atmospheric mass signal is composed of two components: a tidal component and a non-tidal component. If observation-level corrections reduce the scatter of VLBI data more than a posteriori corrections, is it sufficient to model only the atmospheric tides, or must the entire atmospheric load signal be incorporated into the corrections? 3) When solutions from different geodetic techniques (or analysis centers within a technique) are combined (e.g., for ITRF2008), not all solutions may have applied atmospheric loading corrections. Are any systematic effects introduced into the estimated TRF when atmospheric loading is applied?
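
    Question 1 above can be illustrated with a toy series: if the loading signal varies within a day, subtracting only its 24-hour average leaves the sub-daily part in the residuals, whereas an observation-level correction removes it entirely. A minimal sketch with a synthetic semidiurnal load (all amplitudes hypothetical):

```python
import numpy as np

rng = np.random.default_rng(2)
t = np.arange(288) / 288.0                 # one day of 5-minute epochs
load = 3.0 * np.sin(2 * np.pi * 2 * t)     # mm, toy semidiurnal loading signal
obs = 1.0 + load + 0.1 * rng.normal(size=t.size)   # geodetic height series

# correction applied at the observation level removes the full signal;
# an a posteriori 24-hour average removes only the daily mean (~0 here)
resid_obs_level = np.std(obs - load)
resid_daily_avg = np.std(obs - load.mean())
```

The observation-level residual scatter is at the noise floor, while the daily-average correction leaves essentially the whole semidiurnal signal.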

  1. Using a System Identification Approach to Investigate Subtask Control during Human Locomotion

    PubMed Central

    Logan, David; Kiemel, Tim; Jeka, John J.

    2017-01-01

    Here we apply a control theoretic view of movement to the behavior of human locomotion with the goal of using perturbations to learn about subtask control. Controlling one's speed and maintaining upright posture are two critical subtasks, or underlying functions, of human locomotion. How the nervous system simultaneously controls these two subtasks was investigated in this study. Continuous visual and mechanical perturbations were applied concurrently to subjects (n = 20) as probes to investigate these two subtasks during treadmill walking. Novel application of harmonic transfer function (HTF) analysis to human motor behavior was used, and these HTFs were converted to the time-domain based representation of phase-dependent impulse response functions (ϕIRFs). These ϕIRFs were used to identify the mapping from perturbation inputs to kinematic and electromyographic (EMG) outputs throughout the phases of the gait cycle. Mechanical perturbations caused an initial, passive change in trunk orientation and, at some phases of stimulus presentation, a corrective trunk EMG and orientation response. Visual perturbations elicited a trunk EMG response prior to a trunk orientation response, which was subsequently followed by an anterior-posterior displacement response. This finding supports the notion that there is a temporal hierarchy of functional subtasks during locomotion in which the control of upper-body posture precedes other subtasks. Moreover, the novel analysis we apply has the potential to probe a broad range of rhythmic behaviors to better understand their neural control. PMID:28123365

  2. A minimal cost function method for optimizing the age-Depth relation of deep-sea sediment cores

    NASA Astrophysics Data System (ADS)

    Brüggemann, Wolfgang

    1992-08-01

    The question of an optimal age-depth relation for deep-sea sediment cores has been raised frequently. The data from such cores (e.g., δ18O values) are used to test the astronomical theory of ice ages as established by Milankovitch in 1938. In this work, we use a minimal cost function approach to find simultaneously an optimal age-depth relation and a linear model that optimally links solar insolation or other model input with global ice volume. Thus a general tool for the calibration of deep-sea cores to arbitrary tuning targets is presented. In this inverse modeling type approach, an objective function is minimized that penalizes: (1) the deviation of the data from the theoretical linear model (whose transfer function can be computed analytically for a given age-depth relation) and (2) the violation of a set of plausible assumptions about the model, the data and the obtained correction of a first guess age-depth function. These assumptions have been suggested before but are now quantified and incorporated explicitly into the objective function as penalty terms. We formulate an optimization problem that is solved numerically by conjugate gradient type methods. Using this direct approach, we obtain high coherences in the Milankovitch frequency bands (over 90%). Not only the data time series but also the derived correction to a first guess linear age-depth function (and therefore the sedimentation rate) itself contains significant energy in a broad frequency band around 100 kyr. The use of a sedimentation rate which varies continuously on ice age time scales results in a shift of energy from 100 kyr in the original data spectrum to 41, 23, and 19 kyr in the spectrum of the corrected data. However, a large proportion of the data variance remains unexplained, particularly in the 100 kyr frequency band, where there is no significant input by orbital forcing. 
The presented method is applied to a real sediment core and to the SPECMAP stack, and results are compared with those obtained in earlier investigations.
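
    The structure of such an objective function, a data-model misfit plus penalty terms on the correction to a first-guess age-depth relation, can be sketched in miniature. The toy series, penalty weight, and naive finite-difference descent below are stand-ins for the paper's formulation and its conjugate-gradient solvers:

```python
import numpy as np

def objective(corr, depth, data, target_ages, target, lam=1.0):
    """Term (1): squared deviation of the tuned record from the target
    series; term (2): roughness penalty on the correction to the
    first-guess age-depth function (simplified stand-ins)."""
    ages = depth + corr
    model = np.interp(ages, target_ages, target)
    return np.sum((data - model) ** 2) + lam * np.sum(np.diff(corr) ** 2)

def descend(corr, args, step=0.02, iters=500, h=1e-6):
    # naive finite-difference gradient descent, standing in for the
    # conjugate-gradient-type methods used in the paper
    for _ in range(iters):
        f0 = objective(corr, *args)
        grad = np.array([(objective(corr + h * np.eye(len(corr))[i], *args) - f0) / h
                         for i in range(len(corr))])
        corr = corr - step * grad
    return corr

depth = np.linspace(0.0, 9.0, 10)          # toy core depths
target_ages = np.linspace(0.0, 12.0, 241)  # insolation-like tuning target
target = np.sin(target_ages)
data = np.sin(depth + 0.3)                 # record offset by an unknown age shift
corr = descend(np.zeros(10), (depth, data, target_ages, target))
```

The optimizer recovers a nearly constant age correction close to the true offset, because the roughness term penalizes variation of the correction but not its mean.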

  3. Bias-dependent local structure of water molecules at an electrochemical interface

    NASA Astrophysics Data System (ADS)

    Pedroza, Luana; Brandimarte, Pedro; Rocha, Alexandre R.; Fernandez-Serra, Marivi

    2015-03-01

    Following the need for new - and renewable - sources of energy worldwide, fuel cells using electrocatalysts can be thought of as a viable option. Understanding the local structure of water molecules at the interfaces of the metallic electrodes is a key problem. Notably, the system is under an external potential bias, which makes the task of simulating this setup difficult. A first-principles description of all components of the system is the most appropriate methodology in order to advance understanding of electrochemical processes, since the metal there is usually charged. To correctly compute the effect of an external bias potential applied to electrodes, we combine density functional theory (DFT) and non-equilibrium Green's function methods (NEGF), with and without van der Waals interactions. In this work, we apply this methodology to study the electronic properties and forces of a single water molecule and of a water monolayer at the interface of gold electrodes. We find that the water molecule has a different torque direction depending on the sign of the applied bias. We also show that the bias changes the position of the most stable configuration, indicating that the external bias plays an important role in the structural properties of the interface. We acknowledge financial support from FAPESP.

  4. Adaptive optics with a magnetic deformable mirror: applications in the human eye

    NASA Astrophysics Data System (ADS)

    Fernandez, Enrique J.; Vabre, Laurent; Hermann, Boris; Unterhuber, Angelika; Povazay, Boris; Drexler, Wolfgang

    2006-10-01

    A novel deformable mirror using 52 independent magnetic actuators (MIRAO 52, Imagine Eyes) is presented and characterized for ophthalmic applications. The capabilities of the device to reproduce different surfaces, in particular Zernike polynomials up to the fifth order, are investigated in detail. The study of the influence functions of the deformable mirror reveals a significant linear response with the applied voltage. The correcting device also generates surfaces with high fidelity. The ranges of production of Zernike polynomials fully cover those typically found in the human eye, even for the cases of highly aberrated eyes. Data from keratoconic eyes are compared against the obtained ranges, showing that the deformable mirror is able to compensate for these strong aberrations. Ocular aberration correction with polychromatic light, using a near Gaussian spectrum of 130 nm full width at half maximum centered at 800 nm, in five subjects is accomplished by simultaneously using the deformable mirror and an achromatizing lens, in order to compensate for the monochromatic and chromatic aberrations, respectively. Results from living eyes, including one exhibiting 4.66 D of myopia and a near pathologic cornea with notable high order aberrations, show a practically perfect aberration correction. Benefits and applications of simultaneous monochromatic and chromatic aberration correction are finally discussed in the context of retinal imaging and vision.
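
    The significant linear response with applied voltage means the mirror surface is, to good approximation, a weighted sum of the actuator influence functions, so a target surface can be decomposed over them by least squares. A minimal sketch with a hypothetical random influence matrix (real influence functions would be measured with a wavefront sensor):

```python
import numpy as np

# Toy influence matrix: column j is the surface response, sampled at
# 6 points, to unit voltage on actuator j (hypothetical values).
rng = np.random.default_rng(0)
IF = rng.normal(size=(6, 3))

# a surface the mirror can reach exactly (it lies in the column space)
target = IF @ np.array([0.5, -0.2, 0.1])

# linear response: surface = IF @ v, so the least-squares command is
v, *_ = np.linalg.lstsq(IF, target, rcond=None)
residual = np.linalg.norm(IF @ v - target)
```

For a reachable surface the residual vanishes; for an arbitrary aberration profile the same solve gives the best-fit actuator voltages.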

  5. Noninvasive Thermometry Assisted by a Dual Function Ultrasound Transducer for Mild Hyperthermia

    PubMed Central

    Lai, Chun-Yen; Kruse, Dustin E.; Caskey, Charles F.; Stephens, Douglas N.; Sutcliffe, Patrick L.; Ferrara, Katherine W.

    2010-01-01

    Mild hyperthermia is increasingly important for the activation of temperature-sensitive drug delivery vehicles. Noninvasive ultrasound thermometry based on a 2-D speckle tracking algorithm was examined in this study. Here, a commercial ultrasound scanner, a customized co-linear array transducer, and a controlling PC system were used to generate mild hyperthermia. Because the co-linear array transducer is capable of both therapy and imaging at widely separated frequencies, RF image frames were acquired during therapeutic insonation and then exported for off-line analysis. For in vivo studies in a mouse model, before temperature estimation, motion correction was applied between a reference RF frame and subsequent RF frames. Both in vitro and in vivo experiments were examined; in the in vitro and in vivo studies, the average temperature error had a standard deviation of 0.7°C and 0.8°C, respectively. The application of motion correction improved the accuracy of temperature estimation, where the error range was −1.9 to 4.5°C without correction compared with −1.1 to 1.0°C following correction. This study demonstrates the feasibility of combining therapy and monitoring using a commercial system. In the future, real-time temperature estimation will be incorporated into this system. PMID:21156363
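
    The core of speckle-tracking thermometry is estimating the small apparent echo shift between a reference RF frame and subsequent frames. A one-dimensional sketch using the cross-correlation peak (the study itself uses a 2-D algorithm with motion correction, so this is only the underlying idea):

```python
import numpy as np

def estimate_shift(ref, cur):
    """Estimate the sample shift between two RF windows from the peak
    of their cross-correlation, the core step of speckle tracking."""
    ref = ref - ref.mean()
    cur = cur - cur.mean()
    xc = np.correlate(cur, ref, mode="full")
    return np.argmax(xc) - (len(ref) - 1)   # convert index to lag

rng = np.random.default_rng(1)
ref = rng.normal(size=256)      # synthetic speckle window
cur = np.roll(ref, 5)           # simulated echo shift of 5 samples
shift = estimate_shift(ref, cur)
```

In practice the integer-sample estimate would be refined to sub-sample precision (e.g., by interpolating around the correlation peak) before being mapped to temperature.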

  6. Active phase correction of high resolution silicon photonic arrayed waveguide gratings

    DOE PAGES

    Gehl, M.; Trotter, D.; Starbuck, A.; ...

    2017-03-10

    Arrayed waveguide gratings provide flexible spectral filtering functionality for integrated photonic applications. Achieving narrow channel spacing requires long optical path lengths which can greatly increase the footprint of devices. High index contrast waveguides, such as those fabricated in silicon-on-insulator wafers, allow tight waveguide bends which can be used to create much more compact designs. Both the long optical path lengths and the high index contrast contribute to significant optical phase error as light propagates through the device. Thus, silicon photonic arrayed waveguide gratings require active or passive phase correction following fabrication. We present the design and fabrication of compact silicon photonic arrayed waveguide gratings with channel spacings of 50, 10 and 1 GHz. The largest device, with 11 channels of 1 GHz spacing, has a footprint of only 1.1 cm2. Using integrated thermo-optic phase shifters, the phase error is actively corrected. We present two methods of phase error correction and demonstrate state-of-the-art cross-talk performance for high index contrast arrayed waveguide gratings. As a demonstration of possible applications, we perform RF channelization with 1 GHz resolution. In addition, we generate unique spectral filters by applying non-zero phase offsets calculated by the Gerchberg Saxton algorithm.

  7. Active phase correction of high resolution silicon photonic arrayed waveguide gratings.

    PubMed

    Gehl, M; Trotter, D; Starbuck, A; Pomerene, A; Lentine, A L; DeRose, C

    2017-03-20

    Arrayed waveguide gratings provide flexible spectral filtering functionality for integrated photonic applications. Achieving narrow channel spacing requires long optical path lengths which can greatly increase the footprint of devices. High index contrast waveguides, such as those fabricated in silicon-on-insulator wafers, allow tight waveguide bends which can be used to create much more compact designs. Both the long optical path lengths and the high index contrast contribute to significant optical phase error as light propagates through the device. Therefore, silicon photonic arrayed waveguide gratings require active or passive phase correction following fabrication. Here we present the design and fabrication of compact silicon photonic arrayed waveguide gratings with channel spacings of 50, 10 and 1 GHz. The largest device, with 11 channels of 1 GHz spacing, has a footprint of only 1.1 cm2. Using integrated thermo-optic phase shifters, the phase error is actively corrected. We present two methods of phase error correction and demonstrate state-of-the-art cross-talk performance for high index contrast arrayed waveguide gratings. As a demonstration of possible applications, we perform RF channelization with 1 GHz resolution. Additionally, we generate unique spectral filters by applying non-zero phase offsets calculated by the Gerchberg Saxton algorithm.
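
    The Gerchberg-Saxton algorithm mentioned above iterates between two planes, enforcing the known amplitude in each while keeping the computed phase. A one-dimensional numerical sketch (not the authors' AWG-specific implementation):

```python
import numpy as np

def gerchberg_saxton(source_amp, target_amp, iters=50):
    """Find a phase profile for `source_amp` whose Fourier-transform
    magnitude approximates `target_amp` (classic Gerchberg-Saxton)."""
    phase = np.zeros_like(source_amp)
    for _ in range(iters):
        field = source_amp * np.exp(1j * phase)
        spectrum = np.fft.fft(field)
        # impose the target magnitude, keep the computed spectral phase
        spectrum = target_amp * np.exp(1j * np.angle(spectrum))
        field = np.fft.ifft(spectrum)
        phase = np.angle(field)     # keep only the phase in the input plane
    return phase

n = 64
source = np.ones(n)                 # uniform input amplitude
# target: spectrum of a pure linear-phase field (exactly reachable)
target = np.abs(np.fft.fft(np.exp(1j * 2 * np.pi * np.arange(n) * 3 / n)))
phase = gerchberg_saxton(source, target)
err = np.linalg.norm(np.abs(np.fft.fft(source * np.exp(1j * phase))) - target)
```

For reachable targets like this one the iteration converges essentially exactly; for arbitrary filter shapes it converges to a local best-fit phase profile.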

  8. Closed reduction of a rare type III dislocation of the first metatarsophalangeal joint.

    PubMed

    Tondera, E K; Baker, C C

    1996-09-01

    To describe a rare Type III dislocation of the first metatarsophalangeal (MP) joint, without fracture, that was corrected using a closed reduction technique. A 43-yr-old man suffered an acute, severe dislocation of his great toe as the result of forceful motion applied to the toe as his foot was depressed onto a brake pedal to avoid a motor vehicle accident. Physical examination and X-rays revealed the dislocation, muscle spasm, edema and severely restricted range of motion. The dislocation was corrected using a closed reduction technique, in this case a chiropractic manipulation. Fourteen months after reduction, the joint was intact, muscle strength was graded +5 (normal), ranges of motion were within normal limits and no crepitation was noted. X-rays revealed normal intact joint congruency. The patient regained full weight bearing, range of motion and function of the joint. Although a Type III dislocation of the great toe has been cited only once, briefly, in the literature, this classification carries a recommended surgical treatment protocol for correction. No literature describes a closed reduction of a Type III dislocation as described in this case report. It is apparent that a closed reduction technique using a chiropractic manipulation may be considered a valid alternative correction technique for Type III dislocations of the great toe.

  9. Correction of I/Q channel errors without calibration

    DOEpatents

    Doerry, Armin W.; Tise, Bertice L.

    2002-01-01

    A method of providing a balanced demodulator output for a signal, such as a Doppler radar return with an analog pulsed input, includes adding a variable phase shift as a function of time to the input signal, applying the phase-shifted input signal to a demodulator, and generating a baseband signal from the input signal. The baseband signal is low-pass filtered and converted to a digital output signal. By removing the variable phase shift from the digital output signal, a complex data output is formed that is representative of the output of a balanced demodulator.
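    The idea can be mimicked end-to-end in software: mix the input with a carrier plus a known time-varying phase ramp, low-pass filter to baseband, then remove the ramp digitally to recover a balanced complex output. A hedged numpy simulation (the moving-average filter and all parameters are stand-ins, not taken from the patent):

```python
import numpy as np

def balanced_demod(x, fs, f_c, f_shift):
    """Sketch of phase-shift demodulation: mix the real input with a carrier
    whose phase advances linearly in time (the 'variable phase shift'),
    low-pass filter to baseband, then remove the known ramp digitally."""
    t = np.arange(len(x)) / fs
    ramp = 2 * np.pi * f_shift * t                    # known variable phase shift
    mixed = x * np.exp(-1j * (2 * np.pi * f_c * t + ramp))
    kernel = np.ones(32) / 32                         # crude LPF stand-in
    base = np.convolve(mixed, kernel, mode="same")
    return base * np.exp(1j * ramp)                   # undo the phase shift
```

For a tone at f_c + f_d the output is approximately 0.5 * exp(i(2*pi*f_d*t + phi0)), i.e. a balanced complex representation of the Doppler component.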

  10. Wavelet-based functional linear mixed models: an application to measurement error-corrected distributed lag models.

    PubMed

    Malloy, Elizabeth J; Morris, Jeffrey S; Adar, Sara D; Suh, Helen; Gold, Diane R; Coull, Brent A

    2010-07-01

    Frequently, exposure data are measured over time on a grid of discrete values that collectively define a functional observation. In many applications, researchers are interested in using these measurements as covariates to predict a scalar response in a regression setting, with interest focusing on the most biologically relevant time window of exposure. One example is in panel studies of the health effects of particulate matter (PM), where particle levels are measured over time. In such studies, there are many more values of the functional data than observations in the data set so that regularization of the corresponding functional regression coefficient is necessary for estimation. Additional issues in this setting are the possibility of exposure measurement error and the need to incorporate additional potential confounders, such as meteorological or co-pollutant measures, that themselves may have effects that vary over time. To accommodate all these features, we develop wavelet-based linear mixed distributed lag models that incorporate repeated measures of functional data as covariates into a linear mixed model. A Bayesian approach to model fitting uses wavelet shrinkage to regularize functional coefficients. We show that, as long as the exposure error induces fine-scale variability in the functional exposure profile and the distributed lag function representing the exposure effect varies smoothly in time, the model corrects for the exposure measurement error without further adjustment. Both these conditions are likely to hold in the environmental applications we consider. We examine properties of the method using simulations and apply the method to data from a study examining the association between PM, measured as hourly averages for 1-7 days, and markers of acute systemic inflammation. We use the method to fully control for the effects of confounding by other time-varying predictors, such as temperature and co-pollutants.
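    The wavelet-shrinkage regularization at the heart of such models can be illustrated with a plain Haar transform plus soft thresholding of the detail coefficients (a simple frequentist stand-in for the paper's Bayesian shrinkage; all names are illustrative):

```python
import numpy as np

def haar_dwt(x):
    """Full Haar wavelet decomposition (length must be a power of two)."""
    coeffs = []
    a = x.astype(float)
    while len(a) > 1:
        even, odd = a[0::2], a[1::2]
        coeffs.append((even - odd) / np.sqrt(2))   # detail coefficients
        a = (even + odd) / np.sqrt(2)              # approximation
    coeffs.append(a)
    return coeffs

def haar_idwt(coeffs):
    """Invert haar_dwt exactly."""
    a = coeffs[-1]
    for d in reversed(coeffs[:-1]):
        even = (a + d) / np.sqrt(2)
        odd = (a - d) / np.sqrt(2)
        a = np.empty(2 * len(d))
        a[0::2], a[1::2] = even, odd
    return a

def wavelet_shrink(x, thresh):
    """Soft-threshold the detail coefficients, leaving the coarse level alone."""
    coeffs = haar_dwt(x)
    shrunk = [np.sign(d) * np.maximum(np.abs(d) - thresh, 0) for d in coeffs[:-1]]
    return haar_idwt(shrunk + [coeffs[-1]])
```

Shrinking the fine-scale coefficients toward zero is exactly how the functional coefficient is regularized, which is also why fine-scale exposure error is absorbed rather than biasing the smooth lag function.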

  11. An automated baseline correction protocol for infrared spectra of atmospheric aerosols collected on polytetrafluoroethylene (Teflon) filters

    NASA Astrophysics Data System (ADS)

    Kuzmiakova, Adele; Dillner, Ann M.; Takahama, Satoshi

    2016-06-01

    A growing body of research on statistical applications for characterization of atmospheric aerosol Fourier transform infrared (FT-IR) samples collected on polytetrafluoroethylene (PTFE) filters (e.g., Russell et al., 2011; Ruthenburg et al., 2014) and a rising interest in analyzing FT-IR samples collected by air quality monitoring networks call for an automated PTFE baseline correction solution. The existing polynomial technique (Takahama et al., 2013) is not scalable to a project with a large number of aerosol samples because it contains many parameters and requires expert intervention. Therefore, the question of how to develop an automated method for baseline correcting hundreds to thousands of ambient aerosol spectra given the variability in both environmental mixture composition and PTFE baselines remains. This study approaches the question by detailing the statistical protocol, which allows for the precise definition of analyte and background subregions, applies nonparametric smoothing splines to reproduce sample-specific PTFE variations, and integrates performance metrics from atmospheric aerosol and blank samples alike in the smoothing parameter selection. Referencing 794 atmospheric aerosol samples from seven Interagency Monitoring of PROtected Visual Environment (IMPROVE) sites collected during 2011, we start by identifying key FT-IR signal characteristics, such as non-negative absorbance or analyte segment transformation, to capture sample-specific transitions between background and analyte. While referring to qualitative properties of PTFE background, the goal of smoothing splines interpolation is to learn the baseline structure in the background region to predict the baseline structure in the analyte region. 
We then validate the model by comparing smoothing splines baseline-corrected spectra with uncorrected and polynomial baseline (PB)-corrected equivalents via three statistical applications: (1) clustering analysis, (2) functional group quantification, and (3) thermal optical reflectance (TOR) organic carbon (OC) and elemental carbon (EC) predictions. The discrepancy rate for a four-cluster solution is 10 %. For all functional groups but carboxylic COH the discrepancy is ≤ 10 %. Performance metrics obtained from TOR OC and EC predictions (R2 ≥ 0.94, bias ≤ 0.01 µg m-3, and error ≤ 0.04 µg m-3) are on a par with those obtained from uncorrected and PB-corrected spectra. The proposed protocol leads to visually and analytically similar estimates to those generated by the polynomial method. More importantly, the automated solution allows us and future users to evaluate its analytical reproducibility while minimizing reducible user bias. We anticipate the protocol will enable FT-IR researchers and data analysts to quickly and reliably analyze large amounts of data and connect them to a variety of available statistical learning methods to be applied to analyte absorbances isolated in atmospheric aerosol samples.

  12. Left dorsal premotor cortex and supramarginal gyrus complement each other during rapid action reprogramming

    PubMed Central

    Hartwigsen, Gesa; Bestmann, Sven; Ward, Nick S.; Woerbel, Saskia; Mastroeni, Claudia; Granert, Oliver; Siebner, Hartwig R.

    2013-01-01

    The ability to discard a prepared action plan in favor of an alternative action is critical when facing sudden environmental changes. We tested whether the functional contribution of left supramarginal gyrus (SMG) during action reprogramming depends on the functional integrity of left dorsal premotor cortex (PMd). Adopting a dual-site repetitive transcranial magnetic stimulation (rTMS) strategy, we first transiently disrupted PMd with “offline” 1 Hz rTMS and then applied focal “online” rTMS to SMG whilst human subjects performed a spatially-precued reaction time task. Effective online rTMS of SMG but not sham rTMS of SMG increased errors when subjects had to reprogram their action in response to an invalid precue regardless of the type of preceding offline rTMS. This suggests that left SMG primarily contributes to the online updating of actions by suppressing invalidly prepared responses. Online rTMS of SMG additionally increased reaction times for correct responses in invalidly-precued trials, but only after offline rTMS of PMd. We infer that offline rTMS caused an additional dysfunction of PMd which increased the functional relevance of SMG for rapid activation of the correct response, and sensitized SMG to the disruptive effects of online rTMS. These results not only provide causal evidence that left PMd and SMG jointly contribute to action reprogramming, but also that the respective functional weight of these areas can be rapidly redistributed. This mechanism might constitute a generic feature of functional networks that allows for rapid functional compensation in response to focal dysfunctions. PMID:23152600

  13. Colorimetric calibration of wound photography with off-the-shelf devices

    NASA Astrophysics Data System (ADS)

    Bala, Subhankar; Sirazitdinova, Ekaterina; Deserno, Thomas M.

    2017-03-01

    Digital cameras are frequently used for photographic documentation in the medical sciences. However, color reproducibility of the same object suffers under different illumination and lighting conditions. This variation in color representation is problematic when the images are used for segmentation and measurements based on color thresholds. In this paper, motivated by photographic follow-up of chronic wounds, we assess the impact of (i) gamma correction, (ii) white balancing, (iii) background unification, and (iv) reference card-based color correction. Automatic gamma correction and white balancing are applied to support the calibration procedure, where gamma correction is a nonlinear color transform. For unevenly illuminated images, non-uniform illumination correction is applied. In the last step, we apply colorimetric calibration using a reference color card of 24 patches with known colors. A lattice detection algorithm is used for locating the card. The least squares algorithm is applied for affine color calibration in the RGB model. We tested the algorithm on images with seven different types of illumination, with and without flash, using three different off-the-shelf cameras including smartphones. We analyzed the spread of the resulting color values of a selected color patch before and after applying the calibration. Additionally, we checked the individual contribution of each step of the whole calibration process. Using all steps, we were able to achieve up to an 81% reduction in the standard deviation of color patch values in the resulting images compared to the original images. This supports manual as well as automatic quantitative wound assessment with off-the-shelf devices.
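    The reference-card step reduces to an ordinary least-squares affine fit in RGB: stack the measured patch colors with a constant column and solve for a 3×3 matrix plus offset that maps them to the known reference colors. A minimal numpy sketch (function names and array shapes are illustrative):

```python
import numpy as np

def fit_affine_color(measured, reference):
    """Least-squares affine color map: reference ~ measured @ M + b.
    measured, reference: (n_patches, 3) arrays of RGB patch colors.
    Returns a (4, 3) matrix A whose first 3 rows are M and last row is b."""
    n = len(measured)
    X = np.hstack([measured, np.ones((n, 1))])      # constant column -> offset b
    A, *_ = np.linalg.lstsq(X, reference, rcond=None)
    return A

def apply_affine_color(img, A):
    """Apply the fitted calibration to an (h, w, 3) image."""
    h, w, _ = img.shape
    flat = np.hstack([img.reshape(-1, 3), np.ones((h * w, 1))])
    return (flat @ A).reshape(h, w, 3)
```

With 24 patches the system is well over-determined, so the fit also averages out per-patch noise rather than interpolating it.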

  14. Relationship of forces acting on implant rods and degree of scoliosis correction.

    PubMed

    Salmingo, Remel Alingalan; Tadano, Shigeru; Fujisaki, Kazuhiro; Abe, Yuichiro; Ito, Manabu

    2013-02-01

    Adolescent idiopathic scoliosis is a complex spinal pathology characterized as a three-dimensional spine deformity combined with vertebral rotation. Various surgical techniques for correction of severe scoliotic deformity have evolved and become more advanced in applying the corrective forces. The objective of this study was to investigate the relationship between corrective forces acting on deformed rods and the degree of scoliosis correction. Implant rod geometries of six adolescent idiopathic scoliosis patients were measured before and after surgery. An elasto-plastic finite element model of the implant rod before surgery was reconstructed for each patient. An inverse method based on finite element analysis was used to apply forces to the implant rod model such that its deformed shape matched the measured postoperative geometry. The relationship between the magnitude of the corrective forces and the degree of correction, expressed as the change of Cobb angle, was evaluated. The effects of screw configuration on the corrective forces were also investigated. Corrective forces acting on rods and degree of correction were not correlated. Increasing the number of implant screws tended to decrease the magnitude of the corrective forces but did not provide a higher degree of correction. Although greater correction was achieved with higher screw density, the forces increased at some levels. The biomechanics of scoliosis correction depends not only on the corrective forces acting on implant rods but also on various parameters such as screw placement configuration and spine stiffness. Considering the magnitude of the forces, increasing screw density is not guaranteed to be the safest surgical strategy. Copyright © 2012 Elsevier Ltd. All rights reserved.

  15. A meta-GGA level screened range-separated hybrid functional by employing short range Hartree-Fock with a long range semilocal functional.

    PubMed

    Jana, Subrata; Samal, Prasanjit

    2018-03-28

    The range-separated hybrid density functionals are very successful in describing a wide range of molecular and solid-state properties accurately. In principle, such functionals are designed from spherically averaged or system-averaged as well as reverse-engineered exchange holes. In the present attempt, the screened range-separated hybrid functional scheme has been applied to the meta-GGA rung by using the density matrix expansion based semilocal exchange hole (or functional). The hybrid functional proposed here utilizes the spherically averaged density matrix expansion based exchange hole in the range-separation scheme. For the slowly varying density correction, the range-separation scheme is employed only through the local density approximation based exchange hole coupled with the corresponding fourth-order gradient approximation Tao-Mo enhancement factor. The comprehensive testing and performance of the newly constructed functional indicate its applicability in describing several molecular properties. The most appealing feature of this screened hybrid functional is that it will be practically very useful in describing solid-state properties at the meta-GGA level.

  16. Automatic oscillator frequency control system

    NASA Technical Reports Server (NTRS)

    Smith, S. F. (Inventor)

    1985-01-01

    A frequency control system makes an initial correction of the frequency of its own timing circuit after comparison against a frequency of known accuracy and then sequentially checks and corrects the frequencies of several voltage-controlled local oscillator circuits. The timing circuit initiates the machine cycles of a central processing unit which applies a frequency index to an input register in a modulo-sum frequency divider stage and enables a multiplexer to clock an accumulator register in the divider stage with a cyclical signal derived from the oscillator circuit being checked. Upon expiration of the counting interval, the processing unit compares the remainder held as the contents of the accumulator against a stored zero error constant and applies an appropriate correction word to a correction stage to shift the frequency of the oscillator being checked. A signal from the accumulator register may be used to drive a phase plane ROM and, with periodic shifts in the applied frequency index, to provide frequency shift keying of the resultant output signal. Interposition of a phase adder between the accumulator register and phase plane ROM permits phase shift keying of the output signal by periodic variation in the value of a phase index applied to one input of the phase adder.
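    The accumulator-register-plus-ROM arrangement described above is essentially a direct digital synthesizer: a frequency index is repeatedly added into a modulo accumulator, an optional phase index is added for phase shift keying, and the top accumulator bits address a sine "phase plane ROM". A toy numpy sketch (register widths are illustrative, not from the patent):

```python
import numpy as np

def dds(freq_index, phase_index, n_samples, acc_bits=16, rom_bits=8):
    """Direct digital synthesis sketch: modulo-2^acc_bits phase accumulator
    driven by freq_index, a phase adder for PSK, and a sine ROM lookup.
    Output frequency is freq_index / 2^acc_bits cycles per sample."""
    rom = np.sin(2 * np.pi * np.arange(2 ** rom_bits) / 2 ** rom_bits)
    acc = np.cumsum(np.full(n_samples, freq_index)) % 2 ** acc_bits
    addr = ((acc + phase_index) >> (acc_bits - rom_bits)) % 2 ** rom_bits
    return rom[addr]
```

Stepping freq_index between samples yields FSK; stepping phase_index yields PSK, exactly as the phase adder in the patent does.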

  17. Addressing Spatial Dependence Bias in Climate Model Simulations—An Independent Component Analysis Approach

    NASA Astrophysics Data System (ADS)

    Nahar, Jannatun; Johnson, Fiona; Sharma, Ashish

    2018-02-01

    Conventional bias correction is usually applied on a grid-by-grid basis, meaning that the resulting corrections cannot address biases in the spatial distribution of climate variables. To solve this problem, a two-step bias correction method is proposed here to correct time series at multiple locations conjointly. The first step transforms the data to a set of statistically independent univariate time series, using a technique known as independent component analysis (ICA). The mutually independent signals can then be bias corrected as univariate time series and back-transformed to improve the representation of spatial dependence in the data. The spatially corrected data are then bias corrected at the grid scale in the second step. The method has been applied to two CMIP5 General Circulation Model simulations for six different climate regions of Australia for two climate variables—temperature and precipitation. The results demonstrate that the ICA-based technique leads to considerable improvements in temperature simulations with more modest improvements in precipitation. Overall, the method results in current climate simulations that have greater equivalency in space and time with observational data.
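    The univariate correction applied to each independent component (and again at the grid scale) is commonly an empirical quantile mapping: each model value is replaced by the observed value at the same quantile. A minimal numpy sketch (illustrative; the abstract does not specify the exact univariate method used):

```python
import numpy as np

def quantile_map(model, obs):
    """Empirical quantile mapping: replace each model value by the observed
    value at the same empirical quantile (a common univariate bias correction)."""
    ranks = np.searchsorted(np.sort(model), model, side="right") / len(model)
    return np.quantile(obs, np.clip(ranks, 0.0, 1.0))
```

After mapping, the corrected series inherits the observed marginal distribution while preserving the model's rank ordering in time, which is why the ICA step is needed first to also repair spatial dependence.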

  18. Crystal structure prediction supported by incomplete experimental data

    NASA Astrophysics Data System (ADS)

    Tsujimoto, Naoto; Adachi, Daiki; Akashi, Ryosuke; Todo, Synge; Tsuneyuki, Shinji

    2018-05-01

    We propose an efficient theoretical scheme for structure prediction on the basis of the idea of combining methods, which optimize theoretical calculation and experimental data simultaneously. In this scheme, we formulate a cost function based on a weighted sum of interatomic potential energies and a penalty function which is defined with partial experimental data totally insufficient for conventional structure analysis. In particular, we define the cost function using "crystallinity" formulated with only peak positions within the small range of the x-ray-diffraction pattern. We apply this method to well-known polymorphs of SiO2 and C with up to 108 atoms in the simulation cell and show that it reproduces the correct structures efficiently with very limited information of diffraction peaks. This scheme opens a new avenue for determining and predicting structures that are difficult to determine by conventional methods.

  19. Nuclear parton distributions and the Drell-Yan process

    NASA Astrophysics Data System (ADS)

    Kulagin, S. A.; Petti, R.

    2014-10-01

    We study the nuclear parton distribution functions on the basis of our recently developed semimicroscopic model, which takes into account a number of nuclear effects including nuclear shadowing, Fermi motion and nuclear binding, nuclear meson-exchange currents, and off-shell corrections to bound nucleon distributions. We discuss in detail the dependencies of nuclear effects on the type of parton distribution (nuclear sea vs valence), as well as on the parton flavor (isospin). We apply the resulting nuclear parton distributions to calculate ratios of cross sections for proton-induced Drell-Yan production off different nuclear targets. We obtain a good agreement on the magnitude, target and projectile x, and the dimuon mass dependence of proton-nucleus Drell-Yan process data from the E772 and E866 experiments at Fermilab. We also provide nuclear corrections for the Drell-Yan data from the E605 experiment.

  20. Hard sphere perturbation theory for fluids with soft-repulsive-core potentials

    NASA Astrophysics Data System (ADS)

    Ben-Amotz, Dor; Stell, George

    2004-03-01

    The thermodynamic properties of fluids with very soft repulsive-core potentials, resembling those of some liquid metals, are predicted with unprecedented accuracy using a new first-order thermodynamic perturbation theory. This theory is an extension of Mansoori-Canfield/Rasaiah-Stell (MCRS) perturbation theory, obtained by including a configuration integral correction recently identified by Mon, who evaluated it by computer simulation. In this work we derive an analytic expression for Mon's correction in terms of the radial distribution function of the soft-core fluid, g0(r), approximated using Lado's self-consistent extension of Weeks-Chandler-Andersen (WCA) theory. Comparisons with WCA and MCRS predictions show that our new extended-MCRS theory outperforms other first-order theories when applied to fluids with very soft inverse-power potentials (n⩽6), and predicts free energies that are within 0.3kT of simulation results up to the fluid freezing point.

  1. Well-tempered metadynamics converges asymptotically.

    PubMed

    Dama, James F; Parrinello, Michele; Voth, Gregory A

    2014-06-20

    Metadynamics is a versatile and capable enhanced sampling method for the computational study of soft matter materials and biomolecular systems. However, over a decade of application and several attempts to give this adaptive umbrella sampling method a firm theoretical grounding prove that a rigorous convergence analysis is elusive. This Letter describes such an analysis, demonstrating that well-tempered metadynamics converges to the final state it was designed to reach and, therefore, that the simple formulas currently used to interpret the final converged state of tempered metadynamics are correct and exact. The results do not rely on any assumption that the collective variable dynamics are effectively Brownian or any idealizations of the hill deposition function; instead, they suggest new, more permissive criteria for the method to be well behaved. The results apply to tempered metadynamics with or without adaptive Gaussians or boundary corrections and whether the bias is stored approximately on a grid or exactly.

  2. Well-Tempered Metadynamics Converges Asymptotically

    NASA Astrophysics Data System (ADS)

    Dama, James F.; Parrinello, Michele; Voth, Gregory A.

    2014-06-01

    Metadynamics is a versatile and capable enhanced sampling method for the computational study of soft matter materials and biomolecular systems. However, over a decade of application and several attempts to give this adaptive umbrella sampling method a firm theoretical grounding prove that a rigorous convergence analysis is elusive. This Letter describes such an analysis, demonstrating that well-tempered metadynamics converges to the final state it was designed to reach and, therefore, that the simple formulas currently used to interpret the final converged state of tempered metadynamics are correct and exact. The results do not rely on any assumption that the collective variable dynamics are effectively Brownian or any idealizations of the hill deposition function; instead, they suggest new, more permissive criteria for the method to be well behaved. The results apply to tempered metadynamics with or without adaptive Gaussians or boundary corrections and whether the bias is stored approximately on a grid or exactly.

  3. Fingerprinting redox and ligand states in haemprotein crystal structures using resonance Raman spectroscopy.

    PubMed

    Kekilli, Demet; Dworkowski, Florian S N; Pompidor, Guillaume; Fuchs, Martin R; Andrew, Colin R; Antonyuk, Svetlana; Strange, Richard W; Eady, Robert R; Hasnain, S Samar; Hough, Michael A

    2014-05-01

    It is crucial to assign the correct redox and ligand states to crystal structures of proteins with an active redox centre to gain valid functional information and prevent the misinterpretation of structures. Single-crystal spectroscopies, particularly when applied in situ at macromolecular crystallography beamlines, allow spectroscopic investigations of redox and ligand states and the identification of reaction intermediates in protein crystals during the collection of structural data. Single-crystal resonance Raman spectroscopy was carried out in combination with macromolecular crystallography on Swiss Light Source beamline X10SA using cytochrome c' from Alcaligenes xylosoxidans. This allowed the fingerprinting and validation of different redox and ligand states, identification of vibrational modes and identification of intermediates together with monitoring of radiation-induced changes. This combined approach provides a powerful tool to obtain complementary data and correctly assign the true oxidation and ligand state(s) in redox-protein crystals.

  4. Nasomaxillary hypoplasia with a congenitally missing tooth treated with LeFort II osteotomy, autotransplantation, and nickel-titanium alloy wire.

    PubMed

    Ishida, Takayoshi; Ikemoto, Shigehiro; Ono, Takashi

    2015-09-01

    In some skeletal Class III adult patients with nasomaxillary hypoplasia, the LeFort I osteotomy provides insufficient correction. This case report describes a 20-year-old woman with a combination of nasomaxillary hypoplasia and a protrusive mandible with a congenitally missing mandibular second premolar. We performed a LeFort II osteotomy for maxillary advancement. Autotransplantation of a tooth was also performed; the donor tooth was used to replace the missing permanent tooth. To increase the chance of success, we applied light continuous force with an improved superelastic nickel-titanium alloy wire technique before extraction and after transplantation. The patient's profile and malocclusion were corrected, and the autotransplanted tooth functioned well. The postero-occlusal relationships were improved, and ideal overbite and overjet relationships were achieved. The methods used in this case represent a remarkable treatment. Copyright © 2015 American Association of Orthodontists. Published by Elsevier Inc. All rights reserved.

  5. Motor neurons in Drosophila flight control: could b1 be the one?

    NASA Astrophysics Data System (ADS)

    Whitehead, Samuel; Shirangi, Troy; Cohen, Itai

    Similar to balancing a stick on one's fingertip, flapping flight is inherently unstable; maintaining stability is a delicate balancing act made possible only by near-constant, often-subtle corrective actions. For fruit flies, such corrective responses need not only be robust, but also fast: the Drosophila flight control reflex has a response latency of ~5 ms, ranking it among the fastest reflexes in the animal kingdom. How is such rapid, robust control implemented physiologically? Here we present an analysis of a putatively crucial component of the Drosophila flight control circuit: the b1 motor neuron. Specifically, we apply mechanical perturbations to freely-flying Drosophila and analyze the differences in kinematic patterns between flies with manipulated and un-manipulated b1 motor neurons. Ultimately, we hope to identify the functional role of b1 in flight stabilization, with the aim of linking it to previously-proposed, reduced-order models for reflexive control.

  6. [Surgical correction of cleft palate].

    PubMed

    Kimura, F T; Pavia Noble, A; Soriano Padilla, F; Soto Miranda, A; Medellín Rodríguez, A

    1990-04-01

    This study presents a statistical review of corrective surgery for cleft palate, based on cases treated at the maxillo-facial surgery units of the Pediatrics Hospital of the Centro Médico Nacional and at Centro Médico La Raza of the National Institute of Social Security of Mexico, over a five-year period. The interdisciplinary management performed at the Cleft-Palate Clinic, an integrated approach involving specialists in maxillo-facial surgery, maxillary orthopedics, genetics, social work and mental hygiene that seeks to reestablish the stomatological and psychological functions of children afflicted by cleft palate, is amply described. The frequency and classification of the various techniques practiced in that service are described, as well as surgical statistics for 188 patients, comprising a total of 256 palate surgeries performed from March 1984 to March 1989, applying three different techniques and proposing a combination of them in a single operation in order to avoid complementary surgery.

  7. CRISPR-Cas Genome Surgery in Ophthalmology

    PubMed Central

    DiCarlo, James E.; Sengillo, Jesse D.; Justus, Sally; Cabral, Thiago; Tsang, Stephen H.; Mahajan, Vinit B.

    2017-01-01

    Genetic disease affecting vision can significantly impact patient quality of life. Gene therapy seeks to slow the progression of these diseases by treating the underlying etiology at the level of the genome. Clustered regularly interspaced short palindromic repeats (CRISPR) and CRISPR-associated systems (Cas) represent powerful tools for studying diseases through the creation of model organisms generated by targeted modification and by the correction of disease mutations for therapeutic purposes. CRISPR-Cas systems have been applied successfully to the visual sciences and study of ophthalmic disease – from the modification of zebrafish and mammalian models of eye development and disease, to the correction of pathogenic mutations in patient-derived stem cells. Recent advances in CRISPR-Cas delivery and optimization boast improved functionality that continues to enhance genome-engineering applications in the eye. This review provides a synopsis of the recent implementations of CRISPR-Cas tools in the field of ophthalmology. PMID:28573077

  8. Qubits in phase space: Wigner-function approach to quantum-error correction and the mean-king problem

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Paz, Juan Pablo; Roncaglia, Augusto Jose; Theoretical Division, LANL, MSB213, Los Alamos, New Mexico 87545

    2005-07-15

    We analyze and further develop a method to represent the quantum state of a system of n qubits in a phase-space grid of N×N points (where N = 2^n). The method, which was recently proposed by Wootters and co-workers (Gibbons et al., Phys. Rev. A 70, 062101 (2004)), is based on the use of the elements of the finite field GF(2^n) to label the phase-space axes. We present a self-contained overview of the method, we give insights into some of its features, and we apply it to investigate problems which are of interest for quantum-information theory: We analyze the phase-space representation of stabilizer states and quantum error-correction codes and present a phase-space solution to the so-called mean king problem.

  9. Temperature dependent structural and vibrational properties of liquid indium

    NASA Astrophysics Data System (ADS)

    Patel, A. B.; Bhatt, N. K.

    2018-05-01

    The influence of temperature on both the structure factor and the phonon dispersion relation of liquid indium has been investigated by means of pseudopotential theory. The Percus-Yevick hard-sphere reference system is applied to describe the structural calculation. The effective electron-ion interaction is described using the modified empty-core potential due to Hasegawa et al. along with the local field correction function due to Ichimaru-Utsumi (IU). The temperature dependence of the pair potential needed at higher temperatures was achieved by multiplying the pair potential by the damping factor exp(-π/(k_B T 2k_F r)). Very close agreement of the static structure factor, particularly at elevated temperatures, confirms the validity of the local potential. A positive dispersion is found in the low-q region, and the phonon dispersion branches follow the correct, experiment-like trend, showing all broad features of collective excitations in liquid metals.

  10. GW quasiparticle bandgaps of anatase TiO2 starting from DFT + U.

    PubMed

    Patrick, Christopher E; Giustino, Feliciano

    2012-05-23

    We investigate the quasiparticle band structure of anatase TiO2, a wide gap semiconductor widely employed in photovoltaics and photocatalysis. We obtain GW quasiparticle energies starting from density-functional theory (DFT) calculations including Hubbard U corrections. Using a simple iterative procedure we determine the value of the Hubbard parameter yielding a vanishing quasiparticle correction to the fundamental bandgap of anatase TiO2. The bandgap (3.3 eV) calculated using this optimal Hubbard parameter is smaller than the value obtained by applying many-body perturbation theory to standard DFT eigenstates and eigenvalues (3.7 eV). We extend our analysis to the rutile polymorph of TiO2 and reach similar conclusions. Our work highlights the role of the starting non-interacting Hamiltonian in the calculation of GW quasiparticle energies in TiO2 and suggests an optimal Hubbard parameter for future calculations.

  11. Initial On-Orbit Spatial Resolution Characterization of OrbView-3 Panchromatic Images

    NASA Technical Reports Server (NTRS)

    Blonski, Slawomir

    2006-01-01

    Characterization was conducted under the Memorandum of Understanding among Orbital Sciences Corp., ORBIMAGE, Inc., and the NASA Applied Sciences Directorate. Five OrbView-3 panchromatic images of the permanent Stennis Space Center edge targets, painted on a concrete surface, were acquired. Each image is available at two processing levels: Georaw and Basic. Georaw is an intermediate image in which individual pixels are aligned by a nominal shift in the along-scan direction to adjust for the staggered layout of the panchromatic detectors along the focal plane array. Georaw images are engineering data and are not delivered to customers. The Basic product includes a cubic interpolation to align the pixels better along the focal plane and to correct for sensor artifacts, such as smile and attitude smoothing. This product retains satellite geometry; no rectification is performed. Processing of the characterized images did not include image sharpening, which is applied by default to OrbView-3 image products delivered by ORBIMAGE to customers. Edge responses were extracted from images of tilted edges in two directions: along-scan and cross-scan. Each edge response was approximated with a superposition of three sigmoidal functions through nonlinear least-squares curve fitting. Line spread functions (LSF) were derived by differentiating the analytical approximation. Modulation transfer functions (MTF) were obtained by applying the discrete Fourier transform to the LSF.
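
    The edge-response → LSF → MTF pipeline described above can be sketched as follows. The three-sigmoid edge here uses made-up amplitudes and widths in place of the actual least-squares fit to image data:

```python
import numpy as np

def sigmoid(x, a, c, w):
    # One sigmoidal component: amplitude a, center c, width w.
    return a / (1.0 + np.exp(-(x - c) / w))

def edge_to_mtf(x, edge):
    # LSF = derivative of the edge-spread function.
    lsf = np.gradient(edge, x)
    lsf /= lsf.sum()                   # normalize LSF area to 1
    mtf = np.abs(np.fft.rfft(lsf))     # MTF = |DFT of the LSF|
    return mtf / mtf[0]                # normalize to 1 at zero frequency

# Hypothetical edge response: superposition of three sigmoids, standing in
# for the fitted analytical approximation described in the abstract.
x = np.linspace(-10.0, 10.0, 401)
edge = (sigmoid(x, 0.6, 0.0, 0.8) + sigmoid(x, 0.3, 0.0, 1.5)
        + sigmoid(x, 0.1, 0.0, 3.0))
mtf = edge_to_mtf(x, edge)
```

Fitting the edge analytically before differentiating (as in the study) suppresses the noise amplification that direct numerical differentiation of a measured edge would cause.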

  12. Estimating Uncertainties of Ship Course and Speed in Early Navigations using ICOADS3.0

    NASA Astrophysics Data System (ADS)

    Chan, D.; Huybers, P. J.

    2017-12-01

    Information on ship position and its uncertainty is potentially important for mapping out climatologies and changes in SSTs. Using the 2-hourly ship reports from the International Comprehensive Ocean Atmosphere Dataset 3.0 (ICOADS 3.0), we estimate the uncertainties of ship course, ship speed, and latitude/longitude corrections during 1870-1900. After reviewing the techniques used in early navigation, we build a forward navigation model that uses dead reckoning, celestial latitude corrections, and chronometer longitude corrections. The modeled ship tracks exhibit jumps in longitude and latitude when a position correction is applied. These jumps are also seen in ICOADS3.0 observations. In this model, the position error at the end of each day increases following a 2D random walk; the latitude/longitude errors are reset when a latitude/longitude correction is applied. We fit the variance of the magnitude of latitude/longitude corrections in the observations against model outputs, and estimate that the standard deviation of uncertainty is 5.5 degrees for ship course, 32% for ship speed, 22 km for latitude corrections, and 27 km for longitude corrections. The estimates here are informative priors for Bayesian methods that quantify position errors of individual tracks.
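
    A minimal sketch of such a forward error model (with an illustrative noise level, not the paper's fitted values): dead-reckoning error grows as a 2D random walk during each day and is reset whenever a position fix is applied, producing the jumps described above:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_track(n_days, step_sigma_km=15.0):
    """Toy dead-reckoning error model. Position error accumulates as a
    2D random walk and is reset to zero whenever a celestial latitude /
    chronometer longitude fix is applied."""
    errors = np.zeros((n_days + 1, 2))            # (lat, lon) error in km
    jumps = []
    for day in range(1, n_days + 1):
        errors[day] = errors[day - 1] + rng.normal(0.0, step_sigma_km, 2)
        if day % 3 == 0:                          # assume a fix every 3 days
            jumps.append(np.hypot(*errors[day]))  # size of the position jump
            errors[day] = 0.0                     # the correction resets the error
    return errors, np.array(jumps)

errors, jumps = simulate_track(30)
```

Fitting the distribution of simulated jump magnitudes against the jumps seen in observed tracks is what constrains the course/speed/fix uncertainties in the abstract.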

  13. 2D beam hardening correction for micro-CT of immersed hard tissue

    NASA Astrophysics Data System (ADS)

    Davis, Graham; Mills, David

    2016-10-01

    Beam hardening artefacts arise in tomography and microtomography with polychromatic sources. Typically, specimens appear to be less dense in the center of reconstructions because as the path length through the specimen increases, so the X-ray spectrum is shifted towards higher energies due to the preferential absorption of low energy photons. Various approaches have been taken to reduce or correct for these artefacts. Pre-filtering the X-ray beam with a thin metal sheet will reduce soft energy X-rays and thus narrow the spectrum. Correction curves can be applied to the projections prior to reconstruction which transform measured attenuation with polychromatic radiation to predicted attenuation with monochromatic radiation. These correction curves can be manually selected, iteratively derived from reconstructions (this generally works where density is assumed to be constant) or derived from a priori information about the X-ray spectrum and specimen composition. For hard tissue specimens, the latter approach works well if the composition is reasonably homogeneous. In the case of an immersed or embedded specimen (e.g., tooth or bone) the relative proportions of mineral and "organic" (including medium and plastic container) species varies considerably for different ray paths and simple beam hardening correction does not give accurate results. By performing an initial reconstruction, the total path length through the container can be determined. By modelling the X-ray properties of the specimen, a 2D correction transform can then be created such that the predicted monochromatic attenuation can be derived as a function of both the measured polychromatic attenuation and the container path length.
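
    A toy version of the 2D correction can be sketched with a two-energy-bin spectrum (all coefficients invented for illustration): a forward model gives polychromatic attenuation as a function of mineral thickness and container path length, and the correction inverts it for a known container path:

```python
import numpy as np

# Two-bin toy X-ray spectrum: (weight, mu_mineral, mu_container) per bin.
# Illustrative coefficients only, not measured values.
BINS = [(0.6, 1.2, 0.35),   # low-energy bin: strongly attenuated
        (0.4, 0.5, 0.15)]   # high-energy bin

def atten_poly(t_min, t_con):
    """Measured polychromatic attenuation, -ln(I/I0)."""
    I = sum(w * np.exp(-mm * t_min - mc * t_con) for w, mm, mc in BINS)
    return -np.log(I / sum(w for w, _, _ in BINS))

def atten_mono(t_min, t_con, b=1):
    """Predicted monochromatic attenuation at the high-energy bin."""
    _, mm, mc = BINS[b]
    return mm * t_min + mc * t_con

def correct(a_poly, t_con, t_grid=np.linspace(0.0, 4.0, 401)):
    """2D correction: map (polychromatic attenuation, container path)
    to monochromatic attenuation by inverting the forward model."""
    return np.interp(a_poly, atten_poly(t_grid, t_con),
                     atten_mono(t_grid, t_con))

a_meas = atten_poly(2.0, 5.0)          # simulated measurement
a_corr = correct(a_meas, t_con=5.0)    # close to atten_mono(2.0, 5.0)
```

The container path length per ray comes from the initial reconstruction mentioned in the abstract; the lookup then supplies the second axis of the 2D correction transform.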

  14. Extraction of memory colors for preferred color correction in digital TVs

    NASA Astrophysics Data System (ADS)

    Ryu, Byong Tae; Yeom, Jee Young; Kim, Choon-Woo; Ahn, Ji-Young; Kang, Dong-Woo; Shin, Hyun-Ho

    2009-01-01

    Subjective image quality is one of the most important performance indicators for digital TVs. In order to improve subjective image quality, preferred color correction is often employed. More specifically, areas of memory colors such as skin, grass, and sky are modified to generate a pleasing impression for viewers. Before applying the preferred color correction, the tendency of preference for memory colors should be identified; this is often accomplished by off-line human visual tests. Areas containing the memory colors must then be extracted so that color correction can be applied to them, and these processes should be performed on-line. This paper presents a new method for area extraction of three types of memory colors. Performance of the proposed method is evaluated by calculating the correct and false detection ratios. Experimental results indicate that the proposed method outperforms previous methods for memory color extraction.

  15. Correcting for batch effects in case-control microbiome studies

    PubMed Central

    Gibbons, Sean M.; Duvallet, Claire

    2018-01-01

    High-throughput data generation platforms, like mass-spectrometry, microarrays, and second-generation sequencing are susceptible to batch effects due to run-to-run variation in reagents, equipment, protocols, or personnel. Currently, batch correction methods are not commonly applied to microbiome sequencing datasets. In this paper, we compare different batch-correction methods applied to microbiome case-control studies. We introduce a model-free normalization procedure where features (i.e. bacterial taxa) in case samples are converted to percentiles of the equivalent features in control samples within a study prior to pooling data across studies. We look at how this percentile-normalization method compares to traditional meta-analysis methods for combining independent p-values and to limma and ComBat, widely used batch-correction models developed for RNA microarray data. Overall, we show that percentile-normalization is a simple, non-parametric approach for correcting batch effects and improving sensitivity in case-control meta-analyses. PMID:29684016
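
    A minimal sketch of the percentile idea (not the authors' implementation): each case value of a feature is replaced by the fraction of within-study control values less than or equal to it, which puts features on a common scale before pooling across studies:

```python
import numpy as np

def percentile_normalize(case, control):
    """Convert case-sample values of a feature (e.g. a bacterial taxon's
    abundance) to percentiles of the control distribution of the same
    study. Model-free: no distributional assumptions."""
    control = np.sort(np.asarray(control, dtype=float))
    # Fraction of control values <= x, for each case value x.
    ranks = np.searchsorted(control, case, side="right")
    return ranks / control.size

control = [0.1, 0.2, 0.3, 0.4]
case = [0.05, 0.25, 0.9]
pct = percentile_normalize(case, control)  # 0.05 -> 0.0, 0.25 -> 0.5, 0.9 -> 1.0
```

Because the transform is computed within each study, study-specific (batch) shifts affect cases and controls identically and cancel out of the percentiles.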

  16. Understanding the atmospheric measurement and behavior of perfluorooctanoic acid.

    PubMed

    Webster, Eva M; Ellis, David A

    2012-09-01

    The recently reported quantification of the atmospheric sampling artifact for perfluorooctanoic acid (PFOA) was applied to existing gas and particle concentration measurements. Specifically, gas phase concentrations were increased by a factor of 3.5 and particle-bound concentrations by a factor of 0.1. The correlation constants in two particle-gas partition coefficient (K(QA)) estimation equations were determined for multiple studies with and without correcting for the sampling artifact. Correction for the sampling artifact gave correlation constants in improved agreement with those reported for other neutral organic contaminants, thus supporting the application of the suggested correction factors for perfluorinated carboxylic acids. Applying the corrected correlation constant to a recent multimedia modeling study improved model agreement with corrected, reported atmospheric concentrations. This work confirms that there is sufficient partitioning to the gas phase to support the long-range atmospheric transport of PFOA. Copyright © 2012 SETAC.

  17. Efficient color correction method for smartphone camera-based health monitoring application.

    PubMed

    Duc Dang; Chae Ho Cho; Daeik Kim; Oh Seok Kwon; Jo Woon Chong

    2017-07-01

    Smartphone health monitoring applications have recently been highlighted due to the rapid development of the hardware and software performance of smartphones. However, the color characteristics of images captured by different smartphone models are dissimilar to each other, and this difference may give non-identical health monitoring results when smartphone health monitoring applications monitor physiological information using their embedded cameras. In this paper, we investigate the differences in the color properties of images captured by different smartphone models and apply a color correction method to adjust the dissimilar color values obtained from different smartphone cameras. Experimental results show that the color-corrected images obtained using the correction method provide much smaller color intensity errors compared to the images without correction. These results can be applied to enhance the consistency of smartphone camera-based health monitoring applications by reducing color intensity errors among the images obtained from different smartphones.
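
    One common form such a correction can take (a generic least-squares sketch, not necessarily the exact method used in the paper) is a 3x3 matrix that maps one camera's RGB responses onto a reference device's responses to the same color patches:

```python
import numpy as np

def fit_color_correction(src_rgb, ref_rgb):
    """Fit a 3x3 color-correction matrix M minimizing ||src @ M - ref||^2
    over a set of color patches captured by both cameras."""
    M, *_ = np.linalg.lstsq(src_rgb, ref_rgb, rcond=None)
    return M

# Hypothetical patch measurements: the source camera's responses are a
# linear distortion (M_true) of the reference camera's responses.
ref = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0],
                [0.0, 0.0, 1.0], [0.5, 0.5, 0.5]])
M_true = np.array([[0.9, 0.05, 0.0], [0.1, 0.8, 0.05], [0.0, 0.1, 0.95]])
src = ref @ M_true

M = fit_color_correction(src, ref)
corrected = src @ M        # corrected responses match the reference
```

With more patches than unknowns the fit averages out measurement noise; applying M per pixel then harmonizes images from different phones.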

  18. A New Method for Atmospheric Correction of MRO/CRISM Data.

    NASA Astrophysics Data System (ADS)

    Noe Dobrea, Eldar Z.; Dressing, C.; Wolff, M. J.

    2009-09-01

    The Compact Reconnaissance Imaging Spectrometer for Mars (CRISM) aboard the Mars Reconnaissance Orbiter (MRO) collects hyperspectral images from 0.362 to 3.92 μm at 6.55 nanometers/channel, and at a spatial resolution of 20 m/pixel. The 1-2.6 μm spectral range is often used to identify and map the distribution of hydrous minerals using mineralogically diagnostic bands at 1.4 μm, 1.9 μm, and in the 2-2.5 μm region. Atmospheric correction of the 2-μm CO2 band typically employs the same methodology applied to OMEGA data (Mustard et al., Nature 454, 2008): an atmospheric opacity spectrum, obtained from the ratio of spectra from the base to spectra from the peak of Olympus Mons, is rescaled for each spectrum in the observation to fit the 2-μm CO2 band, and is subsequently used to correct the data. Three important aspects are not considered in this correction: 1) absorptions due to water vapor are improperly accounted for, 2) the band-center of each channel shifts slightly with time, and 3) multiple scattering due to atmospheric aerosols is not considered. The second issue results in misregistration of the sharp CO2 features in the 2-μm triplet, and hence poor atmospheric correction. This makes it necessary to ratio all spectra using the spectrum of a spectrally "bland" region in each observation in order to distinguish features near 1.9 μm. Here, we present an improved atmospheric correction method, which uses emission phase function (EPF) observations to correct for molecular opacity, and a discrete ordinate radiative transfer algorithm (DISORT - Stamnes et al., Appl. Opt. 27, 1988) to correct for the effects of multiple scattering. This method results in a significant improvement in the correction of the 2-μm CO2 band, allowing us to forgo the use of spectral ratios that affect the spectral shape and preclude the derivation of reflectance values in the data.

  19. Optimal Variational Asymptotic Method for Nonlinear Fractional Partial Differential Equations.

    PubMed

    Baranwal, Vipul K; Pandey, Ram K; Singh, Om P

    2014-01-01

    We propose an optimal variational asymptotic method to solve time-fractional nonlinear partial differential equations. In the proposed method, an arbitrary number of auxiliary parameters γ0, γ1, γ2,… and auxiliary functions H0(x), H1(x), H2(x),… are introduced in the correction functional of the standard variational iteration method. The optimal values of these parameters are obtained by minimizing the square residual error. To test the method, we apply it to solve two important classes of nonlinear partial differential equations: (1) the fractional advection-diffusion equation with a nonlinear source term and (2) the fractional Swift-Hohenberg equation. Only a few iterations are required to achieve fairly accurate solutions of both problems.
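
    As a much-simplified illustration of the correction functional at the heart of the method: for the integer-order test problem u' + u = 0, u(0) = 1, the standard variational iteration with Lagrange multiplier λ = -1 (no auxiliary parameters γi or functions Hi(x), which the paper's fractional method adds on top) reproduces the Taylor series of exp(-t) term by term. Polynomials are stored as coefficient lists, index = power of t:

```python
def deriv(p):
    """Derivative of a polynomial given as a coefficient list."""
    return [i * p[i] for i in range(1, len(p))] or [0.0]

def integ(p):
    """Integral from 0 to t of a polynomial."""
    return [0.0] + [c / (i + 1) for i, c in enumerate(p)]

def add(p, q):
    n = max(len(p), len(q))
    p, q = p + [0.0] * (n - len(p)), q + [0.0] * (n - len(q))
    return [a + b for a, b in zip(p, q)]

def vim_step(u):
    # Correction functional with lambda = -1:
    # u_{n+1}(t) = u_n(t) - int_0^t [u_n'(s) + u_n(s)] ds
    residual = add(deriv(u), u)
    return add(u, [-c for c in integ(residual)])

u = [1.0]                  # u_0(t) = 1, the initial condition u(0) = 1
for _ in range(4):
    u = vim_step(u)
# u is now 1 - t + t^2/2 - t^3/6 + t^4/24, the series of exp(-t)
```

Each iteration appends exactly one more correct Taylor term; the optimal auxiliary parameters of the paper's method accelerate this convergence for the fractional case.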

  20. Spin-state transition in LaCoO3 by variational cluster approximation

    NASA Astrophysics Data System (ADS)

    Eder, R.

    2010-01-01

    The variational cluster approximation (VCA) is applied to the calculation of thermodynamical quantities and single-particle spectra of LaCoO3 . Trial self-energies and the numerical value of the Luttinger-Ward functional are obtained by exact diagonalization of a CoO6 cluster. The VCA correctly predicts LaCoO3 as a paramagnetic insulator, and a gradual and relatively smooth increase in the occupation of high-spin Co3+ ions causes the temperature dependence of entropy and magnetic susceptibility. The single-particle spectral function agrees well with experiment; the experimentally observed temperature dependence of photoelectron spectra is reproduced satisfactorily. Remaining discrepancies with experiment highlight the importance of spin-orbit coupling and local lattice relaxation.

  1. Time-dependent density functional theory (TD-DFT) coupled with reference interaction site model self-consistent field explicitly including spatial electron density distribution (RISM-SCF-SEDD)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yokogawa, D., E-mail: d.yokogawa@chem.nagoya-u.ac.jp; Institute of Transformative Bio-Molecules

    2016-09-07

    The theoretical design of bright bio-imaging molecules is one of the most rapidly progressing approaches. However, because of the system sizes involved and the computational accuracy required, the number of theoretical studies is, to our knowledge, limited. To overcome these difficulties, we developed a new method based on the reference interaction site model self-consistent field explicitly including spatial electron density distribution and time-dependent density functional theory. We applied it to the calculation of indole and 5-cyanoindole in the ground and excited states in the gas and solution phases. The changes in the optimized geometries were clearly explained with resonance structures, and the Stokes shift was correctly reproduced.

  2. Holographic Phase Correction.

    DTIC Science & Technology

    1987-06-01

    functions, so that, for example, the device could function as a combined beam splitter/multifocus lens/mirror. Offset against these advantages are...illustrated in Figure 7. Here the reconstructed, phase-corrected wave is interfered with a plane wave introduced after the hologram, via a beam splitter...the recording medium). c. The phase correction can be combined with other beam-forming functions. This can result in further savings in size and weight

  3. Properties of Vector Preisach Models

    NASA Technical Reports Server (NTRS)

    Kahler, Gary R.; Patel, Umesh D.; Torre, Edward Della

    2004-01-01

    This paper discusses rotational anisotropy and rotational accommodation of magnetic particle tape. These effects have a performance impact during the reading and writing of the recording process. We introduce the reduced vector model as the basis for the computations. Rotational magnetization models must accurately compute the anisotropic characteristics of ellipsoidally magnetizable media. An ellipticity factor is derived for these media that computes the two-dimensional magnetization trajectory for all applied fields. An orientation correction must be applied to the computed rotational magnetization. For isotropic materials, an orientation correction has been developed and presented. For anisotropic materials, an orientation correction is introduced.

  4. Model-free quantification of dynamic PET data using nonparametric deconvolution

    PubMed Central

    Zanderigo, Francesca; Parsey, Ramin V; Todd Ogden, R

    2015-01-01

    Dynamic positron emission tomography (PET) data are usually quantified using compartment models (CMs) or derived graphical approaches. Often, however, CMs either do not properly describe the tracer kinetics, or are not identifiable, leading to nonphysiologic estimates of the tracer binding. The PET data are modeled as the convolution of the metabolite-corrected input function and the tracer impulse response function (IRF) in the tissue. Using nonparametric deconvolution methods, it is possible to obtain model-free estimates of the IRF, from which functionals related to tracer volume of distribution and binding may be computed, but this approach has rarely been applied in PET. Here, we apply nonparametric deconvolution using singular value decomposition to simulated and test–retest clinical PET data with four reversible tracers well characterized by CMs ([11C]CUMI-101, [11C]DASB, [11C]PE2I, and [11C]WAY-100635), and systematically compare reproducibility, reliability, and identifiability of various IRF-derived functionals with that of traditional CMs outcomes. Results show that nonparametric deconvolution, completely free of any model assumptions, allows for estimates of tracer volume of distribution and binding that are very close to the estimates obtained with CMs and, in some cases, show better test–retest performance than CMs outcomes. PMID:25873427
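
    The deconvolution step can be sketched with a truncated singular value decomposition of the discrete convolution matrix. The tracer curves and the truncation threshold below are illustrative (the data here are noiseless, so the threshold is set very low); for noisy PET data the threshold is the regularization knob:

```python
import numpy as np

def deconvolve_svd(input_func, tissue, dt, rel_threshold=0.1):
    """Model-free IRF estimate: invert tissue = dt * conv(input, IRF)
    with a truncated-SVD pseudoinverse of the convolution matrix."""
    n = len(input_func)
    # Lower-triangular (causal) discrete convolution matrix.
    A = dt * np.array([[input_func[i - j] if i >= j else 0.0
                        for j in range(n)] for i in range(n)])
    U, s, Vt = np.linalg.svd(A)
    # Discard small singular values: this stabilizes the inversion.
    s_inv = np.where(s > rel_threshold * s[0], 1.0 / s, 0.0)
    return Vt.T @ (s_inv * (U.T @ tissue))

dt = 0.5
t = np.arange(0.0, 20.0, dt)
cin = np.exp(-t)                 # toy metabolite-corrected input function
irf_true = np.exp(-0.2 * t)      # toy tissue impulse response function
tissue = dt * np.convolve(cin, irf_true)[: len(t)]
irf_est = deconvolve_svd(cin, tissue, dt, rel_threshold=1e-12)
```

Functionals such as the volume of distribution then follow from the estimated IRF (e.g. its time integral), with no compartment-model assumptions.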

  5. Profile-Based LC-MS Data Alignment—A Bayesian Approach

    PubMed Central

    Tsai, Tsung-Heng; Tadesse, Mahlet G.; Wang, Yue; Ressom, Habtom W.

    2014-01-01

    A Bayesian alignment model (BAM) is proposed for alignment of liquid chromatography-mass spectrometry (LC-MS) data. BAM belongs to the category of profile-based approaches, which are composed of two major components: a prototype function and a set of mapping functions. Appropriate estimation of these functions is crucial for good alignment results. BAM uses Markov chain Monte Carlo (MCMC) methods to draw inference on the model parameters and improves on existing MCMC-based alignment methods through 1) the implementation of an efficient MCMC sampler and 2) an adaptive selection of knots. A block Metropolis-Hastings algorithm that mitigates the problem of the MCMC sampler getting stuck at local modes of the posterior distribution is used for the update of the mapping function coefficients. In addition, a stochastic search variable selection (SSVS) methodology is used to determine the number and positions of knots. We applied BAM to a simulated data set, an LC-MS proteomic data set, and two LC-MS metabolomic data sets, and compared its performance with the Bayesian hierarchical curve registration (BHCR) model, the dynamic time-warping (DTW) model, and the continuous profile model (CPM). The advantage of applying appropriate profile-based retention time correction prior to performing a feature-based approach is also demonstrated through the metabolomic data sets. PMID:23929872

  6. Improvement of scattering correction for in situ coastal and inland water absorption measurement using exponential fitting approach

    NASA Astrophysics Data System (ADS)

    Ye, Huping; Li, Junsheng; Zhu, Jianhua; Shen, Qian; Li, Tongji; Zhang, Fangfang; Yue, Huanyin; Zhang, Bing; Liao, Xiaohan

    2017-10-01

    The absorption coefficient of water is an important bio-optical parameter for water optics and water color remote sensing. However, scattering correction is essential to obtain accurate absorption coefficient values in situ using the nine-wavelength absorption and attenuation meter AC9. Establishing the correction always fails in Case 2 water when the correction assumes zero absorption in the near-infrared (NIR) region and underestimates the absorption coefficient in the red region, which affect processes such as semi-analytical remote sensing inversion. In this study, the scattering contribution was evaluated by an exponential fitting approach using AC9 measurements at seven wavelengths (412, 440, 488, 510, 532, 555, and 715 nm) and by applying scattering correction. The correction was applied to representative in situ data of moderately turbid coastal water, highly turbid coastal water, eutrophic inland water, and turbid inland water. The results suggest that the absorption levels in the red and NIR regions are significantly higher than those obtained using standard scattering error correction procedures. Knowledge of the deviation between this method and the commonly used scattering correction methods will facilitate the evaluation of the effect on satellite remote sensing of water constituents and general optical research using different scattering-correction methods.
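
    The exponential-fit idea over the seven AC9 wavelengths can be sketched as follows; the spectra and coefficients are synthetic, and the study's actual fitting and correction details may differ:

```python
import numpy as np

AC9_WL = np.array([412.0, 440.0, 488.0, 510.0, 532.0, 555.0, 715.0])  # nm

def fit_exponential(wl, values):
    """Fit values ~ A * exp(-S * wl) by linear least squares on log(values)."""
    slope, intercept = np.polyfit(wl, np.log(values), 1)
    return np.exp(intercept), -slope   # A, S

# Synthetic example: true absorption plus an exponential scattering error.
a_true = np.array([0.50, 0.42, 0.30, 0.26, 0.22, 0.18, 0.05])
scatter_err = 0.8 * np.exp(-0.004 * AC9_WL)
measured = a_true + scatter_err

# Assume the scattering-error spectrum has been estimated (here it is
# known exactly); fit the exponential model and subtract it.
A_hat, S_hat = fit_exponential(AC9_WL, scatter_err)
corrected = measured - A_hat * np.exp(-S_hat * AC9_WL)
```

Unlike the standard correction, this form does not force the corrected absorption to zero in the NIR, which is the key difference for turbid Case 2 waters.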

  7. [A practical procedure to improve the accuracy of radiochromic film dosimetry: an integration of a uniformity correction method and a red/blue correction method].

    PubMed

    Uehara, Ryuzo; Tachibana, Hidenobu; Ito, Yasushi; Yoshino, Shinichi; Matsubayashi, Fumiyasu; Sato, Tomoharu

    2013-06-01

    It has been reported that light scattering can worsen the accuracy of dose distribution measurements using radiochromic film. The purpose of this study was to investigate the accuracy of two different films, EDR2 and EBT2, as film dosimetry tools. The effectiveness of a correction method for the non-uniformity of EBT2 film and the light scattering was also evaluated, as was the efficacy of this correction method integrated with the red/blue correction method. EDR2 and EBT2 films were read using a flatbed charge-coupled device scanner (EPSON 10000G). Dose differences on the axis perpendicular to the scanner lamp movement axis were within 1% with EDR2, but exceeded 3% (maximum: +8%) with EBT2. The non-uniformity correction method, after a single film exposure, was applied to the readout of the films, and a corrected dose distribution was subsequently created. The correction method yielded more than 10% better pass ratios in the dose difference evaluation than when the correction was not applied. The red/blue correction method resulted in a 5% improvement compared with the standard procedure that employs the red channel only. The correction method with EBT2 proved able to rapidly correct non-uniformity, and has potential for routine clinical IMRT dose verification if EBT2 is required to achieve accuracy similar to that of EDR2. The red/blue correction method may improve accuracy, but we recommend using it carefully, with an understanding of the characteristics of EBT2 for the red channel alone and for the red/blue correction.

  8. An in silico agent-based model demonstrates Reelin function in directing lamination of neurons during cortical development.

    PubMed

    Caffrey, James R; Hughes, Barry D; Britto, Joanne M; Landman, Kerry A

    2014-01-01

    The characteristic six-layered appearance of the neocortex arises from the correct positioning of pyramidal neurons during development and alterations in this process can cause intellectual disabilities and developmental delay. Malformations in cortical development arise when neurons either fail to migrate properly from the germinal zones or fail to cease migration in the correct laminar position within the cortical plate. The Reelin signalling pathway is vital for correct neuronal positioning as loss of Reelin leads to a partially inverted cortex. The precise biological function of Reelin remains controversial and debate surrounds its role as a chemoattractant or stop signal for migrating neurons. To investigate this further we developed an in silico agent-based model of cortical layer formation. Using this model we tested four biologically plausible hypotheses for neuron motility and four biologically plausible hypotheses for the loss of neuron motility (conversion from migration). A matrix of 16 combinations of motility and conversion rules was applied against the known structure of mouse cortical layers in the wild-type cortex, the Reelin-null mutant, the Dab1-null mutant and a conditional Dab1 mutant. Using this approach, many combinations of motility and conversion mechanisms can be rejected. For example, the model does not support Reelin acting as a repelling or as a stopping signal. In contrast, the study lends very strong support to the notion that the glycoprotein Reelin acts as a chemoattractant for neurons. Furthermore, the most viable proposition for the conversion mechanism is one in which conversion is affected by a motile neuron sensing in the near vicinity neurons that have already converted. Therefore, this model helps elucidate the function of Reelin during neuronal migration and cortical development.

  9. Convex reformulation of biologically-based multi-criteria intensity-modulated radiation therapy optimization including fractionation effects

    NASA Astrophysics Data System (ADS)

    Hoffmann, Aswin L.; den Hertog, Dick; Siem, Alex Y. D.; Kaanders, Johannes H. A. M.; Huizenga, Henk

    2008-11-01

    Finding fluence maps for intensity-modulated radiation therapy (IMRT) can be formulated as a multi-criteria optimization problem for which Pareto optimal treatment plans exist. To account for the dose-per-fraction effect of fractionated IMRT, it is desirable to exploit radiobiological treatment plan evaluation criteria based on the linear-quadratic (LQ) cell survival model as a means to balance the radiation benefits and risks in terms of biologic response. Unfortunately, the LQ-model-based radiobiological criteria are nonconvex functions, which make the optimization problem hard to solve. We apply the framework proposed by Romeijn et al (2004 Phys. Med. Biol. 49 1991-2013) to find transformations of LQ-model-based radiobiological functions and establish conditions under which transformed functions result in equivalent convex criteria that do not change the set of Pareto optimal treatment plans. The functions analysed are: the LQ-Poisson-based model for tumour control probability (TCP) with and without inter-patient heterogeneity in radiation sensitivity, the LQ-Poisson-based relative seriality s-model for normal tissue complication probability (NTCP), the equivalent uniform dose (EUD) under the LQ-Poisson model and the fractionation-corrected Probit-based model for NTCP according to Lyman, Kutcher and Burman. These functions differ from those analysed before in that they cannot be decomposed into elementary EUD or generalized-EUD functions. In addition, we show that applying increasing and concave transformations to the convexified functions is beneficial for the piecewise approximation of the Pareto efficient frontier.
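
    For reference, the standard LQ-Poisson forms underlying the criteria discussed above can be written as follows (a textbook statement of the model, not the paper's exact parameterization):

```latex
% Surviving fraction after n fractions of dose d per fraction (LQ model):
S(d) = \exp\!\bigl[-n\,(\alpha d + \beta d^{2})\bigr]
% LQ-Poisson tumour control probability for N_0 initial clonogens:
\mathrm{TCP} = \exp\!\bigl[-N_0\, S(d)\bigr]
```

TCP is nonconvex as a function of dose, which is precisely why the convexifying transformations analysed in the paper are needed before the multi-criteria problem can be solved efficiently.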

  10. Identification of Linear and Nonlinear Aerodynamic Impulse Responses Using Digital Filter Techniques

    NASA Technical Reports Server (NTRS)

    Silva, Walter A.

    1997-01-01

    This paper discusses the mathematical existence and the numerically-correct identification of linear and nonlinear aerodynamic impulse response functions. Differences between continuous-time and discrete-time system theories, which permit the identification and efficient use of these functions, will be detailed. Important input/output definitions and the concept of linear and nonlinear systems with memory will also be discussed. It will be shown that indicial (step or steady) responses (such as Wagner's function), forced harmonic responses (such as Theodorsen's function or those from doublet lattice theory), and responses to random inputs (such as gusts) can all be obtained from an aerodynamic impulse response function. This paper establishes the aerodynamic impulse response function as the most fundamental, and, therefore, the most computationally efficient, aerodynamic function that can be extracted from any given discrete-time, aerodynamic system. The results presented in this paper help to unify the understanding of classical two-dimensional continuous-time theories with modern three-dimensional, discrete-time theories. First, the method is applied to the nonlinear viscous Burgers equation as an example. Next the method is applied to a three-dimensional aeroelastic model using the CAP-TSD (Computational Aeroelasticity Program - Transonic Small Disturbance) code and then to a two-dimensional model using the CFL3D Navier-Stokes code. Comparisons of accuracy and computational cost savings are presented. Because of its mathematical generality, an important attribute of this methodology is that it is applicable to a wide range of nonlinear, discrete-time problems.

  12. Optical aberration correction for simple lenses via sparse representation

    NASA Astrophysics Data System (ADS)

    Cui, Jinlin; Huang, Wei

    2018-04-01

    Simple lenses with spherical surfaces are lightweight, inexpensive, highly flexible, and can be easily processed. However, they suffer from optical aberrations that lead to limitations in high-quality photography. In this study, we propose a set of computational photography techniques based on sparse signal representation to remove optical aberrations, thereby allowing the recovery of images captured through a single-lens camera. The primary advantage of the proposed method is that many prior point spread functions calibrated at different depths are successfully used for restoring visual images in a short time, which can be generally applied to nonblind deconvolution methods for solving the problem of the excessive processing time caused by the number of point spread functions. The optical software CODE V is applied for examining the reliability of the proposed method by simulation. The simulation results reveal that the suggested method outperforms the traditional methods. Moreover, the performance of a single-lens camera is significantly enhanced both qualitatively and perceptually. Particularly, the prior information obtained by CODE V can be used for processing the real images of a single-lens camera, which provides an alternative approach to conveniently and accurately obtain point spread functions of single-lens cameras.
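
    The restoration step the abstract relies on is non-blind deconvolution with a calibrated PSF. A toy frequency-domain Wiener deconvolution sketch (not the authors' sparse-representation method; the checkerboard image, Gaussian PSF, and regularization constant `k` are all illustrative assumptions):

```python
import numpy as np

def wiener_deconvolve(blurred, psf, k=1e-3):
    """Non-blind Wiener deconvolution: k is a constant standing in for the
    noise-to-signal power ratio."""
    H = np.fft.fft2(psf, s=blurred.shape)
    G = np.fft.fft2(blurred)
    F = np.conj(H) / (np.abs(H) ** 2 + k) * G
    return np.real(np.fft.ifft2(F))

# Blur a synthetic image with a known Gaussian PSF, then restore it.
x, y = np.meshgrid(np.arange(64), np.arange(64))
img = ((x // 8 + y // 8) % 2).astype(float)            # checkerboard target
psf = np.exp(-((x - 32) ** 2 + (y - 32) ** 2) / 8.0)
psf /= psf.sum()
kernel = np.fft.ifftshift(psf)                         # zero-phase kernel
blurred = np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(kernel)))
restored = wiener_deconvolve(blurred, kernel)
```

    A depth-varying correction, as in the paper, would select among several calibrated PSFs (one per depth) before this step.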

  13. Percolation galaxy groups and clusters in the SDSS redshift survey: identification, catalogs, and the multiplicity function

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Berlind, Andreas A.; Frieman, Joshua A.; Weinberg, David H.

    2006-01-01

    We identify galaxy groups and clusters in volume-limited samples of the SDSS redshift survey, using a redshift-space friends-of-friends algorithm. We optimize the friends-of-friends linking lengths to recover galaxy systems that occupy the same dark matter halos, using a set of mock catalogs created by populating halos of N-body simulations with galaxies. Extensive tests with these mock catalogs show that no combination of perpendicular and line-of-sight linking lengths is able to yield groups and clusters that simultaneously recover the true halo multiplicity function, projected size distribution, and velocity dispersion. We adopt a linking length combination that yields, for galaxy groups with ten or more members: a group multiplicity function that is unbiased with respect to the true halo multiplicity function; an unbiased median relation between the multiplicities of groups and their associated halos; a spurious group fraction of less than ~1%; a halo completeness of more than ~97%; the correct projected size distribution as a function of multiplicity; and a velocity dispersion distribution that is ~20% too low at all multiplicities. These results hold over a range of mock catalogs that use different input recipes of populating halos with galaxies. We apply our group-finding algorithm to the SDSS data and obtain three group and cluster catalogs for three volume-limited samples that cover 3495.1 square degrees on the sky. We correct for incompleteness caused by fiber collisions and survey edges, and obtain measurements of the group multiplicity function, with errors calculated from realistic mock catalogs. These multiplicity function measurements provide a key constraint on the relation between galaxy populations and dark matter halos.
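
    The core of a redshift-space friends-of-friends finder is simple: link any pair closer than the perpendicular and line-of-sight linking lengths, then take connected components. A toy O(N^2) sketch with union-find (coordinates and linking lengths are illustrative, not the paper's optimized values):

```python
import numpy as np

def friends_of_friends(points, b_perp, b_par):
    """Link galaxies i, j whenever their transverse separation < b_perp and
    line-of-sight separation < b_par; return a group label per galaxy."""
    n = len(points)
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path halving
            i = parent[i]
        return i

    for i in range(n):
        for j in range(i + 1, n):
            d_perp = np.hypot(points[i][0] - points[j][0],
                              points[i][1] - points[j][1])
            d_par = abs(points[i][2] - points[j][2])
            if d_perp < b_perp and d_par < b_par:
                ri, rj = find(i), find(j)
                if ri != rj:
                    parent[ri] = rj
    return [find(i) for i in range(n)]

# Two tight triplets far apart, plus one isolated galaxy.
pts = [(0, 0, 0), (0.1, 0, 0), (0, 0.1, 0.2),
       (5, 5, 5), (5.1, 5, 5), (5, 5.1, 5.2),
       (20, 20, 20)]
labels = friends_of_friends(pts, b_perp=0.5, b_par=1.0)
```

    The multiplicity function the paper measures is then just a histogram of group sizes over these labels.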

  14. The relationship between facial emotion recognition and executive functions in first-episode patients with schizophrenia and their siblings.

    PubMed

    Yang, Chengqing; Zhang, Tianhong; Li, Zezhi; Heeramun-Aubeeluck, Anisha; Liu, Na; Huang, Nan; Zhang, Jie; He, Leiying; Li, Hui; Tang, Yingying; Chen, Fazhan; Liu, Fei; Wang, Jijun; Lu, Zheng

    2015-10-08

    Although many studies have examined executive functions and facial emotion recognition in people with schizophrenia, few of them focused on the correlation between them. Furthermore, their relationship in the siblings of patients also remains unclear. The aim of the present study is to examine the correlation between executive functions and facial emotion recognition in patients with first-episode schizophrenia and their siblings. Thirty patients with first-episode schizophrenia, their twenty-six siblings, and thirty healthy controls were enrolled. They completed facial emotion recognition tasks using the Ekman Standard Faces Database, and executive functioning was measured by the Wisconsin Card Sorting Test (WCST). Hierarchical regression analysis was applied to assess the correlation between executive functions and facial emotion recognition. Our study found that in siblings, the accuracy in recognizing low degree 'disgust' emotion was negatively correlated with the total correct rate in WCST (r = -0.614, p = 0.023), but was positively correlated with the total error in WCST (r = 0.623, p = 0.020); the accuracy in recognizing 'neutral' emotion was positively correlated with the total error rate in WCST (r = 0.683, p = 0.014) while negatively correlated with the total correct rate in WCST (r = -0.677, p = 0.017). People with schizophrenia showed an impairment in facial emotion recognition when identifying moderate 'happy' facial emotion, the accuracy of which was significantly correlated with the number of completed categories of WCST (R^2 = 0.432, P < 0.05). There were no correlations between executive functions and facial emotion recognition in the healthy control group. Our study demonstrated that facial emotion recognition impairment correlated with executive function impairment in people with schizophrenia and their unaffected siblings but not in healthy controls.

  15. Systematic theoretical investigation of the zero-field splitting in Gd(III) complexes: Wave function and density functional approaches

    NASA Astrophysics Data System (ADS)

    Khan, Shehryar; Kubica-Misztal, Aleksandra; Kruk, Danuta; Kowalewski, Jozef; Odelius, Michael

    2015-01-01

    The zero-field splitting (ZFS) of the electronic ground state in paramagnetic ions is a sensitive probe of the variations in the electronic and molecular structure with an impact on fields ranging from fundamental physical chemistry to medical applications. A detailed analysis of the ZFS in a series of symmetric Gd(III) complexes is presented in order to establish the applicability and accuracy of computational methods using multiconfigurational complete-active-space self-consistent field wave functions and of density functional theory calculations. The various computational schemes are then applied to larger complexes Gd(III)DOTA(H2O)^-, Gd(III)DTPA(H2O)^2-, and Gd(III)(H2O)8^3+ in order to analyze how the theoretical results compare to experimentally derived parameters. In contrast to approximations based on density functional theory, the multiconfigurational methods produce results for the ZFS of Gd(III) complexes on the correct order of magnitude.

  16. Four theorems on the psychometric function.

    PubMed

    May, Keith A; Solomon, Joshua A

    2013-01-01

    In a 2-alternative forced-choice (2AFC) discrimination task, observers choose which of two stimuli has the higher value. The psychometric function for this task gives the probability of a correct response for a given stimulus difference, Δx. This paper proves four theorems about the psychometric function. Assuming the observer applies a transducer and adds noise, Theorem 1 derives a convenient general expression for the psychometric function. Discrimination data are often fitted with a Weibull function. Theorem 2 proves that the Weibull "slope" parameter, β, can be approximated by β(Noise) x β(Transducer), where β(Noise) is the β of the Weibull function that fits best to the cumulative noise distribution, and β(Transducer) depends on the transducer. We derive general expressions for β(Noise) and β(Transducer), from which we derive expressions for specific cases. One case that follows naturally from our general analysis is Pelli's finding that, when d' ∝ (Δx)^b, β ≈ β(Noise) x b. We also consider two limiting cases. Theorem 3 proves that, as sensitivity improves, 2AFC performance will usually approach that for a linear transducer, whatever the actual transducer; we show that this does not apply at signal levels where the transducer gradient is zero, which explains why it does not apply to contrast detection. Theorem 4 proves that, when the exponent of a power-function transducer approaches zero, 2AFC performance approaches that of a logarithmic transducer. We show that the power-function exponents of 0.4-0.5 fitted to suprathreshold contrast discrimination data are close enough to zero for the fitted psychometric function to be practically indistinguishable from that of a log transducer. Finally, Weibull β reflects the shape of the noise distribution, and we used our results to assess the recent claim that internal noise has higher kurtosis than a Gaussian. Our analysis of β for contrast discrimination suggests that, if internal noise is stimulus-independent, it has lower kurtosis than a Gaussian.
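
    The transducer-plus-noise setup behind Theorem 1 can be sketched directly. For equal-variance Gaussian noise added independently to each interval, the 2AFC proportion correct is P = Φ(f(Δx) / (σ√2)); this is the standard signal-detection form, not the paper's more general expression, and the power-law transducer below is the case behind the quoted Pelli relation:

```python
import math

def p_correct(delta_x, transducer, sigma=1.0):
    """2AFC proportion correct for an observer who applies `transducer` and
    adds zero-mean Gaussian noise (sd `sigma`) to each interval."""
    z = transducer(delta_x) / (sigma * math.sqrt(2.0))
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))   # standard normal CDF

b = 2.0                                                 # power-law exponent
levels = [0.0, 0.5, 1.0, 2.0]
psych = [p_correct(dx, lambda x: x ** b) for dx in levels]
```

    Fitting a Weibull function to `psych` and varying `b` is one way to check the β ≈ β(Noise) x b approximation numerically.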

  17. On the wall perturbation correction for a parallel-plate NACP-02 chamber in clinical electron beams.

    PubMed

    Zink, K; Wulff, J

    2011-02-01

    In recent years, several Monte Carlo studies have been published concerning the perturbation corrections of a parallel-plate chamber in clinical electron beams. In these studies, a strong depth dependence of the relevant correction factors (p(wall) and p(cav)) for depth beyond the reference depth is recognized and it has been shown that the variation with depth is sensitive to the choice of the chamber's effective point of measurement. Recommendations concerning the positioning of parallel-plate ionization chambers in clinical electron beams are not the same for all current dosimetry protocols. The IAEA TRS-398 as well as the IPEM protocol and the German protocol DIN 6800-2 interpret the depth of measurement within the phantom as the water equivalent depth, i.e., the nonwater equivalence of the entrance window has to be accounted for by shifting the chamber by an amount deltaz. This positioning should ensure that the primary electrons traveling from the surface of the water phantom through the entrance window to the chamber's reference point sustain the same energy loss as the primary electrons in the undisturbed phantom. The objective of the present study is the determination of the shift deltaz for a NACP-02 chamber and the calculation of the resulting wall perturbation correction as a function of depth. Moreover, the contributions of the different chamber walls to the wall perturbation correction are identified. The dose and fluence within the NACP-02 chamber and a wall-less air cavity is calculated using the Monte Carlo code EGSnrc in a water phantom at different depths for different clinical electron beams. In order to determine the necessary shift to account for the nonwater equivalence of the entrance window, the chamber is shifted in steps of deltaz around the depth of measurement. The optimal shift deltaz is determined from a comparison of the spectral fluence within the chamber and the bare cavity. 
The wall perturbation correction is calculated as the ratio between doses for the complete chamber and a wall-less air cavity. The high energy part of the fluence spectra within the chamber strongly varies even with small chamber shifts, allowing the determination of deltaz within micrometers. For the NACP-02 chamber a shift deltaz = -0.058 cm results. This value is independent of the energy of the primary electrons as well as of the depth within the phantom and it is in good agreement with the value recommended in the German dosimetry protocol. Applying this shift, the calculated wall perturbation correction as a function of depth is varying less than 1% from zero up to the half value depth R50 for electron energies in the range of 6-21 MeV. The remaining depth dependence can mainly be attributed to the scatter properties of the entrance window. When neglecting the nonwater equivalence of the entrance window, the variation of p(wall) with depth is up to 10% and more, especially for low electron energies. The variation of the wall perturbation correction for the NACP-02 chamber in clinical electron beams strongly depends on the positioning of the chamber. Applying a shift deltaz = -0.058 cm toward the focus ensures that the primary electron spectrum within the chamber bears the largest resemblance to the fluence of a wall-less cavity. Hence, the influence of the chamber walls on the perturbation correction can be separated out and the residual variation of p(wall) with depth is minimized.

  18. Projection-based estimation and nonuniformity correction of sensitivity profiles in phased-array surface coils.

    PubMed

    Yun, Sungdae; Kyriakos, Walid E; Chung, Jun-Young; Han, Yeji; Yoo, Seung-Schik; Park, Hyunwook

    2007-03-01

    To develop a novel approach for calculating the accurate sensitivity profiles of phased-array coils, resulting in correction of nonuniform intensity in parallel MRI. The proposed intensity-correction method estimates the accurate sensitivity profile of each channel of the phased-array coil. The sensitivity profile is estimated by fitting a nonlinear curve to every projection view through the imaged object. The nonlinear curve-fitting efficiently obtains the low-frequency sensitivity profile by eliminating the high-frequency image contents. Filtered back-projection (FBP) is then used to compute the estimates of the sensitivity profile of each channel. The method was applied to both phantom and brain images acquired from the phased-array coil. Intensity-corrected images from the proposed method had more uniform intensity than those obtained by the commonly used sum-of-squares (SOS) approach. With the use of the proposed correction method, the intensity variation was reduced to 6.1% from 13.1% of the SOS. When the proposed approach was applied to the computation of the sensitivity maps during sensitivity encoding (SENSE) reconstruction, it outperformed the SOS approach in terms of the reconstructed image uniformity. The proposed method is more effective at correcting the intensity nonuniformity of phased-array surface-coil images than the conventional SOS method. In addition, the method was shown to be resilient to noise and was successfully applied for image reconstruction in parallel imaging.

  19. Open EFTs, IR effects & late-time resummations: systematic corrections in stochastic inflation

    DOE PAGES

    Burgess, C. P.; Holman, R.; Tasinato, G.

    2016-01-26

    Though simple inflationary models describe the CMB well, their corrections are often plagued by infrared effects that obstruct a reliable calculation of late-time behaviour. Here we adapt to cosmology tools designed to address similar issues in other physical systems with the goal of making reliable late-time inflationary predictions. The main such tool is Open EFTs which reduce in the inflationary case to Stochastic Inflation plus calculable corrections. We apply this to a simple inflationary model that is complicated enough to have dangerous IR behaviour yet simple enough to allow the inference of late-time behaviour. We find corrections to standard Stochastic Inflationary predictions for the noise and drift, and we find these corrections ensure the IR finiteness of both these quantities. The late-time probability distribution, P(Φ), for super-Hubble field fluctuations is obtained as a function of the noise and drift and so it too is IR finite. We compare our results to other methods (such as large-N models) and find they agree when these models are reliable. In all cases we can explore in detail, we find IR secular effects describe the slow accumulation of small perturbations to give a big effect: a significant distortion of the late-time probability distribution for the field. But the energy density associated with this is only of order H^4 at late times and so does not generate a dramatic gravitational back-reaction.
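
    For orientation, the Stochastic Inflation that the paper corrects is, at leading order, a Langevin equation for the coarse-grained super-Hubble field in e-fold time N (the standard textbook form, not the paper's corrected expressions; the Open-EFT corrections modify the noise and drift terms appearing here):

```latex
\frac{\mathrm{d}\varphi}{\mathrm{d}N} = -\frac{V'(\varphi)}{3H^{2}} + \frac{H}{2\pi}\,\xi(N),
\qquad
\langle \xi(N)\,\xi(N') \rangle = \delta(N - N')
```

    The late-time distribution P(Φ) then follows from the corresponding Fokker-Planck equation, which is why IR-finite noise and drift are what make the late-time prediction reliable.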

  20. Open EFTs, IR effects & late-time resummations: systematic corrections in stochastic inflation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Burgess, C. P.; Holman, R.; Tasinato, G.

    Though simple inflationary models describe the CMB well, their corrections are often plagued by infrared effects that obstruct a reliable calculation of late-time behaviour. Here we adapt to cosmology tools designed to address similar issues in other physical systems with the goal of making reliable late-time inflationary predictions. The main such tool is Open EFTs which reduce in the inflationary case to Stochastic Inflation plus calculable corrections. We apply this to a simple inflationary model that is complicated enough to have dangerous IR behaviour yet simple enough to allow the inference of late-time behaviour. We find corrections to standard Stochastic Inflationary predictions for the noise and drift, and we find these corrections ensure the IR finiteness of both these quantities. The late-time probability distribution, P(Φ), for super-Hubble field fluctuations is obtained as a function of the noise and drift and so it too is IR finite. We compare our results to other methods (such as large-N models) and find they agree when these models are reliable. In all cases we can explore in detail, we find IR secular effects describe the slow accumulation of small perturbations to give a big effect: a significant distortion of the late-time probability distribution for the field. But the energy density associated with this is only of order H^4 at late times and so does not generate a dramatic gravitational back-reaction.

  1. Correction for faking in self-report personality tests.

    PubMed

    Sjöberg, Lennart

    2015-10-01

    Faking is a common problem in testing with self-report personality tests, especially in high-stakes situations. A possible way to correct for it is statistical control on the basis of social desirability scales. Two such scales were developed and applied in the present paper. It was stressed that the statistical models of faking need to be adapted to different properties of the personality scales, since such scales correlate with faking to different extents. In four empirical studies of self-report personality tests, correction for faking was investigated. One of the studies was experimental, and asked participants to fake or to be honest. In the other studies, job or school applicants were investigated. It was found that the approach to correct for effects of faking in self-report personality tests advocated in the paper removed a large share of the effects, about 90%. It was found in one study that faking varied as a function of degree of how important the consequences of test results could be expected to be, more high-stakes situations being associated with more faking. The latter finding is incompatible with the claim that social desirability scales measure a general personality trait. It is concluded that faking can be measured and that correction for faking, based on such measures, can be expected to remove about 90% of its effects. © 2015 Psykologisk Metod AB. Scandinavian Journal of Psychology published by Scandinavian Psychological Associations and John Wiley & Sons Ltd.
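
    Statistical control via a social desirability (SD) scale amounts to residualizing trait scores on the SD score, with a per-scale weight since scales correlate with faking to different extents. A generic sketch of that idea (the function name, the weight parameter, and the synthetic data are assumptions, not the paper's fitted model):

```python
import numpy as np

def correct_for_faking(trait_scores, sd_scores, weight=1.0):
    """Subtract the SD-predicted component of a trait score, scaled by
    `weight`. weight=1 is plain residualization."""
    t = np.asarray(trait_scores, dtype=float)
    s = np.asarray(sd_scores, dtype=float)
    slope = np.cov(t, s, bias=True)[0, 1] / np.var(s)
    return t - weight * slope * (s - s.mean())

# Synthetic data: observed score = true trait + faking proportional to SD.
rng = np.random.default_rng(0)
true_trait = rng.normal(size=200)
sd = rng.normal(size=200)
observed = true_trait + 0.8 * sd
corrected = correct_for_faking(observed, sd)
```

    On data like this the corrected scores track the true trait more closely than the observed ones, which is the sense in which the paper reports removing about 90% of faking's effects.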

  2. Distortion correction in EPI at ultra-high-field MRI using PSF mapping with optimal combination of shift detection dimension.

    PubMed

    Oh, Se-Hong; Chung, Jun-Young; In, Myung-Ho; Zaitsev, Maxim; Kim, Young-Bo; Speck, Oliver; Cho, Zang-Hee

    2012-10-01

    Despite its wide use, echo-planar imaging (EPI) suffers from geometric distortions due to off-resonance effects, i.e., strong magnetic field inhomogeneity and susceptibility. This article reports a novel method for correcting the distortions observed in EPI acquired at ultra-high-field such as 7 T. Point spread function (PSF) mapping methods have been proposed for correcting the distortions in EPI. The PSF shift map can be derived either along the nondistorted or the distorted coordinates. Along the nondistorted coordinates more information about compressed areas is present but it is prone to PSF-ghosting artifacts induced by large k-space shift in PSF encoding direction. In contrast, shift maps along the distorted coordinates contain more information in stretched areas and are more robust against PSF-ghosting. In ultra-high-field MRI, an EPI contains both compressed and stretched regions depending on the B0 field inhomogeneity and local susceptibility. In this study, we present a new geometric distortion correction scheme, which selectively applies the shift map with more information content. We propose a PSF-ghost elimination method to generate an artifact-free pixel shift map along nondistorted coordinates. The proposed method can correct the effects of the local magnetic field inhomogeneity induced by the susceptibility effects along with the PSF-ghost artifact cancellation. We have experimentally demonstrated the advantages of the proposed method in EPI data acquisitions in phantom and human brain using 7-T MRI. Copyright © 2011 Wiley Periodicals, Inc.
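
    Once a pixel shift map is known, applying it is a resampling along the phase-encode axis. A 1-D toy illustration (the profile and the sinusoidal shift map are invented; the paper's contribution is measuring such maps robustly with PSF encoding, not this resampling step):

```python
import numpy as np

# A smooth profile, distorted by a known, slowly varying pixel shift.
n = 128
i = np.arange(n, dtype=float)
true = np.exp(-((i - 64) / 10.0) ** 2)        # profile to be distorted
shift = 3.0 * np.sin(2 * np.pi * i / n)       # hypothetical B0-induced shift

# distorted[j] = true[j + shift[j]]; resampling with the negated shift map
# is a first-order inverse (exact for a constant shift).
distorted = np.interp(i + shift, i, true)
unwarped = np.interp(i - shift, i, distorted)

err_before = np.max(np.abs(distorted - true))
err_after = np.max(np.abs(unwarped - true))
```

    In 2-D EPI the same remapping runs line by line along the phase-encode direction, with the map chosen from the distorted or nondistorted coordinates as the abstract describes.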

  3. Alterations/corrections to the BRASS Program

    NASA Technical Reports Server (NTRS)

    Brand, S. N.

    1985-01-01

    Corrections applied to statistical programs contained in two subroutines of the Bed Rest Analysis Software System (BRASS) are summarized. Two subroutines independently calculate significant values within the BRASS program.

  4. An Inherent-Optical-Property-Centered Approach to Correct the Angular Effects in Water-Leaving Radiance

    DTIC Science & Technology

    2011-07-01

    10%. These results demonstrate that the IOP-based BRDF correction scheme (which is composed of the R„ model along with the IOP retrieval...distribution was averaged over 10 min 5. Validation of the IOP-Based BRDF Correction Scheme The IOP-based BRDF correction scheme is applied to both...oceanic and coastal waters were very consistent qualitatively and quantitatively and thus validate the IOP-based BRDF correction system, at least

  5. Bias Properties of Extragalactic Distance Indicators. VIII. H0 from Distance-limited Luminosity Class and Morphological Type-Specific Luminosity Functions for SB, SBC, and SC Galaxies Calibrated Using Cepheids

    NASA Astrophysics Data System (ADS)

    Sandage, Allan

    1999-12-01

    Relative, reduced to absolute, magnitude distributions are obtained for Sb, Sbc, and Sc galaxies in the flux-limited Revised Shapley-Ames Catalog (RSA2) for each van den Bergh luminosity class (L), within each Hubble type (T). The method to isolate bias-free subsets of the total sample is via Spaenhauer diagrams, as in previous papers of this series. The distance-limited type and class-specific luminosity functions are normalized to numbers of galaxies per unit volume (10^5 Mpc^3), rather than being left as relative functions, as in Paper V. The functions are calculated using kinematic absolute magnitudes, based on an arbitrary trial value of H0=50. Gaussian fits to the individual normalized functions are listed for each T and L subclass. As in Paper V, the data can be freed from the T and L dependencies by applying a correction of 0.23T+0.5L to the individual absolute magnitudes. Here, T=3 for Sb, 4 for Sbc, and 5 for Sc galaxies, and the L values range from 1 to 6 as the luminosity class changes from I to III-IV. The total luminosity function, obtained by combining the volume-normalized Sb, Sbc, and Sc individual luminosity functions, each corrected for the T and L dependencies, has an rms dispersion of 0.67 mag, similar to much of the Tully-Fisher parameter space. Absolute calibration of the trial kinematic absolute magnitudes is made using 27 galaxies with known T and L that also have Cepheid distances. This permits the systematic correction to the H0=50 kinematic absolute magnitudes of 0.22+/-0.12 mag, giving H0 = 55+/-3 (internal) km s^-1 Mpc^-1. The Cepheid distances are based on the Madore/Freedman Cepheid period-luminosity (PL) zero point that requires (m-M)_0=18.50 for the LMC. Using the modern LMC modulus of (m-M)_0=18.58 requires a 4% decrease in H0, giving a final value of H0=53+/-7 (external) by this method. These values of H0, based here on the method of luminosity functions, are in good agreement with (1) H0=55+/-5 by Theureau and coworkers from their bias-corrected Tully-Fisher method of "normalized distances" for field galaxies; (2) H0=56+/-4 from the method through the Virgo Cluster, as corrected to the global kinematic frame (Tammann and coworkers); and (3) H0=58+/-5 from Cepheid-calibrated Type Ia supernovae (Saha and coworkers). Our value here disagrees, however, with the NASA "Key Project" group's final value of H0=70+/-7. Analysis of the total flux-limited sample of Sb, Sbc, and Sc galaxies in the RSA2 by the present method, but uncorrected for selection bias, would give an incorrect value of H0=71 using the same Cepheid calibration. The effect of the bias is pernicious at the 30% level; either it must be corrected by the methods in the papers of this series, or the data must be restricted to the distance-limited subset of any sample, as is done here.
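
    The magnitude-to-H0 arithmetic in this abstract is compact enough to check. A shift of delta_mag in the calibrating absolute magnitudes rescales all distances by 10^(-delta_mag/5) and hence H0 by 10^(delta_mag/5) (sign convention assumed here: a positive correction raises H0, matching the paper's 50 → 55 step; the LMC modulus increase 18.50 → 18.58 then lowers it to ~53):

```python
def rescale_h0(h0_trial, delta_mag):
    """Rescale a trial Hubble constant by a systematic magnitude offset:
    distances change by 10**(-delta_mag/5), so H0 changes by the inverse."""
    return h0_trial * 10 ** (delta_mag / 5.0)

h0_calibrated = rescale_h0(50.0, 0.22)     # the +0.22 mag Cepheid correction
h0_lmc = rescale_h0(h0_calibrated, -0.08)  # LMC modulus 18.50 -> 18.58
```

    Both numbers land on the abstract's quoted values (≈55 and ≈53), and the 0.08 mag modulus change indeed corresponds to the stated ~4% decrease.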

  6. Using soft computing techniques to predict corrected air permeability using Thomeer parameters, air porosity and grain density

    NASA Astrophysics Data System (ADS)

    Nooruddin, Hasan A.; Anifowose, Fatai; Abdulraheem, Abdulazeez

    2014-03-01

    Soft computing techniques are recently becoming very popular in the oil industry. A number of computational intelligence-based predictive methods have been widely applied in the industry with high prediction capabilities. Some of the popular methods include feed-forward neural networks, radial basis function network, generalized regression neural network, functional networks, support vector regression and adaptive network fuzzy inference system. A comparative study among most popular soft computing techniques is presented using a large dataset published in literature describing multimodal pore systems in the Arab D formation. The inputs to the models are air porosity, grain density, and Thomeer parameters obtained using mercury injection capillary pressure profiles. Corrected air permeability is the target variable. Applying developed permeability models in recent reservoir characterization workflow ensures consistency between micro and macro scale information represented mainly by Thomeer parameters and absolute permeability. The dataset was divided into two parts with 80% of data used for training and 20% for testing. The target permeability variable was transformed to the logarithmic scale as a pre-processing step and to show better correlations with the input variables. Statistical and graphical analysis of the results including permeability cross-plots and detailed error measures were created. In general, the comparative study showed very close results among the developed models. The feed-forward neural network permeability model showed the lowest average relative error, average absolute relative error, standard deviations of error and root means squares making it the best model for such problems. Adaptive network fuzzy inference system also showed very good results.
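
    The preprocessing pipeline described here — log-transform the permeability target, split 80/20, fit, score on held-out data — can be sketched on synthetic inputs. Plain least squares stands in for the feed-forward network, and the two predictors stand in for the full Thomeer-parameter set; all data below are invented:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
porosity = rng.uniform(0.05, 0.30, n)
grain_density = rng.uniform(2.6, 2.9, n)
# Synthetic log-permeability: linear in the inputs plus noise.
log_k = 8.0 * porosity - 2.0 * (grain_density - 2.65) + rng.normal(0.0, 0.1, n)

X = np.column_stack([np.ones(n), porosity, grain_density])
split = int(0.8 * n)                         # 80% train, 20% test
coef, *_ = np.linalg.lstsq(X[:split], log_k[:split], rcond=None)
pred = X[split:] @ coef
rmse = float(np.sqrt(np.mean((pred - log_k[split:]) ** 2)))
```

    The log transform is the key step the paper highlights: permeability spans orders of magnitude, so error measures are far better behaved on the log scale.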

  7. Quantum corrections to Bekenstein-Hawking black hole entropy and gravity partition functions

    NASA Astrophysics Data System (ADS)

    Bytsenko, A. A.; Tureanu, A.

    2013-08-01

    Algebraic aspects of the computation of partition functions for quantum gravity and black holes in AdS3 are discussed. We compute the sub-leading quantum corrections to the Bekenstein-Hawking entropy. It is shown that the quantum corrections to the classical result can be included systematically by making use of the comparison with conformal field theory partition functions, via the AdS3/CFT2 correspondence. This leads to a better understanding of the role of modular and spectral functions, from the point of view of the representation theory of infinite-dimensional Lie algebras. Besides, the sum of known quantum contributions to the partition function can be presented in a closed form, involving the Patterson-Selberg spectral function. These contributions can be reproduced in a holomorphically factorized theory whose partition functions are associated with the formal characters of the Virasoro modules. We propose a spectral function formulation for quantum corrections to the elliptic genus from supergravity states.

  8. FIACH: A biophysical model for automatic retrospective noise control in fMRI.

    PubMed

    Tierney, Tim M; Weiss-Croft, Louise J; Centeno, Maria; Shamshiri, Elhum A; Perani, Suejen; Baldeweg, Torsten; Clark, Christopher A; Carmichael, David W

    2016-01-01

    Different noise sources in fMRI acquisition can lead to spurious false positives and reduced sensitivity. We have developed a biophysically-based model (named FIACH: Functional Image Artefact Correction Heuristic) which extends current retrospective noise control methods in fMRI. FIACH can be applied to both General Linear Model (GLM) and resting state functional connectivity MRI (rs-fcMRI) studies. FIACH is a two-step procedure involving the identification and correction of non-physiological large amplitude temporal signal changes and spatial regions of high temporal instability. We have demonstrated its efficacy in a sample of 42 healthy children while performing language tasks that include overt speech with known activations. We demonstrate large improvements in sensitivity when FIACH is compared with current methods of retrospective correction. FIACH reduces the confounding effects of noise and increases the study's power by explaining significant variance that is not contained within the commonly used motion parameters. The method is particularly useful in detecting activations in inferior temporal regions which have proven problematic for fMRI. We have shown greater reproducibility and robustness of fMRI responses using FIACH in the context of task induced motion. In a clinical setting this will translate to increasing the reliability and sensitivity of fMRI used for the identification of language lateralisation and eloquent cortex. FIACH can benefit studies of cognitive development in young children, patient populations and older adults. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.
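
    FIACH's first step — identify non-physiological large-amplitude temporal signal changes and correct them — can be caricatured on a single voxel time series. This is a generic robust-threshold spike repair, not FIACH's biophysical criteria; the threshold and neighbour-averaging rule are assumptions:

```python
import numpy as np

def repair_large_changes(ts, threshold=3.0):
    """Flag samples whose frame-to-frame change exceeds `threshold` robust
    standard deviations of the differenced series, and replace each flagged
    sample with the mean of its neighbours."""
    ts = np.asarray(ts, dtype=float).copy()
    d = np.diff(ts)
    robust_sd = 1.4826 * np.median(np.abs(d - np.median(d)))  # MAD scale
    bad = np.where(np.abs(d) > threshold * robust_sd)[0] + 1
    for i in bad:
        lo, hi = max(i - 1, 0), min(i + 1, len(ts) - 1)
        ts[i] = 0.5 * (ts[lo] + ts[hi])
    return ts

signal = np.sin(np.linspace(0, 6, 120))
noisy = signal + 0.01 * np.random.default_rng(2).normal(size=120)
noisy[60] += 5.0                     # one large-amplitude artefact
cleaned = repair_large_changes(noisy)
```

    The second FIACH step, flagging spatial regions of high temporal instability, would apply a similar robust statistic across voxels rather than time points.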

  9. A polarizable dipole-dipole interaction model for evaluation of the interaction energies for N-H···O=C and C-H···O=C hydrogen-bonded complexes.

    PubMed

    Li, Shu-Shi; Huang, Cui-Ying; Hao, Jiao-Jiao; Wang, Chang-Sheng

    2014-03-05

    In this article, a polarizable dipole-dipole interaction model is established to estimate the equilibrium hydrogen bond distances and the interaction energies for hydrogen-bonded complexes containing peptide amides and nucleic acid bases. We regard the chemical bonds N-H, C=O, and C-H as bond dipoles. The magnitude of the bond dipole moment varies according to its environment. We apply this polarizable dipole-dipole interaction model to a series of hydrogen-bonded complexes containing the N-H···O=C and C-H···O=C hydrogen bonds, such as simple amide-amide dimers, base-base dimers, peptide-base dimers, and β-sheet models. We find that a simple two-term function, only containing the permanent dipole-dipole interactions and the van der Waals interactions, can produce the equilibrium hydrogen bond distances compared favorably with those produced by the MP2/6-31G(d) method, whereas the high-quality counterpoise-corrected (CP-corrected) MP2/aug-cc-pVTZ interaction energies for the hydrogen-bonded complexes can be well-reproduced by a four-term function which involves the permanent dipole-dipole interactions, the van der Waals interactions, the polarization contributions, and a corrected term. Based on the calculation results obtained from this polarizable dipole-dipole interaction model, the natures of the hydrogen bonding interactions in these hydrogen-bonded complexes are further discussed. Copyright © 2013 Wiley Periodicals, Inc.
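
    The permanent dipole-dipole term at the heart of the model is the classical point-dipole interaction. A minimal sketch in reduced units (prefactor dropped; the paper's bond dipoles additionally vary in magnitude with their environment, which is not modelled here):

```python
import numpy as np

def dipole_dipole_energy(p1, p2, r1, r2):
    """Interaction energy of two point dipoles p1, p2 at positions r1, r2:
    E = [p1.p2 - 3 (p1.n)(p2.n)] / |r|^3, with n the unit separation vector."""
    r = np.asarray(r2, dtype=float) - np.asarray(r1, dtype=float)
    d = np.linalg.norm(r)
    n = r / d
    return (np.dot(p1, p2) - 3.0 * np.dot(p1, n) * np.dot(p2, n)) / d ** 3

# Head-to-tail collinear dipoles attract; parallel side-by-side dipoles repel.
e_collinear = dipole_dipole_energy([1, 0, 0], [1, 0, 0], [0, 0, 0], [2, 0, 0])
e_side_by_side = dipole_dipole_energy([1, 0, 0], [1, 0, 0], [0, 0, 0], [0, 2, 0])
```

    In the paper's two-term form, this dipole sum plus a van der Waals term already reproduces equilibrium hydrogen-bond distances; polarization enters as the additional terms in the four-term energy function.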

  10. Platysma Flap with Z-Plasty for Correction of Post-Thyroidectomy Swallowing Deformity

    PubMed Central

    Jeon, Min Kyeong; Kang, Seok Joo

    2013-01-01

    Background Recently, the number of thyroid surgery cases has been increasing; consequently, the number of patients who visit plastic surgery departments with a chief complaint of swallowing deformity has also increased. We performed a scar correction technique on post-thyroidectomy swallowing deformity via platysma flap with Z-plasty and obtained satisfactory aesthetic and functional outcomes. Methods The authors operated on 18 patients who, after thyroidectomy between January 2009 and June 2012, presented either a definitive retraction on the swallowing mechanism as an objective sign of swallowing deformity, or throat or neck discomfort on swallowing, such as a sensation of throat traction, as a subjective sign. The scar tissue that adhered to the subcutaneous tissue layer was completely excised. A platysma flap was applied as mobile interference to remove the continuity of the scar adhesion, and Z-plasty was additionally performed to prevent midline platysma banding. Results The follow-up results of the 18 patients indicated that the definitive retraction on the swallowing mechanism was completely removed. Throat or neck discomfort on the swallowing mechanism, such as the sensation of throat traction, was also alleviated in all 18 patients. When preoperative and postoperative Vancouver scar scales were compared, the scale had decreased significantly after surgery (P<0.05). Conclusions Our simple surgical method involved the formation of a platysma flap with Z-plasty as mobile interference for the correction of post-thyroidectomy swallowing deformity. This method resulted in aesthetically and functionally satisfying outcomes. PMID:23898442

  11. An improved interatomic potential for xenon in UO2: a combined density functional theory/genetic algorithm approach.

    PubMed

    Thompson, Alexander E; Meredig, Bryce; Wolverton, C

    2014-03-12

    We have created an improved xenon interatomic potential for use with existing UO2 potentials. This potential was fit to density functional theory calculations with the Hubbard U correction (DFT + U) using a genetic algorithm approach called iterative potential refinement (IPR). We examine the defect energetics of the IPR-fitted xenon interatomic potential as well as of other, previously published xenon potentials, comparing these potentials to DFT + U derived energetics for a series of xenon defects in a variety of incorporation sites (large, intermediate, and small vacant sites). We find that the existing xenon potentials overestimate the energy needed to add a xenon atom to a wide set of defect sites spanning a range of incorporation sites, and that they fail to correctly rank the energetics of the small-incorporation-site defects (xenon in an interstitial and xenon in a uranium site neighboring uranium in an interstitial). These failures are due to problematic descriptions of the Xe-O and/or Xe-U interactions in the previous xenon potentials, and are corrected by our newly created potential: the IPR-generated potential gives good agreement with DFT + U calculations to which it was not fitted, such as xenon in an interstitial (small incorporation site) and xenon in a double Schottky defect cluster (large incorporation site). Finally, we note that IPR is very flexible and can be applied to a wide variety of potential forms and materials systems, including metals and EAM potentials.

  12. Using goal- and grip-related information for understanding the correctness of other's actions: an ERP study.

    PubMed

    van Elk, Michiel; Bousardt, Roel; Bekkering, Harold; van Schie, Hein T

    2012-01-01

    Detecting errors in others' actions is of pivotal importance for joint action, competitive behavior and observational learning. Although many studies have focused on the neural mechanisms involved in detecting low-level errors, relatively little is known about error detection in everyday situations. The present study aimed to identify the functional and neural mechanisms whereby we understand the correctness of others' actions involving well-known objects (e.g. pouring coffee into a cup). Participants observed action sequences in which the correctness of the object grasped and the grip applied to a pair of objects were independently manipulated. Observation of object violations (e.g. grasping the empty cup instead of the coffee pot) resulted in a stronger P3-effect than observation of grip errors (e.g. grasping the coffee pot at the upper part instead of the handle), likely reflecting a reorienting response directing attention to the relevant location. Following the P3-effect, a parietal slow-wave positivity was observed that persisted for grip errors, likely reflecting the detection of an incorrect hand-object interaction. These findings provide new insight into the functional significance of the neurophysiological markers associated with the observation of incorrect actions and suggest that the P3-effect and the subsequent parietal slow-wave positivity may reflect the detection of errors at different levels of the action hierarchy. This study thereby elucidates the cognitive processes that support the detection of action violations in the selection of objects and grips.

  13. On the normalization of the minimum free energy of RNAs by sequence length.

    PubMed

    Trotta, Edoardo

    2014-01-01

    The minimum free energy (MFE) of ribonucleic acids (RNAs) increases at an apparent linear rate with sequence length. Simple indices, obtained by dividing the MFE by the number of nucleotides, have been used for a direct comparison of the folding stability of RNAs of various sizes. Although this normalization procedure has been used in several studies, the relationship between normalized MFE and length has not yet been investigated in detail. Here, we demonstrate that the variation of MFE with sequence length is not linear and is significantly biased by the mathematical formula used for the normalization procedure. For this reason, the normalized MFEs strongly decrease as hyperbolic functions of length and produce unreliable results when applied for the comparison of sequences with different sizes. We also propose a simple modification of the normalization formula that corrects the bias enabling the use of the normalized MFE for RNAs longer than 40 nt. Using the new corrected normalized index, we analyzed the folding free energies of different human RNA families showing that most of them present an average MFE density more negative than expected for a typical genomic sequence. Furthermore, we found that a well-defined and restricted range of MFE density characterizes each RNA family, suggesting the use of our corrected normalized index to improve RNA prediction algorithms. Finally, in coding and functional human RNAs the MFE density appears scarcely correlated with sequence length, consistent with a negligible role of thermodynamic stability demands in determining RNA size.
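    The reported bias follows from the near-linear MFE-length relationship: if MFE ≈ a + bN, the naive density MFE/N ≈ b + a/N decays hyperbolically with N. A generic intercept-subtracting correction illustrates the idea; the paper's exact corrected formula is not reproduced here, and the fit constants below are hypothetical.

```python
def naive_density(mfe, n):
    return mfe / n

def corrected_density(mfe, n, intercept):
    # Subtracting the fitted intercept removes the hyperbolic a/N bias.
    return (mfe - intercept) / n

a, b = 10.0, -0.3  # hypothetical linear fit: MFE = a + b*N
for n in (50, 200, 1000):
    mfe = a + b * n
    print(n, round(naive_density(mfe, n), 3), corrected_density(mfe, n, a))
```

    With MFE exactly linear in length, the naive densities (-0.1, -0.25, -0.29) drift with N while the corrected density stays at b = -0.3, so sequences of different sizes become directly comparable.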

  15. Comparison of Modal to Nodal Approaches for Wavefront Correction,

    DTIC Science & Technology

    1986-02-01

    the influence function of the wavefront corrector. (Implicit here is the assumption that the influence function is the same for every node, which is... To implement a nodal correction, the wavefront to be corrected is decomposed using a basis which is determined by the nodal (actuator) influence function of the wavefront corrector. This decomposition results in a set of coefficients which correspond to the drive signal required at the
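    The nodal decomposition this excerpt describes, fitting the wavefront against shifted copies of a single influence function and reading off the drive signals as the coefficients, can be sketched as a least-squares solve; the Gaussian influence shape, actuator count, and widths below are illustrative assumptions.

```python
import numpy as np

# Columns of A sample one actuator influence function, identical but shifted
# to each node; drive signals are the least-squares coefficients that best
# reproduce the wavefront (1-D cut for simplicity).
x = np.linspace(0.0, 1.0, 200)
centers = np.linspace(0.1, 0.9, 9)                        # actuator nodes
A = np.exp(-((x[:, None] - centers[None, :]) / 0.08) ** 2)
wavefront = 0.5 * np.sin(2.0 * np.pi * x)                 # aberration to correct
drive, *_ = np.linalg.lstsq(A, wavefront, rcond=None)
residual = wavefront - A @ drive
print(np.max(np.abs(residual)))                           # corrected error
```

    A modal approach would instead fit against global basis functions (e.g. Zernike modes) and then translate the modal coefficients into drive signals.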

  16. Charge transfer optical absorption and fluorescence emission of 4-(9-acridyl)julolidine from long-range-corrected time dependent density functional theory in polarizable continuum approach.

    PubMed

    Kityk, A V

    2014-07-15

    A long-range-corrected time-dependent density functional theory (LC-TDDFT) in combination with the polarizable continuum model (PCM) has been applied to study charge-transfer (CT) optical absorption and fluorescence emission energies, based on the parameterized LC-BLYP xc-potential. The 4-(9-acridyl)julolidine molecule selected for this study represents a typical CT donor-acceptor dye with strongly solvent-dependent optical absorption and fluorescence emission spectra. The results of the calculations are compared with experimental spectra reported in the literature to derive an optimal value of the model screening parameter ω. The first absorption band is predicted quite well within DFT/TDDFT/PCM with a solvent-independent screening parameter ω (ω ≈ 0.245 Bohr(-1)), whereas the fluorescence emission exhibits a strong dependence on the range separation, with the ω-value decreasing with rising solvent polarity from about 0.225 to 0.151 Bohr(-1). The dipolar properties of the initial state participating in the electronic transition have a crucial impact on the effective screening. Copyright © 2014 Elsevier B.V. All rights reserved.

  17. Theoretical insight of adsorption thermodynamics of multifunctional molecules on metal surfaces

    NASA Astrophysics Data System (ADS)

    Loffreda, David

    2006-05-01

    Adsorption thermodynamics based on density functional theory (DFT) calculations are presented for the interaction of several multifunctional molecules with Pt and Au(1 1 0)-(1 × 2) surfaces. The Gibbs free adsorption energy explicitly depends on the adsorption internal energy, which is derived from the DFT adsorption energy, and on the vibrational entropy change during the chemisorption process. Zero-point energy (ZPE) corrections have been systematically applied to the adsorption energy. Moreover, the vibrational entropy change has been computed on the basis of DFT harmonic frequencies (gas and adsorbed phases, clean surfaces), extended to all the adsorbate vibrations and the metallic surface phonons. The phase diagrams plotted in realistic conditions of temperature (from 100 to 400 K) and pressure (0.15 atm) show that the ZPE-corrected adsorption energy is the main contribution. When strong chemisorption is considered on the Pt surface, the multifunctional molecules are adsorbed on the surface in the considered temperature range. In contrast, for weak chemisorption on the Au surface, the thermodynamic results should be interpreted cautiously. The systematic errors of the model (choice of the functional, configurational entropy and vibrational entropy) make the prediction of the adsorption-desorption phase boundaries difficult.
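    The ZPE correction referred to is the standard harmonic sum over the computed vibrational frequencies, ZPE = ½ Σᵢ hνᵢ; a minimal sketch with hypothetical wavenumbers:

```python
def zero_point_energy_ev(freqs_cm):
    # ZPE = (1/2) * sum_i h*nu_i; with frequencies in cm^-1,
    # 1 cm^-1 corresponds to about 1.239842e-4 eV.
    CM_TO_EV = 1.239842e-4
    return 0.5 * sum(f * CM_TO_EV for f in freqs_cm)

# e.g. one stiff stretch plus two soft frustrated modes (illustrative numbers):
print(zero_point_energy_ev([2100.0, 400.0, 250.0]))
```

    The same frequency set then feeds the vibrational entropy via the harmonic partition function, which is how the two corrections stay consistent.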

  18. Iterative Structural and Functional Synergistic Resolution Recovery (iSFS-RR) Applied to PET-MR Images in Epilepsy

    NASA Astrophysics Data System (ADS)

    Silva-Rodríguez, J.; Cortés, J.; Rodríguez-Osorio, X.; López-Urdaneta, J.; Pardo-Montero, J.; Aguiar, P.; Tsoumpas, C.

    2016-10-01

    Structural Functional Synergistic Resolution Recovery (SFS-RR) is a technique that uses supplementary structural information from MR or CT to improve the spatial resolution of PET or SPECT images. This wavelet-based method may have a potential impact on the clinical decision-making of brain focal disorders such as refractory epilepsy, since it can produce images with better quantitative accuracy and enhanced detectability. In this work, a method for the iterative application of SFS-RR (iSFS-RR) was first developed and optimized in terms of convergence and input voxel size, and the corrected images were used for the diagnosis of 18 patients with refractory epilepsy. To this end, PET/MR images were clinically evaluated through visual inspection, atlas-based asymmetry indices (AIs) and SPM (Statistical Parametric Mapping) analysis, using uncorrected images and images corrected with SFS-RR and iSFS-RR. Our results showed that sensitivity increased from 78% for uncorrected images to 84% for SFS-RR and 94% for the proposed iSFS-RR. The proposed methodology has thus demonstrated the potential to improve the management of refractory epilepsy patients in the clinical routine.
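    Atlas-based asymmetry indices of the kind used in the evaluation are conventionally a normalized left-right difference of regional uptake; the exact formula used in the study is not reproduced here, so treat this as a generic sketch.

```python
def asymmetry_index(left, right):
    # Percent asymmetry between homologous left/right regional PET uptake;
    # the normalization by the mean makes the index scale-free.
    return 100.0 * (left - right) / ((left + right) / 2.0)

print(asymmetry_index(1.1, 0.9))  # ~20% left-dominant asymmetry
```

    In epilepsy workups, a region whose index exceeds a chosen threshold flags a candidate hypometabolic focus on the lower-uptake side.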

  19. Extensive regularization of the coupled cluster methods based on the generating functional formalism: application to gas-phase benchmarks and to the S(N)2 reaction of CHCl3 and OH- in water

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kowalski, Karol; Valiev, Marat

    2009-12-21

    The recently introduced energy expansion based on the use of a generating functional (GF) [K. Kowalski, P. D. Fan, J. Chem. Phys. 130, 084112 (2009)] provides a way of constructing size-consistent non-iterative coupled-cluster (CC) corrections in terms of moments of the CC equations. To take advantage of this expansion in a strongly interacting regime, regularization of the cluster amplitudes is required in order to counteract the excessive growth of the norm of the CC wavefunction. Although proven to be efficient, the previously discussed form of the regularization does not lead to rigorously size-consistent corrections. In this paper we address the issue of size-consistent regularization of the GF expansion by redefining the equations for the cluster amplitudes. The performance and basic features of the proposed methodology are illustrated on several gas-phase benchmark systems. Moreover, the regularized GF approaches are combined with a QM/MM module and applied to describe the SN2 reaction of CHCl3 and OH- in aqueous solution.

  20. Effects of diurnal adjustment on biases and trends derived from inter-sensor calibrated AMSU-A data

    NASA Astrophysics Data System (ADS)

    Chen, H.; Zou, X.; Qin, Z.

    2018-03-01

    Measurements of brightness temperatures from Advanced Microwave Sounding Unit-A (AMSU-A) temperature sounding instruments onboard NOAA Polar-orbiting Operational Environmental Satellites (POES) have been extensively used for studying atmospheric temperature trends over the past several decades. Inter-sensor biases, orbital drifts and diurnal variations of atmospheric and surface temperatures must be considered before using a merged long-term time series of AMSU-A measurements from NOAA-15, -18, -19 and MetOp-A. We study the impacts of the orbital drift and orbital differences of local equator crossing times (LECTs) on temperature trends derivable from AMSU-A using near-nadir observations from NOAA-15, NOAA-18, NOAA-19, and MetOp-A during 1998-2014 over the Amazon rainforest. The double-difference method is first applied to estimate inter-sensor biases between any two satellites during their overlapping time period. The inter-calibrated observations are then used to generate a monthly mean diurnal cycle of brightness temperature for each AMSU-A channel. A diurnal correction is finally applied to each channel to obtain AMSU-A data valid at the same local time. Impacts of the inter-sensor bias correction and the diurnal correction on the AMSU-A-derived long-term atmospheric temperature trends are separately quantified and compared with those derived from the original data. It is shown that the orbital drift and the differences of LECT among different POESs induce a large uncertainty in AMSU-A-derived long-term warming/cooling trends. After applying an inter-sensor bias correction and a diurnal correction, the warming trends at different local times, which are approximately the same, are about half as large as the trends derived without applying these corrections.
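    The double-difference step can be sketched as differencing each satellite's collocated brightness temperatures against a shared reference before differencing the satellites, so the common geophysical signal cancels; the arrays and the reference below are illustrative, not the study's data.

```python
import numpy as np

def double_difference_bias(tb_a, tb_b, reference):
    # Inter-sensor bias estimate over an overlap period; the shared
    # reference (e.g. a monthly-mean climatology) cancels in the difference,
    # leaving only the relative calibration offset.
    return float(np.mean((tb_a - reference) - (tb_b - reference)))

ref = np.full(1000, 250.0)      # K, common reference at the collocations
tb_a = ref + 0.5                # satellite A reads 0.5 K warm
tb_b = ref - 0.2                # satellite B reads 0.2 K cold
print(double_difference_bias(tb_a, tb_b, ref))
```

    The recovered 0.7 K is the relative bias that would be removed before merging the two records.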

  1. Corrections Regarding the Impedance of Distance Functions for Several g(d) Functions

    ERIC Educational Resources Information Center

    Beaman, Jay

    1976-01-01

    Five functions were introduced for modeling travel behavior in the Beaman article "Distance and the 'Reaction' to Distance as a Function of Distance" published in Vol. 6, No. 3 of "Journal of Leisure Research" with the graphs of the functions printed incorrectly. This is a corrected version. (MM)

  2. The Functional Illiterate: Is Correctional Education Doing Its Job?

    ERIC Educational Resources Information Center

    Loeffler, Cynthia A.; Martin, Thomas C.

    A study researched the existence of established Adult Basic Education (ABE) curricula for incarcerated adult inmate/students in state correctional education programs, specifically the functionally illiterate. All 50 State Departments of Corrections were surveyed by questionnaire; 44 responded. ABE was a basis for curricula according to 37.6% of…

  3. Edge Detection Method Based on Neural Networks for COMS MI Images

    NASA Astrophysics Data System (ADS)

    Lee, Jin-Ho; Park, Eun-Bin; Woo, Sun-Hee

    2016-12-01

    Communication, Ocean and Meteorological Satellite (COMS) Meteorological Imager (MI) images are processed for radiometric and geometric correction from raw image data. When intermediate image data are matched and compared with reference landmark images in the geometric correction process, various edge detection techniques can be applied. It is essential to obtain a precisely and correctly edged image in this process, since its matching with the reference is directly related to the accuracy of the ground station's output images. An edge detection method based on neural networks is applied in the ground processing of MI images to obtain sharp edges in the correct positions. The simulation results are analyzed and characterized by comparing them with the results of conventional methods, such as Sobel and Canny filters.
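    One of the conventional baselines mentioned, the Sobel operator, can be sketched directly in NumPy (a naive loop, with border pixels left at zero for brevity):

```python
import numpy as np

def sobel_magnitude(img):
    # Sobel gradient magnitude; border pixels are left at zero for simplicity.
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
    ky = kx.T
    out = np.zeros_like(img, dtype=float)
    for i in range(1, img.shape[0] - 1):
        for j in range(1, img.shape[1] - 1):
            patch = img[i - 1:i + 2, j - 1:j + 2]
            out[i, j] = np.hypot(np.sum(kx * patch), np.sum(ky * patch))
    return out

img = np.zeros((5, 5))
img[:, 3:] = 1.0                 # a vertical step edge
print(sobel_magnitude(img))      # strong response along the step
```

    A learned edge detector replaces these fixed 3×3 kernels with weights trained to localize landmark edges more precisely.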

  4. The influence of kyphosis correction surgery on pulmonary function and thoracic volume.

    PubMed

    Zeng, Yan; Chen, Zhongqiang; Ma, Desi; Guo, Zhaoqing; Qi, Qiang; Li, Weishi; Sun, Chuiguo; Liu, Ning; White, Andrew P

    2014-10-01

    A clinical study. To measure the changes in pulmonary function and thoracic volume associated with surgical correction of kyphotic deformities. No prior study has focused on the pulmonary function and thoracic cavity volume before and after corrective surgery for kyphosis. Thirty-four patients with kyphosis underwent posterior deformity correction with instrumented fusion. Preoperative and postoperative pulmonary function was measured, and pulmonary function grade was evaluated as mild, significant, or severe. The change in preoperative to postoperative pulmonary function was analyzed, using 6 comparative subgroupings of patients on the basis of age, severity of kyphosis, location of kyphosis apex, length of follow-up time after surgery, degree of kyphosis correction, and number of segments fused. A second group of 19 patients also underwent posterior surgical correction of kyphosis, which had thoracic volume measured preoperatively and postoperatively with computed tomographic scanning. All of the pulmonary impairments were found to be restrictive. After surgery, most of the patients had improvement of the pulmonary function. Before surgery, the pulmonary function differences were found to be significant based on both severity of preoperative kyphosis (<60° vs. >60°) and location of the kyphosis apex (above T10 vs. below T10). Younger patients (younger than 35 yr) were more likely to exhibit statistically significant improvements in pulmonary function after surgery. However, thoracic volume was not significantly related to pulmonary function parameters. After surgery, average thoracic volume had no significant change. The major pulmonary impairment caused by kyphosis was found to be restrictive. Patients with kyphosis angle of 60° or greater or with kyphosis apex above T10 had more severe pulmonary dysfunction. Patients' age was significantly related to change in pulmonary function after surgery. 
However, the average thoracic volume had no significant change after surgery.

  5. Wigner expansions for partition functions of nonrelativistic and relativistic oscillator systems

    NASA Technical Reports Server (NTRS)

    Zylka, Christian; Vojta, Guenter

    1993-01-01

    The equilibrium quantum statistics of various anharmonic oscillator systems, including relativistic systems, is considered within the Wigner phase space formalism. For this purpose the Wigner series expansion for the partition function is generalized to include relativistic corrections. The new series for partition functions and all thermodynamic potentials yield quantum corrections in terms of powers of ħ² and relativistic corrections given by Kelvin functions (modified Hankel functions) K_ν(mc²/kT). As applications, the symmetric Toda oscillator, isotonic and singular anharmonic oscillators, and hindered rotators, i.e. oscillators with cosine potential, are addressed.
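    The relativistic corrections enter through the modified Bessel functions K_ν(mc²/kT). As a hedged sketch, these can be evaluated from the standard integral representation K_ν(x) = ∫₀^∞ e^(-x cosh t) cosh(νt) dt; the quadrature parameters below are arbitrary choices, not values from the paper.

```python
import math

def bessel_k(nu, x, tmax=8.0, n=4000):
    # K_nu(x) via the integral representation, trapezoidal rule on [0, tmax];
    # the integrand decays like exp(-x*cosh(t)), so tmax=8 is ample for x >= 1.
    h = tmax / n
    total = 0.5 * (math.exp(-x)
                   + math.exp(-x * math.cosh(tmax)) * math.cosh(nu * tmax))
    for i in range(1, n):
        t = i * h
        total += math.exp(-x * math.cosh(t)) * math.cosh(nu * t)
    return total * h

# The relativistic factor K_2(mc^2/kT) shrinks as mc^2/kT grows (lower T):
print(bessel_k(2, 5.0), bessel_k(2, 10.0))
```

    For large arguments the values approach the asymptotic form sqrt(π/2x)·e^(-x), which is the regime relevant when mc² ≫ kT.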

  6. Accuracy Improvement of Multi-Axis Systems Based on Laser Correction of Volumetric Geometric Errors

    NASA Astrophysics Data System (ADS)

    Teleshevsky, V. I.; Sokolov, V. A.; Pimushkin, Ya I.

    2018-04-01

    The article describes a volumetric geometric error correction method for CNC-controlled multi-axis systems (machine tools, CMMs etc.). Kalman's concept of "Control and Observation" is used. A versatile multi-function laser interferometer serves as the Observer to measure the machine's error functions. A systematic error map of the machine's workspace is produced from the error-function measurements, and this map underpins the error correction strategy. The article proposes a new method of forming the error correction strategy, based on the error distribution within the machine's workspace and a CNC-program postprocessor. The postprocessor provides minimal error values within the maximal workspace zone. The results are confirmed by error correction of precision CNC machine tools.
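    A minimal sketch of the postprocessor idea: offset each commanded position by the interpolated error from the measured map so the realized position lands on the nominal target. One axis is shown, and the grid and error values are hypothetical, not measured data from the article.

```python
import numpy as np

grid = np.linspace(0.0, 500.0, 11)        # mm, laser calibration points
err = 1e-3 * np.sin(grid / 80.0)          # mm, hypothetical measured error

def corrected_command(target_mm):
    # Command the machine to the target minus its local error at that
    # position, interpolated from the calibration grid.
    return target_mm - np.interp(target_mm, grid, err)

print(corrected_command(123.0))
```

    The full volumetric scheme does the same with trivariate interpolation over (x, y, z) and per-axis error components.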

  7. Characterization of Transducers and Resonators under High Drive Levels

    NASA Technical Reports Server (NTRS)

    Sherrit, Stewart; Bao, X.; Sigel, D. A.; Gradziel, M. J.; Askins, S. A.; Dolgin, B. P.; Bar-Cohen, Y.

    2001-01-01

    In many applications, piezoelectric transducers are driven at AC voltage levels well beyond the level for which the material was nominally characterized. In this paper we describe an experimental setup that allows for the determination of the main transducer or resonator properties under large AC drive. A sinusoidal voltage from a waveform generator is amplified and applied across the transducer/resonator in series with a known high-power resistor. The amplitude of the applied voltage and the amplitude and relative phase of the current through the resistor are monitored on a digital scope. The frequency of the applied signal is swept through resonance and the voltage/current signals are recorded. After corrections for the series resistance and parasitic elements, the technique allows for the determination of the complex impedance spectra of the sample as a function of frequency. In addition, access to the current signal allows for the direct investigation of non-linear effects through the application of Fourier transform techniques on the current signal. Our results indicate that care is required when interpreting impedance data at high drive level due to the frequency dependence of the dissipated power. Although the transducer/resonator at a single frequency and after many cycles may reach thermal equilibrium, the spectra as a whole cannot be considered an isothermal measurement due to the temperature change with frequency. Methods to correct for this effect are discussed. Results determined from resonators of both soft and hard PZT and an ultrasonic horn transducer are presented.
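    The impedance extraction described, recovering the complex impedance from the measured voltage amplitude, current amplitude, and relative phase, then removing the known series resistor, can be sketched as follows (the paper's parasitic-element corrections are omitted):

```python
import cmath

def device_impedance(v_amp, i_amp, phase_rad, r_series):
    # Total impedance from amplitudes and relative phase, Z = (V/I)e^{j*phi},
    # minus the known series resistance; parasitic corrections omitted.
    z_total = (v_amp / i_amp) * cmath.exp(1j * phase_rad)
    return z_total - r_series

print(device_impedance(10.0, 1.0, 0.0, 2.0))   # purely resistive example
```

    Sweeping the drive frequency and applying this at each point yields the complex impedance spectrum described in the paper.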

  8. The Effects of Item Format and Cognitive Domain on Students' Science Performance in TIMSS 2011

    NASA Astrophysics Data System (ADS)

    Liou, Pey-Yan; Bulut, Okan

    2017-12-01

    The purpose of this study was to examine eighth-grade students' science performance in terms of two test design components, item format and cognitive domain. The Taiwanese data came from the 2011 administration of the Trends in International Mathematics and Science Study (TIMSS), one of the major international large-scale assessments in science. Item difficulty analysis was initially applied to show the proportion of correct items. A regression-based cumulative link mixed modeling (CLMM) approach was further utilized to estimate the impact of item format, cognitive domain, and their interaction on the students' science scores. The results of the proportion-correct statistics showed that constructed-response items were more difficult than multiple-choice items, and that the reasoning cognitive domain items were more difficult compared to the items in the applying and knowing domains. In terms of the CLMM results, students tended to obtain higher scores when answering constructed-response items as well as items in the applying cognitive domain. When the two predictors and the interaction term were included together, the directions and magnitudes of the predictors on student science performance changed substantially. Plausible explanations for the complex nature of the effects of the two test-design predictors on student science performance are discussed. The results provide practical, empirically based evidence for test developers, teachers, and stakeholders to be aware of the differential function of item format, cognitive domain, and their interaction in students' science performance.

  9. Three-dimensional autoradiographic localization of quench-corrected glycine receptor specific activity in the mouse brain using 3H-strychnine as the ligand

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    White, W.F.; O'Gorman, S.; Roe, A.W.

    1990-03-01

    The autoradiographic analysis of neurotransmitter receptor distribution is a powerful technique that provides extensive information on the localization of neurotransmitter systems. Computer methodologies are described for the analysis of autoradiographic material which include quench correction, 3-dimensional display, and quantification based on anatomical boundaries determined from the tissue sections. These methodologies are applied to the problem of the distribution of glycine receptors measured by 3H-strychnine binding in the mouse CNS. The most distinctive feature of this distribution is its marked caudorostral gradient. The highest densities of binding sites within this gradient were seen in somatic motor and sensory areas; high densities of binding were seen in branchial efferent and special sensory areas. Moderate levels were seen in nuclei related to visceral function. Densities within the reticular formation paralleled the overall gradient with high to moderate levels of binding. The colliculi had low and the diencephalon had very low levels of binding. No binding was seen in the cerebellum or the telencephalon with the exception of the amygdala, which had very low levels of specific binding. This distribution of glycine receptors correlates well with the known functional distribution of glycine synaptic function. These data are illustrated in 3 dimensions and discussed in terms of the significance of the analysis techniques on this type of data as well as the functional significance of the distribution of glycine receptors.

  10. [Peptide correction of age-related pineal disturbances in monkeys].

    PubMed

    Goncharova, N D; Vengerin, A A; Shmaliĭ, A V; Khavinson, V Kh

    2003-01-01

    Investigation of the age-related changes of pineal gland function, and of possible ways to overcome them, in a nonhuman primate model was the purpose of this study. Hormonal function of the pineal gland was studied in 38 Macaca mulatta females of two age groups: 6-8 years old (n = 18) and 20-26 years old (n = 20). Pineal function was studied under basal conditions and after administration of the pineal peptide preparations epithalamin and epitalon, both developed at the St. Petersburg Institute of Bioregulation and Gerontology (Russia). Plasma melatonin concentration in monkeys was found to have a well-expressed, high-amplitude diurnal rhythm, with the minimum at 4 p.m. and the maximum at 10 p.m.-3 a.m. With aging, the mean diurnal melatonin concentration decreases 1.5-2-fold, as do the concentrations at individual time points (9 p.m., 10 p.m., 3 a.m. and 4 a.m.). Administration of the pineal peptides epithalamin (5 mg/animal/day intramuscularly for 10 consecutive days) or epitalon (10 micrograms/animal/day intramuscularly for 7-10 consecutive days) induced a significant increase in night plasma melatonin in old monkeys, but did not change the melatonin level in young monkeys. Given that melatonin is very important for regulation of the diurnal rhythm of several organs and systems, epithalamin and epitalon appear promising for the correction of age-related hormonal imbalance and age-related pathology.

  11. The massive end of the luminosity and stellar mass functions and clustering from CMASS to SDSS: evidence for and against passive evolution

    NASA Astrophysics Data System (ADS)

    Bernardi, M.; Meert, A.; Sheth, R. K.; Huertas-Company, M.; Maraston, C.; Shankar, F.; Vikram, V.

    2016-02-01

    We describe the luminosity function, based on Sérsic fits to the light profiles, of CMASS galaxies at z ˜ 0.55. Compared to previous estimates, our Sérsic-based reductions imply more luminous, massive galaxies, consistent with the effects of Sérsic- rather than Petrosian or de Vaucouleurs-based photometry on the Sloan Digital Sky Survey (SDSS) main galaxy sample at z ˜ 0.1. This implies a significant revision of the high-mass end of the correlation between stellar and halo mass. Inferences about the evolution of the luminosity and stellar mass functions depend strongly on the assumed, and uncertain, k + e corrections. In turn, these depend on the assumed age of the population. Applying k + e corrections taken from fitting the models of Maraston et al. to the colours of both SDSS and CMASS galaxies, the evolution of the luminosity and stellar mass functions appears impressively passive, provided that the fits are required to return old ages. However, when matched in comoving number- or luminosity-density, the SDSS galaxies are less strongly clustered compared to their counterparts in CMASS. This rules out the passive evolution scenario, and, indeed, any minor merger scenarios which preserve the rank ordering in stellar mass of the population. Potential incompletenesses in the CMASS sample would further enhance this mismatch. Our analysis highlights the virtue of combining clustering measurements with number counts.

  12. First-Year Wilkinson Microwave Anisotropy Probe (WMAP) Observations: Data Processing Methods and Systematic Error Limits

    NASA Technical Reports Server (NTRS)

    Hinshaw, G.; Barnes, C.; Bennett, C. L.; Greason, M. R.; Halpern, M.; Hill, R. S.; Jarosik, N.; Kogut, A.; Limon, M.; Meyer, S. S.

    2003-01-01

    We describe the calibration and data processing methods used to generate full-sky maps of the cosmic microwave background (CMB) from the first year of Wilkinson Microwave Anisotropy Probe (WMAP) observations. Detailed limits on residual systematic errors are assigned based largely on analyses of the flight data supplemented, where necessary, with results from ground tests. The data are calibrated in flight using the dipole modulation of the CMB due to the observatory's motion around the Sun. This constitutes a full-beam calibration source. An iterative algorithm simultaneously fits the time-ordered data to obtain calibration parameters and pixelized sky map temperatures. The noise properties are determined by analyzing the time-ordered data with this sky signal estimate subtracted. Based on this, we apply a pre-whitening filter to the time-ordered data to remove a low level of 1/f noise. We infer and correct for a small (approximately 1%) transmission imbalance between the two sky inputs to each differential radiometer, and we subtract a small sidelobe correction from the 23 GHz (K band) map prior to further analysis. No other systematic error corrections are applied to the data. Calibration and baseline artifacts, including the response to environmental perturbations, are negligible. Systematic uncertainties are comparable to statistical uncertainties in the characterization of the beam response. Both are accounted for in the covariance matrix of the window function and are propagated to uncertainties in the final power spectrum. We characterize the combined upper limits to residual systematic uncertainties through the pixel covariance matrix.
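The idea of pre-whitening a 1/f-contaminated time stream can be sketched in the frequency domain: divide the spectrum by the square root of a model noise power spectral density so that the filtered data have approximately flat spectral power. This is a generic illustration with a synthetic time-ordered data stream and an assumed PSD model, not the actual WMAP filter.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4096

# Synthesize noise with a 1/f component plus white noise, directly in the
# frequency domain: PSD proportional to (1 + f_knee / f).
freqs = np.fft.rfftfreq(n, d=1.0)
shape = np.ones_like(freqs)
shape[1:] = np.sqrt(1.0 + 0.05 / freqs[1:])
spec = shape * (rng.standard_normal(freqs.size) + 1j * rng.standard_normal(freqs.size))
tod = np.fft.irfft(spec, n)

# Pre-whitening: divide the measured spectrum by sqrt(model PSD) so the
# filtered time stream has (approximately) flat spectral power.
white_spec = np.fft.rfft(tod) / shape
tod_white = np.fft.irfft(white_spec, n)

def band_ratio(spectrum, lo=slice(1, 101), hi=slice(-1000, None)):
    """Ratio of mean low-frequency power to mean high-frequency power."""
    p = np.abs(spectrum) ** 2
    return p[lo].mean() / p[hi].mean()

ratio_raw = band_ratio(np.fft.rfft(tod))        # well above 1: excess 1/f power
ratio_white = band_ratio(np.fft.rfft(tod_white))  # near 1: flat spectrum
```

In practice the PSD model would be estimated from the signal-subtracted data, as the abstract describes, rather than assumed.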

  13. Spatial filters and automated spike detection based on brain topographies improve sensitivity of EEG-fMRI studies in focal epilepsy.

    PubMed

    Siniatchkin, Michael; Moeller, Friederike; Jacobs, Julia; Stephani, Ulrich; Boor, Rainer; Wolff, Stephan; Jansen, Olav; Siebner, Hartwig; Scherg, Michael

    2007-09-01

    The ballistocardiogram (BCG) represents one of the most prominent sources of artifacts that contaminate the electroencephalogram (EEG) during functional MRI. The BCG artifacts may affect the detection of interictal epileptiform discharges (IED) in patients with epilepsy, reducing the sensitivity of the combined EEG-fMRI method. In this study we improved the BCG artifact correction using a multiple source correction (MSC) approach. On the one hand, a source analysis of the IEDs was applied to the EEG data obtained outside the MRI scanner to prevent the distortion of EEG signals of interest during the correction of BCG artifacts. On the other hand, the topographies of the BCG artifacts were defined based on the EEG recorded inside the scanner. The topographies of the BCG artifacts were then added to the surrogate model of IED sources and a combined source model was applied to the data obtained inside the scanner. The artifact signal was then subtracted without considerable distortion of the IED topography. The MSC approach was compared with the traditional averaged artifact subtraction (AAS) method. Both methods reduced the spectral power of BCG-related harmonics and enabled better detection of IEDs. Compared with the conventional AAS method, the MSC approach increased the sensitivity of IED detection because the IED signal was less attenuated when subtracting the BCG artifacts. The proposed MSC method is particularly useful in situations in which the BCG artifact is spatially correlated and time-locked with the EEG signal produced by the focal brain activity of interest.
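The core of the combined-source idea is plain linear algebra: model each EEG sample as a mixture of known channel topographies (artifact plus signal), fit all of them jointly, then subtract only the artifact subspace. The sketch below uses made-up random topographies and simulated waveforms; the dimensions, waveforms, and names are hypothetical, not the MSC implementation of the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
n_chan, n_samp = 32, 500

# Hypothetical channel topographies: one BCG artifact, one IED (spike) source.
topo_bcg = rng.standard_normal(n_chan)
topo_ied = rng.standard_normal(n_chan)

# Simulated EEG: artifact waveform + one spike + sensor noise.
w_bcg = np.sin(2 * np.pi * 1.2 * np.arange(n_samp) / 250.0)  # ~heart-rate oscillation
w_ied = np.zeros(n_samp)
w_ied[240:260] = np.hanning(20)                              # one brief spike
eeg = np.outer(topo_bcg, w_bcg) + np.outer(topo_ied, w_ied)
eeg += 0.1 * rng.standard_normal((n_chan, n_samp))

# Combined source model: fit both topographies jointly per sample, then
# subtract only the artifact component, so the spike topography is preserved.
A = np.column_stack([topo_bcg, topo_ied])           # channels x sources
coeffs, *_ = np.linalg.lstsq(A, eeg, rcond=None)    # sources x samples
cleaned = eeg - np.outer(topo_bcg, coeffs[0])       # remove BCG part only
```

Fitting the artifact alone (as in a naive regression) would let the artifact topography absorb part of any spatially correlated spike signal; including the IED topography in the model is what prevents that attenuation.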

  14. Recalculating the quasar luminosity function of the extended Baryon Oscillation Spectroscopic Survey

    NASA Astrophysics Data System (ADS)

    Caditz, David M.

    2017-12-01

    Aims: The extended Baryon Oscillation Spectroscopic Survey (eBOSS) of the Sloan Digital Sky Survey provides a uniform sample of over 13 000 variability selected quasi-stellar objects (QSOs) in the redshift range 0.68

  15. Wave-function-based approach to quasiparticle bands: Insight into the electronic structure of c-ZnS

    NASA Astrophysics Data System (ADS)

    Stoyanova, A.; Hozoi, L.; Fulde, P.; Stoll, H.

    2011-05-01

    Ab initio wave-function-based methods are employed for the study of quasiparticle energy bands of zinc-blende ZnS, with focus on the Zn 3d “semicore” states. The relative energies of these states with respect to the top of the S 3p valence bands appear to be poorly described as compared to experimental values not only within the local density approximation (LDA), but also when many-body corrections within the GW approximation are applied to the LDA or LDA + U mean-field solutions [T. Miyake, P. Zhang, M. L. Cohen, and S. G. Louie, Phys. Rev. B 74, 245213 (2006)]. In the present study, we show that for the accurate description of the Zn 3d states a correlation treatment based on wave-function methods is needed. Our study rests on a local Hamiltonian approach which rigorously describes the short-range polarization and charge redistribution effects around an extra hole or electron placed into the valence or conduction bands, respectively, of semiconductors and insulators. The method also facilitates the computation of electron correlation effects beyond relaxation and polarization. The electron correlation treatment is performed on finite clusters cut out of the infinite system. The formalism makes use of localized Wannier functions and embedding potentials derived explicitly from prior periodic Hartree-Fock calculations. The on-site and nearest-neighbor charge relaxation leads to corrections of several eV to the Hartree-Fock band energies and gap. Corrections due to long-range polarization are of the order of 1.0 eV. The dispersion of the Hartree-Fock bands is only slightly affected by electron correlations. We find the Zn 3d “semicore” states to lie ~9.0 eV below the top of the S 3p valence bands, in very good agreement with values from valence-band x-ray photoemission.

  16. Measurement of turbulent spatial structure and kinetic energy spectrum by exact temporal-to-spatial mapping

    NASA Astrophysics Data System (ADS)

    Buchhave, Preben; Velte, Clara M.

    2017-08-01

    We present a method for converting a time record of turbulent velocity measured at a point in a flow to a spatial velocity record consisting of consecutive convection elements. The spatial record allows computation of dynamic statistical moments such as turbulent kinetic wavenumber spectra and spatial structure functions in a way that completely bypasses the need for Taylor's hypothesis. The spatial statistics agree with the classical counterparts, such as the total kinetic energy spectrum, at least for spatial extents up to the Taylor microscale. The requirements for applying the method are access to the instantaneous velocity magnitude, in addition to the desired flow quantity, and a high temporal resolution in comparison to the relevant time scales of the flow. We map, without distortion and bias, notoriously difficult developing turbulent high intensity flows using three main aspects that distinguish these measurements from previous work in the field: (1) The measurements are conducted using laser Doppler anemometry and are therefore not contaminated by directional ambiguity (in contrast to, e.g., frequently employed hot-wire anemometers); (2) the measurement data are extracted using a correctly and transparently functioning processor and are analysed using methods derived from first principles to provide unbiased estimates of the velocity statistics; (3) the exact mapping proposed herein has been applied to the high turbulence intensity flows investigated to avoid the significant distortions caused by Taylor's hypothesis. The method is first confirmed to produce the correct statistics using computer simulations and later applied to measurements in some of the most difficult regions of a round turbulent jet—the non-equilibrium developing region and the outermost parts of the developed jet. 
The proposed mapping is successfully validated using corresponding directly measured spatial statistics in the fully developed jet, even in the difficult outer regions of the jet where the average convection velocity is negligible and turbulence intensities increase dramatically. The measurements in the developing region reveal interesting features of an incomplete Richardson-Kolmogorov cascade under development.
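The temporal-to-spatial conversion described above amounts to integrating the instantaneous convection speed, ds = |u| dt, instead of assuming a single mean convection velocity (Taylor's hypothesis). A minimal one-dimensional sketch follows; it uses a synthetic velocity record, whereas the paper works with the full instantaneous velocity magnitude from laser Doppler anemometry, and a real analysis would resample the result onto a uniform spatial grid before computing spectra.

```python
import numpy as np

def temporal_to_spatial(u, dt):
    """Map a time record of velocity to spatial positions of consecutive
    convection elements by integrating the local speed: ds = |u| dt."""
    speed = np.abs(u)
    # Position of sample i is the accumulated convected distance up to i.
    return np.concatenate(([0.0], np.cumsum(speed[:-1] * dt)))

# Synthetic record: mean flow of 2.0 plus a sinusoidal fluctuation.
dt = 1e-3
t = np.arange(0.0, 1.0, dt)
u = 2.0 + 0.5 * np.sin(2 * np.pi * 5.0 * t)

s = temporal_to_spatial(u, dt)
# Under Taylor's hypothesis the spacing would be uniform, s_taylor = U_mean * t;
# the exact mapping instead stretches or compresses space with the local speed.
s_taylor = u.mean() * t
```

In high-intensity or low-mean-velocity regions (such as the outer jet), u fluctuates strongly around a small mean, so s and s_taylor diverge sharply; that is precisely where the exact mapping matters most.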

  17. Filtering Non-Linear Transfer Functions on Surfaces.

    PubMed

    Heitz, Eric; Nowrouzezahrai, Derek; Poulin, Pierre; Neyret, Fabrice

    2014-07-01

    Applying non-linear transfer functions and look-up tables to procedural functions (such as noise), surface attributes, or even surface geometry is a common strategy used to enhance visual detail. Their simplicity and ability to mimic a wide range of realistic appearances have led to their adoption in many rendering problems. As with any textured or geometric detail, proper filtering is needed to reduce aliasing when viewed across a range of distances, but accurate and efficient transfer function filtering remains an open problem for several reasons: transfer functions are complex and non-linear, especially when mapped through procedural noise and/or geometry-dependent functions, and the effects of perspective and masking further complicate the filtering over a pixel's footprint. We accurately solve this problem by computing and sampling from specialized filtering distributions on the fly, yielding very fast performance. We investigate the case where the transfer function to filter is a color map applied to (macroscale) surface textures (like noise), as well as color maps applied according to (microscale) geometric details. We introduce a novel representation of a (potentially modulated) color map's distribution over pixel footprints using Gaussian statistics and, in the more complex case of high-resolution color mapped microsurface details, our filtering is view- and light-dependent, and capable of correctly handling masking and occlusion effects. Our approach can be generalized to filter other physically based rendering quantities. We propose an application to shading with irradiance environment maps over large terrains. Our framework is also compatible with the case of transfer functions used to warp surface geometry, as long as the transformations can be represented with Gaussian statistics, leading to proper view- and light-dependent filtering results. 
Our results match ground truth and our solution is well suited to real-time applications, requires only a few lines of shader code (provided in supplemental material, which can be found on the Computer Society Digital Library at http://doi.ieeecomputersociety.org/10.1109/TVCG.2013.102), is high performance, and has a negligible memory footprint.
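The key quantity in this kind of filtering is the expectation of the non-linear transfer function under the distribution of the underlying attribute over the pixel footprint. If that distribution is modelled as Gaussian, the filtered value is E[c(X)] with X ~ N(mu, sigma). The sketch below evaluates this expectation with Gauss-Hermite quadrature; it is a generic illustration of the Gaussian-statistics idea, not the paper's on-the-fly sampling scheme, and the step color map is a made-up example.

```python
import numpy as np

def filter_transfer_function(c, mu, sigma, n=64):
    """Antialiased value of a non-linear transfer function c(x) when the
    underlying attribute over the footprint is Gaussian: E[c(X)], X ~ N(mu, sigma).
    Uses probabilists' Gauss-Hermite quadrature with n nodes."""
    x, w = np.polynomial.hermite_e.hermegauss(n)
    return np.sum(w * c(mu + sigma * x)) / np.sum(w)

# A sharp step color map: point-sampling at the footprint mean returns 0 or 1
# (aliasing), whereas the filtered value is the Gaussian coverage fraction.
step = lambda x: (x > 0.0).astype(float)
filtered = filter_transfer_function(step, mu=0.0, sigma=1.0)  # coverage = 0.5
```

As the footprint grows (larger sigma), the filtered color converges smoothly to the map's mean, which is the behavior that removes the aliasing of naive point sampling.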

  18. Relativistic and the first sectorial harmonics corrections in the critical inclination

    NASA Astrophysics Data System (ADS)

    Rahoma, W. A.; Khattab, E. H.; Abd El-Salam, F. A.

    2014-05-01

    The problem of the critical inclination is treated in the Hamiltonian framework, taking into consideration post-Newtonian corrections as well as the main correction term of sectorial harmonics for an Earth-like planet. The Hamiltonian is expressed in terms of Delaunay canonical variables. A canonical transformation is applied to eliminate short period terms. A modified critical inclination is obtained due to the relativistic and first sectorial harmonics corrections.
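For context, the classical (Newtonian, J2-only) critical inclination is the inclination at which the secular drift of the argument of perigee, proportional to (4 - 5 sin² i), vanishes. The corrections studied in the paper shift this baseline value; the snippet below computes only the classical result.

```python
import math

# Classical critical inclination: d(omega)/dt ∝ (4 - 5 sin^2 i) = 0
# gives sin^2 i = 4/5, i.e. a prograde and a retrograde solution.
i_crit = math.degrees(math.asin(math.sqrt(4.0 / 5.0)))   # ~63.43 deg
i_crit_retro = 180.0 - i_crit                            # ~116.57 deg
```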

  19. Evaluation of clinical methods for peroneal muscle testing.

    PubMed

    Sarig-Bahat, Hilla; Krasovsky, Andrei; Sprecher, Elliot

    2013-03-01

    Manual muscle testing of the peroneal muscles is well accepted as a testing method in musculoskeletal physiotherapy for the assessment of the foot and ankle. The peroneus longus and brevis are primary evertors and secondary plantar flexors of the ankle joint. However, some international textbooks describe them as dorsi flexors, when instructing peroneal muscle testing. The identified variability raised a question whether these educational texts are reflected in the clinical field. The purposes of this study were to investigate what are the methods commonly used in the clinical field for peroneal muscle testing and to evaluate their compatibility with functional anatomy. A cross-sectional study was conducted, using an electronic questionnaire sent to 143 Israeli physiotherapists in the musculoskeletal field. The survey questioned on the anatomical location of manual resistance and the combination of motions resisted. Ninety-seven responses were received. The majority (69%) of respondents related correctly to the peronei as evertors, but asserted that resistance should be located over the dorsal aspect of the fifth metatarsus, thereby disregarding the peroneus longus. Moreover, 38% of the respondents described the peronei as dorsi flexors, rather than plantar flexors. Only 2% selected the correct method of resisting plantarflexion and eversion at the base of the first metatarsus. We consider this technique to be the most compatible with the anatomy of the peroneus longus and brevis. The Fisher-Freeman-Halton test indicated that there was a significant relationship between responses on the questions (P = 0.0253, 95% CI 0.0249-0.0257), thus justifying further correspondence analysis. 
The correspondence analysis found no clustering of answers that were both compatible with the anatomical evidence and applied with the correct technique, but it did demonstrate a common error, resisting dorsiflexion rather than plantarflexion, in agreement with the described frequencies. Inconsistencies were identified between the instruction method commonly provided for peroneal muscle testing in textbooks and the functional anatomy of these muscles. Results reflect the lack of accuracy in applying functional anatomy to peroneal testing. This may be due to limited use of peroneal muscle testing or to inadequate investigation of the existing evaluation methods and their validity. Accordingly, teaching materials and clinical methods used for this test should be re-evaluated. Further research should investigate the value of peroneal muscle testing in clinical ankle evaluation. Copyright © 2012 John Wiley & Sons, Ltd.

  20. 34 CFR 403.100 - What are the requirements for designating a State corrections educational agency to administer...

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... corrections educational agency to administer the Programs for Criminal Offenders? 403.100 Section 403.100... ADULT EDUCATION, DEPARTMENT OF EDUCATION STATE VOCATIONAL AND APPLIED TECHNOLOGY EDUCATION PROGRAM What... § 403.100 What are the requirements for designating a State corrections educational agency to administer...
