Higher order total variation regularization for EIT reconstruction.
Gong, Bo; Schullcke, Benjamin; Krueger-Ziolek, Sabine; Zhang, Fan; Mueller-Lisse, Ullrich; Moeller, Knut
2018-01-08
Electrical impedance tomography (EIT) attempts to reveal the conductivity distribution of a domain based on the electrical boundary condition. This is an ill-posed inverse problem; its solution is very unstable. Total variation (TV) regularization is one of the techniques commonly employed to stabilize reconstructions. However, it is well known that TV regularization induces staircase effects, which are not realistic in clinical applications. To reduce such artifacts, modified TV regularization terms considering a higher order differential operator were developed in several previous studies. One of them is called total generalized variation (TGV) regularization. TGV regularization has been successively applied in image processing in a regular grid context. In this study, we adapted TGV regularization to the finite element model (FEM) framework for EIT reconstruction. Reconstructions using simulation and clinical data were performed. First results indicate that, in comparison to TV regularization, TGV regularization promotes more realistic images. Graphical abstract Reconstructed conductivity changes located on selected vertical lines. For each of the reconstructed images as well as the ground truth image, conductivity changes located along the selected left and right vertical lines are plotted. In these plots, the notation GT in the legend stands for ground truth, TV stands for total variation method, and TGV stands for total generalized variation method. Reconstructed conductivity distributions from the GREIT algorithm are also demonstrated.
Iterative Nonlocal Total Variation Regularization Method for Image Restoration
Xu, Huanyu; Sun, Quansen; Luo, Nan; Cao, Guo; Xia, Deshen
2013-01-01
In this paper, a Bregman iteration based total variation image restoration algorithm is proposed. Based on the Bregman iteration, the algorithm splits the original total variation problem into sub-problems that are easy to solve. Moreover, non-local regularization is introduced into the proposed algorithm, and a method to choose the non-local filter parameter locally and adaptively is proposed. Experiment results show that the proposed algorithms outperform some other regularization methods. PMID:23776560
NASA Astrophysics Data System (ADS)
Vatankhah, Saeed; Renaut, Rosemary A.; Ardestani, Vahid E.
2018-04-01
We present a fast algorithm for the total variation regularization of the 3-D gravity inverse problem. Through imposition of the total variation regularization, subsurface structures presenting with sharp discontinuities are preserved better than when using a conventional minimum-structure inversion. The associated problem formulation for the regularization is nonlinear but can be solved using an iteratively reweighted least-squares algorithm. For small-scale problems the regularized least-squares problem at each iteration can be solved using the generalized singular value decomposition. This is not feasible for large-scale, or even moderate-scale, problems. Instead we introduce the use of a randomized generalized singular value decomposition in order to reduce the dimensions of the problem and provide an effective and efficient solution technique. For further efficiency an alternating direction algorithm is used to implement the total variation weighting operator within the iteratively reweighted least-squares algorithm. Presented results for synthetic examples demonstrate that the novel randomized decomposition provides good accuracy for reduced computational and memory demands as compared to use of classical approaches.
NASA Astrophysics Data System (ADS)
Zhong, Qiu-Xiang; Wu, Chuan-Sheng; Shu, Qiao-Ling; Liu, Ryan Wen
2018-04-01
Image deblurring under impulse noise is a typical ill-posed problem which requires regularization methods to guarantee high-quality imaging. L1-norm data-fidelity term and total variation (TV) regularizer have been combined to contribute the popular regularization method. However, the TV-regularized variational image deblurring model often suffers from the staircase-like artifacts leading to image quality degradation. To enhance image quality, the detailpreserving total generalized variation (TGV) was introduced to replace TV to eliminate the undesirable artifacts. The resulting nonconvex optimization problem was effectively solved using the alternating direction method of multipliers (ADMM). In addition, an automatic method for selecting spatially adapted regularization parameters was proposed to further improve deblurring performance. Our proposed image deblurring framework is able to remove blurring and impulse noise effects while maintaining the image edge details. Comprehensive experiments have been conducted to demonstrate the superior performance of our proposed method over several state-of-the-art image deblurring methods.
NASA Astrophysics Data System (ADS)
Gong, Changfei; Zeng, Dong; Bian, Zhaoying; Huang, Jing; Zhang, Xinyu; Zhang, Hua; Lu, Lijun; Feng, Qianjin; Liang, Zhengrong; Ma, Jianhua
2016-03-01
Dynamic myocardial perfusion computed tomography (MPCT) is a promising technique for diagnosis and risk stratification of coronary artery disease by assessing the myocardial perfusion hemodynamic maps (MPHM). Meanwhile, the repeated scanning of the same region results in a relatively large radiation dose to patients potentially. In this work, we present a robust MPCT deconvolution algorithm with adaptive-weighted tensor total variation regularization to estimate residue function accurately under the low-dose context, which is termed `MPD-AwTTV'. More specifically, the AwTTV regularization takes into account the anisotropic edge property of the MPCT images compared with the conventional total variation (TV) regularization, which can mitigate the drawbacks of TV regularization. Subsequently, an effective iterative algorithm was adopted to minimize the associative objective function. Experimental results on a modified XCAT phantom demonstrated that the present MPD-AwTTV algorithm outperforms and is superior to other existing deconvolution algorithms in terms of noise-induced artifacts suppression, edge details preservation and accurate MPHM estimation.
Sanders, Toby; Gelb, Anne; Platte, Rodrigo B.; ...
2017-01-03
Over the last decade or so, reconstruction methods using ℓ 1 regularization, often categorized as compressed sensing (CS) algorithms, have significantly improved the capabilities of high fidelity imaging in electron tomography. The most popular ℓ 1 regularization approach within electron tomography has been total variation (TV) regularization. In addition to reducing unwanted noise, TV regularization encourages a piecewise constant solution with sparse boundary regions. In this paper we propose an alternative ℓ 1 regularization approach for electron tomography based on higher order total variation (HOTV). Like TV, the HOTV approach promotes solutions with sparse boundary regions. In smooth regions however,more » the solution is not limited to piecewise constant behavior. We demonstrate that this allows for more accurate reconstruction of a broader class of images – even those for which TV was designed for – particularly when dealing with pragmatic tomographic sampling patterns and very fine image features. In conclusion, we develop results for an electron tomography data set as well as a phantom example, and we also make comparisons with discrete tomography approaches.« less
3D first-arrival traveltime tomography with modified total variation regularization
NASA Astrophysics Data System (ADS)
Jiang, Wenbin; Zhang, Jie
2018-02-01
Three-dimensional (3D) seismic surveys have become a major tool in the exploration and exploitation of hydrocarbons. 3D seismic first-arrival traveltime tomography is a robust method for near-surface velocity estimation. A common approach for stabilizing the ill-posed inverse problem is to apply Tikhonov regularization to the inversion. However, the Tikhonov regularization method recovers smooth local structures while blurring the sharp features in the model solution. We present a 3D first-arrival traveltime tomography method with modified total variation (MTV) regularization to preserve sharp velocity contrasts and improve the accuracy of velocity inversion. To solve the minimization problem of the new traveltime tomography method, we decouple the original optimization problem into two following subproblems: a standard traveltime tomography problem with the traditional Tikhonov regularization and a L2 total variation problem. We apply the conjugate gradient method and split-Bregman iterative method to solve these two subproblems, respectively. Our synthetic examples show that the new method produces higher resolution models than the conventional traveltime tomography with Tikhonov regularization. We apply the technique to field data. The stacking section shows significant improvements with static corrections from the MTV traveltime tomography.
Introduction of Total Variation Regularization into Filtered Backprojection Algorithm
NASA Astrophysics Data System (ADS)
Raczyński, L.; Wiślicki, W.; Klimaszewski, K.; Krzemień, W.; Kowalski, P.; Shopa, R. Y.; Białas, P.; Curceanu, C.; Czerwiński, E.; Dulski, K.; Gajos, A.; Głowacz, B.; Gorgol, M.; Hiesmayr, B.; Jasińska, B.; Kisielewska-Kamińska, D.; Korcyl, G.; Kozik, T.; Krawczyk, N.; Kubicz, E.; Mohammed, M.; Pawlik-Niedźwiecka, M.; Niedźwiecki, S.; Pałka, M.; Rudy, Z.; Sharma, N. G.; Sharma, S.; Silarski, M.; Skurzok, M.; Wieczorek, A.; Zgardzińska, B.; Zieliński, M.; Moskal, P.
In this paper we extend the state-of-the-art filtered backprojection (FBP) method with application of the concept of Total Variation regularization. We compare the performance of the new algorithm with the most common form of regularizing in the FBP image reconstruction via apodizing functions. The methods are validated in terms of cross-correlation coefficient between reconstructed and real image of radioactive tracer distribution using standard Derenzo-type phantom. We demonstrate that the proposed approach results in higher cross-correlation values with respect to the standard FBP method.
A spatially adaptive total variation regularization method for electrical resistance tomography
NASA Astrophysics Data System (ADS)
Song, Xizi; Xu, Yanbin; Dong, Feng
2015-12-01
The total variation (TV) regularization method has been used to solve the ill-posed inverse problem of electrical resistance tomography (ERT), owing to its good ability to preserve edges. However, the quality of the reconstructed images, especially in the flat region, is often degraded by noise. To optimize the regularization term and the regularization factor according to the spatial feature and to improve the resolution of reconstructed images, a spatially adaptive total variation (SATV) regularization method is proposed. A kind of effective spatial feature indicator named difference curvature is used to identify which region is a flat or edge region. According to different spatial features, the SATV regularization method can automatically adjust both the regularization term and regularization factor. At edge regions, the regularization term is approximate to the TV functional to preserve the edges; in flat regions, it is approximate to the first-order Tikhonov (FOT) functional to make the solution stable. Meanwhile, the adaptive regularization factor determined by the spatial feature is used to constrain the regularization strength of the SATV regularization method for different regions. Besides, a numerical scheme is adopted for the implementation of the second derivatives of difference curvature to improve the numerical stability. Several reconstruction image metrics are used to quantitatively evaluate the performance of the reconstructed results. Both simulation and experimental results indicate that, compared with the TV (mean relative error 0.288, mean correlation coefficient 0.627) and FOT (mean relative error 0.295, mean correlation coefficient 0.638) regularization methods, the proposed SATV (mean relative error 0.259, mean correlation coefficient 0.738) regularization method can endure a relatively high level of noise and improve the resolution of reconstructed images.
NASA Astrophysics Data System (ADS)
Yong, Peng; Liao, Wenyuan; Huang, Jianping; Li, Zhenchuan
2018-04-01
Full waveform inversion is an effective tool for recovering the properties of the Earth from seismograms. However, it suffers from local minima caused mainly by the limited accuracy of the starting model and the lack of a low-frequency component in the seismic data. Because of the high velocity contrast between salt and sediment, the relation between the waveform and velocity perturbation is strongly nonlinear. Therefore, salt inversion can easily get trapped in the local minima. Since the velocity of salt is nearly constant, we can make the most of this characteristic with total variation regularization to mitigate the local minima. In this paper, we develop an adaptive primal dual hybrid gradient method to implement total variation regularization by projecting the solution onto a total variation norm constrained convex set, through which the total variation norm constraint is satisfied at every model iteration. The smooth background velocities are first inverted and the perturbations are gradually obtained by successively relaxing the total variation norm constraints. Numerical experiment of the projection of the BP model onto the intersection of the total variation norm and box constraints has demonstrated the accuracy and efficiency of our adaptive primal dual hybrid gradient method. A workflow is designed to recover complex salt structures in the BP 2004 model and the 2D SEG/EAGE salt model, starting from a linear gradient model without using low-frequency data below 3 Hz. The salt inversion processes demonstrate that wavefield reconstruction inversion with a total variation norm and box constraints is able to overcome local minima and inverts the complex salt velocity layer by layer.
High quality 4D cone-beam CT reconstruction using motion-compensated total variation regularization
NASA Astrophysics Data System (ADS)
Zhang, Hua; Ma, Jianhua; Bian, Zhaoying; Zeng, Dong; Feng, Qianjin; Chen, Wufan
2017-04-01
Four dimensional cone-beam computed tomography (4D-CBCT) has great potential clinical value because of its ability to describe tumor and organ motion. But the challenge in 4D-CBCT reconstruction is the limited number of projections at each phase, which result in a reconstruction full of noise and streak artifacts with the conventional analytical algorithms. To address this problem, in this paper, we propose a motion compensated total variation regularization approach which tries to fully explore the temporal coherence of the spatial structures among the 4D-CBCT phases. In this work, we additionally conduct motion estimation/motion compensation (ME/MC) on the 4D-CBCT volume by using inter-phase deformation vector fields (DVFs). The motion compensated 4D-CBCT volume is then viewed as a pseudo-static sequence, of which the regularization function was imposed on. The regularization used in this work is the 3D spatial total variation minimization combined with 1D temporal total variation minimization. We subsequently construct a cost function for a reconstruction pass, and minimize this cost function using a variable splitting algorithm. Simulation and real patient data were used to evaluate the proposed algorithm. Results show that the introduction of additional temporal correlation along the phase direction can improve the 4D-CBCT image quality.
Total generalized variation-regularized variational model for single image dehazing
NASA Astrophysics Data System (ADS)
Shu, Qiao-Ling; Wu, Chuan-Sheng; Zhong, Qiu-Xiang; Liu, Ryan Wen
2018-04-01
Imaging quality is often significantly degraded under hazy weather condition. The purpose of this paper is to recover the latent sharp image from its hazy version. It is well known that the accurate estimation of depth information could assist in improving dehazing performance. In this paper, a detail-preserving variational model was proposed to simultaneously estimate haze-free image and depth map. In particular, the total variation (TV) and total generalized variation (TGV) regularizers were introduced to restrain haze-free image and depth map, respectively. The resulting nonsmooth optimization problem was efficiently solved using the alternating direction method of multipliers (ADMM). Comprehensive experiments have been conducted on realistic datasets to compare our proposed method with several state-of-the-art dehazing methods. Results have illustrated the superior performance of the proposed method in terms of visual quality evaluation.
A generalized Condat's algorithm of 1D total variation regularization
NASA Astrophysics Data System (ADS)
Makovetskii, Artyom; Voronin, Sergei; Kober, Vitaly
2017-09-01
A common way for solving the denosing problem is to utilize the total variation (TV) regularization. Many efficient numerical algorithms have been developed for solving the TV regularization problem. Condat described a fast direct algorithm to compute the processed 1D signal. Also there exists a direct algorithm with a linear time for 1D TV denoising referred to as the taut string algorithm. The Condat's algorithm is based on a dual problem to the 1D TV regularization. In this paper, we propose a variant of the Condat's algorithm based on the direct 1D TV regularization problem. The usage of the Condat's algorithm with the taut string approach leads to a clear geometric description of the extremal function. Computer simulation results are provided to illustrate the performance of the proposed algorithm for restoration of degraded signals.
Selection of regularization parameter in total variation image restoration.
Liao, Haiyong; Li, Fang; Ng, Michael K
2009-11-01
We consider and study total variation (TV) image restoration. In the literature there are several regularization parameter selection methods for Tikhonov regularization problems (e.g., the discrepancy principle and the generalized cross-validation method). However, to our knowledge, these selection methods have not been applied to TV regularization problems. The main aim of this paper is to develop a fast TV image restoration method with an automatic selection of the regularization parameter scheme to restore blurred and noisy images. The method exploits the generalized cross-validation (GCV) technique to determine inexpensively how much regularization to use in each restoration step. By updating the regularization parameter in each iteration, the restored image can be obtained. Our experimental results for testing different kinds of noise show that the visual quality and SNRs of images restored by the proposed method is promising. We also demonstrate that the method is efficient, as it can restore images of size 256 x 256 in approximately 20 s in the MATLAB computing environment.
Iterative image reconstruction that includes a total variation regularization for radial MRI.
Kojima, Shinya; Shinohara, Hiroyuki; Hashimoto, Takeyuki; Hirata, Masami; Ueno, Eiko
2015-07-01
This paper presents an iterative image reconstruction method for radial encodings in MRI based on a total variation (TV) regularization. The algebraic reconstruction method combined with total variation regularization (ART_TV) is implemented with a regularization parameter specifying the weight of the TV term in the optimization process. We used numerical simulations of a Shepp-Logan phantom, as well as experimental imaging of a phantom that included a rectangular-wave chart, to evaluate the performance of ART_TV, and to compare it with that of the Fourier transform (FT) method. The trade-off between spatial resolution and signal-to-noise ratio (SNR) was investigated for different values of the regularization parameter by experiments on a phantom and a commercially available MRI system. ART_TV was inferior to the FT with respect to the evaluation of the modulation transfer function (MTF), especially at high frequencies; however, it outperformed the FT with regard to the SNR. In accordance with the results of SNR measurement, visual impression suggested that the image quality of ART_TV was better than that of the FT for reconstruction of a noisy image of a kiwi fruit. In conclusion, ART_TV provides radial MRI with improved image quality for low-SNR data; however, the regularization parameter in ART_TV is a critical factor for obtaining improvement over the FT.
Zeng, Dong; Gao, Yuanyuan; Huang, Jing; Bian, Zhaoying; Zhang, Hua; Lu, Lijun; Ma, Jianhua
2016-10-01
Multienergy computed tomography (MECT) allows identifying and differentiating different materials through simultaneous capture of multiple sets of energy-selective data belonging to specific energy windows. However, because sufficient photon counts are not available in each energy window compared with that in the whole energy window, the MECT images reconstructed by the analytical approach often suffer from poor signal-to-noise and strong streak artifacts. To address the particular challenge, this work presents a penalized weighted least-squares (PWLS) scheme by incorporating the new concept of structure tensor total variation (STV) regularization, which is henceforth referred to as 'PWLS-STV' for simplicity. Specifically, the STV regularization is derived by penalizing higher-order derivatives of the desired MECT images. Thus it could provide more robust measures of image variation, which can eliminate the patchy artifacts often observed in total variation (TV) regularization. Subsequently, an alternating optimization algorithm was adopted to minimize the objective function. Extensive experiments with a digital XCAT phantom and meat specimen clearly demonstrate that the present PWLS-STV algorithm can achieve more gains than the existing TV-based algorithms and the conventional filtered backpeojection (FBP) algorithm in terms of both quantitative and visual quality evaluations. Copyright © 2016 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Zeng, Dong; Bian, Zhaoying; Gong, Changfei; Huang, Jing; He, Ji; Zhang, Hua; Lu, Lijun; Feng, Qianjin; Liang, Zhengrong; Ma, Jianhua
2016-03-01
Multienergy computed tomography (MECT) has the potential to simultaneously offer multiple sets of energy- selective data belonging to specific energy windows. However, because sufficient photon counts are not available in the specific energy windows compared with that in the whole energy window, the MECT images reconstructed by the analytical approach often suffer from poor signal-to-noise (SNR) and strong streak artifacts. To eliminate this drawback, in this work we present a penalized weighted least-squares (PWLS) scheme by incorporating the new concept of structure tensor total variation (STV) regularization to improve the MECT images quality from low-milliampere-seconds (low-mAs) data acquisitions. Henceforth the present scheme is referred to as `PWLS- STV' for simplicity. Specifically, the STV regularization is derived by penalizing the eigenvalues of the structure tensor of every point in the MECT images. Thus it can provide more robust measures of image variation, which can eliminate the patchy artifacts often observed in total variation regularization. Subsequently, an alternating optimization algorithm was adopted to minimize the objective function. Experiments with a digital XCAT phantom clearly demonstrate that the present PWLS-STV algorithm can achieve more gains than the existing TV-based algorithms and the conventional filtered backpeojection (FBP) algorithm in terms of noise-induced artifacts suppression, resolution preservation, and material decomposition assessment.
Fast magnetic resonance imaging based on high degree total variation
NASA Astrophysics Data System (ADS)
Wang, Sujie; Lu, Liangliang; Zheng, Junbao; Jiang, Mingfeng
2018-04-01
In order to eliminating the artifacts and "staircase effect" of total variation in Compressive Sensing MRI, high degree total variation model is proposed for dynamic MRI reconstruction. the high degree total variation regularization term is used as a constraint to reconstruct the magnetic resonance image, and the iterative weighted MM algorithm is proposed to solve the convex optimization problem of the reconstructed MR image model, In addtion, one set of cardiac magnetic resonance data is used to verify the proposed algorithm for MRI. The results show that the high degree total variation method has a better reconstruction effect than the total variation and the total generalized variation, which can obtain higher reconstruction SNR and better structural similarity.
Composite SAR imaging using sequential joint sparsity
NASA Astrophysics Data System (ADS)
Sanders, Toby; Gelb, Anne; Platte, Rodrigo B.
2017-06-01
This paper investigates accurate and efficient ℓ1 regularization methods for generating synthetic aperture radar (SAR) images. Although ℓ1 regularization algorithms are already employed in SAR imaging, practical and efficient implementation in terms of real time imaging remain a challenge. Here we demonstrate that fast numerical operators can be used to robustly implement ℓ1 regularization methods that are as or more efficient than traditional approaches such as back projection, while providing superior image quality. In particular, we develop a sequential joint sparsity model for composite SAR imaging which naturally combines the joint sparsity methodology with composite SAR. Our technique, which can be implemented using standard, fractional, or higher order total variation regularization, is able to reduce the effects of speckle and other noisy artifacts with little additional computational cost. Finally we show that generalizing total variation regularization to non-integer and higher orders provides improved flexibility and robustness for SAR imaging.
NASA Astrophysics Data System (ADS)
Apiñaniz, Estibaliz; Mendioroz, Arantza; Salazar, Agustín; Celorrio, Ricardo
2010-09-01
We analyze the ability of the Tikhonov regularization to retrieve different shapes of in-depth thermal conductivity profiles, usually encountered in hardened materials, from surface temperature data. Exponential, oscillating, and sigmoidal profiles are studied. By performing theoretical experiments with added white noises, the influence of the order of the Tikhonov functional and of the parameters that need to be tuned to carry out the inversion are investigated. The analysis shows that the Tikhonov regularization is very well suited to reconstruct smooth profiles but fails when the conductivity exhibits steep slopes. We check a natural alternative regularization, the total variation functional, which gives much better results for sigmoidal profiles. Accordingly, a strategy to deal with real data is proposed in which we introduce this total variation regularization. This regularization is applied to the inversion of real data corresponding to a case hardened AISI1018 steel plate, giving much better anticorrelation of the retrieved conductivity with microindentation test data than the Tikhonov regularization. The results suggest that this is a promising way to improve the reliability of local inversion methods.
A note on convergence of solutions of total variation regularized linear inverse problems
NASA Astrophysics Data System (ADS)
Iglesias, José A.; Mercier, Gwenael; Scherzer, Otmar
2018-05-01
In a recent paper by Chambolle et al (2017 Inverse Problems 33 015002) it was proven that if the subgradient of the total variation at the noise free data is not empty, the level-sets of the total variation denoised solutions converge to the level-sets of the noise free data with respect to the Hausdorff distance. The condition on the subgradient corresponds to the source condition introduced by Burger and Osher (2007 Multiscale Model. Simul. 6 365–95), who proved convergence rates results with respect to the Bregman distance under this condition. We generalize the result of Chambolle et al to total variation regularization of general linear inverse problems under such a source condition. As particular applications we present denoising in bounded and unbounded, convex and non convex domains, deblurring and inversion of the circular Radon transform. In all these examples the convergence result applies. Moreover, we illustrate the convergence behavior through numerical examples.
s-SMOOTH: Sparsity and Smoothness Enhanced EEG Brain Tomography
Li, Ying; Qin, Jing; Hsin, Yue-Loong; Osher, Stanley; Liu, Wentai
2016-01-01
EEG source imaging enables us to reconstruct current density in the brain from the electrical measurements with excellent temporal resolution (~ ms). The corresponding EEG inverse problem is an ill-posed one that has infinitely many solutions. This is due to the fact that the number of EEG sensors is usually much smaller than that of the potential dipole locations, as well as noise contamination in the recorded signals. To obtain a unique solution, regularizations can be incorporated to impose additional constraints on the solution. An appropriate choice of regularization is critically important for the reconstruction accuracy of a brain image. In this paper, we propose a novel Sparsity and SMOOthness enhanced brain TomograpHy (s-SMOOTH) method to improve the reconstruction accuracy by integrating two recently proposed regularization techniques: Total Generalized Variation (TGV) regularization and ℓ1−2 regularization. TGV is able to preserve the source edge and recover the spatial distribution of the source intensity with high accuracy. Compared to the relevant total variation (TV) regularization, TGV enhances the smoothness of the image and reduces staircasing artifacts. The traditional TGV defined on a 2D image has been widely used in the image processing field. In order to handle 3D EEG source images, we propose a voxel-based Total Generalized Variation (vTGV) regularization that extends the definition of second-order TGV from 2D planar images to 3D irregular surfaces such as cortex surface. In addition, the ℓ1−2 regularization is utilized to promote sparsity on the current density itself. We demonstrate that ℓ1−2 regularization is able to enhance sparsity and accelerate computations than ℓ1 regularization. The proposed model is solved by an efficient and robust algorithm based on the difference of convex functions algorithm (DCA) and the alternating direction method of multipliers (ADMM). Numerical experiments using synthetic data demonstrate the advantages of the proposed method over other state-of-the-art methods in terms of total reconstruction accuracy, localization accuracy and focalization degree. The application to the source localization of event-related potential data further demonstrates the performance of the proposed method in real-world scenarios. PMID:27965529
NASA Astrophysics Data System (ADS)
Huang, Zhenghua; Zhang, Tianxu; Deng, Lihua; Fang, Hao; Li, Qian
2015-12-01
Total variation(TV) based on regularization has been proven as a popular and effective model for image restoration, because of its ability of edge preserved. However, as the TV favors a piece-wise constant solution, the processing results in the flat regions of the image are easily produced "staircase effects", and the amplitude of the edges will be underestimated; the underlying cause of the problem is that the regularization parameter can not be changeable with spatial local information of image. In this paper, we propose a novel Scatter-matrix eigenvalues-based TV(SMETV) regularization with image blind restoration algorithm for deblurring medical images. The spatial information in different image regions is incorporated into regularization by using the edge indicator called difference eigenvalue to distinguish edges from flat areas. The proposed algorithm can effectively reduce the noise in flat regions as well as preserve the edge and detailed information. Moreover, it becomes more robust with the change of the regularization parameter. Extensive experiments demonstrate that the proposed approach produces results superior to most methods in both visual image quality and quantitative measures.
Adaptive regularization of the NL-means: application to image and video denoising.
Sutour, Camille; Deledalle, Charles-Alban; Aujol, Jean-François
2014-08-01
Image denoising is a central problem in image processing and it is often a necessary step prior to higher level analysis such as segmentation, reconstruction, or super-resolution. The nonlocal means (NL-means) perform denoising by exploiting the natural redundancy of patterns inside an image; they perform a weighted average of pixels whose neighborhoods (patches) are close to each other. This reduces significantly the noise while preserving most of the image content. While it performs well on flat areas and textures, it suffers from two opposite drawbacks: it might over-smooth low-contrasted areas or leave a residual noise around edges and singular structures. Denoising can also be performed by total variation minimization-the Rudin, Osher and Fatemi model-which leads to restore regular images, but it is prone to over-smooth textures, staircasing effects, and contrast losses. We introduce in this paper a variational approach that corrects the over-smoothing and reduces the residual noise of the NL-means by adaptively regularizing nonlocal methods with the total variation. The proposed regularized NL-means algorithm combines these methods and reduces both of their respective defaults by minimizing an adaptive total variation with a nonlocal data fidelity term. Besides, this model adapts to different noise statistics and a fast solution can be obtained in the general case of the exponential family. We develop this model for image denoising and we adapt it to video denoising with 3D patches.
New second order Mumford-Shah model based on Γ-convergence approximation for image processing
NASA Astrophysics Data System (ADS)
Duan, Jinming; Lu, Wenqi; Pan, Zhenkuan; Bai, Li
2016-05-01
In this paper, a second order variational model named the Mumford-Shah total generalized variation (MSTGV) is proposed for simultaneously image denoising and segmentation, which combines the original Γ-convergence approximated Mumford-Shah model with the second order total generalized variation (TGV). For image denoising, the proposed MSTGV can eliminate both the staircase artefact associated with the first order total variation and the edge blurring effect associated with the quadratic H1 regularization or the second order bounded Hessian regularization. For image segmentation, the MSTGV can obtain clear and continuous boundaries of objects in the image. To improve computational efficiency, the implementation of the MSTGV does not directly solve its high order nonlinear partial differential equations and instead exploits the efficient split Bregman algorithm. The algorithm benefits from the fast Fourier transform, analytical generalized soft thresholding equation, and Gauss-Seidel iteration. Extensive experiments are conducted to demonstrate the effectiveness and efficiency of the proposed model.
Mixed Gaussian-Impulse Noise Image Restoration Via Total Variation
2012-05-01
deblurring under impulse noise ,” J. Math. Imaging Vis., vol. 36, pp. 46–53, January 2010. [5] B. Li, Q. Liu, J. Xu, and X. Luo, “A new method for removing......Several Total Variation (TV) regularization methods have recently been proposed to address denoising under mixed Gaussian and impulse noise . While
Zeng, Dong; Gong, Changfei; Bian, Zhaoying; Huang, Jing; Zhang, Xinyu; Zhang, Hua; Lu, Lijun; Niu, Shanzhou; Zhang, Zhang; Liang, Zhengrong; Feng, Qianjin; Chen, Wufan; Ma, Jianhua
2016-11-21
Dynamic myocardial perfusion computed tomography (MPCT) is a promising technique for quick diagnosis and risk stratification of coronary artery disease. However, one major drawback of dynamic MPCT imaging is the heavy radiation dose to patients due to its dynamic image acquisition protocol. In this work, to address this issue, we present a robust dynamic MPCT deconvolution algorithm via adaptive-weighted tensor total variation (AwTTV) regularization for accurate residue function estimation with low-mA s data acquisitions. For simplicity, the presented method is termed 'MPD-AwTTV'. More specifically, the gains of the AwTTV regularization over the original tensor total variation regularization are from the anisotropic edge property of the sequential MPCT images. To minimize the associative objective function we propose an efficient iterative optimization strategy with fast convergence rate in the framework of an iterative shrinkage/thresholding algorithm. We validate and evaluate the presented algorithm using both digital XCAT phantom and preclinical porcine data. The preliminary experimental results have demonstrated that the presented MPD-AwTTV deconvolution algorithm can achieve remarkable gains in noise-induced artifact suppression, edge detail preservation, and accurate flow-scaled residue function and MPHM estimation as compared with the other existing deconvolution algorithms in digital phantom studies, and similar gains can be obtained in the porcine data experiment.
NASA Astrophysics Data System (ADS)
Zeng, Dong; Gong, Changfei; Bian, Zhaoying; Huang, Jing; Zhang, Xinyu; Zhang, Hua; Lu, Lijun; Niu, Shanzhou; Zhang, Zhang; Liang, Zhengrong; Feng, Qianjin; Chen, Wufan; Ma, Jianhua
2016-11-01
Dynamic myocardial perfusion computed tomography (MPCT) is a promising technique for quick diagnosis and risk stratification of coronary artery disease. However, one major drawback of dynamic MPCT imaging is the heavy radiation dose to patients due to its dynamic image acquisition protocol. In this work, to address this issue, we present a robust dynamic MPCT deconvolution algorithm via adaptive-weighted tensor total variation (AwTTV) regularization for accurate residue function estimation with low-mA s data acquisitions. For simplicity, the presented method is termed ‘MPD-AwTTV’. More specifically, the gains of the AwTTV regularization over the original tensor total variation regularization are from the anisotropic edge property of the sequential MPCT images. To minimize the associative objective function we propose an efficient iterative optimization strategy with fast convergence rate in the framework of an iterative shrinkage/thresholding algorithm. We validate and evaluate the presented algorithm using both digital XCAT phantom and preclinical porcine data. The preliminary experimental results have demonstrated that the presented MPD-AwTTV deconvolution algorithm can achieve remarkable gains in noise-induced artifact suppression, edge detail preservation, and accurate flow-scaled residue function and MPHM estimation as compared with the other existing deconvolution algorithms in digital phantom studies, and similar gains can be obtained in the porcine data experiment.
Gopi, Varun P; Palanisamy, P; Wahid, Khan A; Babyn, Paul; Cooper, David
2013-01-01
Micro-computed tomography (micro-CT) plays an important role in pre-clinical imaging. The radiation from micro-CT can result in excess radiation exposure to the specimen under test, hence the reduction of radiation from micro-CT is essential. The proposed research focused on analyzing and testing an alternating direction augmented Lagrangian (ADAL) algorithm to recover images from random projections using total variation (TV) regularization. The use of TV regularization in compressed sensing problems makes the recovered image quality sharper by preserving the edges or boundaries more accurately. In this work TV regularization problem is addressed by ADAL which is a variant of the classic augmented Lagrangian method for structured optimization. The per-iteration computational complexity of the algorithm is two fast Fourier transforms, two matrix vector multiplications and a linear time shrinkage operation. Comparison of experimental results indicate that the proposed algorithm is stable, efficient and competitive with the existing algorithms for solving TV regularization problems. Copyright © 2013 Elsevier Ltd. All rights reserved.
Noisy image magnification with total variation regularization and order-changed dictionary learning
NASA Astrophysics Data System (ADS)
Xu, Jian; Chang, Zhiguo; Fan, Jiulun; Zhao, Xiaoqiang; Wu, Xiaomin; Wang, Yanzi
2015-12-01
Noisy low resolution (LR) images are always obtained in real applications, but many existing image magnification algorithms can not get good result from a noisy LR image. We propose a two-step image magnification algorithm to solve this problem. The proposed algorithm takes the advantages of both regularization-based method and learning-based method. The first step is based on total variation (TV) regularization and the second step is based on sparse representation. In the first step, we add a constraint on the TV regularization model to magnify the LR image and at the same time to suppress the noise in it. In the second step, we propose an order-changed dictionary training algorithm to train the dictionaries which is dominated by texture details. Experimental results demonstrate that the proposed algorithm performs better than many other algorithms when the noise is not serious. The proposed algorithm can also provide better visual quality on natural LR images.
5 CFR 550.1307 - Authority to regularize paychecks.
Code of Federal Regulations, 2010 CFR
2010-01-01
... an agency's plan to reduce or eliminate variation in the amount of firefighters' biweekly paychecks... period to pay period. Such a plan must provide that the total pay any firefighter would otherwise receive... PAY ADMINISTRATION (GENERAL) Firefighter Pay § 550.1307 Authority to regularize paychecks. Upon a...
5 CFR 550.1307 - Authority to regularize paychecks.
Code of Federal Regulations, 2014 CFR
2014-01-01
... an agency's plan to reduce or eliminate variation in the amount of firefighters' biweekly paychecks... period to pay period. Such a plan must provide that the total pay any firefighter would otherwise receive... PAY ADMINISTRATION (GENERAL) Firefighter Pay § 550.1307 Authority to regularize paychecks. Upon a...
5 CFR 550.1307 - Authority to regularize paychecks.
Code of Federal Regulations, 2011 CFR
2011-01-01
... an agency's plan to reduce or eliminate variation in the amount of firefighters' biweekly paychecks... period to pay period. Such a plan must provide that the total pay any firefighter would otherwise receive... PAY ADMINISTRATION (GENERAL) Firefighter Pay § 550.1307 Authority to regularize paychecks. Upon a...
5 CFR 550.1307 - Authority to regularize paychecks.
Code of Federal Regulations, 2012 CFR
2012-01-01
... an agency's plan to reduce or eliminate variation in the amount of firefighters' biweekly paychecks... period to pay period. Such a plan must provide that the total pay any firefighter would otherwise receive... PAY ADMINISTRATION (GENERAL) Firefighter Pay § 550.1307 Authority to regularize paychecks. Upon a...
5 CFR 550.1307 - Authority to regularize paychecks.
Code of Federal Regulations, 2013 CFR
2013-01-01
... an agency's plan to reduce or eliminate variation in the amount of firefighters' biweekly paychecks... period to pay period. Such a plan must provide that the total pay any firefighter would otherwise receive... PAY ADMINISTRATION (GENERAL) Firefighter Pay § 550.1307 Authority to regularize paychecks. Upon a...
Numerical Differentiation of Noisy, Nonsmooth Data
Chartrand, Rick
2011-01-01
We consider the problem of differentiating a function specified by noisy data. Regularizing the differentiation process avoids the noise amplification of finite-difference methods. We use total-variation regularization, which allows for discontinuous solutions. The resulting simple algorithm accurately differentiates noisy functions, including those which have a discontinuous derivative.
Debatin, Maurice; Hesser, Jürgen
2015-01-01
Reducing the amount of time for data acquisition and reconstruction in industrial CT decreases the operation time of the X-ray machine and therefore increases the sales. This can be achieved by reducing both, the dose and the pulse length of the CT system and the number of projections for the reconstruction, respectively. In this paper, a novel generalized Anisotropic Total Variation regularization for under-sampled, low-dose iterative CT reconstruction is discussed and compared to the standard methods, Total Variation, Adaptive weighted Total Variation and Filtered Backprojection. The novel regularization function uses a priori information about the Gradient Magnitude Distribution of the scanned object for the reconstruction. We provide a general parameterization scheme and evaluate the efficiency of our new algorithm for different noise levels and different number of projection views. When noise is not present, error-free reconstructions are achievable for AwTV and GATV from 40 projections. In cases where noise is simulated, our strategy achieves a Relative Root Mean Square Error that is up to 11 times lower than Total Variation-based and up to 4 times lower than AwTV-based iterative statistical reconstruction (e.g. for a SNR of 223 and 40 projections). To obtain the same reconstruction quality as achieved by Total Variation, the projection number and the pulse length, and the acquisition time and the dose respectively can be reduced by a factor of approximately 3.5, when AwTV is used and a factor of approximately 6.7, when our proposed algorithm is used.
NASA Astrophysics Data System (ADS)
Bai, Bing
2012-03-01
There has been a lot of work on total variation (TV) regularized tomographic image reconstruction recently. Many of them use gradient-based optimization algorithms with a differentiable approximation of the TV functional. In this paper we apply TV regularization in Positron Emission Tomography (PET) image reconstruction. We reconstruct the PET image in a Bayesian framework, using Poisson noise model and TV prior functional. The original optimization problem is transformed to an equivalent problem with inequality constraints by adding auxiliary variables. Then we use an interior point method with logarithmic barrier functions to solve the constrained optimization problem. In this method, a series of points approaching the solution from inside the feasible region are found by solving a sequence of subproblems characterized by an increasing positive parameter. We use preconditioned conjugate gradient (PCG) algorithm to solve the subproblems directly. The nonnegativity constraint is enforced by bend line search. The exact expression of the TV functional is used in our calculations. Simulation results show that the algorithm converges fast and the convergence is insensitive to the values of the regularization and reconstruction parameters.
Accurate sparse-projection image reconstruction via nonlocal TV regularization.
Zhang, Yi; Zhang, Weihua; Zhou, Jiliu
2014-01-01
Sparse-projection image reconstruction is a useful approach to lower the radiation dose; however, the incompleteness of projection data will cause degeneration of imaging quality. As a typical compressive sensing method, total variation has obtained great attention on this problem. Suffering from the theoretical imperfection, total variation will produce blocky effect on smooth regions and blur edges. To overcome this problem, in this paper, we introduce the nonlocal total variation into sparse-projection image reconstruction and formulate the minimization problem with new nonlocal total variation norm. The qualitative and quantitative analyses of numerical as well as clinical results demonstrate the validity of the proposed method. Comparing to other existing methods, our method more efficiently suppresses artifacts caused by low-rank reconstruction and reserves structure information better.
Zhang, Hanming; Wang, Linyuan; Yan, Bin; Li, Lei; Cai, Ailong; Hu, Guoen
2016-01-01
Total generalized variation (TGV)-based computed tomography (CT) image reconstruction, which utilizes high-order image derivatives, is superior to total variation-based methods in terms of the preservation of edge information and the suppression of unfavorable staircase effects. However, conventional TGV regularization employs l1-based form, which is not the most direct method for maximizing sparsity prior. In this study, we propose a total generalized p-variation (TGpV) regularization model to improve the sparsity exploitation of TGV and offer efficient solutions to few-view CT image reconstruction problems. To solve the nonconvex optimization problem of the TGpV minimization model, we then present an efficient iterative algorithm based on the alternating minimization of augmented Lagrangian function. All of the resulting subproblems decoupled by variable splitting admit explicit solutions by applying alternating minimization method and generalized p-shrinkage mapping. In addition, approximate solutions that can be easily performed and quickly calculated through fast Fourier transform are derived using the proximal point method to reduce the cost of inner subproblems. The accuracy and efficiency of the simulated and real data are qualitatively and quantitatively evaluated to validate the efficiency and feasibility of the proposed method. Overall, the proposed method exhibits reasonable performance and outperforms the original TGV-based method when applied to few-view problems.
NASA Astrophysics Data System (ADS)
Han, Hao; Gao, Hao; Xing, Lei
2017-08-01
Excessive radiation exposure is still a major concern in 4D cone-beam computed tomography (4D-CBCT) due to its prolonged scanning duration. Radiation dose can be effectively reduced by either under-sampling the x-ray projections or reducing the x-ray flux. However, 4D-CBCT reconstruction under such low-dose protocols is prone to image artifacts and noise. In this work, we propose a novel joint regularization-based iterative reconstruction method for low-dose 4D-CBCT. To tackle the under-sampling problem, we employ spatiotemporal tensor framelet (STF) regularization to take advantage of the spatiotemporal coherence of the patient anatomy in 4D images. To simultaneously suppress the image noise caused by photon starvation, we also incorporate spatiotemporal nonlocal total variation (SNTV) regularization to make use of the nonlocal self-recursiveness of anatomical structures in the spatial and temporal domains. Under the joint STF-SNTV regularization, the proposed iterative reconstruction approach is evaluated first using two digital phantoms and then using physical experiment data in the low-dose context of both under-sampled and noisy projections. Compared with existing approaches via either STF or SNTV regularization alone, the presented hybrid approach achieves improved image quality, and is particularly effective for the reconstruction of low-dose 4D-CBCT data that are not only sparse but noisy.
NASA Astrophysics Data System (ADS)
Kazantsev, Daniil; Jørgensen, Jakob S.; Andersen, Martin S.; Lionheart, William R. B.; Lee, Peter D.; Withers, Philip J.
2018-06-01
Rapid developments in photon-counting and energy-discriminating detectors have the potential to provide an additional spectral dimension to conventional x-ray grayscale imaging. Reconstructed spectroscopic tomographic data can be used to distinguish individual materials by characteristic absorption peaks. The acquired energy-binned data, however, suffer from low signal-to-noise ratio, acquisition artifacts, and frequently angular undersampled conditions. New regularized iterative reconstruction methods have the potential to produce higher quality images and since energy channels are mutually correlated it can be advantageous to exploit this additional knowledge. In this paper, we propose a novel method which jointly reconstructs all energy channels while imposing a strong structural correlation. The core of the proposed algorithm is to employ a variational framework of parallel level sets to encourage joint smoothing directions. In particular, the method selects reference channels from which to propagate structure in an adaptive and stochastic way while preferring channels with a high data signal-to-noise ratio. The method is compared with current state-of-the-art multi-channel reconstruction techniques including channel-wise total variation and correlative total nuclear variation regularization. Realistic simulation experiments demonstrate the performance improvements achievable by using correlative regularization methods.
Image superresolution by midfrequency sparse representation and total variation regularization
NASA Astrophysics Data System (ADS)
Xu, Jian; Chang, Zhiguo; Fan, Jiulun; Zhao, Xiaoqiang; Wu, Xiaomin; Wang, Yanzi
2015-01-01
Machine learning has provided many good tools for superresolution, whereas existing methods still need to be improved in many aspects. On one hand, the memory and time cost should be reduced. On the other hand, the step edges of the results obtained by the existing methods are not clear enough. We do the following work. First, we propose a method to extract the midfrequency features for dictionary learning. This method brings the benefit of a reduction of the memory and time complexity without sacrificing the performance. Second, we propose a detailed wiping-off total variation (DWO-TV) regularization model to reconstruct the sharp step edges. This model adds a novel constraint on the downsampling version of the high-resolution image to wipe off the details and artifacts and sharpen the step edges. Finally, step edges produced by the DWO-TV regularization and the details provided by learning are fused. Experimental results show that the proposed method offers a desirable compromise between low time and memory cost and the reconstruction quality.
Accelerated Edge-Preserving Image Restoration Without Boundary Artifacts
Matakos, Antonios; Ramani, Sathish; Fessler, Jeffrey A.
2013-01-01
To reduce blur in noisy images, regularized image restoration methods have been proposed that use non-quadratic regularizers (like l1 regularization or total-variation) that suppress noise while preserving edges in the image. Most of these methods assume a circulant blur (periodic convolution with a blurring kernel) that can lead to wraparound artifacts along the boundaries of the image due to the implied periodicity of the circulant model. Using a non-circulant model could prevent these artifacts at the cost of increased computational complexity. In this work we propose to use a circulant blur model combined with a masking operator that prevents wraparound artifacts. The resulting model is non-circulant, so we propose an efficient algorithm using variable splitting and augmented Lagrangian (AL) strategies. Our variable splitting scheme, when combined with the AL framework and alternating minimization, leads to simple linear systems that can be solved non-iteratively using FFTs, eliminating the need for more expensive CG-type solvers. The proposed method can also efficiently tackle a variety of convex regularizers including edge-preserving (e.g., total-variation) and sparsity promoting (e.g., l1 norm) regularizers. Simulation results show fast convergence of the proposed method, along with improved image quality at the boundaries where the circulant model is inaccurate. PMID:23372080
NASA Astrophysics Data System (ADS)
Cai, Ailong; Li, Lei; Zheng, Zhizhong; Zhang, Hanming; Wang, Linyuan; Hu, Guoen; Yan, Bin
2018-02-01
In medical imaging many conventional regularization methods, such as total variation or total generalized variation, impose strong prior assumptions which can only account for very limited classes of images. A more reasonable sparse representation frame for images is still badly needed. Visually understandable images contain meaningful patterns, and combinations or collections of these patterns can be utilized to form some sparse and redundant representations which promise to facilitate image reconstructions. In this work, we propose and study block matching sparsity regularization (BMSR) and devise an optimization program using BMSR for computed tomography (CT) image reconstruction for an incomplete projection set. The program is built as a constrained optimization, minimizing the L1-norm of the coefficients of the image in the transformed domain subject to data observation and positivity of the image itself. To solve the program efficiently, a practical method based on the proximal point algorithm is developed and analyzed. In order to accelerate the convergence rate, a practical strategy for tuning the BMSR parameter is proposed and applied. The experimental results for various settings, including real CT scanning, have verified the proposed reconstruction method showing promising capabilities over conventional regularization.
NASA Astrophysics Data System (ADS)
Hintermüller, Michael; Holler, Martin; Papafitsoros, Kostas
2018-06-01
In this work, we introduce a function space setting for a wide class of structural/weighted total variation (TV) regularization methods motivated by their applications in inverse problems. In particular, we consider a regularizer that is the appropriate lower semi-continuous envelope (relaxation) of a suitable TV type functional initially defined for sufficiently smooth functions. We study examples where this relaxation can be expressed explicitly, and we also provide refinements for weighted TV for a wide range of weights. Since an integral characterization of the relaxation in function space is, in general, not always available, we show that, for a rather general linear inverse problems setting, instead of the classical Tikhonov regularization problem, one can equivalently solve a saddle-point problem where no a priori knowledge of an explicit formulation of the structural TV functional is needed. In particular, motivated by concrete applications, we deduce corresponding results for linear inverse problems with norm and Poisson log-likelihood data discrepancy terms. Finally, we provide proof-of-concept numerical examples where we solve the saddle-point problem for weighted TV denoising as well as for MR guided PET image reconstruction.
Low Dose CT Reconstruction via Edge-preserving Total Variation Regularization
Tian, Zhen; Jia, Xun; Yuan, Kehong; Pan, Tinsu; Jiang, Steve B.
2014-01-01
High radiation dose in CT scans increases a lifetime risk of cancer and has become a major clinical concern. Recently, iterative reconstruction algorithms with Total Variation (TV) regularization have been developed to reconstruct CT images from highly undersampled data acquired at low mAs levels in order to reduce the imaging dose. Nonetheless, the low contrast structures tend to be smoothed out by the TV regularization, posing a great challenge for the TV method. To solve this problem, in this work we develop an iterative CT reconstruction algorithm with edge-preserving TV regularization to reconstruct CT images from highly undersampled data obtained at low mAs levels. The CT image is reconstructed by minimizing an energy consisting of an edge-preserving TV norm and a data fidelity term posed by the x-ray projections. The edge-preserving TV term is proposed to preferentially perform smoothing only on non-edge part of the image in order to better preserve the edges, which is realized by introducing a penalty weight to the original total variation norm. During the reconstruction process, the pixels at edges would be gradually identified and given small penalty weight. Our iterative algorithm is implemented on GPU to improve its speed. We test our reconstruction algorithm on a digital NCAT phantom, a physical chest phantom, and a Catphan phantom. Reconstruction results from a conventional FBP algorithm and a TV regularization method without edge preserving penalty are also presented for comparison purpose. The experimental results illustrate that both TV-based algorithm and our edge-preserving TV algorithm outperform the conventional FBP algorithm in suppressing the streaking artifacts and image noise under the low dose context. Our edge-preserving algorithm is superior to the TV-based algorithm in that it can preserve more information of low contrast structures and therefore maintain acceptable spatial resolution. PMID:21860076
Zhang, Hanming; Wang, Linyuan; Yan, Bin; Li, Lei; Cai, Ailong; Hu, Guoen
2016-01-01
Total generalized variation (TGV)-based computed tomography (CT) image reconstruction, which utilizes high-order image derivatives, is superior to total variation-based methods in terms of the preservation of edge information and the suppression of unfavorable staircase effects. However, conventional TGV regularization employs l1-based form, which is not the most direct method for maximizing sparsity prior. In this study, we propose a total generalized p-variation (TGpV) regularization model to improve the sparsity exploitation of TGV and offer efficient solutions to few-view CT image reconstruction problems. To solve the nonconvex optimization problem of the TGpV minimization model, we then present an efficient iterative algorithm based on the alternating minimization of augmented Lagrangian function. All of the resulting subproblems decoupled by variable splitting admit explicit solutions by applying alternating minimization method and generalized p-shrinkage mapping. In addition, approximate solutions that can be easily performed and quickly calculated through fast Fourier transform are derived using the proximal point method to reduce the cost of inner subproblems. The accuracy and efficiency of the simulated and real data are qualitatively and quantitatively evaluated to validate the efficiency and feasibility of the proposed method. Overall, the proposed method exhibits reasonable performance and outperforms the original TGV-based method when applied to few-view problems. PMID:26901410
NASA Astrophysics Data System (ADS)
Taubmann, O.; Haase, V.; Lauritsch, G.; Zheng, Y.; Krings, G.; Hornegger, J.; Maier, A.
2017-04-01
Time-resolved tomographic cardiac imaging using an angiographic C-arm device may support clinicians during minimally invasive therapy by enabling a thorough analysis of the heart function directly in the catheter laboratory. However, clinically feasible acquisition protocols entail a highly challenging reconstruction problem which suffers from sparse angular sampling of the trajectory. Compressed sensing theory promises that useful images can be recovered despite massive undersampling by means of sparsity-based regularization. For a multitude of reasons—most notably the desired reduction of scan time, dose and contrast agent required—it is of great interest to know just how little data is actually sufficient for a certain task. In this work, we apply a convex optimization approach based on primal-dual splitting to 4D cardiac C-arm computed tomography. We examine how the quality of spatially and temporally total-variation-regularized reconstruction degrades when using as few as 6.9+/- 1.2 projection views per heart phase. First, feasible regularization weights are determined in a numerical phantom study, demonstrating the individual benefits of both regularizers. Secondly, a task-based evaluation is performed in eight clinical patients. Semi-automatic segmentation-based volume measurements of the left ventricular blood pool performed on strongly undersampled images show a correlation of close to 99% with measurements obtained from less sparsely sampled data.
A Dictionary Learning Method with Total Generalized Variation for MRI Reconstruction
Lu, Hongyang; Wei, Jingbo; Wang, Yuhao; Deng, Xiaohua
2016-01-01
Reconstructing images from their noisy and incomplete measurements is always a challenge especially for medical MR image with important details and features. This work proposes a novel dictionary learning model that integrates two sparse regularization methods: the total generalized variation (TGV) approach and adaptive dictionary learning (DL). In the proposed method, the TGV selectively regularizes different image regions at different levels to avoid oil painting artifacts largely. At the same time, the dictionary learning adaptively represents the image features sparsely and effectively recovers details of images. The proposed model is solved by variable splitting technique and the alternating direction method of multiplier. Extensive simulation experimental results demonstrate that the proposed method consistently recovers MR images efficiently and outperforms the current state-of-the-art approaches in terms of higher PSNR and lower HFEN values. PMID:27110235
A Dictionary Learning Method with Total Generalized Variation for MRI Reconstruction.
Lu, Hongyang; Wei, Jingbo; Liu, Qiegen; Wang, Yuhao; Deng, Xiaohua
2016-01-01
Reconstructing images from their noisy and incomplete measurements is always a challenge especially for medical MR image with important details and features. This work proposes a novel dictionary learning model that integrates two sparse regularization methods: the total generalized variation (TGV) approach and adaptive dictionary learning (DL). In the proposed method, the TGV selectively regularizes different image regions at different levels to avoid oil painting artifacts largely. At the same time, the dictionary learning adaptively represents the image features sparsely and effectively recovers details of images. The proposed model is solved by variable splitting technique and the alternating direction method of multiplier. Extensive simulation experimental results demonstrate that the proposed method consistently recovers MR images efficiently and outperforms the current state-of-the-art approaches in terms of higher PSNR and lower HFEN values.
NASA Astrophysics Data System (ADS)
Yu, Haiqing; Chen, Shuhang; Chen, Yunmei; Liu, Huafeng
2017-05-01
Dynamic positron emission tomography (PET) is capable of providing both spatial and temporal information of radio tracers in vivo. In this paper, we present a novel joint estimation framework to reconstruct temporal sequences of dynamic PET images and the coefficients characterizing the system impulse response function, from which the associated parametric images of the system macro parameters for tracer kinetics can be estimated. The proposed algorithm, which combines statistical data measurement and tracer kinetic models, integrates a dictionary sparse coding (DSC) into a total variational minimization based algorithm for simultaneous reconstruction of the activity distribution and parametric map from measured emission sinograms. DSC, based on the compartmental theory, provides biologically meaningful regularization, and total variation regularization is incorporated to provide edge-preserving guidance. We rely on techniques from minimization algorithms (the alternating direction method of multipliers) to first generate the estimated activity distributions with sub-optimal kinetic parameter estimates, and then recover the parametric maps given these activity estimates. These coupled iterative steps are repeated as necessary until convergence. Experiments with synthetic, Monte Carlo generated data, and real patient data have been conducted, and the results are very promising.
Digital Signal Processing Methods for Ultrasonic Echoes.
Sinding, Kyle; Drapaca, Corina; Tittmann, Bernhard
2016-04-28
Digital signal processing has become an important component of data analysis needed in industrial applications. In particular, for ultrasonic thickness measurements the signal to noise ratio plays a major role in the accurate calculation of the arrival time. For this application a band pass filter is not sufficient since the noise level cannot be significantly decreased such that a reliable thickness measurement can be performed. This paper demonstrates the abilities of two regularization methods - total variation and Tikhonov - to filter acoustic and ultrasonic signals. Both of these methods are compared to a frequency based filtering for digitally produced signals as well as signals produced by ultrasonic transducers. This paper demonstrates the ability of the total variation and Tikhonov filters to accurately recover signals from noisy acoustic signals faster than a band pass filter. Furthermore, the total variation filter has been shown to reduce the noise of a signal significantly for signals with clear ultrasonic echoes. Signal to noise ratios have been increased over 400% by using a simple parameter optimization. While frequency based filtering is efficient for specific applications, this paper shows that the reduction of noise in ultrasonic systems can be much more efficient with regularization methods.
Hessian-based norm regularization for image restoration with biomedical applications.
Lefkimmiatis, Stamatios; Bourquard, Aurélien; Unser, Michael
2012-03-01
We present nonquadratic Hessian-based regularization methods that can be effectively used for image restoration problems in a variational framework. Motivated by the great success of the total-variation (TV) functional, we extend it to also include second-order differential operators. Specifically, we derive second-order regularizers that involve matrix norms of the Hessian operator. The definition of these functionals is based on an alternative interpretation of TV that relies on mixed norms of directional derivatives. We show that the resulting regularizers retain some of the most favorable properties of TV, i.e., convexity, homogeneity, rotation, and translation invariance, while dealing effectively with the staircase effect. We further develop an efficient minimization scheme for the corresponding objective functions. The proposed algorithm is of the iteratively reweighted least-square type and results from a majorization-minimization approach. It relies on a problem-specific preconditioned conjugate gradient method, which makes the overall minimization scheme very attractive since it can be applied effectively to large images in a reasonable computational time. We validate the overall proposed regularization framework through deblurring experiments under additive Gaussian noise on standard and biomedical images.
X-ray computed tomography using curvelet sparse regularization.
Wieczorek, Matthias; Frikel, Jürgen; Vogel, Jakob; Eggl, Elena; Kopp, Felix; Noël, Peter B; Pfeiffer, Franz; Demaret, Laurent; Lasser, Tobias
2015-04-01
Reconstruction of x-ray computed tomography (CT) data remains a mathematically challenging problem in medical imaging. Complementing the standard analytical reconstruction methods, sparse regularization is growing in importance, as it allows inclusion of prior knowledge. The paper presents a method for sparse regularization based on the curvelet frame for the application to iterative reconstruction in x-ray computed tomography. In this work, the authors present an iterative reconstruction approach based on the alternating direction method of multipliers using curvelet sparse regularization. Evaluation of the method is performed on a specifically crafted numerical phantom dataset to highlight the method's strengths. Additional evaluation is performed on two real datasets from commercial scanners with different noise characteristics, a clinical bone sample acquired in a micro-CT and a human abdomen scanned in a diagnostic CT. The results clearly illustrate that curvelet sparse regularization has characteristic strengths. In particular, it improves the restoration and resolution of highly directional, high contrast features with smooth contrast variations. The authors also compare this approach to the popular technique of total variation and to traditional filtered backprojection. The authors conclude that curvelet sparse regularization is able to improve reconstruction quality by reducing noise while preserving highly directional features.
Moving object detection via low-rank total variation regularization
NASA Astrophysics Data System (ADS)
Wang, Pengcheng; Chen, Qian; Shao, Na
2016-09-01
Moving object detection is a challenging task in video surveillance. Recently proposed Robust Principal Component Analysis (RPCA) can recover the outlier patterns from the low-rank data under some mild conditions. However, the l-penalty in RPCA doesn't work well in moving object detection because the irrepresentable condition is often not satisfied. In this paper, a method based on total variation (TV) regularization scheme is proposed. In our model, image sequences captured with a static camera are highly related, which can be described using a low-rank matrix. Meanwhile, the low-rank matrix can absorb background motion, e.g. periodic and random perturbation. The foreground objects in the sequence are usually sparsely distributed and drifting continuously, and can be treated as group outliers from the highly-related background scenes. Instead of l-penalty, we exploit the total variation of the foreground. By minimizing the total variation energy, the outliers tend to collapse and finally converge to be the exact moving objects. The TV-penalty is superior to the l-penalty especially when the outlier is in the majority for some pixels, and our method can estimate the outlier explicitly with less bias but higher variance. To solve the problem, a joint optimization function is formulated and can be effectively solved through the inexact Augmented Lagrange Multiplier (ALM) method. We evaluate our method along with several state-of-the-art approaches in MATLAB. Both qualitative and quantitative results demonstrate that our proposed method works effectively on a large range of complex scenarios.
Total Variation with Overlapping Group Sparsity for Image Deblurring under Impulse Noise
Liu, Gang; Huang, Ting-Zhu; Liu, Jun; Lv, Xiao-Guang
2015-01-01
The total variation (TV) regularization method is an effective method for image deblurring in preserving edges. However, the TV based solutions usually have some staircase effects. In order to alleviate the staircase effects, we propose a new model for restoring blurred images under impulse noise. The model consists of an ℓ1-fidelity term and a TV with overlapping group sparsity (OGS) regularization term. Moreover, we impose a box constraint to the proposed model for getting more accurate solutions. The solving algorithm for our model is under the framework of the alternating direction method of multipliers (ADMM). We use an inner loop which is nested inside the majorization minimization (MM) iteration for the subproblem of the proposed method. Compared with other TV-based methods, numerical results illustrate that the proposed method can significantly improve the restoration quality, both in terms of peak signal-to-noise ratio (PSNR) and relative error (ReE). PMID:25874860
NASA Astrophysics Data System (ADS)
Krauze, W.; Makowski, P.; Kujawińska, M.
2015-06-01
Standard tomographic algorithms applied to optical limited-angle tomography result in the reconstructions that have highly anisotropic resolution and thus special algorithms are developed. State of the art approaches utilize the Total Variation (TV) minimization technique. These methods give very good results but are applicable to piecewise constant structures only. In this paper, we propose a novel algorithm for 3D limited-angle tomography - Total Variation Iterative Constraint method (TVIC) which enhances the applicability of the TV regularization to non-piecewise constant samples, like biological cells. This approach consists of two parts. First, the TV minimization is used as a strong regularizer to create a sharp-edged image converted to a 3D binary mask which is then iteratively applied in the tomographic reconstruction as a constraint in the object domain. In the present work we test the method on a synthetic object designed to mimic basic structures of a living cell. For simplicity, the test reconstructions were performed within the straight-line propagation model (SIRT3D solver from the ASTRA Tomography Toolbox), but the strategy is general enough to supplement any algorithm for tomographic reconstruction that supports arbitrary geometries of plane-wave projection acquisition. This includes optical diffraction tomography solvers. The obtained reconstructions present resolution uniformity and general shape accuracy expected from the TV regularization based solvers, but keeping the smooth internal structures of the object at the same time. Comparison between three different patterns of object illumination arrangement show very small impact of the projection acquisition geometry on the image quality.
2016-05-01
norm does not cap - ture the geometry completely. The L1−L2 in (c) does a better job than TV while L1 in (b) and L1−0.5L2 in (d) capture the squares most...and isotropic total variation (TV) norms into a relaxed formu- lation of the two phase Mumford-Shah (MS) model for image segmentation. We show...results exceeding those obtained by the MS model when using the standard TV norm to regular- ize partition boundaries. In particular, examples illustrating
Total-variation based velocity inversion with Bregmanized operator splitting algorithm
NASA Astrophysics Data System (ADS)
Zand, Toktam; Gholami, Ali
2018-04-01
Many problems in applied geophysics can be formulated as a linear inverse problem. The associated problems, however, are large-scale and ill-conditioned. Therefore, regularization techniques are needed to be employed for solving them and generating a stable and acceptable solution. We consider numerical methods for solving such problems in this paper. In order to tackle the ill-conditioning of the problem we use blockiness as a prior information of the subsurface parameters and formulate the problem as a constrained total variation (TV) regularization. The Bregmanized operator splitting (BOS) algorithm as a combination of the Bregman iteration and the proximal forward backward operator splitting method is developed to solve the arranged problem. Two main advantages of this new algorithm are that no matrix inversion is required and that a discrepancy stopping criterion is used to stop the iterations, which allow efficient solution of large-scale problems. The high performance of the proposed TV regularization method is demonstrated using two different experiments: 1) velocity inversion from (synthetic) seismic data which is based on Born approximation, 2) computing interval velocities from RMS velocities via Dix formula. Numerical examples are presented to verify the feasibility of the proposed method for high-resolution velocity inversion.
NASA Astrophysics Data System (ADS)
Gong, Changfei; Han, Ce; Gan, Guanghui; Deng, Zhenxiang; Zhou, Yongqiang; Yi, Jinling; Zheng, Xiaomin; Xie, Congying; Jin, Xiance
2017-04-01
Dynamic myocardial perfusion CT (DMP-CT) imaging provides quantitative functional information for diagnosis and risk stratification of coronary artery disease by calculating myocardial perfusion hemodynamic parameter (MPHP) maps. However, the level of radiation delivered by dynamic sequential scan protocol can be potentially high. The purpose of this work is to develop a pre-contrast normal-dose scan induced structure tensor total variation regularization based on the penalized weighted least-squares (PWLS) criteria to improve the image quality of DMP-CT with a low-mAs CT acquisition. For simplicity, the present approach was termed as ‘PWLS-ndiSTV’. Specifically, the ndiSTV regularization takes into account the spatial-temporal structure information of DMP-CT data and further exploits the higher order derivatives of the objective images to enhance denoising performance. Subsequently, an effective optimization algorithm based on the split-Bregman approach was adopted to minimize the associative objective function. Evaluations with modified dynamic XCAT phantom and preclinical porcine datasets have demonstrated that the proposed PWLS-ndiSTV approach can achieve promising gains over other existing approaches in terms of noise-induced artifacts mitigation, edge details preservation, and accurate MPHP maps calculation.
NASA Astrophysics Data System (ADS)
Zhao, Jin; Han-Ming, Zhang; Bin, Yan; Lei, Li; Lin-Yuan, Wang; Ai-Long, Cai
2016-03-01
Sparse-view x-ray computed tomography (CT) imaging is an interesting topic in CT field and can efficiently decrease radiation dose. Compared with spatial reconstruction, a Fourier-based algorithm has advantages in reconstruction speed and memory usage. A novel Fourier-based iterative reconstruction technique that utilizes non-uniform fast Fourier transform (NUFFT) is presented in this work along with advanced total variation (TV) regularization for a fan sparse-view CT. The proposition of a selective matrix contributes to improve reconstruction quality. The new method employs the NUFFT and its adjoin to iterate back and forth between the Fourier and image space. The performance of the proposed algorithm is demonstrated through a series of digital simulations and experimental phantom studies. Results of the proposed algorithm are compared with those of existing TV-regularized techniques based on compressed sensing method, as well as basic algebraic reconstruction technique. Compared with the existing TV-regularized techniques, the proposed Fourier-based technique significantly improves convergence rate and reduces memory allocation, respectively. Projected supported by the National High Technology Research and Development Program of China (Grant No. 2012AA011603) and the National Natural Science Foundation of China (Grant No. 61372172).
Time-Optimized High-Resolution Readout-Segmented Diffusion Tensor Imaging
Reishofer, Gernot; Koschutnig, Karl; Langkammer, Christian; Porter, David; Jehna, Margit; Enzinger, Christian; Keeling, Stephen; Ebner, Franz
2013-01-01
Readout-segmented echo planar imaging with 2D navigator-based reacquisition is an uprising technique enabling the sampling of high-resolution diffusion images with reduced susceptibility artifacts. However, low signal from the small voxels and long scan times hamper the clinical applicability. Therefore, we introduce a regularization algorithm based on total variation that is applied directly on the entire diffusion tensor. The spatially varying regularization parameter is determined automatically dependent on spatial variations in signal-to-noise ratio thus, avoiding over- or under-regularization. Information about the noise distribution in the diffusion tensor is extracted from the diffusion weighted images by means of complex independent component analysis. Moreover, the combination of those features enables processing of the diffusion data absolutely user independent. Tractography from in vivo data and from a software phantom demonstrate the advantage of the spatially varying regularization compared to un-regularized data with respect to parameters relevant for fiber-tracking such as Mean Fiber Length, Track Count, Volume and Voxel Count. Specifically, for in vivo data findings suggest that tractography results from the regularized diffusion tensor based on one measurement (16 min) generates results comparable to the un-regularized data with three averages (48 min). This significant reduction in scan time renders high resolution (1×1×2.5 mm3) diffusion tensor imaging of the entire brain applicable in a clinical context. PMID:24019951
X-Ray Phase Imaging for Breast Cancer Detection
2010-09-01
regularization seeks the minimum- norm , least squares solution for phase retrieval. The retrieval result with Tikhonov regularization is still unsatisfactory...of norm , that can effectively reflect the accuracy of the retrieved data as an image, if ‖δ Ik+1−δ Ik‖ is less than a predefined threshold value β...pointed out that the proper norm for images is the total variation (TV) norm , which is the L1 norm of the gradient of the image function, and not the
Bayesian denoising in digital radiography: a comparison in the dental field.
Frosio, I; Olivieri, C; Lucchese, M; Borghese, N A; Boccacci, P
2013-01-01
We compared two Bayesian denoising algorithms for digital radiographs, based on Total Variation regularization and wavelet decomposition. The comparison was performed on simulated radiographs with different photon counts and frequency content and on real dental radiographs. Four different quality indices were considered to quantify the quality of the filtered radiographs. The experimental results suggested that Total Variation is more suited to preserve fine anatomical details, whereas wavelets produce images of higher quality at global scale; they also highlighted the need for more reliable image quality indices. Copyright © 2012 Elsevier Ltd. All rights reserved.
Blind image fusion for hyperspectral imaging with the directional total variation
NASA Astrophysics Data System (ADS)
Bungert, Leon; Coomes, David A.; Ehrhardt, Matthias J.; Rasch, Jennifer; Reisenhofer, Rafael; Schönlieb, Carola-Bibiane
2018-04-01
Hyperspectral imaging is a cutting-edge type of remote sensing used for mapping vegetation properties, rock minerals and other materials. A major drawback of hyperspectral imaging devices is their intrinsic low spatial resolution. In this paper, we propose a method for increasing the spatial resolution of a hyperspectral image by fusing it with an image of higher spatial resolution that was obtained with a different imaging modality. This is accomplished by solving a variational problem in which the regularization functional is the directional total variation. To accommodate for possible mis-registrations between the two images, we consider a non-convex blind super-resolution problem where both a fused image and the corresponding convolution kernel are estimated. Using this approach, our model can realign the given images if needed. Our experimental results indicate that the non-convexity is negligible in practice and that reliable solutions can be computed using a variety of different optimization algorithms. Numerical results on real remote sensing data from plant sciences and urban monitoring show the potential of the proposed method and suggests that it is robust with respect to the regularization parameters, mis-registration and the shape of the kernel.
NASA Astrophysics Data System (ADS)
Durand, Sylvain; Frapart, Yves-Michel; Kerebel, Maud
2017-11-01
Spatial electron paramagnetic resonance imaging (EPRI) is a recent method to localize and characterize free radicals in vivo or in vitro, leading to applications in material and biomedical sciences. To improve the quality of the reconstruction obtained by EPRI, a variational method is proposed to inverse the image formation model. It is based on a least-square data-fidelity term and the total variation and Besov seminorm for the regularization term. To fully comprehend the Besov seminorm, an implementation using the curvelet transform and the L 1 norm enforcing the sparsity is proposed. It allows our model to reconstruct both image where acquisition information are missing and image with details in textured areas, thus opening possibilities to reduce acquisition times. To implement the minimization problem using the algorithm developed by Chambolle and Pock, a thorough analysis of the direct model is undertaken and the latter is inverted while avoiding the use of filtered backprojection (FBP) and of non-uniform Fourier transform. Numerical experiments are carried out on simulated data, where the proposed model outperforms both visually and quantitatively the classical model using deconvolution and FBP. Improved reconstructions on real data, acquired on an irradiated distal phalanx, were successfully obtained.
Computerized tomography with total variation and with shearlets
NASA Astrophysics Data System (ADS)
Garduño, Edgar; Herman, Gabor T.
2017-04-01
To reduce the x-ray dose in computerized tomography (CT), many constrained optimization approaches have been proposed aiming at minimizing a regularizing function that measures a lack of consistency with some prior knowledge about the object that is being imaged, subject to a (predetermined) level of consistency with the detected attenuation of x-rays. One commonly investigated regularizing function is total variation (TV), while other publications advocate the use of some type of multiscale geometric transform in the definition of the regularizing function, a particular recent choice for this is the shearlet transform. Proponents of the shearlet transform in the regularizing function claim that the reconstructions so obtained are better than those produced using TV for texture preservation (but may be worse for noise reduction). In this paper we report results related to this claim. In our reported experiments using simulated CT data collection of the head, reconstructions whose shearlet transform has a small ℓ 1-norm are not more efficacious than reconstructions that have a small TV value. Our experiments for making such comparisons use the recently-developed superiorization methodology for both regularizing functions. Superiorization is an automated procedure for turning an iterative algorithm for producing images that satisfy a primary criterion (such as consistency with the observed measurements) into its superiorized version that will produce results that, according to the primary criterion are as good as those produced by the original algorithm, but in addition are superior to them according to a secondary (regularizing) criterion. The method presented for superiorization involving the ℓ 1-norm of the shearlet transform is novel and is quite general: It can be used for any regularizing function that is defined as the ℓ 1-norm of a transform specified by the application of a matrix. Because in the previous literature the split Bregman algorithm is used for similar purposes, a section is included comparing the results of the superiorization algorithm with the split Bregman algorithm.
Dynamical spreading of small bodies in 1:1 resonance with planets by the diurnal Yarkovsky effect
NASA Astrophysics Data System (ADS)
Wang, Xuefeng; Hou, Xiyun
2017-10-01
A simple model is introduced to describe the inherent dynamics of Trojans in the presence of the diurnal Yarkovsky effect. For different spin statuses, the orbital elements of the Trojans (mainly semimajor axis, eccentricity and inclination) undergo different variations. The variation rate is generally very small, but the total variation of the semimajor axis or the orbit eccentricity over the age of the Solar system may be large enough to send small Trojans out of the regular region (or, vice versa, to capture small bodies in the regular region). In order to demonstrate the analytical analysis, we first carry out numerical simulations in a simple model, and then generalize these to two 'real' systems, namely the Sun-Jupiter system and the Sun-Earth system. In the Sun-Jupiter system, where the motion of Trojans is regular, the Yarkovsky effect gradually alters the libration width or the orbit eccentricity, forcing the Trojan to move from regular regionsto chaotic regions, where chaos may eventually cause it to escape. In the Sun-Earth system, where the motion of Trojans is generally chaotic, our limited numerical simulations indicate that the Yarkovsky effect is negligible for Trojans of 100 m in size, and even for larger ones. The Yarkovsky effect on small bodies captured in other 1:1 resonance orbits is also briefly discussed.
NASA Astrophysics Data System (ADS)
Hernandez, Monica
2017-12-01
This paper proposes a method for primal-dual convex optimization in variational large deformation diffeomorphic metric mapping problems formulated with robust regularizers and robust image similarity metrics. The method is based on Chambolle and Pock primal-dual algorithm for solving general convex optimization problems. Diagonal preconditioning is used to ensure the convergence of the algorithm to the global minimum. We consider three robust regularizers liable to provide acceptable results in diffeomorphic registration: Huber, V-Huber and total generalized variation. The Huber norm is used in the image similarity term. The primal-dual equations are derived for the stationary and the non-stationary parameterizations of diffeomorphisms. The resulting algorithms have been implemented for running in the GPU using Cuda. For the most memory consuming methods, we have developed a multi-GPU implementation. The GPU implementations allowed us to perform an exhaustive evaluation study in NIREP and LPBA40 databases. The experiments showed that, for all the considered regularizers, the proposed method converges to diffeomorphic solutions while better preserving discontinuities at the boundaries of the objects compared to baseline diffeomorphic registration methods. In most cases, the evaluation showed a competitive performance for the robust regularizers, close to the performance of the baseline diffeomorphic registration methods.
Schwinger-variational-principle theory of collisions in the presence of multiple potentials
NASA Astrophysics Data System (ADS)
Robicheaux, F.; Giannakeas, P.; Greene, Chris H.
2015-08-01
A theoretical method for treating collisions in the presence of multiple potentials is developed by employing the Schwinger variational principle. The current treatment agrees with the local (regularized) frame transformation theory and extends its capabilities. Specifically, the Schwinger variational approach gives results without the divergences that need to be regularized in other methods. Furthermore, it provides a framework to identify the origin of these singularities and possibly improve the local frame transformation. We have used the method to obtain the scattering parameters for different confining potentials symmetric in x ,y . The method is also used to treat photodetachment processes in the presence of various confining potentials, thereby highlighting effects of the infinitely many closed channels. Two general features predicted are the vanishing of the total photoabsorption probability at every channel threshold and the occurrence of resonances below the channel thresholds for negative scattering lengths. In addition, the case of negative-ion photodetachment in the presence of uniform magnetic fields is also considered where unique features emerge at large scattering lengths.
Non-Cartesian MRI Reconstruction With Automatic Regularization Via Monte-Carlo SURE
Weller, Daniel S.; Nielsen, Jon-Fredrik; Fessler, Jeffrey A.
2013-01-01
Magnetic resonance image (MRI) reconstruction from undersampled k-space data requires regularization to reduce noise and aliasing artifacts. Proper application of regularization however requires appropriate selection of associated regularization parameters. In this work, we develop a data-driven regularization parameter adjustment scheme that minimizes an estimate (based on the principle of Stein’s unbiased risk estimate—SURE) of a suitable weighted squared-error measure in k-space. To compute this SURE-type estimate, we propose a Monte-Carlo scheme that extends our previous approach to inverse problems (e.g., MRI reconstruction) involving complex-valued images. Our approach depends only on the output of a given reconstruction algorithm and does not require knowledge of its internal workings, so it is capable of tackling a wide variety of reconstruction algorithms and nonquadratic regularizers including total variation and those based on the ℓ1-norm. Experiments with simulated and real MR data indicate that the proposed approach is capable of providing near mean squared-error (MSE) optimal regularization parameters for single-coil undersampled non-Cartesian MRI reconstruction. PMID:23591478
Hierarchical Bayesian modeling of ionospheric TEC disturbances as non-stationary processes
NASA Astrophysics Data System (ADS)
Seid, Abdu Mohammed; Berhane, Tesfahun; Roininen, Lassi; Nigussie, Melessew
2018-03-01
We model regular and irregular variation of ionospheric total electron content as stationary and non-stationary processes, respectively. We apply the method developed to SCINDA GPS data set observed at Bahir Dar, Ethiopia (11.6 °N, 37.4 °E) . We use hierarchical Bayesian inversion with Gaussian Markov random process priors, and we model the prior parameters in the hyperprior. We use Matérn priors via stochastic partial differential equations, and use scaled Inv -χ2 hyperpriors for the hyperparameters. For drawing posterior estimates, we use Markov Chain Monte Carlo methods: Gibbs sampling and Metropolis-within-Gibbs for parameter and hyperparameter estimations, respectively. This allows us to quantify model parameter estimation uncertainties as well. We demonstrate the applicability of the method proposed using a synthetic test case. Finally, we apply the method to real GPS data set, which we decompose to regular and irregular variation components. The result shows that the approach can be used as an accurate ionospheric disturbance characterization technique that quantifies the total electron content variability with corresponding error uncertainties.
Bayesian nonparametric dictionary learning for compressed sensing MRI.
Huang, Yue; Paisley, John; Lin, Qin; Ding, Xinghao; Fu, Xueyang; Zhang, Xiao-Ping
2014-12-01
We develop a Bayesian nonparametric model for reconstructing magnetic resonance images (MRIs) from highly undersampled k -space data. We perform dictionary learning as part of the image reconstruction process. To this end, we use the beta process as a nonparametric dictionary learning prior for representing an image patch as a sparse combination of dictionary elements. The size of the dictionary and patch-specific sparsity pattern are inferred from the data, in addition to other dictionary learning variables. Dictionary learning is performed directly on the compressed image, and so is tailored to the MRI being considered. In addition, we investigate a total variation penalty term in combination with the dictionary learning model, and show how the denoising property of dictionary learning removes dependence on regularization parameters in the noisy setting. We derive a stochastic optimization algorithm based on Markov chain Monte Carlo for the Bayesian model, and use the alternating direction method of multipliers for efficiently performing total variation minimization. We present empirical results on several MRI, which show that the proposed regularization framework can improve reconstruction accuracy over other methods.
NASA Astrophysics Data System (ADS)
Foufoula-Georgiou, E.; Ebtehaj, A. M.; Zhang, S. Q.; Hou, A. Y.
2014-05-01
The increasing availability of precipitation observations from space, e.g., from the Tropical Rainfall Measuring Mission (TRMM) and the forthcoming Global Precipitation Measuring (GPM) Mission, has fueled renewed interest in developing frameworks for downscaling and multi-sensor data fusion that can handle large data sets in computationally efficient ways while optimally reproducing desired properties of the underlying rainfall fields. Of special interest is the reproduction of extreme precipitation intensities and gradients, as these are directly relevant to hazard prediction. In this paper, we present a new formalism for downscaling satellite precipitation observations, which explicitly allows for the preservation of some key geometrical and statistical properties of spatial precipitation. These include sharp intensity gradients (due to high-intensity regions embedded within lower-intensity areas), coherent spatial structures (due to regions of slowly varying rainfall), and thicker-than-Gaussian tails of precipitation gradients and intensities. Specifically, we pose the downscaling problem as a discrete inverse problem and solve it via a regularized variational approach (variational downscaling) where the regularization term is selected to impose the desired smoothness in the solution while allowing for some steep gradients (called ℓ1-norm or total variation regularization). We demonstrate the duality between this geometrically inspired solution and its Bayesian statistical interpretation, which is equivalent to assuming a Laplace prior distribution for the precipitation intensities in the derivative (wavelet) space. When the observation operator is not known, we discuss the effect of its misspecification and explore a previously proposed dictionary-based sparse inverse downscaling methodology to indirectly learn the observation operator from a data base of coincidental high- and low-resolution observations. The proposed method and ideas are illustrated in case studies featuring the downscaling of a hurricane precipitation field.
NASA Technical Reports Server (NTRS)
Foufoula-Georgiou, E.; Ebtehaj, A. M.; Zhang, S. Q.; Hou, A. Y.
2013-01-01
The increasing availability of precipitation observations from space, e.g., from the Tropical Rainfall Measuring Mission (TRMM) and the forthcoming Global Precipitation Measuring (GPM) Mission, has fueled renewed interest in developing frameworks for downscaling and multi-sensor data fusion that can handle large data sets in computationally efficient ways while optimally reproducing desired properties of the underlying rainfall fields. Of special interest is the reproduction of extreme precipitation intensities and gradients, as these are directly relevant to hazard prediction. In this paper, we present a new formalism for downscaling satellite precipitation observations, which explicitly allows for the preservation of some key geometrical and statistical properties of spatial precipitation. These include sharp intensity gradients (due to high-intensity regions embedded within lower-intensity areas), coherent spatial structures (due to regions of slowly varying rainfall),and thicker-than-Gaussian tails of precipitation gradients and intensities. Specifically, we pose the downscaling problem as a discrete inverse problem and solve it via a regularized variational approach (variational downscaling) where the regularization term is selected to impose the desired smoothness in the solution while allowing for some steep gradients(called 1-norm or total variation regularization). We demonstrate the duality between this geometrically inspired solution and its Bayesian statistical interpretation, which is equivalent to assuming a Laplace prior distribution for the precipitation intensities in the derivative (wavelet) space. When the observation operator is not known, we discuss the effect of its misspecification and explore a previously proposed dictionary-based sparse inverse downscaling methodology to indirectly learn the observation operator from a database of coincidental high- and low-resolution observations. The proposed method and ideas are illustrated in case studies featuring the downscaling of a hurricane precipitation field.
Limited data tomographic image reconstruction via dual formulation of total variation minimization
NASA Astrophysics Data System (ADS)
Jang, Kwang Eun; Sung, Younghun; Lee, Kangeui; Lee, Jongha; Cho, Seungryong
2011-03-01
The X-ray mammography is the primary imaging modality for breast cancer screening. For the dense breast, however, the mammogram is usually difficult to read due to tissue overlap problem caused by the superposition of normal tissues. The digital breast tomosynthesis (DBT) that measures several low dose projections over a limited angle range may be an alternative modality for breast imaging, since it allows the visualization of the cross-sectional information of breast. The DBT, however, may suffer from the aliasing artifact and the severe noise corruption. To overcome these problems, a total variation (TV) regularized statistical reconstruction algorithm is presented. Inspired by the dual formulation of TV minimization in denoising and deblurring problems, we derived a gradient-type algorithm based on statistical model of X-ray tomography. The objective function is comprised of a data fidelity term derived from the statistical model and a TV regularization term. The gradient of the objective function can be easily calculated using simple operations in terms of auxiliary variables. After a descending step, the data fidelity term is renewed in each iteration. Since the proposed algorithm can be implemented without sophisticated operations such as matrix inverse, it provides an efficient way to include the TV regularization in the statistical reconstruction method, which results in a fast and robust estimation for low dose projections over the limited angle range. Initial tests with an experimental DBT system confirmed our finding.
NASA Astrophysics Data System (ADS)
Gu, Chengwei; Zeng, Dong; Lin, Jiahui; Li, Sui; He, Ji; Zhang, Hao; Bian, Zhaoying; Niu, Shanzhou; Zhang, Zhang; Huang, Jing; Chen, Bo; Zhao, Dazhe; Chen, Wufan; Ma, Jianhua
2018-06-01
Myocardial perfusion computed tomography (MPCT) imaging is commonly used to detect myocardial ischemia quantitatively. A limitation in MPCT is that an additional radiation dose is required compared to unenhanced CT due to its repeated dynamic data acquisition. Meanwhile, noise and streak artifacts in low-dose cases are the main factors that degrade the accuracy of quantifying myocardial ischemia and hamper the diagnostic utility of the filtered backprojection reconstructed MPCT images. Moreover, it is noted that the MPCT images are composed of a series of 2/3D images, which can be naturally regarded as a 3/4-order tensor, and the MPCT images are globally correlated along time and are sparse across space. To obtain higher fidelity ischemia from low-dose MPCT acquisitions quantitatively, we propose a robust statistical iterative MPCT image reconstruction algorithm by incorporating tensor total generalized variation (TTGV) regularization into a penalized weighted least-squares framework. Specifically, the TTGV regularization fuses the spatial correlation of the myocardial structure and the temporal continuation of the contrast agent intake during the perfusion. Then, an efficient iterative strategy is developed for the objective function optimization. Comprehensive evaluations have been conducted on a digital XCAT phantom and a preclinical porcine dataset regarding the accuracy of the reconstructed MPCT images, the quantitative differentiation of ischemia and the algorithm’s robustness and efficiency.
Total variation superiorized conjugate gradient method for image reconstruction
NASA Astrophysics Data System (ADS)
Zibetti, Marcelo V. W.; Lin, Chuan; Herman, Gabor T.
2018-03-01
The conjugate gradient (CG) method is commonly used for the relatively-rapid solution of least squares problems. In image reconstruction, the problem can be ill-posed and also contaminated by noise; due to this, approaches such as regularization should be utilized. Total variation (TV) is a useful regularization penalty, frequently utilized in image reconstruction for generating images with sharp edges. When a non-quadratic norm is selected for regularization, as is the case for TV, then it is no longer possible to use CG. Non-linear CG is an alternative, but it does not share the efficiency that CG shows with least squares and methods such as fast iterative shrinkage-thresholding algorithms (FISTA) are preferred for problems with TV norm. A different approach to including prior information is superiorization. In this paper it is shown that the conjugate gradient method can be superiorized. Five different CG variants are proposed, including preconditioned CG. The CG methods superiorized by the total variation norm are presented and their performance in image reconstruction is demonstrated. It is illustrated that some of the proposed variants of the superiorized CG method can produce reconstructions of superior quality to those produced by FISTA and in less computational time, due to the speed of the original CG for least squares problems. In the Appendix we examine the behavior of one of the superiorized CG methods (we call it S-CG); one of its input parameters is a positive number ɛ. It is proved that, for any given ɛ that is greater than the half-squared-residual for the least squares solution, S-CG terminates in a finite number of steps with an output for which the half-squared-residual is less than or equal to ɛ. Importantly, it is also the case that the output will have a lower value of TV than what would be provided by unsuperiorized CG for the same value ɛ of the half-squared residual.
An iterative algorithm for L1-TV constrained regularization in image restoration
NASA Astrophysics Data System (ADS)
Chen, K.; Loli Piccolomini, E.; Zama, F.
2015-11-01
We consider the problem of restoring blurred images affected by impulsive noise. The adopted method restores the images by solving a sequence of constrained minimization problems where the data fidelity function is the ℓ1 norm of the residual and the constraint, chosen as the image Total Variation, is automatically adapted to improve the quality of the restored images. Although this approach is general, we report here the case of vectorial images where the blurring model involves contributions from the different image channels (cross channel blur). A computationally convenient extension of the Total Variation function to vectorial images is used and the results reported show that this approach is efficient for recovering nearly optimal images.
Total variation-based method for radar coincidence imaging with model mismatch for extended target
NASA Astrophysics Data System (ADS)
Cao, Kaicheng; Zhou, Xiaoli; Cheng, Yongqiang; Fan, Bo; Qin, Yuliang
2017-11-01
Originating from traditional optical coincidence imaging, radar coincidence imaging (RCI) is a staring/forward-looking imaging technique. In RCI, the reference matrix must be computed precisely to reconstruct the image as preferred; unfortunately, such precision is almost impossible due to the existence of model mismatch in practical applications. Although some conventional sparse recovery algorithms are proposed to solve the model-mismatch problem, they are inapplicable to nonsparse targets. We therefore sought to derive the signal model of RCI with model mismatch by replacing the sparsity constraint item with total variation (TV) regularization in the sparse total least squares optimization problem; in this manner, we obtain the objective function of RCI with model mismatch for an extended target. A more robust and efficient algorithm called TV-TLS is proposed, in which the objective function is divided into two parts and the perturbation matrix and scattering coefficients are updated alternately. Moreover, due to the ability of TV regularization to recover sparse signal or image with sparse gradient, TV-TLS method is also applicable to sparse recovering. Results of numerical experiments demonstrate that, for uniform extended targets, sparse targets, and real extended targets, the algorithm can achieve preferred imaging performance both in suppressing noise and in adapting to model mismatch.
Total variation optimization for imaging through turbid media with transmission matrix
NASA Astrophysics Data System (ADS)
Gong, Changmei; Shao, Xiaopeng; Wu, Tengfei; Liu, Jietao; Zhang, Jianqi
2016-12-01
With the transmission matrix (TM) of the whole optical system measured, the image of the object behind a turbid medium can be recovered from its speckle field by means of an image reconstruction algorithm. Instead of Tikhonov regularization algorithm (TRA), the total variation minimization by augmented Lagrangian and alternating direction algorithms (TVAL3) is introduced to recover object images. As a total variation (TV)-based approach, TVAL3 allows to effectively damp more noise and preserve more edges compared with TRA, thus providing more outstanding image quality. Different levels of detector noise and TM-measurement noise are successively added to analyze the antinoise performance of these two algorithms. Simulation results show that TVAL3 is able to recover more details and suppress more noise than TRA under different noise levels, thus providing much more excellent image quality. Furthermore, whether it be detector noise or TM-measurement noise, the reconstruction images obtained by TVAL3 at SNR=15 dB are far superior to those by TRA at SNR=50 dB.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mory, Cyril, E-mail: cyril.mory@philips.com; Philips Research Medisys, 33 rue de Verdun, 92156 Suresnes; Auvray, Vincent
2014-02-15
Purpose: Reconstruction of the beating heart in 3D + time in the catheter laboratory using only the available C-arm system would improve diagnosis, guidance, device sizing, and outcome control for intracardiac interventions, e.g., electrophysiology, valvular disease treatment, structural or congenital heart disease. To obtain such a reconstruction, the patient's electrocardiogram (ECG) must be recorded during the acquisition and used in the reconstruction. In this paper, the authors present a 4D reconstruction method aiming to reconstruct the heart from a single sweep 10 s acquisition. Methods: The authors introduce the 4D RecOnstructiOn using Spatial and TEmporal Regularization (short 4D ROOSTER) method,more » which reconstructs all cardiac phases at once, as a 3D + time volume. The algorithm alternates between a reconstruction step based on conjugate gradient and four regularization steps: enforcing positivity, averaging along time outside a motion mask that contains the heart and vessels, 3D spatial total variation minimization, and 1D temporal total variation minimization. Results: 4D ROOSTER recovers the different temporal representations of a moving Shepp and Logan phantom, and outperforms both ECG-gated simultaneous algebraic reconstruction technique and prior image constrained compressed sensing on a clinical case. It generates 3D + time reconstructions with sharp edges which can be used, for example, to estimate the patient's left ventricular ejection fraction. Conclusions: 4D ROOSTER can be applied for human cardiac C-arm CT, and potentially in other dynamic tomography areas. It can easily be adapted to other problems as regularization is decoupled from projection and back projection.« less
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zeng, Dong; Zhang, Xinyu; Bian, Zhaoying, E-mail: zybian@smu.edu.cn, E-mail: jhma@smu.edu.cn
Purpose: Cerebral perfusion computed tomography (PCT) imaging as an accurate and fast acute ischemic stroke examination has been widely used in clinic. Meanwhile, a major drawback of PCT imaging is the high radiation dose due to its dynamic scan protocol. The purpose of this work is to develop a robust perfusion deconvolution approach via structure tensor total variation (STV) regularization (PD-STV) for estimating an accurate residue function in PCT imaging with the low-milliampere-seconds (low-mAs) data acquisition. Methods: Besides modeling the spatio-temporal structure information of PCT data, the STV regularization of the present PD-STV approach can utilize the higher order derivativesmore » of the residue function to enhance denoising performance. To minimize the objective function, the authors propose an effective iterative algorithm with a shrinkage/thresholding scheme. A simulation study on a digital brain perfusion phantom and a clinical study on an old infarction patient were conducted to validate and evaluate the performance of the present PD-STV approach. Results: In the digital phantom study, visual inspection and quantitative metrics (i.e., the normalized mean square error, the peak signal-to-noise ratio, and the universal quality index) assessments demonstrated that the PD-STV approach outperformed other existing approaches in terms of the performance of noise-induced artifacts reduction and accurate perfusion hemodynamic maps (PHM) estimation. In the patient data study, the present PD-STV approach could yield accurate PHM estimation with several noticeable gains over other existing approaches in terms of visual inspection and correlation analysis. Conclusions: This study demonstrated the feasibility and efficacy of the present PD-STV approach in utilizing STV regularization to improve the accuracy of residue function estimation of cerebral PCT imaging in the case of low-mAs.« less
Niu, Shanzhou; Zhang, Shanli; Huang, Jing; Bian, Zhaoying; Chen, Wufan; Yu, Gaohang; Liang, Zhengrong; Ma, Jianhua
2016-01-01
Cerebral perfusion x-ray computed tomography (PCT) is an important functional imaging modality for evaluating cerebrovascular diseases and has been widely used in clinics over the past decades. However, due to the protocol of PCT imaging with repeated dynamic sequential scans, the associative radiation dose unavoidably increases as compared with that used in conventional CT examinations. Minimizing the radiation exposure in PCT examination is a major task in the CT field. In this paper, considering the rich similarity redundancy information among enhanced sequential PCT images, we propose a low-dose PCT image restoration model by incorporating the low-rank and sparse matrix characteristic of sequential PCT images. Specifically, the sequential PCT images were first stacked into a matrix (i.e., low-rank matrix), and then a non-convex spectral norm/regularization and a spatio-temporal total variation norm/regularization were then built on the low-rank matrix to describe the low rank and sparsity of the sequential PCT images, respectively. Subsequently, an improved split Bregman method was adopted to minimize the associative objective function with a reasonable convergence rate. Both qualitative and quantitative studies were conducted using a digital phantom and clinical cerebral PCT datasets to evaluate the present method. Experimental results show that the presented method can achieve images with several noticeable advantages over the existing methods in terms of noise reduction and universal quality index. More importantly, the present method can produce more accurate kinetic enhanced details and diagnostic hemodynamic parameter maps. PMID:27440948
Joint MR-PET reconstruction using a multi-channel image regularizer
Koesters, Thomas; Otazo, Ricardo; Bredies, Kristian; Sodickson, Daniel K
2016-01-01
While current state of the art MR-PET scanners enable simultaneous MR and PET measurements, the acquired data sets are still usually reconstructed separately. We propose a new multi-modality reconstruction framework using second order Total Generalized Variation (TGV) as a dedicated multi-channel regularization functional that jointly reconstructs images from both modalities. In this way, information about the underlying anatomy is shared during the image reconstruction process while unique differences are preserved. Results from numerical simulations and in-vivo experiments using a range of accelerated MR acquisitions and different MR image contrasts demonstrate improved PET image quality, resolution, and quantitative accuracy. PMID:28055827
Nonsmooth, nonconvex regularizers applied to linear electromagnetic inverse problems
NASA Astrophysics Data System (ADS)
Hidalgo-Silva, H.; Gomez-Trevino, E.
2017-12-01
Tikhonov's regularization method is the standard technique applied to obtain models of the subsurface conductivity distribution from electric or electromagnetic measurements by solving UT (m) = | F (m) - d |2 + λ P(m). The second term correspond to the stabilizing functional, with P (m) = | ∇ m |2 the usual approach, and λ the regularization parameter. Due to the roughness penalizer inclusion, the model developed by Tikhonov's algorithm tends to smear discontinuities, a feature that may be undesirable. An important requirement for the regularizer is to allow the recovery of edges, and smooth the homogeneous parts. As is well known, Total Variation (TV) is now the standard approach to meet this requirement. Recently, Wang et.al. proved convergence for alternating direction method of multipliers in nonconvex, nonsmooth optimization. In this work we present a study of several algorithms for model recovering of Geosounding data based on Infimal Convolution, and also on hybrid, TV and second order TV and nonsmooth, nonconvex regularizers, observing their performance on synthetic and real data. The algorithms are based on Bregman iteration and Split Bregman method, and the geosounding method is the low-induction numbers magnetic dipoles. Non-smooth regularizers are considered using the Legendre-Fenchel transform.
A MAP blind image deconvolution algorithm with bandwidth over-constrained
NASA Astrophysics Data System (ADS)
Ren, Zhilei; Liu, Jin; Liang, Yonghui; He, Yulong
2018-03-01
We demonstrate a maximum a posteriori (MAP) blind image deconvolution algorithm with bandwidth over-constrained and total variation (TV) regularization to recover a clear image from the AO corrected images. The point spread functions (PSFs) are estimated by bandwidth limited less than the cutoff frequency of the optical system. Our algorithm performs well in avoiding noise magnification. The performance is demonstrated on simulated data.
A long-term study of H(alpha) line variations in FK Comae Berenices
NASA Technical Reports Server (NTRS)
Welty, Alan D.; Ramsey, Lawrence W.; Iyengar, Mrinal; Nations, Harold L.; Buzasi, Derek L.
1993-01-01
We present observations of H(alpha) V/R ratio variations in FK Comae Berencies obtained during several observing seasons from 1981 to 1992. The raw H(alpha) emission profile is always observed to be double peaked due to the stellar-absorption component. During the most years the V/R ratio varies regularly with the period of the photometric light curve. The V/R periodicity is most obvious when time spans no longer than several stellar rotations are considered. We propose that the bulk of the emission component of the H(alpha) line arises in corotating circumstellar material that may be similar to that of a quiescent solar prominence. The lifetime of these structures appears to be on the order of weeks. A weak contribution from a circumstellar disk is evident and chromospheric emission may also be present. The appearance or disappearance of circumstellar structures over periods longer than a few weeks, or the total absence of such structures, blurs the more regular variations in H(alpha) seen over short time scales. Other more stochastic activity, such as flares, also clearly occurs. Phase shifts of the V/R ratio from year to year rule out the hypothesis that mass tranfer in a close binary system is responsible for the V/R variations.
Parallel algorithm of real-time infrared image restoration based on total variation theory
NASA Astrophysics Data System (ADS)
Zhu, Ran; Li, Miao; Long, Yunli; Zeng, Yaoyuan; An, Wei
2015-10-01
Image restoration is a necessary preprocessing step for infrared remote sensing applications. Traditional methods allow us to remove the noise but penalize too much the gradients corresponding to edges. Image restoration techniques based on variational approaches can solve this over-smoothing problem for the merits of their well-defined mathematical modeling of the restore procedure. The total variation (TV) of infrared image is introduced as a L1 regularization term added to the objective energy functional. It converts the restoration process to an optimization problem of functional involving a fidelity term to the image data plus a regularization term. Infrared image restoration technology with TV-L1 model exploits the remote sensing data obtained sufficiently and preserves information at edges caused by clouds. Numerical implementation algorithm is presented in detail. Analysis indicates that the structure of this algorithm can be easily implemented in parallelization. Therefore a parallel implementation of the TV-L1 filter based on multicore architecture with shared memory is proposed for infrared real-time remote sensing systems. Massive computation of image data is performed in parallel by cooperating threads running simultaneously on multiple cores. Several groups of synthetic infrared image data are used to validate the feasibility and effectiveness of the proposed parallel algorithm. Quantitative analysis of measuring the restored image quality compared to input image is presented. Experiment results show that the TV-L1 filter can restore the varying background image reasonably, and that its performance can achieve the requirement of real-time image processing.
NASA Astrophysics Data System (ADS)
Kaltenbacher, Barbara; Klassen, Andrej
2018-05-01
In this paper we provide a convergence analysis of some variational methods alternative to the classical Tikhonov regularization, namely Ivanov regularization (also called the method of quasi solutions) with some versions of the discrepancy principle for choosing the regularization parameter, and Morozov regularization (also called the method of the residuals). After motivating nonequivalence with Tikhonov regularization by means of an example, we prove well-definedness of the Ivanov and the Morozov method, convergence in the sense of regularization, as well as convergence rates under variational source conditions. Finally, we apply these results to some linear and nonlinear parameter identification problems in elliptic boundary value problems.
Retaining both discrete and smooth features in 1D and 2D NMR relaxation and diffusion experiments
NASA Astrophysics Data System (ADS)
Reci, A.; Sederman, A. J.; Gladden, L. F.
2017-11-01
A new method of regularization of 1D and 2D NMR relaxation and diffusion experiments is proposed and a robust algorithm for its implementation is introduced. The new form of regularization, termed the Modified Total Generalized Variation (MTGV) regularization, offers a compromise between distinguishing discrete and smooth features in the reconstructed distributions. The method is compared to the conventional method of Tikhonov regularization and the recently proposed method of L1 regularization, when applied to simulated data of 1D spin-lattice relaxation, T1, 1D spin-spin relaxation, T2, and 2D T1-T2 NMR experiments. A range of simulated distributions composed of two lognormally distributed peaks were studied. The distributions differed with regard to the variance of the peaks, which were designed to investigate a range of distributions containing only discrete, only smooth or both features in the same distribution. Three different signal-to-noise ratios were studied: 2000, 200 and 20. A new metric is proposed to compare the distributions reconstructed from the different regularization methods with the true distributions. The metric is designed to penalise reconstructed distributions which show artefact peaks. Based on this metric, MTGV regularization performs better than Tikhonov and L1 regularization in all cases except when the distribution is known to only comprise of discrete peaks, in which case L1 regularization is slightly more accurate than MTGV regularization.
NASA Astrophysics Data System (ADS)
Wang, Min
2017-06-01
This paper aims to establish the Tikhonov regularization method for generalized mixed variational inequalities in Banach spaces. For this purpose, we firstly prove a very general existence result for generalized mixed variational inequalities, provided that the mapping involved has the so-called mixed variational inequality property and satisfies a rather weak coercivity condition. Finally, we establish the Tikhonov regularization method for generalized mixed variational inequalities. Our findings extended the results for the generalized variational inequality problem (for short, GVIP( F, K)) in R^n spaces (He in Abstr Appl Anal, 2012) to the generalized mixed variational inequality problem (for short, GMVIP(F,φ , K)) in reflexive Banach spaces. On the other hand, we generalized the corresponding results for the generalized mixed variational inequality problem (for short, GMVIP(F,φ ,K)) in R^n spaces (Fu and He in J Sichuan Norm Univ (Nat Sci) 37:12-17, 2014) to reflexive Banach spaces.
A New Variational Method for Bias Correction and Its Applications to Rodent Brain Extraction.
Chang, Huibin; Huang, Weimin; Wu, Chunlin; Huang, Su; Guan, Cuntai; Sekar, Sakthivel; Bhakoo, Kishore Kumar; Duan, Yuping
2017-03-01
Brain extraction is an important preprocessing step for further analysis of brain MR images. Significant intensity inhomogeneity can be observed in rodent brain images due to the high-field MRI technique. Unlike most existing brain extraction methods that require bias corrected MRI, we present a high-order and L 0 regularized variational model for bias correction and brain extraction. The model is composed of a data fitting term, a piecewise constant regularization and a smooth regularization, which is constructed on a 3-D formulation for medical images with anisotropic voxel sizes. We propose an efficient multi-resolution algorithm for fast computation. At each resolution layer, we solve an alternating direction scheme, all subproblems of which have the closed-form solutions. The method is tested on three T2 weighted acquisition configurations comprising a total of 50 rodent brain volumes, which are with the acquisition field strengths of 4.7 Tesla, 9.4 Tesla and 17.6 Tesla, respectively. On one hand, we compare the results of bias correction with N3 and N4 in terms of the coefficient of variations on 20 different tissues of rodent brain. On the other hand, the results of brain extraction are compared against manually segmented gold standards, BET, BSE and 3-D PCNN based on a number of metrics. With the high accuracy and efficiency, our proposed method can facilitate automatic processing of large-scale brain studies.
Fast Quantitative Susceptibility Mapping with L1-Regularization and Automatic Parameter Selection
Bilgic, Berkin; Fan, Audrey P.; Polimeni, Jonathan R.; Cauley, Stephen F.; Bianciardi, Marta; Adalsteinsson, Elfar; Wald, Lawrence L.; Setsompop, Kawin
2014-01-01
Purpose To enable fast reconstruction of quantitative susceptibility maps with Total Variation penalty and automatic regularization parameter selection. Methods ℓ1-regularized susceptibility mapping is accelerated by variable-splitting, which allows closed-form evaluation of each iteration of the algorithm by soft thresholding and FFTs. This fast algorithm also renders automatic regularization parameter estimation practical. A weighting mask derived from the magnitude signal can be incorporated to allow edge-aware regularization. Results Compared to the nonlinear Conjugate Gradient (CG) solver, the proposed method offers 20× speed-up in reconstruction time. A complete pipeline including Laplacian phase unwrapping, background phase removal with SHARP filtering and ℓ1-regularized dipole inversion at 0.6 mm isotropic resolution is completed in 1.2 minutes using Matlab on a standard workstation compared to 22 minutes using the Conjugate Gradient solver. This fast reconstruction allows estimation of regularization parameters with the L-curve method in 13 minutes, which would have taken 4 hours with the CG algorithm. Proposed method also permits magnitude-weighted regularization, which prevents smoothing across edges identified on the magnitude signal. This more complicated optimization problem is solved 5× faster than the nonlinear CG approach. Utility of the proposed method is also demonstrated in functional BOLD susceptibility mapping, where processing of the massive time-series dataset would otherwise be prohibitive with the CG solver. Conclusion Online reconstruction of regularized susceptibility maps may become feasible with the proposed dipole inversion. PMID:24259479
ACCELERATING MR PARAMETER MAPPING USING SPARSITY-PROMOTING REGULARIZATION IN PARAMETRIC DIMENSION
Velikina, Julia V.; Alexander, Andrew L.; Samsonov, Alexey
2013-01-01
MR parameter mapping requires sampling along additional (parametric) dimension, which often limits its clinical appeal due to a several-fold increase in scan times compared to conventional anatomic imaging. Data undersampling combined with parallel imaging is an attractive way to reduce scan time in such applications. However, inherent SNR penalties of parallel MRI due to noise amplification often limit its utility even at moderate acceleration factors, requiring regularization by prior knowledge. In this work, we propose a novel regularization strategy, which utilizes smoothness of signal evolution in the parametric dimension within compressed sensing framework (p-CS) to provide accurate and precise estimation of parametric maps from undersampled data. The performance of the method was demonstrated with variable flip angle T1 mapping and compared favorably to two representative reconstruction approaches, image space-based total variation regularization and an analytical model-based reconstruction. The proposed p-CS regularization was found to provide efficient suppression of noise amplification and preservation of parameter mapping accuracy without explicit utilization of analytical signal models. The developed method may facilitate acceleration of quantitative MRI techniques that are not suitable to model-based reconstruction because of complex signal models or when signal deviations from the expected analytical model exist. PMID:23213053
Applications of compressed sensing image reconstruction to sparse view phase tomography
NASA Astrophysics Data System (ADS)
Ueda, Ryosuke; Kudo, Hiroyuki; Dong, Jian
2017-10-01
X-ray phase CT has a potential to give the higher contrast in soft tissue observations. To shorten the measure- ment time, sparse-view CT data acquisition has been attracting the attention. This paper applies two major compressed sensing (CS) approaches to image reconstruction in the x-ray sparse-view phase tomography. The first CS approach is the standard Total Variation (TV) regularization. The major drawbacks of TV regularization are a patchy artifact and loss in smooth intensity changes due to the piecewise constant nature of image model. The second CS method is a relatively new approach of CS which uses a nonlinear smoothing filter to design the regularization term. The nonlinear filter based CS is expected to reduce the major artifact in the TV regular- ization. The both cost functions can be minimized by the very fast iterative reconstruction method. However, in the past research activities, it is not clearly demonstrated how much image quality difference occurs between the TV regularization and the nonlinear filter based CS in x-ray phase CT applications. We clarify the issue by applying the two CS applications to the case of x-ray phase tomography. We provide results with numerically simulated data, which demonstrates that the nonlinear filter based CS outperforms the TV regularization in terms of textures and smooth intensity changes.
A time-accurate high-resolution TVD scheme for solving the Navier-Stokes equations
NASA Technical Reports Server (NTRS)
Kim, Hyun Dae; Liu, Nan-Suey
1992-01-01
A total variation diminishing (TVD) scheme has been developed and incorporated into an existing time-accurate high-resolution Navier-Stokes code. The accuracy and the robustness of the resulting solution procedure have been assessed by performing many calculations in four different areas: shock tube flows, regular shock reflection, supersonic boundary layer, and shock boundary layer interactions. These numerical results compare well with corresponding exact solutions or experimental data.
NASA Astrophysics Data System (ADS)
Su, Yuepeng; Ma, Shen; Dong, Shuanglin
2005-01-01
Nine enclosures (5 m × 5 m) were built in a Fenneropenaeus chinensis culture pond of Rushan Gulf in April, 2001. The probiotics and BIO ENERGIZER solution were applied for disparate treatments. Variations of alkaline phosphatase activity (APA) and its relationship with the contents of C, N and P in sediments were studied. Results show that APA of sediments increases from 3.096 nmol g-1min-1 to 5.407nmol g-1min-1 in culture period; the bacteria biomass is not the only factor to determine APA; the contents of total P and total organic carbon have a significant positive correlation with APA, while that of total nitrogen has a negative correlation. In addition, the contents of inorganic P and organic P are not regular with APA. By comparison, TOC shows a more significant coherence with APA, meaning that organic pollution in sediments affects APA remarkably.
Compressed sensing with gradient total variation for low-dose CBCT reconstruction
NASA Astrophysics Data System (ADS)
Seo, Chang-Woo; Cha, Bo Kyung; Jeon, Seongchae; Huh, Young; Park, Justin C.; Lee, Byeonghun; Baek, Junghee; Kim, Eunyoung
2015-06-01
This paper describes the improvement of convergence speed with gradient total variation (GTV) in compressed sensing (CS) for low-dose cone-beam computed tomography (CBCT) reconstruction. We derive a fast algorithm for the constrained total variation (TV)-based a minimum number of noisy projections. To achieve this task we combine the GTV with a TV-norm regularization term to promote an accelerated sparsity in the X-ray attenuation characteristics of the human body. The GTV is derived from a TV and enforces more efficient computationally and faster in convergence until a desired solution is achieved. The numerical algorithm is simple and derives relatively fast convergence. We apply a gradient projection algorithm that seeks a solution iteratively in the direction of the projected gradient while enforcing a non-negatively of the found solution. In comparison with the Feldkamp, Davis, and Kress (FDK) and conventional TV algorithms, the proposed GTV algorithm showed convergence in ≤18 iterations, whereas the original TV algorithm needs at least 34 iterations in reducing 50% of the projections compared with the FDK algorithm in order to reconstruct the chest phantom images. Future investigation includes improving imaging quality, particularly regarding X-ray cone-beam scatter, and motion artifacts of CBCT reconstruction.
A New Pansharpening Method Based on Spatial and Spectral Sparsity Priors.
He, Xiyan; Condat, Laurent; Bioucas-Diaz, Jose; Chanussot, Jocelyn; Xia, Junshi
2014-06-27
The development of multisensor systems in recent years has led to great increase in the amount of available remote sensing data. Image fusion techniques aim at inferring high quality images of a given area from degraded versions of the same area obtained by multiple sensors. This paper focuses on pansharpening, which is the inference of a high spatial resolution multispectral image from two degraded versions with complementary spectral and spatial resolution characteristics: a) a low spatial resolution multispectral image; and b) a high spatial resolution panchromatic image. We introduce a new variational model based on spatial and spectral sparsity priors for the fusion. In the spectral domain we encourage low-rank structure, whereas in the spatial domain we promote sparsity on the local differences. Given the fact that both panchromatic and multispectral images are integrations of the underlying continuous spectra using different channel responses, we propose to exploit appropriate regularizations based on both spatial and spectral links between panchromatic and the fused multispectral images. A weighted version of the vector Total Variation (TV) norm of the data matrix is employed to align the spatial information of the fused image with that of the panchromatic image. With regard to spectral information, two different types of regularization are proposed to promote a soft constraint on the linear dependence between the panchromatic and the fused multispectral images. The first one estimates directly the linear coefficients from the observed panchromatic and low resolution multispectral images by Linear Regression (LR) while the second one employs the Principal Component Pursuit (PCP) to obtain a robust recovery of the underlying low-rank structure. We also show that the two regularizers are strongly related. The basic idea of both regularizers is that the fused image should have low-rank and preserve edge locations. We use a variation of the recently proposed Split Augmented Lagrangian Shrinkage (SALSA) algorithm to effectively solve the proposed variational formulations. Experimental results on simulated and real remote sensing images show the effectiveness of the proposed pansharpening method compared to the state-of-the-art.
NASA Astrophysics Data System (ADS)
Lohvithee, Manasavee; Biguri, Ander; Soleimani, Manuchehr
2017-12-01
There are a number of powerful total variation (TV) regularization methods that have great promise in limited data cone-beam CT reconstruction with an enhancement of image quality. These promising TV methods require careful selection of the image reconstruction parameters, for which there are no well-established criteria. This paper presents a comprehensive evaluation of parameter selection in a number of major TV-based reconstruction algorithms. An appropriate way of selecting the values for each individual parameter has been suggested. Finally, a new adaptive-weighted projection-controlled steepest descent (AwPCSD) algorithm is presented, which implements the edge-preserving function for CBCT reconstruction with limited data. The proposed algorithm shows significant robustness compared to three other existing algorithms: ASD-POCS, AwASD-POCS and PCSD. The proposed AwPCSD algorithm is able to preserve the edges of the reconstructed images better with fewer sensitive parameters to tune.
NASA Technical Reports Server (NTRS)
Paul, M. P.
1982-01-01
Measurement of integrated columnar electron content and total electron content for the local ionosphere and the overlying protonosphere via Faraday rotation and group delay techniques has proven very useful. A field station was established having the geographic location of 31.5 deg N latitude and 91.06 deg W longitude to accomplish these objectives. A polarimeter receiving system was set up in the beginning to measure the Faraday rotation of 137.35 MHz radio signal from geostationary satellite ATS 3 to yield the integrated columnar electron content of the local ionosphere. The measurement was continued regularly, and the analysis of the data thus collected provided a synopsis of the statistical variation of the ionosphere along with the transient variations that occurred during the periods of geomagnetic and other disturbances.
The Total Variation Regularized L1 Model for Multiscale Decomposition
2006-01-01
L1 fidelity term, and presented impressive and successful applications of the TV-L1 model to impulsive noise removal and outlier identification. She...used to filter 1D signal [3], to remove impulsive (salt-n- pepper) noise [35], to extract textures from natural images [45], to remove varying...34, 35, 36] discovery of the usefulness of this model for removing impul- sive noise , Chan and Esedoglu’s [17] further analysis of this model, and a
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gilbert, Andrew J.; McDonald, Benjamin S.; Smith, Leon E.
The methods currently used by the International Atomic Energy Agency to account for nuclear materials at fuel fabrication facilities are time consuming and require in-field chemistry and operation by experts. Spectral X-ray radiography, along with advanced inverse algorithms, is an alternative inspection that could be completed noninvasively, without any in-field chemistry, with inspections of tens of seconds. The proposed inspection system and algorithms are presented here. The inverse algorithm uses total variation regularization and adaptive regularization parameter selection with the unbiased predictive risk estimator. Performance of the system is quantified with simulated X-ray inspection data and sensitivity of the outputmore » is tested against various inspection system instabilities. Material quantification from a fully-characterized inspection system is shown to be very accurate, with biases on nuclear material estimations of < 0.02%. It is shown that the results are sensitive to variations in the fuel powder sample density and detector pixel gain, which increase biases to 1%. Options to mitigate these inaccuracies are discussed.« less
Zhang, Cheng; Zhang, Tao; Zheng, Jian; Li, Ming; Lu, Yanfei; You, Jiali; Guan, Yihui
2015-01-01
In recent years, X-ray computed tomography (CT) is becoming widely used to reveal patient's anatomical information. However, the side effect of radiation, relating to genetic or cancerous diseases, has caused great public concern. The problem is how to minimize radiation dose significantly while maintaining image quality. As a practical application of compressed sensing theory, one category of methods takes total variation (TV) minimization as the sparse constraint, which makes it possible and effective to get a reconstruction image of high quality in the undersampling situation. On the other hand, a preliminary attempt of low-dose CT reconstruction based on dictionary learning seems to be another effective choice. But some critical parameters, such as the regularization parameter, cannot be determined by detecting datasets. In this paper, we propose a reweighted objective function that contributes to a numerical calculation model of the regularization parameter. A number of experiments demonstrate that this strategy performs well with better reconstruction images and saving of a large amount of time.
Controlled wavelet domain sparsity for x-ray tomography
NASA Astrophysics Data System (ADS)
Purisha, Zenith; Rimpeläinen, Juho; Bubba, Tatiana; Siltanen, Samuli
2018-01-01
Tomographic reconstruction is an ill-posed inverse problem that calls for regularization. One possibility is to require sparsity of the unknown in an orthonormal wavelet basis. This, in turn, can be achieved by variational regularization, where the penalty term is the sum of the absolute values of the wavelet coefficients. The primal-dual fixed point algorithm showed that the minimizer of the variational regularization functional can be computed iteratively using a soft-thresholding operation. Choosing the soft-thresholding parameter \
Comparison Study of Regularizations in Spectral Computed Tomography Reconstruction
NASA Astrophysics Data System (ADS)
Salehjahromi, Morteza; Zhang, Yanbo; Yu, Hengyong
2018-12-01
The energy-resolving photon-counting detectors in spectral computed tomography (CT) can acquire projections of an object in different energy channels. In other words, they are able to reliably distinguish the received photon energies. These detectors lead to the emerging spectral CT, which is also called multi-energy CT, energy-selective CT, color CT, etc. Spectral CT can provide additional information in comparison with the conventional CT in which energy integrating detectors are used to acquire polychromatic projections of an object being investigated. The measurements obtained by X-ray CT detectors are noisy in reality, especially in spectral CT where the photon number is low in each energy channel. Therefore, some regularization should be applied to obtain a better image quality for this ill-posed problem in spectral CT image reconstruction. Quadratic-based regularizations are not often satisfactory as they blur the edges in the reconstructed images. As a result, different edge-preserving regularization methods have been adopted for reconstructing high quality images in the last decade. In this work, we numerically evaluate the performance of different regularizers in spectral CT, including total variation, non-local means and anisotropic diffusion. The goal is to provide some practical guidance to accurately reconstruct the attenuation distribution in each energy channel of the spectral CT data.
A second order derivative scheme based on Bregman algorithm class
NASA Astrophysics Data System (ADS)
Campagna, Rosanna; Crisci, Serena; Cuomo, Salvatore; Galletti, Ardelio; Marcellino, Livia
2016-10-01
The algorithms based on the Bregman iterative regularization are known for efficiently solving convex constraint optimization problems. In this paper, we introduce a second order derivative scheme for the class of Bregman algorithms. Its properties of convergence and stability are investigated by means of numerical evidences. Moreover, we apply the proposed scheme to an isotropic Total Variation (TV) problem arising out of the Magnetic Resonance Image (MRI) denoising. Experimental results confirm that our algorithm has good performance in terms of denoising quality, effectiveness and robustness.
Seismic waveform inversion best practices: regional, global and exploration test cases
NASA Astrophysics Data System (ADS)
Modrak, Ryan; Tromp, Jeroen
2016-09-01
Reaching the global minimum of a waveform misfit function requires careful choices about the nonlinear optimization, preconditioning and regularization methods underlying an inversion. Because waveform inversion problems are susceptible to erratic convergence associated with strong nonlinearity, one or two test cases are not enough to reliably inform such decisions. We identify best practices, instead, using four seismic near-surface problems, one regional problem and two global problems. To make meaningful quantitative comparisons between methods, we carry out hundreds of inversions, varying one aspect of the implementation at a time. Comparing nonlinear optimization algorithms, we find that limited-memory BFGS provides computational savings over nonlinear conjugate gradient methods in a wide range of test cases. Comparing preconditioners, we show that a new diagonal scaling derived from the adjoint of the forward operator provides better performance than two conventional preconditioning schemes. Comparing regularization strategies, we find that projection, convolution, Tikhonov regularization and total variation regularization are effective in different contexts. Besides questions of one strategy or another, reliability and efficiency in waveform inversion depend on close numerical attention and care. Implementation details involving the line search and restart conditions have a strong effect on computational cost, regardless of the chosen nonlinear optimization algorithm.
Collins, R Lorraine; Kashdan, Todd B; Koutsky, James R; Morsheimer, Elizabeth T; Vetter, Charlene J
2008-01-01
Underage drinkers typically have not developed regular patterns of drinking and so are likely to exhibit situational variation in alcohol intake, including binge drinking. Information about such variation is not well captured by quantity/frequency (QF) measures, which require that drinkers blend information over time to derive a representative estimate of "typical" drinking. The Timeline Followback (TLFB) method is designed to retrospectively capture situational variations in drinking during a specific period of time. We compared our newly-developed Self-administered TLFB (STLFB) measure to a QF measure for reporting alcohol intake. Our sample of 429 (men=204; women=225) underage (i.e., age 18-20 years) drinkers completed the two drinking measures and reported on alcohol problems. The STLFB and QF measures converged in assessing typical daily intake, but the STLFB provided more information about situational variations in alcohol use and better identification of regular versus intermittent binge drinkers. Regular binge drinkers reported more alcohol problems. The STLFB is an easy-to-administer measure of variations in alcohol intake, which can be useful for understanding drinking behavior.
Li, Laquan; Wang, Jian; Lu, Wei; Tan, Shan
2016-01-01
Accurate tumor segmentation from PET images is crucial in many radiation oncology applications. Among others, partial volume effect (PVE) is recognized as one of the most important factors degrading imaging quality and segmentation accuracy in PET. Taking into account that image restoration and tumor segmentation are tightly coupled and can promote each other, we proposed a variational method to solve both problems simultaneously in this study. The proposed method integrated total variation (TV) semi-blind de-convolution and Mumford-Shah segmentation with multiple regularizations. Unlike many existing energy minimization methods using either TV or L2 regularization, the proposed method employed TV regularization over tumor edges to preserve edge information, and L2 regularization inside tumor regions to preserve the smooth change of the metabolic uptake in a PET image. The blur kernel was modeled as anisotropic Gaussian to address the resolution difference in transverse and axial directions commonly seen in a clinic PET scanner. The energy functional was rephrased using the Γ-convergence approximation and was iteratively optimized using the alternating minimization (AM) algorithm. The performance of the proposed method was validated on a physical phantom and two clinic datasets with non-Hodgkin’s lymphoma and esophageal cancer, respectively. Experimental results demonstrated that the proposed method had high performance for simultaneous image restoration, tumor segmentation and scanner blur kernel estimation. Particularly, the recovery coefficients (RC) of the restored images of the proposed method in the phantom study were close to 1, indicating an efficient recovery of the original blurred images; for segmentation the proposed method achieved average dice similarity indexes (DSIs) of 0.79 and 0.80 for two clinic datasets, respectively; and the relative errors of the estimated blur kernel widths were less than 19% in the transversal direction and 7% in the axial direction. PMID:28603407
Nonlocal variational model and filter algorithm to remove multiplicative noise
NASA Astrophysics Data System (ADS)
Chen, Dai-Qiang; Zhang, Hui; Cheng, Li-Zhi
2010-07-01
The nonlocal (NL) means filter proposed by Buades, Coll, and Morel (SIAM Multiscale Model. Simul. 4(2), 490-530, 2005), which makes full use of the redundancy information in images, has shown to be very efficient for image denoising with Gauss noise added. On the basis of the NL method and a striver to minimize the conditional mean-square error, we design a NL means filter to remove multiplicative noise, and combining the NL filter to regularity method, we propose a NL total variational (TV) model and present a fast iterated algorithm for it. Experiments demonstrate that our algorithm is better than TV method; it is superior in preserving small structures and textures and can obtain an improvement in peak signal-to-noise ratio.
Kainz, Philipp; Pfeiffer, Michael; Urschler, Martin
2017-01-01
Segmentation of histopathology sections is a necessary preprocessing step for digital pathology. Due to the large variability of biological tissue, machine learning techniques have shown superior performance over conventional image processing methods. Here we present our deep neural network-based approach for segmentation and classification of glands in tissue of benign and malignant colorectal cancer, which was developed to participate in the GlaS@MICCAI2015 colon gland segmentation challenge. We use two distinct deep convolutional neural networks (CNN) for pixel-wise classification of Hematoxylin-Eosin stained images. While the first classifier separates glands from background, the second classifier identifies gland-separating structures. In a subsequent step, a figure-ground segmentation based on weighted total variation produces the final segmentation result by regularizing the CNN predictions. We present both quantitative and qualitative segmentation results on the recently released and publicly available Warwick-QU colon adenocarcinoma dataset associated with the GlaS@MICCAI2015 challenge and compare our approach to the simultaneously developed other approaches that participated in the same challenge. On two test sets, we demonstrate our segmentation performance and show that we achieve a tissue classification accuracy of 98% and 95%, making use of the inherent capability of our system to distinguish between benign and malignant tissue. Our results show that deep learning approaches can yield highly accurate and reproducible results for biomedical image analysis, with the potential to significantly improve the quality and speed of medical diagnoses.
Kainz, Philipp; Pfeiffer, Michael
2017-01-01
Segmentation of histopathology sections is a necessary preprocessing step for digital pathology. Due to the large variability of biological tissue, machine learning techniques have shown superior performance over conventional image processing methods. Here we present our deep neural network-based approach for segmentation and classification of glands in tissue of benign and malignant colorectal cancer, which was developed to participate in the GlaS@MICCAI2015 colon gland segmentation challenge. We use two distinct deep convolutional neural networks (CNN) for pixel-wise classification of Hematoxylin-Eosin stained images. While the first classifier separates glands from background, the second classifier identifies gland-separating structures. In a subsequent step, a figure-ground segmentation based on weighted total variation produces the final segmentation result by regularizing the CNN predictions. We present both quantitative and qualitative segmentation results on the recently released and publicly available Warwick-QU colon adenocarcinoma dataset associated with the GlaS@MICCAI2015 challenge and compare our approach to the simultaneously developed other approaches that participated in the same challenge. On two test sets, we demonstrate our segmentation performance and show that we achieve a tissue classification accuracy of 98% and 95%, making use of the inherent capability of our system to distinguish between benign and malignant tissue. Our results show that deep learning approaches can yield highly accurate and reproducible results for biomedical image analysis, with the potential to significantly improve the quality and speed of medical diagnoses. PMID:29018612
Ramani, Sathish; Liu, Zhihao; Rosen, Jeffrey; Nielsen, Jon-Fredrik; Fessler, Jeffrey A.
2012-01-01
Regularized iterative reconstruction algorithms for imaging inverse problems require selection of appropriate regularization parameter values. We focus on the challenging problem of tuning regularization parameters for nonlinear algorithms for the case of additive (possibly complex) Gaussian noise. Generalized cross-validation (GCV) and (weighted) mean-squared error (MSE) approaches (based on Stein's Unbiased Risk Estimate— SURE) need the Jacobian matrix of the nonlinear reconstruction operator (representative of the iterative algorithm) with respect to the data. We derive the desired Jacobian matrix for two types of nonlinear iterative algorithms: a fast variant of the standard iterative reweighted least-squares method and the contemporary split-Bregman algorithm, both of which can accommodate a wide variety of analysis- and synthesis-type regularizers. The proposed approach iteratively computes two weighted SURE-type measures: Predicted-SURE and Projected-SURE (that require knowledge of noise variance σ2), and GCV (that does not need σ2) for these algorithms. We apply the methods to image restoration and to magnetic resonance image (MRI) reconstruction using total variation (TV) and an analysis-type ℓ1-regularization. We demonstrate through simulations and experiments with real data that minimizing Predicted-SURE and Projected-SURE consistently lead to near-MSE-optimal reconstructions. We also observed that minimizing GCV yields reconstruction results that are near-MSE-optimal for image restoration and slightly sub-optimal for MRI. Theoretical derivations in this work related to Jacobian matrix evaluations can be extended, in principle, to other types of regularizers and reconstruction algorithms. PMID:22531764
Changes in atmospheric composition inferred from ionospheric production rates
NASA Technical Reports Server (NTRS)
Titheridge, J. E.
1974-01-01
Changes in the total electron content of the ionosphere near sunrise are used to determine the integrated production rate in the ionosphere (Q) from 1965 to 1971 at latitudes of 34S, 20N, and 34N. The observed regular semiannual variation in Q through a range of 1:3:1 is interpreted as an increase in the ratio O/N2 (relative densities) near the equinoxes. It follows that there is a worldwide semiannual variation in atmospheric composition, with the above ratio maximum just after the equinoxes. There is a large seasonal variation in the Northern hemisphere with a maximum in mid-summer. This effect is absent in the Southern hemisphere. At all times except solar maximum in the Northern hemisphere there is a global asymmetry. The ratio O/N2 is about three times as large in the Northern hemisphere. The overall mechanism appears to be N2 absorption.
Regularization by Functions of Bounded Variation and Applications to Image Enhancement
DOE Office of Scientific and Technical Information (OSTI.GOV)
Casas, E.; Kunisch, K.; Pola, C.
1999-09-15
Optimization problems regularized by bounded variation seminorms are analyzed. The optimality system is obtained and finite-dimensional approximations of bounded variation function spaces as well as of the optimization problems are studied. It is demonstrated that the choice of the vector norm in the definition of the bounded variation seminorm is of special importance for approximating subspaces consisting of piecewise constant functions. Algorithms based on a primal-dual framework that exploit the structure of these nondifferentiable optimization problems are proposed. Numerical examples are given for denoising of blocky images with very high noise.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, L; Tan, S; Lu, W
Purpose: To propose a new variational method which couples image restoration with tumor segmentation for PET images using multiple regularizations. Methods: Partial volume effect (PVE) is a major degrading factor impacting tumor segmentation accuracy in PET imaging. The existing segmentation methods usually need to take prior calibrations to compensate PVE and they are highly system-dependent. Taking into account that image restoration and segmentation can promote each other and they are tightly coupled, we proposed a variational method to solve the two problems together. Our method integrated total variation (TV) semi-blind deconvolution and Mumford-Shah (MS) segmentation. The TV norm was usedmore » on edges to protect the edge information, and the L{sub 2} norm was used to avoid staircase effect in the no-edge area. The blur kernel was constrained to the Gaussian model parameterized by its variance and we assumed that the variances in the X-Y and Z directions are different. The energy functional was iteratively optimized by an alternate minimization algorithm. Segmentation performance was tested on eleven patients with non-Hodgkin’s lymphoma, and evaluated by Dice similarity index (DSI) and classification error (CE). For comparison, seven other widely used methods were also tested and evaluated. Results: The combination of TV and L{sub 2} regularizations effectively improved the segmentation accuracy. The average DSI increased by around 0.1 than using either the TV or the L{sub 2} norm. The proposed method was obviously superior to other tested methods. It has an average DSI and CE of 0.80 and 0.41, while the FCM method — the second best one — has only an average DSI and CE of 0.66 and 0.64. Conclusion: Coupling image restoration and segmentation can handle PVE and thus improves tumor segmentation accuracy in PET. Alternate use of TV and L2 regularizations can further improve the performance of the algorithm. This work was supported in part by National Natural Science Foundation of China (NNSFC), under Grant No.61375018, and Fundamental Research Funds for the Central Universities, under Grant No. 2012QN086. Wei Lu was supported in part by the National Institutes of Health (NIH) Grant No. R01 CA172638.« less
Image super-resolution via adaptive filtering and regularization
NASA Astrophysics Data System (ADS)
Ren, Jingbo; Wu, Hao; Dong, Weisheng; Shi, Guangming
2014-11-01
Image super-resolution (SR) is widely used in the fields of civil and military, especially for the low-resolution remote sensing images limited by the sensor. Single-image SR refers to the task of restoring a high-resolution (HR) image from the low-resolution image coupled with some prior knowledge as a regularization term. One classic method regularizes image by total variation (TV) and/or wavelet or some other transform which introduce some artifacts. To compress these shortages, a new framework for single image SR is proposed by utilizing an adaptive filter before regularization. The key of our model is that the adaptive filter is used to remove the spatial relevance among pixels first and then only the high frequency (HF) part, which is sparser in TV and transform domain, is considered as the regularization term. Concretely, through transforming the original model, the SR question can be solved by two alternate iteration sub-problems. Before each iteration, the adaptive filter should be updated to estimate the initial HF. A high quality HF part and HR image can be obtained by solving the first and second sub-problem, respectively. In experimental part, a set of remote sensing images captured by Landsat satellites are tested to demonstrate the effectiveness of the proposed framework. Experimental results show the outstanding performance of the proposed method in quantitative evaluation and visual fidelity compared with the state-of-the-art methods.
Multichannel blind iterative image restoration.
Sroubek, Filip; Flusser, Jan
2003-01-01
Blind image deconvolution is required in many applications of microscopy imaging, remote sensing, and astronomical imaging. Unfortunately in a single-channel framework, serious conceptual and numerical problems are often encountered. Very recently, an eigenvector-based method (EVAM) was proposed for a multichannel framework which determines perfectly convolution masks in a noise-free environment if channel disparity, called co-primeness, is satisfied. We propose a novel iterative algorithm based on recent anisotropic denoising techniques of total variation and a Mumford-Shah functional with the EVAM restoration condition included. A linearization scheme of half-quadratic regularization together with a cell-centered finite difference discretization scheme is used in the algorithm and provides a unified approach to the solution of total variation or Mumford-Shah. The algorithm performs well even on very noisy images and does not require an exact estimation of mask orders. We demonstrate capabilities of the algorithm on synthetic data. Finally, the algorithm is applied to defocused images taken with a digital camera and to data from astronomical ground-based observations of the Sun.
Zhang, Cheng; Zhang, Tao; Li, Ming; Lu, Yanfei; You, Jiali; Guan, Yihui
2015-01-01
In recent years, X-ray computed tomography (CT) is becoming widely used to reveal patient's anatomical information. However, the side effect of radiation, relating to genetic or cancerous diseases, has caused great public concern. The problem is how to minimize radiation dose significantly while maintaining image quality. As a practical application of compressed sensing theory, one category of methods takes total variation (TV) minimization as the sparse constraint, which makes it possible and effective to get a reconstruction image of high quality in the undersampling situation. On the other hand, a preliminary attempt of low-dose CT reconstruction based on dictionary learning seems to be another effective choice. But some critical parameters, such as the regularization parameter, cannot be determined by detecting datasets. In this paper, we propose a reweighted objective function that contributes to a numerical calculation model of the regularization parameter. A number of experiments demonstrate that this strategy performs well with better reconstruction images and saving of a large amount of time. PMID:26550024
Hessian Schatten-norm regularization for linear inverse problems.
Lefkimmiatis, Stamatios; Ward, John Paul; Unser, Michael
2013-05-01
We introduce a novel family of invariant, convex, and non-quadratic functionals that we employ to derive regularized solutions of ill-posed linear inverse imaging problems. The proposed regularizers involve the Schatten norms of the Hessian matrix, which are computed at every pixel of the image. They can be viewed as second-order extensions of the popular total-variation (TV) semi-norm since they satisfy the same invariance properties. Meanwhile, by taking advantage of second-order derivatives, they avoid the staircase effect, a common artifact of TV-based reconstructions, and perform well for a wide range of applications. To solve the corresponding optimization problems, we propose an algorithm that is based on a primal-dual formulation. A fundamental ingredient of this algorithm is the projection of matrices onto Schatten norm balls of arbitrary radius. This operation is performed efficiently based on a direct link we provide between vector projections onto lq norm balls and matrix projections onto Schatten norm balls. Finally, we demonstrate the effectiveness of the proposed methods through experimental results on several inverse imaging problems with real and simulated data.
Patterns of Variation for the Sun and Sun-like Stars
NASA Astrophysics Data System (ADS)
Radick, Richard R.; Lockwood, G. Wesley; Henry, Gregory W.; Hall, Jeffrey C.; Pevtsov, Alexei A.
2018-03-01
We compare patterns of variation for the Sun and 72 Sun-like stars by combining total and spectral solar irradiance measurements between 2003 and 2017 from the SORCE satellite, Strömgren b, y stellar photometry between 1993 and 2017 from Fairborn Observatory, and solar and stellar chromospheric Ca II H+K emission observations between 1992 and 2016 from Lowell Observatory. The new data and their analysis strengthen the relationships found previously between chromospheric and brightness variability on the decadal timescale of the solar activity cycle. Both chromospheric H+K and photometric b, y variability among Sun-like stars are related to average chromospheric activity by power laws on this timescale. Young active stars become fainter as their H+K emission increases, and older, less active, more Sun-age stars tend to show a pattern of direct correlation between photometric and chromospheric emission variations. The directly correlated pattern between total solar irradiance and chromospheric Ca II emission variations shown by the Sun appears to extend also to variations in the Strömgren b, y portion of the solar spectrum. Although the Sun does not differ strongly from its stellar age and spectral class mates in the activity and variability characteristics that we have now studied for over three decades, it may be somewhat unusual in two respects: (1) its comparatively smooth, regular activity cycle, and (2) its rather low photometric brightness variation relative to its chromospheric activity level and variation, perhaps indicating that facular emission and sunspot darkening are especially well-balanced on the Sun.
Convex blind image deconvolution with inverse filtering
NASA Astrophysics Data System (ADS)
Lv, Xiao-Guang; Li, Fang; Zeng, Tieyong
2018-03-01
Blind image deconvolution is the process of estimating both the original image and the blur kernel from the degraded image with only partial or no information about degradation and the imaging system. It is a bilinear ill-posed inverse problem corresponding to the direct problem of convolution. Regularization methods are used to handle the ill-posedness of blind deconvolution and get meaningful solutions. In this paper, we investigate a convex regularized inverse filtering method for blind deconvolution of images. We assume that the support region of the blur object is known, as has been done in a few existing works. By studying the inverse filters of signal and image restoration problems, we observe the oscillation structure of the inverse filters. Inspired by the oscillation structure of the inverse filters, we propose to use the star norm to regularize the inverse filter. Meanwhile, we use the total variation to regularize the resulting image obtained by convolving the inverse filter with the degraded image. The proposed minimization model is shown to be convex. We employ the first-order primal-dual method for the solution of the proposed minimization model. Numerical examples for blind image restoration are given to show that the proposed method outperforms some existing methods in terms of peak signal-to-noise ratio (PSNR), structural similarity (SSIM), visual quality and time consumption.
Samara, Anna; Smith, Kenny; Brown, Helen; Wonnacott, Elizabeth
2017-05-01
Languages exhibit sociolinguistic variation, such that adult native speakers condition the usage of linguistic variants on social context, gender, and ethnicity, among other cues. While the existence of this kind of socially conditioned variation is well-established, less is known about how it is acquired. Studies of naturalistic language use by children provide various examples where children's production of sociolinguistic variants appears to be conditioned on similar factors to adults' production, but it is difficult to determine whether this reflects knowledge of sociolinguistic conditioning or systematic differences in the input to children from different social groups. Furthermore, artificial language learning experiments have shown that children have a tendency to eliminate variation, a process which could potentially work against their acquisition of sociolinguistic variation. The current study used a semi-artificial language learning paradigm to investigate learning of the sociolinguistic cue of speaker identity in 6-year-olds and adults. Participants were trained and tested on an artificial language where nouns were obligatorily followed by one of two meaningless particles and were produced by one of two speakers (one male, one female). Particle usage was conditioned deterministically on speaker identity (Experiment 1), probabilistically (Experiment 2), or not at all (Experiment 3). Participants were given tests of production and comprehension. In Experiments 1 and 2, both children and adults successfully acquired the speaker identity cue, although the effect was stronger for adults and in Experiment 1. In addition, in all three experiments, there was evidence of regularization in participants' productions, although the type of regularization differed with age: children showed regularization by boosting the frequency of one particle at the expense of the other, while adults regularized by conditioning particle usage on lexical items. Overall, results demonstrate that children and adults are sensitive to speaker identity cues, an ability which is fundamental to tracking sociolinguistic variation, and that children's well-established tendency to regularize does not prevent them from learning sociolinguistically conditioned variation. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.
Bias correction for magnetic resonance images via joint entropy regularization.
Wang, Shanshan; Xia, Yong; Dong, Pei; Luo, Jianhua; Huang, Qiu; Feng, Dagan; Li, Yuanxiang
2014-01-01
Due to the imperfections of the radio frequency (RF) coil or object-dependent electrodynamic interactions, magnetic resonance (MR) images often suffer from a smooth and biologically meaningless bias field, which causes severe troubles for subsequent processing and quantitative analysis. To effectively restore the original signal, this paper simultaneously exploits the spatial and gradient features of the corrupted MR images for bias correction via the joint entropy regularization. With both isotropic and anisotropic total variation (TV) considered, two nonparametric bias correction algorithms have been proposed, namely IsoTVBiasC and AniTVBiasC. These two methods have been applied to simulated images under various noise levels and bias field corruption and also tested on real MR data. The test results show that the proposed two methods can effectively remove the bias field and also present comparable performance compared to the state-of-the-art methods.
NASA Astrophysics Data System (ADS)
Zhang, Wenkun; Zhang, Hanming; Wang, Linyuan; Cai, Ailong; Li, Lei; Yan, Bin
2018-02-01
Limited angle computed tomography (CT) reconstruction is widely performed in medical diagnosis and industrial testing because of the size of objects, engine/armor inspection requirements, and limited scan flexibility. Limited angle reconstruction necessitates usage of optimization-based methods that utilize additional sparse priors. However, most of conventional methods solely exploit sparsity priors of spatial domains. When CT projection suffers from serious data deficiency or various noises, obtaining reconstruction images that meet the requirement of quality becomes difficult and challenging. To solve this problem, this paper developed an adaptive reconstruction method for limited angle CT problem. The proposed method simultaneously uses spatial and Radon domain regularization model based on total variation (TV) and data-driven tight frame. Data-driven tight frame being derived from wavelet transformation aims at exploiting sparsity priors of sinogram in Radon domain. Unlike existing works that utilize pre-constructed sparse transformation, the framelets of the data-driven regularization model can be adaptively learned from the latest projection data in the process of iterative reconstruction to provide optimal sparse approximations for given sinogram. At the same time, an effective alternating direction method is designed to solve the simultaneous spatial and Radon domain regularization model. The experiments for both simulation and real data demonstrate that the proposed algorithm shows better performance in artifacts depression and details preservation than the algorithms solely using regularization model of spatial domain. Quantitative evaluations for the results also indicate that the proposed algorithm applying learning strategy performs better than the dual domains algorithms without learning regularization model
Discharge regularity in the turtle posterior crista: comparisons between experiment and theory.
Goldberg, Jay M; Holt, Joseph C
2013-12-01
Intra-axonal recordings were made from bouton fibers near their termination in the turtle posterior crista. Spike discharge, miniature excitatory postsynaptic potentials (mEPSPs), and afterhyperpolarizations (AHPs) were monitored during resting activity in both regularly and irregularly discharging units. Quantal size (qsize) and quantal rate (qrate) were estimated by shot-noise theory. Theoretically, the ratio, σV/(dμV/dt), between synaptic noise (σV) and the slope of the mean voltage trajectory (dμV/dt) near threshold crossing should determine discharge regularity. AHPs are deeper and more prolonged in regular units; as a result, dμV/dt is larger, the more regular the discharge. The qsize is larger and qrate smaller in irregular units; these oppositely directed trends lead to little variation in σV with discharge regularity. Of the two variables, dμV/dt is much more influential than the nearly constant σV in determining regularity. Sinusoidal canal-duct indentations at 0.3 Hz led to modulations in spike discharge and synaptic voltage. Gain, the ratio between the amplitudes of the two modulations, and phase leads re indentation of both modulations are larger in irregular units. Gain variations parallel the sensitivity of the postsynaptic spike encoder, the set of conductances that converts synaptic input into spike discharge. Phase variations reflect both synaptic inputs to the encoder and postsynaptic processes. Experimental data were interpreted using a stochastic integrate-and-fire model. Advantages of an irregular discharge include an enhanced encoder gain and the prevention of nonlinear phase locking. Regular and irregular units are more efficient, respectively, in the encoding of low- and high-frequency head rotations, respectively.
Discharge regularity in the turtle posterior crista: comparisons between experiment and theory
Holt, Joseph C.
2013-01-01
Intra-axonal recordings were made from bouton fibers near their termination in the turtle posterior crista. Spike discharge, miniature excitatory postsynaptic potentials (mEPSPs), and afterhyperpolarizations (AHPs) were monitored during resting activity in both regularly and irregularly discharging units. Quantal size (qsize) and quantal rate (qrate) were estimated by shot-noise theory. Theoretically, the ratio, σV/(dμV/dt), between synaptic noise (σV) and the slope of the mean voltage trajectory (dμV/dt) near threshold crossing should determine discharge regularity. AHPs are deeper and more prolonged in regular units; as a result, dμV/dt is larger, the more regular the discharge. The qsize is larger and qrate smaller in irregular units; these oppositely directed trends lead to little variation in σV with discharge regularity. Of the two variables, dμV/dt is much more influential than the nearly constant σV in determining regularity. Sinusoidal canal-duct indentations at 0.3 Hz led to modulations in spike discharge and synaptic voltage. Gain, the ratio between the amplitudes of the two modulations, and phase leads re indentation of both modulations are larger in irregular units. Gain variations parallel the sensitivity of the postsynaptic spike encoder, the set of conductances that converts synaptic input into spike discharge. Phase variations reflect both synaptic inputs to the encoder and postsynaptic processes. Experimental data were interpreted using a stochastic integrate-and-fire model. Advantages of an irregular discharge include an enhanced encoder gain and the prevention of nonlinear phase locking. Regular and irregular units are more efficient, respectively, in the encoding of low- and high-frequency head rotations, respectively. PMID:24004525
NASA Astrophysics Data System (ADS)
Kuramochi, Kazuki; Akiyama, Kazunori; Ikeda, Shiro; Tazaki, Fumie; Fish, Vincent L.; Pu, Hung-Yi; Asada, Keiichi; Honma, Mareki
2018-05-01
We propose a new imaging technique for interferometry using sparse modeling, utilizing two regularization terms: the ℓ 1-norm and a new function named total squared variation (TSV) of the brightness distribution. First, we demonstrate that our technique may achieve a superresolution of ∼30% compared with the traditional CLEAN beam size using synthetic observations of two point sources. Second, we present simulated observations of three physically motivated static models of Sgr A* with the Event Horizon Telescope (EHT) to show the performance of proposed techniques in greater detail. Remarkably, in both the image and gradient domains, the optimal beam size minimizing root-mean-squared errors is ≲10% of the traditional CLEAN beam size for ℓ 1+TSV regularization, and non-convolved reconstructed images have smaller errors than beam-convolved reconstructed images. This indicates that TSV is well matched to the expected physical properties of the astronomical images and the traditional post-processing technique of Gaussian convolution in interferometric imaging may not be required. We also propose a feature-extraction method to detect circular features from the image of a black hole shadow and use it to evaluate the performance of the image reconstruction. With this method and reconstructed images, the EHT can constrain the radius of the black hole shadow with an accuracy of ∼10%–20% in present simulations for Sgr A*, suggesting that the EHT would be able to provide useful independent measurements of the mass of the supermassive black holes in Sgr A* and also another primary target, M87.
Healthy and unhealthy eating at lower secondary school in Norway.
Hilsen, Marit; Eikemo, Terje A; Bere, Elling
2010-11-01
To assess adolescents' eating/drinking habits of a selection of healthy and unhealthy food items at school, variations in gender and socioeconomic status in these eating habits, and variations between the schools. A cross-sectional study among 2870 adolescents (mean age: 15.5 years) within the Fruits and Vegetables Make the Marks (FVMM) project. A survey questionnaire was completed by the pupils in the classroom in the presence of a trained project worker. One school lesson (45 minutes) was used to complete the questionnaire. A total of two healthy (fruit and vegetables (FV), water) and five unhealthy (candy and/or potato chips, sweet bakery, instant noodles, regular soft drinks, and diet soft drinks) food items were assessed by food frequency questions. All variables were dichotomised to less than once a week and once a week or more. Several pupils reported to consume snacks (33%), sweet bakery (36%) and regular soft drinks (24%) at school at least once a week. The proportion of pupils who reported to eat FV at least once a week (40%) was low. Girls and pupils with plans of higher education had a more favourable intake of healthy versus unhealthy food items at school. In two-level variance component analyses the proportional school variation ranged from 3.4% (diet soft drinks) to 30.7% (noodles). A large number of adolescents consume unhealthy food items at school and few eat FV. Large differences were observed between groups of pupils and between the schools in consumption of these foods.
Teichtmeister, S.; Aldakheel, F.
2016-01-01
This work outlines a novel variational-based theory for the phase-field modelling of ductile fracture in elastic–plastic solids undergoing large strains. The phase-field approach regularizes sharp crack surfaces within a pure continuum setting by a specific gradient damage modelling. It is linked to a formulation of gradient plasticity at finite strains. The framework includes two independent length scales which regularize both the plastic response as well as the crack discontinuities. This ensures that the damage zones of ductile fracture are inside of plastic zones, and guarantees on the computational side a mesh objectivity in post-critical ranges. PMID:27002069
Xiong, Naixue; Liu, Ryan Wen; Liang, Maohan; Wu, Di; Liu, Zhao; Wu, Huisi
2017-01-18
Single-image blind deblurring for imaging sensors in the Internet of Things (IoT) is a challenging ill-conditioned inverse problem, which requires regularization techniques to stabilize the image restoration process. The purpose is to recover the underlying blur kernel and latent sharp image from only one blurred image. Under many degraded imaging conditions, the blur kernel could be considered not only spatially sparse, but also piecewise smooth with the support of a continuous curve. By taking advantage of the hybrid sparse properties of the blur kernel, a hybrid regularization method is proposed in this paper to robustly and accurately estimate the blur kernel. The effectiveness of the proposed blur kernel estimation method is enhanced by incorporating both the L 1 -norm of kernel intensity and the squared L 2 -norm of the intensity derivative. Once the accurate estimation of the blur kernel is obtained, the original blind deblurring can be simplified to the direct deconvolution of blurred images. To guarantee robust non-blind deconvolution, a variational image restoration model is presented based on the L 1 -norm data-fidelity term and the total generalized variation (TGV) regularizer of second-order. All non-smooth optimization problems related to blur kernel estimation and non-blind deconvolution are effectively handled by using the alternating direction method of multipliers (ADMM)-based numerical methods. Comprehensive experiments on both synthetic and realistic datasets have been implemented to compare the proposed method with several state-of-the-art methods. The experimental comparisons have illustrated the satisfactory imaging performance of the proposed method in terms of quantitative and qualitative evaluations.
A novel approach of ensuring layout regularity correct by construction in advanced technologies
NASA Astrophysics Data System (ADS)
Ahmed, Shafquat Jahan; Vaderiya, Yagnesh; Gupta, Radhika; Parthasarathy, Chittoor; Marin, Jean-Claude; Robert, Frederic
2017-03-01
In advanced technology nodes, layout regularity has become a mandatory prerequisite to create robust designs less sensitive to variations in manufacturing process in order to improve yield and minimizing electrical variability. In this paper we describe a method for designing regular full custom layouts based on design and process co-optimization. The method includes various design rule checks that can be used on-the-fly during leaf-cell layout development. We extract a Layout Regularity Index (LRI) from the layouts based on the jogs, alignments and pitches used in the design for any given metal layer. Regularity Index of a layout is the direct indicator of manufacturing yield and is used to compare the relative health of different layout blocks in terms of process friendliness. The method has been deployed for 28nm and 40nm technology nodes for Memory IP and is being extended to other IPs (IO, standard-cell). We have quantified the gain of layout regularity with the deployed method on printability and electrical characteristics by process-variation (PV) band simulation analysis and have achieved up-to 5nm reduction in PV band.
A dictionary learning approach for Poisson image deblurring.
Ma, Liyan; Moisan, Lionel; Yu, Jian; Zeng, Tieyong
2013-07-01
The restoration of images corrupted by blur and Poisson noise is a key issue in medical and biological image processing. While most existing methods are based on variational models, generally derived from a maximum a posteriori (MAP) formulation, recently sparse representations of images have shown to be efficient approaches for image recovery. Following this idea, we propose in this paper a model containing three terms: a patch-based sparse representation prior over a learned dictionary, the pixel-based total variation regularization term and a data-fidelity term capturing the statistics of Poisson noise. The resulting optimization problem can be solved by an alternating minimization technique combined with variable splitting. Extensive experimental results suggest that in terms of visual quality, peak signal-to-noise ratio value and the method noise, the proposed algorithm outperforms state-of-the-art methods.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Akiyama, Kazunori; Fish, Vincent L.; Doeleman, Sheperd S.
We propose a new imaging technique for radio and optical/infrared interferometry. The proposed technique reconstructs the image from the visibility amplitude and closure phase, which are standard data products of short-millimeter very long baseline interferometers such as the Event Horizon Telescope (EHT) and optical/infrared interferometers, by utilizing two regularization functions: the ℓ {sub 1}-norm and total variation (TV) of the brightness distribution. In the proposed method, optimal regularization parameters, which represent the sparseness and effective spatial resolution of the image, are derived from data themselves using cross-validation (CV). As an application of this technique, we present simulated observations of M87more » with the EHT based on four physically motivated models. We confirm that ℓ {sub 1} + TV regularization can achieve an optimal resolution of ∼20%–30% of the diffraction limit λ / D {sub max}, which is the nominal spatial resolution of a radio interferometer. With the proposed technique, the EHT can robustly and reasonably achieve super-resolution sufficient to clearly resolve the black hole shadow. These results make it promising for the EHT to provide an unprecedented view of the event-horizon-scale structure in the vicinity of the supermassive black hole in M87 and also the Galactic center Sgr A*.« less
How little data is enough? Phase-diagram analysis of sparsity-regularized X-ray computed tomography
Jørgensen, J. S.; Sidky, E. Y.
2015-01-01
We introduce phase-diagram analysis, a standard tool in compressed sensing (CS), to the X-ray computed tomography (CT) community as a systematic method for determining how few projections suffice for accurate sparsity-regularized reconstruction. In CS, a phase diagram is a convenient way to study and express certain theoretical relations between sparsity and sufficient sampling. We adapt phase-diagram analysis for empirical use in X-ray CT for which the same theoretical results do not hold. We demonstrate in three case studies the potential of phase-diagram analysis for providing quantitative answers to questions of undersampling. First, we demonstrate that there are cases where X-ray CT empirically performs comparably with a near-optimal CS strategy, namely taking measurements with Gaussian sensing matrices. Second, we show that, in contrast to what might have been anticipated, taking randomized CT measurements does not lead to improved performance compared with standard structured sampling patterns. Finally, we show preliminary results of how well phase-diagram analysis can predict the sufficient number of projections for accurately reconstructing a large-scale image of a given sparsity by means of total-variation regularization. PMID:25939620
How little data is enough? Phase-diagram analysis of sparsity-regularized X-ray computed tomography.
Jørgensen, J S; Sidky, E Y
2015-06-13
We introduce phase-diagram analysis, a standard tool in compressed sensing (CS), to the X-ray computed tomography (CT) community as a systematic method for determining how few projections suffice for accurate sparsity-regularized reconstruction. In CS, a phase diagram is a convenient way to study and express certain theoretical relations between sparsity and sufficient sampling. We adapt phase-diagram analysis for empirical use in X-ray CT for which the same theoretical results do not hold. We demonstrate in three case studies the potential of phase-diagram analysis for providing quantitative answers to questions of undersampling. First, we demonstrate that there are cases where X-ray CT empirically performs comparably with a near-optimal CS strategy, namely taking measurements with Gaussian sensing matrices. Second, we show that, in contrast to what might have been anticipated, taking randomized CT measurements does not lead to improved performance compared with standard structured sampling patterns. Finally, we show preliminary results of how well phase-diagram analysis can predict the sufficient number of projections for accurately reconstructing a large-scale image of a given sparsity by means of total-variation regularization.
Ozone and nitrogen dioxide above the northern Tien Shan
NASA Technical Reports Server (NTRS)
Arefev, Vladimir N.; Volkovitsky, Oleg A.; Kamenogradsky, Nikita E.; Semyonov, Vladimir K.; Sinyakov, Valery P.
1994-01-01
The results of systematic perennial measurements of the total ozone (since 1979) and nitrogen dioxide column (since 1983) in the atmosphere in the European-Asian continent center above the mountainmass of the Tien Shan are given. This region is distinguished by a great number of sunny days during a year. The observation station is at the Northern shore of Issyk Kul Lake (42.56 N 77.04 E 1650 m above the sea level). The measurement results are presented as the monthly averaged atmospheric total ozone and NO2 stratospheric column abundances (morning and evening). The peculiarities of seasonal variations of ozone and nitrogen dioxide atmospheric contents, their regular variances with a quasi-biennial cycles and trends have been noticed. Irregular variances of ozone and nitrogen dioxide atmospheric contents, i.e. their positive and negative anomalies in the monthly averaged contents relative to the perennial averaged monthly means, have been analyzed. The synchronous and opposite in phase anomalies in variations of ozone and nitrogen dioxide atmospheric contents were explained by the transport and zonal circulation in the stratosphere (Kamenogradsky et al., 1990).
Brenten, Thomas; Morris, Penelope J.; Salt, Carina; Raila, Jens; Kohn, Barbara; Schweigert, Florian J.; Zentek, Jürgen
2016-01-01
Breed, sex and age effects on haematological and biochemical variables were investigated in 24 labrador retriever and 25 miniature schnauzer dogs during the first year of life. Blood samples were taken regularly between weeks 8 and 52. White blood cell and red blood cell counts, haemoglobin concentration, haematocrit, mean cell volume, mean cell haemoglobin, mean cell haemoglobin concentration, platelet count as well as total protein, albumin, calcium, phosphate, alkaline phosphatase, alanine aminotransferase, aspartate aminotransferase, glutamate dehydrogenase, total cholesterol, triglycerides, creatine and urea were evaluated. For all haematological and biochemical parameters, there were significant effects of age on test results. Statistically significant effects for breed and the breed×age interaction on test results were observed for most of the parameters with the exception of haemoglobin. Variations in test results illustrate growth related alterations in body tissue and metabolism leading to dynamic and marked changes in haematological and biochemical parameters, which have to be considered for the interpretation of clinical data obtained from dogs in the first year of life. PMID:27252875
Brenten, Thomas; Morris, Penelope J; Salt, Carina; Raila, Jens; Kohn, Barbara; Schweigert, Florian J; Zentek, Jürgen
2016-01-01
Breed, sex and age effects on haematological and biochemical variables were investigated in 24 labrador retriever and 25 miniature schnauzer dogs during the first year of life. Blood samples were taken regularly between weeks 8 and 52. White blood cell and red blood cell counts, haemoglobin concentration, haematocrit, mean cell volume, mean cell haemoglobin, mean cell haemoglobin concentration, platelet count as well as total protein, albumin, calcium, phosphate, alkaline phosphatase, alanine aminotransferase, aspartate aminotransferase, glutamate dehydrogenase, total cholesterol, triglycerides, creatine and urea were evaluated. For all haematological and biochemical parameters, there were significant effects of age on test results. Statistically significant effects for breed and the breed×age interaction on test results were observed for most of the parameters with the exception of haemoglobin. Variations in test results illustrate growth related alterations in body tissue and metabolism leading to dynamic and marked changes in haematological and biochemical parameters, which have to be considered for the interpretation of clinical data obtained from dogs in the first year of life.
Miehe, C; Teichtmeister, S; Aldakheel, F
2016-04-28
This work outlines a novel variational-based theory for the phase-field modelling of ductile fracture in elastic-plastic solids undergoing large strains. The phase-field approach regularizes sharp crack surfaces within a pure continuum setting by a specific gradient damage modelling. It is linked to a formulation of gradient plasticity at finite strains. The framework includes two independent length scales which regularize both the plastic response as well as the crack discontinuities. This ensures that the damage zones of ductile fracture are inside of plastic zones, and guarantees on the computational side a mesh objectivity in post-critical ranges. © 2016 The Author(s).
On epicardial potential reconstruction using regularization schemes with the L1-norm data term.
Shou, Guofa; Xia, Ling; Liu, Feng; Jiang, Mingfeng; Crozier, Stuart
2011-01-07
The electrocardiographic (ECG) inverse problem is ill-posed and usually solved by regularization schemes. These regularization methods, such as the Tikhonov method, are often based on the L2-norm data and constraint terms. However, L2-norm-based methods inherently provide smoothed inverse solutions that are sensitive to measurement errors, and also lack the capability of localizing and distinguishing multiple proximal cardiac electrical sources. This paper presents alternative regularization schemes employing the L1-norm data term for the reconstruction of epicardial potentials (EPs) from measured body surface potentials (BSPs). During numerical implementation, the iteratively reweighted norm algorithm was applied to solve the L1-norm-related schemes, and measurement noises were considered in the BSP data. The proposed L1-norm data term-based regularization schemes (with L1 and L2 penalty terms of the normal derivative constraint (labelled as L1TV and L1L2)) were compared with the L2-norm data terms (Tikhonov with zero-order and normal derivative constraints, labelled as ZOT and FOT, and the total variation method labelled as L2TV). The studies demonstrated that, with averaged measurement noise, the inverse solutions provided by the L1L2 and FOT algorithms have less relative error values. However, when larger noise occurred in some electrodes (for example, signal lost during measurement), the L1TV and L1L2 methods can obtain more accurate EPs in a robust manner. Therefore the L1-norm data term-based solutions are generally less perturbed by measurement noises, suggesting that the new regularization scheme is promising for providing practical ECG inverse solutions.
Accurate mask-based spatially regularized correlation filter for visual tracking
NASA Astrophysics Data System (ADS)
Gu, Xiaodong; Xu, Xinping
2017-01-01
Recently, discriminative correlation filter (DCF)-based trackers have achieved extremely successful results in many competitions and benchmarks. These methods utilize a periodic assumption of the training samples to efficiently learn a classifier. However, this assumption will produce unwanted boundary effects, which severely degrade the tracking performance. Correlation filters with limited boundaries and spatially regularized DCFs were proposed to reduce boundary effects. However, their methods used the fixed mask or predesigned weights function, respectively, which was unsuitable for large appearance variation. We propose an accurate mask-based spatially regularized correlation filter for visual tracking. Our augmented objective can reduce the boundary effect even in large appearance variation. In our algorithm, the masking matrix is converted into the regularized function that acts on the correlation filter in frequency domain, which makes the algorithm fast convergence. Our online tracking algorithm performs favorably against state-of-the-art trackers on OTB-2015 Benchmark in terms of efficiency, accuracy, and robustness.
NASA Astrophysics Data System (ADS)
Schuster, Thomas; Hofmann, Bernd; Kaltenbacher, Barbara
2012-10-01
Inverse problems can usually be modelled as operator equations in infinite-dimensional spaces with a forward operator acting between Hilbert or Banach spaces—a formulation which quite often also serves as the basis for defining and analyzing solution methods. The additional amount of structure and geometric interpretability provided by the concept of an inner product has rendered these methods amenable to a convergence analysis, a fact which has led to a rigorous and comprehensive study of regularization methods in Hilbert spaces over the last three decades. However, for numerous problems such as x-ray diffractometry, certain inverse scattering problems and a number of parameter identification problems in PDEs, the reasons for using a Hilbert space setting seem to be based on conventions rather than an appropriate and realistic model choice, so often a Banach space setting would be closer to reality. Furthermore, non-Hilbertian regularization and data fidelity terms incorporating a priori information on solution and noise, such as general Lp-norms, TV-type norms, or the Kullback-Leibler divergence, have recently become very popular. These facts have motivated intensive investigations on regularization methods in Banach spaces, a topic which has emerged as a highly active research field within the area of inverse problems. Meanwhile some of the most well-known regularization approaches, such as Tikhonov-type methods requiring the solution of extremal problems, and iterative ones like the Landweber method, the Gauss-Newton method, as well as the approximate inverse method, have been investigated for linear and nonlinear operator equations in Banach spaces. Convergence with rates has been proven and conditions on the solution smoothness and on the structure of nonlinearity have been formulated. Still, beyond the existing results a large number of challenging open questions have arisen, due to the more involved handling of general Banach spaces and the larger variety of concrete instances with special properties. The aim of this special section is to provide a forum for highly topical ongoing work in the area of regularization in Banach spaces, its numerics and its applications. Indeed, we have been lucky enough to obtain a number of excellent papers both from colleagues who have previously been contributing to this topic and from researchers entering the field due to its relevance in practical inverse problems. We would like to thank all contributers for enabling us to present a high quality collection of papers on topics ranging from various aspects of regularization via efficient numerical solution to applications in PDE models. We give a brief overview of the contributions included in this issue (here ordered alphabetically by first author). In their paper, Iterative regularization with general penalty term—theory and application to L1 and TV regularization, Radu Bot and Torsten Hein provide an extension of the Landweber iteration for linear operator equations in Banach space to general operators in place of the inverse duality mapping, which corresponds to the use of general regularization functionals in variational regularization. The L∞ topology in data space corresponds to the frequently occuring situation of uniformly distributed data noise. A numerically efficient solution of the resulting Tikhonov regularization problem via a Moreau-Yosida appriximation and a semismooth Newton method, along with a δ-free regularization parameter choice rule, is the topic of the paper L∞ fitting for inverse problems with uniform noise by Christian Clason. Extension of convergence rates results from classical source conditions to their generalization via variational inequalities with a priori and a posteriori stopping rules is the main contribution of the paper Regularization of linear ill-posed problems by the augmented Lagrangian method and variational inequalities by Klaus Frick and Markus Grasmair, again in the context of some iterative method. A powerful tool for proving convergence rates of Tikhonov type but also other regularization methods in Banach spaces are assumptions of the type of variational inequalities that combine conditions on solution smoothness (i.e., source conditions in the Hilbert space case) and nonlinearity of the forward operator. In Parameter choice in Banach space regularization under variational inequalities, Bernd Hofmann and Peter Mathé provide results with general error measures and especially study the question of regularization parameter choice. Daijun Jiang, Hui Feng, and Jun Zou consider an application of Banach space ideas in the context of an application problem in their paper Convergence rates of Tikhonov regularizations for parameter identifiation in a parabolic-elliptic system, namely the identification of a distributed diffusion coefficient in a coupled elliptic-parabolic system. In particular, they show convergence rates of Lp-H1 (variational) regularization for the application under consideration via the use and verification of certain source and nonlinearity conditions. In computational practice, the Lp norm with p close to one is often used as a substitute for the actually sparsity promoting L1 norm. In Norm sensitivity of sparsity regularization with respect to p, Kamil S Kazimierski, Peter Maass and Robin Strehlow consider the question of how sensitive the Tikhonov regularized solution is with respect to p. They do so by computing the derivative via the implicit function theorem, particularly at the crucial value, p=1. Another iterative regularization method in Banach space is considered by Qinian Jin and Linda Stals in Nonstationary iterated Tikhonov regularization for ill-posed problems in Banach spaces. Using a variational formulation and under some smoothness and convexity assumption on the preimage space, they extend the convergence analysis of the well-known iterative Tikhonov method for linear problems in Hilbert space to a more general Banach space framework. Systems of linear or nonlinear operators can be efficiently treated by cyclic iterations, thus several variants of gradient and Newton-type Kaczmarz methods have already been studied in the Hilbert space setting. Antonio Leitão and M Marques Alves in their paper On Landweber---Kaczmarz methods for regularizing systems of ill-posed equations in Banach spaces carry out an extension to Banach spaces for the fundamental Landweber version. The impact of perturbations in the evaluation of the forward operator and its derivative on the convergence behaviour of regularization methods is a practically and highly relevant issue. It is treated in the paper Convergence rates analysis of Tikhonov regularization for nonlinear ill-posed problems with noisy operators by Shuai Lu and Jens Flemming for variational regularization of nonlinear problems in Banach spaces. In The approximate inverse in action: IV. Semi-discrete equations in a Banach space setting, Thomas Schuster, Andreas Rieder and Frank Schöpfer extend the concept of approximate inverse to the practically and highly relevant situation of finitely many measurements and a general smooth and convex Banach space as preimage space. They devise two approaches for computing the reconstruction kernels required in the method and provide convergence and regularization results. Frank Werner and Thorsten Hohage in Convergence rates in expectation for Tikhonov-type regularization of inverse problems with Poisson data prove convergence rates results for variational regularization with general convex regularization term and the Kullback-Leibler distance as data fidelity term by combining a new result on Poisson distributed data with a deterministic rates analysis. Finally, we would like to thank the Inverse Problems team, especially Joanna Evangelides and Chris Wileman, for their extraordinary smooth and productive cooperation, as well as Alfred K Louis for his kind support of our initiative.
Binsinger, Caroline; Laure, Patrick; Ambard, Marie-France
2006-01-01
Physical activity is often presented as an effective tool to improve self-esteem and/or to reduce anxiety. The aim of this study was to measure the influence of a regular extra curricular sports practice on self-esteem and anxiety. We conducted a prospective cohort study, which has included all of the pupils entering the first year of secondary school (sixth grade) in the Vosges Department (east France) during the school year 2001-2002 and followed during three years. Data were collected every six months by self-reported questionnaires. 1791 pupils were present at each of the six data collection sessions and completed all the questionnaires, representing 10,746 documents: 835 boys (46.6 %) and 956 girls (53.4 %), in November 2001, the average age was 11.1 ± 0.5 years (mean ± standard deviation). 722 pupils (40.3 %) reported that they had practiced an extra-school physical activity in a sporting association from November 2001 to May 2004 (ECS group), whereas, 195 (10.9 %) pupils had not practiced any extra-school physical activity at all (NECS group). The average global scores of self-esteem (Rosenberg’s Scale) and trait anxiety (Spielberger’s Scale) of the ECS pupils were, respectively, higher and lower than those of the NECS group. However, the incidence density (number of new cases during a given period / total person-time of observation) of moderate or severe decrease of self-esteem (less than “mean - one standard deviation ”or less than “mean - two standard deviations”) was not significantly different between the two groups, a finding that was also evident also in the case of trait anxiety. Finally, among ECS pupils, the incidence density of severe decrease of self-esteem was lower at the girls’. Practitioners and physical education teachers, as well as parents, should be encouraged to seek out ways to involve pupils in extra-school physical activities. Key Points A regular extra-curricular sports practice is associated to better levels of self-esteem and trait anxiety among young adolescent. This activity seems to protect girls from severe variations of self-esteem. Boys do not seem to be protected from moderate or severe variations, neither of self-esteem, nor of trait anxiety, by a regular extracurricular sport practice. PMID:24198689
Binsinger, Caroline; Laure, Patrick; Ambard, Marie-France
2006-01-01
Physical activity is often presented as an effective tool to improve self-esteem and/or to reduce anxiety. The aim of this study was to measure the influence of a regular extra curricular sports practice on self-esteem and anxiety. We conducted a prospective cohort study, which has included all of the pupils entering the first year of secondary school (sixth grade) in the Vosges Department (east France) during the school year 2001-2002 and followed during three years. Data were collected every six months by self-reported questionnaires. 1791 pupils were present at each of the six data collection sessions and completed all the questionnaires, representing 10,746 documents: 835 boys (46.6 %) and 956 girls (53.4 %), in November 2001, the average age was 11.1 ± 0.5 years (mean ± standard deviation). 722 pupils (40.3 %) reported that they had practiced an extra-school physical activity in a sporting association from November 2001 to May 2004 (ECS group), whereas, 195 (10.9 %) pupils had not practiced any extra-school physical activity at all (NECS group). The average global scores of self-esteem (Rosenberg's Scale) and trait anxiety (Spielberger's Scale) of the ECS pupils were, respectively, higher and lower than those of the NECS group. However, the incidence density (number of new cases during a given period / total person-time of observation) of moderate or severe decrease of self-esteem (less than "mean - one standard deviation "or less than "mean - two standard deviations") was not significantly different between the two groups, a finding that was also evident also in the case of trait anxiety. Finally, among ECS pupils, the incidence density of severe decrease of self-esteem was lower at the girls'. Practitioners and physical education teachers, as well as parents, should be encouraged to seek out ways to involve pupils in extra-school physical activities. Key PointsA regular extra-curricular sports practice is associated to better levels of self-esteem and trait anxiety among young adolescent.This activity seems to protect girls from severe variations of self-esteem.Boys do not seem to be protected from moderate or severe variations, neither of self-esteem, nor of trait anxiety, by a regular extracurricular sport practice.
Mass change distribution inverted from space-borne gravimetric data using a Monte Carlo method
NASA Astrophysics Data System (ADS)
Zhou, X.; Sun, X.; Wu, Y.; Sun, W.
2017-12-01
Mass estimate plays a key role in using temporally satellite gravimetric data to quantify the terrestrial water storage change. GRACE (Gravity Recovery and Climate Experiment) only observes the low degree gravity field changes, which can be used to estimate the total surface density or equivalent water height (EWH) variation, with a limited spatial resolution of 300 km. There are several methods to estimate the mass variation in an arbitrary region, such as averaging kernel, forward modelling and mass concentration (mascon). Mascon method can isolate the local mass from the gravity change at a large scale through solving the observation equation (objective function) which represents the relationship between unknown masses and the measurements. To avoid the unreasonable local mass inverted from smoothed gravity change map, regularization has to be used in the inversion. We herein give a Markov chain Monte Carlo (MCMC) method to objectively determine the regularization parameter for the non-negative mass inversion problem. We first apply this approach to the mass inversion from synthetic data. Result show MCMC can effectively reproduce the local mass variation taking GRACE measurement error into consideration. We then use MCMC to estimate the ground water change rate of North China Plain from GRACE gravity change rate from 2003 to 2014 under a supposition of the continuous ground water loss in this region. Inversion result show that the ground water loss rate in North China Plain is 7.6±0.2Gt/yr during past 12 years which is coincident with that from previous researches.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dong, Xue; Niu, Tianye; Zhu, Lei, E-mail: leizhu@gatech.edu
2014-05-15
Purpose: Dual-energy CT (DECT) is being increasingly used for its capability of material decomposition and energy-selective imaging. A generic problem of DECT, however, is that the decomposition process is unstable in the sense that the relative magnitude of decomposed signals is reduced due to signal cancellation while the image noise is accumulating from the two CT images of independent scans. Direct image decomposition, therefore, leads to severe degradation of signal-to-noise ratio on the resultant images. Existing noise suppression techniques are typically implemented in DECT with the procedures of reconstruction and decomposition performed independently, which do not explore the statistical propertiesmore » of decomposed images during the reconstruction for noise reduction. In this work, the authors propose an iterative approach that combines the reconstruction and the signal decomposition procedures to minimize the DECT image noise without noticeable loss of resolution. Methods: The proposed algorithm is formulated as an optimization problem, which balances the data fidelity and total variation of decomposed images in one framework, and the decomposition step is carried out iteratively together with reconstruction. The noise in the CT images from the proposed algorithm becomes well correlated even though the noise of the raw projections is independent on the two CT scans. Due to this feature, the proposed algorithm avoids noise accumulation during the decomposition process. The authors evaluate the method performance on noise suppression and spatial resolution using phantom studies and compare the algorithm with conventional denoising approaches as well as combined iterative reconstruction methods with different forms of regularization. Results: On the Catphan©600 phantom, the proposed method outperforms the existing denoising methods on preserving spatial resolution at the same level of noise suppression, i.e., a reduction of noise standard deviation by one order of magnitude. This improvement is mainly attributed to the high noise correlation in the CT images reconstructed by the proposed algorithm. Iterative reconstruction using different regularization, including quadratic orq-generalized Gaussian Markov random field regularization, achieves similar noise suppression from high noise correlation. However, the proposed TV regularization obtains a better edge preserving performance. Studies of electron density measurement also show that our method reduces the average estimation error from 9.5% to 7.1%. On the anthropomorphic head phantom, the proposed method suppresses the noise standard deviation of the decomposed images by a factor of ∼14 without blurring the fine structures in the sinus area. Conclusions: The authors propose a practical method for DECT imaging reconstruction, which combines the image reconstruction and material decomposition into one optimization framework. Compared to the existing approaches, our method achieves a superior performance on DECT imaging with respect to decomposition accuracy, noise reduction, and spatial resolution.« less
[The relations of corneal, lenticular and total astigmatism].
Liang, D; Guan, Z; Lin, J
1995-06-01
To determine the relations of corneal, lenticular and total astigmatism and the changes of the astigmatism with age. Out-patients with refractive errors were refracted with retinoscope after using cycloplegic drops and measured the radii of anterior corneal curvature. One hundred and ninety-four cases (382 eyes) with refractive errors were studied. Of the eyes 67.9% had regular corneal astigmatism, 68.1% irregular lenticular astigmatism and 60.7% regular total astigmatism, 88.5% of the corneal astigmatism has the same quality as the total astigmatism. The total astigmatism in 46% of the eyes included the summation of corneal and lenticular astigmatism, but in 41.3% of the eyes irregular lenticular astigmatism corrected the regular corneal astigmatism. The astigmatism of cornea, lens and total astigmatism changed from regular to irregular with the increase of age. The linear correlation analysis showed a positive correlation between the power of horizontal corneal refraction and age, and a negative corrlation between the power of vertical corneal refraction and age. The shape of cornea was the major cause of total astigmatism. The influence of lens on the total astigmatism was different. The reasons for the change of the total astigmatism from regular to irregular with the increase of age were the changes of the power of corneal refraction, particularly the increase of the power of horizontal corneal refraction and lenticular irregular astigmatism.
NASA Astrophysics Data System (ADS)
Wu, Bitao; Wu, Gang; Yang, Caiqian; He, Yi
2018-05-01
A novel damage identification method for concrete continuous girder bridges based on spatially-distributed long-gauge strain sensing is presented in this paper. First, the variation regularity of the long-gauge strain influence line of continuous girder bridges which changes with the location of vehicles on the bridge is studied. According to this variation regularity, a calculation method for the distribution regularity of the area of long-gauge strain history is investigated. Second, a numerical simulation of damage identification based on the distribution regularity of the area of long-gauge strain history is conducted, and the results indicate that this method is effective for identifying damage and is not affected by the speed, axle number and weight of vehicles. Finally, a real bridge test on a highway is conducted, and the experimental results also show that this method is very effective for identifying damage in continuous girder bridges, and the local element stiffness distribution regularity can be revealed at the same time. This identified information is useful for maintaining of continuous girder bridges on highways.
Optimization of equivalent uniform dose using the L-curve criterion.
Chvetsov, Alexei V; Dempsey, James F; Palta, Jatinder R
2007-10-07
Optimization of equivalent uniform dose (EUD) in inverse planning for intensity-modulated radiation therapy (IMRT) prevents variation in radiobiological effect between different radiotherapy treatment plans, which is due to variation in the pattern of dose nonuniformity. For instance, the survival fraction of clonogens would be consistent with the prescription when the optimized EUD is equal to the prescribed EUD. One of the problems in the practical implementation of this approach is that the spatial dose distribution in EUD-based inverse planning would be underdetermined because an unlimited number of nonuniform dose distributions can be computed for a prescribed value of EUD. Together with ill-posedness of the underlying integral equation, this may significantly increase the dose nonuniformity. To optimize EUD and keep dose nonuniformity within reasonable limits, we implemented into an EUD-based objective function an additional criterion which ensures the smoothness of beam intensity functions. This approach is similar to the variational regularization technique which was previously studied for the dose-based least-squares optimization. We show that the variational regularization together with the L-curve criterion for the regularization parameter can significantly reduce dose nonuniformity in EUD-based inverse planning.
Regularizing Unpredictable Variation: Evidence from a Natural Language Setting
ERIC Educational Resources Information Center
Hendricks, Alison Eisel; Miller, Karen; Jackson, Carrie N.
2018-01-01
While previous sociolinguistic research has demonstrated that children faithfully acquire probabilistic input constrained by sociolinguistic and linguistic factors (e.g., gender and socioeconomic status), research suggests children regularize inconsistent input-probabilistic input that is not sociolinguistically constrained (e.g., Hudson Kam &…
Total ozone variations at Reykjavik since 1957
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bjarnason, G.G.; Rognvaldsson, O.E.; Sigfusson, T.I.
1993-12-01
Total ozone measurements using a Dobson spectrophotometer have been performed on a regular basis at Reykjavik (65 deg 08 min N, 21 deg 54 min W), Iceland, since 1957. The data set for the entire period of observations has been critically examined. Due to problems related to the calibration of the instrument the data record of ozone observations is divided into two periods in the following analysis (1957-1977 and 1977-1990). A statistical model was developed to fit the data and estimate long-term changes in total ozone. The model includes seasonal variations, solar cycle influences, quasi-biennial oscillation (QBO) effects, and linearmore » trends. Some variants of the model are applied to investigate to what extent the estimated trends depend on the form of the model. Trend analysis of the revised data reveals a statistically significant linear decrease of 0.11 +/- 0.07% per year in the annual total ozone amount during the earlier period and 0.30 +/- 0.11% during the latter. The annual total ozone decline since 1977 is caused by a 0.47 +/- 0.14% decrease per year during the summer with no significant change during the winter or fall. On an annual basis, ozone varies by 3.5 +/- 0.8% over a solar cycle and by 2.1 +/- 0.6% over a QBO for the whole observation period. The effect of the 11-year solar cycle is particularly strong in the data during the early months of the year and in the westerly phase of the QBO. The data also suggest a strong response of total ozone to major solar proton events.« less
Grading of Total Mesorectal Excision Specimens: Assessment of Interrater Agreement.
Goebel, Emily A; Stegmaier, Melissa; Gorassini, Donald R; Kubica, Matthew; Parfitt, Jeremy R; Driman, David K
2018-06-01
Total mesorectal excision is the standard of care for patients with rectal cancer. Pathological evaluation of the quality of the total mesorectal excision specimen is an important prognostic factor that correlates with local recurrence, but is potentially subjective. This study aimed to determine the degree of variation in grading, both between assessors and between fresh and formalin-fixed specimens. Raters included surgeons, pathologists, pathology residents, pathologists' assistants, and pathologists' assistant trainees. Specimens were assessed by up to 6 raters in the fresh state and by 2 raters postfixation. Four parameters were evaluated: mesorectal bulk, surface regularity, defects, and coning. Interrater agreement was measured using ordinal α-values. The study was conducted at a single academic center. The primary outcome was agreement between individuals when grading total mesorectal excision specimens. A total of 37 total mesorectal excision specimens were assessed. Reliability between all raters for fresh specimens for mesorectal bulk, surface regularity, defects, coning, and overall grade were 0.85, 0.85, 0.92, 0.84, and 0.91. When compared with all raters, pathologists and residents had higher agreement and pathologists and surgeons had lower agreement. Ordinal α-values comparing pathologist and pathologist's assistant agreement for overall grade were similar pre- and postfixation (0.78 vs 0.80), but agreement for assessing defects decreased postfixation. Among pathologists' assistants, agreement was higher when grading specimens postfixation than when grading fresh specimens. Assessment bias may have occurred because of the greater number of pathologists' assistants participating than the number of residents and pathologists. The results indicate good interrater agreement for the assessment of overall grade, with defects showing the best interrater agreement in fresh specimens. Although total mesorectal excision specimens may be consistently graded postfixation, the assessment of defects postfixation may be less reliable. This study highlights the need for additional knowledge-transfer activities to ensure consistency and accurate grading of total mesorectal excision specimens. See Video Abstract at http://links.lww.com/DCR/A497.
Huang, Zhuo; Long, Hai; Wei, Yu-Ming; Yan, Ze-Hong; Zheng, You-Liang
2016-04-01
The α-gliadins account for 15-30 % of the total storage protein in wheat endosperm and play important roles in the dough extensibility and nutritional quality. On the other side, they act as a main source of toxic peptides triggering celiac disease. In this study, 37 α-gliadins were isolated from three species of Aegilops section Sitopsis. Sequence similarity and phylogenetic analyses revealed novel allelic variation at Gli-2 loci of species of Sitopsis and regular organization of motifs in their repetitive domain. Based on the comprehensive analyses of a large number of known sequences of bread wheat and its diploid genome progenitors, the distributions of four T cell epitopes and length variations of two polyglutamine domains are analyzed. Additionally, according to the organization of repeat motifs, we classified the α-gliadins of Triticum and Aegilops into eight types. Their most recent common ancestor and putative divergence patterns were further considered. This study provides new insights into the allelic variations of α-gliadins in Aegilops section Sitopsis, as well as evolution of α-gliadin multigene family among Triticum and Aegilops species.
Diversity of human lip prints: a collaborative study of ethnically distinct world populations.
Sharma, Namita Alok; Eldomiaty, Magda Ahmed; Gutiérrez-Redomero, Esperanza; George, Adekunle Olufemi; Garud, Rajendra Somnath; Sánchez-Andrés, Angeles; Almasry, Shaima Mohamed; Rivaldería, Noemí; Al-Gaidi, Sami Awda; Ilesanmi, Toyosi
2014-01-01
Cheiloscopy is a comparatively recent counterpart to the long established dactyloscopic studies. Ethnic variability of these lip groove patterns has not yet been explored. This study was a collaborative effort aimed at establishing cheiloscopic variations amongst modern human populations from four geographically and culturally far removed nations: India, Saudi Arabia, Spain and Nigeria. Lip prints from a total of 754 subjects were collected and each was divided into four equal quadrants. The patterns were classified into six regular types (A-F), while some patterns which could not be fitted into the regular ones were segregated into G groups (G-0, G-1, G-2). Furthermore, co-dominance of more than one pattern type in a single quadrant forced us to identify the combination (COM, G-COM) patterns. The remarkable feature noted after compilation of the data included pattern C (a bifurcate/branched prototype extending the entire height of the lip) being a frequent feature of the lips of all the populations studied, save for the Nigerian population in which it was completely absent and which showed a tendency for pattern A (a vertical linear groove) and a significantly higher susceptibility for combination (COM) patterns. Chi-square test and correspondence analysis applied to the frequency of patterns appearing in the defined topographical areas indicated a significant variation for the populations studied.
Linguistic Correlates of Social Differences in the Negro Community.
ERIC Educational Resources Information Center
Wolfram, Walter A.
The regularity with which much variation between forms, formerly dismissed as "free variation," can be accounted for on the basis of extra-linguistic and independent linguistic factors has made the concept of the linguistic variable an invaluable construct in the description of patterned speech variation. The linguistic variable, itself…
Wu, Chuan; Ye, Zhihong; Shu, Wensheng; Zhu, Yongguan; Wong, Minghung
2011-05-01
Root aeration, arsenic (As) accumulation, and speciation in rice of 20 different genotypes with regular irrigation of water containing 0.4 mg As l(-1) were investigated. Different genotypes had different root anatomy demonstrated by entire root porosity (ranging from 12.43% to 33.21%), which was significantly correlated with radial oxygen loss (ROL) (R=0.64, P<0.01). Arsenic accumulation differed between genotypes, but there were no significant differences between Indica and Japonica subspecies, as well as paddy and upland rice. Total ROL from entire roots was correlated with metal tolerance (expressed as percentage mean of control straw biomass, R=0.69, P<0.01) among the 20 genotypes; total As concentration (R=-0.67, P<0.01) and inorganic As concentration (R=-0.47, P<0.05) in rice grains of different genotypes were negatively correlated with ROL. There were also significant genotype effects in percentage inorganic As (F=15.8, P<0.001) and percentage cacodylic acid (F=22.1, P<0.001), respectively. Root aeration of different genotypes and variation of genotypes on As accumulation and speciation would be useful for selecting genotypes to grow in areas contaminated by As.
Blind motion image deblurring using nonconvex higher-order total variation model
NASA Astrophysics Data System (ADS)
Li, Weihong; Chen, Rui; Xu, Shangwen; Gong, Weiguo
2016-09-01
We propose a nonconvex higher-order total variation (TV) method for blind motion image deblurring. First, we introduce a nonconvex higher-order TV differential operator to define a new model of the blind motion image deblurring, which can effectively eliminate the staircase effect of the deblurred image; meanwhile, we employ an image sparse prior to improve the edge recovery quality. Second, to improve the accuracy of the estimated motion blur kernel, we use L1 norm and H1 norm as the blur kernel regularization term, considering the sparsity and smoothing of the motion blur kernel. Third, because it is difficult to solve the numerically computational complexity problem of the proposed model owing to the intrinsic nonconvexity, we propose a binary iterative strategy, which incorporates a reweighted minimization approximating scheme in the outer iteration, and a split Bregman algorithm in the inner iteration. And we also discuss the convergence of the proposed binary iterative strategy. Last, we conduct extensive experiments on both synthetic and real-world degraded images. The results demonstrate that the proposed method outperforms the previous representative methods in both quality of visual perception and quantitative measurement.
Cai, Ailong; Wang, Linyuan; Zhang, Hanming; Yan, Bin; Li, Lei; Xi, Xiaoqi; Li, Jianxin
2014-01-01
Linear scan computed tomography (CT) is a promising imaging configuration with high scanning efficiency while the data set is under-sampled and angularly limited for which high quality image reconstruction is challenging. In this work, an edge guided total variation minimization reconstruction (EGTVM) algorithm is developed in dealing with this problem. The proposed method is modeled on the combination of total variation (TV) regularization and iterative edge detection strategy. In the proposed method, the edge weights of intermediate reconstructions are incorporated into the TV objective function. The optimization is efficiently solved by applying alternating direction method of multipliers. A prudential and conservative edge detection strategy proposed in this paper can obtain the true edges while restricting the errors within an acceptable degree. Based on the comparison on both simulation studies and real CT data set reconstructions, EGTVM provides comparable or even better quality compared to the non-edge guided reconstruction and adaptive steepest descent-projection onto convex sets method. With the utilization of weighted alternating direction TV minimization and edge detection, EGTVM achieves fast and robust convergence and reconstructs high quality image when applied in linear scan CT with under-sampled data set.
NASA Astrophysics Data System (ADS)
Filatov, Michael; Cremer, Dieter
2005-02-01
The regular approximation to the normalized elimination of the small component (NESC) in the modified Dirac equation has been developed and presented in matrix form. The matrix form of the infinite-order regular approximation (IORA) expressions, obtained in [Filatov and Cremer, J. Chem. Phys. 118, 6741 (2003)] using the resolution of the identity, is the exact matrix representation and corresponds to the zeroth-order regular approximation to NESC (NESC-ZORA). Because IORA (=NESC-ZORA) is a variationally stable method, it was used as a suitable starting point for the development of the second-order regular approximation to NESC (NESC-SORA). As shown for hydrogenlike ions, NESC-SORA energies are closer to the exact Dirac energies than the energies from the fifth-order Douglas-Kroll approximation, which is much more computationally demanding than NESC-SORA. For the application of IORA (=NESC-ZORA) and NESC-SORA to many-electron systems, the number of the two-electron integrals that need to be evaluated (identical to the number of the two-electron integrals of a full Dirac-Hartree-Fock calculation) was drastically reduced by using the resolution of the identity technique. An approximation was derived, which requires only the two-electron integrals of a nonrelativistic calculation. The accuracy of this approach was demonstrated for heliumlike ions. The total energy based on the approximate integrals deviates from the energy calculated with the exact integrals by less than 5×10-9hartree units. NESC-ZORA and NESC-SORA can easily be implemented in any nonrelativistic quantum chemical program. Their application is comparable in cost with that of nonrelativistic methods. The methods can be run with density functional theory and any wave function method. NESC-SORA has the advantage that it does not imply a picture change.
SparseBeads data: benchmarking sparsity-regularized computed tomography
NASA Astrophysics Data System (ADS)
Jørgensen, Jakob S.; Coban, Sophia B.; Lionheart, William R. B.; McDonald, Samuel A.; Withers, Philip J.
2017-12-01
Sparsity regularization (SR) such as total variation (TV) minimization allows accurate image reconstruction in x-ray computed tomography (CT) from fewer projections than analytical methods. Exactly how few projections suffice and how this number may depend on the image remain poorly understood. Compressive sensing connects the critical number of projections to the image sparsity, but does not cover CT, however empirical results suggest a similar connection. The present work establishes for real CT data a connection between gradient sparsity and the sufficient number of projections for accurate TV-regularized reconstruction. A collection of 48 x-ray CT datasets called SparseBeads was designed for benchmarking SR reconstruction algorithms. Beadpacks comprising glass beads of five different sizes as well as mixtures were scanned in a micro-CT scanner to provide structured datasets with variable image sparsity levels, number of projections and noise levels to allow the systematic assessment of parameters affecting performance of SR reconstruction algorithms6. Using the SparseBeads data, TV-regularized reconstruction quality was assessed as a function of numbers of projections and gradient sparsity. The critical number of projections for satisfactory TV-regularized reconstruction increased almost linearly with the gradient sparsity. This establishes a quantitative guideline from which one may predict how few projections to acquire based on expected sample sparsity level as an aid in planning of dose- or time-critical experiments. The results are expected to hold for samples of similar characteristics, i.e. consisting of few, distinct phases with relatively simple structure. Such cases are plentiful in porous media, composite materials, foams, as well as non-destructive testing and metrology. For samples of other characteristics the proposed methodology may be used to investigate similar relations.
NASA Astrophysics Data System (ADS)
Alves, Claudianor O.; Miyagaki, Olímpio H.
2017-08-01
In this paper, we establish some results concerning the existence, regularity, and concentration phenomenon of nontrivial solitary waves for a class of generalized variable coefficient Kadomtsev-Petviashvili equation. Variational methods are used to get an existence result, as well as, to study the concentration phenomenon, while the regularity is more delicate because we are leading with functions in an anisotropic Sobolev space.
Genetic variations in taste perception modify alcohol drinking behavior in Koreans.
Choi, Jeong-Hwa; Lee, Jeonghee; Yang, Sarah; Kim, Jeongseon
2017-06-01
The sensory components of alcohol affect the onset of individual's drinking. Therefore, variations in taste receptor genes may lead to differential sensitivity for alcohol taste, which may modify an individual's drinking behavior. This study examined the influence of genetic variants in the taste-sensing mechanism on alcohol drinking behavior and the choice of alcoholic beverages. A total of 1829 Koreans were analyzed for their alcohol drinking status (drinker/non-drinker), total alcohol consumption (g/day), heavy drinking (≥30 g/day) and type of regularly consumed alcoholic beverages. Twenty-one genetic variations in bitterness, sweetness, umami and fatty acid sensing were also genotyped. Our findings suggested that multiple genetic variants modified individuals' alcohol drinking behavior. Genetic variations in the T2R bitterness receptor family were associated with overall drinking behavior. Subjects with the TAS2R38 AVI haplotype were less likely to be a drinker [odds ratio (OR): 0.75, 95% confidence interval (CI): 0.59-0.95], and TAS2R5 rs2227264 predicted the level of total alcohol consumption (p = 0.01). In contrast, the T1R sweet and umami receptor family was associated with heavy drinking. TAS1R3 rs307355 CT carriers were more likely to be heavy drinkers (OR: 1.53, 95% CI: 1.06-2.19). The genetic variants were also associated with the choice of alcoholic beverages. The homo-recessive type of TAS2R4 rs2233998 (OR: 1.62, 95% CI: 1.11-2.37) and TAS2R5 rs2227264 (OR: 1.72, 95% CI: 1.14-2.58) were associated with consumption of rice wine. However, TAS1R2 rs35874116 was associated with wine drinking (OR: 0.65, 95% CI: 0.43-0.98) and the consumption level (p = 0.04). These findings suggest that multiple genetic variations in taste receptors influence drinking behavior in Koreans. Genetic variations are also responsible for the preference of particular alcoholic beverages, which may contribute to an individual's alcohol drinking behavior. Copyright © 2017 Elsevier Ltd. All rights reserved.
Accelerated high-resolution photoacoustic tomography via compressed sensing
NASA Astrophysics Data System (ADS)
Arridge, Simon; Beard, Paul; Betcke, Marta; Cox, Ben; Huynh, Nam; Lucka, Felix; Ogunlade, Olumide; Zhang, Edward
2016-12-01
Current 3D photoacoustic tomography (PAT) systems offer either high image quality or high frame rates but are not able to deliver high spatial and temporal resolution simultaneously, which limits their ability to image dynamic processes in living tissue (4D PAT). A particular example is the planar Fabry-Pérot (FP) photoacoustic scanner, which yields high-resolution 3D images but takes several minutes to sequentially map the incident photoacoustic field on the 2D sensor plane, point-by-point. However, as the spatio-temporal complexity of many absorbing tissue structures is rather low, the data recorded in such a conventional, regularly sampled fashion is often highly redundant. We demonstrate that combining model-based, variational image reconstruction methods using spatial sparsity constraints with the development of novel PAT acquisition systems capable of sub-sampling the acoustic wave field can dramatically increase the acquisition speed while maintaining a good spatial resolution: first, we describe and model two general spatial sub-sampling schemes. Then, we discuss how to implement them using the FP interferometer and demonstrate the potential of these novel compressed sensing PAT devices through simulated data from a realistic numerical phantom and through measured data from a dynamic experimental phantom as well as from in vivo experiments. Our results show that images with good spatial resolution and contrast can be obtained from highly sub-sampled PAT data if variational image reconstruction techniques that describe the tissues structures with suitable sparsity-constraints are used. In particular, we examine the use of total variation (TV) regularization enhanced by Bregman iterations. These novel reconstruction strategies offer new opportunities to dramatically increase the acquisition speed of photoacoustic scanners that employ point-by-point sequential scanning as well as reducing the channel count of parallelized schemes that use detector arrays.
Towards adjoint-based inversion for rheological parameters in nonlinear viscous mantle flow
NASA Astrophysics Data System (ADS)
Worthen, Jennifer; Stadler, Georg; Petra, Noemi; Gurnis, Michael; Ghattas, Omar
2014-09-01
We address the problem of inferring mantle rheological parameter fields from surface velocity observations and instantaneous nonlinear mantle flow models. We formulate this inverse problem as an infinite-dimensional nonlinear least squares optimization problem governed by nonlinear Stokes equations. We provide expressions for the gradient of the cost functional of this optimization problem with respect to two spatially-varying rheological parameter fields: the viscosity prefactor and the exponent of the second invariant of the strain rate tensor. Adjoint (linearized) Stokes equations, which are characterized by a 4th order anisotropic viscosity tensor, facilitates efficient computation of the gradient. A quasi-Newton method for the solution of this optimization problem is presented, which requires the repeated solution of both nonlinear forward Stokes and linearized adjoint Stokes equations. For the solution of the nonlinear Stokes equations, we find that Newton’s method is significantly more efficient than a Picard fixed point method. Spectral analysis of the inverse operator given by the Hessian of the optimization problem reveals that the numerical eigenvalues collapse rapidly to zero, suggesting a high degree of ill-posedness of the inverse problem. To overcome this ill-posedness, we employ Tikhonov regularization (favoring smooth parameter fields) or total variation (TV) regularization (favoring piecewise-smooth parameter fields). Solution of two- and three-dimensional finite element-based model inverse problems show that a constant parameter in the constitutive law can be recovered well from surface velocity observations. Inverting for a spatially-varying parameter field leads to its reasonable recovery, in particular close to the surface. When inferring two spatially varying parameter fields, only an effective viscosity field and the total viscous dissipation are recoverable. Finally, a model of a subducting plate shows that a localized weak zone at the plate boundary can be partially recovered, especially with TV regularization.
Statistical Interior Tomography
Xu, Qiong; Wang, Ge; Sieren, Jered; Hoffman, Eric A.
2011-01-01
This paper presents a statistical interior tomography (SIT) approach making use of compressed sensing (CS) theory. With the projection data modeled by the Poisson distribution, an objective function with a total variation (TV) regularization term is formulated in the maximization of a posteriori (MAP) framework to solve the interior problem. An alternating minimization method is used to optimize the objective function with an initial image from the direct inversion of the truncated Hilbert transform. The proposed SIT approach is extensively evaluated with both numerical and real datasets. The results demonstrate that SIT is robust with respect to data noise and down-sampling, and has better resolution and less bias than its deterministic counterpart in the case of low count data. PMID:21233044
Empirical correction for earth sensor horizon radiance variation
NASA Technical Reports Server (NTRS)
Hashmall, Joseph A.; Sedlak, Joseph; Andrews, Daniel; Luquette, Richard
1998-01-01
A major limitation on the use of infrared horizon sensors for attitude determination is the variability of the height of the infrared Earth horizon. This variation includes a climatological component and a stochastic component of approximately equal importance. The climatological component shows regular variation with season and latitude. Models based on historical measurements have been used to compensate for these systematic changes. The stochastic component is analogous to tropospheric weather. It can cause extreme, localized changes that for a period of days, overwhelm the climatological variation. An algorithm has been developed to compensate partially for the climatological variation of horizon height and at least to mitigate the stochastic variation. This method uses attitude and horizon sensor data from spacecraft to update a horizon height history as a function of latitude. For spacecraft that depend on horizon sensors for their attitudes (such as the Total Ozone Mapping Spectrometer-Earth Probe-TOMS-EP) a batch least squares attitude determination system is used. It is assumed that minimizing the average sensor residual throughout a full orbit of data results in attitudes that are nearly independent of local horizon height variations. The method depends on the additional assumption that the mean horizon height over all latitudes is approximately independent of season. Using these assumptions, the method yields the latitude dependent portion of local horizon height variations. This paper describes the algorithm used to generate an empirical horizon height. Ideally, an international horizon height database could be established that would rapidly merge data from various spacecraft to provide timely corrections that could be used by all.
Zhong, Chen; Batty, Michael; Manley, Ed; Wang, Jiaqiu; Wang, Zijia; Chen, Feng; Schmitt, Gerhard
2016-01-01
To discover regularities in human mobility is of fundamental importance to our understanding of urban dynamics, and essential to city and transport planning, urban management and policymaking. Previous research has revealed universal regularities at mainly aggregated spatio-temporal scales but when we zoom into finer scales, considerable heterogeneity and diversity is observed instead. The fundamental question we address in this paper is at what scales are the regularities we detect stable, explicable, and sustainable. This paper thus proposes a basic measure of variability to assess the stability of such regularities focusing mainly on changes over a range of temporal scales. We demonstrate this by comparing regularities in the urban mobility patterns in three world cities, namely London, Singapore and Beijing using one-week of smart-card data. The results show that variations in regularity scale as non-linear functions of the temporal resolution, which we measure over a scale from 1 minute to 24 hours thus reflecting the diurnal cycle of human mobility. A particularly dramatic increase in variability occurs up to the temporal scale of about 15 minutes in all three cities and this implies that limits exist when we look forward or backward with respect to making short-term predictions. The degree of regularity varies in fact from city to city with Beijing and Singapore showing higher regularity in comparison to London across all temporal scales. A detailed discussion is provided, which relates the analysis to various characteristics of the three cities. In summary, this work contributes to a deeper understanding of regularities in patterns of transit use from variations in volumes of travellers entering subway stations, it establishes a generic analytical framework for comparative studies using urban mobility data, and it provides key points for the management of variability by policy-makers intent on for making the travel experience more amenable. PMID:26872333
Zhong, Chen; Batty, Michael; Manley, Ed; Wang, Jiaqiu; Wang, Zijia; Chen, Feng; Schmitt, Gerhard
2016-01-01
To discover regularities in human mobility is of fundamental importance to our understanding of urban dynamics, and essential to city and transport planning, urban management and policymaking. Previous research has revealed universal regularities at mainly aggregated spatio-temporal scales but when we zoom into finer scales, considerable heterogeneity and diversity is observed instead. The fundamental question we address in this paper is at what scales are the regularities we detect stable, explicable, and sustainable. This paper thus proposes a basic measure of variability to assess the stability of such regularities focusing mainly on changes over a range of temporal scales. We demonstrate this by comparing regularities in the urban mobility patterns in three world cities, namely London, Singapore and Beijing using one-week of smart-card data. The results show that variations in regularity scale as non-linear functions of the temporal resolution, which we measure over a scale from 1 minute to 24 hours thus reflecting the diurnal cycle of human mobility. A particularly dramatic increase in variability occurs up to the temporal scale of about 15 minutes in all three cities and this implies that limits exist when we look forward or backward with respect to making short-term predictions. The degree of regularity varies in fact from city to city with Beijing and Singapore showing higher regularity in comparison to London across all temporal scales. A detailed discussion is provided, which relates the analysis to various characteristics of the three cities. In summary, this work contributes to a deeper understanding of regularities in patterns of transit use from variations in volumes of travellers entering subway stations, it establishes a generic analytical framework for comparative studies using urban mobility data, and it provides key points for the management of variability by policy-makers intent on for making the travel experience more amenable.
Phantom experiments using soft-prior regularization EIT for breast cancer imaging.
Murphy, Ethan K; Mahara, Aditya; Wu, Xiaotian; Halter, Ryan J
2017-06-01
A soft-prior regularization (SR) electrical impedance tomography (EIT) technique for breast cancer imaging is described, which shows an ability to accurately reconstruct tumor/inclusion conductivity values within a dense breast model investigated using a cylindrical and a breast-shaped tank. The SR-EIT method relies on knowing the spatial location of a suspicious lesion initially detected from a second imaging modality. Standard approaches (using Laplace smoothing and total variation regularization) without prior structural information are unable to accurately reconstruct or detect the tumors. The soft-prior approach represents a very significant improvement to these standard approaches, and has the potential to improve conventional imaging techniques, such as automated whole breast ultrasound (AWB-US), by providing electrical property information of suspicious lesions to improve AWB-US's ability to discriminate benign from cancerous lesions. Specifically, the best soft-regularization technique found average absolute tumor/inclusion errors of 0.015 S m -1 for the cylindrical test and 0.055 S m -1 and 0.080 S m -1 for the breast-shaped tank for 1.8 cm and 2.5 cm inclusions, respectively. The standard approaches were statistically unable to distinguish the tumor from the mammary gland tissue. An analysis of false tumors (benign suspicious lesions) provides extra insight into the potential and challenges EIT has for providing clinically relevant information. The ability to obtain accurate conductivity values of a suspicious lesion (>1.8 cm) detected from another modality (e.g. AWB-US) could significantly reduce false positives and result in a clinically important technology.
GPU-accelerated regularized iterative reconstruction for few-view cone beam CT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Matenine, Dmitri, E-mail: dmitri.matenine.1@ulaval.ca; Goussard, Yves, E-mail: yves.goussard@polymtl.ca; Després, Philippe, E-mail: philippe.despres@phy.ulaval.ca
2015-04-15
Purpose: The present work proposes an iterative reconstruction technique designed for x-ray transmission computed tomography (CT). The main objective is to provide a model-based solution to the cone-beam CT reconstruction problem, yielding accurate low-dose images via few-views acquisitions in clinically acceptable time frames. Methods: The proposed technique combines a modified ordered subsets convex (OSC) algorithm and the total variation minimization (TV) regularization technique and is called OSC-TV. The number of subsets of each OSC iteration follows a reduction pattern in order to ensure the best performance of the regularization method. Considering the high computational cost of the algorithm, it ismore » implemented on a graphics processing unit, using parallelization to accelerate computations. Results: The reconstructions were performed on computer-simulated as well as human pelvic cone-beam CT projection data and image quality was assessed. In terms of convergence and image quality, OSC-TV performs well in reconstruction of low-dose cone-beam CT data obtained via a few-view acquisition protocol. It compares favorably to the few-view TV-regularized projections onto convex sets (POCS-TV) algorithm. It also appears to be a viable alternative to full-dataset filtered backprojection. Execution times are of 1–2 min and are compatible with the typical clinical workflow for nonreal-time applications. Conclusions: Considering the image quality and execution times, this method may be useful for reconstruction of low-dose clinical acquisitions. It may be of particular benefit to patients who undergo multiple acquisitions by reducing the overall imaging radiation dose and associated risks.« less
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, Ho; Xing Lei; Lee, Rena
2012-05-15
Purpose: X-ray scatter incurred to detectors degrades the quality of cone-beam computed tomography (CBCT) and represents a problem in volumetric image guided and adaptive radiation therapy. Several methods using a beam blocker for the estimation and subtraction of scatter have been proposed. However, due to missing information resulting from the obstruction of the blocker, such methods require dual scanning or dynamically moving blocker to obtain a complete volumetric image. Here, we propose a half beam blocker-based approach, in conjunction with a total variation (TV) regularized Feldkamp-Davis-Kress (FDK) algorithm, to correct scatter-induced artifacts by simultaneously acquiring image and scatter information frommore » a single-rotation CBCT scan. Methods: A half beam blocker, comprising lead strips, is used to simultaneously acquire image data on one side of the projection data and scatter data on the other half side. One-dimensional cubic B-Spline interpolation/extrapolation is applied to derive patient specific scatter information by using the scatter distributions on strips. The estimated scatter is subtracted from the projection image acquired at the opposite view. With scatter-corrected projections where this subtraction is completed, the FDK algorithm based on a cosine weighting function is performed to reconstruct CBCT volume. To suppress the noise in the reconstructed CBCT images produced by geometric errors between two opposed projections and interpolated scatter information, total variation regularization is applied by a minimization using a steepest gradient descent optimization method. The experimental studies using Catphan504 and anthropomorphic phantoms were carried out to evaluate the performance of the proposed scheme. Results: The scatter-induced shading artifacts were markedly suppressed in CBCT using the proposed scheme. Compared with CBCT without a blocker, the nonuniformity value was reduced from 39.3% to 3.1%. The root mean square error relative to values inside the regions of interest selected from a benchmark scatter free image was reduced from 50 to 11.3. The TV regularization also led to a better contrast-to-noise ratio. Conclusions: An asymmetric half beam blocker-based FDK acquisition and reconstruction technique has been established. The proposed scheme enables simultaneous detection of patient specific scatter and complete volumetric CBCT reconstruction without additional requirements such as prior images, dual scans, or moving strips.« less
A New Variational Approach for Multiplicative Noise and Blur Removal
Ullah, Asmat; Chen, Wen; Khan, Mushtaq Ahmad; Sun, HongGuang
2017-01-01
This paper proposes a new variational model for joint multiplicative denoising and deblurring. It combines a total generalized variation filter (which has been proved to be able to reduce the blocky-effects by being aware of high-order smoothness) and shearlet transform (that effectively preserves anisotropic image features such as sharp edges, curves and so on). The new model takes the advantage of both regularizers since it is able to minimize the staircase effects while preserving sharp edges, textures and other fine image details. The existence and uniqueness of a solution to the proposed variational model is also discussed. The resulting energy functional is then solved by using alternating direction method of multipliers. Numerical experiments showing that the proposed model achieves satisfactory restoration results, both visually and quantitatively in handling the blur (motion, Gaussian, disk, and Moffat) and multiplicative noise (Gaussian, Gamma, or Rayleigh) reduction. A comparison with other recent methods in this field is provided as well. The proposed model can also be applied for restoring both single and multi-channel images contaminated with multiplicative noise, and permit cross-channel blurs when the underlying image has more than one channel. Numerical tests on color images are conducted to demonstrate the effectiveness of the proposed model. PMID:28141802
Joint L1 and Total Variation Regularization for Fluorescence Molecular Tomography
Dutta, Joyita; Ahn, Sangtae; Li, Changqing; Cherry, Simon R.; Leahy, Richard M.
2012-01-01
Fluorescence molecular tomography (FMT) is an imaging modality that exploits the specificity of fluorescent biomarkers to enable 3D visualization of molecular targets and pathways in vivo in small animals. Owing to the high degree of absorption and scattering of light through tissue, the FMT inverse problem is inherently illconditioned making image reconstruction highly susceptible to the effects of noise and numerical errors. Appropriate priors or penalties are needed to facilitate reconstruction and to restrict the search space to a specific solution set. Typically, fluorescent probes are locally concentrated within specific areas of interest (e.g., inside tumors). The commonly used L2 norm penalty generates the minimum energy solution, which tends to be spread out in space. Instead, we present here an approach involving a combination of the L1 and total variation norm penalties, the former to suppress spurious background signals and enforce sparsity and the latter to preserve local smoothness and piecewise constancy in the reconstructed images. We have developed a surrogate-based optimization method for minimizing the joint penalties. The method was validated using both simulated and experimental data obtained from a mouse-shaped phantom mimicking tissue optical properties and containing two embedded fluorescent sources. Fluorescence data was collected using a 3D FMT setup that uses an EMCCD camera for image acquisition and a conical mirror for full-surface viewing. A range of performance metrics were utilized to evaluate our simulation results and to compare our method with the L1, L2, and total variation norm penalty based approaches. The experimental results were assessed using Dice similarity coefficients computed after co-registration with a CT image of the phantom. PMID:22390906
Constrained H1-regularization schemes for diffeomorphic image registration
Mang, Andreas; Biros, George
2017-01-01
We propose regularization schemes for deformable registration and efficient algorithms for their numerical approximation. We treat image registration as a variational optimal control problem. The deformation map is parametrized by its velocity. Tikhonov regularization ensures well-posedness. Our scheme augments standard smoothness regularization operators based on H1- and H2-seminorms with a constraint on the divergence of the velocity field, which resembles variational formulations for Stokes incompressible flows. In our formulation, we invert for a stationary velocity field and a mass source map. This allows us to explicitly control the compressibility of the deformation map and by that the determinant of the deformation gradient. We also introduce a new regularization scheme that allows us to control shear. We use a globalized, preconditioned, matrix-free, reduced space (Gauss–)Newton–Krylov scheme for numerical optimization. We exploit variable elimination techniques to reduce the number of unknowns of our system; we only iterate on the reduced space of the velocity field. Our current implementation is limited to the two-dimensional case. The numerical experiments demonstrate that we can control the determinant of the deformation gradient without compromising registration quality. This additional control allows us to avoid oversmoothing of the deformation map. We also demonstrate that we can promote or penalize shear whilst controlling the determinant of the deformation gradient. PMID:29075361
Bone mass, depressive and anxiety symptoms in adolescent girls: Variation by smoking and alcohol use
Dorn, L.D.; Pabst, S.; Sontag, L.M.; Kalkwarf, H.; Hillman, J.B.; Susman, E.J.
2011-01-01
PURPOSE The purpose of the study was to examine (a) the association between depressive and anxiety symptoms with bone health, (b) the association of smoking or alcohol use with bone health, and, in turn, (c) whether the association between depressive and anxiety symptoms with bone health varied by smoking or alcohol use individually or by combined use. Bone health included total body bone mineral content (TB BMC) and bone mineral density (BMD) of the lumbar spine, total hip, and femoral neck. Previous literature has not examined these issues in adolescence, a time when more than 50% of bone mass is accrued. METHODS An observational study enrolled 262 healthy adolescent girls by age cohort (11, 13, 15, and 17 years). Participants completed questionnaires and interviews on substance use, depressive symptoms, and anxiety. BMC and BMD were measured by dual energy x-ray absorptiometry. RESULTS Higher depressive symptoms were associated with lower TB BMC and BMD (total hip, femoral neck). Those with the lowest level of smoking had higher BMD of the hip and femoral neck whereas no differences were noted by alcohol use. Regular users of both cigarettes and alcohol demonstrated a stronger negative association between depressive symptoms and TB BMC compared with non-users/experimental users and regular alcohol users. Findings were parallel for anxiety symptoms. CONCLUSION Depressive and anxiety symptoms may negatively influence bone health in adolescent girls. Consideration of multiple substances, rather than cigarettes or alcohol separately, may be particularly informative with respect to the association of depression with bone health. PMID:22018564
In vivo dental plaque pH variation with regular and diet soft drinks.
Roos, Erik H; Donly, Kevin J
2002-01-01
Despite the presence or absence of artificial sweeteners in cola drinks, both regular and diet soft drinks still contain phosphoric and citric acid, which contributes to the total acidic challenge potential on enamel. The purpose of this study was to assess the plaque pH, in vivo, after a substrate challenge of diet and regular soft drinks. Seventeen subjects were recruited for this study. All subjects were between the ages of 12 and 15 and had at least 4 restored tooth surfaces present. The subjects were given consent by their parents and were asked to refrain from brushing for 48 hours prior to the study. At baseline, plaque pH was measured from 4 separate locations using touch electrode methodology. Each subject was then randomly assigned to one of two groups. The first group was exposed to regular Coke followed by Diet Coke, while the second group was exposed to Diet Coke followed by regular Coke. Subjects were asked to swish with 15 ml of the respective soft drink for one minute. Plaque pH was measured at the 4 designated tooth sites at 5-, 10- and 20-minute intervals. Subjects then repeated the experiment using the other soft drink. The results showed that regular Coke had significantly more acidic plaque pH values at the 5-, 10- and 20-minute intervals compared to Diet Coke, (P = < .001), when subjected to a t test. The mean pH at 5 minutes for Coke and Diet Coke was 5.5 +/- 0.5 and 6.0 +/- 0.7, respectively. At 10 minutes, the pH for Coke and Diet Coke was 5.6 +/- 0.6 and 6.2 +/- 0.7, respectively. The pH at 20 minutes for Coke and Diet Coke was 5.7 +/- 0.7 and 6.5 +/- 0.5, respectively. These data suggest that regular Coke possesses a greater acid challenge potential on enamel than Diet Coke. However, in this clinical trial, the pH associated with either soft drink did not reach the critical pH which is expected for enamel demineralization and dissolution.
20 CFR 226.14 - Employee regular annuity rate.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 20 Employees' Benefits 1 2010-04-01 2010-04-01 false Employee regular annuity rate. 226.14 Section... COMPUTING EMPLOYEE, SPOUSE, AND DIVORCED SPOUSE ANNUITIES Computing an Employee Annuity § 226.14 Employee regular annuity rate. The regular annuity rate payable to the employee is the total of the employee tier I...
Phenotypic variation of the Mexican duck (Anas platyrhynchos diazi) in Mexico
Scott, N.J.; Reynolds, R.P.
1984-01-01
A collection of 98 breeding Mexican Ducks (Anas platyrhynchos diazi) was made in Mexico from six areas between the United States border with Chihuahua and Lake Chapala, Jalisco, in order to study geographic variation. Plumage indices showed a relatively smooth clinal change from north to south; northern populations were most influenced by the Northern Mallard (A. platyrhynchos) phenotype. Measurements of total, wing, and culmen lengths and bill width were usually significantly larger in males at any one site, but showed no regular geographic trends. Hybridization between platyrhynchos and diazi phenotypes may or may not be increasing in the middle Rio Grande and Rio Conchos valleys; available data are insufficient to decide. A spring 1978 aerial census yielded an estimate of 55,500 diazi -like birds in Mexico. Populations of diazi appear to be as large as the available habitat allows; management should be directed towards increasing and stabilizing the nesting habitat; and the stability of the zone of intergradation should be investigated.
NASA Astrophysics Data System (ADS)
Zhao, Xia; Wang, Guang-xin
2008-12-01
Synthetic aperture radar (SAR) is an active remote sensing sensor. It is a coherent imaging system, the speckle is its inherent default, which affects badly the interpretation and recognition of the SAR targets. Conventional methods of removing the speckle is studied usually in real SAR image, which reduce the edges of the images at the same time as depressing the speckle. Morever, Conventional methods lost the information about images phase. Removing the speckle and enhancing the target and edge simultaneously are still a puzzle. To suppress the spckle and enhance the targets and the edges simultaneously, a half-quadratic variational regularization method in complex SAR image is presented, which is based on the prior knowledge of the targets and the edge. Due to the non-quadratic and non- convex quality and the complexity of the cost function, a half-quadratic variational regularization variation is used to construct a new cost function,which is solved by alternate optimization. In the proposed scheme, the construction of the model, the solution of the model and the selection of the model peremeters are studied carefully. In the end, we validate the method using the real SAR data.Theoretic analysis and the experimental results illustrate the the feasibility of the proposed method. Further more, the proposed method can preserve the information about images phase.
NASA Astrophysics Data System (ADS)
Zhai, Liang; Li, Shuang; Zou, Bin; Sang, Huiyong; Fang, Xin; Xu, Shan
2018-05-01
Considering the spatial non-stationary contributions of environment variables to PM2.5 variations, the geographically weighted regression (GWR) modeling method has been using to estimate PM2.5 concentrations widely. However, most of the GWR models in reported studies so far were established based on the screened predictors through pretreatment correlation analysis, and this process might cause the omissions of factors really driving PM2.5 variations. This study therefore developed a best subsets regression (BSR) enhanced principal component analysis-GWR (PCA-GWR) modeling approach to estimate PM2.5 concentration by fully considering all the potential variables' contributions simultaneously. The performance comparison experiment between PCA-GWR and regular GWR was conducted in the Beijing-Tianjin-Hebei (BTH) region over a one-year-period. Results indicated that the PCA-GWR modeling outperforms the regular GWR modeling with obvious higher model fitting- and cross-validation based adjusted R2 and lower RMSE. Meanwhile, the distribution map of PM2.5 concentration from PCA-GWR modeling also clearly depicts more spatial variation details in contrast to the one from regular GWR modeling. It can be concluded that the BSR enhanced PCA-GWR modeling could be a reliable way for effective air pollution concentration estimation in the coming future by involving all the potential predictor variables' contributions to PM2.5 variations.
Still-to-video face recognition in unconstrained environments
NASA Astrophysics Data System (ADS)
Wang, Haoyu; Liu, Changsong; Ding, Xiaoqing
2015-02-01
Face images from video sequences captured in unconstrained environments usually contain several kinds of variations, e.g. pose, facial expression, illumination, image resolution and occlusion. Motion blur and compression artifacts also deteriorate recognition performance. Besides, in various practical systems such as law enforcement, video surveillance and e-passport identification, only a single still image per person is enrolled as the gallery set. Many existing methods may fail to work due to variations in face appearances and the limit of available gallery samples. In this paper, we propose a novel approach for still-to-video face recognition in unconstrained environments. By assuming that faces from still images and video frames share the same identity space, a regularized least squares regression method is utilized to tackle the multi-modality problem. Regularization terms based on heuristic assumptions are enrolled to avoid overfitting. In order to deal with the single image per person problem, we exploit face variations learned from training sets to synthesize virtual samples for gallery samples. We adopt a learning algorithm combining both affine/convex hull-based approach and regularizations to match image sets. Experimental results on a real-world dataset consisting of unconstrained video sequences demonstrate that our method outperforms the state-of-the-art methods impressively.
ERIC Educational Resources Information Center
Aldinger, Loviah E., Ed.
Five papers describe ways to integrate knowledge from regular and special education at the university level. L. Hudson and M. Carroll ("The Preservice Teacher Experiences Variation in the Meaning Making of Handicapped and Nonhandicapped Learners") review adaptations in a competency based teacher education program to include information on high…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bildhauer, Michael, E-mail: bibi@math.uni-sb.de; Fuchs, Martin, E-mail: fuchs@math.uni-sb.de
2012-12-15
We discuss several variants of the TV-regularization model used in image recovery. The proposed alternatives are either of nearly linear growth or even of linear growth, but with some weak ellipticity properties. The main feature of the paper is the investigation of the analytic properties of the corresponding solutions.
Phenologic variation of major triterpenoids in regular and white Antrodia cinnamomea.
Chen, Wei-Lun; Ho, Yen-Peng; Chou, Jyh-Ching
2016-12-01
Antrodia cinnamomea and its host Cinnamomum kanehirae are both endemic species unique to Taiwan. Many studies have confirmed that A. cinnamomea is rich in polysaccharides and triterpenoids that may carry medicinal effects in anti-cancer, anti-inflammation, anti-hypertension, and anti-oxidation. Therefore it is of interest to study the chemical variation of regular orange-red strains and white strains, which included naturally occurring and blue-light induced white A. cinnamomea. The chemical profiles of A. cinnamomea extracts at different growth stages were compared using thin layer chromatography (TLC) and high performance liquid chromatography (HPLC). The TLC and HPLC profiles indicated that specific triterpenoids varied between white and regular strains. Moreover, the compounds of blue-light induced white strain were similar to those of naturally occurring white strain but retained specific chemical characteristics in more polar region of the HPLC chromatogram of regular strain. Blue-light radiation could change color of the regular A. cinnamomea from orange-red to white by changing its secondary metabolism and growth condition. Naturally occurring white strain did not show a significantly different composition of triterpenoid profiles up to eight weeks old when compared with the triterpenoid profiles of the regular strain at the same age. The ergostane-type triterpenoids were found existing in both young mycelia and old mycelia with fruiting body in artificial agar-plate medium culture, suggesting a more diversified biosynthetic pathway in artificial agar-plate culture rather than wild or submerged culture.
2014-01-01
Background Patients with antibody deficiencies depend on the presence of a variety of antibody specificities in intravenous immunoglobulin (IVIG) to ensure continued protection against pathogens. Few studies have examined levels of antibodies to specific pathogens in IVIG preparations and little is known about the specific antibody levels in patients under regular IVIG treatment. The current study determined the range of antibodies to tetanus, diphtheria, measles and varicella in IVIG products and the levels of these antibodies in patients undergoing IVIG treatment. Methods We selected 21 patients with primary antibody deficiencies who were receiving regular therapy with IVIG. Over a period of one year, we collected four blood samples from each patient (every 3 months), immediately before immunoglobulin infusion. We also collected samples from the IVIG preparation the patients received the month prior to blood collection. Antibody levels to tetanus, diphtheria, measles and varicella virus were measured in plasma and IVIG samples. Total IgG levels were determined in plasma samples. Results Antibody levels to tetanus, diphtheria, varicella virus and measles showed considerable variation in different IVIG lots, but they were similar when compared between commercial preparations. All patients presented with protective levels of antibodies specific for tetanus, measles and varicella. Some patients had suboptimal diphtheria antibody levels. There was a significant correlation between serum and IVIG antibodies to all pathogens, except tetanus. There was a significant correlation between diphtheria and varicella antibodies with total IgG levels, but there was no significant correlation with antibodies to tetanus or measles. Conclusions The study confirmed the variation in specific antibody levels between batches of the same brand of IVIG. Apart from the most common infections to which these patients are susceptible, health care providers must be aware of other vaccine preventable diseases, which still exist globally. PMID:24952415
Noise sensitivity and loudness derivative index for urban road traffic noise annoyance computation.
Gille, Laure-Anne; Marquis-Favre, Catherine; Weber, Reinhard
2016-12-01
Urban road traffic composed of powered-two-wheelers (PTWs), buses, heavy, and light vehicles is a major source of noise annoyance. In order to assess annoyance models considering different acoustical and non-acoustical factors, a laboratory experiment on short-term annoyance due to urban road traffic noise was conducted. At the end of the experiment, participants were asked to rate their noise sensitivity and to describe the noise sequences they heard. This verbalization task highlights that annoyance ratings are highly influenced by the presence of PTWs and by different acoustical features: noise intensity, irregular temporal amplitude variation, regular amplitude modulation, and spectral content. These features, except irregular temporal amplitude variation, are satisfactorily characterized by the loudness, the total energy of tonal components and the sputtering and nasal indices. Introduction of the temporal derivative of loudness allows successful modeling of perceived amplitude variations. Its contribution to the tested annoyance models is high and seems to be higher than the contribution of mean loudness index. A multilevel regression is performed to assess annoyance models using selected acoustical indices and noise sensitivity. Three models are found to be promising for further studies that aim to enhance current annoyance models.
German, Alina; Livshits, Gregory; Peter, Inga; Malkin, Ida; Dubnov, Jonathan; Akons, Hannah; Shmoish, Michael; Hochberg, Ze'ev
2015-03-01
Using a twins study, we sought to assess the contribution of genetic against environmental factor as they affect the age at transition from infancy to childhood (ICT). The subjects were 56 pairs of monozygotic twins, 106 pairs of dizygotic twins, and 106 pairs of regular siblings (SBs), for a total of 536 children. Their ICT was determined, and a variance component analysis was implemented to estimate components of the familial variance, with simultaneous adjustment for potential covariates. We found substantial contribution of the common environment shared by all types of SBs that explained 27.7% of the total variance in ICT, whereas the common twin environment explained 9.2% of the variance, gestational age 3.5%, and birth weight 1.8%. In addition, 8.7% was attributable to sex difference, but we found no detectable contribution of genetic factors to inter-individual variation in ICT age. Developmental plasticity impacts much of human growth. Here we show that of the ∼50% of the variance provided to adult height by the ICT, 42.2% is attributable to adaptive cues represented by shared twin and SB environment, with no detectable genetic involvement. Copyright © 2015 Elsevier Inc. All rights reserved.
Evolution of surface sensible heat over the Tibetan Plateau under the recent global warming hiatus
NASA Astrophysics Data System (ADS)
Zhu, Lihua; Huang, Gang; Fan, Guangzhou; Qu, Xia; Zhao, Guijie; Hua, Wei
2017-10-01
Based on regular surface meteorological observations and NCEP/DOE reanalysis data, this study investigates the evolution of surface sensible heat (SH) over the central and eastern Tibetan Plateau (CE-TP) under the recent global warming hiatus. The results reveal that the SH over the CE-TP presents a recovery since the slowdown of the global warming. The restored surface wind speed together with increased difference in ground-air temperature contribute to the recovery in SH. During the global warming hiatus, the persistent weakening wind speed is alleviated due to the variation of the meridional temperature gradient. Meanwhile, the ground surface temperature and the difference in ground-air temperature show a significant increasing trend in that period caused by the increased total cloud amount, especially at night. At nighttime, the increased total cloud cover reduces the surface effective radiation via a strengthening of atmospheric counter radiation and subsequently brings about a clear upward trend in ground surface temperature and the difference in ground-air temperature. Cloud-radiation feedback plays a significant role in the evolution of the surface temperature and even SH during the global warming hiatus. Consequently, besides the surface wind speed, the difference in ground-air temperature becomes another significant factor for the variation in SH since the slowdown of global warming, particularly at night.
NASA Astrophysics Data System (ADS)
Chen, Qiujie; Chen, Wu; Shen, Yunzhong; Zhang, Xingfu; Hsu, Houze
2016-04-01
The existing unconstrained Gravity Recovery and Climate Experiment (GRACE) monthly solutions i.e. CSR RL05 from Center for Space Research (CSR), GFZ RL05a from GeoForschungsZentrum (GFZ), JPL RL05 from Jet Propulsion Laboratory (JPL), DMT-1 from Delft Institute of Earth Observation and Space Systems (DEOS), AIUB from Bern University, and Tongji-GRACE01 as well as Tongji-GRACE02 from Tongji University, are dominated by correlated noise (such as north-south stripe errors) in high degree coefficients. To suppress the correlated noise of the unconstrained GRACE solutions, one typical option is to use post-processing filters such as decorrelation filtering and Gaussian smoothing , which are quite effective to reduce the noise and convenient to be implemented. Unlike these post-processing methods, the CNES/GRGS monthly GRACE solutions from Centre National d'Etudes Spatiales (CNES) were developed by using regularization with Kaula rule, whose correlated noise are reduced to such a great extent that no decorrelation filtering is required. Actually, the previous studies demonstrated that the north-south stripes in the GRACE solutions are due to the poor sensitivity of gravity variation in east-west direction. In other words, the longitudinal sampling of GRACE mission is very sparse but the latitudinal sampling of GRACE mission is quite dense, indicating that the recoverability of the longitudinal gravity variation is poor or unstable, leading to the ill-conditioned monthly GRACE solutions. To stabilize the monthly solutions, we constructed the regularization matrices by minimizing the difference between the longitudinal and latitudinal gravity variations and applied them to derive a time series of regularized GRACE monthly solutions named RegTongji RL01 for the period Jan. 2003 to Aug. 2011 in this paper. The signal powers and noise level of RegTongji RL01 were analyzed in this paper, which shows that: (1) No smoothing or decorrelation filtering is required for RegTongji RL01 anymore. (2) The signal powers of RegTongji RL01 are obviously stronger than those of the filtered solutions but the noise levels of the regularized and filtered solutions are consistent, suggesting that RegTongji RL01 has the higher signal-to-noise ratio.
Readiness for self-directed learning: How bridging and traditional nursing students differs?
Alharbi, Homood A
2018-02-01
The dean of the nursing college has an initiative to reform the BSN program in the college to minimize the use of lecturing and maximize interactive and lifelong learning. Appropriate assessment of how our students are prepared to be self-directed learners is crucial. To compare traditional and bridging students in regard to their SDLR scores in the nursing college in Saudi Arabia. This was a comparative study to compare traditional and bridging students in regard to their self-directed learning readiness scores (SDLR). The data was collected at the Nursing College, King Saud University, Riyadh, Saudi Arabia. A convenient sample of undergraduate nursing students at the sixth and eighth levels in both regular and bridging programs were recruited in this study to indicate their SDLR scores. The study used Fisher et al.'s (2001) Self-Directed Learning Readiness Scale to measure the self-directed learning readiness among undergraduate nursing students. The total mean score of SDLR was 144 out of 200, which indicated a low level of readiness for SDL. There were significant variations between the included academic levels among participants. Students in the sixth academic level scored higher in the total SDLR scores compared to eighth-level students. There were no significant variations with gender and program types in the total SDLR scores. A comprehensive plan is needed to prepare both faculty members and students to improve the SDL skills. Copyright © 2017 Elsevier Ltd. All rights reserved.
Mexican forest fires and their decadal variations
NASA Astrophysics Data System (ADS)
Velasco Herrera, Graciela
2016-11-01
A high forest fire season of two to three years is regularly observed each decade in Mexican forests. This seems to be related to the presence of the El Niño phenomenon and to the amount of total solar irradiance. In this study, the results of a multi-cross wavelet analysis are reported based on the occurrence of Mexican forest fires, El Niño and the total solar irradiance for the period 1970-2014. The analysis shows that Mexican forest fires and the strongest El Niño phenomena occur mostly around the minima of the solar cycle. This suggests that the total solar irradiance minima provide the appropriate climatological conditions for the occurrence of these forest fires. The next high season for Mexican forest fires could start in the next solar minimum, which will take place between the years 2017 and 2019. A complementary space analysis based on MODIS active fire data for Mexican forest fires from 2005 to 2014 shows that most of these fires occur in cedar and pine forests, on savannas and pasturelands, and in the central jungles of the Atlantic and Pacific coasts.
Exploring Regularities and Dynamic Systems in L2 Development
ERIC Educational Resources Information Center
Lenzing, Anke
2015-01-01
This article focuses on a theoretical and empirical exploration of developmental trajectories and individual learner variation in second language (L2) acquisition. Taking a processability perspective, I view learner language as a dynamic system that includes predictable universal developmental trajectories as well as individual learner variation,…
Regular variation and probability
NASA Astrophysics Data System (ADS)
Bingham, N. H.
2007-03-01
It is a pleasure for Bingham of Bingham, Goldie and Teugels to write in appreciation of Teugels of Bingham, Goldie and Teugels, on the occasion of Jef Teugels' retirement, and also to remind myself of the promise we made each other--all those years ago, in the early 1970s--to write the book that regular variation so obviously required. The theme has continued to attract my interest, Jef's and that of his pupils since. As for the book (BGT below), it continues to be my most cited work, and to find its place in the working library of probabilists. It is a pleasure also to return to the theme of Bingham [5], with the benefit of another 15 years' worth of hindsight.
When things go pear shaped: contour variations of contacts
NASA Astrophysics Data System (ADS)
Utzny, Clemens
2013-04-01
Traditional control of critical dimensions (CD) on photolithographic masks considers the CD average and a measure for the CD variation such as the CD range or the standard deviation. Also systematic CD deviations from the mean such as CD signatures are subject to the control. These measures are valid for mask quality verification as long as patterns across a mask exhibit only size variations and no shape variation. The issue of shape variations becomes especially important in the context of contact holes on EUV masks. For EUV masks the CD error budget is much smaller than for standard optical masks. This means that small deviations from the contact shape can impact EUV waver prints in the sense that contact shape deformations induce asymmetric bridging phenomena. In this paper we present a detailed study of contact shape variations based on regular product data. Two data sets are analyzed: 1) contacts of varying target size and 2) a regularly spaced field of contacts. Here, the methods of statistical shape analysis are used to analyze CD SEM generated contour data. We demonstrate that contacts on photolithographic masks do not only show size variations but exhibit also pronounced nontrivial shape variations. In our data sets we find pronounced shape variations which can be interpreted as asymmetrical shape squeezing and contact rounding. Thus we demonstrate the limitations of classic CD measures for describing the feature variations on masks. Furthermore we show how the methods of statistical shape analysis can be used for quantifying the contour variations thus paving the way to a new understanding of mask linearity and its specification.
Canales-Rodríguez, Erick J.; Caruyer, Emmanuel; Aja-Fernández, Santiago; Radua, Joaquim; Yurramendi Mendizabal, Jesús M.; Iturria-Medina, Yasser; Melie-García, Lester; Alemán-Gómez, Yasser; Thiran, Jean-Philippe; Sarró, Salvador; Pomarol-Clotet, Edith; Salvador, Raymond
2015-01-01
Spherical deconvolution (SD) methods are widely used to estimate the intra-voxel white-matter fiber orientations from diffusion MRI data. However, while some of these methods assume a zero-mean Gaussian distribution for the underlying noise, its real distribution is known to be non-Gaussian and to depend on many factors such as the number of coils and the methodology used to combine multichannel MRI signals. Indeed, the two prevailing methods for multichannel signal combination lead to noise patterns better described by Rician and noncentral Chi distributions. Here we develop a Robust and Unbiased Model-BAsed Spherical Deconvolution (RUMBA-SD) technique, intended to deal with realistic MRI noise, based on a Richardson-Lucy (RL) algorithm adapted to Rician and noncentral Chi likelihood models. To quantify the benefits of using proper noise models, RUMBA-SD was compared with dRL-SD, a well-established method based on the RL algorithm for Gaussian noise. Another aim of the study was to quantify the impact of including a total variation (TV) spatial regularization term in the estimation framework. To do this, we developed TV spatially-regularized versions of both RUMBA-SD and dRL-SD algorithms. The evaluation was performed by comparing various quality metrics on 132 three-dimensional synthetic phantoms involving different inter-fiber angles and volume fractions, which were contaminated with noise mimicking patterns generated by data processing in multichannel scanners. The results demonstrate that the inclusion of proper likelihood models leads to an increased ability to resolve fiber crossings with smaller inter-fiber angles and to better detect non-dominant fibers. The inclusion of TV regularization dramatically improved the resolution power of both techniques. The above findings were also verified in human brain data. PMID:26470024
Bach, Alex; Busto, Isabel
2005-02-01
A database consisting of 35291 milking records from 83 cows was built over a period of 10 months with the objectives of studying the effect of teat cup attachment failures and milking interval regularity on milk production with an automated milking system (AMS). The database collected records of lactation number, days in milk (DIM), milk production, interval between milkings (for both the entire udder and individual quarters in case of a teat cup attachment failure) and average and peak milk flows for each milking. The weekly coefficient of variation (CV) of milking intervals was used as a measure of milking regularity. DIM, milking intervals, and CV of milking intervals were divided into four categories coinciding with the four quartiles of their respective distributions. The data were analysed by analysis of variance with cow as a random effect and lactation number, DIM, the occurrence of a milking failure, and the intervals between milkings or the weekly CV of milking intervals as fixed effects. The incidence of attachment failures was 7.6% of total milkings. Milk production by quarters affected by a milking failure following the failure was numerically greater owing to the longer interval between milkings. When accounting for the effect of milking intervals, milk production by affected quarters following a milking failure was 26% lower than with regular milkings. However, the decrease in milk production by quarters affected by milking failures was more severe as DIM increased. Average and peak milk flows by quarters affected by a milking failure were lower than when milkings occurred normally. However, milk production recovered its former level within seven milkings following a milking failure. Uneven frequency (weekly CV of milking intervals >27%) decreased daily milk yield, and affected multiparous more negatively than primiparous cows.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gao, H
Purpose: This work is to develop a general framework, namely filtered iterative reconstruction (FIR) method, to incorporate analytical reconstruction (AR) method into iterative reconstruction (IR) method, for enhanced CT image quality. Methods: FIR is formulated as a combination of filtered data fidelity and sparsity regularization, and then solved by proximal forward-backward splitting (PFBS) algorithm. As a result, the image reconstruction decouples data fidelity and image regularization with a two-step iterative scheme, during which an AR-projection step updates the filtered data fidelity term, while a denoising solver updates the sparsity regularization term. During the AR-projection step, the image is projected tomore » the data domain to form the data residual, and then reconstructed by certain AR to a residual image which is in turn weighted together with previous image iterate to form next image iterate. Since the eigenvalues of AR-projection operator are close to the unity, PFBS based FIR has a fast convergence. Results: The proposed FIR method is validated in the setting of circular cone-beam CT with AR being FDK and total-variation sparsity regularization, and has improved image quality from both AR and IR. For example, AIR has improved visual assessment and quantitative measurement in terms of both contrast and resolution, and reduced axial and half-fan artifacts. Conclusion: FIR is proposed to incorporate AR into IR, with an efficient image reconstruction algorithm based on PFBS. The CBCT results suggest that FIR synergizes AR and IR with improved image quality and reduced axial and half-fan artifacts. The authors was partially supported by the NSFC (#11405105), the 973 Program (#2015CB856000), and the Shanghai Pujiang Talent Program (#14PJ1404500).« less
Evaluating secular acceleration in geomagnetic field model GRIMM-3
NASA Astrophysics Data System (ADS)
Lesur, V.; Wardinski, I.
2012-12-01
Secular acceleration of the magnetic field is the rate of change of its secular variation. One of the main results of studying magnetic data collected by the German survey satellite CHAMP was the mapping of field acceleration and its evolution in time. Questions remain about the accuracy of the modeled acceleration and the effect of the applied regularization processes. We have evaluated to what extent the regularization affects the temporal variability of the Gauss coefficients. We also obtained results of temporal variability of the Gauss coefficients where alternative approaches to the usual smoothing norms have been applied for regularization. Except for the dipole term, the secular acceleration of the Gauss coefficients is fairly well described up to spherical harmonic degree 5 or 6. There is no clear evidence from observatory data that the spectrum of this acceleration is underestimated at the Earth surface. Assuming a resistive mantle, the observed acceleration supports a characteristic time scale for the secular variation of the order of 11 years.
de Carvalho Bittencourt, Marcelo; Kohler, Chantal; Henard, Sandrine; Rabaud, Christian; Béné, Marie C; Faure, Gilbert C
2013-01-01
Quality assessment in flow cytometry cannot obey the same rules as those applicable to the measurement of chemical analytes. However, regular follow-up of known patients may provide a robust in-house control of cell subsets evaluation. Sequential blood samples assessed for 32 HIV patients over several years and showing good stability were retrospectively assessed to establish coefficient of variations of the percentages of CD3+, CD4+, CD8+ cells, and CD4+ absolute counts (ACs). Mean relative standard variations for the whole cohort were of 0.04, 0.14, 0.08, and 0.18 for CD3%, CD4%, CD8%, and CD4 ACs, respectively. In-house follow-up of regularly checked compliant patients is a good alternative to traditional and costly repeatability and reproducibility studies for the validation of routine flow cytometry. © 2013 International Clinical Cytometry Society. Copyright © 2013 International Clinical Cytometry Society.
de Carvalho Bittencourt, Marcelo; Kohler, Chantal; Henard, Sandrine; Rabaud, Christian; Béné, Marie C; Faure, Gilbert C
2013-07-08
Background. Quality assessment in flow cytometry cannot obey the same rules as those applicable to the measurement of chemical analytes. However, regular follow-up of known patients may provide a robust in-house control of cell subsets evaluation. Methods. Sequential blood samples assessed for 32 HIV patients over several years and showing good stability were retrospectively assessed to establish coefficient of variations of the percentages of CD3+, CD4+, CD8+ cells and CD4+ absolute counts. Results. Mean relative standard variations for the whole cohort were of 0.04, 0.14, 0.08 and 0.18 for CD3%, CD4% CD8% and CD4 absolute counts respectively. Discussion. In-house follow up of regularly checked compliant patients is a good alternative to traditional and costly repeatability and reproducibility studies for the validation of routine flow cytometry. © 2013 Clinical Cytometry Society. Copyright © 2013 Clinical Cytometry Society.
Wang, Chunhao; Yin, Fang-Fang; Kirkpatrick, John P; Chang, Zheng
2017-08-01
To investigate the feasibility of using undersampled k-space data and an iterative image reconstruction method with total generalized variation penalty in the quantitative pharmacokinetic analysis for clinical brain dynamic contrast-enhanced magnetic resonance imaging. Eight brain dynamic contrast-enhanced magnetic resonance imaging scans were retrospectively studied. Two k-space sparse sampling strategies were designed to achieve a simulated image acquisition acceleration factor of 4. They are (1) a golden ratio-optimized 32-ray radial sampling profile and (2) a Cartesian-based random sampling profile with spatiotemporal-regularized sampling density constraints. The undersampled data were reconstructed to yield images using the investigated reconstruction technique. In quantitative pharmacokinetic analysis on a voxel-by-voxel basis, the rate constant K trans in the extended Tofts model and blood flow F B and blood volume V B from the 2-compartment exchange model were analyzed. Finally, the quantitative pharmacokinetic parameters calculated from the undersampled data were compared with the corresponding calculated values from the fully sampled data. To quantify each parameter's accuracy calculated using the undersampled data, error in volume mean, total relative error, and cross-correlation were calculated. The pharmacokinetic parameter maps generated from the undersampled data appeared comparable to the ones generated from the original full sampling data. Within the region of interest, most derived error in volume mean values in the region of interest was about 5% or lower, and the average error in volume mean of all parameter maps generated through either sampling strategy was about 3.54%. The average total relative error value of all parameter maps in region of interest was about 0.115, and the average cross-correlation of all parameter maps in region of interest was about 0.962. All investigated pharmacokinetic parameters had no significant differences between the result from original data and the reduced sampling data. With sparsely sampled k-space data in simulation of accelerated acquisition by a factor of 4, the investigated dynamic contrast-enhanced magnetic resonance imaging pharmacokinetic parameters can accurately estimate the total generalized variation-based iterative image reconstruction method for reliable clinical application.
Analysis of a New Variational Model to Restore Point-Like and Curve-Like Singularities in Imaging
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aubert, Gilles, E-mail: gaubert@unice.fr; Blanc-Feraud, Laure, E-mail: Laure.Blanc-Feraud@inria.fr; Graziani, Daniele, E-mail: Daniele.Graziani@inria.fr
2013-02-15
The paper is concerned with the analysis of a new variational model to restore point-like and curve-like singularities in biological images. To this aim we investigate the variational properties of a suitable energy which governs these pathologies. Finally in order to realize numerical experiments we minimize, in the discrete setting, a regularized version of this functional by fast descent gradient scheme.
Lea, Rod A; Dickson, Stuart; Benowitz, Neal L
2006-01-01
Nicotine is the major addictive compound in tobacco and is responsible for tobacco dependence. It is primarily metabolized to cotinine (COT) and trans-3'-hydroxycotinine (3HC) by the liver enzyme cytochrome P-450 2A6 (CYP2A6). The 3HC/COT ratio measured in the saliva of smokers is highly correlated with the intrinsic hepatic clearance of nicotine and, therefore, may be a useful non-invasive marker of CYP2A6 activity and metabolic rate of nicotine. This study assessed within-subject variation in salivary 3HC/COT ratios in six regular daily smokers. Our data provide evidence that 1. variation in the 3HC/COT ratio is not dependent on the time of sampling during the day (i.e., morning vs. night ) (P > 0.1) and 2. the average within-subject biological variation in the 3HC/COT ratio is approximately 26%. These findings should be useful for designing large-scale population surveys to assess the variation in the metabolic rate of nicotine (via CYP2A6) in smokers.
NASA Technical Reports Server (NTRS)
Monk, T. H.; Petrie, S. R.; Hayes, A. J.; Kupfer, D. J.
1994-01-01
A diary-like instrument to measure lifestyle regularity (the 'Social Rhythm Metric'-SRM) was given to 96 subjects (48 women, 48 men), 39 of whom repeated the study after at least one year, with additional objective measures of rest/activity. Lifestyle regularity as measured by the SRM related to age, morningness, subjective sleep quality and time-of-day variations in alertness, but not to gender, extroversion or neuroticism. Statistically significant test-retest correlations of about 0.4 emerged for SRM scores over the 12-30 month delay. Diary-based estimates of bedtime and waketime appeared fairly reliable. In a further study of healthy young men, 4 high SRM scorers ('regular') had a deeper nocturnal body temperature trough than 5 low SRM scorers ('irregular'), suggesting a better functioning circadian system in the 'regular' group.
2014-01-01
Background The built environment in which older people live plays an important role in promoting or inhibiting physical activity. Most work on this complex relationship between physical activity and the environment has excluded people with reduced physical function or ignored the difference between groups with different levels of physical function. This study aims to explore the role of neighbourhood green space in determining levels of participation in physical activity among elderly men with different levels of lower extremity physical function. Method Using data collected from the Caerphilly Prospective Study (CaPS) and green space data collected from high resolution Landmap true colour aerial photography, we first investigated the effect of the quantity of neighbourhood green space and the variation in neighbourhood vegetation on participation in physical activity for 1,010 men aged 66 and over in Caerphilly county borough, Wales, UK. Second, we explored whether neighbourhood green space affects groups with different levels of lower extremity physical function in different ways. Results Increasing percentage of green space within a 400 meters radius buffer around the home was significantly associated with more participation in physical activity after adjusting for lower extremity physical function, psychological distress, general health, car ownership, age group, marital status, social class, education level and other environmental factors (OR = 1.21, 95% CI 1.05, 1.41). A statistically significant interaction between the variation in neighbourhood vegetation and lower extremity physical function was observed (OR = 1.92, 95% CI 1.12, 3.28). Conclusion Elderly men living in neighbourhoods with more green space have higher levels of participation in regular physical activity. The association between variation in neighbourhood vegetation and regular physical activity varied according to lower extremity physical function. Subjects reporting poor lower extremity physical function living in neighbourhoods with more homogeneous vegetation (i.e. low variation) were more likely to participate in regular physical activity than those living in neighbourhoods with less homogeneous vegetation (i.e. high variation). Good lower extremity physical function reduced the adverse effect of high variation vegetation on participation in regular physical activity. This provides a basis for the future development of novel interventions that aim to increase levels of physical activity in later life, and has implications for planning policy to design, preserve, facilitate and encourage the use of green space near home. PMID:24646136
Gong, Yi; Gallacher, John; Palmer, Stephen; Fone, David
2014-03-19
The built environment in which older people live plays an important role in promoting or inhibiting physical activity. Most work on this complex relationship between physical activity and the environment has excluded people with reduced physical function or ignored the difference between groups with different levels of physical function. This study aims to explore the role of neighbourhood green space in determining levels of participation in physical activity among elderly men with different levels of lower extremity physical function. Using data collected from the Caerphilly Prospective Study (CaPS) and green space data collected from high resolution Landmap true colour aerial photography, we first investigated the effect of the quantity of neighbourhood green space and the variation in neighbourhood vegetation on participation in physical activity for 1,010 men aged 66 and over in Caerphilly county borough, Wales, UK. Second, we explored whether neighbourhood green space affects groups with different levels of lower extremity physical function in different ways. Increasing percentage of green space within a 400 meters radius buffer around the home was significantly associated with more participation in physical activity after adjusting for lower extremity physical function, psychological distress, general health, car ownership, age group, marital status, social class, education level and other environmental factors (OR = 1.21, 95% CI 1.05, 1.41). A statistically significant interaction between the variation in neighbourhood vegetation and lower extremity physical function was observed (OR = 1.92, 95% CI 1.12, 3.28). Elderly men living in neighbourhoods with more green space have higher levels of participation in regular physical activity. The association between variation in neighbourhood vegetation and regular physical activity varied according to lower extremity physical function. Subjects reporting poor lower extremity physical function living in neighbourhoods with more homogeneous vegetation (i.e. low variation) were more likely to participate in regular physical activity than those living in neighbourhoods with less homogeneous vegetation (i.e. high variation). Good lower extremity physical function reduced the adverse effect of high variation vegetation on participation in regular physical activity. This provides a basis for the future development of novel interventions that aim to increase levels of physical activity in later life, and has implications for planning policy to design, preserve, facilitate and encourage the use of green space near home.
A method to account for the temperature sensitivity of TCCON total column measurements
NASA Astrophysics Data System (ADS)
Niebling, Sabrina G.; Wunch, Debra; Toon, Geoffrey C.; Wennberg, Paul O.; Feist, Dietrich G.
2014-05-01
The Total Carbon Column Observing Network (TCCON) consists of ground-based Fourier Transform Spectrometer (FTS) systems all around the world. It achieves better than 0.25% precision and accuracy for total column measurements of CO2 [Wunch et al. (2011)]. In recent years, the TCCON data processing and retrieval software (GGG) has been improved to achieve better and better results (e. g. ghost correction, improved a priori profiles, more accurate spectroscopy). However, a small error is also introduced by the insufficent knowledge of the true temperature profile in the atmosphere above the individual instruments. This knowledge is crucial to retrieve highly precise gas concentrations. In the current version of the retrieval software, we use six-hourly NCEP reanalysis data to produce one temperature profile at local noon for each measurement day. For sites in the mid latitudes which can have a large diurnal variation of the temperature in the lowermost kilometers of the atmosphere, this approach can lead to small errors in the final gas concentration of the total column. Here, we present and describe a method to account for the temperature sensitivity of the total column measurements. We exploit the fact that H2O is most abundant in the lowermost kilometers of the atmosphere where the largest diurnal temperature variations occur. We use single H2O absorption lines with different temperature sensitivities to gain information about the temperature variations over the course of the day. This information is used to apply a posteriori correction of the retrieved gas concentration of total column. In addition, we show that the a posteriori temperature correction is effective by applying it to data from Lamont, Oklahoma, USA (36,6°N and 97,5°W). We chose this site because regular radiosonde launches with a time resolution of six hours provide detailed information of the real temperature in the atmosphere and allow us to test the effectiveness of our correction. References: Wunch, D., Toon, G. C., Blavier, J.-F. L., Washenfelder, R. A., Notholt, J., Connor, B. J., Griffith, D. W. T., Sherlock, V., and Wennberg, P. O.: The Total Carbon Column Observing Network, Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 369, 2087-2112, 2011.
Median prior constrained TV algorithm for sparse view low-dose CT reconstruction.
Liu, Yi; Shangguan, Hong; Zhang, Quan; Zhu, Hongqing; Shu, Huazhong; Gui, Zhiguo
2015-05-01
It is known that lowering the X-ray tube current (mAs) or tube voltage (kVp) and simultaneously reducing the total number of X-ray views (sparse view) is an effective means to achieve low-dose in computed tomography (CT) scan. However, the associated image quality by the conventional filtered back-projection (FBP) usually degrades due to the excessive quantum noise. Although sparse-view CT reconstruction algorithm via total variation (TV), in the scanning protocol of reducing X-ray tube current, has been demonstrated to be able to result in significant radiation dose reduction while maintain image quality, noticeable patchy artifacts still exist in reconstructed images. In this study, to address the problem of patchy artifacts, we proposed a median prior constrained TV regularization to retain the image quality by introducing an auxiliary vector m in register with the object. Specifically, the approximate action of m is to draw, in each iteration, an object voxel toward its own local median, aiming to improve low-dose image quality with sparse-view projection measurements. Subsequently, an alternating optimization algorithm is adopted to optimize the associative objective function. We refer to the median prior constrained TV regularization as "TV_MP" for simplicity. Experimental results on digital phantoms and clinical phantom demonstrated that the proposed TV_MP with appropriate control parameters can not only ensure a higher signal to noise ratio (SNR) of the reconstructed image, but also its resolution compared with the original TV method. Copyright © 2015 Elsevier Ltd. All rights reserved.
Norris, Darren; Fortin, Marie-Josée; Magnusson, William E.
2014-01-01
Background Ecological monitoring and sampling optima are context and location specific. Novel applications (e.g. biodiversity monitoring for environmental service payments) call for renewed efforts to establish reliable and robust monitoring in biodiversity rich areas. As there is little information on the distribution of biodiversity across the Amazon basin, we used altitude as a proxy for biological variables to test whether meso-scale variation can be adequately represented by different sample sizes in a standardized, regular-coverage sampling arrangement. Methodology/Principal Findings We used Shuttle-Radar-Topography-Mission digital elevation values to evaluate if the regular sampling arrangement in standard RAPELD (rapid assessments (“RAP”) over the long-term (LTER [“PELD” in Portuguese])) grids captured patters in meso-scale spatial variation. The adequacy of different sample sizes (n = 4 to 120) were examined within 32,325 km2/3,232,500 ha (1293×25 km2 sample areas) distributed across the legal Brazilian Amazon. Kolmogorov-Smirnov-tests, correlation and root-mean-square-error were used to measure sample representativeness, similarity and accuracy respectively. Trends and thresholds of these responses in relation to sample size and standard-deviation were modeled using Generalized-Additive-Models and conditional-inference-trees respectively. We found that a regular arrangement of 30 samples captured the distribution of altitude values within these areas. Sample size was more important than sample standard deviation for representativeness and similarity. In contrast, accuracy was more strongly influenced by sample standard deviation. Additionally, analysis of spatially interpolated data showed that spatial patterns in altitude were also recovered within areas using a regular arrangement of 30 samples. Conclusions/Significance Our findings show that the logistically feasible sample used in the RAPELD system successfully recovers meso-scale altitudinal patterns. This suggests that the sample size and regular arrangement may also be generally appropriate for quantifying spatial patterns in biodiversity at similar scales across at least 90% (≈5 million km2) of the Brazilian Amazon. PMID:25170894
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lesht, B.M.; Liljegren, J.C.
1996-12-31
Comparisons between the precipitable water vapor (PWV) estimated by passive microwave radiometers (MWRs) and that obtained by integrating the vertical profile of water vapor density measured by radiosondes (BBSS) have generally shown good agreement. These comparisons, however, have usually been done over rather short time periods and consequently within limited ranges of total PWV and with limited numbers of radiosondes. We have been making regular comparisons between MWR and BBSS estimates of PWV at the Southern Great Plains Cloud and Radiation Testbed (SGP/CART) site since late 1992 as part of an ongoing quality measurement experiment (QME). This suite of comparisonsmore » spans three annual cycles and a relatively wide range of total PWV amounts. Our findings show that although for the most part the agreement is excellent, differences between the two measurements occur. These differences may be related to the MWR retrieval of PWV and to calibration variations between radiosonde batches.« less
A Better Sunscreen: Structural Effects on Spectral Properties
ERIC Educational Resources Information Center
Huck, Lawrence A.; Leigh, William J.
2010-01-01
A modification of the mixed-aldol synthesis of dibenzylideneacetone, prepared from acetone and benzaldehyde, is described wherein acetone is replaced with a series of cyclic ketones with ring sizes of 5-7 carbons. The structural variations in the resulting conjugated ketones produce regular variations in the UV-vis absorption spectra. The choice…
Prugger, Christof; Wellmann, Jürgen; Heidrich, Jan; De Bacquer, Dirk; De Smedt, Delphine; De Backer, Guy; Reiner, Željko; Empana, Jean-Philippe; Fras, Zlatko; Gaita, Dan; Jennings, Catriona; Kotseva, Kornelia; Wood, David; Keil, Ulrich
2017-01-01
Regular exercise lowers the risk of cardiovascular death in coronary heart disease (CHD) patients. We aimed to investigate regular exercise behaviour and intention in relation to symptoms of anxiety and depression in CHD patients across Europe. This study was based on a multicentre cross-sectional survey. In the EUROpean Action on Secondary and Primary Prevention through Intervention to Reduce Events (EUROASPIRE) III survey, 8966 CHD patients <80 years of age from 22 European countries were interviewed on average 15 months after hospitalisation. Whether patients exercised or intended to exercise regularly was assessed using the Stages of Change questionnaire in 8330 patients. Symptoms of anxiety and depression were evaluated using the Hospital Anxiety and Depression Scale. Total physical activity was measured by the International Physical Activity Questionnaire in patients from a subset of 14 countries. Overall, 50.3% of patients were not intending to exercise regularly, 15.9% were intending to exercise regularly, and 33.8% were exercising regularly. Patients with severe symptoms of depression less frequently exercised regularly than patients with symptoms in the normal range (20.2%, 95% confidence interval (CI) 14.8-26.8 vs 36.7%, 95% CI 29.8-44.2). Among patients not exercising regularly, patients with severe symptoms of depression were less likely to have an intention to exercise regularly (odds ratio 0.62, 95% CI 0.46-0.85). Symptoms of anxiety did not affect regular exercise intention. In sensitivity analysis, results were consistent when adjusting for total physical activity. Lower frequency of regular exercise and decreased likelihood of exercise intention were observed in CHD patients with severe depressive symptoms. Severe symptoms of depression may preclude CHD patients from performing regular exercise. © The European Society of Cardiology 2016.
High-resolution CSR GRACE RL05 mascons
NASA Astrophysics Data System (ADS)
Save, Himanshu; Bettadpur, Srinivas; Tapley, Byron D.
2016-10-01
The determination of the gravity model for the Gravity Recovery and Climate Experiment (GRACE) is susceptible to modeling errors, measurement noise, and observability issues. The ill-posed GRACE estimation problem causes the unconstrained GRACE RL05 solutions to have north-south stripes. We discuss the development of global equal area mascon solutions to improve the GRACE gravity information for the study of Earth surface processes. These regularized mascon solutions are developed with a 1° resolution using Tikhonov regularization in a geodesic grid domain. These solutions are derived from GRACE information only, and no external model or data is used to inform the constraints. The regularization matrix is time variable and will not bias or attenuate future regional signals to some past statistics from GRACE or other models. The resulting Center for Space Research (CSR) mascon solutions have no stripe errors and capture all the signals observed by GRACE within the measurement noise level. The solutions are not tailored for specific applications and are global in nature. This study discusses the solution approach and compares the resulting solutions with postprocessed results from the RL05 spherical harmonic solutions and other global mascon solutions for studies of Arctic ice sheet processes, ocean bottom pressure variation, and land surface total water storage change. This suite of comparisons leads to the conclusion that the mascon solutions presented here are an enhanced representation of the RL05 GRACE solutions and provide accurate surface-based gridded information that can be used without further processing.
Wu, Junfeng; Dai, Fang; Hu, Gang; Mou, Xuanqin
2018-04-18
Excessive radiation exposure in computed tomography (CT) scans increases the chance of developing cancer and has become a major clinical concern. Recently, statistical iterative reconstruction (SIR) with l0-norm dictionary learning regularization has been developed to reconstruct CT images from the low dose and few-view dataset in order to reduce radiation dose. Nonetheless, the sparse regularization term adopted in this approach is l0-norm, which cannot guarantee the global convergence of the proposed algorithm. To address this problem, in this study we introduced the l1-norm dictionary learning penalty into SIR framework for low dose CT image reconstruction, and developed an alternating minimization algorithm to minimize the associated objective function, which transforms CT image reconstruction problem into a sparse coding subproblem and an image updating subproblem. During the image updating process, an efficient model function approach based on balancing principle is applied to choose the regularization parameters. The proposed alternating minimization algorithm was evaluated first using real projection data of a sheep lung CT perfusion and then using numerical simulation based on sheep lung CT image and chest image. Both visual assessment and quantitative comparison using terms of root mean square error (RMSE) and structural similarity (SSIM) index demonstrated that the new image reconstruction algorithm yielded similar performance with l0-norm dictionary learning penalty and outperformed the conventional filtered backprojection (FBP) and total variation (TV) minimization algorithms.
Soil temperature extrema recovery rates after precipitation cooling
NASA Technical Reports Server (NTRS)
Welker, J. E.
1984-01-01
From a one dimensional view of temperature alone variations at the Earth's surface manifest themselves in two cyclic patterns of diurnal and annual periods, due principally to the effects of diurnal and seasonal changes in solar heating as well as gains and losses of available moisture. Beside these two well known cyclic patterns, a third cycle has been identified which occurs in values of diurnal maxima and minima soil temperature extrema at 10 cm depth usually over a mesoscale period of roughly 3 to 14 days. This mesoscale period cycle starts with precipitation cooling of soil and is followed by a power curve temperature recovery. The temperature recovery clearly depends on solar heating of the soil with an increased soil moisture content from precipitation combined with evaporation cooling at soil temperatures lowered by precipitation cooling, but is quite regular and universal for vastly different geographical locations, and soil types and structures. The regularity of the power curve recovery allows a predictive model approach over the recovery period. Multivariable linear regression models alloy predictions of both the power of the temperature recovery curve as well as the total temperature recovery amplitude of the mesoscale temperature recovery, from data available one day after the temperature recovery begins.
Independence of reaction time and response force control during isometric leg extension.
Fukushi, Tamami; Ohtsuki, Tatsuyuki
2004-04-01
In this study, we examined the relative control of reaction time and force in responses of the lower limb. Fourteen female participants (age 21.2 +/- 1.0 years, height 1.62 +/- 0.05 m, body mass 54.1 +/- 6.1 kg; mean +/- s) were instructed to exert their maximal isometric one-leg extension force as quickly as possible in response to an auditory stimulus presented after one of 13 foreperiod durations, ranging from 0.5 to 10.0 s. In the 'irregular condition' each foreperiod was presented in random order, while in the 'regular condition' each foreperiod was repeated consecutively. A significant interactive effect of foreperiod duration and regularity on reaction time was observed (P < 0.001 in two-way ANOVA with repeated measures). In the irregular condition the shorter foreperiod induced a longer reaction time, while in the regular condition the shorter foreperiod induced a shorter reaction time. Peak amplitude of isometric force was affected only by the regularity of foreperiod and there was a significant variation of changes in peak force across participants; nine participants were shown to significantly increase peak force for the regular condition (P < 0.001), three to decrease it (P < 0.05) and two showed no difference. These results indicate the independence of reaction time and response force control in the lower limb motor system. Variation of changes in peak force across participants may be due to the different attention to the bipolar nature of the task requirements such as maximal force and maximal speed.
Lim, Jun-Seok; Pang, Hee-Suk
2016-01-01
In this paper an [Formula: see text]-regularized recursive total least squares (RTLS) algorithm is considered for the sparse system identification. Although recursive least squares (RLS) has been successfully applied in sparse system identification, the estimation performance in RLS based algorithms becomes worse, when both input and output are contaminated by noise (the error-in-variables problem). We proposed an algorithm to handle the error-in-variables problem. The proposed [Formula: see text]-RTLS algorithm is an RLS like iteration using the [Formula: see text] regularization. The proposed algorithm not only gives excellent performance but also reduces the required complexity through the effective inversion matrix handling. Simulations demonstrate the superiority of the proposed [Formula: see text]-regularized RTLS for the sparse system identification setting.
Willingness of Regular and Special Educators to Teach Students with Handicaps.
ERIC Educational Resources Information Center
Gans, Karen Derk
1987-01-01
Regular educators (N=128) and special educators (N=133) in 21 Ohio school districts responded to a questionnaire regarding handicap integration. Willingness of regular educators to teach handicapped students depended more heavily on demographic variables (e.g., total number of years in teaching); willingness of special educators depended more on…
Liu, Feng
2018-01-01
In this paper we investigate the endpoint regularity of the discrete m -sublinear fractional maximal operator associated with [Formula: see text]-balls, both in the centered and uncentered versions. We show that these operators map [Formula: see text] into [Formula: see text] boundedly and continuously. Here [Formula: see text] represents the set of functions of bounded variation defined on [Formula: see text].
Image segmentation with a novel regularized composite shape prior based on surrogate study
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhao, Tingting, E-mail: tingtingzhao@mednet.ucla.edu; Ruan, Dan, E-mail: druan@mednet.ucla.edu
Purpose: Incorporating training into image segmentation is a good approach to achieve additional robustness. This work aims to develop an effective strategy to utilize shape prior knowledge, so that the segmentation label evolution can be driven toward the desired global optimum. Methods: In the variational image segmentation framework, a regularization for the composite shape prior is designed to incorporate the geometric relevance of individual training data to the target, which is inferred by an image-based surrogate relevance metric. Specifically, this regularization is imposed on the linear weights of composite shapes and serves as a hyperprior. The overall problem is formulatedmore » in a unified optimization setting and a variational block-descent algorithm is derived. Results: The performance of the proposed scheme is assessed in both corpus callosum segmentation from an MR image set and clavicle segmentation based on CT images. The resulted shape composition provides a proper preference for the geometrically relevant training data. A paired Wilcoxon signed rank test demonstrates statistically significant improvement of image segmentation accuracy, when compared to multiatlas label fusion method and three other benchmark active contour schemes. Conclusions: This work has developed a novel composite shape prior regularization, which achieves superior segmentation performance than typical benchmark schemes.« less
Kuświk, Piotr; Ehresmann, Arno; Tekielak, Maria; Szymański, Bogdan; Sveklo, Iosif; Mazalski, Piotr; Engel, Dieter; Kisielewski, Jan; Lengemann, Daniel; Urbaniak, Maciej; Schmidt, Christoph; Maziewski, Andrzej; Stobiecki, Feliks
2011-03-04
Regularly arranged magnetic out-of-plane patterns in continuous and flat films are promising for applications in data storage technology (bit patterned media) or transport of individual magnetic particles. Whereas topographic magnetic structures are fabricated by standard lithographical techniques, the fabrication of regularly arranged artificial domains in topographically flat films is difficult, since the free energy minimization determines the existence, shape, and regularity of domains. Here we show that keV He(+) ion bombardment of Au/Co/Au layer systems through a colloidal mask of hexagonally arranged spherical polystyrene beads enables magnetic patterning of regularly arranged cylindrical magnetic monodomains with out-of-plane magnetization embedded in a ferromagnetic matrix with easy-plane anisotropy. This colloidal domain lithography creates artificial domains via periodic lateral anisotropy variations induced by periodic defect density modulations. Magnetization reversal of the layer system observed by magnetic force microscopy shows individual disc switching indicating monodomain states.
Decelerated medical education.
McGrath, Brian; McQuail, Diane
2004-09-01
The aim of the study was to obtain information regarding the prevalence, structure, student characteristics and outcomes of formal decelerated medical education programs. A 13-item survey was mailed to all US medical schools examining characteristics of decelerated curricular programs. Responses were received from 77 schools (62% response). Some 24 (31%) indicated a formal decelerated option; 13 (57%) decelerate the first year while four (17%) decelerate year 1 or year 2. Participants may be selected before matriculation or after difficulty in 14 (61%) programs while four (17%) select only after encountering difficulty. Students may unilaterally choose deceleration in 10 (43%); 4.3% (0.1-12) of total matriculants were decelerated. The proportion of decelerated students identified as underrepresented minority (URM) was 37% (0-100), representing 10.5% (0-43) of total URM enrollment. Twelve (52%) programs do not provide unique support beyond deceleration. Standards for advancement are identical for decelerated and regular students in 17 schools (81%). In total, 10% (0-100) of decelerated students were dismissed within the last five years, representing 24% (0-90) of all dismissals. Few schools provided grade point average (GPA) or Medical College Admissions Test (MCAT) data but the limited responses indicate that many decelerated students are at risk for academic difficulty. It is concluded that decelerated curricular options are available at a significant number of US medical schools. Decelerated students comprise a small proportion of total enrollment but URM matriculants represent a disproportionate share of participants. Decelerated programs appear to be successful as measured by dismissal rates if one accepts attrition which exceeds that for regular MD students. Variation in dismissal rates is difficult to interpret given the lack of GPA and MCAT data. One half of all programs offer no additional support activities beyond deceleration. More data are needed to determine the relative contribution of deceleration vs. other support measures to the advancement of students at academic risk.
Luo, X N; Yang, M; Liang, X F; Jin, K; Lv, L Y; Tian, C X; Yuan, Y C; Sun, J
2015-09-25
In this study, 12 polymorphic microsatellites were inves-tigated to determine the genetic diversity and structure of 5 consecu-tive selected populations of golden mandarin fish (Siniperca scherzeri Steindachner). The total numbers of alleles, average heterozyosity, and average polymorphism information content showed that the genetic diversity of these breeding populations was decreasing. Additionally, pairwise fixation index FST values among populations and Da values in-creased from F1 generation to subsequent generations (FST values from 0.0221-0.1408; Da values from 0.0608-0.1951). Analysis of molecular variance indicated that most genetic variations arise from individuals within populations (about 92.05%), while variation among populations accounted for only 7.95%. The allele frequency of the loci SC75-220 and SC101-222 bp changed regularly in the 5 breeding generations. Their frequencies were gradually increased and showed an enrichment trend, indicating that there may be genetic correlations between these 2 loci and breeding traits. Our study indicated that microsatellite markers are effective for assessing the genetic variability in the golden mandarin fish breeding program.
Liu, Hong; Zhang, Lanying; Deng, Haijing; Liu, Na; Liu, Cuizhu
2011-10-01
A multi-media bio-PRB reactor was designed to treat groundwater contaminated with petroleum hydrocarbons. After a 208-day bioremediation, combined with the total petroleum hydrocarbons content in the groundwater flowed through the reactor, microbiological characteristics of the PRB reactor including microbes immobilized and its dehydrogenase activity were investigated. TPH was significantly reduced by as much as 65% in the back of the second media layer, whereas in the third layer, the TPH content reached lower than 1 mg l⁻¹. For microbes immobilized on the media, the variations with depth in different media were significantly the same and the regularity was obvious in the forepart of the media, which increased with depth at first and then reduced gradually, while in the back-end, the microbes almost did not have any variations with depth but decreased with the distance. The dehydrogenase activity varied from 2.98 to 16.16 mg TF L⁻¹ h⁻¹ and its distribution illustrated a similar trend with numbers of microbial cell, therefore, the noticeable correlation was found between them.
Venus mesospheric sulfur dioxide measurement retrieved from SOIR on board Venus Express
NASA Astrophysics Data System (ADS)
Mahieux, A.; Vandaele, A. C.; Robert, S.; Wilquet, V.; Drummond, R.; Chamberlain, S.; Belyaev, D.; Bertaux, J. L.
2015-08-01
SOIR on board Venus Express sounds the Venus upper atmosphere using the solar occultation technique. It detects the signature from many Venus atmosphere species, including those of SO2 and CO2. SO2 has a weak absorption structure at 4 μm, from which number density profiles are regularly inferred. SO2 volume mixing ratios (VMR) are calculated from the total number density that are also derived from the SOIR measurements. This work is an update of the previous work by Belyaev et al. (2012), considering the SO2 profiles on a broader altitude range, from 65 to 85 km. Positive detection VMR profiles are presented. In 68% of the occultation spectral datasets, SO2 is detected. The SO2 VMR profiles show a large variability up to two orders of magnitude, on a short term time scales. We present mean VMR profiles for various bins of latitudes, and study the latitudinal variations; the mean latitude variations are much smaller than the short term temporal variations. A permanent minimum showing a weak latitudinal structure is observed. Long term temporal trends are also considered and discussed. The trend observed by Marcq et al. (2013) is not observed in this dataset. Our results are compared to literature data and generally show a good agreement.
Thermal depth profiling of vascular lesions: automated regularization of reconstruction algorithms
NASA Astrophysics Data System (ADS)
Verkruysse, Wim; Choi, Bernard; Zhang, Jenny R.; Kim, Jeehyun; Nelson, J. Stuart
2008-03-01
Pulsed photo-thermal radiometry (PPTR) is a non-invasive, non-contact diagnostic technique used to locate cutaneous chromophores such as melanin (epidermis) and hemoglobin (vascular structures). Clinical utility of PPTR is limited because it typically requires trained user intervention to regularize the inversion solution. Herein, the feasibility of automated regularization was studied. A second objective of this study was to depart from modeling port wine stain PWS, a vascular skin lesion frequently studied with PPTR, as strictly layered structures since this may influence conclusions regarding PPTR reconstruction quality. Average blood vessel depths, diameters and densities derived from histology of 30 PWS patients were used to generate 15 randomized lesion geometries for which we simulated PPTR signals. Reconstruction accuracy for subjective regularization was compared with that for automated regularization methods. The objective regularization approach performed better. However, the average difference was much smaller than the variation between the 15 simulated profiles. Reconstruction quality depended more on the actual profile to be reconstructed than on the reconstruction algorithm or regularization method. Similar, or better, accuracy reconstructions can be achieved with an automated regularization procedure which enhances prospects for user friendly implementation of PPTR to optimize laser therapy on an individual patient basis.
Photon-efficient super-resolution laser radar
NASA Astrophysics Data System (ADS)
Shin, Dongeek; Shapiro, Jeffrey H.; Goyal, Vivek K.
2017-08-01
The resolution achieved in photon-efficient active optical range imaging systems can be low due to non-idealities such as propagation through a diffuse scattering medium. We propose a constrained optimization-based frame- work to address extremes in scarcity of photons and blurring by a forward imaging kernel. We provide two algorithms for the resulting inverse problem: a greedy algorithm, inspired by sparse pursuit algorithms; and a convex optimization heuristic that incorporates image total variation regularization. We demonstrate that our framework outperforms existing deconvolution imaging techniques in terms of peak signal-to-noise ratio. Since our proposed method is able to super-resolve depth features using small numbers of photon counts, it can be useful for observing fine-scale phenomena in remote sensing through a scattering medium and through-the-skin biomedical imaging applications.
Fractional-order TV-L2 model for image denoising
NASA Astrophysics Data System (ADS)
Chen, Dali; Sun, Shenshen; Zhang, Congrong; Chen, YangQuan; Xue, Dingyu
2013-10-01
This paper proposes a new fractional order total variation (TV) denoising method, which provides a much more elegant and effective way of treating problems of the algorithm implementation, ill-posed inverse, regularization parameter selection and blocky effect. Two fractional order TV-L2 models are constructed for image denoising. The majorization-minimization (MM) algorithm is used to decompose these two complex fractional TV optimization problems into a set of linear optimization problems which can be solved by the conjugate gradient algorithm. The final adaptive numerical procedure is given. Finally, we report experimental results which show that the proposed methodology avoids the blocky effect and achieves state-of-the-art performance. In addition, two medical image processing experiments are presented to demonstrate the validity of the proposed methodology.
Singular optimal control and the identically non-regular problem in the calculus of variations
NASA Technical Reports Server (NTRS)
Menon, P. K. A.; Kelley, H. J.; Cliff, E. M.
1985-01-01
A small but interesting class of optimal control problems featuring a scalar control appearing linearly is equivalent to the class of identically nonregular problems in the Calculus of Variations. It is shown that a condition due to Mancill (1950) is equivalent to the generalized Legendre-Clebsch condition for this narrow class of problems.
Robust Low-dose CT Perfusion Deconvolution via Tensor Total-Variation Regularization
Zhang, Shaoting; Chen, Tsuhan; Sanelli, Pina C.
2016-01-01
Acute brain diseases such as acute strokes and transit ischemic attacks are the leading causes of mortality and morbidity worldwide, responsible for 9% of total death every year. ‘Time is brain’ is a widely accepted concept in acute cerebrovascular disease treatment. Efficient and accurate computational framework for hemodynamic parameters estimation can save critical time for thrombolytic therapy. Meanwhile the high level of accumulated radiation dosage due to continuous image acquisition in CT perfusion (CTP) raised concerns on patient safety and public health. However, low-radiation leads to increased noise and artifacts which require more sophisticated and time-consuming algorithms for robust estimation. In this paper, we focus on developing a robust and efficient framework to accurately estimate the perfusion parameters at low radiation dosage. Specifically, we present a tensor total-variation (TTV) technique which fuses the spatial correlation of the vascular structure and the temporal continuation of the blood signal flow. An efficient algorithm is proposed to find the solution with fast convergence and reduced computational complexity. Extensive evaluations are carried out in terms of sensitivity to noise levels, estimation accuracy, contrast preservation, and performed on digital perfusion phantom estimation, as well as in-vivo clinical subjects. Our framework reduces the necessary radiation dose to only 8% of the original level and outperforms the state-of-art algorithms with peak signal-to-noise ratio improved by 32%. It reduces the oscillation in the residue functions, corrects over-estimation of cerebral blood flow (CBF) and under-estimation of mean transit time (MTT), and maintains the distinction between the deficit and normal regions. PMID:25706579
The Effects of Regular Exercise on the Physical Fitness Levels
ERIC Educational Resources Information Center
Kirandi, Ozlem
2016-01-01
The purpose of the present research is investigating the effects of regular exercise on the physical fitness levels among sedentary individuals. The total of 65 sedentary male individuals between the ages of 19-45, who had never exercises regularly in their lives, participated in the present research. Of these participants, 35 wanted to be…
Scientific data interpolation with low dimensional manifold model
NASA Astrophysics Data System (ADS)
Zhu, Wei; Wang, Bao; Barnard, Richard; Hauck, Cory D.; Jenko, Frank; Osher, Stanley
2018-01-01
We propose to apply a low dimensional manifold model to scientific data interpolation from regular and irregular samplings with a significant amount of missing information. The low dimensionality of the patch manifold for general scientific data sets has been used as a regularizer in a variational formulation. The problem is solved via alternating minimization with respect to the manifold and the data set, and the Laplace-Beltrami operator in the Euler-Lagrange equation is discretized using the weighted graph Laplacian. Various scientific data sets from different fields of study are used to illustrate the performance of the proposed algorithm on data compression and interpolation from both regular and irregular samplings.
Adverse Metabolic Response to Regular Exercise: Is It a Rare or Common Occurrence?
Bouchard, Claude; Blair, Steven N.; Church, Timothy S.; Earnest, Conrad P.; Hagberg, James M.; Häkkinen, Keijo; Jenkins, Nathan T.; Karavirta, Laura; Kraus, William E.; Leon, Arthur S.; Rao, D. C.; Sarzynski, Mark A.; Skinner, James S.; Slentz, Cris A.; Rankinen, Tuomo
2012-01-01
Background Individuals differ in the response to regular exercise. Whether there are people who experience adverse changes in cardiovascular and diabetes risk factors has never been addressed. Methodology/Principal Findings An adverse response is defined as an exercise-induced change that worsens a risk factor beyond measurement error and expected day-to-day variation. Sixty subjects were measured three times over a period of three weeks, and variation in resting systolic blood pressure (SBP) and in fasting plasma HDL-cholesterol (HDL-C), triglycerides (TG), and insulin (FI) was quantified. The technical error (TE) defined as the within-subject standard deviation derived from these measurements was computed. An adverse response for a given risk factor was defined as a change that was at least two TEs away from no change but in an adverse direction. Thus an adverse response was recorded if an increase reached 10 mm Hg or more for SBP, 0.42 mmol/L or more for TG, or 24 pmol/L or more for FI or if a decrease reached 0.12 mmol/L or more for HDL-C. Completers from six exercise studies were used in the present analysis: Whites (N = 473) and Blacks (N = 250) from the HERITAGE Family Study; Whites and Blacks from DREW (N = 326), from INFLAME (N = 70), and from STRRIDE (N = 303); and Whites from a University of Maryland cohort (N = 160) and from a University of Jyvaskyla study (N = 105), for a total of 1,687 men and women. Using the above definitions, 126 subjects (8.4%) had an adverse change in FI. Numbers of adverse responders reached 12.2% for SBP, 10.4% for TG, and 13.3% for HDL-C. About 7% of participants experienced adverse responses in two or more risk factors. Conclusions/Significance Adverse responses to regular exercise in cardiovascular and diabetes risk factors occur. Identifying the predictors of such unwarranted responses and how to prevent them will provide the foundation for personalized exercise prescription. PMID:22666405
NASA Astrophysics Data System (ADS)
Gui, Luying; He, Jian; Qiu, Yudong; Yang, Xiaoping
2017-01-01
This paper presents a variational level set approach to segment lesions with compact shapes on medical images. In this study, we investigate to address the problem of segmentation for hepatocellular carcinoma which are usually of various shapes, variable intensities, and weak boundaries. An efficient constraint which is called the isoperimetric constraint to describe the compactness of shapes is applied in this method. In addition, in order to ensure the precise segmentation and stable movement of the level set, a distance regularization is also implemented in the proposed variational framework. Our method is applied to segment various hepatocellular carcinoma regions on Computed Tomography images with promising results. Comparison results also prove that the proposed method is more accurate than other two approaches.
Abe, Takafumi; Kamada, Masamitsu; Kitayuguchi, Jun; Okada, Shinpei; Mutoh, Yoshiteru; Uchio, Yuji
2017-03-14
Musculoskeletal pain (MSP) is a commonly reported symptom in youth sports players. Some sports-related risk factors have been reported, but previous studies on extrinsic risk factors did not focus on management of team members (e.g., regular or non-regular players, number of players) for reducing sports-related MSP. This study aimed to examine the association of playing status (regular or non-regular players) and team status (fewer or more teammates) with MSP in youth team sports. A total of 632 team sports players (age: 12-18 years) in public schools in Unnan, Japan completed a self-administered questionnaire to determine MSP (overall, upper limbs, lower back, and lower limbs) and playing status (regular or non-regular players). Team status was calculated as follows: teammate quantity index (TQI) = [number of teammates in their grade]/[required number of players for the sport]. Associations between the prevalence of pain and joint categories of playing and team status were examined by multivariable-adjusted Poisson regression. A total of 272 (44.3%) participants had MSP at least several times a week in at least one part of the body. When divided by playing or team status, 140 (47.0%) regular and 130 (41.7%) non-regular players had MSP, whereas 142 (47.0%) players with fewer teammates (lower TQI) and 127 (41.8%) players with more teammates (higher TQI) had MSP. When analyzed jointly, regular players with fewer teammates had a higher prevalence of lower back pain compared with non-regular players with more teammates (21.3% vs 8.3%; prevalence ratio = 2.08 [95% confidence interval 1.07-4.02]). The prevalence of MSP was highest in regular players with fewer teammates for all other pain outcomes, but this was not significant. Regular players with fewer teammates have a higher risk of lower back pain. Future longitudinal investigations are required.
Athavale, Prashant; Xu, Robert; Radau, Perry; Nachman, Adrian; Wright, Graham A
2015-07-01
Images consist of structures of varying scales: large scale structures such as flat regions, and small scale structures such as noise, textures, and rapidly oscillatory patterns. In the hierarchical (BV, L(2)) image decomposition, Tadmor, et al. (2004) start with extracting coarse scale structures from a given image, and successively extract finer structures from the residuals in each step of the iterative decomposition. We propose to begin instead by extracting the finest structures from the given image and then proceed to extract increasingly coarser structures. In most images, noise could be considered as a fine scale structure. Thus, starting the image decomposition with finer scales, rather than large scales, leads to fast denoising. We note that our approach turns out to be equivalent to the nonstationary regularization in Scherzer and Weickert (2000). The continuous limit of this procedure leads to a time-scaled version of total variation flow. Motivated by specific clinical applications, we introduce an image depending weight in the regularization functional, and study the corresponding weighted TV flow. We show that the edge-preserving property of the multiscale representation of an input image obtained with the weighted TV flow can be enhanced and localized by appropriate choice of the weight. We use this in developing an efficient and edge-preserving denoising algorithm with control on speed and localization properties. We examine analytical properties of the weighted TV flow that give precise information about the denoising speed and the rate of change of energy of the images. An additional contribution of the paper is to use the images obtained at different scales for robust multiscale registration. We show that the inherently multiscale nature of the weighted TV flow improved performance for registration of noisy cardiac MRI images, compared to other methods such as bilateral or Gaussian filtering. A clinical application of the multiscale registration algorithm is also demonstrated for aligning viability assessment magnetic resonance (MR) images from 8 patients with previous myocardial infarctions. Copyright © 2015. Published by Elsevier B.V.
Tobacco cessation quitlines in North America: a descriptive study
Cummins, Sharon E; Bailey, Linda; Campbell, Sharon; Koon‐Kirby, Carrie; Zhu, Shu‐Hong
2007-01-01
Background Quitlines have become an integral part of tobacco control efforts in the United States and Canada. The demonstrated efficacy and the convenience of telephone based counselling have led to the fast adoption of quitlines, to the point of near universal access in North America. However, information on how these quitlines operate in actual practice is not often readily available. Objectives This study describes quitline practice in North America and examines commonalities and differences across quitlines. It will serve as a source of reference for practitioners and researchers, with the aim of furthering service quality and promoting continued innovation. Design A self administered questionnaire survey of large, publicly funded quitlines in the United States and Canada. A total of 52 US quitlines and 10 Canadian quitlines participated. Descriptive statistics are provided regarding quitline operational structures, clinical services, quality assurance procedures, funding sources and utilisation rates. Results Clinical services for the 62 state/provincial quitlines are supplied by a total of 26 service providers. Nine providers operate multiple quitlines, creating greater consistency in operation than would otherwise be expected. Most quitlines offer services over extended hours (mean 96 hours/week) and have multiple language capabilities. Most (98%) use proactive multisession counselling—a key feature of protocols tested in previous experimental trials. Almost all quitlines have extensive training programmes (>60 hours) for counselling staff, and over 70% conduct regular evaluation of outcomes. About half of quitlines use the internet to provide cessation information. A little over a third of US quitlines distribute free cessation medications to eligible callers. The average utilisation rate of the US state quitlines in the 2004–5 fiscal year was about 1.0% across states, with a strong correlation between the funding level of the quitlines and the smokers' utilisation of them (r = 0.74, p<0.001). Conclusions Quitlines in North America display core commonalities: they have adopted the principles of multisession proactive counselling and they conduct regular outcome evaluation. Yet variations, tested and untested, exist. Standardised reporting procedures would be of benefit to the field. Shared discussion of the rationale behind variations can inform future decision making for all North American quitlines. PMID:18048639
NASA Astrophysics Data System (ADS)
Chen, Yingxuan; Yin, Fang-Fang; Zhang, Yawei; Zhang, You; Ren, Lei
2018-04-01
Purpose: compressed sensing reconstruction using total variation (TV) tends to over-smooth the edge information by uniformly penalizing the image gradient. The goal of this study is to develop a novel prior contour based TV (PCTV) method to enhance the edge information in compressed sensing reconstruction for CBCT. Methods: the edge information is extracted from prior planning-CT via edge detection. Prior CT is first registered with on-board CBCT reconstructed with TV method through rigid or deformable registration. The edge contours in prior-CT is then mapped to CBCT and used as the weight map for TV regularization to enhance edge information in CBCT reconstruction. The PCTV method was evaluated using extended-cardiac-torso (XCAT) phantom, physical CatPhan phantom and brain patient data. Results were compared with both TV and edge preserving TV (EPTV) methods which are commonly used for limited projection CBCT reconstruction. Relative error was used to calculate pixel value difference and edge cross correlation was defined as the similarity of edge information between reconstructed images and ground truth in the quantitative evaluation. Results: compared to TV and EPTV, PCTV enhanced the edge information of bone, lung vessels and tumor in XCAT reconstruction and complex bony structures in brain patient CBCT. In XCAT study using 45 half-fan CBCT projections, compared with ground truth, relative errors were 1.5%, 0.7% and 0.3% and edge cross correlations were 0.66, 0.72 and 0.78 for TV, EPTV and PCTV, respectively. PCTV is more robust to the projection number reduction. Edge enhancement was reduced slightly with noisy projections but PCTV was still superior to other methods. PCTV can maintain resolution while reducing the noise in the low mAs CatPhan reconstruction. Low contrast edges were preserved better with PCTV compared with TV and EPTV. Conclusion: PCTV preserved edge information as well as reduced streak artifacts and noise in low dose CBCT reconstruction. PCTV is superior to TV and EPTV methods in edge enhancement, which can potentially improve the localization accuracy in radiation therapy.
An end to endless forms: epistasis, phenotype distribution bias, and nonuniform evolution.
Borenstein, Elhanan; Krakauer, David C
2008-10-01
Studies of the evolution of development characterize the way in which gene regulatory dynamics during ontogeny constructs and channels phenotypic variation. These studies have identified a number of evolutionary regularities: (1) phenotypes occupy only a small subspace of possible phenotypes, (2) the influence of mutation is not uniform and is often canalized, and (3) a great deal of morphological variation evolved early in the history of multicellular life. An important implication of these studies is that diversity is largely the outcome of the evolution of gene regulation rather than the emergence of new, structural genes. Using a simple model that considers a generic property of developmental maps-the interaction between multiple genetic elements and the nonlinearity of gene interaction in shaping phenotypic traits-we are able to recover many of these empirical regularities. We show that visible phenotypes represent only a small fraction of possibilities. Epistasis ensures that phenotypes are highly clustered in morphospace and that the most frequent phenotypes are the most similar. We perform phylogenetic analyses on an evolving, developmental model and find that species become more alike through time, whereas higher-level grades have a tendency to diverge. Ancestral phenotypes, produced by early developmental programs with a low level of gene interaction, are found to span a significantly greater volume of the total phenotypic space than derived taxa. We suggest that early and late evolution have a different character that we classify into micro- and macroevolutionary configurations. These findings complement the view of development as a key component in the production of endless forms and highlight the crucial role of development in constraining biotic diversity and evolutionary trajectories.
Spatial pulses of water inputs in deciduous and hemlock forest stands
NASA Astrophysics Data System (ADS)
Guswa, A. J.; Mussehl, M.; Pecht, A.; Spence, C.
2010-12-01
Trees intercept and redistribute precipitation in time and space. While spatial patterns of throughfall are challenging to link to plant and canopy characteristics, many studies have shown that the spatial patterns persist through time. This persistence leads to wet and dry spots under the trees, creating spatial pulses of moisture that can affect infiltration, transpiration, and biogeochemical processes. In the northeast, the invasive hemlock woolly adelgid poses a significant threat to eastern hemlock (Tsuga canadensis), and replacement of hemlock forests by other species, such as birch, maple, and oak, has the potential to alter throughfall patterns and hydrologic processes. During the summers of 2009 and 2010, we measured throughfall in both hemlock and deciduous plots to assess its spatial distribution and temporal persistence. From 3 June to 25 July 2009, we measured throughfall in one hemlock and one deciduous plot over fourteen events with rainfall totaling 311 mm. From 8 June through 28 July 2010, we measured throughfall in the same two plots plus an additional hemlock stand and a young black birch stand, and rainfall totaled 148 mm over eight events. Averaged over space and time, throughfall was 81% of open precipitation in the hemlock stands, 88% in the mixed deciduous stand, and 100% in the young black birch stand. On an event basis, spatial coefficients of variation are similar among the stands and range from 11% to 49% for rain events greater than 5 mm. With the exception of very light events, coefficients of variation are insensitive to precipitation amount. Spatial patterns of throughfall persist through time, and seasonal coefficients of variation range from 13% to 33%. All stands indicate localized concentrations of water inputs, and there were individual collectors in the deciduous stands that regularly received more than twice the stand-average throughfall.
Liu, Yanjun; Zhou, Qingxin; Xu, Jie; Xue, Yong; Liu, Xiaofang; Wang, Jingfeng; Xue, Changhu
2016-02-01
The objective of this study is to investigate the levels, inter-species-specific, locational differences and seasonal variations of vanadium in sea cucumbers and to validate further several potential factors controlling the distribution of metals in sea cucumbers. Vanadium levels were evaluated in samples of edible sea cucumbers and were demonstrated exhibit differences in different seasons, species and sampling sites. High vanadium concentrations were measured in the sea cucumbers, and all of the vanadium detected was in an organic form. Mean vanadium concentrations were considerably higher in the blood (sea cucumber) than in the other studied tissues. The highest concentration of vanadium (2.56 μg g(-1)), as well as a higher degree of organic vanadium (85.5 %), was observed in the Holothuria scabra samples compared with all other samples. Vanadium levels in Apostichopus japonicus from Bohai Bay and Yellow Sea have marked seasonal variations. Average values of 1.09 μg g(-1) of total vanadium and 0.79 μg g(-1) of organic vanadium were obtained in various species of sea cucumbers. Significant positive correlations between vanadium in the seawater and V org in the sea cucumber (r = 81.67 %, p = 0.00), as well as between vanadium in the sediment and V org in the sea cucumber (r = 77.98 %, p = 0.00), were observed. Vanadium concentrations depend on the seasons (salinity, temperature), species, sampling sites and seawater environment (seawater, sediment). Given the adverse toxicological effects of inorganic vanadium and positive roles in controlling the development of diabetes in humans, a regular monitoring programme of vanadium content in edible sea cucumbers can be recommended.
NASA Astrophysics Data System (ADS)
Ozolins, Vidvuds; Lai, Rongjie; Caflisch, Russel; Osher, Stanley
2014-03-01
We will describe a general formalism for obtaining spatially localized (``sparse'') solutions to a class of problems in mathematical physics, which can be recast as variational optimization problems, such as the important case of Schrödinger's equation in quantum mechanics. Sparsity is achieved by adding an L1 regularization term to the variational principle, which is shown to yield solutions with compact support (``compressed modes''). Linear combinations of these modes approximate the eigenvalue spectrum and eigenfunctions in a systematically improvable manner, and the localization properties of compressed modes make them an attractive choice for use with efficient numerical algorithms that scale linearly with the problem size. In addition, we introduce an L1 regularized variational framework for developing a spatially localized basis, compressed plane waves (CPWs), that spans the eigenspace of a differential operator, for instance, the Laplace operator. Our approach generalizes the concept of plane waves to an orthogonal real-space basis with multiresolution capabilities. Supported by NSF Award DMR-1106024 (VO), DOE Contract No. DE-FG02-05ER25710 (RC) and ONR Grant No. N00014-11-1-719 (SO).
ERIC Educational Resources Information Center
Mhlolo, Michael Kainose
2017-01-01
Post independent reforms in South Africa moved from separate education for the gifted learners to inclusive education in regular classrooms. A specific concern that has been totally ignored since then is whether or not the regular classroom would expand or limit the gifted child's creativity. This study aimed at investigating the extent to which…
Do Lower Calorie or Lower Fat Foods Have More Sodium Than Their Regular Counterparts?
John, Katherine A.; Maalouf, Joyce; B. Barsness, Christina; Yuan, Keming; Cogswell, Mary E.; Gunn, Janelle P.
2016-01-01
The objective of this study was to compare the sodium content of a regular food and its lower calorie/fat counterpart. Four food categories, among the top 20 contributing the most sodium to the US diet, met the criteria of having the most matches between regular foods and their lower calorie/fat counterparts. A protocol was used to search websites to create a list of “matches”, a regular and comparable lower calorie/fat food(s) under each brand. Nutrient information was recorded and analyzed for matches. In total, 283 matches were identified across four food categories: savory snacks (N = 44), cheese (N = 105), salad dressings (N = 90), and soups (N = 44). As expected, foods modified from their regular versions had significantly reduced average fat (total fat and saturated fat) and caloric profiles. Mean sodium content among modified salad dressings and cheeses was on average 8%–12% higher, while sodium content did not change with modification of savory snacks. Modified soups had significantly lower mean sodium content than their regular versions (28%–38%). Consumers trying to maintain a healthy diet should consider that sodium content may vary in foods modified to be lower in calories/fat. PMID:27548218
Singularities of the quad curl problem
NASA Astrophysics Data System (ADS)
Nicaise, Serge
2018-04-01
We consider the quad curl problem in smooth and non smooth domains of the space. We first give an augmented variational formulation equivalent to the one from [25] if the datum is divergence free. We describe the singularities of the variational space which correspond to the ones of the Maxwell system with perfectly conducting boundary conditions. The edge and corner singularities of the solution of the corresponding boundary value problem with smooth data are also characterized. We finally obtain some regularity results of the variational solution.
Zen, Andrea; Luo, Ye; Sorella, Sandro; Guidoni, Leonardo
2014-01-01
Quantum Monte Carlo methods are accurate and promising many body techniques for electronic structure calculations which, in the last years, are encountering a growing interest thanks to their favorable scaling with the system size and their efficient parallelization, particularly suited for the modern high performance computing facilities. The ansatz of the wave function and its variational flexibility are crucial points for both the accurate description of molecular properties and the capabilities of the method to tackle large systems. In this paper, we extensively analyze, using different variational ansatzes, several properties of the water molecule, namely, the total energy, the dipole and quadrupole momenta, the ionization and atomization energies, the equilibrium configuration, and the harmonic and fundamental frequencies of vibration. The investigation mainly focuses on variational Monte Carlo calculations, although several lattice regularized diffusion Monte Carlo calculations are also reported. Through a systematic study, we provide a useful guide to the choice of the wave function, the pseudopotential, and the basis set for QMC calculations. We also introduce a new method for the computation of forces with finite variance on open systems and a new strategy for the definition of the atomic orbitals involved in the Jastrow-Antisymmetrised Geminal power wave function, in order to drastically reduce the number of variational parameters. This scheme significantly improves the efficiency of QMC energy minimization in case of large basis sets. PMID:24526929
A TVSCAD approach for image deblurring with impulsive noise
NASA Astrophysics Data System (ADS)
Gu, Guoyong; Jiang, Suhong; Yang, Junfeng
2017-12-01
We consider image deblurring problem in the presence of impulsive noise. It is known that total variation (TV) regularization with L1-norm penalized data fitting (TVL1 for short) works reasonably well only when the level of impulsive noise is relatively low. For high level impulsive noise, TVL1 works poorly. The reason is that all data, both corrupted and noise free, are equally penalized in data fitting, leading to insurmountable difficulty in balancing regularization and data fitting. In this paper, we propose to combine TV regularization with nonconvex smoothly clipped absolute deviation (SCAD) penalty for data fitting (TVSCAD for short). Our motivation is simply that data fitting should be enforced only when an observed data is not severely corrupted, while for those data more likely to be severely corrupted, less or even null penalization should be enforced. A difference of convex functions algorithm is adopted to solve the nonconvex TVSCAD model, resulting in solving a sequence of TVL1-equivalent problems, each of which can then be solved efficiently by the alternating direction method of multipliers. Theoretically, we establish global convergence to a critical point of the nonconvex objective function. The R-linear and at-least-sublinear convergence rate results are derived for the cases of anisotropic and isotropic TV, respectively. Numerically, experimental results are given to show that the TVSCAD approach improves those of the TVL1 significantly, especially for cases with high level impulsive noise, and is comparable with the recently proposed iteratively corrected TVL1 method (Bai et al 2016 Inverse Problems 32 085004).
Cortical dipole imaging using truncated total least squares considering transfer matrix error.
Hori, Junichi; Takeuchi, Kosuke
2013-01-01
Cortical dipole imaging has been proposed as a method to visualize electroencephalogram in high spatial resolution. We investigated the inverse technique of cortical dipole imaging using a truncated total least squares (TTLS). The TTLS is a regularization technique to reduce the influence from both the measurement noise and the transfer matrix error caused by the head model distortion. The estimation of the regularization parameter was also investigated based on L-curve. The computer simulation suggested that the estimation accuracy was improved by the TTLS compared with Tikhonov regularization. The proposed method was applied to human experimental data of visual evoked potentials. We confirmed the TTLS provided the high spatial resolution of cortical dipole imaging.
Paraoxonase activity in athletic adolescents.
Cakmak, Alpay; Zeyrek, Dost; Atas, Ali; Erel, Ozcan
2010-02-01
Regular physical activity may play a protective role against cardiovascular disease in adults, and paraoxonase activity may serve to mediate this effect. This study compared paraoxonase activity and that of other antioxidative agents in adolescent athletes compared with inactive youth. Paraoxonase level was 177.32 +/- 100.10 (U/L) in children with regular physical activity and 98.11 +/- 40.92 (U/L) in the control group (P < 0.0001). The levels of total antioxidative capacity, total oxidative status, oxidative stress index, and lipid hydroperoxide were significantly higher in the athlete group compared with controls (P < 0.0001). Paraoxonase activity was found to be greater in adolescent athletes, suggesting that regular exercise might provide a cardio-protective effect by this means.
Ormoneit, D
1999-12-01
We consider the training of neural networks in cases where the nonlinear relationship of interest gradually changes over time. One possibility to deal with this problem is by regularization where a variation penalty is added to the usual mean squared error criterion. To learn the regularized network weights we suggest the Iterative Extended Kalman Filter (IEKF) as a learning rule, which may be derived from a Bayesian perspective on the regularization problem. A primary application of our algorithm is in financial derivatives pricing, where neural networks may be used to model the dependency of the derivatives' price on one or several underlying assets. After giving a brief introduction to the problem of derivatives pricing we present experiments with German stock index options data showing that a regularized neural network trained with the IEKF outperforms several benchmark models and alternative learning procedures. In particular, the performance may be greatly improved using a newly designed neural network architecture that accounts for no-arbitrage pricing restrictions.
Scientific data interpolation with low dimensional manifold model
Zhu, Wei; Wang, Bao; Barnard, Richard C.; ...
2017-09-28
Here, we propose to apply a low dimensional manifold model to scientific data interpolation from regular and irregular samplings with a significant amount of missing information. The low dimensionality of the patch manifold for general scientific data sets has been used as a regularizer in a variational formulation. The problem is solved via alternating minimization with respect to the manifold and the data set, and the Laplace–Beltrami operator in the Euler–Lagrange equation is discretized using the weighted graph Laplacian. Various scientific data sets from different fields of study are used to illustrate the performance of the proposed algorithm on datamore » compression and interpolation from both regular and irregular samplings.« less
Information transmission using non-poisson regular firing.
Koyama, Shinsuke; Omi, Takahiro; Kass, Robert E; Shinomoto, Shigeru
2013-04-01
In many cortical areas, neural spike trains do not follow a Poisson process. In this study, we investigate a possible benefit of non-Poisson spiking for information transmission by studying the minimal rate fluctuation that can be detected by a Bayesian estimator. The idea is that an inhomogeneous Poisson process may make it difficult for downstream decoders to resolve subtle changes in rate fluctuation, but by using a more regular non-Poisson process, the nervous system can make rate fluctuations easier to detect. We evaluate the degree to which regular firing reduces the rate fluctuation detection threshold. We find that the threshold for detection is reduced in proportion to the coefficient of variation of interspike intervals.
Scientific data interpolation with low dimensional manifold model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhu, Wei; Wang, Bao; Barnard, Richard C.
Here, we propose to apply a low dimensional manifold model to scientific data interpolation from regular and irregular samplings with a significant amount of missing information. The low dimensionality of the patch manifold for general scientific data sets has been used as a regularizer in a variational formulation. The problem is solved via alternating minimization with respect to the manifold and the data set, and the Laplace–Beltrami operator in the Euler–Lagrange equation is discretized using the weighted graph Laplacian. Various scientific data sets from different fields of study are used to illustrate the performance of the proposed algorithm on datamore » compression and interpolation from both regular and irregular samplings.« less
Estimation of Noise Properties for TV-regularized Image Reconstruction in Computed Tomography
Sánchez, Adrian A.
2016-01-01
A method for predicting the image covariance resulting from total-variation-penalized iterative image reconstruction (TV-penalized IIR) is presented and demonstrated in a variety of contexts. The method is validated against the sample covariance from statistical noise realizations for a small image using a variety of comparison metrics. Potential applications for the covariance approximation include investigation of image properties such as object- and signal-dependence of noise, and noise stationarity. These applications are demonstrated, along with the construction of image pixel variance maps for two-dimensional 128 × 128 pixel images. Methods for extending the proposed covariance approximation to larger images and improving computational efficiency are discussed. Future work will apply the developed methodology to the construction of task-based image quality metrics such as the Hotelling observer detectability for TV-based IIR. PMID:26308968
Sensing of the atmospheric variation using Low Cost GNSS Receiver
NASA Astrophysics Data System (ADS)
Bramanto, Brian; Gumilar, Irwan; Sidiq, Teguh P.; Kuntjoro, Wedyanto; Tampubolon, Daniel A.
2018-05-01
As the GNSS signals transmitted through the atmosphere, they are delayed by interference of TEC (Total Electron Content) in the ionosphere and water vapor in the troposphere. By using inverse-problem, name GNSS Meteorology, those parameters can be obtained precisely and several researches has approved and supported that method. However, the geodetic GNSS receivers are relatively high cost (30,000 to 70,000 each) to be established on a regular and uniform network. This research aims to investigate the potential use of low cost GNSS receiver (less than 2,000) to observe the atmospheric dynamic both in ionosphere and troposphere. Results indicated that low cost GNSS receiver is a promising tools to sensing the atmospheric dynamic, however, further processing is needed to enhance the data quality. It is found that both of ionosphere and troposphere dynamic has diurnal periodic component.
Jiang, Xiaolei; Zhang, Li; Zhang, Ran; Yin, Hongxia; Wang, Zhenchang
2015-01-01
X-ray grating interferometry offers a novel framework for the study of weakly absorbing samples. Three kinds of information, that is, the attenuation, differential phase contrast (DPC), and dark-field images, can be obtained after a single scanning, providing additional and complementary information to the conventional attenuation image. Phase shifts of X-rays are measured by the DPC method; hence, DPC-CT reconstructs refraction indexes rather than attenuation coefficients. In this work, we propose an explicit filtering based low-dose differential phase reconstruction algorithm, which enables reconstruction from reduced scanning without artifacts. The algorithm adopts a differential algebraic reconstruction technique (DART) with the explicit filtering based sparse regularization rather than the commonly used total variation (TV) method. Both the numerical simulation and the biological sample experiment demonstrate the feasibility of the proposed algorithm.
Zhang, Li; Zhang, Ran; Yin, Hongxia; Wang, Zhenchang
2015-01-01
X-ray grating interferometry offers a novel framework for the study of weakly absorbing samples. Three kinds of information, that is, the attenuation, differential phase contrast (DPC), and dark-field images, can be obtained after a single scanning, providing additional and complementary information to the conventional attenuation image. Phase shifts of X-rays are measured by the DPC method; hence, DPC-CT reconstructs refraction indexes rather than attenuation coefficients. In this work, we propose an explicit filtering based low-dose differential phase reconstruction algorithm, which enables reconstruction from reduced scanning without artifacts. The algorithm adopts a differential algebraic reconstruction technique (DART) with the explicit filtering based sparse regularization rather than the commonly used total variation (TV) method. Both the numerical simulation and the biological sample experiment demonstrate the feasibility of the proposed algorithm. PMID:26089971
Estimation of noise properties for TV-regularized image reconstruction in computed tomography.
Sánchez, Adrian A
2015-09-21
A method for predicting the image covariance resulting from total-variation-penalized iterative image reconstruction (TV-penalized IIR) is presented and demonstrated in a variety of contexts. The method is validated against the sample covariance from statistical noise realizations for a small image using a variety of comparison metrics. Potential applications for the covariance approximation include investigation of image properties such as object- and signal-dependence of noise, and noise stationarity. These applications are demonstrated, along with the construction of image pixel variance maps for two-dimensional 128 × 128 pixel images. Methods for extending the proposed covariance approximation to larger images and improving computational efficiency are discussed. Future work will apply the developed methodology to the construction of task-based image quality metrics such as the Hotelling observer detectability for TV-based IIR.
Estimation of noise properties for TV-regularized image reconstruction in computed tomography
NASA Astrophysics Data System (ADS)
Sánchez, Adrian A.
2015-09-01
A method for predicting the image covariance resulting from total-variation-penalized iterative image reconstruction (TV-penalized IIR) is presented and demonstrated in a variety of contexts. The method is validated against the sample covariance from statistical noise realizations for a small image using a variety of comparison metrics. Potential applications for the covariance approximation include investigation of image properties such as object- and signal-dependence of noise, and noise stationarity. These applications are demonstrated, along with the construction of image pixel variance maps for two-dimensional 128× 128 pixel images. Methods for extending the proposed covariance approximation to larger images and improving computational efficiency are discussed. Future work will apply the developed methodology to the construction of task-based image quality metrics such as the Hotelling observer detectability for TV-based IIR.
NASA Astrophysics Data System (ADS)
Panagiotopoulou, Antigoni; Bratsolis, Emmanuel; Charou, Eleni; Perantonis, Stavros
2017-10-01
The detailed three-dimensional modeling of buildings utilizing elevation data, such as those provided by light detection and ranging (LiDAR) airborne scanners, is increasingly demanded today. There are certain application requirements and available datasets to which any research effort has to be adapted. Our dataset includes aerial orthophotos, with a spatial resolution 20 cm, and a digital surface model generated from LiDAR, with a spatial resolution 1 m and an elevation resolution 20 cm, from an area of Athens, Greece. The aerial images are fused with LiDAR, and we classify these data with a multilayer feedforward neural network for building block extraction. The innovation of our approach lies in the preprocessing step in which the original LiDAR data are super-resolution (SR) reconstructed by means of a stochastic regularized technique before their fusion with the aerial images takes place. The Lorentzian estimator combined with the bilateral total variation regularization performs the SR reconstruction. We evaluate the performance of our approach against that of fusing unprocessed LiDAR data with aerial images. We present the classified images and the statistical measures confusion matrix, kappa coefficient, and overall accuracy. The results demonstrate that our approach predominates over that of fusing unprocessed LiDAR data with aerial images.
International Intercomparison of Regular Transmittance Scales
NASA Astrophysics Data System (ADS)
Eckerle, K. L.; Sutter, E.; Freeman, G. H. C.; Andor, G.; Fillinger, L.
1990-01-01
An intercomparison of the regular spectral transmittance scales of NIST, Gaithersburg, MD (USA); PTB, Braunschweig (FRG); NPL, Teddington, Middlesex (UK); and OMH, Budapest (H) was accomplished using three sets of neutral glass filters with transmittances ranging from approximately 0.92 to 0.001. The difference between the results from the reference spectrophotometers of the laboratories was generally smaller than the total uncertainty of the interchange. The relative total uncertainty ranges from 0.05% to 0.75% for transmittances from 0.92 to 0.001. The sample-induced error was large - contributing 40% or more of the total except in a few cases.
Reboussin, Beth A; Song, Eun-Young; Shrestha, Anshu; Lohman, Kurt K; Wolfson, Mark
2006-07-27
The aim of this paper is to shed light on the nature of underage problem drinking by using an empirically based method to characterize the variation in patterns of drinking in a community sample of underage drinkers. A total of 4056 16-20-year-old current drinkers from 212 communities in the US were surveyed by telephone as part of the National Evaluation of the Enforcing Underage Drinking Laws (EUDL) Program. Latent class models were used to create homogenous groups of drinkers with similar drinking patterns defined by multiple indicators of drinking behaviors and alcohol-related problems. Two types of underage problem drinkers were identified; risky drinkers (30%) and regular drinkers (27%). The most prominent behaviors among both types of underage problem drinkers were binge drinking and getting drunk. Being male, other drug use, early onset drinking and beliefs about friends drinking and getting drunk were all associated with an increased risk of being a problem drinker after adjustment for other factors. Beliefs that most friends drink and current marijuana use were the strongest predictors of both risky problem drinking (OR=4.0; 95% CI=3.1, 5.1 and OR=4.0; 95% CI=2.8, 5.6, respectively) and regular problem drinking (OR=10.8; 95% CI=7.0, 16.7 and OR=10.2; 95% CI=6.9, 15.2). Young adulthood (ages 18-20) was significantly associated with regular problem drinking but not risky problem drinking. The belief that most friends get drunk weekly was the strongest discriminator of risky and regular problem drinking patterns (OR=5.3; 95% CI=3.9, 7.1). These findings suggest that underage problem drinking is most strongly characterized by heavy drinking behaviors which can emerge in late adolescence and underscores its association with perceptions regarding friends drinking behaviors and illicit drug use.
Reboussin, Beth A.; Song, Eun-Young; Shrestha, Anshu; Lohman, Kurt K.; Wolfson, Mark
2008-01-01
The aim of this paper is to shed light on the nature of underage problem drinking by using an empirically based method to characterize the variation in patterns of drinking in a community sample of underage drinkers. A total of 4056 16−20-year-old current drinkers from 212 communities in the US were surveyed by telephone as part of the National Evaluation of the Enforcing Underage Drinking Laws (EUDL) Program. Latent class models were used to create homogenous groups of drinkers with similar drinking patterns defined by multiple indicators of drinking behaviors and alcohol-related problems. Two types of underage problem drinkers were identified; risky drinkers (30%) and regular drinkers (27%). The most prominent behaviors among both types of underage problem drinkers were binge drinking and getting drunk. Being male, other drug use, early onset drinking and beliefs about friends drinking and getting drunk were all associated with an increased risk of being a problem drinker after adjustment for other factors. Beliefs that most friends drink and current marijuana use were the strongest predictors of both risky problem drinking (OR = 4.0; 95% CI = 3.1, 5.1 and OR = 4.0; 95% CI = 2.8, 5.6, respectively) and regular problem drinking (OR = 10.8; 95% CI = 7.0, 16.7 and OR = 10.2; 95% CI = 6.9, 15.2). Young adulthood (ages 18−20) was significantly associated with regular problem drinking but not risky problem drinking. The belief that most friends get drunk weekly was the strongest discriminator of risky and regular problem drinking patterns (OR = 5.3; 95% CI = 3.9, 7.1). These findings suggest that underage problem drinking is most strongly characterized by heavy drinking behaviors which can emerge in late adolescence and underscores its association with perceptions regarding friends drinking behaviors and illicit drug use. PMID:16359829
Acoustic and elastic waveform inversion best practices
NASA Astrophysics Data System (ADS)
Modrak, Ryan T.
Reaching the global minimum of a waveform misfit function requires careful choices about the nonlinear optimization, preconditioning and regularization methods underlying an inversion. Because waveform inversion problems are susceptible to erratic convergence, one or two test cases are not enough to reliably inform such decisions. We identify best practices instead using two global, one regional and four near-surface acoustic test problems. To obtain meaningful quantitative comparisons, we carry out hundreds acoustic inversions, varying one aspect of the implementation at a time. Comparing nonlinear optimization algorithms, we find that L-BFGS provides computational savings over nonlinear conjugate gradient methods in a wide variety of test cases. Comparing preconditioners, we show that a new diagonal scaling derived from the adjoint of the forward operator provides better performance than two conventional preconditioning schemes. Comparing regularization strategies, we find that projection, convolution, Tikhonov regularization, and total variation regularization are effective in different contexts. Besides these issues, reliability and efficiency in waveform inversion depend on close numerical attention and care. Implementation details have a strong effect on computational cost, regardless of the chosen material parameterization or nonlinear optimization algorithm. Building on the acoustic inversion results, we carry out elastic experiments with four test problems, three objective functions, and four material parameterizations. The choice of parameterization for isotropic elastic media is found to be more complicated than previous studies suggests, with "wavespeed-like'' parameters performing well with phase-based objective functions and Lame parameters performing well with amplitude-based objective functions. Reliability and efficiency can be even harder to achieve in transversely isotropic elastic inversions because rotation angle parameters describing fast-axis direction are difficult to recover. Using Voigt or Chen-Tromp parameters avoids the need to include rotation angles explicitly and provides an effective strategy for anisotropic inversion. The need for flexible and portable workflow management tools for seismic inversion also poses a major challenge. In a final chapter, the software used to the carry out the above experiments is described and instructions for reproducing experimental results are given.
NASA Astrophysics Data System (ADS)
Tsvetkov, AB; Pavlova, LD; Fryanov, VN
2018-03-01
The results of numerical simulation of the stress–strain state in a rock block and surrounding mass mass under multi-roadway preparation to mining are presented. The numerical solutions obtained by the nonlinear modeling and using the constitutive relations of the theory of elasticity are compared. The regularities of the stress distribution in the vicinity of the pillars located in the zone of the abutment pressure of are found.
Studies on the epidemiology and control of seasonal conjunctivitis and trachoma in southern Morocco*
Reinhards, J.; Weber, A.; Nižetič, B.; Kupka, K.; Maxwell-Lyons, F.
1968-01-01
It has been noted in many parts of the world that bacterial conjunctivitis is a major cause of total or partial loss of vision. In addition, trachoma is aggravated if there are associated bacterial infections and these lead to more frequent corneal complications. In the trials described the seasonal variation of bacterial infections was studied in addition to trachoma in 3 pilot sectors in southern Morocco. The frequency of complications and late sequelae from these infections in the whole population of these sectors was also studied. In one of the sectors 3 different methods of limiting the regular seasonal increase in bacterial infections and of curing trachoma were evaluated separately or in combination. These included the effect of fly-suppression on the transmission of infection, a possible method of chemoprophylaxis, and intermittent treatment with chlortetracycline ointment. The effect of the latter, when applied to a whole population group by auxiliary personnel, was compared with the long-term effect of self-treatment, in this and the two other sectors. The total observation period covered 12 years. PMID:5304804
Diet compositions and trophic guild structure of the eastern Chukchi Sea demersal fish community
NASA Astrophysics Data System (ADS)
Whitehouse, George A.; Buckley, Troy W.; Danielson, Seth L.
2017-01-01
Fishes are an important link in Arctic marine food webs, connecting production of lower trophic levels to apex predators. We analyzed 1773 stomach samples from 39 fish species collected during a bottom trawl survey of the eastern Chukchi Sea in the summer of 2012. We used hierarchical cluster analysis of diet dissimilarities on 21 of the most well sampled species to identify four distinct trophic guilds: gammarid amphipod consumers, benthic invertebrate generalists, fish and shrimp consumers, and zooplankton consumers. The trophic guilds reflect dominant prey types in predator diets. We used constrained analysis of principal coordinates (CAP) to determine if variation within the composite guild diets could be explained by a suite of non-diet variables. All CAP models explained a significant proportion of the variance in the diet matrices, ranging from 7% to 25% of the total variation. Explanatory variables tested included latitude, longitude, predator length, depth, and water mass. These results indicate a trophic guild structure is present amongst the demersal fish community during summer in the eastern Chukchi Sea. Regular monitoring of the food habits of the demersal fish community will be required to improve our understanding of the spatial, temporal, and interannual variation in diet composition, and to improve our ability to identify and predict the impacts of climate change and commercial development on the structure and functioning of the Chukchi Sea ecosystem.
A mixed-order nonlinear diffusion compressed sensing MR image reconstruction.
Joy, Ajin; Paul, Joseph Suresh
2018-03-07
Avoid formation of staircase artifacts in nonlinear diffusion-based MR image reconstruction without compromising computational speed. Whereas second-order diffusion encourages the evolution of pixel neighborhood with uniform intensities, fourth-order diffusion considers smooth region to be not necessarily a uniform intensity region but also a planar region. Therefore, a controlled application of fourth-order diffusivity function is used to encourage second-order diffusion to reconstruct the smooth regions of the image as a plane rather than a group of blocks, while not being strong enough to introduce the undesirable speckle effect. Proposed method is compared with second- and fourth-order nonlinear diffusion reconstruction, total variation (TV), total generalized variation, and higher degree TV using in vivo data sets for different undersampling levels with application to dictionary learning-based reconstruction. It is observed that the proposed technique preserves sharp boundaries in the image while preventing the formation of staircase artifacts in the regions of smoothly varying pixel intensities. It also shows reduced error measures compared with second-order nonlinear diffusion reconstruction or TV and converges faster than TV-based methods. Because nonlinear diffusion is known to be an effective alternative to TV for edge-preserving reconstruction, the crucial aspect of staircase artifact removal is addressed. Reconstruction is found to be stable for the experimentally determined range of fourth-order regularization parameter, and therefore not does not introduce a parameter search. Hence, the computational simplicity of second-order diffusion is retained. © 2018 International Society for Magnetic Resonance in Medicine.
A survey of the advertising of nine new drugs in the general practice literature.
Jones, M; Greenfield, S; Bradley, C
1999-12-01
To undertake a survey of the advertising of new drugs in the general practice literature as part of a larger study investigating the factors which influence the introduction of new drugs into clinical practice. The advertisements for nine new drugs from a range of therapeutic groups were monitored for 30 months in 12 journals, which are received by most GPs. The amount of prescribing, in defined daily doses, of each new drug by 50 GPs, selected as regular users of a teaching hospital, was also recorded during this period. Of the journals, 798 issues were surveyed (93% of the total published). The total number of advertisements was almost 33 000, of which 2163 (6.6%) were for the study drugs. The pattern of advertising of each study drug was very complex and varied from month to month and between journals. There was no consistent pattern in the way the drugs were advertised, with large variations in the amount and timing of advertisements. The prescribing data showed wide variations in the number of GPs prescribing each drug and in the amount prescribed. There was no clear relationship between the extent of the advertising of a drug and the amount of prescribing by the GPs. This suggests that advertising in journals is only one of many factors which are important in influencing GPs to prescribe new drugs. However, the study may have been insufficiently comprehensive to capture complex relationships between advertising and prescribing.
NASA Astrophysics Data System (ADS)
Shen, Zhengwei; Cheng, Lishuang
2017-09-01
Total variation (TV)-based image deblurring method can bring on staircase artifacts in the homogenous region of the latent images recovered from the degraded images while a wavelet/frame-based image deblurring method will lead to spurious noise spikes and pseudo-Gibbs artifacts in the vicinity of discontinuities of the latent images. To suppress these artifacts efficiently, we propose a nonconvex composite wavelet/frame and TV-based image deblurring model. In this model, the wavelet/frame and the TV-based methods may complement each other, which are verified by theoretical analysis and experimental results. To further improve the quality of the latent images, nonconvex penalty function is used to be the regularization terms of the model, which may induce a stronger sparse solution and will more accurately estimate the relative large gradient or wavelet/frame coefficients of the latent images. In addition, by choosing a suitable parameter to the nonconvex penalty function, the subproblem that splits by the alternative direction method of multipliers algorithm from the proposed model can be guaranteed to be a convex optimization problem; hence, each subproblem can converge to a global optimum. The mean doubly augmented Lagrangian and the isotropic split Bregman algorithms are used to solve these convex subproblems where the designed proximal operator is used to reduce the computational complexity of the algorithms. Extensive numerical experiments indicate that the proposed model and algorithms are comparable to other state-of-the-art model and methods.
Self-Esteem of Deaf and Hard of Hearing Students in Regular and Special Schools
ERIC Educational Resources Information Center
Lesar, Irena; Smrtnik Vitulic, Helena
2014-01-01
The study focuses on the self-esteem of deaf and hard of hearing (D/HH) students from Slovenia. A total of 80 D/HH students from regular and special primary schools (grades 6-9) and from regular and special secondary schools (grades 1-4) completed the Self-Esteem Questionnaire (Lamovec 1994). For the entire group of D/HH students, the results of…
Procter-Gray, Elizabeth; Leveille, Suzanne G.; Hannan, Marian T.; Cheng, Jie; Kane, Kevin; Li, Wenjun
2015-01-01
Background. Regular walking is critical to maintaining health in older age. We examined influences of individual and community factors on walking habits in older adults. Methods. We analyzed walking habits among participants of a prospective cohort study of 745 community-dwelling men and women, mainly aged 70 years or older. We estimated community variations in utilitarian and recreational walking, and examined whether the variations were attributable to community differences in individual and environmental factors. Results. Prevalence of recreational walking was relatively uniform while prevalence of utilitarian walking varied across the 16 communities in the study area. Both types of walking were associated with individual health and physical abilities. However, utilitarian walking was also strongly associated with several measures of neighborhood socioeconomic status and access to amenities while recreational walking was not. Conclusions. Utilitarian walking is strongly influenced by neighborhood environment, but intrinsic factors may be more important for recreational walking. Communities with the highest overall walking prevalence were those with the most utilitarian walkers. Public health promotion of regular walking should take this into account. PMID:26339507
Kouvaris, Kostas; Clune, Jeff; Kounios, Loizos; Brede, Markus; Watson, Richard A
2017-04-01
One of the most intriguing questions in evolution is how organisms exhibit suitable phenotypic variation to rapidly adapt in novel selective environments. Such variability is crucial for evolvability, but poorly understood. In particular, how can natural selection favour developmental organisations that facilitate adaptive evolution in previously unseen environments? Such a capacity suggests foresight that is incompatible with the short-sighted concept of natural selection. A potential resolution is provided by the idea that evolution may discover and exploit information not only about the particular phenotypes selected in the past, but their underlying structural regularities: new phenotypes, with the same underlying regularities, but novel particulars, may then be useful in new environments. If true, we still need to understand the conditions in which natural selection will discover such deep regularities rather than exploiting 'quick fixes' (i.e., fixes that provide adaptive phenotypes in the short term, but limit future evolvability). Here we argue that the ability of evolution to discover such regularities is formally analogous to learning principles, familiar in humans and machines, that enable generalisation from past experience. Conversely, natural selection that fails to enhance evolvability is directly analogous to the learning problem of over-fitting and the subsequent failure to generalise. We support the conclusion that evolving systems and learning systems are different instantiations of the same algorithmic principles by showing that existing results from the learning domain can be transferred to the evolution domain. Specifically, we show that conditions that alleviate over-fitting in learning systems successfully predict which biological conditions (e.g., environmental variation, regularity, noise or a pressure for developmental simplicity) enhance evolvability. This equivalence provides access to a well-developed theoretical framework from learning theory that enables a characterisation of the general conditions for the evolution of evolvability.
Exponential series approaches for nonparametric graphical models
NASA Astrophysics Data System (ADS)
Janofsky, Eric
Markov Random Fields (MRFs) or undirected graphical models are parsimonious representations of joint probability distributions. This thesis studies high-dimensional, continuous-valued pairwise Markov Random Fields. We are particularly interested in approximating pairwise densities whose logarithm belongs to a Sobolev space. For this problem we propose the method of exponential series which approximates the log density by a finite-dimensional exponential family with the number of sufficient statistics increasing with the sample size. We consider two approaches to estimating these models. The first is regularized maximum likelihood. This involves optimizing the sum of the log-likelihood of the data and a sparsity-inducing regularizer. We then propose a variational approximation to the likelihood based on tree-reweighted, nonparametric message passing. This approximation allows for upper bounds on risk estimates, leverages parallelization and is scalable to densities on hundreds of nodes. We show how the regularized variational MLE may be estimated using a proximal gradient algorithm. We then consider estimation using regularized score matching. This approach uses an alternative scoring rule to the log-likelihood, which obviates the need to compute the normalizing constant of the distribution. For general continuous-valued exponential families, we provide parameter and edge consistency results. As a special case we detail a new approach to sparse precision matrix estimation which has statistical performance competitive with the graphical lasso and computational performance competitive with the state-of-the-art glasso algorithm. We then describe results for model selection in the nonparametric pairwise model using exponential series. The regularized score matching problem is shown to be a convex program; we provide scalable algorithms based on consensus alternating direction method of multipliers (ADMM) and coordinate-wise descent. We use simulations to compare our method to others in the literature as well as the aforementioned TRW estimator.
Structural characterization of the packings of granular regular polygons.
Wang, Chuncheng; Dong, Kejun; Yu, Aibing
2015-12-01
By using a recently developed method for discrete modeling of nonspherical particles, we simulate the random packings of granular regular polygons with three to 11 edges under gravity. The effects of shape and friction on the packing structures are investigated by various structural parameters, including packing fraction, the radial distribution function, coordination number, Voronoi tessellation, and bond-orientational order. We find that packing fraction is generally higher for geometrically nonfrustrated regular polygons, and can be increased by the increase of edge number and decrease of friction. The changes of packing fraction are linked with those of the microstructures, such as the variations of the translational and orientational orders and local configurations. In particular, the free areas of Voronoi tessellations (which are related to local packing fractions) can be described by log-normal distributions for all polygons. The quantitative analyses establish a clearer picture for the packings of regular polygons.
Dense motion estimation using regularization constraints on local parametric models.
Patras, Ioannis; Worring, Marcel; van den Boomgaard, Rein
2004-11-01
This paper presents a method for dense optical flow estimation in which the motion field within patches that result from an initial intensity segmentation is parametrized with models of different order. We propose a novel formulation which introduces regularization constraints between the model parameters of neighboring patches. In this way, we provide the additional constraints for very small patches and for patches whose intensity variation cannot sufficiently constrain the estimation of their motion parameters. In order to preserve motion discontinuities, we use robust functions as a regularization mean. We adopt a three-frame approach and control the balance between the backward and forward constraints by a real-valued direction field on which regularization constraints are applied. An iterative deterministic relaxation method is employed in order to solve the corresponding optimization problem. Experimental results show that the proposed method deals successfully with motions large in magnitude, motion discontinuities, and produces accurate piecewise-smooth motion fields.
Biological variation of vitamins in blood of healthy individuals.
Talwar, Dinesh K; Azharuddin, Mohammed K; Williamson, Cathy; Teoh, Yee Ping; McMillan, Donald C; St J O'Reilly, Denis
2005-11-01
Components of biological variation can be used to define objective quality specifications (imprecision, bias, and total error), to assess the usefulness of reference values [index of individuality (II)], and to evaluate significance of changes in serial results from an individual [reference change value (RCV)]. However, biological variation data on vitamins in blood are limited. The aims of the present study were to determine the intra- and interindividual biological variation of vitamins A, E, B(1), B(2), B(6), C, and K and carotenoids in plasma, whole blood, or erythrocytes from apparently healthy persons and to define quality specifications for vitamin measurements based on their biology. Fasting plasma, whole blood, and erythrocytes were collected from 14 healthy volunteers at regular weekly intervals over 22 weeks. Vitamins were measured by HPLC. From the data generated, the intra- (CV(I)) and interindividual (CV(G)) biological CVs were estimated for each vitamin. Derived quality specifications, II, and RCV were calculated from CV(I) and CV(G). CV(I) was 4.8%-38% and CV(G) was 10%-65% for the vitamins measured. The CV(I)s for vitamins A, E, B(1), and B(2) were lower (4.8%-7.6%) than for the other vitamins in blood. For all vitamins, CV(G) was higher than CV(I), with II <1.0 (range, 0.36-0.95). The RCVs for vitamins were high (15.8%-108%). Apart from vitamins A, B(1), and erythrocyte B(2), the imprecision of our methods for measurement of vitamins in blood was within the desirable goal. For most vitamin measurements in plasma, whole blood, or erythrocytes, the desirable imprecision goals based on biological variation are obtainable by current methodologies. Population reference intervals for vitamins are of limited value in demonstrating deficiency or excess.
Paavola, Paula; Tiihonen, Jari
2010-01-01
A seasonal variation in violence and suicidal behaviour has been reported in several studies with partially congruent results. Most of forensic psychiatric patients have a history of severe violent behaviour that often continues in spite of regular treatment. In the forensic psychiatric hospital environment aggressive and suicidal acts are often sudden and unpredictable. For reasons of safety, rapid and intensive coercive measures, such as seclusion and restraint, are necessary in the treatment of such patients. To examine whether these involuntary seclusions have a seasonal pattern, possibly similar than the reported seasonal variation in violence and suicidal behaviour. By investigating the possibility of a seasonal variation of seclusion incidents from violent and suicidal acts, it may become possible to improve the management of forensic psychiatric patients. The hospital files of all secluded patients at Niuvanniemi Hospital from 1 January 1996 to 31 December 2002 were examined. In total, 385 patients (324 male and 61 female) were identified as being secluded at least once in 1930 different incidents (1476 from male and 454 from female patients). Seasonal decomposition and linear regression with dummy month variables were used to examine the possibility of annual variations for seclusions. The seasonal variation of involuntary seclusion incidents was statistically significant. According to the linear regression model, most of the seclusion incidents, affecting many different patients, began in July and August, and were concentrated throughout the fall until November. The sum of all seclusion days was lowest in January and highest between July and November (difference +31% to +37%). These findings are mainly in agreement with results from other studies on seasonal variation and violent behaviour. The allocation of staff for late summer and fall might enhance the management of forensic psychiatric patients, thus leading to possible decreases in seclusion incidents. The factors affecting violent, aggressive and suicidal behaviours are complex and more investigation is needed to understand, identify, intervene and effectively reduce such behaviours. Copyright 2009. Published by Elsevier Ltd.
Circulation controls of the spatial structure of maximum daily precipitation over Poland
NASA Astrophysics Data System (ADS)
Stach, Alfred
2015-04-01
Among forecasts made on the basis of global and regional climatic models is one of a high probability of an increase in the frequency and intensity of extreme precipitation events. Learning the regularities underlying the recurrence and spatial extent of extreme precipitation is obviously of great importance, both economic and social. The main goal of the study was to analyse regularities underlying spatial and temporal variations in monthly Maximum Daily Precipitation Totals (MDPTs) observed in Poland over the years 1956-1980. These data are specific because apart from being spatially discontinuous, which is typical of precipitation, they are also non-synchronic. The main aim of the study was accomplished via several detailed goals: • identification and typology of the spatial structure of monthly MDPTs, • determination of the character and probable origin of events generating MDPTs, and • quantitative assessment of the contribution of the particular events to the overall MDPT figures. The analysis of the spatial structure of MDPTs was based on 300 models of spatial structure, one for each of the analysed sets of monthly MDPTs. The models were built on the basis of empirical anisotropic semivariograms of normalised data. In spite of their spatial discontinuity and asynchronicity, the MDPT data from Poland display marked regularities in their spatial pattern that yield readily to mathematical modelling. The MDPT field in Poland is usually the sum of the outcomes of three types of processes operating at various spatial scales: local (<10-20 km), regional (50-150 km), and supra-regional (>200 km). The spatial scales are probably connected with a convective/ orographic, a frontal and a 'planetary waves' genesis of high precipitation. Their contributions are highly variable. Generally predominant, however, are high daily precipitation totals with a spatial extent of 50 to 150 km connected with mesoscale phenomena and the migration of atmospheric fronts (35-38%). The spatial extent of areas of high local-scale precipitation usually varies at random, especially in the warm season. At supra-local scales, structures of repetitive size predominate. Eight types of anisotropic structures of monthly MDPTs were distinguished. To identify them, an analysis was made of semivariance surface similarities. The types differ not only in the level and direction of anisotropy, but also in the number and type of elementary components, which is evidence of genetic differences in precipitation. Their appearance shows a significant seasonal variability, so the most probable supposition was that temporal variations in the MDPT pattern were connected with circulation conditions: the type and direction of inflow of air masses. This hypothesis was validated by testing differences in the frequency of occurrence of Grosswetterlagen circulation situations in the months belonging to the distinguished types of the spatial MDPT pattern.
On well-posedness of variational models of charged drops.
Muratov, Cyrill B; Novaga, Matteo
2016-03-01
Electrified liquids are well known to be prone to a variety of interfacial instabilities that result in the onset of apparent interfacial singularities and liquid fragmentation. In the case of electrically conducting liquids, one of the basic models describing the equilibrium interfacial configurations and the onset of instability assumes the liquid to be equipotential and interprets those configurations as local minimizers of the energy consisting of the sum of the surface energy and the electrostatic energy. Here we show that, surprisingly, this classical geometric variational model is mathematically ill-posed irrespective of the degree to which the liquid is electrified. Specifically, we demonstrate that an isolated spherical droplet is never a local minimizer, no matter how small is the total charge on the droplet, as the energy can always be lowered by a smooth, arbitrarily small distortion of the droplet's surface. This is in sharp contrast to the experimental observations that a critical amount of charge is needed in order to destabilize a spherical droplet. We discuss several possible regularization mechanisms for the considered free boundary problem and argue that well-posedness can be restored by the inclusion of the entropic effects resulting in finite screening of free charges.
On well-posedness of variational models of charged drops
Novaga, Matteo
2016-01-01
Electrified liquids are well known to be prone to a variety of interfacial instabilities that result in the onset of apparent interfacial singularities and liquid fragmentation. In the case of electrically conducting liquids, one of the basic models describing the equilibrium interfacial configurations and the onset of instability assumes the liquid to be equipotential and interprets those configurations as local minimizers of the energy consisting of the sum of the surface energy and the electrostatic energy. Here we show that, surprisingly, this classical geometric variational model is mathematically ill-posed irrespective of the degree to which the liquid is electrified. Specifically, we demonstrate that an isolated spherical droplet is never a local minimizer, no matter how small is the total charge on the droplet, as the energy can always be lowered by a smooth, arbitrarily small distortion of the droplet's surface. This is in sharp contrast to the experimental observations that a critical amount of charge is needed in order to destabilize a spherical droplet. We discuss several possible regularization mechanisms for the considered free boundary problem and argue that well-posedness can be restored by the inclusion of the entropic effects resulting in finite screening of free charges. PMID:27118921
Sowers, M R; Finkelstein, J S; Ettinger, B; Bondarenko, I; Neer, R M; Cauley, J A; Sherman, S; Greendale, G A
2003-01-01
We evaluated bone mineral density (BMD), hormone concentrations and menstrual cycle status to test the hypothesis that greater variations in reproductive hormones and menstrual bleeding patterns in mid-aged women might engender an environment permissive for less bone. We studied 2336 women, aged 42-52 years, from the Study of Women's Health Across the Nation (SWAN) who self-identified as African-American (28.2%), Caucasian (49.9%), Japanese (10.5%) or Chinese (11.4%). Outcome measures were lumbar spine, femoral neck and total hip BMD by dual-energy X-ray densitometry (DXA). Explanatory variables were estradiol, testosterone, sex hormone binding globulin (SHBG) and follicle stimulating hormone (FSH) from serum collected in the early follicular phase of the menstrual cycle or menstrual status [premenopausal (menses in the 3 months prior to study entry without change in regularity) or early perimenopause (menstrual bleeding in the 3 months prior to study entry but some change in the regularity of cycles)]. Total testosterone and estradiol concentrations were indexed to SHBG for the Free Androgen Index (FAI) and the Free Estradiol Index (FEI). Serum logFSH concentrations were inversely correlated with BMD (r = -10 for lumbar spine [95% confidence interval (CI): -0.13, -0.06] and r = -0.08 for femoral neck (95% CI: -0.11, -0.05). Lumbar spine BMD values were approximately 0.5% lower for each successive FSH quartile. There were no significant associations of BMD with serum estradiol, total testosterone, FEI or FAI, respectively, after adjusting for covariates. BMD tended to be lower (p values = 0.009 to 0.06, depending upon the skeletal site) in women classified as perimenopausal versus premenopausal, after adjusting for covariates. Serum FSH but not serum estradiol, testosterone or SHBG were significantly associated with BMD in a multiethnic population of women classified as pre- versus perimenopausal, supporting the hypothesis that alterations in hormone environment are associated with BMD differences prior to the final menstrual period.
Robust approximate optimal guidance strategies for aeroassisted orbital transfer missions
NASA Astrophysics Data System (ADS)
Ilgen, Marc R.
This thesis presents the application of game theoretic and regular perturbation methods to the problem of determining robust approximate optimal guidance laws for aeroassisted orbital transfer missions with atmospheric density and navigated state uncertainties. The optimal guidance problem is reformulated as a differential game problem with the guidance law designer and Nature as opposing players. The resulting equations comprise the necessary conditions for the optimal closed loop guidance strategy in the presence of worst case parameter variations. While these equations are nonlinear and cannot be solved analytically, the presence of a small parameter in the equations of motion allows the method of regular perturbations to be used to solve the equations approximately. This thesis is divided into five parts. The first part introduces the class of problems to be considered and presents results of previous research. The second part then presents explicit semianalytical guidance law techniques for the aerodynamically dominated region of flight. These guidance techniques are applied to unconstrained and control constrained aeroassisted plane change missions and Mars aerocapture missions, all subject to significant atmospheric density variations. The third part presents a guidance technique for aeroassisted orbital transfer problems in the gravitationally dominated region of flight. Regular perturbations are used to design an implicit guidance technique similar to the second variation technique but that removes the need for numerically computing an optimal trajectory prior to flight. This methodology is then applied to a set of aeroassisted inclination change missions. In the fourth part, the explicit regular perturbation solution technique is extended to include the class of guidance laws with partial state information. This methodology is then applied to an aeroassisted plane change mission using inertial measurements and subject to uncertainties in the initial value of the flight path angle. A summary of performance results for all these guidance laws is presented in the fifth part of this thesis along with recommendations for further research.
Jauch-Chara, Kamila; Hallschmid, Manfred; Schmid, Sebastian M; Bandorf, Nadine; Born, Jan; Schultes, Bernd
2010-05-01
Sleep deprivation (SD) impairs neurocognitive functions. Assuming that this effect is mediated by reduced cerebral glucose supply due to prolonged wakefulness inducing a progressive depletion of cerebral glycogen stores, we hypothesized that short-term sleep loss amplifies the deteriorating effects of acute hypoglycemia on neurocognitive functions. Seven healthy men were tested in a randomized and balanced order on 3 different conditions spaced 2 weeks apart. After a night of total SD (total SD), 4.5h of sleep (partial SD) and a night with 7h of regular sleep (regular sleep), subjects were exposed to a stepwise hypoglycemic clamp experiment. Reaction time (RT) and auditory evoked brain potentials (AEP) were assessed during a euglycemic baseline period and at the end of the clamp (blood glucose at 2.5mmol/l). During the euglycemic baseline, amplitude of the P3 component of the AEP was lower after total SD than after partial SD (9.2+/-3.2microV vs. 16.6+/-2.9microV; t(6)=3.2, P=0.02) and regular sleep (20.2+/-2.1microV; t(6)=18.8, P<0.01). Reaction time was longer after total SD in comparison to partial SD (367+/-45ms vs. 304+/-36ms; t(6)=2.7, P=0.04) and to regular sleep (322+/-36ms; t(6)=2.41, P=0.06) while there was no difference between partial SD and regular sleep condition (t(6)=0.60, P=0.57). Hypoglycemia decreased P3 amplitude by 11.2+/-4.1microV in the partial SD condition (t(6)=2.72, P=0.04) and by 9.3+/-0.7microV in the regular sleep condition (t(6)=12.51, P<0.01), but did not further reduce P3 amplitude after total SD (1.8+/-3.9microV; t(6)=0.46, P=0.66). Thus, at the end of hypoglycemia P3 amplitudes were similar across the 3 conditions (F(2,10)=0.89, P=0.42). RT generally showed a similar pattern with a significant prolongation due to hypoglycemia after partial SD (+42+/-12ms; t(6)=3.39, P=0.02) and regular sleep (+37+/-10ms; t(6)=3.53, P=0.01), but not after total SD (+15+/-16; t(6)=0.97, P=0.37), resulting in similar values at the end of hypoglycemia (F(1,6)=1.01, P=0.36). One night of total SD deteriorates neurocognitive function as reflected by indicators of attentive stimulus processing, but does not synergistically aggravate the impairing influence of acute hypoglycemia. The findings are not consistent with the view that neurocognitive deteriorations after SD result from challenged cerebral glucose metabolism. Copyright 2009 Elsevier Ltd. All rights reserved.
Batalla, Albert; Lorenzetti, Valentina; Chye, Yann; Yücel, Murat; Soriano-Mas, Carles; Bhattacharyya, Sagnik; Torrens, Marta; Crippa, José A.S.; Martín-Santos, Rocío
2018-01-01
Abstract Introduction: Hippocampal neuroanatomy is affected by genetic variations in dopaminergic candidate genes and environmental insults, such as early onset of chronic cannabis exposure. Here, we examine how hippocampal total and subregional volumes are affected by cannabis use and functional polymorphisms of dopamine-relevant genes, including the catechol-O-methyltransferase (COMT), dopamine transporter (DAT1), and the brain-derived neurotrophic factor (BDNF) genes. Material and Methods: We manually traced total hippocampal volumes and automatically segmented hippocampal subregions using high-resolution MRI images, and performed COMT, DAT1, and BDNF genotyping in 59 male Caucasian young adults aged 18–30 years. These included 30 chronic cannabis users with early-onset (regular use at <16 years) and 29 age-, education-, and intelligence-matched controls. Results: Cannabis use and dopaminergic gene polymorphism had both distinct and interactive effects on the hippocampus. We found emerging alterations of hippocampal total and specific subregional volumes in cannabis users relative to controls (i.e., CA1, CA2/3, and CA4), and associations between cannabis use levels and total and specific subregional volumes. Furthermore, total hippocampal volume and the fissure subregion were affected by cannabis×DAT1 polymorphism (i.e., 9/9R and in 10/10R alleles), reflecting high and low levels of dopamine availability. Conclusion: These findings suggest that cannabis exposure alters the normal relationship between DAT1 polymorphism and the anatomy of total and subregional hippocampal volumes, and that specific hippocampal subregions may be particularly affected. PMID:29404409
Batalla, Albert; Lorenzetti, Valentina; Chye, Yann; Yücel, Murat; Soriano-Mas, Carles; Bhattacharyya, Sagnik; Torrens, Marta; Crippa, José A S; Martín-Santos, Rocío
2018-01-01
Introduction: Hippocampal neuroanatomy is affected by genetic variations in dopaminergic candidate genes and environmental insults, such as early onset of chronic cannabis exposure. Here, we examine how hippocampal total and subregional volumes are affected by cannabis use and functional polymorphisms of dopamine-relevant genes, including the catechol-O-methyltransferase (COMT), dopamine transporter (DAT1), and the brain-derived neurotrophic factor (BDNF) genes. Material and Methods: We manually traced total hippocampal volumes and automatically segmented hippocampal subregions using high-resolution MRI images, and performed COMT, DAT1, and BDNF genotyping in 59 male Caucasian young adults aged 18-30 years. These included 30 chronic cannabis users with early-onset (regular use at <16 years) and 29 age-, education-, and intelligence-matched controls. Results: Cannabis use and dopaminergic gene polymorphism had both distinct and interactive effects on the hippocampus. We found emerging alterations of hippocampal total and specific subregional volumes in cannabis users relative to controls (i.e., CA1, CA2/3, and CA4), and associations between cannabis use levels and total and specific subregional volumes. Furthermore, total hippocampal volume and the fissure subregion were affected by cannabis×DAT1 polymorphism (i.e., 9/9R and in 10/10R alleles), reflecting high and low levels of dopamine availability. Conclusion: These findings suggest that cannabis exposure alters the normal relationship between DAT1 polymorphism and the anatomy of total and subregional hippocampal volumes, and that specific hippocampal subregions may be particularly affected.
NASA Astrophysics Data System (ADS)
Shevtsova, Ekaterina
2011-10-01
For the general renormalizable N=1 supersymmetric Yang-Mills theory, regularized by higher covariant derivatives, a two-loop β-function is calculated. It is shown that all integrals, needed for its obtaining are integrals of total derivatives.
Moche CAPE Formula: Cost Analysis of Public Education.
ERIC Educational Resources Information Center
Moche, Joanne Spiers
The Moche Cost Analysis of Public Education (CAPE) formula was developed to identify total and per pupil costs of regular elementary education, regular secondary education, elementary special education, and secondary special education. Costs are analyzed across five components: (1) comprehensive costs (including transportation and supplemental…
Regular Patterns in Cerebellar Purkinje Cell Simple Spike Trains
Shin, Soon-Lim; Hoebeek, Freek E.; Schonewille, Martijn; De Zeeuw, Chris I.; Aertsen, Ad; De Schutter, Erik
2007-01-01
Background Cerebellar Purkinje cells (PC) in vivo are commonly reported to generate irregular spike trains, documented by high coefficients of variation of interspike-intervals (ISI). In strong contrast, they fire very regularly in the in vitro slice preparation. We studied the nature of this difference in firing properties by focusing on short-term variability and its dependence on behavioral state. Methodology/Principal Findings Using an analysis based on CV2 values, we could isolate precise regular spiking patterns, lasting up to hundreds of milliseconds, in PC simple spike trains recorded in both anesthetized and awake rodents. Regular spike patterns, defined by low variability of successive ISIs, comprised over half of the spikes, showed a wide range of mean ISIs, and were affected by behavioral state and tactile stimulation. Interestingly, regular patterns often coincided in nearby Purkinje cells without precise synchronization of individual spikes. Regular patterns exclusively appeared during the up state of the PC membrane potential, while single ISIs occurred both during up and down states. Possible functional consequences of regular spike patterns were investigated by modeling the synaptic conductance in neurons of the deep cerebellar nuclei (DCN). Simulations showed that these regular patterns caused epochs of relatively constant synaptic conductance in DCN neurons. Conclusions/Significance Our findings indicate that the apparent irregularity in cerebellar PC simple spike trains in vivo is most likely caused by mixing of different regular spike patterns, separated by single long intervals, over time. We propose that PCs may signal information, at least in part, in regular spike patterns to downstream DCN neurons. PMID:17534435
Mascons, GRACE, and Time-variable Gravity
NASA Technical Reports Server (NTRS)
Lemoine, F.; Lutchke, S.; Rowlands, D.; Klosko, S.; Chinn, D.; Boy, J. P.
2006-01-01
The GRACE mission has been in orbit now for three years and now regularly produces snapshots of the Earth s gravity field on a monthly basis. The convenient standard approach has been to perform global solutions in spherical harmonics. Alternative local representations of mass variations using mascons show great promise and offer advantages in terms of computational efficiency, minimization of problems due to aliasing, and increased temporal resolution. In this paper, we discuss the results of processing the GRACE KBRR data from March 2003 through August 2005 to produce solutions for GRACE mass variations over mid-latitude and equatorial regions, such as South America, India and the United States, and over the polar regions (Antarctica and Greenland), with a focus on the methodology. We describe in particular mascon solutions developed on regular 4 degree x 4 degree grids, and those tailored specifically to drainage basins over these regions.
Regularization destriping of remote sensing imagery
NASA Astrophysics Data System (ADS)
Basnayake, Ranil; Bollt, Erik; Tufillaro, Nicholas; Sun, Jie; Gierach, Michelle
2017-07-01
We illustrate the utility of variational destriping for ocean color images from both multispectral and hyperspectral sensors. In particular, we examine data from a filter spectrometer, the Visible Infrared Imaging Radiometer Suite (VIIRS) on the Suomi National Polar Partnership (NPP) orbiter, and an airborne grating spectrometer, the Jet Population Laboratory's (JPL) hyperspectral Portable Remote Imaging Spectrometer (PRISM) sensor. We solve the destriping problem using a variational regularization method by giving weights spatially to preserve the other features of the image during the destriping process. The target functional penalizes the neighborhood of stripes
(strictly, directionally uniform features) while promoting data fidelity, and the functional is minimized by solving the Euler-Lagrange equations with an explicit finite-difference scheme. We show the accuracy of our method from a benchmark data set which represents the sea surface temperature off the coast of Oregon, USA. Technical details, such as how to impose continuity across data gaps using inpainting, are also described.
Time Variations of the Radial Velocity of H2O Masers in the Semi-Regular Variable R Crt
NASA Astrophysics Data System (ADS)
Sudou, Hiroshi; Shiga, Motoki; Omodaka, Toshihiro; Nakai, Chihiro; Ueda, Kazuki; Takaba, Hiroshi
2017-12-01
H2O maser emission {at 22 GHz} in the circumstellar envelope is one of the good tracers of detailed physics and inematics in the mass loss process of asymptotic giant branch stars. Long-term monitoring of an H2O maser spectrum with high time resolution enables us to clarify acceleration processes of the expanding shell in the stellar atmosphere. We monitored the H2O maser emission of the semi-regular variable R Crt with the Kagoshima 6-m telescope, and obtained a large data set of over 180 maser spectra over a period of 1.3 years with an observational span of a few days. Using an automatic peak detection method based on least-squares fitting, we exhaustively detected peaks as significant velocity components with the radial velocity on a 0.1 km s^{-1} scale. This analysis result shows that the radial velocity of red-shifted and blue-shifted components exhibits a change between acceleration and deceleration on the time scale of a few hundred days. These velocity variations are likely to correlate with intensity variations, in particular during flaring state of H2O masers. It seems reasonable to consider that the velocity variation of the maser source is caused by shock propagation in the envelope due to stellar pulsation.However, it is difficult to explain the relationship between the velocity variation and the intensity variation only from shock propagation effects. We found that a time delay of the integrated maser intensity with respect to the optical light curve is about 150 days.
Comparison between IRI-2012 and GPS-TEC observations over the western Black Sea
NASA Astrophysics Data System (ADS)
Inyurt, Samed; Yildirim, Omer; Mekik, Cetin
2017-07-01
The ionosphere is a dynamic layer which generally changes according to radiation emitted by the sun, the movement of the earth around the sun, and sunspot activity. Variations can generally be categorized as regular or irregular variations. Both types of variation have a huge effect on radio wave propagation. In this study, we have focused on the seasonal variation effect, which is one of the regular forms of variation in terms of the ionosphere. We examined the seasonal variation over the ZONG station in Turkey for the year 2014. Our analysis results and IRI-2012 present different ideas about ionospheric activity. According to our analysed results, the standard deviation reached a maximum value in April 2014. However, the maximum standard deviation obtained from IRI-2012 was seen in February 2014. Furthermore, it is clear that IRI-2012 underestimated the VTEC values when compared to our results for all the months analysed. The main source of difference between the two models is the IRI-2012 topside ionospheric representation. IRI-2012 VTEC has been produced as a result of the integration of an electron density profile within altitudinal limits of 60-2000 km. In other words, the main problem with regard to the IRI-2012 VTEC representation is not being situated in the plasmaspheric part of the ionosphere. Therefore we propose that the plasmaspheric part should be taken into account to calculate the correct TEC values in mid-latitude regions, and we note that IRI-2012 does not supply precise TEC values for use in ionospheric studies.
Salonia, Andrea; Pontillo, Marina; Nappi, Rossella E; Zanni, Giuseppe; Fabbri, Fabio; Scavini, Marina; Daverio, Rita; Gallina, Andrea; Rigatti, Patrizio; Bosi, Emanuele; Bonini, Pier Angelo; Montorsi, Francesco
2008-04-01
There is currently neither a clinically useful, reliable and inexpensive assay to measure circulating levels of free testosterone (T) in the range observed in women, nor is there agreement on the serum free T threshold defining hypoandrogenism that is associated with female-impaired sexual function. Following the Clinical and Laboratory Standards Institute guidelines, we generated clinically applicable ranges for circulating androgens during specific phases of the menstrual cycle in a convenience sample of 120 reproductive-aged, regularly cycling healthy European Caucasian women with self-reported normal sexual function. All participants were asked to complete a semistructured interview and fill out a set of validated questionnaires, including the Female Sexual Function Index, the Female Sexual Distress Scale, and the 21-item Beck's Inventory for Depression. Between 8 am and 10 am, a venous blood sample was drawn from each participant during the midfollicular (day 5 to 8), the ovulatory (day 13 to 15), and the midluteal phase (day 19 to 22) of the same menstrual cycle. Serum levels of total and free testosterone, Delta(4)-androstenedione, dehydroepiandrosterone sulphate and sex hormone-binding globulin during the midfollicular, ovulatory and midluteal phase of the same menstrual cycle. Total and free T levels showed significant fluctuations, peaking during the ovulatory phase. No significant variation during the menstrual cycle were observed for Delta(4)-androstenedione and dehydroepiandrosterone sulphate. Despite the careful selection of participants that yielded an homogeneous group of women without sexual disorders, we observed a wide range of distribution for each of the circulating androgens measured in this study. This report provides clinically applicable ranges for androgens throughout the menstrual cycle in reproductive-aged, regularly cycling, young healthy Caucasian European women with self-reported normal sexual function.
Image restoration for civil engineering structure monitoring using imaging system embedded on UAV
NASA Astrophysics Data System (ADS)
Vozel, Benoit; Dumoulin, Jean; Chehdi, Kacem
2013-04-01
Nowadays, civil engineering structures are periodically surveyed by qualified technicians (i.e. alpinist) operating visual inspection using heavy mechanical pods. This method is far to be safe, not only for civil engineering structures monitoring staff, but also for users. Due to the unceasing traffic increase, making diversions or closing lanes on bridge becomes more and more difficult. New inspection methods have to be found. One of the most promising technique is to develop inspection method using images acquired by a dedicated monitoring system operating around the civil engineering structures, without disturbing the traffic. In that context, the use of images acquired with an UAV, which fly around the structures is of particular interest. The UAV can be equipped with different vision system (digital camera, infrared sensor, video, etc.). Nonetheless, detection of small distresses on images (like cracks of 1 mm or less) depends on image quality, which is sensitive to internal parameters of the UAV (vibration modes, video exposure times, etc.) and to external parameters (turbulence, bad illumination of the scene, etc.). Though progresses were made at UAV level and at sensor level (i.e. optics), image deterioration is still an open problem. These deteriorations are mainly represented by motion blur that can be coupled with out-of-focus blur and observation noise on acquired images. In practice, deteriorations are unknown if no a priori information is available or dedicated additional instrumentation is set-up at UAV level. Image restoration processing is therefore required. This is a difficult problem [1-3] which has been intensively studied over last decades [4-12]. Image restoration can be addressed by following a blind approach or a myopic one. In both cases, it includes two processing steps that can be implemented in sequential or alternate mode. The first step carries out the identification of the blur impulse response and the second one makes use of this estimated blur kernel for performing the deconvolution of the acquired image. In the present work, different regularization methods, mainly based on the pseudo norm aforementioned Total Variation, are studied and analysed. The key point of their respective implementation, their properties and limits are investigated in this particular applicative context. References [1] J. Hadamard. Lectures on Cauchy's problem in linear partial differential equations. Yale University Press, 1923. [2] A. N. Tihonov. On the resolution of incorrectly posed problems and regularisation method (in Russian). Doklady A. N.SSSR, 151(3), 1963. [3] C. R. Vogel. Computational Methods for inverse problems, SIAM, 2002. [4] A. K. Katsaggelos, J. Biemond, R.W. Schafer, and R. M. Mersereau, "A regularized iterative image restoration algorithm," IEEE Transactions on Signal Processing, vol.39, no. 4, pp. 914-929, 1991. [5] J. Biemond, R. L. Lagendijk, and R. M. Mersereau, "Iterative methods for image deblurring," Proceedings of the IEEE, vol. 78, no. 5, pp. 856-883, 1990. [6] D. Kundur and D. Hatzinakos, "Blind image deconvolution," IEEE Signal Processing Magazine, vol. 13, no. 3, pp. 43-64, 1996. [7] Y. L. You and M. Kaveh, "A regularization approach to joint blur identification and image restoration," IEEE Transactions on Image Processing, vol. 5, no. 3, pp. 416-428, 1996. [8] T. F. Chan and C. K. Wong, "Total variation blind deconvolution," IEEE Transactions on Image Processing, vol. 7, no. 3, pp. 370-375, 1998. [9] S. Chardon, B. Vozel, and K. Chehdi. Parametric Blur Estimation Using the GCV Criterion and a Smoothness Constraint on the Image. Multidimensional Systems and Signal Processing Journal, Kluwer Ed., 10:395-414, 1999 [10] B. Vozel, K. Chehdi, and J. Dumoulin. Myopic image restoration for civil structures inspection using UAV (in French). In GRETSI, 2005. [11] L. Bar, N. Sochen, and N. Kiryati. Semi-blind image restoration via Mumford-Shah regularization. IEEE Transactions on Image Processing, 15(2), 2006. [12] J. H. Money and S. H. Kang, "Total variation minimizing blind deconvolution with shock filter reference," Image and Vision Computing, vol. 26, no. 2, pp. 302-314, 2008.
Sparse Poisson noisy image deblurring.
Carlavan, Mikael; Blanc-Féraud, Laure
2012-04-01
Deblurring noisy Poisson images has recently been a subject of an increasing amount of works in many areas such as astronomy and biological imaging. In this paper, we focus on confocal microscopy, which is a very popular technique for 3-D imaging of biological living specimens that gives images with a very good resolution (several hundreds of nanometers), although degraded by both blur and Poisson noise. Deconvolution methods have been proposed to reduce these degradations, and in this paper, we focus on techniques that promote the introduction of an explicit prior on the solution. One difficulty of these techniques is to set the value of the parameter, which weights the tradeoff between the data term and the regularizing term. Only few works have been devoted to the research of an automatic selection of this regularizing parameter when considering Poisson noise; therefore, it is often set manually such that it gives the best visual results. We present here two recent methods to estimate this regularizing parameter, and we first propose an improvement of these estimators, which takes advantage of confocal images. Following these estimators, we secondly propose to express the problem of the deconvolution of Poisson noisy images as the minimization of a new constrained problem. The proposed constrained formulation is well suited to this application domain since it is directly expressed using the antilog likelihood of the Poisson distribution and therefore does not require any approximation. We show how to solve the unconstrained and constrained problems using the recent alternating-direction technique, and we present results on synthetic and real data using well-known priors, such as total variation and wavelet transforms. Among these wavelet transforms, we specially focus on the dual-tree complex wavelet transform and on the dictionary composed of curvelets and an undecimated wavelet transform.
Variability of breathing during wakefulness while using CPAP predicts adherence.
Fujita, Yukio; Yamauchi, Motoo; Uyama, Hiroki; Kumamoto, Makiko; Koyama, Noriko; Yoshikawa, Masanori; Strohl, Kingman P; Kimura, Hiroshi
2017-02-01
The standard therapy for obstructive sleep apnoea (OSA) is continuous positive airway pressure (CPAP) therapy. However, long-term adherence remains at ~50% despite improvements in behavioural and educational interventions. Based on prior work, we explored whether regularity of breathing during wakefulness might be a physiologic predictor of CPAP adherence. Of the 117 consecutive patients who were diagnosed with OSA and prescribed CPAP, 79 CPAP naïve patients were enrolled in this prospective study. During CPAP initiation, respiratory signals were collected using respiratory inductance plethysmography while wearing CPAP during wakefulness in a seated position. Breathing regularity was assessed by the coefficient of variation (CV) for breath-by-breath estimated tidal volume (V T ) and total duration of respiratory cycle (Ttot). In a derivation group (n = 36), we determined the cut-off CV value which predicted poor CPAP adherence at the first month of therapy, and verified the validity of this predetermined cut-off value in the remaining participants (validation group; n = 43). In the derivation group, the CV for estimated V T was significantly higher in patients with poor adherence than with good adherence (median (interquartile range): 44.2 (33.4-57.4) vs 26.0 (20.4-33.2), P < 0.001). The CV cut-off value for estimated V T for poor CPAP adherence was 34.0, according to a receiver-operating characteristic (ROC) curve. In the validation group, the CV value for estimated V T >34.0 confirmed to be predicting poor CPAP adherence (sensitivity, 0.78; specificity, 0.83). At the initiation of therapy, breathing regularity during wakefulness while wearing CPAP is an objective predictor of short-term CPAP adherence. © 2016 Asian Pacific Society of Respirology.
Towards the mechanical characterization of abdominal wall by inverse analysis.
Simón-Allué, R; Calvo, B; Oberai, A A; Barbone, P E
2017-02-01
The aim of this study is to characterize the passive mechanical behaviour of abdominal wall in vivo in an animal model using only external cameras and numerical analysis. The main objective lies in defining a methodology that provides in vivo information of a specific patient without altering mechanical properties. It is demonstrated in the mechanical study of abdomen for hernia purposes. Mechanical tests consisted on pneumoperitoneum tests performed on New Zealand rabbits, where inner pressure was varied from 0mmHg to 12mmHg. Changes in the external abdominal surface were recorded and several points were tracked. Based on their coordinates we reconstructed a 3D finite element model of the abdominal wall, considering an incompressible hyperelastic material model defined by two parameters. The spatial distributions of these parameters (shear modulus and non linear parameter) were calculated by inverse analysis, using two different types of regularization: Total Variation Diminishing (TVD) and Tikhonov (H 1 ). After solving the inverse problem, the distribution of the material parameters were obtained along the abdominal surface. Accuracy of the results was evaluated for the last level of pressure. Results revealed a higher value of the shear modulus in a wide stripe along the craneo-caudal direction, associated with the presence of linea alba in conjunction with fascias and rectus abdominis. Non linear parameter distribution was smoother and the location of higher values varied with the regularization type. Both regularizations proved to yield in an accurate predicted displacement field, but H 1 obtained a smoother material parameter distribution while TVD included some discontinuities. The methodology here presented was able to characterize in vivo the passive non linear mechanical response of the abdominal wall. Copyright © 2016 Elsevier Ltd. All rights reserved.
Estimating Contact Exposure in Football Using the Head Impact Exposure Estimate
Littleton, Ashley C.; Cox, Leah M.; DeFreese, J.D.; Varangis, Eleanna; Lynall, Robert C.; Schmidt, Julianne D.; Marshall, Stephen W.; Guskiewicz, Kevin M.
2015-01-01
Abstract Over the past decade, there has been significant debate regarding the effect of cumulative subconcussive head impacts on short and long-term neurological impairment. This debate remains unresolved, because valid epidemiological estimates of athletes' total contact exposure are lacking. We present a measure to estimate the total hours of contact exposure in football over the majority of an athlete's lifespan. Through a structured oral interview, former football players provided information related to primary position played and participation in games and practice contacts during the pre-season, regular season, and post-season of each year of their high school, college, and professional football careers. Spring football for college was also included. We calculated contact exposure estimates for 64 former football players (n=32 college football only, n=32 professional and college football). The head impact exposure estimate (HIEE) discriminated between individuals who stopped after college football, and individuals who played professional football (p<0.001). The HIEE measure was independent of concussion history (p=0.82). Estimating total hours of contact exposure may allow for the detection of differences between individuals with variation in subconcussive impacts, regardless of concussion history. This measure is valuable for the surveillance of subconcussive impacts and their associated potential negative effects. PMID:25603189
Estimating Contact Exposure in Football Using the Head Impact Exposure Estimate.
Kerr, Zachary Y; Littleton, Ashley C; Cox, Leah M; DeFreese, J D; Varangis, Eleanna; Lynall, Robert C; Schmidt, Julianne D; Marshall, Stephen W; Guskiewicz, Kevin M
2015-07-15
Over the past decade, there has been significant debate regarding the effect of cumulative subconcussive head impacts on short and long-term neurological impairment. This debate remains unresolved, because valid epidemiological estimates of athletes' total contact exposure are lacking. We present a measure to estimate the total hours of contact exposure in football over the majority of an athlete's lifespan. Through a structured oral interview, former football players provided information related to primary position played and participation in games and practice contacts during the pre-season, regular season, and post-season of each year of their high school, college, and professional football careers. Spring football for college was also included. We calculated contact exposure estimates for 64 former football players (n = 32 college football only, n = 32 professional and college football). The head impact exposure estimate (HIEE) discriminated between individuals who stopped after college football, and individuals who played professional football (p < 0.001). The HIEE measure was independent of concussion history (p = 0.82). Estimating total hours of contact exposure may allow for the detection of differences between individuals with variation in subconcussive impacts, regardless of concussion history. This measure is valuable for the surveillance of subconcussive impacts and their associated potential negative effects.
Code of Federal Regulations, 2010 CFR
2010-04-01
... week during which the individual works less than regular, full-time hours for the individual's regular... week of total unemployment is a week during which the individual performs no work and earns no wages... compensation payable to an individual for weeks of unemployment in an extended benefit period, under those...
Health risk assessment of arsenic from blended water in distribution systems.
Zhang, Hui; Zhou, Xue; Wang, Kai; Wang, Wen D
2017-12-06
In a water distribution system with different sources, water blending occurs, causing specific variations of the arsenic level. This study was undertaken to investigate the concentration and cancer risk of arsenic in blended water in Xi'an city. A total of 672 tap water samples were collected from eight sampling points in the blending zones for arsenic determination. The risk was evaluated through oral ingestion and dermal absorption, separately for males and females, as well as with respect to seasons and blending zones. Although the arsenic concentrations always fulfilled the requirements of the World Health Organization (WHO) (≤10 μg L -1 ), the total cancer risk value was higher than the general guidance risk value of 1.00 × 10 -6 . In the blending zone of the Qujiang and No.3 WTPs (Z2), the total cancer risk value was over 1.00 × 10 -5 , indicating that public health would be affected to some extent. More than 99% of the total cancer risk was from oral ingestion, and dermal absorption had a little contribution. With higher exposure duration and lower body weight, women had a higher cancer risk. In addition, due to several influential factors, the total cancer risk in the four blending zones reached the maximum in different seasons. The sensitivity analysis by the tornado chart proved that body weight, arsenic concentration and ingestion rate significantly contributed to cancer risk. This study suggests the regular monitoring of water blending zones for improving risk management.
Boundedness and almost Periodicity in Time of Solutions of Evolutionary Variational Inequalities
NASA Astrophysics Data System (ADS)
Pankov, A. A.
1983-04-01
In this paper existence theorems are obtained for the solutions of abstract parabolic variational inequalities, which are bounded with respect to time (in the Stepanov and L^\\infty norms). The regularity and almost periodicity properties of such solutions are studied. Theorems are also established concerning their solvability in spaces of Besicovitch almost periodic functions. The majority of the results are obtained without any compactness assumptions. Bibliography: 30 titles.
Xu, Zheng; Wang, Sheng; Li, Yeqing; Zhu, Feiyun; Huang, Junzhou
2018-02-08
The most recent history of parallel Magnetic Resonance Imaging (pMRI) has in large part been devoted to finding ways to reduce acquisition time. While joint total variation (JTV) regularized model has been demonstrated as a powerful tool in increasing sampling speed for pMRI, however, the major bottleneck is the inefficiency of the optimization method. While all present state-of-the-art optimizations for the JTV model could only reach a sublinear convergence rate, in this paper, we squeeze the performance by proposing a linear-convergent optimization method for the JTV model. The proposed method is based on the Iterative Reweighted Least Squares algorithm. Due to the complexity of the tangled JTV objective, we design a novel preconditioner to further accelerate the proposed method. Extensive experiments demonstrate the superior performance of the proposed algorithm for pMRI regarding both accuracy and efficiency compared with state-of-the-art methods.
Li, Shun-Xing; Chen, Li-Hui; Zheng, Feng-Ying; Huang, Xu-Guang
2014-07-23
Oysters (Crassostrea angulata) are often exposed to eutrophication. However, how these exposures influence metal bioaccumulation and oral bioavailability (OBA) in oysters is unknown. After a four month field experimental cultivation, bioaccumulation factors (BAF) of metals (Fe, Cu, As, Cd, and Pb) from seawater to oysters and metal oral bioavailability in oysters by bionic gastrointestinal tract were determined. A positive effect of macronutrient (nitrate N and total P) concentration in seawater on BAF of Cd in oysters was observed, but such an effect was not significant for Fe, Cu, Pb, and As. Only OBA of As was significantly positively correlated to N and P contents. For Fe, OBA was negatively correlated with N. The regular variation of the OBA of Fe and As may be due to the effect of eutrophication on the synthesis of metal granules and heat-stable protein in oysters, respectively.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Yu; Gao, Kai; Huang, Lianjie
Accurate imaging and characterization of fracture zones is crucial for geothermal energy exploration. Aligned fractures within fracture zones behave as anisotropic media for seismic-wave propagation. The anisotropic properties in fracture zones introduce extra difficulties for seismic imaging and waveform inversion. We have recently developed a new anisotropic elastic-waveform inversion method using a modified total-variation regularization scheme and a wave-energy-base preconditioning technique. Our new inversion method uses the parameterization of elasticity constants to describe anisotropic media, and hence it can properly handle arbitrary anisotropy. We apply our new inversion method to a seismic velocity model along a 2D-line seismic data acquiredmore » at Eleven-Mile Canyon located at the Southern Dixie Valley in Nevada for geothermal energy exploration. Our inversion results show that anisotropic elastic-waveform inversion has potential to reconstruct subsurface anisotropic elastic parameters for imaging and characterization of fracture zones.« less
Spatially patterned matrix elasticity directs stem cell fate
NASA Astrophysics Data System (ADS)
Yang, Chun; DelRio, Frank W.; Ma, Hao; Killaars, Anouk R.; Basta, Lena P.; Kyburz, Kyle A.; Anseth, Kristi S.
2016-08-01
There is a growing appreciation for the functional role of matrix mechanics in regulating stem cell self-renewal and differentiation processes. However, it is largely unknown how subcellular, spatial mechanical variations in the local extracellular environment mediate intracellular signal transduction and direct cell fate. Here, the effect of spatial distribution, magnitude, and organization of subcellular matrix mechanical properties on human mesenchymal stem cell (hMSCs) function was investigated. Exploiting a photodegradation reaction, a hydrogel cell culture substrate was fabricated with regions of spatially varied and distinct mechanical properties, which were subsequently mapped and quantified by atomic force microscopy (AFM). The variations in the underlying matrix mechanics were found to regulate cellular adhesion and transcriptional events. Highly spread, elongated morphologies and higher Yes-associated protein (YAP) activation were observed in hMSCs seeded on hydrogels with higher concentrations of stiff regions in a dose-dependent manner. However, when the spatial organization of the mechanically stiff regions was altered from a regular to randomized pattern, lower levels of YAP activation with smaller and more rounded cell morphologies were induced in hMSCs. We infer from these results that irregular, disorganized variations in matrix mechanics, compared with regular patterns, appear to disrupt actin organization, and lead to different cell fates; this was verified by observations of lower alkaline phosphatase (ALP) activity and higher expression of CD105, a stem cell marker, in hMSCs in random versus regular patterns of mechanical properties. Collectively, this material platform has allowed innovative experiments to elucidate a novel spatial mechanical dosing mechanism that correlates to both the magnitude and organization of spatial stiffness.
Panazzolo, Diogo G; Sicuro, Fernando L; Clapauch, Ruth; Maranhão, Priscila A; Bouskela, Eliete; Kraemer-Aguiar, Luiz G
2012-11-13
We aimed to evaluate the multivariate association between functional microvascular variables and clinical-laboratorial-anthropometrical measurements. Data from 189 female subjects (34.0 ± 15.5 years, 30.5 ± 7.1 kg/m2), who were non-smokers, non-regular drug users, without a history of diabetes and/or hypertension, were analyzed by principal component analysis (PCA). PCA is a classical multivariate exploratory tool because it highlights common variation between variables allowing inferences about possible biological meaning of associations between them, without pre-establishing cause-effect relationships. In total, 15 variables were used for PCA: body mass index (BMI), waist circumference, systolic and diastolic blood pressure (BP), fasting plasma glucose, levels of total cholesterol, high-density lipoprotein cholesterol (HDL-c), low-density lipoprotein cholesterol (LDL-c), triglycerides (TG), insulin, C-reactive protein (CRP), and functional microvascular variables measured by nailfold videocapillaroscopy. Nailfold videocapillaroscopy was used for direct visualization of nutritive capillaries, assessing functional capillary density, red blood cell velocity (RBCV) at rest and peak after 1 min of arterial occlusion (RBCV(max)), and the time taken to reach RBCV(max) (TRBCV(max)). A total of 35% of subjects had metabolic syndrome, 77% were overweight/obese, and 9.5% had impaired fasting glucose. PCA was able to recognize that functional microvascular variables and clinical-laboratorial-anthropometrical measurements had a similar variation. The first five principal components explained most of the intrinsic variation of the data. For example, principal component 1 was associated with BMI, waist circumference, systolic BP, diastolic BP, insulin, TG, CRP, and TRBCV(max) varying in the same way. Principal component 1 also showed a strong association among HDL-c, RBCV, and RBCV(max), but in the opposite way. Principal component 3 was associated only with microvascular variables in the same way (functional capillary density, RBCV and RBCV(max)). Fasting plasma glucose appeared to be related to principal component 4 and did not show any association with microvascular reactivity. In non-diabetic female subjects, a multivariate scenario of associations between classic clinical variables strictly related to obesity and metabolic syndrome suggests a significant relationship between these diseases and microvascular reactivity.
5 CFR 610.111 - Establishment of workweeks.
Code of Federal Regulations, 2010 CFR
2010-01-01
... administrative workweek. All work performed by an employee within the first 40 hours is considered regularly scheduled work for premium pay and hours of duty purposes. Any additional hours of officially ordered or... administrative workweek is the total number of regularly scheduled hours of duty a week. (2) When an employee has...
Variations in the rotation of the earth
NASA Astrophysics Data System (ADS)
Carter, W. E.; Robertson, D. S.; Pettey, J. E.; Tapley, B. D.; Schutz, B. E.; Eanes, R. J.; Miao, L.
Variations in the earth's rotation (UTI) and length of day have been tracked at the submillisecond level by astronomical radio interferometry and laser ranging to the LAGEOS satellite. Three years of regular measurements reveal complex patterns of variations including UTI fluctuations as large as 5 milliseconds in a few weeks. Comparison of the observed changes in length of day with variations in the global atmospheric angular momentum indicates that the dominant cause of changes in the earth's spin rate, on time scales from a week to several years, is the exchange of angular momentum between the atmosphere and the mantle. The unusually intense El Nino of 1982-1983 was marked by a strong peak in the length of day.
Estimating nonrigid motion from inconsistent intensity with robust shape features
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Wenyang; Ruan, Dan, E-mail: druan@mednet.ucla.edu; Department of Radiation Oncology, University of California, Los Angeles, California 90095
2013-12-15
Purpose: To develop a nonrigid motion estimation method that is robust to heterogeneous intensity inconsistencies amongst the image pairs or image sequence. Methods: Intensity and contrast variations, as in dynamic contrast enhanced magnetic resonance imaging, present a considerable challenge to registration methods based on general discrepancy metrics. In this study, the authors propose and validate a novel method that is robust to such variations by utilizing shape features. The geometry of interest (GOI) is represented with a flexible zero level set, segmented via well-behaved regularized optimization. The optimization energy drives the zero level set to high image gradient regions, andmore » regularizes it with area and curvature priors. The resulting shape exhibits high consistency even in the presence of intensity or contrast variations. Subsequently, a multiscale nonrigid registration is performed to seek a regular deformation field that minimizes shape discrepancy in the vicinity of GOIs. Results: To establish the working principle, realistic 2D and 3D images were subject to simulated nonrigid motion and synthetic intensity variations, so as to enable quantitative evaluation of registration performance. The proposed method was benchmarked against three alternative registration approaches, specifically, optical flow, B-spline based mutual information, and multimodality demons. When intensity consistency was satisfied, all methods had comparable registration accuracy for the GOIs. When intensities among registration pairs were inconsistent, however, the proposed method yielded pronounced improvement in registration accuracy, with an approximate fivefold reduction in mean absolute error (MAE = 2.25 mm, SD = 0.98 mm), compared to optical flow (MAE = 9.23 mm, SD = 5.36 mm), B-spline based mutual information (MAE = 9.57 mm, SD = 8.74 mm) and mutimodality demons (MAE = 10.07 mm, SD = 4.03 mm). Applying the proposed method on a real MR image sequence also provided qualitatively appealing results, demonstrating good feasibility and applicability of the proposed method. Conclusions: The authors have developed a novel method to estimate the nonrigid motion of GOIs in the presence of spatial intensity and contrast variations, taking advantage of robust shape features. Quantitative analysis and qualitative evaluation demonstrated good promise of the proposed method. Further clinical assessment and validation is being performed.« less
Estimating nonrigid motion from inconsistent intensity with robust shape features.
Liu, Wenyang; Ruan, Dan
2013-12-01
To develop a nonrigid motion estimation method that is robust to heterogeneous intensity inconsistencies amongst the image pairs or image sequence. Intensity and contrast variations, as in dynamic contrast enhanced magnetic resonance imaging, present a considerable challenge to registration methods based on general discrepancy metrics. In this study, the authors propose and validate a novel method that is robust to such variations by utilizing shape features. The geometry of interest (GOI) is represented with a flexible zero level set, segmented via well-behaved regularized optimization. The optimization energy drives the zero level set to high image gradient regions, and regularizes it with area and curvature priors. The resulting shape exhibits high consistency even in the presence of intensity or contrast variations. Subsequently, a multiscale nonrigid registration is performed to seek a regular deformation field that minimizes shape discrepancy in the vicinity of GOIs. To establish the working principle, realistic 2D and 3D images were subject to simulated nonrigid motion and synthetic intensity variations, so as to enable quantitative evaluation of registration performance. The proposed method was benchmarked against three alternative registration approaches, specifically, optical flow, B-spline based mutual information, and multimodality demons. When intensity consistency was satisfied, all methods had comparable registration accuracy for the GOIs. When intensities among registration pairs were inconsistent, however, the proposed method yielded pronounced improvement in registration accuracy, with an approximate fivefold reduction in mean absolute error (MAE = 2.25 mm, SD = 0.98 mm), compared to optical flow (MAE = 9.23 mm, SD = 5.36 mm), B-spline based mutual information (MAE = 9.57 mm, SD = 8.74 mm) and mutimodality demons (MAE = 10.07 mm, SD = 4.03 mm). Applying the proposed method on a real MR image sequence also provided qualitatively appealing results, demonstrating good feasibility and applicability of the proposed method. The authors have developed a novel method to estimate the nonrigid motion of GOIs in the presence of spatial intensity and contrast variations, taking advantage of robust shape features. Quantitative analysis and qualitative evaluation demonstrated good promise of the proposed method. Further clinical assessment and validation is being performed.
Hübner, Tom R.
2012-01-01
Background Dysalotosaurus lettowvorbecki is a small ornithopod dinosaur known from thousands of bones and several ontogenetic stages. It was found in a single locality within the Tendaguru Formation of southeastern Tanzania, possibly representing a single herd. Dysalotosaurus provides an excellent case study for examining variation in bone microstructure and life history and helps to unravel the still mysterious growth pattern of small ornithopods. Methodology/Principal Findings Five different skeletal elements were sampled, revealing microstructural variation between individuals, skeletal elements, cross sectional units, and ontogenetic stages. The bone wall consists of fibrolamellar bone with strong variability in vascularization and development of growth cycles. Larger bones with a high degree of utilization have high relative growth rates and seldom annuli/LAGs, whereas small and less intensively used bones have lower growth rates and a higher number of these resting lines. Due to the scarcity of annuli/LAGs, the reconstruction of the life history of Dysalotosaurus was carried out using regularly developed and alternating slow and fast growing zones. Dysalotosaurus was a precocial dinosaur, which experienced sexual maturity at ten years, had an indeterminate growth pattern, and maximum growth rates comparable to a large kangaroo. Conclusions/Significance The variation in the bone histology of Dysalotosaurus demonstrates the influence of size, utilization, and shape of bones on relative growth rates. Annuli/LAGs are not the only type of annual growth cycles that can be used to reconstruct the life history of fossil vertebrates, but the degree of development of these lines may be of importance for the reconstruction of paleobehavior. The regular development of annuli/LAGs in subadults and adults of large ornithopods therefore reflects higher seasonal stress due to higher food demands, migration, and altricial breeding behavior. Small ornithopods often lack regularly developed annuli/LAGs due to lower food demands, no need for migration, and precocial behavior. PMID:22238683
Least squares reconstruction of non-linear RF phase encoded MR data.
Salajeghe, Somaie; Babyn, Paul; Sharp, Jonathan C; Sarty, Gordon E
2016-09-01
The numerical feasibility of reconstructing MRI signals generated by RF coils that produce B1 fields with a non-linearly varying spatial phase is explored. A global linear spatial phase variation of B1 is difficult to produce from current confined to RF coils. Here we use regularized least squares inversion, in place of the usual Fourier transform, to reconstruct signals generated in B1 fields with non-linear phase variation. RF encoded signals were simulated for three RF coil configurations: ideal linear, parallel conductors and, circular coil pairs. The simulated signals were reconstructed by Fourier transform and by regularized least squares. The Fourier reconstruction of simulated RF encoded signals from the parallel conductor coil set showed minor distortions over the reconstruction of signals from the ideal linear coil set but the Fourier reconstruction of signals from the circular coil set produced severe geometric distortion. Least squares inversion in all cases produced reconstruction errors comparable to the Fourier reconstruction of the simulated signal from the ideal linear coil set. MRI signals encoded in B1 fields with non-linearly varying spatial phase may be accurately reconstructed using regularized least squares thus pointing the way to the use of simple RF coil designs for RF encoded MRI. Crown Copyright © 2016. Published by Elsevier Inc. All rights reserved.
Shin-Etsu super-high-flat substrate for FPD panel photomask
NASA Astrophysics Data System (ADS)
Ishitsuka, Youkou; Harada, Daijitsu; Watabe, Atsushi; Takeuchi, Masaki
2017-07-01
Recently, high-resolution exposure machine has been developed for production of high-definition (HD) panels, and higher-flat photomask substrates for FPD is being expected for panel makers to produce HD panels. In this presentation, we introduce about Shin-Etsu's advanced technique of producing super-high-flat photomask substrates. Shin-Etsu has developed surface polishing and planarization technology with No.1-quality-IC photomask substrates. Our most advanced IC photomask substrates have gained the highest estimation and appreciation from our customers because of their surface quality (non-defect surface without sub-0.1um size defects) and ultimate flatness (sub-0.1um order having achieved). By scaling up those IC photomask substrate technologies and developing unique large-size processing technologies, we have achieved creating high-flat large substrates, even G10-photomask size as well as regular G6-G8 photomask size. The core technology is that the surface shape of the substrate is completely controlled by the unique method. For example, we can regularly produce a substrate with its flatness of triple 5ums; front side flatness, back side flatness and total thickness variation are all less than 5μm. Furthermore, we are able to supply a substrate with its flatness of triple 3ums for G6-photomask size advanced grade, believed to be needed in near future.
Heat capacity of a self-gravitating spherical shell of radiations
NASA Astrophysics Data System (ADS)
Kim, Hyeong-Chan
2017-10-01
We study the heat capacity of a static system of self-gravitating radiations analytically in the context of general relativity. To avoid the complexity due to a conical singularity at the center, we excise the central part and replace it with a regular spherically symmetric distribution of matters of which specifications we are not interested in. We assume that the mass inside the inner boundary and the locations of the inner and the outer boundaries are given. Then, we derive a formula relating the variations of physical parameters at the outer boundary with those at the inner boundary. Because there is only one free variation at the inner boundary, the variations at the outer boundary are related, which determines the heat capacity. To get an analytic form for the heat capacity, we use the thermodynamic identity δ Srad=β δ Mrad additionally, which is derived from the variational relation of the entropy formula with the restriction that the mass inside the inner boundary does not change. Even if the radius of the inner boundary of the shell goes to zero, in the presence of a central conical singularity, the heat capacity does not go to the form of the regular sphere. An interesting discovery is that another legitimate temperature can be defined at the inner boundary which is different from the asymptotic one β-1.
Lewer, Dan; O'Reilly, Claire; Mojtabai, Ramin; Evans-Lacko, Sara
2015-09-01
Prescribing of antidepressants varies widely between European countries despite no evidence of difference in the prevalence of affective disorders. To investigate associations between the use of antidepressants, country-level spending on healthcare and country-level attitudes towards mental health problems. We used Eurobarometer 2010, a large general population survey from 27 European countries, to measure antidepressant use and regularity of use. We then analysed the associations with country-level spending on healthcare and country-level attitudes towards mental health problems. Higher country spending on healthcare was strongly associated with regular use of antidepressants. Beliefs that mentally ill people are 'dangerous' were associated with higher use, and beliefs that they 'never recover' or 'have themselves to blame' were associated with lower and less regular use of antidepressants. Contextual factors, such as healthcare spending and public attitudes towards mental illness, may partly explain variations in antidepressant use and regular use of these medications. © The Royal College of Psychiatrists 2015.
Drouin-Chartier, Jean-Philippe; Brassard, Didier; Tessier-Grenier, Maude; Côté, Julie Anne; Labonté, Marie-Ève; Desroches, Sophie; Couture, Patrick; Lamarche, Benoît
2016-01-01
The objective of this systematic review was to determine if dairy product consumption is detrimental, neutral, or beneficial to cardiovascular health and if the recommendation to consume reduced-fat as opposed to regular-fat dairy is evidence-based. A systematic review of meta-analyses of prospective population studies associating dairy consumption with cardiovascular disease (CVD), coronary artery disease (CAD), stroke, hypertension, metabolic syndrome (MetS), and type 2 diabetes (T2D) was conducted on the basis of the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) statement. Quality of evidence was rated by using the Grading of Recommendations Assessment, Development, and Evaluation scale. High-quality evidence supports favorable associations between total dairy intake and hypertension risk and between low-fat dairy and yogurt intake and the risk of T2D. Moderate-quality evidence suggests favorable associations between intakes of total dairy, low-fat dairy, cheese, and fermented dairy and the risk of stroke; intakes of low-fat dairy and milk and the risk of hypertension; total dairy and milk consumption and the risk of MetS; and total dairy and cheese and the risk of T2D. High- to moderate-quality evidence supports neutral associations between the consumption of total dairy, cheese, and yogurt and CVD risk; the consumption of any form of dairy, except for fermented, and CAD risk; the consumption of regular- and high-fat dairy, milk, and yogurt and stroke risk; the consumption of regular- and high-fat dairy, cheese, yogurt, and fermented dairy and hypertension risk; and the consumption of regular- and high-fat dairy, milk, and fermented dairy and T2D risk. Data from this systematic review indicate that the consumption of various forms of dairy products shows either favorable or neutral associations with cardiovascular-related clinical outcomes. The review also emphasizes that further research is urgently needed to compare the impact of low-fat with regular- and high-fat dairy on cardiovascular-related clinical outcomes in light of current recommendations to consume low-fat dairy. PMID:28140321
Drouin-Chartier, Jean-Philippe; Brassard, Didier; Tessier-Grenier, Maude; Côté, Julie Anne; Labonté, Marie-Ève; Desroches, Sophie; Couture, Patrick; Lamarche, Benoît
2016-11-01
The objective of this systematic review was to determine if dairy product consumption is detrimental, neutral, or beneficial to cardiovascular health and if the recommendation to consume reduced-fat as opposed to regular-fat dairy is evidence-based. A systematic review of meta-analyses of prospective population studies associating dairy consumption with cardiovascular disease (CVD), coronary artery disease (CAD), stroke, hypertension, metabolic syndrome (MetS), and type 2 diabetes (T2D) was conducted on the basis of the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) statement. Quality of evidence was rated by using the Grading of Recommendations Assessment, Development, and Evaluation scale. High-quality evidence supports favorable associations between total dairy intake and hypertension risk and between low-fat dairy and yogurt intake and the risk of T2D. Moderate-quality evidence suggests favorable associations between intakes of total dairy, low-fat dairy, cheese, and fermented dairy and the risk of stroke; intakes of low-fat dairy and milk and the risk of hypertension; total dairy and milk consumption and the risk of MetS; and total dairy and cheese and the risk of T2D. High- to moderate-quality evidence supports neutral associations between the consumption of total dairy, cheese, and yogurt and CVD risk; the consumption of any form of dairy, except for fermented, and CAD risk; the consumption of regular- and high-fat dairy, milk, and yogurt and stroke risk; the consumption of regular- and high-fat dairy, cheese, yogurt, and fermented dairy and hypertension risk; and the consumption of regular- and high-fat dairy, milk, and fermented dairy and T2D risk. Data from this systematic review indicate that the consumption of various forms of dairy products shows either favorable or neutral associations with cardiovascular-related clinical outcomes. The review also emphasizes that further research is urgently needed to compare the impact of low-fat with regular- and high-fat dairy on cardiovascular-related clinical outcomes in light of current recommendations to consume low-fat dairy. © 2016 American Society for Nutrition.
34 CFR 690.8 - Enrollment status for students taking regular and correspondence courses.
Code of Federal Regulations, 2010 CFR
2010-07-01
... No. of credit hours regular work No. of credit hours correspondence Total course load in credit hours... institution, the correspondence work may be included in determining the student's enrollment status to the... section, the correspondence work that may be included in determining a student's enrollment status is that...
Self-Reported Obstacles to Regular Dental Care among Information Technology Professionals.
Reddy, L Swetha; Doshi, Dolar; Reddy, B Srikanth; Kulkarni, Suhas; Reddy, M Padma; Satyanarayana, D; Baldava, Pavan
2016-10-01
Good oral health is important for an individual as well as social well-being. Occupational stress and work exhaustion in Information Technology (IT) professionals may influence the oral health and oral health related quality of life. To assess and compare self-reported obstacles for regular dental care and dental visits among IT professionals based on age, gender, dental insurance and working days per week. A cross-sectional study was conducted among 1,017 IT professionals to assess the self-reported obstacles to regular oral health care in Hyderabad city, Telangana, India. The Dental Rejection of Innovation Scale (DRI-S) was employed in this study. Comparison between means of DRI-S based on variables was done using t-test and ANOVA. The association between variables and DRI-S was determined using Chi-square test. A total of 1017 participants comprising of 574 (56%) males and 443 (44%) females participated in the study. As age increased, a significant increase in mean DRI-S scores was seen for total and individual domains except for the "Situational" domain wherein higher mean score (9.42±2.5; p=0.0006) was observed among 30-39 years age group. Even though females reported higher mean scores for total and individual domains when compared to males, nevertheless significant difference was seen only for total (p=0.03) and "Lack of Knowledge" (p=0.001) domain. Self-reported obstacles to regular dental care was more with increasing age, increased number of working days per week, irregular dental visits and absence of dental insurance facility.
NASA Astrophysics Data System (ADS)
Bucekova, Marcela; Valachova, Ivana; Kohutova, Lenka; Prochazka, Emanuel; Klaudiny, Jaroslav; Majtan, Juraj
2014-08-01
Antibacterial properties of honey largely depend on the accumulation of hydrogen peroxide (H2O2), which is generated by glucose oxidase (GOX)-mediated conversion of glucose in diluted honey. However, honeys exhibit considerable variation in their antibacterial activity. Therefore, the aim of the study was to identify the mechanism behind the variation in this activity and in the H2O2 content in honeys associated with the role of GOX in this process. Immunoblots and in situ hybridization analyses demonstrated that gox is solely expressed in the hypopharyngeal glands of worker bees performing various tasks and not in other glands or tissues. Real-time PCR with reference genes selected for worker heads shows that the gox expression progressively increases with ageing of the youngest bees and nurses and reached the highest values in processor bees. Immunoblot analysis of honey samples revealed that GOX is a regular honey component but its content significantly varied among honeys. Neither botanical source nor geographical origin of honeys affected the level of GOX suggesting that some other factors such as honeybee nutrition and/or genetic/epigenetic factors may take part in the observed variation. A strong correlation was found between the content of GOX and the level of generated H2O2 in honeys except honeydew honeys. Total antibacterial activity of most honey samples against Pseudomonas aeruginosa isolate significantly correlated with the H2O2 content. These results demonstrate that the level of GOX can significantly affect the total antibacterial activity of honey. They also support an idea that breeding of novel honeybee lines expressing higher amounts of GOX could help to increase the antibacterial efficacy of the hypopharyngeal gland secretion that could have positive influence on a resistance of colonies against bacterial pathogens.
ERIC Educational Resources Information Center
Murphy, David T.
1984-01-01
Proposes a variation of the two-stem system of analyzing the Russian verb. The need for greater organization and systematization is stressed, as well as an increased focus on the great regularity of the Russian verb, and the relative simplicity of Russian verbal morphology. (SL)
Managing a closed-loop supply chain inventory system with learning effects
NASA Astrophysics Data System (ADS)
Jauhari, Wakhid Ahmad; Dwicahyani, Anindya Rachma; Hendaryani, Oktiviandri; Kurdhi, Nughthoh Arfawi
2018-02-01
In this paper, we propose a closed-loop supply chain model consisting of a retailer and a manufacturer. We intend to investigate the impact of learning in regular production, remanufacturing and reworking. The customer demand is assumed deterministic and will be satisfied from both regular production and remanufacturing process. The return rate of used items depends on quality. We propose a mathematical model with the objective is to maximize the joint total profit by simultaneously determining the length of ordering cycle for the retailer and the number of regular production and remanufacturing cycle. The algorithm is suggested for finding the optimal solution. A numerical example is presented to illustrate the application of using a proposed model. The results show that the integrated model performs better in reducing total cost compared to the independent model. The total cost is most affected by the changes in the values of unit production cost and acceptable quality level. In addition, the changes in the defective items proportion and the fraction of holding costs significantly influence the retailer's ordering period.
[Cervical tinnitus treated by acupuncture based on "jin" theory: a clinical observation].
Dong, Youkang; Wang, Yi
2016-04-01
To compare the efficacy among acupuncture based on "jin" theory, regular acupuncture and western medication. A total of 95 cases, by using incomplete randomization method, were divided into a "jin" theory acupuncture group (32 cases), a regular acupuncture group (31 cases) and a medication group (32 cases). Patients in the "jin" theory acupuncture group were treated with acupuncture based on "jin" theory which included the "gather" and "knot" points on the affected side: positive reacted points, Fengchi (GB 20), Tianrong (SI 17), Tianyou (TE16) and Yiming (EX-HN14) as the main acupoints, while the Ermen (TE 21), Tinggong (SI 19) and Tinghui (GB 2) and zhigou (TE 6) as the auxiliary acpoints; the treatment was given once a day. Patients in the regular acupuncture group were treated with regular acupuncture at Tinggong (SI 19), Tin- ghui (GB 2) and Ermen (TE 21) and other matched acupoints based on syndrome differentiation, once a day. Pa- tients in the medication group were treated with oral administration of betahistine mesylate, three times a day. Ten days of treatment were taken as one session in three groups, and totally 2 sessions were given. Visual analogue scale (VAS), tinnitus handicap inventory (THD), and tinnitus severity assessment scale (TSIS) were evaluated before and after treatment; also the clinical efficacy was compared among three groups. There are 5 drop-out cases du- ring the study. After the treatment, the VAS, THI and TSIS were improved in three groups (all P < 0.05); the VAS, THI and TSIS in the "jin" theory acupuncture group were lower than those in the regular acupuncture group and medication group (P < 0.05, P < 0.01). The total effective rate was 90.0% (27/30), 80.0% (24/30) and 63.3% (19/30), which was higher in the "jin" theory acupuncture group (P < 0.05, P < 0.01). The acupuncture based on "jin" theory is superior to regular acupuncture and western medication for cervical tinnitus.
NASA Astrophysics Data System (ADS)
Loganathan, K.; Ahamed, A. Jafar
2017-12-01
The study of groundwater in Amaravathi River basin of Karur District resulted in large geochemical data set. A total of 24 water samples were collected and analyzed for physico-chemical parameters, and the abundance of cation and anion concentrations was in the following order: Na+ > Ca2+ > Mg2+ > K+ = Cl- > HCO3 - > SO4 2-. Correlation matrix shows that the basic ionic chemistry is influenced by Na+, Ca2+, Mg2+, and Cl-, and also suggests that the samples contain Na+-Cl-, Ca2+-Cl- an,d mixed Ca2+-Mg2+-Cl- types of water. HCO3 -, SO4 2-, and F- association is less than that of other parameters due to poor or less available of bearing minerals. PCA extracted six components, which are accountable for the data composition explaining 81% of the total variance of the data set and allowed to set the selected parameters according to regular features as well as to evaluate the frequency of each group on the overall variation in water quality. Cluster analysis results show that groundwater quality does not vary extensively as a function of seasons, but shows two main clusters.
Hydrologic controls on aperiodic spatial organization of the ridge-slough patterned landscape
NASA Astrophysics Data System (ADS)
Casey, Stephen T.; Cohen, Matthew J.; Acharya, Subodh; Kaplan, David A.; Jawitz, James W.
2016-11-01
A century of hydrologic modification has altered the physical and biological drivers of landscape processes in the Everglades (Florida, USA). Restoring the ridge-slough patterned landscape, a dominant feature of the historical system, is a priority but requires an understanding of pattern genesis and degradation mechanisms. Physical experiments to evaluate alternative pattern formation mechanisms are limited by the long timescales of peat accumulation and loss, necessitating model-based comparisons, where support for a particular mechanism is based on model replication of extant patterning and trajectories of degradation. However, multiple mechanisms yield a central feature of ridge-slough patterning (patch elongation in the direction of historical flow), limiting the utility of that characteristic for discriminating among alternatives. Using data from vegetation maps, we investigated the statistical features of ridge-slough spatial patterning (ridge density, patch perimeter, elongation, patch size distributions, and spatial periodicity) to establish more rigorous criteria for evaluating model performance and to inform controls on pattern variation across the contemporary system. Mean water depth explained significant variation in ridge density, total perimeter, and length : width ratios, illustrating an important pattern response to existing hydrologic gradients. Two independent analyses (2-D periodograms and patch size distributions) provide strong evidence against regular patterning, with the landscape exhibiting neither a characteristic wavelength nor a characteristic patch size, both of which are expected under conditions that produce regular patterns. Rather, landscape properties suggest robust scale-free patterning, indicating genesis from the coupled effects of local facilitation and a global negative feedback operating uniformly at the landscape scale. Critically, this challenges widespread invocation of scale-dependent negative feedbacks for explaining ridge-slough pattern origins. These results help discern among genesis mechanisms and provide an improved statistical description of the landscape that can be used to compare among model outputs, as well as to assess the success of future restoration projects.
Frankowiak, Katarzyna; Kret, Sławomir; Mazur, Maciej; Meibom, Anders; Kitahara, Marcelo V; Stolarski, Jarosław
2016-01-01
Understanding the evolution of scleractinian corals on geological timescales is key to predict how modern reef ecosystems will react to changing environmental conditions in the future. Important to such efforts has been the development of several skeleton-based criteria to distinguish between the two major ecological groups of scleractinians: zooxanthellates, which live in symbiosis with dinoflagellate algae, and azooxanthellates, which lack endosymbiotic dinoflagellates. Existing criteria are based on overall skeletal morphology and bio/geo-chemical indicators-none of them being particularly robust. Here we explore another skeletal feature, namely fine-scale growth banding, which differs between these two groups of corals. Using various ultra-structural imaging techniques (e.g., TEM, SEM, and NanoSIMS) we have characterized skeletal growth increments, composed of doublets of optically light and dark bands, in a broad selection of extant symbiotic and asymbiotic corals. Skeletons of zooxanthellate corals are characterized by regular growth banding, whereas in skeletons of azooxanthellate corals the growth banding is irregular. Importantly, the regularity of growth bands can be easily quantified with a coefficient of variation obtained by measuring bandwidths on SEM images of polished and etched skeletal surfaces of septa and/or walls. We find that this coefficient of variation (lower values indicate higher regularity) ranges from ~40 to ~90% in azooxanthellate corals and from ~5 to ~15% in symbiotic species. With more than 90% (28 out of 31) of the studied corals conforming to this microstructural criterion, it represents an easy and robust method to discriminate between zooxanthellate and azooxanthellate corals. This microstructural criterion has been applied to the exceptionally preserved skeleton of the Triassic (Norian, ca. 215 Ma) scleractinian Volzeia sp., which contains the first example of regular, fine-scale banding of thickening deposits in a fossil coral of this age. The regularity of its growth banding strongly suggests that the coral was symbiotic with zooxanthellates.
Frankowiak, Katarzyna; Kret, Sławomir; Mazur, Maciej; Meibom, Anders; Kitahara, Marcelo V.; Stolarski, Jarosław
2016-01-01
Understanding the evolution of scleractinian corals on geological timescales is key to predict how modern reef ecosystems will react to changing environmental conditions in the future. Important to such efforts has been the development of several skeleton-based criteria to distinguish between the two major ecological groups of scleractinians: zooxanthellates, which live in symbiosis with dinoflagellate algae, and azooxanthellates, which lack endosymbiotic dinoflagellates. Existing criteria are based on overall skeletal morphology and bio/geo-chemical indicators—none of them being particularly robust. Here we explore another skeletal feature, namely fine-scale growth banding, which differs between these two groups of corals. Using various ultra-structural imaging techniques (e.g., TEM, SEM, and NanoSIMS) we have characterized skeletal growth increments, composed of doublets of optically light and dark bands, in a broad selection of extant symbiotic and asymbiotic corals. Skeletons of zooxanthellate corals are characterized by regular growth banding, whereas in skeletons of azooxanthellate corals the growth banding is irregular. Importantly, the regularity of growth bands can be easily quantified with a coefficient of variation obtained by measuring bandwidths on SEM images of polished and etched skeletal surfaces of septa and/or walls. We find that this coefficient of variation (lower values indicate higher regularity) ranges from ~40 to ~90% in azooxanthellate corals and from ~5 to ~15% in symbiotic species. With more than 90% (28 out of 31) of the studied corals conforming to this microstructural criterion, it represents an easy and robust method to discriminate between zooxanthellate and azooxanthellate corals. This microstructural criterion has been applied to the exceptionally preserved skeleton of the Triassic (Norian, ca. 215 Ma) scleractinian Volzeia sp., which contains the first example of regular, fine-scale banding of thickening deposits in a fossil coral of this age. The regularity of its growth banding strongly suggests that the coral was symbiotic with zooxanthellates. PMID:26751803
Moeller, David A
2005-01-01
The structure of diverse floral visitor assemblages and the nature of spatial variation in plant-pollinator interactions have important consequences for floral evolution and reproductive interactions among pollinator-sharing plant species. In this study, I use surveys of floral visitor communities across the geographic range of Clarkia xantiana ssp. xantiana (hereafter C. x. xantiana) (Onagraceae) to examine the structure of visitor communities, the specificity of the pollination system, and the role of variation in the abiotic vs. biotic environment in contributing to spatial variation in pollinator abundance and community composition. Although the assemblage of bee visitors to C. x. xantiana is very diverse (49 species), few were regular visitors and likely to act as pollinators. Seventy-four percent of visitor species accounted for only 11% of total visitor abundance and 69% were collected in three or fewer plant populations (of ten). Of the few reliable visitors, Clarkia pollen specialist bees were the most frequent visitors, carried more Clarkia pollen compared to generalist foragers, and were less likely to harbor foreign pollen. Overall, the core group of pollinators was obscured by high numbers of incidental visitors that are unlikely to contribute to pollination. In a geographic context, the composition of specialist pollinator assemblages varied considerably along the abiotic gradient spanning the subspecies' range. However, the overall abundance of specialist pollinators in plant populations was not influenced by the broad-scale abiotic gradient but strongly affected by local plant community associations. C. x. xantiana populations sympatric with pollinator-sharing congeners were visited twice as often by specialists compared to populations occurring alone. These positive indirect interactions among plant species may promote population persistence and species coexistence by enhancing individual reproductive success.
Annual and Seasonal Global Variation in Total Ozone and Layer-Mean Ozone, 1958-1987 (1991)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Angell, J. K.; Korshover, J.; Planet, W. G.
For 1958 through 1987, this data base presents total ozone variations and layer mean ozone variations expressed as percent deviations from the 1958 to 1977 mean. The total ozone variations were derived from mean monthly ozone values published in Ozone Data for the World by the Atmospheric Environment Service in cooperation with the World Meteorological Organization. The layer mean ozone variations are derived from ozonesonde and Umkehr observations. The data records include year, seasonal and annual total ozone variations, and seasonal and annual layer mean ozone variations. The total ozone data are for four regions (Soviet Union, Europe, North America,more » and Asia); five climatic zones (north and south polar, north and south temperate, and tropical); both hemispheres; and the world. Layer mean ozone data are for four climatic zones (north and south temperate and north and south polar) and for the stratosphere, troposphere, and tropopause layers. The data are in two files [seasonal and year-average total ozone (13.4 kB) and layer mean ozone variations (24.2 kB)].« less
A regularized clustering approach to brain parcellation from functional MRI data
NASA Astrophysics Data System (ADS)
Dillon, Keith; Wang, Yu-Ping
2017-08-01
We consider a data-driven approach for the subdivision of an individual subject's functional Magnetic Resonance Imaging (fMRI) scan into regions of interest, i.e., brain parcellation. The approach is based on a computational technique for calculating resolution from inverse problem theory, which we apply to neighborhood selection for brain connectivity networks. This can be efficiently calculated even for very large images, and explicitly incorporates regularization in the form of spatial smoothing and a noise cutoff. We demonstrate the reproducibility of the method on multiple scans of the same subjects, as well as the variations between subjects.
NASA Technical Reports Server (NTRS)
Lean, J.
1990-01-01
Enhanced emission from bright solar faculae is a source of significant variation in the sun's total irradiance. Relative to the emission from the quiet sun, facular emission is known to be considerably greater at UV wavelengths than at visible wavelengths. Determining the spectral dependence of facular emission is of interest for the physical insight this may provide to the origin of the sun's irradiance variations. It is also of interest because solar radiation at lambda less than 300 nm is almost totally absorbed in the Earth's atmosphere. Depending on the magnitude of the UV irradiance variations, changes in the sun's irradiance that penetrates to the Earth's surface may not be equivalent to total irradiance variations measured above the Earth's atmosphere. Using an empirical model of total irradiance variations which accounts separately for changes caused by bright faculae from those associated with dark sunspots, the contribution of UV irradiance variations to changes in the sun's total irradiance is estimated during solar cycles 12 to 21.
Compressed modes for variational problems in mathematics and physics
Ozoliņš, Vidvuds; Lai, Rongjie; Caflisch, Russel; Osher, Stanley
2013-01-01
This article describes a general formalism for obtaining spatially localized (“sparse”) solutions to a class of problems in mathematical physics, which can be recast as variational optimization problems, such as the important case of Schrödinger’s equation in quantum mechanics. Sparsity is achieved by adding an regularization term to the variational principle, which is shown to yield solutions with compact support (“compressed modes”). Linear combinations of these modes approximate the eigenvalue spectrum and eigenfunctions in a systematically improvable manner, and the localization properties of compressed modes make them an attractive choice for use with efficient numerical algorithms that scale linearly with the problem size. PMID:24170861
Compressed modes for variational problems in mathematics and physics.
Ozolins, Vidvuds; Lai, Rongjie; Caflisch, Russel; Osher, Stanley
2013-11-12
This article describes a general formalism for obtaining spatially localized ("sparse") solutions to a class of problems in mathematical physics, which can be recast as variational optimization problems, such as the important case of Schrödinger's equation in quantum mechanics. Sparsity is achieved by adding an regularization term to the variational principle, which is shown to yield solutions with compact support ("compressed modes"). Linear combinations of these modes approximate the eigenvalue spectrum and eigenfunctions in a systematically improvable manner, and the localization properties of compressed modes make them an attractive choice for use with efficient numerical algorithms that scale linearly with the problem size.
Cercamondi, Colin I.; Egli, Ines M.; Mitchikpe, Evariste; Tossou, Felicien; Zeder, Christophe; Hounhouigan, Joseph D.; Hurrell, Richard F.
2013-01-01
Iron biofortification of pearl millet (Pennisetum glaucum) is a promising approach to combat iron deficiency (ID) in the millet-consuming communities of developing countries. To evaluate the potential of iron-biofortified millet to provide additional bioavailable iron compared with regular millet and post-harvest iron-fortified millet, an iron absorption study was conducted in 20 Beninese women with marginal iron status. Composite test meals consisting of millet paste based on regular-iron, iron-biofortified, or post-harvest iron-fortified pearl millet flour accompanied by a leafy vegetable sauce or an okra sauce were fed as multiple meals for 5 d. Iron absorption was measured as erythrocyte incorporation of stable iron isotopes. Fractional iron absorption from test meals based on regular-iron millet (7.5%) did not differ from iron-biofortified millet meals (7.5%; P = 1.0), resulting in a higher quantity of total iron absorbed from the meals based on iron-biofortified millet (1125 vs. 527 μg; P < 0.0001). Fractional iron absorption from post-harvest iron-fortified millet meals (10.4%) was higher than from regular-iron and iron-biofortified millet meals (P < 0.05 and P < 0.01, respectively), resulting in a higher quantity of total iron absorbed from the post-harvest iron-fortified millet meals (1500 μg; P < 0.0001 and P < 0.05, respectively). Results indicate that consumption of iron-biofortified millet would double the amount of iron absorbed and, although fractional absorption of iron from biofortification is less than that from fortification, iron-biofortified millet should be highly effective in combatting ID in millet-consuming populations. PMID:23884388
Correlations of multiscale entropy in the FX market
NASA Astrophysics Data System (ADS)
Stosic, Darko; Stosic, Dusan; Ludermir, Teresa; Stosic, Tatijana
2016-09-01
The regularity of price fluctuations in exchange rates plays a crucial role in FX market dynamics. Distinct variations in regularity arise from economic, social and political events, such as interday trading and financial crisis. This paper applies a multiscale time-dependent entropy method on thirty-three exchange rates to analyze price fluctuations in the FX. Correlation matrices of entropy values, termed entropic correlations, are in turn used to describe global behavior of the market. Empirical results suggest a weakly correlated market with pronounced collective behavior at bi-weekly trends. Correlations arise from cycles of low and high regularity in long-term trends. Eigenvalues of the correlation matrix also indicate a dominant European market, followed by shifting American, Asian, African, and Pacific influences. As a result, we find that entropy is a powerful tool for extracting important information from the FX market.
Speckle reduction in optical coherence tomography by adaptive total variation method
NASA Astrophysics Data System (ADS)
Wu, Tong; Shi, Yaoyao; Liu, Youwen; He, Chongjun
2015-12-01
An adaptive total variation method based on the combination of speckle statistics and total variation restoration is proposed and developed for reducing speckle noise in optical coherence tomography (OCT) images. The statistical distribution of the speckle noise in OCT image is investigated and measured. With the measured parameters such as the mean value and variance of the speckle noise, the OCT image is restored by the adaptive total variation restoration method. The adaptive total variation restoration algorithm was applied to the OCT images of a volunteer's hand skin, which showed effective speckle noise reduction and image quality improvement. For image quality comparison, the commonly used median filtering method was also applied to the same images to reduce the speckle noise. The measured results demonstrate the superior performance of the adaptive total variation restoration method in terms of image signal-to-noise ratio, equivalent number of looks, contrast-to-noise ratio, and mean square error.
Seasonal variation in the occurrence of planktic bivalve larvae in the Schleswig-Holstein Wadden Sea
NASA Astrophysics Data System (ADS)
Pulfrich, Andrea
1997-03-01
In the late 1980s, recruitment failures of the mussel Mytilus edulis led to economic problems in the mussel fishing and cultivation industries of northwestern Europe. As part of a collaborative study to gain a better understanding of the mechanisms affecting recruitment processes of mussels, plankton samples were collected regularly over a four-year period (1990-1993) from three stations in the Schleswig-Holstein Wadden Sea. The bivalve component of the plankton was dominated by the Solenidae, which was almost exclusively represented by Ensis americanus (= directus). M. edulis was the second most abundant species. Abundances of mussel larvae peaked 2 to 4 weeks after spawning maxima in the adult populations. Although variations in timing and amplitude of the total larval densities occurred, annual abundances of M. edulis larvae remained stable during the study period, and regional abundance differences were insignificant. A close relationship was found between peaks in larval abundance and phytoplankton blooms. Differences in larval concentrations in the ebb and the flow currents were insignificant. Planktic mussel larvae measured between 200 μm and 300 μm, and successive cohorts were recognizable in the majority of samples. Most larvae were found to originate from local stocks, although imports from outside the area do occur.
NASA Astrophysics Data System (ADS)
Shi, Zhenhua; Yu, Hui; Sun, Yongyan; Yang, Chuanjun; Lian, Huiyong; Cai, Peng
2015-02-01
A literal mountain of documentation generated in the past five decades showing unmistakable health hazards associated with extremely low-frequency electromagnetic fields (ELF-EMFs) exposure. However, the relation between energy mechanism and ELF-EMF exposure is poorly understood. In this study, Caenorhabditis elegans was exposed to 50 Hz ELF-EMF at intensities of 0.5, 1, 2, and 3 mT, respectively. Their metabolite variations were analyzed by GC-TOF/MS-based metabolomics. Although minimal metabolic variations and no regular pattern were observed, the contents of energy metabolism-related metabolites such as pyruvic acid, fumaric acid, and L-malic acid were elevated in all the treatments. The expressions of nineteen related genes that encode glycolytic enzymes were analyzed by using quantitative real-time PCR. Only genes encoding GAPDH were significantly upregulated (P < 0.01), and this result was further confirmed by western blot analysis. The enzyme activity of GAPDH was increased (P < 0.01), whereas the total intracellular ATP level was decreased. While no significant difference in lifespan, hatching rate and reproduction, worms exposed to ELF-EMF exhibited less food consumption compared with that of the control (P < 0.01). In conclusion, C. elegans exposed to ELF-EMF have enhanced energy metabolism and restricted dietary, which might contribute to the resistance against exogenous ELF-EMF stress.
Consistencies Far beyond Chance: An Analysis of Learner Preconceptions of Reflective Symmetry
ERIC Educational Resources Information Center
Mhlolo, Michael Kainose; Schafer, Marc
2013-01-01
This article reports on regularities observed in learners' preconceptions of reflective symmetry. Literature suggests that the very existence of such regularities indicates a gap between what learners know and what they need to know. Such a gap inhibits further understanding and application, and hence needed to be investigated. A total of 235…
Project Physics Handbook 1, Concepts of Motion.
ERIC Educational Resources Information Center
Harvard Univ., Cambridge, MA. Harvard Project Physics.
Thirteen experiments and 15 activities are presented in this unit handbook for student use. The experiment sections are concerned with naked-eye observation in astronomy, regularity and time, variations in data, uniform motion, gravitational acceleration, Galileo's experiments, Netson's laws, inertial and gravitational mass, trajectories, and…
Imitation, Awareness, and Folk Linguistic Artifacts
ERIC Educational Resources Information Center
Brunner, Elizabeth Gentry
2010-01-01
Imitations are sophisticated performances displaying regular patterns. The study of imitation allows linguists to understand speakers' perceptions of sociolinguistic variation. In this dissertation, I analyze imitations of non-native accents in order to answer two questions: what can imitation reveal about perception, and how are "folk linguistic…
TH-AB-BRA-02: Automated Triplet Beam Orientation Optimization for MRI-Guided Co-60 Radiotherapy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nguyen, D; Thomas, D; Cao, M
2016-06-15
Purpose: MRI guided Co-60 provides daily and intrafractional MRI soft tissue imaging for improved target tracking and adaptive radiotherapy. To remedy the low output limitation, the system uses three Co-60 sources at 120° apart, but using all three sources in planning is considerably unintuitive. We automate the beam orientation optimization using column generation, and then solve a novel fluence map optimization (FMO) problem while regularizing the number of MLC segments. Methods: Three patients—1 prostate (PRT), 1 lung (LNG), and 1 head-and-neck boost plan (H&NBoost)—were evaluated. The beamlet dose for 180 equally spaced coplanar beams under 0.35 T magnetic field wasmore » calculated using Monte Carlo. The 60 triplets were selected utilizing the column generation algorithm. The FMO problem was formulated using an L2-norm minimization with anisotropic total variation (TV) regularization term, which allows for control over the number of MLC segments. Our Fluence Regularized and Optimized Selection of Triplets (FROST) plans were compared against the clinical treatment plans (CLN) produced by an experienced dosimetrist. Results: The mean PTV D95, D98, and D99 differ by −0.02%, +0.12%, and +0.44% of the prescription dose between planning methods, showing same PTV dose coverage. The mean PTV homogeneity (D95/D5) was at 0.9360 (FROST) and 0.9356 (CLN). R50 decreased by 0.07 with FROST. On average, FROST reduced Dmax and Dmean of OARs by 6.56% and 5.86% of the prescription dose. The manual CLN planning required iterative trial and error runs which is very time consuming, while FROST required minimal human intervention. Conclusions: MRI guided Co-60 therapy needs the output of all sources yet suffers from unintuitive and laborious manual beam selection processes. Automated triplet orientation optimization is shown essential to overcome the difficulty and improves the dosimetry. A novel FMO with regularization provides additional controls over the number of MLC segments and treatment time. Varian Medical Systems; NIH grant R01CA188300; NIH grant R43CA183390.« less
Lack of nucleotide variability in a beetle pest with extreme inbreeding.
Andreev, D; Breilid, H; Kirkendall, L; Brun, L O; ffrench-Constant, R H
1998-05-01
The coffee berry borer beetle Hypothenemus hampei (Ferrari) (Curculionidae: Scolytinae) is the major insect pest of coffee and has spread to most of the coffee-growing countries of the world. This beetle also displays an unusual life cycle, with regular sibling mating. This regular inbreeding and the population bottlenecks occurring on colonization of new regions should lead to low levels of genetic diversity. We were therefore interested in determining the level of nucleotide variation in nuclear and mitochondrial genomes of this beetle worldwide. Here we show that two nuclear loci (Resistance to dieldrin and ITS2) are completely invariant, whereas some variability is maintained at a mitochondrial locus (COI), probably corresponding to a higher mutation rate in the mitochondrial genome. Phylogenetic analysis of the mitochondrial data shows only two clades of beetle haplotypes outside of Kenya, the proposed origin of the species. These data confirm that inbreeding greatly reduces nucleotide variation and suggest the recent global spread of only two inbreeding lines of this bark beetle.
Contributions from the data samples in NOC technique on the extracting of the Sq variation
NASA Astrophysics Data System (ADS)
Wu, Yingyan; Xu, Wenyao
2015-04-01
The solar quiet daily variation, Sq, a rather regular variation is usually observed at mid-low latitudes on magnetic quiet days or less-disturbed days. It is mainly resulted from the dynamo currents in the ionospheric E region, which are driven by the atmospheric tidal wind and different processes and flow as two current whorls in each of the northern and southern hemispheres[1]. The Sq exhibits a conspicuous day-to-day (DTD) variability in daily range (or strength), shape (or phase) and its current focus. This variability is mainly attributed to changes in the ionospheric conductivity and tidal winds, varying with solar radiation and ionospheric conditions. Furthermore, it presents a seasonal variation and solar cycle variation[2-4]. In generally, Sq is expressed with the average value of the five international magnetic quiet days. Using data from global magnetic stations, equivalent current system of daily variation can be constructed to reveal characteristics of the currents[5]. In addition, using the differences of H component at two stations on north and south side of the Sq currents of focus, Sq is extracted much better[6]. Recently, the method of Natural Orthoganal Components (NOC) is used to decompose the magnetic daily variation and express it as the summation of eigenmodes, and indicate the first NOC eigenmode as the solar quiet daily variation, the second as the disturbance daily variation[7-9]. As we know, the NOC technique can help reveal simpler patterns within a complex set of variables, without designed basic-functions such as FFT technique. But the physical explanation of the NOC eigenmodes is greatly depends on the number of data samples and data regular-quality. Using the NOC method, we focus our present study on the analysis of the hourly means of the H component at BMT observatory in China from 2001 to 2008. The contributions of the number and the regular-quality of the data samples on which eigenmode corresponds to the Sq are analyzed, by using different number of data sample from 5 to 365. The result shows the first eigenmode expresses the Sq in most cases. 1.Campbell, W, Introduction to Geomagnetic Fields, Cambridge Univ. Press, New York. 1997 2.Hasegawa, M, Geomagnetic Sq current system, J. Geophys. Res., 1960, 65: 1437~ 1447 3.Tarpley J D. The Ionospheric wind dynanmo 2 solar tides. Planet. Space Sci., 1970, 18: 1091~ 1103 4.Richmond A D. Modeling the ionospheric wind dynamo a review. Pure Appl. Geophys., 1989, 131: 413 ~ 435 5.Suzuki, A., and H. Maeda (1978), Equivalent current systems of the daily geomagnetic variations in December 1964, Data Book No. 1, World Data Center C2 for Geomagnetic. 6.Hibberd, F H. Day-to-day variability of the Sq geomagnetic field variation, Aust. J. Phys., 1981, 34: 81~ 90 7.Xu, W.-Y., and Y. Kamide (2004), Decomposition of daily geomagnetic variation by using method of natural orthogonal component, J. Geophys. Res., 109(A5), A05218, doi:10.1029/2003JA010216. 8.Chen G X, Xu W Y, Du A M, and et al, Statistical characteristics of the day-to-day variability in the geomagnetic Sq field, J. Geophys. Res.,2007, 112, A06320, doi:10.1029/2006JA012059 9.Michelis P. De. Principal components' features of mid-latitude geomagnetic daily variation. Ann. Geophys., 2010,28: 1-14
38 CFR 4.30 - Convalescent ratings.
Code of Federal Regulations, 2010 CFR
2010-07-01
... RATING DISABILITIES General Policy in Rating § 4.30 Convalescent ratings. A total disability rating (100... by report at hospital discharge (regular discharge or release to non-bed care) or outpatient release... total ratings will not be subject to § 3.105(e) of this chapter. Such total rating will be followed by...
Nonlinear refraction and reflection travel time tomography
Zhang, Jiahua; ten Brink, Uri S.; Toksoz, M.N.
1998-01-01
We develop a rapid nonlinear travel time tomography method that simultaneously inverts refraction and reflection travel times on a regular velocity grid. For travel time and ray path calculations, we apply a wave front method employing graph theory. The first-arrival refraction travel times are calculated on the basis of cell velocities, and the later refraction and reflection travel times are computed using both cell velocities and given interfaces. We solve a regularized nonlinear inverse problem. A Laplacian operator is applied to regularize the model parameters (cell slownesses and reflector geometry) so that the inverse problem is valid for a continuum. The travel times are also regularized such that we invert travel time curves rather than travel time points. A conjugate gradient method is applied to minimize the nonlinear objective function. After obtaining a solution, we perform nonlinear Monte Carlo inversions for uncertainty analysis and compute the posterior model covariance. In numerical experiments, we demonstrate that combining the first arrival refraction travel times with later reflection travel times can better reconstruct the velocity field as well as the reflector geometry. This combination is particularly important for modeling crustal structures where large velocity variations occur in the upper crust. We apply this approach to model the crustal structure of the California Borderland using ocean bottom seismometer and land data collected during the Los Angeles Region Seismic Experiment along two marine survey lines. Details of our image include a high-velocity zone under the Catalina Ridge, but a smooth gradient zone between. Catalina Ridge and San Clemente Ridge. The Moho depth is about 22 km with lateral variations. Copyright 1998 by the American Geophysical Union.
Wallisch, Pascal; Ostojic, Srdjan
2016-01-01
Synaptic plasticity is sensitive to the rate and the timing of presynaptic and postsynaptic action potentials. In experimental protocols inducing plasticity, the imposed spike trains are typically regular and the relative timing between every presynaptic and postsynaptic spike is fixed. This is at odds with firing patterns observed in the cortex of intact animals, where cells fire irregularly and the timing between presynaptic and postsynaptic spikes varies. To investigate synaptic changes elicited by in vivo-like firing, we used numerical simulations and mathematical analysis of synaptic plasticity models. We found that the influence of spike timing on plasticity is weaker than expected from regular stimulation protocols. Moreover, when neurons fire irregularly, synaptic changes induced by precise spike timing can be equivalently induced by a modest firing rate variation. Our findings bridge the gap between existing results on synaptic plasticity and plasticity occurring in vivo, and challenge the dominant role of spike timing in plasticity. SIGNIFICANCE STATEMENT Synaptic plasticity, the change in efficacy of connections between neurons, is thought to underlie learning and memory. The dominant paradigm posits that the precise timing of neural action potentials (APs) is central for plasticity induction. This concept is based on experiments using highly regular and stereotyped patterns of APs, in stark contrast with natural neuronal activity. Using synaptic plasticity models, we investigated how irregular, in vivo-like activity shapes synaptic plasticity. We found that synaptic changes induced by precise timing of APs are much weaker than suggested by regular stimulation protocols, and can be equivalently induced by modest variations of the AP rate alone. Our results call into question the dominant role of precise AP timing for plasticity in natural conditions. PMID:27807166
Pituitary tumor-transforming gene 1 regulates the patterning of retinal mosaics
Keeley, Patrick W.; Zhou, Cuiqi; Lu, Lu; Williams, Robert W.; Melmed, Shlomo; Reese, Benjamin E.
2014-01-01
Neurons are commonly organized as regular arrays within a structure, and their patterning is achieved by minimizing the proximity between like-type cells, but molecular mechanisms regulating this process have, until recently, been unexplored. We performed a forward genetic screen using recombinant inbred (RI) strains derived from two parental A/J and C57BL/6J mouse strains to identify genomic loci controlling spacing of cholinergic amacrine cells, which is a subclass of retinal interneuron. We found conspicuous variation in mosaic regularity across these strains and mapped a sizeable proportion of that variation to a locus on chromosome 11 that was subsequently validated with a chromosome substitution strain. Using a bioinformatics approach to narrow the list of potential candidate genes, we identified pituitary tumor-transforming gene 1 (Pttg1) as the most promising. Expression of Pttg1 was significantly different between the two parental strains and correlated with mosaic regularity across the RI strains. We identified a seven-nucleotide deletion in the Pttg1 promoter in the C57BL/6J mouse strain and confirmed a direct role for this motif in modulating Pttg1 expression. Analysis of Pttg1 KO mice revealed a reduction in the mosaic regularity of cholinergic amacrine cells, as well as horizontal cells, but not in two other retinal cell types. Together, these results implicate Pttg1 in the regulation of homotypic spacing between specific types of retinal neurons. The genetic variant identified creates a binding motif for the transcriptional activator protein 1 complex, which may be instrumental in driving differential expression of downstream processes that participate in neuronal spacing. PMID:24927528
Zendegui, Elaina A; West, Julia A; Zandberg, Laurie J
2014-04-01
Cognitive behavioral guided self-help (CBTgsh) is an evidence-based, brief, and cost-effective treatment for eating disorders characterized by recurrent binge eating. However, more research is needed to improve patient outcomes and clarify treatment components most associated with symptom change. A main component of CBTgsh is establishing a regular pattern of eating to disrupt dietary restriction, which prior research has implicated in the maintenance of binge eating. The present study used session-by-session assessments of regular eating adherence and weekly binge totals to examine the association between binge frequency and regular eating in a sample of participants (n = 38) receiving 10 sessions of CBTgsh for recurrent binge eating. Analyses were conducted using Hierarchical Linear Modeling (HLM) to allow for data nesting, and a likelihood ratio test determined which out of three regression models best fit the data. Results demonstrated that higher regular eating adherence (3 meals and 2-3 planned snacks daily) was associated with lower weekly binge frequency in this sample, and both the magnitude and direction of the association were maintained after accounting for individual participant differences in binge and adherent day totals. Findings provide additional empirical support for the cognitive behavioral model informing CBTgsh. Possible clinical implications for treatment emphasis and sequencing in CBTgsh are discussed. Copyright © 2014 Elsevier Ltd. All rights reserved.
Contextual Cueing Effects across the Lifespan
ERIC Educational Resources Information Center
Merrill, Edward C.; Conners, Frances A.; Roskos, Beverly; Klinger, Mark R.; Klinger, Laura Grofer
2013-01-01
The authors evaluated age-related variations in contextual cueing, which reflects the extent to which visuospatial regularities can facilitate search for a target. Previous research produced inconsistent results regarding contextual cueing effects in young children and in older adults, and no study has investigated the phenomenon across the life…
A genetic algorithm approach to estimate glacier mass variations from GRACE data
NASA Astrophysics Data System (ADS)
Reimond, Stefan; Klinger, Beate; Krauss, Sandro; Mayer-Gürr, Torsten
2017-04-01
The application of a genetic algorithm (GA) to the inference of glacier mass variations with a point-mass modeling method is described. GRACE K-band ranging data (available since April 2002) processed at the Graz University of Technology serve as input for this study. The reformulation of the point-mass inversion method in terms of an optimization problem is motivated by two reasons: first, an improved choice of the positions of the modeled point-masses (with a particular focus on the depth parameter) is expected to increase the signal-to-noise ratio. Considering these coordinates as additional unknown parameters (besides from the mass change magnitudes) results in a highly non-linear optimization problem. The second reason is that the mass inversion from satellite tracking data is an ill-posed problem, and hence regularization becomes necessary. The main task in this context is the determination of the regularization parameter, which is typically done by means of heuristic selection rules like, e.g., the L-curve criterion. In this study, however, the challenge of selecting a suitable balancing parameter (or even a matrix) is tackled by introducing regularization to the overall optimization problem. Based on this novel approach, estimations of ice-mass changes in various alpine glacier systems (e.g. Svalbard) are presented and compared to existing results and alternative inversion methods.
Kralikova, Eva; Novak, Jan; West, Oliver; Kmetova, Alexandra; Hajek, Peter
2013-11-01
Electronic cigarettes (ECs) are becoming increasingly popular globally. If they were to replace conventional cigarettes, it could have a substantial impact on public health. To evaluate EC's potential for competing with conventional cigarettes as a consumer product, we report the first data, to our knowledge, on the proportion of smokers who try ECs and become regular users. A total of 2,012 people seen smoking or buying cigarettes in the Czech Republic were approached to answer questions about smoking, with no mention made of ECs to avoid the common bias in surveys of EC users. During the interview, the volunteers' experience with ECs was then discussed. A total of 1,738 smokers (86%) participated. One-half reported trying ECs at least once. Among those who tried ECs, 18.3% (95% CI, 0.15.7%-20.9%) reported using them regularly, and 14% (95% CI, 11.6%-16.2%) used them daily. On average, regular users used ECs daily for 7.1 months. The most common reason for using ECs was to reduce consumption of conventional cigarettes; 60% of regular EC users reported that ECs helped them to achieve this. Being older and having a more favorable initial experience with ECs explained 19% of the variance in progressing to regular EC use. Almost one-fifth of smokers who try ECs once go on to become regular users. ECs may develop into a genuine competitor to conventional cigarettes. Government agencies preparing to regulate ECs need to ensure that such moves do not create a market monopoly for conventional cigarettes.
29 CFR 778.326 - Reduction of regular overtime workweek without reduction of take-home pay.
Code of Federal Regulations, 2010 CFR
2010-07-01
... working long hours. In arrangements of this type, no additional financial pressure would fall upon the... take-home pay. 778.326 Section 778.326 Labor Regulations Relating to Labor (Continued) WAGE AND HOUR... was hired at an hourly rate of $5 an hour and regularly worked 50 hours, earning $275 as his total...
Zhao, Hong-Quan; Kasai, Seiya; Shiratori, Yuta; Hashizume, Tamotsu
2009-06-17
A two-bit arithmetic logic unit (ALU) was successfully fabricated on a GaAs-based regular nanowire network with hexagonal topology. This fundamental building block of central processing units can be implemented on a regular nanowire network structure with simple circuit architecture based on graphical representation of logic functions using a binary decision diagram and topology control of the graph. The four-instruction ALU was designed by integrating subgraphs representing each instruction, and the circuitry was implemented by transferring the logical graph structure to a GaAs-based nanowire network formed by electron beam lithography and wet chemical etching. A path switching function was implemented in nodes by Schottky wrap gate control of nanowires. The fabricated circuit integrating 32 node devices exhibits the correct output waveforms at room temperature allowing for threshold voltage variation.
Aspects of Students' Reasoning about Variation in Empirical Sampling Distributions
ERIC Educational Resources Information Center
Noll, Jennifer; Shaughnessy, J. Michael
2012-01-01
Sampling tasks and sampling distributions provide a fertile realm for investigating students' conceptions of variability. A project-designed teaching episode on samples and sampling distributions was team-taught in 6 research classrooms (2 middle school and 4 high school) by the investigators and regular classroom mathematics teachers. Data…
Restrictive Measures for Young, Beginning Drivers.
ERIC Educational Resources Information Center
Williams, Allan F.
Worldwide there is great variation in how licensing young people to drive is handled. The minimum age for regular licensure varies, generally from 15 to 18 years. Prerequisites and conditions for licensure vary. Some licensing policies are more effective than others in controlling injuries associated with youthful driving; crashes involving young…
A statistical study of variations of internal gravity wave energy characteristics in meteor zone
NASA Technical Reports Server (NTRS)
Gavrilov, N. M.; Kalov, E. D.
1987-01-01
Internal gravity wave (IGW) parameters obtained by the radiometer method have been considered by many other researchers. The results of the processing of regular radiometeor measurements taken during 1979 to 1980 in Obninsk (55.1 deg N, 36.6 deg E) are presented.
Guidelines for Using the "Q" Test in Meta-Analysis
ERIC Educational Resources Information Center
Maeda, Yukiko; Harwell, Michael R.
2016-01-01
The "Q" test is regularly used in meta-analysis to examine variation in effect sizes. However, the assumptions of "Q" are unlikely to be satisfied in practice prompting methodological researchers to conduct computer simulation studies examining its statistical properties. Narrative summaries of this literature are available but…
NASA Technical Reports Server (NTRS)
Fujiwara, M.; Voemel, H.; Hasebe, F.; Shiotani, M.; Ogino, S.-Y.; Iwasaki, S.; Nishi, N.; Shibata, T.; Shimizu, K.; Nishimoto, E.;
2010-01-01
We investigated water vapor variations in the tropical lower stratosphere on seasonal, quasi-biennial oscillation (QBO), and decadal time scales using balloon-borne cryogenic frost point hygrometer data taken between 1993 and 2009 during various campaigns including the Central Equatorial Pacific Experiment (March 1993), campaigns once or twice annually during the Soundings of Ozone and Water in the Equatorial Region (SOWER) project in the eastern Pacific (1998-2003) and in the western Pacific and Southeast Asia (2001-2009), and the Ticosonde campaigns and regular sounding at Costa Rica (2005-2009). Quasi-regular sounding data taken at Costa Rica clearly show the tape recorder signal. The observed ascent rates agree well with the ones from the Halogen Occultation Experiment (HALOE) satellite sensor. Average profiles from the recent five SOWER campaigns in the equatorial western, Pacific in northern winter and from the three Ticosonde campaigns at Costa Rica (10degN) in northern summer clearly show two effects of the QBO. One is the vertical displacement of water vapor profiles associated with the QBO meridional circulation anomalies, and the other is the concentration variations associated with the QBO tropopause temperature variations. Time series of cryogenic frost point hygrometer data averaged in a lower stratospheric layer together with HALOE and Aura Microwave Limb Sounder data show the existence of decadal variations: The mixing ratios were higher and increasing in the 1990s, lower in the early 2000s, and probably slightly higher again or recovering after 2004. Thus linear trend analysis is not appropriate to investigate the behavior of the tropical lower stratospheric water vapor.
16 CFR 801.11 - Annual net sales and total assets.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 16 Commercial Practices 1 2010-01-01 2010-01-01 false Annual net sales and total assets. 801.11 Section 801.11 Commercial Practices FEDERAL TRADE COMMISSION RULES, REGULATIONS, STATEMENTS AND... person; and (2) The total assets of a person shall be as stated on the last regularly prepared balance...
Density of convex intersections and applications
Rautenberg, C. N.; Rösel, S.
2017-01-01
In this paper, we address density properties of intersections of convex sets in several function spaces. Using the concept of Γ-convergence, it is shown in a general framework, how these density issues naturally arise from the regularization, discretization or dualization of constrained optimization problems and from perturbed variational inequalities. A variety of density results (and counterexamples) for pointwise constraints in Sobolev spaces are presented and the corresponding regularity requirements on the upper bound are identified. The results are further discussed in the context of finite-element discretizations of sets associated with convex constraints. Finally, two applications are provided, which include elasto-plasticity and image restoration problems. PMID:28989301
Quantitative Evaluation of the Environmental Impact Quotient (EIQ) for Comparing Herbicides
Kniss, Andrew R.; Coburn, Carl W.
2015-01-01
Various indicators of pesticide environmental risk have been proposed, and one of the most widely known and used is the environmental impact quotient (EIQ). The EIQ has been criticized by others in the past, but it continues to be used regularly in the weed science literature. The EIQ is typically considered an improvement over simply comparing the amount of herbicides applied by weight. Herbicides are treated differently compared to other pesticide groups when calculating the EIQ, and therefore, it is important to understand how different risk factors affect the EIQ for herbicides. The purpose of this work was to evaluate the suitability of the EIQ as an environmental indicator for herbicides. Simulation analysis was conducted to quantify relative sensitivity of the EIQ to changes in risk factors, and actual herbicide EIQ values were used to quantify the impact of herbicide application rate on the EIQ Field Use Rating. Herbicide use rate was highly correlated with the EIQ Field Use Rating (Spearman’s rho >0.96, P-value <0.001) for two herbicide datasets. Two important risk factors for herbicides, leaching and surface runoff potential, are included in the EIQ calculation but explain less than 1% of total variation in the EIQ. Plant surface half-life was the risk factor with the greatest relative influence on herbicide EIQ, explaining 26 to 28% of the total variation in EIQ for actual and simulated EIQ values, respectively. For herbicides, the plant surface half-life risk factor is assigned values without any supporting quantitative data, and can result in EIQ estimates that are contrary to quantitative risk estimates for some herbicides. In its current form, the EIQ is a poor measure of herbicide environmental impact. PMID:26121252
Quantitative Evaluation of the Environmental Impact Quotient (EIQ) for Comparing Herbicides.
Kniss, Andrew R; Coburn, Carl W
2015-01-01
Various indicators of pesticide environmental risk have been proposed, and one of the most widely known and used is the environmental impact quotient (EIQ). The EIQ has been criticized by others in the past, but it continues to be used regularly in the weed science literature. The EIQ is typically considered an improvement over simply comparing the amount of herbicides applied by weight. Herbicides are treated differently compared to other pesticide groups when calculating the EIQ, and therefore, it is important to understand how different risk factors affect the EIQ for herbicides. The purpose of this work was to evaluate the suitability of the EIQ as an environmental indicator for herbicides. Simulation analysis was conducted to quantify relative sensitivity of the EIQ to changes in risk factors, and actual herbicide EIQ values were used to quantify the impact of herbicide application rate on the EIQ Field Use Rating. Herbicide use rate was highly correlated with the EIQ Field Use Rating (Spearman's rho >0.96, P-value <0.001) for two herbicide datasets. Two important risk factors for herbicides, leaching and surface runoff potential, are included in the EIQ calculation but explain less than 1% of total variation in the EIQ. Plant surface half-life was the risk factor with the greatest relative influence on herbicide EIQ, explaining 26 to 28% of the total variation in EIQ for actual and simulated EIQ values, respectively. For herbicides, the plant surface half-life risk factor is assigned values without any supporting quantitative data, and can result in EIQ estimates that are contrary to quantitative risk estimates for some herbicides. In its current form, the EIQ is a poor measure of herbicide environmental impact.
Wolbachia and DNA barcoding insects: patterns, potential, and problems.
Smith, M Alex; Bertrand, Claudia; Crosby, Kate; Eveleigh, Eldon S; Fernandez-Triana, Jose; Fisher, Brian L; Gibbs, Jason; Hajibabaei, Mehrdad; Hallwachs, Winnie; Hind, Katharine; Hrcek, Jan; Huang, Da-Wei; Janda, Milan; Janzen, Daniel H; Li, Yanwei; Miller, Scott E; Packer, Laurence; Quicke, Donald; Ratnasingham, Sujeevan; Rodriguez, Josephine; Rougerie, Rodolphe; Shaw, Mark R; Sheffield, Cory; Stahlhut, Julie K; Steinke, Dirk; Whitfield, James; Wood, Monty; Zhou, Xin
2012-01-01
Wolbachia is a genus of bacterial endosymbionts that impacts the breeding systems of their hosts. Wolbachia can confuse the patterns of mitochondrial variation, including DNA barcodes, because it influences the pathways through which mitochondria are inherited. We examined the extent to which these endosymbionts are detected in routine DNA barcoding, assessed their impact upon the insect sequence divergence and identification accuracy, and considered the variation present in Wolbachia COI. Using both standard PCR assays (Wolbachia surface coding protein--wsp), and bacterial COI fragments we found evidence of Wolbachia in insect total genomic extracts created for DNA barcoding library construction. When >2 million insect COI trace files were examined on the Barcode of Life Datasystem (BOLD) Wolbachia COI was present in 0.16% of the cases. It is possible to generate Wolbachia COI using standard insect primers; however, that amplicon was never confused with the COI of the host. Wolbachia alleles recovered were predominantly Supergroup A and were broadly distributed geographically and phylogenetically. We conclude that the presence of the Wolbachia DNA in total genomic extracts made from insects is unlikely to compromise the accuracy of the DNA barcode library; in fact, the ability to query this DNA library (the database and the extracts) for endosymbionts is one of the ancillary benefits of such a large scale endeavor--which we provide several examples. It is our conclusion that regular assays for Wolbachia presence and type can, and should, be adopted by large scale insect barcoding initiatives. While COI is one of the five multi-locus sequence typing (MLST) genes used for categorizing Wolbachia, there is limited overlap with the eukaryotic DNA barcode region.
Jia, Xun; Lou, Yifei; Li, Ruijiang; Song, William Y; Jiang, Steve B
2010-04-01
Cone-beam CT (CBCT) plays an important role in image guided radiation therapy (IGRT). However, the large radiation dose from serial CBCT scans in most IGRT procedures raises a clinical concern, especially for pediatric patients who are essentially excluded from receiving IGRT for this reason. The goal of this work is to develop a fast GPU-based algorithm to reconstruct CBCT from undersampled and noisy projection data so as to lower the imaging dose. The CBCT is reconstructed by minimizing an energy functional consisting of a data fidelity term and a total variation regularization term. The authors developed a GPU-friendly version of the forward-backward splitting algorithm to solve this model. A multigrid technique is also employed. It is found that 20-40 x-ray projections are sufficient to reconstruct images with satisfactory quality for IGRT. The reconstruction time ranges from 77 to 130 s on an NVIDIA Tesla C1060 (NVIDIA, Santa Clara, CA) GPU card, depending on the number of projections used, which is estimated about 100 times faster than similar iterative reconstruction approaches. Moreover, phantom studies indicate that the algorithm enables the CBCT to be reconstructed under a scanning protocol with as low as 0.1 mA s/projection. Comparing with currently widely used full-fan head and neck scanning protocol of approximately 360 projections with 0.4 mA s/projection, it is estimated that an overall 36-72 times dose reduction has been achieved in our fast CBCT reconstruction algorithm. This work indicates that the developed GPU-based CBCT reconstruction algorithm is capable of lowering imaging dose considerably. The high computation efficiency in this algorithm makes the iterative CBCT reconstruction approach applicable in real clinical environments.
NASA Technical Reports Server (NTRS)
Bond, Victor R.; Fraietta, Michael F.
1991-01-01
In 1961, Sperling linearized and regularized the differential equations of motion of the two-body problem by changing the independent variable from time to fictitious time by Sundman's transformation (r = dt/ds) and by embedding the two-body energy integral and the Laplace vector. In 1968, Burdet developed a perturbation theory which was uniformly valid for all types of orbits using a variation of parameters approach on the elements which appeared in Sperling's equations for the two-body solution. In 1973, Bond and Hanssen improved Burdet's set of differential equations by embedding the total energy (which is a constant when the potential function is explicitly dependent upon time.) The Jacobian constant was used as an element to replace the total energy in a reformulation of the differential equations of motion. In the process, another element which is proportional to a component of the angular momentum was introduced. Recently trajectories computed during numerical studies of atmospheric entry from circular orbits and low thrust beginning in near-circular orbits exhibited numerical instability when solved by the method of Bond and Gottlieb (1989) for long time intervals. It was found that this instability was due to secular terms which appear on the righthand sides of the differential equations of some of the elements. In this paper, this instability is removed by the introduction of another vector integral called the delta integral (which replaces the Laplace Vector) and another scalar integral which removes the secular terms. The introduction of these integrals requires a new derivation of the differential equations for most of the elements. For this rederivation, the Lagrange method of variation of parameters is used, making the development more concise. Numerical examples of this improvement are presented.
Preconditioned Alternating Projection Algorithms for Maximum a Posteriori ECT Reconstruction
Krol, Andrzej; Li, Si; Shen, Lixin; Xu, Yuesheng
2012-01-01
We propose a preconditioned alternating projection algorithm (PAPA) for solving the maximum a posteriori (MAP) emission computed tomography (ECT) reconstruction problem. Specifically, we formulate the reconstruction problem as a constrained convex optimization problem with the total variation (TV) regularization. We then characterize the solution of the constrained convex optimization problem and show that it satisfies a system of fixed-point equations defined in terms of two proximity operators raised from the convex functions that define the TV-norm and the constrain involved in the problem. The characterization (of the solution) via the proximity operators that define two projection operators naturally leads to an alternating projection algorithm for finding the solution. For efficient numerical computation, we introduce to the alternating projection algorithm a preconditioning matrix (the EM-preconditioner) for the dense system matrix involved in the optimization problem. We prove theoretically convergence of the preconditioned alternating projection algorithm. In numerical experiments, performance of our algorithms, with an appropriately selected preconditioning matrix, is compared with performance of the conventional MAP expectation-maximization (MAP-EM) algorithm with TV regularizer (EM-TV) and that of the recently developed nested EM-TV algorithm for ECT reconstruction. Based on the numerical experiments performed in this work, we observe that the alternating projection algorithm with the EM-preconditioner outperforms significantly the EM-TV in all aspects including the convergence speed, the noise in the reconstructed images and the image quality. It also outperforms the nested EM-TV in the convergence speed while providing comparable image quality. PMID:23271835
Pharmacokinetics of insulin following intravenous and subcutaneous administration in canines.
Ravis, W R; Comerci, C; Ganjam, V K
1986-01-01
Studies were conducted to examine the absorption and disposition kinetics of insulin in dogs following intravenous (IV) and subcutaneous (SC) administration of commercial preparations. After IV and SC dosing, the plasma levels were described by models which considered basal insulin level contributions. Intersubject variation in the disposition kinetics was small with half-lives of 0.52 +/- 0.05 h and total body clearances of 16.21 +/- 2.08 ml min-1 kg-1. Calculated insulin plasma secretion rates in the canines were 14.4 +/- 3.3 mUh-1 kg-1. Following SC injection of regular insulin, the rate and extent of absorption were noted to be quite variable. The absorption process appeared first-order with half-life values of 2.3 +/- 1.3 h and extents of absorption of 78 +/- 15 per cent with a range of 55-101 per cent. Insulin absorption from SC NPH preparations was evaluated as being composed of two zero-order release phases, a rapid and a slow release phase. With a dose of 1.65 U kg-1, the rapid release phase had an average duration of 1.5 h and a rate of 580 +/- 269 mUh-1 (4.2 per cent of dose) while the slow phase had a zero-order rate of 237 +/- 92 mU h-1 which continued beyond 12 h. The extent of absorption from the NPH preparation was 23.6 +/- 5.1 per cent and was significantly lower than that for the regular injection.
Preconditioned alternating projection algorithms for maximum a posteriori ECT reconstruction
NASA Astrophysics Data System (ADS)
Krol, Andrzej; Li, Si; Shen, Lixin; Xu, Yuesheng
2012-11-01
We propose a preconditioned alternating projection algorithm (PAPA) for solving the maximum a posteriori (MAP) emission computed tomography (ECT) reconstruction problem. Specifically, we formulate the reconstruction problem as a constrained convex optimization problem with the total variation (TV) regularization. We then characterize the solution of the constrained convex optimization problem and show that it satisfies a system of fixed-point equations defined in terms of two proximity operators raised from the convex functions that define the TV-norm and the constraint involved in the problem. The characterization (of the solution) via the proximity operators that define two projection operators naturally leads to an alternating projection algorithm for finding the solution. For efficient numerical computation, we introduce to the alternating projection algorithm a preconditioning matrix (the EM-preconditioner) for the dense system matrix involved in the optimization problem. We prove theoretically convergence of the PAPA. In numerical experiments, performance of our algorithms, with an appropriately selected preconditioning matrix, is compared with performance of the conventional MAP expectation-maximization (MAP-EM) algorithm with TV regularizer (EM-TV) and that of the recently developed nested EM-TV algorithm for ECT reconstruction. Based on the numerical experiments performed in this work, we observe that the alternating projection algorithm with the EM-preconditioner outperforms significantly the EM-TV in all aspects including the convergence speed, the noise in the reconstructed images and the image quality. It also outperforms the nested EM-TV in the convergence speed while providing comparable image quality.
Choi, Kihwan; Li, Ruijiang; Nam, Haewon; Xing, Lei
2014-06-21
As a solution to iterative CT image reconstruction, first-order methods are prominent for the large-scale capability and the fast convergence rate [Formula: see text]. In practice, the CT system matrix with a large condition number may lead to slow convergence speed despite the theoretically promising upper bound. The aim of this study is to develop a Fourier-based scaling technique to enhance the convergence speed of first-order methods applied to CT image reconstruction. Instead of working in the projection domain, we transform the projection data and construct a data fidelity model in Fourier space. Inspired by the filtered backprojection formalism, the data are appropriately weighted in Fourier space. We formulate an optimization problem based on weighted least-squares in the Fourier space and total-variation (TV) regularization in image space for parallel-beam, fan-beam and cone-beam CT geometry. To achieve the maximum computational speed, the optimization problem is solved using a fast iterative shrinkage-thresholding algorithm with backtracking line search and GPU implementation of projection/backprojection. The performance of the proposed algorithm is demonstrated through a series of digital simulation and experimental phantom studies. The results are compared with the existing TV regularized techniques based on statistics-based weighted least-squares as well as basic algebraic reconstruction technique. The proposed Fourier-based compressed sensing (CS) method significantly improves both the image quality and the convergence rate compared to the existing CS techniques.
NASA Astrophysics Data System (ADS)
Büyükyıldız, Mehmet
2017-04-01
Radiation interaction parameters such as total stopping power, projected range (longitudinal and lateral) straggling, mass attenuation coefficient, effective atomic number (Zeff) and electron density (Neff) of some shielding materials were investigated for photon and heavy charged particle interactions. The ranges, stragglings and mass attenuation coefficients were calculated for the high-density polyethylene(HDPE), borated polyethylene (BPE), brick (common silica), concrete (regular), wood, water, stainless steel (304), aluminum (alloy 6061-O), lead and bismuth using SRIM Monte Carlo software and WinXCom program. In addition, effective atomic numbers (Zeff) and electron densities (Neff) of HDPE, BPE, brick (common silica), concrete (regular), wood, water, stainless steel (304) and aluminum (alloy 6061-O) were calculated in the energy region 10 keV-100 MeV using mass stopping powers and mass attenuation coefficients. Two different methods namely direct and interpolation procedures were used to calculate Zeff for comparison and significant differences were determined between the methods. Variations of the ranges, longitudinal and lateral stragglings of water, concrete and stainless steel (304) were compared with each other in the continuous kinetic energy region and discussed with respect to their Zeffs. Moreover, energy absorption buildup factors (EABF) and exposure buildup factors (EBF) of the materials were determined for gamma rays as well and were compared with each other for different photon energies and different mfps in the photon energy region 0.015-15 MeV.
Blind source deconvolution for deep Earth seismology
NASA Astrophysics Data System (ADS)
Stefan, W.; Renaut, R.; Garnero, E. J.; Lay, T.
2007-12-01
We present an approach to automatically estimate an empirical source characterization of deep earthquakes recorded teleseismically and subsequently remove the source from the recordings by applying regularized deconvolution. A principle goal in this work is to effectively deblur the seismograms, resulting in more impulsive and narrower pulses, permitting better constraints in high resolution waveform analyses. Our method consists of two stages: (1) we first estimate the empirical source by automatically registering traces to their 1st principal component with a weighting scheme based on their deviation from this shape, we then use this shape as an estimation of the earthquake source. (2) We compare different deconvolution techniques to remove the source characteristic from the trace. In particular Total Variation (TV) regularized deconvolution is used which utilizes the fact that most natural signals have an underlying spareness in an appropriate basis, in this case, impulsive onsets of seismic arrivals. We show several examples of deep focus Fiji-Tonga region earthquakes for the phases S and ScS, comparing source responses for the separate phases. TV deconvolution is compared to the water level deconvolution, Tikenov deconvolution, and L1 norm deconvolution, for both data and synthetics. This approach significantly improves our ability to study subtle waveform features that are commonly masked by either noise or the earthquake source. Eliminating source complexities improves our ability to resolve deep mantle triplications, waveform complexities associated with possible double crossings of the post-perovskite phase transition, as well as increasing stability in waveform analyses used for deep mantle anisotropy measurements.
Two different phenomena in basic motor speech performance in premanifest Huntington disease.
Skodda, Sabine; Grönheit, Wenke; Lukas, Carsten; Bellenberg, Barbara; von Hein, Sarah M; Hoffmann, Rainer; Saft, Carsten
2016-03-09
Dysarthria is a common feature in Huntington disease (HD). The aim of this cross-sectional pilot study was the description and objective analysis of different speech parameters with special emphasis on the aspect of speech timing of connected speech and nonspeech verbal utterances in premanifest HD (preHD). A total of 28 preHD mutation carriers and 28 age- and sex-matched healthy speakers had to perform a reading task and several syllable repetition tasks. Results of computerized acoustic analysis of different variables for the measurement of speech rate and regularity were correlated with clinical measures and MRI-based brain atrophy assessment by voxel-based morphometry. An impaired capacity to steadily repeat single syllables with higher variations in preHD compared to healthy controls was found (variance 1: Cohen d = 1.46). Notably, speech rate was increased compared to controls and showed correlations to the volume of certain brain areas known to be involved in the sensory-motor speech networks (net speech rate: Cohen d = 1.19). Furthermore, speech rate showed correlations to disease burden score, probability of disease onset, the estimated years to onset, and clinical measures like the cognitive score. Measurement of speech rate and regularity might be helpful additional tools for the monitoring of subclinical functional disability in preHD. As one of the possible causes for higher performance in preHD, we discuss huntingtin-dependent temporarily advantageous development processes of the brain. © 2016 American Academy of Neurology.
Lin, Wei; Feng, Rui; Li, Hongzhe
2014-01-01
In genetical genomics studies, it is important to jointly analyze gene expression data and genetic variants in exploring their associations with complex traits, where the dimensionality of gene expressions and genetic variants can both be much larger than the sample size. Motivated by such modern applications, we consider the problem of variable selection and estimation in high-dimensional sparse instrumental variables models. To overcome the difficulty of high dimensionality and unknown optimal instruments, we propose a two-stage regularization framework for identifying and estimating important covariate effects while selecting and estimating optimal instruments. The methodology extends the classical two-stage least squares estimator to high dimensions by exploiting sparsity using sparsity-inducing penalty functions in both stages. The resulting procedure is efficiently implemented by coordinate descent optimization. For the representative L1 regularization and a class of concave regularization methods, we establish estimation, prediction, and model selection properties of the two-stage regularized estimators in the high-dimensional setting where the dimensionality of co-variates and instruments are both allowed to grow exponentially with the sample size. The practical performance of the proposed method is evaluated by simulation studies and its usefulness is illustrated by an analysis of mouse obesity data. Supplementary materials for this article are available online. PMID:26392642
Shkvarko, Yuriy; Tuxpan, José; Santos, Stewart
2011-01-01
We consider a problem of high-resolution array radar/SAR imaging formalized in terms of a nonlinear ill-posed inverse problem of nonparametric estimation of the power spatial spectrum pattern (SSP) of the random wavefield scattered from a remotely sensed scene observed through a kernel signal formation operator and contaminated with random Gaussian noise. First, the Sobolev-type solution space is constructed to specify the class of consistent kernel SSP estimators with the reproducing kernel structures adapted to the metrics in such the solution space. Next, the "model-free" variational analysis (VA)-based image enhancement approach and the "model-based" descriptive experiment design (DEED) regularization paradigm are unified into a new dynamic experiment design (DYED) regularization framework. Application of the proposed DYED framework to the adaptive array radar/SAR imaging problem leads to a class of two-level (DEED-VA) regularized SSP reconstruction techniques that aggregate the kernel adaptive anisotropic windowing with the projections onto convex sets to enforce the consistency and robustness of the overall iterative SSP estimators. We also show how the proposed DYED regularization method may be considered as a generalization of the MVDR, APES and other high-resolution nonparametric adaptive radar sensing techniques. A family of the DYED-related algorithms is constructed and their effectiveness is finally illustrated via numerical simulations.
Nowak, Judyta; Borkowska, Barbara; Pawlowski, Boguslaw
2016-09-10
Total leukocyte count (white blood cells-WBC) and the count of each subpopulation vary across the menstrual cycle, but results of studies examining the time and direction of these changes are inconsistent and methodologically flawed. Besides, no previous study focused on leukocyte count on the day of ovulation. Blood samples were obtained from 37 healthy and regularly cycling women aged 19.8-36.1 years. Samples were taken three times: during menstruation (M), ovulation (O), and in the mid-luteal phase (ML). WBC, neutrophils, lymphocytes, mixed cells, progesterone (P,) and estradiol (E) were measured in each of the three target phases of the cycle. Compared to menstruation, WBC (P = 0.002) and neutrophils (P < 0.001) increased around ovulation and remained stable in the mid-luteal phase, whereas lymphocyte and mixed cell counts did not change throughout the menstrual cycle. There were some correlations of sex hormone variation with leukocyte changes between M and O (positive for E and WBC, negative for P and WBC and for P and neutrophil count; P < 0.05), but not between O and ML. Peripheral leukocyte changes taking place in the second half of the cycle are already observable on the day of ovulation and they are associated with sex hormone variation. We speculate that these changes may lead to increased immune protection against pathogens at a time when fertilization and implantation typically occur. Am. J. Hum. Biol. 28:721-728, 2016. © 2016 Wiley Periodicals, Inc. © 2016 Wiley Periodicals, Inc.
NASA Astrophysics Data System (ADS)
Semenov, A. I.; Medvedeva, I. V.; Perminov, V. I.; Khomich, V. Yu.
2016-09-01
Rocket and balloon measurement data on atomic-oxygen (λ 63 µm) emission in the upper atmosphere are presented. The data from the longest (1989-2003) period of measurements of the atomic-oxygen (λ 63 µm) emission intensity obtained by spectral instruments on sounding balloons at an altitude of 38 km at midlatitudes have been systematized and analyzed. Regularities in diurnal and seasonal variations in the intensity of this emission, as well as in its relation with solar activity, have been revealed.
Geochemical peculiarities of sediments in the northeastern Black Sea
NASA Astrophysics Data System (ADS)
Rozanov, A. G.; Gursky, Yu. N.
2016-11-01
We present the results of chemical determinations of Al, Fe, Mn, Cu, Ni, Co, Cr, Pb, Sb, and As in Black Sea sediments over a profile from the Kerch Strait to the eastern part of a deep depression (2210 m). The lithological and geochemical variations were studied in the horizontal and vertical profiles of sediments up to 3 m thick. The tendencies in the distributions of the studied metals during Pleistocene and Holocene sedimentation were analyzed beginning from Neoeuxinian freshwater deposits via the overlaying Drevnechernomorian beds with elevated contents of sapropel to modern clayey carbonate deposits with coccolithophorids. Statistical factor analysis isolated five factors: two main factors (75% of the total dispersion) and three subordinate factors. The first leading biogenic factor (47% of dispersion) reflects the correlation between Corg, Cu, and Ni; the second terrigenous factor (28% of dispersion) combimes Fe, Al, Cr, and Sb. The chemical composition of the sediments reflects the manifestation of diagenesis of landslide processes and mud volcanism along with sedimentation regularities.
Anomalous variation in GPS based TEC measurements prior to the 30 September 2009 Sumatra Earthquake
NASA Astrophysics Data System (ADS)
Karia, Sheetal; Pathak, Kamlesh
This paper investigates the features of pre-earthquake ionospheric anomalies in the total elec-tron content (TEC) data obtained on the basis of regular GPS observations from the GPS receiver at SVNIT Surat (21.16 N, 72.78 E Geog) located at the northern crest of equatorial anomaly region. The data has been analysed for 5 different earthquakes that occurred during 2009 in India and its neighbouring regions. Our observation shows that for the cases of the earthquake, in which the preparation area lies between the crests of the equatorial anomaly close to the geomagnetic equator the enhancement in TEC was followed by a depletion in TEC on the day of earthquake, which may be connected to the equatorial anomaly shape distortions. For the analysis of the ionospheric effects of one of such case-the 30 September 2009 Sumatra earthquake, Global Ionospheric Maps of TEC were used. The possible influence of the earth-quake preparation processes on the main low-latitude ionosphere peculiarity—the equatorial anomaly—is discussed.
Low-illumination image denoising method for wide-area search of nighttime sea surface
NASA Astrophysics Data System (ADS)
Song, Ming-zhu; Qu, Hong-song; Zhang, Gui-xiang; Tao, Shu-ping; Jin, Guang
2018-05-01
In order to suppress complex mixing noise in low-illumination images for wide-area search of nighttime sea surface, a model based on total variation (TV) and split Bregman is proposed in this paper. A fidelity term based on L1 norm and a fidelity term based on L2 norm are designed considering the difference between various noise types, and the regularization mixed first-order TV and second-order TV are designed to balance the influence of details information such as texture and edge for sea surface image. The final detection result is obtained by using the high-frequency component solved from L1 norm and the low-frequency component solved from L2 norm through wavelet transform. The experimental results show that the proposed denoising model has perfect denoising performance for artificially degraded and low-illumination images, and the result of image quality assessment index for the denoising image is superior to that of the contrastive models.
Linear functional minimization for inverse modeling
Barajas-Solano, David A.; Wohlberg, Brendt Egon; Vesselinov, Velimir Valentinov; ...
2015-06-01
In this paper, we present a novel inverse modeling strategy to estimate spatially distributed parameters of nonlinear models. The maximum a posteriori (MAP) estimators of these parameters are based on a likelihood functional, which contains spatially discrete measurements of the system parameters and spatiotemporally discrete measurements of the transient system states. The piecewise continuity prior for the parameters is expressed via Total Variation (TV) regularization. The MAP estimator is computed by minimizing a nonquadratic objective equipped with the TV operator. We apply this inversion algorithm to estimate hydraulic conductivity of a synthetic confined aquifer from measurements of conductivity and hydraulicmore » head. The synthetic conductivity field is composed of a low-conductivity heterogeneous intrusion into a high-conductivity heterogeneous medium. Our algorithm accurately reconstructs the location, orientation, and extent of the intrusion from the steady-state data only. Finally, addition of transient measurements of hydraulic head improves the parameter estimation, accurately reconstructing the conductivity field in the vicinity of observation locations.« less
NASA Astrophysics Data System (ADS)
Chen, Jiaoxuan; Zhang, Maomao; Liu, Yinyan; Chen, Jiaoliao; Li, Yi
2017-03-01
Electrical capacitance tomography (ECT) is a promising technique applied in many fields. However, the solutions for ECT are not unique and highly sensitive to the measurement noise. To remain a good shape of reconstructed object and endure a noisy data, a Rudin-Osher-Fatemi (ROF) model with total variation regularization is applied to image reconstruction in ECT. Two numerical methods, which are simplified augmented Lagrangian (SAL) and accelerated alternating direction method of multipliers (AADMM), are innovatively introduced to try to solve the above mentioned problems in ECT. The effect of the parameters and the number of iterations for different algorithms, and the noise level in capacitance data are discussed. Both simulation and experimental tests were carried out to validate the feasibility of the proposed algorithms, compared to the Landweber iteration (LI) algorithm. The results show that the SAL and AADMM algorithms can handle a high level of noise and the AADMM algorithm outperforms other algorithms in identifying the object from its background.
3D and 4D magnetic susceptibility tomography based on complex MR images
Chen, Zikuan; Calhoun, Vince D
2014-11-11
Magnetic susceptibility is the physical property for T2*-weighted magnetic resonance imaging (T2*MRI). The invention relates to methods for reconstructing an internal distribution (3D map) of magnetic susceptibility values, .chi. (x,y,z), of an object, from 3D T2*MRI phase images, by using Computed Inverse Magnetic Resonance Imaging (CIMRI) tomography. The CIMRI technique solves the inverse problem of the 3D convolution by executing a 3D Total Variation (TV) regularized iterative convolution scheme, using a split Bregman iteration algorithm. The reconstruction of .chi. (x,y,z) can be designed for low-pass, band-pass, and high-pass features by using a convolution kernel that is modified from the standard dipole kernel. Multiple reconstructions can be implemented in parallel, and averaging the reconstructions can suppress noise. 4D dynamic magnetic susceptibility tomography can be implemented by reconstructing a 3D susceptibility volume from a 3D phase volume by performing 3D CIMRI magnetic susceptibility tomography at each snapshot time.
IUE observations of Comet Halley: Evolution of the UV spectrum between September 1985 and July 1986
NASA Technical Reports Server (NTRS)
Feldman, P. D.; Festou, Michael C.; Ahearn, M. F.; Arpigny, C.; Butterworth, P. S.; Cosmovici, C. B.; Danks, A. C.; Gilmozzi, R.; Jackson, W. M.; Mcfadden, L. A.
1986-01-01
The ultraviolet spectrum of comet P/Halley was monitored with the IUE between 12 September 1985 and 8 July 1986 (r <2.6 AU pre and post-perihelion) at regular time intervals except for a two-month period around the time of perihelion. A complete characterization of the UV spectrum of the comet was obtained to derive coma abundances and to study the light emission mechanisms of the observed species. The Fine Error Sensor (FES) camera of the IUE was used to photometrically investigate the coma brightness variation on time scales of the order of hours. Spectroscopic observations as well as FES measurements show that the activity of the nucleus is highly variable, particularly at the end of December 1985 and during March and April 1986. The production rates of OH, CS and dust are derived for the entire period of the observations. The total water loss rate for this period is estimated to be 150 million metric tons.
Image-guided filtering for improving photoacoustic tomographic image reconstruction.
Awasthi, Navchetan; Kalva, Sandeep Kumar; Pramanik, Manojit; Yalavarthy, Phaneendra K
2018-06-01
Several algorithms exist to solve the photoacoustic image reconstruction problem depending on the expected reconstructed image features. These reconstruction algorithms promote typically one feature, such as being smooth or sharp, in the output image. Combining these features using a guided filtering approach was attempted in this work, which requires an input and guiding image. This approach act as a postprocessing step to improve commonly used Tikhonov or total variational regularization method. The result obtained from linear backprojection was used as a guiding image to improve these results. Using both numerical and experimental phantom cases, it was shown that the proposed guided filtering approach was able to improve (as high as 11.23 dB) the signal-to-noise ratio of the reconstructed images with the added advantage being computationally efficient. This approach was compared with state-of-the-art basis pursuit deconvolution as well as standard denoising methods and shown to outperform them. (2018) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE).
Prevalence of Autism Spectrum Disorders in Ecuador: A Pilot Study in Quito
ERIC Educational Resources Information Center
Dekkers, Laura M.; Groot, Norbert A.; Díaz Mosquera, Elena N.; Andrade Zúñiga, Ivonne P.; Delfos, Martine F.
2015-01-01
This research presents the results of the first phase of the study on the prevalence of pupils with Autism Spectrum Disorder (ASD) in regular education in Quito, Ecuador. One-hundred-and-sixty-one regular schools in Quito were selected with a total of 51,453 pupils. Prevalence of ASD was assessed by an interview with the rector of the school or…
ERIC Educational Resources Information Center
Tekin, Ali; Tekin, Gülcan; Çalisir, Melih
2017-01-01
The aim of this study is to determine the locus of control (LC) and sensation seeking (SS) levels of university female students according to regular exercise participation (REP) and gender (G). This descriptive study was initiated in 2016 and finished in 2017. A total of 623 students, 306 females and 317 males, from different academic departments…
Markham, Francis; Young, Martin; Doran, Bruce; Sugden, Mark
2017-05-23
Many jurisdictions regularly conduct surveys to estimate the prevalence of problem gambling in their adult populations. However, the comparison of such estimates is problematic due to methodological variations between studies. Total consumption theory suggests that an association between mean electronic gaming machine (EGM) and casino gambling losses and problem gambling prevalence estimates may exist. If this is the case, then changes in EGM losses may be used as a proxy indicator for changes in problem gambling prevalence. To test for this association this study examines the relationship between aggregated losses on electronic gaming machines (EGMs) and problem gambling prevalence estimates for Australian states and territories between 1994 and 2016. A Bayesian meta-regression analysis of 41 cross-sectional problem gambling prevalence estimates was undertaken using EGM gambling losses, year of survey and methodological variations as predictor variables. General population studies of adults in Australian states and territory published before 1 July 2016 were considered in scope. 41 studies were identified, with a total of 267,367 participants. Problem gambling prevalence, moderate-risk problem gambling prevalence, problem gambling screen, administration mode and frequency threshold were extracted from surveys. Administrative data on EGM and casino gambling loss data were extracted from government reports and expressed as the proportion of household disposable income lost. Money lost on EGMs is correlated with problem gambling prevalence. An increase of 1% of household disposable income lost on EGMs and in casinos was associated with problem gambling prevalence estimates that were 1.33 times higher [95% credible interval 1.04, 1.71]. There was no clear association between EGM losses and moderate-risk problem gambling prevalence estimates. Moderate-risk problem gambling prevalence estimates were not explained by the models (I 2 ≥ 0.97; R 2 ≤ 0.01). The present study adds to the weight of evidence that EGM losses are associated with the prevalence of problem gambling. No patterns were evident among moderate-risk problem gambling prevalence estimates, suggesting that this measure is either subject to pronounced measurement error or lacks construct validity. The high degree of residual heterogeneity raises questions about the validity of comparing problem gambling prevalence estimates, even after adjusting for methodological variations between studies.
Pega, Frank; Gilsanz, Paola; Kawachi, Ichiro; Wilson, Nick; Blakely, Tony
2017-04-01
The effect of anti-poverty tax credit interventions on tobacco consumption is unclear. Previous studies have estimated short-term effects, did not isolate the effects of cumulative dose of tax credits, produced conflicting results, and used methods with limited control for some time-varying confounders (e.g., those affected by prior treatment) and treatment regimen (i.e., study participants' tax credit receipt pattern over time). We estimated the longer-term, cumulative effect of New Zealand's Family Tax Credit (FTC) on tobacco consumption, using a natural experiment (administrative errors leading to exogenous variation in FTC receipt) and methods specifically for controlling confounding, reverse causation, and treatment regimen. We extracted seven waves (2002-2009) of the nationally representative Survey of Family, Income and Employment including 4404 working-age (18-65 years) parents in families. The exposure was the total numbers of years of receiving FTC. The outcomes were regular smoking and the average daily number of cigarettes usually smoked at wave 7. We estimated average treatment effects using inverse probability of treatment weighting and marginal structural modelling. Each additional year of receiving FTC affected neither the odds of regular tobacco smoking among all parents (odds ratio 1.02, 95% confidence interval 0.94-1.11), nor the number of cigarettes smoked among parents who smoked regularly (rate ratio 1.01, 95% confidence interval 0.99-1.03). We found no evidence for an association between the cumulative number of years of receiving an anti-poverty tax credit and tobacco smoking or consumption among parents. The assumptions of marginal structural modelling are quite demanding, and we therefore cannot rule out residual confounding. Nonetheless, our results suggest that tax credit programme participation will not increase tobacco consumption among poor parents, at least in this high-income country. Copyright © 2017 Elsevier Ltd. All rights reserved.
High-sensitivity cryogenic temperature sensors using pressurized fiber Bragg gratings
NASA Technical Reports Server (NTRS)
Wu, Meng-Chou; DeHaven, Stanton L.
2006-01-01
Cryogenic temperature sensing was studied using a pressurized fiber Bragg grating (PFBG). The PFBG was obtained by simply applying a small diametric load to a regular fiber Bragg grating (FBG), which was coated with polyimide of a thickness of 11 micrometers. The Bragg wavelength of the PFBG was measured at temperatures from 295 to 4.2 K. A pressure-induced transition occurred at 200 K during the cooling cycle. As a result the temperature sensitivity of the PFBG was found to be nonlinear but reach 24 pm/K below 200 K, more than three times the regular FBG. For the temperature change from 80 K to 10 K, the PFBG has a total Bragg wavelength shift of about 470 pm, 10 times more than the regular FBG. From room temperature to liquid helium temperature the PFBG gives a total wavelength shift of 3.78 nm, compared to the FBG of 1.51 nm. The effect of the coating thickness on the temperature sensitivity of the gratings is also discussed.
High-sensitivity Cryogenic Temperature Sensors using Pressurized Fiber Bragg Gratings
NASA Technical Reports Server (NTRS)
Wu, Meng-Chou; DeHaven, Stanton L.
2006-01-01
Cryogenic temperature sensing was studied using a pressurized fiber Bragg grating (PFBG). The PFBG was obtained by simply applying a small diametric load to a regular fiber Bragg grating (FBG), which was coated with polyimide of a thickness of 11 micrometers. The Bragg wavelength of the PFBG was measured at temperatures from 295 to 4.2 K. A pressure-induced transition occurred at 200 K during the cooling cycle. As a result the temperature sensitivity of the PFBG was found to be nonlinear but reach 24 pm/K below 200 K, more than three times the regular FBG. For the temperature change from 80 K to 10 K, the PFBG has a total Bragg wavelength shift of about 470 pm, 10 times more than the regular FBG. From room temperature to liquid helium temperature the PFBG gives a total wavelength shift of 3.78 nm, compared to the FBG of 1.51 nm. The effect of the coating thickness on the temperature sensitivity of the gratings is also discussed.
Weng, Shengbei; Liu, Manli; Yang, Xiaonan; Liu, Fang; Zhou, Yugui; Lin, Haiqin; Liu, Quan
2018-01-01
To evaluate the surface characteristics of lenticules created by small-incision lenticule extraction (SMILE) with different cap thicknesses. This prospective study included 20 consecutive patients who underwent bilateral SMILE. Surface regularity of the extracted corneal lenticule was analyzed using scanning electron microscopy (SEM) combined with 2 methods: qualitative and quantitative regularity. Qualitative regularity of SEM images was graded by masked observers using an established scoring system. Quantitative regularity of SEM images was assessed by counting the total number and areas of tissue bridges using Image-Pro Plus software. Four different cap thickness of 120, 130, 140, and 150 μm were compared. Refractive outcomes of patients were measured at baseline and 1 month after surgery. As 10 specimens were not analyzable, only 30 eyes were included. Postoperatively, all eyes had postoperative uncorrected distance visual acuity of 20/20 or better; 43% had an unchanged corrected distance visual acuity; 43% gained 1 line; 10% lost 1 line. Ultrastructurally, surface irregularity was primarily caused by tissue bridges. The average surface regularity score obtained was 10.87 ± 2.40 for 120 μm, 10.78 ± 2.60 for 130 μm, 8.76 ± 2.16 for 140 μm, and 8.70 ± 2.66 for 150 μm (P < 0.001). The total number and areas of tissue bridges of 120 to 130 μm were significantly less than 140 to 150 μm (P < 0.05). Surface regularity decreased as cap thickness increased (P < 0.05). There is smoother appearance of the lenticular surface as seen through SEM when a thin cap is created compared with a thick cap qualitatively and quantitatively.
Hoebel, Jens; Finger, Jonas D; Kuntz, Benjamin; Lampert, Thomas
2016-02-01
Regular physical activity has positive effects on health at all ages. This study aims to investigate how far physical activity and regular sports engagement, as a more specific type of physical activity, are associated with socioeconomic factors in the middle-aged working population. Data were obtained from 21,699 working men and women aged between 30 and 64 years who participated in the 2009 and 2010 population-based national German Health Update (GEDA) surveys conducted by the Robert Koch Institute. Besides a multi-dimensional index of socioeconomic status (SES), three single dimensions of SES (education, occupation, and income) were used to analyse socioeconomic differences in total physical activity and regular sports engagement. While the prevalence of total physical activity increased with lower SES, the proportion of people with regular sports engagement decreased with lower SES. These associations remained after adjusting for age in men and women. After mutual adjustment of the three single socioeconomic dimensions, physical activity was independently associated with lower education and lower occupational status. Regular sports engagement was observed to be independently associated with higher education, higher occupational status, as well as higher income after mutual adjustment. This study demonstrates significant socioeconomic differences in physical and sports activity in the middle-aged working population. Education, occupation, and income show varying independent associations with physical activity behaviour. Such differences need to be considered when identifying target groups for health-enhancing physical activity interventions.
Ismail, Maznah; Mariod, Abdalbasit; Pin, Sia Soh
2013-01-01
The effect of preparation methods (raw, half-boiled and hard-boiled) on protein and amino acid contents, as well as the protein quality (amino acid score) of regular, kampung and nutrient enriched Malaysian eggs was investigated. The protein content was determined using a semi-micro Kjeldahl method whereas the amino acid composition was determined using HPLC. The protein content of raw regular, kampung and nutrient enriched eggs were 49.9 ±0.2%, 55.8 ±0.2% and 56.5 ±0.5%, respectively. The protein content of hard-boiled eggs of regular, kampung and nutrient enriched eggs was 56.8 ±0.1%, 54.7 ±0.1%, and 53.7 ±0.5%, while that for half-boiled eggs of regular, kampung and nutrient enriched eggs was 54.7 ±0.6%, 53.4 ±0.4%, and 55.1 ±0.7%, respectively. There were significant differences (p < 0.05) in protein and amino acid contents of half-boiled, hard-boiled as compared with raw samples, and valine was found as the limiting amino acid. It was found that there were significant differences (p < 0.05) of total amino score in regular, kampung and nutrient enriched eggs after heat treatments.Furthermore, hard-boiling (100°C) for 10 minutes and half-boiling (100°C) for 5 minutes affects the total amino score, which in turn alter the protein quality of the egg.
Study on the Geomagnetic Short Period Variations of the Northwestern Yunnan
NASA Astrophysics Data System (ADS)
Yuan, Y.; Li, Q.; Cai, J.
2015-12-01
The Northwestern Yunnan is located in the interaction area between the Eurasian plate and the India plate. This area has been the ideal place for the research of continental dynamics and the prediction for risk region of strong earthquake for its complex tectonic environment and frequent seismic activity. Therefore the study on the geomagnetic short period variations is of great significance in the exploration of deep electrical structure, analysis of the seismic origin and deep geodynamics in the Northwestern Yunnan of China . This paper is based on the geomagnetic data from the magnetometer array with 8 sites built in the northwestern Yunnan to explore the deep electrical structure by the method of geomagnetic depth sounding. Firstly, we selected a total of 183 geomagnetic short period events at the range of 6min to 120min period. And we found a north northwest dividing line, of which two sides has the opposite value in the vertical component variation amplitude, which indicates the obvious conductivity anomaly underground. Secondly, the contour maps of the ratio of vertical component and horizontal component variation amplitude ΔZ/ΔH in different periods reflects the changes of a high conductivity belt's direction and position. In addition, the induction arrows maps within the period of 2 - 256min also shows that on the two sides of the dividing line the induction vectors deviate from each other, and the amplitude and direction of vectors varies with periods regularly. In the light of this, we infer that a high conductivity belt probably exists, which stretches from the deep crust to uppermost mantle and changes with depth constantly with the reference of magnetotelluric sounding. In the end of this paper, the staggered grid finite difference method is used to model the simplified three-dimensional high conductivity anomaly, and the result shows magnetic field distributions are consistent with the observed geomagnetic short period variations characteristics in different periods, which confirms the existence of the high conductivity belt. According to the characteristics of the short period geomagnetic variation above, in combination with the results of previous studies, the synthetic action of partial melting and fluid might be the origin of the belt.
Ulven, Stine M; Leder, Lena; Elind, Elisabeth; Ottestad, Inger; Christensen, Jacob J; Telle-Hansen, Vibeke H; Skjetne, Anne J; Raael, Ellen; Sheikh, Navida A; Holck, Marianne; Torvik, Kristin; Lamglait, Amandine; Thyholt, Kari; Byfuglien, Marte G; Granlund, Linda; Andersen, Lene F; Holven, Kirsten B
2016-10-01
The healthy Nordic diet has been previously shown to have health beneficial effects among subjects at risk of CVD. However, the extent of food changes needed to achieve these effects is less explored. The aim of the present study was to investigate the effects of exchanging a few commercially available, regularly consumed key food items (e.g. spread on bread, fat for cooking, cheese, bread and cereals) with improved fat quality on total cholesterol, LDL-cholesterol and inflammatory markers in a double-blind randomised, controlled trial. In total, 115 moderately hypercholesterolaemic, non-statin-treated adults (25-70 years) were randomly assigned to an experimental diet group (Ex-diet group) or control diet group (C-diet group) for 8 weeks with commercially available food items with different fatty acid composition (replacing SFA with mostly n-6 PUFA). In the Ex-diet group, serum total cholesterol (P<0·001) and LDL-cholesterol (P<0·001) were reduced after 8 weeks, compared with the C-diet group. The difference in change between the two groups at the end of the study was -9 and -11 % in total cholesterol and LDL-cholesterol, respectively. No difference in change in plasma levels of inflammatory markers (high-sensitive C-reactive protein, IL-6, soluble TNF receptor 1 and interferon-γ) was observed between the groups. In conclusion, exchanging a few regularly consumed food items with improved fat quality reduces total cholesterol, with no negative effect on levels of inflammatory markers. This shows that an exchange of a few commercially available food items was easy and manageable and led to clinically relevant cholesterol reduction, potentially affecting future CVD risk.
Suppression of spontaneous nystagmus during different visual fixation conditions.
Hirvonen, Timo P; Juhola, Martti; Aalto, Heikki
2012-07-01
Analysis of spontaneous nystagmus is important in the evaluation of dizzy patients. The aim was to measure how different visual conditions affect the properties of nystagmus using three-dimensional video-oculography (VOG). We compared prevalence, frequency and slow phase velocity (SPV) of the spontaneous nystagmus with gaze fixation allowed, with Frenzel's glasses, and in total darkness. Twenty-five patients (35 measurements) with the peripheral vestibular pathologies were included. The prevalence of nystagmus with the gaze fixation was 40%, and it increased significantly to 66% with Frenzel's glasses and regular room lights on (p < 0.01). The prevalence increased significantly to 83% when the regular room lights were switched off (p = 0.014), and further to 100% in total darkness (p = 0.025). The mean SPV of nystagmus with visual fixation allowed was 1.0°/s. It increased to 2.4°/s with Frenzel's glasses and room lights on, and additionally to 3.1°/s, when the regular room lights were switched off. The mean SPV in total darkness was 6.9°/s. The difference was highly significant between all test conditions (p < 0.01). The frequency of nystagmus was 0.7 beats/s with gaze fixation, 0.8 beats/s in both the test conditions with Frenzel's glasses on, and 1.2 beats/s in total darkness. The frequency in total darkness was significantly higher (p < 0.05) than with Frenzel's glasses, and more so than with visual fixation (p = 0.003). The VOG in total darkness is superior in detecting nystagmus, since Frenzel's glasses allow visual suppression to happen, and this effect is reinforced with gaze fixation allowed. Strict control of visual surroundings is essential in interpreting peripheral nystagmus.
Shirakawa, Toru; Yamagishi, Kazumasa; Yatsuya, Hiroshi; Tanabe, Naohito; Tamakoshi, Akiko; Iso, Hiroyasu
2017-11-01
Only a few population-based prospective studies have examined the association between alcohol consumption and abdominal aortic aneurysm, and the results are inconsistent. Moreover, no evidence exists for aortic dissection. We examined the effect of alcohol consumption on risk of mortality from aortic diseases. A total of 34,720 men from the Japan Collaborative Cohort study, aged 40-79 years, without history of cardiovascular disease and cancer at baseline 1988 and 1990 were followed up until the end of 2009 for their mortality and its underlying cause. Hazard ratios of mortality from aortic diseases were estimated according to alcohol consumption categories of never-drinkers, ex-drinkers, regular drinkers of ≤30 g, and >30 g ethanol per day. During the median 17.9-year follow-up period, 45 men died of aortic dissection and 41 men died of abdominal aortic aneurysm. Light to moderate drinkers of ≤30 g ethanol per day had lower risk of mortality from total aortic disease and aortic dissection compared to never-drinkers. The respective multivariable hazard ratios (95% confidence intervals) were 0.46 (0.28-0.76) for total aortic disease and 0.16 (0.05-0.50) for aortic dissection. Heavy drinkers of >30 g ethanol per day did not have reduced risk of mortality from total aortic disease, albeit had risk variation between aortic dissection and abdominal aortic aneurysm. Light to moderate alcohol consumption was associated with reduced mortality from aortic disease among Japanese men. Copyright © 2017. Published by Elsevier B.V.
Drinking Level, Drinking Pattern, and Twenty-Year Total Mortality Among Late-Life Drinkers.
Holahan, Charles J; Schutte, Kathleen K; Brennan, Penny L; Holahan, Carole K; Moos, Rudolf H
2015-07-01
Research on moderate drinking has focused on the average level of drinking. Recently, however, investigators have begun to consider the role of the pattern of drinking, particularly heavy episodic drinking, in mortality. The present study examined the combined roles of average drinking level (moderate vs. high) and drinking pattern (regular vs. heavy episodic) in 20-year total mortality among late-life drinkers. The sample comprised 1,121 adults ages 55-65 years. Alcohol consumption was assessed at baseline, and total mortality was indexed across 20 years. We used multiple logistic regression analyses controlling for a broad set of sociodemographic, behavioral, and health status covariates. Among individuals whose high level of drinking placed them at risk, a heavy episodic drinking pattern did not increase mortality odds compared with a regular drinking pattern. Conversely, among individuals who engage in a moderate level of drinking, prior findings showed that a heavy episodic drinking pattern did increase mortality risk compared with a regular drinking pattern. Correspondingly, a high compared with a moderate drinking level increased mortality risk among individuals maintaining a regular drinking pattern, but not among individuals engaging in a heavy episodic drinking pattern, whose pattern of consumption had already placed them at risk. Findings highlight that low-risk drinking requires that older adults drink low to moderate average levels of alcohol and avoid heavy episodic drinking. Heavy episodic drinking is frequent among late-middle-aged and older adults and needs to be addressed along with average consumption in understanding the health risks of late-life drinkers.
Ali, M A; Louche, J; Legname, E; Duchemin, M; Plassard, C
2009-12-01
Young seedlings of maritime pine (Pinus pinaster Soland in Aït.) were grown in rhizoboxes using intact spodosol soil samples from the southwest of France, in Landes of Gascogne, presenting a large variation of phosphorus (P) availability. Soils were collected from a 93-year-old unfertilized stand and a 13-year-old P. pinaster stand with regular annual fertilization of either only P or P and nitrogen (N). After 6 months of culture in controlled conditions, different morphotypes of ectomycorrhiza (ECM) were used for the measurements of acid phosphatase activity and molecular identification of fungal species using amplification of the ITS region. Total biomass, N and P contents were measured in roots and shoots of plants. Bicarbonate- and NaOH-available inorganic P (Pi), organic P (Po) and ergosterol concentrations were measured in bulk and rhizosphere soil. The results showed that bulk soil from the 93-year-old forest stand presented the highest Po levels, but relatively higher bicarbonate-extractable Pi levels compared to 13-year-old unfertilized stand. Fertilizers significantly increased the concentrations of inorganic P fractions in bulk soil. Ergosterol contents in rhizosphere soil were increased by fertilizer application. The dominant fungal species was Rhizopogon luteolus forming 66.6% of analysed ECM tips. Acid phosphatase activity was highly variable and varied inversely with bicarbonate-extractable Pi levels in the rhizosphere soil. Total P or total N in plants was linearly correlated with total plant biomass, but the slope was steep only between total P and biomass in fertilized soil samples. In spite of high phosphatase activity in ECM tips, P availability remained a limiting nutrient in soil samples from unfertilized stands. Nevertheless young P. pinaster seedlings showed a high plasticity for biomass production at low P availability in soils.
ERIC Educational Resources Information Center
Litchfield, Daniel C.; Goldenheim, David A.
1997-01-01
Describes the solution to a geometric problem by two ninth-grade mathematicians using The Geometer's Sketchpad computer software program. The problem was to divide any line segment into a regular partition of any number of parts, a variation on a problem by Euclid. The solution yielded two constructions, one a GLaD construction and the other using…
Estimating allowable-cut by area-scheduling
William B. Leak
2011-01-01
Estimation of the regulated allowable-cut is an important step in placing a forest property under management and ensuring a continued supply of timber over time. Regular harvests also provide for the maintenance of needed wildlife habitat. There are two basic approaches: (1) volume, and (2) area/volume regulation, with many variations of each. Some require...
The Evolutionist, the Creationist, and the "Unsure": Picking-Up the Wrong Fight?
ERIC Educational Resources Information Center
Kampourakis, Kostas; Strasser, Bruno J.
2015-01-01
The public acceptance of evolution is under constant scrutiny. Surveys and polls regularly measure whether the public accepts evolutionist "or" creationist views. The differences between groups, such as people from various countries, are then explained by variations in religious views. But what is often overlooked, is that the data also…
TRANSPOSABLE REGULARIZED COVARIANCE MODELS WITH AN APPLICATION TO MISSING DATA IMPUTATION
Allen, Genevera I.; Tibshirani, Robert
2015-01-01
Missing data estimation is an important challenge with high-dimensional data arranged in the form of a matrix. Typically this data matrix is transposable, meaning that either the rows, columns or both can be treated as features. To model transposable data, we present a modification of the matrix-variate normal, the mean-restricted matrix-variate normal, in which the rows and columns each have a separate mean vector and covariance matrix. By placing additive penalties on the inverse covariance matrices of the rows and columns, these so called transposable regularized covariance models allow for maximum likelihood estimation of the mean and non-singular covariance matrices. Using these models, we formulate EM-type algorithms for missing data imputation in both the multivariate and transposable frameworks. We present theoretical results exploiting the structure of our transposable models that allow these models and imputation methods to be applied to high-dimensional data. Simulations and results on microarray data and the Netflix data show that these imputation techniques often outperform existing methods and offer a greater degree of flexibility. PMID:26877823
TRANSPOSABLE REGULARIZED COVARIANCE MODELS WITH AN APPLICATION TO MISSING DATA IMPUTATION.
Allen, Genevera I; Tibshirani, Robert
2010-06-01
Missing data estimation is an important challenge with high-dimensional data arranged in the form of a matrix. Typically this data matrix is transposable , meaning that either the rows, columns or both can be treated as features. To model transposable data, we present a modification of the matrix-variate normal, the mean-restricted matrix-variate normal , in which the rows and columns each have a separate mean vector and covariance matrix. By placing additive penalties on the inverse covariance matrices of the rows and columns, these so called transposable regularized covariance models allow for maximum likelihood estimation of the mean and non-singular covariance matrices. Using these models, we formulate EM-type algorithms for missing data imputation in both the multivariate and transposable frameworks. We present theoretical results exploiting the structure of our transposable models that allow these models and imputation methods to be applied to high-dimensional data. Simulations and results on microarray data and the Netflix data show that these imputation techniques often outperform existing methods and offer a greater degree of flexibility.
NASA Astrophysics Data System (ADS)
Schachtschneider, R.; Rother, M.; Lesur, V.
2013-12-01
We introduce a method that enables us to account for existing correlations between Gauss coefficients in core field modelling. The information about the correlations are obtained from a highly accurate field model based on CHAMP data, e.g. the GRIMM-3 model. We compute the covariance matrices of the geomagnetic field, the secular variation, and acceleration up to degree 18 and use these in the regularization scheme of the core field inversion. For testing our method we followed two different approaches by applying it to two different synthetic satellite data sets. The first is a short data set with a time span of only three months. Here we test how the information about correlations help to obtain an accurate model when only very little information are available. The second data set is a large one covering several years. In this case, besides reducing the residuals in general, we focus on the improvement of the model near the boundaries of the data set where the accerelation is generally more difficult to handle. In both cases the obtained covariance matrices are included in the damping scheme of the regularization. That way information from scales that could otherwise not be resolved by the data can be extracted. We show that by using this technique we are able to improve the models of the field and the secular variation for both, the short and the long term data set, compared to approaches using more conventional regularization techniques.
34 CFR 690.8 - Enrollment status for students taking regular and correspondence courses.
Code of Federal Regulations, 2011 CFR
2011-07-01
... work taken by the student to help in his or her course of study; (2) Is completed within the period of... credits per term. Under § 690.8 No. of credit hours regular work No. of credit hours correspondence Total... correspondence work that is greater than 0, but less than 6 hours. (Authority: 20 U.S.C. 1070a) [52 FR 45735, Dec...
Chlorogenic acids and lactones in regular and water-decaffeinated arabica coffees.
Farah, Adriana; de Paulis, Tomas; Moreira, Daniel P; Trugo, Luiz C; Martin, Peter R
2006-01-25
The market for decaffeinated coffees has been increasingly expanding over the years. Caffeine extraction may result in losses of other compounds such as chlorogenic acids (CGA) and, consequently, their 1,5-gamma-quinolactones (CGL) in roasted coffee. These phenolic compounds are important for flavor formation as well as the health effects of coffee; therefore, losses due to decaffeination need to be investigated. The present study evaluates the impact of decaffeination processing on CGA and CGL levels of green and roasted arabica coffees. Decaffeination produced a 16% average increase in the levels of total CGA in green coffee (dry matter), along with a 237% increase in CGL direct precursors. Different degrees of roasting showed average increments of 5.5-18% in CGL levels of decaffeinated coffee, compared to regular, a change more consistent with observed levels of total CGA than with those of CGL direct precursors in green samples. On the other hand, CGA levels in roasted coffee were 3-9% lower in decaffeinated coffee compared to regular coffee. Although differences in CGA and CGL contents of regular and decaffeinated roasted coffees appear to be relatively small, they may be enough to affect flavor characteristics as well as the biopharmacological properties of the final beverage, suggesting the need for further study.
Low/No Calorie Sweetened Beverage Consumption in the National Weight Control Registry
Catenacci, Victoria A.; Pan, Zhaoxing; Thomas, J. Graham; Ogden, Lorraine G.; Roberts, Susan A.; Wyatt, Holly R.; Wing, Rena R.; Hill, James O.
2015-01-01
Objective The aim of this cross-sectional study was to evaluate prevalence of and strategies behind low/no calorie sweetened beverage (LNCSB) consumption in successful weight loss maintainers. Methods An online survey was administered to 434 members of the National Weight Control Registry (NWCR, individuals who have lost ≥13.6 kg and maintained weight loss for > 1 year). Results While few participants (10%) consume sugar-sweetened beverages on a regular basis, 53% regularly consume LNCSB. The top five reasons for choosing LNCSB were for taste (54%), to satisfy thirst (40%), part of routine (27%), to reduce calories (22%) and to go with meals (21%). The majority who consume LNCSB (78%) felt they helped control total calorie intake. Many participants considered changing patterns of beverage consumption to be very important in weight loss (42%) and maintenance (40%). Increasing water was by far the most common strategy, followed by reducing regular calorie beverages. Conclusions Regular consumption of LNCSB is common in successful weight loss maintainers for various reasons including helping individuals to limit total energy intake. Changing beverage consumption patterns was felt to be very important for weight loss and maintenance by a substantial percentage of successful weight loss maintainers in the NWCR. PMID:25044563
Low/no calorie sweetened beverage consumption in the National Weight Control Registry.
Catenacci, Victoria A; Pan, Zhaoxing; Thomas, J Graham; Ogden, Lorraine G; Roberts, Susan A; Wyatt, Holly R; Wing, Rena R; Hill, James O
2014-10-01
The aim of this cross-sectional study was to evaluate prevalence of and strategies behind low/no calorie sweetened beverage (LNCSB) consumption in successful weight loss maintainers. An online survey was administered to 434 members of the National Weight Control Registry (NWCR, individuals who have lost ≥13.6 kg and maintained weight loss for > 1 year). While few participants (10%) consume sugar-sweetened beverages on a regular basis, 53% regularly consume LNCSB. The top five reasons for choosing LNCSB were for taste (54%), to satisfy thirst (40%), part of routine (27%), to reduce calories (22%) and to go with meals (21%). The majority who consume LNCSB (78%) felt they helped control total calorie intake. Many participants considered changing patterns of beverage consumption to be very important in weight loss (42%) and maintenance (40%). Increasing water was by far the most common strategy, followed by reducing regular calorie beverages. Regular consumption of LNCSB is common in successful weight loss maintainers for various reasons including helping individuals to limit total energy intake. Changing beverage consumption patterns was felt to be very important for weight loss and maintenance by a substantial percentage of successful weight loss maintainers in the NWCR. Copyright © 2014 The Obesity Society.
Updating ARI Educational Benefits Usage Data Bases for Army Regular, Reserve, and Guard: 2005 - 2006
2007-09-01
22 0 0 0 0 22 291059 38868 46373 57138 142379 433438 5 MGIB Regular Army Data As of September 2005 ***** 3 Table 3: Percent users MGIB 2YR 3YR 4YR...Post HS 5.68% 10.37% 8.82% Total 100.0% 100.0% 100.0% Frequency Missing = 7158 6 MGIB Regular Army Data As of September 2005 ***** 5 Table 5 : Time...167 112 8787 1994 233 2616 1117 629 340 297 182 132 65 46 5657 1995 223 2711 1278 698 409 252 163 98 50 22 5904 1996 197 2894 1357 668 377 237 131 58 12
FAST TRACK COMMUNICATION: Regularized Kerr-Newman solution as a gravitating soliton
NASA Astrophysics Data System (ADS)
Burinskii, Alexander
2010-10-01
The charged, spinning and gravitating soliton is realized as a regular solution of the Kerr-Newman (KN) field coupled with a chiral Higgs model. A regular core of the solution is formed by a domain wall bubble interpolating between the external KN solution and a flat superconducting interior. An internal electromagnetic (em) field is expelled to the boundary of the bubble by the Higgs field. The solution reveals two new peculiarities: (i) the Higgs field is oscillating, similar to the known oscillon models; (ii) the em field forms on the edge of the bubble a Wilson loop, resulting in quantization of the total angular momentum.
Yadav, Ram Bharos; Srivastava, Subodh; Srivastava, Rajeev
2016-01-01
The proposed framework is obtained by casting the noise removal problem into a variational framework. This framework automatically identifies the various types of noise present in the magnetic resonance image and filters them by choosing an appropriate filter. This filter includes two terms: the first term is a data likelihood term and the second term is a prior function. The first term is obtained by minimizing the negative log likelihood of the corresponding probability density functions: Gaussian or Rayleigh or Rician. Further, due to the ill-posedness of the likelihood term, a prior function is needed. This paper examines three partial differential equation based priors which include total variation based prior, anisotropic diffusion based prior, and a complex diffusion (CD) based prior. A regularization parameter is used to balance the trade-off between data fidelity term and prior. The finite difference scheme is used for discretization of the proposed method. The performance analysis and comparative study of the proposed method with other standard methods is presented for brain web dataset at varying noise levels in terms of peak signal-to-noise ratio, mean square error, structure similarity index map, and correlation parameter. From the simulation results, it is observed that the proposed framework with CD based prior is performing better in comparison to other priors in consideration.
Vehicle Detection of Aerial Image Using TV-L1 Texture Decomposition
NASA Astrophysics Data System (ADS)
Wang, Y.; Wang, G.; Li, Y.; Huang, Y.
2016-06-01
Vehicle detection from high-resolution aerial image facilitates the study of the public traveling behavior on a large scale. In the context of road, a simple and effective algorithm is proposed to extract the texture-salient vehicle among the pavement surface. Texturally speaking, the majority of pavement surface changes a little except for the neighborhood of vehicles and edges. Within a certain distance away from the given vector of the road network, the aerial image is decomposed into a smoothly-varying cartoon part and an oscillatory details of textural part. The variational model of Total Variation regularization term and L1 fidelity term (TV-L1) is adopted to obtain the salient texture of vehicles and the cartoon surface of pavement. To eliminate the noise of texture decomposition, regions of pavement surface are refined by seed growing and morphological operation. Based on the shape saliency analysis of the central objects in those regions, vehicles are detected as the objects of rectangular shape saliency. The proposed algorithm is tested with a diverse set of aerial images that are acquired at various resolution and scenarios around China. Experimental results demonstrate that the proposed algorithm can detect vehicles at the rate of 71.5% and the false alarm rate of 21.5%, and that the speed is 39.13 seconds for a 4656 x 3496 aerial image. It is promising for large-scale transportation management and planning.
Genetic influences of sports participation in Portuguese families.
Seabra, André F; Mendonça, Denisa M; Göring, Harald H H; Thomis, Martine A; Maia, José A
2014-01-01
To estimate familial aggregation and quantify the genetic and environmental contribution to the phenotypic variation on sports participation (SP) among Portuguese families. The sample consisted of 2375 nuclear families (parents and two offspring each) from different regions of Portugal with a total of 9500 subjects. SP assessment was based on a psychometrically established questionnaire. Phenotypes used were based on the participation in sports (yes/no), intensity of sport, weekly amount of time in SP and the proportion of the year in which a sport was regularly played. Familial correlations were calculated using family correlations (FCOR) in the SAGE software. Heritability was estimated using variance-components methods implemented in Sequential Oligogenic Linkage Analysis Routines (SOLAR) software. Subjects of the same generation tend to be more similar in their SP habits than the subjects of different generations. In all SP phenotypes studied, adjusted for the effects of multiple covariates, the proportion of phenotypic variance due to additive genetic factors ranged between 40% and 50%. The proportion of variance attributable to environmental factors ranged from 50% for the participation in sports to 60% for intensity of sport. In this large population-based family study, there was significant familial aggregation on SP. These results highlight that the variation on SP phenotypes have a significant genetic contribution although environmental factors are also important in the familial resemblance of SP.
Estimation of ozone dry deposition over Europe for the period 2071-2100
NASA Astrophysics Data System (ADS)
Komjáthy, Eszter; Gelybó, Györgyi; László Lagzi, István.; Mészáros, Róbert
2010-05-01
Ozone in the lower troposphere is a phytotoxic air pollutant which can cause injury to plant tissues, causing reduction in plant growth and productivity. In the last decades, several investigations have been carried out for the purpose to estimate ozone load over different surface types. At the same time, the changes of atmospheric variables as well as surface/vegetation parameters due to the global climate change could also strongly modify both temporal and spatial variations of ozone load over Europe. In this study, the possible effects of climate change on ozone deposition are analyzed. Using a sophisticated deposition model, ozone deposition was estimated on a regular grid over Europe for the period 2071-2100. Our aim is to determine the uncertainties and the possible degree of change in ozone deposition velocity as an important predictor of total ozone load using climate data from multiple climate models and runs. For these model calculations, results of the PRUDENCE (Predicting of Regional Scenarios and Uncertainties for Defining European Climate Change Risks and Effects) climate prediction project were used. As a first step, seasonal variations of ozone deposition over different vegetation types in case of different climate scenarios are presented in this study. Besides model calculations, in the frame of a sensitivity analyses, the effects of surface/vegetation parameters (e.g. leaf area index or stomatal resistance) on ozone deposition under a modified climate regime have also been analyzed.
How Accurately Can We Predict Eclipses for Algol? (Poster abstract)
NASA Astrophysics Data System (ADS)
Turner, D.
2016-06-01
(Abstract only) beta Persei, or Algol, is a very well known eclipsing binary system consisting of a late B-type dwarf that is regularly eclipsed by a GK subgiant every 2.867 days. Eclipses, which last about 8 hours, are regular enough that predictions for times of minima are published in various places, Sky & Telescope magazine and The Observer's Handbook, for example. But eclipse minimum lasts for less than a half hour, whereas subtle mistakes in the current ephemeris for the star can result in predictions that are off by a few hours or more. The Algol system is fairly complex, with the Algol A and Algol B eclipsing system also orbited by Algol C with an orbital period of nearly 2 years. Added to that are complex long-term O-C variations with a periodicity of almost two centuries that, although suggested by Hoffmeister to be spurious, fit the type of light travel time variations expected for a fourth star also belonging to the system. The AB sub-system also undergoes mass transfer events that add complexities to its O-C behavior. Is it actually possible to predict precise times of eclipse minima for Algol months in advance given such complications, or is it better to encourage ongoing observations of the star so that O-C variations can be tracked in real time?
Juvenile zebra finches learn the underlying structural regularities of their fathers’ song
Menyhart, Otília; Kolodny, Oren; Goldstein, Michael H.; DeVoogd, Timothy J.; Edelman, Shimon
2015-01-01
Natural behaviors, such as foraging, tool use, social interaction, birdsong, and language, exhibit branching sequential structure. Such structure should be learnable if it can be inferred from the statistics of early experience. We report that juvenile zebra finches learn such sequential structure in song. Song learning in finches has been extensively studied, and it is generally believed that young males acquire song by imitating tutors (Zann, 1996). Variability in the order of elements in an individual’s mature song occurs, but the degree to which variation in a zebra finch’s song follows statistical regularities has not been quantified, as it has typically been dismissed as production error (Sturdy et al., 1999). Allowing for the possibility that such variation in song is non-random and learnable, we applied a novel analytical approach, based on graph-structured finite-state grammars, to each individual’s full corpus of renditions of songs. This method does not assume syllable-level correspondence between individuals. We find that song variation can be described by probabilistic finite-state graph grammars that are individually distinct, and that the graphs of juveniles are more similar to those of their fathers than to those of other adult males. This grammatical learning is a new parallel between birdsong and language. Our method can be applied across species and contexts to analyze complex variable learned behaviors, as distinct as foraging, tool use, and language. PMID:26005428
Masi, U.; O'Neil, J.R.; Kistler, R.W.
1981-01-01
18O, D, and H2O+ contents were measured for whole-rock specimens of granitoid rocks from 131 localitics in California and southwestern Oregon. With 41 new determinations in the Klamath Mountains and Sierra Nevada, initial strontium isotope ratios are known for 104 of these samples. Large variations in ??18O (5.5 to 12.4), ??D (-130 to -31), water contents (0.14 to 2.23 weight percent) and initial strontium isotope ratios (0.7028 to 0.7095) suggest a variety of source materials and identify rocks modified by secondary processes. Regular patterns of variation in each isotopic ratio exist over large geographical regions, but correlations between the ratios are generally absent except in restricted areas. For example, the regular decrease in ??D values from west to east in the Sierra Nevada batholith is not correlative with a quite complex pattern of ??18O values, implying that different processes were responsible for the isotopic variations in these two elements. In marked contrast to a good correlation between (87Sr/86Sr)o and ??18O observed in the Peninsular Ranges batholith to the south, such correlations are lacking except in a few areas. ??D values, on the other hand, correlate well with rock types, chemistry, and (87Sr/86Sr)o except in the Coast Ranges where few of the isotopic signatures are primary. The uniformly low ??D values of samples from the Mojave Desert indicate that meteoric water contributed much of the hydrogen to the rocks in that area. Even so, the ??18O values and 18O fractionations between quartz and feldspar are normal in these same rocks. This reconnaissance study has identified regularities in geochemical parameters over enormous geographical regions. These patterns are not well understood but merit more detailed examination because they contain information critical to our understanding of the development of granitoid batholiths. ?? 1981 Springer-Verlag.
Analysis of borehole expansion and gallery tests in anisotropic rock masses
Amadei, B.; Savage, W.Z.
1991-01-01
Closed-form solutions are used to show how rock anisotropy affects the variation of the modulus of deformation around the walls of a hole in which expansion tests are conducted. These tests include dilatometer and NX-jack tests in boreholes and gallery tests in tunnels. The effects of rock anisotropy on the modulus of deformation are shown for transversely isotropic and regularly jointed rock masses with planes of transverse isotropy or joint planes parallel or normal to the hole longitudinal axis for plane strain or plane stress condition. The closed-form solutions can also be used when determining the elastic properties of anisotropic rock masses (intact or regularly jointed) in situ. ?? 1991.
Diel mercury-concentration variations in streams affected by mining and geothermal discharge
Nimick, D.A.; McCleskey, R. Blaine; Gammons, C.H.; Cleasby, T.E.; Parker, S.R.
2007-01-01
Diel variations of concentrations of unfiltered and filtered total Hg and filtered methyl Hg were documented during 24-h sampling episodes in water from Silver Creek, which drains a historical gold-mining district near Helena, Montana, and the Madison River, which drains the geothermal system of Yellowstone National Park. The concentrations of filtered methyl Hg had relatively large diel variations (increases of 68 and 93% from morning minima) in both streams. Unfiltered and filtered (0.1-??m filtration) total Hg in Silver Creek had diel concentration increases of 24% and 7%, respectively. In the Madison River, concentrations of unfiltered and filtered total Hg did not change during the sampling period. The concentration variation of unfiltered total Hg in Silver Creek followed the diel variation in suspended-particle concentration. The concentration variation of filtered total and methyl Hg followed the solar photocycle, with highest concentrations during the early afternoon and evening and lowest concentrations during the morning. None of the diel Hg variations correlated with diel variation in streamflow or major ion concentrations. The diel variation in filtered total Hg could have been produced by adsorption-desorption of Hg2+ or by reduction of Hg(II) to Hg0 and subsequent evasion of Hg0. The diel variation in filtered methyl Hg could have been produced by sunlight- and temperature-dependent methylation. This study is the first to examine diel Hg cycling in streams, and its results reinforce previous conclusions that diel trace-element cycling in streams is widespread but often not recognized and that parts of the biogeochemical Hg cycle respond quickly to the daily photocycle. ?? 2006 Elsevier B.V. All rights reserved.
A Class of Manifold Regularized Multiplicative Update Algorithms for Image Clustering.
Yang, Shangming; Yi, Zhang; He, Xiaofei; Li, Xuelong
2015-12-01
Multiplicative update algorithms are important tools for information retrieval, image processing, and pattern recognition. However, when the graph regularization is added to the cost function, different classes of sample data may be mapped to the same subspace, which leads to the increase of data clustering error rate. In this paper, an improved nonnegative matrix factorization (NMF) cost function is introduced. Based on the cost function, a class of novel graph regularized NMF algorithms is developed, which results in a class of extended multiplicative update algorithms with manifold structure regularization. Analysis shows that in the learning, the proposed algorithms can efficiently minimize the rank of the data representation matrix. Theoretical results presented in this paper are confirmed by simulations. For different initializations and data sets, variation curves of cost functions and decomposition data are presented to show the convergence features of the proposed update rules. Basis images, reconstructed images, and clustering results are utilized to present the efficiency of the new algorithms. Last, the clustering accuracies of different algorithms are also investigated, which shows that the proposed algorithms can achieve state-of-the-art performance in applications of image clustering.
The internal-external respiratory motion correlation is unaffected by audiovisual biofeedback.
Steel, Harry; Pollock, Sean; Lee, Danny; Keall, Paul; Kim, Taeho
2014-03-01
This study evaluated if an audiovisual (AV) biofeedback causes variation in the level of external and internal correlation due to its interactive intervention in natural breathing. The internal (diaphragm) and external (abdominal wall) respiratory motion signals of 15 healthy human subjects under AV biofeedback and free breathing (FB) were analyzed and measures of correlation and regularity taken. Regularity metrics (root mean square error and spectral power dispersion metric) were obtained and the correlation between these metrics and the internal and external correlation was investigated. For FB and AV biofeedback assisted breathing the mean correlations found between internal and external respiratory motion were 0.96±0.02 and 0.96±0.03, respectively. This means there is no evidence to suggest (p-value=0.88) any difference in the correlation between internal and external respiratory motion with the use of AV biofeedback. Our results confirmed the hypothesis that the internal-external correlation with AV biofeedback is the same as for free breathing. Should this correlation be maintained for patients, AV biofeedback can be implemented in the clinic with confidence as regularity improvements using AV biofeedback with an external signal will be reflected in increased internal motion regularity.
Huang, Wei; Xiao, Liang; Liu, Hongyi; Wei, Zhihui
2015-01-19
Due to the instrumental and imaging optics limitations, it is difficult to acquire high spatial resolution hyperspectral imagery (HSI). Super-resolution (SR) imagery aims at inferring high quality images of a given scene from degraded versions of the same scene. This paper proposes a novel hyperspectral imagery super-resolution (HSI-SR) method via dictionary learning and spatial-spectral regularization. The main contributions of this paper are twofold. First, inspired by the compressive sensing (CS) framework, for learning the high resolution dictionary, we encourage stronger sparsity on image patches and promote smaller coherence between the learned dictionary and sensing matrix. Thus, a sparsity and incoherence restricted dictionary learning method is proposed to achieve higher efficiency sparse representation. Second, a variational regularization model combing a spatial sparsity regularization term and a new local spectral similarity preserving term is proposed to integrate the spectral and spatial-contextual information of the HSI. Experimental results show that the proposed method can effectively recover spatial information and better preserve spectral information. The high spatial resolution HSI reconstructed by the proposed method outperforms reconstructed results by other well-known methods in terms of both objective measurements and visual evaluation.
Shkvarko, Yuriy; Tuxpan, José; Santos, Stewart
2011-01-01
We consider a problem of high-resolution array radar/SAR imaging formalized in terms of a nonlinear ill-posed inverse problem of nonparametric estimation of the power spatial spectrum pattern (SSP) of the random wavefield scattered from a remotely sensed scene observed through a kernel signal formation operator and contaminated with random Gaussian noise. First, the Sobolev-type solution space is constructed to specify the class of consistent kernel SSP estimators with the reproducing kernel structures adapted to the metrics in such the solution space. Next, the “model-free” variational analysis (VA)-based image enhancement approach and the “model-based” descriptive experiment design (DEED) regularization paradigm are unified into a new dynamic experiment design (DYED) regularization framework. Application of the proposed DYED framework to the adaptive array radar/SAR imaging problem leads to a class of two-level (DEED-VA) regularized SSP reconstruction techniques that aggregate the kernel adaptive anisotropic windowing with the projections onto convex sets to enforce the consistency and robustness of the overall iterative SSP estimators. We also show how the proposed DYED regularization method may be considered as a generalization of the MVDR, APES and other high-resolution nonparametric adaptive radar sensing techniques. A family of the DYED-related algorithms is constructed and their effectiveness is finally illustrated via numerical simulations. PMID:22163859
Regularity of a renewal process estimated from binary data.
Rice, John D; Strawderman, Robert L; Johnson, Brent A
2017-10-09
Assessment of the regularity of a sequence of events over time is important for clinical decision-making as well as informing public health policy. Our motivating example involves determining the effect of an intervention on the regularity of HIV self-testing behavior among high-risk individuals when exact self-testing times are not recorded. Assuming that these unobserved testing times follow a renewal process, the goals of this work are to develop suitable methods for estimating its distributional parameters when only the presence or absence of at least one event per subject in each of several observation windows is recorded. We propose two approaches to estimation and inference: a likelihood-based discrete survival model using only time to first event; and a potentially more efficient quasi-likelihood approach based on the forward recurrence time distribution using all available data. Regularity is quantified and estimated by the coefficient of variation (CV) of the interevent time distribution. Focusing on the gamma renewal process, where the shape parameter of the corresponding interevent time distribution has a monotone relationship with its CV, we conduct simulation studies to evaluate the performance of the proposed methods. We then apply them to our motivating example, concluding that the use of text message reminders significantly improves the regularity of self-testing, but not its frequency. A discussion on interesting directions for further research is provided. © 2017, The International Biometric Society.
RBOOST: RIEMANNIAN DISTANCE BASED REGULARIZED BOOSTING
Liu, Meizhu; Vemuri, Baba C.
2011-01-01
Boosting is a versatile machine learning technique that has numerous applications including but not limited to image processing, computer vision, data mining etc. It is based on the premise that the classification performance of a set of weak learners can be boosted by some weighted combination of them. There have been a number of boosting methods proposed in the literature, such as the AdaBoost, LPBoost, SoftBoost and their variations. However, the learning update strategies used in these methods usually lead to overfitting and instabilities in the classification accuracy. Improved boosting methods via regularization can overcome such difficulties. In this paper, we propose a Riemannian distance regularized LPBoost, dubbed RBoost. RBoost uses Riemannian distance between two square-root densities (in closed form) – used to represent the distribution over the training data and the classification error respectively – to regularize the error distribution in an iterative update formula. Since this distance is in closed form, RBoost requires much less computational cost compared to other regularized Boosting algorithms. We present several experimental results depicting the performance of our algorithm in comparison to recently published methods, LP-Boost and CAVIAR, on a variety of datasets including the publicly available OASIS database, a home grown Epilepsy database and the well known UCI repository. Results depict that the RBoost algorithm performs better than the competing methods in terms of accuracy and efficiency. PMID:21927643
Local Variation of Hashtag Spike Trains and Popularity in Twitter
Sanlı, Ceyda; Lambiotte, Renaud
2015-01-01
We draw a parallel between hashtag time series and neuron spike trains. In each case, the process presents complex dynamic patterns including temporal correlations, burstiness, and all other types of nonstationarity. We propose the adoption of the so-called local variation in order to uncover salient dynamical properties, while properly detrending for the time-dependent features of a signal. The methodology is tested on both real and randomized hashtag spike trains, and identifies that popular hashtags present regular and so less bursty behavior, suggesting its potential use for predicting online popularity in social media. PMID:26161650
Xia, Depeng; Chen, Peifang; Du, Peixue; Ding, Lijun; Liu, Anli
2017-08-12
To observe the efficacy differences between acupoint catgut embedding combined with ginger-partitioned moxibustion and regular acupuncture on chronic fatigue syndrome (CFS) of spleen-kidney yang deficiency syndrome, and to explore its effects on T lymphocyte subsets and activity of NK cell. A total of 60 patients with CFS of spleen-kidney yang deficiency syndrome were randomly divided into a catgut embedding combined with ginger-partitioned moxibustion (CECGP) group and a regular acupuncture group, 30 cases in each one. The patients in the CECGP group were treated with acupoint catgut embedding combined with ginger-partitioned moxibustion; the acupoint catgut embedding was applied at Guanyuan (CV 4), Shenshu (BL 23), Pishu (BL 20), Zusanli (ST 36), Qihai (CV 6), once a week, while the ginger-partitioned moxibustion was applied at Guanyuan (CV 4), Qihai (CV 6) and Zusanli (ST 36), once every three days for consecutive one month. The patients in the regular acupuncture group were treated with regular acupuncture at Guanyuan (CV 4), Shenshu (BL 23), Pishu (BL 20), Zusanli (ST 36), Qihai (CV 6), once a day, 6 treatments per week (one day for rest) for consecutive one month. The clinical symptom scores, fatigue scale-14 (FS-14), fatigue assessment instrument (FAI), laboratory test results and total effective rate were compared between the two groups before and after treatment. (1) After treatment, the clinical symptom scores, FS-14 and FAI were reduced in the two groups (all P <0.05); after treatment, the clinical symptom scores, FS-14 and FAI in the CECGP group were significantly lower than those in the regular acupuncture group (all P <0.05). (2) After treatment, the CD 4 + /CD 8 + , natural killer cell% (NK%), CD 3 + %, CD% were all increased in the two groups (all +4 P <0.05); the CD 4 + /CD 8 + , CD 3 + %, CD% in the CECGP group were significantly higher than those in the regular acupuncture group (all P <0.05). (3) After treatment, the total effective rate was 96.7% (29/30) in the CECGP group, which was similar to 93.3% (28/30) in the regular acupuncture group ( P >0.05). The acupoint catgut embedding combined with ginger-partitioned moxibustion, which could effectively relieve the symptoms, regulate T lymphocyte subsets and the activity of NK cell, is an effective method for CFS of spleen-kidney yang deficiency syndrome.
Mumford, Sunni L.; Schisterman, Enrique F.; Siega-Riz, Anna Maria; Gaskins, Audrey J.; Steiner, Anne Z.; Daniels, Julie L.; Olshan, Andrew F.; Hediger, Mary L.; Hovey, Kathleen; Wactawski-Wende, Jean; Trevisan, Maurizio; Bloom, Michael S.
2011-01-01
BACKGROUND Sporadic anovulation among regularly menstruating women is not well understood. It is hypothesized that cholesterol abnormalities may lead to hormone imbalances and incident anovulation. The objective was to evaluate the association between lipoprotein cholesterol levels and endocrine and metabolic disturbances and incident anovulation among ovulatory and anovulatory women reporting regular menstruation. METHODS The BioCycle Study was a prospective cohort study conducted at the University at Buffalo from September 2005 to 2007, which followed 259 self-reported regularly menstruating women aged 18–44 years, for one or two complete menstrual cycles. Sporadic anovulation was assessed across two menstrual cycles. RESULTS Mean total and low-density lipoprotein cholesterol and triglycerides levels across the menstrual cycles were higher during anovulatory cycles (mean difference: 4.6 (P = 0.01), 3.0 (P = 0.06) and 6.4 (P = 0.0002) mg/dl, respectively, adjusted for age and BMI). When multiple total cholesterol (TC) measures prior to expected ovulation were considered, we observed a slight increased risk of anovulation associated with increased levels of TC (odds ratio per 5 mg/dl increase, 1.07; 95% confidence interval, 0.99, 1.16). Sporadic anovulation was associated with an increased LH:FSH ratio (P = 0.002), current acne (P = 0.02) and decreased sex hormone-binding globulin levels (P = 0.005). CONCLUSIONS These results do not support a strong association between lipoprotein cholesterol levels and sporadic anovulation. However, sporadic anovulation among regularly menstruating women is associated with endocrine disturbances which are typically observed in women with polycystic ovary syndrome. PMID:21115506
Variational discretization of the nonequilibrium thermodynamics of simple systems
NASA Astrophysics Data System (ADS)
Gay-Balmaz, François; Yoshimura, Hiroaki
2018-04-01
In this paper, we develop variational integrators for the nonequilibrium thermodynamics of simple closed systems. These integrators are obtained by a discretization of the Lagrangian variational formulation of nonequilibrium thermodynamics developed in (Gay-Balmaz and Yoshimura 2017a J. Geom. Phys. part I 111 169–93 Gay-Balmaz and Yoshimura 2017b J. Geom. Phys. part II 111 194–212) and thus extend the variational integrators of Lagrangian mechanics, to include irreversible processes. In the continuous setting, we derive the structure preserving property of the flow of such systems. This property is an extension of the symplectic property of the flow of the Euler–Lagrange equations. In the discrete setting, we show that the discrete flow solution of our numerical scheme verifies a discrete version of this property. We also present the regularity conditions which ensure the existence of the discrete flow. We finally illustrate our discrete variational schemes with the implementation of an example of a simple and closed system.
Multilinear Graph Embedding: Representation and Regularization for Images.
Chen, Yi-Lei; Hsu, Chiou-Ting
2014-02-01
Given a set of images, finding a compact and discriminative representation is still a big challenge especially when multiple latent factors are hidden in the way of data generation. To represent multifactor images, although multilinear models are widely used to parameterize the data, most methods are based on high-order singular value decomposition (HOSVD), which preserves global statistics but interprets local variations inadequately. To this end, we propose a novel method, called multilinear graph embedding (MGE), as well as its kernelization MKGE to leverage the manifold learning techniques into multilinear models. Our method theoretically links the linear, nonlinear, and multilinear dimensionality reduction. We also show that the supervised MGE encodes informative image priors for image regularization, provided that an image is represented as a high-order tensor. From our experiments on face and gait recognition, the superior performance demonstrates that MGE better represents multifactor images than classic methods, including HOSVD and its variants. In addition, the significant improvement in image (or tensor) completion validates the potential of MGE for image regularization.
Monitoring variable X-ray sources in nearby galaxies
NASA Astrophysics Data System (ADS)
Kong, A. K. H.
2010-12-01
In the last decade, it has been possible to monitor variable X-ray sources in nearby galaxies. In particular, since the launch of Chandra, M31 has been regularly observed. It is perhaps the only nearby galaxy which is observed by an X-ray telescope regularly throughout operation. With 10 years of observations, the center of M31 has been observed with Chandra for nearly 1 Msec and the X-ray skies of M31 consist of many transients and variables. Furthermore, the X-ray Telescope of Swift has been monitoring several ultraluminous X-ray sources in nearby galaxies regularly. Not only can we detect long-term X-ray variability, we can also find spectral variation as well as possible orbital period. In this talk, I will review some of the important Chandra and Swift monitoring observations of nearby galaxies in the past 10 years. I will also present a "high-definition" movie of M31 and discuss the possibility of detecting luminous transients in M31 with MAXI.
A regularized vortex-particle mesh method for large eddy simulation
NASA Astrophysics Data System (ADS)
Spietz, H. J.; Walther, J. H.; Hejlesen, M. M.
2017-11-01
We present recent developments of the remeshed vortex particle-mesh method for simulating incompressible fluid flow. The presented method relies on a parallel higher-order FFT based solver for the Poisson equation. Arbitrary high order is achieved through regularization of singular Green's function solutions to the Poisson equation and recently we have derived novel high order solutions for a mixture of open and periodic domains. With this approach the simulated variables may formally be viewed as the approximate solution to the filtered Navier Stokes equations, hence we use the method for Large Eddy Simulation by including a dynamic subfilter-scale model based on test-filters compatible with the aforementioned regularization functions. Further the subfilter-scale model uses Lagrangian averaging, which is a natural candidate in light of the Lagrangian nature of vortex particle methods. A multiresolution variation of the method is applied to simulate the benchmark problem of the flow past a square cylinder at Re = 22000 and the obtained results are compared to results from the literature.
1995-08-14
seismic network. At large range, infrasound signals are oscillatory acoustic signals detected as small pressure variations about the ambient value... Infrasound Review and Background Infrasound signals are regular acoustic signals in that they are longitudinal pressure waves albeit at rather low frequency...energy is concentrated at higher frequency than that for higher yield sources. Infrasound can be generated by natural and manmade processes; moreover
ERIC Educational Resources Information Center
Lee, Eunjeong; Oliveira-Ferreira, Ana I.; de Water, Ed; Gerritsen, Hans; Bakker, Mattijs C.; Kalwij, Jan A. W.; van Goudoever, Tjerk; Buster, Wietze H.; Pennartz, Cyriel M. A.
2009-01-01
To meet an increasing need to examine the neurophysiological underpinnings of behavior in rats, we developed a behavioral system for studying sensory processing, attention and discrimination learning in rats while recording firing patterns of neurons in one or more brain areas of interest. Because neuronal activity is sensitive to variations in…
Visualization of Sound Waves Using Regularly Spaced Soap Films
ERIC Educational Resources Information Center
Elias, F.; Hutzler, S.; Ferreira, M. S.
2007-01-01
We describe a novel demonstration experiment for the visualization and measurement of standing sound waves in a tube. The tube is filled with equally spaced soap films whose thickness varies in response to the amplitude of the sound wave. The thickness variations are made visible based on optical interference. The distance between two antinodes is…
Calculation of gas turbine characteristic
NASA Astrophysics Data System (ADS)
Mamaev, B. I.; Murashko, V. L.
2016-04-01
The reasons and regularities of vapor flow and turbine parameter variation depending on the total pressure drop rate π* and rotor rotation frequency n are studied, as exemplified by a two-stage compressor turbine of a power-generating gas turbine installation. The turbine characteristic is calculated in a wide range of mode parameters using the method in which analytical dependences provide high accuracy for the calculated flow output angle and different types of gas dynamic losses are determined with account of the influence of blade row geometry, blade surface roughness, angles, compressibility, Reynolds number, and flow turbulence. The method provides satisfactory agreement of results of calculation and turbine testing. In the design mode, the operation conditions for the blade rows are favorable, the flow output velocities are close to the optimal ones, the angles of incidence are small, and the flow "choking" modes (with respect to consumption) in the rows are absent. High performance and a nearly axial flow behind the turbine are obtained. Reduction of the rotor rotation frequency and variation of the pressure drop change the flow parameters, the parameters of the stages and the turbine, as well as the form of the characteristic. In particular, for decreased n, nonmonotonic variation of the second stage reactivity with increasing π* is observed. It is demonstrated that the turbine characteristic is mainly determined by the influence of the angles of incidence and the velocity at the output of the rows on the losses and the flow output angle. The account of the growing flow output angle due to the positive angle of incidence for decreased rotation frequencies results in a considerable change of the characteristic: poorer performance, redistribution of the pressure drop at the stages, and change of reactivities, growth of the turbine capacity, and change of the angle and flow velocity behind the turbine.
Cartier, Louis-Jacques; Collins, Charlene; Lagacé, Mathieu; Douville, Pierre
2018-02-01
To compare the fasting and non-fasting lipid profile including ApoB in a cohort of patients from a community setting. Our purpose was to determine the proportion of results that could be explained by the known biological variation in the fasting state and to examine the additional impact of non-fasting on these same lipid parameters. 1093 adult outpatients with fasting lipid requests were recruited from February to September 2016 at the blood collection sites of the Moncton Hospital. Participants were asked to come back in the next 3-4days after having eaten a regular breakfast to have their blood drawn for a non-fasting lipid profile. 91.6% of patients in this study had a change in total cholesterol that fell within the biological variation expected for this parameter. Similar results were seen for HDL-C (94.3%) non-HDL-C (88.8%) and ApoB (93.0%). A smaller number of patients fell within the biological variation expected for TG (78.8%) and LDL-C (74.6%). An average TG increase of 0.3mmol/L was observed in fed patients no matter the level of fasting TG. A gradual widening in the range of change in TG concentration was observed as fasting TG increased. Similar results were seen in diabetic patients. Outside of LDL-C and TG, little changes were seen in lipid parameters in the postprandial state. A large part of these changes could be explained by the biological variation. We observed a gradual widening in the range of increase in TG for patients with higher fasting TG. Non-HDL-C and ApoB should be the treatment target of choice for patients in the non-fasting state. Copyright © 2017 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.
Comparison of variational real-space representations of the kinetic energy operator
NASA Astrophysics Data System (ADS)
Skylaris, Chris-Kriton; Diéguez, Oswaldo; Haynes, Peter D.; Payne, Mike C.
2002-08-01
We present a comparison of real-space methods based on regular grids for electronic structure calculations that are designed to have basis set variational properties, using as a reference the conventional method of finite differences (a real-space method that is not variational) and the reciprocal-space plane-wave method which is fully variational. We find that a definition of the finite-difference method [P. Maragakis, J. Soler, and E. Kaxiras, Phys. Rev. B 64, 193101 (2001)] satisfies one of the two properties of variational behavior at the cost of larger errors than the conventional finite-difference method. On the other hand, a technique which represents functions in a number of plane waves which is independent of system size closely follows the plane-wave method and therefore also the criteria for variational behavior. Its application is only limited by the requirement of having functions strictly localized in regions of real space, but this is a characteristic of an increasing number of modern real-space methods, as they are designed to have a computational cost that scales linearly with system size.
[Longitudinal analysis of vaginal microbiota in women with recurrent vulvovaginal candidiasis].
Ma, Xiao; Cai, Hui-Hua; He, Yan; Zheng, Hui-Min; Kang, Ling; Zhou, Hong-Wei; Liu, Mu-Biao
2016-02-20
To investigate the vaginal flora in patients with recurrent vulvovaginal candidiasis (RVVC). Vaginal swabs were collected at different time points from 6 RVVC patients and 5 healthy women of child-bearing age. The dynamic changes, microbiota composition, alpha diversity and beta diversity in the two groups were assessed by analyzing the 16S rRNA V4 hypervariable region amplified from the total genomic DNA from the swabs. Lactobacillus was the predominant species in healthy women with similar proportions of L.iners and L.crispatus; small proportions of Gardnerella, Prevotella and other genus were also detected. In some healthy women, the vaginal flora showed a high relative abundance of anaerobic bacteria such as Gardnerella, Prevotella, Atopobium, Sneathia. Compared with the healthy women, patients with RVVC showed a significantly reduced diversity of vaginal flora, where L.iners was the predominant species and the content of L.crispatus decreased significantly. In healthy women, the vaginal flora fluctuated with the menstrual cycle, and the fluctuation was the most prominent during menstruation; the dominant species either alternated regularly or maintain an absolute superiority in the menstrual cycle. The vaginal flora showed attenuated fluctuation in women with RVVC, were highly conserved within the menstrual cycle, and maintained a similar composition in the episodes and intermittent periods. The vaginal flora of RVVC patients do not undergo regular variations with the menstrual cycle and shows a similar composition between the episodes and intermittent periods. Promoting the production of L.iners or inhibiting the colonization of L.crispatus to restore the composition of the vaginal flora may help in the treatment of RVVC.
Can captive orangutans (Pongo pygmaeus abelii) be coaxed into cumulative build-up of techniques?
Lehner, Stephan R; Burkart, Judith M; Schaik, Carel P van
2011-11-01
While striking cultural variation in behavior from one site to another has been described in chimpanzees and orangutans, cumulative culture might be unique to humans. Captive chimpanzees were recently found to be rather conservative, sticking to the technique they had mastered, even after more effective alternatives were demonstrated. Behavioral flexibility in problem solving, in the sense of acquiring new solutions after having learned another one earlier, is a vital prerequisite for cumulative build-up of techniques. Here, we experimentally investigate whether captive orangutans show such flexibility, and if so, whether they show techniques that cumulatively build up (ratchet) on previous ones after conditions of the task are changed. We provided nine Sumatran orangutans (Pongo pygmaeus abelii) with two types of transparent tubes partly filled with syrup, along with potential tools such as sticks, twigs, wood wool and paper. In the first phase, the orangutans could reach inside the tubes with their hands (Regular Condition), but in the following phase, tubes had been made too narrow for their hands to fit in (Restricted Condition 1), or in addition the setup lacked their favorite materials (Restricted Condition 2). The orangutans showed high behavioral flexibility, applying nine different techniques under the regular condition in total. Individuals abandoned preferred techniques and switched to different techniques under restricted conditions when this was advantageous. We show for two of these techniques how they cumulatively built up on earlier ones. This suggests that the near-absence of cumulative culture in wild orangutans is not due to a lack of flexibility when existing solutions to tasks are made impossible.
Sousa, André Silva Guimarães; Argolo, Poliane Sá; Gondim, Manoel Guedes Correa; de Moraes, Gilberto José; Oliveira, Anibal Ramadan
2017-08-01
The coconut mite, Aceria guerreronis Keifer (Acari: Eriophyidae), is one of the main coconut pests in the American, African and parts of the Asian continents, reaching densities of several thousand mites per fruit. Diagrammatic scales have been developed to standardize the estimation of the population densities of A. guerreronis according to the estimated percentage of damage, but these have not taken into account the possible effect of fruit age, although previous studies have already reported the variation in mite numbers with fruit age. The objective of this study was to re-construct the relation between damage and mite density at different fruit ages collected in an urban coconut plantation containing the green dwarf variety ranging from the beginning to nearly the end of the infestation, as regularly seen under field conditions in northeast Brazil, in order to improve future estimates with diagrammatic scales. The percentage of damage was estimated with two diagrammatic scales on a total of 470 fruits from 1 to 5 months old, from a field at Ilhéus, Bahia, Brazil, determining the respective number of mites on each fruit. The results suggested that in estimates with diagrammatic scales: (1) fruit age has a major effect on the estimation of A. guerreronis densities, (2) fruits of different ages should be analyzed separately, and (3) regular evaluation of infestation levels should be done preferably on fruits of about 3-4 months old, which show the highest densities.
An efficient method for model refinement in diffuse optical tomography
NASA Astrophysics Data System (ADS)
Zirak, A. R.; Khademi, M.
2007-11-01
Diffuse optical tomography (DOT) is a non-linear, ill-posed, boundary value and optimization problem which necessitates regularization. Also, Bayesian methods are suitable owing to measurements data are sparse and correlated. In such problems which are solved with iterative methods, for stabilization and better convergence, the solution space must be small. These constraints subject to extensive and overdetermined system of equations which model retrieving criteria specially total least squares (TLS) must to refine model error. Using TLS is limited to linear systems which is not achievable when applying traditional Bayesian methods. This paper presents an efficient method for model refinement using regularized total least squares (RTLS) for treating on linearized DOT problem, having maximum a posteriori (MAP) estimator and Tikhonov regulator. This is done with combination Bayesian and regularization tools as preconditioner matrices, applying them to equations and then using RTLS to the resulting linear equations. The preconditioning matrixes are guided by patient specific information as well as a priori knowledge gained from the training set. Simulation results illustrate that proposed method improves the image reconstruction performance and localize the abnormally well.
Exercise behavior and related factors in career women - the case of a bank in Taipei City.
Chen, Chen-Mei; Chang, Mei
2004-09-01
With the trend of premature aging of physiological functions on the rise and a variety of chronic diseases continuing to spread, health promotion has become the top concern among public health experts. Regular exercise plays a pivotal role in both health promotion and disease prevention. This study aims to investigate the exercise behavior of career women and related factors. The samples were drawn from the female employees of a bank in Taipei, totaling 361 persons, all aged between 20 and 56. The result shows that only 8.6 % of the respondents exercise regularly and that among the reasons for not doing any exercise, " Don ' t have time for it " tops the list. Self-efficacy in exercise is found to be the common factor for predicting both exercise regularity and total exercise amount. Exercise intervention programs thus must be developed on the basis of female self-efficacy with a " family-oriented " activity design. It is therefore suggested that employers promote exercise and encourage exercise behaviors to help enhance employee self-efficacy as well as employee health.
Solar Irradiance Variations on Active Region Time Scales
NASA Technical Reports Server (NTRS)
Labonte, B. J. (Editor); Chapman, G. A. (Editor); Hudson, H. S. (Editor); Willson, R. C. (Editor)
1984-01-01
The variations of the total solar irradiance is an important tool for studying the Sun, thanks to the development of very precise sensors such as the ACRIM instrument on board the Solar Maximum Mission. The largest variations of the total irradiance occur on time scales of a few days are caused by solar active regions, especially sunspots. Efforts were made to describe the active region effects on total and spectral irradiance.
Effect of supercritical carbon dioxide decaffeination on volatile components of green teas.
Lee, S; Park, M K; Kim, K H; Kim, Y-S
2007-09-01
Volatile components in regular and decaffeinated green teas were isolated by simultaneous steam distillation and solvent extraction (SDE), and then analyzed by GC-MS. A total of 41 compounds, including 8 alcohols, 15 terpene-type compounds, 10 carbonyls, 4 N-containing compounds, and 4 miscellaneous compounds, were found in regular and decaffeinated green teas. Among them, linalool and phenylacetaldehyde were quantitatively dominant in both regular and decaffeinated green teas. By a decaffeination process using supercritical carbon dioxide, most volatile components decreased. The more caffeine was removed, the more volatile components were reduced in green teas. In particular, relatively nonpolar components such as terpene-type compounds gradually decreased according to the decaffeination process. Aroma-active compounds in regular and decaffeinated green teas were also determined and compared by aroma extract dilution analysis (AEDA). Most greenish and floral flavor compounds such as hexanal, (E)-2-hexenal, and some unknown compounds disappeared or decreased after the decaffeination process.
Seasonal and interannual temperature variations in the tropical stratosphere
DOE Office of Scientific and Technical Information (OSTI.GOV)
Reid, G.C.
1994-09-20
Temperature variations in the tropical lower and middle stratosphere are influenced by at least five distinct driving forces. These are (1) the mechanism of the regular seasonal cycle, (2) the quasi-biennial oscillation (QBO) in zonal winds, (3) the semiannual zonal wind oscillation (SAO) at higher levels, (4) El Nino-Southern Oscillation (ENSO) effects driven by the underlying troposphere, and (5) radiative effects, including volcanic aerosol heating. Radiosonde measurements of temperatures from a number of tropical stations, mostly in the western Pacific region, are used in this paper to examine the characteristic annual and interannual temperature variability in the stratosphere below themore » 10-hPa pressure level ({approximately} 31 km) over a time period of 17 years, chosen to eliminate or at least minimize the effect of volcanic eruptions. Both annual and interannual variations are found to show a fairly distinct transition between the lower and the middle stratosphere at about the 35-hPa level ({approximately} 23 km). The lower stratosphere, below this transition level, is strongly influenced by the ENSO cycle as well as by the QBO. The overall result of the interaction is to modulate the amplitude of the normal stratospheric seasonal cycle and to impose a biennial component on it, so that alternate seasonal cycles are stronger or weaker than normal. Additional modulation by the ENSO cycle occurs at its quasi-period of 3-5 years, giving rise to a complex net behavior. In the middle stratosphere above the transition level, there is no discernible ENSO influence, and departures from the regular semiannual seasonal cycle are dominated by the QBO. Recent ideas on the underlying physical mechanisms governing these variations are discussed, as is the relationship of the radiosonde measurements to recent satellite remote-sensing observations. 37 refs., 8 figs., 1 tab.« less
Cavusoglu, M; Ciloglu, T; Serinagaoglu, Y; Kamasak, M; Erogul, O; Akcam, T
2008-08-01
In this paper, 'snore regularity' is studied in terms of the variations of snoring sound episode durations, separations and average powers in simple snorers and in obstructive sleep apnoea (OSA) patients. The goal was to explore the possibility of distinguishing among simple snorers and OSA patients using only sleep sound recordings of individuals and to ultimately eliminate the need for spending a whole night in the clinic for polysomnographic recording. Sequences that contain snoring episode durations (SED), snoring episode separations (SES) and average snoring episode powers (SEP) were constructed from snoring sound recordings of 30 individuals (18 simple snorers and 12 OSA patients) who were also under polysomnographic recording in Gülhane Military Medical Academy Sleep Studies Laboratory (GMMA-SSL), Ankara, Turkey. Snore regularity is quantified in terms of mean, standard deviation and coefficient of variation values for the SED, SES and SEP sequences. In all three of these sequences, OSA patients' data displayed a higher variation than those of simple snorers. To exclude the effects of slow variations in the base-line of these sequences, new sequences that contain the coefficient of variation of the sample values in a 'short' signal frame, i.e., short time coefficient of variation (STCV) sequences, were defined. The mean, the standard deviation and the coefficient of variation values calculated from the STCV sequences displayed a stronger potential to distinguish among simple snorers and OSA patients than those obtained from the SED, SES and SEP sequences themselves. Spider charts were used to jointly visualize the three parameters, i.e., the mean, the standard deviation and the coefficient of variation values of the SED, SES and SEP sequences, and the corresponding STCV sequences as two-dimensional plots. Our observations showed that the statistical parameters obtained from the SED and SES sequences, and the corresponding STCV sequences, possessed a strong potential to distinguish among simple snorers and OSA patients, both marginally, i.e., when the parameters are examined individually, and jointly. The parameters obtained from the SEP sequences and the corresponding STCV sequences, on the other hand, did not have a strong discrimination capability. However, the joint behaviour of these parameters showed some potential to distinguish among simple snorers and OSA patients.
2011-02-01
seakeeping was the transient wave technique, developed analytically by Davis and Zarnick (1964). At the David Taylor Model Basin, Davis and Zarnick, and...Gersten and Johnson (1969) applied the transient wave technique to regular wave model experiments for heave and pitch, at zero forward speed. These...tests demonstrated a potential reduction by an order of magnitude of the total necessary testing time. The transient wave technique was also applied to
Korhonen, K; Reijonen, T M; Remes, K; Malmström, K; Klaukka, T; Korppi, M
2001-12-01
The aims of this study were to examine the frequency of, and the reasons for, emergency hospitalization for asthma among children. In addition, the costs of hospital treatment, preventive medication, and productivity losses of the caregivers were evaluated in a population-based setting during 1 year. Data on purchases of regular asthma medication were obtained from the Social Insurance Institution. In total, 106 (2.3/1000) children aged up to 15 years were admitted 136 times for asthma exacerbation to the Kuopio University Hospital in 1998. This represented approximately 5% of all children with asthma in the area. The trigger for the exacerbation was respiratory infection in 63% of the episodes, allergen exposure in 24%, and unknown in 13%. The age-adjusted risk for admittance was 5.3% in children on inhaled steroids, 5.8% in those on cromones, and 7.9% in those with no regular medication for asthma. The mean direct cost for an admission was $1,209 (median $908; range $454-6,812) and the indirect cost was $358 ($316; $253-1,139). The cost of regular medication for asthma was, on average, $272 per admitted child on maintenance. The annual total cost as a result of asthma rose eight-fold if a child on regular medication was admitted for asthma.
Sugiyama, Takemi; Giles-Corti, Billie; Summers, Jacqui; du Toit, Lorinne; Leslie, Eva; Owen, Neville
2013-09-01
This study examined prospective relationships of green space attributes with adults initiating or maintaining recreational walking. Postal surveys were completed by 1036 adults living in Adelaide, Australia, at baseline (two time points in 2003-04) and follow-up (2007-08). Initiating or maintaining recreational walking was determined using self-reported walking frequency. Green space attributes examined were perceived presence, quality, proximity, and the objectively measured area (total and largest) and number of green spaces within a 1.6 km buffer drawn from the center of each study neighborhood. Multilevel regression analyses examined the odds of initiating or maintaining walking separately for each green space attribute. At baseline, participants were categorized into non-regular (n = 395), regular (n = 286), and irregular walkers (n = 313). Among non-regular walkers, 30% had initiated walking, while 70% of regular walkers had maintained walking at follow-up. No green space attributes were associated with initiating walking. However, positive perceptions of the presence of and proximity to green spaces and the total and largest areas of green space were significantly associated with a higher likelihood of walking maintenance over four years. Neighborhood green spaces may not assist adults to initiate walking, but their presence and proximity may facilitate them to maintain recreational walking over time. Copyright © 2013 Elsevier Inc. All rights reserved.
Hargreave, Marie; Andersen, Tina Veje; Nielsen, Ann; Munk, Christian; Liaw, Kai-Li; Kjaer, Susanne K
2010-01-01
Widespread use of and serious adverse effects associated with use of analgesics accentuates the need to consider factors related to analgesic use. The objective of this study was to describe continuous regular analgesics use and examine factors associated with a continuous regular analgesic use. The study was based on data from two surveys and included a random sample of women and men aged 18-45 years from the general Danish population. Information on analgesics use, self-rated health, demographic and lifestyle factors was collected using a self-administered questionnaire. A total of 28,000 women and 33 000 men were invited to participate and 22,199 women (response-rate 81.4%) and 23,080 men (response-rate 71.0%), respectively, were included in the study. Data were analyzed using multivariate logistic regression. We found that 27% of the women and 18% of the men reported a regular monthly use of at least seven analgesic tablets during the last year (continuous regular analgesics use). Besides poor self-rated health we found in both sexes that increasing age, poor self-rated fitness, and smoking were related to a continuous regular analgesics use. Nulliparity, low level of education, overweight/obesity, binge drinking, and abstinence were associated with a continuous regular analgesics use for women, while underweight and marital/cohabiting status were associated with a continuous regular analgesics use only for men. Regular monthly analgesic use during the last year was generally prevalent. Besides self-rated health, several socio-demographic and lifestyle factors were associated with a continuous regular analgesic use, although with some gender differences.
Capsaicinoids improve consequences of physical activity.
Sahin, Kazim; Orhan, Cemal; Tuzcu, Mehmet; Sahin, Nurhan; Erten, Fusun; Juturu, Vijaya
2018-01-01
The purpose of this study was to investigate the effects of capsaicinoids (CAPs) on lipid metabolism, inflammation, antioxidant status and the changes in gene products involved in these metabolic functions in exercised rats. A total of 28 male Wistar albino rats were randomly divided into four groups (n = 7) (i) No exercise and no CAPs, (ii) No exercise + CAPs (iii) Regular exercise, (iv) Regular exercise + CAPs. Rats were administered as 0.2 mg capsaicinoids from 10 mg/kg BW/day Capsimax ® daily for 8 weeks. A significant decrease in lactate and malondialdehyde (MDA) levels and increase in activities of antioxidant enzymes were observed in the combination of regular exercise and CAPs group ( P < 0.0001). Regular exercise + CAPs treated rats had greater nuclear factor-E2-related factor-2 (Nrf2) and heme oxygenase-1 (HO-1) levels in muscle than regular exercise and no exercise rats ( P < 0.001). Nevertheless, regular exercise + CAPs treated had lower nuclear factor kappa B (NF-κB) and IL-10 levels in muscle than regular exercise and control rats ( P < 0.001). Muscle sterol regulatory element-binding protein 1c (SREBP-1c), liver X receptors (LXR), ATP citrate lyase (ACLY) and fatty acid synthase (FAS) levels in the regular exercise + CAPs group were lower than all groups ( P < 0.05). However, muscle PPAR-γ level was higher in the regular exercise and CAPs alone than the no exercise rats. These results suggest CAPs with regular exercise may enhance lipid metabolism by regulation of gene products involved in lipid and antioxidant metabolism including SREBP-1c, PPAR-γ, and Nrf2 pathways in rats.
A theoretical approach to study the melting temperature of metallic nanowires
DOE Office of Scientific and Technical Information (OSTI.GOV)
Arora, Neha; Joshi, Deepika P.
2016-05-23
The physical properties of any material change with the change of its size from bulk range to nano range. A theoretical study to account for the size and shape effect on melting temperature of metallic nanowires has been done. We have studied zinc (Zn), indium (In), lead (Pb) and tin (Sn) nanowires with three different cross sectional shapes like regular triangular, square and regular hexagonal. Variation of melting temperature with the size and shape is graphically represented with the available experimental data. It was found that melting temperature of the nanowires decreases with decrement in the size of nanowire, duemore » to surface effect and at very small size the most probable shape also varies with material.« less
ADAPTIVE FINITE ELEMENT MODELING TECHNIQUES FOR THE POISSON-BOLTZMANN EQUATION
HOLST, MICHAEL; MCCAMMON, JAMES ANDREW; YU, ZEYUN; ZHOU, YOUNGCHENG; ZHU, YUNRONG
2011-01-01
We consider the design of an effective and reliable adaptive finite element method (AFEM) for the nonlinear Poisson-Boltzmann equation (PBE). We first examine the two-term regularization technique for the continuous problem recently proposed by Chen, Holst, and Xu based on the removal of the singular electrostatic potential inside biomolecules; this technique made possible the development of the first complete solution and approximation theory for the Poisson-Boltzmann equation, the first provably convergent discretization, and also allowed for the development of a provably convergent AFEM. However, in practical implementation, this two-term regularization exhibits numerical instability. Therefore, we examine a variation of this regularization technique which can be shown to be less susceptible to such instability. We establish a priori estimates and other basic results for the continuous regularized problem, as well as for Galerkin finite element approximations. We show that the new approach produces regularized continuous and discrete problems with the same mathematical advantages of the original regularization. We then design an AFEM scheme for the new regularized problem, and show that the resulting AFEM scheme is accurate and reliable, by proving a contraction result for the error. This result, which is one of the first results of this type for nonlinear elliptic problems, is based on using continuous and discrete a priori L∞ estimates to establish quasi-orthogonality. To provide a high-quality geometric model as input to the AFEM algorithm, we also describe a class of feature-preserving adaptive mesh generation algorithms designed specifically for constructing meshes of biomolecular structures, based on the intrinsic local structure tensor of the molecular surface. All of the algorithms described in the article are implemented in the Finite Element Toolkit (FETK), developed and maintained at UCSD. The stability advantages of the new regularization scheme are demonstrated with FETK through comparisons with the original regularization approach for a model problem. The convergence and accuracy of the overall AFEM algorithm is also illustrated by numerical approximation of electrostatic solvation energy for an insulin protein. PMID:21949541
47 CFR 90.631 - Trunked systems loading, construction and authorization requirements.
Code of Federal Regulations, 2012 CFR
2012-10-01
..., the total number of mobile units and control stations operating in the wide-area system shall be counted with respect to the total number of base station frequencies assigned to the system. (h) Regional... fractionally over the number of base station facilities with which it communicates regularly. [47 FR 41032...
Low-FODMAP vs regular rye bread in irritable bowel syndrome: Randomized SmartPill® study.
Pirkola, Laura; Laatikainen, Reijo; Loponen, Jussi; Hongisto, Sanna-Maria; Hillilä, Markku; Nuora, Anu; Yang, Baoru; Linderborg, Kaisa M; Freese, Riitta
2018-03-21
To compare the effects of regular vs low-FODMAP rye bread on irritable bowel syndrome (IBS) symptoms and to study gastrointestinal conditions with SmartPill ® . Our aim was to evaluate if rye bread low in FODMAPs would cause reduced hydrogen excretion, lower intraluminal pressure, higher colonic pH, different transit times, and fewer IBS symptoms than regular rye bread. The study was a randomized, double-blind, controlled cross-over meal study. Female IBS patients ( n = 7) ate study breads at three consecutive meals during one day. The diet was similar for both study periods except for the FODMAP content of the bread consumed during the study day. Intraluminal pH, transit time, and pressure were measured by SmartPill, an indigestible motility capsule. Hydrogen excretion (a marker of colonic fermentation) expressed as area under the curve (AUC) (0-630 min) was [median (range)] 6300 (1785-10800) ppm∙min for low-FODMAP rye bread and 10 635 (4215-13080) ppm∙min for regular bread ( P = 0.028). Mean scores of gastrointestinal symptoms showed no statistically significant differences but suggested less flatulence after low-FODMAP bread consumption ( P = 0.063). Intraluminal pressure correlated significantly with total symptom score after regular rye bread (ρ = 0.786, P = 0.036) and nearly significantly after low-FODMAP bread consumption (ρ = 0.75, P = 0.052). We found no differences in pH, pressure, or transit times between the breads. Gastric residence of SmartPill was slower than expected. SmartPill left the stomach in less than 5 h only during one measurement (out of 14 measurements in total) and therefore did not follow on par with the rye bread bolus. Low-FODMAP rye bread reduced colonic fermentation vs regular rye bread. No difference was found in median values of intraluminal conditions of the gastrointestinal tract.
Tabachnick, W J; Mecham, J O
1991-03-01
An enzyme-linked immunoassay for detecting bluetongue virus in infected Culicoides variipennis was evaluated using a nested analysis of variance to determine sources of experimental error in the procedure. The major source of variation was differences among individual insects (84% of the total variance). Storing insects at -70 degrees C for two months contributed to experimental variation in the ELISA reading (14% of the total variance) and should be avoided. Replicate assays of individual insects were shown to be unnecessary, since variation among replicate wells and plates was minor (2% of the total variance).
[Variation of extralaryngeal furcation of the recurrent laryngeal nerve in total thyroidectomy].
Fan, Zhe; Zhang, Lin; Zhang, Yingyi
2015-12-01
To explore the extralaryngeal furcation variation of the recurrent laryngeal nerve (RLN) in total thyroidectomy. The clinical data of 216 RLNs from 108 patients undergone total thyroidectomy were retrospectively analyzed. RLN was found during every operation and exposed in whole course until access into larynx. Twenty (9.26%) pieces of RLNs showed bifurcated or trifurcated RLNs before access into larynx. Ratio of furcation is lower than that reported before internationally. Bifurcations of RLNs on the left were more than that on the right. The protection of RLN is important for thyroid operation, especially in total thyroidetomy. Variation of extralaryngeal furcation of RLN usually leads to injury of RLN. Understanding of variation of RLN could decrease nerve function related complication.
Development of a food frequency questionnaire for Sri Lankan adults
2012-01-01
Background Food Frequency Questionnaires (FFQs) are commonly used in epidemiologic studies to assess long-term nutritional exposure. Because of wide variations in dietary habits in different countries, a FFQ must be developed to suit the specific population. Sri Lanka is undergoing nutritional transition and diet-related chronic diseases are emerging as an important health problem. Currently, no FFQ has been developed for Sri Lankan adults. In this study, we developed a FFQ to assess the regular dietary intake of Sri Lankan adults. Methods A nationally representative sample of 600 adults was selected by a multi-stage random cluster sampling technique and dietary intake was assessed by random 24-h dietary recall. Nutrient analysis of the FFQ required the selection of foods, development of recipes and application of these to cooked foods to develop a nutrient database. We constructed a comprehensive food list with the units of measurement. A stepwise regression method was used to identify foods contributing to a cumulative 90% of variance to total energy and macronutrients. In addition, a series of photographs were included. Results We obtained dietary data from 482 participants and 312 different food items were recorded. Nutritionists grouped similar food items which resulted in a total of 178 items. After performing step-wise multiple regression, 93 foods explained 90% of the variance for total energy intake, carbohydrates, protein, total fat and dietary fibre. Finally, 90 food items and 12 photographs were selected. Conclusion We developed a FFQ and the related nutrient composition database for Sri Lankan adults. Culturally specific dietary tools are central to capturing the role of diet in risk for chronic disease in Sri Lanka. The next step will involve the verification of FFQ reproducibility and validity. PMID:22937734
Singh, Darshan; Murugaiyah, Vikneswaran; Hamid, Shahrul Bariyah Sahul; Kasinather, Vicknasingam; Chan, Michelle Su Ann; Ho, Eric Tatt Wei; Grundmann, Oliver; Chear, Nelson Jeng Yeou; Mansor, Sharif Mahsufi
2018-07-15
Mitragyna speciosa (Korth.) also known as kratom, is a native medicinal plant of Southeast Asia with opioid-like effects. Kratom tea/juice have been traditionally used as a folk remedy and for controlling opiate withdrawal in Malaysia. Long-term opioid use is associated with depletion in testosterone levels. Since kratom is reported to deform sperm morphology and reduce sperm motility, we aimed to clinically investigate the testosterone levels following long-term kratom tea/juice use in regular kratom users. A total of 19 regular kratom users were recruited for this cross-sectional study. A full-blood test was conducted including determination of testosterone level, follicle stimulating hormone (FSH) and luteinizing hormone (LH) profile, as well as hematological and biochemical parameters of participants. We found long-term kratom tea/juice consumption with a daily mitragynine dose of 76.23-94.15 mg did not impair testosterone levels, or gonadotrophins, hematological and biochemical parameters in regular kratom users. Regular kratom tea/juice consumption over prolonged periods (>2 years) was not associated with testosterone impairing effects in humans. Copyright © 2018 Elsevier B.V. All rights reserved.
Optical diffraction by the microstructure of the wing of a moth
NASA Astrophysics Data System (ADS)
Brink, D. J.; Smit, J. E.; Lee, M. E.; Möller, A.
1995-09-01
On the wing of the moth Trichoplusia orichalcea a prominent, apparently highly reflective, golden spot can be seen. Scales from this area of the wing exhibit a regular microstructure resembling a submicrometer herringbone pattern. We show that a diffraction process from this structure is responsible for the observed optical properties, such as directionality, brightness variations, polarization, and color.
ERIC Educational Resources Information Center
Beerkens, Maarja; Souto-Otero, Manuel; de Wit, Hans; Huisman, Jeroen
2016-01-01
Increasing participation in the Erasmus study abroad program in Europe is a clear policy goal, and student-reported barriers and drivers are regularly monitored. This article uses student survey data from seven countries to examine the extent to which student-level barriers can explain the considerable cross-country variation in Erasmus…
USDA-ARS?s Scientific Manuscript database
Fourteen organic dairy farms were used to 1) evaluate seasonal variation of bioactive fatty acids in milk from 2012 to 2015, and 2) evaluate supplementation of ground whole flaxseed to maintain levels of bioactive fatty acids during the non-grazing season. During regular farm visits, milk, feed, and...
2016-04-01
in 2017. Recall from the previous retention section that there is nearly a 50% drop in the total AC fighter pilot inventory available to separate...8 Figure 3. Total Fighter Pilots by Year Group ...................................................................11 Figure 4...important for the Total Force to find an equitable balance and refine the forcing functions to produce, absorb, and sustain the dwindling fighter
Wang, Xiaoyuan; Xie, Bing; Wu, Dong; Hassan, Muhammad; Huang, Changying
2015-09-01
The generation and seasonal variations of secondary pollutants were investigated during three municipal solid waste (MSW) compression and transfer in Shanghai, China. The results showed that the raw wastewater generated from three MSW transfer stations had pH of 4.2-6.0, COD 40,000-70,000mg/L, BOD5 15,000-25,000mg/L, ammonia nitrogen (NH3-N) 400-700mg/L, total nitrogen (TN) 600-1500mg/L, total phosphorus (TP) 50-200mg/L and suspended solids (SS) 1000-80,000mg/L. The pH, COD, BOD5 and NH3-N did not show regular change throughout the year while the concentration of TN, TP and SS were higher in summer and autumn. The animal and vegetable oil content was extremely high. The average produced raw wastewater of three transfer stations ranged from 2.3% to 8.4% of total refuse. The major air pollutants of H2S 0.01-0.17mg/m(3), NH3 0.75-1.8mg/m(3) in transfer stations, however, the regular seasonal change was not discovered. During the transfer process, the generated leachate in container had pH of 5.7-6.4, SS of 9120-32,475mg/L. The COD and BOD5 were 41,633-89,060mg/L and 18,116-34,130mg/L respectively, higher than that in the compress process. The concentration of NH3-N and TP were 587-1422mg/L and 80-216mg/L, respectively, and both increased during transfer process. H2S, VOC, CH4 and NH3 were 0.4-4mg/m(3), 7-19mg/m(3), 0-3.4% and 1-4mg/m(3), respectively. The PCA analysis showed that the production of secondary pollutants is closely related to temperature, especially CH4. Therefore, avoiding high temperature is a key means of reducing the production of gaseous pollutants. And above all else, refuse classification in source, deodorization and anti-acid corrosion are the important processes to control the secondary pollutants during compression and transfer of MSW. Copyright © 2015 Elsevier Ltd. All rights reserved.
Brown, Allen W; Leibson, Cynthia L; Mandrekar, Jay; Ransom, Jeanine E; Malec, James F
2014-01-01
To examine the contribution of co-occurring nonhead injuries to hazard of death after traumatic brain injury (TBI). A random sample of Olmsted County, Minnesota, residents with confirmed TBI from 1987 through 1999 was identified. Each case was assigned an age- and sex-matched, non-TBI "regular control" from the population. For "special cases" with accompanying nonhead injuries, 2 matched "special controls" with nonhead injuries of similar severity were assigned. Vital status was followed from baseline (ie, injury date for cases, comparable dates for controls) through 2008. Cases were compared first with regular controls and second with regular or special controls, depending on case type. In total, 1257 cases were identified (including 221 special cases). For both cases versus regular controls and cases versus regular or special controls, the hazard ratio was increased from baseline to 6 months (10.82 [2.86-40.89] and 7.13 [3.10-16.39], respectively) and from baseline through study end (2.92 [1.74-4.91] and 1.48 [1.09-2.02], respectively). Among 6-month survivors, the hazard ratio was increased for cases versus regular controls (1.43 [1.06-2.15]) but not for cases versus regular or special controls (1.05 [0.80-1.38]). Among 6-month survivors, accounting for nonhead injuries resulted in a nonsignificant effect of TBI on long-term mortality.
Nielsen, Marie Katrine Klose; Johansen, Sys Stybe; Linnet, Kristian
2014-01-01
Assessment of total uncertainty of analytical methods for the measurements of drugs in human hair has mainly been derived from the analytical variation. However, in hair analysis several other sources of uncertainty will contribute to the total uncertainty. Particularly, in segmental hair analysis pre-analytical variations associated with the sampling and segmentation may be significant factors in the assessment of the total uncertainty budget. The aim of this study was to develop and validate a method for the analysis of 31 common drugs in hair using ultra-performance liquid chromatography-tandem mass spectrometry (UHPLC-MS/MS) with focus on the assessment of both the analytical and pre-analytical sampling variations. The validated method was specific, accurate (80-120%), and precise (CV≤20%) across a wide linear concentration range from 0.025-25 ng/mg for most compounds. The analytical variation was estimated to be less than 15% for almost all compounds. The method was successfully applied to 25 segmented hair specimens from deceased drug addicts showing a broad pattern of poly-drug use. The pre-analytical sampling variation was estimated from the genuine duplicate measurements of two bundles of hair collected from each subject after subtraction of the analytical component. For the most frequently detected analytes, the pre-analytical variation was estimated to be 26-69%. Thus, the pre-analytical variation was 3-7 folds larger than the analytical variation (7-13%) and hence the dominant component in the total variation (29-70%). The present study demonstrated the importance of including the pre-analytical variation in the assessment of the total uncertainty budget and in the setting of the 95%-uncertainty interval (±2CVT). Excluding the pre-analytical sampling variation could significantly affect the interpretation of results from segmental hair analysis. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
Van Dillen, Linda R.; Bloom, Nancy J.; Gombatto, Sara P.; Susco, Thomas M.
2008-01-01
Objective To examine whether passive hip rotation motion was different between people with and without low back pain (LBP) who regularly participate in sports that require repeated rotation of the trunk and hips. We hypothesized that people with LBP would have less total hip rotation motion and more asymmetry of motion between sides than people without LBP. Design Two group, case-control. Setting University-based musculoskeletal analysis laboratory. Participants Forty-eight subjects (35 males, 13 females; mean age: 26.56±7.44 years) who reported regular participation in a rotation-related sport participated. Two groups were compared; people with LBP (N=24) and people without LBP (N=24; NoLBP). Main outcome measures Data were collected on participant-related, LBP-related, sport-related and activity-related variables. Measures of passive hip rotation range of motion were obtained. The differences between the LBP and NoLBP groups were examined. Results People with and without a history of LBP were the same with regard to all participant-related, sport-related and activity-related variables. The LBP group had significantly less total rotation (P=.035) and more asymmetry of total rotation, right hip versus left hip, (P=.022) than the NoLBP group. Left total hip rotation was more limited than right total hip rotation in the LBP group (P=.004). There were no significant differences in left and right total hip rotation for the NoLBP group (P=.323). Conclusions Among people who participate in rotation-related sports, those with LBP had less overall passive hip rotation motion and more asymmetry of rotation between sides than people without LBP. These findings suggest that the specific directional demands imposed on the hip and trunk during regularly performed activities may be an important consideration in deciding which impairments may be most relevant to test and to consider in prevention and intervention strategies. PMID:19081817
Jawale, Bhushan Arun; Bendgude, Vikas; Mahuli, Amit V; Dave, Bhavana; Kulkarni, Harshal; Mittal, Simpy
2012-03-01
A high incidence of dental caries and dental erosion associated with frequent consumption of soft drinks has been reported. The purpose of this study was to evaluate the pH response of dental plaque to a regular, diet and high energy drink. Twenty subjects were recruited for this study. All subjects were between the ages of 20 and 25 and had at least four restored tooth surfaces present. The subjects were asked to refrain from brushing for 48 hours prior to the study. At baseline, plaque pH was measured from four separate locations using harvesting method. Subjects were asked to swish with 15 ml of the respective soft drink for 1 minute. Plaque pH was measured at the four designated tooth sites at 5, 10 and 20 minutes intervals. Subjects then repeated the experiment using the other two soft drinks. pH was minimum for regular soft drink (2.65 ± 0.026) followed by high energy drink (3.39 ± 0.026) and diet soft drink (3.78 ± 0.006). The maximum drop in plaque pH was seen with regular soft drink followed by high energy drink and diet soft drink. Regular soft drink possesses a greater acid challenge potential on enamel than diet and high energy soft drinks. However, in this clinical trial, the pH associated with either soft drink did not reach the critical pH which is expected for enamel demineralization and dissolution.
Financing the World Health Organisation: global importance of extrabudgetary funds.
Vaughan, J P; Mogedal, S; Kruse, S; Lee, K; Walt, G; de Wilde, K
1996-03-01
From 1948, when WHO was established, the Organisation has relied on the assessed contributions of its member states for its regular budget. However, since the early 1980s the WHO World Health Assembly has had a policy of zero real growth for the regular budget and has had to rely increasingly, therefore, on attracting additional voluntary contributions, called extrabudgetary funds (EBFs). Between 1984-85 and 1992-93 the real value of the EBFs apparently increased by more than 60% and in the 1990-91 biennium expenditure of extrabudgetary funds exceeded the regular budget for the first time. All WHO programmes, except the Assembly and the Executive Board, receive some EBFs. However, three cosponsored and six large regular programmes account for about 70% of these EBFs, mainly for vertically managed programmes in the areas of disease control, health promotion and human reproduction. Eighty percent of all EBFs received by WHO for assisted activities have been contributed by donor governments, with the top 10 countries (in Europe, North America and Japan) contributing about 90% of this total, whereas the UN funds and the World Bank have donated only about 6% of the total to date. By contrast, about 70% of the regular budget expenditure has been for organisational expenses and for the support of programmes in the area of health systems. Despite the fact that the more successful programmes are heavily reliant on EBFs, there are strong indications that donors, particularly donor governments, are reluctant to maintain the current level of funding without major reforms in the leadership and management of the Organisation. This has major implications for WHO's international role as the leading UN specialised agency for health.
Quasivariational Solutions for First Order Quasilinear Equations with Gradient Constraint
NASA Astrophysics Data System (ADS)
Rodrigues, José Francisco; Santos, Lisa
2012-08-01
We prove the existence of solutions for a quasi-variational inequality of evolution with a first order quasilinear operator and a variable convex set which is characterized by a constraint on the absolute value of the gradient that depends on the solution itself. The only required assumption on the nonlinearity of this constraint is its continuity and positivity. The method relies on an appropriate parabolic regularization and suitable a priori estimates. We also obtain the existence of stationary solutions by studying the asymptotic behaviour in time. In the variational case, corresponding to a constraint independent of the solution, we also give uniqueness results.
Sleep duration and regularity are associated with behavioral problems in 8-year-old children.
Pesonen, Anu-Katriina; Räikkönen, Katri; Paavonen, E Juulia; Heinonen, Kati; Komsi, Niina; Lahti, Jari; Kajantie, Eero; Järvenpää, Anna-Liisa; Strandberg, Timo
2010-12-01
Relatively little is known about the significance of normal variation in objectively assessed sleep duration and its regularity in children's psychological well-being. We explored the associations between sleep duration and regularity and behavioral and emotional problems in 8-year-old children. A correlational design was applied among an epidemiological sample of children born in 1998. Sleep was registered with an actigraph for seven nights (range 3 to 14) in 2006. Mothers (n = 280) and fathers (n = 190) rated their child's behavioral problems with the Child Behavior Checklist. Children with short sleep duration had an increased risk for behavioral problems, thought problems, and Diagnostic and Statistical Manual of Mental Disorders, 4th Edition-based attention-deficit hyperactivity problems according to maternal ratings. Based on paternal ratings, short sleep duration was associated with more rule-breaking and externalizing symptoms. Irregularity in sleep duration from weekdays to weekends was associated with an increased risk for specifically internalizing symptoms in paternal ratings. The results highlight the importance of sufficient sleep duration and regular sleep patterns from weekdays to weekends. Short sleep duration was associated specifically with problems related to attentional control and externalizing behaviors, whereas irregularity in sleep duration was, in particular, associated with internalizing problems.
Kräuchi, Kurt; Konieczka, Katarzyna; Roescheisen-Weich, Corina; Gompper, Britta; Hauenstein, Daniela; Schoetzau, Andreas; Fraenkl, Stephan; Flammer, Josef
2014-02-01
Diurnal cycle variations in body-heat loss and heat production, and their resulting core body temperature (CBT), are relatively well investigated; however, little is known about their variations across the menstrual cycle under ambulatory conditions. The main purpose of this study was to determine whether menstrual cycle variations in distal and proximal skin temperatures exhibit similar patterns to those of diurnal variations, with lower internal heat conductance when CBT is high, i.e. during the luteal phase. Furthermore, we tested these relationships in two groups of women, with and without thermal discomfort of cold extremities (TDCE). In total, 19 healthy eumenorrheic women with regular menstrual cycles (28-32 days), 9 with habitual TDCE (ages 29 ± 1.5 year; BMI 20.1 ± 0.4) and 10 controls without these symptoms (CON: aged 27 ± 0.8 year; BMI 22.7 ± 0.6; p < 0.004 different to TDCE) took part in the study. Twenty-eight days continuous ambulatory skin temperature measurements of distal (mean of hands and feet) and proximal (mean of sternum and infraclavicular regions) skin regions, thighs, and calves were carried out under real-life, ambulatory conditions (i-Buttons® skin probes, sampling rate: 2.5 min). The distal minus proximal skin temperature gradient (DPG) provided a valuable measure for heat redistribution from the core to the shell, and, hence, for internal heat conduction. Additionally, basal body temperature was measured sublingually directly after waking up in bed. Mean diurnal amplitudes in skin temperatures increased from proximal to distal skin regions and the 24-h mean values were inversely related. TDCE compared to CON showed significantly lower hand skin temperatures and DPG during daytime. However, menstrual cycle phase did not modify these diurnal patterns, indicating that menstrual and diurnal cycle variations in skin temperatures reveal additive effects. Most striking was the finding that all measured skin temperatures, together with basal body temperature, revealed a similar menstrual cycle variation (independent of BMI), with highest and lowest values during the luteal and follicular phases, respectively. These findings lead to the conclusion that in contrast to diurnal cycle, variations in CBT variation across the menstrual cycle cannot be explained by changes in internal heat conduction under ambulatory conditions. Although no measurements of metabolic heat production were carried out increased metabolic heat generation during the luteal phase seems to be the most plausible explanation for similar body temperature increases.
Analysis of Geomagnetic Field Variations during Total Solar Eclipses Using INTERMAGNET Data
NASA Astrophysics Data System (ADS)
KIM, J. H.; Chang, H. Y.
2017-12-01
We investigate variations of the geomagnetic field observed by INTERMAGNET geomagnetic observatories over which the totality path passed during a solar eclipse. We compare results acquired by 6 geomagnetic observatories during the 4 total solar eclipses (11 August 1999, 1 August 2008, 11 July 2010, and 20 March 2015) in terms of geomagnetic and solar ecliptic parameters. These total solar eclipses are the only total solar eclipse during which the umbra of the moon swept an INTERMAGNET geomagnetic observatory and simultaneously variations of the geomagnetic field are recorded. We have confirmed previous studies that increase BY and decreases of BX, BZ and F are conspicuous. Interestingly, we have noted that variations of geomagnetic field components observed during the total solar eclipse at Isla de Pascua Mataveri (Easter Island) in Chile (IPM) in the southern hemisphere show distinct decrease of BY and increases of BX and BZ on the contrary. We have found, however, that variations of BX, BY, BZ and F observed at Hornsund in Norway (HRN) seem to be dominated by other geomagnetic occurrence. In addition, we have attempted to obtain any signatures of influence on the temporal behavior of the variation in the geomagnetic field signal during the solar eclipse by employing the wavelet analysis technique. Finally, we conclude by pointing out that despite apparent success a more sophisticate and reliable algorithm is required before implementing to make quantitative comparisons.
Density-functional theory for internal magnetic fields
NASA Astrophysics Data System (ADS)
Tellgren, Erik I.
2018-01-01
A density-functional theory is developed based on the Maxwell-Schrödinger equation with an internal magnetic field in addition to the external electromagnetic potentials. The basic variables of this theory are the electron density and the total magnetic field, which can equivalently be represented as a physical current density. Hence, the theory can be regarded as a physical current density-functional theory and an alternative to the paramagnetic current density-functional theory due to Vignale and Rasolt. The energy functional has strong enough convexity properties to allow a formulation that generalizes Lieb's convex analysis formulation of standard density-functional theory. Several variational principles as well as a Hohenberg-Kohn-like mapping between potentials and ground-state densities follow from the underlying convex structure. Moreover, the energy functional can be regarded as the result of a standard approximation technique (Moreau-Yosida regularization) applied to the conventional Schrödinger ground-state energy, which imposes limits on the maximum curvature of the energy (with respect to the magnetic field) and enables construction of a (Fréchet) differentiable universal density functional.
Agarwal, Suresh K; Kriel, Robert L; Cloyd, James C; Coles, Lisa D; Scherkenbach, Lisa A; Tobin, Michael H; Krach, Linda E
2015-01-01
Our objective was to characterize baclofen pharmacokinetics and safety given orally and intravenously. Twelve healthy subjects were enrolled in a randomized, open-label, crossover study and received single doses of baclofen: 3 or 5 mg given intravenously and 5 or 10 mg taken orally with a 48-hour washout. Blood samples for baclofen analysis were collected pre-dose and at regular intervals up to 24 hours post-dose. Clinical response was assessed by sedation scores, ataxia, and nystagmus. Mean absolute bioavailability of oral baclofen was 74%. Dose-adjusted areas under the curve between the oral and intravenous arms were statistically different (P = .0024), whereas area under the curve variability was similar (coefficient of variation: 18%-24%). Adverse effects were mild in severity and not related to either dose or route of administration. Three- and 5-mg intravenous doses of baclofen were well tolerated. Seventy-four percent oral bioavailability indicates that smaller doses of intravenous baclofen are needed to attain comparable total drug exposures. © The Author(s) 2014.
A storm-time plasmasphere evolution study using data assimilation
NASA Astrophysics Data System (ADS)
Nikoukar, R.; Bust, G. S.; Bishop, R. L.; Coster, A. J.; Lemon, C.; Turner, D. L.; Roeder, J. L.
2017-12-01
In this work, we study the evolution of the Earth's plasmasphere during geomagnetic active periods using the Plasmasphere Data Assimilation (PDA) model. The total electron content (TEC) measurements from an extensive network of global ground-based GPS receivers as well as GPS receivers on-board Constellation Observing System for Meteorology, Ionosphere and Climate (COSMIC) satellites and Communications/Navigation Outage Forecasting System (C/NOFS) satellite are ingested into the model. Global Core Plasma model, which is an empirical plasmasphere model, is utilized as the background model. Based on the 3D-VAR optimization, the PDA assimilative model benefits from incorporation of regularization techniques to prevent non-physical altitudinal variation in density estimates due to the limited-angle observational geometry. This work focuses on the plasmapause location, plasmasphere erosion time scales and refilling rates during the main and recovery phases of geomagnetic storms as estimated from the PDA 3-dimensional global maps of electron density in the ionosphere/plasmasphere. The comparison between the PDA results with in-situ density measurements from THEMIS and Van Allen Probes, and the RCM-E first-principle model will be also presented.
Volumes and surface areas of pendular rings
Rose, W.
1958-01-01
A packing of spheres is taken as a suitable model of porous media. The packing may be regular and the sphere size may be uniform, but in general, both should be random. Approximations are developed to give the volumes and surface areas of pendular rings that exist at points of sphere contact. From these, the total free volume and interfacial specific surface area are derived as expressive of the textural character of the packing. It was found that the log-log plot of volumes and surface areas of pendular rings vary linearly with the angle made by the line joining the sphere centers and the line from the center of the largest sphere to the closest edge of the pendular ring. The relationship, moreover, was found not to be very sensitive to variation in the size ratio of the spheres in contact. It also was found that the addition of pendular ring material to various sphere packings results in an unexpected decrease in the surface area of the boundaries that confine the resulting pore space. ?? 1958 The American Institute of Physics.
Effects of volcanic ash on the forest canopy insects of Montserrat, West Indies.
Marske, Katharine A; Ivie, Michael A; Hilton, Geoff M
2007-08-01
The impact of ash deposition levels on canopy arthropods was studied on the West Indian island of Montserrat, the site of an ongoing volcanic eruption since 1995. Many of the island's natural habitats have been buried by volcanic debris, and remaining forests regularly receive volcanic ash deposition. To test the effect of ash on canopy arthropods, four study sites were sampled over a 15-mo period. Arthropod samples were obtained using canopy fogging, and ash samples were taken from leaf surfaces. Volcanic ash has had a significant negative impact on canopy arthropod populations, but the decline is not shared equally by all taxa present, and total population variation is within the variance attributed to other aboitic and biotic factors. The affected populations do not differ greatly from those of the neighboring island of St. Kitts, which has not been subject to recent volcanic activity. This indicates that observed effects on Montserrat's arthropod fauna have a short-term acute response to recent ash deposition rather than a chronic depression caused by repeated exposure to ash over the last decade.
Point spread functions and deconvolution of ultrasonic images.
Dalitz, Christoph; Pohle-Fröhlich, Regina; Michalk, Thorsten
2015-03-01
This article investigates the restoration of ultrasonic pulse-echo C-scan images by means of deconvolution with a point spread function (PSF). The deconvolution concept from linear system theory (LST) is linked to the wave equation formulation of the imaging process, and an analytic formula for the PSF of planar transducers is derived. For this analytic expression, different numerical and analytic approximation schemes for evaluating the PSF are presented. By comparing simulated images with measured C-scan images, we demonstrate that the assumptions of LST in combination with our formula for the PSF are a good model for the pulse-echo imaging process. To reconstruct the object from a C-scan image, we compare different deconvolution schemes: the Wiener filter, the ForWaRD algorithm, and the Richardson-Lucy algorithm. The best results are obtained with the Richardson-Lucy algorithm with total variation regularization. For distances greater or equal twice the near field distance, our experiments show that the numerically computed PSF can be replaced with a simple closed analytic term based on a far field approximation.
George-Nascimento, Mario; Oliva, Marcelo
2015-01-01
Research using parasites in fish population studies in the South Eastern Pacific (SEP) is summarized. There are 27 such studies (snapshots mainly) in single host species sampled at different geographic localities and at somewhat similar times. They have been devoted mainly to economically important species, though others on coastal and intertidal fish or on less- or non-commercial species provide insights on scales of temporal and spatial variation of parasite infracommunities. Later, we assess whether the probability of harbouring parasites depends on the host species body size. Our results indicate that a stronger tool for fish population studies may be developed under regular (long term) scrutiny of parasite communities, especially of small fish host species, due to their larger variability in richness, abundance and total biomass, than in large fish species. Finally, it might also be necessary to consider the effects of fishing on parasite communities as well as the natural oscillations (coupled or not) of host and parasite populations.
THE EFFECTS OF CURRENT FLOW ON BIOELECTRIC POTENTIAL
Blinks, L. R.
1936-01-01
String galvanometer records show the effect of current flow upon the bioelectric potential of Nitella cells. Three classes of effects are distinguished. 1. Counter E.M.F'S, due either to static or polarization capacity, probably the latter. These account for the high effective resistance of the cells. They record as symmetrical charge and discharge curves, which are similar for currents passing inward or outward across the protoplasm, and increase in magnitude with increasing current density. The normal positive bioelectric potential may be increased by inward currents some 100 or 200 mv., or to a total of 300 to 400 mv. The regular decrease with outward current flow is much less (40 to 50 mv.) since larger outward currents produce the next characteristic effect. 2. Stimulation. This occurs with outward currents of a density which varies somewhat from cell to cell, but is often between 1 and 2 µa/cm.2 of cell surface. At this threshold a regular counter E.M.F. starts to develop but passes over with an inflection into a rapid decrease or even disappearance of positive P.D., in a sigmoid curve with a cusp near its apex. If the current is stopped early in the curve regular depolarization occurs, but if continued a little longer beyond the first inflection, stimulation goes on to completion even though the current is then stopped. This is the "action current" or negative variation which is self propagated down the cell. During the most profound depression of P.D. in stimulation, current flow produces little or no counter E.M.F., the resistance of the cell being purely ohmic and very low. Then as the P.D. begins to recover, after a second or two, counter E.M.F. also reappears, both becoming nearly normal in 10 or 15 seconds. The threshold for further stimulation remains enhanced for some time, successively larger current densities being needed to stimulate after each action current. The recovery process is also powerful enough to occur even though the original stimulating outward current continues to flow during the entire negative variation; recovery is slightly slower in this case however. Stimulation may be produced at the break of large inward currents, doubtless by discharge of the enhanced positive P.D. (polarization). 3. Restorative Effects.—The flow of inward current during a negative variation somewhat speeds up recovery. This effect is still more strikingly shown in cells exposed to KCl solutions, which may be regarded as causing "permanent stimulation" by inhibiting recovery from a negative variation. Small currents in either direction now produce no counter E.M.F., so that the effective resistance of the cells is very low. With inward currents at a threshold density of some 10 to 20 µa/cm.2, however, there is a counter E.M.F. produced, which builds up in a sigmoid curve to some 100 to 200 mv. positive P.D. This usually shows a marked cusp and then fluctuates irregularly during current flow, falling off abruptly when the current is stopped. Further increases of current density produce this P.D. more rapidly, while decreased densities again cease to be effective below a certain threshold. The effects in Nitella are compared with those in Valonia and Halicystis, which display many of the same phenomena under proper conditions. It is suggested that the regular counter E.M.F.'S (polarizations) are due to the presence of an intact surface film or other structure offering differential hindrance to ionic passage. Small currents do not affect this structure, but it is possibly altered or destroyed by large outward currents, restored by large inward currents. Mechanisms which might accomplish the destruction and restoration are discussed. These include changes of acidity by differential migration of H ion (membrane "electrolysis"); movement of inorganic ions such as potassium; movement of organic ions, (such as Osterhout's substance R), or the radicals (such as fatty acid) of the surface film itself. Although no decision can be yet made between these, much evidence indicates that inward currents increase acidity in some critical part of the protoplasm, while outward ones decrease acidity. PMID:19872991
Segmentation of knee MRI using structure enhanced local phase filtering
NASA Astrophysics Data System (ADS)
Lim, Mikhiel; Hacihaliloglu, Ilker
2016-03-01
The segmentation of bone surfaces from magnetic resonance imaging (MRI) data has applications in the quanti- tative measurement of knee osteoarthritis, surgery planning for patient specific total knee arthroplasty and its subsequent fabrication of artificial implants. However, due to the problems associated with MRI imaging such as low contrast between bone and surrounding tissues, noise, bias fields, and the partial volume effect, segmentation of bone surfaces continues to be a challenging operation. In this paper, a new framework is presented for the enhancement of knee MRI scans prior to segmentation in order to obtain high contrast bone images. During the first stage, a new contrast enhanced relative total variation (RTV) regularization method is used in order to remove textural noise from the bone structures and surrounding soft tissue interface. This salient bone edge information is further enhanced using a sparse gradient counting method based on L0 gradient minimization, which globally controls how many non-zero gradients are resulted in order to approximate prominent bone structures in a structure-sparsity-management manner. The last stage of the framework involves incorporation of local phase bone boundary information in order to provide an intensity invariant enhancement of contrast between the bone and surrounding soft tissue. The enhanced images are segmented using a fast random walker algorithm. Validation against expert segmentation was performed on 10 clinical knee MRI images, and achieved a mean dice similarity coefficient (DSC) of 0.975.
NASA Astrophysics Data System (ADS)
Das, Shantanu; Drucker, Jeff
2017-03-01
The nucleation density and average size of graphene crystallites grown using cold wall chemical vapor deposition (CVD) on 4 μm thick Cu films electrodeposited on W substrates can be tuned by varying growth parameters. Growth at a fixed substrate temperature of 1000 °C and total pressure of 700 Torr using Ar, H2 and CH4 mixtures enabled the contribution of total flow rate, CH4:H2 ratio and dilution of the CH4/H2 mixture by Ar to be identified. The largest variation in nucleation density was obtained by varying the CH4:H2 ratio. The observed morphological changes are analogous to those that would be expected if the deposition rate were varied at fixed substrate temperature for physical deposition using thermal evaporation. The graphene crystallite boundary morphology progresses from irregular/jagged through convex hexagonal to regular hexagonal as the effective C deposition rate decreases. This observation suggests that edge diffusion of C atoms along the crystallite boundaries, in addition to H2 etching, may contribute to shape evolution of the graphene crystallites. These results demonstrate that graphene grown using cold wall CVD follows a nucleation and growth mechanism similar to hot wall CVD. As a consequence, the vast knowledge base relevant to hot wall CVD may be exploited for graphene synthesis by the industrially preferable cold wall method.
NASA Astrophysics Data System (ADS)
Mendaza, T.; Blanco-Ávalos, J. J.; Martín-Torres, J.
2017-11-01
The solar activity induces long term and short term periodical variations in the dynamics and composition of Earth's atmosphere. The Sun also shows non periodical (i.e., impulsive) activity that reaches the planets orbiting around it. In particular, Interplanetary Coronal Mass Ejections (ICMEs) reach Earth and interact with its magnetosphere and upper neutral atmosphere. Nevertheless, the interaction with the upper atmosphere is not well characterized because of the absence of regular and dedicated in situ measurements at high altitudes; thus, current descriptions of the thermosphere are based on semi empirical models. In this paper, we present the total neutral mass densities of the thermosphere retrieved from the orbital data of the International Space Station (ISS) using the General Perturbation Method, and we applied these densities to routinely compiled trajectories of the ISS in low Earth orbit (LEO). These data are explicitly independent of any atmospheric model. Our density values are consistent with atmospheric models, which demonstrates that our method is reliable for the inference of thermospheric density. We have inferred the thermospheric total neutral density response to impulsive solar activity forcing from 2001 to the end of 2006 and determined how solar events affect this response. Our results reveal that the ISS orbital parameters can be used to infer the thermospheric density and analyze solar effects on the thermosphere.
Seasonal alterations of landfill leachate composition and toxic potency in semi-arid regions.
Tsarpali, Vasiliki; Kamilari, Maria; Dailianis, Stefanos
2012-09-30
The present study investigates seasonal variations of leachate composition and its toxic potency on different species, such as the brine shrimp Artemia franciscana (formerly Artemia salina), the fairy shrimp Thamnocephalus platyurus, the estuarine rotifer Brachionus plicatilis and the microalgal flagellate Dunaliella tertiolecta. In specific, leachate regularly collected from the municipal landfill site of Aigeira (Peloponissos, Greece) during the year 2011, showed significant alterations of almost all its physicochemical parameters with time. Further analysis showed that seasonal alterations of leachate composition are related with the amount of rainfall obtained throughout the year. In fact, rainfall-related parameters, such as conductivity (Cond), nitrates (NO(3)(-)), total nitrogen (TN), ammonium (NH(4)-N), total dissolved solids (TDS) and the BOD(5)/NH(4)-N ratio could merely reflect the leachate strength and toxicity, as verified by the significant correlations occurred among each of them with the toxic endpoints, 24 h LC(50) and/or 72 h IC(50), obtained in all species tested. According to the result of the present study, it could be suggested that the aforementioned leachate parameters could be used independently, or in combination as a low-cost effective tools for estimating leachate strength and toxic potency, at least in the case of semi-arid areas such as the most of the Mediterranean countries. Copyright © 2012 Elsevier B.V. All rights reserved.
Global Electric Circuit Implications of Total Current Measurements over Electrified Clouds
NASA Technical Reports Server (NTRS)
Mach, Douglas M.; Blakeslee, Richard J.; Bateman, Monte G.
2009-01-01
We determined total conduction (Wilson) currents and flash rates for 850 overflights of electrified clouds spanning regions including the Southeastern United States, the Western Atlantic Ocean, the Gulf of Mexico, Central America and adjacent oceans, Central Brazil, and the South Pacific. The overflights include storms over land and ocean, with and without lightning, and with positive and negative Wilson currents. We combined these individual storm overflight statistics with global diurnal lightning variation data from the Lightning Imaging Sensor (LIS) and Optical Transient Detector (OTD) to estimate the thunderstorm and electrified shower cloud contributions to the diurnal variation in the global electric circuit. The contributions to the global electric circuit from lightning producing clouds are estimated by taking the mean current per flash derived from the overflight data for land and ocean overflights and combining it with the global lightning rates (for land and ocean) and their diurnal variation derived from the LIS/OTD data. We estimate the contribution of non-lightning producing electrified clouds by assuming several different diurnal variations and total non-electrified storm counts to produce estimates of the total storm currents (lightning and non-lightning producing storms). The storm counts and diurnal variations are constrained so that the resultant total current diurnal variation equals the diurnal variation in the fair weather electric field (+/-15%). These assumptions, combined with the airborne and satellite data, suggest that the total mean current in the global electric circuit ranges from 2.0 to 2.7 kA, which is greater than estimates made by others using other methods.
A 320-year AMM+SOI Index Reconstruction from Historical Atlantic Tropical Cyclone Records
NASA Astrophysics Data System (ADS)
Chenoweth, M.; Divine, D.
2010-12-01
Trends in the frequency of North Atlantic tropical cyclones, including major hurricanes, are dominated by those originating in the deep tropics. In addition, these tropical cyclones are stronger when making landfall and their total power dissipation is higher than storms forming elsewhere in the Atlantic basin. Both the Atlantic Meridional Mode (AMM) and El Nino-Southern Oscillation (ENSO) are the leading modes of coupled air-sea interaction in the Atlantic and Pacific, respectively, and have well-established relationships with Atlantic hurricane variability. Here we use a 320-year record of tropical cyclone activity in the Lesser Antilles region of the North Atlantic from historical manuscript and newspaper records to reconstruct a normalized seasonal (July-October) index combining the Southern Oscillation Index (SOI) and AMM employing both the modern analog technique and back-propagation artificial neural networks. Our results indicate that the AMM+SOI index since 1690 shows no long-term trend but is dominated by both short-term (<10 years) and long-term (quasi-decadal to bi-decadal) variations. The decadal-scale variation is consistent with both instrumental and proxy records elsewhere from the global tropics. Distinct periods of high and low index values, corresponding to high and low tropical cyclone frequency, are regularly-appearing features in the record and provides further evidence that natural decadal -scale variability in Atlantic tropical cyclone frequency must be accounted for when determining trends in records and attribution of climate change.
22 CFR 19.10-2 - Reduced annuity with regular survivor annuity to spouse or former spouse.
Code of Federal Regulations, 2010 CFR
2010-04-01
... amount over $3,600 ($14,000-3,600) $10,400: $1,040. Total reduction in participant's full annuity: $1,130... of the base: $90. Plus 10 percent of the amount over $3,600 ($12,600-3,600) $9,000: $900. Total... initial retirement or reversion to retired status following recall service. ...
The cardioprotective effect of wine on human blood chemistry.
van Velden, David P; Mansvelt, Erna P G; Fourie, Elba; Rossouw, Marietjie; Marais, A David
2002-05-01
We investigated the in vivo effects of regular consumption of red and white wine on the serum lipid profile, plasma plasminogen activator-1, homocysteine levels, and total antioxidant status. This study confirmed that moderate consumption of wine, red more than white, exerts cardioprotective effects through beneficial changes in lipid profiles and plasma total antioxidant status.
Lifetime physical activity and calcium intake related to bone density in young women.
Wallace, Lorraine Silver; Ballard, Joyce E
2002-05-01
Osteoporosis is a significant public health problem associated with increased mortality and morbidity. Our aim in this cross-sectional study was to investigate the relationship between lifetime physical activity and calcium intake and bone mineral density (BMD) and BMC (bone mineral content) in 42 regularly menstruating Caucasian women (age 21.26+/-1.91 years, BMI 23.83+/-5.85). BMD and BMC at the lumbar spine (L2-L4), hip (femoral neck, trochanter, total), and total body were assessed by dual energy x-ray absorptiometry (DXA). Lifetime history of physical activity and calcium intake was obtained by a structured interview using valid and reliable instruments. Measures of both lifetime physical activity and calcium intake were highly correlated. In stepwise multiple regression analyses, lean mass was the most important and consistent factor for predicting BMD and BMC at all skeletal sites (attributable r2 = 28.8%-78.7%). Lifetime physical activity contributed to 3.0% of the variation in total body BMD, and life-time weight-bearing physical activity explained 15.1% of variance in lumbar spine BMC. Current calcium intake predicted 6% of the variance in BMD at the femoral neck and trochanter. We found lean mass to be a powerful predictor of BMD and BMC in young women. Because lean mass can be modified to some extent by physical activity, public health efforts must be directed at increasing physical activity throughout the lifespan. Furthermore, our results suggest that adequate calcium intake may help to enhance bone mass, thus decreasing the risk of osteoporotic fracture later in life.
Rigler, E. Joshua
2017-04-26
A theoretical basis and prototype numerical algorithm are provided that decompose regular time series of geomagnetic observations into three components: secular variation; solar quiet, and disturbance. Respectively, these three components correspond roughly to slow changes in the Earth’s internal magnetic field, periodic daily variations caused by quasi-stationary (with respect to the sun) electrical current systems in the Earth’s magnetosphere, and episodic perturbations to the geomagnetic baseline that are typically driven by fluctuations in a solar wind that interacts electromagnetically with the Earth’s magnetosphere. In contrast to similar algorithms applied to geomagnetic data in the past, this one addresses the issue of real time data acquisition directly by applying a time-causal, exponential smoother with “seasonal corrections” to the data as soon as they become available.