On epicardial potential reconstruction using regularization schemes with the L1-norm data term.
Shou, Guofa; Xia, Ling; Liu, Feng; Jiang, Mingfeng; Crozier, Stuart
2011-01-07
The electrocardiographic (ECG) inverse problem is ill-posed and usually solved by regularization schemes. These regularization methods, such as the Tikhonov method, are often based on the L2-norm data and constraint terms. However, L2-norm-based methods inherently provide smoothed inverse solutions that are sensitive to measurement errors, and also lack the capability of localizing and distinguishing multiple proximal cardiac electrical sources. This paper presents alternative regularization schemes employing the L1-norm data term for the reconstruction of epicardial potentials (EPs) from measured body surface potentials (BSPs). During numerical implementation, the iteratively reweighted norm algorithm was applied to solve the L1-norm-related schemes, and measurement noise was considered in the BSP data. The proposed L1-norm data term-based regularization schemes (with L1 and L2 penalty terms of the normal derivative constraint, labelled as L1TV and L1L2) were compared with schemes based on the L2-norm data term (Tikhonov with zero-order and normal derivative constraints, labelled as ZOT and FOT, and the total variation method, labelled as L2TV). The studies demonstrated that, with averaged measurement noise, the inverse solutions provided by the L1L2 and FOT algorithms have smaller relative errors. However, when larger noise occurred in some electrodes (for example, signal lost during measurement), the L1TV and L1L2 methods can obtain more accurate EPs in a robust manner. Therefore, the L1-norm data term-based solutions are generally less perturbed by measurement noise, suggesting that the new regularization scheme is promising for providing practical ECG inverse solutions.
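As an illustration of the iteratively reweighted norm idea mentioned above, the following sketch solves an L1-norm data-fidelity problem with an L2 (zero-order Tikhonov) penalty by repeatedly solving weighted least-squares systems. It is a minimal, generic NumPy sketch, not the authors' implementation; the transfer matrix A, constraint operator L, the weight lam and the toy data are illustrative assumptions.

```python
import numpy as np

def irls_l1_data(A, b, L, lam=1e-2, n_iter=30, eps=1e-6):
    """Minimize ||A x - b||_1 + 0.5*lam*||L x||_2^2 by iteratively
    reweighted least squares, with the reweighting applied to the
    data residual so large (outlier) residuals are down-weighted."""
    x = np.linalg.lstsq(A, b, rcond=None)[0]        # plain least-squares start
    for _ in range(n_iter):
        r = A @ x - b
        w = 1.0 / np.maximum(np.abs(r), eps)        # L1 -> weighted L2 surrogate
        AW = A * w[:, None]                         # rows of A scaled by weights
        x = np.linalg.solve(A.T @ AW + lam * (L.T @ L), AW.T @ b)
    return x

# toy problem with one grossly corrupted "electrode" measurement
rng = np.random.default_rng(0)
A = rng.standard_normal((120, 60))
x_true = rng.standard_normal(60)
b = A @ x_true
b[7] += 50.0                                        # simulated lost/erroneous channel
x_hat = irls_l1_data(A, b, np.eye(60))
print(np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```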
Wang, Liansheng; Qin, Jing; Wong, Tien Tsin; Heng, Pheng Ann
2011-10-07
The epicardial potential (EP)-targeted inverse problem of electrocardiography (ECG) has been widely investigated, as it has been demonstrated that EPs reflect underlying myocardial activity. It is a well-known ill-posed problem, as small noise in the input data may yield a highly unstable solution. Traditionally, L2-norm regularization methods have been proposed to solve this ill-posed problem. But the L2-norm penalty function inherently leads to considerable smoothing of the solution, which reduces the accuracy of distinguishing abnormalities and locating diseased regions. Directly using the L1-norm penalty function, however, may greatly increase computational complexity due to its non-differentiability. We propose an L1-norm regularization method in order to reduce the computational complexity and make rapid convergence possible. Variable splitting is employed to make the L1-norm penalty function differentiable, based on the observation that both positive and negative potentials exist on the epicardial surface. Then, the inverse problem of ECG is further formulated as a bound-constrained quadratic problem, which can be efficiently solved by gradient projection in an iterative manner. Extensive experiments conducted on both synthetic data and real data demonstrate that the proposed method can handle both measurement noise and geometry noise and obtain more accurate results than previous L2- and L1-norm regularization methods, especially when the noises are large.
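The splitting-plus-gradient-projection idea described in this abstract can be sketched generically: writing the unknown as x = u − v with u, v ≥ 0 turns an L1-penalized least-squares problem into a bound-constrained quadratic program that projected gradient descent can handle. This is a hedged sketch in the spirit of GPSR-type solvers, not the authors' algorithm; A, lam and the toy data are assumptions for illustration.

```python
import numpy as np

def split_gradient_projection(A, b, lam=0.1, n_iter=500):
    """Solve min 0.5*||A x - b||^2 + lam*||x||_1 via the splitting x = u - v,
    u, v >= 0, which yields a bound-constrained QP solved by projected gradient."""
    n = A.shape[1]
    u = np.zeros(n); v = np.zeros(n)
    step = 1.0 / (2.0 * np.linalg.norm(A, 2) ** 2)   # safe step for the stacked QP
    for _ in range(n_iter):
        g = A.T @ (A @ (u - v) - b)
        u = np.maximum(u - step * (g + lam), 0.0)    # project onto u >= 0
        v = np.maximum(v - step * (-g + lam), 0.0)   # project onto v >= 0
    return u - v

rng = np.random.default_rng(1)
A = rng.standard_normal((80, 200))
x_true = np.zeros(200); x_true[[5, 50, 120]] = [2.0, -1.5, 3.0]
b = A @ x_true + 0.01 * rng.standard_normal(80)
print(np.round(split_gradient_projection(A, b)[[5, 50, 120]], 2))
```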
L1-norm locally linear representation regularization multi-source adaptation learning.
Tao, Jianwen; Wen, Shiting; Hu, Wenjun
2015-09-01
In most supervised domain adaptation learning (DAL) tasks, one has access only to a small number of labeled examples from the target domain. Therefore, the success of supervised DAL in this "small sample" regime requires effective utilization of the large amounts of unlabeled data to extract information that is useful for generalization. Toward this end, we use the geometric intuition of the manifold assumption to extend established frameworks in existing model-based DAL methods for function learning by incorporating additional information about the geometric structure of the target marginal distribution. We would like to ensure that the solution is smooth with respect to both the ambient space and the target marginal distribution. To do this, we propose a novel L1-norm locally linear representation regularization multi-source adaptation learning framework which exploits the geometry of the probability distribution and comprises two techniques. First, an L1-norm locally linear representation method is presented for robust graph construction by replacing the L2-norm reconstruction measure in LLE with an L1-norm one, termed L1-LLR for short. Second, to obtain robust graph regularization, we replace the traditional graph Laplacian regularization with our new L1-LLR graph Laplacian regularization and thereby construct a new graph-based semi-supervised learning framework with a multi-source adaptation constraint, coined the L1-MSAL method. Moreover, to deal with the nonlinear learning problem, we also generalize the L1-MSAL method by mapping the input data points from the input space to a high-dimensional reproducing kernel Hilbert space (RKHS) via a nonlinear mapping. Promising experimental results have been obtained on several real-world datasets such as face, visual video and object datasets.
Yi, Huangjian; Chen, Duofang; Li, Wei; Zhu, Shouping; Wang, Xiaorui; Liang, Jimin; Tian, Jie
2013-05-01
Fluorescence molecular tomography (FMT) is an important imaging technique of optical imaging. The major challenge of the reconstruction method for FMT is the ill-posed and underdetermined nature of the inverse problem. In past years, various regularization methods have been employed for fluorescence target reconstruction. A comparative study between the reconstruction algorithms based on l1-norm and l2-norm for two imaging models of FMT is presented. The first imaging model is adopted by most researchers, where the fluorescent target is of small size to mimic small tissue with fluorescent substance, as demonstrated by the early detection of a tumor. The second model is the reconstruction of distribution of the fluorescent substance in organs, which is essential to drug pharmacokinetics. Apart from numerical experiments, in vivo experiments were conducted on a dual-modality FMT/micro-computed tomography imaging system. The experimental results indicated that l1-norm regularization is more suitable for reconstructing the small fluorescent target, while l2-norm regularization performs better for the reconstruction of the distribution of fluorescent substance.
Experimental/clinical evaluation of EIT image reconstruction with l1 data and image norms
NASA Astrophysics Data System (ADS)
Mamatjan, Yasin; Borsic, Andrea; Gürsoy, Doga; Adler, Andy
2013-04-01
Electrical impedance tomography (EIT) image reconstruction is ill-posed, and the spatial resolution of reconstructed images is low due to the diffuse propagation of current and the limited number of independent measurements. Generally, image reconstruction is formulated using a regularized scheme in which l2 norms are preferred for both the data misfit and image prior terms due to computational convenience, which results in smooth solutions. However, recent work on a Primal Dual-Interior Point Method (PDIPM) framework showed its effectiveness in dealing with this minimization problem: l1 norms on the data and regularization terms in EIT image reconstruction address both the reconstruction of sharp edges and robustness to measurement errors. We aim for a clinical and experimental evaluation of the PDIPM method by selecting scenarios (human lung and dog breathing) with known electrode errors, which require a rigorous regularization and cause the failure of reconstructions with the l2 norm. Results demonstrate the applicability of PDIPM algorithms, especially l1 data and regularization norms, for clinical applications of EIT, showing that the l1 solution is not only more robust to measurement errors in a clinical setting but also provides high contrast resolution on organ boundaries.
Zhang, Cheng; Zhang, Tao; Li, Ming; Peng, Chengtao; Liu, Zhaobang; Zheng, Jian
2016-06-18
In order to reduce the radiation dose of CT (computed tomography), compressed sensing theory has been a hot topic, since it provides the possibility of a high-quality recovery from sparsely sampled data. Recently, an algorithm based on DL (dictionary learning) was developed to deal with the sparse CT reconstruction problem. However, the existing DL algorithm focuses on the minimization problem with an L2-norm regularization term, which causes reconstruction quality to deteriorate as the sampling rate declines further. Therefore, it is essential to improve the DL method to meet the demand for more dose reduction. In this paper, we replaced the L2-norm regularization term with an L1-norm one. It is expected that the proposed L1-DL method could alleviate the over-smoothing effect of the L2-minimization and preserve more image details. The proposed algorithm solves the L1-minimization problem through a weighting strategy, recasting it as a weighted L2-minimization problem solved by IRLS (iteratively reweighted least squares). Through numerical simulation, the proposed algorithm is compared with the existing DL method (adaptive dictionary based statistical iterative reconstruction, ADSIR) and two other typical compressed sensing algorithms. It is revealed that the proposed algorithm is more accurate than the other algorithms, especially when the sampling rate is reduced further or the noise is increased. The proposed L1-DL algorithm can utilize more prior information on image sparsity than ADSIR. By replacing the L2-norm regularization term of ADSIR with the L1-norm one and solving the L1-minimization problem by the IRLS strategy, L1-DL can reconstruct the image more accurately.
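The IRLS reweighting used for the L1 penalty above can be sketched as follows for a plain L1-regularized least-squares subproblem: the L1 penalty is replaced by an adaptively weighted quadratic penalty and a small linear system is re-solved at each iteration. This is a generic sketch only, not the paper's full dictionary-learning pipeline (which alternates patch-wise sparse coding with an image update); A, b, lam and eps are illustrative assumptions.

```python
import numpy as np

def irls_l1_penalty(A, b, lam=0.05, n_iter=40, eps=1e-6):
    """Approximate min 0.5*||A x - b||^2 + lam*||x||_1 by IRLS:
    the L1 term is replaced by the weighted quadratic lam*sum(x_i^2/(|x_i|+eps))."""
    x = A.T @ b                                   # simple initial estimate
    for _ in range(n_iter):
        W = np.diag(lam / (np.abs(x) + eps))      # reweighted quadratic penalty
        x = np.linalg.solve(A.T @ A + W, A.T @ b)
    return x
```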
Regularized Filters for L1-Norm-Based Common Spatial Patterns.
Wang, Haixian; Li, Xiaomeng
2016-02-01
The l1-norm-based common spatial patterns (CSP-L1) approach is a recently developed technique for optimizing spatial filters in the field of electroencephalogram (EEG)-based brain-computer interfaces. The l1-norm-based expression of dispersion in CSP-L1 alleviates the negative impact of outliers. In this paper, we further improve the robustness of CSP-L1 by taking into account noise that does not necessarily deviate as strongly as outliers. The noise modelling is formulated using the waveform length of the EEG time course. With this noise modelling, we then regularize the objective function of CSP-L1, in which the l1-norm is used in two roles: one for the dispersion and the other for the waveform length. An iterative algorithm is designed to solve the optimization problem of the regularized objective function. A toy illustration and classification experiments on real EEG data sets show the effectiveness of the proposed method.
Improved dynamic MRI reconstruction by exploiting sparsity and rank-deficiency.
Majumdar, Angshul
2013-06-01
In this paper we address the problem of dynamic MRI reconstruction from partially sampled K-space data. Our work is motivated by previous studies in this area that proposed exploiting the spatiotemporal correlation of the dynamic MRI sequence by posing the reconstruction problem as a least-squares minimization regularized by sparsity and low-rank penalties. Ideally, the sparsity and low-rank penalties should be represented by the l(0)-norm and the rank of a matrix; however, both are NP-hard penalties. The previous studies used the convex l(1)-norm as a surrogate for the l(0)-norm and the non-convex Schatten-q norm (0 < q ≤ 1) as a surrogate for the rank of the matrix.
NASA Astrophysics Data System (ADS)
Pan, Chu-Dong; Yu, Ling; Liu, Huan-Lin; Chen, Ze-Peng; Luo, Wen-Feng
2018-01-01
Moving force identification (MFI) is an important inverse problem in the field of bridge structural health monitoring (SHM). Reasonable signal structures of moving forces are rarely considered in existing MFI methods. Interaction forces are complex because they contain both slowly varying harmonic components and impact signals, due to bridge vibration and bumps on the bridge deck, respectively. Therefore, the interaction forces are usually hard to express completely and sparsely using a single basis function set. Based on a redundant concatenated dictionary and a weighted l1-norm regularization method, a hybrid method is proposed for MFI in this study. The redundant dictionary consists of both trigonometric functions and rectangular functions, used for matching the harmonic and impact signal features of the unknown moving forces. The weighted l1-norm regularization method is introduced in the formulation of the MFI equation, so that the signal features of the moving forces can be accurately extracted. The fast iterative shrinkage-thresholding algorithm (FISTA) is used to solve the MFI problem. The optimal regularization parameter is chosen by the Bayesian information criterion (BIC) method. In order to assess the accuracy and feasibility of the proposed method, a simply supported beam bridge subjected to a moving force is taken as an example for numerical simulations. Finally, a series of experimental studies on MFI of a steel beam are performed in the laboratory. Both numerical and experimental results show that the proposed method can accurately identify the moving forces with strong robustness, and it performs better than the Tikhonov regularization method. Some related issues are discussed as well.
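The core numerical ingredient mentioned above, FISTA applied to a weighted l1 penalty over a redundant (concatenated) dictionary, can be sketched generically as below. The dictionary construction, the per-atom weights w and the parameter values are assumptions for illustration; the paper's actual dictionary, weighting rule and BIC-based parameter selection are not reproduced here.

```python
import numpy as np

def soft(z, t):
    """Element-wise soft-thresholding (prox of the weighted l1 norm)."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def fista_weighted_l1(Phi, y, w, lam=0.05, n_iter=300):
    """FISTA for min 0.5*||Phi c - y||^2 + lam*||diag(w) c||_1."""
    L = np.linalg.norm(Phi, 2) ** 2            # Lipschitz constant of the gradient
    c = np.zeros(Phi.shape[1]); z = c.copy(); t = 1.0
    for _ in range(n_iter):
        c_new = soft(z - Phi.T @ (Phi @ z - y) / L, lam * w / L)
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        z = c_new + (t - 1.0) / t_new * (c_new - c)
        c, t = c_new, t_new
    return c

# hypothetical concatenated dictionary: trigonometric atoms + impulse-like atoms
n = 256; k = np.arange(n)
trig = np.stack([np.sin(2 * np.pi * f * k / n) for f in range(1, 21)], axis=1)
trig /= np.linalg.norm(trig, axis=0)
Phi = np.hstack([trig, np.eye(n)])             # identity as a stand-in for rectangles
y = Phi[:, 3] + 0.8 * Phi[:, 60]               # one harmonic + one impact component
c = fista_weighted_l1(Phi, y, np.ones(Phi.shape[1]))
```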
Lp-Norm Regularization in Volumetric Imaging of Cardiac Current Sources
Rahimi, Azar; Xu, Jingjia; Wang, Linwei
2013-01-01
Advances in computer vision have substantially improved our ability to analyze the structure and mechanics of the heart. In comparison, our ability to observe and analyze cardiac electrical activity is much more limited. Progress in computationally reconstructing cardiac current sources from noninvasive voltage data sensed on the body surface has been hindered by the ill-posedness and the lack of a unique solution of the reconstruction problem. Common L2- and L1-norm regularizations tend to produce a solution that is either too diffused or too scattered to reflect the complex spatial structure of current source distribution in the heart. In this work, we propose a general regularization with an Lp-norm (1 < p < 2) constraint to bridge the gap and balance between an overly smeared and an overly focal solution in cardiac source reconstruction. In a set of phantom experiments, we demonstrate the superiority of the proposed Lp-norm method over its L1 and L2 counterparts in imaging cardiac current sources with increasing extents. Through computer-simulated and real-data experiments, we further demonstrate the feasibility of the proposed method in imaging the complex structure of the excitation wavefront, as well as current sources distributed along the postinfarction scar border. This ability to preserve the spatial structure of the source distribution is important for revealing potential disruptions to normal heart excitation. PMID:24348735
Fluorescence molecular tomography reconstruction via discrete cosine transform-based regularization
NASA Astrophysics Data System (ADS)
Shi, Junwei; Liu, Fei; Zhang, Jiulou; Luo, Jianwen; Bai, Jing
2015-05-01
Fluorescence molecular tomography (FMT) as a noninvasive imaging modality has been widely used in biomedical preclinical applications. However, FMT reconstruction suffers from severe ill-posedness, especially when a limited number of projections are used. In order to improve the quality of FMT reconstruction results, a discrete cosine transform (DCT) based reweighted L1-norm regularization algorithm is proposed. In each iteration of the reconstruction process, different reweighted regularization parameters are adaptively assigned according to the values of the DCT coefficients to suppress the reconstruction noise. In addition, the permissible region of the reconstructed fluorophores is adaptively constructed to increase the convergence speed. In order to evaluate the performance of the proposed algorithm, physical phantom and in vivo mouse experiments with a limited number of projections are carried out. For comparison, different L1-norm regularization strategies are employed. By quantifying the signal-to-noise ratio (SNR) of the reconstruction results in the phantom and in vivo mouse experiments with four projections, the proposed DCT-based reweighted L1-norm regularization shows higher SNR than the other L1-norm regularizations employed in this work.
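The reweighting idea, where regularization weights are driven by DCT coefficients of the current estimate, can be sketched generically as a reweighted-l1 scheme in an orthonormal DCT basis: each outer pass solves a weighted l1 problem by proximal-gradient iterations and then refreshes the weights from the DCT coefficients. This is only a loose sketch of the reweighting principle; it does not reproduce the paper's adaptive permissible-region construction, and A, y and all parameters are illustrative assumptions.

```python
import numpy as np
from scipy.fft import dct, idct

def reweighted_l1_dct(A, y, lam=0.02, outer=5, inner=100, eps=1e-3):
    """Reweighted-L1 reconstruction in an orthonormal DCT basis:
    min_x 0.5*||A x - y||^2 + lam*||W * dct(x)||_1, with the weights W
    refreshed from the DCT coefficients of the current estimate."""
    n = A.shape[1]
    x = np.zeros(n)
    w = np.ones(n)
    L = np.linalg.norm(A, 2) ** 2
    for _ in range(outer):
        for _ in range(inner):                          # proximal gradient on DCT coeffs
            c = dct(x, norm='ortho') - dct(A.T @ (A @ x - y), norm='ortho') / L
            c = np.sign(c) * np.maximum(np.abs(c) - lam * w / L, 0.0)
            x = idct(c, norm='ortho')
        w = 1.0 / (np.abs(dct(x, norm='ortho')) + eps)  # smaller weight for large coefficients
    return x
```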
Sparse deconvolution for the large-scale ill-posed inverse problem of impact force reconstruction
NASA Astrophysics Data System (ADS)
Qiao, Baijie; Zhang, Xingwu; Gao, Jiawei; Liu, Ruonan; Chen, Xuefeng
2017-01-01
Most previous regularization methods for solving the inverse problem of force reconstruction aim to minimize the l2-norm of the desired force. However, these traditional regularization methods, such as Tikhonov regularization and truncated singular value decomposition, commonly fail to solve the large-scale ill-posed inverse problem at moderate computational cost. In this paper, taking into account the sparse characteristic of impact forces, the idea of sparse deconvolution is first introduced to the field of impact force reconstruction and a general sparse deconvolution model of impact force is constructed. Second, a novel impact force reconstruction method based on the primal-dual interior point method (PDIPM) is proposed to solve such a large-scale sparse deconvolution model, where minimizing the l2-norm is replaced by minimizing the l1-norm. Meanwhile, the preconditioned conjugate gradient algorithm is used to compute the search direction of PDIPM with high computational efficiency. Finally, two experiments, covering small- to medium-scale single impact force reconstruction and relatively large-scale consecutive impact force reconstruction, are conducted on a composite wind turbine blade and a shell structure to illustrate the advantage of PDIPM. Compared with Tikhonov regularization, PDIPM is more efficient, accurate and robust in both single and consecutive impact force reconstruction.
NASA Astrophysics Data System (ADS)
Dai, Meng-Xue; Chen, Jing-Bo; Cao, Jian
2017-07-01
Full-waveform inversion (FWI) is an ill-posed optimization problem which is sensitive to noise and to the initial model. To alleviate the ill-posedness of the problem, regularization techniques are usually adopted. The ℓ1-norm penalty is a robust regularization method that preserves contrasts and edges. The Orthant-Wise Limited-Memory Quasi-Newton (OWL-QN) method extends the widely used limited-memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) method to ℓ1-regularized optimization problems and inherits the efficiency of L-BFGS. To take advantage of the ℓ1-regularized formulation and of the prior model information obtained from sonic logs and geological knowledge, we implement the OWL-QN algorithm in ℓ1-regularized FWI with prior model information in this paper. Numerical experiments show that this method not only improves the inversion results but also has strong anti-noise ability.
A blind deconvolution method based on L1/L2 regularization prior in the gradient space
NASA Astrophysics Data System (ADS)
Cai, Ying; Shi, Yu; Hua, Xia
2018-02-01
In the process of image restoration, noise can make the restored result differ greatly from the real image. To address this ill-posed problem, a blind deconvolution method based on an L1/L2 regularization prior in the gradient domain is proposed. The method first adds to the prior knowledge a function defined as the ratio of the L1 norm to the L2 norm, and takes this function as the penalty term in the high-frequency domain of the image. The function is then iteratively updated, and the iterative shrinkage-thresholding algorithm is applied to solve for the high-frequency image. Because information in the gradient domain is better suited to estimating the blur kernel, the blur kernel is estimated in the gradient domain. This problem can be solved quickly in the frequency domain via the fast Fourier transform. In addition, to improve the effectiveness of the algorithm, a multi-scale iterative optimization scheme is added. The proposed blind deconvolution method based on L1/L2 regularization priors in the gradient space can obtain a unique and stable solution in the image restoration process, preserving the edges and details of the image while ensuring the accuracy of the results.
Selection of regularization parameter for l1-regularized damage detection
NASA Astrophysics Data System (ADS)
Hou, Rongrong; Xia, Yong; Bao, Yuequan; Zhou, Xiaoqing
2018-06-01
The l1 regularization technique has been developed for structural health monitoring and damage detection by exploiting the sparsity of structural damage. The regularization parameter, which controls the trade-off between data fidelity and solution size in the regularization problem, exerts a crucial effect on the solution. However, the l1 regularization problem has no closed-form solution, and the regularization parameter is usually selected by experience. This study proposes two strategies for selecting the regularization parameter for the l1-regularized damage detection problem. The first method utilizes the residual and solution norms of the optimization problem and ensures that they are both small. The other method is based on the discrepancy principle, which requires that the variance of the discrepancy between the calculated and measured responses be close to the variance of the measurement noise. The two methods are applied to a cantilever beam and a three-story frame. A range of the regularization parameter, rather than one single value, can be determined. When the regularization parameter in this range is selected, the damage can be accurately identified, even for multiple damage scenarios. This range also indicates how sensitive the damage identification problem is to the regularization parameter.
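The discrepancy-principle strategy described above can be illustrated with a generic sketch: scan a grid of candidate regularization parameters, solve the l1 problem for each (here with a plain ISTA solver standing in for whichever solver is actually used), and keep the parameter whose residual variance is closest to the known measurement-noise variance. The solver, the grid and noise_var are illustrative assumptions.

```python
import numpy as np

def ista(A, b, lam, n_iter=400):
    """Plain ISTA for min 0.5*||A x - b||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = x - A.T @ (A @ x - b) / L
        x = np.sign(x) * np.maximum(np.abs(x) - lam / L, 0.0)
    return x

def pick_lambda_discrepancy(A, b, noise_var, lambdas):
    """Choose the regularization parameter whose residual variance is
    closest to the measurement-noise variance (discrepancy principle)."""
    best_lam, best_gap = None, np.inf
    for lam in lambdas:
        gap = abs(np.var(A @ ista(A, b, lam) - b) - noise_var)
        if gap < best_gap:
            best_lam, best_gap = lam, gap
    return best_lam
```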
L1 norm based common spatial patterns decomposition for scalp EEG BCI.
Li, Peiyang; Xu, Peng; Zhang, Rui; Guo, Lanjin; Yao, Dezhong
2013-08-06
Brain-computer interface (BCI) technology is one of the most popular branches of biomedical engineering. It aims to construct a communication channel between disabled persons and auxiliary equipment in order to improve patients' lives. In motor imagery (MI) based BCI, one of the popular feature extraction strategies is Common Spatial Patterns (CSP). In practical BCI situations, scalp EEG inevitably contains outliers and artifacts introduced by ocular activity, head motion or loose electrode contact. Because outliers and artifacts are usually observed with large amplitudes, when CSP is solved in terms of the L2 norm their effect is exaggerated by the squaring of outliers, which ultimately degrades MI-based BCI performance. The L1 norm, in contrast, reduces outlier effects, as has been shown in other application fields such as the EEG inverse problem and face recognition. In this paper, we present a new CSP implementation using the L1 norm technique, instead of the L2 norm, to solve the eigenproblem for spatial filter estimation, with the aim of improving the robustness of CSP to outliers. To evaluate the performance of our method, we applied our method, as well as the standard CSP and the regularized CSP with Tikhonov regularization (TR-CSP), to both a peer BCI dataset with simulated outliers and the dataset from the MI BCI system developed in our group. The McNemar test is used to investigate whether the differences among the three CSPs are statistically significant. The results of both the simulated and real BCI datasets consistently reveal that the proposed method has much higher classification accuracies than the conventional CSP and the TR-CSP. By incorporating L1-norm-based eigendecomposition into Common Spatial Patterns, the proposed approach can effectively improve the robustness of the BCI system to EEG outliers and is thus promising for practical MI BCI applications, where outliers are inevitably introduced into EEG recordings.
Xiong, Naixue; Liu, Ryan Wen; Liang, Maohan; Wu, Di; Liu, Zhao; Wu, Huisi
2017-01-18
Single-image blind deblurring for imaging sensors in the Internet of Things (IoT) is a challenging ill-conditioned inverse problem, which requires regularization techniques to stabilize the image restoration process. The purpose is to recover the underlying blur kernel and latent sharp image from only one blurred image. Under many degraded imaging conditions, the blur kernel could be considered not only spatially sparse, but also piecewise smooth with the support of a continuous curve. By taking advantage of the hybrid sparse properties of the blur kernel, a hybrid regularization method is proposed in this paper to robustly and accurately estimate the blur kernel. The effectiveness of the proposed blur kernel estimation method is enhanced by incorporating both the L1-norm of kernel intensity and the squared L2-norm of the intensity derivative. Once the accurate estimation of the blur kernel is obtained, the original blind deblurring can be simplified to the direct deconvolution of blurred images. To guarantee robust non-blind deconvolution, a variational image restoration model is presented based on the L1-norm data-fidelity term and the second-order total generalized variation (TGV) regularizer. All non-smooth optimization problems related to blur kernel estimation and non-blind deconvolution are effectively handled by using the alternating direction method of multipliers (ADMM)-based numerical methods. Comprehensive experiments on both synthetic and realistic datasets have been implemented to compare the proposed method with several state-of-the-art methods. The experimental comparisons have illustrated the satisfactory imaging performance of the proposed method in terms of quantitative and qualitative evaluations.
X-Ray Phase Imaging for Breast Cancer Detection
2010-09-01
regularization seeks the minimum-norm, least squares solution for phase retrieval. The retrieval result with Tikhonov regularization is still unsatisfactory ... of norm, that can effectively reflect the accuracy of the retrieved data as an image, if ‖δI_{k+1} − δI_k‖ is less than a predefined threshold value β ... pointed out that the proper norm for images is the total variation (TV) norm, which is the L1 norm of the gradient of the image function, and not the
2016-11-22
structure of the graph, we replace the ℓ1-norm by the nonconvex Capped-ℓ1 norm, and obtain the Generalized Capped-ℓ1 regularized logistic regression ... X. M. Yuan. Linearized augmented Lagrangian and alternating direction methods for nuclear norm minimization. Mathematics of Computation, 82(281):301 ... better approximations of the ℓ0-norm theoretically and computationally beyond the ℓ1-norm, for example, the compressive sensing (Xiao et al., 2011). The
Improving Generalization Based on l1-Norm Regularization for EEG-Based Motor Imagery Classification
Zhao, Yuwei; Han, Jiuqi; Chen, Yushu; Sun, Hongji; Chen, Jiayun; Ke, Ang; Han, Yao; Zhang, Peng; Zhang, Yi; Zhou, Jin; Wang, Changyong
2018-01-01
Multichannel electroencephalography (EEG) is widely used in typical brain-computer interface (BCI) systems. In general, a number of parameters are essential for an EEG classification algorithm due to the redundant features involved in EEG signals. However, the generalization of an EEG method is often adversely affected by the model complexity, which is closely tied to its number of undetermined parameters, further leading to heavy overfitting. To decrease the complexity and improve the generalization of the EEG method, we present a novel l1-norm-based approach to combine the decision values obtained from each EEG channel directly. By extracting the information from different channels on independent frequency bands (FB) with l1-norm regularization, the proposed method fits the training data with far fewer parameters than common spatial pattern (CSP) methods, thereby reducing overfitting. Moreover, an effective and efficient solution to minimize the optimization objective is proposed. The experimental results on dataset IVa of BCI competition III and dataset I of BCI competition IV show that the proposed method achieves high classification accuracy and increases generalization performance for the classification of MI EEG. As the training set ratio decreases from 80 to 20%, the average classification accuracy on the two datasets changes from 85.86 and 86.13% to 84.81 and 76.59%, respectively. The classification performance and generalization of the proposed method contribute to the practical application of MI-based BCI systems. PMID:29867307
2016-05-01
norm does not capture the geometry completely. The L1−L2 in (c) does a better job than TV, while L1 in (b) and L1−0.5L2 in (d) capture the squares most ... and isotropic total variation (TV) norms into a relaxed formulation of the two-phase Mumford-Shah (MS) model for image segmentation. We show ... results exceeding those obtained by the MS model when using the standard TV norm to regularize partition boundaries. In particular, examples illustrating
Low-illumination image denoising method for wide-area search of nighttime sea surface
NASA Astrophysics Data System (ADS)
Song, Ming-zhu; Qu, Hong-song; Zhang, Gui-xiang; Tao, Shu-ping; Jin, Guang
2018-05-01
In order to suppress complex mixed noise in low-illumination images for wide-area search of the nighttime sea surface, a model based on total variation (TV) and split Bregman is proposed in this paper. A fidelity term based on the L1 norm and a fidelity term based on the L2 norm are designed to account for the differences between noise types, and a regularizer mixing first-order TV and second-order TV is designed to balance the influence of detail information, such as texture and edges, in sea-surface images. The final detection result is obtained by combining, through the wavelet transform, the high-frequency component solved from the L1-norm term and the low-frequency component solved from the L2-norm term. The experimental results show that the proposed denoising model performs very well for artificially degraded and low-illumination images, and its image quality assessment indices are superior to those of the compared models.
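As background for the TV-plus-split-Bregman machinery this model builds on, the following minimal 1-D sketch shows the canonical split Bregman iteration for anisotropic TV denoising with a single L2 fidelity term and a first-order TV regularizer. It is not the paper's mixed L1/L2-fidelity, first-plus-second-order TV model; lam, gamma and the toy signal are illustrative assumptions.

```python
import numpy as np

def tv_denoise_split_bregman_1d(f, lam=0.5, gamma=2.0, n_iter=100):
    """Split Bregman for min_u 0.5*||u - f||^2 + lam*||D u||_1 (1-D,
    anisotropic TV), with auxiliary variable d = D u and Bregman variable b."""
    n = f.size
    D = np.diff(np.eye(n), axis=0)                   # forward-difference operator
    M = np.eye(n) + gamma * D.T @ D                  # u-update system matrix (fixed)
    d = np.zeros(n - 1); b = np.zeros(n - 1); u = f.copy()
    for _ in range(n_iter):
        u = np.linalg.solve(M, f + gamma * D.T @ (d - b))
        Du = D @ u
        d = np.sign(Du + b) * np.maximum(np.abs(Du + b) - lam / gamma, 0.0)
        b = b + Du - d                               # Bregman update
    return u

# piecewise-constant toy signal with Gaussian noise plus a few impulses
rng = np.random.default_rng(2)
f = np.repeat([0.0, 1.0, 0.3], 60) + 0.05 * rng.standard_normal(180)
f[[20, 90, 150]] += 1.5
u = tv_denoise_split_bregman_1d(f)
```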
Sparse regularization for force identification using dictionaries
NASA Astrophysics Data System (ADS)
Qiao, Baijie; Zhang, Xingwu; Wang, Chenxi; Zhang, Hang; Chen, Xuefeng
2016-04-01
The classical function expansion method based on minimizing the l2-norm of the response residual employs various basis functions to represent the unknown force. Its difficulty lies in determining the optimum number of basis functions. Considering the sparsity of the force in the time domain or in another basis space, we develop a general sparse regularization method based on minimizing the l1-norm of the coefficient vector of the basis functions. The number of basis functions is adaptively determined by minimizing the number of nonzero components in the coefficient vector during the sparse regularization process. First, according to the profile of the unknown force, the dictionary composed of basis functions is determined. Second, a sparsity-promoting convex optimization model for force identification is constructed. Third, given the transfer function and the operational response, sparse reconstruction by separable approximation (SpaRSA) is developed to solve the sparse regularization problem of force identification. Finally, experiments including the identification of impact and harmonic forces are conducted on a cantilever thin plate structure to illustrate the effectiveness and applicability of SpaRSA. Besides the Dirac dictionary, three other sparse dictionaries, including Db6 wavelets, Sym4 wavelets and cubic B-spline functions, can also accurately identify both single and double impact forces from highly noisy responses in a sparse representation frame. The discrete cosine functions can also successfully reconstruct harmonic forces, including sinusoidal, square and triangular forces. Conversely, the traditional Tikhonov regularization method with the L-curve criterion fails to identify both the impact and harmonic forces in these cases.
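A stripped-down sketch of a SpaRSA-style iteration is given below: a Barzilai-Borwein step length combined with soft thresholding for an l1-regularized least-squares model. The full method also includes an acceptance/safeguard test that is omitted here; Phi (transfer function times dictionary), y and the parameters are illustrative assumptions.

```python
import numpy as np

def sparsa_sketch(Phi, y, lam=0.05, n_iter=200, a_min=1e-4, a_max=1e4):
    """Simplified SpaRSA-style solver for min 0.5*||Phi c - y||^2 + lam*||c||_1:
    soft thresholding with a Barzilai-Borwein step, no acceptance test."""
    c = np.zeros(Phi.shape[1])
    g = Phi.T @ (Phi @ c - y)
    alpha = 1.0                                      # initial inverse step length
    for _ in range(n_iter):
        z = c - g / alpha
        c_new = np.sign(z) * np.maximum(np.abs(z) - lam / alpha, 0.0)
        g_new = Phi.T @ (Phi @ c_new - y)
        dc, dg = c_new - c, g_new - g
        if dc @ dc > 0:
            alpha = np.clip((dc @ dg) / (dc @ dc), a_min, a_max)  # BB update
        c, g = c_new, g_new
    return c
```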
Sparse Coding and Counting for Robust Visual Tracking
Liu, Risheng; Wang, Jing; Shang, Xiaoke; Wang, Yiyang; Su, Zhixun; Cai, Yu
2016-01-01
In this paper, we propose a novel sparse coding and counting method under a Bayesian framework for visual tracking. In contrast to existing methods, the proposed method employs a combination of the L0 and L1 norms to regularize the linear coefficients of an incrementally updated linear basis. The sparsity constraint enables the tracker to handle difficult challenges, such as occlusion or image corruption, effectively. To achieve real-time processing, we propose a fast and efficient numerical algorithm for solving the proposed model. Although it is an NP-hard problem, the proposed accelerated proximal gradient (APG) approach is guaranteed to converge to a solution quickly. In addition, we provide a closed-form solution for the combined L0 and L1 regularized representation to obtain better sparsity. Experimental results on challenging video sequences demonstrate that the proposed method achieves state-of-the-art results in both accuracy and speed. PMID:27992474
Arbitrary norm support vector machines.
Huang, Kaizhu; Zheng, Danian; King, Irwin; Lyu, Michael R
2009-02-01
Support vector machines (SVMs) are state-of-the-art classifiers. Typically the L2-norm or the L1-norm is adopted as the regularization term in SVMs, while other norm-based SVMs, for example the L0-norm SVM or even the L∞-norm SVM, are rarely seen in the literature. The major reason is that the L0-norm describes a discontinuous and nonconvex term, leading to a combinatorially NP-hard optimization problem. In this letter, motivated by Bayesian learning, we propose a novel framework that can implement arbitrary norm-based SVMs in polynomial time. One significant feature of this framework is that only a sequence of sequential minimal optimization problems needs to be solved, thus making it practical in many real applications. The proposed framework is important in the sense that Bayesian priors can be efficiently plugged into most learning methods without knowing their explicit form. Hence, this builds a connection between Bayesian learning and kernel machines. We derive the theoretical framework, demonstrate how our approach works on the L0-norm SVM as a typical example, and perform a series of experiments to validate its advantages. Experimental results on nine benchmark data sets are very encouraging. The implemented L0-norm SVM is competitive with or even better than the standard L2-norm SVM in terms of accuracy, but with a reduced number of support vectors (9.46% fewer on average). When compared with another sparse model, the relevance vector machine, our proposed algorithm also demonstrates better sparsity, with a training speed over seven times faster.
Wu, Junfeng; Dai, Fang; Hu, Gang; Mou, Xuanqin
2018-04-18
Excessive radiation exposure in computed tomography (CT) scans increases the chance of developing cancer and has become a major clinical concern. Recently, statistical iterative reconstruction (SIR) with l0-norm dictionary learning regularization has been developed to reconstruct CT images from low-dose and few-view datasets in order to reduce the radiation dose. Nonetheless, the sparse regularization term adopted in this approach is the l0-norm, which cannot guarantee the global convergence of the algorithm. To address this problem, in this study we introduced the l1-norm dictionary learning penalty into the SIR framework for low-dose CT image reconstruction, and developed an alternating minimization algorithm to minimize the associated objective function, which transforms the CT image reconstruction problem into a sparse coding subproblem and an image updating subproblem. During the image updating process, an efficient model function approach based on the balancing principle is applied to choose the regularization parameters. The proposed alternating minimization algorithm was evaluated first using real projection data of a sheep lung CT perfusion and then using numerical simulations based on a sheep lung CT image and a chest image. Both visual assessment and quantitative comparison in terms of root mean square error (RMSE) and the structural similarity (SSIM) index demonstrated that the new image reconstruction algorithm yielded performance similar to that of the l0-norm dictionary learning penalty and outperformed the conventional filtered backprojection (FBP) and total variation (TV) minimization algorithms.
Molecular cancer classification using a meta-sample-based regularized robust coding method.
Wang, Shu-Lin; Sun, Liuchao; Fang, Jianwen
2014-01-01
Previous studies have demonstrated that machine learning based molecular cancer classification using gene expression profiling (GEP) data is promising for the clinical diagnosis and treatment of cancer. Novel classification methods with high efficiency and prediction accuracy are still needed to deal with the high dimensionality and small sample size of typical GEP data. Recently, the sparse representation (SR) method has been successfully applied to cancer classification. Nevertheless, its efficiency needs to be improved when analyzing large-scale GEP data. In this paper we present meta-sample-based regularized robust coding classification (MRRCC), a novel and effective cancer classification technique that combines the idea of the meta-sample-based cluster method with the regularized robust coding (RRC) method. It assumes that the coding residual and the coding coefficients are respectively independent and identically distributed. Similar to meta-sample-based SR classification (MSRC), MRRCC extracts a set of meta-samples from the training samples, and then encodes a testing sample as a sparse linear combination of these meta-samples. The representation fidelity is measured by the l2-norm or l1-norm of the coding residual. Extensive experiments on publicly available GEP datasets demonstrate that the proposed method is more efficient, while its prediction accuracy is equivalent to that of existing MSRC-based methods and better than other state-of-the-art dimension-reduction-based methods.
Human action recognition with group lasso regularized-support vector machine
NASA Astrophysics Data System (ADS)
Luo, Huiwu; Lu, Huanzhang; Wu, Yabei; Zhao, Fei
2016-05-01
The bag-of-visual-words (BOVW) and Fisher kernel are two popular models in human action recognition, and the support vector machine (SVM) is the most commonly used classifier for the two models. We show two kinds of group structure in the feature representations constructed by BOVW and the Fisher kernel, respectively; the structural information of a feature representation can be seen as a prior for the classifier and can improve its performance, as has been verified in several areas. However, the standard SVM employs L2-norm regularization in its learning procedure, which penalizes each variable individually and cannot express the structural information of the feature representation. We replace the L2-norm regularization with group lasso regularization in the standard SVM, and a group lasso regularized support vector machine (GLRSVM) is proposed. Then, we embed the group structural information of the feature representation into the GLRSVM. Finally, we introduce an algorithm to solve the optimization problem of the GLRSVM by the alternating direction method of multipliers. The experiments evaluated on the KTH, YouTube, and Hollywood2 datasets show that our method achieves promising results and improves on the state-of-the-art methods on the KTH and YouTube datasets.
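The key algorithmic ingredient here, group lasso regularization, can be illustrated with a generic proximal-gradient sketch in which the group-wise prox is block soft-thresholding. A squared loss stands in for the GLRSVM's hinge loss and for the paper's ADMM solver; the grouping, X, y and lam are illustrative assumptions.

```python
import numpy as np

def prox_group_lasso(w, groups, t):
    """Block soft-thresholding: prox of t * sum_g ||w_g||_2."""
    out = w.copy()
    for g in groups:
        ng = np.linalg.norm(w[g])
        out[g] = max(0.0, 1.0 - t / ng) * w[g] if ng > 0 else 0.0
    return out

def group_lasso_linear(X, y, groups, lam=0.1, n_iter=300):
    """Proximal gradient for min 0.5*||X w - y||^2 + lam * sum_g ||w_g||_2."""
    L = np.linalg.norm(X, 2) ** 2
    w = np.zeros(X.shape[1])
    for _ in range(n_iter):
        w = w - X.T @ (X @ w - y) / L
        w = prox_group_lasso(w, groups, lam / L)
    return w

# hypothetical grouping: four blocks of five feature dimensions each
groups = [np.arange(i, i + 5) for i in range(0, 20, 5)]
```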
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, L; Tan, S; Lu, W
Purpose: To propose a new variational method which couples image restoration with tumor segmentation for PET images using multiple regularizations. Methods: Partial volume effect (PVE) is a major degrading factor impacting tumor segmentation accuracy in PET imaging. The existing segmentation methods usually need prior calibrations to compensate for PVE, and they are highly system-dependent. Taking into account that image restoration and segmentation can promote each other and are tightly coupled, we proposed a variational method to solve the two problems together. Our method integrated total variation (TV) semi-blind deconvolution and Mumford-Shah (MS) segmentation. The TV norm was used on edges to protect the edge information, and the L2 norm was used to avoid the staircase effect in non-edge areas. The blur kernel was constrained to a Gaussian model parameterized by its variance, and we assumed that the variances in the X-Y and Z directions are different. The energy functional was iteratively optimized by an alternate minimization algorithm. Segmentation performance was tested on eleven patients with non-Hodgkin's lymphoma, and evaluated by the Dice similarity index (DSI) and classification error (CE). For comparison, seven other widely used methods were also tested and evaluated. Results: The combination of TV and L2 regularizations effectively improved the segmentation accuracy. The average DSI increased by around 0.1 compared with using either the TV or the L2 norm alone. The proposed method was clearly superior to the other tested methods. It has an average DSI and CE of 0.80 and 0.41, while the FCM method (the second best one) has only an average DSI and CE of 0.66 and 0.64. Conclusion: Coupling image restoration and segmentation can handle PVE and thus improves tumor segmentation accuracy in PET. Alternate use of TV and L2 regularizations can further improve the performance of the algorithm. This work was supported in part by National Natural Science Foundation of China (NNSFC), under Grant No. 61375018, and Fundamental Research Funds for the Central Universities, under Grant No. 2012QN086. Wei Lu was supported in part by the National Institutes of Health (NIH) Grant No. R01 CA172638.
Laplace Inversion of Low-Resolution NMR Relaxometry Data Using Sparse Representation Methods
Berman, Paula; Levi, Ofer; Parmet, Yisrael; Saunders, Michael; Wiesman, Zeev
2013-01-01
Low-resolution nuclear magnetic resonance (LR-NMR) relaxometry is a powerful tool that can be harnessed for characterizing constituents in complex materials. Conversion of the relaxation signal into a continuous distribution of relaxation components is an ill-posed inverse Laplace transform problem. The most common numerical method implemented today for dealing with this kind of problem is based on L2-norm regularization. However, sparse representation methods via L1 regularization and convex optimization are a relatively new approach for effective analysis and processing of digital images and signals. In this article, we present a numerical optimization method for analyzing LR-NMR data that includes non-negativity constraints and L1 regularization and that applies the convex optimization solver PDCO, a primal-dual interior method for convex objectives that allows general linear constraints to be treated as linear operators. The integrated approach includes validation of analyses by simulations, testing repeatability of experiments, and validation of the model and its statistical assumptions. The proposed method provides better resolved and more accurate solutions when compared with those suggested by existing tools. © 2013 Wiley Periodicals, Inc. Concepts Magn Reson Part A 42A: 72–88, 2013. PMID:23847452
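The combination described above, L1 regularization plus non-negativity for inverting a Laplace-type kernel, can be sketched with a simple projected ISTA loop: the joint prox of the l1 penalty and the non-negativity constraint is max(x - threshold, 0). PDCO itself is not reproduced; the kernel construction, the relaxation-time grid and the toy two-component signal are illustrative assumptions.

```python
import numpy as np

def nn_l1_laplace_inversion(t, s, T_grid, lam=1e-3, n_iter=2000):
    """Projected ISTA for min_{f >= 0} 0.5*||K f - s||^2 + lam*||f||_1,
    where K is a discretized Laplace-type (multi-exponential) kernel."""
    K = np.exp(-t[:, None] / T_grid[None, :])      # K[i, j] = exp(-t_i / T_j)
    L = np.linalg.norm(K, 2) ** 2
    f = np.zeros(T_grid.size)
    for _ in range(n_iter):
        f = f - K.T @ (K @ f - s) / L
        f = np.maximum(f - lam / L, 0.0)           # joint L1 prox + projection onto f >= 0
    return f

# two-component toy relaxation signal on a logarithmic relaxation-time grid
t = np.linspace(0.001, 1.0, 400)
T_grid = np.logspace(-3, 0.5, 120)
s = 0.7 * np.exp(-t / 0.02) + 0.3 * np.exp(-t / 0.3)
f_hat = nn_l1_laplace_inversion(t, s, T_grid)
```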
Poisson image reconstruction with Hessian Schatten-norm regularization.
Lefkimmiatis, Stamatios; Unser, Michael
2013-11-01
Poisson inverse problems arise in many modern imaging applications, including biomedical and astronomical ones. The main challenge is to obtain an estimate of the underlying image from a set of measurements degraded by a linear operator and further corrupted by Poisson noise. In this paper, we propose an efficient framework for Poisson image reconstruction, under a regularization approach, which depends on matrix-valued regularization operators. In particular, the employed regularizers involve the Hessian as the regularization operator and Schatten matrix norms as the potential functions. For the solution of the problem, we propose two optimization algorithms that are specifically tailored to the Poisson nature of the noise. These algorithms are based on an augmented-Lagrangian formulation of the problem and correspond to two variants of the alternating direction method of multipliers. Further, we derive a link that relates the proximal map of an l(p) norm with the proximal map of a Schatten matrix norm of order p. This link plays a key role in the development of one of the proposed algorithms. Finally, we provide experimental results on natural and biological images for the task of Poisson image deblurring and demonstrate the practical relevance and effectiveness of the proposed framework.
An experimental clinical evaluation of EIT imaging with ℓ1 data and image norms.
Mamatjan, Yasin; Borsic, Andrea; Gürsoy, Doga; Adler, Andy
2013-09-01
Electrical impedance tomography (EIT) produces an image of internal conductivity distributions in a body from current injection and electrical measurements at surface electrodes. Typically, image reconstruction is formulated using regularized schemes in which ℓ2-norms are used for both data misfit and image prior terms. Such a formulation is computationally convenient, but favours smooth conductivity solutions and is sensitive to outliers. Recent studies highlighted the potential of ℓ1-norm and provided the mathematical basis to improve image quality and robustness of the images to data outliers. In this paper, we (i) extended a primal-dual interior point method (PDIPM) algorithm to 2.5D EIT image reconstruction to solve ℓ1 and mixed ℓ1/ℓ2 formulations efficiently, (ii) evaluated the formulation on clinical and experimental data, and (iii) developed a practical strategy to select hyperparameters using the L-curve which requires minimum user-dependence. The PDIPM algorithm was evaluated using clinical and experimental scenarios on human lung and dog breathing with known electrode errors, which requires a rigorous regularization and causes the failure of reconstruction with an ℓ2-norm solution. The results showed that an ℓ1 solution is not only more robust to unavoidable measurement errors in a clinical setting, but it also provides high contrast resolution on organ boundaries.
Estimates of the Modeling Error of the α -Models of Turbulence in Two and Three Space Dimensions
NASA Astrophysics Data System (ADS)
Dunca, Argus A.
2017-12-01
This report investigates the convergence rate of the weak solutions w^α of the Leray-α, modified Leray-α, Navier-Stokes-α and zeroth ADM turbulence models to a weak solution u of the Navier-Stokes equations. It is assumed that this weak solution u of the NSE belongs to the space L^4(0, T; H^1). It is shown that under this regularity condition the error u − w^α is O(α) in the norms L^2(0, T; H^1) and L^∞(0, T; L^2), thus improving related known results. It is also shown that the averaged error \overline{u} − \overline{w^α} is of higher order, O(α^{1.5}), in the same norms; therefore the α-regularizations considered herein approximate filtered flow structures better than the exact (unfiltered) flow velocities.
Visual tracking based on the sparse representation of the PCA subspace
NASA Astrophysics Data System (ADS)
Chen, Dian-bing; Zhu, Ming; Wang, Hui-li
2017-09-01
We construct a collaborative model of the sparse representation and the subspace representation. First, we represent the tracking target in the principal component analysis (PCA) subspace, and then we employ an L1 regularization to restrict the sparsity of the residual term, an L2 regularization term to restrict the sparsity of the representation coefficients, and an L2 norm to restrict the distance between the reconstruction and the target. We then implement the algorithm in the particle filter framework. Furthermore, an iterative method is presented to obtain the global minimum of the residual and the coefficients. Finally, an alternative template update scheme is adopted to avoid the tracking drift caused by inaccurate updates. In the experiments, we test the algorithm on 9 sequences and compare the results with 5 state-of-the-art methods. According to the results, we can conclude that our algorithm is more robust than the other methods.
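The collaborative objective sketched in this abstract (PCA-subspace reconstruction plus a sparse residual) can be minimized by simple alternating updates: a ridge step for the subspace coefficients and soft thresholding for the residual. This is a hedged illustration; the exact weighting of the terms in the paper's objective and its particle-filter integration are not reproduced, and U, z, lam and gamma are assumptions.

```python
import numpy as np

def pca_sparse_residual(U, z, lam=0.1, gamma=0.05, n_iter=50):
    """Alternating minimization of
    0.5*||z - U c - e||^2 + gamma*||c||^2 + lam*||e||_1,
    where U spans a PCA subspace and e is a sparse residual (e.g. occlusion)."""
    n, k = U.shape
    e = np.zeros(n)
    G = np.linalg.inv(U.T @ U + 2.0 * gamma * np.eye(k)) @ U.T   # ridge solve operator
    for _ in range(n_iter):
        c = G @ (z - e)                                          # coefficient (ridge) step
        r = z - U @ c
        e = np.sign(r) * np.maximum(np.abs(r) - lam, 0.0)        # sparse residual step
    return c, e
```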
NASA Astrophysics Data System (ADS)
Zhong, Qiu-Xiang; Wu, Chuan-Sheng; Shu, Qiao-Ling; Liu, Ryan Wen
2018-04-01
Image deblurring under impulse noise is a typical ill-posed problem which requires regularization methods to guarantee high-quality imaging. An L1-norm data-fidelity term and a total variation (TV) regularizer have been combined to form a popular regularization method. However, the TV-regularized variational image deblurring model often suffers from staircase-like artifacts, leading to image quality degradation. To enhance image quality, the detail-preserving total generalized variation (TGV) was introduced to replace TV and eliminate the undesirable artifacts. The resulting nonconvex optimization problem was effectively solved using the alternating direction method of multipliers (ADMM). In addition, an automatic method for selecting spatially adapted regularization parameters was proposed to further improve deblurring performance. Our proposed image deblurring framework is able to remove blurring and impulse noise effects while maintaining image edge details. Comprehensive experiments have been conducted to demonstrate the superior performance of our proposed method over several state-of-the-art image deblurring methods.
Accelerated Edge-Preserving Image Restoration Without Boundary Artifacts
Matakos, Antonios; Ramani, Sathish; Fessler, Jeffrey A.
2013-01-01
To reduce blur in noisy images, regularized image restoration methods have been proposed that use non-quadratic regularizers (like l1 regularization or total-variation) that suppress noise while preserving edges in the image. Most of these methods assume a circulant blur (periodic convolution with a blurring kernel) that can lead to wraparound artifacts along the boundaries of the image due to the implied periodicity of the circulant model. Using a non-circulant model could prevent these artifacts at the cost of increased computational complexity. In this work we propose to use a circulant blur model combined with a masking operator that prevents wraparound artifacts. The resulting model is non-circulant, so we propose an efficient algorithm using variable splitting and augmented Lagrangian (AL) strategies. Our variable splitting scheme, when combined with the AL framework and alternating minimization, leads to simple linear systems that can be solved non-iteratively using FFTs, eliminating the need for more expensive CG-type solvers. The proposed method can also efficiently tackle a variety of convex regularizers including edge-preserving (e.g., total-variation) and sparsity promoting (e.g., l1 norm) regularizers. Simulation results show fast convergence of the proposed method, along with improved image quality at the boundaries where the circulant model is inaccurate. PMID:23372080
Generalizations of Tikhonov's regularized method of least squares to non-Euclidean vector norms
NASA Astrophysics Data System (ADS)
Volkov, V. V.; Erokhin, V. I.; Kakaev, V. V.; Onufrei, A. Yu.
2017-09-01
Tikhonov's regularized method of least squares and its generalizations to non-Euclidean norms, including polyhedral, are considered. The regularized method of least squares is reduced to mathematical programming problems obtained by "instrumental" generalizations of the Tikhonov lemma on the minimal (in a certain norm) solution of a system of linear algebraic equations with respect to an unknown matrix. Further studies are needed for problems concerning the development of methods and algorithms for solving reduced mathematical programming problems in which the objective functions and admissible domains are constructed using polyhedral vector norms.
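For reference, the classical (Euclidean) Tikhonov-regularized least-squares problem that these generalizations start from has a simple closed form; once the Euclidean norms are replaced by polyhedral norms (such as l1 or l∞), the problem instead becomes a linear or quadratic program, as the abstract notes. The snippet below only shows the Euclidean baseline; A, b and alpha are illustrative.

```python
import numpy as np

def tikhonov_least_squares(A, b, alpha=1e-2):
    """Closed-form Tikhonov-regularized least squares:
    argmin_x ||A x - b||_2^2 + alpha*||x||_2^2 = (A^T A + alpha*I)^{-1} A^T b."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ b)
```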
A new weak Galerkin finite element method for elliptic interface problems
Mu, Lin; Wang, Junping; Ye, Xiu; ...
2016-08-26
We introduce and analyze a new weak Galerkin (WG) finite element method in this paper for solving second order elliptic equations with discontinuous coefficients and interfaces. Compared with the existing WG algorithm for solving the same type of problems, the present WG method has a simpler variational formulation and fewer unknowns. Moreover, the new WG algorithm allows the use of finite element partitions consisting of general polytopal meshes and can be easily generalized to high orders. Optimal order error estimates in both H1 and L2 norms are established for the present WG finite element solutions. We conducted extensive numerical experiments in order to examine the accuracy, flexibility, and robustness of the proposed WG interface approach. In solving regular elliptic interface problems, high order convergences are numerically confirmed by using piecewise polynomial basis functions of high degrees. Moreover, the WG method is shown to be able to accommodate very complicated interfaces, due to its flexibility in choosing finite element partitions. Finally, in dealing with challenging problems with low regularities, the piecewise linear WG method is capable of delivering a second order of accuracy in the L∞ norm for both C1 and H2 continuous solutions.
Danchin, Raphaël; Xu, Jiang
2017-04-01
The global existence issue for the isentropic compressible Navier-Stokes equations in the critical regularity framework was addressed in Danchin (Invent Math 141(3):579-614, 2000) more than 15 years ago. However, whether (optimal) time-decay rates could be shown in critical spaces has remained an open question. Here we give a positive answer to that issue not only in the L2 critical framework of Danchin (Invent Math 141(3):579-614, 2000) but also in the general Lp critical framework of Charve and Danchin (Arch Ration Mech Anal 198(1):233-271, 2010), Chen et al. (Commun Pure Appl Math 63(9):1173-1224, 2010) and Haspot (Arch Ration Mech Anal 202(2):427-460, 2011): we show that, under a mild additional decay assumption that is satisfied if, for example, the low frequencies of the initial data are in L^{p/2}(R^d), the Lp norm (in fact the slightly stronger \dot B^0_{p,1} norm) of the critical global solutions decays like t^{-d(1/p - 1/4)} as t → +∞, exactly as first observed by Matsumura and Nishida (Proc Jpn Acad Ser A 55:337-342, 1979) in the case p = 2 and d = 3, for solutions with high Sobolev regularity. Our method relies on refined time-weighted inequalities in the Fourier space, and is likely to be effective for other hyperbolic/parabolic systems that are encountered in fluid mechanics or mathematical physics.
Brain vascular image enhancement based on gradient adjust with split Bregman
Liang, Xiao; Dong, Di; Hui, Hui; Zhang, Liwen; Fang, Mengjie; Tian, Jie
2016-04-01
Light sheet microscopy (LSM) is a high-resolution fluorescence microscopic technique which enables clear observation of the mouse brain vascular network with immunostaining. However, micro-vessels are stained with few fluorescence antibodies and their signals are much weaker than those of large vessels, which makes micro-vessels appear unclear in LSM images. In this work, we developed a vascular image enhancement method to enhance micro-vessel details, which should be useful for vessel statistics analysis. Since the gradient describes the edge information of the vessel, the main idea of our method is to increase the gradient values of the enhanced image to improve the micro-vessel contrast. Our method contains two steps: 1) calculate the gradient image of the LSM image, amplify high gradient values of the original image to enhance the vessel edges and suppress low gradient values to remove noise; then formulate a new L1-norm regularization optimization problem to find an image with the expected gradient while keeping the main structure information of the original image; 2) use the split Bregman iteration method to solve the L1-norm regularization problem and generate the final enhanced image. The main advantage of the split Bregman method is that it has both fast convergence and low memory cost. In order to verify the effectiveness of our method, we applied it to a series of mouse brain vascular images acquired from a commercial LSM system in our lab. The experimental results showed that our method could greatly enhance micro-vessel edges which were unclear in the original images.
Smoothed low rank and sparse matrix recovery by iteratively reweighted least squares minimization.
Lu, Canyi; Lin, Zhouchen; Yan, Shuicheng
2015-02-01
This paper presents a general framework for solving the low-rank and/or sparse matrix minimization problems, which may involve multiple nonsmooth terms. The iteratively reweighted least squares (IRLS) method is a fast solver, which smooths the objective function and minimizes it by alternately updating the variables and their weights. However, the traditional IRLS can only solve a sparse-only or low-rank-only minimization problem with squared loss or an affine constraint. This paper generalizes IRLS to solve joint/mixed low-rank and sparse minimization problems, which are essential formulations for many tasks. As a concrete example, we solve the Schatten-p norm and l2,q-norm regularized low-rank representation problem by IRLS, and theoretically prove that the derived solution is a stationary point (globally optimal if p,q ≥ 1). Our convergence proof of IRLS is more general than the previous one, which depends on the special properties of the Schatten-p norm and l2,q-norm. Extensive experiments on both synthetic and real data sets demonstrate that our IRLS is much more efficient.
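The reweighting idea behind IRLS is easiest to see on the simplest sparse case. The sketch below is a minimal illustration only, not the authors' joint low-rank/sparse solver: it approximates the non-smooth penalty lam*||x||_1 by a weighted quadratic term and alternates between a weighted ridge solve and a weight update (problem sizes and names are made up).

```python
import numpy as np

def irls_l1_least_squares(A, b, lam=0.1, n_iter=50, eps=1e-8):
    """Approximately minimize 0.5*||Ax - b||_2^2 + lam*||x||_1 by IRLS:
    |x_i| is majorized using the current iterate, giving a weighted ridge step."""
    x = np.linalg.lstsq(A, b, rcond=None)[0]      # start from the minimum-norm LS solution
    AtA, Atb = A.T @ A, A.T @ b
    for _ in range(n_iter):
        w = 1.0 / np.maximum(np.abs(x), eps)       # weights from the current iterate
        x = np.linalg.solve(AtA + lam * np.diag(w), Atb)
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.standard_normal((40, 100))
    x_true = np.zeros(100); x_true[:5] = rng.standard_normal(5)
    b = A @ x_true + 0.01 * rng.standard_normal(40)
    x_hat = irls_l1_least_squares(A, b, lam=0.05)
    print("entries above 1e-3:", int(np.sum(np.abs(x_hat) > 1e-3)))
```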
Yu, Baihui; Zhao, Ziran; Wang, Xuewu; Wu, Dufan; Zeng, Zhi; Zeng, Ming; Wang, Yi; Cheng, Jianping
2016-01-01
The Tsinghua University MUon Tomography facilitY (TUMUTY) has been built and is utilized to reconstruct special objects with complex structure. Since fine images are required, the conventional Maximum Likelihood Scattering and Displacement (MLSD) algorithm is employed. However, due to the statistical characteristics of muon tomography and the data incompleteness, the reconstruction is always unstable and accompanied by severe noise. In this paper, we propose a Maximum a Posteriori (MAP) algorithm for muon tomography regularization, where an edge-preserving prior on the scattering density image is introduced into the objective function. The prior takes the lp norm (p>0) of the image gradient magnitude, where p=1 and p=2 correspond to the well-known total-variation (TV) and Gaussian priors, respectively. The optimization transfer principle is utilized to minimize the objective function in a unified framework. At each iteration the problem is transferred to solving a cubic equation through paraboloidal surrogating. To validate the method, the French Test Object (FTO) is imaged by both numerical simulation and TUMUTY. The proposed algorithm is used for the reconstruction, where different norms are studied in detail, including l2, l1, l0.5, and an l2-0.5 mixture norm. Compared with the MLSD method, MAP achieves better image quality in both structure preservation and noise reduction. Furthermore, compared with the previous work where only a one-dimensional image was acquired, we achieve relatively clear three-dimensional images of the FTO, in which the inner air hole and the tungsten shell are visible.
A ℓ2, 1 norm regularized multi-kernel learning for false positive reduction in Lung nodule CAD.
Cao, Peng; Liu, Xiaoli; Zhang, Jian; Li, Wei; Zhao, Dazhe; Huang, Min; Zaiane, Osmar
2017-03-01
The aim of this paper is to describe a novel algorithm for false positive reduction in lung nodule Computer Aided Detection (CAD). In this paper, we describe a new CT lung CAD method which aims to detect solid nodules. Specifically, we propose a multi-kernel classifier with an ℓ2,1 norm regularizer for heterogeneous feature fusion and selection at the feature subset level, and design two efficient strategies to optimize the kernel weights in the non-smooth ℓ2,1 regularized multiple kernel learning algorithm. The first optimization algorithm adapts a proximal gradient method for solving the ℓ2,1 norm of kernel weights, and uses an accelerated method based on FISTA; the second one employs an iterative scheme based on an approximate gradient descent method. The results demonstrate that the FISTA-style accelerated proximal descent method is efficient for the ℓ2,1 norm formulation of multiple kernel learning, with a theoretical guarantee on the convergence rate. Moreover, the experimental results demonstrate the effectiveness of the proposed methods in terms of the geometric mean (G-mean) and the area under the ROC curve (AUC), and show that they significantly outperform the competing methods. The proposed approach exhibits some remarkable advantages in both the heterogeneous feature subset fusion and classification phases. Compared with feature-level and decision-level fusion strategies, the proposed ℓ2,1 norm multi-kernel learning algorithm is able to accurately fuse the complementary and heterogeneous feature sets, and automatically prune the irrelevant and redundant feature subsets to form a more discriminative feature set, leading to promising classification performance. Moreover, the proposed algorithm consistently outperforms comparable classification approaches in the literature. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
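The ℓ2,1 regularizer is typically handled in FISTA-style proximal schemes through its closed-form proximal operator, which shrinks whole groups (here thought of as kernel-weight groups) at once. A minimal sketch of that operator under the usual row-grouping convention; the variable names and the toy matrix are illustrative, not taken from the paper.

```python
import numpy as np

def prox_l21(W, tau):
    """Proximal operator of tau * ||W||_{2,1}: row-wise group soft-thresholding.
    Each row is scaled by max(1 - tau / ||row||_2, 0)."""
    row_norms = np.linalg.norm(W, axis=1, keepdims=True)
    scale = np.maximum(1.0 - tau / np.maximum(row_norms, 1e-12), 0.0)
    return scale * W

# Rows whose norm falls below tau are set exactly to zero,
# which is the mechanism that prunes irrelevant groups.
W = np.array([[3.0, 4.0], [0.1, 0.2], [-1.0, 1.0]])
print(prox_l21(W, tau=0.5))
```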
Zheng, Wenming; Lin, Zhouchen; Wang, Haixian
2014-04-01
A novel discriminant analysis criterion is derived in this paper under the theoretical framework of Bayes optimality. In contrast to the conventional Fisher discriminant criterion, the major novelty of the proposed one is the use of the L1 norm rather than the L2 norm, which makes it less sensitive to outliers. With the L1-norm discriminant criterion, we propose a new linear discriminant analysis (L1-LDA) method for the linear feature extraction problem. To solve the L1-LDA optimization problem, we propose an efficient iterative algorithm, in which a novel surrogate convex function is introduced such that the optimization problem in each iteration simply amounts to solving a convex programming problem to which a closed-form solution is guaranteed. Moreover, we also generalize the L1-LDA method to deal with nonlinear robust feature extraction problems via the kernel trick, and thereby propose the L1-norm kernel discriminant analysis (L1-KDA) method. Extensive experiments on simulated and real data sets are conducted to evaluate the effectiveness of the proposed method in comparison with state-of-the-art methods.
1-norm support vector novelty detection and its sparseness.
Zhang, Li; Zhou, WeiDa
2013-12-01
This paper proposes a 1-norm support vector novelty detection (SVND) method and discusses its sparseness. 1-norm SVND is formulated as a linear programming problem and uses two techniques for inducing sparseness, namely the 1-norm regularization and the hinge loss function. We also find two upper bounds on the sparseness of 1-norm SVND, namely the exact support vector (ESV) bound and the kernel Gram matrix rank bound. The ESV bound indicates that 1-norm SVND has a sparser representation model than SVND. The kernel Gram matrix rank bound can loosely estimate the sparseness of 1-norm SVND. Experimental results show that 1-norm SVND is feasible and effective. Copyright © 2013 Elsevier Ltd. All rights reserved.
Joint L1 and Total Variation Regularization for Fluorescence Molecular Tomography
Dutta, Joyita; Ahn, Sangtae; Li, Changqing; Cherry, Simon R.; Leahy, Richard M.
2012-01-01
Fluorescence molecular tomography (FMT) is an imaging modality that exploits the specificity of fluorescent biomarkers to enable 3D visualization of molecular targets and pathways in vivo in small animals. Owing to the high degree of absorption and scattering of light through tissue, the FMT inverse problem is inherently ill-conditioned, making image reconstruction highly susceptible to the effects of noise and numerical errors. Appropriate priors or penalties are needed to facilitate reconstruction and to restrict the search space to a specific solution set. Typically, fluorescent probes are locally concentrated within specific areas of interest (e.g., inside tumors). The commonly used L2 norm penalty generates the minimum energy solution, which tends to be spread out in space. Instead, we present here an approach involving a combination of the L1 and total variation norm penalties, the former to suppress spurious background signals and enforce sparsity and the latter to preserve local smoothness and piecewise constancy in the reconstructed images. We have developed a surrogate-based optimization method for minimizing the joint penalties. The method was validated using both simulated and experimental data obtained from a mouse-shaped phantom mimicking tissue optical properties and containing two embedded fluorescent sources. Fluorescence data were collected using a 3D FMT setup that uses an EMCCD camera for image acquisition and a conical mirror for full-surface viewing. A range of performance metrics was utilized to evaluate our simulation results and to compare our method with the L1, L2, and total variation norm penalty-based approaches. The experimental results were assessed using Dice similarity coefficients computed after co-registration with a CT image of the phantom.
Kim, Eunwoo; Lee, Minsik; Choi, Chong-Ho; Kwak, Nojun; Oh, Songhwai
2015-02-01
Low-rank matrix approximation plays an important role in the area of computer vision and image processing. Most of the conventional low-rank matrix approximation methods are based on the l2 -norm (Frobenius norm) with principal component analysis (PCA) being the most popular among them. However, this can give a poor approximation for data contaminated by outliers (including missing data), because the l2 -norm exaggerates the negative effect of outliers. Recently, to overcome this problem, various methods based on the l1 -norm, such as robust PCA methods, have been proposed for low-rank matrix approximation. Despite the robustness of the methods, they require heavy computational effort and substantial memory for high-dimensional data, which is impractical for real-world problems. In this paper, we propose two efficient low-rank factorization methods based on the l1 -norm that find proper projection and coefficient matrices using the alternating rectified gradient method. The proposed methods are applied to a number of low-rank matrix approximation problems to demonstrate their efficiency and robustness. The experimental results show that our proposals are efficient in both execution time and reconstruction performance unlike other state-of-the-art methods.
Zhang, Hong; Hou, Rui; Yi, Lei; Meng, Juan; Pan, Zhisong; Zhou, Yuhuan
2016-07-01
The accurate identification of encrypted data streams helps to regulate illegal data, detect network attacks and protect users' information. In this paper, a novel encrypted data stream identification algorithm is introduced. The proposed method is based on the randomness characteristics of encrypted data streams. We use an l1-norm regularized logistic regression to improve the sparse representation of randomness features and a Fuzzy Gaussian Mixture Model (FGMM) to improve identification accuracy. Experimental results demonstrate that the method can be adopted as an effective technique for encrypted data stream identification.
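As a rough illustration of the sparse-representation step only (not the authors' pipeline, and with the FGMM stage omitted), an l1-regularized logistic regression can be fit with scikit-learn; the liblinear solver supports the L1 penalty. The feature matrix below is a synthetic stand-in for per-flow randomness statistics.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.standard_normal((500, 32))     # stand-in randomness features per data stream
y = (X[:, :3].sum(axis=1) + 0.5 * rng.standard_normal(500) > 0).astype(int)

clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.5)
clf.fit(X, y)
# the L1 penalty drives many coefficients exactly to zero, i.e. a sparse feature representation
print("non-zero coefficients:", int(np.count_nonzero(clf.coef_)))
```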
SPECT reconstruction using DCT-induced tight framelet regularization
Zhang, Jiahan; Li, Si; Xu, Yuesheng; Schmidtlein, C. R.; Lipson, Edward D.; Feiglin, David H.; Krol, Andrzej
2015-03-01
Wavelet transforms have been successfully applied in many fields of image processing. Yet, to our knowledge, they have never been directly incorporated into the objective function in Emission Computed Tomography (ECT) image reconstruction. Our aim has been to investigate whether the ℓ1-norm of non-decimated discrete cosine transform (DCT) coefficients of the estimated radiotracer distribution could be effectively used as the regularization term for the penalized-likelihood (PL) reconstruction, where a regularizer is used to enforce the image smoothness in the reconstruction. In this study, the ℓ1-norm of the 2D DCT wavelet decomposition was used as a regularization term. The Preconditioned Alternating Projection Algorithm (PAPA), which we proposed in earlier work to solve penalized-likelihood (PL) reconstruction with non-differentiable regularizers, was used to solve this optimization problem. The DCT wavelet decompositions were performed on the transaxial reconstructed images. We reconstructed Monte Carlo simulated SPECT data obtained for a numerical phantom with Gaussian blobs as hot lesions and with a warm random lumpy background. Reconstructed images using the proposed method exhibited better noise suppression and improved lesion conspicuity, compared with images reconstructed using the expectation maximization (EM) algorithm with a Gaussian post filter (GPF). Also, the mean square error (MSE) was smaller, compared with EM-GPF. A critical and challenging aspect of this method was the selection of optimal parameters. In summary, our numerical experiments demonstrated that the ℓ1-norm of the DCT-induced tight framelet regularizer shows promise for SPECT image reconstruction using the PAPA method.
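The workhorse inside an ℓ1 penalty on transform coefficients is soft-thresholding (shrinkage) of those coefficients, which appears as the proximal step inside algorithms such as PAPA. A stand-alone sketch of that step using an orthogonal 2D DCT; this is an illustration only, not the non-decimated framelet or the full penalized-likelihood reconstruction described above.

```python
import numpy as np
from scipy.fft import dctn, idctn

def dct_l1_prox(image, tau):
    """Soft-threshold the 2D DCT coefficients of `image` by tau and invert.
    This is the proximal step associated with tau * ||DCT(image)||_1."""
    coeffs = dctn(image, norm="ortho")
    shrunk = np.sign(coeffs) * np.maximum(np.abs(coeffs) - tau, 0.0)
    return idctn(shrunk, norm="ortho")

noisy = np.random.default_rng(0).normal(size=(64, 64))
smoothed = dct_l1_prox(noisy, tau=0.5)   # small coefficients are zeroed, suppressing noise
```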
Linear discriminant analysis based on L1-norm maximization.
Zhong, Fujin; Zhang, Jiashu
2013-08-01
Linear discriminant analysis (LDA) is a well-known dimensionality reduction technique, which is widely used for many purposes. However, conventional LDA is sensitive to outliers because its objective function is based on the distance criterion using the L2-norm. This paper proposes a simple but effective robust LDA version based on L1-norm maximization, which learns a set of local optimal projection vectors by maximizing the ratio of the L1-norm-based between-class dispersion and the L1-norm-based within-class dispersion. The proposed method is theoretically proved to be feasible and robust to outliers while overcoming the singularity problem of the within-class scatter matrix in conventional LDA. Experiments on artificial datasets, standard classification datasets and three popular image databases demonstrate the efficacy of the proposed method.
Time-domain least-squares migration using the Gaussian beam summation method
Yang, Jidong; Zhu, Hejun; McMechan, George; Yue, Yubo
2018-04-01
With a finite recording aperture, a limited source spectrum and unbalanced illumination, traditional imaging methods are insufficient to generate satisfactory depth profiles with high resolution and high amplitude fidelity. This is because traditional migration uses the adjoint operator of the forward modeling rather than the inverse operator. We propose a least-squares migration approach based on the time-domain Gaussian beam summation, which helps to balance subsurface illumination and improve image resolution. Based on the Born approximation for the isotropic acoustic wave equation, we derive a linear time-domain Gaussian beam modeling operator, which significantly reduces computational costs in comparison with the spectral method. Then, we formulate the corresponding adjoint Gaussian beam migration, as the gradient of an L2-norm waveform misfit function. An L1-norm regularization is introduced to the inversion to enhance the robustness of least-squares migration, and an approximated diagonal Hessian is used as a preconditioner to speed convergence. Synthetic and field data examples demonstrate that the proposed approach improves imaging resolution and amplitude fidelity in comparison with traditional Gaussian beam migration.
Time-domain least-squares migration using the Gaussian beam summation method
Yang, Jidong; Zhu, Hejun; McMechan, George; Yue, Yubo
2018-07-01
With a finite recording aperture, a limited source spectrum and unbalanced illumination, traditional imaging methods are insufficient to generate satisfactory depth profiles with high resolution and high amplitude fidelity. This is because traditional migration uses the adjoint operator of the forward modelling rather than the inverse operator. We propose a least-squares migration approach based on the time-domain Gaussian beam summation, which helps to balance subsurface illumination and improve image resolution. Based on the Born approximation for the isotropic acoustic wave equation, we derive a linear time-domain Gaussian beam modelling operator, which significantly reduces computational costs in comparison with the spectral method. Then, we formulate the corresponding adjoint Gaussian beam migration, as the gradient of an L2-norm waveform misfit function. An L1-norm regularization is introduced to the inversion to enhance the robustness of least-squares migration, and an approximated diagonal Hessian is used as a pre-conditioner to speed convergence. Synthetic and field data examples demonstrate that the proposed approach improves imaging resolution and amplitude fidelity in comparison with traditional Gaussian beam migration.
A comparative study of minimum norm inverse methods for MEG imaging
Leahy, R.M.; Mosher, J.C.; Phillips, J.W.
1996-07-01
The majority of MEG imaging techniques currently in use fall into the general class of (weighted) minimum norm methods. The minimization of a norm is used as the basis for choosing one from a generally infinite set of solutions that provide an equally good fit to the data. This ambiguity in the solution arises from the inherent non-uniqueness of the continuous inverse problem and is compounded by the imbalance between the relatively small number of measurements and the large number of source voxels. Here we present a unified view of the minimum norm methods and describe how we can use Tikhonov regularization to avoid instabilities in the solutions due to noise. We then compare the performance of regularized versions of three well known linear minimum norm methods with the non-linear iteratively reweighted minimum norm method and a Bayesian approach.
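For the underdetermined linear model b = Ax (few sensors, many source voxels), the Tikhonov-regularized minimum-norm estimate has a closed form. A minimal numerical sketch with generic names (no actual MEG lead fields are used here):

```python
import numpy as np

def tikhonov_min_norm(A, b, lam):
    """Damped minimum-norm solution x = A^T (A A^T + lam*I)^{-1} b,
    the minimizer of ||A x - b||_2^2 + lam * ||x||_2^2 written in its
    underdetermined (minimum-norm) form."""
    m = A.shape[0]
    return A.T @ np.linalg.solve(A @ A.T + lam * np.eye(m), b)

rng = np.random.default_rng(1)
A = rng.standard_normal((32, 500))          # 32 sensors, 500 source voxels
x_true = np.zeros(500); x_true[100] = 1.0
b = A @ x_true + 0.05 * rng.standard_normal(32)
x_hat = tikhonov_min_norm(A, b, lam=1.0)
print("strongest reconstructed voxel:", int(np.argmax(np.abs(x_hat))))
```

Increasing lam trades data fit for stability against the measurement noise, which is exactly the role of the Tikhonov term discussed above.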
Discriminant locality preserving projections based on L1-norm maximization.
Zhong, Fujin; Zhang, Jiashu; Li, Defang
2014-11-01
Conventional discriminant locality preserving projection (DLPP) is a dimensionality reduction technique based on manifold learning, which has demonstrated good performance in pattern recognition. However, because its objective function is based on the distance criterion using L2-norm, conventional DLPP is not robust to outliers which are present in many applications. This paper proposes an effective and robust DLPP version based on L1-norm maximization, which learns a set of local optimal projection vectors by maximizing the ratio of the L1-norm-based locality preserving between-class dispersion and the L1-norm-based locality preserving within-class dispersion. The proposed method is proven to be feasible and also robust to outliers while overcoming the small sample size problem. The experimental results on artificial datasets, Binary Alphadigits dataset, FERET face dataset and PolyU palmprint dataset have demonstrated the effectiveness of the proposed method.
Belilovsky, Eugene; Gkirtzou, Katerina; Misyrlis, Michail; Konova, Anna B; Honorio, Jean; Alia-Klein, Nelly; Goldstein, Rita Z; Samaras, Dimitris; Blaschko, Matthew B
2015-12-01
We explore various sparse regularization techniques for analyzing fMRI data, such as the ℓ1 norm (often called LASSO in the context of a squared loss function), elastic net, and the recently introduced k-support norm. Employing sparsity regularization allows us to handle the curse of dimensionality, a problem commonly found in fMRI analysis. In this work we consider sparse regularization in both the regression and classification settings. We perform experiments on fMRI scans from cocaine-addicted as well as healthy control subjects. We show that in many cases, use of the k-support norm leads to better predictive performance, solution stability, and interpretability as compared to other standard approaches. We additionally analyze the advantages of using the absolute loss function versus the standard squared loss which leads to significantly better predictive performance for the regularization methods tested in almost all cases. Our results support the use of the k-support norm for fMRI analysis and on the clinical side, the generalizability of the I-RISA model of cocaine addiction. Copyright © 2015 Elsevier Ltd. All rights reserved.
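The ℓ1 (LASSO) and elastic net penalties discussed above are available directly in scikit-learn; the toy regression below uses synthetic stand-in data rather than fMRI scans, and the k-support norm, which has no stock estimator there, is omitted from this sketch.

```python
import numpy as np
from sklearn.linear_model import Lasso, ElasticNet

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 1000))          # many more features (voxels) than samples
w = np.zeros(1000); w[:10] = 1.0
y = X @ w + 0.1 * rng.standard_normal(100)

lasso = Lasso(alpha=0.05).fit(X, y)
enet = ElasticNet(alpha=0.05, l1_ratio=0.5).fit(X, y)
print("lasso non-zeros:", int(np.count_nonzero(lasso.coef_)))
print("elastic net non-zeros:", int(np.count_nonzero(enet.coef_)))
```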
Bilateral filter regularized accelerated Demons for improved discontinuity preserving registration.
Demirović, D; Šerifović-Trbalić, A; Prljača, N; Cattin, Ph C
2015-03-01
The classical accelerated Demons algorithm uses Gaussian smoothing to penalize oscillatory motion in the displacement fields during registration. This well known method uses the L2 norm for regularization. Whereas the L2 norm is known for producing well behaved smooth deformation fields, it cannot properly deal with discontinuities often seen in the deformation field, as the regularizer cannot differentiate between discontinuities and smooth parts of the motion field. In this paper we propose replacing the Gaussian filter of the accelerated Demons with a bilateral filter. In contrast, the bilateral filter uses information not only from the displacement field but also from the image intensities. In this way we can smooth the motion field depending on image content, as opposed to the classical Gaussian filtering. By proper adjustment of two tunable parameters one can obtain more realistic deformations in the case of discontinuities. The proposed approach was tested on 2D and 3D datasets and showed significant improvements in the Target Registration Error (TRE) for the well known POPI dataset. Despite the increased computational complexity, the improved registration result is justified in particular in abdominal data sets where discontinuities often appear due to sliding organ motion. Copyright © 2014 Elsevier Ltd. All rights reserved.
Error analysis of finite element method for Poisson–Nernst–Planck equations
Sun, Yuzhou; Sun, Pengtao; Zheng, Bin
A priori error estimates of the finite element method for time-dependent Poisson-Nernst-Planck equations are studied in this work. We obtain optimal error estimates in the L∞(H1) and L2(H1) norms and suboptimal error estimates in the L∞(L2) norm with linear elements, and optimal error estimates in the L∞(L2) norm with quadratic or higher-order elements, for both semi- and fully discrete finite element approximations. Numerical experiments are also given to validate the theoretical results.
Jeong, Woodon; Kang, Minji; Kim, Shinwoong; Min, Dong-Joo; Kim, Won-Ki
2015-06-01
Seismic full waveform inversion (FWI) has primarily been based on a least-squares optimization problem for data residuals. However, the least-squares objective function can suffer from its weakness and sensitivity to noise. There have been numerous studies to enhance the robustness of FWI by using robust objective functions, such as l1-norm-based objective functions. However, the l1-norm can suffer from a singularity problem when the residual wavefield is very close to zero. Recently, Student's t distribution has been applied to acoustic FWI to give reasonable results for noisy data. Student's t distribution has an overdispersed density function compared with the normal distribution, and is thus useful for data with outliers. In this study, we investigate the feasibility of Student's t distribution for elastic FWI by comparing its basic properties with those of the l2-norm and l1-norm objective functions and by applying the three methods to noisy data. Our experiments show that the l2-norm is sensitive to noise, whereas the l1-norm and Student's t distribution objective functions give relatively stable and reasonable results for noisy data. When noise patterns are complicated, i.e., due to a combination of missing traces, unexpected outliers, and random noise, FWI based on Student's t distribution gives better results than l1- and l2-norm FWI. We also examine the application of simultaneous-source methods to acoustic FWI based on Student's t distribution. Computing the expectation of the coefficients of gradient and crosstalk noise terms and plotting the signal-to-noise ratio with iteration, we were able to confirm that crosstalk noise is suppressed as the iteration progresses, even when simultaneous-source FWI is combined with Student's t distribution. From our experiments, we conclude that FWI based on Student's t distribution can retrieve subsurface material properties with less distortion from noise than l1- and l2-norm FWI, and the simultaneous-source method can be adopted to improve the computational efficiency of FWI based on Student's t distribution.
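The three objective functions compared in the study differ only in how they score the residual wavefield r = d_obs - d_syn. A schematic comparison of the per-trace misfits is sketched below; the Student's t version is written up to additive constants, and sigma and nu are illustrative scale and degrees-of-freedom parameters, not values from the paper.

```python
import numpy as np

def l2_misfit(r):
    return 0.5 * np.sum(r**2)

def l1_misfit(r):
    return np.sum(np.abs(r))

def student_t_misfit(r, sigma=1.0, nu=5.0):
    # Negative log-likelihood of a Student's t residual model, up to constants.
    # It grows only logarithmically for large residuals, hence the robustness to outliers.
    return 0.5 * (nu + 1.0) * np.sum(np.log1p(r**2 / (nu * sigma**2)))

r = np.array([0.1, -0.2, 0.05, 8.0])   # the last sample mimics an outlier
print(l2_misfit(r), l1_misfit(r), student_t_misfit(r))
```

The single outlier dominates the l2 value but barely moves the Student's t value, which mirrors the behaviour reported for noisy data.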
Cai, Ailong; Li, Lei; Zheng, Zhizhong; Zhang, Hanming; Wang, Linyuan; Hu, Guoen; Yan, Bin
2018-02-01
In medical imaging many conventional regularization methods, such as total variation or total generalized variation, impose strong prior assumptions which can only account for very limited classes of images. A more reasonable sparse representation frame for images is still badly needed. Visually understandable images contain meaningful patterns, and combinations or collections of these patterns can be utilized to form some sparse and redundant representations which promise to facilitate image reconstructions. In this work, we propose and study block matching sparsity regularization (BMSR) and devise an optimization program using BMSR for computed tomography (CT) image reconstruction for an incomplete projection set. The program is built as a constrained optimization, minimizing the L1-norm of the coefficients of the image in the transformed domain subject to data observation and positivity of the image itself. To solve the program efficiently, a practical method based on the proximal point algorithm is developed and analyzed. In order to accelerate the convergence rate, a practical strategy for tuning the BMSR parameter is proposed and applied. The experimental results for various settings, including real CT scanning, have verified the proposed reconstruction method showing promising capabilities over conventional regularization.
Suppressing multiples using an adaptive multichannel filter based on L1-norm
Shi, Ying; Jing, Hongliang; Zhang, Wenwu; Ning, Dezhi
2017-08-01
Adaptive subtraction is an important step in removing surface-related multiples in wave equation-based methods. In this paper, we propose an adaptive multichannel subtraction method based on the L1-norm. We achieve enhanced compensation for the mismatch between the input seismogram and the predicted multiples in terms of amplitude, phase, frequency band, and travel time. Unlike the conventional L2-norm, the proposed method does not rely on the assumption that the primary and the multiples are orthogonal, and it also takes advantage of the fact that the L1-norm is more robust when dealing with outliers. In addition, we propose a frequency band extension via modulation to reconstruct the high frequencies and compensate for the frequency misalignment. We present a parallel computing scheme to accelerate the subtraction algorithm on graphics processing units (GPUs), which significantly reduces the computational cost. The synthetic and field seismic data tests show that the proposed method effectively suppresses the multiples.
Wang, Ya-Xuan; Gao, Ying-Lian; Liu, Jin-Xing; Kong, Xiang-Zhen; Li, Hai-Jun
2017-09-01
Identifying differentially expressed genes from among thousands of genes is a challenging task. Robust principal component analysis (RPCA) is an efficient method for the identification of differentially expressed genes. The RPCA method uses the nuclear norm to approximate the rank function. However, theoretical studies showed that the nuclear norm penalizes all singular values simultaneously, so it may not be the best surrogate for the rank function. The truncated nuclear norm is defined as the sum of the smaller singular values that remain after the largest ones are discarded, and it may achieve a better approximation of the rank function than the nuclear norm. In this paper, a novel method is proposed by replacing the nuclear norm of RPCA with the truncated nuclear norm, which is named robust principal component analysis regularized by the truncated nuclear norm (TRPCA). The method decomposes the observation matrix of genomic data into a low-rank matrix and a sparse matrix. Because the significant genes can be considered as sparse signals, the differentially expressed genes are viewed as sparse perturbation signals. Thus, the differentially expressed genes can be identified according to the sparse matrix. The experimental results on The Cancer Genome Atlas data illustrate that the TRPCA method outperforms other state-of-the-art methods in the identification of differentially expressed genes.
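For reference, the truncated nuclear norm used in place of the nuclear norm is simply the sum of the singular values left after discarding the r largest ones. A small sketch (the value of r and the toy matrix are illustrative):

```python
import numpy as np

def truncated_nuclear_norm(X, r):
    """Sum of all singular values of X except the r largest,
    i.e. ||X||_* minus the sum of the top-r singular values."""
    s = np.linalg.svd(X, compute_uv=False)   # singular values in descending order
    return float(s[r:].sum())

X = np.random.default_rng(0).standard_normal((50, 40))
print(truncated_nuclear_norm(X, r=5))
```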
An L1-norm phase constraint for half-Fourier compressed sensing in 3D MR imaging.
Li, Guobin; Hennig, Jürgen; Raithel, Esther; Büchert, Martin; Paul, Dominik; Korvink, Jan G; Zaitsev, Maxim
2015-10-01
In most half-Fourier imaging methods, explicit phase replacement is used. In combination with parallel imaging or compressed sensing, half-Fourier reconstruction is usually performed in a separate step. The purpose of this paper is to report that integration of half-Fourier reconstruction into the iterative reconstruction minimizes reconstruction errors. The L1-norm phase constraint for half-Fourier imaging proposed in this work is compared with the L2-norm variant of the same algorithm and with several typical half-Fourier reconstruction methods. Half-Fourier imaging with the proposed phase constraint can be seamlessly combined with parallel imaging and compressed sensing to achieve high acceleration factors. In simulations and in in-vivo experiments, half-Fourier imaging with the proposed L1-norm phase constraint enables superior performance both in the reconstruction of image details and in robustness against phase estimation errors. The performance and feasibility of half-Fourier imaging with the proposed L1-norm phase constraint are reported. Its seamless combination with parallel imaging and compressed sensing enables the use of greater acceleration in 3D MR imaging.
Automated ambiguity estimation for VLBI Intensive sessions using L1-norm
Kareinen, Niko; Hobiger, Thomas; Haas, Rüdiger
2016-12-01
Very Long Baseline Interferometry (VLBI) is a space-geodetic technique that is uniquely capable of direct observation of the angle of the Earth's rotation about the Celestial Intermediate Pole (CIP) axis, namely UT1. The daily estimates of the difference between UT1 and Coordinated Universal Time (UTC) provided by the 1-h long VLBI Intensive sessions are essential in providing timely UT1 estimates for satellite navigation systems and orbit determination. In order to produce timely UT1 estimates, efforts have been made to completely automate the analysis of VLBI Intensive sessions. This involves the automatic processing of X- and S-band group delays. These data contain an unknown number of integer ambiguities in the observed group delays. They are introduced as a side-effect of the bandwidth synthesis technique, which is used to combine correlator results from the narrow channels that span the individual bands. In an automated analysis with the c5++ software the standard approach in resolving the ambiguities is to perform a simplified parameter estimation using a least-squares adjustment (L2-norm minimisation). We implement L1-norm as an alternative estimation method in c5++. The implemented method is used to automatically estimate the ambiguities in VLBI Intensive sessions on the Kokee-Wettzell baseline. The results are compared to an analysis set-up where the ambiguity estimation is computed using the L2-norm. For both methods three different weighting strategies for the ambiguity estimation are assessed. The results show that the L1-norm is better at automatically resolving the ambiguities than the L2-norm. The use of the L1-norm leads to a significantly higher number of good quality UT1-UTC estimates with each of the three weighting strategies. The increase in the number of sessions is approximately 5% for each weighting strategy. This is accompanied by smaller post-fit residuals in the final UT1-UTC estimation step.
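L1-norm (least absolute deviations) parameter estimation of the kind used for the ambiguity fit can be posed as a linear program by introducing slack variables for the absolute residuals. A generic SciPy sketch with a toy design matrix, not the c5++ VLBI observation model:

```python
import numpy as np
from scipy.optimize import linprog

def l1_fit(A, b):
    """Solve min_x ||A x - b||_1 as an LP:
    minimize sum(t) subject to -t <= A x - b <= t, with variables z = [x, t]."""
    m, n = A.shape
    c = np.concatenate([np.zeros(n), np.ones(m)])
    A_ub = np.block([[A, -np.eye(m)],        #  A x - t <=  b
                     [-A, -np.eye(m)]])      # -A x - t <= -b
    b_ub = np.concatenate([b, -b])
    bounds = [(None, None)] * n + [(0, None)] * m
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.x[:n]

rng = np.random.default_rng(0)
A = rng.standard_normal((60, 4))
x_true = np.array([1.0, -2.0, 0.5, 3.0])
b = A @ x_true + 0.01 * rng.standard_normal(60)
b[::10] += 5.0                # a few gross outliers, e.g. bad observations
print(l1_fit(A, b))           # close to x_true despite the outliers
```

The insensitivity of the LAD fit to the contaminated observations is the same property that makes the L1-norm preferable for automated ambiguity resolution.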
L2-norm multiple kernel learning and its application to biomedical data fusion
2010-01-01
Background This paper introduces the notion of optimizing different norms in the dual problem of support vector machines with multiple kernels. The selection of norms yields different extensions of multiple kernel learning (MKL) such as L∞, L1, and L2 MKL. In particular, L2 MKL is a novel method that leads to non-sparse optimal kernel coefficients, which is different from the sparse kernel coefficients optimized by the existing L∞ MKL method. In real biomedical applications, L2 MKL may have more advantages than sparse integration methods for thoroughly combining complementary information in heterogeneous data sources. Results We provide a theoretical analysis of the relationship between the L2 optimization of kernels in the dual problem and the L2 coefficient regularization in the primal problem. Understanding the dual L2 problem grants a unified view on MKL and enables us to extend the L2 method to a wide range of machine learning problems. We implement L2 MKL for ranking and classification problems and compare its performance with the sparse L∞ and the averaging L1 MKL methods. The experiments are carried out on six real biomedical data sets and two large scale UCI data sets. L2 MKL yields better performance on most of the benchmark data sets. In particular, we propose a novel L2 MKL least squares support vector machine (LSSVM) algorithm, which is shown to be an efficient and promising classifier for large scale data set processing. Conclusions This paper extends the statistical framework of genomic data fusion based on MKL. Allowing non-sparse weights on the data sources is an attractive option in settings where we believe most data sources to be relevant to the problem at hand and want to avoid a "winner-takes-all" effect seen in L∞ MKL, which can be detrimental to the performance in prospective studies. The notion of optimizing L2 kernels can be straightforwardly extended to ranking, classification, regression, and clustering algorithms. To tackle the computational burden of MKL, this paper proposes several novel LSSVM based MKL algorithms. Systematic comparison on real data sets shows that LSSVM MKL has comparable performance to the conventional SVM MKL algorithms. Moreover, large scale numerical experiments indicate that when cast as semi-infinite programming, LSSVM MKL can be solved more efficiently than SVM MKL. Availability The MATLAB code of algorithms implemented in this paper is downloadable from http://homes.esat.kuleuven.be/~sistawww/bioi/syu/l2lssvm.html.
Regularization of Instantaneous Frequency Attribute Computations
Yedlin, M. J.; Margrave, G. F.; Van Vorst, D. G.; Ben Horin, Y.
2014-12-01
We compare two different methods of computation of a temporally local frequency: 1) a stabilized instantaneous frequency using the theory of the analytic signal; 2) a temporally variant centroid (or dominant) frequency estimated from a time-frequency decomposition. The first method derives from Taner et al (1979) as modified by Fomel (2007) and utilizes the derivative of the instantaneous phase of the analytic signal. The second method computes the power centroid (Cohen, 1995) of the time-frequency spectrum, obtained using either the Gabor or Stockwell transform. Common to both methods is the necessity of division by a diagonal matrix, which requires appropriate regularization. We modify Fomel's (2007) method by explicitly penalizing the roughness of the estimate. Following Farquharson and Oldenburg (2004), we employ both the L-curve and GCV methods to obtain the smoothest model that fits the data in the L2 norm. Using synthetic data, quarry blasts, earthquakes and the DPRK tests, our results suggest that the optimal method depends on the data. One of the main applications for this work is the discrimination between blast events and earthquakes. References: Fomel, Sergey. "Local seismic attributes." Geophysics 72.3 (2007): A29-A33. Cohen, Leon. "Time Frequency Analysis: Theory and Applications." USA: Prentice Hall, 1995. Farquharson, Colin G., and Douglas W. Oldenburg. "A comparison of automatic techniques for estimating the regularization parameter in non-linear inverse problems." Geophysical Journal International 156.3 (2004): 411-425. Taner, M. Turhan, Fulton Koehler, and R. E. Sheriff. "Complex seismic trace analysis." Geophysics 44.6 (1979): 1041-1063.
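The second estimator, a time-variant centroid frequency, is the power-weighted mean frequency of each column of a time-frequency decomposition. A sketch using a standard spectrogram in place of the Gabor or Stockwell transform; the signal and parameters are illustrative only.

```python
import numpy as np
from scipy.signal import spectrogram

fs = 500.0
t = np.arange(0, 4, 1 / fs)
sig = np.sin(2 * np.pi * (20 + 15 * t) * t)       # toy chirp-like signal

f, tt, Sxx = spectrogram(sig, fs=fs, nperseg=128, noverlap=96)
power = Sxx + 1e-12                               # small floor regularizes the division
centroid = (f[:, None] * power).sum(axis=0) / power.sum(axis=0)
print(centroid[:5])                               # centroid frequency as a function of time
```

The small floor added to the power plays the same role as the regularized division by a diagonal matrix noted above: without it, near-silent time slices make the centroid unstable.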
Structure-Based Low-Rank Model With Graph Nuclear Norm Regularization for Noise Removal.
Ge, Qi; Jing, Xiao-Yuan; Wu, Fei; Wei, Zhi-Hui; Xiao, Liang; Shao, Wen-Ze; Yue, Dong; Li, Hai-Bo
2017-07-01
Nonlocal image representation methods, including group-based sparse coding and block-matching 3-D filtering, have shown their great performance in application to low-level tasks. The nonlocal prior is extracted from each group consisting of patches with similar intensities. Grouping patches based on intensity similarity, however, gives rise to disturbance and inaccuracy in estimation of the true images. To address this problem, we propose a structure-based low-rank model with graph nuclear norm regularization. We exploit the local manifold structure inside a patch and group the patches by the distance metric of manifold structure. With the manifold structure information, a graph nuclear norm regularization is established and incorporated into a low-rank approximation model. We then prove that the graph-based regularization is equivalent to a weighted nuclear norm and the proposed model can be solved by a weighted singular-value thresholding algorithm. Extensive experiments on additive white Gaussian noise removal and mixed noise removal demonstrate that the proposed method achieves a better performance than several state-of-the-art algorithms.
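The weighted singular-value thresholding step that the model reduces to has a simple closed form: shrink each singular value by its own weight and keep the positive part. A minimal sketch with uniform toy weights; in the paper the weights come from the graph regularizer rather than being set by hand.

```python
import numpy as np

def weighted_svt(Y, weights, tau=1.0):
    """Weighted singular-value thresholding: U diag(max(s - tau*w, 0)) V^T."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    s_shrunk = np.maximum(s - tau * weights, 0.0)
    return (U * s_shrunk) @ Vt               # columns of U scaled by the shrunk values

Y = np.random.default_rng(0).standard_normal((30, 30))
w = np.ones(30)                              # illustrative uniform weights
Y_lowrank = weighted_svt(Y, w, tau=2.0)
print("rank after thresholding:", np.linalg.matrix_rank(Y_lowrank))
```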
Schuster, Thomas; Hofmann, Bernd; Kaltenbacher, Barbara
2012-10-01
Inverse problems can usually be modelled as operator equations in infinite-dimensional spaces with a forward operator acting between Hilbert or Banach spaces—a formulation which quite often also serves as the basis for defining and analyzing solution methods. The additional amount of structure and geometric interpretability provided by the concept of an inner product has rendered these methods amenable to a convergence analysis, a fact which has led to a rigorous and comprehensive study of regularization methods in Hilbert spaces over the last three decades. However, for numerous problems such as x-ray diffractometry, certain inverse scattering problems and a number of parameter identification problems in PDEs, the reasons for using a Hilbert space setting seem to be based on conventions rather than an appropriate and realistic model choice, so often a Banach space setting would be closer to reality. Furthermore, non-Hilbertian regularization and data fidelity terms incorporating a priori information on solution and noise, such as general Lp-norms, TV-type norms, or the Kullback-Leibler divergence, have recently become very popular. These facts have motivated intensive investigations on regularization methods in Banach spaces, a topic which has emerged as a highly active research field within the area of inverse problems. Meanwhile some of the most well-known regularization approaches, such as Tikhonov-type methods requiring the solution of extremal problems, and iterative ones like the Landweber method, the Gauss-Newton method, as well as the approximate inverse method, have been investigated for linear and nonlinear operator equations in Banach spaces. Convergence with rates has been proven and conditions on the solution smoothness and on the structure of nonlinearity have been formulated. Still, beyond the existing results a large number of challenging open questions have arisen, due to the more involved handling of general Banach spaces and the larger variety of concrete instances with special properties. The aim of this special section is to provide a forum for highly topical ongoing work in the area of regularization in Banach spaces, its numerics and its applications. Indeed, we have been lucky enough to obtain a number of excellent papers both from colleagues who have previously been contributing to this topic and from researchers entering the field due to its relevance in practical inverse problems. We would like to thank all contributors for enabling us to present a high quality collection of papers on topics ranging from various aspects of regularization via efficient numerical solution to applications in PDE models. We give a brief overview of the contributions included in this issue (here ordered alphabetically by first author). In their paper, Iterative regularization with general penalty term—theory and application to L1 and TV regularization, Radu Bot and Torsten Hein provide an extension of the Landweber iteration for linear operator equations in Banach space to general operators in place of the inverse duality mapping, which corresponds to the use of general regularization functionals in variational regularization. The L∞ topology in data space corresponds to the frequently occurring situation of uniformly distributed data noise.
A numerically efficient solution of the resulting Tikhonov regularization problem via a Moreau-Yosida approximation and a semismooth Newton method, along with a δ-free regularization parameter choice rule, is the topic of the paper L∞ fitting for inverse problems with uniform noise by Christian Clason. Extension of convergence rates results from classical source conditions to their generalization via variational inequalities with a priori and a posteriori stopping rules is the main contribution of the paper Regularization of linear ill-posed problems by the augmented Lagrangian method and variational inequalities by Klaus Frick and Markus Grasmair, again in the context of some iterative method. A powerful tool for proving convergence rates of Tikhonov type but also other regularization methods in Banach spaces is the use of assumptions of the type of variational inequalities that combine conditions on solution smoothness (i.e., source conditions in the Hilbert space case) and nonlinearity of the forward operator. In Parameter choice in Banach space regularization under variational inequalities, Bernd Hofmann and Peter Mathé provide results with general error measures and especially study the question of regularization parameter choice. Daijun Jiang, Hui Feng, and Jun Zou consider an application of Banach space ideas in the context of an application problem in their paper Convergence rates of Tikhonov regularizations for parameter identification in a parabolic-elliptic system, namely the identification of a distributed diffusion coefficient in a coupled elliptic-parabolic system. In particular, they show convergence rates of Lp-H1 (variational) regularization for the application under consideration via the use and verification of certain source and nonlinearity conditions. In computational practice, the Lp norm with p close to one is often used as a substitute for the actually sparsity promoting L1 norm. In Norm sensitivity of sparsity regularization with respect to p, Kamil S Kazimierski, Peter Maass and Robin Strehlow consider the question of how sensitive the Tikhonov regularized solution is with respect to p. They do so by computing the derivative via the implicit function theorem, particularly at the crucial value, p=1. Another iterative regularization method in Banach space is considered by Qinian Jin and Linda Stals in Nonstationary iterated Tikhonov regularization for ill-posed problems in Banach spaces. Using a variational formulation and under some smoothness and convexity assumption on the preimage space, they extend the convergence analysis of the well-known iterative Tikhonov method for linear problems in Hilbert space to a more general Banach space framework. Systems of linear or nonlinear operators can be efficiently treated by cyclic iterations, thus several variants of gradient and Newton-type Kaczmarz methods have already been studied in the Hilbert space setting. Antonio Leitão and M Marques Alves in their paper On Landweber-Kaczmarz methods for regularizing systems of ill-posed equations in Banach spaces carry out an extension to Banach spaces for the fundamental Landweber version. The impact of perturbations in the evaluation of the forward operator and its derivative on the convergence behaviour of regularization methods is a practically and highly relevant issue.
It is treated in the paper Convergence rates analysis of Tikhonov regularization for nonlinear ill-posed problems with noisy operators by Shuai Lu and Jens Flemming for variational regularization of nonlinear problems in Banach spaces. In The approximate inverse in action: IV. Semi-discrete equations in a Banach space setting, Thomas Schuster, Andreas Rieder and Frank Schöpfer extend the concept of approximate inverse to the practically and highly relevant situation of finitely many measurements and a general smooth and convex Banach space as preimage space. They devise two approaches for computing the reconstruction kernels required in the method and provide convergence and regularization results. Frank Werner and Thorsten Hohage in Convergence rates in expectation for Tikhonov-type regularization of inverse problems with Poisson data prove convergence rates results for variational regularization with general convex regularization term and the Kullback-Leibler distance as data fidelity term by combining a new result on Poisson distributed data with a deterministic rates analysis. Finally, we would like to thank the Inverse Problems team, especially Joanna Evangelides and Chris Wileman, for their extraordinarily smooth and productive cooperation, as well as Alfred K Louis for his kind support of our initiative.
Mirone, Alessandro; Brun, Emmanuel; Coan, Paola
2014-01-01
X-ray based Phase-Contrast Imaging (PCI) techniques have been demonstrated to enhance the visualization of soft tissues in comparison to conventional imaging methods. Nevertheless the delivered dose as reported in the literature of biomedical PCI applications often equals or exceeds the limits prescribed in clinical diagnostics. The optimization of new computed tomography strategies, which includes the development and implementation of advanced image reconstruction procedures, is thus a key aspect. In this scenario, we implemented a dictionary learning method with a new form of convex functional. This functional contains, in addition to the usual sparsity-inducing and fidelity terms, a new term which forces similarity between overlapping patches in the superimposed regions. The functional depends on two free regularization parameters: a coefficient multiplying the sparsity-inducing L1 norm of the patch basis function coefficients, and a coefficient multiplying the L2 norm of the differences between patches in the overlapping regions. The solution is found by applying the iterative proximal gradient descent method with FISTA acceleration. The gradient is computed by calculating the projection of the solution and its error backprojection at each iterative step. We study the quality of the solution, as a function of the regularization parameters and noise, on synthetic data for which the solution is known a priori. We apply the method to experimental data in the case of Differential Phase Tomography. For this case we use an original approach which consists in using vectorial patches, each patch having two components: one for each gradient component. The resulting algorithm, implemented in the European Synchrotron Radiation Facility tomography reconstruction code PyHST, has proven to be efficient and well-adapted to strongly reduce the required dose and the number of projections in medical tomography.
Robust Ambiguity Estimation for an Automated Analysis of the Intensive Sessions
Kareinen, Niko; Hobiger, Thomas; Haas, Rüdiger
2016-12-01
Very Long Baseline Interferometry (VLBI) is a unique space-geodetic technique that can directly determine the Earth's phase of rotation, namely UT1. The daily estimates of the difference between UT1 and Coordinated Universal Time (UTC) are computed from one-hour long VLBI Intensive sessions. These sessions are essential for providing timely UT1 estimates for satellite navigation systems. To produce timely UT1 estimates, efforts have been made to completely automate the analysis of VLBI Intensive sessions. This requires automated processing of X- and S-band group delays. These data often contain an unknown number of integer ambiguities in the observed group delays. In an automated analysis with the c5++ software the standard approach to resolving the ambiguities is to perform a simplified parameter estimation using a least-squares adjustment (L2-norm minimization). We implement the robust L1-norm as an alternative estimation method in c5++. The implemented method is used to automatically estimate the ambiguities in VLBI Intensive sessions for the Kokee-Wettzell baseline. The results are compared to an analysis setup where the ambiguity estimation is computed using the L2-norm. Additionally, we investigate three alternative weighting strategies for the ambiguity estimation. The results show that in automated analysis the L1-norm resolves ambiguities better than the L2-norm. The use of the L1-norm leads to a significantly higher number of good quality UT1-UTC estimates with each of the three weighting strategies.
Joint Smoothed l₀-Norm DOA Estimation Algorithm for Multiple Measurement Vectors in MIMO Radar.
Liu, Jing; Zhou, Weidong; Juwono, Filbert H
2017-05-08
Direction-of-arrival (DOA) estimation is usually confronted with a multiple measurement vector (MMV) case. In this paper, a novel fast sparse DOA estimation algorithm, named the joint smoothed l0-norm algorithm, is proposed for multiple measurement vectors in multiple-input multiple-output (MIMO) radar. To eliminate white or colored Gaussian noise, the new method first obtains a low-complexity high-order cumulants based data matrix. Then, the proposed algorithm designs a joint smoothed function tailored for the MMV case, based on which a joint smoothed l0-norm sparse representation framework is constructed. Finally, for the MMV-based joint smoothed function, the corresponding gradient-based sparse signal reconstruction is designed, thus the DOA estimation can be achieved. The proposed method is a fast sparse representation algorithm, which can solve the MMV problem and perform well for both white and colored Gaussian noise. The proposed joint algorithm is about two orders of magnitude faster than the l1-norm minimization based methods, such as l1-SVD (singular value decomposition), RV (real-valued) l1-SVD and RV l1-SRACV (sparse representation array covariance vectors), and achieves better DOA estimation performance.
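The smoothed l0 idea replaces the counting norm with a sum of Gaussian-shaped bumps whose width σ is gradually decreased; the surrogate is differentiable, which is what a gradient-based reconstruction exploits. A sketch of the single-vector, real-valued version of the surrogate and its gradient (the joint MMV variant of the paper sums such terms over snapshots; this is an illustration, not the paper's algorithm):

```python
import numpy as np

def smoothed_l0(x, sigma):
    """Smooth approximation of ||x||_0: sum_i (1 - exp(-x_i^2 / (2 sigma^2))).
    As sigma -> 0 the value approaches the number of non-zero entries."""
    return float(np.sum(1.0 - np.exp(-(x**2) / (2.0 * sigma**2))))

def smoothed_l0_grad(x, sigma):
    """Gradient of the surrogate with respect to a real-valued x."""
    return (x / sigma**2) * np.exp(-(x**2) / (2.0 * sigma**2))

x = np.array([0.0, 0.001, 2.0, -3.0])
for sigma in (1.0, 0.1, 0.01):
    print(sigma, smoothed_l0(x, sigma))   # tends toward the count of "large" entries
```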
Qing Liu; Zhihui Lai; Zongwei Zhou; Fangjun Kuang; Zhong Jin
2016-01-01
Low-rank matrix completion aims to recover a matrix from a small subset of its entries and has received much attention in the field of computer vision. Most existing methods formulate the task as a low-rank matrix approximation problem. A truncated nuclear norm has recently been proposed as a better approximation to the rank of a matrix than the nuclear norm. The corresponding optimization method, truncated nuclear norm regularization (TNNR), converges better than the nuclear norm minimization-based methods. However, it is not robust to the number of subtracted singular values and requires a large number of iterations to converge. In this paper, a TNNR method based on weighted residual error (TNNR-WRE) for matrix completion and its extension model (ETNNR-WRE) are proposed. TNNR-WRE assigns different weights to the rows of the residual error matrix in an augmented Lagrange function to accelerate the convergence of the TNNR method. The ETNNR-WRE is much more robust to the number of subtracted singular values than the TNNR-WRE, TNNR alternating direction method of multipliers, and TNNR accelerated proximal gradient with line search methods. Experimental results using both synthetic and real visual data sets show that the proposed TNNR-WRE and ETNNR-WRE methods perform better than the TNNR and iteratively reweighted nuclear norm (IRNN) methods.
WEAK GALERKIN METHODS FOR SECOND ORDER ELLIPTIC INTERFACE PROBLEMS
MU, LIN; WANG, JUNPING; WEI, GUOWEI; YE, XIU; ZHAO, SHAN
2013-01-01
Weak Galerkin methods refer to general finite element methods for partial differential equations (PDEs) in which differential operators are approximated by their weak forms as distributions. Such weak forms give rise to desirable flexibilities in enforcing boundary and interface conditions. A weak Galerkin finite element method (WG-FEM) is developed in this paper for solving elliptic PDEs with discontinuous coefficients and interfaces. Theoretically, it is proved that high order numerical schemes can be designed by using the WG-FEM with polynomials of high order on each element. Extensive numerical experiments have been carried out to validate the WG-FEM for solving second order elliptic interface problems. High order of convergence is numerically confirmed in both the L2 and L∞ norms for the piecewise linear WG-FEM. Special attention is paid to solving many interface problems in which the solution possesses a certain singularity due to the nonsmoothness of the interface. A challenge in research is to design nearly second order numerical methods that work well for problems with low regularity in the solution. The best known numerical scheme in the literature is of order O(h) to O(h^1.5) for the solution itself in the L∞ norm. It is demonstrated that the WG-FEM of the lowest order, i.e., the piecewise constant WG-FEM, is capable of delivering numerical approximations that are of order O(h^1.75) to O(h^2) in the L∞ norm for C1 or Lipschitz continuous interfaces associated with a C1 or H2 continuous solution.
Hessian-based norm regularization for image restoration with biomedical applications.
Lefkimmiatis, Stamatios; Bourquard, Aurélien; Unser, Michael
2012-03-01
We present nonquadratic Hessian-based regularization methods that can be effectively used for image restoration problems in a variational framework. Motivated by the great success of the total-variation (TV) functional, we extend it to also include second-order differential operators. Specifically, we derive second-order regularizers that involve matrix norms of the Hessian operator. The definition of these functionals is based on an alternative interpretation of TV that relies on mixed norms of directional derivatives. We show that the resulting regularizers retain some of the most favorable properties of TV, i.e., convexity, homogeneity, rotation, and translation invariance, while dealing effectively with the staircase effect. We further develop an efficient minimization scheme for the corresponding objective functions. The proposed algorithm is of the iteratively reweighted least-square type and results from a majorization-minimization approach. It relies on a problem-specific preconditioned conjugate gradient method, which makes the overall minimization scheme very attractive since it can be applied effectively to large images in a reasonable computational time. We validate the overall proposed regularization framework through deblurring experiments under additive Gaussian noise on standard and biomedical images.
Blind motion image deblurring using nonconvex higher-order total variation model
NASA Astrophysics Data System (ADS)
Li, Weihong; Chen, Rui; Xu, Shangwen; Gong, Weiguo
2016-09-01
We propose a nonconvex higher-order total variation (TV) method for blind motion image deblurring. First, we introduce a nonconvex higher-order TV differential operator to define a new model of the blind motion image deblurring, which can effectively eliminate the staircase effect of the deblurred image; meanwhile, we employ an image sparse prior to improve the edge recovery quality. Second, to improve the accuracy of the estimated motion blur kernel, we use L1 norm and H1 norm as the blur kernel regularization term, considering the sparsity and smoothing of the motion blur kernel. Third, because it is difficult to solve the numerically computational complexity problem of the proposed model owing to the intrinsic nonconvexity, we propose a binary iterative strategy, which incorporates a reweighted minimization approximating scheme in the outer iteration, and a split Bregman algorithm in the inner iteration. And we also discuss the convergence of the proposed binary iterative strategy. Last, we conduct extensive experiments on both synthetic and real-world degraded images. The results demonstrate that the proposed method outperforms the previous representative methods in both quality of visual perception and quantitative measurement.
An iterative algorithm for L1-TV constrained regularization in image restoration
NASA Astrophysics Data System (ADS)
Chen, K.; Loli Piccolomini, E.; Zama, F.
2015-11-01
We consider the problem of restoring blurred images affected by impulsive noise. The adopted method restores the images by solving a sequence of constrained minimization problems where the data fidelity function is the ℓ1 norm of the residual and the constraint, chosen as the image Total Variation, is automatically adapted to improve the quality of the restored images. Although this approach is general, we report here the case of vectorial images where the blurring model involves contributions from the different image channels (cross channel blur). A computationally convenient extension of the Total Variation function to vectorial images is used and the results reported show that this approach is efficient for recovering nearly optimal images.
On the Critical One Component Regularity for 3-D Navier-Stokes System: General Case
NASA Astrophysics Data System (ADS)
Chemin, Jean-Yves; Zhang, Ping; Zhang, Zhifei
2017-06-01
Let us consider initial data $v_0$ for the homogeneous incompressible 3D Navier-Stokes equation with vorticity belonging to $L^{3/2}\cap L^2$. We prove that if the solution associated with $v_0$ blows up at a finite time $T^\star$, then for any $p$ in $]4,\infty[$, and any unit vector $e$ of $\mathbb{R}^3$, the $L^p$ norm in time with values in $\dot{H}^{1/2+2/p}$ of $(v|e)_{\mathbb{R}^3}$ blows up at $T^\star$.
NASA Astrophysics Data System (ADS)
Parekh, Ankit
Sparsity has become the basis of some important signal processing methods over the last ten years. Many signal processing problems (e.g., denoising, deconvolution, non-linear component analysis) can be expressed as inverse problems. Sparsity is invoked through the formulation of an inverse problem with suitably designed regularization terms. The regularization terms alone encode sparsity into the problem formulation. Often, the ℓ1 norm is used to induce sparsity, so much so that ℓ1 regularization is considered to be `modern least-squares'. The use of ℓ1 norm, as a sparsity-inducing regularizer, leads to a convex optimization problem, which has several benefits: the absence of extraneous local minima, well developed theory of globally convergent algorithms, even for large-scale problems. Convex regularization via the ℓ1 norm, however, tends to under-estimate the non-zero values of sparse signals. In order to estimate the non-zero values more accurately, non-convex regularization is often favored over convex regularization. However, non-convex regularization generally leads to non-convex optimization, which suffers from numerous issues: convergence may be guaranteed to only a stationary point, problem specific parameters may be difficult to set, and the solution is sensitive to the initialization of the algorithm. The first part of this thesis is aimed toward combining the benefits of non-convex regularization and convex optimization to estimate sparse signals more effectively. To this end, we propose to use parameterized non-convex regularizers with designated non-convexity and provide a range for the non-convex parameter so as to ensure that the objective function is strictly convex. By ensuring convexity of the objective function (sum of data-fidelity and non-convex regularizer), we can make use of a wide variety of convex optimization algorithms to obtain the unique global minimum reliably. The second part of this thesis proposes a non-linear signal decomposition technique for an important biomedical signal processing problem: the detection of sleep spindles and K-complexes in human sleep electroencephalography (EEG). We propose a non-linear model for the EEG consisting of three components: (1) a transient (sparse piecewise constant) component, (2) a low-frequency component, and (3) an oscillatory component. The oscillatory component admits a sparse time-frequency representation. Using a convex objective function, we propose a fast non-linear optimization algorithm to estimate the three components in the proposed signal model. The low-frequency and oscillatory components are then used to estimate the K-complexes and sleep spindles respectively. The proposed detection method is shown to outperform several state-of-the-art automated sleep spindles detection methods.
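A hedged sketch of the firm-shrinkage operator mentioned above, which is applied elementwise inside the iterative algorithm, is given below; the parameter names lam and mu are illustrative, with mu > lam controlling how quickly the underlying non-convex penalty flattens out:

```python
import numpy as np

def firm_shrink(x, lam, mu):
    """Firm thresholding: zero below lam, identity above mu, linear in between.

    Unlike soft thresholding (the prox of the l1 norm), large coefficients pass
    through unshrunk, which reduces the amplitude bias of l1 regularization
    while small coefficients are still set exactly to zero.
    """
    ax = np.abs(x)
    return np.where(ax <= lam, 0.0,
           np.where(ax >= mu, x, np.sign(x) * mu * (ax - lam) / (mu - lam)))

y = np.array([-3.0, -0.4, 0.1, 0.8, 2.5])
print(firm_shrink(y, lam=0.5, mu=1.5))   # [-3.  0.  0.  0.45  2.5]
```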
Wavelet-based 3-D inversion for frequency-domain airborne EM data
NASA Astrophysics Data System (ADS)
Liu, Yunhe; Farquharson, Colin G.; Yin, Changchun; Baranwal, Vikas C.
2018-04-01
In this paper, we propose a new wavelet-based 3-D inversion method for frequency-domain airborne electromagnetic (FDAEM) data. Instead of inverting the model in the space domain using a smoothing constraint, this new method recovers the model in the wavelet domain based on a sparsity constraint. In the wavelet domain, the model is represented by two types of coefficients, which contain both large- and fine-scale information about the model, so the wavelet-domain inversion is inherently multiresolution. To impose the sparsity constraint, we minimize an L1-norm measure in the wavelet domain, which generally gives a sparse solution. The final inversion system is solved by an iteratively reweighted least-squares method. We investigate different orders of Daubechies wavelets in our inversion algorithm and test them on a synthetic frequency-domain AEM data set. The results show that higher order wavelets, having larger vanishing moments and regularity, deliver a more stable inversion process and better local resolution, while the lower order wavelets are simpler and less smooth, and thus capable of recovering sharp discontinuities if the model is simple. Finally, we test the new inversion algorithm on a frequency-domain helicopter EM (HEM) field data set acquired in Byneset, Norway. The wavelet-based 3-D inversion of the HEM data is compared with the result of an L2-norm-based 3-D inversion to further investigate the features of the new method.
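The abstract reduces the L1-norm measure to a sequence of weighted least-squares problems. The sketch below shows that reduction on a generic linear problem; it is only a toy stand-in for the nonlinear AEM inversion, and the operator A, damping lam and smoothing constant eps are illustrative assumptions:

```python
import numpy as np

def irls_l1(A, d, lam=0.1, eps=1e-6, iters=30):
    """IRLS sketch for min ||A m - d||_2^2 + lam * ||m||_1.

    Each |m_i| is majorized by m_i^2 / (|m_i| + eps) at the current iterate,
    so every iteration reduces to a weighted ridge problem with a closed-form
    solution.  This linear toy only illustrates how the sparsity term is handled.
    """
    m = np.linalg.lstsq(A, d, rcond=None)[0]        # start from the least-squares model
    for _ in range(iters):
        w = 1.0 / (np.abs(m) + eps)                 # reweighting from the current model
        m = np.linalg.solve(A.T @ A + lam * np.diag(w), A.T @ d)
    return m

rng = np.random.default_rng(1)
A = rng.standard_normal((60, 100))
m_true = np.zeros(100); m_true[[5, 40, 77]] = [1.0, -2.0, 0.5]
d = A @ m_true + 0.01 * rng.standard_normal(60)
m_hat = irls_l1(A, d)
print("large coefficients found at:", np.flatnonzero(np.abs(m_hat) > 0.1))
```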
Hessian Schatten-norm regularization for linear inverse problems.
Lefkimmiatis, Stamatios; Ward, John Paul; Unser, Michael
2013-05-01
We introduce a novel family of invariant, convex, and non-quadratic functionals that we employ to derive regularized solutions of ill-posed linear inverse imaging problems. The proposed regularizers involve the Schatten norms of the Hessian matrix, which are computed at every pixel of the image. They can be viewed as second-order extensions of the popular total-variation (TV) semi-norm since they satisfy the same invariance properties. Meanwhile, by taking advantage of second-order derivatives, they avoid the staircase effect, a common artifact of TV-based reconstructions, and perform well for a wide range of applications. To solve the corresponding optimization problems, we propose an algorithm that is based on a primal-dual formulation. A fundamental ingredient of this algorithm is the projection of matrices onto Schatten norm balls of arbitrary radius. This operation is performed efficiently based on a direct link we provide between vector projections onto lq norm balls and matrix projections onto Schatten norm balls. Finally, we demonstrate the effectiveness of the proposed methods through experimental results on several inverse imaging problems with real and simulated data.
Time Series Imputation via L1 Norm-Based Singular Spectrum Analysis
NASA Astrophysics Data System (ADS)
Kalantari, Mahdi; Yarmohammadi, Masoud; Hassani, Hossein; Silva, Emmanuel Sirimal
Missing values in time series data are a well-known and important problem that many researchers have studied extensively in various fields. In this paper, a new nonparametric approach for missing value imputation in time series is proposed. The main novelty of this research is applying the L1 norm-based version of Singular Spectrum Analysis (SSA), namely L1-SSA, which is robust against outliers. The performance of the new imputation method has been compared with many other established methods. The comparison is done by applying them to various real and simulated time series. The obtained results confirm that the SSA-based methods, especially L1-SSA, can provide better imputation than other methods.
Nonconvex Sparse Logistic Regression With Weakly Convex Regularization
NASA Astrophysics Data System (ADS)
Shen, Xinyue; Gu, Yuantao
2018-06-01
In this work we propose to fit a sparse logistic regression model by a weakly convex regularized nonconvex optimization problem. The idea is based on the finding that a weakly convex function as an approximation of the $\ell_0$ pseudo norm is able to better induce sparsity than the commonly used $\ell_1$ norm. For a class of weakly convex sparsity inducing functions, we prove the nonconvexity of the corresponding sparse logistic regression problem, and study its local optimality conditions and the choice of the regularization parameter to exclude trivial solutions. Despite the nonconvexity, a method based on proximal gradient descent is used to solve the general weakly convex sparse logistic regression, and its convergence behavior is studied theoretically. Then the general framework is applied to a specific weakly convex function, and a necessary and sufficient local optimality condition is provided. The solution method is instantiated in this case as an iterative firm-shrinkage algorithm, and its effectiveness is demonstrated in numerical experiments by both randomly generated and real datasets.
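A minimal proximal-gradient sketch for sparse logistic regression is given below. It uses the l1 prox (soft thresholding) only as a placeholder; the paper's weakly convex penalty would swap in a firm-shrinkage-type prox while keeping the same loop. The step size, iteration count and synthetic data are illustrative assumptions:

```python
import numpy as np

def soft_threshold(w, t):
    """Prox of t * ||.||_1, used here as a simple placeholder prox."""
    return np.sign(w) * np.maximum(np.abs(w) - t, 0.0)

def prox_grad_logreg(X, y, lam=0.05, iters=500, prox=soft_threshold):
    """Proximal gradient sketch for sparse logistic regression, labels y in {0, 1}.

    The smooth part is the averaged logistic loss; sparsity enters only through
    the prox, so a different (e.g. firm-shrinkage) prox can be plugged in.
    """
    n, d = X.shape
    step = 4.0 * n / (np.linalg.norm(X, 2) ** 2)   # 1/L for the averaged logistic loss
    w = np.zeros(d)
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ w))           # predicted probabilities
        w = prox(w - step * (X.T @ (p - y) / n), step * lam)
    return w

rng = np.random.default_rng(2)
X = rng.standard_normal((200, 50))
w_true = np.zeros(50); w_true[:3] = [2.0, -1.5, 1.0]
y = (rng.random(200) < 1.0 / (1.0 + np.exp(-X @ w_true))).astype(float)
print(np.round(prox_grad_logreg(X, y), 2)[:6])
```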
Bhave, Sampada; Lingala, Sajan Goud; Newell, John D; Nagle, Scott K; Jacob, Mathews
2016-06-01
The objective of this study was to increase the spatial and temporal resolution of dynamic 3-dimensional (3D) magnetic resonance imaging (MRI) of lung volumes and diaphragm motion. To achieve this goal, we evaluate the utility of the proposed blind compressed sensing (BCS) algorithm to recover data from highly undersampled measurements. We evaluated the performance of the BCS scheme to recover dynamic data sets from retrospectively and prospectively undersampled measurements. We also compared its performance against that of view-sharing, the nuclear norm minimization scheme, and the l1 Fourier sparsity regularization scheme. Quantitative experiments were performed on a healthy subject using a fully sampled 2D data set with uniform radial sampling, which was retrospectively undersampled with 16 radial spokes per frame to correspond to an undersampling factor of 8. The images obtained from the 4 reconstruction schemes were compared with the fully sampled data using mean square error and normalized high-frequency error metrics. The schemes were also compared using prospective 3D data acquired on a Siemens 3 T TIM TRIO MRI scanner on 8 healthy subjects during free breathing. Two expert cardiothoracic radiologists (R1 and R2) qualitatively evaluated the reconstructed 3D data sets using a 5-point scale (0-4) on the basis of spatial resolution, temporal resolution, and presence of aliasing artifacts. The BCS scheme gives better reconstructions (mean square error = 0.0232 and normalized high frequency = 0.133) than the other schemes in the 2D retrospective undersampling experiments, producing minimally distorted reconstructions up to an acceleration factor of 8 (16 radial spokes per frame). The prospective 3D experiments show that the BCS scheme provides visually improved reconstructions compared with the other schemes. The BCS scheme provides improved qualitative scores over the nuclear norm and l1 Fourier sparsity regularization schemes in the temporal blurring and spatial blurring categories. The qualitative scores for aliasing artifacts in the images reconstructed by the nuclear norm scheme and the BCS scheme are comparable. The comparisons of the tidal volume changes also show that the BCS scheme has less temporal blurring than the nuclear norm minimization scheme and the l1 Fourier sparsity regularization scheme. The minute ventilation estimated by BCS for tidal breathing in the supine position (4 L/min) and the measured supine inspiratory capacity (1.5 L) are in good agreement with the literature. The improved performance of BCS can be explained by its ability to efficiently adapt to the data, thus providing a richer representation of the signal. The feasibility of the BCS scheme was demonstrated for dynamic 3D free breathing MRI of lung volumes and diaphragm motion. A temporal resolution of ∼500 milliseconds and a spatial resolution of 2.7 × 2.7 × 10 mm, with whole lung coverage (16 slices), were achieved using the BCS scheme.
Paz-Linares, Deirel; Vega-Hernández, Mayrim; Rojas-López, Pedro A.; Valdés-Hernández, Pedro A.; Martínez-Montes, Eduardo; Valdés-Sosa, Pedro A.
2017-01-01
The estimation of EEG generating sources constitutes an Inverse Problem (IP) in Neuroscience. This is an ill-posed problem due to the non-uniqueness of the solution and regularization or prior information is needed to undertake Electrophysiology Source Imaging. Structured Sparsity priors can be attained through combinations of (L1 norm-based) and (L2 norm-based) constraints such as the Elastic Net (ENET) and Elitist Lasso (ELASSO) models. The former model is used to find solutions with a small number of smooth nonzero patches, while the latter imposes different degrees of sparsity simultaneously along different dimensions of the spatio-temporal matrix solutions. Both models have been addressed within the penalized regression approach, where the regularization parameters are selected heuristically, leading usually to non-optimal and computationally expensive solutions. The existing Bayesian formulation of ENET allows hyperparameter learning, but uses the computationally intensive Monte Carlo/Expectation Maximization methods, which makes its application to the EEG IP impractical, while the ELASSO has not previously been considered in a Bayesian context. In this work, we attempt to solve the EEG IP using a Bayesian framework for the ENET and ELASSO models. We propose a Structured Sparse Bayesian Learning algorithm based on combining the Empirical Bayes and the iterative coordinate descent procedures to estimate both the parameters and hyperparameters. Using realistic simulations and avoiding the inverse crime we illustrate that our methods are able to recover complicated source setups more accurately and with a more robust estimation of the hyperparameters and behavior under different sparsity scenarios than classical LORETA, ENET and LASSO Fusion solutions. We also solve the EEG IP using data from a visual attention experiment, finding more interpretable neurophysiological patterns with our methods. The Matlab codes used in this work, including Simulations, Methods, Quality Measures and Visualization Routines are freely available in a public website. PMID:29200994
The construction of sparse models of Mars' crustal magnetic field
NASA Astrophysics Data System (ADS)
Moore, Kimberly; Bloxham, Jeremy
2017-04-01
The crustal magnetic field of Mars is a key constraint on Martian geophysical history, especially the timing of the dynamo shutoff. Maps of the crustal magnetic field of Mars show wide variations in the intensity of magnetization, with most of the Northern hemisphere only weakly magnetized. Previous methods of analysis tend to favor smooth solutions for the crustal magnetic field of Mars, making use of techniques such as L2 norms. Here we utilize inversion methods designed for sparse models, to see how much of the surface area of Mars must be magnetized in order to fit available spacecraft magnetic field data. We solve for the crustal magnetic field at 10,000 individual magnetic pixels on the surface of Mars. We employ an L1 regularization, and solve for models where each magnetic pixel is identically zero, unless required otherwise by the data. We find solutions with an adequate fit to the data with over 90% sparsity (90% of magnetic pixels having a field value of exactly 0). We contrast these solutions with L2-based solutions, as well as an elastic net model (combination of L1 and L2). We find our sparse solutions look dramatically different from previous models in the literature, but still give a physically reasonable history of the dynamo (shutting off around 4.1 Ga).
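To contrast L1, elastic-net and L2 regularization as described above, the sketch below uses off-the-shelf scikit-learn solvers on a toy linear inverse problem; the kernel G, the regularization strengths and the sparse "magnetized" model are illustrative stand-ins, not the paper's spacecraft-data inversion:

```python
import numpy as np
from sklearn.linear_model import Lasso, ElasticNet, Ridge

# Toy linear inverse problem G @ m = d, where only a few model cells are magnetized.
rng = np.random.default_rng(3)
G = rng.standard_normal((120, 300))              # stand-in for the field kernel
m_true = np.zeros(300); m_true[[10, 150, 220]] = [5.0, -3.0, 2.0]
d = G @ m_true + 0.05 * rng.standard_normal(120)

m_l1 = Lasso(alpha=0.05, fit_intercept=False, max_iter=10000).fit(G, d).coef_
m_en = ElasticNet(alpha=0.05, l1_ratio=0.7, fit_intercept=False, max_iter=10000).fit(G, d).coef_
m_l2 = Ridge(alpha=0.05, fit_intercept=False).fit(G, d).coef_

for name, m in [("L1", m_l1), ("elastic net", m_en), ("L2", m_l2)]:
    print(name, "nonzero cells:", int(np.sum(np.abs(m) > 1e-6)))
```

The L2 (ridge) model spreads magnetization over every cell, while the L1 and elastic-net models concentrate it in a few cells, which is the sparsity behaviour the abstract exploits.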
Trace Norm Regularized CANDECOMP/PARAFAC Decomposition With Missing Data.
Liu, Yuanyuan; Shang, Fanhua; Jiao, Licheng; Cheng, James; Cheng, Hong
2015-11-01
In recent years, low-rank tensor completion (LRTC) problems have received a significant amount of attention in computer vision, data mining, and signal processing. The existing trace norm minimization algorithms for iteratively solving LRTC problems involve multiple singular value decompositions of very large matrices at each iteration. Therefore, they suffer from high computational cost. In this paper, we propose a novel trace norm regularized CANDECOMP/PARAFAC decomposition (TNCP) method for simultaneous tensor decomposition and completion. We first formulate a factor matrix rank minimization model by deducing the relation between the rank of each factor matrix and the mode-n rank of a tensor. Then, we introduce a tractable relaxation of our rank function, and obtain a convex combination problem of much smaller-scale matrix trace norm minimization. Finally, we develop an efficient algorithm based on the alternating direction method of multipliers to solve our problem. The promising experimental results on synthetic and real-world data validate the effectiveness of our TNCP method. Moreover, TNCP is significantly faster than the state-of-the-art methods and scales to larger problems.
A P-Norm Robust Feature Extraction Method for Identifying Differentially Expressed Genes
Liu, Jian; Liu, Jin-Xing; Gao, Ying-Lian; Kong, Xiang-Zhen; Wang, Xue-Song; Wang, Dong
2015-01-01
In current molecular biology, it becomes more and more important to identify differentially expressed genes closely correlated with a key biological process from gene expression data. In this paper, based on the Schatten p-norm and Lp-norm, a novel p-norm robust feature extraction method is proposed to identify the differentially expressed genes. In our method, the Schatten p-norm is used as the regularization function to obtain a low-rank matrix and the Lp-norm is taken as the error function to improve the robustness to outliers in the gene expression data. The results on simulation data show that our method can obtain higher identification accuracies than the competitive methods. Numerous experiments on real gene expression data sets demonstrate that our method can identify more differentially expressed genes than the others. Moreover, we confirmed that the identified genes are closely correlated with the corresponding gene expression data. PMID:26201006
NASA Astrophysics Data System (ADS)
Assi, Kondo Claude; Gay, Etienne; Chnafa, Christophe; Mendez, Simon; Nicoud, Franck; Abascal, Juan F. P. J.; Lantelme, Pierre; Tournoux, François; Garcia, Damien
2017-09-01
We propose a regularized least-squares method for reconstructing 2D velocity vector fields within the left ventricular cavity from single-view color Doppler echocardiographic images. Vector flow mapping is formulated as a quadratic optimization problem based on an ℓ2-norm minimization of a cost function composed of a Doppler data-fidelity term and a regularizer. The latter contains three physically interpretable expressions related to 2D mass conservation, Dirichlet boundary conditions, and smoothness. A finite difference discretization of the continuous problem was adopted in a polar coordinate system, leading to a sparse symmetric positive-definite system. The three regularization parameters were determined automatically by analyzing the L-hypersurface, a generalization of the L-curve. The performance of the proposed method was numerically evaluated using (1) a synthetic flow composed of a mixture of divergence-free and curl-free flow fields and (2) simulated flow data from a patient-specific CFD (computational fluid dynamics) model of a human left heart. The numerical evaluations showed that the vector flow fields reconstructed from the Doppler components were in good agreement with the original velocities, with a relative error less than 20%. It was also demonstrated that a perturbation of the domain contour has little effect on the rebuilt velocity fields. The capability of our intraventricular vector flow mapping (iVFM) algorithm was finally illustrated on in vivo echocardiographic color Doppler data acquired in patients. The vortex that forms during the rapid filling was clearly deciphered. This improved iVFM algorithm is expected to have a significant clinical impact in the assessment of diastolic function.
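Because the cost function above is quadratic, the reconstruction reduces to a single sparse symmetric positive-definite linear solve. The sketch below illustrates that structure on a 1D smoothing problem; the operators, noise level and weight lam are illustrative and do not reproduce the paper's polar-grid discretization or its three regularization terms:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve

# Quadratic data term + quadratic smoothness regularizer -> one sparse SPD solve.
n = 200
A = sp.identity(n, format="csr")                     # observation operator (here: identity)
D = sp.diags([-1.0, 1.0], [0, 1], shape=(n - 1, n))  # first-difference (smoothness) operator
rng = np.random.default_rng(4)
x_true = np.sin(np.linspace(0, 3 * np.pi, n))
b = x_true + 0.2 * rng.standard_normal(n)            # noisy "Doppler" data

lam = 5.0
lhs = (A.T @ A + lam * (D.T @ D)).tocsc()            # sparse symmetric positive-definite system
x_hat = spsolve(lhs, A.T @ b)
print("residual norm:", np.linalg.norm(A @ x_hat - b))
```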
NASA Astrophysics Data System (ADS)
Weidemaier, P.
2005-06-01
The trace problem on the hypersurface $y_n=0$ is investigated for a function $u=u(y,t) \in L_q(0,T;W_{\underline p}^{\underline m}(\mathbb{R}_+^n))$ with $\partial_t u \in L_q(0,T; L_{\underline p}(\mathbb{R}_+^n))$, that is, Sobolev spaces with mixed Lebesgue norm $L_{\underline p,q}(\mathbb{R}^n_+\times(0,T))=L_q(0,T;L_{\underline p}(\mathbb{R}_+^n))$ are considered; here $\underline p=(p_1,\dots,p_n)$ is a vector and $\mathbb{R}^n_+=\mathbb{R}^{n-1} \times (0,\infty)$. Such function spaces are useful in the context of parabolic equations. They allow, in particular, different exponents of summability in space and time. It is shown that the sharp regularity of the trace in the time variable is characterized by the Lizorkin-Triebel space $F_{q,p_n}^{1-1/(p_n m_n)}(0,T;L_{\widetilde{\underline p}}(\mathbb{R}^{n-1}))$, $\underline p=(\widetilde{\underline p},p_n)$. A similar result is established for first order spatial derivatives of $u$. These results allow one to determine the exact spaces for the data in the inhomogeneous Dirichlet and Neumann problems for parabolic equations of the second order if the solution is in the space $L_q(0,T; W_p^2(\Omega)) \cap W_q^1(0,T;L_p(\Omega))$ with $p \le q$.
Improved l1-SPIRiT using 3D walsh transform-based sparsity basis.
Feng, Zhen; Liu, Feng; Jiang, Mingfeng; Crozier, Stuart; Guo, He; Wang, Yuxin
2014-09-01
l1-SPIRiT is a fast magnetic resonance imaging (MRI) method which combines parallel imaging (PI) with compressed sensing (CS) by performing a joint l1-norm and l2-norm optimization procedure. The original l1-SPIRiT method uses two-dimensional (2D) Wavelet transform to exploit the intra-coil data redundancies and a joint sparsity model to exploit the inter-coil data redundancies. In this work, we propose to stack all the coil images into a three-dimensional (3D) matrix, and then a novel 3D Walsh transform-based sparsity basis is applied to simultaneously reduce the intra-coil and inter-coil data redundancies. Both the 2D Wavelet transform-based and the proposed 3D Walsh transform-based sparsity bases were investigated in the l1-SPIRiT method. The experimental results show that the proposed 3D Walsh transform-based l1-SPIRiT method outperformed the original l1-SPIRiT in terms of image quality and computational efficiency. Copyright © 2014 Elsevier Inc. All rights reserved.
Computerized tomography with total variation and with shearlets
NASA Astrophysics Data System (ADS)
Garduño, Edgar; Herman, Gabor T.
2017-04-01
To reduce the x-ray dose in computerized tomography (CT), many constrained optimization approaches have been proposed aiming at minimizing a regularizing function that measures a lack of consistency with some prior knowledge about the object that is being imaged, subject to a (predetermined) level of consistency with the detected attenuation of x-rays. One commonly investigated regularizing function is total variation (TV), while other publications advocate the use of some type of multiscale geometric transform in the definition of the regularizing function, a particular recent choice for this is the shearlet transform. Proponents of the shearlet transform in the regularizing function claim that the reconstructions so obtained are better than those produced using TV for texture preservation (but may be worse for noise reduction). In this paper we report results related to this claim. In our reported experiments using simulated CT data collection of the head, reconstructions whose shearlet transform has a small ℓ1-norm are not more efficacious than reconstructions that have a small TV value. Our experiments for making such comparisons use the recently-developed superiorization methodology for both regularizing functions. Superiorization is an automated procedure for turning an iterative algorithm for producing images that satisfy a primary criterion (such as consistency with the observed measurements) into its superiorized version that will produce results that, according to the primary criterion are as good as those produced by the original algorithm, but in addition are superior to them according to a secondary (regularizing) criterion. The method presented for superiorization involving the ℓ1-norm of the shearlet transform is novel and is quite general: It can be used for any regularizing function that is defined as the ℓ1-norm of a transform specified by the application of a matrix. Because in the previous literature the split Bregman algorithm is used for similar purposes, a section is included comparing the results of the superiorization algorithm with the split Bregman algorithm.
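For reference, the secondary (regularizing) criterion most commonly used in this setting, the isotropic total variation, can be evaluated with a few lines of NumPy; the phantom and the forward-difference convention below are illustrative choices, and an analogous function would return the ℓ1-norm of a shearlet or any other linear transform of the image:

```python
import numpy as np

def total_variation(img):
    """Isotropic total variation of a 2D image via forward differences."""
    dx = np.diff(img, axis=1)[:-1, :]   # horizontal differences, trimmed to a common shape
    dy = np.diff(img, axis=0)[:, :-1]   # vertical differences, trimmed to a common shape
    return np.sum(np.sqrt(dx ** 2 + dy ** 2))

phantom = np.zeros((64, 64)); phantom[16:48, 16:48] = 1.0
print(total_variation(phantom))         # roughly proportional to the perimeter of the square
```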
NASA Astrophysics Data System (ADS)
Li, Shuo; Wang, Hui; Wang, Liyong; Yu, Xiangzhou; Yang, Le
2018-01-01
The uneven illumination phenomenon reduces the quality of remote sensing images and causes interference in subsequent processing and applications. A variational method based on Retinex with double-norm hybrid constraints for uneven illumination correction is proposed. The L1 norm and the L2 norm are adopted to constrain the textures and details of the reflectance image and the smoothness of the illumination image, respectively. The problem of separating the illumination image from the reflectance image is transformed into the optimal solution of the variational model. In order to accelerate the solution, the split Bregman method is used to decompose the variational model into three subproblems, which are calculated by alternate iteration. Two groups of experiments are implemented on two synthetic images and three real remote sensing images. Compared with the variational Retinex method with a single-norm constraint and the Mask method, the proposed method performs better in both visual evaluation and quantitative measurements. The proposed method can effectively eliminate the uneven illumination while maintaining the textures and details of the remote sensing image. Moreover, the proposed method solved with the split Bregman method is more than 10 times faster than the same model solved with the steepest descent method.
Kim, Junghoe; Calhoun, Vince D.; Shim, Eunsoo; Lee, Jong-Hwan
2015-01-01
Functional connectivity (FC) patterns obtained from resting-state functional magnetic resonance imaging data are commonly employed to study neuropsychiatric conditions by using pattern classifiers such as the support vector machine (SVM). Meanwhile, a deep neural network (DNN) with multiple hidden layers has shown its ability to systematically extract lower-to-higher level information of image and speech data from lower-to-higher hidden layers, markedly enhancing classification accuracy. The objective of this study was to adopt the DNN for whole-brain resting-state FC pattern classification of schizophrenia (SZ) patients vs. healthy controls (HCs) and identification of aberrant FC patterns associated with SZ. We hypothesized that the lower-to-higher level features learned via the DNN would significantly enhance the classification accuracy, and proposed an adaptive learning algorithm to explicitly control the weight sparsity in each hidden layer via L1-norm regularization. Furthermore, the weights were initialized via stacked autoencoder based pre-training to further improve the classification performance. Classification accuracy was systematically evaluated as a function of (1) the number of hidden layers/nodes, (2) the use of L1-norm regularization, (3) the use of the pre-training, (4) the use of framewise displacement (FD) removal, and (5) the use of anatomical/functional parcellation. Using FC patterns from anatomically parcellated regions without FD removal, an error rate of 14.2% was achieved by employing three hidden layers and 50 hidden nodes with both L1-norm regularization and pre-training, which was substantially lower than the error rate from the SVM (22.3%). Moreover, the trained DNN weights (i.e., the learned features) were found to represent the hierarchical organization of aberrant FC patterns in SZ compared with HC. Specifically, pairs of nodes extracted from the lower hidden layer represented sparse FC patterns implicated in SZ, which was quantified by using kurtosis/modularity measures and features from the higher hidden layer showed holistic/global FC patterns differentiating SZ from HC. Our proposed schemes and reported findings attained by using the DNN classifier and whole-brain FC data suggest that such approaches show improved ability to learn hidden patterns in brain imaging data, which may be useful for developing diagnostic tools for SZ and other neuropsychiatric disorders and identifying associated aberrant FC patterns. PMID:25987366
A Stochastic Model for Detecting Overlapping and Hierarchical Community Structure
Cao, Xiaochun; Wang, Xiao; Jin, Di; Guo, Xiaojie; Tang, Xianchao
2015-01-01
Community detection is a fundamental problem in the analysis of complex networks. Recently, many researchers have concentrated on the detection of overlapping communities, where a vertex may belong to more than one community. However, most current methods require the number (or the size) of the communities as a priori information, which is usually unavailable in real-world networks. Thus, a practical algorithm should not only find the overlapping community structure, but also automatically determine the number of communities. Furthermore, it is preferable if this method is able to reveal the hierarchical structure of networks as well. In this work, we first propose a generative model that employs a nonnegative matrix factorization (NMF) formulation with an l2,1 norm regularization term, balanced by a resolution parameter. The NMF has the nature that provides overlapping community structure by assigning soft membership variables to each vertex; the l2,1 regularization term is a technique of group sparsity which can automatically determine the number of communities by penalizing too many nonempty communities; and the resolution parameter enables us to explore the hierarchical structure of networks. Thereafter, we derive the multiplicative update rule to learn the model parameters, and offer the proof of its correctness. Finally, we test our approach on a variety of synthetic and real-world networks, and compare it with some state-of-the-art algorithms. The results validate the superior performance of our new method. PMID:25822148
Image interpolation via regularized local linear regression.
Liu, Xianming; Zhao, Debin; Xiong, Ruiqin; Ma, Siwei; Gao, Wen; Sun, Huifang
2011-12-01
The linear regression model is a very attractive tool to design effective image interpolation schemes. Some regression-based image interpolation algorithms have been proposed in the literature, in which the objective functions are optimized by ordinary least squares (OLS). However, it is shown that interpolation with OLS may have some undesirable properties from a robustness point of view: even small amounts of outliers can dramatically affect the estimates. To address these issues, in this paper we propose a novel image interpolation algorithm based on regularized local linear regression (RLLR). Starting with the linear regression model, we replace the OLS error norm with the moving least squares (MLS) error norm, which leads to a robust estimator of local image structure. To keep the solution stable and avoid overfitting, we incorporate the l2-norm as the estimator complexity penalty. Moreover, motivated by recent progress on manifold-based semi-supervised learning, we explicitly consider the intrinsic manifold structure by making use of both measured and unmeasured data points. Specifically, our framework incorporates the geometric structure of the marginal probability distribution induced by unmeasured samples as an additional local smoothness preserving constraint. The optimal model parameters can be obtained with a closed-form solution by solving a convex optimization problem. Experimental results on benchmark test images demonstrate that the proposed method achieves very competitive performance with the state-of-the-art interpolation algorithms, especially in image edge structure preservation. © 2011 IEEE
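The core of such a fit, a weighted least-squares problem with an l2 complexity penalty, has a closed form. The sketch below shows it on a 1D toy problem; the Gaussian weight kernel and lam value are illustrative, and the manifold-regularization term of the full RLLR method is omitted:

```python
import numpy as np

def weighted_ridge(X, y, w, lam=1e-2):
    """Closed-form fit of argmin sum_i w_i (y_i - x_i^T theta)^2 + lam ||theta||^2.

    The weights w_i play the role of a moving-least-squares kernel centred on the
    sample to be interpolated; lam is the l2 penalty that keeps the local fit stable.
    """
    W = np.diag(w)
    return np.linalg.solve(X.T @ W @ X + lam * np.eye(X.shape[1]), X.T @ W @ y)

# toy 1D example: fit a local line around x = 0 with Gaussian MLS weights
x = np.linspace(-1, 1, 21)
y = 2.0 * x + 0.5 + 0.05 * np.random.default_rng(5).standard_normal(21)
X = np.column_stack([np.ones_like(x), x])
w = np.exp(-(x ** 2) / 0.1)             # heavier weight near the interpolation site
print(weighted_ridge(X, y, w))           # roughly recovers intercept 0.5 and slope 2
```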
An adaptive regularization parameter choice strategy for multispectral bioluminescence tomography
DOE Office of Scientific and Technical Information (OSTI.GOV)
Feng Jinchao; Qin Chenghu; Jia Kebin
2011-11-15
Purpose: Bioluminescence tomography (BLT) provides an effective tool for monitoring physiological and pathological activities in vivo. However, the measured data in bioluminescence imaging are corrupted by noise. Therefore, regularization methods are commonly used to find a regularized solution. Nevertheless, for the quality of the reconstructed bioluminescent source obtained by regularization methods, the choice of the regularization parameters is crucial. To date, the selection of regularization parameters remains challenging. With regard to the above problems, the authors proposed a BLT reconstruction algorithm with an adaptive parameter choice rule. Methods: The proposed reconstruction algorithm uses a diffusion equation for modeling the bioluminescent photon transport. The diffusion equation is solved with a finite element method. Computed tomography (CT) images provide anatomical information regarding the geometry of the small animal and its internal organs. To reduce the ill-posedness of BLT, spectral information and the optimal permissible source region are employed. Then, the relationship between the unknown source distribution and multiview and multispectral boundary measurements is established based on the finite element method and the optimal permissible source region. Since the measured data are noisy, the BLT reconstruction is formulated as l2 data fidelity and a general regularization term. When choosing the regularization parameters for BLT, an efficient model function approach is proposed, which does not require knowledge of the noise level. This approach only requires the computation of the residual and regularized solution norm. With this knowledge, we construct the model function to approximate the objective function, and the regularization parameter is updated iteratively. Results: First, the micro-CT based mouse phantom was used for simulation verification. Simulation experiments were used to illustrate why multispectral data were used rather than monochromatic data. Furthermore, the study conducted using an adaptive regularization parameter demonstrated our ability to accurately localize the bioluminescent source. With the adaptively estimated regularization parameter, the reconstructed center position of the source was (20.37, 31.05, 12.95) mm, and the distance to the real source was 0.63 mm. The results of the dual-source experiments further showed that our algorithm could localize the bioluminescent sources accurately. The authors then presented experimental evidence that the proposed algorithm exhibited improved computational efficiency over the heuristic method. The effectiveness of the new algorithm was also confirmed by comparing it with the L-curve method. Furthermore, various initial speculations regarding the regularization parameter were used to illustrate the convergence of our algorithm. Finally, an in vivo mouse experiment further illustrates the effectiveness of the proposed algorithm. Conclusions: Utilizing numerical, physical phantom and in vivo examples, we demonstrated that the bioluminescent sources could be reconstructed accurately with automatic regularization parameters. The proposed algorithm exhibited superior performance compared with both the heuristic regularization parameter choice method and the L-curve method in terms of computational speed and localization error.
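Both the model-function rule and the L-curve work only with the residual norm and the regularized-solution norm as functions of the regularization parameter. The sketch below simply tabulates those two norms for a Tikhonov problem; the ill-conditioned test matrix, noise level and lambda grid are illustrative, not the BLT forward model:

```python
import numpy as np

def tikhonov_curve(A, b, lambdas):
    """Residual norm and solution norm of the Tikhonov solution for each lambda.

    These are the only quantities a model-function or L-curve rule needs; a
    parameter-choice rule would then pick lambda from how they trade off.
    """
    pts = []
    for lam in lambdas:
        x = np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ b)
        pts.append((lam, np.linalg.norm(A @ x - b), np.linalg.norm(x)))
    return pts

n = 40
A = 1.0 / (np.arange(1, n + 1)[:, None] + np.arange(n)[None, :])   # ill-conditioned Hilbert-type matrix
rng = np.random.default_rng(6)
b = A @ np.ones(n) + 1e-4 * rng.standard_normal(n)                 # noisy data
for lam, r, s in tikhonov_curve(A, b, [1e-8, 1e-6, 1e-4, 1e-2]):
    print(f"lambda={lam:.0e}  residual={r:.2e}  ||x||={s:.2e}")
```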
A New Expanded Mixed Element Method for Convection-Dominated Sobolev Equation
Wang, Jinfeng; Li, Hong; Fang, Zhichao
2014-01-01
We propose and analyze a new expanded mixed element method, whose gradient belongs to the simple square integrable space instead of the classical H(div; Ω) space of Chen's expanded mixed element method. We study the new expanded mixed element method for the convection-dominated Sobolev equation, prove the existence and uniqueness of the finite element solution, and introduce a new expanded mixed projection. We derive the optimal a priori error estimates in the L2-norm for the scalar unknown u and a priori error estimates in the (L2)2-norm for its gradient λ and its flux σ. Moreover, we obtain the optimal a priori error estimates in the H1-norm for the scalar unknown u. Finally, we obtain some numerical results to illustrate the efficiency of the new method. PMID:24701153
The design of L1-norm visco-acoustic wavefield extrapolators
NASA Astrophysics Data System (ADS)
Salam, Syed Abdul; Mousa, Wail A.
2018-04-01
Explicit depth frequency-space (f-x) prestack imaging is an attractive mechanism for seismic imaging. To date, this method has mainly been applied to data migration assuming an acoustic medium, and very little work has considered visco-acoustic media. Real seismic data usually suffer from attenuation and dispersion effects. To compensate for attenuation in a visco-acoustic medium, new operators are required. We propose using the L1-norm minimization technique to design visco-acoustic f-x extrapolators. To show the accuracy and compensation of the operators, prestack depth migration is performed on the challenging Marmousi model for both acoustic and visco-acoustic datasets. The final migrated images show that the proposed L1-norm extrapolation is practically stable and improves the resolution of the images.
Exploring L1 model space in search of conductivity bounds for the MT problem
NASA Astrophysics Data System (ADS)
Wheelock, B. D.; Parker, R. L.
2013-12-01
Geophysical inverse problems of the type encountered in electromagnetic techniques are highly non-unique. As a result, any single inverted model, though feasible, is at best inconclusive and at worst misleading. In this paper, we use modified inversion methods to establish bounds on electrical conductivity within a model of the earth. Our method consists of two steps, each making use of the 1-norm in model regularization. Both 1-norm minimization problems are framed without approximation as non-negative least-squares (NNLS) problems. First, we must identify a parsimonious set of regions within the model for which upper and lower bounds on average conductivity will be sought. This is accomplished by minimizing the 1-norm of spatial variation, which produces a model with a limited number of homogeneous regions; in fact, the number of homogeneous regions will never be greater than the number of data, regardless of the number of free parameters supplied. The second step establishes bounds for each of these regions with pairs of inversions. The new suite of inversions also uses a 1-norm penalty, but applied to the conductivity values themselves, rather than the spatial variation thereof. In the bounding step we use the 1-norm of our model parameters because it is proportional to average conductivity. For a lower bound on average conductivity, the 1-norm within a bounding region is minimized. For an upper bound on average conductivity, the 1-norm everywhere outside a bounding region is minimized. The latter minimization has the effect of concentrating conductance into the bounding region. Taken together, these bounds are a measure of the uncertainty in the associated region of our model. Starting with a blocky inverse solution is key in the selection of the bounding regions. Of course, there is a tradeoff between resolution and uncertainty: an increase in resolution (smaller bounding regions), results in greater uncertainty (wider bounds). Minimization of the 1-norm of spatial variation delivers the fewest possible regions defined by a mean conductivity, the quantity we wish to bound. Thus, these regions present a natural set for which the most narrow and discriminating bounds can be found. For illustration, we apply these techniques to synthetic magnetotelluric (MT) data sets resulting from one-dimensional (1D) earth models. In each case we find that with realistic data coverage, any single inverted model can often stray from the truth, while the computed bounds on an encompassing region contain both the inverted and the true conductivities, indicating that our measure of model uncertainty is robust. Such estimates of uncertainty for conductivity can then be translated to bounds on important petrological parameters such as mineralogy, porosity, saturation, and fluid type.
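Since both steps above are framed without approximation as non-negative least-squares problems, a small instance can be handled directly by SciPy's NNLS solver; the toy kernel and non-negative "conductance" vector below are illustrative and do not reproduce the MT forward modelling:

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(7)
A = rng.standard_normal((30, 15))
x_true = np.zeros(15); x_true[[2, 9]] = [1.0, 3.0]      # non-negative "conductances"
b = A @ x_true + 0.01 * rng.standard_normal(30)

x_hat, resid = nnls(A, b)                               # solves min ||A x - b||_2 subject to x >= 0
print(np.round(x_hat, 2), "residual:", round(resid, 3))
```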
NASA Astrophysics Data System (ADS)
Yong, Peng; Liao, Wenyuan; Huang, Jianping; Li, Zhenchuan
2018-04-01
Full waveform inversion is an effective tool for recovering the properties of the Earth from seismograms. However, it suffers from local minima caused mainly by the limited accuracy of the starting model and the lack of a low-frequency component in the seismic data. Because of the high velocity contrast between salt and sediment, the relation between the waveform and velocity perturbation is strongly nonlinear. Therefore, salt inversion can easily get trapped in the local minima. Since the velocity of salt is nearly constant, we can make the most of this characteristic with total variation regularization to mitigate the local minima. In this paper, we develop an adaptive primal dual hybrid gradient method to implement total variation regularization by projecting the solution onto a total variation norm constrained convex set, through which the total variation norm constraint is satisfied at every model iteration. The smooth background velocities are first inverted and the perturbations are gradually obtained by successively relaxing the total variation norm constraints. Numerical experiment of the projection of the BP model onto the intersection of the total variation norm and box constraints has demonstrated the accuracy and efficiency of our adaptive primal dual hybrid gradient method. A workflow is designed to recover complex salt structures in the BP 2004 model and the 2D SEG/EAGE salt model, starting from a linear gradient model without using low-frequency data below 3 Hz. The salt inversion processes demonstrate that wavefield reconstruction inversion with a total variation norm and box constraints is able to overcome local minima and inverts the complex salt velocity layer by layer.
Retaining both discrete and smooth features in 1D and 2D NMR relaxation and diffusion experiments
NASA Astrophysics Data System (ADS)
Reci, A.; Sederman, A. J.; Gladden, L. F.
2017-11-01
A new method of regularization of 1D and 2D NMR relaxation and diffusion experiments is proposed and a robust algorithm for its implementation is introduced. The new form of regularization, termed the Modified Total Generalized Variation (MTGV) regularization, offers a compromise between distinguishing discrete and smooth features in the reconstructed distributions. The method is compared to the conventional method of Tikhonov regularization and the recently proposed method of L1 regularization, when applied to simulated data of 1D spin-lattice relaxation, T1, 1D spin-spin relaxation, T2, and 2D T1-T2 NMR experiments. A range of simulated distributions composed of two lognormally distributed peaks were studied. The distributions differed with regard to the variance of the peaks, which were designed to investigate a range of distributions containing only discrete, only smooth or both features in the same distribution. Three different signal-to-noise ratios were studied: 2000, 200 and 20. A new metric is proposed to compare the distributions reconstructed from the different regularization methods with the true distributions. The metric is designed to penalise reconstructed distributions which show artefact peaks. Based on this metric, MTGV regularization performs better than Tikhonov and L1 regularization in all cases except when the distribution is known to only comprise of discrete peaks, in which case L1 regularization is slightly more accurate than MTGV regularization.
A TVSCAD approach for image deblurring with impulsive noise
NASA Astrophysics Data System (ADS)
Gu, Guoyong; Jiang, Suhong; Yang, Junfeng
2017-12-01
We consider the image deblurring problem in the presence of impulsive noise. It is known that total variation (TV) regularization with L1-norm penalized data fitting (TVL1 for short) works reasonably well only when the level of impulsive noise is relatively low. For high level impulsive noise, TVL1 works poorly. The reason is that all data, both corrupted and noise free, are equally penalized in data fitting, leading to insurmountable difficulty in balancing regularization and data fitting. In this paper, we propose to combine TV regularization with a nonconvex smoothly clipped absolute deviation (SCAD) penalty for data fitting (TVSCAD for short). Our motivation is simply that data fitting should be enforced only when an observed data point is not severely corrupted, while for those data more likely to be severely corrupted, less or even no penalization should be enforced. A difference of convex functions algorithm is adopted to solve the nonconvex TVSCAD model, resulting in solving a sequence of TVL1-equivalent problems, each of which can then be solved efficiently by the alternating direction method of multipliers. Theoretically, we establish global convergence to a critical point of the nonconvex objective function. The R-linear and at-least-sublinear convergence rate results are derived for the cases of anisotropic and isotropic TV, respectively. Numerically, experimental results are given to show that the TVSCAD approach improves significantly on TVL1, especially for cases with high level impulsive noise, and is comparable with the recently proposed iteratively corrected TVL1 method (Bai et al 2016 Inverse Problems 32 085004).
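For concreteness, the SCAD penalty that replaces the plain l1 data-fitting term has a standard three-piece closed form (with the conventional choice a = 3.7); the sketch below evaluates it elementwise and shows how it flattens out for large residuals, which is the property the data term exploits:

```python
import numpy as np

def scad_penalty(t, lam, a=3.7):
    """Smoothly clipped absolute deviation (SCAD) penalty, evaluated elementwise.

    Behaves like lam*|t| near zero, but is constant for |t| > a*lam, so severely
    corrupted data contribute no additional cost and are effectively ignored.
    """
    at = np.abs(t)
    small = lam * at
    mid = (2 * a * lam * at - at ** 2 - lam ** 2) / (2 * (a - 1))
    large = lam ** 2 * (a + 1) / 2
    return np.where(at <= lam, small, np.where(at <= a * lam, mid, large))

print(scad_penalty(np.array([0.1, 1.0, 5.0, 50.0]), lam=1.0))   # [0.1  1.0  2.35  2.35]
```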
Aggarwal, Priya; Gupta, Anubha
2017-12-01
A number of reconstruction methods have been proposed recently for accelerated functional Magnetic Resonance Imaging (fMRI) data collection. However, existing methods suffer from the challenge of greater artifacts at high acceleration factors. This paper addresses the issue of accelerating fMRI collection via undersampled k-space measurements combined with the proposed method based on l1-l1 norm constraints, wherein we impose the first l1-norm sparsity on the voxel time series (temporal data) in the transformed domain and the second l1-norm sparsity on the successive difference of the same temporal data. Hence, we name the proposed method the Double Temporal Sparsity based Reconstruction (DTSR) method. The robustness of the proposed DTSR method has been thoroughly evaluated both at the subject level and at the group level on real fMRI data. Results are presented at various acceleration factors. Quantitative analysis in terms of Peak Signal-to-Noise Ratio (PSNR) and other metrics, and qualitative analysis in terms of reproducibility of brain Resting State Networks (RSNs) demonstrate that the proposed method is accurate and robust. In addition, the proposed DTSR method preserves brain networks that are important for studying fMRI data. Compared to the existing methods, the DTSR method shows promising potential with an improvement of 10-12 dB in PSNR with acceleration factors up to 3.5 on resting state fMRI data. Simulation results on real data demonstrate that the DTSR method can be used to acquire accelerated fMRI with accurate detection of RSNs. Copyright © 2017 Elsevier Ltd. All rights reserved.
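A hedged sketch of the two temporal sparsity terms in a DTSR-style objective for a single voxel time series is shown below; the DCT is used only as a stand-in for the paper's temporal transform, the weights lam1 and lam2 are illustrative, and the k-space data-fidelity term is omitted:

```python
import numpy as np
from scipy.fft import dct

def dtsr_sparsity_terms(x, lam1=1.0, lam2=1.0):
    """The two l1 penalties of a DTSR-style objective for one voxel time series x.

    lam1 weights sparsity of the transformed series (DCT as a stand-in transform),
    lam2 weights sparsity of the successive differences of the same series.
    """
    term_transform = lam1 * np.sum(np.abs(dct(x, norm="ortho")))
    term_diff = lam2 * np.sum(np.abs(np.diff(x)))
    return term_transform + term_diff

t = np.linspace(0, 10, 200)
x = np.sin(2 * np.pi * 0.1 * t) + 0.02 * np.random.default_rng(8).standard_normal(200)
print(dtsr_sparsity_terms(x))
```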
Passive shimming of a superconducting magnet using the L1-norm regularized least square algorithm.
Kong, Xia; Zhu, Minhua; Xia, Ling; Wang, Qiuliang; Li, Yi; Zhu, Xuchen; Liu, Feng; Crozier, Stuart
2016-02-01
The uniformity of the static magnetic field B0 is of prime importance for an MRI system. The passive shimming technique is usually applied to improve the uniformity of the static field by optimizing the layout of a series of steel shims. The steel pieces are fixed in the drawers in the inner bore of the superconducting magnet, and produce a magnetizing field in the imaging region to compensate for the inhomogeneity of the B0 field. In practice, the total mass of steel used for shimming should be minimized, in addition to the field uniformity requirement. This is because the presence of steel shims may introduce a thermal stability problem. The passive shimming procedure is typically realized using the linear programming (LP) method. The LP approach however, is generally slow and also has difficulty balancing the field quality and the total amount of steel for shimming. In this paper, we have developed a new algorithm that is better able to balance the dual constraints of field uniformity and the total mass of the shims. The least square method is used to minimize the magnetic field inhomogeneity over the imaging surface with the total mass of steel being controlled by an L1-norm based constraint. The proposed algorithm has been tested with practical field data, and the results show that, with similar computational cost and mass of shim material, the new algorithm achieves superior field uniformity (43% better for the test case) compared with the conventional linear programming approach. Copyright © 2016 Elsevier Inc. All rights reserved.
Lex-SVM: exploring the potential of exon expression profiling for disease classification.
Yuan, Xiongying; Zhao, Yi; Liu, Changning; Bu, Dongbo
2011-04-01
Exon expression profiling technologies, including exon arrays and RNA-Seq, measure the abundance of every exon in a gene. Compared with gene expression profiling technologies like the 3' array, exon expression profiling technologies can detect alterations in both transcription and alternative splicing, and therefore they are expected to be more sensitive in diagnosis. However, exon expression profiling also brings higher dimension, more redundancy, and significant correlation among features. Ignoring the correlation structure among exons of a gene, a popular classification method like L1-SVM selects exons individually from each gene and thus is vulnerable to noise. To overcome this limitation, we present in this paper a new variant of SVM named Lex-SVM to incorporate the correlation structure among exons and known splicing patterns to promote classification performance. Specifically, we construct a new norm, ex-norm, including our prior knowledge on the exon correlation structure to regularize the coefficients of a linear SVM. Lex-SVM can be solved efficiently using standard linear programming techniques. The advantage of Lex-SVM is that it can select features group-wise, force features in a subgroup to take equal weights and exclude the features that contradict the majority in the subgroup. Experimental results suggest that on exon expression profiles, Lex-SVM is more accurate than existing methods. Lex-SVM also generates a more compact model and selects genes more consistently in cross-validation. Unlike L1-SVM, which selects only one exon in a gene, Lex-SVM assigns equal weights to as many exons in a gene as possible, lending itself more easily to further interpretation.
Robust L1-norm two-dimensional linear discriminant analysis.
Li, Chun-Na; Shao, Yuan-Hai; Deng, Nai-Yang
2015-05-01
In this paper, we propose an L1-norm two-dimensional linear discriminant analysis (L1-2DLDA) with robust performance. Different from the conventional two-dimensional linear discriminant analysis with L2-norm (L2-2DLDA), where the optimization problem is transformed into a generalized eigenvalue problem, the optimization problem in our L1-2DLDA is solved by a simple justifiable iterative technique, and its convergence is guaranteed. Compared with L2-2DLDA, our L1-2DLDA is more robust to outliers and noise since the L1-norm is used. This is supported by our preliminary experiments on a toy example and face datasets, which show the improvement of our L1-2DLDA over L2-2DLDA. Copyright © 2015 Elsevier Ltd. All rights reserved.
Gorban, A N; Mirkes, E M; Zinovyev, A
2016-12-01
Most machine learning approaches have stemmed from the application of the principle of minimizing the mean squared distance, based on computationally efficient quadratic optimization methods. However, when faced with high-dimensional and noisy data, quadratic error functionals demonstrate many weaknesses, including high sensitivity to contaminating factors and the curse of dimensionality. Therefore, many recent applications in machine learning have exploited properties of non-quadratic error functionals based on the L1 norm or even sub-linear potentials corresponding to quasinorms Lp (0 < p < 1).
Cai, Congbo; Chen, Zhong; van Zijl, Peter C.M.
2017-01-01
The reconstruction of MR quantitative susceptibility mapping (QSM) from local phase measurements is an ill posed inverse problem and different regularization strategies incorporating a priori information extracted from magnitude and phase images have been proposed. However, the anatomy observed in magnitude and phase images does not always coincide spatially with that in susceptibility maps, which could give erroneous estimation in the reconstructed susceptibility map. In this paper, we develop a structural feature based collaborative reconstruction (SFCR) method for QSM including both magnitude and susceptibility based information. The SFCR algorithm is composed of two consecutive steps corresponding to complementary reconstruction models, each with a structural feature based l1 norm constraint and a voxel fidelity based l2 norm constraint, which allows both the structure edges and tiny features to be recovered, whereas the noise and artifacts could be reduced. In the M-step, the initial susceptibility map is reconstructed by employing a k-space based compressed sensing model incorporating magnitude prior. In the S-step, the susceptibility map is fitted in spatial domain using weighted constraints derived from the initial susceptibility map from the M-step. Simulations and in vivo human experiments at 7T MRI show that the SFCR method provides high quality susceptibility maps with improved RMSE and MSSIM. Finally, the susceptibility values of deep gray matter are analyzed in multiple head positions, with the supine position most approximate to the gold standard COSMOS result. PMID:27019480
Resource Balancing Control Allocation
NASA Technical Reports Server (NTRS)
Frost, Susan A.; Bodson, Marc
2010-01-01
Next generation aircraft with a large number of actuators will require advanced control allocation methods to compute the actuator commands needed to follow desired trajectories while respecting system constraints. Previously, algorithms were proposed to minimize the l1 or l2 norms of the tracking error and of the control effort. The paper discusses the alternative choice of using the l1 norm for minimization of the tracking error and a normalized l(infinity) norm, or sup norm, for minimization of the control effort. The algorithm computes the norm of the actuator deflections scaled by the actuator limits. Minimization of the control effort then translates into the minimization of the maximum actuator deflection as a percentage of its range of motion. The paper shows how the problem can be solved effectively by converting it into a linear program and solving it using a simplex algorithm. Properties of the algorithm are investigated through examples. In particular, the min-max criterion results in a type of resource balancing, where the resources are the control surfaces and the algorithm balances these resources to achieve the desired command. A study of the sensitivity of the algorithms to the data is presented, which shows that the normalized l(infinity) algorithm has the lowest sensitivity, although high sensitivities are observed whenever the limits of performance are reached.
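As a rough illustration of the linear-program conversion described above, the sketch below minimizes the maximum normalized actuator deflection subject to an exact allocation constraint, using scipy's linprog. The effectiveness matrix B, desired command d and actuator limits are made-up numbers, not the aircraft model studied in the paper.

```python
import numpy as np
from scipy.optimize import linprog

def minmax_allocation(B, d, u_max):
    """Find actuator commands u with B u = d that minimize
    max_i |u_i| / u_max[i] (the normalized l-infinity control effort)."""
    m, n = B.shape
    c = np.r_[np.zeros(n), 1.0]                       # decision vector z = [u, t]; minimize t
    A_eq = np.hstack([B, np.zeros((m, 1))])           # B u = d (tracking assumed feasible)
    A_ub = np.vstack([np.hstack([np.eye(n), -u_max[:, None]]),    #  u_i - u_max[i]*t <= 0
                      np.hstack([-np.eye(n), -u_max[:, None]])])  # -u_i - u_max[i]*t <= 0
    b_ub = np.zeros(2 * n)
    bounds = [(-um, um) for um in u_max] + [(0, None)]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=d, bounds=bounds)
    return res.x[:n], res.x[n]

B = np.array([[1.0, 0.5, -0.3, 0.8],                  # made-up control effectiveness matrix
              [0.2, -1.0, 0.6, 0.4]])
d = np.array([0.7, -0.2])                             # desired command (e.g., moments)
u, t = minmax_allocation(B, d, u_max=np.array([1.0, 1.0, 0.5, 0.8]))
print("commands:", np.round(u, 3), " max normalized deflection:", round(t, 3))
```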
Multi-task feature learning by using trace norm regularization
NASA Astrophysics Data System (ADS)
Jiangmei, Zhang; Binfeng, Yu; Haibo, Ji; Wang, Kunpeng
2017-11-01
Multi-task learning can exploit the correlation among multiple related machine learning problems to improve performance. This paper considers applying the multi-task learning method to learn a single task. We propose a new learning approach, which employs the mixture-of-experts model to divide a learning task into several related sub-tasks, and then uses trace norm regularization to extract a common feature representation of these sub-tasks. A nonlinear extension of this approach using kernels is also provided. Experiments conducted on both simulated and real data sets demonstrate the advantage of the proposed approach.
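Trace norm regularization is commonly handled through its proximal operator, which simply soft-thresholds the singular values of the stacked task-coefficient matrix. The snippet below illustrates that single building block on synthetic data; it is not the paper's full mixture-of-experts procedure, and the threshold value is arbitrary.

```python
import numpy as np

def svt(W, tau):
    """Proximal operator of tau*||W||_* (trace norm): soft-threshold the singular values."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

# A noisy rank-2 "task coefficient" matrix: thresholding its singular values
# pulls the estimate back toward a shared low-rank feature representation.
rng = np.random.default_rng(1)
W_low_rank = rng.standard_normal((30, 2)) @ rng.standard_normal((2, 8))
W_noisy = W_low_rank + 0.1 * rng.standard_normal((30, 8))
W_hat = svt(W_noisy, tau=1.0)
print("rank before/after thresholding:",
      np.linalg.matrix_rank(W_noisy), np.linalg.matrix_rank(W_hat))
```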
Robust 2DPCA with non-greedy l1 -norm maximization for image analysis.
Wang, Rong; Nie, Feiping; Yang, Xiaojun; Gao, Feifei; Yao, Minli
2015-05-01
2-D principal component analysis based on the l1-norm (2DPCA-L1) is a recently developed approach for robust dimensionality reduction and feature extraction in the image domain. Normally, a greedy strategy is applied due to the difficulty of directly solving the l1-norm maximization problem, which, however, easily gets stuck in local solutions. In this paper, we propose a robust 2DPCA with non-greedy l1-norm maximization in which all projection directions are optimized simultaneously. Experimental results on face and other datasets confirm the effectiveness of the proposed approach.
Kim, Junghoe; Calhoun, Vince D; Shim, Eunsoo; Lee, Jong-Hwan
2016-01-01
Functional connectivity (FC) patterns obtained from resting-state functional magnetic resonance imaging data are commonly employed to study neuropsychiatric conditions by using pattern classifiers such as the support vector machine (SVM). Meanwhile, a deep neural network (DNN) with multiple hidden layers has shown its ability to systematically extract lower-to-higher level information of image and speech data from lower-to-higher hidden layers, markedly enhancing classification accuracy. The objective of this study was to adopt the DNN for whole-brain resting-state FC pattern classification of schizophrenia (SZ) patients vs. healthy controls (HCs) and identification of aberrant FC patterns associated with SZ. We hypothesized that the lower-to-higher level features learned via the DNN would significantly enhance the classification accuracy, and proposed an adaptive learning algorithm to explicitly control the weight sparsity in each hidden layer via L1-norm regularization. Furthermore, the weights were initialized via stacked autoencoder based pre-training to further improve the classification performance. Classification accuracy was systematically evaluated as a function of (1) the number of hidden layers/nodes, (2) the use of L1-norm regularization, (3) the use of the pre-training, (4) the use of framewise displacement (FD) removal, and (5) the use of anatomical/functional parcellation. Using FC patterns from anatomically parcellated regions without FD removal, an error rate of 14.2% was achieved by employing three hidden layers and 50 hidden nodes with both L1-norm regularization and pre-training, which was substantially lower than the error rate from the SVM (22.3%). Moreover, the trained DNN weights (i.e., the learned features) were found to represent the hierarchical organization of aberrant FC patterns in SZ compared with HC. Specifically, pairs of nodes extracted from the lower hidden layer represented sparse FC patterns implicated in SZ, which was quantified by using kurtosis/modularity measures and features from the higher hidden layer showed holistic/global FC patterns differentiating SZ from HC. Our proposed schemes and reported findings attained by using the DNN classifier and whole-brain FC data suggest that such approaches show improved ability to learn hidden patterns in brain imaging data, which may be useful for developing diagnostic tools for SZ and other neuropsychiatric disorders and identifying associated aberrant FC patterns. Copyright © 2015 Elsevier Inc. All rights reserved.
Manifold optimization-based analysis dictionary learning with an ℓ1∕2-norm regularizer.
Li, Zhenni; Ding, Shuxue; Li, Yujie; Yang, Zuyuan; Xie, Shengli; Chen, Wuhui
2018-02-01
Recently there has been increasing attention towards analysis dictionary learning. In analysis dictionary learning, it is an open problem to obtain the strong sparsity-promoting solutions efficiently while simultaneously avoiding the trivial solutions of the dictionary. In this paper, to obtain the strong sparsity-promoting solutions, we employ the ℓ 1∕2 norm as a regularizer. The very recent study on ℓ 1∕2 norm regularization theory in compressive sensing shows that its solutions can give sparser results than using the ℓ 1 norm. We transform a complex nonconvex optimization into a number of one-dimensional minimization problems. Then the closed-form solutions can be obtained efficiently. To avoid trivial solutions, we apply manifold optimization to update the dictionary directly on the manifold satisfying the orthonormality constraint, so that the dictionary can avoid the trivial solutions well while simultaneously capturing the intrinsic properties of the dictionary. The experiments with synthetic and real-world data verify that the proposed algorithm for analysis dictionary learning can not only obtain strong sparsity-promoting solutions efficiently, but also learn more accurate dictionary in terms of dictionary recovery and image processing than the state-of-the-art algorithms. Copyright © 2017 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Durand, Sylvain; Frapart, Yves-Michel; Kerebel, Maud
2017-11-01
Spatial electron paramagnetic resonance imaging (EPRI) is a recent method to localize and characterize free radicals in vivo or in vitro, leading to applications in material and biomedical sciences. To improve the quality of the reconstruction obtained by EPRI, a variational method is proposed to invert the image formation model. It is based on a least-squares data-fidelity term and the total variation and Besov seminorm for the regularization term. To fully comprehend the Besov seminorm, an implementation using the curvelet transform and the L1 norm enforcing sparsity is proposed. It allows our model to reconstruct both images where acquisition information is missing and images with details in textured areas, thus opening possibilities to reduce acquisition times. To implement the minimization problem using the algorithm developed by Chambolle and Pock, a thorough analysis of the direct model is undertaken and the latter is inverted while avoiding the use of filtered backprojection (FBP) and of the non-uniform Fourier transform. Numerical experiments are carried out on simulated data, where the proposed model outperforms both visually and quantitatively the classical model using deconvolution and FBP. Improved reconstructions on real data, acquired on an irradiated distal phalanx, were successfully obtained.
The Laplace method for probability measures in Banach spaces
NASA Astrophysics Data System (ADS)
Piterbarg, V. I.; Fatalov, V. R.
1995-12-01
Contents §1. Introduction Chapter I. Asymptotic analysis of continual integrals in Banach space, depending on a large parameter §2. The large deviation principle and logarithmic asymptotics of continual integrals §3. Exact asymptotics of Gaussian integrals in Banach spaces: the Laplace method 3.1. The Laplace method for Gaussian integrals taken over the whole Hilbert space: isolated minimum points ([167], I) 3.2. The Laplace method for Gaussian integrals in Hilbert space: the manifold of minimum points ([167], II) 3.3. The Laplace method for Gaussian integrals in Banach space ([90], [174], [176]) 3.4. Exact asymptotics of large deviations of Gaussian norms §4. The Laplace method for distributions of sums of independent random elements with values in Banach space 4.1. The case of a non-degenerate minimum point ([137], I) 4.2. A degenerate isolated minimum point and the manifold of minimum points ([137], II) §5. Further examples 5.1. The Laplace method for the local time functional of a Markov symmetric process ([217]) 5.2. The Laplace method for diffusion processes, a finite number of non-degenerate minimum points ([116]) 5.3. Asymptotics of large deviations for Brownian motion in the Hölder norm 5.4. Non-asymptotic expansion of a strong stable law in Hilbert space ([41]) Chapter II. The double sum method - a version of the Laplace method in the space of continuous functions §6. Pickands' method of double sums 6.1. General situations 6.2. Asymptotics of the distribution of the maximum of a Gaussian stationary process 6.3. Asymptotics of the probability of a large excursion of a Gaussian non-stationary process §7. Probabilities of large deviations of trajectories of Gaussian fields 7.1. Homogeneous fields and fields with constant dispersion 7.2. Finitely many maximum points of dispersion 7.3. Manifold of maximum points of dispersion 7.4. Asymptotics of distributions of maxima of Wiener fields §8. Exact asymptotics of large deviations of the norm of Gaussian vectors and processes with values in the spaces L_k^p and l^2. Gaussian fields with the set of parameters in Hilbert space 8.1 Exact asymptotics of the distribution of the l_k^p-norm of a Gaussian finite-dimensional vector with dependent coordinates, p > 1 8.2. Exact asymptotics of probabilities of high excursions of trajectories of processes of type \\chi^2 8.3. Asymptotics of the probabilities of large deviations of Gaussian processes with a set of parameters in Hilbert space [74] 8.4. Asymptotics of distributions of maxima of the norms of l^2-valued Gaussian processes 8.5. Exact asymptotics of large deviations for the l^2-valued Ornstein-Uhlenbeck process Bibliography
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, H; Chen, J; Pouliot, J
2015-06-15
Purpose: Compressed sensing (CS) has been used for CT (4DCT/CBCT) reconstruction with few projections to reduce dose of radiation. Total-variation (TV) in L1-minimization (min.) with local information is the prevalent technique in CS, but it can be prone to noise. To address the problem, this work proposes to apply a new image processing technique, called non-local TV (NLTV), to CS based CT reconstruction, and incorporate the reweighted L1-norm into it for more precise reconstruction. Methods: TV minimizes intensity variations by considering two local neighboring voxels, which can be prone to noise, possibly damaging the reconstructed CT image. NLTV, contrarily, utilizes more global information by computing a weight function of the current voxel relative to a surrounding search area. In fact, it might be challenging to obtain an optimal solution due to difficulty in defining the weight function with appropriate parameters. Introducing reweighted L1-min., designed for approximation to ideal L0-min., can reduce the dependence on defining the weight function, therefore improving accuracy of the solution. This work implemented the NLTV combined with reweighted L1-min. by the Split Bregman Iterative method. For evaluation, a noisy digital phantom and a pelvic CT image are employed to compare the quality of images reconstructed by TV, NLTV and reweighted NLTV. Results: In both cases, conventional and reweighted NLTV outperform TV min. in signal-to-noise ratio (SNR) and root-mean squared errors of the reconstructed images. Relative to conventional NLTV, NLTV with the reweighted L1-norm was able to slightly improve SNR, while greatly increasing the contrast between tissues due to the additional iterative reweighting process. Conclusion: NLTV min. can provide more precise compressed sensing based CT image reconstruction by incorporating the reweighted L1-norm, while maintaining greater robustness to the noise effect than TV min.
Regularized minimum I-divergence methods for the inverse blackbody radiation problem
NASA Astrophysics Data System (ADS)
Choi, Kerkil; Lanterman, Aaron D.; Shin, Jaemin
2006-08-01
This paper proposes iterative methods for estimating the area temperature distribution of a blackbody from its total radiated power spectrum measurements. This is called the inverse blackbody radiation problem. This problem is inherently ill-posed due to the characteristics of the kernel in the underlying integral equation given by Planck's law. The functions involved in the problem are all non-negative. Csiszár's I-divergence is an information-theoretic discrepancy measure between two non-negative functions. We derive iterative methods for minimizing Csiszár's I-divergence between the measured power spectrum and the power spectrum arising from the estimate according to the integral equation. Due to the ill-posedness of the problem, unconstrained algorithms often produce poor estimates, especially when the measurements are corrupted by noise. To alleviate this difficulty, we apply regularization methods to our algorithms. Penalties based on Shannon's entropy, the L1-norm and Good's roughness are chosen to suppress the undesirable artefacts. When a penalty is applied, the pertinent optimization that needs to be performed at each iteration is no longer trivial. In particular, Good's roughness causes couplings between estimate components. To handle this issue, we adapt Green's one-step-late method. This choice is based on the important fact that our minimum I-divergence algorithms can be interpreted as asymptotic forms of certain expectation-maximization algorithms. The effectiveness of our methods is illustrated via various numerical experiments.
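For readers unfamiliar with I-divergence minimization, the sketch below shows the kind of multiplicative update it leads to, with Green's one-step-late treatment of an added penalty. It is only an assumed toy setup: the kernel and power-spectrum data are synthetic, and the penalty is a plain quadratic roughness term rather than the Shannon entropy, L1-norm or Good's roughness penalties studied in the paper.

```python
import numpy as np

def i_divergence_updates(A, g, n_iter=200, beta=0.0, penalty_grad=None):
    """Multiplicative (EM / Richardson-Lucy style) updates for min_f I(g || A f), f >= 0.
    With beta > 0, the one-step-late scheme divides by A^T 1 + beta * dR/df(f_old)."""
    f = np.ones(A.shape[1])
    backprojected_ones = A.T @ np.ones(A.shape[0])
    for _ in range(n_iter):
        ratio = g / np.maximum(A @ f, 1e-12)          # measured / predicted spectrum
        denom = backprojected_ones.copy()
        if beta > 0.0 and penalty_grad is not None:
            denom = denom + beta * penalty_grad(f)    # one-step-late penalty gradient
        f = f * (A.T @ ratio) / np.maximum(denom, 1e-12)
    return f

def roughness_grad(f):
    """Gradient of a simple quadratic roughness penalty sum_i (f_i - f_{i-1})^2."""
    return 2.0 * np.convolve(f, [-1.0, 2.0, -1.0], mode="same")

# Synthetic non-negative kernel and area-temperature distribution (placeholders).
rng = np.random.default_rng(2)
A = np.abs(rng.standard_normal((120, 80)))
f_true = np.exp(-0.5 * ((np.arange(80) - 40) / 8.0) ** 2)
g = A @ f_true + 0.01 * rng.random(120)
f_hat = i_divergence_updates(A, g, beta=0.05, penalty_grad=roughness_grad)
print("relative reconstruction error:", np.linalg.norm(f_hat - f_true) / np.linalg.norm(f_true))
```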
Kan, Hirohito; Arai, Nobuyuki; Takizawa, Masahiro; Omori, Kazuyoshi; Kasai, Harumasa; Kunitomo, Hiroshi; Hirose, Yasujiro; Shibamoto, Yuta
2018-06-11
We developed a non-regularized, variable kernel, sophisticated harmonic artifact reduction for phase data (NR-VSHARP) method to accurately estimate local tissue fields without regularization for quantitative susceptibility mapping (QSM). We then used a digital brain phantom to evaluate the accuracy of the NR-VSHARP method, and compared it with the VSHARP and iterative spherical mean value (iSMV) methods through in vivo human brain experiments. Our proposed NR-VSHARP method, which uses variable spherical mean value (SMV) kernels, minimizes L2 norms only within the volume of interest to reduce phase errors and save cortical information without regularization. In a numerical phantom study, relative local field and susceptibility map errors were determined using NR-VSHARP, VSHARP, and iSMV. Additionally, various background field elimination methods were used to image the human brain. In a numerical phantom study, the use of NR-VSHARP considerably reduced the relative local field and susceptibility map errors throughout a digital whole brain phantom, compared with VSHARP and iSMV. In the in vivo experiment, the NR-VSHARP-estimated local field could sufficiently achieve minimal boundary losses and phase error suppression throughout the brain. Moreover, the susceptibility map generated using NR-VSHARP minimized the occurrence of streaking artifacts caused by insufficient background field removal. Our proposed NR-VSHARP method yields minimal boundary losses and highly precise phase data. Our results suggest that this technique may facilitate high-quality QSM. Copyright © 2017. Published by Elsevier Inc.
Regularity of random attractors for fractional stochastic reaction-diffusion equations on Rn
NASA Astrophysics Data System (ADS)
Gu, Anhui; Li, Dingshi; Wang, Bixiang; Yang, Han
2018-06-01
We investigate the regularity of random attractors for the non-autonomous non-local fractional stochastic reaction-diffusion equations in Hs (Rn) with s ∈ (0 , 1). We prove the existence and uniqueness of the tempered random attractor that is compact in Hs (Rn) and attracts all tempered random subsets of L2 (Rn) with respect to the norm of Hs (Rn). The main difficulty is to show the pullback asymptotic compactness of solutions in Hs (Rn) due to the noncompactness of Sobolev embeddings on unbounded domains and the almost sure nondifferentiability of the sample paths of the Wiener process. We establish such compactness by the ideas of uniform tail-estimates and the spectral decomposition of solutions in bounded domains.
Feature Grouping and Selection Over an Undirected Graph.
Yang, Sen; Yuan, Lei; Lai, Ying-Cheng; Shen, Xiaotong; Wonka, Peter; Ye, Jieping
2012-01-01
High-dimensional regression/classification continues to be an important and challenging problem, especially when features are highly correlated. Feature selection, combined with additional structure information on the features has been considered to be promising in promoting regression/classification performance. Graph-guided fused lasso (GFlasso) has recently been proposed to facilitate feature selection and graph structure exploitation, when features exhibit certain graph structures. However, the formulation in GFlasso relies on pairwise sample correlations to perform feature grouping, which could introduce additional estimation bias. In this paper, we propose three new feature grouping and selection methods to resolve this issue. The first method employs a convex function to penalize the pairwise l ∞ norm of connected regression/classification coefficients, achieving simultaneous feature grouping and selection. The second method improves the first one by utilizing a non-convex function to reduce the estimation bias. The third one is the extension of the second method using a truncated l 1 regularization to further reduce the estimation bias. The proposed methods combine feature grouping and feature selection to enhance estimation accuracy. We employ the alternating direction method of multipliers (ADMM) and difference of convex functions (DC) programming to solve the proposed formulations. Our experimental results on synthetic data and two real datasets demonstrate the effectiveness of the proposed methods.
Control Allocation with Load Balancing
NASA Technical Reports Server (NTRS)
Bodson, Marc; Frost, Susan A.
2009-01-01
Next generation aircraft with a large number of actuators will require advanced control allocation methods to compute the actuator commands needed to follow desired trajectories while respecting system constraints. Previously, algorithms were proposed to minimize the l1 or l2 norms of the tracking error and of the actuator deflections. The paper discusses the alternative choice of the l(infinity) norm, or sup norm. Minimization of the control effort translates into the minimization of the maximum actuator deflection (min-max optimization). The paper shows how the problem can be solved effectively by converting it into a linear program and solving it using a simplex algorithm. Properties of the algorithm are also investigated through examples. In particular, the min-max criterion results in a type of load balancing, where the load is the desired command and the algorithm balances this load among various actuators. The solution using the l(infinity) norm also results in better robustness to failures and lower sensitivity to nonlinearities in illustrative examples.
Sparsity-Aware DOA Estimation Scheme for Noncircular Source in MIMO Radar.
Wang, Xianpeng; Wang, Wei; Li, Xin; Liu, Qi; Liu, Jing
2016-04-14
In this paper, a novel sparsity-aware direction of arrival (DOA) estimation scheme for a noncircular source is proposed in multiple-input multiple-output (MIMO) radar. In the proposed method, the reduced-dimensional transformation technique is adopted to eliminate the redundant elements. Then, exploiting the noncircularity of signals, a joint sparsity-aware scheme based on the reweighted l1 norm penalty is formulated for DOA estimation, in which the diagonal elements of the weight matrix are the coefficients of the noncircular MUSIC-like (NC MUSIC-like) spectrum. Compared to the existing l1 norm penalty-based methods, the proposed scheme provides higher angular resolution and better DOA estimation performance. Results from numerical experiments are used to show the effectiveness of our proposed method.
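The reweighted l1 mechanism referred to above can be illustrated generically: solve a weighted l1 problem, recompute the weights from the current estimate, and repeat. The sketch below uses plain 1/(|x|+eps) weights with a proximal-gradient inner solver on a random synthetic dictionary; it is not the NC MUSIC-like spectral weighting or the MIMO array model of the paper.

```python
import numpy as np

def weighted_l1_ista(A, b, w, lam, n_iter=300):
    """Minimize 0.5*||A x - b||^2 + lam * sum_i w_i*|x_i| by proximal gradient."""
    L = np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        z = x - A.T @ (A @ x - b) / L
        x = np.sign(z) * np.maximum(np.abs(z) - lam * w / L, 0.0)
    return x

def reweighted_l1(A, b, lam=0.1, n_outer=5, eps=1e-2):
    """Outer reweighting loop: entries that are already large get a smaller penalty,
    which sharpens the sparse solution (Candes-Wakin style reweighting)."""
    w = np.ones(A.shape[1])
    x = np.zeros(A.shape[1])
    for _ in range(n_outer):
        x = weighted_l1_ista(A, b, w, lam)
        w = 1.0 / (np.abs(x) + eps)
    return x

rng = np.random.default_rng(3)
A = rng.standard_normal((40, 100))                 # placeholder overcomplete dictionary
x_true = np.zeros(100)
x_true[[10, 55, 80]] = [1.0, -0.8, 0.6]
b = A @ x_true + 0.01 * rng.standard_normal(40)
x_hat = reweighted_l1(A, b)
print("recovered support:", np.flatnonzero(np.abs(x_hat) > 0.1))
```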
Schatten Matrix Norm Based Polarimetric SAR Data Regularization Application over Chamonix Mont-Blanc
NASA Astrophysics Data System (ADS)
Le, Thu Trang; Atto, Abdourrahmane M.; Trouve, Emmanuel
2013-08-01
The paper addresses the filtering of Polarimetric Synthetic Aperture Radar (PolSAR) images. The filtering strategy is based on a regularizing cost function associated with matrix norms called the Schatten p-norms. These norms operate on the matrix singular values. The proposed approach is illustrated on scattering and coherency matrices from RADARSAT-2 PolSAR images over the Chamonix Mont-Blanc site. Several p values of the Schatten p-norm are surveyed and their capabilities for filtering PolSAR images are assessed in comparison with conventional strategies for filtering PolSAR data.
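Since the Schatten p-norm is just the l_p norm of a matrix's singular values, it can be computed in a few lines; the matrix below is a made-up stand-in for a coherency matrix, not RADARSAT-2 data.

```python
import numpy as np

def schatten_norm(M, p):
    """Schatten p-norm: the l_p norm of the singular values of M."""
    s = np.linalg.svd(M, compute_uv=False)
    return float(np.sum(s ** p) ** (1.0 / p))

# p = 1 gives the nuclear (trace) norm, p = 2 the Frobenius norm; smaller p
# penalizes rank more aggressively, which is the tuning knob surveyed above.
T = np.array([[2.0, 0.5 + 0.1j, 0.1],
              [0.5 - 0.1j, 1.5, 0.2],
              [0.1, 0.2, 0.8]])                    # toy Hermitian coherency-like matrix
for p in (0.5, 1.0, 2.0):
    print("p =", p, " Schatten norm =", round(schatten_norm(T, p), 3))
```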
Whole head quantitative susceptibility mapping using a least-norm direct dipole inversion method.
Sun, Hongfu; Ma, Yuhan; MacDonald, M Ethan; Pike, G Bruce
2018-06-15
A new dipole field inversion method for whole head quantitative susceptibility mapping (QSM) is proposed. Instead of performing background field removal and local field inversion sequentially, the proposed method performs dipole field inversion directly on the total field map in a single step. To aid this under-determined and ill-posed inversion process and obtain robust QSM images, Tikhonov regularization is implemented to seek the local susceptibility solution with the least-norm (LN) using the L-curve criterion. The proposed LN-QSM does not require brain edge erosion, thereby preserving the cerebral cortex in the final images. This should improve its applicability for QSM-based cortical grey matter measurement, functional imaging and venography of full brain. Furthermore, LN-QSM also enables susceptibility mapping of the entire head without the need for brain extraction, which makes QSM reconstruction more automated and less dependent on intermediate pre-processing methods and their associated parameters. It is shown that the proposed LN-QSM method reduced errors in a numerical phantom simulation, improved accuracy in a gadolinium phantom experiment, and suppressed artefacts in nine subjects, as compared to two-step and other single-step QSM methods. Measurements of deep grey matter and skull susceptibilities from LN-QSM are consistent with established reconstruction methods. Copyright © 2018 Elsevier Inc. All rights reserved.
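A bare-bones version of Tikhonov-regularized inversion with an L-curve scan over the regularization parameter is sketched below; the kernel and data are generic synthetic stand-ins rather than a dipole field model, and the corner of the L-curve would normally be picked automatically.

```python
import numpy as np

def tikhonov_svd(A, b, lam):
    """Tikhonov-regularized solution of min ||A x - b||^2 + lam^2*||x||^2 via the SVD."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    filt = s / (s ** 2 + lam ** 2)                 # filtered inverse singular values
    return Vt.T @ (filt * (U.T @ b))

def l_curve_points(A, b, lambdas):
    """Return (residual norm, solution norm) pairs; the 'corner' of the log-log
    curve is the usual L-curve choice of lambda."""
    pts = []
    for lam in lambdas:
        x = tikhonov_svd(A, b, lam)
        pts.append((np.linalg.norm(A @ x - b), np.linalg.norm(x)))
    return pts

rng = np.random.default_rng(4)
A = rng.standard_normal((80, 80)) / 80 + 0.05 * np.eye(80)   # mildly ill-conditioned placeholder kernel
x_true = rng.standard_normal(80)
b = A @ x_true + 0.01 * rng.standard_normal(80)
lambdas = [1e-4, 1e-3, 1e-2, 1e-1]
for lam, (res, sol) in zip(lambdas, l_curve_points(A, b, lambdas)):
    print(f"lambda={lam:.0e}  residual={res:.3f}  solution norm={sol:.2f}")
```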
Deblurring traffic sign images based on exemplars
Qiu, Tianshuang; Luan, Shengyang; Song, Haiyu; Wu, Linxiu
2018-01-01
Motion blur appearing in traffic sign images may lead to poor recognition results, and therefore it is of great significance to study how to deblur these images. In this paper, a novel exemplar-based method for deblurring traffic sign images is proposed, together with several related techniques. First, an exemplar dataset construction method based on a multiple-size partition strategy is proposed to lower the computational cost of exemplar matching. Second, a matching criterion based on gradient information and the entropy correlation coefficient is also proposed to enhance the matching accuracy. Third, the L0.5-norm is introduced as the regularization term to maintain the sparsity of the blur kernel. Experiments verify the superiority of the proposed approaches, and extensive evaluations against state-of-the-art methods demonstrate the effectiveness of the proposed algorithm. PMID:29513677
A Distributed Learning Method for ℓ1-Regularized Kernel Machine over Wireless Sensor Networks
Ji, Xinrong; Hou, Cuiqin; Hou, Yibin; Gao, Fang; Wang, Shulong
2016-01-01
In wireless sensor networks, centralized learning methods have very high communication costs and energy consumption. These are caused by the need to transmit scattered training examples from various sensor nodes to the central fusion center where a classifier or a regression machine is trained. To reduce the communication cost, a distributed learning method for a kernel machine that incorporates ℓ1 norm regularization (ℓ1-regularized) is investigated, and a novel distributed learning algorithm for the ℓ1-regularized kernel minimum mean squared error (KMSE) machine is proposed. The proposed algorithm relies on in-network processing and a collaboration that transmits the sparse model only between single-hop neighboring nodes. This paper evaluates the proposed algorithm with respect to the prediction accuracy, the sparse rate of model, the communication cost and the number of iterations on synthetic and real datasets. The simulation results show that the proposed algorithm can obtain approximately the same prediction accuracy as that obtained by the batch learning method. Moreover, it is significantly superior in terms of the sparse rate of model and communication cost, and it can converge with fewer iterations. Finally, an experiment conducted on a wireless sensor network (WSN) test platform further shows the advantages of the proposed algorithm with respect to communication cost. PMID:27376298
Yu, Chanki; Lee, Sang Wook
2016-05-20
We present a reliable and accurate global optimization framework for estimating parameters of isotropic analytical bidirectional reflectance distribution function (BRDF) models. This approach is based on a branch and bound strategy with linear programming and interval analysis. Conventional local optimization is often very inefficient for BRDF estimation since its fitting quality is highly dependent on initial guesses due to the nonlinearity of analytical BRDF models. The algorithm presented in this paper employs L1-norm error minimization to estimate BRDF parameters in a globally optimal way and interval arithmetic to derive our feasibility problem and lower bounding function. Our method is developed for the Cook-Torrance model but with several normal distribution functions such as the Beckmann, Berry, and GGX functions. Experiments have been carried out to validate the presented method using 100 isotropic materials from the MERL BRDF database, and our experimental results demonstrate that the L1-norm minimization provides a more accurate and reliable solution than the L2-norm minimization.
Regularized spherical polar fourier diffusion MRI with optimal dictionary learning.
Cheng, Jian; Jiang, Tianzi; Deriche, Rachid; Shen, Dinggang; Yap, Pew-Thian
2013-01-01
Compressed Sensing (CS) takes advantage of signal sparsity or compressibility and allows superb signal reconstruction from relatively few measurements. Based on CS theory, a suitable dictionary for sparse representation of the signal is required. In diffusion MRI (dMRI), CS methods proposed for reconstruction of diffusion-weighted signal and the Ensemble Average Propagator (EAP) utilize two kinds of Dictionary Learning (DL) methods: 1) Discrete Representation DL (DR-DL), and 2) Continuous Representation DL (CR-DL). DR-DL is susceptible to numerical inaccuracy owing to interpolation and regridding errors in a discretized q-space. In this paper, we propose a novel CR-DL approach, called Dictionary Learning - Spherical Polar Fourier Imaging (DL-SPFI) for effective compressed-sensing reconstruction of the q-space diffusion-weighted signal and the EAP. In DL-SPFI, a dictionary that sparsifies the signal is learned from the space of continuous Gaussian diffusion signals. The learned dictionary is then adaptively applied to different voxels using a weighted LASSO framework for robust signal reconstruction. Compared with the state-of-the-art CR-DL and DR-DL methods proposed by Merlet et al. and Bilgic et al., respectively, our work offers the following advantages. First, the learned dictionary is proved to be optimal for Gaussian diffusion signals. Second, to our knowledge, this is the first work to learn a voxel-adaptive dictionary. The importance of the adaptive dictionary in EAP reconstruction will be demonstrated theoretically and empirically. Third, optimization in DL-SPFI is only performed in a small subspace in which the SPF coefficients reside, as opposed to the q-space approach utilized by Merlet et al. We experimentally evaluated DL-SPFI with respect to L1-norm regularized SPFI (L1-SPFI), which uses the original SPF basis, and the DR-DL method proposed by Bilgic et al. The experimental results on synthetic and real data indicate that the learned dictionary produces sparser coefficients than the original SPF basis and results in significantly lower reconstruction error than Bilgic et al.'s method.
Seismic data restoration with a fast L1 norm trust region method
NASA Astrophysics Data System (ADS)
Cao, Jingjie; Wang, Yanfei
2014-08-01
Seismic data restoration is a major strategy to provide reliable wavefields when field data do not satisfy the Shannon sampling theorem. Recovery by sparsity-promoting inversion often yields sparse solutions of seismic data in a transformed domain; however, most methods for sparsity-promoting inversion are line-search methods, which are efficient but inclined to converge to local solutions. Using a trust region method, which can provide globally convergent solutions, is a good choice to overcome this shortcoming. A trust region method for sparse inversion has been proposed previously; however, its efficiency should be improved to be suitable for large-scale computation. In this paper, a new L1 norm trust region model is proposed for seismic data restoration, and a robust gradient projection method is utilized for solving the sub-problem. Numerical results on synthetic and field data demonstrate that the proposed trust region method achieves excellent computational speed and is a viable alternative for large-scale computation.
Kong, Xiang-Zhen; Liu, Jin-Xing; Zheng, Chun-Hou; Hou, Mi-Xiao; Wang, Juan
2017-07-01
High dimensionality has become a typical feature of biomolecular data. In this paper, a novel dimension reduction method named p-norm singular value decomposition (PSVD) is proposed to seek the low-rank approximation matrix to the biomolecular data. To enhance the robustness to outliers, the Lp-norm is taken as the error function and the Schatten p-norm is used as the regularization function in the optimization model. To evaluate the performance of PSVD, the K-means clustering method is then employed for tumor clustering based on the low-rank approximation matrix. Extensive experiments are carried out on five gene expression data sets including two benchmark data sets and three higher dimensional data sets from The Cancer Genome Atlas. The experimental results demonstrate that the PSVD-based method outperforms many existing methods. In particular, it is experimentally shown that the proposed method is more efficient for processing higher dimensional data with good robustness, stability, and superior time performance.
So it is, so it shall be: Group regularities license children’s prescriptive judgments
Roberts, Steven O.; Gelman, Susan A.; Ho, Arnold K.
2016-01-01
When do descriptive regularities (what characteristics individuals have) become prescriptive norms (what characteristics individuals should have)? We examined children’s (4–13 years) and adults’ use of group regularities to make prescriptive judgments, employing novel groups (Hibbles and Glerks) that engaged in morally neutral behaviors (e.g., eating different kinds of berries). Participants were introduced to conforming or non-conforming individuals (e.g., a Hibble who ate berries more typical of a Glerk). Children negatively evaluated non-conformity, with negative evaluations declining with age (Study 1). These effects were replicable across competitive and cooperative intergroup contexts (Study 2), and stemmed from reasoning about group regularities rather than reasoning about individual regularities (Study 3). These data provide new insights into children’s group concepts and have important implications for understanding the development of stereotyping and norm enforcement. PMID:27914116
Concave 1-norm group selection
Jiang, Dingfeng; Huang, Jian
2015-01-01
Grouping structures arise naturally in many high-dimensional problems. Incorporation of such information can improve model fitting and variable selection. Existing group selection methods, such as the group Lasso, require correct membership. However, in practice it can be difficult to correctly specify group membership of all variables. Thus, it is important to develop group selection methods that are robust against group mis-specification. Also, it is desirable to select groups as well as individual variables in many applications. We propose a class of concave 1-norm group penalties that is robust to grouping structure and can perform bi-level selection. A coordinate descent algorithm is developed to calculate solutions of the proposed group selection method. Theoretical convergence of the algorithm is proved under certain regularity conditions. Comparison with other methods suggests the proposed method is the most robust approach under membership mis-specification. Simulation studies and real data application indicate that the 1-norm concave group selection approach achieves better control of false discovery rates. An R package grppenalty implementing the proposed method is available at CRAN. PMID:25417206
Regularizing portfolio optimization
NASA Astrophysics Data System (ADS)
Still, Susanne; Kondor, Imre
2010-07-01
The optimization of large portfolios displays an inherent instability due to estimation error. This poses a fundamental problem, because solutions that are not stable under sample fluctuations may look optimal for a given sample, but are, in effect, very far from optimal with respect to the average risk. In this paper, we approach the problem from the point of view of statistical learning theory. The occurrence of the instability is intimately related to over-fitting, which can be avoided using known regularization methods. We show how regularized portfolio optimization with the expected shortfall as a risk measure is related to support vector regression. The budget constraint dictates a modification. We present the resulting optimization problem and discuss the solution. The L2 norm of the weight vector is used as a regularizer, which corresponds to a diversification 'pressure'. This means that diversification, besides counteracting downward fluctuations in some assets by upward fluctuations in others, is also crucial because it improves the stability of the solution. The approach we provide here allows for the simultaneous treatment of optimization and diversification in one framework that enables the investor to trade off between the two, depending on the size of the available dataset.
Gu, Wenbo; O'Connor, Daniel; Nguyen, Dan; Yu, Victoria Y; Ruan, Dan; Dong, Lei; Sheng, Ke
2018-04-01
Intensity-Modulated Proton Therapy (IMPT) is the state-of-the-art method of delivering proton radiotherapy. Previous research has been mainly focused on optimization of scanning spots with manually selected beam angles. Due to the computational complexity, the potential benefit of simultaneously optimizing beam orientations and spot pattern could not be realized. In this study, we developed a novel integrated beam orientation optimization (BOO) and scanning-spot optimization algorithm for intensity-modulated proton therapy (IMPT). A brain chordoma and three unilateral head-and-neck patients with a maximal target size of 112.49 cm3 were included in this study. A total number of 1162 noncoplanar candidate beams evenly distributed across 4π steradians were included in the optimization. For each candidate beam, the pencil-beam doses of all scanning spots covering the PTV and a margin were calculated. The beam angle selection and spot intensity optimization problem was formulated to include three terms: a dose fidelity term to penalize the deviation of PTV and OAR doses from the ideal dose distribution; an L1-norm sparsity term to reduce the number of active spots and improve delivery efficiency; a group sparsity term to control the number of active beams between 2 and 4. For the group sparsity term, the convex L2,1-norm and the nonconvex L2,1/2-norm were tested. For the dose fidelity term, both a quadratic function and a linearized equivalent uniform dose (LEUD) cost function were implemented. The optimization problem was solved using the Fast Iterative Shrinkage-Thresholding Algorithm (FISTA). The IMPT BOO method was tested on three head-and-neck patients and one skull base chordoma patient. The results were compared with IMPT plans created using column generation selected beams or manually selected beams. The L2,1-norm plan selected spatially aggregated beams, indicating potential degeneracy using this norm. The L2,1/2-norm was able to select spatially separated beams and achieve smaller deviation from the ideal dose. In the L2,1/2-norm plans, the [mean dose, maximum dose] of OAR were reduced by an average of [2.38%, 4.24%] and [2.32%, 3.76%] of the prescription dose for the quadratic and LEUD cost function, respectively, compared with the IMPT plan using manual beam selection while maintaining the same PTV coverage. The L2,1/2 group sparsity plans were dosimetrically superior to the column generation plans as well. Besides beam orientation selection, spot sparsification was observed. Generally, with the quadratic cost function, 30%~60% of the spots in the selected beams remained active. With the LEUD cost function, the percentages of active spots were in the range of 35%~85%. The BOO-IMPT run time was approximately 20 min. This work shows the first IMPT approach integrating noncoplanar BOO and scanning-spot optimization in a single mathematical framework. This method is computationally efficient, dosimetrically superior and produces delivery-friendly IMPT plans. © 2018 American Association of Physicists in Medicine.
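The convex L2,1 group sparsity term above is what FISTA handles through a closed-form proximal step, namely group soft-thresholding. The snippet below illustrates that single step on a made-up beams-by-spots weight matrix; the nonconvex L2,1/2 variant used in the paper needs a different thresholding rule and is not shown.

```python
import numpy as np

def prox_group_l21(X, tau):
    """Proximal operator of tau * sum_g ||x_g||_2 with the rows of X as groups:
    shrink each row's norm by tau and zero out rows that fall below it."""
    norms = np.linalg.norm(X, axis=1, keepdims=True)
    scale = np.maximum(1.0 - tau / np.maximum(norms, 1e-12), 0.0)
    return X * scale

# Rows = candidate beams, columns = scanning-spot weights (synthetic numbers only).
rng = np.random.default_rng(5)
beam_scale = np.array([[2.0], [0.1], [1.5], [0.05], [0.08], [1.0]])
X = rng.random((6, 10)) * beam_scale
X_shrunk = prox_group_l21(X, tau=0.8)
active_beams = np.flatnonzero(np.linalg.norm(X_shrunk, axis=1) > 0)
print("beams kept after group shrinkage:", active_beams)
```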
On the Global Regularity of a Helical-Decimated Version of the 3D Navier-Stokes Equations
NASA Astrophysics Data System (ADS)
Biferale, Luca; Titi, Edriss S.
2013-06-01
We study the global regularity, for all time and all initial data in H 1/2, of a recently introduced decimated version of the incompressible 3D Navier-Stokes (dNS) equations. The model is based on a projection of the dynamical evolution of Navier-Stokes (NS) equations into the subspace where helicity (the L 2-scalar product of velocity and vorticity) is sign-definite. The presence of a second (beside energy) sign-definite inviscid conserved quadratic quantity, which is equivalent to the H 1/2-Sobolev norm, allows us to demonstrate global existence and uniqueness, of space-periodic solutions, together with continuity with respect to the initial conditions, for this decimated 3D model. This is achieved thanks to the establishment of two new estimates, for this 3D model, which show that the H 1/2 and the time average of the square of the H 3/2 norms of the velocity field remain finite. Such two additional bounds are known, in the spirit of the work of H. Fujita and T. Kato (Arch. Ration. Mech. Anal. 16:269-315, 1964; Rend. Semin. Mat. Univ. Padova 32:243-260, 1962), to be sufficient for showing well-posedness for the 3D NS equations. Furthermore, they are directly linked to the helicity evolution for the dNS model, and therefore with a clear physical meaning and consequences.
Sparsity-based acoustic inversion in cross-sectional multiscale optoacoustic imaging
DOE Office of Scientific and Technical Information (OSTI.GOV)
Han, Yiyong; Tzoumas, Stratis; Nunes, Antonio
2015-09-15
Purpose: With recent advancement in hardware of optoacoustic imaging systems, highly detailed cross-sectional images may be acquired at a single laser shot, thus eliminating motion artifacts. Nonetheless, other sources of artifacts remain due to signal distortion or out-of-plane signals. The purpose of image reconstruction algorithms is to obtain the most accurate images from noisy, distorted projection data. Methods: In this paper, the authors use the model-based approach for acoustic inversion, combined with a sparsity-based inversion procedure. Specifically, a cost function is used that includes the L1 norm of the image in sparse representation and a total variation (TV) term. The optimization problem is solved by a numerically efficient implementation of a nonlinear gradient descent algorithm. TV–L1 model-based inversion is tested in the cross section geometry for numerically generated data as well as for in vivo experimental data from an adult mouse. Results: In all cases, model-based TV–L1 inversion showed a better performance over the conventional Tikhonov regularization, TV inversion, and L1 inversion. In the numerical examples, the images reconstructed with TV–L1 inversion were quantitatively more similar to the originating images. In the experimental examples, TV–L1 inversion yielded sharper images and weaker streak artifact. Conclusions: The results herein show that TV–L1 inversion is capable of improving the quality of highly detailed, multiscale optoacoustic images obtained in vivo using cross-sectional imaging systems. As a result of its high fidelity, model-based TV–L1 inversion may be considered as the new standard for image reconstruction in cross-sectional imaging.
Application of L1/2 regularization logistic method in heart disease diagnosis.
Zhang, Bowen; Chai, Hua; Yang, Ziyi; Liang, Yong; Chu, Gejin; Liu, Xiaoying
2014-01-01
Heart disease has become the number one threat to human health, and its diagnosis depends on many features, such as age, blood pressure, heart rate and dozens of other physiological indicators. Although there are so many risk factors, doctors usually diagnose the disease depending on their intuition and experience, which requires a great deal of knowledge and experience for correct determination. Finding the hidden medical information in existing clinical data is a notable and powerful approach in the study of heart disease diagnosis. In this paper, a sparse logistic regression method is introduced to detect the key risk factors using L(1/2) regularization on real heart disease data. Experimental results show that the sparse logistic L(1/2) regularization method achieves fewer but more informative key features than the Lasso, SCAD, MCP and Elastic net regularization approaches. At the same time, the proposed method can cut down the computational complexity, save the cost and time of medical tests and checkups, and reduce the number of attributes that need to be collected from patients.
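As a loose illustration of how a sparse logistic model singles out a few risk factors, the sketch below fits an L1-penalized logistic regression with scikit-learn on synthetic data. Note that this uses the ordinary L1 (Lasso-type) penalty available in the library, not the L(1/2) penalty proposed in the paper, and the features are random stand-ins rather than clinical measurements.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(6)
n, p = 300, 20                                   # 300 "patients", 20 candidate risk factors
X = rng.standard_normal((n, p))
true_coef = np.zeros(p)
true_coef[[1, 4, 9]] = [1.5, -2.0, 1.0]          # only three factors actually matter
prob = 1.0 / (1.0 + np.exp(-(X @ true_coef)))
y = (rng.random(n) < prob).astype(int)

# L1-penalized logistic regression; a smaller C means stronger sparsity.
clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.2).fit(X, y)
print("selected risk factors:", np.flatnonzero(clf.coef_.ravel() != 0))
```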
NASA Astrophysics Data System (ADS)
Wang, Dong
2018-05-01
Thanks to the great efforts made by Antoni (2006), spectral kurtosis has been recognized as a milestone for characterizing non-stationary signals, especially bearing fault signals. The main idea of spectral kurtosis is to use the fourth standardized moment, namely kurtosis, as a function of spectral frequency so as to indicate how repetitive transients caused by a bearing defect vary with frequency. Moreover, spectral kurtosis is defined based on an analytic bearing fault signal constructed from either a complex filter or Hilbert transform. On the other hand, another attractive work was reported by Borghesani et al. (2014) to mathematically reveal the relationship between the kurtosis of an analytical bearing fault signal and the square of the squared envelope spectrum of the analytical bearing fault signal for explaining spectral correlation for quantification of bearing fault signals. More interestingly, it was discovered that the sum of peaks at cyclic frequencies in the square of the squared envelope spectrum corresponds to the raw 4th order moment. Inspired by the aforementioned works, in this paper, we mathematically show that: (1) spectral kurtosis can be decomposed into squared envelope and squared L2/L1 norm so that spectral kurtosis can be explained as spectral squared L2/L1 norm; (2) spectral L2/L1 norm is formally defined for characterizing bearing fault signals and its two geometrical explanations are made; (3) spectral L2/L1 norm is proportional to the square root of the sum of peaks at cyclic frequencies in the square of the squared envelope spectrum; (4) some extensions of spectral L2/L1 norm for characterizing bearing fault signals are pointed out.
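A rough numerical illustration of the spectral L2/L1 idea: take an STFT, and for each frequency bin compute the ratio of the L2 norm to the L1 norm of the envelope magnitude across time, so that bands carrying sparse repetitive transients stand out. The signal below is a toy impulse train exciting an assumed 3 kHz resonance, not a measured bearing record, and the estimator is a simplification of the quantities defined in the paper.

```python
import numpy as np
from scipy.signal import stft

fs = 20000
t = np.arange(0, 1.0, 1 / fs)
# Toy repetitive transients: a short 3 kHz ring-down excited every 50 ms, plus noise.
impulses = (np.arange(t.size) % (fs // 20) == 0).astype(float)
ringdown = np.exp(-4000 * t[:200]) * np.sin(2 * np.pi * 3000 * t[:200])
x = np.convolve(impulses, ringdown, mode="full")[: t.size]
x = x + 0.02 * np.random.default_rng(7).standard_normal(t.size)

f, frames, Z = stft(x, fs=fs, nperseg=256)
env = np.abs(Z)                                   # per-band envelope magnitude over time
l2_over_l1 = np.linalg.norm(env, axis=1) / np.maximum(np.sum(env, axis=1), 1e-12)
print("band with the largest L2/L1 ratio: %.0f Hz" % f[np.argmax(l2_over_l1)])
```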
Stable Local Volatility Calibration Using Kernel Splines
NASA Astrophysics Data System (ADS)
Coleman, Thomas F.; Li, Yuying; Wang, Cheng
2010-09-01
We propose an optimization formulation using L1 norm to ensure accuracy and stability in calibrating a local volatility function for option pricing. Using a regularization parameter, the proposed objective function balances the calibration accuracy with the model complexity. Motivated by the support vector machine learning, the unknown local volatility function is represented by a kernel function generating splines and the model complexity is controlled by minimizing the 1-norm of the kernel coefficient vector. In the context of the support vector regression for function estimation based on a finite set of observations, this corresponds to minimizing the number of support vectors for predictability. We illustrate the ability of the proposed approach to reconstruct the local volatility function in a synthetic market. In addition, based on S&P 500 market index option data, we demonstrate that the calibrated local volatility surface is simple and resembles the observed implied volatility surface in shape. Stability is illustrated by calibrating local volatility functions using market option data from different dates.
Liao, Ke; Zhu, Min; Ding, Lei
2013-08-01
The present study investigated the use of transform sparseness of cortical current density on human brain surface to improve electroencephalography/magnetoencephalography (EEG/MEG) inverse solutions. Transform sparseness was assessed by evaluating compressibility of cortical current densities in transform domains. To do that, a structure compression method from computer graphics was first adopted to compress cortical surface structure, either regular or irregular, into hierarchical multi-resolution meshes. Then, a new face-based wavelet method based on generated multi-resolution meshes was proposed to compress current density functions defined on cortical surfaces. Twelve cortical surface models were built by three EEG/MEG softwares and their structural compressibility was evaluated and compared by the proposed method. Monte Carlo simulations were implemented to evaluate the performance of the proposed wavelet method in compressing various cortical current density distributions as compared to other two available vertex-based wavelet methods. The present results indicate that the face-based wavelet method can achieve higher transform sparseness than vertex-based wavelet methods. Furthermore, basis functions from the face-based wavelet method have lower coherence against typical EEG and MEG measurement systems than vertex-based wavelet methods. Both high transform sparseness and low coherent measurements suggest that the proposed face-based wavelet method can improve the performance of L1-norm regularized EEG/MEG inverse solutions, which was further demonstrated in simulations and experimental setups using MEG data. Thus, this new transform on complicated cortical structure is promising to significantly advance EEG/MEG inverse source imaging technologies. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Tuan, Nguyen Huy; Van Au, Vo; Khoa, Vo Anh; Lesnic, Daniel
2017-05-01
The identification of the population density of a logistic equation backwards in time associated with nonlocal diffusion and nonlinear reaction, motivated by biology and ecology fields, is investigated. The diffusion depends on an integral average of the population density whilst the reaction term is a global or local Lipschitz function of the population density. After discussing the ill-posedness of the problem, we apply the quasi-reversibility method to construct stable approximation problems. It is shown that the regularized solutions stemming from such method not only depend continuously on the final data, but also strongly converge to the exact solution in L 2-norm. New error estimates together with stability results are obtained. Furthermore, numerical examples are provided to illustrate the theoretical results.
Hincapié, Ana-Sofía; Kujala, Jan; Mattout, Jérémie; Daligault, Sebastien; Delpuech, Claude; Mery, Domingo; Cosmelli, Diego; Jerbi, Karim
2016-01-01
Minimum Norm Estimation (MNE) is an inverse solution method widely used to reconstruct the source time series that underlie magnetoencephalography (MEG) data. MNE addresses the ill-posed nature of MEG source estimation through regularization (e.g., Tikhonov regularization). Selecting the best regularization parameter is a critical step. Generally, once set, it is common practice to keep the same coefficient throughout a study. However, it is yet to be known whether the optimal lambda for spectral power analysis of MEG source data coincides with the optimal regularization for source-level oscillatory coupling analysis. We addressed this question via extensive Monte-Carlo simulations of MEG data, where we generated 21,600 configurations of pairs of coupled sources with varying sizes, signal-to-noise ratio (SNR), and coupling strengths. Then, we searched for the Tikhonov regularization coefficients (lambda) that maximize detection performance for (a) power and (b) coherence. For coherence, the optimal lambda was two orders of magnitude smaller than the best lambda for power. Moreover, we found that the spatial extent of the interacting sources and SNR, but not the extent of coupling, were the main parameters affecting the best choice for lambda. Our findings suggest using less regularization when measuring oscillatory coupling compared to power estimation.
A Weight-Adaptive Laplacian Embedding for Graph-Based Clustering.
Cheng, De; Nie, Feiping; Sun, Jiande; Gong, Yihong
2017-07-01
Graph-based clustering methods perform clustering on a fixed input data graph. Thus such clustering results are sensitive to the particular graph construction. If this initial construction is of low quality, the resulting clustering may also be of low quality. We address this drawback by allowing the data graph itself to be adaptively adjusted in the clustering procedure. In particular, our proposed weight adaptive Laplacian (WAL) method learns a new data similarity matrix that can adaptively adjust the initial graph according to the similarity weight in the input data graph. We develop three versions of these methods based on the L2-norm, fuzzy entropy regularizer, and another exponential-based weight strategy, that yield three new graph-based clustering objectives. We derive optimization algorithms to solve these objectives. Experimental results on synthetic data sets and real-world benchmark data sets exhibit the effectiveness of these new graph-based clustering methods.
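For context, a fixed-graph spectral clustering baseline of the kind WAL adapts, i.e. clustering on the eigenvectors of a normalized Laplacian built from a given similarity matrix W, can be sketched as follows (a generic sketch with toy assumptions, not the authors' WAL objective or code):

    import numpy as np
    from scipy.linalg import eigh
    from sklearn.cluster import KMeans

    def spectral_clustering(W, k):
        """Cluster n points from a fixed similarity graph W (n x n, symmetric, nonnegative)."""
        d = W.sum(axis=1)
        d_inv_sqrt = np.diag(1.0 / np.sqrt(d + 1e-12))
        L_sym = np.eye(len(W)) - d_inv_sqrt @ W @ d_inv_sqrt   # normalized graph Laplacian
        _, vecs = eigh(L_sym)
        U = vecs[:, :k]                                        # k smallest eigenvectors
        U = U / (np.linalg.norm(U, axis=1, keepdims=True) + 1e-12)
        return KMeans(n_clusters=k, n_init=10).fit_predict(U)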
Image Reconstruction from Undersampled Fourier Data Using the Polynomial Annihilation Transform
DOE Office of Scientific and Technical Information (OSTI.GOV)
Archibald, Richard K.; Gelb, Anne; Platte, Rodrigo
Fourier samples are collected in a variety of applications including magnetic resonance imaging and synthetic aperture radar. The data are typically under-sampled and noisy. In recent years, l1 regularization has received considerable attention in designing image reconstruction algorithms from under-sampled and noisy Fourier data. The underlying image is assumed to have some sparsity features, that is, some measurable features of the image have sparse representation. The reconstruction algorithm is typically designed to solve a convex optimization problem, which consists of a fidelity term penalized by one or more l1 regularization terms. The Split Bregman Algorithm provides a fast explicit solution for the case when TV is used for the l1 regularization term. Due to its numerical efficiency, it has been widely adopted for a variety of applications. A well known drawback in using TV as an l1 regularization term is that the reconstructed image will tend to default to a piecewise constant image. This issue has been addressed in several ways. Recently, the polynomial annihilation edge detection method was used to generate a higher order sparsifying transform, and was coined the “polynomial annihilation (PA) transform.” This paper adapts the Split Bregman Algorithm for the case when the PA transform is used as the l1 regularization term. In so doing, we achieve a more accurate image reconstruction method from under-sampled and noisy Fourier data. Our new method compares favorably to the TV Split Bregman Algorithm, as well as to the popular TGV combined with shearlet approach.
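The building block shared by Split Bregman-type l1 solvers, including the PA-transform variant above, is element-wise soft thresholding (shrinkage); a minimal numpy version is given below as a generic sketch rather than the authors' implementation.

    import numpy as np

    def shrink(x, t):
        """Soft thresholding: minimizer of t*||z||_1 + 0.5*||z - x||_2^2, applied element-wise."""
        return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

    shrink(np.array([-2.0, 0.3, 1.5]), 0.5)   # -> array([-1.5, 0. , 1. ])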
Pischke, Claudia R; Helmer, Stefanie M; McAlaney, John; Bewick, Bridgette M; Vriesacker, Bart; Van Hal, Guido; Mikolajczyk, Rafael T; Akvardar, Yildiz; Guillen-Grima, Francisco; Salonna, Ferdinand; Orosova, Olga; Dohrmann, Solveig; Dempsey, Robert C; Zeeb, Hajo
2015-12-01
Research conducted in North America suggests that students tend to overestimate tobacco use among their peers. This perceived norm may impact personal tobacco use. It remains unclear how these perceptions influence tobacco use among European students. The two aims were to investigate possible self-other discrepancies regarding personal use and attitudes towards use and to evaluate if perceptions of peer use and peer approval of use are associated with personal use and approval of tobacco use. The EU-funded 'Social Norms Intervention for the prevention of Polydrug usE' study was conducted in Belgium, Denmark, Germany, Slovak Republic, Spain, Turkey and United Kingdom. In total, 4482 students (71% female) answered an online survey including questions on personal and perceived tobacco use and personal and perceived attitudes towards tobacco use. Across all countries, the majority of students perceived tobacco use of their peers to be higher than their own use. The perception that the majority (>50%) of peers used tobacco regularly in the past two months was significantly associated with higher odds for personal regular use (OR: 2.66, 95% CI: 1.90-3.73). The perception that the majority of peers approve of tobacco use was significantly associated with higher odds for personal approval of tobacco use (OR: 6.49, 95% CI: 4.54-9.28). Perceived norms are an important predictor of personal tobacco use and attitudes towards use. Interventions addressing perceived norms may be a viable method to change attitudes and tobacco use among European students, and may be a component of future tobacco control policy. Copyright © 2015 Elsevier Ltd. All rights reserved.
Besic, Nikola; Vasile, Gabriel; Anghel, Andrei; Petrut, Teodor-Ion; Ioana, Cornel; Stankovic, Srdjan; Girard, Alexandre; d'Urso, Guy
2014-11-01
In this paper, we propose a novel ultrasonic tomography method for pipeline flow field imaging, based on the Zernike polynomial series. Having intrusive multipath time-of-flight ultrasonic measurements (difference in flight time and speed of ultrasound) at the input, we provide at the output tomograms of the fluid velocity components (axial, radial, and orthoradial velocity). Principally, by representing these velocities as Zernike polynomial series, we reduce the tomography problem to an ill-posed problem of finding the coefficients of the series, relying on the acquired ultrasonic measurements. Thereupon, this problem is treated by applying and comparing Tikhonov regularization and quadratically constrained ℓ1 minimization. To enhance the comparative analysis, we additionally introduce sparsity, by employing SVD-based filtering in selecting Zernike polynomials which are to be included in the series. The first approach, Tikhonov regularization without filtering, is used because it is the most suitable method. The performances are quantitatively tested by considering a residual norm and by estimating the flow using the axial velocity tomogram. Finally, the obtained results show the relative residual norm and the error in flow estimation, respectively, ~0.3% and ~1.6% for the less turbulent flow and ~0.5% and ~1.8% for the turbulent flow. Additionally, a qualitative validation is performed by proximate matching of the derived tomograms with a flow physical model.
Least-squares finite element methods for compressible Euler equations
NASA Technical Reports Server (NTRS)
Jiang, Bo-Nan; Carey, G. F.
1990-01-01
A method based on backward finite differencing in time and a least-squares finite element scheme for first-order systems of partial differential equations in space is applied to the Euler equations for gas dynamics. The scheme minimizes the L2-norm of the residual within each time step. The method naturally generates numerical dissipation proportional to the time step size. An implicit method employing linear elements has been implemented and proves robust. For high-order elements, computed solutions based on the L2 method may have oscillations for calculations at similar time step sizes. To overcome this difficulty, a scheme which minimizes the weighted H1-norm of the residual is proposed and leads to a successful scheme with high-degree elements. Finally, a conservative least-squares finite element method is also developed. Numerical results for two-dimensional problems are given to demonstrate the shock resolution of the methods and compare different approaches.
Spectral Regularization Algorithms for Learning Large Incomplete Matrices.
Mazumder, Rahul; Hastie, Trevor; Tibshirani, Robert
2010-03-01
We use convex relaxation techniques to provide a sequence of regularized low-rank solutions for large-scale matrix completion problems. Using the nuclear norm as a regularizer, we provide a simple and very efficient convex algorithm for minimizing the reconstruction error subject to a bound on the nuclear norm. Our algorithm Soft-Impute iteratively replaces the missing elements with those obtained from a soft-thresholded SVD. With warm starts this allows us to efficiently compute an entire regularization path of solutions on a grid of values of the regularization parameter. The computationally intensive part of our algorithm is in computing a low-rank SVD of a dense matrix. Exploiting the problem structure, we show that the task can be performed with a complexity linear in the matrix dimensions. Our semidefinite-programming algorithm is readily scalable to large matrices: for example it can obtain a rank-80 approximation of a 10^6 × 10^6 incomplete matrix with 10^5 observed entries in 2.5 hours, and can fit a rank 40 approximation to the full Netflix training set in 6.6 hours. Our methods show very good performance both in training and test error when compared to other competitive state-of-the-art techniques.
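A minimal sketch of the Soft-Impute iteration described above, using a dense SVD for clarity (the paper's scalable version uses low-rank SVDs and warm starts; the matrix sizes and lam value here are toy assumptions):

    import numpy as np

    def soft_impute(X, mask, lam, n_iter=100):
        """Complete X on entries where mask is False by iterating a soft-thresholded SVD."""
        Z = np.where(mask, X, 0.0)                      # start from a zero-filled matrix
        for _ in range(n_iter):
            U, s, Vt = np.linalg.svd(np.where(mask, X, Z), full_matrices=False)
            Z = (U * np.maximum(s - lam, 0.0)) @ Vt     # soft-threshold the singular values
        return Z

    rng = np.random.default_rng(0)
    M = rng.standard_normal((50, 5)) @ rng.standard_normal((5, 40))  # rank-5 toy matrix
    mask = rng.random(M.shape) < 0.5                                 # observe ~50% of entries
    M_hat = soft_impute(np.where(mask, M, 0.0), mask, lam=1.0)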
A New Linearized Crank-Nicolson Mixed Element Scheme for the Extended Fisher-Kolmogorov Equation
Wang, Jinfeng; Li, Hong; He, Siriguleng; Gao, Wei
2013-01-01
We present a new mixed finite element method for solving the extended Fisher-Kolmogorov (EFK) equation. We first decompose the EFK equation into two second-order equations, then deal with one second-order equation employing the finite element method, and handle the other second-order equation using a new mixed finite element method. In the new mixed finite element method, the gradient ∇u belongs to the weaker (L^2(Ω))^2 space taking the place of the classical H(div; Ω) space. We prove some a priori bounds for the solution of the semidiscrete scheme and derive a fully discrete mixed scheme based on a linearized Crank-Nicolson method. At the same time, we get the optimal a priori error estimates in the L^2- and H^1-norms for both the scalar unknown u and the diffusion term w = −Δu, and a priori error estimates in the (L^2)^2-norm for its gradient χ = ∇u, for both semi-discrete and fully discrete schemes. PMID:23864831
Algamal, Z Y; Lee, M H
2017-01-01
A high-dimensional quantitative structure-activity relationship (QSAR) classification model typically contains a large number of irrelevant and redundant descriptors. In this paper, a new design of descriptor selection for the QSAR classification model estimation method is proposed by adding a new weight inside L1-norm. The experimental results of classifying the anti-hepatitis C virus activity of thiourea derivatives demonstrate that the proposed descriptor selection method in the QSAR classification model performs effectively and competitively compared with other existing penalized methods in terms of classification performance on both the training and the testing datasets. Moreover, it is noteworthy that the results obtained in terms of stability test and applicability domain provide a robust QSAR classification model. It is evident from the results that the developed QSAR classification model could conceivably be employed for further high-dimensional QSAR classification studies.
Fast Quantitative Susceptibility Mapping with L1-Regularization and Automatic Parameter Selection
Bilgic, Berkin; Fan, Audrey P.; Polimeni, Jonathan R.; Cauley, Stephen F.; Bianciardi, Marta; Adalsteinsson, Elfar; Wald, Lawrence L.; Setsompop, Kawin
2014-01-01
Purpose To enable fast reconstruction of quantitative susceptibility maps with Total Variation penalty and automatic regularization parameter selection. Methods ℓ1-regularized susceptibility mapping is accelerated by variable-splitting, which allows closed-form evaluation of each iteration of the algorithm by soft thresholding and FFTs. This fast algorithm also renders automatic regularization parameter estimation practical. A weighting mask derived from the magnitude signal can be incorporated to allow edge-aware regularization. Results Compared to the nonlinear Conjugate Gradient (CG) solver, the proposed method offers 20× speed-up in reconstruction time. A complete pipeline including Laplacian phase unwrapping, background phase removal with SHARP filtering and ℓ1-regularized dipole inversion at 0.6 mm isotropic resolution is completed in 1.2 minutes using Matlab on a standard workstation compared to 22 minutes using the Conjugate Gradient solver. This fast reconstruction allows estimation of regularization parameters with the L-curve method in 13 minutes, which would have taken 4 hours with the CG algorithm. Proposed method also permits magnitude-weighted regularization, which prevents smoothing across edges identified on the magnitude signal. This more complicated optimization problem is solved 5× faster than the nonlinear CG approach. Utility of the proposed method is also demonstrated in functional BOLD susceptibility mapping, where processing of the massive time-series dataset would otherwise be prohibitive with the CG solver. Conclusion Online reconstruction of regularized susceptibility maps may become feasible with the proposed dipole inversion. PMID:24259479
Change detection of medical images using dictionary learning techniques and PCA
NASA Astrophysics Data System (ADS)
Nika, Varvara; Babyn, Paul; Zhu, Hongmei
2014-03-01
Automatic change detection methods for identifying the changes of serial MR images taken at different times are of great interest to radiologists. The majority of existing change detection methods in medical imaging, and those of brain images in particular, include many preprocessing steps and rely mostly on statistical analysis of MRI scans. Although most methods utilize registration software, tissue classification remains a difficult and overwhelming task. Recently, dictionary learning techniques are used in many areas of image processing, such as image surveillance, face recognition, remote sensing, and medical imaging. In this paper we present the Eigen-Block Change Detection algorithm (EigenBlockCD). It performs local registration and identifies the changes between consecutive MR images of the brain. Blocks of pixels from baseline scan are used to train local dictionaries that are then used to detect changes in the follow-up scan. We use PCA to reduce the dimensionality of the local dictionaries and the redundancy of data. Choosing the appropriate distance measure significantly affects the performance of our algorithm. We examine the differences between L1 and L2 norms as two possible similarity measures in the EigenBlockCD. We show the advantages of L2 norm over L1 norm theoretically and numerically. We also demonstrate the performance of the EigenBlockCD algorithm for detecting changes of MR images and compare our results with those provided in recent literature. Experimental results with both simulated and real MRI scans show that the EigenBlockCD outperforms the previous methods. It detects clinical changes while ignoring the changes due to patient's position and other acquisition artifacts.
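The two block-similarity measures compared in the EigenBlockCD study reduce to one-liners; the block vectors below are random toy data, not image blocks from the paper.

    import numpy as np

    rng = np.random.default_rng(1)
    a = rng.standard_normal(64)                 # flattened 8x8 image block (toy)
    b = a + 0.1 * rng.standard_normal(64)       # slightly perturbed block

    d_l1 = np.sum(np.abs(a - b))                # L1 distance (sum of absolute differences)
    d_l2 = np.sqrt(np.sum((a - b) ** 2))        # L2 (Euclidean) distance, favoured in the study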
Total variation superiorized conjugate gradient method for image reconstruction
NASA Astrophysics Data System (ADS)
Zibetti, Marcelo V. W.; Lin, Chuan; Herman, Gabor T.
2018-03-01
The conjugate gradient (CG) method is commonly used for the relatively-rapid solution of least squares problems. In image reconstruction, the problem can be ill-posed and also contaminated by noise; due to this, approaches such as regularization should be utilized. Total variation (TV) is a useful regularization penalty, frequently utilized in image reconstruction for generating images with sharp edges. When a non-quadratic norm is selected for regularization, as is the case for TV, then it is no longer possible to use CG. Non-linear CG is an alternative, but it does not share the efficiency that CG shows with least squares and methods such as fast iterative shrinkage-thresholding algorithms (FISTA) are preferred for problems with TV norm. A different approach to including prior information is superiorization. In this paper it is shown that the conjugate gradient method can be superiorized. Five different CG variants are proposed, including preconditioned CG. The CG methods superiorized by the total variation norm are presented and their performance in image reconstruction is demonstrated. It is illustrated that some of the proposed variants of the superiorized CG method can produce reconstructions of superior quality to those produced by FISTA and in less computational time, due to the speed of the original CG for least squares problems. In the Appendix we examine the behavior of one of the superiorized CG methods (we call it S-CG); one of its input parameters is a positive number ɛ. It is proved that, for any given ɛ that is greater than the half-squared-residual for the least squares solution, S-CG terminates in a finite number of steps with an output for which the half-squared-residual is less than or equal to ɛ. Importantly, it is also the case that the output will have a lower value of TV than what would be provided by unsuperiorized CG for the same value ɛ of the half-squared residual.
Sparse Image Reconstruction on the Sphere: Analysis and Synthesis.
Wallis, Christopher G R; Wiaux, Yves; McEwen, Jason D
2017-11-01
We develop techniques to solve ill-posed inverse problems on the sphere by sparse regularization, exploiting sparsity in both axisymmetric and directional scale-discretized wavelet space. Denoising, inpainting, and deconvolution problems and combinations thereof, are considered as examples. Inverse problems are solved in both the analysis and synthesis settings, with a number of different sampling schemes. The most effective approach is that with the most restricted solution-space, which depends on the interplay between the adopted sampling scheme, the selection of the analysis/synthesis problem, and any weighting of the l1 norm appearing in the regularization problem. More efficient sampling schemes on the sphere improve reconstruction fidelity by restricting the solution-space and also by improving sparsity in wavelet space. We apply the technique to denoise Planck 353-GHz observations, improving the ability to extract the structure of Galactic dust emission, which is important for studying Galactic magnetism.
Graph cuts via l1 norm minimization.
Bhusnurmath, Arvind; Taylor, Camillo J
2008-10-01
Graph cuts have become an increasingly important tool for solving a number of energy minimization problems in computer vision and other fields. In this paper, the graph cut problem is reformulated as an unconstrained l1 norm minimization that can be solved effectively using interior point methods. This reformulation exposes connections between the graph cuts and other related continuous optimization problems. Eventually the problem is reduced to solving a sequence of sparse linear systems involving the Laplacian of the underlying graph. The proposed procedure exploits the structure of these linear systems in a manner that is easily amenable to parallel implementations. Experimental results obtained by applying the procedure to graphs derived from image processing problems are provided.
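The sparse linear systems mentioned above involve the graph Laplacian L = D - W; a minimal scipy construction from a weighted edge list (a toy four-node cycle, not one of the paper's image graphs) looks like this:

    import numpy as np
    import scipy.sparse as sp

    edges = np.array([[0, 1], [1, 2], [2, 3], [3, 0]])        # toy 4-node cycle
    weights = np.array([1.0, 2.0, 1.0, 0.5])
    n = 4

    rows = np.r_[edges[:, 0], edges[:, 1]]
    cols = np.r_[edges[:, 1], edges[:, 0]]
    W = sp.coo_matrix((np.r_[weights, weights], (rows, cols)), shape=(n, n)).tocsr()
    L = sp.diags(np.asarray(W.sum(axis=1)).ravel()) - W       # combinatorial Laplacian D - W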
Luo, Lei; Yang, Jian; Qian, Jianjun; Tai, Ying; Lu, Gui-Fu
2017-09-01
Dealing with partial occlusion or illumination is one of the most challenging problems in image representation and classification. In this problem, the characterization of the representation error plays a crucial role. In most current approaches, the error matrix needs to be stretched into a vector and each element is assumed to be independently corrupted. This ignores the dependence between the elements of error. In this paper, it is assumed that the error image caused by partial occlusion or illumination changes is a random matrix variate and follows the extended matrix variate power exponential distribution. This has the heavy tailed regions and can be used to describe a matrix pattern of l×m dimensional observations that are not independent. This paper reveals the essence of the proposed distribution: it actually alleviates the correlations between pixels in an error matrix E and makes E approximately Gaussian. On the basis of this distribution, we derive a Schatten p-norm-based matrix regression model with Lq regularization. Alternating direction method of multipliers is applied to solve this model. To get a closed-form solution in each step of the algorithm, two singular value function thresholding operators are introduced. In addition, the extended Schatten p-norm is utilized to characterize the distance between the test samples and classes in the design of the classifier. Extensive experimental results for image reconstruction and classification with structural noise demonstrate that the proposed algorithm works much more robustly than some existing regression-based methods.
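For reference, the classical singular value thresholding operator, the proximal map of the nuclear norm (the p = 1 member of the Schatten family used above), can be sketched as follows; the paper's two operators are variants of this idea rather than this exact function.

    import numpy as np

    def svt(X, tau):
        """Singular value thresholding: prox of tau * (nuclear norm) at X."""
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        return (U * np.maximum(s - tau, 0.0)) @ Vt   # shrink the singular values, keep the factors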
Huang, Ming-Xiong; Huang, Charles W; Robb, Ashley; Angeles, AnneMarie; Nichols, Sharon L; Baker, Dewleen G; Song, Tao; Harrington, Deborah L; Theilmann, Rebecca J; Srinivasan, Ramesh; Heister, David; Diwakar, Mithun; Canive, Jose M; Edgar, J Christopher; Chen, Yu-Han; Ji, Zhengwei; Shen, Max; El-Gabalawy, Fady; Levy, Michael; McLay, Robert; Webb-Murphy, Jennifer; Liu, Thomas T; Drake, Angela; Lee, Roland R
2014-01-01
The present study developed a fast MEG source imaging technique based on Fast Vector-based Spatio-Temporal Analysis using an L1-minimum-norm (Fast-VESTAL) and then used the method to obtain the source amplitude images of resting-state magnetoencephalography (MEG) signals for different frequency bands. The Fast-VESTAL technique consists of two steps. First, L1-minimum-norm MEG source images were obtained for the dominant spatial modes of the sensor-waveform covariance matrix. Next, accurate source time-courses with millisecond temporal resolution were obtained using an inverse operator constructed from the spatial source images of Step 1. Using simulations, Fast-VESTAL's performance was assessed for its 1) ability to localize multiple correlated sources; 2) ability to faithfully recover source time-courses; 3) robustness to different SNR conditions including SNR with negative dB levels; 4) capability to handle correlated brain noise; and 5) statistical maps of MEG source images. An objective pre-whitening method was also developed and integrated with Fast-VESTAL to remove correlated brain noise. Fast-VESTAL's performance was then examined in the analysis of human median-nerve MEG responses. The results demonstrated that this method easily distinguished sources in the entire somatosensory network. Next, Fast-VESTAL was applied to obtain the first whole-head MEG source-amplitude images from resting-state signals in 41 healthy control subjects, for all standard frequency bands. Comparisons between resting-state MEG source images and known neurophysiology were provided. Additionally, in simulations and cases with MEG human responses, the results obtained from using the conventional beamformer technique were compared with those from Fast-VESTAL, which highlighted the beamformer's problems of signal leaking and distorted source time-courses. © 2013.
Autoregressive model in the Lp norm space for EEG analysis.
Li, Peiyang; Wang, Xurui; Li, Fali; Zhang, Rui; Ma, Teng; Peng, Yueheng; Lei, Xu; Tian, Yin; Guo, Daqing; Liu, Tiejun; Yao, Dezhong; Xu, Peng
2015-01-30
The autoregressive (AR) model is widely used in electroencephalogram (EEG) analyses such as waveform fitting, spectrum estimation, and system identification. In real applications, EEGs are inevitably contaminated with unexpected outlier artifacts, and this must be overcome. However, most of the current AR models are based on the L2 norm structure, which exaggerates the outlier effect due to the square property of the L2 norm. In this paper, a novel AR object function is constructed in the Lp (p≤1) norm space with the aim to compress the outlier effects on EEG analysis, and a fast iteration procedure is developed to solve this new AR model. The quantitative evaluation using simulated EEGs with outliers proves that the proposed Lp (p≤1) AR can estimate the AR parameters more robustly than the Yule-Walker, Burg and LS methods, under various simulated outlier conditions. The actual application to the resting EEG recording with ocular artifacts also demonstrates that Lp (p≤1) AR can effectively address the outliers and recover a resting EEG power spectrum that is more consistent with its physiological basis. Copyright © 2014 Elsevier B.V. All rights reserved.
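A generic iteratively reweighted least-squares (IRLS) sketch of an order-q AR fit under an Lp (p <= 1) residual criterion is given below; it follows the usual IRLS recipe and is not necessarily the authors' exact iteration, and the eps floor and iteration count are illustrative choices.

    import numpy as np

    def ar_fit_lp(x, order, p=1.0, n_iter=20, eps=1e-6):
        """Fit AR coefficients a minimizing sum_t |x[t] - sum_k a[k] x[t-k]|^p via IRLS."""
        x = np.asarray(x, dtype=float)
        N = len(x)
        X = np.column_stack([x[order - k - 1:N - k - 1] for k in range(order)])  # lagged regressors
        y = x[order:]
        a = np.linalg.lstsq(X, y, rcond=None)[0]        # ordinary L2 fit as initialization
        for _ in range(n_iter):
            r = y - X @ a
            w = (np.abs(r) + eps) ** (p - 2.0)          # IRLS weights down-weight large residuals
            a = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * y))
        return a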
Accelerating 4D flow MRI by exploiting vector field divergence regularization.
Santelli, Claudio; Loecher, Michael; Busch, Julia; Wieben, Oliver; Schaeffter, Tobias; Kozerke, Sebastian
2016-01-01
To improve velocity vector field reconstruction from undersampled four-dimensional (4D) flow MRI by penalizing divergence of the measured flow field. Iterative image reconstruction in which magnitude and phase are regularized separately in alternating iterations was implemented. The approach allows incorporating prior knowledge of the flow field being imaged. In the present work, velocity data were regularized to reduce divergence, using either divergence-free wavelets (DFW) or a finite difference (FD) method using the ℓ1-norm of divergence and curl. The reconstruction methods were tested on a numerical phantom and in vivo data. Results of the DFW and FD approaches were compared with data obtained with standard compressed sensing (CS) reconstruction. Relative to standard CS, directional errors of vector fields and divergence were reduced by 55-60% and 38-48% for three- and six-fold undersampled data with the DFW and FD methods. Velocity vector displays of the numerical phantom and in vivo data were found to be improved upon DFW or FD reconstruction. Regularization of vector field divergence in image reconstruction from undersampled 4D flow data is a valuable approach to improve reconstruction accuracy of velocity vector fields. © 2014 Wiley Periodicals, Inc.
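The finite-difference divergence penalty mentioned above can be prototyped with np.gradient; the toy field below assumes a uniform unit voxel grid, which is a simplifying assumption rather than the acquisition geometry of the study.

    import numpy as np

    def divergence(vx, vy, vz):
        """Finite-difference divergence of a 3-D velocity field on a uniform unit grid."""
        return (np.gradient(vx, axis=0) +
                np.gradient(vy, axis=1) +
                np.gradient(vz, axis=2))

    v = np.random.default_rng(0).standard_normal((3, 32, 32, 32))  # toy 3-component field
    div_l1 = np.abs(divergence(*v)).sum()                          # l1-norm divergence penalty term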
Potential estimates for the p-Laplace system with data in divergence form
NASA Astrophysics Data System (ADS)
Cianchi, A.; Schwarzacher, S.
2018-07-01
A pointwise bound for local weak solutions to the p-Laplace system is established in terms of data on the right-hand side in divergence form. The relevant bound involves a Havin-Maz'ya-Wolff potential of the datum, and is a counterpart for data in divergence form of a classical result of [25], recently extended to systems in [28]. A local bound for oscillations is also provided. These results allow for a unified approach to regularity estimates for broad classes of norms, including Banach function norms (e.g. Lebesgue, Lorentz and Orlicz norms), and norms depending on the oscillation of functions (e.g. Hölder, BMO and, more generally, Campanato type norms). In particular, new regularity properties are exhibited, and well-known results are easily recovered.
NASA Astrophysics Data System (ADS)
Vachálek, Ján
2011-12-01
The paper compares the abilities of forgetting methods to track time-varying parameters of two different simulated models with different types of excitation. The quantities observed in the simulations are the integral sum of the Euclidean norm of the deviation of the parameter estimates from their true values, and a prediction error count for a selected band. As supplementary information, we observe the eigenvalues of the covariance matrix. In the paper we used a modified method of Regularized Exponential Forgetting with Alternative Covariance Matrix (REFACM) along with Directional Forgetting (DF) and three standard regularized methods.
Comparison of l₁-Norm SVR and Sparse Coding Algorithms for Linear Regression.
Zhang, Qingtian; Hu, Xiaolin; Zhang, Bo
2015-08-01
Support vector regression (SVR) is a popular function estimation technique based on Vapnik's concept of support vector machine. Among many variants, the l1-norm SVR is known to be good at selecting useful features when the features are redundant. Sparse coding (SC) is a technique widely used in many areas and a number of efficient algorithms are available. Both l1-norm SVR and SC can be used for linear regression. In this brief, the close connection between the l1-norm SVR and SC is revealed and some typical algorithms are compared for linear regression. The results show that the SC algorithms outperform the Newton linear programming algorithm, an efficient l1-norm SVR algorithm, in efficiency. The algorithms are then used to design the radial basis function (RBF) neural networks. Experiments on some benchmark data sets demonstrate the high efficiency of the SC algorithms. In particular, one of the SC algorithms, the orthogonal matching pursuit is two orders of magnitude faster than a well-known RBF network designing algorithm, the orthogonal least squares algorithm.
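Of the SC algorithms highlighted above, orthogonal matching pursuit is compact enough to sketch directly; this is a textbook version with unit-norm dictionary columns assumed, not the specific implementation benchmarked in the brief.

    import numpy as np

    def omp(D, y, k):
        """Greedily select k columns of D to approximate y in the least-squares sense."""
        support, r = [], y.copy()
        coef = np.zeros(0)
        for _ in range(k):
            support.append(int(np.argmax(np.abs(D.T @ r))))   # atom most correlated with residual
            coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
            r = y - D[:, support] @ coef                      # re-orthogonalized residual
        x = np.zeros(D.shape[1])
        x[support] = coef
        return x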
Robust subspace clustering via joint weighted Schatten-p norm and Lq norm minimization
NASA Astrophysics Data System (ADS)
Zhang, Tao; Tang, Zhenmin; Liu, Qing
2017-05-01
Low-rank representation (LRR) has been successfully applied to subspace clustering. However, the nuclear norm in the standard LRR is not optimal for approximating the rank function in many real-world applications. Meanwhile, the L21 norm in LRR also fails to characterize various noises properly. To address the above issues, we propose an improved LRR method, which achieves low rank property via the new formulation with weighted Schatten-p norm and Lq norm (WSPQ). Specifically, the nuclear norm is generalized to be the Schatten-p norm and different weights are assigned to the singular values, and thus it can approximate the rank function more accurately. In addition, Lq norm is further incorporated into WSPQ to model different noises and improve the robustness. An efficient algorithm based on the inexact augmented Lagrange multiplier method is designed for the formulated problem. Extensive experiments on face clustering and motion segmentation clearly demonstrate the superiority of the proposed WSPQ over several state-of-the-art methods.
Boundedness and almost Periodicity in Time of Solutions of Evolutionary Variational Inequalities
NASA Astrophysics Data System (ADS)
Pankov, A. A.
1983-04-01
In this paper existence theorems are obtained for the solutions of abstract parabolic variational inequalities, which are bounded with respect to time (in the Stepanov and L^∞ norms). The regularity and almost periodicity properties of such solutions are studied. Theorems are also established concerning their solvability in spaces of Besicovitch almost periodic functions. The majority of the results are obtained without any compactness assumptions. Bibliography: 30 titles.
Age specific serum anti-Müllerian hormone levels in 1,298 Korean women with regular menstruation
Yoo, Ji Hee; Cha, Sun Wha; Park, Chan Woo; Yang, Kwang Moon; Song, In Ok; Koong, Mi Kyoung; Kang, Inn Soo
2011-01-01
Objective To determine the age specific serum anti-Müllerian hormone (AMH) reference values in Korean women with regular menstruation. Methods Between May, 2010 and January, 2011, the serum AMH levels were evaluated in a total of 1,298 women aged between 20 and 50 years with regular menstrual cycles. Women were classified into 6 categories by age: 20-31 years, 32-34 years, 35-37 years, 38-40 years, 41-43 years, above 43 years. Serum AMH was measured by a commercial enzyme-linked immunoassay. Results The serum AMH levels correlated negatively with age. The median AMH level of each age group was 4.20 ng/mL, 3.70 ng/mL, 2.60 ng/mL, 1.50 ng/mL, 1.30 ng/mL, and 0.60 ng/mL, respectively. The AMH values in the lower 5th percentile of each age group were 1.19 ng/mL, 0.60 ng/mL, 0.42 ng/mL, 0.27 ng/mL, 0.14 ng/mL, and 0.10 ng/mL, respectively. Conclusion This study determined reference values of serum AMH in Korean women with regular menstruation. These values can be applied to clinical evaluation and treatment of infertile women. PMID:22384425
Minati, Ludovico; Zacà, Domenico; D'Incerti, Ludovico; Jovicich, Jorge
2014-09-01
An outstanding issue in graph-based analysis of resting-state functional MRI is choice of network nodes. Individual consideration of entire brain voxels may represent a less biased approach than parcellating the cortex according to pre-determined atlases, but entails establishing connectedness for 10^9-10^11 links, with often prohibitive computational cost. Using a representative Human Connectome Project dataset, we show that, following appropriate time-series normalization, it may be possible to accelerate connectivity determination replacing Pearson correlation with the l1-norm. Even though the adjacency matrices derived from correlation coefficients and l1-norms are not identical, their similarity is high. Further, we describe and provide in full an example vector hardware implementation of the l1-norm on an array of 4096 zero instruction-set processors. Calculation times <1000 s are attainable, removing the major deterrent to voxel-based resting-state network mapping and revealing fine-grained node degree heterogeneity. L1-norm should be given consideration as a substitute for correlation in very high-density resting-state functional connectivity analyses. Copyright © 2014 IPEM. Published by Elsevier Ltd. All rights reserved.
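After z-scoring the voxel time series, the proposed l1 surrogate and the Pearson correlation matrix can be computed side by side as below; the array sizes are toy values, and the actual study ran the l1 computation on a 4096-processor array rather than numpy.

    import numpy as np

    rng = np.random.default_rng(0)
    ts = rng.standard_normal((100, 1200))                      # toy: 100 voxels, 1200 time points
    z = (ts - ts.mean(axis=1, keepdims=True)) / ts.std(axis=1, keepdims=True)

    corr = (z @ z.T) / z.shape[1]                              # Pearson correlation matrix
    l1 = np.abs(z[:, None, :] - z[None, :, :]).sum(axis=-1)    # pairwise l1 distances (smaller = more similar)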
The Iterative Reweighted Mixed-Norm Estimate for Spatio-Temporal MEG/EEG Source Reconstruction.
Strohmeier, Daniel; Bekhti, Yousra; Haueisen, Jens; Gramfort, Alexandre
2016-10-01
Source imaging based on magnetoencephalography (MEG) and electroencephalography (EEG) allows for the non-invasive analysis of brain activity with high temporal and good spatial resolution. As the bioelectromagnetic inverse problem is ill-posed, constraints are required. For the analysis of evoked brain activity, spatial sparsity of the neuronal activation is a common assumption. It is often taken into account using convex constraints based on the l1-norm. The resulting source estimates are however biased in amplitude and often suboptimal in terms of source selection due to high correlations in the forward model. In this work, we demonstrate that an inverse solver based on a block-separable penalty with a Frobenius norm per block and a l0.5-quasinorm over blocks addresses both of these issues. For solving the resulting non-convex optimization problem, we propose the iterative reweighted Mixed Norm Estimate (irMxNE), an optimization scheme based on iterative reweighted convex surrogate optimization problems, which are solved efficiently using a block coordinate descent scheme and an active set strategy. We compare the proposed sparse imaging method to the dSPM and the RAP-MUSIC approach based on two MEG data sets. We provide empirical evidence based on simulations and analysis of MEG data that the proposed method improves on the standard Mixed Norm Estimate (MxNE) in terms of amplitude bias, support recovery, and stability.
NASA Astrophysics Data System (ADS)
Foufoula-Georgiou, E.; Ebtehaj, A. M.; Zhang, S. Q.; Hou, A. Y.
2014-05-01
The increasing availability of precipitation observations from space, e.g., from the Tropical Rainfall Measuring Mission (TRMM) and the forthcoming Global Precipitation Measuring (GPM) Mission, has fueled renewed interest in developing frameworks for downscaling and multi-sensor data fusion that can handle large data sets in computationally efficient ways while optimally reproducing desired properties of the underlying rainfall fields. Of special interest is the reproduction of extreme precipitation intensities and gradients, as these are directly relevant to hazard prediction. In this paper, we present a new formalism for downscaling satellite precipitation observations, which explicitly allows for the preservation of some key geometrical and statistical properties of spatial precipitation. These include sharp intensity gradients (due to high-intensity regions embedded within lower-intensity areas), coherent spatial structures (due to regions of slowly varying rainfall), and thicker-than-Gaussian tails of precipitation gradients and intensities. Specifically, we pose the downscaling problem as a discrete inverse problem and solve it via a regularized variational approach (variational downscaling) where the regularization term is selected to impose the desired smoothness in the solution while allowing for some steep gradients (called ℓ1-norm or total variation regularization). We demonstrate the duality between this geometrically inspired solution and its Bayesian statistical interpretation, which is equivalent to assuming a Laplace prior distribution for the precipitation intensities in the derivative (wavelet) space. When the observation operator is not known, we discuss the effect of its misspecification and explore a previously proposed dictionary-based sparse inverse downscaling methodology to indirectly learn the observation operator from a data base of coincidental high- and low-resolution observations. The proposed method and ideas are illustrated in case studies featuring the downscaling of a hurricane precipitation field.
NASA Technical Reports Server (NTRS)
Foufoula-Georgiou, E.; Ebtehaj, A. M.; Zhang, S. Q.; Hou, A. Y.
2013-01-01
The increasing availability of precipitation observations from space, e.g., from the Tropical Rainfall Measuring Mission (TRMM) and the forthcoming Global Precipitation Measuring (GPM) Mission, has fueled renewed interest in developing frameworks for downscaling and multi-sensor data fusion that can handle large data sets in computationally efficient ways while optimally reproducing desired properties of the underlying rainfall fields. Of special interest is the reproduction of extreme precipitation intensities and gradients, as these are directly relevant to hazard prediction. In this paper, we present a new formalism for downscaling satellite precipitation observations, which explicitly allows for the preservation of some key geometrical and statistical properties of spatial precipitation. These include sharp intensity gradients (due to high-intensity regions embedded within lower-intensity areas), coherent spatial structures (due to regions of slowly varying rainfall), and thicker-than-Gaussian tails of precipitation gradients and intensities. Specifically, we pose the downscaling problem as a discrete inverse problem and solve it via a regularized variational approach (variational downscaling) where the regularization term is selected to impose the desired smoothness in the solution while allowing for some steep gradients (called ℓ1-norm or total variation regularization). We demonstrate the duality between this geometrically inspired solution and its Bayesian statistical interpretation, which is equivalent to assuming a Laplace prior distribution for the precipitation intensities in the derivative (wavelet) space. When the observation operator is not known, we discuss the effect of its misspecification and explore a previously proposed dictionary-based sparse inverse downscaling methodology to indirectly learn the observation operator from a database of coincidental high- and low-resolution observations. The proposed method and ideas are illustrated in case studies featuring the downscaling of a hurricane precipitation field.
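In compact form, the variational downscaling problem described in the two records above can be written as

    min over x of   (1/2) || y - H x ||_2^2  +  lambda || Phi x ||_1,

where y is the low-resolution observation, H is the (possibly misspecified) observation operator, Phi is a derivative or wavelet transform, and lambda controls the strength of the ℓ1 (total variation type) regularization; this restatement is an editorial gloss of the abstracts, not a formula quoted from the papers.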
Niu, Shanzhou; Zhang, Shanli; Huang, Jing; Bian, Zhaoying; Chen, Wufan; Yu, Gaohang; Liang, Zhengrong; Ma, Jianhua
2016-01-01
Cerebral perfusion x-ray computed tomography (PCT) is an important functional imaging modality for evaluating cerebrovascular diseases and has been widely used in clinics over the past decades. However, due to the protocol of PCT imaging with repeated dynamic sequential scans, the associative radiation dose unavoidably increases as compared with that used in conventional CT examinations. Minimizing the radiation exposure in PCT examination is a major task in the CT field. In this paper, considering the rich similarity redundancy information among enhanced sequential PCT images, we propose a low-dose PCT image restoration model by incorporating the low-rank and sparse matrix characteristic of sequential PCT images. Specifically, the sequential PCT images were first stacked into a matrix (i.e., low-rank matrix), and then a non-convex spectral norm/regularization and a spatio-temporal total variation norm/regularization were then built on the low-rank matrix to describe the low rank and sparsity of the sequential PCT images, respectively. Subsequently, an improved split Bregman method was adopted to minimize the associative objective function with a reasonable convergence rate. Both qualitative and quantitative studies were conducted using a digital phantom and clinical cerebral PCT datasets to evaluate the present method. Experimental results show that the presented method can achieve images with several noticeable advantages over the existing methods in terms of noise reduction and universal quality index. More importantly, the present method can produce more accurate kinetic enhanced details and diagnostic hemodynamic parameter maps. PMID:27440948
TH-AB-BRA-02: Automated Triplet Beam Orientation Optimization for MRI-Guided Co-60 Radiotherapy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nguyen, D; Thomas, D; Cao, M
2016-06-15
Purpose: MRI guided Co-60 provides daily and intrafractional MRI soft tissue imaging for improved target tracking and adaptive radiotherapy. To remedy the low output limitation, the system uses three Co-60 sources at 120° apart, but using all three sources in planning is considerably unintuitive. We automate the beam orientation optimization using column generation, and then solve a novel fluence map optimization (FMO) problem while regularizing the number of MLC segments. Methods: Three patients—1 prostate (PRT), 1 lung (LNG), and 1 head-and-neck boost plan (H&NBoost)—were evaluated. The beamlet dose for 180 equally spaced coplanar beams under 0.35 T magnetic field was calculated using Monte Carlo. The 60 triplets were selected utilizing the column generation algorithm. The FMO problem was formulated using an L2-norm minimization with anisotropic total variation (TV) regularization term, which allows for control over the number of MLC segments. Our Fluence Regularized and Optimized Selection of Triplets (FROST) plans were compared against the clinical treatment plans (CLN) produced by an experienced dosimetrist. Results: The mean PTV D95, D98, and D99 differ by −0.02%, +0.12%, and +0.44% of the prescription dose between planning methods, showing same PTV dose coverage. The mean PTV homogeneity (D95/D5) was at 0.9360 (FROST) and 0.9356 (CLN). R50 decreased by 0.07 with FROST. On average, FROST reduced Dmax and Dmean of OARs by 6.56% and 5.86% of the prescription dose. The manual CLN planning required iterative trial and error runs which is very time consuming, while FROST required minimal human intervention. Conclusions: MRI guided Co-60 therapy needs the output of all sources yet suffers from unintuitive and laborious manual beam selection processes. Automated triplet orientation optimization is shown essential to overcome the difficulty and improves the dosimetry. A novel FMO with regularization provides additional controls over the number of MLC segments and treatment time. Varian Medical Systems; NIH grant R01CA188300; NIH grant R43CA183390.
The Cauchy Problem in Local Spaces for the Complex Ginzburg-Landau Equation II. Contraction Methods
NASA Astrophysics Data System (ADS)
Ginibre, J.; Velo, G.
We continue the study of the initial value problem for the complex Ginzburg-Landau equation
Global Well-Posedness of the Boltzmann Equation with Large Amplitude Initial Data
NASA Astrophysics Data System (ADS)
Duan, Renjun; Huang, Feimin; Wang, Yong; Yang, Tong
2017-07-01
The global well-posedness of the Boltzmann equation with initial data of large amplitude has remained a long-standing open problem. In this paper, by developing a new L^∞_x L^1_v ∩ L^∞_{x,v} approach, we prove the global existence and uniqueness of mild solutions to the Boltzmann equation in the whole space or torus for a class of initial data with bounded velocity-weighted L^∞ norm under some smallness condition on the L^1_x L^∞_v norm as well as defect mass, energy and entropy, so that the initial data allow large amplitude oscillations. Both the hard and soft potentials with angular cut-off are considered, and the large time behavior of solutions in the L^∞_{x,v} norm with explicit rates of convergence is also studied.
NASA Astrophysics Data System (ADS)
Saadat, S. A.; Safari, A.; Needell, D.
2016-06-01
The main role of gravity field recovery is the study of dynamic processes in the interior of the Earth, especially in exploration geophysics. In this paper, the Stabilized Orthogonal Matching Pursuit (SOMP) algorithm is introduced for sparse reconstruction of regional gravity signals of the Earth. In practical applications, the problem can be ill-posed, with unknown parameters that are sensitive to data perturbations. Therefore, an appropriate regularization method needs to be applied to find a stabilized solution. The SOMP algorithm aims to regularize the norm of the solution vector, while also minimizing the norm of the corresponding residual vector. In this procedure, a convergence point of the algorithm that specifies the optimal sparsity level of the problem is determined. The results show that the SOMP algorithm finds a stabilized solution for the ill-posed problem at the optimal sparsity level, improving upon existing sparsity-based approaches.
Blind source deconvolution for deep Earth seismology
NASA Astrophysics Data System (ADS)
Stefan, W.; Renaut, R.; Garnero, E. J.; Lay, T.
2007-12-01
We present an approach to automatically estimate an empirical source characterization of deep earthquakes recorded teleseismically and subsequently remove the source from the recordings by applying regularized deconvolution. A principal goal in this work is to effectively deblur the seismograms, resulting in more impulsive and narrower pulses, permitting better constraints in high-resolution waveform analyses. Our method consists of two stages: (1) we first estimate the empirical source by automatically registering traces to their first principal component with a weighting scheme based on their deviation from this shape; we then use this shape as an estimate of the earthquake source. (2) We compare different deconvolution techniques to remove the source characteristic from the trace. In particular, Total Variation (TV) regularized deconvolution is used, which utilizes the fact that most natural signals have an underlying sparseness in an appropriate basis, in this case, impulsive onsets of seismic arrivals. We show several examples of deep-focus Fiji-Tonga region earthquakes for the phases S and ScS, comparing source responses for the separate phases. TV deconvolution is compared to water-level deconvolution, Tikhonov deconvolution, and L1-norm deconvolution, for both data and synthetics. This approach significantly improves our ability to study subtle waveform features that are commonly masked by either noise or the earthquake source. Eliminating source complexities improves our ability to resolve deep mantle triplications, waveform complexities associated with possible double crossings of the post-perovskite phase transition, as well as increasing stability in waveform analyses used for deep mantle anisotropy measurements.
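Among the deconvolution techniques compared above, water-level deconvolution is simple enough to sketch in the frequency domain; the water-level fraction w and the use of a circular FFT are illustrative assumptions, not the settings used in the study.

    import numpy as np

    def waterlevel_deconv(trace, source, w=0.01):
        """Deconvolve an estimated source wavelet from a trace with a water-level floor on |S|^2."""
        n = len(trace)
        S = np.fft.rfft(source, n)
        Y = np.fft.rfft(trace, n)
        denom = np.maximum(np.abs(S) ** 2, w * np.max(np.abs(S) ** 2))   # floor stabilizes the division
        return np.fft.irfft(Y * np.conj(S) / denom, n)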
An improved robust blind motion de-blurring algorithm for remote sensing images
NASA Astrophysics Data System (ADS)
He, Yulong; Liu, Jin; Liang, Yonghui
2016-10-01
Shift-invariant motion blur can be modeled as a convolution of the true latent image and the blur kernel with additive noise. Blind motion de-blurring estimates a sharp image from a motion-blurred image without knowledge of the blur kernel. This paper proposes an improved edge-specific motion de-blurring algorithm that proves to be well suited to processing remote sensing images. We find that an inaccurate blur kernel is the main cause of low-quality restored images. To improve image quality, we make the following contributions. For robust kernel estimation, first, we adapt a multi-scale scheme to make sure that the edge map is constructed accurately; second, an effective salient edge selection method based on RTV (Relative Total Variation) is used to extract salient structure from texture; third, an alternative iterative method is introduced to perform kernel optimization; in this step, we adopt the l1 and l0 norms as priors to remove noise and ensure the continuity of the blur kernel. For the final latent image reconstruction, an improved adaptive deconvolution algorithm based on the TV-l2 model is used to recover the latent image; we control the regularization weight adaptively in different regions according to the local image characteristics in order to preserve tiny details and eliminate noise and ringing artifacts. Some synthetic remote sensing images are used to test the proposed algorithm, and results demonstrate that the proposed algorithm obtains an accurate blur kernel and achieves better de-blurring results.
L1-2 minimization for exact and stable seismic attenuation compensation
NASA Astrophysics Data System (ADS)
Wang, Yufeng; Ma, Xiong; Zhou, Hui; Chen, Yangkang
2018-06-01
Frequency-dependent amplitude absorption and phase velocity dispersion are typically linked by the causality-imposed Kramers-Kronig relations, which inevitably degrade the quality of seismic data. Seismic attenuation compensation is an important processing approach for enhancing signal resolution and fidelity, which can be performed on either pre-stack or post-stack data so as to mitigate amplitude absorption and phase dispersion effects resulting from the intrinsic anelasticity of subsurface media. Inversion-based compensation with an L1-norm constraint, motivated by the sparsity of the reflectivity series, enjoys better stability than traditional inverse Q filtering. However, constrained L1 minimization, serving as the convex relaxation of the literal L0 sparsity count, may not give the sparsest solution when the kernel matrix is severely ill conditioned. Recently, non-convex metrics for compressed sensing have attracted considerable research interest. In this paper, we propose a nearly unbiased approximation of the vector sparsity, denoted as L1-2 minimization, for exact and stable seismic attenuation compensation. The non-convex L1-2 penalty function can be decomposed into two convex subproblems via the difference of convex algorithm, and each subproblem can be solved efficiently by the alternating direction method of multipliers. The superior performance of the proposed compensation scheme based on the L1-2 metric over the conventional L1 penalty is further demonstrated by both synthetic and field examples.
Characterizing L1-norm best-fit subspaces
NASA Astrophysics Data System (ADS)
Brooks, J. Paul; Dulá, José H.
2017-05-01
Fitting affine objects to data is the basis of many tools and methodologies in statistics, machine learning, and signal processing. The L1 norm is often employed to produce subspaces exhibiting a robustness to outliers and faulty observations. The L1-norm best-fit subspace problem is directly formulated as a nonlinear, nonconvex, and nondifferentiable optimization problem. The case when the subspace is a hyperplane can be solved to global optimality efficiently by solving a series of linear programs. The problem of finding the best-fit line has recently been shown to be NP-hard. We present necessary conditions for optimality for the best-fit subspace problem, and use them to characterize properties of optimal solutions.
Multi-objective based spectral unmixing for hyperspectral images
NASA Astrophysics Data System (ADS)
Xu, Xia; Shi, Zhenwei
2017-02-01
Sparse hyperspectral unmixing assumes that each observed pixel can be expressed by a linear combination of several pure spectra in an a priori library. Sparse unmixing is challenging, since it is usually transformed into an NP-hard l0-norm-based optimization problem. Existing methods usually utilize a relaxation of the original l0 norm. However, the relaxation may bring in sensitive weighting parameters and additional calculation error. In this paper, we propose a novel multi-objective based algorithm to solve the sparse unmixing problem without any relaxation. We transform sparse unmixing into a multi-objective optimization problem, which contains two correlative objectives: minimizing the reconstruction error and controlling the endmember sparsity. To improve the efficiency of multi-objective optimization, a population-based random flipping strategy is designed. Moreover, we theoretically prove that the proposed method is able to recover a guaranteed approximate solution from the spectral library within limited iterations. The proposed method can directly deal with the l0 norm via binary coding of the spectral signatures in the library. Experiments on both synthetic and real hyperspectral datasets demonstrate the effectiveness of the proposed method.
Image restoration by minimizing zero norm of wavelet frame coefficients
NASA Astrophysics Data System (ADS)
Bao, Chenglong; Dong, Bin; Hou, Likun; Shen, Zuowei; Zhang, Xiaoqun; Zhang, Xue
2016-11-01
In this paper, we propose two algorithms, namely the extrapolated proximal iterative hard thresholding (EPIHT) algorithm and the EPIHT algorithm with line search, for solving the ℓ0-norm regularized wavelet frame balanced approach for image restoration. Under the theoretical framework of the Kurdyka-Łojasiewicz property, we show that the sequences generated by the two algorithms converge to a local minimizer with a linear convergence rate. Moreover, extensive numerical experiments on sparse signal reconstruction and wavelet-frame-based image restoration problems, including CT reconstruction and image deblurring, demonstrate the improvement of ℓ0-norm-based regularization models over some prevailing ones, as well as the computational efficiency of the proposed algorithms.
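A generic extrapolated proximal iterative hard thresholding loop for an ℓ0-regularized least-squares problem conveys the structure of the method; this is a simplified stand-in for the wavelet-frame balanced model treated in the paper, with A, b and the parameters as placeholders.

```python
import numpy as np

def hard_threshold(x, lam, step):
    """Proximal map of step*lam*||.||_0: zero out entries with x_i^2 <= 2*step*lam."""
    out = x.copy()
    out[x ** 2 <= 2.0 * step * lam] = 0.0
    return out

def epiht(A, b, lam, iters=300, beta=0.5):
    """Sketch of extrapolated proximal iterative hard thresholding for
    min 0.5*||A x - b||^2 + lam*||x||_0 (a simplified stand-in for the
    wavelet-frame balanced model of the paper)."""
    n = A.shape[1]
    x_prev = np.zeros(n)
    x = np.zeros(n)
    step = 1.0 / np.linalg.norm(A, 2) ** 2
    for _ in range(iters):
        y = x + beta * (x - x_prev)             # extrapolation step
        grad = A.T @ (A @ y - b)
        x_prev, x = x, hard_threshold(y - step * grad, lam, step)
    return x
```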
Simple modification of Oja rule limits L1-norm of weight vector and leads to sparse connectivity.
Aparin, Vladimir
2012-03-01
This letter describes a simple modification of the Oja learning rule, which asymptotically constrains the L1-norm of an input weight vector instead of the L2-norm as in the original rule. This constraining is local as opposed to commonly used instant normalizations, which require the knowledge of all input weights of a neuron to update each one of them individually. The proposed rule converges to a weight vector that is sparser (has more zero weights) than the vector learned by the original Oja rule with or without the zero bound, which could explain the developmental synaptic pruning.
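For orientation, the classical Oja update is sketched below together with one hypothetical L1-flavoured variant obtained by expanding an explicit L1 normalization to first order; the variant is only our own illustrative guess and is not necessarily the rule proposed in the letter.

```python
import numpy as np

def oja_step(w, x, eta=0.01):
    """Classical Oja rule: asymptotically constrains the L2-norm of the weight vector w."""
    y = float(w @ x)
    return w + eta * y * (x - y * w)

def oja_l1_step(w, x, eta=0.01):
    """Hypothetical L1-constraining variant (our assumption, not the letter's exact rule):
    a first-order expansion of w <- (w + eta*y*x) / ||w + eta*y*x||_1 around ||w||_1 = 1
    replaces the decay direction w*y with w*(sign(w).x)."""
    y = float(w @ x)
    return w + eta * y * (x - (np.sign(w) @ x) * w)
```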
Liu, Hesheng; Gao, Xiaorong; Schimpf, Paul H; Yang, Fusheng; Gao, Shangkai
2004-10-01
Estimation of intracranial electric activity from the scalp electroencephalogram (EEG) requires a solution to the EEG inverse problem, which is known as an ill-conditioned problem. In order to yield a unique solution, weighted minimum norm least square (MNLS) inverse methods are generally used. This paper proposes a recursive algorithm, termed Shrinking LORETA-FOCUSS, which combines and expands upon the central features of two well-known weighted MNLS methods: LORETA and FOCUSS. This recursive algorithm makes iterative adjustments to the solution space as well as the weighting matrix, thereby dramatically reducing the computation load, and increasing local source resolution. Simulations are conducted on a 3-shell spherical head model registered to the Talairach human brain atlas. A comparative study of four different inverse methods, standard Weighted Minimum Norm, L1-norm, LORETA-FOCUSS and Shrinking LORETA-FOCUSS are presented. The results demonstrate that Shrinking LORETA-FOCUSS is able to reconstruct a three-dimensional source distribution with smaller localization and energy errors compared to the other methods.
Riou França, Lionel; Dautzenberg, Bertrand; Falissard, Bruno; Reynaud, Michel
2009-01-01
Background: Knowledge of the correlates of smoking is a first step to successful prevention interventions. The social norms theory hypothesises that students' smoking behaviour is linked to their perception of norms for use of tobacco. This study was designed to test the theory that smoking is associated with perceived norms, controlling for other correlates of smoking. Methods: In a pencil-and-paper questionnaire, 721 second-year students in sociology, medicine, foreign language or nursing studies estimated the number of cigarettes usually smoked in a month. 31 additional covariates were included as potential predictors of tobacco use. Multiple imputation was used to deal with missing values among covariates. The strength of the association of each variable with tobacco use was quantified by the inclusion frequencies of the variable in 1000 bootstrap sample backward selections. Being a smoker and the number of cigarettes smoked by smokers were modelled separately. Results: We retain 8 variables to predict the risk of smoking and 6 to predict the quantities smoked by smokers. The risk of being a smoker is increased by cannabis use, binge drinking, being unsupportive of smoke-free universities, perceived friends' approval of regular smoking, positive perceptions about tobacco, a high perceived prevalence of smoking among friends, reporting not being disturbed by people smoking in the university, and being female. The quantity of cigarettes smoked by smokers is greater for smokers reporting never being disturbed by smoke in the university, unsupportive of smoke-free universities, perceiving that their friends approve of regular smoking, having more negative beliefs about the tobacco industry, being sociology students and being among the older students. Conclusion: Other substance use, injunctive norms (friends' approval) and descriptive norms (friends' smoking prevalence) are associated with tobacco use. University-based prevention campaigns should take multiple substance use into account and focus on the norms most likely to have an impact on student smoking. PMID:19341453
Concentration of the L1-norm of trigonometric polynomials and entire functions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Malykhin, Yu V; Ryutin, K S
2014-11-30
For any sufficiently large n, the minimal measure of a subset of [−π,π] on which some nonzero trigonometric polynomial of order ≤n gains half of the L1-norm is shown to be π/(n+1). A similar result for entire functions of exponential type is established. Bibliography: 13 titles.
Sauwen, Nicolas; Acou, Marjan; Sima, Diana M; Veraart, Jelle; Maes, Frederik; Himmelreich, Uwe; Achten, Eric; Huffel, Sabine Van
2017-05-04
Segmentation of gliomas in multi-parametric (MP-)MR images is challenging due to their heterogeneous nature in terms of size, appearance and location. Manual tumor segmentation is a time-consuming task and clinical practice would benefit from (semi-) automated segmentation of the different tumor compartments. We present a semi-automated framework for brain tumor segmentation based on non-negative matrix factorization (NMF) that does not require prior training of the method. L1-regularization is incorporated into the NMF objective function to promote spatial consistency and sparseness of the tissue abundance maps. The pathological sources are initialized through user-defined voxel selection. Knowledge about the spatial location of the selected voxels is combined with tissue adjacency constraints in a post-processing step to enhance segmentation quality. The method is applied to an MP-MRI dataset of 21 high-grade glioma patients, including conventional, perfusion-weighted and diffusion-weighted MRI. To assess the effect of using MP-MRI data and the L1-regularization term, analyses are also run using only conventional MRI and without L1-regularization. Robustness against user input variability is verified by considering the statistical distribution of the segmentation results when repeatedly analyzing each patient's dataset with a different set of random seeding points. Using L1-regularized semi-automated NMF segmentation, mean Dice scores of 65%, 74% and 80% are found for active tumor, the tumor core and the whole tumor region. Mean Hausdorff distances of 6.1 mm, 7.4 mm and 8.2 mm are found for active tumor, the tumor core and the whole tumor region. Lower Dice scores and higher Hausdorff distances are found without L1-regularization and when only considering conventional MRI data. Based on the mean Dice scores and Hausdorff distances, segmentation results are competitive with the state of the art in the literature. Robust results were found for most patients, although careful voxel selection is mandatory to avoid sub-optimal segmentation.
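A generic L1-regularized NMF with multiplicative updates conveys the idea of the sparsity-promoting objective; this is a simplified sketch, not the semi-automated MP-MRI pipeline described above, and the penalty weight is a placeholder.

```python
import numpy as np

def sparse_nmf(V, rank, lam=0.1, iters=200, eps=1e-9):
    """Sketch of NMF with an L1 (sparsity) penalty on the abundance matrix H:
    minimize ||V - W H||_F^2 + lam * sum(H), with W, H >= 0,
    using standard multiplicative updates."""
    m, n = V.shape
    rng = np.random.default_rng(0)
    W = rng.random((m, rank))
    H = rng.random((rank, n))
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + lam + eps)   # the L1 term adds lam to the denominator
        W *= (V @ H.T) / (W @ H @ H.T + eps)
        # Remove the scale ambiguity between W and H (keeps the product W H unchanged).
        norms = np.linalg.norm(W, axis=0) + eps
        W /= norms
        H *= norms[:, None]
    return W, H
```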
NASA Astrophysics Data System (ADS)
Wang, Min
2017-06-01
This paper aims to establish the Tikhonov regularization method for generalized mixed variational inequalities in Banach spaces. For this purpose, we first prove a very general existence result for generalized mixed variational inequalities, provided that the mapping involved has the so-called mixed variational inequality property and satisfies a rather weak coercivity condition. We then establish the Tikhonov regularization method for generalized mixed variational inequalities. Our findings extend the results for the generalized variational inequality problem (for short, GVIP(F, K)) in R^n spaces (He in Abstr Appl Anal, 2012) to the generalized mixed variational inequality problem (for short, GMVIP(F, φ, K)) in reflexive Banach spaces. On the other hand, we generalize the corresponding results for the generalized mixed variational inequality problem GMVIP(F, φ, K) in R^n spaces (Fu and He in J Sichuan Norm Univ (Nat Sci) 37:12-17, 2014) to reflexive Banach spaces.
Tian, Yuxi; Schuemie, Martijn J; Suchard, Marc A
2018-06-22
Propensity score adjustment is a popular approach for confounding control in observational studies. Reliable frameworks are needed to determine relative propensity score performance in large-scale studies, and to establish optimal propensity score model selection methods. We detail a propensity score evaluation framework that includes synthetic and real-world data experiments. Our synthetic experimental design extends the 'plasmode' framework and simulates survival data under known effect sizes, and our real-world experiments use a set of negative control outcomes with presumed null effect sizes. In reproductions of two published cohort studies, we compare two propensity score estimation methods that contrast in their model selection approach: L1-regularized regression that conducts a penalized likelihood regression, and the 'high-dimensional propensity score' (hdPS) that employs a univariate covariate screen. We evaluate methods on a range of outcome-dependent and outcome-independent metrics. L1-regularization propensity score methods achieve superior model fit, covariate balance and negative control bias reduction compared with the hdPS. Simulation results are mixed and fluctuate with simulation parameters, revealing a limitation of simulation under the proportional hazards framework. Including regularization with the hdPS reduces commonly reported non-convergence issues but has little effect on propensity score performance. L1-regularization incorporates all covariates simultaneously into the propensity score model and offers propensity score performance superior to the hdPS marginal screen.
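The L1-regularized propensity score model amounts to a penalized logistic regression of treatment assignment on covariates; a minimal sketch with scikit-learn follows, where the variable names and the regularization strength are placeholders.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def l1_propensity_scores(X, treatment, C=1.0):
    """Fit an L1-penalized logistic regression of treatment assignment on the
    covariates X and return estimated propensity scores P(T = 1 | X).
    C is the inverse regularization strength (a placeholder; in practice it
    would be chosen by cross-validation)."""
    model = LogisticRegression(penalty="l1", solver="liblinear", C=C)
    model.fit(X, treatment)
    return model.predict_proba(X)[:, 1]
```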
Rational Approximations with Hankel-Norm Criterion
1980-01-01
[Scanned technical-report front matter; most of the abstract did not reproduce legibly. Recoverable fragment: the approximation problem is proved to be reducible to obtaining a two-variable all-pass rational function interpolating a set of parametric values at specified points inside ... Authors include Y. Genin, Philips Research Lab.]
Efficient methods for overlapping group lasso.
Yuan, Lei; Liu, Jun; Ye, Jieping
2013-09-01
The group Lasso is an extension of the Lasso for feature selection on (predefined) nonoverlapping groups of features. The nonoverlapping group structure limits its applicability in practice. There have been several recent attempts to study a more general formulation where groups of features are given, potentially with overlaps between the groups. The resulting optimization is, however, much more challenging to solve due to the group overlaps. In this paper, we consider the efficient optimization of the overlapping group Lasso penalized problem. We reveal several key properties of the proximal operator associated with the overlapping group Lasso, and compute the proximal operator by solving the smooth and convex dual problem, which allows the use of the gradient descent type of algorithms for the optimization. Our methods and theoretical results are then generalized to tackle the general overlapping group Lasso formulation based on the lq norm. We further extend our algorithm to solve a nonconvex overlapping group Lasso formulation based on the capped norm regularization, which reduces the estimation bias introduced by the convex penalty. We have performed empirical evaluations using both a synthetic and the breast cancer gene expression dataset, which consists of 8,141 genes organized into (overlapping) gene sets. Experimental results show that the proposed algorithm is more efficient than existing state-of-the-art algorithms. Results also demonstrate the effectiveness of the nonconvex formulation for overlapping group Lasso.
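For the non-overlapping case the proximal operator has the closed-form block soft-thresholding shown below; the overlapping case treated in the paper instead requires solving the smooth dual problem, so this sketch is only a simplified reference point.

```python
import numpy as np

def prox_group_lasso(v, groups, lam):
    """Proximal operator of lam * sum_g ||v_g||_2 for NON-overlapping groups:
    block soft-thresholding. `groups` is a list of index arrays partitioning v."""
    x = v.copy()
    for g in groups:
        norm_g = np.linalg.norm(v[g])
        x[g] = 0.0 if norm_g <= lam else (1.0 - lam / norm_g) * v[g]
    return x
```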
NASA Astrophysics Data System (ADS)
Edjlali, Ehsan; Bérubé-Lauzière, Yves
2018-01-01
We present the first Lq-Lp optimization scheme for fluorescence tomographic imaging. This is then applied to small animal imaging. Fluorescence tomography is an ill-posed and, in full generality, nonlinear problem that seeks to image the 3D concentration distribution of a fluorescent agent inside a biological tissue. Standard candidates for regularization to deal with the ill-posedness of the image reconstruction problem include L1 and L2 regularization. In this work, a general Lq-Lp regularization framework (Lq discrepancy function, Lp regularization term) is introduced for fluorescence tomographic imaging. A method to calculate the gradient for this general framework is developed which allows evaluating the performance of different cost functions/regularization schemes in solving the fluorescence tomographic problem. The simplified spherical harmonics approximation is used to accurately model light propagation inside the tissue. Furthermore, a multigrid mesh is utilized to decrease the dimension of the inverse problem and reduce the computational cost of the solution. The inverse problem is solved iteratively using an lm-BFGS quasi-Newton optimization method. The simulations are performed under different scenarios of noisy measurements. These are carried out on the Digimouse numerical mouse model with the kidney being the target organ. The evaluation of the reconstructed images is performed both qualitatively and quantitatively using several metrics including QR, RMSE, CNR, and TVE under rigorous conditions. The best reconstruction results under different scenarios are obtained with an L1.5-L1 scheme with premature termination of the optimization process. This is in contrast to approaches commonly found in the literature relying on L2-L2 schemes.
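In our notation (which may differ from the paper's), the general framework amounts to an objective of the form

```latex
\min_{x \ge 0} \; \frac{1}{q}\,\bigl\| F(x) - y \bigr\|_q^q \;+\; \frac{\lambda}{p}\,\| x \|_p^p ,
\qquad 1 \le p,\, q \le 2,
```

where F is the forward light-propagation model, y the boundary measurements, and λ the regularization weight; the 1/q and λ/p scalings are our convention, not necessarily the paper's.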
NASA Astrophysics Data System (ADS)
Hernandez, Monica
2017-12-01
This paper proposes a method for primal-dual convex optimization in variational large deformation diffeomorphic metric mapping problems formulated with robust regularizers and robust image similarity metrics. The method is based on the Chambolle-Pock primal-dual algorithm for solving general convex optimization problems. Diagonal preconditioning is used to ensure the convergence of the algorithm to the global minimum. We consider three robust regularizers liable to provide acceptable results in diffeomorphic registration: Huber, V-Huber and total generalized variation. The Huber norm is used in the image similarity term. The primal-dual equations are derived for the stationary and the non-stationary parameterizations of diffeomorphisms. The resulting algorithms have been implemented to run on the GPU using CUDA. For the most memory-consuming methods, we have developed a multi-GPU implementation. The GPU implementations allowed us to perform an exhaustive evaluation study on the NIREP and LPBA40 databases. The experiments showed that, for all the considered regularizers, the proposed method converges to diffeomorphic solutions while better preserving discontinuities at the boundaries of the objects compared to baseline diffeomorphic registration methods. In most cases, the evaluation showed a competitive performance for the robust regularizers, close to the performance of the baseline diffeomorphic registration methods.
Blind estimation of blur in hyperspectral images
NASA Astrophysics Data System (ADS)
Zhang, Mo; Vozel, Benoit; Chehdi, Kacem; Uss, Mykhail; Abramov, Sergey; Lukin, Vladimir
2017-10-01
Hyperspectral images acquired by remote sensing systems are generally degraded by noise and can be sometimes more severely degraded by blur. When no knowledge is available about the degradations present on the original image, blind restoration methods can only be considered. By blind, we mean no knowledge of the blur point spread function (PSF), the original latent channel, or the noise level. In this study, we address the blind restoration of the degraded channels component-wise, according to a sequential scheme. For each degraded channel, the sequential scheme estimates the blur PSF in a first stage and deconvolves the degraded channel in a second and final stage using the previously estimated PSF. We propose a new component-wise blind method for estimating the blur point spread function effectively and accurately. This method follows recent approaches suggesting the detection, selection and use of sufficiently salient edges in the current processed channel for supporting the regularized blur PSF estimation. Several modifications are beneficially introduced in our work. A new selection of salient edges, obtained by adequately thresholding the cumulative distribution of their gradient magnitudes, is introduced. Besides, quasi-automatic and spatially adaptive tuning of the involved regularization parameters is considered. To prove the applicability and higher efficiency of the proposed method, we compare it against the method it originates from and four representative edge-sparsifying regularized methods of the literature already assessed in a previous work. Our attention is mainly paid to the objective analysis (via the l1-norm) of the blur PSF estimation accuracy. The tests are performed on a synthetic hyperspectral image. This synthetic hyperspectral image has been built from various samples from classified areas of a real-life hyperspectral image, in order to benefit from a realistic spatial distribution of reference spectral signatures to recover after synthetic degradation. The synthetic hyperspectral image has been successively degraded with eight real blurs taken from the literature, each of a different support size. Conclusions, practical recommendations and perspectives are drawn from the results experimentally obtained.
Flanagan, Sara V.; Marvinney, Robert G.; Zheng, Yan
2014-01-01
In 2001 the Environmental Protection Agency (EPA) adopted a new standard for arsenic (As) in drinking water of 10 μg/L, replacing the old standard of 50 μg/L. However, for the 12% of the U.S. population relying on unregulated domestic well water, including half of the population of Maine, it is solely the well owner’s responsibility to test and treat the water. A mailed household survey was implemented January 2013 in 13 towns of central Maine with the goal of understanding the population’s testing and treatment practices and the key behavior influencing factors in an area with high well-water dependency and frequent natural groundwater As. The response rate was 58.3%; 525 of 900 likely-delivered surveys to randomly selected addresses were completed. Although 78% of the households reported their well has been tested, for half it was more than 5 years ago. Among the 58.7% who believe they have tested for As, most do not remember results. Better educated, higher income homeowners who more recently purchased their homes are most likely to have included As when last testing. While households agree water and As-related health risks can be severe, they feel low personal vulnerability and there are low testing norms overall. Significant predictors of including As when last testing include: having knowledge that years of exposure increases As-related health risks (risk knowledge), knowing who to contact to test well water (action knowledge), believing regularly testing does not take too much time (instrumental attitude), and having neighbors who regularly test their water (descriptive norm). Homeowners in As-affected communities have the tendency to underestimate their As risks compared to their neighbors. The reasons for this optimistic bias require further study, but low testing behaviors in this area may be due to the influence of a combination of norm, ability, and attitude factors and barriers. PMID:24875279
Genotype-phenotype association study via new multi-task learning model
Huo, Zhouyuan; Shen, Dinggang
2018-01-01
Research on the associations between genetic variations and imaging phenotypes is developing with the advance in high-throughput genotype and brain image techniques. Regression analysis of single nucleotide polymorphisms (SNPs) and imaging measures as quantitative traits (QTs) has been proposed to identify the quantitative trait loci (QTL) via multi-task learning models. Recent studies consider the interlinked structures within SNPs and imaging QTs through group lasso, e.g. ℓ2,1-norm, leading to better predictive results and insights of SNPs. However, group sparsity is not enough for representing the correlation between multiple tasks and ℓ2,1-norm regularization is not robust either. In this paper, we propose a new multi-task learning model to analyze the associations between SNPs and QTs. We suppose that low-rank structure is also beneficial to uncover the correlation between genetic variations and imaging phenotypes. Finally, we conduct regression analysis of SNPs and QTs. Experimental results show that our model is more accurate in prediction than compared methods and presents new insights of SNPs. PMID:29218896
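One common way to combine group sparsity with a low-rank coupling across tasks is an objective of the form below (our notation, written only to make the ingredients concrete; the paper's exact model may differ):

```latex
\min_{W} \; \| X W - Y \|_F^2 \;+\; \lambda_1 \, \| W \|_{2,1} \;+\; \lambda_2 \, \| W \|_* ,
\qquad \| W \|_{2,1} = \sum_{i} \Bigl( \sum_{j} W_{ij}^2 \Bigr)^{1/2},
```

where X holds the SNPs, Y the imaging QTs, and the nuclear norm ||W||_* promotes the low-rank structure across tasks.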
OCT despeckling via weighted nuclear norm constrained non-local low-rank representation
NASA Astrophysics Data System (ADS)
Tang, Chang; Zheng, Xiao; Cao, Lijuan
2017-10-01
As a non-invasive imaging modality, optical coherence tomography (OCT) plays an important role in medical sciences. However, OCT images are always corrupted by speckle noise, which can mask image features and pose significant challenges for medical analysis. In this work, we propose an OCT despeckling method by using non-local, low-rank representation with weighted nuclear norm constraint. Unlike previous non-local low-rank representation based OCT despeckling methods, we first generate a guidance image to improve the non-local group patches selection quality, then a low-rank optimization model with a weighted nuclear norm constraint is formulated to process the selected group patches. The corrupted probability of each pixel is also integrated into the model as a weight to regularize the representation error term. Note that each single patch might belong to several groups, hence different estimates of each patch are aggregated to obtain its final despeckled result. Both qualitative and quantitative experimental results on real OCT images show the superior performance of the proposed method compared with other state-of-the-art speckle removal techniques.
Murphy, Caitlin C.; Vernon, Sally W.; Diamond, Pamela M.; Tiro, Jasmin A.
2013-01-01
Background: Competitive hypothesis testing may explain differences in predictive power across multiple health behavior theories. Purpose: We tested competing hypotheses of the Health Belief Model (HBM) and Theory of Reasoned Action (TRA) to quantify pathways linking subjective norm, benefits, barriers, intention, and mammography behavior. Methods: We analyzed longitudinal surveys of women veterans randomized to the control group of a mammography intervention trial (n=704). We compared direct, partial mediation, and full mediation models with Satorra-Bentler χ2 difference testing. Results: Barriers had a direct and indirect negative effect on mammography behavior; intention only partially mediated barriers. Benefits had little to no effect on behavior and intention; however, it was negatively correlated with barriers. Subjective norm directly affected behavior and indirectly affected intention through barriers. Conclusions: Our results provide empiric support for different assertions of HBM and TRA. Future interventions should test whether building subjective norm and reducing negative attitudes increases regular mammography. PMID:23868613
Sparsity-based acoustic inversion in cross-sectional multiscale optoacoustic imaging.
Han, Yiyong; Tzoumas, Stratis; Nunes, Antonio; Ntziachristos, Vasilis; Rosenthal, Amir
2015-09-01
With recent advancement in hardware of optoacoustic imaging systems, highly detailed cross-sectional images may be acquired at a single laser shot, thus eliminating motion artifacts. Nonetheless, other sources of artifacts remain due to signal distortion or out-of-plane signals. The purpose of image reconstruction algorithms is to obtain the most accurate images from noisy, distorted projection data. In this paper, the authors use the model-based approach for acoustic inversion, combined with a sparsity-based inversion procedure. Specifically, a cost function is used that includes the L1 norm of the image in sparse representation and a total variation (TV) term. The optimization problem is solved by a numerically efficient implementation of a nonlinear gradient descent algorithm. TV-L1 model-based inversion is tested in the cross section geometry for numerically generated data as well as for in vivo experimental data from an adult mouse. In all cases, model-based TV-L1 inversion showed a better performance over the conventional Tikhonov regularization, TV inversion, and L1 inversion. In the numerical examples, the images reconstructed with TV-L1 inversion were quantitatively more similar to the originating images. In the experimental examples, TV-L1 inversion yielded sharper images and weaker streak artifact. The results herein show that TV-L1 inversion is capable of improving the quality of highly detailed, multiscale optoacoustic images obtained in vivo using cross-sectional imaging systems. As a result of its high fidelity, model-based TV-L1 inversion may be considered as the new standard for image reconstruction in cross-sectional imaging.
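The combined objective can be evaluated directly; the sketch below assumes a 2D image, abstract callables for the forward model and the sparsifying transform, and the anisotropic total variation, and it is not the authors' optimized implementation.

```python
import numpy as np

def tv_l1_cost(x_img, A, b, W, lam_l1, lam_tv):
    """Evaluate ||A(x) - b||_2^2 + lam_l1*||W(x)||_1 + lam_tv*TV(x) for a 2D image.
    A and W are callables (forward acoustic model and sparsifying transform);
    TV is the anisotropic total variation. The paper minimizes such a cost with a
    nonlinear gradient-descent scheme; here we only evaluate it."""
    residual = A(x_img) - b
    data_term = np.sum(residual ** 2)
    sparsity_term = lam_l1 * np.sum(np.abs(W(x_img)))
    tv_term = lam_tv * (np.sum(np.abs(np.diff(x_img, axis=0)))
                        + np.sum(np.abs(np.diff(x_img, axis=1))))
    return data_term + sparsity_term + tv_term
```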
Notes on modified trace distance measure of coherence
NASA Astrophysics Data System (ADS)
Chen, Bin; Fei, Shao-Ming
2018-05-01
We investigate the modified trace distance measure of coherence recently introduced in Yu et al. [Phys. Rev. A 94, 060302(R), 2016]. We show that for any single-qubit state, the modified trace norm of coherence is equal to the l1-norm of coherence. For any d-dimensional quantum system, an analytical formula of this measure for a class of maximally coherent mixed states is provided. The trade-off relation between the coherence quantified by the new measure and the mixedness quantified by the trace norm is also discussed. Furthermore, we explore the relation between the modified trace distance measure of coherence and other measures such as the l1-norm of coherence and the geometric measure of coherence.
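For reference, the l1-norm of coherence mentioned above is the sum of the moduli of the off-diagonal entries of the density matrix in the chosen reference basis:

```latex
C_{\ell_1}(\rho) \;=\; \sum_{i \neq j} \bigl| \rho_{ij} \bigr| .
```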
NASA Astrophysics Data System (ADS)
Bally, B.; Duguet, T.
2018-02-01
Background: State-of-the-art multi-reference energy density functional calculations require the computation of norm overlaps between different Bogoliubov quasiparticle many-body states. It is only recently that the efficient and unambiguous calculation of such norm kernels has become available under the form of Pfaffians [L. M. Robledo, Phys. Rev. C 79, 021302 (2009), 10.1103/PhysRevC.79.021302]. Recently developed particle-number-restored Bogoliubov coupled-cluster (PNR-BCC) and particle-number-restored Bogoliubov many-body perturbation (PNR-BMBPT) ab initio theories [T. Duguet and A. Signoracci, J. Phys. G 44, 015103 (2017), 10.1088/0954-3899/44/1/015103] make use of generalized norm kernels incorporating explicit many-body correlations. In PNR-BCC and PNR-BMBPT, the Bogoliubov states involved in the norm kernels differ specifically via a global gauge rotation. Purpose: The goal of this work is threefold. We wish (i) to propose and implement an alternative to the Pfaffian method to compute unambiguously the norm overlap between arbitrary Bogoliubov quasiparticle states, (ii) to extend the first point to explicitly correlated norm kernels, and (iii) to scrutinize the analytical content of the correlated norm kernels employed in PNR-BMBPT. Point (i) constitutes the purpose of the present paper while points (ii) and (iii) are addressed in a forthcoming paper. Methods: We generalize the method used in another work [T. Duguet and A. Signoracci, J. Phys. G 44, 015103 (2017), 10.1088/0954-3899/44/1/015103] in such a way that it is applicable to kernels involving arbitrary pairs of Bogoliubov states. The formalism is presently explicated in detail in the case of the uncorrelated overlap between arbitrary Bogoliubov states. The power of the method is numerically illustrated and benchmarked against known results on the basis of toy models of increasing complexity. Results: The norm overlap between arbitrary Bogoliubov product states is obtained under a closed-form expression allowing its computation without any phase ambiguity. The formula is physically intuitive, accurate, and versatile. It equally applies to norm overlaps between Bogoliubov states of even or odd number parity. Numerical applications illustrate these features and provide a transparent representation of the content of the norm overlaps. Conclusions: The complex norm overlap between arbitrary Bogoliubov states is computed, without any phase ambiguity, via elementary linear algebra operations. The method can be used in any configuration mixing of orthogonal and non-orthogonal product states. Furthermore, the closed-form expression extends naturally to correlated overlaps at play in PNR-BCC and PNR-BMBPT. As such, the straight overlap between Bogoliubov states is the zero-order reduction of more involved norm kernels to be studied in a forthcoming paper.
Qi, Miao; Wang, Ting; Yi, Yugen; Gao, Na; Kong, Jun; Wang, Jianzhong
2017-04-01
Feature selection has been regarded as an effective tool to help researchers understand the generating process of data. For mining the synthesis mechanism of microporous AlPOs, this paper proposes a novel feature selection method by joint l2,1-norm and Fisher discrimination constraints (JNFDC). In order to obtain a more effective feature subset, the proposed method is carried out in two steps. The first step is to rank the features according to sparse and discriminative constraints. The second step is to establish a predictive model with the ranked features, and select the most significant features in light of their contribution to improving the predictive accuracy. To the best of our knowledge, JNFDC is the first work that employs sparse representation theory to explore the synthesis mechanism of six kinds of pore rings. Numerical simulations demonstrate that our proposed method can select significant features affecting the specified structural property and improve the predictive accuracy. Moreover, comparison results show that JNFDC can obtain better predictive performances than some other state-of-the-art feature selection methods. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
Multi-normed spaces based on non-discrete measures and their tensor products
NASA Astrophysics Data System (ADS)
Helemskii, A. Ya.
2018-04-01
Lambert discovered a new type of structure situated, in a sense, between normed spaces and abstract operator spaces. His definition was based on the notion of amplifying a normed space by means of the spaces ℓ_2^n. Later, several mathematicians studied more general structures ('p-multi-normed spaces') introduced by means of the spaces ℓ_p^n, 1 ≤ p ≤ ∞. We pass from ℓ_p to L_p(X,μ) with an arbitrary measure. This becomes possible in the framework of the non-coordinate approach to the notion of amplification. In the case of a discrete counting measure, this approach is equivalent to the approach in the papers mentioned. Two categories arise. One consists of amplifications by means of an arbitrary normed space, and the other consists of p-convex amplifications by means of L_p(X,μ). Each of them has its own tensor product of objects (the existence of each product is proved by a separate explicit construction). As a final result, we show that the 'p-convex' tensor product has an especially transparent form for the minimal L_p-amplifications of L_q-spaces, where q is conjugate to p. Namely, tensoring L_q(Y,ν) and L_q(Z,λ), we obtain L_q(Y × Z, ν × λ).
Distributed Unmixing of Hyperspectral Data with Sparsity Constraint
NASA Astrophysics Data System (ADS)
Khoshsokhan, S.; Rajabi, R.; Zayyani, H.
2017-09-01
Spectral unmixing (SU) is a data processing problem in hyperspectral remote sensing. The significant challenge in the SU problem is how to identify endmembers and their weights accurately. For estimation of the signature and fractional abundance matrices in a blind problem, nonnegative matrix factorization (NMF) and its developments are widely used in the SU problem. One of the constraints added to NMF is the sparsity constraint, regularized by the L1/2 norm. In this paper, a new algorithm based on distributed optimization is used for spectral unmixing. In the proposed algorithm, a network of single-node clusters is employed, with each pixel in the hyperspectral image considered as a node in this network. The distributed unmixing with sparsity constraint is optimized with the diffusion LMS strategy, and the update equations for the fractional abundance and signature matrices are then obtained. Simulation results based on defined performance metrics illustrate the advantage of the proposed algorithm in spectral unmixing of hyperspectral data compared with other methods. The results show that the AAD and SAD of the proposed approach are improved by about 6 and 27 percent, respectively, relative to distributed unmixing at SNR = 25 dB.
Efficient robust conditional random fields.
Song, Dongjin; Liu, Wei; Zhou, Tianyi; Tao, Dacheng; Meyer, David A
2015-10-01
Conditional random fields (CRFs) are a flexible yet powerful probabilistic approach and have shown advantages for popular applications in various areas, including text analysis, bioinformatics, and computer vision. Traditional CRF models, however, are incapable of selecting relevant features as well as suppressing noise from noisy original features. Moreover, conventional optimization methods often converge slowly in solving the training procedure of CRFs, and will degrade significantly for tasks with a large number of samples and features. In this paper, we propose robust CRFs (RCRFs) to simultaneously select relevant features and suppress noise. An optimal gradient method (OGM) is further designed to train RCRFs efficiently. Specifically, the proposed RCRFs employ the l1 norm of the model parameters to regularize the objective used by traditional CRFs, therefore enabling discovery of the relevant unary features and pairwise features of CRFs. In each iteration of OGM, the gradient direction is determined jointly by the current gradient together with the historical gradients, and the Lipschitz constant is leveraged to specify the proper step size. We show that an OGM can tackle the RCRF model training very efficiently, achieving the optimal convergence rate O(1/k^2) (where k is the number of iterations). This convergence rate is theoretically superior to the convergence rate O(1/k) of previous first-order optimization methods. Extensive experiments performed on three practical image segmentation tasks demonstrate the efficacy of OGM in training our proposed RCRFs.
Fast and accurate matrix completion via truncated nuclear norm regularization.
Hu, Yao; Zhang, Debing; Ye, Jieping; Li, Xuelong; He, Xiaofei
2013-09-01
Recovering a large matrix from a small subset of its entries is a challenging problem arising in many real applications, such as image inpainting and recommender systems. Many existing approaches formulate this problem as a general low-rank matrix approximation problem. Since the rank operator is nonconvex and discontinuous, most of the recent theoretical studies use the nuclear norm as a convex relaxation. One major limitation of the existing approaches based on nuclear norm minimization is that all the singular values are simultaneously minimized, and thus the rank may not be well approximated in practice. In this paper, we propose to achieve a better approximation to the rank of a matrix by the truncated nuclear norm, which is given by the nuclear norm subtracted by the sum of the largest few singular values. In addition, we develop a novel matrix completion algorithm by minimizing the Truncated Nuclear Norm. We further develop three efficient iterative procedures, TNNR-ADMM, TNNR-APGL, and TNNR-ADMMAP, to solve the optimization problem. TNNR-ADMM utilizes the alternating direction method of multipliers (ADMM), while TNNR-APGL applies the accelerated proximal gradient line search method (APGL) for the final optimization. For TNNR-ADMMAP, we make use of an adaptive penalty according to a novel update rule for ADMM to achieve a faster convergence rate. Our empirical study shows encouraging results of the proposed algorithms in comparison to the state-of-the-art matrix completion algorithms on both synthetic and real visual datasets.
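Two ingredients of the approach can be sketched directly: the truncated nuclear norm itself and the singular value shrinkage operator used inside the iterative procedures; the full TNNR algorithms alternate such steps with one that fixes the top-r singular subspace.

```python
import numpy as np

def truncated_nuclear_norm(X, r):
    """||X||_* minus the sum of the r largest singular values."""
    s = np.linalg.svd(X, compute_uv=False)
    return s.sum() - s[:r].sum()

def svt(X, tau):
    """Singular value thresholding: the proximal operator of tau*||.||_* ,
    used as the inner shrinkage step of the TNNR iterations."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt
```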
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cheong, K; Lee, M; Kang, S
2014-06-01
Purpose: Despite the importance of accurately estimating the respiration regularity of a patient in motion compensation treatment, an effective and simply applicable method has rarely been reported. The authors propose a simple respiration regularity index based on parameters derived from a correspondingly simplified respiration model. Methods: In order to simplify a patient's breathing pattern while preserving the data's intrinsic properties, we defined a respiration model as a power of cosine form with a baseline drift. According to this respiration formula, breathing-pattern fluctuation could be explained using four factors: sample standard deviation of respiration period, sample standard deviation of amplitude, and the results of simple regression of the baseline drift (slope and standard deviation of residuals) of a respiration signal. Overall irregularity (δ) was defined as the Euclidean norm of a newly derived variable obtained using principal component analysis (PCA) for the four fluctuation parameters. Finally, the proposed respiration regularity index was defined as ρ = ln(1+(1/δ))/2, a higher ρ indicating a more regular breathing pattern. Subsequently, we applied it to simulated and clinical respiration signals from real-time position management (RPM; Varian Medical Systems, Palo Alto, CA) and investigated respiration regularity. Moreover, correlations between the regularity of the first session and the remaining fractions were investigated using Pearson's correlation coefficient. Results: The respiration regularity was determined based on ρ; patients with ρ<0.3 showed worse regularity than the others, whereas ρ>0.7 was suitable for respiratory-gated radiation therapy (RGRT). Fluctuations in breathing cycle and amplitude were especially determinative of ρ. If the respiration regularity of a patient's first session was known, it could be estimated through subsequent sessions. Conclusions: Respiration regularity could be objectively determined using a respiration regularity index, ρ. Such single-index testing of respiration regularity can facilitate determination of RGRT availability in clinical settings, especially for free-breathing cases. This work was supported by a Korea Science and Engineering Foundation (KOSEF) grant funded by the Korean Ministry of Science, ICT and Future Planning (No. 2013043498).
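Given the four fluctuation parameters defined above, the index can be reproduced schematically as follows; the extraction of the per-cycle periods, amplitudes and baseline drift from the cosine-power respiration model is omitted, and the PCA-based combination is simplified to a plain Euclidean norm of crudely standardized parameters (our assumption, not the authors' exact recipe).

```python
import numpy as np

def regularity_index(periods, amplitudes, baseline):
    """Sketch of the respiration regularity index rho = ln(1 + 1/delta)/2.
    periods, amplitudes: per-cycle period and amplitude samples;
    baseline: the drifting baseline signal. The paper combines the four
    fluctuation parameters with PCA; here they are crudely standardized
    and combined by a Euclidean norm (our simplification)."""
    t = np.arange(len(baseline))
    slope, intercept = np.polyfit(t, baseline, 1)        # simple linear regression of the drift
    resid_sd = np.std(baseline - (slope * t + intercept))
    params = np.array([np.std(periods, ddof=1),
                       np.std(amplitudes, ddof=1),
                       slope,
                       resid_sd])
    delta = np.linalg.norm(params / (np.abs(params).max() + 1e-12))  # placeholder standardization
    return np.log(1.0 + 1.0 / delta) / 2.0
```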
Distance estimation and collision prediction for on-line robotic motion planning
NASA Technical Reports Server (NTRS)
Kyriakopoulos, K. J.; Saridis, G. N.
1991-01-01
An efficient method for computing the minimum distance and predicting collisions between moving objects is presented. This problem has been incorporated in the framework of an on-line motion planning algorithm to satisfy collision avoidance between a robot and moving objects modeled as convex polyhedra. First, the deterministic problem, in which the information about the objects is assumed to be certain, is examined. If, instead of the Euclidean norm, the L1 or L∞ norm is used to represent distance, the problem becomes a linear programming problem. The stochastic problem, in which the uncertainty is induced by sensing and by the unknown dynamics of the moving obstacles, is then formulated. Two problems are considered: (1) filtering of the minimum distance between the robot and the moving object at the present time; and (2) prediction of the minimum distance in the future, in order to predict possible collisions with the moving obstacles and estimate the collision time.
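Under the L1 norm the minimum-distance computation is indeed a linear program; a sketch with scipy for two convex polyhedra given in half-space form {x : A x <= b} follows (the matrices are placeholders, and the half-space representation is an assumption).

```python
import numpy as np
from scipy.optimize import linprog

def l1_min_distance(A1, b1, A2, b2):
    """Minimum L1 distance between polyhedra {p: A1 p <= b1} and {q: A2 q <= b2},
    posed as an LP: minimize sum(t) subject to -t <= p - q <= t plus membership."""
    n = A1.shape[1]
    Z = np.zeros
    I = np.eye(n)
    c = np.concatenate([Z(n), Z(n), np.ones(n)])                     # variables: p, q, t
    A_ub = np.vstack([
        np.hstack([I, -I, -I]),                                      #  (p - q) - t <= 0
        np.hstack([-I, I, -I]),                                      # -(p - q) - t <= 0
        np.hstack([A1, Z((A1.shape[0], n)), Z((A1.shape[0], n))]),   # p inside polyhedron 1
        np.hstack([Z((A2.shape[0], n)), A2, Z((A2.shape[0], n))]),   # q inside polyhedron 2
    ])
    b_ub = np.concatenate([Z(n), Z(n), b1, b2])
    bounds = [(None, None)] * (2 * n) + [(0, None)] * n
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.fun, res.x[:n], res.x[n:2 * n]   # distance and the two closest points
```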
On Quantile Regression in Reproducing Kernel Hilbert Spaces with Data Sparsity Constraint
Zhang, Chong; Liu, Yufeng; Wu, Yichao
2015-01-01
For spline regressions, it is well known that the choice of knots is crucial for the performance of the estimator. As a general learning framework covering the smoothing splines, learning in a Reproducing Kernel Hilbert Space (RKHS) has a similar issue. However, the selection of training data points for kernel functions in the RKHS representation has not been carefully studied in the literature. In this paper we study quantile regression as an example of learning in a RKHS. In this case, the regular squared norm penalty does not perform training data selection. We propose a data sparsity constraint that imposes thresholding on the kernel function coefficients to achieve a sparse kernel function representation. We demonstrate that the proposed data sparsity method can have competitive prediction performance for certain situations, and have comparable performance in other cases compared to that of the traditional squared norm penalty. Therefore, the data sparsity method can serve as a competitive alternative to the squared norm penalty method. Some theoretical properties of our proposed method using the data sparsity constraint are obtained. Both simulated and real data sets are used to demonstrate the usefulness of our data sparsity constraint. PMID:27134575
Quantum Ergodicity and L^p Norms of Restrictions of Eigenfunctions
NASA Astrophysics Data System (ADS)
Hezari, Hamid
2018-02-01
We prove an analogue of Sogge's local L^p estimates for L^p norms of restrictions of eigenfunctions to submanifolds, and use it to show that for quantum ergodic eigenfunctions one can get improvements of the results of Burq-Gérard-Tzvetkov, Hu, and Chen-Sogge. The improvements are logarithmic on negatively curved manifolds (without boundary) and by o(1) for manifolds (with or without boundary) with ergodic geodesic flows. In the case of ergodic billiards with piecewise smooth boundary, we get o(1) improvements on L^∞ estimates of Cauchy data away from a shrinking neighborhood of the corners, and as a result using the methods of Ghosh et al., Jung and Zelditch, Jung and Zelditch, we get that the number of nodal domains of 2-dimensional ergodic billiards tends to infinity as λ → ∞. These results work only for a full density subsequence of any given orthonormal basis of eigenfunctions. We also present an extension of the L^p estimates of Burq-Gérard-Tzvetkov, Hu, Chen-Sogge for the restrictions of Dirichlet and Neumann eigenfunctions to compact submanifolds of the interior of manifolds with piecewise smooth boundary. This part does not assume ergodicity on the manifolds.
Wang, Jinfeng; Zhao, Meng; Zhang, Min; Liu, Yang; Li, Hong
2014-01-01
We discuss and analyze an H^1-Galerkin mixed finite element (H^1-GMFE) method to look for the numerical solution of the time fractional telegraph equation. We introduce an auxiliary variable to reduce the original equation into lower-order coupled equations and then formulate an H^1-GMFE scheme with two important variables. We discretize the Caputo time fractional derivatives using the finite difference methods and approximate the spatial direction by applying the H^1-GMFE method. Based on the discussion on the theoretical error analysis in the L^2-norm for the scalar unknown and its gradient in the one-dimensional case, we obtain the optimal order of convergence in the space-time direction. Further, we also derive the optimal error results for the scalar unknown in the H^1-norm. Moreover, we derive and analyze the stability of the H^1-GMFE scheme and give the results of a priori error estimates in two- or three-dimensional cases. In order to verify our theoretical analysis, we give some results of numerical calculation by using the Matlab procedure. PMID:25184148
The Exact Solution to Rank-1 L1-Norm TUCKER2 Decomposition
NASA Astrophysics Data System (ADS)
Markopoulos, Panos P.; Chachlakis, Dimitris G.; Papalexakis, Evangelos E.
2018-04-01
We study rank-1 L1-norm-based TUCKER2 (L1-TUCKER2) decomposition of 3-way tensors, treated as a collection of N D × M matrices that are to be jointly decomposed. Our contributions are as follows. i) We prove that the problem is equivalent to combinatorial optimization over N antipodal-binary variables. ii) We derive the first two algorithms in the literature for its exact solution. The first algorithm has cost exponential in N; the second one has cost polynomial in N (under a mild assumption). Our algorithms are accompanied by formal complexity analysis. iii) We conduct numerical studies to compare the performance of exact L1-TUCKER2 (proposed) with standard HOSVD, HOOI, GLRAM, PCA, L1-PCA, and TPCA-L1. Our studies show that L1-TUCKER2 outperforms (in tensor approximation) all the above counterparts when the processed data are outlier corrupted.
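Based on the stated equivalence to optimization over N antipodal-binary variables, an exponential-cost exact solver can be sketched as below; the reduction used (maximizing the largest singular value of a signed sum of the slices) is our reading of that equivalence, and the exact normalization may differ from the paper's.

```python
import itertools
import numpy as np

def rank1_l1_tucker2_exhaustive(X):
    """Exhaustive sketch for rank-1 L1-TUCKER2 of a list of D x M matrices X_n:
    maximize sum_n |u^T X_n v| over unit vectors u, v. Using
    sum_n |u^T X_n v| = max_{b in {-1,+1}^N} u^T (sum_n b_n X_n) v,
    the optimum equals the largest singular value over all sign patterns
    (our reading of the combinatorial equivalence; cost is 2^N)."""
    best_val, best_uv = -np.inf, None
    for signs in itertools.product([-1.0, 1.0], repeat=len(X)):
        S = sum(s * Xn for s, Xn in zip(signs, X))
        U, sv, Vt = np.linalg.svd(S)
        if sv[0] > best_val:
            best_val, best_uv = sv[0], (U[:, 0], Vt[0])
    return best_val, best_uv
```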
Huis, Rudy; Hawkins, Simon; Neutelings, Godfrey
2010-04-19
Quantitative real-time PCR (qRT-PCR) is currently the most accurate method for detecting differential gene expression. Such an approach depends on the identification of uniformly expressed 'housekeeping genes' (HKGs). Extensive transcriptomic data mining and experimental validation in different model plants have shown that the reliability of these endogenous controls can be influenced by the plant species, growth conditions and organs/tissues examined. It is therefore important to identify the best reference genes to use in each biological system before using qRT-PCR to investigate differential gene expression. In this paper we evaluate different candidate HKGs for developmental transcriptomic studies in the economically-important flax fiber- and oil-crop (Linum usitatissimum L.). Specific primers were designed in order to quantify the expression levels of 20 different potential housekeeping genes in flax roots, internal- and external-stem tissues, leaves and flowers at different developmental stages. After calculations of PCR efficiencies, 13 HKGs were retained and their expression stabilities evaluated by the computer algorithms geNorm and NormFinder. According to geNorm, 2 Transcriptional Elongation Factors (TEFs) and 1 Ubiquitin gene are necessary for normalizing gene expression when all studied samples are considered. However, only 2 TEFs are required for normalizing expression in stem tissues. In contrast, NormFinder identified glyceraldehyde-3-phosphate dehydrogenase (GAPDH) as the most stably expressed gene when all samples were grouped together, as well as when samples were classed into different sub-groups. qRT-PCR was then used to investigate the relative expression levels of two splice variants of the flax LuMYB1 gene (homologue of AtMYB59). LuMYB1-1 and LuMYB1-2 were highly expressed in the internal stem tissues as compared to outer stem tissues and other samples. This result was confirmed with both geNorm-designated and NormFinder-designated reference genes. The use of 2 different statistical algorithms results in the identification of different combinations of flax HKGs for expression data normalization. Despite such differences, the use of geNorm-designated and NormFinder-designated reference genes enabled us to accurately compare the expression levels of a flax MYB gene in different organs and tissues. Our identification and validation of suitable flax HKGs will facilitate future developmental transcriptomic studies in this economically-important plant.
Zeng, Shaohua; Liu, Yongliang; Wu, Min; Liu, Xiaomin; Shen, Xiaofei; Liu, Chunzhao; Wang, Ying
2014-01-01
Lycium barbarum and L. ruthenicum are extensively used as traditional Chinese medicinal plants. Next generation sequencing technology provides a powerful tool for analyzing transcriptomic profiles of gene expression in non-model species. Such gene expression can then be confirmed with quantitative real-time polymerase chain reaction (qRT-PCR). Therefore, use of systematically identified suitable reference genes is a prerequisite for obtaining reliable gene expression data. Here, we calculated the expression stability of 18 candidate reference genes across samples from different tissues and from plants grown under salt stress using the geNorm and NormFinder procedures. The geNorm-determined rank of reference genes was similar to that defined by NormFinder, with some differences. Both procedures confirmed that the single most stable reference gene was ACTIN1 for L. barbarum fruits, H2B1 for L. barbarum roots, and EF1α for L. ruthenicum fruits. PGK3, H2B2, and PGK3 were identified as the best stable reference genes for salt-treated L. ruthenicum leaves, roots, and stems, respectively. H2B1 and GAPDH1+PGK1 for L. ruthenicum and SAMDC2+H2B1 for L. barbarum were the best single and/or combined reference genes across all samples. Finally, expression of the salt-responsive gene NAC, the fruit ripening candidate gene LrPG, and anthocyanin genes was investigated to confirm the validity of the selected reference genes. Suitable reference genes identified in this study provide a foundation for accurately assessing gene expression and for a better understanding of novel gene function to elucidate molecular mechanisms behind particular biological/physiological processes in Lycium.
Autoclave decomposition method for metals in soils and sediments.
Navarrete-López, M; Jonathan, M P; Rodríguez-Espinosa, P F; Salgado-Galeana, J A
2012-04-01
Partial leaching of metals (Fe, Mn, Cd, Co, Cu, Ni, Pb, and Zn) was performed using an autoclave technique modified from the EPA 3051A digestion technique. The autoclave method was developed as an alternative to the regular digestion procedure; it passed the safety norms for partial extraction of metals in polytetrafluoroethylene (PFA) vessels, operates at a low constant temperature (119.5 ± 1.5 °C), and gives precise recovery of elements. The autoclave method was also validated using two Standard Reference Materials (SRMs: Loam Soil B and Loam Soil D), and the recoveries were equally superior to those of traditionally established digestion methods. The autoclave method was applied to samples from different natural environments (beach, mangrove, river, and city soil) to reproduce the recovery of elements during subsequent analysis.
Non-Convex Sparse and Low-Rank Based Robust Subspace Segmentation for Data Mining.
Cheng, Wenlong; Zhao, Mingbo; Xiong, Naixue; Chui, Kwok Tai
2017-07-15
Parsimony, including sparsity and low-rank, has shown great importance for data mining in social networks, particularly in tasks such as segmentation and recognition. Traditionally, such modeling approaches rely on an iterative algorithm that minimizes an objective function with convex l1-norm or nuclear norm constraints. However, the obtained results by convex optimization are usually suboptimal to solutions of original sparse or low-rank problems. In this paper, a novel robust subspace segmentation algorithm has been proposed by integrating lp-norm and Schatten p-norm constraints. Our so-obtained affinity graph can better capture local geometrical structure and the global information of the data. As a consequence, our algorithm is more generative, discriminative and robust. An efficient linearized alternating direction method is derived to realize our model. Extensive segmentation experiments are conducted on public datasets. The proposed algorithm is revealed to be more effective and robust compared to five existing algorithms.
OPERATOR NORM INEQUALITIES BETWEEN TENSOR UNFOLDINGS ON THE PARTITION LATTICE.
Wang, Miaoyan; Duc, Khanh Dao; Fischer, Jonathan; Song, Yun S
2017-05-01
Interest in higher-order tensors has recently surged in data-intensive fields, with a wide range of applications including image processing, blind source separation, community detection, and feature extraction. A common paradigm in tensor-related algorithms advocates unfolding (or flattening) the tensor into a matrix and applying classical methods developed for matrices. Despite the popularity of such techniques, how the functional properties of a tensor changes upon unfolding is currently not well understood. In contrast to the body of existing work which has focused almost exclusively on matricizations, we here consider all possible unfoldings of an order-k tensor, which are in one-to-one correspondence with the set of partitions of {1, …, k}. We derive general inequalities between the lp-norms of arbitrary unfoldings defined on the partition lattice. In particular, we demonstrate how the spectral norm (p = 2) of a tensor is bounded by that of its unfoldings, and obtain an improved upper bound on the ratio of the Frobenius norm to the spectral norm of an arbitrary tensor. For specially-structured tensors satisfying a generalized definition of orthogonal decomposability, we prove that the spectral norm remains invariant under specific subsets of unfolding operations.
Kernel Recursive Least-Squares Temporal Difference Algorithms with Sparsification and Regularization
Zhang, Chunyuan; Zhu, Qingxin; Niu, Xinzheng
2016-01-01
By combining with sparse kernel methods, least-squares temporal difference (LSTD) algorithms can construct the feature dictionary automatically and obtain a better generalization ability. However, the previous kernel-based LSTD algorithms do not consider regularization and their sparsification processes are batch or offline, which hinder their widespread applications in online learning problems. In this paper, we combine the following five techniques and propose two novel kernel recursive LSTD algorithms: (i) online sparsification, which can cope with unknown state regions and be used for online learning, (ii) L2 and L1 regularization, which can avoid overfitting and eliminate the influence of noise, (iii) recursive least squares, which can eliminate matrix-inversion operations and reduce computational complexity, (iv) a sliding-window approach, which can avoid caching all history samples and reduce the computational cost, and (v) the fixed-point subiteration and online pruning, which can make L1 regularization easy to implement. Finally, simulation results on two 50-state chain problems demonstrate the effectiveness of our algorithms. PMID:27436996
Image registration using stationary velocity fields parameterized by norm-minimizing Wendland kernel
NASA Astrophysics Data System (ADS)
Pai, Akshay; Sommer, Stefan; Sørensen, Lauge; Darkner, Sune; Sporring, Jon; Nielsen, Mads
2015-03-01
Interpolating kernels are crucial to solving a stationary velocity field (SVF) based image registration problem, because velocity fields need to be computed at non-integer locations during integration. The regularity of the solution to the SVF registration problem is controlled by the regularization term. In a variational formulation, this term is traditionally expressed as a squared norm, which is a scalar inner product of the interpolating kernels parameterizing the velocity fields. The minimization of this term using the standard spline interpolation kernels (linear or cubic) is only approximate because of the lack of a compatible norm. In this paper, we propose to replace such interpolants with a norm-minimizing interpolant, the Wendland kernel, which has the same computational simplicity as B-splines. An application to the Alzheimer's Disease Neuroimaging Initiative (ADNI) data showed that Wendland SVF-based measures separate Alzheimer's disease from normal controls better than both B-spline SVFs (p<0.05 in amygdala) and B-spline free-form deformation (p<0.05 in amygdala and cortical gray matter).
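For concreteness, a compactly supported Wendland kernel of the kind used for such norm-minimizing parameterizations is shown below; this is the C2 Wendland function φ(r) = (1 − r)⁴₊(4r + 1), and the specific order and scaling used in the paper may differ.

```python
import numpy as np

def wendland_c2(r, support=1.0):
    """Compactly supported Wendland C2 kernel phi(r) = (1 - r)_+^4 (4 r + 1),
    with r scaled by the support radius; positive definite in up to 3 dimensions."""
    s = np.asarray(r, dtype=float) / support
    return np.where(s < 1.0, (1.0 - s) ** 4 * (4.0 * s + 1.0), 0.0)

# Example: kernel weights at distances 0, 0.5, 1.2 (support = 1.0) -> [1.0, 0.1875, 0.0]
# print(wendland_c2(np.array([0.0, 0.5, 1.2])))
```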
Single image super-resolution based on approximated Heaviside functions and iterative refinement
Wang, Xin-Yu; Huang, Ting-Zhu; Deng, Liang-Jian
2018-01-01
One method of solving the single-image super-resolution problem is to use Heaviside functions. This has been done previously by making a binary classification of image components as "smooth" and "non-smooth", describing these with approximated Heaviside functions (AHFs), and iterating with l1 regularization. We now introduce a new method in which the binary classification of image components is extended to different degrees of smoothness and non-smoothness, these components being represented by various classes of AHFs. Taking into account the sparsity of the non-smooth components, their coefficients are l1 regularized. In addition, to pick up more image details, the new method uses an iterative refinement for the residuals between the original low-resolution input and the downsampled resulting image. Experimental results showed that the new method is superior to the original AHF method and to four other published methods. PMID:29329298
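A minimal sketch of one common approximated Heaviside function, H_tau(x) = 1/2 + (1/pi) arctan(x/tau), follows; the exact family of AHFs and the smoothness parameters used by the authors are assumptions here, chosen only to show how different "degrees of smoothness" can be encoded as basis functions.

```python
import numpy as np

def approx_heaviside(x, tau=0.05):
    """Smooth approximation of the Heaviside step; smaller tau gives a sharper transition."""
    return 0.5 + np.arctan(x / tau) / np.pi

# Different tau values give AHFs of different degrees of smoothness,
# which can serve as basis functions for smooth vs. non-smooth image components.
x = np.linspace(-1.0, 1.0, 201)
bases = np.stack([approx_heaviside(x, tau=t) for t in (0.5, 0.1, 0.02)])
```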
Yang, Chunxiao; Li, Hui; Pan, Huipeng; Ma, Yabin; Zhang, Deyong; Liu, Yong; Zhang, Zhanhong; Zheng, Changying; Chu, Dong
2015-01-01
Reverse transcriptase-quantitative polymerase chain reaction (RT-qPCR) is a reliable technique for measuring and evaluating gene expression during variable biological processes. To facilitate gene expression studies, normalization of genes of interest relative to stable reference genes is crucial. The western flower thrips Frankliniella occidentalis (Pergande) (Thysanoptera: Thripidae), the main vector of tomato spotted wilt virus (TSWV), is a destructive invasive species. In this study, the expression profiles of 11 candidate reference genes from nonviruliferous and viruliferous F. occidentalis were investigated. Five distinct algorithms, geNorm, NormFinder, BestKeeper, the ΔCt method, and RefFinder, were used to determine the performance of these genes. geNorm, NormFinder, BestKeeper, and RefFinder identified heat shock protein 70 (HSP70), heat shock protein 60 (HSP60), elongation factor 1 α, and ribosomal protein l32 (RPL32) as the most stable reference genes, and the ΔCt method identified HSP60, HSP70, RPL32, and heat shock protein 90 as the most stable reference genes. Additionally, two reference genes were sufficient for reliable normalization in nonviruliferous and viruliferous F. occidentalis. This work provides a foundation for investigating the molecular mechanisms of TSWV and F. occidentalis interactions.
Zhang, Chuncheng; Song, Sutao; Wen, Xiaotong; Yao, Li; Long, Zhiying
2015-04-30
Feature selection plays an important role in improving the classification accuracy of multivariate classification techniques in the context of fMRI-based decoding due to the "few samples and large features" nature of functional magnetic resonance imaging (fMRI) data. Recently, several sparse representation methods have been applied to the voxel selection of fMRI data. Despite the low computational efficiency of the sparse representation methods, they still displayed promise for applications that select features from fMRI data. In this study, we proposed the Laplacian smoothed L0 norm (LSL0) approach for feature selection of fMRI data. Based on the fast sparse decomposition using the smoothed L0 norm (SL0) (Mohimani, 2007), the LSL0 method used the Laplacian function to approximate the L0 norm of the sources. Results on simulated and real fMRI data demonstrated the feasibility and robustness of LSL0 for sparse source estimation and feature selection. Simulated results indicated that LSL0 produced more accurate source estimation than SL0 at high noise levels. The classification accuracy using voxels selected by LSL0 was higher than that using SL0 in both the simulated and real fMRI experiments. Moreover, both LSL0 and SL0 showed higher classification accuracy and required less time than ICA and the t-test for fMRI decoding. LSL0 outperformed SL0 in sparse source estimation at high noise levels and in feature selection, and both LSL0 and SL0 showed better performance than ICA and the t-test for feature selection. Copyright © 2015 Elsevier B.V. All rights reserved.
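The sketch below contrasts a Gaussian-based smoothed L0 surrogate (as in SL0) with a Laplacian-based one; the specific functional form of the Laplacian approximation used in LSL0 is an assumption made only for illustration.

```python
import numpy as np

def smoothed_l0_gaussian(s, sigma):
    """SL0-style surrogate: ||s||_0 ~= N - sum_i exp(-s_i^2 / (2 sigma^2))."""
    return s.size - np.sum(np.exp(-s ** 2 / (2.0 * sigma ** 2)))

def smoothed_l0_laplacian(s, sigma):
    """Laplacian-style surrogate (assumed form): ||s||_0 ~= sum_i (1 - exp(-|s_i| / sigma))."""
    return np.sum(1.0 - np.exp(-np.abs(s) / sigma))

s = np.array([0.0, 0.001, 0.5, -2.0])
for sigma in (1.0, 0.1, 0.01):
    # Both surrogates approach the true L0 count of nonzeros (here 3) as sigma shrinks.
    print(sigma, smoothed_l0_gaussian(s, sigma), smoothed_l0_laplacian(s, sigma))
```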
Quantitative Analysis of Intracellular Motility Based on Optical Flow Model
Li, Heng
2017-01-01
Analysis of cell mobility is a key issue for abnormality identification and classification in cell biology research. However, since cell deformation induced by various biological processes is random and cell protrusion is irregular, it is difficult to measure cell morphology and motility in microscopic images. To address this dilemma, we propose an improved variational optical flow model for quantitative analysis of intracellular motility, which not only extracts intracellular motion fields effectively but also deals with the optical flow computation problem at the border by taking advantage of formulations based on the L1 and L2 norms, respectively. In the energy functional of our proposed optical flow model, the data term is in the form of the L2 norm, while the smoothness term changes with regional features through an adaptive parameter, using the L1 norm near the edge of the cell and the L2 norm away from the edge. We further extract histograms of oriented optical flow (HOOF) after the optical flow field of intracellular motion is computed. Distances between different HOOFs are then calculated as intracellular motion features to grade the intracellular motion. Experimental results show that the features extracted from HOOFs provide new insights into the relationship between cell motility and particular pathological conditions. PMID:29065574
Geometric artifacts reduction for cone-beam CT via L0-norm minimization without dedicated phantoms.
Gong, Changcheng; Cai, Yufang; Zeng, Li
2018-01-01
For cone-beam computed tomography (CBCT), transversal shifts of the rotation center exist inevitably, which result in geometric artifacts in CT images. In this work, we propose a novel geometric calibration method for CBCT, which can also be used in micro-CT. The symmetry property of the sinogram is used for the first calibration, and then the L0-norm of the gradient image of the reconstructed image is used as the cost function to be minimized for the second calibration. An iterative search method is adopted to pursue the local minimum of the L0-norm minimization problem. The transversal shift value is updated with a fixed step size within a search range determined by the first calibration. In addition, a graphics processing unit (GPU)-based FDK algorithm and acceleration techniques are designed to accelerate the calibration process of the presented method. In simulation experiments, the mean absolute difference (MAD) and the standard deviation (SD) of the transversal shift value were less than 0.2 pixels between the noise-free and noisy projection images, indicating highly accurate calibration with the new method. In real data experiments, the smaller entropies of the corrected images also indicated that higher-resolution images were acquired using the corrected projection data and that textures were well preserved. The results also support the feasibility of applying the proposed method to other imaging modalities.
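As a sketch of the calibration cost described above, the snippet below counts non-negligible gradient magnitudes of a reconstructed slice, which is one way to evaluate an L0-norm-of-gradient objective over candidate transversal shifts. The reconstruction callback and the threshold are placeholders, not the paper's implementation.

```python
import numpy as np

def l0_gradient_cost(image, eps=1e-3):
    """Approximate L0 norm of the gradient image: count of non-negligible gradient magnitudes."""
    gx = np.diff(image, axis=0, append=image[-1:, :])
    gy = np.diff(image, axis=1, append=image[:, -1:])
    return int(np.count_nonzero(np.hypot(gx, gy) > eps))

def calibrate_shift(reconstruct, projections, shifts):
    """Pick the transversal shift whose reconstruction minimizes the L0-gradient cost.

    `reconstruct(projections, shift)` is a placeholder for an FDK-type reconstruction routine;
    `shifts` is the search range produced by the first (sinogram symmetry) calibration.
    """
    costs = [l0_gradient_cost(reconstruct(projections, s)) for s in shifts]
    return shifts[int(np.argmin(costs))]
```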
Low Dose CT Reconstruction via Edge-preserving Total Variation Regularization
Tian, Zhen; Jia, Xun; Yuan, Kehong; Pan, Tinsu; Jiang, Steve B.
2014-01-01
High radiation dose in CT scans increases the lifetime risk of cancer and has become a major clinical concern. Recently, iterative reconstruction algorithms with total variation (TV) regularization have been developed to reconstruct CT images from highly undersampled data acquired at low mAs levels in order to reduce the imaging dose. Nonetheless, low-contrast structures tend to be smoothed out by the TV regularization, posing a great challenge for the TV method. To solve this problem, in this work we develop an iterative CT reconstruction algorithm with edge-preserving TV regularization to reconstruct CT images from highly undersampled data obtained at low mAs levels. The CT image is reconstructed by minimizing an energy consisting of an edge-preserving TV norm and a data fidelity term posed by the x-ray projections. The edge-preserving TV term is proposed to preferentially perform smoothing only on the non-edge part of the image in order to better preserve the edges, which is realized by introducing a penalty weight into the original total variation norm. During the reconstruction process, pixels at edges are gradually identified and given small penalty weights. Our iterative algorithm is implemented on GPU to improve its speed. We test our reconstruction algorithm on a digital NCAT phantom, a physical chest phantom, and a Catphan phantom. Reconstruction results from a conventional FBP algorithm and a TV regularization method without the edge-preserving penalty are also presented for comparison. The experimental results illustrate that both the TV-based algorithm and our edge-preserving TV algorithm outperform the conventional FBP algorithm in suppressing streaking artifacts and image noise in the low-dose context. Our edge-preserving algorithm is superior to the TV-based algorithm in that it can preserve more information of low-contrast structures and therefore maintain acceptable spatial resolution. PMID:21860076
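A minimal sketch of an edge-preserving (weighted) TV term: pixels flagged as edges receive a small penalty weight so they are smoothed less. The edge indicator and the weight values are assumptions for illustration, not the weights used in the paper.

```python
import numpy as np

def edge_preserving_tv(u, edge_weight=0.1, grad_thresh=0.05):
    """Weighted total variation of image u: small weights at detected edges preserve them."""
    gx = np.diff(u, axis=0, append=u[-1:, :])
    gy = np.diff(u, axis=1, append=u[:, -1:])
    mag = np.hypot(gx, gy)
    # Assumed edge indicator: large-gradient pixels are treated as edges and penalized less.
    w = np.where(mag > grad_thresh, edge_weight, 1.0)
    return float(np.sum(w * mag))
```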
Shi, Junwei; Liu, Fei; Zhang, Guanglei; Luo, Jianwen; Bai, Jing
2014-04-01
Owing to the high degree of scattering of light through tissues, the ill-posedness of fluorescence molecular tomography (FMT) inverse problem causes relatively low spatial resolution in the reconstruction results. Unlike L2 regularization, L1 regularization can preserve the details and reduce the noise effectively. Reconstruction is obtained through a restarted L1 regularization-based nonlinear conjugate gradient (re-L1-NCG) algorithm, which has been proven to be able to increase the computational speed with low memory consumption. The algorithm consists of inner and outer iterations. In the inner iteration, L1-NCG is used to obtain the L1-regularized results. In the outer iteration, the restarted strategy is used to increase the convergence speed of L1-NCG. To demonstrate the performance of re-L1-NCG in terms of spatial resolution, simulation and physical phantom studies with fluorescent targets located with different edge-to-edge distances were carried out. The reconstruction results show that the re-L1-NCG algorithm has the ability to resolve targets with an edge-to-edge distance of 0.1 cm at a depth of 1.5 cm, which is a significant improvement for FMT.
Optimal Tikhonov regularization for DEER spectroscopy
NASA Astrophysics Data System (ADS)
Edwards, Thomas H.; Stoll, Stefan
2018-03-01
Tikhonov regularization is the most commonly used method for extracting distance distributions from experimental double electron-electron resonance (DEER) spectroscopy data. This method requires the selection of a regularization parameter, α, and a regularization operator, L. We analyze the performance of a large set of α selection methods and several regularization operators, using a test set of over half a million synthetic noisy DEER traces. These are generated from distance distributions obtained from in silico double labeling of a protein crystal structure of T4 lysozyme with the spin label MTSSL. We compare the methods and operators based on their ability to recover the model distance distributions from the noisy time traces. The results indicate that several α selection methods perform quite well, among them the Akaike information criterion and the generalized cross validation method with either the first- or second-derivative operator. They perform significantly better than currently utilized L-curve methods.
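The following sketch implements generalized cross validation for zeroth-order Tikhonov regularization (L = I) via the SVD; extending it to the first- or second-derivative operators discussed above would require a generalized SVD and is not shown. The kernel matrix and grid of candidate α values are placeholders.

```python
import numpy as np

def gcv_tikhonov(K, y, alphas):
    """Return the alpha minimizing GCV for min_p ||K p - y||^2 + alpha^2 ||p||^2."""
    U, s, _ = np.linalg.svd(K, full_matrices=False)
    uty = U.T @ y
    r0 = np.linalg.norm(y - U @ uty) ** 2      # residual component outside the column space of K
    m = K.shape[0]
    scores = []
    for a in alphas:
        f = s ** 2 / (s ** 2 + a ** 2)         # Tikhonov filter factors
        resid = np.linalg.norm((1.0 - f) * uty) ** 2 + r0
        scores.append(resid / (m - np.sum(f)) ** 2)
    return alphas[int(np.argmin(scores))]

# Example: select alpha over a logarithmic grid for a random kernel matrix.
rng = np.random.default_rng(0)
K = rng.normal(size=(200, 80))
y = K @ rng.normal(size=80) + 0.1 * rng.normal(size=200)
alpha = gcv_tikhonov(K, y, np.logspace(-3, 2, 60))
```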
DOE Office of Scientific and Technical Information (OSTI.GOV)
Akiyama, Kazunori; Fish, Vincent L.; Doeleman, Sheperd S.
We propose a new imaging technique for radio and optical/infrared interferometry. The proposed technique reconstructs the image from the visibility amplitude and closure phase, which are standard data products of short-millimeter very long baseline interferometers such as the Event Horizon Telescope (EHT) and optical/infrared interferometers, by utilizing two regularization functions: the ℓ1-norm and total variation (TV) of the brightness distribution. In the proposed method, optimal regularization parameters, which represent the sparseness and effective spatial resolution of the image, are derived from data themselves using cross-validation (CV). As an application of this technique, we present simulated observations of M87 with the EHT based on four physically motivated models. We confirm that ℓ1 + TV regularization can achieve an optimal resolution of ∼20%–30% of the diffraction limit λ/D_max, which is the nominal spatial resolution of a radio interferometer. With the proposed technique, the EHT can robustly and reasonably achieve super-resolution sufficient to clearly resolve the black hole shadow. These results make it promising for the EHT to provide an unprecedented view of the event-horizon-scale structure in the vicinity of the supermassive black hole in M87 and also the Galactic center Sgr A*.
Convex blind image deconvolution with inverse filtering
NASA Astrophysics Data System (ADS)
Lv, Xiao-Guang; Li, Fang; Zeng, Tieyong
2018-03-01
Blind image deconvolution is the process of estimating both the original image and the blur kernel from the degraded image with only partial or no information about degradation and the imaging system. It is a bilinear ill-posed inverse problem corresponding to the direct problem of convolution. Regularization methods are used to handle the ill-posedness of blind deconvolution and get meaningful solutions. In this paper, we investigate a convex regularized inverse filtering method for blind deconvolution of images. We assume that the support region of the blur object is known, as has been done in a few existing works. By studying the inverse filters of signal and image restoration problems, we observe the oscillation structure of the inverse filters. Inspired by the oscillation structure of the inverse filters, we propose to use the star norm to regularize the inverse filter. Meanwhile, we use the total variation to regularize the resulting image obtained by convolving the inverse filter with the degraded image. The proposed minimization model is shown to be convex. We employ the first-order primal-dual method for the solution of the proposed minimization model. Numerical examples for blind image restoration are given to show that the proposed method outperforms some existing methods in terms of peak signal-to-noise ratio (PSNR), structural similarity (SSIM), visual quality and time consumption.
LP-stability for the strong solutions of the Navier-Stokes equations in the whole space
NASA Astrophysics Data System (ADS)
Beirão da Veiga, H.; Secchi, P.
1985-10-01
We consider the motion of a viscous fluid filling the whole space R3, governed by the classical Navier-Stokes equations (1). Existence of global (in time) regular solutions for that system of non-linear partial differential equations is still an open problem. From both the mathematical and the physical points of view, an interesting property is the stability (or not) of the (eventual) global regular solutions. Here, we assume that v1(t,x) is a solution, with initial data a1(x). For small perturbations of a1, we want the solution v1(t,x) to be only slightly perturbed, too. Due to viscosity, it is even expected that the perturbed solution v2(t,x) approaches the unperturbed one as time goes to + infinity. This is just the result proved in this paper. To measure the distance between v1(t,x) and v2(t,x) at each time t, suitable norms are introduced (Lp-norms). For fluids filling a bounded vessel, exponential decay of the above distance is expected. Such a strong result is not reasonable for fluids filling the entire space.
He, Xiaowei; Liang, Jimin; Wang, Xiaorui; Yu, Jingjing; Qu, Xiaochao; Wang, Xiaodong; Hou, Yanbin; Chen, Duofang; Liu, Fang; Tian, Jie
2010-11-22
In this paper, we present an incomplete variables truncated conjugate gradient (IVTCG) method for bioluminescence tomography (BLT). Considering the sparse characteristic of the light source and the insufficient surface measurements in BLT scenarios, we combine a sparseness-inducing (ℓ1 norm) regularization term with a quadratic error term in the IVTCG-based framework for solving the inverse problem. By limiting the number of variables updated at each iteration and combining a variable splitting strategy to find the search direction more efficiently, the method obtains fast and stable source reconstruction, even without a priori information on the permissible source region and multispectral measurements. Numerical experiments on a mouse atlas validate the effectiveness of the method. In vivo mouse experimental results further indicate its potential for a practical BLT system.
Yin, Yunlu; Yu, Hongbo; Su, Zhongbin; Zhang, Yuan; Zhou, Xiaolin
2017-09-01
Sanctions are used by almost all known human societies to enforce fairness norms in resource distribution. Previous studies have consistently shown that the lateral prefrontal cortex (lPFC) and the adjacent orbitofrontal cortex (lOFC) play a causal role in mediating the effect of sanction threat on norm compliance. However, most of these studies were conducted in the gain domain, in which resources are distributed. Little is known about the mechanisms underlying norm compliance in the loss domain, in which individual sacrifices are needed. Here we employed a modified version of the dictator game (DG) and high-definition transcranial direct current stimulation (HD-tDCS) to investigate to what extent lPFC/lOFC is involved in norm compliance (with and without sanction threat) in both gain- and loss-sharing contexts. Participants allocated a fixed total amount of monetary gain or loss between themselves and an anonymous partner in multiple rounds of the game. A computer program randomly decided whether a given round involved sanction threat for the participants. Results showed that disruption of the right lPFC/lOFC by tDCS increased voluntary norm compliance in the gain domain, but not in the loss domain; tDCS on lPFC/lOFC had no effect on compliance under sanction threat in either the gain or loss domain. Our findings reveal the context-dependent nature of norm compliance and differential roles of lPFC/lOFC in norm compliance in the gain and loss domains. © 2017 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
Rank-Optimized Logistic Matrix Regression toward Improved Matrix Data Classification.
Zhang, Jianguang; Jiang, Jianmin
2018-02-01
While existing logistic regression suffers from overfitting and often fails to consider structural information, we propose a novel matrix-based logistic regression to overcome these weaknesses. In the proposed method, 2D matrices are directly used to learn two groups of parameter vectors along each dimension without vectorization, which allows the proposed method to fully exploit the underlying structural information embedded inside the 2D matrices. Further, we add a joint [Formula: see text]-norm on the two parameter matrices, which are organized by aligning each group of parameter vectors in columns. This added co-regularization term has two roles: enhancing the effect of regularization and optimizing the rank during the learning process. With our proposed fast iterative solution, we carried out extensive experiments. The results show that, in comparison to both traditional tensor-based methods and vector-based regression methods, our proposed solution achieves better performance for matrix data classification.
A test of the perceived norms model to explain drinking patterns among university student athletes.
Thombs, D L
2000-09-01
The author tested the ability of perceived drinking norms to discriminate among drinking patterns in a sample of National Collegiate Athletic Association (NCAA) Division I student athletes. He used an anonymous questionnaire to assess 297 athletes, representing 18 teams, at a public university in the Midwest. Alcohol use patterns showed considerable variation, with many athletes (37.1%) abstaining during their season of competition. A discriminant function analysis revealed that higher levels of alcohol involvement are disproportionately found among athletes who began drinking regularly at an early age. Perceived drinking norms were less important in the discrimination of student athlete drinker groups. Women and those with higher grade point averages were somewhat more likely to refrain from in-season drinking than other survey respondents.
Xi, Jianing; Wang, Minghui; Li, Ao
2018-06-05
Discovery of mutated driver genes is one of the primary objectives in studying tumorigenesis. To discover relatively infrequently mutated driver genes from somatic mutation data, many existing methods incorporate an interaction network as prior information. However, prior information from mRNA expression patterns, which has also proven highly informative of cancer progression, is not exploited by these existing network-based methods. To incorporate prior information from both the interaction network and mRNA expression, we propose a robust and sparse co-regularized nonnegative matrix factorization to discover driver genes from mutation data. Furthermore, our framework also applies Frobenius norm regularization to overcome the overfitting issue. A sparsity-inducing penalty is employed to obtain sparse scores in the gene representations, of which the top-scored genes are selected as driver candidates. Evaluation experiments with known benchmark genes indicate that the performance of our method benefits from the two types of prior information. Our method also outperforms the existing network-based methods, and detects some driver genes that are not predicted by the competing methods. In summary, our proposed method can improve the performance of driver gene discovery by effectively incorporating prior information from the interaction network and mRNA expression patterns into a robust and sparse co-regularized matrix factorization framework.
Backward semi-linear parabolic equations with time-dependent coefficients and local Lipschitz source
NASA Astrophysics Data System (ADS)
Nho Hào, Dinh; Van Duc, Nguyen; Van Thang, Nguyen
2018-05-01
Let H be a Hilbert space with inner product ⟨·,·⟩ and norm ‖·‖, and let A(t) be a positive self-adjoint unbounded time-dependent operator on H. We establish stability estimates of Hölder type and propose a regularization method with error estimates of Hölder type for the ill-posed backward semi-linear parabolic equation with the source function f satisfying a local Lipschitz condition.
Regularization of the Perturbed Spatial Restricted Three-Body Problem by L-Transformations
NASA Astrophysics Data System (ADS)
Poleshchikov, S. M.
2018-03-01
Equations of motion for the perturbed circular restricted three-body problem have been regularized in canonical variables in a moving coordinate system. Two different L-matrices of the fourth order are used in the regularization. Conditions for generalized symplecticity of the constructed transform have been checked. In the unperturbed case, the regular equations have a polynomial structure. The regular equations have been numerically integrated using the Runge-Kutta-Fehlberg method. The results of numerical experiments are given for the Earth-Moon system parameters taking into account the perturbation of the Sun for different L-matrices.
A rapid and simple determination of caffeine in teas, coffees and eight beverages.
Sereshti, Hassan; Samadi, Soheila
2014-09-01
Caffeine was extracted and preconcentrated by the simple, fast and green method of dispersive liquid-liquid microextraction (DLLME) and analysed by gas chromatography-nitrogen phosphorus detection (GC-NPD). The influence of the main parameters affecting the extraction efficiency was investigated and optimised. Under the optimal conditions, the method was successfully applied to the determination of caffeine in different real samples, including five types of tea (green, black, white, oolong teas and tea bag), two kinds of coffee (Nescafe coffee and coffee), and eight beverages (regular Coca Cola, Coca Cola zero, regular Pepsi, Pepsi max, Sprite, 7up, Red Bull and Hype). The limit of detection (LOD) and limit of quantification (LOQ) were 0.02 and 0.05 μg mL(-1), respectively. The linear dynamic range (LDR) was 0.05-500 μg mL(-1) and the determination coefficient (R(2)) was 0.9990. The relative standard deviation (RSD) was 3.2% (n=5, C=1 μg mL(-1)). Copyright © 2014 Elsevier Ltd. All rights reserved.
Zhang, Lingli; Zeng, Li; Guo, Yumeng
2018-01-01
Restricted by the scanning environment in some CT imaging modalities, the acquired projection data are usually incomplete, which may lead to a limited-angle reconstruction problem. Image quality then usually suffers from slope artifacts. The objective of this study is to first investigate the distorted regions of the reconstructed images affected by slope artifacts and then present a new iterative reconstruction method to address the limited-angle X-ray CT reconstruction problem. The framework of the new method exploits the structural similarity between the prior image and the reconstructed image, aiming to compensate for the distorted edges. Specifically, the new method utilizes l0 regularization and wavelet tight framelets to suppress the slope artifacts and pursue sparsity. The new method includes the following four steps: (1) address the data fidelity using SART; (2) compensate for the slope artifacts due to the missing projection data using the prior image and modified nonlocal means (PNLM); (3) utilize l0 regularization to suppress the slope artifacts and pursue the sparsity of the wavelet coefficients of the transformed image by iterative hard thresholding (l0W); and (4) apply an inverse wavelet transform to reconstruct the image. In summary, this method is referred to as "l0W-PNLM". Numerical implementations showed that the presented l0W-PNLM was superior in suppressing the slope artifacts while preserving the edges of some features, as compared to commercial and other popular investigative algorithms. When the image to be reconstructed is inconsistent with the prior image, the new method can avoid or minimize the distorted edges in the reconstructed images. Quantitative assessments also showed that the new method obtained the highest image quality compared to the existing algorithms. This study demonstrated that the presented l0W-PNLM yields higher image quality due to a number of unique characteristics: (1) it utilizes the structural similarity between the reconstructed image and the prior image to correct the edges distorted by slope artifacts; (2) it adopts wavelet tight frames to obtain the first and higher derivatives in several directions and levels; and (3) it takes advantage of l0 regularization to promote the sparsity of wavelet coefficients, which is effective for the inhibition of slope artifacts. Therefore, the new method can address the limited-angle CT reconstruction problem effectively and has practical significance.
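A minimal sketch of the iterative-hard-thresholding ingredient in step (3): hard-threshold transform coefficients to promote sparsity under an l0 penalty. The forward/inverse transform pair is a placeholder standing in for the wavelet tight framelets of the paper; any orthonormal wavelet transform would serve for the sketch.

```python
import numpy as np

def hard_threshold(c, lam):
    """Proximal operator of the l0 penalty: keep a coefficient only if |c| > lam."""
    return np.where(np.abs(c) > lam, c, 0.0)

def l0_wavelet_step(image, forward, inverse, lam=0.02):
    """One sparsity step: transform, hard-threshold the coefficients, transform back.

    `forward`/`inverse` are placeholders for the analysis/synthesis operators of the
    wavelet tight frame used in the paper.
    """
    coeffs = forward(image)
    return inverse(hard_threshold(coeffs, lam))
```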
High-order graph matching based feature selection for Alzheimer's disease identification.
Liu, Feng; Suk, Heung-Il; Wee, Chong-Yaw; Chen, Huafu; Shen, Dinggang
2013-01-01
One of the main limitations of l1-norm feature selection is that it focuses on estimating the target vector for each sample individually without considering relations with other samples. However, it's believed that the geometrical relation among target vectors in the training set may provide useful information, and it would be natural to expect that the predicted vectors have similar geometric relations as the target vectors. To overcome these limitations, we formulate this as a graph-matching feature selection problem between a predicted graph and a target graph. In the predicted graph a node is represented by predicted vector that may describe regional gray matter volume or cortical thickness features, and in the target graph a node is represented by target vector that include class label and clinical scores. In particular, we devise new regularization terms in sparse representation to impose high-order graph matching between the target vectors and the predicted ones. Finally, the selected regional gray matter volume and cortical thickness features are fused in kernel space for classification. Using the ADNI dataset, we evaluate the effectiveness of the proposed method and obtain the accuracies of 92.17% and 81.57% in AD and MCI classification, respectively.
Metric freeness and projectivity for classical and quantum normed modules
DOE Office of Scientific and Technical Information (OSTI.GOV)
Helemskii, A Ya
2013-07-31
In functional analysis, there are several diverse approaches to the notion of projective module. We show that a certain general categorical scheme contains all basic versions as special cases. In this scheme, the notion of free object comes to the foreground, and, in the best categories, projective objects are precisely retracts of free ones. We are especially interested in the so-called metric version of projectivity and characterize the metrically free classical and quantum (= operator) normed modules. Informally speaking, so-called extremal projectivity, which was known earlier, is interpreted as a kind of 'asymptotical metric projectivity'. In addition, we answer the following specific question in the geometry of normed spaces: what is the structure of metrically projective modules in the simplest case of normed spaces? We prove that metrically projective normed spaces are precisely the subspaces of l1(M) (where M is a set) that are denoted by l1^0(M) and consist of finitely supported functions. Thus, in this case, projectivity coincides with freeness. Bibliography: 28 titles.
Weighted low-rank sparse model via nuclear norm minimization for bearing fault detection
NASA Astrophysics Data System (ADS)
Du, Zhaohui; Chen, Xuefeng; Zhang, Han; Yang, Boyuan; Zhai, Zhi; Yan, Ruqiang
2017-07-01
It is a fundamental task in the machine fault diagnosis community to detect impulsive signatures generated by localized faults of bearings. The main goal of this paper is to exploit the low-rank physical structure of periodic impulsive features and to establish a weighted low-rank sparse model for bearing fault detection. The proposed model mainly consists of three basic components: an adaptive partition window, a nuclear norm regularization and a weighted sequence. Firstly, owing to the periodic repetition mechanism of the impulsive feature, an adaptive partition window can be designed to transform the impulsive feature into a data matrix. The highlight of the partition window is to accumulate all local feature information and align it. Then, all columns of the data matrix share similar waveforms and a core physical phenomenon arises, i.e., the singular values of the data matrix demonstrate a sparse distribution pattern. Therefore, a nuclear norm regularization is enforced to capture that sparse prior. However, the nuclear norm regularization treats all singular values equally and thus ignores the basic fact that larger singular values carry more information about the impulsive features and should be preserved as much as possible. Therefore, a weighted sequence with adaptively tuned weights inversely proportional to the singular value amplitude is adopted to guarantee the distribution consistency of large singular values. On the other hand, the proposed model is difficult to solve due to its non-convexity, and thus a new algorithm is developed to search for a satisfactory stationary solution by alternately applying a proximal operator step and least-squares fitting. Moreover, the sensitivity analysis and selection principles of the algorithmic parameters are comprehensively investigated through a set of numerical experiments, which shows that the proposed method is robust and has only a few adjustable parameters. Lastly, the proposed model is applied to wind turbine (WT) bearing fault detection and its effectiveness is sufficiently verified. Compared with the current popular bearing fault diagnosis techniques, wavelet analysis and spectral kurtosis, our model achieves a higher diagnostic accuracy.
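A minimal sketch of the weighted singular-value shrinkage implied by the model: weights inversely proportional to singular value magnitude shrink small (noise-dominated) singular values more and preserve the large ones. The specific weighting rule and the full proximal/least-squares alternation are not reproduced; columns of the data matrix are assumed to be consecutive signal windows of one estimated impulse period, aligned as described above.

```python
import numpy as np

def weighted_svt(X, lam=1.0, eps=1e-6):
    """Weighted singular value thresholding of a data matrix built from aligned impulse windows."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    w = lam / (s + eps)                 # larger singular values get smaller weights
    s_shrunk = np.maximum(s - w, 0.0)   # soft-threshold each singular value by its own weight
    return (U * s_shrunk) @ Vt
```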
NASA Astrophysics Data System (ADS)
Li, Meng; Gu, Xian-Ming; Huang, Chengming; Fei, Mingfa; Zhang, Guoyu
2018-04-01
In this paper, a fast linearized conservative finite element method is studied for solving the strongly coupled nonlinear fractional Schrödinger equations. We prove that the scheme preserves both the mass and the energy, which are defined by virtue of some recursion relationships. Using the Sobolev inequalities and then employing mathematical induction, the discrete scheme is proved to be unconditionally convergent in the sense of the L2-norm and the H^{α/2}-norm, which means that there are no constraints on the grid ratios. A priori bounds of the discrete solution in the L2-norm and the L∞-norm are also obtained. Moreover, we propose an iterative algorithm in which the coefficient matrix is independent of the time level, and thus it leads to Toeplitz-like linear systems that can be efficiently solved by Krylov subspace solvers with circulant preconditioners. This method can reduce the memory requirement of the proposed linearized finite element scheme from O(M^2) to O(M) and the computational complexity from O(M^3) to O(M log M) in each iterative step, where M is the number of grid nodes. Finally, numerical results are presented to verify the correctness of the theoretical analysis, simulate the collision of two solitary waves, and show the utility of the fast numerical solution techniques.
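The sketch below shows the two FFT-based ingredients behind an O(M log M) cost of this kind: a fast Toeplitz matrix-vector product via circulant embedding and the application of Strang's circulant preconditioner. It is a generic illustration under those assumptions, not the paper's specific scheme; inside a Krylov solver (e.g. CG or GMRES) the matvec and the preconditioner solve would be called at every iteration.

```python
import numpy as np

def toeplitz_matvec(c, r, x):
    """Toeplitz matrix (first column c, first row r) times x in O(M log M) via circulant embedding."""
    c, r, x = (np.asarray(a, dtype=float) for a in (c, r, x))
    m = c.size
    col = np.concatenate([c, [0.0], r[1:][::-1]])          # first column of a 2M x 2M circulant
    y = np.fft.ifft(np.fft.fft(col) * np.fft.fft(np.concatenate([x, np.zeros(m)])))
    return y[:m].real

def strang_precond_solve(c, y):
    """Apply the inverse of Strang's circulant preconditioner built from a symmetric Toeplitz column c."""
    c, y = np.asarray(c, dtype=float), np.asarray(y, dtype=float)
    m = c.size
    s = c.copy()
    half = m // 2
    s[half + 1:] = c[1:m - half][::-1]                     # copy the central diagonals circularly
    return np.fft.ifft(np.fft.fft(y) / np.fft.fft(s)).real
```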
Quadratic obstructions to small-time local controllability for scalar-input systems
NASA Astrophysics Data System (ADS)
Beauchard, Karine; Marbach, Frédéric
2018-03-01
We consider nonlinear finite-dimensional scalar-input control systems in the vicinity of an equilibrium. When the linearized system is controllable, the nonlinear system is smoothly small-time locally controllable: for any m > 0 and T > 0, the state can reach a whole neighborhood of the equilibrium at time T with controls arbitrarily small in the C^m-norm. When the linearized system is not controllable, we prove that either the state is constrained to live within a smooth strict manifold, up to a cubic residual, or the quadratic order adds a signed drift with respect to it. This drift holds along a Lie bracket of length (2k + 1), is quantified in terms of an H^{-k}-norm of the control, holds for controls small in the W^{2k,∞}-norm, and these spaces are optimal. Our proof requires only C^3 regularity of the vector field. This work underlines the importance of the norm used in the smallness assumption on the control, even in finite dimension.
Social influences, social norms, social support, and smoking behavior among adolescent workers.
Fagan, P; Eisenberg, M; Stoddard, A M; Frazier, L; Sorensen, G
2001-01-01
To examine the relationships between worksite interpersonal influences and smoking and quitting behavior among adolescent workers. The cross-sectional survey assessed factors influencing tobacco use behavior. During the fall of 1998, data were collected from 10 grocery stores in Massachusetts that were owned and managed by the same company. Eligible participants included 474 working adolescents ages 15 to 18. Eighty-three percent of workers (n = 379) completed the survey. The self-report questionnaire assessed social influences, social norms, social support, friendship networks, stage of smoking and quitting behavior, employment patterns, and demographic factors. Thirty-five percent of respondents were never smokers, 21% experimental, 5% occasional, 18% regular, and 23% former smokers. Using analysis of variance (ANOVA), results indicate that regular smokers were 30% more likely than experimental or occasional smokers to report coworker encouragement to quit (p = .0002). Compared with regular smokers, never smokers were 15% more likely to report greater nonacceptability of smoking (p = .01). χ2 tests of association revealed no differences in friendship networks by stage of smoking. These data provide evidence for the need to further explore social factors inside and outside the work environment that influence smoking and quitting behavior among working teens. Interpretations of the data are limited because of cross-sectional and self-report data collection methods used in one segment of the retail sector.
NASA Astrophysics Data System (ADS)
Zhai, Guang; Shirzaei, Manoochehr
2017-12-01
Geodetic observations of surface deformation associated with volcanic activities can be used to constrain volcanic source parameters and their kinematics. Simple analytical models, such as point and spherical sources, are widely used to model deformation data. The inherent nature of oversimplified model geometries makes them unable to explain fine details of surface deformation. Current nonparametric, geometry-free inversion approaches resolve the distributed volume change, assuming it varies smoothly in space, which may detect artificial volume change outside magmatic source regions. To obtain a physically meaningful representation of an irregular volcanic source, we devise a new sparsity-promoting modeling scheme assuming active magma bodies are well-localized melt accumulations, namely, outliers in the background crust. First, surface deformation data are inverted using a hybrid L1- and L2-norm regularization scheme to solve for sparse volume change distributions. Next, a boundary element method is implemented to solve for the displacement discontinuity distribution of the reservoir, which satisfies a uniform pressure boundary condition. The inversion approach is thoroughly validated using benchmark and synthetic tests, of which the results show that source dimension, depth, and shape can be recovered appropriately. We apply this modeling scheme to deformation observed at Kilauea summit for periods of uplift and subsidence leading to and following the 2007 Father's Day event. We find that the magmatic source geometries for these periods are statistically distinct, which may be an indicator that magma is released from isolated compartments due to large differential pressure leading to the rift intrusion.
Estimation and Control with Relative Measurements: Algorithms and Scaling Laws
2007-09-01
eigenvector of L^{-1} corresponding to its largest eigenvalue. Since L^{-1} is a positive matrix, Perron-Frobenius theory tells us that |u1| := {|u11|...the Frobenius norm of a matrix, and a linear vector space SV as the space of all bounded node-functions with respect to the above defined norm...‖j_e‖²_F where Eu is the set of edges in E that are incident on u. It can be shown from the relationship between the Frobenius norm and the singular
Zhang, Xiaodong; Jing, Shasha; Gao, Peiyi; Xue, Jing; Su, Lu; Li, Weiping; Ren, Lijie; Hu, Qingmao
2016-01-01
Segmentation of infarcts at hyperacute stage is challenging as they exhibit substantial variability which may even be hard for experts to delineate manually. In this paper, a sparse representation based classification method is explored. For each patient, four volumetric data items including three volumes of diffusion weighted imaging and a computed asymmetry map are employed to extract patch features which are then fed to dictionary learning and classification based on sparse representation. Elastic net is adopted to replace the traditional L0-norm/L1-norm constraints on sparse representation to stabilize sparse code. To decrease computation cost and to reduce false positives, regions-of-interest are determined to confine candidate infarct voxels. The proposed method has been validated on 98 consecutive patients recruited within 6 hours from onset. It is shown that the proposed method could handle well infarcts with intensity variability and ill-defined edges to yield significantly higher Dice coefficient (0.755 ± 0.118) than the other two methods and their enhanced versions by confining their segmentations within the regions-of-interest (average Dice coefficient less than 0.610). The proposed method could provide a potential tool to quantify infarcts from diffusion weighted imaging at hyperacute stage with accuracy and speed to assist the decision making especially for thrombolytic therapy.
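A minimal sketch of elastic-net sparse coding of a patch feature against a learned dictionary, here via scikit-learn; the dictionaries, penalty weights, and the residual-based classification step are placeholder assumptions rather than the study's pipeline.

```python
import numpy as np
from sklearn.linear_model import ElasticNet

def elastic_net_code(D, x, alpha=0.01, l1_ratio=0.5):
    """Sparse code of patch feature x over dictionary D (columns = atoms) with an elastic-net penalty."""
    model = ElasticNet(alpha=alpha, l1_ratio=l1_ratio, fit_intercept=False, max_iter=5000)
    model.fit(D, x)
    return model.coef_

def classify_by_residual(D_classes, x):
    """Assign x to the class whose sub-dictionary reconstructs it best from its sparse code."""
    residuals = []
    for D in D_classes:                       # one sub-dictionary per class (e.g. infarct vs. normal tissue)
        a = elastic_net_code(D, x)
        residuals.append(np.linalg.norm(x - D @ a))
    return int(np.argmin(residuals))
```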
The importance of being fractional in mixing: optimal choice of the index s in H-s norm
NASA Astrophysics Data System (ADS)
Vermach, Lukas; Caulfield, C. P.
2015-11-01
A natural measure of the homogeneity of a mixture is the variance of the concentration field, which in the case of a zero-mean field is the L2-norm. Mathew et al. (Physica D, 2005) introduced a new multi-scale measure to quantify mixing referred to as the mix-norm, which is equivalent to the H^{-1/2} norm, the Sobolev norm of negative fractional index. Unlike the L2-norm, the mix-norm is not conserved by the advection equation and thus captures mixing even in non-diffusive systems. Furthermore, the mix-norm is consistent with the ergodic definition of mixing, and Lin et al. (JFM, 2011) showed that this property extends to any norm from the class H^{-s}, s > 0. We consider a zero-mean passive scalar field organised into two layers of different concentrations advected by a flow field in a torus. We solve two non-linear optimisation problems. We identify the optimal initial perturbation of the velocity field with given initial energy as well as the optimal forcing with given total action (the time integral of the kinetic energy of the flow), which both yield maximal mixing by a target time horizon. We analyse the sensitivity of the results with respect to variation in s and thus address the importance of the choice of the fractional index. This work was supported by the UK Engineering and Physical Sciences Research Council (EPSRC) grant EP/H023348/1 for the University of Cambridge Centre for Doctoral Training, the Cambridge Centre for Analysis.
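A minimal sketch of evaluating an H^{-s} (negative-index Sobolev) norm of a zero-mean periodic scalar field via the FFT; the weighting used here, |k|^{-2s} with the zero mode dropped, is one common convention and may differ in detail from the mix-norm definition of the cited works.

```python
import numpy as np

def sobolev_norm(c, s=0.5, L=2.0 * np.pi):
    """H^{-s} norm of a zero-mean field c on a periodic [0, L)^2 box, computed spectrally."""
    n = c.shape[0]
    k = np.fft.fftfreq(n, d=L / n) * 2.0 * np.pi     # angular wavenumbers
    KX, KY = np.meshgrid(k, k, indexing="ij")
    k2 = KX ** 2 + KY ** 2
    c_hat = np.fft.fft2(c) / c.size
    weight = np.zeros_like(k2)
    mask = k2 > 0.0
    weight[mask] = k2[mask] ** (-s)                  # |k|^(-2s); the (zero) mean mode is dropped
    return float(np.sqrt(np.sum(weight * np.abs(c_hat) ** 2)))

# Stirring a two-layer field into finer filaments lowers the H^{-1/2} norm
# even though the L2 norm of the concentration is unchanged.
x = np.linspace(0.0, 2.0 * np.pi, 128, endpoint=False)
X, Y = np.meshgrid(x, x, indexing="ij")
print(sobolev_norm(np.sign(np.sin(Y))), sobolev_norm(np.sign(np.sin(8.0 * Y))))
```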
Cross-label Suppression: a Discriminative and Fast Dictionary Learning with Group Regularization.
Wang, Xiudong; Gu, Yuantao
2017-05-10
This paper addresses image classification through learning a compact and discriminative dictionary efficiently. Given a structured dictionary with each atom (column of the dictionary matrix) related to some label, we propose a cross-label suppression constraint to enlarge the difference among representations for different classes. Meanwhile, we introduce group regularization to enforce representations to preserve the label properties of the original samples, meaning that representations for the same class are encouraged to be similar. With the cross-label suppression, we do not resort to the frequently-used ℓ0-norm or ℓ1-norm for coding, and obtain computational efficiency without losing discriminative power for categorization. Moreover, two simple classification schemes are also developed to take full advantage of the learnt dictionary. Extensive experiments on six data sets covering face recognition, object categorization, scene classification, texture recognition and sport action categorization are conducted, and the results show that the proposed approach can outperform many recently presented dictionary algorithms in both recognition accuracy and computational efficiency.
Exploring local regularities for 3D object recognition
NASA Astrophysics Data System (ADS)
Tian, Huaiwen; Qin, Shengfeng
2016-11-01
In order to find better simplicity measurements for 3D object recognition, a new set of local regularities is developed and tested in a stepwise 3D reconstruction method, including localized minimizing standard deviation of angles(L-MSDA), localized minimizing standard deviation of segment magnitudes(L-MSDSM), localized minimum standard deviation of areas of child faces (L-MSDAF), localized minimum sum of segment magnitudes of common edges (L-MSSM), and localized minimum sum of areas of child face (L-MSAF). Based on their effectiveness measurements in terms of form and size distortions, it is found that when two local regularities: L-MSDA and L-MSDSM are combined together, they can produce better performance. In addition, the best weightings for them to work together are identified as 10% for L-MSDSM and 90% for L-MSDA. The test results show that the combined usage of L-MSDA and L-MSDSM with identified weightings has a potential to be applied in other optimization based 3D recognition methods to improve their efficacy and robustness.
NASA Technical Reports Server (NTRS)
Cai, Zhiqiang; Manteuffel, Thomas A.; McCormick, Stephen F.
1996-01-01
In this paper, we study the least-squares method for the generalized Stokes equations (including linear elasticity) based on the velocity-vorticity-pressure formulation in d = 2 or 3 dimensions. The least-squares functional is defined in terms of the sum of the L^2- and H^{-1}-norms of the residual equations, which is weighted appropriately by the Reynolds number. Our approach for establishing ellipticity of the functional does not use ADN theory, but is founded more on basic principles. We also analyze the case where the H^{-1}-norm in the functional is replaced by a discrete functional to make the computation feasible. We show that the resulting algebraic equations can be uniformly preconditioned by well-known techniques.
Zhang, Li; Zhou, WeiDa
2013-12-01
This paper deals with fast methods for training a 1-norm support vector machine (SVM). First, we define a specific class of linear programming with many sparse constraints, i.e., row-column sparse constraint linear programming (RCSC-LP). By nature, the 1-norm SVM is a type of RCSC-LP. In order to construct subproblems for RCSC-LP and solve them, a family of row-column generation (RCG) methods is introduced. RCG methods belong to a category of decomposition techniques, and perform row and column generation in a parallel fashion. In particular, for the 1-norm SVM, the maximum size of the subproblems of RCG is identical to the number of support vectors (SVs). We also introduce a semi-deleting rule for RCG methods and prove the convergence of RCG methods when using the semi-deleting rule. Experimental results on toy data and real-world datasets illustrate that it is efficient to use RCG to train the 1-norm SVM, especially when the number of SVs is small. Copyright © 2013 Elsevier Ltd. All rights reserved.
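A minimal sketch of posing the 1-norm SVM as a linear program and solving it directly with scipy's linprog (splitting w into nonnegative parts); this illustrates the LP structure behind RCSC-LP but not the row-column generation scheme itself. The toy data and the value of C are assumptions.

```python
import numpy as np
from scipy.optimize import linprog

def one_norm_svm(X, y, C=1.0):
    """Solve min ||w||_1 + C * sum(xi) s.t. y_i (w.x_i + b) >= 1 - xi_i, xi >= 0 as an LP."""
    n, d = X.shape
    # Variables z = [u (d), v (d), b (1), xi (n)] with w = u - v and u, v >= 0.
    c = np.concatenate([np.ones(2 * d), [0.0], C * np.ones(n)])
    Yx = y[:, None] * X
    A_ub = np.hstack([-Yx, Yx, -y[:, None], -np.eye(n)])   # encodes -(y_i w.x_i + y_i b + xi_i) <= -1
    b_ub = -np.ones(n)
    bounds = [(0, None)] * (2 * d) + [(None, None)] + [(0, None)] * n
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    z = res.x
    return z[:d] - z[d:2 * d], z[2 * d]                    # (w, b)

# Toy separable example.
X = np.array([[0.0, 1.0], [1.0, 2.0], [2.0, 0.5], [3.0, 1.5]])
y = np.array([1.0, 1.0, -1.0, -1.0])
w, b = one_norm_svm(X, y, C=10.0)
```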
An experimental comparison of various methods of nearfield acoustic holography
Chelliah, Kanthasamy; Raman, Ganesh; Muehleisen, Ralph T.
2017-05-19
An experimental comparison of four different methods of nearfield acoustic holography (NAH) is presented in this study for planar acoustic sources. The four NAH methods considered in this study are based on: (1) spatial Fourier transform, (2) equivalent sources model, (3) boundary element methods and (4) statistically optimized NAH. Two dimensional measurements were obtained at different distances in front of a tonal sound source and the NAH methods were used to reconstruct the sound field at the source surface. Reconstructed particle velocity and acoustic pressure fields presented in this study showed that the equivalent sources model based algorithm along with Tikhonov regularization provided the best localization of the sources. Reconstruction errors were found to be smaller for the equivalent sources model based algorithm and the statistically optimized NAH algorithm. Effect of hologram distance on the performance of various algorithms is discussed in detail. The study also compares the computational time required by each algorithm to complete the comparison. Four different regularization parameter choice methods were compared. The L-curve method provided more accurate reconstructions than the generalized cross validation and the Morozov discrepancy principle. Finally, the performance of fixed parameter regularization was comparable to that of the L-curve method.
Komssi, S; Huttunen, J; Aronen, H J; Ilmoniemi, R J
2004-03-01
Dipole models, which are frequently used in attempts to solve the electromagnetic inverse problem, require explicit a priori assumptions about the cerebral current sources. This is not the case for solutions based on minimum-norm estimates. In the present study, we evaluated the spatial accuracy of the L2 minimum-norm estimate (MNE) in realistic noise conditions by assessing its ability to localize sources of evoked responses at the primary somatosensory cortex (SI). Multichannel somatosensory evoked potentials (SEPs) and magnetic fields (SEFs) were recorded in 5 subjects while stimulating the median and ulnar nerves at the left wrist. A Tikhonov-regularized L2-MNE, constructed on a spherical surface from the SEP signals, was compared with an equivalent current dipole (ECD) solution obtained from the SEFs. Primarily tangential current sources accounted for both SEP and SEF distributions at around 20 ms (N20/N20m) and 70 ms (P70/P70m); these deflections were chosen for comparative analysis. The distances between the locations of the maximum current densities obtained from the MNE and the locations of the ECDs were on average 12-13 mm for both deflections and both nerves stimulated. In accordance with the somatotopical order of SI, both the MNE and the ECD tended to localize median nerve activation more laterally than ulnar nerve activation for the N20/N20m deflection. Simulation experiments further indicated that, with a proper estimate of the source depth and with a good fit of the head model, the MNE can reach a mean accuracy of 5 mm in 0.2-microV root-mean-square noise. When compared with previously reported localizations based on dipole modelling of SEPs, it appears that equally accurate localization of SI can be obtained with the MNE. The MNE can be used to verify parametric source modelling results. Having a relatively good localization accuracy and requiring minimal assumptions, the MNE may be useful for the localization of poorly known activity distributions and for tracking activity changes between brain areas as a function of time.
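A minimal sketch of a Tikhonov-regularized L2 minimum-norm estimate for a linear lead-field model b = L j + noise; the lead-field matrix, the regularization value, and the toy data are placeholders rather than the study's actual settings.

```python
import numpy as np

def l2_mne(leadfield, data, lam=0.1):
    """Tikhonov-regularized minimum-norm estimate: j = L^T (L L^T + lam^2 I)^(-1) b."""
    n_sensors = leadfield.shape[0]
    gram = leadfield @ leadfield.T + lam ** 2 * np.eye(n_sensors)
    return leadfield.T @ np.linalg.solve(gram, data)

# Toy usage: 32 electrodes, 500 candidate cortical sources, one time sample.
rng = np.random.default_rng(1)
L = rng.normal(size=(32, 500))
b = L @ (np.eye(500)[:, 137] * 1e-2) + 1e-3 * rng.normal(size=32)   # one active source plus noise
j_hat = l2_mne(L, b, lam=0.05)
print(int(np.argmax(np.abs(j_hat))))                                 # location of the maximum current density
```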
Shi, Junwei; Zhang, Bin; Liu, Fei; Luo, Jianwen; Bai, Jing
2013-09-15
For the ill-posed fluorescent molecular tomography (FMT) inverse problem, the L1 regularization can protect the high-frequency information like edges while effectively reduce the image noise. However, the state-of-the-art L1 regularization-based algorithms for FMT reconstruction are expensive in memory, especially for large-scale problems. An efficient L1 regularization-based reconstruction algorithm based on nonlinear conjugate gradient with restarted strategy is proposed to increase the computational speed with low memory consumption. The reconstruction results from phantom experiments demonstrate that the proposed algorithm can obtain high spatial resolution and high signal-to-noise ratio, as well as high localization accuracy for fluorescence targets.
Higher-order Fourier analysis over finite fields and applications
NASA Astrophysics Data System (ADS)
Hatami, Pooya
Higher-order Fourier analysis is a powerful tool in the study of problems in additive and extremal combinatorics, for instance the study of arithmetic progressions in primes, where the traditional Fourier analysis comes short. In recent years, higher-order Fourier analysis has found multiple applications in computer science in fields such as property testing and coding theory. In this thesis, we develop new tools within this theory with several new applications such as a characterization theorem in algebraic property testing. One of our main contributions is a strong near-equidistribution result for regular collections of polynomials. The densities of small linear structures in subsets of Abelian groups can be expressed as certain analytic averages involving linear forms. Higher-order Fourier analysis examines such averages by approximating the indicator function of a subset by a function of a bounded number of polynomials. Then, to approximate the average, it suffices to know the joint distribution of the polynomials applied to the linear forms. We prove a near-equidistribution theorem that describes these distributions for the group F_p^n when p is a fixed prime. This fundamental fact was previously known only under various extra assumptions about the linear forms or the field size. We use this near-equidistribution theorem to settle a conjecture of Gowers and Wolf on the true complexity of systems of linear forms. Our next application is towards a characterization of testable algebraic properties. We prove that every locally characterized affine-invariant property of functions f : F_p^n → R with n ∈ N is testable. In fact, we prove that any such property P is proximity-obliviously testable. More generally, we show that any affine-invariant property that is closed under subspace restrictions and has "bounded complexity" is testable. We also prove that any property that can be described as the property of decomposing into a known structure of low-degree polynomials is locally characterized and is, hence, testable. We discuss several notions of regularity which allow us to deduce algorithmic versions of various regularity lemmas for polynomials by Green and Tao and by Kaufman and Lovett. We show that our algorithmic regularity lemmas for polynomials imply algorithmic versions of several results relying on regularity, such as decoding Reed-Muller codes beyond the list decoding radius (for certain structured errors), and prescribed polynomial decompositions. Finally, motivated by the definition of Gowers norms, we investigate norms defined by different systems of linear forms. We give necessary conditions on the structure of systems of linear forms that define norms. We prove that such norms can be one of only two types, and, assuming that |F_p| is sufficiently large, they are essentially equivalent to either a Gowers norm or an Lp norm.
L^2 stability for weak solutions of the Navier-Stokes equations in R^3
NASA Astrophysics Data System (ADS)
Secchi, P.
1985-11-01
We consider the motion of a viscous fluid filling the whole space R^3, governed by the classical Navier-Stokes equations (1). Existence of global (in time) regular solutions for that system of non-linear partial differential equations is still an open problem. Up to now, the only available global existence theorem (other than for sufficiently small initial data) is that of weak (turbulent) solutions. From both the mathematical and the physical point of view, an interesting property is the stability of such weak solutions. We assume that v(t,x) is a solution, with initial datum v_0(x). We suppose that the initial datum is perturbed and consider one weak solution u corresponding to the new initial velocity. Then we prove that, due to viscosity, the perturbed weak solution u approaches in a suitable norm the unperturbed one, as time goes to +infinity, without smallness assumptions on the initial perturbation.
Distinguishing one from many using super-resolution compressive sensing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Anthony, Stephen Michael; Mulcahy-Stanislawczyk, Johnathan; Shields, Eric A.
2018-05-14
Distinguishing whether a signal corresponds to a single source or a limited number of highly overlapping point spread functions (PSFs) is a ubiquitous problem across all imaging scales, whether detecting receptor-ligand interactions in cells or detecting binary stars. Super-resolution imaging based upon compressed sensing exploits the relative sparseness of the point sources to successfully resolve sources which may be separated by much less than the Rayleigh criterion. However, as a solution to an underdetermined system of linear equations, compressive sensing requires the imposition of constraints which may not always be valid. One typical constraint is that the PSF is known. However, the PSF of the actual optical system may reflect aberrations not present in the theoretical ideal optical system. Even when the optics are well characterized, the actual PSF may reflect factors such as non-uniform emission of the point source (e.g. fluorophore dipole emission). As such, the actual PSF may differ from the PSF used as a constraint. Similarly, multiple different regularization constraints have been suggested including the l1-norm, l0-norm, and generalized Gaussian Markov random fields (GGMRFs), each of which imposes a different constraint. Other important factors include the signal-to-noise ratio of the point sources and whether the point sources vary in intensity. In this work, we explore how these factors influence super-resolution image recovery robustness, determining the sensitivity and specificity. In conclusion, we determine an approach that is more robust to the types of PSF errors present in actual optical systems.
NASA Astrophysics Data System (ADS)
Liu, Peng; Wang, Yanfei
2018-04-01
We study problems associated with seismic data decomposition and migration imaging. We first represent the seismic data utilizing Gaussian beam basis functions, which have nonzero curvature, and then consider the sparse decomposition technique. The sparse decomposition problem is an l0-norm constrained minimization problem. In solving the l0-norm minimization, a polynomial Radon transform is performed to achieve sparsity, and a fast gradient descent method is used to calculate the waveform functions. The waveform functions can subsequently be used for sparse Gaussian beam migration. Compared with traditional sparse Gaussian beam methods, the seismic data can be properly reconstructed employing fewer Gaussian beams with nonzero initial curvature. The migration approach described in this paper is more efficient than the traditional sparse Gaussian beam migration.
Prakash, Jaya; Yalavarthy, Phaneendra K
2013-03-01
The aim of this work is to develop a computationally efficient automated method for the optimal choice of the regularization parameter in diffuse optical tomography. The least-squares QR (LSQR)-type method that uses Lanczos bidiagonalization is known to be computationally efficient in performing the reconstruction procedure in diffuse optical tomography. It is deployed here via an optimization procedure that uses the simplex method to find the optimal regularization parameter. The proposed LSQR-type method is compared with traditional methods such as the L-curve, generalized cross-validation (GCV), and the recently proposed minimal residual method (MRM)-based choice of regularization parameter using numerical and experimental phantom data. The results indicate that the proposed LSQR-type and MRM-based methods perform similarly in terms of reconstructed image quality, and both are superior to the L-curve and GCV-based methods. The computational complexity of the proposed method is at least five times lower than that of the MRM-based method, making it an optimal technique. The LSQR-type method overcomes the computationally expensive nature of the MRM-based automated way of finding the optimal regularization parameter in diffuse optical tomographic imaging, making it more suitable for real-time deployment.
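A rough sketch of the two ingredients named here, damped LSQR (which relies on Lanczos bidiagonalization internally) and a simplex search over the regularization parameter, is given below; the model-selection score being minimized is a placeholder, since the abstract does not state the actual criterion, and A and b stand for an assumed Jacobian and data vector.

import numpy as np
from scipy.sparse.linalg import lsqr
from scipy.optimize import minimize

def reconstruct(A, b, lam):
    # damped LSQR: minimizes ||A x - b||^2 + lam^2 * ||x||^2 via Lanczos bidiagonalization
    return lsqr(A, b, damp=lam)[0]

def score(log_lam, A, b):
    # placeholder model-selection criterion (product of residual and solution norms);
    # the paper's actual criterion is not given in the abstract
    lam = np.exp(log_lam[0])
    x = reconstruct(A, b, lam)
    return np.linalg.norm(A @ x - b) * np.linalg.norm(x)

def choose_lambda(A, b, lam0=1e-2):
    # Nelder-Mead simplex search over log(lambda)
    res = minimize(score, x0=[np.log(lam0)], args=(A, b), method='Nelder-Mead')
    return np.exp(res.x[0])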
Prior, Anat; MacWhinney, Brian; Kroll, Judith F.
2014-01-01
We present a set of translation norms for 670 English and 760 Spanish nouns, verbs and class ambiguous items that varied in their lexical properties in both languages, collected from 80 bilingual participants. Half of the words in each language received more than a single translation across participants. Cue word frequency and imageability were both negatively correlated with number of translations. Word class predicted number of translations: Nouns had fewer translations than did verbs, which had fewer translations than class-ambiguous items. The translation probability of specific responses was positively correlated with target word frequency and imageability, and with its form overlap with the cue word. Translation choice was modulated by L2 proficiency: Less proficient bilinguals tended to produce lower probability translations than more proficient bilinguals, but only in forward translation, from L1 to L2. These findings highlight the importance of translation ambiguity as a factor influencing bilingual representation and performance. The norms can also provide an important resource to assist researchers in the selection of experimental materials for studies of bilingual and monolingual language performance. These norms may be downloaded from www.psychonomic.org/archive. PMID:18183923
Direct discontinuous Galerkin method and its variations for second order elliptic equations
Huang, Hongying; Chen, Zheng; Li, Jin; ...
2016-08-23
In this study, we investigate the direct discontinuous Galerkin (DDG) method (Liu and Yan in SIAM J Numer Anal 47(1):475–698, 2009) and its variations (Liu and Yan in Commun Comput Phys 8(3):541–564, 2010; Vidden and Yan in J Comput Math 31(6):638–662, 2013; Yan in J Sci Comput 54(2–3):663–683, 2013) for 2nd order elliptic problems. A priori error estimates under the energy norm are established for all four methods. An optimal error estimate under the L2 norm is obtained for the DDG method with interface correction (Liu and Yan in Commun Comput Phys 8(3):541–564, 2010) and the symmetric DDG method (Vidden and Yan in J Comput Math 31(6):638–662, 2013). A series of numerical examples are carried out to illustrate the accuracy and capability of the schemes. Numerically we obtain optimal (k+1)th order convergence for the DDG method with interface correction and the symmetric DDG method on nonuniform and unstructured triangular meshes. An interface problem with discontinuous diffusion coefficients is investigated and optimal (k+1)th order accuracy is obtained. Peak solutions with sharp transitions are captured well. Highly oscillatory wave solutions of the Helmholtz equation are well resolved.
Yang, Haixuan; Seoighe, Cathal
2016-01-01
Nonnegative Matrix Factorization (NMF) has proved to be an effective method for unsupervised clustering analysis of gene expression data. By the nonnegativity constraint, NMF provides a decomposition of the data matrix into two matrices that have been used for clustering analysis. However, the decomposition is not unique. This allows different clustering results to be obtained, resulting in different interpretations of the decomposition. To alleviate this problem, some existing methods directly enforce uniqueness to some extent by adding regularization terms in the NMF objective function. Alternatively, various normalization methods have been applied to the factor matrices; however, the effects of the choice of normalization have not been carefully investigated. Here we investigate the performance of NMF for the task of cancer class discovery, under a wide range of normalization choices. After extensive evaluations, we observe that the maximum norm showed the best performance, although the maximum norm has not previously been used for NMF. Matlab codes are freely available from: http://maths.nuigalway.ie/~haixuanyang/pNMF/pNMF.htm.
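The normalization step studied here is easy to make concrete; the sketch below runs scikit-learn's NMF on a nonnegative genes-by-samples matrix, rescales each factor by its maximum norm (so that the product W @ H is unchanged), and assigns each sample to the factor with the largest coefficient. The initialization, the rank k and the argmax assignment rule are illustrative choices, not the authors' exact protocol.

import numpy as np
from sklearn.decomposition import NMF

def nmf_cluster_maxnorm(X, k, seed=0):
    """Cluster the columns (samples) of a nonnegative matrix X with NMF,
    rescaling each factor by its maximum norm before cluster assignment."""
    model = NMF(n_components=k, init='nndsvd', random_state=seed, max_iter=1000)
    W = model.fit_transform(X)                  # genes x k basis matrix
    H = model.components_                       # k x samples coefficient matrix
    m = np.maximum(W.max(axis=0), 1e-12)        # maximum norm of each basis column
    W = W / m                                   # rescale basis to unit maximum norm
    H = H * m[:, None]                          # compensate so that W @ H is unchanged
    return H.argmax(axis=0)                     # cluster label per sample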
Zhang, Jie; Fan, Shangang; Xiong, Jian; Cheng, Xiefeng; Sari, Hikmet; Adachi, Fumiyuki
2017-01-01
Both L1/2 and L2/3 are two typical non-convex regularizations of the Lp norm with 0 < p < 1.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, G; Xing, L
2016-06-15
Purpose: Cone beam X-ray luminescence computed tomography (CB-XLCT), which aims to achieve molecular and functional imaging by X-rays, has recently been proposed as a new imaging modality. However, the inverse problem of CB-XLCT is seriously ill-conditioned, making it difficult to achieve good image quality. In this work, a novel reconstruction method based on Bayesian theory is proposed to tackle this problem. Methods: Bayesian theory provides a natural framework for utilizing various kinds of available prior information to improve the reconstruction image quality. A generalized Gaussian Markov random field (GGMRF) model is proposed here to construct the prior model of the Bayesian framework. The most important feature of the GGMRF model is the adjustable shape parameter p, which can be continuously adjusted from 1 to 2. The reconstructed image tends to be more edge-preserving as p slides toward 1, and more noise-tolerant as p slides toward 2, just like the behavior of L1 and L2 regularization methods, respectively. The proposed method provides a flexible regularization framework to adapt to a wide range of applications. Results: Numerical simulations were implemented to test the performance of the proposed method. The Digimouse atlas was employed to construct a three-dimensional mouse model, and two small cylinders were placed inside to serve as the targets. Reconstruction results show that the proposed method tends to obtain better spatial resolution with a smaller shape parameter, and a better signal-to-noise ratio with a larger shape parameter. Quantitative indexes, contrast-to-noise ratio (CNR) and full-width at half-maximum (FWHM), were used to assess the performance of the proposed method, and confirmed its effectiveness in CB-XLCT reconstruction. Conclusion: A novel reconstruction method for CB-XLCT is proposed based on the GGMRF model, which enables an adjustable performance tradeoff between L1 and L2 regularization methods. Numerical simulations were conducted to demonstrate its performance.
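As a sketch of how a GGMRF-type prior with an adjustable shape parameter p can be used in reconstruction, the following gradient-descent routine minimizes a least-squares data term plus a |neighbor difference|^p penalty on a 1-D neighborhood; the CB-XLCT forward model, the Bayesian estimation machinery and the actual optimizer of the paper are not reproduced, so A, b and the smoothing constant eps are assumptions.

import numpy as np

def ggmrf_reconstruct(A, b, lam=0.1, p=1.5, eps=1e-3, iters=500):
    """Reconstruction with a generalized Gaussian Markov random field style penalty
    on first-order neighbor differences, 1 <= p <= 2: p near 1 behaves like L1
    (edge preserving), p near 2 like L2 (noise tolerant). Illustrative sketch."""
    n = A.shape[1]
    D = np.diag(-np.ones(n)) + np.diag(np.ones(n - 1), 1)
    D = D[:-1]                                    # first-order difference operator
    def obj(x):
        return 0.5 * np.sum((A @ x - b) ** 2) + lam * np.sum((np.abs(D @ x) + eps) ** p)
    def grad(x):
        u = D @ x
        return A.T @ (A @ x - b) + lam * p * D.T @ (np.sign(u) * (np.abs(u) + eps) ** (p - 1))
    x = np.zeros(n)
    for _ in range(iters):
        g = grad(x); f0 = obj(x); t = 1.0
        while obj(x - t * g) > f0 - 1e-4 * t * np.dot(g, g) and t > 1e-12:
            t *= 0.5                              # backtracking line search
        x = x - t * g
    return x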
Similar Inflammatory Responses following Sprint Interval Training Performed in Hypoxia and Normoxia
Richardson, Alan J.; Relf, Rebecca L.; Saunders, Arron; Gibson, Oliver R.
2016-01-01
Sprint interval training (SIT) is an efficient intervention capable of improving aerobic capacity and exercise performance. This experiment aimed to determine differences in training adaptations and the inflammatory responses following 2 weeks of SIT (30 s maximal work, 4 min recovery; 4–7 repetitions) performed in normoxia or hypoxia. Forty-two untrained participants [(mean ± SD), age 21 ±1 years, body mass 72.1 ±11.4 kg, and height 173 ±10 cm] were equally and randomly assigned to one of three groups; control (CONT; no training, n = 14), normoxic (NORM; SIT in FiO2: 0.21, n = 14), and normobaric hypoxic (HYP; SIT in FiO2: 0.15, n = 14). Participants completed a V˙O2peak test, a time to exhaustion (TTE) trial (power = 80% V˙O2peak) and had hematological [hemoglobin (Hb), haematocrit (Hct)] and inflammatory markers [interleukin-6 (IL-6), tumor necrosis factor-α (TNFα)] measured in a resting state, pre and post SIT. V˙O2peak (mL.kg−1.min−1) improved in HYP (+11.9%) and NORM (+9.8%), but not CON (+0.9%). Similarly TTE improved in HYP (+32.2%) and NORM (+33.0%), but not CON (+3.4%) whilst the power at the anaerobic threshold (AT; W.kg−1) also improved in HYP (+13.3%) and NORM (+8.0%), but not CON (–0.3%). AT (mL.kg−1.min−1) improved in HYP (+9.5%), but not NORM (+5%) or CON (–0.3%). No between group change occurred in 30 s sprint performance or Hb and Hct. IL-6 increased in HYP (+17.4%) and NORM (+20.1%), but not CON (+1.2%), respectively. TNF-α increased in HYP (+10.8%) NORM (+12.9%) and CON (+3.4%). SIT in HYP and NORM increased V˙O2peak, power at AT and TTE performance in untrained individuals, improvements in AT occurred only when SIT was performed in HYP. Increases in IL-6 and TNFα reflect a training induced inflammatory response to SIT; hypoxic conditions do not exacerbate this. PMID:27536249
Li, Weikai; Wang, Zhengxia; Zhang, Limei; Qiao, Lishan; Shen, Dinggang
2017-01-01
Functional brain network (FBN) analysis has become an increasingly important way to model the statistical dependence among neural time courses of the brain, and provides effective imaging biomarkers for diagnosis of some neurological or psychological disorders. Currently, Pearson's Correlation (PC) is the simplest and most widely-used method in constructing FBNs. Despite its advantages in statistical meaning and computational performance, the PC tends to result in an FBN with dense connections. Therefore, in practice, the PC-based FBN needs to be sparsified by removing weak (potentially noisy) connections. However, such a scheme depends on a hard threshold without enough flexibility. Different from this traditional strategy, in this paper, we propose a new approach for estimating FBNs by remodeling PC as an optimization problem, which provides a way to incorporate biological/physical priors into the FBNs. In particular, we introduce an L1-norm regularizer into the optimization model for obtaining a sparse solution. Compared with the hard-threshold scheme, the proposed framework gives an elegant mathematical formulation for sparsifying PC-based networks. More importantly, it provides a platform to encode other biological/physical priors into the PC-based FBNs. To further illustrate the flexibility of the proposed method, we extend the model to a weighted counterpart for learning both sparse and scale-free networks, and then conduct experiments to identify autism spectrum disorders (ASD) from normal controls (NC) based on the constructed FBNs. Consequently, we achieved an 81.52% classification accuracy, which outperforms the baseline and state-of-the-art methods.
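One common way to write the "PC remodeled as an optimization problem with an L1 regularizer" idea is min_W ||W - C||_F^2 + lam*||W||_1, with C the Pearson correlation matrix; this particular objective is an assumption of the sketch (the abstract does not spell out the exact formulation), but it conveys the key point that the resulting estimator is an elementwise soft-thresholding of C rather than a hard threshold.

import numpy as np

def sparse_fbn(timeseries, lam=0.2):
    """Sparse functional brain network from node time courses (rows of `timeseries`).
    Solves min_W ||W - C||_F^2 + lam*||W||_1, whose closed-form solution is
    elementwise soft-thresholding of the Pearson correlation matrix C at lam/2."""
    C = np.corrcoef(timeseries)                      # nodes x nodes correlations
    W = np.sign(C) * np.maximum(np.abs(C) - lam / 2, 0.0)
    np.fill_diagonal(W, 0.0)                         # ignore self-connections
    return W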
Social Norms Information Enhances the Efficacy of an Appearance-based Sun Protection Intervention
Kulik, James A; Butler, Heather; Gerrard, Meg; Gibbons, Frederick X; Mahler, Heike
2008-01-01
This experiment examined whether the efficacy of an appearance-based sun protection intervention could be enhanced by the addition of social norms information. Southern California college students (N=125, predominantly female) were randomly assigned either to an appearance-based sun protection intervention, which consisted of a photograph depicting underlying sun damage to their skin (UV photo) and information about photoaging, or to a control condition. Those assigned to the intervention were further randomized to receive information about what one should do to prevent photoaging (injunctive norms information), information about the number of their peers who currently use regular sun protection (descriptive norms information), both injunctive and descriptive norms information, or neither type of norms information. The results demonstrated that those who received the UV Photo/photoaging information intervention expressed greater sun protection intentions and subsequently reported greater sun protection behaviors than did controls. Further, the addition of both injunctive and descriptive norms information increased self-reported sun protection behaviors during the subsequent month. PMID:18448221
Regularized maximum pure-state input-output fidelity of a quantum channel
NASA Astrophysics Data System (ADS)
Ernst, Moritz F.; Klesse, Rochus
2017-12-01
As a toy model for the capacity problem in quantum information theory we investigate finite and asymptotic regularizations of the maximum pure-state input-output fidelity F(N) of a general quantum channel N. We show that the asymptotic regularization F̃(N) is lower bounded by the maximum output ∞-norm ν_∞(N) of the channel. For N being a Pauli channel, we find that both quantities are equal.
Non-Cartesian MRI Reconstruction With Automatic Regularization Via Monte-Carlo SURE
Weller, Daniel S.; Nielsen, Jon-Fredrik; Fessler, Jeffrey A.
2013-01-01
Magnetic resonance image (MRI) reconstruction from undersampled k-space data requires regularization to reduce noise and aliasing artifacts. Proper application of regularization however requires appropriate selection of associated regularization parameters. In this work, we develop a data-driven regularization parameter adjustment scheme that minimizes an estimate (based on the principle of Stein’s unbiased risk estimate—SURE) of a suitable weighted squared-error measure in k-space. To compute this SURE-type estimate, we propose a Monte-Carlo scheme that extends our previous approach to inverse problems (e.g., MRI reconstruction) involving complex-valued images. Our approach depends only on the output of a given reconstruction algorithm and does not require knowledge of its internal workings, so it is capable of tackling a wide variety of reconstruction algorithms and nonquadratic regularizers including total variation and those based on the ℓ1-norm. Experiments with simulated and real MR data indicate that the proposed approach is capable of providing near mean squared-error (MSE) optimal regularization parameters for single-coil undersampled non-Cartesian MRI reconstruction. PMID:23591478
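The black-box character of the approach (only the output of the reconstruction algorithm is needed) can be illustrated with a stripped-down Monte-Carlo SURE routine for real-valued denoising; the image-domain squared-error criterion and the single-probe divergence estimate below are simplifications of the weighted k-space measure and complex-valued setting described in the abstract, and the reconstruction function, noise level and parameter grid are assumed inputs.

import numpy as np

def mc_sure(recon, y, sigma, lam, delta=None, rng=None):
    """Monte-Carlo SURE estimate of the MSE of recon(y, lam) for y = x + noise,
    noise ~ N(0, sigma^2). 'recon' is treated as a black box."""
    rng = np.random.default_rng() if rng is None else rng
    delta = 0.01 * sigma if delta is None else delta
    n = y.size
    xhat = recon(y, lam)
    b = rng.standard_normal(y.shape)
    # Monte-Carlo estimate of the divergence of the reconstruction mapping
    div = np.vdot(b, recon(y + delta * b, lam) - xhat).real / delta
    return np.sum((xhat - y) ** 2) - n * sigma ** 2 + 2 * sigma ** 2 * div

def pick_lambda(recon, y, sigma, lams):
    # choose the regularization parameter minimizing the SURE estimate over a grid
    return min(lams, key=lambda lam: mc_sure(recon, y, sigma, lam))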
Klein, Eva M; Wölfling, Klaus; Beutel, Manfred E; Dreier, Michael; Müller, Kai W
2017-04-01
The proportion of adolescent migrants in Germany aged 15-20 years has risen to about 29.5% in 2014 according to Federal census statistics. The purpose of the current study was to describe and to compare the psychological strains of adolescent 1st and 2nd generation migrants with non-migrants in a representative school survey. Acceptance of violence legitimizing masculinity norms was explored and its correlation with psychological strain was analyzed. Self-reported data on psychological strain (internalizing and externalizing problems) and acceptance of violence legitimizing masculinity were gathered among 8518 pupils aged 12-19 years across different school types. Among the surveyed adolescents, 27.6% reported a migration background (5.8% 1st generation migrants; 21.8% 2nd generation migrants). Particularly 1st generation migrants scored higher in internalizing and externalizing problems than 2nd generation migrants or non-migrants. The differences, however, were small. Adolescents with a migration background suffered from educational disadvantage, especially 1st generation migrants. Male adolescents reported significantly higher acceptance of violence legitimizing masculinity norms than their female counterparts. Strong agreement with the measured concept of masculinity was found among pupils of lower secondary schools and among adolescents who reported regular tobacco and cannabis consumption. The acceptance of violence legitimizing masculinity norms was greater among migrants, particularly 1st generation migrants, than non-migrants. Overall, high acceptance of violence legitimizing masculinity norms was related to externalizing problems, which can be understood as dysfunctional coping mechanisms of social disadvantage and a lack of prospects. © Georg Thieme Verlag KG Stuttgart · New York.
Adaptive low-rank subspace learning with online optimization for robust visual tracking.
Liu, Risheng; Wang, Di; Han, Yuzhuo; Fan, Xin; Luo, Zhongxuan
2017-04-01
In recent years, sparse and low-rank models have been widely used to formulate the appearance subspace for visual tracking. However, most existing methods only consider the sparsity or low-rankness of the coefficients, which is not sufficient for appearance subspace learning on complex video sequences. Moreover, as both the low-rank and the column sparse measures are tightly related to all the samples in the sequences, it is challenging to incrementally solve optimization problems with both nuclear norm and column sparse norm on sequentially obtained video data. To address the above limitations, this paper develops a novel low-rank subspace learning with adaptive penalization (LSAP) framework for subspace-based robust visual tracking. Different from previous work, which often simply decomposes observations as low-rank features and sparse errors, LSAP simultaneously learns the subspace basis, low-rank coefficients and column sparse errors to formulate the appearance subspace. Within the LSAP framework, we introduce a Hadamard product based regularization to incorporate rich generative/discriminative structure constraints to adaptively penalize the coefficients for subspace learning. It is shown that such adaptive penalization can significantly improve the robustness of LSAP on severely corrupted datasets. To utilize LSAP for online visual tracking, we also develop an efficient incremental optimization scheme for nuclear norm and column sparse norm minimizations. Experiments on 50 challenging video sequences demonstrate that our tracker outperforms other state-of-the-art methods. Copyright © 2017 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Yao, Bing; Yang, Hui
2016-12-01
This paper presents a novel physics-driven spatiotemporal regularization (STRE) method for high-dimensional predictive modeling in complex healthcare systems. This model not only captures the physics-based interrelationship between time-varying explanatory and response variables that are distributed in the space, but also addresses the spatial and temporal regularizations to improve the prediction performance. The STRE model is implemented to predict the time-varying distribution of electric potentials on the heart surface based on the electrocardiogram (ECG) data from the distributed sensor network placed on the body surface. The model performance is evaluated and validated in both a simulated two-sphere geometry and a realistic torso-heart geometry. Experimental results show that the STRE model significantly outperforms other regularization models that are widely used in current practice such as Tikhonov zero-order, Tikhonov first-order and L1 first-order regularization methods.
Image degradation characteristics and restoration based on regularization for diffractive imaging
NASA Astrophysics Data System (ADS)
Zhi, Xiyang; Jiang, Shikai; Zhang, Wei; Wang, Dawei; Li, Yun
2017-11-01
The diffractive membrane optical imaging system is an important development trend for ultra-large-aperture, lightweight space cameras. However, the physics-based degradation characteristics of diffractive imaging and the corresponding image restoration methods have been less studied. In this paper, the model of image quality degradation for the diffraction imaging system is first deduced mathematically based on diffraction theory and then the degradation characteristics are analyzed. On this basis, a novel regularization model of image restoration that contains multiple prior constraints is established. After that, the solving approach for the equation with multi-norm coexistence and multiple regularization parameters (the priors' parameters) is presented. Subsequently, the space-variant PSF image restoration method for the large-aperture diffractive imaging system is proposed, combined with a block-processing idea based on isoplanatic regions. Experimentally, the proposed algorithm demonstrates its capacity to achieve multi-objective improvement including MTF enhancement, dispersion correction, noise and artifact suppression as well as detail preservation, and produces satisfactory visual quality. This provides a scientific basis for applications and shows promise for future space applications of diffractive membrane imaging technology.
Constrained Low-Rank Learning Using Least Squares-Based Regularization.
Li, Ping; Yu, Jun; Wang, Meng; Zhang, Luming; Cai, Deng; Li, Xuelong
2017-12-01
Low-rank learning has attracted much attention recently due to its efficacy in a rich variety of real-world tasks, e.g., subspace segmentation and image categorization. Most low-rank methods are incapable of capturing low-dimensional subspace for supervised learning tasks, e.g., classification and regression. This paper aims to learn both the discriminant low-rank representation (LRR) and the robust projecting subspace in a supervised manner. To achieve this goal, we cast the problem into a constrained rank minimization framework by adopting the least squares regularization. Naturally, the data label structure tends to resemble that of the corresponding low-dimensional representation, which is derived from the robust subspace projection of clean data by low-rank learning. Moreover, the low-dimensional representation of original data can be paired with some informative structure by imposing an appropriate constraint, e.g., Laplacian regularizer. Therefore, we propose a novel constrained LRR method. The objective function is formulated as a constrained nuclear norm minimization problem, which can be solved by the inexact augmented Lagrange multiplier algorithm. Extensive experiments on image classification, human pose estimation, and robust face recovery have confirmed the superiority of our method.
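The nuclear-norm subproblems that arise inside inexact augmented Lagrange multiplier solvers of this kind are handled by singular value thresholding, the proximal operator of the nuclear norm; the few lines below show that building block only, not the full constrained LRR objective of the paper.

import numpy as np

def svt(M, tau):
    """Singular value thresholding: the minimizer of 0.5*||X - M||_F^2 + tau*||X||_*,
    i.e. the proximal operator of tau times the nuclear norm. Used as an inner step
    of inexact-ALM nuclear-norm solvers; illustrative building block only."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt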
Li, Yunyi; Zhang, Jie; Fan, Shangang; Yang, Jie; Xiong, Jian; Cheng, Xiefeng; Sari, Hikmet; Adachi, Fumiyuki; Gui, Guan
2017-12-15
Both L1/2 and L2/3 are two typical non-convex regularizations of the Lp norm with 0 < p < 1.
A mass-energy preserving Galerkin FEM for the coupled nonlinear fractional Schrödinger equations
NASA Astrophysics Data System (ADS)
Zhang, Guoyu; Huang, Chengming; Li, Meng
2018-04-01
We consider the numerical simulation of the coupled nonlinear space fractional Schrödinger equations. Based on the Galerkin finite element method in space and the Crank-Nicolson (CN) difference method in time, a fully discrete scheme is constructed. Firstly, we focus on a rigorous analysis of conservation laws for the discrete system. The definitions of discrete mass and energy here correspond with the original ones in physics. Then, we prove that the fully discrete system is uniquely solvable. Moreover, we consider the unconditional convergence properties (that is to say, we complete the error estimates without any mesh ratio restriction). We derive L2-norm error estimates for the nonlinear equations and L∞-norm error estimates for the linear equations. Finally, some numerical experiments are included showing results in agreement with the theoretical predictions.
Boudreau, François; Godin, Gaston
2014-12-01
Most people with type 2 diabetes do not engage in regular leisure-time physical activity. The theory of planned behavior and the moral norm construct can enhance our understanding of physical activity intention and behavior among this population. This study aims to identify the determinants of both intention and behavior to participate in regular leisure-time physical activity among individuals with type 2 diabetes who do not meet Canada's physical activity guidelines. By using secondary data analysis of a randomized computer-tailored print-based intervention, participants (n = 200) from the province of Quebec (Canada) completed and returned a baseline questionnaire measuring their attitude, perceived behavioral control, and moral norm. One month later, they self-reported their level of leisure-time physical activity. A hierarchical regression equation showed that attitude (beta = 0.10, P < 0.05), perceived behavioral control (beta = 0.37, P < 0.001), and moral norm (beta = 0.45, P < 0.001) were significant determinants of intention, with the final model explaining 63% of the variance. In terms of behavioral prediction, intention (beta = 0.34, P < 0.001) and perceived behavioral control (beta = 0.16, P < 0.05) added 17% to the variance, after controlling for the effects of the experimental condition (R^2 = 0.04, P < 0.05) and past participation in leisure-time physical activity (R^2 = 0.22, P < 0.001). The final model explained 43% of the behavioral variance. Finally, the bootstrapping procedure indicated that the influence of moral norm on behavior was mediated by intention and perceived behavioral control. The determinants investigated offered an excellent starting point for designing appropriate counseling messages to promote leisure-time physical activity among individuals with type 2 diabetes.
Regularization of soft-X-ray imaging in the DIII-D tokamak
Wingen, A.; Shafer, M. W.; Unterberg, E. A.; ...
2015-03-02
We developed an image inversion scheme for the soft X-ray imaging system (SXRIS) diagnostic at the DIII-D tokamak in order to obtain the local soft X-ray emission at a poloidal cross-section from the spatially line-integrated image taken by the SXRIS camera. The scheme uses the Tikhonov regularization method since the inversion problem is generally ill-posed. The regularization technique uses the generalized singular value decomposition to determine a solution that depends on a free regularization parameter. The latter has to be chosen carefully, and the so-called L-curve method to find the optimum regularization parameter is outlined. A representative test image is used to study the properties of the inversion scheme with respect to inversion accuracy, amount/strength of regularization, image noise and image resolution. Moreover, the optimum inversion parameters are identified, while the L-curve method successfully computes the optimum regularization parameter. Noise is found to be the most limiting issue, but sufficient regularization is still possible at noise to signal ratios up to 10%-15%. Finally, the inversion scheme is applied to measured SXRIS data and the line-integrated SXRIS image is successfully inverted.
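A minimal version of the Tikhonov-plus-L-curve machinery described here, using a plain SVD in place of the generalized SVD and a discrete maximum-curvature rule for the corner, might look as follows; the forward operator A and the logarithmically spaced, increasing grid of candidate parameters are assumptions of the sketch.

import numpy as np

def tikhonov(A, b, lam):
    # standard-form Tikhonov solution via the SVD (the paper uses the GSVD for a
    # general regularization operator; the identity operator is assumed here)
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    f = s / (s ** 2 + lam ** 2)                  # filtered inverse singular values
    return Vt.T @ (f * (U.T @ b))

def l_curve_corner(A, b, lams):
    """Pick the regularization parameter at the point of maximum curvature of the
    (log residual norm, log solution norm) curve, with lams sorted increasing."""
    rho, eta = [], []
    for lam in lams:
        x = tikhonov(A, b, lam)
        rho.append(np.log(np.linalg.norm(A @ x - b)))
        eta.append(np.log(np.linalg.norm(x)))
    rho, eta = np.array(rho), np.array(eta)
    drho, deta = np.gradient(rho), np.gradient(eta)
    d2rho, d2eta = np.gradient(drho), np.gradient(deta)
    curvature = (drho * d2eta - d2rho * deta) / (drho ** 2 + deta ** 2) ** 1.5
    return lams[int(np.argmax(curvature))]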
Ruduś, Izabela; Kępczyński, Jan
2018-01-01
Molecular studies of primary and secondary dormancy in Avena fatua L., a serious weed of cereal and other crops, are intended to reveal the species-specific details of underlying molecular mechanisms which in turn may be useable in weed management. Among others, quantitative real-time PCR (RT-qPCR) data of comparative gene expression analysis may give some insight into the involvement of particular wild oat genes in dormancy release, maintenance or induction by unfavorable conditions. To assure obtaining biologically significant results using this method, the expression stability of selected candidate reference genes in different data subsets was evaluated using four statistical algorithms i.e. geNorm, NormFinder, Best Keeper and ΔCt method. Although some discrepancies in their ranking outputs were noticed, evidently two ubiquitin-conjugating enzyme homologs, AfUBC1 and AfUBC2, as well as one homolog of glyceraldehyde 3-phosphate dehydrogenase AfGAPDH1 and TATA-binding protein AfTBP2 appeared as more stably expressed than AfEF1a (translation elongation factor 1α), AfGAPDH2 or the least stable α-tubulin homolog AfTUA1 in caryopses and seedlings of A. fatua. Gene expression analysis of a dormancy-related wild oat transcription factor VIVIPAROUS1 (AfVP1) allowed for a validation of candidate reference genes performance. Based on the obtained results it can be recommended that the normalization factor calculated as a geometric mean of Cq values of AfUBC1, AfUBC2 and AfGAPDH1 would be optimal for RT-qPCR results normalization in the experiments comprising A. fatua caryopses of different dormancy status.
Improving IMRT delivery efficiency with reweighted L1-minimization for inverse planning
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, Hojin; Becker, Stephen; Lee, Rena
2013-07-15
Purpose: This study presents an improved technique to further simplify the fluence-map in intensity modulated radiation therapy (IMRT) inverse planning, thereby reducing plan complexity and improving delivery efficiency, while maintaining the plan quality. Methods: First-order total-variation (TV) minimization (min.) based on the L1-norm has been proposed to reduce the complexity of the fluence-map in IMRT by generating sparse fluence-map variations. However, with stronger dose sparing to the critical structures, the inevitable increase in the fluence-map complexity can lead to inefficient dose delivery. Theoretically, L0-min. is the ideal solution for the sparse signal recovery problem, yet practically intractable due to the nonconvexity of the objective function. As an alternative, the authors use the iteratively reweighted L1-min. technique to incorporate the benefits of the L0-norm into the tractability of L1-min. The weight multiplied to each element is inversely related to the magnitude of the corresponding element, which is iteratively updated by the reweighting process. The proposed penalizing process combined with TV min. further improves sparsity in the fluence-map variations, hence ultimately enhancing the delivery efficiency. To validate the proposed method, this work compares three treatment plans obtained from quadratic min. (generally used in clinic IMRT), conventional TV min., and our proposed reweighted TV min. techniques, implemented by a large-scale L1-solver (template for first-order conic solver), for five patient clinical data. Criteria such as conformation number (CN), modulation index (MI), and estimated treatment time are employed to assess the relationship between the plan quality and delivery efficiency. Results: The proposed method yields simpler fluence-maps than the quadratic and conventional TV based techniques. To attain a given CN and dose sparing to the critical organs for 5 clinical cases, the proposed method reduces the number of segments by 10-15 and 30-35, relative to TV min. and quadratic min. based plans, while MIs decrease by about 20%-30% and 40%-60% over the plans by the two existing techniques, respectively. With such conditions, the total treatment time of the plans obtained from our proposed method can be reduced by 12-30 s and 30-80 s mainly due to greatly shorter multileaf collimator (MLC) traveling time in IMRT step-and-shoot delivery. Conclusions: The reweighted L1-minimization technique provides a promising solution to simplify the fluence-map variations in IMRT inverse planning. It improves the delivery efficiency by reducing the entire segments and treatment time, while maintaining the plan quality in terms of target conformity and critical structure sparing.
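The reweighting mechanism can be sketched generically: solve a weighted L1 problem, update the weights inversely to the current magnitudes, and repeat. The paper applies this to total-variation terms of IMRT fluence maps and uses a large-scale first-order conic solver; the sketch below instead uses a plain ISTA inner solver on an ordinary L1-regularized least-squares problem, so A, b and the parameter choices are illustrative only.

import numpy as np

def ista_weighted_l1(A, b, w, lam, iters=300):
    # solves min 0.5*||A x - b||^2 + lam*sum(w_i*|x_i|) by proximal gradient (ISTA)
    L = np.linalg.norm(A, 2) ** 2                  # Lipschitz constant of the data term
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        z = x - A.T @ (A @ x - b) / L
        x = np.sign(z) * np.maximum(np.abs(z) - lam * w / L, 0.0)
    return x

def reweighted_l1(A, b, lam, outer=5, eps=1e-3):
    """Iteratively reweighted L1 minimization: weights are inversely related to the
    magnitude of the current estimate, so small entries are penalized harder."""
    w = np.ones(A.shape[1])
    for _ in range(outer):
        x = ista_weighted_l1(A, b, w, lam)
        w = 1.0 / (np.abs(x) + eps)
    return x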
Zimmer, Bernd; Sino, Hiba
2018-03-19
To analyze common values of bracket torque (Andrews, Roth, MBT, Ricketts) for their validity in achieving incisor inclinations that are considered normal by different cephalometric standards. Using the equations developed in part 1 (eU1(BOP) = 90° - BT(U1) - TCA(U1) + α1 - α2 and eL1(BOP) = 90° - BT(L1) - TCA(L1) + β1 - β2) (abbreviations: see part 1) and the mean values (± SD) obtained as statistical measures in parts 1 and 2 of the study (α1 and β1 [1.7° ± 0.7°], α2 [3.6° ± 0.3°], β2 [3.2° ± 0.4°], TCA(U1) [24.6° ± 3.6°] and TCA(L1) [22.9° ± 4.3°]), expected (= theoretically anticipated) values were calculated for upper and lower incisors (U1 and L1) and compared to targeted (= cephalometric norm) values. For U1, there was no overlapping between the ranges of expected and targeted values, as the lowest targeted value (58.3°; Ricketts) was higher than the highest expected value (56.5°; Andrews) relative to the bisected occlusal plane (BOP). Thus all of these torque systems will aim for flatter inclinations than prescribed by any of the norm values. Depending on target values, the various bracket systems fell short by 1.8-5.5° (Andrews), 6.8-10.5° (Roth), 11.8-15.5° (MBT), or 16.8-20.5° (Ricketts). For L1, there was good agreement of the MBT system with the Ricketts and Björk target values (Δ0.1° and Δ-0.8°, respectively), and both the Roth and Ricketts systems came close to the Bergen target value (both Δ2.3°). Depending on target values, the ranges of deviation for L1 were 6.3-13.2° for Andrews (Class II prescription), 2.3°-9.2° for Roth, -3.7 to -3.2° for MBT, and 2.3-9.2° for Ricketts. Common values of upper incisor bracket torque do not have acceptable validity in achieving normal incisor inclinations. A careful selection of lower bracket torque may provide satisfactory matching with some of the targeted norm values.
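Plugging the reported mean values into the first equation reproduces the 56.5° expected inclination quoted for the Andrews system, provided one assumes the conventional Andrews upper-incisor bracket torque of +7° (the torque value itself is not restated in the abstract):

# expected upper-incisor inclination relative to the bisected occlusal plane (BOP)
BT_U1   = 7.0    # Andrews U1 bracket torque in degrees (assumed, not stated in the abstract)
TCA_U1  = 24.6   # mean crown-root angulation reported in the study
alpha_1 = 1.7
alpha_2 = 3.6
eU1_BOP = 90.0 - BT_U1 - TCA_U1 + alpha_1 - alpha_2
print(eU1_BOP)   # 56.5, matching the highest expected value quoted for Andrews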
Anisotropic norm-oriented mesh adaptation for a Poisson problem
NASA Astrophysics Data System (ADS)
Brèthes, Gautier; Dervieux, Alain
2016-10-01
We present a novel formulation for the mesh adaptation of the approximation of a Partial Differential Equation (PDE). The discussion is restricted to a Poisson problem. The proposed norm-oriented formulation extends the goal-oriented formulation since it is equation-based and uses an adjoint. At the same time, the norm-oriented formulation somewhat supersedes the goal-oriented one since it is basically a solution-convergent method. Indeed, goal-oriented methods rely on the reduction of the error in evaluating a chosen scalar output with the consequence that, as mesh size is increased (more degrees of freedom), only this output is proven to tend to its continuous analog while the solution field itself may not converge. A remarkable quality of goal-oriented metric-based adaptation is the mathematical formulation of the mesh adaptation problem under the form of the optimization, in the well-identified set of metrics, of a well-defined functional. In the new proposed formulation, we amplify this advantage. We search, in the same well-identified set of metrics, the minimum of a norm of the approximation error. The norm is prescribed by the user and the method allows addressing the case of multi-objective adaptation like, for example in aerodynamics, adapting the mesh for drag, lift and moment in one shot. In this work, we consider the basic linear finite-element approximation and restrict our study to the L2 norm in order to enjoy second-order convergence. Numerical examples for the Poisson problem are computed.
A modified sparse reconstruction method for three-dimensional synthetic aperture radar image
NASA Astrophysics Data System (ADS)
Zhang, Ziqiang; Ji, Kefeng; Song, Haibo; Zou, Huanxin
2018-03-01
There is an increasing interest in three-dimensional Synthetic Aperture Radar (3-D SAR) imaging from observed sparse scattering data. However, the existing 3-D sparse imaging method requires large computing times and storage capacity. In this paper, we propose a modified method for the sparse 3-D SAR imaging. The method processes the collection of noisy SAR measurements, usually collected over nonlinear flight paths, and outputs 3-D SAR imagery. Firstly, the 3-D sparse reconstruction problem is transformed into a series of 2-D slices reconstruction problem by range compression. Then the slices are reconstructed by the modified SL0 (smoothed l0 norm) reconstruction algorithm. The improved algorithm uses hyperbolic tangent function instead of the Gaussian function to approximate the l0 norm and uses the Newton direction instead of the steepest descent direction, which can speed up the convergence rate of the SL0 algorithm. Finally, numerical simulation results are given to demonstrate the effectiveness of the proposed algorithm. It is shown that our method, compared with existing 3-D sparse imaging method, performs better in reconstruction quality and the reconstruction time.
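A bare-bones version of an SL0-style recovery with the hyperbolic-tangent approximation of the l0 norm is sketched below; it uses plain gradient steps followed by projection back onto Ax = b, whereas the paper uses the Newton direction (and embeds the solver in a Gaussian-beam and polynomial Radon transform pipeline that is not reproduced here), so the step sizes and the sigma schedule are illustrative.

import numpy as np

def modified_sl0(A, b, sigma_min=1e-4, sigma_decay=0.7, inner=20, mu=1.0):
    """Sparse recovery for Ax = b with the l0 norm approximated by
    sum(tanh(x_i^2 / (2*sigma^2))), decreasing sigma gradually. Sketch only."""
    pinvA = np.linalg.pinv(A)
    x = pinvA @ b                                   # minimum-L2-norm starting point
    sigma = 2.0 * max(np.max(np.abs(x)), 1e-12)
    while sigma > sigma_min:
        for _ in range(inner):
            u = x ** 2 / (2.0 * sigma ** 2)
            g = (1.0 - np.tanh(u) ** 2) * x / sigma ** 2   # gradient of the smooth l0 proxy
            x = x - mu * sigma ** 2 * g                    # shrink small entries, keep large ones
            x = x - pinvA @ (A @ x - b)                    # project back onto Ax = b
        sigma *= sigma_decay
    return x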
Graph Laplacian Regularization for Image Denoising: Analysis in the Continuous Domain.
Pang, Jiahao; Cheung, Gene
2017-04-01
Inverse imaging problems are inherently underdetermined, and hence, it is important to employ appropriate image priors for regularization. One recent popular prior-the graph Laplacian regularizer-assumes that the target pixel patch is smooth with respect to an appropriately chosen graph. However, the mechanisms and implications of imposing the graph Laplacian regularizer on the original inverse problem are not well understood. To address this problem, in this paper, we interpret neighborhood graphs of pixel patches as discrete counterparts of Riemannian manifolds and perform analysis in the continuous domain, providing insights into several fundamental aspects of graph Laplacian regularization for image denoising. Specifically, we first show the convergence of the graph Laplacian regularizer to a continuous-domain functional, integrating a norm measured in a locally adaptive metric space. Focusing on image denoising, we derive an optimal metric space assuming non-local self-similarity of pixel patches, leading to an optimal graph Laplacian regularizer for denoising in the discrete domain. We then interpret graph Laplacian regularization as an anisotropic diffusion scheme to explain its behavior during iterations, e.g., its tendency to promote piecewise smooth signals under certain settings. To verify our analysis, an iterative image denoising algorithm is developed. Experimental results show that our algorithm performs competitively with state-of-the-art denoising methods, such as BM3D for natural images, and outperforms them significantly for piecewise smooth images.
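In the discrete domain, the denoising step analyzed here amounts to solving a single sparse linear system; the sketch below builds a 4-neighbor pixel graph whose edge weights are computed from the noisy image itself and solves (I + lam*L) x = y. The weight function, bandwidth h and lam are illustrative, and the paper's optimal metric-space construction is not reproduced.

import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve

def graph_laplacian_denoise(y, lam=5.0, h=0.1):
    """Denoise image y by minimizing ||x - y||^2 + lam * x^T L x, i.e. solving
    (I + lam*L) x = y, with L the Laplacian of a 4-neighbor weighted graph."""
    m, n = y.shape
    idx = np.arange(m * n).reshape(m, n)
    rows, cols, w = [], [], []
    for (i1, i2) in [(idx[:, :-1], idx[:, 1:]), (idx[:-1, :], idx[1:, :])]:
        a, c = i1.ravel(), i2.ravel()
        ww = np.exp(-(y.ravel()[a] - y.ravel()[c]) ** 2 / h ** 2)  # similarity weights
        rows += [a, c]; cols += [c, a]; w += [ww, ww]
    W = sp.csr_matrix((np.concatenate(w), (np.concatenate(rows), np.concatenate(cols))),
                      shape=(m * n, m * n))
    L = sp.diags(np.asarray(W.sum(axis=1)).ravel()) - W            # graph Laplacian
    x = spsolve((sp.eye(m * n) + lam * L).tocsc(), y.ravel())
    return x.reshape(m, n)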
Reducing errors in the GRACE gravity solutions using regularization
NASA Astrophysics Data System (ADS)
Save, Himanshu; Bettadpur, Srinivas; Tapley, Byron D.
2012-09-01
The nature of the gravity field inverse problem amplifies the noise in the GRACE data, which creeps into the mid and high degree and order harmonic coefficients of the Earth's monthly gravity fields provided by GRACE. Due to the use of imperfect background models and data noise, these errors are manifested as north-south striping in the monthly global maps of equivalent water heights. In order to reduce these errors, this study investigates the use of the L-curve method with Tikhonov regularization. L-curve is a popular aid for determining a suitable value of the regularization parameter when solving linear discrete ill-posed problems using Tikhonov regularization. However, the computational effort required to determine the L-curve is prohibitively high for a large-scale problem like GRACE. This study implements a parameter-choice method, using Lanczos bidiagonalization which is a computationally inexpensive approximation to the L-curve. Lanczos bidiagonalization is implemented with orthogonal transformation in a parallel computing environment and projects a large estimation problem onto a problem about two orders of magnitude smaller for computing the regularization parameter. Errors in the GRACE solution time series have certain characteristics that vary depending on the ground track coverage of the solutions. These errors increase with increasing degree and order. In addition, certain resonant and near-resonant harmonic coefficients have higher errors as compared with the other coefficients. Using the knowledge of these characteristics, this study designs a regularization matrix that provides a constraint on the geopotential coefficients as a function of its degree and order. This regularization matrix is then used to compute the appropriate regularization parameter for each monthly solution. A 7-year time-series of the candidate regularized solutions (Mar 2003-Feb 2010) shows markedly reduced error stripes compared with the unconstrained GRACE release 4 solutions (RL04) from the Center for Space Research (CSR). Post-fit residual analysis shows that the regularized solutions fit the data to within the noise level of GRACE. A time series of a filtered hydrological model is used to confirm that signal attenuation for basins in the Total Runoff Integrating Pathways (TRIP) database over 320 km radii is less than 1 cm equivalent water height RMS, which is within the noise level of GRACE.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kowalski, Karol; Valiev, Marat
2009-12-21
The recently introduced energy expansion based on the use of the generating functional (GF) [K. Kowalski, P.D. Fan, J. Chem. Phys. 130, 084112 (2009)] provides a way of constructing size-consistent non-iterative coupled-cluster (CC) corrections in terms of moments of the CC equations. To take advantage of this expansion in a strongly interacting regime, the regularization of the cluster amplitudes is required in order to counteract the effect of excessive growth of the norm of the CC wavefunction. Although proven to be efficient, the previously discussed form of the regularization does not lead to rigorously size-consistent corrections. In this paper we address the issue of size-consistent regularization of the GF expansion by redefining the equations for the cluster amplitudes. The performance and basic features of the proposed methodology are illustrated on several gas-phase benchmark systems. Moreover, the regularized GF approaches are combined with a QM/MM module and applied to describe the SN2 reaction of CHCl3 and OH- in aqueous solution.
Psychological Dimensions of Cross-Cultural Differences
2013-05-01
[Table residue from the report (Final Report - Saucier): country-level sample statistics and fragments of Table 3, "Comparison of Item-Pool Sources: Average Eta-Squared Values", listing item pools such as regularity-norm behaviors derived from the anthropological literature and family values.]
Mirone, Alessandro; Brun, Emmanuel; Coan, Paola
2014-01-01
X-ray based Phase-Contrast Imaging (PCI) techniques have been demonstrated to enhance the visualization of soft tissues in comparison to conventional imaging methods. Nevertheless the delivered dose as reported in the literature of biomedical PCI applications often equals or exceeds the limits prescribed in clinical diagnostics. The optimization of new computed tomography strategies which include the development and implementation of advanced image reconstruction procedures is thus a key aspect. In this scenario, we implemented a dictionary learning method with a new form of convex functional. This functional contains in addition to the usual sparsity inducing and fidelity terms, a new term which forces similarity between overlapping patches in the superimposed regions. The functional depends on two free regularization parameters: a coefficient multiplying the sparsity-inducing norm of the patch basis functions coefficients, and a coefficient multiplying the norm of the differences between patches in the overlapping regions. The solution is found by applying the iterative proximal gradient descent method with FISTA acceleration. The gradient is computed by calculating projection of the solution and its error backprojection at each iterative step. We study the quality of the solution, as a function of the regularization parameters and noise, on synthetic data for which the solution is a-priori known. We apply the method on experimental data in the case of Differential Phase Tomography. For this case we use an original approach which consists in using vectorial patches, each patch having two components: one per each gradient component. The resulting algorithm, implemented in the European Synchrotron Radiation Facility tomography reconstruction code PyHST, has proven to be efficient and well-adapted to strongly reduce the required dose and the number of projections in medical tomography. PMID:25531987
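The optimization backbone named here, iterative proximal gradient descent with FISTA acceleration, can be written generically; in the paper the smooth part would collect the projection-fidelity and patch-overlap terms and the nonsmooth part the sparsity-inducing norm on the patch coefficients, but those operators are not reproduced below, so the A, b and lam in the commented usage lines are placeholders.

import numpy as np

def fista(gradf, L, prox, x0, iters=200):
    """Generic FISTA loop for min f(x) + g(x): gradf is the gradient of the smooth
    term f (Lipschitz constant L) and prox(z, s) is the proximal operator of s*g."""
    x = y = x0.copy()
    t = 1.0
    for _ in range(iters):
        x_new = prox(y - gradf(y) / L, 1.0 / L)
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)   # Nesterov momentum step
        x, t = x_new, t_new
    return x

# example use on a LASSO problem min 0.5*||A x - b||^2 + lam*||x||_1 (A, b, lam assumed):
# soft = lambda z, s: np.sign(z) * np.maximum(np.abs(z) - lam * s, 0.0)
# x = fista(lambda x: A.T @ (A @ x - b), np.linalg.norm(A, 2) ** 2, soft, np.zeros(A.shape[1]))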
NASA Astrophysics Data System (ADS)
Sharapudinov, I. I.
1987-02-01
Let p = p(t) be a measurable function defined on [0, 1]. If p(t) is essentially bounded on [0, 1], denote by L^{p(t)}([0, 1]) the set of measurable functions f defined on [0, 1] for which ∫_0^1 |f(t)|^{p(t)} dt < ∞. The space L^{p(t)}([0, 1]) with p(t) ≥ 1 is a normed space with norm ||f||_p = inf{α > 0 : ∫_0^1 |f(t)/α|^{p(t)} dt ≤ 1}. This paper examines the question of whether the Haar system is a basis in L^{p(t)}([0, 1]). Conditions on the function p(t) that are in a certain sense definitive for the Haar system to be a basis of L^{p(t)}([0, 1]) are obtained. The concept of a localization principle in the mean is introduced, and its connection with the space L^{p(t)}([0, 1]) is exhibited. Bibliography: 2 titles.
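Numerically, this Luxemburg-type norm can be evaluated by bisection on α, since the modular ∫_0^1 |f(t)/α|^{p(t)} dt is non-increasing in α when p(t) ≥ 1; the sketch below does this for sampled f and p on [0, 1] with a trapezoidal quadrature, which is an illustrative discretization choice. For constant p(t) = 2 it reproduces, up to discretization error, the ordinary L2 norm.

import numpy as np

def variable_exponent_norm(f, p, t=None, tol=1e-10):
    """Luxemburg norm ||f||_p = inf{alpha > 0 : integral |f(t)/alpha|^p(t) dt <= 1}
    for sampled f and exponent p >= 1 on [0, 1], evaluated by bisection on alpha."""
    t = np.linspace(0.0, 1.0, f.size) if t is None else t
    def modular(alpha):
        return np.trapz(np.abs(f / alpha) ** p, t)
    lo, hi = 1e-12, max(np.max(np.abs(f)), 1e-12) + 1.0
    while modular(hi) > 1.0:          # grow the bracket until the modular drops below 1
        hi *= 2.0
    while hi - lo > tol * hi:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if modular(mid) > 1.0 else (lo, mid)
    return hi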
Obtaining sparse distributions in 2D inverse problems
NASA Astrophysics Data System (ADS)
Reci, A.; Sederman, A. J.; Gladden, L. F.
2017-08-01
The mathematics of inverse problems has relevance across numerous estimation problems in science and engineering. L1 regularization has attracted recent attention in reconstructing the system properties in the case of sparse inverse problems; i.e., when the true property sought is not adequately described by a continuous distribution, in particular in Compressed Sensing image reconstruction. In this work, we focus on the application of L1 regularization to a class of inverse problems; relaxation-relaxation, T1-T2, and diffusion-relaxation, D-T2, correlation experiments in NMR, which have found widespread applications in a number of areas including probing surface interactions in catalysis and characterizing fluid composition and pore structures in rocks. We introduce a robust algorithm for solving the L1 regularization problem and provide a guide to implementing it, including the choice of the amount of regularization used and the assignment of error estimates. We then show experimentally that L1 regularization has significant advantages over both the Non-Negative Least Squares (NNLS) algorithm and Tikhonov regularization. It is shown that the L1 regularization algorithm stably recovers a distribution at a signal to noise ratio < 20 and that it resolves relaxation time constants and diffusion coefficients differing by as little as 10%. The enhanced resolving capability is used to measure the inter and intra particle concentrations of a mixture of hexane and dodecane present within porous silica beads immersed within a bulk liquid phase; neither NNLS nor Tikhonov regularization are able to provide this resolution. This experimental study shows that the approach enables discrimination between different chemical species when direct spectroscopic discrimination is impossible, and hence measurement of chemical composition within porous media, such as catalysts or rocks, is possible while still being stable to high levels of noise.
Minimum Error Bounded Efficient L1 Tracker with Occlusion Detection (PREPRINT)
2011-01-01
Minimum Error Bounded Efficient L1 Tracker with Occlusion Detection. Xue Mei, Haibin Ling, Yi Wu, Erik Blasch, Li Bai. Assembly Test Technology... The proposed BPR-L1 tracker is tested on several challenging benchmark sequences involving challenges such as occlusion and illumination changes. In all... point method depends on the value of the regularization parameter λ. In the experiments, we found that the total number of PCG iterations is a few hundred. The
Tripathi, Ritu; Cervone, Daniel; Savani, Krishna
2018-04-01
In Western theories of motivation, autonomy is conceived as a universal motivator of human action; enhancing autonomy is expected to increase motivation panculturally. Using a novel online experimental paradigm that afforded a behavioral measure of motivation, we found that, contrary to this prevailing view, autonomy cues affect motivation differently among American and Indian corporate professionals. Autonomy-supportive instructions increased motivation among Americans but decreased motivation among Indians. The motivational Cue × Culture interaction was extraordinarily large; the populations exhibited little statistical overlap. A second study suggested that this interaction reflects culturally specific norms that are widely understood by members of the given culture. When evaluating messages to motivate workers, Indians, far more than Americans, preferred a message invoking obligations to one invoking autonomous personal choice norms. Results cast doubt on the claim, made regularly in both basic and applied psychology, that enhancing autonomy is a universally preferred method for boosting motivation.
Sparse Covariance Matrix Estimation by DCA-Based Algorithms.
Phan, Duy Nhat; Le Thi, Hoai An; Dinh, Tao Pham
2017-11-01
This letter proposes a novel approach using l0-norm (zero-norm) regularization for the sparse covariance matrix estimation (SCME) problem. The objective function of the SCME problem is composed of a nonconvex part and the l0 term, which is discontinuous and difficult to tackle. Appropriate DC (difference of convex functions) approximations of the l0-norm are used, resulting in approximate SCME problems that are still nonconvex. DC programming and DCA (DC algorithm), powerful tools in the nonconvex programming framework, are investigated. Two DC formulations are proposed and the corresponding DCA schemes are developed. Two applications of the SCME problem are considered: classification via sparse quadratic discriminant analysis and portfolio optimization. A careful empirical experiment is performed on simulated and real data sets to study the performance of the proposed algorithms. Numerical results showed their efficiency and their superiority compared with seven state-of-the-art methods.
Naumenko, Olesya I; Zheng, Han; Xiong, Yanwen; Senchenkova, Sof'ya N; Wang, Hong; Shashkov, Alexander S; Li, Qun; Wang, Jianping; Knirel, Yuriy A
2018-05-22
An O-polysaccharide was isolated from the lipopolysaccharide of Escherichia albertii O2 and studied by chemical methods and 1D and 2D 1H and 13C NMR spectroscopy. The following structure of the O-polysaccharide was established: . The O-polysaccharide is characterized by masked regularity owing to a non-stoichiometric O-acetylation of an l-fucose residue in the main chain and a non-stoichiometric side-chain l-fucosylation of a β-GlcNAc residue. A regular linear polysaccharide was obtained by sequential Smith degradation and alkaline O-deacetylation of the O-polysaccharide. The content of the O-antigen gene cluster of E. albertii O2 was found to be essentially consistent with the O-polysaccharide structure established. Copyright © 2018 Elsevier Ltd. All rights reserved.
Åstrøm, Anne N; Lie, Stein Atle; Gülcan, Ferda
2018-05-31
Understanding factors that affect dental attendance behavior helps in constructing effective oral health campaigns. A socio-cognitive model that adequately explains variance in regular dental attendance has yet to be validated among younger adults in Norway. Focusing on a representative sample of younger Norwegian adults, this cross-sectional study provided an empirical test of the Theory of Planned Behavior (TPB) augmented with descriptive norm and action planning and estimated direct and indirect effects of attitudes, subjective norms, descriptive norms, perceived behavioral control and action planning on intended and self-reported regular dental attendance. Self-administered questionnaires provided by 2551 25- to 35-year-olds, randomly selected from the Norwegian national population registry, were used to assess socio-demographic factors, dental attendance, and the constructs of the augmented TPB model (attitudes, subjective norms, descriptive norms, intention, action planning). A two-stage process of structural equation modelling (SEM) was used to test the augmented TPB model. Confirmatory factor analysis (CFA) confirmed the proposed correlated 6-factor measurement model after re-specification. SEM revealed that attitudes, perceived behavioral control, subjective norms and descriptive norms explained intention. The corresponding standardized regression coefficients were, respectively, β = 0.70, β = 0.18, β = -0.17 and β = 0.11 (p < 0.001). Intention (β = 0.46) predicted action planning, and action planning (β = 0.19) predicted dental attendance behavior (p < 0.001). The model revealed indirect effects of intention and perceived behavioral control on behavior through action planning and through intention and action planning, respectively. The final model explained 64% and 41% of the total variance in intention and dental attendance behavior, respectively. The findings support the utility of the TPB, the expanded normative component and action planning in predicting younger adults' intended and self-reported dental attendance. Interventions targeting young adults' dental attendance might usefully focus on positive consequences following this behavior, accompanied by modeling and group performance.
Holbrook, Troy Lisa; Hoyt, David B; Coimbra, Raul; Potenza, Bruce; Sise, Michael J; Sack, Dan I; Anderson, John P
2007-03-01
Injury is a leading cause of death and preventable morbidity in adolescents. Little is known about long-term quality of life (QoL) outcomes in injured adolescents. The objectives of the present report are to describe long-term QoL outcomes and compare posttrauma QoL to national norms for QoL in uninjured adolescents from the National Health Interview Survey (NHIS). In all, 401 trauma patients aged 12 to 19 years were enrolled in the study. Enrollment criteria excluded spinal cord injury. QoL after trauma was measured using the Quality of Well-being (QWB) scale, a sensitive and well-validated functional index (range: 0 = death to 1.000 = optimum functioning). Patient outcomes were assessed at discharge, and 3, 6, 12, 18, and 24 months after discharge. NHIS data were based on 3 survey years and represent a population-based U.S. national random sample of uninjured adolescents. Major trauma in adolescents was associated with significant and marked deficits in QoL throughout the 24-month follow-up period, compared with NHIS norms for this age group. Compared with NHIS norms for QoL in uninjured adolescents aged 12 to 19 years (N = 81,216,835; QWB mean = 0.876), injured adolescents after major trauma had striking and significant QoL deficits beginning at 3-month follow-up (QWB mean = 0.694, p < 0.0001), that continued throughout the long-term follow-up 24 months after discharge (6-month follow-up QWB mean = 0.726, p < 0.0001; 12-month follow-up QWB mean = 0.747, p < 0.0001; 18-month follow-up QWB mean = 0.758, p < 0.0001; 24-month follow-up QWB mean = 0.766, p < 0.0001). QoL deficits were also strongly associated with age (>or=15 years) and female sex. Other significant risk factors for poor QoL outcomes were perceived threat to life, pedestrian struck mechanism, and Injury Severity Scores >16. Major trauma in adolescents is associated with significant and marked deficits in long-term QoL outcomes, compared with U.S. norms for healthy adolescents. Early identification and treatment of risk factors for poor long-term QoL outcomes must become an integral component of trauma care in mature trauma care systems.
Image restoration for civil engineering structure monitoring using imaging system embedded on UAV
NASA Astrophysics Data System (ADS)
Vozel, Benoit; Dumoulin, Jean; Chehdi, Kacem
2013-04-01
Nowadays, civil engineering structures are periodically surveyed by qualified technicians (i.e. alpinists) performing visual inspection using heavy mechanical pods. This method is far from safe, not only for the civil engineering structure monitoring staff, but also for users. Due to the unceasing traffic increase, making diversions or closing lanes on a bridge becomes more and more difficult. New inspection methods have to be found. One of the most promising techniques is to develop inspection methods using images acquired by a dedicated monitoring system operating around the civil engineering structures, without disturbing the traffic. In that context, the use of images acquired with a UAV, which flies around the structures, is of particular interest. The UAV can be equipped with different vision systems (digital camera, infrared sensor, video, etc.). Nonetheless, detection of small distresses on images (like cracks of 1 mm or less) depends on image quality, which is sensitive to internal parameters of the UAV (vibration modes, video exposure times, etc.) and to external parameters (turbulence, bad illumination of the scene, etc.). Though progress has been made at the UAV level and at the sensor level (i.e. optics), image deterioration is still an open problem. These deteriorations are mainly represented by motion blur that can be coupled with out-of-focus blur and observation noise on acquired images. In practice, deteriorations are unknown if no a priori information is available or no dedicated additional instrumentation is set up at the UAV level. Image restoration processing is therefore required. This is a difficult problem [1-3] which has been intensively studied over the last decades [4-12]. Image restoration can be addressed by following a blind approach or a myopic one. In both cases, it includes two processing steps that can be implemented in sequential or alternate mode. The first step carries out the identification of the blur impulse response and the second one makes use of this estimated blur kernel for performing the deconvolution of the acquired image. In the present work, different regularization methods, mainly based on the aforementioned Total Variation pseudo-norm, are studied and analysed. The key points of their respective implementation, their properties and their limits are investigated in this particular applicative context. References [1] J. Hadamard. Lectures on Cauchy's problem in linear partial differential equations. Yale University Press, 1923. [2] A. N. Tihonov. On the resolution of incorrectly posed problems and regularisation method (in Russian). Doklady A. N. SSSR, 151(3), 1963. [3] C. R. Vogel. Computational Methods for inverse problems, SIAM, 2002. [4] A. K. Katsaggelos, J. Biemond, R.W. Schafer, and R. M. Mersereau, "A regularized iterative image restoration algorithm," IEEE Transactions on Signal Processing, vol. 39, no. 4, pp. 914-929, 1991. [5] J. Biemond, R. L. Lagendijk, and R. M. Mersereau, "Iterative methods for image deblurring," Proceedings of the IEEE, vol. 78, no. 5, pp. 856-883, 1990. [6] D. Kundur and D. Hatzinakos, "Blind image deconvolution," IEEE Signal Processing Magazine, vol. 13, no. 3, pp. 43-64, 1996. [7] Y. L. You and M. Kaveh, "A regularization approach to joint blur identification and image restoration," IEEE Transactions on Image Processing, vol. 5, no. 3, pp. 416-428, 1996. [8] T. F. Chan and C. K. Wong, "Total variation blind deconvolution," IEEE Transactions on Image Processing, vol. 7, no. 3, pp. 370-375, 1998. [9] S. Chardon, B. Vozel, and K. Chehdi.
Parametric Blur Estimation Using the GCV Criterion and a Smoothness Constraint on the Image. Multidimensional Systems and Signal Processing Journal, Kluwer Ed., 10:395-414, 1999 [10] B. Vozel, K. Chehdi, and J. Dumoulin. Myopic image restoration for civil structures inspection using UAV (in French). In GRETSI, 2005. [11] L. Bar, N. Sochen, and N. Kiryati. Semi-blind image restoration via Mumford-Shah regularization. IEEE Transactions on Image Processing, 15(2), 2006. [12] J. H. Money and S. H. Kang, "Total variation minimizing blind deconvolution with shock filter reference," Image and Vision Computing, vol. 26, no. 2, pp. 302-314, 2008.
Brain abnormality segmentation based on l1-norm minimization
NASA Astrophysics Data System (ADS)
Zeng, Ke; Erus, Guray; Tanwar, Manoj; Davatzikos, Christos
2014-03-01
We present a method that uses sparse representations to model the inter-individual variability of healthy anatomy from a limited number of normal medical images. Abnormalities in MR images are then defined as deviations from the normal variation. More precisely, we model an abnormal (pathological) signal y as the superposition of a normal part ỹ that can be sparsely represented under an example-based dictionary, and an abnormal part r. Motivated by a dense error correction scheme recently proposed for sparse signal recovery, we use l1-norm minimization to separate ỹ and r. We extend the existing framework, which was mainly used on robust face recognition in a discriminative setting, to address challenges of brain image analysis, particularly the high dimensionality and low sample size problem. The dictionary is constructed from local image patches extracted from training images aligned using smooth transformations, together with minor perturbations of those patches. A multi-scale sliding-window scheme is applied to capture anatomical variations ranging from fine and localized to coarser and more global. The statistical significance of the abnormality term r is obtained by comparison to its empirical distribution through cross-validation, and is used to assign an abnormality score to each voxel. In our validation experiments the method is applied for segmenting abnormalities on 2-D slices of FLAIR images, and we obtain segmentation results consistent with the expert-defined masks.
Inversion of Magnetic Measurements of the CHAMP Satellite Over the Pannonian Basin
NASA Technical Reports Server (NTRS)
Kis, K. I.; Taylor, P. T.; Wittmann, G.; Toronyi, B.; Puszta, S.
2011-01-01
The Pannonian Basin is a deep intra-continental basin that formed as part of the Alpine orogeny. In order to study the nature of the crustal basement, we used the long-wavelength magnetic anomalies acquired by the CHAMP satellite. The anomalies were distributed in a spherical shell, some 107,927 data points recorded between January 1 and December 31 of 2008. They covered the Pannonian Basin and its vicinity. These anomaly data were interpolated onto a spherical grid of 0.5 x 0.5 at an elevation of 324 km using a Gaussian weight function. The vertical gradient of these total magnetic anomalies was also computed and mapped onto the surface of a sphere at 324 km elevation. The former spherical anomaly data at 425 km altitude were downward continued to 324 km. To interpret these data at the elevation of 324 km, we used an inversion method. A polygonal prism forward model was used for the inversion. The minimization problem was solved numerically by the Simplex and Simulated Annealing methods; an L2 norm was used in the case of Gaussian distribution parameters and an L1 norm in the case of Laplace distribution parameters. We interpret that the magnetic anomaly was produced by several sources and by the effect of the stable magnetization of the exsolution of hemo-ilmenite minerals in the upper crustal metamorphic rocks.
NEIGHBORHOOD NORMS AND SUBSTANCE USE AMONG TEENS
Musick, Kelly; Seltzer, Judith A.; Schwartz, Christine R.
2008-01-01
This paper uses new data from the Los Angeles Family and Neighborhood Survey (L.A. FANS) to examine how neighborhood norms shape teenagers’ substance use. Specifically, it takes advantage of clustered data at the neighborhood level to relate adult neighbors’ attitudes and behavior with respect to smoking, drinking, and drugs, which we treat as norms, to teenagers’ own smoking, drinking, and drug use. We use hierarchical linear models to account for parents’ attitudes and behavior and other characteristics of individuals and families. We also investigate how the association between neighborhood norms and teen behavior depends on: (1) the strength of norms, as measured by consensus in neighbors’ attitudes and conformity in their behavior; (2) the willingness and ability of neighbors to enforce norms, for instance, by monitoring teens’ activities; and (3) the degree to which teens are exposed to their neighbors. We find little association between neighborhood norms and teen substance use, regardless of how we condition the relationship. We discuss possible theoretical and methodological explanations for this finding. PMID:18496598
Stabilizing l1-norm prediction models by supervised feature grouping.
Kamkar, Iman; Gupta, Sunil Kumar; Phung, Dinh; Venkatesh, Svetha
2016-02-01
Emerging Electronic Medical Records (EMRs) have reformed modern healthcare. These records have great potential to be used for building clinical prediction models. However, a problem in using them is their high dimensionality. Since a lot of the information may not be relevant for prediction, the underlying complexity of the prediction models may not be high. A popular way to deal with this problem is to employ feature selection. Lasso and l1-norm based feature selection methods have shown promising results. But, in the presence of correlated features, these methods select feature sets that change considerably with small changes in the data. This prevents clinicians from obtaining a stable feature set, which is crucial for clinical decision making. Grouping correlated variables together can improve the stability of feature selection; however, such a grouping is usually not known and needs to be estimated for optimal performance. Addressing this problem, we propose a new model that can simultaneously learn the grouping of correlated features and perform stable feature selection. We formulate the model as a constrained optimization problem and provide an efficient solution with guaranteed convergence. Our experiments with both synthetic and real-world datasets show that the proposed model is significantly more stable than Lasso and many existing state-of-the-art shrinkage and classification methods. We further show that in terms of prediction performance, the proposed method consistently outperforms Lasso and other baselines. Our model can be used for selecting stable risk factors for a variety of healthcare problems, so it can assist clinicians toward accurate decision making. Copyright © 2015 Elsevier Inc. All rights reserved.
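As a purely illustrative aside (not the paper's model), the instability the abstract describes can be reproduced in a few lines: fit a standard Lasso on bootstrap resamples of data with correlated feature blocks and count how often each feature is selected. All data and parameters below are synthetic placeholders.

```python
# Hedged sketch: measuring Lasso selection stability across bootstrap resamples.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, p = 200, 50
# two blocks of strongly correlated features
z1, z2 = rng.standard_normal((2, n))
X = rng.standard_normal((n, p))
X[:, :5] = z1[:, None] + 0.1 * rng.standard_normal((n, 5))
X[:, 5:10] = z2[:, None] + 0.1 * rng.standard_normal((n, 5))
y = X[:, 0] + X[:, 5] + 0.5 * rng.standard_normal(n)

selections = []
for _ in range(30):
    idx = rng.integers(0, n, n)                      # bootstrap resample
    model = Lasso(alpha=0.1).fit(X[idx], y[idx])
    selections.append(np.flatnonzero(model.coef_))   # indices of selected features

# Selection frequency per feature: correlated features tend to swap in and out,
# which is the instability that grouping-based models aim to remove.
freq = np.bincount(np.concatenate(selections), minlength=p) / len(selections)
print(np.round(freq[:10], 2))
```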
Regularity estimates up to the boundary for elliptic systems of difference equations
NASA Technical Reports Server (NTRS)
Strikwerda, J. C.; Wade, B. A.; Bube, K. P.
1986-01-01
Regularity estimates up to the boundary for solutions of elliptic systems of finite difference equations were proved. The regularity estimates, obtained for boundary fitted coordinate systems on domains with smooth boundary, involve discrete Sobolev norms and are proved using pseudo-difference operators to treat systems with variable coefficients. The elliptic systems of difference equations and the boundary conditions which are considered are very general in form. The regularity of a regular elliptic system of difference equations was proved equivalent to the nonexistence of eigensolutions. The regularity estimates obtained are analogous to those in the theory of elliptic systems of partial differential equations, and to the results of Gustafsson, Kreiss, and Sundstrom (1972) and others for hyperbolic difference equations.
Pant, Jeevan K; Krishnan, Sridhar
2014-04-01
A new algorithm for the reconstruction of electrocardiogram (ECG) signals and a dictionary learning algorithm for the enhancement of its reconstruction performance for a class of signals are proposed. The signal reconstruction algorithm is based on minimizing the lp pseudo-norm of the second-order difference, referred to as the lp(2d) pseudo-norm, of the signal. The optimization involved is carried out using a sequential conjugate-gradient algorithm. The dictionary learning algorithm uses an iterative procedure wherein signal reconstruction and dictionary update steps are repeated until a convergence criterion is satisfied. The signal reconstruction step is implemented by using the proposed signal reconstruction algorithm, and the dictionary update step is implemented by using the linear least-squares method. Extensive simulation results demonstrate that the proposed algorithm yields improved reconstruction performance for temporally correlated ECG signals relative to the state-of-the-art lp(1d)-regularized least-squares and Bayesian learning based algorithms. Also, for a known class of signals, the reconstruction performance of the proposed algorithm can be improved by applying it in conjunction with a dictionary obtained using the proposed dictionary learning algorithm.
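The sketch below is not the authors' sequential conjugate-gradient algorithm; it is only a rough illustration of penalizing the lp pseudo-norm of the second-order difference via iteratively reweighted least squares, with Phi, y, p, lam and eps as assumed placeholders.

```python
# Hedged sketch: lp-of-second-difference regularized reconstruction via IRLS.
import numpy as np

def second_difference(n):
    D = np.zeros((n - 2, n))
    for i in range(n - 2):
        D[i, i:i + 3] = [1.0, -2.0, 1.0]
    return D

def lp2d_reconstruct(Phi, y, p=0.5, lam=1e-2, eps=1e-8, n_iter=30):
    _, n = Phi.shape
    D = second_difference(n)
    x = Phi.T @ y                                   # crude initial guess
    PtP, Pty = Phi.T @ Phi, Phi.T @ y
    for _ in range(n_iter):
        d = D @ x
        w = (d**2 + eps) ** (p / 2 - 1)             # IRLS weights for |d|^p
        A = PtP + lam * D.T @ (w[:, None] * D)      # weighted second-difference penalty
        x = np.linalg.solve(A, Pty)
    return x

# toy usage: undersampled random measurements of a piecewise-smooth signal
rng = np.random.default_rng(0)
n, m = 128, 60
x_true = np.cumsum(np.cumsum(rng.standard_normal(n) * (rng.random(n) < 0.05)))
Phi = rng.standard_normal((m, n)) / np.sqrt(m)
x_hat = lp2d_reconstruct(Phi, Phi @ x_true)
```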
López-Landavery, Edgar A; Portillo-López, Amelia; Gallardo-Escárate, Cristian; Del Río-Portilla, Miguel A
2014-10-10
The red abalone Haliotis rufescens is one of the most important species for aquaculture in Baja California, México, and despite this, few gene expression studies have been done in tissues such as gill, head and gonad. For this purpose, reverse transcription and quantitative real-time PCR (RT-qPCR) is a powerful tool for gene expression evaluation. For a reliable analysis, however, it is necessary to select and validate housekeeping genes that allow proper transcription quantification. The stability of nine housekeeping genes (ACTB, BGLU, TUBB, CY, GAPDH, HPRTI, RPL5, SDHA and UBC) was evaluated in different tissues of red abalone (gill, head and gonad/digestive gland). Four-fold serial dilutions of cDNA (from 25 ng μL(-1) to 0.39 ng μL(-1)) were used to prepare the standard curve, which showed gene efficiencies between 0.95 and 0.99, with R(2) = 0.99. geNorm and NormFinder analysis showed that RPL5 and CY were the most stable genes considering all tissues, whereas in gill HPRTI and BGLU were most stable. In gonad/digestive gland, RPL5 and TUBB were the most stable genes with geNorm, while SDHA and HPRTI were the best using NormFinder. Similarly, in head the best genes were RPL5 and UBC with geNorm, and GAPDH and CY with NormFinder. The technical variability analysis with RPL5 and abalone gonad/digestive gland tissue indicated high repeatability, with a variation coefficient within groups ≤ 0.56% and between groups ≤ 1.89%. These results will help us in further research on reproduction, thermoregulation and endocrinology in red abalone. Copyright © 2014 Elsevier B.V. All rights reserved.
School-based prevention of bullying and relational aggression in adolescence: the fairplayer.manual.
Scheithauer, Herbert; Hess, Markus; Schultze-Krumbholz, Anja; Bull, Heike Dele
2012-01-01
The fairplayer.manual is a school-based program to prevent bullying. The program consists of fifteen to seventeen consecutive ninety-minute lessons using cognitive-behavioral methods, methods targeting group norms and group dynamics, and discussions on moral dilemmas. Following a two-day training session, teachers, together with skilled fairplayer.teamers, implement fairplayer.manual in the classroom during regular school lessons. This chapter offers a summary of the program's conception and underlying prevention theory and summarizes the results from two evaluation studies. Standardized questionnaires showed a positive impact of the intervention program on several outcome variables. Copyright © 2012 Wiley Periodicals, Inc., A Wiley Company.
Two conditions for equivalence of 0-norm solution and 1-norm solution in sparse representation.
Li, Yuanqing; Amari, Shun-Ichi
2010-07-01
In sparse representation, two important sparse solutions, the 0-norm and 1-norm solutions, have been receiving much attention. The 0-norm solution is the sparsest; however, it is not easy to obtain. Although the 1-norm solution may not be the sparsest, it can be easily obtained by the linear programming method. In many cases, the 0-norm solution can be obtained through finding the 1-norm solution. Many discussions exist on the equivalence of the two sparse solutions. This paper analyzes two conditions for the equivalence of the two sparse solutions. The first condition is necessary and sufficient, but difficult to verify. The second condition is necessary but not sufficient; however, it is easy to verify. In this paper, we analyze the second condition within the stochastic framework and propose a variant. We then prove that the equivalence of the two sparse solutions holds with high probability under the variant of the second condition. Furthermore, in the limit case where the 0-norm solution is extremely sparse, the second condition is also a sufficient condition with probability 1.
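As the abstract notes, the 1-norm solution can be obtained by linear programming; a small sketch of that standard formulation (splitting x into nonnegative parts u and v) is given below, with the matrix A and vector b as illustrative placeholders.

```python
# Sketch: 1-norm (basis pursuit) solution of A x = b via linear programming.
import numpy as np
from scipy.optimize import linprog

def l1_solution(A, b):
    """min ||x||_1 s.t. A x = b, via the split x = u - v with u, v >= 0."""
    _, n = A.shape
    c = np.ones(2 * n)                       # objective: sum(u) + sum(v)
    A_eq = np.hstack([A, -A])                # A u - A v = b
    res = linprog(c, A_eq=A_eq, b_eq=b, bounds=[(0, None)] * (2 * n), method="highs")
    u, v = res.x[:n], res.x[n:]
    return u - v

# toy example: a sparse vector recovered from underdetermined measurements
rng = np.random.default_rng(1)
A = rng.standard_normal((20, 60))
x0 = np.zeros(60); x0[[3, 17, 42]] = [1.5, -2.0, 0.8]
x_hat = l1_solution(A, A @ x0)
print(np.round(x_hat[[3, 17, 42]], 3))
```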
Stals, M; Verhoeven, S; Bruggeman, M; Pellens, V; Schroeyers, W; Schreurs, S
2014-01-01
The Euratom BSS requires that in the near future (2015) building materials for application in dwellings or buildings such as offices or workshops be screened for NORM nuclides. The screening tool is the activity concentration index (ACI). It is therefore expected that a large number of building materials will be screened for NORM and will thus require ACI determination. Nowadays, the proposed standard for determination of the building material ACI is a laboratory analysis technique with high-purity germanium spectrometry and a 21-day equilibrium delay. In this paper, the B-NORM method for determination of the building material ACI is assessed as a faster method that can be performed on-site, as an alternative to the aforementioned standard method. The B-NORM method utilizes a LaBr3(Ce) scintillation probe to obtain the spectral data. Commercially available software was applied to comprehensively take into account the factors determining the counting efficiency. The ACI was determined by interpreting the gamma spectrum from (226)Ra and its progeny, (232)Th progeny, and (40)K. In order to assess the accuracy of the B-NORM method, a large selection of samples was analyzed by a certified laboratory and the results were compared with the B-NORM results. The results obtained with the B-NORM method were in good agreement with the results obtained by the certified laboratory, indicating that the B-NORM method is an appropriate screening method to assess the building material ACI. The B-NORM method was applied to analyze more than 120 building materials on the Belgian market. No building materials that exceed the proposed reference level of 1 mSv/year were encountered. Copyright © 2013 Elsevier Ltd. All rights reserved.
Li, Weikai; Wang, Zhengxia; Zhang, Limei; Qiao, Lishan; Shen, Dinggang
2017-01-01
Functional brain networks (FBNs) have become an increasingly important way to model the statistical dependence among neural time courses of the brain, and provide effective imaging biomarkers for the diagnosis of some neurological or psychological disorders. Currently, Pearson's Correlation (PC) is the simplest and most widely used method for constructing FBNs. Despite its advantages in statistical meaning and computational performance, PC tends to result in an FBN with dense connections. Therefore, in practice, the PC-based FBN needs to be sparsified by removing weak (potentially noisy) connections. However, such a scheme depends on a hard threshold without enough flexibility. Different from this traditional strategy, in this paper, we propose a new approach for estimating FBNs by remodeling PC as an optimization problem, which provides a way to incorporate biological/physical priors into the FBNs. In particular, we introduce an L1-norm regularizer into the optimization model for obtaining a sparse solution. Compared with the hard-threshold scheme, the proposed framework gives an elegant mathematical formulation for sparsifying PC-based networks. More importantly, it provides a platform to encode other biological/physical priors into the PC-based FBNs. To further illustrate the flexibility of the proposed method, we extend the model to a weighted counterpart for learning both sparse and scale-free networks, and then conduct experiments to identify autism spectrum disorder (ASD) patients from normal controls (NC) based on the constructed FBNs. Consequently, we achieved an 81.52% classification accuracy, which outperforms the baseline and state-of-the-art methods. PMID:28912708
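A minimal sketch in the spirit of the described model (not necessarily the authors' exact formulation): if the optimization is written as min_W 0.5*||W - C||_F^2 + lam*||W||_1, with C the Pearson correlation matrix, its solution is elementwise soft-thresholding of C. The data and lam below are illustrative.

```python
# Hedged sketch: sparse functional network via soft-thresholded Pearson correlation.
import numpy as np

def sparse_fbn(timecourses, lam=0.2):
    """timecourses: array of shape (n_timepoints, n_regions)."""
    C = np.corrcoef(timecourses, rowvar=False)         # Pearson correlation FBN
    W = np.sign(C) * np.maximum(np.abs(C) - lam, 0.0)  # soft-threshold = L1 prox
    np.fill_diagonal(W, 0.0)                           # drop self-connections
    return W

# toy usage
rng = np.random.default_rng(0)
X = rng.standard_normal((120, 10))
W = sparse_fbn(X, lam=0.3)
```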
Accurate sparse-projection image reconstruction via nonlocal TV regularization.
Zhang, Yi; Zhang, Weihua; Zhou, Jiliu
2014-01-01
Sparse-projection image reconstruction is a useful approach to lower the radiation dose; however, the incompleteness of projection data will cause degradation of imaging quality. As a typical compressive sensing method, total variation has attracted great attention for this problem. Suffering from theoretical imperfections, total variation produces blocky effects in smooth regions and blurs edges. To overcome this problem, in this paper, we introduce the nonlocal total variation into sparse-projection image reconstruction and formulate the minimization problem with the new nonlocal total variation norm. The qualitative and quantitative analyses of numerical as well as clinical results demonstrate the validity of the proposed method. Compared to other existing methods, our method more efficiently suppresses artifacts caused by low-rank reconstruction and preserves structure information better.
A coupled electro-thermal Discontinuous Galerkin method
NASA Astrophysics Data System (ADS)
Homsi, L.; Geuzaine, C.; Noels, L.
2017-11-01
This paper presents a Discontinuous Galerkin scheme to solve the nonlinear elliptic partial differential equations of coupled electro-thermal problems. We discuss the fundamental equations for the transport of electricity and heat, in terms of macroscopic variables such as temperature and electric potential. A fully coupled nonlinear weak formulation for electro-thermal problems is developed based on continuum mechanics equations expressed in terms of energetically conjugated pairs of fluxes and field gradients. The weak form can thus be formulated as a Discontinuous Galerkin method. The existence and uniqueness of the weak form solution are proved. The numerical properties of the nonlinear elliptic problems, i.e., consistency and stability, are demonstrated under specific conditions, namely the use of a sufficiently large stabilization parameter and at least quadratic polynomial approximations. Moreover, the a priori error estimates in the H1-norm and in the L2-norm are shown to be optimal in the mesh size for the polynomial approximation degree.
Bagheri, Shirin; Hansson, Emma; Manjer, Jonas; Troëng, Thomas; Brorson, Håkan
2017-01-01
Background: Arm lymphedema after breast cancer surgery affects women from both physical and psychological points of view. Lymphedema leads to adipose tissue deposition. Liposuction and controlled compression therapy (CCT) reduce the lymphedema completely. Methods and Results: Sixty female patients with arm lymphedema were followed for a 1-year period after surgery. The 36-item short-form health survey (SF-36) was used to assess health-related quality of life (HRQoL). Patients completed the SF-36 questionnaire before liposuction, and after 1, 3, 6, and 12 months. Preoperative excess arm volume was 1365 ± 73 mL. Complete reduction was achieved after 3 months and was sustained during follow-up. The adipose tissue volume removed at surgery was 1373 ± 56 mL. One month after liposuction, better scores were found in mental health. After 3 months, an increase in physical functioning, bodily pain, and vitality was detected. After 1 year, an increase was also seen for social functioning. The physical component score was higher at 3 months and thereafter, while the mental component score was improved at 3 and 12 months. Compared with SF-36 norm data for the Swedish population, only physical functioning showed lower values than the norm at baseline. After liposuction, general health, bodily pain, vitality, mental health, and social functioning showed higher values at various time points. Conclusions: Liposuction of arm lymphedema in combination with CCT improves patients' HRQoL as measured with SF-36. The treatment seems to target and improve both the physical and mental health domains. PMID:28135120
Point-spread function reconstruction in ground-based astronomy by l(1)-l(p) model.
Chan, Raymond H; Yuan, Xiaoming; Zhang, Wenxing
2012-11-01
In ground-based astronomy, images of objects in outer space are acquired via ground-based telescopes. However, the imaging system is generally degraded by atmospheric turbulence, and hence images so acquired are blurred with an unknown point-spread function (PSF). To restore the observed images, the wavefront of light at the telescope's aperture is utilized to derive the PSF. A model with Tikhonov regularization has been proposed to find the high-resolution phase gradients by solving a least-squares system. Here we propose the l(1)-l(p) (p=1, 2) model for reconstructing the phase gradients. This model can provide sharper edges in the gradients while removing noise. The minimization models can easily be solved by the Douglas-Rachford alternating direction method of multipliers, and the convergence rate is readily established. Numerical results are given to illustrate that the model can give better phase gradients and hence a more accurate PSF. As a result, the restored images are much more accurate when compared to the traditional Tikhonov regularization model.
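The following is a generic illustration, not the paper's l(1)-l(p) phase-gradient model: an ADMM loop, from the same Douglas-Rachford / alternating-direction family cited above, applied to the simpler problem min_x 0.5*||Ax - b||^2 + lam*||x||_1, with A, b, lam and rho as placeholders.

```python
# Hedged sketch: ADMM for an l1-regularized least-squares problem.
import numpy as np

def soft(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def admm_lasso(A, b, lam=0.1, rho=1.0, n_iter=200):
    _, n = A.shape
    x = np.zeros(n); z = np.zeros(n); u = np.zeros(n)
    L = np.linalg.cholesky(A.T @ A + rho * np.eye(n))   # factor once, reuse each iteration
    Atb = A.T @ b
    for _ in range(n_iter):
        rhs = Atb + rho * (z - u)
        x = np.linalg.solve(L.T, np.linalg.solve(L, rhs))  # quadratic subproblem
        z = soft(x + u, lam / rho)                         # proximal step for the l1 term
        u = u + x - z                                      # dual update
    return z

# toy usage
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 100))
x0 = np.zeros(100); x0[[5, 40, 77]] = [1.0, -0.8, 0.6]
x_hat = admm_lasso(A, A @ x0 + 0.01 * rng.standard_normal(50), lam=0.1)
```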
Grid generation and adaptation via Monge-Kantorovich optimization in 2D and 3D
NASA Astrophysics Data System (ADS)
Delzanno, Gian Luca; Chacon, Luis; Finn, John M.
2008-11-01
In a recent paper [1], Monge-Kantorovich (MK) optimization was proposed as a method of grid generation/adaptation in two dimensions (2D). The method is based on the minimization of the L2 norm of grid point displacement, constrained to producing a given positive-definite cell volume distribution (equidistribution constraint). The procedure gives rise to the Monge-Ampère (MA) equation: a single, non-linear scalar equation with no free parameters. The MA equation was solved in Ref. [1] with the Jacobian-Free Newton-Krylov technique and several challenging test cases were presented in square domains in 2D. Here, we extend the work of Ref. [1]. We first formulate the MK approach in physical domains with curved boundary elements and in 3D. We then show the results of applying it to these more general cases. We show that MK optimization produces optimal grids in which the constraint is satisfied numerically to truncation error. [1] G.L. Delzanno, L. Chacón, J.M. Finn, Y. Chung, G. Lapenta, A new, robust equidistribution method for two-dimensional grid generation, submitted to Journal of Computational Physics (2008).
A highly regular fucan sulfate from the sea cucumber Stichopus horrens.
Ustyuzhanina, Nadezhda E; Bilan, Maria I; Dmitrenok, Andrey S; Borodina, Elizaveta Yu; Nifantiev, Nikolay E; Usov, Anatolii I
2018-02-01
A highly regular fucan sulfate SHFS was isolated from the sea cucumber Stichopus horrens by extraction of the body walls in the presence of papain followed by ion-exchange and gel permeation chromatography. SHFS had MW of about 140 kDa and contained fucose and sulfate in the molar ratio of about 1:1. Chemical and NMR spectroscopic methods were applied for the structural characterization of the polysaccharide. SHFS was shown to have linear molecules built up of 3-linked α-l-fucopyranose 2-sulfate residues. Anticoagulant properties of SHFS were assessed in vitro in comparison with the LMW heparin (enoxaparin) and totally sulfated 3-linked α-l-fucan. SHFS was found to have the lowest activity, and hence, both sulfate groups at O-2 and O-4 of fucosyl units seem to be important for anticoagulant effect of sulfated homo-(1 → 3)-α-l-fucans. Copyright © 2017 Elsevier Ltd. All rights reserved.
Rios, Daniela; Santos, Flávia Cardoso Zaidan; Honório, Heitor Marques; Magalhães, Ana Carolina; Wang, Linda; de Andrade Moreira Machado, Maria Aparecida; Buzalaf, Marilia Afonso Rabelo
2011-03-01
To evaluate whether the type of cola drink (regular or diet) could influence the wear of enamel subjected to erosion followed by brushing abrasion. Ten volunteers wore intraoral devices that each had eight bovine enamel blocks divided into four groups: ER, erosion with regular cola; EAR, erosion with regular cola plus abrasion; EL, erosion with light cola; and EAL, erosion with light cola plus abrasion. Each day for 1 week, half of each device was immersed in regular cola for 5 minutes. Then, two blocks were brushed using a fluoridated toothpaste and electric toothbrush for 30 seconds four times daily. Immediately after, the other half of the device was subjected to the same procedure using a light cola. The pH, calcium, phosphorus, and fluoride concentrations of the colas were analyzed using standard procedures. Enamel alterations were measured by profilometry. Data were tested using two-way ANOVA and Bonferroni test (P<.05). Regarding chemical characteristics, light cola presented pH 3.0, 13.7 mg Ca/L, 15.5 mg P/L, and 0.31 mg F/L, while regular cola had pH 2.6, 32.1 mg Ca/L, 18.1 mg P/L, and 0.26 mg F/L. The light cola promoted less enamel loss (EL, 0.36 μm; EAL, 0.39 μm) than its regular counterpart (ER, 0.72 μm; EAR, 0.95 μm) for both conditions. There was not a significant difference (P>.05) between erosion and erosion plus abrasion for light cola. However, for regular cola, erosion plus abrasion resulted in higher enamel loss than erosion alone. The data suggest that light cola promoted less enamel wear even when erosion was followed by brushing abrasion.
Li, Laquan; Wang, Jian; Lu, Wei; Tan, Shan
2016-01-01
Accurate tumor segmentation from PET images is crucial in many radiation oncology applications. Among others, the partial volume effect (PVE) is recognized as one of the most important factors degrading imaging quality and segmentation accuracy in PET. Taking into account that image restoration and tumor segmentation are tightly coupled and can promote each other, we proposed a variational method to solve both problems simultaneously in this study. The proposed method integrated total variation (TV) semi-blind deconvolution and Mumford-Shah segmentation with multiple regularizations. Unlike many existing energy minimization methods using either TV or L2 regularization, the proposed method employed TV regularization over tumor edges to preserve edge information, and L2 regularization inside tumor regions to preserve the smooth change of the metabolic uptake in a PET image. The blur kernel was modeled as an anisotropic Gaussian to address the resolution difference in the transverse and axial directions commonly seen in a clinical PET scanner. The energy functional was rephrased using the Γ-convergence approximation and was iteratively optimized using the alternating minimization (AM) algorithm. The performance of the proposed method was validated on a physical phantom and two clinical datasets with non-Hodgkin's lymphoma and esophageal cancer, respectively. Experimental results demonstrated that the proposed method had high performance for simultaneous image restoration, tumor segmentation and scanner blur kernel estimation. Particularly, the recovery coefficients (RC) of the restored images of the proposed method in the phantom study were close to 1, indicating an efficient recovery of the original blurred images; for segmentation the proposed method achieved average Dice similarity indexes (DSIs) of 0.79 and 0.80 for the two clinical datasets, respectively; and the relative errors of the estimated blur kernel widths were less than 19% in the transversal direction and 7% in the axial direction. PMID:28603407
Detection of Alzheimer's disease using group lasso SVM-based region selection
NASA Astrophysics Data System (ADS)
Sun, Zhuo; Fan, Yong; Lelieveldt, Boudewijn P. F.; van de Giessen, Martijn
2015-03-01
Alzheimer's disease (AD) is one of the most frequent forms of dementia and an increasingly challenging public health problem. In the last two decades, structural magnetic resonance imaging (MRI) has shown potential in distinguishing patients with Alzheimer's disease from elderly controls (CN). To obtain AD-specific biomarkers, previous research used either statistical testing to find statistically significantly different regions between the two clinical groups, or l1 sparse learning to select isolated features in the image domain. In this paper, we propose a new framework that uses structural MRI to simultaneously distinguish the two clinical groups and find the biomarkers of AD, using a group lasso support vector machine (SVM). The group lasso term (mixed l1-l2 norm) introduces anatomical information from the image domain into the feature domain, such that the resulting set of selected voxels is more meaningful than that of the l1 sparse SVM. Because of large inter-structure size variation, we introduce a group-specific normalization factor to deal with the structure size bias. Experiments have been performed on a well-designed AD vs. CN dataset to validate our method. Compared to the l1 sparse SVM approach, our method achieved better classification performance and a more meaningful biomarker selection. When we varied the training set, the regions selected by our method were more stable than those of the l1 sparse SVM. Classification experiments showed that our group normalization led to higher classification accuracy with fewer selected regions than the non-normalized method. Compared to state-of-the-art AD vs. CN classification methods, our approach not only obtains high accuracy on the same dataset but, more importantly, simultaneously finds the brain anatomies that are closely related to the disease.
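A hedged sketch of the ingredient that distinguishes the group lasso from the plain l1 penalty: the block (group) soft-thresholding step associated with the mixed l1-l2 norm. The groups, weights and lam below are illustrative placeholders, not the paper's anatomical regions.

```python
# Hedged sketch: proximal (block soft-thresholding) step of a group-lasso penalty.
import numpy as np

def group_soft_threshold(w, groups, lam):
    """Prox of lam * sum_g ||w_g||_2: shrink each group's block as a whole."""
    out = w.copy()
    for g in groups:
        norm_g = np.linalg.norm(w[g])
        scale = max(0.0, 1.0 - lam / norm_g) if norm_g > 0 else 0.0
        out[g] = scale * w[g]            # the whole group is kept or dropped together
    return out

# toy usage: three "regions" of 4 voxels each
w = np.array([0.9, 1.1, 0.8, 1.0,  0.05, -0.02, 0.03, 0.01,  0.4, -0.5, 0.3, 0.6])
groups = [range(0, 4), range(4, 8), range(8, 12)]
print(group_soft_threshold(w, groups, lam=0.5))
```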
Chromotomography for a rotating-prism instrument using backprojection, then filtering.
Deming, Ross W
2006-08-01
A simple closed-form solution is derived for reconstructing a 3D spatial-chromatic image cube from a set of chromatically dispersed 2D image frames. The algorithm is tailored for a particular instrument in which the dispersion element is a matching set of mechanically rotated direct vision prisms positioned between a lens and a focal plane array. By using a linear operator formalism to derive the Tikhonov-regularized pseudoinverse operator, it is found that the unique minimum-norm solution is obtained by applying the adjoint operator, followed by 1D filtering with respect to the chromatic variable. Thus the filtering and backprojection (adjoint) steps are applied in reverse order relative to an existing method. Computational efficiency is provided by use of the fast Fourier transform in the filtering step.
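As a small illustration of the general construction referred to above (not the instrument-specific adjoint-then-filter factorization), a Tikhonov-regularized pseudoinverse can be written directly; the matrix A, data y and regularization weight alpha are placeholders.

```python
# Hedged sketch: Tikhonov-regularized pseudoinverse for a generic linear model y = A x.
import numpy as np

def tikhonov_pinv(A, y, alpha=1e-2):
    """Regularized minimum-norm solution x = (A^T A + alpha I)^(-1) A^T y."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ y)

# toy usage
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 30))
x_true = rng.standard_normal(30)
y = A @ x_true + 0.05 * rng.standard_normal(50)
x_hat = tikhonov_pinv(A, y, alpha=0.1)
```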
ERIC Educational Resources Information Center
de Zeeuw, Marlies; Schreuder, Rob; Verhoeven, Ludo
2013-01-01
We investigated written word identification of regular and irregular past-tense verb forms by first (L1) and second language (L2) learners of Dutch in third and sixth grade. Using a lexical decision task, we measured speed and accuracy in the identification of regular and irregular past-tense verb forms by children from Turkish-speaking homes (L2…
ERIC Educational Resources Information Center
Sesep, N'Sial Bal-Nsien
A study explored, from a sociolinguistic perspective, the phenomenon of indoubill, patterns and usage of a special variety of Lingala, among a group of delinquent urban youth in Kinshasa (Zaire). It is proposed that: (1) at a particular moment in its social history, the community experienced sociocultural change that brought with it a special…
Structured sparse linear graph embedding.
Wang, Haixian
2012-03-01
Subspace learning is a core issue in pattern recognition and machine learning. Linear graph embedding (LGE) is a general framework for subspace learning. In this paper, we propose a structured sparse extension to LGE (SSLGE) by introducing a structured sparsity-inducing norm into LGE. Specifically, SSLGE casts the projection bases learning into a regression-type optimization problem, and the structured sparsity regularization is then applied to the regression coefficients. The regularization selects a subset of features and meanwhile encodes high-order information reflecting a priori structural information about the data. The SSLGE technique provides a unified framework for discovering structured sparse subspaces. Computationally, by using a variational equality and the Procrustes transformation, SSLGE is efficiently solved with closed-form updates. Experimental results on face images show the effectiveness of the proposed method. Copyright © 2011 Elsevier Ltd. All rights reserved.
Sparsity-promoting inversion for modeling of irregular volcanic deformation source
NASA Astrophysics Data System (ADS)
Zhai, G.; Shirzaei, M.
2016-12-01
Kīlauea volcano, Hawai'i Island, has a complex magmatic system. Nonetheless, kinematic models of the summit reservoir have so far been limited to first-order analytical solutions with pre-determined geometry. To investigate the complex geometry and kinematics of the summit reservoir, we apply a multitrack multitemporal wavelet-based InSAR (Interferometric Synthetic Aperture Radar) algorithm and a geometry-free time-dependent modeling scheme considering a superposition of point centers of dilatation (PCDs). Applying Principal Component Analysis (PCA) to the time-dependent source model, six spatially independent deformation zones (i.e., reservoirs) are identified, whose locations are consistent with previous studies. The time-dependence of the model also allows identifying periods of correlated or anti-correlated behaviors between reservoirs. Hence, we suggest that the reservoirs are likely connected and form a complex magmatic reservoir [Zhai and Shirzaei, 2016]. To obtain a physically meaningful representation of the complex reservoir, we devise a new sparsity-promoting modeling scheme assuming active magma bodies are well-localized melt accumulations (i.e., outliers in the background crust). The major steps include inverting surface deformation data using a hybrid L-1 and L-2 norm regularization approach to solve for a sparse volume change distribution, and then implementing a BEM-based method to solve for the opening distribution on a triangular mesh representing the complex reservoir. Using this approach, we are able to constrain the internal excess pressure of a magma body with irregular geometry, satisfying a uniformly pressurized boundary condition on the surface of the magma chamber. The inversion method with sparsity constraint is tested using five synthetic source geometries, including a torus, a prolate ellipsoid, and a sphere, as well as horizontal and vertical L-shaped bodies. The results show that source dimension, depth and shape are well recovered. Afterward, we apply this modeling scheme to deformation observed at the Kīlauea summit to constrain the magmatic source geometry, and revise the kinematics of Kīlauea's shallow plumbing system. Such a model is valuable for understanding the physical processes in a magmatic reservoir, and the method can readily be applied to other volcanic settings.
Kurasawa, Hisashi; Hayashi, Katsuyoshi; Fujino, Akinori; Takasugi, Koichi; Haga, Tsuneyuki; Waki, Kayo; Noguchi, Takashi; Ohe, Kazuhiko
2015-01-01
Background: About 10% of patients with diabetes discontinue treatment, resulting in the progression of diabetes-related complications and reduced quality of life. Objective: The objective was to predict a missed clinical appointment (MA), which can lead to discontinued treatment for diabetes patients. Methods: A machine-learning algorithm was used to build a logistic regression model for MA predictions, with L2-norm regularization used to avoid over-fitting and 10-fold cross validation used to evaluate prediction performance. Data associated with patient MAs were extracted from electronic medical records and classified into two groups: one related to patients’ clinical condition (X1) and the other related to previous findings (X2). The records used were those of the University of Tokyo Hospital, and they included the history of 16 026 clinical appointments scheduled by 879 patients whose initial clinical visit had been made after January 1, 2004, who had diagnostic codes indicating diabetes, and whose HbA1c had been tested within 3 months after their initial visit. Records between April 1, 2011, and June 30, 2014, were inspected for a history of MAs. Results: The best predictor of MAs proved to be X1 + X2 (AUC = 0.958); precision and recall rates were, respectively, 0.757 and 0.659. Among all the appointment data, the day of the week when an appointment was made was most strongly associated with MA predictions (weight = 2.22). Conclusions: Our findings may provide information to help clinicians make timely interventions to avoid MAs. PMID:26555782
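A small sketch of the modeling setup the abstract describes, L2-regularized logistic regression evaluated with 10-fold cross-validation; the feature matrix X (standing in for the X1 + X2 feature groups) and the labels y below are synthetic placeholders, not the hospital data.

```python
# Hedged sketch: L2-regularized logistic regression with 10-fold cross-validation.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.standard_normal((500, 20))                            # stand-in for X1 + X2 features
y = (X[:, 0] + 0.5 * X[:, 1] + rng.standard_normal(500) > 0).astype(int)

clf = LogisticRegression(penalty="l2", C=1.0, max_iter=1000)  # C is 1 / regularization strength
auc = cross_val_score(clf, X, y, cv=10, scoring="roc_auc")    # 10-fold CV, AUC as in the paper
print(auc.mean())
```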
Dissipative structure and global existence in critical space for Timoshenko system of memory type
NASA Astrophysics Data System (ADS)
Mori, Naofumi
2018-08-01
In this paper, we consider the initial value problem for the Timoshenko system with a memory term in the one-dimensional whole space. First, we consider the linearized system: applying the energy method in the Fourier space, we derive a pointwise estimate of the solution in the Fourier space, which first gives the optimal decay estimate of the solution. Next, we give a characterization of the dissipative structure of the system by using spectral analysis, which confirms that our pointwise estimate is optimal. Second, we consider the nonlinear system: we show that the global-in-time existence and uniqueness result can be proved under a minimal regularity assumption in the critical Sobolev space H2. In the proof, unlike recent works, we do not need any time-weighted norm; we use just an energy method, which is improved to overcome the difficulties caused by the regularity-loss property of the Timoshenko system.
Approximate Separability of Green’s Function for High Frequency Helmholtz Equations
2014-09-01
is highly separable (Theorem 2.8) for two disjoint domains $X, Y$ based on a key gradient estimate by the Caccioppoli inequality. The method and result can be... bounded by (17) $1 \geq \langle \hat{G}(\cdot, y_1), \hat{G}(\cdot, y_2) \rangle_X \geq \tilde{K}^{-2}$, $\tilde{K} = \frac{C(d,\lambda,\mu)}{c(d,\lambda,\mu)} \left[ 1 + \frac{r}{\rho} \right]^{d-2}$. Also, the Caccioppoli inequality gives an $L^2$ norm bound of the... $(1-\epsilon^2) \sum_{m=1}^{n_{h_k}} \lambda_m \geq (1-\epsilon^2)\, c\, n_{h_k}$. Hence inequality (60) is replaced by the following: (75) $\sum_{m=1}^{n_{h_k}} \lambda_m^2 > N$
Optimal application of Morrison's iterative noise removal for deconvolution. Appendices
NASA Technical Reports Server (NTRS)
Ioup, George E.; Ioup, Juliette W.
1987-01-01
Morrison's iterative method of noise removal, or Morrison's smoothing, is applied in a simulation to noise-added data sets of various noise levels to determine its optimum use. Morrison's smoothing is applied for noise removal alone, and for noise removal prior to deconvolution. For the latter, an accurate method is analyzed to provide confidence in the optimization. The method consists of convolving the data with an inverse filter calculated by taking the inverse discrete Fourier transform of the reciprocal of the transform of the response of the system. Filters of various lengths are calculated for the narrow and wide Gaussian response functions used. Deconvolution of non-noisy data is performed, and the error in each deconvolution is calculated. Plots of error versus filter length are produced, and from these plots the most accurate filter lengths are determined. The statistical methodologies employed in the optimizations of Morrison's method are similar. A typical peak-type input is selected and convolved with the two response functions to produce the data sets to be analyzed. Both constant and ordinate-dependent Gaussian-distributed noise are added to the data, where the noise levels of the data are characterized by their signal-to-noise ratios. The error measures employed in the optimizations are the L1 and L2 norms. Results of the optimizations for both Gaussians, both noise types, and both norms include figures of optimum iteration number and error improvement versus signal-to-noise ratio, and tables of results. The statistical variation of all quantities considered is also given.
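A minimal sketch of the inverse-filter deconvolution step described above: the filter comes from the reciprocal of the DFT of the system response and is applied to the data in the Fourier domain (equivalent to convolving the data with the inverse filter). The small constant eps is an added guard against division by near-zero bins and is not part of the original description; all variable names are illustrative.

```python
# Hedged sketch: FFT-based inverse-filter deconvolution.
import numpy as np

def inverse_filter_deconvolve(data, response, eps=1e-3):
    n = len(data)
    H = np.fft.fft(response, n)
    # regularized reciprocal of the transfer function; eps guards near-zero bins
    H_inv = np.conj(H) / (np.abs(H)**2 + eps)
    return np.real(np.fft.ifft(np.fft.fft(data, n) * H_inv))

# toy usage: a peak blurred (circularly) by a Gaussian response, then restored
x = np.linspace(-1, 1, 256)
response = np.exp(-x**2 / 0.01); response /= response.sum()
signal = np.exp(-(x - 0.2)**2 / 0.002)
blurred = np.real(np.fft.ifft(np.fft.fft(signal) * np.fft.fft(response)))
restored = inverse_filter_deconvolve(blurred, response)
```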
Airman Classification Batteries from 1948 to 1975: A Review and Evaluation
1975-12-01
Joseph L. Weeks, Cecil J. Mullins, Bart M. Vitola. ...norms were developed by the equi-percentile method based on a random sample of 3,936 basic trainees (Vitola, Massey, & Wilbourn, 1971). The TALENT test... 661 996. Lackland AFB, Tex.: Personnel Research Laboratory, Aerospace Medical Division, August 1967. Vitola, B.M., Massey, I.H., & Wilbourn, J.M.
Detection of ultratrace phosphorus and sulfur by quadrupole ICPMS with dynamic reaction cell.
Bandura, Dmitry R; Baranov, Vladimir I; Tanner, Scott D
2002-04-01
A method of detection of ultratrace phosphorus and sulfur that uses reaction with O2 in a dynamic reaction cell (DRC) to oxidize S+ and P+ to allow their detection as SO+ and PO+ is described. The method reduces the effect of polyatomic isobaric interferences at m/z = 31 and 32 by detecting P+ and S+ as the product oxide ions, which are less interfered. Use of an axial field in the DRC improves transmission of the product oxide ions 4-6 times. With no axial field, detection limits (3σ, 5-s integration) of 0.20 and 0.52 ng/mL, with background equivalent concentrations of 0.53 and 4.8 ng/mL, respectively, are achieved. At an optimum axial field potential (200 V), the detection limits are 0.06 ng/mL for P and 0.2 ng/mL for S, respectively. The method is used for determining the degree of phosphorylation of beta-casein, and of regular and dephosphorylated alpha-caseins, at 10-1000 fmol/μL concentration, with 5-10% v/v organic sample matrix (acetonitrile, formic acid, ammonium bicarbonate). The measured degrees of phosphorylation for beta-casein (4.9 phosphorus atoms/molecule) and regular alpha-casein (8.8 phosphorus atoms/molecule) are in good agreement with the structural data for the proteins. The P/S ratio for regular alpha-casein (1.58) is in good agreement with the ratio of the number of phosphorylation sites to the number of sulfur-containing amino acid residues cysteine and methionine. The P/S ratio for commercially available dephosphorylated alpha-casein is measured at 0.41 (approximately 26% residual phosphate).
Assessing and improving quality of life in patients with head and neck cancer.
Singer, Susanne; Langendijk, Johannes; Yarom, Noam
2013-01-01
Health-related quality of life (QoL) indicates the patients' perception of their health. It depends not only on disease- and treatment-related factors but also on complex inter-relationships of expectations, values and norms, psychologic distress, and comparison with other patients. This article introduces methods and challenges of QoL assessment in patients with head and neck cancer, as well as ways to overcome measurement problems and ways to improve their QoL.
Regularity Aspects in Inverse Musculoskeletal Biomechanics
NASA Astrophysics Data System (ADS)
Lund, Marie; Ståhl, Fredrik; Gulliksson, Mårten
2008-09-01
Inverse simulations of musculoskeletal models compute the internal forces, such as muscle and joint reaction forces, which are hard to measure, using the more easily measured motion and external forces as input data. Because of the difficulties of measuring muscle forces and joint reactions, simulations are hard to validate. One way of reducing errors in the simulations is to ensure that the mathematical problem is well-posed. This paper presents a study of regularity aspects for an inverse simulation method, often called forward dynamics or dynamical optimization, that takes into account both measurement errors and muscle dynamics. Regularity is examined for a test problem around the optimum using the approximated quadratic problem. The results show improved rank when a regularization term that handles the mechanical over-determinacy is included in the objective. Using the 3-element Hill muscle model, the chosen regularization term is the norm of the activation. To make the problem full rank, only the excitation bounds should be included in the constraints. However, this results in small negative values of the activation, which indicates that muscles are pushing rather than pulling; this is unrealistic, but the error may be small enough to be accepted for specific applications. These results are a first step toward ensuring better results of inverse musculoskeletal simulations from a numerical point of view.
Perinetti, Giuseppe; Bianchet, Alberto; Franchi, Lorenzo; Contardo, Luca
2017-05-01
To date, little information is available regarding individual cervical vertebral maturation (CVM) morphologic changes. Moreover, contrasting results regarding the repeatability of the CVM method call for the use of objective and transparent reporting procedures. In this study, we used a rigorous morphometric objective CVM code staging system, called the "CVM code," which was applied to a 6-year longitudinal circumpubertal analysis of individual CVM morphologic changes to find cases outside the reported norms and analyze individual maturation processes. From the files of the Oregon Growth Study, 32 subjects (17 boys, 15 girls) with 6 annual lateral cephalograms taken from 10 to 16 years of age were included, for a total of 221 recordings. A customized cephalometric analysis was used, and each recording was converted into a CVM code according to the concavities of cervical vertebrae (C) C2 through C4 and the shapes of C3 and C4. The retrieved CVM codes, either falling within the reported norms (regular cases) or not (exception cases), were also converted into the CVM stages. Overall, 31 exception cases (14%) were seen, with most of them occurring at pubertal CVM stage 4. The overall durations of the CVM stages 2 to 4 were about 1 year, even though only 4 subjects had regular annual durations of CVM stages 2 to 5. Whereas the overall CVM changes are consistent with previous reports, intersubject variability must be considered when dealing with individual treatment timing. Future research on CVM may take advantage of the CVM code system. Copyright © 2017 American Association of Orthodontists. Published by Elsevier Inc. All rights reserved.
Bilinear Factor Matrix Norm Minimization for Robust PCA: Algorithms and Applications.
Shang, Fanhua; Cheng, James; Liu, Yuanyuan; Luo, Zhi-Quan; Lin, Zhouchen
2017-09-04
The heavy-tailed distributions of corrupted outliers and of the singular values of all channels in low-level vision have proven to be effective priors for many applications such as background modeling, photometric stereo and image alignment, and they can be well modeled by a hyper-Laplacian. However, the use of such distributions generally leads to challenging non-convex, non-smooth and non-Lipschitz problems, and makes existing algorithms very slow for large-scale applications. Building on the analytic solutions to Lp-norm minimization for two specific values of p, i.e., p=1/2 and p=2/3, we propose two novel bilinear factor matrix norm minimization models for robust principal component analysis. We first define the double nuclear norm and Frobenius/nuclear hybrid norm penalties, and then prove that they are in essence the Schatten-1/2 and 2/3 quasi-norms, respectively, which lead to much more tractable and scalable Lipschitz optimization problems. Our experimental analysis shows that both of our methods yield more accurate solutions than the original Schatten quasi-norm minimization, even when the number of observations is very limited. Finally, we apply our penalties to various low-level vision problems, e.g., moving object detection, image alignment and inpainting, and show that our methods usually outperform the state-of-the-art methods.
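The bilinear penalties above build on the classical variational characterization of the nuclear norm, ||X||_* = min_{X=UV^T} (1/2)(||U||_F^2 + ||V||_F^2). The snippet below simply verifies this well-known identity numerically; it is background for the double nuclear and Frobenius/nuclear hybrid penalties, not an implementation of them.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((8, 5)) @ rng.standard_normal((5, 6))  # rank <= 5 matrix

# Nuclear norm = sum of singular values.
nuclear = np.linalg.svd(X, compute_uv=False).sum()

# A minimizing factorization X = U V^T splits the singular values evenly
# between the two factors.
U_svd, s, Vt = np.linalg.svd(X, full_matrices=False)
U = U_svd * np.sqrt(s)          # shape (8, 5)
V = Vt.T * np.sqrt(s)           # shape (6, 5)

bilinear = 0.5 * (np.linalg.norm(U, 'fro')**2 + np.linalg.norm(V, 'fro')**2)
print(np.allclose(X, U @ V.T), nuclear, bilinear)   # True, and the two values agree
```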
ERIC Educational Resources Information Center
Thompson, Patricia
Tests are described that were given to 1,000 students randomly selected at grade 7-9 levels with an equal representation from both sexes. Participants were selected from two junior high schools in North York for a study comparing students in a regular physical education program to those in a program to develop cardiovascular endurance. The first…
Convergence of Proximal Iteratively Reweighted Nuclear Norm Algorithm for Image Processing.
Sun, Tao; Jiang, Hao; Cheng, Lizhi
2017-08-25
Nonsmooth and nonconvex regularization has many applications in imaging science and machine learning research due to its excellent recovery performance. A proximal iteratively reweighted nuclear norm algorithm has been proposed for nonsmooth and nonconvex matrix minimization problems. In this paper, we investigate the convergence of this algorithm. Using the Kurdyka-Łojasiewicz property, we prove that the algorithm converges globally to a critical point of the objective function. The numerical results presented in this paper are consistent with our theoretical findings.
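A generic single iteration of a proximal iteratively reweighted nuclear norm scheme (a gradient step followed by weighted singular value thresholding, with weights derived from a smoothed Schatten-p surrogate) might look like the sketch below; the step size, p, epsilon, and the matrix-completion objective are assumptions, and this is not the specific algorithm analysed in the paper.

```python
import numpy as np

def irnn_step(X, grad_f, step, lam, p=0.5, eps=1e-3):
    """One generic proximal iteratively reweighted nuclear norm update.

    Approximately minimizes f(X) + lam * sum_i (sigma_i + eps)^p by linearizing
    the concave spectral penalty (weights w_i = p*(sigma_i + eps)^(p-1)) and
    applying weighted singular value soft-thresholding to a gradient step.
    """
    G = X - step * grad_f(X)                     # forward (gradient) step
    U, s, Vt = np.linalg.svd(G, full_matrices=False)
    w = p * (s + eps) ** (p - 1)                 # weights from the current spectrum
    s_new = np.maximum(s - step * lam * w, 0.0)  # weighted soft-thresholding
    return U @ np.diag(s_new) @ Vt

# Toy matrix-completion objective f(X) = 0.5*||mask*(X - M)||_F^2.
rng = np.random.default_rng(2)
M = rng.standard_normal((20, 3)) @ rng.standard_normal((3, 20))
mask = rng.random(M.shape) < 0.5
grad = lambda X: mask * (X - M)

X = np.zeros_like(M)
for _ in range(200):
    X = irnn_step(X, grad, step=1.0, lam=0.5)
print("relative error: %.3f" % (np.linalg.norm(X - M) / np.linalg.norm(M)))
```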
NASA Astrophysics Data System (ADS)
Hintermüller, Michael; Holler, Martin; Papafitsoros, Kostas
2018-06-01
In this work, we introduce a function space setting for a wide class of structural/weighted total variation (TV) regularization methods motivated by their applications in inverse problems. In particular, we consider a regularizer that is the appropriate lower semi-continuous envelope (relaxation) of a suitable TV type functional initially defined for sufficiently smooth functions. We study examples where this relaxation can be expressed explicitly, and we also provide refinements for weighted TV for a wide range of weights. Since an integral characterization of the relaxation in function space is, in general, not always available, we show that, for a rather general linear inverse problems setting, instead of the classical Tikhonov regularization problem, one can equivalently solve a saddle-point problem where no a priori knowledge of an explicit formulation of the structural TV functional is needed. In particular, motivated by concrete applications, we deduce corresponding results for linear inverse problems with norm and Poisson log-likelihood data discrepancy terms. Finally, we provide proof-of-concept numerical examples where we solve the saddle-point problem for weighted TV denoising as well as for MR guided PET image reconstruction.
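For the weighted-TV denoising example mentioned at the end, a textbook primal-dual (Chambolle-Pock type) iteration for min_u 0.5*||u - f||^2 + lam*sum_i w_i*|grad u|_i can be sketched as follows; this is offered for orientation only and is not the saddle-point formulation derived in the paper.

```python
import numpy as np

def grad(u):
    gx = np.zeros_like(u); gy = np.zeros_like(u)
    gx[:-1, :] = u[1:, :] - u[:-1, :]      # forward differences, Neumann boundary
    gy[:, :-1] = u[:, 1:] - u[:, :-1]
    return gx, gy

def div(px, py):                            # negative adjoint of grad
    d = np.zeros_like(px)
    d[:-1, :] += px[:-1, :]; d[1:, :] -= px[:-1, :]
    d[:, :-1] += py[:, :-1]; d[:, 1:] -= py[:, :-1]
    return d

def weighted_tv_denoise(f, w, lam=1.0, n_iter=300):
    """Primal-dual iteration for 0.5*||u-f||^2 + lam*sum_i w_i*|grad u|_i (w > 0)."""
    u = f.copy(); u_bar = f.copy()
    px = np.zeros_like(f); py = np.zeros_like(f)
    tau = sigma = 1.0 / np.sqrt(8.0)        # tau*sigma*||grad||^2 <= 1
    for _ in range(n_iter):
        gx, gy = grad(u_bar)
        px += sigma * gx; py += sigma * gy
        scale = np.maximum(1.0, np.sqrt(px**2 + py**2) / (lam * w))  # project |p_i| <= lam*w_i
        px /= scale; py /= scale
        u_old = u
        u = (u + tau * div(px, py) + tau * f) / (1.0 + tau)          # prox of 0.5*||u-f||^2
        u_bar = 2.0 * u - u_old
    return u

# Usage: constant weights recover ordinary TV denoising.
rng = np.random.default_rng(3)
clean = np.zeros((64, 64)); clean[16:48, 16:48] = 1.0
noisy = clean + 0.2 * rng.standard_normal(clean.shape)
denoised = weighted_tv_denoise(noisy, w=np.ones_like(noisy), lam=0.15)
```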
ERIC Educational Resources Information Center
Babcock, Laura; Stowe, John C.; Maloof, Christopher J.; Brovetto, Claudia; Ullman, Michael T.
2012-01-01
It remains unclear whether adult-learned second language (L2) depends on similar or different neurocognitive mechanisms as those involved in first language (L1). We examined whether English past tense forms are computed similarly or differently by L1 and L2 English speakers, and what factors might affect this: regularity (regular vs. irregular…
Changing basal conditions during the speed-up of Jakobshavn Isbræ, Greenland
NASA Astrophysics Data System (ADS)
Habermann, M.; Truffer, M.; Maxwell, D.
2013-06-01
Ice-sheet outlet glaciers can undergo dynamic changes such as the rapid speed-up of Jakobshavn Isbræ following the disintegration of its floating ice tongue. These changes are associated with stress changes on the boundary of the ice mass. We investigate the basal conditions throughout a well-observed period of rapid change and evaluate parameterizations currently used in ice-sheet models. A Tikhonov inverse method with a Shallow Shelf Approximation forward model is used for diagnostic inversions for the years 1985, 2000, 2005, 2006 and 2008. Our ice softness, model norm, and regularization parameter choices are justified using the data-model misfit metric and the L-curve method. The sensitivity of the inversion results to these parameter choices is explored. We find a lowering of basal yield stress in the first 7 km of the 2008 grounding line and no significant changes higher upstream. The temporal evolution in the fast flow area is in broad agreement with a Mohr-Coulomb parameterization of basal shear stress, but with a till friction angle much lower than has been measured for till samples. The lowering of basal yield stress is significant within the uncertainties of the inversion, but it cannot be ruled out that there are other significant contributors to the acceleration of the glacier.
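The L-curve criterion used for the parameter choices here is generic. A minimal sketch for a linear Tikhonov problem, plotting log residual norm against log solution norm over a grid of regularization parameters and picking the point of maximum curvature, might look as follows; the forward operator and data are purely illustrative.

```python
import numpy as np

# Illustrative L-curve for Tikhonov regularization of a linear problem A x = b.
rng = np.random.default_rng(4)
A = rng.standard_normal((60, 40)) @ np.diag(np.logspace(0, -4, 40))  # ill-conditioned
x_true = rng.standard_normal(40)
b = A @ x_true + 1e-3 * rng.standard_normal(60)

lams = np.logspace(-8, 1, 60)
res_norm, sol_norm = [], []
for lam in lams:
    x = np.linalg.solve(A.T @ A + lam * np.eye(40), A.T @ b)
    res_norm.append(np.linalg.norm(A @ x - b))
    sol_norm.append(np.linalg.norm(x))

# Corner of the L-curve: maximum curvature of (log residual norm, log solution norm).
r, s = np.log(res_norm), np.log(sol_norm)
dr, ds = np.gradient(r), np.gradient(s)
d2r, d2s = np.gradient(dr), np.gradient(ds)
curvature = (dr * d2s - ds * d2r) / (dr**2 + ds**2) ** 1.5
best_lam = lams[np.nanargmax(np.abs(curvature))]   # rough numerical corner estimate
print("L-curve corner at lambda ~ %.2e" % best_lam)
```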
NASA Astrophysics Data System (ADS)
Petržala, Jaromír
2018-07-01
The knowledge of the emission function of a city is crucial for simulation of sky glow in its vicinity. Indirect methods to obtain this function from radiances measured over a part of the sky have recently been developed. In principle, such methods represent an ill-posed inverse problem. This paper presents a theoretical feasibility study of various approaches to solving the given inverse problem; in particular, it tests the suitability of various stabilizing functionals within Tikhonov regularization. Further, the L-curve and generalized cross-validation methods were investigated as indicators of an optimal regularization parameter. First, we created a theoretical model that expresses the sky spectral radiance as a functional of the emission spectral radiance. Subsequently, all the mentioned approaches were examined in numerical experiments with synthetic data generated for a fictitious city and contaminated with random errors. The results demonstrate that second-order Tikhonov regularization, together with the regularization parameter chosen by the L-curve maximum-curvature criterion, provides solutions in good agreement with the assumed model emission functions.
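As background for the second-order stabilizer, second-order Tikhonov regularization of a discretized linear problem penalizes the discrete second derivative of the unknown. A minimal sketch with a generic smoothing operator (not the sky-radiance model of the paper) is:

```python
import numpy as np

def second_order_tikhonov(A, b, lam):
    """Solve min ||A x - b||^2 + lam^2 ||L2 x||^2, L2 = second-difference operator."""
    n = A.shape[1]
    L2 = np.zeros((n - 2, n))
    for i in range(n - 2):
        L2[i, i:i + 3] = [1.0, -2.0, 1.0]      # discrete second derivative
    # Stacked least-squares formulation [A; lam*L2] x ~ [b; 0].
    A_aug = np.vstack([A, lam * L2])
    b_aug = np.concatenate([b, np.zeros(n - 2)])
    return np.linalg.lstsq(A_aug, b_aug, rcond=None)[0]

# Tiny usage example with a smoothing (ill-posed) forward operator.
rng = np.random.default_rng(5)
n = 50
t = np.linspace(0, 1, n)
A = np.exp(-((t[:, None] - t[None, :]) ** 2) / 0.01)    # Gaussian blur matrix
x_true = np.sin(2 * np.pi * t)
b = A @ x_true + 1e-2 * rng.standard_normal(n)
x_rec = second_order_tikhonov(A, b, lam=1e-1)
```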
Evolution equation for quantum coherence
Hu, Ming-Liang; Fan, Heng
2016-01-01
The estimation of the decoherence process of an open quantum system is of both theoretical significance and experimental appeal. Practically, the decoherence can be easily estimated if the coherence evolution satisfies some simple relations. We introduce a framework for studying the evolution equation of coherence. Based on this framework, we prove a simple factorization relation (FR) for the l1 norm of coherence and identify the sets of quantum channels for which this FR holds. Using this FR, we further determine the condition on the transformation matrix of the quantum channel that can support permanent freezing of the l1 norm of coherence. We finally reveal the universality of this FR by showing that it holds for many other related coherence and quantum correlation measures. PMID:27382933
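The l1 norm of coherence referred to here has a simple closed form in the chosen reference basis, C_l1(rho) = sum over i != j of |rho_ij|. A small sketch computing it for a single-qubit state (the example state is arbitrary) is:

```python
import numpy as np

def l1_coherence(rho):
    """C_l1(rho) = sum of absolute values of the off-diagonal elements."""
    return np.abs(rho).sum() - np.abs(np.diag(rho)).sum()

# Example: a single-qubit state with some coherence in the computational basis.
rho = np.array([[0.6, 0.3 - 0.1j],
                [0.3 + 0.1j, 0.4]])
print(l1_coherence(rho))   # 2*|0.3 - 0.1j| ~ 0.632
```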
Kunst, Laura E; Gebhardt, Winifred A
2018-04-18
Recent developments in drug use patterns call for an investigation of current party-drug use and associated problems among college students, who appear to be an important target population for harm reduction interventions. In addition to reporting on party-drug use prevalence, we investigated whether initial use and continuation of party-drug use among students was associated with demographic, personality and psychosocial factors. An online questionnaire was administered to 446 students from a Dutch university, inquiring about party-drug use, demographic characteristics, social norms and personality (big five, impulsiveness, aggression). Univariate and multivariate bootstrapped linear regression analyses were used. Of all students, 22.9% indicated having used party-drugs at least once, with a notable sex difference (39.2% of men vs. 16.2% of women). In contrast to the reported trends in Dutch nightlife, GHB was used rarely (lifetime 1.6%) and new psychoactive substances (NPS; 6.7%) appeared almost equally popular as amphetamines (7.6%) and cocaine (7%). Mild health/psychosocial problems (e.g., doing embarrassing things, feeling unwell) were common (65%), whereas serious problems (e.g., being hospitalized) were rare. Neuroticism, extraversion, conscientiousness and impulsiveness were associated with lifetime but not regular party-drug use. Of all predictors, lifetime and regular party-drug use were most strongly related to lenient injunctive and descriptive norms in friends, and a low motivation to comply with parents. Our findings indicate that harm reduction/preventive interventions might profit from focusing on social norms, and targeting students who are highly involved in a pro-party-drug environment while experiencing less parental influence.
Xie, Yuanlong; Tang, Xiaoqi; Song, Bao; Zhou, Xiangdong; Guo, Yixuan
2018-04-01
In this paper, data-driven adaptive fractional order proportional integral (AFOPI) control is presented for a permanent magnet synchronous motor (PMSM) servo system perturbed by measurement noise and data dropouts. The proposed method directly exploits the closed-loop process data for the AFOPI controller design under unknown noise distribution and data missing probability. Firstly, the proposed method formulates the AFOPI controller tuning problem as a parameter identification problem using the modified lp-norm virtual reference feedback tuning (VRFT). Then, iteratively reweighted least squares is integrated into the lp-norm VRFT to give a consistent compensation solution for the AFOPI controller. The measurement noise and data dropouts are estimated and eliminated by feedback compensation periodically, so that the AFOPI controller is updated online to accommodate the time-varying operating conditions. Moreover, convergence and stability are guaranteed by mathematical analysis. Finally, the effectiveness of the proposed method is demonstrated in both simulations and experiments on a practical PMSM servo system. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.
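The iteratively reweighted least squares ingredient is standard: to minimize an lp criterion sum_i |r_i|^p with p < 2, one repeatedly solves a weighted least-squares problem with weights w_i = |r_i|^(p-2), smoothed near zero. The sketch below shows this generic IRLS loop on a linear parameter-identification problem; it is not the authors' VRFT formulation, and the data, p, and smoothing constant are assumptions.

```python
import numpy as np

def irls_lp(Phi, y, p=1.2, n_iter=50, eps=1e-6):
    """Minimize sum_i |y_i - Phi_i theta|^p via iteratively reweighted least squares."""
    theta = np.linalg.lstsq(Phi, y, rcond=None)[0]        # L2 initialization
    for _ in range(n_iter):
        r = y - Phi @ theta
        w = (np.abs(r) + eps) ** (p - 2)                  # IRLS weights
        sw = np.sqrt(w)
        theta = np.linalg.lstsq(sw[:, None] * Phi, sw * y, rcond=None)[0]
    return theta

# Usage on a toy identification problem with a few large outliers in the data.
rng = np.random.default_rng(6)
Phi = rng.standard_normal((200, 3))
theta_true = np.array([1.0, -2.0, 0.5])
y = Phi @ theta_true + 0.05 * rng.standard_normal(200)
y[::25] += 5.0                                            # occasional dropout-like outliers
print(irls_lp(Phi, y, p=1.2))
```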
NASA Astrophysics Data System (ADS)
Bright, Ido; Lin, Guang; Kutz, J. Nathan
2013-12-01
Compressive sensing is used to determine the flow characteristics around a cylinder (Reynolds number and pressure/flow field) from a sparse number of pressure measurements on the cylinder. Using a supervised machine learning strategy, library elements encoding the dimensionally reduced dynamics are computed for various Reynolds numbers. Convex L1 optimization is then used with a limited number of pressure measurements on the cylinder to reconstruct, or decode, the full pressure field and the resulting flow field around the cylinder. Aside from the highly turbulent regime (large Reynolds number) where only the Reynolds number can be identified, accurate reconstruction of the pressure field and Reynolds number is achieved. The proposed data-driven strategy thus achieves encoding of the fluid dynamics using the L2 norm, and robust decoding (flow field reconstruction) using the sparsity promoting L1 norm.
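The decoding step described above amounts to standard sparse recovery: given a library of modes and a sampling matrix that keeps only a few surface locations, one solves a sparsity-promoting L1 problem for the coefficient vector. A hedged sketch using a basic iterative soft-thresholding (ISTA) loop is given below; the library, measurements, and sparsity level are synthetic stand-ins, not the flow library of the paper.

```python
import numpy as np

def ista_l1(A, y, lam=0.05, n_iter=500):
    """Solve min_s 0.5*||A s - y||^2 + lam*||s||_1 by iterative soft-thresholding."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2
    s = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = s - step * A.T @ (A @ s - y)
        s = np.sign(g) * np.maximum(np.abs(g) - step * lam, 0.0)
    return s

# Synthetic stand-in: a "library" of 100 modes, a field that is sparse in it,
# and only 20 point measurements (rows of a sampling matrix).
rng = np.random.default_rng(7)
Psi = rng.standard_normal((400, 100))          # library of modes (columns)
coeffs = np.zeros(100); coeffs[[3, 40, 77]] = [1.0, -0.7, 0.4]
field = Psi @ coeffs
sample_idx = rng.choice(400, size=20, replace=False)
y = field[sample_idx]                          # sparse point measurements
A = Psi[sample_idx, :]                         # measurement matrix
s_hat = ista_l1(A, y)
print("indices of the three largest recovered coefficients:",
      np.argsort(-np.abs(s_hat))[:3])
```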
ERIC Educational Resources Information Center
Chen, Wen-Hsin
2016-01-01
The goal of this study is to provide a better understanding of the influence from first language (L1) phonology and morphosyntax on second language (L2) production and perception of English regular past tense morphology. (Abstract shortened by ProQuest.) [The dissertation citations contained here are published with the permission of ProQuest LLC.…
On the sparseness of 1-norm support vector machines.
Zhang, Li; Zhou, Weida
2010-04-01
There is some empirical evidence showing that 1-norm support vector machines (1-norm SVMs) have good sparseness; however, it is not clear how much sparseness 1-norm SVMs can achieve, nor whether they have a sparser representation than standard SVMs. In this paper we examine the sparseness of 1-norm SVMs. Two upper bounds on the number of nonzero coefficients in the decision function of 1-norm SVMs are presented. First, the number of nonzero coefficients in 1-norm SVMs is at most the number of exact support vectors lying on the +1 and -1 discriminating surfaces, while that in standard SVMs equals the number of support vectors, which implies that 1-norm SVMs have better sparseness than standard SVMs. Second, the number of nonzero coefficients is at most the rank of the sample matrix. A brief review of the geometry of linear programming and the primal steepest edge pricing simplex method is given, which allows us to prove the two upper bounds and evaluate their tightness by experiments. Experimental results on toy data sets and the UCI data sets illustrate our analysis. Copyright 2009 Elsevier Ltd. All rights reserved.
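The 1-norm SVM discussed here is the linear-programming variant of the SVM: min ||w||_1 + C*sum_i xi_i subject to y_i*(w.x_i + b) >= 1 - xi_i, xi_i >= 0. A small sketch casting it as a linear program with scipy (splitting w into nonnegative parts) follows; the toy data and C are arbitrary.

```python
import numpy as np
from scipy.optimize import linprog

def one_norm_svm(X, y, C=1.0):
    """1-norm SVM: min ||w||_1 + C*sum(xi) s.t. y_i*(w.x_i + b) >= 1 - xi_i, xi >= 0."""
    n, d = X.shape
    # Variables z = [w_plus (d), w_minus (d), b (1), xi (n)]; all except b nonnegative.
    c = np.concatenate([np.ones(2 * d), [0.0], C * np.ones(n)])
    # Constraint rewritten as: -y_i*(w.x_i + b) - xi_i <= -1, with w = w_plus - w_minus.
    Yx = y[:, None] * X
    A_ub = np.hstack([-Yx, Yx, -y[:, None], -np.eye(n)])
    b_ub = -np.ones(n)
    bounds = [(0, None)] * (2 * d) + [(None, None)] + [(0, None)] * n
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    z = res.x
    return z[:d] - z[d:2 * d], z[2 * d]        # (w, b)

# Toy linearly separable data.
rng = np.random.default_rng(8)
X = np.vstack([rng.standard_normal((30, 2)) + 2.0, rng.standard_normal((30, 2)) - 2.0])
y = np.concatenate([np.ones(30), -np.ones(30)])
w, b = one_norm_svm(X, y, C=1.0)
print("nonzero weights:", np.sum(np.abs(w) > 1e-8), "of", len(w))
```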
NASA Astrophysics Data System (ADS)
Fan, Fan; Ma, Yong; Dai, Xiaobing; Mei, Xiaoguang
2018-04-01
Infrared image enhancement is an important and necessary task in infrared imaging systems. In this paper, by defining the contrast in terms of the area between adjacent non-zero histogram bins, a novel analytical model is proposed that enlarges these areas so that the contrast can be increased. In addition, the analytical model is regularized by a penalty term based on the saliency value so that salient regions are enhanced as well. Thus, both the whole image and the salient regions can be enhanced, and rank consistency can be preserved. Comparisons on 8-bit images show that the proposed method can enhance infrared images while revealing more detail.
NASA Astrophysics Data System (ADS)
Zhang, Hua; Yang, Hui; Li, Hongxing; Huang, Guangnan; Ding, Zheyi
2018-04-01
The attenuation of random noise is important for improving the signal-to-noise ratio (SNR). However, the precondition for most conventional denoising methods is that the noisy data must be sampled on a uniform grid, making these methods unsuitable for non-uniformly sampled data. In this paper, a denoising method capable of regularizing the noisy data from a non-uniform grid to a specified uniform grid is proposed. First, the denoising is performed for every time slice extracted from the 3D noisy data along the source and receiver directions; the 2D non-equispaced fast Fourier transform (NFFT) is then introduced into the conventional fast discrete curvelet transform (FDCT). The non-equispaced fast discrete curvelet transform (NFDCT) can be achieved based on the regularized inversion of an operator that links the uniformly sampled curvelet coefficients to the non-uniformly sampled noisy data. The uniform curvelet coefficients can be calculated using the spectral projected-gradient algorithm for ℓ1-norm problems. Local threshold factors are then chosen for the uniform curvelet coefficients at each decomposition scale, and effective curvelet coefficients are obtained for each scale. Finally, the conventional inverse FDCT is applied to the effective curvelet coefficients. This completes the proposed 3D denoising method using the non-equispaced curvelet transform in the source-receiver domain. Examples on synthetic and real data demonstrate the effectiveness of the proposed approach for noise attenuation of non-uniformly sampled data compared with the conventional FDCT method and the wavelet transform.
Constructing the L2-Graph for Robust Subspace Learning and Subspace Clustering.
Peng, Xi; Yu, Zhiding; Yi, Zhang; Tang, Huajin
2017-04-01
Under the framework of graph-based learning, the key to robust subspace clustering and subspace learning is to obtain a good similarity graph that eliminates the effects of errors and retains only connections between data points from the same subspace (i.e., intrasubspace data points). Recent works achieve good performance by modeling errors into their objective functions to remove the errors from the inputs. However, these approaches face the limitations that the structure of the errors must be known a priori and a complex convex problem must be solved. In this paper, we present a novel method that eliminates the effects of the errors from the projection space (representation) rather than from the input space. We first prove that l1-, l2-, l∞-, and nuclear-norm-based linear projection spaces share the property of intrasubspace projection dominance, i.e., the coefficients over intrasubspace data points are larger than those over intersubspace data points. Based on this property, we introduce a method to construct a sparse similarity graph, called the L2-graph. The subspace clustering and subspace learning algorithms are developed upon the L2-graph. We conduct comprehensive experiments on subspace learning, image clustering, and motion segmentation and consider several quantitative benchmarks: classification/clustering accuracy, normalized mutual information, and running time. Results show that the L2-graph outperforms many state-of-the-art methods in our experiments, including the L1-graph, low-rank representation (LRR), latent LRR, least-squares regression, sparse subspace clustering, and locally linear representation.
Mrdakovic Popic, Jelena; Meland, Sondre; Salbu, Brit; Skipperud, Lindis
2014-05-01
Investigation of radionuclides (232Th and 238U) and trace elements (Cr, As and Pb) in soil from two legacy NORM sites (former mining sites) and one undisturbed naturally 232Th-rich site was conducted as a part of the ongoing environmental impact assessment in the Fen Complex area (Norway). The major objectives were to determine the radionuclide and trace element distribution and mobility in soils as well as to analyze possible differences between legacy NORM soils and surrounding undisturbed naturally 232Th-rich soils. Inhomogeneous soil distribution of radionuclides and trace elements was observed for each of the investigated sites. The concentration of 232Th was high (up to 1685 mg kg(-1), i.e., ∼7000 Bq kg(-1)) and exceeded the screening value for radioactive waste material in Norway (1 Bq g(-1)). Based on the sequential extraction results, the majority of 232Th and trace elements were rather inert, irreversibly bound to soil. Uranium was found to be potentially more mobile, as it was associated with pH-sensitive soil phases, redox-sensitive amorphous soil phases and soil organic compounds. Comparison of the sequential extraction datasets from the three investigated sites revealed increased mobility of all analyzed elements at the legacy NORM sites in comparison with the undisturbed 232Th-rich site. Similarly, the distribution coefficients Kd (232Th) and Kd (238U) suggested elevated dissolution, mobility and transportation at the legacy NORM sites, especially at the decommissioned Nb-mining site (346 and 100 L kg(-1) for 232Th and 238U, respectively), while higher sorption of radionuclides was demonstrated at the undisturbed 232Th-rich site (10,672 and 506 L kg(-1) for 232Th and 238U, respectively). In general, although the concentration ranges of radionuclides and trace elements were similarly wide both at the legacy NORM and at the undisturbed 232Th-rich sites, the results of soil sequential extractions together with Kd values supported the expected differences between sites as a consequence of previous mining operations. Hence, mobility and possible elevated bioavailability at the legacy NORM site could be expected and further risk assessment should take this into account when decisions about the possible intervention measures are made.
MedXN: an open source medication extraction and normalization tool for clinical text
Sohn, Sunghwan; Clark, Cheryl; Halgrim, Scott R; Murphy, Sean P; Chute, Christopher G; Liu, Hongfang
2014-01-01
Objective We developed the Medication Extraction and Normalization (MedXN) system to extract comprehensive medication information and normalize it to the most appropriate RxNorm concept unique identifier (RxCUI) as specifically as possible. Methods Medication descriptions in clinical notes were decomposed into medication name and attributes, which were separately extracted using RxNorm dictionary lookup and regular expression. Then, each medication name and its attributes were combined together according to RxNorm convention to find the most appropriate RxNorm representation. To do this, we employed serialized hierarchical steps implemented in Apache's Unstructured Information Management Architecture. We also performed synonym expansion, removed false medications, and employed inference rules to improve the medication extraction and normalization performance. Results An evaluation on test data of 397 medication mentions showed F-measures of 0.975 for medication name and over 0.90 for most attributes. The RxCUI assignment produced F-measures of 0.932 for medication name and 0.864 for full medication information. Most false negative RxCUI assignments in full medication information are due to human assumption of missing attributes and medication names in the gold standard. Conclusions The MedXN system (http://sourceforge.net/projects/ohnlp/files/MedXN/) was able to extract comprehensive medication information with high accuracy and demonstrated good normalization capability to RxCUI as long as explicit evidence existed. More sophisticated inference rules might result in further improvements to specific RxCUI assignments for incomplete medication descriptions. PMID:24637954
Discrete maximal regularity of time-stepping schemes for fractional evolution equations.
Jin, Bangti; Li, Buyang; Zhou, Zhi
2018-01-01
In this work, we establish the maximal [Formula: see text]-regularity for several time stepping schemes for a fractional evolution model, which involves a fractional derivative of order [Formula: see text], [Formula: see text], in time. These schemes include convolution quadratures generated by backward Euler method and second-order backward difference formula, the L1 scheme, explicit Euler method and a fractional variant of the Crank-Nicolson method. The main tools for the analysis include operator-valued Fourier multiplier theorem due to Weis (Math Ann 319:735-758, 2001. doi:10.1007/PL00004457) and its discrete analogue due to Blunck (Stud Math 146:157-176, 2001. doi:10.4064/sm146-2-3). These results generalize the corresponding results for parabolic problems.
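For orientation, the L1 scheme mentioned above approximates the Caputo derivative of order alpha in (0, 1) by a weighted sum of backward differences, with weights b_j = (j+1)^(1-alpha) - j^(1-alpha). The sketch below applies it to the scalar test problem D_t^alpha u + lam*u = 0, u(0) = 1; it illustrates only the scheme itself, not the maximal regularity analysis of the paper.

```python
import numpy as np
from math import gamma

def l1_scheme_relaxation(alpha, lam, T=1.0, N=200, u0=1.0):
    """L1 time-stepping for the Caputo problem D_t^alpha u + lam*u = 0, u(0) = u0."""
    tau = T / N
    c = tau ** (-alpha) / gamma(2.0 - alpha)
    b = (np.arange(1, N + 1) ** (1.0 - alpha)
         - np.arange(0, N) ** (1.0 - alpha))          # b_j = (j+1)^{1-a} - j^{1-a}
    u = np.empty(N + 1); u[0] = u0
    for n in range(1, N + 1):
        j = np.arange(1, n)                           # history indices (empty when n = 1)
        hist = np.sum(b[j] * (u[n - j] - u[n - j - 1]))
        # Solve c*(u_n - u_{n-1} + hist) + lam*u_n = 0 for u_n (b_0 = 1).
        u[n] = c * (u[n - 1] - hist) / (c + lam)
    return u

u = l1_scheme_relaxation(alpha=0.5, lam=1.0)
print(u[-1])   # approximates the Mittag-Leffler value E_0.5(-1) ~ 0.428
```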
Diaz-loving, R; Rivera Aragon, S
1995-01-01
1203 sexually active workers in six government agencies in Mexico City participated in a study of the applicability of the theory of reasoned action to prediction of condom use for AIDS prevention. The theory of reasoned action is one of a series of models of attitudes that have had consistent success in predicting various types of intentions and behaviors, especially in the area of sexual and contraceptive behavior. The theory specifies that the intention of executing a particular behavior is determined as the function of attitude toward the behavior and a social factor termed "subjective norm", referring to the perception of social pressure supporting or opposing a particular behavior. The 1203 subjects, who ranged from low to high educational and socioeconomic status, completed self-administered questionnaires concerning their beliefs, attitudes, and intentions regarding condom use, motivation to comply with the subjective norm, and actual condom use. Various scales were constructed to measure the different components of the theory. Hierarchical regression analysis was carried out separately for men and women and for condom use with regular or occasional partners. The model explained over 20% of condom use behavior. The total explained variance was similar in all groups, but the components of the model determining the variance were different. Personal beliefs and attitudes were more important in reference to occasional sexual partners, but the subjective norm and motivation to comply with the reference group were more important with regular sexual partners. The results demonstrate the need for interventions to be adapted to gender groups and in reference to regular or occasional partners.
Balasubramanian, Madhusudhanan; Žabić, Stanislav; Bowd, Christopher; Thompson, Hilary W.; Wolenski, Peter; Iyengar, S. Sitharama; Karki, Bijaya B.; Zangwill, Linda M.
2009-01-01
Glaucoma is the second leading cause of blindness worldwide. Often, glaucomatous damage to the optic nerve head (ONH) and ONH changes occur prior to visual field loss and are observable in vivo. Thus, digital image analysis is a promising choice for detecting the onset and/or progression of glaucoma. In this work, we present a new framework for detecting glaucomatous changes in the ONH of an eye using the method of proper orthogonal decomposition (POD). A baseline topograph subspace was constructed for each eye to describe the structure of the ONH of the eye at a reference/baseline condition using POD. Any glaucomatous changes in the ONH of the eye present during a follow-up exam were estimated by comparing the follow-up ONH topography with its baseline topograph subspace representation. Image correspondence measures of L1 and L2 norms, correlation, and image Euclidean distance (IMED) were used to quantify the ONH changes. An ONH topographic library built from the Louisiana State University Experimental Glaucoma study was used to evaluate the performance of the proposed method. The area under the receiver operating characteristic curve (AUC) was used to compare the diagnostic performance of the POD-induced parameters with the parameters of the Topographic Change Analysis (TCA) method. The IMED and L2 norm parameters in the POD framework provided the highest AUCs of 0.94 at a 10° field of imaging and 0.91 at a 15° field of imaging, compared to the TCA parameters with AUCs of 0.86 and 0.88, respectively. The proposed POD framework captures the instrument measurement variability and inherent structure variability and shows promise for improving our ability to detect glaucomatous change over time in glaucoma management. PMID:19369163
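The POD construction described here is essentially an SVD-based subspace fit: baseline exams define an orthonormal basis, a follow-up exam is projected onto it, and norms of the residual quantify change. A generic sketch of that pipeline on synthetic "topographies" (not HRT data, and not the study's exact normalization) follows.

```python
import numpy as np

def pod_basis(baseline, energy=0.99):
    """Orthonormal POD basis of baseline exams (rows = exams, columns = pixels)."""
    mean = baseline.mean(axis=0)
    U, s, Vt = np.linalg.svd(baseline - mean, full_matrices=False)
    k = np.searchsorted(np.cumsum(s**2) / np.sum(s**2), energy) + 1
    return Vt[:k], mean

def change_scores(followup, basis, mean):
    """L1 and L2 norms of the residual after projecting onto the baseline subspace."""
    centered = followup - mean
    residual = centered - (centered @ basis.T) @ basis
    return np.abs(residual).sum(), np.linalg.norm(residual)

# Synthetic example: 5 noisy baseline "topographies" and one follow-up with a local change.
rng = np.random.default_rng(9)
truth = rng.standard_normal(256)
baseline = truth + 0.05 * rng.standard_normal((5, 256))
followup = truth + 0.05 * rng.standard_normal(256)
followup[40:60] += 1.0                       # simulated structural change
basis, mean = pod_basis(baseline)
print(change_scores(followup, basis, mean))  # (L1 residual, L2 residual)
```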
Te Brake, Hans
2013-01-01
Background Internationally, several initiatives exist to describe standards for post-disaster psychosocial care. Objective This study explored the level of consensus of experts within Europe on a set of recommendations on early psychosocial intervention after shocking events (Dutch guidelines), and to what degree these standards are implemented into mental health care practice. Methods Two hundred and six (mental) health care professionals filled out a questionnaire to assess the extent to which they consider the guidelines’ scope and recommendations relevant and part of the regular practice in their own country. Forty-five European experts from 24 EU countries discussed the guidelines at an international seminar. Results The data suggest overall agreement on the standards although many of the recommendations appear not (yet) to be embedded in everyday practice. Conclusions Although large consensus exists on standards for early psychosocial care, a chasm between norms and practice appears to exist throughout the EU, stressing the general need for investments in guideline development and implementation. PMID:23393613
Hajian, Reza; Mousavi, Esmat; Shams, Nafiseh
2013-06-01
The net analyte signal standard addition method has been used for the simultaneous spectrophotometric determination of sulphadiazine and trimethoprim in bovine milk and veterinary medicines. The method combines the advantages of the standard addition method with the net analyte signal (NAS) concept, which enables the extraction of information concerning a certain analyte from spectra of multi-component mixtures. This method has several advantages: it uses the full spectrum, it does not require separate calibration and prediction steps, and only a few measurements are required for the determination. Cloud point extraction, based on the phenomenon of solubilisation, was used for the extraction of sulphadiazine and trimethoprim from bovine milk. It is based on the induction of micellar organised media by using Triton X-100 as an extraction solvent. Under optimum conditions, the norm of the NAS vectors increased linearly with concentration in the range of 1.0-150.0 μmol L(-1) for both sulphadiazine and trimethoprim. The limits of detection (LOD) for sulphadiazine and trimethoprim were 0.86 and 0.92 μmol L(-1), respectively. Copyright © 2012 Elsevier Ltd. All rights reserved.
A norm knockout method on indirect reciprocity to reveal indispensable norms
Yamamoto, Hitoshi; Okada, Isamu; Uchida, Satoshi; Sasaki, Tatsuya
2017-01-01
Although various norms for reciprocity-based cooperation have been suggested that are evolutionarily stable against invasion from free riders, the process of alternation of norms and the role of diversified norms remain unclear in the evolution of cooperation. We clarify the co-evolutionary dynamics of norms and cooperation in indirect reciprocity and also identify the indispensable norms for the evolution of cooperation. Inspired by the gene knockout method, a genetic engineering technique, we developed the norm knockout method and clarified the norms necessary for the establishment of cooperation. The results of numerical investigations revealed that the majority of norms gradually transitioned to tolerant norms after defectors are eliminated by strict norms. Furthermore, no cooperation emerges when specific norms that are intolerant to defectors are knocked out. PMID:28276485
Thorndike, Anne N.; Riis, Jason; Levy, Douglas E.
2016-01-01
Population-level strategies to improve healthy food choices are needed for obesity prevention. We conducted a randomized controlled trial of 2,672 employees at Massachusetts General Hospital who were regular customers of the hospital cafeteria with all items labeled green (healthy), yellow (less healthy), or red (unhealthy) to determine if social norm (peer-comparison) feedback with or without financial incentives increased employees’ healthy food choices. Participants were randomized in 2012 to three arms: 1) monthly letter with social norm feedback about healthy food purchases, comparing employee to “all” and to “healthiest” customers (feedback-only); 2) monthly letter with social norm feedback plus small financial incentive for increasing green purchases (feedback-incentive); or 3) no contact (control). The main outcome was change in proportion of green-labeled purchases at end of 3-month intervention. Post-hoc analyses examined linear trends. At baseline, the proportion of green-labeled purchases (50%) did not differ between arms. At end of the 3-month intervention, the percentage increase in green-labeled purchases was larger in the feedback-incentive arm compared to control (2.2% vs. 0.1%, P=0.03), but the two intervention arms were not different. The rate of increase in green-labeled purchases was higher in both feedback-only (P=0.04) and feedback-incentive arms (P=0.004) compared to control. At end of a 3-month wash-out, there were no differences between control and intervention arms. Social norms plus small financial incentives increased employees’ healthy food choices over the short-term. Future research will be needed to assess the impact of this relatively low-cost intervention on employees’ food choices and weight over the long-term. Trial Registration: Clinical Trials.gov NCT01604499 PMID:26827617
DOE Office of Scientific and Technical Information (OSTI.GOV)
Weiss, Chester J
FORTRAN90 codes for inversion of electrostatic geophysical data in terms of three subsurface parameters in a single-well, oilfield environment: the linear charge density of the steel well casing (L), the point charge associated with an induced fracture filled with a conductive contrast agent (Q), and the location of said fracture (s). The theory is described in detail in Weiss et al. (Geophysics, 2016). The inversion strategy is to loop over candidate fracture locations and, at each one, minimize the squared Cartesian norm of the data misfit to arrive at L and Q. The solution method is to construct the 2x2 linear system of normal equations and compute L and Q algebraically. Practical application: oilfield environments where the observed electrostatic geophysical data can reasonably be described by a simple L-Q-s model. This may include hydrofracking operations, as postulated in Weiss et al. (2016), but no field validation examples have so far been provided.
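The inversion strategy described (loop over candidate fracture locations s and, at each one, fit the two linear parameters L and Q by solving a 2x2 normal-equation system) can be sketched generically as below. The forward-response columns are placeholders for whatever casing and fracture kernels the actual code computes; they and the toy data are assumptions, not the published physics.

```python
import numpy as np

def invert_L_Q(d_obs, casing_response, fracture_response, candidates):
    """For each candidate fracture location s, fit d ~ L*g_L + Q*g_Q(s) by least squares.

    Returns the (s, L, Q) with the smallest squared misfit.  casing_response and
    fracture_response stand in for the actual electrostatic forward kernels.
    """
    best = None
    g_L = casing_response                       # column for the casing charge density L
    for s in candidates:
        g_Q = fracture_response(s)              # column for a point charge at location s
        G = np.column_stack([g_L, g_Q])
        # 2x2 normal equations: (G^T G) [L, Q]^T = G^T d
        LQ = np.linalg.solve(G.T @ G, G.T @ d_obs)
        misfit = np.linalg.norm(G @ LQ - d_obs) ** 2
        if best is None or misfit < best[0]:
            best = (misfit, s, LQ[0], LQ[1])
    return best[1:]                             # (s, L, Q)

# Hypothetical usage with toy 1/r-type responses along a surface profile.
x = np.linspace(-500.0, 500.0, 101)                      # receiver positions (m)
casing = 1.0 / np.sqrt(x**2 + 100.0**2)                  # placeholder casing response
frac = lambda s: 1.0 / np.sqrt(x**2 + s**2)              # placeholder fracture response at depth s
rng = np.random.default_rng(10)
d = 3.0 * casing + 0.5 * frac(250.0) + 1e-4 * rng.standard_normal(x.size)
print(invert_L_Q(d, casing, frac, candidates=np.linspace(50.0, 400.0, 36)))
```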
LRSSLMDA: Laplacian Regularized Sparse Subspace Learning for MiRNA-Disease Association prediction
Huang, Li
2017-01-01
Predicting novel microRNA (miRNA)-disease associations is clinically significant due to miRNAs’ potential roles as diagnostic biomarkers and therapeutic targets for various human diseases. Previous studies have demonstrated the viability of utilizing different types of biological data to computationally infer new disease-related miRNAs. Yet researchers face the challenge of how to effectively integrate diverse datasets and make reliable predictions. In this study, we presented a computational model named Laplacian Regularized Sparse Subspace Learning for MiRNA-Disease Association prediction (LRSSLMDA), which projected miRNAs’/diseases’ statistical feature profiles and graph theoretical feature profiles to a common subspace. It used Laplacian regularization to preserve the local structures of the training data and an L1-norm constraint to select important miRNA/disease features for prediction. The strength of dimensionality reduction enabled the model to be easily extended to much higher dimensional datasets than those exploited in this study. Experimental results showed that LRSSLMDA outperformed ten previous models: the AUC of 0.9178 in global leave-one-out cross validation (LOOCV) and the AUC of 0.8418 in local LOOCV indicated the model’s superior prediction accuracy; and the average AUC of 0.9181+/-0.0004 in 5-fold cross validation justified its accuracy and stability. In addition, three types of case studies further demonstrated its predictive power. Potential miRNAs related to Colon Neoplasms, Lymphoma, Kidney Neoplasms, Esophageal Neoplasms and Breast Neoplasms were predicted by LRSSLMDA. Respectively, 98%, 88%, 96%, 98% and 98% of the top 50 predictions were validated by experimental evidence. Therefore, we conclude that LRSSLMDA would be a valuable computational tool for miRNA-disease association prediction. PMID:29253885
The impact of Nordic walking training on the gait of the elderly.
Ben Mansour, Khaireddine; Gorce, Philippe; Rezzoug, Nasser
2018-03-27
The purpose of the current study was to define the impact of regular practice of Nordic walking on the gait of the elderly. Specifically, we aimed to determine whether the gait characteristics of active elderly persons practicing Nordic walking are more similar to those of healthy adults than are those of the sedentary elderly. Comparison was made based on parameters computed from three inertial sensors during walking at a freely chosen velocity. Results showed differences in gait pattern in terms of the amplitude (root mean square) computed from acceleration and angular velocity at the lumbar region, the distribution (skewness) quantified from the vertical component and Euclidean norm of the lumbar acceleration, the complexity (sample entropy) of the mediolateral component of lumbar angular velocity and of the Euclidean norm of the shank acceleration and angular velocity, the regularity of the lower limbs, the spatiotemporal parameters, and the variability (standard deviation) of stance and stride durations. These findings reveal that the gait pattern of the active elderly differs significantly from that of sedentary elderly of the same age, while similarity was observed between the active elderly and healthy adults. These results suggest that regular physical activity such as Nordic walking may counteract the deterioration of gait quality that occurs with aging.
Bowden, Harriet Wood; Gelfand, Matthew P.; Sanz, Cristina; Ullman, Michael T.
2009-01-01
This study examines the storage vs. composition of Spanish inflected verbal forms in L1 and L2 speakers of Spanish. L2 participants were selected to have mid-to-advanced proficiency, high classroom experience, and low immersion experience, typical of medium-to-advanced foreign language learners. Participants were shown the infinitival forms of verbs from either Class I (the default class, which takes new verbs) or Classes II and III (non-default classes), and were asked to produce either first-person singular present-tense or imperfect forms, in separate tasks. In the present tense, the L1 speakers showed inflected-form frequency effects (i.e., higher frequency forms were produced faster, which is taken as a reflection of storage) for stem-changing (irregular) verb-forms from both Class I (e.g., pensar-pienso) and Classes II and III (e.g., perder-pierdo), as well as for non-stem-changing (regular) forms in Classes II/III (e.g., vender-vendo), in which the regular transformation does not appear to constitute a default. In contrast, Class I regulars (e.g., pescar-pesco), whose non-stem-changing transformation constitutes a default (e.g., it is applied to new verbs), showed no frequency effects. L2 speakers showed frequency effects for all four conditions (Classes I and II/III, regulars and irregulars). In the imperfect tense, the L1 speakers showed frequency effects for Class II/III (-ía-suffixed) but not Class I (-aba-suffixed) forms, even though both involve non-stem-change (regular) default transformations. The L2 speakers showed frequency effects for both types of forms. The pattern of results was not explained by a wide range of potentially confounding experimental and statistical factors, and does not appear to be compatible with single-mechanism models, which argue that all linguistic forms are learned and processed in associative memory. The findings are consistent with a dual-system view in which both verb class and regularity influence the storage vs. composition of inflected forms. Specifically, the data suggest that in L1, inflected verbal forms are stored (as evidenced by frequency effects) unless they are both from Class I and undergo non-stem-changing default transformations. In contrast the findings suggest that at least these L2 participants may store all inflected verb-forms. Taken together, the results support dual-system models of L1 and L2 processing in which, at least at mid-to-advanced L2 proficiency and lower levels of immersion experience, the processing of rule-governed forms may depend not on L1 combinatorial processes, but instead on memorized representations. PMID:20419083
Fraction of exhaled nitric oxide (FeNO ) norms in healthy North African children 5-16 years old.
Rouatbi, Sonia; Alqodwa, Ashraf; Ben Mdella, Samia; Ben Saad, Helmi
2013-10-01
(i) To identify factors that influence the FeNO values in healthy North African, Arab children aged 6-16 years; (ii) to test the applicability and reliability of the previously published FeNO norms; and (iii) if needed, to establish FeNO norms in this population and to prospectively assess their reliability. This was a cross-sectional analytical study. A convenience sample of healthy Tunisian children aged 6-16 years was recruited. First, subjects responded to two questionnaires; then FeNO levels were measured by an online method with an electrochemical analyzer (Medisoft, Sorinnes [Dinant], Belgium). Anthropometric and spirometric data were collected. Simple and multiple linear regressions were performed. The 95% confidence interval (95% CI) and upper limit of normal (ULN) were defined. Two hundred eleven children (107 boys) were retained. Anthropometric data, gender, socioeconomic level, obesity or puberty status, and sports activity were not independent influencing variables. Total sample FeNO data appeared to be influenced only by maximum mid-expiratory flow (l sec(-1); r(2) = 0.0236, P = 0.0516). For boys, only first-second forced expiratory volume (l) explained a slight (r(2) = 0.0451) but significant portion of FeNO variability (P = 0.0281). For girls, FeNO was not significantly correlated with any of the measured data. For North African/Arab children, FeNO values were significantly lower than in other populations and the available published FeNO norms did not reliably predict FeNO in our population. The mean ± SD (95% CI ULN, minimum-maximum) of FeNO (ppb) for the total sample was 5.0 ± 2.9 (5.4, 1.0-17.0). For North African, Arab children of any age, any FeNO value greater than 17.0 ppb may be considered abnormal. Finally, in an additional group of children prospectively assessed, we found no child with a FeNO higher than 17.0 ppb. Our FeNO norms enrich the global repository of FeNO norms the pediatrician can use to choose the most appropriate norms based on children's location or ethnicity. © 2012 Wiley Periodicals, Inc.
Fast Algorithms for Earth Mover Distance Based on Optimal Transport and L1 Regularization II
2016-09-01
... of optimal transport, the EMD problem can be reformulated as a familiar L1 minimization. We use a regularization which gives us a unique solution for ... plays a central role in many applications, including image processing, computer vision and statistics, etc. [13, 17, 20, 24]. The EMD is a metric defined ...
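For one-dimensional histograms of equal total mass, the EMD reduces to the L1 distance between cumulative distributions, which gives a compact illustration of the metric this report builds on (the example histograms are arbitrary):

```python
import numpy as np

def emd_1d(p, q, bin_width=1.0):
    """Earth mover's distance between two 1-D histograms of equal total mass."""
    p = np.asarray(p, float) / np.sum(p)
    q = np.asarray(q, float) / np.sum(q)
    return bin_width * np.sum(np.abs(np.cumsum(p) - np.cumsum(q)))

print(emd_1d([0, 1, 0, 0], [0, 0, 0, 1]))   # all mass must move 2 bins -> EMD = 2.0
```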
MRI Estimates of Brain Iron Concentration in Normal Aging Using Quantitative Susceptibility Mapping
Bilgic, Berkin; Pfefferbaum, Adolf; Rohlfing, Torsten; Sullivan, Edith V.; Adalsteinsson, Elfar
2011-01-01
Quantifying tissue iron concentration in vivo is instrumental for understanding the role of iron in physiology and in neurological diseases associated with abnormal iron distribution. Herein, we use recently-developed Quantitative Susceptibility Mapping (QSM) methodology to estimate the tissue magnetic susceptibility based on MRI signal phase. To investigate the effect of different regularization choices, we implement and compare ℓ1 and ℓ2 norm regularized QSM algorithms. These regularized approaches solve for the underlying magnetic susceptibility distribution, a sensitive measure of the tissue iron concentration, that gives rise to the observed signal phase. Regularized QSM methodology also involves a pre-processing step that removes, by dipole fitting, unwanted background phase effects due to bulk susceptibility variations between air and tissue and requires data acquisition only at a single field strength. For validation, performances of the two QSM methods were measured against published estimates of regional brain iron from postmortem and in vivo data. The in vivo comparison was based on data previously acquired using Field-Dependent Relaxation Rate Increase (FDRI), an estimate of MRI relaxivity enhancement due to increased main magnetic field strength, requiring data acquired at two different field strengths. The QSM analysis was based on susceptibility-weighted images acquired at 1.5T, whereas FDRI analysis used Multi-Shot Echo-Planar Spin Echo images collected at 1.5T and 3.0T. Both datasets were collected in the same healthy young and elderly adults. The in vivo estimates of regional iron concentration comported well with published postmortem measurements; both QSM approaches yielded the same rank ordering of iron concentration by brain structure, with the lowest in white matter and the highest in globus pallidus. Further validation was provided by comparison of the in vivo measurements, ℓ1-regularized QSM versus FDRI and ℓ2-regularized QSM versus FDRI, which again yielded perfect rank ordering of iron by brain structure. The final means of validation was to assess how well each in vivo method detected known age-related differences in regional iron concentrations measured in the same young and elderly healthy adults. Both QSM methods and FDRI were consistent in identifying higher iron concentrations in striatal and brain stem ROIs (i.e., caudate nucleus, putamen, globus pallidus, red nucleus, and substantia nigra) in the older than in the young group. The two QSM methods appeared more sensitive in detecting age differences in brain stem structures as they revealed differences of much higher statistical significance between the young and elderly groups than did FDRI. However, QSM values are influenced by factors such as the myelin content, whereas FDRI is a more specific indicator of iron content. Hence, FDRI demonstrated higher specificity to iron yet yielded noisier data despite longer scan times and lower spatial resolution than QSM. The robustness, practicality, and demonstrated ability of predicting the change in iron deposition in adult aging suggest that regularized QSM algorithms using single-field-strength data are possible alternatives to tissue iron estimation requiring two field strengths. PMID:21925274
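As a rough illustration of the ℓ2-regularized flavour of QSM, the sketch below performs a closed-form Tikhonov inversion of the unit dipole kernel in k-space, chi = F^{-1}[D*F(phi)/(D^2 + lambda)], with the standard kernel D(k) = 1/3 - kz^2/|k|^2. The grid size, lambda, and synthetic susceptibility map are assumptions, and this is not the paper's processing pipeline (which also includes background-field removal and the ℓ1-regularized variant).

```python
import numpy as np

def dipole_kernel(shape):
    """Unit dipole kernel D(k) = 1/3 - kz^2 / |k|^2 (B0 along the last axis), in k-space."""
    kx, ky, kz = np.meshgrid(*[np.fft.fftfreq(n) for n in shape], indexing="ij")
    k2 = kx**2 + ky**2 + kz**2
    k2[0, 0, 0] = np.inf                     # avoid division by zero at the DC term
    return 1.0 / 3.0 - kz**2 / k2

def qsm_l2(local_field, lam=1e-2):
    """Closed-form l2 (Tikhonov) dipole inversion: chi = F^-1[ D*F(phi) / (D^2 + lam) ]."""
    D = dipole_kernel(local_field.shape)
    F_phi = np.fft.fftn(local_field)
    return np.real(np.fft.ifftn(D * F_phi / (D**2 + lam)))

# Synthetic example: a small spherical susceptibility inclusion.
n = 64
idx = np.ogrid[:n, :n, :n]
chi_true = (sum((a - n / 2) ** 2 for a in idx) < 8 ** 2).astype(float) * 0.1
field = np.real(np.fft.ifftn(dipole_kernel(chi_true.shape) * np.fft.fftn(chi_true)))
chi_rec = qsm_l2(field, lam=1e-2)
```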
Lin, Wei; Feng, Rui; Li, Hongzhe
2014-01-01
In genetical genomics studies, it is important to jointly analyze gene expression data and genetic variants in exploring their associations with complex traits, where the dimensionality of gene expressions and genetic variants can both be much larger than the sample size. Motivated by such modern applications, we consider the problem of variable selection and estimation in high-dimensional sparse instrumental variables models. To overcome the difficulty of high dimensionality and unknown optimal instruments, we propose a two-stage regularization framework for identifying and estimating important covariate effects while selecting and estimating optimal instruments. The methodology extends the classical two-stage least squares estimator to high dimensions by exploiting sparsity using sparsity-inducing penalty functions in both stages. The resulting procedure is efficiently implemented by coordinate descent optimization. For the representative L1 regularization and a class of concave regularization methods, we establish estimation, prediction, and model selection properties of the two-stage regularized estimators in the high-dimensional setting where the dimensionality of covariates and instruments are both allowed to grow exponentially with the sample size. The practical performance of the proposed method is evaluated by simulation studies and its usefulness is illustrated by an analysis of mouse obesity data. Supplementary materials for this article are available online. PMID:26392642
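A bare-bones version of such a two-stage regularized estimator, with each classical 2SLS stage replaced by a Lasso fit, could look like the generic scikit-learn sketch below; the penalty levels and the simulated data are assumptions, and the sketch does not reproduce the authors' concave penalties or theoretical analysis.

```python
import numpy as np
from sklearn.linear_model import Lasso

def two_stage_regularized_iv(Z, X, y, alpha1=0.05, alpha2=0.05):
    """Stage 1: Lasso of each endogenous covariate on the instruments Z.
    Stage 2: Lasso of the outcome y on the first-stage fitted covariates."""
    X_hat = np.column_stack([
        Lasso(alpha=alpha1).fit(Z, X[:, j]).predict(Z) for j in range(X.shape[1])
    ])
    return Lasso(alpha=alpha2).fit(X_hat, y).coef_

# Toy data: 10 covariates (2 relevant), 50 instruments (5 relevant), one confounder.
rng = np.random.default_rng(11)
n, p, q = 300, 10, 50
Z = rng.standard_normal((n, q))
Gamma = np.zeros((q, p)); Gamma[:5, :] = rng.standard_normal((5, p))
confounder = rng.standard_normal(n)
X = Z @ Gamma + np.outer(confounder, np.ones(p)) + rng.standard_normal((n, p))
beta = np.zeros(p); beta[:2] = [1.0, -1.5]
y = X @ beta + 2.0 * confounder + rng.standard_normal(n)
print(np.round(two_stage_regularized_iv(Z, X, y), 2))
```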
Health-Related Quality of Life among Pediatric Hematopoietic Stem Cell Donors
Switzer, Galen E.; Bruce, Jessica; Kiefer, Deidre M.; Kobusingye, Hati; Drexler, Rebecca; Besser, RaeAnne M.; Confer, Dennis L.; Horowitz, Mary M.; King, Roberta J.; Shaw, Bronwen E.; van Walraven, Suzanna M.; Wiener, Lori; Packman, Wendy; Varni, James W.; Pulsipher, Michael A.
2016-01-01
Objectives To examine health-related quality of life (HRQoL) among sibling pediatric hematopoietic stem cell donors from predonation through 1 year postdonation, to compare donor-reported HRQoL scores with proxy-reports by parents/guardians and those of healthy norms, and to identify predonation factors (including donor age) potentially associated with postdonation HRQoL, to better understand the physical and psychosocial effects of pediatric hematopoietic stem cell donation. Study design A random sample of 105 pediatric donors from US centers and a parent/guardian were interviewed by telephone predonation and 4 weeks and 1 year postdonation. The interview included sociodemographic, psychosocial, and HRQoL items. A sample of healthy controls matched to donors by age, gender, and race/ethnicity was generated. Results Key findings included (1) approximately 20% of donors at each time point had very poor HRQoL; (2) child self-reported HRQoL was significantly lower than parent proxy-reported HRQoL at all 3 time points and significantly lower than that of norms at predonation and 4 weeks postdonation; and (3) younger children were at particular risk of poor HRQoL. Conclusions Additional research to identify the specific sources of poorer HRQoL among at-risk donors (eg, the donation experience vs having a chronically ill sibling) and the reasons that parents may be overestimating HRQoL in their donor children is critical and should lead to interventions and policy changes that ensure positive experiences for these minor donors. PMID:27522440
Duhamel, T A; Green, H J; Perco, J G; Ouyang, J
2005-07-01
This study investigated the effects of prolonged exercise on muscle sarcoplasmic reticulum (SR) Ca2+ cycling properties and the metabolic responses with and without a prior session of exercise designed to reduce muscle glycogen reserves while on a normal carbohydrate (CHO) diet. Eight untrained males (VO2peak = 3.81 +/- 0.12 L/min, mean +/- SE) performed a standardized cycle-to-fatigue at 55% VO2peak while on a normal CHO diet (Norm CHO) and 4 days following prolonged exercise while on a normal CHO diet (Ex+Norm CHO). Compared to rest, exercise in Norm CHO to fatigue resulted in significant reductions (p < 0.05) in Ca2+ uptake (3.17 +/- 0.21 vs. 2.47 +/- 0.12 micromol.(g protein)-1.min-1), maximal Ca2+ ATPase activity (Vmax, 152 +/- 12 vs. 119 +/- 9 micromol.(g protein)-1.min-1) and both phase 1 (15.1 +/- 0.98 vs. 13.1 +/- 0.28 micromol.(g protein)-1.min-1) and phase 2 (6.56 +/- 0.33 vs. 4.91 +/- 0.28 micromol.(g protein)-1.min-1) Ca2+ release in the vastus lateralis muscle. No differences were observed between Norm CHO and Ex+Norm CHO in the response of these properties to exercise. Compared with Norm CHO, Ex+Norm CHO resulted in higher (p < 0.05) resting Ca2+ uptake (3.17 +/- 0.21 vs. 3.49 +/- 0.24 micromol.(g protein)-1.min-1) and a higher ionophore ratio, defined as the ratio of Vmax measured with and without the Ca2+-ionophore A23187, (2.3 +/- 0.3 vs. 4.4 +/- 0.3 micromol.(g protein)-1.min-1) at fatigue. No differences were observed between conditions in the concentration of muscle glycogen, the high-energy phosphates (ATP and PCr), or metabolites (Pi, Cr, and lactate). Ex+Norm CHO also failed to modify the exercise-induced changes in CHO and fat oxidation. We conclude that prolonged exercise to fatigue performed 4 days following glycogen-depleting exercise while on a normal CHO diet elevates resting Ca2+ uptake and prevents increases in SR membrane permeability to Ca2+ as measured by the ionophore ratio.
Winstock, A R; Griffiths, P; Stewart, D
2001-09-01
This study explores the utility of a self-completion survey method to quickly and cheaply generate information on patterns and trends among regular "recreational" drug consumers. Data are reported here from 1151 subjects accessed through a dance music publication. In keeping with previous studies of drug use within the dance scene, polysubstance use was the norm. Many of those reporting use of "ecstasy" were regularly using multiple tablets, often consumed in combination with other substances, thus exposing themselves to serious health risks, in particular the risk of dose-related neurotoxic effects. Seventy percent were drinking alcohol at hazardous levels. Subjects' patterns of drug purchasing also put them at risk of severe criminal sanction. Data supported evidence that cocaine use had become increasingly popular in the UK, but contrasted with some commentators' views that ecstasy use was in decline. The utility of this method and how the results should be interpreted are discussed, as are the data's implications for harm and risk reduction activities.
Oshima, Satomi; Takehata, Chisato; Sasahara, Ikuko; Lee, Eunjae; Akama, Takao; Taguchi, Motoko
2017-08-21
An intensive consecutive high-volume training camp may induce appetite loss in athletes. Therefore, this study aimed to investigate the changes in stress and appetite responses in male power-trained athletes during an intensive training camp. The measurements at Day 2 and at the end of a 9-day intensive training camp (Camp1 and Camp2, respectively) were compared with those of the resting period (Rest) and the regular training period (Regular; n = 13). The stress state was assessed based on plasma cortisol level, salivary immunoglobulin A level, and a profile of mood states score. The sensation of appetite was assessed using visual analog scale scores, and fasting plasma acylated ghrelin, insulin, and glucose were measured. The cortisol concentrations were significantly higher at Camp2 (466.7 ± 60.7 nmol∙L-1) than at Rest (356.3 ± 100.9 nmol∙L-1; p = 0.002) or Regular (361.7 ± 111.4 nmol∙L-1; p = 0.003). Both prospective and actual food consumption significantly decreased at Camp2, and acylated ghrelin concentration was significantly lower at Camp1 (34.2 ± 8.0 pg∙mL-1) and Camp2 (32.0 ± 8.7 pg∙mL-1) than at Rest (47.2 ± 11.2 pg∙mL-1) or Regular (53.4 ± 12.6 pg∙mL-1). Furthermore, the change in acylated ghrelin level was negatively correlated with the change in cortisol concentration. This study's findings suggest that an early-phase physiological stress response may decrease the acylated ghrelin level in male power-trained athletes during an intensive training camp.
Accurately determining direction of arrival by seismic array based on compressive sensing
NASA Astrophysics Data System (ADS)
Hu, J.; Zhang, H.; Yu, H.
2016-12-01
Seismic array analysis plays an important role in detecting weak signals and determining their locations and rupture processes. In these applications, reliably estimating the direction of arrival (DOA) of the seismic wave is very important. DOA is generally determined by the conventional beamforming method (CBM) [Rost et al., 2000]. However, for a fixed seismic array the resolution of the CBM is generally poor for low-frequency seismic signals, and for high-frequency seismic signals the CBM may produce many local peaks, making it difficult to pick the one corresponding to the true DOA. In this study, we develop a new seismic array method based on compressive sensing (CS) to determine the DOA with high resolution for both low- and high-frequency seismic signals. The new method takes advantage of the spatial sparsity of the incoming wavefronts. The CS method has been successfully used to determine spatial and temporal earthquake rupture distributions with seismic arrays [Yao et al., 2011; Yao et al., 2013; Yin, 2016]. In this method, we first formulate the problem of solving the DOA as an L1-norm minimization problem. The measurement matrix for CS is constructed by dividing the slowness-angle domain into many grid nodes, and it needs to satisfy the restricted isometry property (RIP) for optimized reconstruction of the image. The L1-norm minimization is solved by the interior point method. We first test the CS-based DOA array determination method on synthetic data constructed based on the Shanghai seismic array. Compared to the CBM, the synthetic test for data without noise shows that the new method can determine the true DOA with a super-high resolution. In the case of multiple sources, the new method can easily separate multiple DOAs. When data are contaminated by noise at various levels, the CS method is stable as long as the noise amplitude is lower than the signal amplitude. We also test the CS method on the Wenchuan earthquake. For different arrays with different apertures, we are able to obtain reliable DOAs with uncertainties lower than 10 degrees.
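As a rough, non-authoritative illustration of the formulation described above, the following Python sketch poses DOA estimation on a slowness grid as an L1-regularized fit; the array geometry, frequency, grid spacing and the use of ISTA in place of the interior-point solver are all assumptions for illustration, not taken from the paper.

```python
# Minimal sketch: single-frequency plane-wave DOA on a slowness grid via
# L1-regularized least squares, solved with ISTA for simplicity.
import numpy as np

rng = np.random.default_rng(0)
f = 1.0                                    # Hz, narrowband component (assumed)
rx = rng.uniform(-5, 5, size=(12, 2))      # km, 12-station array (assumed)

# Slowness grid (sx, sy) in s/km; each node is one dictionary column.
s_axis = np.linspace(-0.4, 0.4, 41)
SX, SY = np.meshgrid(s_axis, s_axis)
grid = np.column_stack([SX.ravel(), SY.ravel()])
A = np.exp(-2j * np.pi * f * rx @ grid.T)           # stations x grid nodes

# Synthetic data: one plane wave plus noise.
s_true = np.array([0.2, -0.1])
y = np.exp(-2j * np.pi * f * rx @ s_true) + 0.05 * rng.standard_normal(12)

# ISTA for min_x ||A x - y||^2 + lam*||x||_1 (complex soft-thresholding).
lam = 0.5
L = np.linalg.norm(A, 2) ** 2                       # largest singular value squared
x = np.zeros(A.shape[1], dtype=complex)
for _ in range(500):
    z = x - A.conj().T @ (A @ x - y) / L            # gradient step
    mag = np.abs(z)
    x = z * np.maximum(0.0, 1.0 - (lam / (2 * L)) / np.maximum(mag, 1e-12))

best = grid[np.argmax(np.abs(x))]
print("recovered slowness vector:", best, "true:", s_true)
```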
Action Recognition Using Nonnegative Action Component Representation and Sparse Basis Selection.
Wang, Haoran; Yuan, Chunfeng; Hu, Weiming; Ling, Haibin; Yang, Wankou; Sun, Changyin
2014-02-01
In this paper, we propose using high-level action units to represent human actions in videos and, based on such units, a novel sparse model is developed for human action recognition. There are three interconnected components in our approach. First, we propose a new context-aware spatial-temporal descriptor, named locally weighted word context, to improve the discriminability of the traditionally used local spatial-temporal descriptors. Second, from the statistics of the context-aware descriptors, we learn action units using the graph regularized nonnegative matrix factorization, which leads to a part-based representation and encodes the geometrical information. These units effectively bridge the semantic gap in action recognition. Third, we propose a sparse model based on a joint l2,1-norm to preserve the representative items and suppress noise in the action units. Intuitively, when learning the dictionary for action representation, the sparse model captures the fact that actions from the same class share similar units. The proposed approach is evaluated on several publicly available data sets. The experimental results and analysis clearly demonstrate the effectiveness of the proposed approach.
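To make the joint l2,1-norm concrete, the sketch below shows its proximal operator, which shrinks or removes whole rows of a coefficient matrix; the toy matrix and parameter values are illustrative assumptions, not material from the paper.

```python
# Sketch of the building block behind a joint l2,1-norm penalty: row-wise
# soft-thresholding keeps a few shared action units and drops noisy ones.
import numpy as np

def prox_l21(W, tau):
    """argmin_X 0.5*||X - W||_F^2 + tau * sum_i ||X[i, :]||_2."""
    norms = np.linalg.norm(W, axis=1, keepdims=True)
    return W * np.maximum(0.0, 1.0 - tau / np.maximum(norms, 1e-12))

W = np.array([[3.0, 4.0],     # strong row: kept, shrunk toward zero
              [0.1, -0.2],    # weak row: removed entirely
              [-2.0, 1.0]])   # moderate row: kept, shrunk
print(prox_l21(W, tau=1.0))
```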
3D first-arrival traveltime tomography with modified total variation regularization
NASA Astrophysics Data System (ADS)
Jiang, Wenbin; Zhang, Jie
2018-02-01
Three-dimensional (3D) seismic surveys have become a major tool in the exploration and exploitation of hydrocarbons. 3D seismic first-arrival traveltime tomography is a robust method for near-surface velocity estimation. A common approach for stabilizing the ill-posed inverse problem is to apply Tikhonov regularization to the inversion. However, the Tikhonov regularization method recovers smooth local structures while blurring the sharp features in the model solution. We present a 3D first-arrival traveltime tomography method with modified total variation (MTV) regularization to preserve sharp velocity contrasts and improve the accuracy of velocity inversion. To solve the minimization problem of the new traveltime tomography method, we decouple the original optimization problem into the following two subproblems: a standard traveltime tomography problem with the traditional Tikhonov regularization and an L2 total variation problem. We apply the conjugate gradient method and the split-Bregman iterative method to solve these two subproblems, respectively. Our synthetic examples show that the new method produces higher-resolution models than the conventional traveltime tomography with Tikhonov regularization. We apply the technique to field data. The stacking section shows significant improvements with static corrections from the MTV traveltime tomography.
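The decoupling idea can be sketched on a toy linear inverse problem: alternate a Tikhonov-style linear update with an L2-TV denoising step. In the hedged sketch below the forward operator G is a random stand-in for ray paths, and the TV subproblem is delegated to scikit-image rather than a hand-written split-Bregman solver; none of these choices are the authors' implementation.

```python
# Alternating MTV-style iteration on a toy 2D blocky model.
import numpy as np
from skimage.restoration import denoise_tv_chambolle

rng = np.random.default_rng(1)
nx = 16
m_true = np.zeros((nx, nx)); m_true[4:12, 6:14] = 1.0        # sharp "velocity" anomaly
G = rng.standard_normal((200, nx * nx)) / np.sqrt(nx * nx)   # stand-in for ray paths
d = G @ m_true.ravel() + 0.01 * rng.standard_normal(200)

mu = 1.0                       # coupling weight between m and the auxiliary image u
tv_weight = 0.1                # strength of the TV denoising step
u = np.zeros(nx * nx)
GtG, Gtd = G.T @ G, G.T @ d
for _ in range(20):
    # (1) Tikhonov-like subproblem: min_m ||G m - d||^2 + mu*||m - u||^2
    m = np.linalg.solve(GtG + mu * np.eye(nx * nx), Gtd + mu * u)
    # (2) L2-TV subproblem: denoise m while preserving its sharp edges
    u = denoise_tv_chambolle(m.reshape(nx, nx), weight=tv_weight).ravel()

print("model error with MTV-style iteration:", np.linalg.norm(u - m_true.ravel()))
```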
NASA Astrophysics Data System (ADS)
He, Zhi; Liu, Lin
2016-11-01
Empirical mode decomposition (EMD) and its variants have recently been applied for hyperspectral image (HSI) classification due to their ability to extract useful features from the original HSI. However, it remains a challenging task to effectively exploit the spectral-spatial information by the traditional vector or image-based methods. In this paper, a three-dimensional (3D) extension of EMD (3D-EMD) is proposed to naturally treat the HSI as a cube and decompose the HSI into varying oscillations (i.e. 3D intrinsic mode functions (3D-IMFs)). To achieve fast 3D-EMD implementation, 3D Delaunay triangulation (3D-DT) is utilized to determine the distances of extrema, while separable filters are adopted to generate the envelopes. Taking the extracted 3D-IMFs as features of different tasks, robust multitask learning (RMTL) is further proposed for HSI classification. In RMTL, pairs of low-rank and sparse structures are formulated by trace-norm and l1,2-norm to capture task relatedness and specificity, respectively. Moreover, the optimization problems of RMTL can be efficiently solved by the inexact augmented Lagrangian method (IALM). Compared with several state-of-the-art feature extraction and classification methods, the experimental results conducted on three benchmark data sets demonstrate the superiority of the proposed methods.
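The low-rank/group-sparse split can be illustrated with the two proximal maps that typically appear inside such solvers. In the sketch below the l1,2 grouping is taken column-wise (columns = tasks) purely as an assumption; the paper's exact grouping and solver may differ.

```python
# Proximal maps for a trace-norm (low-rank) plus l1,2 (group-sparse) split.
import numpy as np

def prox_trace_norm(W, tau):
    """Singular-value thresholding: prox of tau*||W||_* (promotes low rank)."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def prox_l12(W, tau):
    """Column-wise soft-thresholding: prox of tau*sum_j ||W[:, j]||_2."""
    norms = np.linalg.norm(W, axis=0, keepdims=True)
    return W * np.maximum(0.0, 1.0 - tau / np.maximum(norms, 1e-12))

rng = np.random.default_rng(0)
W = rng.standard_normal((6, 4))
print("rank after SVT:", np.linalg.matrix_rank(prox_trace_norm(W, tau=1.5)))
print(prox_l12(W, tau=2.0))    # some columns are zeroed entirely
```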
Harms, Joseph; Wang, Tonghe; Petrongolo, Michael; Niu, Tianye; Zhu, Lei
2016-01-01
Purpose: Dual-energy CT (DECT) expands applications of CT imaging in its capability to decompose CT images into material images. However, decomposition via direct matrix inversion leads to large noise amplification and limits quantitative use of DECT. Their group has previously developed a noise suppression algorithm via penalized weighted least-square optimization with edge-preservation regularization (PWLS-EPR). In this paper, the authors improve method performance using the same framework of penalized weighted least-square optimization but with similarity-based regularization (PWLS-SBR), which substantially enhances the quality of decomposed images by retaining a more uniform noise power spectrum (NPS). Methods: The design of PWLS-SBR is based on the fact that averaging pixels of similar materials gives a low-noise image. For each pixel, the authors calculate the similarity to other pixels in its neighborhood by comparing CT values. Using an empirical Gaussian model, the authors assign high/low similarity value to one neighboring pixel if its CT value is close/far to the CT value of the pixel of interest. These similarity values are organized in matrix form, such that multiplication of the similarity matrix to the image vector reduces image noise. The similarity matrices are calculated on both high- and low-energy CT images and averaged. In PWLS-SBR, the authors include a regularization term to minimize the L-2 norm of the difference between the images without and with noise suppression via similarity matrix multiplication. By using all pixel information of the initial CT images rather than just those lying on or near edges, PWLS-SBR is superior to the previously developed PWLS-EPR, as supported by comparison studies on phantoms and a head-and-neck patient. Results: On the line-pair slice of the Catphan©600 phantom, PWLS-SBR outperforms PWLS-EPR and retains spatial resolution of 8 lp/cm, comparable to the original CT images, even at 90% reduction in noise standard deviation (STD). Similar performance on spatial resolution is observed on an anthropomorphic head phantom. In addition, results of PWLS-SBR show substantially improved image quality due to preservation of image NPS. On the Catphan©600 phantom, NPS using PWLS-SBR has a correlation of 93% with that via direct matrix inversion, while the correlation drops to −52% for PWLS-EPR. Electron density measurement studies indicate high accuracy of PWLS-SBR. On seven different materials, the measured electron densities calculated from the decomposed material images using PWLS-SBR have a root-mean-square error (RMSE) of 1.20%, while the results of PWLS-EPR have a RMSE of 2.21%. In the study on a head-and-neck patient, PWLS-SBR is shown to reduce noise STD by a factor of 3 on material images with image qualities comparable to CT images, whereas fine structures are lost in the PWLS-EPR result. Additionally, PWLS-SBR better preserves low contrast on the tissue image. Conclusions: The authors propose improvements to the regularization term of an optimization framework which performs iterative image-domain decomposition for DECT with noise suppression. The regularization term avoids calculation of image gradient and is based on pixel similarity. The proposed method not only achieves a high decomposition accuracy, but also improves over the previous algorithm on NPS as well as spatial resolution. PMID:27147376
Maurice-Stam, H; Oort, F J; Last, B F; Brons, P P T; Caron, H N; Grootenhuis, M A
2009-07-01
The aim of the study was to investigate: (1) health-related quality of life (HRQoL) and anxiety in school-aged cancer survivors during the first 4 years of continuous remission after the end of treatment; and (2) correlations of disease-related coping with HRQoL and anxiety. A total of 76 survivors aged 8-15 years completed questionnaires about HRQoL, anxiety and disease-related cognitive coping at one to five measurement occasions. Their HRQoL was compared with norm data, 2 months (n = 49), 1 year (n = 41), 2 years (n = 41), 3 years (n = 42) and 4 years (n = 27) after treatment. Longitudinal mixed-model analyses were used to investigate to what extent disease-related cognitive coping was associated with HRQoL and anxiety over time, independent of the impact of demographic and medical variables. Survivors reported worse Motor Functioning (HRQoL) 2 months after the end of treatment, but from 1 year after treatment onwards they no longer differed from the norm population. Lower levels of anxiety were associated with male gender, being more optimistic about the further course of the disease (predictive control) and less searching for information about the disease (interpretative control). Stronger reliance on the physician (vicarious control) was associated with better mental HRQoL. As a group, survivors regained good HRQoL from 1 year after treatment. Monitoring and screening survivors are necessary to be able to trace the survivors at risk of worse HRQoL.
Devine, J; Otto, C; Rose, M; Barthel, D; Fischer, F; Mühlan, H; Mülhan, H; Nolte, S; Schmidt, S; Ottova-Jordan, V; Ravens-Sieberer, U
2015-04-01
Assessing health-related quality of life (HRQoL) via Computerized Adaptive Tests (CAT) provides greater measurement precision coupled with a lower test burden compared to conventional tests. Currently, there are no European pediatric HRQoL CATs available. This manuscript aims at describing the development of a HRQoL CAT for children and adolescents: the Kids-CAT, which was developed based on the established KIDSCREEN-27 HRQoL domain structure. The Kids-CAT was developed combining classical test theory and item response theory methods and using large archival data of European KIDSCREEN norm studies (n = 10,577-19,580). Methods were applied in line with the US PROMIS project. Item bank development included the investigation of unidimensionality, local independence, exploration of Differential Item Functioning (DIF), evaluation of Item Response Curves (IRCs), estimation and norming of item parameters as well as first CAT simulations. The Kids-CAT was successfully built covering five item banks (with 26-46 items each) to measure physical well-being, psychological well-being, parent relations, social support and peers, and school well-being. The Kids-CAT item banks demonstrated excellent psychometric properties: high content validity, unidimensionality, local independence, low DIF, and model-conform IRCs. In CAT simulations, seven items were needed to achieve a measurement precision between .8 and .9 (reliability). It has a child-friendly design, is easily accessible online, and gives immediate feedback reports of scores. The Kids-CAT has the potential to advance pediatric HRQoL measurement by making it less burdensome and enhancing the patient-doctor communication.
Changing basal conditions during the speed-up of Jakobshavn Isbræ, Greenland
NASA Astrophysics Data System (ADS)
Habermann, M.; Truffer, M.; Maxwell, D.
2013-11-01
Ice-sheet outlet glaciers can undergo dynamic changes such as the rapid speed-up of Jakobshavn Isbræ following the disintegration of its floating ice tongue. These changes are associated with stress changes on the boundary of the ice mass. We invert for basal conditions from surface velocity data throughout a well-observed period of rapid change and evaluate parameterizations currently used in ice-sheet models. A Tikhonov inverse method with a shallow-shelf approximation forward model is used for diagnostic inversions for the years 1985, 2000, 2005, 2006 and 2008. Our ice-softness, model norm, and regularization parameter choices are justified using the data-model misfit metric and the L curve method. The sensitivity of the inversion results to these parameter choices is explored. We find a lowering of effective basal yield stress in the first 7 km upstream from the 2008 grounding line and no significant changes higher upstream. The temporal evolution in the fast flow area is in broad agreement with a Mohr-Coulomb parameterization of basal shear stress, but with a till friction angle much lower than has been measured for till samples. The lowering of effective basal yield stress is significant within the uncertainties of the inversion, but it cannot be ruled out that there are other significant contributors to the acceleration of the glacier.
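The L-curve diagnostic used to justify the regularization parameter can be illustrated on a toy Tikhonov problem: sweep the parameter, record residual norm against solution norm, and look for the corner. The operator and data in the sketch below are synthetic stand-ins, not glacier data.

```python
# L-curve sweep for a toy Tikhonov inversion.
import numpy as np

rng = np.random.default_rng(2)
G = rng.standard_normal((50, 30))
m_true = np.zeros(30); m_true[10:20] = 1.0
d = G @ m_true + 0.05 * rng.standard_normal(50)

lams = np.logspace(-4, 2, 25)
for lam in lams:
    m = np.linalg.solve(G.T @ G + lam * np.eye(30), G.T @ d)   # Tikhonov solution
    print(f"lambda={lam:8.2e}  ||Gm-d||={np.linalg.norm(G @ m - d):6.3f}"
          f"  ||m||={np.linalg.norm(m):6.3f}")
# In practice one plots log(residual norm) vs. log(model norm) and picks the
# corner of the resulting L-shaped curve as the preferred regularization weight.
```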
Zhang, Libo; Guo, Wenqian; Peng, Jinhui; Li, Jing; Lin, Guo; Yu, Xia
2016-07-01
A major source of germanium recovery, and also the source material for this research, is the by-product of the lead and zinc metallurgical process. The primary purpose of the research is to investigate the effects of ultrasonic-assisted and regular methods on the leaching yield of germanium from roasted slag containing germanium. In the study, an HCl-CaCl2 mixed solution is adopted as the reacting system and Ca(ClO)2 is used as the oxidant. Through six single-factor (leaching time, temperature, amount of Ca(ClO)2, acid concentration, concentration of CaCl2 solution, ultrasonic power) experiments and a comparison of the two methods, it is found that the optimum recovery of germanium for the ultrasonic-assisted method is obtained at a temperature of 80 °C for a leaching duration of 40 min. The optimum concentrations of hydrochloric acid, CaCl2 and oxidizing agent are identified to be 3.5 mol/L, 150 g/L and 58.33 g/L, respectively. In addition, 700 W is the best ultrasonic power, and an excessively high power is detrimental to the leaching process. Under the optimum conditions, the recovery of germanium could reach up to 92.7%. The optimum leaching conditions for the regular leaching method are the same as for the ultrasonic-assisted method, except that the regular method requires 100 min and its Ge leaching rate of 88.35% is about 4.35% lower. Overall, the experiments show that the leaching time can be reduced by as much as 60% and the leaching rate of Ge can be increased by 3-5% with the application of the ultrasonic tool, which is mainly attributable to the mechanical action of ultrasound. Copyright © 2015 Elsevier B.V. All rights reserved.
Booth, Amy R; Norman, Paul; Harris, Peter R; Goyder, Elizabeth
2014-02-01
The study sought to (1) explain intentions to get tested for chlamydia regularly in a group of young people living in deprived areas using the theory of planned behaviour (TPB); and (2) test whether self-identity explained additional variance in testing intentions. A cross-sectional design was used for this study. Participants (N = 278, 53% male; M = 17.05 years) living in deprived areas of a UK city were recruited from a vocational education setting. Participants completed a self-administered questionnaire, including measures of attitude, injunctive subjective norm, descriptive norm, perceived behavioural control, self-identity, intention and past behaviour in relation to getting tested for chlamydia regularly. The TPB explained 43% of the variance in chlamydia testing intentions with all variables emerging as significant predictors. However, self-identity explained additional variance in intentions (ΔR(2) = .22) and emerged as the strongest predictor, even when controlling for past behaviour. The study identified the key determinants of intention to get tested for chlamydia regularly in a sample of young people living in areas of increased deprivation: a hard-to-reach, high-risk population. The findings indicate the key variables to target in interventions to promote motivation to get tested for chlamydia regularly in equivalent samples, amongst which self-identity is critical. What is already known on this subject? Young people living in deprived areas have been identified as an at-risk group for chlamydia. Qualitative research has identified several themes in relation to factors affecting the uptake of chlamydia testing, which fit well with the constructs of the Theory of Planned Behaviour (TPB). Identity concerns have also been identified as playing an important part in young people's chlamydia testing decisions. What does this study add? The TPB explained 43% of the variance in chlamydia testing intentions and all variables were significant predictors. Self-identity explained an additional 22% of the variance in intentions and emerged as the strongest predictor. The study indicates key variables to target in interventions to promote regular chlamydia testing in deprived young people. © 2013 The British Psychological Society.
Godin, Gaston; Anderson, Donna; Lambert, Léo-Daniel; Desharnais, Raymond
2005-01-01
The purpose of this study was to identify the factors explaining regular physical activity among Canadian adolescents. A cohort study conducted over a period of 2 years. A French-language high school located near Québec City. A cohort of 740 students (352 girls; 388 boys) aged 13.3 +/- 1.0 years at baseline. Psychosocial, life context, profile, and sociodemographic variables were assessed at baseline and 1 and 2 years after baseline. Exercising almost every day during leisure time at each measurement time was the dependent variable. The Generalized Estimating Equations (GEE) analysis indicated that exercising almost every day was significantly associated with a high intention to exercise (odds ratio [OR]: 8.33, confidence interval [CI] 95%: 5.26, 13.18), being satisfied with the activity practiced (OR: 2.07, CI 95%: 1.27, 3.38), perceived descriptive norm (OR: 1.82, CI 95%: 1.41, 2.35), being a boy (OR: 1.83, CI 95%: 1.37, 2.46), practicing "competitive" activities (OR: 1.80, CI 95%: 1.37, 2.36), eating a healthy breakfast (OR: 1.68, CI 95%: 1.09, 2.60), and normative beliefs (OR: 1.48, CI 95%: 1.14, 1.90). Specific GEE analysis for gender indicated slight but significant differences. This study provides evidence for the need to design interventions that are gender specific and that focus on increasing intention to exercise regularly.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shi, J; Udayakumar, T; Wang, Z
Purpose: CT is not able to differentiate tumors from surrounding soft tissue. This study aims to develop a bioluminescence tomography (BLT) system that is integrated onto our previously developed CT guided small animal arc radiation treatment system (iSMAART) to guide radiation, monitor tumor growth and evaluate therapeutic response. Methods: The BLT system employs a CCD camera coupled with a high speed lens, and is aligned orthogonally to the x-ray beam central axis. The two imaging modalities, CT and BLT, are physically registered through geometrical calibration. The CT anatomy provides an accurate contour of the animal surface which is used to construct a 3D mesh for BLT reconstruction. Bioluminescence projections are captured from multiple angles, once every 45 degrees of rotation. The diffusion equation based on the analytical Kirchhoff approximation is adopted to model photon propagation in tissues. A discrete cosine transform based reweighted L1-norm regularization (DCT-re-L1) algorithm is used for BLT reconstruction. Experiments are conducted on a mouse orthotopic prostate tumor model (n=12) to evaluate the BLT performance, in terms of its robustness and accuracy in locating and quantifying the bioluminescent tumor cells. Iodinated contrast agent was injected intravenously to delineate the tumor in CT. The tumor location and volume obtained from CT also serve as a benchmark against BLT. Results: With our cutting-edge reconstruction algorithm, BLT is able to accurately reconstruct the orthotopic prostate tumors. The tumor center of mass in BLT is within 0.5 mm radial distance of that in CT. The tumor volume in BLT is significantly correlated with that in CT (R2 = 0.81). Conclusion: The BLT can differentiate, localize and quantify tumors. Together with CT, BLT will provide precision radiation guidance and reliable treatment assessment in preclinical cancer research.
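The reweighted-L1 idea behind algorithms of the DCT-re-L1 type can be sketched as an outer loop that re-solves a weighted L1 problem with weights proportional to 1/(|x|+eps). The sketch below omits the DCT transform step and uses a toy system matrix; all names and parameters are assumptions, not the authors' code.

```python
# Reweighted-L1 reconstruction sketch (inner solver: ISTA).
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((40, 120))
x_true = np.zeros(120); x_true[[7, 55, 90]] = [2.0, -1.5, 1.0]   # sparse "sources"
y = A @ x_true + 0.01 * rng.standard_normal(40)

lam, eps = 0.1, 0.1
L = np.linalg.norm(A, 2) ** 2
x, w = np.zeros(120), np.ones(120)
for _ in range(5):                       # outer reweighting loop
    for _ in range(300):                 # inner ISTA for the weighted L1 problem
        z = x - A.T @ (A @ x - y) / L
        x = np.sign(z) * np.maximum(np.abs(z) - lam * w / (2 * L), 0.0)
    w = 1.0 / (np.abs(x) + eps)          # emphasize small coefficients next round

print("recovered support:", np.nonzero(np.abs(x) > 0.1)[0])
```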
DIETARY BAKED EGG ACCELERATES RESOLUTION OF EGG ALLERGY IN CHILDREN
Leonard, Stephanie A.; Sampson, Hugh A.; Sicherer, Scott H.; Noone, Sally; Moshier, Erin L.; Godbold, James; Nowak-Wȩgrzyn, Anna
2012-01-01
Background Baked egg is tolerated by a majority of egg-allergic children. Objective To characterize immunologic changes associated with ingestion of baked egg and evaluate the role that baked egg diets play in the development of tolerance to regular egg. Methods Egg-allergic subjects who tolerated baked egg challenge incorporated baked egg into their diet. Immunologic parameters were measured at follow-up visits. A comparison group strictly avoiding egg was used to evaluate the natural history of the development of tolerance. Results Of the 79 subjects in the intent-to-treat group followed for a median of 37.8 months, 89% now tolerate baked egg and 53% now tolerate regular egg. Of 23 initial baked egg-reactive subjects, 14 (61%) subsequently tolerated baked egg and 6 (26%) now tolerate regular egg. Within the initially baked egg-reactive group, subjects with persistent reactivity to baked egg had higher median baseline egg white (EW)-specific IgE levels (13.5 kUA/L) than those who subsequently tolerated baked egg (4.4 kUA/L; P=0.04) and regular egg (3.1 kUA/L, P=0.05). In subjects ingesting baked egg, EW-induced SPT wheal diameter and EW-, ovalbumin-, and ovomucoid-specific IgE levels decreased significantly, while ovalbumin- and ovomucoid-specific IgG4 levels increased significantly. Subjects in the per-protocol group were 14.6 times more likely to develop regular egg tolerance than subjects in the comparison group (P < 0.0001), and they developed tolerance earlier (median 50.0 versus 78.7 months; P<0.0001). Conclusion Initiation of a baked egg diet accelerates the development of regular egg tolerance compared to strict avoidance. Higher serum EW-specific IgE level is associated with persistent baked and regular egg reactivity, while initial baked egg reactivity is not. PMID:22846751
[Study on changes of contents of 1-deoxynojirimycin in Bombyx mori and their byproducts].
Ouyang, Zhen; Meng, Xia; Chang, Yu; Yang, Yu
2009-02-01
To study the changes in the content of 1-deoxynojirimycin in Bombyx mori and their byproducts in different growth periods. The samples were analyzed by high performance liquid chromatography equipped with a fluorescence detector and separated on a HiQSiL C18 column at 25 degrees C. The mobile phase consisted of acetonitrile-0.1% aqueous acetic acid (55:45) with a flow rate of 1.0 mL/min. The fluorescence detector was operated at lambdaEX = 254 nm and lambdaEM = 322 nm. The contents of 1-deoxynojirimycin in Bombyx mori and their byproducts in different growth periods were remarkably different, and changed regularly. This study preliminarily reveals the metabolic pattern of 1-deoxynojirimycin in Bombyx mori.
CrossTalk: The Journal of Defense Software Engineering. Volume 20, Number 11, November 2007
2007-11-01
[Abstract garbled during extraction; recoverable fragments mention software engineering methodologies, work breakdown structures, and risk management.]
Moreira, Viviane S; Soares, Virgínia L F; Silva, Raner J S; Sousa, Aurizangela O; Otoni, Wagner C; Costa, Marcio G C
2018-05-01
Bixa orellana L., popularly known as annatto, produces several secondary metabolites of pharmaceutical and industrial interest, including bixin, whose molecular basis of biosynthesis remains to be determined. Gene expression analysis by quantitative real-time PCR (qPCR) is an important tool to advance such knowledge. However, correct interpretation of qPCR data requires the use of suitable reference genes in order to reduce experimental variation. In the present study, we have selected four different candidate reference genes in B. orellana, coding for 40S ribosomal protein S9 (RPS9), histone H4 (H4), 60S ribosomal protein L38 (RPL38) and 18S ribosomal RNA (18SrRNA). Their expression stabilities in different tissues (e.g. flower buds, flowers, leaves and seeds at different developmental stages) were analyzed using five statistical tools (NormFinder, geNorm, BestKeeper, the ΔCt method and RefFinder). The results indicated that RPL38 is the most stable gene across different tissues and stages of seed development and 18SrRNA is the most unstable among the analyzed genes. In order to validate the candidate reference genes, we analyzed the relative expression of a target gene coding for carotenoid cleavage dioxygenase 1 (CCD1) using the stable RPL38 and the least stable gene, 18SrRNA, for normalization of the qPCR data. The results demonstrated significant differences in the interpretation of the CCD1 gene expression data depending on the reference gene used, reinforcing the importance of the correct selection of reference genes for normalization.
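The dependence of relative expression on the chosen reference gene can be seen in the standard 2^-ddCt calculation. The worked example below uses invented Ct values for a hypothetical target measurement; it is only an arithmetic illustration of why reference-gene stability matters.

```python
# 2^-ddCt fold-change calculation with made-up Ct values.
ct_target_sample, ct_target_control = 24.0, 26.5   # target gene Ct in two conditions
ct_ref_sample, ct_ref_control = 18.0, 18.2         # Ct of a stable reference gene

d_ct_sample = ct_target_sample - ct_ref_sample
d_ct_control = ct_target_control - ct_ref_control
dd_ct = d_ct_sample - d_ct_control
fold_change = 2 ** (-dd_ct)
print(f"ddCt = {dd_ct:.2f}, fold change = {fold_change:.2f}")
# An unstable reference gene shifts ct_ref_* between conditions and therefore
# changes dd_ct, which is why reference-gene selection alters the conclusions.
```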
Power, Agency and Middle Leadership in English Primary Schools
ERIC Educational Resources Information Center
Hammersley-Fletcher, Linda; Strain, Michael
2011-01-01
English primary schools are considered quasi-collegial institutions within which staff communicate regularly and openly. The activities of staff, however, are bound by institutional norms and conditions and by societal expectations. Wider agendas of governmental control over the curriculum and external controls to ensure accountability and…
NASA Astrophysics Data System (ADS)
Zhu, Yansong; Jha, Abhinav K.; Dreyer, Jakob K.; Le, Hanh N. D.; Kang, Jin U.; Roland, Per E.; Wong, Dean F.; Rahmim, Arman
2017-02-01
Fluorescence molecular tomography (FMT) is a promising tool for real-time in vivo quantification of neurotransmission (NT), as we pursue in our BRAIN initiative effort. However, the acquired image data are noisy and the reconstruction problem is ill-posed. Further, while the spatial sparsity of the NT effects could be exploited, traditional compressive-sensing methods cannot be directly applied as the system matrix in FMT is highly coherent. To overcome these issues, we propose and assess a three-step reconstruction method. First, truncated singular value decomposition is applied on the data to reduce matrix coherence. The resultant image data are input to a homotopy-based reconstruction strategy that exploits sparsity via l1 regularization. The reconstructed image is then input to a maximum-likelihood expectation maximization (MLEM) algorithm that retains the sparseness of the input estimate and improves upon the quantitation by accurate Poisson noise modeling. The proposed reconstruction method was evaluated in a three-dimensional simulated setup with fluorescent sources in a cuboidal scattering medium with optical properties simulating human brain cortex (reduced scattering coefficient: 9.2 cm-1, absorption coefficient: 0.1 cm-1) and tomographic measurements made using pixelated detectors. In different experiments, fluorescent sources of varying size and intensity were simulated. The proposed reconstruction method provided accurate estimates of the fluorescent source intensity, with a 20% lower root mean square error on average compared to the pure-homotopy method for all considered source intensities and sizes. Further, compared with a conventional l2-regularized algorithm, the proposed method overall reconstructed a substantially more accurate fluorescence distribution. The proposed method shows considerable promise and will be tested using more realistic simulations and experimental setups.
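Two of the three stages, a truncated-SVD stabilization and the MLEM refinement under a Poisson model, can be sketched on a toy nonnegative system; the homotopy/l1 middle stage is skipped here, and the matrix, source values and counts are illustrative assumptions only.

```python
# Truncated SVD initial estimate followed by MLEM iterations (toy problem).
import numpy as np

rng = np.random.default_rng(4)
A = rng.uniform(0.0, 1.0, size=(60, 100))          # nonnegative toy system matrix
x_true = np.zeros(100); x_true[[20, 21, 70]] = [50.0, 30.0, 40.0]
y = rng.poisson(A @ x_true).astype(float)          # Poisson-distributed measurements

# Step 1: truncated-SVD pseudo-inverse as a stabilized initial estimate.
U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 20                                             # keep the k largest singular values
x0 = Vt[:k].T @ ((U[:, :k].T @ y) / s[:k])
x = np.clip(x0, 1e-6, None)                        # MLEM needs a positive start

# Step 2: MLEM updates (multiplicative, preserve nonnegativity of the estimate).
sens = A.sum(axis=0)                               # A^T 1
for _ in range(200):
    ratio = y / np.maximum(A @ x, 1e-12)
    x *= (A.T @ ratio) / sens

print("largest reconstructed voxels:", np.argsort(x)[-3:])
```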
Robust method to detect and locate local earthquakes by means of amplitude measurements.
NASA Astrophysics Data System (ADS)
del Puy Papí Isaba, María; Brückl, Ewald
2016-04-01
In this study we present a robust new method to detect and locate medium and low magnitude local earthquakes. This method is based on an empirical model of the ground motion obtained from amplitude data of earthquakes in the area of interest, which were located using traditional methods. The first step of our method is the computation of maximum resultant ground velocities in sliding time windows covering the whole period of interest. In the second step, these maximum resultant ground velocities are back-projected to every point of a grid covering the whole area of interest while applying the empirical amplitude - distance relations. We refer to these back-projected ground velocities as pseudo-magnitudes. The number of operating seismic stations in the local network equals the number of pseudo-magnitudes at each grid-point. Our method introduces the new idea of selecting the minimum pseudo-magnitude at each grid-point for further analysis instead of searching for a minimum of the L2 or L1 norm. In case no detectable earthquake occurred, the spatial distribution of the minimum pseudo-magnitudes constrains the magnitude of weak earthquakes hidden in the ambient noise. In the case of a detectable local earthquake, the spatial distribution of the minimum pseudo-magnitudes shows a significant maximum at the grid-point nearest to the actual epicenter. The application of our method is restricted to the area confined by the convex hull of the seismic station network. Additionally, one must ensure that there are no dead traces involved in the processing. Compared to methods based on L2 and even L1 norms, our new method is almost wholly insensitive to outliers (data from locally disturbed seismic stations). A further advantage is the fast determination of the epicenter and magnitude of a seismic event located within a seismic network. This is possible due to the method of obtaining and storing a back-projected matrix, independent of the registered amplitude, for each seismic station. As a direct consequence, we are able to save computing time for the calculation of the final back-projected maximum resultant amplitude at every grid-point. The capability of the method was demonstrated firstly using synthetic data. In the next step, this method was applied to data of 43 local earthquakes of low and medium magnitude (1.7 < magnitude scale < 4.3). These earthquakes were recorded and detected by the seismic network ALPAACT (seismological and geodetic monitoring of Alpine PAnnonian ACtive Tectonics) in the period 2010/06/11 to 2013/09/20. Data provided by the ALPAACT network is used in order to understand seismic activity in the Mürz Valley - Semmering - Vienna Basin transfer fault system in Austria and what makes it such a relatively high earthquake hazard and risk area. The method will substantially support our efforts to involve scholars from polytechnic schools in seismological work within the Sparkling Science project Schools & Quakes.
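The back-projection logic described above can be sketched as follows: each station's peak ground velocity is converted into a pseudo-magnitude at every grid node through an empirical amplitude-distance relation, the minimum over stations is kept at each node, and its maximum marks the epicentre. The relation's coefficients, geometry and noise model below are placeholders, not the calibrated ALPAACT values.

```python
# Pseudo-magnitude grid search with the minimum-over-stations rule.
import numpy as np

rng = np.random.default_rng(5)
stations = rng.uniform(0, 100, size=(8, 2))                  # km, 8-station network
epicentre, mag = np.array([42.0, 57.0]), 2.5

def amp_from_mag(m, r):
    """Assumed empirical relation: log10(A) = m - 1.1*log10(r + 1) - 0.005*r."""
    return 10 ** (m - 1.1 * np.log10(r + 1.0) - 0.005 * r)

def pseudo_mag(a, r):
    """Inverse of the relation above for a back-projected distance r."""
    return np.log10(a) + 1.1 * np.log10(r + 1.0) + 0.005 * r

r_true = np.linalg.norm(stations - epicentre, axis=1)
amps = amp_from_mag(mag, r_true) * rng.lognormal(0.0, 0.1, 8)  # observed peak velocities

xg, yg = np.meshgrid(np.linspace(0, 100, 201), np.linspace(0, 100, 201))
grid = np.stack([xg.ravel(), yg.ravel()], axis=1)
dists = np.linalg.norm(grid[:, None, :] - stations[None, :, :], axis=2)
pm = pseudo_mag(amps[None, :], dists)       # pseudo-magnitude per node and station
pm_min = pm.min(axis=1)                     # minimum over stations (the key choice)
best = grid[np.argmax(pm_min)]
print("estimated epicentre:", best, "pseudo-magnitude:", pm_min.max())
```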
Zinszer, Benjamin D.; Malt, Barbara C.; Ameel, Eef; Li, Ping
2014-01-01
Second language learners face a dual challenge in vocabulary learning: First, they must learn new names for the 100s of common objects that they encounter every day. Second, after some time, they discover that these names do not generalize according to the same rules used in their first language. Lexical categories frequently differ between languages (Malt et al., 1999), and successful language learning requires that bilinguals learn not just new words but new patterns for labeling objects. In the present study, Chinese learners of English with varying language histories and resident in two different language settings (Beijing, China and State College, PA, USA) named 67 photographs of common serving dishes (e.g., cups, plates, and bowls) in both Chinese and English. Participants’ response patterns were quantified in terms of similarity to the responses of functionally monolingual native speakers of Chinese and English and showed the cross-language convergence previously observed in simultaneous bilinguals (Ameel et al., 2005). For English, bilinguals’ names for each individual stimulus were also compared to the dominant name generated by the native speakers for the object. Using two statistical models, we disentangle the effects of several highly interactive variables from bilinguals’ language histories and the naming norms of the native speaker community to predict inter-personal and inter-item variation in L2 (English) native-likeness. We find only a modest age of earliest exposure effect on L2 category native-likeness, but importantly, we find that classroom instruction in L2 negatively impacts L2 category native-likeness, even after significant immersion experience. We also identify a significant role of both L1 and L2 norms in bilinguals’ L2 picture naming responses. PMID:25386149
NASA Astrophysics Data System (ADS)
Corbard, T.; Berthomieu, G.; Provost, J.; Blanc-Feraud, L.
Inferring the solar rotation from observed frequency splittings represents an ill-posed problem in the sense of Hadamard, and the traditional approach used to override this difficulty consists in regularizing the problem by adding some a priori information on the global smoothness of the solution, defined as the norm of its first or second derivative. Nevertheless, inversions of rotational splittings (e.g. Corbard et al., 1998; Schou et al., 1998) have shown that the surface layers and the so-called solar tachocline (Spiegel & Zahn 1992) at the base of the convection zone are regions in which high radial gradients of the rotation rate occur. Therefore, the global smoothness a priori, which tends to smooth out every high gradient in the solution, may not be appropriate for the study of a zone like the tachocline, which is of particular interest for the study of solar dynamics (e.g. Elliot 1997). In order to infer the fine structure of such regions with high gradients by inverting helioseismic data, we have to find a way to preserve these zones in the inversion process. Setting a more adapted constraint on the solution leads to non-linear regularization methods that are in current use for edge-preserving regularization in computed imaging (e.g. Blanc-Feraud et al. 1995). In this work, we investigate their use in the helioseismic context of rotational inversions.
NASA Astrophysics Data System (ADS)
Mukhopadhyay, Anirban; Ganguly, Anindita; Chatterjee, Saumya Deep
2018-04-01
In this paper the authors have dealt with seven classes of non-linear Volterra and Fredholm equations. The authors have formulated an algorithm for solving the aforementioned equation types via the Hybrid Function (HF) and Triangular Function (TF) piecewise-linear orthogonal approaches. In this approach the authors have reduced the integral equation or integro-differential equation to an equivalent system of simultaneous non-linear equations and have employed either Newton's method or Broyden's method to solve the simultaneous non-linear equations. The authors have calculated the L2-norm error and the max-norm error for both the HF and TF methods for each kind of equation. Through the illustrated examples, the authors have shown that the HF-based algorithm produces stable results, whereas the TF-based computational method yields either stable, anomalous or unstable results.
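The "reduce to a nonlinear algebraic system, then apply Newton" step can be sketched on a toy nonlinear Fredholm equation u(x) = 3x/4 + x * int_0^1 t u(t)^2 dt, whose exact solution is u(x) = x. The sketch below uses plain trapezoidal collocation instead of the HF/TF bases of the paper, so it is only an illustration of the algebraic-system step, not of the paper's method.

```python
# Collocation + Newton iteration for a toy nonlinear Fredholm equation,
# with the L2-norm and max-norm errors reported against the exact solution.
import numpy as np

n = 101
x = np.linspace(0.0, 1.0, n)
w = np.full(n, x[1] - x[0]); w[0] *= 0.5; w[-1] *= 0.5    # trapezoid weights
K = np.outer(x, x)                                        # kernel k(x, t) = x*t
f = 0.75 * x

def F(u):
    return u - f - K @ (w * u ** 2)                       # residual of the collocated system

u = np.full(n, 0.5)                                       # initial guess
for _ in range(20):                                       # Newton iterations
    J = np.eye(n) - K * (2.0 * w * u)                     # Jacobian of F
    u = u - np.linalg.solve(J, F(u))

err = u - x                                               # exact solution is u(x) = x
print("L2-norm error :", np.sqrt(np.sum(w * err ** 2)))
print("max-norm error:", np.max(np.abs(err)))
```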
Insar Unwrapping Error Correction Based on Quasi-Accurate Detection of Gross Errors (quad)
NASA Astrophysics Data System (ADS)
Kang, Y.; Zhao, C. Y.; Zhang, Q.; Yang, C. S.
2018-04-01
Unwrapping errors are common in InSAR processing and can seriously degrade the accuracy of the monitoring results. In this paper, a method for automatic correction of unwrapping errors is established based on a gross error correction method, quasi-accurate detection (QUAD). The method identifies and corrects the unwrapping errors by establishing a functional model between the true errors and the interferograms. The basic principle and processing steps are presented. The method is then compared with the L1-norm method using simulated data. Results show that both methods can effectively suppress unwrapping errors when the ratio of unwrapping errors is low, and the two methods can complement each other when the ratio of unwrapping errors is relatively high. Finally, real SAR data are used to test the phase unwrapping error correction. Results show that the new method can successfully correct phase unwrapping errors in practical applications.
Roberto, Anthony J; Krieger, Janice L; Katz, Mira L; Goei, Ryan; Jain, Parul
2011-06-01
This study examines the ability of the theory of reasoned action (TRA) and the theory of planned behavior (TPB) to predict whether or not pediatricians encourage parents to get their adolescent daughters vaccinated against the human papillomavirus (HPV). Four-hundred and six pediatricians completed a mail survey measuring attitudes, subjective norms, perceived behavioral control, intentions, and behavior. Results indicate that pediatricians have positive attitudes, subjective norms, and perceived behavioral control toward encouraging parents to get their daughters vaccinated, that they intend to regularly encourage parents to get their daughters vaccinated against HPV in the next 30 days, and that they had regularly encouraged parents to get their daughters vaccinated against HPV in the past 30 days (behavior). Though the data were consistent with both the TRA and TPB models, results indicate that perceived behavioral control adds only slightly to the overall predictive power of the TRA, suggesting that attitudes and norms may be more important targets for interventions dealing with this topic and audience. No gender differences were observed for any of the individual variables or the overall fit of either model. These findings have important theoretical and practical implications for the development of health communication messages targeting health care providers in general, and for those designed to influence pediatricians' communication with parents regarding the HPV vaccine in particular.
NASA Astrophysics Data System (ADS)
Kuramochi, Kazuki; Akiyama, Kazunori; Ikeda, Shiro; Tazaki, Fumie; Fish, Vincent L.; Pu, Hung-Yi; Asada, Keiichi; Honma, Mareki
2018-05-01
We propose a new imaging technique for interferometry using sparse modeling, utilizing two regularization terms: the ℓ1-norm and a new function named total squared variation (TSV) of the brightness distribution. First, we demonstrate that our technique may achieve a superresolution of ∼30% compared with the traditional CLEAN beam size using synthetic observations of two point sources. Second, we present simulated observations of three physically motivated static models of Sgr A* with the Event Horizon Telescope (EHT) to show the performance of proposed techniques in greater detail. Remarkably, in both the image and gradient domains, the optimal beam size minimizing root-mean-squared errors is ≲10% of the traditional CLEAN beam size for ℓ1+TSV regularization, and non-convolved reconstructed images have smaller errors than beam-convolved reconstructed images. This indicates that TSV is well matched to the expected physical properties of the astronomical images and the traditional post-processing technique of Gaussian convolution in interferometric imaging may not be required. We also propose a feature-extraction method to detect circular features from the image of a black hole shadow and use it to evaluate the performance of the image reconstruction. With this method and reconstructed images, the EHT can constrain the radius of the black hole shadow with an accuracy of ∼10%–20% in present simulations for Sgr A*, suggesting that the EHT would be able to provide useful independent measurements of the mass of the supermassive black holes in Sgr A* and also another primary target, M87.
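Because TSV (a sum of squared finite differences) is smooth, it can sit in the gradient alongside the data term while the ℓ1 penalty is handled by soft-thresholding. The sketch below applies this split to a toy 8x8 image with a random stand-in for the visibility sampling operator; image size, operator and weights are assumptions, not the authors' setup.

```python
# Proximal-gradient sketch for an l1 + TSV regularized reconstruction.
import numpy as np

rng = np.random.default_rng(6)
n = 8
img = np.zeros((n, n)); img[2:6, 3:7] = 1.0                   # compact bright patch
x_true = img.ravel()
A = rng.standard_normal((40, n * n)) / np.sqrt(n * n)         # stand-in for the sampling
y = A @ x_true + 0.01 * rng.standard_normal(40)

D1 = (np.eye(n, k=1) - np.eye(n))[:-1]                        # 1-D forward differences
Dx = np.kron(np.eye(n), D1)                                   # differences within rows
Dy = np.kron(D1, np.eye(n))                                   # differences across rows
Lap = Dx.T @ Dx + Dy.T @ Dy                                   # gradient operator of TSV

lam, mu = 0.02, 0.5
L = 2.0 * np.linalg.eigvalsh(A.T @ A + mu * Lap).max()        # Lipschitz constant
x = np.zeros(n * n)
for _ in range(2000):
    grad = 2.0 * (A.T @ (A @ x - y) + mu * Lap @ x)           # smooth part: data + TSV
    z = x - grad / L
    x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)     # l1 soft-thresholding

print("reconstruction error:", np.linalg.norm(x - x_true))
```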
Ordering states with various coherence measures
NASA Astrophysics Data System (ADS)
Yang, Long-Mei; Chen, Bin; Fei, Shao-Ming; Wang, Zhi-Xi
2018-04-01
Quantum coherence is one of the most significant concepts in quantum physics. Ordering states with various coherence measures is an intriguing task in the quantification theory of coherence. In this paper, we study this problem by use of four important coherence measures: the l1 norm of coherence, the relative entropy of coherence, the geometric measure of coherence and the modified trace distance measure of coherence. We show that each pair of these measures gives a different ordering of qudit states when d≥3. However, for single-qubit states, the l1 norm of coherence and the geometric coherence provide the same ordering. We also show that the relative entropy of coherence and the geometric coherence give a different ordering for single-qubit states. Then we partially answer the open question proposed in Liu et al. (Quantum Inf Process 15:4189, 2016) of whether all the coherence measures give a different ordering of states.
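Two of the measures discussed are easy to compute directly from a density matrix, as in the sketch below for an arbitrary illustrative single-qubit state: the l1 norm of coherence is the sum of the moduli of the off-diagonal entries, and the relative entropy of coherence is S(rho_diag) - S(rho).

```python
# l1-norm and relative-entropy coherence of a single-qubit state.
import numpy as np

def von_neumann_entropy(rho):
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]
    return float(-np.sum(evals * np.log2(evals)))

def l1_coherence(rho):
    return float(np.sum(np.abs(rho)) - np.sum(np.abs(np.diag(rho))))

def relative_entropy_coherence(rho):
    rho_diag = np.diag(np.diag(rho))
    return von_neumann_entropy(rho_diag) - von_neumann_entropy(rho)

rho = np.array([[0.7, 0.3], [0.3, 0.3]], dtype=complex)   # a partially coherent qubit
print("C_l1      =", l1_coherence(rho))
print("C_rel.ent =", relative_entropy_coherence(rho))
```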
Li, Yong; Yuan, Gonglin; Wei, Zengxin
2015-01-01
In this paper, a trust-region algorithm is proposed for large-scale nonlinear equations, where the limited-memory BFGS (L-M-BFGS) update matrix is used in the trust-region subproblem to improve the effectiveness of the algorithm for large-scale problems. The global convergence of the presented method is established under suitable conditions. The numerical results of the test problems show that the method is competitive with the norm method.
Mixed-norm estimates for the M/EEG inverse problem using accelerated gradient methods
Gramfort, Alexandre; Kowalski, Matthieu; Hämäläinen, Matti
2012-01-01
Magneto- and electroencephalography (M/EEG) measure the electromagnetic fields produced by the neural electrical currents. Given a conductor model for the head, and the distribution of source currents in the brain, Maxwell’s equations allow one to compute the ensuing M/EEG signals. Given the actual M/EEG measurements and the solution of this forward problem, one can localize, in space and in time, the brain regions that have produced the recorded data. However, due to the physics of the problem, the limited number of sensors compared to the number of possible source locations, and measurement noise, this inverse problem is ill-posed. Consequently, additional constraints are needed. Classical inverse solvers, often called Minimum Norm Estimates (MNE), promote source estimates with a small ℓ2 norm. Here, we consider a more general class of priors based on mixed-norms. Such norms have the ability to structure the prior in order to incorporate some additional assumptions about the sources. We refer to such solvers as Mixed-Norm Estimates (MxNE). In the context of M/EEG, MxNE can promote spatially focal sources with smooth temporal estimates with a two-level ℓ1/ℓ2 mixed-norm, while a three-level mixed-norm can be used to promote spatially non-overlapping sources between different experimental conditions. In order to efficiently solve the optimization problems of MxNE, we introduce fast first-order iterative schemes that for the ℓ1/ℓ2 norm give solutions in a few seconds making such a prior as convenient as the simple MNE. Furthermore, thanks to the convexity of the optimization problem, we can provide optimality conditions that guarantee global convergence. The utility of the methods is demonstrated both with simulations and experimental MEG data. PMID:22421459
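The flavor of such an accelerated first-order scheme with an ℓ1/ℓ2 block prox can be conveyed on a toy problem: rows of the source matrix are candidate sources over time, and whole rows are either kept or zeroed. The gain matrix, noise level and regularization weight below are toy assumptions, not an actual M/EEG forward model or the paper's solver.

```python
# FISTA-style iteration with an l1/l2 (block) proximal operator.
import numpy as np

def prox_l21(X, tau):
    norms = np.linalg.norm(X, axis=1, keepdims=True)
    return X * np.maximum(0.0, 1.0 - tau / np.maximum(norms, 1e-12))

rng = np.random.default_rng(7)
n_sensors, n_sources, n_times = 30, 200, 50
G = rng.standard_normal((n_sensors, n_sources))
X_true = np.zeros((n_sources, n_times))
X_true[[17, 123], :] = np.sin(np.linspace(0, 3 * np.pi, n_times))   # two active sources
M = G @ X_true + 0.1 * rng.standard_normal((n_sensors, n_times))

lam = 20.0
L = np.linalg.norm(G, 2) ** 2
X, Y, t = np.zeros_like(X_true), np.zeros_like(X_true), 1.0
for _ in range(200):                                   # accelerated iterations
    grad = G.T @ (G @ Y - M)
    X_new = prox_l21(Y - grad / L, lam / (2 * L))      # gradient step + block shrinkage
    t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t ** 2))
    Y = X_new + ((t - 1.0) / t_new) * (X_new - X)      # Nesterov momentum
    X, t = X_new, t_new

print("active sources:", np.nonzero(np.linalg.norm(X, axis=1) > 1e-6)[0])
```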
Standard setting: comparison of two methods.
George, Sanju; Haque, M Sayeed; Oyebode, Femi
2006-09-14
The outcome of assessments is determined by the standard-setting method used. There is a wide range of standard-setting methods, and the two used most extensively in undergraduate medical education in the UK are the norm-reference and the criterion-reference methods. The aims of the study were to compare these two standard-setting methods for a multiple-choice question examination and to estimate the test-retest and inter-rater reliability of the modified Angoff method. The norm-reference method of standard-setting (mean minus 1 SD) was applied to the 'raw' scores of 78 4th-year medical students on a multiple-choice examination (MCQ). Two panels of raters also set the standard using the modified Angoff method for the same multiple-choice question paper on two occasions (6 months apart). We compared the pass/fail rates derived from the norm-reference and the Angoff methods and also assessed the test-retest and inter-rater reliability of the modified Angoff method. The pass rate with the norm-reference method was 85% (66/78) and that with the Angoff method was 100% (78 out of 78). The percentage agreement between the Angoff method and the norm-reference method was 78% (95% CI 69% - 87%). The modified Angoff method had an inter-rater reliability of 0.81-0.82 and a test-retest reliability of 0.59-0.74. There were significant differences in the outcomes of these two standard-setting methods, as shown by the difference in the proportion of candidates that passed and failed the assessment. The modified Angoff method was found to have good inter-rater reliability and moderate test-retest reliability.
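The arithmetic behind the two rules compared above can be illustrated with invented numbers: the norm-referenced cut is the cohort mean minus one standard deviation, while a modified Angoff cut is the mean of judges' per-item probability estimates summed over items. All scores and judge ratings below are made up for illustration and do not reproduce the study's figures.

```python
# Toy comparison of a norm-referenced and a modified Angoff pass mark.
import numpy as np

rng = np.random.default_rng(8)
scores = rng.normal(65, 10, size=78)                  # cohort of 78 candidates (toy)
norm_ref_cut = scores.mean() - scores.std(ddof=1)     # norm-reference: mean - 1 SD
pass_norm = np.mean(scores >= norm_ref_cut)

# Modified Angoff: each judge estimates, per item, the probability that a
# minimally competent candidate answers correctly; the cut is the summed mean.
judge_ratings = rng.uniform(0.4, 0.8, size=(8, 100))  # 8 judges x 100 items (toy)
angoff_cut = judge_ratings.mean(axis=0).sum()
pass_angoff = np.mean(scores >= angoff_cut)

print(f"norm-reference cut = {norm_ref_cut:.1f}, pass rate = {pass_norm:.0%}")
print(f"modified Angoff cut = {angoff_cut:.1f}, pass rate = {pass_angoff:.0%}")
```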
Living on the Edge: A Geometric Theory of Phase Transitions in Convex Optimization
2013-03-24
[Fragmentary search-result excerpt from Amelunxen, Lotz, McCoy, and Tropp] ... a framework for constructing a regularizer f that promotes a specified type of structure, as well as many additional examples. We say that the ... regularizers on R^d that promote the structures we expect to find in x0 and y0. Then we can frame the convex ... signal x0 is sparse in the standard basis, and the second signal U y0 is sparse in a known basis U. In this case, we can use ℓ1 norms to promote sparsity ...
Lacome, Mathieu; Simpson, Ben M; Cholley, Yannick; Buchheit, Martin
2018-05-01
To (1) compare the locomotor and heart rate responses between floaters and regular players during both small and large small-sided games (SSGs) and (2) examine whether the type of game (ie, game simulation [GS] vs possession game [PO]) affects the magnitude of the difference between floaters and regular players. Data were collected in 41 players belonging to an elite French football team during 3 consecutive seasons (2014-2017). A 5-Hz global positioning system was used to collect all training data, with the Athletic Data Innovation analyzer (v5.4.1.514) used to derive total distance (m), high-speed distance (>14.4 km·h⁻¹, m), and external mechanical load (MechL, a.u.). All SSGs included exclusively 1 floater and were divided into 2 main categories, according to the participation of goalkeepers (GS) or not (PO), and then further divided into small and large (>100 m² per player) SSGs based on the area-per-player ratio. Locomotor activity and MechL were likely-to-most likely lower (moderate to large magnitude) in floaters compared with regular players, whereas differences in heart rate responses were unclear to possibly higher (small) in floaters. The magnitude of the difference in locomotor activity and MechL between floaters and regular players was substantially greater during GS compared with PO. Compared with regular players, floaters present decreased external load (both locomotor and MechL) despite unclear to possibly slightly higher heart rate responses during SSGs. Moreover, the responses of floaters compared with regular players are not consistent across different sizes of SSGs, with greater differences during GS than PO.
Assessing performance in pre-season wrestling athletes using biomarkers
Papassotiriou, Ionas; Nifli, Artemissia-Phoebe
2018-01-01
Introduction: Although regular training introduces the desired changes in athletes' metabolism towards optimal final performance, the literature rarely focuses on metabolic responses off-competition. Therefore, the aim of this study was to evaluate biochemical indices during typical preseason training in wrestling athletes. Materials and methods: Twenty male freestyle and Greco-Roman wrestlers (14 to 31 years) followed a typical session of the preparatory phase. Capillary blood glucose and lactate concentrations were assessed immediately before and after training. Protein, microalbumin, creatinine and their ratio were estimated the next day in the first morning urine. Results: Pre-training lactate concentrations were lower in Greco-Roman than in freestyle wrestlers (1.8 (1.4 – 2.1) vs. 2.9 (2.1 – 3.1) mmol/L). Exertion resulted in a significant increase in lactate concentrations, by 3.2 (2.6 – 4.1) mmol/L in Greco-Roman wrestlers and 4.5 (3.4 – 5.3) mmol/L in freestylers. These changes correlated with the athlete's sport experience (rs = 0.71, P < 0.001). Glucose concentrations also increased significantly, by 0.5 (0.1 – 0.8) mmol/L, in correlation with the lactate change (rs = 0.49, P = 0.003). Twelve subjects exhibited urine albumin concentrations of about 30 mg/L, and thirteen had creatinine concentrations around 17.7 mmol/L. The corresponding ratio was found to be abnormal in 4 cases, especially when creatinine excretion and body fat were low. Conclusions: Wrestling training is associated with mobilization of both the lactic and alactic anaerobic energy systems. Regular comprehensive monitoring of biochemical markers would be advantageous in determining the efficiency of the preparatory phase and the long-term physiological adaptations towards the competition phase, or the athlete's overtraining. PMID:29666559
Social norms and its correlates as a pathway to smoking among young Latino adults.
Echeverría, Sandra E; Gundersen, Daniel A; Manderski, Michelle T B; Delnevo, Cristine D
2015-01-01
Socially and culturally embedded norms regarding smoking may be one pathway by which individuals adopt smoking behaviors. However, few studies have examined if social norms operate in young adults, a population at high risk of becoming regular smokers. There is also little research examining correlates of social norms in populations with a large immigrant segment, where social norms are likely to differ from the receiving country and could contribute to a better understanding of previously reported acculturation-health associations. Using data from a nationally representative sample of young adults in the United States reached via a novel cell-phone sampling design, we explored the relationships between acculturation proxies (nativity, language spoken and generational status), socioeconomic position (SEP), smoking social norms and current smoking status among Latinos 18-34 years of age (n = 873). Specifically, we examined if a measure of injunctive norms assessed by asking participants about the acceptability of smoking among Latino co-ethnic peers was associated with acculturation proxies and SEP. Results showed a strong gradient in smoking social norms by acculturation proxies, with significantly less acceptance of smoking reported among the foreign-born and increasing acceptance among those speaking only/mostly English at home and third-generation individuals. No consistent and significant pattern in smoking social norms was observed by education, income or employment status, possibly due to the age of the study population. Lastly, those who reported that their Latino peers do not find smoking acceptable were significantly less likely to be current smokers compared to those who said their Latino peers were ambivalent about smoking (do not care either way) in crude models, and in models that adjusted for age, sex, generational status, language spoken, and SEP. This study provides new evidence regarding the role of social norms in shaping smoking behaviors among Latino young adults and suggests distinct influences of acculturation proxies and socioeconomic condition on smoking social norms in this population. Copyright © 2014 Elsevier Ltd. All rights reserved.
Structured Kernel Subspace Learning for Autonomous Robot Navigation.
Kim, Eunwoo; Choi, Sungjoon; Oh, Songhwai
2018-02-14
This paper considers two important problems for autonomous robot navigation in a dynamic environment, where the goal is to predict pedestrian motion and control a robot with the prediction for safe navigation. While there are several methods for predicting the motion of a pedestrian and controlling a robot to avoid incoming pedestrians, it is still difficult to safely navigate in a dynamic environment due to challenges, such as the varying quality and complexity of training data with unwanted noises. This paper addresses these challenges simultaneously by proposing a robust kernel subspace learning algorithm based on the recent advances in nuclear-norm and ℓ1-norm minimization. We model the motion of a pedestrian and the robot controller using Gaussian processes. The proposed method efficiently approximates a kernel matrix used in Gaussian process regression by learning a low-rank structured matrix (with symmetric positive semi-definiteness) to find an orthogonal basis, which eliminates the effects of erroneous and inconsistent data. Based on structured kernel subspace learning, we propose a robust motion model and motion controller for safe navigation in dynamic environments. We evaluate the proposed robust kernel learning in various tasks, including regression, motion prediction, and motion control problems, and demonstrate that the proposed learning-based systems are robust against outliers and outperform existing regression and navigation methods.
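A core building block behind such structured kernel subspace methods is replacing a noisy kernel (Gram) matrix with a low-rank, symmetric positive semi-definite surrogate. The sketch below illustrates only that step via eigenvalue thresholding; it is not the paper's nuclear-/ℓ1-norm algorithm, and the kernel, matrix sizes and threshold are assumptions.

    import numpy as np

    def lowrank_psd_approx(K, eig_threshold):
        """Symmetrize a noisy kernel matrix and keep only eigenpairs whose
        eigenvalue exceeds eig_threshold, yielding a low-rank PSD surrogate."""
        K = 0.5 * (K + K.T)                  # enforce symmetry
        w, V = np.linalg.eigh(K)             # eigendecomposition
        keep = w > eig_threshold             # drop small / negative eigenvalues
        return (V[:, keep] * w[keep]) @ V[:, keep].T

    # toy usage: an RBF kernel on 1-D inputs, corrupted by noise
    rng = np.random.default_rng(2)
    x = np.linspace(0.0, 1.0, 50)
    K_clean = np.exp(-0.5 * (x[:, None] - x[None, :]) ** 2 / 0.1 ** 2)
    K_noisy = K_clean + 0.05 * rng.standard_normal((50, 50))
    K_hat = lowrank_psd_approx(K_noisy, eig_threshold=0.5)
    print("rank:", np.linalg.matrix_rank(K_hat),
          "approximation error:", round(float(np.linalg.norm(K_hat - K_clean)), 3))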
What's in a norm? Sources and processes of norm change.
Paluck, Elizabeth Levy
2009-03-01
This reply to the commentary by E. Staub and L. A. Pearlman (2009) revisits the field experimental results of E. L. Paluck (2009). It introduces further evidence and theoretical elaboration supporting Paluck's conclusion that exposure to a reconciliation-themed radio soap opera changed perceptions of social norms and behaviors, not beliefs. Experimental and longitudinal survey evidence reinforces the finding that the radio program affected socially shared perceptions of typical or prescribed behavior-that is, social norms. Specifically, measurements of perceptions of social norms called into question by Staub and Pearlman are shown to correlate with perceptions of public opinion and public, not private, behaviors. Although measurement issues and the mechanisms of the radio program's influence merit further testing, theory and evidence point to social interactions and emotional engagement, not individual education, as the likely mechanisms of change. The present exchange makes salient what is at stake in this debate: a model of change based on learning and personal beliefs versus a model based on group influence and social norms. These theoretical models recommend very different strategies for prejudice and conflict reduction. Future field experiments should attempt to adjudicate between these models by testing relevant policies in real-world settings.
Significance tests for functional data with complex dependence structure.
Staicu, Ana-Maria; Lahiri, Soumen N; Carroll, Raymond J
2015-01-01
We propose an L2-norm-based global testing procedure for the null hypothesis that multiple group mean functions are equal, for functional data with complex dependence structure. Specifically, we consider the setting of functional data with a multilevel structure of the form groups-clusters or subjects-units, where the unit-level profiles are spatially correlated within the cluster, and the cluster-level data are independent. Orthogonal series expansions are used to approximate the group mean functions and the test statistic is estimated using the basis coefficients. The asymptotic null distribution of the test statistic is developed under mild regularity conditions. To our knowledge this is the first work that studies hypothesis testing when data have such complex multilevel functional and spatial structure. Two small-sample alternatives, including a novel block bootstrap for functional data, are proposed, and their performance is examined in simulation studies. The paper concludes with an illustration of a motivating experiment.
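To make the basis-coefficient construction above concrete, the sketch below computes an L2-norm-type statistic comparing group mean functions through their coefficients in a common cosine basis. It deliberately ignores the multilevel spatial dependence that the paper handles, and the grid, basis size and toy data are assumptions.

    import numpy as np

    def l2_group_statistic(curves_by_group, n_basis=15):
        """curves_by_group: list of arrays, each of shape (n_subjects, n_gridpoints).
        Projects each group mean curve onto a cosine basis and sums the squared
        distances of the group coefficient vectors from their pooled mean."""
        n_grid = curves_by_group[0].shape[1]
        t = (np.arange(n_grid) + 0.5) / n_grid
        basis = np.array([np.sqrt(2.0) * np.cos(np.pi * k * t) for k in range(1, n_basis + 1)])
        coefs = np.array([basis @ grp.mean(axis=0) / n_grid for grp in curves_by_group])
        return float(((coefs - coefs.mean(axis=0)) ** 2).sum())

    # toy usage: two groups whose mean curves differ by a small linear drift
    rng = np.random.default_rng(3)
    t = np.linspace(0.0, 1.0, 100)
    g1 = np.sin(2 * np.pi * t) + 0.3 * rng.standard_normal((30, 100))
    g2 = np.sin(2 * np.pi * t) + 0.2 * t + 0.3 * rng.standard_normal((30, 100))
    print(l2_group_statistic([g1, g2]))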
NASA Astrophysics Data System (ADS)
Kordy, M.; Wannamaker, P.; Maris, V.; Cherkaev, E.; Hill, G.
2016-01-01
Following the creation described in Part I of a deformable edge finite-element simulator for 3-D magnetotelluric (MT) responses using direct solvers, in Part II we develop an algorithm named HexMT for 3-D regularized inversion of MT data including topography. Direct solvers parallelized on large-RAM, symmetric multiprocessor (SMP) workstations are used also for the Gauss-Newton model update. By exploiting the data-space approach, the computational cost of the model update becomes much less in both time and computer memory than the cost of the forward simulation. In order to regularize using the second norm of the gradient, we factor the matrix related to the regularization term and apply its inverse to the Jacobian, which is done using the MKL PARDISO library. For dense matrix multiplication and factorization related to the model update, we use the PLASMA library which shows very good scalability across processor cores. A synthetic test inversion using a simple hill model shows that including topography can be important; in this case depression of the electric field by the hill can cause false conductors at depth or mask the presence of resistive structure. With a simple model of two buried bricks, a uniform spatial weighting for the norm of model smoothing recovered more accurate locations for the tomographic images compared to weightings which were a function of parameter Jacobians. We implement joint inversion for static distortion matrices tested using the Dublin secret model 2, for which we are able to reduce nRMS to ˜1.1 while avoiding oscillatory convergence. Finally we test the code on field data by inverting full impedance and tipper MT responses collected around Mount St Helens in the Cascade volcanic chain. Among several prominent structures, the north-south trending, eruption-controlling shear zone is clearly imaged in the inversion.
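The data-space Gauss-Newton update described above can be illustrated with a small dense sketch. This is a schematic of the generic algebra, not the HexMT code: J stands for the sensitivity (Jacobian) matrix, W is a first-difference roughening operator standing in for the gradient-norm regularizer, and all sizes and the damping value are assumptions. The point of the data-space form is that the linear system to be solved has the (small) dimension of the data rather than of the model.

    import numpy as np

    rng = np.random.default_rng(4)
    n_data, n_model, lam = 40, 200, 1.0

    J = rng.standard_normal((n_data, n_model))        # sensitivity matrix
    r = rng.standard_normal(n_data)                   # data residual
    # roughening operator (first differences) with tiny damping so C is invertible
    W = np.eye(n_model) - np.eye(n_model, k=1)
    C = W.T @ W + 1e-6 * np.eye(n_model)

    # model-space Gauss-Newton step: (J^T J + lam*C)^-1 J^T r
    dm_model = np.linalg.solve(J.T @ J + lam * C, J.T @ r)

    # data-space equivalent: C^-1 J^T (J C^-1 J^T + lam*I)^-1 r
    CinvJt = np.linalg.solve(C, J.T)   # "apply the inverse of the regularization matrix to the Jacobian"
    dm_data = CinvJt @ np.linalg.solve(J @ CinvJt + lam * np.eye(n_data), r)

    print(np.allclose(dm_model, dm_data, atol=1e-8))  # the two forms agree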
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barnett, Alex H.; Betcke, Timo
2007-12-15
We report the first large-scale statistical study of very high-lying eigenmodes (quantum states) of the mushroom billiard proposed by L. A. Bunimovich [Chaos 11, 802 (2001)]. The phase space of this mixed system is unusual in that it has a single regular region and a single chaotic region, and no KAM hierarchy. We verify Percival's conjecture to high accuracy (1.7%). We propose a model for dynamical tunneling and show that it predicts well the chaotic components of predominantly regular modes. Our model explains our observed density of such superpositions dying as E^{-1/3} (E is the eigenvalue). We compare eigenvalue spacing distributions against Random Matrix Theory expectations, using 16 000 odd modes (an order of magnitude more than any existing study). We outline new variants of mesh-free boundary collocation methods which enable us to achieve high accuracy and high mode numbers (~10^5) orders of magnitude faster than with competing methods.
English semantic word-pair norms and a searchable Web portal for experimental stimulus creation.
Buchanan, Erin M; Holmes, Jessica L; Teasley, Marilee L; Hutchison, Keith A
2013-09-01
As researchers explore the complexity of memory and language hierarchies, the need to expand normed stimulus databases is growing. Therefore, we present 1,808 words, paired with their features and concept-concept information, that were collected using previously established norming methods (McRae, Cree, Seidenberg, & McNorgan Behavior Research Methods 37:547-559, 2005). This database supplements existing stimuli and complements the Semantic Priming Project (Hutchison, Balota, Cortese, Neely, Niemeyer, Bengson, & Cohen-Shikora 2010). The data set includes many types of words (including nouns, verbs, adjectives, etc.), expanding on previous collections of nouns and verbs (Vinson & Vigliocco Journal of Neurolinguistics 15:317-351, 2008). We describe the relation between our and other semantic norms, as well as giving a short review of word-pair norms. The stimuli are provided in conjunction with a searchable Web portal that allows researchers to create a set of experimental stimuli without prior programming knowledge. When researchers use this new database in tandem with previous norming efforts, precise stimuli sets can be created for future research endeavors.
Zhang, Jiulou; Shi, Junwei; Guang, Huizhi; Zuo, Simin; Liu, Fei; Bai, Jing; Luo, Jianwen
2016-06-01
High-intensity background fluorescence is generally encountered in fluorescence molecular tomography (FMT), because of the accumulation of fluorescent probes in nontarget tissues or the existence of autofluorescence in biological tissues. The reconstruction results are affected or even distorted by the background fluorescence, especially when the distribution of fluorescent targets is relatively sparse. The purpose of this paper is to reduce the negative effect of background fluorescence on FMT reconstruction. After each iteration of the Tikhonov regularization algorithm, 3-D discrete cosine transform is adopted to filter the intermediate results. And then, a sparsity constraint step based on L1 regularization is applied to restrain the energy of the objective function. Phantom experiments with different fluorescence intensities of homogeneous and heterogeneous background are carried out to validate the performance of the proposed scheme. The results show that the reconstruction quality can be improved with the proposed iterative correction scheme. The influence of background fluorescence in FMT can be reduced effectively because of the filtering of the intermediate results, the detail preservation, and noise suppression of L1 regularization.
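A minimal sketch of the type of iteration described above follows: a gradient (Landweber-style) step standing in for the Tikhonov data update, a 3-D DCT-domain filtering of the intermediate image, and an L1 soft-thresholding step with a positivity projection. It only illustrates the structure of such a scheme under assumed operators and parameters (including one plausible choice of DCT filter); it is not the authors' reconstruction code.

    import numpy as np
    from scipy.fft import dctn, idctn

    def soft_threshold(x, t):
        return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

    def dct_background_filter(vol, cut=2):
        """One plausible filtering choice: zero the lowest-frequency 3-D DCT
        coefficients, which carry the slowly varying background fluorescence."""
        c = dctn(vol, norm="ortho")
        c[:cut, :cut, :cut] = 0.0
        return idctn(c, norm="ortho")

    def fmt_like_reconstruction(A, y, shape, n_iter=100, l1_weight=1e-3):
        """A: (n_measurements, n_voxels) forward matrix, y: measurements."""
        step = 1.0 / np.linalg.norm(A, 2) ** 2            # stable gradient step size
        x = np.zeros(A.shape[1])
        for _ in range(n_iter):
            x = x + step * (A.T @ (y - A @ x))            # data-fit (Landweber) update
            x = dct_background_filter(x.reshape(shape)).ravel()
            x = soft_threshold(x, l1_weight)              # L1 sparsity constraint on targets
            x = np.maximum(x, 0.0)                        # fluorescence is non-negative
        return x.reshape(shape)

    # toy usage on an assumed 16x16x16 grid with a random forward model
    rng = np.random.default_rng(5)
    shape = (16, 16, 16)
    A = rng.standard_normal((200, 16 * 16 * 16))
    x_true = np.zeros(shape); x_true[8, 8, 8] = 1.0
    y = A @ x_true.ravel()
    x_rec = fmt_like_reconstruction(A, y, shape)
    print("recovered peak voxel:", np.unravel_index(int(x_rec.argmax()), shape))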
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kalashnikova, Irina
2012-05-01
A numerical study aimed to evaluate different preconditioners within the Trilinos Ifpack and ML packages for the Quantum Computer Aided Design (QCAD) non-linear Poisson problem implemented within the Albany code base and posed on the Ottawa Flat 270 design geometry is performed. This study led to some new development of Albany that allows the user to select an ML preconditioner with Zoltan repartitioning based on nodal coordinates, which is summarized. Convergence of the numerical solutions computed within the QCAD computational suite with successive mesh refinement is examined in two metrics, the mean value of the solution (an L^1 norm) and the field integral of the solution (L^2 norm).
Brain Activity of Regular and Dyslexic Readers while Reading Hebrew as Compared to English Sentences
ERIC Educational Resources Information Center
Breznitz, Zvia; Oren, Revital; Shaul, Shelley
2004-01-01
The aim of the present study was to examine differences among "regular" and dyslexic adult bilingual readers when processing reading and reading related skills in their first (L1 Hebrew) and second (L2 English) languages. Brain activity during reading Hebrew and English unexpected sentence endings was also studied. Behavioral and…
Regularity theory for general stable operators
NASA Astrophysics Data System (ADS)
Ros-Oton, Xavier; Serra, Joaquim
2016-06-01
We establish sharp regularity estimates for solutions to Lu = f in Ω ⊂ R^n, L being the generator of any stable and symmetric Lévy process. Such nonlocal operators L depend on a finite measure on S^{n-1}, called the spectral measure. First, we study the interior regularity of solutions to Lu = f in B_1. We prove that if f is C^α then u belongs to C^{α+2s} whenever α + 2s is not an integer. In case f ∈ L^∞, we show that the solution u is C^{2s} when s ≠ 1/2, and C^{2s-ε} for all ε > 0 when s = 1/2. Then, we study the boundary regularity of solutions to Lu = f in Ω, u = 0 in R^n ∖ Ω, in C^{1,1} domains Ω. We show that solutions u satisfy u/d^s ∈ C^{s-ε}(Ω̄) for all ε > 0, where d is the distance to ∂Ω. Finally, we show that our results are sharp by constructing two counterexamples.
NASA Astrophysics Data System (ADS)
Ikehata, Ryo
Uniform energy and L2 decay of solutions for linear wave equations with localized dissipation will be given. In order to derive the L2-decay property of the solution, a useful device whose idea comes from Ikehata-Matsuyama (Sci. Math. Japon. 55 (2002) 33) is used. In fact, we shall show that the L2-norm and the total energy of solutions, respectively, decay like O(1/t) and O(1/t²) as t → +∞ for a kind of weighted initial data.
Alcohol Use Disorders and Perceived Drinking Norms: Ethnic Differences in Israeli Adults
Shmulewitz, Dvora; Wall, Melanie M.; Keyes, Katherine M.; Aharonovich, Efrat; Aivadyan, Christina; Greenstein, Eliana; Spivak, Baruch; Weizman, Abraham; Frisch, Amos; Hasin, Deborah
2012-01-01
Objective: Individuals’ perceptions of drinking acceptability in their society (perceived injunctive drinking norms) are widely assumed to explain ethnic group differences in drinking and alcohol use disorders (AUDs), but this has never been formally tested. Immigrants to Israel from the former Soviet Union (FSU) are more likely to drink and report AUD symptoms than other Israelis. We tested perceived drinking norms as a mediator of differences between FSU immigrants and other Israelis in drinking and AUDs. Method: Adult household residents (N = 1,349) selected from the Israeli population register were assessed with a structured interview measuring drinking, AUD symptoms, and perceived drinking norms. Regression analyses were used to produce odds ratios (OR) and risk ratios (RR) and 95% confidence intervals (CI) to test differences between FSU immigrants and other Israelis on binary and graded outcomes. Mediation of FSU effects by perceived drinking norms was tested with bootstrapping procedures. Results: FSU immigrants were more likely than other Israelis to be current drinkers (OR = 2.39, CI [1.61, 3.55]), have higher maximum number of drinks per day (RR = 1.88, CI [1.64, 2.16]), have any AUD (OR = 1.75, CI [1.16, 2.64]), score higher on a continuous measure of AUD (RR = 1.44, CI [1.12, 1.84]), and perceive more permissive drinking norms (p < .0001). For all four drinking variables, the FSU group effect was at least partially mediated by perceived drinking norms. Conclusions: This is the first demonstration that drinking norms mediate ethnic differences in AUDs. This work contributes to understanding ethnic group differences in drinking and AUDs, potentially informing etiologic research and public policy aimed at reducing alcohol-related harm. PMID:23036217
Naserkhaki, Sadegh; Jaremko, Jacob L; El-Rich, Marwan
2016-09-06
There is a large, at times contradictory, body of research relating spinal curvature to Low Back Pain (LBP). Mechanical load is considered an important factor in LBP etiology. The geometry of the spinal structures and the sagittal curvature of the lumbar spine govern its mechanical behavior. Thus, understanding how inter-individual variation in geometry, and particularly in sagittal curvature, affects spinal load-sharing is of high importance in LBP assessment. This study calculated and compared kinematics and load-sharing in three ligamentous lumbosacral spines: one hypo-lordotic (Hypo-L) with low lordosis, one normal-lordotic (Norm-L) with normal lordosis, and one hyper-lordotic (Hyper-L) with high lordosis, in flexed and extended postures, using 3D nonlinear Finite Element (FE) modeling. These postures were simulated by applying a Follower Load (FL) combined with a flexion or extension moment. The Hypo-L spine demonstrated stiffer behavior in flexion but a more flexible response to extension compared to the Norm-L spine. The excessive lordosis stiffened the response of the Hyper-L spine to extension but did not affect its resistance to flexion compared to the Norm-L spine. Despite the different resisting actions of the posterior ligaments to the flexion moment, the increase in disc compression was similar in all the spines, leading to similar load-sharing. However, the resistance of the facet joints to extension was more important in the Norm-L and Hyper-L spines, which reduced the disc compression. The spinal curvature strongly influenced the magnitude and location of load on the spinal components and also altered the kinematics and load-sharing, particularly in extension. Consideration of the subject-specific geometry and sagittal curvature should be an integral part of mechanical analysis of the lumbar spine. Copyright © 2016 Elsevier Ltd. All rights reserved.
Harrison, Tyler R; Muhamad, Jessica Wendorf; Yang, Fan; Morgan, Susan E; Talavera, Ed; Caban-Martinez, Alberto; Kobetz, Erin
2018-04-01
Firefighters are exposed to carcinogens such as volatile organic compounds (VOCs) and polycyclic aromatic hydrocarbons (PAHs) during fires and from their personal protective equipment (PPE). Recent research has shown that decontamination processes can reduce contamination on both gear and skin. While firefighter cultures that honor dirty gear are changing, little is known about current attitudes and behaviors toward decontamination in the fire service. Four hundred eighty-five firefighters from four departments completed surveys about their attitudes, beliefs, perceived norms, barriers, and behaviors toward post-fire decontamination processes. Overall, firefighters reported positive attitudes, beliefs, and perceived norms about decontamination, but showering after a fire was the only decontamination process that occurred regularly, with field decontamination, use of cleansing wipes, routine gear cleaning, and other behaviors all occurring less frequently. Firefighters reported time and concerns over wet gear as barriers to decontamination.
Clavien, Christine; Tanner, Colby J.; Clément, Fabrice; Chapuisat, Michel
2012-01-01
The punishment of social misconduct is a powerful mechanism for stabilizing high levels of cooperation among unrelated individuals. It is regularly assumed that humans have a universal disposition to punish social norm violators, which is sometimes labelled “universal structure of human morality” or “pure aversion to social betrayal”. Here we present evidence that, contrary to this hypothesis, the propensity to punish a moral norm violator varies among participants with different career trajectories. In anonymous real-life conditions, future teachers punished a talented but immoral young violinist: they voted against her in an important music competition when they had been informed of her previous blatant misconduct toward fellow violin students. In contrast, future police officers and high school students did not punish. This variation among socio-professional categories indicates that the punishment of norm violators is not entirely explained by an aversion to social betrayal. We suggest that context specificity plays an important role in normative behaviour; people seem inclined to enforce social norms only in situations that are familiar, relevant for their social category, and possibly strategically advantageous. PMID:22720012
Cheng, Jian; Deriche, Rachid; Jiang, Tianzi; Shen, Dinggang; Yap, Pew-Thian
2014-11-01
Spherical Deconvolution (SD) is commonly used for estimating fiber Orientation Distribution Functions (fODFs) from diffusion-weighted signals. Existing SD methods can be classified into two categories: 1) Continuous Representation based SD (CR-SD), where typically Spherical Harmonic (SH) representation is used for convenient analytical solutions, and 2) Discrete Representation based SD (DR-SD), where the signal profile is represented by a discrete set of basis functions uniformly oriented on the unit sphere. A feasible fODF should be non-negative and should integrate to unity throughout the unit sphere S². However, to our knowledge, most existing SH-based SD methods enforce non-negativity only on discretized points and not on the whole continuum of S². Maximum Entropy SD (MESD) and Cartesian Tensor Fiber Orientation Distributions (CT-FOD) are the only SD methods that ensure non-negativity throughout the unit sphere. They are however computationally intensive and are susceptible to errors caused by numerical spherical integration. Existing SD methods are also known to overestimate the number of fiber directions, especially in regions with low anisotropy. DR-SD introduces additional error in peak detection owing to the angular discretization of the unit sphere. This paper proposes a SD framework, called Non-Negative SD (NNSD), to overcome all the limitations above. NNSD is significantly less susceptible to false-positive peaks, uses SH representation for efficient analytical spherical deconvolution, and allows accurate peak detection throughout the whole unit sphere. We further show that NNSD and most existing SD methods can be extended to work on multi-shell data by introducing a three-dimensional fiber response function. We evaluated NNSD in comparison with Constrained SD (CSD), a quadratic programming variant of CSD, MESD, and an L1-norm regularized non-negative least-squares DR-SD. Experiments on synthetic and real single-/multi-shell data indicate that NNSD improves estimation performance in terms of mean difference of angles, peak detection consistency, and anisotropy contrast between isotropic and anisotropic regions. Copyright © 2014 Elsevier Inc. All rights reserved.
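For context on the discrete-representation baseline mentioned above (an L1-norm regularized non-negative least-squares DR-SD), a generic projected-gradient sketch follows. Since the fODF weights are constrained to be non-negative, the L1 penalty reduces to a linear term. The response matrix, regularization weight and toy data are illustrative assumptions, not the implementations evaluated in the paper.

    import numpy as np

    def nonneg_l1_deconv(H, s, lam=0.1, n_iter=500):
        """Solve min_{f >= 0} 0.5*||H f - s||^2 + lam*sum(f) by projected gradient.
        H : (n_gradients, n_directions) response matrix (single-fiber signal profile
            rotated to each candidate direction); s : measured diffusion signal.
        For f >= 0 the L1 norm equals sum(f), so the penalty is a linear term."""
        step = 1.0 / np.linalg.norm(H, 2) ** 2
        f = np.zeros(H.shape[1])
        for _ in range(n_iter):
            grad = H.T @ (H @ f - s) + lam
            f = np.maximum(f - step * grad, 0.0)   # projection onto the non-negative orthant
        return f

    # toy usage with an assumed random response matrix and a two-fiber ground truth
    rng = np.random.default_rng(6)
    H = np.abs(rng.standard_normal((60, 100)))
    f_true = np.zeros(100); f_true[[10, 55]] = [0.6, 0.4]
    s = H @ f_true + 0.01 * rng.standard_normal(60)
    f_hat = nonneg_l1_deconv(H, s)
    print(np.argsort(f_hat)[-2:])   # indices of the two strongest recovered directions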
Kronecker-Basis-Representation Based Tensor Sparsity and Its Applications to Tensor Recovery.
Xie, Qi; Zhao, Qian; Meng, Deyu; Xu, Zongben
2017-08-02
It is well known that the sparsity/low-rank of a vector/matrix can be rationally measured by the number of nonzero entries (ℓ0 norm)/the number of nonzero singular values (rank), respectively. However, data from real applications are often generated by the interaction of multiple factors, which obviously cannot be sufficiently represented by a vector/matrix, while a high-order tensor is expected to provide a more faithful representation to deliver the intrinsic structure underlying such data ensembles. Unlike the vector/matrix case, constructing a rational high-order sparsity measure for a tensor is a relatively harder task. To this aim, in this paper we propose a measure for tensor sparsity, called the Kronecker-basis-representation based tensor sparsity measure (KBR for short), which encodes both sparsity insights delivered by the Tucker and CANDECOMP/PARAFAC (CP) low-rank decompositions for a general tensor. Then we study the KBR regularization minimization (KBRM) problem, and design an effective ADMM algorithm for solving it, where each involved parameter can be updated with closed-form equations. Such an efficient solver makes it possible to extend KBR to various tasks like tensor completion and tensor robust principal component analysis. A series of experiments, including multispectral image (MSI) denoising, MSI completion and background subtraction, substantiate the superiority of the proposed methods beyond the state of the art.
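One recurring primitive behind tensor sparsity measures of this kind is shrinking the singular values of the mode-n unfoldings of a tensor. The sketch below shows only that primitive (unfold, soft-threshold the singular values, refold); it is a hedged illustration rather than the KBR/ADMM solver, and the tensor size and threshold are assumptions.

    import numpy as np

    def unfold(T, mode):
        """Mode-n unfolding: move axis `mode` to the front and flatten the rest."""
        return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

    def fold(M, mode, shape):
        full = [shape[mode]] + [s for i, s in enumerate(shape) if i != mode]
        return np.moveaxis(M.reshape(full), 0, mode)

    def svt_mode(T, mode, tau):
        """Soft-threshold the singular values of the mode-n unfolding of T."""
        U, s, Vt = np.linalg.svd(unfold(T, mode), full_matrices=False)
        s = np.maximum(s - tau, 0.0)
        return fold((U * s) @ Vt, mode, T.shape)

    # toy usage: a rank-1 "multispectral" cube corrupted by noise
    rng = np.random.default_rng(7)
    low_rank = np.einsum("i,j,k->ijk", rng.standard_normal(20),
                         rng.standard_normal(20), rng.standard_normal(8))
    noisy = low_rank + 0.1 * rng.standard_normal((20, 20, 8))
    denoised = svt_mode(noisy, mode=2, tau=1.0)
    print(np.linalg.norm(denoised - low_rank) < np.linalg.norm(noisy - low_rank))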
Workplace environmental factors in the textile industry in the Democratic Republic of the Congo: current situation
Kitronza, Panda Lukongo; Philippe, Mairiaux
2016-01-01
Introduction: This study aims to assess nuisances in the working environment of the textile sector in the Democratic Republic of the Congo. Methods: We carried out a cross-sectional analytical study. Of 257 workers selected by systematic sampling, 229 were retained. Noise, lighting and heat were measured at 223 workstations. Information was gathered from company documents, from a structured questionnaire covering mainly socio-occupational data, and from the measurements. A descriptive analysis was performed for the sociodemographic and occupational data and an analytical approach was used for the measurements. Results: In this company, 88% of the workers are manual labourers. The weaving department accounts for almost 68% of the workforce. Most work in three shifts (85%). The study population is predominantly male (85%), ageing (52% over 40 years of age) and educated (80%). Only 12.1% of workstations comply with the noise standards and 18% with the lighting standards; 94% of workstations do not comply with the heat standards for heavy work. Conclusion: Our study highlighted the nuisances within this industry, showing a substantial gap relative to the prescribed standards for the nuisances measured. These results argue for the development of appropriate preventive measures. They should be compared with those of further, more in-depth studies in this setting. PMID:28154733
Purba, Fredrick Dermawan; Hunfeld, Joke A M; Iskandarsyah, Aulia; Fitriana, Titi Sahidah; Sadarjoen, Sawitri S; Passchier, Jan; Busschbach, Jan J V
2018-01-01
The objective of this study is to obtain population norms and to assess test-retest reliability of EQ-5D-5L and WHOQOL-BREF for the Indonesian population. A representative sample of 1056 people aged 17-75 years was recruited from the Indonesian general population. We used a multistage stratified quota sampling method with respect to residence, gender, age, education level, religion and ethnicity. Respondents completed EQ-5D-5L and WHOQOL-BREF with help from an interviewer. Norms data for both instruments were reported. For the test-retest evaluations, a sub-sample of 206 respondents completed both instruments twice. The total sample and test-retest sub-sample were representative of the Indonesian general population. The EQ-5D-5L shows almost perfect agreement between the two tests (Gwet's AC: 0.85-0.99 and percentage agreement: 90-99%) regarding the five dimensions. However, the agreement of EQ-VAS and index scores can be considered as poor (ICC: 0.45 and 0.37 respectively). For the WHOQOL-BREF, ICCs of the four domains were between 0.70 and 0.79, which indicates moderate to good agreement. For EQ-5D-5L, it was shown that female and older respondents had lower EQ-index scores, whilst rural, younger and higher-educated respondents had higher EQ-VAS scores. For WHOQOL-BREF: male, younger, higher-educated, high-income respondents had the highest scores in most of the domains, overall quality of life, and health satisfaction. This study provides representative estimates of self-reported health status and quality of life for the general Indonesian population as assessed by the EQ-5D-5L and WHOQOL-BREF instruments. The descriptive system of the EQ-5D-5L and the WHOQOL-BREF have high test-retest reliability while the EQ-VAS and the index score of EQ-5D-5L show poor agreement between the two tests. Our results can be useful to researchers and clinicians who can compare their findings with respect to these concepts with those of the Indonesian general population.
Computer-Delivered Social Norm Message Increases Pain Tolerance
Pulvers, Kim; Schroeder, Jacquelyn; Limas, Eleuterio F.; Zhu, Shu-Hong
2013-01-01
Background: Few experimental studies have been conducted on social determinants of pain tolerance. Purpose: This study tests a brief, computer-delivered social norm message for increasing pain tolerance. Methods: Healthy young adults (N=260; 44% Caucasian; 27% Hispanic) were randomly assigned into a 2 (social norm) × 2 (challenge) cold pressor study, stratified by gender. They received standard instructions or standard instructions plus a message that contained artificially elevated information about the typical performance of others. Results: Those receiving a social norm message displayed significantly higher pain tolerance, F(1, 255) = 26.95, p < .001, ηp² = .10, and pain threshold, F(1, 244) = 9.81, p = .002, ηp² = .04, but comparable pain intensity, p > .05. There were no interactions between condition and gender on any outcome variables, p > .05. Conclusions: Social norms can significantly increase pain tolerance, even with a brief verbal message delivered by a video. PMID:24146086
Radiation dose to the Malaysian populace via the consumption of bottled mineral water
NASA Astrophysics Data System (ADS)
Khandaker, Mayeen Uddin; Nasir, Noor Liyana Mohd; Zakirin, Nur Syahira; Kassim, Hasan Abu; Asaduzzaman, Khandoker; Bradley, D. A.; Zulkifli, M. Y.; Hayyan, Adeeb
2017-11-01
Due to the geological makeup of the various water bodies, mineral water and groundwater can be expected to contain levels of naturally occurring radioactive material (NORM) exceeding those of tap and surface water. Although mineral water forms a vital component of the fluid intake needed to maintain an individual's health, it remains important to study the radiological implications of its NORM content, especially in regard to the consumption of bottled mineral water, which is prevalent in modern urban society. In the present study, various brands of bottled mineral water commonly available in Malaysia were obtained from local markets, and the presence of NORM was assessed by HPGe γ-ray spectrometry. The activity concentrations of the radionuclides of particular interest, 226Ra, 232Th and 40K, were found to be within the respective ranges of 1.45±0.28‒3.30±0.43, 0.65±0.18‒3.39±0.38 and 21.12±1.74‒25.31±1.84 Bq/L. The concentrations of 226Ra, of central importance in radiological risk assessment, exceed the World Health Organisation (WHO, 2011) recommended maximum permissible limit of 1.0 Bq/L; for all three radionuclides taken together, the annual effective doses are greater than the WHO recommended limit of 0.1 mSv/y, a matter of especial concern for those in the developmental stages of life.
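The quantity discussed above, the annual effective dose from ingestion, is obtained by multiplying each radionuclide's activity concentration by the annual water intake and an ingestion dose coefficient. The short sketch below works through that arithmetic; the intake and the ICRP-72-style dose coefficients are assumed typical adult values, and the concentrations are illustrative, not figures taken from this paper.

    # Annual effective dose (Sv/y) = sum_i C_i (Bq/L) * intake (L/y) * DCF_i (Sv/Bq)
    ANNUAL_INTAKE_L = 730.0                    # assumed 2 L/day of drinking water

    # assumed adult ingestion dose coefficients (ICRP-72-style values, Sv/Bq)
    DCF = {"Ra-226": 2.8e-7, "Th-232": 2.3e-7, "K-40": 6.2e-9}

    def annual_effective_dose(conc_bq_per_l):
        """conc_bq_per_l: dict of activity concentrations in Bq/L per radionuclide."""
        return sum(conc_bq_per_l[n] * ANNUAL_INTAKE_L * DCF[n] for n in conc_bq_per_l)

    # illustrative mid-range concentrations (Bq/L)
    sample = {"Ra-226": 2.0, "Th-232": 1.5, "K-40": 23.0}
    dose_mSv = annual_effective_dose(sample) * 1e3
    print(f"annual effective dose ~ {dose_mSv:.2f} mSv/y (WHO guidance level: 0.1 mSv/y)")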
NASA Astrophysics Data System (ADS)
Bassrei, A.; Terra, F. A.; Santos, E. T.
2007-12-01
Inverse problems in Applied Geophysics are usually ill-posed. One way to reduce this deficiency is through derivative matrices, which are a particular case of a more general family of techniques that go by the name of regularization. Regularization by derivative matrices has an input parameter, called the regularization parameter, whose choice is itself a problem. A heuristic approach, later called the L-curve, was suggested in the 1970s to provide the optimum regularization parameter. The L-curve is a parametric curve, where each point is associated to a λ parameter. The horizontal axis represents the error between the observed and the calculated data, and the vertical axis represents the norm of the product between the regularization matrix and the estimated model. The ideal point is the knee of the L-curve, where there is a balance between the quantities represented on the Cartesian axes. The L-curve has been applied to a variety of inverse problems, including in Geophysics. However, visualization of the knee is not always an easy task, in particular when the L-curve does not have the L shape. In this work three methodologies are employed for the search and selection of the optimal regularization parameter from the L-curve. The first criterion is the use of Hansen's toolbox, which extracts λ automatically. The second criterion consists of extracting the optimal parameter visually. The third criterion constructs the first derivative of the L-curve and then automatically extracts the inflexion point. The L-curve with the three criteria above was applied and validated in traveltime tomography and 2-D gravity inversion. After many simulations with synthetic data, noise-free as well as corrupted with noise, with regularization orders 0, 1, and 2, we verified that the three criteria are valid and provide satisfactory results. The third criterion presented the best performance, especially in cases where the L-curve has an irregular shape.
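As a generic illustration of the L-curve construction (separate from the three criteria tested in this work), the sketch below builds the curve for zeroth-order Tikhonov regularization via the SVD and picks the corner as the point of maximum curvature of (log residual norm, log solution norm). The test problem and the λ grid are assumptions, and naive maximum-curvature corner detection can misfire on irregularly shaped curves, which is precisely the situation the study addresses.

    import numpy as np

    def l_curve_corner(A, b, lambdas):
        """Return residual norms, solution norms and the maximum-curvature lambda
        for zeroth-order Tikhonov: min ||Ax - b||^2 + lambda^2 ||x||^2."""
        U, s, Vt = np.linalg.svd(A, full_matrices=False)
        beta = U.T @ b
        res, sol = [], []
        for lam in lambdas:
            x = Vt.T @ (s / (s ** 2 + lam ** 2) * beta)   # filtered SVD solution
            res.append(np.linalg.norm(A @ x - b))
            sol.append(np.linalg.norm(x))
        rho, eta = np.log(np.array(res)), np.log(np.array(sol))
        d1r, d1e = np.gradient(rho), np.gradient(eta)     # derivatives along the curve
        d2r, d2e = np.gradient(d1r), np.gradient(d1e)
        kappa = (d1r * d2e - d2r * d1e) / (d1r ** 2 + d1e ** 2) ** 1.5
        return res, sol, lambdas[np.argmax(kappa)]

    # toy ill-posed problem: a discretized smoothing kernel with noisy data
    rng = np.random.default_rng(8)
    t = np.linspace(0.0, 1.0, 80)
    A = np.exp(-50.0 * (t[:, None] - t[None, :]) ** 2)
    b = A @ np.sin(2 * np.pi * t) + 1e-3 * rng.standard_normal(80)
    _, _, lam_opt = l_curve_corner(A, b, np.logspace(-8, 1, 60))
    print("corner lambda ~", lam_opt)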
[Sodium concentrations in solutions for oral rehydration in children with diarrhea].
Mota-Hernández, F; Morales-Barradas, J A
1990-04-01
Using the appropriate treatment for dehydration due to diarrhea, over a million deaths a year in children under five are being prevented. After analyzing the information related to the concentration of sodium in solutions used for oral rehydration, the following conclusions can be made: 1. Solutions with high glucose content, as well as hyperosmolar foods, favor the development of hypernatremia. This is not the case for sodium concentrations of up to 90 mmol/L with glucose under 2.5%. 2. There are other factors which correlate with the presence of hypernatremia: abundant watery diarrhea, a good state of nutrition, age under six months, and the administration of solute loads, orally (boiled milk) as well as intravenously. 3. The WHO oral rehydration solution, which contains, in mmol/L: sodium 90, glucose 111 (2%), chloride 80, potassium 20 and citrate 10, with a total osmolarity of 311 or 331 mOsm/L, is the one which most closely resembles the ideal concentration and has been shown to be effective, not only in the treatment of dehydration due to diarrhea, but also to be useful in the prevention and maintenance of rehydration, independently of the etiology, the patient's age or the state of nutrition. 4. The use of oral serum with a sodium concentration of 90 mmol/L reduces the natremia more slowly, therefore protecting the patient with hypernatremic dehydration from developing convulsions during treatment. This sodium concentration is also the best for cases of hyponatremic dehydration. 5. Using the recommended norms in cases of children with diarrhea, including continuing regular feeding habits and adding complementary liquids, no cases of hypernatremia have been recorded. (ABSTRACT TRUNCATED AT 250 WORDS)
NASA Astrophysics Data System (ADS)
Meresescu, Alina G.; Kowalski, Matthieu; Schmidt, Frédéric; Landais, François
2018-06-01
The Water Residence Time distribution is the equivalent of the impulse response of a linear system allowing the propagation of water through a medium, e.g. the propagation of rain water from the top of the mountain towards the aquifers. We consider the output aquifer levels as the convolution between the input rain levels and the Water Residence Time, starting with an initial aquifer base level. The estimation of Water Residence Time is important for a better understanding of hydro-bio-geochemical processes and mixing properties of wetlands used as filters in ecological applications, as well as protecting fresh water sources for wells from pollutants. Common methods of estimating the Water Residence Time focus on cross-correlation, parameter fitting and non-parametric deconvolution methods. Here we propose a 1D full-deconvolution, regularized, non-parametric inverse problem algorithm that enforces smoothness and uses constraints of causality and positivity to estimate the Water Residence Time curve. Compared to Bayesian non-parametric deconvolution approaches, it has a fast runtime per test case; compared to the popular and fast cross-correlation method, it produces a more precise Water Residence Time curve even in the case of noisy measurements. The algorithm needs only one regularization parameter to balance between smoothness of the Water Residence Time and accuracy of the reconstruction. We propose an approach on how to automatically find a suitable value of the regularization parameter from the input data only. Tests on real data illustrate the potential of this method to analyze hydrological datasets.
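To make the inversion concrete, a minimal sketch along these lines follows: the rain-to-aquifer convolution is written as a lower-triangular (hence causal) Toeplitz matrix, smoothness is enforced by a second-difference penalty weighted by a single regularization parameter, and positivity is imposed by solving the stacked system with non-negative least squares. The synthetic data and the parameter value are assumptions, and this is a generic sketch rather than the authors' algorithm.

    import numpy as np
    from scipy.linalg import toeplitz
    from scipy.optimize import nnls

    def estimate_wrt(rain, aquifer, n_lags, reg=1.0):
        """Estimate a causal, positive, smooth Water Residence Time curve h such
        that aquifer ~ conv(rain, h), via regularized non-negative least squares."""
        A = toeplitz(rain[:len(aquifer)], np.zeros(n_lags))  # lower-triangular => causality
        D = np.diff(np.eye(n_lags), n=2, axis=0)             # second differences => smoothness
        A_aug = np.vstack([A, np.sqrt(reg) * D])
        b_aug = np.r_[aquifer, np.zeros(D.shape[0])]
        h, _ = nnls(A_aug, b_aug)                            # positivity constraint
        return h

    # synthetic test: exponential residence-time curve convolved with random rain
    rng = np.random.default_rng(9)
    n, n_lags = 300, 60
    h_true = np.exp(-np.arange(n_lags) / 10.0); h_true /= h_true.sum()
    rain = rng.gamma(shape=0.3, scale=2.0, size=n)
    aquifer = np.convolve(rain, h_true)[:n] + 0.01 * rng.standard_normal(n)
    h_est = estimate_wrt(rain, aquifer, n_lags, reg=0.5)
    print("L1 recovery error:", float(np.abs(h_est - h_true).sum()))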
Pétré, Benoit; Scheen, André J; Ziegler, Olivier; Donneau, Anne-Françoise; Dardenne, Nadia; Husson, Eddy; Albert, Adelin; Guillaume, Michèle
2016-01-01
Background and objective Despite the strength and consistency of the relationship between body mass index (BMI) and quality of life (QoL), a reduction in BMI does not necessarily lead to an improvement in QoL. Between-subject variability indicates the presence of mediators and moderators in the BMI–QoL association. This study aimed to examine the roles of body image discrepancy (BID) and subjective norm (SN) as potential mediators and moderators. Subjects and methods In 2012, 3,016 volunteers (aged ≥18 years) participated in a community-based survey conducted in the French-speaking region of Belgium. Participation was enhanced using a large multimedia campaign (which was supported by a large network of recruiters) that employed the nonstigmatizing slogan, “Whatever your weight, your opinion will count”. Participants were invited to complete a web-based questionnaire on their weight-related experiences. Self-reported measures were used to calculate each participant’s BMI, BID, SN, and QoL (a French obesity-specific QoL questionnaire was used to calculate the participants’ physical dimension of QoL scores [PHY-QoL], psychosocial dimension of QoL scores [PSY/SOC-QoL], and their total scores). The covariates included gender, age, subjective economic status, level of education, household size, and perceived health. The mediation/moderation tests were based on Hayes’ method. Results Tests showed that the relationships between BMI and PHY-QoL, PSY/SOC-QoL, and TOT-QoL were partially mediated by BID in both males and females and by SN in females. Moreover, BID was a moderator of the relationship between BMI and PSY/SOC-QoL in males and females. SN was a moderator of the relationship between BMI and PSY/SOC-QoL in males and between BMI and total scores in males (when used without BID in the models). Conclusion BID and SN should be considered as important factors in obesity management strategies. The study shows that targeting BMI only is not sufficient to improve the QoL of overweight and obese subjects, and that other variables, including perceptual factors, should be considered. PMID:27853356
NASA Astrophysics Data System (ADS)
Nezir, Veysel; Mustafa, Nizami
2017-04-01
In 2008, P.K. Lin provided the first example of a nonreflexive space that can be renormed to have the fixed point property for nonexpansive mappings. This space was l1, the Banach space of absolutely summable sequences, and researchers aim to generalize this to c0, the Banach space of null sequences. Before P.K. Lin's intriguing result, in 1979, Goebel and Kuczumow showed that there is a large class of non-weak* compact closed, bounded, convex subsets of l1 with the fixed point property for nonexpansive mappings. P.K. Lin was inspired by Goebel and Kuczumow's ideas in obtaining his result. Similarly to P.K. Lin's study, Hernández-Linares worked on L1 and, in his Ph.D. thesis supervised by Maria Japón, showed that L1 can be renormed to have the fixed point property for affine nonexpansive mappings. Related questions for c0 have then been considered by researchers. Recently, Nezir constructed several equivalent norms on c0 and showed that there are non-weakly compact closed, bounded, convex subsets of c0 with the fixed point property for affine nonexpansive mappings. In this study, we construct a family of equivalent norms containing those developed by Nezir as well and show that there exists a large class of non-weakly compact closed, bounded, convex subsets of c0 with the fixed point property for affine nonexpansive mappings.
Barrington, Clare; Kerrigan, Deanna
2014-01-01
Encouragement to use condoms reflects the injunctive norm, or idea that you should use condoms. In our previous research with the regular male partners of female sex workers in the Dominican Republic, we found that encouragement to use condoms with female sex workers from individuals in their personal social networks was not directly associated with condom use. In the current study, we used qualitative interviews to further explore the influence of social network norms on men's sexual risk behaviours. We interviewed eleven steady male partners of female sex workers; participants completed two interviews to achieve greater depth. We analysed data using analytic summaries and systematic thematic coding. All men perceived that the prevailing injunctive norm was that they should use condoms with sex workers. Men received encouragement to use condoms but did not articulate a link between this encouragement and condom use. Additionally, men who did not use condoms lied to their friends to avoid social sanction. Findings highlight that the influence of a pro-condom injunctive norm is not always health promoting and can even be negative. HIV prevention efforts seeking to promote condom use should address the alignment between injunctive and descriptive norms to strengthen their collective influence on behaviour. PMID:24555440
Properties of networks with partially structured and partially random connectivity
NASA Astrophysics Data System (ADS)
Ahmadian, Yashar; Fumarola, Francesco; Miller, Kenneth D.
2015-01-01
Networks studied in many disciplines, including neuroscience and mathematical biology, have connectivity that may be stochastic about some underlying mean connectivity represented by a non-normal matrix. Furthermore, the stochasticity may not be independent and identically distributed (iid) across elements of the connectivity matrix. More generally, the problem of understanding the behavior of stochastic matrices with nontrivial mean structure and correlations arises in many settings. We address this by characterizing large random N × N matrices of the form A = M + LJR, where M, L, and R are arbitrary deterministic matrices and J is a random matrix of zero-mean iid elements. M can be non-normal, and L and R allow correlations that have separable dependence on row and column indices. We first provide a general formula for the eigenvalue density of A. For A non-normal, the eigenvalues do not suffice to specify the dynamics induced by A, so we also provide general formulas for the transient evolution of the magnitude of activity and frequency power spectrum in an N-dimensional linear dynamical system with a coupling matrix given by A. These quantities can also be thought of as characterizing the stability and the magnitude of the linear response of a nonlinear network to small perturbations about a fixed point. We derive these formulas and work them out analytically for some examples of M, L, and R motivated by neurobiological models. We also argue that the persistence as N → ∞ of a finite number of randomly distributed outlying eigenvalues outside the support of the eigenvalue density of A, as previously observed, arises in regions of the complex plane Ω where there are nonzero singular values of L^{-1}(z1 - M)R^{-1} (for z ∈ Ω) that vanish as N → ∞. When such singular values do not exist and L and R are equal to the identity, there is a correspondence in the normalized Frobenius norm (but not in the operator norm) between the support of the spectrum of A for J of norm σ and the σ pseudospectrum of M.
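A quick way to explore spectra of this type numerically is direct sampling: draw J with iid zero-mean entries of variance 1/N, form A = M + LJR for chosen deterministic M, L and R, and collect the eigenvalues. The sketch below does exactly that for one simple non-normal choice of M and diagonal L, R; the specific matrices and the gain are illustrative assumptions, not the paper's examples.

    import numpy as np

    def sample_spectrum(N=400, n_draws=20, g=0.5, seed=10):
        """Eigenvalues of A = M + g*L J R with J iid N(0, 1/N), for a toy non-normal
        mean connectivity M (a feedforward shift) and diagonal L and R."""
        rng = np.random.default_rng(seed)
        M = np.diag(np.full(N - 1, 1.0), k=1)             # non-normal mean structure
        L = np.diag(1.0 + 0.5 * np.sin(np.arange(N)))     # separable row gains
        R = np.diag(1.0 + 0.5 * np.cos(np.arange(N)))     # separable column gains
        eigs = []
        for _ in range(n_draws):
            J = rng.standard_normal((N, N)) / np.sqrt(N)
            eigs.append(np.linalg.eigvals(M + g * (L @ J @ R)))
        return np.concatenate(eigs)

    eigs = sample_spectrum()
    print(f"{eigs.size} eigenvalues sampled, spectral radius ~ {np.abs(eigs).max():.2f}")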
Bilevel Model-Based Discriminative Dictionary Learning for Recognition.
Zhou, Pan; Zhang, Chao; Lin, Zhouchen
2017-03-01
Most supervised dictionary learning methods optimize the combinations of reconstruction error, sparsity prior, and discriminative terms. Thus, the learnt dictionaries may not be optimal for recognition tasks. Also, the sparse codes learning models in the training and the testing phases are inconsistent. Besides, without utilizing the intrinsic data structure, many dictionary learning methods only employ the ℓ0 or ℓ1 norm to encode each datum independently, limiting the performance of the learnt dictionaries. We present a novel bilevel model-based discriminative dictionary learning method for recognition tasks. The upper level directly minimizes the classification error, while the lower level uses the sparsity term and the Laplacian term to characterize the intrinsic data structure. The lower level is subordinate to the upper level. Therefore, our model achieves an overall optimality for recognition in that the learnt dictionary is directly tailored for recognition. Moreover, the sparse codes learning models in the training and the testing phases can be the same. We further propose a novel method to solve our bilevel optimization problem. It first replaces the lower level with its Karush-Kuhn-Tucker conditions and then applies the alternating direction method of multipliers to solve the equivalent problem. Extensive experiments demonstrate the effectiveness and robustness of our method.
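The ℓ1 sparse-coding subproblem that such methods solve for each datum (with the dictionary held fixed) can be written in a few lines of ISTA; this is a generic building block under assumed sizes, not the bilevel KKT/ADMM procedure proposed in the paper.

    import numpy as np

    def ista_sparse_code(D, x, lam=0.1, n_iter=200):
        """min_a 0.5*||x - D a||^2 + lam*||a||_1 via iterative soft-thresholding."""
        step = 1.0 / np.linalg.norm(D, 2) ** 2
        a = np.zeros(D.shape[1])
        for _ in range(n_iter):
            z = a - step * (D.T @ (D @ a - x))                        # gradient step on the data term
            a = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft-threshold
        return a

    # toy usage: overcomplete random dictionary, three active atoms
    rng = np.random.default_rng(11)
    D = rng.standard_normal((64, 128)); D /= np.linalg.norm(D, axis=0)
    a_true = np.zeros(128); a_true[[5, 40, 100]] = [1.0, -0.7, 0.5]
    x = D @ a_true + 0.01 * rng.standard_normal(64)
    print(np.argsort(np.abs(ista_sparse_code(D, x)))[-3:])   # strongest recovered atoms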
Gibbon, John D; Pal, Nairita; Gupta, Anupam; Pandit, Rahul
2016-12-01
We consider the three-dimensional (3D) Cahn-Hilliard equations coupled to, and driven by, the forced, incompressible 3D Navier-Stokes equations. The combination, known as the Cahn-Hilliard-Navier-Stokes (CHNS) equations, is used in statistical mechanics to model the motion of a binary fluid. The potential development of singularities (blow-up) in the contours of the order parameter ϕ is an open problem. To address this we have proved a theorem that closely mimics the Beale-Kato-Majda theorem for the 3D incompressible Euler equations [J. T. Beale, T. Kato, and A. J. Majda, Commun. Math. Phys. 94, 61 (1984)]. By taking an L^∞ norm of the energy of the full binary system, designated as E_∞, we have shown that ∫₀ᵗ E_∞(τ) dτ governs the regularity of solutions of the full 3D system. Our direct numerical simulations (DNSs) of the 3D CHNS equations for (a) a gravity-driven Rayleigh-Taylor instability and (b) a constant-energy-injection forcing, with 128³ to 512³ collocation points and over the duration of our DNSs, confirm that E_∞ remains bounded as far as our computations allow.
Lerdal, Anners; Andenæs, Randi; Bjørnsborg, Eva; Bonsaksen, Tore; Borge, Lisbet; Christiansen, Bjørg; Eide, Hilde; Hvinden, Kari; Fagermoen, May Solveig
2011-10-01
To explore relationships of socio-demographic variables, health behaviours, environmental characteristics and personal factors, with physical and mental health variables in persons with morbid obesity, and to compare their health-related quality of life (HRQoL) scores with scores from the general population. A cross-sectional correlation study design was used. Data were collected by self-reported questionnaire from adult patients within the first 2 days of commencement of a mandatory educational course. Of 185 course attendees, 142 (76.8%) volunteered to participate in the study. Valid responses on all items were recorded for 128 participants. HRQoL was measured with the Short Form 12v2 from which physical (PCS) and mental component summary (MCS) scores were computed. Other standardized instruments measured regular physical activity, social support, self-esteem, sense of coherence, self-efficacy and coping style. Respondents scored lower on all the HRQoL sub-domains compared with norms. Linear regression analyses showed that personal factors that included self-esteem, self-efficacy, sense of coherence and coping style explained 3.6% of the variance in PCS scores and 41.6% in MCS scores. Personal factors such as self-esteem, sense of coherence and a high approaching coping style are strongly related to mental health in obese persons.
Baenziger, Julia; Roser, Katharina; Mader, Luzius; Christen, Salome; Kuehni, Claudia E; Gumy-Pause, Fabienne; Tinner, Eva Maria; Michel, Gisela
2018-06-01
Childhood cancer survivors are at high risk for late effects. Regular attendance to long-term follow-up care is recommended and helps monitoring survivors' health. Using the theory of planned behavior, we aimed to (1) investigate the predictors of the intention to attend follow-up care, and (2) examine the associations between perceived control and behavioral intention with actual follow-up care attendance in Swiss childhood cancer survivors. We conducted a questionnaire survey in Swiss childhood cancer survivors (diagnosed with cancer aged <16 years between 1990 and 2005; ≥5 years since diagnosis). We assessed theory of planned behavior-related predictors (attitude, subjective norm, perceived control), intention to attend follow-up care, and actual attendance. We applied structural equation modeling to investigate predictors of intention, and logistic regression models to study the association between intention and actual attendance. Of 299 responders (166 [55.5%] females), 145 (48.5%) reported attending follow-up care. We found that subjective norm, ie, survivors' perceived social pressure and support (coef = 0.90, P < 0.001), predicted the intention to attend follow-up; attitude and perceived control did not. Perceived control (OR = 1.58, 95%CI:1.04-2.41) and intention to attend follow-up (OR = 6.43, 95%CI:4.21-9.81) were positively associated with attendance. To increase attendance, an effort should be made to sensitize partners, friends, parents, and health care professionals on their important role in supporting survivors regarding follow-up care. Additionally, interventions promoting personal control over the follow-up attendance might further increase regular attendance. Copyright © 2018 John Wiley & Sons, Ltd.
Search Alternatives and Beyond
ERIC Educational Resources Information Center
Bell, Steven J.
2006-01-01
Internet search has become a routine computing activity, with regular visits to a search engine--usually Google--the norm for most people. The vast majority of searchers, as recent studies of Internet search behavior reveal, search only in the most basic of ways and fail to avail themselves of options that could easily and effortlessly improve…
Reflecting on the Role of Competence and Culture in Consultation at the International Level
ERIC Educational Resources Information Center
Rosenfield, Sylvia
2014-01-01
International educational consultation is challenging work that requires not only attention to best practices in consultation but also additional focus on cultural norms and regularities. In the three articles of this special issue, the consultation competencies of consultants play a critical role, as exemplified by entry issues, problem-solving…
Depictions of Human Bodies in the Illustrations of Early Childhood Textbooks
ERIC Educational Resources Information Center
Martínez-Bello, Vladimir E.; Martínez-Bello, Daniel A.
2016-01-01
In many Ibero-American countries children in the early childhood education (ECE) system have the opportunity to interact with textbooks on a regular basis. The powerful social function of textbooks in socializing children in primary and secondary school, and in legitimizing what counts as cultural norms and officially sanctioned values and…
Hawthorne, Graeme; Korn, Sam; Richardson, Jeff
2013-02-01
To provide Australian health-related quality of life (HRQoL) population norms, based on utility scores from the Assessment of Quality of Life (AQoL) measure, a participant-reported outcomes (PRO) instrument. The data were from the 2007 National Survey of Mental Health and Wellbeing. AQoL scores were analysed by age cohorts, gender, other demographic characteristics, and mental and physical health variables. The AQoL utility score mean was 0.81 (95%CI 0.81-0.82), and 47% obtained scores indicating a very high HRQoL (>0.90). HRQoL gently declined by age group, with older adults' scores indicating lower HRQoL. Based on effect sizes (ESs), there were small losses in HRQoL associated with other demographic variables (e.g. by lack of labour force participation, ES(median) : 0.27). Those with current mental health syndromes reported moderate losses in HRQoL (ES(median) : 0.64), while those with physical health conditions generally also reported moderate losses in HRQoL (ES(median) : 0.41). This study has provided contemporary Australian population norms for HRQoL that may be used by researchers as indicators allowing interpretation and estimation of population health (e.g. estimation of the burden of disease), cross comparison between studies, the identification of health inequalities, and to provide benchmarks for health care interventions. © 2013 The Authors. ANZJPH © 2013 Public Health Association of Australia.
NASA Astrophysics Data System (ADS)
Chai, Xintao; Tang, Genyang; Peng, Ronghua; Liu, Shaoyong
2018-03-01
Full-waveform inversion (FWI) reconstructs the subsurface properties from acquired seismic data via minimization of the misfit between observed and simulated data. However, FWI suffers from considerable computational costs resulting from the numerical solution of the wave equation for each source at each iteration. To reduce the computational burden, constructing supershots by combining several sources (aka source encoding) reduces the number of simulations at each iteration, but it gives rise to crosstalk artifacts because of interference between the individual sources of the supershot. A modified Gauss-Newton FWI (MGNFWI) approach showed that, as long as the difference between the initial and true models permits a sparse representation, ℓ1-norm constrained model updates suppress subsampling-related artifacts. However, the spectral-projected gradient ℓ1 (SPGℓ1) algorithm employed by MGNFWI is rather complicated, which makes its implementation difficult. To facilitate realistic applications, we adapt a linearized Bregman (LB) method to sparsity-promoting FWI (SPFWI) because of the efficiency and simplicity of LB in the framework of ℓ1-norm constrained optimization and compressive sensing. Numerical experiments performed with the BP Salt model, the Marmousi model and the BG Compass model verify the following points. The FWI result obtained with LB solving the ℓ1-norm sparsity-promoting problem for the model update outperforms that generated by solving the ℓ2-norm problem in terms of crosstalk elimination and high-fidelity results. The simpler LB method performs comparably, and even superiorly, to the more complicated SPGℓ1 method in terms of computational efficiency and model quality, making the LB method a viable alternative for realistic implementations of SPFWI.
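Because the abstract leans on the linearized Bregman (LB) iteration for the ℓ1-constrained model update, a minimal sketch of the generic LB scheme is given below. The operator A, data b, parameters mu/delta and the tiny test problem are illustrative placeholders, not the authors' FWI operators or settings; this is a sketch of the generic algorithm only.

```python
import numpy as np

def soft_threshold(v, t):
    """Componentwise shrinkage: sign(v) * max(|v| - t, 0)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def linearized_bregman(A, b, mu=5.0, delta=None, n_iter=2000):
    """Generic linearized Bregman iteration for
    min mu*||x||_1 + (1/(2*delta))*||x||_2^2  subject to  A x = b.

    A, b, mu, delta and n_iter are illustrative placeholders, not the
    authors' FWI-specific choices.
    """
    m, n = A.shape
    if delta is None:
        delta = 1.0 / np.linalg.norm(A, 2) ** 2   # conservative step scale
    v = np.zeros(n)
    x = np.zeros(n)
    for _ in range(n_iter):
        v = v + A.T @ (b - A @ x)                 # accumulate the data residual
        x = delta * soft_threshold(v, mu)         # shrinkage promotes sparsity
    return x

# Tiny compressive-sensing-style usage example with a sparse ground truth.
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100))
x_true = np.zeros(100); x_true[[3, 17, 58]] = [1.0, -2.0, 0.5]
b = A @ x_true
x_hat = linearized_bregman(A, b)
```

The appeal noted in the abstract is visible in the sketch: each iteration needs only one application of A and one of its adjoint plus a componentwise shrinkage, with no line search or subproblem solver.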
NASA Astrophysics Data System (ADS)
Barker, T.
2018-03-01
The main subject of this paper concerns the establishment of certain classes of initial data which grant short-time uniqueness of the associated weak Leray-Hopf solutions of the three-dimensional Navier-Stokes equations. In particular, our main theorem states that this holds for any solenoidal initial data, with finite L_2(R^3) norm, that also belongs to certain subsets of VMO^{-1}(R^3). As a corollary of this, we obtain the same conclusion for any solenoidal u_0 belonging to L_2(R^3) ∩ Ḃ^{-1+3/p}_{p,∞}(R^3), for any 3
The L0 Regularized Mumford-Shah Model for Bias Correction and Segmentation of Medical Images.
Duan, Yuping; Chang, Huibin; Huang, Weimin; Zhou, Jiayin; Lu, Zhongkang; Wu, Chunlin
2015-11-01
We propose a new variant of the Mumford-Shah model for simultaneous bias correction and segmentation of images with intensity inhomogeneity. First, based on the model of images with intensity inhomogeneity, we introduce an L0 gradient regularizer to model the true intensity and a smooth regularizer to model the bias field. In addition, we derive a new data fidelity term using the local intensity properties to allow the bias field to be influenced by its neighborhood. Second, we use a two-stage segmentation method, where the fast alternating direction method is implemented in the first stage for the recovery of true intensity and bias field, and a simple thresholding is used in the second stage for segmentation. Different from most of the existing methods for simultaneous bias correction and segmentation, we estimate the bias field and true intensity without fixing either the number of the regions or their values in advance. Our method has been validated on medical images of various modalities with intensity inhomogeneity. Compared with the state-of-the-art approaches and the well-known brain software tools, our model is fast, accurate, and robust to initializations.
Lee, Hyun Cheol; Yoo, Do Hyeon; Testa, Mauro; Shin, Wook-Geun; Choi, Hyun Joon; Ha, Wi-Ho; Yoo, Jaeryong; Yoon, Seokwon; Min, Chul Hee
2016-04-01
The aim of this study is to evaluate the potential hazard of naturally occurring radioactive material (NORM)-added consumer products. Using the Monte Carlo method, the radioactive products were simulated with the ICRP reference phantom and the organ doses were calculated for the usage scenario. Finally, the annual effective doses were evaluated to be lower than the public dose limit of 1 mSv y^-1 for 44 products. It was demonstrated that NORM-added consumer products can be quantitatively assessed for safety regulation. Copyright © 2016 Elsevier Ltd. All rights reserved.
Lotfi, Zeghari; Aboussaleh, Youssef; Sbaibi, Rachid; Achouri, Imane; Benguedour, Rachid
2017-01-01
Introduction: Diabetes is defined as a disorder of the assimilation, utilization and storage of dietary sugars; its management relies on monitoring overweight and obesity and on regular glycemic control. The objective of this work was to study overweight, obesity and glycemic control in 2227 diabetic patients of different types (type 1, type 2 and gestational) attending the provincial diabetes reference center (CRD) of Kénitra, Morocco. Methods: The study took place over a one-year period from January to December 2015. Overweight and obesity were assessed by calculating the body mass index (BMI = weight/height2 (kg/m2)); they are defined, respectively, as BMI > 25 kg/m2 and BMI > 30 kg/m2. Weight and height were measured according to the recommendations of the World Health Organization (WHO). Glycemic control was assessed by blood analysis of glycated hemoglobin and fasting blood glucose. The norms are 7% for glycated hemoglobin and 0.70 g/l to 1.10 g/l for fasting blood glucose. Results: The patients' ages ranged from 8 months to 80 years, with a predominance of diabetics from urban areas (74%) compared with rural areas (26%). Overweight affected this entire population. The mean BMI of women tended towards obesity (BMI ≈ 30): 29.21 kg/m2 ± 3.1 for gestational diabetes and 29.15 kg/m2 ± 3.2 for type 2 diabetes. Glycemic control values were above the norms: 8.5% ± 2.6 > 7% for glycated hemoglobin and 1.5 g/l ± 1.3 > 1.10 g/l for fasting blood glucose. The difference in glycated hemoglobin values between men (8.5% ± 2.6) and women (8.1% ± 2.3) was not significant (P > 0.05), and the same held for fasting capillary blood glucose: 1.44 g/l ± 1.1 for men and 1.43 g/l ± 1.2 for women. The Pearson correlation coefficients were highly significant (P < 0.005), both between BMI and fasting blood glucose (r = 0.5) and between BMI and glycated hemoglobin values (r = 0.4). Conclusion: These diabetic patients as a whole show BMI and glycemic control values above the norms. Further research on these patients is needed in order to draw up an urgent remediation program. PMID:28904714
Lin, Liyuan; Han, Xiaojiao; Chen, Yicun; Wu, Qingke; Wang, Yangdong
2013-12-01
Quantitative real-time PCR has emerged as a highly sensitive and widely used method for detection of gene expression profiles, for which accurate detection depends on reliable normalization. Since no single control is appropriate for all experimental treatments, it is generally advocated to select suitable internal controls prior to use for normalization. This study reported the evaluation of the expression stability of twelve potential reference genes in different tissues/organs and six fruit developmental stages of Litsea cubeba in order to screen the superior internal reference genes for data normalization. Two software packages, geNorm and NormFinder, were used to assess the stability of these candidate genes. The cycle threshold difference and coefficient of variation were also calculated to evaluate the expression stability of candidate genes. F-BOX, EF1α, UBC, and TUA were selected as the most stable reference genes across 11 sample pools. F-BOX, EF1α, and EIF4α exhibited the highest expression stability in different tissues/organs and different fruit developmental stages. Moreover, a combination of two stable reference genes would be sufficient for gene expression normalization in different fruit developmental stages. In addition, the relative expression profiles of DXS and DXR were evaluated by EF1α, UBC, and SAMDC. The results further validated the reliability of stable reference genes and also highlighted the importance of selecting suitable internal controls for L. cubeba. These reference genes will be of great importance for transcript normalization in future gene expression studies on L. cubeba.
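For readers unfamiliar with how geNorm ranks candidate reference genes, a minimal sketch of its gene-stability measure M (the average pairwise variation of a gene with all other candidates) is given below. The expression matrix, sample counts and random values are illustrative assumptions, not the authors' data or the published software.

```python
import numpy as np

def genorm_m_values(expr, eps=1e-12):
    """geNorm-style stability measure M for candidate reference genes.

    expr: (n_samples, n_genes) array of relative expression quantities on a
          linear scale. For each gene j, M_j is the mean, over all other genes
          k, of the standard deviation across samples of
          log2(expr[:, j] / expr[:, k]). Lower M means more stable expression.
    Simplified re-implementation for illustration only.
    """
    expr = np.asarray(expr, dtype=float) + eps
    n_genes = expr.shape[1]
    log_expr = np.log2(expr)
    M = np.zeros(n_genes)
    for j in range(n_genes):
        sds = [np.std(log_expr[:, j] - log_expr[:, k], ddof=1)
               for k in range(n_genes) if k != j]
        M[j] = np.mean(sds)
    return M

# Hypothetical usage with made-up expression values for four candidate genes
# measured across 11 sample pools.
rng = np.random.default_rng(1)
expr = np.exp(rng.normal(size=(11, 4)))
M = genorm_m_values(expr)
ranking = np.argsort(M)          # indices of the most stable genes first
```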
Assessing Health-Related Quality of Life of Chinese Adults in Heilongjiang Using EQ-5D-3L.
Huang, Weidong; Yu, Hongjuan; Liu, Chaojie; Liu, Guoxiang; Wu, Qunhong; Zhou, Jin; Zhang, Xin; Zhao, Xiaowen; Shi, Linmei; Xu, Xiaoxue
2017-02-23
This study aimed to assess health-related quality of life (HRQOL) of Heilongjiang adult populations by using the EuroQol five-dimension three-level (EQ-5D-3L) questionnaire and to identify factors associated with HRQOL. Data from the National Health Services Survey (NHSS) 2008 in Heilongjiang province were obtained. Results of EQ-5D-3L questionnaires completed by 11,523 adult respondents (18 years or older) were converted to health index scores using a recently developed Chinese value set. Multivariate linear regression and logistic regression models were established to determine demographic, socioeconomic, health, and lifestyle factors that were associated with HRQOL and reported problems in the five dimensions of EQ-5D-3L. The Heilongjiang population had a mean EQ-5D-3L index score of 0.959. Lower EQ-5D-3L index scores were associated with older age, lower levels of education, chronic conditions, temporary accommodation, poverty, unemployment, and lack of regular physical activities. Older respondents and those who were unemployed, had chronic conditions, and lived in poverty were more likely to report problems in all of the five health dimensions. Higher educational attainment was associated with lower odds of reporting health problems in mobility, pain/discomfort, and anxiety/depression. Low socioeconomic status is associated with poor HRQOL. Regional population norms for EQ-5D-3L are needed for health economic studies due to great socioeconomic disparities across regions in China. Overall, the Heilongjiang population has a similar level of HRQOL compared with the national average.
Distance estimation and collision prediction for on-line robotic motion planning
NASA Technical Reports Server (NTRS)
Kyriakopoulos, K. J.; Saridis, G. N.
1992-01-01
An efficient method for computing the minimum distance and predicting collisions between moving objects is presented. This problem is incorporated into the framework of an on-line motion-planning algorithm to satisfy collision avoidance between a robot and moving objects modeled as convex polyhedra. First, the deterministic problem, in which the information about the objects is assumed to be certain, is examined. L1 or L∞ norms are used to represent distance, and the problem becomes a linear programming problem. The stochastic problem is then formulated, where the uncertainty is induced by sensing and the unknown dynamics of the moving obstacles. Two problems are considered: first, filtering of the distance between the robot and the moving object at the present time; second, prediction of the minimum distance in the future in order to predict the collision time.
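A small sketch may clarify how the minimum distance between two convex polyhedra becomes a linear program once an L1 or L∞ norm is used. The vertex lists, the SciPy solver call and the L∞ choice below are illustrative assumptions rather than the paper's original formulation.

```python
import numpy as np
from scipy.optimize import linprog

def linf_distance(V, W):
    """Minimum L-infinity distance between conv(V) and conv(W).

    V: (n1, d) vertices of the first polyhedron (e.g. a robot link),
    W: (n2, d) vertices of the second (e.g. a moving obstacle).
    Points are parameterized as convex combinations x = V^T lam, y = W^T mu,
    and we minimize t subject to |x_k - y_k| <= t for every coordinate k,
    which is a linear program.
    """
    n1, d = V.shape
    n2, _ = W.shape
    n_var = n1 + n2 + 1                       # variables: [lam, mu, t]
    c = np.zeros(n_var); c[-1] = 1.0          # minimize t

    # |(V^T lam - W^T mu)_k| <= t  ->  two inequalities per coordinate.
    A_ub = np.zeros((2 * d, n_var))
    for k in range(d):
        A_ub[2 * k, :n1] = V[:, k]
        A_ub[2 * k, n1:n1 + n2] = -W[:, k]
        A_ub[2 * k, -1] = -1.0
        A_ub[2 * k + 1] = -A_ub[2 * k]
        A_ub[2 * k + 1, -1] = -1.0
    b_ub = np.zeros(2 * d)

    # Convex-combination constraints: lam and mu each sum to one.
    A_eq = np.zeros((2, n_var))
    A_eq[0, :n1] = 1.0
    A_eq[1, n1:n1 + n2] = 1.0
    b_eq = np.array([1.0, 1.0])

    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None)] * n_var)
    return res.fun

# Two unit squares two units apart along x: the L-infinity distance is 1.
V = np.array([[0., 0.], [1., 0.], [1., 1.], [0., 1.]])
W = np.array([[2., 0.], [3., 0.], [3., 1.], [2., 1.]])
print(linf_distance(V, W))   # approximately 1.0
```

The L1 case is analogous, with one slack variable per coordinate instead of a single t.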
Parallel algorithm of real-time infrared image restoration based on total variation theory
NASA Astrophysics Data System (ADS)
Zhu, Ran; Li, Miao; Long, Yunli; Zeng, Yaoyuan; An, Wei
2015-10-01
Image restoration is a necessary preprocessing step for infrared remote sensing applications. Traditional methods allow us to remove the noise but penalize too much the gradients corresponding to edges. Image restoration techniques based on variational approaches can overcome this over-smoothing problem thanks to their well-defined mathematical modeling of the restoration procedure. The total variation (TV) of the infrared image is introduced as an L1 regularization term added to the objective energy functional. This converts the restoration process into an optimization problem for a functional involving a fidelity term to the image data plus a regularization term. Infrared image restoration with the TV-L1 model makes full use of the acquired remote sensing data and preserves edge information, such as the edges caused by clouds. The numerical implementation algorithm is presented in detail. Analysis indicates that the structure of this algorithm can easily be parallelized. Therefore, a parallel implementation of the TV-L1 filter based on a multicore architecture with shared memory is proposed for infrared real-time remote sensing systems. The massive computation on image data is performed in parallel by cooperating threads running simultaneously on multiple cores. Several groups of synthetic infrared image data are used to validate the feasibility and effectiveness of the proposed parallel algorithm. A quantitative analysis of the restored image quality relative to the input image is presented. Experimental results show that the TV-L1 filter can restore the varying background image reasonably, and that its performance can meet the requirements of real-time image processing.
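The fidelity-plus-regularization structure referred to above can be written compactly. The continuous form below is one common reading of a "TV-L1" energy and is given only as an orientation; f denotes the observed infrared image, u the restored image, and λ a regularization weight (symbols assumed, not taken from the paper, and the exact fidelity norm used by the authors is not reproduced here).

```latex
% An illustrative TV-L1 restoration energy:
\[
  E(u) \;=\;
  \underbrace{\int_{\Omega} \lvert u(x) - f(x) \rvert \, dx}_{\text{data fidelity}}
  \;+\;
  \lambda \underbrace{\int_{\Omega} \lvert \nabla u(x) \rvert \, dx}_{\text{total variation}} ,
\]
% minimized over u; a small lambda favors fidelity to the data, while a large
% lambda favors piecewise-smooth, edge-preserving solutions.
```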
Technical note: an R package for fitting sparse neural networks with application in animal breeding.
Wang, Yangfan; Mi, Xue; Rosa, Guilherme J M; Chen, Zhihui; Lin, Ping; Wang, Shi; Bao, Zhenmin
2018-05-04
Neural networks (NNs) have emerged as a new tool for genomic selection (GS) in animal breeding. However, the properties of NNs used in GS for the prediction of phenotypic outcomes are not well characterized, due to the over-parameterization of NNs and the difficulty of using whole-genome marker sets as high-dimensional NN input. In this note, we have developed an R package called snnR that finds an optimal sparse structure of a NN by minimizing the squared error subject to a penalty on the L1-norm of the parameters (weights and biases), thereby solving the problem of over-parameterization in NNs. We have also tested several models fitted with the snnR package to demonstrate their feasibility and effectiveness in a number of example cases. In a comparison of snnR with the R package brnn (Bayesian regularized single-layer NNs), with both using the entries of a genotype matrix or a genomic relationship matrix as inputs, snnR greatly improved the computational efficiency and the prediction ability for GS in animal breeding, because snnR implements a sparse NN with many hidden layers.
Sparse network-based models for patient classification using fMRI
Rosa, Maria J.; Portugal, Liana; Hahn, Tim; Fallgatter, Andreas J.; Garrido, Marta I.; Shawe-Taylor, John; Mourao-Miranda, Janaina
2015-01-01
Pattern recognition applied to whole-brain neuroimaging data, such as functional Magnetic Resonance Imaging (fMRI), has proved successful at discriminating psychiatric patients from healthy participants. However, predictive patterns obtained from whole-brain voxel-based features are difficult to interpret in terms of the underlying neurobiology. Many psychiatric disorders, such as depression and schizophrenia, are thought to be brain connectivity disorders. Therefore, pattern recognition based on network models might provide deeper insights and potentially more powerful predictions than whole-brain voxel-based approaches. Here, we build a novel sparse network-based discriminative modeling framework, based on Gaussian graphical models and L1-norm regularized linear Support Vector Machines (SVM). In addition, the proposed framework is optimized in terms of both predictive power and reproducibility/stability of the patterns. Our approach aims to provide better pattern interpretation than voxel-based whole-brain approaches by yielding stable brain connectivity patterns that underlie discriminative changes in brain function between the groups. We illustrate our technique by classifying patients with major depressive disorder (MDD) and healthy participants, in two (event- and block-related) fMRI datasets acquired while participants performed a gender discrimination and emotional task, respectively, during the visualization of emotionally valenced faces. PMID:25463459
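A compact sketch of the two-stage idea (sparse inverse-covariance connectivity estimation followed by an L1-regularized linear SVM) is shown below using scikit-learn. The feature construction, estimator choices, parameter values and random data are illustrative assumptions, not the authors' exact pipeline or their reproducibility optimization.

```python
import numpy as np
from sklearn.covariance import GraphicalLasso
from sklearn.svm import LinearSVC

def connectivity_features(region_timeseries, alpha=0.05):
    """Estimate a sparse Gaussian graphical model for one subject and return
    the upper-triangular entries of the precision matrix as features.

    region_timeseries: (n_timepoints, n_regions) array for one subject.
    alpha: sparsity level of the graphical lasso (assumed value).
    """
    gl = GraphicalLasso(alpha=alpha).fit(region_timeseries)
    prec = gl.precision_
    iu = np.triu_indices_from(prec, k=1)
    return prec[iu]

# Hypothetical data: 20 subjects, 200 timepoints, 10 regions, binary labels.
rng = np.random.default_rng(0)
X = np.array([connectivity_features(rng.standard_normal((200, 10)))
              for _ in range(20)])
y = rng.integers(0, 2, size=20)

# An L1-penalized linear SVM keeps only a sparse set of connections.
clf = LinearSVC(penalty="l1", dual=False, C=1.0, max_iter=10000).fit(X, y)
selected = np.flatnonzero(clf.coef_)      # indices of retained connections
```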
Conservative regularization of compressible dissipationless two-fluid plasmas
NASA Astrophysics Data System (ADS)
Krishnaswami, Govind S.; Sachdev, Sonakshi; Thyagaraja, A.
2018-02-01
This paper extends our earlier approach [cf. A. Thyagaraja, Phys. Plasmas 17, 032503 (2010) and Krishnaswami et al., Phys. Plasmas 23, 022308 (2016)] to obtaining a priori bounds on enstrophy in neutral fluids and ideal magnetohydrodynamics. This results in a far-reaching local, three-dimensional, non-linear, dispersive generalization of a KdV-type regularization to compressible/incompressible dissipationless 2-fluid plasmas and models derived therefrom (quasi-neutral, Hall, and ideal MHD). It involves the introduction of vortical and magnetic "twirl" terms λ_l² (w_l + (q_l/m_l) B) × (∇ × w_l) in the ion/electron velocity equations (l = i, e), where w_l are vorticities. The cut-off lengths λ_l and number densities n_l must satisfy λ_l² n_l = C_l, where C_l are constants. A novel feature is that the "flow" current Σ_l q_l n_l v_l in Ampère's law is augmented by a solenoidal "twirl" current Σ_l ∇ × ∇ × (λ_l² j_flow,l). The resulting equations imply conserved linear and angular momenta and a positive definite swirl energy density E*, which includes an enstrophic contribution Σ_l (1/2) λ_l² ρ_l w_l². It is shown that the equations admit a Hamiltonian-Poisson bracket formulation. Furthermore, singularities in ∇ × B are conservatively regularized by adding (λ_B²/2μ_0)(∇ × B)² to E*. Finally, it is proved that among regularizations that admit a Hamiltonian formulation and preserve the continuity equations along with the symmetries of the ideal model, the twirl term is unique and minimal in non-linearity and space derivatives of velocities.
Regularization by Functions of Bounded Variation and Applications to Image Enhancement
DOE Office of Scientific and Technical Information (OSTI.GOV)
Casas, E.; Kunisch, K.; Pola, C.
1999-09-15
Optimization problems regularized by bounded variation seminorms are analyzed. The optimality system is obtained and finite-dimensional approximations of bounded variation function spaces as well as of the optimization problems are studied. It is demonstrated that the choice of the vector norm in the definition of the bounded variation seminorm is of special importance for approximating subspaces consisting of piecewise constant functions. Algorithms based on a primal-dual framework that exploit the structure of these nondifferentiable optimization problems are proposed. Numerical examples are given for denoising of blocky images with very high noise.
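As a concrete instance of the class of problems analyzed above, the bounded-variation-regularized denoising problem can be written as follows. The notation is generic (f the noisy image, α the regularization weight); the discrete form shows where the choice of vector norm enters, which is the point emphasized in the abstract.

```latex
% Continuous BV-regularized denoising (generic illustrative form):
\[
  \min_{u \in BV(\Omega)} \;
  \frac{1}{2}\int_{\Omega} (u - f)^2 \, dx \;+\; \alpha \, |u|_{BV(\Omega)} .
\]
% In the discrete setting the BV seminorm becomes a sum over pixels of a
% vector norm of the discrete gradient, and the choice of that vector norm
% matters for piecewise-constant approximating subspaces:
\[
  |u|_{BV} \;\approx\; \sum_{i,j} \bigl\| (\nabla_h u)_{i,j} \bigr\|,
  \qquad
  \|(a,b)\|_2 = \sqrt{a^2+b^2} \ \text{(isotropic)}, \quad
  \|(a,b)\|_1 = |a|+|b| \ \text{(anisotropic)} .
\]
```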
Injunctive Norms and Alcohol Consumption: A Revised Conceptualization
Krieger, Heather; Neighbors, Clayton; Lewis, Melissa A.; LaBrie, Joseph W.; Foster, Dawn W.; Larimer, Mary E.
2016-01-01
Background Injunctive norms have been found to be important predictors of behaviors in many disciplines with the exception of alcohol research. This exception is likely due to a misconceptualization of injunctive norms for alcohol consumption. To address this, we outline and test a new conceptualization of injunctive norms and personal approval for alcohol consumption. Traditionally, injunctive norms have been assessed using Likert scale ratings of approval perceptions, whereas descriptive norms and individual behaviors are typically measured with behavioral estimates (i.e., number of drinks consumed per week, frequency of drinking, etc.). This makes comparisons between these constructs difficult because they are not similar conceptualizations of drinking behaviors. The present research evaluated a new representation of injunctive norms with anchors comparable to descriptive norms measures. Methods A study and a replication were conducted including 2,559 and 1,189 undergraduate students from three different universities. Participants reported on their alcohol-related consumption behaviors, personal approval of drinking, and descriptive and injunctive norms. Personal approval and injunctive norms were measured using both traditional measures and a new drink-based measure. Results Results from both studies indicated that drink-based injunctive norms were uniquely and positively associated with drinking whereas traditionally assessed injunctive norms were negatively associated with drinking. Analyses also revealed significant unique associations between drink-based injunctive norms and personal approval when controlling for descriptive norms. Conclusions These findings provide support for a modified conceptualization of personal approval and injunctive norms related to alcohol consumption and, importantly, offer an explanation and practical solution for the small and inconsistent findings related to injunctive norms and drinking in past studies. PMID:27030295
Revealing small-scale diffracting discontinuities by an optimization inversion algorithm
NASA Astrophysics Data System (ADS)
Yu, Caixia; Zhao, Jingtao; Wang, Yanfei
2017-02-01
Small-scale diffracting geologic discontinuities play a significant role in studying carbonate reservoirs. Their seismic responses are encoded in diffracted/scattered waves. However, compared with reflections, the energy of these valuable diffractions is generally one or even two orders of magnitude weaker. This means that the information carried by diffractions is strongly masked by reflections in seismic images. Detecting small-scale cavities and tiny faults in deep carbonate reservoirs, mainly below 6 km, poses an even bigger challenge for seismic diffractions, as the signals in the surveyed seismic data are weak and have a low signal-to-noise ratio (SNR). After analyzing the mechanism of the Kirchhoff migration method, the residual of prestack diffractions located in the neighborhood of the first Fresnel aperture is found to remain in the image space. Therefore, a strategy for extracting diffractions in the image space is proposed and a regularized L2-norm model with a smoothness constraint on the local slopes is suggested for predicting reflections. According to the focusing conditions of residual diffractions in the image space, two approaches are provided for extracting diffractions. Diffraction extraction can be directly accomplished by subtracting the predicted reflections from seismic imaging data if the residual diffractions are focused. Otherwise, a diffraction velocity analysis will be performed for refocusing residual diffractions. Two synthetic examples and one field application demonstrate the feasibility and efficiency of the two proposed methods in detecting the small-scale geologic scatterers, tiny faults and cavities.
Longitudinal Relationships Among Perceived Injunctive and Descriptive Norms and Marijuana Use
Napper, Lucy E.; Kenney, Shannon R.; Hummer, Justin F.; Fiorot, Sara; LaBrie, Joseph W.
2016-01-01
Objective: The current study uses longitudinal data to examine the relative influence of perceived descriptive and injunctive norms for proximal and distal referents on marijuana use. Method: Participants were 740 undergraduate students (67% female) who completed web-based surveys at two time points 12 months apart. Time 1 measures included reports of marijuana use, approval, perceived descriptive norms, and perceived injunctive norms for the typical student, close friends, and parents. At Time 2, students reported on their marijuana use. Results: Results of a path analysis suggest that, after we controlled for Time 1 marijuana use, greater perceived friend approval indirectly predicted Time 2 marijuana use as mediated by personal approval. Greater perceived parental approval was both indirectly and directly associated with greater marijuana use at follow-up. Perceived typical-student descriptive norms were neither directly nor indirectly related to Time 2 marijuana use. Conclusions: The findings support the role of proximal injunctive norms in predicting college student marijuana use up to 12 months later. The results indicate the potential importance of developing normative interventions that incorporate the social influences of proximal referents. PMID:27172578
Color TV: total variation methods for restoration of vector-valued images.
Blomgren, P; Chan, T F
1998-01-01
We propose a new definition of the total variation (TV) norm for vector-valued functions that can be applied to restore color and other vector-valued images. The new TV norm has the desirable properties of 1) not penalizing discontinuities (edges) in the image, 2) being rotationally invariant in the image space, and 3) reducing to the usual TV norm in the scalar case. Some numerical experiments on denoising simple color images in red-green-blue (RGB) color space are presented.
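For orientation, one commonly used channel-coupled vector total variation is written below. It satisfies the three properties listed in the abstract but is not necessarily the exact norm proposed in the paper; it is included only to make the notion of a vector-valued TV concrete.

```latex
% A common channel-coupled total variation for a vector image
% u = (u_1, ..., u_M) (e.g. M = 3 for RGB), given for illustration only:
\[
  TV(u) \;=\; \int_{\Omega}
  \Bigl( \sum_{m=1}^{M} \lvert \nabla u_m(x) \rvert^{2} \Bigr)^{1/2} dx ,
\]
% which does not penalize jump discontinuities more than smooth ramps of the
% same height, is rotationally invariant in image space, and reduces to the
% usual scalar TV when M = 1.
```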
Boero, Ferdinando
2013-01-01
Natural history is based on observations, whereas modern ecology is mostly based on experiments aimed at testing hypotheses, either in the field or in a computer. Furthermore, experiments often reveal generalities that are taken as norms. Ecology, however, is a historical discipline and history is driven by both regularities (deriving from norms) and irregularities, or contingencies, which occur when norms are broken. If only norms occurred, there would be no history. The current disregard for the importance of contingencies and anecdotes is preventing us from understanding ecological history. We need rules and norms, but we also need records about apparently irrelevant things that, in non-linear systems like ecological ones, might become the drivers of change and, thus, the determinants of history. The same arguments also hold in the field of evolutionary biology, with natural selection being the ecological driver of evolutionary change. It is important that scientists are able to publish potentially important observations, particularly those that are unrelated to their current projects, lack sufficient grounds to be framed into a classical eco-evolutionary paper, and could feasibly impact on the history of the systems in which they occurred. A report on any deviation from the norm would be welcome, from the disappearance of species to their sudden appearance in great quantities. Any event that an "expert eye" (i.e. the eye of a naturalist) might judge as potentially important is worth being reported.
Female non-regular workers in Japan: their current status and health.
Inoue, Mariko; Nishikitani, Mariko; Tsurugano, Shinobu
2016-12-07
The participation of women in the Japanese labor force is characterized by its M-shaped curve, which reflects decreased employment rates during child-rearing years. Although this M-shaped curve is now improving, the majority of women in employment are likely to fall into the category of non-regular workers. Based on a review of previous Japanese studies of the health of non-regular workers, we found that non-regular female workers experienced greater psychological distress, poorer self-rated health, a higher smoking rate, and less access to preventive medicine than regular workers did. However, despite the large number of non-regular workers, there is limited research regarding their health. In contrast, several studies in Japan concluded that regular workers also had worse health conditions due to the additional responsibility and longer work hours associated with the job, housekeeping, and child rearing. The health of non-regular workers might be threatened by the effects of precarious employment status, lower income, a lower safety net, outdated social norms regarding non-regular workers, and difficulty in achieving a work-life balance. A sector-wide social approach that considers life-course aspects is needed to protect female workers' health and well-being; promotion of an occupational health program alone is insufficient.
Limited-memory fast gradient descent method for graph regularized nonnegative matrix factorization.
Guan, Naiyang; Wei, Lei; Luo, Zhigang; Tao, Dacheng
2013-01-01
Graph regularized nonnegative matrix factorization (GNMF) decomposes a nonnegative data matrix X ∈ R^(m×n) into the product of two lower-rank nonnegative factor matrices, i.e., W ∈ R^(m×r) and H ∈ R^(r×n) (r < min{m,n}), and aims to preserve the local geometric structure of the dataset by minimizing the squared Euclidean distance or Kullback-Leibler (KL) divergence between X and WH. The multiplicative update rule (MUR) is usually applied to optimize GNMF, but it suffers from slow convergence because it intrinsically advances one step along the rescaled negative gradient direction with a non-optimal step size. Recently, a multiple step-sizes fast gradient descent (MFGD) method has been proposed for optimizing NMF, which accelerates MUR by searching for the optimal step size along the rescaled negative gradient direction with Newton's method. However, the computational cost of MFGD is high because 1) the high-dimensional Hessian matrix is dense and costs too much memory; and 2) the Hessian inverse operator and its multiplication with the gradient cost too much time. To overcome these deficiencies of MFGD, we propose an efficient limited-memory FGD (L-FGD) method for optimizing GNMF. In particular, we apply the limited-memory BFGS (L-BFGS) method to directly approximate the multiplication of the inverse Hessian and the gradient for searching the optimal step size in MFGD. The preliminary results on real-world datasets show that L-FGD is more efficient than both MFGD and MUR. To evaluate the effectiveness of L-FGD, we validate its clustering performance for optimizing KL-divergence based GNMF on two popular face image datasets, ORL and PIE, and two text corpora, Reuters and TDT2. The experimental results confirm the effectiveness of L-FGD by comparing it with representative GNMF solvers.
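To make the slow baseline that L-FGD aims to accelerate concrete, a sketch of the standard multiplicative update rules for graph-regularized NMF (squared Euclidean case) is given below. Matrix shapes follow the abstract (X ≈ WH), while the graph construction, parameter values and random data are illustrative assumptions.

```python
import numpy as np

def gnmf_mur(X, A, r, lam=1.0, n_iter=200, eps=1e-10, seed=0):
    """Multiplicative update rules (MUR) for graph-regularized NMF.

    Minimizes ||X - W H||_F^2 + lam * tr(H L H^T), where X (m x n) is
    nonnegative, W (m x r), H (r x n), A is an (n x n) symmetric nonnegative
    affinity matrix over the n samples, and L = D - A is its graph Laplacian.
    This is the plain MUR baseline; it converges slowly because each step is a
    rescaled gradient step with a fixed, non-optimal step size.
    """
    rng = np.random.default_rng(seed)
    m, n = X.shape
    W = rng.random((m, r)) + eps
    H = rng.random((r, n)) + eps
    D = np.diag(A.sum(axis=1))
    for _ in range(n_iter):
        W *= (X @ H.T) / (W @ H @ H.T + eps)
        H *= (W.T @ X + lam * H @ A) / (W.T @ W @ H + lam * H @ D + eps)
    return W, H

# Hypothetical usage: a small nonnegative data matrix and a sparse sample graph.
rng = np.random.default_rng(1)
X = rng.random((50, 30))
A = (rng.random((30, 30)) > 0.8).astype(float)
A = np.triu(A, 1); A = A + A.T            # symmetric affinity, zero diagonal
W, H = gnmf_mur(X, A, r=5, lam=0.5)
```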
Yum, Yen Na; Law, Sam-Po; Mo, Kwan Nok; Lau, Dustin; Su, I-Fan; Shum, Mark S K
2016-04-01
While Chinese character reading relies more on addressed phonology relative to alphabetic scripts, skilled Chinese readers also access sublexical phonological units during recognition of phonograms. However, sublexical orthography-to-phonology mapping has not been found among beginning second language (L2) Chinese learners. This study investigated character reading in more advanced Chinese learners whose native writing system is alphabetic. Phonological regularity and consistency were examined in behavioral responses and event-related potentials (ERPs) in lexical decision and delayed naming tasks. Participants were 18 native English speakers who acquired written Chinese after age 5 years and reached grade 4 Chinese reading level. Behaviorally, regular characters were named more accurately than irregular characters, but consistency had no effect. Similar to native Chinese readers, regularity effects emerged early with regular characters eliciting a greater N170 than irregular characters. Regular characters also elicited greater frontal P200 and smaller N400 than irregular characters in phonograms of low consistency. Additionally, regular-consistent characters and irregular-inconsistent characters had more negative amplitudes than irregular-consistent characters in the N400 and LPC time windows. The overall pattern of brain activities revealed distinct regularity and consistency effects in both tasks. Although orthographic neighbors are activated in character processing of L2 Chinese readers, the timing of their impact seems delayed compared with native Chinese readers. The time courses of regularity and consistency effects across ERP components suggest both assimilation and accommodation of the reading network in learning to read a typologically distinct second orthographic system.
Reconstructing cortical current density by exploring sparseness in the transform domain
NASA Astrophysics Data System (ADS)
Ding, Lei
2009-05-01
In the present study, we have developed a novel electromagnetic source imaging approach to reconstruct extended cortical sources by means of cortical current density (CCD) modeling and a novel EEG imaging algorithm which explores sparseness in cortical source representations through the use of L1-norm in objective functions. The new sparse cortical current density (SCCD) imaging algorithm is unique since it reconstructs cortical sources by attaining sparseness in a transform domain (the variation map of cortical source distributions). While large variations are expected to occur along boundaries (sparseness) between active and inactive cortical regions, cortical sources can be reconstructed and their spatial extents can be estimated by locating these boundaries. We studied the SCCD algorithm using numerous simulations to investigate its capability in reconstructing cortical sources with different extents and in reconstructing multiple cortical sources with different extent contrasts. The SCCD algorithm was compared with two L2-norm solutions, i.e. weighted minimum norm estimate (wMNE) and cortical LORETA. Our simulation data from the comparison study show that the proposed sparse source imaging algorithm is able to accurately and efficiently recover extended cortical sources and is promising to provide high-accuracy estimation of cortical source extents.
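The idea of enforcing sparseness in a transform domain rather than on the sources themselves can be summarized in one optimization problem. The symbols below (V for a discrete variation/gradient operator on the cortical mesh, L for the lead-field matrix, φ for the scalp EEG, ε for a noise level) are generic placeholders chosen to illustrate the structure, not the paper's exact formulation.

```latex
% Sparse cortical current density imaging in a transform domain (sketch):
\[
  \hat{s} \;=\; \arg\min_{s} \; \| V s \|_{1}
  \quad \text{subject to} \quad \| L s - \varphi \|_{2} \le \varepsilon ,
\]
% where s is the cortical current density and V s is its variation map. The
% L1 norm favors solutions whose variation is concentrated on the boundaries
% between active and inactive cortical patches, so source extents can be
% estimated by locating those boundaries.
```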
Cross-linguistic transfer of phonological skills: a Malaysian perspective.
Gomez, Caroline; Reason, Rea
2002-01-01
This study examined the phonological and reading performance in English of Malaysian children whose home language was Bahasa Malaysia (BM). A sample of 69 Malaysian Standard Two pupils (aged 7-8 years) was selected for the study. Since commencing school at the age of 6 years, the children had been learning to read in BM and had subsequently also been learning to read in English for some 12 months. The study was part of a larger scale research programme that fully recognized the limitations of tests that had not been developed and standardized in Malaysia. Nevertheless, as a first step to developing such tests, a comparison with existing norms for the Phonological Assessment Battery (PhAB) and the Wechsler Objective Reading Dimension (WORD) was undertaken in relation to information about the children's L1 and L2 language competencies. Results showed that the children's performance on PhAB was at least comparable to the UK norms while, not surprisingly, they fared less well on WORD. The results are discussed in terms of L1 and L2 transfer, whereby the transparency of written BM and the structured way in which reading is taught in BM facilitates performance on phonological tasks in English. This has implications for identifying children with phonologically based reading difficulties.
A new numerical treatment based on Lucas polynomials for 1D and 2D sinh-Gordon equation
NASA Astrophysics Data System (ADS)
Oruç, Ömer
2018-04-01
In this paper, a new mixed method based on Lucas and Fibonacci polynomials is developed for numerical solutions of 1D and 2D sinh-Gordon equations. Firstly, the time variable is discretized by central finite differences, and then the unknown function and its derivatives are expanded in a Lucas series. With the help of this series expansion and Fibonacci polynomials, matrices for differentiation are derived. With this approach, finding the solution of the sinh-Gordon equation is transformed into finding the solution of an algebraic system of equations. The Lucas series coefficients are acquired by solving this system of algebraic equations. Then, by plugging these coefficients into the Lucas series expansion, numerical solutions can be obtained successively. The main objective of this paper is to demonstrate that the Lucas polynomial based method is convenient for 1D and 2D nonlinear problems. By calculating the L2 and L∞ error norms for some 1D and 2D test problems, the efficiency and performance of the proposed method are monitored. The accurate results obtained confirm the applicability of the method.
A nation-wide survey of the chemical composition of drinking water in Norway.
Flaten, T P
1991-02-01
Water samples were collected from 384 waterworks that supply 70.9% of the Norwegian population. The samples were collected after water treatment and were analysed for 30 constituents. Although most constituents show wide concentration ranges, Norwegian drinking water is generally soft. The median values obtained are: 0.88 mg Si l-1, 0.06 mg Al l-1, 47 micrograms Fe l-1, 0.69 mg Mg l-1, 2.9 mg Ca l-1, 3.8 mg Na l-1, 6 micrograms Mn l-1, 12 micrograms Cu l-1, 14 micrograms Zn l-1, 9 micrograms Ba l-1, 15 micrograms Sr l-1, 0.14 mg K l-1, 58 micrograms F- l-1, 6.4 mg Cl- l-1, 11 micrograms Br- l-1, 0.46 mg NO3- l-1, 5.3 mg SO4(2-) l-1, 2.4 mg TOC l-1, 6.8 (pH), 5 microsiemens cm-1 (conductivity) and 11 mg Pt l-1 (colour). Titanium, Pb, Ni, Co, V, Mo, Cd, Be and Li were seldom or never quantified, due to insufficient sensitivity of the ICP (inductively coupled plasma) method. Norwegian quality criteria, which exist for 17 of the constituents examined, are generally fulfilled, indicating that the chemical quality of drinking water, by and large, is good in Norway. For Fe, Ca, Mn, Cu, pH, TOC and colour, however, the norms for good drinking water are exceeded in more than 9% of the samples, reflecting two of the major problems associated with Norwegian drinking water supplies: (i) many water sources contain high concentrations of humic substances; (ii) in large parts of the country, the waters are soft and acidic, and therefore corrosive towards pipes, plumbing and other installations. Most constituents show marked regional distribution patterns, which are discussed in the light of different mechanisms contributing to the chemical composition of drinking water, namely: chemical weathering of mineral matter; atmospheric supply of salt particles from the sea; anthropogenic pollution (including acid precipitation); corrosion of water pipes and plumbing; water treatment; decomposition of organic matter; and hydrological differences.
Cortical dipole imaging using truncated total least squares considering transfer matrix error.
Hori, Junichi; Takeuchi, Kosuke
2013-01-01
Cortical dipole imaging has been proposed as a method to visualize the electroencephalogram with high spatial resolution. We investigated an inverse technique for cortical dipole imaging based on truncated total least squares (TTLS). TTLS is a regularization technique that reduces the influence of both the measurement noise and the transfer-matrix error caused by head-model distortion. The estimation of the regularization parameter was also investigated based on the L-curve. Computer simulations suggested that the estimation accuracy was improved by TTLS compared with Tikhonov regularization. The proposed method was applied to human experimental data of visual evoked potentials. We confirmed that TTLS provided high spatial resolution in cortical dipole imaging.
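A brief numerical sketch of the truncated total least squares step may help. The implementation below follows the standard SVD-based TTLS formula for a generic linear system Ax ≈ b in which both A (the transfer matrix) and b (the measurements) are treated as noisy; it is not the authors' code, and the truncation level (e.g. chosen via an L-curve) is left as an assumption.

```python
import numpy as np

def ttls(A, b, k):
    """Truncated total least squares solution of A x ~ b.

    A: (m, n) transfer matrix (possibly perturbed), b: (m,) measurements,
    k: truncation level with 1 <= k <= n, e.g. selected from an L-curve.
    Uses the SVD of the augmented matrix [A, b].
    """
    m, n = A.shape
    C = np.column_stack([A, b])
    _, _, Vt = np.linalg.svd(C, full_matrices=False)
    V = Vt.T                        # (n+1, n+1) right singular vectors
    V12 = V[:n, k:]                 # top-right block
    V22 = V[n:, k:]                 # bottom-right block, shape (1, n+1-k)
    # x_k = -V12 V22^+ ; for a single row, the pseudoinverse is V22^T/||V22||^2.
    return (-V12 @ V22.T / (V22 @ V22.T)).ravel()

# Hypothetical usage with a mildly ill-conditioned system and noise in both
# the matrix (head-model error) and the data (measurement noise).
rng = np.random.default_rng(0)
n = 20
U, _ = np.linalg.qr(rng.standard_normal((50, n)))
A_true = U * np.logspace(0, -6, n)          # decaying singular values
x_true = rng.standard_normal(n)
A = A_true + 1e-6 * rng.standard_normal(A_true.shape)
b = A_true @ x_true + 1e-6 * rng.standard_normal(50)
x_ttls = ttls(A, b, k=10)
```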
James, Kevin R; Dowling, David R
2008-09-01
In underwater acoustics, the accuracy of computational field predictions is commonly limited by uncertainty in environmental parameters. An approximate technique for determining the probability density function (PDF) of computed field amplitude, A, from known environmental uncertainties is presented here. The technique can be applied to several, N, uncertain parameters simultaneously, requires N+1 field calculations, and can be used with any acoustic field model. The technique implicitly assumes independent input parameters and is based on finding the optimum spatial shift between field calculations completed at two different values of each uncertain parameter. This shift information is used to convert uncertain-environmental-parameter distributions into PDF(A). The technique's accuracy is good when the shifted fields match well. Its accuracy is evaluated in range-independent underwater sound channels via an L1 error-norm defined between approximate and numerically converged results for PDF(A). In 50-m- and 100-m-deep sound channels with 0.5% uncertainty in depth (N=1) at frequencies between 100 and 800 Hz, and for ranges from 1 to 8 km, 95% of the approximate field-amplitude distributions generated L1 values less than 0.52 using only two field calculations. Obtaining comparable accuracy from traditional methods requires of order 10 field calculations and up to 10^N when N>1.
Clinical considerations for an infant oral health care program.
Ramos-Gomez, Francisco J
2005-05-01
The American Academy of Pediatric Dentistry and the American Academy of Pediatrics recommend dental assessments and evaluations for children during their first year of life. Early dental intervention evaluates a child's risk status based on parental interviews and oral examinations. These early screenings present an opportunity to educate parents about the medical, dental, and cost benefits of preventive, rather than restorative, care and may be more effective in reducing early childhood caries than traditional infectious disease models. A comprehensive infant oral care program includes: (1) risk assessments at regularly scheduled dental visits; (2) preventive treatments such as fluoride varnishes or sealants; (3) parental education on the correct methods to clean the baby's mouth; and (4) incentives to encourage participation in ongoing educational programming. Recruiting mothers during pregnancy improves the likelihood that they will participate in the assessment program. To maximize interest, trust, and success among participating parents, educational and treatment programs must be tailored to the social and cultural norms within the community being served.
Xu, Yuanyuan; Zhu, Xianwen; Gong, Yiqin; Xu, Liang; Wang, Yan; Liu, Liwang
2012-08-03
Real-time quantitative reverse transcription PCR (RT-qPCR) is a rapid and reliable method for gene expression studies. Normalization based on reference genes can increase the reliability of this technique; however, recent studies have shown that almost no single reference gene is universal for all possible experimental conditions. In this study, eight frequently used reference genes were investigated, including Glyceraldehyde-3-phosphate dehydrogenase (GAPDH), Actin2/7 (ACT), Tubulin alpha-5 (TUA), Tubulin beta-1 (TUB), 18S ribosomal RNA (18SrRNA), RNA polymerase-II transcription factor (RPII), Elongation factor 1-b (EF-1b) and Translation elongation factor 2 (TEF2). Expression stability of candidate reference genes was examined across 27 radish samples, representing a range of tissue types, cultivars, photoperiodic and vernalization treatments, and developmental stages. The eight genes in these sample pools displayed a wide range of Ct values and were variably expressed. Two statistical software packages, geNorm and NormFinder showed that TEF2, RPII and ACT appeared to be relatively stable and therefore the most suitable for use as reference genes. These results facilitate selection of desirable reference genes for accurate gene expression studies in radish. Copyright © 2012 Elsevier Inc. All rights reserved.
Sequential Dictionary Learning From Correlated Data: Application to fMRI Data Analysis.
Seghouane, Abd-Krim; Iqbal, Asif
2017-03-22
Sequential dictionary learning via the K-SVD algorithm has been revealed as a successful alternative to conventional data driven methods such as independent component analysis (ICA) for functional magnetic resonance imaging (fMRI) data analysis. fMRI datasets are, however, structured data matrices with notions of spatio-temporal correlation and temporal smoothness. This prior information has not been included in the K-SVD algorithm when applied to fMRI data analysis. In this paper we propose three variants of the K-SVD algorithm dedicated to fMRI data analysis by accounting for this prior information. The proposed algorithms differ from the K-SVD in their sparse coding and dictionary update stages. The first two algorithms account for the known correlation structure in the fMRI data by using the squared Q,R-norm instead of the Frobenius norm for matrix approximation. The third and last algorithm accounts for both the known correlation structure in the fMRI data and the temporal smoothness. The temporal smoothness is incorporated in the dictionary update stage via regularization of the dictionary atoms obtained with penalization. The performance of the proposed dictionary learning algorithms is illustrated through simulations and applications on real fMRI data.
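For orientation, a sketch of the plain K-SVD dictionary-update stage (the stage the proposed variants modify through weighted-norm approximation and smoothness penalties) is shown below. The sparse coding step is assumed to have been performed elsewhere, and all variable names and the random data are illustrative.

```python
import numpy as np

def ksvd_dictionary_update(Y, D, X):
    """One pass of the standard K-SVD dictionary update.

    Y: (n_features, n_signals) data matrix (e.g. fMRI time series as columns),
    D: (n_features, n_atoms) current dictionary,
    X: (n_atoms, n_signals) current sparse codes.
    Each atom and the nonzero entries of its coefficient row are refined by a
    rank-1 SVD of the residual restricted to the signals that use the atom.
    The proposed variants replace the Frobenius-norm approximation used here.
    """
    D = D.copy()
    X = X.copy()
    n_atoms = D.shape[1]
    for k in range(n_atoms):
        users = np.flatnonzero(X[k, :])        # signals that use atom k
        if users.size == 0:
            continue
        # Residual with atom k removed, restricted to the signals that use it.
        E_k = Y[:, users] - D @ X[:, users] + np.outer(D[:, k], X[k, users])
        U, s, Vt = np.linalg.svd(E_k, full_matrices=False)
        D[:, k] = U[:, 0]                      # updated unit-norm atom
        X[k, users] = s[0] * Vt[0, :]          # updated coefficients
    return D, X

# Hypothetical usage with random data and codes (sparse coding step omitted).
rng = np.random.default_rng(0)
Y = rng.standard_normal((64, 200))
D = rng.standard_normal((64, 12)); D /= np.linalg.norm(D, axis=0)
X = rng.standard_normal((12, 200)) * (rng.random((12, 200)) < 0.1)
D, X = ksvd_dictionary_update(Y, D, X)
```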
Djukanović, Ljubica; Dimković, Nada; Marinković, Jelena; Andrić, Branislav; Bogdanović, Jasmina; Budošan, Ivana; Cvetičanin, Anica; Djordjev, Kosta; Djordjević, Verica; Djurić, Živka; Lilić, Branimir Haviža; Jovanović, Nasta; Jelačić, Rosa; Knežević, Violeta; Kostić, Svetislav; Lazarević, Tatjana; Ljubenović, Stanimir; Marić, Ivko; Marković, Rodoljub; Milenković, Srboljub; Milićević, Olivera; Mitić, Igor; Mićunović, Vesna; Mišković, Milena; Pilipović, Dragana; Plješa, Steva; Radaković, Miroslava; Stanojević, Marina Stojanović; Janković, Biserka Tirmenštajn; Vojinović, Goran; Šefer, Kornelija
2015-01-01
The aims of the study were to determine the percentage of patients on regular hemodialysis (HD) in Serbia failing to meet KDOQI guideline targets, to find out which factors are associated with the risk of time to death, and to examine the association between guideline adherence and patient outcome. A cohort of 2153 patients on regular HD in 24 centers (55.7% of the overall HD population) in Serbia was followed from January 2010 to December 2012. The percentage of patients failing to meet the KDOQI guideline targets for dialysis dose (Kt/V>1.2), hemoglobin (>110g/L), serum phosphorus (1.1-1.8mmol/L), calcium (2.1-2.4mmol/L) and iPTH (150-300pg/mL) was determined. Cox proportional hazards analysis was used to select variables significantly associated with the risk of time to death. The patients were on regular HD for 5.3±5.3 years, dialyzed 11.8±1.9h/week. Kt/V<1.2 was found in 42.4% of patients, hemoglobin <110g/L in 66.1%, s-phosphorus <1.1mmol/L in 21.7% and >1.8mmol/L in 28.6%, s-calcium <2.1mmol/L in 11.7% and >2.4mmol/L in 25.3%, and iPTH <150pg/mL in 40% and >300pg/mL in 39.7% of patients. Using the Cox model (adjusted for patient age, gender and duration of HD treatment), age, duration of HD treatment, hemoglobin, iPTH and diabetic nephropathy were selected as significant independent predictors of time to death. When the targets for the five examined parameters were included in the Cox model, the targets for Kt/V, hemoglobin and iPTH were found to be significant independent predictors of time to death. A substantial proportion of the patients examined failed to meet the KDOQI guideline targets. The relative risk of time to death was associated with being outside the targets for Kt/V, hemoglobin and iPTH. Copyright © 2015 The Authors. Published by Elsevier España, S.L.U. All rights reserved.
Social Media Use and Access to Digital Technology in US Young Adults in 2016.
Villanti, Andrea C; Johnson, Amanda L; Ilakkuvan, Vinu; Jacobs, Megan A; Graham, Amanda L; Rath, Jessica M
2017-06-07
In 2015, 90% of US young adults with Internet access used social media. Digital and social media are highly prevalent modalities through which young adults explore identity formation, and by extension, learn and transmit norms about health and risk behaviors during this developmental life stage. The purpose of this study was to provide updated estimates of social media use from 2014 to 2016 and correlates of social media use and access to digital technology in data collected from a national sample of US young adults in 2016. Young adult participants aged 18-24 years in Wave 7 (October 2014, N=1259) and Wave 9 (February 2016, N=989) of the Truth Initiative Young Adult Cohort Study were asked about use frequency for 11 social media sites and access to digital devices, in addition to sociodemographic characteristics. Regular use was defined as using a given social media site at least weekly. Weighted analyses estimated the prevalence of use of each social media site, overlap between regular use of specific sites, and correlates of using a greater number of social media sites regularly. Bivariate analyses identified sociodemographic correlates of access to specific digital devices. In 2014, 89.42% (weighted n, 1126/1298) of young adults reported regular use of at least one social media site. This increased to 97.5% (weighted n, 965/989) of young adults in 2016. Among regular users of social media sites in 2016, the top five sites were Tumblr (85.5%), Vine (84.7%), Snapchat (81.7%), Instagram (80.7%), and LinkedIn (78.9%). Respondents reported regularly using an average of 7.6 social media sites, with 85% using 6 or more sites regularly. Overall, 87% of young adults reported access or use of a smartphone with Internet access, 74% a desktop or laptop computer with Internet access, 41% a tablet with Internet access, 29% a smart TV or video game console with Internet access, 11% a cell phone without Internet access, and 3% none of these. Access to all digital devices with Internet was lower in those reporting a lower subjective financial situation; there were also significant differences in access to specific digital devices with Internet by race, ethnicity, and education. The high mean number of social media sites used regularly and the substantial overlap in use of multiple social media sites reflect the rapidly changing social media environment. Mobile devices are a primary channel for social media, and our study highlights disparities in access to digital technologies with Internet access among US young adults by race/ethnicity, education, and subjective financial status. Findings from this study may guide the development and implementation of future health interventions for young adults delivered via the Internet or social media sites. ©Andrea C Villanti, Amanda L Johnson, Vinu Ilakkuvan, Megan A Jacobs, Amanda L Graham, Jessica M Rath. Originally published in the Journal of Medical Internet Research (http://www.jmir.org), 07.06.2017.
A Revision on Classical Solutions to the Cauchy Boltzmann Problem for Soft Potentials
NASA Astrophysics Data System (ADS)
Alonso, Ricardo J.; Gamba, Irene M.
2011-05-01
This short note complements the recent paper of the authors (Alonso, Gamba in J. Stat. Phys. 137(5-6):1147-1165, 2009). We revisit the results on propagation of regularity and stability using L^p estimates for the gain and loss collision operators, for which the exponent range of the loss operator was misstated. We show here the correct range of exponents. We require a Lebesgue exponent α>1 in the angular part of the collision kernel in order to obtain finiteness of some constants involved in the regularity and stability estimates. As a consequence, the L^p regularity associated to the Cauchy problem of the space-inhomogeneous Boltzmann equation holds for a finite, explicitly determined range of p≥1.
Cardiopulmonary fitness in a sample of Malaysian population.
Singh, R; Singh, H J; Sirisinghe, R G
1989-01-01
Lung capacity and maximum oxygen uptake (VO2max) were measured directly in 167 healthy males from all the main races in Malaysia. Their ages ranged from 13 to 59 years. They were divided into five age groups (A to E), ranging from the second to the sixth decade. Lung capacities were determined using a dry spirometer and VO2max was taken as the maximum rate of oxygen consumption during exhaustive exercise on a cycle ergometer. Mean forced vital capacity (FVC) was 3.3 +/- 0.5 l and it correlated negatively with age. Mean VO2max was 3.2 +/- 0.2 l.min-1 (56.8 +/- 3.5 ml.kg-1.min-1) in Group A (13-19 years) compared to 1.7 +/- 0.2 l.min-1 (28.9 +/- 2.9 ml.kg-1.min-1) in Group E (50-59 years). Regression analysis revealed an age-related decline in VO2max of 0.77 ml.kg-1.min-1.year-1. Multiple regression of the data gave the following equations for the prediction of an individual's VO2max: VO2max (l.min-1) = 1.99 + 0.035 (weight) - 0.04 (age); VO2max (ml.kg-1.min-1) = 67.7 - 0.77 (age), where age is in years and weight in kg. In terms of VO2max as an index of cardiopulmonary performance, Malaysians have a relatively lower capacity when compared with the Swedish norms or even with those of some Chilean workers. Malaysians were, however, within the average norms of the American Heart Association's recommendations. The age-related decline in VO2max was also somewhat higher in the Malaysians.
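As a worked example of the prediction equations reported above, plugging a 30-year-old, 65 kg subject into both formulas gives the values computed below; the subject's weight and age are arbitrary illustrative inputs, not data from the study.

```python
def vo2max_l_per_min(weight_kg, age_years):
    """Absolute VO2max (l/min) from the reported regression equation."""
    return 1.99 + 0.035 * weight_kg - 0.04 * age_years

def vo2max_ml_per_kg_min(age_years):
    """Relative VO2max (ml/kg/min) from the reported regression equation."""
    return 67.7 - 0.77 * age_years

# Illustrative subject: 30 years old, 65 kg.
print(vo2max_l_per_min(65, 30))      # 1.99 + 2.275 - 1.2 = 3.065 l/min
print(vo2max_ml_per_kg_min(30))      # 67.7 - 23.1        = 44.6 ml/kg/min
```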
NASA Astrophysics Data System (ADS)
Wu, Ping; Liu, Kai; Zhang, Qian; Xue, Zhenwen; Li, Yongbao; Ning, Nannan; Yang, Xin; Li, Xingde; Tian, Jie
2012-12-01
Liver cancer is one of the most common malignant tumors worldwide. In order to enable the noninvasive detection of small liver tumors in mice, we present a parallel iterative shrinkage (PIS) algorithm for dual-modality tomography. It takes advantage of microcomputed tomography and multiview bioluminescence imaging, providing anatomical structure and bioluminescence intensity information to reconstruct the size and location of tumors. By incorporating prior knowledge of signal sparsity, we combine several mathematical strategies (a specific smooth convex approximation, an iterative shrinkage operator, and an affine subspace) with the PIS method, which guarantees accuracy, efficiency, and reliability of the three-dimensional reconstruction. An in vivo experiment on a bead-implanted mouse was then performed to validate the feasibility of this method. The findings indicate that a tiny lesion less than 3 mm in diameter can be localized with a position bias of no more than 1 mm; the computational efficiency is one to three orders of magnitude higher than that of existing algorithms; and the approach is robust to different regularization parameters and lp norms. Finally, we have applied this algorithm to another in vivo experiment on an HCCLM3 orthotopic xenograft mouse model, which suggests that the PIS method holds promise for practical applications of whole-body cancer detection.
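The PIS details above are specific to the authors' dual-modality setup, which is not reproduced here. As a minimal, generic sketch of the iterative shrinkage idea it builds on (soft-thresholded gradient steps for an l1-regularized least-squares problem; the toy system matrix and parameters are assumptions, not the authors' implementation):

    import numpy as np

    def soft_threshold(z, t):
        # Proximal operator of the l1 norm: shrinks entries toward zero.
        return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

    def ista(A, b, lam, step, n_iter=200):
        # Minimize 0.5*||A x - b||_2^2 + lam*||x||_1 by iterative shrinkage.
        x = np.zeros(A.shape[1])
        for _ in range(n_iter):
            grad = A.T @ (A @ x - b)
            x = soft_threshold(x - step * grad, step * lam)
        return x

    # Toy usage: recover a sparse source vector from noisy linear measurements.
    rng = np.random.default_rng(0)
    A = rng.standard_normal((80, 200))
    x_true = np.zeros(200)
    x_true[[5, 50, 120]] = [1.0, -2.0, 1.5]
    b = A @ x_true + 0.01 * rng.standard_normal(80)
    x_hat = ista(A, b, lam=0.1, step=1.0 / np.linalg.norm(A, 2) ** 2)

The step size 1/||A||_2^2 is the usual choice that keeps the gradient step non-expansive; in practice accelerated or parallel variants of this scheme converge considerably faster.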
Carter, Patrick M; Bingham, C Raymond; Zakrajsek, Jennifer S; Shope, Jean T; Sayer, Tina B
2014-05-01
Adolescent drivers are at elevated crash risk due to distracted driving behavior (DDB). Understanding parental and peer influences on adolescent DDB may aid future efforts to decrease crash risk. We examined the influence of risk perception, sensation seeking, and descriptive and injunctive social norms on adolescent DDB using the theory of normative social behavior. A total of 403 adolescents (aged 16-18 years) and their parents were surveyed by telephone. Survey instruments measured self-reported sociodemographics, DDB, sensation seeking, risk perception, descriptive norms (perceived parent DDB, parent self-reported DDB, and perceived peer DDB), and injunctive norms (parent approval of DDB and peer approval of DDB). Hierarchical multiple linear regression was used to estimate the influence of descriptive and injunctive social norms, risk perception, and sensation seeking on adolescent DDB. Overall, 92% of adolescents reported regularly engaging in DDB. Adolescents perceived that their parents and peers participated in DDB more frequently than they did themselves. Adolescent risk perception, parent DDB, perceived parent DDB, and perceived peer DDB were predictive of adolescent DDB in the regression model, but parent approval and peer approval of DDB were not. Risk perception and parental DDB were stronger predictors among males, whereas perceived parental DDB was a stronger predictor among female adolescents. Adolescent risk perception and descriptive norms are important predictors of adolescent distracted driving. More study is needed to understand the role of injunctive normative influences on adolescent DDB. Effective public health interventions should address parental role modeling, parental monitoring of adolescent driving, and social marketing techniques that correct misconceptions of norms related to driver distraction and crash risk. Copyright © 2014 Society for Adolescent Health and Medicine. Published by Elsevier Inc. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, T; Zhu, L
Purpose: Conventional dual energy CT (DECT) reconstructs CT and basis material images from two full-size projection datasets with different energy spectra. To relax the data requirement, we propose an iterative DECT reconstruction algorithm using one full scan and a second sparse-view scan by utilizing redundant structural information of the same object acquired at two different energies. Methods: We first reconstruct a full-scan CT image using the filtered-backprojection (FBP) algorithm. The material similarities of each pixel with other pixels are calculated using an exponential function of the pixel value differences. We assume that the material similarities between pixels remain in the second CT scan, although pixel values may vary. An iterative method is designed to reconstruct the second CT image from reduced projections. Under the data fidelity constraint, the algorithm minimizes the L2 norm of the difference between each pixel value and its estimate, which is the average of the other pixel values weighted by their similarities. The proposed algorithm, referred to as structure preserving iterative reconstruction (SPIR), is evaluated on physical phantoms. Results: On the Catphan600 phantom, the SPIR-based DECT method with a second 10-view scan reduces the noise standard deviation of a full-scan FBP CT reconstruction by a factor of 4 with well-maintained spatial resolution, while iterative reconstruction using total-variation regularization (TVR) degrades the spatial resolution at the same noise level. The proposed method achieves less than 1% measurement difference on the electron density map compared with conventional two-full-scan DECT. On an anthropomorphic pediatric phantom, our method successfully reconstructs the complicated vertebra structures and decomposes bone and soft tissue. Conclusion: We develop an effective method to reduce the number of views, and therefore the data acquisition, in DECT. We show that SPIR-based DECT using one full scan and a second 10-view scan can provide high-quality DECT images and electron density maps as accurate as conventional two-full-scan DECT.
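A minimal sketch of the structure-preserving penalty described above, assuming the pixel similarities are precomputed from the full-scan FBP image with an exponential (Gaussian-like) function of intensity differences; the function names and the sigma parameter are ours, and this illustrates only the regularization term that would be minimized alongside the data fidelity constraint, not the authors' implementation:

    import numpy as np

    def similarity_weights(ref_img, sigma):
        # Exponential similarity between every pair of pixels of the full-scan
        # reference image (suitable only for small images; O(N^2) memory).
        x = ref_img.ravel()
        d = x[:, None] - x[None, :]
        w = np.exp(-(d ** 2) / (2.0 * sigma ** 2))
        np.fill_diagonal(w, 0.0)
        return w / w.sum(axis=1, keepdims=True)   # each row sums to 1

    def structure_penalty(img, w):
        # L2 norm of the difference between each pixel value and its estimate,
        # i.e. the similarity-weighted average of the other pixel values.
        x = img.ravel()
        return float(np.sum((x - w @ x) ** 2))

    # Toy usage: weights from a reference image, penalty on a candidate image.
    ref = np.random.default_rng(0).random((16, 16))
    w = similarity_weights(ref, sigma=0.1)
    print(structure_penalty(ref + 0.01, w))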
Learning accurate and interpretable models based on regularized random forests regression
2014-01-01
Background: Many biology-related research works combine data from multiple sources in an effort to understand the underlying problems. It is important to find and interpret the most important information from these sources. Thus it is beneficial to have an effective algorithm that can simultaneously extract decision rules and select critical features for good interpretation while preserving the prediction performance. Methods: In this study, we focus on regression problems for biological data where target outcomes are continuous. In general, models constructed from linear regression approaches are relatively easy to interpret. However, many practical biological applications are inherently nonlinear, so a direct linear relationship between input and output can rarely be found. Nonlinear regression techniques can reveal nonlinear relationships in data, but are generally hard for humans to interpret. We propose a rule-based regression algorithm that uses 1-norm regularized random forests. The proposed approach simultaneously extracts a small number of rules from generated random forests and eliminates unimportant features. Results: We tested the approach on several biological data sets. The proposed approach is able to construct a significantly smaller set of regression rules using a subset of attributes while achieving prediction performance comparable to that of random forests regression. Conclusion: It demonstrates high potential in aiding prediction and interpretation of nonlinear relationships in the subject being studied. PMID:25350120
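A rough sketch of the general idea of pairing random forests with a 1-norm (lasso) penalty: encode each sample by the leaves (rules) it reaches in every tree, then fit an l1-regularized linear model that zeroes out most rules. This RuleFit-style analogue uses scikit-learn and synthetic data, and is an illustration of the technique class, not the authors' algorithm:

    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.preprocessing import OneHotEncoder
    from sklearn.linear_model import Lasso

    # Toy data: 200 samples, 10 features, nonlinear target.
    rng = np.random.default_rng(0)
    X = rng.standard_normal((200, 10))
    y = np.sin(X[:, 0]) + X[:, 1] ** 2 + 0.1 * rng.standard_normal(200)

    # 1. Grow a small random forest.
    forest = RandomForestRegressor(n_estimators=20, max_depth=3,
                                   random_state=0).fit(X, y)

    # 2. Encode each sample by the leaf (rule) it falls into in every tree.
    leaves = forest.apply(X)                       # shape (n_samples, n_trees)
    rules = OneHotEncoder().fit_transform(leaves).toarray()

    # 3. Fit a 1-norm regularized linear model on the rule indicators; the
    #    lasso zeroes out unimportant rules, leaving a compact rule set.
    lasso = Lasso(alpha=0.01).fit(rules, y)
    kept = np.flatnonzero(lasso.coef_)
    print(f"kept {kept.size} of {rules.shape[1]} rules")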
Lessons learned from public health campaigns and applied to anti-DWI norms development
DOT National Transportation Integrated Search
1995-05-01
The purpose of this study was to examine norms development in past public health campaigns in order to direct lessons learned from those efforts to future anti-DWI programming. Three campaigns were selected for a multiple case study. The anti-smoking, anti-...
Rice, Whitney S; Turan, Bulent; White, Kari; Turan, Janet M
2017-12-14
The role of unintended pregnancy norms and stigma in contraceptive use among young women is understudied. This study investigated relationships of anticipated reactions from others, perceived stigma, and endorsed stigma concerning unintended pregnancy with any and dual contraceptive use in this population. From November 2014 to October 2015, young women aged 18-24 years (n = 390) who were at risk for unintended pregnancy and sexually transmitted infections participated in a survey at a university and at public health clinics in Alabama. Multivariable regression models examined associations of unintended pregnancy norms and stigma with contraceptive use, adjusted for demographic and psychosocial characteristics. Compared with nonusers, users of any method and of dual methods were more often White, nulliparous, and from the university, and had higher income. In adjusted models, anticipated disapproval of unintended pregnancy by close others was associated with greater contraceptive use (adjusted Odds Ratio [aOR] = 1.54, 95 percent confidence interval [CI] = 1.03-2.30), and endorsement of stigma concerning unintended pregnancy was associated with lower odds of dual method use (aOR = 0.71, 95 percent CI = 0.51-1.00). Unintended pregnancy norms and stigma were associated with contraceptive behavior among young women in Alabama. Findings suggest the potential to promote effective contraceptive use in this population by leveraging close relationships and addressing endorsed stigma.
Gao, Xue-Ke; Zhang, Shuai; Luo, Jun-Yu; Wang, Chun-Yi; Lü, Li-Min; Zhang, Li-Juan; Zhu, Xiang-Zhen; Wang, Li; Lu, Hui; Cui, Jin-Jie
2017-12-30
Lysiphlebia japonica (Ashmead) is a predominant parasitoid of cotton-melon aphids in the fields of northern China, with a proven ability to effectively control cotton aphid populations in early summer. For accurate normalization of gene expression in L. japonica using quantitative reverse transcriptase-polymerase chain reaction (RT-qPCR), reference genes with stable expression patterns are essential. However, no appropriate reference genes in L. japonica have been investigated to date. In the present study, 12 selected housekeeping genes from L. japonica were cloned. We evaluated the stability of these genes under various experimental treatments by RT-qPCR using four independent algorithms (geNorm, NormFinder, BestKeeper, and Delta Ct) and one comparative algorithm (RefFinder). We identified the genes showing the most stable levels of expression: DIMT, 18S rRNA, and RPL13 across developmental stages; AK, RPL13, and TBP between sexes; EF1A, PPI, and RPL27 across tissues; and EF1A, RPL13, and PPI in adults fed on different diets. Moreover, the expression profile of a target gene (odorant receptor 1, OR1) studied during the developmental stages confirms the reliability of the selected reference genes. This study provides for the first time a comprehensive list of suitable reference genes for gene expression studies in L. japonica and will benefit subsequent genomics and functional genomics research on this natural enemy. Copyright © 2017. Published by Elsevier B.V.
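For readers unfamiliar with the comparative Delta Ct approach mentioned above, a minimal sketch: each candidate gene's stability is taken as the mean standard deviation of its pairwise ΔCt against every other candidate across samples, with lower values indicating more stable expression. The gene labels are taken from the abstract, but the Ct values below are hypothetical placeholders.

    import numpy as np

    # Rows: samples/treatments; columns: candidate reference genes.
    genes = ["EF1A", "RPL13", "TBP", "18S rRNA"]
    ct = np.array([
        [18.2, 20.1, 24.3,  9.8],
        [18.5, 20.4, 25.1, 10.4],
        [18.1, 20.0, 24.0,  9.5],
        [18.7, 20.6, 25.4, 10.9],
    ])

    # Comparative Delta Ct: for each gene, average the standard deviation of
    # its Ct difference against every other gene across the samples.
    n_genes = ct.shape[1]
    stability = []
    for i in range(n_genes):
        sds = [np.std(ct[:, i] - ct[:, j], ddof=1)
               for j in range(n_genes) if j != i]
        stability.append(float(np.mean(sds)))

    # Rank genes from most to least stable (lower score = more stable).
    for gene, score in sorted(zip(genes, stability), key=lambda t: t[1]):
        print(f"{gene}: mean SD of delta-Ct = {score:.2f}")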