Sample records for l1-norm regularization method

  1. On epicardial potential reconstruction using regularization schemes with the L1-norm data term.

    PubMed

    Shou, Guofa; Xia, Ling; Liu, Feng; Jiang, Mingfeng; Crozier, Stuart

    2011-01-07

    The electrocardiographic (ECG) inverse problem is ill-posed and usually solved by regularization schemes. These regularization methods, such as the Tikhonov method, are often based on L2-norm data and constraint terms. However, L2-norm-based methods inherently provide smoothed inverse solutions that are sensitive to measurement errors, and also lack the capability of localizing and distinguishing multiple proximal cardiac electrical sources. This paper presents alternative regularization schemes employing the L1-norm data term for the reconstruction of epicardial potentials (EPs) from measured body surface potentials (BSPs). During numerical implementation, the iteratively reweighted norm algorithm was applied to solve the L1-norm-related schemes, and measurement noise was considered in the BSP data. The proposed L1-norm data term-based regularization schemes (with L1 and L2 penalty terms of the normal derivative constraint, labelled L1TV and L1L2) were compared with the L2-norm data terms (Tikhonov with zero-order and normal derivative constraints, labelled ZOT and FOT, and the total variation method, labelled L2TV). The studies demonstrated that, with averaged measurement noise, the inverse solutions provided by the L1L2 and FOT algorithms have smaller relative errors. However, when larger noise occurs in some electrodes (for example, when the signal is lost during measurement), the L1TV and L1L2 methods obtain more accurate EPs in a robust manner. Therefore, the L1-norm data term-based solutions are generally less perturbed by measurement noise, suggesting that the new regularization scheme is promising for providing practical ECG inverse solutions.
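
    As a hedged illustration of the iteratively reweighted norm idea described above, the sketch below solves a generic L1-data-term problem with an L2 penalty (an L1L2-style objective) by repeatedly solving weighted least-squares subproblems. The matrices A and L, the parameter lam, and the toy data are assumptions for illustration, not the paper's transfer matrix or settings.

```python
import numpy as np

def irls_l1_data(A, b, L, lam=1e-2, n_iter=30, eps=1e-6):
    """min_x ||A x - b||_1 + lam * ||L x||_2^2 via iteratively reweighted LS."""
    x = np.linalg.lstsq(A, b, rcond=None)[0]      # plain L2 solution as a start
    for _ in range(n_iter):
        r = A @ x - b
        w = 1.0 / np.maximum(np.abs(r), eps)      # large residuals get small weight
        AtWA = A.T @ (w[:, None] * A)
        x = np.linalg.solve(AtWA + lam * (L.T @ L), A.T @ (w * b))
    return x

# Toy usage: one "electrode" carries a gross error, as in a lost signal.
rng = np.random.default_rng(0)
A = rng.normal(size=(40, 20))
x_true = rng.normal(size=20)
b = A @ x_true
b[3] += 50.0                                      # simulated measurement failure
x_hat = irls_l1_data(A, b, np.eye(20))
```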

  2. Regularized Filters for L1-Norm-Based Common Spatial Patterns.

    PubMed

    Wang, Haixian; Li, Xiaomeng

    2016-02-01

    The l1-norm-based common spatial patterns (CSP-L1) approach is a recently developed technique for optimizing spatial filters in the field of electroencephalogram (EEG)-based brain computer interfaces. The l1-norm-based expression of dispersion in CSP-L1 alleviates the negative impact of outliers. In this paper, we further improve the robustness of CSP-L1 by taking into account noise whose deviation is not necessarily as large as that of outliers. The noise modelling is formulated using the waveform length of the EEG time course. With this noise modelling, we then regularize the objective function of CSP-L1, in which the l1-norm is used in two roles: one for the dispersion and the other for the waveform length. An iterative algorithm is designed to solve the optimization problem of the regularized objective function. A toy illustration and classification experiments on real EEG data sets show the effectiveness of the proposed method.

  3. L1-norm locally linear representation regularization multi-source adaptation learning.

    PubMed

    Tao, Jianwen; Wen, Shiting; Hu, Wenjun

    2015-09-01

    In most supervised domain adaptation learning (DAL) tasks, one has access only to a small number of labeled examples from the target domain. The success of supervised DAL in this "small sample" regime therefore requires effective use of the large amounts of unlabeled data to extract information that is useful for generalization. Toward this end, we use the geometric intuition of the manifold assumption to extend established frameworks in existing model-based DAL methods for function learning by incorporating additional information about the target geometric structure of the marginal distribution. We would like to ensure that the solution is smooth with respect to both the ambient space and the target marginal distribution. In doing this, we propose a novel L1-norm locally linear representation regularization multi-source adaptation learning framework that exploits the geometry of the probability distribution and comprises two techniques. First, an L1-norm locally linear representation method, termed L1-LLR, is presented for robust graph construction by replacing the L2-norm reconstruction measure in LLE with an L1-norm one. Second, we replace the traditional graph Laplacian regularization with the new L1-LLR graph Laplacian regularization and thereby construct a new graph-based semi-supervised learning framework with a multi-source adaptation constraint, coined L1-MSAL. Moreover, to deal with the nonlinear learning problem, we generalize the L1-MSAL method by mapping the input data points to a high-dimensional reproducing kernel Hilbert space (RKHS) via a nonlinear mapping. Promising experimental results have been obtained on several real-world datasets covering faces, video and objects. Copyright © 2015 Elsevier Ltd. All rights reserved.

  4. Application of L1-norm regularization to epicardial potential reconstruction based on gradient projection.

    PubMed

    Wang, Liansheng; Qin, Jing; Wong, Tien Tsin; Heng, Pheng Ann

    2011-10-07

    The epicardial potential (EP)-targeted inverse problem of electrocardiography (ECG) has been widely investigated, since EPs have been demonstrated to reflect underlying myocardial activity. It is a well-known ill-posed problem, as small noise in the input data may yield a highly unstable solution. Traditionally, L2-norm regularization methods have been proposed to solve this ill-posed problem, but the L2-norm penalty function inherently leads to considerable smoothing of the solution, which reduces the accuracy of distinguishing abnormalities and locating diseased regions. Directly using the L1-norm penalty function, however, may greatly increase computational complexity due to its non-differentiability. We propose an L1-norm regularization method that reduces the computational complexity and makes rapid convergence possible. Variable splitting is employed to make the L1-norm penalty function differentiable, based on the observation that both positive and negative potentials exist on the epicardial surface. The inverse problem of ECG is then formulated as a bound-constrained quadratic problem, which can be efficiently solved by gradient projection in an iterative manner. Extensive experiments conducted on both synthetic and real data demonstrate that the proposed method can handle both measurement noise and geometry noise and obtains more accurate results than previous L2- and L1-norm regularization methods, especially when the noise is large.
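
    The following is a minimal sketch of the variable-splitting device described above, under simplifying assumptions: writing x = u - v with u, v >= 0 turns the non-differentiable L1 penalty into the linear term sum(u + v), leaving a bound-constrained quadratic that plain gradient projection can solve. A, b, lam and the step size are illustrative, not the paper's ECG formulation.

```python
import numpy as np

def l1_gradient_projection(A, b, lam=0.1, n_iter=1000):
    """min_x 0.5*||A x - b||^2 + lam*||x||_1 via the split x = u - v, u, v >= 0."""
    n = A.shape[1]
    u = np.zeros(n)
    v = np.zeros(n)
    step = 0.5 / np.linalg.norm(A, 2) ** 2         # safe step for the joint problem
    for _ in range(n_iter):
        g = A.T @ (A @ (u - v) - b)                # gradient of the quadratic part
        u = np.maximum(u - step * (g + lam), 0.0)  # project onto u >= 0
        v = np.maximum(v - step * (-g + lam), 0.0) # project onto v >= 0
    return u - v
```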

  5. Moving force identification based on redundant concatenated dictionary and weighted l1-norm regularization

    NASA Astrophysics Data System (ADS)

    Pan, Chu-Dong; Yu, Ling; Liu, Huan-Lin; Chen, Ze-Peng; Luo, Wen-Feng

    2018-01-01

    Moving force identification (MFI) is an important inverse problem in the field of bridge structural health monitoring (SHM). Reasonable signal structures of moving forces are rarely considered in existing MFI methods. Interaction forces are complex because they contain both slowly-varying harmonic and impact signals, due to bridge vibration and bumps on the bridge deck, respectively. The interaction forces are therefore usually hard to express completely and sparsely with a single basis function set. Based on a redundant concatenated dictionary and weighted l1-norm regularization, a hybrid method is proposed for MFI in this study. The redundant dictionary consists of both trigonometric functions and rectangular functions, used to match the harmonic and impact signal features of the unknown moving forces. The weighted l1-norm regularization is introduced to formulate the MFI equation, so that the signal features of the moving forces can be accurately extracted. The fast iterative shrinkage-thresholding algorithm (FISTA) is used to solve the MFI problem, and the optimal regularization parameter is chosen by the Bayesian information criterion (BIC). To assess the accuracy and feasibility of the proposed method, a simply-supported beam bridge subjected to a moving force is taken as an example for numerical simulations. Finally, a series of experimental studies on MFI of a steel beam are performed in the laboratory. Both numerical and experimental results show that the proposed method can accurately identify the moving forces with strong robustness, and that it performs better than the Tikhonov regularization method. Some related issues are discussed as well.
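
    Below is a hedged sketch of FISTA applied to a weighted l1-norm problem of the kind described above; the dictionary A (in the paper, concatenated trigonometric and rectangular atoms), the weights w, and lam are illustrative placeholders, and the BIC-based parameter choice is omitted.

```python
import numpy as np

def fista_weighted_l1(A, b, w, lam=0.1, n_iter=200):
    """min_x 0.5*||A x - b||^2 + lam * sum_i w_i |x_i| via FISTA."""
    n = A.shape[1]
    x = np.zeros(n)
    y = x.copy()
    t = 1.0
    step = 1.0 / np.linalg.norm(A, 2) ** 2          # 1 / Lipschitz constant
    for _ in range(n_iter):
        g = y - step * (A.T @ (A @ y - b))          # gradient step
        x_new = np.sign(g) * np.maximum(np.abs(g) - step * lam * w, 0.0)
        t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2    # momentum update
        y = x_new + ((t - 1) / t_new) * (x_new - x) # extrapolation
        x, t = x_new, t_new
    return x
```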

  6. Reconstruction algorithms based on l1-norm and l2-norm for two imaging models of fluorescence molecular tomography: a comparative study.

    PubMed

    Yi, Huangjian; Chen, Duofang; Li, Wei; Zhu, Shouping; Wang, Xiaorui; Liang, Jimin; Tian, Jie

    2013-05-01

    Fluorescence molecular tomography (FMT) is an important optical imaging technique. The major challenge for FMT reconstruction is the ill-posed and underdetermined nature of the inverse problem. In past years, various regularization methods have been employed for fluorescence target reconstruction. Here, a comparative study between reconstruction algorithms based on the l1-norm and the l2-norm for two imaging models of FMT is presented. The first imaging model, adopted by most researchers, considers a fluorescent target of small size that mimics a small tissue region containing a fluorescent substance, as in the early detection of a tumor. The second model is the reconstruction of the distribution of the fluorescent substance in organs, which is essential to drug pharmacokinetics. Apart from numerical experiments, in vivo experiments were conducted on a dual-modality FMT/micro-computed tomography imaging system. The experimental results indicated that l1-norm regularization is more suitable for reconstructing the small fluorescent target, while l2-norm regularization performs better for the reconstruction of the distribution of fluorescent substance.
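
    As a hedged toy illustration of the l1-versus-l2 contrast discussed above, the sketch below reconstructs a sparse "target" from an underdetermined system with both penalties: the l2 (Tikhonov) solution has a closed form and spreads energy widely, while an ISTA loop for the l1 penalty recovers a sparse solution. Sizes and regularization values are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(30, 100))
x_true = np.zeros(100)
x_true[[10, 60]] = 1.0                           # small, sparse "target"
b = A @ x_true

# l2 (Tikhonov): closed form; energy tends to spread over many entries.
lam2 = 1e-1
x_l2 = np.linalg.solve(A.T @ A + lam2 * np.eye(100), A.T @ b)

# l1: iterative soft-thresholding (ISTA); tends to recover sparse targets.
lam1 = 1e-1
step = 1.0 / np.linalg.norm(A, 2) ** 2
x_l1 = np.zeros(100)
for _ in range(1000):
    g = x_l1 - step * (A.T @ (A @ x_l1 - b))
    x_l1 = np.sign(g) * np.maximum(np.abs(g) - step * lam1, 0.0)

print(np.count_nonzero(np.abs(x_l2) > 1e-3), np.count_nonzero(x_l1))
```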

  7. Selection of regularization parameter for l1-regularized damage detection

    NASA Astrophysics Data System (ADS)

    Hou, Rongrong; Xia, Yong; Bao, Yuequan; Zhou, Xiaoqing

    2018-06-01

    The l1 regularization technique has been developed for structural health monitoring and damage detection by exploiting the sparsity of structural damage. The regularization parameter, which controls the trade-off between data fidelity and solution size in the regularization problem, exerts a crucial effect on the solution. However, the l1 regularization problem has no closed-form solution, and the regularization parameter is usually selected by experience. This study proposes two strategies for selecting the regularization parameter in the l1-regularized damage detection problem. The first method utilizes the residual and solution norms of the optimization problem and ensures that they are both small. The other method is based on the discrepancy principle, which requires that the variance of the discrepancy between the calculated and measured responses be close to the variance of the measurement noise. The two methods are applied to a cantilever beam and a three-story frame. A range of the regularization parameter, rather than one single value, can be determined. When the regularization parameter in this range is selected, the damage can be accurately identified even for multiple damage scenarios. This range also indicates the degree of sensitivity of the damage identification problem to the regularization parameter.
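
    The sketch below illustrates, under stated assumptions, the two selection strategies described above for a generic l1-regularized least-squares problem: a score requiring both the residual and solution norms to be small (the product used here is one simple choice, not the authors' exact criterion), and the discrepancy principle comparing the residual variance to the noise variance.

```python
import numpy as np

def solve_l1(A, b, lam, n_iter=500):
    """Plain ISTA solver, used only to evaluate candidate lambdas."""
    x = np.zeros(A.shape[1])
    step = 1.0 / np.linalg.norm(A, 2) ** 2
    for _ in range(n_iter):
        g = x - step * (A.T @ (A @ x - b))
        x = np.sign(g) * np.maximum(np.abs(g) - step * lam, 0.0)
    return x

def survey_lambdas(A, b, lams, sigma2):
    """Return (lambda, balance score, discrepancy) for each candidate."""
    rows = []
    for lam in lams:
        x = solve_l1(A, b, lam)
        r = A @ x - b
        balance = np.linalg.norm(r) * np.linalg.norm(x, 1)  # both norms small
        discrepancy = abs(np.var(r) - sigma2)               # discrepancy principle
        rows.append((lam, balance, discrepancy))
    return rows   # inspect to choose a *range* of acceptable lambdas
```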

  8. A blind deconvolution method based on L1/L2 regularization prior in the gradient space

    NASA Astrophysics Data System (ADS)

    Cai, Ying; Shi, Yu; Hua, Xia

    2018-02-01

    In image restoration, the restored result can differ greatly from the real image because of noise. To address this ill-posed problem, a blind deconvolution method based on an L1/L2 regularization prior in the gradient domain is proposed. The method first introduces a prior defined as the ratio of the L1 norm to the L2 norm and takes this function as the penalty term in the high-frequency domain of the image. The function is then iteratively updated, and the iterative shrinkage-thresholding algorithm is applied to solve for the high-frequency image. Because gradient-domain information is better suited to blur-kernel estimation, the blur kernel is estimated in the gradient domain; this step can be computed quickly in the frequency domain via the Fast Fourier Transform. In addition, a multi-scale iterative optimization scheme is added to improve the effectiveness of the algorithm. The proposed blind deconvolution method based on L1/L2 regularization priors in the gradient space obtains a unique and stable solution during image restoration, preserving the edges and details of the image while ensuring accurate results.
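
    As a hedged illustration of the L1/L2 prior used above, the sketch below evaluates the ratio of the L1 norm to the L2 norm of image gradients; since blur reduces the L2 norm of the gradients faster than the L1 norm, the ratio grows with blur, which is why minimizing it favors sharp images. The toy image and the crude box blur are assumptions, not the paper's pipeline.

```python
import numpy as np

def l1_over_l2_gradients(img, eps=1e-12):
    """Ratio ||grad I||_1 / ||grad I||_2 over horizontal and vertical diffs."""
    g = np.concatenate([np.diff(img, axis=0).ravel(),
                        np.diff(img, axis=1).ravel()])
    return np.abs(g).sum() / (np.sqrt((g ** 2).sum()) + eps)

sharp = np.zeros((64, 64))
sharp[20:40, 20:40] = 1.0          # a sharp-edged toy image
blurred = sharp.copy()
for _ in range(5):                 # crude box blur standing in for a blur kernel
    blurred = (blurred + np.roll(blurred, 1, 0) + np.roll(blurred, 1, 1)) / 3.0

print(l1_over_l2_gradients(sharp), l1_over_l2_gradients(blurred))  # ratio grows
```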

  9. Improving Generalization Based on l1-Norm Regularization for EEG-Based Motor Imagery Classification

    PubMed Central

    Zhao, Yuwei; Han, Jiuqi; Chen, Yushu; Sun, Hongji; Chen, Jiayun; Ke, Ang; Han, Yao; Zhang, Peng; Zhang, Yi; Zhou, Jin; Wang, Changyong

    2018-01-01

    Multichannel electroencephalography (EEG) is widely used in typical brain-computer interface (BCI) systems. In general, an EEG classification algorithm requires a large number of parameters because of the redundant features involved in EEG signals. However, the generalization of an EEG method is often adversely affected by its model complexity, which is closely tied to the number of free parameters, leading to heavy overfitting. To decrease the complexity and improve the generalization of the EEG method, we present a novel l1-norm-based approach that directly combines the decision values obtained from each EEG channel. By extracting the information from different channels on independent frequency bands (FB) with l1-norm regularization, the proposed method fits the training data with far fewer parameters than common spatial pattern (CSP) methods, thereby reducing overfitting. Moreover, an effective and efficient solution to minimize the optimization objective is proposed. The experimental results on dataset IVa of BCI competition III and dataset I of BCI competition IV show that the proposed method yields high classification accuracy and increases generalization performance for the classification of MI EEG. As the training set ratio decreases from 80 to 20%, the average classification accuracy on the two datasets changes from 85.86 and 86.13% to 84.81 and 76.59%, respectively. The classification performance and generalization of the proposed method support the practical application of MI-based BCI systems. PMID:29867307

  10. Experimental/clinical evaluation of EIT image reconstruction with l1 data and image norms

    NASA Astrophysics Data System (ADS)

    Mamatjan, Yasin; Borsic, Andrea; Gürsoy, Doga; Adler, Andy

    2013-04-01

    Electrical impedance tomography (EIT) image reconstruction is ill-posed, and the spatial resolution of reconstructed images is low due to the diffuse propagation of current and the limited number of independent measurements. Generally, image reconstruction is formulated as a regularized scheme in which l2 norms are preferred for both the data misfit and image prior terms due to computational convenience, which results in smooth solutions. However, recent work on a Primal Dual-Interior Point Method (PDIPM) framework showed its effectiveness in dealing with the minimization problem: l1 norms on the data and regularization terms address both reconstruction with sharp edges and robustness to measurement errors. We aim for a clinical and experimental evaluation of the PDIPM method by selecting scenarios (human lung and dog breathing) with known electrode errors, which require rigorous regularization and cause reconstructions with the l2 norm to fail. Results demonstrate the applicability of PDIPM algorithms, especially with l1 data and regularization norms, for clinical applications of EIT, showing that the l1 solution is not only more robust to measurement errors in a clinical setting but also provides high contrast resolution on organ boundaries.

  11. Low-dose CT reconstruction via L1 dictionary learning regularization using iteratively reweighted least-squares.

    PubMed

    Zhang, Cheng; Zhang, Tao; Li, Ming; Peng, Chengtao; Liu, Zhaobang; Zheng, Jian

    2016-06-18

    In order to reduce the radiation dose of CT (computed tomography), compressed sensing theory has been a hot topic since it provides the possibility of high-quality recovery from sparsely sampled data. Recently, an algorithm based on DL (dictionary learning) was developed to deal with the sparse CT reconstruction problem. However, the existing DL algorithm focuses on a minimization problem with an L2-norm regularization term, whose reconstruction quality deteriorates as the sampling rate declines further. It is therefore essential to improve the DL method to meet the demand for further dose reduction. In this paper, we replaced the L2-norm regularization term with an L1-norm one, expecting the proposed L1-DL method to alleviate the over-smoothing effect of the L2-minimization and preserve more image details. The proposed algorithm solves the L1-minimization problem by a weighting strategy, solving a sequence of weighted L2-minimization problems based on IRLS (iteratively reweighted least squares). Through numerical simulation, the proposed algorithm is compared with the existing DL method (adaptive dictionary based statistical iterative reconstruction, ADSIR) and two other typical compressed sensing algorithms. The results reveal that the proposed algorithm is more accurate than the other algorithms, especially when the sampling rate is further reduced or the noise is increased. The proposed L1-DL algorithm can utilize more prior information on image sparsity than ADSIR. By replacing the L2-norm regularization term of ADSIR with an L1-norm one and solving the L1-minimization problem with the IRLS strategy, L1-DL reconstructs the image more accurately.
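
    A minimal sketch of the IRLS device described above, under illustrative assumptions: the l1 penalty on the sparse code is majorized by a reweighted l2 penalty, so each iteration solves a weighted least-squares problem. D stands in for a learned dictionary and x for a patch; neither the CT system model nor the dictionary training is included.

```python
import numpy as np

def l1_sparse_code_irls(D, x, lam=0.1, n_iter=30, eps=1e-6):
    """min_a 0.5*||D a - x||^2 + lam*||a||_1 by reweighted l2 (IRLS)."""
    a = np.linalg.lstsq(D, x, rcond=None)[0]
    DtD, Dtx = D.T @ D, D.T @ x
    for _ in range(n_iter):
        W = np.diag(1.0 / (np.abs(a) + eps))     # weights from the last iterate
        a = np.linalg.solve(DtD + lam * W, Dtx)  # weighted l2 subproblem
    return a
```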

  12. Lp-Norm Regularization in Volumetric Imaging of Cardiac Current Sources

    PubMed Central

    Rahimi, Azar; Xu, Jingjia; Wang, Linwei

    2013-01-01

    Advances in computer vision have substantially improved our ability to analyze the structure and mechanics of the heart. In comparison, our ability to observe and analyze cardiac electrical activities is much more limited. Progress in computationally reconstructing cardiac current sources from noninvasive voltage data sensed on the body surface has been hindered by the ill-posedness of the reconstruction problem and its lack of a unique solution. Common L2- and L1-norm regularizations tend to produce a solution that is either too diffused or too scattered to reflect the complex spatial structure of current source distribution in the heart. In this work, we propose a general regularization with an Lp-norm (1 < p < 2) constraint to bridge the gap and balance between an overly smeared and an overly focal solution in cardiac source reconstruction. In a set of phantom experiments, we demonstrate the superiority of the proposed Lp-norm method over its L1 and L2 counterparts in imaging cardiac current sources with increasing extents. Through computer-simulated and real-data experiments, we further demonstrate the feasibility of the proposed method in imaging the complex structure of the excitation wavefront, as well as current sources distributed along the postinfarction scar border. This ability to preserve the spatial structure of source distribution is important for revealing the potential disruption to normal heart excitation. PMID:24348735
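
    The following hedged sketch shows one standard way to handle an Lp-norm (1 < p < 2) penalty: an iteratively reweighted least-squares loop. A, b, p, and lam are illustrative stand-ins, not the paper's torso-model quantities or exact algorithm.

```python
import numpy as np

def lp_irls(A, b, p=1.5, lam=1e-2, n_iter=30, eps=1e-6):
    """min_x ||A x - b||_2^2 + lam * ||x||_p^p via reweighted l2."""
    x = np.linalg.lstsq(A, b, rcond=None)[0]
    AtA, Atb = A.T @ A, A.T @ b
    for _ in range(n_iter):
        w = p * np.maximum(np.abs(x), eps) ** (p - 2)  # per-entry weights
        x = np.linalg.solve(2 * AtA + lam * np.diag(w), 2 * Atb)
    return x
```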

  13. Generalizations of Tikhonov's regularized method of least squares to non-Euclidean vector norms

    NASA Astrophysics Data System (ADS)

    Volkov, V. V.; Erokhin, V. I.; Kakaev, V. V.; Onufrei, A. Yu.

    2017-09-01

    Tikhonov's regularized method of least squares and its generalizations to non-Euclidean norms, including polyhedral norms, are considered. The regularized method of least squares is reduced to mathematical programming problems obtained by "instrumental" generalizations of the Tikhonov lemma on the minimal (in a certain norm) solution of a system of linear algebraic equations with respect to an unknown matrix. Further studies are needed on the development of methods and algorithms for solving the reduced mathematical programming problems in which the objective functions and admissible domains are constructed using polyhedral vector norms.

  14. L1 norm based common spatial patterns decomposition for scalp EEG BCI.

    PubMed

    Li, Peiyang; Xu, Peng; Zhang, Rui; Guo, Lanjin; Yao, Dezhong

    2013-08-06

    Brain computer interface (BCI) research is one of the most popular branches of biomedical engineering. It aims at constructing a communication channel between disabled persons and auxiliary equipment in order to improve patients' lives. In motor imagery (MI) based BCI, one of the popular feature extraction strategies is Common Spatial Patterns (CSP). In practical BCI situations, scalp EEG inevitably contains outliers and artifacts introduced by ocular activity, head motion or loose electrode contact during the recordings. Because outliers and artifacts are usually observed with large amplitude, when CSP is solved in terms of the L2 norm their effect is exaggerated by the squaring of outliers, which ultimately degrades MI-based BCI performance. The L1 norm, by contrast, lowers outlier effects, as demonstrated in other application fields such as the EEG inverse problem and face recognition. In this paper, we present a new CSP implementation using the L1 norm, instead of the L2 norm, to solve the eigenproblem for spatial filter estimation, with the aim of improving the robustness of CSP to outliers. To evaluate the performance of our method, we applied it, as well as the standard CSP and the regularized CSP with Tikhonov regularization (TR-CSP), to both a public BCI dataset with simulated outliers and the dataset from the MI BCI system developed in our group. The McNemar test is used to investigate whether the differences among the three CSPs are statistically significant. The results on both the simulated and real BCI datasets consistently reveal that the proposed method achieves much higher classification accuracies than the conventional CSP and TR-CSP. By incorporating L1-norm-based eigendecomposition into Common Spatial Patterns, the proposed approach effectively improves the robustness of BCI systems to EEG outliers and is thus promising for actual MI BCI applications, where outliers are inevitably introduced into EEG recordings.

  15. Linear discriminant analysis based on L1-norm maximization.

    PubMed

    Zhong, Fujin; Zhang, Jiashu

    2013-08-01

    Linear discriminant analysis (LDA) is a well-known dimensionality reduction technique that is widely used for many purposes. However, conventional LDA is sensitive to outliers because its objective function is based on an L2-norm distance criterion. This paper proposes a simple but effective robust LDA variant based on L1-norm maximization, which learns a set of locally optimal projection vectors by maximizing the ratio of the L1-norm-based between-class dispersion to the L1-norm-based within-class dispersion. The proposed method is theoretically shown to be feasible and robust to outliers, while overcoming the singularity problem of the within-class scatter matrix in conventional LDA. Experiments on artificial datasets, standard classification datasets and three popular image databases demonstrate the efficacy of the proposed method.

  16. Poisson image reconstruction with Hessian Schatten-norm regularization.

    PubMed

    Lefkimmiatis, Stamatios; Unser, Michael

    2013-11-01

    Poisson inverse problems arise in many modern imaging applications, including biomedical and astronomical ones. The main challenge is to obtain an estimate of the underlying image from a set of measurements degraded by a linear operator and further corrupted by Poisson noise. In this paper, we propose an efficient framework for Poisson image reconstruction, under a regularization approach, which depends on matrix-valued regularization operators. In particular, the employed regularizers involve the Hessian as the regularization operator and Schatten matrix norms as the potential functions. For the solution of the problem, we propose two optimization algorithms that are specifically tailored to the Poisson nature of the noise. These algorithms are based on an augmented-Lagrangian formulation of the problem and correspond to two variants of the alternating direction method of multipliers. Further, we derive a link that relates the proximal map of an lp norm with the proximal map of a Schatten matrix norm of order p. This link plays a key role in the development of one of the proposed algorithms. Finally, we provide experimental results on natural and biological images for the task of Poisson image deblurring and demonstrate the practical relevance and effectiveness of the proposed framework.

  17. Efficient l1-norm-based low-rank matrix approximations for large-scale problems using alternating rectified gradient method.

    PubMed

    Kim, Eunwoo; Lee, Minsik; Choi, Chong-Ho; Kwak, Nojun; Oh, Songhwai

    2015-02-01

    Low-rank matrix approximation plays an important role in the area of computer vision and image processing. Most of the conventional low-rank matrix approximation methods are based on the l2-norm (Frobenius norm), with principal component analysis (PCA) being the most popular among them. However, this can give a poor approximation for data contaminated by outliers (including missing data), because the l2-norm exaggerates the negative effect of outliers. Recently, to overcome this problem, various methods based on the l1-norm, such as robust PCA methods, have been proposed for low-rank matrix approximation. Despite the robustness of the methods, they require heavy computational effort and substantial memory for high-dimensional data, which is impractical for real-world problems. In this paper, we propose two efficient low-rank factorization methods based on the l1-norm that find proper projection and coefficient matrices using the alternating rectified gradient method. The proposed methods are applied to a number of low-rank matrix approximation problems to demonstrate their efficiency and robustness. The experimental results show that our proposals are efficient in both execution time and reconstruction performance, unlike other state-of-the-art methods.

  18. Discriminant locality preserving projections based on L1-norm maximization.

    PubMed

    Zhong, Fujin; Zhang, Jiashu; Li, Defang

    2014-11-01

    Conventional discriminant locality preserving projection (DLPP) is a dimensionality reduction technique based on manifold learning, which has demonstrated good performance in pattern recognition. However, because its objective function is based on the distance criterion using L2-norm, conventional DLPP is not robust to outliers which are present in many applications. This paper proposes an effective and robust DLPP version based on L1-norm maximization, which learns a set of local optimal projection vectors by maximizing the ratio of the L1-norm-based locality preserving between-class dispersion and the L1-norm-based locality preserving within-class dispersion. The proposed method is proven to be feasible and also robust to outliers while overcoming the small sample size problem. The experimental results on artificial datasets, Binary Alphadigits dataset, FERET face dataset and PolyU palmprint dataset have demonstrated the effectiveness of the proposed method.

  19. ℓ1-Regularized full-waveform inversion with prior model information based on orthant-wise limited memory quasi-Newton method

    NASA Astrophysics Data System (ADS)

    Dai, Meng-Xue; Chen, Jing-Bo; Cao, Jian

    2017-07-01

    Full-waveform inversion (FWI) is an ill-posed optimization problem that is sensitive to noise and to the initial model. To alleviate the ill-posedness of the problem, regularization techniques are usually adopted. The ℓ1-norm penalty is a robust regularization method that preserves contrasts and edges. The Orthant-Wise Limited-Memory Quasi-Newton (OWL-QN) method extends the widely-used limited-memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) method to ℓ1-regularized optimization problems and inherits the efficiency of L-BFGS. To take advantage of both ℓ1 regularization and the prior model information obtained from sonic logs and geological knowledge, we implement the OWL-QN algorithm in ℓ1-regularized FWI with prior model information in this paper. Numerical experiments show that this method not only improves the inversion results but also has strong robustness to noise.

  20. Application of L1/2 regularization logistic method in heart disease diagnosis.

    PubMed

    Zhang, Bowen; Chai, Hua; Yang, Ziyi; Liang, Yong; Chu, Gejin; Liu, Xiaoying

    2014-01-01

    Heart disease has become the leading threat to human health, and its diagnosis depends on many features, such as age, blood pressure, heart rate and dozens of other physiological indicators. Although there are many risk factors, doctors usually diagnose the disease based on intuition and experience, and correct determination requires considerable knowledge and experience. Mining the hidden medical information in existing clinical data is therefore a promising and powerful approach to heart disease diagnosis. In this paper, a sparse logistic regression method using L1/2 regularization is introduced to detect the key risk factors in real heart disease data. Experimental results show that the sparse logistic L1/2 regularization method selects fewer but more informative key features than the Lasso, SCAD, MCP and Elastic net regularization approaches. At the same time, the proposed method reduces computational complexity, saves the cost and time of medical tests and checkups, and reduces the number of attributes that need to be collected from patients.

  21. Automated ambiguity estimation for VLBI Intensive sessions using L1-norm

    NASA Astrophysics Data System (ADS)

    Kareinen, Niko; Hobiger, Thomas; Haas, Rüdiger

    2016-12-01

    Very Long Baseline Interferometry (VLBI) is a space-geodetic technique that is uniquely capable of direct observation of the angle of the Earth's rotation about the Celestial Intermediate Pole (CIP) axis, namely UT1. The daily estimates of the difference between UT1 and Coordinated Universal Time (UTC) provided by the 1-h long VLBI Intensive sessions are essential in providing timely UT1 estimates for satellite navigation systems and orbit determination. In order to produce timely UT1 estimates, efforts have been made to completely automate the analysis of VLBI Intensive sessions. This involves the automatic processing of X- and S-band group delays, which contain an unknown number of integer ambiguities introduced as a side-effect of the bandwidth synthesis technique used to combine correlator results from the narrow channels that span the individual bands. In an automated analysis with the c5++ software, the standard approach to resolving the ambiguities is to perform a simplified parameter estimation using a least-squares adjustment (L2-norm minimisation). We implement the L1-norm as an alternative estimation method in c5++. The implemented method is used to automatically estimate the ambiguities in VLBI Intensive sessions on the Kokee-Wettzell baseline. The results are compared to an analysis set-up where the ambiguity estimation is computed using the L2-norm. For both methods, three different weighting strategies for the ambiguity estimation are assessed. The results show that the L1-norm is better at automatically resolving the ambiguities than the L2-norm: its use leads to a significantly higher number of good-quality UT1-UTC estimates with each of the three weighting strategies, an increase of approximately 5% in the number of sessions for each strategy. This is accompanied by smaller post-fit residuals in the final UT1-UTC estimation step.
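
    As a hedged illustration of the generic mechanism behind L1-norm estimation, the sketch below poses min ||Ax - b||_1 as a linear program, the classical reformulation; the random A and b are stand-ins, and the VLBI-specific parametrisation (clocks, atmosphere, integer ambiguities) is omitted entirely.

```python
import numpy as np
from scipy.optimize import linprog

def l1_fit(A, b):
    """min_x ||A x - b||_1 as an LP in (x, t): min 1't s.t. -t <= A x - b <= t."""
    m, n = A.shape
    c = np.concatenate([np.zeros(n), np.ones(m)])
    I = np.eye(m)
    A_ub = np.block([[A, -I], [-A, -I]])
    b_ub = np.concatenate([b, -b])
    bounds = [(None, None)] * n + [(0, None)] * m
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.x[:n]

rng = np.random.default_rng(3)
A = rng.normal(size=(50, 4))
x_true = np.array([1.0, -2.0, 0.5, 3.0])
b = A @ x_true
b[::10] += 20.0           # a few gross outliers
print(l1_fit(A, b))       # close to x_true despite the outliers
```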

  22. A Truncated Nuclear Norm Regularization Method Based on Weighted Residual Error for Matrix Completion.

    PubMed

    Qing Liu; Zhihui Lai; Zongwei Zhou; Fangjun Kuang; Zhong Jin

    2016-01-01

    Low-rank matrix completion aims to recover a matrix from a small subset of its entries and has received much attention in the field of computer vision. Most existing methods formulate the task as a low-rank matrix approximation problem. The truncated nuclear norm has recently been proposed as a better approximation to the rank of a matrix than the nuclear norm. The corresponding optimization method, truncated nuclear norm regularization (TNNR), converges better than nuclear norm minimization-based methods. However, it is not robust to the number of subtracted singular values and requires a large number of iterations to converge. In this paper, a TNNR method based on weighted residual error (TNNR-WRE) for matrix completion and its extended model (ETNNR-WRE) are proposed. TNNR-WRE assigns different weights to the rows of the residual error matrix in an augmented Lagrange function to accelerate the convergence of the TNNR method. ETNNR-WRE is much more robust to the number of subtracted singular values than TNNR-WRE, the TNNR alternating direction method of multipliers, and the TNNR accelerated proximal gradient with line search method. Experimental results using both synthetic and real visual data sets show that the proposed TNNR-WRE and ETNNR-WRE methods perform better than the TNNR and Iteratively Reweighted Nuclear Norm (IRNN) methods.
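
    For orientation, the hedged sketch below implements the plain nuclear-norm baseline that TNNR-style methods improve on: a Soft-Impute-style singular-value-thresholding iteration for matrix completion. The truncation and weighted-residual acceleration of the paper are not implemented; the mask, tau, and iteration count are illustrative.

```python
import numpy as np

def svt_complete(M, mask, tau=5.0, n_iter=200):
    """Recover a low-rank matrix from the observed entries M[mask]."""
    X = np.zeros_like(M)
    for _ in range(n_iter):
        X[mask] = M[mask]                        # re-impose observed entries
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        X = (U * np.maximum(s - tau, 0.0)) @ Vt  # shrink singular values
    return X

rng = np.random.default_rng(4)
M = rng.normal(size=(60, 3)) @ rng.normal(size=(3, 60))  # rank-3 matrix
mask = rng.random(M.shape) < 0.4                         # 40% observed
print(np.linalg.norm(svt_complete(M, mask) - M) / np.linalg.norm(M))
```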

  23. Robust L1-norm two-dimensional linear discriminant analysis.

    PubMed

    Li, Chun-Na; Shao, Yuan-Hai; Deng, Nai-Yang

    2015-05-01

    In this paper, we propose an L1-norm two-dimensional linear discriminant analysis (L1-2DLDA) with robust performance. Unlike conventional two-dimensional linear discriminant analysis with L2-norm (L2-2DLDA), where the optimization problem is transformed into a generalized eigenvalue problem, the optimization problem in our L1-2DLDA is solved by a simple, justifiable iterative technique whose convergence is guaranteed. Compared with L2-2DLDA, our L1-2DLDA is more robust to outliers and noise since the L1-norm is used. This is supported by our preliminary experiments on a toy example and face datasets, which show the improvement of our L1-2DLDA over L2-2DLDA. Copyright © 2015 Elsevier Ltd. All rights reserved.

  24. Hessian Schatten-norm regularization for linear inverse problems.

    PubMed

    Lefkimmiatis, Stamatios; Ward, John Paul; Unser, Michael

    2013-05-01

    We introduce a novel family of invariant, convex, and non-quadratic functionals that we employ to derive regularized solutions of ill-posed linear inverse imaging problems. The proposed regularizers involve the Schatten norms of the Hessian matrix, which are computed at every pixel of the image. They can be viewed as second-order extensions of the popular total-variation (TV) semi-norm since they satisfy the same invariance properties. Meanwhile, by taking advantage of second-order derivatives, they avoid the staircase effect, a common artifact of TV-based reconstructions, and perform well for a wide range of applications. To solve the corresponding optimization problems, we propose an algorithm that is based on a primal-dual formulation. A fundamental ingredient of this algorithm is the projection of matrices onto Schatten norm balls of arbitrary radius. This operation is performed efficiently based on a direct link we provide between vector projections onto lq norm balls and matrix projections onto Schatten norm balls. Finally, we demonstrate the effectiveness of the proposed methods through experimental results on several inverse imaging problems with real and simulated data.

  25. A ℓ2,1 norm regularized multi-kernel learning for false positive reduction in Lung nodule CAD.

    PubMed

    Cao, Peng; Liu, Xiaoli; Zhang, Jian; Li, Wei; Zhao, Dazhe; Huang, Min; Zaiane, Osmar

    2017-03-01

    The aim of this paper is to describe a novel algorithm for false positive reduction in lung nodule computer-aided detection (CAD). We describe a new CT lung CAD method that aims to detect solid nodules. Specifically, we propose a multi-kernel classifier with an ℓ2,1-norm regularizer for heterogeneous feature fusion and selection at the feature-subset level, and design two efficient strategies to optimize the kernel weights in the non-smooth ℓ2,1-regularized multiple kernel learning algorithm. The first optimization algorithm adapts a proximal gradient method for solving the ℓ2,1 norm of the kernel weights and uses an accelerated scheme based on FISTA; the second employs an iterative scheme based on an approximate gradient descent method. The results demonstrate that the FISTA-style accelerated proximal descent method is efficient for the ℓ2,1-norm formulation of multiple kernel learning, with a theoretical guarantee on the convergence rate. Moreover, the experimental results demonstrate the effectiveness of the proposed methods in terms of geometric mean (G-mean) and area under the ROC curve (AUC), significantly outperforming the competing methods. The proposed approach exhibits remarkable advantages in both the heterogeneous feature subset fusion and classification phases. Compared with feature-level and decision-level fusion strategies, the proposed ℓ2,1-norm multi-kernel learning algorithm accurately fuses the complementary and heterogeneous feature sets and automatically prunes irrelevant and redundant feature subsets to form a more discriminative feature set, leading to promising classification performance. Moreover, the proposed algorithm consistently outperforms comparable classification approaches in the literature. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
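
    A minimal sketch of the building block behind the proximal strategies mentioned above: the proximal operator of the ℓ2,1 norm, which shrinks each row (one kernel's weight group) jointly and so prunes whole feature subsets at once. The matrix W and the threshold are toy values, not the paper's kernel weights.

```python
import numpy as np

def prox_l21(W, thresh):
    """Row-wise group soft-thresholding:
    argmin_X 0.5*||X - W||_F^2 + thresh*||X||_{2,1}."""
    norms = np.linalg.norm(W, axis=1, keepdims=True)
    scale = np.maximum(1.0 - thresh / np.maximum(norms, 1e-12), 0.0)
    return scale * W

W = np.array([[3.0, 4.0], [0.1, 0.2], [-1.0, 1.0]])
print(prox_l21(W, 1.0))   # the small middle row is pruned to exactly zero
```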

  26. L1-norm kernel discriminant analysis via Bayes error bound optimization for robust feature extraction.

    PubMed

    Zheng, Wenming; Lin, Zhouchen; Wang, Haixian

    2014-04-01

    A novel discriminant analysis criterion is derived in this paper under the theoretical framework of Bayes optimality. In contrast to the conventional Fisher's discriminant criterion, the major novelty of the proposed one is the use of the L1 norm rather than the L2 norm, which makes it less sensitive to outliers. With the L1-norm discriminant criterion, we propose a new linear discriminant analysis (L1-LDA) method for the linear feature extraction problem. To solve the L1-LDA optimization problem, we propose an efficient iterative algorithm in which a novel surrogate convex function is introduced such that the optimization problem in each iteration reduces to a convex programming problem with a guaranteed closed-form solution. Moreover, we generalize the L1-LDA method to deal with nonlinear robust feature extraction problems via the kernel trick, yielding the proposed L1-norm kernel discriminant analysis (L1-KDA) method. Extensive experiments on simulated and real data sets are conducted to evaluate the effectiveness of the proposed method in comparison with state-of-the-art methods.

  27. Low dose CT reconstruction via L1 norm dictionary learning using alternating minimization algorithm and balancing principle.

    PubMed

    Wu, Junfeng; Dai, Fang; Hu, Gang; Mou, Xuanqin

    2018-04-18

    Excessive radiation exposure in computed tomography (CT) scans increases the chance of developing cancer and has become a major clinical concern. Recently, statistical iterative reconstruction (SIR) with l0-norm dictionary learning regularization has been developed to reconstruct CT images from low-dose and few-view datasets in order to reduce radiation dose. Nonetheless, the sparse regularization term adopted in this approach is the l0-norm, which cannot guarantee the global convergence of the algorithm. To address this problem, in this study we introduced the l1-norm dictionary learning penalty into the SIR framework for low-dose CT image reconstruction, and developed an alternating minimization algorithm to minimize the associated objective function, which transforms the CT image reconstruction problem into a sparse coding subproblem and an image updating subproblem. During the image updating process, an efficient model function approach based on the balancing principle is applied to choose the regularization parameters. The proposed alternating minimization algorithm was evaluated first using real projection data of a sheep lung CT perfusion and then using numerical simulations based on a sheep lung CT image and a chest image. Both visual assessment and quantitative comparison in terms of root mean square error (RMSE) and the structural similarity (SSIM) index demonstrated that the new image reconstruction algorithm yielded performance similar to the l0-norm dictionary learning penalty and outperformed the conventional filtered backprojection (FBP) and total variation (TV) minimization algorithms.

  28. Suppressing multiples using an adaptive multichannel filter based on L1-norm

    NASA Astrophysics Data System (ADS)

    Shi, Ying; Jing, Hongliang; Zhang, Wenwu; Ning, Dezhi

    2017-08-01

    Adaptive subtraction is an important step in removing surface-related multiples in wave equation-based methods. In this paper, we propose an adaptive multichannel subtraction method based on the L1-norm. We achieve enhanced compensation for the mismatch between the input seismogram and the predicted multiples in terms of amplitude, phase, frequency band, and travel time. Unlike the conventional L2-norm, the proposed method does not rely on the assumption that the primaries and the multiples are orthogonal, and it also takes advantage of the fact that the L1-norm is more robust when dealing with outliers. In addition, we propose a frequency band extension via modulation to reconstruct the high frequencies and compensate for the frequency misalignment. We present a parallel computing scheme that accelerates the subtraction algorithm on graphics processing units (GPUs), which significantly reduces the computational cost. Synthetic and field seismic data tests show that the proposed method effectively suppresses the multiples.

  29. Characterizing L1-norm best-fit subspaces

    NASA Astrophysics Data System (ADS)

    Brooks, J. Paul; Dulá, José H.

    2017-05-01

    Fitting affine objects to data is the basis of many tools and methodologies in statistics, machine learning, and signal processing. The L1 norm is often employed to produce subspaces exhibiting a robustness to outliers and faulty observations. The L1-norm best-fit subspace problem is directly formulated as a nonlinear, nonconvex, and nondifferentiable optimization problem. The case when the subspace is a hyperplane can be solved to global optimality efficiently by solving a series of linear programs. The problem of finding the best-fit line has recently been shown to be NP-hard. We present necessary conditions for optimality for the best-fit subspace problem, and use them to characterize properties of optimal solutions.

  30. Seismic data restoration with a fast L1 norm trust region method

    NASA Astrophysics Data System (ADS)

    Cao, Jingjie; Wang, Yanfei

    2014-08-01

    Seismic data restoration is a major strategy for providing a reliable wavefield when field data violate the Shannon sampling theorem. Recovery by sparsity-promoting inversion often yields sparse solutions of seismic data in a transformed domain; however, most methods for sparsity-promoting inversion are line-search methods, which are efficient but inclined to obtain local solutions. Using a trust region method, which can provide globally convergent solutions, is a good choice to overcome this shortcoming. A trust region method for sparse inversion has been proposed previously; however, its efficiency must be improved to make it suitable for large-scale computation. In this paper, a new L1 norm trust region model is proposed for seismic data restoration, and a robust gradient projection method is utilized to solve the sub-problem. Numerical results on synthetic and field data demonstrate that the proposed trust region method achieves excellent computation speed and is a viable alternative for large-scale computation.

  31. The design of L1-norm visco-acoustic wavefield extrapolators

    NASA Astrophysics Data System (ADS)

    Salam, Syed Abdul; Mousa, Wail A.

    2018-04-01

    Explicit depth frequency-space (f - x) prestack imaging is an attractive mechanism for seismic imaging. To date, the main focus of this method has been migration assuming an acoustic medium; very little work has assumed visco-acoustic media. Real seismic data usually suffer from attenuation and dispersion effects, and new operators are required to compensate for attenuation in a visco-acoustic medium. We propose using the L1-norm minimization technique to design visco-acoustic f - x extrapolators. To demonstrate the accuracy and compensation capability of the operators, prestack depth migration is performed on the challenging Marmousi model for both acoustic and visco-acoustic datasets. The final migrated images show that the proposed L1-norm extrapolation results in practically stable extrapolation and improved image resolution.

  32. Manifold optimization-based analysis dictionary learning with an ℓ1/2-norm regularizer.

    PubMed

    Li, Zhenni; Ding, Shuxue; Li, Yujie; Yang, Zuyuan; Xie, Shengli; Chen, Wuhui

    2018-02-01

    Recently there has been increasing attention towards analysis dictionary learning, where it is an open problem to obtain strong sparsity-promoting solutions efficiently while simultaneously avoiding trivial solutions of the dictionary. In this paper, to obtain strong sparsity-promoting solutions, we employ the ℓ1/2 norm as a regularizer. Recent work on ℓ1/2-norm regularization theory in compressive sensing shows that its solutions can be sparser than those obtained with the ℓ1 norm. We transform a complex nonconvex optimization into a number of one-dimensional minimization problems, whose closed-form solutions can then be obtained efficiently. To avoid trivial solutions, we apply manifold optimization to update the dictionary directly on the manifold satisfying the orthonormality constraint, so that the dictionary avoids trivial solutions while capturing its intrinsic properties. Experiments with synthetic and real-world data verify that the proposed algorithm for analysis dictionary learning not only obtains strong sparsity-promoting solutions efficiently, but also learns a more accurate dictionary in terms of dictionary recovery and image processing than state-of-the-art algorithms. Copyright © 2017 Elsevier Ltd. All rights reserved.

  33. Hessian-based norm regularization for image restoration with biomedical applications.

    PubMed

    Lefkimmiatis, Stamatios; Bourquard, Aurélien; Unser, Michael

    2012-03-01

    We present nonquadratic Hessian-based regularization methods that can be effectively used for image restoration problems in a variational framework. Motivated by the great success of the total-variation (TV) functional, we extend it to also include second-order differential operators. Specifically, we derive second-order regularizers that involve matrix norms of the Hessian operator. The definition of these functionals is based on an alternative interpretation of TV that relies on mixed norms of directional derivatives. We show that the resulting regularizers retain some of the most favorable properties of TV, i.e., convexity, homogeneity, rotation, and translation invariance, while dealing effectively with the staircase effect. We further develop an efficient minimization scheme for the corresponding objective functions. The proposed algorithm is of the iteratively reweighted least-squares type and results from a majorization-minimization approach. It relies on a problem-specific preconditioned conjugate gradient method, which makes the overall minimization scheme very attractive since it can be applied effectively to large images in a reasonable computational time. We validate the overall proposed regularization framework through deblurring experiments under additive Gaussian noise on standard and biomedical images.

  34. Time Series Imputation via L1 Norm-Based Singular Spectrum Analysis

    NASA Astrophysics Data System (ADS)

    Kalantari, Mahdi; Yarmohammadi, Masoud; Hassani, Hossein; Silva, Emmanuel Sirimal

    Missing values in time series data are a well-known and important problem that researchers in many fields have studied extensively. In this paper, a new nonparametric approach for missing value imputation in time series is proposed. The main novelty of this research is the application of an L1 norm-based version of Singular Spectrum Analysis (SSA), namely L1-SSA, which is robust against outliers. The performance of the new imputation method has been compared with many other established methods by applying them to various real and simulated time series. The obtained results confirm that the SSA-based methods, especially L1-SSA, provide better imputation than the other methods.

  35. An L1-norm phase constraint for half-Fourier compressed sensing in 3D MR imaging.

    PubMed

    Li, Guobin; Hennig, Jürgen; Raithel, Esther; Büchert, Martin; Paul, Dominik; Korvink, Jan G; Zaitsev, Maxim

    2015-10-01

    In most half-Fourier imaging methods, explicit phase replacement is used. In combination with parallel imaging or compressed sensing, half-Fourier reconstruction is usually performed in a separate step. The purpose of this paper is to report that integrating half-Fourier reconstruction into the iterative reconstruction minimizes reconstruction errors. The L1-norm phase constraint for half-Fourier imaging proposed in this work is compared with the L2-norm variant of the same algorithm and with several typical half-Fourier reconstruction methods. Half-Fourier imaging with the proposed phase constraint can be seamlessly combined with parallel imaging and compressed sensing to achieve high acceleration factors. In simulations and in in-vivo experiments, half-Fourier imaging with the proposed L1-norm phase constraint delivers superior performance, both in the reconstruction of image details and in robustness against phase estimation errors. The performance and feasibility of half-Fourier imaging with the proposed L1-norm phase constraint are thus demonstrated, and its seamless combination with parallel imaging and compressed sensing enables the use of greater acceleration in 3D MR imaging.

  36. Linearized Alternating Direction Method of Multipliers for Constrained Nonconvex Regularized Optimization

    DTIC Science & Technology

    2016-11-22

    …exploiting the structure of the graph, we replace the ℓ1 norm by the nonconvex Capped-ℓ1 norm and obtain the Generalized Capped-ℓ1 regularized logistic regression… Nonconvex penalties provide better approximations of the ℓ0 norm, theoretically and computationally, than the ℓ1 norm, for example in compressive sensing (Xiao et al., 2011).

  37. Trace Norm Regularized CANDECOMP/PARAFAC Decomposition With Missing Data.

    PubMed

    Liu, Yuanyuan; Shang, Fanhua; Jiao, Licheng; Cheng, James; Cheng, Hong

    2015-11-01

    In recent years, low-rank tensor completion (LRTC) problems have received a significant amount of attention in computer vision, data mining, and signal processing. The existing trace norm minimization algorithms for iteratively solving LRTC problems involve multiple singular value decompositions of very large matrices at each iteration and therefore suffer from high computational cost. In this paper, we propose a novel trace norm regularized CANDECOMP/PARAFAC decomposition (TNCP) method for simultaneous tensor decomposition and completion. We first formulate a factor matrix rank minimization model by deducing the relation between the rank of each factor matrix and the mode-n rank of a tensor. Then, we introduce a tractable relaxation of our rank function and thereby obtain a convex problem involving much smaller-scale matrix trace norm minimization. Finally, we develop an efficient algorithm based on the alternating direction method of multipliers to solve our problem. Promising experimental results on synthetic and real-world data validate the effectiveness of our TNCP method. Moreover, TNCP is significantly faster than state-of-the-art methods and scales to larger problems.

  38. Structure-Based Low-Rank Model With Graph Nuclear Norm Regularization for Noise Removal.

    PubMed

    Ge, Qi; Jing, Xiao-Yuan; Wu, Fei; Wei, Zhi-Hui; Xiao, Liang; Shao, Wen-Ze; Yue, Dong; Li, Hai-Bo

    2017-07-01

    Nonlocal image representation methods, including group-based sparse coding and block-matching 3-D filtering, have shown great performance in low-level vision tasks. The nonlocal prior is extracted from each group consisting of patches with similar intensities. Grouping patches based on intensity similarity, however, gives rise to disturbance and inaccuracy in estimating the true image. To address this problem, we propose a structure-based low-rank model with graph nuclear norm regularization. We exploit the local manifold structure inside a patch and group the patches by a manifold-structure distance metric. With the manifold structure information, a graph nuclear norm regularization is established and incorporated into a low-rank approximation model. We then prove that the graph-based regularization is equivalent to a weighted nuclear norm and that the proposed model can be solved by a weighted singular-value thresholding algorithm. Extensive experiments on additive white Gaussian noise removal and mixed noise removal demonstrate that the proposed method achieves better performance than several state-of-the-art algorithms.

  39. 1-norm support vector novelty detection and its sparseness.

    PubMed

    Zhang, Li; Zhou, WeiDa

    2013-12-01

    This paper proposes a 1-norm support vector novelty detection (SVND) method and discusses its sparseness. 1-norm SVND is formulated as a linear programming problem and uses two techniques for inducing sparseness, namely the 1-norm regularization and the hinge loss function. We also find two upper bounds on the sparseness of 1-norm SVND, namely the exact support vector (ESV) bound and the kernel Gram matrix rank bound. The ESV bound indicates that 1-norm SVND has a sparser representation model than SVND. The kernel Gram matrix rank bound can loosely estimate the sparseness of 1-norm SVND. Experimental results show that 1-norm SVND is feasible and effective. Copyright © 2013 Elsevier Ltd. All rights reserved.

  40. Robust 2DPCA with non-greedy l1-norm maximization for image analysis.

    PubMed

    Wang, Rong; Nie, Feiping; Yang, Xiaojun; Gao, Feifei; Yao, Minli

    2015-05-01

    2-D principal component analysis based on the l1-norm (2DPCA-L1) is a recently developed approach for robust dimensionality reduction and feature extraction in the image domain. Normally, a greedy strategy is applied because the l1-norm maximization problem is difficult to solve directly; such a strategy, however, easily gets stuck in local solutions. In this paper, we propose a robust 2DPCA with non-greedy l1-norm maximization in which all projection directions are optimized simultaneously. Experimental results on face and other datasets confirm the effectiveness of the proposed approach.
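
    For context, the hedged sketch below implements the greedy fixed-point iteration for an L1-norm principal direction (the strategy the paper's non-greedy method improves on): alternate a sign step with a re-normalization until the direction stabilizes. X is a toy data matrix, not an image dataset, and the 2-D and non-greedy aspects of the paper are not reproduced.

```python
import numpy as np

def pca_l1_direction(X, n_iter=100, seed=0):
    """Greedy fixed-point iteration for argmax_{||w||=1} sum_i |x_i . w|."""
    rng = np.random.default_rng(seed)
    w = rng.normal(size=X.shape[1])
    w /= np.linalg.norm(w)
    for _ in range(n_iter):
        s = np.sign(X @ w)                 # polarity of each sample
        s[s == 0] = 1.0
        w_new = X.T @ s
        w_new /= np.linalg.norm(w_new)     # back to the unit sphere
        if np.allclose(w_new, w):
            break
        w = w_new
    return w
```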

  41. Joint L1 and Total Variation Regularization for Fluorescence Molecular Tomography

    PubMed Central

    Dutta, Joyita; Ahn, Sangtae; Li, Changqing; Cherry, Simon R.; Leahy, Richard M.

    2012-01-01

    Fluorescence molecular tomography (FMT) is an imaging modality that exploits the specificity of fluorescent biomarkers to enable 3D visualization of molecular targets and pathways in vivo in small animals. Owing to the high degree of absorption and scattering of light through tissue, the FMT inverse problem is inherently ill-conditioned, making image reconstruction highly susceptible to the effects of noise and numerical errors. Appropriate priors or penalties are needed to facilitate reconstruction and to restrict the search space to a specific solution set. Typically, fluorescent probes are locally concentrated within specific areas of interest (e.g., inside tumors). The commonly used L2 norm penalty generates the minimum energy solution, which tends to be spread out in space. Instead, we present here an approach combining the L1 and total variation norm penalties, the former to suppress spurious background signals and enforce sparsity and the latter to preserve local smoothness and piecewise constancy in the reconstructed images. We have developed a surrogate-based optimization method for minimizing the joint penalties. The method was validated using both simulated and experimental data obtained from a mouse-shaped phantom mimicking tissue optical properties and containing two embedded fluorescent sources. Fluorescence data were collected using a 3D FMT setup that uses an EMCCD camera for image acquisition and a conical mirror for full-surface viewing. A range of performance metrics were utilized to evaluate our simulation results and to compare our method with the L1, L2, and total variation norm penalty based approaches. The experimental results were assessed using Dice similarity coefficients computed after co-registration with a CT image of the phantom. PMID:22390906
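
    In generic notation for such penalized reconstructions (the symbols are assumed here, not taken from the paper: x the fluorophore image, A the forward model, y the measurements, and two regularization weights), the joint objective has the form

    $$ \hat{x} = \arg\min_{x \ge 0} \; \|Ax - y\|_2^2 + \lambda_1 \|x\|_1 + \lambda_{\mathrm{TV}} \, \mathrm{TV}(x), $$

    where the first penalty promotes sparsity and the second piecewise constancy.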

  2. Graph cuts via l1 norm minimization.

    PubMed

    Bhusnurmath, Arvind; Taylor, Camillo J

    2008-10-01

    Graph cuts have become an increasingly important tool for solving a number of energy minimization problems in computer vision and other fields. In this paper, the graph cut problem is reformulated as an unconstrained l1 norm minimization that can be solved effectively using interior point methods. This reformulation exposes connections between the graph cuts and other related continuous optimization problems. Eventually the problem is reduced to solving a sequence of sparse linear systems involving the Laplacian of the underlying graph. The proposed procedure exploits the structure of these linear systems in a manner that is easily amenable to parallel implementations. Experimental results obtained by applying the procedure to graphs derived from image processing problems are provided.
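
    The computational core is then repeated solves of sparse, symmetric positive semidefinite Laplacian systems, for which iterative solvers are well suited. A generic sketch, assuming an edge list and edge weights as placeholder inputs:

    ```python
    import numpy as np
    import scipy.sparse as sp
    import scipy.sparse.linalg as spla

    def solve_laplacian(edges, weights, rhs, n):
        """Solve L x = rhs for the weighted graph Laplacian L, the kind
        of sparse SPD system each interior-point step reduces to."""
        i, j = edges[:, 0], edges[:, 1]
        W = sp.coo_matrix((weights, (i, j)), shape=(n, n))
        W = W + W.T                               # symmetrize adjacency
        L = sp.diags(np.asarray(W.sum(axis=1)).ravel()) - W
        L = L + 1e-8 * sp.eye(n)                  # lift the zero eigenvalue
        x, _ = spla.cg(L.tocsr(), rhs)            # conjugate gradient solve
        return x
    ```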

  3. Spectral L2/L1 norm: A new perspective for spectral kurtosis for characterizing non-stationary signals

    NASA Astrophysics Data System (ADS)

    Wang, Dong

    2018-05-01

    Thanks to the great efforts made by Antoni (2006), spectral kurtosis has been recognized as a milestone for characterizing non-stationary signals, especially bearing fault signals. The main idea of spectral kurtosis is to use the fourth standardized moment, namely kurtosis, as a function of spectral frequency, so as to indicate how repetitive transients caused by a bearing defect vary with frequency. Spectral kurtosis is defined on an analytic bearing fault signal constructed from either a complex filter or the Hilbert transform. In related work, Borghesani et al. (2014) mathematically revealed the relationship between the kurtosis of an analytic bearing fault signal and the square of its squared envelope spectrum, thereby explaining spectral correlation for the quantification of bearing fault signals. More interestingly, it was discovered that the sum of peaks at cyclic frequencies in the square of the squared envelope spectrum corresponds to the raw fourth-order moment. Inspired by these works, in this paper we mathematically show that: (1) spectral kurtosis can be decomposed into a squared envelope and a squared L2/L1 norm, so that spectral kurtosis can be interpreted as a spectral squared L2/L1 norm; (2) the spectral L2/L1 norm is formally defined for characterizing bearing fault signals, and two geometrical explanations of it are given; (3) the spectral L2/L1 norm is proportional to the square root of the sum of peaks at cyclic frequencies in the square of the squared envelope spectrum; and (4) several extensions of the spectral L2/L1 norm for characterizing bearing fault signals are pointed out.
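
    The diagnostic quantity itself is straightforward to compute: band-limit the signal, form the squared envelope of the analytic signal, and take the ratio of its L2 and L1 norms; large values flag repetitive transients. A schematic reading of this idea (crude FFT masking stands in for a proper filter bank; this is not the paper's exact estimator):

    ```python
    import numpy as np
    from scipy.signal import hilbert

    def spectral_l2_l1(x, fs, bands):
        """Return the L2/L1 norm ratio of the squared envelope of x in
        each (lo, hi) frequency band; higher values indicate stronger
        repetitive transients in that band."""
        X = np.fft.rfft(x)
        freqs = np.fft.rfftfreq(len(x), 1.0 / fs)
        scores = []
        for lo, hi in bands:
            xb = np.fft.irfft(X * ((freqs >= lo) & (freqs < hi)), n=len(x))
            env2 = np.abs(hilbert(xb)) ** 2       # squared envelope
            scores.append(np.linalg.norm(env2, 2) / np.linalg.norm(env2, 1))
        return scores
    ```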

  4. Molecular cancer classification using a meta-sample-based regularized robust coding method.

    PubMed

    Wang, Shu-Lin; Sun, Liuchao; Fang, Jianwen

    2014-01-01

    Previous studies have demonstrated that machine-learning-based molecular cancer classification using gene expression profiling (GEP) data is promising for the clinical diagnosis and treatment of cancer. Novel classification methods with high efficiency and prediction accuracy are still needed to deal with the high dimensionality and small sample size of typical GEP data. Recently, the sparse representation (SR) method has been successfully applied to cancer classification. Nevertheless, its efficiency needs to be improved when analyzing large-scale GEP data. In this paper we present meta-sample-based regularized robust coding classification (MRRCC), a novel and effective cancer classification technique that combines meta-sample-based clustering with the regularized robust coding (RRC) method. It assumes that the coding residual and the coding coefficients are each independent and identically distributed. Similar to meta-sample-based SR classification (MSRC), MRRCC extracts a set of meta-samples from the training samples and then encodes a testing sample as a sparse linear combination of these meta-samples. The representation fidelity is measured by the l2-norm or l1-norm of the coding residual. Extensive experiments on publicly available GEP datasets demonstrate that the proposed method is more efficient, while its prediction accuracy is equivalent to that of existing MSRC-based methods and better than that of other state-of-the-art dimension-reduction-based methods.
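
    The underlying decision rule follows the sparse-representation-classification pattern: code the test sample over the meta-sample dictionary, then assign the class whose atoms give the smallest reconstruction residual. A generic SRC sketch, using a plain Lasso coder in place of the paper's RRC estimator (D, labels, and alpha are illustrative):

    ```python
    import numpy as np
    from sklearn.linear_model import Lasso

    def src_classify(D, labels, y, alpha=0.01):
        """D: (n_features, n_atoms) dictionary of (meta-)samples;
        labels: class of each atom; y: test sample. Classify by the
        smallest class-wise reconstruction residual."""
        coef = Lasso(alpha=alpha, fit_intercept=False).fit(D, y).coef_
        classes = np.unique(labels)
        resid = [np.linalg.norm(y - D[:, labels == c] @ coef[labels == c])
                 for c in classes]
        return classes[int(np.argmin(resid))]
    ```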

  5. Passive shimming of a superconducting magnet using the L1-norm regularized least square algorithm.

    PubMed

    Kong, Xia; Zhu, Minhua; Xia, Ling; Wang, Qiuliang; Li, Yi; Zhu, Xuchen; Liu, Feng; Crozier, Stuart

    2016-02-01

    The uniformity of the static magnetic field B0 is of prime importance for an MRI system. The passive shimming technique is usually applied to improve the uniformity of the static field by optimizing the layout of a series of steel shims. The steel pieces are fixed in the drawers in the inner bore of the superconducting magnet, and produce a magnetizing field in the imaging region to compensate for the inhomogeneity of the B0 field. In practice, the total mass of steel used for shimming should be minimized, in addition to meeting the field uniformity requirement, because the presence of steel shims may introduce a thermal stability problem. The passive shimming procedure is typically realized using the linear programming (LP) method. The LP approach, however, is generally slow and also has difficulty balancing the field quality and the total amount of steel used for shimming. In this paper, we have developed a new algorithm that is better able to balance the dual constraints of field uniformity and the total mass of the shims. The least square method is used to minimize the magnetic field inhomogeneity over the imaging surface, with the total mass of steel controlled by an L1-norm based constraint. The proposed algorithm has been tested with practical field data, and the results show that, with similar computational cost and mass of shim material, the new algorithm achieves superior field uniformity (43% better for the test case) compared with the conventional linear programming approach. Copyright © 2016 Elsevier Inc. All rights reserved.
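
    The optimization described is essentially a nonnegative Lasso: a least-squares fit of the target field with an L1 penalty controlling total shim mass. A minimal sketch, with A (shim-to-field sensitivity matrix), b (field error to correct), and lam as placeholder problem data:

    ```python
    import numpy as np
    from sklearn.linear_model import Lasso

    def shim_l1(A, b, lam):
        """Solve min_m ||A m - b||_2^2 + lam * ||m||_1 with m >= 0;
        sklearn's objective is scaled by 1/(2 n), hence the alpha below.
        Nonnegativity reflects physical shim masses."""
        model = Lasso(alpha=lam / (2 * len(b)), positive=True,
                      fit_intercept=False, max_iter=50000)
        return model.fit(A, b).coef_
    ```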

  6. Fluorescence molecular tomography reconstruction via discrete cosine transform-based regularization

    NASA Astrophysics Data System (ADS)

    Shi, Junwei; Liu, Fei; Zhang, Jiulou; Luo, Jianwen; Bai, Jing

    2015-05-01

    Fluorescence molecular tomography (FMT) as a noninvasive imaging modality has been widely used for biomedical preclinical applications. However, FMT reconstruction suffers from severe ill-posedness, especially when a limited number of projections are used. In order to improve the quality of FMT reconstruction results, a discrete cosine transform (DCT) based reweighted L1-norm regularization algorithm is proposed. In each iteration of the reconstruction process, different reweighted regularization parameters are adaptively assigned according to the values of DCT coefficients to suppress the reconstruction noise. In addition, the permission region of the reconstructed fluorophores is adaptively constructed to increase the convergence speed. In order to evaluate the performance of the proposed algorithm, physical phantom and in vivo mouse experiments with a limited number of projections are carried out. For comparison, different L1-norm regularization strategies are employed. By quantifying the signal-to-noise ratio (SNR) of the reconstruction results in the phantom and in vivo mouse experiments with four projections, the proposed DCT-based reweighted L1-norm regularization shows higher SNR than other L1-norm regularizations employed in this work.
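
    Reweighted L1 schemes of this kind follow the familiar pattern of iteratively reweighted minimization: repeatedly solve a weighted L1 problem with weights roughly inverse to the current coefficient magnitudes (here, the paper derives its weights from the DCT coefficients instead). A generic sketch of the outer loop, with the inner weighted solver left abstract as an assumed callback:

    ```python
    import numpy as np

    def reweighted_l1(solve_weighted, x0, n_outer=5, eps=1e-3):
        """Outer loop of iteratively reweighted L1 minimization.
        `solve_weighted(w)` is a user-supplied solver returning the
        minimizer of the weighted-L1 problem for weight vector w."""
        x = x0
        for _ in range(n_outer):
            w = 1.0 / (np.abs(x) + eps)   # small coefficients get larger penalties
            x = solve_weighted(w)
        return x
    ```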

  7. Robust Principal Component Analysis Regularized by Truncated Nuclear Norm for Identifying Differentially Expressed Genes.

    PubMed

    Wang, Ya-Xuan; Gao, Ying-Lian; Liu, Jin-Xing; Kong, Xiang-Zhen; Li, Hai-Jun

    2017-09-01

    Identifying differentially expressed genes from among thousands of genes is a challenging task. Robust principal component analysis (RPCA) is an efficient method for the identification of differentially expressed genes. The RPCA method uses the nuclear norm to approximate the rank function. However, theoretical studies have shown that the nuclear norm shrinks all singular values, so it may not be the best approximation of the rank function. The truncated nuclear norm is defined as the sum of the smaller singular values, which may achieve a better approximation of the rank function than the nuclear norm. In this paper, a novel method is proposed by replacing the nuclear norm of RPCA with the truncated nuclear norm; it is named robust principal component analysis regularized by truncated nuclear norm (TRPCA). The method decomposes the observation matrix of genomic data into a low-rank matrix and a sparse matrix. Because the significant genes can be considered sparse signals, the differentially expressed genes are viewed as sparse perturbation signals. Thus, the differentially expressed genes can be identified according to the sparse matrix. The experimental results on The Cancer Genome Atlas data illustrate that the TRPCA method outperforms other state-of-the-art methods in the identification of differentially expressed genes.
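
    The penalty itself is simple to evaluate: discard the r largest singular values and sum the remainder. A minimal sketch:

    ```python
    import numpy as np

    def truncated_nuclear_norm(X, r):
        """Sum of all but the r largest singular values of X, used in
        place of the full nuclear norm to better approximate rank."""
        s = np.linalg.svd(X, compute_uv=False)   # sorted descending
        return s[r:].sum()
    ```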

  8. Arbitrary norm support vector machines.

    PubMed

    Huang, Kaizhu; Zheng, Danian; King, Irwin; Lyu, Michael R

    2009-02-01

    Support vector machines (SVMs) are state-of-the-art classifiers. Typically the L2-norm or L1-norm is adopted as the regularization term in SVMs, while other norm-based SVMs, for example the L0-norm SVM or even the L∞-norm SVM, are rarely seen in the literature. The major reason is that the L0-norm describes a discontinuous and nonconvex term, leading to a combinatorially NP-hard optimization problem. In this letter, motivated by Bayesian learning, we propose a novel framework that can implement arbitrary norm-based SVMs in polynomial time. One significant feature of this framework is that only a sequence of sequential minimal optimization problems needs to be solved, thus making it practical in many real applications. The proposed framework is important in the sense that Bayesian priors can be efficiently plugged into most learning methods without knowing their explicit form. Hence, this builds a connection between Bayesian learning and the kernel machines. We derive the theoretical framework, demonstrate how our approach works on the L0-norm SVM as a typical example, and perform a series of experiments to validate its advantages. Experimental results on nine benchmark data sets are very encouraging. The implemented L0-norm is competitive with or even better than the standard L2-norm SVM in terms of accuracy, but with a reduced number of support vectors (9.46% fewer on average). When compared with another sparse model, the relevance vector machine, our proposed algorithm also demonstrates better sparse properties with a training speed over seven times faster.

  9. A comparative study of minimum norm inverse methods for MEG imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Leahy, R.M.; Mosher, J.C.; Phillips, J.W.

    1996-07-01

    The majority of MEG imaging techniques currently in use fall into the general class of (weighted) minimum norm methods. The minimization of a norm is used as the basis for choosing one from a generally infinite set of solutions that provide an equally good fit to the data. This ambiguity in the solution arises from the inherent non-uniqueness of the continuous inverse problem and is compounded by the imbalance between the relatively small number of measurements and the large number of source voxels. Here we present a unified view of the minimum norm methods and describe how we can use Tikhonov regularization to avoid instabilities in the solutions due to noise. We then compare the performance of regularized versions of three well known linear minimum norm methods with the non-linear iteratively reweighted minimum norm method and a Bayesian approach.
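
    The regularized linear minimum norm estimate referred to here has the familiar Tikhonov closed form for an underdetermined lead-field matrix. A minimal sketch (G, b, and lam are generic symbols, not taken from the report):

    ```python
    import numpy as np

    def tikhonov_min_norm(G, b, lam):
        """Tikhonov-regularized minimum norm inverse for sensor data b
        and lead field G (sensors x sources):
        x = G^T (G G^T + lam I)^{-1} b."""
        m = G.shape[0]
        return G.T @ np.linalg.solve(G @ G.T + lam * np.eye(m), b)
    ```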

  10. Fast Quantitative Susceptibility Mapping with L1-Regularization and Automatic Parameter Selection

    PubMed Central

    Bilgic, Berkin; Fan, Audrey P.; Polimeni, Jonathan R.; Cauley, Stephen F.; Bianciardi, Marta; Adalsteinsson, Elfar; Wald, Lawrence L.; Setsompop, Kawin

    2014-01-01

    Purpose: To enable fast reconstruction of quantitative susceptibility maps with a total variation penalty and automatic regularization parameter selection. Methods: ℓ1-regularized susceptibility mapping is accelerated by variable splitting, which allows closed-form evaluation of each iteration of the algorithm by soft thresholding and FFTs. This fast algorithm also renders automatic regularization parameter estimation practical. A weighting mask derived from the magnitude signal can be incorporated to allow edge-aware regularization. Results: Compared to the nonlinear conjugate gradient (CG) solver, the proposed method offers a 20× speed-up in reconstruction time. A complete pipeline including Laplacian phase unwrapping, background phase removal with SHARP filtering, and ℓ1-regularized dipole inversion at 0.6 mm isotropic resolution is completed in 1.2 minutes using Matlab on a standard workstation, compared to 22 minutes using the CG solver. This fast reconstruction allows estimation of regularization parameters with the L-curve method in 13 minutes, which would have taken 4 hours with the CG algorithm. The proposed method also permits magnitude-weighted regularization, which prevents smoothing across edges identified on the magnitude signal. This more complicated optimization problem is solved 5× faster than with the nonlinear CG approach. Utility of the proposed method is also demonstrated in functional BOLD susceptibility mapping, where processing of the massive time-series dataset would otherwise be prohibitive with the CG solver. Conclusion: Online reconstruction of regularized susceptibility maps may become feasible with the proposed dipole inversion. PMID:24259479
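
    The closed-form update that variable splitting enables is elementwise soft thresholding, the proximal operator of the ℓ1 norm. A minimal sketch:

    ```python
    import numpy as np

    def soft_threshold(x, tau):
        """Elementwise soft thresholding: prox of tau * ||.||_1.
        Shrinks each entry toward zero by tau, zeroing small ones."""
        return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)
    ```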

  11. TU-CD-BRA-12: Coupling PET Image Restoration and Segmentation Using Variational Method with Multiple Regularizations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, L; Tan, S; Lu, W

    Purpose: To propose a new variational method which couples image restoration with tumor segmentation for PET images using multiple regularizations. Methods: Partial volume effect (PVE) is a major degrading factor impacting tumor segmentation accuracy in PET imaging. Existing segmentation methods usually require prior calibrations to compensate for PVE, and they are highly system-dependent. Taking into account that image restoration and segmentation can promote each other and are tightly coupled, we proposed a variational method to solve the two problems together. Our method integrated total variation (TV) semi-blind deconvolution and Mumford-Shah (MS) segmentation. The TV norm was used on edges to protect the edge information, and the L2 norm was used to avoid the staircase effect in no-edge areas. The blur kernel was constrained to the Gaussian model parameterized by its variance, and we assumed that the variances in the X-Y and Z directions are different. The energy functional was iteratively optimized by an alternate minimization algorithm. Segmentation performance was tested on eleven patients with non-Hodgkin’s lymphoma, and evaluated by the Dice similarity index (DSI) and classification error (CE). For comparison, seven other widely used methods were also tested and evaluated. Results: The combination of TV and L2 regularizations effectively improved the segmentation accuracy. The average DSI increased by around 0.1 compared with using either the TV or the L2 norm alone. The proposed method was clearly superior to the other tested methods, with an average DSI and CE of 0.80 and 0.41, while the FCM method, the second best, had an average DSI and CE of only 0.66 and 0.64. Conclusion: Coupling image restoration and segmentation can handle PVE and thus improves tumor segmentation accuracy in PET. Alternate use of TV and L2 regularizations can further improve the performance of the algorithm. This work was supported in part by National

  12. The Exact Solution to Rank-1 L1-Norm TUCKER2 Decomposition

    NASA Astrophysics Data System (ADS)

    Markopoulos, Panos P.; Chachlakis, Dimitris G.; Papalexakis, Evangelos E.

    2018-04-01

    We study rank-1 L1-norm-based TUCKER2 (L1-TUCKER2) decomposition of 3-way tensors, treated as a collection of $N$ $D \times M$ matrices that are to be jointly decomposed. Our contributions are as follows. i) We prove that the problem is equivalent to combinatorial optimization over $N$ antipodal-binary variables. ii) We derive the first two algorithms in the literature for its exact solution. The first algorithm has cost exponential in $N$; the second one has cost polynomial in $N$ (under a mild assumption). Our algorithms are accompanied by formal complexity analysis. iii) We conduct numerical studies to compare the performance of exact L1-TUCKER2 (proposed) with standard HOSVD, HOOI, GLRAM, PCA, L1-PCA, and TPCA-L1. Our studies show that L1-TUCKER2 outperforms (in tensor approximation) all the above counterparts when the processed data are outlier corrupted.

  13. Multi-task feature learning by using trace norm regularization

    NASA Astrophysics Data System (ADS)

    Jiangmei, Zhang; Binfeng, Yu; Haibo, Ji; Wang, Kunpeng

    2017-11-01

    Multi-task learning can exploit the correlations among multiple related machine learning problems to improve performance. This paper considers applying the multi-task learning method to learn a single task. We propose a new learning approach, which employs the mixture-of-experts model to divide a learning task into several related sub-tasks, and then uses trace norm regularization to extract a common feature representation of these sub-tasks. A nonlinear extension of this approach using kernels is also provided. Experiments conducted on both simulated and real data sets demonstrate the advantage of the proposed approach.

  14. An experimental clinical evaluation of EIT imaging with ℓ1 data and image norms.

    PubMed

    Mamatjan, Yasin; Borsic, Andrea; Gürsoy, Doga; Adler, Andy

    2013-09-01

    Electrical impedance tomography (EIT) produces an image of internal conductivity distributions in a body from current injection and electrical measurements at surface electrodes. Typically, image reconstruction is formulated using regularized schemes in which ℓ2-norms are used for both the data misfit and image prior terms. Such a formulation is computationally convenient, but favours smooth conductivity solutions and is sensitive to outliers. Recent studies highlighted the potential of the ℓ1-norm and provided the mathematical basis to improve image quality and robustness of the images to data outliers. In this paper, we (i) extended a primal-dual interior point method (PDIPM) algorithm to 2.5D EIT image reconstruction to solve ℓ1 and mixed ℓ1/ℓ2 formulations efficiently, (ii) evaluated the formulation on clinical and experimental data, and (iii) developed a practical strategy to select hyperparameters using the L-curve, which requires minimal user dependence. The PDIPM algorithm was evaluated using clinical and experimental scenarios on human lung and dog breathing with known electrode errors, which require rigorous regularization and cause reconstruction with an ℓ2-norm solution to fail. The results showed that an ℓ1 solution is not only more robust to unavoidable measurement errors in a clinical setting, but also provides high contrast resolution on organ boundaries.

  15. A Distributed Learning Method for ℓ1-Regularized Kernel Machine over Wireless Sensor Networks

    PubMed Central

    Ji, Xinrong; Hou, Cuiqin; Hou, Yibin; Gao, Fang; Wang, Shulong

    2016-01-01

    In wireless sensor networks, centralized learning methods have very high communication costs and energy consumption. These are caused by the need to transmit scattered training examples from various sensor nodes to the central fusion center where a classifier or a regression machine is trained. To reduce the communication cost, a distributed learning method for a kernel machine that incorporates ℓ1-norm regularization (ℓ1-regularized) is investigated, and a novel distributed learning algorithm for the ℓ1-regularized kernel minimum mean squared error (KMSE) machine is proposed. The proposed algorithm relies on in-network processing and a collaboration that transmits the sparse model only between single-hop neighboring nodes. This paper evaluates the proposed algorithm with respect to prediction accuracy, model sparsity, communication cost, and number of iterations on synthetic and real datasets. The simulation results show that the proposed algorithm can obtain approximately the same prediction accuracy as that obtained by the batch learning method. Moreover, it is significantly superior in terms of model sparsity and communication cost, and it converges with fewer iterations. Finally, an experiment conducted on a wireless sensor network (WSN) test platform further shows the advantages of the proposed algorithm with respect to communication cost. PMID:27376298

  16. Schatten Matrix Norm Based Polarimetric SAR Data Regularization Application over Chamonix Mont-Blanc

    NASA Astrophysics Data System (ADS)

    Le, Thu Trang; Atto, Abdourrahmane M.; Trouve, Emmanuel

    2013-08-01

    The paper addresses the filtering of polarimetric synthetic aperture radar (PolSAR) images. The filtering strategy is based on a regularizing cost function associated with matrix norms called the Schatten p-norms, which operate on the matrix singular values. The proposed approach is illustrated on scattering and coherency matrices of RADARSAT-2 PolSAR images over the Chamonix Mont-Blanc site. Several values of p for the Schatten p-norms are surveyed, and their capabilities for filtering PolSAR images are assessed in comparison with conventional strategies for filtering PolSAR data.
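
    The Schatten p-norm itself is just the entrywise l_p norm applied to the singular values, which makes the regularizer easy to evaluate. A minimal sketch:

    ```python
    import numpy as np

    def schatten_p_norm(M, p):
        """Schatten p-norm: the l_p norm of the singular values.
        p=1 gives the nuclear norm, p=2 the Frobenius norm, and
        large p approaches the spectral norm."""
        s = np.linalg.svd(M, compute_uv=False)
        return float((s ** p).sum() ** (1.0 / p))
    ```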

  17. Concave 1-norm group selection

    PubMed Central

    Jiang, Dingfeng; Huang, Jian

    2015-01-01

    Grouping structures arise naturally in many high-dimensional problems. Incorporation of such information can improve model fitting and variable selection. Existing group selection methods, such as the group Lasso, require correct membership. However, in practice it can be difficult to correctly specify group membership of all variables. Thus, it is important to develop group selection methods that are robust against group mis-specification. Also, it is desirable to select groups as well as individual variables in many applications. We propose a class of concave 1-norm group penalties that is robust to grouping structure and can perform bi-level selection. A coordinate descent algorithm is developed to calculate solutions of the proposed group selection method. Theoretical convergence of the algorithm is proved under certain regularity conditions. Comparison with other methods suggests the proposed method is the most robust approach under membership mis-specification. Simulation studies and real data application indicate that the 1-norm concave group selection approach achieves better control of false discovery rates. An R package grppenalty implementing the proposed method is available at CRAN. PMID:25417206

  18. Sparse regularization for force identification using dictionaries

    NASA Astrophysics Data System (ADS)

    Qiao, Baijie; Zhang, Xingwu; Wang, Chenxi; Zhang, Hang; Chen, Xuefeng

    2016-04-01

    The classical function expansion method, based on minimizing the l2-norm of the response residual, employs various basis functions to represent the unknown force. Its difficulty lies in determining the optimum number of basis functions. Considering the sparsity of the force in the time domain or in another basis space, we develop a general sparse regularization method based on minimizing the l1-norm of the coefficient vector of the basis functions. The number of basis functions is adaptively determined by minimizing the number of nonzero components in the coefficient vector during the sparse regularization process. First, according to the profile of the unknown force, the dictionary composed of basis functions is determined. Second, a sparse convex optimization model for force identification is constructed. Third, given the transfer function and the operational response, sparse reconstruction by separable approximation (SpaRSA) is developed to solve the sparse regularization problem of force identification. Finally, experiments including identification of impact and harmonic forces are conducted on a cantilever thin plate structure to illustrate the effectiveness and applicability of SpaRSA. Besides the Dirac dictionary, three other sparse dictionaries, including Db6 wavelets, Sym4 wavelets, and cubic B-spline functions, can also accurately identify both single and double impact forces from highly noisy responses in a sparse representation frame. The discrete cosine functions can also successfully reconstruct harmonic forces, including sinusoidal, square, and triangular forces. Conversely, the traditional Tikhonov regularization method with the L-curve criterion fails to identify both the impact and harmonic forces in these cases.
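
    SpaRSA is a proximal-gradient scheme with adaptive (Barzilai-Borwein) step sizes; the basic iteration it accelerates is ISTA. A minimal ISTA sketch for the stated model, with H (transfer matrix) and y (measured response) as placeholders:

    ```python
    import numpy as np

    def ista(H, y, lam, n_iter=500):
        """Solve min_x 0.5*||H x - y||_2^2 + lam*||x||_1 by iterative
        shrinkage-thresholding with a fixed step 1/L."""
        L = np.linalg.norm(H, 2) ** 2          # Lipschitz const. of the gradient
        x = np.zeros(H.shape[1])
        for _ in range(n_iter):
            z = x - H.T @ (H @ x - y) / L      # gradient step
            x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # prox step
        return x
    ```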

  19. L2-norm multiple kernel learning and its application to biomedical data fusion

    PubMed Central

    2010-01-01

    Background: This paper introduces the notion of optimizing different norms in the dual problem of support vector machines with multiple kernels. The selection of norms yields different extensions of multiple kernel learning (MKL) such as L∞, L1, and L2 MKL. In particular, L2 MKL is a novel method that leads to non-sparse optimal kernel coefficients, which is different from the sparse kernel coefficients optimized by the existing L∞ MKL method. In real biomedical applications, L2 MKL may have more advantages over sparse integration methods for thoroughly combining complementary information in heterogeneous data sources. Results: We provide a theoretical analysis of the relationship between the L2 optimization of kernels in the dual problem and the L2 coefficient regularization in the primal problem. Understanding the dual L2 problem grants a unified view on MKL and enables us to extend the L2 method to a wide range of machine learning problems. We implement L2 MKL for ranking and classification problems and compare its performance with the sparse L∞ and the averaging L1 MKL methods. The experiments are carried out on six real biomedical data sets and two large-scale UCI data sets. L2 MKL yields better performance on most of the benchmark data sets. In particular, we propose a novel L2 MKL least squares support vector machine (LSSVM) algorithm, which is shown to be an efficient and promising classifier for large-scale data set processing. Conclusions: This paper extends the statistical framework of genomic data fusion based on MKL. Allowing non-sparse weights on the data sources is an attractive option in settings where we believe most data sources to be relevant to the problem at hand and want to avoid a "winner-takes-all" effect seen in L∞ MKL, which can be detrimental to the performance in prospective studies. The notion of optimizing L2 kernels can be straightforwardly extended to ranking, classification, regression, and clustering algorithms. To tackle the

  20. A P-Norm Robust Feature Extraction Method for Identifying Differentially Expressed Genes

    PubMed Central

    Liu, Jian; Liu, Jin-Xing; Gao, Ying-Lian; Kong, Xiang-Zhen; Wang, Xue-Song; Wang, Dong

    2015-01-01

    In current molecular biology, it is becoming increasingly important to identify differentially expressed genes closely correlated with a key biological process from gene expression data. In this paper, based on the Schatten p-norm and the Lp-norm, a novel p-norm robust feature extraction method is proposed to identify differentially expressed genes. In our method, the Schatten p-norm is used as the regularization function to obtain a low-rank matrix, and the Lp-norm is taken as the error function to improve robustness to outliers in the gene expression data. The results on simulation data show that our method can obtain higher identification accuracy than the competing methods. Numerous experiments on real gene expression data sets demonstrate that our method can identify more differentially expressed genes than the others. Moreover, we confirmed that the identified genes are closely correlated with the corresponding gene expression data. PMID:26201006

  2. Downscaling Satellite Precipitation with Emphasis on Extremes: A Variational 1-Norm Regularization in the Derivative Domain

    NASA Technical Reports Server (NTRS)

    Foufoula-Georgiou, E.; Ebtehaj, A. M.; Zhang, S. Q.; Hou, A. Y.

    2013-01-01

    The increasing availability of precipitation observations from space, e.g., from the Tropical Rainfall Measuring Mission (TRMM) and the forthcoming Global Precipitation Measuring (GPM) Mission, has fueled renewed interest in developing frameworks for downscaling and multi-sensor data fusion that can handle large data sets in computationally efficient ways while optimally reproducing desired properties of the underlying rainfall fields. Of special interest is the reproduction of extreme precipitation intensities and gradients, as these are directly relevant to hazard prediction. In this paper, we present a new formalism for downscaling satellite precipitation observations, which explicitly allows for the preservation of some key geometrical and statistical properties of spatial precipitation. These include sharp intensity gradients (due to high-intensity regions embedded within lower-intensity areas), coherent spatial structures (due to regions of slowly varying rainfall), and thicker-than-Gaussian tails of precipitation gradients and intensities. Specifically, we pose the downscaling problem as a discrete inverse problem and solve it via a regularized variational approach (variational downscaling), where the regularization term is selected to impose the desired smoothness in the solution while allowing for some steep gradients (called 1-norm or total variation regularization). We demonstrate the duality between this geometrically inspired solution and its Bayesian statistical interpretation, which is equivalent to assuming a Laplace prior distribution for the precipitation intensities in the derivative (wavelet) space. When the observation operator is not known, we discuss the effect of its misspecification and explore a previously proposed dictionary-based sparse inverse downscaling methodology to indirectly learn the observation operator from a database of coincidental high- and low-resolution observations. The proposed method and ideas are illustrated in case
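
    In generic inverse-problem notation (the symbols are assumed here: x the high-resolution field, y the coarse satellite observation, H the observation operator, and ∇ the derivative or wavelet operator), the variational downscaling objective takes the form

    $$ \hat{x} = \arg\min_{x} \; \|y - Hx\|_2^2 + \lambda \|\nabla x\|_1 , $$

    where the 1-norm of the derivatives permits the sharp gradients that an l2 smoother would blur away.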

  3. WE-G-207-04: Non-Local Total-Variation (NLTV) Combined with Reweighted L1-Norm for Compressed Sensing Based CT Reconstruction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, H; Chen, J; Pouliot, J

    2015-06-15

    Purpose: Compressed sensing (CS) has been used for CT (4DCT/CBCT) reconstruction with few projections to reduce the radiation dose. Total variation (TV) L1-minimization (min.) based on local information is the prevalent technique in CS, but it can be prone to noise. To address this problem, this work proposes to apply a new image processing technique, called non-local TV (NLTV), to CS-based CT reconstruction, and to incorporate a reweighted L1-norm into it for more precise reconstruction. Methods: TV minimizes intensity variations by considering two local neighboring voxels, which can be prone to noise, possibly damaging the reconstructed CT image. NLTV, by contrast, utilizes more global information by computing a weight function of the current voxel relative to a surrounding search area. In fact, it might be challenging to obtain an optimal solution due to the difficulty of defining the weight function with appropriate parameters. Introducing reweighted L1-min., designed to approximate the ideal L0-min., can reduce the dependence on the definition of the weight function, therefore improving the accuracy of the solution. This work implemented NLTV combined with reweighted L1-min. using the split Bregman iterative method. For evaluation, a noisy digital phantom and a pelvic CT image are employed to compare the quality of images reconstructed by TV, NLTV, and reweighted NLTV. Results: In both cases, conventional and reweighted NLTV outperform TV min. in signal-to-noise ratio (SNR) and root-mean-squared error of the reconstructed images. Relative to conventional NLTV, NLTV with the reweighted L1-norm slightly improved SNR while greatly increasing the contrast between tissues, owing to the additional iterative reweighting process. Conclusion: NLTV min. can provide more precise compressed-sensing-based CT image reconstruction by incorporating the reweighted L1-norm, while maintaining greater robustness to noise than TV min.

  4. MEG Connectivity and Power Detections with Minimum Norm Estimates Require Different Regularization Parameters.

    PubMed

    Hincapié, Ana-Sofía; Kujala, Jan; Mattout, Jérémie; Daligault, Sebastien; Delpuech, Claude; Mery, Domingo; Cosmelli, Diego; Jerbi, Karim

    2016-01-01

    Minimum Norm Estimation (MNE) is an inverse solution method widely used to reconstruct the source time series that underlie magnetoencephalography (MEG) data. MNE addresses the ill-posed nature of MEG source estimation through regularization (e.g., Tikhonov regularization). Selecting the best regularization parameter is a critical step. Generally, once set, it is common practice to keep the same coefficient throughout a study. However, it is not yet known whether the optimal lambda for spectral power analysis of MEG source data coincides with the optimal regularization for source-level oscillatory coupling analysis. We addressed this question via extensive Monte-Carlo simulations of MEG data, in which we generated 21,600 configurations of pairs of coupled sources with varying sizes, signal-to-noise ratios (SNR), and coupling strengths. We then searched for the Tikhonov regularization coefficients (lambda) that maximize detection performance for (a) power and (b) coherence. For coherence, the optimal lambda was two orders of magnitude smaller than the best lambda for power. Moreover, we found that the spatial extent of the interacting sources and the SNR, but not the extent of coupling, were the main parameters affecting the best choice of lambda. Our findings suggest using less regularization when measuring oscillatory coupling compared to power estimation.

  6. Concentration of the L1-norm of trigonometric polynomials and entire functions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Malykhin, Yu V; Ryutin, K S

    2014-11-30

    For any sufficiently large n, the minimal measure of a subset of [−π,π] on which some nonzero trigonometric polynomial of order ≤ n gains half of the L1-norm is shown to be π/(n+1). A similar result for entire functions of exponential type is established. Bibliography: 13 titles.

  7. Enhanced spatial resolution in fluorescence molecular tomography using restarted L1-regularized nonlinear conjugate gradient algorithm.

    PubMed

    Shi, Junwei; Liu, Fei; Zhang, Guanglei; Luo, Jianwen; Bai, Jing

    2014-04-01

    Owing to the high degree of scattering of light through tissues, the ill-posedness of the fluorescence molecular tomography (FMT) inverse problem causes relatively low spatial resolution in the reconstruction results. Unlike L2 regularization, L1 regularization can preserve details and reduce noise effectively. Reconstruction is obtained through a restarted L1 regularization-based nonlinear conjugate gradient (re-L1-NCG) algorithm, which has been proven to increase computational speed with low memory consumption. The algorithm consists of inner and outer iterations. In the inner iteration, L1-NCG is used to obtain the L1-regularized results. In the outer iteration, the restarted strategy is used to increase the convergence speed of L1-NCG. To demonstrate the performance of re-L1-NCG in terms of spatial resolution, simulation and physical phantom studies with fluorescent targets located at different edge-to-edge distances were carried out. The reconstruction results show that the re-L1-NCG algorithm can resolve targets with an edge-to-edge distance of 0.1 cm at a depth of 1.5 cm, which is a significant improvement for FMT.

  9. Human action recognition with group lasso regularized-support vector machine

    NASA Astrophysics Data System (ADS)

    Luo, Huiwu; Lu, Huanzhang; Wu, Yabei; Zhao, Fei

    2016-05-01

    The bag-of-visual-words (BOVW) and Fisher kernel are two popular models in human action recognition, and the support vector machine (SVM) is the most commonly used classifier for both. We identify two kinds of group structures in the feature representations constructed by BOVW and the Fisher kernel, respectively; such structural information can serve as a prior for the classifier and improve its performance, as has been verified in several areas. However, the standard SVM employs L2-norm regularization in its learning procedure, which penalizes each variable individually and cannot express the structural information of the feature representation. We replace the L2-norm regularization in the standard SVM with group lasso regularization, and propose a group lasso regularized support vector machine (GLRSVM). We then embed the group structural information of the feature representation into GLRSVM. Finally, we introduce an algorithm to solve the optimization problem of GLRSVM by the alternating direction method of multipliers. Experiments on the KTH, YouTube, and Hollywood2 datasets show that our method achieves promising results and improves on the state-of-the-art methods on the KTH and YouTube datasets.
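
    The group lasso regularizer replaces the squared L2 penalty of the standard SVM with a sum of per-group L2 norms, which drives entire groups of coefficients to zero together. A minimal sketch of the penalty, with an illustrative group-index structure:

    ```python
    import numpy as np

    def group_lasso_penalty(w, groups):
        """Sum over groups of the l2 norm of each group's coefficients;
        `groups` is an iterable of index arrays (illustrative layout).
        Zeroing a whole term removes that group from the model."""
        return sum(np.linalg.norm(w[idx]) for idx in groups)
    ```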

  10. Efficient L1 regularization-based reconstruction for fluorescent molecular tomography using restarted nonlinear conjugate gradient.

    PubMed

    Shi, Junwei; Zhang, Bin; Liu, Fei; Luo, Jianwen; Bai, Jing

    2013-09-15

    For the ill-posed fluorescent molecular tomography (FMT) inverse problem, the L1 regularization can protect the high-frequency information like edges while effectively reduce the image noise. However, the state-of-the-art L1 regularization-based algorithms for FMT reconstruction are expensive in memory, especially for large-scale problems. An efficient L1 regularization-based reconstruction algorithm based on nonlinear conjugate gradient with restarted strategy is proposed to increase the computational speed with low memory consumption. The reconstruction results from phantom experiments demonstrate that the proposed algorithm can obtain high spatial resolution and high signal-to-noise ratio, as well as high localization accuracy for fluorescence targets.

  11. Brain abnormality segmentation based on l1-norm minimization

    NASA Astrophysics Data System (ADS)

    Zeng, Ke; Erus, Guray; Tanwar, Manoj; Davatzikos, Christos

    2014-03-01

    We present a method that uses sparse representations to model the inter-individual variability of healthy anatomy from a limited number of normal medical images. Abnormalities in MR images are then defined as deviations from the normal variation. More precisely, we model an abnormal (pathological) signal y as the superposition of a normal part ỹ that can be sparsely represented under an example-based dictionary, and an abnormal part r. Motivated by a dense error correction scheme recently proposed for sparse signal recovery, we use l1-norm minimization to separate ỹ and r. We extend the existing framework, which was mainly used for robust face recognition in a discriminative setting, to address the challenges of brain image analysis, particularly the high-dimensionality, low-sample-size problem. The dictionary is constructed from local image patches extracted from training images aligned using smooth transformations, together with minor perturbations of those patches. A multi-scale sliding-window scheme is applied to capture anatomical variations ranging from fine and localized to coarser and more global. The statistical significance of the abnormality term r is obtained by comparison to its empirical distribution through cross-validation, and is used to assign an abnormality score to each voxel. In our validation experiments, the method is applied to segmenting abnormalities on 2-D slices of FLAIR images, and we obtain segmentation results consistent with the expert-defined masks.
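
    The dense-error-correction separation can be approximated with an ordinary Lasso by stacking the dictionary with an identity block that absorbs the abnormal part. A relaxed sketch of the equality-constrained l1 problem (alpha is an illustrative parameter):

    ```python
    import numpy as np
    from sklearn.linear_model import Lasso

    def separate_normal_abnormal(D, y, alpha=0.01):
        """Approximate min ||c||_1 + ||r||_1 s.t. y = D c + r by a
        Lasso over the stacked dictionary [D, I]; the identity block
        captures the abnormal residual r."""
        n = len(y)
        B = np.hstack([D, np.eye(n)])
        coef = Lasso(alpha=alpha, fit_intercept=False).fit(B, y).coef_
        c, r = coef[:D.shape[1]], coef[D.shape[1]:]
        return D @ c, r        # normal-part estimate and abnormality map
    ```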

  12. MEG source imaging method using fast L1 minimum-norm and its applications to signals with brain noise and human resting-state source amplitude images.

    PubMed

    Huang, Ming-Xiong; Huang, Charles W; Robb, Ashley; Angeles, AnneMarie; Nichols, Sharon L; Baker, Dewleen G; Song, Tao; Harrington, Deborah L; Theilmann, Rebecca J; Srinivasan, Ramesh; Heister, David; Diwakar, Mithun; Canive, Jose M; Edgar, J Christopher; Chen, Yu-Han; Ji, Zhengwei; Shen, Max; El-Gabalawy, Fady; Levy, Michael; McLay, Robert; Webb-Murphy, Jennifer; Liu, Thomas T; Drake, Angela; Lee, Roland R

    2014-01-01

    The present study developed a fast MEG source imaging technique based on Fast Vector-based Spatio-Temporal Analysis using a L1-minimum-norm (Fast-VESTAL) and then used the method to obtain the source amplitude images of resting-state magnetoencephalography (MEG) signals for different frequency bands. The Fast-VESTAL technique consists of two steps. First, L1-minimum-norm MEG source images were obtained for the dominant spatial modes of sensor-waveform covariance matrix. Next, accurate source time-courses with millisecond temporal resolution were obtained using an inverse operator constructed from the spatial source images of Step 1. Using simulations, Fast-VESTAL's performance was assessed for its 1) ability to localize multiple correlated sources; 2) ability to faithfully recover source time-courses; 3) robustness to different SNR conditions including SNR with negative dB levels; 4) capability to handle correlated brain noise; and 5) statistical maps of MEG source images. An objective pre-whitening method was also developed and integrated with Fast-VESTAL to remove correlated brain noise. Fast-VESTAL's performance was then examined in the analysis of human median-nerve MEG responses. The results demonstrated that this method easily distinguished sources in the entire somatosensory network. Next, Fast-VESTAL was applied to obtain the first whole-head MEG source-amplitude images from resting-state signals in 41 healthy control subjects, for all standard frequency bands. Comparisons between resting-state MEG sources images and known neurophysiology were provided. Additionally, in simulations and cases with MEG human responses, the results obtained from using conventional beamformer technique were compared with those from Fast-VESTAL, which highlighted the beamformer's problems of signal leaking and distorted source time-courses. © 2013.

  13. Variational method based on Retinex with double-norm hybrid constraints for uneven illumination correction

    NASA Astrophysics Data System (ADS)

    Li, Shuo; Wang, Hui; Wang, Liyong; Yu, Xiangzhou; Yang, Le

    2018-01-01

    The uneven illumination phenomenon reduces the quality of remote sensing images and causes interference in subsequent processing and applications. A variational method based on Retinex with double-norm hybrid constraints for uneven illumination correction is proposed. The L1 norm and the L2 norm are adopted to constrain the textures and details of the reflectance image and the smoothness of the illumination image, respectively. The problem of separating the illumination image from the reflectance image is transformed into the optimal solution of the variational model. In order to accelerate the solution, the split Bregman method is used to decompose the variational model into three subproblems, which are calculated by alternating iteration. Two groups of experiments are implemented on two synthetic images and three real remote sensing images. Compared with the variational Retinex method with a single-norm constraint and the Mask method, the proposed method performs better in both visual evaluation and quantitative measurements. The proposed method can effectively eliminate uneven illumination while maintaining the textures and details of the remote sensing image. Moreover, the proposed method with the split Bregman solver is more than 10 times faster than the same model solved by the steepest descent method.
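
    One common way to write such a double-norm Retinex model, in log-domain notation (the symbols and exact form are assumed here, not necessarily the paper's functional: S the log image, decomposed into reflectance R and illumination I), is

    $$ \min_{R,\,I} \; \|\nabla R\|_1 + \alpha \|\nabla I\|_2^2 \quad \text{subject to } S = R + I , $$

    where the L1 term preserves reflectance textures and the L2 term enforces smooth illumination; split Bregman then alternates over the resulting subproblems.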

  15. A norm knockout method on indirect reciprocity to reveal indispensable norms

    NASA Astrophysics Data System (ADS)

    Yamamoto, Hitoshi; Okada, Isamu; Uchida, Satoshi; Sasaki, Tatsuya

    2017-03-01

    Although various norms for reciprocity-based cooperation have been suggested that are evolutionarily stable against invasion from free riders, the process of alternation of norms and the role of diversified norms remain unclear in the evolution of cooperation. We clarify the co-evolutionary dynamics of norms and cooperation in indirect reciprocity and also identify the indispensable norms for the evolution of cooperation. Inspired by the gene knockout method, a genetic engineering technique, we developed the norm knockout method and clarified the norms necessary for the establishment of cooperation. The results of numerical investigations revealed that the majority of norms gradually transitioned to tolerant norms after defectors are eliminated by strict norms. Furthermore, no cooperation emerges when specific norms that are intolerant to defectors are knocked out.

  16. Simple modification of Oja rule limits L1-norm of weight vector and leads to sparse connectivity.

    PubMed

    Aparin, Vladimir

    2012-03-01

    This letter describes a simple modification of the Oja learning rule, which asymptotically constrains the L1-norm of an input weight vector instead of the L2-norm as in the original rule. This constraining is local as opposed to commonly used instant normalizations, which require the knowledge of all input weights of a neuron to update each one of them individually. The proposed rule converges to a weight vector that is sparser (has more zero weights) than the vector learned by the original Oja rule with or without the zero bound, which could explain the developmental synaptic pruning.
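
    For reference, the original Oja update is sketched below; its decay term asymptotically constrains the L2 norm of the weight vector, and the letter's modification replaces this decay so that the L1 norm is constrained instead (the modified form itself is not reproduced here):

    ```python
    import numpy as np

    def oja_step(w, x, eta=0.01):
        """One update of the original Oja rule. The -y*y*w decay keeps
        ||w||_2 bounded; the modified rule alters this term to bound
        ||w||_1 and yield sparser weights."""
        y = w @ x                       # neuron output
        return w + eta * y * (x - y * w)
    ```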

  17. A norm knockout method on indirect reciprocity to reveal indispensable norms

    PubMed Central

    Yamamoto, Hitoshi; Okada, Isamu; Uchida, Satoshi; Sasaki, Tatsuya

    2017-01-01

    Although various norms for reciprocity-based cooperation have been suggested that are evolutionarily stable against invasion from free riders, the process of alternation of norms and the role of diversified norms remain unclear in the evolution of cooperation. We clarify the co-evolutionary dynamics of norms and cooperation in indirect reciprocity and also identify the indispensable norms for the evolution of cooperation. Inspired by the gene knockout method, a genetic engineering technique, we developed the norm knockout method and clarified the norms necessary for the establishment of cooperation. The results of numerical investigations revealed that the majority of norms gradually transitioned to tolerant norms after defectors were eliminated by strict norms. Furthermore, no cooperation emerges when specific norms that are intolerant to defectors are knocked out. PMID:28276485

  18. Stabilizing l1-norm prediction models by supervised feature grouping.

    PubMed

    Kamkar, Iman; Gupta, Sunil Kumar; Phung, Dinh; Venkatesh, Svetha

    2016-02-01

    Emerging Electronic Medical Records (EMRs) have transformed modern healthcare. These records have great potential to be used for building clinical prediction models. However, a problem in using them is their high dimensionality. Since much of the information may not be relevant for prediction, the underlying complexity of the prediction models may not be high. A popular way to deal with this problem is to employ feature selection. Lasso and l1-norm based feature selection methods have shown promising results. However, in the presence of correlated features, these methods select features that change considerably with small changes in data. This prevents clinicians from obtaining a stable feature set, which is crucial for clinical decision making. Grouping correlated variables together can improve the stability of feature selection; however, such a grouping is usually not known and needs to be estimated for optimal performance. Addressing this problem, we propose a new model that can simultaneously learn the grouping of correlated features and perform stable feature selection. We formulate the model as a constrained optimization problem and provide an efficient solution with guaranteed convergence. Our experiments with both synthetic and real-world datasets show that the proposed model is significantly more stable than Lasso and many existing state-of-the-art shrinkage and classification methods. We further show that in terms of prediction performance, the proposed method consistently outperforms Lasso and other baselines. Our model can be used for selecting stable risk factors for a variety of healthcare problems, so it can assist clinicians toward accurate decision making. Copyright © 2015 Elsevier Inc. All rights reserved.
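    The sketch below reproduces the instability that motivates the paper: with strongly correlated features, plain Lasso selects a different feature subset on each bootstrap resample. It assumes scikit-learn and illustrates the problem only, not the proposed grouping model.

```python
# Demonstration of Lasso selection instability under feature correlation:
# the selected support changes from one bootstrap resample to the next.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(2)
n = 100
base = rng.standard_normal(n)
X = np.column_stack([base + 0.05 * rng.standard_normal(n) for _ in range(5)])
y = X[:, 0] + 0.1 * rng.standard_normal(n)

for trial in range(3):
    idx = rng.integers(0, n, n)                       # bootstrap resample
    coef = Lasso(alpha=0.05).fit(X[idx], y[idx]).coef_
    print("trial", trial, "selected:", np.flatnonzero(np.abs(coef) > 1e-8))
```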

  19. Analysis of programming properties and the row-column generation method for 1-norm support vector machines.

    PubMed

    Zhang, Li; Zhou, WeiDa

    2013-12-01

    This paper deals with fast methods for training a 1-norm support vector machine (SVM). First, we define a specific class of linear programming with many sparse constraints, i.e., row-column sparse constraint linear programming (RCSC-LP). By nature, the 1-norm SVM is a type of RCSC-LP. In order to construct subproblems for RCSC-LP and solve them, a family of row-column generation (RCG) methods is introduced. RCG methods belong to a category of decomposition techniques and perform row and column generation in a parallel fashion. In particular, for the 1-norm SVM, the maximum size of the subproblems of RCG is identical to the number of support vectors (SVs). We also introduce a semi-deleting rule for RCG methods and prove the convergence of RCG methods when using the semi-deleting rule. Experimental results on toy data and real-world datasets illustrate that it is efficient to use RCG to train the 1-norm SVM, especially when the number of SVs is small. Copyright © 2013 Elsevier Ltd. All rights reserved.
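    For concreteness, the sketch below poses the full 1-norm SVM directly as a linear program via the standard split w = u - v, solved with scipy's linprog. It shows the RCSC-LP structure only; the paper's row-column generation decomposition is not implemented here.

```python
# The 1-norm SVM as a plain linear program (no row-column generation):
#   min  sum(u) + sum(v) + C * sum(xi)
#   s.t. y_i((u - v).x_i + b) + xi_i >= 1,  u, v, xi >= 0,  w = u - v.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(3)
n, d, C = 40, 5, 1.0
X = rng.standard_normal((n, d))
y = np.sign(X[:, 0] + 0.2 * rng.standard_normal(n))

c = np.concatenate([np.ones(2 * d), [0.0], C * np.ones(n)])
# Margin constraints rewritten as A_ub @ z <= -1 for z = (u, v, b, xi).
A_ub = np.hstack([-y[:, None] * X, y[:, None] * X, -y[:, None], -np.eye(n)])
bounds = [(0, None)] * (2 * d) + [(None, None)] + [(0, None)] * n
res = linprog(c, A_ub=A_ub, b_ub=-np.ones(n), bounds=bounds)
w = res.x[:d] - res.x[d:2 * d]
print("w:", np.round(w, 3), "| nonzeros:", int(np.sum(np.abs(w) > 1e-6)))
```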

  20. Bypassing the Limits of L1 Regularization: Convex Sparse Signal Processing Using Non-Convex Regularization

    NASA Astrophysics Data System (ADS)

    Parekh, Ankit

    Sparsity has become the basis of some important signal processing methods over the last ten years. Many signal processing problems (e.g., denoising, deconvolution, non-linear component analysis) can be expressed as inverse problems. Sparsity is invoked through the formulation of an inverse problem with suitably designed regularization terms. The regularization terms alone encode sparsity into the problem formulation. Often, the ℓ1 norm is used to induce sparsity, so much so that ℓ1 regularization is considered to be `modern least-squares'. The use of the ℓ1 norm as a sparsity-inducing regularizer leads to a convex optimization problem, which has several benefits: the absence of extraneous local minima and a well-developed theory of globally convergent algorithms, even for large-scale problems. Convex regularization via the ℓ1 norm, however, tends to under-estimate the non-zero values of sparse signals. In order to estimate the non-zero values more accurately, non-convex regularization is often favored over convex regularization. However, non-convex regularization generally leads to non-convex optimization, which suffers from numerous issues: convergence may be guaranteed only to a stationary point, problem-specific parameters may be difficult to set, and the solution is sensitive to the initialization of the algorithm. The first part of this thesis is aimed toward combining the benefits of non-convex regularization and convex optimization to estimate sparse signals more effectively. To this end, we propose to use parameterized non-convex regularizers with designated non-convexity and provide a range for the non-convex parameter so as to ensure that the objective function is strictly convex. By ensuring convexity of the objective function (the sum of the data-fidelity term and the non-convex regularizer), we can make use of a wide variety of convex optimization algorithms to obtain the unique global minimum reliably. The second part of this thesis proposes a non-linear signal
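    One concrete instance of a parameterized non-convex regularizer in this spirit is the firm-threshold (minimax-concave-type) penalty: for the scalar denoising problem 0.5(y - x)^2 + pen(x; lam, mu) with mu > lam > 0, the overall objective stays convex while large coefficients pass through unbiased. The sketch below, with illustrative parameter values, implements the corresponding proximity operator; it is an example of the class of regularizers discussed, not the thesis's specific construction.

```python
# Firm thresholding: the proximity operator of a minimax-concave-type
# penalty. Small inputs are zeroed, mid-range inputs are shrunk, and large
# inputs pass through unchanged (unlike soft thresholding, which biases them).
import numpy as np

def firm(y, lam, mu):
    """Firm threshold with lower threshold lam and upper threshold mu > lam."""
    mid = np.sign(y) * (np.abs(y) - lam) * mu / (mu - lam)
    out = np.where(np.abs(y) <= lam, 0.0, mid)
    return np.where(np.abs(y) > mu, y, out)

y = np.linspace(-3, 3, 7)
print(firm(y, lam=1.0, mu=2.0))   # -> [-3. -2.  0.  0.  0.  2.  3.]
```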

  1. SPECT reconstruction using DCT-induced tight framelet regularization

    NASA Astrophysics Data System (ADS)

    Zhang, Jiahan; Li, Si; Xu, Yuesheng; Schmidtlein, C. R.; Lipson, Edward D.; Feiglin, David H.; Krol, Andrzej

    2015-03-01

    Wavelet transforms have been successfully applied in many fields of image processing. Yet, to our knowledge, they have never been directly incorporated into the objective function in Emission Computed Tomography (ECT) image reconstruction. Our aim has been to investigate whether the ℓ1-norm of non-decimated discrete cosine transform (DCT) coefficients of the estimated radiotracer distribution could be effectively used as the regularization term for penalized-likelihood (PL) reconstruction, where a regularizer is used to enforce image smoothness in the reconstruction. In this study, the ℓ1-norm of the 2D DCT wavelet decomposition was used as the regularization term. The Preconditioned Alternating Projection Algorithm (PAPA), which we proposed in earlier work to solve PL reconstruction with non-differentiable regularizers, was used to solve this optimization problem. The DCT wavelet decompositions were performed on the transaxial reconstructed images. We reconstructed Monte Carlo simulated SPECT data obtained for a numerical phantom with Gaussian blobs as hot lesions and with a warm random lumpy background. Images reconstructed using the proposed method exhibited better noise suppression and improved lesion conspicuity, compared with images reconstructed using the expectation maximization (EM) algorithm with a Gaussian post filter (GPF). Also, the mean square error (MSE) was smaller, compared with EM-GPF. A critical and challenging aspect of this method was the selection of optimal parameters. In summary, our numerical experiments demonstrated that the ℓ1-norm of the DCT-induced tight framelet regularizer shows promise for SPECT image reconstruction with the PAPA method.
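    The proximal step behind such an ℓ1-DCT regularizer amounts to soft-thresholding DCT coefficients. The sketch below uses an orthonormal decimated 2D DCT for brevity; the paper itself uses a non-decimated DCT framelet inside the PAPA iterations, so this is a simplified stand-in.

```python
# Simplified prox of an l1 penalty on DCT coefficients: transform,
# soft-threshold, inverse transform. (The paper uses a non-decimated
# DCT framelet; an orthonormal DCT is used here to keep the sketch short.)
import numpy as np
from scipy.fft import dctn, idctn

def prox_l1_dct(img, thresh):
    coef = dctn(img, norm="ortho")
    coef = np.sign(coef) * np.maximum(np.abs(coef) - thresh, 0.0)
    return idctn(coef, norm="ortho")

noisy = np.random.default_rng(4).standard_normal((64, 64))
smoothed = prox_l1_dct(noisy, thresh=0.5)
print(round(noisy.std(), 3), "->", round(smoothed.std(), 3))  # energy shrinks
```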

  2. Fast and accurate matrix completion via truncated nuclear norm regularization.

    PubMed

    Hu, Yao; Zhang, Debing; Ye, Jieping; Li, Xuelong; He, Xiaofei

    2013-09-01

    Recovering a large matrix from a small subset of its entries is a challenging problem arising in many real applications, such as image inpainting and recommender systems. Many existing approaches formulate this problem as a general low-rank matrix approximation problem. Since the rank operator is nonconvex and discontinuous, most of the recent theoretical studies use the nuclear norm as a convex relaxation. One major limitation of the existing approaches based on nuclear norm minimization is that all the singular values are simultaneously minimized, and thus the rank may not be well approximated in practice. In this paper, we propose to achieve a better approximation to the rank of a matrix by the truncated nuclear norm, which is defined as the nuclear norm minus the sum of the largest few singular values. In addition, we develop a novel matrix completion algorithm by minimizing the truncated nuclear norm. We further develop three efficient iterative procedures, TNNR-ADMM, TNNR-APGL, and TNNR-ADMMAP, to solve the optimization problem. TNNR-ADMM utilizes the alternating direction method of multipliers (ADMM), while TNNR-APGL applies the accelerated proximal gradient line search method (APGL) for the final optimization. For TNNR-ADMMAP, we make use of an adaptive penalty according to a novel update rule for ADMM to achieve a faster convergence rate. Our empirical study shows encouraging results of the proposed algorithms in comparison to the state-of-the-art matrix completion algorithms on both synthetic and real visual datasets.
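    The two basic ingredients are easy to state in code: the truncated nuclear norm itself, and the singular-value shrinkage operator that appears inside the ADMM/APGL subproblems. The sketch below shows both; it is not the full TNNR iteration.

```python
# Truncated nuclear norm (nuclear norm minus the r largest singular values)
# and the singular-value shrinkage used in the inner subproblems.
import numpy as np

def truncated_nuclear_norm(X, r):
    s = np.linalg.svd(X, compute_uv=False)    # singular values, descending
    return s[r:].sum()                        # the r largest are exempt

def sv_shrink(X, tau):
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

X = np.random.default_rng(5).standard_normal((20, 15))
print(truncated_nuclear_norm(X, r=3),
      np.linalg.matrix_rank(sv_shrink(X, tau=3.0)))   # shrinkage lowers rank
```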

  3. Two conditions for equivalence of 0-norm solution and 1-norm solution in sparse representation.

    PubMed

    Li, Yuanqing; Amari, Shun-Ichi

    2010-07-01

    In sparse representation, two important sparse solutions, the 0-norm and 1-norm solutions, have been receiving much attention. The 0-norm solution is the sparsest; however, it is not easy to obtain. Although the 1-norm solution may not be the sparsest, it can be easily obtained by linear programming. In many cases, the 0-norm solution can be obtained by finding the 1-norm solution. Many discussions exist on the equivalence of the two sparse solutions. This paper analyzes two conditions for the equivalence of the two sparse solutions. The first condition is necessary and sufficient, but difficult to verify. The second condition is necessary but not sufficient, yet easy to verify. In this paper, we analyze the second condition within the stochastic framework and propose a variant. We then prove that the equivalence of the two sparse solutions holds with high probability under the variant of the second condition. Furthermore, in the limit case where the 0-norm solution is extremely sparse, the second condition is also a sufficient condition with probability 1.
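    The sketch below shows the linear-programming route to the 1-norm solution mentioned above: min ||x||_1 subject to Ax = b becomes an LP through the split x = u - v with u, v >= 0. On a random underdetermined system with a sparse ground truth, the LP typically recovers the 0-norm solution, illustrating the equivalence.

```python
# Basis pursuit as a linear program: minimize sum(u) + sum(v) = ||x||_1
# subject to A(u - v) = b, u >= 0, v >= 0.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(6)
m, n = 10, 30
A = rng.standard_normal((m, n))
x0 = np.zeros(n)
x0[[2, 7, 11]] = [1.5, -2.0, 0.7]             # sparse ground truth
b = A @ x0

res = linprog(np.ones(2 * n), A_eq=np.hstack([A, -A]), b_eq=b,
              bounds=[(0, None)] * (2 * n))
x = res.x[:n] - res.x[n:]
print("recovered support:", np.flatnonzero(np.abs(x) > 1e-6))  # -> [2 7 11]
```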

  4. Regularized minimum I-divergence methods for the inverse blackbody radiation problem

    NASA Astrophysics Data System (ADS)

    Choi, Kerkil; Lanterman, Aaron D.; Shin, Jaemin

    2006-08-01

    This paper proposes iterative methods for estimating the area temperature distribution of a blackbody from its total radiated power spectrum measurements. This is called the inverse blackbody radiation problem. This problem is inherently ill-posed due to the characteristics of the kernel in the underlying integral equation given by Planck's law. The functions involved in the problem are all non-negative. Csiszár's I-divergence is an information-theoretic discrepancy measure between two non-negative functions. We derive iterative methods for minimizing Csiszár's I-divergence between the measured power spectrum and the power spectrum arising from the estimate according to the integral equation. Due to the ill-posedness of the problem, unconstrained algorithms often produce poor estimates, especially when the measurements are corrupted by noise. To alleviate this difficulty, we apply regularization methods to our algorithms. Penalties based on Shannon's entropy, the L1-norm and Good's roughness are chosen to suppress the undesirable artefacts. When a penalty is applied, the pertinent optimization that needs to be performed at each iteration is no longer trivial. In particular, Good's roughness causes couplings between estimate components. To handle this issue, we adapt Green's one-step-late method. This choice is based on the important fact that our minimum I-divergence algorithms can be interpreted as asymptotic forms of certain expectation-maximization algorithms. The effectiveness of our methods is illustrated via various numerical experiments.

  5. Regularization of the Perturbed Spatial Restricted Three-Body Problem by L-Transformations

    NASA Astrophysics Data System (ADS)

    Poleshchikov, S. M.

    2018-03-01

    Equations of motion for the perturbed circular restricted three-body problem have been regularized in canonical variables in a moving coordinate system. Two different L-matrices of the fourth order are used in the regularization. Conditions for generalized symplecticity of the constructed transform have been checked. In the unperturbed case, the regular equations have a polynomial structure. The regular equations have been numerically integrated using the Runge-Kutta-Fehlberg method. The results of numerical experiments are given for the Earth-Moon system parameters taking into account the perturbation of the Sun for different L-matrices.

  6. Effective Alternating Direction Optimization Methods for Sparsity-Constrained Blind Image Deblurring.

    PubMed

    Xiong, Naixue; Liu, Ryan Wen; Liang, Maohan; Wu, Di; Liu, Zhao; Wu, Huisi

    2017-01-18

    Single-image blind deblurring for imaging sensors in the Internet of Things (IoT) is a challenging ill-conditioned inverse problem, which requires regularization techniques to stabilize the image restoration process. The purpose is to recover the underlying blur kernel and latent sharp image from only one blurred image. Under many degraded imaging conditions, the blur kernel could be considered not only spatially sparse, but also piecewise smooth with the support of a continuous curve. By taking advantage of the hybrid sparse properties of the blur kernel, a hybrid regularization method is proposed in this paper to robustly and accurately estimate the blur kernel. The effectiveness of the proposed blur kernel estimation method is enhanced by incorporating both the L1-norm of the kernel intensity and the squared L2-norm of the intensity derivative. Once the accurate estimation of the blur kernel is obtained, the original blind deblurring can be simplified to the direct deconvolution of blurred images. To guarantee robust non-blind deconvolution, a variational image restoration model is presented based on the L1-norm data-fidelity term and the total generalized variation (TGV) regularizer of second order. All non-smooth optimization problems related to blur kernel estimation and non-blind deconvolution are effectively handled by using the alternating direction method of multipliers (ADMM)-based numerical methods. Comprehensive experiments on both synthetic and realistic datasets have been implemented to compare the proposed method with several state-of-the-art methods. The experimental comparisons have illustrated the satisfactory imaging performance of the proposed method in terms of quantitative and qualitative evaluations.

  7. Total variation regularization for seismic waveform inversion using an adaptive primal dual hybrid gradient method

    NASA Astrophysics Data System (ADS)

    Yong, Peng; Liao, Wenyuan; Huang, Jianping; Li, Zhenchuan

    2018-04-01

    Full waveform inversion is an effective tool for recovering the properties of the Earth from seismograms. However, it suffers from local minima caused mainly by the limited accuracy of the starting model and the lack of a low-frequency component in the seismic data. Because of the high velocity contrast between salt and sediment, the relation between the waveform and velocity perturbation is strongly nonlinear. Therefore, salt inversion can easily get trapped in local minima. Since the velocity of salt is nearly constant, we can make the most of this characteristic with total variation regularization to mitigate the local minima. In this paper, we develop an adaptive primal dual hybrid gradient method to implement total variation regularization by projecting the solution onto a total variation norm constrained convex set, through which the total variation norm constraint is satisfied at every model iteration. The smooth background velocities are first inverted and the perturbations are gradually obtained by successively relaxing the total variation norm constraints. A numerical experiment projecting the BP model onto the intersection of the total variation norm and box constraints demonstrated the accuracy and efficiency of our adaptive primal dual hybrid gradient method. A workflow is designed to recover complex salt structures in the BP 2004 model and the 2D SEG/EAGE salt model, starting from a linear gradient model without using low-frequency data below 3 Hz. The salt inversion processes demonstrate that wavefield reconstruction inversion with a total variation norm and box constraints is able to overcome local minima and inverts the complex salt velocity layer by layer.

  8. Regularization of Instantaneous Frequency Attribute Computations

    NASA Astrophysics Data System (ADS)

    Yedlin, M. J.; Margrave, G. F.; Van Vorst, D. G.; Ben Horin, Y.

    2014-12-01

    We compare two different methods for computing a temporally local frequency: 1) a stabilized instantaneous frequency using the theory of the analytic signal, and 2) a temporally variant centroid (or dominant) frequency estimated from a time-frequency decomposition. The first method derives from Taner et al. (1979) as modified by Fomel (2007) and utilizes the derivative of the instantaneous phase of the analytic signal. The second method computes the power centroid (Cohen, 1995) of the time-frequency spectrum, obtained using either the Gabor or Stockwell Transform. Common to both methods is the necessity of division by a diagonal matrix, which requires appropriate regularization. We modify Fomel's (2007) method by explicitly penalizing the roughness of the estimate. Following Farquharson and Oldenburg (2004), we employ both the L-curve and GCV methods to obtain the smoothest model that fits the data in the L2 norm. Using synthetic data, quarry blasts, earthquakes and the DPRK tests, our results suggest that the optimal method depends on the data. One of the main applications for this work is the discrimination between blast events and earthquakes.

    References: Fomel, Sergey. "Local seismic attributes." Geophysics 72.3 (2007): A29-A33. Cohen, Leon. "Time Frequency Analysis: Theory and Applications." USA: Prentice Hall, 1995. Farquharson, Colin G., and Douglas W. Oldenburg. "A comparison of automatic techniques for estimating the regularization parameter in non-linear inverse problems." Geophysical Journal International 156.3 (2004): 411-425. Taner, M. Turhan, Fulton Koehler, and R. E. Sheriff. "Complex seismic trace analysis." Geophysics 44.6 (1979): 1041-1063.
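    The first method above can be sketched in a few lines: the instantaneous frequency is the derivative of the unwrapped phase of the analytic signal, divided by 2π. The sketch below assumes scipy and omits the roughness-penalized regularization of the division that the abstract describes.

```python
# Method 1 in miniature: instantaneous frequency from the analytic signal.
# The regularized (roughness-penalized) stabilization is omitted here.
import numpy as np
from scipy.signal import hilbert

fs = 500.0
t = np.arange(0, 2.0, 1.0 / fs)
x = np.sin(2 * np.pi * (10 * t + 4 * t ** 2))   # chirp: 10 Hz + 8 Hz/s sweep

phase = np.unwrap(np.angle(hilbert(x)))
f_inst = np.gradient(phase, 1.0 / fs) / (2 * np.pi)
print(round(f_inst[len(t) // 2], 1))            # ~18 Hz at t = 1 s
```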

  9. Quantum Ergodicity and L p Norms of Restrictions of Eigenfunctions

    NASA Astrophysics Data System (ADS)

    Hezari, Hamid

    2018-02-01

    We prove an analogue of Sogge's local L p estimates for L p norms of restrictions of eigenfunctions to submanifolds, and use it to show that for quantum ergodic eigenfunctions one can get improvements of the results of Burq-Gérard-Tzvetkov, Hu, and Chen-Sogge. The improvements are logarithmic on negatively curved manifolds (without boundary) and by o(1) for manifolds (with or without boundary) with ergodic geodesic flows. In the case of ergodic billiards with piecewise smooth boundary, we get o(1) improvements on L^∞ estimates of Cauchy data away from a shrinking neighborhood of the corners, and as a result using the methods of Ghosh et al., Jung and Zelditch, Jung and Zelditch, we get that the number of nodal domains of 2-dimensional ergodic billiards tends to infinity as λ \\to ∞. These results work only for a full density subsequence of any given orthonormal basis of eigenfunctions. We also present an extension of the L p estimates of Burq-Gérard-Tzvetkov, Hu, Chen-Sogge for the restrictions of Dirichlet and Neumann eigenfunctions to compact submanifolds of the interior of manifolds with piecewise smooth boundary. This part does not assume ergodicity on the manifolds.

  10. On the sparseness of 1-norm support vector machines.

    PubMed

    Zhang, Li; Zhou, Weida

    2010-04-01

    There is some empirical evidence showing that 1-norm Support Vector Machines (1-norm SVMs) have good sparseness; however, it has been unclear both how much sparseness 1-norm SVMs can achieve and whether they have a sparser representation than standard SVMs. In this paper we examine the sparseness of 1-norm SVMs. Two upper bounds on the number of nonzero coefficients in the decision function of 1-norm SVMs are presented. First, the number of nonzero coefficients in 1-norm SVMs is at most equal to the number of exact support vectors lying on the +1 and -1 discriminating surfaces, while that in standard SVMs is equal to the number of support vectors, which implies that 1-norm SVMs have better sparseness than standard SVMs. Second, the number of nonzero coefficients is at most equal to the rank of the sample matrix. A brief review of the geometry of linear programming and the primal steepest edge pricing simplex method is given, which allows us to prove the two upper bounds and evaluate their tightness by experiments. Experimental results on toy data sets and the UCI data sets illustrate our analysis. Copyright 2009 Elsevier Ltd. All rights reserved.

  11. Robust subspace clustering via joint weighted Schatten-p norm and Lq norm minimization

    NASA Astrophysics Data System (ADS)

    Zhang, Tao; Tang, Zhenmin; Liu, Qing

    2017-05-01

    Low-rank representation (LRR) has been successfully applied to subspace clustering. However, the nuclear norm in standard LRR is not optimal for approximating the rank function in many real-world applications. Meanwhile, the L21 norm in LRR also fails to characterize various noises properly. To address the above issues, we propose an improved LRR method, which achieves the low-rank property via a new formulation with a weighted Schatten-p norm and Lq norm (WSPQ). Specifically, the nuclear norm is generalized to the Schatten-p norm and different weights are assigned to the singular values, so that the rank function can be approximated more accurately. In addition, the Lq norm is further incorporated into WSPQ to model different noises and improve robustness. An efficient algorithm based on the inexact augmented Lagrange multiplier method is designed for the formulated problem. Extensive experiments on face clustering and motion segmentation clearly demonstrate the superiority of the proposed WSPQ over several state-of-the-art methods.

  12. Fast Algorithms for Earth Mover Distance Based on Optimal Transport and L1 Regularization II

    DTIC Science & Technology

    2016-09-01

    of optimal transport, the EMD problem can be reformulated as a familiar L1 minimization. We use a regularization which gives us a unique solution for...plays a central role in many applications, including image processing, computer vision and statistics etc. [13, 17, 20, 24]. The EMD is a metric defined

  13. A unified framework for penalized statistical muon tomography reconstruction with edge preservation priors of lp norm type

    NASA Astrophysics Data System (ADS)

    Yu, Baihui; Zhao, Ziran; Wang, Xuewu; Wu, Dufan; Zeng, Zhi; Zeng, Ming; Wang, Yi; Cheng, Jianping

    2016-01-01

    The Tsinghua University MUon Tomography facilitY (TUMUTY) has been built and is used to reconstruct special objects with complex structures. Since fine images are required, the conventional Maximum Likelihood Scattering and Displacement (MLSD) algorithm is employed. However, due to the statistical characteristics of muon tomography and the incompleteness of the data, the reconstruction is always unstable and accompanied by severe noise. In this paper, we propose a Maximum a Posteriori (MAP) algorithm for muon tomography regularization, in which an edge-preserving prior on the scattering density image is introduced into the objective function. The prior takes the lp norm (p>0) of the image gradient magnitude, where p=1 and p=2 correspond to the well-known total-variation (TV) and Gaussian priors, respectively. The optimization transfer principle is utilized to minimize the objective function in a unified framework. At each iteration the problem is transferred to solving a cubic equation through paraboloidal surrogates. To validate the method, the French Test Object (FTO) was imaged by both numerical simulation and TUMUTY. The proposed algorithm was used for the reconstruction, where different norms were studied in detail, including l2, l1, l0.5, and an l2-0.5 mixture norm. Compared with the MLSD method, MAP achieves better image quality in both structure preservation and noise reduction. Furthermore, compared with previous work in which a one-dimensional image was acquired, we achieve relatively clear three-dimensional images of the FTO, where the inner air hole and the tungsten shell are visible.

  14. Joint L2,1 Norm and Fisher Discrimination Constrained Feature Selection for Rational Synthesis of Microporous Aluminophosphates.

    PubMed

    Qi, Miao; Wang, Ting; Yi, Yugen; Gao, Na; Kong, Jun; Wang, Jianzhong

    2017-04-01

    Feature selection has been regarded as an effective tool to help researchers understand the generating process of data. For mining the synthesis mechanism of microporous AlPOs, this paper proposes a novel feature selection method with joint l2,1-norm and Fisher discrimination constraints (JNFDC). In order to obtain a more effective feature subset, the proposed method proceeds in two steps. The first step ranks the features according to sparse and discriminative constraints. The second step establishes a predictive model with the ranked features and selects the most significant features according to their contribution to improving the predictive accuracy. To the best of our knowledge, JNFDC is the first work that employs sparse representation theory to explore the synthesis mechanism of six kinds of pore rings. Numerical simulations demonstrate that our proposed method can select significant features affecting the specified structural property and improve the predictive accuracy. Moreover, comparison results show that JNFDC obtains better predictive performance than some other state-of-the-art feature selection methods. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
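    For reference, the l2,1 norm that drives the sparsity constraint is simply the sum of the l2 norms of the matrix rows; penalizing it zeroes out entire rows, i.e., whole descriptors, at once. A minimal sketch:

```python
# The l2,1 norm: sum of row-wise l2 norms. Penalizing it induces row
# sparsity, so a feature is kept or dropped for all outputs jointly.
import numpy as np

def l21_norm(W):
    return np.linalg.norm(W, axis=1).sum()

W = np.array([[0.0, 0.0, 0.0],    # feature 1: dropped entirely
              [1.0, -2.0, 0.5],   # feature 2: kept for all outputs
              [0.3, 0.0, 0.1]])
print(l21_norm(W))
```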

  15. Regularizing portfolio optimization

    NASA Astrophysics Data System (ADS)

    Still, Susanne; Kondor, Imre

    2010-07-01

    The optimization of large portfolios displays an inherent instability due to estimation error. This poses a fundamental problem, because solutions that are not stable under sample fluctuations may look optimal for a given sample, but are, in effect, very far from optimal with respect to the average risk. In this paper, we approach the problem from the point of view of statistical learning theory. The occurrence of the instability is intimately related to over-fitting, which can be avoided using known regularization methods. We show how regularized portfolio optimization with the expected shortfall as a risk measure is related to support vector regression. The budget constraint dictates a modification. We present the resulting optimization problem and discuss the solution. The L2 norm of the weight vector is used as a regularizer, which corresponds to a diversification 'pressure'. This means that diversification, besides counteracting downward fluctuations in some assets by upward fluctuations in others, is also crucial because it improves the stability of the solution. The approach we provide here allows for the simultaneous treatment of optimization and diversification in one framework that enables the investor to trade off between the two, depending on the size of the available dataset.

  16. Improved l1-SPIRiT using 3D walsh transform-based sparsity basis.

    PubMed

    Feng, Zhen; Liu, Feng; Jiang, Mingfeng; Crozier, Stuart; Guo, He; Wang, Yuxin

    2014-09-01

    l1-SPIRiT is a fast magnetic resonance imaging (MRI) method which combines parallel imaging (PI) with compressed sensing (CS) by performing a joint l1-norm and l2-norm optimization procedure. The original l1-SPIRiT method uses two-dimensional (2D) Wavelet transform to exploit the intra-coil data redundancies and a joint sparsity model to exploit the inter-coil data redundancies. In this work, we propose to stack all the coil images into a three-dimensional (3D) matrix, and then a novel 3D Walsh transform-based sparsity basis is applied to simultaneously reduce the intra-coil and inter-coil data redundancies. Both the 2D Wavelet transform-based and the proposed 3D Walsh transform-based sparsity bases were investigated in the l1-SPIRiT method. The experimental results show that the proposed 3D Walsh transform-based l1-SPIRiT method outperformed the original l1-SPIRiT in terms of image quality and computational efficiency. Copyright © 2014 Elsevier Inc. All rights reserved.

  17. Second Language Perception and Production of English Regular Past Tense: L1 Influence in Phonology and Morphosyntax

    ERIC Educational Resources Information Center

    Chen, Wen-Hsin

    2016-01-01

    The goal of this study is to provide a better understanding of the influence from first language (L1) phonology and morphosyntax on second language (L2) production and perception of English regular past tense morphology. (Abstract shortened by ProQuest.) [The dissertation citations contained here are published with the permission of ProQuest LLC.…

  18. Bilateral filter regularized accelerated Demons for improved discontinuity preserving registration.

    PubMed

    Demirović, D; Šerifović-Trbalić, A; Prljača, N; Cattin, Ph C

    2015-03-01

    The classical accelerated Demons algorithm uses Gaussian smoothing to penalize oscillatory motion in the displacement fields during registration. This well-known method uses the L2 norm for regularization. Whereas the L2 norm is known for producing well-behaved smooth deformation fields, it cannot properly deal with discontinuities often seen in the deformation field, as the regularizer cannot differentiate between discontinuities and the smooth part of the motion field. In this paper we propose replacing the Gaussian filter of the accelerated Demons algorithm with a bilateral filter. In contrast, the bilateral filter uses information not only from the displacement field but also from the image intensities. In this way we can smooth the motion field depending on image content, as opposed to the classical Gaussian filtering. By proper adjustment of two tunable parameters one can obtain more realistic deformations in the case of discontinuities. The proposed approach was tested on 2D and 3D datasets and showed significant improvements in the Target Registration Error (TRE) for the well-known POPI dataset. Despite the increased computational complexity, the improved registration result is justified, in particular for abdominal data sets where discontinuities often appear due to sliding organ motion. Copyright © 2014 Elsevier Ltd. All rights reserved.

  19. Predictive sparse modeling of fMRI data for improved classification, regression, and visualization using the k-support norm.

    PubMed

    Belilovsky, Eugene; Gkirtzou, Katerina; Misyrlis, Michail; Konova, Anna B; Honorio, Jean; Alia-Klein, Nelly; Goldstein, Rita Z; Samaras, Dimitris; Blaschko, Matthew B

    2015-12-01

    We explore various sparse regularization techniques for analyzing fMRI data, such as the ℓ1 norm (often called LASSO in the context of a squared loss function), elastic net, and the recently introduced k-support norm. Employing sparsity regularization allows us to handle the curse of dimensionality, a problem commonly found in fMRI analysis. In this work we consider sparse regularization in both the regression and classification settings. We perform experiments on fMRI scans from cocaine-addicted as well as healthy control subjects. We show that in many cases, use of the k-support norm leads to better predictive performance, solution stability, and interpretability as compared to other standard approaches. We additionally analyze the advantages of using the absolute loss function versus the standard squared loss which leads to significantly better predictive performance for the regularization methods tested in almost all cases. Our results support the use of the k-support norm for fMRI analysis and on the clinical side, the generalizability of the I-RISA model of cocaine addiction. Copyright © 2015 Elsevier Ltd. All rights reserved.

  20. Whole head quantitative susceptibility mapping using a least-norm direct dipole inversion method.

    PubMed

    Sun, Hongfu; Ma, Yuhan; MacDonald, M Ethan; Pike, G Bruce

    2018-06-15

    A new dipole field inversion method for whole head quantitative susceptibility mapping (QSM) is proposed. Instead of performing background field removal and local field inversion sequentially, the proposed method performs dipole field inversion directly on the total field map in a single step. To aid this under-determined and ill-posed inversion process and obtain robust QSM images, Tikhonov regularization is implemented to seek the local susceptibility solution with the least-norm (LN) using the L-curve criterion. The proposed LN-QSM does not require brain edge erosion, thereby preserving the cerebral cortex in the final images. This should improve its applicability for QSM-based cortical grey matter measurement, functional imaging and venography of full brain. Furthermore, LN-QSM also enables susceptibility mapping of the entire head without the need for brain extraction, which makes QSM reconstruction more automated and less dependent on intermediate pre-processing methods and their associated parameters. It is shown that the proposed LN-QSM method reduced errors in a numerical phantom simulation, improved accuracy in a gadolinium phantom experiment, and suppressed artefacts in nine subjects, as compared to two-step and other single-step QSM methods. Measurements of deep grey matter and skull susceptibilities from LN-QSM are consistent with established reconstruction methods. Copyright © 2018 Elsevier Inc. All rights reserved.

  1. Cancer survival analysis using semi-supervised learning method based on Cox and AFT models with L1/2 regularization.

    PubMed

    Liang, Yong; Chai, Hua; Liu, Xiao-Ying; Xu, Zong-Ben; Zhang, Hai; Leung, Kwong-Sak

    2016-03-01

    One of the most important objectives of clinical cancer research is to diagnose cancer more accurately based on patients' gene expression profiles. Both the Cox proportional hazards model (Cox) and the accelerated failure time model (AFT) have been widely adopted for high-risk and low-risk classification or survival time prediction in patients' clinical treatment. Nevertheless, two main dilemmas limit the accuracy of these prediction methods. One is that the small sample size and censored data remain a bottleneck for training robust and accurate Cox classification models. In addition, tumours with similar phenotypes and prognoses may actually be completely different diseases at the genotype and molecular level. Thus, the utility of the AFT model for survival time prediction is limited when such biological differences of the diseases have not been previously identified. To overcome these two main dilemmas, we proposed a novel semi-supervised learning method based on the Cox and AFT models to accurately predict the treatment risk and the survival time of patients. Moreover, we adopted the efficient L1/2 regularization approach in the semi-supervised learning method to select the relevant genes, which are significantly associated with the disease. The results of the simulation experiments show that the semi-supervised learning model can significantly improve the predictive performance of the Cox and AFT models in survival analysis. The proposed procedures have been successfully applied to four real microarray gene expression and artificial evaluation datasets. The advantages of our proposed semi-supervised learning method include: 1) a significant increase in the available training samples from censored data; 2) a high capability for identifying the survival risk classes of patients in the Cox model; 3) high predictive accuracy for patients' survival time in the AFT model; 4) a strong capability for relevant biomarker selection. Consequently, our proposed semi

  2. An iterative algorithm for L1-TV constrained regularization in image restoration

    NASA Astrophysics Data System (ADS)

    Chen, K.; Loli Piccolomini, E.; Zama, F.

    2015-11-01

    We consider the problem of restoring blurred images affected by impulsive noise. The adopted method restores the images by solving a sequence of constrained minimization problems where the data fidelity function is the ℓ1 norm of the residual and the constraint, chosen as the image Total Variation, is automatically adapted to improve the quality of the restored images. Although this approach is general, we report here the case of vectorial images where the blurring model involves contributions from the different image channels (cross channel blur). A computationally convenient extension of the Total Variation function to vectorial images is used and the results reported show that this approach is efficient for recovering nearly optimal images.

  3. Block matching sparsity regularization-based image reconstruction for incomplete projection data in computed tomography

    NASA Astrophysics Data System (ADS)

    Cai, Ailong; Li, Lei; Zheng, Zhizhong; Zhang, Hanming; Wang, Linyuan; Hu, Guoen; Yan, Bin

    2018-02-01

    In medical imaging, many conventional regularization methods, such as total variation or total generalized variation, impose strong prior assumptions which can only account for very limited classes of images. A more reasonable sparse representation framework for images is still badly needed. Visually understandable images contain meaningful patterns, and combinations or collections of these patterns can be utilized to form sparse and redundant representations which promise to facilitate image reconstruction. In this work, we propose and study block matching sparsity regularization (BMSR) and devise an optimization program using BMSR for computed tomography (CT) image reconstruction from an incomplete projection set. The program is built as a constrained optimization, minimizing the L1-norm of the coefficients of the image in the transformed domain subject to data observation and positivity of the image itself. To solve the program efficiently, a practical method based on the proximal point algorithm is developed and analyzed. In order to accelerate the convergence rate, a practical strategy for tuning the BMSR parameter is proposed and applied. Experimental results for various settings, including real CT scanning, have verified that the proposed reconstruction method shows promising capabilities compared with conventional regularization.

  4. Nonconvex Sparse Logistic Regression With Weakly Convex Regularization

    NASA Astrophysics Data System (ADS)

    Shen, Xinyue; Gu, Yuantao

    2018-06-01

    In this work we propose to fit a sparse logistic regression model via a weakly convex regularized nonconvex optimization problem. The idea is based on the finding that a weakly convex function, as an approximation of the $\\ell_0$ pseudo norm, is able to better induce sparsity than the commonly used $\\ell_1$ norm. For a class of weakly convex sparsity-inducing functions, we prove the nonconvexity of the corresponding sparse logistic regression problem, and study its local optimality conditions and the choice of the regularization parameter to exclude trivial solutions. Despite the nonconvexity, a method based on proximal gradient descent is used to solve the general weakly convex sparse logistic regression, and its convergence behavior is studied theoretically. Then the general framework is applied to a specific weakly convex function, and a necessary and sufficient local optimality condition is provided. The solution method is instantiated in this case as an iterative firm-shrinkage algorithm, and its effectiveness is demonstrated in numerical experiments on both randomly generated and real datasets.

  5. An adaptive regularization parameter choice strategy for multispectral bioluminescence tomography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Feng Jinchao; Qin Chenghu; Jia Kebin

    2011-11-15

    Purpose: Bioluminescence tomography (BLT) provides an effective tool for monitoring physiological and pathological activities in vivo. However, the measured data in bioluminescence imaging are corrupted by noise. Therefore, regularization methods are commonly used to find a regularized solution. Nevertheless, for the quality of the reconstructed bioluminescent source obtained by regularization methods, the choice of the regularization parameters is crucial. To date, the selection of regularization parameters remains challenging. With regard to the above problems, the authors proposed a BLT reconstruction algorithm with an adaptive parameter choice rule. Methods: The proposed reconstruction algorithm uses a diffusion equation for modeling the bioluminescent photon transport. The diffusion equation is solved with a finite element method. Computed tomography (CT) images provide anatomical information regarding the geometry of the small animal and its internal organs. To reduce the ill-posedness of BLT, spectral information and the optimal permissible source region are employed. Then, the relationship between the unknown source distribution and multiview and multispectral boundary measurements is established based on the finite element method and the optimal permissible source region. Since the measured data are noisy, the BLT reconstruction is formulated with an l2 data fidelity term and a general regularization term. When choosing the regularization parameters for BLT, an efficient model function approach is proposed, which does not require knowledge of the noise level. This approach only requires the computation of the residual and the regularized solution norm. With this knowledge, we construct the model function to approximate the objective function, and the regularization parameter is updated iteratively. Results: First, the micro-CT based mouse phantom was used for simulation verification. Simulation experiments were used to illustrate why multispectral data

  6. Joint Smoothed l₀-Norm DOA Estimation Algorithm for Multiple Measurement Vectors in MIMO Radar.

    PubMed

    Liu, Jing; Zhou, Weidong; Juwono, Filbert H

    2017-05-08

    Direction-of-arrival (DOA) estimation is usually confronted with a multiple measurement vector (MMV) case. In this paper, a novel fast sparse DOA estimation algorithm, named the joint smoothed l0-norm algorithm, is proposed for multiple measurement vectors in multiple-input multiple-output (MIMO) radar. To eliminate white or colored Gaussian noise, the new method first obtains a low-complexity data matrix based on high-order cumulants. Then, the proposed algorithm designs a joint smoothed function tailored for the MMV case, based on which a joint smoothed l0-norm sparse representation framework is constructed. Finally, for the MMV-based joint smoothed function, the corresponding gradient-based sparse signal reconstruction is designed, and thus the DOA estimation can be achieved. The proposed method is a fast sparse representation algorithm, which can solve the MMV problem and perform well for both white and colored Gaussian noise. The proposed joint algorithm is about two orders of magnitude faster than the l1-norm minimization based methods, such as l1-SVD (singular value decomposition), RV (real-valued) l1-SVD and RV l1-SRACV (sparse representation array covariance vectors), and achieves better DOA estimation performance.
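    The core smoothed-l0 idea can be sketched for a single measurement vector: replace the l0 count by a Gaussian surrogate, ascend it with small gradient steps, project back onto the data constraint, and anneal the width sigma. The sketch below follows the classic SL0 recipe; the paper's cumulant preprocessing and joint MMV smoothing are omitted.

```python
# Single-vector smoothed-l0 (classic SL0 flavor): anneal sigma while taking
# surrogate-gradient steps and projecting back onto {x : Ax = b}.
import numpy as np

rng = np.random.default_rng(7)
m, n = 15, 40
A = rng.standard_normal((m, n))
x0 = np.zeros(n)
x0[[3, 20, 33]] = [1.0, -1.5, 2.0]
b = A @ x0

A_pinv = np.linalg.pinv(A)
x = A_pinv @ b                                    # minimum-l2 starting point
for sigma in 2.0 * 0.7 ** np.arange(15):          # annealing schedule
    for _ in range(5):
        x = x - 0.8 * x * np.exp(-x ** 2 / (2 * sigma ** 2))  # surrogate step
        x = x - A_pinv @ (A @ x - b)              # project onto Ax = b
print("support:", np.flatnonzero(np.abs(x) > 0.1))   # -> [ 3 20 33 ]
```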

  7. Age specific serum anti-Müllerian hormone levels in 1,298 Korean women with regular menstruation

    PubMed Central

    Yoo, Ji Hee; Cha, Sun Wha; Park, Chan Woo; Yang, Kwang Moon; Song, In Ok; Koong, Mi Kyoung; Kang, Inn Soo

    2011-01-01

    Objective To determine age-specific serum anti-Müllerian hormone (AMH) reference values in Korean women with regular menstruation. Methods Between May 2010 and January 2011, serum AMH levels were evaluated in a total of 1,298 women aged between 20 and 50 years with regular menstrual cycles. Women were classified into 6 categories by age: 20-31 years, 32-34 years, 35-37 years, 38-40 years, 41-43 years, and above 43 years. Serum AMH was measured by a commercial enzyme-linked immunoassay. Results The serum AMH levels correlated negatively with age. The median AMH level of each age group was 4.20 ng/mL, 3.70 ng/mL, 2.60 ng/mL, 1.50 ng/mL, 1.30 ng/mL, and 0.60 ng/mL, respectively. The AMH values in the lower 5th percentile of each age group were 1.19 ng/mL, 0.60 ng/mL, 0.42 ng/mL, 0.27 ng/mL, 0.14 ng/mL, and 0.10 ng/mL, respectively. Conclusion This study determined reference values of serum AMH in Korean women with regular menstruation. These values can be applied to the clinical evaluation and treatment of infertile women. PMID:22384425

  8. Vector-valued Lizorkin-Triebel spaces and sharp trace theory for functions in Sobolev spaces with mixed \\pmb{L_p}-norm for parabolic problems

    NASA Astrophysics Data System (ADS)

    Weidemaier, P.

    2005-06-01

    The trace problem on the hypersurface y_n=0 is investigated for a function u=u(y,t) \\in L_q(0,T;W_{\\underline p}^{\\underline m}(\\mathbb R_+^n)) with \\partial_t u \\in L_q(0,T; L_{\\underline p}(\\mathbb R_+^n)), that is, Sobolev spaces with mixed Lebesgue norm L_{\\underline p,q}(\\mathbb R^n_+\\times(0,T))=L_q(0,T;L_{\\underline p}(\\mathbb R_+^n)) are considered; here \\underline p=(p_1,\\dots,p_n) is a vector and \\mathbb R^n_+=\\mathbb R^{n-1} \\times (0,\\infty). Such function spaces are useful in the context of parabolic equations. They allow, in particular, different exponents of summability in space and time. It is shown that the sharp regularity of the trace in the time variable is characterized by the Lizorkin-Triebel space F_{q,p_n}^{1-1/(p_nm_n)}(0,T;L_{\\widetilde{\\underline p}}(\\mathbb R^{n-1})), \\underline p=(\\widetilde{\\underline p},p_n). A similar result is established for first order spatial derivatives of u. These results allow one to determine the exact spaces for the data in the inhomogeneous Dirichlet and Neumann problems for parabolic equations of the second order if the solution is in the space L_q(0,T; W_p^2(\\Omega)) \\cap W_q^1(0,T;L_p(\\Omega)) with p \\le q.

  9. Iterative Correction Scheme Based on Discrete Cosine Transform and L1 Regularization for Fluorescence Molecular Tomography With Background Fluorescence.

    PubMed

    Zhang, Jiulou; Shi, Junwei; Guang, Huizhi; Zuo, Simin; Liu, Fei; Bai, Jing; Luo, Jianwen

    2016-06-01

    High-intensity background fluorescence is generally encountered in fluorescence molecular tomography (FMT), because of the accumulation of fluorescent probes in nontarget tissues or the existence of autofluorescence in biological tissues. The reconstruction results are affected or even distorted by the background fluorescence, especially when the distribution of fluorescent targets is relatively sparse. The purpose of this paper is to reduce the negative effect of background fluorescence on FMT reconstruction. After each iteration of the Tikhonov regularization algorithm, a 3-D discrete cosine transform is adopted to filter the intermediate results. Then, a sparsity constraint step based on L1 regularization is applied to restrain the energy of the objective function. Phantom experiments with different fluorescence intensities of homogeneous and heterogeneous background are carried out to validate the performance of the proposed scheme. The results show that the reconstruction quality can be improved with the proposed iterative correction scheme. The influence of background fluorescence in FMT can be reduced effectively because of the filtering of the intermediate results and the detail preservation and noise suppression of L1 regularization.

  10. Geometric artifacts reduction for cone-beam CT via L0-norm minimization without dedicated phantoms.

    PubMed

    Gong, Changcheng; Cai, Yufang; Zeng, Li

    2018-01-01

    For cone-beam computed tomography (CBCT), transversal shifts of the rotation center exist inevitably, which will result in geometric artifacts in CT images. In this work, we propose a novel geometric calibration method for CBCT, which can also be used in micro-CT. The symmetry property of the sinogram is used for the first calibration, and then the L0-norm of the gradient image of the reconstructed image is used as the cost function to be minimized for the second calibration. An iterative search method is adopted to pursue the local minimum of the L0-norm minimization problem. The transversal shift value is updated with a prescribed step size within a search range determined by the first calibration. In addition, a graphics processing unit (GPU)-based FDK algorithm and acceleration techniques are designed to accelerate the calibration process of the presented method. In simulation experiments, the mean absolute difference (MAD) and the standard deviation (SD) of the transversal shift value were less than 0.2 pixels between the noise-free and noisy projection images, indicating highly accurate calibration with the new method. In real data experiments, the smaller entropies of the corrected images also indicated that higher-resolution images were acquired using the corrected projection data and that textures were well preserved. The study results also support the feasibility of applying the proposed method to other imaging modalities.

  11. Laplace Inversion of Low-Resolution NMR Relaxometry Data Using Sparse Representation Methods

    PubMed Central

    Berman, Paula; Levi, Ofer; Parmet, Yisrael; Saunders, Michael; Wiesman, Zeev

    2013-01-01

    Low-resolution nuclear magnetic resonance (LR-NMR) relaxometry is a powerful tool that can be harnessed for characterizing constituents in complex materials. Conversion of the relaxation signal into a continuous distribution of relaxation components is an ill-posed inverse Laplace transform problem. The most common numerical method implemented today for dealing with this kind of problem is based on L2-norm regularization. However, sparse representation methods via L1 regularization and convex optimization are a relatively new approach for effective analysis and processing of digital images and signals. In this article, we present a numerical optimization method for analyzing LR-NMR data that includes non-negativity constraints and L1 regularization and applies the convex optimization solver PDCO, a primal-dual interior method for convex objectives that allows general linear constraints to be treated as linear operators. The integrated approach includes validation of analyses by simulations, testing repeatability of experiments, and validation of the model and its statistical assumptions. The proposed method provides better resolved and more accurate solutions when compared with those suggested by existing tools. © 2013 Wiley Periodicals, Inc. Concepts Magn Reson Part A 42A: 72–88, 2013. PMID:23847452
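    A useful detail is that under a non-negativity constraint the L1 penalty reduces to a linear term, lam * sum(x), so the regularized inversion can be handled by any bound-constrained solver. The sketch below uses scipy's L-BFGS-B on a toy Laplace kernel as a stand-in for (not a reimplementation of) PDCO.

```python
# Non-negative L1-regularized inverse Laplace transform in miniature:
# with x >= 0, ||x||_1 = sum(x), giving a smooth bound-constrained problem.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(8)
T2 = np.logspace(-3, 0, 60)                  # candidate relaxation times (s)
t = np.linspace(1e-3, 1.0, 100)[:, None]
A = np.exp(-t / T2[None, :])                 # discretized Laplace kernel
x_true = np.zeros(60)
x_true[[15, 40]] = [1.0, 0.6]
b = A @ x_true + 0.01 * rng.standard_normal(100)
lam = 0.05

def f_and_grad(x):
    r = A @ x - b
    return 0.5 * r @ r + lam * x.sum(), A.T @ r + lam

res = minimize(f_and_grad, np.zeros(60), jac=True, method="L-BFGS-B",
               bounds=[(0, None)] * 60)
print("peaks near indices:", np.flatnonzero(res.x > 0.1))
```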

  12. Laplace Inversion of Low-Resolution NMR Relaxometry Data Using Sparse Representation Methods.

    PubMed

    Berman, Paula; Levi, Ofer; Parmet, Yisrael; Saunders, Michael; Wiesman, Zeev

    2013-05-01

    Low-resolution nuclear magnetic resonance (LR-NMR) relaxometry is a powerful tool that can be harnessed for characterizing constituents in complex materials. Conversion of the relaxation signal into a continuous distribution of relaxation components is an ill-posed inverse Laplace transform problem. The most common numerical method implemented today for dealing with this kind of problem is based on L2-norm regularization. However, sparse representation methods via L1 regularization and convex optimization are a relatively new approach for effective analysis and processing of digital images and signals. In this article, we present a numerical optimization method for analyzing LR-NMR data that includes non-negativity constraints and L1 regularization and applies the convex optimization solver PDCO, a primal-dual interior method for convex objectives that allows general linear constraints to be treated as linear operators. The integrated approach includes validation of analyses by simulations, testing repeatability of experiments, and validation of the model and its statistical assumptions. The proposed method provides better resolved and more accurate solutions when compared with those suggested by existing tools. © 2013 Wiley Periodicals, Inc. Concepts Magn Reson Part A 42A: 72-88, 2013.

  13. The existence results and Tikhonov regularization method for generalized mixed variational inequalities in Banach spaces

    NASA Astrophysics Data System (ADS)

    Wang, Min

    2017-06-01

    This paper aims to establish the Tikhonov regularization method for generalized mixed variational inequalities in Banach spaces. For this purpose, we first prove a very general existence result for generalized mixed variational inequalities, provided that the mapping involved has the so-called mixed variational inequality property and satisfies a rather weak coercivity condition. Finally, we establish the Tikhonov regularization method for generalized mixed variational inequalities. Our findings extend the results for the generalized variational inequality problem (for short, GVIP(F, K)) in R^n spaces (He in Abstr Appl Anal, 2012) to the generalized mixed variational inequality problem (for short, GMVIP(F,φ, K)) in reflexive Banach spaces. On the other hand, we generalize the corresponding results for the generalized mixed variational inequality problem (for short, GMVIP(F,φ,K)) in R^n spaces (Fu and He in J Sichuan Norm Univ (Nat Sci) 37:12-17, 2014) to reflexive Banach spaces.

  14. A new adaptive L1-norm for optimal descriptor selection of high-dimensional QSAR classification model for anti-hepatitis C virus activity of thiourea derivatives.

    PubMed

    Algamal, Z Y; Lee, M H

    2017-01-01

    A high-dimensional quantitative structure-activity relationship (QSAR) classification model typically contains a large number of irrelevant and redundant descriptors. In this paper, a new descriptor selection approach for QSAR classification model estimation is proposed by adding a new weight inside the L1-norm. The experimental results of classifying the anti-hepatitis C virus activity of thiourea derivatives demonstrate that the proposed descriptor selection method performs effectively and competitively compared with other existing penalized methods in terms of classification performance on both the training and the testing datasets. Moreover, it is noteworthy that the results obtained from the stability test and the applicability domain provide a robust QSAR classification model. It is evident from the results that the developed QSAR classification model could conceivably be employed for further high-dimensional QSAR classification studies.
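    A weighted L1 penalty of this kind can be implemented on top of any standard L1 solver by rescaling columns: dividing column j by weight w_j turns the plain penalty sum |beta_j| into sum w_j |beta_j|. The sketch below uses scikit-learn with weights taken from a preliminary ridge fit; both the weighting rule and the hyperparameters are illustrative, not the paper's specific design.

```python
# Adaptive (weighted) L1 logistic regression via column rescaling.
# Weights from an initial ridge fit are one common choice (an assumption
# here, not necessarily the paper's weighting).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(9)
X = rng.standard_normal((200, 50))                   # 50 candidate descriptors
y = (X[:, 0] - X[:, 1] + 0.5 * rng.standard_normal(200) > 0).astype(int)

ridge = LogisticRegression(penalty="l2", C=1.0).fit(X, y)
w = 1.0 / (np.abs(ridge.coef_.ravel()) + 1e-6)       # adaptive weights
lasso = LogisticRegression(penalty="l1", solver="liblinear",
                           C=1.0).fit(X / w, y)      # plain L1 on scaled data
beta = lasso.coef_.ravel() / w                       # back to original scale
print("selected descriptors:", np.flatnonzero(np.abs(beta) > 1e-8))
```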

  15. Intraventricular vector flow mapping—a Doppler-based regularized problem with automatic model selection

    NASA Astrophysics Data System (ADS)

    Assi, Kondo Claude; Gay, Etienne; Chnafa, Christophe; Mendez, Simon; Nicoud, Franck; Abascal, Juan F. P. J.; Lantelme, Pierre; Tournoux, François; Garcia, Damien

    2017-09-01

    We propose a regularized least-squares method for reconstructing 2D velocity vector fields within the left ventricular cavity from single-view color Doppler echocardiographic images. Vector flow mapping is formulated as a quadratic optimization problem based on an ℓ2-norm minimization of a cost function composed of a Doppler data-fidelity term and a regularizer. The latter contains three physically interpretable expressions related to 2D mass conservation, Dirichlet boundary conditions, and smoothness. A finite difference discretization of the continuous problem was adopted in a polar coordinate system, leading to a sparse symmetric positive-definite system. The three regularization parameters were determined automatically by analyzing the L-hypersurface, a generalization of the L-curve. The performance of the proposed method was numerically evaluated using (1) a synthetic flow composed of a mixture of divergence-free and curl-free flow fields and (2) simulated flow data from a patient-specific CFD (computational fluid dynamics) model of a human left heart. The numerical evaluations showed that the vector flow fields reconstructed from the Doppler components were in good agreement with the original velocities, with a relative error less than 20%. It was also demonstrated that a perturbation of the domain contour has little effect on the rebuilt velocity fields. The capability of our intraventricular vector flow mapping (iVFM) algorithm was finally illustrated on in vivo echocardiographic color Doppler data acquired in patients. The vortex that forms during the rapid filling was clearly deciphered. This improved iVFM algorithm is expected to have a significant clinical impact in the assessment of diastolic function.
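
    The core numerical step — a least-squares data term plus several quadratic regularizers leading to a sparse symmetric positive-definite system — can be sketched generically. The 1D analogue below uses hypothetical operators, not the iVFM discretization; it assembles the normal equations and solves them with a sparse direct solver.

    ```python
    import numpy as np
    from scipy.sparse import csr_matrix, diags, eye
    from scipy.sparse.linalg import spsolve

    def regularized_ls(A, b, regs):
        """Solve min ||Ax - b||^2 + sum_i lam_i * ||L_i x||^2 via normal equations.

        regs: list of (lam, L) pairs; the assembled system is sparse SPD.
        """
        A = csr_matrix(A)
        M = (A.T @ A).tocsr()
        for lam, L in regs:
            L = csr_matrix(L)
            M = M + lam * (L.T @ L)
        return spsolve(M.tocsr(), A.T @ b)

    # Hypothetical 1D analogue: identity data term + first-difference smoothness.
    n = 50
    A = eye(n, format="csr")
    x_true = np.sin(np.linspace(0, np.pi, n))
    b = x_true + 0.1 * np.random.randn(n)
    D = diags([-1.0, 1.0], [0, 1], shape=(n - 1, n))   # first-difference operator
    x_hat = regularized_ls(A, b, [(5.0, D)])
    ```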

  16. Research on image retrieval using deep convolutional neural network combining L1 regularization and PRelu activation function

    NASA Astrophysics Data System (ADS)

    QingJie, Wei; WenBin, Wang

    2017-06-01

    In this paper, image retrieval using a deep convolutional neural network combined with L1 regularization and the PReLU activation function is studied to improve retrieval accuracy. A deep convolutional neural network not only simulates the process by which the human brain receives and transmits information, but also performs convolution operations, which makes it well suited to processing images. Using a deep convolutional neural network for image retrieval works better than directly extracting visual image features. However, the structure of a deep convolutional neural network is complex, so it is prone to over-fitting, which reduces retrieval accuracy. In this paper, we combine L1 regularization and the PReLU activation function to construct a deep convolutional neural network that prevents over-fitting of the network and improves the accuracy of image retrieval.
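
    A minimal PyTorch sketch of the combination the abstract describes — PReLU activations plus an L1 weight penalty added to the training loss — is given below. The architecture and the triplet retrieval loss are placeholders; the paper's network details are not reproduced here.

    ```python
    import torch
    import torch.nn as nn

    class RetrievalCNN(nn.Module):
        """Toy CNN with PReLU activations; the architecture is illustrative only."""
        def __init__(self, n_features=64):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(3, 16, 3, padding=1), nn.PReLU(),
                nn.MaxPool2d(2),
                nn.Conv2d(16, 32, 3, padding=1), nn.PReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                nn.Linear(32, n_features),
            )
        def forward(self, x):
            return self.net(x)

    model = RetrievalCNN()
    criterion = nn.TripletMarginLoss()   # a common retrieval loss; the paper's may differ
    lam = 1e-5

    def loss_with_l1(anchor, pos, neg):
        # The L1 penalty on the weights sparsifies them, discouraging over-fitting.
        l1 = sum(p.abs().sum() for p in model.parameters())
        return criterion(model(anchor), model(pos), model(neg)) + lam * l1
    ```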

  17. l0 regularization based on a prior image incorporated non-local means for limited-angle X-ray CT reconstruction.

    PubMed

    Zhang, Lingli; Zeng, Li; Guo, Yumeng

    2018-01-01

    Restricted by the scanning environment in some CT imaging modalities, the acquired projection data are usually incomplete, which may lead to a limited-angle reconstruction problem in which image quality suffers from slope artifacts. The objective of this study is to first investigate the distorted regions of reconstructed images that exhibit slope artifacts and then present a new iterative reconstruction method to address the limited-angle X-ray CT reconstruction problem. The framework of the new method exploits the structural similarity between the prior image and the reconstructed image in order to compensate for the distorted edges. Specifically, the new method utilizes l0 regularization and wavelet tight framelets to suppress the slope artifacts and pursue sparsity. The new method comprises the following four steps: (1) address the data fidelity using SART; (2) compensate for the slope artifacts due to the missing projection data using the prior image and modified non-local means (PNLM); (3) utilize l0 regularization to suppress the slope artifacts and pursue the sparsity of the wavelet coefficients of the transformed image by iterative hard thresholding (l0W); and (4) apply an inverse wavelet transform to reconstruct the image. In summary, this method is referred to as "l0W-PNLM". Numerical implementations showed that the presented l0W-PNLM was superior in suppressing the slope artifacts while preserving the edges of some features, as compared to commercial and other popular investigative algorithms. When the image to be reconstructed is inconsistent with the prior image, the new method can avoid or minimize the distorted edges in the reconstructed images. Quantitative assessments also showed that the new method obtained the highest image quality compared to the existing algorithms. This study demonstrated that the presented l0W-PNLM yielded higher image quality due to a number of unique characteristics, which include that (1) it utilizes …
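
    Step (3), iterative hard thresholding, is the l0 workhorse here and is simple to sketch in isolation. The function below uses a generic linear operator A, not the CT system matrix; it alternates a gradient step on the data term with the hard-threshold map, which is the proximal operator of an l0 penalty.

    ```python
    import numpy as np

    def iht(A, b, tau, step=None, n_iter=300):
        """Iterative hard thresholding sketch for an l0-regularized least-squares fit."""
        if step is None:
            step = 1.0 / (2.0 * np.linalg.norm(A, 2) ** 2)   # safe step size
        x = np.zeros(A.shape[1])
        for _ in range(n_iter):
            x = x - step * 2.0 * A.T @ (A @ x - b)   # gradient step on ||Ax - b||^2
            x[np.abs(x) < tau] = 0.0                 # hard threshold = l0 proximal map
        return x
    ```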

  18. Optimal Time-decay Estimates for the Compressible Navier-Stokes Equations in the Critical L^p Framework

    NASA Astrophysics Data System (ADS)

    Danchin, Raphaël; Xu, Jiang

    2017-04-01

    The global existence issue for the isentropic compressible Navier-Stokes equations in the critical regularity framework was addressed in Danchin (Invent Math 141(3):579-614, 2000) more than 15 years ago. However, whether (optimal) time-decay rates could be shown in critical spaces has remained an open question. Here we give a positive answer to that issue not only in the L^2 critical framework of Danchin (Invent Math 141(3):579-614, 2000) but also in the general L^p critical framework of Charve and Danchin (Arch Ration Mech Anal 198(1):233-271, 2010), Chen et al. (Commun Pure Appl Math 63(9):1173-1224, 2010), and Haspot (Arch Ration Mech Anal 202(2):427-460, 2011): we show that, under a mild additional decay assumption that is satisfied if, for example, the low frequencies of the initial data are in L^{p/2}(R^d), the L^p norm (in fact the slightly stronger \dot{B}^0_{p,1} norm) of the critical global solutions decays like t^{-d(1/p - 1/4)} as t → +∞, exactly as first observed by Matsumura and Nishida (Proc Jpn Acad Ser A 55:337-342, 1979) in the case p = 2 and d = 3, for solutions with high Sobolev regularity. Our method relies on refined time-weighted inequalities in Fourier space, and is likely to be effective for other hyperbolic/parabolic systems that are encountered in fluid mechanics or mathematical physics.

  19. L1-2 minimization for exact and stable seismic attenuation compensation

    NASA Astrophysics Data System (ADS)

    Wang, Yufeng; Ma, Xiong; Zhou, Hui; Chen, Yangkang

    2018-06-01

    Frequency-dependent amplitude absorption and phase velocity dispersion are typically linked by the causality-imposed Kramers-Kronig relations, and they inevitably degrade the quality of seismic data. Seismic attenuation compensation is an important processing approach for enhancing signal resolution and fidelity; it can be performed on either pre-stack or post-stack data to mitigate the amplitude absorption and phase dispersion effects resulting from the intrinsic anelasticity of subsurface media. Inversion-based compensation with an L1-norm constraint, motivated by the sparsity of the reflectivity series, enjoys better stability than traditional inverse Q filtering. However, constrained L1 minimization, serving as the convex relaxation of the literal L0 sparsity count, may not give the sparsest solution when the kernel matrix is severely ill-conditioned. Recently, non-convex metrics for compressed sensing have attracted considerable research interest. In this paper, we propose a nearly unbiased approximation of the vector sparsity, denoted L1-2 minimization, for exact and stable seismic attenuation compensation. The non-convex L1-2 penalty function can be decomposed into two convex subproblems via the difference-of-convex algorithm, and each subproblem can be solved efficiently by the alternating direction method of multipliers. The superior performance of the proposed compensation scheme based on the L1-2 metric over the conventional L1 penalty is further demonstrated by both synthetic and field examples.
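
    A compact sketch of the L1-2 scheme under the stated difference-of-convex splitting is given below; for brevity, the convex subproblems are solved by plain proximal gradient (ISTA) instead of the ADMM solver used in the paper, and A, b stand for a generic linear model rather than the attenuation kernel.

    ```python
    import numpy as np

    def soft(x, t):
        """Soft-thresholding, the proximal map of the l1 norm."""
        return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

    def l1_minus_l2(A, b, lam, outer=20, inner=200):
        """DCA sketch for min ||Ax - b||^2 + lam * (||x||_1 - ||x||_2)."""
        L = 2.0 * np.linalg.norm(A, 2) ** 2               # gradient Lipschitz constant
        x = np.zeros(A.shape[1])
        for _ in range(outer):
            nx = np.linalg.norm(x)
            v = lam * x / nx if nx > 0 else np.zeros_like(x)  # subgradient of lam*||x||_2
            for _ in range(inner):                        # ISTA on the convex subproblem
                grad = 2.0 * A.T @ (A @ x - b) - v
                x = soft(x - grad / L, lam / L)
        return x
    ```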

  20. Image interpolation via regularized local linear regression.

    PubMed

    Liu, Xianming; Zhao, Debin; Xiong, Ruiqin; Ma, Siwei; Gao, Wen; Sun, Huifang

    2011-12-01

    The linear regression model is a very attractive tool for designing effective image interpolation schemes. Some regression-based image interpolation algorithms have been proposed in the literature, in which the objective functions are optimized by ordinary least squares (OLS). However, it is shown that interpolation with OLS may have some undesirable properties from a robustness point of view: even small amounts of outliers can dramatically affect the estimates. To address these issues, in this paper we propose a novel image interpolation algorithm based on regularized local linear regression (RLLR). Starting from the linear regression model, we replace the OLS error norm with the moving least squares (MLS) error norm, which leads to a robust estimator of local image structure. To keep the solution stable and avoid overfitting, we incorporate the l2-norm as the estimator complexity penalty. Moreover, motivated by recent progress on manifold-based semi-supervised learning, we explicitly consider the intrinsic manifold structure by making use of both measured and unmeasured data points. Specifically, our framework incorporates the geometric structure of the marginal probability distribution induced by unmeasured samples as an additional local smoothness-preserving constraint. The optimal model parameters can be obtained in closed form by solving a convex optimization problem. Experimental results on benchmark test images demonstrate that the proposed method achieves very competitive performance with the state-of-the-art interpolation algorithms, especially in image edge structure preservation. © 2011 IEEE.
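
    The closed-form core of such a scheme — a locally weighted (MLS-style) linear fit with an l2 complexity penalty — can be sketched in a few lines. The Gaussian weight and bandwidth below are illustrative, and the manifold-regularization term of RLLR is omitted.

    ```python
    import numpy as np

    def local_ridge_predict(X, y, x0, h=0.5, lam=0.1):
        """Weighted local linear fit around x0 with an l2 penalty.

        Closed form: beta = (X^T W X + lam I)^{-1} X^T W y.
        """
        Xa = np.hstack([np.ones((X.shape[0], 1)), X])          # intercept column
        w = np.exp(-np.sum((X - x0) ** 2, axis=1) / (2 * h ** 2))  # locality weights
        W = np.diag(w)
        beta = np.linalg.solve(Xa.T @ W @ Xa + lam * np.eye(Xa.shape[1]),
                               Xa.T @ W @ y)
        return np.r_[1.0, x0] @ beta                            # prediction at x0
    ```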

  1. Rational Approximations with Hankel-Norm Criterion

    DTIC Science & Technology

    1980-01-01

    … The problem is proved to be reducible to obtaining a two-variable all-pass rational function, interpolating a set of parametric values at specified points inside … (Y. Genin, Philips Research Lab.)

  2. Spectral Regularization Algorithms for Learning Large Incomplete Matrices.

    PubMed

    Mazumder, Rahul; Hastie, Trevor; Tibshirani, Robert

    2010-03-01

    We use convex relaxation techniques to provide a sequence of regularized low-rank solutions for large-scale matrix completion problems. Using the nuclear norm as a regularizer, we provide a simple and very efficient convex algorithm for minimizing the reconstruction error subject to a bound on the nuclear norm. Our algorithm Soft-Impute iteratively replaces the missing elements with those obtained from a soft-thresholded SVD. With warm starts this allows us to efficiently compute an entire regularization path of solutions on a grid of values of the regularization parameter. The computationally intensive part of our algorithm is in computing a low-rank SVD of a dense matrix. Exploiting the problem structure, we show that the task can be performed with a complexity linear in the matrix dimensions. Our semidefinite-programming algorithm is readily scalable to large matrices: for example it can obtain a rank-80 approximation of a 10^6 × 10^6 incomplete matrix with 10^5 observed entries in 2.5 hours, and can fit a rank-40 approximation to the full Netflix training set in 6.6 hours. Our methods show very good performance both in training and test error when compared to other competitive state-of-the-art techniques.
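
    Soft-Impute is easy to state in code. The sketch below is a bare-bones version for dense matrices, with a fixed regularization level, no warm starts, and a full SVD instead of the low-rank SVD the paper exploits.

    ```python
    import numpy as np

    def soft_impute(X, mask, lam, n_iter=100):
        """Soft-Impute sketch: fill missing entries, soft-threshold singular values.

        X: data matrix (arbitrary values at unobserved positions).
        mask: boolean array, True where X is observed.
        """
        Z = np.where(mask, X, 0.0)
        for _ in range(n_iter):
            # Observed entries come from X, missing ones from the current estimate.
            U, s, Vt = np.linalg.svd(np.where(mask, X, Z), full_matrices=False)
            s = np.maximum(s - lam, 0.0)            # soft-threshold the spectrum
            Z = (U * s) @ Vt                        # low-rank reconstruction
        return Z
    ```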

  3. Spectral Regularization Algorithms for Learning Large Incomplete Matrices

    PubMed Central

    Mazumder, Rahul; Hastie, Trevor; Tibshirani, Robert

    2010-01-01

    We use convex relaxation techniques to provide a sequence of regularized low-rank solutions for large-scale matrix completion problems. Using the nuclear norm as a regularizer, we provide a simple and very efficient convex algorithm for minimizing the reconstruction error subject to a bound on the nuclear norm. Our algorithm Soft-Impute iteratively replaces the missing elements with those obtained from a soft-thresholded SVD. With warm starts this allows us to efficiently compute an entire regularization path of solutions on a grid of values of the regularization parameter. The computationally intensive part of our algorithm is in computing a low-rank SVD of a dense matrix. Exploiting the problem structure, we show that the task can be performed with a complexity linear in the matrix dimensions. Our semidefinite-programming algorithm is readily scalable to large matrices: for example it can obtain a rank-80 approximation of a 10^6 × 10^6 incomplete matrix with 10^5 observed entries in 2.5 hours, and can fit a rank-40 approximation to the full Netflix training set in 6.6 hours. Our methods show very good performance both in training and test error when compared to other competitive state-of-the-art techniques. PMID:21552465

  4. Low-illumination image denoising method for wide-area search of nighttime sea surface

    NASA Astrophysics Data System (ADS)

    Song, Ming-zhu; Qu, Hong-song; Zhang, Gui-xiang; Tao, Shu-ping; Jin, Guang

    2018-05-01

    In order to suppress the complex mixed noise in low-illumination images for wide-area search of the nighttime sea surface, a model based on total variation (TV) and split Bregman is proposed in this paper. A fidelity term based on the L1 norm and a fidelity term based on the L2 norm are designed to account for the differences between noise types, and a regularizer mixing first-order and second-order TV is designed to balance the influence of detail information, such as texture and edges, in sea-surface images. The final detection result is obtained by combining, through the wavelet transform, the high-frequency component of the L1-norm solution with the low-frequency component of the L2-norm solution. The experimental results show that the proposed model denoises artificially degraded and low-illumination images effectively, and its image quality assessment scores are superior to those of the comparison models.

  5. The hypergraph regularity method and its applications

    PubMed Central

    Rödl, V.; Nagle, B.; Skokan, J.; Schacht, M.; Kohayakawa, Y.

    2005-01-01

    Szemerédi's regularity lemma asserts that every graph can be decomposed into relatively few random-like subgraphs. This random-like behavior enables one to find and enumerate subgraphs of a given isomorphism type, yielding the so-called counting lemma for graphs. The combined application of these two lemmas is known as the regularity method for graphs and has proved useful in graph theory, combinatorial geometry, combinatorial number theory, and theoretical computer science. Here, we report on recent advances in the regularity method for k-uniform hypergraphs, for arbitrary k ≥ 2. This method, purely combinatorial in nature, gives alternative proofs of density theorems originally due to E. Szemerédi, H. Furstenberg, and Y. Katznelson. Further results in extremal combinatorics also have been obtained with this approach. The two main components of the regularity method for k-uniform hypergraphs, the regularity lemma and the counting lemma, have been obtained recently: Rödl and Skokan (based on earlier work of Frankl and Rödl) generalized Szemerédi's regularity lemma to k-uniform hypergraphs, and Nagle, Rödl, and Schacht succeeded in proving a counting lemma accompanying the Rödl–Skokan hypergraph regularity lemma. The counting lemma is proved by reducing the counting problem to a simpler one previously investigated by Kohayakawa, Rödl, and Skokan. Similar results were obtained independently by W. T. Gowers, following a different approach. PMID:15919821

  6. The convergence analysis of SpikeProp algorithm with smoothing L1∕2 regularization.

    PubMed

    Zhao, Junhong; Zurada, Jacek M; Yang, Jie; Wu, Wei

    2018-07-01

    Unlike first- and second-generation artificial neural networks, spiking neural networks (SNNs) model the human brain by incorporating not only synaptic state but also a temporal component into their operating model. However, their intrinsic properties require expensive computation during training. This paper presents a novel SpikeProp algorithm for SNNs that introduces a smoothing L1/2 regularization term into the error function. The algorithm makes the network structure sparse, with some smaller weights that can eventually be removed. Meanwhile, the convergence of the algorithm is proved under some reasonable conditions. The proposed algorithm has been tested for convergence speed, convergence rate, and generalization on the classical XOR problem, the Iris problem, and Wisconsin Breast Cancer classification. Copyright © 2018 Elsevier Ltd. All rights reserved.
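
    The paper's specific smoothing construction is not given in the abstract; the sketch below illustrates one plausible form, smoothing |w| in a Huber-like way before taking the square root so that the L1/2 penalty is differentiable at zero. The function and eps are assumptions for illustration only.

    ```python
    import numpy as np

    def smoothed_l_half(w, eps=1e-3):
        """Smoothed L1/2 penalty: |w|^(1/2) with the kink at 0 smoothed out."""
        a = np.where(np.abs(w) > eps,
                     np.abs(w),
                     w ** 2 / (2 * eps) + eps / 2)   # Huber-style smoothing of |w|
        return np.sqrt(a)                             # a >= eps/2 > 0, so differentiable

    # Usage: total_loss = spikeprop_error + lam * smoothed_l_half(weights).sum()
    ```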

  7. Sparse Recovery via l1 and L1 Optimization

    DTIC Science & Technology

    2014-11-01

    … problem, with t being the descent direction, obtaining u_t = u_xx + f − (1/µ) p(u) as an evolution equation. We can hope that these L1-regularized (or …) … implementation. He considered a wide class of second-order elliptic equations and, with Friedman [14], an extension to parabolic equations. In [15, 16] … obtaining an elliptic PDE, or by gradient descent to obtain a parabolic PDE. Additionally, some PDEs can be rewritten using the L1 subgradient, such as the …

  8. A LSQR-type method provides a computationally efficient automated optimal choice of regularization parameter in diffuse optical tomography.

    PubMed

    Prakash, Jaya; Yalavarthy, Phaneendra K

    2013-03-01

    The aim of this work is to develop a computationally efficient, automated method for the optimal choice of regularization parameter in diffuse optical tomography. The least-squares QR (LSQR)-type method that uses Lanczos bidiagonalization is known to be computationally efficient for performing the reconstruction procedure in diffuse optical tomography. It is deployed here within an optimization procedure that uses the simplex method to find the optimal regularization parameter. The proposed LSQR-type method is compared with traditional methods such as the L-curve, generalized cross-validation (GCV), and the recently proposed minimal residual method (MRM)-based choice of regularization parameter, using numerical and experimental phantom data. The results indicate that the LSQR-type and MRM-based methods perform similarly in terms of reconstructed image quality and are superior to the L-curve and GCV-based methods. The proposed method's computational complexity is at least five times lower than that of the MRM-based method, making it the preferable technique. The LSQR-type method thus overcomes the computationally expensive nature of the MRM-based automated search for the optimal regularization parameter in diffuse optical tomographic imaging, making it more suitable for real-time deployment.

  9. A new weak Galerkin finite element method for elliptic interface problems

    DOE PAGES

    Mu, Lin; Wang, Junping; Ye, Xiu; ...

    2016-08-26

    We introduce and analyze a new weak Galerkin (WG) finite element method in this paper for solving second order elliptic equations with discontinuous coefficients and interfaces. Compared with the existing WG algorithm for solving the same type of problems, the present WG method has a simpler variational formulation and fewer unknowns. Moreover, the new WG algorithm allows the use of finite element partitions consisting of general polytopal meshes and can be easily generalized to high orders. Optimal order error estimates in both H1 and L2 norms are established for the present WG finite element solutions. We conducted extensive numerical experiments in order to examine the accuracy, flexibility, and robustness of the proposed WG interface approach. In solving regular elliptic interface problems, high order convergences are numerically confirmed by using piecewise polynomial basis functions of high degrees. Moreover, the WG method is shown to be able to accommodate very complicated interfaces, due to its flexibility in choosing finite element partitions. Finally, in dealing with challenging problems with low regularities, the piecewise linear WG method is capable of delivering a second order of accuracy in the L∞ norm for both C1 and H2 continuous solutions.

  10. A new weak Galerkin finite element method for elliptic interface problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mu, Lin; Wang, Junping; Ye, Xiu

    We introduce and analyze a new weak Galerkin (WG) finite element method in this paper for solving second order elliptic equations with discontinuous coefficients and interfaces. Compared with the existing WG algorithm for solving the same type of problems, the present WG method has a simpler variational formulation and fewer unknowns. Moreover, the new WG algorithm allows the use of finite element partitions consisting of general polytopal meshes and can be easily generalized to high orders. Optimal order error estimates in both H1 and L2 norms are established for the present WG finite element solutions. We conducted extensive numerical experiments in order to examine the accuracy, flexibility, and robustness of the proposed WG interface approach. In solving regular elliptic interface problems, high order convergences are numerically confirmed by using piecewise polynomial basis functions of high degrees. Moreover, the WG method is shown to be able to accommodate very complicated interfaces, due to its flexibility in choosing finite element partitions. Finally, in dealing with challenging problems with low regularities, the piecewise linear WG method is capable of delivering a second order of accuracy in the L∞ norm for both C1 and H2 continuous solutions.

  11. Regularization Methods for High-Dimensional Instrumental Variables Regression With an Application to Genetical Genomics

    PubMed Central

    Lin, Wei; Feng, Rui; Li, Hongzhe

    2014-01-01

    In genetical genomics studies, it is important to jointly analyze gene expression data and genetic variants in exploring their associations with complex traits, where the dimensionality of gene expressions and genetic variants can both be much larger than the sample size. Motivated by such modern applications, we consider the problem of variable selection and estimation in high-dimensional sparse instrumental variables models. To overcome the difficulty of high dimensionality and unknown optimal instruments, we propose a two-stage regularization framework for identifying and estimating important covariate effects while selecting and estimating optimal instruments. The methodology extends the classical two-stage least squares estimator to high dimensions by exploiting sparsity using sparsity-inducing penalty functions in both stages. The resulting procedure is efficiently implemented by coordinate descent optimization. For the representative L1 regularization and a class of concave regularization methods, we establish estimation, prediction, and model selection properties of the two-stage regularized estimators in the high-dimensional setting where the dimensionality of covariates and instruments are both allowed to grow exponentially with the sample size. The practical performance of the proposed method is evaluated by simulation studies and its usefulness is illustrated by an analysis of mouse obesity data. Supplementary materials for this article are available online. PMID:26392642
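
    The two-stage idea with L1 penalties in both stages can be sketched directly with off-the-shelf lasso solvers. The regularization levels below are illustrative, and plain lasso stands in for the paper's more general concave penalties.

    ```python
    import numpy as np
    from sklearn.linear_model import Lasso

    def two_stage_l1_iv(Z, X, y, alpha1=0.1, alpha2=0.1):
        """Two-stage L1-regularized IV sketch.

        Stage 1: lasso of each endogenous regressor on the instruments Z.
        Stage 2: lasso of the outcome y on the fitted (instrumented) regressors.
        """
        X_hat = np.column_stack([
            Lasso(alpha=alpha1).fit(Z, X[:, j]).predict(Z)
            for j in range(X.shape[1])
        ])
        return Lasso(alpha=alpha2).fit(X_hat, y)
    ```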

  12. Autoregressive model in the Lp norm space for EEG analysis.

    PubMed

    Li, Peiyang; Wang, Xurui; Li, Fali; Zhang, Rui; Ma, Teng; Peng, Yueheng; Lei, Xu; Tian, Yin; Guo, Daqing; Liu, Tiejun; Yao, Dezhong; Xu, Peng

    2015-01-30

    The autoregressive (AR) model is widely used in electroencephalogram (EEG) analyses such as waveform fitting, spectrum estimation, and system identification. In real applications, EEGs are inevitably contaminated with unexpected outlier artifacts, and this must be overcome. However, most of the current AR models are based on the L2 norm structure, which exaggerates the outlier effect due to the square property of the L2 norm. In this paper, a novel AR object function is constructed in the Lp (p≤1) norm space with the aim to compress the outlier effects on EEG analysis, and a fast iteration procedure is developed to solve this new AR model. The quantitative evaluation using simulated EEGs with outliers proves that the proposed Lp (p≤1) AR can estimate the AR parameters more robustly than the Yule-Walker, Burg and LS methods, under various simulated outlier conditions. The actual application to the resting EEG recording with ocular artifacts also demonstrates that Lp (p≤1) AR can effectively address the outliers and recover a resting EEG power spectrum that is more consistent with its physiological basis. Copyright © 2014 Elsevier B.V. All rights reserved.
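
    A standard way to realize an Lp (p ≤ 1) fit is iteratively reweighted least squares, which may differ in detail from the paper's iteration but conveys the mechanism: residual-dependent weights |r|^(p-2) suppress the influence of outliers relative to an L2 fit.

    ```python
    import numpy as np

    def ar_fit_lp(x, order, p=1.0, n_iter=50, eps=1e-6):
        """IRLS sketch for AR coefficient fitting under an Lp (p<=1) residual norm."""
        n = len(x)
        # Row t predicts x[t] from the `order` preceding samples.
        A = np.column_stack([x[order - k - 1:n - k - 1] for k in range(order)])
        b = x[order:]
        a = np.linalg.lstsq(A, b, rcond=None)[0]      # plain L2 starting point
        for _ in range(n_iter):
            r = b - A @ a
            w = (np.abs(r) + eps) ** (p - 2.0)        # IRLS weights for the Lp norm
            Aw = A * w[:, None]
            a = np.linalg.solve(A.T @ Aw, Aw.T @ b)   # weighted normal equations
        return a
    ```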

  13. Mixed-norm estimates for the M/EEG inverse problem using accelerated gradient methods

    PubMed Central

    Gramfort, Alexandre; Kowalski, Matthieu; Hämäläinen, Matti

    2012-01-01

    Magneto- and electroencephalography (M/EEG) measure the electromagnetic fields produced by neural electrical currents. Given a conductor model for the head and the distribution of source currents in the brain, Maxwell's equations allow one to compute the ensuing M/EEG signals. Given the actual M/EEG measurements and the solution of this forward problem, one can localize, in space and in time, the brain regions that have produced the recorded data. However, due to the physics of the problem, the limited number of sensors compared to the number of possible source locations, and measurement noise, this inverse problem is ill-posed. Consequently, additional constraints are needed. Classical inverse solvers, often called Minimum Norm Estimates (MNE), promote source estimates with a small ℓ2 norm. Here, we consider a more general class of priors based on mixed norms. Such norms have the ability to structure the prior in order to incorporate some additional assumptions about the sources. We refer to such solvers as Mixed-Norm Estimates (MxNE). In the context of M/EEG, MxNE can promote spatially focal sources with smooth temporal estimates with a two-level ℓ1/ℓ2 mixed norm, while a three-level mixed norm can be used to promote spatially non-overlapping sources between different experimental conditions. In order to efficiently solve the optimization problems of MxNE, we introduce fast first-order iterative schemes that for the ℓ1/ℓ2 norm give solutions in a few seconds, making such a prior as convenient as the simple MNE. Furthermore, thanks to the convexity of the optimization problem, we can provide optimality conditions that guarantee global convergence. The utility of the methods is demonstrated both with simulations and experimental MEG data. PMID:22421459
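
    The two-level ℓ1/ℓ2 prior enters such solvers through its proximal operator, a row-wise group soft-threshold cheap enough to use inside a FISTA-type loop. A sketch, with rows as sources and columns as time samples:

    ```python
    import numpy as np

    def prox_l21(X, t):
        """Proximal operator of t * sum over rows of ||X[i, :]||_2.

        Shrinking whole rows zeroes entire sources (spatial sparsity)
        while leaving the surviving time courses smooth.
        """
        norms = np.linalg.norm(X, axis=1, keepdims=True)
        scale = np.maximum(1.0 - t / np.maximum(norms, 1e-12), 0.0)
        return X * scale
    ```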

  14. Improved sparse decomposition based on a smoothed L0 norm using a Laplacian kernel to select features from fMRI data.

    PubMed

    Zhang, Chuncheng; Song, Sutao; Wen, Xiaotong; Yao, Li; Long, Zhiying

    2015-04-30

    Feature selection plays an important role in improving the classification accuracy of multivariate classification techniques in the context of fMRI-based decoding, due to the "few samples and large features" nature of functional magnetic resonance imaging (fMRI) data. Recently, several sparse representation methods have been applied to the voxel selection of fMRI data. Despite the low computational efficiency of the sparse representation methods, they still displayed promise for applications that select features from fMRI data. In this study, we proposed the Laplacian smoothed L0 norm (LSL0) approach for feature selection of fMRI data. Based on the fast sparse decomposition using the smoothed L0 norm (SL0) (Mohimani, 2007), the LSL0 method uses the Laplacian function to approximate the L0 norm of the sources. Results on simulated and real fMRI data demonstrated the feasibility and robustness of LSL0 for sparse source estimation and feature selection. Simulated results indicated that LSL0 produced more accurate source estimation than SL0 at high noise levels. The classification accuracy using voxels selected by LSL0 was higher than that of SL0 in both the simulated and real fMRI experiments. Moreover, both LSL0 and SL0 showed higher classification accuracy and required less time than ICA and the t-test for fMRI decoding. LSL0 outperformed SL0 in sparse source estimation at high noise levels and in feature selection, and both LSL0 and SL0 performed better than ICA and the t-test for feature selection. Copyright © 2015 Elsevier B.V. All rights reserved.
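
    The kernel substitution itself is a one-liner: where SL0 approximates the L0 norm with a Gaussian term, LSL0 uses a Laplacian-shaped one. The exact form below is an assumption for illustration, sharpening as sigma shrinks; the full source-estimation loop follows the paper.

    ```python
    import numpy as np

    def l0_laplacian(s, sigma):
        """Laplacian-kernel approximation of the L0 norm:
        ||s||_0 ~ sum_i (1 - exp(-|s_i| / sigma))."""
        return np.sum(1.0 - np.exp(-np.abs(s) / sigma))
    ```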

  15. Improved dynamic MRI reconstruction by exploiting sparsity and rank-deficiency.

    PubMed

    Majumdar, Angshul

    2013-06-01

    In this paper we address the problem of dynamic MRI reconstruction from partially sampled K-space data. Our work is motivated by previous studies in this area that proposed exploiting the spatiotemporal correlation of the dynamic MRI sequence by posing the reconstruction problem as a least squares minimization regularized by sparsity and low-rank penalties. Ideally the sparsity and low-rank penalties should be represented by the l0-norm and the rank of a matrix; however, both are NP-hard penalties. The previous studies used the convex l1-norm as a surrogate for the l0-norm and the non-convex Schatten-q norm (0 < q < 1) as a surrogate for the rank of the matrix. Following past research in sparse recovery, we know that the non-convex lp-norm (0 < p < 1) is a better substitute for the NP-hard l0-norm than the convex l1-norm. Motivated by these studies, we propose improvements over the previous studies by replacing the l1-norm sparsity penalty with the lp-norm. Thus, we reconstruct the dynamic MRI sequence by solving a least squares minimization problem regularized by the lp-norm as the sparsity penalty and the Schatten-q norm as the low-rank penalty. There are no efficient algorithms to solve the said problems; in this paper, we derive them. The experiments have been carried out on Dynamic Contrast Enhanced (DCE) MRI datasets. Both quantitative and qualitative analysis indicates the superiority of our proposed improvement over the existing methods. Copyright © 2013 Elsevier Inc. All rights reserved.

  16. Optimal Tikhonov regularization for DEER spectroscopy

    NASA Astrophysics Data System (ADS)

    Edwards, Thomas H.; Stoll, Stefan

    2018-03-01

    Tikhonov regularization is the most commonly used method for extracting distance distributions from experimental double electron-electron resonance (DEER) spectroscopy data. This method requires the selection of a regularization parameter, α, and a regularization operator, L. We analyze the performance of a large set of α selection methods and several regularization operators, using a test set of over half a million synthetic noisy DEER traces. These are generated from distance distributions obtained from in silico double labeling of a protein crystal structure of T4 lysozyme with the spin label MTSSL. We compare the methods and operators based on their ability to recover the model distance distributions from the noisy time traces. The results indicate that several α selection methods perform quite well, among them the Akaike information criterion and the generalized cross validation method with either the first- or second-derivative operator. They perform significantly better than currently utilized L-curve methods.
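
    As a concrete illustration, the sketch below computes Tikhonov solutions over a grid of α values and scores each by GCV, one of the selectors the study found to perform well. The operator L can be a first- or second-derivative matrix; the non-negativity of distance distributions is handled only crudely here, by clipping.

    ```python
    import numpy as np

    def tikhonov_gcv(K, y, L, alphas):
        """Tikhonov solution x(alpha) = argmin ||Kx - y||^2 + alpha^2 ||Lx||^2,
        with alpha chosen by generalized cross validation (GCV)."""
        best = (np.inf, None, None)
        for a in alphas:
            M = K.T @ K + a ** 2 * (L.T @ L)
            Minv_Kt = np.linalg.solve(M, K.T)
            H = K @ Minv_Kt                        # influence ("hat") matrix
            x = Minv_Kt @ y
            resid = np.sum((K @ x - y) ** 2)
            gcv = resid / (len(y) - np.trace(H)) ** 2
            x_pos = np.maximum(x, 0.0)             # crude stand-in for non-negativity
            if gcv < best[0]:
                best = (gcv, a, x_pos)
        return best[1], best[2]                    # chosen alpha, distribution
    ```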

  17. Validity of the EQ-5D-5L and reference norms for the Spanish population.

    PubMed

    Hernandez, Gimena; Garin, Olatz; Pardo, Yolanda; Vilagut, Gemma; Pont, Àngels; Suárez, Mónica; Neira, Montse; Rajmil, Luís; Gorostiza, Inigo; Ramallo-Fariña, Yolanda; Cabases, Juan; Alonso, Jordi; Ferrer, Montse

    2018-05-16

    The EuroQol 5 dimensions 5 levels (EQ-5D-5L) is the new version of EQ-5D, developed to improve its discriminatory capacity. This study aims to evaluate the construct validity of the Spanish version and provide index and dimension population-based reference norms for the new EQ-5D-5L. Data were obtained from the 2011/2012 Spanish National Health Survey, with a representative sample (n = 20,587) of non-institutionalized Spanish adults (≥ 18 years). The EQ-5D-5L index was calculated by using the Spanish value set. Construct validity was evaluated by comparing known groups with estimators obtained through regression models, adjusted by age and gender. Sampling weights were applied to restore the representativeness of the sample and to calculate the norms stratified by gender and age groups. We calculated the percentages and standard errors of dimensions, and the deciles, percentiles 5 and 95, means, and 95% confidence intervals of the health index. All the hypotheses established a priori for known groups were confirmed (P < 0.001). The EQ-5D-5L index indicated worse health in groups with lower education level (from 0.94 to 0.87), higher number of chronic conditions (0.96-0.79), probable psychiatric disorder (0.94 vs 0.80), strong limitations (0.96-0.46), higher number of days of restriction (0.93-0.64) or confinement to bed (0.92-0.49), and hospitalized in the previous 12 months (0.92 vs 0.81). The EQ-5D-5L is a valid instrument to measure perceived health in the Spanish-speaking population. The representative population-based norms provided here will help improve the interpretation of results obtained with the new EQ-5D-5L.

  18. Quantifying social norms: by coupling the ecosystem management concept and semi-quantitative sociological methods

    NASA Astrophysics Data System (ADS)

    Zhang, D.; Xu, H.

    2012-12-01

    Over recent decades, human-induced environmental changes have steadily and rapidly grown in intensity and impact, to the point where they now often exceed natural impacts. As an important component of human activity, social norms play a key role in environmental and natural resource management, but the lack of relevant quantitative data about them greatly limits our scientific understanding of the complex linkages between humans and nature and hampers the solution of pressing environmental and social problems. In this study, we built a quantification method by coupling the ecosystem management concept, semi-quantitative sociological methods, and mathematical statistics. We quantified social norms in two parts: whether their content coincides with the concept of ecosystem management (content value) and how they perform once put into practice (implementation value). First, we separately identified 12 core elements of ecosystem management and 16 indexes of social norms, and then matched them one by one; their degree of agreement gave the content value. Second, we selected 8 key factors that represent the performance of social norms after implementation and obtained the implementation value by the Delphi method. Adding these two values gave the final value of each social norm. Third, we conducted a case study in the Heihe river basin, the second largest inland river basin in China, by selecting 12 official edicts related to its ecosystem management. By doing so, we first obtained quantified social-norm data that can be directly applied to research involving observational or experimental data collection of natural processes. Second, each value is supported by specific content, which can assist in creating a clear road map for building or revising management and policy guidelines. For example, in this case study …

  19. Application of Turchin's method of statistical regularization

    NASA Astrophysics Data System (ADS)

    Zelenyi, Mikhail; Poliakova, Mariia; Nozik, Alexander; Khudyakov, Alexey

    2018-04-01

    During analysis of experimental data, one usually needs to restore a signal after it has been convolved with some kind of apparatus function. According to Hadamard's definition this problem is ill-posed and requires regularization to provide sensible results. In this article we describe an implementation of Turchin's method of statistical regularization, based on the Bayesian approach to the regularization strategy.

  20. Translation norms for English and Spanish: The role of lexical variables, word class, and L2 proficiency in negotiating translation ambiguity

    PubMed Central

    Prior, Anat; MacWhinney, Brian; Kroll, Judith F.

    2014-01-01

    We present a set of translation norms for 670 English and 760 Spanish nouns, verbs and class ambiguous items that varied in their lexical properties in both languages, collected from 80 bilingual participants. Half of the words in each language received more than a single translation across participants. Cue word frequency and imageability were both negatively correlated with number of translations. Word class predicted number of translations: Nouns had fewer translations than did verbs, which had fewer translations than class-ambiguous items. The translation probability of specific responses was positively correlated with target word frequency and imageability, and with its form overlap with the cue word. Translation choice was modulated by L2 proficiency: Less proficient bilinguals tended to produce lower probability translations than more proficient bilinguals, but only in forward translation, from L1 to L2. These findings highlight the importance of translation ambiguity as a factor influencing bilingual representation and performance. The norms can also provide an important resource to assist researchers in the selection of experimental materials for studies of bilingual and monolingual language performance. These norms may be downloaded from www.psychonomic.org/archive. PMID:18183923

  1. Fast ℓ1-regularized space-time adaptive processing using alternating direction method of multipliers

    NASA Astrophysics Data System (ADS)

    Qin, Lilong; Wu, Manqing; Wang, Xuan; Dong, Zhen

    2017-04-01

    Motivated by the sparsity of filter coefficients in full-dimension space-time adaptive processing (STAP) algorithms, this paper proposes a fast ℓ1-regularized STAP algorithm based on the alternating direction method of multipliers to accelerate the convergence and reduce the calculations. The proposed algorithm uses a splitting variable to obtain an equivalent optimization formulation, which is addressed with an augmented Lagrangian method. Using the alternating recursive algorithm, the method can rapidly result in a low minimum mean-square error without a large number of calculations. Through theoretical analysis and experimental verification, we demonstrate that the proposed algorithm provides a better output signal-to-clutter-noise ratio performance than other algorithms.
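
    The generic ADMM pattern the abstract builds on — split the ℓ1 term off, cache one matrix factorization, iterate cheap updates — looks as follows for a plain ℓ1-regularized least-squares problem. A and b are generic; the STAP-specific steering constraints are omitted.

    ```python
    import numpy as np

    def admm_lasso(A, b, lam, rho=1.0, n_iter=200):
        """ADMM sketch for min ||Ax - b||^2 + lam * ||x||_1 via the split x = z."""
        n = A.shape[1]
        AtA = A.T @ A
        Atb = A.T @ b
        chol = np.linalg.cholesky(2.0 * AtA + rho * np.eye(n))  # factor once, reuse
        x = z = u = np.zeros(n)
        for _ in range(n_iter):
            rhs = 2.0 * Atb + rho * (z - u)
            x = np.linalg.solve(chol.T, np.linalg.solve(chol, rhs))  # x-update
            z = np.sign(x + u) * np.maximum(np.abs(x + u) - lam / rho, 0.0)  # soft threshold
            u = u + x - z                                            # scaled dual update
        return z
    ```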

  2. Regularization Paths for Conditional Logistic Regression: The clogitL1 Package

    PubMed Central

    Reid, Stephen; Tibshirani, Rob

    2014-01-01

    We apply the cyclic coordinate descent algorithm of Friedman, Hastie, and Tibshirani (2010) to the fitting of a conditional logistic regression model with lasso (ℓ1) and elastic net penalties. The sequential strong rules of Tibshirani, Bien, Hastie, Friedman, Taylor, Simon, and Tibshirani (2012) are also used in the algorithm and it is shown that these offer a considerable speed up over the standard coordinate descent algorithm with warm starts. Once implemented, the algorithm is used in simulation studies to compare the variable selection and prediction performance of the conditional logistic regression model against that of its unconditional (standard) counterpart. We find that the conditional model performs admirably on datasets drawn from a suitable conditional distribution, outperforming its unconditional counterpart at variable selection. The conditional model is also fit to a small real world dataset, demonstrating how we obtain regularization paths for the parameters of the model and how we apply cross validation for this method where natural unconditional prediction rules are hard to come by. PMID:26257587

  3. So it is, so it shall be: Group regularities license children’s prescriptive judgments

    PubMed Central

    Roberts, Steven O.; Gelman, Susan A.; Ho, Arnold K.

    2016-01-01

    When do descriptive regularities (what characteristics individuals have) become prescriptive norms (what characteristics individuals should have)? We examined children’s (4–13 years) and adults’ use of group regularities to make prescriptive judgments, employing novel groups (Hibbles and Glerks) that engaged in morally neutral behaviors (e.g., eating different kinds of berries). Participants were introduced to conforming or non-conforming individuals (e.g., a Hibble who ate berries more typical of a Glerk). Children negatively evaluated non-conformity, with negative evaluations declining with age (Study 1). These effects were replicable across competitive and cooperative intergroup contexts (Study 2), and stemmed from reasoning about group regularities rather than reasoning about individual regularities (Study 3). These data provide new insights into children’s group concepts and have important implications for understanding the development of stereotyping and norm enforcement. PMID:27914116

  4. Comparison of l₁-Norm SVR and Sparse Coding Algorithms for Linear Regression.

    PubMed

    Zhang, Qingtian; Hu, Xiaolin; Zhang, Bo

    2015-08-01

    Support vector regression (SVR) is a popular function estimation technique based on Vapnik's concept of support vector machine. Among many variants, the l1-norm SVR is known to be good at selecting useful features when the features are redundant. Sparse coding (SC) is a technique widely used in many areas and a number of efficient algorithms are available. Both l1-norm SVR and SC can be used for linear regression. In this brief, the close connection between the l1-norm SVR and SC is revealed and some typical algorithms are compared for linear regression. The results show that the SC algorithms outperform the Newton linear programming algorithm, an efficient l1-norm SVR algorithm, in efficiency. The algorithms are then used to design the radial basis function (RBF) neural networks. Experiments on some benchmark data sets demonstrate the high efficiency of the SC algorithms. In particular, one of the SC algorithms, the orthogonal matching pursuit is two orders of magnitude faster than a well-known RBF network designing algorithm, the orthogonal least squares algorithm.

  5. Robust and Efficient Biomolecular Clustering of Tumor Based on p-Norm Singular Value Decomposition.

    PubMed

    Kong, Xiang-Zhen; Liu, Jin-Xing; Zheng, Chun-Hou; Hou, Mi-Xiao; Wang, Juan

    2017-07-01

    High dimensionality has become a typical feature of biomolecular data. In this paper, a novel dimension reduction method named p-norm singular value decomposition (PSVD) is proposed to seek a low-rank approximation matrix to the biomolecular data. To enhance robustness to outliers, the Lp-norm is taken as the error function and the Schatten p-norm is used as the regularization function in the optimization model. To evaluate the performance of PSVD, the K-means clustering method is then employed for tumor clustering based on the low-rank approximation matrix. Extensive experiments are carried out on five gene expression data sets, including two benchmark data sets and three higher-dimensional data sets from The Cancer Genome Atlas. The experimental results demonstrate that the PSVD-based method outperforms many existing methods. In particular, it is experimentally shown that the proposed method is more efficient for processing higher-dimensional data, with good robustness, stability, and superior time performance.

  6. What's in a norm? Sources and processes of norm change.

    PubMed

    Paluck, Elizabeth Levy

    2009-03-01

    This reply to the commentary by E. Staub and L. A. Pearlman (2009) revisits the field experimental results of E. L. Paluck (2009). It introduces further evidence and theoretical elaboration supporting Paluck's conclusion that exposure to a reconciliation-themed radio soap opera changed perceptions of social norms and behaviors, not beliefs. Experimental and longitudinal survey evidence reinforces the finding that the radio program affected socially shared perceptions of typical or prescribed behavior-that is, social norms. Specifically, measurements of perceptions of social norms called into question by Staub and Pearlman are shown to correlate with perceptions of public opinion and public, not private, behaviors. Although measurement issues and the mechanisms of the radio program's influence merit further testing, theory and evidence point to social interactions and emotional engagement, not individual education, as the likely mechanisms of change. The present exchange makes salient what is at stake in this debate: a model of change based on learning and personal beliefs versus a model based on group influence and social norms. These theoretical models recommend very different strategies for prejudice and conflict reduction. Future field experiments should attempt to adjudicate between these models by testing relevant policies in real-world settings.

  7. Participation in regular leisure-time physical activity among individuals with type 2 diabetes not meeting Canadian guidelines: the influence of intention, perceived behavioral control, and moral norm.

    PubMed

    Boudreau, François; Godin, Gaston

    2014-12-01

    Most people with type 2 diabetes do not engage in regular leisure-time physical activity. The theory of planned behavior and the moral norm construct can enhance our understanding of physical activity intention and behavior in this population. This study aims to identify the determinants of both the intention and the behavior of participating in regular leisure-time physical activity among individuals with type 2 diabetes who do not meet Canada's physical activity guidelines. Using secondary data analysis of a randomized computer-tailored print-based intervention, participants (n = 200) from the province of Quebec (Canada) completed and returned a baseline questionnaire measuring their attitude, perceived behavioral control, and moral norm. One month later, they self-reported their level of leisure-time physical activity. A hierarchical regression equation showed that attitude (beta = 0.10, P < 0.05), perceived behavioral control (beta = 0.37, P < 0.001), and moral norm (beta = 0.45, P < 0.001) were significant determinants of intention, with the final model explaining 63% of the variance. In terms of behavioral prediction, intention (beta = 0.34, P < 0.001) and perceived behavioral control (beta = 0.16, P < 0.05) added 17% to the variance, after controlling for the effects of the experimental condition (R^2 = 0.04, P < 0.05) and past participation in leisure-time physical activity (R^2 = 0.22, P < 0.001). The final model explained 43% of the behavioral variance. Finally, the bootstrapping procedure indicated that the influence of moral norm on behavior was mediated by intention and perceived behavioral control. The determinants investigated offer an excellent starting point for designing appropriate counseling messages to promote leisure-time physical activity among individuals with type 2 diabetes.

  8. The Total Variation Regularized L1 Model for Multiscale Decomposition

    DTIC Science & Technology

    2006-01-01

    … L1 fidelity term, and presented impressive and successful applications of the TV-L1 model to impulsive noise removal and outlier identification. She … used to filter 1D signals [3], to remove impulsive (salt-and-pepper) noise [35], to extract textures from natural images [45], to remove varying … [34, 35, 36] discovery of the usefulness of this model for removing impulsive noise, Chan and Esedoglu's [17] further analysis of this model, and a …

  9. Time-domain least-squares migration using the Gaussian beam summation method

    NASA Astrophysics Data System (ADS)

    Yang, Jidong; Zhu, Hejun; McMechan, George; Yue, Yubo

    2018-04-01

    With a finite recording aperture, a limited source spectrum and unbalanced illumination, traditional imaging methods are insufficient to generate satisfactory depth profiles with high resolution and high amplitude fidelity. This is because traditional migration uses the adjoint operator of the forward modeling rather than the inverse operator. We propose a least-squares migration approach based on the time-domain Gaussian beam summation, which helps to balance subsurface illumination and improve image resolution. Based on the Born approximation for the isotropic acoustic wave equation, we derive a linear time-domain Gaussian beam modeling operator, which significantly reduces computational costs in comparison with the spectral method. Then, we formulate the corresponding adjoint Gaussian beam migration, as the gradient of an L2-norm waveform misfit function. An L1-norm regularization is introduced to the inversion to enhance the robustness of least-squares migration, and an approximated diagonal Hessian is used as a preconditioner to speed convergence. Synthetic and field data examples demonstrate that the proposed approach improves imaging resolution and amplitude fidelity in comparison with traditional Gaussian beam migration.

  10. Time-domain least-squares migration using the Gaussian beam summation method

    NASA Astrophysics Data System (ADS)

    Yang, Jidong; Zhu, Hejun; McMechan, George; Yue, Yubo

    2018-07-01

    With a finite recording aperture, a limited source spectrum and unbalanced illumination, traditional imaging methods are insufficient to generate satisfactory depth profiles with high resolution and high amplitude fidelity. This is because traditional migration uses the adjoint operator of the forward modelling rather than the inverse operator. We propose a least-squares migration approach based on the time-domain Gaussian beam summation, which helps to balance subsurface illumination and improve image resolution. Based on the Born approximation for the isotropic acoustic wave equation, we derive a linear time-domain Gaussian beam modelling operator, which significantly reduces computational costs in comparison with the spectral method. Then, we formulate the corresponding adjoint Gaussian beam migration, as the gradient of an L2-norm waveform misfit function. An L1-norm regularization is introduced to the inversion to enhance the robustness of least-squares migration, and an approximated diagonal Hessian is used as a pre-conditioner to speed convergence. Synthetic and field data examples demonstrate that the proposed approach improves imaging resolution and amplitude fidelity in comparison with traditional Gaussian beam migration.

  11. The L1 finite element method for pure convection problems

    NASA Technical Reports Server (NTRS)

    Jiang, Bo-Nan

    1991-01-01

    The least-squares (L2) finite element method is introduced for 2-D steady-state pure convection problems with smooth solutions. It is proven that the L2 method has the same stability estimate as the original equation, i.e., the L2 method has better control of the streamline derivative. Numerical convergence rates are given to show that the L2 method is almost optimal. This L2 method was then used as a framework to develop an iteratively reweighted L2 finite element method to obtain a least absolute residual (L1) solution for problems with discontinuous solutions. This L1 finite element method produces a nonoscillatory, nondiffusive and highly accurate numerical solution that has a sharp discontinuity in one element on both coarse and fine meshes. A robust reweighting strategy was also devised to obtain the L1 solution in a few iterations. A number of examples solved by using triangle and bilinear elements are presented.
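
    Outside the FEM setting, the same iteratively reweighted L2 idea gives a least-absolute-residual solution of a generic overdetermined system. A sketch with simple 1/|r| weights, not the paper's robust reweighting strategy:

    ```python
    import numpy as np

    def irls_l1(A, b, n_iter=30, eps=1e-8):
        """IRLS sketch: approximate the L1 (least absolute residual) solution
        by solving a sequence of weighted L2 problems."""
        x = np.linalg.lstsq(A, b, rcond=None)[0]   # plain L2 starting point
        for _ in range(n_iter):
            w = 1.0 / (np.abs(b - A @ x) + eps)    # weights so that w*r^2 ~ |r|
            Aw = A * w[:, None]
            x = np.linalg.solve(A.T @ Aw, Aw.T @ b)
        return x
    ```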

  12. Accelerating 4D flow MRI by exploiting vector field divergence regularization.

    PubMed

    Santelli, Claudio; Loecher, Michael; Busch, Julia; Wieben, Oliver; Schaeffter, Tobias; Kozerke, Sebastian

    2016-01-01

    To improve velocity vector field reconstruction from undersampled four-dimensional (4D) flow MRI by penalizing divergence of the measured flow field. Iterative image reconstruction in which magnitude and phase are regularized separately in alternating iterations was implemented. The approach allows incorporating prior knowledge of the flow field being imaged. In the present work, velocity data were regularized to reduce divergence, using either divergence-free wavelets (DFW) or a finite difference (FD) method using the ℓ1-norm of divergence and curl. The reconstruction methods were tested on a numerical phantom and in vivo data. Results of the DFW and FD approaches were compared with data obtained with standard compressed sensing (CS) reconstruction. Relative to standard CS, directional errors of vector fields and divergence were reduced by 55-60% and 38-48% for three- and six-fold undersampled data with the DFW and FD methods. Velocity vector displays of the numerical phantom and in vivo data were found to be improved upon DFW or FD reconstruction. Regularization of vector field divergence in image reconstruction from undersampled 4D flow data is a valuable approach to improve reconstruction accuracy of velocity vector fields. © 2014 Wiley Periodicals, Inc.

  13. Electron paramagnetic resonance image reconstruction with total variation and curvelets regularization

    NASA Astrophysics Data System (ADS)

    Durand, Sylvain; Frapart, Yves-Michel; Kerebel, Maud

    2017-11-01

    Spatial electron paramagnetic resonance imaging (EPRI) is a recent method for localizing and characterizing free radicals in vivo or in vitro, with applications in the material and biomedical sciences. To improve the quality of the reconstruction obtained by EPRI, a variational method is proposed to invert the image formation model. It is based on a least-squares data-fidelity term and on the total variation and a Besov seminorm for the regularization term. To realize the Besov seminorm, an implementation using the curvelet transform and the L1 norm enforcing sparsity is proposed. This allows the model to reconstruct both images where acquisition information is missing and images with details in textured areas, thus opening the possibility of reduced acquisition times. To implement the minimization problem using the algorithm developed by Chambolle and Pock, a thorough analysis of the direct model is undertaken and the latter is inverted while avoiding the use of filtered backprojection (FBP) and of the non-uniform Fourier transform. Numerical experiments are carried out on simulated data, where the proposed model outperforms both visually and quantitatively the classical model using deconvolution and FBP. Improved reconstructions on real data, acquired on an irradiated distal phalanx, were successfully obtained.

  14. Regularity of random attractors for fractional stochastic reaction-diffusion equations on R^n

    NASA Astrophysics Data System (ADS)

    Gu, Anhui; Li, Dingshi; Wang, Bixiang; Yang, Han

    2018-06-01

    We investigate the regularity of random attractors for the non-autonomous non-local fractional stochastic reaction-diffusion equations in H^s(R^n) with s ∈ (0, 1). We prove the existence and uniqueness of the tempered random attractor that is compact in H^s(R^n) and attracts all tempered random subsets of L^2(R^n) with respect to the norm of H^s(R^n). The main difficulty is to show the pullback asymptotic compactness of solutions in H^s(R^n) due to the noncompactness of Sobolev embeddings on unbounded domains and the almost sure nondifferentiability of the sample paths of the Wiener process. We establish such compactness by the ideas of uniform tail-estimates and the spectral decomposition of solutions in bounded domains.

  15. OPERATOR NORM INEQUALITIES BETWEEN TENSOR UNFOLDINGS ON THE PARTITION LATTICE.

    PubMed

    Wang, Miaoyan; Duc, Khanh Dao; Fischer, Jonathan; Song, Yun S

    2017-05-01

    Interest in higher-order tensors has recently surged in data-intensive fields, with a wide range of applications including image processing, blind source separation, community detection, and feature extraction. A common paradigm in tensor-related algorithms advocates unfolding (or flattening) the tensor into a matrix and applying classical methods developed for matrices. Despite the popularity of such techniques, how the functional properties of a tensor change upon unfolding is currently not well understood. In contrast to the body of existing work, which has focused almost exclusively on matricizations, we here consider all possible unfoldings of an order-k tensor, which are in one-to-one correspondence with the set of partitions of {1, …, k}. We derive general inequalities between the lp-norms of arbitrary unfoldings defined on the partition lattice. In particular, we demonstrate how the spectral norm (p = 2) of a tensor is bounded by that of its unfoldings, and obtain an improved upper bound on the ratio of the Frobenius norm to the spectral norm of an arbitrary tensor. For specially structured tensors satisfying a generalized definition of orthogonal decomposability, we prove that the spectral norm remains invariant under specific subsets of unfolding operations.
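
    A small numpy sketch of the objects under study (the `unfold` helper is hypothetical): matricizing a random order-3 tensor for several row-mode sets shows that the Frobenius norm is unfolding-invariant while the spectral norm varies and stays below it:

```python
import numpy as np

def unfold(T, row_modes):
    """Matricization of T with the given modes as rows; general
    partition unfoldings group several modes on each side."""
    col_modes = [m for m in range(T.ndim) if m not in row_modes]
    rows = int(np.prod([T.shape[m] for m in row_modes]))
    return np.transpose(T, list(row_modes) + col_modes).reshape(rows, -1)

T = np.random.default_rng(1).standard_normal((3, 4, 5))
print("Frobenius:", np.linalg.norm(T))              # invariant under unfolding
for modes in ([0], [1], [2], [0, 1]):
    print(modes, np.linalg.norm(unfold(T, modes), 2))  # spectral norm varies
```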

  16. Exploring L1 model space in search of conductivity bounds for the MT problem

    NASA Astrophysics Data System (ADS)

    Wheelock, B. D.; Parker, R. L.

    2013-12-01

    Geophysical inverse problems of the type encountered in electromagnetic techniques are highly non-unique. As a result, any single inverted model, though feasible, is at best inconclusive and at worst misleading. In this paper, we use modified inversion methods to establish bounds on electrical conductivity within a model of the earth. Our method consists of two steps, each making use of the 1-norm in model regularization. Both 1-norm minimization problems are framed without approximation as non-negative least-squares (NNLS) problems (see the sketch after this abstract). First, we must identify a parsimonious set of regions within the model for which upper and lower bounds on average conductivity will be sought. This is accomplished by minimizing the 1-norm of spatial variation, which produces a model with a limited number of homogeneous regions; in fact, the number of homogeneous regions will never be greater than the number of data, regardless of the number of free parameters supplied. The second step establishes bounds for each of these regions with pairs of inversions. The new suite of inversions also uses a 1-norm penalty, but applied to the conductivity values themselves rather than to the spatial variation thereof. In the bounding step we use the 1-norm of our model parameters because it is proportional to average conductivity. For a lower bound on average conductivity, the 1-norm within a bounding region is minimized. For an upper bound on average conductivity, the 1-norm everywhere outside a bounding region is minimized. The latter minimization has the effect of concentrating conductance into the bounding region. Taken together, these bounds are a measure of the uncertainty in the associated region of our model. Starting with a blocky inverse solution is key in the selection of the bounding regions. Of course, there is a tradeoff between resolution and uncertainty: an increase in resolution (smaller bounding regions) results in greater uncertainty (wider bounds). Minimization of the 1-norm of
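
    A loose illustration of the split-variable device behind the NNLS framing, under the assumption of a generic linear forward operator: writing x = u − v with u, v ≥ 0 turns the 1-norm into a linear term over the nonnegative orthant. A bound-constrained quasi-Newton solver is used here instead of a dedicated NNLS routine:

```python
import numpy as np
from scipy.optimize import minimize

def l1_via_nonneg_split(A, b, lam):
    """Minimize 0.5*||A x - b||^2 + lam*||x||_1 by writing x = u - v,
    u, v >= 0, so the 1-norm becomes the linear term lam*sum(u + v)."""
    m, n = A.shape
    def f_and_g(z):
        u, v = z[:n], z[n:]
        r = A @ (u - v) - b
        g = A.T @ r
        return 0.5 * r @ r + lam * z.sum(), np.concatenate([g + lam, lam - g])
    res = minimize(f_and_g, np.zeros(2 * n), jac=True,
                   method="L-BFGS-B", bounds=[(0, None)] * (2 * n))
    return res.x[:n] - res.x[n:]

rng = np.random.default_rng(2)
A = rng.standard_normal((40, 100))
x_true = np.zeros(100)
x_true[[7, 30]] = [2.0, -1.5]
x_hat = l1_via_nonneg_split(A, A @ x_true, lam=0.1)
print(np.flatnonzero(np.abs(x_hat) > 0.5))  # expected support: [7 30]
```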

  17. Minimal residual method provides optimal regularization parameter for diffuse optical tomography

    NASA Astrophysics Data System (ADS)

    Jagannath, Ravi Prasad K.; Yalavarthy, Phaneendra K.

    2012-10-01

    The inverse problem in diffuse optical tomography is known to be nonlinear, ill-posed, and sometimes under-determined, requiring regularization to obtain meaningful results, with Tikhonov-type regularization being the most popular one. The choice of this regularization parameter dictates the reconstructed optical image quality and is typically chosen empirically or based on prior experience. An automated method for optimal selection of the regularization parameter, based on the regularized minimal residual method (MRM), is proposed and compared with the traditional generalized cross-validation method. The results obtained using numerical and gelatin phantom data indicate that the MRM-based method is capable of providing the optimal regularization parameter.

  19. The Iterative Reweighted Mixed-Norm Estimate for Spatio-Temporal MEG/EEG Source Reconstruction.

    PubMed

    Strohmeier, Daniel; Bekhti, Yousra; Haueisen, Jens; Gramfort, Alexandre

    2016-10-01

    Source imaging based on magnetoencephalography (MEG) and electroencephalography (EEG) allows for the non-invasive analysis of brain activity with high temporal and good spatial resolution. As the bioelectromagnetic inverse problem is ill-posed, constraints are required. For the analysis of evoked brain activity, spatial sparsity of the neuronal activation is a common assumption. It is often taken into account using convex constraints based on the ℓ1-norm. The resulting source estimates are however biased in amplitude and often suboptimal in terms of source selection due to high correlations in the forward model. In this work, we demonstrate that an inverse solver based on a block-separable penalty with a Frobenius norm per block and an ℓ0.5-quasinorm over blocks addresses both of these issues. For solving the resulting non-convex optimization problem, we propose the iterative reweighted Mixed Norm Estimate (irMxNE), an optimization scheme based on iterative reweighted convex surrogate optimization problems, which are solved efficiently using a block coordinate descent scheme and an active set strategy. We compare the proposed sparse imaging method to the dSPM and the RAP-MUSIC approach based on two MEG data sets. We provide empirical evidence based on simulations and analysis of MEG data that the proposed method improves on the standard Mixed Norm Estimate (MxNE) in terms of amplitude bias, support recovery, and stability.

  20. Image restoration by minimizing zero norm of wavelet frame coefficients

    NASA Astrophysics Data System (ADS)

    Bao, Chenglong; Dong, Bin; Hou, Likun; Shen, Zuowei; Zhang, Xiaoqun; Zhang, Xue

    2016-11-01

    In this paper, we propose two algorithms, namely the extrapolated proximal iterative hard thresholding (EPIHT) algorithm and the EPIHT algorithm with line search, for solving the ℓ0-norm regularized wavelet frame balanced approach for image restoration. Under the theoretical framework of the Kurdyka-Łojasiewicz property, we show that the sequences generated by the two algorithms converge to a local minimizer with a linear convergence rate. Moreover, extensive numerical experiments on sparse signal reconstruction and wavelet frame based image restoration problems, including CT reconstruction and image deblurring, demonstrate the improvement of ℓ0-norm based regularization models over some prevailing ones, as well as the computational efficiency of the proposed algorithms.
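
    A sketch of the plain proximal iteration that EPIHT builds on (extrapolation and line search omitted; names and problem sizes are illustrative):

```python
import numpy as np

def iht(A, b, s, step, n_iter=300):
    """Plain iterative hard thresholding for
    min ||A x - b||^2  subject to  ||x||_0 <= s."""
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = x - step * (A.T @ (A @ x - b))   # gradient step
        small = np.argsort(np.abs(x))[:-s]   # hard threshold: zero all but
        x[small] = 0.0                       # the s largest entries
    return x

rng = np.random.default_rng(3)
A = rng.standard_normal((80, 200)) / np.sqrt(80)
x_true = np.zeros(200)
x_true[[5, 50, 120]] = [1.0, -2.0, 1.5]
x_hat = iht(A, A @ x_true, s=3, step=1.0 / np.linalg.norm(A, 2) ** 2)
print(np.linalg.norm(x_hat - x_true))  # small for this well-posed toy case
```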

  1. A spatially adaptive total variation regularization method for electrical resistance tomography

    NASA Astrophysics Data System (ADS)

    Song, Xizi; Xu, Yanbin; Dong, Feng

    2015-12-01

    The total variation (TV) regularization method has been used to solve the ill-posed inverse problem of electrical resistance tomography (ERT), owing to its good ability to preserve edges. However, the quality of the reconstructed images, especially in the flat region, is often degraded by noise. To optimize the regularization term and the regularization factor according to the spatial feature and to improve the resolution of reconstructed images, a spatially adaptive total variation (SATV) regularization method is proposed. A kind of effective spatial feature indicator named difference curvature is used to identify which region is a flat or edge region. According to different spatial features, the SATV regularization method can automatically adjust both the regularization term and regularization factor. At edge regions, the regularization term is approximate to the TV functional to preserve the edges; in flat regions, it is approximate to the first-order Tikhonov (FOT) functional to make the solution stable. Meanwhile, the adaptive regularization factor determined by the spatial feature is used to constrain the regularization strength of the SATV regularization method for different regions. Besides, a numerical scheme is adopted for the implementation of the second derivatives of difference curvature to improve the numerical stability. Several reconstruction image metrics are used to quantitatively evaluate the performance of the reconstructed results. Both simulation and experimental results indicate that, compared with the TV (mean relative error 0.288, mean correlation coefficient 0.627) and FOT (mean relative error 0.295, mean correlation coefficient 0.638) regularization methods, the proposed SATV (mean relative error 0.259, mean correlation coefficient 0.738) regularization method can endure a relatively high level of noise and improve the resolution of reconstructed images.
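
    A sketch of the edge indicator, following the standard definition of difference curvature (the paper's stabilized discretization of the second derivatives is not reproduced): u_nn is the second derivative along the gradient direction and u_ee the one along the level line, so D = ||u_nn| − |u_ee|| is large at edges but small in flat regions and at isolated noise:

```python
import numpy as np

def difference_curvature(u):
    """D = | |u_nn| - |u_ee| |, the difference curvature of image u."""
    uy, ux = np.gradient(u)          # axis 0 treated as y, axis 1 as x
    uyy, uyx = np.gradient(uy)
    uxy, uxx = np.gradient(ux)
    g2 = ux**2 + uy**2 + 1e-12
    u_nn = (ux**2 * uxx + 2 * ux * uy * uxy + uy**2 * uyy) / g2
    u_ee = (uy**2 * uxx - 2 * ux * uy * uxy + ux**2 * uyy) / g2
    return np.abs(np.abs(u_nn) - np.abs(u_ee))

u = np.zeros((64, 64))
u[:, 32:] = 1.0                      # vertical step edge
D = difference_curvature(u)
print(D[:, 30:34].mean(), D[:, :16].mean())  # indicator concentrates at the edge
```

    In an SATV-style scheme, a normalized version of this indicator would steer the per-pixel blend between the TV and FOT penalties and scale the local regularization factor.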

  2. Limited-memory fast gradient descent method for graph regularized nonnegative matrix factorization.

    PubMed

    Guan, Naiyang; Wei, Lei; Luo, Zhigang; Tao, Dacheng

    2013-01-01

    Graph regularized nonnegative matrix factorization (GNMF) decomposes a nonnegative data matrix X ∈ R^(m×n) into the product of two lower-rank nonnegative factor matrices, i.e., W ∈ R^(m×r) and H ∈ R^(r×n) (r < min{m,n}), and aims to preserve the local geometric structure of the dataset by minimizing the squared Euclidean distance or Kullback-Leibler (KL) divergence between X and WH. The multiplicative update rule (MUR) is usually applied to optimize GNMF, but it suffers from slow convergence because it intrinsically advances one step along the rescaled negative gradient direction with a non-optimal step size. Recently, a multiple step-sizes fast gradient descent (MFGD) method has been proposed for optimizing NMF, which accelerates MUR by searching for the optimal step size along the rescaled negative gradient direction with Newton's method. However, the computational cost of MFGD is high because 1) the high-dimensional Hessian matrix is dense and costs too much memory; and 2) the Hessian inverse operator and its multiplication with the gradient cost too much time. To overcome these deficiencies of MFGD, we propose an efficient limited-memory FGD (L-FGD) method for optimizing GNMF. In particular, we apply the limited-memory BFGS (L-BFGS) method to directly approximate the multiplication of the inverse Hessian and the gradient for searching the optimal step size in MFGD. The preliminary results on real-world datasets show that L-FGD is more efficient than both MFGD and MUR. To evaluate the effectiveness of L-FGD, we validate its clustering performance for optimizing KL-divergence based GNMF on two popular face image datasets including ORL and PIE and two text corpora including Reuters and TDT2. The experimental results confirm the effectiveness of L-FGD by comparing it with the representative GNMF solvers.
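
    For reference, a sketch of the baseline multiplicative update rule that MFGD and L-FGD accelerate, for the squared-Euclidean objective and without the graph-regularization term of GNMF:

```python
import numpy as np

def nmf_mur(X, r, n_iter=500, eps=1e-9):
    """Lee-Seung multiplicative updates for min_{W,H >= 0} ||X - W H||_F^2."""
    rng = np.random.default_rng(0)
    W = rng.random((X.shape[0], r))
    H = rng.random((r, X.shape[1]))
    for _ in range(n_iter):
        H *= (W.T @ X) / (W.T @ W @ H + eps)
        W *= (X @ H.T) / (W @ H @ H.T + eps)
    return W, H

X = np.random.default_rng(4).random((50, 40))
W, H = nmf_mur(X, r=5)
print(np.linalg.norm(X - W @ H) / np.linalg.norm(X))  # relative fit
```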

  3. NEIGHBORHOOD NORMS AND SUBSTANCE USE AMONG TEENS

    PubMed Central

    Musick, Kelly; Seltzer, Judith A.; Schwartz, Christine R.

    2008-01-01

    This paper uses new data from the Los Angeles Family and Neighborhood Survey (L.A. FANS) to examine how neighborhood norms shape teenagers’ substance use. Specifically, it takes advantage of clustered data at the neighborhood level to relate adult neighbors’ attitudes and behavior with respect to smoking, drinking, and drugs, which we treat as norms, to teenagers’ own smoking, drinking, and drug use. We use hierarchical linear models to account for parents’ attitudes and behavior and other characteristics of individuals and families. We also investigate how the association between neighborhood norms and teen behavior depends on: (1) the strength of norms, as measured by consensus in neighbors’ attitudes and conformity in their behavior; (2) the willingness and ability of neighbors to enforce norms, for instance, by monitoring teens’ activities; and (3) the degree to which teens are exposed to their neighbors. We find little association between neighborhood norms and teen substance use, regardless of how we condition the relationship. We discuss possible theoretical and methodological explanations for this finding. PMID:18496598

  4. Sparse deconvolution for the large-scale ill-posed inverse problem of impact force reconstruction

    NASA Astrophysics Data System (ADS)

    Qiao, Baijie; Zhang, Xingwu; Gao, Jiawei; Liu, Ruonan; Chen, Xuefeng

    2017-01-01

    Most previous regularization methods for solving the inverse problem of force reconstruction minimize the l2-norm of the desired force. However, traditional regularization methods such as Tikhonov regularization and truncated singular value decomposition commonly fail to solve the large-scale ill-posed inverse problem at moderate computational cost. In this paper, taking into account the sparse characteristic of impact force, the idea of sparse deconvolution is first introduced to the field of impact force reconstruction and a general sparse deconvolution model of impact force is constructed. Second, a novel impact force reconstruction method based on the primal-dual interior point method (PDIPM) is proposed to solve such a large-scale sparse deconvolution model, where minimizing the l2-norm is replaced by minimizing the l1-norm. Meanwhile, the preconditioned conjugate gradient algorithm is used to compute the search direction of PDIPM with high computational efficiency. Finally, two experiments, covering small- or medium-scale single impact force reconstruction and relatively large-scale consecutive impact force reconstruction, are conducted on a composite wind turbine blade and a shell structure to illustrate the advantage of PDIPM. Compared with Tikhonov regularization, PDIPM is more efficient, accurate and robust, whether in single impact force reconstruction or in consecutive impact force reconstruction.
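
    To make the l1 deconvolution model concrete, here is a sketch that solves the same objective with plain ISTA instead of the paper's PDIPM (the impulse response and all sizes are toy assumptions):

```python
import numpy as np
from scipy.linalg import toeplitz

def ista_deconv(H, y, lam, n_iter=500):
    """Sparse deconvolution min 0.5*||H f - y||^2 + lam*||f||_1 via ISTA."""
    L = np.linalg.norm(H, 2) ** 2            # Lipschitz constant of gradient
    f = np.zeros(H.shape[1])
    for _ in range(n_iter):
        z = f - H.T @ (H @ f - y) / L
        f = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)
    return f

h = np.exp(-np.linspace(0, 3, 30)) * np.sin(np.linspace(0, 12, 30))  # toy IRF
H = toeplitz(np.r_[h, np.zeros(70)], np.zeros(100))   # convolution matrix
f_true = np.zeros(100)
f_true[[20, 60]] = [1.0, 0.7]                         # two impact events
y = H @ f_true + 0.01 * np.random.default_rng(5).standard_normal(100)
print(np.flatnonzero(ista_deconv(H, y, lam=0.05) > 0.1))  # indices near 20, 60
```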

  5. Population norms for the AQoL derived from the 2007 Australian National Survey of Mental Health and Wellbeing.

    PubMed

    Hawthorne, Graeme; Korn, Sam; Richardson, Jeff

    2013-02-01

    To provide Australian health-related quality of life (HRQoL) population norms, based on utility scores from the Assessment of Quality of Life (AQoL) measure, a participant-reported outcomes (PRO) instrument. The data were from the 2007 National Survey of Mental Health and Wellbeing. AQoL scores were analysed by age cohorts, gender, other demographic characteristics, and mental and physical health variables. The AQoL utility score mean was 0.81 (95%CI 0.81-0.82), and 47% obtained scores indicating a very high HRQoL (>0.90). HRQoL gently declined by age group, with older adults' scores indicating lower HRQoL. Based on effect sizes (ESs), there were small losses in HRQoL associated with other demographic variables (e.g. by lack of labour force participation, ES(median) : 0.27). Those with current mental health syndromes reported moderate losses in HRQoL (ES(median) : 0.64), while those with physical health conditions generally also reported moderate losses in HRQoL (ES(median) : 0.41). This study has provided contemporary Australian population norms for HRQoL that may be used by researchers as indicators allowing interpretation and estimation of population health (e.g. estimation of the burden of disease), cross comparison between studies, the identification of health inequalities, and to provide benchmarks for health care interventions. © 2013 The Authors. ANZJPH © 2013 Public Health Association of Australia.

  6. Norm overlap between many-body states: Uncorrelated overlap between arbitrary Bogoliubov product states

    NASA Astrophysics Data System (ADS)

    Bally, B.; Duguet, T.

    2018-02-01

    Background: State-of-the-art multi-reference energy density functional calculations require the computation of norm overlaps between different Bogoliubov quasiparticle many-body states. It is only recently that the efficient and unambiguous calculation of such norm kernels has become available under the form of Pfaffians [L. M. Robledo, Phys. Rev. C 79, 021302 (2009), 10.1103/PhysRevC.79.021302]. Recently developed particle-number-restored Bogoliubov coupled-cluster (PNR-BCC) and particle-number-restored Bogoliubov many-body perturbation (PNR-BMBPT) ab initio theories [T. Duguet and A. Signoracci, J. Phys. G 44, 015103 (2017), 10.1088/0954-3899/44/1/015103] make use of generalized norm kernels incorporating explicit many-body correlations. In PNR-BCC and PNR-BMBPT, the Bogoliubov states involved in the norm kernels differ specifically via a global gauge rotation. Purpose: The goal of this work is threefold. We wish (i) to propose and implement an alternative to the Pfaffian method to compute unambiguously the norm overlap between arbitrary Bogoliubov quasiparticle states, (ii) to extend the first point to explicitly correlated norm kernels, and (iii) to scrutinize the analytical content of the correlated norm kernels employed in PNR-BMBPT. Point (i) constitutes the purpose of the present paper while points (ii) and (iii) are addressed in a forthcoming paper. Methods: We generalize the method used in another work [T. Duguet and A. Signoracci, J. Phys. G 44, 015103 (2017), 10.1088/0954-3899/44/1/015103] in such a way that it is applicable to kernels involving arbitrary pairs of Bogoliubov states. The formalism is presently explicated in detail in the case of the uncorrelated overlap between arbitrary Bogoliubov states. The power of the method is numerically illustrated and benchmarked against known results on the basis of toy models of increasing complexity. Results: The norm overlap between arbitrary Bogoliubov product states is obtained under a closed

  7. Metric freeness and projectivity for classical and quantum normed modules

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Helemskii, A Ya

    2013-07-31

    In functional analysis, there are several diverse approaches to the notion of a projective module. We show that a certain general categorical scheme contains all basic versions as special cases. In this scheme, the notion of a free object comes to the foreground, and, in the best categories, projective objects are precisely retracts of free ones. We are especially interested in the so-called metric version of projectivity and characterize the metrically free classical and quantum (= operator) normed modules. Informally speaking, so-called extremal projectivity, which was known earlier, is interpreted as a kind of 'asymptotical metric projectivity'. In addition, we answer the following specific question in the geometry of normed spaces: what is the structure of metrically projective modules in the simplest case of normed spaces? We prove that metrically projective normed spaces are precisely the subspaces of l_1(M) (where M is a set) that are denoted by l_1^0(M) and consist of finitely supported functions. Thus, in this case, projectivity coincides with freeness. Bibliography: 28 titles.

  8. Point-spread function reconstruction in ground-based astronomy by l(1)-l(p) model.

    PubMed

    Chan, Raymond H; Yuan, Xiaoming; Zhang, Wenxing

    2012-11-01

    In ground-based astronomy, images of objects in outer space are acquired via ground-based telescopes. However, the imaging system is generally disturbed by atmospheric turbulence, and hence images so acquired are blurred with an unknown point-spread function (PSF). To restore the observed images, the wavefront of light at the telescope's aperture is utilized to derive the PSF. A model with Tikhonov regularization has been proposed to find the high-resolution phase gradients by solving a least-squares system. Here we propose the l1-lp (p = 1, 2) model for reconstructing the phase gradients. This model can provide sharper edges in the gradients while removing noise. The minimization models can easily be solved by the Douglas-Rachford alternating direction method of multipliers, and the convergence rate is readily established. Numerical results are given to illustrate that the model can give better phase gradients and hence a more accurate PSF. As a result, the restored images are much more accurate when compared to the traditional Tikhonov regularization model.

  9. WEAK GALERKIN METHODS FOR SECOND ORDER ELLIPTIC INTERFACE PROBLEMS

    PubMed Central

    MU, LIN; WANG, JUNPING; WEI, GUOWEI; YE, XIU; ZHAO, SHAN

    2013-01-01

    Weak Galerkin methods refer to general finite element methods for partial differential equations (PDEs) in which differential operators are approximated by their weak forms as distributions. Such weak forms give rise to desirable flexibility in enforcing boundary and interface conditions. A weak Galerkin finite element method (WG-FEM) is developed in this paper for solving elliptic PDEs with discontinuous coefficients and interfaces. Theoretically, it is proved that high order numerical schemes can be designed by using the WG-FEM with polynomials of high order on each element. Extensive numerical experiments have been carried out to validate the WG-FEM for solving second order elliptic interface problems. High order of convergence is numerically confirmed in both L2 and L∞ norms for the piecewise linear WG-FEM. Special attention is paid to solving many interface problems in which the solution possesses a certain singularity due to the nonsmoothness of the interface. A challenge in research is to design nearly second order numerical methods that work well for problems with low regularity in the solution. The best known numerical scheme in the literature is of order O(h) to O(h^1.5) for the solution itself in the L∞ norm. It is demonstrated that the WG-FEM of the lowest order, i.e., the piecewise constant WG-FEM, is capable of delivering numerical approximations that are of order O(h^1.75) to O(h^2) in the L∞ norm for C1 or Lipschitz continuous interfaces associated with a C1 or H2 continuous solution. PMID:24072935

  10. EEG minimum-norm estimation compared with MEG dipole fitting in the localization of somatosensory sources at S1.

    PubMed

    Komssi, S; Huttunen, J; Aronen, H J; Ilmoniemi, R J

    2004-03-01

    Dipole models, which are frequently used in attempts to solve the electromagnetic inverse problem, require explicit a priori assumptions about the cerebral current sources. This is not the case for solutions based on minimum-norm estimates. In the present study, we evaluated the spatial accuracy of the L2 minimum-norm estimate (MNE) in realistic noise conditions by assessing its ability to localize sources of evoked responses at the primary somatosensory cortex (SI). Multichannel somatosensory evoked potentials (SEPs) and magnetic fields (SEFs) were recorded in 5 subjects while stimulating the median and ulnar nerves at the left wrist. A Tikhonov-regularized L2-MNE, constructed on a spherical surface from the SEP signals, was compared with an equivalent current dipole (ECD) solution obtained from the SEFs. Primarily tangential current sources accounted for both SEP and SEF distributions at around 20 ms (N20/N20m) and 70 ms (P70/P70m); these deflections were chosen for comparative analysis. The distances between the locations of the maximum current densities obtained from the MNE and the locations of the ECDs were on average 12-13 mm for both deflections and nerves stimulated. In accordance with the somatotopic order of SI, both the MNE and the ECD tended to localize median nerve activation more laterally than ulnar nerve activation for the N20/N20m deflection. Simulation experiments further indicated that, with a proper estimate of the source depth and with a good fit of the head model, the MNE can reach a mean accuracy of 5 mm in 0.2-microV root-mean-square noise. When compared with previously reported localizations based on dipole modelling of SEPs, it appears that equally accurate localization of SI can be obtained with the MNE. The MNE can be used to verify parametric source modelling results. Having a relatively good localization accuracy and requiring minimal assumptions, the MNE may be useful for the localization of poorly known activity distributions and for tracking
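
    The Tikhonov-regularized L2-MNE used here has a closed form; a minimal sketch, assuming a generic lead field L mapping sources to sensors (all names and sizes are illustrative):

```python
import numpy as np

def l2_mne(L, y, lam):
    """Tikhonov-regularized minimum-norm estimate: the source vector j
    minimizing ||y - L j||^2 + lam*||j||^2 has the closed form
    j = L^T (L L^T + lam I)^{-1} y (cheap when sensors << sources)."""
    return L.T @ np.linalg.solve(L @ L.T + lam * np.eye(L.shape[0]), y)

rng = np.random.default_rng(6)
L = rng.standard_normal((60, 5000))   # 60 sensors, 5000 candidate sources
j = np.zeros(5000)
j[1234] = 1.0
y = L @ j + 0.01 * rng.standard_normal(60)
j_hat = l2_mne(L, y, lam=1.0)
print(int(np.argmax(np.abs(j_hat))))  # location of the peak current density
```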

  11. Multi-normed spaces based on non-discrete measures and their tensor products

    NASA Astrophysics Data System (ADS)

    Helemskii, A. Ya.

    2018-04-01

    Lambert discovered a new type of structure situated, in a sense, between normed spaces and abstract operator spaces. His definition was based on the notion of amplifying a normed space by means of the spaces ℓ_2^n. Later, several mathematicians studied more general structures ('p-multi-normed spaces') introduced by means of the spaces ℓ_p^n, 1 ≤ p ≤ ∞. We pass from ℓ_p to L_p(X,μ) with an arbitrary measure. This becomes possible in the framework of the non-coordinate approach to the notion of amplification. In the case of a discrete counting measure, this approach is equivalent to the approach in the papers mentioned. Two categories arise. One consists of amplifications by means of an arbitrary normed space, and the other consists of p-convex amplifications by means of L_p(X,μ). Each of them has its own tensor product of objects (the existence of each product is proved by a separate explicit construction). As a final result, we show that the 'p-convex' tensor product has an especially transparent form for the minimal L_p-amplifications of L_q-spaces, where q is conjugate to p. Namely, tensoring L_q(Y,ν) and L_q(Z,λ), we obtain L_q(Y × Z, ν × λ).

  12. Extending the Mertonian Norms: Scientists' Subscription to Norms of Research

    ERIC Educational Resources Information Center

    Anderson, Melissa S.; Ronning, Emily A.; De Vries, Raymond; Martinson, Brian C.

    2010-01-01

    This analysis, based on focus groups and a national survey, assesses scientists' subscription to the Mertonian norms of science and associated counternorms. It also supports extension of these norms to governance (as opposed to administration), as a norm of decision-making, and quality (as opposed to quantity), as an evaluative norm.

  13. Improving absolute gravity estimates by the Lp-norm approximation of the ballistic trajectory

    NASA Astrophysics Data System (ADS)

    Nagornyi, V. D.; Svitlov, S.; Araya, A.

    2016-04-01

    Iteratively re-weighted least squares (IRLS) were used to simulate the Lp-norm approximation of the ballistic trajectory in absolute gravimeters. Two iterations of the IRLS delivered sufficient accuracy of the approximation without a significant bias. The simulations were performed on different samplings and perturbations of the trajectory. For platykurtic distributions of the perturbations, the Lp-approximation with 3 < p < 4 was found to yield several times more precise gravity estimates compared to standard least squares. The simulation results were confirmed by processing real gravity observations performed under excessive noise conditions.
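
    A compact sketch of the IRLS scheme, assuming a polynomial free-fall trajectory model (the sampling, noise, and parameter values below are illustrative); as in the paper, two reweighting iterations are used:

```python
import numpy as np

def irls_lp(A, b, p, n_iter=2, eps=1e-8):
    """IRLS for min ||A x - b||_p^p: weighted least squares with
    weights w_i = |r_i|^(p-2), started from the L2 solution."""
    x = np.linalg.lstsq(A, b, rcond=None)[0]
    for _ in range(n_iter):                 # two iterations suffice here
        w = (np.abs(A @ x - b) + eps) ** (p - 2)
        Aw = A * w[:, None]
        x = np.linalg.solve(A.T @ Aw, Aw.T @ b)
    return x

t = np.linspace(0.0, 0.2, 500)              # free-fall trajectory model
A = np.c_[np.ones_like(t), t, 0.5 * t**2]   # design matrix for [z0, v0, g]
z = A @ np.array([0.0, 0.5, 9.81])
z += 1e-8 * np.random.default_rng(7).standard_normal(t.size)
print(irls_lp(A, z, p=3.5)[2])              # recovered g, ~9.81
```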

  14. Error analysis of finite element method for Poisson–Nernst–Planck equations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sun, Yuzhou; Sun, Pengtao; Zheng, Bin

    A priori error estimates of the finite element method for time-dependent Poisson-Nernst-Planck equations are studied in this work. We obtain optimal error estimates in the L∞(H1) and L2(H1) norms, and suboptimal error estimates in the L∞(L2) norm, with linear elements, and optimal error estimates in the L∞(L2) norm with quadratic or higher-order elements, for both semi- and fully discrete finite element approximations. Numerical experiments are also given to validate the theoretical results.

  15. Low Dose CT Reconstruction via Edge-preserving Total Variation Regularization

    PubMed Central

    Tian, Zhen; Jia, Xun; Yuan, Kehong; Pan, Tinsu; Jiang, Steve B.

    2014-01-01

    High radiation dose in CT scans increases the lifetime risk of cancer and has become a major clinical concern. Recently, iterative reconstruction algorithms with Total Variation (TV) regularization have been developed to reconstruct CT images from highly undersampled data acquired at low mAs levels in order to reduce the imaging dose. Nonetheless, low contrast structures tend to be smoothed out by the TV regularization, posing a great challenge for the TV method. To solve this problem, in this work we develop an iterative CT reconstruction algorithm with edge-preserving TV regularization to reconstruct CT images from highly undersampled data obtained at low mAs levels. The CT image is reconstructed by minimizing an energy consisting of an edge-preserving TV norm and a data fidelity term posed by the x-ray projections. The edge-preserving TV term is proposed to preferentially perform smoothing only on the non-edge part of the image in order to better preserve the edges, which is realized by introducing a penalty weight to the original total variation norm. During the reconstruction process, the pixels at edges are gradually identified and given small penalty weights. Our iterative algorithm is implemented on GPU to improve its speed. We test our reconstruction algorithm on a digital NCAT phantom, a physical chest phantom, and a Catphan phantom. Reconstruction results from a conventional FBP algorithm and a TV regularization method without the edge-preserving penalty are also presented for comparison. The experimental results illustrate that both the TV-based algorithm and our edge-preserving TV algorithm outperform the conventional FBP algorithm in suppressing streaking artifacts and image noise in the low dose context. Our edge-preserving algorithm is superior to the TV-based algorithm in that it preserves more information on low contrast structures and therefore maintains acceptable spatial resolution. PMID:21860076

  16. Regularized spherical polar fourier diffusion MRI with optimal dictionary learning.

    PubMed

    Cheng, Jian; Jiang, Tianzi; Deriche, Rachid; Shen, Dinggang; Yap, Pew-Thian

    2013-01-01

    Compressed Sensing (CS) takes advantage of signal sparsity or compressibility and allows superb signal reconstruction from relatively few measurements. Based on CS theory, a suitable dictionary for sparse representation of the signal is required. In diffusion MRI (dMRI), CS methods proposed for reconstruction of the diffusion-weighted signal and the Ensemble Average Propagator (EAP) utilize two kinds of Dictionary Learning (DL) methods: 1) Discrete Representation DL (DR-DL), and 2) Continuous Representation DL (CR-DL). DR-DL is susceptible to numerical inaccuracy owing to interpolation and regridding errors in a discretized q-space. In this paper, we propose a novel CR-DL approach, called Dictionary Learning - Spherical Polar Fourier Imaging (DL-SPFI), for effective compressed-sensing reconstruction of the q-space diffusion-weighted signal and the EAP. In DL-SPFI, a dictionary that sparsifies the signal is learned from the space of continuous Gaussian diffusion signals. The learned dictionary is then adaptively applied to different voxels using a weighted LASSO framework for robust signal reconstruction. Compared with the state-of-the-art CR-DL and DR-DL methods proposed by Merlet et al. and Bilgic et al., respectively, our work offers the following advantages. First, the learned dictionary is proved to be optimal for Gaussian diffusion signals. Second, to our knowledge, this is the first work to learn a voxel-adaptive dictionary. The importance of the adaptive dictionary in EAP reconstruction is demonstrated theoretically and empirically. Third, optimization in DL-SPFI is performed only in a small subspace spanned by the SPF coefficients, as opposed to the q-space approach utilized by Merlet et al. We experimentally evaluated DL-SPFI with respect to L1-norm regularized SPFI (L1-SPFI), which uses the original SPF basis, and the DR-DL method proposed by Bilgic et al. The experiment results on synthetic and real data indicate that the learned dictionary produces

  17. On the Use of Nonlinear Regularization in Inverse Methods for the Solar Tachocline Profile Determination

    NASA Astrophysics Data System (ADS)

    Corbard, T.; Berthomieu, G.; Provost, J.; Blanc-Feraud, L.

    Inferring the solar rotation from observed frequency splittings represents an ill-posed problem in the sense of Hadamard, and the traditional approach used to override this difficulty consists in regularizing the problem by adding some a priori information on the global smoothness of the solution, defined as the norm of its first or second derivative. Nevertheless, inversions of rotational splittings (e.g. Corbard et al., 1998; Schou et al., 1998) have shown that the surface layers and the so-called solar tachocline (Spiegel & Zahn 1992) at the base of the convection zone are regions in which high radial gradients of the rotation rate occur. Therefore, the global smoothness a priori, which tends to smooth out every high gradient in the solution, may not be appropriate for the study of a zone like the tachocline, which is of particular interest for the study of solar dynamics (e.g. Elliot 1997). In order to infer the fine structure of such regions with high gradients by inverting helioseismic data, we have to find a way to preserve these zones in the inversion process. Setting a more adapted constraint on the solution leads to non-linear regularization methods that are in current use for edge-preserving regularization in computed imaging (e.g. Blanc-Feraud et al. 1995). In this work, we investigate their use in the helioseismic context of rotational inversions.

  18. Improving IMRT delivery efficiency with reweighted L1-minimization for inverse planning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, Hojin; Becker, Stephen; Lee, Rena

    2013-07-15

    Purpose: This study presents an improved technique to further simplify the fluence map in intensity modulated radiation therapy (IMRT) inverse planning, thereby reducing plan complexity and improving delivery efficiency, while maintaining the plan quality. Methods: First-order total-variation (TV) minimization based on the L1-norm has been proposed to reduce the complexity of the fluence map in IMRT by generating sparse fluence-map variations. However, with stronger dose sparing to the critical structures, the inevitable increase in fluence-map complexity can lead to inefficient dose delivery. Theoretically, L0-minimization is the ideal solution for the sparse signal recovery problem, yet it is practically intractable due to the nonconvexity of the objective function. As an alternative, the authors use the iteratively reweighted L1-minimization technique to incorporate the benefits of the L0-norm into the tractability of L1-minimization. The weight multiplied to each element is inversely related to the magnitude of the corresponding element, and is iteratively updated by the reweighting process. The proposed penalizing process combined with TV minimization further improves sparsity in the fluence-map variations, hence ultimately enhancing the delivery efficiency. To validate the proposed method, this work compares three treatment plans obtained from quadratic minimization (generally used in clinical IMRT), conventional TV minimization, and the proposed reweighted TV minimization technique, implemented by a large-scale L1-solver (template for first-order conic solver), for five patients' clinical data. Criteria such as conformation number (CN), modulation index (MI), and estimated treatment time are employed to assess the relationship between plan quality and delivery efficiency. Results: The proposed method yields simpler fluence maps than the quadratic and conventional TV based techniques. To attain a given CN and dose sparing to the critical organs for 5 clinical cases, the proposed method reduces the number of
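
    A sketch of the reweighting idea (in the style of Candès, Wakin, and Boyd), with a simple weighted ISTA as the inner solver rather than the large-scale solver used in the study; all names and parameter values are illustrative:

```python
import numpy as np

def weighted_ista(A, b, lam_w, n_iter=300):
    """Inner solver: min 0.5*||A x - b||^2 + sum_i lam_w[i] * |x_i|."""
    L = np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        z = x - A.T @ (A @ x - b) / L
        x = np.sign(z) * np.maximum(np.abs(z) - lam_w / L, 0.0)
    return x

def reweighted_l1(A, b, lam, n_outer=4, eps=1e-3):
    """Outer loop: each element's weight is inversely related to its
    current magnitude, pushing small entries to zero and mimicking
    an L0 penalty."""
    w = np.ones(A.shape[1])
    for _ in range(n_outer):
        x = weighted_ista(A, b, lam * w)
        w = 1.0 / (np.abs(x) + eps)
    return x
```

    Relative to a single weighted pass, the reweighting sharpens sparsity, which is the mechanism the study exploits to simplify fluence-map variations.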

  19. X-Ray Phase Imaging for Breast Cancer Detection

    DTIC Science & Technology

    2010-09-01

    … regularization seeks the minimum-norm, least-squares solution for phase retrieval. The retrieval result with Tikhonov regularization is still unsatisfactory … a norm that can effectively reflect the accuracy of the retrieved data as an image, if ‖δI_k+1 − δI_k‖ is less than a predefined threshold value β … it is pointed out that the proper norm for images is the total variation (TV) norm, which is the L1 norm of the gradient of the image function, and not the …

  20. The L0 Regularized Mumford-Shah Model for Bias Correction and Segmentation of Medical Images.

    PubMed

    Duan, Yuping; Chang, Huibin; Huang, Weimin; Zhou, Jiayin; Lu, Zhongkang; Wu, Chunlin

    2015-11-01

    We propose a new variant of the Mumford-Shah model for simultaneous bias correction and segmentation of images with intensity inhomogeneity. First, based on the model of images with intensity inhomogeneity, we introduce an L0 gradient regularizer to model the true intensity and a smooth regularizer to model the bias field. In addition, we derive a new data fidelity using the local intensity properties to allow the bias field to be influenced by its neighborhood. Second, we use a two-stage segmentation method, where the fast alternating direction method is implemented in the first stage for the recovery of true intensity and bias field and a simple thresholding is used in the second stage for segmentation. Different from most of the existing methods for simultaneous bias correction and segmentation, we estimate the bias field and true intensity without fixing either the number of the regions or their values in advance. Our method has been validated on medical images of various modalities with intensity inhomogeneity. Compared with the state-of-the-art approaches and well-known brain imaging software tools, our model is fast, accurate, and robust with respect to initializations.

  1. Regularization Paths for Conditional Logistic Regression: The clogitL1 Package.

    PubMed

    Reid, Stephen; Tibshirani, Rob

    2014-07-01

    We apply the cyclic coordinate descent algorithm of Friedman, Hastie, and Tibshirani (2010) to the fitting of a conditional logistic regression model with lasso (ℓ1) and elastic net penalties. The sequential strong rules of Tibshirani, Bien, Hastie, Friedman, Taylor, Simon, and Tibshirani (2012) are also used in the algorithm, and it is shown that these offer a considerable speed-up over the standard coordinate descent algorithm with warm starts. Once implemented, the algorithm is used in simulation studies to compare the variable selection and prediction performance of the conditional logistic regression model against that of its unconditional (standard) counterpart. We find that the conditional model performs admirably on datasets drawn from a suitable conditional distribution, outperforming its unconditional counterpart at variable selection. The conditional model is also fit to a small real-world dataset, demonstrating how we obtain regularization paths for the parameters of the model and how we apply cross-validation for this method, where natural unconditional prediction rules are hard to come by.

  2. J.-L. Lions' problem concerning maximal regularity of equations governed by non-autonomous forms

    NASA Astrophysics Data System (ADS)

    Fackler, Stephan

    2017-05-01

    An old problem due to J.-L. Lions going back to the 1960s asks whether the abstract Cauchy problem associated to non-autonomous forms has maximal regularity if the time dependence is merely assumed to be continuous or even measurable. We give a negative answer to this question and discuss the minimal regularity needed for positive results.

  3. Inference of Gene Regulatory Networks Incorporating Multi-Source Biological Knowledge via a State Space Model with L1 Regularization

    PubMed Central

    Hasegawa, Takanori; Yamaguchi, Rui; Nagasaki, Masao; Miyano, Satoru; Imoto, Seiya

    2014-01-01

    Comprehensive understanding of gene regulatory networks (GRNs) is a major challenge in the field of systems biology. Currently, there are two main approaches in GRN analysis using time-course observation data, namely an ordinary differential equation (ODE)-based approach and a statistical model-based approach. The ODE-based approach can generate complex dynamics of GRNs according to biologically validated nonlinear models. However, it cannot be applied to ten or more genes to simultaneously estimate system dynamics and regulatory relationships due to the computational difficulties. The statistical model-based approach uses highly abstract models to simply describe biological systems and to infer relationships among several hundreds of genes from the data. However, the high abstraction generates false regulations that are not permitted biologically. Thus, when dealing with several tens of genes of which the relationships are partially known, a method that can infer regulatory relationships based on a model with low abstraction and that can emulate the dynamics of ODE-based models while incorporating prior knowledge is urgently required. To accomplish this, we propose a method for inference of GRNs using a state space representation of a vector auto-regressive (VAR) model with L1 regularization. This method can estimate the dynamic behavior of genes based on linear time-series modeling constructed from an ODE-based model and can infer the regulatory structure among several tens of genes maximizing prediction ability for the observational data. Furthermore, the method is capable of incorporating various types of existing biological knowledge, e.g., drug kinetics and literature-recorded pathways. The effectiveness of the proposed method is shown through a comparison of simulation studies with several previous methods. For an application example, we evaluated mRNA expression profiles over time upon corticosteroid stimulation in rats, thus incorporating corticosteroid
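
    A minimal sketch of the L1-penalized VAR building block (the paper embeds this in a state space model estimated with more elaborate machinery; `var_l1`, the use of scikit-learn, and all sizes are assumptions for illustration):

```python
import numpy as np
from sklearn.linear_model import Lasso

def var_l1(X, lam):
    """First-order VAR x_{t+1} = A x_t with an l1 penalty on A, fitted
    as one lasso regression per target variable; zero entries of A
    correspond to absent regulatory edges."""
    Y, Z = X[1:], X[:-1]                   # targets and lagged predictors
    p = X.shape[1]
    A = np.zeros((p, p))
    for j in range(p):
        model = Lasso(alpha=lam, fit_intercept=False, max_iter=10000)
        A[j] = model.fit(Z, Y[:, j]).coef_
    return A

rng = np.random.default_rng(8)
A_true = np.diag(0.8 * np.ones(10))
A_true[0, 3] = 0.5                         # one cross-regulatory edge
X = np.zeros((200, 10))
X[0] = rng.standard_normal(10)
for t in range(199):
    X[t + 1] = A_true @ X[t] + 0.1 * rng.standard_normal(10)
print(np.round(var_l1(X, lam=0.05)[0, 3], 2))   # recovers ~0.5
```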

  4. 39 CFR 6.1 - Regular meetings, annual meeting.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 39 Postal Service 1 2010-07-01 2010-07-01 false Regular meetings, annual meeting. 6.1 Section 6.1 Postal Service UNITED STATES POSTAL SERVICE THE BOARD OF GOVERNORS OF THE U.S. POSTAL SERVICE MEETINGS (ARTICLE VI) § 6.1 Regular meetings, annual meeting. The Board shall meet regularly on a schedule...

  5. Kernel Recursive Least-Squares Temporal Difference Algorithms with Sparsification and Regularization.

    PubMed

    Zhang, Chunyuan; Zhu, Qingxin; Niu, Xinzheng

    2016-01-01

    By combining with sparse kernel methods, least-squares temporal difference (LSTD) algorithms can construct the feature dictionary automatically and obtain a better generalization ability. However, the previous kernel-based LSTD algorithms do not consider regularization and their sparsification processes are batch or offline, which hinders their widespread application in online learning problems. In this paper, we combine the following five techniques and propose two novel kernel recursive LSTD algorithms: (i) online sparsification, which can cope with unknown state regions and be used for online learning, (ii) L2 and L1 regularization, which can avoid overfitting and eliminate the influence of noise, (iii) recursive least squares, which can eliminate matrix-inversion operations and reduce computational complexity, (iv) a sliding-window approach, which can avoid caching all history samples and reduce the computational cost, and (v) the fixed-point subiteration and online pruning, which can make L1 regularization easy to implement. Finally, simulation results on two 50-state chain problems demonstrate the effectiveness of our algorithms.

  6. Reducing errors in the GRACE gravity solutions using regularization

    NASA Astrophysics Data System (ADS)

    Save, Himanshu; Bettadpur, Srinivas; Tapley, Byron D.

    2012-09-01

    The nature of the gravity field inverse problem amplifies the noise in the GRACE data, which creeps into the mid and high degree and order harmonic coefficients of the Earth's monthly gravity fields provided by GRACE. Due to the use of imperfect background models and data noise, these errors are manifested as north-south striping in the monthly global maps of equivalent water heights. In order to reduce these errors, this study investigates the use of the L-curve method with Tikhonov regularization. L-curve is a popular aid for determining a suitable value of the regularization parameter when solving linear discrete ill-posed problems using Tikhonov regularization. However, the computational effort required to determine the L-curve is prohibitively high for a large-scale problem like GRACE. This study implements a parameter-choice method, using Lanczos bidiagonalization which is a computationally inexpensive approximation to L-curve. Lanczos bidiagonalization is implemented with orthogonal transformation in a parallel computing environment and projects a large estimation problem on a problem of the size of about 2 orders of magnitude smaller for computing the regularization parameter. Errors in the GRACE solution time series have certain characteristics that vary depending on the ground track coverage of the solutions. These errors increase with increasing degree and order. In addition, certain resonant and near-resonant harmonic coefficients have higher errors as compared with the other coefficients. Using the knowledge of these characteristics, this study designs a regularization matrix that provides a constraint on the geopotential coefficients as a function of its degree and order. This regularization matrix is then used to compute the appropriate regularization parameter for each monthly solution. A 7-year time-series of the candidate regularized solutions (Mar 2003-Feb 2010) show markedly reduced error stripes compared with the unconstrained GRACE release 4
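
    For problems small enough to solve directly, the L-curve corner can be located by brute force; a sketch under toy assumptions (the study's Lanczos-bidiagonalization approximation, needed at GRACE scale, is not reproduced, and the discrete curvature estimate here is deliberately simple):

```python
import numpy as np

def l_curve_corner(A, b, lams):
    """Pick the Tikhonov parameter at the L-curve corner, located as
    the point of maximum absolute discrete curvature of the curve
    (log residual norm, log solution norm)."""
    rho, eta = [], []
    for lam in lams:
        x = np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ b)
        rho.append(np.log(np.linalg.norm(A @ x - b)))
        eta.append(np.log(np.linalg.norm(x)))
    r, e = np.array(rho), np.array(eta)
    r1, e1 = np.gradient(r), np.gradient(e)
    r2, e2 = np.gradient(r1), np.gradient(e1)
    kappa = (r1 * e2 - r2 * e1) / (r1**2 + e1**2) ** 1.5
    return lams[int(np.argmax(np.abs(kappa)))]

n = 40
A = 1.0 / (np.arange(1, n + 1)[:, None] + np.arange(n)[None, :])  # ill-posed
b = A @ np.ones(n) + 1e-6 * np.random.default_rng(9).standard_normal(n)
print(l_curve_corner(A, b, np.logspace(-14, 0, 80)))
```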

  7. Verbal Inflectional Morphology in L1 and L2 Spanish: A Frequency Effects Study Examining Storage versus Composition

    PubMed Central

    Bowden, Harriet Wood; Gelfand, Matthew P.; Sanz, Cristina; Ullman, Michael T.

    2009-01-01

    This study examines the storage vs. composition of Spanish inflected verbal forms in L1 and L2 speakers of Spanish. L2 participants were selected to have mid-to-advanced proficiency, high classroom experience, and low immersion experience, typical of medium-to-advanced foreign language learners. Participants were shown the infinitival forms of verbs from either Class I (the default class, which takes new verbs) or Classes II and III (non-default classes), and were asked to produce either first-person singular present-tense or imperfect forms, in separate tasks. In the present tense, the L1 speakers showed inflected-form frequency effects (i.e., higher frequency forms were produced faster, which is taken as a reflection of storage) for stem-changing (irregular) verb-forms from both Class I (e.g., pensar-pienso) and Classes II and III (e.g., perder-pierdo), as well as for non-stem-changing (regular) forms in Classes II/III (e.g., vender-vendo), in which the regular transformation does not appear to constitute a default. In contrast, Class I regulars (e.g., pescar-pesco), whose non-stem-changing transformation constitutes a default (e.g., it is applied to new verbs), showed no frequency effects. L2 speakers showed frequency effects for all four conditions (Classes I and II/III, regulars and irregulars). In the imperfect tense, the L1 speakers showed frequency effects for Class II/III (-ía-suffixed) but not Class I (-aba-suffixed) forms, even though both involve non-stem-change (regular) default transformations. The L2 speakers showed frequency effects for both types of forms. The pattern of results was not explained by a wide range of potentially confounding experimental and statistical factors, and does not appear to be compatible with single-mechanism models, which argue that all linguistic forms are learned and processed in associative memory. The findings are consistent with a dual-system view in which both verb class and regularity influence the storage vs

  8. Smooth Approximation l0-Norm Constrained Affine Projection Algorithm and Its Applications in Sparse Channel Estimation

    PubMed Central

    2014-01-01

    We propose a smooth approximation l0-norm constrained affine projection algorithm (SL0-APA) to improve the convergence speed and the steady-state error of the affine projection algorithm (APA) for sparse channel estimation. The proposed algorithm ensures improved performance in terms of convergence speed and steady-state error by combining a smooth approximation l0-norm (SL0) penalty on the coefficients with the standard APA cost function, which gives rise to a zero attractor that promotes the sparsity of the channel taps in the channel estimation and hence accelerates the convergence speed and reduces the steady-state error when the channel is sparse. The simulation results demonstrate that our proposed SL0-APA is superior to the standard APA and its sparsity-aware algorithms in terms of both the convergence speed and the steady-state behavior in a designated sparse channel. Furthermore, SL0-APA is shown to have a smaller steady-state error than the previously proposed sparsity-aware algorithms when the number of nonzero taps in the sparse channel increases. PMID:24790588
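
    A sketch of the usual Gaussian smoothed-ℓ0 surrogate and the zero attractor it induces (the exact approximating function used in the paper may differ; the value of sigma below is an arbitrary choice):

```python
import numpy as np

def sl0_penalty(x, sigma):
    """Smooth l0 surrogate: sum_i 1 - exp(-x_i^2 / (2 sigma^2)).
    As sigma -> 0 it approaches the count of nonzero entries."""
    return np.sum(1.0 - np.exp(-x**2 / (2.0 * sigma**2)))

def sl0_zero_attractor(x, sigma):
    """Gradient of the surrogate: negligible for large taps, but it
    pulls small coefficients toward zero in the filter update."""
    return (x / sigma**2) * np.exp(-x**2 / (2.0 * sigma**2))

x = np.array([0.0, 1e-3, 0.5, 2.0])
print(sl0_penalty(x, sigma=0.05))        # ~2: two effectively nonzero taps
print(sl0_zero_attractor(x, sigma=0.05))
```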

  9. Iterative Nonlocal Total Variation Regularization Method for Image Restoration

    PubMed Central

    Xu, Huanyu; Sun, Quansen; Luo, Nan; Cao, Guo; Xia, Deshen

    2013-01-01

    In this paper, a Bregman iteration based total variation image restoration algorithm is proposed. Based on the Bregman iteration, the algorithm splits the original total variation problem into sub-problems that are easy to solve. Moreover, non-local regularization is introduced into the proposed algorithm, and a method to choose the non-local filter parameter locally and adaptively is proposed. Experiment results show that the proposed algorithms outperform some other regularization methods. PMID:23776560

  10. Regularity Aspects in Inverse Musculoskeletal Biomechanics

    NASA Astrophysics Data System (ADS)

    Lund, Marie; Ståhl, Fredrik; Gulliksson, Mårten

    2008-09-01

    Inverse simulations of musculoskeletal models compute the internal forces, such as muscle and joint reaction forces, which are hard to measure, using the more easily measured motion and external forces as input data. Because of the difficulties of measuring muscle forces and joint reactions, simulations are hard to validate. One way of reducing errors in the simulations is to ensure that the mathematical problem is well-posed. This paper presents a study of regularity aspects for an inverse simulation method, often called forward dynamics or dynamical optimization, that takes into account both measurement errors and muscle dynamics. Regularity is examined for a test problem around the optimum using the approximated quadratic problem. The results show improved rank by including a regularization term in the objective that handles the mechanical over-determinacy. Using the 3-element Hill muscle model, the chosen regularization term is the norm of the activation. To make the problem full-rank, only the excitation bounds should be included in the constraints. However, this results in small negative values of the activation, which indicates that muscles are pushing and not pulling; this is unrealistic, but the error may be small enough to be accepted for specific applications. These results are a first step toward ensuring better results of inverse musculoskeletal simulations from a numerical point of view.

  11. Mixed-norm estimates for the M/EEG inverse problem using accelerated gradient methods.

    PubMed

    Gramfort, Alexandre; Kowalski, Matthieu; Hämäläinen, Matti

    2012-04-07

    Magneto- and electroencephalography (M/EEG) measure the electromagnetic fields produced by the neural electrical currents. Given a conductor model for the head, and the distribution of source currents in the brain, Maxwell's equations allow one to compute the ensuing M/EEG signals. Given the actual M/EEG measurements and the solution of this forward problem, one can localize, in space and in time, the brain regions that have produced the recorded data. However, due to the physics of the problem, the limited number of sensors compared to the number of possible source locations, and measurement noise, this inverse problem is ill-posed. Consequently, additional constraints are needed. Classical inverse solvers, often called minimum norm estimates (MNE), promote source estimates with a small ℓ₂ norm. Here, we consider a more general class of priors based on mixed norms. Such norms have the ability to structure the prior in order to incorporate some additional assumptions about the sources. We refer to such solvers as mixed-norm estimates (MxNE). In the context of M/EEG, MxNE can promote spatially focal sources with smooth temporal estimates with a two-level ℓ₁/ℓ₂ mixed-norm, while a three-level mixed-norm can be used to promote spatially non-overlapping sources between different experimental conditions. In order to efficiently solve the optimization problems of MxNE, we introduce fast first-order iterative schemes that for the ℓ₁/ℓ₂ norm give solutions in a few seconds making such a prior as convenient as the simple MNE. Furthermore, thanks to the convexity of the optimization problem, we can provide optimality conditions that guarantee global convergence. The utility of the methods is demonstrated both with simulations and experimental MEG data.
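
    The spatial sparsity of MxNE comes from the proximal operator of the ℓ₁/ℓ₂ mixed norm, which is a group soft-threshold; a minimal sketch with rows as source locations and columns as time samples (names and values are illustrative):

```python
import numpy as np

def prox_l21(X, t):
    """Prox of t*||X||_{2,1} with rows as groups: each row is shrunk
    in its l2 norm, and rows with norm <= t vanish, which yields
    spatially sparse, temporally smooth estimates."""
    norms = np.linalg.norm(X, axis=1, keepdims=True)
    return X * np.maximum(1.0 - t / np.maximum(norms, 1e-12), 0.0)

X = np.vstack([np.ones(5), 0.1 * np.ones(5)])
print(prox_l21(X, t=0.5))   # first row shrunk, second row zeroed out
```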

  12. Traction cytometry: regularization in the Fourier approach and comparisons with finite element method.

    PubMed

    Kulkarni, Ankur H; Ghosh, Prasenjit; Seetharaman, Ashwin; Kondaiah, Paturu; Gundiah, Namrata

    2018-05-09

    Traction forces exerted by adherent cells are quantified using displacements of embedded markers on polyacrylamide substrates due to cell contractility. Fourier Transform Traction Cytometry (FTTC) is widely used to calculate tractions but has inherent limitations due to errors in the displacement fields; these are mitigated through a regularization parameter (γ) in the Reg-FTTC method. An alternate finite element (FE) approach computes tractions on a domain using known boundary conditions. Robust verification and recovery studies are lacking but essential in assessing the accuracy and noise sensitivity of the traction solutions from the different methods. We implemented the L2 regularization method and defined the point of maximum curvature in the plot of traction against γ as the optimal regularization parameter (γ*) in the Reg-FTTC approach. Traction reconstructions using γ* yield accurate values of low and maximum tractions (Tmax) in the presence of up to 5% noise. Reg-FTTC is hence a clear improvement over the FTTC method but is inadequate to reconstruct low stresses such as those at nascent focal adhesions. FE, implemented using a node-by-node comparison, showed an intermediate reconstruction quality compared to Reg-FTTC. We performed experiments using mouse embryonic fibroblasts (MEFs) and compared results between these approaches. Tractions from FTTC and FE showed differences of ∼92% and 22%, respectively, as compared to Reg-FTTC. Selection of an optimum value of γ for each cell reduced variability in the computed tractions as compared to using a single value of γ for all the MEF cells in this study.

  13. Full Waveform Inversion Using Student's t Distribution: a Numerical Study for Elastic Waveform Inversion and Simultaneous-Source Method

    NASA Astrophysics Data System (ADS)

    Jeong, Woodon; Kang, Minji; Kim, Shinwoong; Min, Dong-Joo; Kim, Won-Ki

    2015-06-01

    Seismic full waveform inversion (FWI) has primarily been based on a least-squares optimization problem for data residuals. However, the least-squares objective function is sensitive to noise. There have been numerous studies aiming to enhance the robustness of FWI by using robust objective functions, such as l1-norm-based objective functions. However, the l1-norm can suffer from a singularity problem when the residual wavefield is very close to zero. Recently, Student's t distribution has been applied to acoustic FWI and gives reasonable results for noisy data. Student's t distribution has an overdispersed density function compared with the normal distribution and is thus useful for data with outliers. In this study, we investigate the feasibility of Student's t distribution for elastic FWI by comparing its basic properties with those of the l2-norm and l1-norm objective functions and by applying the three methods to noisy data. Our experiments show that the l2-norm is sensitive to noise, whereas the l1-norm and Student's t distribution objective functions give relatively stable and reasonable results for noisy data. When noise patterns are complicated, i.e., due to a combination of missing traces, unexpected outliers, and random noise, FWI based on Student's t distribution gives better results than l1- and l2-norm FWI. We also examine the application of simultaneous-source methods to acoustic FWI based on Student's t distribution. Computing the expectation of the coefficients of the gradient and crosstalk-noise terms and plotting the signal-to-noise ratio against iteration, we were able to confirm that crosstalk noise is suppressed as the iteration progresses, even when simultaneous-source FWI is combined with Student's t distribution. From our experiments, we conclude that FWI based on Student's t distribution can retrieve subsurface material properties with less distortion from noise than l1- and l2-norm FWI, and that it remains compatible with the simultaneous-source approach.
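
    To make the comparison concrete, a small numpy sketch of the three misfit functions follows, together with the residual weight that the Student's t objective induces in the adjoint source; the degrees of freedom ν and scale σ are assumed user-chosen hyperparameters.

        import numpy as np

        def misfits(r, nu=4.0, sigma=1.0):
            """Data misfit as a function of residual r for the three objectives.

            l2 grows quadratically (outlier-sensitive), l1 is non-smooth at
            zero residual (the singularity noted above), and the Student's t
            negative log-likelihood grows only logarithmically for large r."""
            l2 = 0.5 * r**2
            l1 = np.abs(r)
            t = 0.5 * (nu + 1.0) * np.log1p(r**2 / (nu * sigma**2))
            return l2, l1, t

        def t_weight(r, nu=4.0, sigma=1.0):
            """The Student's t adjoint source is the residual scaled by this
            smooth weight, which decays for large residuals and thus
            automatically down-weights outliers."""
            return (nu + 1.0) / (nu * sigma**2 + r**2)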

  14. Anisotropic norm-oriented mesh adaptation for a Poisson problem

    NASA Astrophysics Data System (ADS)

    Brèthes, Gautier; Dervieux, Alain

    2016-10-01

    We present a novel formulation for the mesh adaptation of the approximation of a Partial Differential Equation (PDE). The discussion is restricted to a Poisson problem. The proposed norm-oriented formulation extends the goal-oriented formulation since it is equation-based and uses an adjoint. At the same time, the norm-oriented formulation somewhat supersedes the goal-oriented one since it is basically a solution-convergent method. Indeed, goal-oriented methods rely on the reduction of the error in evaluating a chosen scalar output, with the consequence that, as mesh size is increased (more degrees of freedom), only this output is proven to tend to its continuous analog while the solution field itself may not converge. A remarkable quality of goal-oriented metric-based adaptation is the mathematical formulation of the mesh adaptation problem as the optimization, over the well-identified set of metrics, of a well-defined functional. In the newly proposed formulation, we amplify this advantage. We search, in the same well-identified set of metrics, for the minimum of a norm of the approximation error. The norm is prescribed by the user, and the method allows addressing the case of multi-objective adaptation, for example adapting the mesh for drag, lift and moment in one shot in aerodynamics. In this work, we consider the basic linear finite-element approximation and restrict our study to the L2 norm in order to enjoy second-order convergence. Numerical examples for the Poisson problem are computed.

  15. Background field removal technique based on non-regularized variable kernels sophisticated harmonic artifact reduction for phase data for quantitative susceptibility mapping.

    PubMed

    Kan, Hirohito; Arai, Nobuyuki; Takizawa, Masahiro; Omori, Kazuyoshi; Kasai, Harumasa; Kunitomo, Hiroshi; Hirose, Yasujiro; Shibamoto, Yuta

    2018-06-11

    We developed a non-regularized, variable kernel, sophisticated harmonic artifact reduction for phase data (NR-VSHARP) method to accurately estimate local tissue fields without regularization for quantitative susceptibility mapping (QSM). We then used a digital brain phantom to evaluate the accuracy of the NR-VSHARP method, and compared it with the VSHARP and iterative spherical mean value (iSMV) methods through in vivo human brain experiments. Our proposed NR-VSHARP method, which uses variable spherical mean value (SMV) kernels, minimizes L2 norms only within the volume of interest to reduce phase errors and preserve cortical information without regularization. In a numerical phantom study, relative local field and susceptibility map errors were determined using NR-VSHARP, VSHARP, and iSMV. Additionally, various background field elimination methods were used to image the human brain. In the numerical phantom study, the use of NR-VSHARP considerably reduced the relative local field and susceptibility map errors throughout a digital whole-brain phantom, compared with VSHARP and iSMV. In the in vivo experiment, the NR-VSHARP-estimated local field achieved minimal boundary losses and sufficient phase-error suppression throughout the brain. Moreover, the susceptibility map generated using NR-VSHARP minimized the occurrence of streaking artifacts caused by insufficient background field removal. Our proposed NR-VSHARP method yields minimal boundary losses and highly precise phase data. Our results suggest that this technique may facilitate high-quality QSM. Copyright © 2017. Published by Elsevier Inc.

  16. Exploring local regularities for 3D object recognition

    NASA Astrophysics Data System (ADS)

    Tian, Huaiwen; Qin, Shengfeng

    2016-11-01

    In order to find better simplicity measurements for 3D object recognition, a new set of local regularities is developed and tested in a stepwise 3D reconstruction method, including localized minimizing standard deviation of angles (L-MSDA), localized minimizing standard deviation of segment magnitudes (L-MSDSM), localized minimum standard deviation of areas of child faces (L-MSDAF), localized minimum sum of segment magnitudes of common edges (L-MSSM), and localized minimum sum of areas of child faces (L-MSAF). Based on their effectiveness measurements in terms of form and size distortions, it is found that when two local regularities, L-MSDA and L-MSDSM, are combined together, they produce better performance. In addition, the best weightings for them to work together are identified as 10% for L-MSDSM and 90% for L-MSDA. The test results show that the combined usage of L-MSDA and L-MSDSM with the identified weightings has the potential to be applied in other optimization-based 3D recognition methods to improve their efficacy and robustness.

  17. OCT despeckling via weighted nuclear norm constrained non-local low-rank representation

    NASA Astrophysics Data System (ADS)

    Tang, Chang; Zheng, Xiao; Cao, Lijuan

    2017-10-01

    As a non-invasive imaging modality, optical coherence tomography (OCT) plays an important role in medical sciences. However, OCT images are always corrupted by speckle noise, which can mask image features and pose significant challenges for medical analysis. In this work, we propose an OCT despeckling method using non-local, low-rank representation with a weighted nuclear norm constraint. Unlike previous non-local low-rank representation based OCT despeckling methods, we first generate a guidance image to improve the quality of non-local patch-group selection; then a low-rank optimization model with a weighted nuclear norm constraint is formulated to process the selected group patches. The corruption probability of each pixel is also integrated into the model as a weight to regularize the representation error term. Note that each single patch might belong to several groups; hence different estimates of each patch are aggregated to obtain its final despeckled result. Both qualitative and quantitative experimental results on real OCT images show the superior performance of the proposed method compared with other state-of-the-art speckle removal techniques.
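
    The step that distinguishes weighted from plain nuclear-norm models is the weighted singular-value shrinkage applied to each group of patches. Below is a hedged sketch in the spirit of such methods, with the inverse-magnitude weight rule and the constant C as assumptions rather than the paper's exact scheme.

        import numpy as np

        def weighted_svt(Y, C=2.0, eps=1e-6):
            """One weighted singular-value thresholding step on a patch group Y.

            Weights are inversely proportional to the singular values, so the
            dominant (structure-carrying) components are shrunk less than the
            noise-dominated ones."""
            U, s, Vt = np.linalg.svd(Y, full_matrices=False)
            w = C / (s + eps)            # larger singular value -> smaller weight
            s_shrunk = np.maximum(s - w, 0.0)
            return U @ np.diag(s_shrunk) @ Vt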

  18. Kernel Recursive Least-Squares Temporal Difference Algorithms with Sparsification and Regularization

    PubMed Central

    Zhu, Qingxin; Niu, Xinzheng

    2016-01-01

    By combining them with sparse kernel methods, least-squares temporal difference (LSTD) algorithms can construct the feature dictionary automatically and obtain better generalization ability. However, previous kernel-based LSTD algorithms do not consider regularization, and their sparsification processes are batch or offline, which hinders their widespread application in online learning problems. In this paper, we combine the following five techniques and propose two novel kernel recursive LSTD algorithms: (i) online sparsification, which can cope with unknown state regions and be used for online learning, (ii) L2 and L1 regularization, which can avoid overfitting and eliminate the influence of noise, (iii) recursive least squares, which can eliminate matrix-inversion operations and reduce computational complexity, (iv) a sliding-window approach, which can avoid caching all history samples and reduce the computational cost, and (v) fixed-point subiteration and online pruning, which make L1 regularization easy to implement. Finally, simulation results on two 50-state chain problems demonstrate the effectiveness of our algorithms. PMID:27436996
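
    A minimal sketch of ingredients (ii) and (iii), recursive least squares for LSTD(0) via the Sherman-Morrison identity, is given below; initializing P as I/λ is one common way to realize the L2 regularization. Variable names are illustrative, and the sparsification, sliding-window and L1 parts are omitted.

        import numpy as np

        def rls_td_update(P, b, phi, phi_next, reward, gamma=0.95):
            """One recursive LSTD(0) step using the Sherman-Morrison identity.

            P approximates A^{-1} with A = sum phi (phi - gamma*phi')^T and
            b accumulates sum reward*phi; the value weights are w = P @ b.
            No matrix inversion is ever performed, only rank-1 updates."""
            u = P @ phi
            v = (phi - gamma * phi_next) @ P
            P = P - np.outer(u, v) / (1.0 + v @ phi)
            b = b + reward * phi
            return P, b, P @ b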

  19. Minimum Error Bounded Efficient L1 Tracker with Occlusion Detection (PREPRINT)

    DTIC Science & Technology

    2011-01-01

    Minimum Error Bounded Efficient ℓ1 Tracker with Occlusion Detection. Xue Mei, Haibin Ling, Yi Wu, Erik Blasch, Li Bai. … The proposed BPR-L1 tracker is tested on several challenging benchmark sequences involving challenges such as occlusion and illumination changes. In all … point method depends on the value of the regularization parameter λ. In the experiments, we found that the total number of PCG iterations is a few hundred.

  20. Primal-dual convex optimization in large deformation diffeomorphic metric mapping: LDDMM meets robust regularizers

    NASA Astrophysics Data System (ADS)

    Hernandez, Monica

    2017-12-01

    This paper proposes a method for primal-dual convex optimization in variational large deformation diffeomorphic metric mapping problems formulated with robust regularizers and robust image similarity metrics. The method is based on the Chambolle-Pock primal-dual algorithm for solving general convex optimization problems. Diagonal preconditioning is used to ensure the convergence of the algorithm to the global minimum. We consider three robust regularizers likely to provide acceptable results in diffeomorphic registration: Huber, V-Huber and total generalized variation. The Huber norm is used in the image similarity term. The primal-dual equations are derived for the stationary and the non-stationary parameterizations of diffeomorphisms. The resulting algorithms have been implemented to run on the GPU using CUDA. For the most memory-consuming methods, we have developed a multi-GPU implementation. The GPU implementations allowed us to perform an exhaustive evaluation study on the NIREP and LPBA40 databases. The experiments showed that, for all the considered regularizers, the proposed method converges to diffeomorphic solutions while better preserving discontinuities at the boundaries of the objects compared to baseline diffeomorphic registration methods. In most cases, the evaluation showed a competitive performance for the robust regularizers, close to the performance of the baseline diffeomorphic registration methods.

  1. Non-Cartesian MRI Reconstruction With Automatic Regularization Via Monte-Carlo SURE

    PubMed Central

    Weller, Daniel S.; Nielsen, Jon-Fredrik; Fessler, Jeffrey A.

    2013-01-01

    Magnetic resonance image (MRI) reconstruction from undersampled k-space data requires regularization to reduce noise and aliasing artifacts. Proper application of regularization, however, requires appropriate selection of the associated regularization parameters. In this work, we develop a data-driven regularization parameter adjustment scheme that minimizes an estimate (based on the principle of Stein's unbiased risk estimate, SURE) of a suitable weighted squared-error measure in k-space. To compute this SURE-type estimate, we propose a Monte-Carlo scheme that extends our previous approach to inverse problems (e.g., MRI reconstruction) involving complex-valued images. Our approach depends only on the output of a given reconstruction algorithm and does not require knowledge of its internal workings, so it is capable of tackling a wide variety of reconstruction algorithms and nonquadratic regularizers, including total variation and those based on the ℓ1-norm. Experiments with simulated and real MR data indicate that the proposed approach is capable of providing near mean squared-error (MSE) optimal regularization parameters for single-coil undersampled non-Cartesian MRI reconstruction. PMID:23591478
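
    The black-box character of the approach rests on a Monte-Carlo divergence estimate, which needs only two evaluations of the reconstruction operator. A simplified real-valued sketch (the paper extends this to complex-valued images) might look as follows; names and the probe distribution are illustrative.

        import numpy as np

        def mc_divergence(recon, y, eps=1e-3, rng=None):
            """Monte-Carlo estimate of div_y recon(y) = tr(d recon / d y).

            recon is treated as a black box: only two evaluations are needed.
            A random +/-1 probe b gives E[b^T (f(y+eps*b) - f(y)) / eps]
            = trace of the Jacobian, the quantity the SURE risk estimate needs."""
            rng = rng or np.random.default_rng()
            b = rng.choice([-1.0, 1.0], size=y.shape)
            return np.vdot(b, recon(y + eps * b) - recon(y)) / eps

        # Usage for a denoiser f at noise level sigma on n pixels:
        # sure = np.sum((f(y) - y)**2)/n - sigma**2 \
        #        + 2*sigma**2/n * mc_divergence(f, y)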

  2. A Weighted Difference of Anisotropic and Isotropic Total Variation for Relaxed Mumford-Shah Image Segmentation

    DTIC Science & Technology

    2016-05-01

    norm does not capture the geometry completely. The L1L2 in (c) does a better job than TV, while L1 in (b) and L1−0.5L2 in (d) capture the squares most… and isotropic total variation (TV) norms into a relaxed formulation of the two-phase Mumford-Shah (MS) model for image segmentation. We show… results exceeding those obtained by the MS model when using the standard TV norm to regularize partition boundaries. In particular, examples illustrating…

  3. Image registration using stationary velocity fields parameterized by norm-minimizing Wendland kernel

    NASA Astrophysics Data System (ADS)

    Pai, Akshay; Sommer, Stefan; Sørensen, Lauge; Darkner, Sune; Sporring, Jon; Nielsen, Mads

    2015-03-01

    Interpolating kernels are crucial to solving a stationary velocity field (SVF) based image registration problem. This is because velocity fields need to be computed at non-integer locations during integration. The regularity of the solution to the SVF registration problem is controlled by the regularization term. In a variational formulation, this term is traditionally expressed as a squared norm, which is a scalar inner product of the interpolating kernels parameterizing the velocity fields. The minimization of this term using the standard spline interpolation kernels (linear or cubic) is only approximate because of the lack of a compatible norm. In this paper, we propose to replace such interpolants with a norm-minimizing interpolant, the Wendland kernel, which has the same computational simplicity as B-splines. An application to the Alzheimer's Disease Neuroimaging Initiative data showed that Wendland SVF based measures separate groups (Alzheimer's disease vs. normal controls) better than both B-spline SVFs (p < 0.05 in the amygdala) and B-spline freeform deformation (p < 0.05 in the amygdala and cortical gray matter).
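
    For reference, the compactly supported Wendland C2 kernel has a simple closed form; the sketch below is a plain implementation under the assumption of a user-chosen support radius, not the authors' registration code.

        import numpy as np

        def wendland_c2(r, support=1.0):
            """Wendland C2 kernel phi(r) = (1 - r)_+^4 (4r + 1), with r scaled
            by the support radius. Compact support makes the induced system
            matrices sparse, while the kernel still minimizes the associated
            native-space norm (unlike B-spline interpolants)."""
            q = np.asarray(r, dtype=float) / support
            return np.where(q < 1.0, (1.0 - q) ** 4 * (4.0 * q + 1.0), 0.0)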

  4. Model-Averaged ℓ1 Regularization using Markov Chain Monte Carlo Model Composition

    PubMed Central

    Fraley, Chris; Percival, Daniel

    2014-01-01

    Bayesian Model Averaging (BMA) is an effective technique for addressing model uncertainty in variable selection problems. However, current BMA approaches have computational difficulty dealing with data in which there are many more measurements (variables) than samples. This paper presents a method for combining ℓ1 regularization and Markov chain Monte Carlo model composition techniques for BMA. By treating the ℓ1 regularization path as a model space, we propose a method to resolve the model uncertainty issues that arise in model averaging from the selection of points along the solution path. We show that this method is computationally and empirically effective for regression and classification in high-dimensional datasets. We apply our technique in simulations, as well as to some applications that arise in genomics. PMID:25642001

  5. [Psychological Distress and Acceptance of Violence Legitimizing Masculinity Norms among Adolescents].

    PubMed

    Klein, Eva M; Wölfling, Klaus; Beutel, Manfred E; Dreier, Michael; Müller, Kai W

    2017-04-01

    The proportion of adolescent migrants in Germany aged 15-20 years has risen to about 29.5% in 2014, according to Federal census statistics. The purpose of the current study was to describe and compare the psychological strains of adolescent 1st- and 2nd-generation migrants with non-migrants in a representative school survey. Acceptance of violence legitimizing masculinity norms was explored and its correlation with psychological strain was analyzed. Self-reported data on psychological strain (internalizing and externalizing problems) and acceptance of violence legitimizing masculinity norms were gathered among 8,518 pupils aged 12-19 years across different school types. Among the surveyed adolescents, 27.6% reported a migration background (5.8% 1st-generation migrants; 21.8% 2nd-generation migrants). 1st-generation migrants in particular scored higher in internalizing and externalizing problems than 2nd-generation migrants or non-migrants. The differences, however, were small. Adolescents with a migration background suffered from educational disadvantage, especially 1st-generation migrants. Male adolescents reported significantly higher acceptance of violence legitimizing masculinity norms than their female counterparts. Strong agreement with the measured concept of masculinity was found among pupils of lower secondary schools and among adolescents who reported regular tobacco and cannabis consumption. The acceptance of violence legitimizing masculinity norms was greater among migrants, particularly 1st-generation migrants, than non-migrants. Overall, high acceptance of violence legitimizing masculinity norms was related to externalizing problems, which can be understood as dysfunctional coping mechanisms in the face of social disadvantage and a lack of prospects. © Georg Thieme Verlag KG Stuttgart · New York.

  6. Characterization of the L4-L5-S1 motion segment using the stepwise reduction method.

    PubMed

    Jaramillo, Héctor Enrique; Puttlitz, Christian M; McGilvray, Kirk; García, José J

    2016-05-03

    The two aims of this study were to generate data for a more accurate calibration of finite element models including the L5-S1 segment, and to find mechanical differences between the L4-L5 and L5-S1 segments. To this end, the range of motion (ROM) and facet forces for the L4-S1 segment were measured using the stepwise reduction method. This consists of sequentially testing and reducing each segment in nine stages by cutting the ligaments, cutting the facet capsules, and removing the nucleus. Five L4-S1 human segments (median: 65 years, range: 53-84 years, SD = 11.0 years) were loaded under a maximum pure moment of 8 Nm. The ROM was measured using stereo-photogrammetry via tracking of three markers, and the facet contact forces (CF) were measured using a Tekscan system. The ROM for the L4-L5 segment and all stages showed good agreement with published data. The major differences in ROM between the L4-L5 and L5-S1 segments were found for lateral bending and all stages, for which the L4-L5 ROM was about 1.5-3 times higher than that of the L5-S1 segment, consistent with L5-S1 facet CF about 1.3 to 4 times higher than those measured for the L4-L5 segment. For the other movements and a few stages, the L4-L5 ROM was significantly lower than that of the L5-S1 segment. ROM and CF provide important baseline data for more accurate calibration of FE models and for understanding the role that these structures play in lower lumbar spine mechanics. Copyright © 2016 Elsevier Ltd. All rights reserved.

  7. Perceptual dehumanization of faces is activated by norm violations and facilitates norm enforcement.

    PubMed

    Fincher, Katrina M; Tetlock, Philip E

    2016-02-01

    This article uses methods drawn from perceptual psychology to answer a basic social psychological question: Do people process the faces of norm violators differently from those of others, and, if so, what is the functional significance? Seven studies suggest that people process these faces differently and that the differential processing makes it easier to punish norm violators. Studies 1 and 2 use a recognition-recall paradigm that manipulated facial inversion and spatial frequency to show that people rely upon face-typical processing less when they perceive norm violators' faces. Study 3 uses a facial composite task to demonstrate that the effect is actor-dependent, not action-dependent, and to suggest that configural processing is the mechanism of perceptual change. Studies 4 and 5 use offset faces to show that configural processing is only attenuated when they belong to perpetrators who are culpable. Studies 6 and 7 show that people find it easier to punish inverted faces and harder to punish faces displayed in low spatial frequency. Taken together, these data suggest a bidirectional flow of causality between lower-order perceptual and higher-order cognitive processes in norm enforcement. PsycINFO Database Record (c) 2016 APA, all rights reserved.

  8. Regularization of soft-X-ray imaging in the DIII-D tokamak

    DOE PAGES

    Wingen, A.; Shafer, M. W.; Unterberg, E. A.; ...

    2015-03-02

    We developed an image inversion scheme for the soft X-ray imaging system (SXRIS) diagnostic at the DIII-D tokamak in order to obtain the local soft X-ray emission at a poloidal cross-section from the spatially line-integrated image taken by the SXRIS camera. The scheme uses the Tikhonov regularization method since the inversion problem is generally ill-posed. The regularization technique uses the generalized singular value decomposition to determine a solution that depends on a free regularization parameter. The latter has to be chosen carefully, and the so-called L-curve method to find the optimum regularization parameter is outlined. A representative test image is used to study the properties of the inversion scheme with respect to inversion accuracy, amount/strength of regularization, image noise and image resolution. Moreover, the optimum inversion parameters are identified, while the L-curve method successfully computes the optimum regularization parameter. Noise is found to be the most limiting issue, but sufficient regularization is still possible at noise-to-signal ratios up to 10%-15%. Finally, the inversion scheme is applied to measured SXRIS data and the line-integrated SXRIS image is successfully inverted.
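
    For the zero-order case, the Tikhonov solution has an explicit SVD filter-factor form that makes the role of the regularization parameter transparent; the following is a generic numpy sketch, not the SXRIS implementation.

        import numpy as np

        def tikhonov_svd(A, b, gam):
            """Zero-order Tikhonov solution min ||Ax - b||^2 + gam^2 ||x||^2
            written with SVD filter factors f_i = s_i^2 / (s_i^2 + gam^2).

            Small singular values (the ill-posed directions) are damped,
            large ones are passed through almost unchanged."""
            U, s, Vt = np.linalg.svd(A, full_matrices=False)
            f = s**2 / (s**2 + gam**2)
            return Vt.T @ (f * (U.T @ b) / s)

        # Scanning gam and plotting log||A x - b|| against log||x|| traces the
        # L-curve; its corner (maximum curvature) gives the optimum parameter.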

  9. Joint image registration and fusion method with a gradient strength regularization

    NASA Astrophysics Data System (ADS)

    Lidong, Huang; Wei, Zhao; Jun, Wang

    2015-05-01

    Image registration is an essential process for image fusion, and fusion performance can be used to evaluate registration accuracy. We propose a maximum likelihood (ML) approach to joint image registration and fusion instead of treating them as two independent processes in the conventional way. To improve the visual quality of a fused image, a gradient strength (GS) regularization is introduced in the cost function of ML. The GS of the fused image is controllable by setting the target GS value in the regularization term. This is useful because a larger target GS yields a clearer fused image, while a smaller target GS makes the fused image smoother and thus restrains noise. Hence, the subjective quality of the fused image can be improved whether or not the source images are polluted by noise. We can obtain the fused image and registration parameters successively by minimizing the cost function using an iterative optimization method. Experimental results show that our method is effective with translation, rotation, and scale parameters in the ranges of [-2.0, 2.0] pixels, [-1.1 deg, 1.1 deg], and [0.95, 1.05], respectively, and with noise variances smaller than 300. It is also demonstrated that our method yields a more visually pleasing fused image and higher registration accuracy compared with a state-of-the-art algorithm.

  10. Semi-automated brain tumor segmentation on multi-parametric MRI using regularized non-negative matrix factorization.

    PubMed

    Sauwen, Nicolas; Acou, Marjan; Sima, Diana M; Veraart, Jelle; Maes, Frederik; Himmelreich, Uwe; Achten, Eric; Huffel, Sabine Van

    2017-05-04

    Segmentation of gliomas in multi-parametric (MP-)MR images is challenging due to their heterogeneous nature in terms of size, appearance and location. Manual tumor segmentation is a time-consuming task and clinical practice would benefit from (semi-)automated segmentation of the different tumor compartments. We present a semi-automated framework for brain tumor segmentation based on non-negative matrix factorization (NMF) that does not require prior training of the method. L1-regularization is incorporated into the NMF objective function to promote spatial consistency and sparseness of the tissue abundance maps. The pathological sources are initialized through user-defined voxel selection. Knowledge about the spatial location of the selected voxels is combined with tissue adjacency constraints in a post-processing step to enhance segmentation quality. The method is applied to an MP-MRI dataset of 21 high-grade glioma patients, including conventional, perfusion-weighted and diffusion-weighted MRI. To assess the effect of using MP-MRI data and the L1-regularization term, analyses are also run using only conventional MRI and without L1-regularization. Robustness against user input variability is verified by considering the statistical distribution of the segmentation results when repeatedly analyzing each patient's dataset with a different set of random seeding points. Using L1-regularized semi-automated NMF segmentation, mean Dice-scores of 65%, 74% and 80% are found for active tumor, the tumor core and the whole tumor region, respectively. Mean Hausdorff distances of 6.1 mm, 7.4 mm and 8.2 mm are found for the same three regions. Lower Dice-scores and higher Hausdorff distances are found without L1-regularization and when only considering conventional MRI data. Based on the mean Dice-scores and Hausdorff distances, segmentation results are competitive with the state of the art in the literature. Robust results were found for most patients.
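
    One common way to realize an L1 penalty inside NMF is to add it to the denominator of the multiplicative update for the abundance matrix; the sketch below shows this standard variant and is only indicative of the idea, since the paper's exact objective and solver may differ.

        import numpy as np

        def sparse_nmf(V, rank, lam=0.1, n_iter=200, rng=None):
            """NMF with an l1 penalty on the abundance matrix H:
            min ||V - W H||_F^2 + lam * sum(H), subject to W, H >= 0.

            Multiplicative updates; lam in the denominator of the H update
            drives small abundances toward zero (sparser tissue maps)."""
            rng = rng or np.random.default_rng(0)
            m, n = V.shape
            W = rng.random((m, rank)) + 1e-3
            H = rng.random((rank, n)) + 1e-3
            for _ in range(n_iter):
                H *= (W.T @ V) / (W.T @ W @ H + lam + 1e-12)
                W *= (V @ H.T) / (W @ H @ H.T + 1e-12)
            return W, H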

  11. Numerical Analysis of an H¹-Galerkin Mixed Finite Element Method for Time Fractional Telegraph Equation

    PubMed Central

    Wang, Jinfeng; Zhao, Meng; Zhang, Min; Liu, Yang; Li, Hong

    2014-01-01

    We discuss and analyze an H¹-Galerkin mixed finite element (H¹-GMFE) method to look for the numerical solution of the time fractional telegraph equation. We introduce an auxiliary variable to reduce the original equation into lower-order coupled equations and then formulate an H¹-GMFE scheme with two important variables. We discretize the Caputo time fractional derivatives using finite difference methods and approximate the spatial direction by applying the H¹-GMFE method. Based on the discussion of the theoretical error analysis in the L²-norm for the scalar unknown and its gradient in the one-dimensional case, we obtain the optimal order of convergence in the space-time direction. Further, we also derive the optimal error results for the scalar unknown in the H¹-norm. Moreover, we derive and analyze the stability of the H¹-GMFE scheme and give a priori error estimates in two- and three-dimensional cases. In order to verify our theoretical analysis, we give some numerical results computed with Matlab. PMID:25184148
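
    The finite-difference treatment of the Caputo derivative is typically the so-called L1 scheme; the sketch below implements that standard discretization for order α ∈ (0, 1) and is an assumption insofar as the entry does not name the specific scheme used.

        import numpy as np
        from math import gamma as Gamma

        def caputo_l1(u, dt, alpha):
            """L1 finite-difference approximation of the Caputo derivative
            of order alpha in (0, 1) at every grid point t_n:

                D^alpha u(t_n) ~ dt^(-alpha) / Gamma(2 - alpha) *
                                 sum_{j=0}^{n-1} b_j (u[n-j] - u[n-j-1]),

            with weights b_j = (j+1)^(1-alpha) - j^(1-alpha)."""
            n_pts = len(u)
            j = np.arange(n_pts)
            b = (j + 1.0) ** (1.0 - alpha) - j ** (1.0 - alpha)
            c = dt ** (-alpha) / Gamma(2.0 - alpha)
            du = np.diff(u)  # u[k+1] - u[k]
            out = np.zeros(n_pts)
            for n in range(1, n_pts):
                out[n] = c * np.sum(b[:n] * du[n - 1 :: -1])
            return out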

  12. Social Norms Information Enhances the Efficacy of an Appearance-based Sun Protection Intervention

    PubMed Central

    Kulik, James A; Butler, Heather; Gerrard, Meg; Gibbons, Frederick X; Mahler, Heike

    2008-01-01

    This experiment examined whether the efficacy of an appearance-based sun protection intervention could be enhanced by the addition of social norms information. Southern California college students (N=125, predominantly female) were randomly assigned either to an appearance-based sun protection intervention, which consisted of a photograph depicting underlying sun damage to their skin (UV photo) and information about photoaging, or to a control condition. Those assigned to the intervention were further randomized to receive information about what one should do to prevent photoaging (injunctive norms information), information about the number of their peers who currently use regular sun protection (descriptive norms information), both injunctive and descriptive norms information, or neither type of norms information. The results demonstrated that those who received the UV photo/photoaging information intervention expressed greater sun protection intentions and subsequently reported greater sun protection behaviors than did controls. Further, the addition of both injunctive and descriptive norms information increased self-reported sun protection behaviors during the subsequent month. PMID:18448221

  13. Are social norms associated with smoking in French university students? A survey report on smoking correlates

    PubMed Central

    Riou França, Lionel; Dautzenberg, Bertrand; Falissard, Bruno; Reynaud, Michel

    2009-01-01

    Background Knowledge of the correlates of smoking is a first step to successful prevention interventions. The social norms theory hypothesises that students' smoking behaviour is linked to their perception of norms for use of tobacco. This study was designed to test the theory that smoking is associated with perceived norms, controlling for other correlates of smoking. Methods In a pencil-and-paper questionnaire, 721 second-year students in sociology, medicine, foreign language or nursing studies estimated the number of cigarettes usually smoked in a month. 31 additional covariates were included as potential predictors of tobacco use. Multiple imputation was used to deal with missing values among covariates. The strength of the association of each variable with tobacco use was quantified by the inclusion frequencies of the variable in 1000 bootstrap sample backward selections. Being a smoker and the number of cigarettes smoked by smokers were modelled separately. Results We retain 8 variables to predict the risk of smoking and 6 to predict the quantities smoked by smokers. The risk of being a smoker is increased by cannabis use, binge drinking, being unsupportive of smoke-free universities, perceived friends' approval of regular smoking, positive perceptions about tobacco, a high perceived prevalence of smoking among friends, reporting not being disturbed by people smoking in the university, and being female. The quantity of cigarettes smoked by smokers is greater for smokers reporting never being disturbed by smoke in the university, being unsupportive of smoke-free universities, perceiving that their friends approve of regular smoking, having more negative beliefs about the tobacco industry, being sociology students and being among the older students. Conclusion Other substance use, injunctive norms (friends' approval) and descriptive norms (friends' smoking prevalence) are associated with tobacco use. University-based prevention campaigns should take multiple correlates of smoking into account.

  14. Regularity theory for general stable operators

    NASA Astrophysics Data System (ADS)

    Ros-Oton, Xavier; Serra, Joaquim

    2016-06-01

    We establish sharp regularity estimates for solutions to Lu = f in Ω ⊂ R^n, L being the generator of any stable and symmetric Lévy process. Such nonlocal operators L depend on a finite measure on S^{n-1}, called the spectral measure. First, we study the interior regularity of solutions to Lu = f in B_1. We prove that if f is C^α then u belongs to C^{α+2s} whenever α + 2s is not an integer. In case f ∈ L^∞, we show that the solution u is C^{2s} when s ≠ 1/2, and C^{2s-ε} for all ε > 0 when s = 1/2. Then, we study the boundary regularity of solutions to Lu = f in Ω, u = 0 in R^n \ Ω, in C^{1,1} domains Ω. We show that solutions u satisfy u/d^s ∈ C^{s-ε}(Ω̄) for all ε > 0, where d is the distance to ∂Ω. Finally, we show that our results are sharp by constructing two counterexamples.

  15. Graph Laplacian Regularization for Image Denoising: Analysis in the Continuous Domain.

    PubMed

    Pang, Jiahao; Cheung, Gene

    2017-04-01

    Inverse imaging problems are inherently underdetermined, and hence it is important to employ appropriate image priors for regularization. One recently popular prior, the graph Laplacian regularizer, assumes that the target pixel patch is smooth with respect to an appropriately chosen graph. However, the mechanisms and implications of imposing the graph Laplacian regularizer on the original inverse problem are not well understood. To address this, in this paper we interpret neighborhood graphs of pixel patches as discrete counterparts of Riemannian manifolds and perform analysis in the continuous domain, providing insights into several fundamental aspects of graph Laplacian regularization for image denoising. Specifically, we first show the convergence of the graph Laplacian regularizer to a continuous-domain functional, integrating a norm measured in a locally adaptive metric space. Focusing on image denoising, we derive an optimal metric space assuming non-local self-similarity of pixel patches, leading to an optimal graph Laplacian regularizer for denoising in the discrete domain. We then interpret graph Laplacian regularization as an anisotropic diffusion scheme to explain its behavior during iterations, e.g., its tendency to promote piecewise smooth signals under certain settings. To verify our analysis, an iterative image denoising algorithm is developed. Experimental results show that our algorithm performs competitively with state-of-the-art denoising methods, such as BM3D for natural images, and outperforms them significantly for piecewise smooth images.
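
    Computationally, imposing the graph Laplacian regularizer on a denoising problem reduces to a sparse linear solve. A toy numpy sketch, with illustrative Gaussian edge weights over pixel coordinates or features, might look as follows.

        import numpy as np

        def graph_laplacian_denoise(y, coords, mu=0.5, sigma=0.1):
            """Denoise a small patch by solving min ||x - y||^2 + mu * x^T L x,
            i.e. the linear system (I + mu*L) x = y, on a fully connected
            graph whose edge weights decay with distance between pixel
            coordinates/features (coords: one row per pixel)."""
            n = len(y)
            d2 = np.sum((coords[:, None, :] - coords[None, :, :]) ** 2, axis=-1)
            W = np.exp(-d2 / (2.0 * sigma**2))
            np.fill_diagonal(W, 0.0)
            L = np.diag(W.sum(axis=1)) - W   # combinatorial graph Laplacian
            return np.linalg.solve(np.eye(n) + mu * L, y)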

  16. SU-E-J-67: Evaluation of Breathing Patterns for Respiratory-Gated Radiation Therapy Using Respiration Regularity Index

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cheong, K; Lee, M; Kang, S

    2014-06-01

    Purpose: Despite the importance of accurately estimating the respiration regularity of a patient in motion-compensated treatment, an effective and simply applicable method has rarely been reported. The authors propose a simple respiration regularity index based on parameters derived from a correspondingly simplified respiration model. Methods: In order to simplify a patient's breathing pattern while preserving the data's intrinsic properties, we defined a respiration model of power-of-cosine form with a baseline drift. According to this respiration formula, breathing-pattern fluctuation could be explained using four factors: the sample standard deviation of the respiration period, the sample standard deviation of the amplitude, and the results of a simple regression of the baseline drift (slope and standard deviation of residuals) of a respiration signal. Overall irregularity (δ) was defined as the Euclidean norm of a newly derived variable obtained using principal component analysis (PCA) of the four fluctuation parameters. Finally, the proposed respiration regularity index was defined as ρ = ln(1 + 1/δ)/2, a higher ρ indicating a more regular breathing pattern. Subsequently, we applied it to simulated and clinical respiration signals from real-time position management (RPM; Varian Medical Systems, Palo Alto, CA) and investigated respiration regularity. Moreover, correlations between the regularity of the first session and the remaining fractions were investigated using Pearson's correlation coefficient. Results: The respiration regularity was determined based on ρ; patients with ρ < 0.3 showed worse regularity than the others, whereas ρ > 0.7 was suitable for respiratory-gated radiation therapy (RGRT). Fluctuations in breathing cycle and amplitude were especially determinative of ρ. If the respiration regularity of a patient's first session was known, it could be estimated through subsequent sessions. Conclusions: Respiration regularity could be objectively determined using the proposed index ρ.
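
    A rough numpy rendering of the index, with the standardization and PCA weighting details filled in by assumption since the abstract does not fully specify them, could read:

        import numpy as np

        def regularity_index(params):
            """params: rows = sessions, columns = the four fluctuation factors
            (SD of period, SD of amplitude, baseline slope, SD of residuals).

            Standardize, rotate onto principal axes, take the Euclidean norm
            as overall irregularity delta, then rho = ln(1 + 1/delta) / 2;
            higher rho means a more regular breathing pattern."""
            X = np.asarray(params, dtype=float)
            Z = (X - X.mean(axis=0)) / X.std(axis=0)
            _, _, Vt = np.linalg.svd(Z, full_matrices=False)  # PCA directions
            scores = Z @ Vt.T
            delta = np.linalg.norm(scores, axis=1)
            return np.log(1.0 + 1.0 / delta) / 2.0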

  17. Constrained Low-Rank Learning Using Least Squares-Based Regularization.

    PubMed

    Li, Ping; Yu, Jun; Wang, Meng; Zhang, Luming; Cai, Deng; Li, Xuelong

    2017-12-01

    Low-rank learning has attracted much attention recently due to its efficacy in a rich variety of real-world tasks, e.g., subspace segmentation and image categorization. Most low-rank methods are incapable of capturing a low-dimensional subspace for supervised learning tasks, e.g., classification and regression. This paper aims to learn both the discriminant low-rank representation (LRR) and the robust projecting subspace in a supervised manner. To achieve this goal, we cast the problem into a constrained rank minimization framework by adopting least squares regularization. Naturally, the data label structure tends to resemble that of the corresponding low-dimensional representation, which is derived from the robust subspace projection of clean data by low-rank learning. Moreover, the low-dimensional representation of the original data can be paired with some informative structure by imposing an appropriate constraint, e.g., a Laplacian regularizer. Therefore, we propose a novel constrained LRR method. The objective function is formulated as a constrained nuclear norm minimization problem, which can be solved by the inexact augmented Lagrange multiplier algorithm. Extensive experiments on image classification, human pose estimation, and robust face recovery have confirmed the superiority of our method.

  18. Bilinear Factor Matrix Norm Minimization for Robust PCA: Algorithms and Applications.

    PubMed

    Shang, Fanhua; Cheng, James; Liu, Yuanyuan; Luo, Zhi-Quan; Lin, Zhouchen

    2017-09-04

    The heavy-tailed distributions of corrupted outliers and singular values of all channels in low-level vision have proven to be effective priors for many applications, such as background modeling, photometric stereo and image alignment, and they can be well modeled by a hyper-Laplacian. However, the use of such distributions generally leads to challenging non-convex, non-smooth and non-Lipschitz problems, and makes existing algorithms very slow for large-scale applications. Together with the analytic solutions to Lp-norm minimization for two specific values of p, i.e., p=1/2 and p=2/3, we propose two novel bilinear factor matrix norm minimization models for robust principal component analysis. We first define the double nuclear norm and Frobenius/nuclear hybrid norm penalties, and then prove that they are in essence the Schatten-1/2 and 2/3 quasi-norms, respectively, which lead to much more tractable and scalable Lipschitz optimization problems. Our experimental analysis shows that both our methods yield more accurate solutions than original Schatten quasi-norm minimization, even when the number of observations is very limited. Finally, we apply our penalties to various low-level vision problems, e.g., moving object detection, image alignment and inpainting, and show that our methods usually outperform the state-of-the-art methods.

  19. Contests versus Norms: Implications of Contest-Based and Norm-Based Intervention Techniques

    PubMed Central

    Bergquist, Magnus; Nilsson, Andreas; Hansla, André

    2017-01-01

    Interventions using either contests or norms can promote environmental behavioral change. Yet research on the implications of contest-based and norm-based interventions is lacking. Based on Goal-framing theory, we suggest that a contest-based intervention frames a gain goal promoting intensive but instrumental behavioral engagement. In contrast, the norm-based intervention was expected to frame a normative goal activating normative obligations for targeted and non-targeted behavior and motivation to engage in pro-environmental behaviors in the future. In two studies participants (n = 347) were randomly assigned to either a contest- or a norm-based intervention technique. Participants in the contest showed more intensive engagement in both studies. Participants in the norm-based intervention tended to report higher intentions for future energy conservation (Study 1) and higher personal norms for non-targeted pro-environmental behaviors (Study 2). These findings suggest that contest-based intervention technique frames a gain goal, while norm-based intervention frames a normative goal. PMID:29218026

  1. Phase retrieval using regularization method in intensity correlation imaging

    NASA Astrophysics Data System (ADS)

    Li, Xiyu; Gao, Xin; Tang, Jia; Lu, Changming; Wang, Jianli; Wang, Bin

    2014-11-01

    The intensity correlation imaging (ICI) method can obtain high-resolution images with ground-based, low-precision mirrors; in the imaging process, a phase retrieval algorithm must be used to reconstruct the object's image. But the algorithms now used (such as the hybrid input-output algorithm) are sensitive to noise and prone to stagnation, and the signal-to-noise ratio of intensity interferometry is low, especially when imaging astronomical objects. In this paper, we build the mathematical model of phase retrieval and simplify it into a constrained optimization problem for a multi-dimensional function. A new error function was designed from the noise distribution and prior information using the regularization method. The simulation results show that the regularization method can improve the performance of the phase retrieval algorithm and obtain better images, especially in low-SNR conditions.
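
    For context, one iteration of the hybrid input-output (HIO) baseline that the regularized error function aims to improve upon can be sketched as follows, assuming a real non-negative object with a known support mask.

        import numpy as np

        def hio_step(g, measured_mag, support, beta=0.9):
            """One hybrid input-output (HIO) iteration for phase retrieval.

            g: current real-space estimate; measured_mag: measured |FFT| data;
            support: boolean mask of pixels allowed to contain the object."""
            G = np.fft.fft2(g)
            G = measured_mag * np.exp(1j * np.angle(G))   # impose magnitudes
            g_new = np.real(np.fft.ifft2(G))
            ok = support & (g_new >= 0)                   # constraint-satisfying pixels
            return np.where(ok, g_new, g - beta * g_new)  # HIO feedback elsewhere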

  2. Low-dose cerebral perfusion computed tomography image restoration via low-rank and total variation regularizations

    PubMed Central

    Niu, Shanzhou; Zhang, Shanli; Huang, Jing; Bian, Zhaoying; Chen, Wufan; Yu, Gaohang; Liang, Zhengrong; Ma, Jianhua

    2016-01-01

    Cerebral perfusion X-ray computed tomography (PCT) is an important functional imaging modality for evaluating cerebrovascular diseases and has been widely used in clinics over the past decades. However, due to the protocol of PCT imaging with repeated dynamic sequential scans, the associated radiation dose unavoidably increases as compared with that used in conventional CT examinations. Minimizing the radiation exposure in PCT examination is a major task in the CT field. In this paper, considering the rich similarity redundancy information among enhanced sequential PCT images, we propose a low-dose PCT image restoration model that exploits the low-rank and sparse matrix characteristics of sequential PCT images. Specifically, the sequential PCT images were first stacked into a matrix (i.e., a low-rank matrix), and then a non-convex spectral norm/regularization and a spatio-temporal total variation norm/regularization were built on the low-rank matrix to describe the low-rankness and sparsity of the sequential PCT images, respectively. Subsequently, an improved split Bregman method was adopted to minimize the associated objective function with a reasonable convergence rate. Both qualitative and quantitative studies were conducted using a digital phantom and clinical cerebral PCT datasets to evaluate the present method. Experimental results show that the presented method can achieve images with several noticeable advantages over the existing methods in terms of noise reduction and universal quality index. More importantly, the present method can produce more accurate kinetic enhanced details and diagnostic hemodynamic parameter maps. PMID:27440948

  3. Injunctive Norms and Alcohol Consumption: A Revised Conceptualization

    PubMed Central

    Krieger, Heather; Neighbors, Clayton; Lewis, Melissa A.; LaBrie, Joseph W.; Foster, Dawn W.; Larimer, Mary E.

    2016-01-01

    Background Injunctive norms have been found to be important predictors of behaviors in many disciplines with the exception of alcohol research. This exception is likely due to a misconceptualization of injunctive norms for alcohol consumption. To address this, we outline and test a new conceptualization of injunctive norms and personal approval for alcohol consumption. Traditionally, injunctive norms have been assessed using Likert scale ratings of approval perceptions, whereas descriptive norms and individual behaviors are typically measured with behavioral estimates (i.e., number of drinks consumed per week, frequency of drinking, etc.). This makes comparisons between these constructs difficult because they are not similar conceptualizations of drinking behaviors. The present research evaluated a new representation of injunctive norms with anchors comparable to descriptive norms measures. Methods A study and a replication were conducted including 2,559 and 1,189 undergraduate students from three different universities. Participants reported on their alcohol-related consumption behaviors, personal approval of drinking, and descriptive and injunctive norms. Personal approval and injunctive norms were measured using both traditional measures and a new drink-based measure. Results Results from both studies indicated that drink-based injunctive norms were uniquely and positively associated with drinking whereas traditionally assessed injunctive norms were negatively associated with drinking. Analyses also revealed significant unique associations between drink-based injunctive norms and personal approval when controlling for descriptive norms. Conclusions These findings provide support for a modified conceptualization of personal approval and injunctive norms related to alcohol consumption and, importantly, offers an explanation and practical solution for the small and inconsistent findings related to injunctive norms and drinking in past studies. PMID:27030295

  4. Regularized maximum pure-state input-output fidelity of a quantum channel

    NASA Astrophysics Data System (ADS)

    Ernst, Moritz F.; Klesse, Rochus

    2017-12-01

    As a toy model for the capacity problem in quantum information theory we investigate finite and asymptotic regularizations of the maximum pure-state input-output fidelity F(N) of a general quantum channel N. We show that the asymptotic regularization F̃(N) is lower bounded by the maximum output ∞-norm ν∞(N) of the channel. For N being a Pauli channel, we find that both quantities are equal.

  5. Weighted low-rank sparse model via nuclear norm minimization for bearing fault detection

    NASA Astrophysics Data System (ADS)

    Du, Zhaohui; Chen, Xuefeng; Zhang, Han; Yang, Boyuan; Zhai, Zhi; Yan, Ruqiang

    2017-07-01

    It is a fundamental task in the machine fault diagnosis community to detect impulsive signatures generated by the localized faults of bearings. The main goal of this paper is to exploit the low-rank physical structure of periodic impulsive features and further establish a weighted low-rank sparse model for bearing fault detection. The proposed model mainly consists of three basic components: an adaptive partition window, a nuclear norm regularization and a weighted sequence. Firstly, due to the periodic repetition mechanism of the impulsive feature, an adaptive partition window can be designed to transform the impulsive feature into a data matrix. The highlight of the partition window is that it accumulates all local feature information and aligns it. Then, all columns of the data matrix share similar waveforms and a core physical phenomenon arises, i.e., the singular values of the data matrix demonstrate a sparse distribution pattern. Therefore, a nuclear norm regularization is enforced to capture that sparse prior. However, the nuclear norm regularization treats all singular values equally and thus ignores the basic fact that larger singular values carry more of the impulsive feature's information and should be preserved as much as possible. Therefore, a weighted sequence with adaptively tuned weights inversely proportional to singular amplitude is adopted to guarantee the distribution consistency of large singular values. On the other hand, the proposed model is difficult to solve due to its non-convexity, and thus a new algorithm is developed to search for a satisfactory stationary solution by alternately implementing a proximal operator step and least-squares fitting. Moreover, the sensitivity analysis and selection principles of algorithmic parameters are comprehensively investigated through a set of numerical experiments, which shows that the proposed method is robust and only has a few adjustable parameters. Lastly, the proposed model is applied to the analysis of measured bearing vibration signals.
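
    The adaptive partition window is essentially a period-synchronous reshaping of the vibration signal; a minimal sketch, assuming the fault period in samples has already been estimated, is given below.

        import numpy as np

        def partition_window(signal, period):
            """Stack consecutive period-length segments of a vibration signal
            as columns of a matrix. If the localized-fault impulses repeat
            with this period, the columns share similar waveforms and the
            matrix is approximately low rank (sparse singular-value spectrum)."""
            n_seg = len(signal) // period
            return np.reshape(signal[: n_seg * period], (n_seg, period)).T

        # A quick check of the low-rank structure:
        # s = np.linalg.svd(partition_window(x, est_period), compute_uv=False)
        # a few dominant singular values indicate well-aligned periodic impulses.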

  6. On the Global Regularity of a Helical-Decimated Version of the 3D Navier-Stokes Equations

    NASA Astrophysics Data System (ADS)

    Biferale, Luca; Titi, Edriss S.

    2013-06-01

    We study the global regularity, for all time and all initial data in H^{1/2}, of a recently introduced decimated version of the incompressible 3D Navier-Stokes (dNS) equations. The model is based on a projection of the dynamical evolution of the Navier-Stokes (NS) equations onto the subspace where helicity (the L^2 scalar product of velocity and vorticity) is sign-definite. The presence of a second (besides energy) sign-definite inviscid conserved quadratic quantity, which is equivalent to the H^{1/2} Sobolev norm, allows us to demonstrate global existence and uniqueness of space-periodic solutions, together with continuity with respect to the initial conditions, for this decimated 3D model. This is achieved thanks to the establishment of two new estimates for this 3D model, which show that the H^{1/2} norm and the time average of the square of the H^{3/2} norm of the velocity field remain finite. Such two additional bounds are known, in the spirit of the work of H. Fujita and T. Kato (Arch. Ration. Mech. Anal. 16:269-315, 1964; Rend. Semin. Mat. Univ. Padova 32:243-260, 1962), to be sufficient for showing well-posedness for the 3D NS equations. Furthermore, they are directly linked to the helicity evolution for the dNS model, and therefore have a clear physical meaning and consequences.

  7. From Norm Adoption to Norm Internalization

    NASA Astrophysics Data System (ADS)

    Conte, Rosaria; Andrighetto, Giulia; Villatoro, Daniel

    In this presentation, advances in modeling the mental dynamics of norms are presented. In particular, the process from norm adoption, possibly yielding new normative goals, to different forms of norm compliance is focused upon, including norm internalization, which has long been studied in the social-behavioral sciences and moral philosophy. Of late, the debate has been revived within the rationality approach, pointing to the role of norm internalization as a less costly and more reliable enforcement system than social control. So far, little attention has been paid to the mental underpinnings of internalization. In this presentation, a rich cognitive model of different types, degrees and factors of internalization is shown, together with an initial implementation of this model on EMIL-A, a normative agent architecture.

  8. Impact of Norm Perceptions and Guilt on Audience Response to Anti-Smoking Norm PSAs: The Case of Korean Male Smokers

    ERIC Educational Resources Information Center

    Lee, Hyegyu; Paek, Hye-Jin

    2013-01-01

    Objective: To examine how norm appeals and guilt influence smokers' behavioural intention. Design: Quasi-experimental design. Setting: South Korea. Method: Two hundred and fifty-five male smokers were randomly assigned to descriptive, injunctive, or subjective anti-smoking norm messages. After they viewed the norm messages, their norm perceptions,…

  9. Competitive testing of health behavior theories: how do benefits, barriers, subjective norm, and intention influence mammography behavior?

    PubMed Central

    Murphy, Caitlin C.; Vernon, Sally W.; Diamond, Pamela M.; Tiro, Jasmin A.

    2013-01-01

    Background Competitive hypothesis testing may explain differences in predictive power across multiple health behavior theories. Purpose We tested competing hypotheses of the Health Belief Model (HBM) and Theory of Reasoned Action (TRA) to quantify pathways linking subjective norm, benefits, barriers, intention, and mammography behavior. Methods We analyzed longitudinal surveys of women veterans randomized to the control group of a mammography intervention trial (n=704). We compared direct, partial mediation, and full mediation models with Satorra-Bentler χ2 difference testing. Results Barriers had a direct and indirect negative effect on mammography behavior; intention only partially mediated barriers. Benefits had little to no effect on behavior and intention; however, they were negatively correlated with barriers. Subjective norm directly affected behavior and indirectly affected intention through barriers. Conclusions Our results provide empirical support for different assertions of HBM and TRA. Future interventions should test whether building subjective norm and reducing negative attitudes increases regular mammography. PMID:23868613

  10. 29 CFR 778.209 - Method of inclusion of bonus in regular rate.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 29 Labor 3 2012-07-01 2012-07-01 false Method of inclusion of bonus in regular rate. 778.209 Section 778.209 Labor Regulations Relating to Labor (Continued) WAGE AND HOUR DIVISION, DEPARTMENT OF... COMPENSATION Payments That May Be Excluded From the "Regular Rate" Bonuses § 778.209 Method of inclusion of...

  11. 29 CFR 778.209 - Method of inclusion of bonus in regular rate.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 29 Labor 3 2013-07-01 2013-07-01 false Method of inclusion of bonus in regular rate. 778.209 Section 778.209 Labor Regulations Relating to Labor (Continued) WAGE AND HOUR DIVISION, DEPARTMENT OF... COMPENSATION Payments That May Be Excluded From the "Regular Rate" Bonuses § 778.209 Method of inclusion of...

  12. 29 CFR 778.209 - Method of inclusion of bonus in regular rate.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 29 Labor 3 2014-07-01 2014-07-01 false Method of inclusion of bonus in regular rate. 778.209 Section 778.209 Labor Regulations Relating to Labor (Continued) WAGE AND HOUR DIVISION, DEPARTMENT OF... COMPENSATION Payments That May Be Excluded From the "Regular Rate" Bonuses § 778.209 Method of inclusion of...

  13. 29 CFR 778.209 - Method of inclusion of bonus in regular rate.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 29 Labor 3 2011-07-01 2011-07-01 false Method of inclusion of bonus in regular rate. 778.209 Section 778.209 Labor Regulations Relating to Labor (Continued) WAGE AND HOUR DIVISION, DEPARTMENT OF... COMPENSATION Payments That May Be Excluded From the "Regular Rate" Bonuses § 778.209 Method of inclusion of...

  14. 29 CFR 778.209 - Method of inclusion of bonus in regular rate.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 29 Labor 3 2010-07-01 2010-07-01 false Method of inclusion of bonus in regular rate. 778.209 Section 778.209 Labor Regulations Relating to Labor (Continued) WAGE AND HOUR DIVISION, DEPARTMENT OF... COMPENSATION Payments That May Be Excluded From the "Regular Rate" Bonuses § 778.209 Method of inclusion of...

  15. Computer-Delivered Social Norm Message Increases Pain Tolerance

    PubMed Central

    Pulvers, Kim; Schroeder, Jacquelyn; Limas, Eleuterio F.; Zhu, Shu-Hong

    2013-01-01

    Background Few experimental studies have been conducted on social determinants of pain tolerance. Purpose This study tests a brief, computer-delivered social norm message for increasing pain tolerance. Methods Healthy young adults (N=260; 44 % Caucasian; 27 % Hispanic) were randomly assigned into a 2 (social norm)×2 (challenge) cold pressor study, stratified by gender. They received standard instructions or standard instructions plus a message that contained artificially elevated information about typical performance of others. Results Those receiving a social norm message displayed significantly higher pain tolerance, F(1, 255)=26.95, p<.001, ηp²=.10, and pain threshold, F(1, 244)=9.81, p=.002, ηp²=.04, but comparable pain intensity, p>.05. There were no interactions between condition and gender on any outcome variables, p>.05. Conclusions Social norms can significantly increase pain tolerance, even with a brief verbal message delivered by a video. PMID:24146086

  16. Analysis of the iteratively regularized Gauss-Newton method under a heuristic rule

    NASA Astrophysics Data System (ADS)

    Jin, Qinian; Wang, Wei

    2018-03-01

    The iteratively regularized Gauss-Newton method is one of the most prominent regularization methods for solving nonlinear ill-posed inverse problems when the data is corrupted by noise. In order to produce a useful approximate solution, this iterative method should be terminated properly. The existing a priori and a posteriori stopping rules require accurate information on the noise level, which may not be available or reliable in practical applications. In this paper we propose a heuristic selection rule for this regularization method, which requires no information on the noise level. By imposing certain conditions on the noise, we derive a posteriori error estimates on the approximate solutions under various source conditions. Furthermore, we establish a convergence result without using any source condition. Numerical results are presented to illustrate the performance of our heuristic selection rule.
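    As a concrete, hedged illustration of the update rule behind this class of methods (not of the paper's heuristic stopping rule, which is defined through its own error estimators), the sketch below applies the classical IRGN iteration x_{k+1} = x_k + (J_k^T J_k + α_k I)^{-1} (J_k^T (y − F(x_k)) + α_k (x_0 − x_k)) with a geometrically decaying α_k to a toy nonlinear problem; F, the data, and the fixed iteration count are stand-ins.

```python
# Minimal IRGN sketch on the toy problem F(x) = (x1^2 + x2, x2^3).
import numpy as np

def F(x):
    return np.array([x[0]**2 + x[1], x[1]**3])

def J(x):  # Jacobian of F
    return np.array([[2*x[0], 1.0],
                     [0.0,    3*x[1]**2]])

rng = np.random.default_rng(0)
x_true = np.array([1.0, 2.0])
y = F(x_true) + 1e-3 * rng.standard_normal(2)    # noisy data

x0 = np.array([0.5, 1.0])                        # a priori guess
x, a = x0.copy(), 1.0
for k in range(15):
    Jk = J(x)
    lhs = Jk.T @ Jk + a * np.eye(2)
    rhs = Jk.T @ (y - F(x)) + a * (x0 - x)
    x = x + np.linalg.solve(lhs, rhs)            # regularized Gauss-Newton step
    a *= 0.5                                     # alpha_k = 2^{-k}
print("recovered:", x, "true:", x_true)
```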

  17. A New Expanded Mixed Element Method for Convection-Dominated Sobolev Equation

    PubMed Central

    Wang, Jinfeng; Li, Hong; Fang, Zhichao

    2014-01-01

    We propose and analyze a new expanded mixed element method, whose gradient belongs to the simple square integrable space instead of the classical H(div; Ω) space of Chen's expanded mixed element method. We study the new expanded mixed element method for the convection-dominated Sobolev equation, prove the existence and uniqueness of the finite element solution, and introduce a new expanded mixed projection. We derive the optimal a priori error estimates in the L2-norm for the scalar unknown u and a priori error estimates in the (L2)²-norm for its gradient λ and its flux σ. Moreover, we obtain the optimal a priori error estimates in the H1-norm for the scalar unknown u. Finally, we present numerical results illustrating the efficiency of the new method. PMID:24701153

  18. Regularity estimates up to the boundary for elliptic systems of difference equations

    NASA Technical Reports Server (NTRS)

    Strikwerda, J. C.; Wade, B. A.; Bube, K. P.

    1986-01-01

    Regularity estimates up to the boundary for solutions of elliptic systems of finite difference equations were proved. The regularity estimates, obtained for boundary fitted coordinate systems on domains with smooth boundary, involve discrete Sobolev norms and are proved using pseudo-difference operators to treat systems with variable coefficients. The elliptic systems of difference equations and the boundary conditions which are considered are very general in form. The regularity of a regular elliptic system of difference equations was proved equivalent to the nonexistence of eigensolutions. The regularity estimates obtained are analogous to those in the theory of elliptic systems of partial differential equations, and to the results of Gustafsson, Kreiss, and Sundstrom (1972) and others for hyperbolic difference equations.

  19. Health-related quality of life measured using the EQ-5D-5L: South Australian population norms.

    PubMed

    McCaffrey, Nikki; Kaambwa, Billingsley; Currow, David C; Ratcliffe, Julie

    2016-09-20

    Although a five level version of the widely-used EuroQol 5 dimensions (EQ-5D) instrument has been developed, population norms are not yet available for Australia to inform the future valuation of health in economic evaluations. The aim of this study was to estimate HrQOL normative values for the EQ-5D-5L preference-based measure in a large, randomly selected, community sample in South Australia. The EQ-5D-5L instrument was included in the 2013 South Australian Health Omnibus Survey, an interviewer-administered, face-to-face, cross-sectional survey. Respondents rated their level of impairment across dimensions (mobility, self-care, usual activities, pain/discomfort, and anxiety/depression) and global health rating on a visual analogue scale (EQ-VAS). Utility scores were derived using the newly-developed UK general population-based algorithm and relationships between utility and EQ-VAS scores and socio-demographic factors were also explored using multivariate regression analyses. Ultimately, 2,908 adults participated in the survey (63.4 % participation rate). The mean utility and EQ-VAS scores were 0.91 (95 % CI 0.90, 0.91) and 78.55 (95 % CI 77.95, 79.15), respectively. Almost half of respondents reported no problems across all dimensions (42.8 %), whereas only 7.2 % rated their health >90 on the EQ-VAS (100 = the best health you can imagine). Younger age, male gender, longer duration of education, higher annual household income, employment and marriage/de facto relationships were all independent, statistically significant predictors of better health status (p < 0.01) measured with the EQ-VAS. Only age and employment status were associated with higher utility scores, indicating fundamental differences between these measures of health status. This is the first Australian study to apply the EQ-5D-5L in a large, community sample. Overall, findings are consistent with EQ-5D-5L utility and VAS scores reported for other countries and indicate that the majority

  20. Sparse Coding and Counting for Robust Visual Tracking

    PubMed Central

    Liu, Risheng; Wang, Jing; Shang, Xiaoke; Wang, Yiyang; Su, Zhixun; Cai, Yu

    2016-01-01

    In this paper, we propose a novel sparse coding and counting method under a Bayesian framework for visual tracking. In contrast to existing methods, the proposed method employs a combination of the L0 and L1 norms to regularize the linear coefficients of an incrementally updated linear basis. The sparsity constraint enables the tracker to effectively handle difficult challenges, such as occlusion or image corruption. To achieve real-time processing, we propose a fast and efficient numerical algorithm for solving the proposed model. Although it is an NP-hard problem, the proposed accelerated proximal gradient (APG) approach is guaranteed to converge to a solution quickly. In addition, we provide a closed-form solution of the combined L0 and L1 regularized representation to obtain better sparsity. Experimental results on challenging video sequences demonstrate that the proposed method achieves state-of-the-art results both in accuracy and speed. PMID:27992474
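    The combined L0+L1 penalty does admit a closed-form, coordinate-wise proximal operator, which makes an APG (FISTA-style) scheme straightforward. The sketch below is a hedged illustration of that scheme on a toy dictionary; the paper's incrementally updated tracking basis, weights, and Bayesian counting machinery are not reproduced.

```python
# FISTA-style APG for  min_x 0.5||y - D x||^2 + l0*||x||_0 + l1*||x||_1.
import numpy as np

def prox_l0l1(v, l0, l1):
    """Exact coordinate-wise prox of l0*||.||_0 + l1*||.||_1."""
    s = np.sign(v) * np.maximum(np.abs(v) - l1, 0.0)       # soft threshold
    keep = 0.5*(v - s)**2 + l1*np.abs(s) + l0 < 0.5*v**2   # pay l0 or zero out
    return np.where(keep, s, 0.0)

rng = np.random.default_rng(1)
D = rng.standard_normal((64, 128))                 # toy dictionary
x_true = np.zeros(128); x_true[[3, 40, 77]] = [1.5, -2.0, 1.0]
y = D @ x_true + 0.01*rng.standard_normal(64)

L = np.linalg.norm(D, 2)**2                        # Lipschitz constant of gradient
x = z = np.zeros(128); t = 1.0
for _ in range(300):
    g = D.T @ (D @ z - y)
    x_new = prox_l0l1(z - g/L, l0=0.02/L, l1=0.05/L)
    t_new = (1 + np.sqrt(1 + 4*t*t)) / 2
    z = x_new + ((t - 1)/t_new) * (x_new - x)      # momentum step
    x, t = x_new, t_new
print("recovered support:", np.nonzero(x)[0])
```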

  1. Norms and stigma around unintended pregnancy in Alabama: Associations with recent contraceptive use and dual method use among young women.

    PubMed

    Rice, Whitney S; Turan, Bulent; White, Kari; Turan, Janet M

    2017-12-14

    The role of unintended pregnancy norms and stigma in contraceptive use among young women is understudied. This study investigated relationships between anticipated reactions from others, perceived stigma, and endorsed stigma concerning unintended pregnancy, with any and dual contraceptive use in this population. From November 2014 to October 2015, young women aged 18-24 years (n = 390) and at risk for unintended pregnancy and sexually transmitted infections participated in a survey at a university and public health clinics in Alabama. Multivariable regression models examined associations of unintended pregnancy norms and stigma with contraceptive use, adjusted for demographic and psychosocial characteristics. Compared to nonusers, users of any method and of dual methods were more likely to be White, nulliparous, recruited from the university, and of higher income. In adjusted models, anticipated disapproval of unintended pregnancy by close others was associated with greater contraceptive use (adjusted Odds Ratio [aOR] = 1.54, 95 percent confidence interval [CI] = 1.03-2.30), and endorsement of stigma concerning unintended pregnancy was associated with lower odds of dual method use (aOR = 0.71, 95 percent CI = 0.51-1.00). Unintended pregnancy norms and stigma were associated with contraceptive behavior among young women in Alabama. Findings suggest the potential to promote effective contraceptive use in this population by leveraging close relationships and addressing endorsed stigma.

  2. Reconstructing Norms

    ERIC Educational Resources Information Center

    Gorgorio, Nuria; Planas, Nuria

    2005-01-01

    Starting from the constructs "cultural scripts" and "social representations", and on the basis of the empirical research we have been developing until now, we revisit the construct norms from a sociocultural perspective. Norms, both sociomathematical norms and norms of the mathematical practice, as cultural scripts influenced…

  3. Water Residence Time estimation by 1D deconvolution in the form of a l2 -regularized inverse problem with smoothness, positivity and causality constraints

    NASA Astrophysics Data System (ADS)

    Meresescu, Alina G.; Kowalski, Matthieu; Schmidt, Frédéric; Landais, François

    2018-06-01

    The Water Residence Time distribution is the equivalent of the impulse response of a linear system allowing the propagation of water through a medium, e.g. the propagation of rain water from the top of the mountain towards the aquifers. We consider the output aquifer levels as the convolution between the input rain levels and the Water Residence Time, starting with an initial aquifer base level. The estimation of Water Residence Time is important for a better understanding of hydro-bio-geochemical processes and mixing properties of wetlands used as filters in ecological applications, as well as protecting fresh water sources for wells from pollutants. Common methods of estimating the Water Residence Time focus on cross-correlation, parameter fitting and non-parametric deconvolution methods. Here we propose a 1D full-deconvolution, regularized, non-parametric inverse problem algorithm that enforces smoothness and uses constraints of causality and positivity to estimate the Water Residence Time curve. Compared to Bayesian non-parametric deconvolution approaches, it has a fast runtime per test case; compared to the popular and fast cross-correlation method, it produces a more precise Water Residence Time curve even in the case of noisy measurements. The algorithm needs only one regularization parameter to balance between smoothness of the Water Residence Time and accuracy of the reconstruction. We propose an approach on how to automatically find a suitable value of the regularization parameter from the input data only. Tests on real data illustrate the potential of this method to analyze hydrological datasets.
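    A minimal sketch of the core idea, with all signals and the regularization weight as synthetic stand-ins (the authors' algorithm and automatic parameter selection are more elaborate): recover a causal, positive impulse response h from rain r and aquifer level a = conv(r, h) + noise, enforcing smoothness with a second-difference penalty and positivity via non-negative least squares. Causality is built into the lower-triangular convolution matrix.

```python
import numpy as np
from scipy.linalg import toeplitz
from scipy.optimize import nnls

rng = np.random.default_rng(2)
n, m = 200, 60
r = rng.random(n)                                  # rain input
h_true = np.exp(-np.arange(m)/10.0); h_true /= h_true.sum()
a = np.convolve(r, h_true)[:n] + 0.01*rng.standard_normal(n)

# Causal convolution: a = A h with A a lower-triangular Toeplitz matrix in r.
A = toeplitz(r, np.zeros(m))
D2 = np.diff(np.eye(m), 2, axis=0)                 # second-difference operator
lam = 1.0                                          # smoothness weight (ad hoc here)

# Stack data fit and smoothness penalty; nnls enforces h >= 0.
A_aug = np.vstack([A, np.sqrt(lam)*D2])
b_aug = np.concatenate([a, np.zeros(m - 2)])
h_est, _ = nnls(A_aug, b_aug)
print("relative error:", np.linalg.norm(h_est - h_true)/np.linalg.norm(h_true))
```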

  4. Dutch taboo norms.

    PubMed

    Roest, Sander A; Visser, Tessa A; Zeelenberg, René

    2018-04-01

    This article provides norms for general taboo, personal taboo, insult, valence, and arousal for 672 Dutch words, including 202 taboo words. Norms were collected using a 7-point Likert scale and based on ratings by psychology students from the Erasmus University Rotterdam in The Netherlands. The sample consisted of 87 psychology students (58 females, 29 males). We obtained high reliability based on split-half analyses. Our norms show high correlations with arousal and valence ratings collected by another Dutch word-norms study (Moors et al., Behavior Research Methods, 45, 169-177, 2013). Our results show that the previously found quadratic relation (i.e., U-shaped pattern) between valence and arousal also holds when only taboo words are considered. Additionally, words rated high on taboo tended to be rated low on valence, but some words related to sex rated high on both taboo and valence. Words rated high on taboo also rated high on insult, again with the exception of words related to sex, many of which rated low on insult. Finally, words rated high on taboo and insult rated high on arousal. The Dutch Taboo Norms (DTN) database is a useful tool for researchers interested in the effects of taboo words on cognitive processing. The data associated with this paper can be accessed via the Open Science Framework ( https://osf.io/vk782/ ).

  5. Image degradation characteristics and restoration based on regularization for diffractive imaging

    NASA Astrophysics Data System (ADS)

    Zhi, Xiyang; Jiang, Shikai; Zhang, Wei; Wang, Dawei; Li, Yun

    2017-11-01

    The diffractive membrane optical imaging system is an important development trend for ultra-large-aperture, lightweight space cameras. However, physics-based diffractive imaging degradation characteristics and the corresponding image restoration methods have been less studied. In this paper, the model of image quality degradation for the diffractive imaging system is first deduced mathematically from diffraction theory, and the degradation characteristics are then analyzed. On this basis, a novel regularization model of image restoration that contains multiple prior constraints is established. We then present an approach for solving the resulting equation, in which multiple norms coexist and multiple regularization (prior) parameters must be determined. Subsequently, a space-variant PSF image restoration method for large-aperture diffractive imaging systems is proposed, combined with a blockwise treatment of isoplanatic regions. Experimentally, the proposed algorithm demonstrates its capacity to achieve multi-objective improvement, including MTF enhancement, dispersion correction, noise and artifact suppression, and detail preservation, and produces satisfactory visual quality. This can provide a scientific basis for applications and has potential application prospects in future space applications of diffractive membrane imaging technology.

  6. A test of the perceived norms model to explain drinking patterns among university student athletes.

    PubMed

    Thombs, D L

    2000-09-01

    The author tested the ability of perceived drinking norms to discriminate among drinking patterns in a sample of National Collegiate Athletic Association (NCAA) Division I student athletes. He used an anonymous questionnaire to assess 297 athletes, representing 18 teams, at a public university in the Midwest. Alcohol use patterns showed considerable variation, with many athletes (37.1%) abstaining during their season of competition. A discriminant function analysis revealed that higher levels of alcohol involvement are disproportionately found among athletes who began drinking regularly at an early age. Perceived drinking norms were less important in the discrimination of student athlete drinker groups. Women and those with higher grade point averages were somewhat more likely to refrain from in-season drinking than other survey respondents.

  7. The importance of being fractional in mixing: optimal choice of the index s in H-s norm

    NASA Astrophysics Data System (ADS)

    Vermach, Lukas; Caulfield, C. P.

    2015-11-01

    A natural measure of homogeneity of a mixture is the variance of the concentration field, which in the case of a zero-mean field is the L2-norm. Mathew et al. (Physica D, 2005) introduced a new multi-scale measure to quantify mixing referred to as the mix-norm, which is equivalent to the H^{-1/2} norm, the Sobolev norm of negative fractional index. Unlike the L2-norm, the mix-norm is not conserved by the advection equation and thus captures mixing even in non-diffusive systems. Furthermore, the mix-norm is consistent with the ergodic definition of mixing, and Lin et al. (JFM, 2011) showed that this property extends to any norm from the class H^{-s}, s > 0. We consider a zero-mean passive scalar field organised into two layers of different concentrations advected by a flow field in a torus. We solve two non-linear optimisation problems. We identify the optimal initial perturbation of the velocity field with given initial energy, as well as the optimal forcing with given total action (the time integral of the kinetic energy of the flow), which both yield maximal mixing by a target time horizon. We analyse the sensitivity of the results with respect to variation of s and thus address the importance of the choice of the fractional index. This work was supported by the UK Engineering and Physical Sciences Research Council (EPSRC) grant EP/H023348/1 for the University of Cambridge Centre for Doctoral Training, the Cambridge Centre for Analysis.
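    For concreteness, the H^{-s} norm of a discrete zero-mean field is easy to evaluate with the FFT; a standard discrete analogue weights each Fourier coefficient by (1 + |k|²)^{-s} (the exact normalization used in the paper may differ). The two-layer striped field below is a toy example; s = 1/2 corresponds to the mix-norm and s = 0 reduces to the plain L2 (rms) norm.

```python
import numpy as np

def h_minus_s_norm(c, s):
    """Discrete Sobolev H^{-s} norm of a zero-mean field on the periodic torus."""
    n = c.shape[0]
    k = np.fft.fftfreq(n, d=1.0/n)                  # integer wavenumbers
    kx, ky = np.meshgrid(k, k, indexing="ij")
    c_hat = np.fft.fft2(c) / c.size                 # normalized Fourier coefficients
    weight = (1.0 + kx**2 + ky**2) ** (-s)
    return np.sqrt(np.sum(weight * np.abs(c_hat)**2))

n = 128
x = np.linspace(0, 2*np.pi, n, endpoint=False)
c = np.where(np.sin(x)[:, None] > 0, 1.0, -1.0) * np.ones((n, n))  # two layers
c -= c.mean()                                       # enforce zero mean
for s in (0.0, 0.5, 1.0):
    print(f"s = {s}: {h_minus_s_norm(c, s):.4f}")
```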

  8. A regularization method for extrapolation of solar potential magnetic fields

    NASA Technical Reports Server (NTRS)

    Gary, G. A.; Musielak, Z. E.

    1992-01-01

    The mathematical basis of a Tikhonov regularization method for extrapolating the chromospheric-coronal magnetic field using photospheric vector magnetograms is discussed. The basic techniques show that the Cauchy initial value problem can be formulated for potential magnetic fields. The potential field analysis considers a set of linear, elliptic partial differential equations. It is found that, by introducing an appropriate smoothing of the initial data of the Cauchy potential problem, an approximate Fourier integral solution is found, and an upper bound to the error in the solution is derived. This specific regularization technique, which is a function of magnetograph measurement sensitivities, provides a method to extrapolate the potential magnetic field above an active region into the chromosphere and low corona.
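    A loose toy illustration of why the smoothing matters (not the authors' full scheme, and with a drastically simplified geometry): continuing boundary data along the exponentially growing Fourier branch e^{+|k|z} amplifies measurement noise, while a Gaussian low-pass of the boundary data, in the spirit of the regularized Cauchy problem, suppresses the blow-up. The boundary field, noise level, height z, and filter width sigma are all stand-ins.

```python
import numpy as np

n, z = 256, 0.05
x = np.linspace(0, 2*np.pi, n, endpoint=False)
k = np.fft.fftfreq(n, d=1.0/n)                    # integer wavenumbers

f_clean = np.sin(x) + 0.3*np.sin(3*x)             # "photospheric" boundary data
f_noisy = f_clean + 1e-3*np.random.default_rng(3).standard_normal(n)

def continue_up(f, z, sigma=0.0):
    """Continue data along the growing harmonic branch e^{|k|z},
    after a Gaussian low-pass of width sigma (sigma=0: no regularization)."""
    f_hat = np.fft.fft(f) * np.exp(-(sigma*k)**2 / 2)
    return np.fft.ifft(f_hat * np.exp(np.abs(k)*z)).real

ref = continue_up(f_clean, z)                     # noise-free continuation
for sigma in (0.0, 0.05):
    err = np.max(np.abs(continue_up(f_noisy, z, sigma) - ref))
    print(f"sigma = {sigma}: max error {err:.3e}")
```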

  9. Generalized Bregman distances and convergence rates for non-convex regularization methods

    NASA Astrophysics Data System (ADS)

    Grasmair, Markus

    2010-11-01

    We generalize the notion of Bregman distance using concepts from abstract convexity in order to derive convergence rates for Tikhonov regularization with non-convex regularization terms. In particular, we study the non-convex regularization of linear operator equations on Hilbert spaces, showing that the conditions required for the application of the convergence rates results are strongly related to the standard range conditions from the convex case. Moreover, we consider the setting of sparse regularization, where we show that a rate of order δ^{1/p} holds, if the regularization term has a slightly faster growth at zero than |t|^p.

  10. Reduction of speckle noise from optical coherence tomography images using multi-frame weighted nuclear norm minimization method

    NASA Astrophysics Data System (ADS)

    Thapa, Damber; Raahemifar, Kaamran; Lakshminarayanan, Vasudevan

    2015-12-01

    In this paper, we propose a speckle noise reduction method for spectral-domain optical coherence tomography (SD-OCT) images called multi-frame weighted nuclear norm minimization (MWNNM). This method is a direct extension of weighted nuclear norm minimization (WNNM) in the multi-frame framework since an adequately denoised image could not be achieved with single-frame denoising methods. The MWNNM method exploits multiple B-scans collected from a small area of a SD-OCT volumetric image, and then denoises and averages them together to obtain a high signal-to-noise ratio B-scan. The results show that the image quality metrics obtained by denoising and averaging only five nearby B-scans with MWNNM method is considerably better than those of the average image obtained by registering and averaging 40 azimuthally repeated B-scans.
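    The core building block of (weighted) nuclear norm minimization can be sketched in a few lines: stack similar patches or B-scans as matrix columns and shrink the singular values, larger ones less, as in WNNM. The weights, patch grouping, and full multi-frame pipeline of the paper are not reproduced; the constants below are ad hoc.

```python
import numpy as np

def weighted_svt(Y, c=10.0, eps=1e-6):
    """One weighted singular-value thresholding step (WNNM-style weights)."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    w = c / (s + eps)                     # big singular value -> small shrinkage
    s_shrunk = np.maximum(s - w, 0.0)
    return (U * s_shrunk) @ Vt

rng = np.random.default_rng(4)
clean = np.outer(rng.standard_normal(64), rng.standard_normal(5))  # rank-1 "B-scans"
noisy = clean + 0.3*rng.standard_normal(clean.shape)
den = weighted_svt(noisy)
print("noisy error   :", np.linalg.norm(noisy - clean))
print("denoised error:", np.linalg.norm(den - clean))
```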

  11. The Laplace method for probability measures in Banach spaces

    NASA Astrophysics Data System (ADS)

    Piterbarg, V. I.; Fatalov, V. R.

    1995-12-01

    Contents
    §1. Introduction
    Chapter I. Asymptotic analysis of continual integrals in Banach space, depending on a large parameter
    §2. The large deviation principle and logarithmic asymptotics of continual integrals
    §3. Exact asymptotics of Gaussian integrals in Banach spaces: the Laplace method
    3.1. The Laplace method for Gaussian integrals taken over the whole Hilbert space: isolated minimum points ([167], I)
    3.2. The Laplace method for Gaussian integrals in Hilbert space: the manifold of minimum points ([167], II)
    3.3. The Laplace method for Gaussian integrals in Banach space ([90], [174], [176])
    3.4. Exact asymptotics of large deviations of Gaussian norms
    §4. The Laplace method for distributions of sums of independent random elements with values in Banach space
    4.1. The case of a non-degenerate minimum point ([137], I)
    4.2. A degenerate isolated minimum point and the manifold of minimum points ([137], II)
    §5. Further examples
    5.1. The Laplace method for the local time functional of a Markov symmetric process ([217])
    5.2. The Laplace method for diffusion processes, a finite number of non-degenerate minimum points ([116])
    5.3. Asymptotics of large deviations for Brownian motion in the Hölder norm
    5.4. Non-asymptotic expansion of a strong stable law in Hilbert space ([41])
    Chapter II. The double sum method - a version of the Laplace method in the space of continuous functions
    §6. Pickands' method of double sums
    6.1. General situations
    6.2. Asymptotics of the distribution of the maximum of a Gaussian stationary process
    6.3. Asymptotics of the probability of a large excursion of a Gaussian non-stationary process
    §7. Probabilities of large deviations of trajectories of Gaussian fields
    7.1. Homogeneous fields and fields with constant dispersion
    7.2. Finitely many maximum points of dispersion
    7.3. Manifold of maximum points of dispersion
    7.4. Asymptotics of distributions of maxima of Wiener fields
    §8. Exact asymptotics of large deviations of the norm of Gaussian

  12. A generalized Condat's algorithm of 1D total variation regularization

    NASA Astrophysics Data System (ADS)

    Makovetskii, Artyom; Voronin, Sergei; Kober, Vitaly

    2017-09-01

    A common way of solving the denoising problem is to utilize total variation (TV) regularization. Many efficient numerical algorithms have been developed for solving the TV regularization problem. Condat described a fast direct algorithm to compute the processed 1D signal. There also exists a direct linear-time algorithm for 1D TV denoising, referred to as the taut string algorithm. Condat's algorithm is based on a problem dual to the 1D TV regularization. In this paper, we propose a variant of Condat's algorithm based on the direct 1D TV regularization problem. Using Condat's algorithm together with the taut string approach leads to a clear geometric description of the extremal function. Computer simulation results are provided to illustrate the performance of the proposed algorithm for the restoration of degraded signals.
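    To make the model concrete, here is not Condat's direct O(n) method but a compact Chambolle-style projected-gradient solver for the same dual problem the abstract mentions, for min_x 0.5·||x − y||² + lam·Σ_i |x_{i+1} − x_i|; the signal and lam are toy choices.

```python
import numpy as np

def Dt(p):
    """Adjoint of the forward difference D, where (Dx)_i = x_{i+1} - x_i."""
    return np.concatenate(([-p[0]], p[:-1] - p[1:], [p[-1]]))

def tv1d_denoise(y, lam, iters=2000):
    p = np.zeros(len(y) - 1)                   # dual variable, constrained to |p_i| <= 1
    for _ in range(iters):
        x = y - lam * Dt(p)                    # primal iterate recovered from the dual
        p = np.clip(p + np.diff(x)/(4.0*lam), -1.0, 1.0)   # projected gradient step
    return y - lam * Dt(p)

rng = np.random.default_rng(5)
steps = np.repeat([0.0, 1.0, -0.5], 100)       # piecewise-constant signal
y = steps + 0.1*rng.standard_normal(steps.size)
x = tv1d_denoise(y, lam=0.5)
print("TV of noisy input :", np.abs(np.diff(y)).sum())
print("TV of TV-denoised :", np.abs(np.diff(x)).sum())
```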

  13. A regularized vortex-particle mesh method for large eddy simulation

    NASA Astrophysics Data System (ADS)

    Spietz, H. J.; Walther, J. H.; Hejlesen, M. M.

    2017-11-01

    We present recent developments of the remeshed vortex particle-mesh method for simulating incompressible fluid flow. The presented method relies on a parallel higher-order FFT-based solver for the Poisson equation. Arbitrarily high order is achieved through regularization of singular Green's function solutions to the Poisson equation, and recently we have derived novel high-order solutions for a mixture of open and periodic domains. With this approach the simulated variables may formally be viewed as the approximate solution to the filtered Navier-Stokes equations; hence we use the method for Large Eddy Simulation by including a dynamic subfilter-scale model based on test-filters compatible with the aforementioned regularization functions. Further, the subfilter-scale model uses Lagrangian averaging, which is a natural candidate in light of the Lagrangian nature of vortex particle methods. A multiresolution variation of the method is applied to simulate the benchmark problem of the flow past a square cylinder at Re = 22000 and the obtained results are compared to results from the literature.

  14. Regularization of the double period method for experimental data processing

    NASA Astrophysics Data System (ADS)

    Belov, A. A.; Kalitkin, N. N.

    2017-11-01

    In physical and technical applications, an important task is to process experimental curves measured with large errors. Such problems are solved by applying regularization methods, in which success depends on the mathematician's intuition. We propose an approximation based on the double period method developed for smooth nonperiodic functions. Tikhonov's stabilizer with a squared second derivative is used for regularization. As a result, the spurious oscillations are suppressed and the shape of an experimental curve is accurately represented. This approach offers a universal strategy for solving a broad class of problems. The method is illustrated by approximating cross sections of nuclear reactions important for controlled thermonuclear fusion. Tables recommended as reference data are obtained. These results are used to calculate the reaction rates, which are approximated in a way convenient for gasdynamic codes. These approximations are superior to previously known formulas in the covered temperature range and accuracy.
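    The stabilizer named in the abstract has a simple discrete form that is worth seeing once: fit a smooth curve u to noisy samples f by minimizing ||u − f||² + lam·||u''||², i.e., solving (I + lam·D2ᵀD2) u = f. The double-period construction itself is not reproduced; the data and lam below are stand-ins.

```python
import numpy as np

rng = np.random.default_rng(6)
t = np.linspace(0, 1, 200)
f = np.exp(-5*t) + 0.05*rng.standard_normal(t.size)   # noisy "cross section"

n = t.size
D2 = np.diff(np.eye(n), 2, axis=0)                    # discrete second derivative
lam = 1e2                                             # Tikhonov weight (ad hoc)
u = np.linalg.solve(np.eye(n) + lam * D2.T @ D2, f)   # smoothed curve
print("residual :", np.linalg.norm(u - f))
print("roughness:", np.linalg.norm(D2 @ u))
```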

  15. English Language Learners' Nonword Repetition Performance: The Influence of Age, L2 Vocabulary Size, Length of L2 Exposure, and L1 Phonology.

    PubMed

    Duncan, Tamara Sorenson; Paradis, Johanne

    2016-02-01

    This study examined individual differences in English language learners' (ELLs) nonword repetition (NWR) accuracy, focusing on the effects of age, English vocabulary size, length of exposure to English, and first-language (L1) phonology. Participants were 75 typically developing ELLs (mean age 5;8 [years;months]) whose exposure to English began on average at age 4;4. Children spoke either a Chinese language or South Asian language as an L1 and were given English standardized tests for NWR and receptive vocabulary. Although the majority of ELLs scored within or above the monolingual normal range (71%), 29% scored below. Mixed logistic regression modeling revealed that a larger English vocabulary, longer English exposure, South Asian L1, and older age all had significant and positive effects on ELLs' NWR accuracy. Error analyses revealed the following L1 effect: onset consonants were produced more accurately than codas overall, but this effect was stronger for the Chinese group whose L1s have a more limited coda inventory compared with English. ELLs' NWR performance is influenced by a number of factors. Consideration of these factors is important in deciding whether monolingual norm referencing is appropriate for ELL children.

  16. Normative misperceptions of tobacco use among university students in seven European countries: baseline findings of the 'Social Norms Intervention for the prevention of Polydrug usE' study.

    PubMed

    Pischke, Claudia R; Helmer, Stefanie M; McAlaney, John; Bewick, Bridgette M; Vriesacker, Bart; Van Hal, Guido; Mikolajczyk, Rafael T; Akvardar, Yildiz; Guillen-Grima, Francisco; Salonna, Ferdinand; Orosova, Olga; Dohrmann, Solveig; Dempsey, Robert C; Zeeb, Hajo

    2015-12-01

    Research conducted in North America suggests that students tend to overestimate tobacco use among their peers. This perceived norm may impact personal tobacco use. It remains unclear how these perceptions influence tobacco use among European students. The two aims were to investigate possible self-other discrepancies regarding personal use and attitudes towards use and to evaluate if perceptions of peer use and peer approval of use are associated with personal use and approval of tobacco use. The EU-funded 'Social Norms Intervention for the prevention of Polydrug usE' study was conducted in Belgium, Denmark, Germany, Slovak Republic, Spain, Turkey and United Kingdom. In total, 4482 students (71% female) answered an online survey including questions on personal and perceived tobacco use and personal and perceived attitudes towards tobacco use. Across all countries, the majority of students perceived tobacco use of their peers to be higher than their own use. The perception that the majority (>50%) of peers used tobacco regularly in the past two months was significantly associated with higher odds for personal regular use (OR: 2.66, 95% CI: 1.90-3.73). The perception that the majority of peers approve of tobacco use was significantly associated with higher odds for personal approval of tobacco use (OR: 6.49, 95% CI: 4.54-9.28). Perceived norms are an important predictor of personal tobacco use and attitudes towards use. Interventions addressing perceived norms may be a viable method to change attitudes and tobacco use among European students, and may be a component of future tobacco control policy. Copyright © 2015 Elsevier Ltd. All rights reserved.

  17. Sparse Adaptive Iteratively-Weighted Thresholding Algorithm (SAITA) for Lp-Regularization Using the Multiple Sub-Dictionary Representation

    PubMed Central

    Zhang, Jie; Fan, Shangang; Xiong, Jian; Cheng, Xiefeng; Sari, Hikmet; Adachi, Fumiyuki

    2017-01-01

    Both L1/2 and L2/3 are typical non-convex regularizations of Lp (0 < p < 1), which can be employed to obtain a sparser solution than the L1 regularization. Recently, the multiple-state sparse transformation strategy has been developed to exploit the sparsity in L1 regularization for sparse signal recovery, which combines the iterative reweighted algorithms. To further exploit the sparse structure of signal and image, this paper adopts multiple-dictionary sparse transform strategies for the two typical cases p∈{1/2, 2/3} based on an iterative Lp thresholding algorithm and then proposes a sparse adaptive iteratively-weighted Lp thresholding algorithm (SAITA). Moreover, a simple yet effective regularization parameter is proposed to weight each sub-dictionary-based Lp regularizer. Simulation results have shown that the proposed SAITA not only performs better than the corresponding L1 algorithms but can also obtain a better recovery performance and achieve faster convergence than the conventional single-dictionary sparse transform-based Lp case. Moreover, we conduct some applications on sparse image recovery and obtain good results by comparison with related work. PMID:29244777
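    The iterative-thresholding skeleton underlying such algorithms is shown below. SAITA replaces the threshold operator with Lp (p = 1/2, 2/3) operators applied per sub-dictionary with adaptive weights; for a safe, self-contained illustration we instantiate the familiar L1 soft-threshold instead, and the problem sizes are toy choices.

```python
import numpy as np

def soft(v, t):
    """L1 soft-threshold; SAITA would swap in an Lp thresholding operator here."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def iterative_thresholding(A, y, lam, iters=500, threshold=soft):
    L = np.linalg.norm(A, 2)**2                   # gradient Lipschitz constant
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = threshold(x - A.T @ (A @ x - y)/L, lam/L)
    return x

rng = np.random.default_rng(7)
A = rng.standard_normal((50, 200))
x0 = np.zeros(200); x0[[10, 90, 150]] = [2.0, -1.5, 1.0]
y = A @ x0 + 0.01*rng.standard_normal(50)
x = iterative_thresholding(A, y, lam=0.1)
print("recovered support:", np.nonzero(np.round(x, 2))[0])
```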

  18. Fast Algorithms for Earth Mover’s Distance Based on Optimal Transport and L1 Type Regularization I

    DTIC Science & Technology

    2016-09-01

    which EMD can be reformulated as a familiar homogeneous degree 1 regularized minimization. The new minimization problem is very similar to problems which...which is also named the Monge problem or the Wasserstein metric, plays a central role in many applications, including image processing, computer vision

  19. 3D first-arrival traveltime tomography with modified total variation regularization

    NASA Astrophysics Data System (ADS)

    Jiang, Wenbin; Zhang, Jie

    2018-02-01

    Three-dimensional (3D) seismic surveys have become a major tool in the exploration and exploitation of hydrocarbons. 3D seismic first-arrival traveltime tomography is a robust method for near-surface velocity estimation. A common approach for stabilizing the ill-posed inverse problem is to apply Tikhonov regularization to the inversion. However, the Tikhonov regularization method recovers smooth local structures while blurring the sharp features in the model solution. We present a 3D first-arrival traveltime tomography method with modified total variation (MTV) regularization to preserve sharp velocity contrasts and improve the accuracy of velocity inversion. To solve the minimization problem of the new traveltime tomography method, we decouple the original optimization problem into the following two subproblems: a standard traveltime tomography problem with the traditional Tikhonov regularization and an L2 total variation problem. We apply the conjugate gradient method and the split-Bregman iterative method to solve these two subproblems, respectively. Our synthetic examples show that the new method produces higher-resolution models than the conventional traveltime tomography with Tikhonov regularization. We apply the technique to field data. The stacking section shows significant improvements with static corrections from the MTV traveltime tomography.
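    A hedged sketch of the decoupling just described, in the spirit of half-quadratic splitting: alternate (i) a Tikhonov-type linear tomography solve and (ii) a TV denoising of the current model. The paper works with 3D traveltimes, conjugate gradients, and split-Bregman; here A, t, the 1D "slowness model", and the weights are toy stand-ins, and the TV subproblem is handled by an off-the-shelf denoiser.

```python
import numpy as np
from scipy.sparse.linalg import cg, LinearOperator
from skimage.restoration import denoise_tv_chambolle

rng = np.random.default_rng(8)
n = 100
A = rng.random((300, n))                              # toy "ray path" matrix
m_true = np.repeat([1.0, 1.5, 1.2], [30, 40, 30])     # blocky slowness model
t = A @ m_true + 0.05*rng.standard_normal(300)        # noisy traveltimes

mu = 5.0                                              # coupling weight
z = np.zeros(n)
op = LinearOperator((n, n), matvec=lambda v: A.T @ (A @ v) + mu*v, dtype=float)
for _ in range(10):
    m, _ = cg(op, A.T @ t + mu*z)                     # Tikhonov-type linear solve
    z = denoise_tv_chambolle(m, weight=0.05)          # TV subproblem (denoising)
print("relative model error:", np.linalg.norm(z - m_true)/np.linalg.norm(m_true))
```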

  20. Effects of L-glutamate on 1F Helix aspersa neurons

    NASA Astrophysics Data System (ADS)

    Bernal-Martínez, Juan; Ortega Soto, Arturo

    2004-09-01

    The aim of this work is to characterize the effect of L-glut and related compounds on the electrical properties of 1F identified neurons of the garden snail Helix aspersa. We performed intracellular recordings with conventional microelectrodes under current-clamp conditions. We report here that the putative L-glut receptor present in 1F Helix neurons has some similarities with the L-glut receptor present in vertebrates, regarding ionic permeability and biophysical properties. However, these responses show different pharmacological properties from those of receptors found in vertebrates and mammals.

  1. Chinese translation norms for 1,429 English words.

    PubMed

    Wen, Yun; van Heuven, Walter J B

    2017-06-01

    We present Chinese translation norms for 1,429 English words. Chinese-English bilinguals (N = 28) were asked to provide the first Chinese translation that came to mind for 1,429 English words. The results revealed that 71 % of the English words received more than one correct translation indicating the large amount of translation ambiguity when translating from English to Chinese. The relationship between translation ambiguity and word frequency, concreteness and language proficiency was investigated. Although the significant correlations were not strong, results revealed that English word frequency was positively correlated with the number of alternative translations, whereas English word concreteness was negatively correlated with the number of translations. Importantly, regression analyses showed that the number of Chinese translations was predicted by word frequency and concreteness. Furthermore, an interaction between these predictors revealed that the number of translations was more affected by word frequency for more concrete words than for less concrete words. In addition, mixed-effects modelling showed that word frequency, concreteness and English language proficiency were all significant predictors of whether or not a dominant translation was provided. Finally, correlations between the word frequencies of English words and their Chinese dominant translations were higher for translation-unambiguous pairs than for translation-ambiguous pairs. The translation norms are made available in a database together with lexical information about the words, which will be a useful resource for researchers investigating Chinese-English bilingual language processing.

  2. Social Norms: Do We Love Norms Too Much?

    PubMed

    Bell, David C; Cox, Mary L

    2015-03-01

    Social norms are often cited as the cause of many social phenomena, especially as an explanation for prosocial family and relationship behaviors. And yet maybe we love the idea of social norms too much, as suggested by our failure to subject them to rigorous test. Compared to the detail in social norms theoretical orientations, there is very little detail in tests of normative theories. To provide guidance to researchers who invoke social norms as explanations, we catalog normative orientations that have been proposed to account for consistent patterns of action. We call on researchers to conduct tests of normative theories and the processes such theories assert.

  3. Social Norms: Do We Love Norms Too Much?

    PubMed Central

    Bell, David C.; Cox, Mary L.

    2014-01-01

    Social norms are often cited as the cause of many social phenomena, especially as an explanation for prosocial family and relationship behaviors. And yet maybe we love the idea of social norms too much, as suggested by our failure to subject them to rigorous test. Compared to the detail in social norms theoretical orientations, there is very little detail in tests of normative theories. To provide guidance to researchers who invoke social norms as explanations, we catalog normative orientations that have been proposed to account for consistent patterns of action. We call on researchers to conduct tests of normative theories and the processes such theories assert. PMID:25937833

  4. Regularization method for large eddy simulations of shock-turbulence interactions

    NASA Astrophysics Data System (ADS)

    Braun, N. O.; Pullin, D. I.; Meiron, D. I.

    2018-05-01

    The rapid change in scales over a shock has the potential to introduce unique difficulties in Large Eddy Simulations (LES) of compressible shock-turbulence flows if the governing model does not sufficiently capture the spectral distribution of energy in the upstream turbulence. A method for the regularization of LES of shock-turbulence interactions is presented which is constructed to enforce that the energy content in the highest resolved wavenumbers decays as k^{-5/3}, and is computed locally in physical space at low computational cost. The application of the regularization to an existing subgrid scale model is shown to remove high-wavenumber errors while maintaining agreement with Direct Numerical Simulations (DNS) of forced and decaying isotropic turbulence. Linear interaction analysis is implemented to model the interaction of a shock with isotropic turbulence from LES. Comparisons to analytical models suggest that the regularization significantly improves the ability of the LES to predict amplifications in subgrid terms over the modeled shockwave. LES and DNS of decaying, modeled post-shock turbulence are also considered, and inclusion of the regularization in shock-turbulence LES is shown to improve agreement with lower Reynolds number DNS.

  5. Sparse Adaptive Iteratively-Weighted Thresholding Algorithm (SAITA) for Lp-Regularization Using the Multiple Sub-Dictionary Representation.

    PubMed

    Li, Yunyi; Zhang, Jie; Fan, Shangang; Yang, Jie; Xiong, Jian; Cheng, Xiefeng; Sari, Hikmet; Adachi, Fumiyuki; Gui, Guan

    2017-12-15

    Both L1/2 and L2/3 are typical non-convex regularizations of Lp (0 < p < 1), which can be employed to obtain a sparser solution than the L1 regularization. Recently, the multiple-state sparse transformation strategy has been developed to exploit the sparsity in L1 regularization for sparse signal recovery, which combines the iterative reweighted algorithms. To further exploit the sparse structure of signal and image, this paper adopts multiple-dictionary sparse transform strategies for the two typical cases p∈{1/2, 2/3} based on an iterative Lp thresholding algorithm and then proposes a sparse adaptive iteratively-weighted Lp thresholding algorithm (SAITA). Moreover, a simple yet effective regularization parameter is proposed to weight each sub-dictionary-based Lp regularizer. Simulation results have shown that the proposed SAITA not only performs better than the corresponding L1 algorithms but can also obtain a better recovery performance and achieve faster convergence than the conventional single-dictionary sparse transform-based Lp case. Moreover, we conduct some applications on sparse image recovery and obtain good results by comparison with related work.

  6. An entropy regularization method applied to the identification of wave distribution function for an ELF hiss event

    NASA Astrophysics Data System (ADS)

    Prot, Olivier; SantolíK, OndřEj; Trotignon, Jean-Gabriel; Deferaudy, Hervé

    2006-06-01

    An entropy regularization algorithm (ERA) has been developed to compute the wave-energy density from electromagnetic field measurements. It is based on the wave distribution function (WDF) concept. To assess its suitability and efficiency, the algorithm is applied to experimental data that has already been analyzed using other inversion techniques. The FREJA satellite data that is used consists of six spectral matrices corresponding to six time-frequency points of an ELF hiss-event spectrogram. The WDF analysis is performed on these six points and the results are compared with those obtained previously. A statistical stability analysis confirms the stability of the solutions. The WDF computation is fast and without any prespecified parameters. The regularization parameter has been chosen in accordance with Morozov's discrepancy principle. The Generalized Cross Validation and L-curve criteria are then tentatively used to provide a fully data-driven method. However, these criteria fail to determine a suitable value of the regularization parameter. Although the entropy regularization leads to solutions that agree fairly well with those already published, some differences are observed, and these are discussed in detail. The main advantage of the ERA is that it returns the WDF that exhibits the largest entropy and avoids the use of a priori models, which sometimes seem to be more accurate but without any justification.
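    Morozov's discrepancy principle, used above for the parameter choice, is easy to state concretely: pick the regularization parameter so that the data residual matches the noise level δ. The sketch below does this by bisection on log(alpha) for a toy Tikhonov-regularized linear problem; the WDF inversion itself, and the safety factor tau = 1.05, are not from the paper.

```python
import numpy as np

rng = np.random.default_rng(9)
A = rng.standard_normal((80, 60))
x_true = rng.standard_normal(60)
noise = rng.standard_normal(80)
delta = 0.5
y = A @ x_true + delta*noise/np.linalg.norm(noise)   # noise of norm exactly delta

def residual(alpha):
    x = np.linalg.solve(A.T @ A + alpha*np.eye(60), A.T @ y)
    return np.linalg.norm(A @ x - y)

lo, hi = 1e-8, 1e4                  # residual(alpha) is increasing in alpha
for _ in range(60):                 # bisection on log(alpha)
    mid = np.sqrt(lo*hi)
    if residual(mid) < 1.05*delta:  # discrepancy still below tau*delta
        lo = mid
    else:
        hi = mid
alpha = np.sqrt(lo*hi)
print("alpha:", alpha, " residual:", residual(alpha), " target:", 1.05*delta)
```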

  7. Conservative regularization of compressible dissipationless two-fluid plasmas

    NASA Astrophysics Data System (ADS)

    Krishnaswami, Govind S.; Sachdev, Sonakshi; Thyagaraja, A.

    2018-02-01

    This paper extends our earlier approach [cf. A. Thyagaraja, Phys. Plasmas 17, 032503 (2010) and Krishnaswami et al., Phys. Plasmas 23, 022308 (2016)] to obtaining à priori bounds on enstrophy in neutral fluids and ideal magnetohydrodynamics. This results in a far-reaching local, three-dimensional, non-linear, dispersive generalization of a KdV-type regularization to compressible/incompressible dissipationless 2-fluid plasmas and models derived therefrom (quasi-neutral, Hall, and ideal MHD). It involves the introduction of vortical and magnetic "twirl" terms λ_l² (w_l + (q_l/m_l) B) × (∇ × w_l) in the ion/electron velocity equations (l = i, e), where w_l are vorticities. The cut-off lengths λ_l and number densities n_l must satisfy λ_l² n_l = C_l, where the C_l are constants. A novel feature is that the "flow" current ∑_l q_l n_l v_l in Ampère's law is augmented by a solenoidal "twirl" current ∑_l ∇ × ∇ × (λ_l² j_flow,l). The resulting equations imply conserved linear and angular momenta and a positive definite swirl energy density E* which includes an enstrophic contribution ∑_l (1/2) λ_l² ρ_l w_l². It is shown that the equations admit a Hamiltonian-Poisson bracket formulation. Furthermore, singularities in ∇ × B are conservatively regularized by adding (λ_B²/2μ_0)(∇ × B)² to E*. Finally, it is proved that among regularizations that admit a Hamiltonian formulation and preserve the continuity equations along with the symmetries of the ideal model, the twirl term is unique and minimal in non-linearity and space derivatives of velocities.

  8. Quality of life of the Indonesian general population: Test-retest reliability and population norms of the EQ-5D-5L and WHOQOL-BREF.

    PubMed

    Purba, Fredrick Dermawan; Hunfeld, Joke A M; Iskandarsyah, Aulia; Fitriana, Titi Sahidah; Sadarjoen, Sawitri S; Passchier, Jan; Busschbach, Jan J V

    2018-01-01

    The objective of this study is to obtain population norms and to assess test-retest reliability of EQ-5D-5L and WHOQOL-BREF for the Indonesian population. A representative sample of 1056 people aged 17-75 years was recruited from the Indonesian general population. We used a multistage stratified quota sampling method with respect to residence, gender, age, education level, religion and ethnicity. Respondents completed EQ-5D-5L and WHOQOL-BREF with help from an interviewer. Norms data for both instruments were reported. For the test-retest evaluations, a sub-sample of 206 respondents completed both instruments twice. The total sample and test-retest sub-sample were representative of the Indonesian general population. The EQ-5D-5L shows almost perfect agreement between the two tests (Gwet's AC: 0.85-0.99 and percentage agreement: 90-99%) regarding the five dimensions. However, the agreement of EQ-VAS and index scores can be considered as poor (ICC: 0.45 and 0.37 respectively). For the WHOQOL-BREF, ICCs of the four domains were between 0.70 and 0.79, which indicates moderate to good agreement. For EQ-5D-5L, it was shown that female and older respondents had lower EQ-index scores, whilst rural, younger and higher-educated respondents had higher EQ-VAS scores. For WHOQOL-BREF: male, younger, higher-educated, high-income respondents had the highest scores in most of the domains, overall quality of life, and health satisfaction. This study provides representative estimates of self-reported health status and quality of life for the general Indonesian population as assessed by the EQ-5D-5L and WHOQOL-BREF instruments. The descriptive system of the EQ-5D-5L and the WHOQOL-BREF have high test-retest reliability while the EQ-VAS and the index score of EQ-5D-5L show poor agreement between the two tests. Our results can be useful to researchers and clinicians who can compare their findings with respect to these concepts with those of the Indonesian general population.

  9. Influence of PD-L1 cross-linking on cell death in PD-L1-expressing cell lines and bovine lymphocytes

    PubMed Central

    Ikebuchi, Ryoyo; Konnai, Satoru; Okagawa, Tomohiro; Yokoyama, Kazumasa; Nakajima, Chie; Suzuki, Yasuhiko; Murata, Shiro; Ohashi, Kazuhiko

    2014-01-01

    Programmed death-ligand 1 (PD-L1) blockade is accepted as a novel strategy for the reactivation of exhausted T cells that express programmed death-1 (PD-1). However, the mechanism of PD-L1-mediated inhibitory signalling after PD-L1 cross-linking by anti-PD-L1 monoclonal antibody (mAb) or PD-1–immunoglobulin fusion protein (PD-1-Ig) is still unknown, although it may induce cell death of PD-L1+ cells required for regular immune reactions. In this study, PD-1-Ig or anti-PD-L1 mAb treatment was tested in cell lines that expressed PD-L1 and bovine lymphocytes to investigate whether the treatment induces immune reactivation or PD-L1-mediated cell death. PD-L1 cross-linking by PD-1-Ig or anti-PD-L1 mAb primarily increased the number of dead cells in PD-L1high cells, but not in PD-L1low cells; these cells were prepared from Cos-7 cells in which bovine PD-L1 expression was induced by transfection. The PD-L1-mediated cell death also occurred in Cos-7 and HeLa cells transfected with vectors only encoding the extracellular region of PD-L1. In bovine lymphocytes, the anti-PD-L1 mAb treatment up-regulated interferon-γ (IFN-γ) production, whereas PD-1-Ig treatment decreased this cytokine production and cell proliferation. The IFN-γ production in B-cell-depleted peripheral blood mononuclear cells was not reduced by PD-1-Ig treatment and the percentages of dead cells in PD-L1+ B cells were increased by PD-1-Ig treatment, indicating that PD-1-Ig-induced immunosuppression in bovine lymphocytes could be caused by PD-L1-mediated B-cell death. This study provides novel information for the understanding of signalling through PD-L1. PMID:24405267

  10. Maximum-entropy probability distributions under Lp-norm constraints

    NASA Technical Reports Server (NTRS)

    Dolinar, S.

    1991-01-01

    Continuous probability density functions and discrete probability mass functions are tabulated which maximize the differential entropy or absolute entropy, respectively, among all probability distributions with a given Lp norm (i.e., a given pth absolute moment when p is a finite integer) and unconstrained or constrained value set. Expressions for the maximum entropy are evaluated as functions of the Lp norm. The most interesting results are obtained and plotted for unconstrained (real valued) continuous random variables and for integer valued discrete random variables. The maximum entropy expressions are obtained in closed form for unconstrained continuous random variables, and in this case there is a simple straight line relationship between the maximum differential entropy and the logarithm of the Lp norm. Corresponding expressions for arbitrary discrete and constrained continuous random variables are given parametrically; closed form expressions are available only for special cases. However, simpler alternative bounds on the maximum entropy of integer valued discrete random variables are obtained by applying the differential entropy results to continuous random variables which approximate the integer valued random variables in a natural manner. All the results are presented in an integrated framework that includes continuous and discrete random variables, constraints on the permissible value set, and all possible values of p. Understanding such as this is useful in evaluating the performance of data compression schemes.
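    The straight-line claim for the unconstrained continuous case can be checked numerically: the maximizer with a fixed pth absolute moment is the well-known generalized Gaussian density (consistent with the abstract's closed-form result), so entropy should grow with slope one in the log of the Lp norm. The sketch uses scipy's gennorm with shape beta = p and quadrature for the moment; p and the scales are arbitrary choices.

```python
import numpy as np
from scipy.stats import gennorm
from scipy.integrate import quad

p = 3.0
for scale in (0.5, 1.0, 2.0, 4.0):
    h = gennorm.entropy(p, scale=scale)                 # differential entropy
    moment, _ = quad(lambda x: np.abs(x)**p * gennorm.pdf(x, p, scale=scale),
                     -np.inf, np.inf)                   # pth absolute moment
    print(f"log Lp-norm {np.log(moment**(1/p)):+.3f}   entropy {h:+.3f}")
# Successive rows change by the same amount in both columns: slope 1.
```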

  11. Estimates of the Modeling Error of the α -Models of Turbulence in Two and Three Space Dimensions

    NASA Astrophysics Data System (ADS)

    Dunca, Argus A.

    2017-12-01

    This report investigates the convergence rate of the weak solutions w^α of the Leray-α, modified Leray-α, Navier-Stokes-α and the zeroth ADM turbulence models to a weak solution u of the Navier-Stokes equations. It is assumed that this weak solution u of the NSE belongs to the space L^4(0, T; H^1). It is shown that under this regularity condition the error u − w^α is O(α) in the norms L^2(0, T; H^1) and L^∞(0, T; L^2), thus improving related known results. It is also shown that the averaged error ū − w̄^α is of higher order, O(α^{1.5}), in the same norms; therefore the α-regularizations considered herein approximate filtered flow structures better than the exact (unfiltered) flow velocities.

  12. Alcohol Use Disorders and Perceived Drinking Norms: Ethnic Differences in Israeli Adults

    PubMed Central

    Shmulewitz, Dvora; Wall, Melanie M.; Keyes, Katherine M.; Aharonovich, Efrat; Aivadyan, Christina; Greenstein, Eliana; Spivak, Baruch; Weizman, Abraham; Frisch, Amos; Hasin, Deborah

    2012-01-01

    Objective: Individuals’ perceptions of drinking acceptability in their society (perceived injunctive drinking norms) are widely assumed to explain ethnic group differences in drinking and alcohol use disorders (AUDs), but this has never been formally tested. Immigrants to Israel from the former Soviet Union (FSU) are more likely to drink and report AUD symptoms than other Israelis. We tested perceived drinking norms as a mediator of differences between FSU immigrants and other Israelis in drinking and AUDs. Method: Adult household residents (N = 1,349) selected from the Israeli population register were assessed with a structured interview measuring drinking, AUD symptoms, and perceived drinking norms. Regression analyses were used to produce odds ratios (OR) and risk ratios (RR) and 95% confidence intervals (CI) to test differences between FSU immigrants and other Israelis on binary and graded outcomes. Mediation of FSU effects by perceived drinking norms was tested with bootstrapping procedures. Results: FSU immigrants were more likely than other Israelis to be current drinkers (OR = 2.39, CI [1.61, 3.55]), have higher maximum number of drinks per day (RR = 1.88, CI [1.64, 2.16]), have any AUD (OR = 1.75, CI [1.16, 2.64]), score higher on a continuous measure of AUD (RR = 1.44, CI [1.12, 1.84]), and perceive more permissive drinking norms (p < .0001). For all four drinking variables, the FSU group effect was at least partially mediated by perceived drinking norms. Conclusions: This is the first demonstration that drinking norms mediate ethnic differences in AUDs. This work contributes to understanding ethnic group differences in drinking and AUDs, potentially informing etiologic research and public policy aimed at reducing alcohol-related harm. PMID:23036217

  13. Simultaneous Tumor Segmentation, Image Restoration, and Blur Kernel Estimation in PET Using Multiple Regularizations

    PubMed Central

    Li, Laquan; Wang, Jian; Lu, Wei; Tan, Shan

    2016-01-01

    Accurate tumor segmentation from PET images is crucial in many radiation oncology applications. Among others, the partial volume effect (PVE) is recognized as one of the most important factors degrading imaging quality and segmentation accuracy in PET. Taking into account that image restoration and tumor segmentation are tightly coupled and can promote each other, we proposed a variational method to solve both problems simultaneously in this study. The proposed method integrated total variation (TV) semi-blind deconvolution and Mumford-Shah segmentation with multiple regularizations. Unlike many existing energy minimization methods using either TV or L2 regularization, the proposed method employed TV regularization over tumor edges to preserve edge information, and L2 regularization inside tumor regions to preserve the smooth change of the metabolic uptake in a PET image. The blur kernel was modeled as anisotropic Gaussian to address the resolution difference in transverse and axial directions commonly seen in a clinical PET scanner. The energy functional was rephrased using the Γ-convergence approximation and was iteratively optimized using the alternating minimization (AM) algorithm. The performance of the proposed method was validated on a physical phantom and two clinical datasets with non-Hodgkin's lymphoma and esophageal cancer, respectively. Experimental results demonstrated that the proposed method had high performance for simultaneous image restoration, tumor segmentation and scanner blur kernel estimation. Particularly, the recovery coefficients (RC) of the restored images of the proposed method in the phantom study were close to 1, indicating an efficient recovery of the original blurred images; for segmentation the proposed method achieved average dice similarity indexes (DSIs) of 0.79 and 0.80 for the two clinical datasets, respectively; and the relative errors of the estimated blur kernel widths were less than 19% in the transversal direction and 7% in the

  14. Control of NORM at Eugene Island 341-A

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shuler, P.J.; Baudoin, D.A.; Weintritt, D.J.

    1995-12-31

    A field study at Eugene Island 341-A, an offshore production platform in the Gulf of Mexico, was conducted to develop strategies for the cost-effective prevention of NORM (Naturally Occurring Radioactive Materials) deposits. The specific objectives of this study were to: (1) Determine the root cause for the NORM deposits at this facility, utilizing different diagnostic techniques. (2) Consider all engineering options that are designed to prevent NORM from forming. (3) Determine the most cost-effective engineering solution. An overall objective was to generalize the diagnostics and control methods developed for Eugene Island 341-A to other oil and gas production facilities, especially to platforms located in the Gulf of Mexico. This study determined that the NORM deposits found at Eugene Island 341-A stem from commingling incompatible produced waters at the surface. Wells completed in Sand Block A have a water containing a relatively high concentration of barium, while the formation brines in Sand Blocks B and C are high in sulfate. When these waters mix at the start of the fluid treatment facilities on the platform, barium sulfate forms. Radium that is present in the produced brines co-precipitates with the barium, thereby creating a radioactive barium sulfate scale deposit (NORM).

  15. On split regular Hom-Lie superalgebras

    NASA Astrophysics Data System (ADS)

    Albuquerque, Helena; Barreiro, Elisabete; Calderón, A. J.; Sánchez, José M.

    2018-06-01

    We introduce the class of split regular Hom-Lie superalgebras as the natural extension of the one of split Hom-Lie algebras and Lie superalgebras, and study its structure by showing that an arbitrary split regular Hom-Lie superalgebra L is of the form L = U + ∑j Ij, with U a linear subspace of a maximal abelian graded subalgebra H and any Ij a well described (split) ideal of L satisfying [Ij, Ik] = 0 if j ≠ k. Under certain conditions, the simplicity of L is characterized and it is shown that L is the direct sum of the family of its simple ideals.

  16. Lateral prefrontal/orbitofrontal cortex has different roles in norm compliance in gain and loss domains: a transcranial direct current stimulation study.

    PubMed

    Yin, Yunlu; Yu, Hongbo; Su, Zhongbin; Zhang, Yuan; Zhou, Xiaolin

    2017-09-01

    Sanction is used by almost all known human societies to enforce fairness norms in resource distribution. Previous studies have consistently shown that the lateral prefrontal cortex (lPFC) and the adjacent orbitofrontal cortex (lOFC) play a causal role in mediating the effect of sanction threat on norm compliance. However, most of these studies were conducted in the gain domain, in which resources are distributed. Little is known about the mechanisms underlying norm compliance in the loss domain, in which individual sacrifices are needed. Here we employed a modified version of the dictator game (DG) and high-definition transcranial direct current stimulation (HD-tDCS) to investigate to what extent lPFC/lOFC is involved in norm compliance (with and without sanction threat) in both gain- and loss-sharing contexts. Participants allocated a fixed total amount of monetary gain or loss between themselves and an anonymous partner in multiple rounds of the game. A computer program randomly decided whether a given round involved sanction threat for the participants. Results showed that disruption of the right lPFC/lOFC by tDCS increased voluntary norm compliance in the gain domain, but not in the loss domain; tDCS on lPFC/lOFC had no effect on compliance under sanction threat in either the gain or loss domain. Our findings reveal the context-dependent nature of norm compliance and differential roles of lPFC/lOFC in norm compliance in gain and loss domains. © 2017 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.

  17. Optimized star sensors laboratory calibration method using a regularization neural network.

    PubMed

    Zhang, Chengfen; Niu, Yanxiong; Zhang, Hao; Lu, Jiazhen

    2018-02-10

    High-precision ground calibration is essential to ensure the performance of star sensors. However, complex distortion and multi-error coupling have brought great difficulties to traditional calibration methods, especially for large field of view (FOV) star sensors. Although increasing the complexity of models is an effective way to improve the calibration accuracy, it significantly increases the demand for calibration data. In order to achieve high-precision calibration of star sensors with large FOV, a novel laboratory calibration method based on a regularization neural network is proposed. A multi-layer neural network is designed to represent the mapping between the star vector and the corresponding star point coordinate directly. To ensure the generalization performance of the network, regularization strategies are incorporated into the network structure and the training algorithm. Simulation and experiment results demonstrate that the proposed method can achieve high precision with less calibration data and without any other a priori information. Compared with traditional methods, the calibration error of the star sensor decreased by about 30%. The proposed method can satisfy the precision requirement for large FOV star sensors.
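
    As a rough illustration of a regularized calibration network (not the authors' architecture), scikit-learn's MLPRegressor exposes an L2 weight penalty through its alpha parameter; the star-field geometry and distortion below are invented for the example:

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(0)
        ang = rng.uniform(-0.15, 0.15, size=(2000, 2))       # tangent-plane angles (rad)
        v = np.column_stack([np.tan(ang[:, 0]), np.tan(ang[:, 1]), np.ones(2000)])
        v /= np.linalg.norm(v, axis=1, keepdims=True)        # unit star vectors
        xy = 1000.0 * v[:, :2] / v[:, 2:3]                   # ideal pinhole projection (px)
        xy += 1e-4 * xy**2 + rng.normal(scale=0.05, size=xy.shape)  # toy distortion + noise

        # alpha is scikit-learn's L2 regularization strength, which guards
        # generalization when calibration data are scarce.
        net = MLPRegressor(hidden_layer_sizes=(32, 32), alpha=1e-3, max_iter=2000)
        net.fit(v, xy)                                       # star vector -> star point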

  18. Nuclear norm-based 2-DPCA for extracting features from images.

    PubMed

    Zhang, Fanlong; Yang, Jian; Qian, Jianjun; Xu, Yong

    2015-10-01

    The 2-D principal component analysis (2-DPCA) is a widely used method for image feature extraction. However, it can be equivalently implemented via image-row-based principal component analysis. This paper presents a structured 2-D method called nuclear norm-based 2-DPCA (N-2-DPCA), which uses a nuclear norm-based reconstruction error criterion. The nuclear norm is a matrix norm, which can provide a structured 2-D characterization for the reconstruction error image. The reconstruction error criterion is minimized by converting the nuclear norm-based optimization problem into a series of F-norm-based optimization problems. In addition, N-2-DPCA is extended to a bilateral projection-based N-2-DPCA (N-B2-DPCA). The virtue of N-B2-DPCA over N-2-DPCA is that an image can be represented with fewer coefficients. N-2-DPCA and N-B2-DPCA are applied to face recognition and reconstruction and evaluated using the Extended Yale B, CMU PIE, FRGC, and AR databases. Experimental results demonstrate the effectiveness of the proposed methods.
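
    The nuclear-norm error criterion itself is easy to evaluate; a minimal sketch on toy matrices (this only illustrates the criterion, not the N-2-DPCA optimization):

        import numpy as np

        rng = np.random.default_rng(1)
        A = rng.normal(size=(32, 32))                    # original image
        A_hat = A + rng.normal(scale=0.1, size=A.shape)  # a reconstruction of it

        E = A - A_hat                                    # reconstruction error image
        nuc = np.linalg.norm(E, ord='nuc')               # nuclear norm: sum of singular values
        fro = np.linalg.norm(E, ord='fro')               # F-norm used in the surrogate problems
        assert np.isclose(nuc, np.linalg.svd(E, compute_uv=False).sum())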

  19. Cross-label Suppression: a Discriminative and Fast Dictionary Learning with Group Regularization.

    PubMed

    Wang, Xiudong; Gu, Yuantao

    2017-05-10

    This paper addresses image classification through learning a compact and discriminative dictionary efficiently. Given a structured dictionary with each atom (column of the dictionary matrix) related to some label, we propose a cross-label suppression constraint to enlarge the difference among representations for different classes. Meanwhile, we introduce group regularization to enforce representations to preserve the label properties of the original samples, meaning that representations for the same class are encouraged to be similar. Owing to the cross-label suppression, we do not resort to the frequently used ℓ0-norm or ℓ1-norm for coding, and obtain computational efficiency without losing discriminative power for categorization. Moreover, two simple classification schemes are also developed to take full advantage of the learnt dictionary. Extensive experiments on six data sets covering face recognition, object categorization, scene classification, texture recognition and sport action categorization are conducted, and the results show that the proposed approach can outperform many recently proposed dictionary learning algorithms in both recognition accuracy and computational efficiency.

  20. Spatially adapted second-order total generalized variational image deblurring model under impulse noise

    NASA Astrophysics Data System (ADS)

    Zhong, Qiu-Xiang; Wu, Chuan-Sheng; Shu, Qiao-Ling; Liu, Ryan Wen

    2018-04-01

    Image deblurring under impulse noise is a typical ill-posed problem which requires regularization methods to guarantee high-quality imaging. An L1-norm data-fidelity term and a total variation (TV) regularizer have been combined to form a popular regularization method. However, the TV-regularized variational image deblurring model often suffers from staircase-like artifacts that degrade image quality. To enhance image quality, the detail-preserving total generalized variation (TGV) was introduced in place of TV to eliminate the undesirable artifacts. The resulting nonconvex optimization problem was effectively solved using the alternating direction method of multipliers (ADMM). In addition, an automatic method for selecting spatially adapted regularization parameters was proposed to further improve deblurring performance. Our proposed image deblurring framework is able to remove blurring and impulse noise effects while maintaining image edge details. Comprehensive experiments have been conducted to demonstrate the superior performance of our proposed method over several state-of-the-art image deblurring methods.

  1. Graph-Based Norm Explanation

    NASA Astrophysics Data System (ADS)

    Croitoru, Madalina; Oren, Nir; Miles, Simon; Luck, Michael

    Norms impose obligations, permissions and prohibitions on individual agents operating as part of an organisation. Typically, the purpose of such norms is to ensure that an organisation acts in some socially (or mutually) beneficial manner, possibly at the expense of individual agent utility. In this context, agents are norm-aware if they are able to reason about which norms are applicable to them, and to decide whether to comply with or ignore them. While much work has focused on the creation of norm-aware agents, much less has been concerned with aiding system designers in understanding the effects of norms on a system. The ability to understand such norm effects can aid the designer in avoiding incorrect norm specification, eliminating redundant norms and reducing normative conflict. In this paper, we address the problem of norm understanding by providing explanations as to why a norm is applicable, violated, or in some other state. We make use of conceptual graph based semantics to provide a graphical representation of the norms within a system. Given knowledge of the current and historical state of the system, such a representation allows for explanation of the state of norms, showing for example why they may have been activated or violated.

  2. Fast computation of voxel-level brain connectivity maps from resting-state functional MRI using l₁-norm as approximation of Pearson's temporal correlation: proof-of-concept and example vector hardware implementation.

    PubMed

    Minati, Ludovico; Zacà, Domenico; D'Incerti, Ludovico; Jovicich, Jorge

    2014-09-01

    An outstanding issue in graph-based analysis of resting-state functional MRI is the choice of network nodes. Individual consideration of entire brain voxels may represent a less biased approach than parcellating the cortex according to pre-determined atlases, but entails establishing connectedness for 10⁹-10¹¹ links, with often prohibitive computational cost. Using a representative Human Connectome Project dataset, we show that, following appropriate time-series normalization, it may be possible to accelerate connectivity determination by replacing Pearson correlation with the l1-norm. Even though the adjacency matrices derived from correlation coefficients and l1-norms are not identical, their similarity is high. Further, we describe and provide in full an example vector hardware implementation of the l1-norm on an array of 4096 zero instruction-set processors. Calculation times <1000 s are attainable, removing the major deterrent to voxel-based resting-state network mapping and revealing fine-grained node degree heterogeneity. The l1-norm should be given consideration as a substitute for correlation in very high-density resting-state functional connectivity analyses. Copyright © 2014 IPEM. Published by Elsevier Ltd. All rights reserved.
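
    The substitution is easy to try on ordinary hardware. A sketch of the key step, assuming z-scored time series and toy sizes (the vector-processor implementation is what the record actually contributes):

        import numpy as np

        rng = np.random.default_rng(2)
        T, N = 1200, 200                              # time points, voxels (toy)
        X = rng.normal(size=(N, T))
        X[1] = 0.8 * X[0] + 0.2 * rng.normal(size=T)  # plant one correlated pair

        Z = (X - X.mean(1, keepdims=True)) / X.std(1, keepdims=True)  # normalize

        r = Z @ Z[0] / T                              # Pearson correlations with voxel 0
        d1 = np.abs(Z - Z[0]).sum(axis=1)             # l1-norm distances to voxel 0
        # For z-scored Gaussian-like data, d1 falls monotonically as r grows, so
        # ranking or thresholding d1 approximates ranking or thresholding r
        # while needing no multiplications.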

  3. Hybrid Weighted Minimum Norm Method: a new method based on LORETA to solve the EEG inverse problem.

    PubMed

    Song, C; Zhuang, T; Wu, Q

    2005-01-01

    This paper puts forward a new method to solve the EEG inverse problem. It builds on three physiological characteristics of neural electrical activity sources: first, neighboring neurons tend to activate synchronously; second, the distribution of sources in source space is sparse; third, the activity of the sources is highly concentrated. Taking this prior knowledge as the only assumption about the inverse solution, the method produces the common 3D EEG reconstruction map without assuming any other characteristics of the solution. The proposed algorithm combines the advantages of LORETA, a low-resolution method that emphasizes localization, and FOCUSS, a high-resolution method that emphasizes separability, and remains within the framework of the weighted minimum norm method. The key step is the construction of a weighting matrix that draws on existing smoothness operators, a competition mechanism, and a learning algorithm. The basic procedure is to obtain an initial estimate of the solution, construct a new estimate using information from the previous one, and repeat this process until the estimates from the last two iterations remain unchanged.
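
    The "estimate, reweight, re-estimate" loop described here is the generic FOCUSS recipe; a numpy sketch of that loop (the record's weighting additionally encodes smoothness and competition terms, which are omitted here):

        import numpy as np

        def reweighted_min_norm(A, b, n_iter=20, eps=1e-6):
            # Start from the ordinary minimum-norm solution.
            x = np.linalg.pinv(A) @ b
            for _ in range(n_iter):
                W = np.diag(np.abs(x) + eps)   # weights from the previous estimate
                AW = A @ W
                # Weighted minimum-norm solution; energy concentrates on few sources.
                x = W @ AW.T @ np.linalg.solve(AW @ AW.T + 1e-9 * np.eye(len(b)), b)
            return x

        A = np.random.default_rng(3).normal(size=(16, 64))    # toy lead-field matrix
        x_true = np.zeros(64); x_true[[5, 40]] = [1.0, -0.7]  # sparse, focal sources
        x_hat = reweighted_min_norm(A, A @ x_true)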

  4. Recovering fine details from under-resolved electron tomography data using higher order total variation ℓ1 regularization

    DOE PAGES

    Sanders, Toby; Gelb, Anne; Platte, Rodrigo B.; ...

    2017-01-03

    Over the last decade or so, reconstruction methods using ℓ1 regularization, often categorized as compressed sensing (CS) algorithms, have significantly improved the capabilities of high fidelity imaging in electron tomography. The most popular ℓ1 regularization approach within electron tomography has been total variation (TV) regularization. In addition to reducing unwanted noise, TV regularization encourages a piecewise constant solution with sparse boundary regions. In this paper we propose an alternative ℓ1 regularization approach for electron tomography based on higher order total variation (HOTV). Like TV, the HOTV approach promotes solutions with sparse boundary regions. In smooth regions, however, the solution is not limited to piecewise constant behavior. We demonstrate that this allows for more accurate reconstruction of a broader class of images – even those for which TV was designed – particularly when dealing with pragmatic tomographic sampling patterns and very fine image features. Finally, we develop results for an electron tomography data set as well as a phantom example, and we make comparisons with discrete tomography approaches.
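
    A 1-D analogue of the HOTV idea can be written down directly with cvxpy (a sketch with invented data, far from the electron tomography setting): penalizing second differences instead of first differences leaves smooth structure free of staircasing.

        import cvxpy as cp
        import numpy as np

        rng = np.random.default_rng(4)
        n = 200
        t = np.linspace(0.0, 1.0, n)
        x_true = np.sin(3 * t) + 0.5 * t**2           # smooth, not piecewise constant
        A = rng.normal(size=(80, n)) / np.sqrt(80)    # toy undersampled measurements
        b = A @ x_true + 0.01 * rng.normal(size=80)

        x = cp.Variable(n)
        lam = 0.05
        # TV would use first differences, cp.diff(x, k=1); HOTV of order 2 uses
        # second differences, so piecewise-linear/smooth behavior is not penalized.
        prob = cp.Problem(cp.Minimize(cp.sum_squares(A @ x - b)
                                      + lam * cp.norm1(cp.diff(x, k=2))))
        prob.solve()
        x_hotv = x.value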

  5. Longitudinal Relationships Among Perceived Injunctive and Descriptive Norms and Marijuana Use

    PubMed Central

    Napper, Lucy E.; Kenney, Shannon R.; Hummer, Justin F.; Fiorot, Sara; LaBrie, Joseph W.

    2016-01-01

    Objective: The current study uses longitudinal data to examine the relative influence of perceived descriptive and injunctive norms for proximal and distal referents on marijuana use. Method: Participants were 740 undergraduate students (67% female) who completed web-based surveys at two time points 12 months apart. Time 1 measures included reports of marijuana use, approval, perceived descriptive norms, and perceived injunctive norms for the typical student, close friends, and parents. At Time 2, students reported on their marijuana use. Results: Results of a path analysis suggest that, after we controlled for Time 1 marijuana use, greater perceived friend approval indirectly predicted Time 2 marijuana use as mediated by personal approval. Greater perceived parental approval was both indirectly and directly associated with greater marijuana use at follow-up. Perceived typical-student descriptive norms were neither directly nor indirectly related to Time 2 marijuana use. Conclusions: The findings support the role of proximal injunctive norms in predicting college student marijuana use up to 12 months later. The results indicate the potential importance of developing normative interventions that incorporate the social influences of proximal referents. PMID:27172578

  6. Female non-regular workers in Japan: their current status and health.

    PubMed

    Inoue, Mariko; Nishikitani, Mariko; Tsurugano, Shinobu

    2016-12-07

    The participation of women in the Japanese labor force is characterized by its M-shaped curve, which reflects decreased employment rates during child-rearing years. Although this M-shaped curve is now improving, the majority of women in employment are likely to fall into the category of non-regular workers. Based on a review of previous Japanese studies of the health of non-regular workers, we found that non-regular female workers experienced greater psychological distress, poorer self-rated health, a higher smoking rate, and less access to preventive medicine than regular workers did. However, despite the large number of non-regular workers, there is limited research regarding their health. In contrast, several studies in Japan concluded that regular workers also had worse health conditions due to the additional responsibility and longer work hours associated with the job, housekeeping, and child rearing. The health of non-regular workers might be threatened by the effects of precarious employment status, lower income, a lower safety net, outdated social norms regarding non-regular workers, and difficulty in achieving a work-life balance. A sector-wide social approach that considers the life course is needed to protect the health and well-being of female workers; promotion of an occupational health program alone is insufficient.

  7. Female non-regular workers in Japan: their current status and health

    PubMed Central

    INOUE, Mariko; NISHIKITANI, Mariko; TSURUGANO, Shinobu

    2016-01-01

    The participation of women in the Japanese labor force is characterized by its M-shaped curve, which reflects decreased employment rates during child-rearing years. Although this M-shaped curve is now improving, the majority of women in employment are likely to fall into the category of non-regular workers. Based on a review of previous Japanese studies of the health of non-regular workers, we found that non-regular female workers experienced greater psychological distress, poorer self-rated health, a higher smoking rate, and less access to preventive medicine than regular workers did. However, despite the large number of non-regular workers, there is limited research regarding their health. In contrast, several studies in Japan concluded that regular workers also had worse health conditions due to the additional responsibility and longer work hours associated with the job, housekeeping, and child rearing. The health of non-regular workers might be threatened by the effects of precarious employment status, lower income, a lower safety net, outdated social norms regarding non-regular workers, and difficulty in achieving a work-life balance. A sector-wide social approach that considers the life course is needed to protect the health and well-being of female workers; promotion of an occupational health program alone is insufficient. PMID:27818453

  8. Physics-driven Spatiotemporal Regularization for High-dimensional Predictive Modeling: A Novel Approach to Solve the Inverse ECG Problem

    NASA Astrophysics Data System (ADS)

    Yao, Bing; Yang, Hui

    2016-12-01

    This paper presents a novel physics-driven spatiotemporal regularization (STRE) method for high-dimensional predictive modeling in complex healthcare systems. This model not only captures the physics-based interrelationship between time-varying explanatory and response variables that are distributed in the space, but also addresses the spatial and temporal regularizations to improve the prediction performance. The STRE model is implemented to predict the time-varying distribution of electric potentials on the heart surface based on the electrocardiogram (ECG) data from the distributed sensor network placed on the body surface. The model performance is evaluated and validated in both a simulated two-sphere geometry and a realistic torso-heart geometry. Experimental results show that the STRE model significantly outperforms other regularization models that are widely used in current practice such as Tikhonov zero-order, Tikhonov first-order and L1 first-order regularization methods.

  9. Accelerated Edge-Preserving Image Restoration Without Boundary Artifacts

    PubMed Central

    Matakos, Antonios; Ramani, Sathish; Fessler, Jeffrey A.

    2013-01-01

    To reduce blur in noisy images, regularized image restoration methods have been proposed that use non-quadratic regularizers (like l1 regularization or total-variation) that suppress noise while preserving edges in the image. Most of these methods assume a circulant blur (periodic convolution with a blurring kernel) that can lead to wraparound artifacts along the boundaries of the image due to the implied periodicity of the circulant model. Using a non-circulant model could prevent these artifacts at the cost of increased computational complexity. In this work we propose to use a circulant blur model combined with a masking operator that prevents wraparound artifacts. The resulting model is non-circulant, so we propose an efficient algorithm using variable splitting and augmented Lagrangian (AL) strategies. Our variable splitting scheme, when combined with the AL framework and alternating minimization, leads to simple linear systems that can be solved non-iteratively using FFTs, eliminating the need for more expensive CG-type solvers. The proposed method can also efficiently tackle a variety of convex regularizers including edge-preserving (e.g., total-variation) and sparsity promoting (e.g., l1 norm) regularizers. Simulation results show fast convergence of the proposed method, along with improved image quality at the boundaries where the circulant model is inaccurate. PMID:23372080
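
    The payoff of the circulant model is that the inner least-squares step diagonalizes under the FFT. A sketch of that step alone (the record's contribution, the masking that removes boundary artifacts, is omitted here):

        import numpy as np

        rng = np.random.default_rng(5)
        x = rng.normal(size=(128, 128))                   # stand-in image
        h = np.zeros((128, 128)); h[:5, :5] = 1.0 / 25.0  # 5x5 box blur kernel

        H = np.fft.fft2(h)                                # circulant blur is diagonal in Fourier
        y = np.real(np.fft.ifft2(H * np.fft.fft2(x)))     # blurred data y = Hx

        # One AL/ADMM inner step: solve (H^T H + mu I) u = rhs non-iteratively with FFTs.
        mu = 0.1
        rhs = np.real(np.fft.ifft2(np.conj(H) * np.fft.fft2(y))) + mu * x
        u = np.real(np.fft.ifft2(np.fft.fft2(rhs) / (np.abs(H)**2 + mu)))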

  10. Social influences, social norms, social support, and smoking behavior among adolescent workers.

    PubMed

    Fagan, P; Eisenberg, M; Stoddard, A M; Frazier, L; Sorensen, G

    2001-01-01

    To examine the relationships between worksite interpersonal influences and smoking and quitting behavior among adolescent workers. The cross-sectional survey assessed factors influencing tobacco use behavior. During the fall of 1998, data were collected from 10 grocery stores in Massachusetts that were owned and managed by the same company. Eligible participants included 474 working adolescents ages 15 to 18. Eighty-three percent of workers (n = 379) completed the survey. The self-report questionnaire assessed social influences, social norms, social support, friendship networks, stage of smoking and quitting behavior, employment patterns, and demographic factors. Thirty-five percent of respondents were never smokers, 21% experimental, 5% occasional, 18% regular, and 23% former smokers. Using analysis of variance (ANOVA), results indicate that regular smokers were 30% more likely than experimental or occasional smokers to report coworker encouragement to quit (p = .0002). Compared with regular smokers, never smokers were 15% more likely to report greater nonacceptability of smoking (p = .01). χ² tests of association revealed no differences in friendship networks by stage of smoking. These data provide evidence for the need to further explore social factors inside and outside the work environment that influence smoking and quitting behavior among working teens. Interpretations of the data are limited because of cross-sectional and self-report data collection methods used in one segment of the retail sector.

  11. Norm-Aware Socio-Technical Systems

    NASA Astrophysics Data System (ADS)

    Savarimuthu, Bastin Tony Roy; Ghose, Aditya

    The following sections are included: * Introduction * The Need for Norm-Aware Systems * Norms in human societies * Why should software systems be norm-aware? * Case Studies of Norm-Aware Socio-Technical Systems * Human-computer interactions * Virtual environments and multi-player online games * Extracting norms from big data and software repositories * Norms and Sustainability * Sustainability and green ICT * Norm awareness through software systems * Where To, From Here? * Conclusions

  12. On the Normed Space of Equivalence Classes of Fuzzy Numbers

    PubMed Central

    Lu, Chongxia; Zhang, Wei

    2013-01-01

    We study the norm induced by the supremum metric on the space of fuzzy numbers. And then we propose a method for constructing a norm on the quotient space of fuzzy numbers. This norm is very natural and works well with the induced metric on the quotient space. PMID:24072984

  13. Least-squares finite element methods for compressible Euler equations

    NASA Technical Reports Server (NTRS)

    Jiang, Bo-Nan; Carey, G. F.

    1990-01-01

    A method based on backward finite differencing in time and a least-squares finite element scheme for first-order systems of partial differential equations in space is applied to the Euler equations for gas dynamics. The scheme minimizes the L2-norm of the residual within each time step. The method naturally generates numerical dissipation proportional to the time step size. An implicit method employing linear elements has been implemented and proves robust. For high-order elements, computed solutions based on the L2 method may have oscillations for calculations at similar time step sizes. To overcome this difficulty, a scheme which minimizes the weighted H1-norm of the residual is proposed and leads to a successful scheme with high-degree elements. Finally, a conservative least-squares finite element method is also developed. Numerical results for two-dimensional problems are given to demonstrate the shock resolution of the methods and compare different approaches.

  14. Social anxiety and social norms in individualistic and collectivistic countries

    PubMed Central

    Schreier, Sina-Simone; Heinrichs, Nina; Alden, Lynn; Rapee, Ronald M.; Hofmann, Stefan G.; Chen, Junwen; Ja Oh, Kyung; Bögels, Susan

    2010-01-01

    Background Social anxiety is assumed to be related to cultural norms across countries. Heinrichs and colleagues [1] compared individualistic and collectivistic countries and found higher social anxiety and more positive attitudes toward socially avoidant behaviors in collectivistic than in individualistic countries. However, the authors failed to include Latin American countries in the collectivistic group. Methods To provide support for these earlier results within an extended sample of collectivistic countries, 478 undergraduate students from individualistic countries were compared with 388 undergraduate students from collectivistic countries (including East Asian and Latin American) via self-report of social anxiety and social vignettes assessing social norms. Results As expected, the results of Heinrichs and colleagues [1] were replicated for the individualistic and Asian countries but not for Latin American countries. Latin American countries displayed the lowest social anxiety levels, whereas the collectivistic East Asian group displayed the highest. Conclusions These findings indicate that while culture-mediated social norms affect social anxiety and might help to shed light on the etiology of social anxiety disorder, the dimension of individualism-collectivism may not fully capture the relevant norms. PMID:21049538

  15. Validation of a Stability-Indicating Method for Methylseleno-l-Cysteine (l-SeMC)

    PubMed Central

    Canady, Kristin; Cobb, Johnathan; Deardorff, Peter; Larson, Jami; White, Jonathan M.; Boring, Dan

    2016-01-01

    Methylseleno-l-cysteine (l-SeMC) is a naturally occurring amino acid analogue used as a general dietary supplement and is being explored as a chemopreventive agent. As a known dietary supplement, l-SeMC is not regulated as a pharmaceutical and there is a paucity of analytical methods available. To address the lack of methodology, a stability-indicating method was developed and validated to evaluate l-SeMC as both the bulk drug and formulated drug product (400 µg Se/capsule). The analytical approach presented is a simple, nonderivatization method that utilizes HPLC with ultraviolet detection at 220 nm. A C18 column with a volatile ion-pair agent and methanol mobile phase was used for the separation. The method accuracy was 99–100% from 0.05 to 0.15 mg/mL l-SeMC for the bulk drug, and 98–99% from 0.075 to 0.15 mg/mL l-SeMC for the drug product. Method precision was <1% for the bulk drug and was 3% for the drug product. The LOQ was 0.1 µg/mL l-SeMC or 0.002 µg l-SeMC on column. PMID:26199341

  16. A Graph-based Approach to Auditing RxNorm

    PubMed Central

    Bodenreider, Olivier; Peters, Lee B.

    2009-01-01

    Objectives RxNorm is a standardized nomenclature for clinical drug entities developed by the National Library of Medicine. In this paper, we audit relations in RxNorm for consistency and completeness through the systematic analysis of the graph of its concepts and relationships. Methods The representation of multi-ingredient drugs is normalized in order to make it compatible with that of single-ingredient drugs. All meaningful paths between two nodes in the type graph are computed and instantiated. Alternate paths are automatically compared and manually inspected in case of inconsistency. Results The 115 meaningful paths identified in the type graph can be grouped into 28 groups with respect to start and end nodes. Of the 19 groups of alternate paths (i.e., with two or more paths) between the start and end nodes, 9 (47%) exhibit inconsistencies. Overall, 28 (24%) of the 115 paths are inconsistent with other alternate paths. A total of 348 inconsistencies were identified in the April 2008 version of RxNorm and reported to the RxNorm team, of which 215 (62%) had been corrected in the January 2009 version of RxNorm. Conclusion The inconsistencies identified involve missing nodes (93), missing links (17), extraneous links (237) and one case of mix-up between two ingredients. Our auditing method proved effective in identifying a limited number of errors that had defeated the quality assurance mechanisms currently in place in the RxNorm production system. Some recommendations for the development of RxNorm are provided. PMID:19394440

  17. A function space framework for structural total variation regularization with applications in inverse problems

    NASA Astrophysics Data System (ADS)

    Hintermüller, Michael; Holler, Martin; Papafitsoros, Kostas

    2018-06-01

    In this work, we introduce a function space setting for a wide class of structural/weighted total variation (TV) regularization methods motivated by their applications in inverse problems. In particular, we consider a regularizer that is the appropriate lower semi-continuous envelope (relaxation) of a suitable TV type functional initially defined for sufficiently smooth functions. We study examples where this relaxation can be expressed explicitly, and we also provide refinements for weighted TV for a wide range of weights. Since an integral characterization of the relaxation in function space is, in general, not always available, we show that, for a rather general linear inverse problems setting, instead of the classical Tikhonov regularization problem, one can equivalently solve a saddle-point problem where no a priori knowledge of an explicit formulation of the structural TV functional is needed. In particular, motivated by concrete applications, we deduce corresponding results for linear inverse problems with norm and Poisson log-likelihood data discrepancy terms. Finally, we provide proof-of-concept numerical examples where we solve the saddle-point problem for weighted TV denoising as well as for MR guided PET image reconstruction.

  18. OPERATOR NORM INEQUALITIES BETWEEN TENSOR UNFOLDINGS ON THE PARTITION LATTICE

    PubMed Central

    Wang, Miaoyan; Duc, Khanh Dao; Fischer, Jonathan; Song, Yun S.

    2017-01-01

    Interest in higher-order tensors has recently surged in data-intensive fields, with a wide range of applications including image processing, blind source separation, community detection, and feature extraction. A common paradigm in tensor-related algorithms advocates unfolding (or flattening) the tensor into a matrix and applying classical methods developed for matrices. Despite the popularity of such techniques, how the functional properties of a tensor change upon unfolding is currently not well understood. In contrast to the body of existing work which has focused almost exclusively on matricizations, we here consider all possible unfoldings of an order-k tensor, which are in one-to-one correspondence with the set of partitions of {1, …, k}. We derive general inequalities between the ℓp-norms of arbitrary unfoldings defined on the partition lattice. In particular, we demonstrate how the spectral norm (p = 2) of a tensor is bounded by that of its unfoldings, and obtain an improved upper bound on the ratio of the Frobenius norm to the spectral norm of an arbitrary tensor. For specially-structured tensors satisfying a generalized definition of orthogonal decomposability, we prove that the spectral norm remains invariant under specific subsets of unfolding operations. PMID:28286347
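
    The objects being compared are straightforward to compute. A sketch for the single-axis unfoldings of an order-3 tensor (the record treats arbitrary partitions, not just these):

        import numpy as np

        rng = np.random.default_rng(6)
        T = rng.normal(size=(4, 5, 6))                    # order-3 tensor

        def unfold(T, mode):
            # Mode-th axis becomes the rows; remaining axes are flattened into columns.
            return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

        fro = np.linalg.norm(T)                           # Frobenius norm of the tensor
        for mode in range(3):
            spec = np.linalg.norm(unfold(T, mode), ord=2) # spectral norm of the unfolding
            assert spec <= fro + 1e-12                    # ||.||_2 <= ||.||_F for any unfolding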

  19. Regularization Parameter Selection for Nonlinear Iterative Image Restoration and MRI Reconstruction Using GCV and SURE-Based Methods

    PubMed Central

    Ramani, Sathish; Liu, Zhihao; Rosen, Jeffrey; Nielsen, Jon-Fredrik; Fessler, Jeffrey A.

    2012-01-01

    Regularized iterative reconstruction algorithms for imaging inverse problems require selection of appropriate regularization parameter values. We focus on the challenging problem of tuning regularization parameters for nonlinear algorithms for the case of additive (possibly complex) Gaussian noise. Generalized cross-validation (GCV) and (weighted) mean-squared error (MSE) approaches (based on Stein's Unbiased Risk Estimate, SURE) need the Jacobian matrix of the nonlinear reconstruction operator (representative of the iterative algorithm) with respect to the data. We derive the desired Jacobian matrix for two types of nonlinear iterative algorithms: a fast variant of the standard iterative reweighted least-squares method and the contemporary split-Bregman algorithm, both of which can accommodate a wide variety of analysis- and synthesis-type regularizers. The proposed approach iteratively computes two weighted SURE-type measures: Predicted-SURE and Projected-SURE (that require knowledge of the noise variance σ²), and GCV (that does not need σ²) for these algorithms. We apply the methods to image restoration and to magnetic resonance image (MRI) reconstruction using total variation (TV) and an analysis-type ℓ1-regularization. We demonstrate through simulations and experiments with real data that minimizing Predicted-SURE and Projected-SURE consistently leads to near-MSE-optimal reconstructions. We also observed that minimizing GCV yields reconstruction results that are near-MSE-optimal for image restoration and slightly sub-optimal for MRI. Theoretical derivations in this work related to Jacobian matrix evaluations can be extended, in principle, to other types of regularizers and reconstruction algorithms. PMID:22531764
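
    For the linear, quadratically regularized special case, GCV reduces to a closed-form score that can simply be scanned over a grid; a sketch of that baseline (the record's contribution is extending such selection to nonlinear iterative algorithms via the Jacobian):

        import numpy as np

        def gcv_score(A, y, lam):
            # GCV(lam) = ||(I - S)y||^2 / (n - tr S)^2 with S = A (A^T A + lam I)^{-1} A^T.
            n, p = A.shape
            S = A @ np.linalg.solve(A.T @ A + lam * np.eye(p), A.T)
            r = y - S @ y
            return (r @ r) / (n - np.trace(S))**2

        rng = np.random.default_rng(9)
        A = rng.normal(size=(100, 40))
        y = A @ rng.normal(size=40) + 0.5 * rng.normal(size=100)
        lams = np.logspace(-3, 2, 30)
        lam_best = lams[int(np.argmin([gcv_score(A, y, l) for l in lams]))]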

  20. How do social norms influence prosocial development?

    PubMed

    House, Bailey R

    2018-04-01

    Humans are both highly prosocial and extremely sensitive to social norms, and some theories suggest that norms are necessary to account for uniquely human forms of prosocial behavior and cooperation. Understanding how norms influence prosocial behavior is thus essential if we are to describe the psychology and development of prosocial behavior. In this article I review recent research from across the social sciences that provides (1) a theoretical model of how norms influence prosocial behavior, (2) empirical support for the model based on studies with adults and children, and (3) predictions about the psychological mechanisms through which norms shape prosocial behavior. I conclude by discussing the need for future studies into how prosocial behavior develops through emerging interactions between culturally varying norms, social cognition, emotions, and potentially genes. Copyright © 2017 Elsevier Ltd. All rights reserved.

  1. Regularized matrix regression

    PubMed Central

    Zhou, Hua; Li, Lexin

    2014-01-01

    Summary Modern technologies are producing a wealth of data with complex structures. For instance, in two-dimensional digital imaging, flow cytometry and electroencephalography, matrix-type covariates frequently arise when measurements are obtained for each combination of two underlying variables. To address scientific questions arising from those data, new regression methods that take matrices as covariates are needed, and sparsity or other forms of regularization are crucial owing to the ultrahigh dimensionality and complex structure of the matrix data. The popular lasso and related regularization methods hinge on the sparsity of the true signal in terms of the number of its non-zero coefficients. However, for the matrix data, the true signal is often of, or can be well approximated by, a low rank structure. As such, the sparsity is frequently in the form of low rank of the matrix parameters, which may seriously violate the assumption of the classical lasso. We propose a class of regularized matrix regression methods based on spectral regularization. A highly efficient and scalable estimation algorithm is developed, and a degrees-of-freedom formula is derived to facilitate model selection along the regularization path. Superior performance of the method proposed is demonstrated on both synthetic and real examples. PMID:24648830
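
    The workhorse of such spectral regularization is the proximal map of the nuclear norm, i.e., soft-thresholding of singular values; a minimal sketch of that one building block (not the full regression estimator):

        import numpy as np

        def svt(B, tau):
            # Singular value thresholding: the prox of tau * ||.||_* at B.
            U, s, Vt = np.linalg.svd(B, full_matrices=False)
            return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

        B = np.random.default_rng(7).normal(size=(20, 30))
        B_low_rank = svt(B, tau=2.0)   # small singular values shrink to exactly zero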

  2. Processing of Regular and Irregular Past-Tense Verb Forms in First and Second Language Reading Acquisition

    ERIC Educational Resources Information Center

    de Zeeuw, Marlies; Schreuder, Rob; Verhoeven, Ludo

    2013-01-01

    We investigated written word identification of regular and irregular past-tense verb forms by first (L1) and second language (L2) learners of Dutch in third and sixth grade. Using a lexical decision task, we measured speed and accuracy in the identification of regular and irregular past-tense verb forms by children from Turkish-speaking homes (L2…

  3. Social and moral norm differences among Portuguese 1st and 6th year medical students towards their intention to comply with hand hygiene.

    PubMed

    Roberto, Magda S; Mearns, Kathryn; Silva, Silvia A

    2012-01-01

    This study examines social and moral norms towards the intention to comply with hand hygiene among Portuguese medical students from the 1st and 6th years (N = 175; 121 from the 1st year, 54 from the 6th year). The study extended the theoretical principles of the theory of planned behaviour and hypothesised that both subjective and moral norms would be the best predictors of 1st and 6th year medical students' intention to comply with hand hygiene, and that the ability of these predictors to explain variance in intention would change according to medical students' school year. Results indicated that the subjective norm whose referent focuses on professors is a relevant predictor of 1st year medical students' intention, while the subjective norm that emphasises the relevance of colleagues predicts the intentions of medical students from the 6th year. In terms of the moral norm, 6th year students' intention is better predicted by a norm that interferes with compliance, whereas intentions of 1st year students are better predicted by a norm that favours compliance. Implications of the findings highlight the importance of role models and mentors as key factors in teaching hand hygiene in medical undergraduate curricula.

  4. English semantic word-pair norms and a searchable Web portal for experimental stimulus creation.

    PubMed

    Buchanan, Erin M; Holmes, Jessica L; Teasley, Marilee L; Hutchison, Keith A

    2013-09-01

    As researchers explore the complexity of memory and language hierarchies, the need to expand normed stimulus databases is growing. Therefore, we present 1,808 words, paired with their features and concept-concept information, that were collected using previously established norming methods (McRae, Cree, Seidenberg, & McNorgan Behavior Research Methods 37:547-559, 2005). This database supplements existing stimuli and complements the Semantic Priming Project (Hutchison, Balota, Cortese, Neely, Niemeyer, Bengson, & Cohen-Shikora 2010). The data set includes many types of words (including nouns, verbs, adjectives, etc.), expanding on previous collections of nouns and verbs (Vinson & Vigliocco Journal of Neurolinguistics 15:317-351, 2008). We describe the relation between our and other semantic norms, as well as giving a short review of word-pair norms. The stimuli are provided in conjunction with a searchable Web portal that allows researchers to create a set of experimental stimuli without prior programming knowledge. When researchers use this new database in tandem with previous norming efforts, precise stimuli sets can be created for future research endeavors.

  5. Adaptation and perceptual norms

    NASA Astrophysics Data System (ADS)

    Webster, Michael A.; Yasuda, Maiko; Haber, Sara; Leonard, Deanne; Ballardini, Nicole

    2007-02-01

    We used adaptation to examine the relationship between perceptual norms--the stimuli observers describe as psychologically neutral, and response norms--the stimulus levels that leave visual sensitivity in a neutral or balanced state. Adapting to stimuli on opposite sides of a neutral point (e.g. redder or greener than white) biases appearance in opposite ways. Thus the adapting stimulus can be titrated to find the unique adapting level that does not bias appearance. We compared these response norms to subjectively defined neutral points both within the same observer (at different retinal eccentricities) and between observers. These comparisons were made for visual judgments of color, image focus, and human faces, stimuli that are very different and may depend on very different levels of processing, yet which share the property that for each there is a well defined and perceptually salient norm. In each case the adaptation aftereffects were consistent with an underlying sensitivity basis for the perceptual norm. Specifically, response norms were similar to and thus covaried with the perceptual norm, and under common adaptation differences between subjectively defined norms were reduced. These results are consistent with models of norm-based codes and suggest that these codes underlie an important link between visual coding and visual experience.

  6. Retaining both discrete and smooth features in 1D and 2D NMR relaxation and diffusion experiments

    NASA Astrophysics Data System (ADS)

    Reci, A.; Sederman, A. J.; Gladden, L. F.

    2017-11-01

    A new method of regularization of 1D and 2D NMR relaxation and diffusion experiments is proposed and a robust algorithm for its implementation is introduced. The new form of regularization, termed the Modified Total Generalized Variation (MTGV) regularization, offers a compromise between distinguishing discrete and smooth features in the reconstructed distributions. The method is compared to the conventional method of Tikhonov regularization and the recently proposed method of L1 regularization, when applied to simulated data of 1D spin-lattice relaxation, T1, 1D spin-spin relaxation, T2, and 2D T1-T2 NMR experiments. A range of simulated distributions composed of two lognormally distributed peaks were studied. The distributions differed with regard to the variance of the peaks, which were designed to investigate a range of distributions containing only discrete, only smooth or both features in the same distribution. Three different signal-to-noise ratios were studied: 2000, 200 and 20. A new metric is proposed to compare the distributions reconstructed from the different regularization methods with the true distributions. The metric is designed to penalise reconstructed distributions which show artefact peaks. Based on this metric, MTGV regularization performs better than Tikhonov and L1 regularization in all cases except when the distribution is known to only comprise of discrete peaks, in which case L1 regularization is slightly more accurate than MTGV regularization.
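
    The Tikhonov baseline that such studies compare against can be written in a few lines for a toy 1-D T2 experiment (a sketch with invented grids and noise levels; MTGV itself requires considerably more machinery):

        import numpy as np
        from scipy.optimize import nnls

        t = np.linspace(1e-3, 1.0, 200)                  # acquisition times (s)
        T2 = np.logspace(-3, 0, 100)                     # relaxation-time grid (s)
        K = np.exp(-t[:, None] / T2[None, :])            # discretized Laplace kernel

        # Two-peak distribution: one sharp (discrete-like), one broad (smooth).
        f_true = (np.exp(-0.5 * ((np.log10(T2) + 2.0) / 0.05)**2)
                  + np.exp(-0.5 * ((np.log10(T2) + 0.7) / 0.25)**2))
        b = K @ f_true + 0.01 * np.random.default_rng(8).normal(size=t.size)

        # Tikhonov with nonnegativity: min ||Kf - b||^2 + lam * ||f||^2, f >= 0,
        # posed as an augmented nonnegative least-squares problem.
        lam = 0.1
        K_aug = np.vstack([K, np.sqrt(lam) * np.eye(T2.size)])
        b_aug = np.concatenate([b, np.zeros(T2.size)])
        f_tik, _ = nnls(K_aug, b_aug)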

  7. A highly regular fucan sulfate from the sea cucumber Stichopus horrens.

    PubMed

    Ustyuzhanina, Nadezhda E; Bilan, Maria I; Dmitrenok, Andrey S; Borodina, Elizaveta Yu; Nifantiev, Nikolay E; Usov, Anatolii I

    2018-02-01

    A highly regular fucan sulfate SHFS was isolated from the sea cucumber Stichopus horrens by extraction of the body walls in the presence of papain followed by ion-exchange and gel permeation chromatography. SHFS had MW of about 140 kDa and contained fucose and sulfate in the molar ratio of about 1:1. Chemical and NMR spectroscopic methods were applied for the structural characterization of the polysaccharide. SHFS was shown to have linear molecules built up of 3-linked α-l-fucopyranose 2-sulfate residues. Anticoagulant properties of SHFS were assessed in vitro in comparison with the LMW heparin (enoxaparin) and totally sulfated 3-linked α-l-fucan. SHFS was found to have the lowest activity, and hence, both sulfate groups at O-2 and O-4 of fucosyl units seem to be important for anticoagulant effect of sulfated homo-(1 → 3)-α-l-fucans. Copyright © 2017 Elsevier Ltd. All rights reserved.

  8. Monoclonal Antibody L1Mab-13 Detected Human PD-L1 in Lung Cancers.

    PubMed

    Yamada, Shinji; Itai, Shunsuke; Nakamura, Takuro; Yanaka, Miyuki; Chang, Yao-Wen; Suzuki, Hiroyoshi; Kaneko, Mika K; Kato, Yukinari

    2018-04-01

    Programmed cell death ligand-1 (PD-L1) is a type I transmembrane glycoprotein expressed on antigen-presenting cells. It is also expressed in several tumor cells such as melanoma and lung cancer cells. A strong correlation has been reported between human PD-L1 (hPD-L1) expression in tumor cells and negative prognosis in cancer patients. Here, a novel anti-hPD-L1 monoclonal antibody (mAb), L1Mab-13 (IgG1, kappa), was produced using a cell-based immunization and screening (CBIS) method. We investigated hPD-L1 expression in lung cancer using flow cytometry, Western blot, and immunohistochemical analyses. L1Mab-13 reacted specifically with hPD-L1 of hPD-L1-overexpressing Chinese hamster ovary (CHO)-K1 cells and with endogenous hPD-L1 of KMST-6 (human fibroblast) cells in flow cytometry and Western blot. Furthermore, L1Mab-13 reacted with lung cancer cell lines (EBC-1, Lu65, and Lu99) in flow cytometry and stained lung cancer tissues in a membrane-staining pattern in immunohistochemical analysis. These results indicate that the novel anti-hPD-L1 mAb L1Mab-13 is very useful for detecting hPD-L1 in lung cancers by flow cytometry, Western blot, and immunohistochemical analyses.

  9. Validation of a Stability-Indicating Method for Methylseleno-L-Cysteine (L-SeMC).

    PubMed

    Canady, Kristin; Cobb, Johnathan; Deardorff, Peter; Larson, Jami; White, Jonathan M; Boring, Dan

    2016-01-01

    Methylseleno-L-cysteine (L-SeMC) is a naturally occurring amino acid analogue used as a general dietary supplement and is being explored as a chemopreventive agent. As a known dietary supplement, L-SeMC is not regulated as a pharmaceutical and there is a paucity of analytical methods available. To address the lack of methodology, a stability-indicating method was developed and validated to evaluate L-SeMC as both the bulk drug and formulated drug product (400 µg Se/capsule). The analytical approach presented is a simple, nonderivatization method that utilizes HPLC with ultraviolet detection at 220 nm. A C18 column with a volatile ion-pair agent and methanol mobile phase was used for the separation. The method accuracy was 99-100% from 0.05 to 0.15 mg/mL L-SeMC for the bulk drug, and 98-99% from 0.075 to 0.15 mg/mL L-SeMC for the drug product. Method precision was <1% for the bulk drug and was 3% for the drug product. The LOQ was 0.1 µg/mL L-SeMC or 0.002 µg L-SeMC on column. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  10. Solving ill-posed control problems by stabilized finite element methods: an alternative to Tikhonov regularization

    NASA Astrophysics Data System (ADS)

    Burman, Erik; Hansbo, Peter; Larson, Mats G.

    2018-03-01

    Tikhonov regularization is one of the most commonly used methods for the regularization of ill-posed problems. In the setting of finite element solutions of elliptic partial differential control problems, Tikhonov regularization amounts to adding suitably weighted least squares terms of the control variable, or derivatives thereof, to the Lagrangian determining the optimality system. In this note we show that the stabilization methods for discretely ill-posed problems developed in the setting of convection-dominated convection-diffusion problems, can be highly suitable for stabilizing optimal control problems, and that Tikhonov regularization will lead to less accurate discrete solutions. We consider some inverse problems for Poisson’s equation as an illustration and derive new error estimates both for the reconstruction of the solution from the measured data and reconstruction of the source term from the measured data. These estimates include both the effect of the discretization error and error in the measurements.

  11. The effect of interview method on self-reported sexual behavior and perceptions of community norms in Botswana.

    PubMed

    Anglewicz, Philip; Gourvenec, Diana; Halldorsdottir, Iris; O'Kane, Cate; Koketso, Obakeng; Gorgens, Marelize; Kasper, Toby

    2013-02-01

    Since self-reports of sensitive behaviors play an important role in HIV/AIDS research, the accuracy of these measures has often been examined. In this paper we (1) examine the effect of three survey interview methods on self-reported sexual behavior and perceptions of community sexual norms in Botswana, and (2) introduce an interview method to research on self-reported sexual behavior in sub-Saharan Africa. Comparing across these three survey methods (face-to-face, ballot box, and randomized response), we find that ballot box and randomized response surveys both provide higher reports of sensitive behaviors; the results for randomized response are particularly strong. Within these overall patterns, however, there is variation by question type; additionally the effect of interview method differs by sex. We also examine interviewer effects to gain insight into the effectiveness of these interview methods, and our results suggest that caution be used when interpreting the differences between survey methods.

  12. Social norms and its correlates as a pathway to smoking among young Latino adults.

    PubMed

    Echeverría, Sandra E; Gundersen, Daniel A; Manderski, Michelle T B; Delnevo, Cristine D

    2015-01-01

    Socially and culturally embedded norms regarding smoking may be one pathway by which individuals adopt smoking behaviors. However, few studies have examined if social norms operate in young adults, a population at high risk of becoming regular smokers. There is also little research examining correlates of social norms in populations with a large immigrant segment, where social norms are likely to differ from the receiving country and could contribute to a better understanding of previously reported acculturation-health associations. Using data from a nationally representative sample of young adults in the United States reached via a novel cell-phone sampling design, we explored the relationships between acculturation proxies (nativity, language spoken and generational status), socioeconomic position (SEP), smoking social norms and current smoking status among Latinos 18-34 years of age (n = 873). Specifically, we examined if a measure of injunctive norms assessed by asking participants about the acceptability of smoking among Latino co-ethnic peers was associated with acculturation proxies and SEP. Results showed a strong gradient in smoking social norms by acculturation proxies, with significantly less acceptance of smoking reported among the foreign-born and increasing acceptance among those speaking only/mostly English at home and third-generation individuals. No consistent and significant pattern in smoking social norms was observed by education, income or employment status, possibly due to the age of the study population. Lastly, those who reported that their Latino peers do not find smoking acceptable were significantly less likely to be current smokers compared to those who said their Latino peers were ambivalent about smoking (do not care either way) in crude models, and in models that adjusted for age, sex, generational status, language spoken, and SEP. This study provides new evidence regarding the role of social norms in shaping smoking behaviors among young Latino adults.

  13. Social norms and its correlates as a pathway to smoking among young Latino adults

    PubMed Central

    Echeverría, Sandra E.; Gundersen, Daniel A.; Manderski, Michelle T.B.; Delnevo, Cristine D.

    2014-01-01

    Socially and culturally embedded norms regarding smoking may be one pathway by which individuals adopt smoking behaviors. However, few studies have examined if social norms operate in young adults, a population at high risk of becoming regular smokers. There is also little research examining correlates of social norms in populations with a large immigrant segment, where social norms are likely to differ from the receiving country and could contribute to a better understanding of previously reported acculturation-health associations. Using data from a nationally representative sample of young adults in the United States reached via a novel cell-phone sampling design, we explored the relationships between acculturation proxies (nativity, language spoken and generational status), socioeconomic position (SEP), smoking social norms and current smoking status among Latinos 18–34 years of age (n=873). Specifically, we examined if a measure of injunctive norms assessed by asking participants about the acceptability of smoking among Latino co-ethnic peers was associated with acculturation proxies and SEP. Results showed a strong gradient in smoking social norms by acculturation proxies, with significantly less acceptance of smoking reported among the foreign-born and increasing acceptance among those speaking only/mostly English at home and third-generation individuals. No consistent and significant pattern in smoking social norms was observed by education, income or employment status, possibly due to the age of the study population. Lastly, those who reported that their Latino peers do not find smoking acceptable were significantly less likely to be current smokers compared to those who said their Latino peers were ambivalent about smoking (do not care either way) in crude models, and in models that adjusted for age, sex, generational status, language spoken, and SEP. This study provides new evidence regarding the role of social norms in shaping smoking behaviors among young Latino adults.

  14. EEG Arousal Norms by Age

    PubMed Central

    Bonnet, Michael H.; Arand, Donna L.

    2007-01-01

    Study Objectives: Brief arousals have been systematically scored during sleep for more than 20 years. Despite significant knowledge concerning the importance of arousals for the sleep process in normal subjects and patients, comprehensive age norms have not been published. Methods: Seventy-six normal subjects (40 men) without sleep apnea or periodic limb movements of sleep, aged 18 to 70 years, slept in the sleep laboratory for 1 or more nights. Sleep and arousal data were scored by the same scorer for the first night (comparable to clinical polysomnograms) and summarized by age decade. Results: There were no statistically significant differences for sex or interaction of sex by age (p > .5 for both). The mean arousal index increased as a function of age. Newman-Keuls comparisons (.05) showed arousal index in the 18- to 20-year and 21- to 30-year age groups to be significantly less than the arousal index in the other 4 age groups. Arousal index in the 31-to 40-year and 41-to 50-year groups was significantly less than the arousal index in the older groups. The arousal index was significantly negatively correlated with total sleep time and all sleep stages (positive correlation with stage 1 and wake). Conclusions: Brief arousals are an integral component of the sleep process. They increase with other electroencephalographic markers as a function of age. They are highly correlated with traditional sleep-stage amounts and are related to major demographic variables. Age-related norms may make identification of pathologic arousal easier. Citations: Bonnet M; Arand D. EEG Arousal Norms by Age. J Clin Sleep Med 2007;3(3):271–274 PMID:17561594

  15. Norms as Group-Level Constructs: Investigating School-Level Teen Pregnancy Norms and Behaviors.

    PubMed

    Mollborn, Stefanie; Domingue, Benjamin W; Boardman, Jason D

    2014-09-01

    Social norms are a group-level phenomenon, but past quantitative research has rarely measured them in the aggregate or considered their group-level properties. We used the school-based design of the National Longitudinal Study of Adolescent Health to measure normative climates regarding teen pregnancy across 75 U.S. high schools. We distinguished between the strength of a school's norm against teen pregnancy and the consensus around that norm. School-level norm strength and dissensus were strongly (r = -0.65) and moderately (r = 0.34) associated with pregnancy prevalence within schools, respectively. Normative climate partially accounted for observed racial differences in school pregnancy prevalence, but norms were a stronger predictor than racial composition. As hypothesized, schools with both a stronger average norm against teen pregnancy and greater consensus around the norm had the lowest pregnancy prevalence. Results highlight the importance of group-level normative processes and of considering the local school environment when designing policies to reduce teen pregnancy.

  16. Norms as Group-Level Constructs: Investigating School-Level Teen Pregnancy Norms and Behaviors

    PubMed Central

    Mollborn, Stefanie; Domingue, Benjamin W.; Boardman, Jason D.

    2015-01-01

    Social norms are a group-level phenomenon, but past quantitative research has rarely measured them in the aggregate or considered their group-level properties. We used the school-based design of the National Longitudinal Study of Adolescent Health to measure normative climates regarding teen pregnancy across 75 U.S. high schools. We distinguished between the strength of a school's norm against teen pregnancy and the consensus around that norm. School-level norm strength and dissensus were strongly (r = -0.65) and moderately (r = 0.34) associated with pregnancy prevalence within schools, respectively. Normative climate partially accounted for observed racial differences in school pregnancy prevalence, but norms were a stronger predictor than racial composition. As hypothesized, schools with both a stronger average norm against teen pregnancy and greater consensus around the norm had the lowest pregnancy prevalence. Results highlight the importance of group-level normative processes and of considering the local school environment when designing policies to reduce teen pregnancy. PMID:26074628

  17. Coordinating bracket torque and incisor inclination: Part 3: Validity of bracket torque values in achieving norm inclinations.

    PubMed

    Zimmer, Bernd; Sino, Hiba

    2018-03-19

    To analyze common values of bracket torque (Andrews, Roth, MBT, Ricketts) for their validity in achieving incisor inclinations that are considered normal by different cephalometric standards. Using the equations developed in part 1, eU1(BOP) = 90° - BT(U1) - TCA(U1) + α₁ - α₂ and eL1(BOP) = 90° - BT(L1) - TCA(L1) + β₁ - β₂ (abbreviations: see part 1), and the mean values (± SD) obtained as statistical measures in parts 1 and 2 of the study (α₁ and β₁ [1.7° ± 0.7°], α₂ [3.6° ± 0.3°], β₂ [3.2° ± 0.4°], TCA(U1) [24.6° ± 3.6°] and TCA(L1) [22.9° ± 4.3°]), expected (= theoretically anticipated) values were calculated for upper and lower incisors (U1 and L1) and compared to targeted (= cephalometric norm) values. For U1, there was no overlap between the ranges of expected and targeted values, as the lowest targeted value (58.3°; Ricketts) was higher than the highest expected value (56.5°; Andrews) relative to the bisected occlusal plane (BOP). Thus all of these torque systems aim for flatter inclinations than prescribed by any of the norm values. Depending on target values, the various bracket systems fell short by 1.8-5.5° (Andrews), 6.8-10.5° (Roth), 11.8-15.5° (MBT), or 16.8-20.5° (Ricketts). For L1, there was good agreement of the MBT system with the Ricketts and Björk target values (Δ0.1° and Δ-0.8°, respectively), and both the Roth and Ricketts systems came close to the Bergen target value (both Δ2.3°). Depending on target values, the ranges of deviation for L1 were 6.3-13.2° for Andrews (Class II prescription), 2.3-9.2° for Roth, -3.7 to -3.2° for MBT, and 2.3-9.2° for Ricketts. Common values of upper incisor bracket torque do not have acceptable validity in achieving normal incisor inclinations. A careful selection of lower bracket torque may provide satisfactory matching with some of the targeted norm values.
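
    As a quick numerical check of the part 1 equation quoted above, the expected inclination can be recomputed directly. The sketch below uses the abstract's mean values; the +7° bracket torque is an assumed illustrative Andrews-style U1 prescription, not a value given in the abstract, and it reproduces the reported 56.5° expected value.

```python
# Minimal sketch: expected incisor inclination relative to the bisected
# occlusal plane (BOP), from eU1(BOP) = 90 - BT(U1) - TCA(U1) + a1 - a2.

def expected_inclination(bt, tca, a1, a2):
    """Expected inclination (degrees) per the part 1 equation."""
    return 90.0 - bt - tca + a1 - a2

# Mean values from the abstract: TCA(U1) = 24.6, a1 = 1.7, a2 = 3.6;
# BT(U1) = +7 deg is an assumed Andrews-style prescription.
print(expected_inclination(bt=7.0, tca=24.6, a1=1.7, a2=3.6))  # -> 56.5
```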

  18. Convergence of Proximal Iteratively Reweighted Nuclear Norm Algorithm for Image Processing.

    PubMed

    Sun, Tao; Jiang, Hao; Cheng, Lizhi

    2017-08-25

    The nonsmooth and nonconvex regularization has many applications in imaging science and machine learning research due to its excellent recovery performance. A proximal iteratively reweighted nuclear norm algorithm has been proposed for the nonsmooth and nonconvex matrix minimizations. In this paper, we aim to investigate the convergence of the algorithm. With the Kurdyka-Łojasiewicz property, we prove the algorithm globally converges to a critical point of the objective function. The numerical results presented in this paper coincide with our theoretical findings.

  19. Visual tracking based on the sparse representation of the PCA subspace

    NASA Astrophysics Data System (ADS)

    Chen, Dian-bing; Zhu, Ming; Wang, Hui-li

    2017-09-01

    We construct a collaborative model of the sparse representation and the subspace representation. First, we represent the tracking target in the principal component analysis (PCA) subspace, and then we employ an L1 regularization term to restrict the sparsity of the residual term, an L2 regularization term to restrict the representation coefficients, and an L2 norm to restrict the distance between the reconstruction and the target. We then implement the algorithm in the particle filter framework. Furthermore, an iterative method is presented to obtain the global minimum of the residual and the coefficients. Finally, an alternative template update scheme is adopted to avoid the tracking drift caused by inaccurate updates. In the experiment, we test the algorithm on 9 sequences and compare the results with 5 state-of-the-art methods. According to the results, we conclude that our algorithm is more robust than the other methods.
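
    The alternating structure described above (L2-regularized subspace coefficients plus an L1-sparse residual) can be sketched in a few lines. This is a generic reading of the model, not the authors' implementation; the variable names, penalty weights, and iteration count are assumptions.

```python
import numpy as np

def track_representation(x, U, lam1=0.1, lam2=0.01, iters=20):
    """Alternate a ridge solve for PCA coefficients c with
    soft-thresholding of the sparse residual e, for x ~ U c + e."""
    d, k = U.shape
    e = np.zeros(d)
    A = U.T @ U + lam2 * np.eye(k)                 # L2-regularized system
    for _ in range(iters):
        c = np.linalg.solve(A, U.T @ (x - e))      # coefficient step
        r = x - U @ c
        e = np.sign(r) * np.maximum(np.abs(r) - lam1, 0.0)  # L1 prox step
    return c, e

# Toy usage: a 100-dim observation, 10 PCA basis vectors, sparse occlusion.
rng = np.random.default_rng(0)
U, _ = np.linalg.qr(rng.standard_normal((100, 10)))
x = U @ rng.standard_normal(10)
x[rng.choice(100, 5, replace=False)] += 2.0        # simulated occlusion
c, e = track_representation(x, U)
```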

  20. Pragmatic mode-sum regularization method for semiclassical black-hole spacetimes

    NASA Astrophysics Data System (ADS)

    Levi, Adam; Ori, Amos

    2015-05-01

    Computation of the renormalized stress-energy tensor is the most serious obstacle in studying the dynamical, self-consistent, semiclassical evaporation of a black hole in 4D. The difficulty arises from the delicate regularization procedure for the stress-energy tensor, combined with the fact that in practice the modes of the field need to be computed numerically. We have developed a new method for numerical implementation of the point-splitting regularization in 4D, applicable to the renormalized stress-energy tensor as well as to ⟨ϕ²⟩ren, namely the renormalized ⟨ϕ²⟩. So far we have formulated two variants of this method: t-splitting (aimed at stationary backgrounds) and angular splitting (for spherically symmetric backgrounds). In this paper we introduce our basic approach, and then focus on the t-splitting variant, which is the simpler of the two (deferring the angular-splitting variant to a forthcoming paper). We then use this variant, as a first stage, to calculate ⟨ϕ²⟩ren in Schwarzschild spacetime, for a massless scalar field in the Boulware state. We compare our results to previous ones, obtained by a different method, and find full agreement. We discuss how this approach can be applied (using the angular-splitting variant) to analyze the dynamical self-consistent evaporation of black holes.

  1. Social Norms about a Health Issue in Work Group Networks

    PubMed Central

    Frank, Lauren B.

    2015-01-01

    The purpose of this study is to advance theorizing about how small groups understand health issues through the use of social network analysis. To achieve this goal, an adapted cognitive social structure design is used to examine group social norms around a specific health issue, H1N1 flu prevention. As predicted, individuals' attitudes, self-efficacy, and perceived social norms were each positively associated with behavioral intentions for at least one of the H1N1 health behaviors studied. Moreover, collective norms of the whole group were also associated with behavioral intentions, even after controlling for how individual group members perceive those norms. For members of work groups in which pairs were perceived to agree in their support for H1N1 vaccination, the effect of individually perceived group norms on behavioral intentions was stronger than for groups with less agreement. PMID:26389934

  2. Smoothed low rank and sparse matrix recovery by iteratively reweighted least squares minimization.

    PubMed

    Lu, Canyi; Lin, Zhouchen; Yan, Shuicheng

    2015-02-01

    This paper presents a general framework for solving low-rank and/or sparse matrix minimization problems, which may involve multiple nonsmooth terms. The iteratively reweighted least squares (IRLS) method is a fast solver, which smooths the objective function and minimizes it by alternately updating the variables and their weights. However, traditional IRLS can only solve sparse-only or low-rank-only minimization problems with a squared loss or an affine constraint. This paper generalizes IRLS to solve joint/mixed low-rank and sparse minimization problems, which are essential formulations for many tasks. As a concrete example, we solve the Schatten-p norm and l2,q-norm regularized low-rank representation problem by IRLS, and theoretically prove that the derived solution is a stationary point (globally optimal if p,q ≥ 1). Our convergence proof of IRLS is more general than previous ones, which depend on the special properties of the Schatten-p norm and l2,q-norm. Extensive experiments on both synthetic and real data sets demonstrate that our IRLS is much more efficient.
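
    To illustrate the reweighting idea on the simplest member of this family, the sketch below applies IRLS to an l1-regularized least squares problem; the joint Schatten-p/l2,q case in the paper follows the same pattern with matrix-valued weights. The smoothing constant and iteration count are assumptions.

```python
import numpy as np

def irls_l1(A, b, lam=0.1, eps=1e-6, iters=50):
    """IRLS sketch for min_x 0.5*||A x - b||^2 + lam*||x||_1.
    Each |x_i| is smoothed as sqrt(x_i^2 + eps) and majorized at the
    current iterate, so every iteration is a weighted least-squares solve."""
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        w = lam / np.sqrt(x**2 + eps)   # weights from the current iterate
        x = np.linalg.solve(A.T @ A + np.diag(w), A.T @ b)
    return x
```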

  3. Total variation superiorized conjugate gradient method for image reconstruction

    NASA Astrophysics Data System (ADS)

    Zibetti, Marcelo V. W.; Lin, Chuan; Herman, Gabor T.

    2018-03-01

    The conjugate gradient (CG) method is commonly used for the relatively rapid solution of least squares problems. In image reconstruction, the problem can be ill-posed and contaminated by noise; due to this, approaches such as regularization should be utilized. Total variation (TV) is a useful regularization penalty, frequently utilized in image reconstruction for generating images with sharp edges. When a non-quadratic norm such as TV is selected for regularization, it is no longer possible to use CG. Non-linear CG is an alternative, but it does not share the efficiency that CG shows with least squares, and methods such as fast iterative shrinkage-thresholding algorithms (FISTA) are preferred for problems with the TV norm. A different approach to including prior information is superiorization. In this paper it is shown that the conjugate gradient method can be superiorized. Five different CG variants are proposed, including preconditioned CG. The CG methods superiorized by the total variation norm are presented and their performance in image reconstruction is demonstrated. It is illustrated that some of the proposed variants of the superiorized CG method can produce reconstructions of superior quality to those produced by FISTA, and in less computational time, due to the speed of the original CG for least squares problems. In the Appendix we examine the behavior of one of the superiorized CG methods (we call it S-CG); one of its input parameters is a positive number ɛ. It is proved that, for any given ɛ that is greater than the half-squared-residual for the least squares solution, S-CG terminates in a finite number of steps with an output for which the half-squared-residual is less than or equal to ɛ. Importantly, it is also the case that the output will have a lower value of TV than what would be provided by unsuperiorized CG for the same value ɛ of the half-squared residual.
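
    The superiorization idea itself is easy to sketch: between sweeps of an ordinary CG solver, take small TV-nonascending perturbation steps with shrinking step sizes. The sketch below restarts CG on the normal equations after each perturbation for simplicity, which differs from the paper's S-CG variants; all parameter choices are assumptions.

```python
import numpy as np

def tv(x):
    """Anisotropic total variation of a 2-D image."""
    return np.abs(np.diff(x, axis=0)).sum() + np.abs(np.diff(x, axis=1)).sum()

def tv_subgradient(x):
    """A subgradient of the anisotropic TV."""
    g = np.zeros_like(x)
    dx = np.sign(np.diff(x, axis=0)); dy = np.sign(np.diff(x, axis=1))
    g[:-1, :] -= dx; g[1:, :] += dx
    g[:, :-1] -= dy; g[:, 1:] += dy
    return g

def cg_normal_eq(A, b, x0, iters):
    """A few CG iterations on the normal equations A^T A x = A^T b."""
    x = x0.copy()
    r = A.T @ (b - A @ x)
    p = r.copy()
    for _ in range(iters):
        Ap = A.T @ (A @ p)
        alpha = (r @ r) / (p @ Ap)
        x = x + alpha * p
        r_new = r - alpha * Ap
        p = r_new + ((r_new @ r_new) / (r @ r)) * p
        r = r_new
    return x

def superiorized_cg(A, b, shape, outer=30, inner=3, beta0=1.0, gamma=0.9):
    """Interleave TV-nonascending perturbations with short CG sweeps."""
    x = np.zeros(shape)
    beta = beta0
    for _ in range(outer):
        d = tv_subgradient(x)
        nd = np.linalg.norm(d)
        if nd > 0:
            cand = x - beta * d / nd
            if tv(cand) <= tv(x):       # accept only TV-nonascending steps
                x = cand
        beta *= gamma                   # shrinking perturbation sizes
        x = cg_normal_eq(A, b, x.ravel(), inner).reshape(shape)
    return x
```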

  4. An interior-point method for total variation regularized positron emission tomography image reconstruction

    NASA Astrophysics Data System (ADS)

    Bai, Bing

    2012-03-01

    There has been a lot of work on total variation (TV) regularized tomographic image reconstruction recently. Many of these methods use gradient-based optimization algorithms with a differentiable approximation of the TV functional. In this paper we apply TV regularization to positron emission tomography (PET) image reconstruction. We reconstruct the PET image in a Bayesian framework, using a Poisson noise model and a TV prior functional. The original optimization problem is transformed into an equivalent problem with inequality constraints by adding auxiliary variables. Then we use an interior point method with logarithmic barrier functions to solve the constrained optimization problem. In this method, a series of points approaching the solution from inside the feasible region is found by solving a sequence of subproblems characterized by an increasing positive parameter. We use a preconditioned conjugate gradient (PCG) algorithm to solve the subproblems directly. The nonnegativity constraint is enforced by a bent line search. The exact expression of the TV functional is used in our calculations. Simulation results show that the algorithm converges quickly and that the convergence is insensitive to the values of the regularization and reconstruction parameters.
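
    The central-path mechanics described above can be illustrated on a toy problem. The sketch below applies a logarithmic barrier to a nonnegativity-constrained quadratic and uses plain gradient steps in place of the paper's PCG subproblem solves, so it shows only the barrier/continuation structure, not the PET reconstruction itself; all parameters are assumptions.

```python
import numpy as np

def barrier_solve(Q, c, t0=1.0, mu=10.0, outer=8, inner=500, lr=1e-3):
    """Log-barrier sketch for min 0.5*x^T Q x + c^T x  s.t.  x >= 0.
    Minimizes t*f(x) - sum(log x_i) for an increasing sequence of t,
    tracing the central path toward the constrained minimizer."""
    x = np.ones(len(c))                 # strictly feasible start
    t = t0
    for _ in range(outer):
        for _ in range(inner):          # gradient steps on the barrier objective
            g = t * (Q @ x + c) - 1.0 / x
            x_new = x - lr * g
            if np.any(x_new <= 0):      # shrink the step to stay interior
                lr *= 0.5
                continue
            x = x_new
        t *= mu                         # tighten the barrier
    return x
```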

  5. Probing the A1 to L1₀ transformation in FeCuPt using the first-order reversal curve method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gilbert, Dustin A.; Liu, Kai; Liao, Jung-Wei

    2014-08-01

    The A1-L1₀ phase transformation has been investigated in (001) FeCuPt thin films prepared by atomic-scale multilayer sputtering and rapid thermal annealing (RTA). Traditional x-ray diffraction is not always applicable in generating a true order parameter, due to non-ideal crystallinity of the A1 phase. Using the first-order reversal curve (FORC) method, the A1 and L1₀ phases are deconvoluted into two distinct features in the FORC distribution, whose relative intensities change with the RTA temperature. The L1₀ ordering takes place via a nucleation-and-growth mode. A magnetization-based phase fraction is extracted, providing a quantitative measure of the L1₀ phase homogeneity.
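
    For reference, the FORC distribution underlying this deconvolution is the mixed second derivative ρ(Hr, H) = -½ ∂²M/∂Hr∂H of the measured reversal curves. A minimal finite-difference sketch is below; it omits the polynomial smoothing used in practice, and the grid layout is an assumption.

```python
import numpy as np

def forc_distribution(M, H, Hr):
    """FORC distribution rho = -0.5 * d2M/(dHr dH) from a grid of
    magnetization values M[i, j] = M(Hr[i], H[j]) measured along
    first-order reversal curves."""
    dM_dH = np.gradient(M, H, axis=1)      # derivative along the field sweep
    d2M = np.gradient(dM_dH, Hr, axis=0)   # then along the reversal field
    return -0.5 * d2M
```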

  6. LRSSLMDA: Laplacian Regularized Sparse Subspace Learning for MiRNA-Disease Association prediction

    PubMed Central

    Huang, Li

    2017-01-01

    Predicting novel microRNA (miRNA)-disease associations is clinically significant due to miRNAs' potential roles as diagnostic biomarkers and therapeutic targets for various human diseases. Previous studies have demonstrated the viability of utilizing different types of biological data to computationally infer new disease-related miRNAs. Yet researchers face the challenge of how to effectively integrate diverse datasets and make reliable predictions. In this study, we presented a computational model named Laplacian Regularized Sparse Subspace Learning for MiRNA-Disease Association prediction (LRSSLMDA), which projected miRNAs'/diseases' statistical feature profile and graph theoretical feature profile to a common subspace. It used Laplacian regularization to preserve the local structures of the training data and an L1-norm constraint to select important miRNA/disease features for prediction. The strength of dimensionality reduction enabled the model to be easily extended to much higher dimensional datasets than those exploited in this study. Experimental results showed that LRSSLMDA outperformed ten previous models: the AUC of 0.9178 in global leave-one-out cross validation (LOOCV) and the AUC of 0.8418 in local LOOCV indicated the model's superior prediction accuracy; and the average AUC of 0.9181 ± 0.0004 in 5-fold cross validation justified its accuracy and stability. In addition, three types of case studies further demonstrated its predictive power. Potential miRNAs related to Colon Neoplasms, Lymphoma, Kidney Neoplasms, Esophageal Neoplasms and Breast Neoplasms were predicted by LRSSLMDA. Respectively, 98%, 88%, 96%, 98% and 98% of the top 50 predictions were validated by experimental evidence. Therefore, we conclude that LRSSLMDA would be a valuable computational tool for miRNA-disease association prediction. PMID:29253885

  7. Fraction of exhaled nitric oxide (FeNO) norms in healthy North African children 5-16 years old.

    PubMed

    Rouatbi, Sonia; Alqodwa, Ashraf; Ben Mdella, Samia; Ben Saad, Helmi

    2013-10-01

    (i) To identify factors that influence FeNO values in healthy North African, Arab children aged 6-16 years; (ii) to test the applicability and reliability of previously published FeNO norms; and (iii) if needed, to establish FeNO norms in this population and to prospectively assess their reliability. This was a cross-sectional analytical study. A convenience sample of healthy Tunisian children, aged 6-16 years, was recruited. Subjects first responded to two questionnaires, and then FeNO levels were measured by an online method with an electrochemical analyzer (Medisoft, Sorinnes [Dinant], Belgium). Anthropometric and spirometric data were collected. Simple and multiple linear regressions were determined. The 95% confidence interval (95% CI) and upper limit of normal (ULN) were defined. Two hundred eleven children (107 boys) were retained. Anthropometric data, gender, socioeconomic level, obesity or puberty status, and sports activity were not independent influencing variables. Total-sample FeNO data appeared to be influenced only by maximum mid-expiratory flow (l·s⁻¹; r² = 0.0236, P = 0.0516). For boys, only the forced expiratory volume in 1 second (l) explained a slight (r² = 0.0451) but significant part of FeNO variability (P = 0.0281). For girls, FeNO was not significantly correlated with any of the children's measured data. For North African/Arab children, FeNO values were significantly lower than in other populations, and the available published FeNO norms did not reliably predict FeNO in our population. The mean ± SD (95% CI ULN, minimum-maximum) of FeNO (ppb) for the total sample was 5.0 ± 2.9 (5.4, 1.0-17.0). For North African, Arab children of any age, any FeNO value greater than 17.0 ppb may be considered abnormal. Finally, in an additional group of children prospectively assessed, we found no child with a FeNO higher than 17.0 ppb. Our FeNO norms enrich the global repository of FeNO norms the pediatrician can use to choose

  8. Quantifying the Quality Difference between L1 and L2 Essays: A Rating Procedure with Bilingual Raters and L1 and L2 Benchmark Essays

    ERIC Educational Resources Information Center

    Tillema, Marion; van den Bergh, Huub; Rijlaarsdam, Gert; Sanders, Ted

    2013-01-01

    It is the consensus that, as a result of the extra constraints placed on working memory, texts written in a second language (L2) are usually of lower quality than texts written in the first language (L1) by the same writer. However, no method is currently available for quantifying the quality difference between L1 and L2 texts. In the present…

  9. ON THE BASIS PROPERTY OF THE HAAR SYSTEM IN THE SPACE $\mathscr{L}^{p(t)}([0, 1])$ AND THE PRINCIPLE OF LOCALIZATION IN THE MEAN

    NASA Astrophysics Data System (ADS)

    Sharapudinov, I. I.

    1987-02-01

    Let $p = p(t)$ be a measurable function defined on $[0, 1]$. If $p(t)$ is essentially bounded on $[0, 1]$, denote by $\mathscr{L}^{p(t)}([0, 1])$ the set of measurable functions $f$ defined on $[0, 1]$ for which $\int_0^1 |f(t)|^{p(t)}\,dt < \infty$. The space $\mathscr{L}^{p(t)}([0, 1])$ with $p(t) \geq 1$ is a normed space with norm $$\|f\|_p = \inf\Big\{\alpha > 0 : \int_0^1 \Big|\frac{f(t)}{\alpha}\Big|^{p(t)}\,dt \leq 1\Big\}.$$ This paper examines the question of whether the Haar system is a basis in $\mathscr{L}^{p(t)}([0, 1])$. Conditions on the function $p(t)$ that are in a certain sense definitive in order that the Haar system be a basis of $\mathscr{L}^{p(t)}([0, 1])$ are obtained. The concept of a localization principle in the mean is introduced, and its connection with the space $\mathscr{L}^{p(t)}([0, 1])$ is exhibited. Bibliography: 2 titles.

  10. Reference gene selection for molecular studies of dormancy in wild oat (Avena fatua L.) caryopses by RT-qPCR method.

    PubMed

    Ruduś, Izabela; Kępczyński, Jan

    2018-01-01

    Molecular studies of primary and secondary dormancy in Avena fatua L., a serious weed of cereal and other crops, are intended to reveal the species-specific details of the underlying molecular mechanisms, which in turn may be usable in weed management. Among others, quantitative real-time PCR (RT-qPCR) data from comparative gene expression analysis may give some insight into the involvement of particular wild oat genes in dormancy release, maintenance or induction by unfavorable conditions. To ensure biologically significant results using this method, the expression stability of selected candidate reference genes in different data subsets was evaluated using four statistical algorithms, i.e. geNorm, NormFinder, BestKeeper and the ΔCt method. Although some discrepancies in their ranking outputs were noticed, two ubiquitin-conjugating enzyme homologs, AfUBC1 and AfUBC2, as well as one homolog of glyceraldehyde 3-phosphate dehydrogenase, AfGAPDH1, and the TATA-binding protein AfTBP2, appeared more stably expressed than AfEF1a (translation elongation factor 1α), AfGAPDH2 or the least stable α-tubulin homolog AfTUA1 in caryopses and seedlings of A. fatua. Gene expression analysis of a dormancy-related wild oat transcription factor, VIVIPAROUS1 (AfVP1), allowed for validation of the candidate reference genes' performance. Based on the obtained results, it can be recommended that the normalization factor calculated as the geometric mean of the Cq values of AfUBC1, AfUBC2 and AfGAPDH1 would be optimal for normalizing RT-qPCR results in experiments comprising A. fatua caryopses of different dormancy status.
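
    The recommended normalization factor is simply a geometric mean; a minimal sketch, following the abstract's wording and using hypothetical Cq values, is below.

```python
import numpy as np

def normalization_factor(cq_ubc1, cq_ubc2, cq_gapdh1):
    """Normalization factor as the geometric mean of the three
    reference-gene Cq values, as recommended in the abstract."""
    cq = np.array([cq_ubc1, cq_ubc2, cq_gapdh1], dtype=float)
    return float(np.exp(np.mean(np.log(cq))))

# Hypothetical Cq values for one caryopsis sample:
print(normalization_factor(21.3, 22.1, 19.8))
```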

  11. Mobility of radionuclides and trace elements in soil from legacy NORM and undisturbed naturally 232Th-rich sites.

    PubMed

    Mrdakovic Popic, Jelena; Meland, Sondre; Salbu, Brit; Skipperud, Lindis

    2014-05-01

    Investigation of radionuclides (232Th and 238U) and trace elements (Cr, As and Pb) in soil from two legacy NORM (former mining) sites and one undisturbed naturally 232Th-rich site was conducted as part of the ongoing environmental impact assessment in the Fen Complex area (Norway). The major objectives were to determine the radionuclide and trace element distribution and mobility in soils, and to analyze possible differences between legacy NORM and surrounding undisturbed naturally 232Th-rich soils. An inhomogeneous soil distribution of radionuclides and trace elements was observed at each of the investigated sites. The concentration of 232Th was high (up to 1685 mg kg⁻¹, i.e., ∼7000 Bq kg⁻¹) and exceeded the screening value for radioactive waste material in Norway (1 Bq g⁻¹). Based on the sequential extraction results, the majority of 232Th and the trace elements were rather inert, irreversibly bound to soil. Uranium was found to be potentially more mobile, as it was associated with pH-sensitive soil phases, redox-sensitive amorphous soil phases and soil organic compounds. Comparison of the sequential extraction datasets from the three investigated sites revealed increased mobility of all analyzed elements at the legacy NORM sites in comparison with the undisturbed 232Th-rich site. Similarly, the distribution coefficients Kd(232Th) and Kd(238U) suggested elevated dissolution, mobility and transport at the legacy NORM sites, especially at the decommissioned Nb-mining site (346 and 100 L kg⁻¹ for 232Th and 238U, respectively), while higher sorption of radionuclides was demonstrated at the undisturbed 232Th-rich site (10,672 and 506 L kg⁻¹ for 232Th and 238U, respectively). In general, although the concentration ranges of radionuclides and trace elements were similarly wide both at the legacy NORM and at the undisturbed 232Th-rich sites, the results of the soil sequential extractions together with the Kd values supported the expected differences between the two types of site.

  12. Radiological criteria for unrestricted use of sites containing norm

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bernhardt, D.E.; Rogers, V.C.; Nielson, K.K.

    1996-06-01

    Naturally occurring radioactive material (NORM) is redistributed in the environment as a result of many mineral recovery and processing industries. There are no federal regulations specifying criteria for NORM-contaminated sites; however, several states have promulgated regulations. The regulations promulgated by the states generally focus on NORM from oil and gas production, the primary focus of the assessments in this paper. The criteria for residual NORM in soil are generally (1) 0.18 Bq g⁻¹ in the surface 15 cm of soil and 0.6 Bq g⁻¹ at depth, or (2) 1.1 Bq g⁻¹ with a limitation on the radon flux. The primary radiation dose pathways for unrestricted use of land are external gamma exposure and exposure related to indoor radon. Radiation doses vary by over an order of magnitude based on different ratios of 226Ra to 228Ra concentrations, biological uptake parameters related to NORM, and the different radon emanation factors for oil field scale and sludge. A 226Ra criterion of 1.1 Bq g⁻¹ results in a dose of about 2 mSv y⁻¹ from external gamma and about 3 mSv y⁻¹ from radon for a general crawl-space-type residence scenario. The chemical and physical characteristics of the NORM and site-specific factors are important considerations in the assessments. The radon dose would be about 3 times higher for NORM in sludge versus the assumption of pipe scale. The structural characteristics of residences (e.g., slab-on-grade, crawl-space, or trailer) also have a significant impact on the potential doses to residents.

  13. Silence and table manners: when environments activate norms.

    PubMed

    Joly, Janneke F; Stapel, Diederik A; Lindenberg, Siegwart M

    2008-08-01

    Two studies tested the conditions under which an environment (e.g., library, restaurant) raises the relevance of environment-specific social norms (e.g., being quiet, using table manners). As hypothesized, the relevance of such norms is raised when environments are goal relevant ("I am going there later") and when they are humanized with people or the remnants of their presence (e.g., a glass of wine on a table). Two studies show that goal-relevant environments and humanized environments raise the perceived importance of norms (Study 1) and the intention to conform to norms (Study 2). Interestingly, in both studies, these effects reach beyond norms related to the environments used in the studies.

  14. Brain vascular image enhancement based on gradient adjust with split Bregman

    NASA Astrophysics Data System (ADS)

    Liang, Xiao; Dong, Di; Hui, Hui; Zhang, Liwen; Fang, Mengjie; Tian, Jie

    2016-04-01

    Light sheet microscopy (LSM) is a high-resolution fluorescence microscopy technique which makes it possible to observe the mouse brain vascular network clearly with immunostaining. However, micro-vessels are stained by few fluorescence antibodies and their signals are much weaker than those of large vessels, which makes micro-vessels unclear in LSM images. In this work, we developed a vascular image enhancement method to enhance micro-vessel details, which should be useful for vessel statistics analysis. Since the gradient describes the edge information of a vessel, the main idea of our method is to increase the gradient values of the enhanced image to improve micro-vessel contrast. Our method contains two steps: 1) calculate the gradient image of the LSM image, then amplify high gradient values of the original image to enhance vessel edges and suppress low gradient values to remove noise; we then formulate a new L1-norm regularization optimization problem to find an image with the expected gradient while keeping the main structure information of the original image. 2) The split Bregman iteration method is used to solve the L1-norm regularization problem and generate the final enhanced image. The main advantage of the split Bregman method is that it has both fast convergence and low memory cost. In order to verify the effectiveness of our method, we applied it to a series of mouse brain vascular images acquired from a commercial LSM system in our lab. The experimental results showed that our method could greatly enhance micro-vessel edges that were unclear in the original images.
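
    A generic split Bregman sketch for this kind of gradient-matching problem is shown below: the split variables are soft-thresholded, the Bregman variables accumulate the constraint violation, and the quadratic u-subproblem is solved exactly in Fourier space. The objective form min_u lam/2·||u - f||² + ||grad(u) - g||₁, the periodic boundary handling, and all parameter values are assumptions, not the paper's exact formulation.

```python
import numpy as np

def grad(u):
    """Forward differences with periodic boundaries."""
    return np.roll(u, -1, 0) - u, np.roll(u, -1, 1) - u

def div(px, py):
    """Discrete divergence, the negative adjoint of grad."""
    return (px - np.roll(px, 1, 0)) + (py - np.roll(py, 1, 1))

def shrink(v, t):
    """Soft-thresholding, the prox of the l1 norm."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def split_bregman_gradient_match(f, gx, gy, lam=1.0, mu=5.0, iters=50):
    """Sketch of min_u lam/2*||u-f||^2 + ||grad(u)-g||_1 via split Bregman.
    (dx, dy) are the split variables, (bx, by) the Bregman variables; the
    u-step is solved exactly with FFTs (periodic boundaries assumed)."""
    dx, dy = np.zeros_like(f), np.zeros_like(f)
    bx, by = np.zeros_like(f), np.zeros_like(f)
    u = f.copy()
    n0, n1 = f.shape
    # Fourier symbol of lam*I - mu*div(grad(.)) on the periodic grid
    w0 = 2 * np.cos(2 * np.pi * np.arange(n0) / n0) - 2
    w1 = 2 * np.cos(2 * np.pi * np.arange(n1) / n1) - 2
    denom = lam - mu * (w0[:, None] + w1[None, :])
    for _ in range(iters):
        rhs = lam * f - mu * div(gx + dx - bx, gy + dy - by)
        u = np.real(np.fft.ifft2(np.fft.fft2(rhs) / denom))
        ux, uy = grad(u)
        dx = shrink(ux - gx + bx, 1.0 / mu)   # l1 prox on the split variables
        dy = shrink(uy - gy + by, 1.0 / mu)
        bx += ux - gx - dx                    # Bregman updates
        by += uy - gy - dy
    return u
```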

  15. Novel Harmonic Regularization Approach for Variable Selection in Cox's Proportional Hazards Model

    PubMed Central

    Chu, Ge-Jin; Liang, Yong; Wang, Jia-Xuan

    2014-01-01

    Variable selection is an important issue in regression, and a number of variable selection methods have been proposed involving nonconvex penalty functions. In this paper, we investigate a novel harmonic regularization method, which can approximate nonconvex Lq (1/2 < q < 1) regularizations, to select key risk factors in the Cox proportional hazards model using microarray gene expression data. The harmonic regularization method can be efficiently solved using our proposed direct path seeking approach, which can produce solutions that closely approximate those for the convex loss function and the nonconvex regularization. Simulation results based on artificial datasets and four real microarray gene expression datasets, such as the diffuse large B-cell lymphoma (DLBCL), lung cancer, and AML datasets, show that the harmonic regularization method can be more accurate for variable selection than existing Lasso-series methods. PMID:25506389

  16. Novel harmonic regularization approach for variable selection in Cox's proportional hazards model.

    PubMed

    Chu, Ge-Jin; Liang, Yong; Wang, Jia-Xuan

    2014-01-01

    Variable selection is an important issue in regression, and a number of variable selection methods have been proposed involving nonconvex penalty functions. In this paper, we investigate a novel harmonic regularization method, which can approximate nonconvex Lq (1/2 < q < 1) regularizations, to select key risk factors in the Cox proportional hazards model using microarray gene expression data. The harmonic regularization method can be efficiently solved using our proposed direct path seeking approach, which can produce solutions that closely approximate those for the convex loss function and the nonconvex regularization. Simulation results based on artificial datasets and four real microarray gene expression datasets, such as the diffuse large B-cell lymphoma (DLBCL), lung cancer, and AML datasets, show that the harmonic regularization method can be more accurate for variable selection than existing Lasso-series methods.

  17. On the Critical One Component Regularity for 3-D Navier-Stokes System: General Case

    NASA Astrophysics Data System (ADS)

    Chemin, Jean-Yves; Zhang, Ping; Zhang, Zhifei

    2017-06-01

    Let us consider initial data $v_0$ for the homogeneous incompressible 3D Navier-Stokes equation with vorticity belonging to $L^{3/2} \cap L^2$. We prove that if the solution associated with $v_0$ blows up at a finite time $T^\star$, then for any $p$ in $]4, \infty[$ and any unit vector $e$ of $\mathbb{R}^3$, the $L^p$ norm in time with values in $\dot{H}^{1/2 + 2/p}$ of $(v\,|\,e)_{\mathbb{R}^3}$ blows up at $T^\star$.

  18. On split regular BiHom-Lie superalgebras

    NASA Astrophysics Data System (ADS)

    Zhang, Jian; Chen, Liangyun; Zhang, Chiping

    2018-06-01

    We introduce the class of split regular BiHom-Lie superalgebras as the natural extension of the classes of split Hom-Lie superalgebras and of split Lie superalgebras. By developing techniques of connections of roots for this kind of algebra, we show that such a split regular BiHom-Lie superalgebra $L$ is of the form $L = U + \sum_{[\alpha] \in \Lambda/\sim} I_{[\alpha]}$, with $U$ a subspace of the Abelian (graded) subalgebra $H$ and any $I_{[\alpha]}$ a well-described (graded) ideal of $L$, satisfying $[I_{[\alpha]}, I_{[\beta]}] = 0$ if $[\alpha] \neq [\beta]$. Under certain conditions, in the case of $L$ being of maximal length, the simplicity of the algebra is characterized and it is shown that $L$ is the direct sum of the family of its simple (graded) ideals.

  19. Reliance on God, Prayer, and Religion Reduces Influence of Perceived Norms on Drinking

    PubMed Central

    Neighbors, Clayton; Brown, Garrett A.; Dibello, Angelo M.; Rodriguez, Lindsey M.; Foster, Dawn W.

    2013-01-01

    Objective: Previous research has shown that perceived social norms are among the strongest predictors of drinking among young adults. Research has also consistently found religiousness to be protective against risk and negative health behaviors. The present research evaluates the extent to which reliance on God, prayer, and religion moderates the association between perceived social norms and drinking. Method: Participants (n = 1,124 undergraduate students) completed a cross-sectional survey online, which included measures of perceived norms, religious values, and drinking. Perceived norms were assessed by asking participants their perceptions of typical student drinking. Drinking outcomes included drinks per week, drinking frequency, and typical quantity consumed. Results: Regression analyses indicated that religiousness and perceived norms had significant unique associations in opposite directions for all three drinking outcomes. Significant interactions were evident between religiousness and perceived norms in predicting drinks per week, frequency, and typical quantity. In each case, the interactions indicated weaker associations between norms and drinking among those who assigned greater importance to religiousness. Conclusions: The extent of the relationship between perceived social norms and drinking was buffered by the degree to which students identified with religiousness. A growing body of literature has shown interventions including personalized feedback regarding social norms to be an effective strategy in reducing drinking among college students. The present research suggests that incorporating religious or spiritual values into student interventions may be a promising direction to pursue. PMID:23490564

  20. An automated and efficient conformation search of L-cysteine and L,L-cystine using the scaled hypersphere search method

    NASA Astrophysics Data System (ADS)

    Kishimoto, Naoki; Waizumi, Hiroki

    2017-10-01

    Stable conformers of L-cysteine and L,L-cystine were explored using an automated and efficient conformational searching method. The Gibbs energies of the stable conformers of L-cysteine and L,L-cystine were calculated with G4 and MP2 methods, respectively, at 450, 298.15, and 150 K. By assuming thermodynamic equilibrium and the barrier energies for the conformational isomerization pathways, the estimated ratios of the stable conformers of L-cysteine were compared with those determined by microwave spectroscopy in a previous study. Equilibrium structures of 1:1 and 2:1 cystine-Fe complexes were also calculated, and the energy of insertion of Fe into the disulfide bond was obtained.

  1. Overview of NORM and activities by a NORM licensed permanent decontamination and waste processing facility

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mirro, G.A.

    1997-02-01

    This paper presents an overview of issues related to handling NORM materials, and provides a description of a facility designed for the processing of NORM-contaminated equipment. With regard to handling NORM materials, the author discusses sources of NORM, problems, regulations and disposal options, potential hazards, safety equipment, and issues related to personnel protection. For the facility, the author discusses: a description of the permanent facility; the operations of the facility; the license it holds for handling specific radioactive material; operating and safety procedures; on-site decontamination facilities; NORM waste processing capabilities; and the off-site NORM services that are available.

  2. Social Norms and Financial Incentives to Promote Employees’ Healthy Food Choices: A Randomized Controlled Trial

    PubMed Central

    Thorndike, Anne N.; Riis, Jason; Levy, Douglas E.

    2016-01-01

    Population-level strategies to improve healthy food choices are needed for obesity prevention. We conducted a randomized controlled trial of 2,672 employees at Massachusetts General Hospital who were regular customers of the hospital cafeteria, where all items were labeled green (healthy), yellow (less healthy), or red (unhealthy), to determine whether social norm (peer-comparison) feedback with or without financial incentives increased employees' healthy food choices. Participants were randomized in 2012 to three arms: 1) monthly letter with social norm feedback about healthy food purchases, comparing the employee to "all" and to the "healthiest" customers (feedback-only); 2) monthly letter with social norm feedback plus a small financial incentive for increasing green purchases (feedback-incentive); or 3) no contact (control). The main outcome was the change in the proportion of green-labeled purchases at the end of the 3-month intervention. Post-hoc analyses examined linear trends. At baseline, the proportion of green-labeled purchases (50%) did not differ between arms. At the end of the 3-month intervention, the percentage increase in green-labeled purchases was larger in the feedback-incentive arm compared to control (2.2% vs. 0.1%, P=0.03), but the two intervention arms were not different. The rate of increase in green-labeled purchases was higher in both the feedback-only (P=0.04) and feedback-incentive arms (P=0.004) compared to control. At the end of a 3-month wash-out, there were no differences between the control and intervention arms. Social norms plus small financial incentives increased employees' healthy food choices over the short term. Future research will be needed to assess the impact of this relatively low-cost intervention on employees' food choices and weight over the long term. Trial Registration: Clinical Trials.gov NCT01604499 PMID:26827617

  3. Social norms and financial incentives to promote employees' healthy food choices: A randomized controlled trial.

    PubMed

    Thorndike, Anne N; Riis, Jason; Levy, Douglas E

    2016-05-01

    Population-level strategies to improve healthy food choices are needed for obesity prevention. We conducted a randomized controlled trial of 2672 employees at the Massachusetts General Hospital who were regular customers of the hospital cafeteria, where all items were labeled green (healthy), yellow (less healthy), or red (unhealthy), to determine whether social norm (peer-comparison) feedback with or without financial incentives increased employees' healthy food choices. Participants were randomized in 2012 to three arms: 1) monthly letter with social norm feedback about healthy food purchases, comparing the employee to "all" and to the "healthiest" customers (feedback-only); 2) monthly letter with social norm feedback plus a small financial incentive for increasing green purchases (feedback-incentive); or 3) no contact (control). The main outcome was the change in the proportion of green-labeled purchases at the end of the 3-month intervention. Post-hoc analyses examined linear trends. At baseline, the proportion of green-labeled purchases (50%) did not differ between arms. At the end of the 3-month intervention, the percentage increase in green-labeled purchases was larger in the feedback-incentive arm compared to control (2.2% vs. 0.1%, P=0.03), but the two intervention arms were not different. The rate of increase in green-labeled purchases was higher in both the feedback-only (P=0.04) and feedback-incentive arms (P=0.004) compared to control. At the end of a 3-month wash-out, there were no differences between the control and intervention arms. Social norms plus small financial incentives increased employees' healthy food choices over the short term. Future research will be needed to assess the impact of this relatively low-cost intervention on employees' food choices and weight over the long term. Clinical Trials.gov: NCT01604499. Copyright © 2016 Elsevier Inc. All rights reserved.

  4. Fast calculation of the `ILC norm' in iterative learning control

    NASA Astrophysics Data System (ADS)

    Rice, Justin K.; van Wingerden, Jan-Willem

    2013-06-01

    In this paper, we discuss and demonstrate a method for the exploitation of matrix structure in computations for iterative learning control (ILC). In Barton, Bristow, and Alleyne [International Journal of Control, 83(2), 1-8 (2010)], a special insight into the structure of the lifted convolution matrices involved in ILC is used, along with a modified Lanczos method, to achieve very fast computational bounds on the learning convergence by calculating the 'ILC norm' in ? computational complexity. In this paper, we show how their method is equivalent to a special instance of the sequentially semi-separable (SSS) matrix arithmetic, and thus can be extended to many other computations in ILC, and specialised in some cases to even faster methods. Our SSS-based methodology will be demonstrated on two examples: a linear time-varying example resulting in the same ? complexity as in Barton et al., and a linear time-invariant example where our approach reduces the computational complexity to ?, thus decreasing the computation time, for an example from the literature, by a factor of almost 100. This improvement is achieved by transforming the norm computation via a linear matrix inequality into a check of positive definiteness, which allows us to further exploit the almost-Toeplitz properties of the matrix, and additionally provides explicit upper and lower bounds on the norm of the matrix instead of the indirect Ritz estimate. These methods are now implemented in a MATLAB toolbox, freely available on the Internet.

  5. L1 Use in L2 Vocabulary Learning: Facilitator or Barrier

    ERIC Educational Resources Information Center

    Liu, Jing

    2008-01-01

    Based on empirical research and qualitative analysis, this paper aims to explore the effects of L1 use on L2 vocabulary teaching. The results show that, during the L2 vocabulary teaching process, the proper application of L1 can effectively facilitate the memorization of new words, and the bilingual method (both English explanation and Chinese…

  6. Base norms and discrimination of generalized quantum channels

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jenčová, A.

    2014-02-15

    We introduce and study norms in the space of Hermitian matrices, obtained from base norms in positively generated subspaces. These norms are closely related to discrimination of so-called generalized quantum channels, including quantum states, channels, and networks. We further introduce generalized quantum decision problems and show that the maximal average payoffs of decision procedures are again given by these norms. We also study optimality of decision procedures; in particular, we obtain a necessary and sufficient condition under which an optimal 1-tester for discrimination of quantum channels exists, such that the input state is maximally entangled.

  7. Application of Two-Parameter Stabilizing Functions in Solving a Convolution-Type Integral Equation by Regularization Method

    NASA Astrophysics Data System (ADS)

    Maslakov, M. L.

    2018-04-01

    This paper examines the solution of convolution-type integral equations of the first kind by applying the Tikhonov regularization method with two-parameter stabilizing functions. The class of stabilizing functions is expanded in order to improve the accuracy of the resulting solution. The features of the problem formulation for identification and adaptive signal correction are described. A method for choosing regularization parameters in problems of identification and adaptive signal correction is suggested.
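
    In the frequency domain, a Tikhonov-regularized deconvolution with a two-parameter stabilizer reduces to a single pointwise division. The sketch below uses the stabilizer a0 + a1·w² (a zero-order plus a first-order smoothness term) as a plausible stand-in, since the paper's exact two-parameter stabilizing functions are not given in the abstract.

```python
import numpy as np

def tikhonov_deconvolve(y, k, a0=1e-3, a1=1e-3):
    """Frequency-domain Tikhonov solution of the convolution equation
    (k * x)(t) = y(t) with a two-parameter stabilizer a0 + a1*w^2."""
    n = len(y)
    K = np.fft.fft(k, n)                        # zero-padded kernel spectrum
    Y = np.fft.fft(y)
    w = 2 * np.pi * np.fft.fftfreq(n)           # angular frequency grid
    X = np.conj(K) * Y / (np.abs(K)**2 + a0 + a1 * w**2)
    return np.real(np.fft.ifft(X))
```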

  8. Encrypted data stream identification using randomness sparse representation and fuzzy Gaussian mixture model

    NASA Astrophysics Data System (ADS)

    Zhang, Hong; Hou, Rui; Yi, Lei; Meng, Juan; Pan, Zhisong; Zhou, Yuhuan

    2016-07-01

    The accurate identification of encrypted data streams helps to regulate illegal data, detect network attacks and protect users' information. In this paper, a novel encrypted data stream identification algorithm is introduced. The proposed method is based on the randomness characteristics of encrypted data streams. We use an l1-norm regularized logistic regression to improve the sparse representation of randomness features and a Fuzzy Gaussian Mixture Model (FGMM) to improve identification accuracy. Experimental results demonstrate that the method can be adopted as an effective technique for encrypted data stream identification.
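
    A rough approximation of this pipeline can be assembled from standard components: an l1-penalized logistic regression for sparse feature selection, followed by a mixture model on the selected features. The sketch below substitutes scikit-learn's plain GaussianMixture for the paper's fuzzy GMM, and the synthetic features and parameter values are assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.mixture import GaussianMixture

# X: rows of randomness features extracted from payload bytes
# (e.g., entropy or runs-test statistics); y: 1 = encrypted stream.
rng = np.random.default_rng(0)
X = rng.random((500, 16))
y = (X[:, 0] + 0.1 * rng.standard_normal(500) > 0.5).astype(int)

# l1-penalized logistic regression keeps only the informative features.
clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.5).fit(X, y)
selected = np.flatnonzero(clf.coef_[0])

# A plain Gaussian mixture as a stand-in for the paper's fuzzy GMM,
# fit on the sparse feature subset of the encrypted class.
gmm = GaussianMixture(n_components=2, random_state=0).fit(X[y == 1][:, selected])
```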

  9. Discovering mutated driver genes through a robust and sparse co-regularized matrix factorization framework with prior information from mRNA expression patterns and interaction network.

    PubMed

    Xi, Jianing; Wang, Minghui; Li, Ao

    2018-06-05

    Discovery of mutated driver genes is one of the primary objectives in studying tumorigenesis. To discover relatively infrequently mutated driver genes from somatic mutation data, many existing methods incorporate an interaction network as prior information. However, prior information from mRNA expression patterns, which is also highly informative about cancer progression, is not exploited by these existing network-based methods. To incorporate prior information from both the interaction network and mRNA expression, we propose a robust and sparse co-regularized nonnegative matrix factorization to discover driver genes from mutation data. Furthermore, our framework also applies Frobenius-norm regularization to overcome the overfitting issue. A sparsity-inducing penalty is employed to obtain sparse scores in the gene representations, of which the top-scored genes are selected as driver candidates. Evaluation experiments with known benchmark genes indicate that the performance of our method benefits from the two types of prior information. Our method also outperforms the existing network-based methods, and detects some driver genes that are not predicted by the competing methods. In summary, our proposed method can improve the performance of driver gene discovery by effectively incorporating prior information from the interaction network and mRNA expression patterns into a robust and sparse co-regularized matrix factorization framework.

  10. Discrete maximal regularity of time-stepping schemes for fractional evolution equations.

    PubMed

    Jin, Bangti; Li, Buyang; Zhou, Zhi

    2018-01-01

    In this work, we establish the maximal ℓp-regularity for several time-stepping schemes for a fractional evolution model, which involves a fractional derivative of order α, 0 < α < 2, in time. These schemes include convolution quadratures generated by the backward Euler method and the second-order backward difference formula, the L1 scheme, the explicit Euler method and a fractional variant of the Crank-Nicolson method. The main tools for the analysis include the operator-valued Fourier multiplier theorem due to Weis (Math Ann 319:735-758, 2001. doi:10.1007/PL00004457) and its discrete analogue due to Blunck (Stud Math 146:157-176, 2001. doi:10.4064/sm146-2-3). These results generalize the corresponding results for parabolic problems.
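
    Of the schemes listed, the L1 scheme is the easiest to write down: it approximates a Caputo derivative of order α ∈ (0, 1) by weighted first differences. The sketch below is the textbook form of that scheme with a check against a known derivative, not code from the paper.

```python
import numpy as np
from math import gamma

def l1_weights(n, alpha):
    """L1-scheme weights b_j = (j+1)^(1-a) - j^(1-a)."""
    j = np.arange(n)
    return (j + 1) ** (1 - alpha) - j ** (1 - alpha)

def caputo_l1(u, dt, alpha):
    """L1-scheme approximation of the Caputo derivative of order
    alpha in (0, 1) at the last grid point of the samples u."""
    n = len(u) - 1
    b = l1_weights(n, alpha)
    diffs = u[1:] - u[:-1]                 # u^{k+1} - u^k, k = 0..n-1
    # pair weight b_j with the increment ending j steps before t_n
    return (b[::-1] @ diffs) / (gamma(2 - alpha) * dt ** alpha)

# Check on u(t) = t, whose Caputo derivative is t^(1-a) / Gamma(2-a).
alpha, dt = 0.5, 1e-3
t = np.arange(0, 1 + dt, dt)
print(caputo_l1(t, dt, alpha), t[-1] ** (1 - alpha) / gamma(2 - alpha))
```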

  11. Local social norms and military sexual stressors: do senior officers' norms matter?

    PubMed

    Murdoch, Maureen; Pryor, John Barron; Polusny, Melissa Anderson; Gackstetter, Gary D; Ripley, Diane Cowper

    2009-10-01

    To examine the relative importance of harassment-tolerant norms emanating from troops' senior officers, immediate supervisors, and units on troops' sexual stressor experiences, and to see whether associations differed by sex. Cross-sectional survey of all 681 willing and confirmed active duty troops enrolled in the VA National Enrollment Database between 1998 and 2002. Findings extended an earlier analysis. After adjusting for other significant correlates, senior officers' perceived tolerance of sexual harassment was not associated with the severity of sexual harassment troops reported (p = 0.64) or with the number of sexual identity challenges they reported (p = 0.11). Harassment-tolerant norms emanating from troops' units and immediate supervisors were associated with reporting more severe sexual harassment and more sexual identity challenges (all p < 0.003). Findings generalized across sex. Senior officers' norms did not appear to affect troops' reports of military sexual stressors, but unit norms and immediate supervisors' norms did.

  12. Constrained H1-regularization schemes for diffeomorphic image registration

    PubMed Central

    Mang, Andreas; Biros, George

    2017-01-01

    We propose regularization schemes for deformable registration and efficient algorithms for their numerical approximation. We treat image registration as a variational optimal control problem. The deformation map is parametrized by its velocity. Tikhonov regularization ensures well-posedness. Our scheme augments standard smoothness regularization operators based on H1- and H2-seminorms with a constraint on the divergence of the velocity field, which resembles variational formulations for Stokes incompressible flows. In our formulation, we invert for a stationary velocity field and a mass source map. This allows us to explicitly control the compressibility of the deformation map and by that the determinant of the deformation gradient. We also introduce a new regularization scheme that allows us to control shear. We use a globalized, preconditioned, matrix-free, reduced space (Gauss–)Newton–Krylov scheme for numerical optimization. We exploit variable elimination techniques to reduce the number of unknowns of our system; we only iterate on the reduced space of the velocity field. Our current implementation is limited to the two-dimensional case. The numerical experiments demonstrate that we can control the determinant of the deformation gradient without compromising registration quality. This additional control allows us to avoid oversmoothing of the deformation map. We also demonstrate that we can promote or penalize shear whilst controlling the determinant of the deformation gradient. PMID:29075361

  13. A graph-based approach to auditing RxNorm.

    PubMed

    Bodenreider, Olivier; Peters, Lee B

    2009-06-01

    RxNorm is a standardized nomenclature for clinical drug entities developed by the National Library of Medicine. In this paper, we audit relations in RxNorm for consistency and completeness through the systematic analysis of the graph of its concepts and relationships. The representation of multi-ingredient drugs is normalized in order to make it compatible with that of single-ingredient drugs. All meaningful paths between two nodes in the type graph are computed and instantiated. Alternate paths are automatically compared and manually inspected in case of inconsistency. The 115 meaningful paths identified in the type graph can be grouped into 28 groups with respect to start and end nodes. Of the 19 groups of alternate paths (i.e., with two or more paths) between the start and end nodes, 9 (47%) exhibit inconsistencies. Overall, 28 (24%) of the 115 paths are inconsistent with other alternate paths. A total of 348 inconsistencies were identified in the April 2008 version of RxNorm and reported to the RxNorm team, of which 215 (62%) had been corrected in the January 2009 version of RxNorm. The inconsistencies identified involve missing nodes (93), missing links (17), extraneous links (237) and one case of mix-up between two ingredients. Our auditing method proved effective in identifying a limited number of errors that had defeated the quality assurance mechanisms currently in place in the RxNorm production system. Some recommendations for the development of RxNorm are provided.
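
    The core of the audit, enumerating and comparing alternate relationship paths between two concepts, can be sketched with a generic graph library. The toy concepts below are illustrative, and the relationship names are assumptions rather than an exact slice of RxNorm.

```python
import networkx as nx

# Toy fragment of an RxNorm-style typed concept graph.
G = nx.DiGraph()
G.add_edge("clinical_drug:Aspirin 325 MG Tab", "ingredient:Aspirin",
           rel="has_ingredient")
G.add_edge("clinical_drug:Aspirin 325 MG Tab", "component:Aspirin 325 MG",
           rel="consists_of")
G.add_edge("component:Aspirin 325 MG", "ingredient:Aspirin",
           rel="has_ingredient")

def alternate_paths(G, start, end, cutoff=4):
    """All simple paths between two concepts; alternate paths that fail
    to reach the same end node would flag a missing or extraneous link."""
    return list(nx.all_simple_paths(G, start, end, cutoff=cutoff))

for p in alternate_paths(G, "clinical_drug:Aspirin 325 MG Tab",
                         "ingredient:Aspirin"):
    print(" -> ".join(p))
```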

  14. Social norms and risk perception: predictors of distracted driving behavior among novice adolescent drivers.

    PubMed

    Carter, Patrick M; Bingham, C Raymond; Zakrajsek, Jennifer S; Shope, Jean T; Sayer, Tina B

    2014-05-01

    Adolescent drivers are at elevated crash risk due to distracted driving behavior (DDB). Understanding parental and peer influences on adolescent DDB may aid future efforts to decrease crash risk. We examined the influence of risk perception, sensation seeking, and descriptive and injunctive social norms on adolescent DDB using the theory of normative social behavior. 403 adolescents (aged 16-18 years) and their parents were surveyed by telephone. Survey instruments measured self-reported sociodemographics, DDB, sensation seeking, risk perception, descriptive norms (perceived parent DDB, parent self-reported DDB, and perceived peer DDB), and injunctive norms (parent approval of DDB and peer approval of DDB). Hierarchical multiple linear regression was used to assess the influence of descriptive and injunctive social norms, risk perception, and sensation seeking on adolescent DDB. 92% of adolescents reported regularly engaging in DDB. Adolescents perceived that their parents and peers participated in DDB more frequently than themselves. Adolescent risk perception, parent DDB, perceived parent DDB, and perceived peer DDB were predictive of adolescent DDB in the regression model, but parent approval and peer approval of DDB were not predictive. Risk perception and parental DDB were stronger predictors among males, whereas perceived parental DDB was stronger for female adolescents. Adolescent risk perception and descriptive norms are important predictors of adolescent distracted driving. More study is needed to understand the role of injunctive normative influences on adolescent DDB. Effective public health interventions should address parental role modeling, parental monitoring of adolescent driving, and social marketing techniques that correct misconceptions of norms related to driver distraction and crash risk. Copyright © 2014 Society for Adolescent Health and Medicine. Published by Elsevier Inc. All rights reserved.

  15. Peer Group Norms and Accountability Moderate the Effect of School Norms on Children's Intergroup Attitudes

    ERIC Educational Resources Information Center

    McGuire, Luke; Rutland, Adam; Nesdale, Drew

    2015-01-01

    The present study examined the interactive effects of school norms, peer norms, and accountability on children's intergroup attitudes. Participants (n = 229) aged 5-11 years, in a between-subjects design, were randomly assigned to a peer group with an inclusion or exclusion norm, learned their school either had an inclusion norm or not, and were…

  16. Wired: Energy Drinks, Jock Identity, Masculine Norms, and Risk Taking

    PubMed Central

    Miller, Kathleen E.

    2008-01-01

    Objective The author examined gendered links among sport-related identity, endorsement of conventional masculine norms, risk taking, and energy-drink consumption. Participants The author surveyed 795 undergraduate students enrolled in introductory-level courses at a public university. Methods The author conducted linear regression analyses of energy-drink consumption frequencies on sociodemographic characteristics, jock identity, masculine norms, and risk-taking behavior. Results Of participants, 39% consumed an energy drink in the past month, with more frequent use by men (2.49 d/month) than by women (1.22 d/month). Strength of jock identity was positively associated with frequency of energy-drink consumption; this relationship was mediated by both masculine norms and risk-taking behavior. Conclusions Sport-related identity, masculinity, and risk taking are components of the emerging portrait of a toxic jock identity, which may signal an elevated risk for health-compromising behaviors. College undergraduates’ frequent consumption of Red Bull and comparable energy drinks should be recognized as a potential predictor of toxic jock identity. PMID:18400659

  17. Identifying factors associated with regular physical activity in leisure time among Canadian adolescents.

    PubMed

    Godin, Gaston; Anderson, Donna; Lambert, Léo-Daniel; Desharnais, Raymond

    2005-01-01

    The purpose of this study was to identify the factors explaining regular physical activity among Canadian adolescents. A cohort study conducted over a period of 2 years. A French-language high school located near Québec City. A cohort of 740 students (352 girls; 388 boys) aged 13.3 +/- 1.0 years at baseline. Psychosocial, life context, profile, and sociodemographic variables were assessed at baseline and 1 and 2 years after baseline. Exercising almost every day during leisure time at each measurement time was the dependent variable. The Generalized Estimating Equations (GEE) analysis indicated that exercising almost every day was significantly associated with a high intention to exercise (odds ratio [OR]: 8.33, confidence interval [CI] 95%: 5.26, 13.18), being satisfied with the activity practiced (OR: 2.07, CI 95%: 1.27, 3.38), perceived descriptive norm (OR: 1.82, CI 95%: 1.41, 2.35), being a boy (OR: 1.83, CI 95%: 1.37, 2.46), practicing "competitive" activities (OR: 1.80, CI 95%: 1.37, 2.36), eating a healthy breakfast (OR: 1.68, CI 95%: 1.09, 2.60), and normative beliefs (OR: 1.48, CI 95%: 1.14, 1.90). Specific GEE analysis for gender indicated slight but significant differences. This study provides evidence for the need to design interventions that are gender specific and that focus on increasing intention to exercise regularly.

  18. Parent and Grandparent Marijuana Use and Child Substance Use and Norms

    PubMed Central

    Bailey, Jennifer A.; Hill, Karl G.; Guttmannova, Katarina; Epstein, Marina; Abbott, Robert D.; Steeger, Christine M.; Skinner, Martie L.

    2016-01-01

    Purpose Using prospective longitudinal data from 3 generations, this study seeks to test whether and how parent and grandparent marijuana use (current and prior) predicts an increased likelihood of child cigarette, alcohol, and marijuana use. Methods Using multilevel modeling of prospective data spanning 3 generations (N = 306 families, children ages 6-22), this study tested associations between grandparent (G1) and parent (G2) marijuana use and child (G3) past-year cigarette, alcohol, and marijuana use. Analyses tested whether G3 substance-related norms mediated these associations. Current G1 and G2 marijuana use was examined, as was G2 high school and early adult use and G1 marijuana use when G2 parents were in early adolescence. Controls included G2 age at G3 birth, G2 education and depression, and G3 gender. Results G2 current marijuana use predicted a higher likelihood of G3 alcohol and marijuana use, but was not related to the probability of G3 cigarette use. G3's perceptions of their parents' norms and G2 current marijuana use both contributed independently to the likelihood of G3 alcohol and marijuana use when included in the same model. G3 children's own norms and their perceptions of friends' norms mediated the link between G2 current marijuana use and G3 alcohol and marijuana use. Conclusions Results are discussed in light of the growing trend toward marijuana legalization. To the extent that parent marijuana use increases under legalization, we can expect more youth to use alcohol and marijuana and to have norms that favor substance use. PMID:27265424

  19. Injunctive social norms primacy over descriptive social norms in retirement savings decisions.

    PubMed

    Croy, Gerry; Gerrans, Paul; Speelman, Craig

    2010-01-01

    Consistent with the global trend to shift responsibility for retirement income provision from the public purse to individuals has been encouragement to save more and to manage investment strategy. Analyzing data from 2,300 respondents to a randomly distributed questionnaire, this article focuses on the motivational importance of social norms. The study finds injunctive social norms (what is commonly approved or disapproved of) exert greater influence than descriptive social norms (what is commonly done) in predicting retirement savings intentions. Modeling employs the theory of planned behavior, and also finds injunctive social norm has predictive primacy over attitude and perceived behavioral control. Discussion advocates a balanced approach to intervention design, and identifies opportunities for the further study of normative message framing.

  20. A new smoothing modified three-term conjugate gradient method for the l1-norm minimization problem.

    PubMed

    Du, Shouqiang; Chen, Miao

    2018-01-01

    We consider a class of nonsmooth optimization problems with l1-norm minimization, which has many applications in compressed sensing, signal reconstruction, and related engineering problems. Using smoothing approximation techniques, this kind of nonsmooth optimization problem can be transformed into a general unconstrained optimization problem, which can be solved by the proposed smoothing modified three-term conjugate gradient method. The smoothing modified three-term conjugate gradient method is based on the Polak-Ribière-Polyak conjugate gradient method. Because the Polak-Ribière-Polyak conjugate gradient method has good numerical properties, the proposed method possesses the sufficient descent property without any line search, and it is also proved to be globally convergent. Finally, numerical experiments show the efficiency of the proposed method.
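
    A minimal sketch of the smoothing idea, assuming an objective of the form 0.5||Ax - b||^2 + lam*||x||_1: each nonsmooth term |x_i| is replaced by the differentiable surrogate sqrt(x_i^2 + mu^2), after which a nonlinear conjugate gradient iteration applies directly. The sketch uses a plain Polak-Ribière-Polyak (PRP+) update with a backtracking line search, a simplification of the paper's three-term variant, which avoids line searches altogether; the problem instance and all parameters are illustrative.

    import numpy as np

    def f(x, A, b, lam, mu):
        # Smoothed objective: 0.5||Ax - b||^2 + lam * sum_i sqrt(x_i^2 + mu^2)
        r = A @ x - b
        return 0.5 * r @ r + lam * np.sum(np.sqrt(x**2 + mu**2))

    def grad(x, A, b, lam, mu):
        return A.T @ (A @ x - b) + lam * x / np.sqrt(x**2 + mu**2)

    def smoothed_l1_prp(A, b, lam=0.05, mu=1e-3, iters=300):
        x = np.zeros(A.shape[1])
        g = grad(x, A, b, lam, mu)
        d = -g
        for _ in range(iters):
            if g @ d >= 0:              # safeguard: restart with steepest descent
                d = -g
            t = 1.0                     # backtracking Armijo line search
            while f(x + t * d, A, b, lam, mu) > f(x, A, b, lam, mu) + 1e-4 * t * (g @ d):
                t *= 0.5
            x_new = x + t * d
            g_new = grad(x_new, A, b, lam, mu)
            beta = max(0.0, g_new @ (g_new - g) / (g @ g))    # PRP+ coefficient
            d = -g_new + beta * d
            x, g = x_new, g_new
        return x

    # Illustrative sparse recovery instance.
    rng = np.random.default_rng(0)
    A = rng.normal(size=(40, 100))
    x_true = np.zeros(100)
    x_true[[5, 37, 80]] = [1.0, -2.0, 1.5]
    x_hat = smoothed_l1_prp(A, A @ x_true)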

  1. Thematic relatedness production norms for 100 object concepts.

    PubMed

    Jouravlev, Olessia; McRae, Ken

    2016-12-01

    Knowledge of thematic relations is an area of increased interest in semantic memory research because it is crucial to many cognitive processes. One methodological issue that researchers face is how to identify pairs of thematically related concepts that are well-established in semantic memory for most people. In this article, we review existing methods of assessing thematic relatedness and provide thematic relatedness production norming data for 100 object concepts. In addition, 1,174 related concept pairs obtained from the production norms were classified as reflecting one of the five subtypes of relations: attributive, argument, coordinate, locative, and temporal. The database and methodology will be useful for researchers interested in the effects of thematic knowledge on language processing, analogical reasoning, similarity judgments, and memory. These data will also benefit researchers interested in investigating potential processing differences among the five types of semantic relations.

  2. Learning the Norm of Internality: NetNorm, a Connectionist Model

    ERIC Educational Resources Information Center

    Thierry, Bollon; Adeline, Paignon; Pascal, Pansu

    2011-01-01

    The objective of the present article is to show that connectionist simulations can be used to model some of the socio-cognitive processes underlying the learning of the norm of internality. For our simulations, we developed a connectionist model which we called NetNorm (based on Dual-Network formalism). This model is capable of simulating the…

  3. EQ-5D Portuguese population norms.

    PubMed

    Ferreira, Lara Noronha; Ferreira, Pedro L; Pereira, Luis N; Oppe, Mark

    2014-03-01

    The EQ-5D is a widely used preference-based measure. Normative data can be used as references to analyze the effects of healthcare, determine the burden of disease and enable regional or country comparisons. Population norms for the EQ-5D exist for other countries but have not been previously published for Portugal. The purpose of this study was to derive EQ-5D Portuguese population norms. The EQ-5D was applied by phone interview to a random sample of the Portuguese general population (n = 1,500) stratified by age, gender and region. The Portuguese value set was used to derive the EQ-5D index. Mean values were computed by gender and age groups, marital status, educational attainment, region and other variables to obtain the EQ-5D Portuguese norms. Health status declines with advancing age, and women reported worse health status than men. These results are similar to other EQ-5D population health studies. This study provides Portuguese population health-related quality of life data measured by the EQ-5D that can be used as population norms. These norms can be used to inform Portuguese policy makers, health care professionals and researchers in issues related to health care policy and planning and quantification of treatment effects on health status.

  4. Nonconvex Nonsmooth Low Rank Minimization via Iteratively Reweighted Nuclear Norm.

    PubMed

    Lu, Canyi; Tang, Jinhui; Yan, Shuicheng; Lin, Zhouchen

    2016-02-01

    The nuclear norm is widely used as a convex surrogate of the rank function in compressive sensing for low rank matrix recovery with its applications in image recovery and signal processing. However, solving the nuclear norm-based relaxed convex problem usually leads to a suboptimal solution of the original rank minimization problem. In this paper, we propose to use a family of nonconvex surrogates of L0-norm on the singular values of a matrix to approximate the rank function. This leads to a nonconvex nonsmooth minimization problem. Then, we propose to solve the problem by an iteratively re-weighted nuclear norm (IRNN) algorithm. IRNN iteratively solves a weighted singular value thresholding problem, which has a closed form solution due to the special properties of the nonconvex surrogate functions. We also extend IRNN to solve the nonconvex problem with two or more blocks of variables. In theory, we prove that the IRNN decreases the objective function value monotonically, and any limit point is a stationary point. Extensive experiments on both synthesized data and real images demonstrate that IRNN enhances the low rank matrix recovery compared with the state-of-the-art convex algorithms.
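
    The workhorse of IRNN is the weighted singular value thresholding step, which has a closed-form solution because the weights, taken as gradients of the concave surrogate at the current singular values, are nonnegative and nondecreasing in index. A minimal matrix completion sketch, assuming the log surrogate g(s) = log(gamma + s); the surrogate choice, step size, and parameters are illustrative, not the paper's settings.

    import numpy as np

    def weighted_svt(Y, w, tau):
        # Closed form of min_X 0.5||X - Y||_F^2 + tau * sum_i w_i * sigma_i(X),
        # valid when the smallest weights sit on the largest singular values.
        U, s, Vt = np.linalg.svd(Y, full_matrices=False)
        return (U * np.maximum(s - tau * w, 0.0)) @ Vt

    def irnn_completion(M, mask, gamma=1.0, lam=0.5, step=1.0, iters=200):
        X = np.zeros_like(M)
        for _ in range(iters):
            s = np.linalg.svd(X, compute_uv=False)
            w = 1.0 / (gamma + s)               # gradient of log(gamma + s)
            Y = X - step * mask * (X - M)       # gradient step on the data term
            X = weighted_svt(Y, w, lam * step)
        return X

    # Illustrative low-rank completion instance with half the entries observed.
    rng = np.random.default_rng(1)
    L0 = rng.normal(size=(30, 3)) @ rng.normal(size=(3, 30))
    mask = (rng.random((30, 30)) < 0.5).astype(float)
    X_hat = irnn_completion(mask * L0, mask)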

  5. Exploiting Lexical Regularities in Designing Natural Language Systems.

    DTIC Science & Technology

    1988-04-01

    This paper presents the lexical component of the START Question Answering system developed at the MIT Artificial Intelligence Laboratory.

  6. The Mimetic Finite Element Method and the Virtual Element Method for elliptic problems with arbitrary regularity.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Manzini, Gianmarco

    2012-07-13

    We develop and analyze a new family of virtual element methods on unstructured polygonal meshes for the diffusion problem in primal form, using arbitrarily regular discrete spaces V_h ⊂ C^α, α ∈ ℕ. The degrees of freedom are (a) solution and derivative values of various degrees at suitable nodes and (b) solution moments inside polygons. The convergence of the method is proven theoretically and an optimal error estimate is derived. The connection with the Mimetic Finite Difference method is also discussed. Numerical experiments confirm the convergence rate that is expected from the theory.

  7. Regional regularization method for ECT based on spectral transformation of Laplacian

    NASA Astrophysics Data System (ADS)

    Guo, Z. H.; Kan, Z.; Lv, D. C.; Shao, F. Q.

    2016-10-01

    Image reconstruction in electrical capacitance tomography is an ill-posed inverse problem, and regularization techniques are usually used to suppress noise in its solution. An anisotropic regional regularization algorithm for electrical capacitance tomography is constructed using a novel approach called spectral transformation. Its function is derived and applied to the weighted gradient magnitude of the sensitivity of the Laplacian as a regularization term. With the optimum regional regularizer, a priori knowledge of the local nonlinearity of the forward map is incorporated into the proposed online reconstruction algorithm. Simulation experiments were performed to verify the capability of the new regularization algorithm to reconstruct images of superior quality compared with two conventional Tikhonov regularization approaches. The advantage of the new algorithm in improving performance and reducing shape distortion is demonstrated with the experimental data.

  8. Efficient methods for overlapping group lasso.

    PubMed

    Yuan, Lei; Liu, Jun; Ye, Jieping

    2013-09-01

    The group Lasso is an extension of the Lasso for feature selection on (predefined) nonoverlapping groups of features. The nonoverlapping group structure limits its applicability in practice. There have been several recent attempts to study a more general formulation where groups of features are given, potentially with overlaps between the groups. The resulting optimization is, however, much more challenging to solve due to the group overlaps. In this paper, we consider the efficient optimization of the overlapping group Lasso penalized problem. We reveal several key properties of the proximal operator associated with the overlapping group Lasso, and compute the proximal operator by solving the smooth and convex dual problem, which allows the use of gradient descent type algorithms for the optimization. Our methods and theoretical results are then generalized to tackle the general overlapping group Lasso formulation based on the l_q norm. We further extend our algorithm to solve a nonconvex overlapping group Lasso formulation based on the capped norm regularization, which reduces the estimation bias introduced by the convex penalty. We have performed empirical evaluations using both a synthetic and the breast cancer gene expression dataset, which consists of 8,141 genes organized into (overlapping) gene sets. Experimental results show that the proposed algorithm is more efficient than existing state-of-the-art algorithms. Results also demonstrate the effectiveness of the nonconvex formulation for overlapping group Lasso.
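
    For non-overlapping groups the proximal operator discussed in the abstract has a simple closed form, block soft-thresholding; it is precisely the overlaps that destroy this closed form and motivate the paper's computation via the smooth convex dual. A minimal sketch of the non-overlapping case (group layout and penalty weight are illustrative):

    import numpy as np

    def prox_group_lasso(v, groups, lam):
        # Block soft-thresholding: prox of lam * sum_g ||v_g||_2 over
        # non-overlapping index groups g.
        x = v.copy()
        for g in groups:
            ng = np.linalg.norm(v[g])
            x[g] = 0.0 if ng <= lam else (1.0 - lam / ng) * v[g]
        return x

    # Example: six features in two groups of three; the first group is
    # zeroed out entirely, the second is shrunk as a block.
    v = np.array([0.2, -0.1, 0.05, 2.0, -1.5, 0.7])
    print(prox_group_lasso(v, [np.arange(0, 3), np.arange(3, 6)], lam=0.5))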

  9. Multiple graph regularized protein domain ranking.

    PubMed

    Wang, Jim Jing-Yan; Bensmail, Halima; Gao, Xin

    2012-11-19

    Protein domain ranking is a fundamental task in structural biology. Most protein domain ranking methods rely on the pairwise comparison of protein domains while neglecting the global manifold structure of the protein domain database. Recently, graph regularized ranking that exploits the global structure of the graph defined by the pairwise similarities has been proposed. However, the existing graph regularized ranking methods are very sensitive to the choice of the graph model and parameters, and this remains a difficult problem for most of the protein domain ranking methods. To tackle this problem, we have developed the Multiple Graph regularized Ranking algorithm, MultiG-Rank. Instead of using a single graph to regularize the ranking scores, MultiG-Rank approximates the intrinsic manifold of protein domain distribution by combining multiple initial graphs for the regularization. Graph weights are learned with ranking scores jointly and automatically, by alternately minimizing an objective function in an iterative algorithm. Experimental results on a subset of the ASTRAL SCOP protein domain database demonstrate that MultiG-Rank achieves a better ranking performance than single graph regularized ranking methods and pairwise similarity based ranking methods. The problem of graph model and parameter selection in graph regularized protein domain ranking can be solved effectively by combining multiple graphs. This aspect of generalization introduces a new frontier in applying multiple graphs to solving protein domain ranking applications.

  10. Electrophysiological evidence of sublexical phonological access in character processing by L2 Chinese learners of L1 alphabetic scripts.

    PubMed

    Yum, Yen Na; Law, Sam-Po; Mo, Kwan Nok; Lau, Dustin; Su, I-Fan; Shum, Mark S K

    2016-04-01

    While Chinese character reading relies more on addressed phonology relative to alphabetic scripts, skilled Chinese readers also access sublexical phonological units during recognition of phonograms. However, sublexical orthography-to-phonology mapping has not been found among beginning second language (L2) Chinese learners. This study investigated character reading in more advanced Chinese learners whose native writing system is alphabetic. Phonological regularity and consistency were examined in behavioral responses and event-related potentials (ERPs) in lexical decision and delayed naming tasks. Participants were 18 native English speakers who acquired written Chinese after age 5 years and reached grade 4 Chinese reading level. Behaviorally, regular characters were named more accurately than irregular characters, but consistency had no effect. Similar to native Chinese readers, regularity effects emerged early with regular characters eliciting a greater N170 than irregular characters. Regular characters also elicited greater frontal P200 and smaller N400 than irregular characters in phonograms of low consistency. Additionally, regular-consistent characters and irregular-inconsistent characters had more negative amplitudes than irregular-consistent characters in the N400 and LPC time windows. The overall pattern of brain activities revealed distinct regularity and consistency effects in both tasks. Although orthographic neighbors are activated in character processing of L2 Chinese readers, the timing of their impact seems delayed compared with native Chinese readers. The time courses of regularity and consistency effects across ERP components suggest both assimilation and accommodation of the reading network in learning to read a typologically distinct second orthographic system.

  11. Hybrid normed ideal perturbations of n-tuples of operators I

    NASA Astrophysics Data System (ADS)

    Voiculescu, Dan-Virgil

    2018-06-01

    In hybrid normed ideal perturbations of n-tuples of operators, the normed ideal is allowed to vary with the component operators. We begin extending to this setting the machinery we developed for normed ideal perturbations based on the modulus of quasicentral approximation and an adaptation of our non-commutative generalization of the Weyl-von Neumann theorem. For commuting n-tuples of hermitian operators, the modulus of quasicentral approximation remains essentially the same when C_n^- is replaced by a hybrid n-tuple C_{p_1}^-, ..., C_{p_n}^- with p_1^{-1} + ... + p_n^{-1} = 1. The proof involves singular integrals of mixed homogeneity.

  12. A multiplicative regularization for force reconstruction

    NASA Astrophysics Data System (ADS)

    Aucejo, M.; De Smet, O.

    2017-02-01

    Additive regularizations, such as Tikhonov-like approaches, are certainly the most popular methods for reconstructing forces acting on a structure. These approaches require, however, the knowledge of a regularization parameter that can be numerically computed using specific procedures. Unfortunately, these procedures are generally computationally intensive. For this particular reason, it could be of primary interest to propose a method able to proceed without defining any regularization parameter beforehand. In this paper, a multiplicative regularization is introduced for this purpose. By construction, the regularized solution has to be calculated in an iterative manner. In doing so, the amount of regularization is automatically adjusted throughout the resolution process. Validations using synthetic and experimental data highlight the ability of the proposed approach to provide consistent reconstructions.
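
    A minimal sketch of the multiplicative idea on a discrete linear problem, assuming the illustrative product functional J(x) = ||Hx - y||^2 * ||x||^2 (not the authors' exact functional): setting the gradient to zero yields a Tikhonov-like system whose effective parameter a_k = ||Hx_k - y||^2 / ||x_k||^2 is recomputed at every iteration, so the amount of regularization adjusts itself throughout the resolution process.

    import numpy as np

    def multiplicative_reg(H, y, iters=30):
        # Fixed-point iteration for a stationary point of
        # J(x) = ||Hx - y||^2 * ||x||^2: grad J = 0 is equivalent to
        # (H^T H + a I) x = H^T y with a = ||Hx - y||^2 / ||x||^2.
        n = H.shape[1]
        x = np.linalg.lstsq(H, y, rcond=1e-3)[0]    # mildly truncated start
        for _ in range(iters):
            a = np.sum((H @ x - y) ** 2) / max(np.sum(x**2), 1e-12)
            x = np.linalg.solve(H.T @ H + a * np.eye(n), H.T @ y)
        return x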

  13. Current Trends in the study of Gender Norms and Health Behaviors

    PubMed Central

    Fleming, Paul J.; Agnew-Brune, Christine

    2015-01-01

    Gender norms are recognized as one of the major social determinants of health and gender norms can have implications for an individual’s health behaviors. This paper reviews the recent advances in research on the role of gender norms on health behaviors most associated with morbidity and mortality. We find that (1) the study of gender norms and health behaviors is varied across different types of health behaviors, (2) research on masculinity and masculine norms appears to have taken on an increasing proportion of studies on the relationship between gender norms and health, and (3) we are seeing new and varied populations integrated into the study of gender norms and health behaviors. PMID:26075291

  14. $L^1$ penalization of volumetric dose objectives in optimal control of PDEs

    DOE PAGES

    Barnard, Richard C.; Clason, Christian

    2017-02-11

    This work is concerned with a class of PDE-constrained optimization problems that are motivated by an application in radiotherapy treatment planning. Here the primary design objective is to minimize the volume where a functional of the state violates a prescribed level, but prescribing these levels in the form of pointwise state constraints leads to infeasible problems. We therefore propose an alternative approach based on L1 penalization of the violation that is also applicable when state constraints are infeasible. We establish well-posedness of the corresponding optimal control problem, derive first-order optimality conditions, discuss convergence of minimizers as the penalty parameter tends to infinity, and present a semismooth Newton method for their efficient numerical solution. Finally, the performance of this method for a model problem is illustrated and contrasted with an alternative approach based on (regularized) state constraints.
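
    A toy finite-dimensional analogue of the penalization strategy, assuming a discrete operator K mapping the control to the state: the violation of the prescribed level c is penalized through its l1 norm, and a plain subgradient iteration stands in for the paper's semismooth Newton method. K, u_d, c, and all parameters are illustrative.

    import numpy as np

    def l1_violation_descent(K, u_d, c, beta=10.0, step=1e-2, iters=2000):
        # min_u 0.5||u - u_d||^2 + beta * sum_i max((K u - c)_i, 0)
        u = u_d.copy()
        for _ in range(iters):
            viol = ((K @ u - c) > 0).astype(float)   # active violation set
            g = (u - u_d) + beta * (K.T @ viol)      # a subgradient
            u -= step * g
        return u

    # Illustrative instance: a smoothing operator K and a level c = 1.2.
    n = 50
    K = np.exp(-np.abs(np.subtract.outer(np.arange(n), np.arange(n))) / 3.0)
    K /= K.sum(axis=1, keepdims=True)
    u = l1_violation_descent(K, u_d=np.linspace(0.0, 2.0, n), c=1.2)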

  15. Using Facebook to deliver a social norm intervention to reduce problem drinking at university.

    PubMed

    Ridout, Brad; Campbell, Andrew

    2014-11-01

    University students usually overestimate peer alcohol use, resulting in them 'drinking up' to perceived norms. Social norms theory suggests correcting these inflated perceptions can reduce alcohol consumption. Recent findings by the current authors show portraying oneself as 'a drinker' is considered by many students to be a socially desirable component of their Facebook identity, perpetuating an online culture that normalises binge drinking. However, social networking sites have yet to be utilised in social norms interventions. Actual and perceived descriptive and injunctive drinking norms were collected from 244 university students. Ninety-five students screened positive for hazardous drinking and were randomly allocated to a control group or intervention group that received social norms feedback via personalised Facebook private messages over three sessions. At 1 month post-intervention, the quantity and frequency of alcohol consumed by intervention group during the previous month had significantly reduced compared with baseline and controls. Reductions were maintained 3 months post-intervention. Intervention group perceived drinking norms were significantly more accurate post-intervention. This is the first study to test the feasibility of using Facebook to deliver social norms interventions. Correcting misperceptions of peer drinking norms resulted in clinically significant reductions in alcohol use. Facebook has many advantages over traditional social norms delivery, providing an innovative method for tackling problem drinking at university. These results have implications for the use of Facebook to deliver positive messages about safe alcohol use to students, which may counter the negative messages regarding alcohol normally seen on Facebook. © 2014 Australasian Professional Society on Alcohol and other Drugs.

  16. Global Norms: Towards Some Guidelines for Aggregating Personality Norms across Countries

    ERIC Educational Resources Information Center

    Bartram, Dave

    2008-01-01

    The article discusses issues relating to the international use of personality inventories, especially those in which organizations make comparisons between people from differing cultures or countries or those with different languages. The focus is on the issue of norming and the use of national versus multinational norms. It is noted that…

  17. Concreteness norms for 1,659 French words: Relationships with other psycholinguistic variables and word recognition times.

    PubMed

    Bonin, Patrick; Méot, Alain; Bugaiska, Aurélia

    2018-02-12

    Words that correspond to a potential sensory experience-concrete words-have long been found to possess a processing advantage over abstract words in various lexical tasks. We collected norms of concreteness for a set of 1,659 French words, together with other psycholinguistic norms that were not available for these words-context availability, emotional valence, and arousal-but which are important if we are to achieve a better understanding of the meaning of concreteness effects. We then investigated the relationships of concreteness with these newly collected variables, together with other psycholinguistic variables that were already available for this set of words (e.g., imageability, age of acquisition, and sensory experience ratings). Finally, thanks to the variety of psychological norms available for this set of words, we decided to test further the embodied account of concreteness effects in visual-word recognition, championed by Kousta, Vigliocco, Vinson, Andrews, and Del Campo (Journal of Experimental Psychology: General, 140, 14-34, 2011). Similarly, we investigated the influences of concreteness in three word recognition tasks-lexical decision, progressive demasking, and word naming-using a multiple regression approach, based on the reaction times available in Chronolex (Ferrand, Brysbaert, Keuleers, New, Bonin, Méot, Pallier, Frontiers in Psychology, 2; 306, 2011). The norms can be downloaded as supplementary material provided with this article.

  18. Regularized Generalized Canonical Correlation Analysis

    ERIC Educational Resources Information Center

    Tenenhaus, Arthur; Tenenhaus, Michel

    2011-01-01

    Regularized generalized canonical correlation analysis (RGCCA) is a generalization of regularized canonical correlation analysis to three or more sets of variables. It constitutes a general framework for many multi-block data analysis methods. It combines the power of multi-block data analysis methods (maximization of well identified criteria) and…

  19. Extensive regularization of the coupled cluster methods based on the generating functional formalism: application to gas-phase benchmarks and to the S(N)2 reaction of CHCl3 and OH- in water

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kowalski, Karol; Valiev, Marat

    2009-12-21

    The recently introduced energy expansion based on the use of a generating functional (GF) [K. Kowalski, P.D. Fan, J. Chem. Phys. 130, 084112 (2009)] provides a way of constructing size-consistent non-iterative coupled-cluster (CC) corrections in terms of moments of the CC equations. To take advantage of this expansion in a strongly interacting regime, regularization of the cluster amplitudes is required in order to counteract the effect of excessive growth of the norm of the CC wavefunction. Although proven to be efficient, the previously discussed form of the regularization does not lead to rigorously size-consistent corrections. In this paper we address the issue of size-consistent regularization of the GF expansion by redefining the equations for the cluster amplitudes. The performance and basic features of the proposed methodology are illustrated on several gas-phase benchmark systems. Moreover, the regularized GF approaches are combined with a QM/MM module and applied to describe the SN2 reaction of CHCl3 and OH- in aqueous solution.

  20. Alcohol evaluations and acceptability: Examining descriptive and injunctive norms among heavy drinkers

    PubMed Central

    Foster, Dawn W.; Neighbors, Clayton; Krieger, Heather

    2015-01-01

    Objectives This study assessed descriptive and injunctive norms, evaluations of alcohol consequences, and acceptability of drinking. Methods Participants were 248 heavy-drinking undergraduates (81.05% female; Mage = 23.45). Results Stronger perceptions of descriptive and injunctive norms for drinking and more positive evaluations of alcohol consequences were positively associated with drinking and the number of drinks considered acceptable. Descriptive and injunctive norms interacted, indicating that injunctive norms were linked with number of acceptable drinks among those with higher descriptive norms. Descriptive norms and evaluations of consequences interacted, indicating that descriptive norms were positively linked with number of acceptable drinks among those with negative evaluations of consequences; however, among those with positive evaluations of consequences, descriptive norms were negatively associated with number of acceptable drinks. Injunctive norms and evaluations of consequences interacted, indicating that injunctive norms were positively associated with number of acceptable drinks, particularly among those with positive evaluations of consequences. A three-way interaction emerged between injunctive and descriptive norms and evaluations of consequences, suggesting that injunctive norms and the number of acceptable drinks were positively associated more strongly among those with negative versus positive evaluations of consequences. Those with higher acceptable drinks also had positive evaluations of consequences and were high in injunctive norms. Conclusions Findings supported hypotheses that norms and evaluations of alcohol consequences would interact with respect to drinking and acceptance of drinking. These examinations have practical utility and may inform development and implementation of interventions and programs targeting alcohol misuse among heavy drinking undergraduates. PMID:25437265

  1. Estimation of High-Dimensional Graphical Models Using Regularized Score Matching

    PubMed Central

    Lin, Lina; Drton, Mathias; Shojaie, Ali

    2017-01-01

    Graphical models are widely used to model stochastic dependences among large collections of variables. We introduce a new method of estimating undirected conditional independence graphs based on the score matching loss, introduced by Hyvärinen (2005), and subsequently extended in Hyvärinen (2007). The regularized score matching method we propose applies to settings with continuous observations and allows for computationally efficient treatment of possibly non-Gaussian exponential family models. In the well-explored Gaussian setting, regularized score matching avoids issues of asymmetry that arise when applying the technique of neighborhood selection, and compared to existing methods that directly yield symmetric estimates, the score matching approach has the advantage that the considered loss is quadratic and gives piecewise linear solution paths under ℓ1 regularization. Under suitable irrepresentability conditions, we show that ℓ1-regularized score matching is consistent for graph estimation in sparse high-dimensional settings. Through numerical experiments and an application to RNAseq data, we confirm that regularized score matching achieves state-of-the-art performance in the Gaussian case and provides a valuable tool for computationally efficient estimation in non-Gaussian graphical models. PMID:28638498
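
    Because the score matching loss is quadratic in the parameters, the l1-regularized estimator is computed by lasso-type algorithms. A generic proximal-gradient (ISTA) sketch for an objective of the form 0.5 x'Qx - c'x + lam*||x||_1, with Q and c standing in as placeholders for the loss-specific quantities (the instance below is illustrative, not the paper's estimator):

    import numpy as np

    def soft(z, t):
        # Soft-thresholding, the proximal operator of t * ||.||_1.
        return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

    def ista_quadratic_l1(Q, c, lam, iters=500):
        # Proximal gradient for min_x 0.5 x'Qx - c'x + lam * ||x||_1.
        L = np.linalg.eigvalsh(Q)[-1]       # Lipschitz constant of the gradient
        x = np.zeros_like(c)
        for _ in range(iters):
            x = soft(x - (Q @ x - c) / L, lam / L)
        return x

    # Illustrative instance.
    rng = np.random.default_rng(2)
    B = rng.normal(size=(20, 20))
    Q = B.T @ B + np.eye(20)
    x_hat = ista_quadratic_l1(Q, c=rng.normal(size=20), lam=1.0)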

  3. Evaluating the implementation of RxNorm in ambulatory electronic prescriptions

    PubMed Central

    Ward-Charlerie, Stacy; Rupp, Michael T; Kilbourne, John; Amin, Vishal P; Ruiz, Joshua

    2016-01-01

    Objective RxNorm is a standardized drug nomenclature maintained by the National Library of Medicine that has been recommended as an alternative to the National Drug Code (NDC) terminology for use in electronic prescribing. The objective of this study was to evaluate the implementation of RxNorm in ambulatory care electronic prescriptions (e-prescriptions). Methods We analyzed a random sample of 49 997 e-prescriptions that were received by 7391 locations of a national retail pharmacy chain during a single day in April 2014. The e-prescriptions in the sample were generated by 37 801 ambulatory care prescribers using 519 different e-prescribing software applications. Results We found that 97.9% of e-prescriptions in the study sample could be accurately represented by an RxNorm identifier. However, RxNorm identifiers were actually used as drug identifiers in only 16 433 (33.0%) e-prescriptions. Another 431 (2.5%) e-prescriptions that used RxNorm identifiers had a discrepancy in the corresponding Drug Database Code qualifier field or did not have a qualifier (Term Type) at all. In 10 e-prescriptions (0.06%), the free-text drug description and the RxNorm concept unique identifier pointed to completely different drug concepts, and in 7 e-prescriptions (0.04%), the NDC and RxNorm drug identifiers pointed to completely different drug concepts. Discussion The National Library of Medicine continues to enhance the RxNorm terminology and expand its scope. This study illustrates the need for technology vendors to improve their implementation of RxNorm; doing so will accelerate the adoption of RxNorm as the preferred alternative to using the NDC terminology in e-prescribing. PMID:26510879

  4. Development of fast measurements of concentration of NORM U-238 by HPGe

    NASA Astrophysics Data System (ADS)

    Cha, Seokki; Kim, Siu; Kim, Geehyun

    2017-02-01

    Naturally Occurring Radioactive Material (NORM), present since the formation of the earth, can be found all around us, and even people who are not engaged in radiation-related work are exposed to it. NORM poses a potential risk when it is concentrated or transformed by human activities. Fast measurement methods for NORM are therefore needed to prevent unnecessary radiation exposure of the general public and of workers in industries that handle materials in which NORM is concentrated or transformed. Against this background, many countries have tried to manage NORM and have enacted regulatory legislation. To manage NORM efficiently, new measurement methods must be developed that can quickly and accurately analyze the nuclides and their concentrations. In this study, a fast and reliable measurement method was developed. In addition to confirming the reliability of the fast measurement, we obtained results that suggest the possibility of developing further fast measurement methods in follow-up work. The results of this study will be very useful for the regulatory management of NORM. Specifically, we review two indirect measurement methods for NORM U-238 that use HPGe, based on the equilibrium relationship between mother and daughter nuclides in the U-238 decay chain. For comparison (to establish reliability), a direct measurement using an alpha spectrometer, which requires a complicated pre-processing procedure, was implemented.

  5. On-Line Identification of Simulation Examples for Forgetting Methods to Track Time Varying Parameters Using the Alternative Covariance Matrix in Matlab

    NASA Astrophysics Data System (ADS)

    Vachálek, Ján

    2011-12-01

    The paper compares the abilities of forgetting methods to track time-varying parameters of two different simulated models with different types of excitation. The quantities observed in the simulations are the integral sum of the Euclidean norm of the deviation of the parameter estimates from their true values and a selected-band prediction error count. As supplementary information, we observe the eigenvalues of the covariance matrix. In the paper we used a modified method of Regularized Exponential Forgetting with Alternative Covariance Matrix (REFACM) along with Directional Forgetting (DF) and three standard regularized methods.
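
    For reference, a sketch of the baseline recursive least squares recursion with exponential forgetting that methods such as DF and REFACM modify; the regularization and the alternative covariance matrix of REFACM are not reproduced here, and the drifting-parameter model is illustrative.

    import numpy as np

    def rls_exponential_forgetting(phis, ys, lam=0.98, delta=1e3):
        # Standard RLS with forgetting factor lam in (0, 1]; smaller lam
        # discounts old data faster and tracks time-varying parameters.
        n = phis.shape[1]
        theta = np.zeros(n)
        P = delta * np.eye(n)
        for phi, y in zip(phis, ys):
            e = y - phi @ theta                     # prediction error
            k = P @ phi / (lam + phi @ P @ phi)     # gain vector
            theta = theta + k * e
            P = (P - np.outer(k, phi @ P)) / lam    # covariance update
        return theta

    # Illustrative identification of a slowly drifting two-parameter model.
    rng = np.random.default_rng(3)
    phis = rng.normal(size=(500, 2))
    true = np.column_stack([np.linspace(1.0, 2.0, 500), np.full(500, -0.5)])
    ys = np.sum(phis * true, axis=1) + 0.01 * rng.normal(size=500)
    theta_hat = rls_exponential_forgetting(phis, ys)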

  6. Modelling the Flow Stress of Alloy 316L using a Multi-Layered Feed Forward Neural Network with Bayesian Regularization

    NASA Astrophysics Data System (ADS)

    Abiri, Olufunminiyi; Twala, Bhekisipho

    2017-08-01

    In this paper, a multilayer feedforward neural network with Bayesian regularization is developed as a constitutive model for alloy 316L during high strain rate and high temperature plastic deformation. The input variables are strain rate, temperature and strain, while the output value is the flow stress of the material. The results show that the use of the Bayesian regularization technique reduces the potential for overfitting and overtraining, thereby improving the prediction quality of the model. The model predictions are in good agreement with experimental measurements. The measurement data used for the network training and model comparison were taken from the relevant literature. The developed model is robust, as it can be generalized to deformation conditions slightly below or above the training dataset.
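
    A minimal sketch of the objective underlying Bayesian regularization, F = beta*E_D + alpha*E_W (sum of squared errors plus weight decay), trained here by plain gradient descent on a one-hidden-layer network with fixed alpha and beta. Full Bayesian regularization re-estimates alpha and beta from the Hessian during training, which is omitted here; the data and hyperparameters are illustrative stand-ins for the strain rate, temperature and strain inputs.

    import numpy as np

    rng = np.random.default_rng(4)

    def train(X, y, hidden=8, alpha=1e-3, beta=1.0, lr=1e-2, epochs=3000):
        # Minimizes F = beta * sum(residuals^2) + alpha * sum(weights^2).
        W1 = rng.normal(scale=0.5, size=(X.shape[1], hidden))
        b1 = np.zeros(hidden)
        W2 = rng.normal(scale=0.5, size=hidden)
        b2 = 0.0
        for _ in range(epochs):
            H = np.tanh(X @ W1 + b1)                 # hidden activations
            r = H @ W2 + b2 - y                      # residuals
            gW2 = 2 * beta * H.T @ r + 2 * alpha * W2
            gb2 = 2 * beta * r.sum()
            gH = 2 * beta * np.outer(r, W2) * (1 - H**2)
            gW1 = X.T @ gH + 2 * alpha * W1
            gb1 = gH.sum(axis=0)
            W1 -= lr * gW1; b1 -= lr * gb1
            W2 -= lr * gW2; b2 -= lr * gb2
        return W1, b1, W2, b2

    # Illustrative flow-stress-like regression on three inputs.
    X = rng.random((200, 3))
    y = np.sin(3.0 * X[:, 0]) + 0.5 * X[:, 1] - X[:, 2]
    params = train(X, y)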

  7. Recursive regularization step for high-order lattice Boltzmann methods

    NASA Astrophysics Data System (ADS)

    Coreixas, Christophe; Wissocq, Gauthier; Puigt, Guillaume; Boussuge, Jean-François; Sagaut, Pierre

    2017-09-01

    A lattice Boltzmann method (LBM) with enhanced stability and accuracy is presented for various Hermite tensor-based lattice structures. The collision operator relies on a regularization step, which is here improved through a recursive computation of nonequilibrium Hermite polynomial coefficients. In addition to the reduced computational cost of this procedure with respect to the standard one, the recursive step makes it possible to considerably enhance the stability and accuracy of the numerical scheme by properly filtering out second- (and higher-) order nonhydrodynamic contributions in under-resolved conditions. This is first shown in the isothermal case, where the simulation of the doubly periodic shear layer is performed with a Reynolds number ranging from 10^4 to 10^6, and where a thorough analysis of the case at Re = 3 × 10^4 is conducted. In the latter, results obtained using both regularization steps are compared against the Bhatnagar-Gross-Krook LBM for standard (D2Q9) and high-order (D2V17 and D2V37) lattice structures, confirming the tremendous increase in the stability range of the proposed approach. Further comparisons on thermal and fully compressible flows, using the general extension of this procedure, are then conducted through the numerical simulation of Sod shock tubes with the D2V37 lattice. They confirm the stability increase induced by the recursive approach as compared with the standard one.

  8. A method to deconvolve stellar rotational velocities II. The probability distribution function via Tikhonov regularization

    NASA Astrophysics Data System (ADS)

    Christen, Alejandra; Escarate, Pedro; Curé, Michel; Rial, Diego F.; Cassetti, Julia

    2016-10-01

    Aims: Knowing the distribution of stellar rotational velocities is essential for understanding stellar evolution. Because we measure the projected rotational speed v sin I, we need to solve an ill-posed problem given by a Fredholm integral of the first kind to recover the "true" rotational velocity distribution. Methods: After discretization of the Fredholm integral we apply the Tikhonov regularization method to obtain directly the probability distribution function for stellar rotational velocities. We propose a simple and straightforward procedure to determine the Tikhonov parameter. We applied Monte Carlo simulations to prove that the Tikhonov method is a consistent estimator and asymptotically unbiased. Results: This method is applied to a sample of cluster stars. We obtain confidence intervals using a bootstrap method. Our results are in close agreement with those obtained using the Lucy method for recovering the probability density distribution of rotational velocities. Furthermore, Lucy estimation lies inside our confidence interval. Conclusions: Tikhonov regularization is a highly robust method that deconvolves the rotational velocity probability density function from a sample of v sin I data directly without the need for any convergence criteria.
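
    After discretization, the Tikhonov step reduces to a damped linear solve. A minimal sketch with an illustrative Gaussian kernel and a fixed regularization parameter; the paper's procedure for selecting the parameter and its bootstrap confidence intervals are not reproduced.

    import numpy as np

    def tikhonov_solve(K, y, lam):
        # p_lam = argmin ||K p - y||^2 + lam * ||p||^2
        #       = (K^T K + lam I)^{-1} K^T y
        n = K.shape[1]
        return np.linalg.solve(K.T @ K + lam * np.eye(n), K.T @ y)

    # Illustrative discretized first-kind Fredholm equation (Gaussian kernel).
    n = 100
    t = np.linspace(0.0, 1.0, n)
    K = np.exp(-((t[:, None] - t[None, :]) ** 2) / (2 * 0.03**2)) / n
    p_true = np.exp(-((t - 0.4) ** 2) / 0.01)        # "true" distribution
    y = K @ p_true + 1e-4 * np.random.default_rng(5).normal(size=n)
    p_hat = tikhonov_solve(K, y, lam=1e-6)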

  9. Children are sensitive to norms of giving.

    PubMed

    McAuliffe, Katherine; Raihani, Nichola J; Dunham, Yarrow

    2017-10-01

    People across societies engage in costly sharing, but the extent of such sharing shows striking cultural variation, highlighting the importance of local norms in shaping generosity. Despite this acknowledged role for norms, it is unclear when they begin to exert their influence in development. Here we use a Dictator Game to investigate the extent to which 4- to 9-year-old children are sensitive to selfish (give 20%) and generous (give 80%) norms. Additionally, we varied whether children were told how much other children give (descriptive norm) or what they should give according to an adult (injunctive norm). Results showed that children generally gave more when they were exposed to a generous norm. However, patterns of compliance varied with age. Younger children were more likely to comply with the selfish norm, suggesting a licensing effect. By contrast, older children were more influenced by the generous norm, yet capped their donations at 50%, perhaps adhering to a pre-existing norm of equality. Children were not differentially influenced by descriptive or injunctive norms, suggesting a primacy of norm content over norm format. Together, our findings indicate that while generosity is malleable in children, normative information does not completely override pre-existing biases. Copyright © 2017 Elsevier B.V. All rights reserved.

  10. Regularization by Functions of Bounded Variation and Applications to Image Enhancement

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Casas, E.; Kunisch, K.; Pola, C.

    1999-09-15

    Optimization problems regularized by bounded variation seminorms are analyzed. The optimality system is obtained and finite-dimensional approximations of bounded variation function spaces as well as of the optimization problems are studied. It is demonstrated that the choice of the vector norm in the definition of the bounded variation seminorm is of special importance for approximating subspaces consisting of piecewise constant functions. Algorithms based on a primal-dual framework that exploit the structure of these nondifferentiable optimization problems are proposed. Numerical examples are given for denoising of blocky images with very high noise.

  11. Method of making L-dopa from L-tyrosine

    DOEpatents

    Xun, Luying; Lee, Jang Young

    1998-01-01

    The invention is a method of making a L-dopa from L-tyrosine in the presence of an enzyme catalyst and oxygen. By starting with L-tyrosine, no variant of the L-dopa is produced and the L-dopa is stable in the presence of the enzyme catalyst. In other words, the reaction favors the L-dopa and is not reversible.

  13. Reduced rank regression via adaptive nuclear norm penalization

    PubMed Central

    Chen, Kun; Dong, Hongbo; Chan, Kung-Sik

    2014-01-01

    Summary We propose an adaptive nuclear norm penalization approach for low-rank matrix approximation, and use it to develop a new reduced rank estimation method for high-dimensional multivariate regression. The adaptive nuclear norm is defined as the weighted sum of the singular values of the matrix, and it is generally non-convex under the natural restriction that the weight decreases with the singular value. However, we show that the proposed non-convex penalized regression method has a global optimal solution obtained from an adaptively soft-thresholded singular value decomposition. The method is computationally efficient, and the resulting solution path is continuous. The rank consistency of and prediction/estimation performance bounds for the estimator are established for a high-dimensional asymptotic regime. Simulation studies and an application in genetics demonstrate its efficacy. PMID:25045172
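
    A minimal sketch of the adaptively soft-thresholded SVD described in the summary, assuming the common choice of weights w_i proportional to sigma_i^(-gamma), so that larger singular values are shrunk less; the weight rule, gamma, and the input matrix are illustrative.

    import numpy as np

    def adaptive_svt(C, lam, gamma=2.0, eps=1e-8):
        # min_X 0.5||X - C||_F^2 + lam * sum_i w_i * sigma_i(X)
        # with adaptive weights w_i = (sigma_i(C) + eps)^(-gamma).
        U, s, Vt = np.linalg.svd(C, full_matrices=False)
        w = (s + eps) ** (-gamma)
        return (U * np.maximum(s - lam * w, 0.0)) @ Vt

    # Illustrative use on a noisy low-rank coefficient estimate.
    rng = np.random.default_rng(6)
    C = rng.normal(size=(15, 2)) @ rng.normal(size=(2, 12))
    C_hat = adaptive_svt(C + 0.1 * rng.normal(size=(15, 12)), lam=0.05)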

  14. Development and validation of an eating norms inventory. Americans' lay-beliefs about appropriate eating.

    PubMed

    Fisher, Robert J; Dubé, Laurette

    2011-10-01

    What do American adults believe about what, where, when, how much, and how often it is appropriate to eat? Such normative beliefs originate from family and friends through socialization processes, but they are also influenced by governments, educational institutions, and businesses. Norms therefore provide an important link between the social environment and individual attitudes and behaviors. This paper reports on five studies that identify, develop, and validate measures of normative beliefs about eating. In study 1 we use an inductive method to identify what American adults believe are appropriate or desirable eating behaviors. Studies 2 and 3 are used to purify and assess the discriminant and nomological validity of the proposed set of 18 unidimensional eating norms. Study 4 assesses predictive validity and finds that acting in a norm-consistent fashion is associated with lower Body Mass Index (BMI), and greater body satisfaction and subjective health. Study 5 assesses the underlying social desirability and perceived healthiness of the norms. Copyright © 2011 Elsevier Ltd. All rights reserved.

  15. A Handful of Paragraphs on "Translation" and "Norms."

    ERIC Educational Resources Information Center

    Toury, Gideon

    1998-01-01

    Presents some thoughts on the issue of translation and norms, focusing on the relationships between social agreements, conventions, and norms; translational norms; acts of translation and translation events; norms and values; norms for translated texts versus norms for non-translated texts; and competing norms. Comments on the reactions to three…

  16. [Construction and application of prokaryotic expression system of Leptospira interrogans lipL32/1-lipL41/1 fusion gene].

    PubMed

    Luo, Dong-jiao; Yan, Jie; Mao, Ya-fei; Li, Shu-ping; Luo, Yi-hui; Li, Li-wei

    2005-01-01

    To construct the lipL32/1-lipL41/1 fusion gene and its prokaryotic expression system, and to determine the frequencies of carriage and expression of the lipL32 and lipL41 genes in L.interrogans wild strains as well as specific antibody levels in sera from leptospirosis patients. The lipL32/1-lipL41/1 fusion gene was constructed using a linking primer PCR method, and its prokaryotic expression system was made with routine techniques. SDS-PAGE was used to examine expression of the target recombinant protein rLipL32/1-rLipL41/1. Immunogenicity of rLipL32/1-rLipL41/1 was identified by Western blot. PCR and MAT were performed to detect carriage and expression of the lipL32 and lipL41 genes in 97 wild L.interrogans strains. Antibodies against the products of the lipL32 and lipL41 genes in serum samples from 228 leptospirosis patients were detected by ELISA. The homologies of the nucleotide and putative amino acid sequences of the lipL32/1-lipL41/1 fusion gene were 99.9% and 99.8%, respectively, in comparison with the reported sequences. The expression output of the target recombinant protein rLipL32/1-rLipL41/1, mainly present in inclusion bodies, accounted for 10% of the total bacterial proteins. Both the rabbit antisera against rLipL32/1 and rLipL41/1 could bind to rLipL32/1-rLipL41/1. 97.9% and 87.6% of the L.interrogans wild strains carried the lipL32 and lipL41 genes, respectively. 95.9% and 84.5% of the wild strains were positive by MAT, with titers of 1:4 - 1:128, using rabbit anti-rLipL32s or anti-rLipL41s sera, respectively. 94.7%-97.4% of the patients' serum samples were positive for rLipL32s antibodies, while 78.5%-84.6% had detectable rLipL41s antibodies. The lipL32/1-lipL41/1 fusion gene and its prokaryotic expression system were successfully constructed. The expressed fusion protein had qualified immunogenicity. Both the lipL32 and lipL41 genes are extensively carried and frequently expressed by different serogroups of L.interrogans, and their expression products exhibit cross-antigenicity.

  17. Superresolution SAR Imaging Algorithm Based on Mvm and Weighted Norm Extrapolation

    NASA Astrophysics Data System (ADS)

    Zhang, P.; Chen, Q.; Li, Z.; Tang, Z.; Liu, J.; Zhao, L.

    2013-08-01

    In this paper, we present an extrapolation approach that uses a minimum weighted-norm constraint and minimum variance spectrum estimation to improve synthetic aperture radar (SAR) resolution. The minimum variance method is a robust high-resolution spectrum estimation method. Based on the theory of SAR imaging, the signal model of SAR imagery is shown to be amenable to data extrapolation methods for improving the resolution of SAR images. The method is used to extrapolate the effective bandwidth in the phase history domain, and better results are obtained compared with the adaptive weighted norm extrapolation (AWNE) method and the traditional imaging method, using both simulated and actual measured data.

  18. Sensitivity regularization of the Cramér-Rao lower bound to minimize B1 nonuniformity effects in quantitative magnetization transfer imaging.

    PubMed

    Boudreau, Mathieu; Pike, G Bruce

    2018-05-07

    To develop and validate a regularization approach for optimizing the B1 insensitivity of the quantitative magnetization transfer (qMT) pool-size ratio (F). An expression describing the impact of B1 inaccuracies on qMT fitting parameters was derived using a sensitivity analysis. To simultaneously optimize for robustness against noise and B1 inaccuracies, the optimization condition was defined as the Cramér-Rao lower bound (CRLB) regularized by the B1-sensitivity expression for the parameter of interest (F). The qMT protocols were iteratively optimized from an initial search space, with and without B1 regularization. Three 10-point qMT protocols (Uniform, CRLB, CRLB + B1 regularization) were compared using Monte Carlo simulations for a wide range of conditions (e.g., SNR, B1 inaccuracies, tissues). The B1-regularized CRLB optimization protocol resulted in the best robustness of F against B1 errors, for a wide range of SNRs and for both white matter and gray matter tissues. For SNR = 100, this protocol resulted in errors of less than 1% in mean F values for B1 errors ranging between -10 and 20%, the range of B1 values typically observed in vivo in the human head at field strengths of 3 T and less. Both CRLB-optimized protocols resulted in the lowest σF values for all SNRs and did not increase in the presence of B1 inaccuracies. This work demonstrates a regularized optimization approach for reducing the sensitivity of qMT parameters, particularly the pool-size ratio (F), to auxiliary measurements (e.g., B1). Because protocols optimized with this method are predicted to be substantially less sensitive to B1, B1 mapping could even be omitted for qMT studies primarily interested in F. © 2018 International Society for Magnetic Resonance in Medicine.

  19. Social norms and prejudice against homosexuals.

    PubMed

    Pereira, Annelyse; Monteiro, Maria Benedicta; Camino, Leoncio

    2009-11-01

    Different studies regarding the role of norms on the expression of prejudice have shown that the anti-prejudice norm influences people to inhibit prejudice expressions. However, if norm pressure has led to a substantial decrease in the public expression of prejudice against certain targets (e.g., blacks, women, blind people), little theoretical and empirical attention has been paid to the role of this general norm regarding sexual minorities (e.g., prostitutes, lesbians and gays). In this sense, the issue we want to address is whether general anti-prejudice norms can reduce the expression of prejudice against homosexual individuals. In this research we investigate the effect of activating an anti-prejudice norm against homosexuals on blatant and subtle expressions of prejudice. The anti-prejudice norm was experimentally manipulated and its effects were observed on rejection to intimacy (blatant prejudice) and on positive-negative emotions (subtle prejudice) regarding homosexuals. 136 university students were randomly allocated to activated-norm and control conditions and completed a questionnaire that included norm manipulation and the dependent variables. A multivariate analysis of variance (MANOVA) as well as subsequent ANOVAS showed that only in the high normative pressure condition participants expressed less rejection to intimacy and less negative emotions against homosexuals, when compared to the simple norm-activation and the control conditions. Positive emotions, however, were similar both in the high normative pressure and the control conditions. We concluded that a high anti-prejudice pressure regarding homosexuals could reduce blatant prejudice but not subtle prejudice, considering that the expression of negative emotions decreased while the expression of positive emotions remained stable.

  20. Do social norms affect intended food choice?

    PubMed

    Croker, H; Whitaker, K L; Cooke, L; Wardle, J

    2009-01-01

    To evaluate the effect of social norms on intended fruit and vegetable intake. A two-stage design to i) compare the perceived importance of normative influences vs cost and health on dietary choices, and ii) test the prediction that providing information on social norms will increase intended fruit and vegetable consumption in an experimental study. Home-based interviews (N=1083; 46% men, 54% women) were carried out as part of the Office for National Statistics Omnibus Survey in November 2008. The public's perception of the importance of social norms was lower (M=2.1) than the perceived importance of cost (M=2.7) or health (M=3.4) (all p's<0.001) on a scale from 1 (not at all important) to 4 (very important). In contrast, results from the experimental study showed that intentions to eat fruit and vegetables were positively influenced by normative information (p=0.011) in men but not by health or cost information; none of the interventions affected women's intentions. People have little awareness of the influence of social norms but normative information can have a demonstrable impact on dietary intentions. Health promotion might profit from emphasising how many people are attempting to adopt healthy lifestyles rather than how many have poor diets.

  1. Comparison of ultrasonic-assisted and regular leaching of germanium from by-product of zinc metallurgy.

    PubMed

    Zhang, Libo; Guo, Wenqian; Peng, Jinhui; Li, Jing; Lin, Guo; Yu, Xia

    2016-07-01

    A major source of germanium recovery, and the source material for this research, is a by-product of the lead and zinc metallurgical process. The primary purpose of the research is to investigate the effects of ultrasonic-assisted and regular methods on the leaching yield of germanium from roasted slag containing germanium. In the study, an HCl-CaCl2 mixed solution is adopted as the reacting system and Ca(ClO)2 is used as the oxidant. Through six single-factor experiments (leaching time, temperature, amount of Ca(ClO)2, acid concentration, concentration of CaCl2 solution, ultrasonic power) and a comparison of the two methods, it is found that the optimum germanium recovery for the ultrasonic-assisted method is obtained at a temperature of 80 °C for a leaching duration of 40 min. The optimum concentrations of hydrochloric acid, CaCl2 and oxidizing agent are identified as 3.5 mol/L, 150 g/L and 58.33 g/L, respectively. In addition, 700 W is the best ultrasonic power, and an overly high power is detrimental to the leaching process. Under the optimum conditions, the recovery of germanium reaches up to 92.7%. The optimum leaching conditions for the regular method are the same as for the ultrasonic-assisted method, except that the regular method takes 100 min and its Ge leaching rate of 88.35% is lower by about 4.35%. Overall, the experiments show that the leaching time can be reduced by as much as 60% and the leaching rate of Ge can be increased by 3-5% with the application of the ultrasonic tool, which is mainly due to the mechanical action of ultrasound. Copyright © 2015 Elsevier B.V. All rights reserved.

  2. Improvements in GRACE Gravity Fields Using Regularization

    NASA Astrophysics Data System (ADS)

    Save, H.; Bettadpur, S.; Tapley, B. D.

    2008-12-01

    The unconstrained global gravity field models derived from GRACE are susceptible to systematic errors that show up as broad "stripes" aligned in a North-South direction on the global maps of mass flux. These errors are believed to be a consequence of both systematic and random errors in the data that are amplified by the nature of the gravity field inverse problem. These errors impede scientific exploitation of the GRACE data products, and limit the realizable spatial resolution of the GRACE global gravity fields in certain regions. We use regularization techniques to reduce these "stripe" errors in the gravity field products. The regularization criteria are designed such that there is no attenuation of the signal and that the solutions fit the observations as well as an unconstrained solution does. We have used a computationally inexpensive method, normally referred to as the "L-ribbon", to find the regularization parameter. This paper discusses the characteristics and statistics of a 5-year time series of regularized gravity field solutions. The solutions show markedly reduced stripes, are of uniformly good quality over time, and leave little or no systematic observation residuals, which are a frequent consequence of signal suppression from regularization. Up to degree 14, the signal in the regularized solutions shows correlation greater than 0.8 with the un-regularized CSR Release-04 solutions. Signals from large-amplitude, small-spatial-extent events - such as the Great Sumatra-Andaman Earthquake of 2004 - are visible in the global solutions without using the special post-facto error reduction techniques employed previously in the literature. Hydrological signals as small as 5 cm water-layer equivalent in small river basins, like the Indus and Nile for example, are clearly evident, in contrast to noisy estimates from RL04. The residual variability over the oceans relative to a seasonal fit is small except at higher latitudes, and is evident without the need for de-striping or
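
    The "L-ribbon" computation itself is not spelled out in the abstract. As a rough illustration of the idea it cheaply approximates, the sketch below (hypothetical matrix G and data d, numpy only; not the authors' code) evaluates Tikhonov solutions over a grid of damping parameters via the SVD and picks the corner of the classical L-curve, i.e. the point of maximum curvature of the residual-norm versus solution-norm trade-off in log-log coordinates.

        import numpy as np

        # Hypothetical linear inverse problem standing in for the gravity
        # field estimation: d = G @ m_true + noise.
        rng = np.random.default_rng(0)
        G = rng.standard_normal((200, 50))
        m_true = rng.standard_normal(50)
        d = G @ m_true + 0.05 * rng.standard_normal(200)

        # The SVD lets every Tikhonov solution be evaluated cheaply.
        U, s, Vt = np.linalg.svd(G, full_matrices=False)
        beta = U.T @ d

        lams = np.logspace(-4, 2, 60)
        res_norm, sol_norm = [], []
        for lam in lams:
            f = s**2 / (s**2 + lam**2)        # Tikhonov filter factors
            m = Vt.T @ (f * beta / s)         # regularized solution
            res_norm.append(np.linalg.norm(G @ m - d))
            sol_norm.append(np.linalg.norm(m))

        # L-curve corner: maximum curvature in log-log coordinates.
        x, y = np.log(res_norm), np.log(sol_norm)
        dx, dy = np.gradient(x), np.gradient(y)
        ddx, ddy = np.gradient(dx), np.gradient(dy)
        kappa = (dx * ddy - dy * ddx) / (dx**2 + dy**2) ** 1.5
        print(f"corner at lambda = {lams[np.nanargmax(np.abs(kappa))]:.3g}")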

  3. Social norms of accompanied young children and observed crossing behaviors.

    PubMed

    Rosenbloom, Tova; Sapir-Lavid, Yael; Hadari-Carmi, Ofri

    2009-01-01

    Social norms for accompanied young children and crossing behaviors were examined in two studies conducted in an Ultra-Orthodox Jewish community in Israel. In Study 1, the road behaviors of young children crossing with and without accompaniment, and of older children, were observed, and the actual social norm for accompanied school children younger than 9 years old was examined. In Study 2, the perceived norm of accompaniment was tested by questionnaires. Young children who crossed without accompaniment exhibited poorer crossing skills compared to older children and to young children crossing with accompaniment. In the four locations observed, the actual accompaniment rate ranged from 15% to 60%. The perceived social norm for child accompaniment was lower than the actual norm. The discussion refers to both theoretical issues and their practical implications.

  4. From Abstract to Concrete Norms in Agent Institutions

    NASA Technical Reports Server (NTRS)

    Grossi, Davide; Dignum, Frank

    2004-01-01

    Norms specifying constraints on institutions are stated in a form that allows them to regulate a wide range of situations over time without need for modification. To guarantee this stability, the formulation of norms needs to abstract from a variety of concrete aspects which are, however, relevant for the actual operationalization of institutions. If agent institutions are to be built that comply with a set of abstract requirements, how can those requirements be translated into more concrete constraints whose impact can be described directly in the institution? In this work we make use of logical methods to provide a formal characterization of the translation rules that establish the connection between abstract and concrete norms. On the basis of this characterization, a comprehensive formalization of the notion of institution is also provided.

  5. C1,1 regularity for degenerate elliptic obstacle problems

    NASA Astrophysics Data System (ADS)

    Daskalopoulos, Panagiota; Feehan, Paul M. N.

    2016-03-01

    The Heston stochastic volatility process is a degenerate diffusion process where the degeneracy in the diffusion coefficient is proportional to the square root of the distance to the boundary of the half-plane. The generator of this process with killing, called the elliptic Heston operator, is a second-order, degenerate-elliptic partial differential operator, where the degeneracy in the operator symbol is proportional to the distance to the boundary of the half-plane. In mathematical finance, solutions to the obstacle problem for the elliptic Heston operator correspond to value functions for perpetual American-style options on the underlying asset. With the aid of weighted Sobolev spaces and weighted Hölder spaces, we establish the optimal C1,1 regularity (up to the boundary of the half-plane) for solutions to obstacle problems for the elliptic Heston operator when the obstacle functions are sufficiently smooth.

  6. Punish and voice: punishment enhances cooperation when combined with norm-signalling.

    PubMed

    Andrighetto, Giulia; Brandts, Jordi; Conte, Rosaria; Sabater-Mir, Jordi; Solaz, Hector; Villatoro, Daniel

    2013-01-01

    Material punishment has been suggested to play a key role in sustaining human cooperation. Experimental findings, however, show that inflicting mere material costs does not always increase cooperation and may even have detrimental effects. Indeed, ethnographic evidence suggests that the most typical punishing strategies in human ecologies (e.g., gossip, derision, blame and criticism) naturally combine normative information with material punishment. Using laboratory experiments with humans, we show that the interaction of norm communication and material punishment leads to higher and more stable cooperation at a lower cost for the group than when either is used separately. In this work, we argue and provide experimental evidence that successful human cooperation is the outcome of the interaction between instrumental decision-making and the norm psychology humans are endowed with. Norm psychology is the cognitive machinery for detecting and reasoning upon norms, characterized by a salience mechanism devoted to tracking how prominent a norm is within a group. We test our hypothesis both in the laboratory and with an agent-based model. The agent-based model incorporates fundamental aspects of norm psychology absent from previous work. The combination of these methods allows us to provide an explanation for the proximate mechanisms behind the observed cooperative behaviour. The consistency between the two sources of data supports our hypothesis that cooperation is a product of norm psychology solicited by norm-signalling and coercive devices.

  7. Autoclave decomposition method for metals in soils and sediments.

    PubMed

    Navarrete-López, M; Jonathan, M P; Rodríguez-Espinosa, P F; Salgado-Galeana, J A

    2012-04-01

    Partial leaching of metals (Fe, Mn, Cd, Co, Cu, Ni, Pb, and Zn) was performed using an autoclave technique modified from the EPA 3051A digestion technique. The autoclave method was developed as an alternative to the regular digestion procedure: it passed the safety norms for partial extraction of metals in polytetrafluoroethylene (PFA) vessels at a low constant temperature (119.5 ± 1.5 °C), and the recovery of elements was also precise. The autoclave method was validated using two Standard Reference Materials (SRMs: Loam Soil B and Loam Soil D), and the recoveries were on par with those of traditionally established digestion methods. The autoclave method was applied to samples from different natural environments (beach, mangrove, river, and city soil) to reproduce the recovery of elements during subsequent analysis.

  8. Social Support and Peer Norms Scales for Physical Activity in Adolescents

    PubMed Central

    Ling, Jiying; Robbins, Lorraine B.; Resnicow, Ken; Bakhoya, Marion

    2015-01-01

    Objectives: To evaluate psychometric properties of a Social Support and Peer Norms Scale in 5th-7th grade urban girls. Methods: Baseline data from 509 girls and test-retest data from another 94 girls in the Midwestern US were used. Results: Cronbach's alpha was .83 for the Social Support Scale and .72 for the Peer Norms Scale, whereas test-retest reliability was .78 for both scales. Exploratory factor analysis suggested a single-factor structure for the Social Support Scale, and a 3-factor structure for the Peer Norms Scale. Social support was correlated with accelerometer-measured physical activity (r = .13, p = .006) and with peer norms (r = .50, p < .0001). Conclusions: Both scales have adequate psychometric properties. PMID:25207514

  9. Detection of high PD-L1 expression in oral cancers by a novel monoclonal antibody L1Mab-4.

    PubMed

    Yamada, Shinji; Itai, Shunsuke; Kaneko, Mika K; Kato, Yukinari

    2018-03-01

    Programmed cell death-ligand 1 (PD-L1), which is a ligand of programmed cell death-1 (PD-1), is a type I transmembrane glycoprotein that is expressed on antigen-presenting cells and several tumor cells, including melanoma and lung cancer cells. There is a strong correlation between human PD-L1 (hPD-L1) expression on tumor cells and negative prognosis in cancer patients. In this study, we produced a novel anti-hPD-L1 monoclonal antibody (mAb), L1Mab-4 (IgG2b, kappa), using the cell-based immunization and screening (CBIS) method and investigated hPD-L1 expression in oral cancers. L1Mab-4 reacted with oral cancer cell lines (Ca9-22, HO-1-u-1, SAS, HSC-2, HSC-3, and HSC-4) in flow cytometry and stained oral cancers in a membrane-staining pattern. L1Mab-4 stained 106/150 (70.7%) of oral squamous cell carcinomas, indicating the very high sensitivity of L1Mab-4. These results indicate that L1Mab-4 could be useful for investigating the function of hPD-L1 in oral cancers.

  10. Growth of Sobolev Norms in Linear Schrödinger Equations with Quasi-Periodic Potential

    NASA Astrophysics Data System (ADS)

    Bourgain, J.

    In this paper, we consider the following problem. Let iu_t + Δu + V(x,t)u = 0 be a linear Schrödinger equation (periodic boundary conditions), where V is a real, bounded, real analytic potential which is periodic in x and quasi-periodic in t with Diophantine frequency vector λ. Denote by S(t) the corresponding flow map; thus S(t) preserves the L2-norm, and our aim is to study its behaviour on Hs(TD), s > 0. Our main result is that the growth in time is at most logarithmic: if φ ∈ Hs, then ||S(t)φ||_{Hs} ≤ C (log(2 + |t|))^{Cs} ||φ||_{Hs}. (*) More precisely, (*) is proven in 1D and 2D when V is small. We also exhibit examples showing that a growth of higher Sobolev norms may occur in this context; (*) is thus essentially best possible.

  11. Swedish women's perceptions of and conformity to feminine norms.

    PubMed

    Kling, Johanna; Holmqvist Gattario, Kristina; Frisén, Ann

    2017-06-01

    The relatively high gender equality in Swedish society is likely to exert an influence on gender role construction. Hence, the present research aimed to investigate Swedish women's perceptions of and conformity to feminine norms. A mixed-methods approach with two studies was used. In Study 1, young Swedish women's gender role conformity, as measured by the Conformity to Feminine Norms Inventory 45 (CFNI-45), was compared to the results from previously published studies in Canada, the United States, and Slovakia. Overall, Swedish women displayed less conformity than their foreign counterparts, with the largest difference on the subscale Sexual fidelity. In Study 2, focus group interviews with young Swedish women added a more complex picture of feminine norms in Swedish society. For instance, the results indicated that Swedish women, while living in a society with a strong gender equality discourse, are torn between the perceived need to invest in their appearance and the risk of being viewed as non-equal when doing so. In sum, despite the fact that traditional gender roles are less pronounced in Sweden, gender role conformity is still a pressing issue. Since attending to the potential roles of feminine norms in women's lives has previously been proposed to be useful in counseling and therapeutic work, the present research also offers valuable information for both researchers and practitioners. [Correction added on 5 May 2017, after first online publication in April 2017: An incorrect Abstract was inadvertently captured in the published article and has been corrected in this current version.]. © 2017 Scandinavian Psychological Associations and John Wiley & Sons Ltd.

  12. 26 CFR 1.446-2 - Method of accounting for interest.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    Interest is taken into account by a taxpayer under the taxpayer's regular method of accounting (e.g., an accrual method or the …). [26 Internal Revenue, 2010-04-01; Income Taxes, Methods of Accounting, § 1.446-2, Method of accounting for interest.]

  13. Social norms and their influence on eating behaviours.

    PubMed

    Higgs, Suzanne

    2015-03-01

    Social norms are implicit codes of conduct that provide a guide to appropriate action. There is ample evidence that social norms about eating have a powerful effect on both food choice and amounts consumed. This review explores the reasons why people follow social eating norms and the factors that moderate norm following. It is proposed that eating norms are followed because they provide information about safe foods and facilitate food sharing. Norms are a powerful influence on behaviour because following (or not following) norms is associated with social judgements. Norm following is more likely when there is uncertainty about what constitutes correct behaviour and when there is greater shared identity with the norm referent group. Social norms may affect food choice and intake by altering self-perceptions and/or by altering the sensory/hedonic evaluation of foods. The same neural systems that mediate the rewarding effects of food itself are likely to reinforce the following of eating norms. Copyright © 2014 Elsevier Ltd. All rights reserved.

  14. Development and testing of the Youth Alcohol Norms Survey (YANS) instrument to measure youth alcohol norms and psychosocial influences.

    PubMed

    Burns, Sharyn K; Maycock, Bruce; Hildebrand, Janina; Zhao, Yun; Allsop, Steve; Lobo, Roanna; Howat, Peter

    2018-05-14

    This study aimed to develop and validate an online instrument to: (1) identify common alcohol-related social influences, norms and beliefs among adolescents; (2) clarify the process and pathways through which proalcohol norms are transmitted to adolescents; (3) describe the characteristics of social connections that contribute to the transmission of alcohol norms; and (4) identify the influence of alcohol marketing on adolescent norm development. The online Youth Alcohol Norms Survey (YANS) was administered in secondary schools in Western Australia. Using a 2-week test-retest format, the YANS was administered to secondary school students (n=481, age=13-17 years, female 309, 64.2%). The development of the YANS was guided by social cognitive theory and comprised a systematic multistage process including evaluation of content and face validity. Exploratory factor analysis was conducted to determine the underlying factor structure of the instrument, and test-retest reliability was examined using the intraclass correlation coefficient (ICC) and Cohen's kappa. A five-factor structure with meaningful components and robust factorial loads was identified; the five factors were labelled 'individual attitudes and beliefs', 'peer and community identity', 'sibling influences', 'school and community connectedness' and 'injunctive norms', respectively. The instrument demonstrated stability across the test-retest procedure (ICC=0.68-0.88, Cohen's kappa coefficient=0.69) for most variables. The results support the reliability and factorial validity of this instrument. The YANS presents a promising tool, which enables comprehensive assessment of reciprocal individual, behavioural and environmental factors that influence alcohol-related norms among adolescents. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise

  15. The Cauchy Problem in Local Spaces for the Complex Ginzburg-Landau EquationII. Contraction Methods

    NASA Astrophysics Data System (ADS)

    Ginibre, J.; Velo, G.

    We continue the study of the initial value problem for the complex Ginzburg-Landau equation (with a > 0, b > 0, g ≥ 0) initiated in a previous paper [I]. We treat the case where the initial data and the solutions belong to local uniform spaces, more precisely to spaces of functions satisfying local regularity conditions and uniform bounds in local norms, but no decay conditions (or arbitrarily weak decay conditions) at infinity. In [I] we used compactness methods and an extended version of recent local estimates [3], and proved in particular the existence of solutions globally defined in time with local regularity of the initial data corresponding to the spaces Lr for r ≥ 2 or H1. Here we treat the same problem by contraction methods. This allows us in particular to prove that the solutions obtained in [I] are unique under suitable subcriticality conditions, and to obtain for them additional regularity properties and uniform bounds. The method extends some of those previously applied to the nonlinear heat equation in global spaces to the framework of local uniform spaces.

  16. COMPARED TO WHAT? EARLY BRAIN OVERGROWTH IN AUTISM AND THE PERILS OF POPULATION NORMS

    PubMed Central

    Raznahan, Armin; Wallace, Gregory L; Antezana, Ligia; Greenstein, Dede; Lenroot, Rhoshel; Thurm, Audrey; Gozzi, Marta; Spence, Sarah; Martin, Alex; Swedo, Susan E; Giedd, Jay N

    2013-01-01

    Background: Early brain overgrowth (EBO) in autism spectrum disorder (ASD) is amongst the best-replicated biological associations in psychiatry. Most positive reports have compared head circumference (HC) in ASD (an excellent proxy for early brain size) with well-known reference norms. We sought to reappraise evidence for the EBO hypothesis given (i) the recent proliferation of longitudinal HC studies in ASD, and (ii) emerging reports that several of the reference norms used to define EBO in ASD may be biased towards detecting HC overgrowth in contemporary samples of healthy children. Methods: (1) Systematic review of all published HC studies in children with ASD. (2) Comparison of 330 longitudinally gathered HC measures between birth and 18 months from male children with autism (n=35) and typically developing controls (n=22). Results: In systematic review, comparisons with locally recruited controls were significantly less likely to identify EBO in ASD than norm-based studies (p<0.006). Through systematic review and analysis of new data we replicate seminal reports of EBO in ASD relative to classical HC norms, but show that this overgrowth relative to norms is mimicked by patterns of HC growth with age in a large contemporary community-based sample of US children (n~75,000). Controlling for known HC norm biases leaves inconsistent support for a subtle, later-emerging and subgroup-specific pattern of EBO in clinically ascertained ASD vs. community controls. Conclusions: The best-replicated aspects of EBO reflect generalizable HC norm biases rather than disease-specific biomarkers. The potential HC norm biases we detail are not specific to ASD research, but apply throughout clinical and academic medicine. PMID:23706681

  17. [French norms of imagery for pictures, for concrete and abstract words].

    PubMed

    Robin, Frédérique

    2006-09-01

    This paper presents French norms for mental image versus picture agreement for 138 pictures, and imagery values for 138 concrete words and 69 abstract words. The pictures were selected from Snodgrass and Vanderwart's norms (1980). The concrete words correspond to the dominant naming response to the pictorial stimuli. The abstract words were taken from the verbal associative norms published by Ferrand (2001). The norms were established according to two variables: 1) mental image vs. picture agreement, and 2) imagery value of words. Three other variables were controlled: 1) picture naming agreement; 2) familiarity of the objects referred to in the pictures and the concrete words; and 3) subjective verbal frequency of words. The originality of this work is to provide French imagery norms for the three kinds of stimuli usually compared in research on dual coding. Moreover, these studies focus on variations of figurative and verbal stimuli in visual imagery processes.

  18. Information fusion in regularized inversion of tomographic pumping tests

    USGS Publications Warehouse

    Bohling, Geoffrey C.; ,

    2008-01-01

    In this chapter we investigate a simple approach to incorporating geophysical information into the analysis of tomographic pumping tests for characterization of the hydraulic conductivity (K) field in an aquifer. A number of authors have suggested a tomographic approach to the analysis of hydraulic tests in aquifers - essentially simultaneous analysis of multiple tests or stresses on the flow system - in order to improve the resolution of the estimated parameter fields. However, even with a large amount of hydraulic data in hand, the inverse problem is still plagued by non-uniqueness and ill-conditioning and the parameter space for the inversion needs to be constrained in some sensible fashion in order to obtain plausible estimates of aquifer properties. For seismic and radar tomography problems, the parameter space is often constrained through the application of regularization terms that impose penalties on deviations of the estimated parameters from a prior or background model, with the tradeoff between data fit and model norm explored through systematic analysis of results for different levels of weighting on the regularization terms. In this study we apply systematic regularized inversion to analysis of tomographic pumping tests in an alluvial aquifer, taking advantage of the steady-shape flow regime exhibited in these tests to expedite the inversion process. In addition, we explore the possibility of incorporating geophysical information into the inversion through a regularization term relating the estimated K distribution to ground penetrating radar velocity and attenuation distributions through a smoothing spline model. © 2008 Springer-Verlag Berlin Heidelberg.
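
    As a minimal sketch of the regularized inversion discussed above (hypothetical sensitivity matrix J, drawdown data d, and a prior model m_prior standing in for the geophysics-derived background; not the chapter's code), the example below solves the penalized least-squares problem for several regularization weights and prints the data-fit versus model-norm trade-off that the chapter explores systematically.

        import numpy as np

        # Hypothetical tomographic setup: drawdowns d from a sensitivity
        # matrix J acting on log-K values m; m_prior plays the role of the
        # background model suggested by radar velocity and attenuation.
        rng = np.random.default_rng(1)
        J = rng.standard_normal((120, 40))
        m_true = rng.standard_normal(40)
        d = J @ m_true + 0.1 * rng.standard_normal(120)
        m_prior = m_true + 0.3 * rng.standard_normal(40)

        # Solve min ||J m - d||^2 + alpha^2 ||m - m_prior||^2 by stacking
        # the regularization rows beneath the data rows.
        for alpha in (0.01, 0.1, 1.0, 10.0):
            A = np.vstack([J, alpha * np.eye(40)])
            b = np.concatenate([d, alpha * m_prior])
            m, *_ = np.linalg.lstsq(A, b, rcond=None)
            print(f"alpha={alpha:5.2f}  misfit={np.linalg.norm(J @ m - d):7.3f}  "
                  f"||m - m_prior||={np.linalg.norm(m - m_prior):6.3f}")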

  19. Factors affecting children's adherence to regular dental attendance: a systematic review.

    PubMed

    Badri, Parvaneh; Saltaji, Humam; Flores-Mir, Carlos; Amin, Maryam

    2014-08-01

    Parents' adherence to regular dental attendance for their young children plays an important role in improving and maintaining children's oral health. The authors conducted a systematic review to determine the factors that influence parental adherence to regular dental attendance for their children. The authors searched nine electronic databases to May 2013. They included quantitative and qualitative studies in which researchers examined factors influencing dental attendance in children 12 years or younger. The authors considered all emergency and nonemergency visits. They appraised methodological quality through the Health Evidence Bulletins Wales methodological quality assessment tool. The authors selected 14 studies for the systematic review. Researchers in these studies reported a variety of factors at the patient, provider and system levels that influenced dental attendance. Factors identified at the patient level included parents' education, socioeconomic status, behavioral beliefs, perceived power and subjective norms. At the provider level, the authors identified communication and professional skills. At the system level, the authors identified collaborations between communities and health care professionals, as well as a formal policy of referring patients from family physicians and pediatricians to dentists. Barriers to and facilitators of parents' adherence to regular dental attendance for their children should be identified and considered when formulating health promotion policies. Further research is needed to investigate psychosocial determinants of children's adherence to regular dental visits.

  20. Using Tikhonov Regularization for Spatial Projections from CSR Regularized Spherical Harmonic GRACE Solutions

    NASA Astrophysics Data System (ADS)

    Save, H.; Bettadpur, S. V.

    2013-12-01

    It has been demonstrated before that using Tikhonov regularization produces spherical harmonic solutions from GRACE that have very little residual stripes while capturing all the signal observed by GRACE within the noise level. This paper demonstrates a two-step process and uses Tikhonov regularization to remove the residual stripes in the CSR regularized spherical harmonic coefficients when computing the spatial projections. We discuss methods to produce mass anomaly grids that have no stripe features while satisfying the necessary condition of capturing all observed signal within the GRACE noise level.

  1. A fractional-order accumulative regularization filter for force reconstruction

    NASA Astrophysics Data System (ADS)

    Wensong, Jiang; Zhongyu, Wang; Jing, Lv

    2018-02-01

    The ill-posed inverse problem of force reconstruction arises from the influence of noise on the measured responses and results in an inaccurate or non-unique solution. To overcome this ill-posedness, in this paper, the transfer function of the reconstruction model is redefined by a Fractional-order Accumulative Regularization Filter (FARF). First, the measured responses with noise are refined by a fractional-order accumulation filter based on a dynamic data-refresh strategy. Second, a transfer function, generated from the filtering results of the measured responses, is manipulated by an iterative Tikhonov regularization with a series of iterative Landweber filter factors. Third, the regularization parameter is optimized by Generalized Cross-Validation (GCV) to improve the ill-posedness of the force reconstruction model. A Dynamic Force Measurement System (DFMS) for force reconstruction is designed to illustrate the application advantages of our suggested FARF method. The experimental results show that the FARF method with r = 0.1 and α = 20 has a PRE of 0.36% and an RE of 2.45%, and is superior to the other cases of the FARF method and to traditional regularization methods when it comes to dynamic force reconstruction.
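
    The abstract does not give the exact form of the authors' filter. The sketch below implements the standard fractional-order accumulated generating operation from grey-system theory, which is one common reading of a "fractional-order accumulation" step; the order r = 0.1 echoes the value quoted above, and frac_accumulate is a hypothetical helper name.

        import numpy as np
        from scipy.special import gammaln

        def frac_accumulate(x, r):
            """Fractional-order accumulation (r-AGO) of a 1-D series:
            X[k] = sum_{i<=k} C(k - i + r - 1, k - i) * x[i], the
            grey-system generalization of the cumulative sum (r = 1)."""
            x = np.asarray(x, dtype=float)
            out = np.empty(len(x))
            for k in range(len(x)):
                lag = k - np.arange(k + 1)
                # log of C(lag + r - 1, lag), computed stably via gammaln
                logc = gammaln(lag + r) - gammaln(lag + 1) - gammaln(r)
                out[k] = np.sum(np.exp(logc) * x[: k + 1])
            return out

        # r = 1 reproduces the ordinary cumulative sum; a small order such
        # as r = 0.1 accumulates only weakly and acts as a gentle smoother
        # of a noisy measured response.
        sig = np.sin(np.linspace(0, 3, 50))
        sig += 0.1 * np.random.default_rng(2).standard_normal(50)
        assert np.allclose(frac_accumulate(sig, 1.0), np.cumsum(sig))
        print(frac_accumulate(sig, 0.1)[:5])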

  2. Injunctive Norms and Problem Gambling among College Students

    PubMed Central

    Lostutter, Ty W.; Whiteside, Ursula; Fossos, Nicole; Walker, Denise D.; Larimer, Mary E.

    2010-01-01

    Two studies examined the relationships among injunctive norms and college student gambling. In study 1 we evaluated the accuracy of perceptions of other students’ approval of gambling and the relationship between perceived approval and gambling behavior. In study 2 we evaluated gambling behavior as a function of perceptions of approval of other students, friends, and family. In study 1, which included 2524 college students, perceptions of other students’ approval of gambling were found to be overestimated and were negatively associated with gambling behavior. The results of study 2, which included 565 college students, replicated the findings of study 1 and revealed positive associations between gambling behavior and perceived approval of friends and family. Results highlight the complexity of injunctive norms and the importance of considering the reference group (e.g., peers, friends, family members) in their evaluation. Results also encourage caution in considering the incorporation of injunctive norms in prevention and intervention approaches. PMID:17394053

  3. Noise suppression for dual-energy CT via penalized weighted least-square optimization with similarity-based regularization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Harms, Joseph; Wang, Tonghe; Petrongolo, Michael

    Purpose: Dual-energy CT (DECT) expands applications of CT imaging in its capability to decompose CT images into material images. However, decomposition via direct matrix inversion leads to large noise amplification and limits quantitative use of DECT. Their group has previously developed a noise suppression algorithm via penalized weighted least-square optimization with edge-preservation regularization (PWLS-EPR). In this paper, the authors improve method performance using the same framework of penalized weighted least-square optimization but with similarity-based regularization (PWLS-SBR), which substantially enhances the quality of decomposed images by retaining a more uniform noise power spectrum (NPS). Methods: The design of PWLS-SBR is based on the fact that averaging pixels of similar materials gives a low-noise image. For each pixel, the authors calculate the similarity to other pixels in its neighborhood by comparing CT values. Using an empirical Gaussian model, the authors assign high/low similarity value to one neighboring pixel if its CT value is close/far to the CT value of the pixel of interest. These similarity values are organized in matrix form, such that multiplication of the similarity matrix to the image vector reduces image noise. The similarity matrices are calculated on both high- and low-energy CT images and averaged. In PWLS-SBR, the authors include a regularization term to minimize the L-2 norm of the difference between the images without and with noise suppression via similarity matrix multiplication. By using all pixel information of the initial CT images rather than just those lying on or near edges, PWLS-SBR is superior to the previously developed PWLS-EPR, as supported by comparison studies on phantoms and a head-and-neck patient. Results: On the line-pair slice of the Catphan©600 phantom, PWLS-SBR outperforms PWLS-EPR and retains spatial resolution of 8 lp/cm, comparable to the original CT images, even at 90% reduction in noise

  4. Noise suppression for dual-energy CT via penalized weighted least-square optimization with similarity-based regularization

    PubMed Central

    Harms, Joseph; Wang, Tonghe; Petrongolo, Michael; Niu, Tianye; Zhu, Lei

    2016-01-01

    Purpose: Dual-energy CT (DECT) expands applications of CT imaging in its capability to decompose CT images into material images. However, decomposition via direct matrix inversion leads to large noise amplification and limits quantitative use of DECT. Their group has previously developed a noise suppression algorithm via penalized weighted least-square optimization with edge-preservation regularization (PWLS-EPR). In this paper, the authors improve method performance using the same framework of penalized weighted least-square optimization but with similarity-based regularization (PWLS-SBR), which substantially enhances the quality of decomposed images by retaining a more uniform noise power spectrum (NPS). Methods: The design of PWLS-SBR is based on the fact that averaging pixels of similar materials gives a low-noise image. For each pixel, the authors calculate the similarity to other pixels in its neighborhood by comparing CT values. Using an empirical Gaussian model, the authors assign high/low similarity value to one neighboring pixel if its CT value is close/far to the CT value of the pixel of interest. These similarity values are organized in matrix form, such that multiplication of the similarity matrix to the image vector reduces image noise. The similarity matrices are calculated on both high- and low-energy CT images and averaged. In PWLS-SBR, the authors include a regularization term to minimize the L-2 norm of the difference between the images without and with noise suppression via similarity matrix multiplication. By using all pixel information of the initial CT images rather than just those lying on or near edges, PWLS-SBR is superior to the previously developed PWLS-EPR, as supported by comparison studies on phantoms and a head-and-neck patient. Results: On the line-pair slice of the Catphan©600 phantom, PWLS-SBR outperforms PWLS-EPR and retains spatial resolution of 8 lp/cm, comparable to the original CT images, even at 90% reduction in noise
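
    A minimal single-image sketch of the similarity-matrix smoothing described above (the paper averages similarity matrices computed on both the high- and low-energy CT images; here a single toy image, a hypothetical Gaussian width sigma near the noise level, and a small neighborhood are assumed):

        import numpy as np

        def similarity_smooth(img, sigma=20.0, radius=2):
            """Average each pixel with its neighbors, weighted by an
            empirical Gaussian in the intensity difference; each row of
            the implicit similarity matrix is normalized so the result
            is an intensity-preserving local average."""
            pad = np.pad(img, radius, mode="edge")
            out = np.zeros_like(img, dtype=float)
            wsum = np.zeros_like(img, dtype=float)
            ny, nx = img.shape
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    nbr = pad[radius + dy:radius + dy + ny,
                              radius + dx:radius + dx + nx]
                    w = np.exp(-((nbr - img) ** 2) / (2.0 * sigma**2))
                    out += w * nbr
                    wsum += w
            return out / wsum

        # Toy two-material phantom with additive noise.
        rng = np.random.default_rng(3)
        phantom = np.zeros((64, 64))
        phantom[20:44, 20:44] = 100.0
        noisy = phantom + 10.0 * rng.standard_normal(phantom.shape)
        den = similarity_smooth(noisy, sigma=20.0, radius=2)
        print(f"noise std: {np.std(noisy - phantom):.1f} -> "
              f"{np.std(den - phantom):.1f}")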

  5. Professional Norms Guiding School Principals' Pedagogical Leadership

    ERIC Educational Resources Information Center

    Leo, Ulf

    2015-01-01

    Purpose: The purpose of this paper is to identify and analyze the professional norms surrounding school development, with a special emphasis on school principals' pedagogical leadership. Design/methodology/approach: A norm perspective is used to identify possible links between legal norms, professional norms, and actions. The findings are based on…

  6. Selection of regularization parameter in total variation image restoration.

    PubMed

    Liao, Haiyong; Li, Fang; Ng, Michael K

    2009-11-01

    We consider and study total variation (TV) image restoration. In the literature there are several regularization parameter selection methods for Tikhonov regularization problems (e.g., the discrepancy principle and the generalized cross-validation method). However, to our knowledge, these selection methods have not been applied to TV regularization problems. The main aim of this paper is to develop a fast TV image restoration method with an automatic regularization parameter selection scheme to restore blurred and noisy images. The method exploits the generalized cross-validation (GCV) technique to determine inexpensively how much regularization to use in each restoration step. By updating the regularization parameter in each iteration, the restored image can be obtained. Our experimental results for testing different kinds of noise show that the visual quality and SNRs of images restored by the proposed method are promising. We also demonstrate that the method is efficient, as it can restore images of size 256 x 256 in approximately 20 s in the MATLAB computing environment.
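
    The paper couples GCV with TV iterations; as a simpler, self-contained illustration of GCV-based parameter selection, the sketch below applies it to 1-D Tikhonov deblurring with a periodic blur model, where the FFT diagonalizes the influence matrix and the GCV function (up to a constant factor that does not affect the minimizer) has a closed form. All signals and kernel widths are made up.

        import numpy as np

        # Periodic deblurring toy problem: y = h (*) x + noise.
        rng = np.random.default_rng(4)
        n = 256
        x = np.zeros(n)
        x[60:120], x[160:200] = 1.0, 0.5
        h = np.exp(-0.5 * ((np.arange(n) - n // 2) / 3.0) ** 2)
        h /= h.sum()
        H = np.fft.fft(np.fft.ifftshift(h))          # blur eigenvalues
        y = np.real(np.fft.ifft(H * np.fft.fft(x)))
        y += 0.01 * rng.standard_normal(n)

        Y = np.fft.fft(y)
        H2 = np.abs(H) ** 2

        def gcv(lam):
            a = H2 / (H2 + lam)                      # eigenvalues of A(lam)
            resid = np.sum(np.abs((1 - a) * Y) ** 2) / n   # ||(I - A)y||^2
            return resid / np.mean(1 - a) ** 2

        lams = np.logspace(-6, 0, 80)
        lam_best = lams[np.argmin([gcv(l) for l in lams])]
        x_rec = np.real(np.fft.ifft(np.conj(H) * Y / (H2 + lam_best)))
        print(f"GCV lambda = {lam_best:.2e}, "
              f"recon error = {np.linalg.norm(x_rec - x):.3f}")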

  7. Punish and Voice: Punishment Enhances Cooperation when Combined with Norm-Signalling

    PubMed Central

    Andrighetto, Giulia; Brandts, Jordi; Conte, Rosaria; Sabater-Mir, Jordi; Solaz, Hector; Villatoro, Daniel

    2013-01-01

    Material punishment has been suggested to play a key role in sustaining human cooperation. Experimental findings, however, show that inflicting mere material costs does not always increase cooperation and may even have detrimental effects. Indeed, ethnographic evidence suggests that the most typical punishing strategies in human ecologies (e.g., gossip, derision, blame and criticism) naturally combine normative information with material punishment. Using laboratory experiments with humans, we show that the interaction of norm communication and material punishment leads to higher and more stable cooperation at a lower cost for the group than when either is used separately. In this work, we argue and provide experimental evidence that successful human cooperation is the outcome of the interaction between instrumental decision-making and the norm psychology humans are endowed with. Norm psychology is the cognitive machinery for detecting and reasoning upon norms, characterized by a salience mechanism devoted to tracking how prominent a norm is within a group. We test our hypothesis both in the laboratory and with an agent-based model. The agent-based model incorporates fundamental aspects of norm psychology absent from previous work. The combination of these methods allows us to provide an explanation for the proximate mechanisms behind the observed cooperative behaviour. The consistency between the two sources of data supports our hypothesis that cooperation is a product of norm psychology solicited by norm-signalling and coercive devices. PMID:23776441

  8. Regular treatment with formoterol versus regular treatment with salmeterol for chronic asthma: serious adverse events

    PubMed Central

    Cates, Christopher J; Lasserson, Toby J

    2014-01-01

    Background An increase in serious adverse events with both regular formoterol and regular salmeterol in chronic asthma has been demonstrated in previous Cochrane reviews. Objectives We set out to compare the risks of mortality and non-fatal serious adverse events in trials which have randomised patients with chronic asthma to regular formoterol versus regular salmeterol. Search methods We identified trials using the Cochrane Airways Group Specialised Register of trials. We checked manufacturers’ websites of clinical trial registers for unpublished trial data and also checked Food and Drug Administration (FDA) submissions in relation to formoterol and salmeterol. The date of the most recent search was January 2012. Selection criteria We included controlled, parallel-design clinical trials on patients of any age and with any severity of asthma if they randomised patients to treatment with regular formoterol versus regular salmeterol (without randomised inhaled corticosteroids), and were of at least 12 weeks’ duration. Data collection and analysis Two authors independently selected trials for inclusion in the review and extracted outcome data. We sought unpublished data on mortality and serious adverse events from the sponsors and authors. Main results The review included four studies (involving 1116 adults and 156 children). All studies were open label and recruited patients who were already taking inhaled corticosteroids for their asthma, and all studies contributed data on serious adverse events. All studies compared formoterol 12 μg versus salmeterol 50 μg twice daily. The adult studies were all comparing Foradil Aerolizer with Serevent Diskus, and the children’s study compared Oxis Turbohaler to Serevent Accuhaler. There was only one death in an adult (which was unrelated to asthma) and none in children, and there were no significant differences in non-fatal serious adverse events comparing formoterol to salmeterol in adults (Peto odds ratio (OR) 0.77; 95

  9. The Effects of Liking Norms and Descriptive Norms on Vegetable Consumption: A Randomized Experiment

    PubMed Central

    Thomas, Jason M.; Liu, Jinyu; Robinson, Eric L.; Aveyard, Paul; Herman, C. Peter; Higgs, Suzanne

    2016-01-01

    There is evidence that social norm messages can be used to promote the selection of fruit and vegetables in low habitual consumers of these foods, but it is unclear whether this effect is sustained over time. It is also unclear whether information about others' liking for a food (liking norm) could have the same effect. Using a 2 × 5 × 2 experimental design we investigated the effects of exposure to various messages on later intake from a food buffet and whether any effects were sustained 24 h after exposure in both low and high consumers of vegetables. There were three factors: delay (immediate food selection vs. food selection 24 h after exposure), message type (liking norm, descriptive norm, health message, vegetable variety condition, and neutral control message), and habitual consumption (low vs. high). The buffet consisted of three raw vegetables, three energy-dense foods, and two dips. For vegetables and non-vegetables there were no main effects of message type, nor any main effect of delay. There was a significant message × habitual vegetable consumption interaction for vegetable consumption; however, follow-up tests did not yield any significant effects. Examining each food individually, there were no main effects of message type, nor any main effect of delay, for any of the foods; however, there was a message × habitual vegetable consumption interaction for broccoli. Consumption of broccoli in the health message and descriptive norm conditions did not differ from the control neutral condition. However, habitually low consumers of vegetables increased their consumption of broccoli in the vegetable variety and liking norm conditions relative to habitual low vegetable consumers in the neutral control condition (p < 0.05). Further investigation of the effects of the liking norm and vegetable variety condition on vegetable intake is warranted. This trial is listed as NCT02618174 at clinicaltrials.gov. PMID:27065913

  10. Facial Anthropometric Norms among Kosovo - Albanian Adults.

    PubMed

    Staka, Gloria; Asllani-Hoxha, Flurije; Bimbashi, Venera

    2017-09-01

    The development of an anthropometric craniofacial database is a necessary multidisciplinary proposal. The aim of this study was to establish facial anthropometric norms and to investigate sexual dimorphism in facial variables among Kosovo Albanian adults. The sample included 204 students of the Dental School, Faculty of Medicine, University of Pristina. Using direct anthropometry, a series of 8 standard facial measurements was taken on each subject with a digital caliper with an accuracy of 0.01 mm (Boss, Hamburg, Germany). The normative data and percentile rankings were calculated. Gender differences in facial variables were analyzed using the t-test for independent samples (p<0.05). The index of sexual dimorphism (ISD) and percentage of sexual dimorphism were calculated for each facial measurement. Normative data for all facial anthropometric measurements were higher in males than in females, and the male average norms differed significantly from the female average norms (p<0.05). The highest index of sexual dimorphism (ISD) was found for the lower facial height, 1.120, for which the highest percentage of sexual dimorphism, 12.01%, was also found. The lowest ISD was found for intercanthal width, 1.022, accompanied by the lowest percentage of sexual dimorphism, 2.23%. The obtained results establish facial anthropometric norms for Kosovo Albanian adults. Sexual dimorphism was confirmed for each facial measurement.

  11. Regular physical exercise improves endothelial function in heart transplant recipients.

    PubMed

    Schmidt, Alice; Pleiner, Johannes; Bayerle-Eder, Michaela; Wiesinger, Günther F; Rödler, Suzanne; Quittan, Michael; Mayer, Gert; Wolzt, Michael

    2002-04-01

    Impaired endothelial function is detectable in heart transplant (HTX) recipients and regarded as a risk factor for coronary artery disease. We have studied whether endothelial function can be improved in HTX patients participating in a regular physical training program, as demonstrated in patients with chronic heart failure, hypertension and coronary artery disease. Male HTX patients and healthy, age-matched controls were studied. Seven HTX patients (age: 60 ± 6 yr; 6 ± 2 yr after HTX) participated in an outpatient training program; six HTX patients (age: 63 ± 8 yr; 7 ± 1 yr after HTX) had maintained a sedentary lifestyle without regular physical exercise since transplantation. A healthy control group comprised six subjects (age: 62 ± 6 yr). Vascular function was assessed by flow-mediated dilation of the brachial artery (FMD). Systemic haemodynamic responses to intravenous infusion of the endothelium-independent vasodilator sodium nitroprusside (SNP) and to NG-monomethyl-L-arginine (L-NMMA), an inhibitor of constitutive nitric oxide synthase, were also measured. Resting heart rate was significantly lower (p < 0.05) in healthy controls (66 ± 13) than in the HTX training group (83 ± 11) and in non-training HTX patients (91 ± 9); baseline blood pressure also tended to be lower in healthy subjects and in the training HTX patients. FMD was significantly higher (p < 0.05) in the control group (8.4 ± 2.2%) and in the training group (7.1 ± 2.4%) compared with non-training HTX patients (1.4 ± 0.8%). The response of systolic blood pressure (p = 0.08) and heart rate (p < 0.05) to L-NMMA was reduced in sedentary HTX patients compared with healthy controls, and the heart rate response to SNP was also impaired in sedentary HTX patients. Regular aerobic physical training restores vascular function in HTX patients, who are at considerable risk of developing vascular complications. This effect is demonstrable in conduit and systemic resistance arteries.

  12. Geochemical signature of NORM waste in Brazilian oil and gas industry.

    PubMed

    De-Paula-Costa, G T; Guerrante, I C; Costa-de-Moura, J; Amorim, F C

    2018-09-01

    The Brazilian Nuclear Energy Agency (CNEN) is responsible for all radioactive waste storage and disposal in the country. The storage of radioactive waste is carried out in facilities under CNEN regulation, and its disposal is operated, managed and controlled by the CNEN. Oil NORM (Naturally Occurring Radioactive Materials) in this article refers to waste coming from oil exploitation. Oil NORM has attracted much attention during the last decades, mostly because it is not possible to determine its primary source due to the actual absence of a regulatory control mechanism. There is no efficient regulatory tool that allows determining the origin of such NORM wastes, even among those facilities under regulatory control. This fact may encourage non-authorized radioactive material transportation, smuggling and terrorism. The aim of this project is to provide a geochemical signature for oil NORM waste using its naturally occurring isotopic composition to identify its origin. The method proposed here is the modeling of radioisotopes normally present in oil pipe contamination, such as 228Ac, 214Bi and 214Pb, analyzed by gamma spectrometry. The specific activities of elements from different decay series are plotted in a scatter diagram. This method was successfully tested with gamma spectrometry analyses of oil sludge NORM samples from four different sources, obtained from Petrobras reports for the Campos Basin, Brazil. Copyright © 2018 Elsevier Ltd. All rights reserved.
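
    The abstract does not give the modeling details; purely as an illustration of the scatter-diagram idea, the sketch below attributes a sample to the nearest known signature in the space of specific activities. All activity values are invented and match_source is a hypothetical helper.

        import numpy as np

        # Hypothetical specific activities (Bq/g) of (228Ac, 214Bi, 214Pb)
        # from gamma spectrometry for sludge of known origin; the premise
        # is that each source occupies its own region in a scatter of
        # activities drawn from the Th-232 and U-238 decay series.
        known = {
            "facility_A": np.array([1.2, 0.4, 0.45]),
            "facility_B": np.array([0.3, 1.1, 1.00]),
            "facility_C": np.array([0.8, 0.8, 0.75]),
        }

        def match_source(sample):
            """Attribute a sample to the nearest known signature."""
            return min(known, key=lambda k: np.linalg.norm(known[k] - sample))

        print(match_source(np.array([1.1, 0.5, 0.5])))  # -> facility_A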

  13. Oral-diadochokinesis rates across languages: English and Hebrew norms.

    PubMed

    Icht, Michal; Ben-David, Boaz M

    2014-01-01

    Oro-facial and speech motor control disorders represent a variety of speech and language pathologies. Early identification of such problems is important and carries clinical implications. A common and simple tool for gauging the presence and severity of speech motor control impairments is oral-diadochokinesis (oral-DDK). Surprisingly, norms for adult performance are missing from the literature. The goals of this study were: (1) to establish a norm for oral-DDK rate for (young to middle-age) adult English speakers, by collecting data from the literature (five studies, N=141); (2) to investigate the possible effect of language (and culture) on oral-DDK performance, by analyzing studies conducted in other languages (five studies, N=140) alongside the English norm; and (3) to find a new norm for adult Hebrew speakers, by testing 115 speakers. We first offer an English norm with a mean of 6.2 syllables/s (SD=.8) and a lower boundary of 5.4 syllables/s that can be used to indicate possible abnormality. Next, we found significant differences between the four tested languages (English, Portuguese, Farsi and Greek) in oral-DDK rates. The results suggest the need to set language- and culture-sensitive norms for the application of the oral-DDK task world-wide. Finally, we found the oral-DDK performance of adult Hebrew speakers to be 6.4 syllables/s (SD=.8), not significantly different from the English norm. This implies possible phonological similarities between English and Hebrew. We further note that no gender effects were found in our study. We recommend using oral-DDK as an important tool in the speech-language pathologist's arsenal. Yet application of this task should be done carefully, comparing individual performance to a set norm within the specific language. Readers will be able to: (1) identify the speech-language pathologist assessment process using the oral-DDK task, by comparing an individual performance to the present English norm, (2) describe the impact of language

  14. Perceived peer drinking norms and responsible drinking in UK university settings.

    PubMed

    Robinson, Eric; Jones, Andrew; Christiansen, Paul; Field, Matt

    2014-09-01

    Heavy drinking is common among students at UK universities. US students overestimate how much their peers drink and correcting this through the use of social norm messages may promote responsible drinking. We tested whether there is an association between perceived campus drinking norms and usual drinking behavior in UK university students and whether norm messages about responsible drinking correct normative misperceptions and increase students' intentions to drink responsibly. 1,020 UK university students took part in an online study. Participants were exposed to one of five message types: a descriptive norm, an injunctive norm, a descriptive and injunctive norm, or one of two control messages. Message credibility was assessed. Afterwards participants completed measures of intentions to drink responsibly and we measured usual drinking habits and perceptions of peer drinking. Perceptions of peer drinking were associated modestly with usual drinking behavior, whereby participants who believed other students drank responsibly also drank responsibly. Norm messages changed normative perceptions, but not in the target population of participants who underestimated responsible drinking in their peers at baseline. Norm messages did not increase intentions to drink responsibly and although based on accurate data, norm messages were not seen as credible. In this UK based study, although perceived social norms about peer drinking were associated with individual differences in drinking habits, campus wide norm messages about responsible drinking did not affect students' intentions to drink more responsibly. More research is required to determine if this approach can be applied to UK settings.

  15. PEPAB Norm Development (PEPABNRM)

    DTIC Science & Technology

    1991-01-09

    PEPAB Norm Development (PEPABNRM): annual report, Leslie Carol Montgomery and Patricia A. Deuster, January 9, 1991 (DTIC accession AD-A249 908). The recoverable front matter lists sections on methods, results, and discussion, and on the development of a computerized physical activity questionnaire, followed by references, tables, and figures. A surviving fragment of the results notes that blood lactate in the group after the AC test was over 11 mM, even with the 30-sec rest intervals during which lactate could be removed.

  16. NORM Management in the Oil & Gas Industry

    NASA Astrophysics Data System (ADS)

    Cowie, Michael; Mously, Khalid; Fageeha, Osama; Nassar, Rafat

    2008-08-01

    It has been established that Naturally Occurring Radioactive Material (NORM) accumulates at various locations along the oil/gas production process. Components such as wellheads, separation vessels, pumps, and other processing equipment can become NORM-contaminated, and NORM can accumulate in sludge and other waste media. Improper handling and disposal of NORM-contaminated equipment and waste can create a potential radiation hazard to workers and the environment. The Saudi Aramco Environmental Protection Department initiated a program to identify the extent, form and level of NORM contamination associated with the company's operations. Once identified, the challenge of managing operations with a NORM hazard was addressed in a manner that gave due consideration to worker and environmental protection as well as to operations' efficiency and productivity. The benefits of shared knowledge, practice and experience across the oil & gas industry are seen as key to the establishment of common guidance on NORM management. This paper outlines Saudi Aramco's experience in the development of a NORM management strategy and its goals of establishing common guidance throughout the oil and gas industry.

  17. Norm Block Sample Sizes: A Review of 17 Individually Administered Intelligence Tests

    ERIC Educational Resources Information Center

    Norfolk, Philip A.; Farmer, Ryan L.; Floyd, Randy G.; Woods, Isaac L.; Hawkins, Haley K.; Irby, Sarah M.

    2015-01-01

    The representativeness, recency, and size of norm samples strongly influence the accuracy of inferences drawn from their scores. Inadequate norm samples may lead to inflated or deflated scores for individuals and poorer prediction of developmental and academic outcomes. The purpose of this study was to apply Kranzler and Floyd's method for…

  18. Perceived eating norms and children's eating behaviour: An informational social influence account.

    PubMed

    Sharps, Maxine; Robinson, Eric

    2017-06-01

    There is initial evidence that beliefs about the eating behaviour of others (perceived eating norms) can influence children's vegetable consumption, but little research has examined the mechanisms explaining this effect. In two studies we aimed to replicate the effect that perceived eating norms have on children's vegetable consumption, and to explore mechanisms which may underlie the influence of perceived eating norms on children's vegetable consumption. Study 1 investigated whether children follow perceived eating norms due to a desire to maintain personal feelings of social acceptance. Study 2 investigated whether perceived eating norms influence eating behaviour because eating norms provide information which can remove uncertainty about how to behave. Across both studies children were exposed to vegetable consumption information of other children and their vegetable consumption was examined. In both studies children were influenced by perceived eating norms, eating more when led to believe others had eaten a large amount compared to when led to believe others had eaten no vegetables. In Study 1, children were influenced by a perceived eating norm regardless of whether they felt sure or unsure that other children accepted them. In Study 2, children were most influenced by a perceived eating norm if they were eating in a novel context in which it may have been uncertain how to behave, as opposed to an eating context that children had already encountered. Perceived eating norms may influence children's eating behaviour by removing uncertainty about how to behave, otherwise known as informational social influence. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.

  19. Simultaneous regularization method for the determination of radius distributions from experimental multiangle correlation functions

    NASA Astrophysics Data System (ADS)

    Buttgereit, R.; Roths, T.; Honerkamp, J.; Aberle, L. B.

    2001-10-01

    Dynamic light scattering experiments have become a powerful tool for investigating the dynamical properties of complex fluids. In many applications in both soft matter research and industry, so-called "real world" systems are of great interest. Here, the dilution of the investigated system often cannot be changed without introducing measurement artifacts, so one frequently has to deal with highly concentrated and turbid media. The investigation of such systems requires techniques that suppress the influence of multiple scattering, e.g., cross-correlation techniques. However, measurements on turbid as well as highly diluted media lead to data with a low signal-to-noise ratio, which complicates data analysis and leads to unreliable results. In this article a multiangle regularization method is discussed which copes with the difficulties arising from such samples and greatly enhances the quality of the estimated solution. In order to demonstrate the efficiency of this multiangle regularization method, we apply it to cross-correlation functions measured on highly turbid samples.
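
    As a rough sketch of what a simultaneous multiangle inversion involves (not the authors' algorithm), the example below stacks exponential kernels from several scattering vectors into one linear system and recovers a non-negative, smoothness-regularized radius distribution with scipy's nnls. The Stokes-Einstein constant and all instrument parameters are assumed values.

        import numpy as np
        from scipy.optimize import nnls

        # g1(tau; q) = sum_j w_j exp(-q^2 D(r_j) tau) with the diffusion
        # coefficient D(r) = kT / (6 pi eta r); all angles share a single
        # radius distribution w, so their kernels can be stacked.
        kT_6pieta = 2.15e-19        # kT/(6 pi eta), water ~20 C, m^3/s (assumed)
        radii = np.logspace(-8.5, -6.5, 40)     # ~3 nm .. 300 nm grid
        taus = np.logspace(-6, -1, 60)          # correlation lags, s
        qs = [9.4e6, 1.87e7, 2.6e7]             # scattering vectors, 1/m

        rng = np.random.default_rng(5)
        w_true = np.exp(-0.5 * (np.log(radii / 5e-8) / 0.3) ** 2)
        w_true /= w_true.sum()

        blocks, data = [], []
        for q in qs:
            A = np.exp(-np.outer(taus, q**2 * kT_6pieta / radii))
            blocks.append(A)
            data.append(A @ w_true + 1e-3 * rng.standard_normal(len(taus)))

        # Tikhonov smoothing via second differences; non-negativity of the
        # distribution is enforced by nnls on the augmented system.
        L = np.diff(np.eye(len(radii)), n=2, axis=0)
        A_aug = np.vstack([np.vstack(blocks), 0.05 * L])
        g_aug = np.concatenate([np.concatenate(data), np.zeros(L.shape[0])])
        w_est, _ = nnls(A_aug, g_aug)
        print(f"recovered peak radius ~ {radii[np.argmax(w_est)]:.2e} m")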

  20. Gender Norms and Family Planning Practices Among Men in Western Jamaica.

    PubMed

    Walcott, Melonie M; Ehiri, John; Kempf, Mirjam C; Funkhouser, Ellen; Bakhoya, Marion; Aung, Maung; Zhang, Kui; Jolly, Pauline E

    2015-07-01

    The objective of this study was to identify the association between gender norms and family planning practices among men in Western Jamaica. A cross-sectional survey of 549 men aged 19 to 54 years attending or visiting four government-operated hospitals was conducted in 2011. Logistic regression models were used to identify factors associated with taking steps to prevent unwanted pregnancy, intention to have a large family (three or more children), and fathering children with multiple women. Adjusted odds ratios (AORs) and 95% confidence intervals (CIs) were calculated from the models. Reduced odds of taking steps to prevent unwanted pregnancy were observed among men with moderate (AOR = 0.5; 95% CI = 0.3-0.8) and high (AOR = 0.3; 95% CI = 0.1-0.6) support for inequitable gender norms. Desiring a large family was associated with moderate (AOR = 2.0; 95% CI = 1.3-2.5) and high (AOR = 2.6; 95% CI = 1.5-4.3) macho scores. For men with two or more children (41%), there were increased odds of fathering children with multiple women among those with moderate (AOR = 2.1; 95% CI = 1.0-4.4) and high (AOR = 2.4; 95% CI = 1.1-5.6) support for masculinity norms. Support for inequitable gender norms was associated with reduced odds of taking steps to prevent unwanted pregnancy, while support for masculinity norms was associated with desiring a large family and fathering children with multiple women. These findings highlight the importance of including men and gender norms in family planning programs in Jamaica. © The Author(s) 2014.

  1. Perceived social norms and eating behaviour: An evaluation of studies and future directions.

    PubMed

    Robinson, Eric

    2015-12-01

    Social norms refer to what most people typically do or approve of. There has been some suggestion that perceived social norms may be an important influence on eating behaviour. We and others have shown that perceived social norms relating to very specific contexts can influence food intake (the amount of food consumed in a single sitting) in those contexts; these studies have predominantly sampled young female adults. Less research has examined whether perceived social norms predict dietary behaviour (the types of food people eat on a day to day basis); here, most evidence comes from cross-sectional studies, which have a number of limitations. A small number of intervention studies have started to explore whether perceived social norms can be used to encourage healthier eating with mixed results. The influence that perceived social norms have on objective measures of eating behaviour now needs to be examined using longitudinal methods in order to determine if social norms are an important influence on eating behaviour and/or can be used to promote meaningful behaviour change. Copyright © 2015 Elsevier Inc. All rights reserved.

  2. Lessons learned from public health campaigns and applied to anti-DWI norms development

    DOT National Transportation Integrated Search

    1995-05-01

    The purpose of this study was to examine norms development in past public health campaigns to direct lessons learned from those efforts to future anti-DWI programming. Three campaigns were selected for a multiple case study. The anti-smoking, anti-...

  3. An in situ/ex vivo comparison of the ability of regular and light colas to induce enamel wear when erosion is combined with abrasion.

    PubMed

    Rios, Daniela; Santos, Flávia Cardoso Zaidan; Honório, Heitor Marques; Magalhães, Ana Carolina; Wang, Linda; de Andrade Moreira Machado, Maria Aparecida; Buzalaf, Marilia Afonso Rabelo

    2011-03-01

    To evaluate whether the type of cola drink (regular or diet) could influence the wear of enamel subjected to erosion followed by brushing abrasion. Ten volunteers wore intraoral devices that each had eight bovine enamel blocks divided into four groups: ER, erosion with regular cola; EAR, erosion with regular cola plus abrasion; EL, erosion with light cola; and EAL, erosion with light cola plus abrasion. Each day for 1 week, half of each device was immersed in regular cola for 5 minutes. Then, two blocks were brushed using a fluoridated toothpaste and electric toothbrush for 30 seconds four times daily. Immediately after, the other half of the device was subjected to the same procedure using a light cola. The pH, calcium, phosphorus, and fluoride concentrations of the colas were analyzed using standard procedures. Enamel alterations were measured by profilometry. Data were tested using two-way ANOVA and Bonferroni test (P<.05). Regarding chemical characteristics, light cola presented pH 3.0, 13.7 mg Ca/L, 15.5 mg P/L, and 0.31 mg F/L, while regular cola had pH 2.6, 32.1 mg Ca/L, 18.1 mg P/L, and 0.26 mg F/L. The light cola promoted less enamel loss (EL, 0.36 μm; EAL, 0.39 μm) than its regular counterpart (ER, 0.72 μm; EAR, 0.95 μm) for both conditions. There was not a significant difference (P>.05) between erosion and erosion plus abrasion for light cola. However, for regular cola, erosion plus abrasion resulted in higher enamel loss than erosion alone. The data suggest that light cola promoted less enamel wear even when erosion was followed by brushing abrasion.

  4. Regularization Reconstruction Method for Imaging Problems in Electrical Capacitance Tomography

    NASA Astrophysics Data System (ADS)

    Chu, Pan; Lei, Jing

    2017-11-01

    Electrical capacitance tomography (ECT) is regarded as a powerful visualization measurement technique for parametric measurement in multiphase flow systems. The inversion task in ECT is an ill-posed inverse problem, and seeking an efficient numerical method to improve the precision of the reconstructed images is important for practical measurements. Building on the Tikhonov regularization (TR) methodology, this paper puts forward a loss function that emphasizes the robustness of the estimation and the low-rank property of the imaging targets, converting the solution of the inverse problem in the ECT reconstruction task into a minimization problem. Inspired by the split Bregman (SB) algorithm, an iteration scheme is developed for solving the proposed loss function. Numerical experiment results validate that the proposed inversion method not only reconstructs the fine structures of the imaging targets, but also improves the robustness.
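
    A minimal split Bregman-style sketch for a regularized ECT-like problem may clarify the alternation: a quadratic data step alternates with soft-thresholding on a sparsifying transform and a Bregman variable update. The random sensitivity matrix, the 1D gradient operator, and all parameters are illustrative assumptions; the authors' actual loss additionally exploits a low-rank prior, which is omitted here.

```python
import numpy as np

# Toy split Bregman iteration for min 0.5||S g - c||^2 + mu ||D g||_1,
# with a random matrix S standing in for the real ECT forward model.
rng = np.random.default_rng(0)
n_meas, n_pix = 66, 256                  # e.g. 12-electrode ECT, 16x16 image
S = rng.standard_normal((n_meas, n_pix)) / np.sqrt(n_meas)
g_true = np.zeros(n_pix); g_true[60:80] = 1.0    # piecewise-constant target
c = S @ g_true + 1e-3 * rng.standard_normal(n_meas)

D = np.diff(np.eye(n_pix), 1, axis=0)    # 1D gradient as a simple sparsifier
mu, rho = 5e-3, 1.0
d = np.zeros(D.shape[0]); v = np.zeros_like(d)
lhs = S.T @ S + rho * D.T @ D            # fixed normal-equation matrix

def shrink(z, t):
    """Soft-thresholding, the proximal map of the l1 norm."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

g = np.zeros(n_pix)
for _ in range(100):
    g = np.linalg.solve(lhs, S.T @ c + rho * D.T @ (d - v))  # quadratic step
    d = shrink(D @ g + v, mu / rho)                          # l1 proximal step
    v = v + D @ g - d                                        # Bregman update
```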

  5. Gait symmetry and regularity in transfemoral amputees assessed by trunk accelerations

    PubMed Central

    2010-01-01

    Background The aim of this study was to evaluate a method based on a single accelerometer for the assessment of gait symmetry and regularity in subjects wearing lower limb prostheses. Methods Ten transfemoral amputees and ten healthy control subjects were studied. For the purpose of this study, subjects wore a triaxial accelerometer on their thorax, and foot insoles. Subjects were asked to walk straight ahead for 70 m at their natural speed, and at a lower and faster speed. Indices of step and stride regularity (Ad1 and Ad2, respectively) were obtained from the autocorrelation coefficients computed from the three acceleration components. Step and stride durations were calculated from the plantar pressure data and were used to compute two reference indices (SI1 and SI2) for step and stride regularity. Results Regression analysis showed that Ad1 correlates well with SI1 (R2 up to 0.74), and Ad2 correlates well with SI2 (R2 up to 0.52). A ROC analysis showed that Ad1 and Ad2 generally have good sensitivity and specificity in classifying an amputee's walking trial as having normal or pathologic step or stride regularity, as defined by means of the reference indices SI1 and SI2. In particular, the antero-posterior component of Ad1 and the vertical component of Ad2 had a sensitivity of 90.6% and 87.2%, and a specificity of 92.3% and 81.8%, respectively. Conclusions The use of a simple accelerometer, whose components can be analyzed by the autocorrelation function method, is adequate for the assessment of gait symmetry and regularity in transfemoral amputees. PMID:20085653
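
    A minimal sketch of the autocorrelation-based indices, assuming a synthetic vertical trunk acceleration with known step and stride periods; the unbiased autocorrelation normalization is the common choice in this literature and not necessarily the authors' exact procedure.

```python
import numpy as np

# Step/stride regularity from a trunk acceleration signal via the unbiased
# autocorrelation coefficient; signal and lag choices are illustrative.
def autocorr_unbiased(x):
    x = x - x.mean()
    n = x.size
    ac = np.correlate(x, x, mode="full")[n - 1:]   # lags 0..n-1
    ac /= (n - np.arange(n))                        # unbiased normalization
    return ac / ac[0]                               # coefficient form

fs = 100.0                                # sampling rate (Hz), assumed
t = np.arange(0, 30, 1 / fs)
# Synthetic vertical trunk acceleration: stride period 1.1 s, step = half stride.
acc = np.sin(2 * np.pi * t / 0.55) + 0.3 * np.sin(2 * np.pi * t / 1.1)

ac = autocorr_unbiased(acc)
step, stride = int(0.55 * fs), int(1.1 * fs)
Ad1 = ac[step]      # step regularity: autocorrelation at one step lag
Ad2 = ac[stride]    # stride regularity: autocorrelation at one stride lag
```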

  6. Television Content Viewing Patterns: Some Clues from Societal Norms.

    ERIC Educational Resources Information Center

    McDonald, Daniel G.; Glynn, Carroll J.

    Focusing on how television viewing fits into a general model of consumer consumption patterns, a study examined (1) the extent to which the viewing of certain television content can be considered a "norm" of society, (2) similarities and differences between the norms for adults and those for children, and (3) some of the antecedents of…

  7. An accurate method to measure alpha-emitting natural radionuclides in atmospheric filters: Application in two NORM industries

    NASA Astrophysics Data System (ADS)

    Lozano, R. L.; Bolívar, J. P.; San Miguel, E. G.; García-Tenorio, R.; Gázquez, M. J.

    2011-12-01

    In this work, an accurate method for the measurement of natural alpha-emitting radionuclides from aerosols collected in air filters is presented and discussed in detail. The knowledge of the levels of several natural alpha-emitting radionuclides (238U, 234U, 232Th, 230Th, 228Th, 226Ra and 210Po) in atmospheric aerosols is essential not only for a better understanding of the several atmospheric processes and changes, but also for a proper evaluation of the potential doses, which can inadvertently be received by the population via inhalation. The proposed method takes into account the presence of intrinsic amounts of these radionuclides in the matrices of the quartz filters used, as well as the possible variation in the humidity of the filters throughout the collection process. In both cases, the corrections necessary in order to redress these levels have been evaluated and parameterized. Furthermore, a detailed study has been performed into the optimisation of the volume of air to be sampled in order to increase the accuracy in the determination of the radionuclides. The method as a whole has been applied for the determination of the activity concentrations of U- and Th-isotopes in aerosols collected at two NORM (Naturally Occurring Radioactive Material) industries located in the southwest of Spain. Based on the levels found, a conservative estimation has been performed to yield the additional committed effective doses to which the workers are potentially susceptible due to inhalation of anthropogenic material present in the environment of these two NORM industries.

  8. Descriptive Drinking Norms: For Whom Does Reference Group Matter?*

    PubMed Central

    Larimer, Mary E.; Neighbors, Clayton; LaBrie, Joseph W.; Atkins, David C.; Lewis, Melissa A.; Lee, Christine M.; Kilmer, Jason R.; Kaysen, Debra L.; Pedersen, Eric R.; Montoya, Heidi; Hodge, Kimberley; Desai, Sruti; Hummer, Justin F.; Walter, Theresa

    2011-01-01

    Objective: Perceived descriptive drinking norms often differ from actual norms and are positively related to personal consumption. However, it is not clear how normative perceptions vary with specificity of the reference group. Are drinking norms more accurate and more closely related to drinking behavior as reference group specificity increases? Do these relationships vary as a function of participant demographics? The present study examined the relationship between perceived descriptive norms and drinking behavior by ethnicity (Asian or White), sex, and fraternity/sorority status. Method: Participants were 2,699 (58% female) White (75%) or Asian (25%) undergraduates from two universities who reported their own alcohol use and perceived descriptive norms for eight reference groups: "typical student"; same sex, ethnicity, or fraternity/sorority status; and all combinations of these three factors. Results: Participants generally reported the highest perceived norms for the most distal reference group (typical student), with perceptions becoming more accurate as individuals' similarity to the reference group increased. Despite increased accuracy, participants perceived that all reference groups drank more than was actually the case. Across specific subgroups (fraternity/sorority members and men) different patterns emerged. Fraternity/sorority members reliably reported higher estimates of drinking for reference groups that included fraternity/sorority status, and, to a lesser extent, men reported higher estimates for reference groups that included men. Conclusions: The results suggest that interventions targeting normative misperceptions may need to provide feedback based on participant demography or group membership. Although reference group-specific feedback may be important for some subgroups, typical student feedback provides the largest normative discrepancy for the majority of students. PMID:21906510

  9. Development and testing of the Youth Alcohol Norms Survey (YANS) instrument to measure youth alcohol norms and psychosocial influences

    PubMed Central

    Maycock, Bruce; Hildebrand, Janina; Zhao, Yun; Allsop, Steve; Lobo, Roanna; Howat, Peter

    2018-01-01

    Objectives This study aimed to develop and validate an online instrument to: (1) identify common alcohol-related social influences, norms and beliefs among adolescents; (2) clarify the process and pathways through which proalcohol norms are transmitted to adolescents; (3) describe the characteristics of social connections that contribute to the transmission of alcohol norms; and (4) identify the influence of alcohol marketing on adolescent norm development. Setting The online Youth Alcohol Norms Survey (YANS) was administered in secondary schools in Western Australia. Participants Using a 2-week test–retest format, the YANS was administered to secondary school students (n=481, age=13–17 years, female 309, 64.2%). Primary and secondary outcome measures The development of the YANS was guided by social cognitive theory and comprised a systematic multistage process including evaluation of content and face validity. A 2-week test–retest format was employed. Exploratory factor analysis was conducted to determine the underlying factor structure of the instrument. Test–retest reliability was examined using intraclass correlation coefficient (ICC) and Cohen’s kappa. Results A five-factor structure with meaningful components and robust factorial loads was identified, and the five factors were labelled as ‘individual attitudes and beliefs’, ‘peer and community identity’, ‘sibling influences’, ‘school and community connectedness’ and ‘injunctive norms’, respectively. The instrument demonstrated stability across the test–retest procedure (ICC=0.68–0.88, Cohen’s kappa coefficient=0.69) for most variables. Conclusions The results support the reliability and factorial validity of this instrument. The YANS presents a promising tool, which enables comprehensive assessment of reciprocal individual, behavioural and environmental factors that influence alcohol-related norms among adolescents. PMID:29764872

  10. The Fast Multipole Method and Fourier Convolution for the Solution of Acoustic Scattering on Regular Volumetric Grids

    PubMed Central

    Hesford, Andrew J.; Waag, Robert C.

    2010-01-01

    The fast multipole method (FMM) is applied to the solution of large-scale, three-dimensional acoustic scattering problems involving inhomogeneous objects defined on a regular grid. The grid arrangement is especially well suited to applications in which the scattering geometry is not known a priori and is reconstructed on a regular grid using iterative inverse scattering algorithms or other imaging techniques. The regular structure of unknown scattering elements facilitates a dramatic reduction in the amount of storage and computation required for the FMM, both of which scale linearly with the number of scattering elements. In particular, the use of fast Fourier transforms to compute Green's function convolutions required for neighboring interactions lowers the often-significant cost of finest-level FMM computations and helps mitigate the dependence of FMM cost on finest-level box size. Numerical results demonstrate the efficiency of the composite method as the number of scattering elements in each finest-level box is increased. PMID:20835366

  11. The fast multipole method and Fourier convolution for the solution of acoustic scattering on regular volumetric grids

    NASA Astrophysics Data System (ADS)

    Hesford, Andrew J.; Waag, Robert C.

    2010-10-01

    The fast multipole method (FMM) is applied to the solution of large-scale, three-dimensional acoustic scattering problems involving inhomogeneous objects defined on a regular grid. The grid arrangement is especially well suited to applications in which the scattering geometry is not known a priori and is reconstructed on a regular grid using iterative inverse scattering algorithms or other imaging techniques. The regular structure of unknown scattering elements facilitates a dramatic reduction in the amount of storage and computation required for the FMM, both of which scale linearly with the number of scattering elements. In particular, the use of fast Fourier transforms to compute Green's function convolutions required for neighboring interactions lowers the often-significant cost of finest-level FMM computations and helps mitigate the dependence of FMM cost on finest-level box size. Numerical results demonstrate the efficiency of the composite method as the number of scattering elements in each finest-level box is increased.

  12. The Fast Multipole Method and Fourier Convolution for the Solution of Acoustic Scattering on Regular Volumetric Grids.

    PubMed

    Hesford, Andrew J; Waag, Robert C

    2010-10-20

    The fast multipole method (FMM) is applied to the solution of large-scale, three-dimensional acoustic scattering problems involving inhomogeneous objects defined on a regular grid. The grid arrangement is especially well suited to applications in which the scattering geometry is not known a priori and is reconstructed on a regular grid using iterative inverse scattering algorithms or other imaging techniques. The regular structure of unknown scattering elements facilitates a dramatic reduction in the amount of storage and computation required for the FMM, both of which scale linearly with the number of scattering elements. In particular, the use of fast Fourier transforms to compute Green's function convolutions required for neighboring interactions lowers the often-significant cost of finest-level FMM computations and helps mitigate the dependence of FMM cost on finest-level box size. Numerical results demonstrate the efficiency of the composite method as the number of scattering elements in each finest-level box is increased.
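
    Since the three records above describe the same FFT-accelerated convolution, a toy sketch may clarify the core trick: on a regular grid, convolving source samples with the free-space Helmholtz Green's function reduces to an FFT product after zero-padding. The wavenumber, grid spacing, and crude handling of the r = 0 self-term are simplifying assumptions, not the authors' implementation.

```python
import numpy as np

# FFT-based Green's function convolution on a regular n^3 grid: the padded
# kernel spectrum can be precomputed, making each application O(N log N).
k = 2 * np.pi                 # wavenumber
n, h = 32, 0.05               # grid points per axis, grid spacing
p = 2 * n                     # padded size: circular conv == linear conv

off = np.arange(p)
off = np.where(off < n, off, off - p) * h      # signed offsets, wrapped
OX, OY, OZ = np.meshgrid(off, off, off, indexing="ij")
R = np.sqrt(OX**2 + OY**2 + OZ**2)
G = np.exp(1j * k * R) / (4 * np.pi * np.where(R == 0.0, h, R))  # crude self-term

u = np.zeros((n, n, n), dtype=complex)         # contrast-source samples
u[n // 2, n // 2, n // 2] = 1.0                # a single point source

Gf = np.fft.fftn(G)                            # kernel spectrum (precomputable)
Uf = np.fft.fftn(u, s=(p, p, p))               # zero-padded source spectrum
field = np.fft.ifftn(Gf * Uf)[:n, :n, :n] * h**3   # field on the original grid
```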

  13. Brain Activity of Regular and Dyslexic Readers while Reading Hebrew as Compared to English Sentences

    ERIC Educational Resources Information Center

    Breznitz, Zvia; Oren, Revital; Shaul, Shelley

    2004-01-01

    The aim of the present study was to examine differences among "regular" and dyslexic adult bilingual readers when processing reading and reading related skills in their first (L1 Hebrew) and second (L2 English) languages. Brain activity during reading Hebrew and English unexpected sentence endings was also studied. Behavioral and…

  14. 1 / n Expansion for the Number of Matchings on Regular Graphs and Monomer-Dimer Entropy

    NASA Astrophysics Data System (ADS)

    Pernici, Mario

    2017-08-01

    Using a 1/n expansion, that is, an expansion in descending powers of n, for the number of matchings in regular graphs with 2n vertices, we study the monomer-dimer entropy for two classes of graphs. We study the difference between the extensive monomer-dimer entropy of a random r-regular graph G (bipartite or not) with 2n vertices and the average extensive entropy of r-regular graphs with 2n vertices, in the limit n → ∞. We find a series expansion for it in the numbers of cycles; with probability 1 it converges for dimer density p < 1 and, for G bipartite, it diverges as |ln(1-p)| for p → 1. In the case of regular lattices, we similarly expand the difference between the specific monomer-dimer entropy on a lattice and the one on the Bethe lattice; we write down its Taylor expansion in powers of p through order 10, expressed in terms of the number of totally reducible walks which are not tree-like. We prove through order 6 that its expansion coefficients in powers of p are non-negative.

  15. Uranium Mining and Norm in North America-Some Perspectives on Occupational Radiation Exposure.

    PubMed

    Brown, Steven H; Chambers, Douglas B

    2017-07-01

    All soils and rocks contain naturally occurring radioactive materials (NORM). Many ores and raw materials contain relatively elevated levels of natural radionuclides, and processing such materials can further increase the concentrations of naturally occurring radionuclides. In the U.S., these materials are sometimes referred to as technologically-enhanced naturally occurring radioactive materials (TENORM). Examples of NORM minerals include uranium ores, monazite (a source of rare earth minerals), and phosphate rock used to produce phosphate fertilizer. The processing of these materials has the potential to result in above-background radiation exposure to workers. Following a brief review of the sources and potential for worker exposure from NORM in these varied industries, this paper will then present an overview of uranium mining and recovery in North America, including discussion on the mining methods currently being used for both conventional (underground, open pit) and in situ leach (ISL), also referred to as In Situ Recovery (ISR), and the production of NORM materials and wastes associated with these uranium recovery methods. The radiological composition of the NORM products and wastes produced and recent data on radiological exposures received by workers in the North American uranium recovery industry are then described. The paper also identifies the responsible government agencies in the U.S. and Canada assigned the authority to regulate and control occupational exposure from these NORM materials.

  16. [Cleanliness Norms 1964-1975].

    PubMed

    Noelle-Neumann, E

    1976-01-01

    In 1964 the Institut für Demoskopie Allensbach made a first survey taking stock of norms concerning cleanliness in the Federal Republic of Germany. At that time, 78% of respondents thought that the vogue among young people of cultivating an unkempt look was past or on the wane (Table 1). Today we know that this fashion was an indicator of more serious desires for change in many different areas like politics, sexual morality, education and that its high point was still to come. In the fall of 1975 a second survey, modelled on the one of 1964, was conducted. Again, it concentrated on norms, not on behavior. As expected, norms have changed over this period but not in a one-directional or simple manner. In general, people are much more large-minded about children's looks: neat, clean school-dress, properly combed hair, clean shoes, all this and also holding their things in order has become less important in 1975 (Table 2). To carry a clean handkerchief is becoming old-fashioned (Table 3). On the other hand, principles of bringing up children have not loosened concerning personal hygiene - brushing one's teeth, washing hands, feet, and neck, clean fingernails (Table 4). On one item related to protection of the environment, namely throwing around waste paper, standards have even become more strict (Table 5). With regard to school-leavers, norms of personal hygiene have generally become more strict (Table 6). As living standards have gone up and the number of full bathrooms has risen from 42% to 75% of households, norms of personal hygiene have also increased: one warm bath a week seemed enough to 56% of adults in 1964, but to only 32% in 1975 (Table 7). Also standards for changing underwear have changed a lot: in 1964 only 12% of respondents said "every day", in 1975 48% said so (Table 8). Even more stringent norms are applied to young women (Tables 9/10). For comparison: 1964 there were automatic washing machines in 16%, 1975 in 79% of households. Answers to questions

  17. Relationships among L1 Print Exposure and Early L1 Literacy Skills, L2 Aptitude, and L2 Proficiency

    ERIC Educational Resources Information Center

    Sparks, Richard L.; Patton, Jon; Ganschow, Leonore; Humbach, Nancy

    2012-01-01

    Authors examined the relationship between individual differences in L1 print exposure and differences in early L1 skills and later L2 aptitude, L2 proficiency, and L2 classroom achievement. Participants were administered measures of L1 word decoding, spelling, phonemic awareness, reading comprehension, receptive vocabulary, and listening…

  18. Native-likeness in second language lexical categorization reflects individual language history and linguistic community norms.

    PubMed

    Zinszer, Benjamin D; Malt, Barbara C; Ameel, Eef; Li, Ping

    2014-01-01

    Second language learners face a dual challenge in vocabulary learning: First, they must learn new names for the 100s of common objects that they encounter every day. Second, after some time, they discover that these names do not generalize according to the same rules used in their first language. Lexical categories frequently differ between languages (Malt et al., 1999), and successful language learning requires that bilinguals learn not just new words but new patterns for labeling objects. In the present study, Chinese learners of English with varying language histories and resident in two different language settings (Beijing, China and State College, PA, USA) named 67 photographs of common serving dishes (e.g., cups, plates, and bowls) in both Chinese and English. Participants' response patterns were quantified in terms of similarity to the responses of functionally monolingual native speakers of Chinese and English and showed the cross-language convergence previously observed in simultaneous bilinguals (Ameel et al., 2005). For English, bilinguals' names for each individual stimulus were also compared to the dominant name generated by the native speakers for the object. Using two statistical models, we disentangle the effects of several highly interactive variables from bilinguals' language histories and the naming norms of the native speaker community to predict inter-personal and inter-item variation in L2 (English) native-likeness. We find only a modest age of earliest exposure effect on L2 category native-likeness, but importantly, we find that classroom instruction in L2 negatively impacts L2 category native-likeness, even after significant immersion experience. We also identify a significant role of both L1 and L2 norms in bilinguals' L2 picture naming responses.

  19. Native-likeness in second language lexical categorization reflects individual language history and linguistic community norms

    PubMed Central

    Zinszer, Benjamin D.; Malt, Barbara C.; Ameel, Eef; Li, Ping

    2014-01-01

    Second language learners face a dual challenge in vocabulary learning: First, they must learn new names for the 100s of common objects that they encounter every day. Second, after some time, they discover that these names do not generalize according to the same rules used in their first language. Lexical categories frequently differ between languages (Malt et al., 1999), and successful language learning requires that bilinguals learn not just new words but new patterns for labeling objects. In the present study, Chinese learners of English with varying language histories and resident in two different language settings (Beijing, China and State College, PA, USA) named 67 photographs of common serving dishes (e.g., cups, plates, and bowls) in both Chinese and English. Participants’ response patterns were quantified in terms of similarity to the responses of functionally monolingual native speakers of Chinese and English and showed the cross-language convergence previously observed in simultaneous bilinguals (Ameel et al., 2005). For English, bilinguals’ names for each individual stimulus were also compared to the dominant name generated by the native speakers for the object. Using two statistical models, we disentangle the effects of several highly interactive variables from bilinguals’ language histories and the naming norms of the native speaker community to predict inter-personal and inter-item variation in L2 (English) native-likeness. We find only a modest age of earliest exposure effect on L2 category native-likeness, but importantly, we find that classroom instruction in L2 negatively impacts L2 category native-likeness, even after significant immersion experience. We also identify a significant role of both L1 and L2 norms in bilinguals’ L2 picture naming responses. PMID:25386149

  20. Descriptive and Injunctive Network Norms Associated with Non Medical Use of Prescription Drugs among Homeless Youth

    PubMed Central

    Barman-Adhikari, Anamika; Al Tayyib, Alia; Begun, Stephanie; Bowen, Elizabeth; Rice, Eric

    2016-01-01

    Background Nonmedical use of prescription drugs (NMUPD) among youth and young adults is being increasingly recognized as a significant public health problem. Homeless youth in particular are more likely to engage in NMUPD compared to housed youth. Studies suggest that network norms are strongly associated with a range of substance use behaviors. However, evidence regarding the association between network norms and NMUPD is scarce. We sought to understand whether social network norms of NMUPD are associated with engagement in NMUPD among homeless youth. Methods 1,046 homeless youth were recruited from three drop-in centers in Los Angeles, CA and were interviewed regarding their individual and social network characteristics. Multivariate logistic regression was employed to evaluate the significance of associations between social norms (descriptive and injunctive) and self-reported NMUPD. Results Approximately 25% of youth reported past 30-day NMUPD. However, more youth (32.28%) believed that their network members engage in NMUPD, perhaps suggesting some pluralistic ignorance bias. Both descriptive and injunctive norms were associated with self-reported NMUPD among homeless youth. However, these varied by network type: the presence of NMUPD-engaged street-based and home-based peers (a descriptive norm) increased the likelihood of NMUPD, while objections from family members (an injunctive norm) decreased it. Conclusions Our findings suggest that, like other substance use behaviors, NMUPD is also influenced by youths’ perceptions of the behaviors of their social network members. Therefore, prevention and intervention programs designed to influence NMUPD might benefit from taking a social network norms approach. PMID:27563741

  1. A Constructive Approach to Regularity of Lagrangian Trajectories for Incompressible Euler Flow in a Bounded Domain

    NASA Astrophysics Data System (ADS)

    Besse, Nicolas; Frisch, Uriel

    2017-04-01

    The 3D incompressible Euler equations are an important research topic in the mathematical study of fluid dynamics. Not only is the global regularity for smooth initial data an open issue, but the behaviour may also depend on the presence or absence of boundaries. For a good understanding, it is crucial to carry out, besides mathematical studies, high-accuracy and well-resolved numerical exploration. Such studies can be very demanding in computational resources, but recently it has been shown that very substantial gains can be achieved first, by using Cauchy's Lagrangian formulation of the Euler equations and second, by taking advantage of analyticity results of the Lagrangian trajectories for flows whose initial vorticity is Hölder-continuous. The latter has been known for about 20 years (Serfati in J Math Pures Appl 74:95-104, 1995), but the combination of the two, which makes use of recursion relations among time-Taylor coefficients to obtain constructively the time-Taylor series of the Lagrangian map, has been achieved only recently (Frisch and Zheligovsky in Commun Math Phys 326:499-505, 2014; Podvigina et al. in J Comput Phys 306:320-342, 2016 and references therein). Here we extend this methodology to incompressible Euler flow in an impermeable bounded domain whose boundary may be either analytic or have a regularity between indefinite differentiability and analyticity. Non-constructive regularity results for these cases have already been obtained by Glass et al. (Ann Sci Éc Norm Sup 45:1-51, 2012). Using the invariance of the boundary under the Lagrangian flow, we establish novel recursion relations that include contributions from the boundary. This leads to a constructive proof of time-analyticity of the Lagrangian trajectories with analytic boundaries, which can then be used subsequently for the design of a very high-order Cauchy-Lagrangian method.
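
    In schematic form (our notation, not necessarily the paper's), the constructive approach rests on expanding the Lagrangian map in a time-Taylor series and substituting it into the Cauchy invariants formulation of the Euler equations:

```latex
\[
  \mathbf{X}(\mathbf{a},t) \;=\; \mathbf{a} + \sum_{s \ge 1} \mathbf{X}^{(s)}(\mathbf{a})\, t^{s},
\]
\[
  \sum_{k=1}^{3} \nabla_{\mathbf{a}} \dot{X}_{k} \times \nabla_{\mathbf{a}} X_{k} \;=\; \boldsymbol{\omega}_{0}(\mathbf{a}),
  \qquad
  \det\big(\nabla_{\mathbf{a}} \mathbf{X}\big) \;=\; 1.
\]
```

    Matching powers of t order by order turns the invariants and the incompressibility constraint into recursion relations determining each coefficient from lower-order ones; in the bounded-domain setting described above, the boundary contributes additional terms to these recursions.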

  2. Surface-based prostate registration with biomechanical regularization

    NASA Astrophysics Data System (ADS)

    van de Ven, Wendy J. M.; Hu, Yipeng; Barentsz, Jelle O.; Karssemeijer, Nico; Barratt, Dean; Huisman, Henkjan J.

    2013-03-01

    Adding MR-derived information to standard transrectal ultrasound (TRUS) images for guiding prostate biopsy is of substantial clinical interest. A tumor visible on MR images can be projected on ultrasound by using MRUS registration. A common approach is to use surface-based registration. We hypothesize that biomechanical modeling will better control deformation inside the prostate than a regular surface-based registration method. We developed a novel method by extending a surface-based registration with finite element (FE) simulation to better predict internal deformation of the prostate. For each of six patients, a tetrahedral mesh was constructed from the manual prostate segmentation. Next, the internal prostate deformation was simulated using the derived radial surface displacement as boundary condition. The deformation field within the gland was calculated using the predicted FE node displacements and thin-plate spline interpolation. We tested our method on MR guided MR biopsy imaging data, as landmarks can easily be identified on MR images. For evaluation of the registration accuracy we used 45 anatomical landmarks located in all regions of the prostate. Our results show that the median target registration error of a surface-based registration with biomechanical regularization is 1.88 mm, which is significantly different from 2.61 mm without biomechanical regularization. We can conclude that biomechanical FE modeling has the potential to improve the accuracy of multimodal prostate registration when comparing it to regular surface-based registration.
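
    The interpolation step can be sketched as below, with random stand-ins for the FE node positions and their predicted displacements; scipy's radial basis interpolator with a thin-plate-spline kernel plays the role of the thin-plate spline interpolation described above.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Extend predicted FE node displacements to a dense deformation field with
# thin-plate splines. Node data here are random stand-ins for FE output.
rng = np.random.default_rng(0)
nodes = rng.uniform(0, 50, size=(200, 3))     # FE mesh node positions (mm)
disp = rng.normal(0, 1.5, size=(200, 3))      # predicted node displacements (mm)

tps = RBFInterpolator(nodes, disp, kernel="thin_plate_spline")

# Evaluate the deformation field on a regular voxel grid inside the gland.
xs = np.linspace(0, 50, 25)
grid = np.stack(np.meshgrid(xs, xs, xs, indexing="ij"), axis=-1).reshape(-1, 3)
field = tps(grid).reshape(25, 25, 25, 3)      # dense displacement vectors
```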

  3. NORM regulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gray, P.

    1997-02-01

    The author reviews the question of regulation for naturally occurring radioactive material (NORM), and the factors that have made this a more prominent concern today. Past practices have been very relaxed, and have often involved very poor records, the involvement of contractors, and the disposition of contaminated equipment back into commercial service. The rationale behind the establishment of regulations is to provide worker protection, to exempt low risk materials, to aid in scrap recycling, to provide direction for remediation and to examine disposal options. The author reviews existing regulations at federal and state levels, impending legislation, and touches on the issue of site remediation and potential liabilities affecting the release of sites contaminated by NORM.

  4. Application of the Discrete Regularization Method to the Inverse of the Chord Vibration Equation

    NASA Astrophysics Data System (ADS)

    Wang, Linjun; Han, Xu; Wei, Zhouchao

    The inverse problem of recovering the initial condition of the chord vibration equation from boundary-value data is ill-posed. First, we transform it into a Fredholm integral equation. Second, we discretize the equation by the trapezoidal rule, obtaining a severely ill-conditioned linear system that is sensitive to perturbations of the data: even a tiny error in the right-hand side causes large oscillations in the solution, so traditional methods cannot produce good results. In this paper, we solve this problem by the Tikhonov regularization method, and numerical simulations demonstrate that the method is feasible and effective.
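
    A minimal numerical sketch of this pipeline, assuming a generic smooth kernel in place of the chord-vibration one: trapezoidal-rule discretization of a first-kind Fredholm equation produces a severely ill-conditioned system, which zeroth-order Tikhonov regularization stabilizes.

```python
import numpy as np

# First-kind Fredholm equation: integral of K(s,t) f(t) dt = g(s).
n = 100
t = np.linspace(0, 1, n)
w = np.full(n, 1.0 / (n - 1)); w[[0, -1]] /= 2           # trapezoidal weights
K = np.exp(-10 * (t[:, None] - t[None, :]) ** 2) * w     # discretized operator
                                                         # (kernel is a stand-in)
f_true = np.sin(np.pi * t)
g = K @ f_true + 1e-4 * np.random.default_rng(0).standard_normal(n)

print(np.linalg.cond(K))             # huge: the system is severely ill-conditioned

alpha = 1e-6
# Zeroth-order Tikhonov: solve (K^T K + alpha I) f = K^T g.
f_reg = np.linalg.solve(K.T @ K + alpha * np.eye(n), K.T @ g)
```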

  5. Changes in Perceived Filial Obligation Norms Among Coresident Family Caregivers in Japan

    PubMed Central

    Tsutsui, Takako; Muramatsu, Naoko; Higashino, Sadanori

    2014-01-01

    Purpose of the Study: Japan introduced a nationwide long-term care insurance (LTCI) system in 2000, making long-term care (LTC) a right for older adults regardless of income and family availability. To shed light on its implications for family caregiving, we investigated perceived filial obligation norms among coresident primary family caregivers before and after the policy change. Design and Methods: Descriptive and multiple regression analyses were conducted to examine changes in perceived filial obligation norms and its subdimensions (financial, physical, and emotional support), using 2-wave panel survey data of coresident primary family caregivers (N = 611) in 1 city. The baseline survey was conducted in 1999, and a follow-up survey 2 years later. Results: On average, perceived filial obligation norms declined (p < .05). Daughters-in-law had the most significant declines (global and physical: p < .01, emotional: p < .05) among family caregivers. In particular, physical support, which Japan’s LTC reform targeted, declined significantly among daughters and daughters-in-law (p < .01). Multiple regression analysis indicated that daughters-in-law had significantly lower perceived filial obligation norms after the policy introduction than sons and daughters (p < .01 and p < .05, respectively), controlling for the baseline filial obligation and situational factors. Implications: Our research indicates declining roles of daughters-in-law in elder care during Japan’s LTCI system implementation period. Further international efforts are needed to design and implement longitudinal studies that help promote understanding of the interplay among national LTC policies, social changes, and caregiving norms and behaviors. PMID:24009170

  6. Cardiac C-arm computed tomography using a 3D + time ROI reconstruction method with spatial and temporal regularization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mory, Cyril, E-mail: cyril.mory@philips.com; Philips Research Medisys, 33 rue de Verdun, 92156 Suresnes; Auvray, Vincent

    2014-02-15

    Purpose: Reconstruction of the beating heart in 3D + time in the catheter laboratory using only the available C-arm system would improve diagnosis, guidance, device sizing, and outcome control for intracardiac interventions, e.g., electrophysiology, valvular disease treatment, structural or congenital heart disease. To obtain such a reconstruction, the patient's electrocardiogram (ECG) must be recorded during the acquisition and used in the reconstruction. In this paper, the authors present a 4D reconstruction method aiming to reconstruct the heart from a single sweep 10 s acquisition. Methods: The authors introduce the 4D RecOnstructiOn using Spatial and TEmporal Regularization (short 4D ROOSTER) method, which reconstructs all cardiac phases at once, as a 3D + time volume. The algorithm alternates between a reconstruction step based on conjugate gradient and four regularization steps: enforcing positivity, averaging along time outside a motion mask that contains the heart and vessels, 3D spatial total variation minimization, and 1D temporal total variation minimization. Results: 4D ROOSTER recovers the different temporal representations of a moving Shepp and Logan phantom, and outperforms both ECG-gated simultaneous algebraic reconstruction technique and prior image constrained compressed sensing on a clinical case. It generates 3D + time reconstructions with sharp edges which can be used, for example, to estimate the patient's left ventricular ejection fraction. Conclusions: 4D ROOSTER can be applied for human cardiac C-arm CT, and potentially in other dynamic tomography areas. It can easily be adapted to other problems as regularization is decoupled from projection and back projection.
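
    The alternation described in the abstract can be sketched schematically as follows; the conjugate-gradient data step, the TV steps, the projection data, and the motion mask are placeholders rather than the authors' implementation, so this shows only the structure of one outer iteration.

```python
import numpy as np

def cg_data_step(f, proj, n_iter=5):
    """Placeholder: a few conjugate-gradient iterations on ||A f - proj||^2."""
    return f                               # stand-in only, no real projector here

def tv_denoise(f, weight, axes):
    """Placeholder for total-variation minimization along the given axes."""
    return f                               # stand-in only

nx = ny = nz = 64; n_phases = 10
f = np.zeros((n_phases, nx, ny, nz))       # the 3D + time volume
mask = np.zeros((nx, ny, nz), dtype=bool)  # motion mask: heart and vessels
mask[16:48, 16:48, 16:48] = True
projections = None                         # ECG-tagged projection data (omitted)

for _ in range(20):
    f = cg_data_step(f, projections)               # 1) data-fidelity step
    f = np.maximum(f, 0.0)                         # 2) positivity
    static = f.mean(axis=0)                        # 3) average along time ...
    f[:, ~mask] = static[~mask]                    #    ... outside the motion mask
    f = tv_denoise(f, weight=0.1, axes=(1, 2, 3))  # 4) 3D spatial TV
    f = tv_denoise(f, weight=0.1, axes=(0,))       # 5) 1D temporal TV
```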

  7. Effective dose evaluation of NORM-added consumer products using Monte Carlo simulations and the ICRP computational human phantoms.

    PubMed

    Lee, Hyun Cheol; Yoo, Do Hyeon; Testa, Mauro; Shin, Wook-Geun; Choi, Hyun Joon; Ha, Wi-Ho; Yoo, Jaeryong; Yoon, Seokwon; Min, Chul Hee

    2016-04-01

    The aim of this study is to evaluate the potential hazard of naturally occurring radioactive material (NORM) added consumer products. Using the Monte Carlo method, the radioactive products were simulated with the ICRP reference phantoms, and the organ doses were calculated according to the usage scenarios. Finally, the annual effective doses were evaluated to be lower than the public dose limit of 1 mSv y(-1) for 44 products. It was demonstrated that NORM-added consumer products can be quantitatively assessed for safety regulation. Copyright © 2016 Elsevier Ltd. All rights reserved.

  8. NORM management in the oil and gas industry.

    PubMed

    Cowie, M; Mously, K; Fageeha, O; Nassar, R

    2012-01-01

    It has been established that naturally occurring radioactive material (NORM) may accumulate at various locations along the oil and gas production process. Components such as wellheads, separation vessels, pumps, and other processing equipment can become contaminated with NORM, and NORM can accumulate in the form of sludge, scale, scrapings, and other waste media. This can create a potential radiation hazard to workers, the general public, and the environment if certain controls are not established. Saudi Aramco has developed NORM management guidelines, and is implementing a comprehensive strategy to address all aspects of NORM management that aim to enhance NORM monitoring; control of NORM-contaminated equipment; control of NORM waste handling and disposal; and protection, awareness, and training of workers. The benefits of shared knowledge, best practice, and experience across the oil and gas industry are seen as key to the establishment of common guidance. This paper outlines Saudi Aramco's experience in the development of a NORM management strategy, and its goals of establishing common guidance throughout the oil and gas industry. Copyright © 2012. Published by Elsevier Ltd.

  9. Do "Clicker" Educational Sessions Enhance the Effectiveness of a Social Norms Marketing Campaign?

    ERIC Educational Resources Information Center

    Killos, Lydia F.; Hancock, Linda C.; McGann, Amanda Wattenmaker; Keller, Adrienne E.

    2010-01-01

    Objective: Social norms campaigns are a cost-effective way to reduce high-risk drinking on college campuses. This study compares effectiveness of a "standard" social norms media (SNM) campaign for those with and without exposure to additional educational sessions using audience response technology ("clickers"). Methods: American College Health…

  10. Naturally Occurring Radioactive Materials (NORM)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gray, P.

    1997-02-01

    This paper discusses the broad problems presented by Naturally Occurring Radioactive Materials (NORM). Technologically enhanced naturally occurring radioactive material includes any radionuclides whose physical, chemical, radiological properties or radionuclide concentration have been altered from their natural state. With regard to NORM in particular, radioactive contamination is radioactive material in an undesired location. This is a concern in a range of industries: petroleum; uranium mining; phosphorus and phosphates; fertilizers; fossil fuels; forestry products; water treatment; metal mining and processing; geothermal energy. The author discusses in more detail the problem in the petroleum industry, including the isotopes of concern, the hazards they present, the contamination which they cause, ways to dispose of contaminated materials, and regulatory issues. He points out there are three key programs to reduce legal exposure and problems due to these contaminants: waste minimization; NORM assessment (surveys); NORM compliance (training).

  11. Norm, gender, and bribe-giving: Insights from a behavioral game.

    PubMed

    Lan, Tian; Hong, Ying-Yi

    2017-01-01

    Previous research has suggested that bribery is more normative in some countries than in others. To understand the underlying process, this paper examines the effects of social norm and gender on bribe-giving behavior. We argue that social norms provide information for strategic planning and impression management, and thus would impact participants' bribe amount. Besides, males are more agentic and focus more on impression management than females. We predicted that males would defy the norm in order to win when the amount of their bribe was kept private, but would conform to the norm when it was made public. To test this hypothesis, we conducted two studies using a competitive game. In each game, we asked three participants to compete in five rounds of creative tasks, and the winner was determined by a referee's subjective judgment of the participants' performance on the tasks. Participants were allowed to give bribes to the referee. Bribe-giving norms were manipulated in two domains: norm level (high vs. low) and norm context (private vs. public), in order to investigate the influence of informational and affiliational needs. Studies 1 and 2 consistently showed that individuals conformed to the norm level of bribe-giving while maintaining a relative advantage for economic benefit. Study 2 found that males gave larger bribes in the private context than in the public, whereas females gave smaller bribes in both contexts. We used a latent growth curve model (LGCM) to depict the development of bribe-giving behaviors during five rounds of competition. The results showed that gender, creative performance, and norm level all influence the trajectory of bribe-giving behavior.

  12. Norm, gender, and bribe-giving: Insights from a behavioral game

    PubMed Central

    Hong, Ying-yi

    2017-01-01

    Previous research has suggested that bribery is more normative in some countries than in others. To understand the underlying process, this paper examines the effects of social norm and gender on bribe-giving behavior. We argue that social norms provide information for strategic planning and impression management, and thus would impact participants’ bribe amount. Besides, males are more agentic and focus more on impression management than females. We predicted that males would defy the norm in order to win when the amount of their bribe was kept private, but would conform to the norm when it was made public. To test this hypothesis, we conducted two studies using a competitive game. In each game, we asked three participants to compete in five rounds of creative tasks, and the winner was determined by a referee’s subjective judgment of the participants’ performance on the tasks. Participants were allowed to give bribes to the referee. Bribe-giving norms were manipulated in two domains: norm level (high vs. low) and norm context (private vs. public), in order to investigate the influence of informational and affiliational needs. Studies 1 and 2 consistently showed that individuals conformed to the norm level of bribe-giving while maintaining a relative advantage for economic benefit. Study 2 found that males gave larger bribes in the private context than in the public, whereas females gave smaller bribes in both contexts. We used a latent growth curve model (LGCM) to depict the development of bribe-giving behaviors during five rounds of competition. The results showed that gender, creative performance, and norm level all influence the trajectory of bribe-giving behavior. PMID:29272291

  13. Visual body size norms and the under‐detection of overweight and obesity

    PubMed Central

    Robinson, E.

    2017-01-01

    Summary Objectives The weight status of men with overweight and obesity tends to be visually underestimated, but visual recognition of female overweight and obesity has not been formally examined. The aims of the present studies were to test whether people can accurately recognize both male and female overweight and obesity and to examine a visual norm‐based explanation for why weight status is underestimated. Methods The present studies examine whether both male and female overweight and obesity are visually underestimated (Study 1), whether body size norms predict when underestimation of weight status occurs (Study 2) and whether visual exposure to heavier body weights adjusts visual body size norms and results in underestimation of weight status (Study 3). Results The weight status of men and women with overweight and obesity was consistently visually underestimated (Study 1). Body size norms predicted underestimation of weight status (Study 2) and in part explained why visual exposure to heavier body weights caused underestimation of overweight (Study 3). Conclusions The under‐detection of overweight and obesity may have been in part caused by exposure to larger body sizes resulting in an upwards shift in the range of body sizes that are perceived as being visually ‘normal’. PMID:29479462

  14. Beyond Picture Naming: Norms and Patient Data for a Verb Generation Task

    PubMed Central

    Kurland, Jacquie; Reber, Alisson; Stokes, Polly

    2014-01-01

    Purpose The current study aimed to: 1) acquire a set of verb generation to picture norms; and 2) probe its utility as an outcomes measure in aphasia treatment. Method Fifty healthy volunteers participated in Phase I, the verb generation normative sample. They generated verbs for 218 pictures of common objects (ISI=5s). In Phase II, four persons with aphasia (PWA) generated verbs for 60 objects (ISI=10s). Their stimuli consisted of objects which were: 1) recently trained (for object naming; n=20); 2) untrained (a control set; n=20); or 3) from a set of pictures named correctly at baseline (n=20). Verb generation was acquired twice: two months into, and following, a six-month home practice program. Results No objects elicited perfect verb agreement in the normed sample. Stimuli with the highest percent agreement were mostly artifacts, and the dominant verbs were primarily functional associates. Although not targeted in treatment or home practice, PWA mostly improved performance in verb generation post-practice. Conclusions A set of clinically and experimentally useful verb generation norms was acquired for a subset of the Snodgrass and Vanderwart (1980) picture set. More cognitively demanding than confrontation naming, this task may help to fill the sizeable gap between object picture naming and propositional speech. PMID:24686752

  15. An Abbreviated Tool for Assessing Feminine Norm Conformity: Psychometric Properties of the Conformity to Feminine Norms Inventory-45

    ERIC Educational Resources Information Center

    Parent, Mike C.; Moradi, Bonnie

    2011-01-01

    The Conformity to Feminine Norms Inventory-45 (CFNI-45; Parent & Moradi, 2010) is an important tool for assessing level of conformity to feminine gender norms and for investigating the implications of such norms for women's functioning. The authors of the present study assessed the factor structure, measurement invariance, reliability, and…

  16. Learning accurate and interpretable models based on regularized random forests regression

    PubMed Central

    2014-01-01

    Background Many biology related research works combine data from multiple sources in an effort to understand the underlying problems. It is important to find and interpret the most important information from these sources. Thus it will be beneficial to have an effective algorithm that can simultaneously extract decision rules and select critical features for good interpretation while preserving the prediction performance. Methods In this study, we focus on regression problems for biological data where target outcomes are continuous. In general, models constructed from linear regression approaches are relatively easy to interpret. However, many practical biological applications are nonlinear in essence, where we can hardly find a direct linear relationship between input and output. Nonlinear regression techniques can reveal the nonlinear relationships in data, but are generally hard for humans to interpret. We propose a rule-based regression algorithm that uses 1-norm regularized random forests. The proposed approach simultaneously extracts a small number of rules from generated random forests and eliminates unimportant features. Results We tested the approach on some biological data sets. The proposed approach is able to construct a significantly smaller set of regression rules using a subset of attributes while achieving prediction performance comparable to that of random forests regression. Conclusion It demonstrates high potential in aiding prediction and interpretation of nonlinear relationships of the subject being studied. PMID:25350120
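
    A RuleFit-flavored sketch of the idea, under our own simplifications: each random forest leaf is treated as a rule (the conjunction of splits on its root-to-leaf path), and an l1 (lasso) penalty then selects a small subset of those rules while discarding the rest. This is an illustration, not the authors' algorithm.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import Lasso
from sklearn.preprocessing import OneHotEncoder

# Fit a shallow forest, encode leaf membership as binary rule indicators,
# then let an l1-penalized linear model keep only a few informative rules.
X, y = make_regression(n_samples=300, n_features=20, noise=5.0, random_state=0)
forest = RandomForestRegressor(n_estimators=50, max_depth=3,
                               random_state=0).fit(X, y)

leaves = forest.apply(X)                 # (n_samples, n_trees) leaf indices
rules = OneHotEncoder().fit_transform(leaves)   # sparse rule-indicator matrix

lasso = Lasso(alpha=0.5).fit(rules, y)   # 1-norm penalty prunes most rules
n_kept = np.count_nonzero(lasso.coef_)
print(f"{n_kept} rules kept out of {rules.shape[1]}")
```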

  17. A Review of Norms and Normative Multiagent Systems

    PubMed Central

    Mahmoud, Moamin A.; Ahmad, Mohd Sharifuddin; Mustapha, Aida

    2014-01-01

    Norms and normative multiagent systems have become the subjects of interest for many researchers. Such interest is caused by the need for agents to exploit the norms in enhancing their performance in a community. The term norm is used to characterize the behaviours of community members. The concept of normative multiagent systems is used to facilitate collaboration and coordination among social groups of agents. Many researches have been conducted on norms that investigate the fundamental concepts, definitions, classification, and types of norms and normative multiagent systems including normative architectures and normative processes. However, very few researches have been found to comprehensively study and analyze the literature in advancing the current state of norms and normative multiagent systems. Consequently, this paper attempts to present the current state of research on norms and normative multiagent systems and propose a norm's life cycle model based on the review of the literature. Subsequently, this paper highlights the significant areas for future work. PMID:25110739

  18. ℓp-Norm Multikernel Learning Approach for Stock Market Price Forecasting

    PubMed Central

    Shao, Xigao; Wu, Kun; Liao, Bifeng

    2012-01-01

    Linear multiple kernel learning models have been used for predicting financial time series. However, ℓ1-norm multiple support vector regression is rarely observed to outperform trivial baselines in practical applications. To allow for robust kernel mixtures that generalize well, we adopt ℓp-norm multiple kernel support vector regression (1 ≤ p < ∞) as a stock price prediction model. The optimization problem is decomposed into smaller subproblems, and the interleaved optimization strategy is employed to solve the regression model. The model is evaluated on forecasting the daily stock closing prices of the Shanghai Stock Index in China. Experimental results show that our proposed model performs better than the ℓ1-norm multiple support vector regression model. PMID:23365561

  19. ℓ(p)-Norm multikernel learning approach for stock market price forecasting.

    PubMed

    Shao, Xigao; Wu, Kun; Liao, Bifeng

    2012-01-01

    Linear multiple kernel learning model has been used for predicting financial time series. However, ℓ(1)-norm multiple support vector regression is rarely observed to outperform trivial baselines in practical applications. To allow for robust kernel mixtures that generalize well, we adopt ℓ(p)-norm multiple kernel support vector regression (1 ≤ p < ∞) as a stock price prediction model. The optimization problem is decomposed into smaller subproblems, and the interleaved optimization strategy is employed to solve the regression model. The model is evaluated on forecasting the daily stock closing prices of Shanghai Stock Index in China. Experimental results show that our proposed model performs better than ℓ(1)-norm multiple support vector regression model.
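
    A toy version of the interleaved optimization, with kernel ridge regression substituted for support vector regression so that the inner step has a closed form (our simplification); the kernel-weight update follows the standard closed form used in lp-norm multiple kernel learning, and the data and kernels are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, (120, 4))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(120)

def rbf(X, gamma):
    """Gaussian kernel matrix on the training points."""
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

kernels = [rbf(X, g) for g in (0.1, 1.0, 10.0)]   # candidate base kernels
p, lam = 1.5, 1e-2
d = np.full(len(kernels), 1.0 / len(kernels) ** (1 / p))   # ||d||_p = 1

for _ in range(10):
    K = sum(dm * Km for dm, Km in zip(d, kernels))  # weighted kernel mixture
    alpha = np.linalg.solve(K + lam * np.eye(len(y)), y)   # ridge dual step
    norms = np.array([dm**2 * alpha @ Km @ alpha           # ||f_m||^2 per kernel
                      for dm, Km in zip(d, kernels)])
    d = norms ** (1 / (p + 1))                      # lp-MKL closed-form update
    d /= np.linalg.norm(d, ord=p)                   # project back to ||d||_p = 1
```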

  20. Extensive theoretical/numerical comparative studies on H2 and generalised H2 norms in sampled-data systems

    NASA Astrophysics Data System (ADS)

    Kim, Jung Hoon; Hagiwara, Tomomichi

    2017-11-01

    This paper is concerned with linear time-invariant (LTI) sampled-data systems (by which we mean sampled-data systems with LTI generalised plants and LTI controllers) and studies their H2 norms from the viewpoint of impulse responses and generalised H2 norms from the viewpoint of the induced norms from L2 to L∞. A new definition of the H2 norm of LTI sampled-data systems is first introduced through a sort of intermediate standpoint of those for the existing two definitions. We then establish unified treatment of the three definitions of the H2 norm through a matrix function G(τ) defined on the sampling interval [0, h). This paper next considers the generalised H2 norms, in which two types of the L∞ norm of the output are considered as the temporal supremum magnitude under the spatial 2-norm and ∞-norm of a vector-valued function. We further give unified treatment of the generalised H2 norms through another matrix function F(θ) which is also defined on [0, h). Through a close connection between G(τ) and F(θ), some theoretical relationships between the H2 and generalised H2 norms are provided. Furthermore, appropriate extensions associated with the treatment of G(τ) and F(θ) to the closed interval [0, h] are discussed to facilitate numerical computations and comparisons of the H2 and generalised H2 norms. Through theoretical and numerical studies, it is shown that the two generalised H2 norms coincide with neither of the three H2 norms of LTI sampled-data systems even though all the five definitions coincide with each other when single-output continuous-time LTI systems are considered as a special class of LTI sampled-data systems. To summarise, this paper clarifies that the five control performance measures are mutually related with each other but they are also intrinsically different from each other.
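
    In schematic form (our notation), the two generalised H2 norms discussed above are induced norms from L2 to L∞ that differ only in the spatial norm applied to the output at each time instant:

```latex
\[
  \|\Sigma\|_{\mathrm{gen},2} \;=\; \sup_{\|w\|_{L_2} \le 1}\ \operatorname*{ess\,sup}_{t \ge 0}\ \|z(t)\|_{2},
  \qquad
  \|\Sigma\|_{\mathrm{gen},\infty} \;=\; \sup_{\|w\|_{L_2} \le 1}\ \operatorname*{ess\,sup}_{t \ge 0}\ \|z(t)\|_{\infty}.
\]
```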

  1. Willingness to Drink as a Function of Peer Offers and Peer Norms in Early Adolescence

    PubMed Central

    Jackson, Kristina M; Roberts, Megan E; Colby, Suzanne M; Barnett, Nancy P; Abar, Caitlin C; Merrill, Jennifer E

    2014-01-01

    Objective: The goal of this study was to explore the effect of subjective peer norms on adolescents’ willingness to drink and whether this association was moderated by sensitivity to peer approval, prior alcohol use, and gender. Method: The sample was 1,023 middle-school students (52% female; 76% White; 12% Hispanic; mean age = 12.22 years) enrolled in a prospective study of drinking initiation and progression. Using web-based surveys, participants reported on their willingness to drink alcohol if offered by (a) a best friend or (b) a classmate, peer norms for two referent groups (close friends and classmates), history of sipping or consuming a full drink of alcohol, and sensitivity to peer approval (extreme peer orientation). Items were re-assessed at two follow-ups (administered 6 months apart). Results: Multilevel models revealed that measures of peer norms were significantly associated with both willingness outcomes, with the greatest prediction by descriptive norms. The association between norms and willingness was magnified for girls, those with limited prior experience with alcohol, and youths with low sensitivity to peer approval. Conclusions: Social norms appear to play a key role in substance use decisions and are relevant when considering more reactive behaviors that reflect willingness to drink under conducive circumstances. Prevention programs might target individuals with higher willingness, particularly girls who perceive others to be drinking and youths who have not yet sipped alcohol but report a higher perceived prevalence of alcohol consumption among both friends and peers. PMID:24766752

  2. 22 CFR 120.39 - Regular employee.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 22 Foreign Relations 1 2013-04-01 2013-04-01 false Regular employee. 120.39 Section 120.39 Foreign Relations DEPARTMENT OF STATE INTERNATIONAL TRAFFIC IN ARMS REGULATIONS PURPOSE AND DEFINITIONS § 120.39 Regular employee. (a) A regular employee means for purposes of this subchapter: (1) An individual...

  3. 22 CFR 120.39 - Regular employee.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 22 Foreign Relations 1 2012-04-01 2012-04-01 false Regular employee. 120.39 Section 120.39 Foreign Relations DEPARTMENT OF STATE INTERNATIONAL TRAFFIC IN ARMS REGULATIONS PURPOSE AND DEFINITIONS § 120.39 Regular employee. (a) A regular employee means for purposes of this subchapter: (1) An individual...

  4. 22 CFR 120.39 - Regular employee.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 22 Foreign Relations 1 2014-04-01 2014-04-01 false Regular employee. 120.39 Section 120.39 Foreign Relations DEPARTMENT OF STATE INTERNATIONAL TRAFFIC IN ARMS REGULATIONS PURPOSE AND DEFINITIONS § 120.39 Regular employee. (a) A regular employee means for purposes of this subchapter: (1) An individual...

  5. Half-quadratic variational regularization methods for speckle-suppression and edge-enhancement in SAR complex image

    NASA Astrophysics Data System (ADS)

    Zhao, Xia; Wang, Guang-xin

    2008-12-01

    Synthetic aperture radar (SAR) is an active remote sensing sensor. It is a coherent imaging system, and speckle is its inherent defect, which badly affects the interpretation and recognition of SAR targets. Conventional methods for removing speckle usually operate on the real SAR image and degrade the edges of the image while suppressing the speckle. Moreover, conventional methods lose the image phase information. Removing the speckle while simultaneously enhancing targets and edges remains an open problem. To suppress the speckle and enhance the targets and edges simultaneously, a half-quadratic variational regularization method in the complex SAR image is presented, based on prior knowledge of the targets and edges. Because the cost function is non-quadratic, non-convex and complex, a half-quadratic variational regularization is used to construct a new cost function, which is solved by alternate optimization. In the proposed scheme, the construction of the model, the solution of the model and the selection of the model parameters are studied carefully. Finally, we validate the method using real SAR data. Theoretical analysis and experimental results illustrate the feasibility of the proposed method. Furthermore, the proposed method preserves the image phase information.
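
    The abstract gives no formulas, so the following Python sketch shows only the generic half-quadratic alternation on a simplified problem: real-valued 1-D denoising with an additive-noise data term and a smooth TV-like penalty (the complex SAR data term, priors, and parameter choices of the paper are not reproduced; lam and eps are illustrative):

        import numpy as np

        def half_quadratic_denoise(f, lam=2.0, eps=1e-2, n_iter=30):
            # Minimize 0.5*||u - f||^2 + lam * sum phi(|D u|) with
            # phi(t) = sqrt(t^2 + eps^2), via the multiplicative
            # half-quadratic scheme: update the auxiliary weights w,
            # then solve exactly the resulting quadratic problem.
            n = len(f)
            D = np.diff(np.eye(n), axis=0)        # forward-difference operator
            u = f.copy()
            for _ in range(n_iter):
                g = D @ u
                w = 1.0 / (2.0 * np.sqrt(g ** 2 + eps ** 2))   # HQ weights
                A = np.eye(n) + 2.0 * lam * D.T @ (w[:, None] * D)
                u = np.linalg.solve(A, f)         # quadratic subproblem
            return u

        # A noisy step edge: the plateaus are smoothed, the edge survives
        f = np.r_[np.zeros(50), np.ones(50)] + 0.1 * np.random.randn(100)
        u = half_quadratic_denoise(f)

    The alternation is the point of the half-quadratic trick: each half-step is easy (a closed-form weight formula, then a linear solve) even though the original cost function is non-quadratic and non-convex.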

  6. Bodies obliged and unbound: differentiated response tendencies for injunctive and descriptive social norms.

    PubMed

    Jacobson, Ryan P; Mortensen, Chad R; Cialdini, Robert B

    2011-03-01

    The authors suggest that injunctive and descriptive social norms engage different psychological response tendencies when made selectively salient. On the basis of suggestions derived from the focus theory of normative conduct and from consideration of the norms' functions in social life, the authors hypothesized that the 2 norms would be cognitively associated with different goals, would lead individuals to focus on different aspects of self, and would stimulate different levels of conflict over conformity decisions. Additionally, a unique role for effortful self-regulation was hypothesized for each type of norm: used as a means to resist conformity to descriptive norms but as a means to facilitate conformity for injunctive norms. Four experiments supported these hypotheses. Experiment 1 demonstrated differences in the norms' associations to the goals of making accurate/efficient decisions and gaining/maintaining social approval. Experiment 2 provided evidence that injunctive norms lead to a more interpersonally oriented form of self-awareness and to a greater feeling of conflict about conformity decisions than descriptive norms. In the final 2 experiments, conducted in the lab (Experiment 3) and in a naturalistic environment (Experiment 4), self-regulatory depletion decreased conformity to an injunctive norm (Experiments 3 and 4) and increased conformity to a descriptive norm (Experiment 4), even though the norms advocated identical behaviors. By illustrating differentiated response tendencies for each type of social norm, this research provides new and converging support for the focus theory of normative conduct. (c) 2011 APA, all rights reserved

  7. Association between physician supply, local practice norms, and outpatient visit rates

    PubMed Central

    Yasaitis, Laura C.; Bynum, Julie P.W.; Skinner, Jonathan S.

    2013-01-01

    Background There is considerable regional variation in Medicare outpatient visit rates; such variations may be the consequence of patient health, race/ethnicity differences, patient preferences, or physician supply and beliefs about the efficacy of frequently scheduled visits. Objective To test associations between varying regional Medicare outpatient visit rates and beneficiaries’ health, race/ethnicity, preferences, and physician practice norms and supply. Methods We used Medicare claims from 2006 and 2007, and data from national surveys of three different groups in 2005 – Medicare beneficiaries, cardiologists, and primary care physicians. Regression analysis tested explanations for outpatient visit rates: patient health (self-reported and hierarchical condition category (HCC) score), self-reported race/ethnicity, preferences for care, and local physician practice norms and supply in beneficiaries’ Hospital Referral Regions (HRRs) of residence. Results Beneficiaries in the highest quintile of HCC scores experienced 4.99 more visits than those in the lowest. Beneficiaries who were black experienced 2.14 fewer visits than others with similar health and preferences. Higher care-seeking preferences were marginally significantly associated with more visits, while education and poverty were insignificant. HRRs with high physician supply and high frequency practice norms were associated with 2.04 additional visits per year, while HRRs with high supply but low frequency norms were associated with 1.45 additional visits. Adjusting for all individual beneficiary covariates explained less than 20% of the original associations between visit rates and physician supply and practice norms. Conclusion Medicare beneficiaries’ health status, race, and preferences help explain individual office visit frequency; in particular, African-American patients appear to experience lower access to care. Yet, these factors explain a small fraction of the observed regional differences

  8. Evaluation of genotype x environment interactions in cotton using the method proposed by Eberhart and Russell and reaction norm models.

    PubMed

    Alves, R S; Teodoro, P E; Farias, F C; Farias, F J C; Carvalho, L P; Rodrigues, J I S; Bhering, L L; Resende, M D V

    2017-08-17

    Cotton produces one of the most important textile fibers in the world and has great relevance in the world economy. It is an economically important crop in Brazil, which is the world's fifth largest producer. However, studies evaluating the genotype x environment (G x E) interactions in cotton are scarce in this country. Therefore, the goal of this study was to evaluate the G x E interactions for two important traits in cotton (fiber yield and fiber length) using the method proposed by Eberhart and Russell (simple linear regression) and reaction norm models (random regression). Eight trials with sixteen upland cotton genotypes, conducted in a randomized block design, were used. It was possible to identify a genotype with wide adaptability and stability for both traits. Reaction norm models have excellent theoretical and practical properties and led to more informative and accurate results than the method proposed by Eberhart and Russell; they should, therefore, be preferred. Curves of genotypic values as a function of the environmental gradient, which predict the behavior of the genotypes along the gradient, were generated. These curves make it possible to recommend genotypes for untested environmental levels.
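
    The Eberhart and Russell side of the comparison is a simple per-genotype regression on an environmental index, sketched below in Python on synthetic data (the paper's cotton data and its mixed-model reaction norm fit are not reproduced, and the classical pooled-error correction to the deviation variance is omitted):

        import numpy as np

        def eberhart_russell(Y):
            # Y: (genotypes x environments) matrix of trait means.
            I = Y.mean(axis=0) - Y.mean()        # environmental index, sums to 0
            b = Y @ I / (I @ I)                  # adaptability slope per genotype
            fitted = Y.mean(axis=1, keepdims=True) + np.outer(b, I)
            s2d = ((Y - fitted) ** 2).sum(axis=1) / (Y.shape[1] - 2)
            return b, s2d                        # b ~ 1 and s2d ~ 0 indicate
                                                 # wide adaptability and stability

        rng = np.random.default_rng(0)
        Y = rng.normal(10.0, 1.0, size=(16, 8))  # 16 genotypes x 8 trials
        b, s2d = eberhart_russell(Y)

    A reaction norm (random regression) model instead treats the genotype-specific intercepts and slopes as correlated random effects, which is how it borrows strength across genotypes and supports prediction at untested environmental levels.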

  9. Serotonin and Social Norms

    PubMed Central

    Bilderbeck, Amy C.; Brown, Gordon D. A.; Read, Judi; Woolrich, Mark; Cowen, Phillip J.; Behrens, Tim E. J.

    2014-01-01

    How do people sustain resources for the benefit of individuals and communities and avoid the tragedy of the commons, in which shared resources become exhausted? In the present study, we examined the role of serotonin activity and social norms in the management of depletable resources. Healthy adults, alongside social partners, completed a multiplayer resource-dilemma game in which they repeatedly harvested from a partially replenishable monetary resource. Dietary tryptophan depletion, leading to reduced serotonin activity, was associated with aggressive harvesting strategies and disrupted use of the social norms given by distributions of other players’ harvests. Tryptophan-depleted participants more frequently exhausted the resource completely and also accumulated fewer rewards than participants who were not tryptophan depleted. Our findings show that rank-based social comparisons are crucial to the management of depletable resources, and that serotonin mediates responses to social norms. PMID:24815611

  10. PD-L1 gene polymorphisms and low serum level of PD-L1 protein are associated to type 1 diabetes in Chile.

    PubMed

    Pizarro, Carolina; García-Díaz, Diego F; Codner, Ethel; Salas-Pérez, Francisca; Carrasco, Elena; Pérez-Bravo, Francisco

    2014-11-01

    Type 1 diabetes (T1D) has a complex etiology in which genetic and environmental factors are involved, whose interactions have not yet been completely clarified. In this context, the PD-1 pathway and its ligands 1 and 2 (PD-L1 and PD-L2) have been proposed as candidates in several autoimmune diseases. The aim of this work was to determine the allele and haplotype frequencies of six gene polymorphisms of the PD-1 ligands (PD-L1 and PD-L2) in Chilean T1D patients and their effect on serum levels of PD-L1 and on the autoantibody profile (GAD65 and IA2). The study cohort comprised 205 T1D patients and 205 normal children. We performed genotypic analysis of the PD-L1 and PD-L2 genes by the TaqMan method. Determination of anti-GAD65 and anti-IA-2 autoantibodies was performed by ELISA. PD-L1 serum levels were measured. The allelic distribution of the PD-L1 variants (rs2297137 and rs4143815) showed differences between T1D patients and controls (p = 0.035 and p = 0.022, respectively). No differences were detected among the PD-L2 polymorphisms, and only rs16923189 showed genetic variation. T1D patients showed decreased serum levels of PD-L1 compared to controls: 1.42 [0.23-7.45] ng/mL versus 3.35 [0.49-5.89] ng/mL (p < 0.025). In addition, the CGG haplotype of PD-L1 (constructed from the rs822342, rs2297137 and rs4143815 polymorphisms) was associated with T1D, with an OR = 1.44 [1.08 to 1.93]. Finally, no association of these genetic variants was observed with serum concentrations of the PD ligands or with the autoantibody profile, although a correlation between PD-L1 serum concentration and age at disease onset was detected. Two polymorphisms of PD-L1 show different allelic distributions between T1D patients and healthy subjects, and PD-L1 serum levels are significantly lower in diabetic patients. Moreover, the age of onset of the disease determines differences in serum ligand levels among diabetic patients, with lower levels in younger patients. These results point to a possible establishment of

  11. The Moderating Role of Close versus Distal Peer Injunctive Norms and Interdependent Self-Construal in the Effects of Descriptive Norms on College Drinking.

    PubMed

    Yang, Bo

    2018-06-01

    Based on the theory of normative social behavior (Rimal & Real, 2005), this study examined the effects of descriptive norms, close versus distal peer injunctive norms, and interdependent self-construal on college students' intentions to consume alcohol. A cross-sectional study conducted among U.S. college students (N = 581) found that descriptive norms and close and distal peer injunctive norms had independent effects on college students' intentions to consume alcohol. Furthermore, close peer injunctive norms moderated the effects of descriptive norms on college students' intentions to consume alcohol, and the interaction showed different patterns among students with a strong versus a weak interdependent self-construal. High levels of close peer injunctive norms weakened the relationship between descriptive norms and intentions to consume alcohol among students with a strong interdependent self-construal but strengthened that relationship among students with a weak interdependent self-construal. Implications of the findings for norms-based research and college drinking interventions are discussed.

  12. Time lag and communication in changing unpopular norms.

    PubMed

    Gërxhani, Klarita; Bruggeman, Jeroen

    2015-01-01

    Humans often coordinate their social lives through norms. When a large majority of people are dissatisfied with an existing norm, it seems obvious that they will change it. Often, however, this does not occur. We investigate how a time lag between individual support of a norm change and the change itself hinders such change, related to the critical mass of supporters needed to effectuate the change, and the (im)possibility of communicating about it. To isolate these factors, we utilize a laboratory experiment. As predicted, we find unambiguous effects of time lag on precluding norm change; a higher threshold for a critical mass does so as well. Communication facilitates choosing superior norms but it does not necessarily lead to norm change when the uncertainty on whether there will be a norm change in the future is high. Communication seems to help coordination on actions at the present but not the future. Hence, the uncertainty driven by time lag makes individuals choose the status quo, here the unpopular norm.

  13. Time Lag and Communication in Changing Unpopular Norms

    PubMed Central

    Gërxhani, Klarita; Bruggeman, Jeroen

    2015-01-01

    Humans often coordinate their social lives through norms. When a large majority of people are dissatisfied with an existing norm, it seems obvious that they will change it. Often, however, this does not occur. We investigate how a time lag between individual support of a norm change and the change itself hinders such change, related to the critical mass of supporters needed to effectuate the change, and the (im)possibility of communicating about it. To isolate these factors, we utilize a laboratory experiment. As predicted, we find unambiguous effects of time lag on precluding norm change; a higher threshold for a critical mass does so as well. Communication facilitates choosing superior norms but it does not necessarily lead to norm change when the uncertainty on whether there will be a norm change in the future is high. Communication seems to help coordination on actions at the present but not the future. Hence, the uncertainty driven by time lag makes individuals choose the status quo, here the unpopular norm. PMID:25880200

  14. Cultural Norms and Nonverbal Communication: An Illustration

    ERIC Educational Resources Information Center

    Chang, Yanrong

    2015-01-01

    Nonverbal communication takes place in specific cultural contexts and is influenced by cultural norms. Cultural norms are "social rules for what certain types of people should and should not do" (Hall, 2005). Different cultures might have different norms for nonverbal behaviors in specific social, relational, and geographical contexts.…

  15. Interval-valued intuitionistic fuzzy matrix games based on Archimedean t-conorm and t-norm

    NASA Astrophysics Data System (ADS)

    Xia, Meimei

    2018-04-01

    Fuzzy game theory has been applied in many decision-making problems. The matrix game with interval-valued intuitionistic fuzzy numbers (IVIFNs) is investigated based on the Archimedean t-conorm and t-norm. The existing matrix games with IVIFNs are all based on the algebraic t-conorm and t-norm, which are special cases of the Archimedean t-conorm and t-norm. In this paper, intuitionistic fuzzy aggregation operators based on the Archimedean t-conorm and t-norm are employed to aggregate the payoffs of players. To derive the solution of the matrix game with IVIFNs, several mathematical programming models are developed based on the Archimedean t-conorm and t-norm. The proposed models can be transformed into a pair of primal-dual linear programming models, from which the solution of the matrix game with IVIFNs is obtained. It is proved that the theorems valid in the existing matrix games with IVIFNs remain true when the general aggregation operator is used in the proposed matrix game with IVIFNs. The proposed method is an extension of the existing ones and can provide more choices for players. An example is given to illustrate the validity and applicability of the proposed method.
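
    For reference, the generator representation that defines the Archimedean family (these are standard textbook definitions, not taken from the paper):

        % A continuous Archimedean t-norm T has an additive generator
        % g : [0,1] -> [0,infinity), strictly decreasing with g(1) = 0:
        \[
          T(x, y) = g^{(-1)}\bigl(g(x) + g(y)\bigr), \qquad
          S(x, y) = 1 - T(1 - x,\, 1 - y).
        \]
        % The algebraic pair used by the existing IVIFN games is the
        % special case g(t) = -\ln t:
        \[
          T_A(x, y) = x y, \qquad S_A(x, y) = x + y - x y .
        \]

    Other generators (for example the Einstein or Hamacher families) give further members of the Archimedean class, which is the extra freedom the proposed models offer the players.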

  16. Low thrust spacecraft transfers optimization method with the stepwise control structure in the Earth-Moon system in terms of the L1-L2 transfer

    NASA Astrophysics Data System (ADS)

    Fain, M. K.; Starinova, O. L.

    2016-04-01

    The paper outlines a method for determining the locally optimal stepwise control structure for low-thrust spacecraft transfer optimization in the Earth-Moon system, including the L1-L2 transfer. The total flight time is taken as the optimization criterion. The optimal control programs were obtained using Pontryagin's maximum principle. As a result of the optimization, the optimal control programs, the corresponding trajectories, and the minimal total flight times were determined.
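
    In the standard minimum-time form assumed here (the paper's specific equations of motion are not given in the abstract), the maximum principle reads:

        \[
          J = \int_{t_0}^{t_f} dt \;\to\; \min, \qquad \dot{x} = f(x, u),
        \]
        \[
          H(x, \lambda, u) = 1 + \lambda^{\mathsf{T}} f(x, u), \qquad
          \dot{\lambda} = -\frac{\partial H}{\partial x}, \qquad
          u^{*} = \arg\min_{u \in U} H(x, \lambda, u).
        \]

    When the bounded thrust magnitude enters H linearly, the optimal throttle switches between its bounds, which is consistent with the stepwise (thrust/coast) control structure sought in the paper.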

  17. General Population Norms about Child Abuse and Neglect and Associations with Childhood Experiences

    ERIC Educational Resources Information Center

    Bensley, L.; Ruggles, D.; Simmons, K.W.; Harris, C.; Williams, K.; Putvin, T.; Allen, M.

    2004-01-01

    Background: A variety of definitions of child abuse and neglect exist. However, little is known about norms in the general population as to what constitutes child abuse and neglect or how perceived norms may be related to personal experiences. Methods: We conducted a random-digit-dialed telephone survey of 504 Washington State adults.…

  18. Nonnative Processing of Verbal Morphology: In Search of Regularity

    ERIC Educational Resources Information Center

    Gor, Kira; Cook, Svetlana

    2010-01-01

    There is little agreement on the mechanisms involved in second language (L2) processing of regular and irregular inflectional morphology and on the exact role of age, amount, and type of exposure to L2 resulting in differences in L2 input and use. The article contributes to the ongoing debates by reporting the results of two experiments on Russian…

  19. The Effect of Descriptive Norms on Pregaming Frequency: Tests of Five Moderators

    PubMed Central

    Merrill, Jennifer E.; Kenney, Shannon R.; Carey, Kate B.

    2016-01-01

    Background Pregaming is highly prevalent on college campuses and associated with heightened levels of intoxication and risk of alcohol consequences. However, research examining the correlates of pregaming behavior is limited. Descriptive norms (i.e., perceptions about the prevalence or frequency of a behavior) are reliable and comparatively strong predictors of general drinking behavior, with recent evidence indicating that they are also associated with pregaming. Objectives We tested the hypothesis that higher descriptive norms for pregaming frequency would be associated with personal pregaming frequency. We also tested whether this effect would be stronger in the context of several theory-based moderators: female gender, higher injunctive norms (i.e., perceptions of others' attitudes toward a particular behavior), a more positive attitude toward pregaming, a stronger sense of identification with the drinking habits of other students, and stronger social comparison tendencies. Methods College student drinkers (N=198, 63% female) participated in an online survey assessing frequency of pregaming, descriptive norms, and hypothesized moderators. Results A multiple regression model revealed that higher descriptive norms, a more positive attitude toward pregaming, and stronger peer identification were significantly associated with greater pregaming frequency among drinkers. However, no moderators of the association between descriptive norms and pregaming frequency were observed. Conclusions/Importance Descriptive norms are robust predictors of pregaming behavior, for both genders and across levels of several potential moderators. Future research seeking to understand pregaming behavior should consider descriptive norms, as well as personal attitudes and identification with student peers, as targets of interventions designed to reduce pregaming. PMID:27070494

  20. The Influence of Social Norms on Flu Vaccination among African American and White Adults

    ERIC Educational Resources Information Center

    Quinn, Sandra Crouse; Hilyard, Karen M.; Jamison, Amelia M.; An, Ji; Hancock, Gregory R.; Musa, Donald; Freimuth, Vicki S.

    2017-01-01

    Adult influenza vaccination rates remain suboptimal, particularly among African Americans. Social norms may influence vaccination behavior, but little research has focused on influenza vaccine and almost no research has focused on racially-specific norms. This mixed methods investigation utilizes qualitative interviews and focus groups (n = 118)…