Sample records for lp sparsity regularization

  1. Sparse Adaptive Iteratively-Weighted Thresholding Algorithm (SAITA) for Lp-Regularization Using the Multiple Sub-Dictionary Representation

    PubMed Central

    Zhang, Jie; Fan, Shangang; Xiong, Jian; Cheng, Xiefeng; Sari, Hikmet; Adachi, Fumiyuki

    2017-01-01

    L1/2 and L2/3 are two typical non-convex regularizations of Lp (0 < p < 1) …

  2. Sparse Adaptive Iteratively-Weighted Thresholding Algorithm (SAITA) for Lp-Regularization Using the Multiple Sub-Dictionary Representation.

    PubMed

    Li, Yunyi; Zhang, Jie; Fan, Shangang; Yang, Jie; Xiong, Jian; Cheng, Xiefeng; Sari, Hikmet; Adachi, Fumiyuki; Gui, Guan

    2017-12-15

    L1/2 and L2/3 are two typical non-convex regularizations of Lp (0 < p < 1) …
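
    Lp regularization with 0 < p < 1 acts through a thresholding-type proximal operator: small inputs are set exactly to zero while large inputs are only mildly shrunk. The sketch below evaluates that scalar proximal operator by brute force on a grid to illustrate the behavior; it is a generic illustration, not the SAITA algorithm of the cited papers, and the names and parameter values are assumptions.

    ```python
    # Scalar proximal operator of an Lp penalty, prox_{lam*|.|^p}(z), evaluated by brute
    # force on a grid. Generic illustration of Lp thresholding behavior, not SAITA.
    import numpy as np

    def lp_prox_scalar(z, lam, p, n_grid=20001):
        """Minimize 0.5*(x - z)^2 + lam*|x|^p over x; the minimizer shares the sign of z
        and satisfies |x| <= |z|, so a grid search on [0, |z|] suffices for illustration."""
        xs = np.linspace(0.0, abs(z), n_grid)
        vals = 0.5 * (xs - abs(z)) ** 2 + lam * xs ** p
        return np.sign(z) * xs[np.argmin(vals)]

    # Small inputs are thresholded exactly to zero; large inputs are only slightly shrunk.
    print(lp_prox_scalar(0.3, lam=0.5, p=0.5))   # -> 0.0
    print(lp_prox_scalar(2.0, lam=0.5, p=0.5))   # -> close to 2.0
    ```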

  3. Improved dynamic MRI reconstruction by exploiting sparsity and rank-deficiency.

    PubMed

    Majumdar, Angshul

    2013-06-01

    In this paper we address the problem of dynamic MRI reconstruction from partially sampled K-space data. Our work is motivated by previous studies in this area that proposed exploiting the spatiotemporal correlation of the dynamic MRI sequence by posing the reconstruction problem as a least squares minimization regularized by sparsity and low-rank penalties. Ideally the sparsity and low-rank penalties should be represented by the l(0)-norm and the rank of a matrix; however, both are NP-hard penalties. The previous studies used the convex l(1)-norm as a surrogate for the l(0)-norm and the non-convex Schatten-q norm (0 < q < 1) …

  4. Compressive sensing of electrocardiogram signals by promoting sparsity on the second-order difference and by using dictionary learning.

    PubMed

    Pant, Jeevan K; Krishnan, Sridhar

    2014-04-01

    A new algorithm for the reconstruction of electrocardiogram (ECG) signals and a dictionary learning algorithm for the enhancement of its reconstruction performance for a class of signals are proposed. The signal reconstruction algorithm is based on minimizing the lp pseudo-norm of the second-order difference, called the lp(2d) pseudo-norm, of the signal. The optimization involved is carried out using a sequential conjugate-gradient algorithm. The dictionary learning algorithm uses an iterative procedure wherein signal reconstruction and dictionary update steps are repeated until a convergence criterion is satisfied. The signal reconstruction step is implemented by using the proposed signal reconstruction algorithm and the dictionary update step is implemented by using the linear least-squares method. Extensive simulation results demonstrate that the proposed algorithm yields improved reconstruction performance for temporally correlated ECG signals relative to the state-of-the-art lp(1d)-regularized least-squares and Bayesian learning based algorithms. Also, for a known class of signals, the reconstruction performance of the proposed algorithm can be improved by applying it in conjunction with a dictionary obtained using the proposed dictionary learning algorithm.
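
    The lp(2d) pseudo-norm described above is the lp pseudo-norm applied to the second-order difference of the signal; the short sketch below evaluates that quantity for a 1-D signal, with a small smoothing constant added for numerical convenience (an assumption, not the paper's exact definition).

    ```python
    # Evaluate an lp pseudo-norm of the second-order difference of a 1-D signal.
    # The small epsilon smoothing is an assumption, not the paper's exact definition.
    import numpy as np

    def lp_2d_pseudonorm(x, p=0.5, eps=1e-12):
        d2 = np.diff(x, n=2)                      # second-order difference of the signal
        return float(np.sum((d2 ** 2 + eps) ** (p / 2.0)))

    # A smooth, temporally correlated signal scores far lower than white noise.
    t = np.linspace(0.0, 1.0, 500)
    print(lp_2d_pseudonorm(np.sin(2 * np.pi * 5 * t)))
    print(lp_2d_pseudonorm(np.random.default_rng(0).standard_normal(500)))
    ```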

  5. SparseBeads data: benchmarking sparsity-regularized computed tomography

    NASA Astrophysics Data System (ADS)

    Jørgensen, Jakob S.; Coban, Sophia B.; Lionheart, William R. B.; McDonald, Samuel A.; Withers, Philip J.

    2017-12-01

    Sparsity regularization (SR) such as total variation (TV) minimization allows accurate image reconstruction in x-ray computed tomography (CT) from fewer projections than analytical methods. Exactly how few projections suffice and how this number may depend on the image remain poorly understood. Compressive sensing connects the critical number of projections to the image sparsity, but does not cover CT; however, empirical results suggest a similar connection. The present work establishes for real CT data a connection between gradient sparsity and the sufficient number of projections for accurate TV-regularized reconstruction. A collection of 48 x-ray CT datasets called SparseBeads was designed for benchmarking SR reconstruction algorithms. Beadpacks comprising glass beads of five different sizes as well as mixtures were scanned in a micro-CT scanner to provide structured datasets with variable image sparsity levels, number of projections and noise levels to allow the systematic assessment of parameters affecting the performance of SR reconstruction algorithms. Using the SparseBeads data, TV-regularized reconstruction quality was assessed as a function of numbers of projections and gradient sparsity. The critical number of projections for satisfactory TV-regularized reconstruction increased almost linearly with the gradient sparsity. This establishes a quantitative guideline from which one may predict how few projections to acquire based on the expected sample sparsity level as an aid in planning of dose- or time-critical experiments. The results are expected to hold for samples of similar characteristics, i.e. consisting of few, distinct phases with relatively simple structure. Such cases are plentiful in porous media, composite materials, foams, as well as non-destructive testing and metrology. For samples of other characteristics the proposed methodology may be used to investigate similar relations.
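
    Gradient sparsity, the quantity related above to the sufficient number of projections, can be estimated as the number of pixels whose finite-difference gradient magnitude exceeds a small tolerance; a minimal sketch for a 2-D image follows (the difference scheme and tolerance are assumptions).

    ```python
    # Estimate gradient sparsity: the number of pixels whose finite-difference gradient
    # magnitude exceeds a small tolerance (forward differences with a replicated edge).
    import numpy as np

    def gradient_sparsity(img, tol=1e-6):
        gx = np.diff(img, axis=0, append=img[-1:, :])
        gy = np.diff(img, axis=1, append=img[:, -1:])
        return int(np.count_nonzero(np.hypot(gx, gy) > tol))

    # A piecewise-constant phantom has few nonzero gradients relative to its pixel count.
    img = np.zeros((128, 128))
    img[32:96, 32:96] = 1.0
    print(gradient_sparsity(img), img.size)
    ```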

  6. Composite SAR imaging using sequential joint sparsity

    NASA Astrophysics Data System (ADS)

    Sanders, Toby; Gelb, Anne; Platte, Rodrigo B.

    2017-06-01

    This paper investigates accurate and efficient ℓ1 regularization methods for generating synthetic aperture radar (SAR) images. Although ℓ1 regularization algorithms are already employed in SAR imaging, practical and efficient implementation for real-time imaging remains a challenge. Here we demonstrate that fast numerical operators can be used to robustly implement ℓ1 regularization methods that are as efficient as, or more efficient than, traditional approaches such as back projection, while providing superior image quality. In particular, we develop a sequential joint sparsity model for composite SAR imaging which naturally combines the joint sparsity methodology with composite SAR. Our technique, which can be implemented using standard, fractional, or higher order total variation regularization, is able to reduce the effects of speckle and other noisy artifacts with little additional computational cost. Finally, we show that generalizing total variation regularization to non-integer and higher orders provides improved flexibility and robustness for SAR imaging.

  7. Two-level structural sparsity regularization for identifying lattices and defects in noisy images

    DOE PAGES

    Li, Xin; Belianinov, Alex; Dyck, Ondrej E.; ...

    2018-03-09

    Here, this paper presents a regularized regression model with a two-level structural sparsity penalty applied to locate individual atoms in a noisy scanning transmission electron microscopy (STEM) image. In crystals, the locations of atoms are symmetric, condensed into a few lattice groups. Therefore, by identifying the underlying lattice in a given image, individual atoms can be accurately located. We propose to formulate the identification of the lattice groups as a sparse group selection problem. Furthermore, real atomic scale images contain defects and vacancies, so atomic identification based solely on a lattice group may result in false positives and false negatives. To minimize error, the model includes an individual sparsity regularization in addition to the group sparsity for a within-group selection, which results in a regression model with a two-level sparsity regularization. We propose a modification of the group orthogonal matching pursuit (gOMP) algorithm with a thresholding step to solve the atom finding problem. The convergence and statistical analyses of the proposed algorithm are presented. The proposed algorithm is also evaluated through numerical experiments with simulated images. The applicability of the algorithm to the determination of atom structures and the identification of imaging distortions and atomic defects was demonstrated using three real STEM images. In conclusion, we believe this is an important step toward automatic phase identification and assignment with the advent of genomic databases for materials.
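
    To make the group selection with a thresholding step concrete, the sketch below gives a much-simplified group orthogonal matching pursuit under assumed conventions (a fixed number of selected groups and a fixed threshold); it is an illustration of the idea, not the authors' gOMP variant or its convergence-analyzed form.

    ```python
    # Much-simplified group OMP with a final thresholding step (illustrative sketch only;
    # not the exact algorithm or convergence-analyzed variant from the cited paper).
    import numpy as np

    def gomp_threshold(A, y, groups, n_groups=2, thresh=0.1):
        """A: dictionary (m x n); groups: list of index arrays partitioning the columns.
        Greedily pick groups by residual correlation, refit by least squares, and finally
        zero out small within-group coefficients (a crude stand-in for within-group selection)."""
        x = np.zeros(A.shape[1])
        residual = y.copy()
        selected = []
        for _ in range(n_groups):
            scores = [np.linalg.norm(A[:, g].T @ residual) for g in groups]
            g_best = int(np.argmax(scores))
            if g_best not in selected:
                selected.append(g_best)
            idx = np.concatenate([groups[g] for g in selected])
            coef, *_ = np.linalg.lstsq(A[:, idx], y, rcond=None)  # refit on selected groups
            x[:] = 0.0
            x[idx] = coef
            residual = y - A @ x
        x[np.abs(x) < thresh] = 0.0   # thresholding step
        return x

    # Toy example: the signal uses only three atoms from the third group (indices 12-17).
    rng = np.random.default_rng(1)
    A = rng.standard_normal((40, 60))
    groups = [np.arange(i, i + 6) for i in range(0, 60, 6)]
    y = A[:, groups[2]] @ np.array([1.0, 0.0, 0.5, 0.0, 0.0, -1.0])
    print(np.nonzero(gomp_threshold(A, y, groups))[0])   # a subset of group 2's indices
    ```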

  8. Two-level structural sparsity regularization for identifying lattices and defects in noisy images

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Xin; Belianinov, Alex; Dyck, Ondrej E.

    Here, this paper presents a regularized regression model with a two-level structural sparsity penalty applied to locate individual atoms in a noisy scanning transmission electron microscopy (STEM) image. In crystals, the locations of atoms are symmetric, condensed into a few lattice groups. Therefore, by identifying the underlying lattice in a given image, individual atoms can be accurately located. We propose to formulate the identification of the lattice groups as a sparse group selection problem. Furthermore, real atomic scale images contain defects and vacancies, so atomic identification based solely on a lattice group may result in false positives and false negatives. To minimize error, the model includes an individual sparsity regularization in addition to the group sparsity for a within-group selection, which results in a regression model with a two-level sparsity regularization. We propose a modification of the group orthogonal matching pursuit (gOMP) algorithm with a thresholding step to solve the atom finding problem. The convergence and statistical analyses of the proposed algorithm are presented. The proposed algorithm is also evaluated through numerical experiments with simulated images. The applicability of the algorithm to the determination of atom structures and the identification of imaging distortions and atomic defects was demonstrated using three real STEM images. In conclusion, we believe this is an important step toward automatic phase identification and assignment with the advent of genomic databases for materials.

  9. Controlled wavelet domain sparsity for x-ray tomography

    NASA Astrophysics Data System (ADS)

    Purisha, Zenith; Rimpeläinen, Juho; Bubba, Tatiana; Siltanen, Samuli

    2018-01-01

    Tomographic reconstruction is an ill-posed inverse problem that calls for regularization. One possibility is to require sparsity of the unknown in an orthonormal wavelet basis. This, in turn, can be achieved by variational regularization, where the penalty term is the sum of the absolute values of the wavelet coefficients. The primal-dual fixed point algorithm showed that the minimizer of the variational regularization functional can be computed iteratively using a soft-thresholding operation. Choosing the soft-thresholding parameter …
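
    The soft-thresholding operation mentioned above is the proximal map of the ℓ1 penalty on the wavelet coefficients; a minimal iterative soft-thresholding (ISTA) sketch for a generic linear forward operator and an orthonormal sparsifying transform is given below. The operator shapes, step-size rule, and names are assumptions for illustration, not the paper's implementation.

    ```python
    # Minimal ISTA sketch for min_c 0.5*||A W^T c - y||^2 + mu*||c||_1,
    # where W is a square orthonormal transform (rows are basis vectors).
    # Operator choices, step size, and names are assumptions for illustration.
    import numpy as np

    def soft_threshold(c, t):
        return np.sign(c) * np.maximum(np.abs(c) - t, 0.0)

    def ista(A, W, y, mu=0.1, n_iter=200):
        c = np.zeros(A.shape[1])                  # wavelet-domain coefficients
        step = 1.0 / np.linalg.norm(A, 2) ** 2    # 1 / Lipschitz constant (W orthonormal)
        for _ in range(n_iter):
            x = W.T @ c                           # synthesize the image from coefficients
            grad = W @ (A.T @ (A @ x - y))        # gradient of the data term w.r.t. c
            c = soft_threshold(c - step * grad, step * mu)
        return W.T @ c

    rng = np.random.default_rng(0)
    A = rng.standard_normal((30, 64))                        # underdetermined forward operator
    W = np.linalg.qr(rng.standard_normal((64, 64)))[0]       # a random orthonormal transform
    y = A @ rng.standard_normal(64)
    x_rec = ista(A, W, y, mu=0.1)
    ```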

  10. Regularized matrix regression

    PubMed Central

    Zhou, Hua; Li, Lexin

    2014-01-01

    Modern technologies are producing a wealth of data with complex structures. For instance, in two-dimensional digital imaging, flow cytometry and electroencephalography, matrix-type covariates frequently arise when measurements are obtained for each combination of two underlying variables. To address scientific questions arising from those data, new regression methods that take matrices as covariates are needed, and sparsity or other forms of regularization are crucial owing to the ultrahigh dimensionality and complex structure of the matrix data. The popular lasso and related regularization methods hinge on the sparsity of the true signal in terms of the number of its non-zero coefficients. However, for the matrix data, the true signal is often of, or can be well approximated by, a low rank structure. As such, the sparsity is frequently in the form of low rank of the matrix parameters, which may seriously violate the assumption of the classical lasso. We propose a class of regularized matrix regression methods based on spectral regularization. A highly efficient and scalable estimation algorithm is developed, and a degrees-of-freedom formula is derived to facilitate model selection along the regularization path. Superior performance of the method proposed is demonstrated on both synthetic and real examples. PMID:24648830
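
    Spectral regularization of a matrix coefficient is usually applied through the proximal operator of the nuclear norm, which soft-thresholds the singular values; a short generic sketch of that operator is given below (illustrative only, not the authors' estimation algorithm).

    ```python
    # Proximal operator of t*||B||_* (nuclear norm): soft-threshold the singular values.
    # Generic illustration of spectral regularization, not the paper's fitting algorithm.
    import numpy as np

    def svt(B, t):
        U, s, Vt = np.linalg.svd(B, full_matrices=False)
        return U @ np.diag(np.maximum(s - t, 0.0)) @ Vt

    rng = np.random.default_rng(0)
    B = np.outer(np.arange(5.0), np.ones(4)) + 0.1 * rng.standard_normal((5, 4))
    print(np.linalg.matrix_rank(svt(B, 0.5)))   # shrinkage drives the small singular values to zero
    ```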

  11. How little data is enough? Phase-diagram analysis of sparsity-regularized X-ray computed tomography

    PubMed Central

    Jørgensen, J. S.; Sidky, E. Y.

    2015-01-01

    We introduce phase-diagram analysis, a standard tool in compressed sensing (CS), to the X-ray computed tomography (CT) community as a systematic method for determining how few projections suffice for accurate sparsity-regularized reconstruction. In CS, a phase diagram is a convenient way to study and express certain theoretical relations between sparsity and sufficient sampling. We adapt phase-diagram analysis for empirical use in X-ray CT for which the same theoretical results do not hold. We demonstrate in three case studies the potential of phase-diagram analysis for providing quantitative answers to questions of undersampling. First, we demonstrate that there are cases where X-ray CT empirically performs comparably with a near-optimal CS strategy, namely taking measurements with Gaussian sensing matrices. Second, we show that, in contrast to what might have been anticipated, taking randomized CT measurements does not lead to improved performance compared with standard structured sampling patterns. Finally, we show preliminary results of how well phase-diagram analysis can predict the sufficient number of projections for accurately reconstructing a large-scale image of a given sparsity by means of total-variation regularization. PMID:25939620

  12. How little data is enough? Phase-diagram analysis of sparsity-regularized X-ray computed tomography.

    PubMed

    Jørgensen, J S; Sidky, E Y

    2015-06-13

    We introduce phase-diagram analysis, a standard tool in compressed sensing (CS), to the X-ray computed tomography (CT) community as a systematic method for determining how few projections suffice for accurate sparsity-regularized reconstruction. In CS, a phase diagram is a convenient way to study and express certain theoretical relations between sparsity and sufficient sampling. We adapt phase-diagram analysis for empirical use in X-ray CT for which the same theoretical results do not hold. We demonstrate in three case studies the potential of phase-diagram analysis for providing quantitative answers to questions of undersampling. First, we demonstrate that there are cases where X-ray CT empirically performs comparably with a near-optimal CS strategy, namely taking measurements with Gaussian sensing matrices. Second, we show that, in contrast to what might have been anticipated, taking randomized CT measurements does not lead to improved performance compared with standard structured sampling patterns. Finally, we show preliminary results of how well phase-diagram analysis can predict the sufficient number of projections for accurately reconstructing a large-scale image of a given sparsity by means of total-variation regularization.
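
    In the compressed-sensing convention that phase-diagram analysis borrows, an experiment with N unknowns, m measurements, and k significant coefficients is placed at the point (δ, ρ) = (m/N, k/m). The helper below simply computes those coordinates; the function and parameter names are illustrative.

    ```python
    # Phase-diagram coordinates in the usual compressed-sensing convention:
    # delta = m/N (undersampling fraction), rho = k/m (relative sparsity).
    def phase_point(n_unknowns: int, n_measurements: int, sparsity: int) -> tuple[float, float]:
        return n_measurements / n_unknowns, sparsity / n_measurements

    # Example: a 256x256 image, 16384 measurements, and 2000 significant coefficients.
    print(phase_point(n_unknowns=256 * 256, n_measurements=16_384, sparsity=2_000))
    ```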

  13. Sparse filtering with the generalized lp/lq norm and its applications to the condition monitoring of rotating machinery

    NASA Astrophysics Data System (ADS)

    Jia, Xiaodong; Zhao, Ming; Di, Yuan; Li, Pin; Lee, Jay

    2018-03-01

    Sparsity has recently become an increasingly important topic in machine learning and signal processing. One big family of sparse measures in the current literature is the generalized lp/lq norm, which is scale invariant and is widely regarded as a normalized lp norm. However, the characteristics of the generalized lp/lq norm are still little discussed, and its application to the condition monitoring of rotating devices remains unexplored. In this study, we first discuss the characteristics of the generalized lp/lq norm for sparse optimization and then propose a method of sparse filtering with the generalized lp/lq norm for the purpose of impulsive signature enhancement. Further, driven by the trend of industrial big data and the need to reduce maintenance costs for industrial equipment, the proposed sparse filter is customized for vibration signal processing and implemented on bearing and gearbox data for the purpose of condition monitoring. Based on the results from the industrial implementations in this paper, the proposed method has been found to be a promising tool for impulsive feature enhancement, and the superiority of the proposed method over previous methods is also demonstrated.
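
    The generalized lp/lq norm used above as a sparsity measure is the ratio of two vector norms, which makes it invariant to rescaling of the signal; a minimal implementation and a quick check of that scale invariance are sketched below (names and default exponents are illustrative).

    ```python
    # Generalized lp/lq sparsity measure: ||x||_p / ||x||_q, which is scale invariant.
    import numpy as np

    def lp_lq_measure(x, p=1.0, q=2.0):
        num = np.sum(np.abs(x) ** p) ** (1.0 / p)
        den = np.sum(np.abs(x) ** q) ** (1.0 / q)
        return num / den

    x = np.random.default_rng(0).standard_normal(1000)
    print(np.isclose(lp_lq_measure(x), lp_lq_measure(10.0 * x)))   # True: scale invariant
    ```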

  14. Bypassing the Limits of L1 Regularization: Convex Sparse Signal Processing Using Non-Convex Regularization

    NASA Astrophysics Data System (ADS)

    Parekh, Ankit

    Sparsity has become the basis of some important signal processing methods over the last ten years. Many signal processing problems (e.g., denoising, deconvolution, non-linear component analysis) can be expressed as inverse problems. Sparsity is invoked through the formulation of an inverse problem with suitably designed regularization terms. The regularization terms alone encode sparsity into the problem formulation. Often, the ℓ1 norm is used to induce sparsity, so much so that ℓ1 regularization is considered to be `modern least-squares'. The use of the ℓ1 norm, as a sparsity-inducing regularizer, leads to a convex optimization problem, which has several benefits: the absence of extraneous local minima and a well-developed theory of globally convergent algorithms, even for large-scale problems. Convex regularization via the ℓ1 norm, however, tends to under-estimate the non-zero values of sparse signals. In order to estimate the non-zero values more accurately, non-convex regularization is often favored over convex regularization. However, non-convex regularization generally leads to non-convex optimization, which suffers from numerous issues: convergence may be guaranteed only to a stationary point, problem-specific parameters may be difficult to set, and the solution is sensitive to the initialization of the algorithm. The first part of this thesis is aimed toward combining the benefits of non-convex regularization and convex optimization to estimate sparse signals more effectively. To this end, we propose to use parameterized non-convex regularizers with designated non-convexity and provide a range for the non-convex parameter so as to ensure that the objective function is strictly convex. By ensuring convexity of the objective function (sum of data-fidelity and non-convex regularizer), we can make use of a wide variety of convex optimization algorithms to obtain the unique global minimum reliably. The second part of this thesis proposes a non-linear signal decomposition technique for an important biomedical signal processing problem: the detection of sleep spindles and K-complexes in human sleep electroencephalography (EEG). We propose a non-linear model for the EEG consisting of three components: (1) a transient (sparse piecewise constant) component, (2) a low-frequency component, and (3) an oscillatory component. The oscillatory component admits a sparse time-frequency representation. Using a convex objective function, we propose a fast non-linear optimization algorithm to estimate the three components in the proposed signal model. The low-frequency and oscillatory components are then used to estimate the K-complexes and sleep spindles, respectively. The proposed detection method is shown to outperform several state-of-the-art automated sleep spindle detection methods.

  15. Glimpse: Sparsity based weak lensing mass-mapping tool

    NASA Astrophysics Data System (ADS)

    Lanusse, F.; Starck, J.-L.; Leonard, A.; Pires, S.

    2018-02-01

    Glimpse, also known as Glimpse2D, is a weak lensing mass-mapping tool that relies on a robust sparsity-based regularization scheme to recover high resolution convergence from either gravitational shear alone or from a combination of shear and flexion. Including flexion allows the supplementation of the shear on small scales in order to increase the sensitivity to substructures and the overall resolution of the convergence map. To preserve all available small scale information, Glimpse avoids any binning of the irregularly sampled input shear and flexion fields and treats the mass-mapping problem as a general ill-posed inverse problem, regularized using a multi-scale wavelet sparsity prior. The resulting algorithm incorporates redshift, reduced shear, and reduced flexion measurements for individual galaxies and is made highly efficient by the use of fast Fourier estimators.

  16. s-SMOOTH: Sparsity and Smoothness Enhanced EEG Brain Tomography

    PubMed Central

    Li, Ying; Qin, Jing; Hsin, Yue-Loong; Osher, Stanley; Liu, Wentai

    2016-01-01

    EEG source imaging enables us to reconstruct current density in the brain from the electrical measurements with excellent temporal resolution (~ ms). The corresponding EEG inverse problem is an ill-posed one that has infinitely many solutions. This is due to the fact that the number of EEG sensors is usually much smaller than that of the potential dipole locations, as well as noise contamination in the recorded signals. To obtain a unique solution, regularizations can be incorporated to impose additional constraints on the solution. An appropriate choice of regularization is critically important for the reconstruction accuracy of a brain image. In this paper, we propose a novel Sparsity and SMOOthness enhanced brain TomograpHy (s-SMOOTH) method to improve the reconstruction accuracy by integrating two recently proposed regularization techniques: Total Generalized Variation (TGV) regularization and ℓ1−2 regularization. TGV is able to preserve the source edge and recover the spatial distribution of the source intensity with high accuracy. Compared to the relevant total variation (TV) regularization, TGV enhances the smoothness of the image and reduces staircasing artifacts. The traditional TGV defined on a 2D image has been widely used in the image processing field. In order to handle 3D EEG source images, we propose a voxel-based Total Generalized Variation (vTGV) regularization that extends the definition of second-order TGV from 2D planar images to 3D irregular surfaces such as the cortex surface. In addition, the ℓ1−2 regularization is utilized to promote sparsity on the current density itself. We demonstrate that ℓ1−2 regularization is able to enhance sparsity and accelerate computation compared to ℓ1 regularization. The proposed model is solved by an efficient and robust algorithm based on the difference of convex functions algorithm (DCA) and the alternating direction method of multipliers (ADMM). Numerical experiments using synthetic data demonstrate the advantages of the proposed method over other state-of-the-art methods in terms of total reconstruction accuracy, localization accuracy and focalization degree. The application to the source localization of event-related potential data further demonstrates the performance of the proposed method in real-world scenarios. PMID:27965529
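
    The ℓ1−2 regularizer promotes sparsity by penalizing the difference ||x||_1 − ||x||_2, which is zero exactly when x has at most one nonzero entry; a two-line sketch of the penalty is shown below for illustration.

    ```python
    # l1-minus-l2 penalty: ||x||_1 - ||x||_2 is zero iff x has at most one nonzero entry.
    import numpy as np

    def l1_minus_l2(x):
        return float(np.sum(np.abs(x)) - np.linalg.norm(x))

    print(l1_minus_l2(np.array([0.0, 3.0, 0.0])))       # 0.0 for a 1-sparse vector
    print(l1_minus_l2(np.array([1.0, 1.0, 1.0, 1.0])))  # positive for a dense vector
    ```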

  17. Hyperspectral imagery super-resolution by compressive sensing inspired dictionary learning and spatial-spectral regularization.

    PubMed

    Huang, Wei; Xiao, Liang; Liu, Hongyi; Wei, Zhihui

    2015-01-19

    Due to the instrumental and imaging optics limitations, it is difficult to acquire high spatial resolution hyperspectral imagery (HSI). Super-resolution (SR) imaging aims at inferring high quality images of a given scene from degraded versions of the same scene. This paper proposes a novel hyperspectral imagery super-resolution (HSI-SR) method via dictionary learning and spatial-spectral regularization. The main contributions of this paper are twofold. First, inspired by the compressive sensing (CS) framework, for learning the high resolution dictionary, we encourage stronger sparsity on image patches and promote smaller coherence between the learned dictionary and the sensing matrix. Thus, a sparsity and incoherence restricted dictionary learning method is proposed to achieve a more efficient sparse representation. Second, a variational regularization model combining a spatial sparsity regularization term and a new local spectral similarity preserving term is proposed to integrate the spectral and spatial-contextual information of the HSI. Experimental results show that the proposed method can effectively recover spatial information and better preserve spectral information. The high spatial resolution HSI reconstructed by the proposed method outperforms reconstructed results by other well-known methods in terms of both objective measurements and visual evaluation.

  18. Image deblurring based on nonlocal regularization with a non-convex sparsity constraint

    NASA Astrophysics Data System (ADS)

    Zhu, Simiao; Su, Zhenming; Li, Lian; Yang, Yi

    2018-04-01

    In recent years, nonlocal regularization methods for image restoration (IR) have drawn more and more attention due to the promising results obtained when compared to the traditional local regularization methods. Despite the success of this technique, in order to obtain computational efficiency, a convex regularizing functional is exploited in most existing methods, which is equivalent to imposing a convex prior on the nonlocal difference operator output. However, our experiments illustrate that the empirical distribution of the nonlocal difference operator output, especially in the seminal work of Kheradmand et al., should be characterized by an extremely heavy-tailed distribution rather than a convex one. Therefore, in this paper, we propose a nonlocal regularization-based method with a non-convex sparsity constraint for image deblurring. Finally, an effective algorithm is developed to solve the corresponding non-convex optimization problem. The experimental results demonstrate the effectiveness of the proposed method.

  19. Sparse Reconstruction of Regional Gravity Signal Based on Stabilized Orthogonal Matching Pursuit (SOMP)

    NASA Astrophysics Data System (ADS)

    Saadat, S. A.; Safari, A.; Needell, D.

    2016-06-01

    The main role of gravity field recovery is the study of dynamic processes in the interior of the Earth, especially in exploration geophysics. In this paper, the Stabilized Orthogonal Matching Pursuit (SOMP) algorithm is introduced for sparse reconstruction of regional gravity signals of the Earth. In practical applications, ill-posed problems may be encountered regarding unknown parameters that are sensitive to data perturbations. Therefore, an appropriate regularization method needs to be applied to find a stabilized solution. The SOMP algorithm aims to regularize the norm of the solution vector, while also minimizing the norm of the corresponding residual vector. In this procedure, a convergence point of the algorithm that specifies the optimal sparsity level of the problem is determined. The results show that the SOMP algorithm finds the stabilized solution for the ill-posed problem at the optimal sparsity level, improving upon existing sparsity-based approaches.

  20. Group-sparse representation with dictionary learning for medical image denoising and fusion.

    PubMed

    Li, Shutao; Yin, Haitao; Fang, Leyuan

    2012-12-01

    Recently, sparse representation has attracted a lot of interest in various areas. However, the standard sparse representation does not consider the intrinsic structure, i.e., the nonzero elements occur in clusters, called group sparsity. Furthermore, there is no dictionary learning method for group sparse representation considering the geometrical structure of the space spanned by the atoms. In this paper, we propose a novel dictionary learning method, called Dictionary Learning with Group Sparsity and Graph Regularization (DL-GSGR). First, the geometrical structure of atoms is modeled as the graph regularization. Then, combining group sparsity and graph regularization, the DL-GSGR is presented, which is solved by alternating the group sparse coding and dictionary updating. In this way, the group coherence of the learned dictionary can be enforced small enough such that any signal can be group sparse coded effectively. Finally, group sparse representation with DL-GSGR is applied to 3-D medical image denoising and image fusion. Specifically, in 3-D medical image denoising, a 3-D processing mechanism (using the similarity among nearby slices) and temporal regularization (to preserve the correlations across nearby slices) are exploited. The experimental results on 3-D image denoising and image fusion demonstrate the superiority of our proposed denoising and fusion approaches.

  1. Target-Oriented High-Resolution SAR Image Formation via Semantic Information Guided Regularizations

    NASA Astrophysics Data System (ADS)

    Hou, Biao; Wen, Zaidao; Jiao, Licheng; Wu, Qian

    2018-04-01

    The sparsity-regularized synthetic aperture radar (SAR) imaging framework has shown remarkable performance in generating feature-enhanced high-resolution images, in which a sparsity-inducing regularizer is involved by exploiting the sparsity priors of some visual features in the underlying image. However, since simple priors on low-level features are insufficient to describe different semantic contents in the image, this type of regularizer will be incapable of distinguishing between the target of interest and unconcerned background clutters. As a consequence, the features belonging to the target and clutters are simultaneously affected in the generated image without concerning their underlying semantic labels. To address this problem, we propose a novel semantic information guided framework for target oriented SAR image formation, which aims at enhancing the target scatterers of interest while suppressing the background clutters. Firstly, we develop a new semantics-specific regularizer for image formation by exploiting the statistical properties of different semantic categories in a target scene SAR image. In order to infer the semantic label for each pixel in an unsupervised way, we moreover induce a novel high-level prior-driven regularizer and some semantic causal rules from the prior knowledge. Finally, our regularized framework for image formation is further derived as a simple iteratively reweighted $\ell_1$ minimization problem which can be conveniently solved by many off-the-shelf solvers. Experimental results demonstrate the effectiveness and superiority of our framework for SAR image formation in terms of target enhancement and clutter suppression, compared with the state of the art. Additionally, the proposed framework opens a new direction of devoting some machine learning strategies to image formation, which can benefit the subsequent decision making tasks.

  2. Manifold optimization-based analysis dictionary learning with an ℓ1∕2-norm regularizer.

    PubMed

    Li, Zhenni; Ding, Shuxue; Li, Yujie; Yang, Zuyuan; Xie, Shengli; Chen, Wuhui

    2018-02-01

    Recently there has been increasing attention towards analysis dictionary learning. In analysis dictionary learning, it is an open problem to obtain the strong sparsity-promoting solutions efficiently while simultaneously avoiding the trivial solutions of the dictionary. In this paper, to obtain the strong sparsity-promoting solutions, we employ the ℓ1/2 norm as a regularizer. The very recent study on ℓ1/2 norm regularization theory in compressive sensing shows that its solutions can give sparser results than using the ℓ1 norm. We transform a complex nonconvex optimization into a number of one-dimensional minimization problems. Then the closed-form solutions can be obtained efficiently. To avoid trivial solutions, we apply manifold optimization to update the dictionary directly on the manifold satisfying the orthonormality constraint, so that the dictionary can avoid the trivial solutions well while simultaneously capturing the intrinsic properties of the dictionary. The experiments with synthetic and real-world data verify that the proposed algorithm for analysis dictionary learning can not only obtain strong sparsity-promoting solutions efficiently, but also learn a more accurate dictionary in terms of dictionary recovery and image processing than the state-of-the-art algorithms.

  3. On Quantile Regression in Reproducing Kernel Hilbert Spaces with Data Sparsity Constraint

    PubMed Central

    Zhang, Chong; Liu, Yufeng; Wu, Yichao

    2015-01-01

    For spline regressions, it is well known that the choice of knots is crucial for the performance of the estimator. As a general learning framework covering the smoothing splines, learning in a Reproducing Kernel Hilbert Space (RKHS) has a similar issue. However, the selection of training data points for kernel functions in the RKHS representation has not been carefully studied in the literature. In this paper we study quantile regression as an example of learning in a RKHS. In this case, the regular squared norm penalty does not perform training data selection. We propose a data sparsity constraint that imposes thresholding on the kernel function coefficients to achieve a sparse kernel function representation. We demonstrate that the proposed data sparsity method can have competitive prediction performance for certain situations, and have comparable performance in other cases compared to that of the traditional squared norm penalty. Therefore, the data sparsity method can serve as a competitive alternative to the squared norm penalty method. Some theoretical properties of our proposed method using the data sparsity constraint are obtained. Both simulated and real data sets are used to demonstrate the usefulness of our data sparsity constraint. PMID:27134575

  4. Hydrolyzed Formula With Reduced Protein Content Supports Adequate Growth: A Randomized Controlled Noninferiority Trial.

    PubMed

    Ahrens, Birgit; Hellmuth, Christian; Haiden, Nadja; Olbertz, Dirk; Hamelmann, Eckard; Vusurovic, Milica; Fleddermann, Manja; Roehle, Robert; Knoll, Anette; Koletzko, Berthold; Wahn, Ulrich; Beyer, Kirsten

    2018-05-01

    A high protein content of nonhydrolyzed infant formula exceeding metabolic requirements can induce rapid weight gain and obesity. Hydrolyzed formula with too low a protein (LP) content may result in inadequate growth. The aim of this study was to investigate noninferiority of partial and extensively hydrolyzed formulas (pHF, eHF) with lower hydrolyzed protein content than conventional, regularly used formulas, with or without synbiotics, for normal growth of healthy term infants. In a European multi-center, parallel, prospective, controlled, double-blind trial, 402 formula-fed infants were randomly assigned to four groups: LP-formulas (1.9 g protein/100 kcal) as pHF with or without synbiotics, LP-eHF formula with synbiotics, or regular protein eHF (2.3 g protein/100 kcal). One hundred and one breast-fed infants served as an observational reference group. As the primary endpoint, noninferiority of daily weight gain during the first 4 months of life was investigated comparing the LP-group to a regular protein eHF group. A comparison of daily weight gain in infants receiving LPpHF (2.15 g/day CI -0.18 to inf.) with infants receiving regular protein eHF showed noninferior weight gain (-3.5 g/day margin; per protocol [PP] population). Noninferiority was also confirmed for the other tested LP formulas. Likewise, analysis of metabolic parameters and plasma amino acid concentrations demonstrated a safe and balanced nutritional composition. Energetic efficiency for growth (weight) was slightly higher in LPeHF and synbiotics compared with LPpHF and synbiotics. All tested hydrolyzed LP formulas allowed normal weight gain without being inferior to regular protein eHF in the first 4 months of life. This trial was registered at clinicaltrials.gov, NCT01143233.

  5. Temporal sparsity exploiting nonlocal regularization for 4D computed tomography reconstruction

    PubMed Central

    Kazantsev, Daniil; Guo, Enyu; Kaestner, Anders; Lionheart, William R. B.; Bent, Julian; Withers, Philip J.; Lee, Peter D.

    2016-01-01

    X-ray imaging applications in medical and material sciences are frequently limited by the number of tomographic projections collected. The inversion of the limited projection data is an ill-posed problem and needs regularization. Traditional spatial regularization is not well adapted to the dynamic nature of time-lapse tomography since it discards the redundancy of the temporal information. In this paper, we propose a novel iterative reconstruction algorithm with a nonlocal regularization term to account for time-evolving datasets. The aim of the proposed nonlocal penalty is to collect the maximum relevant information in the spatial and temporal domains. With the proposed sparsity seeking approach in the temporal space, the computational complexity of the classical nonlocal regularizer is substantially reduced (at least by one order of magnitude). The presented reconstruction method can be directly applied to various big data 4D (x, y, z+time) tomographic experiments in many fields. We apply the proposed technique to modelled data and to real dynamic X-ray microtomography (XMT) data of high resolution. Compared to the classical spatio-temporal nonlocal regularization approach, the proposed method delivers reconstructed images of improved resolution and higher contrast while remaining significantly less computationally demanding. PMID:27002902

  6. Kronecker-Basis-Representation Based Tensor Sparsity and Its Applications to Tensor Recovery.

    PubMed

    Xie, Qi; Zhao, Qian; Meng, Deyu; Xu, Zongben

    2017-08-02

    It is well known that the sparsity of a vector and the low-rankness of a matrix can be rationally measured by the number of nonzero entries (the $l_0$ norm) and the number of nonzero singular values (the rank), respectively. However, data from real applications are often generated by the interaction of multiple factors, which obviously cannot be sufficiently represented by a vector/matrix, while a high order tensor is expected to provide more faithful representation to deliver the intrinsic structure underlying such data ensembles. Unlike the vector/matrix case, constructing a rational high order sparsity measure for a tensor is a relatively harder task. To this aim, in this paper we propose a measure for tensor sparsity, called Kronecker-basis-representation based tensor sparsity measure (KBR briefly), which encodes both sparsity insights delivered by Tucker and CANDECOMP/PARAFAC (CP) low-rank decompositions for a general tensor. Then we study the KBR regularization minimization (KBRM) problem, and design an effective ADMM algorithm for solving it, where each involved parameter can be updated with closed-form equations. Such an efficient solver makes it possible to extend KBR to various tasks like tensor completion and tensor robust principal component analysis. A series of experiments, including multispectral image (MSI) denoising, MSI completion and background subtraction, substantiate the superiority of the proposed methods over the state of the art.

  7. Low-dose CT reconstruction with patch based sparsity and similarity constraints

    NASA Astrophysics Data System (ADS)

    Xu, Qiong; Mou, Xuanqin

    2014-03-01

    With the rapid growth of CT-based medical applications, low-dose CT reconstruction is becoming more and more important to human health. Compared with other methods, statistical iterative reconstruction (SIR) usually performs better in the low-dose case. However, the reconstructed image quality of SIR highly depends on the prior-based regularization due to the insufficiency of low-dose data. The frequently used regularization is developed from pixel-based priors, such as the smoothness between adjacent pixels. This kind of pixel-based constraint cannot distinguish noise and structures effectively. Recently, patch-based methods, such as dictionary learning and non-local means filtering, have outperformed the conventional pixel-based methods. A patch is a small area of an image, which expresses structural information of the image. In this paper, we propose to use patch-based constraints to improve the image quality of low-dose CT reconstruction. In the SIR framework, both patch-based sparsity and similarity are considered in the regularization term. On one hand, patch-based sparsity is addressed by sparse representation and dictionary learning methods; on the other hand, patch-based similarity is addressed by a non-local means filtering method. We conducted a real data experiment to evaluate the proposed method. The experimental results validate that this method can lead to better images with less noise and more detail than other methods in low-count and few-view cases.

  8. Nonconvex Sparse Logistic Regression With Weakly Convex Regularization

    NASA Astrophysics Data System (ADS)

    Shen, Xinyue; Gu, Yuantao

    2018-06-01

    In this work we propose to fit a sparse logistic regression model by a weakly convex regularized nonconvex optimization problem. The idea is based on the finding that a weakly convex function as an approximation of the $\ell_0$ pseudo norm is able to better induce sparsity than the commonly used $\ell_1$ norm. For a class of weakly convex sparsity inducing functions, we prove the nonconvexity of the corresponding sparse logistic regression problem, and study its local optimality conditions and the choice of the regularization parameter to exclude trivial solutions. Despite the nonconvexity, a method based on proximal gradient descent is used to solve the general weakly convex sparse logistic regression, and its convergence behavior is studied theoretically. Then the general framework is applied to a specific weakly convex function, and a necessary and sufficient local optimality condition is provided. The solution method is instantiated in this case as an iterative firm-shrinkage algorithm, and its effectiveness is demonstrated in numerical experiments by both randomly generated and real datasets.
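
    The iterative firm-shrinkage algorithm mentioned above applies a firm-thresholding operator inside each proximal-gradient step. The sketch below shows the classical firm-shrinkage operator in the form commonly attributed to Gao and Bruce; the two threshold parameters and their values are illustrative assumptions, and this is not the paper's exact instantiation.

    ```python
    # Firm-shrinkage operator: acts like soft-thresholding near zero but leaves large
    # entries untouched, reducing the bias of l1 shrinkage. Requires lam1 < lam2;
    # the parameter values below are illustrative.
    import numpy as np

    def firm_shrink(x, lam1, lam2):
        return np.where(np.abs(x) <= lam1, 0.0,
               np.where(np.abs(x) <= lam2,
                        np.sign(x) * lam2 * (np.abs(x) - lam1) / (lam2 - lam1),
                        x))

    print(firm_shrink(np.linspace(-3.0, 3.0, 7), lam1=0.5, lam2=2.0))
    ```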

  9. Structured sparse linear graph embedding.

    PubMed

    Wang, Haixian

    2012-03-01

    Subspace learning is a core issue in pattern recognition and machine learning. Linear graph embedding (LGE) is a general framework for subspace learning. In this paper, we propose a structured sparse extension to LGE (SSLGE) by introducing a structured sparsity-inducing norm into LGE. Specifically, SSLGE casts the learning of projection bases as a regression-type optimization problem, and then the structured sparsity regularization is applied to the regression coefficients. The regularization selects a subset of features and meanwhile encodes high-order information reflecting a priori structure information of the data. The SSLGE technique provides a unified framework for discovering structured sparse subspaces. Computationally, by using a variational equality and the Procrustes transformation, SSLGE is efficiently solved with closed-form updates. Experimental results on face images show the effectiveness of the proposed method.

  10. MO-DE-207A-07: Filtered Iterative Reconstruction (FIR) Via Proximal Forward-Backward Splitting: A Synergy of Analytical and Iterative Reconstruction Method for CT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gao, H

    Purpose: This work aims to develop a general framework, namely the filtered iterative reconstruction (FIR) method, to incorporate the analytical reconstruction (AR) method into the iterative reconstruction (IR) method, for enhanced CT image quality. Methods: FIR is formulated as a combination of filtered data fidelity and sparsity regularization, and then solved by the proximal forward-backward splitting (PFBS) algorithm. As a result, the image reconstruction decouples data fidelity and image regularization with a two-step iterative scheme, during which an AR-projection step updates the filtered data fidelity term, while a denoising solver updates the sparsity regularization term. During the AR-projection step, the image is projected to the data domain to form the data residual, and then reconstructed by a certain AR to a residual image, which is in turn weighted together with the previous image iterate to form the next image iterate. Since the eigenvalues of the AR-projection operator are close to unity, PFBS-based FIR has a fast convergence. Results: The proposed FIR method is validated in the setting of circular cone-beam CT with AR being FDK and total-variation sparsity regularization, and has improved image quality over both AR and IR. For example, FIR has improved visual assessment and quantitative measurement in terms of both contrast and resolution, and reduced axial and half-fan artifacts. Conclusion: FIR is proposed to incorporate AR into IR, with an efficient image reconstruction algorithm based on PFBS. The CBCT results suggest that FIR synergizes AR and IR with improved image quality and reduced axial and half-fan artifacts. The author was partially supported by the NSFC (#11405105), the 973 Program (#2015CB856000), and the Shanghai Pujiang Talent Program (#14PJ1404500).

  11. Regularization Methods for High-Dimensional Instrumental Variables Regression With an Application to Genetical Genomics

    PubMed Central

    Lin, Wei; Feng, Rui; Li, Hongzhe

    2014-01-01

    In genetical genomics studies, it is important to jointly analyze gene expression data and genetic variants in exploring their associations with complex traits, where the dimensionality of gene expressions and genetic variants can both be much larger than the sample size. Motivated by such modern applications, we consider the problem of variable selection and estimation in high-dimensional sparse instrumental variables models. To overcome the difficulty of high dimensionality and unknown optimal instruments, we propose a two-stage regularization framework for identifying and estimating important covariate effects while selecting and estimating optimal instruments. The methodology extends the classical two-stage least squares estimator to high dimensions by exploiting sparsity using sparsity-inducing penalty functions in both stages. The resulting procedure is efficiently implemented by coordinate descent optimization. For the representative L1 regularization and a class of concave regularization methods, we establish estimation, prediction, and model selection properties of the two-stage regularized estimators in the high-dimensional setting where the dimensionalities of covariates and instruments are both allowed to grow exponentially with the sample size. The practical performance of the proposed method is evaluated by simulation studies and its usefulness is illustrated by an analysis of mouse obesity data. Supplementary materials for this article are available online. PMID:26392642

  12. Block matching sparsity regularization-based image reconstruction for incomplete projection data in computed tomography

    NASA Astrophysics Data System (ADS)

    Cai, Ailong; Li, Lei; Zheng, Zhizhong; Zhang, Hanming; Wang, Linyuan; Hu, Guoen; Yan, Bin

    2018-02-01

    In medical imaging, many conventional regularization methods, such as total variation or total generalized variation, impose strong prior assumptions which can only account for very limited classes of images. A more reasonable sparse representation frame for images is still badly needed. Visually understandable images contain meaningful patterns, and combinations or collections of these patterns can be utilized to form some sparse and redundant representations which promise to facilitate image reconstructions. In this work, we propose and study block matching sparsity regularization (BMSR) and devise an optimization program using BMSR for computed tomography (CT) image reconstruction for an incomplete projection set. The program is built as a constrained optimization, minimizing the L1-norm of the coefficients of the image in the transformed domain subject to data observation and positivity of the image itself. To solve the program efficiently, a practical method based on the proximal point algorithm is developed and analyzed. In order to accelerate the convergence rate, a practical strategy for tuning the BMSR parameter is proposed and applied. The experimental results for various settings, including real CT scanning, have verified that the proposed reconstruction method shows promising capabilities compared with conventional regularization.

  13. Fast parallel MR image reconstruction via B1-based, adaptive restart, iterative soft thresholding algorithms (BARISTA).

    PubMed

    Muckley, Matthew J; Noll, Douglas C; Fessler, Jeffrey A

    2015-02-01

    Sparsity-promoting regularization is useful for combining compressed sensing assumptions with parallel MRI for reducing scan time while preserving image quality. Variable splitting algorithms are the current state-of-the-art algorithms for SENSE-type MR image reconstruction with sparsity-promoting regularization. These methods are very general and have been observed to work with almost any regularizer; however, the tuning of associated convergence parameters is a commonly-cited hindrance in their adoption. Conversely, majorize-minimize algorithms based on a single Lipschitz constant have been observed to be slow in shift-variant applications such as SENSE-type MR image reconstruction since the associated Lipschitz constants are loose bounds for the shift-variant behavior. This paper bridges the gap between the Lipschitz constant and the shift-variant aspects of SENSE-type MR imaging by introducing majorizing matrices in the range of the regularizer matrix. The proposed majorize-minimize methods (called BARISTA) converge faster than state-of-the-art variable splitting algorithms when combined with momentum acceleration and adaptive momentum restarting. Furthermore, the tuning parameters associated with the proposed methods are unitless convergence tolerances that are easier to choose than the constraint penalty parameters required by variable splitting algorithms.

  14. Fast Parallel MR Image Reconstruction via B1-based, Adaptive Restart, Iterative Soft Thresholding Algorithms (BARISTA)

    PubMed Central

    Noll, Douglas C.; Fessler, Jeffrey A.

    2014-01-01

    Sparsity-promoting regularization is useful for combining compressed sensing assumptions with parallel MRI for reducing scan time while preserving image quality. Variable splitting algorithms are the current state-of-the-art algorithms for SENSE-type MR image reconstruction with sparsity-promoting regularization. These methods are very general and have been observed to work with almost any regularizer; however, the tuning of associated convergence parameters is a commonly-cited hindrance in their adoption. Conversely, majorize-minimize algorithms based on a single Lipschitz constant have been observed to be slow in shift-variant applications such as SENSE-type MR image reconstruction since the associated Lipschitz constants are loose bounds for the shift-variant behavior. This paper bridges the gap between the Lipschitz constant and the shift-variant aspects of SENSE-type MR imaging by introducing majorizing matrices in the range of the regularizer matrix. The proposed majorize-minimize methods (called BARISTA) converge faster than state-of-the-art variable splitting algorithms when combined with momentum acceleration and adaptive momentum restarting. Furthermore, the tuning parameters associated with the proposed methods are unitless convergence tolerances that are easier to choose than the constraint penalty parameters required by variable splitting algorithms. PMID:25330484

  15. Detection of mouse liver cancer via a parallel iterative shrinkage method in hybrid optical/microcomputed tomography imaging

    NASA Astrophysics Data System (ADS)

    Wu, Ping; Liu, Kai; Zhang, Qian; Xue, Zhenwen; Li, Yongbao; Ning, Nannan; Yang, Xin; Li, Xingde; Tian, Jie

    2012-12-01

    Liver cancer is one of the most common malignant tumors worldwide. In order to enable the noninvasive detection of small liver tumors in mice, we present a parallel iterative shrinkage (PIS) algorithm for dual-modality tomography. It takes advantage of microcomputed tomography and multiview bioluminescence imaging, providing anatomical structure and bioluminescence intensity information to reconstruct the size and location of tumors. By incorporating prior knowledge of signal sparsity, we associate some mathematical strategies including specific smooth convex approximation, an iterative shrinkage operator, and affine subspace with the PIS method, which guarantees the accuracy, efficiency, and reliability for three-dimensional reconstruction. Then an in vivo experiment on the bead-implanted mouse has been performed to validate the feasibility of this method. The findings indicate that a tiny lesion less than 3 mm in diameter can be localized with a position bias of no more than 1 mm; the computational efficiency is one to three orders of magnitude higher than that of existing algorithms; and the approach is robust to different regularization parameters and lp norms. Finally, we have applied this algorithm to another in vivo experiment on an HCCLM3 orthotopic xenograft mouse model, which suggests that the PIS method holds promise for practical applications of whole-body cancer detection.

  16. Computational photoacoustic imaging with sparsity-based optimization of the initial pressure distribution

    NASA Astrophysics Data System (ADS)

    Shang, Ruibo; Archibald, Richard; Gelb, Anne; Luke, Geoffrey P.

    2018-02-01

    In photoacoustic (PA) imaging, the optical absorption can be acquired from the initial pressure distribution (IPD). An accurate reconstruction of the IPD will be very helpful for the reconstruction of the optical absorption. However, the image quality of PA imaging in scattering media is deteriorated by acoustic diffraction, imaging artifacts, and weak PA signals. In this paper, we propose a sparsity-based optimization approach that improves the reconstruction of the IPD in PA imaging. A linear imaging forward model was set up based on a time-and-delay method with the assumption that the point spread function (PSF) is spatially invariant. Then, to solve this inverse problem, an optimization formulation was proposed with a regularization term that enforces sparsity of the IPD in a certain domain. As a proof of principle, the approach was applied to reconstructing point objects and blood vessel phantoms. The resolution and signal-to-noise ratio (SNR) were compared between conventional back-projection and our proposed approach. Overall these results show that computational imaging can leverage the sparsity of PA images to improve the estimation of the IPD.

  17. Sparse Image Reconstruction on the Sphere: Analysis and Synthesis.

    PubMed

    Wallis, Christopher G R; Wiaux, Yves; McEwen, Jason D

    2017-11-01

    We develop techniques to solve ill-posed inverse problems on the sphere by sparse regularization, exploiting sparsity in both axisymmetric and directional scale-discretized wavelet space. Denoising, inpainting, and deconvolution problems, and combinations thereof, are considered as examples. Inverse problems are solved in both the analysis and synthesis settings, with a number of different sampling schemes. The most effective approach is that with the most restricted solution-space, which depends on the interplay between the adopted sampling scheme, the selection of the analysis/synthesis problem, and any weighting of the l1 norm appearing in the regularization problem. More efficient sampling schemes on the sphere improve reconstruction fidelity by restricting the solution-space and also by improving sparsity in wavelet space. We apply the technique to denoise Planck 353-GHz observations, improving the ability to extract the structure of Galactic dust emission, which is important for studying Galactic magnetism.

  18. Visual tracking based on the sparse representation of the PCA subspace

    NASA Astrophysics Data System (ADS)

    Chen, Dian-bing; Zhu, Ming; Wang, Hui-li

    2017-09-01

    We construct a collaborative model of the sparse representation and the subspace representation. First, we represent the tracking target in the principal component analysis (PCA) subspace, and then we employ an L1 regularization to restrict the sparsity of the residual term, an L2 regularization term to restrict the sparsity of the representation coefficients, and an L2 norm to restrict the distance between the reconstruction and the target. Then we implement the algorithm in the particle filter framework. Furthermore, an iterative method is presented to get the global minimum of the residual and the coefficients. Finally, an alternative template update scheme is adopted to avoid the tracking drift which is caused by the inaccurate update. In the experiment, we test the algorithm on nine sequences, and compare the results with five state-of-the-art methods. According to the results, we can conclude that our algorithm is more robust than the other methods.

  19. Sparse Coding and Counting for Robust Visual Tracking

    PubMed Central

    Liu, Risheng; Wang, Jing; Shang, Xiaoke; Wang, Yiyang; Su, Zhixun; Cai, Yu

    2016-01-01

    In this paper, we propose a novel sparse coding and counting method under a Bayesian framework for visual tracking. In contrast to existing methods, the proposed method employs the combination of the L0 and L1 norms to regularize the linear coefficients of an incrementally updated linear basis. The sparsity constraint enables the tracker to effectively handle difficult challenges, such as occlusion or image corruption. To achieve real-time processing, we propose a fast and efficient numerical algorithm for solving the proposed model. Although it is an NP-hard problem, the proposed accelerated proximal gradient (APG) approach is guaranteed to converge to a solution quickly. In addition, we provide a closed-form solution for the combined L0- and L1-regularized representation to obtain better sparsity. Experimental results on challenging video sequences demonstrate that the proposed method achieves state-of-the-art results in both accuracy and speed. PMID:27992474

  20. Total Variation with Overlapping Group Sparsity for Image Deblurring under Impulse Noise

    PubMed Central

    Liu, Gang; Huang, Ting-Zhu; Liu, Jun; Lv, Xiao-Guang

    2015-01-01

    The total variation (TV) regularization method is an effective method for image deblurring that preserves edges. However, the TV-based solutions usually have some staircase effects. In order to alleviate the staircase effects, we propose a new model for restoring blurred images under impulse noise. The model consists of an ℓ1-fidelity term and a TV with overlapping group sparsity (OGS) regularization term. Moreover, we impose a box constraint on the proposed model to obtain more accurate solutions. The solving algorithm for our model is developed under the framework of the alternating direction method of multipliers (ADMM). We use an inner loop which is nested inside the majorization minimization (MM) iteration for the subproblem of the proposed method. Compared with other TV-based methods, numerical results illustrate that the proposed method can significantly improve the restoration quality, both in terms of peak signal-to-noise ratio (PSNR) and relative error (ReE). PMID:25874860

  1. MRI reconstruction with joint global regularization and transform learning.

    PubMed

    Tanc, A Korhan; Eksioglu, Ender M

    2016-10-01

    Sparsity based regularization has been a popular approach to remedy the measurement scarcity in image reconstruction. Recently, sparsifying transforms learned from image patches have been utilized as an effective regularizer for the Magnetic Resonance Imaging (MRI) reconstruction. Here, we infuse additional global regularization terms to the patch-based transform learning. We develop an algorithm to solve the resulting novel cost function, which includes both patchwise and global regularization terms. Extensive simulation results indicate that the introduced mixed approach has improved MRI reconstruction performance, when compared to the algorithms which use either of the patchwise transform learning or global regularization terms alone. Copyright © 2016 Elsevier Ltd. All rights reserved.

  2. SART-Type Half-Threshold Filtering Approach for CT Reconstruction

    PubMed Central

    YU, HENGYONG; WANG, GE

    2014-01-01

    The ℓ1 regularization problem has been widely used to solve sparsity-constrained problems. To enhance the sparsity constraint for better imaging performance, a promising direction is to use the ℓp norm (0 < p < 1) and solve the ℓp minimization problem. Very recently, Xu et al. developed an analytic solution for the ℓ1∕2 regularization via an iterative thresholding operation, which is also referred to as half-threshold filtering. In this paper, we design a simultaneous algebraic reconstruction technique (SART)-type half-threshold filtering framework to solve the computed tomography (CT) reconstruction problem. In the medical imaging field, the discrete gradient transform (DGT) is widely used to define the sparsity. However, the DGT is noninvertible and it cannot be applied to half-threshold filtering for CT reconstruction. To demonstrate the utility of the proposed SART-type half-threshold filtering framework, an emphasis of this paper is to construct a pseudoinverse transform for the DGT. The proposed algorithms are evaluated with numerical and physical phantom data sets. Our results show that the SART-type half-threshold filtering algorithms have great potential to improve the reconstructed image quality from few and noisy projections. They are complementary to the counterparts of the state-of-the-art soft-threshold filtering and hard-threshold filtering. PMID:25530928
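
    For reference, the componentwise half-thresholding operator referred to in this abstract is usually stated in the closed form below (following Xu et al.). The sketch is a minimal numpy rendering of that commonly reported formula, not code from the paper; the commented sart_update call is hypothetical.

```python
import numpy as np

def half_threshold(y, lam):
    """Componentwise half-thresholding operator associated with L1/2
    regularization, in the analytic form commonly attributed to Xu et al."""
    y = np.asarray(y, dtype=float)
    out = np.zeros_like(y)
    thresh = (54.0 ** (1.0 / 3.0) / 4.0) * lam ** (2.0 / 3.0)   # threshold below which values are zeroed
    mask = np.abs(y) > thresh
    phi = np.arccos((lam / 8.0) * (np.abs(y[mask]) / 3.0) ** (-1.5))
    out[mask] = (2.0 / 3.0) * y[mask] * (1.0 + np.cos(2.0 * np.pi / 3.0 - 2.0 * phi / 3.0))
    return out

# A SART-type scheme could interleave an algebraic update with this filter, e.g.
# x = half_threshold(sart_update(x, projections), lam)   # sart_update is hypothetical
```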

  3. SART-Type Half-Threshold Filtering Approach for CT Reconstruction.

    PubMed

    Yu, Hengyong; Wang, Ge

    2014-01-01

    The ℓ1 regularization problem has been widely used to solve sparsity-constrained problems. To enhance the sparsity constraint for better imaging performance, a promising direction is to use the ℓp norm (0 < p < 1) and solve the ℓp minimization problem. Very recently, Xu et al. developed an analytic solution for the ℓ1/2 regularization via an iterative thresholding operation, which is also referred to as half-threshold filtering. In this paper, we design a simultaneous algebraic reconstruction technique (SART)-type half-threshold filtering framework to solve the computed tomography (CT) reconstruction problem. In the medical imaging field, the discrete gradient transform (DGT) is widely used to define the sparsity. However, the DGT is noninvertible and it cannot be applied to half-threshold filtering for CT reconstruction. To demonstrate the utility of the proposed SART-type half-threshold filtering framework, an emphasis of this paper is to construct a pseudoinverse transform for the DGT. The proposed algorithms are evaluated with numerical and physical phantom data sets. Our results show that the SART-type half-threshold filtering algorithms have great potential to improve the reconstructed image quality from few and noisy projections. They are complementary to the counterparts of the state-of-the-art soft-threshold filtering and hard-threshold filtering.

  4. Motion-adaptive spatio-temporal regularization for accelerated dynamic MRI.

    PubMed

    Asif, M Salman; Hamilton, Lei; Brummer, Marijn; Romberg, Justin

    2013-09-01

    Accelerated magnetic resonance imaging techniques reduce signal acquisition time by undersampling k-space. A fundamental problem in accelerated magnetic resonance imaging is the recovery of quality images from undersampled k-space data. Current state-of-the-art recovery algorithms exploit the spatial and temporal structures in underlying images to improve the reconstruction quality. In recent years, compressed sensing theory has helped formulate mathematical principles and conditions that ensure recovery of (structured) sparse signals from undersampled, incoherent measurements. In this article, a new recovery algorithm, motion-adaptive spatio-temporal regularization, is presented that uses spatial and temporal structured sparsity of MR images in the compressed sensing framework to recover dynamic MR images from highly undersampled k-space data. In contrast to existing algorithms, our proposed algorithm models temporal sparsity using motion-adaptive linear transformations between neighboring images. The efficiency of motion-adaptive spatio-temporal regularization is demonstrated with experiments on cardiac magnetic resonance imaging for a range of reduction factors. Results are also compared with k-t FOCUSS with motion estimation and compensation, another recently proposed recovery algorithm for dynamic magnetic resonance imaging. Copyright © 2012 Wiley Periodicals, Inc.

  5. Wavelet-promoted sparsity for non-invasive reconstruction of electrical activity of the heart.

    PubMed

    Cluitmans, Matthijs; Karel, Joël; Bonizzi, Pietro; Volders, Paul; Westra, Ronald; Peeters, Ralf

    2018-05-12

    We investigated a novel sparsity-based regularization method in the wavelet domain of the inverse problem of electrocardiography that aims at preserving the spatiotemporal characteristics of heart-surface potentials. In three normal, anesthetized dogs, electrodes were implanted around the epicardium and body-surface electrodes were attached to the torso. Potential recordings were obtained simultaneously on the body surface and on the epicardium. A CT scan was used to digitize a homogeneous geometry which consisted of the body-surface electrodes and the epicardial surface. A novel multitask elastic-net-based method was introduced to regularize the ill-posed inverse problem. The method simultaneously pursues a sparse wavelet representation in time-frequency and exploits correlations in space. Performance was assessed in terms of quality of reconstructed epicardial potentials, estimated activation and recovery time, and estimated locations of pacing, and compared with performance of Tikhonov zeroth-order regularization. Results in the wavelet domain exhibited higher sparsity than those in the time domain. Epicardial potentials were non-invasively reconstructed with higher accuracy than with Tikhonov zeroth-order regularization (p < 0.05), and recovery times were improved (p < 0.05). No significant improvement was found in terms of activation times and localization of origin of pacing. Next to improved estimation of recovery isochrones, which is important when assessing substrate for cardiac arrhythmias, this novel technique opens potentially powerful opportunities for clinical application, by allowing the choice of wavelet bases that are optimized for specific clinical questions. Graphical Abstract: The inverse problem of electrocardiography is to reconstruct heart-surface potentials from recorded body-surface electrocardiograms (ECGs) and a torso-heart geometry. However, it is ill-posed and solving it requires additional constraints for regularization. We introduce a regularization method that simultaneously pursues a sparse wavelet representation in time-frequency and exploits correlations in space. Our approach reconstructs epicardial (heart-surface) potentials with higher accuracy than common methods. It also improves the reconstruction of recovery isochrones, which is important when assessing substrate for cardiac arrhythmias. This novel technique opens potentially powerful opportunities for clinical application, by allowing the choice of wavelet bases that are optimized for specific clinical questions.

  6. Lq -Lp optimization for multigrid fluorescence tomography of small animals using simplified spherical harmonics

    NASA Astrophysics Data System (ADS)

    Edjlali, Ehsan; Bérubé-Lauzière, Yves

    2018-01-01

    We present the first Lq -Lp optimization scheme for fluorescence tomographic imaging. This is then applied to small animal imaging. Fluorescence tomography is an ill-posed and, in full generality, nonlinear problem that seeks to image the 3D concentration distribution of a fluorescent agent inside a biological tissue. Standard candidates for regularization to deal with the ill-posedness of the image reconstruction problem include L1 and L2 regularization. In this work, a general Lq -Lp regularization framework (Lq discrepancy function - Lp regularization term) is introduced for fluorescence tomographic imaging. A method to calculate the gradient for this general framework is developed, which allows evaluating the performance of different cost functions/regularization schemes in solving the fluorescence tomographic problem. The simplified spherical harmonics approximation is used to accurately model light propagation inside the tissue. Furthermore, a multigrid mesh is utilized to decrease the dimension of the inverse problem and reduce the computational cost of the solution. The inverse problem is solved iteratively using an lm-BFGS quasi-Newton optimization method. The simulations are performed under different scenarios of noisy measurements. These are carried out on the Digimouse numerical mouse model with the kidney being the target organ. The evaluation of the reconstructed images is performed both qualitatively and quantitatively using several metrics including QR, RMSE, CNR, and TVE under rigorous conditions. The best reconstruction results under different scenarios are obtained with an L1.5 -L1 scheme with premature termination of the optimization process. This is in contrast to approaches commonly found in the literature relying on L2 -L2 schemes.
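
    As a generic illustration of an Lq discrepancy with an Lp regularization term, the sketch below evaluates a smoothed cost and gradient for a plain linear model A @ x ≈ b and hands them to a quasi-Newton solver. The operator A, data b, and smoothing constant eps are assumptions made for the example; the paper's actual forward model is the simplified spherical harmonics approximation, which is not reproduced here.

```python
import numpy as np

def lq_lp_cost_grad(x, A, b, q=1.5, p=1.0, alpha=1e-2, eps=1e-8):
    """Smoothed Lq data-discrepancy plus Lp regularization and its gradient
    for a linear model A @ x ≈ b (illustrative stand-in for a tomographic
    forward operator)."""
    r = A @ x - b
    # Smooth |.|^q and |.|^p with a small eps so the gradient is defined at zero.
    cost = ((r ** 2 + eps) ** (q / 2.0)).sum() / q \
         + alpha * ((x ** 2 + eps) ** (p / 2.0)).sum() / p
    grad_data = A.T @ (r * (r ** 2 + eps) ** (q / 2.0 - 1.0))
    grad_reg = alpha * x * (x ** 2 + eps) ** (p / 2.0 - 1.0)
    return cost, grad_data + grad_reg

# Such a cost/gradient pair can be passed to a quasi-Newton solver, e.g.
# scipy.optimize.minimize(lambda x: lq_lp_cost_grad(x, A, b), x0, jac=True, method="L-BFGS-B")
```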

  7. IFSM fractal image compression with entropy and sparsity constraints: A sequential quadratic programming approach

    NASA Astrophysics Data System (ADS)

    Kunze, Herb; La Torre, Davide; Lin, Jianyi

    2017-01-01

    We consider the inverse problem associated with IFSM: Given a target function f, find an IFSM such that its fixed point f̄ is sufficiently close to f in the Lp distance. Forte and Vrscay [1] showed how to reduce this problem to a quadratic optimization model. In this paper, we extend the collage-based method developed by Kunze, La Torre and Vrscay ([2][3][4]), by proposing the minimization of the 1-norm instead of the 0-norm. In fact, optimization problems involving the 0-norm are combinatorial in nature, and hence in general NP-hard. To overcome these difficulties, we introduce the 1-norm and propose a Sequential Quadratic Programming algorithm to solve the corresponding inverse problem. As in Kunze, La Torre and Vrscay [3], in our formulation the minimization of collage error is treated as a multi-criteria problem that includes three different and conflicting criteria, i.e., collage error, entropy and sparsity. This multi-criteria program is solved by means of a scalarization technique which reduces the model to a single-criterion program by combining all objective functions with different trade-off weights. The results of some numerical computations are presented.

  8. Sparse regularization for force identification using dictionaries

    NASA Astrophysics Data System (ADS)

    Qiao, Baijie; Zhang, Xingwu; Wang, Chenxi; Zhang, Hang; Chen, Xuefeng

    2016-04-01

    The classical function expansion method based on minimizing the l2-norm of the response residual employs various basis functions to represent the unknown force. Its difficulty lies in determining the optimum number of basis functions. Considering the sparsity of the force in the time domain or in another basis space, we develop a general sparse regularization method based on minimizing the l1-norm of the coefficient vector of basis functions. The number of basis functions is adaptively determined by minimizing the number of nonzero components in the coefficient vector during the sparse regularization process. First, according to the profile of the unknown force, the dictionary composed of basis functions is determined. Second, a sparsity convex optimization model for force identification is constructed. Third, given the transfer function and the operational response, sparse reconstruction by separable approximation (SpaRSA) is developed to solve the sparse regularization problem of force identification. Finally, experiments including identification of impact and harmonic forces are conducted on a cantilever thin plate structure to illustrate the effectiveness and applicability of SpaRSA. Besides the Dirac dictionary, three other sparse dictionaries including Db6 wavelets, Sym4 wavelets and cubic B-spline functions can also accurately identify both the single and double impact forces from highly noisy responses in a sparse representation frame. The discrete cosine functions can also successfully reconstruct the harmonic forces including the sinusoidal, square and triangular forces. Conversely, the traditional Tikhonov regularization method with the L-curve criterion fails to identify both the impact and harmonic forces in these cases.
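
    The underlying l1 model, min over c of 0.5*||H D c - y||^2 + lam*||c||_1 with transfer matrix H, dictionary D, and coefficient vector c, can be sketched with a plain proximal-gradient (ISTA) loop as below. This is a simple stand-in for SpaRSA, not the paper's implementation; the toy transfer matrix and Dirac dictionary are assumptions for illustration only.

```python
import numpy as np

def ista_force_identification(H, D, y, lam=0.1, n_iter=500):
    """Solve min_c 0.5*||H @ D @ c - y||^2 + lam*||c||_1 by proximal gradient;
    the identified force is D @ c."""
    G = H @ D                                   # combined transfer/dictionary operator
    step = 1.0 / np.linalg.norm(G, 2) ** 2      # 1 / Lipschitz constant of the gradient
    c = np.zeros(G.shape[1])
    for _ in range(n_iter):
        z = c - step * (G.T @ (G @ c - y))                          # gradient step
        c = np.sign(z) * np.maximum(np.abs(z) - lam * step, 0.0)    # soft-thresholding step
    return D @ c, c

# Toy usage: identify an impulsive force from a noisy response with a Dirac dictionary.
rng = np.random.default_rng(1)
n = 200
H = np.tril(rng.standard_normal((n, n)) * 0.05 + np.eye(n))  # toy causal transfer matrix
D = np.eye(n)                                                 # Dirac (identity) dictionary
f_true = np.zeros(n); f_true[60] = 5.0                        # single impact
y = H @ f_true + 0.01 * rng.standard_normal(n)
f_est, coeffs = ista_force_identification(H, D, y, lam=0.05)
```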

  9. Limited angle CT reconstruction by simultaneous spatial and Radon domain regularization based on TV and data-driven tight frame

    NASA Astrophysics Data System (ADS)

    Zhang, Wenkun; Zhang, Hanming; Wang, Linyuan; Cai, Ailong; Li, Lei; Yan, Bin

    2018-02-01

    Limited angle computed tomography (CT) reconstruction is widely performed in medical diagnosis and industrial testing because of the size of objects, engine/armor inspection requirements, and limited scan flexibility. Limited angle reconstruction necessitates the use of optimization-based methods that utilize additional sparse priors. However, most conventional methods solely exploit sparsity priors in the spatial domain. When the CT projections suffer from serious data deficiency or various noises, obtaining reconstructed images that meet the requirement of quality becomes difficult and challenging. To solve this problem, this paper developed an adaptive reconstruction method for the limited angle CT problem. The proposed method simultaneously uses a spatial and Radon domain regularization model based on total variation (TV) and a data-driven tight frame. The data-driven tight frame, derived from the wavelet transform, aims at exploiting sparsity priors of the sinogram in the Radon domain. Unlike existing works that utilize a pre-constructed sparse transformation, the framelets of the data-driven regularization model can be adaptively learned from the latest projection data in the process of iterative reconstruction to provide optimal sparse approximations for a given sinogram. At the same time, an effective alternating direction method is designed to solve the simultaneous spatial and Radon domain regularization model. The experiments for both simulated and real data demonstrate that the proposed algorithm shows better performance in artifact suppression and detail preservation than the algorithms solely using a regularization model of the spatial domain. Quantitative evaluations of the results also indicate that the proposed algorithm, which applies a learning strategy, performs better than dual-domain algorithms without a learned regularization model.

  10. FOREWORD: Tackling inverse problems in a Banach space environment: from theory to applications Tackling inverse problems in a Banach space environment: from theory to applications

    NASA Astrophysics Data System (ADS)

    Schuster, Thomas; Hofmann, Bernd; Kaltenbacher, Barbara

    2012-10-01

    Inverse problems can usually be modelled as operator equations in infinite-dimensional spaces with a forward operator acting between Hilbert or Banach spaces—a formulation which quite often also serves as the basis for defining and analyzing solution methods. The additional amount of structure and geometric interpretability provided by the concept of an inner product has rendered these methods amenable to a convergence analysis, a fact which has led to a rigorous and comprehensive study of regularization methods in Hilbert spaces over the last three decades. However, for numerous problems such as x-ray diffractometry, certain inverse scattering problems and a number of parameter identification problems in PDEs, the reasons for using a Hilbert space setting seem to be based on conventions rather than an appropriate and realistic model choice, so often a Banach space setting would be closer to reality. Furthermore, non-Hilbertian regularization and data fidelity terms incorporating a priori information on solution and noise, such as general Lp-norms, TV-type norms, or the Kullback-Leibler divergence, have recently become very popular. These facts have motivated intensive investigations on regularization methods in Banach spaces, a topic which has emerged as a highly active research field within the area of inverse problems. Meanwhile, some of the most well-known regularization approaches, such as Tikhonov-type methods requiring the solution of extremal problems, and iterative ones like the Landweber method, the Gauss-Newton method, as well as the approximate inverse method, have been investigated for linear and nonlinear operator equations in Banach spaces. Convergence with rates has been proven and conditions on the solution smoothness and on the structure of nonlinearity have been formulated. Still, beyond the existing results a large number of challenging open questions have arisen, due to the more involved handling of general Banach spaces and the larger variety of concrete instances with special properties. The aim of this special section is to provide a forum for highly topical ongoing work in the area of regularization in Banach spaces, its numerics and its applications. Indeed, we have been lucky enough to obtain a number of excellent papers both from colleagues who have previously been contributing to this topic and from researchers entering the field due to its relevance in practical inverse problems. We would like to thank all contributors for enabling us to present a high-quality collection of papers on topics ranging from various aspects of regularization via efficient numerical solution to applications in PDE models. We give a brief overview of the contributions included in this issue (here ordered alphabetically by first author). In their paper, Iterative regularization with general penalty term—theory and application to L1 and TV regularization, Radu Bot and Torsten Hein provide an extension of the Landweber iteration for linear operator equations in Banach space to general operators in place of the inverse duality mapping, which corresponds to the use of general regularization functionals in variational regularization. The L∞ topology in data space corresponds to the frequently occurring situation of uniformly distributed data noise.
A numerically efficient solution of the resulting Tikhonov regularization problem via a Moreau-Yosida approximation and a semismooth Newton method, along with a δ-free regularization parameter choice rule, is the topic of the paper L∞ fitting for inverse problems with uniform noise by Christian Clason. Extension of convergence rates results from classical source conditions to their generalization via variational inequalities with a priori and a posteriori stopping rules is the main contribution of the paper Regularization of linear ill-posed problems by the augmented Lagrangian method and variational inequalities by Klaus Frick and Markus Grasmair, again in the context of some iterative method. A powerful tool for proving convergence rates of Tikhonov-type, but also other, regularization methods in Banach spaces is the use of variational-inequality assumptions that combine conditions on solution smoothness (i.e., source conditions in the Hilbert space case) and nonlinearity of the forward operator. In Parameter choice in Banach space regularization under variational inequalities, Bernd Hofmann and Peter Mathé provide results with general error measures and especially study the question of regularization parameter choice. Daijun Jiang, Hui Feng, and Jun Zou consider Banach space ideas in the context of an application problem in their paper Convergence rates of Tikhonov regularizations for parameter identification in a parabolic-elliptic system, namely the identification of a distributed diffusion coefficient in a coupled elliptic-parabolic system. In particular, they show convergence rates of Lp-H1 (variational) regularization for the application under consideration via the use and verification of certain source and nonlinearity conditions. In computational practice, the Lp norm with p close to one is often used as a substitute for the actual sparsity-promoting L1 norm. In Norm sensitivity of sparsity regularization with respect to p, Kamil S Kazimierski, Peter Maass and Robin Strehlow consider the question of how sensitive the Tikhonov-regularized solution is with respect to p. They do so by computing the derivative via the implicit function theorem, particularly at the crucial value, p=1. Another iterative regularization method in Banach space is considered by Qinian Jin and Linda Stals in Nonstationary iterated Tikhonov regularization for ill-posed problems in Banach spaces. Using a variational formulation and under some smoothness and convexity assumptions on the preimage space, they extend the convergence analysis of the well-known iterative Tikhonov method for linear problems in Hilbert space to a more general Banach space framework. Systems of linear or nonlinear operators can be efficiently treated by cyclic iterations, thus several variants of gradient and Newton-type Kaczmarz methods have already been studied in the Hilbert space setting. Antonio Leitão and M Marques Alves in their paper On Landweber-Kaczmarz methods for regularizing systems of ill-posed equations in Banach spaces carry out an extension to Banach spaces for the fundamental Landweber version. The impact of perturbations in the evaluation of the forward operator and its derivative on the convergence behaviour of regularization methods is an issue of high practical relevance. 
It is treated in the paper Convergence rates analysis of Tikhonov regularization for nonlinear ill-posed problems with noisy operators by Shuai Lu and Jens Flemming for variational regularization of nonlinear problems in Banach spaces. In The approximate inverse in action: IV. Semi-discrete equations in a Banach space setting, Thomas Schuster, Andreas Rieder and Frank Schöpfer extend the concept of approximate inverse to the practically relevant situation of finitely many measurements and a general smooth and convex Banach space as preimage space. They devise two approaches for computing the reconstruction kernels required in the method and provide convergence and regularization results. Frank Werner and Thorsten Hohage in Convergence rates in expectation for Tikhonov-type regularization of inverse problems with Poisson data prove convergence rates results for variational regularization with general convex regularization term and the Kullback-Leibler distance as data fidelity term by combining a new result on Poisson-distributed data with a deterministic rates analysis. Finally, we would like to thank the Inverse Problems team, especially Joanna Evangelides and Chris Wileman, for their extraordinarily smooth and productive cooperation, as well as Alfred K Louis for his kind support of our initiative.

  11. Using Bayesian variable selection to analyze regular resolution IV two-level fractional factorial designs

    DOE PAGES

    Chipman, Hugh A.; Hamada, Michael S.

    2016-06-02

    Regular two-level fractional factorial designs have complete aliasing in which the associated columns of multiple effects are identical. Here, we show how Bayesian variable selection can be used to analyze experiments that use such designs. In addition to sparsity and hierarchy, Bayesian variable selection naturally incorporates heredity. This prior information is used to identify the most likely combinations of active terms. We also demonstrate the method on simulated and real experiments.

  12. Using Bayesian variable selection to analyze regular resolution IV two-level fractional factorial designs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chipman, Hugh A.; Hamada, Michael S.

    Regular two-level fractional factorial designs have complete aliasing in which the associated columns of multiple effects are identical. Here, we show how Bayesian variable selection can be used to analyze experiments that use such designs. In addition to sparsity and hierarchy, Bayesian variable selection naturally incorporates heredity. This prior information is used to identify the most likely combinations of active terms. We also demonstrate the method on simulated and real experiments.

  13. Accelerated dynamic EPR imaging using fast acquisition and compressive recovery

    NASA Astrophysics Data System (ADS)

    Ahmad, Rizwan; Samouilov, Alexandre; Zweier, Jay L.

    2016-12-01

    Electron paramagnetic resonance (EPR) allows quantitative imaging of tissue redox status, which provides important information about ischemic syndromes, cancer and other pathologies. For continuous wave EPR imaging, however, poor signal-to-noise ratio and low acquisition efficiency limit its ability to image dynamic processes in vivo including tissue redox, where conditions can change rapidly. Here, we present a data acquisition and processing framework that couples fast acquisition with compressive sensing-inspired image recovery to enable EPR-based redox imaging with high spatial and temporal resolutions. The fast acquisition (FA) allows collecting more, albeit noisier, projections in a given scan time. The composite regularization based processing method, called spatio-temporal adaptive recovery (STAR), not only exploits sparsity in multiple representations of the spatio-temporal image but also adaptively adjusts the regularization strength for each representation based on its inherent level of the sparsity. As a result, STAR adjusts to the disparity in the level of sparsity across multiple representations, without introducing any tuning parameter. Our simulation and phantom imaging studies indicate that a combination of fast acquisition and STAR (FASTAR) enables high-fidelity recovery of volumetric image series, with each volumetric image employing less than 10 s of scan. In addition to image fidelity, the time constants derived from FASTAR also match closely to the ground truth even when a small number of projections are used for recovery. This development will enhance the capability of EPR to study fast dynamic processes that cannot be investigated using existing EPR imaging techniques.

  14. A New Pansharpening Method Based on Spatial and Spectral Sparsity Priors.

    PubMed

    He, Xiyan; Condat, Laurent; Bioucas-Diaz, Jose; Chanussot, Jocelyn; Xia, Junshi

    2014-06-27

    The development of multisensor systems in recent years has led to a great increase in the amount of available remote sensing data. Image fusion techniques aim at inferring high-quality images of a given area from degraded versions of the same area obtained by multiple sensors. This paper focuses on pansharpening, which is the inference of a high spatial resolution multispectral image from two degraded versions with complementary spectral and spatial resolution characteristics: a) a low spatial resolution multispectral image; and b) a high spatial resolution panchromatic image. We introduce a new variational model based on spatial and spectral sparsity priors for the fusion. In the spectral domain we encourage low-rank structure, whereas in the spatial domain we promote sparsity on the local differences. Given the fact that both panchromatic and multispectral images are integrations of the underlying continuous spectra using different channel responses, we propose to exploit appropriate regularizations based on both spatial and spectral links between panchromatic and the fused multispectral images. A weighted version of the vector Total Variation (TV) norm of the data matrix is employed to align the spatial information of the fused image with that of the panchromatic image. With regard to spectral information, two different types of regularization are proposed to promote a soft constraint on the linear dependence between the panchromatic and the fused multispectral images. The first one estimates directly the linear coefficients from the observed panchromatic and low resolution multispectral images by Linear Regression (LR) while the second one employs the Principal Component Pursuit (PCP) to obtain a robust recovery of the underlying low-rank structure. We also show that the two regularizers are strongly related. The basic idea of both regularizers is that the fused image should have low rank and preserve edge locations. We use a variation of the recently proposed Split Augmented Lagrangian Shrinkage (SALSA) algorithm to effectively solve the proposed variational formulations. Experimental results on simulated and real remote sensing images show the effectiveness of the proposed pansharpening method compared to the state-of-the-art.

  15. Sparse representation and dictionary learning penalized image reconstruction for positron emission tomography.

    PubMed

    Chen, Shuhang; Liu, Huafeng; Shi, Pengcheng; Chen, Yunmei

    2015-01-21

    Accurate and robust reconstruction of the radioactivity concentration is of great importance in positron emission tomography (PET) imaging. Given the Poisson nature of photon-counting measurements, we present a reconstruction framework that integrates a sparsity penalty on a dictionary into a maximum likelihood estimator. Patch-sparsity on a dictionary provides the regularization for our effort, and iterative procedures are used to solve the maximum likelihood function formulated on Poisson statistics. Specifically, in our formulation, a dictionary could be trained on CT images, to provide intrinsic anatomical structures for the reconstructed images, or adaptively learned from the noisy measurements of PET. The accuracy of the strategy is demonstrated with very promising results from Monte Carlo simulations and real data.

  16. Distributed Unmixing of Hyperspectral Data with Sparsity Constraint

    NASA Astrophysics Data System (ADS)

    Khoshsokhan, S.; Rajabi, R.; Zayyani, H.

    2017-09-01

    Spectral unmixing (SU) is a data processing problem in hyperspectral remote sensing. The significant challenge in the SU problem is how to identify endmembers and their weights accurately. For estimation of the signature and fractional abundance matrices in a blind problem, nonnegative matrix factorization (NMF) and its developments are used widely in the SU problem. One of the constraints added to NMF is a sparsity constraint regularized by the L1/2 norm. In this paper, a new algorithm based on distributed optimization is used for spectral unmixing. In the proposed algorithm, a network including single-node clusters is employed. Each pixel in the hyperspectral image is considered as a node in this network. The distributed unmixing with sparsity constraint is optimized with a diffusion LMS strategy, and the update equations for the fractional abundance and signature matrices are then obtained. Simulation results based on defined performance metrics illustrate the advantage of the proposed algorithm in spectral unmixing of hyperspectral data compared with other methods. The results show that the AAD and SAD of the proposed approach are improved by about 6 and 27 percent, respectively, compared with distributed unmixing at an SNR of 25 dB.
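
    For context, the L1/2-sparsity-constrained NMF model that this record builds on is often solved with the multiplicative update rule sketched below (in the form widely reported in the hyperspectral unmixing literature, e.g. by Qian et al.). It is a minimal, centralized sketch under assumed variable names (X data, W signatures, S abundances), not the distributed diffusion-LMS algorithm of the record.

```python
import numpy as np

def l_half_nmf(X, n_end, lam=0.1, n_iter=300, eps=1e-9):
    """Multiplicative-update NMF with an L1/2 sparsity penalty on the abundance
    matrix S, so that X ≈ W @ S with W >= 0 (endmember signatures) and S >= 0."""
    bands, pixels = X.shape
    rng = np.random.default_rng(0)
    W = rng.random((bands, n_end)) + eps
    S = rng.random((n_end, pixels)) + eps
    for _ in range(n_iter):
        W *= (X @ S.T) / (W @ S @ S.T + eps)
        S *= (W.T @ X) / (W.T @ W @ S + 0.5 * lam * S ** -0.5 + eps)
        S = np.maximum(S, eps)   # keep S strictly positive for the S**-0.5 term
    return W, S
```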

  17. Accelerated dynamic EPR imaging using fast acquisition and compressive recovery.

    PubMed

    Ahmad, Rizwan; Samouilov, Alexandre; Zweier, Jay L

    2016-12-01

    Electron paramagnetic resonance (EPR) allows quantitative imaging of tissue redox status, which provides important information about ischemic syndromes, cancer and other pathologies. For continuous wave EPR imaging, however, poor signal-to-noise ratio and low acquisition efficiency limit its ability to image dynamic processes in vivo including tissue redox, where conditions can change rapidly. Here, we present a data acquisition and processing framework that couples fast acquisition with compressive sensing-inspired image recovery to enable EPR-based redox imaging with high spatial and temporal resolutions. The fast acquisition (FA) allows collecting more, albeit noisier, projections in a given scan time. The composite regularization based processing method, called spatio-temporal adaptive recovery (STAR), not only exploits sparsity in multiple representations of the spatio-temporal image but also adaptively adjusts the regularization strength for each representation based on its inherent level of the sparsity. As a result, STAR adjusts to the disparity in the level of sparsity across multiple representations, without introducing any tuning parameter. Our simulation and phantom imaging studies indicate that a combination of fast acquisition and STAR (FASTAR) enables high-fidelity recovery of volumetric image series, with each volumetric image employing less than 10 s of scan. In addition to image fidelity, the time constants derived from FASTAR also match closely to the ground truth even when a small number of projections are used for recovery. This development will enhance the capability of EPR to study fast dynamic processes that cannot be investigated using existing EPR imaging techniques. Copyright © 2016 Elsevier Inc. All rights reserved.

  18. Efficient and sparse feature selection for biomedical text classification via the elastic net: Application to ICU risk stratification from nursing notes.

    PubMed

    Marafino, Ben J; Boscardin, W John; Dudley, R Adams

    2015-04-01

    Sparsity is often a desirable property of statistical models, and various feature selection methods exist so as to yield sparser and interpretable models. However, their application to biomedical text classification, particularly to mortality risk stratification among intensive care unit (ICU) patients, has not been thoroughly studied. To develop and characterize sparse classifiers based on the free text of nursing notes in order to predict ICU mortality risk and to discover text features most strongly associated with mortality. We selected nursing notes from the first 24h of ICU admission for 25,826 adult ICU patients from the MIMIC-II database. We then developed a pair of stochastic gradient descent-based classifiers with elastic-net regularization. We also studied the performance-sparsity tradeoffs of both classifiers as their regularization parameters were varied. The best-performing classifier achieved a 10-fold cross-validated AUC of 0.897 under the log loss function and full L2 regularization, while full L1 regularization used just 0.00025% of candidate input features and resulted in an AUC of 0.889. Using the log loss (range of AUCs 0.889-0.897) yielded better performance compared to the hinge loss (0.850-0.876), but the latter yielded even sparser models. Most features selected by both classifiers appear clinically relevant and correspond to predictors already present in existing ICU mortality models. The sparser classifiers were also able to discover a number of informative - albeit nonclinical - features. The elastic-net-regularized classifiers perform reasonably well and are capable of reducing the number of features required by over a thousandfold, with only a modest impact on performance. Copyright © 2015 Elsevier Inc. All rights reserved.
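
    A stochastic gradient descent classifier with elastic-net regularization of the kind described above can be sketched with scikit-learn as follows. The note texts and labels are hypothetical placeholders, and the parameter values are illustrative rather than those tuned in the study.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import SGDClassifier
from sklearn.pipeline import make_pipeline

# Hypothetical data: free-text nursing notes and 0/1 mortality labels.
notes = ["pt alert and oriented, tolerating diet", "unresponsive, hypotensive, on pressors"]
died = [0, 1]

clf = make_pipeline(
    TfidfVectorizer(min_df=1),
    SGDClassifier(loss="log_loss",       # logistic loss ("log" on older scikit-learn)
                  penalty="elasticnet",  # mix of L1 and L2 penalties
                  alpha=1e-4,            # overall regularization strength
                  l1_ratio=0.5,          # 0 = pure L2 (dense), 1 = pure L1 (sparse)
                  max_iter=1000),
)
clf.fit(notes, died)
```

    Sweeping l1_ratio between 0 and 1 reproduces the performance-sparsity trade-off the abstract describes: larger values drive more feature weights exactly to zero.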

  19. Stationary wavelet transform for under-sampled MRI reconstruction.

    PubMed

    Kayvanrad, Mohammad H; McLeod, A Jonathan; Baxter, John S H; McKenzie, Charles A; Peters, Terry M

    2014-12-01

    In addition to coil sensitivity data (parallel imaging), sparsity constraints are often used as an additional lp-penalty for under-sampled MRI reconstruction (compressed sensing). Penalizing the traditional decimated wavelet transform (DWT) coefficients, however, results in visual pseudo-Gibbs artifacts, some of which are attributed to the lack of translation invariance of the wavelet basis. We show that these artifacts can be greatly reduced by penalizing the translation-invariant stationary wavelet transform (SWT) coefficients. This holds with various additional reconstruction constraints, including coil sensitivity profiles and total variation. Additionally, SWT reconstructions result in lower error values and faster convergence compared to DWT. These concepts are illustrated with extensive experiments on in vivo MRI data with particular emphasis on multiple-channel acquisitions. Copyright © 2014 Elsevier Inc. All rights reserved.
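
    To illustrate the core idea of penalizing translation-invariant SWT coefficients rather than decimated DWT coefficients, the sketch below soft-thresholds the detail bands of a 2-D stationary wavelet transform using PyWavelets. It is a denoising toy, not the paper's undersampled reconstruction; the wavelet, level, and threshold are arbitrary choices.

```python
import numpy as np
import pywt

def swt_soft_denoise(img, wavelet="db4", level=2, thresh=0.05):
    """Soft-threshold stationary wavelet transform (SWT) detail coefficients of a
    2-D image. Because the SWT is undecimated, thresholding it avoids some of the
    pseudo-Gibbs artifacts seen when thresholding decimated DWT coefficients."""
    coeffs = pywt.swt2(img, wavelet, level=level)
    new_coeffs = []
    for approx, details in coeffs:
        details = tuple(pywt.threshold(d, thresh, mode="soft") for d in details)
        new_coeffs.append((approx, details))
    return pywt.iswt2(new_coeffs, wavelet)

# Toy usage on a noisy 256x256 image (dimensions must be divisible by 2**level).
rng = np.random.default_rng(0)
img = np.zeros((256, 256)); img[96:160, 96:160] = 1.0
denoised = swt_soft_denoise(img + 0.1 * rng.standard_normal(img.shape))
```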

  20. Leveraging tagging and rating for recommendation: RMF meets weighted diffusion on tripartite graphs

    NASA Astrophysics Data System (ADS)

    Li, Jianguo; Tang, Yong; Chen, Jiemin

    2017-10-01

    Recommender systems (RSs) have been a widely exploited approach to solving the information overload problem. However, the performance is still limited due to the extreme sparsity of the rating data. With the popularity of Web 2.0, the social tagging system provides more external information to improve recommendation accuracy. Although some existing approaches combine the matrix factorization models with the tag co-occurrence and context of tags, they neglect the issue of tag sparsity that would also result in inaccurate recommendations. Consequently, in this paper, we propose a novel hybrid collaborative filtering model named WUDiff_RMF, which improves the regularized matrix factorization (RMF) model by integrating the Weighted User-Diffusion-based CF algorithm (WUDiff), which obtains the information of similar users from the weighted tripartite user-item-tag graph. This model aims to capture the degree correlation of the user-item-tag tripartite network to enhance the performance of recommendation. Experiments conducted on four real-world datasets demonstrate that our approach performs significantly better than widely used methods in terms of recommendation accuracy. Moreover, results show that WUDiff_RMF can alleviate the data sparsity, especially in the circumstance that users have made few ratings and few tags.

  1. SU-E-T-446: Group-Sparsity Based Angle Generation Method for Beam Angle Optimization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gao, H

    2015-06-15

    Purpose: This work aims to develop an effective algorithm for beam angle optimization (BAO), with the emphasis on enabling further improvement from existing treatment-dependent templates based on clinical knowledge and experience. Methods: The proposed BAO algorithm utilizes a priori beam angle templates as the initial guess, and iteratively generates angular updates for this initial set, namely the angle generation method, with improved dose conformality that is quantitatively measured by the objective function. That is, during each iteration, we select "the test angle" in the initial set, and use group-sparsity based fluence map optimization to identify "the candidate angle" for updating "the test angle", for which all the angles in the initial set except "the test angle", namely "the fixed set", are set free, i.e., with no group-sparsity penalty, and the rest of the angles including "the test angle" during this iteration are in "the working set". Then "the candidate angle" is selected with the smallest objective function value from the angles in "the working set" with locally maximal group sparsity, and replaces "the test angle" if "the fixed set" with "the candidate angle" has a smaller objective function value by solving the standard fluence map optimization (with no group-sparsity regularization). Similarly, other angles in the initial set are in turn selected as "the test angle" for angular updates, and this chain of updates is iterated until no further new angular update is identified for a full loop. Results: The tests using the MGH public prostate dataset demonstrated the effectiveness of the proposed BAO algorithm. For example, the optimized angular set from the proposed BAO algorithm was better than the MGH template. Conclusion: A new BAO algorithm is proposed based on the angle generation method via group sparsity, with improved dose conformality from the given template. Hao Gao was partially supported by the NSFC (#11405105), the 973 Program (#2015CB856000) and the Shanghai Pujiang Talent Program (#14PJ1404500).
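
    The group-sparsity ingredient of this approach can be illustrated by the block soft-thresholding (group-lasso proximal) step: beamlet weights are grouped per candidate angle, and groups whose joint norm falls below the penalty are switched off entirely, which is how whole angles get deselected. The sketch below is a generic illustration under assumed names (fluence vector, index groups, threshold tau), not the full iterative angle-generation procedure of the abstract.

```python
import numpy as np

def prox_group_sparsity(fluence, groups, tau):
    """Block soft-thresholding: each entry of 'groups' holds the indices of the
    beamlets belonging to one candidate beam angle; groups whose joint L2 norm
    falls below tau are zeroed out, deselecting that angle."""
    out = fluence.copy()
    for idx in groups:
        norm = np.linalg.norm(fluence[idx])
        scale = max(0.0, 1.0 - tau / norm) if norm > 0 else 0.0
        out[idx] = scale * fluence[idx]
    return out

# Toy usage: 4 candidate angles with 10 beamlets each; two angles carry little fluence.
rng = np.random.default_rng(0)
x = rng.random(40) * np.repeat([1.0, 0.1, 0.8, 0.05], 10)
groups = [np.arange(i * 10, (i + 1) * 10) for i in range(4)]
x_sparse = prox_group_sparsity(x, groups, tau=1.0)
active_angles = [i for i, idx in enumerate(groups) if np.any(x_sparse[idx] > 0)]
```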

  2. Lp-Norm Regularization in Volumetric Imaging of Cardiac Current Sources

    PubMed Central

    Rahimi, Azar; Xu, Jingjia; Wang, Linwei

    2013-01-01

    Advances in computer vision have substantially improved our ability to analyze the structure and mechanics of the heart. In comparison, our ability to observe and analyze cardiac electrical activities is much more limited. Progress in computationally reconstructing cardiac current sources from noninvasive voltage data sensed on the body surface has been hindered by the ill-posedness and the lack of a unique solution of the reconstruction problem. Common L2- and L1-norm regularizations tend to produce a solution that is either too diffused or too scattered to reflect the complex spatial structure of current source distribution in the heart. In this work, we propose a general regularization with an Lp-norm (1 < p < 2) constraint to bridge the gap and balance between an overly smeared and an overly focal solution in cardiac source reconstruction. In a set of phantom experiments, we demonstrate the superiority of the proposed Lp-norm method over its L1 and L2 counterparts in imaging cardiac current sources with increasing extents. Through computer-simulated and real-data experiments, we further demonstrate the feasibility of the proposed method in imaging the complex structure of the excitation wavefront, as well as current sources distributed along the postinfarction scar border. This ability to preserve the spatial structure of source distribution is important for revealing the potential disruption to the normal heart excitation. PMID:24348735
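
    Lp-norm penalties with 1 < p < 2 are commonly handled by iteratively reweighted least squares (IRLS), where each iteration solves a weighted ridge system. The sketch below shows that generic idea for min ||A x - b||^2 + lam * sum(|x_i|^p); the matrix A, data b, and smoothing eps are assumptions for illustration and not the paper's lead-field model or solver.

```python
import numpy as np

def lp_irls(A, b, p=1.5, lam=1e-2, n_iter=30, eps=1e-8):
    """Iteratively reweighted least squares for
    min_x ||A @ x - b||_2^2 + lam * sum(|x_i|^p), with 1 < p < 2."""
    x = np.linalg.lstsq(A, b, rcond=None)[0]         # least-squares initialization
    AtA, Atb = A.T @ A, A.T @ b
    for _ in range(n_iter):
        w = p * (x ** 2 + eps) ** (p / 2.0 - 1.0)    # reweighting from the current iterate
        x = np.linalg.solve(2.0 * AtA + lam * np.diag(w), 2.0 * Atb)
    return x

# Toy usage: an underdetermined source-imaging-like system with a few active sources.
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100))
x_true = np.zeros(100); x_true[[10, 11, 12, 60]] = [1.0, 0.8, 0.6, -1.2]
b = A @ x_true + 0.01 * rng.standard_normal(40)
x_hat = lp_irls(A, b, p=1.5, lam=0.5)
```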

  3. Ordinal feature selection for iris and palmprint recognition.

    PubMed

    Sun, Zhenan; Wang, Libin; Tan, Tieniu

    2014-09-01

    Ordinal measures have been demonstrated as an effective feature representation model for iris and palmprint recognition. However, ordinal measures are a general concept of image analysis, and numerous variants with different parameter settings, such as location, scale, orientation, and so on, can be derived to construct a huge feature space. This paper proposes a novel optimization formulation for ordinal feature selection with successful applications to both iris and palmprint recognition. The objective function of the proposed feature selection method has two parts, i.e., misclassification error of intra- and interclass matching samples and weighted sparsity of ordinal feature descriptors. Therefore, the feature selection aims to achieve an accurate and sparse representation of ordinal measures. The optimization is subject to a number of linear inequality constraints, which require that all intra- and interclass matching pairs are well separated with a large margin. Ordinal feature selection is formulated as a linear programming (LP) problem so that a solution can be efficiently obtained even on a large-scale feature pool and training database. Extensive experimental results demonstrate that the proposed LP formulation is advantageous over existing feature selection methods, such as mRMR, ReliefF, Boosting, and Lasso for biometric recognition, reporting state-of-the-art accuracy on CASIA and PolyU databases.

  4. A new approach to global seismic tomography based on regularization by sparsity in a novel 3D spherical wavelet basis

    NASA Astrophysics Data System (ADS)

    Loris, Ignace; Simons, Frederik J.; Daubechies, Ingrid; Nolet, Guust; Fornasier, Massimo; Vetter, Philip; Judd, Stephen; Voronin, Sergey; Vonesch, Cédric; Charléty, Jean

    2010-05-01

    Global seismic wavespeed models are routinely parameterized in terms of spherical harmonics, networks of tetrahedral nodes, rectangular voxels, or spherical splines. Up to now, Earth model parametrizations by wavelets on the three-dimensional ball remain uncommon. Here we propose such a procedure with the following three goals in mind: (1) The multiresolution character of a wavelet basis allows for the models to be represented with an effective spatial resolution that varies as a function of position within the Earth. (2) This property can be used to great advantage in the regularization of seismic inversion schemes by seeking the most sparse solution vector, in wavelet space, through iterative minimization of a combination of the ℓ2 (to fit the data) and ℓ1 norms (to promote sparsity in wavelet space). (3) With the continuing increase in high-quality seismic data, our focus is also on numerical efficiency and the ability to use parallel computing in reconstructing the model. In this presentation we propose a new wavelet basis to take advantage of these three properties. To form the numerical grid we begin with a surface tessellation known as the 'cubed sphere', a construction popular in fluid dynamics and computational seismology, coupled with a semi-regular radial subdivision that honors the major seismic discontinuities between the core-mantle boundary and the surface. This mapping first divides the volume of the mantle into six portions. In each 'chunk', two angular and one radial variable are used for parametrization. In the new variables, standard 'Cartesian' algorithms can more easily be used to perform the wavelet transform (or other common transforms). Edges between chunks are handled by special boundary filters. We highlight the benefits of this construction and use it to analyze the information present in several published seismic compressional-wavespeed models of the mantle, paying special attention to the statistics of wavelet and scaling coefficients across scales. We also focus on the likely gains of future inversions of finite-frequency seismic data using a sparsity-promoting penalty in combination with our new wavelet approach.

  5. Glycemic control and pregnancy outcomes in women with type 1 diabetes mellitus using lispro versus regular insulin: a systematic review and meta-analysis.

    PubMed

    González Blanco, Cintia; Chico Ballesteros, Ana; Gich Saladich, Ignasi; Corcoy Pla, Rosa

    2011-09-01

    This study performed a systematic review and meta-analysis on glycemic control and pregnancy outcomes in women with type 1 diabetes mellitus (T1DM) treated with lispro (LP) versus regular insulin (RI) since before pregnancy. We performed a MEDLINE and EMBASE search. Abstracts (and full articles when appropriate) were reviewed by two independent researchers. Inclusion criteria were patients with T1DM, data on women treated with RI and LP since before pregnancy until delivery in the same article, at least five pregnancies in each group, and information on at least one pregnancy outcome. Quality assessment was performed using the Newcastle-Ottawa Quality Assessment Scale for cohort studies. Outcome data were summarized with Revman version 5.0 (ims.cochrane.org/revman/download [The Nordic Cochrane Centre, The Cochrane Collaboration, Copenhagen, Denmark]), applying a random effects model. Two hundred sixty-seven abstracts were identified, and four full articles fulfilled inclusion criteria, all of them corresponding to observational studies. Baseline characteristics were similar in women treated with LP or RI. Regarding outcome data, no differences between LP and RI groups were observed in hemoglobin A1c, gestational age at birth, birth weight, and rate of diabetic ketoacidosis, pregnancy-induced hypertension, pre-eclampsia, spontaneous miscarriages, interruptions, total abortions, cesarean section, preterm birth, macrosomia, small-for gestational-age newborns, stillbirth, neonatal and perinatal mortality, neonatal hypoglycemia, and major malformations. The rate of large-for-gestational age newborns was higher in the LP group (relative risk 1.38; 95% confidence interval 1.14-1.68). In relation to women with T1DM treated with RI, those treated with LP display similar baseline characteristics and no differences in metabolic control or perinatal outcome with the exception of a higher rate of large-for-gestational-age newborns.

  6. Selection of regularization parameter for l1-regularized damage detection

    NASA Astrophysics Data System (ADS)

    Hou, Rongrong; Xia, Yong; Bao, Yuequan; Zhou, Xiaoqing

    2018-06-01

    The l1 regularization technique has been developed for structural health monitoring and damage detection through employing the sparsity condition of structural damage. The regularization parameter, which controls the trade-off between data fidelity and solution size of the regularization problem, exerts a crucial effect on the solution. However, the l1 regularization problem has no closed-form solution, and the regularization parameter is usually selected by experience. This study proposes two strategies of selecting the regularization parameter for the l1-regularized damage detection problem. The first method utilizes the residual and solution norms of the optimization problem and ensures that they are both small. The other method is based on the discrepancy principle, which requires that the variance of the discrepancy between the calculated and measured responses is close to the variance of the measurement noise. The two methods are applied to a cantilever beam and a three-story frame. A range of the regularization parameter, rather than one single value, can be determined. When the regularization parameter in this range is selected, the damage can be accurately identified even for multiple damage scenarios. This range also indicates the sensitivity degree of the damage identification problem to the regularization parameter.
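
    The discrepancy-principle strategy described above can be sketched as a simple parameter sweep: solve the l1-regularized problem over a grid of regularization parameters and keep the value whose residual variance best matches the (assumed known) measurement-noise variance. The sketch uses scikit-learn's Lasso as a generic l1 solver under hypothetical variable names; it is an illustration of the selection rule, not the paper's damage-detection formulation.

```python
import numpy as np
from sklearn.linear_model import Lasso

def select_reg_param_discrepancy(X, y, noise_var, alphas):
    """Pick the l1 regularization parameter whose residual variance best matches
    the known measurement-noise variance (a discrepancy-principle-style rule)."""
    best_alpha, best_gap = None, np.inf
    for alpha in alphas:
        w = Lasso(alpha=alpha, max_iter=10000).fit(X, y).coef_
        gap = abs(np.var(y - X @ w) - noise_var)
        if gap < best_gap:
            best_alpha, best_gap = alpha, gap
    return best_alpha

# Toy usage with a sparse "damage" vector and a known noise level.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 50))
w_true = np.zeros(50); w_true[[3, 17]] = [0.8, -0.5]   # two damaged elements
sigma = 0.05
y = X @ w_true + sigma * rng.standard_normal(200)
alpha_star = select_reg_param_discrepancy(X, y, sigma ** 2, np.logspace(-3, 0, 30))
```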

  7. Subevents of long-period seismicity: implications for hydrothermal dynamics during the 2004-2008 eruption of Mount St. Helens

    USGS Publications Warehouse

    Matoza, Robin S.; Chouet, Bernard A.

    2010-01-01

    One of the most striking aspects of seismicity during the 2004–2008 eruption of Mount St. Helens (MSH) was the precise regularity in occurrence of repetitive long-period (LP) or “drumbeat” events over sustained time periods. However, this precise regularity was not always observed, and at times the temporal occurrence of LP events became more random. In addition, accompanying the dominant LP class of events during the 2004–2008 MSH eruption, there was a near-continuous, randomly occurring series of smaller seismic events. These subevents are not always simply small-amplitude versions of the dominant LP class of events but appear instead to result from a separate random process only loosely coupled to the main LP source mechanism. We present an analysis of the interevent time and amplitude distributions of the subevents, using waveform cross correlation to separate LP events from the subevents. We also discuss seismic tremor that accompanied the 8 March 2005 phreatic explosion event at MSH. This tremor consists of a rapid succession of LPs and subevents triggered during the explosion, in addition to broadband noise from the sustained degassing. Immediately afterward, seismicity returned to the pre-explosion occurrence pattern. This triggering in relation to the rapid ejection of steam from the system, and subsequent return to pre-explosion seismicity, suggests that both seismic event types originated in a region of the subsurface hydrothermal system that was (1) in contact with the reservoir feeding the 8 March 2005 phreatic explosion but (2) not destroyed or drained by the explosion event. Finally, we discuss possible thermodynamic conditions in a pressurized hydrothermal crack that could give rise to seismicity. Pressure drop estimates for typical LP events are not generally large enough to perturb pure water in a shallow hydrothermal crack into an unstable state. However, dissolved volatiles such as CO2 may lead to a more unstable system, increasing the seismogenic potential of a hydrothermal crack subject to rapid heat flux. The interaction of hydrothermal and magmatic systems beneath MSH in 2004–2008 thus appears able to explain a wide range of observed phenomena, including subevents, LP events, larger (Md > 2) events, and phreatic explosions.

  8. Unsupervised Deep Learning Applied to Breast Density Segmentation and Mammographic Risk Scoring.

    PubMed

    Kallenberg, Michiel; Petersen, Kersten; Nielsen, Mads; Ng, Andrew Y; Pengfei Diao; Igel, Christian; Vachon, Celine M; Holland, Katharina; Winkel, Rikke Rass; Karssemeijer, Nico; Lillholm, Martin

    2016-05-01

    Mammographic risk scoring has commonly been automated by extracting a set of handcrafted features from mammograms, and relating the responses directly or indirectly to breast cancer risk. We present a method that learns a feature hierarchy from unlabeled data. When the learned features are used as the input to a simple classifier, two different tasks can be addressed: i) breast density segmentation, and ii) scoring of mammographic texture. The proposed model learns features at multiple scales. To control the model's capacity, a novel sparsity regularizer is introduced that incorporates both lifetime and population sparsity. We evaluated our method on three different clinical datasets. Our state-of-the-art results show that the learned breast density scores have a very strong positive relationship with manual ones, and that the learned texture scores are predictive of breast cancer. The model is easy to apply and generalizes to many other segmentation and scoring problems.
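
    One common way to express lifetime and population sparsity as penalties is via KL divergences between a target activation rate and the mean activations taken across the batch (per unit) and across the units (per sample), respectively. The sketch below is only this interpretation under assumed names (activations, rho); the paper's actual regularizer may differ in form.

```python
import numpy as np

def kl_sparsity(mean_act, rho=0.05, eps=1e-8):
    """KL divergence between a target activation rate rho and observed mean activations."""
    m = np.clip(mean_act, eps, 1.0 - eps)
    return np.sum(rho * np.log(rho / m) + (1 - rho) * np.log((1 - rho) / (1 - m)))

def combined_sparsity_penalty(activations, rho=0.05):
    """activations: (n_samples, n_units) array of sigmoid-like feature activations.
    Lifetime sparsity penalizes each unit's mean activity over the batch;
    population sparsity penalizes each sample's mean activity over the units."""
    lifetime = kl_sparsity(activations.mean(axis=0), rho)    # per unit, across samples
    population = kl_sparsity(activations.mean(axis=1), rho)  # per sample, across units
    return lifetime + population

# Example on random activations in [0, 1).
rng = np.random.default_rng(0)
penalty = combined_sparsity_penalty(rng.random((32, 200)), rho=0.05)
```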

  9. Constrained Total Generalized p-Variation Minimization for Few-View X-Ray Computed Tomography Image Reconstruction.

    PubMed

    Zhang, Hanming; Wang, Linyuan; Yan, Bin; Li, Lei; Cai, Ailong; Hu, Guoen

    2016-01-01

    Total generalized variation (TGV)-based computed tomography (CT) image reconstruction, which utilizes high-order image derivatives, is superior to total variation-based methods in terms of the preservation of edge information and the suppression of unfavorable staircase effects. However, conventional TGV regularization employs an l1-based form, which is not the most direct way to promote the sparsity prior. In this study, we propose a total generalized p-variation (TGpV) regularization model to improve the sparsity exploitation of TGV and offer efficient solutions to few-view CT image reconstruction problems. To solve the nonconvex optimization problem of the TGpV minimization model, we then present an efficient iterative algorithm based on the alternating minimization of the augmented Lagrangian function. All of the resulting subproblems decoupled by variable splitting admit explicit solutions by applying the alternating minimization method and generalized p-shrinkage mapping. In addition, approximate solutions that can be easily performed and quickly calculated through fast Fourier transform are derived using the proximal point method to reduce the cost of inner subproblems. The accuracy and efficiency of the proposed method are evaluated qualitatively and quantitatively on simulated and real data to validate its feasibility. Overall, the proposed method exhibits reasonable performance and outperforms the original TGV-based method when applied to few-view problems.
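
    The abstract does not state the operator explicitly; as a minimal sketch, one widely used form of generalized p-shrinkage (following Chartrand) is shown below, which reduces to ordinary soft thresholding at p = 1. Treat it as an assumption-level illustration rather than the authors' exact mapping.

      import numpy as np

      def p_shrinkage(x, lam, p):
          """Elementwise generalized p-shrinkage (one common form).
          For p = 1 this is plain soft thresholding; for p < 1 it shrinks
          large entries less aggressively, mimicking an lp penalty."""
          mag = np.abs(x)
          safe = np.where(mag > 0, mag, 1.0)   # avoid 0**(p-1) for p < 1
          shrunk = np.maximum(mag - lam ** (2.0 - p) * safe ** (p - 1.0), 0.0)
          return np.sign(x) * shrunk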

  10. The fastclime Package for Linear Programming and Large-Scale Precision Matrix Estimation in R.

    PubMed

    Pang, Haotian; Liu, Han; Vanderbei, Robert

    2014-02-01

    We develop an R package fastclime for solving a family of regularized linear programming (LP) problems. Our package efficiently implements the parametric simplex algorithm, which provides a scalable and sophisticated tool for solving large-scale linear programs. As an illustrative example, one use of our LP solver is to implement an important sparse precision matrix estimation method called CLIME (Constrained L 1 Minimization Estimator). Compared with existing packages for this problem such as clime and flare, our package has three advantages: (1) it efficiently calculates the full piecewise-linear regularization path; (2) it provides an accurate dual certificate as stopping criterion; (3) it is completely coded in C and is highly portable. This package is designed to be useful to statisticians and machine learning researchers for solving a wide range of problems.

  11. l0 regularization based on a prior image incorporated non-local means for limited-angle X-ray CT reconstruction.

    PubMed

    Zhang, Lingli; Zeng, Li; Guo, Yumeng

    2018-01-01

    Restricted by the scanning environment in some CT imaging modalities, the acquired projection data are usually incomplete, which may lead to a limited-angle reconstruction problem. Thus, image quality usually suffers from slope artifacts. The objective of this study is to first investigate the distorted regions of the reconstructed images that exhibit the slope artifacts and then present a new iterative reconstruction method to address the limited-angle X-ray CT reconstruction problem. The framework of the new method exploits the structural similarity between the prior image and the reconstructed image, aiming to compensate for the distorted edges. Specifically, the new method utilizes l0 regularization and wavelet tight framelets to suppress the slope artifacts and pursue the sparsity. The new method comprises the following four steps: (1) address the data fidelity using SART; (2) compensate for the slope artifacts due to the missing projection data using the prior image and modified nonlocal means (PNLM); (3) utilize l0 regularization to suppress the slope artifacts and pursue the sparsity of wavelet coefficients of the transformed image by using iterative hard thresholding (l0W); and (4) apply an inverse wavelet transform to reconstruct the image. In summary, this method is referred to as "l0W-PNLM". Numerical implementations showed that the presented l0W-PNLM was superior in suppressing the slope artifacts while preserving the edges of some features, as compared with commercial and other popular investigative algorithms. When the image to be reconstructed is inconsistent with the prior image, the new method can avoid or minimize the distorted edges in the reconstructed images. Quantitative assessments also showed that the new method achieved the highest image quality compared with the existing algorithms. This study demonstrated that the presented l0W-PNLM yielded higher image quality due to a number of unique characteristics, which include that (1) it utilizes the structural similarity between the reconstructed image and prior image to correct the edges distorted by slope artifacts; (2) it adopts wavelet tight frames to obtain first- and higher-order derivatives in several directions and levels; and (3) it takes advantage of l0 regularization to promote the sparsity of wavelet coefficients, which is effective for the inhibition of the slope artifacts. Therefore, the new method can address the limited-angle CT reconstruction problem effectively and has practical significance.
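
    As a rough illustration of the l0/hard-thresholding component (step 3 above), the sketch below runs iterative hard thresholding on the coefficients of a generic orthonormal wavelet transform. The function handles A/At/W/Wt and all parameters are assumptions standing in for the SART and framelet machinery of the paper.

      import numpy as np

      def hard_threshold(c, t):
          """Keep coefficients with magnitude above t (the l0 proximal step)."""
          return c * (np.abs(c) > t)

      def l0_wavelet_iht(A, At, W, Wt, y, step, thresh, n_iter=100, x0=None):
          """Minimal l0-regularized reconstruction by iterative hard thresholding.
          A/At: forward projector and adjoint; W/Wt: orthonormal wavelet
          transform and its inverse (all callables supplied by the user)."""
          x = np.zeros_like(At(y)) if x0 is None else x0
          for _ in range(n_iter):
              x = x - step * At(A(x) - y)             # gradient step on data fidelity
              x = Wt(hard_threshold(W(x), thresh))    # enforce sparsity in wavelet domain
          return x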

  12. Spatio Temporal EEG Source Imaging with the Hierarchical Bayesian Elastic Net and Elitist Lasso Models

    PubMed Central

    Paz-Linares, Deirel; Vega-Hernández, Mayrim; Rojas-López, Pedro A.; Valdés-Hernández, Pedro A.; Martínez-Montes, Eduardo; Valdés-Sosa, Pedro A.

    2017-01-01

    The estimation of EEG generating sources constitutes an Inverse Problem (IP) in Neuroscience. This is an ill-posed problem due to the non-uniqueness of the solution, and regularization or prior information is needed to undertake Electrophysiology Source Imaging. Structured Sparsity priors can be attained through combinations of (L1 norm-based) and (L2 norm-based) constraints such as the Elastic Net (ENET) and Elitist Lasso (ELASSO) models. The former model is used to find solutions with a small number of smooth nonzero patches, while the latter imposes different degrees of sparsity simultaneously along different dimensions of the spatio-temporal matrix solutions. Both models have been addressed within the penalized regression approach, where the regularization parameters are selected heuristically, leading usually to non-optimal and computationally expensive solutions. The existing Bayesian formulation of ENET allows hyperparameter learning, but it relies on the computationally intensive Monte Carlo/Expectation Maximization methods, which makes its application to the EEG IP impractical, while the ELASSO has not previously been considered in a Bayesian context. In this work, we attempt to solve the EEG IP using a Bayesian framework for the ENET and ELASSO models. We propose a Structured Sparse Bayesian Learning algorithm based on combining the Empirical Bayes and the iterative coordinate descent procedures to estimate both the parameters and hyperparameters. Using realistic simulations and avoiding the inverse crime, we illustrate that our methods are able to recover complicated source setups more accurately and with a more robust estimation of the hyperparameters and behavior under different sparsity scenarios than classical LORETA, ENET and LASSO Fusion solutions. We also solve the EEG IP using data from a visual attention experiment, finding more interpretable neurophysiological patterns with our methods. The Matlab codes used in this work, including Simulations, Methods, Quality Measures and Visualization Routines, are freely available on a public website. PMID:29200994

  13. Spatio Temporal EEG Source Imaging with the Hierarchical Bayesian Elastic Net and Elitist Lasso Models.

    PubMed

    Paz-Linares, Deirel; Vega-Hernández, Mayrim; Rojas-López, Pedro A; Valdés-Hernández, Pedro A; Martínez-Montes, Eduardo; Valdés-Sosa, Pedro A

    2017-01-01

    The estimation of EEG generating sources constitutes an Inverse Problem (IP) in Neuroscience. This is an ill-posed problem due to the non-uniqueness of the solution, and regularization or prior information is needed to undertake Electrophysiology Source Imaging. Structured Sparsity priors can be attained through combinations of (L1 norm-based) and (L2 norm-based) constraints such as the Elastic Net (ENET) and Elitist Lasso (ELASSO) models. The former model is used to find solutions with a small number of smooth nonzero patches, while the latter imposes different degrees of sparsity simultaneously along different dimensions of the spatio-temporal matrix solutions. Both models have been addressed within the penalized regression approach, where the regularization parameters are selected heuristically, leading usually to non-optimal and computationally expensive solutions. The existing Bayesian formulation of ENET allows hyperparameter learning, but it relies on the computationally intensive Monte Carlo/Expectation Maximization methods, which makes its application to the EEG IP impractical, while the ELASSO has not previously been considered in a Bayesian context. In this work, we attempt to solve the EEG IP using a Bayesian framework for the ENET and ELASSO models. We propose a Structured Sparse Bayesian Learning algorithm based on combining the Empirical Bayes and the iterative coordinate descent procedures to estimate both the parameters and hyperparameters. Using realistic simulations and avoiding the inverse crime, we illustrate that our methods are able to recover complicated source setups more accurately and with a more robust estimation of the hyperparameters and behavior under different sparsity scenarios than classical LORETA, ENET and LASSO Fusion solutions. We also solve the EEG IP using data from a visual attention experiment, finding more interpretable neurophysiological patterns with our methods. The Matlab codes used in this work, including Simulations, Methods, Quality Measures and Visualization Routines, are freely available on a public website.
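
    For orientation only, the snippet below poses a toy instance of the ENET prior as ordinary penalized regression with scikit-learn: fixed regularization parameters, a single time sample, and a random lead-field L. The paper's contribution is precisely to replace such heuristic parameter choices with Bayesian hyperparameter learning, so this is not the authors' method; all dimensions and values are assumptions.

      import numpy as np
      from sklearn.linear_model import ElasticNet

      rng = np.random.default_rng(0)
      n_sensors, n_sources = 64, 500
      L = rng.standard_normal((n_sensors, n_sources))          # hypothetical lead field
      s_true = np.zeros(n_sources)
      s_true[rng.choice(n_sources, 10, replace=False)] = rng.standard_normal(10)
      y = L @ s_true + 0.01 * rng.standard_normal(n_sensors)   # EEG data at one instant

      # Elastic net = L1 + L2 penalty; alpha and l1_ratio are fixed here by hand,
      # which is exactly the heuristic step the Bayesian formulation avoids.
      enet = ElasticNet(alpha=0.01, l1_ratio=0.5, fit_intercept=False, max_iter=5000)
      enet.fit(L, y)
      s_hat = enet.coef_   # sparse, smoothed source estimate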

  14. Fast ℓ1-regularized space-time adaptive processing using alternating direction method of multipliers

    NASA Astrophysics Data System (ADS)

    Qin, Lilong; Wu, Manqing; Wang, Xuan; Dong, Zhen

    2017-04-01

    Motivated by the sparsity of filter coefficients in full-dimension space-time adaptive processing (STAP) algorithms, this paper proposes a fast ℓ1-regularized STAP algorithm based on the alternating direction method of multipliers to accelerate the convergence and reduce the calculations. The proposed algorithm uses a splitting variable to obtain an equivalent optimization formulation, which is addressed with an augmented Lagrangian method. Using the alternating recursive algorithm, the method can rapidly result in a low minimum mean-square error without a large number of calculations. Through theoretical analysis and experimental verification, we demonstrate that the proposed algorithm provides a better output signal-to-clutter-noise ratio performance than other algorithms.
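
    The abstract describes an ADMM recursion with a splitting variable and an augmented Lagrangian; the sketch below shows that recursion for a generic real-valued l1-regularized least-squares problem. It is a minimal sketch of the splitting scheme only, not the full (complex-valued) STAP filter design, and all parameter names are placeholders.

      import numpy as np

      def soft(v, t):
          return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

      def admm_l1_ls(A, b, lam, rho=1.0, n_iter=100):
          """ADMM for min_x 0.5*||Ax - b||^2 + lam*||x||_1 via the split x = z."""
          m, n = A.shape
          x = np.zeros(n); z = np.zeros(n); u = np.zeros(n)
          Atb = A.T @ b
          M = A.T @ A + rho * np.eye(n)        # system reused in every x-update
          for _ in range(n_iter):
              x = np.linalg.solve(M, Atb + rho * (z - u))   # quadratic subproblem
              z = soft(x + u, lam / rho)                    # l1 proximal step
              u = u + x - z                                 # dual (multiplier) update
          return z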

  15. SISSY: An efficient and automatic algorithm for the analysis of EEG sources based on structured sparsity.

    PubMed

    Becker, H; Albera, L; Comon, P; Nunes, J-C; Gribonval, R; Fleureau, J; Guillotel, P; Merlet, I

    2017-08-15

    Over the past decades, a multitude of different brain source imaging algorithms have been developed to identify the neural generators underlying the surface electroencephalography measurements. While most of these techniques focus on determining the source positions, only a small number of recently developed algorithms provides an indication of the spatial extent of the distributed sources. In a recent comparison of brain source imaging approaches, the VB-SCCD algorithm has been shown to be one of the most promising algorithms among these methods. However, this technique suffers from several problems: it leads to amplitude-biased source estimates, it has difficulties in separating close sources, and it has a high computational complexity due to its implementation using second order cone programming. To overcome these problems, we propose to include an additional regularization term that imposes sparsity in the original source domain and to solve the resulting optimization problem using the alternating direction method of multipliers. Furthermore, we show that the algorithm yields more robust solutions by taking into account the temporal structure of the data. We also propose a new method to automatically threshold the estimated source distribution, which permits the delineation of the active brain regions. The new algorithm, called Source Imaging based on Structured Sparsity (SISSY), is analyzed by means of realistic computer simulations and is validated on the clinical data of four patients. Copyright © 2017 Elsevier Inc. All rights reserved.

  16. Atmospheric inverse modeling via sparse reconstruction

    NASA Astrophysics Data System (ADS)

    Hase, Nils; Miller, Scot M.; Maaß, Peter; Notholt, Justus; Palm, Mathias; Warneke, Thorsten

    2017-10-01

    Many applications in atmospheric science involve ill-posed inverse problems. A crucial component of many inverse problems is the proper formulation of a priori knowledge about the unknown parameters. In most cases, this knowledge is expressed as a Gaussian prior. This formulation often performs well at capturing smoothed, large-scale processes but is ill equipped to capture localized structures like large point sources or localized hot spots. Over the last decade, scientists from a diverse array of applied mathematics and engineering fields have developed sparse reconstruction techniques to identify localized structures. In this study, we present a new regularization approach for ill-posed inverse problems in atmospheric science. It is based on Tikhonov regularization with a sparsity constraint and allows bounds on the parameters. We enforce sparsity using a dictionary representation system. We analyze its performance in an atmospheric inverse modeling scenario by estimating anthropogenic US methane (CH4) emissions from simulated atmospheric measurements. Different measures indicate that our sparse reconstruction approach is better able to capture large point sources or localized hot spots than other methods commonly used in atmospheric inversions. It captures the overall signal equally well but adds details on the grid scale. This feature can be of value for any inverse problem with point or spatially discrete sources. We show an example for source estimation of synthetic methane emissions from the Barnett shale formation.
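
    As a simplified sketch of "Tikhonov regularization with a sparsity constraint and bounds on the parameters", the snippet below runs a projected ISTA-type iteration in which the flux field is represented in a dictionary D, the coefficients carry an l1 penalty, and the bound is taken, purely as an assumption of this illustration, to be nonnegativity of the coefficients; G, D and all parameters are placeholders, not the study's configuration.

      import numpy as np

      def sparse_bounded_inversion(G, D, y, lam, step, n_iter=500):
          """Estimate fluxes x = D @ c from observations y = G @ x + noise,
          with an l1 penalty on c and a nonnegativity bound on c."""
          A = G @ D                       # combined forward operator
          c = np.zeros(A.shape[1])
          for _ in range(n_iter):
              c = c - step * (A.T @ (A @ c - y))                        # gradient step
              c = np.sign(c) * np.maximum(np.abs(c) - step * lam, 0.0)  # soft threshold
              c = np.maximum(c, 0.0)                                    # enforce bound
          return D @ c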

  17. Constrained Total Generalized p-Variation Minimization for Few-View X-Ray Computed Tomography Image Reconstruction

    PubMed Central

    Zhang, Hanming; Wang, Linyuan; Yan, Bin; Li, Lei; Cai, Ailong; Hu, Guoen

    2016-01-01

    Total generalized variation (TGV)-based computed tomography (CT) image reconstruction, which utilizes high-order image derivatives, is superior to total variation-based methods in terms of the preservation of edge information and the suppression of unfavorable staircase effects. However, conventional TGV regularization employs an l1-based form, which is not the most direct way to promote the sparsity prior. In this study, we propose a total generalized p-variation (TGpV) regularization model to improve the sparsity exploitation of TGV and offer efficient solutions to few-view CT image reconstruction problems. To solve the nonconvex optimization problem of the TGpV minimization model, we then present an efficient iterative algorithm based on the alternating minimization of the augmented Lagrangian function. All of the resulting subproblems decoupled by variable splitting admit explicit solutions by applying the alternating minimization method and generalized p-shrinkage mapping. In addition, approximate solutions that can be easily performed and quickly calculated through fast Fourier transform are derived using the proximal point method to reduce the cost of inner subproblems. The accuracy and efficiency of the proposed method are evaluated qualitatively and quantitatively on simulated and real data to validate its feasibility. Overall, the proposed method exhibits reasonable performance and outperforms the original TGV-based method when applied to few-view problems. PMID:26901410

  18. Wavelet-sparsity based regularization over time in the inverse problem of electrocardiography.

    PubMed

    Cluitmans, Matthijs J M; Karel, Joël M H; Bonizzi, Pietro; Volders, Paul G A; Westra, Ronald L; Peeters, Ralf L M

    2013-01-01

    Noninvasive, detailed assessment of electrical cardiac activity at the level of the heart surface has the potential to revolutionize diagnostics and therapy of cardiac pathologies. Due to the requirement of noninvasiveness, body-surface potentials are measured and have to be projected back to the heart surface, yielding an ill-posed inverse problem. Ill-posedness implies that there are non-unique solutions to this problem, resulting in a problem of choice. In the current paper, it is proposed to restrict this choice by requiring that the time series of reconstructed heart-surface potentials is sparse in the wavelet domain. A local search technique is introduced that pursues a sparse solution, using an orthogonal wavelet transform. Epicardial potentials reconstructed from this method are compared to those from existing methods, and validated with actual intracardiac recordings. The new technique improves the reconstructions in terms of smoothness and recovers physiologically meaningful details. Additionally, reconstruction of activation timing seems to be improved when pursuing sparsity of the reconstructed signals in the wavelet domain.

  19. Localized Spatio-Temporal Constraints for Accelerated CMR Perfusion

    PubMed Central

    Akçakaya, Mehmet; Basha, Tamer A.; Pflugi, Silvio; Foppa, Murilo; Kissinger, Kraig V.; Hauser, Thomas H.; Nezafat, Reza

    2013-01-01

    Purpose To develop and evaluate an image reconstruction technique for cardiac MRI (CMR) perfusion that utilizes localized spatio-temporal constraints. Methods CMR perfusion plays an important role in detecting myocardial ischemia in patients with coronary artery disease. Breath-hold k-t based image acceleration techniques are typically used in CMR perfusion for superior spatial/temporal resolution and improved coverage. In this study, we propose a novel compressed sensing based image reconstruction technique for CMR perfusion, with applicability to free-breathing examinations. This technique uses local spatio-temporal constraints by regularizing image patches across a small number of dynamics. The technique is compared to conventional dynamic-by-dynamic reconstruction, and sparsity regularization using a temporal principal-component (pc) basis, as well as zerofilled data in multi-slice 2D and 3D CMR perfusion. Qualitative image scores are used (1=poor, 4=excellent) to evaluate the technique in 3D perfusion in 10 patients and 5 healthy subjects. On 4 healthy subjects, the proposed technique was also compared to a breath-hold multi-slice 2D acquisition with parallel imaging in terms of signal intensity curves. Results The proposed technique results in images that are superior in terms of spatial and temporal blurring compared to the other techniques, even in free-breathing datasets. The image scores indicate a significant improvement compared to other techniques in 3D perfusion (2.8±0.5 vs. 2.3±0.5 for x-pc regularization, 1.7±0.5 for dynamic-by-dynamic, 1.1±0.2 for zerofilled). Signal intensity curves indicate similar dynamics of uptake between the proposed method with a 3D acquisition and the breath-hold multi-slice 2D acquisition with parallel imaging. Conclusion The proposed reconstruction utilizes sparsity regularization based on localized information in both spatial and temporal domains for highly-accelerated CMR perfusion with potential utility in free-breathing 3D acquisitions. PMID:24123058

  20. Optimized computational imaging methods for small-target sensing in lens-free holographic microscopy

    NASA Astrophysics Data System (ADS)

    Xiong, Zhen; Engle, Isaiah; Garan, Jacob; Melzer, Jeffrey E.; McLeod, Euan

    2018-02-01

    Lens-free holographic microscopy is a promising diagnostic approach because it is cost-effective, compact, and suitable for point-of-care applications, while providing high resolution together with an ultra-large field-of-view. It has been applied to biomedical sensing, where larger targets like eukaryotic cells, bacteria, or viruses can be directly imaged without labels, and smaller targets like proteins or DNA strands can be detected via scattering labels like micro- or nano-spheres. Automated image processing routines can count objects and infer target concentrations. In these sensing applications, sensitivity and specificity are critically affected by image resolution and signal-to-noise ratio (SNR). Pixel super-resolution approaches have been shown to boost resolution and SNR by synthesizing a high-resolution image from multiple, partially redundant, low-resolution images. However, there are several computational methods that can be used to synthesize the high-resolution image, and previously, it has been unclear which methods work best for the particular case of small-particle sensing. Here, we quantify the SNR achieved in small-particle sensing using a regularized gradient-descent optimization method, where the regularization is based on cardinal-neighbor differences, Bayer-pattern noise reduction, or sparsity in the image. In particular, we find that gradient-descent with sparsity-based regularization works best for small-particle sensing. These computational approaches were evaluated on images acquired using a lens-free microscope that we assembled from an off-the-shelf LED array and color image sensor. Compared to other lens-free imaging systems, our hardware integration, calibration, and sample preparation are particularly simple. We believe our results will help to enable the best performance in lens-free holographic sensing.

  1. Patch-based image reconstruction for PET using prior-image derived dictionaries

    NASA Astrophysics Data System (ADS)

    Tahaei, Marzieh S.; Reader, Andrew J.

    2016-09-01

    In PET image reconstruction, regularization is often needed to reduce the noise in the resulting images. Patch-based image processing techniques have recently been successfully used for regularization in medical image reconstruction through a penalized likelihood framework. Re-parameterization within reconstruction is another powerful regularization technique in which the object in the scanner is re-parameterized using coefficients for spatially-extensive basis vectors. In this work, a method for extracting patch-based basis vectors from the subject’s MR image is proposed. The coefficients for these basis vectors are then estimated using the conventional MLEM algorithm. Furthermore, using the alternating direction method of multipliers, an algorithm for optimizing the Poisson log-likelihood while imposing sparsity on the parameters is also proposed. This novel method is then utilized to find sparse coefficients for the patch-based basis vectors extracted from the MR image. The results indicate the superiority of the proposed methods to patch-based regularization using the penalized likelihood framework.
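
    For context, the classical MLEM update mentioned above has the familiar multiplicative form shown in this sketch. In the paper it is applied to the coefficients of MR-derived patch basis vectors rather than to voxels, so the generic system matrix A used here is an assumption of the illustration.

      import numpy as np

      def mlem(A, y, n_iter=50, eps=1e-12):
          """MLEM for Poisson data y ~ Poisson(A x):
          x_{k+1} = x_k / (A^T 1) * A^T ( y / (A x_k) )."""
          x = np.ones(A.shape[1])
          sens = A.T @ np.ones(A.shape[0]) + eps    # sensitivity image A^T 1
          for _ in range(n_iter):
              ratio = y / (A @ x + eps)             # measured / modeled counts
              x = x / sens * (A.T @ ratio)          # multiplicative EM update
          return x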

  2. Sparse regularization for EIT reconstruction incorporating structural information derived from medical imaging.

    PubMed

    Gong, Bo; Schullcke, Benjamin; Krueger-Ziolek, Sabine; Mueller-Lisse, Ullrich; Moeller, Knut

    2016-06-01

    Electrical impedance tomography (EIT) reconstructs the conductivity distribution of a domain using electrical data on its boundary. This is an ill-posed inverse problem usually solved on a finite element mesh. For this article, a special regularization method incorporating structural information of the targeted domain is proposed and evaluated. Structural information was obtained either from computed tomography images or from preliminary EIT reconstructions by a modified k-means clustering. The proposed regularization method integrates this structural information into the reconstruction as a soft constraint preferring sparsity at the group level. A first evaluation with Monte Carlo simulations indicated that the proposed solver is more robust to noise and the resulting images show fewer artifacts. This finding is supported by real data analysis. The structure based regularization has the potential to balance structural a priori information with data driven reconstruction. It is robust to noise, reduces artifacts and produces images that reflect anatomy and are thus easier to interpret for physicians.
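
    A minimal sketch of what "sparsity at the group level" can mean computationally is the block (l2,1) proximal operator below, which shrinks each predefined group of conductivity elements as a unit so that whole anatomical regions switch on or off together; the grouping and threshold are placeholders, and the authors' full solver is not reproduced.

      import numpy as np

      def group_soft_threshold(x, groups, t):
          """Proximal operator of a group-sparsity (l2,1) penalty.
          `groups` is a list of index arrays partitioning the elements of x."""
          out = x.copy()
          for idx in groups:
              norm = np.linalg.norm(x[idx])
              scale = max(0.0, 1.0 - t / norm) if norm > 0 else 0.0
              out[idx] = scale * x[idx]        # shrink the whole block at once
          return out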

  3. Operating the EOSDIS at the land processes DAAC managing expectations, requirements, and performance across agencies, missions, instruments, systems, and user communities

    USGS Publications Warehouse

    Kalvelage, T.A.; ,

    2002-01-01

    NASA developed the Earth Observing System (EOS) during the 1990s. At the Land Processes Distributed Active Archive Center (LP DAAC), located at the USGS EROS Data Center, the EOS Data and Information System (EOSDIS) is required to support heritage missions as well as Landsat 7, Terra, and Aqua. The original system concept of the early 1990s changed as each community had its say - first the managers, then engineers, scientists, developers, operators, and then finally the general public. The systems at the LP DAAC - particularly the largest single system, the EOSDIS Core System (ECS) - are changing as experience accumulates, technology changes, and each user group gains influence. The LP DAAC has adapted as contingencies were planned for, requirements and therefore plans were modified, and expectations changed faster than requirements could hope to be satisfied. Although not responsible for Quality Assurance of the science data, the LP DAAC works to ensure the data are accessible and usable by influencing systems, capabilities, and data formats where possible, and providing tools and user support as necessary. While supporting multiple missions and instruments, the LP DAAC also works with and learns from multiple management and oversight groups as they review mission requirements, system capabilities, and the overall operation of the LP DAAC. Stakeholders, including the Land Science community, are consulted regularly to ensure that the LP DAAC remains cognizant of and responsive to the evolving needs of the user community. Today, the systems do not look or function as originally planned, but they do work, and they allow customers to search and order an impressive amount of diverse data.

  4. Nonlocal sparse model with adaptive structural clustering for feature extraction of aero-engine bearings

    NASA Astrophysics Data System (ADS)

    Zhang, Han; Chen, Xuefeng; Du, Zhaohui; Li, Xiang; Yan, Ruqiang

    2016-04-01

    Fault information of aero-engine bearings presents two particular phenomena, i.e., waveform distortion and impulsive feature frequency band dispersion, which leads to a challenging problem for current techniques of bearing fault diagnosis. Moreover, although much progress in sparse representation theory has been made in feature extraction of fault information, the theory also confronts inevitable performance degradation due to the fact that relatively weak fault information does not have sufficiently prominent and sparse representations. Therefore, a novel nonlocal sparse model (coined NLSM) and its algorithmic framework are proposed in this paper, going beyond simple sparsity by introducing more intrinsic structure of the feature information. This work adequately exploits the underlying prior information that feature information exhibits nonlocal self-similarity, by clustering similar signal fragments and stacking them together into groups. Within this framework, the prior information is transformed into a regularization term, and a sparse optimization problem, which can be solved through the block coordinate descent (BCD) method, is formulated. Additionally, the adaptive structural clustering sparse dictionary learning technique, which utilizes k-Nearest-Neighbor (kNN) clustering and principal component analysis (PCA) learning, is adopted to further enable sufficient sparsity of feature information. Moreover, the selection rule for the regularization parameter and the computational complexity are described in detail. The performance of the proposed framework is evaluated through numerical experiments, and its superiority with respect to the state-of-the-art method in the field is demonstrated through vibration signals from an experimental rig of aircraft engine bearings.

  5. Limited-angle multi-energy CT using joint clustering prior and sparsity regularization

    NASA Astrophysics Data System (ADS)

    Zhang, Huayu; Xing, Yuxiang

    2016-03-01

    In this article, we present an easy-to-implement multi-energy CT scanning strategy and a corresponding reconstruction method, which facilitate spectral CT imaging by improving the data efficiency by the number-of-energy-channel fold without introducing visible limited-angle artifacts caused by reducing projection views. Leveraging the structure coherence at different energies, we first pre-reconstruct a prior structure information image using projection data from all energy channels. Then, we perform k-means clustering on the prior image to generate a sparse dictionary representation for the image, which serves as a structure information constraint. We combine this constraint with a conventional compressed sensing method and propose a new model, which we refer to as Joint Clustering Prior and Sparsity Regularization (CPSR). CPSR is a convex problem, and we solve it by the Alternating Direction Method of Multipliers (ADMM). We verify our CPSR reconstruction method with a numerical simulation experiment. A dental phantom with complicated structures of teeth and soft tissues is used. X-ray beams from three spectra of different peak energies (120kVp, 90kVp, 60kVp) irradiate the phantom to form tri-energy projections. Projection data covering only 75° from each energy spectrum are collected for reconstruction. Independent reconstruction for each energy will cause severe limited-angle artifacts even with the help of compressed sensing approaches. Our CPSR provides us with images free of limited-angle artifacts. All edge details are well preserved in our experimental study.
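
    The clustering step can be pictured with the short sketch below, which runs k-means on the intensities of the pre-reconstructed prior image to label pixels by material class; how these labels are turned into the sparse dictionary constraint of CPSR is not shown here, and the parameter values are assumptions.

      import numpy as np
      from sklearn.cluster import KMeans

      def cluster_prior(prior_image, n_clusters=4):
          """Label each pixel of the prior image by k-means on its intensity."""
          vals = prior_image.reshape(-1, 1)
          labels = KMeans(n_clusters=n_clusters, n_init=10,
                          random_state=0).fit_predict(vals)
          return labels.reshape(prior_image.shape)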

  6. Low-Dose Dynamic Cerebral Perfusion Computed Tomography Reconstruction via Kronecker-Basis Representation Tensor Sparsity Regularization

    PubMed Central

    Zeng, Dong; Xie, Qi; Cao, Wenfei; Lin, Jiahui; Zhang, Hao; Zhang, Shanli; Huang, Jing; Bian, Zhaoying; Meng, Deyu; Xu, Zongben; Liang, Zhengrong; Chen, Wufan

    2017-01-01

    Dynamic cerebral perfusion computed tomography (DCPCT) has the ability to evaluate the hemodynamic information throughout the brain. However, due to its multiple 3-D image volume acquisition protocol, DCPCT scanning imposes a high radiation dose on patients, which is of growing concern. To address this issue, in this paper, based on the robust principal component analysis (RPCA, or equivalently the low-rank and sparsity decomposition) model and the DCPCT imaging procedure, we propose a new DCPCT image reconstruction algorithm to improve low-dose DCPCT image and perfusion map quality by using a powerful measure, called Kronecker-basis-representation tensor sparsity regularization, of the low-rankness of a tensor. For simplicity, the first proposed model is termed tensor-based RPCA (T-RPCA). Specifically, the T-RPCA model views the DCPCT sequential images as a mixture of low-rank, sparse, and noise components to describe the maximum temporal coherence of spatial structure among phases in a tensor framework intrinsically. Moreover, the low-rank component corresponds to the “background” part with spatial–temporal correlations, e.g., the static anatomical contribution, which is stationary over time in structure, and the sparse component represents the time-varying component with spatial–temporal continuity, e.g., dynamic perfusion-enhanced information, which is approximately sparse over time. Furthermore, an improved nonlocal patch-based T-RPCA (NL-T-RPCA) model which describes the 3-D block groups of the “background” in a tensor is also proposed. The NL-T-RPCA model utilizes the intrinsic characteristics underlying the DCPCT images, i.e., nonlocal self-similarity and global correlation. Two efficient algorithms using the alternating direction method of multipliers are developed to solve the proposed T-RPCA and NL-T-RPCA models, respectively. Extensive experiments with a digital brain perfusion phantom, preclinical monkey data, and clinical patient data clearly demonstrate that the two proposed models can achieve more gains than the existing popular algorithms in terms of both quantitative and visual quality evaluations from low-dose acquisitions, especially as low as 20 mAs. PMID:28880164
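
    To convey the underlying decomposition, the sketch below solves plain matrix RPCA (low-rank L plus sparse S) with a basic augmented-Lagrangian iteration of singular-value and soft thresholding. The paper's models work on tensors with a Kronecker-basis-representation sparsity measure, so this 2-D version is only an assumption-level illustration; the default parameter choices follow common RPCA practice, not the paper.

      import numpy as np

      def svt(M, tau):
          """Singular value thresholding."""
          U, s, Vt = np.linalg.svd(M, full_matrices=False)
          return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

      def rpca(D, lam=None, mu=None, n_iter=100):
          """Decompose D into low-rank L plus sparse S by an ALM-type iteration."""
          m, n = D.shape
          lam = lam if lam is not None else 1.0 / np.sqrt(max(m, n))
          mu = mu if mu is not None else 0.25 * m * n / (np.abs(D).sum() + 1e-12)
          S = np.zeros_like(D); Y = np.zeros_like(D)
          for _ in range(n_iter):
              L = svt(D - S + Y / mu, 1.0 / mu)                 # low-rank update
              T = D - L + Y / mu
              S = np.sign(T) * np.maximum(np.abs(T) - lam / mu, 0.0)  # sparse update
              Y = Y + mu * (D - L - S)                          # dual ascent
          return L, S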

  7. Condition Number Regularized Covariance Estimation*

    PubMed Central

    Won, Joong-Ho; Lim, Johan; Kim, Seung-Jean; Rajaratnam, Bala

    2012-01-01

    Estimation of high-dimensional covariance matrices is known to be a difficult problem, has many applications, and is of current interest to the larger statistics community. In many applications, including the so-called “large p small n” setting, the estimate of the covariance matrix is required to be not only invertible, but also well-conditioned. Although many regularization schemes attempt to do this, none of them address the ill-conditioning problem directly. In this paper, we propose a maximum likelihood approach, with the direct goal of obtaining a well-conditioned estimator. No sparsity assumption on either the covariance matrix or its inverse is imposed, thus making our procedure more widely applicable. We demonstrate that the proposed regularization scheme is computationally efficient, yields a type of Steinian shrinkage estimator, and has a natural Bayesian interpretation. We investigate the theoretical properties of the regularized covariance estimator comprehensively, including its regularization path, and proceed to develop an approach that adaptively determines the level of regularization that is required. Finally, we demonstrate the performance of the regularized estimator in decision-theoretic comparisons and in the financial portfolio optimization setting. The proposed approach has desirable properties, and can serve as a competitive procedure, especially when the sample size is small and when a well-conditioned estimator is required. PMID:23730197

  8. Condition Number Regularized Covariance Estimation.

    PubMed

    Won, Joong-Ho; Lim, Johan; Kim, Seung-Jean; Rajaratnam, Bala

    2013-06-01

    Estimation of high-dimensional covariance matrices is known to be a difficult problem, has many applications, and is of current interest to the larger statistics community. In many applications, including the so-called "large p small n" setting, the estimate of the covariance matrix is required to be not only invertible, but also well-conditioned. Although many regularization schemes attempt to do this, none of them address the ill-conditioning problem directly. In this paper, we propose a maximum likelihood approach, with the direct goal of obtaining a well-conditioned estimator. No sparsity assumption on either the covariance matrix or its inverse is imposed, thus making our procedure more widely applicable. We demonstrate that the proposed regularization scheme is computationally efficient, yields a type of Steinian shrinkage estimator, and has a natural Bayesian interpretation. We investigate the theoretical properties of the regularized covariance estimator comprehensively, including its regularization path, and proceed to develop an approach that adaptively determines the level of regularization that is required. Finally, we demonstrate the performance of the regularized estimator in decision-theoretic comparisons and in the financial portfolio optimization setting. The proposed approach has desirable properties, and can serve as a competitive procedure, especially when the sample size is small and when a well-conditioned estimator is required.

  9. On structure-exploiting trust-region regularized nonlinear least squares algorithms for neural-network learning.

    PubMed

    Mizutani, Eiji; Demmel, James W

    2003-01-01

    This paper briefly introduces our numerical linear algebra approaches for solving structured nonlinear least squares problems arising from 'multiple-output' neural-network (NN) models. Our algorithms feature trust-region regularization, and exploit sparsity of either the 'block-angular' residual Jacobian matrix or the 'block-arrow' Gauss-Newton Hessian (or Fisher information matrix in statistical sense) depending on problem scale so as to render a large class of NN-learning algorithms 'efficient' in both memory and operation costs. Using a relatively large real-world nonlinear regression application, we shall explain algorithmic strengths and weaknesses, analyzing simulation results obtained by both direct and iterative trust-region algorithms with two distinct NN models: 'multilayer perceptrons' (MLP) and 'complementary mixtures of MLP-experts' (or neuro-fuzzy modular networks).

  10. Identification of spatially-localized initial conditions via sparse PCA

    NASA Astrophysics Data System (ADS)

    Dwivedi, Anubhav; Jovanovic, Mihailo

    2017-11-01

    Principal Component Analysis involves maximization of a quadratic form subject to a quadratic constraint on the initial flow perturbations and it is routinely used to identify the most energetic flow structures. For general flow configurations, principal components can be efficiently computed via power iteration of the forward and adjoint governing equations. However, the resulting flow structures typically have a large spatial support leading to a question of physical realizability. To obtain spatially-localized structures, we modify the quadratic constraint on the initial condition to include a convex combination with an additional regularization term which promotes sparsity in the physical domain. We formulate this constrained optimization problem as a nonlinear eigenvalue problem and employ an inverse power-iteration-based method to solve it. The resulting solution is guaranteed to converge to a nonlinear eigenvector which becomes increasingly localized as our emphasis on sparsity increases. We use several fluids examples to demonstrate that our method indeed identifies the most energetic initial perturbations that are spatially compact. This work was supported by Office of Naval Research through Grant Number N00014-15-1-2522.
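
    As a loose illustration of sparsity-promoting power iteration, the sketch below soft-thresholds and renormalizes each iterate of an ordinary power method on a covariance-like matrix C; in the flow problem C would instead be applied through forward/adjoint simulations, and this is not the authors' nonlinear-eigenvalue formulation.

      import numpy as np

      def sparse_leading_component(C, lam, n_iter=200, seed=0):
          """Leading direction of C with an l1-type shrinkage inside the
          power iteration; larger lam yields a more spatially localized vector."""
          rng = np.random.default_rng(seed)
          v = rng.standard_normal(C.shape[0])
          v /= np.linalg.norm(v)
          for _ in range(n_iter):
              w = C @ v
              w = np.sign(w) * np.maximum(np.abs(w) - lam, 0.0)   # promote sparsity
              nrm = np.linalg.norm(w)
              if nrm == 0:
                  break          # lam too large: everything was thresholded away
              v = w / nrm
          return v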

  11. Time domain localization technique with sparsity constraint for imaging acoustic sources

    NASA Astrophysics Data System (ADS)

    Padois, Thomas; Doutres, Olivier; Sgard, Franck; Berry, Alain

    2017-09-01

    This paper addresses a time-domain source localization technique for broadband acoustic sources. The objective is to accurately and quickly detect the position and amplitude of noise sources in workplaces in order to propose adequate noise control options and prevent workers' hearing loss or safety risks. First, the generalized cross correlation associated with a spherical microphone array is used to generate an initial noise source map. Then a linear inverse problem is defined to improve this initial map. Commonly, the linear inverse problem is solved with an l2-regularization. In this study, two sparsity constraints are used to solve the inverse problem, the orthogonal matching pursuit and the truncated Newton interior-point method. Synthetic data are used to highlight the performance of the technique. High resolution imaging is achieved for various acoustic source configurations. Moreover, the amplitudes of the acoustic sources are correctly estimated. A comparison of computation times shows that the technique is compatible with quasi real-time generation of noise source maps. Finally, the technique is tested with real data.
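
    Of the two sparsity constraints mentioned, orthogonal matching pursuit is the simpler to sketch: greedily select the dictionary column most correlated with the residual, then re-fit all selected amplitudes by least squares, as below. The dictionary A of candidate source positions and all sizes are placeholders, not the paper's setup.

      import numpy as np

      def omp(A, y, n_nonzero):
          """Orthogonal matching pursuit for y ≈ A x with at most n_nonzero atoms."""
          residual = y.copy()
          support = []
          x = np.zeros(A.shape[1])
          coeffs = np.zeros(0)
          for _ in range(n_nonzero):
              j = int(np.argmax(np.abs(A.T @ residual)))   # best matching atom
              if j not in support:
                  support.append(j)
              coeffs, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
              residual = y - A[:, support] @ coeffs        # re-fit, update residual
          x[support] = coeffs
          return x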

  12. EIT Imaging Regularization Based on Spectral Graph Wavelets.

    PubMed

    Gong, Bo; Schullcke, Benjamin; Krueger-Ziolek, Sabine; Vauhkonen, Marko; Wolf, Gerhard; Mueller-Lisse, Ullrich; Moeller, Knut

    2017-09-01

    The objective of electrical impedance tomographic reconstruction is to identify the distribution of tissue conductivity from electrical boundary conditions. This is an ill-posed inverse problem usually solved under the finite-element method framework. In previous studies, standard sparse regularization was used for difference electrical impedance tomography to achieve a sparse solution. However, regarding elementwise sparsity, standard sparse regularization interferes with the smoothness of the conductivity distribution between neighboring elements and is sensitive to noise. As a result, the reconstructed images are spiky and lack smoothness. Such unexpected artifacts are not realistic and may lead to misinterpretation in clinical applications. To eliminate such artifacts, we present a novel sparse regularization method that uses spectral graph wavelet transforms. Single-scale or multiscale graph wavelet transforms are employed to introduce local smoothness on different scales into the reconstructed images. The proposed approach relies on viewing finite-element meshes as undirected graphs and applying wavelet transforms derived from spectral graph theory. Reconstruction results from simulations, a phantom experiment, and patient data suggest that our algorithm is more robust to noise and produces more reliable images.

  13. Iterative Correction Scheme Based on Discrete Cosine Transform and L1 Regularization for Fluorescence Molecular Tomography With Background Fluorescence.

    PubMed

    Zhang, Jiulou; Shi, Junwei; Guang, Huizhi; Zuo, Simin; Liu, Fei; Bai, Jing; Luo, Jianwen

    2016-06-01

    High-intensity background fluorescence is generally encountered in fluorescence molecular tomography (FMT), because of the accumulation of fluorescent probes in nontarget tissues or the existence of autofluorescence in biological tissues. The reconstruction results are affected or even distorted by the background fluorescence, especially when the distribution of fluorescent targets is relatively sparse. The purpose of this paper is to reduce the negative effect of background fluorescence on FMT reconstruction. After each iteration of the Tikhonov regularization algorithm, 3-D discrete cosine transform is adopted to filter the intermediate results. And then, a sparsity constraint step based on L1 regularization is applied to restrain the energy of the objective function. Phantom experiments with different fluorescence intensities of homogeneous and heterogeneous background are carried out to validate the performance of the proposed scheme. The results show that the reconstruction quality can be improved with the proposed iterative correction scheme. The influence of background fluorescence in FMT can be reduced effectively because of the filtering of the intermediate results, the detail preservation, and noise suppression of L1 regularization.

  14. ACCELERATING MR PARAMETER MAPPING USING SPARSITY-PROMOTING REGULARIZATION IN PARAMETRIC DIMENSION

    PubMed Central

    Velikina, Julia V.; Alexander, Andrew L.; Samsonov, Alexey

    2013-01-01

    MR parameter mapping requires sampling along an additional (parametric) dimension, which often limits its clinical appeal due to a several-fold increase in scan times compared to conventional anatomic imaging. Data undersampling combined with parallel imaging is an attractive way to reduce scan time in such applications. However, inherent SNR penalties of parallel MRI due to noise amplification often limit its utility even at moderate acceleration factors, requiring regularization by prior knowledge. In this work, we propose a novel regularization strategy, which utilizes smoothness of signal evolution in the parametric dimension within a compressed sensing framework (p-CS) to provide accurate and precise estimation of parametric maps from undersampled data. The performance of the method was demonstrated with variable flip angle T1 mapping and compared favorably to two representative reconstruction approaches, image space-based total variation regularization and an analytical model-based reconstruction. The proposed p-CS regularization was found to provide efficient suppression of noise amplification and preservation of parameter mapping accuracy without explicit utilization of analytical signal models. The developed method may facilitate acceleration of quantitative MRI techniques that are not suited to model-based reconstruction because of complex signal models or when signal deviations from the expected analytical model exist. PMID:23213053

  15. Brain source localization: A new method based on MUltiple SIgnal Classification algorithm and spatial sparsity of the field signal for electroencephalogram measurements

    NASA Astrophysics Data System (ADS)

    Vergallo, P.; Lay-Ekuakille, A.

    2013-08-01

    Brain activity can be recorded by means of EEG (Electroencephalogram) electrodes placed on the scalp of the patient. The EEG reflects the activity of groups of neurons located in the head, and the fundamental problem in neurophysiology is the identification of the sources responsible for brain activity, especially when a seizure occurs, in which case it is important to identify its origin. The studies conducted to formalize the relationship between the electromagnetic activity in the head and the recording of the generated external field allow patterns of brain activity to be characterized. The inverse problem, in which the underlying sources must be determined given the field sampled at different electrodes, is more difficult because the problem may not have a unique solution, or the search for the solution is hampered by a low spatial resolution that may not allow activities involving sources close to each other to be distinguished. Thus, sources of interest may be obscured or not detected, and a known method for the source localization problem such as MUSIC (MUltiple SIgnal Classification) could fail. Many advanced source localization techniques achieve better resolution by exploiting sparsity: if the number of sources is small, then the neural power as a function of location is sparse. In this work a solution based on the spatial sparsity of the field signal is presented and analyzed to improve the MUSIC method. For this purpose, it is necessary to set a priori information on the sparsity of the signal. The problem is formulated and solved using a Tikhonov-type regularization method, which calculates a solution that is the best compromise between two cost functions to be minimized, one related to fitting the data and the other to maintaining the sparsity of the signal. First, the method is tested on simulated EEG signals obtained by solving the forward problem. For the head and brain-source model considered, the result yields a significant improvement compared to the classical MUSIC method, with a small margin of uncertainty about the exact location of the sources. In fact, the spatial sparsity constraints on the signal field concentrate power in the directions of active sources, and consequently it is possible to calculate the positions of the sources within the considered volume conductor. The method is then also tested on real EEG data. The result is in accordance with the clinical report, even if improvements are necessary to obtain more accurate estimates of the positions of the sources.

  16. Gene selection in cancer classification using sparse logistic regression with Bayesian regularization.

    PubMed

    Cawley, Gavin C; Talbot, Nicola L C

    2006-10-01

    Gene selection algorithms for cancer classification, based on the expression of a small number of biomarker genes, have been the subject of considerable research in recent years. Shevade and Keerthi propose a gene selection algorithm based on sparse logistic regression (SLogReg) incorporating a Laplace prior to promote sparsity in the model parameters, and provide a simple but efficient training procedure. The degree of sparsity obtained is determined by the value of a regularization parameter, which must be carefully tuned in order to optimize performance. This normally involves a model selection stage, based on a computationally intensive search for the minimizer of the cross-validation error. In this paper, we demonstrate that a simple Bayesian approach can be taken to eliminate this regularization parameter entirely, by integrating it out analytically using an uninformative Jeffrey's prior. The improved algorithm (BLogReg) is then typically two or three orders of magnitude faster than the original algorithm, as there is no longer a need for a model selection step. The BLogReg algorithm is also free from selection bias in performance estimation, a common pitfall in the application of machine learning algorithms in cancer classification. The SLogReg, BLogReg and Relevance Vector Machine (RVM) gene selection algorithms are evaluated over the well-studied colon cancer and leukaemia benchmark datasets. The leave-one-out estimates of the probability of test error and cross-entropy of the BLogReg and SLogReg algorithms are very similar; however, the BLogReg algorithm is found to be considerably faster than the original SLogReg algorithm. Using nested cross-validation to avoid selection bias, performance estimation for SLogReg on the leukaemia dataset takes almost 48 h, whereas the corresponding result for BLogReg is obtained in only 1 min 24 s, making BLogReg by far the more practical algorithm. BLogReg also demonstrates better estimates of conditional probability than the RVM, which are of great importance in medical applications, with similar computational expense. A MATLAB implementation of the sparse logistic regression algorithm with Bayesian regularization (BLogReg) is available from http://theoval.cmp.uea.ac.uk/~gcc/cbl/blogreg/

  17. Point-spread function reconstruction in ground-based astronomy by l(1)-l(p) model.

    PubMed

    Chan, Raymond H; Yuan, Xiaoming; Zhang, Wenxing

    2012-11-01

    In ground-based astronomy, images of objects in outer space are acquired via ground-based telescopes. However, the imaging system is generally affected by atmospheric turbulence, and hence images so acquired are blurred with an unknown point-spread function (PSF). To restore the observed images, the wavefront of light at the telescope's aperture is utilized to derive the PSF. A model with the Tikhonov regularization has been proposed to find the high-resolution phase gradients by solving a least-squares system. Here we propose the l(1)-l(p) (p=1, 2) model for reconstructing the phase gradients. This model can provide sharper edges in the gradients while removing noise. The minimization models can easily be solved by the Douglas-Rachford alternating direction method of multipliers, and the convergence rate is readily established. Numerical results are given to illustrate that the model can give better phase gradients and hence a more accurate PSF. As a result, the restored images are much more accurate when compared to the traditional Tikhonov regularization model.

  18. Sparsity prediction and application to a new steganographic technique

    NASA Astrophysics Data System (ADS)

    Phillips, David; Noonan, Joseph

    2004-10-01

    Steganography is a technique of embedding information in innocuous data such that only the innocent data is visible. The wavelet transform lends itself to image steganography because it generates a large number of coefficients representing the information in the image. Altering a small set of these coefficients allows embedding of information (payload) into an image (cover) without noticeably altering the original image. We propose a novel, dual-wavelet steganographic technique, using transforms selected such that the transform of the cover image has low sparsity, while the payload transform has high sparsity. Maximizing the sparsity of the payload transform reduces the amount of information embedded in the cover, and minimizing the sparsity of the cover increases the locations that can be altered without significantly altering the image. Making this system effective on any given image pair requires a metric to indicate the best (maximum sparsity) and worst (minimum sparsity) wavelet transforms to use. This paper develops the first stage of this metric, which can predict, averaged across many wavelet families, which of two images will have a higher sparsity. A prototype implementation of the dual-wavelet system as a proof of concept is also developed.

  19. Robust Kriged Kalman Filtering

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baingana, Brian; Dall'Anese, Emiliano; Mateos, Gonzalo

    2015-11-11

    Although the kriged Kalman filter (KKF) has well-documented merits for prediction of spatial-temporal processes, its performance degrades in the presence of outliers due to anomalous events, or measurement equipment failures. This paper proposes a robust KKF model that explicitly accounts for presence of measurement outliers. Exploiting outlier sparsity, a novel l1-regularized estimator that jointly predicts the spatial-temporal process at unmonitored locations, while identifying measurement outliers is put forth. Numerical tests are conducted on a synthetic Internet protocol (IP) network, and real transformer load data. Test results corroborate the effectiveness of the novel estimator in joint spatial prediction and outlier identification.

  20. Superiorization-based multi-energy CT image reconstruction

    PubMed Central

    Yang, Q; Cong, W; Wang, G

    2017-01-01

    The recently-developed superiorization approach is efficient and robust for solving various constrained optimization problems. This methodology can be applied to multi-energy CT image reconstruction with the regularization in terms of the prior rank, intensity and sparsity model (PRISM). In this paper, we propose a superiorized version of the simultaneous algebraic reconstruction technique (SART) based on the PRISM model. Then, we compare the proposed superiorized algorithm with the Split-Bregman algorithm in numerical experiments. The results show that both the Superiorized-SART and the Split-Bregman algorithms generate good results with weak noise and reduced artefacts. PMID:28983142

  1. SIRF: Simultaneous Satellite Image Registration and Fusion in a Unified Framework.

    PubMed

    Chen, Chen; Li, Yeqing; Liu, Wei; Huang, Junzhou

    2015-11-01

    In this paper, we propose a novel method for image fusion with a high-resolution panchromatic image and a low-resolution multispectral (Ms) image at the same geographical location. The fusion is formulated as a convex optimization problem which minimizes a linear combination of a least-squares fitting term and a dynamic gradient sparsity regularizer. The former is to preserve accurate spectral information of the Ms image, while the latter is to keep sharp edges of the high-resolution panchromatic image. We further propose to simultaneously register the two images during the fusing process, which is naturally achieved by virtue of the dynamic gradient sparsity property. An efficient algorithm is then devised to solve the optimization problem, accomplishing a linear computational complexity in the size of the output image in each iteration. We compare our method against six state-of-the-art image fusion methods on Ms image data sets from four satellites. Extensive experimental results demonstrate that the proposed method substantially outperforms the others in terms of both spatial and spectral qualities. We also show that our method can provide high-quality products from coarsely registered real-world IKONOS data sets. Finally, a MATLAB implementation is provided to facilitate future research.

  2. Wavelet-based localization of oscillatory sources from magnetoencephalography data.

    PubMed

    Lina, J M; Chowdhury, R; Lemay, E; Kobayashi, E; Grova, C

    2014-08-01

    Transient brain oscillatory activities recorded with electroencephalography (EEG) or magnetoencephalography (MEG) are characteristic features in physiological and pathological processes. This study is aimed at describing, evaluating, and illustrating with clinical data a new method for localizing the sources of oscillatory cortical activity recorded by MEG. The method combines time-frequency representation and an entropic regularization technique in a common framework, assuming that brain activity is sparse in time and space. Spatial sparsity relies on the assumption that brain activity is organized among cortical parcels. Sparsity in time is achieved by transposing the inverse problem in the wavelet representation, for both data and sources. We propose an estimator of the wavelet coefficients of the sources based on the maximum entropy on the mean (MEM) principle. The full dynamics of the sources is obtained from the inverse wavelet transform, and principal component analysis of the reconstructed time courses is applied to extract oscillatory components. This methodology is evaluated using realistic simulations of single-trial signals, combining fast and sudden discharges (spikes) along with bursts of oscillating activity. The method is finally illustrated with a clinical application using MEG data acquired on a patient with a right orbitofrontal epilepsy.

  3. Efficient operator splitting algorithm for joint sparsity-regularized SPIRiT-based parallel MR imaging reconstruction.

    PubMed

    Duan, Jizhong; Liu, Yu; Jing, Peiguang

    2018-02-01

    Self-consistent parallel imaging (SPIRiT) is an auto-calibrating model for the reconstruction of parallel magnetic resonance imaging, which can be formulated as a regularized SPIRiT problem. The Projection Over Convex Sets (POCS) method was used to solve the formulated regularized SPIRiT problem; however, the quality of the reconstructed image still needs to be improved. Although methods such as nonlinear conjugate gradients (NLCG) can achieve higher spatial resolution, they demand complex computation and converge slowly. In this paper, we propose a new algorithm to solve the formulated Cartesian SPIRiT problem with joint total variation (JTV) and joint l1 (JL1) regularization terms. The proposed algorithm uses the operator splitting (OS) technique to decompose the problem into a gradient problem and a denoising problem with two regularization terms, which is solved by our proposed split Bregman based denoising algorithm, and adopts the Barzilai-Borwein method to update the step size. Simulation experiments on two in vivo data sets demonstrate that the proposed algorithm is 1.3 times faster than ADMM for datasets with 8 channels. In particular, it is 2 times faster than ADMM for the dataset with 32 channels. Copyright © 2017 Elsevier Inc. All rights reserved.

  4. Accelerated Edge-Preserving Image Restoration Without Boundary Artifacts

    PubMed Central

    Matakos, Antonios; Ramani, Sathish; Fessler, Jeffrey A.

    2013-01-01

    To reduce blur in noisy images, regularized image restoration methods have been proposed that use non-quadratic regularizers (like l1 regularization or total-variation) that suppress noise while preserving edges in the image. Most of these methods assume a circulant blur (periodic convolution with a blurring kernel) that can lead to wraparound artifacts along the boundaries of the image due to the implied periodicity of the circulant model. Using a non-circulant model could prevent these artifacts at the cost of increased computational complexity. In this work we propose to use a circulant blur model combined with a masking operator that prevents wraparound artifacts. The resulting model is non-circulant, so we propose an efficient algorithm using variable splitting and augmented Lagrangian (AL) strategies. Our variable splitting scheme, when combined with the AL framework and alternating minimization, leads to simple linear systems that can be solved non-iteratively using FFTs, eliminating the need for more expensive CG-type solvers. The proposed method can also efficiently tackle a variety of convex regularizers including edge-preserving (e.g., total-variation) and sparsity promoting (e.g., l1 norm) regularizers. Simulation results show fast convergence of the proposed method, along with improved image quality at the boundaries where the circulant model is inaccurate. PMID:23372080
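
    A small sketch of the modeling idea on a toy 2-D problem: the forward operator is A = MC, with C a circulant (FFT-diagonalizable) blur and M a binary mask discarding the boundary rows where the periodic model is invalid, and the quadratic subproblems arising in the AL scheme involve only C and are solved exactly with FFTs. The kernel, mask width, and rho are illustrative assumptions, and the full restoration algorithm is not reproduced.

```python
import numpy as np

def circulant_blur(x, kernel_fft):
    """Periodic (circulant) convolution implemented with FFTs."""
    return np.real(np.fft.ifft2(np.fft.fft2(x) * kernel_fft))

def make_mask(shape, border):
    """Binary mask keeping only the interior, where the periodic model is valid."""
    m = np.zeros(shape)
    m[border:-border, border:-border] = 1.0
    return m

def solve_circulant_quadratic(rhs, kernel_fft, rho):
    """Solve (C^T C + rho I) x = rhs exactly in the Fourier domain."""
    return np.real(np.fft.ifft2(np.fft.fft2(rhs) / (np.abs(kernel_fft) ** 2 + rho)))

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    img = rng.random((64, 64))
    psf = np.zeros((64, 64)); psf[:5, :5] = 1.0 / 25.0     # simple (shifted) box blur
    K = np.fft.fft2(psf)
    M = make_mask(img.shape, border=5)
    data = M * circulant_blur(img, K)                      # observation under the masked circulant model A = M C
    # One AL-style quadratic subproblem: x = argmin ||C x - v||^2 + rho * ||x - u||^2,
    # solved non-iteratively with FFTs because C is diagonalized by the Fourier transform.
    v, u, rho = circulant_blur(img, K), img, 1.0
    rhs = np.real(np.fft.ifft2(np.conj(K) * np.fft.fft2(v))) + rho * u
    x = solve_circulant_quadratic(rhs, K, rho)
    print("subproblem solve error:", np.linalg.norm(x - img) / np.linalg.norm(img))
```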

  5. Predictive sparse modeling of fMRI data for improved classification, regression, and visualization using the k-support norm.

    PubMed

    Belilovsky, Eugene; Gkirtzou, Katerina; Misyrlis, Michail; Konova, Anna B; Honorio, Jean; Alia-Klein, Nelly; Goldstein, Rita Z; Samaras, Dimitris; Blaschko, Matthew B

    2015-12-01

    We explore various sparse regularization techniques for analyzing fMRI data, such as the ℓ1 norm (often called LASSO in the context of a squared loss function), elastic net, and the recently introduced k-support norm. Employing sparsity regularization allows us to handle the curse of dimensionality, a problem commonly found in fMRI analysis. In this work we consider sparse regularization in both the regression and classification settings. We perform experiments on fMRI scans from cocaine-addicted as well as healthy control subjects. We show that in many cases, use of the k-support norm leads to better predictive performance, solution stability, and interpretability as compared to other standard approaches. We additionally analyze the advantages of using the absolute loss function versus the standard squared loss; the former leads to significantly better predictive performance for the regularization methods tested in almost all cases. Our results support the use of the k-support norm for fMRI analysis and, on the clinical side, the generalizability of the I-RISA model of cocaine addiction. Copyright © 2015 Elsevier Ltd. All rights reserved.

  6. Basis Expansion Approaches for Regularized Sequential Dictionary Learning Algorithms With Enforced Sparsity for fMRI Data Analysis.

    PubMed

    Seghouane, Abd-Krim; Iqbal, Asif

    2017-09-01

    Sequential dictionary learning algorithms have been successfully applied to functional magnetic resonance imaging (fMRI) data analysis. fMRI data sets are, however, structured data matrices with a notion of temporal smoothness along the column direction. This prior information, which can be converted into a constraint of smoothness on the learned dictionary atoms, has seldom been included in classical dictionary learning algorithms when applied to fMRI data analysis. In this paper, we tackle this problem by proposing two new sequential dictionary learning algorithms dedicated to fMRI data analysis by accounting for this prior information. These algorithms differ from the existing ones in their dictionary update stage. The steps of this stage are derived as a variant of the power method for computing the SVD. The proposed algorithms generate regularized dictionary atoms via the solution of a left regularized rank-one matrix approximation problem where temporal smoothness is enforced via regularization through basis expansion and sparse basis expansion in the dictionary update stage. Applications on synthetic data experiments and real fMRI data sets illustrating the performance of the proposed algorithms are provided.
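
    A hedged sketch of a dictionary-update step in the spirit described: a power-method-style rank-one approximation of a residual matrix in which the temporal atom is regularized by truncating its DCT expansion, standing in for the paper's basis-expansion regularization. The basis size, iteration count, and toy data are illustrative, and the sparse-coding stage is omitted.

```python
import numpy as np
from scipy.fft import dct, idct

def smooth_rank_one(R, n_basis=10, n_iter=50):
    """Rank-one approximation R ~ d s^T in which the atom d is kept temporally
    smooth by truncating its DCT expansion to n_basis coefficients."""
    T, N = R.shape
    d = np.random.default_rng(0).standard_normal(T)
    d /= np.linalg.norm(d)
    for _ in range(n_iter):
        s = R.T @ d                       # code direction (least squares given d)
        d = R @ s                         # power-method style atom update
        c = dct(d, norm="ortho")          # basis expansion of the atom
        c[n_basis:] = 0.0                 # regularize: keep only smooth components
        d = idct(c, norm="ortho")
        d /= np.linalg.norm(d) + 1e-12
    return d, s

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    t = np.linspace(0, 1, 200)
    atom = np.sin(2 * np.pi * 3 * t)                        # smooth ground-truth time course
    codes = rng.standard_normal(50)
    R = np.outer(atom, codes) + 0.3 * rng.standard_normal((200, 50))
    d, s = smooth_rank_one(R)
    print("correlation with true atom:", abs(np.corrcoef(d, atom)[0, 1]))
```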

  7. Accelerated Simulation of Kinetic Transport Using Variational Principles and Sparsity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Caflisch, Russel

    This project is centered on the development and application of techniques of sparsity and compressed sensing for variational principles, PDEs and physics problems, in particular for kinetic transport. This included the derivation of sparse modes for elliptic and parabolic problems arising from variational principles. The research results of this project concern methods for exploiting sparsity in differential equations, their applications, and the application of sparsity ideas to kinetic transport in plasmas.

  8. A Novel Sky-Subtraction Method Based on Non-negative Matrix Factorisation with Sparsity for Multi-object Fibre Spectroscopy

    NASA Astrophysics Data System (ADS)

    Zhang, Bo; Zhang, Long; Ye, Zhongfu

    2016-12-01

    A novel sky-subtraction method based on non-negative matrix factorisation with sparsity is proposed in this paper. The factorisation is redesigned for sky-subtraction by taking the characteristics of the skylights into account: it has two constraint terms, one for sparsity and the other for homogeneity. Unlike standard sky-subtraction techniques, such as B-spline curve fitting methods and Principal Component Analysis approaches, sky-subtraction based on non-negative matrix factorisation with sparsity offers higher accuracy and flexibility, and it is of value for sky-subtraction in multi-object fibre spectroscopic telescope surveys. To demonstrate the effectiveness and superiority of the proposed algorithm, experiments are performed on Large Sky Area Multi-Object Fiber Spectroscopic Telescope data, as the mechanisms of multi-object fibre spectroscopic telescopes are similar.
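
    A minimal sketch of sparsity-penalized NMF using standard multiplicative updates with an l1 penalty on the activations; it omits the homogeneity constraint and all astronomy-specific details, and the toy "sky spectra", rank, and penalty weight are illustrative assumptions.

```python
import numpy as np

def sparse_nmf(V, rank, lam=0.1, n_iter=200, eps=1e-9):
    """Non-negative matrix factorisation V ~ W H with an l1 sparsity penalty on H,
    using standard multiplicative updates."""
    rng = np.random.default_rng(0)
    m, n = V.shape
    W = rng.random((m, rank)) + eps
    H = rng.random((rank, n)) + eps
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + lam + eps)   # the l1 penalty enters the denominator
        W *= (V @ H.T) / (W @ H @ H.T + eps)
        scale = W.sum(axis=0) + eps                  # rescale so W columns sum to one
        W /= scale
        H *= scale[:, None]                          # keep the product W H unchanged
    return W, H

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    # Toy "sky" spectra: a few non-negative basis components mixed across fibres.
    basis = np.abs(rng.standard_normal((300, 3)))
    weights = np.abs(rng.standard_normal((3, 40)))
    V = basis @ weights + 0.01 * np.abs(rng.standard_normal((300, 40)))
    W, H = sparse_nmf(V, rank=3)
    print("relative reconstruction error:", np.linalg.norm(V - W @ H) / np.linalg.norm(V))
```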

  9. The effect of the menstrual cycle and water consumption on physiological responses during prolonged exercise at moderate intensity in hot conditions.

    PubMed

    Hashimoto, Hideki; Ishijima, Toshimichi; Suzuki, Katsuhiko; Higuchi, Mitsuru

    2016-09-01

    Reproductive hormones are likely to be involved in thermoregulation through body fluid dynamics. In the present study, we aimed to investigate the effect of the menstrual cycle and water consumption on physiological responses to prolonged exercise at moderate intensity in hot conditions. Eight healthy young women with regular menstrual cycles performed cycling exercise for 90 minutes at 50% V̇O2peak intensity during the low progesterone (LP) level phase and high progesterone (HP) level phase, with or without water consumption, under hot conditions (30°C, 50% relative humidity). For the water consumption trials, subjects ingested water equivalent to the loss in body weight that occurred in the earlier non-consumption trial. For all four trials, rectal temperature, cardiorespiratory responses, and ratings of perceived exertion (RPE) were measured. Throughout the 90-minute exercise period, rectal temperatures during HP were higher than during LP by an average of 0.4 °C in the non-consumption trial (P<0.01) and 0.2 °C in the water consumption trial (P<0.05). During exercise, water consumption affected the changes in rectal temperature and heart rate (HR) during HP, but it did not exert these effects during LP. Furthermore, we found a negative correlation between estradiol levels and rectal temperature during LP. During prolonged exercise at moderate intensity under hot conditions, water consumption is likely to be useful for suppressing the associated increase in body temperature and HR, particularly during HP, whereas estradiol appears to be useful for suppressing the increase in rectal temperature during LP.

  10. Single image super-resolution based on approximated Heaviside functions and iterative refinement

    PubMed Central

    Wang, Xin-Yu; Huang, Ting-Zhu; Deng, Liang-Jian

    2018-01-01

    One method of solving the single-image super-resolution problem is to use Heaviside functions. This has been done previously by making a binary classification of image components as “smooth” and “non-smooth”, describing these with approximated Heaviside functions (AHFs), and iteration including l1 regularization. We now introduce a new method in which the binary classification of image components is extended to different degrees of smoothness and non-smoothness, these components being represented by various classes of AHFs. Taking into account the sparsity of the non-smooth components, their coefficients are l1 regularized. In addition, to pick up more image details, the new method uses an iterative refinement for the residuals between the original low-resolution input and the downsampled resulting image. Experimental results showed that the new method is superior to the original AHF method and to four other published methods. PMID:29329298

  11. Motion-compensated compressed sensing for dynamic contrast-enhanced MRI using regional spatiotemporal sparsity and region tracking: Block LOw-rank Sparsity with Motion-guidance (BLOSM)

    PubMed Central

    Chen, Xiao; Salerno, Michael; Yang, Yang; Epstein, Frederick H.

    2014-01-01

    Purpose Dynamic contrast-enhanced MRI of the heart is well-suited for acceleration with compressed sensing (CS) due to its spatiotemporal sparsity; however, respiratory motion can degrade sparsity and lead to image artifacts. We sought to develop a motion-compensated CS method for this application. Methods A new method, Block LOw-rank Sparsity with Motion-guidance (BLOSM), was developed to accelerate first-pass cardiac MRI, even in the presence of respiratory motion. This method divides the images into regions, tracks the regions through time, and applies matrix low-rank sparsity to the tracked regions. BLOSM was evaluated using computer simulations and first-pass cardiac datasets from human subjects. Using rate-4 acceleration, BLOSM was compared to other CS methods such as k-t SLR that employs matrix low-rank sparsity applied to the whole image dataset, with and without motion tracking, and to k-t FOCUSS with motion estimation and compensation that employs spatial and temporal-frequency sparsity. Results BLOSM was qualitatively shown to reduce respiratory artifact compared to other methods. Quantitatively, using root mean squared error and the structural similarity index, BLOSM was superior to other methods. Conclusion BLOSM, which exploits regional low rank structure and uses region tracking for motion compensation, provides improved image quality for CS-accelerated first-pass cardiac MRI. PMID:24243528
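
    A small sketch of the regional low-rank ingredient only: singular-value soft-thresholding applied to the Casorati matrix (pixels x time) of each spatial block. Motion tracking, data consistency, and the full reconstruction are omitted; the block size and threshold are illustrative.

```python
import numpy as np

def svt(M, tau):
    """Singular-value soft-thresholding: the proximal operator of the nuclear norm."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def blockwise_low_rank(frames, block=16, tau=4.0):
    """Apply SVT to each spatial block's Casorati matrix (pixels x time).
    In BLOSM the blocks would first be motion-tracked; here they are static."""
    T, H, W = frames.shape
    out = frames.copy()
    for i in range(0, H, block):
        for j in range(0, W, block):
            region = frames[:, i:i+block, j:j+block]            # time x h x w
            casorati = region.reshape(T, -1).T                   # pixels x time
            out[:, i:i+block, j:j+block] = svt(casorati, tau).T.reshape(region.shape)
    return out

if __name__ == "__main__":
    rng = np.random.default_rng(5)
    base = rng.random((32, 32))
    frames = np.stack([base * (1 + 0.1 * t) for t in range(10)])   # low-rank dynamics
    noisy = frames + 0.2 * rng.standard_normal(frames.shape)
    denoised = blockwise_low_rank(noisy, block=16, tau=4.0)
    print("noisy error   :", np.linalg.norm(noisy - frames))
    print("denoised error:", np.linalg.norm(denoised - frames))
```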

  12. GMove: Group-Level Mobility Modeling Using Geo-Tagged Social Media.

    PubMed

    Zhang, Chao; Zhang, Keyang; Yuan, Quan; Zhang, Luming; Hanratty, Tim; Han, Jiawei

    2016-08-01

    Understanding human mobility is of great importance to various applications, such as urban planning, traffic scheduling, and location prediction. While there has been fruitful research on modeling human mobility using tracking data ( e.g. , GPS traces), the recent growth of geo-tagged social media (GeoSM) brings new opportunities to this task because of its sheer size and multi-dimensional nature. Nevertheless, how to obtain quality mobility models from the highly sparse and complex GeoSM data remains a challenge that cannot be readily addressed by existing techniques. We propose GMove, a group-level mobility modeling method using GeoSM data. Our insight is that the GeoSM data usually contains multiple user groups, where the users within the same group share significant movement regularity. Meanwhile, user grouping and mobility modeling are two intertwined tasks: (1) better user grouping offers better within-group data consistency and thus leads to more reliable mobility models; and (2) better mobility models serve as useful guidance that helps infer the group a user belongs to. GMove thus alternates between user grouping and mobility modeling, and generates an ensemble of Hidden Markov Models (HMMs) to characterize group-level movement regularity. Furthermore, to reduce text sparsity of GeoSM data, GMove also features a text augmenter. The augmenter computes keyword correlations by examining their spatiotemporal distributions. With such correlations as auxiliary knowledge, it performs sampling-based augmentation to alleviate text sparsity and produce high-quality HMMs. Our extensive experiments on two real-life data sets demonstrate that GMove can effectively generate meaningful group-level mobility models. Moreover, with context-aware location prediction as an example application, we find that GMove significantly outperforms baseline mobility models in terms of prediction accuracy.

  13. A Dictionary Learning Approach with Overlap for the Low Dose Computed Tomography Reconstruction and Its Vectorial Application to Differential Phase Tomography

    PubMed Central

    Mirone, Alessandro; Brun, Emmanuel; Coan, Paola

    2014-01-01

    X-ray based Phase-Contrast Imaging (PCI) techniques have been demonstrated to enhance the visualization of soft tissues in comparison to conventional imaging methods. Nevertheless the delivered dose as reported in the literature of biomedical PCI applications often equals or exceeds the limits prescribed in clinical diagnostics. The optimization of new computed tomography strategies which include the development and implementation of advanced image reconstruction procedures is thus a key aspect. In this scenario, we implemented a dictionary learning method with a new form of convex functional. This functional contains, in addition to the usual sparsity-inducing and fidelity terms, a new term which forces similarity between overlapping patches in the superimposed regions. The functional depends on two free regularization parameters: a coefficient multiplying the sparsity-inducing L1 norm of the patch basis functions coefficients, and a coefficient multiplying the L2 norm of the differences between patches in the overlapping regions. The solution is found by applying the iterative proximal gradient descent method with FISTA acceleration. The gradient is computed by calculating projection of the solution and its error backprojection at each iterative step. We study the quality of the solution, as a function of the regularization parameters and noise, on synthetic data for which the solution is a-priori known. We apply the method on experimental data in the case of Differential Phase Tomography. For this case we use an original approach which consists in using vectorial patches, each patch having two components: one per each gradient component. The resulting algorithm, implemented in the European Synchrotron Radiation Facility tomography reconstruction code PyHST, has proven to be efficient and well-adapted to strongly reduce the required dose and the number of projections in medical tomography. PMID:25531987

  14. A dictionary learning approach with overlap for the low dose computed tomography reconstruction and its vectorial application to differential phase tomography.

    PubMed

    Mirone, Alessandro; Brun, Emmanuel; Coan, Paola

    2014-01-01

    X-ray based Phase-Contrast Imaging (PCI) techniques have been demonstrated to enhance the visualization of soft tissues in comparison to conventional imaging methods. Nevertheless the delivered dose as reported in the literature of biomedical PCI applications often equals or exceeds the limits prescribed in clinical diagnostics. The optimization of new computed tomography strategies which include the development and implementation of advanced image reconstruction procedures is thus a key aspect. In this scenario, we implemented a dictionary learning method with a new form of convex functional. This functional contains in addition to the usual sparsity inducing and fidelity terms, a new term which forces similarity between overlapping patches in the superimposed regions. The functional depends on two free regularization parameters: a coefficient multiplying the sparsity-inducing L1 norm of the patch basis functions coefficients, and a coefficient multiplying the L2 norm of the differences between patches in the overlapping regions. The solution is found by applying the iterative proximal gradient descent method with FISTA acceleration. The gradient is computed by calculating projection of the solution and its error backprojection at each iterative step. We study the quality of the solution, as a function of the regularization parameters and noise, on synthetic data for which the solution is a-priori known. We apply the method on experimental data in the case of Differential Phase Tomography. For this case we use an original approach which consists in using vectorial patches, each patch having two components: one per each gradient component. The resulting algorithm, implemented in the European Synchrotron Radiation Facility tomography reconstruction code PyHST, has proven to be efficient and well-adapted to strongly reduce the required dose and the number of projections in medical tomography.

  15. GMove: Group-Level Mobility Modeling Using Geo-Tagged Social Media

    PubMed Central

    Zhang, Chao; Zhang, Keyang; Yuan, Quan; Zhang, Luming; Hanratty, Tim; Han, Jiawei

    2017-01-01

    Understanding human mobility is of great importance to various applications, such as urban planning, traffic scheduling, and location prediction. While there has been fruitful research on modeling human mobility using tracking data (e.g., GPS traces), the recent growth of geo-tagged social media (GeoSM) brings new opportunities to this task because of its sheer size and multi-dimensional nature. Nevertheless, how to obtain quality mobility models from the highly sparse and complex GeoSM data remains a challenge that cannot be readily addressed by existing techniques. We propose GMove, a group-level mobility modeling method using GeoSM data. Our insight is that the GeoSM data usually contains multiple user groups, where the users within the same group share significant movement regularity. Meanwhile, user grouping and mobility modeling are two intertwined tasks: (1) better user grouping offers better within-group data consistency and thus leads to more reliable mobility models; and (2) better mobility models serve as useful guidance that helps infer the group a user belongs to. GMove thus alternates between user grouping and mobility modeling, and generates an ensemble of Hidden Markov Models (HMMs) to characterize group-level movement regularity. Furthermore, to reduce text sparsity of GeoSM data, GMove also features a text augmenter. The augmenter computes keyword correlations by examining their spatiotemporal distributions. With such correlations as auxiliary knowledge, it performs sampling-based augmentation to alleviate text sparsity and produce high-quality HMMs. Our extensive experiments on two real-life data sets demonstrate that GMove can effectively generate meaningful group-level mobility models. Moreover, with context-aware location prediction as an example application, we find that GMove significantly outperforms baseline mobility models in terms of prediction accuracy. PMID:28163978

  16. A three-step reconstruction method for fluorescence molecular tomography based on compressive sensing

    NASA Astrophysics Data System (ADS)

    Zhu, Yansong; Jha, Abhinav K.; Dreyer, Jakob K.; Le, Hanh N. D.; Kang, Jin U.; Roland, Per E.; Wong, Dean F.; Rahmim, Arman

    2017-02-01

    Fluorescence molecular tomography (FMT) is a promising tool for real time in vivo quantification of neurotransmission (NT) as we pursue in our BRAIN initiative effort. However, the acquired image data are noisy and the reconstruction problem is ill-posed. Further, while spatial sparsity of the NT effects could be exploited, traditional compressive-sensing methods cannot be directly applied as the system matrix in FMT is highly coherent. To overcome these issues, we propose and assess a three-step reconstruction method. First, truncated singular value decomposition is applied on the data to reduce matrix coherence. The resultant image data are input to a homotopy-based reconstruction strategy that exploits sparsity via l1 regularization. The reconstructed image is then input to a maximum-likelihood expectation maximization (MLEM) algorithm that retains the sparseness of the input estimate and improves upon the quantitation by accurate Poisson noise modeling. The proposed reconstruction method was evaluated in a three-dimensional simulated setup with fluorescent sources in a cuboidal scattering medium with optical properties simulating human brain cortex (reduced scattering coefficient: 9.2 cm-1, absorption coefficient: 0.1 cm-1), with tomographic measurements made using pixelated detectors. In different experiments, fluorescent sources of varying size and intensity were simulated. The proposed reconstruction method provided accurate estimates of the fluorescent source intensity, with a 20% lower root mean square error on average compared to the pure-homotopy method for all considered source intensities and sizes. Further, compared with a conventional l2-regularized algorithm, the proposed method overall reconstructed a substantially more accurate fluorescence distribution. The proposed method shows considerable promise and will be tested using more realistic simulations and experimental setups.
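
    A hedged sketch of the three-step structure (TSVD preconditioning, sparsity-promoting l1 solve, Poisson MLEM refinement), with plain ISTA standing in for the homotopy solver and a random nonnegative matrix standing in for the FMT forward model. Every name and parameter here is an illustrative assumption.

```python
import numpy as np

def tsvd_precondition(A, y, k):
    """Step 1: truncate small singular values to reduce matrix coherence."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    T = np.diag(1.0 / s[:k]) @ U[:, :k].T
    return T @ A, T @ y

def ista(A, y, lam, n_iter=300):
    """Step 2: l1-regularized least squares (ISTA stands in for the homotopy solver)."""
    L = np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        z = x - (A.T @ (A @ x - y)) / L
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)
    return np.maximum(x, 0.0)               # nonnegative warm start for MLEM

def mlem(A, y, x0, n_iter=50, eps=1e-12):
    """Step 3: Poisson maximum-likelihood EM refinement, warm-started at x0."""
    x = x0.copy() + eps
    sens = A.sum(axis=0) + eps
    for _ in range(n_iter):
        x *= (A.T @ (y / (A @ x + eps))) / sens
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(6)
    A = np.abs(rng.standard_normal((120, 400)))           # nonnegative stand-in system matrix
    x_true = np.zeros(400); x_true[[30, 200, 350]] = [5.0, 3.0, 4.0]
    y = rng.poisson(A @ x_true).astype(float)              # Poisson-distributed measurements
    A1, y1 = tsvd_precondition(A, y, k=60)
    x_sparse = ista(A1, y1, lam=0.05)
    x_hat = mlem(A, y, x_sparse)
    print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```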

  17. Accelerated high-resolution photoacoustic tomography via compressed sensing

    NASA Astrophysics Data System (ADS)

    Arridge, Simon; Beard, Paul; Betcke, Marta; Cox, Ben; Huynh, Nam; Lucka, Felix; Ogunlade, Olumide; Zhang, Edward

    2016-12-01

    Current 3D photoacoustic tomography (PAT) systems offer either high image quality or high frame rates but are not able to deliver high spatial and temporal resolution simultaneously, which limits their ability to image dynamic processes in living tissue (4D PAT). A particular example is the planar Fabry-Pérot (FP) photoacoustic scanner, which yields high-resolution 3D images but takes several minutes to sequentially map the incident photoacoustic field on the 2D sensor plane, point-by-point. However, as the spatio-temporal complexity of many absorbing tissue structures is rather low, the data recorded in such a conventional, regularly sampled fashion is often highly redundant. We demonstrate that combining model-based, variational image reconstruction methods using spatial sparsity constraints with the development of novel PAT acquisition systems capable of sub-sampling the acoustic wave field can dramatically increase the acquisition speed while maintaining a good spatial resolution: first, we describe and model two general spatial sub-sampling schemes. Then, we discuss how to implement them using the FP interferometer and demonstrate the potential of these novel compressed sensing PAT devices through simulated data from a realistic numerical phantom and through measured data from a dynamic experimental phantom as well as from in vivo experiments. Our results show that images with good spatial resolution and contrast can be obtained from highly sub-sampled PAT data if variational image reconstruction techniques that describe the tissue structures with suitable sparsity constraints are used. In particular, we examine the use of total variation (TV) regularization enhanced by Bregman iterations. These novel reconstruction strategies offer new opportunities to dramatically increase the acquisition speed of photoacoustic scanners that employ point-by-point sequential scanning as well as reducing the channel count of parallelized schemes that use detector arrays.
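
    A minimal sketch of TV regularization enhanced by Bregman iterations in a simple denoising setting (the paper reconstructs from sub-sampled acoustic data, which is not reproduced here), assuming scikit-image is available for the Chambolle TV step; the phantom, weight, and number of Bregman iterations are illustrative.

```python
import numpy as np
from skimage.restoration import denoise_tv_chambolle

def bregman_tv_denoise(noisy, weight=0.4, n_bregman=3):
    """Bregman-iterated TV denoising: repeatedly add the residual back to the
    data before re-applying the TV step, counteracting the contrast loss of
    strong TV smoothing."""
    v = np.zeros_like(noisy)
    u = np.zeros_like(noisy)
    for _ in range(n_bregman):
        u = denoise_tv_chambolle(noisy + v, weight=weight)
        v += noisy - u                       # accumulate the residual (Bregman update)
    return u

if __name__ == "__main__":
    rng = np.random.default_rng(7)
    phantom = np.zeros((96, 96)); phantom[24:72, 24:72] = 1.0   # piecewise-constant target
    noisy = phantom + 0.3 * rng.standard_normal(phantom.shape)
    plain = denoise_tv_chambolle(noisy, weight=0.4)
    bregman = bregman_tv_denoise(noisy, weight=0.4, n_bregman=3)
    print("plain TV error  :", np.linalg.norm(plain - phantom))
    print("Bregman TV error:", np.linalg.norm(bregman - phantom))
```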

  18. LP-stability for the strong solutions of the Navier-Stokes equations in the whole space

    NASA Astrophysics Data System (ADS)

    Beirão da Veiga, H.; Secchi, P.

    1985-10-01

    We consider the motion of a viscous fluid filling the whole space R3, governed by the classical Navier-Stokes equations (1). The existence of global (in time) regular solutions for this system of non-linear partial differential equations is still an open problem. From both the mathematical and the physical points of view, an interesting property is the stability (or not) of the (eventual) global regular solutions. Here, we assume that v1(t,x) is a solution with initial data a1(x). For small perturbations of a1, we want the solution v1(t,x) to be only slightly perturbed as well. Due to viscosity, it is even expected that the perturbed solution v2(t,x) approaches the unperturbed one as time goes to +infinity. This is precisely the result proved in this paper. To measure the distance between v1(t,x) and v2(t,x) at each time t, suitable norms (Lp-norms) are introduced. For fluids filling a bounded vessel, exponential decay of the above distance is expected; such a strong result is not reasonable for fluids filling the entire space.

  19. Dictionary learning and time sparsity in dynamic MRI.

    PubMed

    Caballero, Jose; Rueckert, Daniel; Hajnal, Joseph V

    2012-01-01

    Sparse representation methods have been shown to adequately address the inherent speed limits of magnetic resonance imaging (MRI) acquisition. Recently, learning-based techniques have been used to further accelerate the acquisition of 2D MRI. The extension of such algorithms to dynamic MRI (dMRI) requires careful examination of the signal sparsity distribution among the different dimensions of the data. Notably, the potential of temporal gradient (TG) sparsity in dMRI has not yet been explored. In this paper, a novel method for the acceleration of cardiac dMRI is presented which investigates the potential benefits of enforcing sparsity constraints on patch-based learned dictionaries and TG at the same time. We show that an algorithm exploiting sparsity on these two domains can outperform previous sparse reconstruction techniques.

  20. Compressed modes for variational problems in mathematics and physics

    PubMed Central

    Ozoliņš, Vidvuds; Lai, Rongjie; Caflisch, Russel; Osher, Stanley

    2013-01-01

    This article describes a general formalism for obtaining spatially localized (“sparse”) solutions to a class of problems in mathematical physics, which can be recast as variational optimization problems, such as the important case of Schrödinger’s equation in quantum mechanics. Sparsity is achieved by adding an L1 regularization term to the variational principle, which is shown to yield solutions with compact support (“compressed modes”). Linear combinations of these modes approximate the eigenvalue spectrum and eigenfunctions in a systematically improvable manner, and the localization properties of compressed modes make them an attractive choice for use with efficient numerical algorithms that scale linearly with the problem size. PMID:24170861

  1. Compressed modes for variational problems in mathematics and physics.

    PubMed

    Ozolins, Vidvuds; Lai, Rongjie; Caflisch, Russel; Osher, Stanley

    2013-11-12

    This article describes a general formalism for obtaining spatially localized ("sparse") solutions to a class of problems in mathematical physics, which can be recast as variational optimization problems, such as the important case of Schrödinger's equation in quantum mechanics. Sparsity is achieved by adding an L1 regularization term to the variational principle, which is shown to yield solutions with compact support ("compressed modes"). Linear combinations of these modes approximate the eigenvalue spectrum and eigenfunctions in a systematically improvable manner, and the localization properties of compressed modes make them an attractive choice for use with efficient numerical algorithms that scale linearly with the problem size.
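
    A crude, single-mode sketch of the compressed-modes idea for a dimensionless 1-D kinetic-energy operator: minimize the energy plus a (1/mu)-weighted l1 term over unit-norm vectors by proximal gradient steps with renormalization. The full method computes several orthonormal modes with a Bregman splitting scheme, which is not reproduced here; the grid size, mu, and step size are illustrative assumptions.

```python
import numpy as np

def compressed_ground_mode(n=200, mu=10.0, step=0.2, n_iter=5000):
    """Minimize <psi, H psi> + (1/mu) * ||psi||_1 over unit-norm psi by
    gradient steps on the energy, l1 shrinkage, and renormalization.
    H is a dimensionless 1-D kinetic-energy operator (second differences)."""
    lap = -2.0 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)
    H = -0.5 * lap
    rng = np.random.default_rng(0)
    psi = rng.standard_normal(n)
    psi /= np.linalg.norm(psi)
    for _ in range(n_iter):
        psi = psi - step * (H @ psi)                                   # descend the energy
        psi = np.sign(psi) * np.maximum(np.abs(psi) - step / mu, 0.0)  # l1 shrinkage
        nrm = np.linalg.norm(psi)
        if nrm == 0.0:                  # shrinkage removed everything; restart from noise
            psi = rng.standard_normal(n)
            nrm = np.linalg.norm(psi)
        psi /= nrm                                                     # keep ||psi|| = 1
    return psi

if __name__ == "__main__":
    psi = compressed_ground_mode()
    support = np.mean(np.abs(psi) > 1e-8)
    print("fraction of the grid where the mode is nonzero:", support)
```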

  2. Ethylene sensitivity and relative air humidity regulate root hydraulic properties in tomato plants.

    PubMed

    Calvo-Polanco, Monica; Ibort, Pablo; Molina, Sonia; Ruiz-Lozano, Juan Manuel; Zamarreño, Angel María; García-Mina, Jose María; Aroca, Ricardo

    2017-11-01

    The effect of ethylene and its precursor ACC on root hydraulic properties, including aquaporin expression and abundance, is modulated by relative air humidity and plant sensitivity to ethylene. Relative air humidity (RH) is a main factor contributing to water balance in plants. Ethylene (ET) is known to be involved in the regulation of root water uptake and stomatal opening, although its role in plant water balance under different RH is not well understood. We studied, at the physiological, hormonal and molecular levels (aquaporin expression, abundance and phosphorylation state), the plant responses to exogenous 1-aminocyclopropane-1-carboxylic acid (ACC; precursor of ET) and 2-aminoisobutyric acid (AIB; inhibitor of ET biosynthesis), after 24 h of application to the roots of tomato wild-type (WT) plants and their ET-insensitive never ripe (nr) mutant, at two RH levels: regular (50%) and close to saturation. The highest RH induced an increase in root hydraulic conductivity (Lpo) in non-treated WT plants, and the opposite effect in nr mutants. The treatment with ACC reduced Lpo in WT plants at low RH and in nr plants at high RH. The application of AIB increased Lpo only in nr plants at high RH. In untreated plants, the RH treatment changed the abundance and phosphorylation of aquaporins, affecting the two genotypes differently according to their ET sensitivity. We show that RH is critical in regulating root hydraulic properties, and that Lpo is affected by the plant's sensitivity to ET, and possibly to ACC, through the regulation of aquaporin expression and phosphorylation status. These results link RH and ET in the response of Lpo to environmental changes.

  3. PRIFIRA: General regularization using prior-conditioning for fast radio interferometric imaging†

    NASA Astrophysics Data System (ADS)

    Naghibzadeh, Shahrzad; van der Veen, Alle-Jan

    2018-06-01

    Image formation in radio astronomy is a large-scale inverse problem that is inherently ill-posed. We present a general algorithmic framework based on a Bayesian-inspired regularized maximum likelihood formulation of the radio astronomical imaging problem with a focus on diffuse emission recovery from limited noisy correlation data. The algorithm is dubbed PRIor-conditioned Fast Iterative Radio Astronomy (PRIFIRA) and is based on a direct embodiment of the regularization operator into the system by right preconditioning. The resulting system is then solved using an iterative method based on projections onto Krylov subspaces. We motivate the use of a beamformed image (which includes the classical "dirty image") as an efficient prior-conditioner. Iterative reweighting schemes generalize the algorithmic framework and can account for different regularization operators that encourage sparsity of the solution. The performance of the proposed method is evaluated based on simulated one- and two-dimensional array arrangements as well as actual data from the core stations of the Low Frequency Array radio telescope antenna configuration, and compared to state-of-the-art imaging techniques. We show the generality of the proposed method in terms of regularization schemes while maintaining a competitive reconstruction quality with the current reconstruction techniques. Furthermore, we show that exploiting Krylov subspace methods together with the proper noise-based stopping criteria results in a great improvement in imaging efficiency.

  4. Assessing cardiac function from total-variation-regularized 4D C-arm CT in the presence of angular undersampling

    NASA Astrophysics Data System (ADS)

    Taubmann, O.; Haase, V.; Lauritsch, G.; Zheng, Y.; Krings, G.; Hornegger, J.; Maier, A.

    2017-04-01

    Time-resolved tomographic cardiac imaging using an angiographic C-arm device may support clinicians during minimally invasive therapy by enabling a thorough analysis of the heart function directly in the catheter laboratory. However, clinically feasible acquisition protocols entail a highly challenging reconstruction problem which suffers from sparse angular sampling of the trajectory. Compressed sensing theory promises that useful images can be recovered despite massive undersampling by means of sparsity-based regularization. For a multitude of reasons—most notably the desired reduction of scan time, dose and contrast agent required—it is of great interest to know just how little data is actually sufficient for a certain task. In this work, we apply a convex optimization approach based on primal-dual splitting to 4D cardiac C-arm computed tomography. We examine how the quality of spatially and temporally total-variation-regularized reconstruction degrades when using as few as 6.9 ± 1.2 projection views per heart phase. First, feasible regularization weights are determined in a numerical phantom study, demonstrating the individual benefits of both regularizers. Secondly, a task-based evaluation is performed in eight clinical patients. Semi-automatic segmentation-based volume measurements of the left ventricular blood pool performed on strongly undersampled images show a correlation of close to 99% with measurements obtained from less sparsely sampled data.

  5. Accelerated fast iterative shrinkage thresholding algorithms for sparsity-regularized cone-beam CT image reconstruction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xu, Qiaofeng; Sawatzky, Alex; Anastasio, Mark A., E-mail: anastasio@wustl.edu

    Purpose: The development of iterative image reconstruction algorithms for cone-beam computed tomography (CBCT) remains an active and important research area. Even with hardware acceleration, the overwhelming majority of the available 3D iterative algorithms that implement nonsmooth regularizers remain computationally burdensome and have not been translated for routine use in time-sensitive applications such as image-guided radiation therapy (IGRT). In this work, two variants of the fast iterative shrinkage thresholding algorithm (FISTA) are proposed and investigated for accelerated iterative image reconstruction in CBCT. Methods: Algorithm acceleration was achieved by replacing the original gradient-descent step in the FISTAs by a subproblem that is solved by use of the ordered subset simultaneous algebraic reconstruction technique (OS-SART). Due to the preconditioning matrix adopted in the OS-SART method, two new weighted proximal problems were introduced and corresponding fast gradient projection-type algorithms were developed for solving them. We also provided efficient numerical implementations of the proposed algorithms that exploit the massive data parallelism of multiple graphics processing units. Results: The improved rates of convergence of the proposed algorithms were quantified in computer-simulation studies and by use of clinical projection data corresponding to an IGRT study. The accelerated FISTAs were shown to possess dramatically improved convergence properties as compared to the standard FISTAs. For example, the number of iterations to achieve a specified reconstruction error could be reduced by an order of magnitude. Volumetric images reconstructed from clinical data were produced in under 4 min. Conclusions: The FISTA achieves a quadratic convergence rate and can therefore potentially reduce the number of iterations required to produce an image of a specified image quality as compared to first-order methods. We have proposed and investigated accelerated FISTAs for use with two nonsmooth penalty functions that will lead to further reductions in image reconstruction times while preserving image quality. Moreover, with the help of a mixed sparsity-regularization, better preservation of soft-tissue structures can be potentially obtained. The algorithms were systematically evaluated by use of computer-simulated and clinical data sets.

  6. Accelerated fast iterative shrinkage thresholding algorithms for sparsity-regularized cone-beam CT image reconstruction.

    PubMed

    Xu, Qiaofeng; Yang, Deshan; Tan, Jun; Sawatzky, Alex; Anastasio, Mark A

    2016-04-01

    The development of iterative image reconstruction algorithms for cone-beam computed tomography (CBCT) remains an active and important research area. Even with hardware acceleration, the overwhelming majority of the available 3D iterative algorithms that implement nonsmooth regularizers remain computationally burdensome and have not been translated for routine use in time-sensitive applications such as image-guided radiation therapy (IGRT). In this work, two variants of the fast iterative shrinkage thresholding algorithm (FISTA) are proposed and investigated for accelerated iterative image reconstruction in CBCT. Algorithm acceleration was achieved by replacing the original gradient-descent step in the FISTAs by a subproblem that is solved by use of the ordered subset simultaneous algebraic reconstruction technique (OS-SART). Due to the preconditioning matrix adopted in the OS-SART method, two new weighted proximal problems were introduced and corresponding fast gradient projection-type algorithms were developed for solving them. We also provided efficient numerical implementations of the proposed algorithms that exploit the massive data parallelism of multiple graphics processing units. The improved rates of convergence of the proposed algorithms were quantified in computer-simulation studies and by use of clinical projection data corresponding to an IGRT study. The accelerated FISTAs were shown to possess dramatically improved convergence properties as compared to the standard FISTAs. For example, the number of iterations to achieve a specified reconstruction error could be reduced by an order of magnitude. Volumetric images reconstructed from clinical data were produced in under 4 min. The FISTA achieves a quadratic convergence rate and can therefore potentially reduce the number of iterations required to produce an image of a specified image quality as compared to first-order methods. We have proposed and investigated accelerated FISTAs for use with two nonsmooth penalty functions that will lead to further reductions in image reconstruction times while preserving image quality. Moreover, with the help of a mixed sparsity-regularization, better preservation of soft-tissue structures can be potentially obtained. The algorithms were systematically evaluated by use of computer-simulated and clinical data sets.

  7. Accelerated fast iterative shrinkage thresholding algorithms for sparsity-regularized cone-beam CT image reconstruction

    PubMed Central

    Xu, Qiaofeng; Yang, Deshan; Tan, Jun; Sawatzky, Alex; Anastasio, Mark A.

    2016-01-01

    Purpose: The development of iterative image reconstruction algorithms for cone-beam computed tomography (CBCT) remains an active and important research area. Even with hardware acceleration, the overwhelming majority of the available 3D iterative algorithms that implement nonsmooth regularizers remain computationally burdensome and have not been translated for routine use in time-sensitive applications such as image-guided radiation therapy (IGRT). In this work, two variants of the fast iterative shrinkage thresholding algorithm (FISTA) are proposed and investigated for accelerated iterative image reconstruction in CBCT. Methods: Algorithm acceleration was achieved by replacing the original gradient-descent step in the FISTAs by a subproblem that is solved by use of the ordered subset simultaneous algebraic reconstruction technique (OS-SART). Due to the preconditioning matrix adopted in the OS-SART method, two new weighted proximal problems were introduced and corresponding fast gradient projection-type algorithms were developed for solving them. We also provided efficient numerical implementations of the proposed algorithms that exploit the massive data parallelism of multiple graphics processing units. Results: The improved rates of convergence of the proposed algorithms were quantified in computer-simulation studies and by use of clinical projection data corresponding to an IGRT study. The accelerated FISTAs were shown to possess dramatically improved convergence properties as compared to the standard FISTAs. For example, the number of iterations to achieve a specified reconstruction error could be reduced by an order of magnitude. Volumetric images reconstructed from clinical data were produced in under 4 min. Conclusions: The FISTA achieves a quadratic convergence rate and can therefore potentially reduce the number of iterations required to produce an image of a specified image quality as compared to first-order methods. We have proposed and investigated accelerated FISTAs for use with two nonsmooth penalty functions that will lead to further reductions in image reconstruction times while preserving image quality. Moreover, with the help of a mixed sparsity-regularization, better preservation of soft-tissue structures can be potentially obtained. The algorithms were systematically evaluated by use of computer-simulated and clinical data sets. PMID:27036582
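
    A plain FISTA sketch for a generic l1-regularized least-squares problem, included to make the iteration structure concrete; the accelerated variants above replace the gradient step with OS-SART sub-iterations and use other nonsmooth penalties, none of which is reproduced here. The matrix sizes and regularization weight are illustrative.

```python
import numpy as np

def fista_l1(A, b, lam, n_iter=200):
    """Plain FISTA for min_x 0.5*||Ax - b||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    z, t = x.copy(), 1.0
    for _ in range(n_iter):
        x_old = x
        grad = A.T @ (A @ z - b)
        w = z - grad / L
        x = np.sign(w) * np.maximum(np.abs(w) - lam / L, 0.0)   # proximal (shrinkage) step
        t_old, t = t, 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        z = x + ((t_old - 1.0) / t) * (x - x_old)                # momentum extrapolation
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(8)
    A = rng.standard_normal((100, 300))
    x_true = np.zeros(300)
    x_true[rng.choice(300, 10, replace=False)] = rng.standard_normal(10)
    b = A @ x_true + 0.01 * rng.standard_normal(100)
    x_hat = fista_l1(A, b, lam=0.1)
    print("nonzeros recovered:", np.count_nonzero(x_hat), "true nonzeros:", np.count_nonzero(x_true))
```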

  8. Z-Index Parameterization for Volumetric CT Image Reconstruction via 3-D Dictionary Learning.

    PubMed

    Bai, Ti; Yan, Hao; Jia, Xun; Jiang, Steve; Wang, Ge; Mou, Xuanqin

    2017-12-01

    Despite the rapid development of X-ray cone-beam CT (CBCT), image noise still remains a major issue for low-dose CBCT. To suppress noise effectively while retaining structures well in low-dose CBCT images, in this paper a sparse constraint based on a 3-D dictionary is incorporated into a regularized iterative reconstruction framework, defining the 3-D dictionary learning (3-DDL) method. In addition, by analyzing the sparsity level curve associated with different regularization parameters, a new adaptive parameter selection strategy is proposed to facilitate our 3-DDL method. To justify the proposed method, we first analyze the distributions of the representation coefficients associated with the 3-D dictionary and the conventional 2-D dictionary to compare their efficiencies in representing volumetric images. Then, multiple real data experiments are conducted for performance validation. Based on these results, we found that: 1) the 3-D dictionary-based sparse coefficients follow a Laplacian distribution roughly three orders of magnitude narrower than that of the 2-D dictionary, suggesting the higher representation efficiency of the 3-D dictionary; 2) the sparsity level curve demonstrates a clear Z-shape and is hence referred to as the Z-curve in this paper; 3) the parameter associated with the maximum-curvature point of the Z-curve is a good parameter choice, which can be adaptively located with the proposed Z-index parameterization (ZIP) method; 4) the proposed 3-DDL algorithm equipped with the ZIP method delivers reconstructions with the lowest root mean squared errors and the highest structural similarity index among the competing methods; 5) noise performance similar to that of the regular-dose FDK reconstruction, in terms of the standard deviation metric, can be achieved with the proposed method using (1/2)/(1/4)/(1/8) dose-level projections. The contrast-to-noise ratio is improved by ~2.5/3.5 times in two different cases at the (1/8) dose level compared with the low-dose FDK reconstruction. The proposed method is therefore expected to reduce the radiation dose by a factor of 8 for CBCT, considering that the low-contrast tissues were voted as strongly discriminated.

  9. Blind Compressed Sensing Enables 3-Dimensional Dynamic Free Breathing Magnetic Resonance Imaging of Lung Volumes and Diaphragm Motion.

    PubMed

    Bhave, Sampada; Lingala, Sajan Goud; Newell, John D; Nagle, Scott K; Jacob, Mathews

    2016-06-01

    The objective of this study was to increase the spatial and temporal resolution of dynamic 3-dimensional (3D) magnetic resonance imaging (MRI) of lung volumes and diaphragm motion. To achieve this goal, we evaluate the utility of the proposed blind compressed sensing (BCS) algorithm to recover data from highly undersampled measurements. We evaluated the performance of the BCS scheme to recover dynamic data sets from retrospectively and prospectively undersampled measurements. We also compared its performance against that of view-sharing, the nuclear norm minimization scheme, and the l1 Fourier sparsity regularization scheme. Quantitative experiments were performed on a healthy subject using a fully sampled 2D data set with uniform radial sampling, which was retrospectively undersampled with 16 radial spokes per frame to correspond to an undersampling factor of 8. The images obtained from the 4 reconstruction schemes were compared with the fully sampled data using mean square error and normalized high-frequency error metrics. The schemes were also compared using prospective 3D data acquired on a Siemens 3 T TIM TRIO MRI scanner on 8 healthy subjects during free breathing. Two expert cardiothoracic radiologists (R1 and R2) qualitatively evaluated the reconstructed 3D data sets using a 5-point scale (0-4) on the basis of spatial resolution, temporal resolution, and presence of aliasing artifacts. The BCS scheme gives better reconstructions (mean square error = 0.0232 and normalized high frequency = 0.133) than the other schemes in the 2D retrospective undersampling experiments, producing minimally distorted reconstructions up to an acceleration factor of 8 (16 radial spokes per frame). The prospective 3D experiments show that the BCS scheme provides visually better reconstructions than the other schemes. The BCS scheme provides improved qualitative scores over the nuclear norm and l1 Fourier sparsity regularization schemes in the temporal blurring and spatial blurring categories. The qualitative scores for aliasing artifacts in the images reconstructed by the nuclear norm scheme and the BCS scheme are comparable. The comparisons of the tidal volume changes also show that the BCS scheme has less temporal blurring than the nuclear norm minimization scheme and the l1 Fourier sparsity regularization scheme. The minute ventilation estimated by BCS for tidal breathing in the supine position (4 L/min) and the measured supine inspiratory capacity (1.5 L) are in good agreement with the literature. The improved performance of BCS can be explained by its ability to efficiently adapt to the data, thus providing a richer representation of the signal. The feasibility of the BCS scheme was demonstrated for dynamic 3D free breathing MRI of lung volumes and diaphragm motion. A temporal resolution of ∼500 milliseconds and a spatial resolution of 2.7 × 2.7 × 10 mm, with whole lung coverage (16 slices), were achieved using the BCS scheme.

  10. Single-view phase retrieval of an extended sample by exploiting edge detection and sparsity

    DOE PAGES

    Tripathi, Ashish; McNulty, Ian; Munson, Todd; ...

    2016-10-14

    We propose a new approach to robustly retrieve the exit wave of an extended sample from its coherent diffraction pattern by exploiting sparsity of the sample's edges. This approach enables imaging of an extended sample with a single view, without ptychography. We introduce nonlinear optimization methods that promote sparsity, and we derive update rules to robustly recover the sample's exit wave. We test these methods on simulated samples by varying the sparsity of the edge-detected representation of the exit wave. Finally, our tests illustrate the strengths and limitations of the proposed method in imaging extended samples.

  11. Fast dictionary-based reconstruction for diffusion spectrum imaging.

    PubMed

    Bilgic, Berkin; Chatnuntawech, Itthi; Setsompop, Kawin; Cauley, Stephen F; Yendiki, Anastasia; Wald, Lawrence L; Adalsteinsson, Elfar

    2013-11-01

    Diffusion spectrum imaging reveals detailed local diffusion properties at the expense of substantially long imaging times. It is possible to accelerate acquisition by undersampling in q-space, followed by image reconstruction that exploits prior knowledge on the diffusion probability density functions (pdfs). Previously proposed methods impose this prior in the form of sparsity under wavelet and total variation transforms, or under adaptive dictionaries that are trained on example datasets to maximize the sparsity of the representation. These compressed sensing (CS) methods require full-brain processing times on the order of hours using MATLAB running on a workstation. This work presents two dictionary-based reconstruction techniques that use analytical solutions, and are two orders of magnitude faster than the previously proposed dictionary-based CS approach. The first method generates a dictionary from the training data using principal component analysis (PCA), and performs the reconstruction in the PCA space. The second proposed method applies reconstruction using pseudoinverse with Tikhonov regularization with respect to a dictionary. This dictionary can either be obtained using the K-SVD algorithm, or it can simply be the training dataset of pdfs without any training. All of the proposed methods achieve reconstruction times on the order of seconds per imaging slice, and have reconstruction quality comparable to that of dictionary-based CS algorithm.
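
    A minimal sketch of the second (training-free) reconstruction idea: precompute a Tikhonov-regularized pseudoinverse with respect to a dictionary and apply it to each undersampled measurement as a single matrix-vector product. The dictionary of smooth curves, the sampling pattern, and lambda are illustrative stand-ins for the pdf dictionary and q-space undersampling.

```python
import numpy as np

def tikhonov_dictionary_operator(D, sample_idx, lam=0.05):
    """Precompute the linear map from undersampled measurements to full signals:
    a = argmin ||D_u a - y||^2 + lam*||a||^2,  x = D a,
    where D_u is the dictionary restricted to the sampled locations."""
    Du = D[sample_idx, :]
    k = D.shape[1]
    return D @ np.linalg.solve(Du.T @ Du + lam * np.eye(k), Du.T)

if __name__ == "__main__":
    rng = np.random.default_rng(9)
    n, k = 256, 60
    # Dictionary of smooth curves standing in for training pdfs.
    t = np.linspace(0, 1, n)
    D = np.stack([np.exp(-((t - c) ** 2) / (2 * w ** 2))
                  for c, w in zip(rng.random(k), 0.05 + 0.1 * rng.random(k))], axis=1)
    sample_idx = np.sort(rng.choice(n, n // 4, replace=False))   # 4x undersampling
    P = tikhonov_dictionary_operator(D, sample_idx)               # computed once
    x_true = D @ rng.standard_normal(k)                           # signal in the dictionary span
    y = x_true[sample_idx] + 0.01 * rng.standard_normal(len(sample_idx))
    x_hat = P @ y                                                 # per-sample reconstruction is one matrix-vector product
    print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```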

  12. Fast Dictionary-Based Reconstruction for Diffusion Spectrum Imaging

    PubMed Central

    Bilgic, Berkin; Chatnuntawech, Itthi; Setsompop, Kawin; Cauley, Stephen F.; Yendiki, Anastasia; Wald, Lawrence L.; Adalsteinsson, Elfar

    2015-01-01

    Diffusion Spectrum Imaging (DSI) reveals detailed local diffusion properties at the expense of substantially long imaging times. It is possible to accelerate acquisition by undersampling in q-space, followed by image reconstruction that exploits prior knowledge on the diffusion probability density functions (pdfs). Previously proposed methods impose this prior in the form of sparsity under wavelet and total variation (TV) transforms, or under adaptive dictionaries that are trained on example datasets to maximize the sparsity of the representation. These compressed sensing (CS) methods require full-brain processing times on the order of hours using Matlab running on a workstation. This work presents two dictionary-based reconstruction techniques that use analytical solutions, and are two orders of magnitude faster than the previously proposed dictionary-based CS approach. The first method generates a dictionary from the training data using Principal Component Analysis (PCA), and performs the reconstruction in the PCA space. The second proposed method applies reconstruction using pseudoinverse with Tikhonov regularization with respect to a dictionary. This dictionary can either be obtained using the K-SVD algorithm, or it can simply be the training dataset of pdfs without any training. All of the proposed methods achieve reconstruction times on the order of seconds per imaging slice, and have reconstruction quality comparable to that of dictionary-based CS algorithm. PMID:23846466

  13. The Issues of Sparsity in Providing Educational Opportunity in the State of Wyoming.

    ERIC Educational Resources Information Center

    Hobbs, Max E.

    Wyoming's funding programs for public education that relate to the issues of sparsity and the state's attempt to provide equal educational opportunity are reviewed. School district problems that relate to the issue of sparsity are also discussed. School district size in Wyoming ranges from the smallest district, by area, of 186 square miles to the…

  14. [The effects of the short-term regular exercise-diet program on lipid profile in sedentary subjects].

    PubMed

    Yalin, S; Gök, H; Toksöz, R

    2001-09-01

    Regular aerobic exercise leads to changes in plasma lipid, lipoprotein and apoprotein levels. The aim of this study was to examine the training effects of an intervention program consisting of regular exercise and a low-fat diet on the plasma lipid profile. The effects of a four-week intervention programme consisting of walking and dietary restriction on the lipid profile of sedentary subjects were investigated. Subjects, who had dyslipidemia or obesity, were instructed to walk (60 consecutive minutes, once daily) and to consume no more than 20% total fat and 300 mg/d of cholesterol for four weeks. At the end of the fourth week, 41 subjects who had followed the exercise-diet programme were assigned to the study (intervention) group; 21 subjects who had remained sedentary and non-dieting were included in the control group. Total-C, triglycerides, LDL-C, HDL-C, Lp (a), apo A1 and apo B100 were measured in fasting blood samples before and after the 4-week intervention programme. At the end of four weeks, subjects in the exercise-diet group, as compared with the control group, showed a significant reduction in body weight (respectively 1.67 ± 2.36 kg versus -0.21 ± 1.36 kg, p = 0.001), total cholesterol (35 ± 37 mg/dl vs -20 ± 25 mg/dl, p < 0.001), triglycerides (30 ± 68 mg/dl vs -10 ± 52 mg/dl, p = 0.024) and LDL-C (29 ± 41 mg/dl vs -18 ± 25 mg/dl, p < 0.001) levels. However, at the end of the programme, in the exercise-diet group as compared with the control group, the changes in HDL-C (respectively -0.85 ± 7.30 mg/dl vs 1.05 ± 5.64 mg/dl, p = 0.302), Lp (a) (1.59 ± 3.06 mg/dl vs -0.09 ± 3.96 mg/dl, p = 0.069), apo A1 (0.61 ± 22.69 mg/dl vs -0.66 ± 17.27 mg/dl, p = 0.822) and apo B100 (5.41 ± 19.33 mg/dl vs -4.00 ± 20.51 mg/dl, p = 0.080) were not significant. The data of this study demonstrate that a four-week programme based on regular daily aerobic exercise and a low-fat diet is capable of decreasing total cholesterol, triglyceride and LDL-C levels, and that this short-term intervention is insufficient for increasing HDL-C, decreasing Lp (a) and improving apoprotein levels.

  15. Wavelet-based edge correlation incorporated iterative reconstruction for undersampled MRI.

    PubMed

    Hu, Changwei; Qu, Xiaobo; Guo, Di; Bao, Lijun; Chen, Zhong

    2011-09-01

    Undersampling k-space is an effective way to decrease acquisition time for MRI. However, aliasing artifacts introduced by undersampling may blur the edges of magnetic resonance images, which often contain important information for clinical diagnosis. Moreover, k-space data is often contaminated by the noise signals of unknown intensity. To better preserve the edge features while suppressing the aliasing artifacts and noises, we present a new wavelet-based algorithm for undersampled MRI reconstruction. The algorithm solves the image reconstruction as a standard optimization problem including a ℓ(2) data fidelity term and ℓ(1) sparsity regularization term. Rather than manually setting the regularization parameter for the ℓ(1) term, which is directly related to the threshold, an automatic estimated threshold adaptive to noise intensity is introduced in our proposed algorithm. In addition, a prior matrix based on edge correlation in wavelet domain is incorporated into the regularization term. Compared with nonlinear conjugate gradient descent algorithm, iterative shrinkage/thresholding algorithm, fast iterative soft-thresholding algorithm and the iterative thresholding algorithm using exponentially decreasing threshold, the proposed algorithm yields reconstructions with better edge recovery and noise suppression. Copyright © 2011 Elsevier Inc. All rights reserved.
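
    As a rough illustration of the kind of optimization described above, the following sketch shows a plain iterative soft-thresholding loop for an ℓ2 data-fidelity plus ℓ1 wavelet-sparsity objective; the edge-correlation prior matrix and the automatic noise-adaptive threshold of the paper are omitted, an orthonormal sparsifying transform W is assumed, and all names are illustrative.

```python
# A minimal ISTA-style sketch of min_x 0.5*||A x - y||_2^2 + lam*||W x||_1,
# assuming W is an orthonormal sparsifying transform so that the proximal step
# is plain soft-thresholding in the transform domain. A stands in for the
# undersampled Fourier operator; all names here are illustrative.
import numpy as np

def soft_threshold(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def ista_wavelet(y, A, W, lam=0.05, n_iter=100):
    step = 1.0 / np.linalg.norm(A, 2) ** 2          # 1 / Lipschitz constant
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)                    # gradient of the fidelity term
        z = W @ (x - step * grad)                   # move to the sparse domain
        x = W.T @ soft_threshold(z, step * lam)     # shrink, then map back
    return x
```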

  16. Analysis of Online Composite Mirror Descent Algorithm.

    PubMed

    Lei, Yunwen; Zhou, Ding-Xuan

    2017-03-01

    We study the convergence of the online composite mirror descent algorithm, which involves a mirror map to reflect the geometry of the data and a convex objective function consisting of a loss and a regularizer possibly inducing sparsity. Our error analysis provides convergence rates in terms of properties of the strongly convex differentiable mirror map and the objective function. For a class of objective functions with Hölder continuous gradients, the convergence rates of the excess (regularized) risk under polynomially decaying step sizes have the order [Formula: see text] after [Formula: see text] iterates. Our results improve the existing error analysis for the online composite mirror descent algorithm by avoiding averaging and removing boundedness assumptions, and they sharpen the existing convergence rates of the last iterate for online gradient descent without any boundedness assumptions. Our methodology mainly depends on a novel error decomposition in terms of an excess Bregman distance, refined analysis of self-bounding properties of the objective function, and the resulting one-step progress bounds.
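
    For intuition, the sketch below runs online composite mirror descent with the squared-Euclidean mirror map (in which case each update reduces to a proximal stochastic gradient step) on a squared loss with an ℓ1 regularizer and polynomially decaying step sizes; the step-size constants are illustrative, not taken from the paper.

```python
# Illustrative one-pass sketch of online composite mirror descent with the
# squared-Euclidean mirror map (the update then reduces to a proximal
# stochastic gradient step), a squared loss, and an l1 regularizer.
# Step-size constants alpha and theta are illustrative, not from the paper.
import numpy as np

def prox_l1(w, t):
    return np.sign(w) * np.maximum(np.abs(w) - t, 0.0)

def online_composite_md(stream, dim, lam=0.1, alpha=1.0, theta=0.75):
    """stream yields (x_t, y_t) pairs; eta_t = alpha * t**(-theta)."""
    w = np.zeros(dim)
    for t, (x, y) in enumerate(stream, start=1):
        eta = alpha * t ** (-theta)                  # polynomially decaying step size
        grad = (w @ x - y) * x                       # gradient of the squared loss
        w = prox_l1(w - eta * grad, eta * lam)       # composite (prox) update
    return w
```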

  17. Blind calibration of radio interferometric arrays using sparsity constraints and its implications for self-calibration

    NASA Astrophysics Data System (ADS)

    Chiarucci, Simone; Wijnholds, Stefan J.

    2018-02-01

    Blind calibration, i.e. calibration without a priori knowledge of the source model, is robust to the presence of unknown sources such as transient phenomena or (low-power) broad-band radio frequency interference that escaped detection. In this paper, we present a novel method for blind calibration of a radio interferometric array assuming that the observed field only contains a small number of discrete point sources. We show the huge computational advantage over previous blind calibration methods and we assess its statistical efficiency and robustness to noise and the quality of the initial estimate. We demonstrate the method on actual data from a Low-Frequency Array low-band antenna station showing that our blind calibration is able to recover the same gain solutions as the regular calibration approach, as expected from theory and simulations. We also discuss the implications of our findings for the robustness of regular self-calibration to poor starting models.

  18. An optimization method for speech enhancement based on deep neural network

    NASA Astrophysics Data System (ADS)

    Sun, Haixia; Li, Sikun

    2017-06-01

    This paper puts forward a deep neural network (DNN) model for speech enhancement with a more credible data set and a more robust structure. First, two regularization techniques, dropout and a sparsity constraint, are applied to strengthen the generalization ability of the model. In this way, the model not only maintains consistency between the pre-training and fine-tuning stages but also reduces resource consumption. Network compression via weight sharing and quantization is then used to reduce storage cost. Finally, we evaluate the quality of the reconstructed speech according to several criteria. The results show that the improved framework performs well on speech enhancement and meets the requirements of speech processing.

  19. A P-Norm Robust Feature Extraction Method for Identifying Differentially Expressed Genes

    PubMed Central

    Liu, Jian; Liu, Jin-Xing; Gao, Ying-Lian; Kong, Xiang-Zhen; Wang, Xue-Song; Wang, Dong

    2015-01-01

    In current molecular biology, it becomes more and more important to identify differentially expressed genes closely correlated with a key biological process from gene expression data. In this paper, based on the Schatten p-norm and Lp-norm, a novel p-norm robust feature extraction method is proposed to identify the differentially expressed genes. In our method, the Schatten p-norm is used as the regularization function to obtain a low-rank matrix and the Lp-norm is taken as the error function to improve the robustness to outliers in the gene expression data. The results on simulation data show that our method can obtain higher identification accuracies than the competitive methods. Numerous experiments on real gene expression data sets demonstrate that our method can identify more differentially expressed genes than the others. Moreover, we confirmed that the identified genes are closely correlated with the corresponding gene expression data. PMID:26201006

  20. A P-Norm Robust Feature Extraction Method for Identifying Differentially Expressed Genes.

    PubMed

    Liu, Jian; Liu, Jin-Xing; Gao, Ying-Lian; Kong, Xiang-Zhen; Wang, Xue-Song; Wang, Dong

    2015-01-01

    In current molecular biology, it becomes more and more important to identify differentially expressed genes closely correlated with a key biological process from gene expression data. In this paper, based on the Schatten p-norm and Lp-norm, a novel p-norm robust feature extraction method is proposed to identify the differentially expressed genes. In our method, the Schatten p-norm is used as the regularization function to obtain a low-rank matrix and the Lp-norm is taken as the error function to improve the robustness to outliers in the gene expression data. The results on simulation data show that our method can obtain higher identification accuracies than the competitive methods. Numerous experiments on real gene expression data sets demonstrate that our method can identify more differentially expressed genes than the others. Moreover, we confirmed that the identified genes are closely correlated with the corresponding gene expression data.

  1. Long-period grating and its cascaded counterpart in photonic crystal fiber for gas phase measurement.

    PubMed

    Tian, Fei; Kanka, Jiri; Du, Henry

    2012-09-10

    Regular and cascaded long period gratings (LPG, C-LPG) of periods ranging from 460 to 590 μm were inscribed in an endlessly single mode photonic crystal fiber (PCF) using CO(2) laser for sensing measurements of helium, argon and acetylene. High index sensitivities in excess of 1700 nm/RIU were achieved in both grating schemes with a period of 460 μm. The sharp interference fringes in the transmission spectrum of C-PCF-LPG afforded not only greatly enhanced sensing resolution, but also accuracy when the phase-shift of the fringe pattern is determined through spectral processing. Comparative numerical and experimental studies indicated LP(01) to LP(03) mode coupling as the principal coupling step for both PCF-LPG and C-PCF-LPG with emergence of multi-mode coupling at shorter grating periods or longer resonance wavelengths.

  2. Image Reconstruction from Highly Undersampled (k, t)-Space Data with Joint Partial Separability and Sparsity Constraints

    PubMed Central

    Zhao, Bo; Haldar, Justin P.; Christodoulou, Anthony G.; Liang, Zhi-Pei

    2012-01-01

    Partial separability (PS) and sparsity have been previously used to enable reconstruction of dynamic images from undersampled (k, t)-space data. This paper presents a new method to use PS and sparsity constraints jointly for enhanced performance in this context. The proposed method combines the complementary advantages of PS and sparsity constraints using a unified formulation, achieving significantly better reconstruction performance than using either of these constraints individually. A globally convergent computational algorithm is described to efficiently solve the underlying optimization problem. Reconstruction results from simulated and in vivo cardiac MRI data are also shown to illustrate the performance of the proposed method. PMID:22695345

  3. Adaptive compressed sensing of multi-view videos based on the sparsity estimation

    NASA Astrophysics Data System (ADS)

    Yang, Senlin; Li, Xilong; Chong, Xin

    2017-11-01

    Conventional compressive sensing for videos relies on non-adaptive linear projections, and the number of measurements is usually set empirically; as a result, the quality of video reconstruction suffers. First, block-based compressed sensing (BCS) with the conventional selection of compressive measurements is described. An estimation method for the sparsity of multi-view videos is then proposed based on the two-dimensional discrete wavelet transform (2D DWT). Given an energy threshold, the DWT coefficients are energy-normalized and sorted in descending order, and the sparsity of the multi-view video is obtained from the proportion of dominant coefficients. Finally, simulation results show that the method can estimate the sparsity of a video frame effectively and provides a sound basis for selecting the number of compressive observations. The results also show that, since the number of observations is selected from the sparsity estimated with the given energy threshold, the proposed method can ensure the reconstruction quality of multi-view videos.
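
    The sparsity-estimation step can be pictured with the following sketch, which transforms a frame with a 2D DWT, normalizes and sorts the coefficient energies, and reports the fraction of dominant coefficients needed to reach a preset energy threshold; it assumes the PyWavelets package, and the wavelet, level and threshold values are illustrative.

```python
# Rough sketch of the sparsity-estimation idea: take a 2D DWT of a frame,
# sort the normalized coefficient energies, and report the fraction of
# coefficients needed to reach a preset energy threshold. Assumes the
# PyWavelets package; wavelet, level and threshold values are illustrative.
import numpy as np
import pywt

def estimate_sparsity(frame, wavelet="db4", level=3, energy_threshold=0.99):
    coeffs = pywt.wavedec2(frame, wavelet, level=level)
    arr, _ = pywt.coeffs_to_array(coeffs)            # flatten all subbands
    energy = np.sort(arr.ravel() ** 2)[::-1]         # descending coefficient energies
    cumulative = np.cumsum(energy) / energy.sum()
    k = int(np.searchsorted(cumulative, energy_threshold)) + 1
    return k / arr.size                              # proportion of dominant coefficients
```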

  4. Discovering mutated driver genes through a robust and sparse co-regularized matrix factorization framework with prior information from mRNA expression patterns and interaction network.

    PubMed

    Xi, Jianing; Wang, Minghui; Li, Ao

    2018-06-05

    Discovery of mutated driver genes is one of the primary objectives in studying tumorigenesis. To discover relatively low-frequency mutated driver genes from somatic mutation data, many existing methods incorporate an interaction network as prior information. However, prior information from mRNA expression patterns, which has also proven to be highly informative of cancer progression, is not exploited by these existing network-based methods. To incorporate prior information from both the interaction network and mRNA expression, we propose a robust and sparse co-regularized nonnegative matrix factorization to discover driver genes from mutation data. Furthermore, our framework applies Frobenius norm regularization to overcome overfitting. A sparsity-inducing penalty is employed to obtain sparse scores in the gene representations, of which the top-scored genes are selected as driver candidates. Evaluation experiments using known benchmark genes indicate that the performance of our method benefits from both types of prior information. Our method also outperforms the existing network-based methods and detects some driver genes that are not predicted by the competing methods. In summary, our proposed method can improve the performance of driver gene discovery by effectively incorporating prior information from the interaction network and mRNA expression patterns into a robust and sparse co-regularized matrix factorization framework.

  5. Decoding the encoding of functional brain networks: An fMRI classification comparison of non-negative matrix factorization (NMF), independent component analysis (ICA), and sparse coding algorithms.

    PubMed

    Xie, Jianwen; Douglas, Pamela K; Wu, Ying Nian; Brody, Arthur L; Anderson, Ariana E

    2017-04-15

    Brain networks in fMRI are typically identified using spatial independent component analysis (ICA), yet other mathematical constraints provide alternate biologically-plausible frameworks for generating brain networks. Non-negative matrix factorization (NMF) would suppress negative BOLD signal by enforcing positivity. Spatial sparse coding algorithms (L1 Regularized Learning and K-SVD) would impose local specialization and a discouragement of multitasking, where the total observed activity in a single voxel originates from a restricted number of possible brain networks. The assumptions of independence, positivity, and sparsity to encode task-related brain networks are compared; the resulting brain networks within scan for different constraints are used as basis functions to encode observed functional activity. These encodings are then decoded using machine learning, by using the time series weights to predict within scan whether a subject is viewing a video, listening to an audio cue, or at rest, in 304 fMRI scans from 51 subjects. The sparse coding algorithm of L1 Regularized Learning outperformed 4 variations of ICA (p<0.001) for predicting the task being performed within each scan using artifact-cleaned components. The NMF algorithms, which suppressed negative BOLD signal, had the poorest accuracy compared to the ICA and sparse coding algorithms. Holding constant the effect of the extraction algorithm, encodings using sparser spatial networks (containing more zero-valued voxels) had higher classification accuracy (p<0.001). Lower classification accuracy occurred when the extracted spatial maps contained more CSF regions (p<0.001). The success of sparse coding algorithms suggests that algorithms which enforce sparsity, discourage multitasking, and promote local specialization may capture better the underlying source processes than those which allow inexhaustible local processes such as ICA. Negative BOLD signal may capture task-related activations. Copyright © 2017 Elsevier B.V. All rights reserved.

  6. Wavelet-based 3-D inversion for frequency-domain airborne EM data

    NASA Astrophysics Data System (ADS)

    Liu, Yunhe; Farquharson, Colin G.; Yin, Changchun; Baranwal, Vikas C.

    2018-04-01

    In this paper, we propose a new wavelet-based 3-D inversion method for frequency-domain airborne electromagnetic (FDAEM) data. Instead of inverting the model in the space domain using a smoothing constraint, this new method recovers the model in the wavelet domain based on a sparsity constraint. In the wavelet domain, the model is represented by two types of coefficients, which contain both large- and fine-scale information about the model, meaning the wavelet-domain inversion has inherent multiresolution. In order to impose a sparsity constraint, we minimize an L1-norm measure in the wavelet domain that mostly gives a sparse solution. The final inversion system is solved by an iteratively reweighted least-squares method. We investigate different orders of Daubechies wavelets in our inversion algorithm and test them on a synthetic frequency-domain AEM data set. The results show that higher-order wavelets, having larger vanishing moments and regularity, can deliver a more stable inversion process and give better local resolution, while lower-order wavelets are simpler and less smooth, and thus capable of recovering sharp discontinuities if the model is simple. Finally, we test this new inversion algorithm on a frequency-domain helicopter EM (HEM) field data set acquired in Byneset, Norway. Wavelet-based 3-D inversion of the HEM data is compared to the result of L2-norm-based 3-D inversion to further investigate the features of the new method.
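
    The iteratively reweighted least-squares idea mentioned above can be sketched for a generic linear(ized) inverse problem as follows; this is a schematic stand-in, not the authors' 3-D AEM code, and the Jacobian G and data vector d are assumed given.

```python
# Schematic IRLS loop for the l1-regularized linear(ized) inverse problem
# min_m 0.5*||G m - d||_2^2 + lam*||m||_1, where |m_i| is successively
# approximated by m_i^2 / (|m_i| + eps) around the current iterate.
# G (Jacobian) and d (data) are assumed given; this is not the authors' code.
import numpy as np

def irls_l1(G, d, lam=1e-2, n_iter=30, eps=1e-6):
    m = np.zeros(G.shape[1])
    for _ in range(n_iter):
        R = np.diag(1.0 / (np.abs(m) + eps))         # reweighting from current iterate
        m = np.linalg.solve(G.T @ G + lam * R, G.T @ d)
    return m
```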

  7. Effects of improved fat meat products consumption on emergent cardiovascular disease markers of male volunteers at cardiovascular risk.

    PubMed

    Celada, Paloma; Sánchez-Muniz, Francisco J; Delgado-Pando, Gonzalo; Bastida, Sara; Rodilla, Manuel Espárrago; Jiménez-Colmenero, Francisco; Olmedilla-Alonso, Begoña

    2016-12-01

    High meat-product consumption has been related to cardiovascular disease (CVD). However, previous results suggest benefits of consuming improved-fat meat products on lipoprotein-cholesterol and anthropometric measurements. The present study aims to assess the effect of consuming different Pâté and Frankfurter formulations on emergent CVD biomarkers in male volunteers at increased CVD risk. Eighteen male volunteers with at least two CVD risk factors were enrolled in a sequentially controlled study where different pork products were tested: reduced-fat (RF), omega-3-enriched-RF (n-3RF), and normal-fat (NF). Pork products were consumed during 4-week periods separated by 4-week washouts. The cardiometabolic index (CI), oxidized low density lipoproteins (oxLDL), apolipoproteins (Apo) A1 and B, homocysteine (tHcys), arylesterase (AE), C-reactive protein (CRP), tumor necrosis factor-alpha (TNFα), and lipoprotein (a) (Lp(a)) were tested and some other related ratios calculated. The rates of change of AE, oxLDL and Lp(a), AE/HDLc, LDLc/Apo B, and AE/oxLDL were differently affected (P<0.01) by pork-product consumption. RF increased (P < 0.05) AE, AE/HDLc and AE/oxLDL ratios and decreased TNFα and tHcys; n-3RF increased (P < 0.001) AE, AE/HDLc and AE/oxLDL ratios and decreased (P < 0.05) Lp(a); while NF increased (P<0.05) oxLDL and Lp(a) levels. In conclusion, RF and n-3RF products positively affected the levels of some emergent CVD markers. High regular consumption of NF products should be limited, as it significantly increased Lp(a) and oxLDL values. The high variability in response observed for some markers suggests the need for further studies to identify targets for RF and n-3RF products.

  8. Observations of IO hot-spots at coastal sites with the combination of a mobile CE- and LP- DOAS

    NASA Astrophysics Data System (ADS)

    Pöhler, D.; Horbanski, M.; Schmitt, S.; Anthofer, M.; Tschritter, J.; Platt, U.

    2012-04-01

    Reactive iodine species are emitted by seaweed in the intertidal zone of coastal sites during low tide. Besides their oxidation to iodine oxide (IO) and reduction of ozone, they act as precursors for particle formation and therefore have a potential impact on climate. A correlation between iodine oxide and particle formation has been observed in several field studies. However, modelling studies suggest that the mixing ratios of iodine oxide observed so far are too low to explain the observed particle formation. This may be caused by the measurement techniques applied so far, which either average over a long measurement path of several km (LP-DOAS) or rely on immobile in-situ techniques (LIF or BB-CEAS) typically located a few 10-100 m from the intertidal area. Thus both techniques could not observe local "hot-spots", locations with locally elevated IO levels above the background and small spatial extent (e.g. above a source). We present a newly developed Cavity Enhanced Differential Optical Absorption Spectroscopy (CE-DOAS) instrument for the direct identification of IO down to 1 ppt. This technique makes it possible to achieve long absorption light paths in a compact setup (<2.0 m) and thus to apply the DOAS principle to in-situ measurements. The resonator of the cavity is formed by two highly reflective mirrors in the spectral window from 430-460 nm. To avoid any interference of reactive iodine compounds with tubes, walls or filters, the resonator is open, similar to an LP-DOAS setup. A blue LED is used as the light source. The total instrument setup is relatively light (25 kg) and can easily be deployed at different locations. Hence it is possible to set up this instrument directly over the macro algae in the intertidal area during low tide to investigate the IO spatial distribution and "hot-spots". As IO concentrations vary strongly with meteorological parameters, the CE-DOAS measurements are combined with LP-DOAS measurements in the same area, and the combination allows deriving the spatial variability. The results from the first application during the HaloCave2010 campaign on Cape Verde will be presented. In contrast to former measurements, neither instrument observed IO at any coastal site close to the CVAO station. Recently, measurements were performed along the Irish west coast (partly at the research station Mace Head during the MaCloud field campaign) to investigate the IO levels emitted by macro algae. During low tide the CE-DOAS instrument was regularly set up directly in the intertidal area above the macro algae. Results from different coastal sites will be presented in detail. Elevated IO concentrations of up to several 10 ppt were regularly observed with the CE-DOAS instrument, whereas LP-DOAS concentrations are typically more than an order of magnitude lower. The data will be discussed with respect to the IO "hot-spot" theory. Even under unfavourable meteorological conditions (clouds, strong wind) the CE-DOAS instrument regularly observed enhanced IO levels. Different coastal sites show different IO emission strengths and spatial distributions. The spatial distribution of IO at different coastal sites and its impact on atmospheric chemistry will be discussed.

  9. Experiments on sparsity assisted phase retrieval of phase objects

    NASA Astrophysics Data System (ADS)

    Gaur, Charu; Lochab, Priyanka; Khare, Kedar

    2017-05-01

    Iterative phase retrieval algorithms such as the Gerchberg-Saxton method and the Fienup hybrid input-output method are known to suffer from the twin image stagnation problem, particularly when the solution to be recovered is complex valued and has centrosymmetric support. Recently we showed that the twin image stagnation problem can be addressed using image sparsity ideas (Gaur et al 2015 J. Opt. Soc. Am. A 32 1922). In this work we test this sparsity assisted phase retrieval method with experimental single shot Fourier transform intensity data frames corresponding to phase objects displayed on a spatial light modulator. The standard iterative phase retrieval algorithms are combined with an image sparsity based penalty in an adaptive manner. Illustrations for both binary and continuous phase objects are provided. It is observed that image sparsity constraint has an important role to play in obtaining meaningful phase recovery without encountering the well-known stagnation problems. The results are valuable for enabling single shot coherent diffraction imaging of phase objects for applications involving illumination wavelengths over a wide range of electromagnetic spectrum.

  10. Deep neural network with weight sparsity control and pre-training extracts hierarchical features and enhances classification performance: Evidence from whole-brain resting-state functional connectivity patterns of schizophrenia

    PubMed Central

    Kim, Junghoe; Calhoun, Vince D.; Shim, Eunsoo; Lee, Jong-Hwan

    2015-01-01

    Functional connectivity (FC) patterns obtained from resting-state functional magnetic resonance imaging data are commonly employed to study neuropsychiatric conditions by using pattern classifiers such as the support vector machine (SVM). Meanwhile, a deep neural network (DNN) with multiple hidden layers has shown its ability to systematically extract lower-to-higher level information of image and speech data from lower-to-higher hidden layers, markedly enhancing classification accuracy. The objective of this study was to adopt the DNN for whole-brain resting-state FC pattern classification of schizophrenia (SZ) patients vs. healthy controls (HCs) and identification of aberrant FC patterns associated with SZ. We hypothesized that the lower-to-higher level features learned via the DNN would significantly enhance the classification accuracy, and proposed an adaptive learning algorithm to explicitly control the weight sparsity in each hidden layer via L1-norm regularization. Furthermore, the weights were initialized via stacked autoencoder based pre-training to further improve the classification performance. Classification accuracy was systematically evaluated as a function of (1) the number of hidden layers/nodes, (2) the use of L1-norm regularization, (3) the use of the pre-training, (4) the use of framewise displacement (FD) removal, and (5) the use of anatomical/functional parcellation. Using FC patterns from anatomically parcellated regions without FD removal, an error rate of 14.2% was achieved by employing three hidden layers and 50 hidden nodes with both L1-norm regularization and pre-training, which was substantially lower than the error rate from the SVM (22.3%). Moreover, the trained DNN weights (i.e., the learned features) were found to represent the hierarchical organization of aberrant FC patterns in SZ compared with HC. Specifically, pairs of nodes extracted from the lower hidden layer represented sparse FC patterns implicated in SZ, which was quantified by using kurtosis/modularity measures and features from the higher hidden layer showed holistic/global FC patterns differentiating SZ from HC. Our proposed schemes and reported findings attained by using the DNN classifier and whole-brain FC data suggest that such approaches show improved ability to learn hidden patterns in brain imaging data, which may be useful for developing diagnostic tools for SZ and other neuropsychiatric disorders and identifying associated aberrant FC patterns. PMID:25987366
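
    The notion of explicitly controlling per-layer weight sparsity with an L1 penalty can be caricatured as below: after each gradient step, the penalty is nudged up or down depending on whether the observed fraction of near-zero weights is below or above a target. The target, tolerance and adaptation rate are purely illustrative and not taken from the paper.

```python
# Toy sketch (assumptions: target sparsity, adaptation rate beta, tolerance tol
# are all illustrative) of adaptively tuning a per-layer L1 penalty so that the
# fraction of near-zero weights tracks a desired sparsity level.
import numpy as np

def weight_sparsity(W, tol=1e-3):
    return float(np.mean(np.abs(W) < tol))

def update_layer(W, grad, lam, lr=1e-2, target=0.7, beta=0.01):
    W = W - lr * (grad + lam * np.sign(W))           # gradient step with L1 subgradient
    lam = max(lam + beta * (target - weight_sparsity(W)), 0.0)  # push sparsity toward target
    return W, lam
```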

  11. Regularization iteration imaging algorithm for electrical capacitance tomography

    NASA Astrophysics Data System (ADS)

    Tong, Guowei; Liu, Shi; Chen, Hongyan; Wang, Xueyao

    2018-03-01

    The image reconstruction method plays a crucial role in real-world applications of the electrical capacitance tomography technique. In this study, a new cost function that simultaneously considers the sparsity and low-rank properties of the imaging targets is proposed to improve the quality of the reconstruction images, in which the image reconstruction task is converted into an optimization problem. Within the framework of the split Bregman algorithm, an iterative scheme that splits a complicated optimization problem into several simpler sub-tasks is developed to solve the proposed cost function efficiently, in which the fast-iterative shrinkage thresholding algorithm is introduced to accelerate the convergence. Numerical experiment results verify the effectiveness of the proposed algorithm in improving the reconstruction precision and robustness.

  12. GASPACHO: a generic automatic solver using proximal algorithms for convex huge optimization problems

    NASA Astrophysics Data System (ADS)

    Goossens, Bart; Luong, Hiêp; Philips, Wilfried

    2017-08-01

    Many inverse problems (e.g., demosaicking, deblurring, denoising, image fusion, HDR synthesis) share various similarities: degradation operators are often modeled by a specific data fitting function while image prior knowledge (e.g., sparsity) is incorporated by additional regularization terms. In this paper, we investigate automatic algorithmic techniques for evaluating proximal operators. These algorithmic techniques also enable efficient calculation of adjoints from linear operators in a general matrix-free setting. In particular, we study the simultaneous-direction method of multipliers (SDMM) and the parallel proximal algorithm (PPXA) solvers and show that the automatically derived implementations are well suited for both single-GPU and multi-GPU processing. We demonstrate this approach for an Electron Microscopy (EM) deconvolution problem.

  13. Enhancing Sparsity by Reweighted l(1) Minimization

    DTIC Science & Technology

    2008-07-01

    The report studies the recovery of sparse signals by a reweighted ℓ1 minimization algorithm and compares it with the traditional unweighted ℓ1 approach. [Figure residue: plots of the probability of exact signal recovery (declared when ‖x0 − x‖ℓ∞ ≤ 10−3) as a function of the sparsity level k; dashed curves show the reweighted ℓ1 algorithm, (a) after 4 reweighting iterations as a function of ǫ and (b) with fixed ǫ = 0.1 as a function of the number of iterations, outperforming the unweighted ℓ1 approach shown as solid curves.]
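
    A compact sketch of the reweighted ℓ1 idea referenced above: repeatedly solve a weighted LASSO and set each weight to the inverse of the current coefficient magnitude (plus ε), so that large entries are penalized less on the next pass. The inner ISTA solver and all parameter values are illustrative, not taken from the report.

```python
# Sketch of reweighted l1: repeatedly solve a weighted LASSO
# min_x 0.5*||A x - y||_2^2 + sum_i w_i |x_i| and set w_i = 1 / (|x_i| + eps),
# so entries that are already large are penalized less on the next pass.
# The inner ISTA solver and all parameter values are illustrative.
import numpy as np

def weighted_ista(A, y, w, n_iter=200):
    step = 1.0 / np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        z = x - step * A.T @ (A @ x - y)
        x = np.sign(z) * np.maximum(np.abs(z) - step * w, 0.0)
    return x

def reweighted_l1(A, y, eps=0.1, n_reweight=4):
    w = np.ones(A.shape[1])
    for _ in range(n_reweight):
        x = weighted_ista(A, y, w)
        w = 1.0 / (np.abs(x) + eps)                  # emphasize small entries next pass
    return x
```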

  14. Salient Object Detection via Structured Matrix Decomposition.

    PubMed

    Peng, Houwen; Li, Bing; Ling, Haibin; Hu, Weiming; Xiong, Weihua; Maybank, Stephen J

    2016-05-04

    Low-rank recovery models have shown potential for salient object detection, where a matrix is decomposed into a low-rank matrix representing image background and a sparse matrix identifying salient objects. Two deficiencies, however, still exist. First, previous work typically assumes the elements in the sparse matrix are mutually independent, ignoring the spatial and pattern relations of image regions. Second, when the low-rank and sparse matrices are relatively coherent, e.g., when there are similarities between the salient objects and background or when the background is complicated, it is difficult for previous models to disentangle them. To address these problems, we propose a novel structured matrix decomposition model with two structural regularizations: (1) a tree-structured sparsity-inducing regularization that captures the image structure and enforces patches from the same object to have similar saliency values, and (2) a Laplacian regularization that enlarges the gaps between salient objects and the background in feature space. Furthermore, high-level priors are integrated to guide the matrix decomposition and boost the detection. We evaluate our model for salient object detection on five challenging datasets including single object, multiple objects and complex scene images, and show competitive results as compared with 24 state-of-the-art methods in terms of seven performance metrics.
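
    As background for the structured model above, the sketch below shows the plain, unstructured low-rank-plus-sparse split F ≈ L + S solved by alternating singular-value thresholding (background) and entrywise soft-thresholding (salient part); the tree-structured sparsity, Laplacian regularization and high-level priors of the paper are omitted, and the parameter values are illustrative.

```python
# Simplified, unstructured baseline of the low-rank-plus-sparse idea: split a
# feature matrix F into a low-rank background L and a sparse salient part S by
# block coordinate descent on 0.5*||F - L - S||_F^2 + mu*||L||_* + lam*||S||_1.
# The structural and Laplacian regularizers of the paper are omitted.
import numpy as np

def lowrank_sparse_split(F, lam=0.1, mu=0.1, n_iter=50):
    L = np.zeros_like(F)
    S = np.zeros_like(F)
    for _ in range(n_iter):
        U, s, Vt = np.linalg.svd(F - S, full_matrices=False)
        L = U @ np.diag(np.maximum(s - mu, 0.0)) @ Vt              # singular-value thresholding
        S = np.sign(F - L) * np.maximum(np.abs(F - L) - lam, 0.0)  # entrywise soft-thresholding
    return L, S
```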

  15. Poisson image reconstruction with Hessian Schatten-norm regularization.

    PubMed

    Lefkimmiatis, Stamatios; Unser, Michael

    2013-11-01

    Poisson inverse problems arise in many modern imaging applications, including biomedical and astronomical ones. The main challenge is to obtain an estimate of the underlying image from a set of measurements degraded by a linear operator and further corrupted by Poisson noise. In this paper, we propose an efficient framework for Poisson image reconstruction, under a regularization approach, which depends on matrix-valued regularization operators. In particular, the employed regularizers involve the Hessian as the regularization operator and Schatten matrix norms as the potential functions. For the solution of the problem, we propose two optimization algorithms that are specifically tailored to the Poisson nature of the noise. These algorithms are based on an augmented-Lagrangian formulation of the problem and correspond to two variants of the alternating direction method of multipliers. Further, we derive a link that relates the proximal map of an l(p) norm with the proximal map of a Schatten matrix norm of order p. This link plays a key role in the development of one of the proposed algorithms. Finally, we provide experimental results on natural and biological images for the task of Poisson image deblurring and demonstrate the practical relevance and effectiveness of the proposed framework.
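
    The link mentioned above between vector and Schatten proximal maps can be illustrated for the easiest case p = 1, where the Schatten-norm prox amounts to soft-thresholding the singular values; this is a generic sketch, not the paper's Hessian-based operator.

```python
# Generic illustration of the vector/Schatten prox link for the easiest case
# p = 1: the prox of the nuclear (Schatten-1) norm applies the vector l1 prox
# (soft-thresholding) to the singular values. Not the paper's Hessian operator.
import numpy as np

def prox_schatten_1(X, t):
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - t, 0.0)) @ Vt   # shrink the spectrum
```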

  16. Image Reconstruction from Undersampled Fourier Data Using the Polynomial Annihilation Transform

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Archibald, Richard K.; Gelb, Anne; Platte, Rodrigo

    Fourier samples are collected in a variety of applications including magnetic resonance imaging and synthetic aperture radar. The data are typically under-sampled and noisy. In recent years, l 1 regularization has received considerable attention in designing image reconstruction algorithms from under-sampled and noisy Fourier data. The underlying image is assumed to have some sparsity features, that is, some measurable features of the image have sparse representation. The reconstruction algorithm is typically designed to solve a convex optimization problem, which consists of a fidelity term penalized by one or more l 1 regularization terms. The Split Bregman Algorithm provides a fast explicit solution for the case when TV is used for the l 1 regularization terms. Due to its numerical efficiency, it has been widely adopted for a variety of applications. A well known drawback in using TV as an l 1 regularization term is that the reconstructed image will tend to default to a piecewise constant image. This issue has been addressed in several ways. Recently, the polynomial annihilation edge detection method was used to generate a higher order sparsifying transform, and was coined the “polynomial annihilation (PA) transform.” This paper adapts the Split Bregman Algorithm for the case when the PA transform is used as the l 1 regularization term. In so doing, we achieve a more accurate image reconstruction method from under-sampled and noisy Fourier data. Our new method compares favorably to the TV Split Bregman Algorithm, as well as to the popular TGV combined with shearlet approach.

  17. Homogeneity Pursuit

    PubMed Central

    Ke, Tracy; Fan, Jianqing; Wu, Yichao

    2014-01-01

    This paper explores the homogeneity of coefficients in high-dimensional regression, which extends the sparsity concept and is more general and suitable for many applications. Homogeneity arises when regression coefficients corresponding to neighboring geographical regions or a similar cluster of covariates are expected to be approximately the same. Sparsity corresponds to a special case of homogeneity with a large cluster of known atom zero. In this article, we propose a new method called clustering algorithm in regression via data-driven segmentation (CARDS) to explore homogeneity. New mathematical results are provided on the gain that can be achieved by exploring homogeneity. Statistical properties of two versions of CARDS are analyzed. In particular, the asymptotic normality of our proposed CARDS estimator is established, which reveals better estimation accuracy for homogeneous parameters than that without homogeneity exploration. When our methods are combined with sparsity exploration, further efficiency can be achieved beyond the exploration of sparsity alone. This provides additional insights into the power of exploring low-dimensional structures in high-dimensional regression: homogeneity and sparsity. Our results also shed light on the properties of the fused Lasso. The newly developed method is further illustrated by simulation studies and applications to real data. Supplementary materials for this article are available online. PMID:26085701

  18. Adaptive tight frame based medical image reconstruction: a proof-of-concept study for computed tomography

    NASA Astrophysics Data System (ADS)

    Zhou, Weifeng; Cai, Jian-Feng; Gao, Hao

    2013-12-01

    A popular approach for medical image reconstruction has been sparsity regularization, assuming the targeted image can be well approximated by sparse coefficients under some properly designed system. The wavelet tight frame is such a widely used system due to its capability for sparsely approximating piecewise-smooth functions, such as medical images. However, using a fixed system may not always be optimal for reconstructing a variety of diversified images. Recently, methods based on adaptive over-complete dictionaries that are specific to the structures of the targeted images have demonstrated their superiority for image processing. This work develops an adaptive wavelet tight frame method for image reconstruction. The proposed scheme first constructs an adaptive wavelet tight frame that is task specific, and then reconstructs the image of interest by solving an l1-regularized minimization problem using the constructed adaptive tight frame system. The proof-of-concept study is performed for computed tomography (CT), and the simulation results suggest that the adaptive tight frame method improves the reconstructed CT image quality relative to the traditional tight frame method.

  19. Bayesian nonparametric dictionary learning for compressed sensing MRI.

    PubMed

    Huang, Yue; Paisley, John; Lin, Qin; Ding, Xinghao; Fu, Xueyang; Zhang, Xiao-Ping

    2014-12-01

    We develop a Bayesian nonparametric model for reconstructing magnetic resonance images (MRIs) from highly undersampled k -space data. We perform dictionary learning as part of the image reconstruction process. To this end, we use the beta process as a nonparametric dictionary learning prior for representing an image patch as a sparse combination of dictionary elements. The size of the dictionary and patch-specific sparsity pattern are inferred from the data, in addition to other dictionary learning variables. Dictionary learning is performed directly on the compressed image, and so is tailored to the MRI being considered. In addition, we investigate a total variation penalty term in combination with the dictionary learning model, and show how the denoising property of dictionary learning removes dependence on regularization parameters in the noisy setting. We derive a stochastic optimization algorithm based on Markov chain Monte Carlo for the Bayesian model, and use the alternating direction method of multipliers for efficiently performing total variation minimization. We present empirical results on several MRI, which show that the proposed regularization framework can improve reconstruction accuracy over other methods.

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tripathi, Ashish; McNulty, Ian; Munson, Todd

    We propose a new approach to robustly retrieve the exit wave of an extended sample from its coherent diffraction pattern by exploiting sparsity of the sample's edges. This approach enables imaging of an extended sample with a single view, without ptychography. We introduce nonlinear optimization methods that promote sparsity, and we derive update rules to robustly recover the sample's exit wave. We test these methods on simulated samples by varying the sparsity of the edge-detected representation of the exit wave. Finally, our tests illustrate the strengths and limitations of the proposed method in imaging extended samples.

  1. Manifold Regularized Multitask Feature Learning for Multimodality Disease Classification

    PubMed Central

    Jie, Biao; Zhang, Daoqiang; Cheng, Bo; Shen, Dinggang

    2015-01-01

    Multimodality based methods have shown great advantages in classification of Alzheimer’s disease (AD) and its prodromal stage, that is, mild cognitive impairment (MCI). Recently, multitask feature selection methods have typically been used for joint selection of common features across multiple modalities. However, one disadvantage of existing multimodality based methods is that they ignore the useful data distribution information in each modality, which is essential for subsequent classification. Accordingly, in this paper we propose a manifold regularized multitask feature learning method to preserve both the intrinsic relatedness among multiple modalities of data and the data distribution information in each modality. Specifically, we denote the feature learning on each modality as a single task, and use a group-sparsity regularizer to capture the intrinsic relatedness among multiple tasks (i.e., modalities) and jointly select the common features from multiple tasks. Furthermore, we introduce a new manifold-based Laplacian regularizer to preserve the data distribution information from each task. Finally, we use the multikernel support vector machine method to fuse multimodality data for eventual classification. Furthermore, we also extend our method to the semisupervised setting, where only partial data are labeled. We evaluate our method using the baseline magnetic resonance imaging (MRI), fluorodeoxyglucose positron emission tomography (FDG-PET), and cerebrospinal fluid (CSF) data of subjects from the AD neuroimaging initiative database. The experimental results demonstrate that our proposed method can not only achieve improved classification performance, but also help to discover the disease-related brain regions useful for disease diagnosis. PMID:25277605
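
    The group-sparsity regularizer used for joint feature selection can be illustrated by its proximal operator: each row of the weight matrix (one row per feature, one column per modality/task) is shrunk jointly, so a feature is either kept or discarded across all modalities. This is a generic ℓ2,1 prox sketch, not the authors' solver.

```python
# Generic sketch of the group-sparsity (l2,1) proximal operator behind joint
# feature selection across tasks: rows of W (features) are shrunk jointly over
# columns (modalities/tasks), so a feature is kept or dropped for all tasks.
import numpy as np

def prox_group_l21(W, t):
    norms = np.linalg.norm(W, axis=1, keepdims=True)
    scale = np.maximum(1.0 - t / np.maximum(norms, 1e-12), 0.0)
    return W * scale
```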

  2. Effects of Apple Consumption on Lipid Profile of Hyperlipidemic and Overweight Men

    PubMed Central

    Vafa, Mohammad Reza; Haghighatjoo, Elham; Shidfar, Farzad; Afshari, Shirin; Gohari, Mahmood Reza; Ziaee, Amir

    2011-01-01

    Objectives: Fruits and vegetables may have beneficial effects on the lipid profile of hyperlipidemic subjects. The present study aimed to verify the effect of golden delicious apple on the lipid profile of hyperlipidemic and overweight men. Methods: Forty-six hyperlipidemic and overweight men were randomly divided into two groups. The intervention group received 300 g of golden delicious apple per day for 8 weeks. The control group followed their regular dietary regimen for the same period of time. Blood samples were analyzed for serum triglycerides (TG), total cholesterol (TC), low density lipoprotein-cholesterol (LDL-C), high density lipoprotein-cholesterol (HDL-C), very low density lipoprotein-cholesterol (VLDL), apolipoprotein B (Apo B), lipoprotein a (Lp a) and the LDL/HDL ratio at baseline and after the intervention. Results: Total polyphenols and fiber in fresh apple were 485 mg/kg and 4.03 g/100 g, respectively. After 8 weeks, statistically significant differences in TG and VLDL levels were observed between the two groups, but no significant differences were observed regarding TC, LDL-C, HDL-C, Apo (B), Lp (a) and the LDL/HDL ratio. Conclusions: Consumption of golden delicious apple may increase serum TG and VLDL in hyperlipidemic and overweight men. More studies are needed to assess the effect of apple consumption on serum TC, LDL-C, HDL-C, Apo (B), Lp (a) and the LDL/HDL ratio. PMID:21603015

  3. Comparison of Compressed Sensing Algorithms for Inversion of 3-D Electrical Resistivity Tomography.

    NASA Astrophysics Data System (ADS)

    Peddinti, S. R.; Ranjan, S.; Kbvn, D. P.

    2016-12-01

    Image reconstruction in electrical resistivity tomography (ERT) is a highly non-linear, sparse, and ill-posed inverse problem. The problem becomes much more severe when dealing with 3-D datasets that result in large matrices. Conventional gradient-based techniques using L2-norm minimization with some form of regularization can impose a smoothness constraint on the solution. Compressed sensing (CS) is a relatively new technique that takes advantage of the inherent sparsity of the parameter space in one form or another. If favorable conditions are met, CS has been proven to be an efficient image reconstruction technique that uses limited observations without losing edge sharpness. This paper deals with the development of an open source 3-D resistivity inversion tool using the CS framework. The forward model was adopted from RESINVM3D (Pidlisecky et al., 2007), with CS as the inverse code. A discrete cosine transform (DCT) function was used to induce model sparsity in orthogonal form. Two CS-based algorithms, viz. the interior point method and two-step IST, were evaluated on a synthetic layered model with surface electrode observations. The algorithms were tested (in terms of quality and convergence) under varying degrees of parameter heterogeneity, model refinement, and reduced observation data space. In comparison to conventional gradient algorithms, CS was shown to effectively reconstruct the sub-surface image at lower computational cost, as observed by a general increase in NRMSE from 0.5 in 10 iterations using the gradient algorithm to 0.8 in 5 iterations using the CS algorithms.
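
    A minimal stand-in for the CS-style inversion described above is an iterative soft-thresholding loop with sparsity imposed in an orthonormal DCT basis; the Jacobian J and data-misfit vector d are assumed given, the interior-point variant is not shown, and parameter values are illustrative.

```python
# Minimal IST loop with sparsity imposed in an orthonormal DCT basis, standing
# in for the CS-based inversion described above. J (sensitivity/Jacobian) and
# d (data misfit) are assumed given; the interior-point variant is not shown.
import numpy as np
from scipy.fft import dct, idct

def ist_dct(J, d, lam=1e-2, n_iter=100):
    step = 1.0 / np.linalg.norm(J, 2) ** 2
    m = np.zeros(J.shape[1])
    for _ in range(n_iter):
        g = J.T @ (J @ m - d)                        # gradient of the misfit term
        c = dct(m - step * g, norm="ortho")          # to the sparse DCT domain
        c = np.sign(c) * np.maximum(np.abs(c) - step * lam, 0.0)
        m = idct(c, norm="ortho")                    # back to model space
    return m
```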

  4. Sparsity-driven tomographic reconstruction of atmospheric water vapor using GNSS and InSAR observations

    NASA Astrophysics Data System (ADS)

    Heublein, Marion; Alshawaf, Fadwa; Zhu, Xiao Xiang; Hinz, Stefan

    2016-04-01

    An accurate knowledge of the 3D distribution of water vapor in the atmosphere is a key element for weather forecasting and climate research. On the other hand, as water vapor causes a delay in the microwave signal propagation within the atmosphere, a precise determination of water vapor is required for accurate positioning and deformation monitoring using Global Navigation Satellite Systems (GNSS) and Interferometric Synthetic Aperture Radar (InSAR). However, due to its high variability in time and space, the atmospheric water vapor distribution is difficult to model. Since GNSS meteorology was introduced about twenty years ago, it has increasingly been used as a geodetic technique to generate maps of 2D Precipitable Water Vapor (PWV). Moreover, several approaches for 3D tomographic water vapor reconstruction from GNSS-based estimates using the simple least squares adjustment were presented. In this poster, we present an innovative and sophisticated Compressive Sensing (CS) concept for sparsity-driven tomographic reconstruction of 3D atmospheric wet refractivity fields using data from GNSS and InSAR. The 2D zenith wet delay (ZWD) estimates are obtained by a combination of point-wise estimates of the wet delay using GNSS observations and partial InSAR wet delay maps. These ZWD estimates are aggregated to derive realistic wet delay input data of 100 points as if corresponding to 100 GNSS sites within an area of 100 km × 100 km in the test region of the Upper Rhine Graben. The made-up ZWD values can be mapped into different elevation and azimuth angles. Using the Cosine transform, a sparse representation of the wet refractivity field is obtained. In contrast to existing tomographic approaches, we exploit sparsity as a prior for the regularization of the underdetermined inverse system. The new aspects of this work include both the combination of GNSS and InSAR data for water vapor tomography and the sophisticated CS estimation. The accuracy of the estimated 3D water vapor field is determined by comparing slant integrated wet delays computed from the estimated wet refractivities with real GNSS wet delay estimates. This comparison is performed along different elevation and azimuth angles.

  5. Structured Sparse Principal Components Analysis With the TV-Elastic Net Penalty.

    PubMed

    de Pierrefeu, Amicie; Lofstedt, Tommy; Hadj-Selem, Fouad; Dubois, Mathieu; Jardri, Renaud; Fovet, Thomas; Ciuciu, Philippe; Frouin, Vincent; Duchesnay, Edouard

    2018-02-01

    Principal component analysis (PCA) is an exploratory tool widely used in data analysis to uncover the dominant patterns of variability within a population. Despite its ability to represent a data set in a low-dimensional space, PCA's interpretability remains limited. Indeed, the components produced by PCA are often noisy or exhibit no visually meaningful patterns. Furthermore, the fact that the components are usually non-sparse may also impede interpretation, unless arbitrary thresholding is applied. However, in neuroimaging, it is essential to uncover clinically interpretable phenotypic markers that would account for the main variability in the brain images of a population. Recently, some alternatives to the standard PCA approach, such as sparse PCA (SPCA), have been proposed, their aim being to limit the density of the components. Nonetheless, sparsity alone does not entirely solve the interpretability problem in neuroimaging, since it may yield scattered and unstable components. We hypothesized that the incorporation of prior information regarding the structure of the data may lead to improved relevance and interpretability of brain patterns. We therefore present a simple extension of the popular PCA framework that adds structured sparsity penalties on the loading vectors in order to identify the few stable regions in the brain images that capture most of the variability. Such structured sparsity can be obtained by combining, e.g., elastic net and total variation (TV) penalties, where the TV regularization encodes information on the underlying structure of the data. This paper presents the structured SPCA (denoted SPCA-TV) optimization framework and its resolution. We demonstrate SPCA-TV's effectiveness and versatility on three different data sets. It can be applied to any kind of structured data, such as, e.g., multi-dimensional array images or meshes of cortical surfaces. The gains of SPCA-TV over unstructured approaches (such as SPCA and ElasticNet PCA) or a structured approach (such as GraphNet PCA) are significant, since SPCA-TV reveals the variability within a data set in the form of intelligible brain patterns that are easier to interpret and more stable across different samples.

  6. Sparsity-promoting inversion for modeling of irregular volcanic deformation source

    NASA Astrophysics Data System (ADS)

    Zhai, G.; Shirzaei, M.

    2016-12-01

    Kīlauea volcano, Hawai'i Island, has a complex magmatic system. Nonetheless, kinematic models of the summit reservoir have so far been limited to first-order analytical solutions with pre-determined geometry. To investigate the complex geometry and kinematics of the summit reservoir, we apply a multitrack multitemporal wavelet-based InSAR (Interferometric Synthetic Aperture Radar) algorithm and a geometry-free, time-dependent modeling scheme considering a superposition of point centers of dilatation (PCDs). Applying Principal Component Analysis (PCA) to the time-dependent source model, six spatially independent deformation zones (i.e., reservoirs) are identified, whose locations are consistent with previous studies. The time-dependence of the model also allows identifying periods of correlated or anti-correlated behavior between reservoirs. Hence, we suggest that the reservoirs are likely connected and form a complex magmatic reservoir [Zhai and Shirzaei, 2016]. To obtain a physically meaningful representation of the complex reservoir, we devise a new sparsity-promoting modeling scheme assuming active magma bodies are well-localized melt accumulations (i.e., outliers in the background crust). The major steps include inverting surface deformation data using a hybrid L1- and L2-norm regularization approach to solve for a sparse volume change distribution, and then implementing a BEM-based method to solve for the opening distribution on a triangular mesh representing the complex reservoir. Using this approach, we are able to constrain the internal excess pressure of a magma body with irregular geometry, satisfying a uniformly pressurized boundary condition on the surface of the magma chamber. The inversion method with sparsity constraint is tested using five synthetic source geometries, including a torus, prolate ellipsoid, and sphere as well as horizontal and vertical L-shaped bodies. The results show that source dimension, depth and shape are well recovered. Afterward, we apply this modeling scheme to deformation observed at the Kilauea summit to constrain the magmatic source geometry and revise the kinematics of Kilauea's shallow plumbing system. Such a model is valuable for understanding the physical processes in a magmatic reservoir, and the method can readily be applied to other volcanic settings.

  7. Deblurring traffic sign images based on exemplars

    PubMed Central

    Qiu, Tianshuang; Luan, Shengyang; Song, Haiyu; Wu, Linxiu

    2018-01-01

    Motion blur appearing in traffic sign images may lead to poor recognition results, and it is therefore of great significance to study how to deblur such images. In this paper, a novel exemplar-based method for deblurring traffic sign images is proposed, together with several supporting techniques. First, an exemplar dataset construction method based on a multiple-size partition strategy is proposed to lower the computational cost of exemplar matching. Second, a matching criterion based on gradient information and the entropy correlation coefficient is proposed to enhance the matching accuracy. Third, the L0.5-norm is introduced as the regularization term to maintain the sparsity of the blur kernel. Experiments verify the superiority of the proposed techniques, and extensive evaluations against state-of-the-art methods demonstrate the effectiveness of the proposed algorithm. PMID:29513677

  8. A Scatter-Based Prototype Framework and Multi-Class Extension of Support Vector Machines

    PubMed Central

    Jenssen, Robert; Kloft, Marius; Zien, Alexander; Sonnenburg, Sören; Müller, Klaus-Robert

    2012-01-01

    We provide a novel interpretation of the dual of support vector machines (SVMs) in terms of scatter with respect to class prototypes and their mean. As a key contribution, we extend this framework to multiple classes, providing a new joint Scatter SVM algorithm, at the level of its binary counterpart in the number of optimization variables. This enables us to implement computationally efficient solvers based on sequential minimal and chunking optimization. As a further contribution, the primal problem formulation is developed in terms of regularized risk minimization and the hinge loss, revealing the score function to be used in the actual classification of test patterns. We investigate Scatter SVM properties related to generalization ability, computational efficiency, sparsity and sensitivity maps, and report promising results. PMID:23118845

  9. 76 FR 9771 - SFPP, L.P.; SFPP, L.P.; SFPP, L.P.; SFPP, L.P.; Notice of Filing

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-02-22

    ..., L.P.; SFPP, L.P.; SFPP, L.P.; SFPP, L.P.; Notice of Filing Take notice that on February 10, 2011, the SFPP, L.P. filed with the Commission a proposal to provide refunds to shippers who were not... orders dated December 8, 2006 (SFPP, L.P., 117 FERC ] 61, 285 (2007)), December 26, 2007 (SFPP, L.P., 121...

  10. Spatially Common Sparsity Based Adaptive Channel Estimation and Feedback for FDD Massive MIMO

    NASA Astrophysics Data System (ADS)

    Gao, Zhen; Dai, Linglong; Wang, Zhaocheng; Chen, Sheng

    2015-12-01

    This paper proposes a spatially common sparsity based adaptive channel estimation and feedback scheme for frequency division duplex based massive multi-input multi-output (MIMO) systems, which adapts training overhead and pilot design to reliably estimate and feed back the downlink channel state information (CSI) with significantly reduced overhead. Specifically, a non-orthogonal downlink pilot design is first proposed, which is very different from standard orthogonal pilots. By exploiting the spatially common sparsity of massive MIMO channels, a compressive sensing (CS) based adaptive CSI acquisition scheme is proposed, where the consumed time slot overhead only adaptively depends on the sparsity level of the channels. Additionally, a distributed sparsity adaptive matching pursuit algorithm is proposed to jointly estimate the channels of multiple subcarriers. Furthermore, by exploiting the temporal channel correlation, a closed-loop channel tracking scheme is provided, which adaptively designs the non-orthogonal pilot according to the previous channel estimation to achieve an enhanced CSI acquisition. Finally, we generalize the results of the multiple-measurement-vectors case in CS and derive the Cramer-Rao lower bound of the proposed scheme, which enlightens us to design the non-orthogonal pilot signals for the improved performance. Simulation results demonstrate that the proposed scheme outperforms its counterparts, and it is capable of approaching the performance bound.
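
    As a simple stand-in for the matching-pursuit style channel estimator described above, the sketch below runs plain orthogonal matching pursuit on a pilot sensing matrix: it greedily selects the strongest channel taps and re-fits them by least squares; the distributed, sparsity-adaptive and closed-loop aspects of the proposed scheme are not reproduced.

```python
# Plain orthogonal matching pursuit (OMP), shown as a simple stand-in for the
# sparsity adaptive matching pursuit used above: Phi is a pilot sensing matrix,
# y the received pilots, and the loop greedily picks the strongest channel taps.
import numpy as np

def omp(Phi, y, sparsity_level):
    residual = y.copy()
    support = []
    x = np.zeros(Phi.shape[1], dtype=complex)
    for _ in range(sparsity_level):
        idx = int(np.argmax(np.abs(Phi.conj().T @ residual)))
        if idx not in support:
            support.append(idx)
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    x[support] = coef
    return x
```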

  11. A Sparsity-Promoted Decomposition for Compressed Fault Diagnosis of Roller Bearings

    PubMed Central

    Wang, Huaqing; Ke, Yanliang; Song, Liuyang; Tang, Gang; Chen, Peng

    2016-01-01

    The traditional approaches for condition monitoring of roller bearings are almost always achieved under Shannon sampling theorem conditions, leading to a big-data problem. The compressed sensing (CS) theory provides a new solution to the big-data problem. However, the vibration signals are insufficiently sparse and it is difficult to achieve sparsity using the conventional techniques, which impedes the application of CS theory. Therefore, it is of great significance to promote the sparsity when applying the CS theory to fault diagnosis of roller bearings. To increase the sparsity of vibration signals, a sparsity-promoted method called the tunable Q-factor wavelet transform based on decomposing the analyzed signals into transient impact components and high oscillation components is utilized in this work. The former become sparser than the raw signals with noise eliminated, whereas the latter include noise. Thus, the decomposed transient impact components replace the original signals for analysis. The CS theory is applied to extract the fault features without complete reconstruction, which means that the reconstruction can be completed when the components with interested frequencies are detected and the fault diagnosis can be achieved during the reconstruction procedure. The application cases prove that the CS theory assisted by the tunable Q-factor wavelet transform can successfully extract the fault features from the compressed samples. PMID:27657063

  12. Effect of mipomersen on LDL-cholesterol in patients with severe LDL-hypercholesterolaemia and atherosclerosis treated by lipoprotein apheresis (The MICA-Study).

    PubMed

    Waldmann, Elisa; Vogt, Anja; Crispin, Alexander; Altenhofer, Julia; Riks, Ina; Parhofer, Klaus G

    2017-04-01

    In this study, we evaluated the effect of mipomersen in patients with severe LDL-hypercholesterolaemia and atherosclerosis, treated by lipid lowering drugs and regular lipoprotein apheresis. This prospective, randomized, controlled phase II single center trial enrolled 15 patients (9 males, 6 females; 59 ± 9 y, BMI 27 ± 4 kg/m 2 ) with established atherosclerosis, LDL-cholesterol ≥130 mg/dL (3.4 mmol/L) despite maximal possible drug therapy, and fulfilling German criteria for regular lipoprotein apheresis. All patients were on stable lipid lowering drug therapy and regular apheresis for >3 months. Patients randomized to treatment (n = 11) self-injected mipomersen 200 mg sc weekly, at day 4 after apheresis, for 26 weeks. Patients randomized to control (n = 4) continued apheresis without injection. The primary endpoint was the change in pre-apheresis LDL-cholesterol. Of the patients randomized to mipomersen, 3 discontinued the drug early (<12 weeks therapy) for side effects. For these, another 3 were recruited and randomized. Further, 4 patients discontinued mipomersen between 12 and 26 weeks for side effects (moderate to severe injection site reactions n = 3 and elevated liver enzymes n = 1). In those treated for >12 weeks, mipomersen reduced pre-apheresis LDL-cholesterol significantly by 22.6 ± 17.0%, from a baseline of 4.8 ± 1.2 mmol/L to 3.7 ± 0.9 mmol/L, while there was no significant change in the control group (+1.6 ± 9.3%), with the difference between the groups being significant (p=0.02). Mipomersen also decreased pre-apheresis lipoprotein(a) (Lp(a)) concentration from a median baseline of 40.2 mg/dL (32.5,71) by 16% (-19.4,13.6), though without significance (p=0.21). Mipomersen reduces LDL-cholesterol (significantly) and Lp(a) (non-significantly) in patients on maximal lipid-lowering drug therapy and regular apheresis, but is often associated with side effects. Copyright © 2017 Elsevier B.V. All rights reserved.

  13. Exponential series approaches for nonparametric graphical models

    NASA Astrophysics Data System (ADS)

    Janofsky, Eric

    Markov Random Fields (MRFs) or undirected graphical models are parsimonious representations of joint probability distributions. This thesis studies high-dimensional, continuous-valued pairwise Markov Random Fields. We are particularly interested in approximating pairwise densities whose logarithm belongs to a Sobolev space. For this problem we propose the method of exponential series which approximates the log density by a finite-dimensional exponential family with the number of sufficient statistics increasing with the sample size. We consider two approaches to estimating these models. The first is regularized maximum likelihood. This involves optimizing the sum of the log-likelihood of the data and a sparsity-inducing regularizer. We then propose a variational approximation to the likelihood based on tree-reweighted, nonparametric message passing. This approximation allows for upper bounds on risk estimates, leverages parallelization and is scalable to densities on hundreds of nodes. We show how the regularized variational MLE may be estimated using a proximal gradient algorithm. We then consider estimation using regularized score matching. This approach uses an alternative scoring rule to the log-likelihood, which obviates the need to compute the normalizing constant of the distribution. For general continuous-valued exponential families, we provide parameter and edge consistency results. As a special case we detail a new approach to sparse precision matrix estimation which has statistical performance competitive with the graphical lasso and computational performance competitive with the state-of-the-art glasso algorithm. We then describe results for model selection in the nonparametric pairwise model using exponential series. The regularized score matching problem is shown to be a convex program; we provide scalable algorithms based on consensus alternating direction method of multipliers (ADMM) and coordinate-wise descent. We use simulations to compare our method to others in the literature as well as the aforementioned TRW estimator.

  14. Analysis of Liquid Propellant Exposed to Elastomeric Materials

    DTIC Science & Technology

    1989-12-01

    Rubber: LP-1 NBR-2. B. Nitrile Rubber: LP-2 NBR-8; LP-3 NBR-9; LP-4 1203-F60-R2, RADIAN; LP-5 VT-380 (NBR/PVC), RADIAN; LP-6 BJLT MI-40, UNIROYAL; LP-7 OZO-HA...0221 (70% NBR/30% PVC), UNIROYAL. C. Carboxylated Nitrile Rubber: LP-8 XNBR-2; LP-9 XNBR-3; LP-10 XNBR-6. D. Polychloroprene Rubber: LP-11 CR-1; LP-12 CR-2. ... compatibility of liquid propellants is also determined by the degradation of the propellant by decomposition, by the solution of ballistically undesirable

  15. Adaptive compressed sensing of remote-sensing imaging based on the sparsity prediction

    NASA Astrophysics Data System (ADS)

    Yang, Senlin; Li, Xilong; Chong, Xin

    2017-10-01

    Conventional compressive sensing relies on non-adaptive linear projections, and the number of measurements is usually set empirically, which degrades the quality of image reconstruction. First, block-based compressed sensing (BCS) with the conventional choice of compressive measurements is reviewed. An estimation method for image sparsity is then proposed based on the two-dimensional discrete cosine transform (2D DCT): given an energy threshold, the DCT coefficients are energy-normalized and sorted in descending order, and the sparsity of the image is obtained from the proportion of dominant coefficients needed to reach that threshold. Simulation results show that the method estimates image sparsity effectively and provides a practical basis for selecting the number of compressive measurements. Because the number of measurements is chosen from the sparsity estimated with the given energy threshold, the proposed method helps ensure the quality of image reconstruction.
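
    A minimal sketch of the sparsity estimate described above, assuming the image (or block) is a 2-D NumPy array and that "sparsity" is taken as the fraction of sorted, energy-normalized DCT coefficients needed to capture a chosen energy threshold; the function name and threshold value are illustrative and not taken from the paper.

        import numpy as np
        from scipy.fft import dctn

        def estimate_dct_sparsity(image, energy_threshold=0.99):
            """Estimate image sparsity from its 2D DCT energy distribution."""
            coeffs = dctn(image.astype(float), norm="ortho")      # 2-D DCT of the image
            energy = coeffs.ravel() ** 2
            energy = np.sort(energy)[::-1] / energy.sum()          # normalize, sort descending
            cumulative = np.cumsum(energy)
            k = np.searchsorted(cumulative, energy_threshold) + 1  # dominant coefficients
            return k / energy.size                                 # proportion of dominant coefficients

        # Hypothetical usage: scale the measurement budget with the estimated sparsity.
        # n_measurements = int(np.ceil(c * estimate_dct_sparsity(block) * block.size))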

  16. Sparsity-based super-resolved coherent diffraction imaging of one-dimensional objects.

    PubMed

    Sidorenko, Pavel; Kfir, Ofer; Shechtman, Yoav; Fleischer, Avner; Eldar, Yonina C; Segev, Mordechai; Cohen, Oren

    2015-09-08

    Phase-retrieval problems of one-dimensional (1D) signals are known to suffer from ambiguity that hampers their recovery from measurements of their Fourier magnitude, even when their support (a region that confines the signal) is known. Here we demonstrate sparsity-based coherent diffraction imaging of 1D objects using extreme-ultraviolet radiation produced from high harmonic generation. Using sparsity as prior information removes the ambiguity in many cases and enhances the resolution beyond the physical limit of the microscope. Our approach may be used in a variety of problems, such as diagnostics of defects in microelectronic chips. Importantly, this is the first demonstration of sparsity-based 1D phase retrieval from actual experiments, hence it paves the way for greatly improving the performance of Fourier-based measurement systems where 1D signals are inherent, such as diagnostics of ultrashort laser pulses, deciphering the complex time-dependent response functions (for example, time-dependent permittivity and permeability) from spectral measurements and vice versa.

  17. Exploiting the wavelet structure in compressed sensing MRI.

    PubMed

    Chen, Chen; Huang, Junzhou

    2014-12-01

    Sparsity has been widely utilized in magnetic resonance imaging (MRI) to reduce k-space sampling. According to structured sparsity theories, fewer measurements are required for tree-sparse data than for data with standard sparsity alone. Intuitively, more accurate image reconstruction can therefore be achieved with the same number of measurements by exploiting the wavelet tree structure in MRI. A novel algorithm is proposed in this article to reconstruct MR images from undersampled k-space data. In contrast to conventional compressed sensing MRI (CS-MRI), which relies only on the sparsity of MR images in the wavelet or gradient domain, we exploit the wavelet tree structure to improve CS-MRI. The tree-based CS-MRI problem is decomposed into three simpler subproblems, each of which can be solved efficiently by an iterative scheme. Simulations and in vivo experiments demonstrate the significant improvement of the proposed method over conventional CS-MRI algorithms, and its feasibility on MR data compared with existing tree-based imaging algorithms. Copyright © 2014 Elsevier Inc. All rights reserved.

  18. Critical spaces for quasilinear parabolic evolution equations and applications

    NASA Astrophysics Data System (ADS)

    Prüss, Jan; Simonett, Gieri; Wilke, Mathias

    2018-02-01

    We present a comprehensive theory of critical spaces for the broad class of quasilinear parabolic evolution equations. The approach is based on maximal Lp-regularity in time-weighted function spaces. It is shown that our notion of critical spaces coincides with the concept of scaling invariant spaces in case that the underlying partial differential equation enjoys a scaling invariance. Applications to the vorticity equations for the Navier-Stokes problem, convection-diffusion equations, the Nernst-Planck-Poisson equations in electro-chemistry, chemotaxis equations, the MHD equations, and some other well-known parabolic equations are given.

  19. Joint Inversion of Body-Wave Arrival Times and Surface-Wave Dispersion Data in the Wavelet Domain Constrained by Sparsity Regularization

    NASA Astrophysics Data System (ADS)

    Zhang, H.; Fang, H.; Yao, H.; Maceira, M.; van der Hilst, R. D.

    2014-12-01

    Recently, Zhang et al. (2014, Pure and Applied Geophysics) developed a joint inversion code incorporating body-wave arrival times and surface-wave dispersion data. The joint inversion code was based on the regional-scale version of the double-difference tomography algorithm tomoDD. The surface-wave inversion part uses the propagator matrix solver in the algorithm DISPER80 (Saito, 1988) for forward calculation of dispersion curves from layered velocity models and the related sensitivities. The application of the joint inversion code to the SAFOD site in central California shows that the fault structure is better imaged in the new model, which is able to fit both the body-wave and surface-wave observations adequately. Here we present a new joint inversion method that solves for the model in the wavelet domain constrained by sparsity regularization. Compared to the previous method, it has the following advantages: (1) The method is both data- and model-adaptive. The velocity model can be represented by wavelet coefficients at different scales, which are generally sparse. By constraining the model wavelet coefficients to be sparse, the inversion in the wavelet domain inherently adapts to the data distribution, so that the model has higher spatial resolution in zones of good data coverage. Fang and Zhang (2014, Geophysical Journal International) showed the superior performance of the wavelet-based double-difference seismic tomography method compared to the conventional method. (2) For the surface-wave inversion, the joint inversion code takes advantage of the recent development of direct inversion of surface-wave dispersion data for 3-D variations of shear-wave velocity without the intermediate step of phase or group velocity maps (Fang et al., 2014, Geophysical Journal International). A fast marching method is used to compute, at each period, surface-wave traveltimes and ray paths between sources and receivers. We will test the new joint inversion code at the SAFOD site to compare its performance against the previous code. We will also select another fault zone, such as the San Jacinto Fault Zone, to better image its structure.
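
    A generic, hedged sketch of this style of wavelet-domain, sparsity-constrained inversion, written as a plain ISTA loop; G (sensitivity/ray-path matrix), W (orthonormal transform), d (travel-time residuals), and lam are hypothetical stand-ins and this is not the authors' code.

        import numpy as np

        def ista_wavelet_inversion(G, W, d, lam=0.1, step=None, n_iter=200):
            """Solve min_m 0.5*||d - G W^T m||_2^2 + lam*||m||_1 by ISTA,
            where m are transform-domain (e.g., wavelet) model coefficients."""
            A = G @ W.T                                   # combined forward operator
            if step is None:
                step = 1.0 / np.linalg.norm(A, 2) ** 2    # 1 / Lipschitz constant of the gradient
            m = np.zeros(A.shape[1])
            for _ in range(n_iter):
                grad = A.T @ (A @ m - d)                  # gradient of the data-misfit term
                z = m - step * grad
                m = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft threshold
            return W.T @ m                                # back to the physical (slowness) domain

        # Hypothetical usage with stand-ins for G, W, and d:
        # slowness_update = ista_wavelet_inversion(G, W, d, lam=0.05)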

  20. Deep neural network with weight sparsity control and pre-training extracts hierarchical features and enhances classification performance: Evidence from whole-brain resting-state functional connectivity patterns of schizophrenia.

    PubMed

    Kim, Junghoe; Calhoun, Vince D; Shim, Eunsoo; Lee, Jong-Hwan

    2016-01-01

    Functional connectivity (FC) patterns obtained from resting-state functional magnetic resonance imaging data are commonly employed to study neuropsychiatric conditions by using pattern classifiers such as the support vector machine (SVM). Meanwhile, a deep neural network (DNN) with multiple hidden layers has shown its ability to systematically extract lower-to-higher level information of image and speech data from lower-to-higher hidden layers, markedly enhancing classification accuracy. The objective of this study was to adopt the DNN for whole-brain resting-state FC pattern classification of schizophrenia (SZ) patients vs. healthy controls (HCs) and identification of aberrant FC patterns associated with SZ. We hypothesized that the lower-to-higher level features learned via the DNN would significantly enhance the classification accuracy, and proposed an adaptive learning algorithm to explicitly control the weight sparsity in each hidden layer via L1-norm regularization. Furthermore, the weights were initialized via stacked autoencoder based pre-training to further improve the classification performance. Classification accuracy was systematically evaluated as a function of (1) the number of hidden layers/nodes, (2) the use of L1-norm regularization, (3) the use of the pre-training, (4) the use of framewise displacement (FD) removal, and (5) the use of anatomical/functional parcellation. Using FC patterns from anatomically parcellated regions without FD removal, an error rate of 14.2% was achieved by employing three hidden layers and 50 hidden nodes with both L1-norm regularization and pre-training, which was substantially lower than the error rate from the SVM (22.3%). Moreover, the trained DNN weights (i.e., the learned features) were found to represent the hierarchical organization of aberrant FC patterns in SZ compared with HC. Specifically, pairs of nodes extracted from the lower hidden layer represented sparse FC patterns implicated in SZ, which was quantified by using kurtosis/modularity measures and features from the higher hidden layer showed holistic/global FC patterns differentiating SZ from HC. Our proposed schemes and reported findings attained by using the DNN classifier and whole-brain FC data suggest that such approaches show improved ability to learn hidden patterns in brain imaging data, which may be useful for developing diagnostic tools for SZ and other neuropsychiatric disorders and identifying associated aberrant FC patterns. Copyright © 2015 Elsevier Inc. All rights reserved.
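
    A minimal sketch of weight-sparsity control via an L1 penalty on the hidden-layer weights, in the spirit of the scheme described above; the network size, penalty strength, and data below are placeholders, and this is not the authors' implementation (which additionally uses stacked-autoencoder pre-training and adaptive control of the sparsity level).

        import torch
        import torch.nn as nn

        class SparseMLP(nn.Module):
            """Small fully connected classifier whose hidden weights are encouraged to be sparse."""
            def __init__(self, n_in, n_hidden=50, n_layers=3, n_out=2):
                super().__init__()
                layers, width = [], n_in
                for _ in range(n_layers):
                    layers += [nn.Linear(width, n_hidden), nn.ReLU()]
                    width = n_hidden
                self.hidden = nn.Sequential(*layers)
                self.out = nn.Linear(width, n_out)

            def forward(self, x):
                return self.out(self.hidden(x))

            def l1_penalty(self):
                # Sum of absolute hidden-layer weights; this term controls weight sparsity.
                return sum(m.weight.abs().sum() for m in self.hidden if isinstance(m, nn.Linear))

        # One hedged training step on placeholder FC-pattern data (x: features, y: labels).
        model = SparseMLP(n_in=5050)                      # e.g., vectorized upper-triangular FC matrix
        opt = torch.optim.Adam(model.parameters(), lr=1e-3)
        x, y = torch.randn(8, 5050), torch.randint(0, 2, (8,))
        loss = nn.functional.cross_entropy(model(x), y) + 1e-4 * model.l1_penalty()
        loss.backward()
        opt.step()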

  1. A robust holographic autofocusing criterion based on edge sparsity: comparison of Gini index and Tamura coefficient for holographic autofocusing based on the edge sparsity of the complex optical wavefront

    NASA Astrophysics Data System (ADS)

    Tamamitsu, Miu; Zhang, Yibo; Wang, Hongda; Wu, Yichen; Ozcan, Aydogan

    2018-02-01

    The Sparsity of the Gradient (SoG) is a robust autofocusing criterion for holography, in which the gradient modulus of the refocused complex hologram is calculated and a sparsity metric is applied to it. Here, we compare two different choices of sparsity metrics used in SoG, specifically the Gini index (GI) and the Tamura coefficient (TC), for holographic autofocusing on dense/connected or sparse samples. We provide a theoretical analysis predicting that for uniformly distributed image data, TC and GI exhibit similar behavior, while for naturally sparse images containing few high-valued signal entries and many low-valued noisy background pixels, TC is more sensitive to distribution changes in the signal and more resistant to background noise. These predictions are confirmed by experimental results using SoG-based holographic autofocusing on dense and connected samples (such as stained breast tissue sections) as well as highly sparse samples (such as isolated Giardia lamblia cysts). Through these experiments, we found that the Tamura coefficient of the gradient (ToG) and the Gini index of the gradient (GoG) offer almost identical autofocusing performance on dense and connected samples, whereas for naturally sparse samples, GoG should be calculated on a relatively small region of interest (ROI) closely surrounding the object, while ToG offers more flexibility in choosing a larger ROI containing more background pixels.
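
    A minimal NumPy sketch of the two sparsity metrics applied to a gradient-magnitude image, assuming the standard definitions (Tamura coefficient as the square root of the standard deviation over the mean; Gini index in the Hurley-Rickard form); the exact normalizations used in the paper may differ.

        import numpy as np

        def tamura_coefficient(g):
            """TC = sqrt(std / mean) of the (non-negative) gradient magnitudes."""
            g = np.abs(g).ravel()
            return np.sqrt(g.std() / g.mean())

        def gini_index(g):
            """Gini index of sparsity (Hurley & Rickard): 1 for maximally sparse, 0 for uniform."""
            c = np.sort(np.abs(g).ravel())          # ascending magnitudes
            n = c.size
            k = np.arange(1, n + 1)
            return 1.0 - 2.0 * np.sum((c / c.sum()) * (n - k + 0.5) / n)

        # Autofocusing sketch (refocus() and gradient_modulus() are hypothetical helpers):
        # best_z = max(depths, key=lambda z: gini_index(gradient_modulus(refocus(hologram, z))))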

  2. Inter-class sparsity based discriminative least square regression.

    PubMed

    Wen, Jie; Xu, Yong; Li, Zuoyong; Ma, Zhongli; Xu, Yuanrong

    2018-06-01

    Least square regression is a very popular supervised classification method. However, two main issues greatly limit its performance. The first is that it focuses only on fitting the input features to the corresponding output labels while ignoring the correlations among samples. The second is that the zero-one label matrix used as the regression target is inappropriate for classification. To solve these problems and improve performance, this paper presents a novel method, inter-class sparsity based discriminative least square regression (ICS_DLSR), for multi-class classification. Different from other methods, the proposed method requires the transformed samples of each class to share a common sparsity structure. To this end, an inter-class sparsity constraint is introduced into the least square regression model so that the margins of samples from the same class can be greatly reduced while those of samples from different classes are enlarged. In addition, an error term with a row-sparsity constraint is introduced to relax the strict zero-one label matrix, which allows the method to be more flexible in learning the discriminative transformation matrix. These factors encourage the method to learn a more compact and discriminative transformation for regression, and thus it has the potential to perform better than other methods. Extensive experimental results show that the proposed method achieves the best performance in comparison with other methods for multi-class classification. Copyright © 2018 Elsevier Ltd. All rights reserved.

  3. Texture Classification with Change Point Statistics.

    DTIC Science & Technology

    1981-07-01

    it is necessary to let T approach the value of n for an nxn image. This is motivated by the fact that the computation of Ut,T is so costly, and if T... [The remainder of this excerpt is a flattened results table of +/- classification outcomes for the LP, ML, and PS texture samples (roughly LP2-LP8, ML2-ML8, PS2-PS8); the original tabular structure is not recoverable from the scanned text.]

  4. PLC-based mode multi/demultiplexers for mode division multiplexing

    NASA Astrophysics Data System (ADS)

    Saitoh, Kunimasa; Hanzawa, Nobutomo; Sakamoto, Taiji; Fujisawa, Takeshi; Yamashita, Yoko; Matsui, Takashi; Tsujikawa, Kyozo; Nakajima, Kazuhide

    2017-02-01

    Recently developed PLC-based mode multi/demultiplexers (MUX/DEMUXs) for mode division multiplexing (MDM) transmission are reviewed. We firstly show the operation principle and basic characteristics of PLC-based MUX/DEMUXs with an asymmetric directional coupler (ADC). We then demonstrate the 3-mode (2LP-mode) multiplexing of the LP01, LP11a, and LP11b modes by using fabricated PLC-based mode MUX/DEMUX on one chip. In order to excite LP11b mode in the same plane, a PLC-based LP11 mode rotator is introduced. Finally, we show the PLC-based 6-mode (4LP-mode) MUX/DEMUX with a uniform height by using ADCs, LP11 mode rotators, and tapered waveguides. It is shown that the LP21a mode can be excited from the LP11b mode by using ADC, and the two nearly degenerated LP21b and LP02 modes can be (de)multiplexed separately by using tapered mode converter from E13 (E31) mode to LP21b (LP02) mode.

  5. Electron paramagnetic resonance image reconstruction with total variation and curvelets regularization

    NASA Astrophysics Data System (ADS)

    Durand, Sylvain; Frapart, Yves-Michel; Kerebel, Maud

    2017-11-01

    Spatial electron paramagnetic resonance imaging (EPRI) is a recent method to localize and characterize free radicals in vivo or in vitro, leading to applications in material and biomedical sciences. To improve the quality of the reconstruction obtained by EPRI, a variational method is proposed to invert the image formation model. It is based on a least-squares data-fidelity term and, for the regularization, the total variation and a Besov seminorm. To handle the Besov seminorm in practice, an implementation using the curvelet transform and the L1 norm enforcing sparsity is proposed. This allows the model to reconstruct both images where acquisition information is missing and images with details in textured areas, thus opening possibilities to reduce acquisition times. To implement the minimization problem using the algorithm developed by Chambolle and Pock, a thorough analysis of the direct model is undertaken and the latter is inverted while avoiding the use of filtered backprojection (FBP) and of non-uniform Fourier transforms. Numerical experiments are carried out on simulated data, where the proposed model outperforms both visually and quantitatively the classical model using deconvolution and FBP. Improved reconstructions on real data, acquired on an irradiated distal phalanx, were also successfully obtained.
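
    An illustrative sketch of the kind of variational problem described above (the exact weights and operator definitions used in the paper are not reproduced here): with A the EPRI projection operator, y the measured data, C the curvelet transform, and alpha, beta regularization weights,

        \min_{u \ge 0} \; \tfrac{1}{2}\,\| A u - y \|_2^2
          \;+\; \alpha \,\mathrm{TV}(u)
          \;+\; \beta \,\| C u \|_1 ,

    which is the generic form solved by primal-dual schemes of the Chambolle-Pock type.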

  6. Compressed modes for variational problems in mathematical physics and compactly supported multiresolution basis for the Laplace operator

    NASA Astrophysics Data System (ADS)

    Ozolins, Vidvuds; Lai, Rongjie; Caflisch, Russel; Osher, Stanley

    2014-03-01

    We will describe a general formalism for obtaining spatially localized (``sparse'') solutions to a class of problems in mathematical physics, which can be recast as variational optimization problems, such as the important case of Schrödinger's equation in quantum mechanics. Sparsity is achieved by adding an L1 regularization term to the variational principle, which is shown to yield solutions with compact support (``compressed modes''). Linear combinations of these modes approximate the eigenvalue spectrum and eigenfunctions in a systematically improvable manner, and the localization properties of compressed modes make them an attractive choice for use with efficient numerical algorithms that scale linearly with the problem size. In addition, we introduce an L1 regularized variational framework for developing a spatially localized basis, compressed plane waves (CPWs), that spans the eigenspace of a differential operator, for instance, the Laplace operator. Our approach generalizes the concept of plane waves to an orthogonal real-space basis with multiresolution capabilities. Supported by NSF Award DMR-1106024 (VO), DOE Contract No. DE-FG02-05ER25710 (RC) and ONR Grant No. N00014-11-1-719 (SO).
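
    For reference, a hedged sketch of the L1-regularized variational principle described above, following the general form of the compressed-modes construction (the precise normalization may differ): for a Hamiltonian \hat{H} and N modes \psi_j with orthonormality constraints,

        E \;=\; \min_{\{\psi_j\}} \; \sum_{j=1}^{N}
            \Big( \tfrac{1}{\mu}\,\|\psi_j\|_1 \;+\; \langle \psi_j,\, \hat{H}\,\psi_j \rangle \Big)
        \quad \text{s.t.} \quad \langle \psi_j, \psi_k \rangle = \delta_{jk},

    where the parameter \mu controls the trade-off between spatial localization (compact support) of the modes and the accuracy with which they span the low-energy eigenspace.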

  7. On decoupling of volatility smile and term structure in inverse option pricing

    NASA Astrophysics Data System (ADS)

    Egger, Herbert; Hein, Torsten; Hofmann, Bernd

    2006-08-01

    Correct pricing of options and other financial derivatives is of great importance to financial markets and one of the key subjects of mathematical finance. Usually, parameters specifying the underlying stochastic model are not directly observable, but have to be determined indirectly from observable quantities. The identification of local volatility surfaces from market data of European vanilla options is one very important example of this type. As with many other parameter identification problems, the reconstruction of local volatility surfaces is ill-posed, and reasonable results can only be achieved via regularization methods. Moreover, due to the sparsity of data, the local volatility is not uniquely determined, but depends strongly on the kind of regularization norm used and a good a priori guess for the parameter. By assuming a multiplicative structure for the local volatility, which is motivated by the specific data situation, the inverse problem can be decomposed into two separate sub-problems. This removes part of the non-uniqueness and allows us to establish convergence and convergence rates under weak assumptions. Additionally, a numerical solution of the two sub-problems is much cheaper than that of the overall identification problem. The theoretical results are illustrated by numerical tests.

  8. Designing a stable feedback control system for blind image deconvolution.

    PubMed

    Cheng, Shichao; Liu, Risheng; Fan, Xin; Luo, Zhongxuan

    2018-05-01

    Blind image deconvolution is one of the main low-level vision problems with wide applications. Many previous works manually design regularization to simultaneously estimate the latent sharp image and the blur kernel under a maximum a posteriori framework. However, it has been demonstrated that such joint estimation strategies may lead to undesired trivial solutions. In this paper, we present a novel perspective, using a stable feedback control system, to simulate the latent sharp image propagation. The controller of our system consists of regularization and guidance, which determine the sparsity and the sharp features of the latent image, respectively. Furthermore, the image formation model of the blurred image is introduced into the feedback process to prevent the restoration from deviating from the stable point. The stability analysis of the system indicates that the latent image propagation in the blind deconvolution task can be efficiently estimated and controlled by cues and priors, so the kernel estimate used for image restoration becomes more precise. Experimental results show that our system is effective for image propagation, and performs favorably against state-of-the-art blind image deconvolution methods on different benchmark image sets and special blurred images. Copyright © 2018 Elsevier Ltd. All rights reserved.

  9. HIGH DIMENSIONAL COVARIANCE MATRIX ESTIMATION IN APPROXIMATE FACTOR MODELS.

    PubMed

    Fan, Jianqing; Liao, Yuan; Mincheva, Martina

    2011-01-01

    The variance covariance matrix plays a central role in the inferential theories of high dimensional factor models in finance and economics. Popular regularization methods of directly exploiting sparsity are not directly applicable to many financial problems. Classical methods of estimating the covariance matrices are based on the strict factor models, assuming independent idiosyncratic components. This assumption, however, is restrictive in practical applications. By assuming sparse error covariance matrix, we allow the presence of the cross-sectional correlation even after taking out common factors, and it enables us to combine the merits of both methods. We estimate the sparse covariance using the adaptive thresholding technique as in Cai and Liu (2011), taking into account the fact that direct observations of the idiosyncratic components are unavailable. The impact of high dimensionality on the covariance matrix estimation based on the factor structure is then studied.
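
    A minimal NumPy sketch in the spirit of the factor-based estimator described above: principal components serve as factor proxies, and the residual (idiosyncratic) covariance is soft-thresholded. The thresholding rule here is a simple universal one rather than the adaptive, entry-dependent rule of Cai and Liu (2011), and all names are illustrative.

        import numpy as np

        def factor_thresholded_cov(X, n_factors, tau):
            """Covariance estimate: low-rank factor part + thresholded residual part.
            X: (T, p) array of T observations of p variables (assumed centered)."""
            T, p = X.shape
            S = X.T @ X / T                                  # sample covariance
            vals, vecs = np.linalg.eigh(S)                   # eigenvalues in ascending order
            idx = np.argsort(vals)[::-1][:n_factors]
            L = vecs[:, idx] * np.sqrt(vals[idx])            # loadings of leading principal components
            low_rank = L @ L.T
            R = S - low_rank                                 # idiosyncratic (error) covariance
            off = np.sign(R) * np.maximum(np.abs(R) - tau, 0.0)        # soft-threshold entries
            R_thr = off - np.diag(np.diag(off)) + np.diag(np.diag(R))  # keep the diagonal untouched
            return low_rank + R_thr

        # Example: Sigma_hat = factor_thresholded_cov(returns, n_factors=3, tau=0.01)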

  10. Real-Valued Covariance Vector Sparsity-Inducing DOA Estimation for Monostatic MIMO Radar

    PubMed Central

    Wang, Xianpeng; Wang, Wei; Li, Xin; Liu, Jing

    2015-01-01

    In this paper, a real-valued covariance vector sparsity-inducing method for direction of arrival (DOA) estimation is proposed in monostatic multiple-input multiple-output (MIMO) radar. Exploiting the special configuration of monostatic MIMO radar, low-dimensional real-valued received data can be obtained by using the reduced-dimensional transformation and unitary transformation technique. Then, based on the Khatri–Rao product, a real-valued sparse representation framework of the covariance vector is formulated to estimate DOA. Compared to the existing sparsity-inducing DOA estimation methods, the proposed method provides better angle estimation performance and lower computational complexity. Simulation results verify the effectiveness and advantage of the proposed method. PMID:26569241

  11. Real-Valued Covariance Vector Sparsity-Inducing DOA Estimation for Monostatic MIMO Radar.

    PubMed

    Wang, Xianpeng; Wang, Wei; Li, Xin; Liu, Jing

    2015-11-10

    In this paper, a real-valued covariance vector sparsity-inducing method for direction of arrival (DOA) estimation is proposed in monostatic multiple-input multiple-output (MIMO) radar. Exploiting the special configuration of monostatic MIMO radar, low-dimensional real-valued received data can be obtained by using the reduced-dimensional transformation and unitary transformation technique. Then, based on the Khatri-Rao product, a real-valued sparse representation framework of the covariance vector is formulated to estimate DOA. Compared to the existing sparsity-inducing DOA estimation methods, the proposed method provides better angle estimation performance and lower computational complexity. Simulation results verify the effectiveness and advantage of the proposed method.

  12. Updated secondary implant stability data of two dental implant systems. A retrospective cohort study

    PubMed Central

    Verleye, Gino; Mavreas, Dimitrios; Vande-Vannet, Bart

    2017-01-01

    Background At present, updated secondary implant stability data generated by current versions of resonance frequency analysis (RFA) and mobility measurement (MM) electronic devices for two different implant systems with currently manufactured surfaces appear to be lacking and/or incomplete. Material and Methods Secondary implant stability data based on both RFA and MM measurements were collected and analyzed from 44 formerly treated patients (24 f, 20 m) that received either Ankylos Cellplus (Ø3.5mm) (A) (n=36) or Straumann regular neck SLA tissue level (Ø4.1mm) (S) (n=37) implants in posterior positions of both jawbones (total number=72). These results were interpreted in view of formerly published data. Results Estimated RFA outcomes (mean±SD) for A implants were 81.23 (±0.65) (LP) - 76.15 (±1.57) (UP) isq; for S implants 76.15 (±1.48) (LP) - 73.88 (±2.34) (UP) isq. Estimated MM outcomes for A implants were (-4.0) (±0.23) (LP) - (-3.2) (±0.33) (UP) ptv; for S implants (-5.15) (±0.39) (LP) - (-4.4) (±0.84) (UP) ptv. According to GEE statistical modelling, implant type and implant position seem to influence the outcome variables (p<0.05), whereas gender and implant length did not (p>0.05). Conclusions Secondary implant stability values, recorded with current RFA and MM devices, of A Cellplus implants are provided for the first time. A difference of 14.7-9.7 isq values was noted for CellPlus versus TPS S implants recorded with a cabled RFA device. This study supports the assumption that RFA outcomes generated with first-generation RFA devices differ from those obtained with current RFA devices, meaning that their use in reviews needs caution and correction. Key words:Secondary implant stability, resonance frequency analysis, Periotest, Osstell Mentor, Straumann, Ankylos, CellPlus, SLA. PMID:29075415

  13. 3-D Modeling of Irregular Volcanic Sources Using Sparsity-Promoting Inversions of Geodetic Data and Boundary Element Method

    NASA Astrophysics Data System (ADS)

    Zhai, Guang; Shirzaei, Manoochehr

    2017-12-01

    Geodetic observations of surface deformation associated with volcanic activities can be used to constrain volcanic source parameters and their kinematics. Simple analytical models, such as point and spherical sources, are widely used to model deformation data. The inherent nature of oversimplified model geometries makes them unable to explain fine details of surface deformation. Current nonparametric, geometry-free inversion approaches resolve the distributed volume change, assuming it varies smoothly in space, which may detect artificial volume change outside magmatic source regions. To obtain a physically meaningful representation of an irregular volcanic source, we devise a new sparsity-promoting modeling scheme assuming active magma bodies are well-localized melt accumulations, namely, outliers in the background crust. First, surface deformation data are inverted using a hybrid L1- and L2-norm regularization scheme to solve for sparse volume change distributions. Next, a boundary element method is implemented to solve for the displacement discontinuity distribution of the reservoir, which satisfies a uniform pressure boundary condition. The inversion approach is thoroughly validated using benchmark and synthetic tests, of which the results show that source dimension, depth, and shape can be recovered appropriately. We apply this modeling scheme to deformation observed at Kilauea summit for periods of uplift and subsidence leading to and following the 2007 Father's Day event. We find that the magmatic source geometries for these periods are statistically distinct, which may be an indicator that magma is released from isolated compartments due to large differential pressure leading to the rift intrusion.
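
    A rough sketch of the first inversion stage described above, using scikit-learn's elastic net (a combined L1/L2 penalty) as a stand-in for the paper's hybrid L1- and L2-norm regularization; G (a Green's-function kernel mapping distributed volume change to surface displacement), d (observed displacements), and the penalty weights are hypothetical placeholders, and the subsequent boundary-element step is not shown.

        import numpy as np
        from sklearn.linear_model import ElasticNet

        def sparse_volume_change(G, d, alpha=1e-3, l1_ratio=0.7):
            """Estimate a sparse distribution of source volume changes dv from
            surface deformation d, given the kernel G (d ~ G @ dv)."""
            model = ElasticNet(alpha=alpha, l1_ratio=l1_ratio, fit_intercept=False, max_iter=10000)
            model.fit(G, d)
            return model.coef_          # sparse volume-change distribution (outlier-like sources)

        # Hypothetical usage with synthetic stand-ins:
        # rng = np.random.default_rng(0)
        # G = rng.normal(size=(500, 2000))             # 500 observations, 2000 candidate source cells
        # dv_true = np.zeros(2000); dv_true[100:110] = 1.0
        # d = G @ dv_true + 0.01 * rng.normal(size=500)
        # dv_hat = sparse_volume_change(G, d)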

  14. Sparsity-based acoustic inversion in cross-sectional multiscale optoacoustic imaging.

    PubMed

    Han, Yiyong; Tzoumas, Stratis; Nunes, Antonio; Ntziachristos, Vasilis; Rosenthal, Amir

    2015-09-01

    With recent advancement in hardware of optoacoustic imaging systems, highly detailed cross-sectional images may be acquired at a single laser shot, thus eliminating motion artifacts. Nonetheless, other sources of artifacts remain due to signal distortion or out-of-plane signals. The purpose of image reconstruction algorithms is to obtain the most accurate images from noisy, distorted projection data. In this paper, the authors use the model-based approach for acoustic inversion, combined with a sparsity-based inversion procedure. Specifically, a cost function is used that includes the L1 norm of the image in sparse representation and a total variation (TV) term. The optimization problem is solved by a numerically efficient implementation of a nonlinear gradient descent algorithm. TV-L1 model-based inversion is tested in the cross section geometry for numerically generated data as well as for in vivo experimental data from an adult mouse. In all cases, model-based TV-L1 inversion showed a better performance over the conventional Tikhonov regularization, TV inversion, and L1 inversion. In the numerical examples, the images reconstructed with TV-L1 inversion were quantitatively more similar to the originating images. In the experimental examples, TV-L1 inversion yielded sharper images and weaker streak artifact. The results herein show that TV-L1 inversion is capable of improving the quality of highly detailed, multiscale optoacoustic images obtained in vivo using cross-sectional imaging systems. As a result of its high fidelity, model-based TV-L1 inversion may be considered as the new standard for image reconstruction in cross-sectional imaging.

  15. Sparsity-optimized separation of body waves and ground-roll by constructing dictionaries using tunable Q-factor wavelet transforms with different Q-factors

    NASA Astrophysics Data System (ADS)

    Chen, Xin; Chen, Wenchao; Wang, Xiaokai; Wang, Wei

    2017-10-01

    Low-frequency oscillatory ground-roll is regarded as one of the main regular interference waves, which obscures primary reflections in land seismic data. Suppressing the ground-roll can reasonably improve the signal-to-noise ratio of seismic data. Conventional suppression methods, such as high-pass and various f-k filtering, usually cause waveform distortions and loss of body wave information because of their simple cut-off operation. In this study, a sparsity-optimized separation of body waves and ground-roll, which is based on morphological component analysis theory, is realized by constructing dictionaries using tunable Q-factor wavelet transforms with different Q-factors. Our separation model is grounded on the fact that the input seismic data are composed of low-oscillatory body waves and high-oscillatory ground-roll. Two different waveform dictionaries using a low Q-factor and a high Q-factor, respectively, are confirmed as able to sparsely represent each component based on their diverse morphologies. Thus, seismic data including body waves and ground-roll can be nonlinearly decomposed into low-oscillatory and high-oscillatory components. This is a new noise attenuation approach according to the oscillatory behaviour of the signal rather than the scale or frequency. We illustrate the method using both synthetic and field shot data. Compared with results from conventional high-pass and f-k filtering, the results of the proposed method prove this method to be effective and advantageous in preserving the waveform and bandwidth of reflections.

  16. Exploratory Phase for Optimizing Lifetime Position 4 of the COS/FUV Detector

    NASA Astrophysics Data System (ADS)

    Roman-Duval, Julia; Indriolo, Nick; De Rosa, Gisella; Fox, Andrew; Oliveira, Cristina; Penton, Steve; Sahnow, David; Sonnentrucker, Paule; White, James

    2018-05-01

    The COS/FUV detector uses a microchannel plate, whose response (gain) decreases with usage, a process called gain-sag. To mitigate these gain-sag effects, COS/FUV science spectra are periodically moved to pristine locations of the detector, i.e. different lifetime positions (LP). Preparations for the move from LP3 to LP4 started with an exploratory phase between May and October 2016, while the LP4 move occurred on October 2, 2017. This ISR describes the LP4 exploratory phase, during which the feasibility of placing LP4 at -2.5'' below LP3 (-5'' below LP1) was examined, the effects of the LP4 move on the science quality and calibration accuracy of spectra were investigated, and the final location of LP4 (-2.5'' below LP3) was determined. We describe in detail the strategy adopted for the LP4 exploratory phase to ensure that all potential issues were identified and resolved well in advance of the LP4 move.

  17. Improved l1-SPIRiT using 3D walsh transform-based sparsity basis.

    PubMed

    Feng, Zhen; Liu, Feng; Jiang, Mingfeng; Crozier, Stuart; Guo, He; Wang, Yuxin

    2014-09-01

    l1-SPIRiT is a fast magnetic resonance imaging (MRI) method which combines parallel imaging (PI) with compressed sensing (CS) by performing a joint l1-norm and l2-norm optimization procedure. The original l1-SPIRiT method uses two-dimensional (2D) Wavelet transform to exploit the intra-coil data redundancies and a joint sparsity model to exploit the inter-coil data redundancies. In this work, we propose to stack all the coil images into a three-dimensional (3D) matrix, and then a novel 3D Walsh transform-based sparsity basis is applied to simultaneously reduce the intra-coil and inter-coil data redundancies. Both the 2D Wavelet transform-based and the proposed 3D Walsh transform-based sparsity bases were investigated in the l1-SPIRiT method. The experimental results show that the proposed 3D Walsh transform-based l1-SPIRiT method outperformed the original l1-SPIRiT in terms of image quality and computational efficiency. Copyright © 2014 Elsevier Inc. All rights reserved.
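
    A small NumPy sketch of applying a separable 3D Walsh-Hadamard transform to a stack of coil images, in the spirit of the sparsity basis described above; it assumes each dimension of the stacked array is a power of two (padding would be needed otherwise) and uses the natural Hadamard ordering rather than any particular Walsh (sequency) ordering the paper may use.

        import numpy as np
        from scipy.linalg import hadamard

        def walsh3d(stack):
            """Separable 3D Hadamard/Walsh transform of a (nx, ny, ncoils) coil-image stack."""
            out = stack.astype(complex)
            for axis, n in enumerate(out.shape):
                H = hadamard(n) / np.sqrt(n)     # orthonormal Hadamard matrix along this axis
                out = np.moveaxis(np.tensordot(H, np.moveaxis(out, axis, 0), axes=1), 0, axis)
            return out

        # Joint sparsity is then enforced on walsh3d(stack) over the intra- and inter-coil axes;
        # the inverse is the same operation, since the orthonormal Hadamard matrix is its own inverse.
        # coeffs = walsh3d(coil_images)          # coil_images: e.g., shape (256, 256, 8)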

  18. Nonlinear hyperspectral unmixing based on sparse non-negative matrix factorization

    NASA Astrophysics Data System (ADS)

    Li, Jing; Li, Xiaorun; Zhao, Liaoying

    2016-01-01

    Hyperspectral unmixing aims at extracting pure material spectra, together with their corresponding proportions, from a mixed pixel. Because they model the distribution of real materials more accurately, nonlinear mixing models (non-LMMs) are usually considered to perform better than linear mixing models (LMMs) in complicated scenarios. In the past years, numerous nonlinear models have been successfully applied to hyperspectral unmixing. However, most non-LMMs consider only the sum-to-one or positivity constraints, while the widespread sparsity of real material mixtures is a factor that cannot be ignored: a pixel is usually composed of the spectral signatures of only a few materials from the full set of pure spectra. Thus, in this paper, a smooth sparsity constraint is incorporated into the state-of-the-art Fan nonlinear model to exploit this sparsity and use it to enhance the unmixing performance. The sparsity-constrained Fan model is solved with non-negative matrix factorization. The algorithm was implemented on synthetic and real hyperspectral data and showed its advantage over competing algorithms in the experiments.

  19. RBOOST: RIEMANNIAN DISTANCE BASED REGULARIZED BOOSTING

    PubMed Central

    Liu, Meizhu; Vemuri, Baba C.

    2011-01-01

    Boosting is a versatile machine learning technique that has numerous applications including but not limited to image processing, computer vision, data mining etc. It is based on the premise that the classification performance of a set of weak learners can be boosted by some weighted combination of them. There have been a number of boosting methods proposed in the literature, such as the AdaBoost, LPBoost, SoftBoost and their variations. However, the learning update strategies used in these methods usually lead to overfitting and instabilities in the classification accuracy. Improved boosting methods via regularization can overcome such difficulties. In this paper, we propose a Riemannian distance regularized LPBoost, dubbed RBoost. RBoost uses Riemannian distance between two square-root densities (in closed form) – used to represent the distribution over the training data and the classification error respectively – to regularize the error distribution in an iterative update formula. Since this distance is in closed form, RBoost requires much less computational cost compared to other regularized Boosting algorithms. We present several experimental results depicting the performance of our algorithm in comparison to recently published methods, LP-Boost and CAVIAR, on a variety of datasets including the publicly available OASIS database, a home grown Epilepsy database and the well known UCI repository. Results depict that the RBoost algorithm performs better than the competing methods in terms of accuracy and efficiency. PMID:21927643

  20. Teachers' and Researchers' Beliefs of Learning and the use of Learning Progressions

    NASA Astrophysics Data System (ADS)

    Clapp, Francis Neely

    In the last decade, science education reform in the United States has emphasized the exploration of cognitive learning pathways, which are theories on how a person learns a particular science subject matter. These theories are based, in part, on Piagetian developmental theory. One such model, called Learning Progressions (LP), has become prominent within science education reform. Science education researchers design LPs, which in turn are used by science educators to sequence their curricula. The new national science standards released in April 2013 (Next Generation Science Standards) are, in part, grounded in the LP model. Understanding how teachers apply and use LPs, therefore, is valuable because professional development programs are likely to use this model, given the federal attention LPs have received in science education reform. I sought to identify the beliefs and discourse that both LP developers and intended LP implementers have around student learning, teaching, and learning progressions. However, studies measuring beliefs or perspectives of LP-focused projects are absent in published works. A qualitative study is therefore warranted to explore this rather uncharted research area. Research questions were examined through the use of an instrumental case study. A case study approach was selected over other methodologies, as the research problem is, in part, bound within a clearly identifiable case (a professional development experience centering on a single LP model). One of the broadest definitions of a case study is noted by Becker (1968), who stated that goals of case studies are "to arrive at a comprehensive understanding of the groups under study" and to develop "general theoretical statements about regularities in social structure and process" (p. 233). Based on Merriam (1985), the general consensus in the case study literature is that the assumptions underlying this method are common to naturalistic inquiry, with research conducted primarily in the field with little control of variables. Beyond this similarity, different researchers have varying definitions of case studies. Merriam (1985) provided a summary of the delineations and varying types of case studies. Merriam divided the various case study methods by their functions, with a marked divide between theory building and non-theory building methods. Non-theory building case studies are generally descriptive, and interpretive methods that apply theory to a case or context allow researchers to better understand the phenomena observed (Lijphart, 1971; Merriam, 1985). Conversely, theory building case studies focus on hypothesis generation, theory confirming, theory informing, or theory refuting (Lijphart, 1971; Merriam, 1985). Though there are many definitions and methods labeled as 'case studies,' for the purpose of this study, Yin's (1981) definition of a case study is used. Yin (1981) defined a case study as a method to examine "(a) a contemporary phenomenon in its real-life context, especially when (b) the boundaries between phenomenon and context are not clearly evident" (p. 59). My study seeks to apply theory and study phenomena in their context, as I examine teachers' practice in the context of their respective classrooms. This study focuses on the lived experiences of both teacher and researcher stakeholders within the study. Specifically, I interviewed teachers who participated in a year-long teacher-in-residence (TiR) program. In addition, researchers/content experts who conceptualized the LP were also interviewed. Because the TiR experience was a form of professional development, I studied the impact that it had on participants' perceptions of the LP and any teacher-reported changes in their respective classrooms. However, because beliefs influence the language that we use to describe phenomena (such as learning and teaching), it is also informative to describe patterns in how LP developers explain learning and teaching. Subsequently, the results of this study will inform the literature on both science teacher professional development and the translation of LP theory into practice.

  1. Lipoprotein(a): more interesting than ever after 50 years.

    PubMed

    Dubé, Joseph B; Boffa, Michael B; Hegele, Robert A; Koschinsky, Marlys L

    2012-04-01

    Lipoprotein(a) [Lp(a)] is a risk factor for cardiovascular disease; we highlight the most recent research initiatives that have sought to define Lp(a)-dependent pathogenicity as well as pharmacologic approaches to lowering Lp(a). Recent large-scale meta-analyses have confirmed elevated Lp(a) concentrations to be a moderate but consistent prospective coronary heart disease (CHD) risk factor. The Mendelian randomization approach has also associated LPA variants with Lp(a) concentration and CHD risk. Discoveries linking Lp(a) to oxidized phospholipid burden have implicated a proinflammatory role for Lp(a) hinting at a new mechanism underlying the association with CHD risk, which adds to previous atherogenic and thrombogenic mechanisms. Most existing Lp(a)-lowering drug treatments almost always show simultaneous effects on other lipoproteins, making it difficult to assign any clinical outcome specifically to the effects of Lp(a) lowering. Early experiments with antisense oligonucleotides targeting apolipoprotein(a) mRNA seem to indicate the pleiotropic effects of Lp(a) reduction on LDL and HDL in mice. The mechanism linking Lp(a) concentration with concentrations of other blood lipids remains unknown but may provide an insight into Lp(a) metabolism. Despite the wealth of epidemiologic evidence supporting Lp(a) concentration as a CHD risk factor, the lack of a definitive functional mechanism involving an Lp(a)-dependent pathway in CHD pathogenesis has limited the potential clinical connotation of Lp(a). However, the application of novel technologies to the long-standing mysteries of Lp(a) biology seems to provide the opportunity for expanding our understanding of Lp(a) and its complex role in cardiovascular health.

  2. Detection of varicella-zoster virus antigens in lesional skin of zosteriform lichen planus but not in that of linear lichen planus.

    PubMed

    Mizukawa, Y; Horie, C; Yamazaki, Y; Shiohara, T

    2012-01-01

    Distinctions between 'linear lichen planus' (LP) and 'zosteriform LP' are difficult to determine solely based on clinical findings. The aim of this study is to determine whether the presence of the varicella-zoster virus (VZV) antigens could be used to differentiate the zosteriform LP from the linear LP. We immunohistochemically investigated the presence of in vivo localization of VZV antigens in 8 LP lesions (zosteriform LP: n = 5, linear LP: n = 3). We describe 2 cases of zosteriform LP without apparent prior episodes of herpes zoster, in whom VZV antigens were detected in the eccrine epithelium. Further analysis showed that VZV antigens were exclusively detected in the eccrine epithelium in the zosteriform LP lesions, but not in the linear LP lesions. Etiological differences exist between zosteriform LP and linear LP. The presence of VZV antigens in lesional skin of the former indicates a possible triggering role of this virus in the pathogenesis of this variant. Copyright © 2012 S. Karger AG, Basel.

  3. Sparsity-Aware DOA Estimation Scheme for Noncircular Source in MIMO Radar.

    PubMed

    Wang, Xianpeng; Wang, Wei; Li, Xin; Liu, Qi; Liu, Jing

    2016-04-14

    In this paper, a novel sparsity-aware direction of arrival (DOA) estimation scheme for a noncircular source is proposed in multiple-input multiple-output (MIMO) radar. In the proposed method, the reduced-dimensional transformation technique is adopted to eliminate the redundant elements. Then, exploiting the noncircularity of signals, a joint sparsity-aware scheme based on the reweighted l1 norm penalty is formulated for DOA estimation, in which the diagonal elements of the weight matrix are the coefficients of the noncircular MUSIC-like (NC MUSIC-like) spectrum. Compared to the existing l1 norm penalty-based methods, the proposed scheme provides higher angular resolution and better DOA estimation performance. Results from numerical experiments are used to show the effectiveness of our proposed method.

  4. Transverse stress induced LP 02-LP 21 modal interference of stimulated Raman scattered light in a few-mode optical fiber

    NASA Astrophysics Data System (ADS)

    Sharma, A.; Posey, R.

    1996-02-01

    Four-photon mixing followed by stimulated Raman scattering is observed in LP 02 mode in a 7.9 μm core diameter optical fiber. A localized transverse stress efficiency couples LP 02 to the LP 21 mode with a macroscopic beat length of 1.8 mm. LP 02-LP 21 modal interference is investigated by detecting the 550-590 nm SRS through a pinhole in the far field exit plane. Quantitative explanation of wavelength dependent intensity modulation results in a precise experimental determination of {∂[β 02(λ) - β 21(λ)] }/{∂λ}, for mode-propagation constants β02( λ) and β21( λ) of LP 02 and LP 21 modes respectively, as well as Δ, the relative core-cladding refractive index difference. The LP 02-LP 21 modal interference is used for sensing of temperature between 50-300°C.
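
    For orientation, the beat length quoted above follows from the standard two-mode interference relation (a textbook identity, not specific to this paper): for propagation constants \beta_{02}(\lambda) and \beta_{21}(\lambda),

        L_B(\lambda) \;=\; \frac{2\pi}{\left|\,\beta_{02}(\lambda) - \beta_{21}(\lambda)\,\right|},

    so the wavelength dependence of the measured intensity modulation gives direct access to the derivative \partial[\beta_{02}(\lambda) - \beta_{21}(\lambda)]/\partial\lambda reported in the abstract.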

  5. Low-dose cerebral perfusion computed tomography image restoration via low-rank and total variation regularizations

    PubMed Central

    Niu, Shanzhou; Zhang, Shanli; Huang, Jing; Bian, Zhaoying; Chen, Wufan; Yu, Gaohang; Liang, Zhengrong; Ma, Jianhua

    2016-01-01

    Cerebral perfusion x-ray computed tomography (PCT) is an important functional imaging modality for evaluating cerebrovascular diseases and has been widely used in clinics over the past decades. However, due to the protocol of PCT imaging with repeated dynamic sequential scans, the associative radiation dose unavoidably increases as compared with that used in conventional CT examinations. Minimizing the radiation exposure in PCT examination is a major task in the CT field. In this paper, considering the rich similarity redundancy information among enhanced sequential PCT images, we propose a low-dose PCT image restoration model by incorporating the low-rank and sparse matrix characteristic of sequential PCT images. Specifically, the sequential PCT images were first stacked into a matrix (i.e., low-rank matrix), and then a non-convex spectral norm/regularization and a spatio-temporal total variation norm/regularization were then built on the low-rank matrix to describe the low rank and sparsity of the sequential PCT images, respectively. Subsequently, an improved split Bregman method was adopted to minimize the associative objective function with a reasonable convergence rate. Both qualitative and quantitative studies were conducted using a digital phantom and clinical cerebral PCT datasets to evaluate the present method. Experimental results show that the presented method can achieve images with several noticeable advantages over the existing methods in terms of noise reduction and universal quality index. More importantly, the present method can produce more accurate kinetic enhanced details and diagnostic hemodynamic parameter maps. PMID:27440948

  6. Effect of resistance training and hypocaloric diets with different protein content on body composition and lipid profile in hypercholesterolemic obese women.

    PubMed

    García-Unciti, M; Martinez, J A; Izquierdo, M; Gorostiaga, E M; Grijalba, A; Ibañez, J

    2012-01-01

    Lifestyle changes such as following a hypocaloric diet and regular physical exercise are recognized as effective non-pharmacological interventions to reduce body fat mass and prevent cardiovascular disease risk factors. To evaluate the interactions of a higher protein (HP) vs. a lower protein (LP) diet with or without a concomitant progressive resistance training program (RT) on body composition and lipoprotein profile in hypercholesterolemic obese women. Retrospective study derived from a 16-week randomized controlled-intervention clinical trial. Twenty five sedentary, obese (BMI: 30-40 kg/m²) women, aged 40-60 with hypercholesterolemia were assigned to a 4-arm trial using a 2 x 2 factorial design (Diet x Exercise). Prescribed diets had the same calorie restriction (-500 kcal/day), and were categorized according to protein content as: lower protein (< 22% daily energy intake, LP) vs. higher protein (> 22% daily energy intake, HP). Exercise comparisons involved habitual activity (control) vs. a 16-week supervised whole-body resistance training program (RT), two sessions/wk. A significant decrease in weight and waist circumference was observed in all groups. A significant decrease in LDL-C and Total-Cholesterol levels was observed only when a LP diet was combined with a RT program, the RT being the most determining factor. Interestingly, an interaction between diet and exercise was found concerning LDL-C values. In this study, resistance training plays a key role in improving LDL-C and Total-Cholesterol; however, a lower protein intake (< 22% of daily energy intake as proteins) was found to achieve a significantly greater reduction in LDL-C.

  7. Travel time tomography with local image regularization by sparsity constrained dictionary learning

    NASA Astrophysics Data System (ADS)

    Bianco, M.; Gerstoft, P.

    2017-12-01

    We propose a regularization approach for 2D seismic travel time tomography which models small rectangular groups of slowness pixels, within an overall or `global' slowness image, as sparse linear combinations of atoms from a dictionary. The groups of slowness pixels are referred to as patches and a dictionary corresponds to a collection of functions or `atoms' describing the slowness in each patch. These functions could for example be wavelets. The patch regularization is incorporated into the global slowness image. The global image models the broad features, while the local patch images incorporate prior information from the dictionary. Further, high resolution slowness within patches is permitted if the travel times from the global estimates support it. The proposed approach is formulated as an algorithm, which is repeated until convergence is achieved: 1) From travel times, find the global slowness image with a minimum energy constraint on the pixel variance relative to a reference. 2) Find the patch level solutions to fit the global estimate as a sparse linear combination of dictionary atoms. 3) Update the reference as the weighted average of the patch level solutions. This approach relies on the redundancy of the patches in the seismic image. Redundancy means that the patches are repetitions of a finite number of patterns, which are described by the dictionary atoms. Redundancy in the earth's structure was demonstrated in previous works in seismics where dictionaries of wavelet functions regularized inversion. We further exploit redundancy of the patches by using dictionary learning algorithms, a form of unsupervised machine learning, to estimate optimal dictionaries from the data in parallel with the inversion. We demonstrate our approach on densely, but irregularly sampled synthetic seismic images.
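
    A toy sketch of the patch/dictionary idea described above using scikit-learn, purely illustrative: learn a dictionary from small patches of a current global slowness image and re-express each patch as a sparse combination of the learned atoms. The actual algorithm couples this step with the travel-time inversion and the weighted-average reference update, which are not reproduced here, and all parameter values are placeholders.

        import numpy as np
        from sklearn.decomposition import MiniBatchDictionaryLearning
        from sklearn.feature_extraction.image import extract_patches_2d, reconstruct_from_patches_2d

        def patch_regularize(slowness, patch_size=(8, 8), n_atoms=64, n_nonzero=3):
            """Regularize a slowness image by sparse coding of its patches."""
            patches = extract_patches_2d(slowness, patch_size)
            X = patches.reshape(patches.shape[0], -1)
            X_mean = X.mean(axis=1, keepdims=True)
            dico = MiniBatchDictionaryLearning(n_components=n_atoms,
                                               transform_algorithm="omp",
                                               transform_n_nonzero_coefs=n_nonzero)
            codes = dico.fit(X - X_mean).transform(X - X_mean)    # sparse code per patch
            X_hat = codes @ dico.components_ + X_mean             # sparse patch reconstructions
            return reconstruct_from_patches_2d(X_hat.reshape(patches.shape), slowness.shape)

        # reference = patch_regularize(global_slowness_estimate)   # feeds the next global inversion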

  8. A biological network-based regularized artificial neural network model for robust phenotype prediction from gene expression data.

    PubMed

    Kang, Tianyu; Ding, Wei; Zhang, Luoyan; Ziemek, Daniel; Zarringhalam, Kourosh

    2017-12-19

    Stratification of patient subpopulations that respond favorably to treatment or experience an adverse reaction is an essential step toward development of new personalized therapies and diagnostics. It is currently feasible to generate omic-scale biological measurements for all patients in a study, providing an opportunity for machine learning models to identify molecular markers for disease diagnosis and progression. However, the high variability of genetic background in human populations hampers the reproducibility of omic-scale markers. In this paper, we develop a biological network-based regularized artificial neural network model for prediction of phenotype from transcriptomic measurements in clinical trials. To improve model sparsity and the overall reproducibility of the model, we incorporate regularization for simultaneous shrinkage of gene sets based on active upstream regulatory mechanisms into the model. We benchmark our method against various regression, support vector machine, and artificial neural network models and demonstrate the ability of our method to predict the clinical outcomes using clinical trial data on acute rejection in kidney transplantation and response to Infliximab in ulcerative colitis. We show that integration of prior biological knowledge into the classification, as developed in this paper, significantly improves the robustness and generalizability of predictions to independent datasets. We provide Java code for our algorithm along with a parsed version of the STRING DB database. In summary, we present a method for prediction of clinical phenotypes using baseline genome-wide expression data that makes use of prior biological knowledge on gene-regulatory interactions in order to increase robustness and reproducibility of omic-scale markers. The integrated group-wise regularization method increases the interpretability of biological signatures and gives stable performance estimates across independent test sets.
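
    A minimal sketch of the kind of group-wise shrinkage described above, assuming a proximal (block soft-thresholding) step that shrinks whole groups of first-layer weights at once; the group structure, sizes, and the surrounding network training loop are hypothetical and not taken from the paper.

        # Hedged sketch: block soft-thresholding of groups of gene weights, of the kind
        # used to tie genes to shared upstream regulators. Groups and sizes are toy values.
        import numpy as np

        def group_soft_threshold(W, groups, lam):
            """W: (n_genes, n_hidden) weights; groups: list of index arrays over genes."""
            W = W.copy()
            for g in groups:
                norm = np.linalg.norm(W[g])
                scale = max(0.0, 1.0 - lam / norm) if norm > 0 else 0.0
                W[g] *= scale                      # shrink the whole group together
            return W

        rng = np.random.default_rng(8)
        W = rng.normal(size=(12, 4))
        groups = [np.arange(0, 4), np.arange(4, 8), np.arange(8, 12)]   # 3 hypothetical regulator groups
        W_shrunk = group_soft_threshold(W, groups, lam=2.0)
        print([round(float(np.linalg.norm(W_shrunk[g])), 3) for g in groups])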

  9. Low-dose CT reconstruction via L1 dictionary learning regularization using iteratively reweighted least-squares.

    PubMed

    Zhang, Cheng; Zhang, Tao; Li, Ming; Peng, Chengtao; Liu, Zhaobang; Zheng, Jian

    2016-06-18

    In order to reduce the radiation dose of CT (computed tomography), compressed sensing theory has been a hot topic since it provides the possibility of high-quality recovery from sparsely sampled data. Recently, an algorithm based on DL (dictionary learning) was developed to deal with the sparse CT reconstruction problem. However, the existing DL algorithm focuses on the minimization problem with the L2-norm regularization term, which causes reconstruction quality to deteriorate as the sampling rate declines further. Therefore, it is essential to improve the DL method to meet the demand for further dose reduction. In this paper, we replaced the L2-norm regularization term with the L1-norm one. It is expected that the proposed L1-DL method could alleviate the over-smoothing effect of the L2-minimization and preserve more image details. The proposed algorithm solves the L1-minimization problem via a weighting strategy, i.e., by solving a sequence of weighted L2-minimization problems based on IRLS (iteratively reweighted least squares). Through numerical simulation, the proposed algorithm is compared with the existing DL method (adaptive dictionary based statistical iterative reconstruction, ADSIR) and two other typical compressed sensing algorithms. It is revealed that the proposed algorithm is more accurate than the other algorithms, especially when the sampling rate is further reduced or the noise is increased. The proposed L1-DL algorithm can utilize more prior information of image sparsity than ADSIR. By replacing the L2-norm regularization term of ADSIR with the L1-norm one and solving the L1-minimization problem with the IRLS strategy, L1-DL can reconstruct the image more accurately.
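
    The weighting strategy described above can be illustrated with a small IRLS sketch for a generic L1-regularized least-squares problem; the toy matrix below stands in for the CT system and dictionary operators, which are not modeled here, and the parameter values are arbitrary.

        # Hedged IRLS sketch for  min_x ||A x - b||_2^2 + lam * ||x||_1,
        # replacing |x_i| with x_i^2 / (|x_i| + eps) and solving the resulting
        # weighted L2 problem at each iteration. Toy stand-in, not the authors' code.
        import numpy as np

        def irls_l1(A, b, lam=0.1, n_iter=50, eps=1e-6):
            x = np.linalg.lstsq(A, b, rcond=None)[0]      # least-norm starting point
            for _ in range(n_iter):
                w = 1.0 / (np.abs(x) + eps)               # reweighting from current x
                # normal equations of the weighted L2 subproblem
                x = np.linalg.solve(A.T @ A + lam * np.diag(w), A.T @ b)
            return x

        rng = np.random.default_rng(1)
        A = rng.normal(size=(60, 120))
        x_true = np.zeros(120)
        x_true[rng.choice(120, 8, replace=False)] = rng.normal(size=8)
        b = A @ x_true + 0.01 * rng.normal(size=60)
        x_hat = irls_l1(A, b)
        print("coefficients above 0.1 in magnitude:", int(np.sum(np.abs(x_hat) > 0.1)))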

  10. Hyperspectral Image Classification via Kernel Sparse Representation

    DTIC Science & Technology

    2013-01-01

    The spatial coherency across neighboring pixels is incorporated through a kernelized joint sparsity model, where all of the pixels within a small neighborhood are jointly represented in the feature space by selecting a few common training samples. Index terms: hyperspectral imagery, joint sparsity model, kernel methods, sparse representation.

  11. Optimization-based image reconstruction in x-ray computed tomography by sparsity exploitation of local continuity and nonlocal spatial self-similarity

    NASA Astrophysics Data System (ADS)

    Han-Ming, Zhang; Lin-Yuan, Wang; Lei, Li; Bin, Yan; Ai-Long, Cai; Guo-En, Hu

    2016-07-01

    The additional sparse prior of images has been the subject of much research in problems of sparse-view computed tomography (CT) reconstruction. A method employing the image gradient sparsity is often used to reduce the sampling rate and is shown to remove the unwanted artifacts while preserving sharp edges, but may cause blocky or patchy artifacts. To eliminate this drawback, we propose a novel sparsity exploitation-based model for CT image reconstruction. In the presented model, the sparse representation and sparsity exploitation of both gradient and nonlocal gradient are investigated. The new model is shown to offer the potential for better results by introducing similarity prior information about the image structure. Then, an effective alternating direction minimization algorithm is developed to optimize the objective function with a robust convergence result. Qualitative and quantitative evaluations have been carried out on both simulated and real data in terms of accuracy and resolution properties. The results indicate that the proposed method achieves better image quality with the theoretically expected preservation of detailed features. Project supported by the National Natural Science Foundation of China (Grant No. 61372172).

  12. SU-G-IeP1-13: Sub-Nyquist Dynamic MRI Via Prior Rank, Intensity and Sparsity Model (PRISM)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jiang, B; Gao, H

    Purpose: Accelerated dynamic MRI is important for MRI guided radiotherapy. Inspired by compressive sensing (CS), sub-Nyquist dynamic MRI has been an active research area, i.e., sparse sampling in k-t space for accelerated dynamic MRI. This work is to investigate sub-Nyquist dynamic MRI via a previously developed CS model, namely the Prior Rank, Intensity and Sparsity Model (PRISM). Methods: The proposed method utilizes PRISM with rank minimization and incoherent sampling patterns for sub-Nyquist reconstruction. In PRISM, the low-rank background image, which is automatically calculated by rank minimization, is excluded from the L1 minimization step of the CS reconstruction to further sparsify the residual image, thus allowing for higher acceleration rates. Furthermore, the sampling pattern in k-t space is made more incoherent by sampling a different set of k-space points at different temporal frames. Results: Reconstruction results from the L1-sparsity method and the PRISM method with 30% undersampled data and 15% undersampled data are compared to demonstrate the power of PRISM for dynamic MRI. Conclusion: A sub-Nyquist MRI reconstruction method based on PRISM is developed with improved image quality over the L1-sparsity method.
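
    A toy sketch of the rank/sparsity split behind PRISM-like models: the dynamic series (frames as columns) is decomposed into a low-rank background plus a sparse residual by alternating singular-value thresholding with soft thresholding. Fully sampled data and the threshold values below are simplifying assumptions; the actual method also handles k-t undersampling and incoherent sampling patterns.

        # Hedged low-rank + sparse split in the spirit of PRISM-like models; this is a
        # heuristic alternating scheme on fully sampled data, not the authors' algorithm.
        import numpy as np

        def soft(x, t):
            return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

        def low_rank_plus_sparse(M, tau_rank=1.0, tau_sparse=0.05, n_iter=50):
            L = np.zeros_like(M)
            S = np.zeros_like(M)
            for _ in range(n_iter):
                U, s, Vt = np.linalg.svd(M - S, full_matrices=False)
                L = (U * soft(s, tau_rank)) @ Vt          # singular-value thresholding
                S = soft(M - L, tau_sparse)               # sparse residual
            return L, S

        rng = np.random.default_rng(2)
        background = np.outer(rng.normal(size=64), np.ones(20))   # static, rank-1 "anatomy"
        dynamic = np.zeros((64, 20)); dynamic[30:34, 10:] = 1.0   # localized temporal change
        L, S = low_rank_plus_sparse(background + dynamic)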

  13. Joint seismic data denoising and interpolation with double-sparsity dictionary learning

    NASA Astrophysics Data System (ADS)

    Zhu, Lingchen; Liu, Entao; McClellan, James H.

    2017-08-01

    Seismic data quality is vital to geophysical applications, so that methods of data recovery, including denoising and interpolation, are common initial steps in the seismic data processing flow. We present a method to perform simultaneous interpolation and denoising, which is based on double-sparsity dictionary learning. This extends previous work that was for denoising only. The original double-sparsity dictionary learning algorithm is modified to track the traces with missing data by defining a masking operator that is integrated into the sparse representation of the dictionary. A weighted low-rank approximation algorithm is adopted to handle the dictionary updating as a sparse recovery optimization problem constrained by the masking operator. Compared to traditional sparse transforms with fixed dictionaries that lack the ability to adapt to complex data structures, the double-sparsity dictionary learning method learns the signal adaptively from selected patches of the corrupted seismic data, while preserving compact forward and inverse transform operators. Numerical experiments on synthetic seismic data indicate that this new method preserves more subtle features in the data set without introducing pseudo-Gibbs artifacts when compared to other directional multi-scale transform methods such as curvelets.

  14. Source sparsity control of sound field reproduction using the elastic-net and the lasso minimizers.

    PubMed

    Gauthier, P-A; Lecomte, P; Berry, A

    2017-04-01

    Sound field reproduction is aimed at the reconstruction of a sound pressure field in an extended area using dense loudspeaker arrays. In some circumstances, sound field reproduction is targeted at the reproduction of a sound field captured using microphone arrays. Although methods and algorithms already exist to convert microphone array recordings to loudspeaker array signals, one remaining research question is how to control the spatial sparsity in the resulting loudspeaker array signals and what would be the resulting practical advantages. Sparsity is an interesting feature for spatial audio since it can drastically reduce the number of concurrently active reproduction sources and, therefore, increase the spatial contrast of the solution at the expense of a difference between the target and reproduced sound fields. In this paper, the application of the elastic-net cost function to sound field reproduction is compared to the lasso cost function. It is shown that the elastic-net can induce solution sparsity and overcomes limitations of the lasso: The elastic-net solves the non-uniqueness of the lasso solution, induces source clustering in the sparse solution, and provides a smoother solution within the activated source clusters.
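
    A small, real-valued sketch of the lasso vs. elastic-net comparison, assuming a hypothetical transfer matrix G from loudspeakers to control points; real reproduction problems are complex-valued and frequency dependent, so this only illustrates how the additional l2 term of the elastic net tends to activate clusters of correlated sources rather than a single one.

        # Hedged toy comparison of lasso vs. elastic-net source-amplitude solutions for a
        # hypothetical linear reproduction problem  p_target ~ G q.
        import numpy as np
        from sklearn.linear_model import Lasso, ElasticNet

        rng = np.random.default_rng(3)
        G = rng.normal(size=(40, 60))
        G[:, 31] = G[:, 30] + 0.01 * rng.normal(size=40)   # two nearly identical sources
        q_true = np.zeros(60); q_true[30] = 1.0
        p = G @ q_true

        lasso = Lasso(alpha=0.01, max_iter=50000).fit(G, p)
        enet = ElasticNet(alpha=0.01, l1_ratio=0.5, max_iter=50000).fit(G, p)
        print("lasso active sources:", np.flatnonzero(np.abs(lasso.coef_) > 1e-3))
        print("enet  active sources:", np.flatnonzero(np.abs(enet.coef_) > 1e-3))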

  15. HIGH DIMENSIONAL COVARIANCE MATRIX ESTIMATION IN APPROXIMATE FACTOR MODELS

    PubMed Central

    Fan, Jianqing; Liao, Yuan; Mincheva, Martina

    2012-01-01

    The variance covariance matrix plays a central role in the inferential theories of high dimensional factor models in finance and economics. Popular regularization methods that directly exploit sparsity are not applicable to many financial problems. Classical methods of estimating the covariance matrices are based on strict factor models, assuming independent idiosyncratic components. This assumption, however, is restrictive in practical applications. By assuming a sparse error covariance matrix, we allow the presence of cross-sectional correlation even after taking out common factors, which enables us to combine the merits of both methods. We estimate the sparse covariance using the adaptive thresholding technique as in Cai and Liu (2011), taking into account the fact that direct observations of the idiosyncratic components are unavailable. The impact of high dimensionality on the covariance matrix estimation based on the factor structure is then studied. PMID:22661790
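
    A compact sketch of the two-step idea (remove common factors, then threshold the residual covariance), assuming principal-component factors and a single universal hard threshold in place of the entry-adaptive thresholds used in the paper.

        # Hedged sketch: PCA "factors" taken out of the sample covariance, then hard
        # thresholding of the residual (idiosyncratic) covariance. Toy simplification.
        import numpy as np

        def factor_thresholded_cov(X, n_factors=2, thresh=0.1):
            """X: (n_obs, p) data matrix; returns a regularized covariance estimate."""
            Xc = X - X.mean(axis=0)
            S = np.cov(Xc, rowvar=False)
            vals, vecs = np.linalg.eigh(S)                 # eigenvalues in ascending order
            top = vecs[:, -n_factors:]
            lam = vals[-n_factors:]
            low_rank = (top * lam) @ top.T                 # common-factor part of S
            resid = S - low_rank
            resid_t = np.where(np.abs(resid) >= thresh, resid, 0.0)
            np.fill_diagonal(resid_t, np.diag(resid))      # never threshold the variances
            return low_rank + resid_t

        rng = np.random.default_rng(4)
        X = rng.normal(size=(200, 50))
        Sigma_hat = factor_thresholded_cov(X)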

  16. Sparse matrix-vector multiplication on network-on-chip

    NASA Astrophysics Data System (ADS)

    Sun, C.-C.; Götze, J.; Jheng, H.-Y.; Ruan, S.-J.

    2010-12-01

    In this paper, we present an idea for performing matrix-vector multiplication by using Network-on-Chip (NoC) architecture. In traditional IC design, on-chip communication has been implemented with dedicated point-to-point interconnections. Therefore, regular local data transfer is the major concept of many parallel implementations. However, when dealing with the parallel implementation of sparse matrix-vector multiplication (SMVM), which is the main step of all iterative algorithms for solving systems of linear equations, the required data transfers depend on the sparsity structure of the matrix and can be extremely irregular. Using the NoC architecture makes it possible to deal with the arbitrary structure of the data transfers, i.e., with the irregular structure of the sparse matrices. So far, we have implemented the proposed SMVM-NoC architecture in 4×4 and 5×5 sizes with IEEE 754 single-precision floating point on an FPGA.
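
    For reference, a plain CSR sparse matrix-vector product; the row-dependent, irregular access pattern over the column indices is what motivates a NoC rather than fixed point-to-point links. The hardware mapping itself is not modeled here, and the matrix sizes are arbitrary.

        # Hedged reference implementation of y = A x for a CSR-stored sparse matrix.
        import numpy as np
        from scipy.sparse import random as sparse_random

        def csr_matvec(data, indices, indptr, x):
            y = np.zeros(len(indptr) - 1)
            for row in range(len(y)):
                for k in range(indptr[row], indptr[row + 1]):
                    y[row] += data[k] * x[indices[k]]      # irregular gather on x
            return y

        A = sparse_random(5, 5, density=0.3, format="csr", random_state=0)
        x = np.arange(5, dtype=float)
        assert np.allclose(csr_matvec(A.data, A.indices, A.indptr, x), A @ x)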

  17. Compressed Sensing in On-Grid MIMO Radar.

    PubMed

    Minner, Michael F

    2015-01-01

    The accurate detection of targets is a significant problem in multiple-input multiple-output (MIMO) radar. Recent advances in Compressive Sensing offer a means of efficiently accomplishing this task. The sparsity constraints needed to apply the techniques of Compressive Sensing to problems in radar systems have led to discretizations of the target scene in various domains, such as azimuth, time delay, and Doppler. Building upon recent work, we investigate the feasibility of on-grid Compressive Sensing-based MIMO radar via a threefold azimuth-delay-Doppler discretization for target detection and parameter estimation. We utilize a colocated random sensor array and transmit distinct linear chirps to a small scene with few, slowly moving targets. Relying upon standard far-field and narrowband assumptions, we analyze the efficacy of various recovery algorithms in determining the parameters of the scene through numerical simulations, with particular focus on the ℓ1-squared Nonnegative Regularization method.

  18. Supervised Learning for Dynamical System Learning.

    PubMed

    Hefny, Ahmed; Downey, Carlton; Gordon, Geoffrey J

    2015-01-01

    Recently there has been substantial interest in spectral methods for learning dynamical systems. These methods are popular since they often offer a good tradeoff between computational and statistical efficiency. Unfortunately, they can be difficult to use and extend in practice: e.g., they can make it difficult to incorporate prior information such as sparsity or structure. To address this problem, we present a new view of dynamical system learning: we show how to learn dynamical systems by solving a sequence of ordinary supervised learning problems, thereby allowing users to incorporate prior knowledge via standard techniques such as L1 regularization. Many existing spectral methods are special cases of this new framework, using linear regression as the supervised learner. We demonstrate the effectiveness of our framework by showing examples where nonlinear regression or lasso lets us learn better state representations than plain linear regression does; the correctness of these instances follows directly from our general analysis.
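
    A tiny illustration of the reduction to supervised learning, assuming a toy AR(2) series and a lasso learner that regresses a future observation on a window of past observations; the window lengths and regularization weight are arbitrary choices, not the paper's setup.

        # Hedged sketch: learn a dynamical system by ordinary supervised regression of
        # future observations on past observations, with an L1-regularized learner.
        import numpy as np
        from sklearn.linear_model import Lasso

        rng = np.random.default_rng(5)
        T = 500
        x = np.zeros(T)
        for t in range(2, T):                         # a simple AR(2) process
            x[t] = 1.5 * x[t - 1] - 0.7 * x[t - 2] + 0.1 * rng.normal()

        past_len, future_len = 5, 1
        past = np.stack([x[t - past_len:t] for t in range(past_len, T - future_len)])
        future = np.stack([x[t:t + future_len] for t in range(past_len, T - future_len)]).ravel()

        model = Lasso(alpha=1e-3, max_iter=10000).fit(past, future)   # sparse predictive map
        print("learned coefficients:", np.round(model.coef_, 3))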

  19. Iterative feature refinement for accurate undersampled MR image reconstruction

    NASA Astrophysics Data System (ADS)

    Wang, Shanshan; Liu, Jianbo; Liu, Qiegen; Ying, Leslie; Liu, Xin; Zheng, Hairong; Liang, Dong

    2016-05-01

    Accelerating MR scanning is of great significance for clinical, research, and advanced applications, and one main effort toward this goal is the utilization of compressed sensing (CS) theory. Nevertheless, the existing CSMRI approaches still have limitations such as fine structure loss or high computational complexity. This paper proposes a novel iterative feature refinement (IFR) module for accurate MR image reconstruction from undersampled K-space data. Integrating IFR with CSMRI equipped with fixed transforms, we develop an IFR-CS method to restore meaningful structures and details that would otherwise be discarded, without introducing too much additional complexity. Specifically, the proposed IFR-CS is realized with three iterative steps, namely sparsity-promoting denoising, feature refinement, and Tikhonov regularization. Experimental results on both simulated and in vivo MR datasets have shown that the proposed module has a strong capability to capture image details, and that IFR-CS is comparable to, and even superior to, other state-of-the-art reconstruction approaches.

  20. Elevated serum β₂-GPI-Lp(a) complexes levels in children with nephrotic syndrome.

    PubMed

    Zhang, Chunni; Luo, Yang; Huang, Zhongwei; Xia, Zhengkun; Cai, Xiaoyi; Yang, Yuhua; Niu, Dongmei; Wang, Junjun

    2012-10-09

    The complexes of β₂-glycoprotein I (β₂-GPI) with lipoprotein(a) [β₂-GPI-Lp(a)] exist in human circulation and are increased in serum from patients with some autoimmune diseases. This study aims to investigate the concentration of β₂-GPI-Lp(a) in serum of children with idiopathic nephrotic syndrome (NS) and its relationship with serum lipids, oxidized lipoprotein and renal function parameters to explore the potential of the complexes as an additional marker for evaluating pediatric NS. Serum concentrations of β₂-GPI-Lp(a) complexes and oxidized Lp(a) [ox-Lp(a)] were measured by "Sandwich" ELISAs in 80 NS children and 82 age/sex-matched healthy controls. The levels of serum lipids and kidney parameters were also determined. Multivariate logistic regression analysis was performed to identify correlates of β₂-GPI-Lp(a) and NS. The serum concentrations of β₂-GPI-Lp(a) complexes in children with NS were significantly higher than those in controls (median 0.95 U/ml vs 0.28 U/ml, P<0.0001). Ox-Lp(a) levels were also markedly elevated (median 14.55 mg/l vs 2.60 mg/l, P<0.0001) in NS children. The concentrations of β₂-GPI-Lp(a) were positively correlated with ox-Lp(a) (r=0.246, P=0.028), but not with Lp(a) level, and the concentrations of ox-Lp(a) were positively related to Lp(a) (r=0.301, P=0.007) in NS children. Multivariate logistic regression analysis identified a positive association between NS and β₂-GPI-Lp(a) (OR=13.694, 95% CI 6.400-29.299, P<0.0001), after adjusting for kidney function parameters, serum lipids and ox-Lp(a). Elevated β₂-GPI-Lp(a) level was an independent and significant risk factor for pediatric NS, and enhanced Lp(a) oxidation partly contributes to the formation of β₂-GPI-Lp(a) complexes. Copyright © 2012 Elsevier B.V. All rights reserved.

  1. Reference values for Lactate Pro 2™ in fetal blood sampling during labor: a cross-sectional study.

    PubMed

    Birgisdottir, Brynhildur Tinna; Holzmann, Malin; Varli, Ingela Hulthén; Graner, Sofie; Saltvedt, Sissel; Nordström, Lennart

    2017-04-01

    Lactate Pro™ (LP1) is the only lactate meter evaluated for fetal scalp blood sampling (FBS) in intrapartum use. The reference values for this meter are: normal value <4.2 mmol/L, preacidemia 4.2-4.8 mmol/L, and acidemia >4.8 mmol/L. The production of this meter has been discontinued. An updated version, Lactate Pro 2™ (LP2), has been launched and is shown to be differently calibrated. The aims of the study were to retrieve a conversion equation to convert lactate values in FBS measured with LP2 to an estimated value if using LP1 and to define reference values for clinical management when using LP2. A cross-sectional study was conducted at a university hospital in Sweden. A total of 113 laboring women with fetal heart rate abnormalities on cardiotocography (CTG) had FBS carried out. Lactate concentration was measured bedside with both LP1 and LP2 from the same blood sample capillary. A linear regression model was constructed to retrieve a conversion equation to convert LP2 values to LP1 values. LP2 measured higher values than LP1 in all analyses. We found that 4.2 mmol/L with LP1 corresponded to 6.4 mmol/L with LP2. Likewise, 4.8 mmol/L with LP1 corresponded to 7.3 mmol/L with LP2. The correlation between the analyses was excellent (Spearman's rank correlation, r=0.97). We recommend the following guidelines when interpreting lactate concentration in FBS with LP2: <6.4 mmol/L to be interpreted as normal, 6.4-7.3 mmol/L as preacidemia indicating a follow-up FBS within 20-30 min, and >7.3 mmol/L as acidemia indicating intervention.
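
    How a conversion equation of this kind can be obtained in practice: ordinary least squares on paired readings, then mapping the fixed LP1 cut-offs through the fitted line. The paired values below are synthetic placeholders, not the study's measurements, and the fitted slope and intercept are purely illustrative.

        # Hedged sketch of deriving a device conversion equation LP1 ~ a*LP2 + b and
        # translating existing LP1 cut-offs into LP2 equivalents. Synthetic data only.
        import numpy as np

        rng = np.random.default_rng(6)
        lp2 = rng.uniform(2.0, 10.0, size=113)                  # simulated LP2 readings
        lp1 = 0.65 * lp2 + 0.05 + 0.1 * rng.normal(size=113)    # hypothetical relation + noise

        a, b = np.polyfit(lp2, lp1, deg=1)                      # LP1 ~ a*LP2 + b
        for lp1_cutoff in (4.2, 4.8):
            lp2_equiv = (lp1_cutoff - b) / a
            print(f"LP1 {lp1_cutoff} mmol/L corresponds to LP2 ~ {lp2_equiv:.1f} mmol/L")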

  2. Sparsity-Aware DOA Estimation Scheme for Noncircular Source in MIMO Radar

    PubMed Central

    Wang, Xianpeng; Wang, Wei; Li, Xin; Liu, Qi; Liu, Jing

    2016-01-01

    In this paper, a novel sparsity-aware direction of arrival (DOA) estimation scheme for a noncircular source is proposed in multiple-input multiple-output (MIMO) radar. In the proposed method, the reduced-dimensional transformation technique is adopted to eliminate the redundant elements. Then, exploiting the noncircularity of signals, a joint sparsity-aware scheme based on the reweighted l1 norm penalty is formulated for DOA estimation, in which the diagonal elements of the weight matrix are the coefficients of the noncircular MUSIC-like (NC MUSIC-like) spectrum. Compared to the existing l1 norm penalty-based methods, the proposed scheme provides higher angular resolution and better DOA estimation performance. Results from numerical experiments are used to show the effectiveness of our proposed method. PMID:27089345

  3. Translation of incremental talk test responses to steady-state exercise training intensity.

    PubMed

    Lyon, Ellen; Menke, Miranda; Foster, Carl; Porcari, John P; Gibson, Mark; Bubbers, Terresa

    2014-01-01

    The Talk Test (TT) is a submaximal, incremental exercise test that has been shown to be useful in prescribing exercise training intensity. It is based on a subject's ability to speak comfortably during exercise. This study defined the amount of reduction in absolute workload intensity from an incremental exercise test using the TT to give appropriate absolute training intensity for cardiac rehabilitation patients. Patients in an outpatient rehabilitation program (N = 30) performed an incremental exercise test with the TT given at every 2-minute stage. Patients rated their speech comfort after reciting a standardized paragraph. Anything other than a "yes" response was considered the "equivocal" stage, while all preceding stages were "positive" stages. The last stage with the unequivocally positive ability to speak was the Last Positive (LP), and the preceding stages were LP-1 and LP-2. Subsequently, three 20-minute steady-state training bouts were performed in random order at the absolute workload associated with the LP, LP-1, and LP-2 stages of the incremental test. Speech comfort, heart rate (HR), and rating of perceived exertion (RPE) were recorded every 5 minutes. The 20-minute exercise training bout was completed fully by LP (n = 19), LP-1 (n = 28), and LP-2 (n = 30). Heart rate, RPE, and speech comfort were similar through the LP-1 and LP-2 tests, but the LP stage was markedly more difficult. Steady-state exercise training intensity was easily and appropriately prescribed at the intensity associated with the LP-1 and LP-2 stages of the TT. The LP stage may be too difficult for patients in a cardiac rehabilitation program.

  4. 75 FR 39680 - Houston Pipe Line Company LP, Worsham-Steed Gas Storage, L.P., Energy Transfer Fuel, LP, Mid...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-07-12

    ... Company LP, Worsham-Steed Gas Storage, L.P., Energy Transfer Fuel, LP, Mid Continent Market Center, L.L.C... DEPARTMENT OF ENERGY Federal Energy Regulatory Commission [Docket No. PR10-44-000; Docket No. PR10... the protest or intervention to the Federal Energy Regulatory Commission, 888 First Street, NE...

  5. Prevalence of Oral, Skin, and Oral and Skin Lesions of Lichen Planus in Patients Visiting a Dental School in Southern India

    PubMed Central

    Omal, PM; Jacob, Vimal; Prathap, Akhilesh; Thomas, Nebu George

    2012-01-01

    Background: Lichen planus (LP) is a mucocutaneous disease that is relatively common among the adult population. LP can present as skin and oral lesions. This study highlights the prevalence of oral, skin, and oral and skin lesions of LP. Aims: The aim of this study was to evaluate the prevalence of oral, skin, and oral and skin lesions of LP from a population of patients attending the Department of Oral Medicine and Radiodiagnosis, Pushpagiri College of Dental Sciences, Tiruvalla, Kerala, India. Materials and Methods: A cross-sectional study was designed to evaluate the prevalence of oral, skin, and oral and skin lesions of LP. This is an ongoing prospective study, with results from 2 years being reported. LP was diagnosed on the basis of clinical presentation and histopathological analysis of mucosal and skin biopsies done for all patients suspected of having LP. Statistical analysis was carried out using SPSS (Statistical Package for the Social Sciences) software version 14. To test the statistical significance, the chi-square test was used. Results: Out of 18,306 patients screened, 8,040 were males and 10,266 females. LP was seen in 118 cases (0.64%). Increased prevalence of LP was observed in middle-aged adults (40–60 years age group), with the lowest age being 12 years and the highest 65 years. No statistically significant differences were observed between the genders in the skin LP group (P=0.12) or in the oral and skin LP group (P=0.06); however, a strong female predilection was seen in the oral LP group (P=0.000036). The prevalence of cutaneous LP in oral LP patients was 0.06%. Conclusion: This study showed a higher prevalence of oral LP than of skin LP or of combined oral and skin LP, with a female predominance. PMID:22615505

  6. 77 FR 58371 - Allegheny Hydro No. 8, L.P., Allegheny Hydro No. 9, L.P., and U.S. Bank National Association...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-09-20

    ... DEPARTMENT OF ENERGY Federal Energy Regulatory Commission [Project No. 3021-088] Allegheny Hydro No. 8, L.P., Allegheny Hydro No. 9, L.P., and U.S. Bank National Association Allegheny Hydro, LLC... 31, 2012, Allegheny Hydro No. 8, L.P., Allegheny Hydro No. 9, L.P., and U.S. Bank National...

  7. MO-FG-204-06: A New Algorithm for Gold Nano-Particle Concentration Identification in Dual Energy CT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, L; Shen, C; Ng, M

    Purpose: Gold nano-particle (GNP) has recently attracted a lot of attention due to its potential as an imaging contrast agent and radiotherapy sensitiser. Imaging the GNP at low concentration is a challenging problem. We propose a new algorithm to improve the identification of GNP based on dual energy CT (DECT). Methods: We consider three base materials: water, bone, and gold. Determining three density images from two images in DECT is an under-determined problem. We propose to solve this problem by exploring image domain sparsity via an optimization approach. The objective function contains four terms. A data-fidelity term ensures the fidelity between the identified material densities and the DECT images, while the other three terms enforce sparsity in the gradient domain of the three images corresponding to the densities of the base materials by using total variation (TV) regularization. A primal-dual algorithm is applied to solve the proposed optimization problem. We have performed simulation studies to test this model. Results: Our digital phantom in the tests contains water, bone regions and gold inserts of different sizes and densities. The gold inserts contain mixed material consisting of water at 1 g/cm3 and gold at a certain density. At a low gold density of 0.0008 g/cm3, the insert is hardly visible in DECT images, especially for the small inserts. Our algorithm is able to decompose the DECT into three density images. Those gold inserts at a low density can be clearly visualized in the density image. Conclusion: We have developed a new algorithm to decompose DECT images into three different material density images, in particular, to retrieve the density of gold. Numerical studies showed promising results.

  8. Sparsity-based acoustic inversion in cross-sectional multiscale optoacoustic imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Han, Yiyong; Tzoumas, Stratis; Nunes, Antonio

    2015-09-15

    Purpose: With recent advancement in hardware of optoacoustic imaging systems, highly detailed cross-sectional images may be acquired at a single laser shot, thus eliminating motion artifacts. Nonetheless, other sources of artifacts remain due to signal distortion or out-of-plane signals. The purpose of image reconstruction algorithms is to obtain the most accurate images from noisy, distorted projection data. Methods: In this paper, the authors use the model-based approach for acoustic inversion, combined with a sparsity-based inversion procedure. Specifically, a cost function is used that includes the L1 norm of the image in sparse representation and a total variation (TV) term. The optimization problem is solved by a numerically efficient implementation of a nonlinear gradient descent algorithm. TV–L1 model-based inversion is tested in the cross section geometry for numerically generated data as well as for in vivo experimental data from an adult mouse. Results: In all cases, model-based TV–L1 inversion showed better performance than conventional Tikhonov regularization, TV inversion, and L1 inversion. In the numerical examples, the images reconstructed with TV–L1 inversion were quantitatively more similar to the originating images. In the experimental examples, TV–L1 inversion yielded sharper images and weaker streak artifacts. Conclusions: The results herein show that TV–L1 inversion is capable of improving the quality of highly detailed, multiscale optoacoustic images obtained in vivo using cross-sectional imaging systems. As a result of its high fidelity, model-based TV–L1 inversion may be considered as the new standard for image reconstruction in cross-sectional imaging.

  9. A Simple Application of Compressed Sensing to Further Accelerate Partially Parallel Imaging

    PubMed Central

    Miao, Jun; Guo, Weihong; Narayan, Sreenath; Wilson, David L.

    2012-01-01

    Compressed Sensing (CS) and partially parallel imaging (PPI) enable fast MR imaging by reducing the amount of k-space data required for reconstruction. Past attempts to combine these two have been limited by the incoherent sampling requirement of CS, since PPI routines typically sample on a regular (coherent) grid. Here, we developed a new method, “CS+GRAPPA,” to overcome this limitation. We decomposed sets of equidistant samples into multiple random subsets. Then, we reconstructed each subset using CS and averaged the results to get a final CS k-space reconstruction. We used both a standard CS reconstruction and an edge- and joint-sparsity-guided CS reconstruction. We tested these intermediate results on both synthetic and real MR phantom data, and performed a human observer experiment to determine the effectiveness of decomposition and to optimize the number of subsets. We then used these CS reconstructions to calibrate the GRAPPA complex coil weights. In vivo parallel MR brain and heart data sets were used. An objective image quality evaluation metric, Case-PDM, was used to quantify image quality. Coherent aliasing and noise artifacts were significantly reduced using two decompositions. More decompositions further reduced coherent aliasing and noise artifacts but introduced blurring. However, the blurring was effectively minimized using our new edge- and joint-sparsity-guided CS with two decompositions. Numerical results on parallel data demonstrated that the combined method greatly improved image quality as compared to standard GRAPPA, on average halving Case-PDM scores across a range of sampling rates. The proposed technique allowed the same Case-PDM scores as standard GRAPPA, using about half the number of samples. We conclude that the new method augments GRAPPA by combining it with CS, allowing CS to work even when the k-space sampling pattern is equidistant. PMID:22902065
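
    The decomposition step can be sketched in a few lines: a regular (equidistant) set of sampled k-space lines is split into random, disjoint subsets so that each subset samples more incoherently. The CS and GRAPPA stages themselves are not reproduced here, and the grid size, acceleration factor, and number of subsets are assumptions.

        # Hedged sketch of splitting an equidistant sampling pattern into random subsets.
        import numpy as np

        rng = np.random.default_rng(7)
        n_lines, accel, n_subsets = 256, 2, 2
        sampled = np.arange(0, n_lines, accel)          # regular grid from parallel imaging
        perm = rng.permutation(sampled)
        subsets = np.array_split(perm, n_subsets)       # random, disjoint subsets
        for i, s in enumerate(subsets):
            print(f"subset {i}: {len(s)} lines, e.g. {np.sort(s)[:6]}")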

  10. High resolution P-wave velocity structure beneath Northeastern Tibet from multiscale seismic tomography

    NASA Astrophysics Data System (ADS)

    Guo, B.; Gao, X.; Chen, J.; Liu, Q.; Li, S.

    2016-12-01

    The continuing collision of the northward advancing Indian continent with Eurasia results in the high elevation and thickened crust of the Tibetan Plateau. Numerous geologic and geophysical studies have addressed the mechanics of Tibetan Plateau deformation and uplift. Many seismic experiments have been deployed in south and central Tibet, such as INDEPTH and Hi-climb, but very few in northeastern Tibet. Between 2013 and 2015, the China Seismic Array experiment operated 670 broadband seismic stations with an average station spacing of 35 km. This seismic array was located in northeastern Tibet and covered the Qilian Mountains, Qaidam Basin, and parts of Songpan-Ganzi, Gobi-Alashan, Yangzi, and Ordos. A new multiscale seismic traveltime tomography technique with sparsity constraints was used to map the upper mantle P-wave velocity structure beneath northeastern Tibet. The seismic tomography algorithm employs sparsity constraints on the wavelet representation of the velocity model via L1-norm regularization. This algorithm can efficiently deal with the unevenly sampled volume and give multiscale images of the model. Our preliminary results can be summarized as follows: 1) in the upper mantle down to 200 km, significant low-velocity anomalies exist beneath northeastern Tibet, and slight high-velocity anomalies beneath the Qaidam basin; 2) under Gobi-Alashan, Yangzi, and Ordos, high-velocity anomalies appear to extend to a depth of 250 km; this high-velocity material may correspond to the lithosphere; 3) relatively high-velocity anomalies exist at depths of 250-350 km underneath north Tibet, which suggests lithospheric delamination; 4) the strong velocity contrast between north Tibet and Yangzi and Gobi-Alashan is visible down to 200 km, which delineates the northern boundary of Tibet.

  11. Sparsity-Cognizant Algorithms with Applications to Communications, Signal Processing, and the Smart Grid

    NASA Astrophysics Data System (ADS)

    Zhu, Hao

    Sparsity plays an instrumental role in a plethora of scientific fields, including statistical inference for variable selection, parsimonious signal representations, and solving under-determined systems of linear equations, which has led to the ground-breaking result of compressive sampling (CS). This Thesis leverages exciting ideas of sparse signal reconstruction to develop sparsity-cognizant algorithms and analyze their performance. The vision is to devise tools exploiting the 'right' form of sparsity for the 'right' application domain of multiuser communication systems, array signal processing systems, and the emerging challenges in the smart power grid. Two important power system monitoring tasks are addressed first by capitalizing on the hidden sparsity. To robustify power system state estimation, a sparse outlier model is leveraged to capture the possible corruption in every datum, while the problem nonconvexity due to nonlinear measurements is handled using the semidefinite relaxation technique. Different from existing iterative methods, the proposed algorithm approximates well the global optimum regardless of the initialization. In addition, for enhanced situational awareness, a novel sparse overcomplete representation is introduced to capture (possibly multiple) line outages, and real-time algorithms are developed for solving the combinatorially complex identification problem. The proposed algorithms exhibit near-optimal performance while incurring only linear complexity in the number of lines, which makes it possible to quickly bring contingencies to attention. This Thesis also accounts for two basic issues in CS, namely fully-perturbed models and the finite alphabet property. The sparse total least-squares (S-TLS) approach is proposed to furnish CS algorithms for fully-perturbed linear models, leading to statistically optimal and computationally efficient solvers. The S-TLS framework is well motivated for grid-based sensing applications and exhibits higher accuracy than existing sparse algorithms. On the other hand, exploiting the finite alphabet of unknown signals emerges naturally in communication systems, along with sparsity coming from the low activity of each user. Compared to approaches only accounting for either one of the two, joint exploitation of both leads to statistically optimal detectors with improved error performance.

  12. Device orientation of a leadless pacemaker and subcutaneous implantable cardioverter-defibrillator in canine and human subjects and the effect on intrabody communication.

    PubMed

    Quast, Anne-Floor B E; Tjong, Fleur V Y; Koop, Brendan E; Wilde, Arthur A M; Knops, Reinoud E; Burke, Martin C

    2018-02-14

    The development of communicating modular cardiac rhythm management systems relies on effective intrabody communication between a subcutaneous implantable cardioverter-defibrillator (S-ICD) and a leadless pacemaker (LP), using conducted communication. Communication success is affected by the LP and S-ICD orientation. This study is designed to evaluate the orientation of the LP and S-ICD in canine subjects and to measure the success and threshold of intrabody communication. To gain more human insight, we also explored device orientation in LP and S-ICD patients. Canine subjects implanted with a prototype S-ICD and LP (both Boston Scientific, MA, USA) and with anterior-posterior fluoroscopy images available were included in this analysis. For comparison, a retrospective analysis of human S-ICD and LP patients was performed. The angle of the long axis of the LP relative to a vertical axis of 0°, and the distance between the coil and the LP, were measured. Twenty-three canine subjects were analysed. The median angle of the LP was 29° and the median distance of the S-ICD coil to the LP was 0.8 cm. All canine subjects had successful communication. The median communicating threshold was 2.5 V. In the human retrospective analysis, 72 LP patients and 100 S-ICD patients were included. The mean angle of the LP was 56° and the median distance between the S-ICD coil and LP was 4.6 cm. Despite the less favourable LP orientation in canine subjects, all communication attempts were successful. In the human subjects, we observed a greater and in theory more favourable LP angle towards the communication vector. These data suggest the suitability of human anatomy for conductive intrabody communication.

  13. PLC-based LP₁₁ mode rotator for mode-division multiplexing transmission.

    PubMed

    Saitoh, Kunimasa; Uematsu, Takui; Hanzawa, Nobutomo; Ishizaka, Yuhei; Masumoto, Kohei; Sakamoto, Taiji; Matsui, Takashi; Tsujikawa, Kyozo; Yamamoto, Fumihiko

    2014-08-11

    A PLC-based LP11 mode rotator is proposed. The proposed mode rotator is composed of a waveguide with a trench that provides asymmetry of the waveguide. Numerical simulations show that converting LP11a (LP11b) mode to LP11b (LP11a) mode can be achieved with high conversion efficiency (more than 90%) and little polarization dependence over a wide wavelength range from 1450 nm to 1650 nm. In addition, we fabricate the proposed LP11 mode rotator using silica-based PLC. It is confirmed that the fabricated mode rotator can convert LP11a mode to LP11b mode over a wide wavelength range.

  14. Adaptive OFDM Waveform Design for Spatio-Temporal-Sparsity Exploited STAP Radar

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sen, Satyabrata

    In this chapter, we describe a sparsity-based space-time adaptive processing (STAP) algorithm to detect a slowly moving target using an orthogonal frequency division multiplexing (OFDM) radar. The motivation for employing an OFDM signal is that it improves the target detectability against interfering signals by increasing the frequency diversity of the system. However, due to the addition of one extra dimension in terms of frequency, the adaptive degrees-of-freedom in an OFDM-STAP also increase. Therefore, to avoid constructing a fully adaptive OFDM-STAP, we develop a sparsity-based STAP algorithm. We observe that the interference spectrum is inherently sparse in the spatio-temporal domain, as the clutter responses occupy only a diagonal ridge on the spatio-temporal plane and the jammer signals interfere only from a few spatial directions. Hence, we exploit that sparsity to develop an efficient STAP technique that utilizes a considerably smaller amount of secondary data compared to other existing STAP techniques and produces nearly optimum STAP performance. In addition to designing the STAP filter, we optimally design the transmit OFDM signals by maximizing the output signal-to-interference-plus-noise ratio (SINR) in order to improve the STAP performance. The computation of output SINR depends on the estimated value of the interference covariance matrix, which we obtain by applying the sparse recovery algorithm. Therefore, we analytically assess the effects of the synthesized OFDM coefficients on the sparse recovery of the interference covariance matrix by computing the coherence measure of the sparse measurement matrix. Our numerical examples demonstrate the STAP performance achieved by the sparsity-based technique and adaptive waveform design.

  15. Sparsity-weighted outlier FLOODing (OFLOOD) method: Efficient rare event sampling method using sparsity of distribution.

    PubMed

    Harada, Ryuhei; Nakamura, Tomotake; Shigeta, Yasuteru

    2016-03-30

    As an extension of the Outlier FLOODing (OFLOOD) method [Harada et al., J. Comput. Chem. 2015, 36, 763], the sparsity of the outliers defined by a hierarchical clustering algorithm, FlexDice, was considered to achieve an efficient conformational search as sparsity-weighted "OFLOOD." In OFLOOD, FlexDice detects areas of sparse distribution as outliers. The outliers are regarded as candidates that have high potential to promote conformational transitions and are employed as initial structures for conformational resampling by restarting molecular dynamics simulations. When detecting outliers, FlexDice defines a rank in the hierarchy for each outlier, which relates to sparsity in the distribution. In this study, we define lower-rank (first-ranked), medium-rank (second-ranked), and highest-rank (third-ranked) outliers. For instance, the first-ranked outliers are located in a given conformational space away from the clusters (highly sparse distribution), whereas the third-ranked outliers are near the clusters (a moderately sparse distribution). To achieve the conformational search efficiently, resampling from the outliers with a given rank is performed. As demonstrations, this method was applied to several model systems: Alanine dipeptide, Met-enkephalin, Trp-cage, T4 lysozyme, and glutamine binding protein. In each demonstration, the present method successfully reproduced transitions among metastable states. In particular, the first-ranked OFLOOD greatly accelerated the exploration of conformational space by expanding the edges. In contrast, the third-ranked OFLOOD reproduced local transitions among neighboring metastable states intensively. For quantitative evaluation of the sampled snapshots, free energy calculations were performed in combination with umbrella sampling, providing rigorous landscapes of the biomolecules. © 2015 Wiley Periodicals, Inc.

  16. Cell Membrane Tracking in Living Brain Tissue Using Differential Interference Contrast Microscopy.

    PubMed

    Lee, John; Kolb, Ilya; Forest, Craig R; Rozell, Christopher J

    2018-04-01

    Differential interference contrast (DIC) microscopy is widely used for observing unstained biological samples that are otherwise optically transparent. Combining this optical technique with machine vision could enable the automation of many life science experiments; however, identifying relevant features under DIC is challenging. In particular, precise tracking of cell boundaries in a thick slice of tissue has not previously been accomplished. We present a novel deconvolution algorithm that achieves state-of-the-art performance at identifying and tracking these membrane locations. Our proposed algorithm is formulated as a regularized least squares optimization that incorporates a filtering mechanism to handle organic tissue interference and a robust edge-sparsity regularizer that integrates dynamic edge tracking capabilities. As a secondary contribution, this paper also describes new community infrastructure in the form of a MATLAB toolbox for accurately simulating DIC microscopy images of in vitro brain slices. Building on existing DIC optics modeling, our simulation framework additionally contributes an accurate representation of interference from organic tissue, neuronal cell-shapes, and tissue motion due to the action of the pipette. This simulator allows us to better understand the image statistics (to improve algorithms), as well as quantitatively test cell segmentation and tracking algorithms in scenarios where ground truth data are fully known.

  17. Scaling Up Graph-Based Semisupervised Learning via Prototype Vector Machines

    PubMed Central

    Zhang, Kai; Lan, Liang; Kwok, James T.; Vucetic, Slobodan; Parvin, Bahram

    2014-01-01

    When the amount of labeled data is limited, semi-supervised learning can improve the learner's performance by also using the often easily available unlabeled data. In particular, a popular approach requires the learned function to be smooth on the underlying data manifold. By approximating this manifold as a weighted graph, such graph-based techniques can often achieve state-of-the-art performance. However, their high time and space complexities make them less attractive on large data sets. In this paper, we propose to scale up graph-based semisupervised learning using a set of sparse prototypes derived from the data. These prototypes serve as a small set of data representatives, which can be used to approximate the graph-based regularizer and to control model complexity. Consequently, both training and testing become much more efficient. Moreover, when the Gaussian kernel is used to define the graph affinity, a simple and principled method to select the prototypes can be obtained. Experiments on a number of real-world data sets demonstrate encouraging performance and scaling properties of the proposed approach. It also compares favorably with models learned via ℓ1-regularization at the same level of model sparsity. These results demonstrate the efficacy of the proposed approach in producing highly parsimonious and accurate models for semisupervised learning. PMID:25720002

  18. Total variation-based method for radar coincidence imaging with model mismatch for extended target

    NASA Astrophysics Data System (ADS)

    Cao, Kaicheng; Zhou, Xiaoli; Cheng, Yongqiang; Fan, Bo; Qin, Yuliang

    2017-11-01

    Originating from traditional optical coincidence imaging, radar coincidence imaging (RCI) is a staring/forward-looking imaging technique. In RCI, the reference matrix must be computed precisely to reconstruct the image as preferred; unfortunately, such precision is almost impossible due to the existence of model mismatch in practical applications. Although some conventional sparse recovery algorithms have been proposed to solve the model-mismatch problem, they are inapplicable to nonsparse targets. We therefore sought to derive the signal model of RCI with model mismatch by replacing the sparsity constraint term with total variation (TV) regularization in the sparse total least squares optimization problem; in this manner, we obtain the objective function of RCI with model mismatch for an extended target. A more robust and efficient algorithm called TV-TLS is proposed, in which the objective function is divided into two parts and the perturbation matrix and scattering coefficients are updated alternately. Moreover, due to the ability of TV regularization to recover a sparse signal or an image with a sparse gradient, the TV-TLS method is also applicable to sparse recovery. Results of numerical experiments demonstrate that, for uniform extended targets, sparse targets, and real extended targets, the algorithm achieves good imaging performance both in suppressing noise and in adapting to model mismatch.

  19. Modeling of reverberant room responses for two-dimensional spatial sound field analysis and synthesis.

    PubMed

    Bai, Mingsian R; Li, Yi; Chiang, Yi-Hao

    2017-10-01

    A unified framework is proposed for analysis and synthesis of two-dimensional spatial sound fields in reverberant environments. In the sound field analysis (SFA) phase, an unbaffled 24-element circular microphone array is utilized to encode the sound field based on the plane-wave decomposition. Depending on the sparsity of the sound sources, the SFA stage can be implemented in two ways. For sparse-source scenarios, a one-stage algorithm based on compressive sensing is utilized. Alternatively, a two-stage algorithm can be used, where the minimum power distortionless response beamformer is used to localize the sources and a Tikhonov regularization algorithm is used to extract the source amplitudes. In the sound field synthesis (SFS) phase, a 32-element rectangular loudspeaker array is employed to decode the target sound field using a pressure matching technique. To establish the room response model, as required in the pressure matching step of the SFS phase, an SFA technique for nonsparse-source scenarios is utilized. The choice of regularization parameters is vital to the reproduced sound field. In the SFS phase, three SFS approaches are compared in terms of localization performance and voice reproduction quality. Experimental results obtained in a reverberant room are presented and reveal that an accurate room response model is vital to immersive rendering of the reproduced sound field.

  20. Hair testing in postmortem diagnosis of substance abuse: An unusual case of slow-release oral morphine abuse in an adolescent.

    PubMed

    Baillif-Couniou, Valérie; Kintz, Pascal; Sastre, Caroline; Pok, Phak-Rop Pos; Chèze, Marjorie; Pépin, Gilbert; Leonetti, Georges; Pelissier-Alicot, Anne-Laure

    2015-11-01

    Morphine sulfate misuse is essentially observed among regular heroin injectors. To our knowledge, primary addiction to morphine sulfate is exceptional, especially among young adolescents. A 13-year-old girl, with no history of addiction, was found dead with three empty blisters of Skenan(®) LP 30 mg at her side. Opiates were detected in biological fluids and hair by chromatographic methods. Blood analyses confirmed morphine overdose (free morphine: 428 ng/mL; total morphine: 584 ng/mL) and segmental hair analysis confirmed regular exposure over several months (maximum morphine concentration 250 pg/mg). Suspecting the victim's mother of recreational use of Skenan(®), the magistrate ordered analysis of her hair, with negative results. From an epidemiological viewpoint, this case of oral morphine sulfate abuse in an adolescent with no previous history suggests the emergence of a new trend of morphine sulfate consumption. From a toxicological viewpoint, it demonstrates the value of hair testing, which documented the victim's regular exposure and made an important contribution to the police investigation. Copyright © 2015 Elsevier Ltd and Faculty of Forensic and Legal Medicine. All rights reserved.

  1. Design of cladding rods-assisted depressed-core few-mode fibers with improved modal spacing

    NASA Astrophysics Data System (ADS)

    Han, Jiawei; Zhang, Jie

    2018-03-01

    This paper investigates the design details of cladding rods-assisted (CRA) depressed-core (DC) few-mode fibers (FMFs) that feature more equally spaced linearly polarized (LP) modal effective indices, suitable for high-spatial-density weakly-coupled mode-division multiplexing systems. The influences of the index profile of the cladding rods on the LP mode-resolved effective index, bending sensitivity, and effective area Aeff are numerically described. Based on the design considerations of LP modal Aeff-dependent spatial efficiency and LP modal bending loss-dependent robustness, the small LP21-LP02 and LP22-LP03 modal spacings that limit state-of-the-art weakly-coupled step-index FMFs have been substantially improved by at least 25%. In addition, the proposed CRA DC FMFs also show sufficiently large effective areas (in excess of 110 μm2) for all guided LP modes, which are expected to exhibit good nonlinear performance.

  2. Fuzzy linear model for production optimization of mining systems with multiple entities

    NASA Astrophysics Data System (ADS)

    Vujic, Slobodan; Benovic, Tomo; Miljanovic, Igor; Hudej, Marjan; Milutinovic, Aleksandar; Pavlovic, Petar

    2011-12-01

    Planning and production optimization for mining systems with multiple mines or several work sites (entities) by using fuzzy linear programming (LP) were studied. LP is one of the most commonly used operations research methods in mining engineering. After the introductory review of the properties and limitations of applying LP, short reviews of the general settings of deterministic and fuzzy LP models are presented. With the purpose of comparative analysis, the application of both LP models is presented using the example of the Bauxite Basin Niksic with five mines. The assessment shows that LP is an efficient mathematical modeling tool for production planning and for solving many other single-criteria optimization problems in mining engineering. After comparing the advantages and deficiencies of both deterministic and fuzzy LP models, the conclusion presents the benefits of the fuzzy LP model, while also stating that seeking an optimal production plan requires an overall analysis that encompasses both LP modeling approaches.
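
    For reference, the deterministic counterpart of such a production-planning LP can be written directly with scipy.optimize.linprog; all figures below are hypothetical, and the fuzzy variant would replace the crisp right-hand sides with membership functions rather than fixed bounds.

        # Hedged sketch of a deterministic two-mine production LP. All numbers are made up.
        from scipy.optimize import linprog

        # maximize x1 + x2  ->  minimize -(x1 + x2)
        c = [-1.0, -1.0]
        A_ub = [[1.0, 0.0],     # mine 1 capacity
                [0.0, 1.0],     # mine 2 capacity
                [1.0, 1.0]]     # shared processing-plant capacity
        b_ub = [300.0, 450.0, 600.0]
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
        print("optimal tonnage per mine:", res.x, "total:", -res.fun)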

  3. Selective excitation of LP01 and LP02 in dual-concentric cores fiber using an adiabatically tapered microstructured mode converter

    NASA Astrophysics Data System (ADS)

    Sammouda, Marwa; Taher, Aymen Belhadj; Bahloul, Faouzi; Bin, Philippe Di

    2016-09-01

    We propose to connect a single-mode fiber (SMF) to a dual-concentric cores fiber (DCCF) using an adiabatically tapered microstructured mode converter, and to evaluate the selective excitation performance of the SMF LP01 mode and of the DCCF LP01 and LP02 modes. We theoretically and numerically study this selective excitation method by calculating the effective indices of the propagated modes, the adiabaticity criteria, the coupling loss, and the mode amplitudes along the tapered structure. This study shows that this method is able to achieve excellent selective excitation of the first two linearly polarized modes (LP01 and LP02) among the five guided modes in the DCCF with negligible loss. The fractions of the total power carried by the LP01 and LP02 modes are 99% and 84%, corresponding to 0.1 and 0.8 dB losses, respectively.

  4. Immunogenicity of recombinant Lactobacillus casei-expressing F4 (K88) fimbrial adhesin FaeG in conjunction with a heat-labile enterotoxin A (LTAK63) and heat-labile enterotoxin B (LTB) of enterotoxigenic Escherichia coli as an oral adjuvant in mice.

    PubMed

    Yu, M; Qi, R; Chen, C; Yin, J; Ma, S; Shi, W; Wu, Y; Ge, J; Jiang, Y; Tang, L; Xu, Y; Li, Y

    2017-02-01

    The aims of this study were to develop an effective oral vaccine against enterotoxigenic Escherichia coli (ETEC) infection and to design new and more versatile mucosal adjuvants. Genetically engineered Lactobacillus casei strains expressing F4 (K88) fimbrial adhesin FaeG (rLpPG-2-FaeG) and either co-expressing a heat-labile enterotoxin A (LTA) subunit with an amino acid mutation associated with reduced virulence (LTAK63) and a heat-labile enterotoxin B (LTB) subunit of E. coli (rLpPG-2-LTAK63-co-LTB) or fused-expressing LTAK63 and LTB (rLpPG-2-LTAK63-fu-LTB) were constructed. The immunogenicity of rLpPG-2-FaeG in conjunction with rLpPG-2-LTAK63-co-LTB or rLpPG-2-LTAK63-fu-LTB as an orally administered mucosal adjuvant in mice was evaluated. Results showed that the levels of FaeG-specific serum IgG and mucosal sIgA, as well as the proliferation of lymphocytes, were significantly higher in mice orally co-administered rLpPG-2-FaeG and rLpPG-2-LTAK63-fu-LTB compared with those administered rLpPG-2-FaeG alone, and were lower than those co-administered rLpPG-2-FaeG and rLpPG-2-LTAK63-co-LTB. Moreover, effective protection was observed after challenge with the F4+ ETEC strain CVCC 230 in mice co-administered rLpPG-2-FaeG and rLpPG-2-LTAK63-co-LTB or rLpPG-2-FaeG and rLpPG-2-LTAK63-fu-LTB, compared with those that received rLpPG-2-FaeG alone. rLpPG-2-FaeG showed greater immunogenicity in combination with LTAK63 and LTB as molecular adjuvants. Recombinant Lactobacillus provides a promising platform for the development of vaccines against F4+ ETEC infection. © 2016 The Society for Applied Microbiology.

  5. Effects of NaCl and CaCl2 on Water Transport across Root Cells of Maize (Zea mays L.) Seedlings 1

    PubMed Central

    Azaizeh, Hassan; Gunse, Benito; Steudle, Ernst

    1992-01-01

    The effects of salinity and calcium levels on water flows and on hydraulic parameters of individual cortical cells of excised roots of young maize (Zea mays L. cv Halamish) plants have been measured using the cell pressure probe. Maize seedlings were grown in one-third strength Hoagland solution modified by additions of NaCl and/or extra calcium so that the seedlings received one of four treatments: control; +100 millimolar NaCl; +10 millimolar CaCl2; +100 millimolar NaCl + 10 millimolar CaCl2. From the hydrostatic and osmotic relaxations of turgor, the hydraulic conductivity (Lp) and the reflection coefficient (σs) of cortical cells of different root layers were determined. Mean Lp values in the different layers (first to third, fourth to sixth, seventh to ninth) of the four different treatments ranged from 11.8 to 14.5 (Control), 2.5 to 3.8 (+NaCl), 6.9 to 8.7 (+CaCl2), and 6.6 to 7.2 · 10−7 meter per second per megapascal (+NaCl + CaCl2). These results indicate that salinization of the growth media at regular calcium levels (0.5 millimolar) decreased Lp significantly (three to six times). The addition of extra calcium (10 millimolar) to the salinized media produced compensating effects. Mean cell σs values of NaCl ranged from 1.08 to 1.16, 1.15 to 1.22, 0.94 to 1.00, and 1.32 to 1.46 in different root cell layers of the four different treatments, respectively. Some of these σs values were probably overestimated due to an underestimation of the elastic modulus of the cells. σs values close to unity were in line with the fact that root cell membranes were practically not permeable to NaCl. However, the root cylinder exhibited some permeability to NaCl as was demonstrated by the root pressure probe measurements that resulted in σsr of less than unity. Compared with the controls, salinity and calcium increased the root cell diameter. Seedlings salinized at regular calcium levels had shorter cell lengths than controls (by a factor of 2). The results demonstrate that NaCl has adverse effects on water transport parameters of root cells. Extra calcium could, in part, compensate for these effects. The data suggest a considerable apoplasmic water flow in the root cortex. However, the cell-to-cell path also contributed to the overall water transport in maize roots and appeared to be responsible for the decrease in root hydraulic conductivity reported earlier (Azaizeh H, Steudle E [1991] Plant Physiol 97: 1136-1145). Accordingly, the effect of high salinity on the cell Lp was much larger than that on root Lpr. PMID:16669016

  6. Sparsity-based multi-height phase recovery in holographic microscopy

    NASA Astrophysics Data System (ADS)

    Rivenson, Yair; Wu, Yichen; Wang, Hongda; Zhang, Yibo; Feizi, Alborz; Ozcan, Aydogan

    2016-11-01

    High-resolution imaging of densely connected samples such as pathology slides using digital in-line holographic microscopy requires the acquisition of several holograms, e.g., at >6-8 different sample-to-sensor distances, to achieve robust phase recovery and coherent imaging of the specimen. Reducing the number of these holographic measurements would normally result in reconstruction artifacts and loss of image quality, which would be detrimental especially for biomedical and diagnostics-related applications. Inspired by the fact that most natural images are sparse in some domain, here we introduce a sparsity-based phase reconstruction technique implemented in the wavelet domain to achieve at least a 2-fold reduction in the number of holographic measurements for coherent imaging of densely connected samples with minimal impact on the reconstructed image quality, quantified using a structural similarity index. We demonstrated the success of this approach by imaging Papanicolaou smears and breast cancer tissue slides over a large field-of-view of ~20 mm2 using 2 in-line holograms that are acquired at different sample-to-sensor distances and processed using sparsity-based multi-height phase recovery. This new phase recovery approach that makes use of sparsity can also be extended to other coherent imaging schemes, involving e.g., multiple illumination angles or wavelengths to increase the throughput and speed of coherent imaging.
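
    The abstract does not spell out the reconstruction algorithm, so the fragment below only illustrates the generic building block it relies on: pushing the current object estimate toward sparsity by soft-thresholding its transform coefficients between propagation updates. A 2-D DCT (scipy.fft.dctn) stands in for the wavelet transform used in the paper, and the threshold value is an arbitrary assumption.

    ```python
    import numpy as np
    from scipy.fft import dctn, idctn

    def soft_threshold(c, t):
        # Proximal operator of the l1 norm: shrink coefficients toward zero.
        return np.sign(c) * np.maximum(np.abs(c) - t, 0.0)

    def sparsity_step(image, rel_threshold=0.02):
        # One sparsity-enforcing step: transform, shrink small coefficients,
        # transform back.  The DCT is a stand-in for the paper's wavelet transform.
        coeffs = dctn(image, norm="ortho")
        t = rel_threshold * np.abs(coeffs).max()
        return idctn(soft_threshold(coeffs, t), norm="ortho")

    # In a multi-height scheme this step would be interleaved with angular-spectrum
    # propagation between the measured hologram planes.
    estimate = sparsity_step(np.random.rand(256, 256))
    ```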

  7. HYPOTHESIS TESTING FOR HIGH-DIMENSIONAL SPARSE BINARY REGRESSION

    PubMed Central

    Mukherjee, Rajarshi; Pillai, Natesh S.; Lin, Xihong

    2015-01-01

    In this paper, we study the detection boundary for minimax hypothesis testing in the context of high-dimensional, sparse binary regression models. Motivated by genetic sequencing association studies for rare variant effects, we investigate the complexity of the hypothesis testing problem when the design matrix is sparse. We observe a new phenomenon in the behavior of the detection boundary which does not occur in the case of Gaussian linear regression. We derive the detection boundary as a function of two components: a design matrix sparsity index and signal strength, each of which is a function of the sparsity of the alternative. For any alternative, if the design matrix sparsity index is too high, any test is asymptotically powerless irrespective of the magnitude of signal strength. For binary design matrices with a sparsity index that is not too high, our results are parallel to those in the Gaussian case. In this context, we derive detection boundaries for both the dense and sparse regimes. For the dense regime, we show that the generalized likelihood ratio test is rate optimal; for the sparse regime, we propose an extended Higher Criticism test and show that it is rate optimal and sharp. We illustrate the finite sample properties of the theoretical results using simulation studies. PMID:26246645
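
    The abstract does not give the form of the extended Higher Criticism test; purely as background, the snippet below computes the classical Donoho-Jin Higher Criticism statistic from a vector of p-values, which is the quantity such extensions build on. Restricting the maximum to the smallest alpha0*n p-values is the usual convention and is an assumption here.

    ```python
    import numpy as np

    def higher_criticism(pvals, alpha0=0.5):
        # Classical Donoho-Jin HC statistic: the largest standardized gap between
        # the empirical and uniform CDFs over the smallest p-values.
        p = np.sort(np.asarray(pvals, dtype=float))
        n = p.size
        i = np.arange(1, n + 1)
        hc = np.sqrt(n) * (i / n - p) / np.sqrt(p * (1.0 - p) + 1e-12)
        k = max(1, int(alpha0 * n))      # consider only the smallest alpha0*n p-values
        return hc[:k].max()

    # Null p-values give modest HC values; a few very small (signal) p-values raise it.
    rng = np.random.default_rng(0)
    print(higher_criticism(rng.uniform(size=1000)))
    ```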

  8. Effects of high-fat diet on somatic growth, metabolic parameters and function of peritoneal macrophages of young rats submitted to a maternal low-protein diet.

    PubMed

    Alheiros-Lira, Maria Cláudia; Jurema-Santos, Gabriela Carvalho; da-Silva, Helyson Tomaz; da-Silva, Amanda Cabral; Moreno Senna, Sueli; Ferreira E Silva, Wylla Tatiana; Ferraz, José Candido; Leandro, Carol Góis

    2017-03-01

    This study evaluated the effects of a post-weaning high-fat (HF) diet on somatic growth, food consumption, metabolic parameters, phagocytic rate and nitric oxide (NO) production of peritoneal macrophages in young rats submitted to a maternal low-protein (LP) diet. Male Wistar rats (aged 60 d) were divided into two groups (n 22 each) according to their maternal diet during gestation and lactation: control (C, dams fed 17 % casein) and LP (dams fed 8 % casein). At weaning, half of the groups were fed the HF diet and two more groups were formed (HF and low protein-high fat (LP-HF)). Somatic growth, food and energy intake, fat depots, serum glucose, cholesterol and leptin concentrations were evaluated. Phagocytic rate and NO production were analysed in peritoneal macrophages under stimulation with zymosan and lipopolysaccharide (LPS)+interferon γ (IFN-γ), respectively. The maternal LP diet altered the somatic parameters of growth and development of pups. LP and LP-HF pups showed a higher body weight gain and food intake than C pups. HF and LP-HF pups showed increased retroperitoneal and epididymal fat depots, and higher serum levels of TAG and total cholesterol, compared with C and LP pups. After LPS+IFN-γ stimulation, LP and LP-HF pups showed reduced NO production when compared with their pairs. Increased phagocytic activity and NO production were seen in LP but not LP-HF peritoneal macrophages. However, peritoneal macrophages of LP pups were hyporesponsive to LPS+IFN-γ induced NO release, even after a post-weaning HF diet. Our data demonstrated that there was an immunomodulation related to dietary fatty acids after the maternal LP diet-induced metabolic programming.

  9. Lp-dual affine surface area

    NASA Astrophysics Data System (ADS)

    Wei, Wang; Binwu, He

    2008-12-01

    According to the notion of Lp-affine surface area by Lutwak, in this paper, we introduce the concept of Lp-dual affine surface area. Further, we establish the affine isoperimetric inequality and the Blaschke-Santaló inequality for Lp-dual affine surface area. Besides, the dual Brunn-Minkowski inequality for Lp-dual affine surface area is presented.

  10. Aerobic capacity, orthostatic tolerance, and exercise perceptions at discharge from inpatient spinal cord injury rehabilitation.

    PubMed

    Pelletier, Chelsea A; Jones, Graham; Latimer-Cheung, Amy E; Warburton, Darren E; Hicks, Audrey L

    2013-10-01

    To describe physical capacity, autonomic function, and perceptions of exercise among adults with subacute spinal cord injury (SCI). Cross-sectional. Two inpatient SCI rehabilitation programs in Canada. Participants (N=41; mean age ± SD, 38.9 ± 13.7y) with tetraplegia (TP; n=19), high paraplegia (HP; n=8), or low paraplegia (LP; n=14) completing inpatient SCI rehabilitation (mean ± SD, 112.9 ± 52.5d postinjury). Not applicable. Peak exercise capacity was determined by an arm ergometry test. As a measure of autonomic function, orthostatic tolerance was assessed by a passive sit-up test. Self-efficacy for exercise postdischarge was evaluated by a questionnaire. There was a significant difference in peak oxygen consumption and heart rate between participants with TP (11.2 ± 3.4 mL·kg(-1)·min(-1); 113.9 ± 19.7 beats/min) and LP (17.1 ± 7.5 mL·kg(-1)·min(-1); 142.8 ± 22.7 beats/min). Peak power output was also significantly lower in the TP group (30.0 ± 6.9W) compared with the HP (55.5 ± 7.56W) and LP groups (62.5 ± 12.2W). Systolic blood pressure responses to the postural challenge varied significantly between groups (-3.0 ± 33.5 mmHg in TP, 17.8 ± 14.7 mmHg in HP, 21.6 ± 18.7 mmHg in LP). Orthostatic hypotension was most prevalent among participants with motor complete TP (73%). Results from the questionnaire revealed that although participants value exercise and see benefits to regular participation, they have low confidence in their abilities to perform the task of either aerobic or strengthening exercise. Exercise is well tolerated in adults with subacute SCI. Exercise interventions at this stage should focus on improving task-specific self-efficacy, and attention should be paid to blood pressure regulation, particularly in individuals with motor complete TP. Copyright © 2013 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.

  11. Low protein and high-energy diet: a possible natural cause of fatty liver hemorrhagic syndrome in caged White Leghorn laying hens.

    PubMed

    Rozenboim, I; Mahato, J; Cohen, N A; Tirosh, O

    2016-03-01

    Fatty liver hemorrhagic syndrome (FLHS) is a metabolic condition of chickens and other birds caused by diverse nutritional, hormonal, environmental, and metabolic factors. Here we studied the effect of different diet compositions on the induction of FLHS in single comb White Leghorn (WL) Hy-line laying hens. Seventy-six (76) young WL (26 wks old) laying hens and 69 old hens (84 wks old) of the same breed were each divided into 4 treatment groups and provided 4 different diet treatments. The diet treatments included: control (C), 17.5% CP, 3.5% fat (F); normal protein, high fat (HF), 17.5% CP, 7% F; low protein, normal fat (LP), 13% CP, 3.5% F; and low protein, high fat (LPHF), 13% CP, 6.5% F. The diets containing high fat also had a higher ME of 3,000 kcal/kg of feed, while the other 2 diets with normal fat had the regular, lower ME (2,750 kcal/kg). Hen-day egg production (HDEP), ADFI, BW, egg weight, plasma enzymes indicating liver damage (alkaline phosphatase [ALP], aspartate aminotransferase [AST], gamma-glutamyl transferase [GGT]), liver and abdominal fat weight, liver color score (LCS), liver hemorrhagic score (LHS), liver fat content (LFC), liver histological examination, lipid peroxidation products in the liver, and genes indicating liver inflammation were evaluated. HDEP, ADFI, BW, and egg weight were significantly decreased in the LPHF diet group, while egg weight was also decreased in the LP diet group. In the young hens (LPHF group), ALP was significantly higher at 30 d of diet treatment and numerically higher throughout the experiment, while AST was significantly higher at 105 d of treatment. LCS, LHS, and LFC were significantly higher in young hens on the LPHF diet treatment. Liver histological examination showed more lipid vacuolization in the LPHF diet treatment. HF or LP alone had no significant effect on LFC, LHS, or LCS. We suggest that LP in the diet with higher ME from fat can be a possible natural cause predisposing laying hens to FLHS. © 2016 Poultry Science Association Inc.

  12. Impact of Apolipoprotein(a) Isoform Size on Lipoprotein(a) Lowering in the HPS2-THRIVE Study

    PubMed Central

    Hopewell, Jemma C.; Hill, Michael R.; Marcovina, Santica; Valdes-Marquez, Elsa; Haynes, Richard; Offer, Alison; Pedersen, Terje R.; Baigent, Colin; Collins, Rory; Landray, Martin; Armitage, Jane

    2018-01-01

    Background: Genetic studies have shown lipoprotein(a) (Lp[a]) to be an important causal risk factor for coronary disease. Apolipoprotein(a) isoform size is the chief determinant of Lp(a) levels, but its impact on the benefits of therapies that lower Lp(a) remains unclear. Methods: HPS2-THRIVE (Heart Protection Study 2–Treatment of HDL to Reduce the Incidence of Vascular Events) is a randomized trial of niacin–laropiprant versus placebo on a background of simvastatin therapy. Plasma Lp(a) levels at baseline and 1 year post-randomization were measured in 3978 participants from the United Kingdom and China. Apolipoprotein(a) isoform size, estimated by the number of kringle IV domains, was measured by agarose gel electrophoresis and the predominantly expressed isoform identified. Results: Allocation to niacin–laropiprant reduced mean Lp(a) by 12 (SE, 1) nmol/L overall and 34 (6) nmol/L in the top quintile by baseline Lp(a) level (Lp[a] ≥128 nmol/L). The mean proportional reduction in Lp(a) with niacin–laropiprant was 31% but varied strongly with predominant apolipoprotein(a) isoform size (PTrend=4×10−29) and was only 18% in the quintile with the highest baseline Lp(a) level and low isoform size. Estimates from genetic studies suggest that these Lp(a) reductions during the short term of the trial might yield proportional reductions in coronary risk of ≈2% overall and 6% in the top quintile by Lp(a) levels. Conclusions: Proportional reductions in Lp(a) were dependent on apolipoprotein(a) isoform size. Taking this into account, the likely benefits of niacin–laropiprant on coronary risk through Lp(a) lowering are small. Novel therapies that reduce high Lp(a) levels by at least 80 nmol/L (≈40%) may be needed to produce worthwhile benefits in people at the highest risk because of Lp(a). Clinical Trial Registration: URL: https://clinicaltrials.gov. Unique identifier: NCT00461630. PMID:29449329

  13. Cost-effectiveness analysis of late prophylaxis vs. on-demand treatment for severe haemophilia A in Italy.

    PubMed

    Coppola, A; D'Ausilio, A; Aiello, A; Amoresano, S; Toumi, M; Mathew, P; Tagliaferri, A

    2017-05-01

    Long-term regular administration of factor VIII (FVIII) concentrate (prophylaxis), initiated at an early age, prevents bleeding in patients with severe haemophilia A (HA). The 5-year prospective Italian POTTER study provided evidence of benefits in adolescents and adults of late prophylaxis (LP) vs. on-demand therapy (OD) in reducing bleeding episodes and joint morbidity and improving quality of life; however, costs were increased. The aim of this study was to determine the cost-effectiveness of LP vs. OD with sucrose-formulated recombinant FVIII in adolescents and adults with severe HA in Italy. A Markov model evaluated the lifetime cost-effectiveness of LP vs. OD in patients with severe HA in Italy, from both the healthcare and societal perspectives. Clinical input parameters were taken from the POTTER study and published literature. Health utility values were assigned to each health state as measured by the Pettersson score of joint disease severity. Costs were expressed in Euro (€) 2014, including drug and other medical costs. Sensitivity analyses were performed considering the societal perspective (including productivity lost) and varying the relative risk of bleeding episodes between regimens. Clinical outcomes and costs were discounted at 6% according to previous studies. Lifetime incremental discounted quality-adjusted life-years (QALYs) were +4.26, whereas incremental discounted costs were +€229,694 from a healthcare perspective, with an estimated incremental cost-effectiveness ratio (ICER) equal to €53,978/QALY. Sensitivity analyses confirmed the base-case results, showing lower ICERs with the societal perspective. Late prophylaxis vs. on-demand therapy results in a cost-effective approach with ICERs falling below the threshold considered acceptable in Italy. © 2017 John Wiley & Sons Ltd.
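
    The reported cost-effectiveness figure follows from the standard incremental cost-effectiveness ratio, ICER = incremental cost / incremental QALYs; the two-line check below recomputes it from the rounded incremental values quoted above (the small difference from the reported €53,978/QALY is a rounding artifact, since the published model used unrounded inputs).

    ```python
    # ICER from the abstract's discounted incremental values (healthcare perspective).
    incremental_cost_eur = 229_694    # late prophylaxis vs. on-demand
    incremental_qalys = 4.26

    icer = incremental_cost_eur / incremental_qalys
    print(f"ICER ~ {icer:,.0f} EUR/QALY")   # ~53,918 EUR/QALY, close to the reported 53,978
    ```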

  14. Influence of Lewy Pathology on Alzheimer's Disease Phenotype: A Retrospective Clinico-Pathological Study.

    PubMed

    Roudil, Jennifer; Deramecourt, Vincent; Dufournet, Boris; Dubois, Bruno; Ceccaldi, Mathieu; Duyckaerts, Charles; Pasquier, Florence; Lebouvier, Thibaud

    2018-01-01

    Studies have shown the frequent coexistence of Lewy pathology (LP) in Alzheimer's Disease (AD). The aim of this study was to determine the influence of LP on the clinical and cognitive phenotype in a cohort of patients with a neuropathological diagnosis of AD. We reviewed neuropathologically proven AD cases, reaching Braak stages V and VI in the brain banks of Lille and Paris between 1993 and 2016, and classified them according to LP extension (amygdala, brainstem, limbic, or neocortical). We then searched patient files for all available clinical and neuropsychiatric features and neuropsychological data. Thirty-three subjects were selected for this study, among which 16 were devoid of LP and 17 presented AD with concomitant LP. The latter were stratified into two subgroups according to LP distribution: 7 were AD with amygdala LP and 10 were AD with 'classical' (brainstem, limbic or neocortical) LP. When analyzing the incidence of each clinical feature at any point during the disease course, we found no significant difference in symptom frequency between the three groups. However, fluctuations appeared significantly earlier in patients with classical LP (2±3.5 years) than in patients without LP (7±1.7 years) or with amygdala LP (8±2.8 years; p < 0.01). There was no significant difference in cognitive profiles. Our findings suggest that the influence of LP on the clinical phenotype of AD is subtle. Core features of dementia with Lewy bodies do not allow clinical diagnosis of a concomitant LP on a patient-to-patient basis.

  15. 77 FR 59393 - Jordan Cove Energy Project LP; Pacific Connector Gas Pipeline LP; Notice of Additional Public...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-09-27

    ...-17-000] Jordan Cove Energy Project LP; Pacific Connector Gas Pipeline LP; Notice of Additional Public..., and 11, 2012, the Federal Energy Regulatory Commission (FERC or Commission) Office of Energy Projects... additional public scoping meetings to take comments on Jordan Cove Energy Project LP's (Jordan Cove) proposed...

  16. 77 FR 61753 - Granting of Request for Early Termination of the Waiting Period Under the Premerger Notification...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-10-11

    ..., Inc.; Astria Semiconductor Holdings, Inc.; FormFactor, Inc. 20121365 G ABRY Partners VII, L.P.; Source.... 20121303 G Wind Point Partners L.P.; Mistral Equity Partners, LP; Wind Point Partners VII-A, L.P. 20121307... Dillard. 20121364 G Apollo Investment Fund VII, L.P.; Jimmy Sanders Incorporated; Apollo Investment Fund...

  17. Intraileal casein infusion increases plasma concentrations of amino acids in humans: A randomized cross over trial.

    PubMed

    Ripken, Dina; van Avesaat, Mark; Troost, Freddy J; Masclee, Ad A; Witkamp, Renger F; Hendriks, Henk F

    2017-02-01

    Activation of the ileal brake by casein induces satiety signals and reduces energy intake. However, adverse effects of intraileal casein administration have not been studied before. These adverse effects may include impaired amino acid digestion, absorption and immune activation. To investigate the effects of intraileal infusion of native casein on plasma amino acid appearance, immune activation and gastrointestinal (GI) symptoms. A randomized single-blind cross over study was performed in 13 healthy subjects (6 male; mean age 26 ± 2.9 years; mean body mass index 22.8 ± 0.4 kg/m2), who were intubated with a naso-ileal feeding catheter. Thirty minutes after intake of a standardized breakfast, participants received an ileal infusion, containing either control (C) consisting of saline, a low dose (17.2 kcal) of casein (LP) or a high dose (51.7 kcal) of casein (HP), over a period of 90 min. Blood samples were collected for analysis of amino acids (AAs), C-reactive protein (CRP), pro-inflammatory cytokines and oxylipins at regular intervals. Furthermore, GI symptom questionnaires were collected before, during and after ileal infusion. None of the subjects reported any GI symptoms before, during or after ileal infusion of C, LP and HP. Plasma concentrations of all AAs analyzed were significantly increased after infusion of HP as compared to C (p < 0.001), and most AAs were increased after infusion of LP (p < 0.001). In total, 12.49 ± 1.73 and 3.18 ± 0.87 g AAs were found in plasma after intraileal infusion of HP and LP, corresponding to 93 ± 13% (HP) and 72 ± 20% (LP) of the AAs infused as casein, respectively. Ileal casein infusion did not affect plasma concentrations of CRP, IL-6, IL-8, IL-1β and TNF-α. Infusion of HP resulted in a decreased concentration of 11,12-dihydroxyeicosatrienoic acid, whereas none of the other oxylipins analyzed were affected. A single intraileal infusion of native casein results in a concentration- and time-dependent increase of AAs in plasma, suggesting an effective digestion and absorption of the AAs present in casein. Also, ileal infusion did not result in immune activation or in GI symptoms. CLINICALTRIALS.GOV: NCT01509469. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.

  18. Effective Alternating Direction Optimization Methods for Sparsity-Constrained Blind Image Deblurring.

    PubMed

    Xiong, Naixue; Liu, Ryan Wen; Liang, Maohan; Wu, Di; Liu, Zhao; Wu, Huisi

    2017-01-18

    Single-image blind deblurring for imaging sensors in the Internet of Things (IoT) is a challenging ill-conditioned inverse problem, which requires regularization techniques to stabilize the image restoration process. The purpose is to recover the underlying blur kernel and latent sharp image from only one blurred image. Under many degraded imaging conditions, the blur kernel could be considered not only spatially sparse, but also piecewise smooth with the support of a continuous curve. By taking advantage of the hybrid sparse properties of the blur kernel, a hybrid regularization method is proposed in this paper to robustly and accurately estimate the blur kernel. The effectiveness of the proposed blur kernel estimation method is enhanced by incorporating both the L 1 -norm of kernel intensity and the squared L 2 -norm of the intensity derivative. Once the accurate estimation of the blur kernel is obtained, the original blind deblurring can be simplified to the direct deconvolution of blurred images. To guarantee robust non-blind deconvolution, a variational image restoration model is presented based on the L 1 -norm data-fidelity term and the total generalized variation (TGV) regularizer of second-order. All non-smooth optimization problems related to blur kernel estimation and non-blind deconvolution are effectively handled by using the alternating direction method of multipliers (ADMM)-based numerical methods. Comprehensive experiments on both synthetic and realistic datasets have been implemented to compare the proposed method with several state-of-the-art methods. The experimental comparisons have illustrated the satisfactory imaging performance of the proposed method in terms of quantitative and qualitative evaluations.
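
    The abstract describes ADMM-based solvers for the L1-regularized kernel estimation and TGV-regularized deconvolution subproblems without giving them explicitly; the sketch below is only the generic ADMM template for a much simpler L1-regularized least-squares problem, shown to illustrate the three-step pattern (quadratic update, shrinkage update, dual update) that such solvers share. The problem, matrix sizes and parameters are all illustrative assumptions.

    ```python
    import numpy as np

    def soft_threshold(v, t):
        # Proximal operator of the l1 norm (shrinkage).
        return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

    def admm_l1_ls(A, b, lam=0.1, rho=1.0, iters=200):
        # ADMM for min_x 0.5*||Ax - b||^2 + lam*||x||_1 via the split x = z.
        m, n = A.shape
        x = np.zeros(n); z = np.zeros(n); u = np.zeros(n)
        AtA = A.T @ A + rho * np.eye(n)
        Atb = A.T @ b
        for _ in range(iters):
            x = np.linalg.solve(AtA, Atb + rho * (z - u))   # quadratic subproblem
            z = soft_threshold(x + u, lam / rho)            # shrinkage subproblem
            u = u + x - z                                   # scaled dual ascent
        return z

    rng = np.random.default_rng(1)
    A = rng.standard_normal((40, 100))
    x_true = np.zeros(100); x_true[[3, 30, 77]] = [1.0, -2.0, 0.5]
    b = A @ x_true + 0.01 * rng.standard_normal(40)
    print(np.flatnonzero(np.abs(admm_l1_ls(A, b)) > 0.1))   # sparse support estimate
    ```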

  19. The presence of lysylpyridinoline in the hypertrophic cartilage of newly hatched chicks

    NASA Technical Reports Server (NTRS)

    Orth, M. W.; Martinez, D. A.; Cook, M. E.; Vailas, A. C.

    1993-01-01

    The presence of lysylpyridinoline (LP) as a nonreducible cross-link in appreciable quantities has primarily been limited to the mineralized tissues, bone and dentin. However, the results reported here show that LP is not only present in the hypertrophic cartilage of the tibiotarsus isolated from newly hatched broiler chicks, but is approximately four times as concentrated as hydroxylysylpyridinoline (HP). Bone and articular cartilage surrounding the hypertrophic cartilage do not contain measurable quantities of LP. Purified LP has a fluorescence scan similar to that of purified HP and to literature values, confirming that we were indeed measuring LP. Also, the cartilage lesion produced by immature chondrocytes from birds with tibial dyschondroplasia had LP, but the HP:LP ratio was > 1. Thus, the low HP:LP ratio could be a marker for hypertrophic cartilage in avian species.

  20. Dynamic SPECT reconstruction from few projections: a sparsity enforced matrix factorization approach

    NASA Astrophysics Data System (ADS)

    Ding, Qiaoqiao; Zan, Yunlong; Huang, Qiu; Zhang, Xiaoqun

    2015-02-01

    The reconstruction of dynamic images from few projection data is a challenging problem, especially when noise is present and when the dynamic images vary rapidly. In this paper, we propose a variational model, sparsity enforced matrix factorization (SEMF), based on a low-rank matrix factorization of the unknown images and enforced sparsity constraints for representing both the coefficients and the bases. The proposed model is solved via an alternating iterative scheme for which each subproblem is convex and is handled efficiently by the alternating direction method of multipliers (ADMM). The convergence of the overall alternating scheme for the nonconvex problem relies upon the Kurdyka-Łojasiewicz property, recently studied by Attouch et al (2010 Math. Oper. Res. 35 438) and Attouch et al (2013 Math. Program. 137 91). Finally, our proof-of-concept simulation on 2D dynamic images shows the advantage of the proposed method compared to conventional methods.
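
    As a rough illustration of the sparsity-enforced factorization idea (not the authors' exact model, which also involves the SPECT projection operator and ADMM subproblem solvers), the sketch below alternates proximal-gradient updates on the two factors of a low-rank decomposition X ≈ UV with l1 penalties on both factors. The rank, penalty weight and iteration count are arbitrary assumptions.

    ```python
    import numpy as np

    def soft(v, t):
        return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

    def sparse_low_rank_factorization(X, rank=3, lam=0.05, iters=300):
        # Alternating ISTA-style updates for
        #   min_{U,V} 0.5*||X - U V||_F^2 + lam*(||U||_1 + ||V||_1).
        m, n = X.shape
        rng = np.random.default_rng(0)
        U = 0.1 * rng.standard_normal((m, rank))
        V = 0.1 * rng.standard_normal((rank, n))
        for _ in range(iters):
            R = U @ V - X
            sU = 1.0 / (np.linalg.norm(V, 2) ** 2 + 1e-12)  # step from a Lipschitz bound
            U = soft(U - sU * (R @ V.T), lam * sU)
            R = U @ V - X
            sV = 1.0 / (np.linalg.norm(U, 2) ** 2 + 1e-12)
            V = soft(V - sV * (U.T @ R), lam * sV)
        return U, V

    # Toy usage: a rank-1 "dynamic sequence" (64 pixels x 20 frames) plus noise.
    rng = np.random.default_rng(1)
    X = np.outer(np.sin(np.linspace(0.0, 3.0, 64)), np.linspace(1.0, 2.0, 20))
    U, V = sparse_low_rank_factorization(X + 0.01 * rng.standard_normal(X.shape), rank=2)
    print(np.linalg.norm(X - U @ V) / np.linalg.norm(X))    # relative reconstruction error
    ```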

  1. Thorough clinical evaluation of skin, as well as oral, genital and anal mucosa is beneficial in lichen planus patients.

    PubMed

    Stojanovic, Larisa; Lunder, Tomaz; Rener-Sitar, Ksenija; Mlakar, Bostjan; Maticic, Mojca

    2011-03-01

    Lichen planus (LP) is a common mucocutaneous disease of unknown aetiology with varying geographical prevalence; it may be related to serious disorders such as squamous cell carcinoma and often remains underdiagnosed. The aim of this retrospective study was to thoroughly determine the localization and clinical characteristics of LP lesions in a cohort of 173 Slovenian patients in association with the presence of accompanying symptoms and a history of potential stressful events. Isolated cutaneous lesions of LP were found in 56.6% and isolated oral LP in 3.5% of patients. Thirty-four percent presented with orocutaneous LP, whereas genitocutaneous LP was noted in 1.2%, orogenito-cutaneous LP in 4% and orogenital LP in 0.5% of patients. Underlying stressful events were noted in 36 out of 137 (26.3%) patients. Despite the obviously visible localization of the lesions, various medical specialists should be familiar with LP and thoroughly examine the complete skin, as well as the oral, genital and anal mucosa, in each LP patient to avoid a delay in diagnosing this disease and possibly to disclose a more serious underlying condition. Psychological support should be offered, if needed.

  2. Updated Status and Performance at the Fourth HST COS FUV Lifetime Position

    NASA Astrophysics Data System (ADS)

    Taylor, Joanna M.; De Rosa, Gisella; Fix, Mees B.; Fox, Andrew; Indriolo, Nick; James, Bethan; Jedrzejewski, Robert I.; Oliveira, Cristina M.; Penton, Steven V.; Plesha, Rachel; Proffitt, Charles R.; Rafelski, Marc; Roman-Duval, Julia; Sahnow, David J.; Snyder, Elaine M.; Sonnentrucker, Paule; White, James

    2017-06-01

    To mitigate the adverse effects of gain sag on the spectral quality and accuracy of Hubble Space Telescope's Cosmic Origins Spectrograph FUV observations, COS FUV spectra will be moved from Lifetime Position 3 (LP3) to a new pristine location on the detectors at LP4 in July 2017. To achieve maximal spectral resolution while preserving detector area, the spectra will be shifted in the cross-dispersion (XD) direction by -2.5" (about -31 pixels) from LP3, or -5" (about -62 pixels) from the original LP1. At LP4, the wavelength calibration lamp spectrum can overlap with the previously gain-sagged LP2 PSA spectrum location. If lamp lines fall in the gain-sag holes from LP2, line ratios can change and the wavelength calibration can fail. As a result, we have updated the Wavecal Parameters Reference Table and CalCOS to address this issue. Additionally, it was necessary to extend the current geometric correction to encompass the entire LP4 location. Here we present 2-D template profiles and 1-D spectral trace centroids derived at LP4, as well as LP4-related updates to the wavelength calibration and geometric correction.

  3. Men exhibit greater fatigue resistance than women in alternated bench press and leg press exercises.

    PubMed

    Monteiro, Estêvão R; Steele, James; Novaes, Jefferson S; Brown, Amanda F; Cavanaugh, Mark T; Vingren, Jakob L; Behm, David G

    2017-11-17

    The purpose of this study was to evaluate the influence of sex, exercise order, and rest interval on neuromuscular fatigue resistance for an alternated strength training sequence of bench press (BP) and leg press (LP) exercises. Twelve women and 16 men, all recreationally trained, performed four sessions in random order: 1) BP followed by LP with a three-minute rest interval (BP+LP with rest), 2) LP followed by BP with a three-minute rest interval (LP+BP with rest), 3) BP followed by LP without a rest interval (BP+LP no rest), and 4) LP followed by BP without a rest interval (LP+BP no rest). Participants performed four sets with 100% of the 10RM load to concentric failure with the goal of completing the maximum number of repetitions in both exercises. The fatigue index was analyzed from the first and last sets of each exercise bout. A main effect for sex showed that women exhibited 25.5% (p=0.001) and 24.5% (p=0.001) greater BP and LP fatigue than men, respectively, when performing 10RM. Men exhibited greater BP (p<0.0001; 34.1%) and LP (p<0.0001; 30.5%) fatigue resistance when a rest period was provided. Men did not show an exercise order effect for BP fatigue and exhibited greater (p=0.0003; 14.5%) LP fatigue resistance when BP was performed first. The present study demonstrated the greater fatigue resistance of men when performing 10RM BP and LP exercises. Since men tend to experience less fatigue with the second exercise in the exercise pairing, women's training programs should be adjusted to ensure they do not parallel men's resistance training programs.

  4. Ice-binding proteins confer freezing tolerance in transgenic Arabidopsis thaliana.

    PubMed

    Bredow, Melissa; Vanderbeld, Barbara; Walker, Virginia K

    2017-01-01

    Lolium perenne is a freeze-tolerant perennial ryegrass capable of withstanding temperatures below -13 °C. Ice-binding proteins (IBPs) presumably help prevent damage associated with freezing by restricting the growth of ice crystals in the apoplast. We have investigated the expression, localization and in planta freezing protection capabilities of two L. perenne IBP isoforms, LpIRI2 and LpIRI3, as well as a processed IBP (LpAFP). One of these isoforms, LpIRI2, lacks a conventional signal peptide and was assumed to be a pseudogene. Nevertheless, both LpIRI2 and LpIRI3 transcripts were up-regulated following cold acclimation. LpIRI2 also demonstrated ice-binding activity when produced recombinantly in Escherichia coli. Both the LpIRI3 and LpIRI2 isoforms appeared to accumulate in the apoplast of transgenic Arabidopsis thaliana plants. In contrast, the fully processed isoform, LpAFP, remained intracellular. Transgenic plants expressing either LpIRI2 or LpIRI3 showed reduced ion leakage (12%-39%) after low-temperature treatments, and significantly improved freezing survival, while transgenic LpAFP-expressing lines did not confer substantial subzero protection. Freeze protection was further enhanced by the introduction of more than one IBP isoform; ion leakage was reduced 26%-35% and 10% of plants survived temperatures as low as -8 °C. Our results demonstrate that apoplastic expression of multiple L. perenne IBP isoforms shows promise for providing protection to crops susceptible to freeze-induced damage. © 2016 The Authors. Plant Biotechnology Journal published by Society for Experimental Biology and The Association of Applied Biologists and John Wiley & Sons Ltd.

  5. Serum lipoprotein(a) concentration as a cardiovascular risk factor in Kuwaiti type 2 diabetic patients.

    PubMed

    Abdella, N A; Mojiminiyi, O A; Akanji, A O; Al Mohammadi, H; Moussa, M A

    2001-01-01

    Serum lipoprotein(a) [Lp(a)], a risk factor for coronary heart disease (CHD) in some nondiabetic populations, is largely under genetic control and varies among ethnic and racial groups. We evaluated serum Lp(a) concentration and its relationship with traditional CHD risk factors (age, sex, smoking, hypertension, dyslipidemia) as well as stage of diabetic nephropathy in 345 type 2 diabetic patients. Lp(a) concentration was skewed with median (2.5th, 97.5th percentiles) of 25.0 (8.1, 75.7) mg/dl. Twenty-three of 55 (41.8%) patients with CHD had increased (>30 mg/dl) Lp(a) compared with 102 of 290 (35.1%) patients without CHD (P=.35). Twelve of 27 (44.4%) female patients with CHD had increased Lp(a) compared to 11 of 28 (39.3%) males (P=.70). Lp(a) was significantly (P<.05) higher in females than males, but the logistic regression analysis showed significant association of Lp(a), LDL-C, and duration of diabetes mellitus (DM) with CHD in male patients only. Although female patients with CHD and macroalbuminuria had significantly (P<.05) higher Lp(a) than normoalbuminuric female patients without CHD, no such association was found in males and no significant association was found between Lp(a) and the degree of albuminuria. Partial correlation analysis controlling for age, sex, and BMI showed significant correlation of Lp(a) with total cholesterol only (P=.03) and no correlation was found with other lipid parameters. Multiple regression analysis did not show significant associations of Lp(a) with standard CHD risk factors, HbA(1c), and plasma creatinine. This study is in agreement with studies in other populations, which showed that Lp(a) may not be an independent risk factor for CHD in patients with DM. However, as Lp(a) could promote atherogenesis via several mechanisms, follow-up studies in our patients will confirm if increased Lp(a) concentration can partly account for the poorer prognosis when diabetic patients develop CHD.

  6. Transcriptomic changes of Legionella pneumophila in water.

    PubMed

    Li, Laam; Mendis, Nilmini; Trigui, Hana; Faucher, Sébastien P

    2015-08-26

    Legionella pneumophila (Lp) is a water-borne opportunistic pathogen. In water, Lp can survive for an extended period of time until it encounters a permissive host. Therefore, identifying genes that are required for survival in water may help develop strategies to prevent Legionella outbreaks. We compared the global transcriptomic response of Lp grown in a rich medium to that of Lp exposed to an artificial freshwater medium (Fraquil) for 2, 6 and 24 hours. We uncovered successive changes in gene expression required for the successful adaptation to a nutrient-limited water environment. The repression of major pathways involved in cell division, transcription and translation suggests that Lp enters a quiescent state in water. The induction of flagella-associated genes (flg, fli and mot), enhanced-entry genes (enh) and some Icm/Dot effector genes suggests that Lp is primed to invade a suitable host in response to water exposure. Moreover, many genes involved in resistance to antibiotics and oxidative stress were induced, suggesting that Lp may be more tolerant to these stresses in water. Indeed, Lp exposed to water is more resistant to erythromycin, gentamycin and kanamycin than Lp cultured in rich medium. In addition, the bdhA gene, involved in the degradation pathway of the intracellular energy storage compound polyhydroxybutyrate, is also highly expressed in water. Further characterization shows that expression of bdhA during short-term water exposure is dependent upon RpoS, which is required for the survival of Lp in water. Deletion of bdhA reduces the survival of Lp in water at 37 °C. The increase in antibiotic resistance and the importance of bdhA to the survival of Lp in water are consistent with the observed induction of these genes when Lp is exposed to water. Other genes that are highly induced upon exposure to water could also be necessary for Lp to maintain viability in the water environment.

  7. Biosynthesis and processing of bovine cartilage link proteins.

    PubMed

    Hering, T M; Sandell, L J

    1990-02-05

    We have examined posttranslational modifications which are responsible for converting an apparently single precursor (Hering, T. M., and Sandell, L. J. (1988) J. Biol. Chem. 263, 1030-1036) to the two major forms of link protein in bovine articular cartilage. Resistance to endoglycosidases H and F suggests that Asn-linked oligosaccharides of link protein secreted by bovine chondrocytes in culture are of the complex or hybrid type. There is no evidence for O-linked oligosaccharides. There is no apparent precursor-product relationship between link protein (LP)1 and LP2, since after a short pulse with [3H]leucine two forms are present, consistent with the existence of two glycosylation sites. An immunoprecipitate of LP1 from pulse-labeled chondrocytes was observed to show a decrease in electrophoretic mobility and increased microheterogeneity during transit through the Golgi, whereas LP2 did not change. During processing both LP1 and LP2 become endoglycosidase H resistant. LP1, but not LP2, can be biosynthetically labeled with [35S]sulfate. Incorporation of [35S]sulfate is inhibited by tunicamycin, indicating that the sulfate is associated with Asn-linked carbohydrate. Sulfation may be important for normal processing, secretion, or degradation of link protein and with sialylation may confer considerable charge heterogeneity upon LP1. We conclude that there are considerable biochemical differences between glycoproteins LP1 and LP2 which may provide a basis for functional differences.

  8. Osteoblast adhesion on novel machinable calcium phosphate/lanthanum phosphate composites for orthopedic applications.

    PubMed

    Ergun, Celaletdin; Liu, Huinan; Webster, Thomas J

    2009-06-01

    Lanthanum phosphate (LaPO(4), LP) was combined with either hydroxyapatite (HA) or tricalcium phosphate (TCP) to form novel composites for orthopedic applications. In this study, these composites were prepared by wet chemistry synthesis and subsequent powder mixing. These HA/LP and TCP/LP composites were characterized in terms of phase stability and microstructure evolution during sintering using X-ray diffraction (XRD) and scanning electron microscopy (SEM). Their machinability was evaluated using a direct drilling test. For HA/LP composites, LP reacted with HA during sintering and formed a new phase, Ca(8)La(2)(PO(4))(6)O(2), as a reaction by-product. However, TCP/LP composites showed phase stability and the formation of a weak interface between TCP and LP when sintered at 1100 degrees C, which is crucial for achieving desirable machinability. Thus, these novel TCP/LP composites fulfilled the requirements for machinability, a key consideration for manufacturing orthopedic implants. Moreover, the biocompatibility of these novel LP composites was studied, for the first time, in this paper. In vitro cell culture tests demonstrated that LP and its composites supported osteoblast (bone-forming cell) adhesion similarly to natural bioceramics (such as HA and TCP). In conclusion, these novel LP composites should be further studied and developed for more effectively treating bone-related diseases or injuries. © 2008 Wiley Periodicals, Inc.

  9. The Relationship between Lichen Planus and Carotid Intima Media Thickness.

    PubMed

    C, Koseoglu; M, Erdogan; G, Koseoglu; O, Kurmus; Ag, Ertem; Th, Efe; Gi, Kurmus; T, Durmaz; T, Keles; E, Bozkurt

    2016-11-01

    Lichen planus (LP) is a chronic inflammatory disease. Although the association between chronic inflammation and subclinical atherosclerosis has been reported in the literature, the relationship between LP and carotid intima media thickness (CIMT) has not been previously investigated. The aim of this study was to investigate the relationship between LP and CIMT. One hundred eleven LP patients and 105 controls were enrolled in the study. Then, CIMT examination was performed with an ultrasonography device. Cross-sectional associations of LP with CIMT were analyzed using linear regression models adjusted for related confounders. No statistical difference was found between LP and the controls except for the female gender, white blood cell, LDL cholesterol and triglycerides (p = 0.046, p = 0.019, p = 0.011 and p = 0.013, respectively). Significant difference was found between the groups in terms of CIMT (0.90 ± 0.2 mm vs. 0.61 ± 0.3 mm, p = 0.001). CIMT was correlated with longevity of the LP, but we did not find LP to be an independent predictor of increased CIMT in logistic regression analysis (r = 0.449, p < 0.001, β = -0.117, p = 0.092; respectively). The results of our study suggested that LP was associated with increased mean CIMT, and furthermore that CIMT was correlated with longevity of LP. However, LP was not an independent predictor of increased CIMT.

  10. The Relationship between Lichen Planus and Carotid Intima Media Thickness

    PubMed Central

    C, Koseoglu; M, Erdogan; G, Koseoglu; O, Kurmus; AG, Ertem; TH, Efe; GI, Kurmus; T, Durmaz; T, Keles; E, Bozkurt

    2016-01-01

    Background Lichen planus (LP) is a chronic inflammatory disease. Although the association between chronic inflammation and subclinical atherosclerosis has been reported in the literature, the relationship between LP and carotid intima media thickness (CIMT) has not been previously investigated. The aim of this study was to investigate the relationship between LP and CIMT. Methods One hundred eleven LP patients and 105 controls were enrolled in the study. Then, CIMT examination was performed with an ultrasonography device. Cross-sectional associations of LP with CIMT were analyzed using linear regression models adjusted for related confounders. Results No statistical difference was found between LP and the controls except for the female gender, white blood cell, LDL cholesterol and triglycerides (p = 0.046, p = 0.019, p = 0.011 and p = 0.013, respectively). Significant difference was found between the groups in terms of CIMT (0.90 ± 0.2 mm vs. 0.61 ± 0.3 mm, p = 0.001). CIMT was correlated with longevity of the LP, but we did not find LP to be an independent predictor of increased CIMT in logistic regression analysis (r = 0.449, p < 0.001, β = -0.117, p = 0.092; respectively). Conclusions The results of our study suggested that LP was associated with increased mean CIMT, and furthermore that CIMT was correlated with longevity of LP. However, LP was not an independent predictor of increased CIMT. PMID:27899862

  11. Molecular characterization of an Apolipophorin-III gene from the Chinese oak silkworm, Antheraea pernyi (Lepidoptera: Saturniidae).

    PubMed

    Liu, Qiu-Ning; Lin, Kun-Zhang; Yang, Lin-Nan; Dai, Li-Shang; Wang, Lei; Sun, Yu; Qian, Cen; Wei, Guo-Qing; Liu, Dong-Ran; Zhu, Bao-Jian; Liu, Chao-Liang

    2015-03-01

    Apolipophorin-III (ApoLp-III) acts in lipid transport, lipoprotein metabolism, and innate immunity in insects. In this study, an ApoLp-III gene of Antheraea pernyi pupae (Ap-ApoLp-III) was isolated and characterized. The full-length cDNA of Ap-ApoLp-III is 687 bp, including a 5'-untranslated region (UTR) of 40 bp, a 3'-UTR of 86 bp and an open reading frame of 561 bp encoding a polypeptide of 186 amino acids that contains an Apolipophorin-III precursor domain (PF07464). The deduced Ap-ApoLp-III protein sequence has 68, 59, and 23% identity with its orthologs of Manduca sexta, Bombyx mori, and Aedes aegypti, respectively. Phylogenetic analysis showed that Ap-ApoLp-III is close to those of other Bombycoidea. qPCR analysis revealed that Ap-ApoLp-III was expressed during the four developmental stages and in the integument, fat body, and ovaries. After infection with six types of microorganisms, expression levels of the Ap-ApoLp-III gene were upregulated significantly at different time points compared with the control. After RNA interference (RNAi) of Ap-ApoLp-III, its expression was significantly downregulated, as measured by qPCR, following injection of E. coli. We infer that the Ap-ApoLp-III gene acts in the innate immunity of A. pernyi. © 2014 Wiley Periodicals, Inc.

  12. Role of learning potential in cognitive remediation: Construct and predictive validity.

    PubMed

    Davidson, Charlie A; Johannesen, Jason K; Fiszdon, Joanna M

    2016-03-01

    The construct, convergent, discriminant, and predictive validity of Learning Potential (LP) was evaluated in a trial of cognitive remediation for adults with schizophrenia-spectrum disorders. LP utilizes a dynamic assessment approach to prospectively estimate an individual's learning capacity if provided the opportunity for specific related learning. LP was assessed in 75 participants at study entry, of whom 41 completed an eight-week cognitive remediation (CR) intervention, and 22 received treatment-as-usual (TAU). LP was assessed in a "test-train-test" verbal learning paradigm. Incremental predictive validity was assessed as the degree to which LP predicted memory skill acquisition above and beyond prediction by static verbal learning ability. Examination of construct validity confirmed that LP scores reflected use of trained semantic clustering strategy. LP scores correlated with executive functioning and education history, but not other demographics or symptom severity. Following the eight-week active phase, TAU evidenced little substantial change in skill acquisition outcomes, which related to static baseline verbal learning ability but not LP. For the CR group, LP significantly predicted skill acquisition in domains of verbal and visuospatial memory, but not auditory working memory. Furthermore, LP predicted skill acquisition incrementally beyond relevant background characteristics, symptoms, and neurocognitive abilities. Results suggest that LP assessment can significantly improve prediction of specific skill acquisition with cognitive training, particularly for the domain assessed, and thereby may prove useful in individualization of treatment. Published by Elsevier B.V.

  13. LP01 to LP0m mode converters using all-fiber two-stage tapers

    NASA Astrophysics Data System (ADS)

    Mellah, Hakim; Zhang, Xiupu; Shen, Dongya

    2015-11-01

    A mode converter between the LP01 and LP0m modes is proposed using two stages of tapers. The first stage is formed by adiabatically tapering a circular fiber to excite the desired LP0m mode. The second stage is formed by inserting an inner core (tapered from both sides) with a refractive index smaller than that of the original core. This second stage is used to obtain low insertion loss and a high extinction ratio for the desired LP0m mode. Three converters between LP01 and LP0m, m=2, 3, and 4, are designed for the C-band, and simulation results show insertion losses of less than 0.24, 0.54 and 0.7 dB and extinction ratios higher than 15, 16, and 17.5 dB over the entire band for the three converters, respectively.

  14. 78 FR 21491 - DeltaPoint Capital IV, L.P., DeltaPoint Capital IV (New York), L.P.; Notice Seeking Exemption...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-04-10

    ... SMALL BUSINESS ADMINISTRATION [License No. 02/02-0662, 02/02-0661] DeltaPoint Capital IV, L.P., DeltaPoint Capital IV (New York), L.P.; Notice Seeking Exemption Under Section 312 of the Small Business Investment Act, Conflicts of Interest Notice is hereby given that DeltaPoint Capital IV, L.P. and DeltaPoint...

  15. 76 FR 38178 - Change in Bank Control Notices; Acquisitions of Shares of a Bank or Bank Holding Company

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-06-29

    ..., New York, New York 10045-0001: 1. Thomas H. Lee (Alternative) Fund VI, L.P., Thomas H. Lee (Alternative) Parallel Fund VI, L.P., Thomas H. Lee (Alternative) Parallel (DT) Fund VI, L.P., THL FBC Equity Investors, L.P., THL Advisors (Alternative) VI, L.P., Thomas H. Lee (Alternative) VI, Ltd., THL Managers VI...

  16. LpMab-23-recognizing cancer-type podoplanin is a novel predictor for a poor prognosis of early stage tongue cancer.

    PubMed

    Miyazaki, Akihiro; Nakai, Hiromi; Sonoda, Tomoko; Hirohashi, Yoshihiko; Kaneko, Mika K; Kato, Yukinari; Sawa, Yoshihiko; Hiratsuka, Hiroyoshi

    2018-04-20

    We report that the reactivity of a novel monoclonal antibody, LpMab-23, against human cancer-type podoplanin (PDPN) is a predictor of poor prognosis in tongue cancer. The association between LpMab-23-recognized cancer-type PDPN expression and clinical/pathological features was analyzed in 60 patients with stage I and II tongue cancer treated with transoral resection of the primary tumor. With respect to the mode of invasion, LpMab-23-dull/negative cases were significantly more frequent among cases with low-grade malignancy and without late cervical lymph node metastasis than among cases with high-grade malignancy and metastasis. Among the high-grade malignant cases, LpMab-23-positive cases significantly outnumbered LpMab-23-dull/negative cases. The Kaplan-Meier curves of the five-year metastasis-free survival rate (MFS) were significantly lower in the LpMab-23-positive patients than in the LpMab-23-dull/negative patients. The LpMab-23-dull/negative cases showed the highest MFS across all of the clinical/pathological features, and in particular the MFS of the LpMab-23-positive cases decreased to less than 60% in the first year. In Cox proportional hazards regression models, the comparison of LpMab-23-dull/negative with LpMab-23-positive cases showed the highest statistically significant hazard ratio among all of the clinical/pathological features. LpMab-23 positivity may therefore be a useful predictor of poor prognosis for early stage tongue cancer.

  17. Increased systolic blood pressure in rat offspring following a maternal low-protein diet is normalized by maternal dietary choline supplementation.

    PubMed

    Bai, S Y; Briggs, D I; Vickers, M H

    2012-10-01

    An adverse prenatal environment may induce long-term metabolic consequences, in particular hypertension and cardiovascular disease. A maternal low-protein (LP) diet is well known to result in increased blood pressure (BP) in offspring. Choline has been shown to have direct BP-reducing effects in humans and animals. It has been suggested that endogenous choline synthesis via phosphatidylcholine is constrained during maternal LP exposure. The present study investigates the effect of choline supplementation to mothers fed an LP diet during pregnancy on systolic BP (SBP) in offspring as measured by tail-cuff plethysmography. Wistar rats were assigned to one of three diets to be fed ad libitum throughout pregnancy: (1) control diet (CONT, 20% protein); (2) an LP diet (9% protein); and (3) LP supplemented with choline (LP + C). Dams were fed the CONT diet throughout lactation and offspring were fed the CONT diet from weaning for the remainder of the trial. At postnatal day 150, SBP and retroperitoneal fat mass were significantly increased in LP offspring compared with CONT animals and were normalized in LP + C offspring. The SBP-reducing effect of LP + C was similar in males and females. Plasma choline and phosphatidylcholine concentrations were not different across treatment groups, but maternal choline supplementation resulted in a significant reduction in homocysteine concentrations in LP + C offspring compared with LP and CONT animals. The present trial shows for the first time that maternal supplementation with dietary choline during periods of LP exposure can normalize the increased SBP and fat mass observed in offspring in later life.

  18. ChLpMab-23: Cancer-Specific Human-Mouse Chimeric Anti-Podoplanin Antibody Exhibits Antitumor Activity via Antibody-Dependent Cellular Cytotoxicity.

    PubMed

    Kaneko, Mika K; Nakamura, Takuro; Kunita, Akiko; Fukayama, Masashi; Abe, Shinji; Nishioka, Yasuhiko; Yamada, Shinji; Yanaka, Miyuki; Saidoh, Noriko; Yoshida, Kanae; Fujii, Yuki; Ogasawara, Satoshi; Kato, Yukinari

    2017-06-01

    Podoplanin is expressed in many cancers, including oral cancers and brain tumors. The interaction between podoplanin and its receptor C-type lectin-like receptor 2 (CLEC-2) has been reported to be involved in cancer metastasis and tumor malignancy. We previously established many monoclonal antibodies (mAbs) against human podoplanin using the cancer-specific mAb (CasMab) technology. LpMab-23 (IgG1, kappa), one of the mouse anti-podoplanin mAbs, was shown to be a CasMab. However, we had not shown the usefulness of LpMab-23 for antibody therapy against podoplanin-expressing cancers. In this study, we first determined the minimum epitope of LpMab-23 and revealed that the Gly54-Leu64 peptide, especially Gly54, Thr55, Ser56, Glu57, Asp58, Arg59, Tyr60, and Leu64 of podoplanin, is a critical epitope of LpMab-23. We further produced a human-mouse chimeric LpMab-23 (chLpMab-23) and investigated whether chLpMab-23 exerts antibody-dependent cellular cytotoxicity (ADCC) and antitumor activity. In flow cytometry, chLpMab-23 showed high sensitivity against a podoplanin-expressing glioblastoma cell line, LN319, and an oral cancer cell line, HSC-2. chLpMab-23 also showed ADCC activity against podoplanin-expressing CHO cells (CHO/podoplanin). In xenograft models with HSC-2 and CHO/podoplanin, chLpMab-23 exerted antitumor activity with human natural killer cells, indicating that chLpMab-23 could be useful for antibody therapy against podoplanin-expressing cancers.

  19. Hawking radiation of five-dimensional charged black holes with scalar fields

    NASA Astrophysics Data System (ADS)

    Miao, Yan-Gang; Xu, Zhen-Ming

    2017-09-01

    We investigate the Hawking radiation cascade from the five-dimensional charged black hole with a scalar field coupled to higher-order Euler densities in a conformally invariant manner. We give the semi-analytic calculation of greybody factors for the Hawking radiation. Our analysis shows that the Hawking radiation cascade from this five-dimensional black hole is extremely sparse. The charge enhances the sparsity of the Hawking radiation, while the conformally coupled scalar field reduces this sparsity.

  20. A Review of Sparsity-Based Methods for Analysing Radar Returns from Helicopter Rotor Blades

    DTIC Science & Technology

    2016-09-01

    A Review of Sparsity-Based Methods for Analysing Radar Returns from Helicopter Rotor Blades. Nguyen, Ngoc Hung; Tran, Hai-Tan; Kutluyıl... TR–3292. ABSTRACT: Radar imaging of rotating blade-like objects, such as helicopter rotors, using narrowband radar has lately been of significant... Executive Summary: Signal analysis and radar imaging of fast-rotating objects such as

  1. Sparsity enabled cluster reduced-order models for control

    NASA Astrophysics Data System (ADS)

    Kaiser, Eurika; Morzyński, Marek; Daviller, Guillaume; Kutz, J. Nathan; Brunton, Bingni W.; Brunton, Steven L.

    2018-01-01

    Characterizing and controlling nonlinear, multi-scale phenomena are central goals in science and engineering. Cluster-based reduced-order modeling (CROM) was introduced to exploit the underlying low-dimensional dynamics of complex systems. CROM builds a data-driven discretization of the Perron-Frobenius operator, resulting in a probabilistic model for ensembles of trajectories. A key advantage of CROM is that it embeds nonlinear dynamics in a linear framework, which enables the application of standard linear techniques to the nonlinear system. CROM is typically computed on high-dimensional data; however, access to and computations on this full-state data limit the online implementation of CROM for prediction and control. Here, we address this key challenge by identifying a small subset of critical measurements to learn an efficient CROM, referred to as sparsity-enabled CROM. In particular, we leverage compressive measurements to faithfully embed the cluster geometry and preserve the probabilistic dynamics. Further, we show how to identify fewer optimized sensor locations tailored to a specific problem that outperform random measurements. Both of these sparsity-enabled sensing strategies significantly reduce the burden of data acquisition and processing for low-latency in-time estimation and control. We illustrate this unsupervised learning approach on three different high-dimensional nonlinear dynamical systems from fluids with increasing complexity, with one application in flow control. Sparsity-enabled CROM is a critical facilitator for real-time implementation on high-dimensional systems where full-state information may be inaccessible.
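    The pipeline sketched in this abstract (compress the snapshots, cluster them, and estimate cluster-to-cluster transition probabilities as a data-driven discretization of the Perron-Frobenius operator) can be written down compactly. The Python fragment below is only a minimal illustration under assumptions, not the authors' implementation: the snapshot matrix X, the number of clusters K, and the number of random compressive measurements p are placeholder inputs, and a plain Gaussian random projection stands in for the optimized sensor placement discussed above.

      import numpy as np

      def sparsity_enabled_crom(X, K=5, p=10, seed=0, iters=100):
          """Illustrative CROM-style model built from compressed data.
          X: (n_features, n_snapshots) time-ordered full-state snapshots."""
          rng = np.random.default_rng(seed)
          n, m = X.shape
          C = rng.standard_normal((p, n)) / np.sqrt(p)   # random compressive measurement matrix
          Y = C @ X                                      # compressed snapshots, shape (p, m)
          # plain Lloyd-style k-means on the compressed snapshots
          centers = Y[:, rng.choice(m, K, replace=False)].copy()
          for _ in range(iters):
              d2 = ((Y[:, :, None] - centers[:, None, :]) ** 2).sum(axis=0)   # (m, K) distances
              labels = d2.argmin(axis=1)
              new_centers = np.column_stack(
                  [Y[:, labels == k].mean(axis=1) if np.any(labels == k) else centers[:, k]
                   for k in range(K)])
              if np.allclose(new_centers, centers):
                  break
              centers = new_centers
          # cluster transition matrix: a coarse, probabilistic surrogate for the dynamics
          P = np.zeros((K, K))
          for t in range(m - 1):
              P[labels[t], labels[t + 1]] += 1.0
          P /= np.maximum(P.sum(axis=1, keepdims=True), 1.0)
          return labels, P

    With such a transition matrix, forecasting reduces to propagating a probability vector over clusters, which is the linear-framework advantage the abstract emphasizes.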

  2. 77 FR 73637 - Alliance Pipeline L.P.; Notice of Application

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-12-11

    ... Pipeline L.P.; Notice of Application Take notice that on November 26, 2012, Alliance Pipeline L.P..., Manager, Regulatory Affairs, Alliance Pipeline Ltd. on behalf of Alliance Pipeline L.P., 800, 605-5 Ave...] BILLING CODE 6717-01-P ...

  3. 78 FR 45592 - DeltaPoint Capital IV, LP;

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-07-29

    ... Regulations (13 CFR 107.730). DeltaPoint Capital IV, L.P. provided financing to BioMaxx, Inc., 1 Fishers Road... York), L.P., an Associate of DeltaPoint Capital IV, L.P., owns more than ten percent of BioMaxx, Inc...

  4. The Lp_3561 and Lp_3562 Enzymes Support a Functional Divergence Process in the Lipase/Esterase Toolkit from Lactobacillus plantarum

    PubMed Central

    Esteban-Torres, María; Reverón, Inés; Santamaría, Laura; Mancheño, José M.; de las Rivas, Blanca; Muñoz, Rosario

    2016-01-01

    The Lactobacillus plantarum species is a good source of esterases, since both lipolytic and esterase activities have been described for strains of this species. No fundamental biochemical difference exists between esterases and lipases, since both share a common catalytic mechanism. L. plantarum WCFS1 possesses a protein, Lp_3561, which is 44% identical to a previously described lipase, Lp_3562. In contrast to Lp_3562, Lp_3561 was unable to degrade esters possessing a chain length higher than C4 or the triglyceride tributyrin. As in other L. plantarum esterases, the electrostatic potential surface around the active site in Lp_3561 is predicted to be basic, whereas it is essentially neutral in the Lp_3562 lipase. The fact that the genes encoding both proteins are located contiguously in the L. plantarum WCFS1 genome suggests that they originated by tandem duplication and are therefore paralogs that have acquired new functions during evolution. The presence of the contiguous lp_3561 and lp_3562 genes was studied among L. plantarum strains. They are located in an 8,903 bp DNA fragment that encodes proteins involved in the catabolism of sialic acid and are predicted to increase bacterial adaptability under certain growth conditions. PMID:27486450

  5. Three Drought-Responsive Members of the Nonspecific Lipid-Transfer Protein Gene Family in Lycopersicon pennellii Show Different Developmental Patterns of Expression1

    PubMed Central

    Treviño, Marcela B.; O'Connell, Mary A.

    1998-01-01

    Genomic clones of two nonspecific lipid-transfer protein genes from a drought-tolerant wild species of tomato (Lycopersicon pennellii Corr.) were isolated using as a probe a drought- and abscisic acid (ABA)-induced cDNA clone (pLE16) from cultivated tomato (Lycopersicon esculentum Mill.). Both genes (LpLtp1 and LpLtp2) were sequenced and their corresponding mRNAs were characterized; they are both interrupted by a single intron at identical positions and predict basic proteins of 114 amino acid residues. Genomic Southern data indicated that these genes are members of a small gene family in Lycopersicon spp. The 3′-untranslated regions from LpLtp1 and LpLtp2, as well as a polymerase chain reaction-amplified 3′-untranslated region from pLE16 (cross-hybridizing to a third gene in L. pennellii, namely LpLtp3), were used as gene-specific probes to describe expression in L. pennellii through northern-blot analyses. All LpLtp genes were exclusively expressed in the aerial tissues of the plant and all were drought and ABA inducible. Each gene had a different pattern of expression in fruit, and LpLtp1 and LpLtp2, unlike LpLtp3, were both primarily developmentally regulated in leaf tissue. Putative ABA-responsive elements were found in the proximal promoter regions of LpLtp1 and LpLtp2. PMID:9536064

  6. Ethnicity and lipoprotein(a) polymorphism in Native Mexican populations.

    PubMed

    Cardoso-Saldaña, G; De La Peña-Díaz, A; Zamora-González, J; Gomez-Ortega, R; Posadas-Romero, C; Izaguirre-Avila, R; Malvido-Miranda, E; Morales-Anduaga, M E; Anglés-Cano, E

    2006-01-01

    Lp(a) is a lipoparticle of unknown function mainly present in primates and humans. It consists of a low-density lipoprotein and apo(a), a polymorphic glycoprotein. Apo(a) shares sequence homology and fibrin binding with plasminogen, inhibiting its fibrinolytic properties. Lp(a) is considered a link between atherosclerosis and thrombosis. Marked inter-ethnic differences in Lp(a) concentration related to the genetic polymorphism of apo(a) have been reported in several populations. The study examined the structural and functional features of Lp(a) in three Native Mexican populations (Mayos, Mazahuas and Mayas) and in Mestizo subjects. We determined the plasma concentration of Lp(a) by immunonephelometry, apo(a) isoforms by Western blot, Lp(a) fibrin binding by immuno-enzymatic assay and short tandem repeat (STR) polymorphic markers by capillary electrophoresis. Mestizos presented the least skewed distribution and the highest median Lp(a) concentration (13.25 mg dL(-1)) relative to Mazahuas (8.2 mg dL(-1)), Mayas (8.25 mg dL(-1)) and Mayos (6.5 mg dL(-1)). Phenotype distribution was different in Mayas and Mazahuas as compared with the Mestizo group. The highest Lp(a) fibrin-binding capacity was found in the Maya population. There was an inverse relationship between the size of apo(a) polymorphs and both Lp(a) levels and Lp(a) fibrin binding. There is evidence of significant differences in Lp(a) plasma concentration and phenotype distribution between the Native Mexican groups and the Mestizo group.

  7. Modifying glycyrrhetinic acid liposomes with liver-targeting ligand of galactosylated derivative: preparation and evaluations

    PubMed Central

    Cheng, Yi; Gao, Youheng; Zheng, Pinjing; Li, Chuangnan; Tong, Yidan; Li, Zhao; Luo, Wenhui; Chen, Zhao

    2017-01-01

    In this study, novel glycyrrhetinic acid (GA) liposomes modified with a liver-targeting galactosylated derivative ligand (Gal) were prepared using a film-dispersion method. To characterize the samples, particle size, zeta potential, drug loading, and encapsulation efficiency were measured. Moreover, plasma and tissues were pre-treated by liquid-liquid extraction and analyzed by high-performance liquid chromatography-tandem mass spectrometry (LC-MS/MS). The results showed that the mean residence times (MRTs) and the area under the curve (AUC) of GA liposomes with Gal (Gal-GA-LP) and GA liposomes (GA-LP) were higher than those of the GA solution (GA-S) in plasma. The tissue (liver) distribution of Gal-GA-LP was significantly different from that of GA-LP. The relative intake rate (Re) of Gal-GA-LP and GA-LP in the liver was 4.752 and 2.196, respectively. The peak concentration ratio (Ce) of Gal-GA-LP and GA-LP in the liver was 2.796 and 1.083, respectively. The targeting efficiency (Te) of Gal-GA-LP and GA-LP in the liver was 48.193% and 34.718%, respectively. Taken together, the results indicate that Gal-GA-LP is an ideal complex for liver targeting and has great potential application in the clinical treatment of hepatic diseases. Drug loading and releasing experiments also indicated that most liposomes are spherical structures and have good dispersity under physiologic conditions, which could prolong GA release efficiency in vitro. PMID:29254224

  8. Novel enzymatic method for assaying Lp-PLA2 in serum.

    PubMed

    Yamaura, Saki; Sakasegawa, Shin-Ichi; Koguma, Emisa; Ueda, Shigeru; Kayamori, Yuzo; Sugimori, Daisuke; Karasawa, Ken

    2018-06-01

    Measurement of lipoprotein-associated phospholipase A2 (Lp-PLA2) can be used as an adjunct to traditional cardiovascular risk factors for identifying individuals at higher risk of cardiovascular events. This can be performed by quantification of the protein concentration using an ELISA platform or by measuring Lp-PLA2 activity using a platelet-activating factor (PAF) analog as substrate. Here, an enzymatic Lp-PLA2 activity assay method using 1-O-Hexadecyl-2-acetyl-rac-glycero-3-phosphocholine (rac C16 PAF) was developed. The newly revealed substrate specificity of lysoplasmalogen-specific phospholipase D (lysophospholipase D (LysoPLD)) was exploited. Lp-PLA2 hydrolyzes 1-O-Hexadecyl-2-acetyl-sn-glycero-3-phosphocholine (C16 PAF) to 1-O-Hexadecyl-2-hydroxy-sn-glycero-3-phosphocholine (LysoPAF). LysoPLD acted on LysoPAF, and the hydrolytically released choline was detected by choline oxidase. Regression analysis of Lp-PLA2 activity measured by the enzymatic Lp-PLA2 activity assay vs. two chemical Lp-PLA2 activity assays, i.e. LpPLA2 FS and the PLAC® test, and ELISA, gave the following correlation coefficients: 0.990, 0.893 and 0.785, respectively (n = 30). Advantages of this enzymatic Lp-PLA2 activity assay compared with chemical Lp-PLA2 methods include the following: (i) only two reagents are required, enabling a simple two-point linear calibration with one calibrator; and (ii) there is no need for inhibitors of esterase-like activity in serum. Copyright © 2018 Elsevier B.V. All rights reserved.

  9. Ethnicity and lipoprotein(a) polymorphism in Native Mexican populations

    PubMed Central

    Cardoso-Saldaña, Guillermo; De La Peña-Díaz, Aurora; Zamora-González, José; Gomez-Ortega, Rocio; Posadas-Romero, Carlos; Izaguirre-Avila, Raul; Malvido-Miranda, Elsa; Morales-Anduaga, Maria Elena; Angles-Cano, Eduardo

    2006-01-01

    Background Lp(a) is a lipoparticle of unknown function mainly present in primates and humans. It consists of a low-density lipoprotein and apo(a), a polymorphic glycoprotein. Apo(a) shares sequence homology and fibrin-binding with plasminogen, inhibiting its fibrinolytic properties. Lp(a) is considered a link between atherosclerosis and thrombosis. Marked inter-ethnic differences in Lp(a) concentration related to the genetic polymorphism of apo(a) have been reported in several populations. Aim To study the structural and functional features of Lp(a) in three Native Mexican populations (Mayos, Mazahuas and Mayas) and in Mestizo subjects. Methods We determined the plasma concentration of Lp(a) by immunonephelometry, apo(a) isoforms by Western blot, Lp(a) fibrin-binding by immuno-enzymatic assay and STR polymorphic markers by capillary electrophoresis. Results Mestizos presented the least skewed distribution and the highest median Lp(a) concentration (13.25 mg/dL) relative to Mazahuas (8.2 mg/dL), Mayas (8.25 mg/dL) and Mayos (6.5 mg/dL). Phenotype distribution was different in Mayas and Mazahuas as compared to the Mestizo group. The highest Lp(a) fibrin-binding capacity was found in the Maya population. There was an inverse relationship between the size of apo(a) polymorphs and both Lp(a) levels and Lp(a) fibrin binding. Conclusion There is evidence of significant differences in Lp(a) plasma concentration and phenotype distribution between the Native Mexican groups and the Mestizo group. PMID:16684693

  10. Anthocyanins and flavonols are responsible for purple color of Lablab purpureus (L.) sweet pods.

    PubMed

    Cui, Baolu; Hu, Zongli; Zhang, Yanjie; Hu, Jingtao; Yin, Wencheng; Feng, Ye; Xie, Qiaoli; Chen, Guoping

    2016-06-01

    Lablab pods, as a dietary vegetable, have high nutritional value similar to most edible legumes. Moreover, our studies confirmed that purple lablab pods contain the natural pigments anthocyanins and flavonols. Compared to green pods, five kinds of anthocyanins (malvidin, delphinidin and petunidin derivatives) were found in purple pods by HPLC-ESI-MS/MS, and the major components were delphinidin derivatives. In addition, nine kinds of polyphenol derivatives (quercetin, myricetin, kaempferol and apigenin derivatives) were detected by UPLC-ESI-MS/MS, and the major components were quercetin and myricetin derivatives. To uncover the underlying molecular mechanism, the expression patterns of anthocyanin and flavonol biosynthetic and regulatory genes were investigated. Experimental results showed that LpPAL, LpF3H, LpF3'H, LpDFR, LpANS and LpPAP1 expression was significantly induced in purple pods compared to green ones. Meanwhile, transcripts of LpFLS were more abundant in purple pods than in green or yellow ones, suggesting that co-pigments of anthocyanins and flavonols accumulate in purple pods. Under continuous dark conditions, no anthocyanin accumulation was detected in purple pods and transcripts of LpCHS, LpANS, LpFLS and LpPAP1 were remarkably repressed, indicating that anthocyanin and flavonol biosynthesis in purple pods is regulated in a light-dependent manner. These results indicate that co-pigments of anthocyanins and flavonols contribute to the purple pigmentation of the pods. Copyright © 2016 Elsevier Masson SAS. All rights reserved.

  11. Prevention and Mitigation of Acute Radiation Syndrome in Mice by Synthetic Lipopeptide Agonists of Toll-Like Receptor 2 (TLR2)

    PubMed Central

    Shakhov, Alexander N.; Singh, Vijay K.; Bone, Frederick; Cheney, Alec; Kononov, Yevgeniy; Krasnov, Peter; Bratanova-Toshkova, Troitza K.; Shakhova, Vera V.; Young, Jason; Weil, Michael M.; Panoskaltsis-Mortari, Angela; Orschell, Christie M.; Baker, Patricia S.; Gudkov, Andrei; Feinstein, Elena

    2012-01-01

    Bacterial lipoproteins (BLP) induce innate immune responses in mammals by activating heterodimeric receptor complexes containing Toll-like receptor 2 (TLR2). TLR2 signaling results in nuclear factor-kappaB (NF-κB)-dependent upregulation of anti-apoptotic factors, anti-oxidants and cytokines, all of which have been implicated in radiation protection. Here we demonstrate that synthetic lipopeptides (sLP) that mimic the structure of naturally occurring mycoplasmal BLP significantly increase mouse survival following lethal total body irradiation (TBI) when administered between 48 hours before and 24 hours after irradiation. The TBI dose ranges against which sLP are effective indicate that sLP primarily impact the hematopoietic (HP) component of acute radiation syndrome. Indeed, sLP treatment accelerated recovery of bone marrow (BM) and spleen cellularity and ameliorated thrombocytopenia of irradiated mice. sLP did not improve survival of irradiated TLR2-knockout mice, confirming that sLP-mediated radioprotection requires TLR2. However, sLP was radioprotective in chimeric mice containing TLR2-null BM on a wild type background, indicating that radioprotection of the HP system by sLP is, at least in part, indirect and initiated in non-BM cells. sLP injection resulted in strong transient induction of multiple cytokines with known roles in hematopoiesis, including granulocyte colony-stimulating factor (G-CSF), keratinocyte chemoattractant (KC) and interleukin-6 (IL-6). sLP-induced cytokines, particularly G-CSF, are likely mediators of the radioprotective/mitigative activity of sLP. This study illustrates the strong potential of LP-based TLR2 agonists for anti-radiation prophylaxis and therapy in defense and medical scenarios. PMID:22479357

  12. Lipoprotein(a) levels predict adverse vascular events after acute myocardial infarction.

    PubMed

    Mitsuda, Takayuki; Uemura, Yusuke; Ishii, Hideki; Takemoto, Kenji; Uchikawa, Tomohiro; Koyasu, Masayoshi; Ishikawa, Shinji; Miura, Ayako; Imai, Ryo; Iwamiya, Satoshi; Ozaki, Yuta; Kato, Tomohiro; Shibata, Rei; Watarai, Masato; Murohara, Toyoaki

    2016-12-01

    Lipoprotein(a) [Lp(a)], which is genetically determined, has been reported as an independent risk factor for atherosclerotic vascular disease. However, the prognostic value of Lp(a) for secondary vascular events in patients with coronary artery disease has not been fully elucidated. This 3-year observational study included a total of 176 patients with ST-elevation myocardial infarction (STEMI), whose Lp(a) levels were measured within 24 h after primary percutaneous coronary intervention. We divided enrolled patients into two groups according to Lp(a) level and investigated the association between Lp(a) and the incidence of major adverse cardiac and cerebrovascular events (MACCE). A Kaplan-Meier analysis demonstrated that patients with higher Lp(a) levels had a higher incidence of MACCE than those with lower Lp(a) levels (log-rank P = 0.034). A multivariate Cox regression analysis revealed that Lp(a) levels were independently correlated with the occurrence of MACCE after adjusting for other classical risk factors of atherosclerotic vascular diseases (hazard ratio 1.030, 95 % confidence interval: 1.011-1.048, P = 0.002). In receiver-operating characteristic curve analysis, the cutoff value to maximize the predictive power of Lp(a) was 19.0 mg/dl (area under the curve = 0.674, sensitivity 69.2 %, specificity 62.0 %). Evaluation of Lp(a) in addition to the established coronary risk factors improved their predictive value for the occurrence of MACCE. In conclusion, Lp(a) levels at admission independently predict secondary vascular events in patients with STEMI. Lp(a) might provide useful information for the development of secondary prevention strategies in patients with myocardial infarction.

  13. Single-Nucleotide Polymorphisms in LPA Explain Most of the Ancestry-Specific Variation in Lp(a) Levels in African Americans

    PubMed Central

    Lawson, Kim; Kao, W. H. Linda; Reich, David; Tandon, Arti; Akylbekova, Ermeg; Patterson, Nick; Mosley, Thomas H.; Boerwinkle, Eric; Taylor, Herman A.

    2011-01-01

    Lipoprotein(a) (Lp(a)) is an important causal cardiovascular risk factor, with serum Lp(a) levels predicting atherosclerotic heart disease and genetic determinants of Lp(a) levels showing association with myocardial infarction. Lp(a) levels vary widely between populations, with African-derived populations having nearly 2-fold higher Lp(a) levels than European Americans. We investigated the genetic basis of this difference in 4464 African Americans from the Jackson Heart Study (JHS) using a panel of up to 1447 ancestry informative markers, allowing us to accurately estimate the African ancestry proportion of each individual at each position in the genome. In an unbiased genome-wide admixture scan for frequency-differentiated genetic determinants of Lp(a) level, we found a convincing peak (LOD = 13.6) at 6q25.3, which spans the LPA locus. Dense fine-mapping of the LPA locus identified a number of strongly associated, common biallelic SNPs, a subset of which can account for up to 7% of the variation in Lp(a) level, as well as >70% of the African-European population differences in Lp(a) level. We replicated the association of the most strongly associated SNP, rs9457951 (p = 6×10−22, 27% change in Lp(a) per allele, ∼5% of Lp(a) variance explained in JHS), in 1,726 African Americans from the Dallas Heart Study and found an even stronger association after adjustment for the kringle(IV) repeat copy number. Despite the strong association with Lp(a) levels, we find no association of any LPA SNP with incident coronary heart disease in 3,225 African Americans from the Atherosclerosis Risk in Communities Study. PMID:21283670

  14. Prevention and mitigation of acute radiation syndrome in mice by synthetic lipopeptide agonists of Toll-like receptor 2 (TLR2).

    PubMed

    Shakhov, Alexander N; Singh, Vijay K; Bone, Frederick; Cheney, Alec; Kononov, Yevgeniy; Krasnov, Peter; Bratanova-Toshkova, Troitza K; Shakhova, Vera V; Young, Jason; Weil, Michael M; Panoskaltsis-Mortari, Angela; Orschell, Christie M; Baker, Patricia S; Gudkov, Andrei; Feinstein, Elena

    2012-01-01

    Bacterial lipoproteins (BLP) induce innate immune responses in mammals by activating heterodimeric receptor complexes containing Toll-like receptor 2 (TLR2). TLR2 signaling results in nuclear factor-kappaB (NF-κB)-dependent upregulation of anti-apoptotic factors, anti-oxidants and cytokines, all of which have been implicated in radiation protection. Here we demonstrate that synthetic lipopeptides (sLP) that mimic the structure of naturally occurring mycoplasmal BLP significantly increase mouse survival following lethal total body irradiation (TBI) when administered between 48 hours before and 24 hours after irradiation. The TBI dose ranges against which sLP are effective indicate that sLP primarily impact the hematopoietic (HP) component of acute radiation syndrome. Indeed, sLP treatment accelerated recovery of bone marrow (BM) and spleen cellularity and ameliorated thrombocytopenia of irradiated mice. sLP did not improve survival of irradiated TLR2-knockout mice, confirming that sLP-mediated radioprotection requires TLR2. However, sLP was radioprotective in chimeric mice containing TLR2-null BM on a wild type background, indicating that radioprotection of the HP system by sLP is, at least in part, indirect and initiated in non-BM cells. sLP injection resulted in strong transient induction of multiple cytokines with known roles in hematopoiesis, including granulocyte colony-stimulating factor (G-CSF), keratinocyte chemoattractant (KC) and interleukin-6 (IL-6). sLP-induced cytokines, particularly G-CSF, are likely mediators of the radioprotective/mitigative activity of sLP. This study illustrates the strong potential of LP-based TLR2 agonists for anti-radiation prophylaxis and therapy in defense and medical scenarios.

  15. Isolating and evaluating lactic acid bacteria strains for effectiveness of Leymus chinensis silage fermentation.

    PubMed

    Zhang, Q; Li, X J; Zhao, M M; Yu, Z

    2014-10-01

    Five LAB strains were evaluated using the acid production ability test, morphological observation, Gram staining, physiological, biochemical and acid tolerance tests. All five strains (LP1, LP2, LP3, LC1 and LC2) grew at pH 4·0, and LP1 grew at 15°C. Strains LP1, LP2 and LP3 were identified as Lactobacillus plantarum, whereas LC1 and LC2 were classified as Lactobacillus casei by sequencing 16S rDNA. The five isolated strains and two commercial inoculants (PS and CL) were added to native grass and Leymus chinensis (Trin.) Tzvel. for ensiling. All five isolated strains decreased the pH and ammonia nitrogen content, increased the lactic acid content and LP1, LP2 and LP3 increased the acetic content and lactic/acetic acid ratio of L. chinensis silage significantly. The five isolated strains and two commercial inoculants decreased the butyric acid content of the native grass silage. LP2 treatment had lower butyric acid content and ammonia nitrogen content than the other treatments. The five isolated strains improved the quality of L. chinensis silage. The five isolated strains and the two commercial inoculants were not effective in improving the fermentation quality of the native grass silage, but LP2 performed better comparatively. Significance and impact of the study: Leymus chinensis is an important grass in China and Russia, being the primary grass of the short grassland 'steppe' regions of central Asia. However, it has been difficult to make high-quality silage of this species because of low concentration of water-soluble carbohydrates (WSC). Isolating and evaluating lactic acid bacteria strains will be helpful for improving the silage quality of this extensively grown species. © 2014 The Society for Applied Microbiology.

  16. Mipomersen, an antisense oligonucleotide to apolipoprotein B-100, reduces lipoprotein(a) in various populations with hypercholesterolemia: results of 4 phase III trials.

    PubMed

    Santos, Raul D; Raal, Frederick J; Catapano, Alberico L; Witztum, Joseph L; Steinhagen-Thiessen, Elisabeth; Tsimikas, Sotirios

    2015-03-01

    Lp(a) is an independent, causal, genetic risk factor for cardiovascular disease and aortic stenosis. Current pharmacological lipid-lowering therapies do not optimally lower Lp(a), particularly in patients with familial hypercholesterolemia (FH). In 4 phase III trials, 382 patients on maximally tolerated lipid-lowering therapy were randomized 2:1 to weekly subcutaneous mipomersen 200 mg (n=256) or placebo (n=126) for 26 weeks. Populations included homozygous FH, heterozygous FH with concomitant coronary artery disease (CAD), severe hypercholesterolemia, and hypercholesterolemia at high risk for CAD. Lp(a) was measured eight times between baseline and week 28, inclusive. Of the 382 patients, 57% and 44% had baseline Lp(a) levels >30 and >50 mg/dL, respectively. In the pooled analysis, the percent decrease in Lp(a) at 28 weeks (median [interquartile range]) was significantly greater in the mipomersen group compared with placebo (-26.4 [-42.8, -5.4] versus -0.0 [-10.7, 15.3]; P<0.001). In the mipomersen group, among patients with Lp(a) levels >30 or >50 mg/dL, attainment of Lp(a) values ≤30 or ≤50 mg/dL was most frequent in homozygous FH and severe hypercholesterolemia patients. In the combined groups, modest correlations were present between percent change in apolipoprotein B-100 and Lp(a) (r=0.43; P<0.001) and low-density lipoprotein cholesterol and Lp(a) (r=0.36; P<0.001) plasma levels. Mipomersen consistently and effectively reduced Lp(a) levels in patients with a variety of lipid abnormalities and cardiovascular risk. Modest correlations were present between apolipoprotein B-100 and Lp(a) lowering but the mechanistic relevance mediating Lp(a) reduction is currently unknown. © 2015 American Heart Association, Inc.

  17. PLC-based mode multi/demultiplexer for MDM transmission

    NASA Astrophysics Data System (ADS)

    Hanzawa, N.; Saitoh, K.; Sakamoto, T.; Matsui, T.; Tsujikawa, K.; Koshiba, M.; Yamamoto, F.

    2013-12-01

    We propose a PLC-based multi/demultiplexer (MUX/DEMUX) with a mode conversion function for mode division multiplexing (MDM) transmission applications. The PLC-based mode MUX/DEMUX can realize a low insertion loss and a wide working wavelength bandwidth. We designed and demonstrated a two-mode (LP01 and LP11 modes) and a three-mode (LP01, LP11, and LP21 modes) MUX/DEMUX for use in the C-band.

  18. A singular value decomposition linear programming (SVDLP) optimization technique for circular cone based robotic radiotherapy.

    PubMed

    Liang, Bin; Li, Yongbao; Wei, Ran; Guo, Bin; Xu, Xuang; Liu, Bo; Li, Jiafeng; Wu, Qiuwen; Zhou, Fugen

    2018-01-05

    With robot-controlled linac positioning, robotic radiotherapy systems such as CyberKnife significantly increase freedom of radiation beam placement, but also impose more challenges on treatment plan optimization. The resampling mechanism in the vendor-supplied treatment planning system (MultiPlan) cannot fully explore the increased beam direction search space. Besides, a sparse treatment plan (using fewer beams) is desired to improve treatment efficiency. This study proposes a singular value decomposition linear programming (SVDLP) optimization technique for circular collimator-based robotic radiotherapy. The SVDLP approach initializes the input beams by simulating the process of covering the entire target volume with equivalent beam tapers. The requirements on dosimetry distribution are modeled as hard and soft constraints, and the sparsity of the treatment plan is achieved by compressive sensing. The proposed linear programming (LP) model optimizes beam weights by minimizing the deviation of soft constraints subject to hard constraints, with a constraint on the l1 norm of the beam weights. A singular value decomposition (SVD) based acceleration technique was developed for the LP model. Based on the degeneracy of the influence matrix, the model is first compressed into a lower dimension for optimization, and then back-projected to reconstruct the beam weights. After beam weight optimization, the number of beams is reduced by removing the beams with low weights and optimizing the weights of the remaining beams using the same model. This beam reduction technique is further validated by a mixed integer programming (MIP) model. The SVDLP approach was tested on a lung case. The results demonstrate that the SVD acceleration technique speeds up the optimization by a factor of 4.8. Furthermore, the beam reduction achieves a similar plan quality to the globally optimal plan obtained by the MIP model, but is one to two orders of magnitude faster. In addition, the SVDLP approach was tested and compared with MultiPlan on three clinical cases of varying complexity. In general, the plans generated by the SVDLP achieve steeper dose gradients, better conformity and less damage to normal tissues. In conclusion, the SVDLP approach effectively improves the quality of the treatment plan due to the use of the complete beam search space. This challenging optimization problem with the complete beam search space is effectively handled by the proposed SVD acceleration.
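    Two ideas in this abstract lend themselves to a compact numerical illustration: truncating the SVD of the dose-influence matrix to shrink the optimization, and bounding the l1 norm of the nonnegative beam weights to encourage a sparse plan. The sketch below is a hedged toy formulation, not the published SVDLP code; the influence matrix D, the dose bounds d_lo and d_hi, the l1 budget tau, and the truncation rank are assumed placeholders, and the real method exploits the factored low-rank form rather than re-forming a dense matrix.

      import numpy as np
      from scipy.optimize import linprog

      def svdlp_sketch(D, d_lo, d_hi, tau, rank=20):
          """Toy LP: minimize soft overdose slack subject to hard lower dose bounds
          and an l1 budget on nonnegative beam weights (so the l1 norm is sum(w))."""
          n_vox, n_beam = D.shape
          # truncated SVD of the influence matrix; rank-r surrogate of D
          U, s, Vt = np.linalg.svd(D, full_matrices=False)
          r = min(rank, len(s))
          Dr = U[:, :r] @ (s[:r, None] * Vt[:r])
          # variables: beam weights w (n_beam) and per-voxel slacks e (n_vox), all >= 0
          c = np.concatenate([np.zeros(n_beam), np.ones(n_vox)])        # minimize total slack
          A_ub = np.vstack([
              np.hstack([Dr, -np.eye(n_vox)]),                          # D w - e <= d_hi (soft)
              np.hstack([-Dr, np.zeros((n_vox, n_vox))]),               # -D w <= -d_lo (hard)
              np.hstack([np.ones(n_beam), np.zeros(n_vox)])[None],      # sum(w) <= tau (l1 budget)
          ])
          b_ub = np.concatenate([d_hi, -d_lo, [tau]])
          res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=(0, None), method="highs")
          return res.x[:n_beam], res

    Beams whose optimized weight falls below a threshold can then be dropped and the remaining weights re-optimized with the same model, mirroring the beam-reduction step described above.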

  19. A singular value decomposition linear programming (SVDLP) optimization technique for circular cone based robotic radiotherapy

    NASA Astrophysics Data System (ADS)

    Liang, Bin; Li, Yongbao; Wei, Ran; Guo, Bin; Xu, Xuang; Liu, Bo; Li, Jiafeng; Wu, Qiuwen; Zhou, Fugen

    2018-01-01

    With robot-controlled linac positioning, robotic radiotherapy systems such as CyberKnife significantly increase freedom of radiation beam placement, but also impose more challenges on treatment plan optimization. The resampling mechanism in the vendor-supplied treatment planning system (MultiPlan) cannot fully explore the increased beam direction search space. Besides, a sparse treatment plan (using fewer beams) is desired to improve treatment efficiency. This study proposes a singular value decomposition linear programming (SVDLP) optimization technique for circular collimator-based robotic radiotherapy. The SVDLP approach initializes the input beams by simulating the process of covering the entire target volume with equivalent beam tapers. The requirements on dosimetry distribution are modeled as hard and soft constraints, and the sparsity of the treatment plan is achieved by compressive sensing. The proposed linear programming (LP) model optimizes beam weights by minimizing the deviation of soft constraints subject to hard constraints, with a constraint on the l1 norm of the beam weights. A singular value decomposition (SVD) based acceleration technique was developed for the LP model. Based on the degeneracy of the influence matrix, the model is first compressed into a lower dimension for optimization, and then back-projected to reconstruct the beam weights. After beam weight optimization, the number of beams is reduced by removing the beams with low weights and optimizing the weights of the remaining beams using the same model. This beam reduction technique is further validated by a mixed integer programming (MIP) model. The SVDLP approach was tested on a lung case. The results demonstrate that the SVD acceleration technique speeds up the optimization by a factor of 4.8. Furthermore, the beam reduction achieves a similar plan quality to the globally optimal plan obtained by the MIP model, but is one to two orders of magnitude faster. In addition, the SVDLP approach was tested and compared with MultiPlan on three clinical cases of varying complexity. In general, the plans generated by the SVDLP achieve steeper dose gradients, better conformity and less damage to normal tissues. In conclusion, the SVDLP approach effectively improves the quality of the treatment plan due to the use of the complete beam search space. This challenging optimization problem with the complete beam search space is effectively handled by the proposed SVD acceleration.

  20. 77 FR 7572 - Alliance Pipeline L.P.; Notice of Application

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-02-13

    ...] Alliance Pipeline L.P.; Notice of Application Take notice that on January 25, 2012, Alliance Pipeline L.P... Pipeline Inc., Managing General Partner of Alliance Pipeline L.P., 800, 605--5 Ave. SW., Calgary, Alberta...; 8:45 am] BILLING CODE 6717-01-P ...

  1. Cutaneous and Mucosal Lichen Planus: A Comprehensive Review of Clinical Subtypes, Risk Factors, Diagnosis, and Prognosis

    PubMed Central

    2014-01-01

    Lichen planus (LP) is a chronic inflammatory disorder that most often affects middle-aged adults. LP can involve the skin or mucous membranes including the oral, vulvovaginal, esophageal, laryngeal, and conjunctival mucosa. It has different variants based on the morphology of the lesions and the site of involvement. The literature suggests that certain presentations of the disease such as esophageal or ophthalmological involvement are underdiagnosed. The burden of the disease is higher in some variants including hypertrophic LP and erosive oral LP, which may have a more chronic pattern. LP can significantly affect the quality of life of patients as well. Drugs or contact allergens can cause lichenoid reactions as the main differential diagnosis of LP. LP is a T-cell mediated immunologic disease but the responsible antigen remains unidentified. In this paper, we review the history, epidemiology, and clinical subtypes of LP. We also review the histopathologic aspects of the disease, differential diagnoses, immunopathogenesis, and the clinical and genetic correlations. PMID:24672362

  2. Consensus guidelines for lumbar puncture in patients with neurological diseases.

    PubMed

    Engelborghs, Sebastiaan; Niemantsverdriet, Ellis; Struyfs, Hanne; Blennow, Kaj; Brouns, Raf; Comabella, Manuel; Dujmovic, Irena; van der Flier, Wiesje; Frölich, Lutz; Galimberti, Daniela; Gnanapavan, Sharmilee; Hemmer, Bernhard; Hoff, Erik; Hort, Jakub; Iacobaeus, Ellen; Ingelsson, Martin; Jan de Jong, Frank; Jonsson, Michael; Khalil, Michael; Kuhle, Jens; Lleó, Alberto; de Mendonça, Alexandre; Molinuevo, José Luis; Nagels, Guy; Paquet, Claire; Parnetti, Lucilla; Roks, Gerwin; Rosa-Neto, Pedro; Scheltens, Philip; Skårsgard, Constance; Stomrud, Erik; Tumani, Hayrettin; Visser, Pieter Jelle; Wallin, Anders; Winblad, Bengt; Zetterberg, Henrik; Duits, Flora; Teunissen, Charlotte E

    2017-01-01

    Cerebrospinal fluid collection by lumbar puncture (LP) is performed in the diagnostic workup of several neurological brain diseases. Reluctance to perform the procedure is due, among other factors, to a lack of standards and guidelines to minimize the risk of complications, such as post-LP headache or back pain. We provide consensus guidelines for the LP procedure to minimize the risk of complications. The recommendations are based on (1) data from a large multicenter LP feasibility study (evidence level II-2), (2) a systematic literature review on LP needle characteristics and post-LP complications (evidence level II-2), and (3) discussion of best practice within the Joint Programme Neurodegenerative Disease Research Biomarkers for Alzheimer's disease and Parkinson's Disease and Biomarkers for Multiple Sclerosis consortia (evidence level III). Our consensus guidelines address contraindications, as well as patient-related and procedure-related risk factors that can influence the development of post-LP complications. When an LP is performed correctly, the procedure is well tolerated and accepted, with a low complication rate.

  3. Maximal regularity in lp spaces for discrete time fractional shifted equations

    NASA Astrophysics Data System (ADS)

    Lizama, Carlos; Murillo-Arcila, Marina

    2017-09-01

    In this paper, we are presenting a new method based on operator-valued Fourier multipliers to characterize the existence and uniqueness of ℓp-solutions for discrete time fractional models in the form where A is a closed linear operator defined on a Banach space X and Δα denotes the Grünwald-Letnikov fractional derivative of order α > 0. If X is a UMD space, we provide this characterization only in terms of the R-boundedness of the operator-valued symbol associated to the abstract model. To illustrate our results, we derive new qualitative properties of nonlinear difference equations with shiftings, including fractional versions of the logistic and Nagumo equations.
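    For context, one standard convention for the Grünwald-Letnikov fractional difference appearing in this abstract (recalled here only because the displayed model equation of the original abstract did not survive text extraction) is, in LaTeX notation,

      \Delta^{\alpha} u(n) \;=\; \sum_{j=0}^{n} (-1)^{j} \binom{\alpha}{j}\, u(n-j), \qquad n \in \mathbb{N}_{0}, \quad \alpha > 0,

    which reduces to the usual iterated backward difference when alpha is a positive integer.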

  4. Ascorbic acid deficiency in patients with lichen planus.

    PubMed

    Nicolae, Ilinca; Mitran, Cristina Iulia; Mitran, Madalina Irina; Ene, Corina Daniela; Tampa, Mircea; Georgescu, Simona Roxana

    2017-01-01

    Recent studies have highlighted the role of oxidative stress in the pathogenesis of lichen planus (LP). In the present study, we focused on investigating ascorbic acid status in patients with LP and on identifying parameters that might influence the level of this vitamin. We analyzed the level of urinary ascorbic acid (reflectometric method) in 77 patients with LP (cutaneous LP (CLP), 49 cases; oral LP (OLP), 28 cases) and 50 control subjects. The evaluation of all participants included clinical examination and laboratory and imaging tests. Compared to the control group (19.82 mg/dl), the level of ascorbic acid was significantly lower both in patients with CLP (8.47 mg/dl, p = 0.001) and in those with OLP (8.04 mg/dl, p = 0.001). In patients with LP, ascorbic acid deficiency was found to increase with age (r = -0.318, p = 0.032). The urinary concentrations of ascorbic acid were significantly lower in patients with LP associated with infections compared to patients with LP without infections. The urinary ascorbic acid level may be a useful parameter in identifying patients with LP who are at risk of developing viral or bacterial infections.

  5. Roles of the low density lipoprotein receptor and related receptors in inhibition of lipoprotein(a) internalization by proprotein convertase subtilisin/kexin type 9.

    PubMed

    Romagnuolo, Rocco; Scipione, Corey A; Marcovina, Santica M; Gemin, Matthew; Seidah, Nabil G; Boffa, Michael B; Koschinsky, Marlys L

    2017-01-01

    Elevated plasma concentrations of lipoprotein(a) (Lp(a)) are a causal risk factor for cardiovascular disease. The mechanisms underlying Lp(a) clearance from plasma remain unclear, which is an obvious barrier to the development of therapies to specifically lower levels of this lipoprotein. Recently, it has been documented that monoclonal antibody inhibitors of proprotein convertase subtilisin/kexin type 9 (PCSK9) can lower plasma Lp(a) levels by 30%. Since PCSK9 acts primarily through the low density lipoprotein receptor (LDLR), this result is in conflict with the prevailing view that the LDLR does not participate in Lp(a) clearance. To support our recent findings in HepG2 cells that the LDLR can act as a bona fide receptor for Lp(a) whose effects are sensitive to PCSK9, we undertook a series of Lp(a) internalization experiments using different hepatic cells, with different variants of PCSK9, and with different members of the LDLR family. We found that PCSK9 decreased Lp(a) and/or apo(a) internalization by Huh7 human hepatoma cells and by primary mouse and human hepatocytes. Overexpression of human LDLR appeared to enhance apo(a)/Lp(a) internalization in both types of primary cells. Importantly, internalization of Lp(a) by LDLR-deficient mouse hepatocytes was not affected by PCSK9, but the effect of PCSK9 was restored upon overexpression of human LDLR. In HepG2 cells, Lp(a) internalization was decreased by gain-of-function mutants of PCSK9 more than by wild-type PCSK9, and a loss-of-function variant had a reduced ability to influence Lp(a) internalization. Apo(a) internalization by HepG2 cells was not affected by apo(a) isoform size. Finally, we showed that very low density lipoprotein receptor (VLDLR), LDLR-related protein (LRP)-8, and LRP-1 do not play a role in Lp(a) internalization or the effect of PCSK9 on Lp(a) internalization. Our findings are consistent with the idea that PCSK9 inhibits Lp(a) clearance through the LDLR, but do not exclude other effects of PCSK9 such as on Lp(a) biosynthesis.

  6. Roles of the low density lipoprotein receptor and related receptors in inhibition of lipoprotein(a) internalization by proprotein convertase subtilisin/kexin type 9

    PubMed Central

    Marcovina, Santica M.; Gemin, Matthew; Seidah, Nabil G.; Boffa, Michael B.

    2017-01-01

    Elevated plasma concentrations of lipoprotein(a) (Lp(a)) are a causal risk factor for cardiovascular disease. The mechanisms underlying Lp(a) clearance from plasma remain unclear, which is an obvious barrier to the development of therapies to specifically lower levels of this lipoprotein. Recently, it has been documented that monoclonal antibody inhibitors of proprotein convertase subtilisin/kexin type 9 (PCSK9) can lower plasma Lp(a) levels by 30%. Since PCSK9 acts primarily through the low density lipoprotein receptor (LDLR), this result is in conflict with the prevailing view that the LDLR does not participate in Lp(a) clearance. To support our recent findings in HepG2 cells that the LDLR can act as a bona fide receptor for Lp(a) whose effects are sensitive to PCSK9, we undertook a series of Lp(a) internalization experiments using different hepatic cells, with different variants of PCSK9, and with different members of the LDLR family. We found that PCSK9 decreased Lp(a) and/or apo(a) internalization by Huh7 human hepatoma cells and by primary mouse and human hepatocytes. Overexpression of human LDLR appeared to enhance apo(a)/Lp(a) internalization in both types of primary cells. Importantly, internalization of Lp(a) by LDLR-deficient mouse hepatocytes was not affected by PCSK9, but the effect of PCSK9 was restored upon overexpression of human LDLR. In HepG2 cells, Lp(a) internalization was decreased by gain-of-function mutants of PCSK9 more than by wild-type PCSK9, and a loss-of-function variant had a reduced ability to influence Lp(a) internalization. Apo(a) internalization by HepG2 cells was not affected by apo(a) isoform size. Finally, we showed that very low density lipoprotein receptor (VLDLR), LDLR-related protein (LRP)-8, and LRP-1 do not play a role in Lp(a) internalization or the effect of PCSK9 on Lp(a) internalization. Our findings are consistent with the idea that PCSK9 inhibits Lp(a) clearance through the LDLR, but do not exclude other effects of PCSK9 such as on Lp(a) biosynthesis. PMID:28750079

  7. Consuming Lower-Protein Nutrition Bars with Added Leucine Elicits Postprandial Changes in Appetite Sensations in Healthy Women.

    PubMed

    Bolster, Douglas R; Rahn, Maike; Kamil, Alison G; Bristol, Lindsey T; Goltz, Shellen R; Leidy, Heather J; Blaze Mt, Melvin; Nunez, Michael A; Guo, Elizabeth; Wang, Jianquan; Harkness, Laura S

    2018-04-20

    Higher-protein meals (>25 g protein/meal) have been associated with enhanced satiety but the role of amino acids is unclear. Leucine has been proposed to stimulate satiety in rodents but has not been assessed in humans. We assessed the acute effects of lower-protein nutrition bars, enhanced with a leucine peptide (LP), on postprandial appetite sensations in combination with plasma leucine and peptide YY (PYY) in healthy women. Utilizing a double-blind randomized crossover design, 40 healthy women [28 ± 7.5 y; body mass index (BMI, in kg/m2): 23.5 ± 2.4] consumed the following isocaloric (180 kcal) pre-loads on 3 separate visits: control bar [9 g protein with 0 g added LP (0-g LP)] or treatment bars [11 g protein with 2 g added LP (2-g LP) or 13 g protein with 3 g added LP (3-g LP)]. Pre- and postprandial hunger, desire to eat, prospective food consumption (PFC), fullness, and plasma leucine were assessed every 30 min for 240 min. Plasma PYY was assessed hourly for 240 min (n = 24). Main effects of time (P < 0.0001) and treatment (P < 0.03) were detected for postprandial hunger, desire to eat, PFC, and fullness. Post hoc analyses revealed that the 2-g and 3-g LP bars elicited greater increases in fullness and greater decreases in PFC compared with 0-g LP (all, P < 0.05) with no differences between the 2-g and 3-g LP bars. The 2-g bar elicited greater decreases in hunger and desire to eat compared with the 0-g LP bar (both, P ≤ 0.01), whereas 3-g LP did not. Appetite incremental areas under the curves (iAUCs) and PYY outcomes were not different between bars. A treatment × time interaction was detected for plasma leucine with increases occurring in a leucine-dose-dependent manner (P < 0.0001). Despite the dose-dependent increases in plasma leucine following the consumption of lower-protein bars enhanced with LP, only the 2-g LP bar elicited consistent postprandial changes in select appetite sensations compared with the 0-g LP bar. This study was registered on clinicaltrials.gov as NCT02091570.

  8. 75 FR 38514 - Application to Export Electric Energy; Brookfield Energy Marketing LP

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-07-02

    ... Energy Marketing LP AGENCY: Office of Electricity Delivery and Energy Reliability, DOE. ACTION: Notice of application. SUMMARY: Brookfield Energy Marketing LP (BEM LP) has applied for authority to transmit electric... surplus energy purchased from electric utilities, Federal power marketing agencies and other entities...

  9. 78 FR 11567 - Airworthiness Directives; Gulfstream Aerospace LP (Type Certificate Previously Held by Israel...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-02-19

    ... Airworthiness Directives; Gulfstream Aerospace LP (Type Certificate Previously Held by Israel Aircraft... Aerospace LP (Type Certificate Previously Held by Israel Aircraft Industries, Ltd.) Model Gulfstream G150... Gulfstream Aerospace LP (Type Certificate Previously Held by Israel Aircraft Industries, Ltd.): Amendment 39...

  10. Communication Optimal Parallel Multiplication of Sparse Random Matrices

    DTIC Science & Technology

    2013-02-21

    Definition 2.1), and (2) the algorithm is sparsity-independent, where the computation is statically partitioned to processors independent of the sparsity...structure of the input matrices (see Definition 2.5). The second assumption applies to nearly all existing algorithms for general sparse matrix-matrix...where A and B are n × n ER(d) matrices: Definition 2.1 An ER(d) matrix is an adjacency matrix of an Erdős-Rényi graph with parameters n and d/n. That

  11. Entropy Viscosity and L1-based Approximations of PDEs: Exploiting Sparsity

    DTIC Science & Technology

    2015-10-23

    AFRL-AFOSR-VA-TR-2015-0337. Entropy Viscosity and L1-based Approximations of PDEs: Exploiting Sparsity. Jean-Luc Guermond, Texas A&M University. Final report, covering 01-07-2012 to 30-06-2015. ...conservation equations can be stabilized by using the so-called entropy viscosity method, and we proposed to investigate this new technique. We

  12. Is Lp(a) ready for prime time use in the clinic? A pros-and-cons debate.

    PubMed

    Kostner, Karam M; Kostner, Gert M; Wierzbicki, Anthony S

    2018-04-30

    Lipoprotein (a) (Lp(a)) is a cholesterol-rich lipoprotein known since 1963. In spite of extensive research on Lp(a), there are still numerous gaps in our knowledge relating to its function, biosynthesis and catabolism. One reason for this might be that apo(a), the characteristic glycoprotein of Lp(a), is expressed only in primates. Results from experiments using transgenic animals therefore may need verification in humans. Studies on Lp(a) are also handicapped by the great number of isoforms of apo(a) and the heterogeneity of apo(a)-containing fractions in plasma. Quantification of Lp(a) in the clinical laboratory for a long time has not been standardized. Starting from its discovery, reports accumulated that Lp(a) contributed to the risk of cardiovascular disease (CVD), myocardial infarction (MI) and stroke. Early reports were based on case control studies but in the last decades a great deal of prospective studies have been published that highlight the increased risk for CVD and MI in patients with elevated Lp(a). Final answers to the question of whether Lp(a) is ready for wider clinical use will come from intervention studies with novel selective Lp(a) lowering medications that are currently underway. This article expounds arguments for and against this proposition from currently available data. Copyright © 2018 Elsevier B.V. All rights reserved.

  13. Lactobacillus paracasei metabolism of rice bran reveals metabolome associated with Salmonella Typhimurium growth reduction.

    PubMed

    Nealon, N J; Worcester, C R; Ryan, E P

    2017-06-01

    This study aimed to determine the effect of a cell-free supernatant of Lactobacillus paracasei ATCC 27092 with and without rice bran extract (RBE) on Salmonella Typhimurium 14028s growth, and to identify a metabolite profile with antimicrobial functions. Supernatant was collected from overnight cultures of L. paracasei incubated in the presence (LP+RBE) or absence (LP) of RBE and applied to S. Typhimurium. LP+RBE reduced 13·1% more S. Typhimurium growth than LP after 16 h (P < 0·05). Metabolite profiles of LP and LP+RBE were examined using nontargeted global metabolomics consisting of ultra-high-performance liquid chromatography coupled with tandem mass spectrometry. A comparison of LP and LP+RBE revealed 84 statistically significant metabolites (P < 0·05), where 20 were classified with antimicrobial functions. LP+RBE reduced S. Typhimurium growth to a greater extent than LP, and the metabolite profile distinctions suggested that RBE favourably modulates the metabolism of L. paracasei. These findings warrant continued investigation of probiotic and RBE antimicrobial activities across microenvironments and matrices where S. Typhimurium exposure is problematic. This study showed a novel metabolite profile of probiotic L. paracasei and prebiotic rice bran that increased antimicrobial activity against S. Typhimurium. © 2017 The Authors. Journal of Applied Microbiology published by John Wiley & Sons Ltd on behalf of The Society for Applied Microbiology.

  14. Large particle breakdown by cattle eating ryegrass and alfalfa.

    PubMed

    McLeod, M N; Minson, D J

    1988-04-01

    The proportion of large particles (LP) broken down to small, insoluble particles by primary mastication (eating), rumination, digestion and detrition (rubbing) was determined for separated leaf and stem fractions of perennial ryegrass (Lolium perenne) and alfalfa (Medicago sativa) fed to cattle cannulated at the esophagus. Large particles were defined as those particles retained during wet sieving on a screen with an aperture of 1.18 mm. Reduction in weight of particles caused by solubilizing or digestion was not considered to be particle breakdown per se, and particles were corrected for this loss in weight. The proportion of LP in the forage broken down by primary mastication was 25 +/- 1.9% (means +/- SE). Breakdown of LP by rumination was calculated from the weight of total particles regurgitated and the proportion of LP in the regurgitated and swallowed remasticated material. The weight of LP regurgitated was corrected for the dry matter lost by digestion using lignin ratio in the LP entering the rumen and of the regurgitated digesta. Rumination accounted for 50 +/- 1.5% of LP breakdown. Fecal loss accounted for 8 +/- .8% of the LP in forage. Breakdown of LP by digestion and detrition was calculated as 17 +/- 1.3% from the difference between the LP eaten and those broken down by primary mastication, rumination and passing out in the feces. The significance of these results for predicting voluntary intake from laboratory analysis is considered.

  15. The Lipid Parameters and Lipoprotein(a) Excess in Hashimoto Thyroiditis.

    PubMed

    Yetkin, D O; Dogantekin, B

    2015-01-01

    Objective. The risk of atherosclerotic heart disease is increased in autoimmune thyroiditis, although the reason is not clear. Lipoprotein(a) (Lp(a)) excess has been identified as a powerful predictor of premature atherosclerotic vascular diseases. The aim of this study was to investigate the relationship between Lp(a) levels and thyroid hormones in Hashimoto patients. Method. 154 premenopausal female Hashimoto patients (48 patients with overt hypothyroidism (OH), 50 patients with subclinical hypothyroidism (SH), and 56 patients with euthyroid Hashimoto thyroiditis (EH)) were enrolled in this study. The control group consisted of 50 age-matched volunteers. In every group, thyroid function tests and lipid parameters, including Lp(a), were measured. Lp(a) excess was defined as Lp(a) > 30 mg/dL. Results. Total-C, LDL-C, TG, and Lp(a) levels were increased in the Hashimoto group. Total-C, LDL-C, and TG levels were higher in the SH group than in the control group. Total-C and LDL-C levels were also higher in the EH group compared to controls. Lp(a) levels in the SH and EH groups were similar to those in controls. However, Lp(a) excess was more common in the subclinical hypothyroid and euthyroid Hashimoto groups than in the control group. Conclusion. Total-C and LDL-C levels and Lp(a) excess were higher even in euthyroid Hashimoto patients. Thyroid autoimmunity may have some effect on Lp(a) and lipid metabolism.

  16. The Lipid Parameters and Lipoprotein(a) Excess in Hashimoto Thyroiditis

    PubMed Central

    Yetkin, D. O.; Dogantekin, B.

    2015-01-01

    Objective. The risk of atherosclerotic heart disease is increased in autoimmune thyroiditis, although the reason is not clear. Lipoprotein(a) (Lp(a)) excess has been identified as a powerful predictor of premature atherosclerotic vascular diseases. The aim of this study was to investigate the relationship between Lp(a) levels and thyroid hormones in Hashimoto patients. Method. 154 premenopausal female Hashimoto patients (48 patients with overt hypothyroidism (OH), 50 patients with subclinical hypothyroidism (SH), and 56 patients with euthyroid Hashimoto thyroiditis (EH)) were enrolled in this study. The control group consisted of 50 age-matched volunteers. In every group, thyroid function tests and lipid parameters, including Lp(a), were measured. Lp(a) excess was defined as Lp(a) > 30 mg/dL. Results. Total-C, LDL-C, TG, and Lp(a) levels were increased in the Hashimoto group. Total-C, LDL-C, and TG levels were higher in the SH group than in the control group. Total-C and LDL-C levels were also higher in the EH group compared to controls. Lp(a) levels in the SH and EH groups were similar to those in controls. However, Lp(a) excess was more common in the subclinical hypothyroid and euthyroid Hashimoto groups than in the control group. Conclusion. Total-C and LDL-C levels and Lp(a) excess were higher even in euthyroid Hashimoto patients. Thyroid autoimmunity may have some effect on Lp(a) and lipid metabolism. PMID:26064115

  17. Immune-Enhancing Effect of Nanometric Lactobacillus plantarum nF1 (nLp-nF1) in a Mouse Model of Cyclophosphamide-Induced Immunosuppression.

    PubMed

    Choi, Dae-Woon; Jung, Sun Young; Kang, Jisu; Nam, Young-Do; Lim, Seong-Il; Kim, Ki Tae; Shin, Hee Soon

    2018-02-28

    Nanometric Lactobacillus plantarum nF1 (nLp-nF1) is a biogenic preparation consisting of dead L. plantarum cells pretreated with heat and a nanodispersion process. In this study, we investigated the immune-enhancing effects of nLp-nF1 in vivo and in vitro. To evaluate the immunostimulatory effects of nLp-nF1, mice immunosuppressed by cyclophosphamide (CPP) treatment were administered nLp-nF1. As expected, CPP restricted the immune response of mice, whereas oral administration of nLp-nF1 significantly increased total IgG in the serum and cytokine production (interleukin-12 (IL-12) and tumor necrosis factor alpha (TNF-α)) in bone marrow cells. Furthermore, nLp-nF1 enhanced the production of splenic cytokines such as IL-12, TNF-α, and interferon gamma (IFN-γ). In vitro, nLp-nF1 stimulated the immune response by enhancing the production of cytokines such as IL-12, TNF-α, and IFN-γ. Moreover, nLp-nF1 given as a food additive enhanced immune responses when combined with various food materials in vitro. These results suggest that nLp-nF1 could be used to strengthen the immune system and restore normal immunity in people with weakened immune systems, such as children, the elderly, and patients.

  18. Diagnostic classification of eating disorders in children and adolescents: How does DSM-IV-TR compare to empirically-derived categories?

    PubMed Central

    Eddy, Kamryn T.; le Grange, Daniel; Crosby, Ross D.; Hoste, Renee Rienecke; Doyle, Angela Celio; Smyth, Angela; Herzog, David B.

    2009-01-01

    Objective The purpose of this study was to empirically derive eating disorder phenotypes in a clinical sample of children and adolescents using latent profile analysis (LPA) and to compare these latent profile (LP) groups to the DSM-IV-TR eating disorder categories. Method Eating disorder symptom data collected from 401 youth (ages 7–19; mean 15.14 ± 2.35 y) seeking eating disorder treatment were included in LPA; general linear models were used to compare LP groups to DSM-IV-TR eating disorder categories on pre-treatment and outcome indices. Results Three LP groups were identified: LP1 (n=144), characterized by binge eating and purging (“Binge/purge”); LP2 (n=126), characterized by excessive exercise and extreme eating disorder cognitions (“Exercise-extreme cognitions”); and LP3 (n=131), characterized by minimal eating disorder behaviors and cognitions (“Minimal behaviors/cognitions”). The identified LPs imperfectly resembled DSM-IV-TR eating disorders. LP1 resembled bulimia nervosa; LP2 and LP3 broadly resembled anorexia nervosa with a relaxed weight criterion, differentiated by excessive exercise and severity of eating disorder cognitions. LP groups were more differentiated than the DSM-IV-TR categories across pre-treatment eating disorder and general psychopathology indices, as well as weight change at follow-up. Neither LP nor DSM-IV-TR categories predicted change in binge/purge behaviors. Validation analyses suggest these empirically-derived groups improve upon the current DSM-IV-TR categories. Conclusions In children and adolescents, revisions for DSM-V should consider recognition of patients with minimal cognitive eating disorder symptoms. PMID:20410717
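
    A latent profile analysis like the one described above is commonly implemented as a Gaussian mixture model over the symptom measures, with the number of profiles chosen by an information criterion. The sketch below illustrates that workflow on simulated data; the feature count, sample size and model settings are illustrative assumptions, not the study's actual variables.

    # Minimal LPA-style sketch using a Gaussian mixture model (a common way to fit
    # latent profiles); the symptom features and data are placeholders, not the
    # study's actual measures.
    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(0)
    X = rng.normal(size=(401, 6))          # 401 youths x 6 symptom scores (simulated)

    # Fit candidate models with 1..5 profiles and keep the one with the lowest BIC.
    models = [GaussianMixture(n_components=k, random_state=0).fit(X) for k in range(1, 6)]
    best = min(models, key=lambda m: m.bic(X))
    profiles = best.predict(X)             # profile assignment for each participant
    print("chosen number of profiles:", best.n_components)
    print("profile sizes:", np.bincount(profiles))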

  19. 78 FR 53487 - Equinox Funds Trust and Equinox Institutional Asset Management LP; Notice of Application

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-08-29

    ... Funds Trust and Equinox Institutional Asset Management LP; Notice of Application August 23, 2013. AGENCY...: Equinox Funds Trust (the ``Trust'') and Equinox Institutional Asset Management LP (the ``Initial Adviser... Institutional Asset Management LP, 47 Hulfish Street, Suite 510, Princeton, NJ 08542; Daniel Prezioso, Equinox...

  20. 75 FR 26255 - Change in Bank Control Notices; Acquisition of Shares of Bank or Bank Holding Companies

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-05-11

    ... Street, San Francisco, California 94105-1579: 1. Thomas H. Lee Equity Fund VI, L.P.; Thomas H. Lee Parallel Fund VI, L.P.; Thomas H. Lee Parallel (DT) Fund VI, L.P.; and THL Sterling Equity Investors, L.P...

  1. Highly parallel sparse Cholesky factorization

    NASA Technical Reports Server (NTRS)

    Gilbert, John R.; Schreiber, Robert

    1990-01-01

    Several fine grained parallel algorithms were developed and compared to compute the Cholesky factorization of a sparse matrix. The experimental implementations are on the Connection Machine, a distributed memory SIMD machine whose programming model conceptually supplies one processor per data element. In contrast to special purpose algorithms in which the matrix structure conforms to the connection structure of the machine, the focus is on matrices with arbitrary sparsity structure. The most promising algorithm is one whose inner loop performs several dense factorizations simultaneously on a 2-D grid of processors. Virtually any massively parallel dense factorization algorithm can be used as the key subroutine. The sparse code attains execution rates comparable to those of the dense subroutine. Although at present architectural limitations prevent the dense factorization from realizing its potential efficiency, it is concluded that a regular data parallel architecture can be used efficiently to solve arbitrarily structured sparse problems. A performance model is also presented and it is used to analyze the algorithms.
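
    As a point of reference for the factorization discussed above, the toy sketch below factors a small sparse symmetric positive definite matrix (a 2-D Laplacian) and counts the extra nonzeros (fill-in) introduced in the Cholesky factor. It is a serial SciPy/NumPy illustration of the problem being solved, not the Connection Machine algorithm itself.

    # Toy sketch of sparse Cholesky fill-in (not the paper's data-parallel algorithm):
    # factor a small sparse SPD matrix and compare the nonzero counts of A and L.
    import numpy as np
    import scipy.sparse as sp

    n = 10
    T = sp.diags([-1, 4, -1], offsets=[-1, 0, 1], shape=(n, n))
    A = sp.kron(sp.eye(n), T) + sp.kron(sp.diags([-1, -1], [-1, 1], shape=(n, n)), sp.eye(n))
    A = A.tocsc()                                  # 2-D Laplacian: sparse, symmetric positive definite

    L = np.linalg.cholesky(A.toarray())            # factor (densified here for simplicity)
    print("nnz(A):", A.nnz)
    print("nnz(L):", np.count_nonzero(L))          # the extra nonzeros in L are the fill-in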

  2. Genotype-phenotype association study via new multi-task learning model

    PubMed Central

    Huo, Zhouyuan; Shen, Dinggang

    2018-01-01

    Research on the associations between genetic variations and imaging phenotypes is developing with the advance in high-throughput genotype and brain image techniques. Regression analysis of single nucleotide polymorphisms (SNPs) and imaging measures as quantitative traits (QTs) has been proposed to identify the quantitative trait loci (QTL) via multi-task learning models. Recent studies consider the interlinked structures within SNPs and imaging QTs through group lasso, e.g. ℓ2,1-norm, leading to better predictive results and insights of SNPs. However, group sparsity is not enough for representing the correlation between multiple tasks and ℓ2,1-norm regularization is not robust either. In this paper, we propose a new multi-task learning model to analyze the associations between SNPs and QTs. We suppose that low-rank structure is also beneficial to uncover the correlation between genetic variations and imaging phenotypes. Finally, we conduct regression analysis of SNPs and QTs. Experimental results show that our model is more accurate in prediction than compared methods and presents new insights of SNPs. PMID:29218896
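
    The ℓ2,1-regularized regression underlying such genotype-phenotype models can be sketched with a proximal-gradient loop in which a gradient step on the squared loss is followed by row-wise group soft-thresholding; a low-rank term would analogously be handled by singular-value thresholding. The example below uses synthetic data and illustrative parameter values, not the proposed model or its datasets.

    # Hedged sketch of l2,1-regularized multi-task regression (SNPs -> imaging QTs),
    # min_W 0.5*||XW - Y||_F^2 + lam*||W||_{2,1}, solved by proximal gradient.
    # X, Y and all parameters are synthetic placeholders, not the study's data.
    import numpy as np

    rng = np.random.default_rng(0)
    n, p, t = 100, 50, 5                      # samples, SNPs, imaging QTs (tasks)
    X = rng.normal(size=(n, p))
    Y = rng.normal(size=(n, t))
    lam, step = 5.0, 1.0 / np.linalg.norm(X, 2) ** 2

    def prox_l21(W, thresh):
        """Row-wise group soft-thresholding: shrinks whole SNP rows toward zero."""
        norms = np.linalg.norm(W, axis=1, keepdims=True)
        scale = np.maximum(1 - thresh / np.maximum(norms, 1e-12), 0)
        return W * scale

    W = np.zeros((p, t))
    for _ in range(200):
        grad = X.T @ (X @ W - Y)              # gradient of the squared loss
        W = prox_l21(W - step * grad, step * lam)

    print("SNP rows selected:", int((np.linalg.norm(W, axis=1) > 1e-8).sum()))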

  3. Detection of faults in rotating machinery using periodic time-frequency sparsity

    NASA Astrophysics Data System (ADS)

    Ding, Yin; He, Wangpeng; Chen, Binqiang; Zi, Yanyang; Selesnick, Ivan W.

    2016-11-01

    This paper addresses the problem of extracting periodic oscillatory features in vibration signals for detecting faults in rotating machinery. To extract the feature, we propose an approach in the short-time Fourier transform (STFT) domain where the periodic oscillatory feature manifests itself as a relatively sparse grid. To estimate the sparse grid, we formulate an optimization problem using customized binary weights in the regularizer, where the weights are designed to promote periodicity. In order to solve the proposed optimization problem, we develop an algorithm called the augmented Lagrangian majorization-minimization algorithm, which combines the split augmented Lagrangian shrinkage algorithm (SALSA) with majorization-minimization (MM) and is guaranteed to converge for both convex and non-convex formulations. As examples, the proposed approach is applied to simulated data, used as a tool for diagnosing faults in bearings and gearboxes on real data, and compared to some state-of-the-art methods. The results show that the proposed approach can effectively detect and extract the periodic oscillatory features.
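
    The core idea of weighting sparsity in the STFT domain toward a known fault period can be illustrated with a simple weighted soft-thresholding pass, as in the sketch below. This is not the paper's augmented Lagrangian majorization-minimization algorithm; the signal, fault period and threshold values are assumptions chosen only to show the weighting scheme.

    # Illustrative sketch of weighted time-frequency sparsity (not the paper's
    # SALSA/MM algorithm): soft-threshold STFT frames, shrinking frames near
    # multiples of an assumed fault period less than the others.
    import numpy as np
    from scipy.signal import stft, istft

    fs = 10_000                                # sample rate (Hz), assumed
    T_fault = 0.05                             # assumed fault period (s)
    t = np.arange(0, 1.0, 1.0 / fs)
    rng = np.random.default_rng(1)
    x = rng.normal(scale=0.5, size=t.size)     # background noise
    for k in range(1, int(1.0 / T_fault)):     # add a decaying impulse every T_fault seconds
        i = int(k * T_fault * fs)
        x[i:i + 50] += np.exp(-np.arange(50) / 10.0) * np.sin(2 * np.pi * 2000 * t[:50])

    f, frames, Z = stft(x, fs=fs, nperseg=256)
    hop = frames[1] - frames[0]                        # frame spacing in seconds
    r = np.mod(frames, T_fault)                        # offset of each frame from the period grid
    periodic = np.minimum(r, T_fault - r) < hop        # frames near multiples of the fault period
    thresh = np.where(periodic, 0.2, 2.0)              # binary-weighted thresholds
    Zs = Z * np.maximum(1 - thresh / np.maximum(np.abs(Z), 1e-12), 0)  # complex soft-thresholding
    _, x_periodic = istft(Zs, fs=fs, nperseg=256)      # extracted periodic oscillatory feature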

  4. An efficient semi-supervised community detection framework in social networks.

    PubMed

    Li, Zhen; Gong, Yong; Pan, Zhisong; Hu, Guyu

    2017-01-01

    Community detection is an important task across a number of research fields including social science, biology, and physics. In the real world, topology information alone is often inadequate to accurately find community structure due to its sparsity and noise. Potentially useful prior information, such as pairwise constraints comprising must-link and cannot-link constraints, can be obtained from domain knowledge in many applications. Thus, combining network topology with prior information to improve community detection accuracy is promising. Previous methods mainly utilize the must-link constraints but fail to make full use of cannot-link constraints. In this paper, we propose a semi-supervised community detection framework which can effectively incorporate both types of pairwise constraints into the detection process. In particular, must-link and cannot-link constraints are represented as positive and negative links, and we encode them by adding different graph regularization terms to penalize closeness of the nodes. Experiments on multiple real-world datasets show that the proposed framework significantly improves the accuracy of community detection.
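
    One simple way to realize the idea of encoding must-link and cannot-link constraints as positive and negative links is to adjust the affinity matrix before running an off-the-shelf community detection step, as in the hedged sketch below. The graph, constraint pairs, weights and use of spectral clustering are illustrative assumptions rather than the framework proposed in the paper.

    # Hedged sketch of constraint-aware community detection: must-link pairs add
    # positive weight, cannot-link pairs subtract weight, then spectral clustering
    # runs on the adjusted affinity. Graph and constraints are toy placeholders.
    import numpy as np
    from sklearn.cluster import SpectralClustering

    rng = np.random.default_rng(0)
    n = 20
    A = (rng.random((n, n)) < 0.15).astype(float)
    A = np.triu(A, 1); A = A + A.T                      # symmetric, unweighted toy graph

    must_link = [(0, 1), (2, 3)]
    cannot_link = [(0, 10), (5, 15)]
    alpha, beta = 2.0, 2.0
    for i, j in must_link:
        A[i, j] += alpha; A[j, i] += alpha              # strengthen within-community ties
    for i, j in cannot_link:
        A[i, j] -= beta; A[j, i] -= beta                # discourage across-community ties
    A = np.clip(A, 0, None)                             # keep the affinity non-negative (a simplification)

    labels = SpectralClustering(n_clusters=2, affinity="precomputed",
                                random_state=0).fit_predict(A)
    print(labels)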

  5. Zernike ultrasonic tomography for fluid velocity imaging based on pipeline intrusive time-of-flight measurements.

    PubMed

    Besic, Nikola; Vasile, Gabriel; Anghel, Andrei; Petrut, Teodor-Ion; Ioana, Cornel; Stankovic, Srdjan; Girard, Alexandre; d'Urso, Guy

    2014-11-01

    In this paper, we propose a novel ultrasonic tomography method for pipeline flow field imaging, based on the Zernike polynomial series. Having intrusive multipath time-of-flight ultrasonic measurements (difference in flight time and speed of ultrasound) at the input, we provide at the output tomograms of the fluid velocity components (axial, radial, and orthoradial velocity). Principally, by representing these velocities as Zernike polynomial series, we reduce the tomography problem to an ill-posed problem of finding the coefficients of the series, relying on the acquired ultrasonic measurements. This problem is then treated by applying and comparing Tikhonov regularization and quadratically constrained ℓ1 minimization. To enhance the comparative analysis, we additionally introduce sparsity by employing SVD-based filtering in selecting the Zernike polynomials to be included in the series. The first approach, Tikhonov regularization without filtering, is used because it is the most suitable method. The performances are quantitatively tested by considering a residual norm and by estimating the flow using the axial velocity tomogram. Finally, the obtained results show a relative residual norm and an error in flow estimation of, respectively, ~0.3% and ~1.6% for the less turbulent flow and ~0.5% and ~1.8% for the turbulent flow. Additionally, a qualitative validation is performed by proximate matching of the derived tomograms with a flow physical model.
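
    The two inversion strategies being compared can be sketched on a generic ill-posed linear system A c = m relating series coefficients to measurements: a closed-form Tikhonov solution and an iterative soft-thresholding (ISTA) loop standing in for the ℓ1-constrained minimization. The operator, sparsity pattern and regularization weights below are synthetic assumptions, not the Zernike system from the paper.

    # Generic sketch of the two regularized inversions on a synthetic ill-posed
    # system A c = m: Tikhonov in closed form, and ISTA as a simple l1 solver.
    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.normal(size=(40, 60))                 # underdetermined forward operator
    c_true = np.zeros(60); c_true[[3, 7, 20]] = [1.0, -0.5, 0.8]
    m = A @ c_true + 0.01 * rng.normal(size=40)   # noisy measurements

    # Tikhonov: c = argmin ||A c - m||^2 + lam*||c||^2, closed form.
    lam = 0.1
    c_tik = np.linalg.solve(A.T @ A + lam * np.eye(60), A.T @ m)

    # l1: c = argmin 0.5*||A c - m||^2 + mu*||c||_1, via ISTA.
    mu, step = 0.05, 1.0 / np.linalg.norm(A, 2) ** 2
    c_l1 = np.zeros(60)
    for _ in range(500):
        z = c_l1 - step * A.T @ (A @ c_l1 - m)
        c_l1 = np.sign(z) * np.maximum(np.abs(z) - step * mu, 0)

    print("Tikhonov nonzeros:", np.count_nonzero(np.abs(c_tik) > 1e-3))
    print("l1 nonzeros:      ", np.count_nonzero(np.abs(c_l1) > 1e-3))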

  6. Multidimensional Compressed Sensing MRI Using Tensor Decomposition-Based Sparsifying Transform

    PubMed Central

    Yu, Yeyang; Jin, Jin; Liu, Feng; Crozier, Stuart

    2014-01-01

    Compressed Sensing (CS) has been applied in dynamic Magnetic Resonance Imaging (MRI) to accelerate the data acquisition without noticeably degrading the spatial-temporal resolution. A suitable sparsity basis is one of the key components of successful CS applications. Conventionally, a multidimensional dataset in dynamic MRI is treated as a series of two-dimensional matrices, and then various matrix/vector transforms are used to explore the image sparsity. Traditional methods typically sparsify the spatial and temporal information independently. In this work, we propose a novel concept of tensor sparsity for the application of CS in dynamic MRI, and present the Higher-order Singular Value Decomposition (HOSVD) as a practical example. Applications to three- and four-dimensional MRI data demonstrate that HOSVD simultaneously exploits the correlations within spatial and temporal dimensions. Validations based on cardiac datasets indicate that the proposed method achieved reconstruction accuracy comparable to the low-rank matrix recovery methods and outperformed the conventional sparse recovery methods. PMID:24901331
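
    A Higher-order SVD of the kind used above computes one orthogonal factor per tensor mode from the SVDs of the mode unfoldings and projects the data onto them to obtain a core tensor, which can then be thresholded as a sparsifying transform. The sketch below applies this to a random 3-D array; the array size and threshold are placeholders, not MRI data or the paper's reconstruction pipeline.

    # Sketch of a Higher-order SVD (HOSVD) of a 3-D array and thresholding of its
    # core, illustrating the tensor sparsifying-transform idea on random data.
    import numpy as np

    def unfold(T, mode):
        """Matricize tensor T along the given mode."""
        return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

    def fold(M, mode, shape):
        full = [shape[mode]] + [s for i, s in enumerate(shape) if i != mode]
        return np.moveaxis(M.reshape(full), 0, mode)

    rng = np.random.default_rng(0)
    X = rng.normal(size=(16, 16, 8))                 # stand-in for an x-y-time block

    # Mode factors from the SVDs of the unfoldings.
    U = [np.linalg.svd(unfold(X, k), full_matrices=False)[0] for k in range(3)]

    # Core tensor: multiply each mode by the transposed factor.
    S = X.copy()
    for k in range(3):
        S = fold(U[k].T @ unfold(S, k), k, S.shape)

    S_sparse = S * (np.abs(S) > 1.0)                 # keep only the large core coefficients
    print("core nonzeros kept:", int((S_sparse != 0).sum()), "of", S.size)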

  7. Dendrites of dentate gyrus granule cells contribute to pattern separation by controlling sparsity

    PubMed Central

    Chavlis, Spyridon; Petrantonakis, Panagiotis C.

    2016-01-01

    The hippocampus plays a key role in pattern separation, the process of transforming similar incoming information into highly dissimilar, nonoverlapping representations. Sparse-firing granule cells (GCs) in the dentate gyrus (DG) have been proposed to undertake this computation, but little is known about which of their properties influence pattern separation. Dendritic atrophy has been reported in diseases associated with pattern separation deficits, suggesting a possible role for dendrites in this phenomenon. To investigate whether and how the dendrites of GCs contribute to pattern separation, we build a simplified, biologically relevant computational model of the DG. Our model suggests that the presence of GC dendrites is associated with high pattern separation efficiency while their atrophy leads to increased excitability and performance impairments. These impairments can be rescued by restoring GC sparsity to control levels through various manipulations. We predict that dendrites contribute to pattern separation as a mechanism for controlling sparsity. © 2016 The Authors Hippocampus Published by Wiley Periodicals, Inc. PMID:27784124

  8. Effects of high-order correlations on personalized recommendations for bipartite networks

    NASA Astrophysics Data System (ADS)

    Liu, Jian-Guo; Zhou, Tao; Che, Hong-An; Wang, Bing-Hong; Zhang, Yi-Cheng

    2010-02-01

    In this paper, we introduce a modified collaborative filtering (MCF) algorithm, which has remarkably higher accuracy than standard collaborative filtering. In the MCF, instead of the cosine similarity index, the user-user correlations are obtained by a diffusion process. Furthermore, by considering the second-order correlations, we design an effective algorithm that depresses the influence of mainstream preferences. Simulation results show that the algorithmic accuracy, measured by the average ranking score, is further improved by 20.45% and 33.25% in the optimal cases of the MovieLens and Netflix data. More importantly, the optimal value λ depends approximately monotonically on the sparsity of the training set. Given a real system, we could estimate the optimal parameter according to the data sparsity, which makes this algorithm easy to apply. In addition, two significant criteria of algorithmic performance, diversity and popularity, are also taken into account. Numerical results show that as the sparsity increases, the algorithm considering the second-order correlations can outperform the MCF simultaneously in all three criteria.
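
    One plausible reading of the modified collaborative filtering scheme is sketched below: user-user correlations are obtained by spreading unit resources from users to items and back (a diffusion process), and a second-order term weighted by λ is added to depress mainstream correlations before scoring items. The normalization details, λ value and toy rating matrix are assumptions for illustration and may differ from the paper's exact formulas.

    # Hedged sketch of diffusion-based collaborative filtering with a second-order
    # correction (H = W + lam*W^2, lam < 0 depressing mainstream correlations).
    import numpy as np

    rng = np.random.default_rng(0)
    R = (rng.random((30, 50)) < 0.2).astype(float)   # users x items, 1 = collected

    k_user = R.sum(axis=1, keepdims=True) + 1e-12    # user degrees
    k_item = R.sum(axis=0, keepdims=True) + 1e-12    # item degrees

    # Diffusion-based user-user correlation in place of cosine similarity.
    W = (R / k_item) @ R.T / k_user.T                # resources spread user -> item -> user

    lam = -0.8                                       # second-order correction weight (assumed)
    H = W + lam * (W @ W)

    scores = H @ R                                   # predicted interest in each item
    scores[R > 0] = -np.inf                          # do not re-recommend collected items
    top = np.argsort(-scores, axis=1)[:, :10]        # top-10 recommendations per user
    print(top[0])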

  9. Lipoprotein(a): Biology and Clinical Importance

    PubMed Central

    McCormick, Sally P A

    2004-01-01

    Lipoprotein(a) [Lp(a)] is a unique lipoprotein that has emerged as an independent risk factor for developing vascular disease. Plasma Lp(a) levels above the common cut-off level of 300 mg/L place individuals at risk of developing heart disease particularly if combined with other lipid and thrombogenic risk factors. Studies in humans have shown Lp(a) levels to be hugely variable and under strict genetic control, largely by the apolipoprotein(a) [apo(a)] gene. In general, Lp(a) levels have proven difficult to manipulate, although some factors have been identified that can influence levels. Research has shown that Lp(a) has a high affinity for the arterial wall and displays many athero-thrombogenic properties. While a definite function for Lp(a) has not been identified, the last two decades of research have provided much information on the biology and clinical importance of Lp(a). PMID:18516206

  10. Serum lipoprotein (a) concentrations among Arab children: a hospital-based study in Kuwait.

    PubMed

    Alsaeid, M; Alsaeid, K; Fatania, H R; Sharma, P N; Abd-Elsalam, R

    1998-09-01

    Elevated lipoprotein (a) [Lp(a)] is an independent risk factor for premature atherosclerosis and coronary heart disease, both of which are prevalent among Kuwaitis. Our objective was to measure serum lipids, including Lp(a), in Arab children and compare them with values reported for other ethnic groups. To that end, serum concentrations of Lp(a), total cholesterol [T-CHOL], high density lipoprotein [HDL], low density lipoprotein [LDL], and triglyceride [TG] were assessed in 103 Arab children. The mean and median Lp(a) were 140.4 mg/l and 95 mg/l, respectively. The Lp(a) frequency distribution was skewed to the right, with the highest frequencies appearing at low levels. Serum Lp(a) correlated positively with T-CHOL and LDL but did not correlate with age, HDL, and TG. Only nine children (8.7%) had serum Lp(a) levels associated with increased cardiovascular risk, namely ≥ 300 mg/l.

  11. The Land Processes Distributed Active Archive Center (LP DAAC)

    USGS Publications Warehouse

    Golon, Danielle K.

    2016-10-03

    The Land Processes Distributed Active Archive Center (LP DAAC) operates as a partnership with the U.S. Geological Survey and is 1 of 12 DAACs within the National Aeronautics and Space Administration (NASA) Earth Observing System Data and Information System (EOSDIS). The LP DAAC ingests, archives, processes, and distributes NASA Earth science remote sensing data. These data are provided to the public at no charge. Data distributed by the LP DAAC provide information about Earth’s surface from daily to yearly intervals and at 15 to 5,600 meter spatial resolution. Data provided by the LP DAAC can be used to study changes in agriculture, vegetation, ecosystems, elevation, and much more. The LP DAAC provides several ways to access, process, and interact with these data. In addition, the LP DAAC is actively archiving new datasets to provide users with a variety of data to study the Earth.

  12. 76 FR 67017 - Praesidian Capital Opportunity Fund III, LP License No. 02/02-0647; Notice Seeking Exemption...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-10-28

    ... II, LP, Associate of Praesidian Capital Opportunity Fund III, LP, holds a debt investment and warrant... SMALL BUSINESS ADMINISTRATION Praesidian Capital Opportunity Fund III, LP License No. 02/02- 0647; Notice Seeking Exemption Under Section 312 of the Small Business Investment Act, Conflicts of Interest...

  13. 78 FR 25262 - TexStar Transmission, LP; TEAK Texana Transmission Company, LP; Notice of Filings

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-04-30

    ... DEPARTMENT OF ENERGY Federal Energy Regulatory Commission [Docket Nos. PR13-16-002; PR13-17-002; Not Consolidated] TexStar Transmission, LP; TEAK Texana Transmission Company, LP; Notice of Filings Take notice that on April 23, 2013, the applicants listed above submitted an amendment to the December...

  14. 75 FR 8921 - Grant of Authority for Subzone Status; Brightpoint North America L.P. (Cell Phone Kitting and...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-02-26

    ... Status; Brightpoint North America L.P. (Cell Phone Kitting and Distribution) Indianapolis, IN Pursuant to... the cell phone kitting and distribution facilities of Brightpoint North America L.P., located in... cell phones at the facilities of Brightpoint North America L.P., located in Plainfield, Indiana...

  15. Coupling analysis of non-circular-symmetric modes and design of orientation-insensitive few-mode fiber couplers

    NASA Astrophysics Data System (ADS)

    Li, Jiaxiong; Du, Jiangbing; Ma, Lin; Li, Ming-Jun; Jiang, Shoulin; Xu, Xiao; He, Zuyuan

    2017-01-01

    We study the coupling between two identical weakly-coupled few-mode fibers based on coupled-mode theory. The coupling behavior of non-circular-symmetric modes, such as LP11 and LP21, is investigated analytically and numerically. By carefully choosing the fiber core separation and coupler length, we can design orientation-insensitive fiber couplers for non-circular-symmetric modes at arbitrary coupling ratios. Based on the design method, we propose an orientation-insensitive two-mode fiber coupler at 850 nm working as a mode multiplexer/demultiplexer for two-mode transmission using standard single-mode fiber. Within the band from 845 to 855 nm, the insertion losses of LP01 and LP11 modes are less than 0.03 dB and 0.24 dB, respectively. When the two-mode fiber coupler is used as mode demultiplexer, the LP01/LP11 and LP11/LP01 extinction ratios in the separated branches are respectively above 12.6 dB and 21.2 dB. Our design method can be extended to two-mode communication or sensing systems at other wavelengths.
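
    For two identical weakly-coupled cores, textbook coupled-mode theory gives the cross-coupled power as P2(z) = sin²(κz), so a target coupling ratio fixes the coupler length once the coupling coefficient κ is known; for non-circular-symmetric modes such as LP11, κ additionally depends on the mode orientation, which is what the design above removes. The sketch below only evaluates the textbook relation with a placeholder κ, not the paper's orientation-insensitive design.

    # Textbook coupled-mode sketch: cross-coupled power P2(z) = sin^2(kappa*z);
    # kappa here is an assumed placeholder, not a value from the paper's design.
    import numpy as np

    kappa = 150.0                          # coupling coefficient in 1/m (assumed)
    target_ratio = 0.5                     # desired coupling ratio (50/50 splitter)

    # Shortest length with P2(L) = target: kappa*L = arcsin(sqrt(target_ratio)).
    L = np.arcsin(np.sqrt(target_ratio)) / kappa
    z = np.linspace(0.0, 2 * L, 200)
    P2 = np.sin(kappa * z) ** 2            # cross-coupled power along the coupler
    print(f"coupler length for a 50/50 split: {L * 1e3:.2f} mm")
    print(f"coupling ratio at that length:    {np.sin(kappa * L) ** 2:.2f}")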

  16. Characterization of a Novel Maltose-Forming α-Amylase from Lactobacillus plantarum subsp. plantarum ST-III.

    PubMed

    Jeon, Hye-Yeon; Kim, Na-Ri; Lee, Hye-Won; Choi, Hye-Jeong; Choung, Woo-Jae; Koo, Ye-Seul; Ko, Dam-Seul; Shim, Jae-Hoon

    2016-03-23

    A novel maltose (G2)-forming α-amylase from Lactobacillus plantarum subsp. plantarum ST-III was expressed in Escherichia coli and characterized. Analysis of conserved amino acid sequence alignments showed that L. plantarum maltose-producing α-amylase (LpMA) belongs to glycoside hydrolase family 13. The recombinant enzyme (LpMA) was a novel G2-producing α-amylase. The properties of purified LpMA were investigated following enzyme purification. LpMA exhibited optimal activity at 30 °C and pH 3.0. It produced only G2 from the hydrolysis of various substrates, including maltotriose (G3), maltopentaose (G5), maltosyl β-cyclodextrin (G2-β-CD), amylose, amylopectin, and starch. However, LpMA was unable to hydrolyze cyclodextrins. Reaction pattern analysis using 4-nitrophenyl-α-d-maltopentaoside (pNPG5) demonstrated that LpMA hydrolyzed pNPG5 from the nonreducing end, indicating that LpMA is an exotype α-amylase. Kinetic analysis revealed that LpMA had the highest catalytic efficiency (kcat/Km ratio) toward G2-β-CD. Compared with β-amylase, a well-known G2-producing enzyme, LpMA produced G2 more efficiently from liquefied corn starch due to its ability to hydrolyze G3.

  17. Dynamic changes of the intraocular pressure and the pressure of cerebrospinal fluid in nonglaucomatous neurological patients.

    PubMed

    González-Camarena, Pedro Iván; San-Juan, Daniel; González-Olhovich, Irene; Rodríguez-Arévalo, David; Lozano-Elizondo, David; Trenado, Carlos; Anschel, David J

    2017-03-01

    To describe the dynamic changes of the intraocular pressure (IOP) and intracranial pressure (ICP), with normal or pathological values (intracranial hypertension), in nonglaucomatous neurological patients during lumbar puncture (LP). Case-control study with prospective measurement of tonometry in both groups referred for LP. Intraocular pressure, ICP and translaminar pressure difference (TPD) were compared pre- and post-LP. Thirty-six patients (72 eyes) with a mean age of 38.5 (16-64) years and BMI of 26.81 kg/m2 were analysed. The initial mean ICP was 12.81 (± 6.6) mmHg. The mean TPD before and after the LP was 1.48 mmHg and 0.65 mmHg, respectively. The mean IOP of both eyes decreased by 0.8 mmHg post-LP in patients with pathological ICP (p = 0.0193) and normal ICP (p = 0.006). We found a statistically significant decrease of the IOP post-LP compared to pre-LP in both groups, which was greater in patients with pathological ICP. There were no significant differences of the IOP in patients with normal versus pathological ICP pre-LP/post-LP; nor was a correlation found between ICP and IOP. © 2016 Acta Ophthalmologica Scandinavica Foundation. Published by John Wiley & Sons Ltd.

  18. The relationship between root hydraulics and scion vigour across Vitis rootstocks: what role do root aquaporins play?

    PubMed Central

    McElrone, A. J.

    2012-01-01

    Vitis vinifera scions are commonly grafted onto rootstocks of other grape species to influence scion vigour and provide resistance to soil-borne pests and abiotic stress; however, the mechanisms by which rootstocks affect scion physiology remain unknown. This study characterized the hydraulic physiology of Vitis rootstocks that vary in vigour classification by investigating aquaporin (VvPIP) gene expression, fine-root hydraulic conductivity (Lp_r), % aquaporin contribution to Lp_r, scion transpiration, and the size of root systems. Expression of several VvPIP genes was consistently greater in higher-vigour rootstocks under favourable growing conditions in a variety of media and in root tips compared to mature fine roots. Similar to VvPIP expression patterns, fine-root Lp_r and the % aquaporin contribution to Lp_r determined under both osmotic (Lp_r-Osm) and hydrostatic (Lp_r-Hyd) pressure gradients were consistently greater in high-vigour rootstocks. Interestingly, the % aquaporin contribution was nearly identical for Lp_r-Osm and Lp_r-Hyd even though a hydrostatic gradient would induce a predominant flow across the apoplastic pathway. In common-scion greenhouse experiments, leaf area-specific transpiration (E) and total leaf area increased with rootstock vigour and were positively correlated with fine-root Lp_r. These results suggest that increased canopy water demands for scions grafted onto high-vigour rootstocks are matched by adjustments in root-system hydraulic conductivity through the combination of fine-root Lp_r and increased root surface area. PMID:23136166

  19. Dietary medium-chain triglycerides attenuate hepatic lipid deposition in growing rats with protein malnutrition.

    PubMed

    Kuwahata, Masashi; Kubota, Hiroyo; Amano, Saki; Yokoyama, Meiko; Shimamura, Yasuhiro; Ito, Shunsuke; Ogawa, Aki; Kobayashi, Yukiko; Miyamoto, Ken-ichi; Kido, Yasuhiro

    2011-01-01

    The objective of this study was to investigate the effects of dietary medium-chain triglycerides (MCT) on hepatic lipid accumulation in growing rats with protein malnutrition. Weaning rats were fed either a low-protein diet (3%, LP) or control protein diet (20%, CP), in combination with or without MCT. The four groups were as follows: CP-MCT, CP+MCT, LP-MCT, and LP+MCT. Rats in the CP-MCT, CP+MCT and LP+MCT groups were pair-fed their respective diets based on the amount of diet consumed by the LP-MCT group. Rats were fed each experimental diet for 30 d. Four weeks later, the respiratory quotient was higher in the LP-MCT group than those in the other groups during the fasting period. Hepatic triglyceride content increased in the LP groups compared with the CP groups. Hepatic triglyceride content in the LP+MCT group, however, was significantly decreased compared with that in the LP-MCT group. Levels of carnitine palmitoyltransferase (CPT) 1a mRNA and CPT2 mRNA were significantly decreased in the livers of the LP-MCT group, as compared with corresponding mRNA levels of the other groups. These results suggest that ingestion of a low-protein diet caused fatty liver in growing rats. However, when rats were fed the low-protein diet with MCT, hepatic triglyceride deposition was attenuated, and mRNA levels encoding CPT1a and CPT2 were preserved at the levels of rats fed control protein diets.

  20. Variation of mucin adhesion, cell surface characteristics, and molecular mechanisms among Lactobacillus plantarum isolated from different habitats.

    PubMed

    Buntin, Nirunya; de Vos, Willem M; Hongpattarakere, Tipparat

    2017-10-01

    The ability to adhere to mucin varied greatly among 18 Lactobacillus plantarum isolates depending on their isolation habitats. This ability remained at a high level even when the isolates were sequentially exposed to gastrointestinal (GI) stresses. The majority of L. plantarum isolated from shrimp intestine and about half of the food isolates exhibited adhesion ability (51.06-55.04%) about the same as the well-known adhesive strain L. plantarum 299v. Interestingly, five infant isolates, CIF17A2, CIF17A4, CIF17A5, CIF17AN2, and CIF17AN8, exhibited extremely high adhesion ranging from 62.69 to 72.06%. This highly adhesive property, which correlated with distinctively high cell surface hydrophobicity, was significantly weakened after pretreatment with LiCl and guanidine-HCl, confirming the involvement of a protein moiety. According to the draft genome information, orthologues of the major cell wall-anchored adhesion proteins of L. plantarum WCFS1, including lp_0964, lp_1643, lp_3114, lp_2486, lp_3127, and lp_3059, were detected in all isolates. Exceptionally, gene-trait matching between a yeast agglutination assay and the gene encoding the mannose-specific adhesin (lp_1229) confirmed the absence of Msa in the five infant isolates that expressed distinctively high adhesion. Interestingly, predicted flagellin-encoding genes (fliC), first revealed in the lp_1643, lp_2486, and lp_3114 orthologues, may contribute to the highly adhesive property of these isolates.

  1. Discovery and in vivo evaluation of novel RGD-modified lipid-polymer hybrid nanoparticles for targeted drug delivery.

    PubMed

    Zhao, Yinbo; Lin, Dayong; Wu, Fengbo; Guo, Li; He, Gu; Ouyang, Liang; Song, Xiangrong; Huang, Wei; Li, Xiang

    2014-09-29

    In the current study, lipid-shell and polymer-core hybrid nanoparticles (lpNPs) modified with the Arg-Gly-Asp (RGD) peptide and loaded with curcumin (Cur) were developed by an emulsification-solvent volatilization method. The RGD-modified hybrid nanoparticles (RGD-lpNPs) could overcome the poor water solubility of Cur to meet the requirements of intravenous administration and active tumor targeting. The obtained optimal RGD-lpNPs, composed of PLGA (poly(lactic-co-glycolic acid))-mPEG (methoxyl poly(ethylene glycol)), RGD-polyethylene glycol (PEG)-cholesterol (Chol) copolymers and lipids, had good entrapment efficiency, submicron size and a negatively neutral surface charge. The core-shell structure of the RGD-lpNPs was verified by TEM. Cytotoxicity analysis demonstrated that Cur encapsulated in RGD-lpNPs retained potent anti-tumor effects. Flow cytometry analysis revealed that the cellular uptake of Cur encapsulated in the RGD-lpNPs was increased for human umbilical vein endothelial cells (HUVEC). Furthermore, Cur-loaded RGD-lpNPs were more effective in inhibiting tumor growth in a subcutaneous B16 melanoma tumor model. The results of immunofluorescent and immunohistochemical studies of Cur-loaded RGD-lpNP therapy indicated that more apoptotic cells, fewer microvessels, and fewer proliferation-positive cells were observed. In conclusion, RGD-lpNPs encapsulating Cur were developed with enhanced anti-tumor activity in melanoma, and Cur-loaded RGD-lpNPs represent an excellent tumor-targeted formulation of Cur which might be an attractive candidate for cancer therapy.

  2. The Apo(a) gene is the major determinant of variation in plasma Lp(a) levels in African Americans.

    PubMed Central

    Mooser, V; Scheer, D; Marcovina, S M; Wang, J; Guerra, R; Cohen, J; Hobbs, H H

    1997-01-01

    The distributions of plasma lipoprotein(a), or Lp(a), levels differ significantly among ethnic groups. Individuals of African descent have a two- to threefold higher mean plasma level of Lp(a) than either Caucasians or Orientals. In Caucasians, variation in the plasma Lp(a) levels has been shown to be largely determined by sequence differences at the apo(a) locus, but little is known about either the genetic architecture of plasma Lp(a) levels in Africans or why they have higher levels of plasma Lp(a). In this paper we analyze the plasma Lp(a) levels of 257 sibling pairs from 49 independent African American families. The plasma Lp(a) levels were much more similar in the sibling pairs who inherited both apo(a) alleles identical by descent (IBD) (r = .85) than in those that shared one (r = .48) or no (r = .22) parental apo(a) alleles in common. On the basis of these findings, it was estimated that 78% of the variation in plasma Lp(a) levels in African Americans is attributable to polymorphism at either the apo(a) locus or sequences closely linked to it. Thus, the apo(a) locus is the major determinant of variation in plasma Lp(a) levels in African Americans, as well as in Caucasians. No molecular evidence was found for a common "high-expressing" apo(a) allele in the African Americans. We propose that the higher plasma levels of Lp(a) in Africans are likely due to a yet-to-be-identified trans-acting factor(s) that causes an increase in the rate of secretion of apo(a) or a decrease in its catabolism. PMID:9311746

  3. Effects of bezafibrate and of 2 HMG-CoA reductase inhibitors on lipoprotein (a) level in hypercholesterolemic patients.

    PubMed

    Branchi, A; Rovellini, A; Fiorenza, A M; Sommariva, D

    1995-06-01

    Lp(a) level is relatively stable in each individual and is mainly under genetic control. Attempts to lower Lp(a) by pharmacological means have given conflicting results. In order to further evaluate the effect of hypocholesterolemic drugs on Lp(a) level, 66 patients with primary hypercholesterolemia were selected. The vast majority of the patients had Lp(a) concentrations at the low end of the range of distribution, 7 had undetectable Lp(a) levels, and only 2 had Lp(a) higher than 30 mg/dl. No relationship was found between Lp(a) level and serum and lipoprotein lipids. In 12 patients serum cholesterol was well controlled by diet alone and these patients continued the diet for up to 8 months. The other patients were randomly subdivided into 3 treatment groups. The first group received slow-release bezafibrate 400 mg once a day, the second pravastatin 20 mg once a day, and the third simvastatin 10-40 mg once a day. Drug therapy lasted for 8 months. At the end of this period, 22 of 29 patients treated with the 2 HMG-CoA reductase inhibitors had Lp(a) higher than baseline. The difference was statistically significant in both groups of patients. No significant change in Lp(a) was observed in the diet and bezafibrate groups. Serum and LDL cholesterol significantly decreased in all 3 drug groups. The increase in Lp(a) after the 2 HMG-CoA reductase inhibitors was small enough to have negligible effects on cardiovascular risk, but it raises the question of the role of the LDL receptor in the catabolism of Lp(a).

  4. Effect of Lactobacillus plantarum LP-Onlly on gut flora and colitis in interleukin-10 knockout mice.

    PubMed

    Xia, Yang; Chen, Hong-Qi; Zhang, Min; Jiang, Yan-Qun; Hang, Xiao-Min; Qin, Huan-Long

    2011-02-01

    Probiotics are used in the therapy of inflammatory bowel disease. This study aimed to determine the effects of the probiotic Lactobacillus plantarum LP-Onlly (LP) on gut flora and colitis in interleukin-10 knockout (IL-10(-/-)) mice, a model of spontaneous colitis. IL-10(-/-) and wild-type mice were used at 8 weeks of age, and LP was administered by gavage at a dose of 10^9 cells/day per mouse for 4 weeks. Mice were maintained for another week without LP treatment. Colonic tissues were collected for histological and ultrastructural analysis at death after the 4 weeks of LP treatment, and feces were collected at 1-week intervals throughout the experiment for the analysis of gut flora and LP using selective culture-based techniques. Compared with control mice, IL-10(-/-) mice developed severe intestinal inflammation and tissue damage, and had an abnormal composition of gut microflora. LP administration attenuated colitis, with decreased inflammatory scoring and histological injury in the colon of IL-10(-/-) mice. In addition, LP administration increased the numbers of beneficial total bifidobacteria and lactobacilli, and decreased the numbers of potentially pathogenic enterococci and Clostridium perfringens, although the decrease in coliforms was not significant after LP treatment in IL-10(-/-) mice. Oral administration of LP was effective in the treatment of colitis, with direct modification of the gut microflora in IL-10(-/-) mice. This probiotic strain could be used as a potential adjuvant in the therapy of inflammatory bowel disease, although further studies are required in humans. © 2011 Journal of Gastroenterology and Hepatology Foundation and Blackwell Publishing Asia Pty Ltd.

  5. Improvement of kurtosis-guided-grams via Gini index for bearing fault feature identification

    NASA Astrophysics Data System (ADS)

    Miao, Yonghao; Zhao, Ming; Lin, Jing

    2017-12-01

    A group of kurtosis-guided-grams, such as Kurtogram, Protrugram and SKRgram, is designed to detect the resonance band excited by faults based on the sparsity index. However, a common issue associated with these methods is that they tend to choose the frequency band with individual impulses rather than the desired fault impulses. This may be attributed to the selection of the sparsity index, kurtosis, which is vulnerable to impulsive noise. In this paper, to solve the problem, a sparsity index, called the Gini index, is introduced as an alternative estimator for the selection of the resonance band. It has been found that the sparsity index is still able to provide guidelines for the selection of the fault band without prior information of the fault period. More importantly, the Gini index has unique performance in random-impulse resistance, which renders the improved methods using the index free from the random impulse caused by external knocks on the bearing housing, or electromagnetic interference. By virtue of these advantages, the improved methods using the Gini index not only overcome the shortcomings but are more effective under harsh working conditions, even in the complex structure. Finally, the comparison between the kurtosis-guided-grams and the improved methods using the Gini index is made using the simulated and experimental data. The results verify the effectiveness of the improvement by both the fixed-axis bearing and planetary bearing fault signals.
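
    The contrast drawn above between kurtosis and the Gini index can be reproduced in a few lines: the Gini index of the sorted absolute amplitudes barely moves when a single stray impulse is added to noise, whereas kurtosis is inflated dramatically. The signals and amplitudes below are synthetic assumptions used only to illustrate that behaviour.

    # Sketch comparing the Gini sparsity index with kurtosis on synthetic signals:
    # one with repeated (fault-like) impulses, one with a single stray impulse.
    import numpy as np

    def gini_index(x):
        """Gini sparsity index of |x| (near 0 = energy spread out, near 1 = very sparse)."""
        a = np.sort(np.abs(x))
        n = a.size
        return 1 - 2 * np.sum(a / a.sum() * (n - np.arange(1, n + 1) + 0.5) / n)

    def kurtosis(x):
        x = x - x.mean()
        return np.mean(x ** 4) / np.mean(x ** 2) ** 2

    rng = np.random.default_rng(0)
    noise = rng.normal(size=5000)

    periodic = noise.copy()
    periodic[::250] += 8.0                       # repeated fault-like impulses
    single = noise.copy()
    single[1234] += 40.0                         # one large accidental impulse

    for name, s in [("periodic impulses", periodic), ("single impulse", single)]:
        print(f"{name:18s}  kurtosis={kurtosis(s):6.1f}  gini={gini_index(s):.3f}")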

  6. Learning what matters: A neural explanation for the sparsity bias.

    PubMed

    Hassall, Cameron D; Connor, Patrick C; Trappenberg, Thomas P; McDonald, John J; Krigolson, Olave E

    2018-05-01

    The visual environment is filled with complex, multi-dimensional objects that vary in their value to an observer's current goals. When faced with multi-dimensional stimuli, humans may rely on biases to learn to select those objects that are most valuable to the task at hand. Here, we show that decision making in a complex task is guided by the sparsity bias: the focusing of attention on a subset of available features. Participants completed a gambling task in which they selected complex stimuli that varied randomly along three dimensions: shape, color, and texture. Each dimension comprised three features (e.g., color: red, green, yellow). Only one dimension was relevant in each block (e.g., color), and a randomly-chosen value ranking determined outcome probabilities (e.g., green > yellow > red). Participants were faster to respond to infrequent probe stimuli that appeared unexpectedly within stimuli that possessed a more valuable feature than to probes appearing within stimuli possessing a less valuable feature. Event-related brain potentials recorded during the task provided a neurophysiological explanation for sparsity as a learning-dependent increase in optimal attentional performance (as measured by the N2pc component of the human event-related potential) and a concomitant learning-dependent decrease in prediction errors (as measured by the feedback-elicited reward positivity). Together, our results suggest that the sparsity bias guides human reinforcement learning in complex environments. Copyright © 2018 Elsevier B.V. All rights reserved.

  7. Low-rank and Adaptive Sparse Signal (LASSI) Models for Highly Accelerated Dynamic Imaging

    PubMed Central

    Ravishankar, Saiprasad; Moore, Brian E.; Nadakuditi, Raj Rao; Fessler, Jeffrey A.

    2017-01-01

    Sparsity-based approaches have been popular in many applications in image processing and imaging. Compressed sensing exploits the sparsity of images in a transform domain or dictionary to improve image recovery from undersampled measurements. In the context of inverse problems in dynamic imaging, recent research has demonstrated the promise of sparsity and low-rank techniques. For example, the patches of the underlying data are modeled as sparse in an adaptive dictionary domain, and the resulting image and dictionary estimation from undersampled measurements is called dictionary-blind compressed sensing, or the dynamic image sequence is modeled as a sum of low-rank and sparse (in some transform domain) components (L+S model) that are estimated from limited measurements. In this work, we investigate a data-adaptive extension of the L+S model, dubbed LASSI, where the temporal image sequence is decomposed into a low-rank component and a component whose spatiotemporal (3D) patches are sparse in some adaptive dictionary domain. We investigate various formulations and efficient methods for jointly estimating the underlying dynamic signal components and the spatiotemporal dictionary from limited measurements. We also obtain efficient sparsity penalized dictionary-blind compressed sensing methods as special cases of our LASSI approaches. Our numerical experiments demonstrate the promising performance of LASSI schemes for dynamic magnetic resonance image reconstruction from limited k-t space data compared to recent methods such as k-t SLR and L+S, and compared to the proposed dictionary-blind compressed sensing method. PMID:28092528
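
    The low-rank plus sparse split that LASSI builds on can be sketched with a simple alternating scheme on a space-by-time (Casorati) matrix: singular-value thresholding updates the low-rank background and soft-thresholding updates the sparse dynamic component. The code below is a robust-PCA-style simplification with synthetic data and arbitrary thresholds, not the dictionary-adaptive LASSI algorithm or its k-t sampling model.

    # Hedged sketch of the low-rank + sparse (L+S) decomposition that LASSI extends:
    # alternate singular-value thresholding for L and soft-thresholding for S.
    import numpy as np

    def svt(M, tau):
        """Singular-value thresholding: the prox of the nuclear norm."""
        U, s, Vt = np.linalg.svd(M, full_matrices=False)
        return U @ np.diag(np.maximum(s - tau, 0)) @ Vt

    def soft(M, tau):
        return np.sign(M) * np.maximum(np.abs(M) - tau, 0)

    rng = np.random.default_rng(0)
    background = np.outer(rng.normal(size=64), rng.normal(size=30))   # rank-1 "static" part
    dynamics = np.zeros((64, 30)); dynamics[10:14, 15:20] = 3.0       # sparse "dynamic" part
    X = background + dynamics + 0.01 * rng.normal(size=(64, 30))      # space x time matrix

    L, S = np.zeros_like(X), np.zeros_like(X)
    for _ in range(50):
        L = svt(X - S, tau=1.0)      # low-rank update
        S = soft(X - L, tau=0.5)     # sparse update

    print("rank(L) =", np.linalg.matrix_rank(L, tol=1e-6))
    print("nnz(S)  =", int((np.abs(S) > 1e-6).sum()))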

  8. Low-Rank and Adaptive Sparse Signal (LASSI) Models for Highly Accelerated Dynamic Imaging.

    PubMed

    Ravishankar, Saiprasad; Moore, Brian E; Nadakuditi, Raj Rao; Fessler, Jeffrey A

    2017-05-01

    Sparsity-based approaches have been popular in many applications in image processing and imaging. Compressed sensing exploits the sparsity of images in a transform domain or dictionary to improve image recovery from undersampled measurements. In the context of inverse problems in dynamic imaging, recent research has demonstrated the promise of sparsity and low-rank techniques. For example, the patches of the underlying data are modeled as sparse in an adaptive dictionary domain, and the resulting image and dictionary estimation from undersampled measurements is called dictionary-blind compressed sensing, or the dynamic image sequence is modeled as a sum of low-rank and sparse (in some transform domain) components (L+S model) that are estimated from limited measurements. In this work, we investigate a data-adaptive extension of the L+S model, dubbed LASSI, where the temporal image sequence is decomposed into a low-rank component and a component whose spatiotemporal (3D) patches are sparse in some adaptive dictionary domain. We investigate various formulations and efficient methods for jointly estimating the underlying dynamic signal components and the spatiotemporal dictionary from limited measurements. We also obtain efficient sparsity penalized dictionary-blind compressed sensing methods as special cases of our LASSI approaches. Our numerical experiments demonstrate the promising performance of LASSI schemes for dynamic magnetic resonance image reconstruction from limited k-t space data compared to recent methods such as k-t SLR and L+S, and compared to the proposed dictionary-blind compressed sensing method.

  9. The role of indoleamine 2,3-dioxygenase in LP-BM5 murine retroviral disease progression.

    PubMed

    O'Connor, Megan A; Green, William R

    2013-05-17

    Indoleamine 2,3-dioxygenase (IDO) is an immunomodulatory intracellular enzyme involved in tryptophan degradation. IDO is induced during cancer and microbial infections by cytokines, ligation of co-stimulatory molecules and/or activation of pattern recognition receptors, ultimately leading to modulation of the immune response. LP-BM5 murine retroviral infection induces murine AIDS (MAIDS), which is characterized by profound and broad immunosuppression of T- and B-cell responses. Our lab has previously described multiple mechanisms regulating the development of the immunodeficiency of LP-BM5-induced disease, including Programmed Death 1 (PD-1), IL-10, and T-regulatory (Treg) cells. Immunosuppressive roles of IDO have been demonstrated in other retroviral models, suggesting a possible role for IDO during LP-BM5-induced retroviral disease progression and/or the development of viral load. Mice deficient in IDO (B6.IDO-/-) and wildtype C57BL/6 (B6) mice were infected with LP-BM5 murine retrovirus. MAIDS and LP-BM5 viral load were assessed at termination. As expected, IDO was not inducible in B6.IDO-/- mice during LP-BM5 infection. B6.IDO-/- mice infected with LP-BM5 retrovirus succumbed to MAIDS, as indicated by splenomegaly, serum hyper IgG2a and IgM, decreased responsiveness to B- and T-cell mitogens, conversion of a proportion of CD4+ T cells from Thy1.2+ to Thy1.2-, and increased percentages of CD11b+Gr-1+ cells. Infected B6.IDO-/- mice also developed disease with roughly equivalent kinetics compared to infected B6 mice. Splenic viral loads of B6 and B6.IDO-/- mice were also equivalent after infection, as measured by LP-BM5-specific Def Gag and Eco Gag viral mRNA determined by qRT-PCR. Collectively, these results demonstrate that IDO neither plays an essential role in, nor is required for, LP-BM5-induced disease progression or LP-BM5 viral load.

  10. Gestational Protein Restriction Impairs Insulin-Regulated Glucose Transport Mechanisms in Gastrocnemius Muscles of Adult Male Offspring

    PubMed Central

    Blesson, Chellakkan S.; Sathishkumar, Kunju; Chinnathambi, Vijayakumar

    2014-01-01

    Type II diabetes originates from various genetic and environmental factors. Recent studies showed that an adverse uterine environment such as that caused by a gestational low-protein (LP) diet can cause insulin resistance in adult offspring. The mechanism of insulin resistance induced by gestational protein restriction is not clearly understood. Our aim was to investigate the role of insulin signaling molecules in gastrocnemius muscles of gestational LP diet–exposed male offspring to understand their role in LP-induced insulin resistance. Pregnant Wistar rats were fed a control (20% protein) or isocaloric LP (6%) diet from gestational day 4 until delivery and a normal diet after weaning. Only male offspring were used in this study. Glucose and insulin responses were assessed after a glucose tolerance test. mRNA and protein levels of molecules involved in insulin signaling were assessed at 4 months in gastrocnemius muscles. Muscles were incubated ex vivo with insulin to evaluate insulin-induced phosphorylation of insulin receptor (IR), Insulin receptor substrate-1, Akt, and AS160. LP diet-fed rats gained less weight than controls during pregnancy. Male pups from LP diet–fed mothers were smaller but exhibited catch-up growth. Plasma glucose and insulin levels were elevated in LP offspring when subjected to a glucose tolerance test; however, fasting levels were comparable. LP offspring showed increased expression of IR and AS160 in gastrocnemius muscles. Ex vivo treatment of muscles with insulin showed increased phosphorylation of IR (Tyr972) in controls, but LP rats showed higher basal phosphorylation. Phosphorylation of Insulin receptor substrate-1 (Tyr608, Tyr895, Ser307, and Ser318) and AS160 (Thr642) were defective in LP offspring. Further, glucose transporter type 4 translocation in LP offspring was also impaired. A gestational LP diet leads to insulin resistance in adult offspring by a mechanism involving inefficient insulin-induced IR, Insulin receptor substrate-1, and AS160 phosphorylation and impaired glucose transporter type 4 translocation. PMID:24797633

  11. The effects of gene disruption of Kre6-like proteins on the phenotype of β-glucan-producing Aureobasidium pullulans.

    PubMed

    Uchiyama, Hirofumi; Iwai, Atsushi; Dohra, Hideo; Ohnishi, Toshiyuki; Kato, Tatsuya; Park, Enoch Y

    2018-05-01

    Killer toxin resistant 6 (Kre6) and its paralog, suppressor of Kre null 1 (Skn1), are thought to be involved in the biosynthesis of cell wall β-(1 → 6)-D-glucan in baker's yeast, Saccharomyces cerevisiae. The Δkre6Δskn1 mutant of S. cerevisiae and other fungi shows severe growth defects due to the failure to synthesize normal cell walls. In this study, two homologs of Kre6, namely, K6LP1 (Kre6-like protein 1) and K6LP2 (Kre6-like protein 2), were identified in Aureobasidium pullulans M-2 by draft genome analysis. The Δk6lp1, Δk6lp2, and Δk6lp1Δk6lp2 mutants were generated in order to confirm the functions of the Kre6-like proteins in A. pullulans M-2. The cell morphologies of Δk6lp1 and Δk6lp1Δk6lp2 appeared to be different from those of wild type and Δk6lp2 in both their yeast and hyphal forms. The productivity of the extracellular polysaccharides, mainly composed of β-(1 → 3),(1 → 6)-D-glucan (β-glucan), of the mutants was 5.1-17.3% less than that of wild type, and the degree of branching in the extracellular β-glucan of mutants was 14.5-16.8% lower than that of wild type. This study showed that the gene disruption of Kre6-like proteins affected the cell morphology, the productivity of extracellular polysaccharides, and the structure of extracellular β-glucan, but it did not have a definite effect on the cell viability even in Δk6lp1Δk6lp2, unlike in the Δkre6Δskn1 of S. cerevisiae.

  12. Mipomersen, an antisense oligonucleotide to apolipoprotein B-100, reduces lipoprotein(a) in various populations with hypercholesterolemia: Results of 4 Phase III Trials

    PubMed Central

    Santos, Raul D.; Raal, MD Frederick J.; Catapano, Alberico L.; Witztum, Joseph L; Steinhagen-Thiessen, Elisabeth; Tsimikas, Sotirios

    2015-01-01

    Objective Lp(a) is an independent, causal, genetic risk factor for cardiovascular disease and aortic stenosis. Current pharmacologic lipid-lowering therapies do not optimally lower Lp(a), particularly in patients with familial hypercholesterolemia (FH). Approach and Results In four Phase III trials, 382 patients on maximally tolerated lipid-lowering therapy were randomized 2:1 to weekly subcutaneous mipomersen 200 mg (n=256) or placebo (n=126) for 26 weeks. Populations included homozygous FH (HoFH), heterozygous FH (HeFH) with concomitant coronary artery disease (CAD), severe hypercholesterolemia (HC), and HC at high risk for CAD. Lp(a) was measured eight times between baseline and week 28 inclusive. Of the 382 patients, 57% and 44% had baseline Lp(a) levels >30 mg/dL and >50 mg/dL, respectively. In the pooled analysis, the mean percent decrease (median, interquartile range, IQR) in Lp(a) at 28 weeks was significantly greater in the mipomersen group compared with placebo (-26.4 (-42.8, 5.4) vs. -0.0 (10.7, 15.3), p<0.001). In the mipomersen group in patients with Lp(a) levels >30 mg/dL or >50 mg/dL, attainment of Lp(a) values ≤30 mg/dL or ≤50 mg/dL was most frequent in HoFH and severe HC patients. In the combined groups, modest correlations were present between percent change in apoB and Lp(a) (r=0.43, p<0.001) and LDL-C and Lp(a) (r=0.36, p<0.001) plasma levels. Conclusions Mipomersen consistently and effectively reduced Lp(a) levels in patients with a variety of lipid abnormalities and cardiovascular risk. Modest correlations were present between apoB and Lp(a) lowering but the mechanistic relevance mediating Lp(a) reduction is currently unknown. PMID:25614280

  13. Genomic organization, sequence characterization and expression analysis of Tenebrio molitor apolipophorin-III in response to an intracellular pathogen, Listeria monocytogenes.

    PubMed

    Noh, Ju Young; Patnaik, Bharat Bhusan; Tindwa, Hamisi; Seo, Gi Won; Kim, Dong Hyun; Patnaik, Hongray Howrelia; Jo, Yong Hun; Lee, Yong Seok; Lee, Bok Luel; Kim, Nam Jung; Han, Yeon Soo

    2014-01-25

    Apolipophorin III (apoLp-III) is a well-known hemolymph protein with a functional role in lipid transport and the immune response of insects. We cloned the full-length cDNA encoding the putative apoLp-III from larvae of the coleopteran beetle Tenebrio molitor (TmapoLp-III) by identifying clones corresponding to a partial sequence of TmapoLp-III, followed by full-length sequencing with a clone-by-clone primer walking method. The complete cDNA consists of 890 nucleotides, including an ORF encoding 196 amino acid residues. Excluding a putative signal peptide of the first 20 amino acid residues, the 176-residue mature apoLp-III has a calculated molecular mass of 19,146 Da. Genomic sequence analysis with respect to its cDNA showed that TmapoLp-III is organized into four exons interrupted by three introns. Several immune-related transcription factor binding sites were discovered in the putative 5'-flanking region. BLAST and phylogenetic analyses reveal that TmapoLp-III has high sequence identity (88%) with Tribolium castaneum apoLp-III but shares little sequence homology (<26%) with other apoLp-IIIs. Homology modeling of TmapoLp-III shows a bundle of five amphipathic alpha helices, including a short helix 3'. The 'helix-short helix-helix' motif was predicted to be implicated in lipid binding interactions, through reversible conformational changes that accommodate the hydrophobic residues to the exterior for stability. The highest level of TmapoLp-III mRNA was detected at late pupal stages, although it is expressed at lower levels in the larval and adult stages. Tissue-specific expression analysis showed significantly higher transcript levels in larval fat body and adult integument. In addition, TmapoLp-III mRNA was found to be highly upregulated in the late stages of L. monocytogenes or E. coli challenge. These results indicate that TmapoLp-III may play an important role in innate immune responses against bacterial pathogens in T. molitor. Copyright © 2013 Elsevier B.V. All rights reserved.

  14. PLA2G7 genotype, Lp-PLA2 activity and coronary heart disease risk in 10,494 cases and 15,624 controls of European ancestry

    PubMed Central

    Casas, Juan P.; Ninio, Ewa; Panayiotou, Andrie; Palmen, Jutta; Cooper, Jackie A; Ricketts, Sally L; Sofat, Reecha; Nicolaides, Andrew N; Corsetti, James P; Fowkes, F Gerry R; Tzoulaki, Ioanna; Kumari, Meena; Brunner, Eric J; Kivimaki, Mika; Marmot, Michael G; Hoffmann, Michael M; Winkler, Karl; März, Winfred; Ye, Shu; Stirnadel, Heide A; Khaw, Kay-Tee; Humphries, Steve E; Sandhu, Manjinder S; Hingorani, Aroon D; Talmud, Philippa J

    2012-01-01

    Background Higher Lp-PLA2 activity is associated with increased risk of coronary heart disease (CHD), making Lp-PLA2 a potential therapeutic target. PLA2G7 variants associated with Lp-PLA2 activity could evaluate whether this relationship is causal. Methods and Results A meta-analysis including a total of 12 studies (5 prospective, 4 case-control, 1 case-only and 2 cross-sectional, n=26,118) was undertaken to examine the association of: (i) Lp-PLA2 activity vs. cardiovascular biomarkers and risk factors and CHD events (two prospective studies; n=4884); (ii) PLA2G7 SNPs and Lp-PLA2 activity (3 prospective, 2 case-control, 2 cross-sectional studies; up to n=6094); and (iii) PLA2G7 SNPs and angiographic coronary artery disease (2 case-control, 1 case-only study; n=4971 cases) and CHD events (5 prospective, 2 case-control studies; n=5523). Lp-PLA2 activity correlated with several CHD risk markers. The hazard ratio for CHD events in the top vs. bottom quartile of Lp-PLA2 activity was 1.61 (95%CI: 1.31, 1.99) and 1.17 (95%CI: 0.91, 1.51) after adjustment for baseline traits. Of seven SNPs, rs1051931 (A379V) showed the strongest association with Lp-PLA2 activity, VV subjects having 7.2% higher activity than AAs. Genotype was not associated with risk markers, angiographic coronary disease (OR 1.03 (95%CI 0.80, 1.32)), or CHD events (OR 0.98 (95%CI 0.82, 1.17)). Conclusions Unlike Lp-PLA2 activity, PLA2G7 variants associated with modest effects on Lp-PLA2 activity were not associated with cardiovascular risk markers, coronary atheroma or CHD. Larger association studies, identification of SNPs with larger effects, or randomised trials of specific Lp-PLA2 inhibitors are needed to confirm or refute a contributory role for Lp-PLA2 in CHD. PMID:20479152

  15. Simvastatin but not bezafibrate decreases plasma lipoprotein-associated phospholipase A₂ mass in type 2 diabetes mellitus: relevance of high sensitive C-reactive protein, lipoprotein profile and low-density lipoprotein (LDL) electronegativity.

    PubMed

    Constantinides, Alexander; de Vries, Rindert; van Leeuwen, Jeroen J J; Gautier, Thomas; van Pelt, L Joost; Tselepis, Alexandros D; Lagrost, Laurent; Dullaart, Robin P F

    2012-10-01

    Plasma lipoprotein-associated phospholipase A(2) (Lp-PLA(2)) levels predict incident cardiovascular disease, making Lp-PLA(2) an emerging therapeutic target. We determined Lp-PLA(2) responses to statin and fibrate administration in type 2 diabetes mellitus, and assessed relationships of changes in Lp-PLA(2) with subclinical inflammation and lipoprotein characteristics. A placebo-controlled cross-over study (three 8-week treatment periods with simvastatin (40 mg daily), bezafibrate (400 mg daily) and their combination) was carried out in 14 male type 2 diabetic patients. Plasma Lp-PLA(2) mass was measured by turbidimetric immunoassay. Plasma Lp-PLA(2) decreased (-21 ± 4%) in response to simvastatin (p<0.05 from baseline and placebo), but was unaffected by bezafibrate (1 ± 5%). The drop in Lp-PLA(2) during combined treatment (-17 ± 3%, p<0.05) was similar to that during simvastatin alone. The Lp-PLA(2) changes during the 3 active lipid-lowering treatment periods were related positively to baseline levels of high-sensitivity C-reactive protein, non-HDL cholesterol, triglycerides, the total cholesterol/HDL cholesterol ratio and lower LDL electronegativity (p<0.02 to p<0.01), and inversely to baseline Lp-PLA(2) (p<0.01). Lp-PLA(2) responses correlated inversely with changes in non-HDL cholesterol, triglycerides and the total cholesterol/HDL cholesterol ratio during treatment (p<0.05 to p<0.02). In type 2 diabetes mellitus, plasma Lp-PLA(2) is likely to be lowered by statin treatment only. Enhanced subclinical inflammation and more severe dyslipidemia may predict diminished Lp-PLA(2) responses during lipid-lowering treatment, which in turn appear to be quantitatively dissociated from decreases in apolipoprotein B lipoproteins. Conventional lipid-lowering treatment may be insufficient for optimal Lp-PLA(2) lowering in diabetes mellitus. Copyright © 2012 European Federation of Internal Medicine. Published by Elsevier B.V. All rights reserved.

  16. Ultradeformable Archaeosomes for Needle Free Nanovaccination with Leishmania braziliensis Antigens.

    PubMed

    Higa, Leticia H; Arnal, Laura; Vermeulen, Mónica; Perez, Ana Paula; Schilrreff, Priscila; Mundiña-Weilenmann, Cecilia; Yantorno, Osvaldo; Vela, María Elena; Morilla, María José; Romero, Eder Lilia

    2016-01-01

    Total antigens from Leishmania braziliensis promastigotes, solubilized with sodium cholate (dsLp), were formulated within ultradeformable nanovesicles (dsLp-ultradeformable archaeosomes (dsLp-UDA) and dsLp-ultradeformable liposomes (dsLp-UDL)) and topically administered to Balb/c mice. Ultradeformable nanovesicles can penetrate the intact stratum corneum up to the viable epidermis, with no aid of classical permeation enhancers that can damage the barrier function of the skin. Briefly, 100 nm unilamellar dsLp-UDA (soybean phosphatidylcholine:Halorubrum tebenquichense total polar lipids (TPL):sodium cholate, 3:3:1 w:w) with a -31.45 mV Z potential, containing 4.84 ± 0.53% w/w protein/lipid dsLp, and with a 235 kPa Young's modulus were prepared. In vitro, dsLp-UDA was extensively taken up by J774A1 and bone marrow-derived cells, and was the only formulation that induced an immediate secretion of IL-6, IL-12p40 and TNF-α, followed by IL-1β, by J774A1 cells. Such extensive uptake is a key feature of UDA ascribed to the highly negatively charged archaeolipids of the TPL, which are recognized by a receptor specialized in uptake and not involved in downstream signaling. Although dsLp alone was also immunostimulatory on J774A1 cells, when applied twice a week on consecutive days over 7 weeks to Balb/c mice it raised no measurable response unless associated with UDL or UDA. The highest systemic response, IgG2a mediated and 1 log lower than that of intramuscular dsLp-Al2O3, was elicited by dsLp-UDA. These findings suggest that in vivo, UDL and UDA acted as penetration enhancers for dsLp, but only dsLp-UDA, owing to its pronounced uptake by APC, succeeded as a topical adjuvant. The actual TPL composition, fully made of sn2,3 ether-linked saturated archaeolipids, gives the UDA bilayer resistance against chemical, physical and enzymatic attacks that destroy ordinary phospholipid bilayers. Together, these properties make UDA a promising platform for topical targeted drug delivery and vaccination, which may be of help for countries with deficient healthcare systems.

  17. Antisense oligonucleotide directed to human apolipoprotein B-100 reduces lipoprotein(a) levels and oxidized phospholipids on human apolipoprotein B-100 particles in lipoprotein(a) transgenic mice.

    PubMed

    Merki, Esther; Graham, Mark J; Mullick, Adam E; Miller, Elizabeth R; Crooke, Rosanne M; Pitas, Robert E; Witztum, Joseph L; Tsimikas, Sotirios

    2008-08-12

    Lipoprotein (a) [Lp(a)] is a genetic cardiovascular risk factor that preferentially binds oxidized phospholipids (OxPL) in plasma. There is a lack of therapeutic agents that reduce plasma Lp(a) levels. Transgenic mice overexpressing human apolipoprotein B-100 (h-apoB-100 [h-apoB mice]) or h-apoB-100 plus human apo(a) to generate genuine Lp(a) particles [Lp(a) mice] were treated with the antisense oligonucleotide mipomersen directed to h-apoB-100 mRNA or control antisense oligonucleotide for 11 weeks by intraperitoneal injection. Mice were then followed up for an additional 10 weeks off therapy. Lp(a) levels [apo(a) bound to apoB-100] and apo(a) levels ["free" apo(a) plus apo(a) bound to apoB-100] were measured by chemiluminescent enzyme-linked immunoassay and commercial assays, respectively. The content of OxPL on h-apoB-100 particles (OxPL/h-apoB) was measured by capturing h-apoB-100 in microtiter wells and detecting OxPL by antibody E06. As expected, mipomersen significantly reduced plasma h-apoB-100 levels in both groups of mice. In Lp(a) mice, mipomersen significantly reduced Lp(a) levels by approximately 75% compared with baseline (P<0.0001) but had no effect on apo(a) levels or hepatic apo(a) mRNA expression. OxPL/h-apoB levels were much higher at baseline in Lp(a) mice compared with h-ApoB mice (P<0.0001) but decreased in a time-dependent fashion with mipomersen. There was no effect of the control antisense oligonucleotide on lipoprotein levels or oxidative parameters. Mipomersen significantly reduced Lp(a) and OxPL/apoB levels in Lp(a) mice. The present study demonstrates that h-apoB-100 is a limiting factor in Lp(a) particle synthesis in this Lp(a) transgenic model. If applicable to humans, mipomersen may represent a novel therapeutic approach to reducing Lp(a) levels and their associated OxPL.

  18. Metabolic and anthropometric determinants of serum Lp(a) concentrations and Apo(a) polymorphism in a healthy Arab population.

    PubMed

    Akanji, A O; al-Shayji, I A; Kumar, P

    1999-08-01

    Blood lipoprotein(a) [Lp(a)] concentrations are an important risk factor for atherosclerosis. The basis for this atherogenic property of Lp(a) and the factors that influence its cross-population levels, however, remain poorly understood. To investigate the relationship between serum Lp(a) and metabolic and anthropometric parameters in a healthy Kuwaiti population. Cross-sectional study. 177 (72 male, 105 female) randomly recruited healthy Kuwaiti Arabs aged 17-60 y. Metabolic parameters in serum: Lp(a), apo(a) phenotypes, lipids and lipoproteins, glucose and urate. Anthropometric parameters: body mass index (BMI) and waist:hip ratio (WHR). The distribution of Lp(a) concentrations was positively skewed (median 153 mg/l, range 0-1086). Women had higher concentrations (194, 0-1086) than men (117, 0-779), P = 0.069. Lp(a) and insulin concentrations were significantly higher when the men and women were obese. In all subjects, there were significant correlations between Lp(a) and BMI (r = 0.23), total cholesterol (TC) (r = 0.17) and LDL (r = 0.20). Lp(a) correlated only with glucose in men (r = 0.28). In women it correlated with age (r = 0.20), BMI (r = 0.30), BP (r = 0.20), TC (r = 0.20) and LDL (r = 0.26). Multivariate analyses confirmed BMI and low-density lipoprotein (LDL) as the significant determinants of serum Lp(a). On apo(a) phenotyping, 114 (67%), 51 (30%) and 6 (4%) had single, double and null phenotypes, respectively. The isoforms and their corresponding kringle IV repeat numbers were: F (14 repeats in 3%, mean Lp(a) 497 mg/l); S1 (19 repeats in 14%, mean 245 mg/l); S2 (23 repeats in 16%, mean 264 mg/l); S3 (27 repeats in 35%, mean 236 mg/l); and S4 (35 repeats in 28%, mean 235 mg/l). The results from the Kuwaiti population studied suggest that: (1) serum Lp(a) concentrations and distribution are similar to the pattern in Caucasians and Asians but not African-Americans or Africans; (2) serum Lp(a) is variably influenced by BMI and LDL, with the impact of either factor differing between the sexes; (3) there is a high frequency of the single-banded phenotype; (4) contrary to reports in some Caucasian and Asian populations, there is no simple relationship between kringle IV repeat numbers and plasma Lp(a) concentrations.

  19. Liposomal 64Cu-PET Imaging of Anti-VEGF Drug Effects on Liposomal Delivery to Colon Cancer Xenografts.

    PubMed

    Blocker, Stephanie J; Douglas, Kirk A; Polin, Lisa Anne; Lee, Helen; Hendriks, Bart S; Lalo, Enxhi; Chen, Wei; Shields, Anthony F

    2017-01-01

    Liposomes (LP) deliver drug to tumors due to enhanced permeability and retention (EPR). LP were labeled with 64Cu for positron emission tomography (PET) to image tumor localization. Bevacizumab (bev), a VEGF-targeted antibody, may modify LP delivery by altering tumor EPR, and this change can also be imaged. Objective: Assess the utility of 64Cu-labeled LP for PET in measuring altered LP delivery early after treatment with bev. Methods: HT-29 human colorectal adenocarcinoma tumors were grown subcutaneously in SCID mice. Empty LP MM-DX-929 (Merrimack Pharmaceuticals, Inc. Cambridge, MA) were labeled with 64CuCl2 chelated with 4-DEAP-ATSC. Tumor-bearing mice received ~200-300 μCi of 64Cu-MM-DX-929 and were imaged with microPET. All mice were scanned before and after the treatment period, in which half of the mice received bev for one week. Scans were compared for changes in LP accumulation during this time. Initially, tissues were collected after the second PET for biodistribution measurements and histological analysis. Subsequent groups were divided for further treatment. Tumor growth following bev treatment, with or without LP-I, was assessed compared to untreated controls. Results: PET scans of untreated mice showed increased uptake of 64Cu-MM-DX-929, with a mean change in tumor SUVmax of 43.9%±6.6% (n=10) after 7 days. Conversely, images of treated mice showed that liposome delivery did not increase, with changes in SUVmax of 7.6%±4.8% (n=12). Changes in tumor SUVmax were significantly different between the two groups (p=0.0003). Histology of tumor tissues indicated that short-term bev was able to alter vessel size. Therapeutically, while bev monotherapy, LP-I monotherapy, and treatment with bev followed by LP-I all slowed HT-29 tumor growth compared to controls, the combination provided no additional therapeutic benefit. Conclusions: PET with the tracer LP 64Cu-MM-DX-929 can detect significant differences in LP delivery to colon tumors treated with bev when compared to untreated controls. Imaging with 64Cu-MM-DX-929 is sensitive enough to measure drug-induced changes in LP localization, which can have an effect on outcomes of treatment with LP.

  20. Multimodal manifold-regularized transfer learning for MCI conversion prediction.

    PubMed

    Cheng, Bo; Liu, Mingxia; Suk, Heung-Il; Shen, Dinggang; Zhang, Daoqiang

    2015-12-01

    As the early stage of Alzheimer's disease (AD), mild cognitive impairment (MCI) has a high chance of converting to AD. Effective prediction of such conversion from MCI to AD is of great importance for early diagnosis of AD and also for evaluating AD risk pre-symptomatically. Unlike most previous methods that used only the samples from a target domain to train a classifier, in this paper, we propose a novel multimodal manifold-regularized transfer learning (M2TL) method that jointly utilizes samples from another domain (e.g., AD vs. normal controls (NC)) as well as unlabeled samples to boost the performance of MCI conversion prediction. Specifically, the proposed M2TL method includes two key components. The first is a kernel-based maximum mean discrepancy criterion, which helps eliminate the potential negative effect induced by the distributional difference between the auxiliary domain (i.e., AD and NC) and the target domain (i.e., MCI converters (MCI-C) and MCI non-converters (MCI-NC)). The second is a semi-supervised multimodal manifold-regularized least squares classification method, in which the target-domain samples, the auxiliary-domain samples, and the unlabeled samples can be jointly used for training the classifier. Furthermore, with the integration of a group sparsity constraint into the objective function, the proposed M2TL is able to select informative samples to build a robust classifier. Experimental results on the Alzheimer's Disease Neuroimaging Initiative (ADNI) database validate the effectiveness of the proposed method, which achieves a classification accuracy of 80.1% for MCI conversion prediction and outperforms state-of-the-art methods.
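
    The kernel-based maximum mean discrepancy (MMD) criterion mentioned above measures the distance between the auxiliary-domain and target-domain feature distributions in a reproducing kernel Hilbert space. Below is a minimal numpy sketch of the standard (biased) squared-MMD estimate with an RBF kernel; the toy data, kernel bandwidth and variable names are illustrative assumptions, not the authors' implementation.

        import numpy as np

        def rbf_kernel(X, Y, gamma=1.0):
            # pairwise RBF kernel k(x, y) = exp(-gamma * ||x - y||^2)
            d2 = (X**2).sum(1)[:, None] + (Y**2).sum(1)[None, :] - 2.0 * X @ Y.T
            return np.exp(-gamma * d2)

        def mmd2(X_src, X_tgt, gamma=1.0):
            """Biased estimate of the squared maximum mean discrepancy between two samples."""
            Kss = rbf_kernel(X_src, X_src, gamma)
            Ktt = rbf_kernel(X_tgt, X_tgt, gamma)
            Kst = rbf_kernel(X_src, X_tgt, gamma)
            return Kss.mean() + Ktt.mean() - 2.0 * Kst.mean()

        rng = np.random.default_rng(1)
        aux = rng.normal(0.0, 1.0, size=(100, 10))   # hypothetical auxiliary-domain (AD/NC) features
        tgt = rng.normal(0.5, 1.0, size=(80, 10))    # hypothetical target-domain (MCI-C/MCI-NC) features
        print(mmd2(aux, tgt, gamma=0.1))             # larger value = larger distribution mismatch

    In transfer-learning objectives of this kind, a term of this form (or a sample reweighting derived from it) is typically added to the classification loss so that auxiliary samples matching the target distribution contribute most to training.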

  1. 75 FR 64306 - Shell Energy North America (US), LP; Notice of Institution of Proceeding and Refund Effective Date

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-10-19

    ...] Shell Energy North America (US), LP; Notice of Institution of Proceeding and Refund Effective Date...), concerning the justness and reasonableness of Shell Energy North America (US), LP's market- based rate authority in the Central and Southwest balancing authority area. Shell Energy North America (US), LP, 133...

  2. 76 FR 53440 - Freeport LNG Development, LP; Freeport LNG Expansion, LP; FLNG Liquefaction LLC; Notice of Intent...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-08-26

    ... DEPARTMENT OF ENERGY Federal Energy Regulatory Commission [Docket No. PF11-2-000] Freeport LNG Development, LP; Freeport LNG Expansion, LP; FLNG Liquefaction LLC; Notice of Intent To Prepare an Environmental Assessment for the Planned Liquefaction Project, Request for Comments on Environmental Issues, and Notice of Public Scoping Meeting The...

  3. 77 FR 74179 - TexStar Transmission, LP; TEAK Texana Transmission Company, LP; Notice of Baseline Filings

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-12-13

    ... DEPARTMENT OF ENERGY Federal Energy Regulatory Commission [Docket No. PR13-16-000; Docket No. PR13-17-000; Not Consolidated] TexStar Transmission, LP; TEAK Texana Transmission Company, LP; Notice of Baseline Filings Take notice that on December 6, 2012, the applicants listed above submitted a baseline...

  4. 78 FR 22872 - TexStar Transmission, LP; TEAK Texana Transmission Company, LP; Notice of Filings

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-04-17

    ... DEPARTMENT OF ENERGY Federal Energy Regulatory Commission [Docket No. PR13-16-001; Docket No. PR13-17-001; Not Consolidated] TexStar Transmission, LP; TEAK Texana Transmission Company, LP; Notice of Filings Take notice that on April 5, 2013, the applicants listed above submitted an amendment to the...

  5. The importance of age and statin therapy in the interpretation of Lp-PLA(2) in ACS patients, and relation to CRP.

    PubMed

    Franeková, J; Kettner, J; Kubíček, Z; Jabor, A

    2015-01-01

    C-reactive protein (CRP) is a marker of arterial inflammation, while lipoprotein-associated phospholipase A(2) (Lp-PLA(2)) is related to plaque instability. The aim of this study was to evaluate the correlation between the risk of unstable plaque presenting as acute coronary syndrome (ACS) and Lp-PLA(2), and to assess the influence of statins on the interpretation of Lp-PLA(2). A total of 362 consecutive patients presenting to the emergency department (ED) with acute chest pain suggestive of ACS were evaluated by cardiologists as STEMI, NSTEMI, or unstable angina, and non-ACS. Serum biomarkers were measured on admission: troponin I, C-reactive protein (Abbott), and Lp-PLA(2) (DiaDexus). Four groups were defined according to the final diagnosis and history of statin medication: ACS/statin-; ACS/statin+; non-ACS/statin-; non-ACS/statin+. Lp-PLA(2) was highest in the ACS/statin- group; statins decreased Lp-PLA(2) by about 20% in both ACS and non-ACS patients. Lp-PLA(2) was higher in ACS patients than in non-ACS patients irrespective of statin therapy (p<0.001). Lp-PLA(2) predicted worse outcome (in terms of acute coronary syndrome) effectively in patients up to 62 years of age; limited prediction was found in older patients. C-reactive protein (CRP) failed to discriminate the four groups of patients. Statin therapy and age should be taken into consideration when interpreting Lp-PLA(2) concentrations, and lower cut-off values should be used for statin-treated persons.

  6. ApoA-I/A-II-HDL positively associates with apoB-lipoproteins as a potential atherogenic indicator.

    PubMed

    Kido, Toshimi; Kondo, Kazuo; Kurata, Hideaki; Fujiwara, Yoko; Urata, Takeyoshi; Itakura, Hiroshige; Yokoyama, Shinji

    2017-11-29

    We recently reported the distinct nature of high-density lipoprotein (HDL) subgroup particles carrying apolipoprotein (apo) A-I but not apoA-II (LpAI) and of HDL carrying both (LpAI:AII), based on data from 314 Japanese subjects. While plasma HDL level almost exclusively depends on the concentration of LpAI, which carries 3 to 4 apoA-I molecules, LpAI:AII appeared at an almost constant concentration regardless of plasma HDL level, having a stable structure with two apoA-I molecules and one disulfide-dimeric apoA-II molecule (Sci. Rep. 6; 31,532, 2016). The aim of this study is further characterization of LpAI:AII with respect to its role in atherogenesis. The association of LpAI, LpAI:AII and other HDL parameters with apoB-lipoprotein parameters was analyzed in the cohort described above. ApoA-I in LpAI negatively correlated with apoB-lipoprotein parameters such as apoB, triglyceride, non-HDL-cholesterol, and non-HDL-cholesterol + triglyceride, which is apparently reflected in the relations of the total HDL parameters to apoB-lipoproteins. In contrast, apoA-I in LpAI:AII and apoA-II positively correlated with the apoB-lipoprotein parameters, even within their small range of variation. These relationships are independent of sex, but may be slightly influenced by the activity-related CETP mutations. The study suggests that LpAI:AII is an atherogenic rather than an antiatherogenic indicator. These sub-fractions of HDL should be evaluated separately when estimating the atherogenic risk of patients.

  7. [Lipoprotein(a) in an urban population of Venezuela: Evidence that the increase in lipoprotein(a) levels with estrogen deprivation is transitory].

    PubMed

    Bermúdez Pirela, V; Cabrera de Bravo, M; Mengual Moreno, E; Cano Ponce, C; Leal González, E; Lemus Antepaz, M; Amell de Díaz, A; Sorell Gómez, L

    2007-07-01

    Lipoprotein(a) [Lp(a)] is an independent risk factor for coronary artery disease, and normal serum levels of this particle are not known in our country. Thus, the aim of this study was to determine plasma Lp(a) concentration in a population sample of Maracaibo. Five hundred out-patients consulting at the Centro de Investigaciones Endocrino-Metabólicas "Dr. Félix Gómez" were randomly selected and underwent venipuncture to obtain a fasting blood sample for assessment of Lp(a) by an ELISA assay. No significant differences were found when compared by sex or age separately; the highest Lp(a) levels were found in the female 40-44 year group (median: 20.9 mg/dl). The female population was therefore divided into two sub-groups: <40 years (median: 13 mg/dl) and ≥40 years (median: 16 mg/dl), with higher Lp(a) levels in the second group (p < 0.02). Hormone replacement therapy was assessed by age; women receiving this therapy showed lower Lp(a) levels (p < 0.01), except in the 60-64 year group. Lp(a) in Maracaibo was within normal levels. Hormone replacement therapy diminishes Lp(a) concentration in menopausal women, but in menopausal women without hormone therapy Lp(a) levels experienced a sustained decrease to normal levels in an age-dependent manner.

  8. Lactobacillus plantarum TWK10 Supplementation Improves Exercise Performance and Increases Muscle Mass in Mice

    PubMed Central

    Chen, Yi-Ming; Wei, Li; Chiu, Yen-Shuo; Hsu, Yi-Ju; Tsai, Tsung-Yu; Wang, Ming-Fu; Huang, Chi-Chang

    2016-01-01

    Lactobacillus plantarum (L. plantarum) is a well-known probiotic among the ingested-microorganism probiotics (i.e., ingested microorganisms associated with beneficial effects for the host). However, few studies have examined the effects of L. plantarum TWK10 (LP10) supplementation on exercise performance, physical fatigue, and gut microbial profile. Male Institute of Cancer Research (ICR) strain mice were divided into three groups (n = 8 per group) for oral administration of LP10 for six weeks at 0, 2.05 × 10⁸, or 1.03 × 10⁹ colony-forming units/kg/day, designated the vehicle, LP10-1X and LP10-5X groups, respectively. LP10 significantly decreased final body weight and increased relative muscle weight (%). LP10 supplementation dose-dependently increased grip strength (p < 0.0001) and endurance swimming time (p < 0.001) and decreased levels of serum lactate (p < 0.0001), ammonia (p < 0.0001), creatine kinase (p = 0.0118), and glucose (p = 0.0151) after acute exercise challenge. The number of type I fibers (slow muscle) in gastrocnemius muscle significantly increased with LP10 treatment. In addition, serum levels of albumin, blood urea nitrogen, creatinine, and triacylglycerol significantly decreased with LP10 treatment. Long-term supplementation with LP10 may increase muscle mass, enhance energy harvesting, and have health-promotion, performance-improvement, and anti-fatigue effects. PMID:27070637

  9. The renaissance of lipoprotein(a): Brave new world for preventive cardiology?

    PubMed

    Ellis, Katrina L; Boffa, Michael B; Sahebkar, Amirhossein; Koschinsky, Marlys L; Watts, Gerald F

    2017-10-01

    Lipoprotein(a) [Lp(a)] is a highly heritable cardiovascular risk factor. Although discovered more than 50 years ago, Lp(a) has recently re-emerged as a major focus in the fields of lipidology and preventive cardiology owing to findings from genetic studies and the possibility of lowering elevated plasma concentrations with new antisense therapy. Data from genetic, epidemiological and clinical studies have provided compelling evidence establishing Lp(a) as a causal risk factor for atherosclerotic cardiovascular disease. Nevertheless, major gaps in knowledge remain and the identification of the mechanistic processes governing both Lp(a) pathobiology and metabolism are an ongoing challenge. Furthermore, the complex structure of Lp(a) presents a major obstacle to the accurate quantification of plasma concentrations, and a universally accepted and standardized approach for measuring Lp(a) is required. Significant progress has been made in the development of novel therapeutics for selectively lowering Lp(a). However, before these therapies can be widely implemented further investigations are required to assess their efficacy, safety, and cost-efficiency in the prevention of cardiovascular events. We review recent advances in molecular and biochemical aspects, epidemiology, and pathobiology of Lp(a), and provide a contemporary update on the significance of Lp(a) in clinical medicine. "Progress lies not in enhancing what is, but in advancing toward what will be." (Khalil Gibran). Copyright © 2017 Elsevier Ltd. All rights reserved.

  10. Well-to-Wheels Greenhouse Gas Emissions Analysis of High-Octane Fuels with Various Market Shares and Ethanol Blending Levels

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Han, Jeongwoo; Elgowainy, Amgad; Wang, Michael

    2015-07-14

    In this study, we evaluated the impacts of producing HOF with a RON of 100, using a range of ethanol blending levels (E10, E25, and E40), vehicle efficiency gains, and HOF market penetration scenarios (3.4% to 70%), on WTW petroleum use and GHG emissions. In particular, we conducted LP modeling of petroleum refineries to examine the impacts of different HOF production scenarios on petroleum refining energy use and GHG emissions. We compared two cases of HOF vehicle fuel economy gains of 5% and 10% in terms of MPGGE to baseline regular gasoline vehicles. We incorporated three key factors in GREET — (1) refining energy intensities of gasoline components for the various ethanol blending options and market shares, (2) vehicle efficiency gains, and (3) upstream energy use and emissions associated with the production of different crude types and ethanol — to compare the WTW GHG emissions of various HOF/vehicle scenarios with the business-as-usual baseline regular gasoline (87 AKI E10) pathway.
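
    The refinery analysis above rests on linear programming (LP) models of refinery operation. As a purely illustrative sketch of what such a model looks like at its smallest (the two blend components, their octane numbers and costs are invented, and linear RON blending is a simplification), a toy blending LP can be solved with scipy.optimize.linprog:

        from scipy.optimize import linprog

        # Blend two components into 1 unit of fuel meeting RON >= 100 at minimum cost.
        cost = [0.70, 0.95]            # cost per unit of components 1 and 2 (hypothetical)
        A_ub = [[-92.0, -113.0]]       # -(blended RON) <= -100, i.e. 92*x1 + 113*x2 >= 100
        b_ub = [-100.0]
        A_eq = [[1.0, 1.0]]            # blend fractions sum to one
        b_eq = [1.0]
        res = linprog(cost, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                      bounds=[(0, 1), (0, 1)], method="highs")
        print(res.x, res.fun)          # optimal blend fractions and blend cost

    Refinery LP models used in studies like this one have the same structure but typically involve thousands of streams, unit-capacity constraints, and product-specification rows.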

  11. 75 FR 32181 - Change in Bank Control Notices; Acquisition of Shares of Bank or Bank Holding Companies

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-06-07

    ... Investment Management, L.L.C.; TC Group, L.L.C.; and TCG Holdings, L.L.C., all of Wilmington, Delaware; to... Partners, L.P.; TCG Financial Services, L.P.; Carlyle Financial Services, Ltd.; TC Group Cayman Investment Holdings, L.P.; TCG Holdings Cayman II, L.P.; DBD Cayman, Limited; TCG Financial Services Investment...

  12. Lipoprotein(a) levels, apo(a) isoform size, and coronary heart disease risk in the Framingham Offspring Study

    USDA-ARS?s Scientific Manuscript database

    The aim of this study was to assess the independent contributions of plasma levels of lipoprotein(a) [Lp(a)], Lp(a) cholesterol, and of apo(a) isoform size to prospective coronary heart disease (CHD) risk. Plasma Lp(a) and Lp(a) cholesterol levels, and apo(a) isoform size were measured at examinati...

  13. Intestinal lamina propria dendritic cells maintain T cell homeostasis but do not affect commensalism

    PubMed Central

    Welty, Nathan E.; Staley, Christopher; Ghilardi, Nico; Sadowsky, Michael J.; Igyártó, Botond Z.

    2013-01-01

    Dendritic cells (DCs) in the intestinal lamina propria (LP) are composed of two CD103+ subsets that differ in CD11b expression. We report here that Langerin is expressed by human LP DCs and that transgenic human langerin drives expression in CD103+CD11b+ LP DCs in mice. This subset was ablated in huLangerin-DTA mice, resulting in reduced LP Th17 cells without affecting Th1 or T reg cells. Notably, cognate DC–T cell interactions were not required for Th17 development, as this response was intact in huLangerin-Cre I-Aβfl/fl mice. In contrast, responses to intestinal infection or flagellin administration were unaffected by the absence of CD103+CD11b+ DCs. huLangerin-DTA x BatF3−/− mice lacked both CD103+ LP DC subsets, resulting in defective gut homing and fewer LP T reg cells. Despite these defects in LP DCs and resident T cells, we did not observe alterations of intestinal microbial communities. Thus, CD103+ LP DC subsets control T cell homeostasis through both nonredundant and overlapping mechanisms. PMID:24019552

  14. Lack of evidence for hepatitis C virus infection in association with lichen planus.

    PubMed

    Stojanovic, Larisa; Lunder, Tomaz; Poljak, Mario; Mars, Tomaz; Mlakar, Bostjan; Maticic, Mojca

    2008-12-01

    The association between hepatitis C virus (HCV) infection and lichen planus (LP) is a subject of controversy. Prevalence studies of HCV infection in LP patients in various countries reveal diverse results. The Slovenian population is rather homogenous with specific geographic and epidemiological characteristics. Lack of data or contradictory results from neighboring countries urged the need for a case-controlled study in our LP patients. The retrospective study was performed on 173 LP patients. Control group included 218 patients with dermatological diseases other than LP. Anti-HCV antibodies were found in 2/173 patients (1.2%) with LP and in 0/218 controls. No statistically significant difference was found between the study and control group regarding anti-HCV antibody prevalence (P = 0.195; estimated OR 6.4, 95% CI 0.3-134.0) and risk factors for HCV infection. Based on our results, anti-HCV antibody testing is not necessarily required in LP patients with no risk factors for HCV infection in this geographic region.

  15. Six mode selective fiber optic spatial multiplexer.

    PubMed

    Velazquez-Benitez, A M; Alvarado, J C; Lopez-Galmiche, G; Antonio-Lopez, J E; Hernández-Cordero, J; Sanchez-Mondragon, J; Sillard, P; Okonkwo, C M; Amezcua-Correa, R

    2015-04-15

    Low-loss all-fiber photonic lantern (PL) mode multiplexers (MUXs) capable of selectively exciting the first six fiber modes of a multimode fiber (LP01, LP11a, LP11b, LP21a, LP21b, and LP02) are demonstrated. Fabrication of the spatial mode multiplexers was successfully achieved employing a combination of either six step-index or six graded-index fibers of four different core sizes. Insertion losses of 0.2-0.3 dB and mode purities above 9 dB are achieved. Moreover, it is demonstrated that the use of graded-index fibers in a PL eases the length requirements of the adiabatic tapered transition and could enable scaling to larger numbers of modes.

  16. Optimized respiratory-resolved motion-compensated 3D Cartesian coronary MR angiography.

    PubMed

    Correia, Teresa; Ginami, Giulia; Cruz, Gastão; Neji, Radhouene; Rashid, Imran; Botnar, René M; Prieto, Claudia

    2018-04-22

    To develop a robust and efficient reconstruction framework that provides high-quality motion-compensated respiratory-resolved images from free-breathing 3D whole-heart Cartesian coronary magnetic resonance angiography (CMRA) acquisitions. Recently, XD-GRASP (eXtra-Dimensional Golden-angle RAdial Sparse Parallel MRI) was proposed to achieve 100% scan efficiency and provide respiratory-resolved 3D radial CMRA images by exploiting sparsity in the respiratory dimension. Here, a reconstruction framework for Cartesian CMRA imaging is proposed, which provides respiratory-resolved motion-compensated images by incorporating 2D beat-to-beat translational motion information to increase sparsity in the respiratory dimension. The motion information is extracted from interleaved image navigators and is also used to compensate for 2D translational motion within each respiratory phase. The proposed Optimized Respiratory-resolved Cartesian Coronary MR Angiography (XD-ORCCA) method was tested on 10 healthy subjects and 2 patients with cardiovascular disease, and compared against XD-GRASP. The proposed XD-ORCCA provides high-quality respiratory-resolved images, allowing clear visualization of the right and left coronary arteries, even for irregular breathing patterns. Compared with XD-GRASP, the proposed method improves the visibility and sharpness of both coronaries. Significant differences (p < .05) in visible vessel length and proximal vessel sharpness were found between the 2 methods. The XD-GRASP method provides good-quality images in the absence of intraphase motion. However, motion blurring is observed in XD-GRASP images for respiratory phases with larger motion amplitudes and subjects with irregular breathing patterns. A robust respiratory-resolved motion-compensated framework for Cartesian CMRA has been proposed and tested in healthy subjects and patients. The proposed XD-ORCCA provides high-quality images for all respiratory phases, independently of the regularity of the breathing pattern. © 2018 The Authors Magnetic Resonance in Medicine published by Wiley Periodicals, Inc. on behalf of International Society for Magnetic Resonance in Medicine.
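
    The reconstructions discussed above (XD-GRASP and the proposed XD-ORCCA) exploit sparsity along the respiratory dimension inside an iterative reconstruction, often through a total-variation-type penalty on the respiratory phases. As a simpler stand-in for that regularization step, the sketch below applies the proximal operator of an l1 penalty on a unitary FFT taken along the respiratory axis (soft-thresholding of the transform coefficients); the transform choice, threshold, and toy data are assumptions for illustration, not the papers' actual penalty.

        import numpy as np

        def soft_threshold(z, tau):
            # prox of tau*||.||_1 applied entrywise (works for complex data too)
            mag = np.abs(z)
            return np.where(mag > tau, (1.0 - tau / np.maximum(mag, 1e-12)) * z, 0.0)

        def shrink_along_respiratory_axis(images, tau):
            """Sparsify the unitary FFT along the respiratory dimension (axis 0)
            of a stack of respiratory-phase images: F^H( soft( F(x), tau ) )."""
            coeffs = np.fft.fft(images, axis=0, norm="ortho")
            coeffs = soft_threshold(coeffs, tau)
            return np.fft.ifft(coeffs, axis=0, norm="ortho")

        # toy usage: 4 respiratory phases of a 64x64 image that change slowly with phase
        rng = np.random.default_rng(0)
        phases = np.stack([(1.0 + 0.05 * k) * np.ones((64, 64)) for k in range(4)])
        noisy = phases + 0.01 * rng.standard_normal((4, 64, 64))
        denoised = shrink_along_respiratory_axis(noisy, 0.02).real  # toy input is real; discard round-off imaginary part

    Within a full compressed-sensing reconstruction, a shrinkage step of this kind typically alternates with a data-consistency step against the acquired k-space samples; in XD-ORCCA the beat-to-beat translational motion estimates additionally align the data within each respiratory phase before the sparsity penalty is applied.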

  17. Complications after LP related to needle type: pencil-point versus Quincke.

    PubMed

    Aamodt, A; Vedeler, C

    2001-06-01

    We studied the incidence of complications after diagnostic lumbar puncture (LP) related to needle type. A 5 months' observational study of routine diagnostic LP in 83 patients was conducted. Significantly more headache was observed after LP using thicker cutting needles (20G Quincke) compared with thinner cutting or non-cutting needles (22G Quincke or pencil-point). No significant difference in complications after LP was found between the 22G Quincke and pencil-point needles. The size of the needle and not the needle shape seems to be the main determinant for post-dural puncture headache (PDPH).

  18. Enfuvirtide (T20)-Based Lipopeptide Is a Potent HIV-1 Cell Fusion Inhibitor: Implications for Viral Entry and Inhibition.

    PubMed

    Ding, Xiaohui; Zhang, Xiujuan; Chong, Huihui; Zhu, Yuanmei; Wei, Huamian; Wu, Xiyuan; He, Jinsheng; Wang, Xinquan; He, Yuxian

    2017-09-15

    The peptide drug enfuvirtide (T20) is the only viral fusion inhibitor used in combination therapy for HIV-1 infection, but it has relatively low antiviral activity and easily induces drug resistance. Emerging studies demonstrate that lipopeptide-based fusion inhibitors, such as LP-11 and LP-19, which mainly target the gp41 pocket site, have greatly improved antiviral potency and in vivo stability. In this study, we focused on developing a T20-based lipopeptide inhibitor that lacks pocket-binding sequence and targets a different site. First, the C-terminal tryptophan-rich motif (TRM) of T20 was verified to be essential for its target binding and inhibition; then, a novel lipopeptide, termed LP-40, was created by replacing the TRM with a fatty acid group. LP-40 showed markedly enhanced binding affinity for the target site and dramatically increased inhibitory activity on HIV-1 membrane fusion, entry, and infection. Unlike LP-11 and LP-19, which required a flexible linker between the peptide sequence and the lipid moiety, addition of a linker to LP-40 sharply reduced its potency, implying different binding modes with the extended N-terminal helices of gp41. Also, interestingly, LP-40 showed more potent activity than LP-11 in inhibiting HIV-1 Env-mediated cell-cell fusion while it was less active than LP-11 in inhibiting pseudovirus entry, and the two inhibitors displayed synergistic antiviral effects. The crystal structure of LP-40 in complex with a target peptide revealed their key binding residues and motifs. Combined, our studies have not only provided a potent HIV-1 fusion inhibitor, but also revealed new insights into the mechanisms of viral inhibition. IMPORTANCE T20 is the only membrane fusion inhibitor available for treatment of viral infection; however, T20 requires high doses and has a low genetic barrier for resistance, and its inhibitory mechanism and structural basis remain unclear. Here, we report the design of LP-40, a T20-based lipopeptide inhibitor that has greatly improved anti-HIV activity and is a more potent inhibitor of cell-cell fusion than of cell-free virus infection. The binding modes of two classes of membrane-anchoring lipopeptides (LP-40 and LP-11) verify the current fusion model in which an extended prehairpin structure bridges the viral and cellular membranes, and their complementary effects suggest a vital strategy for combination therapy of HIV-1 infection. Moreover, our understanding of the mechanism of action of T20 and its derivatives benefits from the crystal structure of LP-40. Copyright © 2017 American Society for Microbiology.

  19. Risk factors and outcomes for late presentation for HIV-positive persons in Europe: results from the Collaboration of Observational HIV Epidemiological Research Europe Study (COHERE).

    PubMed

    Mocroft, Amanda; Lundgren, Jens D; Sabin, Miriam Lewis; Monforte, Antonella d'Arminio; Brockmeyer, Norbert; Casabona, Jordi; Castagna, Antonella; Costagliola, Dominique; Dabis, Francois; De Wit, Stéphane; Fätkenheuer, Gerd; Furrer, Hansjakob; Johnson, Anne M; Lazanas, Marios K; Leport, Catherine; Moreno, Santiago; Obel, Niels; Post, Frank A; Reekie, Joanne; Reiss, Peter; Sabin, Caroline; Skaletz-Rorowski, Adriane; Suarez-Lozano, Ignacio; Torti, Carlo; Warszawski, Josiane; Zangerle, Robert; Fabre-Colin, Céline; Kjaer, Jesper; Chene, Genevieve; Grarup, Jesper; Kirk, Ole

    2013-01-01

    Few studies have monitored late presentation (LP) of HIV infection over the European continent, including Eastern Europe. Study objectives were to explore the impact of LP on AIDS and mortality. LP was defined in Collaboration of Observational HIV Epidemiological Research Europe (COHERE) as HIV diagnosis with a CD4 count <350/mm(3) or an AIDS diagnosis within 6 months of HIV diagnosis among persons presenting for care between 1 January 2000 and 30 June 2011. Logistic regression was used to identify factors associated with LP and Poisson regression to explore the impact on AIDS/death. 84,524 individuals from 23 cohorts in 35 countries contributed data; 45,488 were LP (53.8%). LP was highest in heterosexual males (66.1%), Southern European countries (57.0%), and persons originating from Africa (65.1%). LP decreased from 57.3% in 2000 to 51.7% in 2010/2011 (adjusted odds ratio [aOR] 0.96; 95% CI 0.95-0.97). LP decreased over time in both Central and Northern Europe among homosexual men, and male and female heterosexuals, but increased over time for female heterosexuals and male intravenous drug users (IDUs) from Southern Europe and in male and female IDUs from Eastern Europe. 8,187 AIDS/deaths occurred during 327,003 person-years of follow-up. In the first year after HIV diagnosis, LP was associated with over a 13-fold increased incidence of AIDS/death in Southern Europe (adjusted incidence rate ratio [aIRR] 13.02; 95% CI 8.19-20.70) and over a 6-fold increased rate in Eastern Europe (aIRR 6.64; 95% CI 3.55-12.43). LP has decreased over time across Europe, but remains a significant issue in the region in all HIV exposure groups. LP increased in male IDUs and female heterosexuals from Southern Europe and IDUs in Eastern Europe. LP was associated with an increased rate of AIDS/deaths, particularly in the first year after HIV diagnosis, with significant variation across Europe. Earlier and more widespread testing, timely referrals after testing positive, and improved retention in care strategies are required to further reduce the incidence of LP.

  20. Temporal variability in lipoprotein(a) levels in patients enrolled in the placebo arms of IONIS-APO(a)Rx and IONIS-APO(a)-LRx antisense oligonucleotide clinical trials.

    PubMed

    Marcovina, Santica M; Viney, Nicholas J; Hughes, Steven G; Xia, Shuting; Witztum, Joseph L; Tsimikas, Sotirios

    Lipoprotein(a) [Lp(a)] levels are primarily genetically determined, but their natural variability is not well known. The aim of the study was to evaluate the short-term temporal variability in Lp(a) in 3 placebo groups from the IONIS-APO(a)Rx and IONIS-APO(a)-LRx trials. The placebo groups comprised 3 studies: Study 1 with 10 subjects with any Lp(a) concentration; Study 2 with 13 subjects with Lp(a) ≥75 nmol/L (∼30 mg/dL); and Study 3 with 29 patients with Lp(a) ≥125 nmol/L (≥∼50 mg/dL). Lp(a) was measured in serial blood samples (range 7-12 samples over up to 190 days of follow-up) and analyzed as absolute change and mean percent change from baseline. Outliers were defined as having a >±25% difference in Lp(a) from baseline at any future time point. No significant temporal differences in mean absolute Lp(a) levels were present in any group. However, among individuals, the mean change in absolute Lp(a) levels at any time point ranged from -16.2 to +7.0 nmol/L in Study 1, -15.8 to +9.8 nmol/L in Study 2, and -60.2 to +16.6 nmol/L in Study 3. The mean percent change from baseline ranged from -9.4% to +21.6% for Study 1, -13.1% to +2.8% for Study 2, and -12.1% to +4.9% for Study 3. A total of 21 of 52 subjects (40.4%) were outliers, with 13 (62%) more than 25% higher and 8 (38%) more than 25% lower. Significant variability was also noted in other lipid parameters, but no outliers were noted with serum albumin. In subjects randomized to placebo in Lp(a)-lowering trials, modest intra-individual temporal variability of mean Lp(a) levels was present. A significant number of subjects had >±25% variation in Lp(a) at at least 1 time point. Although Lp(a) levels are primarily genetically determined, further study is required to define additional factors mediating short-term variability. Copyright © 2017 National Lipid Association. Published by Elsevier Inc. All rights reserved.

  1. Risk Factors and Outcomes for Late Presentation for HIV-Positive Persons in Europe: Results from the Collaboration of Observational HIV Epidemiological Research Europe Study (COHERE)

    PubMed Central

    Mocroft, Amanda; Lundgren, Jens D.; Sabin, Miriam Lewis; Monforte, Antonella d'Arminio; Brockmeyer, Norbert; Casabona, Jordi; Castagna, Antonella; Costagliola, Dominique; Dabis, Francois; De Wit, Stéphane; Fätkenheuer, Gerd; Furrer, Hansjakob; Johnson, Anne M.; Lazanas, Marios K.; Leport, Catherine; Moreno, Santiago; Obel, Niels; Post, Frank A.; Reekie, Joanne; Reiss, Peter; Sabin, Caroline; Skaletz-Rorowski, Adriane; Suarez-Lozano, Ignacio; Torti, Carlo; Warszawski, Josiane; Zangerle, Robert; Fabre-Colin, Céline; Kjaer, Jesper; Chene, Genevieve; Grarup, Jesper; Kirk, Ole

    2013-01-01

    Background Few studies have monitored late presentation (LP) of HIV infection over the European continent, including Eastern Europe. Study objectives were to explore the impact of LP on AIDS and mortality. Methods and Findings LP was defined in Collaboration of Observational HIV Epidemiological Research Europe (COHERE) as HIV diagnosis with a CD4 count <350/mm3 or an AIDS diagnosis within 6 months of HIV diagnosis among persons presenting for care between 1 January 2000 and 30 June 2011. Logistic regression was used to identify factors associated with LP and Poisson regression to explore the impact on AIDS/death. 84,524 individuals from 23 cohorts in 35 countries contributed data; 45,488 were LP (53.8%). LP was highest in heterosexual males (66.1%), Southern European countries (57.0%), and persons originating from Africa (65.1%). LP decreased from 57.3% in 2000 to 51.7% in 2010/2011 (adjusted odds ratio [aOR] 0.96; 95% CI 0.95–0.97). LP decreased over time in both Central and Northern Europe among homosexual men, and male and female heterosexuals, but increased over time for female heterosexuals and male intravenous drug users (IDUs) from Southern Europe and in male and female IDUs from Eastern Europe. 8,187 AIDS/deaths occurred during 327,003 person-years of follow-up. In the first year after HIV diagnosis, LP was associated with over a 13-fold increased incidence of AIDS/death in Southern Europe (adjusted incidence rate ratio [aIRR] 13.02; 95% CI 8.19–20.70) and over a 6-fold increased rate in Eastern Europe (aIRR 6.64; 95% CI 3.55–12.43). Conclusions LP has decreased over time across Europe, but remains a significant issue in the region in all HIV exposure groups. LP increased in male IDUs and female heterosexuals from Southern Europe and IDUs in Eastern Europe. LP was associated with an increased rate of AIDS/deaths, particularly in the first year after HIV diagnosis, with significant variation across Europe. Earlier and more widespread testing, timely referrals after testing positive, and improved retention in care strategies are required to further reduce the incidence of LP. Please see later in the article for the Editors' Summary PMID:24137103

  2. Lipoprotein(a) and coronary atheroma progression rates during long-term high-intensity statin therapy: Insights from SATURN.

    PubMed

    Puri, Rishi; Ballantyne, Christie M; Hoogeveen, Ron C; Shao, Mingyuan; Barter, Philip; Libby, Peter; Chapman, M John; Erbel, Raimund; Arsenault, Benoit J; Raichlen, Joel S; Nissen, Steven E; Nicholls, Stephen J

    2017-08-01

    Lipoprotein(a) [Lp(a)] is a low-density lipoprotein (LDL)-like particle that associates with major adverse cardiovascular events (MACE). We examined relationships between Lp(a) measurements and changes in coronary atheroma volume following long-term maximally-intensive statin therapy in coronary artery disease patients. Study of coronary atheroma by intravascular ultrasound: Effect of Rosuvastatin Versus Atorvastatin (SATURN) used serial intravascular ultrasound measures of coronary atheroma volume in patients treated with rosuvastatin 40 mg or atorvastatin 80 mg for 24 months. Baseline and follow-up Lp(a) levels were measured in 915 of the 1039 SATURN participants, and were correlated with changes in percent atheroma volume (ΔPAV). Mean age was 57.7 ± 8.6 years, 74% were men, 96% were Caucasian, with statin use prior to study enrolment occurring in 59.3% of participants. Baseline [median (IQR)] LDL-cholesterol (LDL-C) and measured Lp(a) levels (mg/dL) were 114 (99, 137) and 17.4 (7.6, 52.9) respectively; follow-up measures were 60 (47, 77), and 16.5 (6.7, 57.7) (change from baseline: p < 0.001, p = 0.31 respectively). At baseline, there were 676 patients with Lp(a) levels <50 mg/dL [median Lp(a) of 10.9 mg/dL], and 239 patients with Lp(a) levels ≥ 50 mg/dL [median Lp(a) of 83.2 mg/dL]. Quartiles of baseline and follow-up Lp(a) did not associate with ΔPAV. Irrespective of the achieved LDL-C ( 50 mg/dL. In coronary artery disease patients prescribed long-term maximally intensive statin therapy with low on-treatment LDL-C levels, measured Lp(a) levels (predominantly below the 50 mg/dL threshold) do not associate with coronary atheroma progression. Alternative biomarkers may thus associate with residual cardiovascular risk in such patients. Copyright © 2017 Elsevier B.V. All rights reserved.

  3. Recycling, Remobilization, and Eruption of Crystals from the Lassen Volcanic Center

    NASA Astrophysics Data System (ADS)

    Schrecengost, K.; Cooper, K. M.; Kent, A. J.; Huber, C.; Clynne, M. A.

    2016-12-01

    The Lassen Volcanic Center recently produced two relatively small dacitic eruptions (0.03 km3 -1.4 km3) with a complex mixing history. Preliminary data for the 1915 Lassen Peak (LP) and the 1103±13 ybp Chaos Crags (CC) eruptions indicate complex mixing between a remobilized crystal mush (hornblende, biotite, sodic plagioclase, quartz) and basalt or basaltic andesite. U-series bulk ages represent crystallization of plagioclase at an average age of either a single event or a mixture of different plagioclase populations that crystallized during distinct crystallization events separated in time. We present 238U-230Th disequilibria for the LP light dacite and black dacite along with three stages (upper pyroclastic flow deposit, Dome B, and Dome F) of the CC eruption. Initial 230Th/232Th activity ratios for the LP plagioclase are higher than the LP host liquid and modeled equilibrium zero-age plagioclase towards the CC host liquid composition. The LP plagioclase data are inconsistent with crystallization from the LP host liquid. Therefore, at least a portion of the plagioclase carried by the LP eruptive products are antecrystic originating from an older and/or isotopically distinct host liquid composition. Moreover, LP bulk plagioclase is consistent with crystallization from the CC host liquid, suggesting that both eruptions are sourced from a similar host reservoir (i.e., crystal mush). Hornblende and biotite from the LP eruption have isotopic ratios that are consistent with zero age crystallization from the LP liquid composition, suggesting that they are younger and originate from a different magma than the plagioclase, with mixing between the magmas prior to eruption. However, it is more likely that hornblende, biotite, and plagioclase with varying average crystal ages were remobilized and erupted from a common crystal mush reservoir during the LP and CC eruptions. These data are consistent with zircon 238U-230Th model ages [1] that emphasize the importance of local, small-scale rejuvenation and mixing within a long-lived magmatic system. Moreover, assuming crystallization from a CC-like liquid compositions, LP bulk plagioclase model ages produce similar ages to those derived from LP and CC zircon (i.e., 17 ka to secular equilibrium). [1] Klemetti and Clynne, PLoS ONE, 9(12): e113157.

  4. M-estimation for robust sparse unmixing of hyperspectral images

    NASA Astrophysics Data System (ADS)

    Toomik, Maria; Lu, Shijian; Nelson, James D. B.

    2016-10-01

    Hyperspectral unmixing methods often use a conventional least-squares-based lasso, which assumes that the data follow a Gaussian distribution. The normality assumption is an approximation that is generally invalid for real imagery data. We consider a robust (non-Gaussian) approach to sparse spectral unmixing of remotely sensed imagery which reduces the sensitivity of the estimator to outliers and relaxes the linearity assumption. The method combines several appropriate penalties. We propose to use an lp norm with 0 < p < 1 in the sparse regression problem, which induces more sparsity in the results but makes the problem non-convex. The problem can nevertheless be solved quite straightforwardly with an extensible algorithm based on iteratively reweighted least squares. To deal with the huge size of modern spectral libraries we introduce a library reduction step, similar to the multiple signal classification (MUSIC) array processing algorithm, which not only speeds up unmixing but also yields superior results. In the hyperspectral setting we extend the traditional least squares method to the robust heavy-tailed case and propose a generalised M-lasso solution. M-estimation replaces the Gaussian likelihood with a fixed function ρ(e) that restrains outliers; the M-estimate function reduces the effect of errors with large amplitudes or even assigns outliers zero weight. Our experimental results on real hyperspectral data show that noise with large amplitudes (outliers) often exists in such data, so the ability to mitigate the influence of these outliers offers greater robustness. Qualitative unmixing results on real hyperspectral image data corroborate the efficacy of the proposed method.
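
    The iteratively reweighted least squares (IRLS) scheme referred to above handles the non-convex lp penalty by repeatedly replacing it with a weighted quadratic surrogate and solving the resulting ridge-type system. A minimal numpy sketch of this idea for the plain lp-regularized least-squares problem follows; the smoothing constant, iteration count, regularization weight and toy library are assumptions for illustration, not the authors' M-lasso (which additionally replaces the squared data-fit term with a robust ρ(e) and prunes the library MUSIC-style).

        import numpy as np

        def irls_lp(A, y, lam=0.1, p=0.5, eps=1e-6, n_iter=50):
            """IRLS for  min_x ||y - A x||_2^2 + lam * sum_i |x_i|^p,  with 0 < p < 1."""
            x = np.linalg.lstsq(A, y, rcond=None)[0]           # least-squares start
            for _ in range(n_iter):
                # quadratic surrogate of |x_i|^p: weight (p/2) * (x_i^2 + eps)^((p-2)/2)
                w = (p / 2.0) * (x**2 + eps) ** ((p - 2.0) / 2.0)
                x = np.linalg.solve(A.T @ A + lam * np.diag(w), A.T @ y)
            return x

        # toy demo: recover a sparse abundance vector from a random "library"
        rng = np.random.default_rng(0)
        A = rng.standard_normal((60, 200))
        x_true = np.zeros(200); x_true[[3, 50, 120]] = [1.0, 0.5, 0.8]
        y = A @ x_true + 0.01 * rng.standard_normal(60)
        x_hat = irls_lp(A, y, lam=0.05, p=0.5)
        print(np.argsort(-np.abs(x_hat))[:3])                  # indices of the 3 largest coefficients

    Replacing the squared residual with an M-estimate sum over ρ(residuals) adds a second set of weights on the residuals inside the same reweighting loop, which is how the robustness to heavy-tailed noise described above is obtained.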

  5. 76 FR 76802 - Riverside Micro-Cap Fund II, L.P.; Notice Seeking Exemption Under Section 312 of the Small...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-12-08

    ... SMALL BUSINESS ADMINISTRATION [License No. 02/02-0646] Riverside Micro-Cap Fund II, L.P.; Notice... hereby given that Riverside Micro-Cap Fund II, L.P., 45 Rockefeller Center, New York, NY 10111, a Federal... Regulations (13 CFR 107.730). Riverside Micro-Cap Fund II, L.P. proposes to provide equity security financing...

  6. 77 FR 7655 - Riverside Micro-Cap Fund II, L.P.; Notice Seeking Exemption Under Section 312 of the Small...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-02-13

    ... SMALL BUSINESS ADMINISTRATION [License No. 02/02-0646] Riverside Micro-Cap Fund II, L.P.; Notice... hereby given that Riverside Micro-Cap Fund II, L.P., 45 Rockefeller Center, New York, NY 10111, a Federal... Regulations (13 CFR 107.730). Riverside Micro-Cap Fund II, L.P. proposes to provide equity security financing...

  7. 75 FR 51451 - Erie Boulevard Hydropower, L.P.; Notice of Intent To File License Application, Filing of Pre...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-08-20

    ... Hydropower, L.P.; Notice of Intent To File License Application, Filing of Pre-Application Document, and.... Project No.: 7320-040. c. Dated Filed: June 29, 2010. d. Submitted By: Erie Boulevard Hydropower, L.P. e...: John Mudre at (202) 502-8902; or e-mail at [email protected] . j. Erie Boulevard Hydropower, L.P...

  8. 78 FR 21491 - DeltaPoint Capital IV, L.P., DeltaPoint Capital IV (New York), L.P.; Notice Seeking Exemption...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-04-10

    ... Small Business Investment Act of 1958, as amended (``the Act''), in connection with the financing of a... SMALL BUSINESS ADMINISTRATION [License No. 02/02-0662, 02/02-0661] DeltaPoint Capital IV, L.P., DeltaPoint Capital IV (New York), L.P.; Notice Seeking Exemption Under Section 312 of the Small Business...

  9. The role of indoleamine 2,3-dioxygenase in LP-BPM5 murine retroviral disease progression

    PubMed Central

    2013-01-01

    Background Indoleamine 2,3-dioxygenase (IDO) is an immunomodulatory intracellular enzyme involved in tryptophan degradation. IDO is induced during cancer and microbial infections by cytokines, ligation of co-stimulatory molecules and/or activation of pattern recognition receptors, ultimately leading to modulation of the immune response. LP-BM5 murine retroviral infection induces murine AIDS (MAIDS), which is characterized by profound and broad immunosuppression of T- and B-cell responses. Our lab has previously described multiple mechanisms regulating the development of immunodeficiency of LP-BM5-induced disease, including Programmed Death 1 (PD-1), IL-10, and T-regulatory (Treg) cells. Immunosuppressive roles of IDO have been demonstrated in other retroviral models, suggesting a possible role for IDO during LP-BM5-induced retroviral disease progression and/or development of viral load. Methods Mice deficient in IDO (B6.IDO−/−) and wildtype C57BL/6 (B6) mice were infected with LP-BM5 murine retrovirus. MAIDS and LP-BM5 viral load were assessed at termination. Results As expected, IDO was un-inducible in B6.IDO−/− during LP-BM5 infection. B6.IDO−/− mice infected with LP-BM5 retrovirus succumbed to MAIDS as indicated by splenomegaly, serum hyper IgG2a and IgM, decreased responsiveness to B- and T-cell mitogens, conversion of a proportion of CD4+ T cells from Thy1.2+ to Thy1.2-, and increased percentages of CD11b+Gr-1+ cells. LP-BM5 infected B6.IDO−/− mice also demonstrated the development of roughly equivalent disease kinetics as compared to infected B6 mice. Splenic viral loads of B6 and B6.IDO−/− mice were also equivalent after infection as measured by LP-BM5-specific Def Gag and Eco Gag viral mRNA, determined by qRT-PCR. Conclusions Collectively, these results demonstrate IDO neither plays an essential role, nor is required, in LP-BM5-induced disease progression or LP-BM5 viral load. PMID:23680027

  10. Increased risk of coronary artery calcification progression in subjects with high baseline Lp(a) levels: The Kangbuk Samsung Health Study.

    PubMed

    Cho, Jung Hwan; Lee, Da Young; Lee, Eun Seo; Kim, Jihyun; Park, Se Eun; Park, Cheol-Young; Lee, Won-Young; Oh, Ki-Won; Park, Sung-Woo; Rhee, Eun-Jung

    2016-11-01

    Results from previous studies support the association of lipoprotein(a) [Lp(a)] levels with coronary artery disease risk. In this study, we analyzed the association between baseline Lp(a) levels and future progression of coronary artery calcification (CAC) in apparently healthy Korean adults. A total of 2611 participants (mean age: 41 years, 92% men) who underwent a routine health check-up in 2010 and 2014 were enrolled. Coronary artery calcium scores (CACS) were measured by multi-detector computed tomography. Baseline Lp(a) was measured by a high-sensitivity immunoturbidimetric assay. Progression of CAC was defined as a change in CACS >0 over four years. Bivariate correlation analyses with baseline Lp(a) and other metabolic parameters revealed age, total cholesterol, HDL-C, LDL-C and CACS to have a significant positive correlation, while body weight, fasting glucose level, blood pressure and triglyceride level were negatively correlated with baseline Lp(a) level. After four years of follow-up, 635 subjects (24.3%) had CAC progression. The participants who had CAC progression were older, more often men, more obese, and had higher fasting glucose levels and worse baseline lipid profiles compared to those who did not have CAC progression. The mean serum Lp(a) level was significantly higher in subjects who had CAC progression compared to those who did not (32.5 vs. 28.9 mg/dL, p<0.01). When the risk for CAC progression according to baseline Lp(a) was calculated, those with Lp(a) level ≥50 mg/dL had an odds ratio of 1.333 (95% CI 1.027-1.730) for CAC progression compared to those with Lp(a) <50 mg/dL after adjusting for confounding factors. In this study, the subjects who had higher Lp(a) were at significantly higher risk for CAC progression after four years of follow-up, suggesting a role of high Lp(a) in CAC progression. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  11. Continuation of SAGE and MLS High-Resolution Ozone Profiles with the Suomi NPP OMPS Limb Profiler

    NASA Astrophysics Data System (ADS)

    Kramarova, N. A.; Bhartia, P. K.; Moy, L.; Chen, Z.; Frith, S. M.

    2015-12-01

    The Ozone Mapper and Profiler Suite (OMPS) Limb Profiler (LP) onboard the Suomi NPP satellite is designed to measure ozone profiles with a high vertical resolution (~2 km) and dense spatial sampling (~1° latitude). The LP sensor represents a new generation of the US ozone profile instruments, with a follow-up limb instrument planned onboard the Joint Polar Satellite System 2 (JPSS-2) in 2021. In this study we examine the suitability of using LP profiles to continue the EOS climate ozone profile record from the SAGE and MLS datasets. First, we evaluate the accuracy of the LP tangent height determination by analyzing measured and calculated radiances. Accurate estimation of the tangent height is critical for limb observations. Several methods were explored to estimate the uncertainties in the LP tangent height registration, and the results will be briefly summarized in this presentation. Version 2 of the LP data, released in May 2014, includes a static adjustment of ~1.5 km and a dynamic tangent height adjustment within each orbit. A recent analysis of Version 2 Level 1 radiances revealed a 100 m step in the tangent height that occurred on 26 April 2013, due to a switch to two star trackers in determining spacecraft position. In addition, a ~200 m shift in the tangent height along each orbit was detected. These uncertainties in tangent height registration can affect the stability of the LP ozone record. Therefore, the second step in our study includes a validation of LP ozone profiles against correlative satellite ozone measurements (Aura MLS, ACE-FTS, OSIRIS, and SBUV) with a focus on time-dependent changes. We estimate relative drifts between OMPS LP and correlative ozone records to evaluate the stability of the LP measurements. We also test the tangent height corrections found in the internal analysis of Version 2 measurements to determine their effect on the long-term stability of the LP ozone record.

  12. Significance of lipoprotein(a) levels in familial hypercholesterolemia and coronary artery disease.

    PubMed

    Li, Sha; Wu, Na-Qiong; Zhu, Cheng-Gang; Zhang, Yan; Guo, Yuan-Lin; Gao, Ying; Li, Xiao-Lin; Qing, Ping; Cui, Chuan-Jue; Xu, Rui-Xia; Sun, Jing; Liu, Geng; Dong, Qian; Li, Jian-Jun

    2017-05-01

    Patients with familial hypercholesterolemia (FH) are often characterized by premature coronary artery disease (CAD) with heterogeneity at onset. The aim of the present study was to investigate the associations of lipoprotein (a) [Lp(a)] with the FH phenotype and genotype, and the role of Lp(a) in determining CAD risk among patients with and without FH. We enrolled 8050 patients undergoing coronary angiography from our lipid clinic. Clinical FH was diagnosed using the Dutch Lipid Clinic Network criteria. Mutational analysis (LDLR, APOB, PCSK9) in definite/probable FH was performed by targeted exome sequencing. Lp(a) levels increased with the clinical FH diagnosis (unlikely, possible, definite/probable FH) independent of Lp(a)-hyperlipoproteinemia [Lp(a)-HLP] status, both in patients with Lp(a)-HLP (median 517.70 vs. 570.98 vs. 604.65 mg/L, p < 0.001) and in those without (median 89.20 vs. 99.20 vs. 133.67 mg/L, p < 0.001). Patients with Lp(a)-HLP had a higher prevalence of definite/probable FH than those without (6.1% vs. 2.4%, p < 0.05). However, no significant difference in Lp(a) was observed among patients with the definite/probable FH phenotype carrying LDLR mutations, LDLR-independent (APOB, PCSK9) mutations, or no mutation (p > 0.05). Multivariate analysis showed that Lp(a) and the FH phenotype were both significant determinants in predicting the early onset and severity of CAD. Moreover, Lp(a)-HLP in patients with definite/probable FH significantly increased the CAD risk (all p < 0.05). Lp(a) levels were higher in patients with the FH phenotype than in those without, but no differences were found among FH patients with different mutational backgrounds. Moreover, Lp(a) and FH played a synergistic role in predicting the early onset and severity of CAD. Copyright © 2017 Elsevier B.V. All rights reserved.

  13. Parental and Volunteer Perception of Pyloromyotomy Scars: Comparing Laparoscopic, Open, and Nonsurgical Volunteers.

    PubMed

    St Peter, Shawn D; Acher, Charles W; Shah, Sohail R; Sharp, Susan W; Ostlie, Daniel J

    2016-04-01

    Despite evidence from prospective trials and meta-analyses supporting laparoscopic pyloromyotomy (LP) over open pyloromyotomy (OP), the open technique is still utilized by some surgeons on the premise that there is minimal clinical benefit to LP over OP. Although the potential cosmetic benefit of LP over OP is often cited in reports, it has never been objectively evaluated. After institutional review board approval, the parents of patients from a previous prospective trial who had undergone LP (n = 9) and OP (n = 10) were contacted. After consent was obtained, the parents and patients were asked to complete a validated scar scoring questionnaire that was compared between groups. Standardized photos were taken of study subjects and of controls with no abdominal procedures. Blinded volunteers were recruited to view the photos, identify if scars were present, and complete questions if a scar(s) was seen. Volunteers were also asked about their degree of satisfaction if their child had similar scars, on a four-point scale from happy to unacceptable. Mean age was 7 years in both groups. Parental scar assessment scores were superior in the LP group in every category. Blinded volunteers detected abdominal scars significantly more often in the OP group (98%) vs. the LP group (28%; P < .001). The volunteers detected a scar in 16% of the controls, comparable to the 28% detected in the LP group (P = .17). The mean degree of satisfaction estimated by volunteers was 1.78 for OP and 1.02 for LP and controls, generating a Cohen's d effect size of 5.1 standard deviation units comparing OP to either LP or controls (very large ≥1.3). Parents scored LP scars as superior to OP scars. Surgical scars are almost always identifiable with OP, while the surgical scars associated with LP approach invisibility to the observer, appearing similar to patients with no prior abdominal operation.
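
    For reference, the quoted effect size is an ordinary Cohen's d: the difference in group means divided by the pooled standard deviation. A minimal sketch with made-up 4-point satisfaction ratings (not the study data) is given below.

    # Minimal Cohen's d sketch with made-up 4-point satisfaction scores
    # (1 = happy ... 4 = unacceptable); not the study's data.
    import numpy as np

    op_scores = np.array([2, 2, 1, 2, 2, 2, 1, 2])      # hypothetical OP ratings
    lp_scores = np.array([1, 1, 1, 1, 1, 1, 2, 1])      # hypothetical LP ratings

    def cohens_d(a, b):
        n1, n2 = len(a), len(b)
        pooled_sd = np.sqrt(((n1 - 1) * a.var(ddof=1) + (n2 - 1) * b.var(ddof=1)) / (n1 + n2 - 2))
        return (a.mean() - b.mean()) / pooled_sd

    print(f"Cohen's d = {cohens_d(op_scores, lp_scores):.2f}")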

  14. Do root hydraulic properties change during the early vegetative stage of plant development in barley (Hordeum vulgare)?

    PubMed

    Suku, Shimi; Knipfer, Thorsten; Fricke, Wieland

    2014-02-01

    As annual crops develop, transpirational water loss increases substantially. This increase has to be matched by an increase in water uptake through the root system. The aim of this study was to assess the contributions of changes in intrinsic root hydraulic conductivity (Lp, water uptake per unit root surface area, driving force and time), driving force and root surface area to developmental increases in root water uptake. Hydroponically grown barley plants were analysed during four windows of their vegetative stage of development, when they were 9-13, 14-18, 19-23 and 24-28 d old. Hydraulic conductivity was determined for individual roots (Lp) and for entire root systems (Lp(r)). Osmotic Lp of individual seminal and adventitious roots and osmotic Lp(r) of the root system were determined in exudation experiments. Hydrostatic Lp of individual roots was determined by root pressure probe analyses, and hydrostatic Lp(r) of the root system was derived from analyses of transpiring plants. Although osmotic and hydrostatic Lp and Lp(r) values increased initially during development and were correlated positively with plant transpiration rate, their overall developmental increases (about 2-fold) were small compared with increases in transpirational water loss and root surface area (about 10- to 40-fold). The water potential gradient driving water uptake in transpiring plants more than doubled during development, and potentially contributed to the increases in plant water flow. Osmotic Lp(r) of entire root systems and hydrostatic Lp(r) of transpiring plants were similar, suggesting that the main radial transport path in roots was the cell-to-cell path at all developmental stages. An increase in the surface area of the root system, rather than changes in intrinsic root hydraulic properties, is the main means through which hydroponically grown barley plants sustain the increase in transpirational water loss during their vegetative development.

  15. One-year neurodevelopmental outcome of very and late preterm infants: Risk factors and correlation with maternal stress.

    PubMed

    Coletti, Maria Franca; Caravale, Barbara; Gasparini, Corinna; Franco, Francesco; Campi, Francesca; Dotta, Andrea

    2015-05-01

    Although "late preterm" (LP) newborns (33-36 weeks of gestational age) represent more than 70% of all preterm labors, little is known about the relation between certain risk factors and developmental outcomes in LP compared to "very preterm" (≤32 weeks) children (VP). This study investigates: (1) LP and VP infants' development at 12 months of corrected age (CA) using the Bayley Scales of Infant Development - 3rd Edition (BSID-III); (2) correlation between BSID-III performances and maternal stress (using Parenting Stress Index-Short Form, PSI-SF) among LP and VP at 12 months CA; and (3) the link between known neonatal and demographic risk factors and developmental outcomes of LP and VP infants. For both LP and VP infants the Mean Cognitive (LP: 102.69±7.68; VP: 103.63±10.68), Language (LP: 96.23±10.08; VP: 99.10±10.37) and Motor (LP: 91.11±10.33; VP: 93.85±10.17) composite scores were in the normal range, without significant differences between the groups. Correlations between PSI-SF and BSID-III showed that in the VP group (but not LP), Language score was negatively related to the PSI-SF 'Difficult Child' scale (r=-.34, p<.05). Regression models revealed that cognitive performance was significantly predicted by physical therapy in LP and by cesarean section in VP infants. For VP only maternal education and length of stay predicted Language score, whereas physical therapy predicted Motor score. Results of the study underline the importance of considering cognitive, language and motor developments separately when assessing a preterm child's development. Prediction models of developmental performance confirm the influence of some known neonatal risk factors and indicate the need for further research on the role of sociodemographic risk factors. Copyright © 2015 Elsevier Inc. All rights reserved.

  16. Leukoproliferative response of splenocytes from English sole (Pleuronectes vetulus) exposed to chemical contaminants

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Arkoosh, M.R.; Clemons, E.; Huffman, P.

    The leukoproliferative (LP) response of splenic leukocytes from the marine benthic fish English sole (Pleuronectes vetulus) stimulated with the mitogens lipopolysaccharide (LPS), concanavalin A (Con A), and pokeweed mitogen (PWM) was examined as a biomarker of immunotoxic effects. English sole were exposed to contaminants either by injection of an organic-solvent extract of a sediment containing polycyclic aromatic compounds (PACs) or by placement for up to 5 weeks on a reference sediment containing 0.15 to 1.5% (v/v) of the PAC-contaminated sediment. English sole either injected with the contaminated extract or held on PAC-contaminated sediment had an augmented response to Con A. The LP response to LPS showed no relationship to PAC exposure in laboratory-exposed fish, while PWM showed no consistent relationship to exposure to PACs. In a field study, English sole captured from an urban area in Puget Sound, Washington, USA, contaminated with PACs and other chemical contaminants had a significantly augmented LP response to Con A and PWM in comparison to the LP response in fish from a nonurban reference site. Fish from another nonurban site also had an augmented LP response to Con A, indicating that elevation of the Con A LP response can also result from factors other than chemical contaminant exposure. In addition, English sole from this site also had an augmented LP response to LPS, whereas fish from urban sites did not exhibit an augmented LP response to LPS. Overall, the results demonstrated that although the LP response in splenic leukocytes of English sole to Con A was linked to contaminant exposure, the LP response to Con A did not exhibit high specificity as an indicator of chemical contaminant exposure. However, the concerted use of Con A, LPS, and PWM allowed for identification of apparent chemical contaminant-induced alterations of the LP response in English sole from an urban area of Puget Sound.

  17. Lipoprotein (a), metabolic syndrome and coronary calcium score in a large occupational cohort.

    PubMed

    Sung, K-C; Wild, S H; Byrne, C D

    2013-12-01

    Whether lipoprotein (a) [Lp(a)] concentration is associated with metabolic syndrome (MetS) and pre-clinical atherosclerosis in different ethnic groups is uncertain. The association between Lp(a), MetS and a measure of pre-clinical atherosclerosis was studied in a large Asian cohort. Data were analyzed from a South Korean occupational cohort who underwent cardiac computed tomography (CT) estimation of coronary artery calcium (CAC) score and measurements of cardiovascular risk factors (n = 14,583 people). The key exposure was an Lp(a) concentration in the top quartile (>38.64 mg/dL), with a CAC score >0 as the outcome variable and measure of pre-clinical atherosclerosis. Logistic regression was used to describe the associations. 1462 participants had a CAC score >0. In the lowest Lp(a) quartile (<11.29 mg/dL), 25.8% had MetS, compared with 16.1% in the highest Lp(a) quartile (>38.64 mg/dL) (p < 0.001). MetS and its component features were inversely related to Lp(a) concentration (all p < 0.0001). In the highest Lp(a) quartile group, there was an association between Lp(a) and a CAC score >0 in men (OR 1.21[1.05, 1.40], p = 0.008) and women (OR 1.62[1.03, 2.55], p = 0.038), after adjustment for age, sex, lipid lowering therapy, and multiple cardiovascular risk factors. There was no evidence of an interaction between highest-quartile Lp(a) and either high LDLc (>147 mg/dL) (p = 0.99) or MetS (p = 0.84) on the association with a CAC score >0. Lp(a) levels are inversely related to MetS and its components. There was a robust association between Lp(a) concentration >38.6 mg/dL and a marker of early atherosclerosis in both men and women, regardless of LDLc level, MetS or other cardiovascular risk factors. © 2013 Elsevier B.V. All rights reserved.
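
    The logistic-regression step described above can be sketched in a few lines. The sketch below is not the study's code; the synthetic data frame, column names and coefficients are all assumptions, and it simply shows how an adjusted odds ratio for a top-quartile Lp(a) indicator would be obtained with statsmodels.

    # Hedged sketch of an adjusted odds-ratio estimate for CAC score > 0 with a
    # top-quartile Lp(a) indicator as the exposure. Data are synthetic; column
    # names and coefficients are assumptions, not values from the study.
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n = 5000
    df = pd.DataFrame({
        "lpa_q4": (rng.random(n) < 0.25).astype(int),   # Lp(a) > 38.64 mg/dL indicator
        "age": rng.normal(45, 8, n),
        "male": (rng.random(n) < 0.8).astype(int),
        "ldl_c": rng.normal(120, 30, n),
    })
    logit = 0.06 * (df["age"] - 45) + 0.2 * df["lpa_q4"] - 2.2
    df["cac_positive"] = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

    X = sm.add_constant(df[["lpa_q4", "age", "male", "ldl_c"]])
    fit = sm.Logit(df["cac_positive"], X).fit(disp=0)
    print("adjusted OR for top-quartile Lp(a):", np.exp(fit.params["lpa_q4"]))
    print("95% CI:", np.exp(fit.conf_int().loc["lpa_q4"]).values)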

  18. TH-EF-BRB-05: 4pi Non-Coplanar IMRT Beam Angle Selection by Convex Optimization with Group Sparsity Penalty

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    O’Connor, D; Nguyen, D; Voronenko, Y

    Purpose: Integrated beam orientation and fluence map optimization is expected to be the foundation of robust automated planning but existing heuristic methods do not promise global optimality. We aim to develop a new method for beam angle selection in 4π non-coplanar IMRT systems based on solving (globally) a single convex optimization problem, and to demonstrate the effectiveness of the method by comparison with a state-of-the-art column generation method for 4π beam angle selection. Methods: The beam angle selection problem is formulated as a large scale convex fluence map optimization problem with an additional group sparsity term that encourages most candidate beams to be inactive. The optimization problem is solved using an accelerated first-order method, the Fast Iterative Shrinkage-Thresholding Algorithm (FISTA). The beam angle selection and fluence map optimization algorithm is used to create non-coplanar 4π treatment plans for several cases (including head and neck, lung, and prostate cases) and the resulting treatment plans are compared with 4π treatment plans created using the column generation algorithm. Results: In our experiments the treatment plans created using the group sparsity method meet or exceed the dosimetric quality of plans created using the column generation algorithm, which was shown superior to clinical plans. Moreover, the group sparsity approach converges in about 3 minutes in these cases, as compared with runtimes of a few hours for the column generation method. Conclusion: This work demonstrates the first non-greedy approach to non-coplanar beam angle selection, based on convex optimization, for 4π IMRT systems. The method given here improves both treatment plan quality and runtime as compared with a state-of-the-art column generation algorithm. When the group sparsity term is set to zero, we obtain an excellent method for fluence map optimization, useful when beam angles have already been selected. NIH R43CA183390, NIH R01CA188300, Varian Medical Systems; Part of this research took place while D. O’Connor was a summer intern at RefleXion Medical.
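
    A minimal sketch of the core idea, group-sparsity-penalized fluence map optimization solved by FISTA with group soft-thresholding, is given below. It is not the authors' implementation: the dose-influence matrix, problem sizes and penalty weight are stand-ins, and the clinical formulation additionally enforces nonnegative fluences and dose-volume objectives that are omitted here.

    # Hedged sketch: least-squares dose fit plus a group-lasso penalty, one group
    # of beamlet weights per candidate beam, minimized with FISTA. All data are
    # random stand-ins; the real problem also constrains fluences to be nonnegative.
    import numpy as np

    rng = np.random.default_rng(1)
    n_vox, n_beams, n_blets = 300, 20, 15                  # toy sizes (hypothetical)
    D = rng.random((n_vox, n_beams * n_blets)) / n_blets    # dose-influence matrix (stand-in)
    d_target = rng.random(n_vox)                            # prescribed dose (stand-in)
    groups = [slice(b * n_blets, (b + 1) * n_blets) for b in range(n_beams)]
    lam = 5.0                                               # controls how many beams stay active

    def group_prox(v, thr):
        # proximal operator of thr * sum of group l2-norms (group soft-thresholding)
        out = v.copy()
        for g in groups:
            nrm = np.linalg.norm(v[g])
            out[g] = 0.0 if nrm <= thr else (1.0 - thr / nrm) * v[g]
        return out

    L = np.linalg.norm(D, 2) ** 2                           # Lipschitz constant of the data-fit gradient
    x = np.zeros(D.shape[1]); y = x.copy(); t = 1.0
    for _ in range(500):                                    # FISTA iterations
        grad = D.T @ (D @ y - d_target)
        x_new = group_prox(y - grad / L, lam / L)
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)
        x, t = x_new, t_new

    active = [b for b, g in enumerate(groups) if np.linalg.norm(x[g]) > 1e-8]
    print("beams kept active by the group-sparsity penalty:", active)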

  19. Evidence for an Evolutionarily Conserved Memory Coding Scheme in the Mammalian Hippocampus

    PubMed Central

    Thome, Alexander; Lisanby, Sarah H.; McNaughton, Bruce L.

    2017-01-01

    Decades of research identify the hippocampal formation as central to memory storage and recall. Events are stored via distributed population codes, the parameters of which (e.g., sparsity and overlap) determine both storage capacity and fidelity. However, it remains unclear whether the parameters governing information storage are similar between species. Because episodic memories are rooted in the space in which they are experienced, the hippocampal response to navigation is often used as a proxy to study memory. Critically, recent studies in rodents that mimic the conditions typical of navigation studies in humans and nonhuman primates (i.e., virtual reality) show that reduced sensory input alters hippocampal representations of space. The goal of this study was to quantify this effect and determine whether there are commonalities in information storage across species. Using functional molecular imaging, we observe that navigation in virtual environments elicits activity in fewer CA1 neurons relative to real-world conditions. Conversely, comparable neuronal activity is observed in hippocampus region CA3 and the dentate gyrus under both conditions. Surprisingly, we also find evidence that the absolute number of neurons used to represent an experience is relatively stable between nonhuman primates and rodents. We propose that this convergence reflects an optimal ensemble size for episodic memories. SIGNIFICANCE STATEMENT One primary factor constraining memory capacity is the sparsity of the engram, the proportion of neurons that encode a single experience. Investigating sparsity in humans is hampered by the lack of single-cell resolution and differences in behavioral protocols. Sparsity can be quantified in freely moving rodents, but extrapolating these data to humans assumes that information storage is comparable across species and is robust to restraint-induced reduction in sensory input. Here, we test these assumptions and show that species differences in brain size build memory capacity without altering the structure of the data being stored. Furthermore, sparsity in most of the hippocampus is resilient to reduced sensory information. This information is vital to integrating animal data with human imaging navigation studies. PMID:28174334

  20. Long-term statin therapy could be efficacious in reducing the lipoprotein (a) levels in patients with coronary artery disease modified by some traditional risk factors.

    PubMed

    Xu, Ming-Xing; Liu, Chang; He, Yong-Ming; Yang, Xiang-Jun; Zhao, Xin

    2017-05-01

    Lipoprotein (a) [Lp (a)] is a well-established risk factor for coronary artery disease (CAD). However, treatment of patients with higher Lp (a) levels remains challenging. The current study aimed to investigate the therapeutic effects of short-, medium- and long-term statin use on Lp (a) reduction and its modifying factors. The therapeutic duration was categorized into short-term (median, 39 days), medium-term (median, 219 days) and long-term (median, 677 days). The lipid profiles before therapy served as baselines. Patients at short-, medium- or long-term follow-up were exactly matched with those at baseline. Every patient's lipid profiles during follow-up were compared with his or her own baseline values. The current study demonstrated that long-term statin therapy significantly decreased the Lp (a) levels in CAD patients, while short- or medium-term statin therapy did not. When grouped by statin use, only long-term simvastatin use significantly decreased the Lp (a) levels, while long-term atorvastatin use decreased the Lp (a) levels insignificantly. Primary hypertension (PH), DM, low density lipoprotein cholesterol (LDL-C) and high density lipoprotein cholesterol (HDL-C) could modify the therapeutic effects of statin use on the Lp (a) levels in CAD patients. Long-term statin therapy could be efficacious in reducing the Lp (a) levels in CAD patients, an effect modified by some traditional risk factors. In the era of commercial unavailability of more reliable Lp (a)-lowering drugs, our findings will bolster confidence in fighting higher Lp (a) abnormalities both for patients and for doctors.

  1. Effect of woody-plant encroachment on livestock production in North and South America

    PubMed Central

    Anadón, José D.; Sala, Osvaldo E.; Turner, B. L.; Bennett, Elena M.

    2014-01-01

    A large fraction of the world grasslands and savannas are undergoing a rapid shift from herbaceous to woody-plant dominance. This land-cover change is expected to lead to a loss in livestock production (LP), but the impacts of woody-plant encroachment on this crucial ecosystem service have not been assessed. We evaluate how tree cover (TC) has affected LP at large spatial scales in rangelands of contrasting social–economic characteristics in the United States and Argentina. Our models indicate that in areas of high productivity, a 1% increase in TC results in a reduction in LP ranging from 0.6 to 1.6 reproductive cows (Rc) per km2. Mean LP in the United States is 27 Rc per km2, so a 1% increase in TC results in a 2.5% decrease in mean LP. This effect is large considering that woody-plant cover has been described as increasing at 0.5% to 2% per y. On the contrary, in areas of low productivity, increased TC had a positive effect on LP. Our results also show that ecological factors account for a larger fraction of LP variability in Argentinean than in US rangelands. Differences in the relative importance of ecological versus nonecological drivers of LP in Argentina and the United States suggest that the valuation of ecosystem services between these two rangelands might be different. Current management strategies in Argentina are likely designed to maximize LP for various reasons we are unable to explore in this effort, whereas land managers in the United States may be optimizing multiple ecosystem services, including conservation or recreation, alongside LP. PMID:25136084

  2. Impact of high lipoprotein(a) levels on in-stent restenosis and long-term clinical outcomes of angina pectoris patients undergoing percutaneous coronary intervention with drug-eluting stents in Asian population.

    PubMed

    Park, Sang-Ho; Rha, Seung-Woon; Choi, Byoung-Geol; Park, Ji-Young; Jeon, Ung; Seo, Hong-Seog; Kim, Eung-Ju; Na, Jin-Oh; Choi, Cheol-Ung; Kim, Jin-Won; Lim, Hong-Euy; Park, Chang-Gyu; Oh, Dong-Joo

    2015-06-01

    Lipoprotein(a) (Lp(a)) is known to be associated with cardiovascular complications and atherothrombotic properties in general populations. However, it has not been examined whether Lp(a) levels are able to predict adverse cardiovascular outcomes in patients undergoing percutaneous coronary intervention (PCI) with drug-eluting stents (DES). A total of 595 consecutive patients with angina pectoris who underwent elective PCI with DES were enrolled from 2004 to 2010. The patients were divided into two groups according to the levels of Lp(a): Lp(a) < 50 mg/dL (n = 485 patients), and Lp(a) ≥ 50 mg/dL (n = 111 patients). The 6-9-month angiographic outcomes and 3-year cumulative major clinical outcomes were compared between the two groups. Binary restenosis occurred in 26 of 133 lesions (19.8%) in the high Lp(a) group and 43 of 550 lesions (7.9%) in the low Lp(a) group (P = 0.001). In multivariate analysis, the reference vessel diameter, low density lipoprotein cholesterol, total lesion length, and Lp(a) ≥ 50 mg/dL were predictors of binary restenosis. In the Cox proportional hazards regression analysis, Lp(a) > 50 mg/dL was significantly associated with the 3-year adverse clinical outcomes including any myocardial infarction, revascularization (target lesion revascularization (TLR) and target vessel revascularization (TVR)), TLR-major adverse cardiac events (MACEs), TVR-MACE, and All-MACEs. In our study, high Lp(a) level ≥ 50 mg/dL in angina pectoris patients undergoing elective PCI with DES was significantly associated with binary restenosis and 3-year adverse clinical outcomes in an Asian population. © 2015 Wiley Publishing Asia Pty Ltd.

  3. Ischemic Effects of Transcatheter Arterial Embolization with N-Butyl Cyanoacrylate-Lipiodol on the Colon in a Swine Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ikoma, Akira; Kawai, Nobuyuki; Sato, Morio, E-mail: morisato@wakayama-med.ac.jp

    2010-10-15

    This study was designed to assess the safety of transcatheter arterial embolization (TAE) with n-butyl cyanoacrylate-lipiodol (NBCA-Lp) for the large bowel and to investigate the vital response to NBCA-Lp in a swine model. In nine swine, nine arteries nourishing the colon were embolized with NBCA-Lp (1 ml of NBCA mixed with 4 ml of lipiodol): sigmoid-rectal branch artery in six swine, right colic branch artery in two, and middle colic branch artery in one. The amount of NBCA-Lp was 0.1-0.4 ml. Sacrifice was conducted 3 days after TAE to identify histological infarction. Classification was conducted retrospectively: group A, vasa recta without NBCA-Lp embolization despite TAE; group B, three or fewer vasa recta with NBCA-Lp embolization; and group C, five or more vasa recta with NBCA-Lp embolization. In one swine in group A, no necrotic focus was observed. In group B, three of four swine experienced no ischemic damage. The remaining one swine experienced necrosis of mucosal and submucosal layers in one-fourth of the circumference. In group C, all four swine with marginal artery and five vasa recta or more embolized experienced total necrosis of mucosa, submucosa, and smooth muscle layers of the whole colonic circumference. A significant difference in the extent of ischemic damage was observed between groups B and C (P < 0.05). Microscopically, NBCA-Lp induced acute vasculitis. Embolization of three or fewer vasa recta with NBCA-Lp induced no ischemic damage or limited necrosis, whereas embolization of five or more vasa recta with NBCA-Lp induced extensive necrosis.

  4. A pentanucleotide repeat polymorphism in the 5' control region of the apolipoprotein(a) gene is associated with lipoprotein(a) plasma concentrations in Caucasians.

    PubMed Central

    Trommsdorff, M; Köchl, S; Lingenhel, A; Kronenberg, F; Delport, R; Vermaak, H; Lemming, L; Klausen, I C; Faergeman, O; Utermann, G

    1995-01-01

    The enormous interindividual variation in the plasma concentrations of the atherogenic lipoprotein(a) [Lp(a)] is almost entirely controlled by the apo(a) locus on chromosome 6q26-q27. A variable number of transcribed kringle4 repeats (K4-VNTR) in the gene explains a large fraction of this variation, whereas the rest is presently unexplained. Here we have analyzed the effect of the K4-VNTR and of a pentanucleotide repeat polymorphism (TTTTA)n (n = 6-11) in the 5' control region of the apo(a) gene on plasma Lp(a) levels in unrelated healthy Tyroleans (n = 130), Danes (n = 154), and Black South Africans (n = 112). The K4-VNTR had a significant effect on plasma Lp(a) levels in Caucasians and explained 41 and 45% of the variation in Lp(a) plasma concentration in Tyroleans and Danes, respectively. Both the pentanucleotide repeat (PNR) allele frequencies and their effects on Lp(a) concentrations were heterogeneous among populations. A significant negative correlation between the number of pentanucleotide repeats and the plasma Lp(a) concentration was observed in Tyroleans and Danes. The effect of the 5' PNRP on plasma Lp(a) concentrations was independent of the K4-VNTR and explained from 10 to 14% of the variation in Lp(a) concentrations in Caucasians. No significant effect of the PNRP was present in Black Africans. This suggests allelic association between PNR alleles and sequences affecting Lp(a) levels in Caucasians. Thus, in Caucasians but not in Blacks, concentrations of the atherogenic Lp(a) particle are strongly associated with two repeat polymorphisms in the apo(a) gene. PMID:7615785

  5. Factors associated with lipoprotein(a) in chronic kidney disease.

    PubMed

    Uhlig, Katrin; Wang, Shin-Ru; Beck, Gerald J; Kusek, John W; Marcovina, Santica M; Greene, Tom; Levey, Andrew S; Sarnak, Mark J

    2005-01-01

    It is unclear whether lipoprotein(a) (Lp[a]) levels in patients with chronic kidney disease (CKD) are elevated as a result of reduced glomerular filtration rate (GFR) or other factors associated with CKD. The goal of this study is to describe the association of Lp(a) level with GFR in the context of apoprotein(a) (apo[a]) isoform size, race, and other kidney disease-related factors, such as proteinuria, serum albumin level, C-reactive protein (CRP) level, and serum lipid levels. Lp(a) and apo(a) isoforms were measured in serum samples obtained at baseline from 804 participants in the Modification of Diet in Renal Disease study (GFR range, 13 to 55 mL/min/1.73 m2). The cross-sectional association between Lp(a) level and GFR, apo(a) isoform size, race, and other variables was analyzed in univariate and multivariate linear regression. Median Lp(a) level was greater in blacks than whites (97.5 versus 28.1 nmol/L; P < 0.001). Those with a low-molecular-weight apo(a) isoform size had greater Lp(a) levels than those with a high-molecular-weight apo(a) isoform size (57.5 versus 21.3 nmol/L; P < 0.001). Lp(a) level was not associated with GFR. Low-molecular-weight apo(a), black race, and greater levels of proteinuria, CRP, and triglycerides were independently associated with greater Lp(a) levels. In this population with CKD stages 3 to 4, GFR was not associated with Lp(a) level, whereas other factors related to CKD, such as proteinuria, CRP level, and triglyceride level, as well as genetic factors such as apo(a) isoform size and race, were associated with Lp(a) level.

  6. Tonic nanomolar dopamine enables an activity-dependent phase recovery mechanism that persistently alters the maximal conductance of the hyperpolarization-activated current in a rhythmically active neuron.

    PubMed

    Rodgers, Edmund W; Fu, Jing Jing; Krenz, Wulf-Dieter C; Baro, Deborah J

    2011-11-09

    The phases at which network neurons fire in rhythmic motor outputs are critically important for the proper generation of motor behaviors. The pyloric network in the crustacean stomatogastric ganglion generates a rhythmic motor output wherein neuronal phase relationships are remarkably invariant across individuals and throughout lifetimes. The mechanisms for maintaining these robust phase relationships over the long term are not well described. Here we show that tonic nanomolar dopamine (DA) acts at type 1 DA receptors (D1Rs) to enable an activity-dependent mechanism that can contribute to phase maintenance in the lateral pyloric (LP) neuron. The LP displays continuous rhythmic bursting. The activity-dependent mechanism was triggered by a prolonged decrease in LP burst duration, and it generated a persistent increase in the maximal conductance (G(max)) of the LP hyperpolarization-activated current (I(h)), but only in the presence of steady-state DA. Interestingly, micromolar DA produces an LP phase advance accompanied by a decrease in LP burst duration that abolishes normal LP network function. During a 1 h application of micromolar DA, LP phase recovered over tens of minutes because the activity-dependent mechanism enabled by steady-state DA was triggered by the micromolar DA-induced decrease in LP burst duration. Presumably, this mechanism restored normal LP network function. These data suggest steady-state DA may enable homeostatic mechanisms that maintain motor network output during protracted neuromodulation. This DA-enabled, activity-dependent mechanism to preserve phase may be broadly relevant, as diminished dopaminergic tone has recently been shown to reduce I(h) in rhythmically active neurons in the mammalian brain.

  7. Transcriptome analysis of the epidermis of the purple quail-like (q-lp) mutant of silkworm, Bombyx mori.

    PubMed

    Wang, Pingyang; Qiu, Zhiyong; Xia, Dingguo; Tang, Shunming; Shen, Xingjia; Zhao, Qiaoling

    2017-01-01

    A new purple quail-like (q-lp) mutant found from the plain silkworm strain 932VR has pigment dots on the epidermis similar to the pigment mutant quail (q). In addition, q-lp mutant larvae are inactive, consume little and grow slowly, with a high death rate and other developmental abnormalities. Pigmentation of the silkworm epidermis consists of melanin, ommochrome and pteridine. Silkworm development is regulated by ecdysone and juvenile hormone. In this study, we performed RNA-Seq on the epidermis of the q-lp mutant in the 4th instar during molting, with 932VR serving as the control. The results showed 515 differentially expressed genes, of which 234 were upregulated and 281 downregulated in q-lp. BLASTGO analysis indicated that the downregulated genes mainly encode protein-binding proteins, membrane components, oxidation/reduction enzymes, and proteolytic enzymes, whereas the upregulated genes largely encode cuticle structural constituents, membrane components, transport related proteins, and protein-binding proteins. Quantitative reverse transcription PCR was used to verify the accuracy of the RNA-Seq data, focusing on key genes for biosynthesis of the three pigments and chitin as well as genes encoding cuticular proteins and several related nuclear receptors, which are thought to play key roles in the q-lp mutant. We drew three conclusions based on the results: 1) melanin, ommochrome and pteridine pigments are all increased in the q-lp mutant; 2) more cuticle proteins are expressed in q-lp than in 932VR, and the number of upregulated cuticular genes is significantly greater than downregulated genes; 3) the downstream pathway regulated by ecdysone is blocked in the q-lp mutant. Our research findings lay the foundation for further research on the developmental changes responsible for the q-lp mutant.

  8. Beyond the Sparsity-Based Target Detector: A Hybrid Sparsity and Statistics Based Detector for Hyperspectral Images.

    PubMed

    Du, Bo; Zhang, Yuxiang; Zhang, Liangpei; Tao, Dacheng

    2016-08-18

    Hyperspectral images provide great potential for target detection; however, new challenges are also introduced, so that hyperspectral target detection should be treated as a new problem and modeled differently. Many classical detectors have been proposed based on the linear mixing model and the sparsity model. However, the former type of model cannot deal well with spectral variability in limited endmembers, and the latter type of model usually treats target detection as a simple classification problem and pays less attention to the low target probability. In this case, can we find an efficient way to utilize both the high-dimensional features behind hyperspectral images and the limited target information to extract small targets? This paper proposes a novel sparsity-based detector named the hybrid sparsity and statistics detector (HSSD) for target detection in hyperspectral imagery, which can effectively deal with the above two problems. The proposed algorithm designs a hypothesis-specific dictionary based on the prior hypotheses for the test pixel, which can avoid an imbalanced number of training samples for a class-specific dictionary. Then, a purification process is employed for the background training samples in order to construct an effective competition between the two hypotheses. Next, a sparse representation based binary hypothesis model merged with additive Gaussian noise is proposed to represent the image. Finally, a generalized likelihood ratio test is performed to obtain a more robust detection decision than the reconstruction residual based detection methods. Extensive experimental results with three hyperspectral datasets confirm that the proposed HSSD algorithm clearly outperforms the state-of-the-art target detectors.
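
    In the spirit of the binary-hypothesis, sparse-representation model described above, a much-simplified residual-ratio detector can be sketched as follows. This is not the published HSSD (there is no dictionary purification and no explicit Gaussian noise modeling); the dictionaries are random stand-ins and orthogonal matching pursuit is used as a generic sparse coder.

    # Simplified sketch, not the published HSSD: reconstruct the test pixel with a
    # background-only dictionary (H0) and with background + target atoms (H1),
    # then compare sparse-coding residuals. Dictionaries are random stand-ins.
    import numpy as np
    from sklearn.linear_model import OrthogonalMatchingPursuit

    rng = np.random.default_rng(2)
    n_bands = 100
    D_bg = rng.random((n_bands, 40))          # background dictionary (stand-in)
    D_tgt = rng.random((n_bands, 5))          # target dictionary (stand-in)
    y = 0.7 * D_tgt[:, 0] + 0.05 * rng.standard_normal(n_bands)   # test pixel

    def sparse_residual(D, y, k=5):
        omp = OrthogonalMatchingPursuit(n_nonzero_coefs=k, fit_intercept=False)
        omp.fit(D, y)
        return np.linalg.norm(y - D @ omp.coef_)

    r0 = sparse_residual(D_bg, y)                       # H0: background only
    r1 = sparse_residual(np.hstack([D_bg, D_tgt]), y)   # H1: background + target
    statistic = r0 / max(r1, 1e-12)                     # large value favours "target present"
    print("residual-ratio statistic:", statistic)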

  9. Virtual Seismic Observation (VSO) with Sparsity-Promotion Inversion

    NASA Astrophysics Data System (ADS)

    Tiezhao, B.; Ning, J.; Jianwei, M.

    2017-12-01

    Large station intervals lead to low-resolution images and sometimes prevent imaging of regions of interest. Sparsity-promotion inversion, a useful method to recover missing data in industrial field acquisition, can be borrowed to interpolate seismic data at non-sampled sites, forming Virtual Seismic Observations (VSO). Traditional sparsity-promotion inversion struggles when there are large time differences between adjacent sites, which is the case we are most concerned with, so we use a shift method to improve it. The interpolation procedure is as follows: we first employ a low-pass filter to obtain long-wavelength waveform data and shift the waveforms of the same wave in different seismograms to nearly the same arrival time. Then we use wavelet-transform-based sparsity-promotion inversion to interpolate waveform data at non-sampled sites, filling in a phase for each missing trace. Finally, we shift the waveforms back to their original arrival times. We call our method FSIS (Filtering, Shift, Interpolation, Shift) interpolation. In this way, we can insert different virtually observed seismic phases into non-sampled sites and obtain dense seismic observation data. To test our method, we randomly hide the real data at a site and use the rest to interpolate the observation at that site, using either direct interpolation or the FSIS method. Compared with directly interpolated data, data interpolated with FSIS preserve amplitudes better. Results also show that the arrival times and waveforms of those VSOs express the real data well, which convinces us that our method for forming VSOs is applicable. In this way, we can provide the data needed by advanced seismic techniques such as RTM to illuminate shallow structures.
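
    The interpolation step alone can be sketched as a sparsity-promoting inversion with a sampling mask, as below. The sketch uses a DCT as a stand-in for the wavelet transform of the study, omits the filtering and arrival-time shifts of the full FSIS workflow, and all signals and parameters are toy assumptions.

    # Hedged sketch of the sparsity-promoting interpolation step only: recover
    # missing samples by iterative soft-thresholding (ISTA) in a transform domain.
    # A DCT stands in for the wavelet transform used in the study.
    import numpy as np
    from scipy.fft import dct, idct

    rng = np.random.default_rng(3)
    n = 256
    t = np.arange(n)
    true = np.sin(2 * np.pi * t / 32) + 0.5 * np.sin(2 * np.pi * t / 9)   # toy waveform
    mask = rng.random(n) < 0.5                     # ~50% of sites observed
    y = np.where(mask, true, 0.0)

    lam = 0.05
    soft = lambda c, thr: np.sign(c) * np.maximum(np.abs(c) - thr, 0.0)
    x = y.copy()
    for _ in range(200):                           # ISTA iterations (step size 1)
        z = x - mask * (mask * x - y)              # gradient step on the masked misfit
        x = idct(soft(dct(z, norm="ortho"), lam), norm="ortho")

    print("RMS error on missing sites:", np.sqrt(np.mean((x[~mask] - true[~mask]) ** 2)))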

  10. Sparsity-driven coupled imaging and autofocusing for interferometric SAR

    NASA Astrophysics Data System (ADS)

    Zengin, Oğuzcan; Khwaja, Ahmed Shaharyar; Çetin, Müjdat

    2018-04-01

    We propose a sparsity-driven method for coupled image formation and autofocusing based on multi-channel data collected in interferometric synthetic aperture radar (IfSAR). Relative phase between SAR images contains valuable information. For example, it can be used to estimate the height of the scene in SAR interferometry. However, this relative phase could be degraded when independent enhancement methods are used over SAR image pairs. Previously, Ramakrishnan et al. proposed a coupled multi-channel image enhancement technique, based on a dual descent method, which exhibits better performance in phase preservation compared to independent enhancement methods. Their work involves a coupled optimization formulation that uses a sparsity enforcing penalty term as well as a constraint tying the multichannel images together to preserve the cross-channel information. In addition to independent enhancement, the relative phase between the acquisitions can be degraded due to other factors as well, such as platform location uncertainties, leading to phase errors in the data and defocusing in the formed imagery. The performance of airborne SAR systems can be affected severely by such errors. We propose an optimization formulation that combines Ramakrishnan et al.'s coupled IfSAR enhancement method with the sparsity-driven autofocus (SDA) approach of Önhon and Çetin to alleviate the effects of phase errors due to motion errors in the context of IfSAR imaging. Our method solves the joint optimization problem with a Lagrangian optimization method iteratively. In our preliminary experimental analysis, we have obtained results of our method on synthetic SAR images and compared its performance to existing methods.

  11. Double temporal sparsity based accelerated reconstruction of compressively sensed resting-state fMRI.

    PubMed

    Aggarwal, Priya; Gupta, Anubha

    2017-12-01

    A number of reconstruction methods have been proposed recently for accelerated functional Magnetic Resonance Imaging (fMRI) data collection. However, existing methods suffer from greater artifacts at high acceleration factors. This paper addresses the issue of accelerating fMRI collection via undersampled k-space measurements combined with the proposed method based on l1-l1 norm constraints, wherein we impose the first l1-norm sparsity on the voxel time series (temporal data) in the transformed domain and the second l1-norm sparsity on the successive differences of the same temporal data. Hence, we name the proposed method the Double Temporal Sparsity based Reconstruction (DTSR) method. The robustness of the proposed DTSR method has been thoroughly evaluated both at the subject level and at the group level on real fMRI data. Results are presented at various acceleration factors. Quantitative analysis in terms of Peak Signal-to-Noise Ratio (PSNR) and other metrics, and qualitative analysis in terms of reproducibility of brain Resting State Networks (RSNs), demonstrate that the proposed method is accurate and robust. In addition, the proposed DTSR method preserves brain networks that are important for studying fMRI data. Compared to the existing methods, the DTSR method shows promising potential with an improvement of 10-12 dB in PSNR with acceleration factors up to 3.5 on resting-state fMRI data. Simulation results on real data demonstrate that the DTSR method can be used to acquire accelerated fMRI with accurate detection of RSNs. Copyright © 2017 Elsevier Ltd. All rights reserved.
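
    For illustration, the DTSR-style cost can be written down directly: a k-space data-fit term plus one l1 penalty on transformed voxel time series and one l1 penalty on their successive differences. The sketch below only evaluates such a cost for a candidate reconstruction; the paper minimizes it with its own solver, and the choice of a temporal DCT and all array names here are assumptions.

    # Hedged sketch that evaluates a DTSR-style cost for a candidate image series;
    # the published method minimizes it with its own algorithm. The temporal DCT
    # and all variable names are assumptions.
    import numpy as np
    from scipy.fft import dct

    def dtsr_cost(x, y, mask, lam1=0.01, lam2=0.01):
        # data fidelity in undersampled k-space (2-D spatial FFT per time frame)
        kspace = np.fft.fft2(x, axes=(0, 1), norm="ortho")
        fid = 0.5 * np.sum(np.abs(mask * kspace - y) ** 2)
        # first l1 term: sparsity of each voxel time series in a transform domain
        s1 = np.sum(np.abs(dct(x, axis=-1, norm="ortho")))
        # second l1 term: sparsity of successive temporal differences
        s2 = np.sum(np.abs(np.diff(x, axis=-1)))
        return fid + lam1 * s1 + lam2 * s2

    rng = np.random.default_rng(4)
    true = rng.random((32, 32, 20))                         # toy image series
    mask = (rng.random((32, 32, 20)) < 0.3).astype(float)   # k-t undersampling mask
    y = mask * np.fft.fft2(true, axes=(0, 1), norm="ortho")
    print("cost at the true series:", dtsr_cost(true, y, mask))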

  12. A Prospective Comparison of Robotic and Laparoscopic Pyeloplasty

    PubMed Central

    Link, Richard E.; Bhayani, Sam B.; Kavoussi, Louis R.

    2006-01-01

    Objective: To determine whether robotic-assisted pyeloplasty (RLP) has any significant clinical or cost advantages over laparoscopic pyeloplasty (LP) for surgeons already facile with intracorporeal suturing. Summary Background Data: LP has become an established management approach for primary ureteropelvic junction obstruction. More recently, the da Vinci robot has been applied to this procedure (RLP) in an attempt to shorten the learning curve. Whether RLP provides any significant advantage over LP for the experienced laparoscopist remains unclear. Methods: Ten consecutive cases each of transperitoneal RLP and LP performed by a single surgeon were compared prospectively with respect to surgical times and perioperative outcomes. Cost assessment was performed by sensitivity analysis using a mathematical cost model incorporating operative time, anesthesia fees, consumables, and capital equipment depreciation. Results: The RLP and LP groups had statistically indistinguishable demographics, pathology, and similar perioperative outcomes. Mean operative and total room time for RLP was significantly longer than LP by 19.5 and 39.0 minutes, respectively. RLP was much more costly than LP (2.7 times), due to longer operative time, increased consumables costs, and depreciation of the costly da Vinci system. However, even if depreciation was eliminated, RLP was still 1.7 times as costly as LP. One-way sensitivity analysis showed that LP operative time must increase to almost 6.5 hours for it to become cost equivalent to RLP. Conclusions: For the experienced laparoscopist, application of the da Vinci robot resulted in no significant clinical advantage and added substantial cost to transperitoneal laparoscopic dismembered pyeloplasty. PMID:16552199

  13. A prospective comparison of robotic and laparoscopic pyeloplasty.

    PubMed

    Link, Richard E; Bhayani, Sam B; Kavoussi, Louis R

    2006-04-01

    To determine whether robotic-assisted pyeloplasty (RLP) has any significant clinical or cost advantages over laparoscopic pyeloplasty (LP) for surgeons already facile with intracorporeal suturing. LP has become an established management approach for primary ureteropelvic junction obstruction. More recently, the da Vinci robot has been applied to this procedure (RLP) in an attempt to shorten the learning curve. Whether RLP provides any significant advantage over LP for the experienced laparoscopist remains unclear. Ten consecutive cases each of transperitoneal RLP and LP performed by a single surgeon were compared prospectively with respect to surgical times and perioperative outcomes. Cost assessment was performed by sensitivity analysis using a mathematical cost model incorporating operative time, anesthesia fees, consumables, and capital equipment depreciation. The RLP and LP groups had statistically indistinguishable demographics, pathology, and similar perioperative outcomes. Mean operative and total room time for RLP was significantly longer than LP by 19.5 and 39.0 minutes, respectively. RLP was much more costly than LP (2.7 times), due to longer operative time, increased consumables costs, and depreciation of the costly da Vinci system. However, even if depreciation was eliminated, RLP was still 1.7 times as costly as LP. One-way sensitivity analysis showed that LP operative time must increase to almost 6.5 hours for it to become cost equivalent to RLP. For the experienced laparoscopist, application of the da Vinci robot resulted in no significant clinical advantage and added substantial cost to transperitoneal laparoscopic dismembered pyeloplasty.
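
    The one-way sensitivity analysis described above can be sketched with a toy per-case cost model: sweep LP operative time and find where the LP cost reaches the RLP cost. Every dollar figure below is a placeholder, not a value from the study.

    # Hedged sketch of a one-way sensitivity analysis on LP operative time in a
    # simple per-case cost model; all rates and amounts are placeholders.
    def case_cost(op_hours, or_rate=2000.0, anesthesia_rate=400.0,
                  consumables=500.0, depreciation=0.0):
        return op_hours * (or_rate + anesthesia_rate) + consumables + depreciation

    rlp_cost = case_cost(op_hours=4.0, consumables=2000.0, depreciation=1500.0)

    lp_hours = 3.0
    while case_cost(lp_hours) < rlp_cost:       # sweep LP operative time upward
        lp_hours += 0.05
    print(f"LP becomes cost-equivalent to RLP at about {lp_hours:.1f} h of operative time")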

  14. Expanding proteome coverage with orthogonal-specificity α-Lytic proteases

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Meyer, Jesse G.; Kim, Sangtae; Maltby, David A.

    2014-03-01

    Bottom-up proteomics studies traditionally involve proteome digestion with a single protease, trypsin. However, trypsin alone does not generate peptides that encompass the entire proteome. Alternative proteases have been explored, but most have specificity for charged amino acid side chains. Therefore, additional proteases that cleave at sequences complementary to trypsin may increase proteome coverage. We demonstrate the novel application of two proteases for bottom-up proteomics: wild type alpha-lytic protease (WaLP), and an active site mutant of WaLP, M190A alpha-lytic protease (MaLP). We assess several relevant factors including MS/MS fragmentation, peptide length, peptide yield, and protease specificity. By combining data from separate digestions with trypsin, LysC, WaLP, and MaLP, proteome coverage was increased 101% compared to trypsin digestion alone. To demonstrate how the gained sequence coverage can access additional PTM information, we show identification of a number of novel phosphorylation sites in the S. pombe proteome and include an illustrative example from the protein MPD2, wherein two novel sites are identified, one in a tryptic peptide too short to identify and the other in a sequence devoid of tryptic sites. The specificity of WaLP and MaLP for aliphatic amino acid side chains was particularly valuable for coverage of membrane protein sequences, which increased 350% when the data from trypsin, LysC, WaLP, and MaLP were combined.

  15. 78 FR 32294 - DeltaPoint Capital IV, L.P., DeltaPoint Capital IV (New York), L.P., License No. 02/02-0662,02/02...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-05-29

    ... Small Business Investment Act of 1958, as amended (``the Act''), in connection with the financing of a... SMALL BUSINESS ADMINISTRATION DeltaPoint Capital IV, L.P., DeltaPoint Capital IV (New York), L.P., License No. 02/02-0662,02/02-0661; Notice Seeking Exemption Under Section 312 of the Small Business...

  16. Production and characterization of a tributyrin esterase from Lactobacillus plantarum suitable for cheese lipolysis.

    PubMed

    Esteban-Torres, M; Mancheño, J M; de las Rivas, B; Muñoz, R

    2014-11-01

    Lactobacillus plantarum is a lactic acid bacterium that can be found during cheese ripening. Lipolysis of milk triacylglycerols to free fatty acids during cheese ripening has fundamental consequences on cheese flavor. In the present study, the gene lp_1760, encoding a putative esterase or lipase, was cloned and expressed in Escherichia coli BL21 (DE3) and the overproduced Lp_1760 protein was biochemically characterized. Lp_1760 hydrolyzed p-nitrophenyl esters of fatty acids from C2 to C16, with a preference for p-nitrophenyl butyrate. On triglycerides, Lp_1760 showed higher activity on tributyrin than on triacetin. Although optimal conditions for activity were 45°C and pH 7, Lp_1760 retains activity under conditions commonly found during cheese making and ripening. The Lp_1760 showed more than 50% activity at 5°C and exhibited thermal stability at high temperatures. Enzymatic activity was strongly inhibited by sodium dodecyl sulfate and phenylmethylsulfonyl fluoride. The Lp_1760 tributyrin esterase showed high activity in the presence of NaCl, lactic acid, and calcium chloride. The results suggest that Lp_1760 might be a useful tributyrin esterase to be used in cheese manufacturing. Copyright © 2014 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.

  17. Smooth Approximation l0-Norm Constrained Affine Projection Algorithm and Its Applications in Sparse Channel Estimation

    PubMed Central

    2014-01-01

    We propose a smooth approximation l0-norm constrained affine projection algorithm (SL0-APA) to improve the convergence speed and the steady-state error of affine projection algorithm (APA) for sparse channel estimation. The proposed algorithm ensures improved performance in terms of the convergence speed and the steady-state error via the combination of a smooth approximation l0-norm (SL0) penalty on the coefficients into the standard APA cost function, which gives rise to a zero attractor that promotes the sparsity of the channel taps in the channel estimation and hence accelerates the convergence speed and reduces the steady-state error when the channel is sparse. The simulation results demonstrate that our proposed SL0-APA is superior to the standard APA and its sparsity-aware algorithms in terms of both the convergence speed and the steady-state behavior in a designated sparse channel. Furthermore, SL0-APA is shown to have smaller steady-state error than the previously proposed sparsity-aware algorithms when the number of nonzero taps in the sparse channel increases. PMID:24790588
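
    A hedged sketch of the idea, an affine-projection update followed by a smooth-l0 zero attractor, is given below; the published SL0-APA may differ in the exact attractor and parameter choices, and the toy channel, step sizes and input signal here are assumptions.

    # Hedged sketch of an affine-projection update with a smooth-l0 zero attractor
    # for identifying a sparse FIR channel; parameters and data are illustrative.
    import numpy as np

    rng = np.random.default_rng(5)
    N, P, n_iter = 64, 4, 4000                 # taps, projection order, iterations
    h = np.zeros(N); h[[3, 17, 40]] = [1.0, -0.5, 0.3]    # sparse true channel
    mu, delta, rho, sigma = 0.5, 1e-3, 5e-4, 0.05

    w = np.zeros(N)
    x = rng.standard_normal(n_iter + N)        # input signal
    for k in range(P + N, n_iter):
        # affine-projection data matrix built from the last P regressors
        A = np.array([x[k - p - N + 1: k - p + 1][::-1] for p in range(P)])
        d = A @ h + 0.01 * rng.standard_normal(P)          # noisy desired samples
        e = d - A @ w
        w += mu * A.T @ np.linalg.solve(A @ A.T + delta * np.eye(P), e)
        # smooth-l0 zero attractor: gradient of sum(1 - exp(-w^2 / (2 sigma^2)))
        w -= rho * (w / sigma**2) * np.exp(-w**2 / (2 * sigma**2))

    print("largest estimated taps at indices:", np.sort(np.argsort(np.abs(w))[-3:]))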

  18. Fast and robust reconstruction for fluorescence molecular tomography via a sparsity adaptive subspace pursuit method.

    PubMed

    Ye, Jinzuo; Chi, Chongwei; Xue, Zhenwen; Wu, Ping; An, Yu; Xu, Han; Zhang, Shuang; Tian, Jie

    2014-02-01

    Fluorescence molecular tomography (FMT), as a promising imaging modality, can three-dimensionally locate the specific tumor position in small animals. However, effective and robust reconstruction of the fluorescent probe distribution in animals remains challenging. In this paper, we present a novel method based on sparsity adaptive subspace pursuit (SASP) for FMT reconstruction. Some innovative strategies, including subspace projection, the bottom-up sparsity adaptive approach, and a backtracking technique, are associated with the SASP method, which guarantees the accuracy, efficiency, and robustness of FMT reconstruction. Three numerical experiments based on a mouse-mimicking heterogeneous phantom have been performed to validate the feasibility of the SASP method. The results show that the proposed SASP method can achieve satisfactory source localization with a bias less than 1 mm; the method is much faster than mainstream reconstruction methods; and this approach is robust even under quite ill-posed conditions. Furthermore, we have applied this method to an in vivo mouse model, and the results demonstrate the feasibility of practical FMT application with the SASP method.
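
    For orientation, standard subspace pursuit with a fixed sparsity level K can be sketched as below; the SASP method described above additionally adapts the sparsity level and adds subspace-projection and backtracking refinements, which are omitted here. The sensing matrix and signal are toy stand-ins, not an FMT forward model.

    # Hedged sketch of standard subspace pursuit with fixed sparsity K; the SASP
    # method adapts K and adds further refinements not reproduced here.
    import numpy as np

    def subspace_pursuit(A, y, K, n_iter=20):
        m, n = A.shape
        support = np.argsort(np.abs(A.T @ y))[-K:]          # initial support
        x = np.zeros(n)
        for _ in range(n_iter):
            r = y - A[:, support] @ np.linalg.lstsq(A[:, support], y, rcond=None)[0]
            candidates = np.union1d(support, np.argsort(np.abs(A.T @ r))[-K:])
            coef = np.linalg.lstsq(A[:, candidates], y, rcond=None)[0]
            support = candidates[np.argsort(np.abs(coef))[-K:]]   # keep K best atoms
            x = np.zeros(n)
            x[support] = np.linalg.lstsq(A[:, support], y, rcond=None)[0]
        return x

    rng = np.random.default_rng(6)
    A = rng.standard_normal((60, 200))
    x_true = np.zeros(200); x_true[[10, 50, 120]] = [1.0, -2.0, 0.5]
    y = A @ x_true
    x_hat = subspace_pursuit(A, y, K=3)
    print("recovered support:", np.nonzero(np.abs(x_hat) > 1e-6)[0])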

  19. Sampling limits for electron tomography with sparsity-exploiting reconstructions.

    PubMed

    Jiang, Yi; Padgett, Elliot; Hovden, Robert; Muller, David A

    2018-03-01

    Electron tomography (ET) has become a standard technique for 3D characterization of materials at the nano-scale. Traditional reconstruction algorithms such as weighted back projection suffer from disruptive artifacts with insufficient projections. Popularized by compressed sensing, sparsity-exploiting algorithms have been applied to experimental ET data and show promise for improving reconstruction quality or reducing the total beam dose applied to a specimen. Nevertheless, theoretical bounds for these methods have been less explored in the context of ET applications. Here, we perform numerical simulations to investigate the performance of ℓ1-norm and total-variation (TV) minimization under various imaging conditions. From 36,100 different simulated structures, our results show that specimens with more complex structures generally require more projections for exact reconstruction. However, once sufficient data are acquired, dividing the beam dose over more projections provides no improvement, analogous to the traditional dose-fraction theorem. Moreover, a limited tilt range of ±75° or less can result in distorting artifacts in sparsity-exploiting reconstructions. The influence of optimization parameters on reconstructions is also discussed. Copyright © 2017 Elsevier B.V. All rights reserved.
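
    A minimal sketch of TV-regularized reconstruction is given below, using a random matrix as a stand-in for the tomographic projector and plain gradient descent on a smoothed (differentiable) total variation; the paper's simulations use proper projection geometries and constrained solvers, so everything here is illustrative only.

    # Hedged sketch: least-squares data fit plus smoothed anisotropic TV, minimized
    # by gradient descent. The random matrix A is a stand-in for the projector.
    import numpy as np

    rng = np.random.default_rng(7)
    n = 16
    phantom = np.zeros((n, n)); phantom[4:12, 5:11] = 1.0    # piecewise-constant object
    A = rng.standard_normal((120, n * n)) / np.sqrt(n * n)   # stand-in projection operator
    b = A @ phantom.ravel()

    lam, eps = 0.05, 0.01

    def smooth_tv_grad(img):
        # gradient of sum(sqrt(diff^2 + eps^2)) over horizontal and vertical differences
        dphi = lambda t: t / np.sqrt(t * t + eps * eps)
        h, v = np.diff(img, axis=1), np.diff(img, axis=0)
        g = np.zeros_like(img)
        g[:, :-1] -= dphi(h); g[:, 1:] += dphi(h)
        g[:-1, :] -= dphi(v); g[1:, :] += dphi(v)
        return g

    x = np.zeros((n, n))
    step = 1.0 / (np.linalg.norm(A, 2) ** 2 + 8.0 * lam / eps)
    for _ in range(2000):
        g = (A.T @ (A @ x.ravel() - b)).reshape(n, n) + lam * smooth_tv_grad(x)
        x -= step * g

    print("relative error:", np.linalg.norm(x - phantom) / np.linalg.norm(phantom))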

  20. Interferometric redatuming by sparse inversion

    NASA Astrophysics Data System (ADS)

    van der Neut, Joost; Herrmann, Felix J.

    2013-02-01

    Assuming that transmission responses are known between the surface and a particular depth level in the subsurface, seismic sources can be effectively mapped to this level by a process called interferometric redatuming. After redatuming, the obtained wavefields can be used for imaging below this particular depth level. Interferometric redatuming consists of two steps, namely (i) the decomposition of the observed wavefields into downgoing and upgoing constituents and (ii) a multidimensional deconvolution of the upgoing constituents with the downgoing constituents. While this method works in theory, sensitivity to noise and artefacts due to incomplete acquisition require a different formulation. In this letter, we demonstrate the benefits of formulating the two steps that undergird interferometric redatuming in terms of a transform-domain sparsity-promoting program. By exploiting compressibility of seismic wavefields in the curvelet domain, the method not only becomes robust with respect to noise but we are also able to remove certain artefacts while preserving the frequency content. Although we observe improvements when we promote sparsity in the redatumed data space, we expect better results when interferometric redatuming would be combined or integrated with least-squares migration with sparsity promotion in the image space.
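
    The second step, multidimensional deconvolution, can be sketched per frequency as damped least squares, as below; the letter's contribution, promoting curvelet-domain sparsity, is not reproduced here, and all wavefield arrays are toy stand-ins.

    # Hedged sketch of multidimensional deconvolution solved per frequency as
    # damped least squares: U(f) = X(f) D(f), solve for the redatumed response X(f).
    # All arrays are toy stand-ins; the sparsity-promoting formulation is omitted.
    import numpy as np

    rng = np.random.default_rng(8)
    n_rec, n_src, n_freq = 12, 16, 64
    D = rng.standard_normal((n_freq, n_rec, n_src)) + 1j * rng.standard_normal((n_freq, n_rec, n_src))
    X_true = 0.1 * (rng.standard_normal((n_freq, n_rec, n_rec)) + 1j * rng.standard_normal((n_freq, n_rec, n_rec)))
    U = X_true @ D                                  # upgoing wavefield per frequency

    eps = 1e-3
    X_est = np.empty_like(X_true)
    for f in range(n_freq):                         # damped least squares per frequency
        Df = D[f]
        X_est[f] = U[f] @ Df.conj().T @ np.linalg.inv(Df @ Df.conj().T + eps * np.eye(n_rec))

    print("relative error:", np.linalg.norm(X_est - X_true) / np.linalg.norm(X_true))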

  1. Benefits of Manometer in Non-Invasive Ventilatory Support.

    PubMed

    Lacerda, Rodrigo Silva; de Lima, Fernando Cesar Anastácio; Bastos, Leonardo Pereira; Fardin Vinco, Anderson; Schneider, Felipe Britto Azevedo; Luduvico Coelho, Yves; Fernandes, Heitor Gomes Costa; Bacalhau, João Marcus Ramos; Bermudes, Igor Matheus Simonelli; da Silva, Claudinei Ferreira; da Silva, Luiza Paterlini; Pezato, Rogério

    2017-12-01

    Introduction Effective ventilation during cardiopulmonary resuscitation (CPR) is essential to reduce morbidity and mortality rates in cardiac arrest. Hyperventilation during CPR reduces the efficiency of compressions and coronary perfusion. Problem How could ventilation in CPR be optimized? The objective of this study was to evaluate non-invasive ventilator support using different devices. The study compares the regularity and intensity of non-invasive ventilation during simulated, conventional CPR and ventilatory support using three distinct ventilation devices: a standard manual resuscitator, with and without airway pressure manometer, and an automatic transport ventilator. Student's t-test was used to evaluate statistical differences between groups. P values <.05 were regarded as significant. Peak inspiratory pressure during ventilatory support and CPR was significantly increased in the group with manual resuscitator without manometer when compared with the manual resuscitator with manometer support (MS) group or automatic ventilator (AV) group. The study recommends for ventilatory support the use of a manual resuscitator equipped with MS or AVs, due to the risk of reduction in coronary perfusion pressure and iatrogenic thoracic injury during hyperventilation found using manual resuscitator without manometer. Lacerda RS , de Lima FCA , Bastos LP , Vinco AF , Schneider FBA , Coelho YL , Fernandes HGC , Bacalhau JMR , Bermudes IMS , da Silva CF , da Silva LP , Pezato R . Benefits of manometer in non-invasive ventilatory support. Prehosp Disaster Med. 2017;32(6):615-620.

  2. Immunohistochemical expression of perforin in lichen planus lesions.

    PubMed

    Gaber, Mohamed Abdelwahed; Maraee, Alaa Hassan; Alsheraky, Dalia Rifaat; Azeem, Marwa Hussain Abdel

    2014-12-01

    Lichen planus (LP) is a chronic inflammatory papulosquamous skin disease characterized by epidermal basal cell damage and a particular band-like infiltrate, predominantly of T cells, in the upper dermis. It is characterized by the formation of colloid bodies representing apoptotic keratinocytes. The apoptotic process mediated by CD8+ cytotoxic T lymphocytes and natural killer cells mainly involves two distinct pathways: the perforin/granzyme pathway and the Fas/FasL pathway. So far, little is known regarding the role of perforin-mediated apoptosis in LP. The aim of this study is to examine the expression and distribution of perforin in the epidermis and dermis of lesional LP skin. Skin biopsy specimens from lesional skin of 31 patients with LP and 10 healthy persons were analyzed by immunohistochemistry. Significant accumulation of perforin+ cells was found in both the epidermis and dermis of LP lesions compared with healthy skin. Perforin expression was significantly upregulated in the epidermis of LP lesions. The accumulation of perforin+ cells in the epidermis of LP lesions suggests a potential role for perforin in the apoptosis of basal keratinocytes.

  3. The chemistry and pharmacology of Ligularia przewalskii: A review.

    PubMed

    Liu, Shi-Jun; Tang, Zhi-Shu; Liao, Zhi-Xin; Cui, Chun-Li; Liu, Hong-Bo; Liang, Yan-Ni; Zhang, Yu; Xu, Hong-Bo; Zhang, Dong-Bo; Zheng, Ya-Ting; Shi, Huan-Xian; Li, Shi-Ying

    2018-06-12

    Ligularia przewalskii (Maxim.) Diels (LP), called zhangyetuowu in Chinese, is generally found in moist forest areas in the western regions of China. The root, leaves and flowers of LP are used as a common traditional medicine in China. It has traditionally been used in herbal remedies for the treatment of haemoptysis, asthma, pulmonary phthisis, jaundice hepatitis, food poisoning, bronchitis, cough, fever, wounds, measles, carbuncle, swelling and phlegm diseases. This review aims to provide a systematic summary of LP and to reveal the correlation between its traditional uses and pharmacological activities, in order to provide updated, comprehensive and categorized information and to identify its therapeutic potential as a new medicine. The relevant data were retrieved using the keywords "Ligularia przewalskii", "phytochemistry", "pharmacology", "traditional uses" and "toxicity" in "Scopus", "Scifinder", "Springer", "Pubmed", "Wiley", "Web of Science", "China Knowledge Resource Integrated databases (CNKI)", "Ph.D." and "M.Sc. dissertations", and a hand-search was done to acquire peer-reviewed articles and reports about LP. The plant taxonomy was validated by the databases "The Plant List", "Flora Reipublicae Popularis Sinicae", "A Collection of Qinghai Economic Plants", "Inner Mongolia plant medicine Chi", Zhonghua-bencao and the Standard of Chinese herbal medicine in Gansu. Based on the traditional uses, the chemical nature and biological effects of LP have been the focus of research. In modern research, approximately seventy-six secondary metabolites, including thirty-eight terpenoids, nine benzofuran derivatives, seven flavonoids, ten sterols and others, have been isolated from this plant. They exhibit anti-inflammatory, antioxidative, anti-bacterial and anti-tumour effects, among others. Currently, there is no report on the toxicity of LP, but hepatotoxic pyrrolizidine alkaloids (HPA), which are potentially hepatotoxic, were first detected in LP by LC/MSn. The lung-moistening, cough-relieving and phlegm-resolving actions of the root of LP are attributed to the anti-inflammatory properties of flavonoids and terpenoids. The heat-clearing, dampness-removing and gallbladder-normalizing (to cure jaundice) actions of the flowers of LP are based on the anti-inflammatory, antioxidant and hepatoprotective properties of terpenoids, flavonoids and sterols. The Traditional Chinese Medicine (TCM) characteristics of LP (bitter flavour) corroborate its potent anti-inflammatory effects. In addition, the remarkable anti-inflammatory and antioxidant capacities of LP contribute to its anti-tumour and antitussive activities. Many conventional uses of LP have now been validated by modern pharmacological research. For future research, further phytochemical and biological studies need to be conducted on LP. In particular, the safety, mechanism of action and efficacy of LP should be clarified before clinical trials are begun. More in vivo experiments and clinical studies are encouraged to further clarify the relation between traditional uses and modern applications. Regarding the roots, leaves and flowers of LP, their chemical compositions and clinical effects should be compared. This information will be helpful in identifying the therapeutic potential and economic value of LP for its use as a new medicine in the future. Copyright © 2018 Elsevier B.V. All rights reserved.

  4. Discordant response of low-density lipoprotein cholesterol and lipoprotein(a) levels to monoclonal antibodies targeting proprotein convertase subtilisin/kexin type 9.

    PubMed

    Edmiston, Jonathan B; Brooks, Nathan; Tavori, Hagai; Minnier, Jessica; Duell, Bart; Purnell, Jonathan Q; Kaufman, Tina; Wojcik, Cezary; Voros, Szilard; Fazio, Sergio; Shapiro, Michael D

    Clinical trials testing proprotein convertase subtilisin/kexin type 9 inhibitors (PCSK9i) have demonstrated an unanticipated but significant lipoprotein (a) (Lp(a))-lowering effect, on the order of 25% to 30%. Although the 50% to 60% reduction in low-density lipoprotein (LDL)-cholesterol (LDL-C) achieved by PCSK9i is mediated through its effect on LDL receptor (LDLR) preservation, the mechanism for Lp(a) lowering is unknown. We sought to characterize the degree of concordance between LDL-C and Lp(a) lowering because of PCSK9i in a standard of care patient cohort. Participants were selected from our Center for Preventive Cardiology, an outpatient referral center in a tertiary academic medical center. Subjects were included in this study if they had (1) at least 1 measurement of LDL-C and Lp(a) before and after initiation of the PCSK9i; (2) baseline Lp(a) > 10 mg/dL; and (3) continued adherence to PCSK9i therapy. They were excluded if (1) they were undergoing LDL apheresis; (2) pre- or post-PCSK9i LDL-C or Lp(a) laboratory values were censored; or (3) subjects discontinued other lipid-modifying therapies. In total, 103 subjects were identified as taking a PCSK9i and 26 met all inclusion and exclusion criteria. Concordant response to therapy was defined as an LDL-C reduction >35% and an Lp(a) reduction >10%. The cohort consisted of 26 subjects (15 females, 11 males, mean age 63 ± 12 years). Baseline mean LDL-C and median Lp(a) levels were 167.4 ± 72 mg/dL and 81 mg/dL (interquartile range 38-136 mg/dL), respectively. The average percent reductions in LDL-C and Lp(a) were 52.8% (47.0-58.6) and 20.2% (12.2-28.1). The correlation between %LDL and %Lp(a) reduction was moderate, with a Spearman's correlation of 0.56 (P < .01). All subjects except for 1 had a protocol-appropriate LDL-C response to therapy. However, only 16 of the 26 (62%; 95% confidence interval 41%-82%) subjects had a protocol-concordant Lp(a) response. Although some subjects demonstrated negligible Lp(a) reduction associated with PCSK9i, there were some whose Lp(a) decreased as much as 60%. In this standard-of-care setting, we demonstrate moderate correlation but large discordance (∼40%) in these 2 lipid fractions in response to PCSK9i. The results suggest that pathways beyond the LDLR are responsible for Lp(a) lowering and indicate that PCSK9i have the potential to significantly lower Lp(a) in select patients, although confirmation in larger multicenter studies is required. Copyright © 2017 National Lipid Association. Published by Elsevier Inc. All rights reserved.

  5. A novel optical waveguide LP01/LP02 mode converter

    NASA Astrophysics Data System (ADS)

    Shen, Dongya; Wang, Changhui; Ma, Chuan; Mellah, Hakim; Zhang, Xiupu; Yuan, Hong; Ren, Wenping

    2018-07-01

    A novel optical waveguide LP01/LP02 mode converter using a combined bicone structure is proposed based on coupled-mode theory. It is composed of a cladding, a tapered core and a combined bicone structure. It is found that this mode converter can achieve an operating bandwidth of 1350-1700 nm, i.e. 350 nm, with a conversion efficiency of ∼90% (∼0.5 dB) and low crosstalk from other modes.

  6. Wavelength-independent all-fiber mode converters.

    PubMed

    Lai, K; Leon-Saval, S G; Witkowska, A; Wadsworth, W J; Birks, T A

    2007-02-15

    We have used two different photonic crystal fiber (PCF) techniques to make all-fiber mode converters. An LP(01) to LP(11) mode converter was made by the ferrule technique on a drawing tower, and an LP(01) to LP(02) mode converter was made by controlled hole inflation of an existing PCF on a tapering rig. Both devices rely on adiabatic propagation rather than resonant coupling, so high extinction was achieved across a wide wavelength range.

  7. LP01 to LP11 mode convertor based on side-polished small-core single-mode fiber

    NASA Astrophysics Data System (ADS)

    Liu, Yan; Li, Yang; Li, Wei-dong

    2018-03-01

    An all-fiber LP01-LP11 mode convertor based on side-polished small-core single-mode fibers (SMFs) is numerically demonstrated. The linearly polarized incident beam in one arm experiences a π phase shift through a fiber half waveplate, and the side-polished parts merge into an equivalent twin-core fiber (TCF) which spatially shapes the incident LP01 modes into the LP11 mode supported by the step-index few-mode fiber (FMF). Optimum conditions for the highest conversion efficiency are investigated using the beam propagation method (BPM), with an approximate efficiency as high as 96.7%. The proposed scheme can operate within a wide wavelength range from 1.3 μm to 1.7 μm with overall conversion efficiency greater than 95%. The effective mode area and coupling loss are also characterized in detail by the finite element method (FEM).

  8. Advanced Russian Mission Laplace-P to Study the Planetary System of Jupiter: Scientific Goals, Objectives, Special Features and Mission Profile

    NASA Astrophysics Data System (ADS)

    Martynov, M. B.; Merkulov, P. V.; Lomakin, I. V.; Vyatlev, P. A.; Simonov, A. V.; Leun, E. V.; Barabanov, A. A.; Nasyrov, A. F.

    2017-12-01

    The advanced Russian project Laplace-P is aimed at developing and launching two scientific spacecraft (SC), Laplace-P1 (LP1 SC) and Laplace-P2 (LP2 SC), designed for remote and in-situ studies of the system of Jupiter and its moon Ganymede. The LP1 and LP2 spacecraft carry an orbiter and a lander onboard, respectively. One of the orbiter's objectives is to map the surface of Ganymede from the artificial satellite's orbit and to acquire the data for the landing site selection. The main objective of the lander is to carry out in-situ investigations of Ganymede's surface. The paper describes the scientific goals and objectives of the mission, its special features, and the LP1 and LP2 mission profiles during all of the phases, from the launch to the landing on the surface of Ganymede.

  9. 65 nm LP/GP mix low cost platform for multi-media wireless and consumer applications

    NASA Astrophysics Data System (ADS)

    Tavel, B.; Duriez, B.; Gwoziecki, R.; Basso, M. T.; Julien, C.; Ortolland, C.; Laplanche, Y.; Fox, R.; Sabouret, E.; Detcheverry, C.; Boeuf, F.; Morin, P.; Barge, D.; Bidaud, M.; Biénacel, J.; Garnier, P.; Cooper, K.; Chapon, J. D.; Trouiller, Y.; Belledent, J.; Broekaart, M.; Gouraud, P.; Denais, M.; Huard, V.; Rochereau, K.; Difrenza, R.; Planes, N.; Marin, M.; Boret, S.; Gloria, D.; Vanbergue, S.; Abramowitz, P.; Vishnubhotla, L.; Reber, D.; Stolk, P.; Woo, M.; Arnaud, F.

    2006-04-01

    A complete 65 nm CMOS platform, called LP/GP Mix, has been developed employing thick oxide transistor (IO), Low Power (LP) and General Purpose (GP) devices on the same chip. Dedicated to wireless multi-media and consumer applications, this new triple gate oxide platform is low cost (+1 mask only) and saves over 35% of dynamic power with the use of the low operating voltage GP. The LP/GP mix shows competitive digital performance with a ring oscillator (FO = 1) speed equal to 7 ps per stage (GP) and 6T-SRAM static power lower than 10 pA/cell (LP). Compatible with mixed-signal design requirements, transistors show high voltage gain, low mismatch factor and low flicker noise. Moreover, to address mobile phone demands, excellent RF performance has been achieved with FT = 160 GHz for LP and 280 GHz for GP nMOS transistors.

  10. The influence of low protein diet on the testicular toxicity of di(2-ethylhexyl)phthalate.

    PubMed

    Tandon, R; Paramar, D; Singh, G B; Seth, P K; Srivastava, S P

    1992-12-01

    Oral administration of di(2-ethylhexyl)phthalate (DEHP) at 1000 mg/kg body weight to adult male albino rats maintained on low protein (LP) diet for 15 d resulted in a greater decrease in absolute and relative weights of the testis and in epididymal sperm count than in those rats maintained on a normal protein (NP) diet. A marked increase in the activity of testicular beta-glucuronidase and gamma-glutamyl transpeptidase (GGT) in the LP-fed animals suggested that LP diet enhanced the vulnerability of Sertoli cells towards DEHP. A greater decrease in the activity of testicular acid phosphatase, lactate dehydrogenase isoenzyme-X (LDH-X) and sorbitol dehydrogenase (SDH) in the LP-fed animals occurred in comparison to NP-fed animals. Degeneration of mature germinal cells in the LP-fed animals on exposure to DEHP suggested that LP diets enhance the susceptibility of the testis towards DEHP.

  11. Theoretical Study of Operational Limits of High-Speed Quantum Dot Lasers

    DTIC Science & Technology

    2012-09-09

    [Excerpt of the carrier rate equations from the report; the surrounding text is truncated in this record.] ... − v_{Ln,capt} n_L − b_1 B n_L p_L, (1)   b_1 ∂p_L/∂t = p_L^{QW}/τ_{Lp,esc} − v_{Lp,capt} p_L − b_1 B n_L p_L, (2) for free holes and electrons ... on the left-hand side of the OCL can be written as follows: p_L^{QW}/τ_{p,esc} = v_{Lp,capt} p_L + b_1 B n_L p_L. (28) Substituting p_L^{QW}/τ_{p,esc} − v_{Lp,capt} p_L = b_1 B n_L p_L in (6), we have B_{2D} n_L^{QW} p_L^{QW} + b_1 B n_L p_L = w_{Lp,tunn} p_{L,QW1} N_S f_p − w_{Lp,tunn} N_S (1 − f_p) p_L^{QW}. (29) As seen from (29), bimolecular recombination ...

  12. Distinguishing high surf from volcanic long-period earthquakes

    USGS Publications Warehouse

    Lyons, John; Haney, Matt; Fee, David; Paskievitch, John F.

    2014-01-01

    Repeating long-period (LP) earthquakes are observed at active volcanoes worldwide and are typically attributed to unsteady pressure fluctuations associated with fluid migration through the volcanic plumbing system. Nonvolcanic sources of LP signals include ice movement and glacial outburst floods, and the waveform characteristics and frequency content of these events often make them difficult to distinguish from volcanic LP events. We analyze seismic and infrasound data from an LP swarm recorded at Pagan volcano on 12–14 October 2013 and compare the results to ocean wave data from a nearby buoy. We demonstrate that although the events show strong similarity to volcanic LP signals, the events are not volcanic but due to intense surf generated by a passing typhoon. Seismo-acoustic methods allow for rapid distinction of volcanic LP signals from those generated by large surf and other sources, a critical task for volcano monitoring.

  13. Characterization of a cold-active esterase from Lactobacillus plantarum suitable for food fermentations.

    PubMed

    Esteban-Torres, María; Mancheño, José Miguel; de las Rivas, Blanca; Muñoz, Rosario

    2014-06-04

    Lactobacillus plantarum is a lactic acid bacterium that can be found in numerous fermented foods. Esterases from L. plantarum exert a fundamental role in food aroma. In the present study, the gene lp_2631 encoding a putative esterase was cloned and expressed in Escherichia coli BL21 (DE3), and the overproduced Lp_2631 protein was biochemically characterized. Lp_2631 exhibited optimal esterase activity at 20 °C and more than 90% of maximal activity at 5 °C, being the first cold-active esterase described in a lactic acid bacterium. Lp_2631 exhibited 40% of its maximal activity after 2 h of incubation at 65 °C. Lp_2631 also showed marked activity in the presence of compounds commonly found in food fermentations, such as NaCl, ethanol, or lactic acid. The results suggest that Lp_2631 might be a useful esterase for use in food fermentations.

  14. Near-infrared-fluorescence imaging of lymph nodes by using liposomally formulated indocyanine green derivatives.

    PubMed

    Toyota, Taro; Fujito, Hiromichi; Suganami, Akiko; Ouchi, Tomoki; Ooishi, Aki; Aoki, Akira; Onoue, Kazutaka; Muraki, Yutaka; Madono, Tomoyuki; Fujinami, Masanori; Tamura, Yutaka; Hayashi, Hideki

    2014-01-15

    Liposomally formulated indocyanine green (LP-ICG) has drawn much attention as a highly sensitive near-infrared (NIR)-fluorescence probe for tumors or lymph nodes in vivo. We synthesized ICG derivatives tagged with alkyl chains (ICG-Cn), and we examined NIR-fluorescence imaging of lymph nodes in the lower extremities of mice by using liposomally formulated ICG-Cn (LP-ICG-Cn) as well as conventional liposomally formulated ICG (LP-ICG) and ICG. Analysis with a noninvasive preclinical NIR-fluorescence imaging system revealed that LP-ICG-Cn accumulates in only the popliteal lymph node 1 h after injection into the footpad, whereas LP-ICG and ICG accumulate in the popliteal lymph node and other organs such as the liver. This result indicates that LP-ICG-Cn is a useful NIR-fluorescence probe for noninvasive in vivo bioimaging, especially for the sentinel lymph node. Copyright © 2013 Elsevier Ltd. All rights reserved.

  15. Hyperkeratotic Palmoplantar Lichen Planus in a child

    PubMed Central

    Madke, Bhushan; Gutte, Rameshwar; Doshi, Bhavana; Khopkar, Uday

    2013-01-01

    Lichen planus (LP) is a common idiopathic inflammatory disorder that affects the flexor aspect of the wrists, the legs, and the oral and genital mucosa. Depending upon the site of involvement, LP can be divided into mucosal, nail, scalp, or palmoplantar types. Palmoplantar LP can pose a diagnostic problem to the clinician as it resembles common dermatoses like psoriasis, verruca, corn, calluses, lichenoid drug eruption, and papular syphilide of secondary syphilis. In this case report, we describe a 4-year-old male child who presented with highly pruritic erythematous to violaceous hyperkeratotic papules and plaques on his palms and soles. Typical LP papules were noted on the upper back. Histopathology of the papular lesion showed features of LP. Dermatoscopy of a papule from the back showed the characteristic Wickham striae. We report this rare involvement of palm and soles in a case of childhood LP. PMID:24082195

  16. Immunogenicity, Safety, and Tolerability of Bivalent rLP2086 Meningococcal Group B Vaccine Administered Concomitantly With Diphtheria, Tetanus, and Acellular Pertussis and Inactivated Poliomyelitis Vaccines to Healthy Adolescents.

    PubMed

    Vesikari, Timo; Wysocki, Jacek; Beeslaar, Johannes; Eiden, Joseph; Jiang, Qin; Jansen, Kathrin U; Jones, Thomas R; Harris, Shannon L; O'Neill, Robert E; York, Laura J; Perez, John L

    2016-06-01

    Concomitant administration of bivalent rLP2086 (Trumenba [Pfizer, Inc]) and diphtheria, tetanus, and acellular pertussis and inactivated poliovirus vaccine (DTaP/IPV) was immunologically noninferior to DTaP/IPV and saline and was safe and well tolerated. Bivalent rLP2086 elicited robust and broad bactericidal antibody responses to diverse Neisseria meningitidis serogroup B strains expressing antigens heterologous to vaccine antigens after 2 and 3 vaccinations. Bivalent rLP2086, a Neisseria meningitidis serogroup B (MnB) vaccine (Trumenba [Pfizer, Inc]) recently approved in the United States to prevent invasive MnB disease in individuals aged 10-25 years, contains recombinant subfamily A and B factor H binding proteins (fHBPs). This study evaluated the coadministration of Repevax (diphtheria, tetanus, and acellular pertussis and inactivated poliovirus vaccine [DTaP/IPV]) (Sanofi Pasteur MSD, Ltd) and bivalent rLP2086. Healthy adolescents aged ≥11 to <19 years received bivalent rLP2086 + DTaP/IPV or saline + DTaP/IPV at month 0 and bivalent rLP2086 or saline at months 2 and 6. The primary end point was the proportion of participants in whom prespecified levels of antibodies to DTaP/IPV were achieved 1 month after DTaP/IPV administration. Immune responses to bivalent rLP2086 were measured with serum bactericidal assays using human complement (hSBAs) against 4 MnB test strains expressing fHBP subfamily A or B proteins different from the vaccine antigens. Participants were randomly assigned to receive bivalent rLP2086 + DTaP/IPV (n = 373) or saline + DTaP/IPV (n = 376). Immune responses to DTaP/IPV in participants who received bivalent rLP2086 + DTaP/IPV were noninferior to those in participants who received saline + DTaP/IPV. The proportions of bivalent rLP2086 + DTaP/IPV recipients with prespecified seroprotective hSBA titers to the 4 MnB test strains were 55.5%-97.3% after vaccination 2 and 81.5%-100% after vaccination 3. The administration of bivalent rLP2086 was well tolerated and resulted in few serious adverse events. Immune responses to DTaP/IPV administered with bivalent rLP2086 to adolescents were noninferior to DTaP/IPV administered alone. Bivalent rLP2086 was well tolerated and elicited substantial and broad bactericidal responses to diverse MnB strains in a high proportion of recipients after 2 vaccinations, and these responses were further enhanced after 3 vaccinations. ClinicalTrials.gov identifier NCT01323270. © The Author 2016. Published by Oxford University Press on behalf of the Pediatric Infectious Diseases Society.

  17. Multilevel regularized regression for simultaneous taxa selection and network construction with metagenomic count data.

    PubMed

    Liu, Zhenqiu; Sun, Fengzhu; Braun, Jonathan; McGovern, Dermot P B; Piantadosi, Steven

    2015-04-01

    Identifying disease associated taxa and constructing networks for bacteria interactions are two important tasks usually studied separately. In reality, differentiation of disease associated taxa and correlation among taxa may affect each other. One genus can be differentiated because it is highly correlated with another highly differentiated one. In addition, network structures may vary under different clinical conditions. Permutation tests are commonly used to detect differences between networks in distinct phenotypes, and they are time-consuming. In this manuscript, we propose a multilevel regularized regression method to simultaneously identify taxa and construct networks. We also extend the framework to allow construction of a common network and differentiated network together. An efficient algorithm with dual formulation is developed to deal with the large-scale n ≪ m problem with a large number of taxa (m) and a small number of samples (n) efficiently. The proposed method is regularized with a general Lp (p ∈ [0, 2]) penalty and models the effects of taxa abundance differentiation and correlation jointly. We demonstrate that it can identify both true and biologically significant genera and network structures. Software MLRR in MATLAB is available at http://biostatistics.csmc.edu/mlrr/. Supplementary data are available at Bioinformatics online. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
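
    To make the role of the general Lp penalty more concrete, the following is a minimal sketch of Lp-regularized least squares solved by iteratively reweighted ridge regression, a common surrogate for non-convex Lp penalties. It is an illustrative approximation under stated assumptions, not the dual-formulation algorithm implemented in the MLRR software, and all variable names are hypothetical.

    ```python
    import numpy as np

    def lp_regularized_regression(X, y, lam=1.0, p=1.0, n_iter=50, eps=1e-8):
        """Approximately solve  min_w ||y - X w||^2 + lam * sum_j |w_j|^p,  0 < p <= 2,
        by iteratively reweighted ridge regression (a standard surrogate for the
        non-convex Lp penalty; not the dual algorithm of the MLRR paper)."""
        n, m = X.shape
        w = np.linalg.lstsq(X, y, rcond=None)[0]     # least-squares warm start
        for _ in range(n_iter):
            # Quadratic majorizer of |w_j|^p around the current iterate.
            d = (np.abs(w) ** 2 + eps) ** (p / 2.0 - 1.0)
            w = np.linalg.solve(X.T @ X + lam * (p / 2.0) * np.diag(d), X.T @ y)
        return w
    ```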

  18. Enhancing sparsity of Hermite polynomial expansions by iterative rotations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Xiu; Lei, Huan; Baker, Nathan A.

    2016-02-01

    Compressive sensing has become a powerful addition to uncertainty quantification in recent years. This paper identifies new bases for random variables through linear mappings such that the representation of the quantity of interest is sparser in the new basis functions associated with the new random variables. This increased sparsity improves both the efficiency and accuracy of the compressive sensing-based uncertainty quantification method. Specifically, we consider rotation-based linear mappings which are determined iteratively for Hermite polynomial expansions. We demonstrate the effectiveness of the new method with applications in solving stochastic partial differential equations and high-dimensional (O(100)) problems.
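
    For orientation, the toy example below shows the baseline compressive-sensing step that such methods build on: recovering a sparse vector of Hermite polynomial coefficients from fewer samples than unknowns via l1-regularized least squares. The sample sizes, polynomial order and regularization weight are illustrative assumptions, and the iterative rotation of the random variables itself is not reproduced here.

    ```python
    import numpy as np
    from numpy.polynomial.hermite_e import hermeval
    from sklearn.linear_model import Lasso

    rng = np.random.default_rng(0)

    # Toy 1-D example: a function that is sparse in probabilists' Hermite
    # polynomials of a standard-normal input, observed at only 8 random samples.
    xi = rng.standard_normal(8)
    y = 2.0 * hermeval(xi, [0, 0, 1]) - 0.5 * hermeval(xi, [0, 0, 0, 0, 1])   # 2*He2 - 0.5*He4

    # Measurement matrix: columns are He_0 ... He_10 evaluated at the samples,
    # so the linear system is underdetermined (11 unknowns, 8 samples).
    order = 10
    Psi = np.column_stack([hermeval(xi, np.eye(order + 1)[k]) for k in range(order + 1)])

    # l1-regularized least squares recovers (approximately) the sparse coefficients.
    coef = Lasso(alpha=1e-2, fit_intercept=False, max_iter=100_000).fit(Psi, y).coef_
    print(np.round(coef, 2))
    ```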

  19. Assessment of User Home Location Geoinference Methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Harrison, Joshua J.; Bell, Eric B.; Corley, Courtney D.

    2015-05-29

    This study presents an assessment of multiple approaches to determine the home and/or other important locations to a Twitter user. In this study, we present a unique approach to the problem of geotagged data sparsity in social media when performing geoinferencing tasks. Given the sparsity of explicitly geotagged Twitter data, the ability to perform accurate and reliable user geolocation from a limited number of geotagged posts has proven to be quite useful. In our survey, we have achieved accuracy rates of over 86% in matching Twitter user profile locations with their inferred home locations derived from geotagged posts.

  20. Proteins from latex of Calotropis procera prevent septic shock due to lethal infection by Salmonella enterica serovar Typhimurium.

    PubMed

    Lima-Filho, José V; Patriota, Joyce M; Silva, Ayrles F B; Filho, Nicodemos T; Oliveira, Raquel S B; Alencar, Nylane M N; Ramos, Márcio V

    2010-06-16

    The latex of Calotropis procera has been used in traditional medicine to treat different inflammatory diseases. The anti-inflammatory activity of latex proteins (LP) has been well documented using different inflammatory models. In this work the anti-inflammatory protein fraction was evaluated in a true inflammatory process by inducing a lethal experimental infection in the murine model caused by Salmonella enterica subsp. enterica serovar Typhimurium. Experimental Swiss mice were given 0.2 ml of LP (30 or 60 mg/kg) by the intraperitoneal route 24 h before or after a lethal challenge (0.2 ml) containing 10(6) CFU/ml of Salmonella Typhimurium using the same route of administration. All the control animals succumbed to infection within 6 days. When given before the bacterial inoculum, LP prevented the death of mice, which remained under observation until day 28. Moreover, LP-treated animals exhibited only discrete signs of infection, which later disappeared. The LP fraction was also protective when given orally or by the subcutaneous route. Histopathological examination revealed that necrosis and inflammatory infiltrates were similar in both the experimental and control groups on days 1 and 5 after infection. LP activity did not clear Salmonella Typhimurium, which was still present in the spleen at approximately 10(4) cells/g of organ 28 days after challenge. However, no bacteria were detected in the liver at this stage. LP did not inhibit bacterial growth in culture medium at all. In the early stages of infection the bacterial population was similar in organs and in the peritoneal fluid but drastically reduced in blood. Titration of TNF-alpha in serum revealed no differences between experimental and control groups on days 1 and 5 after infection, while IL-12 was only discretely diminished in the serum of experimental animals on day 5. Moreover, cultured macrophages treated with LP and stimulated by LPS released significantly less IL-1beta. LP-treated mice did not succumb to septic shock when submitted to a lethal infection. LP did not exhibit in vitro bactericidal activity. It is thought that the protection of LP-treated mice against Salmonella Typhimurium possibly involves down-regulation of pro-inflammatory cytokines (other than TNF-alpha). LP inhibited IL-1beta release in cultured macrophages and discretely reduced IL-12 in the serum of animals given LP. The results reported here support the folk use of the latex to treat skin infections by topical application. Copyright 2010 Elsevier Ireland Ltd. All rights reserved.

  1. Do We Know When and How to Lower Lipoprotein(a)?

    PubMed

    Joshi, Parag H; Krivitsky, Eric; Qian, Zhen; Vazquez, Gustavo; Voros, Szilard; Miller, Joseph

    2010-08-01

    Currently, there are significant data to support a link between lipoprotein(a) [Lp(a)] levels and cardiovascular risk. However, there has not been a clinical trial examining the effects of Lp(a) reduction on cardiovascular risk in a primary prevention population. Until such a trial is conducted, current consensus supports using an Lp(a) level above the 75th percentile for race and gender as a risk stratification tool to target more aggressive low-density lipoprotein cholesterol (LDL-C) or apolipoprotein B (apoB) goals. Therefore, Lp(a) measurements should be considered in the following patients: individuals with early-onset vascular disease determined by clinical presentation or subclinical imaging, intermediate and high Framingham risk patients with a family history of premature coronary disease, and low Framingham risk patients with a family history and low high-density lipoprotein cholesterol (HDL-C) levels. Once LDL-C goals are met, Lp(a) levels may be taken into account in selecting secondary agents to reach more aggressive secondary goals, including non-HDL-C and apoB. To achieve Lp(a) reduction, one evidence-based approach is to initiate therapy with low-dose aspirin and extended-release niacin, titrated from 0.5 g up to 2 g over several weeks. If higher doses of niacin are desired, crystalline niacin allows for titration to a dosage as high as 2 g three times a day; however, the flushing side effect usually is quite prominent. Although hormone replacement therapy (HRT) has been shown to lower Lp(a), there are no indications for using HRT for primary or secondary prevention; therefore, we do not advocate initiating it solely for Lp(a) reduction. LDL apheresis is an option to lower LDL-C levels in patients with homozygous familial hypercholesterolemia who are not responsive to medical therapy. Although it does lower Lp(a), there is no treatment indication for this. A recent study supports the cholesterol absorption inhibitor ezetimibe's ability to lower Lp(a), a finding that deserves further investigation as it has not been previously reported in multiple ezetimibe trials. Additionally, the apoB messenger RNA antisense therapy mipomersen currently is in phase 3 trials and may serve as a potential inhibitor of Lp(a) production. Ultimately, more trial evidence is needed to determine whether lowering Lp(a) actually reduces cardiovascular risk, although this may be difficult to isolate without a specific Lp(a)-lowering therapy.

  2. Lyophilized plasma attenuates vascular permeability, inflammation and lung injury in hemorrhagic shock.

    PubMed

    Pati, Shibani; Peng, Zhanglong; Wataha, Katherine; Miyazawa, Byron; Potter, Daniel R; Kozar, Rosemary A

    2018-01-01

    In severe trauma and hemorrhage the early and empiric use of fresh frozen plasma (FFP) is associated with decreased morbidity and mortality. However, utilization of FFP comes with the significant burden of shipping and storage of frozen blood products. Dried or lyophilized plasma (LP) can be stored at room temperature, transported easily, and reconstituted rapidly, with ready availability in remote and austere environments. We have previously demonstrated that FFP mitigates the endothelial injury that ensues after hemorrhagic shock (HS). In the current study, we sought to determine whether LP has properties similar to FFP in its ability to modulate endothelial dysfunction in vitro and in vivo. Single-donor LP was compared to single-donor FFP using the following measures of endothelial cell (EC) function in vitro: permeability and transendothelial monolayer resistance; adherens junction preservation; and leukocyte-EC adhesion. In vivo, using a model of murine HS, LP and FFP were compared in measures of HS-induced pulmonary vascular inflammation and edema. Both in vitro and in vivo, in all measures of EC function, LP demonstrated effects similar to FFP. Both FFP and LP similarly reduced EC permeability, increased transendothelial resistance, decreased leukocyte-EC binding and preserved adherens junctions. In vivo, LP and FFP both comparably reduced pulmonary injury, inflammation and vascular leak. Both FFP and LP have similarly potent protective effects on the vascular endothelium in vitro and on lung function in vivo following hemorrhagic shock. These data support the further development of LP as an effective plasma product for human use after trauma and hemorrhagic shock.

  3. Unhealthy and healthy food consumption inside and outside of the school by pre-school and elementary school Mexican children in Tijuana, Mexico.

    PubMed

    Vargas, Lilian; Jiménez-Cruz, Arturo; Bacardí-Gascón, Montserrat

    2013-12-01

    Food from lunch packs (LP) or food available inside and outside of school can play an important role in the development of obesity. The purpose of this study was to evaluate the LP of elementary school (ES) and preschool children (PS) in Tijuana, and the foods available to them inside and outside of school. Eight public schools participated in the study. A random sample of all the groups from a school district was conducted. A questionnaire was administered to children in first through sixth grade (ES) and to the parents of PS. LP and food available inside and outside of the school were classified as healthy, unhealthy, and adequate according to the guidelines set forth by the Secretariat of Health. A total of 2,716 questionnaires were administered and the content of 648 LP was assessed. It was observed that 99% of PS had LP prepared at home, a higher percentage than ES. None of the LP of the ES was classified as healthy, and 1% was classified as adequate. Among PS, 21% of the LP were classified as healthy and 6% as adequate. More than half of the children recognized the brand name of foods high in fat, salt, and added sugar available inside and outside of school grounds. Most of the LP of ES and PS and the foods available inside and outside of school were unhealthy and inadequate. A strategy to prevent the availability of unhealthy and inadequate food in LP and foods available inside and outside schools is recommended.

  4. Genome-wide Linkage Analysis for Identifying Quantitative Trait Loci Involved in the Regulation of Lipoprotein a (Lpa) Levels

    PubMed Central

    López, Sonia; Buil, Alfonso; Ordoñez, Jordi; Souto, Juan Carlos; Almasy, Laura; Lathrop, Mark; Blangero, John; Blanco-Vaca, Francisco; Fontcuberta, Jordi; Soria, José Manuel

    2009-01-01

    Lipoprotein Lp(a) levels are highly heritable and are associated with cardiovascular risk. We performed a genome-wide linkage analysis to delineate the genomic regions that influence the concentration of Lp(a) in families from the Genetic Analysis of Idiopathic Thrombophilia (GAIT) Project. Lp(a) levels were measured in 387 individuals belonging to 21 extended Spanish families. A total of 485 DNA microsatellite markers were genotyped to provide a 7.1 cM genetic map. A variance component linkage method was used to evaluate linkage and to detect quantitative trait loci (QTLs). The main QTL that showed strong evidence of linkage with Lp(a) levels was located at the structural gene for apo(a) on Chromosome 6 (LOD score=13.8). Interestingly, another QTL influencing Lp(a) concentration was located on Chromosome 2 with a LOD score of 2.01. This region contains several candidate genes. One of them is the tissue factor pathway inhibitor (TFPI), which has antithrombotic action and also has the ability to bind lipoproteins. However, quantitative trait association analyses performed with 12 SNPs in TFPI gene revealed no association with Lp(a) levels. Our study confirms previous results on the genetic basis of Lp(a) levels. In addition, we report a new QTL on Chromosome 2 involved in the quantitative variation of Lp(a). These data should serve as the basis for further detection of candidate genes and to elucidate the relationship between the concentration of Lp(a) and cardiovascular risk. PMID:18560444

  5. Lymphatic-targeted cationic liposomes: a robust vaccine adjuvant for promoting long-term immunological memory.

    PubMed

    Wang, Ce; Liu, Peng; Zhuang, Yan; Li, Ping; Jiang, Boling; Pan, Hong; Liu, Lanlan; Cai, Lintao; Ma, Yifan

    2014-09-22

    Although retaining antigens at the injection site (the so-called "depot effect") is an important strategy for vaccine development, increasing evidence showed that lymphatic-targeted vaccine delivery with liposomes could be a promising approach for improving vaccine efficacy. However, it remains unclear whether antigen depot or lymphatic targeting would benefit long-term immunological memory, a major determinant of vaccine efficacy. In the present study, OVA antigen was encapsulated with DOTAP cationic liposomes (LP) or DOTAP-PEG-mannose liposomes (LP-Man) to generate depot or lymphatic-targeted liposome vaccines, respectively. The result of in vivo imaging showed that LP mostly accumulated near the injection site, whereas LP-Man not only effectively accumulated in draining lymph nodes (LNs) and the spleen, but also enhanced the uptake by resident antigen-presenting cells. Although LP vaccines with depot effect induced anti-OVA IgG more potently than LP-Man vaccines did on day 40 after priming, they failed to mount an effective B-cell memory response upon OVA re-challenge after three months. In contrast, lymphatic-targeted LP-Man vaccines elicited sustained antibody production and robust recall responses three months after priming, suggesting lymphatic targeting rather than antigen depot promoted the establishment of long-term memory responses. The enhanced long-term immunological memory by LP-Man was attributed to vigorous germinal center responses as well as increased Tfh cells and central memory CD4(+) T cells in the secondary lymphoid organs. Hence, lymphatic-targeted vaccine delivery with LP-Man could be an effective strategy to promote long-lasting immunological memory. Copyright © 2014 Elsevier Ltd. All rights reserved.

  6. Antisense inhibition of apolipoprotein (a) to lower plasma lipoprotein (a) levels in humans

    PubMed Central

    Graham, Mark J.; Viney, Nick; Crooke, Rosanne M.; Tsimikas, Sotirios

    2016-01-01

    Epidemiological, genetic association, and Mendelian randomization studies have provided strong evidence that lipoprotein (a) [Lp(a)] is an independent causal risk factor for CVD, including myocardial infarction, stroke, peripheral arterial disease, and calcific aortic valve stenosis. Lp(a) levels >50 mg/dl are highly prevalent (20% of the general population) and are overrepresented in patients with CVD and aortic stenosis. These data support the notion that Lp(a) should be a target of therapy for CVD event reduction and to reduce progression of aortic stenosis. However, effective therapies to specifically reduce plasma Lp(a) levels are lacking. Recent animal and human studies have shown that Lp(a) can be specifically targeted with second generation antisense oligonucleotides (ASOs) that inhibit apo(a) mRNA translation. In apo(a) transgenic mice, an apo(a) ASO reduced plasma apo(a)/Lp(a) levels and their associated oxidized phospholipid (OxPL) levels by 86 and 93%, respectively. In cynomolgus monkeys, a second generation apo(a) ASO, ISIS-APO(a)Rx, significantly reduced hepatic apo(a) mRNA expression and plasma Lp(a) levels by >80%. Finally, in a phase I study in normal volunteers, ISIS-APO(a)Rx ASO reduced Lp(a) levels and their associated OxPL levels up to 89 and 93%, respectively, with minimal effects on other lipoproteins. ISIS-APO(a)Rx represents the first specific and potent drug in clinical development to lower Lp(a) levels and may be beneficial in reducing CVD events and progression of calcific aortic valve stenosis. PMID:26538546

  7. Correlates of serum lipoprotein (A) in children and adolescents in the United States. The third National Health Nutrition and Examination Survey (NHANES-III)

    PubMed Central

    Obisesan, Thomas O; Aliyu, Muktar H; Adediran, Abayomi S; Bond, Vernon; Maxwell, Celia J; Rotimi, Charles N

    2004-01-01

    Objective: To determine the correlates of serum lipoprotein (a) (Lp(a)) in children and adolescents in the United States. Methods: Cross-sectional study using representative data from a US national sample of persons aged 4–19 years participating in The Third National Health Nutrition and Examination Survey (NHANES-III). Results: We observed ethnicity-related differences in levels of Lp(a) > 30 mg/dl, with values being markedly higher in African American (black) than in non-Hispanic white (white) and Mexican American children in the multivariate model (P < 0.001). Higher levels of Lp(a) > 30 mg/dl were associated with parental history of body mass index and residence in metro compared with nonmetro areas in black children, and with high birth weight in Mexican American children in the NHANES-III. In the entire group, total cholesterol (which included Lp(a)) and parental history of premature heart attack/angina before age 50 (P < 0.02) showed a consistent, independent, positive association with Lp(a). In subgroup analysis, this association was only evident in white (P = 0.04) and black (P = 0.05) children. However, no such consistent associations of Lp(a) were found with age, gender, or birth weight. Conclusion: Ethnicity-related differences in mean Lp(a) exist among children and adolescents in the United States, and parental history of premature heart attack/angina is significantly associated with levels of Lp(a) in children. Further research on the associations of Lp(a) levels in childhood with subsequent risk of atherosclerosis is needed. PMID:15601478

  8. Validation of ozone profile retrievals derived from the OMPS LP version 2.5 algorithm against correlative satellite measurements

    NASA Astrophysics Data System (ADS)

    Kramarova, Natalya A.; Bhartia, Pawan K.; Jaross, Glen; Moy, Leslie; Xu, Philippe; Chen, Zhong; DeLand, Matthew; Froidevaux, Lucien; Livesey, Nathaniel; Degenstein, Douglas; Bourassa, Adam; Walker, Kaley A.; Sheese, Patrick

    2018-05-01

    The Limb Profiler (LP) is a part of the Ozone Mapping and Profiler Suite launched on board the Suomi NPP satellite in October 2011. The LP measures solar radiation scattered from the atmospheric limb in the ultraviolet and visible spectral ranges between the surface and 80 km. These measurements of scattered solar radiances allow for the retrieval of ozone profiles from cloud tops up to 55 km. The LP started operational observations in April 2012. In this study we evaluate more than 5.5 years of ozone profile measurements from the OMPS LP processed with the new NASA GSFC version 2.5 retrieval algorithm. We provide a brief description of the key changes that have been implemented in this new algorithm, including a pointing correction, new cloud height detection, explicit aerosol correction and a reduction of the number of wavelengths used in the retrievals. The OMPS LP ozone retrievals have been compared with independent satellite profile measurements obtained from the Aura Microwave Limb Sounder (MLS), the Atmospheric Chemistry Experiment Fourier Transform Spectrometer (ACE-FTS) and the Odin Optical Spectrograph and InfraRed Imaging System (OSIRIS). We document observed biases and seasonal differences and evaluate the stability of the version 2.5 ozone record over 5.5 years. Our analysis indicates that the mean differences between LP and correlative measurements are well within the required ±10% between 18 and 42 km. In the upper stratosphere and lower mesosphere (> 43 km) LP tends to have a negative bias. We find larger biases in the lower stratosphere and upper troposphere, but LP ozone retrievals have improved significantly in version 2.5 compared to version 2 due to the implemented aerosol correction. In the northern high latitudes we observe larger biases between 20 and 32 km due to a remaining thermal sensitivity issue. Our analysis shows that LP ozone retrievals agree well with the correlative satellite observations in characterizing the vertical, spatial and temporal ozone distribution associated with natural processes, such as the seasonal cycle and quasi-biennial oscillations. We found a small positive drift of ~0.5% yr-1 in the LP ozone record against MLS and OSIRIS that is more pronounced at altitudes above 35 km. This pattern in the relative drift is consistent with a possible 100 m drift in the LP sensor pointing detected by one of our altitude-resolving methods.

  9. Recent progress in making protein microarray through BioLP

    NASA Astrophysics Data System (ADS)

    Yang, Rusong; Wei, Lian; Feng, Ying; Li, Xiujian; Zhou, Quan

    2017-02-01

    Biological laser printing (BioLP) is a promising biomaterial printing technique. It has the advantages of high resolution, high bioactivity, high printing frequency and small transported liquid amounts. In this paper, a BioLP device was designed and built, and protein microarrays were printed with it. It is found that both the laser intensity and the fluid layer thickness influence the microarrays obtained. In addition, two fluid layer coating methods are compared, and the results show that the blade coating method is better than the well-coating method in BioLP. A 0.76 pL protein microarray and a "NUDT"-patterned microarray were printed to demonstrate the printing capability of BioLP.

  10. Fast implementation for compressive recovery of highly accelerated cardiac cine MRI using the balanced sparse model.

    PubMed

    Ting, Samuel T; Ahmad, Rizwan; Jin, Ning; Craft, Jason; Serafim da Silveira, Juliana; Xue, Hui; Simonetti, Orlando P

    2017-04-01

    Sparsity-promoting regularizers can enable stable recovery of highly undersampled magnetic resonance imaging (MRI) data, promising to improve the clinical utility of challenging applications. However, lengthy computation time limits the clinical use of these methods, especially for dynamic MRI with its large corpus of spatiotemporal data. Here, we present a holistic framework that utilizes the balanced sparse model for compressive sensing and parallel computing to reduce the computation time of cardiac MRI recovery methods. We propose a fast, iterative soft-thresholding method to solve the resulting ℓ1-regularized least squares problem. In addition, our approach utilizes a parallel computing environment that is fully integrated with the MRI acquisition software. The methodology is applied to two formulations of the multichannel MRI problem: image-based recovery and k-space-based recovery. Using measured MRI data, we show that, for a 224 × 144 image series with 48 frames, the proposed k-space-based approach achieves a mean reconstruction time of 2.35 min, a 24-fold improvement compared with a reconstruction time of 55.5 min for the nonlinear conjugate gradient method, and the proposed image-based approach achieves a mean reconstruction time of 13.8 s. Our approach can be utilized to achieve fast reconstruction of large MRI datasets, thereby increasing the clinical utility of reconstruction techniques based on compressed sensing. Magn Reson Med 77:1505-1515, 2017. © 2016 International Society for Magnetic Resonance in Medicine.
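
    For orientation, the following is a minimal sketch of a fast iterative soft-thresholding (FISTA-style) loop for the ℓ1-regularized least-squares problem mentioned above. The encoding operator E/Et, the Lipschitz bound L and all parameter names are illustrative placeholders; the sketch does not reproduce the balanced sparse model details or the parallelized implementation described in the paper.

    ```python
    import numpy as np

    def soft(x, tau):
        # Magnitude-wise soft thresholding, so it also works for complex MRI data.
        mag = np.abs(x)
        return np.where(mag > tau, (1.0 - tau / np.maximum(mag, 1e-12)) * x, 0.0)

    def fista(E, Et, y, lam, L, n_iter=50):
        """FISTA for min_a 0.5*||E a - y||^2 + lam*||a||_1, where a are transform
        coefficients and E/Et are callables wrapping sampling, coil sensitivities
        and the sparsifying transform.  L is a Lipschitz bound on Et(E(.))."""
        a = Et(y) * 0.0                      # start from zero coefficients
        z, t = a.copy(), 1.0
        for _ in range(n_iter):
            a_new = soft(z - Et(E(z) - y) / L, lam / L)   # gradient step + shrinkage
            t_new = (1.0 + np.sqrt(1.0 + 4.0 * t**2)) / 2.0
            z = a_new + (t - 1.0) / t_new * (a_new - a)    # momentum extrapolation
            a, t = a_new, t_new
        return a
    ```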

  11. A Stochastic Model for Detecting Overlapping and Hierarchical Community Structure

    PubMed Central

    Cao, Xiaochun; Wang, Xiao; Jin, Di; Guo, Xiaojie; Tang, Xianchao

    2015-01-01

    Community detection is a fundamental problem in the analysis of complex networks. Recently, many researchers have concentrated on the detection of overlapping communities, where a vertex may belong to more than one community. However, most current methods require the number (or the size) of the communities as a priori information, which is usually unavailable in real-world networks. Thus, a practical algorithm should not only find the overlapping community structure, but also automatically determine the number of communities. Furthermore, it is preferable if this method is able to reveal the hierarchical structure of networks as well. In this work, we first propose a generative model that employs a nonnegative matrix factorization (NMF) formulation with an l2,1-norm regularization term, balanced by a resolution parameter. The NMF naturally provides an overlapping community structure by assigning soft membership variables to each vertex; the l2,1 regularization term is a group-sparsity technique that can automatically determine the number of communities by penalizing too many nonempty communities; and the resolution parameter enables us to explore the hierarchical structure of networks. Thereafter, we derive the multiplicative update rule to learn the model parameters, and offer a proof of its correctness. Finally, we test our approach on a variety of synthetic and real-world networks, and compare it with some state-of-the-art algorithms. The results validate the superior performance of our new method. PMID:25822148
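
    To give a flavour of how an l2,1 (group-sparsity) term can empty out superfluous communities in an NMF-style model, here is a minimal sketch using a symmetric factorization A ≈ H Hᵀ of the adjacency matrix and a heuristic multiplicative update. The model form, update rule and parameter names are illustrative assumptions, not the update rule derived in the paper.

    ```python
    import numpy as np

    def nmf_l21_communities(A, k, lam=0.1, n_iter=200, eps=1e-10, seed=0):
        """Overlapping-community sketch:  A ≈ H @ H.T  with an l2,1 penalty on the
        columns of H (one column per candidate community), so superfluous
        communities are driven towards zero.  Heuristic multiplicative update."""
        rng = np.random.default_rng(seed)
        n = A.shape[0]
        H = rng.random((n, k))                              # soft memberships, nonnegative
        for _ in range(n_iter):
            col_norms = np.sqrt((H ** 2).sum(axis=0)) + eps  # ||H_:,c||_2 per community
            numer = A @ H                                    # from the fit term
            denom = H @ (H.T @ H) + lam * H / col_norms + eps
            H *= numer / denom                               # keeps H nonnegative
        return H   # near-zero columns correspond to unused communities
    ```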

  12. Lp-mixed affine surface area

    NASA Astrophysics Data System (ADS)

    Wang, Weidong; Leng, Gangsong

    2007-11-01

    Based on the three notions of mixed affine surface area, Lp-affine surface area and Lp-mixed affine surface area proposed by Lutwak, in this article we introduce the concept of the ith Lp-mixed affine surface area, such that the first and second notions of Lutwak are its special cases. Further, some of Lutwak's results are extended in association with this concept. In addition, applying this concept, we establish an inequality for the volumes and dual quermassintegrals of a class of star bodies.

  13. Interactive effect of inoculant and dried jujube powder on the fermentation quality and nitrogen fraction of alfalfa silage.

    PubMed

    Tian, Jipeng; Li, Zhenzhen; Yu, Zhu; Zhang, Qing; Li, Xujiao

    2017-04-01

    The interactive effect of inoculants and dried jujube powder (DJP) on the fermentation and nitrogen fractions (PA, PB1, PB2, PB3 and PC fractions) of alfalfa silage was investigated. Three Lactobacillus plantarum inoculants (LP1, LP2 or LP3) were used. The DJP was added at rates of 0, 3, 6, 9, 12 or 15% of the whole fresh forage. The combination of DJP and inoculants decreased the pH value and ammonia nitrogen content and increased the PC portion. As the DJP ratio increased, there was a peak in the lactic acid : acetic acid ratio (at the 12% DJP ratio) and in the PB2 fraction (at the 9% DJP ratio), while the PA content decreased linearly. The LP1 and LP2 treatments had the highest lactic acid content. Inoculants decreased the PB1 portion of true protein. The LP1-treated silage had the highest acetic acid content with the lowest lactic acid : acetic acid ratio, and had lower PB3 and PC and higher PB2 than the LP2- or LP3-treated silages. The results showed that the application of DJP or inoculants had a positive effect on the fermentation, nutrition and N fraction value of high-moisture alfalfa silages, and that the combination of DJP and inoculants preserved the silage best. © 2016 Japanese Society of Animal Science.

  14. Lipoprotein(a) Levels in Patients With Abdominal Aortic Aneurysm.

    PubMed

    Kotani, Kazuhiko; Sahebkar, Amirhossein; Serban, Maria-Corina; Ursoniu, Sorin; Mikhailidis, Dimitri P; Mariscalco, Giovanni; Jones, Steven R; Martin, Seth; Blaha, Michael J; Toth, Peter P; Rizzo, Manfredi; Kostner, Karam; Rysz, Jacek; Banach, Maciej

    2017-02-01

    Circulating markers relevant to the development of abdominal aortic aneurysm (AAA) are currently required. Lipoprotein(a), Lp(a), is considered a candidate marker associated with the presence of AAA. The present meta-analysis aimed to evaluate the association between circulating Lp(a) levels and the presence of AAA. The PubMed-based search was conducted up to April 30, 2015, to identify the studies focusing on Lp(a) levels in patients with AAA and controls. Quantitative data synthesis was performed using a random effects model, with standardized mean difference (SMD) and 95% confidence interval (CI) as summary statistics. Overall, 9 studies were identified. After a combined analysis, patients with AAA were found to have a significantly higher level of Lp(a) compared to the controls (SMD: 0.87, 95% CI: 0.41-1.33, P < .001). This result remained robust in the sensitivity analysis, and its significance was not influenced after omitting each of the included studies from the meta-analysis. The present meta-analysis confirmed a higher level of circulating Lp(a) in patients with AAA compared to controls. High Lp(a) levels can be associated with the presence of AAA, and Lp(a) may be a marker in screening for AAA. Further studies are needed to establish the clinical utility of measuring Lp(a) in the prevention and management of AAA.

  15. Antitumor effect of laticifer proteins of Himatanthus drasticus (Mart.) Plumel - Apocynaceae.

    PubMed

    Mousinho, Kristiana C; Oliveira, Cecília de C; Ferreira, José Roberto de O; Carvalho, Adriana A; Magalhães, Hemerson Iury F; Bezerra, Daniel P; Alves, Ana Paula N N; Costa-Lotufo, Letícia V; Pessoa, Claúdia; de Matos, Mayara Patrícia V; Ramos, Márcio V; Moraes, Manoel O

    2011-09-01

    Himatanthus drasticus (Mart.) Plumel - Apocynaceae is a medicinal plant popularly known as Janaguba. Its bark and latex have been used by the public for cancer treatment, among other medicinal uses. However, there is almost no scientific research report on its medicinal properties. The aim of this study was to investigate the antitumor effects of Himatanthus drasticus latex proteins (HdLP) in experimental models. The in vitro cytotoxic activity of the HdLP was determined on cultured tumor cells. HdLP was also tested for its ability to induce lysis of mouse erythrocytes. In vivo antitumor activity was assessed in two experimental models, Sarcoma 180 and Walker 256 carcinosarcoma. Additionally, its effects on the immunological system were also investigated. HdLP did not show any significant in vitro cytotoxic effect at experimental exposure levels. When intraperitoneally administered, HdLP was active against both in vivo experimental tumors. However, it was inactive by oral administration. The histopathological analysis indicates that the liver and kidney were only weakly affected by HdLP treatment. It was also demonstrated that HdLP acts as an immunomodulatory agent, increasing the production of OVA-specific antibodies. Additionally, it increased relative spleen weight and the incidence of megakaryocyte colonies. In summary, HdLP has some interesting anticancer activity that could be associated with its immunostimulating properties. Copyright © 2011. Published by Elsevier Ireland Ltd.

  16. What factors affect the prices of low-priced U.S. solar PV systems?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nemet, Gregory F.; O'Shaughnessy, Eric; Wiser, Ryan

    The price of solar PV systems has declined rapidly, yet some systems are priced much lower than others. This study explores the factors that determine prices in these low-priced (LP) systems. Using a data set of 42,611 residential-scale PV systems installed in the U.S. in 2013, we use quantile regressions to estimate the importance of factors affecting the installed prices for LP systems (those at the 10th percentile) in comparison to median-priced systems. We find that the value of solar to consumers (a variable that accounts for subsidies, electric rates, and PV generation levels) is associated with lower prices for LP systems but higher prices for median-priced systems. Conversely, systems installed in new home construction are associated with lower prices at the median but higher prices for LP. Other variables have larger price-reducing effects on LP than on median-priced systems: systems installed in Arizona and Florida, as well as commercial and thin-film systems. In contrast, the following have a smaller effect on prices for LP systems than for median-priced systems: tracking systems, self-installations, systems installed in Massachusetts, the system size, and installer experience. Furthermore, these results highlight the complex factors at play that lead to LP systems and shed light on how such LP systems can come about.
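
    As a sketch of the quantile-regression comparison described above (10th-percentile versus median-priced systems), the snippet below fits two conditional quantiles with statsmodels. The data frame and column names are hypothetical stand-ins for the installed-price data set, not the study's actual variables.

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(42)

    # Hypothetical stand-in for the installed-price data set: price in $/W,
    # system size in kW, and an indicator for new home construction.
    df = pd.DataFrame({
        "price_per_watt":   rng.normal(4.0, 0.8, 500),
        "size_kw":          rng.uniform(3.0, 12.0, 500),
        "new_construction": rng.integers(0, 2, 500),
    })

    # Conditional-quantile fits: the 10th percentile (low-priced systems)
    # versus the median, so covariate effects can differ between the two.
    model = smf.quantreg("price_per_watt ~ size_kw + new_construction", df)
    fit_lp = model.fit(q=0.10)
    fit_median = model.fit(q=0.50)
    print(fit_lp.params)
    print(fit_median.params)
    ```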

  17. What factors affect the prices of low-priced U.S. solar PV systems?

    DOE PAGES

    Nemet, Gregory F.; O'Shaughnessy, Eric; Wiser, Ryan; ...

    2017-08-09

    The price of solar PV systems has declined rapidly, yet some systems are priced much lower than others. This study explores the factors that determine prices in these low-priced (LP) systems. Using a data set of 42,611 residential-scale PV systems installed in the U.S. in 2013, we use quantile regressions to estimate the importance of factors affecting the installed prices for LP systems (those at the 10th percentile) in comparison to median-priced systems. We find that the value of solar to consumers (a variable that accounts for subsidies, electric rates, and PV generation levels) is associated with lower prices for LP systems but higher prices for median-priced systems. Conversely, systems installed in new home construction are associated with lower prices at the median but higher prices for LP. Other variables have larger price-reducing effects on LP than on median-priced systems: systems installed in Arizona and Florida, as well as commercial and thin-film systems. In contrast, the following have a smaller effect on prices for LP systems than for median-priced systems: tracking systems, self-installations, systems installed in Massachusetts, the system size, and installer experience. Furthermore, these results highlight the complex factors at play that lead to LP systems and shed light on how such LP systems can come about.

  18. Changes in root hydraulic conductivity facilitate the overall hydraulic response of rice (Oryza sativa L.) cultivars to salt and osmotic stress.

    PubMed

    Meng, Delong; Fricke, Wieland

    2017-04-01

    The aim of the present work was to assess the significance of changes in root AQP gene expression and hydraulic conductivity (Lp) in the regulation of water balance in two hydroponically-grown rice cultivars (Azucena, Bala) which differ in root morphology, stomatal regulation and aquaporin (AQP) isoform expression. Plants were exposed to NaCl (25 mM, 50 mM) and osmotic stress (5%, 10% PEG6000). Root Lp was determined for exuding root systems (osmotic forces driving water uptake; 'exudation-Lp') and transpiring plants (hydrostatic forces dominating; 'transpiration-Lp'). Gene expression was analysed by qPCR. Stress treatments caused a consistent and significant decrease in plant growth, transpirational water loss, stomatal conductance, shoot-to-root surface area ratio and root Lp. Comparison of exudation-Lp with transpiration-Lp supported a significant contribution of AQP-facilitated water flow to root water uptake. Changes in root Lp in response to treatments correlated much more strongly with root morphological characteristics, such as the number of main and lateral roots, the root-to-shoot surface area ratio and the plant transpiration rate, than with AQP gene expression. Changes in root Lp, involving AQP function, form an integral part of the plant hydraulic response to stress and facilitate changes in the root-to-shoot surface area ratio, transpiration and stomatal conductance. Copyright © 2017 Elsevier Masson SAS. All rights reserved.

  19. Galleria mellonella apolipophorin III - an apolipoprotein with anti-Legionella pneumophila activity.

    PubMed

    Zdybicka-Barabas, Agnieszka; Palusińska-Szysz, Marta; Gruszecki, Wiesław I; Mak, Paweł; Cytryńska, Małgorzata

    2014-10-01

    The greater wax moth Galleria mellonella has been exploited worldwide as an alternative model host for studying pathogenicity and virulence factors of different pathogens, including Legionella pneumophila, a causative agent of a severe form of pneumonia called Legionnaires' disease. An important role in the insect immune response against invading pathogens is played by apolipophorin III (apoLp-III), a lipid- and pathogen-associated molecular pattern-binding protein able to inhibit growth of some Gram-negative bacteria, including Legionella dumoffii. In the present study, the anti-L. pneumophila activity of G. mellonella apoLp-III and the effects of the interaction of this protein with L. pneumophila cells are demonstrated. Alterations in the bacterial cell surface occurring upon apoLp-III treatment, revealed by Fourier transform infrared (FTIR) spectroscopy and atomic force microscopy, are also documented. ApoLp-III interactions with purified L. pneumophila LPS, an essential virulence factor of the bacteria, were analysed using electrophoresis and immunoblotting with anti-apoLp-III antibodies. Moreover, FTIR spectroscopy was used to gain detailed information on the type of conformational changes in L. pneumophila LPS and G. mellonella apoLp-III induced by their mutual interactions. The results indicate that apoLp-III binding to components of the bacterial cell envelope, including LPS, may be responsible for the anti-L. pneumophila activity of G. mellonella apoLp-III. Copyright © 2014 Elsevier B.V. All rights reserved.

  20. Staphylococcus aureus lipoproteins trigger human corneal epithelial innate response through toll-like receptor-2

    PubMed Central

    Li, Qiong; Kumar, Ashok; Gui, Jian-Fang; Yu, Fu-Shin X.

    2008-01-01

    Bacterial lipoproteins (LP) are a family of cell wall components found in a wide variety of bacteria. In this study, we characterized the response of HUCL, a telomerase-immortalized human corneal epithelial cell (HCEC) line, to LP isolated from Staphylococcus (S) aureus. S. aureus LP (saLP) prepared by Triton X-114 extraction stimulated the activation of NF-κB, JNK, and P38 signaling pathways in HUCL cells. The extracts failed to stimulate NF-κB activation in HUCL cells after lipoprotein lipase treatment and in cell lines expressing TLR4 or TLR9, but not TLR2, indicating the lipoprotein nature of the extracts. saLP induced the up-regulation of a variety of inflammatory cytokines and chemokines (IL-6, IL-8, ICAM-1), antimicrobial molecules (hBD-2, LL-37, and iNOS), and homeostasis genes (Mn-SOD) at both the mRNA level and protein level. A similar inflammatory response to saLP was also observed in primary cultured HCECs using the production of IL-6 as readout. Moreover, TLR2 neutralizing antibody blocked the saLP-induced secretion of IL-6, IL-8 and hBD2 in HUCL cells. Our findings suggest that saLP activates TLR2 and contributes to the innate immune response in the cornea to S. aureus infection via production of proinflammatory cytokines and defense molecules. PMID:18191935

  1. Staphylococcus aureus lipoproteins trigger human corneal epithelial innate response through toll-like receptor-2.

    PubMed

    Li, Qiong; Kumar, Ashok; Gui, Jian-Fang; Yu, Fu-Shin X

    2008-05-01

    Bacterial lipoproteins (LP) are a family of cell wall components found in a wide variety of bacteria. In this study, we characterized the response of HUCL, a telomerase-immortalized human corneal epithelial cell (HCEC) line, to LP isolated from Staphylococcus (S) aureus. S. aureus LP (saLP) prepared by Triton X-114 extraction stimulated the activation of NF-kappaB, JNK, and P38 signaling pathways in HUCL cells. The extracts failed to stimulate NF-kappaB activation in HUCL cells after lipoprotein lipase treatment and in cell lines expressing TLR4 or TLR9, but not TLR2, indicating the lipoprotein nature of the extracts. saLP induced the up-regulation of a variety of inflammatory cytokines and chemokines (IL-6, IL-8, ICAM-1), antimicrobial molecules (hBD-2, LL-37, and iNOS), and homeostasis genes (Mn-SOD) at both the mRNA level and protein level. A similar inflammatory response to saLP was also observed in primary cultured HCECs using the production of IL-6 as readout. Moreover, TLR2 neutralizing antibody blocked the saLP-induced secretion of IL-6, IL-8 and hBD2 in HUCL cells. Our findings suggest that saLP activates TLR2 and triggers an innate immune response in the cornea to S. aureus infection via production of proinflammatory cytokines and defense molecules.

  2. Associations of Lipoprotein(a) Levels With Incident Atrial Fibrillation and Ischemic Stroke: The ARIC (Atherosclerosis Risk in Communities) Study.

    PubMed

    Aronis, Konstantinos N; Zhao, Di; Hoogeveen, Ron C; Alonso, Alvaro; Ballantyne, Christie M; Guallar, Eliseo; Jones, Steven R; Martin, Seth S; Nazarian, Saman; Steffen, Brian T; Virani, Salim S; Michos, Erin D

    2017-12-15

    Lipoprotein(a) (Lp[a]) is proatherosclerotic and prothrombotic, causally related to coronary disease, and associated with other cardiovascular diseases. The association of Lp(a) with incident atrial fibrillation (AF) and with ischemic stroke among individuals with AF remains to be elucidated. In the community-based ARIC (Atherosclerosis Risk in Communities) study cohort, Lp(a) levels were measured by a Denka Seiken assay at visit 4 (1996-1998). We used multivariable-adjusted Cox models to compare AF and ischemic stroke risk across Lp(a) levels. First, we evaluated incident AF in 9908 participants free of AF at baseline. AF was ascertained by electrocardiography at study visits, hospital International Statistical Classification of Diseases, 9th Revision (ICD-9) codes, and death certificates. We then evaluated incident ischemic stroke in 10,127 participants free of stroke at baseline. Stroke was identified by annual phone calls, hospital ICD-9 codes, and death certificates. The baseline age was 62.7 ± 5.6 years. Median Lp(a) levels were 13.3 mg/dL (interquartile range, 5.2-39.7 mg/dL). Median follow-up was 13.9 and 15.8 years for AF and stroke, respectively. Lp(a) was not associated with incident AF (hazard ratio, 0.98; 95% confidence interval, 0.82-1.17), comparing those with Lp(a) ≥50 mg/dL with those with Lp(a) <10 mg/dL. High Lp(a) was associated with a 42% relative increase in stroke risk among participants without AF (hazard ratio, 1.42; 95% confidence interval, 1.07-1.90) but not in those with AF (hazard ratio, 1.06; 95% confidence interval, 0.70-1.61 [P interaction for AF = 0.25]). There were no interactions by race or sex. No association was found for cardioembolic stroke subtype. High Lp(a) levels were not associated with incident AF. Lp(a) levels were associated with increased ischemic stroke risk, primarily among individuals without AF but not in those with AF. © 2017 The Authors. Published on behalf of the American Heart Association, Inc., by Wiley.
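
    For readers unfamiliar with the modelling step, the sketch below shows one conventional way to fit a multivariable Cox model comparing event risk across Lp(a) categories. It uses the lifelines package, and the data file, column names and covariates are hypothetical stand-ins, not the ARIC analysis itself.

```python
# Hedged sketch of a multivariable Cox proportional hazards fit.
# All file and column names are illustrative assumptions.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.read_csv("cohort.csv")  # hypothetical analysis file

# Indicator for Lp(a) >= 50 mg/dL versus a < 10 mg/dL reference group,
# plus a few illustrative covariates.
cols = ["follow_up_years", "incident_af", "lpa_ge_50", "age", "male", "diabetes"]

cph = CoxPHFitter()
cph.fit(df[cols], duration_col="follow_up_years", event_col="incident_af")
cph.print_summary()  # hazard ratios (exp(coef)) with 95% confidence intervals
```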

  3. Lipoprotein-associated phospholipase A2 mass and activity in children with heterozygous familial hypercholesterolemia and unaffected siblings: effect of pravastatin.

    PubMed

    Ryu, Sung Kee; Hutten, Barbara A; Vissers, Maud N; Wiegman, Albert; Kastelein, John J P; Tsimikas, Sotirios

    2011-01-01

    Lipoprotein-associated phospholipase A(2) (Lp-PLA(2)) is an independent risk factor for cardiovascular disease and a target of treatment. Lp-PLA(2) levels in children have not been previously reported. The effect of statin therapy on Lp-PLA(2) mass and activity in children with familial hypercholesterolemia (FH) is also not known. Lp-PLA(2) mass and activity levels were measured at baseline and after 2 years in 178 children with FH randomized to pravastatin or placebo and in 78 unaffected and untreated siblings. At the end of the randomized period, all FH children were then placed on pravastatin for an additional 2 years, and Lp-PLA(2) mass and activity levels were correlated with changes in carotid intima-media thickness during 4 years of follow-up. Baseline levels of Lp-PLA(2) mass and activity were significantly greater in children with FH compared with unaffected siblings (mass: 240.3 ± 41.6 vs 222.1 ± 36.5 ng/mL, P = .002; activity: 205.7 ± 41.6 vs 124.3 ± 23.0 nmol/min/mL, P < .0001). In the randomized FH cohort, after 2 years of treatment, Lp-PLA(2) mass (217.8 ± 35.0 vs 231.5 ± 34.8 ng/mL, P = .001) and activity (178.8 ± 37.3 vs 206.2 ± 33.5 nmol/min/mL, P < .0001) were significantly reduced by pravastatin compared with placebo. Change in Lp-PLA(2) activity was related to change in low-density lipoprotein cholesterol (pravastatin: r = 0.53, P < .0001; placebo: r = 0.23, P < .001), but change in Lp-PLA(2) mass was not related to change in low-density lipoprotein cholesterol. Baseline levels of Lp-PLA(2) mass and activity were not significantly associated with carotid intima-media thickness at baseline or at 4 years. Lp-PLA(2) mass and activity are significantly elevated in children with heterozygous FH compared with unaffected siblings and are significantly reduced by pravastatin therapy. Copyright © 2011 National Lipid Association. Published by Elsevier Inc. All rights reserved.

  4. Comparison between gradient-dependent hydraulic conductivities of roots using the root pressure probe: the role of pressure propagations and implications for the relative roles of parallel radial pathways.

    PubMed

    Bramley, Helen; Turner, Neil C; Turner, David W; Tyerman, Stephen D

    2007-07-01

    Hydrostatic pressure relaxations with the root pressure probe are commonly used for measuring the hydraulic conductivity (Lp(r)) of roots. We compared the Lp(r) of roots from species with different root hydraulic properties (Lupinus angustifolius L. 'Merrit', Lupinus luteus L. 'Wodjil', Triticum aestivum L. 'Kulin' and Zea mays L. 'Pacific DK 477') using pressure relaxations, a pressure clamp and osmotic gradients to induce water flow across the root. Only the pressure clamp measures water flow under steady-state conditions. Lp(r) determined by pressure relaxations was two- to threefold greater than Lp(r) from pressure clamps and was independent of the direction of water flow. Lp(r) (pressure clamp) was two- to fourfold higher than Lp(r) (osmotic) for all species except Triticum aestivum, where Lp(r) (pressure clamp) and Lp(r) (osmotic) were not significantly different. A novel technique was developed to measure the propagation of pressure through roots to investigate the cause of the differences in Lp(r). Root segments were connected between two pressure probes so that when root pressure (P(r)) was manipulated by one probe, the other probe recorded changes in P(r). Pressure relaxations did not induce the expected kinetics in pressure in the probe at the other end of the root when axial hydraulic conductance and probe and root capacitances were accounted for. An electric circuit model of the root was constructed that included an additional capacitance in the root loaded by a series of resistances. This accounted for the double-exponential kinetics for intact roots in pressure relaxation experiments as well as the reduced response observed with the double probe experiments. Although there were potential errors with all the techniques, we considered that the measurement of Lp(r) using the pressure clamp was the least ambiguous for small pressure changes, provided that sufficient time was allowed for pressure propagation through the root. The differences in Lp(r) from different methods of measurement have implications for the models describing water transport through roots and the potential role of aquaporins.
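
    The double-exponential kinetics mentioned above can be illustrated with a small curve-fitting sketch; the functional form, units and numbers below are assumptions chosen for illustration and are not taken from the paper.

```python
# Minimal sketch: fit a two-phase exponential relaxation to a synthetic
# root-pressure trace. Parameter values and noise level are made up.
import numpy as np
from scipy.optimize import curve_fit

def double_exp(t, a1, tau1, a2, tau2, p_inf):
    """Pressure relaxation as the sum of a fast and a slow exponential phase."""
    return a1 * np.exp(-t / tau1) + a2 * np.exp(-t / tau2) + p_inf

t = np.linspace(0.0, 120.0, 240)                        # time (s)
rng = np.random.default_rng(0)
p_obs = double_exp(t, 0.05, 4.0, 0.03, 40.0, 0.20) \
        + rng.normal(0.0, 0.002, t.size)                # synthetic trace (MPa)

popt, _ = curve_fit(double_exp, t, p_obs, p0=[0.05, 5.0, 0.03, 50.0, 0.2])
print("fitted time constants (s):", popt[1], popt[3])
```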

  5. Association of inflammatory, lipid and mineral markers with cardiac calcification in older adults.

    PubMed

    Bortnick, Anna E; Bartz, Traci M; Ix, Joachim H; Chonchol, Michel; Reiner, Alexander; Cushman, Mary; Owens, David; Barasch, Eddy; Siscovick, David S; Gottdiener, John S; Kizer, Jorge R

    2016-07-13

    Calcification of the aortic valve and adjacent structures involves inflammatory, lipid and mineral metabolism pathways. We hypothesised that circulating biomarkers reflecting these pathways are associated with cardiac calcification in older adults. We investigated the associations of various biomarkers with valvular and annular calcification in the Cardiovascular Health Study. Of the 5888 participants, up to 3585 were eligible after exclusions for missing biomarker, covariate or echocardiographic data. We evaluated analytes reflecting lipid (lipoprotein(a) [Lp(a)], Lp-associated phospholipase A2 (LpPLA2) mass and activity), inflammatory (interleukin-6, soluble (s) CD14) and mineral metabolism (fetuin-A, fibroblast growth factor (FGF)-23) pathways that were measured within 5 years of echocardiography. The relationships of plasma biomarkers with aortic valve calcification (AVC), aortic annular calcification (AAC) and mitral annular calcification (MAC) were assessed with relative risk (RR) regression. Calcification was prevalent: AVC 59%, AAC 45% and MAC 41%. After adjustment, Lp(a), LpPLA2 mass and activity and sCD14 were positively associated with AVC. RRs for AVC per SD (95% CI) were as follows: Lp(a), 1.051 (1.022 to 1.081); LpPLA2 mass, 1.036 (1.006 to 1.066) and LpPLA2 activity, 1.037 (1.004 to 1.071); sCD14, 1.039 (1.005 to 1.073). FGF-23 was positively associated with MAC, 1.040 (1.004 to 1.078) and fetuin-A was negatively associated, 0.949 (0.911 to 0.989). No biomarkers were significantly associated with AAC. This study shows novel associations of circulating FGF-23 and fetuin-A with MAC, and LpPLA2 and sCD14 with AVC, confirming that previously reported for Lp(a). Further investigation of Lp and inflammatory pathways may provide added insight into the aetiology of AVC, while study of phosphate regulation may illuminate the pathogenesis of MAC. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/

  6. The Effects of Curcuma longa L., Purple Sweet Potato, and Mixtures of the Two on Immunomodulation in C57BL/6J Mice Infected with LP-BM5 Murine Leukemia Retrovirus.

    PubMed

    Park, Soo-Jeung; Lee, Dasom; Lee, Minhee; Kwon, Han-Ol; Kim, Hyesook; Park, Jeongjin; Jeon, Woojin; Cha, Minseok; Jun, Suhwa; Park, Kwangjin; Lee, Jeongmin

    2018-06-04

    The immune response is stimulated to protect the body from external antigens and is controlled by several types of immune cells. In the present study, the immunomodulatory effects of Curcuma longa L., purple sweet potato, and mixtures of the two (CPM) were investigated in C57BL/6 mice infected with LP-BM5 murine leukemia virus (MuLV). Mice were divided into seven groups as follows: normal control, infected control (LP-BM5 MuLV infection), positive control (LP-BM5 MuLV infection+dietary supplement of red ginseng 300 mg/kg body weight), the original powder of C. longa L. (C; LP-BM5 MuLV infection+dietary supplement of C 189 mg/kg body weight), the original powder of purple sweet potato (P; LP-BM5 MuLV infection+dietary supplement of P 1811 mg/kg body weight), CPM Low (CPL; LP-BM5 MuLV infection+CPM 2 g/kg body weight), and CPM High (CPH; LP-BM5 MuLV infection+CPM 5 g/kg body weight). Dietary supplementation lasted for 12 weeks. Dietary supplementation of CPM inhibited LP-BM5 MuLV-induced lymphadenopathy and splenomegaly and inhibited reduction of messenger RNA (mRNA) expression of major histocompatibility complex (MHC) I and II. Moreover, CPM reduced the decrease in T- and B cell proliferation, reduced the population of CD4(+)/CD8(+) T cells, and remedied the unbalanced production of T helper-1 (Th1)/T helper-2 (Th2) cytokines in LP-BM5 MuLV-infected mice. In addition, CPM inhibited reduction of phagocytosis in peritoneal macrophages and decreased serum levels of immunoglobulin A (IgA), immunoglobulin E (IgE), and immunoglobulin G (IgG). These results suggest that CPM had a positive effect on immunomodulation in C57BL/6 mice induced by LP-BM5 leukemia retrovirus infection.

  7. Differential expression of Lp-PLA2 in obesity and type 2 diabetes and the influence of lipids.

    PubMed

    Jackisch, Laura; Kumsaiyai, Warunee; Moore, Jonathan D; Al-Daghri, Nasser; Kyrou, Ioannis; Barber, Thomas M; Randeva, Harpal; Kumar, Sudhesh; Tripathi, Gyanendra; McTernan, Philip G

    2018-05-01

    Lipoprotein-associated phospholipase A2 (Lp-PLA2) is a circulatory macrophage-derived factor that increases with obesity and leads to a higher risk of cardiovascular disease (CVD). Despite this, its role in adipose tissue and the adipocyte is unknown. Therefore, the aims of this study were to clarify the expression of Lp-PLA2 in relation to different adipose tissue depots and type 2 diabetes, and ascertain whether markers of obesity and type 2 diabetes correlate with circulating Lp-PLA2. A final aim was to evaluate the effect of cholesterol on cellular Lp-PLA2 in an in vitro adipocyte model. Anthropometric and biochemical variables were analysed in a cohort of lean (age 44.4 ± 6.2 years; BMI 22.15 ± 1.8 kg/m2, n = 23), overweight (age 45.4 ± 12.3 years; BMI 26.99 ± 1.5 kg/m2, n = 24), obese (age 49.0 ± 9.1 years; BMI 33.74 ± 3.3 kg/m2, n = 32) and type 2 diabetic women (age 53.0 ± 6.13 years; BMI 35.08 ± 8.6 kg/m2, n = 35) as part of an ethically approved study. Gene and protein expression of PLA2 and its isoforms were assessed in adipose tissue samples, with serum analysis undertaken to assess circulating Lp-PLA2 and its association with cardiometabolic risk markers. A human adipocyte cell model, Chub-S7, was used to address the intracellular change in Lp-PLA2 in adipocytes. Lp-PLA2 and calcium-independent PLA2 (iPLA2) isoforms were altered by adiposity, as shown by microarray analysis (p < 0.05). Type 2 diabetes status was also observed to significantly alter gene and protein levels of Lp-PLA2 in abdominal subcutaneous (AbdSc) (p < 0.01), but not omental, adipose tissue. Furthermore, multivariate stepwise regression analysis of circulating Lp-PLA2 and metabolic markers revealed that the greatest predictor of Lp-PLA2 in non-diabetic individuals was LDL-cholesterol (p = 0.004). Additionally, in people with type 2 diabetes, oxidised LDL (oxLDL), triacylglycerols and HDL-cholesterol appeared important predictors, accounting for 59.7% of the variance (p < 0.001). Subsequent in vitro studies determined human adipocytes to be a source of Lp-PLA2, as confirmed by mRNA expression, protein levels and immunochemistry. Further in vitro experiments revealed that treatment with LDL-cholesterol or oxLDL resulted in significant upregulation of Lp-PLA2, while inhibition of Lp-PLA2 reduced oxLDL production by 19.8% (p < 0.05). Our study suggests adipose tissue and adipocytes are active sources of Lp-PLA2, with differential regulation by fat depot and metabolic state. Moreover, levels of circulating Lp-PLA2 appear to be influenced by unfavourable lipid profiles in type 2 diabetes, which may occur in part through regulation of LDL-cholesterol and oxLDL metabolism in adipocytes.

  8. Most significant reduction of cardiovascular events in patients undergoing lipoproteinapheresis due to raised Lp(a) levels - A multicenter observational study.

    PubMed

    Schatz, U; Tselmin, S; Müller, G; Julius, U; Hohenstein, B; Fischer, S; Bornstein, S R

    2017-11-01

    Lipoprotein(a) (Lp(a)) is an independent cardiovascular (CV) risk factor, predisposing to premature and progressive CV events. Lipoproteinapheresis (LA) is the only efficacious therapy for reducing Lp(a). Data comparing the clinical efficacy of LA with respect to reduction of CV events in subjects with elevated Lp(a) versus LDL-C versus both disorders are scarce. We aimed to perform this comparison in a multicenter observational study. 113 LA patients from 8 apheresis centers were included (mean age 56.3 years). They were divided into 3 groups: Group I: Lp(a) < 600 mg/l, LDL-C > 2.6 mmol/l, Group II: Lp(a) > 600 mg/l, LDL-C < 2.6 mmol/l, and Group III: Lp(a) > 600 mg/l, LDL-C > 2.6 mmol/l. CV events were documented 2 years before versus 2 years after LA start. Before the start of LA, Group II showed the highest CV event rate (p = 0.001). Group III had a higher CV event rate than Group I (p = 0.03). During LA there was a significant reduction of CV events/patient in all vessel beds (1.22 ± 1.16 versus 0.33 ± 0.75, p < 0.001). The highest CV event rate during LA was seen in the coronary arteries, followed by the peripheral arteries; cerebrovascular events were least common. Greater CV event reduction rates were achieved in patients with isolated Lp(a) elevation (-77%, p < 0.001) and in patients with Lp(a) and LDL-C elevation (-74%, p < 0.001) than in subjects with isolated hypercholesterolemia (-53%, p = 0.06). This study demonstrates that patients with Lp(a) elevation benefit most from LA treatment. Prospective trials to confirm these data are warranted. Copyright © 2017. Published by Elsevier B.V.

  9. An alternative physiological role for the EmhABC efflux pump in Pseudomonas fluorescens cLP6a

    PubMed Central

    2011-01-01

    Background Efflux pumps belonging to the resistance-nodulation-division (RND) superfamily in bacteria are involved in antibiotic resistance and solvent tolerance but have an unknown physiological role. EmhABC, a RND-type efflux pump in Pseudomonas fluorescens strain cLP6a, extrudes hydrophobic antibiotics, dyes and polycyclic aromatic hydrocarbons including phenanthrene. The effects of physico-chemical factors such as temperature or antibiotics on the activity and expression of EmhABC were determined in order to deduce its physiological role(s) in strain cLP6a in comparison to the emhB disruptant strain, cLP6a-1. Results Efflux assays conducted with 14C-phenanthrene showed that EmhABC activity is affected by incubation temperature. Increased phenanthrene efflux was measured in cLP6a cells grown at 10°C and decreased efflux was observed at 35°C compared with cells grown at the optimum temperature of 28°C. Membrane fatty acids in cLP6a cells were substantially altered by changes in growth temperature and in the presence of tetracycline. Changed membrane fatty acids and increased membrane permeability were associated with ~30-fold increased expression of emhABC in cLP6a cells grown at 35°C, and with increased extracellular free fatty acids. Growth of P. fluorescens cLP6a at supra-optimal temperature was enhanced by the presence of EmhABC compared to strain cLP6a-1. Conclusions Combined, these observations suggest that the EmhABC efflux pump may be involved in the management of membrane stress effects such as those due to unfavourable incubation temperatures. Efflux of fatty acids replaced as a result of membrane damage or phospholipid turnover may be the primary physiological role of the EmhABC efflux pump in P. fluorescens cLP6a. PMID:22085438

  10. Calcium regulates the cell-to-cell water flow pathway in maize roots during variable water conditions.

    PubMed

    Wu, Yan; Liu, Xiaofang; Wang, Weifeng; Zhang, Suiqi; Xu, Bingcheng

    2012-09-01

    Soil water shortages can decrease root hydraulic conductivity and affect Ca uptake and movement through the plant. In this study, the effects of extra Ca(2+) applied in nutrient solution on the hydraulic properties of the whole roots (Lp(r)) and cortical cells (Lp(cell)) of maize (Zea mays L.) subjected to variable water conditions were investigated. Under well-watered conditions, extra Ca(2+) significantly increased the root Ca content, total root length, and lateral root number; however, it reduced the root cortical cell volume, Lp(r), and Lp(cell). Hg(2+) inhibition experiments suggested that extra Ca(2+) could reduce the contribution of the cell-to-cell water flow pathway. Osmotic stress (10% PEG6000) significantly decreased the cortical cell volume, Lp(r), and Lp(cell) in the control plants, but smaller decreases were observed in the extra Ca(2+) plants. The Hg(2+) treatment reduced Lp(r) more in the extra Ca(2+) plants (74.6%) than in the control plants (53.2%), suggesting a higher contribution of the cell-to-cell pathway. The larger Hg(2+) inhibition of the Lp(cell) in the extra Ca(2+) roots (67.2%) when compared to the controls (56.4%) indicated that extra Ca(2+) can mitigate the inhibition of aquaporin expression and/or activity caused by osmotic stress. After 2 d of rehydration, the extra Ca(2+) helped the Lp(r) and Lp(cell) to recover almost completely, but these properties only partially recovered in the control plants. In conclusion, extra Ca(2+) may adjust the contribution of the cell-to-cell pathway by regulating the expression and/or activity levels of AQPs according to water availability; this regulation may weaken negative effects and optimize water use. Copyright © 2012 Elsevier Masson SAS. All rights reserved.

  11. Temporal and spectral characteristics of seismicity observed at Popocatepetl volcano, central Mexico

    USGS Publications Warehouse

    Arciniega-Ceballos, A.; Valdes-Gonzalez, C.; Dawson, P.

    2000-01-01

    Popocatepetl volcano entered an eruptive phase from December 21, 1994 to March 30, 1995, which was characterized by ash and fumarolic emissions. During this eruptive episode, the observed seismicity consisted of volcano-tectonic (VT) events, long-period (LP) events and sustained tremor. Before the initial eruption on December 21, VT seismicity exhibited no increase in number until a swarm of VT earthquakes was observed at 01:31 hours local time. Visual observations of the eruption occurred at dawn the next morning. LP activity increased from an average of 7 events a day in October 1994 to 22 events per day in December 1994. At the onset of the eruption, LP activity peaked at 49 events per day. LP activity declined until mid-January 1995 when no events were observed. Tremor was first observed about one day after the initial eruption and averaged 10 h per episode. By late February 1995, tremor episodes became more intermittent, lasting less than 5 min, and the number of LP events returned to pre-eruption levels (7 events per day). Using a spectral ratio technique, low-frequency oceanic microseismic noise with a predominant peak around 7 s was removed from the broadband seismic signal of tremor and LP events. Stacks of corrected tremor episodes and LP events show that both tremor and LP events contain similar frequency features with major peaks around 1.4 Hz. Frequency analyses of LP events and tremor suggest a shallow extended source with similar radiation pattern characteristics. The distribution of VT events (between 2.5 and 10 km) also points to a shallow source of the tremor and LP events located in the first 2500 m beneath the crater. Under the assumption that the frequency characteristics of the signals are representative of an oscillator we used a fluid-filled-crack model to infer the length of the resonator.
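
    As a rough illustration of the spectral-ratio idea (dividing an event spectrum by a noise spectrum so that shared microseismic peaks cancel), the sketch below uses synthetic traces; the sampling rate, frequencies and window choices are assumptions, and the paper's actual procedure may differ.

```python
# Toy spectral ratio: the ~7 s (~0.14 Hz) microseism appears in both windows
# and cancels in the ratio, leaving the ~1.4 Hz LP/tremor energy dominant.
import numpy as np

fs = 50.0                                   # sampling rate (Hz), assumed
t = np.arange(0.0, 120.0, 1.0 / fs)
rng = np.random.default_rng(0)

noise = np.sin(2 * np.pi * 0.14 * t) + 0.1 * rng.standard_normal(t.size)
event = noise + 0.5 * np.sin(2 * np.pi * 1.4 * t)   # LP energy only in the event

freqs = np.fft.rfftfreq(t.size, d=1.0 / fs)
ratio = np.abs(np.fft.rfft(event)) / (np.abs(np.fft.rfft(noise)) + 1e-12)

print("dominant corrected peak: %.2f Hz" % freqs[np.argmax(ratio[1:]) + 1])
```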

  12. Lp-PLA2 silencing protects against ox-LDL-induced oxidative stress and cell apoptosis via Akt/mTOR signaling pathway in human THP1 macrophages.

    PubMed

    Zheng, HuaDong; Cui, DaJiang; Quan, XiaoJuan; Yang, WeiLin; Li, YingNa; Zhang, Lin; Liu, EnQi

    2016-09-02

    Atherosclerosis is a disease of the large- and medium-sized arteries that is characterized by the formation of atherosclerotic plaques, in which foam cells are the characteristic pathological cells. However, the key underlying pathomechanisms are still not fully elucidated. In this study, we investigated the role of lipoprotein-associated phospholipase A2 (Lp-PLA2) in ox-LDL-induced oxidative stress and cell apoptosis, and further elucidated the potential mechanisms in human THP1 macrophages. Flow cytometry and western blot analyses showed that both cell apoptosis and Lp-PLA2 expression were dose-dependently elevated after ox-LDL treatment for 24 h and also time-dependently increased after 50 mg/L ox-LDL incubation in THP1 macrophages. In addition, Lp-PLA2 silencing decreased ox-LDL-induced Lp-PLA2 and CD36 expression in THP1 macrophages. We also found that the levels of oil red O-staining, triglyceride (TG) and total cholesterol (TC) were significantly upregulated in ox-LDL-treated THP1 cells, but inhibited by Lp-PLA2 silencing. Furthermore, ox-LDL treatment resulted in significant increases of ROS and MDA but a marked decrease of SOD, effects that were reversed by Lp-PLA2 silencing in THP1 cells. Lp-PLA2 silencing reduced ox-LDL-induced cell apoptosis and caspase-3 expression in THP1 cells. Moreover, Lp-PLA2 siRNA transfection dramatically lowered the elevated levels of p-Akt and p-mTOR proteins in ox-LDL-treated THP1 cells. The PI3K inhibitor LY294002 and the mTOR inhibitor rapamycin each decreased the ox-LDL-induced increases in caspase-3 expression and TC content. Taken together, these results revealed that Lp-PLA2 silencing protected against ox-LDL-induced oxidative stress and cell apoptosis via the Akt/mTOR signaling pathway in human THP1 macrophages. Copyright © 2016 Elsevier Inc. All rights reserved.

  13. Racial variation in lipoprotein-associated phospholipase A2 in older adults

    PubMed Central

    2011-01-01

    Background Lipoprotein-associated phospholipase A2 (Lp-PLA2) is a predictor of cardiovascular events that has been shown to vary with race. The objective of this study was to examine factors associated with this racial variation. Methods We measured Lp-PLA2 mass and activity in 714 healthy older adults with no clinical coronary heart disease and not taking dyslipidemia medication. We evaluated the association between race and Lp-PLA2 mass and activity levels after adjustment for various covariates using multivariable linear regression. These covariates included age, sex, diabetes, hypertension, body mass index, lipid measurements, C-reactive protein, smoking status, physical activity, diet, income, and education level. We further examined genetic covariates that included three single nucleotide polymorphisms shown to be associated with Lp-PLA2 activity levels. Results The mean age was 66 years. Whites had the highest Lp-PLA2 mass and activity levels, followed by Hispanics and Asians, and then African-Americans; in age and sex adjusted analyses, these differences were significant for each non-White race as compared to Whites (p < 0.0001). For example, African-Americans were predicted to have a 55.0 ng/ml lower Lp-PLA2 mass and 24.7 nmol/ml-min lower activity, compared with Whites, independent of age and sex (p < 0.0001). After adjustment for all covariates, race remained significantly correlated with Lp-PLA2 mass and activity levels (p < 0.001) with African-Americans having 44.8 ng/ml lower Lp-PLA2 mass and 17.3 nmol/ml-min lower activity compared with Whites (p < 0.0001). Conclusion Biological, lifestyle, demographic, and select genetic factors do not appear to explain variations in Lp-PLA2 mass and activity levels between Whites and non-Whites, suggesting that Lp-PLA2 mass and activity levels may need to be interpreted differently for various races. PMID:21714927

  14. Lipoprotein (a) level, apolipoprotein (a) size, and risk of unexplained ischemic stroke in young and middle-aged adults.

    PubMed

    Beheshtian, Azadeh; Shitole, Sanyog G; Segal, Alan Z; Leifer, Dana; Tracy, Russell P; Rader, Daniel J; Devereux, Richard B; Kizer, Jorge R

    2016-10-01

    Circulating lipoprotein (a) [Lp(a)] level relates inversely to apolipoprotein (a) [apo(a)] size. Both smaller apo(a) isoforms and higher Lp(a) levels have been linked to coronary heart disease and stroke, but their independent contributions are less well defined. We examined the role of Lp(a) in younger adults with cryptogenic stroke. Lp(a) and apo(a) isoforms were evaluated in a prospectively designed case-control study of patients with unexplained ischemic stroke and stroke-free controls, ages 18 to 64. Serum Lp(a) was measured among 255 cases and 390 controls with both apo(a)-size independent and dependent assays. Apo(a) size was determined by agarose gel electrophoresis. Cases and controls were similar in socio-demographic characteristics, but cases had more hypertension, diabetes, smoking, and migraine with aura. In race-specific analyses, Lp(a) levels showed positive associations with cryptogenic stroke in whites, but not in the smaller subgroups of blacks and Hispanics. After full adjustment, comparison of the highest versus lowest quartile in whites was significant for apo(a)-size-independent (OR = 2.10 [95% CI = 1.04, 4.27], p = 0.040), and near-significant for apo(a)-size-dependent Lp(a) (OR = 1.81 [95% CI = 0.95, 3.47], p = 0.073). Apo(a) size was not associated with cryptogenic stroke in any race-ethnic subgroup. This study underscores the importance of Lp(a) level, but not apo(a) size, as an independent risk factor for unexplained ischemic stroke in young and middle-aged white adults. Given the emergence of effective Lp(a)-lowering therapies, these findings support routine testing for Lp(a) in this setting, along with further research to assess the extent to which such therapies improve outcomes in this population. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  15. Increased stroke risk and lipoprotein(a) in a multiethnic community: the Northern Manhattan Stroke Study.

    PubMed

    Boden-Albala, Bernadette; Kargman, Douglas E; Lin, I-Feng; Paik, Myunghee C; Sacco, Ralph L; Berglund, Lars

    2010-08-01

    Elevated lipoprotein(a) [Lp(a)] is associated with ischemic stroke (IS) among Whites, but data are sparse for non-White populations. Using a population-based case-control study design with subjects from the Northern Manhattan Stroke Study, we assessed whether Lp(a) levels were independently associated with IS risk among Whites, Blacks and Hispanics. Lp(a) levels were measured in 317 IS cases (mean age 69 ± 13 years; 56% women; 16% Whites, 31% Blacks and 52% Hispanics) and 413 community-based controls, matched by age, race/ethnicity and gender. In-person assessments included demographics, socioeconomic status, presence of vascular risk factors and fasting lipid levels. Logistic regression was used to determine the independent association of Lp(a) and IS. Stratified analyses investigated gender and race/ethnic differences. Mean Lp(a) levels were greater among cases than controls (46.3 ± 41.0 vs. 38.9 ± 38.2 mg/dl; p < 0.01). After adjusting for stroke risk factors (hypertension, diabetes mellitus, coronary artery disease, cigarette smoking), lipid levels, and socioeconomic status, Lp(a) levels ≥30 mg/dl were independently associated with an increased stroke risk in the overall cohort (adjusted odds ratio, OR, 1.8, 95% confidence interval, CI, 1.20-2.6; p = 0.004). There was a significant linear dose-response relationship between Lp(a) levels and IS risk. The association between IS risk and Lp(a) ≥30 mg/dl was more pronounced among men (adjusted OR 2.0, 95% CI 1.1-3.5; p = 0.02) and among Blacks (adjusted OR 2.7, 95% CI 1.2-6.2; p = 0.02). Elevated Lp(a) levels were significantly and independently associated with increased stroke risk, suggesting that Lp(a) is a risk factor for IS across White, Black and Hispanic race/ethnic groups. Copyright 2010 S. Karger AG, Basel.

  16. Transfer of C-terminal residues of human apolipoprotein A-I to insect apolipophorin III creates a two-domain chimeric protein with enhanced lipid binding activity.

    PubMed

    Horn, James V C; Ellena, Rachel A; Tran, Jesse J; Beck, Wendy H J; Narayanaswami, Vasanthy; Weers, Paul M M

    2017-08-01

    Apolipophorin III (apoLp-III) is an insect apolipoprotein (18 kDa) that comprises a single five-helix bundle domain. In contrast, human apolipoprotein A-I (apoA-I) is a 28 kDa two-domain protein: an α-helical N-terminal domain (residues 1-189) and a less structured C-terminal domain (residues 190-243). To better understand the apolipoprotein domain organization, a novel chimeric protein was engineered by attaching residues 179 to 243 of apoA-I to the C-terminal end of apoLp-III. The apoLp-III/apoA-I chimera was successfully expressed and purified in E. coli. Western blot analysis and mass spectrometry confirmed the presence of the C-terminal domain of apoA-I within the chimera. While parent apoLp-III did not self-associate, the chimera formed oligomers similar to apoA-I. The chimera displayed a lower α-helical content, but its stability remained similar to that of apoLp-III, consistent with the addition of a less structured domain. The chimera was able to solubilize phospholipid vesicles at a significantly higher rate compared to apoLp-III, approaching that of apoA-I. The chimera was more effective in protecting phospholipase C-treated low density lipoprotein from aggregation compared to apoLp-III. In addition, binding interaction of the chimera with phosphatidylglycerol vesicles and lipopolysaccharides was considerably improved compared to apoLp-III. Thus, addition of the C-terminal domain of apoA-I to apoLp-III created a two-domain protein, with self-association, lipid and lipopolysaccharide binding properties similar to apoA-I. The apoA-I-like behavior of the chimera indicates that these properties are independent of residues residing in the N-terminal domain of apoA-I, and that they can be transferred from apoA-I to apoLp-III. Copyright © 2017 Elsevier B.V. All rights reserved.

  17. Dynamics of genome change among Legionella species

    PubMed Central

    Joseph, Sandeep J.; Cox, Daniel; Wolff, Bernard; Morrison, Shatavia S.; Kozak-Muiznieks, Natalia A.; Frace, Michael; Didelot, Xavier; Castillo-Ramirez, Santiago; Winchell, Jonas; Read, Timothy D.; Dean, Deborah

    2016-01-01

    Legionella species inhabit freshwater and soil ecosystems where they parasitize protozoa. L. pneumophila (Lp) serogroup-1 (Lp1) is the major cause of Legionnaires' Disease (LD), a life-threatening pulmonary infection that can spread systemically. The increased global frequency of LD caused by Lp and non-Lp species underscores the need to expand our knowledge of evolutionary forces underlying disease pathogenesis. Whole genome analyses of 43 strains, including all known Lp serogroups 1–17 and 17 emergent LD-causing Legionella species (of which 33 were sequenced in this study) in addition to 10 publicly available genomes, resolved the strains into four phylogenetic clades along host virulence demarcations. Clade-specific genes were distinct for genetic exchange and signal-transduction, indicating adaptation to specific cellular and/or environmental niches. CRISPR spacer comparisons hinted at larger pools of accessory DNA sequences in Lp than predicted by the pan-genome analyses. While recombination within Lp was frequent and has been reported previously, population structure analysis identified surprisingly few DNA admixture events between species. In summary, diverse LD-causing Legionella species share a conserved core-genome, are genetically isolated from each other, and selectively acquire genes with potential for enhanced virulence. PMID:27633769

  18. Up-regulation of Proinflammatory Genes and Cytokines Induced by S100A8 in CD8+ T Cells in Lichen Planus.

    PubMed

    de Carvalho, Gabriel Costa; Domingues, Rosana; de Sousa Nogueira, Marcelle Almeida; Calvielli Castelo Branco, Anna C; Gomes Manfrere, Kelly C; Pereira, Naiura Vieira; Aoki, Valéria; Sotto, Mirian Nacagami; Da Silva Duarte, Alberto J; Sato, Maria Notomi

    2016-05-01

    Lichen planus (LP) is a chronic inflammatory mucocutaneous disease. The inflammatory status of LP may be related to S100A8 (myeloid-related protein 8; MRP8) activation of cytotoxic cells. The aims of this study were to evaluate S100A8 expression in skin lesions and the in vitro effects of S100A8 on CD8+ T cells and natural killer (NK) cells in LP. Increased levels of S100A8/S100A9 were detected in the skin lesions as well as in the sera of subjects with LP. S100A8 expression induced an increased cytotoxic response by peripheral blood CD8+CD107a+ T cells as well as by NK CD56bright cells in patients with LP. Increased expression of interleukin (IL)-1β, tumour necrosis factor (TNF) and IL-6 in the CD8+ T cells of patients with LP was induced by S100A8, in contrast to the control group, which expressed IL-10 and type I interferon genes. These data suggest that, in individuals with LP, S100A8 may exert distinct immunomodulatory and cytotoxicity functions.

  19. Squamous cell carcinoma from oral lichen planus: a case report of a lesion with 28 years of evolution.

    PubMed

    Silveira, Wanessa da Silva; Bottezini, Ezequiel Gregolin; Linden, Maria Salete; Rinaldi, Isadora; Paranhos, Luiz Renato; de Carli, João Paulo; Trentin, Micheline; Dos Santos, Pâmela Letícia

    2017-12-01

    Lichen planus (LP) is a relatively common mucocutaneous disease with autoimmune etiology. Considering its malignancy potential, it is important to define the correct diagnosis, treatment, and clinical follow-up for patients with LP so that the disease is not diagnosed late, thus hindering the chances of curing the disease. This study aims to describe a clinical case of oral squamous cell carcinoma, potentially originating from LP. The patient is undergoing clinical and histopathological follow-up. A 64-year-old Caucasian male patient presented with a proliferative verrucous lesion on the tongue and sought treatment at the School of Dentistry, University of Passo Fundo (UPF), Passo Fundo, Brazil. He claimed the lesion had been present since 1988, and had initially been diagnosed as "oral lichen planus." The physical examination suggested three diagnostic hypotheses: plaque-like oral LP, verrucous carcinoma, and squamous cell carcinoma. After incisional biopsy and histopathological analysis, squamous cell carcinoma was diagnosed, probably originating from oral LP. The case study shows that malignant transformation of oral LP is possible, which justifies periodic clinical and histopathological follow-up, as well as the elimination of risk factors for carcinoma in patients with oral LP.

  20. Dietary Nanosized Lactobacillus plantarum Enhances the Anticancer Effect of Kimchi on Azoxymethane and Dextran Sulfate Sodium-Induced Colon Cancer in C57BL/6J Mice.

    PubMed

    Lee, Hyun Ah; Kim, Hyunung; Lee, Kwang-Won; Park, Kun-Young

    2016-01-01

    This study was undertaken to evaluate enhancement of the chemopreventive properties of kimchi by dietary nanosized Lactobacillus (Lab.) plantarum (nLp) in an azoxymethane (AOM)/dextran sulfate sodium (DSS)-induced colitis-associated colorectal cancer C57BL/6J mouse model. nLp is a dead, shrunken, processed form of Lab. plantarum isolated from kimchi that is 0.5-1.0 µm in size. The results obtained showed that animals fed kimchi with nLp (K-nLp) had longer colons and lower colon weight/length ratios and developed fewer tumors than mice fed kimchi alone (K). In addition, K-nLp administration reduced serum levels of proinflammatory cytokines and modulated the mRNA and protein expression of inflammatory, apoptotic, and cell-cycle markers to suppress inflammation and induce tumor-cell apoptosis and cell-cycle arrest. Moreover, it elevated natural killer-cell cytotoxicity. The study suggests adding nLp to kimchi could improve the suppressive effect of kimchi on AOM/DSS-induced colorectal cancer. These findings indicate nLp has potential use as a functional chemopreventive ingredient in the food industry.

  1. A novel liquid-phase piezoelectric immunosensor for detecting Schistosoma japonicum circulating antigen.

    PubMed

    Wen, Zhili; Wang, Shiping; Wu, Zhaoyang; Shen, Guoli

    2011-09-01

    A new liquid-phase piezoelectric immunosensor (LP-PEIS), which can quantitatively detect Schistosoma japonicum (Sj) circulating antigens (SjCAg), was developed. IgG antibodies were purified from the sera of rabbits that had been infected or immunized with Sj and were immobilized on the surface of the piezoelectric quartz crystal of the LP-PEIS via staphylococcal protein A (SPA). The sensor was first used to detect SjCAg in the sera of Sj-infected rabbits in order to establish optimal conditions for detecting SjCAg. Finally, the LP-PEIS under these optimal conditions was used to detect SjCAg in the sera of Sj-infected patients and was compared with a sandwich ELISA. Optimal conditions of the LP-PEIS for detecting SjCAg were established. In the detection of sera from patients with acute schistosomiasis, the LP-PEIS had a higher positive rate (100%) and a lower false-positive rate (3.0%) than the sandwich ELISA (92.8%, 6.0%), although the differences between the LP-PEIS and the sandwich ELISA were not statistically significant. The LP-PEIS can quantitatively detect SjCAg in patients' sera as well as the sandwich ELISA can. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.

  2. Synthesis and enzymatic degradation of epichlorohydrin cross-linked pectins.

    PubMed

    Semdé, Rasmané; Moës, André J; Devleeschouwer, Michel J; Amighi, Karim

    2003-02-01

    The water solubility of pectin was successfully decreased by cross-linking with increasing amounts of epichlorohydrin in the reaction media. The initial molar ratios of epichlorohydrin/galacturonic acid monomer in the reaction mixtures were 0, 0.37, 0.56, 0.74, 1.00, 1.47, and 2.44. The resulting epichlorohydrin cross-linked pectins were thus referred to as C-LP0, C-LP37, C-LP56, C-LP75, C-LP100, C-LP150, and C-LP250, respectively. Methoxylation degrees ranged from 60.5 +/- 0.9% to 68.0 +/- 0.6%, and the effective cross-linking degrees, determined by quantification of the hydroxyl anions consumed during the reaction, were 0, 17.8, 26.0, 38.3, 46.5, 53.5, and 58.7%, respectively. After incubating the different cross-linked pectins (0.5% w/v) in 25 mL of 0.05 M acetate-phosphate buffer (pH 4.5), containing 50 microL of Pectinex Ultra SP-L (pectinolytic enzymes), between 60 and 80% of the pectin osidic bonds were broken in less than 1 hr. Moreover, increasing the cross-linking degree only weakly slowed the rate of enzymatic degradation.

  3. Critiques of Language Planning: A Minority Languages Perspective.

    ERIC Educational Resources Information Center

    Fishman, Joshua A.

    1994-01-01

    Examines neo-Marxist and poststructural critiques of classical language planning (lp) for relevance to lp on behalf of minority languages. Criticisms suggest lp is conducted by elites governed by self-interest, reproduces rather than overcomes sociocultural and econotechnical inequalities, inhibits multiculturalism, espouses worldwide…

  4. 78 FR 16846 - Notice of Application; Equitrans, L.P.

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-03-19

    ... Application; Equitrans, L.P. Take notice that on March 1, 2013, Equitrans, L.P. (Equitrans), 625 Liberty... (toll free). For TTY, call (202) 502-8659. Comment date: 5:00 p.m. Eastern Time on April 2, 2013. Dated... 6717-01-P ...

  5. Diagnostic Lumbar Puncture Among Children With Facial Palsy in a Lyme Disease Endemic Area.

    PubMed

    Paydar-Darian, Niloufar; Kimia, Amir A; Lantos, Paul M; Fine, Andrew M; Gordon, Caroline D; Gordon, Catherine R; Landschaft, Assaf; Nigrovic, Lise E

    2017-06-01

    We identified 620 children with peripheral facial palsy of which 211 (34%) had Lyme disease. The 140 children who had a lumbar puncture performed were more likely to be hospitalized (73% LP performed vs 2% no LP) and to receive parenteral antibiotics (62% LP performed vs 6% no LP). © The Author 2016. Published by Oxford University Press on behalf of The Journal of the Pediatric Infectious Diseases Society. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  6. Kelvin waves in the tropical stratosphere observed in OMPS-LP ozone measurements

    NASA Astrophysics Data System (ADS)

    Randel, W. J.; Park, M.

    2017-12-01

    We investigate equatorial waves in the tropical stratosphere using OMPS limb profiler (LP) ozone measurements spanning 2012-2017. The OMPS-LP data show clear evidence of eastward-propagating planetary-scale Kelvin waves with periods near 15-20 days, and these features are strongly modulated by the background winds linked to the quasi-biennial oscillation (QBO). We study coherence between OMPS-LP ozone and GPS radio occultation temperature measurements, and use these analyses to evaluate data quality and variability in the tropical stratosphere.

  7. Improved FastICA algorithm in fMRI data analysis using the sparsity property of the sources.

    PubMed

    Ge, Ruiyang; Wang, Yubao; Zhang, Jipeng; Yao, Li; Zhang, Hang; Long, Zhiying

    2016-04-01

    As a blind source separation technique, independent component analysis (ICA) has many applications in functional magnetic resonance imaging (fMRI). Although either temporal or spatial prior information has been introduced into the constrained ICA and semi-blind ICA methods to improve the performance of ICA in fMRI data analysis, certain types of additional prior information, such as sparsity, have seldom been added to ICA algorithms as constraints. In this study, we proposed a SparseFastICA method by adding the source sparsity as a constraint to the FastICA algorithm to improve the performance of the widely used FastICA. The source sparsity is estimated through a smoothed ℓ0 norm method. We performed experimental tests on both simulated data and real fMRI data to investigate the feasibility and robustness of SparseFastICA and made a performance comparison between SparseFastICA, FastICA and Infomax ICA. Results on the simulated and real fMRI data demonstrated the feasibility and robustness of SparseFastICA for source separation in fMRI data. Both the simulated and real fMRI experimental results showed that SparseFastICA has better robustness to noise and better spatial detection power than FastICA. Although the spatial detection power of SparseFastICA and Infomax did not differ significantly, SparseFastICA had a faster computation speed than Infomax. More importantly, SparseFastICA outperformed FastICA in robustness and spatial detection power and can be used to identify brain networks more accurately than the FastICA algorithm. Copyright © 2016 Elsevier B.V. All rights reserved.
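
    The smoothed ℓ0 idea referenced above can be sketched in a few lines: the count of nonzero source values is approximated by a sum of narrow Gaussians, which is differentiable and therefore usable as a constraint. The sketch below only illustrates the measure itself, under assumed parameter values; it does not reproduce how SparseFastICA incorporates it into the FastICA update.

```python
# Smoothed-l0 sparsity measure: ||s||_0 ~= n - sum_i exp(-s_i^2 / (2 sigma^2)),
# which approaches the true count of nonzeros as sigma -> 0.
import numpy as np

def smoothed_l0(s, sigma=0.1):
    """Differentiable approximation of the number of nonzero entries of s."""
    s = np.asarray(s, dtype=float)
    return s.size - np.sum(np.exp(-s ** 2 / (2.0 * sigma ** 2)))

rng = np.random.default_rng(0)
sparse_src = np.zeros(1000)
sparse_src[rng.choice(1000, 50, replace=False)] = rng.standard_normal(50)
dense_src = rng.standard_normal(1000)

print(smoothed_l0(sparse_src))  # close to 50, the true number of nonzeros
print(smoothed_l0(dense_src))   # much larger, approaching 1000
```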

  8. Two-population model for medial temporal lobe neurons: The vast majority are almost silent

    NASA Astrophysics Data System (ADS)

    Magyar, Andrew; Collins, John

    2015-07-01

    Recordings in the human medial temporal lobe have found many neurons that respond to pictures (and related stimuli) of just one particular person of those presented. It has been proposed that these are concept cells, responding to just a single concept. However, a direct experimental test of the concept cell idea appears impossible, because it would need the measurement of the response of each cell to enormous numbers of other stimuli. Here we propose a new statistical method for analysis of the data that gives a more powerful way to analyze how close data are to the concept-cell idea. Central to the model is the neuronal sparsity, defined as the total fraction of stimuli that elicit an above-threshold response in the neuron. The model exploits the large number of sampled neurons to give sensitivity to situations where the average response sparsity is much less than one response for the number of presented stimuli. We show that a conventional model where a single sparsity is postulated for all neurons gives an extremely poor fit to the data. In contrast, a model with two dramatically different populations gives an excellent fit to data from the hippocampus and entorhinal cortex. In the hippocampus, one population has 7% of the cells with a 2.6% sparsity. But a much larger fraction (93%) respond to only 0.1% of the stimuli. This can result in an extreme bias in the responsiveness of reported neurons compared with a typical neuron. Finally, we show how to allow for the fact that some identified units correspond to multiple neurons and find that our conclusions at the neural level are quantitatively changed but strengthened, with an even stronger difference between the two populations.
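
    The two-population model can be illustrated as a mixture of binomials: each unit responds to a Binomial(S, alpha_k) number of the S presented stimuli, with a small responsive population (sparsity alpha_1) and a large, nearly silent one (alpha_2). The sketch below fits such a mixture to synthetic counts by maximum likelihood; the numbers are invented, and the paper's actual likelihood (including multi-unit corrections) is not reproduced.

```python
# Toy maximum-likelihood fit of a two-population binomial mixture of sparsities.
import numpy as np
from scipy.stats import binom
from scipy.optimize import minimize

rng = np.random.default_rng(1)
S = 100                                      # stimuli presented per unit (assumed)
responsive = rng.random(2000) < 0.07         # small responsive population
counts = np.where(responsive,
                  rng.binomial(S, 0.026, 2000),   # sparsity of responsive cells
                  rng.binomial(S, 0.001, 2000))   # nearly silent cells

def neg_log_lik(params):
    frac, a1, a2 = params
    lik = frac * binom.pmf(counts, S, a1) + (1.0 - frac) * binom.pmf(counts, S, a2)
    return -np.sum(np.log(lik + 1e-300))

res = minimize(neg_log_lik, x0=[0.1, 0.02, 0.002],
               bounds=[(1e-3, 0.5), (1e-4, 0.2), (1e-5, 0.05)])
print("fraction responsive, sparsity_1, sparsity_2:", res.x)
```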

  9. Association between lipoprotein(a) level and type 2 diabetes: no evidence for a causal role of lipoprotein(a) and insulin.

    PubMed

    Buchmann, Nikolaus; Scholz, Markus; Lill, Christina M; Burkhardt, Ralph; Eckardt, Rahel; Norman, Kristina; Loeffler, Markus; Bertram, Lars; Thiery, Joachim; Steinhagen-Thiessen, Elisabeth; Demuth, Ilja

    2017-11-01

    Inverse relationships have been described between the largely genetically determined levels of serum/plasma lipoprotein(a) [Lp(a)], type 2 diabetes (T2D) and fasting insulin. Here, we aimed to evaluate the nature of these relationships with respect to causality. We tested whether we could replicate the recent negative findings on causality between Lp(a) and T2D by employing the Mendelian randomization (MR) approach using cross-sectional data from three independent cohorts, Berlin Aging Study II (BASE-II; n = 2012), LIFE-Adult (n = 3281) and LIFE-Heart (n = 2816). Next, we explored another frequently discussed hypothesis in this context: increasing insulin levels during the course of T2D disease development inhibit hepatic Lp(a) synthesis and thereby might explain the inverse Lp(a)-T2D association. We used two fasting insulin-associated variants, rs780094 and rs10195252, as instrumental variables in MR analysis of n = 4937 individuals from BASE-II and LIFE-Adult. We further investigated causality of the association between fasting insulin and Lp(a) by combined MR analysis of 12 additional SNPs in LIFE-Adult. While an Lp(a)-T2D association was observed in the combined analysis (meta-effect of OR [95% CI] = 0.91 [0.87-0.96] per quintile, p = 1.3 × 10⁻⁴), we found no evidence of causality in the Lp(a)-T2D association (p = 0.29, fixed effect model) when using the variant rs10455872 as the instrumental variable in the MR analyses. Likewise, no evidence of a causal effect of insulin on Lp(a) levels was found. While these results await confirmation in larger cohorts, the nature of the inverse Lp(a)-T2D association remains to be elucidated.
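
    For orientation, the simplest Mendelian randomization estimate is a Wald ratio: the variant's effect on the outcome divided by its effect on the exposure. The numbers below are invented for illustration and are not the study's estimates; the study's actual MR analysis is more involved.

```python
# Wald-ratio instrumental-variable estimate with a first-order (delta-method)
# standard error that ignores uncertainty in the SNP-exposure effect.
beta_snp_exposure = 0.15    # per-allele effect of the variant on the exposure (made up)
beta_snp_outcome = 0.012    # per-allele effect of the variant on the outcome (made up)
se_snp_outcome = 0.010

wald_ratio = beta_snp_outcome / beta_snp_exposure
wald_se = se_snp_outcome / abs(beta_snp_exposure)

print(f"causal estimate: {wald_ratio:.3f}, 95% CI half-width: {1.96 * wald_se:.3f}")
```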

  10. Measurement of Lp(a) with a two-step monoclonal competitive sandwich ELISA method.

    PubMed

    Morikawa, W; Iki, R; Terano, T; Funatsu, A; Sugiuchi, H; Uji, Y; Okabe, H

    1995-06-01

    The aim was to evaluate lipoprotein(a) [Lp(a)] measurements obtained with a competitive two-step monoclonal enzyme-linked immunosorbent assay (ELISA) and to compare them with those obtained with a conventional ELISA. Sera containing various isoforms of Lp(a) and purified Lp(a) were assayed using the method described here and commercially available kits. The reference range was determined in 324 normal subjects by calculation from logarithmically transformed Lp(a) results. Our method takes advantage of a competitive reaction between fixed and free antibody to Lp(a), with a detection range up to 1000 mg/L and a lower detection limit of 2 mg/L. The anti-Lp(a) monoclonal antibody employed in the assay system reacts uniformly with all phenotypes of Lp(a) but shows very low cross-reactivity with plasminogen and LDL. Within-run and between-run precisions were excellent, giving CVs of 2.9 and 4.0% at mean values of 145 and 635 mg/L, respectively. The results of our method correlated well with those of a polyclonal method (Biopool) and a monoclonal antibody method (Terumo): y (our method) = 0.99 x (polyclonal method, Biopool) - 1.9, r = 0.994 (n = 60), and y = 0.94 x (monoclonal method, Terumo) - 9.8, r = 0.97 (n = 60), respectively. The reference range was 105.9 ± 25.4 mg/L, and the difference between the sexes was not significant. Our method has proven highly accurate and specific. It is applicable to autoanalyzers because it does not require the pre-dilution step that is necessary for Lp(a) determination by conventional ELISA. Accordingly, we conclude that our method is suitable for both clinical laboratories and mass screening.

  11. A Nationwide Cross-Sectional Survey of Sleep-Related Problems in Japanese Visually Impaired Patients: Prevalence and Association with Health-Related Quality of Life.

    PubMed

    Tamura, Norihisa; Sasai-Sakuma, Taeko; Morita, Yuko; Okawa, Masako; Inoue, Shigeru; Inoue, Yuichi

    2016-12-15

    This questionnaire-based cross-sectional study was conducted (1) to estimate the prevalence of sleep-related problems and (2) to explore factors associated with lower physical/mental quality of life (QOL), particularly sleep-related problems, among Japanese visually impaired people. This nationwide questionnaire-based survey was administered to visually impaired individuals through the Japan Federation of the Blind. Visually impaired individuals without light perception (LP) (n = 311), those with LP (n = 287), and age- and gender-matched controls (n = 615) were eligible for this study. The study questionnaires elicited demographic information and information about visual impairment status, sleep-related problems, and health-related quality of life. Visually impaired individuals with and without LP showed higher prevalence rates of irregular sleep-wake patterns and difficulty maintaining sleep than controls (34.7% and 29.4% vs. 15.8%, and 60.1% and 46.7% vs. 26.8%, respectively; p < 0.001). These sleep-related problems were observed more frequently in visually impaired individuals without LP than in those with LP. Non-restorative sleep or excessive daytime sleepiness was associated with lower mental/physical QOL in visually impaired individuals with LP and in control subjects. In visually impaired individuals without LP, however, an irregular sleep-wake pattern or difficulty waking at the desired time was associated with lower mental/physical QOL. Sleep-related problems were observed more frequently in visually impaired individuals than in controls. Moreover, the rates of these difficulties were higher among subjects without LP. Sleep-related problems, especially circadian rhythm-related ones, can be associated with lower mental/physical QOL in visually impaired individuals without LP. © 2016 American Academy of Sleep Medicine.

  12. Location of long-period events below Kilauea Volcano using seismic amplitudes and accurate relative relocation

    USGS Publications Warehouse

    Battaglia, J.; Got, J.-L.; Okubo, P.

    2003-01-01

    We present methods for improving the location of long-period (LP) events, deep and shallow, recorded below Kilauea Volcano by the permanent seismic network. LP events may be of particular interest for understanding eruptive processes, as their source mechanism is assumed to directly involve fluid transport. However, it is usually difficult or impossible to locate their source using traditional arrival time methods because of emergent wave arrivals. At Kilauea, similar LP waveform signatures suggest the existence of LP multiplets. The waveform similarity suggests spatially close sources, while catalog solutions using arrival time estimates are widely scattered beneath Kilauea's summit caldera. In order to improve estimates of absolute LP location, we use the distribution of seismic amplitudes corrected for station site effects. The decay of amplitude as a function of hypocentral distance is used to infer LP location. In a second stage, we use the similarity of the events to calculate their relative positions. Analysis of the entire LP seismicity recorded between January 1997 and December 1999 suggests that a very large part of the LP event population, both deep and shallow, is generated by a small number of compact sources. Deep events are systematically composed of a weak high-frequency onset followed by a low-frequency wave train. Aligning the low-frequency wave trains does not align the onsets, indicating that the two parts of the signal are dissociated. This observation favors an interpretation in terms of triggering and resonance of a magmatic conduit. Instead of defining fault planes, the precise relocation of similar LP events, based on the alignment of the high-energy low-frequency wave trains, defines volumes of limited size. Copyright 2003 by the American Geophysical Union.
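    The amplitude-based location step described above can be illustrated with a small grid search: predicted amplitudes from an assumed decay law (geometrical spreading plus exponential attenuation) are compared with site-corrected observations, and the trial source minimizing the misfit is retained. The station geometry, amplitudes and attenuation constant below are assumptions for illustration, not the paper's actual data or processing.

```python
import numpy as np

# Illustrative grid search for an LP source from station amplitudes, assuming
# body-wave geometrical spreading plus attenuation: A(r) = A0 * exp(-k*r) / r.
# Station coordinates, amplitudes and the constant k are made up for this sketch.
stations = np.array([[0.0, 0.0, 0.0], [5.0, 1.0, 0.0],
                     [2.0, 6.0, 0.0], [-4.0, 3.0, 0.0]])  # km
amps = np.array([1.8, 0.9, 0.7, 1.1])   # site-corrected amplitudes (arbitrary units)
k = 0.05                                 # combined attenuation term (1/km), assumed

def misfit(src):
    """RMS misfit of log-amplitude residuals for a trial source position."""
    r = np.linalg.norm(stations - src, axis=1)
    pred = np.exp(-k * r) / r            # predicted relative amplitude
    resid = np.log(amps) - np.log(pred)
    resid -= resid.mean()                # unknown source amplitude A0 cancels out
    return np.sqrt(np.mean(resid**2))

# Coarse 3-D grid search over candidate source positions (km)
grid = [(x, y, z) for x in np.arange(-10, 10, 0.5)
                  for y in np.arange(-10, 10, 0.5)
                  for z in np.arange(1, 15, 0.5)]
best = min(grid, key=lambda s: misfit(np.array(s)))
print("best-fitting source (x, y, z in km):", best)
```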

  13. Finger tapping and verbal fluency post-tap test improvement in INPH: its value in differential diagnosis and shunt-treatment outcomes prognosis.

    PubMed

    Liouta, Evangelia; Gatzonis, Stylianos; Kalamatianos, Theodosis; Kalyvas, Aristotelis; Koutsarnakis, Christos; Liakos, Faidon; Anagnostopoulos, Christos; Komaitis, Spyridon; Giakoumettis, Dimitris; Stranjalis, George

    2017-12-01

    Idiopathic normal pressure hydrocephalus (INPH) diagnosis is challenging, as it can be mimicked by other neurological conditions, such as neurodegenerative dementia and motor syndromes. Additionally, outcomes after the lumbar puncture (LP) tap test and shunt treatment may vary due to the lack of a common protocol for INPH assessment. The present study aimed to assess whether post-LP test amelioration of the frontal cognitive dysfunctions characterizing this syndrome can differentiate INPH from similar neurological conditions and whether this improvement can predict post-shunt outcomes in INPH. Seventy-one consecutive patients referred for INPH suspicion and LP testing were enrolled. According to the consensus guidelines criteria, 29 patients were diagnosed as INPH and 42 were assigned an alternative diagnosis (INPH-like group) after review of clinical, neuropsychological and imaging data, and before the LP results. A comprehensive neuropsychological assessment covering frontal executive functions, upper extremity fine motor functions, aphasias, apraxias and agnosias, together with gait evaluation, was administered at baseline. Executive functions, fine motor functions and gait were re-examined post-LP test in all patients and post-shunt placement in INPH patients. Of the INPH patients, 86.2% showed cognitive amelioration post-LP test; in addition, all but one (97%) presented with neurocognitive and gait improvement post-shunt. Post-LP improvement in verbal phonological fluency and the finger tapping task predicted positive clinical outcome post-shunt. None of the INPH-like group presented with neurocognitive improvement post-LP. Post-LP amelioration of verbal fluency and finger tapping deficits can differentiate INPH from similar disorders and predict positive post-shunt clinical outcome in INPH. This becomes of great importance when gait assessment is difficult to perform in clinical practice.

  14. Late presentation for HIV care across Europe: update from the Collaboration of Observational HIV Epidemiological Research Europe (COHERE) study, 2010 to 2013.

    PubMed

    Mocroft, Amanda; Lundgren, Jens; Antinori, Andrea; Monforte, Antonella d'Arminio; Brännström, Johanna; Bonnet, Fabrice; Brockmeyer, Norbert; Casabona, Jordi; Castagna, Antonella; Costagliola, Dominique; De Wit, Stéphane; Fätkenheuer, Gerd; Furrer, Hansjakob; Jadand, Corinne; Johnson, Anne; Lazanas, Mario; Leport, Catherine; Moreno, Santiago; Mussini, Christina; Obel, Niels; Post, Frank; Reiss, Peter; Sabin, Caroline; Skaletz-Rorowski, Adriane; Suarez-Loano, Ignacio; Torti, Carlo; Warszawski, Josiane; Wittkop, Linda; Zangerle, Robert; Chene, Genevieve; Raben, Dorthe; Kirk, Ole

    2015-01-01

    Late presentation (LP) for HIV care across Europe remains a significant issue. We provide a cross-European update from 34 countries on the prevalence of and risk factors for LP for 2010-2013. People aged ≥ 16 years presenting for HIV care (earliest of HIV diagnosis, first clinic visit or cohort enrollment) after 1 January 2010 with an available CD4 count within six months of presentation were included. LP was defined as presentation with a CD4 count < 350/mm(3) or an AIDS-defining event (at any CD4 count) in the six months following HIV diagnosis. Logistic regression investigated changes in LP over time. A total of 30,454 people were included. The median CD4 count at presentation was 368/mm(3) (interquartile range (IQR) 193-555/mm(3)), with no change over time (p = 0.70). In 2010, 4,775/10,766 (47.5%) were LP, whereas in 2013, 1,642/3,375 (48.7%) were LP (p = 0.63). LP was most common in central Europe (4,791/9,625; 49.8%), followed by northern (5,704/11,692; 48.8%), southern (3,550/7,760; 45.8%) and eastern Europe (541/1,377; 38.3%; p < 0.0001). There was a significant increase in LP in male and female people who inject drugs (PWID) (adjusted odds ratio (aOR) per year later 1.16; 95% confidence interval (CI): 1.02-1.32), and a significant decline in LP in northern Europe (aOR per year later 0.89; 95% CI: 0.85-0.94). Further improvements in effective HIV testing strategies, with a focus on vulnerable groups, are required across the European continent.
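    The "aOR per year later" estimates above come from logistic regression of late-presenter status on calendar year with adjustment for covariates. The sketch below shows that style of model on simulated data; the variables and effect sizes are assumptions, not the COHERE records.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Illustrative logistic regression of late presentation (LP) on calendar year,
# in the style of the "aOR per year later" estimates quoted above.
# The data are simulated, not the COHERE records.
rng = np.random.default_rng(0)
n = 5000
year = rng.integers(0, 4, size=n)                   # years since 2010 (2010-2013)
age = rng.normal(38, 10, size=n)                    # an assumed covariate
logit = -0.2 + 0.03 * year + 0.01 * (age - 38)      # assumed true model
lp = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))  # 1 = late presenter

X = sm.add_constant(pd.DataFrame({"year": year, "age": age}))
fit = sm.Logit(lp, X).fit(disp=False)

or_year = np.exp(fit.params["year"])                # odds ratio per year later
ci_low, ci_high = np.exp(fit.conf_int().loc["year"])
print(f"aOR per year later: {or_year:.2f} (95% CI {ci_low:.2f}-{ci_high:.2f})")
```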

  15. L-Carnitine/Simvastatin Reduces Lipoprotein (a) Levels Compared with Simvastatin Monotherapy: A Randomized Double-Blind Placebo-Controlled Study.

    PubMed

    Florentin, M; Elisaf, M S; Rizos, C V; Nikolaou, V; Bilianou, E; Pitsavos, C; Liberopoulos, E N

    2017-01-01

    Lipoprotein (a) [Lp(a)] is an independent risk factor for cardiovascular disease. There are currently limited therapeutic options to lower Lp(a) levels. L-Carnitine has been reported to reduce Lp(a) levels. The aim of this study was to compare the effect of L-carnitine/simvastatin co-administration with that of simvastatin monotherapy on Lp(a) levels in subjects with mixed hyperlipidemia and elevated Lp(a) concentration. Subjects with levels of low-density lipoprotein cholesterol (LDL-C) >160 mg/dL, triacylglycerol (TAG) >150 mg/dL and Lp(a) >20 mg/dL were included in this study. Subjects were randomly allocated to receive L-carnitine 2 g/day plus simvastatin 20 mg/day (N = 29) or placebo plus simvastatin 20 mg/day (N = 29) for a total of 12 weeks. Lp(a) was significantly reduced in the L-carnitine/simvastatin group [-19.4%, from 52 (20-171) to 42 (15-102) mg/dL; p = 0.01], but not in the placebo/simvastatin group [-6.7%, from 56 (26-108) to 52 (27-93) mg/dL, p = NS versus baseline and p = 0.016 for the comparison between groups]. Similar significant reductions in total cholesterol, LDL-C, apolipoprotein (apo) B and TAG were observed in both groups. Co-administration of L-carnitine with simvastatin was associated with a significant, albeit modest, reduction in Lp(a) compared with simvastatin monotherapy in subjects with mixed hyperlipidemia and elevated baseline Lp(a) levels.

  16. A Model of Best Vitelliform Macular Dystrophy in Rats

    PubMed Central

    Marmorstein, Alan D.; Stanton, J. Brett; Yocom, John; Bakall, Benjamin; Schiavone, Marc T.; Wadelius, Claes; Marmorstein, Lihua Y.; Peachey, Neal S.

    2010-01-01

    PURPOSE: The VMD2 gene, mutated in Best macular dystrophy (BMD), encodes bestrophin, a 68-kDa basolateral plasma membrane protein expressed in retinal pigment epithelial (RPE) cells. BMD is characterized by a depressed light peak (LP) in the electro-oculogram. Bestrophin is thought to be the Cl channel that generates the LP. The goal was to generate an animal model of BMD and to determine the effects of bestrophin overexpression on the RPE-generated components of the ERG. METHODS: Bestrophin or bestrophin mutants (W93C or R218C) were overexpressed in the RPE of rats by injection of replication-defective adenovirus. Immunofluorescence microscopy and ERG recordings were used to study the subsequent effects. RESULTS: Bestrophin was confined to the basolateral plasma membrane of the RPE. Neither wild-type (wt) nor mutant bestrophin affected the a- or b-waves of the ERG. Wt bestrophin, however, increased the c-wave and fast oscillation (FO), but not the LP. In contrast, both mutants had little or no effect on the c-wave and FO, but did reduce LP amplitude. LP amplitudes across a range of stimuli were not altered by wt bestrophin, though the luminance response function was desensitized. LP response functions were unaffected by bestrophin R218C but were significantly altered by bestrophin W93C. CONCLUSIONS: A model of BMD was developed in the present study. Because overexpression of wt bestrophin shifted the luminance response but did not alter the range of LP response amplitudes, the authors conclude that the rate-limiting step for generating LP amplitude occurs before activation of bestrophin or that bestrophin does not directly generate the LP conductance. PMID:15452084

  17. Lipoprotein(A) with An Intact Lysine Binding Site Protects the Retina From an Age-Related Macular Degeneration Phenotype in Mice (An American Ophthalmological Society Thesis)

    PubMed Central

    Handa, James T.; Tagami, Mizuki; Ebrahimi, Katayoon; Leibundgut, Gregor; Janiak, Anna; Witztum, Joseph L.; Tsimikas, Sotirios

    2015-01-01

    Purpose: To test the hypothesis that the accumulation of oxidized phospholipids (OxPL) in the macula is toxic to the retina unless neutralized by a variety of mechanisms, including binding by lipoprotein(a) [Lp(a)], which is composed of apolipoprotein(a) [apo(a)] and apolipoprotein B-100 (apoB). Methods: Human maculas and eyes from two Lp(a) transgenic murine models were subjected to morphologic, ultrastructural, and immunohistochemical analysis. “Wild-type Lp(a)” mice, which express human apoB-100 and apo(a) that contains oxidized phospholipid, and “mutant LBS− Lp(a)” mice with a defective apo(a) lysine binding site (LBS) for oxidized phospholipid binding, were fed a chow or high-fat diet for 2 to 12 months. Oxidized phospholipid–containing lipoproteins were detected by immunoreactivity to E06, a murine monoclonal antibody binding to the phosphocholine headgroup of oxidized, but not native, phospholipids. Results: Oxidized phospholipids, apo(a), and apoB accumulate in maculas, including drusen, of age-related macular degeneration (AMD) samples and age-matched controls. Lp(a) mice fed a high-fat diet developed age-related changes. However, mutant LBS− Lp(a) mice fed a high-fat diet developed retinal pigment epithelial cell degeneration and drusen. These changes were associated with increased OxPL, decreased antioxidant defenses, increased complement, and decreased complement regulators. Conclusions: Human maculas accumulate Lp(a) and OxPL. Mutant LBS− Lp(a) mice, lacking the ability to bind E06-detectable oxidized phospholipid, develop AMD-like changes. The ability of Lp(a) to bind E06-detectable OxPL may play a protective role in AMD. PMID:26538774

  18. Comparison of Robotic Pyeloplasty and Standard Laparoscopic Pyeloplasty in Infants: A Bi-Institutional Study.

    PubMed

    Neheman, Amos; Kord, Eyal; Zisman, Amnon; Darawsha, Abd Elhalim; Noh, Paul H

    2018-04-01

    To compare outcomes between robotic pyeloplasty (RP) and standard laparoscopic pyeloplasty (LP) in the infant population for the treatment of ureteropelvic junction (UPJ) obstruction. We performed a retrospective cohort study of all children under 1 year of age who underwent RP or LP at two different medical centers between October 2009 and February 2016. Patient demographics, perioperative data, complications, and results were reviewed. Thirteen patients underwent standard LP, and 21 patients underwent RP during the study period. Median age and median weight at time of operation for the whole cohort were 6.1 months and 7.9 kg, respectively. Surgical success rates were similar, at 95% for RP and 92% for LP. There was no statistically significant difference in operating time between the 2 groups, with a median time of 156 minutes for RP (range 125-249) and 192 minutes (range 98-229) for standard LP (P = .35). Median length of hospital stay was significantly shorter in the robotic group, with a median stay of 1 day (range 1-3) versus 7 days (range 7-12) in the standard LP group (P < .0001). Drains or nephrostomy tubes were used more often in the laparoscopic group (100%, 13/13) than in the RP group (9.5%, 2/21; P < .0001). The complication rate was comparable between the 2 groups: 30.8% for LP and 23.8% for RP (P = .65). Minimally invasive dismembered pyeloplasty is safe and effective in the infant population and produces high success rates. Results, complication rates, and operative time were comparable between the two surgical methods, while standard LP was associated with a longer hospital stay. Both the robotic approach and standard LP can be successfully utilized for the benefit of infants with UPJ obstruction.

  19. Lipoprotein(a) levels are associated with aortic valve calcification in asymptomatic patients with familial hypercholesterolaemia.

    PubMed

    Vongpromek, R; Bos, S; Ten Kate, G-J R; Yahya, R; Verhoeven, A J M; de Feyter, P J; Kronenberg, F; Roeters van Lennep, J E; Sijbrands, E J G; Mulder, M T

    2015-08-01

    Lipoprotein(a) [Lp(a)] is an independent risk factor for aortic valve stenosis and aortic valve calcification (AVC) in the general population. In this study, we determined the association between AVC and both plasma Lp(a) levels and apolipoprotein(a) [apo(a)] kringle IV repeat polymorphisms in asymptomatic statin-treated patients with heterozygous familial hypercholesterolaemia (FH). A total of 129 asymptomatic heterozygous FH patients (age 40-69 years) were included in this study. AVC was detected using computed tomography scanning. Lp(a) concentration and apo(a) kringle IV repeat number were measured using immunoturbidimetry and immunoblotting, respectively. Univariate and multivariate logistic regression were used to assess the association between Lp(a) concentration and the presence of AVC. Aortic valve calcification was present in 38.2% of patients, including three with extensive AVC (>400 Agatston units). Lp(a) concentration was significantly correlated with gender, number of apo(a) kringle IV repeats and the presence and severity of AVC, but not with coronary artery calcification (CAC). AVC was significantly associated with plasma Lp(a) level, age, body mass index, blood pressure, duration of statin use, cholesterol-year score and CAC score. After adjustment for all significant covariables, plasma Lp(a) concentration remained a significant predictor of AVC, with an odds ratio per 10-mg dL(-1) increase in Lp(a) concentration of 1.11 (95% confidence interval 1.01-1.20, P = 0.03). In asymptomatic statin-treated FH patients, plasma Lp(a) concentration is an independent risk indicator for AVC. © 2014 The Association for the Publication of the Journal of Internal Medicine.

  20. Association of Lp-PLA2-A and early recurrence of vascular events after TIA and minor stroke.

    PubMed

    Lin, Jinxi; Zheng, Hongwei; Cucchiara, Brett L; Li, Jiejie; Zhao, Xingquan; Liang, Xianhong; Wang, Chunxue; Li, Hao; Mullen, Michael T; Johnston, S Claiborne; Wang, Yilong; Wang, Yongjun

    2015-11-03

    To determine the association of lipoprotein-associated phospholipase A2 (Lp-PLA2) measured in the acute period and the short-term risk of recurrent vascular events in patients with TIA or minor stroke. We measured Lp-PLA2 activity (Lp-PLA2-A) in a subset of 3,201 participants enrolled in the CHANCE (Clopidogrel in High-Risk Patients with Acute Non-disabling Cerebrovascular Events) trial. Participants with TIA or minor stroke were enrolled within 24 hours of symptom onset and randomized to single or dual antiplatelet therapy. In the current analysis, the primary outcome was defined as the composite of ischemic stroke, myocardial infarction, or death within 90 days. The composite endpoint occurred in 299 of 3,021 participants (9.9%). The population average Lp-PLA2-A level was 209 ± 59 nmol/min/mL (95% confidence interval [CI] 207-211). Older age, male sex, and current smoking were associated with higher Lp-PLA2-A levels. Lp-PLA2-A was significantly associated with the primary endpoint (adjusted hazard ratio 1.07, 95% CI 1.01-1.13 for every 30 nmol/min/mL increase). Similar results were seen for ischemic stroke alone. Adjustment for low-density lipoprotein cholesterol attenuated the association between Lp-PLA2-A and the primary endpoint (adjusted hazard ratio 1.04, 95% CI 0.97-1.11 for every 30 nmol/min/mL increase). Higher levels of Lp-PLA2-A in the acute period are associated with increased short-term risk of recurrent vascular events. © 2015 American Academy of Neurology.
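    Hazard ratios expressed per 30 nmol/min/mL increase, as above, are usually obtained by rescaling the biomarker before fitting a Cox proportional hazards model. The sketch below illustrates this with simulated data using the lifelines package; the variable names and effect sizes are assumptions, not the CHANCE substudy data.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

# Illustrative Cox regression giving a hazard ratio per 30-unit increase in a
# biomarker, in the style of the Lp-PLA2-A estimate above. Data are simulated.
rng = np.random.default_rng(1)
n = 3000
lp_pla2 = rng.normal(209, 59, size=n)                       # activity, nmol/min/mL
hazard = 0.001 * np.exp(0.002 * (lp_pla2 - 209))            # assumed true effect
event_time = rng.exponential(1.0 / hazard)                  # days to event
observed = event_time <= 90                                 # 90-day follow-up
time = np.minimum(event_time, 90.0)

df = pd.DataFrame({
    "lp_pla2_per30": lp_pla2 / 30.0,   # rescale so the coefficient is per 30 units
    "time": time,
    "event": observed.astype(int),
})

cph = CoxPHFitter()
cph.fit(df, duration_col="time", event_col="event")
hr = np.exp(cph.params_["lp_pla2_per30"])
print(f"HR per 30 nmol/min/mL increase: {hr:.2f}")
```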
